[speaker002:] what things to talk about. [speaker006:] I'm [disfmarker] What? Really? Oh, that's horrible! Disincentive! [speaker001:] OK, we're recording. [speaker006:] Hello? [speaker002:] Check check [pause] check check. [speaker006:] Hello? [speaker004:] Uh, yeah. [speaker006:] Which am I? [speaker003:] Oh right. [speaker002:] Alright. Good. [speaker006:] Channel fi OK. OK. Are you doing something? OK, then I guess I'm doing something. So, um, So basically the result of m much thinking since the last time we met, um, but not as much writing, um, is a sheet that I have a lot of, like, thoughts and justification of comments on but I'll just pass out as is right now. So, um, here. If you could pass this around? And there's two things. And so one on one side is [disfmarker] on one side is a sort of the revised sort of updated semantic specification. [speaker004:] Um [disfmarker] [speaker006:] And the other side is, um, sort of a revised construction formalism. [speaker004:] The [disfmarker] wait. [speaker005:] This is just one sheet, right? [speaker004:] Ah! [speaker006:] It's just one sheet. [speaker004:] Just one sheet. [speaker006:] It's just a [disfmarker] Nothing else. [speaker004:] OK. Front, back. [speaker006:] Um, Enough to go around? OK. And in some ways it's [disfmarker] it's [disfmarker] it's very similar to [disfmarker] There are very few changes in some ways from what we've, um, uh, b done before but I don't think everyone here has seen all of this. So, uh, I'm not sure where to begin. Um, as usual the disclaimers are there are [disfmarker] all these things are [disfmarker] it's only slightly more stable than it was before. [speaker005:] Mm hmm. [speaker006:] And, um, after a little bit more discussion and especially like Keith and I [disfmarker] I have more linguistic things to settle in the next few days, um, it'll probably change again some more. [speaker005:] Yeah. 
[speaker006:] Um, maybe I will [disfmarker] let's start b let's start on number two actually on the notation, um, because that's, I'm thinking, possibly a little more familiar to, um [disfmarker] to people. OK, so the top block is just sort of a [disfmarker] sort of abstract nota it's sort of like, um, listings of the kinds of things that we can have. And certain things that have, um, changed, have changed back to this. There [disfmarker] there's been a little bit of, um, going back and forth. But basically obviously all constructions have some kind of name. I forgot to include that you could have a type included in this line. [speaker003:] What I was gonna [disfmarker] [speaker006:] So something like, um [disfmarker] [speaker003:] Right. [speaker006:] Well, there's an example [disfmarker] the textual example at the end has clausal construction. So, um, just to show it doesn't have to be beautiful It could be, you know, simple old text as well. Um, there are a couple of [disfmarker] Uh, these three have various ways of doing certain things. So I'll just try to go through them. So they could all have a type at the beginning. Um, and then they say the key word construction [speaker003:] Oh, I see. [speaker006:] and they have some name. [speaker003:] So [disfmarker] so the current syntax is if it s if there's a type it's before construct [speaker006:] Yeah, right. [speaker003:] OK, that's fine. [speaker006:] OK, and then it has a block that is constituents. And as usual I guess all the constructions her all the examples here have only, um, tsk [comment] one type of constituent, that is a constructional constituent. I think that's actually gonna turn out to m be certainly the most common kind. But in general instead of the word "construct", th here you might have "meaning" or "form" as well. OK? So if there's some element that doesn't [disfmarker] that isn't yet constructional in the sense that it maps form and meaning. 
OK, um, the main change with the constructs which [disfmarker] each of which has, um, the key word "construct" and then some name, and then some type specification, is that it's [disfmarker] it's pro it's often [disfmarker] sometimes the case in the first case here that you know what kind of construction it is. So for example whatever I have here is gonna be a form of the word "throw", or it's gonna be a form of the word, you know, I don't know, "happy", or something like that. Or, you know, some it'll be a specific word or maybe you'll have the type. You'll say "I need a p uh spatial relation phrase here" or "I need a directional specifier here". So uh you could have a j a actual type here. Um, or you could just say in the second case that you only know the meaning type. So a very common example of this is that, you know, in directed motion, the first person to do something should be an agent of some kind, often a human. Right? So if I [disfmarker] you know, the um, uh, run down the street then I [disfmarker] I [disfmarker] I run down the street, it's typed, uh, "I", meaning category is what's there. The [disfmarker] the new kind is this one that is sort of a pair and, um, sort of skipping fonts and whatever. The idea is that sometimes there are, um, general constructions that you know, that you're going to need. It's [disfmarker] it's the equivalent of a noun phrase or a prepositional phrase, or something like that there. [speaker005:] Mm hmm. [speaker006:] And usually it has formal um, considerations that will go along with it. [speaker003:] Mm hmm. [speaker006:] And then uh, you might know something much more specific depending on what construction you're talking about, about what meaning [disfmarker] what specific meaning you want. 
So the example again at the bottom, which is directed motion, you might need a nominal expression to take the place of, you know, um, "the big th", you you know, "the big [disfmarker] the tall dark man", you know, "walked into the room". [speaker005:] Mm hmm. [speaker006:] But because of the nature of this particular construction you know not just that it's nominal of some kind but in particular, that it's some kind of animate nominal, and which will apply just as well to like, you know, a per you know, a simple proper noun or to some complicated expression. Um, so I don't know if the syntax will hold but something that gives you a way to do both constructional and meaning types. So. OK, then I don't think the, [comment] um [disfmarker] at least [disfmarker] Yeah. [comment] None of these examples have anything different for formal constraints? But you can refer to any of the, um, sort of available elements and scope, right? which here are the constructs, [comment] to say something about the relation. And I think i if you not if you compare like the top block and the textual block, um, we dropped like the little F subscript. The F subscripts refer to the "form" piece of the construct. [speaker003:] Good. [speaker006:] And I think that, um, in general it'll be unambiguous. Like if you were giving a formal constraint then you're referring to the formal pole of that. So [disfmarker] so by saying [disfmarker] if I just said "Name one" then that means name one formal and we're talking about formal struc [comment] Which [disfmarker] which makes sense. Uh, there are certain times when we'll have an exception to that, in which case you could just indicate "here I mean the meaningful for some reason". Right? Or [disfmarker] Actually it's more often that, only to handle this one special case of, you know, "George and Jerry walk into the room in that order". So we have a few funny things where something in the meaning might refer to something in the form. [speaker005:] Mm hmm. 
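[comment: the notation pieces described so far can be pulled together into a rough sketch of the directed-motion example. This is not the actual text of the handout being passed around; the constituent names, the NominalExpression/Animate labels, and the binding at the end are illustrative reconstructions of what the speakers describe, not a committed syntax.]

```
clausal construction DirectedMotion
   constituents
      construct mover  : NominalExpression / Animate   // paired construction type / meaning type
      construct action : MotionVerb                    // constructional type only
      construct path   : SpatialRelationPhrase
   formal constraints
      mover before action        // interval relation; refers to the form poles (no F subscript)
      action before path
      mover.number = plural      // a value binding on form, as in "name1.number = plural"
   semantic constraints
      action.agent <--> mover    // a binding; uncommitted as to matching vs. verification
```

[comment: the type before the slash constrains the constructional category, the one after it the meaning category, so both a bare proper noun and "the tall dark man" would qualify as the mover so long as the designatum is animate.]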
[speaker006:] But [disfmarker] but s we're not gonna really worry about that for right now and there are way We can be more specific if we have to later on. OK, and so in terms of the [disfmarker] the relations, you know, as usual they're before and ends. I should have put an example in of something that isn't an interval relation but in form you might also have a value binding. You know, you could say that, um, you know, "name one dot", t you know, "number equals", you know, a plural or something like that. [speaker005:] Mm hmm. [speaker006:] There are certain things that are attribute value, similar to the bindings below but I mean they're just [disfmarker] us usually they're going to be value [disfmarker] value fillers, right? OK, and then again semantic constraints here are just [disfmarker] are just bindings. There was talk of changing the name of that. And Johno and I [disfmarker] I [disfmarker] you [disfmarker] you and I can like fight about that if you like? but about changing it to "semantic [pause] n effects", which I thought was a little bit too order biased [speaker002:] Well [disfmarker] Th [speaker006:] and "semantic bindings", which I thought might be too restrictive in case we don't have only bindings. And so it was an issue whether constraints [disfmarker] um, there were some linguists who reacted against "constraints", saying, "oh, if it's not used for matching, then it shouldn't be called a constraint". But I think we want to be uncommitted about whether it's used for matching or not. Right? Cuz there are [disfmarker] I think we thought of some situations where it would be useful to use whatever the c bindings are, for actual, you know, sort of like modified constraining purposes. [speaker003:] Well, you definitely want to de couple the formalism from the parsing strategy. [speaker005:] Yeah. [speaker003:] So that whether or not it's used for matching or only for verification, I [disfmarker] [speaker006:] Yeah, yeah. 
It's used shouldn't matter, right? Mm hmm. [speaker003:] s For sure. [speaker006:] Mm hmm. [speaker003:] I mean, I don't know what, uh, term we want to use but we don't want to [disfmarker] [speaker006:] Yeah, uh, there was one time when [disfmarker] when Hans explained why "constraints" was a misleading word for him. [speaker003:] Yep. [speaker006:] And I think the reason that he gave was similar to the reason why Johno thought it was a misleading term, which was just an interesting coincidence. Um, but, uh [disfmarker] And so I was like, "OK, well both of you don't like it? Fine, we can change it". [speaker003:] It's g it's gone. [speaker006:] But I [disfmarker] I [disfmarker] I'm starting to like it again. So that that's why [disfmarker] [comment] That's why I'll stick with it. [speaker002:] But [disfmarker] [speaker006:] So [disfmarker] [speaker001:] Well, you know what? If you have an "if then" phrase, do you know what the "then" phrase is called? [speaker003:] Th [speaker006:] What? Con uh, a consequent? [speaker001:] Yeah. [speaker006:] Yeah, but it's not an "if then". [speaker003:] I know. Anyway, so the other [disfmarker] the other strategy you guys could consider is when you don't know what word to put, you could put no word, [speaker001:] No, but [disfmarker] [speaker006:] Mm hmm. [speaker003:] just meaning. OK? [speaker005:] Yeah. [speaker003:] And the then let [disfmarker] [speaker006:] Yeah, that's true. [speaker002:] So that's why you put semantic constraints up top and meaning bindings down [disfmarker] down here? [speaker006:] Oh, oops! No. That was just a mistake of cut and paste from when I was going with it. [speaker002:] OK. [speaker003:] OK. [speaker006:] So, I'm sorry. I didn't mean [disfmarker] that one's an in unintentional. [speaker002:] So this should be semantic and [disfmarker] [speaker006:] Sometimes I'm intentionally inconsistent cuz I'm not sure yet. Here, I actually [disfmarker] it was just a mistake. 
[speaker002:] Th so this definitely should be "semantic constraints" down at the bottom? [speaker005:] Sure. [speaker006:] Yeah. [speaker002:] OK. [speaker006:] Well, unless I go with "meaning" but i I mean, I kind of like "meaning" better than "semantic" [speaker002:] Or [disfmarker] [speaker003:] Oh, whatever. [speaker006:] but I think there's [pause] vestiges of other people's biases. [speaker003:] Or [disfmarker] wh [speaker006:] Like [disfmarker] [speaker003:] That b Right. Minor [disfmarker] min problem [disfmarker] [speaker006:] Minor point. [speaker005:] Extremely. [speaker003:] OK. [speaker006:] OK, um, so I think the middle block doesn't really give you any more information, ex than the top block. And the bottom block similarly only just illus you know, all it does is illustrate that you can drop the subscripts and [disfmarker] and that you can drop the, um [disfmarker] uh, that you can give dual types. Oh, one thing I should mention is about "designates". I think I'm actually inconsistent across these as well. So, um, strike out the M subscript on the middle block. [speaker003:] Mm hmm. [speaker006:] So basically now, um, this is actually [disfmarker] this little change actually goes along with a big linguistic change, which is that "designates" isn't only something for the semantics to worry about now. [speaker003:] Good. [speaker006:] So we want s "designates" to actually know one of the constituents which acts like a head in some respects but is sort of, um, really important for say composition later on. So for instance, if some other construction says, you know, "are you of type [disfmarker] is this part of type whatever", um, the "designates" tells you which sort of part is the meaning part. OK, so if you have like "the big red ball", you know, you wanna know if there's an object or a noun. Well, ball is going to be the designated sort of element of that kind of phrase. [speaker005:] Mmm. 
[speaker006:] Um, there is a slight complication here which is that when we talk about form it's useful sometimes to talk about, um [disfmarker] to talk about there also being a designated object and we think that that'll be the same one, right? So the ball is the head of the phrase, "the r the [disfmarker]", um, "big red ball", and the entity denoted by the word "ball" is sort of the semantic head in some ways of [disfmarker] of this sort of, um, in interesting larger element. [speaker003:] A a and the [disfmarker] Yeah. And there's [disfmarker] uh there's ca some cases where the grammar depends on some form property of the head. [speaker006:] Mm hmm. [speaker005:] Yeah. [speaker003:] And [disfmarker] and this enables you to get that, if I understand you right. [speaker005:] Yeah. [speaker006:] Right, right. [speaker005:] That's the idea. Yeah. [speaker003:] Yeah yeah. [speaker006:] And, uh, you might be able to say things like if the head has to go last in a head final language, you can refer to the head as a p the, you know [disfmarker] the formal head as opposed to the rest of the form having to be at the end of that decision. [speaker003:] Right. [speaker006:] So that's a useful thing so that you can get some internal structural constraints in. [speaker003:] OK, so that all looks good. Let me [disfmarker] Oh, w Oh. I don't know. Were you finished? [speaker006:] Um, there was a list of things that isn't included but you [disfmarker] you can [disfmarker] you can ask a question. That might [@ @] it. [speaker003:] OK. So, i if I understand this the [disfmarker] aside from, uh, construed and all that sort of stuff, the [disfmarker] the differences are mainly that, [vocalsound] we've gone to the possibility of having form meaning pairs for a type [speaker006:] Mm hmm. [speaker003:] or actually gone back to, [speaker006:] Right. 
[speaker003:] if we go back far enough [disfmarker] [speaker006:] Well, except for their construction meaning, so it's not clear that, uh [disfmarker] Well, right now it's a c uh contr construction type and meaning type. So I don't know what a form type is. [speaker003:] Oh, I see. Yeah, yeah, yeah. I'm sorry, [speaker006:] Yeah. [speaker003:] you're right. A construction type. Uh, that's fine. [speaker006:] Right. [speaker003:] But it, um [disfmarker] [speaker006:] A well, and a previous, um, you know, version of the notation certainly allowed you to single out the meaning bit by it. So you could say "construct of type whatever designates something". [speaker003:] Yeah. [speaker006:] But that was mostly for reference purposes, just to refer to the meaning pole. I don't think that it was often used to give an extra meaning const type constraint on the meaning, which is really what we want most of the time I think. [speaker003:] Mm hmm. [speaker006:] Um, I [disfmarker] I don't know if we'll ever have a case where we actually h if there is a form category constraint, you could imagine having a triple there that says, you know [disfmarker] that's kind of weird. [speaker003:] No, no, no, I don't think so. I think that you'll [disfmarker] you'll do fine. [speaker005:] I [disfmarker] [speaker003:] In fact, these are, um, as long as [disfmarker] as Mark isn't around, these are form constraints. So a nominal expression is [disfmarker] uh, the fact that it's animate, is semantic. The fact that it's n uh, a nominal expression I would say on most people's notion of [disfmarker] of f you know, higher form types, this i this is one. [speaker006:] Mm hmm. [speaker005:] Yeah. [speaker006:] Right, right. [speaker005:] Yeah, yeah. [speaker003:] And I think that's just fine. [speaker006:] Which is fine, yeah. [speaker005:] It's [disfmarker] that now, um, I'm mentioned this, [speaker003:] Yeah. 
[speaker005:] I [disfmarker] I don't know if I ever explained this but the point of, um, I mentioned in the last meeting, [comment] the point of having something called "nominal expression" is, um, because it seems like having the verb subcategorize for, you know, like say taking as its object just some expression which, um, designates an object or designates a thing, or whatever, um, that leads to some syntactic problems basically? So you wanna, you know [disfmarker] you sort of have this problem like "OK, well, I'll put the word", uh, let's say, the word "dog", you know. And that has to come right after the verb [speaker006:] Mm hmm. [speaker005:] cuz we know verb meets its object. And then we have a construction that says, oh, you can have "the" preceding a noun. And so you'd have this sort of problem that the verb has to meet the designatum. [speaker003:] Right. [speaker005:] And you could get, you know, "the kicked dog" or something like that, meaning "kicked the dog". [speaker003:] Right. [speaker005:] Um, so you kind of have to let this phrase idea in there but [disfmarker] [speaker003:] That I [disfmarker] I have no problem with it at all. [speaker005:] It it Yeah. [speaker006:] Yeah. [speaker003:] I think it's fine. [speaker006:] Right, n s you may be [disfmarker] you may not be like everyone else in [disfmarker] in Berkeley, [speaker005:] Yeah. Yeah. [speaker006:] but that's OK. [speaker005:] I mean, we [disfmarker] we [disfmarker] we sort of thought we were getting away with, uh [disfmarker] with, a p [speaker006:] Uh, we don't mind either, so [disfmarker] [speaker005:] I mean, this is not reverting to the X bar theory of [disfmarker] of phrase structure. [speaker003:] Right. [speaker005:] But, uh, [speaker006:] Right. [speaker005:] I just know that this is [disfmarker] Like, we didn't originally have in mind that, uh [disfmarker] that verbs would subcategorize for a particular sort of form. [speaker006:] Mm hmm. [speaker003:] But they do. 
[speaker005:] Um, but they does. [speaker006:] Well, there's an alternative to this [speaker005:] At least in English. [speaker006:] which is, um [disfmarker] The question was did we want directed motion, [speaker003:] Yeah. [speaker006:] which is an argument structure construction [disfmarker] [speaker005:] Yeah. [speaker003:] Mm hmm. [speaker006:] did we want it to worry about, um, anything more than the fact that it, you know, has semantic [disfmarker] You know, it's sort of frame based construction. So one option that, you know, Keith had mentioned also was like, well if you have more abstract constructions such as subject, predicate, basically things like grammatical relations, [speaker005:] Mm hmm. [speaker006:] those could intersect with these in such a way that subject, predicate, or subject, predicate, subject, verb, ob you know, verb object would require that those things that f fill a subject and object are NOM expressions. [speaker003:] Right. [speaker006:] And that would be a little bit cleaner in some way. But you know, for now, I mean, [speaker003:] Yeah. But it [disfmarker] y y it's [disfmarker] yeah, just moving it [disfmarker] moving the c the cons the constraints around. [speaker006:] uh, you know. M moving it to another place, right. [speaker005:] Yeah. [speaker003:] OK, so that's [disfmarker] [speaker006:] But there does [disfmarker] basically, the point is there has to be that constraint somewhere, right? [speaker003:] Right. [speaker006:] So, yeah. [speaker003:] And so that was the [disfmarker] [speaker006:] Robert's not happy now? [speaker001:] No! [speaker006:] Oh, OK. [speaker003:] OK, and sort of going with that is that the designatum also now is a pair. [speaker006:] Yes. [speaker003:] Instead of just the meaning. [speaker006:] Mm hmm. [speaker003:] And that aside from some terminology, that's basically it. [speaker006:] Right. [speaker003:] I just want to b I'm [disfmarker] I'm asking. [speaker006:] Yep. [speaker005:] Mm hmm. Yeah. 
[speaker006:] Yeah, um, the un sort of the un addressed questions in this, um, definitely would for instance be semantic constraints we talked about. [speaker003:] Yeah. [speaker006:] Here are just bindings but, right? we might want to introduce mental spaces [disfmarker] You know, there's all these things that we don't [disfmarker] [speaker003:] The whole [disfmarker] the mental space thing is clearly not here. [speaker006:] Right? So there's going to be some extra [disfmarker] you know, definitely other notation we'll need for that which we skip for now. [speaker005:] Mm hmm. [speaker003:] By the way, I do want to get on that as soon as Robert gets back. [speaker006:] Uh Yeah. OK. [speaker003:] So, uh, the [disfmarker] the mental space thing. [speaker005:] Mm hmm. [speaker003:] Um, obviously, [vocalsound] construal is a b is a b is a big component of that so this probably not worth trying to do anything till he gets back. But sort of as soon as he gets back I think um, we ought to [disfmarker] [speaker006:] Mm hmm. Mm hmm. [speaker005:] So what's the [disfmarker] what's the time frame? I forgot again when you're going away for how long? [speaker001:] Just, uh, as a [disfmarker] sort of a mental bridge, I'm not [disfmarker] I'm skipping fourth of July. [speaker005:] OK. [speaker001:] So, uh, [vocalsound] right afterwards I'm back. [speaker005:] OK. [speaker006:] What? You're missing like the premier American holiday? What's the point of spending a year here? [speaker001:] Uh, I've had it often enough. [speaker006:] So, anyway. [speaker002:] Well he w he went to college here. [speaker006:] Oh, yeah, I forgot. Oops. [comment] Sorry. [speaker003:] Yeah. [speaker006:] OK. [speaker003:] And furthermore it's well worth missing. [speaker006:] Not in California. Yeah, that's true. [speaker005:] Yes. [speaker006:] I like [disfmarker] I [disfmarker] I like spending fourth of July in other countries, [vocalsound] whenever I can. [speaker003:] Right. 
[speaker006:] Um [disfmarker] [speaker003:] OK, so that's great. [speaker006:] Construal, OK, so [disfmarker] Oh, so there was one question that came out. I hate this thing. Sorry. Um, which is, so something like "past" which i you know, we think is a very simple [disfmarker] uh, we've often just stuck it in as a feature, [speaker003:] Right. [speaker006:] you know, "oh, [pause] this event takes place before speech time", [comment] OK, is what this means. [speaker003:] Right. [speaker006:] Um, it's often thought of as [disfmarker] it is also considered a mental space, [speaker003:] Right. [speaker006:] you know, by, you know, lots of people around here. [speaker003:] Right. [speaker006:] So there's this issue of well sometimes there are really exotic explicit space builders that say "in France, blah blah blah", [speaker005:] Mm hmm. [speaker006:] and you have to build up [disfmarker] you ha you would imagine that would require you, you know, to be very specific about the machinery, whereas past is a very conventionalized one and we sort of know what it means but it [disfmarker] we doesn't [disfmarker] don't necessarily want to, you know, unload all the notation every time we see that it's past tense. [speaker003:] Right. [speaker006:] So, you know, we could think of our [disfmarker] uh, just like X schema "walk" refers to this complicated structure, past refers to, you know, a certain configuration of this thing with respect to it. [speaker003:] I think that's exactly right. [speaker006:] So [disfmarker] so we're kind of like having our cake and eating it [disfmarker] you know, having it both ways, right? [speaker003:] Yeah. [speaker006:] So, i i [speaker003:] Yeah. [pause] No, I think [disfmarker] I think that i we'll have to see how it works out when we do the details [speaker006:] Mm hmm. [speaker003:] but my intuition would be that that's right. [speaker006:] Mm hmm. Yeah, OK. [speaker001:] Do you want to do the same for space? [speaker006:] Wha sorry? 
[speaker001:] Space? [speaker006:] Space? [speaker001:] Here? Now? [speaker006:] Oh, oh, oh, oh, instead of just time? Yeah, yeah, yeah. [speaker001:] Mm hmm. [speaker006:] Same thing. So there are very conventionalized like deictic ones, right? And then I think for other spaces that you introduce, you could just attach y whatever [disfmarker] You could build up an appropriately [disfmarker] uh, appropriate structure according to the l the sentence. [speaker001:] Hmm. [speaker003:] Yeah. [speaker001:] Hmm, well this [disfmarker] this basically would involve everything you can imagine to fit under your C dot something [disfmarker] [speaker005:] N [speaker006:] Yeah. [speaker001:] you know, where [disfmarker] where it's contextually dependent, [speaker006:] Right. Mm hmm. [speaker001:] "what is now, what was past, what is in the future, where is this, what is here, what is there, what is [disfmarker]" [speaker006:] Mm hmm. Yeah. So time and space. Um, we'll [disfmarker] we'll get that on the other side a little, like very minimally. There's a sort of there's a slot for setting time and setting place. [speaker003:] Good. [speaker006:] And you know, you could imagine for both of those are absolute things you could say about the time and place, and then there are many in more interestingly, linguistically anyway, [comment] there are relative things that, you know, you relate the event in time and space to where you are now. If there's something a lot more complicated like, or so [disfmarker] hypothetical or whatever, then you have to do your job, [speaker005:] Mm hmm. [speaker006:] like or somebody's job anyway. [speaker005:] Yeah. [speaker006:] I'm gonna point to [disfmarker] at random. [speaker005:] Yeah. I mean, I'm [disfmarker] I'm s curious about how much of the mental [disfmarker] I mean, I'm not sure that the formalism, sort of the grammatical side of things, [comment] is gonna have that much going on in terms of the mental space stuff. 
You know, um, basically all of these so called space builders that are in the sentence are going to sort of [disfmarker] I think of it as, sort of giving you the coordinates of, you know [disfmarker] assuming that at any point in discourse there's the possibility that we could be sort of talking about a bunch of different world scenarios, whatever, and the speaker's supposed to be keeping track of those. The, um [disfmarker] the construction that you actually get is just gonna sort of give you a cue as to which one of those that you've already got going, um, you're supposed to add structure to. [speaker006:] Mm hmm. [speaker005:] So "in France, uh, Watergate wouldn't have hurt Nixon" or something like that. Um, well, you say, "alright, I'm supposed to add some structure to my model of this hypothetical past France universe" or something like that. The information in the sentence tells you that much but it doesn't tell you like exactly what it [disfmarker] what the point of doing so is. So for example, depending on the linguistic con uh, context it could be [disfmarker] like the question is for example, what does "Watergate" refer to there? Does it, you know [disfmarker] does it refer to, um [disfmarker] if you just hear that sentence cold, the assumption is that when you say "Watergate" you're referring to "a Watergate like scandal as we might imagine it happening in France". But in a different context, "oh, you know, if Nixon had apologized right away it wouldn't [disfmarker] you know, Watergate wouldn't have hurt him so badly in the US and in France it wouldn't have hurt him at all". Now we're s now that "Watergate" [disfmarker] we're now talking about the real one, [speaker006:] They're real, right. [speaker005:] and the "would" sort of [disfmarker] it's a sort of different dimension of hypothe theticality, right? [speaker006:] I see [disfmarker] right. [speaker005:] We're not saying [disfmarker] What's hypothetical about this world. 
In the first case, hypothetically we're imagining that Watergate happened in France. [speaker006:] Hmm. [speaker005:] In the second case we're imagining hypothetically that Nixon had apologized right away [speaker006:] Mm hmm. Right. [speaker005:] or something. Right? So a lot of this isn't happening at the grammatical level. Uh, um, and so [disfmarker] [speaker003:] Correct. [speaker006:] Mm hmm. [speaker005:] I don't know where that sits then, [speaker001:] Hmm. [speaker005:] sort of the idea of sorting out what the person meant. [speaker006:] It seems like, um, the grammatical things such as the auxiliaries that you know introduce these conditionals, whatever, give you sort of the [disfmarker] the most basi [speaker005:] Mm hmm. [speaker006:] th those we [disfmarker] I think we can figure out what the possibilities are, right? [speaker005:] Mm hmm. [speaker006:] There are sort of a relatively limited number. And then how they interact with some extra thing like "in France" or "if such and such", that's like there are certain ways that they c they can [disfmarker] [speaker005:] Yeah. [speaker006:] You know, one is a more specific version of the general pattern that the grammat grammar gives you. I think. [speaker005:] Yeah. [speaker006:] But, you know, whatever, we [disfmarker] we're [disfmarker] [speaker003:] Yeah, in the short run all we need is a enough mechanism on the form side to get things going. [speaker005:] Mm hmm. Yeah. [speaker003:] Uh, I [disfmarker] uh, you [disfmarker] you [disfmarker] [speaker005:] But the whole point of [disfmarker] the whole point of what Fauconnier and Turner have to say about, uh, mental spaces, and blending, and all that stuff is that you don't really get that much out of the sentence. You know, there's not that much information contained in the sentence. It just says, "Here. Add this structure to this space." and exactly what that means for the overall ongoing interpretation is quite open. 
An individual sentence could mean a hundred different things depending on, quote, "what the space configuration is at the time of utterance". [speaker006:] Mm hmm. Mm hmm. [speaker005:] And so somebody's gonna have to be doing a whole lot of work but not me, I think. [speaker003:] Well [disfmarker] I think that's right. Oh, I [disfmarker] yeah, I, uh, uh [disfmarker] I think that's [disfmarker] Not k I th I don't think it's completely right. I mean, in fact a sentence examples you gave in f did constrain the meaning b the form did constrain the meaning, [speaker005:] Yeah. [speaker003:] and so, um, it isn't, uh [disfmarker] [speaker005:] Sure, but like what [disfmarker] what was the point of saying that sentence about Nixon and France? That is not [disfmarker] there is nothing about that in the [disfmarker] in the sentence really. [speaker006:] That's OK. We usually don't know the point of the sentence at all. [speaker005:] Yeah. [speaker006:] But we know what it's trying to say. We [disfmarker] we know that it's [disfmarker] what predication it's setting up. [speaker005:] Y yeah. [speaker003:] Yeah. But [disfmarker] but [disfmarker] bottom line, I agree with you, [speaker005:] Yeah. [speaker006:] That's all. [speaker005:] Yeah. [speaker003:] that [disfmarker] that [disfmarker] that we're not expecting much out of the, uh f [speaker006:] Purely linguistic cues, right? [speaker003:] uh, the purely form cues, yeah. [speaker006:] So. Mmm. [speaker003:] And, um [disfmarker] I mean, you're [disfmarker] you're the linguist but, uh, it seems to me that th these [disfmarker] we [disfmarker] we [disfmarker] you know, we've talked about maybe a half a dozen linguistics theses in the last few minutes or something. [speaker005:] Yeah, yeah. [speaker003:] Yeah, I mean [disfmarker] [speaker005:] Yeah. Oh, yeah. 
[speaker003:] uh, I [disfmarker] I mean, that [disfmarker] that's my feeling that [disfmarker] that these are really hard uh, problems that decide exactly what [disfmarker] what's going on. [speaker005:] Mm hmm. Yeah. Yeah. [speaker003:] OK. [speaker006:] OK, so, um, one other thing I just want to point out is there's a lot of confusion about the terms like "profile, designate, focus", et cetera, et cetera. [speaker005:] Mm hmm. [speaker003:] Uh, right, right, right. [speaker006:] Um, for now I'm gonna say like "profile"'s often used [disfmarker] like two uses that come to mind immediately. One is in the traditional like semantic highlight of one element with respect to everything else. So "hypotenuse", you profiled this guy against the background of the [pause] right t right triangle. [speaker005:] Mm hmm. [speaker006:] OK. And the second use, um, is in FrameNet. It's slightly different. Oh, I was asking Hans about this. They use it to really mean, um, this [disfmarker] in a frame th this is [disfmarker] the profiles on the [disfmarker] these are the ones that are required. So they have to be there or expressed in some way. Which [disfmarker] which [disfmarker] I'm not saying one and two are mutually exclusive but they're [disfmarker] they're different meanings. [speaker003:] Right. [speaker005:] Mm hmm. [speaker006:] So the closest thing [disfmarker] so I was thinking about how it relates to this notation. For us, um [disfmarker] OK, so how is it [disfmarker] so "designate" [disfmarker] FrameNet? [speaker003:] Does that [disfmarker] Is that really what they mean in [disfmarker] in [disfmarker] [speaker006:] FrameNet? [speaker003:] I didn't know that. [speaker006:] Yeah, yeah. I [disfmarker] I mean, I [disfmarker] I was a little bit surprised about it too. I knew that [disfmarker] [speaker003:] Yeah. [speaker006:] I thought that that would be something like [disfmarker] there's another term that I've heard for that thing [speaker003:] Right, OK. 
[speaker006:] but they [disfmarker] I mean [disfmarker] uh, well, at least Hans says they use it that way. And [disfmarker] [speaker003:] Well, I'll check. [speaker006:] and may maybe he's wrong. Anyway, so I think the [disfmarker] the "designate" that we have in terms of meaning is really the "highlight this thing with respect to everything else". OK? [speaker003:] Right. [speaker006:] So this is what [disfmarker] what it means. But the second one seems to be useful but we might not need a notation for it? We don't have a notation for it but we might want one. So for example we've talked about if you're talking about the lexical item "walk", you know it's an action. Well, it also has this idea [disfmarker] it carries along with it the idea of an actor or somebody's gonna do the walking. Or if you talk about an adjective "red", it carries along the idea of the thing that has the property of having color red. So we used to use the notation "with" for this and I think that's closest to their second one. [speaker003:] Right. [speaker006:] So I d don't yet know, I have no commitment, as to whether we need it. It might be [disfmarker] it's the kind of thing that w a parser might want to think about whether we require [disfmarker] you know, these things are like it's semantically part of it [disfmarker] [speaker003:] N no, no. Well, uh, th critically they're not required syntactically. Often they're pres presu presupposed and all that sort of stuff. [speaker006:] Right. Right, right. Yeah, um, definitely. So, um, "in" was a good example. If you walk "in", like well, in what? You know, like you have to have the [disfmarker] [comment] So [disfmarker] so it's only semantically is it [disfmarker] [speaker003:] Right, there's [disfmarker] [speaker006:] it is still required, say, by simulation time though to have something. [speaker003:] Right. [speaker006:] So it's that [disfmarker] I meant the idea of like that [disfmarker] the semantic value is filled in by sim simulation. 
I don't know if that's something we need to spa to [disfmarker] to like say ever as part of the requirement? [disfmarker] or the construction? or not. We'll [disfmarker] we'll again defer. [speaker003:] Or [disfmarker] I mean, or [disfmarker] or, uh so the [disfmarker] [speaker006:] Have it construed, is that the idea? [speaker003:] Yeah, yeah. [speaker006:] Just point at Robert. Whenever I'm confused just point to him. [speaker003:] Right. [speaker006:] You tell me. [speaker003:] It's [disfmarker] it's his thesis, right? [speaker006:] OK. [speaker003:] Anyway, right, yeah, w this is gonna be a b you're right, this is a bit of in a mess and we still have emphasis as well, or stress, or whatever. [speaker006:] OK, well we'll get, uh uh, I [disfmarker] we have thoughts about those as well. [speaker003:] Yeah. Great. [speaker006:] Um, the I w I would just s some of this is just like my [disfmarker] you know, by fiat. I'm going to say, this is how we use these terms. I don't you know, there's lots of different ways in the world that people use it. [speaker005:] Yeah. [speaker003:] I [disfmarker] that's fine. [speaker006:] I think that, um, the other terms that are related are like focus and stress. So, s [speaker003:] Mm hmm. [speaker006:] I think that the way I [disfmarker] we would like to think, uh, I think is focus is something that comes up in, I mean, lots of [disfmarker] basically this is the information structure. OK, it's like [disfmarker] uh, it's not [disfmarker] it might be that there's a syntactic, uh, device that you use to indicate focus [speaker003:] Mm hmm. [speaker006:] or that there are things like, you know, I think Keith was telling me, [comment] things toward the end of the sentence, post verbal, tend to be the focused [disfmarker] focused element, [speaker005:] Mmm. [speaker006:] the new information. 
You know, if I [disfmarker] "I walked into the room", you [disfmarker] tend to think that, whatever, "into the room" is sort of like the more focused kind of thing. [speaker005:] Mm hmm. Yeah. [speaker006:] And when you, uh, uh, you have stress on something that might be, you know, a cue that the stressed element, or for instance, the negated element is kind of related to information structure. So that's like the new [disfmarker] the sort of like import or whatever of [disfmarker] of this thing. Uh, so [disfmarker] so I think that's kind of nice to keep "focus" being an information structure term. "Stress" [disfmarker] I th and then there are different kinds of focus that you can bring to it. So, um, like "stress", th stress is kind of a pun on [disfmarker] you might have like [disfmarker] whatever, like, um, accent kind of stress. [speaker005:] Mm hmm. [speaker006:] And that's just a [disfmarker] uh, w we'll want to distinguish stress as a form device. You know, like, oh, high volume or whatever. [speaker005:] Yeah. [speaker006:] Um, t uh, and distinguish that from its effect which is, "Oh, the kind of focus we have is we're emphasizing this value often as opposed to other values", right? So focus carries along a scope. Like if you're gonna focus on this thing and you wanna know [disfmarker] it sort of evokes all the other possibilities that it wasn't. Um, so my classic [disfmarker] my now classic example of saying, "Oh, he did go to the meeting?", [speaker005:] Mm hmm. Yeah. [speaker006:] that was my way of saying [disfmarker] as opposed to, you know, "Oh, he didn't g" or "There was a meeting?" I think that was the example that was caught on by the linguists immediately. [speaker005:] Yeah. Yeah. [speaker006:] And so, um, the [disfmarker] like if you said he [disfmarker] you know, there's all these different things that if you put stress on a different part of it then you're, c focusing, whatever, on, uh [disfmarker] [speaker005:] Mm hmm.
[speaker006:] "he walked to the meeting" as opposed to "he ran", or "he did walk to the meeting" as opposed to "he didn't walk". You know, [speaker005:] Mm hmm. [speaker006:] so we need to have a notation for that which, um, I think that's still in progress. So, sort of I'm still working it out. But it did [disfmarker] one [disfmarker] one implication it does f have for the other side, which we'll get to in a minute is that I couldn't think of a good way to say "here are the possible things that you could focus on", cuz it seems like any entity in any sentence, you know, or any meaning component of anyth you know [disfmarker] all the possible meanings you could have, any of them could be the subject of focus. [speaker003:] Mmm. [speaker006:] But I think one [disfmarker] the one thing you can schematize is the kind of focus, right? So for instance, you could say it's the [disfmarker] the tense on this as opposed to, um, the [disfmarker] the action. OK. Or it's [disfmarker] uh, it's an identity thing or a contrast with other things, or stress this value as opposed to other things. So, um, it's [disfmarker] it is kind of like a profile [disfmarker] profile background thing but I [disfmarker] I can't think of like the limited set of possible meanings that you would [disfmarker] that you would focu [speaker005:] Light up with focus, yeah. [speaker006:] light [disfmarker] highlight as opposed to other ones. So it has some certain complications for the, uh, uh [disfmarker] later on. Li I mean, uh, the best thing I can come up with is that information has a list of focused elements. For instance, you [disfmarker] Oh, one other type that I forgot to mention is like query elements and that's probably relevant for the like "where is", you know, "the castle" kind of thing? [speaker005:] Mm hmm. 
[speaker006:] Because you might want to say that, um, location or cert certain WH words bring [disfmarker] you know, sort of automatically focus in a, you know, "I don't know the identity of this thing" kind of way on certain elements. So. OK. Anyway. So that's onl there are [disfmarker] there are many more things that are uncl that are sort of like a little bit unstable about the notation but it's most [disfmarker] I think it's [disfmarker] this is, you know, the current [disfmarker] current form. Other things we didn't [vocalsound] totally deal with, um, well, we've had a lot of other stuff that Keith and I have them working on in terms of like how you deal with like an adjective. [speaker005:] Oh, there's a bunch. Yeah. [speaker006:] You know, a [disfmarker] a nominal expression. And, um, [speaker005:] Yeah. [speaker006:] I mean, we should have put an example of this and we could do that later. [speaker005:] Yeah. [speaker006:] But I think the not inherently like the general principles still work though, that, um, we can have constructions that have sort of constituent structure in that there is like, you know, for instance, one [disfmarker] Uh, you know, they [disfmarker] they have constituents, right? So you can like nest things when you need to, but they can also overlap in a sort of flatter way. So if you don't have like a lot of grammar experience, then like this [disfmarker] this might, you know, be a little o opaque. But, you know, we have the [pause] properties of dependency grammars and some properties of constituents [disfmarker] constituent based grammar. So that's [disfmarker] I think that's sort of the main thing we wanted to aim for [speaker005:] Mm hmm. [speaker006:] and so far it's worked out OK. [speaker003:] Good. [speaker006:] So. OK. [speaker001:] I can say two things about the f [speaker006:] Yes. [speaker001:] Maybe you want to forget stress. This [disfmarker] my f [speaker006:] As a word? 
[speaker001:] No, as [disfmarker] as [disfmarker] Just don't [disfmarker] don't think about it. [speaker006:] As a [disfmarker] What's that? Sorry. [speaker001:] If [disfmarker] canonically speaking you can [disfmarker] if you look at a [disfmarker] a curve over sentence, you can find out where a certain stress is and say, "hey, that's my focus exponent." [speaker006:] Mm hmm. [speaker005:] Right. [speaker001:] It doesn't tell you anything what the focus is. [speaker006:] Mm hmm. [speaker001:] If it's just that thing, [speaker006:] Or the constituent that it falls in. [speaker001:] a little bit more or the whole phrase. [speaker005:] Mm hmm. [speaker001:] Um [disfmarker] [speaker006:] You mean t forget about stress, the form cue? [speaker001:] The form bit [speaker005:] Yeah. [speaker001:] because, uh, as a form cue, um, not even trained experts can always [disfmarker] well, they can tell you where the focus exponent is sometimes. [speaker006:] OK. [speaker001:] And that's also mostly true for read speech. In [disfmarker] in real speech, um, people may put stress. It's so d context dependent on what was there before, phrase ba breaks, um, restarts. [speaker006:] Yeah. Mm hmm. [speaker001:] It's just, um [disfmarker] it's absurd. It's complicated. [speaker006:] OK, [speaker005:] Yeah, I mean, I [disfmarker] I'm sort of inclined to say let's worry about specifying the information structure focus of the sentence [speaker001:] And all [disfmarker] [speaker006:] I believe you, yeah. Mm hmm. [speaker005:] and then, [speaker006:] Ways that you can get it come from th [speaker005:] hhh, [comment] the phonology component can handle actually assigning an intonation contour to that. [speaker006:] right. [speaker005:] You know, I mean, later on we'll worry about exactly how [disfmarker] [speaker001:] Or [disfmarker] or map from the contour to [disfmarker] to what the focus exponent is. [speaker005:] y Yeah. [speaker006:] Mm hmm. [speaker005:] Exactly. 
But figure out how the [disfmarker] Yeah. [speaker001:] But, uh, if you don't know what you're [disfmarker] what you're focus is then you're [disfmarker] you're hopeless uh ly lost anyways, [speaker006:] Right. That's fine, yeah. Mm hmm. [speaker001:] and the only way of figuring out what that is, [vocalsound] is, um, by sort of generating all the possible alternatives to each focused element, decide which one in that context makes sense and which one doesn't. [speaker006:] Mm hmm. [speaker001:] And then you're left with a couple three. [speaker006:] Mm hmm. [speaker001:] So, you know, again, that's something that h humans can do, um, but far outside the scope of [disfmarker] of any [disfmarker] anything. So. You know. It's [disfmarker] [speaker006:] OK. Well, uh, yeah, I wouldn't have assumed that it's an easy problem in [disfmarker] in absence of all the oth [speaker001:] u u [speaker006:] you need all the other information I guess. [speaker001:] But it's [disfmarker] it's [disfmarker] what it [disfmarker] uh, it's pretty easy to put it in the formalism, though. I mean, because [speaker006:] Yeah. [speaker001:] you can just say whatever stuff, "i is the container being focused or the [disfmarker] the entire whatever, both, and so forth." [speaker006:] Mm hmm, mm hmm. [speaker005:] Mm hmm. [speaker006:] Yeah. Exactly. So the sort of effect of it is something we want to be able to capture. [speaker003:] Yeah, so b b but I think the poi I'm not sure I understand but here's what I th think is going on. That if we do the constructions right when a particular construction matches, it [disfmarker] the fact that it matches, does in fact specify the focus. [speaker006:] W uh, I'm not sure about that. Or it might limit [disfmarker] it cert certainly constrains the possibilities of focus. [speaker003:] OK. Uh [disfmarker] k uh, at at the very least it constrai [speaker006:] I think that's [disfmarker] that's, th that's certainly true. 
And depending on the construction it may or may not f specify the focus, right? [speaker003:] Oh, uh, for sure, yes. There are constrai yeah, it's not every [disfmarker] but there are constructions, uh, where you t explicitly take into account those considerations [speaker006:] Yeah. Mm hmm. [speaker003:] that you need to take into account in order to decide which [disfmarker] what is being focused. [speaker006:] Mm hmm. [speaker001:] Mm hmm. So we talked about that a little bit this morning. "John is on the bus, [speaker006:] Mm hmm. [speaker001:] not Nancy." [speaker006:] Hmm. [speaker001:] So that's [disfmarker] focuses on John. [speaker003:] Right. [speaker001:] "John is on the bus and not on the train." [speaker006:] Mm hmm. [speaker003:] Right. [speaker001:] "John is on the bus" versus "John is on the train." [speaker006:] Right. [speaker001:] And "John is on the bus" versus "was", and e [speaker006:] Is on. "John is on the bus". [speaker001:] "it's the bu" so e [speaker006:] Yeah. Yeah. [speaker003:] Right. Yeah, all [disfmarker] all of those. Yeah. [speaker006:] Right. [speaker001:] All of these and will we have [disfmarker] u is it all the same constructions? Just with a different foc focus constituent? [speaker006:] Yeah, I would say that argument structure in terms of like the main like sort of, [speaker001:] Mm hmm. [speaker006:] I don't know [disfmarker] the fact that you can get it without any stress and you have some [disfmarker] whatever is predicated anyway should be the same set of constructions. So that's why I was talking about overlapping constructions. So, then you have a separate thing that picks out, you know, stress on something relative to everything else. [speaker003:] Yeah. [speaker005:] Mm hmm. [speaker003:] So, the question is actually [disfmarker] [speaker006:] And it would [disfmarker] [speaker003:] oh, I'm sorry, [speaker006:] yeah, [speaker003:] go ahead, finish.
[speaker006:] and it w and that would have to [disfmarker] uh it might be ambiguous as, uh, whether it picks up that element, or the phrase, or something like that. But it's still is limited possibility. [speaker001:] Hmm. [speaker006:] So that should, you know, interact with [disfmarker] it should overlap with whatever other construction is there. [speaker001:] Yeah. [speaker003:] S s the question is, do we have a way on the other page, uh, when we get to the s semantic side, of saying what the stressed element was, or stressed phrase, or something. [speaker006:] Mm hmm. Well, so that's why I was saying how [disfmarker] since I couldn't think of an easy like limited way of doing it, um, all I can say is that information structure has a focused slot [speaker003:] Right. [speaker006:] and I think that should be able to refer to [disfmarker] [speaker003:] So that's down at the bottom here when we get over there. [speaker006:] Yeah, [speaker003:] OK. [speaker006:] and, infer [disfmarker] and I don't have [disfmarker] I don't have a great way or great examples [speaker003:] I'll I'll wait. [speaker006:] but I think that [disfmarker] something like that is probably gonna be, uh, more [disfmarker] more what we have to do. [speaker003:] OK. [speaker001:] Hmm. [speaker006:] But, um, [speaker003:] OK. [speaker001:] So [speaker006:] OK, that was one comment. And you had another one? [speaker001:] Yeah, well the [disfmarker] once you know what the focus is the [disfmarker] everything else is background. How about "topic comment" that's the other side of information. [speaker006:] How about what? [speaker001:] Topic comment. [speaker006:] Yeah, so that was the other thing. And so I didn't realize it before. It's like, "oh!" It was an epiphany that it [disfmarker] you know, topic and focus are a contrast set. So topic is [disfmarker] Topic focused seems to me like, um, background profile, OK, or a landmark trajector, or some something like that. 
There's [disfmarker] there's definitely, um, that kind of thing going on. [speaker001:] Mmm. [speaker006:] Now I don't know whether [disfmarker] I n I don't have as many great examples of like topic indicating constructions on like focus, right? Um, topic [disfmarker] it seems kind of [disfmarker] you know, I think that might be an ongoing kind of thing. [speaker001:] Mm hmm. [speaker005:] Japanese has this though. [speaker006:] Topic marker? [speaker005:] You know. Yeah, that's what "wa" is, [speaker001:] Yeah. [speaker006:] Mm hmm. [speaker005:] uh, just to mark which thing is the topic. [speaker006:] Mm hmm. [speaker005:] It doesn't always have to be the subject. [speaker006:] Right. So again, information structure has a topic slot. And, you know, I stuck it in thinking that we might use it. [speaker001:] Mm hmm. [speaker006:] Um, I think I stuck it in. [speaker003:] Yep, it's there. [speaker006:] Um, and one thing that I didn't do consistently, um, is [disfmarker] when we get there, is like indicate what kind of thing fits into every role. I think I have an idea of what it should be but th you know, so far we've been getting away with like either a type constraint or, um, you know, whatever. I forg it'll be a frame. You know, it'll be [disfmarker] it'll be another predication or it'll be, um, I don't know, some value from [disfmarker] from some something, some variable and scope or something like that, or a slot chain based on a variable and scope. OK, so well that's [disfmarker] should we flip over to the other side officially then? [speaker001:] Mm hmm, hmm. [speaker006:] I keep, uh, like, pointing forward to it. [speaker005:] OK, side one. [speaker006:] Yeah. Now we'll go back to s OK, so this doesn't include something which mi mi may have some effect on [disfmarker] on it, which is, um, the discourse situation context record, right? 
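[Editor's sketch: the exchange above settles on an "information structure" record with a focus slot (a list, since any meaning component can in principle be focused, and there are different kinds of focus) and a topic slot. The slot names come from the discussion; the class layout and the particular FocusKind values are illustrative assumptions, not part of the formalism as stated.]

```python
# Hypothetical sketch of the "information structure" record under discussion.
# Slot names (focus, topic) are from the meeting; the layout and the
# FocusKind enumeration are assumptions for illustration only.
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import Optional

class FocusKind(Enum):
    CONTRAST = auto()   # stress evokes alternatives: "he DID go" vs. "he didn't"
    QUERY = auto()      # WH-elements: "WHERE is the castle"

@dataclass
class Focus:
    element: str        # any meaning component can be the subject of focus
    kind: FocusKind

@dataclass
class InformationStructure:
    # "information structure has a focused slot" -- kept as a list, since
    # there is no limited set of possible focused elements
    focus: list[Focus] = field(default_factory=list)
    # "information structure has a topic slot"
    topic: Optional[str] = None

# "Oh, he DID go to the meeting?" -- contrast focus on the polarity
info = InformationStructure(topic="he")
info.focus.append(Focus(element="go.polarity", kind=FocusKind.CONTRAST))
```

The point of the list-valued focus slot is exactly the one made above: the kind of focus can be schematized, but the set of focusable elements cannot.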
So I didn't [disfmarker] I [disfmarker] I meant just like draw a line and like, you know, you also have, uh, some tracking of what was going on. [speaker003:] Right. [speaker006:] And sort of [disfmarker] this is a big scale comment before I, you know, look into the details of this. But for instance you could imagine instead of having [disfmarker] I [disfmarker] I changed the name of [disfmarker] um it used to be "entities". So you see it's "scenario", "referent" and "discourse segment". And "scenario" is essentially what kind of [disfmarker] what's the basic predication, what event happened. And actually it's just a list of various slots from which you would draw [disfmarker] draw in order to paint your picture, a bunch of frames, bi and bindings, right? Um, and obviously there are other ones that are not included here, general cultural frames and general like, uh, other action f you know, specific X schema frames. [speaker005:] Mm hmm. [speaker006:] OK, whatever. The middle thing used to be "entities" because you could imagine it should be like really a list where here was various information. And this is intended to be grammatically specifiable information about a referent [disfmarker] uh, you know, about some entity that you were going to talk about. So "Harry walked into the room", "Harry" and "room", you know, the room [disfmarker] th but they would be represented in this list somehow. And it could also have for instance, it has this category slot. Um, it should be either category or in or instance. Basically, it could be a pointer to ontology. So that everything you know about this could be [disfmarker] could be drawn in. 
But the important things for grammatical purposes are for [disfmarker] things like number, gender, um [disfmarker] ki the ones I included here are slightly arbitrary but you could imagine that, um, you need to figure out wheth if it's a group whether, um, some event is happening, linear time, linear spaces, like, you know, are [disfmarker] are they doing something serially or is it like, um, uh I'm [disfmarker] I'm not sure. Because this partly came from, uh, Talmy's schema and I'm not sure we'll need all of these actually. But [disfmarker] Um, and then the "status" I used was like, again, in some languages, you know, like for instance in child language you might distinguish between different status. So, th the [disfmarker] the big com and [disfmarker] and finally "discourse segment" is about [vocalsound] sort of speech act y information structure y, like utterance specific kinds of things. So the comment I was going to make about, um, changing entity [disfmarker] the entity's block to reference is that [vocalsound] you can imagine your discourse like situation context, you have a set of entities that you're sort of referring to. And you might [disfmarker] that might be sort of a general, I don't know, database of all the things in this discourse that you could refer to. And I changed to "reference" cuz I would say, for a particular utterance you have particular referring expressions in it. And those are the ones that you get information about that you stick in here. For instance, I know it's going to be plural. I know it's gonna be feminine or something like that. And [disfmarker] and these could actually just point to, you know, the [disfmarker] the ID in my other list of enti active entities, right? So, um, uh, th there's [disfmarker] there's all this stuff about discourse status. We've talked about. I almost listed "discourse status" as a slot where you could say it's active. 
You know, there's this, um, hierarchy [disfmarker] uh there's a schematization of, you know, things can be active or they can be, um, accessible, inaccessible. [speaker005:] Yeah. [speaker006:] It was the one that, you know, Keith, um, emailed to us once, to some of us, not all of us. And the thing is that that [disfmarker] I noticed that that, um, list was sort of discourse dependent. It was like in this particular set, s you know, instance, it has been referred to recently or it hasn't been, [speaker005:] Yeah. [speaker006:] or this is something that's like in my world knowledge but not active. [speaker003:] This [disfmarker] Uh [disfmarker] yeah, well there [disfmarker] there seems to be context properties. [speaker006:] So. Yeah, they're contex and for instance, I used to have a location thing there [speaker003:] Yeah. [speaker006:] but actually that's a property of the situation. And it's again, time, you know [disfmarker] at cert certain points things are located, you know, near or far from you [speaker003:] Well, uh, uh, this is recursive [speaker006:] and [disfmarker] [speaker003:] cuz until we do the uh, mental space story, we're not quite sure [disfmarker] [comment] Th th [speaker006:] Yeah. [speaker003:] which is fine. [speaker006:] Yeah, yeah. [speaker003:] We'll just [disfmarker] we'll j [speaker006:] So some of these are, uh [disfmarker] [speaker003:] we just don't know yet. [speaker006:] Right. So I [disfmarker] so for now I thought, well maybe I'll just have in this list the things that are relevant to this particular utterance, right? Everything else here is utterance specific. Um, and I left the slot, "predications", open because you can have, um, things like "the guy I know from school". Or, you know, like your referring expression might be constrained by certain like unbounded na amounts of prep you know, predications that you might make. [speaker005:] Mm hmm. 
[speaker006:] And it's unclear whether [disfmarker] I mean, you could just have in your scenario, "here are some extra few things that are true", right? [speaker005:] Mm hmm. [speaker006:] And then you could just sort of not have this slot here. Right? You're [disfmarker] but [disfmarker] but it's used for identification purposes. So it's [disfmarker] it's a little bit different from just saying "all these things are true from my utterance". [speaker005:] Yeah. [speaker003:] Right. [speaker005:] Yeah. [speaker006:] Um. [speaker005:] Right, "this guy I know from school came for dinner" does not mean, um, "there's a guy, I know him from school, and he came over for dinner". That's not the same effect. [speaker006:] Yeah, it's a little bit [disfmarker] it's a little bit different. Right? So [disfmarker] Or maybe that's like a restrictive, non restrictive [disfmarker] you know, it's like it gets into that kind of thing for [disfmarker] [speaker005:] Yeah. [speaker006:] um, but maybe I'm mixing, you know [disfmarker] this is kind of like the final result after parsing the sentence. So you might imagine that the information you pass to, you know [disfmarker] in identifying a particular referent would be, "oh, some [disfmarker]" you know, it's a guy [speaker005:] Mm hmm. [speaker006:] and it's someone I know from school ". So maybe that would, you know, be some intermediate structure that you would pass into the disc to the, whatever, construal engine or whatever, discourse context, to find [disfmarker] you know, either create this reference, [speaker005:] Yeah. [speaker006:] in which case it'd be created here, [speaker005:] Mm hmm. [speaker006:] and [disfmarker] you know, so [disfmarker] so you could imagine that this might not [disfmarker] So, uh, I'm uncommitted to a couple of these things. [speaker001:] But [disfmarker] to make it m precise at least in my mind, uh, it's not precise. [speaker006:] Um. [speaker001:] So "house" is gender neuter? 
[speaker006:] Um, it could be in [disfmarker] [speaker001:] In reality or in [disfmarker] [speaker003:] Semantically. [speaker006:] semantically, yeah. Yeah. [speaker001:] semantically. So [disfmarker] [speaker006:] So it uh, uh, a table. You know, a thing that c doesn't have a gender. So. Uh, it could be that [disfmarker] I mean, maybe you'd [disfmarker] maybe not all these [disfmarker] I mean, I wou I would say that I tried to keep slots here that were potentially relevant to most [disfmarker] most things. [speaker001:] No, just to make sure that we [disfmarker] everybody that's [disfmarker] completely agreed that it [disfmarker] it has nothing to do with, uh, form. [speaker006:] Yeah. OK, that is semantic as opposed to [disfmarker] Yeah. Yeah. That's right. Um. S so again [disfmarker] [speaker001:] Then "predications" makes sense to [disfmarker] to have it open for something like, uh, accessibility or not. [speaker006:] Open to various things. [speaker001:] Yeah. [speaker006:] Right. OK, so. Let's see. So maybe having made that big sca sort of like large scale comment, should I just go through each of these slots [disfmarker] uh, each of these blocks, um, a little bit? [speaker005:] Sure. [speaker006:] Um, mostly the top one is sort of image schematic. And just a note, which was that, um [disfmarker] s so when we actually ha so for instance, um, some of them seem more inherently static, OK, like a container or sort of support ish. And others are a little bit seemingly inherently dynamic like "source, path, goal" is often thought of that way or "force", or something like that. But in actual fact, I think that they're intended to be sort of neutral with respect to that. And different X schemas use them in a way that's either static or dynamic. So "path", you could just be talking about the path between this and this. [speaker005:] Mmm. [speaker006:] And you know, "container" that you can go in and out. All of these things. 
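[Editor's sketch: the preceding exchanges describe the "referent" block as grammatically specifiable information about a referring expression: a category/instance slot that can point into the ontology, number, gender (semantic, not formal, as just agreed for "house"), a discourse status drawn from the active/accessible/inaccessible schematization, and an open predications slot for identifying material like "the guy I know from school". Those slot names are from the discussion; the class layout and entity_id pointer are assumptions.]

```python
# Hypothetical sketch of the "referent" block. Slot names (category, number,
# gender, status, predications) and the status values come from the
# discussion; the dataclass layout itself is an assumption.
from dataclasses import dataclass, field
from enum import Enum

class DiscourseStatus(Enum):
    ACTIVE = "active"
    ACCESSIBLE = "accessible"
    INACCESSIBLE = "inaccessible"

@dataclass
class Referent:
    entity_id: str                  # pointer into the discourse's list of active entities
    category: str                   # or an instance: pointer into the ontology
    number: str = "singular"
    gender: str = "neuter"          # semantic gender: "house"/"table" are neuter
    status: DiscourseStatus = DiscourseStatus.ACTIVE
    # open slot for identifying predications, e.g. "the guy I know from school"
    predications: list = field(default_factory=list)

# "Harry walked into the room" introduces two referents:
harry = Referent(entity_id="e1", category="person", gender="masculine")
room = Referent(entity_id="e2", category="room")
```

The entity_id field plays the role mentioned above of pointing back to the ID in the separate list of active entities in the discourse context.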
And so, um, I think this came up when, uh, Ben and I were working with the Spaniards, um, the other day [disfmarker] the "Spaniettes", as we [vocalsound] called them [disfmarker] um, to decide like how you want to split up, like, s image schematic contributions versus, like, X schematic contributions. How do you link them up. And I think again, um, it's gonna be something in the X schema that tells you "is this static or is this dynamic". So we definitely need [disfmarker] that sort of aspectual type gives you some of that. Um, that, you know, is it, uh, a state or is it a change of state, or is it a, um, action of some kind? [speaker001:] Uh, i i i is there any meaning to when you have sort of parameters behind it and when you don't? [speaker006:] Uh. Yeah. [speaker001:] Just means [disfmarker] [speaker006:] Oh, oh! You mean, in the slot? [speaker001:] Mm hmm. [speaker006:] Um, no, it's like X sc it's [disfmarker] it's like I was thinking of type constraints but X schema, well it obviously has to be an X schema. "Agent", I mean, the [disfmarker] the performer of the X schema, that s depends on the X schema. You know, and I [disfmarker] in general it would probably be, you know [disfmarker] [speaker005:] So the difference is basically whether you thought it was obvious what the possible fillers were. [speaker006:] Yeah, basically. [speaker001:] Mm hmm. [speaker005:] OK. [speaker006:] Um, "aspectual type" probably isn't obvious but I should have [disfmarker] So, I just neglected to stick something in. "Perspective", "actor", "undergoer", "observer", um, [speaker002:] Mmm. [speaker006:] I think we've often used "agent", "patient", obser [speaker005:] "Whee!" That's that one, right? [speaker006:] Yeah, exactly. [vocalsound] Exactly. 
Um, and so one nice thing that, uh, we had talked about is this example [comment] of like, if you have a passive construction then one thing it does is ch you know [disfmarker] definitely, it is one way to [disfmarker] for you to, you know, specifically take the perspective of the undergoing kind of object. And so then we talked about, you know, whether well, does that specify topic as well? Well, maybe there are other things. You know, now that it's [disfmarker] subject is more like a topic. And now that, you know [disfmarker] Anyway. So. Sorry. I'm gonna trail off on that one cuz it's not that f important right now. Um, [speaker003:] N now, for the moment we just need the ability to l l write it down if [disfmarker] if somebody figured out what the rules were. [speaker006:] To know how [disfmarker] Yeah. Yeah. Exactly. [speaker003:] Yeah. [speaker006:] Um, some of these other ones, let's see. So, uh, one thing I'm uncertain about is how polarity interacts. So polarity, uh, is used for, like, "action did not take place" for instance. [speaker003:] Mm hmm. [speaker006:] So by default it'll be like "true", I guess, you know, if you're specifying events that did happen. You could imagine that you skip out this [disfmarker] you know, leave off this polarity, you know, not [disfmarker] don't have it here. And then have it part of the speech act in some way. [speaker003:] Mm hmm. [speaker006:] There's some negation. But the reason why I left it in is cuz you might have a change of state, let's say, where some state holds and then some state doesn't hold, and you're just talking, you know [disfmarker] if you're trying to have the nuts and bolts of simulation you need to know that, you know, whatever, the holder doesn't and [disfmarker] [speaker003:] No, I th I think at this lev which is [disfmarker] it should be where you have it. [speaker006:] OK, it's [disfmarker] so it's [disfmarker] it's [disfmarker] it's fine where it is.
So, [speaker003:] I mean, how you get it may [disfmarker] may in will often involve the discourse [speaker006:] OK. May come from a few places. [speaker003:] but [disfmarker] but [disfmarker] by the time you're simulating you sh y you should know that. [speaker006:] Right. Right. [speaker005:] So, [vocalsound] I'm still just really not clear on what I'm looking at. The "scenario" box, like, what does that look like for an example? [speaker006:] Yeah. [speaker005:] Like, not all of these things are gonna be here. [speaker006:] Mm hmm. [speaker003:] Correct. [speaker005:] This is just basically says [speaker006:] It's a grab bag of [disfmarker] [speaker005:] "part of what I'm going to hand you is a whole bunch of s uh, schemas, image, and X schemas. Here are some examples of the sorts of things you might have in there". [speaker006:] So that's exactly what it is. [speaker005:] OK. [speaker006:] And for a particular instance which I will, you know, make an example of something, is that you might have an instance of container and path, let's say, as part of your, you know, "into" you know, definition. [speaker005:] Mm hmm. Mm hmm. [speaker006:] So you would eventually have instances filled in with various [disfmarker] various values for all the different slots. [speaker005:] Mm hmm. [speaker006:] And they're bound up in, you know, their bindings and [disfmarker] and [disfmarker] and values. [speaker005:] OK. [speaker003:] W it c [speaker005:] Do you have to say about the binding in your [disfmarker] is there a slot in here for [disfmarker] that tells you how the bindings are done? [speaker003:] No, no, no. I [disfmarker] let's see, I think we're [disfmarker] we're not [disfmarker] I don't think we have it quite right yet. [speaker005:] OK. [speaker003:] So, uh, what this is, let's suppose for the moment it's complete. [speaker005:] OK. 
[speaker003:] OK, uh, then this says that when an analysis is finished, the whole analysis is finished, [comment] you'll have as a result, uh, some s resulting s semspec for that utterance in context, [speaker005:] Mm hmm. [speaker003:] which is made up entirely of these things and, uh, bindings among them. [speaker005:] Mm hmm. [speaker003:] And bindings to ontology items. [speaker005:] Mm hmm. [speaker003:] So that [disfmarker] that the who that this is the tool kit under whi out of which you can make a semantic specification. [speaker005:] Mm hmm. [speaker003:] So that's A. [speaker005:] OK. [speaker003:] But B, which is more relevant to your life, is this is also the tool kit that is used in the semantic side of constructions. [speaker005:] Mm hmm. [speaker003:] So this is an that anything you have, in the party line, [comment] anything you have as the semantic side of constructions comes, from pieces of this [disfmarker] ignoring li [speaker005:] OK. [speaker003:] I mean, in general, you ignore lots of it. [speaker005:] Right. [speaker003:] But it's got to be pieces of this along with constraints among them. [speaker005:] OK. [speaker003:] Uh, so that the, you know, goal of the, uh uh, "source, path, goal" has to be the landmark of the conta you know, the interior of this container. [speaker005:] Mm hmm. [speaker003:] Or whate whatever. [speaker005:] Yeah. [speaker003:] So those constraints appear in constructions [speaker005:] Mm hmm. [speaker003:] but pretty much this is the full range of semantic structures available to you. [speaker005:] OK. [speaker006:] Except for "cause", that I forgot. But anyway, there's som some kind of causal structure for composite events. [speaker005:] Yeah. [speaker003:] OK, good. Let's [disfmarker] let's mark that. So we need a c [speaker006:] Uh, I mean, so it gets a little funny. 
These are all [disfmarker] so far these structures, especially from "path" and on down, these are sort of relatively familiar, um, image schematic kind of slots. Now with "cause", uh, the fillers will actually be themselves frames. Right? So you'll say, "event one causes event B [disfmarker] [speaker003:] Right. [speaker005:] Mm hmm. [speaker003:] And [disfmarker] and [disfmarker] and [disfmarker] and this [disfmarker] this [disfmarker] this again may ge our, um [disfmarker] and we [disfmarker] and [disfmarker] and, of course, worlds. [speaker006:] uh, event two", and [disfmarker] Yeah. So that's, uh these are all implicitly one [disfmarker] within, uh within one world. [speaker005:] Mm hmm. [speaker006:] Um, even though saying that place takes place, whatever. Uh, if y if I said "time" is, you know, "past", that would say "set that this world", you know, "somewhere, before the world that corresponds to our current speech time". [speaker005:] Mm hmm. Mm hmm. Yeah. [speaker006:] So. But that [disfmarker] that [disfmarker] that's sort of OK. The [disfmarker] the [disfmarker] within the event it's st it's still one world. Um. Yeah, so "cause" and [disfmarker] Other frames that could come in [disfmarker] I mean, unfortunately you could bring in say for instance, um, uh, "desire" or something like that, [speaker005:] Mm hmm. [speaker006:] like "want". And actually there is right now under "discourse segments", um, "attitude"? [speaker005:] Mm hmm. [speaker006:] "Volition"? could fill that. So there are a couple things where I like, oh, I'm not sure if I wanted to have it there [speaker005:] Well that's [disfmarker] [speaker006:] or [disfmarker] Basically there was a whole list of [disfmarker] of possible speaker attitudes that like say Talmy listed. And, like, well, I don't [disfmarker] you know, it was like "hope, wish, desire", [speaker003:] Right. [speaker005:] Uh huh. [speaker006:] blah blah blah.
And it's like, well, I feel like if I wanted to have an extra meaning [disfmarker] I don't know if those are grammatically marked in the first place. So [disfmarker] They're more lexically marked, right? At least in English. [speaker005:] Mmm. [speaker006:] So if I wanted to I would stick in an extra frame in my meaning, saying, e so th it'd be a hierarchical frame then, right? You know, like "Naomi wants [disfmarker] wants su a certain situation and that situation itself is a state of affairs". [speaker003:] S right. So [disfmarker] so, "want" itself can be [disfmarker] [pause] i i i i i [speaker006:] u Can be just another frame that's part of your [disfmarker] [speaker003:] Well, and it i basically it's an action. In [disfmarker] in our s in our [disfmarker] in our [disfmarker] [speaker006:] Yeah. Situation. [comment] Right, right. [speaker003:] in [disfmarker] in our [disfmarker] in our s terminology, "want" can be an action and "what you want" is a world. [speaker006:] Mm hmm. [speaker002:] Hmm. [speaker006:] Mmm. [speaker003:] So that's [disfmarker] I mean, it's certainly one way to do it. [speaker005:] Mm hmm. [speaker003:] Yeah, there [disfmarker] there are other things. Causal stuff we absolutely need. Mental space we need. [speaker006:] Mm hmm. [speaker003:] The context we need. Um, so anyway, Keith [disfmarker] So is this comfortable to you that, uh, once we have this defined, it is your tool kit for building the semantic part of constructions? [speaker005:] Mm hmm. [speaker003:] And then when we combine constructions semantically, the goal is going to be to fill out more and more of the bindings needed in order to come up with the final one. [speaker005:] Mm hmm. Yeah. [speaker003:] And that's the wh and [disfmarker] and I mean, that [disfmarker] according to the party line, that's the whole story. [speaker005:] Mm hmm. Yeah. Um. y Right. That makes sense.
So I mean, there's this stuff in the [disfmarker] off in the scenario, which just tells you how various [disfmarker] what schemas you're using and they're [disfmarker] how they're bound together. And I guess that some of the discourse segment stuff [disfmarker] [speaker006:] Mm hmm. [speaker005:] is that where you would sa I mean, that's [disfmarker] OK, that's where the information structure is which sort of is a kind of profiling on different parts of, um, of this. [speaker006:] Right. Exactly. [speaker005:] I mean, what's interesting is that the information structure stuff [disfmarker] Hmm. There's almost [disfmarker] I mean, we keep coming back to how focus is like this [disfmarker] this, uh, trajector landmark thing. [speaker006:] Yeah. [speaker005:] So if I say, um, You know, "In France it's like this". You know, great, we've learned something about France but the fact is that utterances of that sort are generally used to help you draw a conclusion also about some implicit contrast, like "In France it's like this". [speaker006:] Right. [speaker005:] And therefore you're supposed to say, "Boy, life sure [disfmarker]" You know, "in France kids are allowed to drink at age three". And w you're [disfmarker] that's not just a fact about France. [speaker006:] Right, right. [speaker005:] You also conclude something about how boring it is here in the U S. Right? [speaker003:] Right. [speaker005:] And so [disfmarker] [speaker006:] S so I would prefer not to worry about that for right now [speaker005:] OK. [speaker006:] and to think that there are, um, [speaker005:] That comes in [speaker006:] discourse level constructions in a sense, topic [disfmarker] topic focus constructions that would say, "oh, when you focus something" then [disfmarker] [speaker005:] and, uh [disfmarker] Mm hmm. Yeah. [speaker006:] just done the same way [disfmarker] just actually in the same way as the lower level. 
If you stressed, you know, "John went to the [disfmarker]", you know, "the bar" whatever, you're focusing that [speaker005:] Mm hmm. [speaker006:] and a in a possible inference is "in contrast to other things". [speaker005:] Yeah. [speaker006:] So similarly for a whole sentence, you know, "in France such and such happens". [speaker005:] Yeah. Yeah, yeah. [speaker006:] So the whole thing is sort of like again implicitly as opposed to other things that are possible. [speaker005:] Yeah. [speaker001:] Uh, just [disfmarker] just, uh, look [disfmarker] read uh even sem semi formal Mats Rooth. [speaker006:] I mean [disfmarker] Yeah. Uh huh. [speaker001:] If you haven't read it. It's nice. And just pick any paper on alternative semantics. [speaker006:] Uh huh. [speaker005:] OK. [speaker001:] So that's his [disfmarker] that's the best way of talking about focus, is I think his way. [speaker005:] OK, what was the name? [speaker001:] Mats. MATS. [speaker005:] OK. [speaker001:] Rooth. I think two O's, yes, TH. [speaker005:] OK. [speaker001:] I never know how to pronounce his name because he's sort of, [speaker003:] S Swede? [speaker001:] uh, he is Dutch [speaker003:] Dutch? [speaker005:] Yeah. [speaker003:] Oh, Dutch. [speaker001:] and, um [disfmarker] but very confused background I think. [speaker003:] Uh huh. [speaker001:] So [pause] and, um, [speaker005:] Mats Gould. [speaker001:] And sadly enough he also just left the IMS in Stuttgart. So he's not there anymore. [speaker005:] Hmm. [speaker001:] But, um [disfmarker] I don't know where he is right now but alternative semantics is [disfmarker] if you type that into an, uh, uh, browser or search engine you'll get tons of stuff. [speaker005:] OK. OK. OK, thanks. [speaker001:] And what I'm kind of confused about is [disfmarker] is what the speaker and the hearer is [disfmarker] is sort of doing there. [speaker006:] So for a particular segment it's really just a reference to some other entity again in the situation, right? 
So for a particular segment the speaker might be you or might be me. [speaker001:] Yeah. [speaker006:] Um, hearer is a little bit harder. It could be like multiple people. I guess that [disfmarker] that [disfmarker] that [disfmarker] that's not very clear from here [disfmarker] I mean, that's not allowed here. [speaker001:] Yeah, but you [disfmarker] Don't we ultimately want to handle that analogously to the way we handle time and place, because "you", "me", "he", "they", you know, "these guys", all these expressions, nuh, are in [disfmarker] in much the same way contextually dependent as "here," and "now," and "there" [disfmarker] [speaker006:] Mm hmm. [speaker003:] Now, this is [disfmarker] this is assuming you've already solved that. [speaker006:] Ye yeah. [speaker003:] So it's [disfmarker] it's Fred and Mary, [speaker006:] So th [speaker003:] so the speaker would be Fred and the [disfmarker] [speaker001:] Ah! [speaker006:] Right, so the constructions might [disfmarker] of course will refer, using pronouns or whatever. [speaker001:] Mm hmm. [speaker006:] In which case they have to check to see, uh, who the, uh, speaker in here wa in order to resolve those. But when you actually say that "he walked into [disfmarker]", whatever, um, the "he" will refer to a particular [disfmarker] You [disfmarker] you will already have figured who "he" or "you", mmm, or "I", maybe is a bett better example, who "I" refers to. Um, and then you'd just be able to refer to Harry, you know, in wherever that person [disfmarker] whatever role that person was playing in the event. [speaker001:] Mmm. That's up at the reference part. [speaker006:] Yeah, yeah. [speaker001:] And down there in the speaker hearer part? [speaker006:] S so, that's [disfmarker] I think that's just [disfmarker] n for instance, Speaker is known from the situation, right? You're [disfmarker] when you hear something you're told who the speaker is [disfmarker] I mean, you know who the speaker is. 
In fact, that's kind of constraining how [disfmarker] in some ways you know this before you get to the [disfmarker] you fill in all the rest of it. I think. [speaker003:] Mmm. [speaker006:] I mean, how else would you um [disfmarker] [speaker001:] You know, uh, uh, it's [disfmarker] the speaker may [disfmarker] in English is allowed to say "I." [speaker003:] Yeah. Well, here [disfmarker] [speaker006:] Yeah. [speaker001:] Uh, among the twenty five percent most used words. [speaker006:] Right. [speaker001:] But wouldn't the "I" then set up the [disfmarker] the s s referent [disfmarker] [speaker006:] Mm hmm. [speaker001:] that happens to be the speaker this time [speaker006:] Right, right. [speaker001:] and not "they," whoever they are. [speaker006:] So [disfmarker] [speaker001:] Or "you" [disfmarker] much like the "you" could n [speaker006:] S so [disfmarker] OK, so I would say ref under referent should be something that corresponds to "I". And maybe each referent should probably have a list of way whatever, the way it was referred to. So that's "I" but, uh, uh, should we say it [disfmarker] it refers to, what? Uh, if it were "Harry" it would refer to like some ontology thing. If it were [disfmarker] if it's "I" it would refer to the current speaker, OK, which is given to be like, you know, whoever it is. [speaker001:] Well, not [disfmarker] not always. I mean, so there's "and then he said, I w" Uh huh. [speaker006:] "I" within the current world. [speaker003:] Uh [disfmarker] [speaker001:] Yeah. [speaker003:] Yeah. That's right. [speaker006:] Yeah, yeah, yeah, yeah. [speaker003:] So [disfmarker] so again, this [disfmarker] uh, this [disfmarker] this is gonna to get us into the mental space stuff and t because you know, "Fred said that Mary said [disfmarker]", and whatever. [speaker006:] Mm hmm. [speaker005:] Mmm. [speaker003:] And [disfmarker] and so we're, uh gonna have to, um, chain those as well. [speaker001:] Mm hmm. Twhhh whhh. 
But [disfmarker] [speaker006:] Mm hmm. So this entire thing is inside a world, [speaker003:] Right. [speaker006:] not just like the top part. [speaker003:] Right. I [disfmarker] I think, uh [disfmarker] [speaker006:] That's [disfmarker] [speaker001:] Mm hmm. [speaker003:] Except s it's [disfmarker] it's trickier than that because um, the reference for example [disfmarker] So he where it gets really tricky is there's some things, [speaker006:] Yeah. [speaker003:] and this is where blends and all terribl [speaker006:] Yeah. [speaker003:] So, some things which really are meant to be identified and some things which aren't. [speaker006:] Right. [speaker003:] And again, all we need for the moment is some way to say that. [speaker006:] Right. So I thought of having like [disfmarker] for each referent, having the list of [disfmarker] of the things t with which it is identified. You know, which [disfmarker] which, uh you know, you [disfmarker] you [disfmarker] you [disfmarker] [speaker003:] You could do that. [speaker006:] for instance, um [disfmarker] So, I guess, it sort of depends on if it is a referring exp if it's identifiable already or it's a new thing. [speaker005:] Mm hmm. [speaker006:] If it's a new thing you'd have to like create a structure or whatever. If it's an old thing it could be referring to, um, usually w something in a situation, right? Or something in ontology. [speaker003:] uh huh. [speaker006:] So, there's a you know, whatever, it c it could point at one of these. [speaker003:] I just had a [disfmarker] I just had an [disfmarker] an idea that would be very nice if it works. [speaker006:] For what? If it works. [speaker003:] Uh, uh, uh, I haven't told you what it is yet. [speaker006:] Mm hmm. Mmm. [speaker003:] This was my build up. An i an idea that would be nice i [speaker006:] Yeah. OK, we're crossing our fingers. [speaker003:] Right. If it worked. [speaker002:] So we're building a mental space, good. [speaker003:] Yeah. [speaker006:] OK. 
[speaker003:] Right, it was a space builder. Um, we might be able to handle context in the same way that we handle mental spaces because, uh, you have somewhat the same things going on of, uh, things being accessible or not. [speaker006:] Mm hmm. [speaker003:] And so, i [speaker006:] Yep. [speaker003:] it c it [disfmarker] it, uh I think if we did it right we might be able to get at least a lot of the same structure. [speaker006:] Use the same [disfmarker] [comment] Yep. [speaker003:] So that pulling something out of a discourse context is I think similar to other kinds of, uh, mental space phenomena. [speaker002:] I see. [speaker006:] Mm hmm. And [disfmarker] [speaker003:] Uh, I've [disfmarker] I've [disfmarker] I've never seen anybody write that up [speaker006:] And [disfmarker] [speaker003:] but maybe they did. I don't know. That may be all over the literature. [speaker006:] Yeah. So [disfmarker] so by default [disfmarker] [speaker005:] There's things like ther you know, there's all kinds of stuff like, um, in [disfmarker] I think I mentioned last time in Czech if you have a [disfmarker] a verb of saying then um, you know, you say something like [disfmarker] or [disfmarker] or I was thinking you can say something like, "oh, I thought, uh, you are a republican" or something like that. Where as in English you would say, "I thought you were". [speaker003:] Right. [speaker005:] Um, you know, sort of the past tense being copied onto the lower verb doesn't happen there, so you have to say something about, you know, tense is determined relative to current blah blah blah. [speaker006:] Mm hmm. Mm hmm. [speaker005:] Same things happens with pronouns. There's languages where, um, if you have a verb of saying then, ehhh, where [disfmarker] OK, so a situation like "Bob said he was going to the movies", where that lower subject is the same as the person who was saying or thinking, you're actually required to have "I" there. [speaker006:] Mm hmm. 
[speaker005:] Um, and it's sort of in an extended function [disfmarker] [speaker003:] Mm hmm. So we would have it be in quotes in English. [speaker005:] Yeah. [speaker006:] Right. [speaker002:] Right. [speaker005:] But it's not perceived as a quotative construction. I mean, it's been analyzed by the formalists as being a logophoric pronoun, [speaker003:] Yeah. [speaker005:] um which means a pronoun which refers back to the person who is speaking or that sort of thing, right? [speaker006:] Oh, right. Yeah, that makes sense. [speaker003:] OK. [speaker005:] Um, but [disfmarker] uh, that happens to sound like the word for "I" but is actually semantically unrelated to it. [speaker006:] Oh, no! [speaker003:] Oh, good, I love the formali [speaker005:] Um, [speaker006:] Really? [speaker005:] Yeah. [vocalsound] Yeah. [speaker006:] You're kidding. [speaker005:] There's a whole book which basically operates on this assumption. Uh, Mary Dalrymple, uh, this book, a ninety three book on, uh on pronoun stuff. [speaker006:] No, that's horrible. OK. That's horrible. [comment] OK. [speaker005:] Well, yeah. And then the same thing for ASL where, you know, you're signing and someone says something. And then, you know, so "he say", and then you sort of do a role shift. And then you sign "I, this, that, and the other". [speaker006:] Uh huh. [speaker005:] And you know, "I did this". That's also been analyzed as logophoric and having nothing to do with "I". And the role shift thing is completely left out and so on. So, I mean, the point is that pronoun references, uh, you know, sort of ties in with all this mental space stuff and so on, and so forth. [speaker006:] Uh huh. [speaker005:] And so, yeah, I mean [disfmarker] [speaker006:] Yeah. [speaker003:] So that [disfmarker] that d that does sound like it's co consistent with what we're saying, yeah. [speaker005:] Right. Yeah. [speaker006:] OK, so it's kind of like the unspecified mental spaces just are occurring in context. 
And then when you embed them sometimes you have to pop up to the h you know, depending on the construction or the whatever, um, you [disfmarker] you [disfmarker] your scope is [disfmarker] m might extend out to the [disfmarker] the base one. [speaker005:] Mm hmm. [speaker003:] Mm hmm. [speaker005:] Yeah. [speaker006:] It would be nice to actually use the same, um, mechanism since there are so many cases where you actually need it'll be one or the other. [speaker005:] Yeah. [speaker006:] It's like, oh, actually, it's the same [disfmarker] same operation. [speaker003:] Oh, OK, so this [disfmarker] this is worth some thought. [speaker006:] So. [speaker005:] It's like [disfmarker] it's like what's happening [disfmarker] that, yeah, what what's happening, uh, there is that you're moving the base space or something like that, right? [speaker006:] Yeah, yeah. [speaker005:] So that's [disfmarker] that's how Fauconnier would talk about it. [speaker006:] Mm hmm. [speaker005:] And it happens diff under different circumstances in different languages. And so, [speaker006:] Mm hmm. [speaker005:] um, things like pronoun reference and tense which we're thinking of as being these discourse-y things actually are relative to a base space which can change. [speaker006:] Mm hmm, [speaker005:] And we need all the same machinery. [speaker006:] right. [speaker001:] Mm hmm. [speaker006:] Robert. [speaker003:] Well, but, uh, this is very good actually [speaker005:] Schade ("too bad"). [speaker003:] cuz it [disfmarker] it [disfmarker] it [disfmarker] to the extent that it works, it y [speaker006:] Ties it all into it. [speaker003:] it [disfmarker] it ties together several of [disfmarker] of these things. [speaker006:] Yeah. Yep. [speaker001:] Mm hmm. Mm hmm. And I'm sure gonna read the transcript of this one. So. But the, uh, [disfmarker] [vocalsound] But it's too bad that we don't have a camera. You know, all the pointing is gonna be lost. [speaker006:] Oh, yeah. [speaker005:] Yeah.
[speaker006:] Yeah, that's why I said "point to Robert", [vocalsound] when I did it. [speaker002:] Well every time Nancy giggles it means [disfmarker] it means that it's your job. [speaker001:] Uh. Yeah. Mmm, isn't [disfmarker] I mean, I'm [disfmarker] I was sort of dubious why [disfmarker] why he even introduces this sort of reality, you know, as your basic mental space and then builds up [disfmarker] [speaker005:] Mm hmm. [speaker001:] d doesn't start with some [disfmarker] because it's so obvi it should be so obvious, at least it is to me, [comment] that whenever I say something I could preface that with "I think." [speaker005:] Yeah. [speaker001:] Nuh? So there should be no categorical difference between your base and all the others that ensue. [speaker005:] Yeah. [speaker003:] No, but there's [disfmarker] there's a Gricean thing going on there, that when you say "I think" you're actually hedging. [speaker006:] Mmm. [speaker005:] Yeah, I mean [disfmarker] [speaker006:] It's like I don't totally think [disfmarker] [speaker005:] Yeah. [speaker003:] Right. [speaker006:] I mostly think, uh [disfmarker] [speaker005:] Y [speaker001:] Yeah, it's [disfmarker] Absolutely. [speaker005:] Yeah, it's an [disfmarker] it's an evidential. It's sort of semi grammaticalized. People have talked about it this way. And you know, you can do sort of special things. You can, th put just the phrase "I think" as a parenthetical in the middle of a sentence and so on, and so forth. [speaker006:] Actually one of the child language researchers who works with T Tomasello studied a bunch of these constructions [speaker001:] Yeah. [speaker005:] So [disfmarker] [speaker006:] and it was like it's not using any kind of interesting embedded ways just to mark, you know, uncertainty or something like that. [speaker005:] Yeah. [speaker006:] So. 
[speaker001:] Yeah, but about linguistic hedges, I mean, those [disfmarker] those tend to be, um, funky anyways because they blur [disfmarker] [speaker003:] So we don't have that in here either do we? [speaker005:] Yeah. [speaker006:] Hedges? [speaker003:] Yeah, yeah. [speaker006:] Hhh, [comment] I [disfmarker] there used to be a slot for speaker, um, it was something like factivity. I couldn't really remember what it meant so I took it out. [speaker005:] Yeah. [speaker006:] But it's something [disfmarker] [speaker005:] Um. Well we were just talking about this sort of evidentiality and stuff like that, right? [speaker006:] we [disfmarker] we were talking about sarcasm too, right? Oh, oh. [speaker005:] I mean, [speaker006:] Oh, yeah, yeah, right. [speaker005:] that's what I think is, um, sort of telling you what percent reality you should give this [speaker003:] So we probably should. [speaker006:] Yeah. [speaker005:] or the, you know [disfmarker] [speaker001:] Mm hmm. [speaker003:] Confidence or something like that. [speaker005:] Yeah, and the fact that I'm, you know [disfmarker] the fact maybe if I think it versus he thinks that might, you know, depending on how much you trust the two of us or whatever, [speaker006:] Yeah. [speaker005:] you know [disfmarker] [speaker001:] Uh great word in the English language is called "about". If you study how people use that it's also [disfmarker] [speaker006:] What's the word? [speaker001:] "about." [speaker003:] About. [speaker001:] It's about [disfmarker] [speaker003:] Oh, that [disfmarker] in that use of "about", yeah. [speaker001:] clever. [speaker006:] Oh, oh, oh, as a hedge. [speaker005:] Yeah. [speaker003:] And I think [disfmarker] And I think [pause] y if you want us to spend a pleasant six or seven hours you could get George started on that. [speaker005:] He wrote a paper about thirty five years ago on that one. [speaker002:] I r I read that paper, [speaker003:] Yeah. [speaker002:] the hedges paper? [speaker005:] Yeah. 
[speaker002:] I read some of that paper actually. [speaker005:] Would you believe that that paper led directly to the development of anti-lock brakes? [speaker003:] Yeah. [speaker006:] What? [speaker003:] No. [speaker005:] Ask me about it later, I'll tell you how. When we're not on tape. [speaker006:] I'd love to know. [speaker002:] Oh, man. [speaker006:] So, and [disfmarker] and I think, uh, someone had raised like sarcasm as a complication at some point. [speaker003:] There's all that stuff. Yeah, let's [disfmarker] I [disfmarker] I don't [disfmarker] I think [disfmarker] [speaker006:] And we just won't deal with sarcastic people. [speaker003:] Yeah, I mean [disfmarker] [speaker005:] I don't really know what like [disfmarker] We [disfmarker] we don't have to care too much about the speaker attitude, right? Like there's not so many different [disfmarker] [speaker006:] Certainly not as some [disfmarker] [speaker005:] hhh, [comment] I don't know, m [speaker006:] Well, they're intonational markers I think for the most part. [speaker005:] Yeah. [speaker006:] I don't know too much about the like grammatical [disfmarker] [speaker005:] I just mean [disfmarker] There's lots of different attitudes that [disfmarker] that the speaker could have and that we can clearly identify, and so on, and so forth. [speaker006:] Yeah. [speaker005:] But like what are the distinctions among those that we actually care about for our current purposes? [speaker003:] Right. Right, so, uh, this [disfmarker] this raises the question of what are our current purposes. [speaker006:] Mm hmm. [speaker003:] Right? [speaker006:] Oh, yeah, do we have any? [speaker005:] Oh, shoot. Here it is three fifteen already. [speaker001:] Mmm. Yeah. [speaker003:] Uh, so, um, I [disfmarker] I don't know the answer but [disfmarker] but, um, it does seem that, you know, this is [disfmarker] this is coming along. I think it's [disfmarker] it's converging.
It's [disfmarker] as far as I can tell there's this one major thing we have to do which is the mental [disfmarker] the whole s mental space thing. And then there's some other minor things. [speaker006:] Mm hmm. [speaker003:] Um, and we're going to have to s sort of bound the complexity. I mean, if we get everything that anybody ever thought about you know, w we'll go nuts. [speaker005:] Yeah. [speaker003:] So we had started with the idea that the actual, uh, constraint was related to this tourist domain and the kinds of interactions that might occur in the tourist domain, assuming that people were being helpful and weren't trying to d you know, there's all sorts of [disfmarker] God knows, irony, and stuff like [disfmarker] [speaker005:] Yeah. [speaker003:] which you [disfmarker] isn't probably of much use in dealing with a tourist guide. [speaker005:] Yeah. [speaker003:] Yeah? Uh. [speaker006:] M mockery. [speaker003:] Right. Whatever. So y uh, no end of things th that [disfmarker] that, you know, we don't deal with. [speaker001:] But it [disfmarker] [speaker003:] And [disfmarker] Go ahead. [speaker001:] i isn't that part easy though because in terms of the s simspec, it would just mean you put one more set of brack brackets around it, and then just tell it to sort of negate whatever the content of that is in terms of irony [speaker005:] Yeah. [speaker003:] N no. [speaker006:] Mmm. [speaker001:] or [disfmarker] [speaker005:] Right. [speaker003:] No. [speaker006:] Maybe. Yeah, in model theory cuz the semantics is always like "speaker believes not P", [speaker003:] No. [speaker006:] you know? Like "the speaker says P and believes not P". [speaker003:] Right. [speaker005:] We have a theoretical model of sarcasm now. [speaker006:] But [disfmarker] [speaker005:] Yeah, right, I mean. [speaker006:] Right, right, but, [speaker003:] Right. No, no. 
Anyway, so [disfmarker] so, um, I guess uh, let me make a proposal on how to proceed on that, which is that, um, it was Keith's, uh, sort of job over the summer to come up with this set of constructions. Uh, and my suggestion to Keith is that you, over the next couple weeks, n [speaker005:] Mmm. [speaker003:] don't try to do them in detail or formally but just try to describe which ones you think we ought to have. [speaker005:] OK. [speaker003:] Uh, and then when Robert gets back we'll look at the set of them. [speaker005:] OK. [speaker003:] Just [disfmarker] just sort of, you know, define your space. [speaker005:] Yeah, OK. [speaker003:] And, um, so th these are [disfmarker] this is a set of things that I think we ought to deal with. [speaker005:] Yeah. [speaker003:] And then we'll [disfmarker] we'll [disfmarker] we'll go back over it and w people will [disfmarker] will give feedback on it. [speaker005:] OK. [speaker003:] And then [disfmarker] then we'll have a [disfmarker] at least initial spec of [disfmarker] of what we're actually trying to do. [speaker005:] Yeah. [speaker003:] And that'll also be useful for anybody who's trying to write a parser. [speaker005:] Mm hmm. [speaker003:] Knowing uh [disfmarker] [speaker005:] In case there's any around. [speaker006:] If we knew anybody like that. [speaker003:] Right, "who might want" et cetera. So, uh [disfmarker] [speaker005:] OK. [speaker003:] So a and we get this [disfmarker] this, uh, portals fixed and then we have an idea of the sort of initial range. And then of course Nancy you're gonna have to, uh, do your set of [disfmarker] but you have to do that anyway. [speaker006:] For the same, yeah, data. Yeah, mm hmm. [speaker003:] So [disfmarker] so we're gonna get the w we're basically dealing with two domains, the tourist domain and the [disfmarker] and the child language learning. [speaker002:] Mmm. [speaker003:] And we'll see what we need for those two. 
And then my proposal would be to, um, not totally cut off more general discussion but to focus really detailed work on the subset of things that we've [disfmarker] we really want to get done. [speaker005:] Mm hmm. Mm hmm. [speaker003:] And then as a kind of separate thread, think about the more general things and [disfmarker] and all that. [speaker005:] Mm hmm. [speaker001:] Well, I also think the detailed discussion will hit [disfmarker] you know, bring us to problems that are of a general nature [speaker003:] Uh, without doubt. [speaker001:] and maybe even [disfmarker] [speaker006:] Yeah. [speaker003:] Yeah. But what I want to do is [disfmarker] is [disfmarker] is to [disfmarker] to constrain the things that we really feel responsible for. [speaker001:] even suggest some solutions. Yeah. Mmm. [speaker005:] Mm hmm. [speaker003:] So that [disfmarker] that we say these are the things we're really gonna try do by the end of the summer and other things we'll put on a list of [disfmarker] of research problems or something, because you can easily get to the point where nothing gets done because every time you start to do something you say, "oh, yeah, but what about this case?" [speaker005:] Mm hmm. [speaker003:] This is [disfmarker] this is called being a linguist. [speaker001:] Mmm. [speaker005:] Yeah. [speaker003:] And, uh, [speaker005:] Basically. [speaker006:] Or me. [speaker003:] Huh? [speaker006:] Or me. Anyways [disfmarker] [speaker002:] There's that quote in Jurafsky and Martin where [disfmarker] where it goes [disfmarker] where some guy goes, "every time I fire a linguist the performance of the recognizer goes up." [speaker003:] Right. [speaker006:] Yeah. [speaker005:] Exactly. [speaker003:] Right. But anyway. So, is [disfmarker] is that [disfmarker] does that make sense as a, uh [disfmarker] a general way to proceed? [speaker006:] Sure, yeah. 
[speaker005:] Yeah, yeah, we'll start with that, just figuring out what needs to be done then actually the next step is to start trying to do it. [speaker003:] Exactly right. [speaker005:] Got it. [speaker001:] Mmm. Mmm. [speaker005:] OK. [speaker001:] We have a little bit of news, uh, just minor stuff. The one big [disfmarker] [speaker002:] Ooo, can I ask a [disfmarker] [speaker005:] You ran out of power. [speaker001:] Huh? [speaker002:] Can I ask a quick question about this side? [speaker006:] Yes. [speaker001:] Yeah. [speaker002:] Is this, uh [disfmarker] was it intentional to leave off things like "inherits" and [disfmarker] [speaker006:] Oops. Um, [speaker005:] No. [speaker006:] not really [disfmarker] just on the constructions, right? [speaker002:] Yeah, [speaker006:] Um, [speaker002:] like constructions can inherit from other things, am I right? [speaker006:] yeah. [speaker002:] Yeah. [speaker006:] I didn't want to think too much about that for [disfmarker] for now. [speaker002:] OK. [speaker006:] So, uh, maybe it was subconsciously intentional. [speaker003:] Yeah. Yeah, uh [disfmarker] [speaker005:] Um, yeah, there should be [disfmarker] I [disfmarker] I wanted to s find out someday if there was gonna be some way of dealing with, uh, if this is the right term, multiple inheritance, [speaker003:] yeah. [speaker005:] where one construction is inheriting from, uh from both parents, [speaker003:] Mm hmm. [speaker006:] Uh huh. Yep. [speaker005:] uh, or different ones, or three or four different ones. [speaker003:] Yeah. [speaker006:] Yeah. [speaker005:] Cuz the problem is that then you have to [disfmarker] [speaker003:] So let me [disfmarker] [speaker006:] Refer to [pause] them. [speaker005:] which of [disfmarker] you know, which are [disfmarker] how they're getting bound together. [speaker003:] Yeah, right, right, right. [speaker006:] Yeah, and [disfmarker] and there are certainly cases like that. [speaker003:] Yeah, yeah, yeah. 
[speaker006:] Even with just semantic schemas we have some examples. [speaker003:] Right. [speaker006:] So, and we've been talking a little bit about that anyway. [speaker003:] Yeah. So what I would like to do is separate that problem out. [speaker006:] Inherits. [speaker003:] So um, [speaker005:] OK. [speaker003:] my argument is there's nothing you can do with that that you can't do by just having more constructions. [speaker005:] Yeah, yes. [speaker003:] It's uglier and it d doesn't have the deep linguistic insights and stuff. [speaker005:] That's right. [speaker003:] Uh, [speaker005:] But whatever. [speaker006:] Uh, those are over rated. [speaker005:] Yeah, no, no, no no. [speaker003:] Right. [speaker005:] No, by all means, right. Uh, sure. [speaker003:] And so I [disfmarker] what I'd like to do is [disfmarker] is in the short run focus on getting it right. And when we think we have it right then saying, aha!, [speaker005:] Yeah. [speaker003:] can we make it more elegant? [speaker005:] Yeah, that's [disfmarker] Yeah. [speaker003:] Can [disfmarker] can we, uh [disfmarker] What are the generalizations, and stuff? [speaker005:] Connect the dots. Yeah. [speaker003:] But rather than try to guess a inheritance structure and all that sort of stuff before we know what we're doing. [speaker005:] Yep. Yeah. [speaker003:] So I would say in the short run we're not gonna b [speaker005:] Yeah. [speaker003:] First of all, we're not doing them yet at all. And [disfmarker] and it could be that half way through we say, "aha!, we [disfmarker] we now see how we want to clean it up." [speaker005:] Mm hmm. [speaker003:] Uh, and inheritance is only one [disfmarker] I mean, that's one way to organize it but there are others. [speaker005:] Yeah. [speaker003:] And it may or may not be the best way. I'm sorry, you had news. [speaker001:] Mmm. Oh, just small stuff.
Um, thanks to Eva on our web site we can now, if you want to run JavaBayes, uh, you could see [disfmarker] get [disfmarker] download these classes. And then it will enable you [disfmarker] she modified the GUI so it has now a m a m a button menu item for saving it into the embedded JavaBayes format. [speaker004:] Mm hmm. [speaker002:] Mmm. [speaker001:] So that's wonderful. [speaker003:] Great. [speaker001:] And, um and she, a You tested it out. Do you want to say something about that, that it works, right? [speaker004:] I was just checking like, when we wanna, um, get the posterior probability of, like, variables. [speaker001:] With the [disfmarker] [speaker004:] You know how you asked whether we can, like, just observe all the variables like in the same list? You can't. You have to make separate queries every time. [speaker001:] Uh huh. OK, that's [disfmarker] that's a bit unfortunate [speaker004:] So [disfmarker] Yeah. [speaker001:] but for the time being it's [disfmarker] it's [disfmarker] it's fine to do it [disfmarker] [speaker004:] You just have to have a long list of, you know, all the variables. [speaker001:] Yeah. [speaker004:] Basically. [speaker001:] But uh [disfmarker] [speaker006:] Uh, all the things you want to query, you just have to like ask for separately. [speaker004:] Yeah, yeah. [speaker001:] Well that's [disfmarker] probably maybe in the long term that's good news because it forces us to think a little bit more carefully how [disfmarker] how we want to get an out output. Um, but that's a different discussion for a different time. And, um, I don't know. We're really running late, so I had, uh, an idea yesterday but, uh, I don't know whether we should even start discussing. [speaker003:] W what [disfmarker] Yeah, sure, tell us what it is. [speaker001:] Um, the construal bit that, um, has been pointed to but hasn't been, um, made precise by any means, um, may w may work as follows. 
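The constraint Eva describes, that JavaBayes could not take one list of query variables and instead needed a separate query per variable, can be illustrated with a toy net. This is a pure-Python sketch by enumeration, not the JavaBayes API; the two-node net (Rain, WetGrass) and its probabilities are invented for illustration.

```python
# Toy Bayes net Rain -> WetGrass, queried by brute-force enumeration.
# The point mirrored here: each posterior is a separate query call.
from itertools import product

P_RAIN = {True: 0.2, False: 0.8}
P_WET_GIVEN_RAIN = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.1, False: 0.9}}

def joint(rain, wet):
    # Joint probability factorized along the net's single edge.
    return P_RAIN[rain] * P_WET_GIVEN_RAIN[rain][wet]

def query(variable, evidence):
    """Posterior P(variable | evidence) for ONE variable at a time."""
    scores = {True: 0.0, False: 0.0}
    for rain, wet in product([True, False], repeat=2):
        world = {"Rain": rain, "WetGrass": wet}
        if all(world[k] == v for k, v in evidence.items()):
            scores[world[variable]] += joint(rain, wet)
    total = sum(scores.values())
    return {v: s / total for v, s in scores.items()}

# One call per variable of interest; no batched query over a list.
posterior_rain = query("Rain", {"WetGrass": True})
```

With more query variables, the loop over calls is exactly the "long list of all the variables" Eva mentions.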
I thought that we would, uh [disfmarker] that the following thing would be in incredibly nice and I have no clue whether it will work at all or nothing. So that's just a tangent, a couple of mental disclaimers here. Um, imagine you [disfmarker] you write a Bayes net, um [disfmarker] [speaker006:] Bayes? [speaker001:] Bayes net, [speaker006:] OK. [speaker001:] um, completely from scratch every time you do construal. So you have nothing. Just a white piece of paper. [speaker003:] Mmm, right. [speaker001:] You consult [disfmarker] consult your ontology which will tell you a bunch of stuff, and parts, and properties, uh uh uh [speaker006:] Grout out the things that [disfmarker] that you need. [speaker003:] Right. [speaker001:] then y you'd simply write, uh, these into [disfmarker] onto your [disfmarker] your white piece of paper. And you will get a lot of notes and stuff out of there. You won't get [disfmarker] you won't really get any C P T's, therefore we need everything that [disfmarker] that configures to what the situation is, IE, the context dependent stuff. So you get whatever comes from discourse but also filtered. Uh, so only the ontology relevant stuff from the discourse [speaker006:] Mm hmm. [speaker001:] plus the situation and the user model. And that fills in your CPT's with which you can then query, um, the [disfmarker] the net that you just wrote and find out how thing X is construed as an utterance U. And the embedded JavaBayes works exactly like that, that once you [disfmarker] we have, you know, precise format in which to write it, so we write it down. You query it. You get the result, and you throw it away. And the [disfmarker] the nice thing about this idea is that you don't ever have to sit down and think about it or write about it. You may have some general rules as to how things can be [disfmarker] can be construed as what, so that will allow you to craft the [disfmarker] the [disfmarker] the initial notes. 
But it's [disfmarker] in that respect it's completely scalable. Because it doesn't have any prior, um, configuration. It's just you need an ontology of the domain and you need the context dependent modules. And if this can be made to work at all, [vocalsound] that'd be kind of funky. [speaker003:] Um, it sounds to me like you want P R [speaker001:] P R Ms uh, PRM I mean, since you can unfold a PRM into a straightforward Bayes net [disfmarker] [speaker003:] Beca because it [disfmarker] b because [disfmarker] No, no, you can't. See the [disfmarker] the critical thing about the PRM is it gives these relations in general form. So once you have instantiated the PRM with the instances and ther then you can [disfmarker] then you can unfold it. [speaker001:] Then you can. Mm hmm, yeah. No, I was m using it generic. So, uh, probabilistic, whatever, relational models. Whatever you write it. [speaker003:] Well, no, but it matters a lot [speaker001:] In [disfmarker] [speaker003:] because you [disfmarker] what you want are these generalized rules about the way things relate, th that you then instantiate in each case. [speaker001:] And then [disfmarker] then instantiate them. [speaker003:] Yeah, and that's [disfmarker] [speaker001:] That's ma maybe the [disfmarker] the way [disfmarker] the only way it works. [speaker003:] Yeah, that's the only way it could work. I [disfmarker] we have a [disfmarker] our local expert on P R uh, but my guess is that they're not currently good enough to do that. But we'll [disfmarker] we'll have to see. Uh [disfmarker] [speaker001:] But, uh, [speaker003:] Yes. This is [disfmarker] that's [disfmarker] that would be a good thing to try. It's related to the Hobbs abduction story in that you th you throw everything into a pot and you try to come up with the, uh [disfmarker] [speaker001:] Except there's no [disfmarker] no theorem prover involved. [speaker006:] Best explanation. 
[speaker003:] No, there isn't a theorem prover but there [disfmarker] but [disfmarker] but the, um, The cove the [disfmarker] the P R Ms are like rules of inference and you're [disfmarker] you're coupling a bunch of them together. [speaker001:] Mm hmm, yeah. [speaker003:] And then ins instead of proving you're trying to, you know, compute the most likely. Uh [disfmarker] Tricky. But you [disfmarker] yeah, it's a good [disfmarker] it's a [disfmarker] it's a good thing to put in your thesis proposal. [speaker001:] What's it? [speaker003:] So are you gonna write something for us before you go? [speaker001:] Yes. Um. [speaker003:] Oh, you have something. [speaker001:] In the process thereof, or whatever. [speaker003:] OK. So, what's [disfmarker] what [disfmarker] when are we gonna meet again? [speaker006:] When are you leaving? Thursday, [speaker001:] Fri uh, [speaker006:] Friday? [speaker004:] Fri [speaker001:] Thursday's my last day here. [speaker006:] OK. [speaker003:] Yeah. [speaker001:] So [disfmarker] I would suggest as soon as possible. Do you mean by we, the whole ben gang? [speaker003:] N no, I didn't mean y just the two of us. We [disfmarker] obviously we can [disfmarker] we can do this. But the question is do you want to, for example, send the little group, uh, a draft of your thesis proposal and get, uh, another session on feedback on that? Or [disfmarker] [speaker001:] We can do it Th Thursday again. Yeah. [speaker005:] Fine with me. Should we do the one PM time for Thursday since we were on that before or [disfmarker]? [speaker001:] Sure. [speaker005:] OK. [speaker003:] Alright. [speaker004:] Hmm. [speaker001:] Thursday at one? I can also maybe then sort of run through the, uh [disfmarker] the talk I have to give at EML which highlights all of our work. [speaker003:] OK. [speaker001:] And we can make some last minute changes on that. [speaker003:] OK. [speaker002:] You can just give him the abstract that we wrote for the paper. 
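The throwaway-net construal cycle being proposed here, start from a blank page, pull candidate nodes from the ontology, let the context-dependent modules fill in the numbers, query once, and discard, can be mocked up in a few lines. Everything below (the toy ontology, the context cues, the additive weighting) is an invented illustration of that cycle, not the proposed system or a PRM.

```python
# Hedged sketch of the build / fill / query / discard construal loop.

def build_net(ontology, entity):
    # "White piece of paper": nodes are just the construals the
    # ontology licenses for this entity.
    return list(ontology.get(entity, []))

def fill_cpts(nodes, context):
    # Context (discourse, situation, user model) supplies the numbers;
    # here, a flat prior nudged by whatever the context mentions.
    weights = {n: 1.0 for n in nodes}
    for _cue, node, boost in context:
        if node in weights:
            weights[node] += boost
    total = sum(weights.values())
    return {n: w / total for n, w in weights.items()}

def construe(ontology, entity, context):
    nodes = build_net(ontology, entity)   # written from scratch each time
    cpts = fill_cpts(nodes, context)      # context fills in the CPTs
    return max(cpts, key=cpts.get)        # query once, then throw it away

# Invented example data, loosely in the tourist domain.
ONTOLOGY = {"castle": ["landmark", "building", "vista-point"]}
CONTEXT = [("take a picture", "vista-point", 2.0)]
```

The PRM point in the discussion is that the general rules would live at the level of `fill_cpts`, instantiated afresh for each entity rather than hand-written per net.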
[speaker003:] That that'll tell him exactly what's going on. Yeah, that [disfmarker] [speaker006:] Can we do [disfmarker] can we do one thirty? [speaker003:] Alright. [speaker001:] No. [speaker006:] Oh, you already told me no. One, [speaker001:] But we can do four. [speaker006:] OK, it's fine. I can do one. It's fine. It's fine. [speaker001:] One or four. I don't care. [speaker005:] To me this is equal. I don't care. [speaker001:] If it's equal for all? What should we do? [speaker006:] Yeah, it's fine. Fine. [speaker001:] Four? [speaker006:] Yeah [disfmarker] no, no, no, uh, I don't care. It's fine. [speaker001:] It's equal to all of us, so you can decide one or four. [speaker002:] The pressure's on you Nancy. [speaker001:] Liz actually said she likes four because it forces the Meeting Recorder people to cut, you know [disfmarker] the discussions short. [speaker006:] OK. OK, four. OK? OK. I am. [speaker005:] Well, if you insist, then. [speaker002:] We're, I mean [pause] we [disfmarker] We didn't have a house before. [speaker004:] Yeah. Yeah. [speaker005:] OK. [speaker004:] We're on again? OK. [speaker001:] Mm hmm. That is really great. [speaker008:] Yeah, so if [pause] uh [disfmarker] [pause] So if anyone hasn't signed the consent form, please do so. [speaker004:] OK [speaker002:] Oh, yeah! [speaker001:] That's terrific. [speaker008:] The new consent form. The new and improved consent form. [speaker001:] Now you won't be able to walk or ride your bike, huh? [speaker006:] Uh. [speaker004:] OK. OK. [speaker002:] Right. [speaker008:] And uh, shall I go ahead and do some digits? [speaker004:] Uh, we were gonna do that at the end, remember? [speaker008:] OK, whatever you want. [speaker004:] Yeah. Just [disfmarker] just to be consistent, from here on in at least, that [disfmarker] [pause] that we'll do it at the end. [speaker002:] The new consent form. [speaker008:] It's uh [disfmarker] [pause] Yeah, it doesn't matter. OK. 
[speaker004:] OK [speaker006:] Testing, one, two, three. [speaker004:] Um Well, it ju I mean it might be that someone here has to go, and [disfmarker] Right? That was [disfmarker] that was sort of the point. So, uh [pause] I had asked actually anybody who had any ideas for an agenda [pause] to send it to me and no one did. So, [speaker008:] So we all forgot. [speaker004:] Uh, [speaker006:] From last time I wanted to [disfmarker] Uh [pause] [pause] The [disfmarker] An iss uh [pause] one topic from last time. [speaker004:] Right, s OK, so one item for an agenda is uh [pause] Jane has some uh [vocalsound] uh some research to talk about, research issues. Um [pause] and [pause] Uh, Adam has some short research issues. [speaker008:] And I have some [pause] short research issues. [speaker004:] Um, I have a [pause] list of things that I think were done over the last three months I was supposed to [vocalsound] [vocalsound] send off, uh [pause] and, um [pause] I [disfmarker] I sent a note about it to uh [disfmarker] to Adam and Jane but I think I'll just run through it [pause] also and see if someone thinks it's inaccurate or [pause] uh insufficient. [speaker001:] A list that you have to send off to who? [speaker004:] Uh, to uh uh, IBM. [speaker001:] Oh. [speaker004:] OK. They're, you know [disfmarker] So. Um, So, uh [pause] so, I'll go through that. Um, [pause] And, Anything else? [pause] anyone wants to talk about? No. OK. [speaker001:] What about the, um [disfmarker] your trip, yesterday? [speaker004:] Um. Sort of off topic I guess. Cuz that's [pause] Cuz that was all [disfmarker] all about the, uh [disfmarker] [speaker001:] Oh, OK. [speaker004:] I [disfmarker] I [disfmarker] I can chat with you about that [pause] off line. That's another thing. Um, And, Anything else? Nothing else? 
Uh, there's a [disfmarker] I mean, there is a [disfmarker] [pause] a, um [pause] uh [pause] telephone call tomorrow, [pause] which will be a conference call [pause] that some of us are involved in [pause] for uh a possible proposal. Um, we'll talk [disfmarker] we'll talk about it next week if [disfmarker] if something [disfmarker] [speaker008:] Do you want me to [pause] be there for that? I noticed you C C'd me, but I wasn't actually a recipient. I didn't quite know what to make of that. [speaker004:] Uh Well, we'll talk [disfmarker] talk about that after our meeting. [speaker008:] OK. [speaker004:] OK. Uh, OK. So it sounds like the [disfmarker] the three main things that we have to talk about are, uh this list, uh Jane and [disfmarker] Jane and Adam have some research items, and, other than that, anything, [pause] as usual, [pause] anything goes beyond that. OK, uh, Jane, since [disfmarker] since you were sort of cut off last time why don't we start with yours, make sure we get to it. [speaker006:] OK, it's [disfmarker] it's very [pause] eh [disfmarker] it's [pause] very brief, I mean [disfmarker] just let me [disfmarker] just hand these out. Oops. [speaker008:] Is this the same as the email or different? [speaker006:] It's slightly different. [speaker003:] Thanks. [speaker006:] I [disfmarker] [pause] basically the same. [speaker008:] OK. [speaker001:] Same idea? [speaker006:] But, same idea. So, if you've looked at this you've seen it before, so [pause] Basically, [vocalsound] um [pause] as you know, uh [pause] part of the encoding [pause] includes a mark that indicates [pause] an overlap. It's not indicated [pause] with, um [pause] uh, tight precision, it's just indicated that [disfmarker] OK, so, It's indicated to [disfmarker] to [disfmarker] so the people know [pause] what parts of sp which [disfmarker] which stretches of speech were in the clear, versus being overlapped by others. 
So, I [pause] used this mark and, um [pause] and, uh [pause] uh, [pause] divided the [disfmarker] I wrote a script [pause] which divides things into individual minutes, [pause] of which we ended up with forty [pause] five, and a little bit. And, uh [pause] you know, minute zero, of course, is the first minute up to [pause] sixty seconds. [speaker003:] OK. [speaker006:] And, um [pause] What you can see is the number of overlaps [pause] and then [pause] to the right, [pause] whether they involve two speakers, three speakers, or more than three speakers. And, [pause] um [pause] and, what I was looking for sp sp specifically was the question of [pause] whether they're distributed evenly throughout or whether they're [pause] bursts of them. Um. And [pause] it looked to me as though [disfmarker] uh, you know [disfmarker] y this is just [disfmarker] [pause] eh [disfmarker] eh, this would [disfmarker] this is not statistically [pause] verified, [pause] but it [pause] did look to me as though there are bursts throughout, rather than being [pause] localized to a particular region. The part down there, where there's the maximum number of [disfmarker] [pause] of, um [pause] overlaps is an area where we were discussing [pause] [vocalsound] whether or not it would be useful to indi to s to [pause] code [pause] stress, [pause] uh, sentence stress [pause] as possible indication of, uh [pause] information retrieval. So it's like, [pause] you know, rather, [pause] lively discussion there. [speaker004:] What was [disfmarker] what's the [disfmarker] the parenthesized stuff [pause] that says, like [disfmarker] e the first one that says six overlaps and then two point eight? [speaker006:] Oh, th [vocalsound] [pause] That's the per cent. [speaker004:] Mmm. [speaker006:] So, six is, uh [pause] two point eight percent [pause] of the total number of overlaps in the [pause] session. [speaker005:] Mm hmm. [speaker004:] Ah. [speaker003:] Mm hmm. 
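The per-minute tabulation Jane describes, a script that buckets overlap marks into one-minute bins and reports each bin's count and share of the session total, might look like the sketch below. The record format (start time in seconds, number of speakers) is an assumption; the real script worked from the transcript's overlap marks.

```python
# Bucket overlaps into one-minute bins and report counts plus
# percent-of-total, as in the handout's left column.
from collections import Counter

def overlaps_per_minute(overlaps):
    """overlaps: list of (start_seconds, n_speakers) tuples.
    Returns {minute: (count, percent_of_session_total)}."""
    per_min = Counter(int(start) // 60 for start, _ in overlaps)
    total = sum(per_min.values())
    return {m: (c, round(100.0 * c / total, 1))
            for m, c in per_min.items()}

# Invented sample: six overlaps spread over the first three minutes.
sample = [(5, 2), (12, 2), (59, 3), (70, 2), (130, 2), (135, 4)]
table = overlaps_per_minute(sample)
```

Minutes with no entry in the result are the no-overlap minutes noted at the end of the session.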
[speaker006:] At the very end, this is when people were, [pause] you know, packing up to go basically, there's [pause] this final stuff, I think we [disfmarker] [pause] I don't remember where the digits [pause] fell. I'd have to look at that. But [pause] the final three there are no overlaps at all. And [pause] couple times there [pause] are not. So, i it seems like it goes through bursts [pause] but, um [pause] that's kind of it. [speaker005:] Mm hmm. Mm hmm. [speaker006:] Now, [pause] Another question is [pause] is there [disfmarker] are there [pause] individual differences in whether you're likely to be overlapped with or to overlap with others. And, again [pause] I want to emphasize this is just one [pause] particular [pause] um [disfmarker] [pause] one particular meeting, and also there's been no statistical testing of it all, but [pause] I, um [pause] I took the coding of [pause] the [disfmarker] I, you know, my [disfmarker] I had this script [pause] figure out, um [pause] who [pause] was the first speaker, who was the second speaker involved in a two person overlap, I didn't look at the ones involving three or more. And, um [pause] [pause] this is how it breaks down in the individual cells of [pause] who tended to be overlapping most often with who [disfmarker] who else, and [pause] if you look at the marginal totals, which is the ones on the right side and across the bottom, you get [pause] the totals for an individual. So, [vocalsound] um [pause] If you [pause] look at the bottom, those are the, um [pause] numbers of overlaps in which [pause] um [pause] Adam was involved as the person doing the overlapping and if you look [disfmarker] I'm sorry, but you're o alphabetical, that's why I'm choosing you And then if you look across the right, [pause] then [pause] that's where he was the [pause] person who was the sp first speaker in the pair [pause] and got overlap overlapped with by somebody. [speaker001:] Hmm! [speaker005:] Mm hmm. 
[speaker006:] And, [pause] then if you look down in the summary table, [pause] then you see that, um [pause] th they're differences in [pause] whether a person got overlapped with or [pause] overlapped by. [speaker008:] Is this uh [pause] just raw counts or is it [disfmarker] [speaker006:] Raw counts. [speaker008:] So it would be interesting to see how much each person spoke. [speaker002:] Mm hmm. [speaker006:] Yes, very true [disfmarker] very true [speaker005:] Yeah [vocalsound] Yeah [speaker003:] Yeah. [speaker008:] Normalized to how much [disfmarker] [speaker006:] it would be good to normalize with respect to that. Now on the table I did [pause] take one step toward, uh [pause] away from the raw frequencies by putting, [pause] uh [pause] percentages. So that the percentage of time [pause] of the [disfmarker] of the times that a person spoke, [pause] what percentage [pause] eh, w so. Of the times a person spoke and furthermore was involved in a two two person overlap, [vocalsound] [vocalsound] what percentage of the time were they the overlapper and what percent of the time were they th the overlappee? And there, it looks like you see some differences, um, [pause] that some people tend to be overlapped [pause] with more often than they're overlapped, but, of course, uh i e [vocalsound] this is just one meeting, [pause] uh [pause] there's no statistical testing involved, and that would be [pause] required for a [disfmarker] for a finding [pause] of [pause] any [pause] kind of [pause] scientific [pause] reliability. [speaker004:] S so, i it would be statistically incorrect to conclude from this that Adam talked too much or something. [speaker008:] No [disfmarker] no actually, that would be actually statistically correct, [speaker006:] No, no, no. [speaker004:] Yeah, yeah. [speaker005:] Yeah, yeah. [speaker008:] but [speaker005:] Yeah, yeah. [speaker006:] Yeah, that's right. That's right. [speaker004:] Yeah. 
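The two-person-overlap table with its marginal totals, and the summary percentages splitting each speaker's overlaps into overlapper versus overlappee shares, can be sketched as follows. Speaker names and counts are placeholders; as noted in the discussion, a real analysis would also normalize by how much each person spoke.

```python
# Pairwise overlap table: each record is (first_speaker, overlapper).
# Marginals give per-person totals; pct gives the overlapper share.
from collections import defaultdict

def overlap_table(pairs):
    cell = defaultdict(int)
    for first, second in pairs:
        cell[(first, second)] += 1
    speakers = sorted({s for p in pairs for s in p})
    # Row marginal: times this speaker was overlapped with (overlappee).
    overlapped = {s: sum(cell[(s, o)] for o in speakers) for s in speakers}
    # Column marginal: times this speaker did the overlapping.
    overlapper = {s: sum(cell[(f, s)] for f in speakers) for s in speakers}
    pct = {s: 100.0 * overlapper[s] / (overlapper[s] + overlapped[s])
           for s in speakers if overlapper[s] + overlapped[s]}
    return overlapped, overlapper, pct

# Invented sample data with anonymized speakers.
pairs = [("A", "B"), ("A", "B"), ("B", "A"), ("C", "A")]
overlapped, overlapper, pct = overlap_table(pairs)
```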
[speaker006:] And I'm [pause] you know, I'm [disfmarker] I don't see a point of singling people out, [speaker004:] Excuse me. [speaker006:] now, this is a case where obviously [disfmarker] [speaker004:] B I [disfmarker] I [disfmarker] I rather enjoyed it, but [disfmarker] but this [speaker001:] But the numbers speak for themselves. [speaker005:] He's [disfmarker] Yeah, yeah, yeah. [speaker008:] Yes, that's right, so you don't nee OK. [speaker006:] Well, [vocalsound] you know, it's like [disfmarker] I'm not [disfmarker] I'm not saying on the tape who did [pause] better or worse because [pause] I don't think that it's [disfmarker] [speaker004:] Sure. [speaker006:] I [pause] you know, and [disfmarker] and th here's a case where of course, human subjects people would say be sure that you anonymize the results, [pause] and [disfmarker] and, so, might as well do this. [speaker008:] Yeah, when [disfmarker] this is what [disfmarker] This is actually [disfmarker] when Jane sent this email first, is what caused me to start thinking about anonymizing the data. [speaker004:] Yeah. [speaker006:] Well, fair enough. Fair enough. And actually, [pause] you know, the point is not about an individual, it's the point about [pause] tendencies toward [pause] you know, different styles, different speaker styles. [speaker004:] Yeah. Oh sure. 
[speaker006:] And [pause] it would be, you know [pause] of course, [pause] there's also the question of what type of overlap was this, and w what were they, and i and I [disfmarker] and I know that I can distinguish at least three types and, probably more, I mean, the [vocalsound] general [pause] [vocalsound] cultural idea which w uh, the conversation analysts originally started with in the seventies was that we have this [vocalsound] strict model where politeness involves that you let the person finish th before you start talking, and [pause] and you know, I mean, [pause] w we know that [disfmarker] [pause] an and they've loosened up on that too s in the intervening time, that [pause] that that's [disfmarker] that's viewed as being [pause] a culturally relative thing, I mean, [pause] that you have the high involvement style from the East Coast where people [vocalsound] will overlap often as an indication of interest in what the other person is saying. [speaker008:] Uh huh. [speaker006:] And Yeah, exactly! [speaker002:] Exactly! [speaker006:] Well, there you go. [speaker005:] Yeah [speaker006:] Fine, that's alright, that's OK. And [disfmarker] and, [pause] you know, in contrast, so Deborah [disfmarker] d and also Deborah Tannen's [pause] thesis she talked about differences of these types, [pause] that they're just different styles, and it's um [pause] you [disfmarker] you can't impose a model of [disfmarker] [pause] there [disfmarker] of the ideal being no overlaps, and [pause] you know, conversational analysts also agree with that, so it's [pause] now, universally [pause] a ag agreed with. And [disfmarker] and, als I mean, I can't say universally, but anyway, the people who used to say it was strict, [pause] um [pause] now, uh [pause] don't. I mean they [disfmarker] they [pause] also [pause] [vocalsound] you know, uh [pause] uh, ack acknowledge the influence of [pause] sub of subcultural norms and [pause] cross cultural norms and things. 
So, um Then it beco [pause] though [disfmarker] so [disfmarker] just [disfmarker] just superficially to give [pause] um [pause] a couple ideas of the types of overlaps involved, I have at the bottom several that I noticed. So, [pause] [vocalsound] uh, there are backchannels, like what Adam just did now and, um [pause] [vocalsound] um, anticipating the end of a question and [pause] simply answering it earlier, and there are several of those in this [disfmarker] in these data where [disfmarker] [speaker002:] Mm hmm. [speaker006:] because we're [pause] people who've talked to each other, um [pause] we know [pause] basically what the topic is, what the possibilities are and w and we've spoken with each other so we know basically what the other person's style is likely to be and so [vocalsound] and t there are a number of places where someone just answered early. No problem. And places [pause] also which I thought were interesting, where two or more people gave exactly th the same answer in unison [disfmarker] different words of course but you know, the [disfmarker] basically, [pause] you know everyone's saying "yes" or [disfmarker] you know, or ev even more sp specific than that. So, uh, the point is that, um [pause] [vocalsound] overlap's not necessarily a bad thing and that it would be im [pause] i useful to subdivide these further and see if there are individual differences in styles with respect to the types involved. And that's all I wanted to say on that, [pause] unless people have questions. [speaker004:] Well, of course th the biggest, [pause] um [pause] result here, which is one we've [disfmarker] [pause] we've talked about many times and isn't new to us, but which I think would be interesting to show someone who isn't familiar with this [vocalsound] [pause] is just the sheer number of overlaps. [speaker008:] Yep. [speaker005:] Yes, yes! [speaker006:] Oh, OK [disfmarker] interesting. [speaker004:] That [disfmarker] that [disfmarker] Right? 
[pause] that [disfmarker] that, um [speaker005:] Yeah. [speaker004:] here's a relatively short meeting, it's a forty [disfmarker] [pause] forty plus minute [pause] [vocalsound] meeting, and not only were there two hundred and fifteen overlaps [vocalsound] [pause] but, [pause] uh I think there's one [disfmarker] [pause] one minute there where there [disfmarker] where [disfmarker] where there wasn't any overlap? [speaker008:] Hundred ninety seven. [speaker004:] I mean, it's [disfmarker] [pause] [vocalsound] uh throughout this thing? [speaker006:] Well, at the bottom, you have the bottom three. [speaker004:] It's [disfmarker] You have [disfmarker] [speaker008:] S n are [disfmarker] [speaker001:] It'd be interesting [disfmarker] [speaker005:] Yeah. [speaker006:] So four [disfmarker] four minutes all together with none [disfmarker] none. [speaker004:] Oh, so the bottom three did have s stuff going on? [speaker001:] But it w [speaker006:] Yes, uh huh. [speaker004:] There was speech? [speaker006:] Yeah. But just no overlaps. [speaker004:] OK, so if [disfmarker] the [disfmarker] this [disfmarker] [speaker001:] It'd be interesting to see what the total amount of time is in the overlaps, versus [disfmarker] [speaker006:] Yes, exactly and that's [disfmarker] that's where Jose's pro project comes in. [speaker007:] I was about to ask [disfmarker] [speaker005:] Yeah, yeah, I h I have this that infor I have th that information now. [speaker003:] Yeah. [speaker001:] Yeah. [speaker002:] Hmm. [speaker004:] Oh, about how much is it? [speaker005:] The [disfmarker] the duration of eh [disfmarker] of each of the overlaps. [speaker004:] O oh, what's [disfmarker] what's the [disfmarker] what's the average [pause] length? 
[speaker005:] M I [disfmarker] I haven't averaged it now but, uh [pause] I [disfmarker] I will, uh I will do the [disfmarker] the study of the [disfmarker] [pause] with the [disfmarker] with the program with the [disfmarker] uh, the different, uh [pause] the, nnn, [pause] distribution of the duration of the overlaps. [speaker004:] You don't know? OK, you [disfmarker] you don you don't have a feeling for roughly how [pause] much it is? [speaker005:] mmm, [pause] Because the [disfmarker] the uh, [@ @] is [@ @]. [speaker004:] Yeah. [speaker005:] The duration is, uh [pause] the variation [disfmarker] the variation of the duration is uh, very big on the dat [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker006:] I suspect that it will also differ, [pause] depending on the type of overlap [pause] involved. [speaker005:] but eh [disfmarker] Yeah. [speaker006:] So backchannels will be very brief [speaker004:] Oh, I'm sure. [speaker005:] Because, on your surface eh [pause] a bit of zone of overlapping with the duration eh, overlapped and another very very short. [speaker006:] and [disfmarker] [speaker004:] Yeah. Yeah. [speaker005:] Uh, i probably it's very difficult to [disfmarker] to [disfmarker] because the [disfmarker] the overlap is, uh on is only the [disfmarker] in the final "S" of the [disfmarker] of the [disfmarker] the fin the [disfmarker] the end [disfmarker] the end word of the, um [pause] previous speaker [vocalsound] with the [disfmarker] the next word of the [disfmarker] the new speaker. [speaker001:] Mm hmm. [speaker005:] Um, I considered [pause] that's an overlap but it's very short, it's an "X" with a [disfmarker] and [disfmarker] the idea is probably, eh [pause] when eh [disfmarker] when eh, we studied th th that zone, eh [pause] [pause] eh, we h we have eh eh [pause] confusion with eh eh noise. [speaker001:] Mm hmm. [speaker005:] With eh [pause] that fricative sounds, but uh [pause] I have new information but I have to [disfmarker] to study. 
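The duration study Jose describes here could be sketched as follows. The interval format (start, end) in seconds and the coarse bin boundaries are assumptions for illustration only, not the actual corpus encoding:

```python
# Hypothetical sketch of the overlap-duration study discussed above: given
# labeled overlap intervals (start_sec, end_sec), summarize the distribution.
# The data format and bin edges are illustrative assumptions.

def overlap_duration_stats(intervals):
    """Return (mean, min, max) duration and a coarse histogram for
    a list of (start_sec, end_sec) overlap intervals."""
    durations = [end - start for start, end in intervals]
    mean = sum(durations) / len(durations)
    # Coarse bins: very short (e.g. only a clipped final fricative),
    # backchannel-length, and longer competitive overlaps.
    bins = {"<0.25s": 0, "0.25-1s": 0, ">=1s": 0}
    for d in durations:
        if d < 0.25:
            bins["<0.25s"] += 1
        elif d < 1.0:
            bins["0.25-1s"] += 1
        else:
            bins[">=1s"] += 1
    return mean, min(durations), max(durations), bins
```

As the speakers note, the variance is expected to be large, so the histogram is likely more informative than the mean alone.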
[speaker004:] Yeah. Yeah, but I [disfmarker] I'd [disfmarker] [vocalsound] u [speaker007:] Can I [disfmarker] [speaker004:] go ahead. [speaker006:] Yeah. [speaker007:] You split this by minute, um [pause] so if an overlap straddles [pause] the boundary between two minutes, that counts towards both of those minutes. [speaker006:] Yes. Mm hmm. Actually, um [vocalsound] um [pause] actually not. Uh, so [pause] le let's think about the case where [vocalsound] A starts speaking [pause] [vocalsound] and then B overlaps with A, [pause] and then the minute boundary happens. And let's say that [vocalsound] after that minute boundary, [vocalsound] um [pause] B is still speaking, [pause] and A overlaps [pause] with B, that would be a new overlap. But otherwise [pause] um, let's say B [pause] comes to the conclusion of [disfmarker] of that turn without [pause] anyone overlapping with him or her, in which case there would be no overlap counted in that second minute. [speaker007:] No, but suppose they both talk simultaneously [vocalsound] [pause] both a [disfmarker] a portion of it is in minute one and another portion of minute two. [speaker006:] OK. In that case, um [pause] my c [pause] the coding that I was using [disfmarker] [vocalsound] since we haven't, [pause] uh [pause] incorporated Adam's, uh [pause] coding of overlap yets, the coding of Yeah, "yets" is not a word. Uh [vocalsound] since we haven't incorporated Adam's method of handling overl overlaps yet [vocalsound] um [pause] then [pause] that would have fallen through the cra cracks. It would be an underestimate of the number of overlaps because, um [pause] I wou I wouldn't be able to pick it up from the way it was [pause] encoded so far. We just haven't done th the precise second to sec you know, [pause] second to second coding of when they occur. [speaker004:] I I I I I'm [disfmarker] I'm [disfmarker] I'm confused now. So l l let me restate what I thought Andreas was saying and [disfmarker] and see. 
[speaker006:] Uh huh. [speaker004:] Let's say that in [disfmarker] in second fifty seven [pause] [vocalsound] of one minute, [pause] you start talking and I start talking and [pause] we ignore each other and keep on talking for six seconds. [speaker006:] Yep. OK. Mm hmm. [speaker004:] So we go over [disfmarker] So we were [disfmarker] we were talking over one another, [pause] and it's just [disfmarker] in each case, it's just sort of one [pause] interval. [speaker006:] Mm hmm? [speaker004:] Right? So, um [pause] we talked over the minute boundary. Is this [pause] considered as one overlap in each of the minutes, the way you have done this. [speaker006:] No, it wouldn't. It would be considered as an overlap in the first one. [speaker004:] OK, so that's [pause] good, i I think, in the sense that I think Andreas meant the question, [speaker002:] That's [disfmarker] [pause] that's good, yeah, cuz the overall rate is [disfmarker] [speaker007:] Yeah. [speaker008:] Statistical. [speaker002:] Yeah. [speaker008:] Yep. [speaker006:] Mm hmm. [speaker007:] Other otherwise you'd get double counts, here and there. [speaker004:] right? [speaker006:] Yeah. They're not double counted. [speaker005:] Yeah. [speaker002:] Ah but, yeah. [speaker007:] And then it would be harder [disfmarker] [speaker004:] Yeah. [speaker005:] Yeah. [speaker006:] I should also say I did a simplifying, uh [pause] count in that [vocalsound] if A was speaking [pause] B overlapped with A and then A came back again and overlapped with B again, I [disfmarker] I didn't count that as a three person overlap, I counted that as a two person overlap, [pause] and it was A being overlapped with by B. [speaker008:] Mm hmm. [speaker006:] Because the idea was the first speaker [pause] had the floor [pause] and the second person [pause] started speaking and then the f the first person reasserted the floor [pause] kind of thing. These are simplifying assumptions, [speaker002:] Yeah.
[speaker006:] didn't happen very often, there may be like three overlaps affected that way in the whole thing. [speaker008:] I want to go back and listen to minute forty one. [speaker006:] Yeah, yeah. [speaker008:] Cuz i i I find it interesting that there were a large number of overlaps and they were all two speaker. [speaker004:] Yeah. [speaker008:] I mean what I thought [disfmarker] what I would have thought in [pause] is that when there were a large number of overlaps, it was because everyone was talking at once, [vocalsound] but uh apparently not. [speaker006:] That's interesting. That's interesting. [speaker005:] Yeah. Yeah. Mmm. [speaker008:] That's really neat. [speaker006:] Yeah, there's a lot of backchannel, a lot o a lot of [disfmarker] [speaker004:] Yeah. [speaker008:] This is [pause] really interesting data. [speaker006:] Yeah, it is. I think so too, I think [disfmarker] [speaker002:] I think what's really interesting though, it is [pause] before d [pause] saying "yes, meetings have a lot of overlaps" is to actually find out how many more [pause] we have than two party. Cuz in two party conversations, like Switchboard, there's an awful lot too if you just look at backchannels, if you consider those overlaps? it's also ver it's huge. It's just that people haven't been [pause] looking at that because they've been doing single channel processing for [pause] speech recognition. [speaker008:] Mm hmm. [speaker004:] Mm hmm? [speaker002:] So, the question is, you know, how many more overlaps [pause] [vocalsound] do you have [pause] of, say the two person type, by adding more people. to a meeting, and it may be a lot more but i it may [disfmarker] [pause] it may not be. [speaker004:] Well, but see, I find it interesting even if it wasn't any more, [speaker002:] So. 
[speaker004:] because [pause] since we were dealing with this full duplex sort of thing in Switchboard where it was just all separated out [vocalsound] we just [disfmarker] everything was just nice, [speaker002:] Mm hmm? [speaker004:] so that [disfmarker] so the issue is in [disfmarker] in a situation [pause] where th that's [disfmarker] [speaker002:] Well, it's not really [pause] "nice". It depends what you're doing. So if you were actually [pause] [vocalsound] having, uh [disfmarker] depends what you're doing, if [disfmarker] Right now we're do we have individual mikes on the people in this meeting. [speaker004:] Mm hmm? [speaker002:] So the question is, you know [disfmarker] "are there really more overlaps happening than there would be in a two person [pause] party". [speaker004:] Let [disfmarker] let m let me rephrase what I'm saying cuz I don't think I'm getting it across. [speaker002:] And [disfmarker] and there well may be, but [disfmarker] [speaker004:] What [disfmarker] what I [disfmarker] what [disfmarker] I shouldn't use words like "nice" because maybe that's too [disfmarker] i too imprecise. But what I mean is [vocalsound] that, um in Switchboard, [pause] despite the many [disfmarker] many other problems that we have, one problem that we're not considering is overlap. And what we're doing now is, [pause] aside from the many other differences in the task, we are considering overlap and one of the reasons that we're considering it, [pause] you know, one of them not all of them, one of them is [vocalsound] that w uh at least, [pause] you know I'm very interested in [vocalsound] the scenario in which, uh [pause] both people talking are pretty much equally [pause] audible, [vocalsound] and from a single microphone. And so, [pause] in that case, it does get mixed in, [vocalsound] and it's pretty hard to jus [pause] to just ignore it, to just do processing on one and not on the other. 
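The per-minute counting convention settled earlier in the discussion (an overlap is credited only to the minute in which it starts, so one that straddles a minute boundary is never double counted) might be sketched like this; the input format, a list of overlap start times in seconds, is an assumption:

```python
# Sketch of the per-minute counting convention described in the discussion:
# an overlap straddling a minute boundary is credited only to the minute in
# which it starts, so it is never double counted. Input format is assumed.

from collections import Counter

def overlaps_per_minute(overlap_starts):
    """Count overlaps per minute, keyed by the (zero-based) minute
    in which each overlap begins.

    overlap_starts: iterable of overlap start times in seconds.
    """
    return dict(Counter(int(start // 60) for start in overlap_starts))
```

For example, an overlap beginning at second 57 that runs past the 60-second boundary would be counted once, in minute 0.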
[speaker002:] I [disfmarker] I agree that it's an issue here [pause] but it's also an issue for Switchboard and if you [pause] think of meetings [pause] being recorded over the telephone, which I think, you know, this whole point of studying meetings isn't just to have people in a room but to also have [pause] meetings over different phone lines. [speaker004:] Mm hmm. [speaker002:] Maybe far field mike people wouldn't be interested in that but all the dialogue issues still apply, [speaker004:] Mm hmm. [speaker002:] so if each of us was calling and having [pause] [vocalsound] a meeting that way [pause] you kn you know like a conference call. And, just the question is, [pause] y you know, in Switchboard [pause] you would think that's the simplest case of a meeting of more than one person, [speaker004:] Mm hmm. [speaker002:] and [pause] [vocalsound] I'm wondering how much more [pause] overlap [pause] of [pause] the types that [disfmarker] that Jane described happen with more people present. So it may be that having three people [pause] [vocalsound] is very different from having two people or it may not be. [speaker004:] That's an important question to ask. I think what I'm [disfmarker] [pause] All I'm s really saying is that I don't think we were considering that in Switchboard. [speaker002:] So. Not you, me. [speaker004:] Were you? [speaker008:] Though it wasn't [pause] in the design. [speaker002:] But uh [disfmarker] but [disfmarker] but [speaker004:] Were you [disfmarker] were you [disfmarker] were you [disfmarker] were you measuring it? I mean, w w were [disfmarker] [speaker002:] There [disfmarker] there's actually to tell you the truth, the reason why it's hard to measure is because of so, from the point of view of studying dialogue, I mean, which [pause] Dan Jurafsky and Andreas and I had some projects on, you want to know the sequence of turns. [speaker004:] Yeah. 
[speaker002:] So what happens is if you're talking and I have a backchannel in the middle of your turn, and then you keep going what it looks like in a dialogue model is your turn and then my backchannel, [speaker004:] Yeah. [speaker002:] even though my backchannel occurred completely inside your turn. [speaker004:] Yeah? [speaker002:] So, for things like language modeling or dialogue modeling [pause] [vocalsound] it's [disfmarker] We know that that's wrong in real time. [speaker004:] Yeah? [speaker002:] But, because of the acoustic segmentations that were done and the fact that some of the acoustic data in Switchboard were missing, people couldn't study it, but that doesn't mean in the real world that people don't talk that way. [speaker004:] Yeah, I wasn't saying that. [speaker002:] So, it's [disfmarker] um [speaker004:] Right? I was just saying that w now we're looking at it. [speaker002:] Well, we've als [speaker004:] And [disfmarker] and [disfmarker] and, you [disfmarker] you maybe wanted to look at it before but, for these various technical reasons in terms of how the data was you weren't. [speaker002:] Right. We're looking at it here. [speaker004:] So that's why it's coming to us as new even though it may well be [pause] you know, if your [disfmarker] if your hypothes The hypothesis you were offering [vocalsound] eh [disfmarker] [speaker002:] Um. [speaker004:] Right? [disfmarker] if it's the null poth [comment] hypothesis, and if actually you have as much overlap in a two person, [vocalsound] we don't know the answer to that. The reason we don't know the answer to is cuz it wasn't studied and it wasn't studied because it wasn't set up. Right? [speaker002:] Yeah, all I meant is that if you're asking the question from the point of view of [pause] what's different about a meeting, studying meetings of, say, more than two people versus [pause] what kinds of questions you could ask with a two person [pause] meeting. [speaker004:] Mm hmm? 
[speaker002:] It's important to distinguish [pause] that, you know, this project [pause] is getting a lot of overlap [pause] but other projects were too, but we just couldn't study them. [speaker004:] May have been. [speaker002:] And and so uh [speaker004:] May have been. Right? We do kn we don't know the numbers. [speaker002:] Well, there is a high rate, So. It's [disfmarker] but I don't know how high, in fact [speaker001:] Well, here I have a question. [speaker002:] that would be interesting to know. [speaker004:] See, I mean, i i le let me t I mean, my point was just if you wanted to say to somebody, "what have we learned about overlaps here?" just never mind comparison with something else, [speaker002:] Mm hmm. [speaker004:] what we've learned about is overlaps in this situation, is that [disfmarker] the first [disfmarker] [pause] the first order thing I would say is that there's a lot of them. [speaker005:] Yeah. [speaker004:] Right? In [disfmarker] in the sense that i if you said if [disfmarker] i i i [speaker002:] Yeah, I [disfmarker] I don't di I agree with that. [speaker004:] In a way, I guess what I'm comparing to is more the common sense notion of [vocalsound] how [disfmarker] how much people overlap. Uh [pause] you know the fact that when [disfmarker] when [disfmarker] when, uh, Adam was looking for a stretch of [disfmarker] of speech before, that didn't have any overlaps, and he w he was having such a hard time and now I look at this and I go, "well, I can see why he was having such a hard time". [speaker002:] Right. That's also true of Switchboard. [speaker004:] It's happening a lot. I wasn't saying it wasn't. [speaker002:] It may not be [disfmarker] Right. So it's just, um [speaker004:] Right? I was commenting about this. [speaker002:] OK. 
[speaker004:] I'm saying if I [disfmarker] [pause] I'm saying if I have this complicated thing in front of me, [vocalsound] and we sh which, [pause] you know we're gonna get much more sophisticated about when we get lots more data, [speaker002:] All I'm saying is that from the [speaker004:] But [disfmarker] Then, if I was gonna describe to somebody what did you learn [pause] right here, about, you know, the [disfmarker] the modest amount of data that was analyzed I'd say, "Well, the first order thing was there was a lot of overlaps". In fact [disfmarker] [vocalsound] and it's not just an overlap [disfmarker] bunch of overlaps [disfmarker] second order thing is [vocalsound] it's not just a bunch of overlaps in one particular point, [vocalsound] but that there's overlaps, uh throughout the thing. [speaker008:] Right. [speaker004:] And that's interesting. [speaker002:] Right. No, I [disfmarker] I agree with that. [speaker004:] That's all. [speaker002:] I'm just [pause] [vocalsound] saying that it may [disfmarker] [pause] the reason you get overlaps may or may not be due to sort of the number of people in the meeting. [speaker004:] Oh yeah. Yeah. [speaker002:] And that's all. [speaker004:] Yeah, I wasn't making any statement about that. [speaker002:] And [disfmarker] and it would actually be interesting to find out [speaker004:] Yeah. [speaker002:] because some of the data say Switchboard, which isn't exactly the same kind of context, I mean these are two people who don't know each other and so forth, But we should still be able to somehow say what [disfmarker] what is the added contra contribution to sort of overlap time of each additional person, or something like that. [speaker008:] Yep. I could certainly see it going either way. [speaker004:] Yeah, that would be good to know, [speaker006:] OK, now. [speaker001:] What [disfmarker] [speaker004:] but w we [disfmarker] [speaker006:] Wh yeah, I [disfmarker] I agree [disfmarker] I agree with Adam. [speaker002:] But yeah. 
[speaker005:] Yeah. [speaker006:] And the reason is because I think there's a limit [disfmarker] [pause] there's an upper bound [pause] on how many you can have, simply [pause] from the standpoint of audibility. When we speak we [disfmarker] we do make a judgment of [pause] "can [disfmarker]" you know, as adults. [speaker004:] Mm hmm. [speaker002:] Right. [speaker006:] I mean, children don't adjust so well, I mean, if a truck goes rolling past, [vocalsound] adults will well, depending, but mostly, adults will [disfmarker] will [disfmarker] [pause] will hold off to what [disfmarker] [pause] to finish the end of the sentence till the [disfmarker] till the noise is past. [speaker004:] Mm hmm. [speaker006:] And I think we generally do [vocalsound] monitor things like that, [pause] about [disfmarker] whether we [disfmarker] whether our utterance will be in the clear or not. [speaker002:] Right. [speaker006:] And partly it's related to rhythmic structure in conversation, so, [vocalsound] you know, you [disfmarker] you t Yeah, this is d also um, people tend to time their [disfmarker] their [disfmarker] [vocalsound] their, um [pause] when they [pause] come into the conversation based on the overall rhythmic, [pause] uh uh, ambient thing. [speaker001:] Well [disfmarker] [speaker002:] Right. [speaker006:] So you don't want to be c cross cutting. And [disfmarker] and, just to finish this, that um That I think that [vocalsound] there may be an upper bound on how many overlaps you can have, simply from the standpoint of audibility and how loud the other people are who are already [pause] in the fray. But I [disfmarker] you know, of certain types. Now if it's just backchannels, [vocalsound] people [pause] may be doing that [pause] with less [pause] intention of being heard, [pause] just sort of spontaneously doing backchannels, in which case [pause] that [disfmarker] those might [disfmarker] there may be no upper bound on those. 
[speaker007:] I [disfmarker] I have a feeling that backchannels, which are the vast majority of overlaps in Switchboard, [pause] uh, don't play as big a role here, because it's very unnatural I think, to backchannel if [disfmarker] in a multi audience [disfmarker] you know, in a multi person [vocalsound] [pause] audience. [speaker002:] If you can see them, actually. It's interesting, so if you watch people are going like [disfmarker] [comment] [comment] Right [disfmarker] right, like this here, [speaker007:] Right. [speaker005:] Yeah. [speaker002:] but That may not be the case if you couldn't see them. [speaker004:] u [speaker007:] But [disfmarker] [pause] but, it's sort of odd if one person's speaking and everybody's listening, and it's unusual to have everybody going "uh huh, uh huh" [speaker004:] Actually, I think I've done it [pause] a fair number of times today. But. [speaker002:] Yeah. There's a lot of head nodding, in this [speaker008:] Um. [speaker007:] Yeah. [speaker008:] Yep, we need to put trackers on it. [speaker001:] In [disfmarker] in the two person [disfmarker] [speaker005:] Yeah, yeah, yeah. [speaker006:] He could, he could. [speaker007:] Plus [disfmarker] plus [disfmarker] plus the [disfmarker] Yeah. So [disfmarker] so actually, um That's in part because the nodding, if you have visual contact, [pause] the nodding has the same function, but on the phone, in Switchboard [vocalsound] you [disfmarker] you [disfmarker] that wouldn't work. [speaker008:] Yeah, you don't have it. [speaker007:] So [vocalsound] so you need to use the backchannel. [speaker008:] Your mike is [disfmarker] [speaker001:] So, in the two person conversations, [pause] when there's backchannel, is there a great deal of [pause] overlap [pause] in the speech? [speaker008:] That is an earphone, so if you just put it [pause] so it's on your ear. [speaker001:] or [disfmarker] Cuz my impression is sometimes it happens when there's a pause, [speaker002:] Yes. [speaker008:] There you go. 
[speaker002:] Yeah. [speaker008:] Thank you. [speaker006:] E for example. [speaker001:] you know, like you [disfmarker] you get a lot of backchannel, when somebody's pausing [speaker002:] Yes. Right. [speaker006:] She's doing that. [speaker002:] Sorry, what were you saying? [speaker001:] It's hard to do both, huh? Um [pause] no, when [disfmarker] when [disfmarker] when there's backchannel, I mean, just [disfmarker] I was just listening, and [disfmarker] and when there's two people talking and there's backchannel it seems like, [pause] um the backchannel happens when, you know, the pitch drops and the first person [disfmarker] [speaker002:] Oh. [speaker001:] and a lot of times, the first person actually stops talking and then there's a backchannel [pause] and then they start up again, and so I'm wondering about [disfmarker] h I just wonder how much overlap there is. Is there a lot? [speaker002:] I think there's a lot of the kind that Jose was talking about, where [disfmarker] [pause] I mean, this is called "precision timing" in [pause] conversation analysis, where [pause] [vocalsound] they come in overlapping, [pause] but at a point where the [pause] information is mostly [pause] complete. So all you're missing is some last syllables or something or the last word or some highly predictable words. [speaker001:] Mmm. Mm hmm. [speaker002:] So technically, it's an overlap. But [pause] you know, from information flow point of view it's not an overlap in [pause] the predictable information. [speaker001:] But maybe a [disfmarker] just a small overlap? [speaker005:] More, yeah. [speaker008:] It'd be interesting if we could do prediction. [speaker001:] I was just thinking more in terms of alignment, alignment overlap. [speaker002:] Yeah. [speaker008:] Language model prediction of overlap, that would be really interesting. [speaker007:] So [disfmarker] [pause] so [disfmarker] [speaker004:] Yeah. 
[speaker002:] Well, that's exactly, exactly why we wanted to study the precise timing of overlaps ins in uh Switchboard, [speaker007:] Right. [speaker008:] Right. [speaker007:] So [disfmarker] so here's a [disfmarker] here's a first interesting [pause] labeling task. [speaker002:] say, because there's a lot of that. [speaker007:] Uh, to distinguish between, say, backchannels [vocalsound] [pause] precision timing [disfmarker] Sort of [vocalsound] you know, benevolent overlaps, and [disfmarker] and [disfmarker] [vocalsound] [pause] and w and [disfmarker] and sort of, um [pause] I don't know, hostile overlaps, where [vocalsound] someone is trying to grab the floor from someone else. [speaker008:] Mm hmm. Let's pick a different word. [speaker005:] Yeah. [speaker007:] Uh, that [disfmarker] that might be an interesting, um [pause] problem to look at. [speaker005:] Yeah. [speaker001:] Hostile takeovers. [speaker005:] Yeah. [speaker004:] Yeah. [speaker006:] Well, I mean you could do that. [speaker005:] Yeah. [speaker006:] I ju I [disfmarker] I think that [pause] in this meeting I really had the feeling that wasn't happening, that [pause] the hostile [disfmarker] hostile type. These were [disfmarker] these were [pause] benevolent types, as people [pause] finishing each other's sentences, and [pause] stuff. [speaker007:] OK. [speaker002:] Mm hmm. [speaker007:] Um, I could imagine that as [disfmarker] there's a fair number of [vocalsound] um cases where, and this is sort of, not [pause] really hostile, but sort of competitive, where [vocalsound] one person is finishing something and [vocalsound] you have, like, two or three people jumping [disfmarker] trying to [disfmarker] [pause] trying to [disfmarker] [pause] trying to, uh grab the next turn. [speaker008:] Trying to get the floor. [speaker004:] Yeah. [speaker007:] And so it's not against the person who talks first [pause] because actually we're all waiting for that person to finish. But they all want to [pause] be next. 
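The three-way labeling task Andreas proposes here (backchannel vs. precision-timing vs. competitive overlap) could start from a toy heuristic like the one below. The thresholds and the features used are purely illustrative assumptions, not derived from the data; a real labeling scheme would need human annotation:

```python
# Toy heuristic for the overlap-labeling task proposed above. Thresholds
# and features are illustrative assumptions, not corpus-derived values.

def label_overlap(duration_sec, frac_into_turn):
    """Guess an overlap type from two assumed features.

    duration_sec: length of the overlapping region in seconds.
    frac_into_turn: where the overlap begins, as a fraction of the
        overlapped turn's length (0 = turn start, 1 = turn end).
    """
    # Short overlaps well inside a turn look like backchannels.
    if duration_sec < 1.0 and frac_into_turn < 0.8:
        return "backchannel"
    # Overlaps near the end of a turn match the "precision timing"
    # pattern, where only predictable final material is overlapped.
    if frac_into_turn >= 0.8:
        return "precision-timing"
    # Longer overlaps starting mid-turn suggest a bid for the floor.
    return "competitive"
```

This mirrors Jane's observation that backchannels are brief while precision-timing overlaps cluster at points where the turn's information is essentially complete.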
[speaker004:] I have a feeling most of these things are [disfmarker] that [disfmarker] [pause] that are not [pause] a benevolent kind are [disfmarker] are [vocalsound] [pause] are, uh [pause] um [pause] [vocalsound] are [disfmarker] are competitive as opposed to real really [disfmarker] really hostile. [speaker007:] Right. [speaker006:] Yeah, I agree. [speaker004:] But. [speaker001:] I wonder what determines who gets the floor? [speaker006:] I agree. Well, there are various things, you [disfmarker] you have the [disfmarker] [speaker001:] I mean [disfmarker] [speaker004:] Uh a vote [disfmarker] vote in Florida. [speaker008:] It's been studied a lot. [speaker004:] Yeah. [speaker005:] Voting for [disfmarker] [speaker004:] Um, o one thing [disfmarker] I [disfmarker] I wanted to [disfmarker] or you can tell a good joke and then everybody's laughing and you get a chance to g break in. [speaker007:] Seniority. [speaker004:] But. But. Um. You know, the other thing I was thinking was that, [pause] um [pause] these [disfmarker] all these interesting questions are, of course, pretty hard to answer with, uh u [pause] you know, a small amount of data. [speaker008:] Ach. [speaker004:] So, um [pause] I wonder if what you're saying suggests that we should make a conscious attempt to have, um [vocalsound] a [disfmarker] a fair number of meetings with, uh a smaller number of people. Right? I mean [vocalsound] we [disfmarker] most of our meetings are [pause] uh, meetings currently with say five, six, seven, eight people Should we [pause] really try to have some two person meetings, [pause] or some three person meetings and re record them [vocalsound] just to [disfmarker] to [disfmarker] to beef up the [disfmarker] the statistics on that? [speaker006:] That's a control. 
Well, [vocalsound] it seems like there are two possibilities there, I mean [pause] i it seems like [vocalsound] if you have just [pause] two people it's not [pause] really, y like a meeting, w is not as similar as the rest of the [disfmarker] [pause] of the sample. It depends on what you're after, of course, but [vocalsound] It seems like that would be more a case of the control condition, compared to, uh [pause] an experimental [pause] condition, with more than two. [speaker008:] Mm hmm. [speaker004:] Well, Liz was raising the question of [disfmarker] of whether i it's the number [disfmarker] there's a relationship between the number of people and the number of overlaps or type of overlaps there, [speaker006:] Mm hmm. [speaker004:] and, um [vocalsound] If you had two people meeting in this kind of circumstance then you'd still have the visuals. You wouldn't have that difference [pause] also that you have in the [vocalsound] say, in Switchboard data. [speaker006:] Mm hmm. [speaker004:] Uh [speaker006:] Yeah, I'm just thinking that'd be more like a c control condition. [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker008:] Well, but from the acoustic point of view, it's all good. [speaker005:] Yeah. Is the same. [speaker004:] Yeah, acoustic is fine, [speaker007:] If [disfmarker] if the goal were to just look at overlap you would [disfmarker] you could serve yourself [disfmarker] save yourself a lot of time but not even transcri transcribe the words. [speaker004:] but [disfmarker] [speaker008:] Yep. [speaker002:] Well, I was thinking you should be able to do this from the [pause] acoustics, on the close talking mikes, [speaker008:] Well, that's [disfmarker] the [disfmarker] that was my [disfmarker] my status report, [speaker004:] Yeah. [speaker006:] You've been working on that. [speaker002:] right? Right, I mean Adam was [disfmarker] [speaker006:] Yeah. [speaker005:] Yeah. 
[speaker008:] so [vocalsound] [pause] Once we're done with this stuff discussing, [speaker002:] right. [speaker004:] Yeah. [speaker002:] I mean, not as well as what [disfmarker] I mean, you wouldn't be able to have any kind of typology, obviously, [speaker008:] Mm hmm. [speaker002:] but you'd get some rough statistics. [speaker004:] But [disfmarker] what [disfmarker] what do you think about that? [speaker002:] So. [speaker004:] Do you think that would be useful? I'm just thinking that as an action item of whether we should try to record some two person meetings or something. [speaker002:] I guess my [disfmarker] my first comment was, um [pause] only that [vocalsound] um we should n not attribute overlaps only to meetings, but maybe that's obvious, maybe everybody knew that, [speaker004:] Yeah. [speaker002:] but that [vocalsound] in normal conversation with two people there's an awful lot of the same kinds of overlap, [speaker004:] Mm hmm. [speaker002:] and that it would be interesting to look at [pause] whether there are these kinds of constraints that Jane mentioned, that [vocalsound] what maybe the additional people add to this competition that happens right after a turn, you know, because now you can have five people trying to grab the turn, but pretty quickly there're [disfmarker] they back off and you go back to this sort of only one person at a time with one person interrupting at a time. [speaker004:] Mm hmm. [speaker002:] So, I don't know. To answer your question I [pause] it [disfmarker] I don't think it's crucial to have controls but I think it's worth recording all the meetings we [pause] can. [speaker008:] Can. [speaker004:] Well, [vocalsound] OK. [speaker005:] Yeah. [speaker002:] So, um [pause] you know. [speaker007:] I [disfmarker] I have an idea. [speaker002:] D I wouldn't not record a two person meeting just because it only has two people. [speaker008:] Right. 
[speaker007:] Could we [disfmarker] Could we, um [disfmarker] we have [disfmarker] have in the past and I think continue [disfmarker] will continue to have a fair number of [pause] uh phone conference calls. [speaker004:] Uh huh. [speaker008:] Yeah, we talked about this repeatedly. [speaker007:] And, [vocalsound] uh, [pause] and as a [disfmarker] to, um [vocalsound] as another c [pause] c comparison [pause] condition, [pause] we could um see what [disfmarker] what what happens in terms of overlap, when you don't have visual contact. So, um [disfmarker] [speaker002:] Can we actually record? [speaker008:] It just seems like that's a very different [pause] thing than what we're doing. [speaker004:] Uh Well, we'll have to set up for it. [speaker002:] I mean [pause] physically [pause] can we record the o the other [disfmarker] [speaker004:] Yeah. Well, we're not really set up for it [pause] to do that. But. [speaker007:] Or, this is getting a little extravagant, we could put up some kind of blinds or something to [disfmarker] [pause] to remove, uh [pause] visual contact. [speaker004:] Yeah. [speaker008:] Barriers! [speaker003:] Yeah. [speaker002:] That's what they did on Map Task, you know, this Map Task corpus? They ran exactly the same pairs of people with and without visual cues and it's quite interesting. [speaker004:] Well, we [disfmarker] we record this meeting so regularly it wouldn't be that [disfmarker] I mean [pause] a little strange. [speaker008:] OK, we can record, but no one can look at each other. [speaker002:] Well, we could just put [pause] b blindfolds on. [speaker003:] Yeah. [speaker008:] Close your eyes. [speaker007:] Well y no you [disfmarker] f [speaker006:] Blindf [speaker007:] Yeah, Yeah. [speaker008:] Turn off the lights. [speaker002:] and we'd take a picture of everybody sitting here with blindfolds. 
That would [disfmarker] [speaker004:] Oh, th that was the other thing, weren't we gonna take a picture [pause] at the beginning of each of these meetings? [speaker008:] Um, what [disfmarker] I had thought we were gonna do is just take pictures of the whiteboards. rather than take pictures of the meeting. [speaker006:] Well, linguistic [disfmarker] [speaker008:] And, uh [disfmarker] [speaker006:] Yeah. [speaker004:] Yes. [speaker006:] Linguistic anthropologists would [disfmarker] would suggest it would be useful to also take a picture of the meeting. [speaker004:] There's a head nodding here vigorously, yeah. [speaker001:] Why [disfmarker] why do we want to have a picture of the meeting? [speaker006:] The [disfmarker] because you get then the spatial relationship of the speakers. [speaker002:] Ee [pause] you mean, transc [pause] no [disfmarker] [speaker007:] Well, you could do that by just noting on the enrollment sheet the [disfmarker] [pause] the seat number. [speaker006:] And that [pause] could be [speaker005:] Yeah Yeah. Yeah. [speaker008:] Seat number, that's a good idea. I'll do that. [speaker005:] Yeah. [speaker008:] I'll do that on the next set of forms. [speaker007:] So you'd number them somehow. [speaker005:] Yeah. Is possible to get information from the rhythmic [disfmarker] f from the ge, eh [pause] uh, files. [speaker008:] I finally remembered to put, uh put native language on the newer forms. [speaker001:] We can [disfmarker] can't you figure it out from the mike number? [speaker008:] No. [speaker001:] OK. [speaker008:] The wireless ones. And even the jacks, I mean, I'm sitting here and the jack is [pause] over [pause] in front of you. [speaker001:] Oh. [speaker002:] But probably from these you could've [comment] infer it. [speaker007:] Yeah, but It's [disfmarker] it would be trivial [disfmarker] [speaker008:] It would be another task. [speaker002:] It would be a research task. 
[speaker008:] Having [disfmarker] having ground tu truth would be nice, so [pause] seat number would be good. [speaker001:] You know where you could get it? [speaker002:] Yeah, yeah. [speaker005:] Yeah. [speaker001:] Beam forming during the digit [pause] uh stuff. [speaker008:] So I'm gonna put little labels on all the chairs with the seat number. [speaker003:] Mm hmm. [speaker008:] That's a good idea. [speaker007:] Not the chairs. [speaker002:] But you have to keep the chairs in the same pla like here. [speaker008:] But, uh [disfmarker] [speaker007:] The chairs are [disfmarker] Chairs are movable. Put them [disfmarker] [pause] Like, [pause] put them on the table where they [disfmarker] [speaker005:] The chair [comment] Yeah. [speaker008:] Yep. [speaker003:] Yeah. [speaker008:] Yep. [speaker006:] But you know, they [disfmarker] the [disfmarker] s the linguistic anthropologists would say it would be good to have a digital picture anyway, [speaker001:] Just remembered a joke. [speaker006:] because you get [pause] a sense also of posture. [speaker007:] What people were wearing. [speaker008:] Yeah. [speaker006:] Posture, and we could like, [pause] you know, [pause] block out the person's face or whatever [speaker002:] The fashion statement. [speaker006:] but [disfmarker] [vocalsound] but, you know, these are important cues, [speaker007:] Oh, Andreas was [disfmarker] [speaker001:] How big their heads are. [speaker006:] I mean the [disfmarker] the [disfmarker] how a person is sitting [pause] is [disfmarker] [speaker004:] But if you just [speaker007:] Yeah. [speaker004:] f But from one picture, I don't know that you really get that. [speaker007:] Andreas was wearing that same old sweater again. [speaker004:] Right? You'd want a video for that, I think. [speaker006:] It'd be better than nothing, is [disfmarker] is [disfmarker] i Just from a single picture I think you can tell some aspects. [speaker005:] A video, yeah. [speaker004:] Think so? 
[speaker006:] I mean I [disfmarker] I could tell you I mean, if I if I'm in certain meetings I notice that there are certain people who really do [disfmarker] eh [disfmarker] The body language is very uh [disfmarker] is very interesting in terms of the dominance aspect. [speaker007:] And [disfmarker] And [disfmarker] [speaker004:] Hmm. [speaker007:] Yeah. And [disfmarker] and Morgan had that funny hair again. [speaker006:] Yeah. [comment] Well, I mean you black out the [disfmarker] that part. [speaker008:] Hmm. [speaker006:] But it's just, you know, the [disfmarker] the body [speaker001:] He agreed. [speaker006:] you know? [speaker008:] Of course, the [disfmarker] where we sit at the table, I find is very interesting, that we do tend to [pause] cong [pause] to gravitate to the same place each time. [speaker006:] Yeah. [speaker008:] and it's somewhat coincidental. I'm sitting here so that I can run into the room if the hardware starts, you know, catching fire or something. [speaker007:] Oh, no, you [disfmarker] you just like to be in charge, that's why you're sitting [disfmarker] [speaker008:] I just want to be at the head of the table. [speaker007:] Yeah. [speaker008:] Take control. [speaker004:] Speaking of taking control, you said you had some research to talk about. [speaker006:] Yeah. [speaker008:] Yeah, I've been playing with, um uh, using the close talking mike to do [disfmarker] to try to figure out who's speaking. [speaker006:] Yeah. [speaker008:] So my first attempt was just using thresholding and filtering, that we talked about [disfmarker] about two weeks ago, and so I played with that a little bit, and [vocalsound] it works O K, [pause] except that [pause] it's very sensitive to your choice of [vocalsound] your filter width and your [vocalsound] threshold. So if you fiddle around with it a little bit and you get good numbers you can actually do a pretty good job of segmenting when someone's talking and when they're not. 
But if you try to use the same parameters on another speaker, it doesn't work anymore, even if you normalize it based on the absolute loudness. [speaker002:] But does it work for that one speaker throughout the whole meeting? [speaker008:] It does work for the one speaker throughout the whole meeting. Um Pretty well. Pretty well. [speaker001:] How did you do it Adam? [speaker008:] How did I do it? What do you mean? [speaker001:] Yeah. I mean, wh what was the [disfmarker] [speaker008:] The algorithm was, uh take o every frame that's over the threshold, and then median filter it, [vocalsound] and then look for runs. [speaker001:] Yeah. Mm hmm. [speaker008:] So there was a minimum run length, [speaker001:] Every frame that's over what threshold? [speaker008:] so that [disfmarker] A threshold that you pick. [speaker001:] In terms of energy? [speaker008:] Yeah. [speaker001:] Ah! OK. [speaker006:] Say that again? Frame over fres threshold. [speaker008:] So you take a [disfmarker] each frame, and you compute the energy and if it's over the threshold you set it to one, and if it's under the threshold you set it to zero, [vocalsound] so now you have a bit stream [pause] of zeros and ones. [speaker006:] Hmm. OK. [speaker008:] And then I median filtered that [vocalsound] using, um [pause] a fairly long [pause] filter length. Uh [pause] well, actually I guess depends on what you mean by long, you know, tenth of a second sorts of numbers. Um and that's to average out you know, pitch, you know, the pitch contours, and things like that. And then, uh looked for long runs. [speaker006:] OK [speaker008:] And that works O K, if you fil if you tune the filter parameters, if you tune [vocalsound] how long your median filter is and how high you're looking for your thresholds. [speaker001:] Did you ever try running the filter before you pick a threshold? [speaker008:] No. I certainly could though. But this was just I had the program mostly written already so it was easy to do.
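The segmenter Adam describes here (per-frame energy, binarize against a threshold, median filter the bit stream, keep long runs) can be sketched roughly as below. The frame size, threshold, filter width, and minimum run length are illustrative stand-ins, since the discussion notes the real values had to be hand-tuned per speaker.

```python
import numpy as np
from scipy.signal import medfilt

def find_speech_runs(samples, rate, frame_ms=10, threshold=0.01,
                     filter_ms=100, min_run_ms=200):
    """Sketch of the thresholding segmenter described above.

    1. Compute per-frame energy.
    2. Binarize against an energy threshold.
    3. Median-filter the bit stream (~0.1 s) to smooth out pitch-scale dips.
    4. Keep runs of ones longer than a minimum duration.
    All parameter values here are illustrative, not the tuned ones.
    """
    frame_len = int(rate * frame_ms / 1000)
    n_frames = len(samples) // frame_len
    frames = samples[:n_frames * frame_len].reshape(n_frames, frame_len)
    energy = (frames.astype(float) ** 2).mean(axis=1)

    bits = (energy > threshold).astype(float)

    k = max(1, int(filter_ms / frame_ms))
    if k % 2 == 0:
        k += 1  # medfilt requires an odd kernel size
    bits = medfilt(bits, kernel_size=k)

    # Collect (start_frame, end_frame) runs of ones above the minimum length.
    min_frames = int(min_run_ms / frame_ms)
    runs, start = [], None
    for i, b in enumerate(bits):
        if b and start is None:
            start = i
        elif not b and start is not None:
            if i - start >= min_frames:
                runs.append((start, i))
            start = None
    if start is not None and n_frames - start >= min_frames:
        runs.append((start, n_frames))
    return runs
```

As the transcript points out, the weakness of this approach is exactly these fixed parameters: a threshold and filter width that work for one close-talking channel tend not to transfer to another speaker.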
OK and then the other thing I did, was I took [vocalsound] Javier's speaker change detector [disfmarker] acoustic change detector, and I implemented that with the close talking mikes, and [pause] unfortunately that's not working real well, and it looks like it's [disfmarker] the problem is [disfmarker] he does it in two passes, the first pass [vocalsound] is to find candidate places to do a break. And he does that using a neural net doing broad phone classification and he has the [vocalsound] the, uh [pause] one of the phone classes is silence. And so the possible breaks are where silence starts and ends. And then he has a second pass which is a modeling [disfmarker] a Gaussian mixture model. Um looking for [vocalsound] uh [vocalsound] whether it improves or [disfmarker] or degrades to split at one of those particular places. And what looks like it's happening is that the [disfmarker] even on the close talking mike the broad phone class classifier's doing a really bad job. [speaker001:] Who was it trained on? [speaker008:] Uh, I have no idea. I don't remember. [speaker001:] Hmm. [speaker008:] Does an do you remember, Morgan, was it Broadcast News? [speaker004:] I think so, yeah. [speaker008:] Um [pause] So, at any rate, my next attempt, [pause] which I'm in the midst of and haven't quite finished yet was actually using the [vocalsound] uh, thresholding as the way of generating the candidates. Because one of the things that definitely happens is if you put the threshold low [vocalsound] you get lots of breaks. All of which are definitely acoustic events. They're definitely [vocalsound] someone talking. But, like, it could be someone who isn't the person here, but the person over there or it can be the person breathing. And then feeding that into the acoustic change detector. And so I think that might work. But, I haven't gotten very far on that. But all of this is close talking mike, so it's, uh [pause] just [disfmarker] just trying to get some ground truth. 
[speaker005:] Only with eh uh, but eh I [disfmarker] I [disfmarker] I think, eh when [disfmarker] when, y I [disfmarker] [vocalsound] I saw the [disfmarker] the [disfmarker] the [disfmarker] the speech from PDA and, eh [pause] close [pause] [vocalsound] talker. I [disfmarker] I think the there is a [disfmarker] a great difference in the [disfmarker] in the signal. [speaker008:] Oh, absolutely. So [pause] s my intention for this is [disfmarker] is as an aide for ground truth. [speaker005:] Um but eh I [disfmarker] but eh I [disfmarker] I [disfmarker] I mean that eh eh [pause] in the [disfmarker] in the mixed file [vocalsound] you can find, uh [pause] zone with, eh [pause] great different, eh [pause] level of energy. [speaker008:] not [disfmarker] [speaker005:] Um [vocalsound] I [disfmarker] I think for, eh [pause] algorithm based on energy, [pause] eh, that um h mmm, [disfmarker] more or less, eh, like eh [pause] eh, mmm, first sound energy detector. [speaker008:] Say it again? [speaker005:] eh nnn. When y you the detect the [disfmarker] the [disfmarker] the first at [disfmarker] at the end of [disfmarker] of the [vocalsound] detector of, ehm princ um. What is the [disfmarker] the name in English? the [disfmarker] the, mmm, [pause] [vocalsound] the de detector of, ehm of a word in the [disfmarker] in the s in [disfmarker] an isolated word in [disfmarker] in the background That, uh [speaker008:] I'm [disfmarker] I'm not sure what you're saying, can you try [disfmarker] [speaker005:] I mean that when [disfmarker] when you use, eh [pause] eh [pause] any [speaker001:] I think he's saying the onset detector. [speaker005:] Yeah. [speaker008:] Onset detector, OK. [speaker005:] I [disfmarker] I think it's probably to work well eh, because, eh [pause] you have eh, in the mixed files a great level of energy. eh [pause] and great difference between the sp speaker. 
And probably is not so easy when you use the [disfmarker] the PDA, eh that [disfmarker] Because the signal is, eh [pause] the [disfmarker] in the e energy level. [speaker008:] Right. [speaker005:] in [disfmarker] in that, eh [pause] eh [pause] speech file [vocalsound] is, eh [pause] more similar. [speaker008:] Right. [speaker005:] between the different eh, speaker, [vocalsound] um [pause] I [disfmarker] I think is [disfmarker] eh, it will [pause] i is my opinion. [speaker008:] But different speakers. [speaker005:] It will be, eh [pause] more difficult to [disfmarker] to detect bass tone energy. the [disfmarker] the change. I think that, um [speaker008:] Ah, in the clo in the P D A, you mean? [speaker005:] In the PDA. [speaker008:] Absolutely. Yeah, no question. [speaker005:] Yeah. Yeah. [speaker008:] It'll be much harder. Much harder. [speaker004:] Yeah. [speaker005:] And the [disfmarker] the another question, that when I review the [disfmarker] the [disfmarker] the work of Javier. I think the, nnn, the, nnn, [pause] that the idea of using a [pause] neural network [vocalsound] to [disfmarker] to get a broad class of phonetic, eh [pause] from, eh uh a candidate from the [disfmarker] the [disfmarker] the speech signal. If you have, eh [vocalsound] uh, I'm considering, only because Javier, eh [pause] only consider, eh [pause] like candidate, the, nnn, eh [pause] the silence, because it is the [disfmarker] the only model, eh [disfmarker] eh, he used that, eh [pause] [vocalsound] eh [pause] nnn, to detect the [disfmarker] the possibility of a [disfmarker] a change between the [disfmarker] between the speaker, [speaker008:] Right. [speaker005:] Um [pause] another [disfmarker] another research thing, different groups, eh [pause] working, eh [pause] on Broadcast News [vocalsound] prefer to, eh [pause] to consider hypothesis eh [pause] between each phoneme. [speaker008:] Mm hmm. Yeah, when a [pause] phone changes. 
[speaker005:] Because, I [disfmarker] I [disfmarker] I think it's more realistic that, uh [pause] only consider the [disfmarker] [vocalsound] the [vocalsound] the [disfmarker] the silence between the speaker. Eh [pause] there [disfmarker] there exists eh [pause] silence between [disfmarker] between, eh [pause] a speaker. is [disfmarker] is, eh [pause] eh [pause] acoustic, eh [pause] event, important to [disfmarker] to consider. [speaker004:] Mm hmm. Mm hmm. [speaker005:] I [disfmarker] I found that the, eh [pause] silence in [disfmarker] in many occasions in the [disfmarker] in the speech file, but, eh [pause] when you have, eh [pause] eh, two speakers together without enough silence between [disfmarker] between them, eh [pause] [vocalsound] I think eh [pause] is better to use the acoustic change detector basically and I [disfmarker] I [disfmarker] I IX or, mmm, BIC criterion for consider all the frames in my opinion. [speaker004:] Mm hmm. Yeah, the [disfmarker] you know, the reason that he, uh [pause] just used silence [vocalsound] was not because he thought it was better, it was [disfmarker] it was [disfmarker] it was the place he was starting. [speaker005:] Yeah. [speaker008:] Yep. [speaker005:] Yeah. [speaker004:] So, he was trying to get something going, [speaker005:] Yeah, yeah, yeah, yeah. [speaker004:] and, uh e e you know, as [disfmarker] as [disfmarker] [vocalsound] as is in your case, if you're here for only a modest number of months you try to pick a realistic goal, [speaker005:] Yeah. [speaker008:] Do something. [speaker005:] Yeah, yeah, yeah, yeah. [speaker004:] But his [disfmarker] his goal was always to proceed from there to then allow broad category change also. [speaker005:] Uh huh. 
But, eh [pause] do [disfmarker] do you think that if you consider all the frames to apply [vocalsound] the [disfmarker] the, eh [pause] the BIC criterion to detect the [disfmarker] the [disfmarker] the different acoustic change, [vocalsound] eh [pause] between speaker, without, uh [pause] with, uh [pause] silence or [vocalsound] with overlapping, uh, I think like [disfmarker] like, eh [pause] eh a general, eh [pause] eh [pause] way of process the [disfmarker] the acoustic change. [speaker004:] Mm hmm. [speaker005:] In a first step, I mean. [speaker004:] Mm hmm. [speaker005:] An and then, eh [pause] [vocalsound] eh [pause] without considering the you [disfmarker] you [disfmarker] you, um [pause] you can consider the energy [vocalsound] like a another parameter in the [disfmarker] in the feature vector, [speaker008:] Right. Absolutely. [speaker005:] eh. [speaker004:] Mm hmm. [speaker005:] This [disfmarker] this is the idea. And if, if you do that, eh [pause] eh, with a BIC uh criterion for example, or with another kind of, eh [pause] of distance in a first step, [vocalsound] and then you, eh [pause] you get the, eh [pause] the hypothesis to the [disfmarker] this change acoustic, [vocalsound] eh [pause] [vocalsound] to po process [speaker008:] Right. [speaker005:] Because, eh [pause] eh, probably you [disfmarker] you can find the [disfmarker] the [vocalsound] eh [pause] a small gap of silence between speaker [vocalsound] with eh [pause] eh [pause] a ga mmm, [pause] [vocalsound] small duration Less than, [vocalsound] eh [pause] two hundred milliseconds for example [speaker004:] Mm hmm. [speaker005:] and apply another [disfmarker] another algorithm, another approach like, eh [pause] eh [pause] detector of ene, eh detector of bass tone energy to [disfmarker] to consider that, eh [vocalsound] that, eh [pause] zone. 
of s a small silence between speaker, or [vocalsound] another algorithm to [disfmarker] to process, [vocalsound] eh [pause] the [disfmarker] the segment between marks eh [pause] founded by the [disfmarker] the [vocalsound] the BIC criterion and applied for [disfmarker] for each frame. [speaker004:] Mm hmm. [speaker005:] I think is, eh [pause] nnn, it will be a an [disfmarker] an [disfmarker] a more general approach [vocalsound] the [pause] if we compare [disfmarker] with use, eh [pause] a neural net or another, eh [pause] speech recognizer with a broad class or [disfmarker] or narrow class, [speaker004:] Mm hmm. [speaker005:] because, in my opinion eh [pause] it's in my opinion, [vocalsound] eh if you [disfmarker] if you change the condition of the speech, I mean, if you adjust to your algorithm with a mixed speech file and to, eh [vocalsound] to, eh [pause] [vocalsound] adapt the neural net, eh [pause] used by Javier with a mixed file. [speaker004:] Mm hmm. Mm hmm. [speaker008:] With the what file? [speaker005:] uh With a m mixed file, with a [disfmarker] the mix, mix. [speaker006:] "Mixed." [speaker001:] "Mixed". [speaker008:] "Mixed?" [speaker004:] Mm hmm. [speaker005:] Sorry. And [pause] and then you [disfmarker] you, eh you try to [disfmarker] to apply that, eh, eh, eh, speech recognizer to that signal, to the PDA, eh [pause] speech file, [vocalsound] I [disfmarker] I think you will have problems, because the [disfmarker] the [disfmarker] the [disfmarker] the [pause] condition [vocalsound] you [disfmarker] you will need t t I [disfmarker] I suppose that you will need to [disfmarker] to [disfmarker] to retrain it. [speaker008:] Oh, absolutely. [speaker004:] Well, I [disfmarker] I [disfmarker] [speaker008:] This is [disfmarker] this is not what I was suggesting to do. [speaker004:] u [vocalsound] Look, I [disfmarker] I think this is a [disfmarker] [speaker005:] Really? 
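The BIC criterion discussed in this exchange, applied over candidate frames with energy folded into the feature vector as suggested, might look like the following textbook sketch. This is a generic single-change-point test, not Javier's implementation; the penalty weight `lam` and the covariance regularization are assumptions.

```python
import numpy as np

def delta_bic(features, t, lam=1.0):
    """Generic Bayesian Information Criterion test for a change at frame t.

    features: (N, d) array of frame-level feature vectors, e.g. MFCCs with
    energy appended as an extra dimension, as suggested above.
    Returns a positive value when two separate full-covariance Gaussians
    (before/after t) model the data better than a single one, i.e. when t
    looks like an acoustic change.
    """
    n, d = features.shape

    def logdet_cov(x):
        cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(d)  # regularize
        _, logdet = np.linalg.slogdet(cov)
        return logdet

    # Model-complexity penalty: one extra mean (d parameters) plus one
    # extra covariance (d(d+1)/2 parameters).
    penalty = lam * 0.5 * (d + d * (d + 1) / 2) * np.log(n)
    return (0.5 * n * logdet_cov(features)
            - 0.5 * t * logdet_cov(features[:t])
            - 0.5 * (n - t) * logdet_cov(features[t:])
            - penalty)
```

In a full detector this test would be evaluated at every candidate frame (not just at silences), which is the generalization being argued for here.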
[speaker004:] One [disfmarker] once [disfmarker] It's a [disfmarker] I used to work, like, on voiced [disfmarker] on voice silence detection, you know, and this is this [pause] kind of thing. [speaker005:] Yeah. [speaker004:] Um [pause] If you [vocalsound] have somebody who has some experience with this sort of thing, and they work on it for a couple months, [vocalsound] they can come up with something that gets most of the cases fairly easily. Then you say, "OK, I don't just wanna get most of the cases I want it to be really accurate." Then it gets really hard no matter what you do. So, the p the problem is is that if you say, "Well I [disfmarker] I have these other data over here, [vocalsound] that I learn things from, either explicit training of neural nets or of Gaussian mixture models or whatever." [speaker005:] Yeah. [speaker004:] Uh [pause] Suppose you don't use any of those things. You say you have looked for acoustic change. Well, what does that mean? That [disfmarker] that means you set some thresholds somewhere or something, [speaker005:] Yeah. [speaker004:] right? [speaker005:] Yeah. [speaker004:] and [disfmarker] and so [vocalsound] where do you get your thresholds from? From something that you looked at. So [vocalsound] you always have this problem, you're going to new data um [pause] H how are you going to adapt whatever you can very quickly learn about the new data? [vocalsound] Uh, if it's gonna be different from old data that you have? And I think that's a problem [pause] with this. [speaker008:] Well, also what I'm doing right now is not intended to be an acoustic change detector for far field mikes. What I'm doing [vocalsound] is trying to use the close talking mike [vocalsound] and just use [disfmarker] [pause] Can and just generate candidate and just [pause] try to get a first pass at something that sort of works. [speaker005:] Yeah! 
[speaker007:] Actually [disfmarker] actually [disfmarker] actually [disfmarker] [speaker001:] You have candidates. [speaker005:] the candidate. [speaker007:] I [disfmarker] [speaker001:] to make marking easier. [speaker007:] Or [disfmarker] [speaker001:] Yeah. [speaker008:] and I haven't spent a lot of time on it and I'm not intending to spend a lot of time on it. [speaker007:] OK. I [disfmarker] um, I, unfortunately, have to run, [speaker008:] So. [speaker007:] but, um [pause] I can imagine [pause] uh building [pause] a [pause] um [pause] model of speaker change [pause] detection [pause] that [vocalsound] takes into account [pause] both the far field and the [vocalsound] uh [pause] actually, not just the close talking mike for that speaker, but actually for all of th [pause] for all of the speakers. [speaker008:] Yep. Everyone else. [speaker004:] Yeah. [speaker007:] um [pause] If you model the [disfmarker] [pause] the [pause] effect that [pause] me speaking has on [pause] your [pause] microphone and everybody else's microphone, as well as on that, [vocalsound] and you build, um [disfmarker] basically I think you'd [disfmarker] you would [pause] build a [disfmarker] [vocalsound] an HMM that has as a state space all of the possible speaker combinations [speaker008:] All the [disfmarker] Yep. [speaker005:] Yeah. [speaker007:] and, um [vocalsound] you can control [disfmarker] [speaker008:] It's a little big. [speaker007:] It's not that big actually, um [speaker008:] Two to the N. Two to the number of people in the meeting. [speaker004:] But [disfmarker] Actually, Andreas may maybe [disfmarker] maybe just something simpler but [disfmarker] but along the lines of what you're saying, [speaker008:] Anyway. [speaker005:] Yeah. 
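Andreas's idea of an HMM whose state space is all 2^N combinations of "speaker n is talking" could be toy-sketched as below. The emission and transition scores here are deliberately simplistic placeholders (reward states that match the loud channels, penalize changing many speakers at once), not a proposal for the real emission or transition models.

```python
import numpy as np
from itertools import product

def viterbi_speaker_states(energies, switch_cost=5.0, on_gain=2.0):
    """Toy HMM over all 2^N speaker on/off combinations.

    energies: (T, N) per-frame energies from the N close-talking mikes,
    assumed roughly comparable across channels. Returns a (T, N) boolean
    array of who is talking when. Scores are illustrative stand-ins.
    """
    T, N = energies.shape
    states = list(product([0, 1], repeat=N))       # all 2^N combinations
    state_arr = np.array(states, dtype=float)      # (S, N)

    # Crude per-channel normalization: a channel is "loud" in a frame if
    # it is above its own mean energy over the recording.
    loud = (energies > energies.mean(axis=0)).astype(float)       # (T, N)
    # Emission: +on_gain per channel whose loud/quiet status matches state.
    emit = on_gain * (loud[:, None, :] == state_arr[None, :, :]).sum(axis=2)

    # Transition: penalize the Hamming distance between consecutive states.
    ham = np.abs(state_arr[:, None, :] - state_arr[None, :, :]).sum(axis=2)
    trans = -switch_cost * ham                     # (S, S)

    # Standard Viterbi over the S = 2^N states.
    S = len(states)
    score = emit[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans              # (prev, next)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    path.reverse()
    return np.array([states[s] for s in path], dtype=bool)
```

Adam's "two to the N" objection shows up directly in the state enumeration; for a handful of meeting participants it stays tractable, as Andreas suggests.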
[speaker004:] I was just realizing, I used to know this guy who used to build, uh [vocalsound] um, mike mixers [disfmarker] automatic mike mixers where, you know, t in order to able to turn up the gain, you know, uh [vocalsound] as much as you can, you [disfmarker] you [disfmarker] you lower the gain on [disfmarker] on the mikes of people who aren't talking, [speaker007:] Mmm. [speaker005:] Yeah [comment] Yeah. [speaker007:] Mmm. Mm hmm. [speaker004:] right? [speaker007:] Mm hmm. [speaker004:] And then he had some sort of [vocalsound] reasonable way of doing that, but [vocalsound] uh, what if you were just looking at very simple measures like energy measures but you don't just compare it to some threshold [pause] overall but you compare it to the [vocalsound] energy in the other microphones. [speaker008:] I was thinking about doing that originally to find out [pause] who's the loudest, and that person is certainly talking. [speaker004:] Yeah. [speaker008:] But I also wanted to find threshold [disfmarker] uh, excuse me, mol overlap. [speaker004:] Yeah. [speaker008:] So, not just [disfmarker] just the loudest. [speaker006:] Mm hmm. [speaker005:] But, eh I [disfmarker] I Sorry. I [disfmarker] I have found that when [disfmarker] when I I analyzed the [disfmarker] the speech files from the, [pause] eh [pause] mike, eh [pause] from the eh close eh [pause] microphone, eh [pause] I found zones with a [disfmarker] a different level of energy. [speaker007:] Sorry, I have to go. [speaker008:] OK. Could you fill that out anyway? Just, [pause] put your name in. Are y you want me to do it? I'll do it. [speaker001:] But he's not gonna even read that. [speaker008:] I know. [speaker001:] Oh. [speaker005:] including overlap zone. including. because, eh [pause] eh [pause] depend on the position of the [disfmarker] of the microph of the each speaker [vocalsound] to, eh, to get more o or less energy [vocalsound] i in the mixed sign in the signal. 
and then, [vocalsound] if you consider energy to [disfmarker] to detect overlapping in [disfmarker] in, uh, and you process the [disfmarker] the [disfmarker] in [disfmarker] the [disfmarker] the [disfmarker] the speech file from the [disfmarker] the [disfmarker] the mixed signals. The mixed signals, eh. I [disfmarker] I think it's [disfmarker] it's difficult, um [vocalsound] [pause] only to en with energy to [disfmarker] to consider that in that zone We have eh, eh, overlapping zone Eh, if you process only the the energy of the, of each frame. [speaker004:] Well, it's probably harder, but I [disfmarker] I think what I was s nnn noting just when he [disfmarker] when Andreas raised that, was that there's other information to be gained from looking at all [vocalsound] of the microphones and you may not need to look at very sophisticated things, [speaker005:] Yeah. [speaker004:] because if there's [disfmarker] [vocalsound] if most of the overlaps [disfmarker] you know, this doesn't cover, say, three, but if most of the overlaps, say, are two, [vocalsound] if the distribution looks like there's a couple high ones and [disfmarker] and [pause] the rest of them are low, [speaker005:] Yeah. Yeah. Yeah. [speaker008:] And everyone else is low, yeah. [speaker005:] Yeah. [speaker004:] you know, what I mean, [speaker005:] Yeah. [speaker004:] there's some information there about their distribution even with very simple measures. [speaker005:] Yeah. Yeah. [speaker004:] Uh, by the way, I had an idea with [disfmarker] while I was watching Chuck nodding at a lot of these things, is that we can all wear little bells on our heads, [vocalsound] so that [vocalsound] then you'd know that [disfmarker] [speaker005:] Yeah. [speaker008:] Ding, ding, ding, ding. [speaker006:] "Ding". [speaker005:] Yeah. [speaker006:] That's cute! [speaker002:] I think that'd be really interesting too, with blindfolds. [speaker008:] Nodding with blindfolds, [speaker002:] Then [disfmarker] Yeah. 
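The cross-channel comparison Morgan suggests (judge each mike's energy relative to the other microphones in the same frame, rather than against one absolute threshold) might be sketched as follows; `margin_db` and `floor_db` are illustrative parameters, not values from the discussion.

```python
import numpy as np

def label_frames_by_relative_energy(channel_energies, margin_db=6.0,
                                    floor_db=-40.0):
    """Sketch of the "compare each mike to the others" idea above.

    channel_energies: (T, N) frame energies from the N close-talking mikes.
    A channel counts as active when it is within `margin_db` of the loudest
    channel in that frame (and above an absolute floor). Two or more active
    channels in a frame is a candidate overlap.
    Returns (active, overlap): a (T, N) bool array and a (T,) bool array.
    """
    db = 10.0 * np.log10(np.maximum(channel_energies, 1e-12))
    loudest = db.max(axis=1, keepdims=True)
    active = (db >= loudest - margin_db) & (db > floor_db)
    overlap = active.sum(axis=1) >= 2
    return active, overlap
```

This captures the distributional intuition in the transcript: a frame with one high channel and the rest low is a single speaker, while a couple of comparably high channels suggests an overlap, even with very simple measures.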
[speaker008:] "what are you nodding about?" [speaker002:] The question is, [pause] like [pause] whether [disfmarker] Well, trying with and [disfmarker] [pause] with and without, yeah. [speaker008:] "Sorry, I'm just [disfmarker] I'm just going to sleep." [speaker002:] But then there's just one [@ @], like. [speaker004:] Yeah. [speaker001:] Actually, I saw a uh [disfmarker] a woman at the bus stop the other day who, um, was talking on her cell phone [vocalsound] speaking Japanese, and was bowing. [speaker005:] Yeah [comment] Yeah. [speaker001:] you know, profusely. [speaker002:] Oh, yeah, that's really common. [speaker003:] Yeah. [speaker006:] Ah. [speaker001:] Just, kept [disfmarker] [speaker005:] Yeah. [speaker004:] Wow. [speaker002:] It's very difficult if you try [disfmarker] while you're trying, say, to convince somebody on the phone it's difficult not to move your hands. Not [disfmarker] You know, if you watch people they'll actually do these things. [speaker004:] Mm hmm? [speaker002:] So. I still think we should try a [disfmarker] a meeting or two with the blindfolds, at least of this meeting that we have lots of recordings of [speaker008:] Mm hmm. [speaker004:] Yeah, I think th [speaker002:] Um, maybe for part of the meeting, we don't have to do it the whole meeting. [speaker004:] I think it's a great idea. [speaker002:] That could be fun. It'll be too hard to make barriers, I was thinking because they have to go all the way [speaker004:] W Yeah. [speaker002:] you know, I can see Chuck even if you put a barrier here. [speaker008:] Well, we could just turn out the lights. [speaker006:] Actually [pause] well also [disfmarker] I [disfmarker] I can say I made barr barriers for [disfmarker] so that [disfmarker] the [pause] stuff I was doing with Collin wha [pause] which [pause] just used, um [pause] this [pause] kind of foam board. [speaker002:] Y Yeah? [speaker006:] R really inexpensive. 
You can [disfmarker] you can masking tape it together, these are [pause] you know, pretty l large partitions. [speaker004:] Yeah. [speaker002:] But then we also have these mikes, is the other thing I was thinking, so we need a barrier that doesn't disturb [pause] the sound, [speaker008:] The acoustics. [speaker006:] It's true, it would disturb the, um [pause] the [disfmarker] the long range [disfmarker] [speaker002:] um [speaker004:] Blindfolds would be good. [speaker008:] I think, blindfolds. [speaker006:] it would [disfmarker] [speaker002:] I mean, it sounds weird but [disfmarker] but [disfmarker] [pause] you know it's [disfmarker] [vocalsound] it's cheap and, uh Be interesting to have the camera going. [speaker004:] Probably we should wait until after Adam's set up the mikes, But. [speaker006:] OK. I think we're going to have to work on the, uh [disfmarker] [pause] on the human subjects [vocalsound] form. [speaker001:] I'll be peeking. [speaker008:] Yeah, that's right, we didn't tell them we would be blindfolding. [speaker004:] That's [disfmarker] [speaker006:] "Do you mind being blindfolded while you're interviewed?" [speaker004:] that's [disfmarker] that's [disfmarker] that's the one that we videotape. So. Um, I [disfmarker] I wanna move this along. Uh [pause] I did have this other agenda item which is, uh [@ @] [disfmarker] it's uh a list which I sent to uh [disfmarker] a couple folks, but um I wanted to get broader input on it, So this is the things that I think we did [vocalsound] in the last three months obviously not everything we did but [disfmarker] but sort of highlights that I can [disfmarker] [pause] can [pause] tell [pause] s some outside person, you know, what [disfmarker] what were you [pause] actually working on. Um [pause] in no particular order [vocalsound] uh, one, uh, ten more hours of meeting r meetings recorded, something like that, you know from [disfmarker] from, uh [pause] three months ago. 
Uh [pause] XML formats and other transcription aspects sorted out [pause] and uh [pause] sent to IBM. Um, pilot data put together and sent to IBM for transcription, uh [pause] next batch of recorded data put together on the CD ROMs for shipment to IBM, [speaker008:] Hasn't been sent yet, but [disfmarker] It's getting ready. [speaker004:] But yeah, that's why I phrased it that way, yeah OK. Um [pause] human subjects approval on campus, uh [pause] and release forms worked out so the meeting participants have a chance to request audio pixelization of selected parts of the spee their speech. Um [vocalsound] audio pixelization software written and tested. Um [pause] [vocalsound] preliminary analysis of overlaps in the pilot data we have transcribed, and exploratory analysis of long distance inferences for topic coherence, that was [disfmarker] I was [disfmarker] [pause] wasn't [pause] sure if those were the right way [disfmarker] [pause] that was the right way to describe that because of that little exercise that [disfmarker] that you [comment] and [disfmarker] and Lokendra did. [speaker006:] What was that called? The, uh [pause] say again? [speaker004:] I [disfmarker] well, I I'm probably saying this wrong, but what I said was exploratory analysis of long distance inferences [vocalsound] for topic coherence. Something like that. Um [pause] so, uh [pause] I [disfmarker] [vocalsound] [pause] a lot of that was from, you know, what [disfmarker] what [disfmarker] what you two were doing so I [disfmarker] I sent it to you, and you know, please mail me, you know, the corrections or suggestions for changing [speaker008:] Mm hmm. [speaker004:] I [disfmarker] I don't want to make this twice it's length but [disfmarker] [vocalsound] but you know, just im improve it. Um Is there anything anybody [disfmarker] [speaker008:] I [disfmarker] I did a bunch of stuff for supporting of digits. 
[speaker004:] "Bunch of stuff for s" OK, maybe [disfmarker] maybe send me a sentence that's a little thought through about that. [speaker008:] So, [pause] OK, I'll send you a sentence that doesn't just say "a bunch of"? [speaker004:] "Bunch of stuff", yeah, "stuff" is probably bad too, [speaker008:] Yep. "Stuff" [pause] is not very technical. [speaker004:] Yeah, well. [speaker008:] I'll try to [pause] phrase it in passive voice. [speaker004:] Yeah. Yeah, yeah, [speaker001:] Technical stuff. [speaker004:] "range of things", yeah. Um [pause] and [disfmarker] and you know, I sort of threw in what you did with what Jane did on [disfmarker] in [disfmarker] under the, uh [pause] uh [vocalsound] preliminary analysis of overlaps. Uh [vocalsound] uh [vocalsound] Thilo, can you tell us about all the work you've done on this project in the last, uh [pause] last three months? [speaker005:] Yeah. [speaker003:] So [disfmarker] what is [disfmarker] what [disfmarker] Um. [speaker004:] That's [disfmarker] [speaker003:] Not really. [speaker001:] It's too complicated. [speaker003:] Um, [pause] I didn't get it. Wh what is "audio pixelization"? [speaker004:] Uh, audio pix wh he did it, so why don't you explain it quickly? [speaker008:] It's just, uh [pause] beeping out parts that you don't want included in the meeting so, you know you can say things like, "Well, this should probably not be on the record, but beep" [speaker003:] OK, OK. I got that. [speaker004:] Yeah. We [disfmarker] we [disfmarker] we spent a [disfmarker] a [disfmarker] a fair amount of time early on just talk dealing with this issue about op w e e [vocalsound] we realized, "well, people are speaking in an impromptu way and they might say something that would embarrass them or others later", and, how do you get around that [speaker003:] OK. [speaker004:] so in the consent form it says, well you [disfmarker] we will look at the transcripts later and if there's something that you're [pause] unhappy with, yeah. 
[speaker003:] OK, and you can say [disfmarker] OK. [speaker004:] But you don't want to just totally excise it because um uh, well you have to be careful about excising it, how [disfmarker] how you excise it keeping the timing right and so forth so that at the moment tho th the idea we're running with is [disfmarker] is h putting the beep over it. [speaker008:] Yeah, you can either beep or it can be silence. [speaker003:] OK. [speaker008:] I [disfmarker] I couldn't decide. which was the right way to do it. [speaker005:] Ah, yeah. [speaker008:] Beep is good auditorily, if someone is listening to it, there's no mistake that it's been beeped out, [speaker003:] Yeah. Yeah. [speaker008:] but for software it's probably better for it to be silence. [speaker001:] No, no. You can [disfmarker] you know, you could make a m as long as you keep using the same beep, people could make a model of that beep, [speaker006:] Hmm. [speaker008:] Yep. [speaker001:] and [disfmarker] [speaker006:] I like that idea. [speaker008:] And I use [disfmarker] it's [disfmarker] it's, uh [pause] it's an A below middle C beep, [speaker002:] I think the beep is a really good idea. [speaker006:] It's very clear. [speaker001:] Yeah. [speaker006:] Then you don't think it's a long pause. [speaker002:] Also [disfmarker] [speaker008:] so [speaker001:] Yeah, it's more obvious that there was something there than if there's just silence. [speaker004:] Yeah, that [disfmarker] I mean, he's [disfmarker] he's removing the old [pause] thing [speaker006:] Yeah. [speaker003:] Yeah. [speaker005:] Yeah [speaker008:] Yep. [speaker004:] and [disfmarker] and [disfmarker] and [disfmarker] [speaker001:] Yea right. Right. [speaker008:] Yeah, it's not [disfmarker] [speaker001:] But I mean if you just replaced it with silence, [pause] it's not clear whether that's really silence or [disfmarker] [speaker003:] Yeah. [speaker005:] Yeah. [speaker006:] Yeah, I agree. [speaker008:] Yep. [speaker005:] Yeah. [speaker004:] Yeah. 
[speaker006:] One [disfmarker] one question. Do you do it on all channels? [speaker008:] Of course. [speaker006:] Interesting. I like that. [speaker008:] Yeah you have to do it on all channels because it's, uh [pause] audible. [speaker004:] Yeah. [speaker006:] Yeah, I like that. Very clear. Very clear. [speaker008:] Uh, it's [disfmarker] it's potentially audible, you could potentially recover it. [speaker004:] Ke keep a back door. [speaker006:] Well, the other thing that [disfmarker] you know, I mean the [disfmarker] the alternative might be to s [speaker008:] Yeah. Well, I [disfmarker] I haven't thrown away any of the meetings that I beeped. Actually yours is the only one that I beeped and then, uh [pause] the ar DARPA meeting. [speaker002:] Notice how quiet I am. [speaker008:] Sorry, and then the DARPA meeting I just excised completely, [speaker006:] Yeah. [speaker008:] so it's in a private directory. [speaker006:] That's great. [speaker002:] You have some people who only have beeps as their speech in these meetings. [speaker006:] Yeah. [speaker004:] OK. [speaker001:] They're easy to find, then. [speaker004:] Alright, so, uh [pause] I think we should, uh [pause] uh, go on to the digits? [speaker008:] OK. [speaker006:] I have one concept a t I [disfmarker] I want to say, which is that I think it's nice that you're preserving the time relations, s so you're [disfmarker] you're not just cutting [disfmarker] you're not doing scissor snips. [speaker008:] Right. [speaker006:] You're [disfmarker] you're keeping the, uh [pause] the time duration of a [disfmarker] de deleted [disfmarker] deleted part. [speaker002:] Yeah, definitely. [speaker004:] Yeah. [speaker006:] OK, good, digits. [speaker008:] Yeah, since we wanna [pause] possibly synchronize these things as well. Oh, I should have done that. Shoot. [speaker006:] It's great. [speaker008:] Oh well. [speaker004:] Yeah. 
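The audio pixelization scheme described above — overwriting a censored span with a beep while keeping its exact duration, applied to every channel so the speech can't be recovered from a parallel recording — can be sketched roughly as below. This is a minimal illustration, not the actual tool discussed in the meeting; the function names are made up, and 220 Hz is assumed for "A below middle C".

```python
import math

def beep_out(samples, start, end, sample_rate=16000, freq=220.0, amp=0.3):
    """Replace samples[start:end] with a sine beep of identical length.

    Keeping the span's duration intact preserves cross-channel timing,
    which matters if the recordings are later synchronized.
    freq=220.0 is an assumed value for "A below middle C" (A3).
    """
    censored = list(samples)
    for i in range(start, end):
        t = (i - start) / sample_rate
        censored[i] = amp * math.sin(2 * math.pi * freq * t)
    return censored

def beep_out_all_channels(channels, start, end, **kw):
    """Apply the same beep span to every channel: the censored speech
    is potentially audible (and recoverable) on all of them."""
    return [beep_out(ch, start, end, **kw) for ch in channels]
```

A silence variant would just write zeros over the span instead of the sine, trading auditory clarity for friendlier downstream software, as debated above.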
Oh [speaker002:] So I guess if there's an overlap, [pause] like, if I'm saying something that's [pause] bleepable and somebody else overlaps during it they also get bleeped, too? [speaker008:] You'll lose it. There's no way around that. [speaker004:] Yeah. Um [pause] I d I did [disfmarker] before we do the digits, I did also wanna remind people, uh [pause] [vocalsound] please do send me, you know, uh thoughts for an agenda, [speaker008:] Agenda? [speaker004:] yeah that [disfmarker] that would be that'd be good. [speaker006:] Good. [speaker004:] Eh So that, uh, people's ideas don't get [speaker008:] Thursday crept up on me this week. [speaker004:] yeah, well it does creep up, doesn't it? OK. [speaker002:] And, I wanted to say, I think this is really interesting [pause] analysis. [speaker008:] It's cool stuff, definitely. [speaker006:] Thank you. Thank you. [speaker002:] I meant to say that before I started off on the [pause] Switchboard stuff. [speaker008:] I was gonna say can you do that for the other meetings, [speaker002:] It's neat. [speaker008:] can you do it for them? And, no actually, you can't. [speaker002:] Yeah. Does it take [disfmarker] [speaker001:] Actually [disfmarker] actually I [disfmarker] I thought that's what you were giving us was another meeting and I was like, "Oh, OK!" [speaker006:] Thank you. [speaker008:] "Ooo, cool!" [speaker006:] Yeah. Aw, thanks. [speaker002:] How long does it [pause] take, just briefly, like [pause] t to [disfmarker] [pause] OK. [pause] to label the, [speaker006:] No. I have the script now, so, I mean, it can work off the, uh [pause] other thing, [speaker008:] It's [disfmarker] As soon as we get labels, yep. [speaker002:] OK. [speaker001:] But it has to be hand labeled first? [speaker006:] but [disfmarker] Uh, well, yeah. Because, uh [pause] well, I mean [pause] once his [disfmarker] his algorithm is up and running then we can do it that way. [speaker008:] If it works well enough. Right now it's not. 
Not quite to the point where it works. [speaker002:] OK. [speaker006:] But [pause] I [disfmarker] I just worked off of my [speaker004:] OK, go ahead [speaker002:] It's really neat. [speaker006:] Thanks. Appreciate that. I think [disfmarker] what I [disfmarker] what this has, uh, caused me [disfmarker] so this discussion caused me to wanna subdivide these further. I'm gonna take a look at the, uh [pause] backchannels, how much we have anal I hope to have that for next time. [speaker008:] Yeah, my [disfmarker] my algorithm worked great actually on these, [speaker001:] That'd be interesting. [speaker008:] but when you wear it like that or with the uh, lapel [pause] or if you have it very far from your face, that's when it starts [pause] failing. [speaker001:] Mm hmm. [speaker002:] Well, I can wear it, I mean if you [disfmarker] [speaker001:] Oh. [speaker008:] It doesn't matter. [speaker002:] OK. [speaker008:] I mean, we want it to work, right? [speaker001:] It's too late now. [speaker008:] I [disfmarker] I don't want [pause] to change the way we do the meeting. [speaker002:] I feel like this troublemaker. [speaker008:] It's uh [disfmarker] [pause] so, it was just a comment on the software, not a comment on [vocalsound] prescriptions on how you wear microphones. [speaker002:] OK. [speaker004:] OK, that's [disfmarker] let's [disfmarker] let's [disfmarker] let's do digits. [speaker008:] Get the bolts, "whh whh" [speaker006:] Let's do it. [speaker008:] OK. [speaker006:] OK. [speaker002:] I'm sorry. [speaker008:] OK, thank you. [speaker006:] Do you want us to put a mark on the bottom of these when they've actually been read, or do you just [pause] i i the only one that wasn't read is [disfmarker] is known, so we don't do it. OK. [speaker007:] headphones that aren't so uncomfortable. [speaker004:] Hmm. [speaker002:] I think [disfmarker] Well, this should be off the record, but I think [disfmarker] [speaker004:] Uh, OK. [speaker001:] We're not recording yet, are we? 
[speaker007:] Well, I don't think [disfmarker] No. [speaker006:] No, uh, that [disfmarker] that wasn't recorded. [speaker007:] Um, I don't think they're designed to be over your ears. [speaker002:] Yeah, I know. It just [disfmarker] it really hurts. It gives you a headache, like if you [disfmarker] [speaker006:] Temple squeezers. [speaker002:] On your temple [disfmarker] [speaker007:] Yep. [speaker002:] Yeah. Yeah. [speaker004:] Mm hmm. [speaker007:] But I definitely [pause] haven't figured it out. [speaker001:] Um, Meeting Recorder meeting. [speaker006:] I guess I have to d stop doing this sigh of contentment, you know, after sipping cappuccino or something. [speaker002:] Yeah, with the [disfmarker] We kno I know. [speaker007:] "Sip, sigh." [speaker006:] I was just noticing a big s [speaker002:] We know exactly how much you have left in your cup. [speaker004:] So are we recording now? Is this [disfmarker] [speaker005:] Yeah. [speaker004:] Oh! We're [disfmarker] we're [disfmarker] we're live. OK. [speaker005:] Yeah. [speaker004:] So, uh, [vocalsound] what were we gonna talk about again? So we said [disfmarker] we said data collection, which we're doing. [speaker002:] Were we gonna do digits? [speaker001:] OK. Do we do th do you go around the room [pause] and do names or anything? [speaker007:] I think that [disfmarker] [speaker005:] It's a good idea. [speaker007:] u usually we've done that and also we've s done digits as well, but I forgot to print any out. So. Besides with this big a group, it would take too much time. [speaker004:] No. I it'd be even better with this big [disfmarker] [speaker002:] You can write them on the board, if you want. [speaker005:] Which way is [disfmarker] [speaker007:] Yeah, but it takes too much time. [speaker005:] Mari? [speaker008:] What [disfmarker] [speaker001:] What? [speaker005:] Y I think your [disfmarker] your [disfmarker] your thing [nonvocalsound] may be pointing in a funny direction. [speaker004:] It's not that long. 
[speaker005:] Sort of it's [disfmarker] it helps if it points sort of upwards. [speaker001:] Whoops. [speaker005:] Sort of it [disfmarker] you know. Yeah. [speaker001:] Would it [disfmarker] m [speaker005:] So that thing [disfmarker] the little [disfmarker] th that part should be pointing upwards. [speaker004:] w u [speaker001:] So [disfmarker] Oh, this thing. [speaker005:] That's it. Yeah. [speaker008:] Otherwise you just get a heartbeats. [speaker001:] Yeah. It's kind of [disfmarker] [speaker004:] Oh, yeah, the element, yeah, n should be as close to you [disfmarker] your mouth as possible. [speaker001:] Yeah. [speaker005:] That's good. [speaker001:] OK. [speaker005:] That kind of thing is good. [speaker008:] It's a [disfmarker] [speaker001:] This w Alright. [speaker005:] Yeah. Oh, yeah. [speaker004:] Yeah. [speaker001:] How's that working? [speaker005:] It's a [disfmarker] It's working. [speaker001:] OK. [speaker004:] Alright. So what we had [pause] was that we were gonna talk about data collection, and, um, uh, you [disfmarker] you put up there data format, [speaker001:] Um. [speaker004:] and other tasks during data collection, [speaker001:] So, I think the goal [disfmarker] the goal was what can we do [disfmarker] how can you do the data collection differently to get [disfmarker] [speaker004:] and [disfmarker] [speaker001:] what can you add to it to get, um, some information that would be helpful for the user interface design? Like [disfmarker] [speaker007:] Uh, especially for querying. [speaker001:] Especially for querying. So, getting people to do queries afterwards, getting people to do summaries afterwards. Um. [speaker008:] Well, one thing that came up in the morning [disfmarker] in the morning was the, um, i uh, if he [disfmarker] I, um [disfmarker] if he has [disfmarker] s I [disfmarker] I don't remember, Mister Lan Doctor Landry? La Landay? [speaker007:] Landay. James. 
[speaker008:] So he has, um, these, uh, um, tsk [comment] note taking things, then that would sort of be a summary which you wouldn't have to solicit. [speaker001:] Mm hmm. [speaker008:] y if [disfmarker] if we were able to [disfmarker] to do that. [speaker001:] Well, if [disfmarker] if you actually take notes as a summary as opposed to n take notes in the sense of taking advantage of the time stamps. So action item or uh, reminder to send this to so and so, blah blah blah. [speaker008:] Mm hmm. [speaker001:] So that wouldn't be a summary. That would just be [disfmarker] that would b relate to the query side. [speaker008:] Mm hmm. [speaker007:] But if we had the CrossPads, we could ask people, you know, if [disfmarker] if something comes up [vocalsound] write it down and mark it [vocalsound] [pause] somehow, you know. [speaker005:] Right. I mean, we [disfmarker] because you'd have several people with these pads, you could collect different things. I mean, cuz I tend to take notes which are summaries. [speaker001:] Right. [speaker005:] And so, you know [disfmarker] [speaker006:] I mean, the down side to that is that he sort of indicated that the, uh, quality of [vocalsound] the handwriting recognition was quite poor. [speaker001:] Well [disfmarker] [speaker007:] But that's alright. I don't think there'd be so many that you couldn't have someone clean it up [speaker001:] So [disfmarker] [speaker007:] pretty easily. [speaker001:] Yeah. We also could come up with some code for things that people want to do so that [disfmarker] for frequent things. [speaker006:] Yeah. [speaker001:] And the other things, people can write whatever they want. I mean, it's to some extent, uh, for his benefit. So, if that [disfmarker] you know, if [disfmarker] if we just keep it simple then maybe it's still useful. [speaker006:] Right. [speaker007:] Yeah. [speaker004:] I just realized we skipped the part that we were saying we were gonna do at the front where we each said who we were. 
[speaker008:] The roll call. [speaker001:] Right. [speaker004:] Roll call. [speaker001:] I thought you did that on purpose. But anyway, shall we do the roll call? [speaker004:] No, not a No, I just [disfmarker] My mind went elsewhere. So, uh, yeah, I'm Morgan, and where am I? I'm on channel three. [speaker007:] And I'm Adam Janin on channel A. [speaker008:] I'm Jane Edwards, I think on channel B. [speaker005:] I'm Dan Ellis. [speaker006:] Eric on channel nine. [speaker002:] Liz, on channel one. [speaker001:] Mari on channel zero. [speaker003:] Katrin on channel two. [speaker008:] Should we have used pseudo names? Should we do it a second time with pseudo No. [vocalsound] No. [speaker004:] I'm Rocky Raccoon [vocalsound] on channel [disfmarker] [speaker007:] And, uh, do you want to do the P D As and the [pause] P Z [speaker005:] Let me, uh, turn that off. Oh. PZM nearest, nearest, next nearest. Next one. [speaker008:] Next nearest. [speaker007:] Far. [speaker005:] Furthest. PDM right, PZA right [disfmarker] PDA right, PDA left. [speaker008:] OK. [speaker005:] Thanks. [speaker007:] Yeah, and eventually once this room gets a little more organized, the Jimlets [comment] will be mounted under the table, and these guys will be permanently mounted somehow. You know, probably with double sided tape, but [disfmarker] So. You [disfmarker] So we won't have to go through that. [speaker001:] Hmm. [speaker008:] I have a question on protocol in these meetings, which is when you say "Jimlet" and the person listening won't know what that is, sh shou How [disfmarker] how do we get [disfmarker] Is that important information? You know, the Jimlet [disfmarker] I mean, the box that contains the [disfmarker] [speaker004:] Well, I mean, suppose we broaden out and go to a range of meetings besides just these internal ones. There's gonna be lots of things that any group of people who know each other have in column [disfmarker] common [comment] that we will not know. [speaker008:] Mm hmm. 
[speaker001:] Right. [speaker008:] OK. [speaker001:] Right. So the there will be jargon that we he There'll be transcription errors. [speaker008:] Good. [speaker004:] Yeah. [speaker008:] OK. [speaker004:] I mean, we [disfmarker] we were originally gonna do this with VLSI design, and [disfmarker] and [disfmarker] and the reason we didn't go straight to that was because immediately ninety percent of what we heard would be [vocalsound] jargon to [disfmarker] to us. So. [speaker007:] Well, that was just one of the reasons. But, yeah, definitely. [speaker004:] Yeah. [speaker008:] OK. Good. [speaker004:] That [disfmarker] that's right. There were others of course. Yeah. [speaker008:] OK, so we were on the data collection [pause] [comment] and the summary issue. [speaker004:] Right. We can go back. [speaker001:] So, uh, u u So, actually there's kind of three issues. There's the CrossPad issue. Should we do it and, if so, what'll we have them do? Um, do we have s people write summaries? Everybody or one person? And then, do we ask people for how they would query things? [speaker006:] There's [disfmarker] there're sub problems in that, [speaker001:] Is that [disfmarker] [speaker006:] in that where [disfmarker] or when do you actually ask them about that? I mean, that was [disfmarker] One thing I was thinking about was is that Dan said earlier that, you know, maybe two weeks later, which is when you would want to query these things, you might ask them then. [speaker001:] Right. Right. [speaker006:] But there's a problem with that in that if [pause] you're not [disfmarker] If you don't have an interactive system, it's gonna be hard to go beyond sort of the first level of question. [speaker001:] Right. [speaker006:] Right. And furth id explore the data further. [speaker001:] Right. [speaker006:] So. 
[speaker007:] And [disfmarker] [speaker004:] There's [disfmarker] there's another problem which is, um, we certainly do want to branch out beyond, uh, uh, recording meetings about Meeting Recorder. And, uh, once we get out beyond our little group, the people's motivation factor, uh, reduces enormously. And if we start giving them a bunch of other things to do, how [disfmarker] you know, we [disfmarker] we did n you know another meeting here for another group and [disfmarker] and, uh, they were fine with it. But if we'd said, "OK, now all eight of you have to [disfmarker] [vocalsound] have to come up with, uh, the summar" [speaker007:] Well, I asked them to and none of them did. [speaker004:] t See? [speaker008:] Mm hmm. [speaker007:] So, I [disfmarker] I asked them to send me ideas for queries after the meeting [speaker004:] There we go. [speaker001:] They [disfmarker] [speaker007:] and no one ever did. [speaker005:] Mm hmm. [speaker007:] I didn't follow up either. So I didn't track them down and say "please do th do it now". [speaker001:] Yeah. [speaker007:] But, uh, no one spontaneously provided anything. [speaker004:] I I'm worried that if you did [disfmarker] even if you did push them into it, it [disfmarker] it [disfmarker] it might be semi random, [speaker001:] Right. [speaker004:] uh, as opposed to what you'd really want to know if you were gonna use this thing. [speaker005:] Right. [speaker007:] I just don't know how else to generate the queries other than getting an expert to actually listen to the meeting and say that's important, [speaker001:] OK. [speaker007:] that might be a query ". [speaker008:] Tsk. Well, there is this other thing which y which you were alluding to earlier, which is, um, there are certain key words like, you know, "action item" and things like that, which could be used in, uh, t to some degree finding the structure. [speaker005:] Although [disfmarker] [speaker001:] Yeah. 
W [speaker008:] And [disfmarker] and I also, um, was thinking, with reference to the n uh, note taking, the advantage there is that you get structure without the person having to do something artificial later. And the fir third thing I wanted to say is the summaries afterwards, um, I think they should be recorded instead of written because I think that, um, it would take so long for people to write that I think you wouldn't get as good a summary. [speaker001:] How about this idea? That normally at most meetings somebody is delegated to be a note taker. [speaker008:] Yeah, good. Good point. Yeah. [speaker001:] And [disfmarker] So why don't we just use the notes that somebody takes? [speaker007:] I mean, that gives you a summary but it doesn't really [disfmarker] How do you generate queries from that? [speaker005:] Well. But, I mean, maybe a summary is one of the things we'd want from the output of the system. [speaker008:] Yeah. [speaker005:] Right? [speaker001:] Right. [speaker005:] I mean, they're something. It's a [disfmarker] a kind of output you'd like. [speaker002:] Actually [disfmarker] [speaker007:] Uh, James and I were talking about this during one of the breaks. [speaker002:] And so [disfmarker] [speaker007:] And the problem with that is, I'm definitely going to do something with information retrieval even if it's sort of not full full bore what I'm gonna do for my thesis. [speaker001:] Right. [speaker007:] I'm gonna do something. I'm not gonna do anything with summarization. And so if someone wants to do that, that's fine, but it's not gonna be me. [speaker004:] Well, I think that we [disfmarker] I mean, the [disfmarker] the f the core thing is that you know once we get some of these issues nailed down, we need to do a bunch of recordings [speaker001:] Well [disfmarker] [speaker004:] and send them off to IBM and get a bunch of transcriptions even if they're slightly flawed [speaker007:] Yep. 
[speaker004:] or need some other [disfmarker] And then we'll have some data there. And then, i i we can start l looking and thinking, what do we want to know about these things [speaker001:] Yeah. [speaker004:] and [disfmarker] at the very least. [speaker005:] Mm hmm. [speaker002:] I actually want to say something about the note pad. [speaker001:] Yeah [disfmarker] [speaker002:] So, if you could sense just when people are writing, and you tell them not to doodle, or try not to [pause] be using that for other purposes, [comment] and each person has a note pad. They just get it when they come in the room. Then you c you can just have a fff [comment] plot of wh you know, who's writing when. [speaker004:] Hmm. [speaker005:] Activity. [speaker002:] That's all you [disfmarker] [speaker005:] Yeah. [speaker002:] And, you can also have notes of the meeting. But I bet that's [disfmarker] that will allow you to go into the [disfmarker] sort of the hot places where people are writing things down. [speaker004:] Mm hmm. [speaker007:] Oh, I see. [speaker002:] I mean, you can tell when you're in a meeting when everybody stops to write something down that something was just said. [speaker001:] Mm hmm. [speaker002:] It may not be kept in the later summary, but at that point in time is was something that was important. [speaker008:] Mm hmm. [speaker002:] And that wouldn't take any extra [disfmarker] [speaker008:] That's a nice idea. [speaker002:] Or someone could just pu you could just put your hand on the pad [speaker004:] It [disfmarker] [speaker001:] Mm hmm. [speaker003:] Mm hmm. [speaker002:] and go like that if you want to. [speaker004:] That's a good idea [speaker002:] It's [disfmarker] [speaker004:] but that doesn't [disfmarker] Maybe I'm missing something, but that doesn't get to the question of how we come up with queries, right? 
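The pen-activity idea just described — a plot of who is writing when, used to locate the "hot" moments where several people write something down at once — could be sketched along these lines. This is a hypothetical illustration, assuming each pad reports timestamped writing events; the windowing and threshold are arbitrary choices, not anything specified in the meeting.

```python
def writing_hotspots(events, window=10.0, min_writers=3):
    """events: list of (person, timestamp) pen-activity samples.

    Bins time into fixed windows and returns the start times of windows
    in which at least min_writers distinct participants were writing --
    candidate "hot" points worth jumping to in the recording.
    """
    buckets = {}
    for person, t in events:
        buckets.setdefault(int(t // window), set()).add(person)
    return sorted(b * window for b, people in buckets.items()
                  if len(people) >= min_writers)
```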
[speaker002:] Well, then you can go to the points where the [disfmarker] you could actually go to those points in time [speaker001:] Well, what it does [disfmarker] [speaker002:] and find out what they were talking about. And you r [speaker004:] Yeah. [speaker001:] Well, what it does is provide a different [disfmarker] [speaker002:] And [disfmarker] [speaker007:] Uh, y [speaker001:] I [disfmarker] I think it's an interesting thing. I don't think it gets at the [disfmarker] the queries per se, but it does give us an information fusion sort of thing that, you know, you wanna i say "what were the hot points of the meeting?" [speaker002:] Yeah. [speaker004:] That [disfmarker] that's what I mean, is that I think it gets at something interesting but if we were asking the question, which I thought we were, of [disfmarker] of [disfmarker] of, um, "how do we figure out what's the nature of the queries that people are gonna want to ask of such a system?", knowing what's important doesn't tell you what people are going to be asking. [speaker002:] But I bet it's a good [pause] superset of it. [speaker004:] Does it? [speaker005:] Well, see, there are th [speaker001:] Well, yeah. I think you could say they're gonna ask about, uh, when [disfmarker] uh, when did so and so s talk about blah. And at least that gives you the word [pause] that they might run a query on. [speaker002:] At least you can find the locations where there are maybe keywords [speaker007:] I mean, i this would tell you what the hit is, [speaker004:] Maybe. [speaker002:] and [disfmarker] [speaker007:] not what the query is. What [disfmarker] [speaker002:] Right, right. [speaker001:] Right. It'll tell you the hit but not the query. [speaker002:] But I think [disfmarker] I think thinking about queries is a little bit dangerous right now. [speaker007:] And so you could [disfmarker] you can generate a query from the hits, but [disfmarker] [speaker001:] Right. 
[speaker002:] We don't even know what [disfmarker] I mean, if you want to find out what any user will use, that might be true for one domain and one user, [speaker004:] Mm hmm. [speaker002:] but I mean a different domain and a different user [disfmarker] [speaker008:] Mm hmm. [speaker004:] Yeah, but we're just looking for a place to start with that [speaker002:] Um. [speaker004:] because, you know, th what [disfmarker] what [disfmarker] what James is gonna be doing is looking at the user interface and he's looking at the query in [disfmarker] in [disfmarker] i We [disfmarker] we have five hours of pilot data of the other stuff but we have zero hours of [disfmarker] of [disfmarker] of queries. So he's just sort of going "where [disfmarker] where do I [disfmarker] where do I start?" [speaker001:] w Well, th you could do [disfmarker] I think the summaries actually may help get us there, [speaker004:] OK. [speaker001:] for a couple reasons. One, if you have a summary [disfmarker] if you have a bunch of summaries, you can do a word frequency count and see what words come up in different types of meetings. [speaker004:] Mm hmm. [speaker001:] So "action item" is gonna come up whether it's a VLSI meeting, or speech meeting, or whatever. So words that come up in different types of meetings may be something that you would want to query about. [speaker007:] Mm hmm. [speaker001:] Um, the second thing you could possibly do with it is just run a little pilot experiment with somebody saying "here's a summary of a meeting, what questions might you want to ask about it to go back?" [speaker007:] Yeah, I think that's difficult because then they're not gonna ask the questions that are in the summary. [speaker001:] Well [disfmarker] [speaker007:] But, I think it would give [disfmarker] [speaker001:] That's one possi one possible scenario, though, is you have the summary, [speaker007:] Mm hmm. [speaker001:] and you want to ask questions to get more detail. 
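The word-frequency idea above — counting which words recur across summaries of different meeting types, so that terms like "action item" surface as candidate query vocabulary — might look something like this. It is only a sketch of the suggestion: the stopword list and the strip-punctuation tokenization are placeholder assumptions.

```python
from collections import Counter

# Placeholder stopword list; a real one would be much larger.
STOPWORDS = {"the", "a", "of", "to", "and", "we", "in", "is", "for", "on"}

def shared_keywords(summaries, min_meetings=2):
    """Return words that appear in at least min_meetings different
    summaries -- terms that recur across meeting types (e.g. "action",
    "item") and so might be worth supporting as queries."""
    per_meeting = [set(w.lower().strip(".,:;") for w in s.split()) - STOPWORDS
                   for s in summaries]
    counts = Counter(w for words in per_meeting for w in words)
    return sorted(w for w, c in counts.items() if c >= min_meetings)
```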
[speaker007:] th Yeah, I think it has to be a participant. Well, it doesn't have to be. OK. So that [disfmarker] that is another use of Meeting Recorder that we haven't really talked about, which is for someone else, as opposed to as a [pause] remembrance agent, which is what had been my primary thought in the information retrieval part of it would be. But, uh, I guess if you had a meeting participant, they could use the summary to refresh themselves about the meeting and then make up queries. But it's not [disfmarker] [speaker004:] Mm hmm. [speaker007:] I don't know how to do it if [disfmarker] until you have a system. [speaker002:] The summary is actually gonna drive the queries then. [speaker005:] Yeah. [speaker001:] Mmm. [speaker002:] I mean, your research is going to be very circular. [speaker007:] Yeah, that [disfmarker] that's what I was saying. [speaker001:] Yeah. [speaker005:] But th there is this, um [disfmarker] [vocalsound] [vocalsound] [vocalsound] There is this class of queries, which are the things that you didn't realize were important at the time but some in retrospect you think "oh, hang on, didn't we talk about that?" And it's something that didn't appear in the summary but you [disfmarker] [speaker001:] Mm hmm. [speaker005:] And that's kind of what this kind of, uh, complete data capture is kind of nicest for. [speaker001:] Right. [speaker002:] Right. [speaker005:] Cuz it's the things that you wouldn't have bothered to make an effort to record but they get recorded. [speaker001:] Right. [speaker005:] So, I mean [disfmarker] And th there's no way of generating those, u u until we just [disfmarker] until they actually occur. You know, it's like [disfmarker] [speaker002:] But you could always post hoc label them. [speaker005:] Right, right. Exactly. [speaker002:] Yeah. [speaker005:] But I mean, it's difficult to sort of say "and if I was gonna ask four questions about this, what would they be?" [speaker002:] Yeah. 
[speaker005:] Those aren't the kind of things that come up. [speaker007:] But at least it would get us started. [speaker005:] Oh, yeah. Yeah, sure. [speaker008:] I also think that w if [disfmarker] if you can use the summaries as an indication of the important points of the [disfmarker] of the meeting, then you might get something like [disfmarker] y So if th if the obscure item you want to know more about was some form of data collection, you know, maybe the summary would say, you know, "we discussed types of na data collection". And, you know [disfmarker] And [disfmarker] and maybe you could get to it by that. If you [disfmarker] if you had the [disfmarker] the larger structure of the [disfmarker] of the discourse, then if you can categorize what it is that you're looking for with reference to those l those larger headings, then you can find it even if you don't have a direct route to that. [speaker007:] Mmm. [speaker008:] I think that [disfmarker] [speaker007:] Although it seems like that's, um, a high burden on the note taker. That's a pretty fine grain that the note taker will have to take. [speaker002:] Maybe Landay can put a student in to be a note taker. [speaker001:] I th No. I think you got to have somebody who knows the pro knows the topic or [disfmarker] you know, whose job it is delegated to be the note taker. [speaker002:] No? [speaker004:] Mm hmm. [speaker001:] Somebody who's part of the meeting. [speaker002:] No, I mean, but someone who can come sit in on the meetings and then takes the notes with them that the real note taker [disfmarker] [speaker007:] But they [disfmarker] [speaker002:] And that way that one student has, you know, a rough idea of what was going on, and they can use it for their research. I mean, this isn't really necessarily what you would do in a real system, [speaker007:] Mm hmm. [speaker002:] because that that's a lot of trouble [speaker004:] Mm hmm. [speaker002:] and maybe it's not the best way to do it. [speaker004:] Mm hmm. 
[speaker002:] But if he has some students that want to study that then they should sort of get to know the people and attend those meetings, and get the notes from the note taker or something. [speaker007:] Right. [speaker004:] Hmm. [speaker007:] Well, I think that's a little bit of a problem. Their sort of note taking application stuff they've been doing for the last couple of years, and I don't think anyone is still working on it. I think they're done. [speaker004:] Yeah. [speaker007:] Um, so I'm not sure that they have anyone currently working on notes. So what we'd have to interest someone in is the combination of note and speech. [speaker002:] Mm hmm. [speaker007:] And so the question is "is there such a person?" And I think right now, the answer is "no". [speaker004:] I've b been thinking [disfmarker] [speaker001:] Well [speaker007:] We'll just have to see. [speaker004:] I've been thinking about it a little bit here [disfmarker] about the [disfmarker] uh, th this, e um [disfmarker] I think that the [disfmarker] now I'm thinking that the summary [disfmarker] a summary, uh, is actually a reasonable, uh, bootstrap into this [disfmarker] into what we'd like to get at. It's [disfmarker] it's not ideal, but we [disfmarker] you know, we [disfmarker] we have to get started someplace. So I was [disfmarker] I was just thinking about, um, suppose we wanted to get [disfmarker] w We have this collection of meeting. We have five hours of stuff. Uh, we get that transcribed. So now we have five hours of meetings and, uh, you ask me, uh, uh, "Morgan, what d you know, what kind of questions do you want to ask?" Uh, I wouldn't have any idea what kind of questions I want to ask. I'd have to get started someplace. 
So in fact if I looked at summary of it, I'd go "oh, yeah, I was in that meeting, I remember that, um, what was the part that [disfmarker]" And [disfmarker] and th I think that might then help me to think of things [disfmarker] even things that aren't listed in the summary, but just as a [disfmarker] as a [disfmarker] as a refresh of what the general thing was going on in the meeting. [speaker007:] Mm hmm. [speaker001:] I think it serves two purpo purposes. One, as sort of a refresh to help bootstrap queries, [speaker004:] Mm hmm. Yeah. [speaker001:] but also, I mean, maybe we do want to generate summaries. [speaker004:] Well, yeah. That's true too. [speaker001:] And then it's [disfmarker] you know, it's kind of a key. [speaker005:] Hmm. [speaker007:] Yeah, absolutely. Then you want to have it. [speaker002:] So how does the summary get generated? [speaker001:] Uh [disfmarker] [speaker002:] I'm not against the idea of a summary, [speaker001:] Well, i i [disfmarker]? [speaker007:] By hand. [speaker002:] but I wanted to think carefully about who's generating it [speaker001:] Or, d o [speaker002:] and how [disfmarker] because the summary will drive the queries. So [disfmarker] [speaker001:] What I [disfmarker] I think, you know, in most meetings, this one being [pause] different, but in most meetings that I attend, there's somebody t explicitly taking notes, frequently on a laptop [disfmarker] Um, you can just make it be on a laptop, [speaker002:] Mm hmm. [speaker001:] so then yo you're dealing with ASCII and not somebody [disfmarker] you don't have to go through handwriting recognition. Um, and then they post edit it into, uh, a summary and they email it out for minutes. I mean, that happens in most meetings. [speaker008:] I I [disfmarker] I think that, um, there's [disfmarker] we're using "summary" in two different ways. So what you just described I would describe as "minutes". [speaker007:] Minutes. [speaker005:] Yeah. [speaker001:] Right. 
[speaker008:] And what I originally thought was, um, if you asked someone "what was the meeting about?" [speaker002:] Yeah. OK. [speaker008:] And then they would say well, we talked about this [speaker001:] Hmm. [speaker008:] and then we talked about that, and so and so talked about [disfmarker] And then you'd have, like [disfmarker] I [disfmarker] e My thought was to have multiple people summarize it, on recording rather than writing because writing takes time and you get irrelevant other things that u take time, that [disfmarker] [speaker001:] Mm hmm. [speaker008:] Whereas if you just say it immediately after the meeting, you know, a two minute summary of what the meeting was about, I think you would get, uh, with mult See, I [disfmarker] I also worry about having a single note taker because that's just one person's perception. And, um, you know, it [disfmarker] it's releva it's relative to what you're focus was on that meeting, [speaker004:] Mm hmm. [speaker008:] and [disfmarker] and people have different [comment] major topics that they're interested in. [speaker004:] A [speaker008:] So, my proposal would be that it may be worth considering both of those types, you know, the note taking and a spontaneous oral summary afterwards, [speaker001:] OK. [speaker005:] Yeah. [speaker008:] no longer than two minutes, [speaker004:] Adam, you can [disfmarker] [speaker008:] from multiple people. Yeah. [speaker004:] you can correct me on this, but [disfmarker] but, uh, my impression was that, uh, pretty much, uh, true that the meetings here, nobody sits with a w uh, with a laptop [speaker007:] Never. [speaker004:] and [disfmarker] [speaker007:] Never. I've never seen it at ICSI. Does anyone [disfmarker]? I mean, Dan is the one who [disfmarker] who most frequently would take notes, [speaker004:] I [speaker002:] Dan? [speaker005:] Yeah. [speaker007:] and [disfmarker] [speaker005:] I've d When we [disfmarker] when we have other meetings. 
When I have meetings on the European projects, we have someone taking notes. [speaker007:] Oh, really? [speaker005:] In fact, I often do it. [speaker004:] Yeah, but those are bigger deal things. Right? [speaker005:] Yeah. [speaker004:] Where you've got fifteen peo I mean, most [disfmarker] th this is one of the larger meetings. Most of the meetings we have are four or five people [speaker007:] That's true [disfmarker] are four or five people. [speaker004:] and you're not [disfmarker] you don't have somebody sitting and taking minutes for it. [speaker001:] Yeah. [speaker004:] You just [vocalsound] get together and talk about where you are. [speaker001:] Right. So, I think it depends on whether it's a business meeting or a technical discussion. [speaker007:] Culture. [speaker004:] Yeah. Yeah. [speaker001:] And I agree, technical discussions you don't usually have somebody taking notes. [speaker004:] Yeah. [speaker007:] The IRAM meeting, they [disfmarker] they take notes every [disfmarker] [speaker004:] Yeah. Do they? [speaker007:] There's uh a person with a laptop [pause] at each meeting. [speaker005:] How many people are those meetings? [speaker007:] There are more. I mean, there are ten ish. [speaker005:] Yeah. [speaker004:] Yeah. [speaker002:] Y you should also have a record of what's on the board. [speaker007:] They're very sparse. [speaker002:] I mean, I find it very [pause] hard to reconstruct what's going on. I [disfmarker] I don't know how [disfmarker] [speaker007:] Yeah. This is something early in the project we talked a lot about. [speaker002:] I don't know how, but for instance, I mean, the outline is sort of up here and that's what people are seeing. And if you have a [disfmarker] Or you shou could tell people not to [disfmarker] to use the boards. But there's sort of this missing information otherwise. [speaker008:] I agree. 
[speaker005:] We sh we should [disfmarker] [speaker007:] I agree, but [disfmarker] but you [disfmarker] you just [disfmarker] you g end up with video, [speaker005:] Well, I don't know. [speaker007:] and [disfmarker] and instrumented rooms. And [pause] that's a different project, I think. [speaker005:] f u I think for this data capture, it would be nice to have a digital camera [speaker007:] Yeah, different [disfmarker] [speaker002:] Uh, y [speaker005:] just to take pictures of who's there, where the microphones are, and then we could also put in what's on the board. You know, like three or four snaps for every [disfmarker] [speaker002:] Right. [speaker008:] I agree. [speaker002:] Yeah. [speaker008:] That's wonderful. [speaker005:] for every meeting. [speaker002:] People who were never at the meeting will have a very hard time understanding it otherwise. [speaker003:] Mm hmm. [speaker007:] But don't you think that's [disfmarker] [speaker008:] I agree. [speaker007:] Don't you think that [disfmarker] But [disfmarker] [speaker003:] Mm hmm. [speaker005:] Well, no. I mean, I [disfmarker] I just think [disfmarker] I mean, I think that right now we don't make a record of where people are sitting on the tables. [speaker002:] Even people who were at the meeting. [speaker001:] Right. [speaker007:] Huh. Right. [speaker005:] And that [disfmarker] the [disfmarker] at some point that might be awfully useful. [speaker007:] But I think adding photographs adds a whole nother level of problems. [speaker008:] Mm hmm. [speaker001:] Mmm. [speaker005:] Yeah. We [speaker008:] It's just a digital record. [speaker005:] n uh, Not [disfmarker] not as part of the [disfmarker] not as a part of the data that you have to recover. [speaker002:] I don't mean that you model it. [speaker005:] Just [disfmarker] just in terms of [disfmarker] [speaker002:] We should just [disfmarker] Like archiving it or storing it. [speaker005:] Yeah. [speaker008:] Yes, I agree. I agree. 
It's i because discourse is about things, [speaker001:] Mm hmm. [speaker002:] Because someone [disfmarker] [speaker008:] and then you have the things that are about, and it's recoverable. [speaker002:] someone later might be able to take these and say OK, they, you know [disfmarker] at least these are the people who were there [speaker005:] So [disfmarker] [speaker002:] and here's sort of what they started talking about, and [disfmarker] and just [disfmarker] [speaker008:] Yes. And it's so simple. Like you said, three snapshots [speaker004:] Li uh, L L [speaker008:] and [disfmarker] [speaker004:] L Liz, you [disfmarker] [speaker008:] Just to archive. [speaker004:] u uh, Liz, you sa you sat in on the, uh, [vocalsound] subcommittee meeting or whatever [disfmarker] [speaker005:] Actually [disfmarker] [speaker004:] uh, on [disfmarker] you [disfmarker] on the subcommittee meeting for [disfmarker] for [disfmarker] at the, uh [disfmarker] that workshop we were at that, uh, uh, Mark Liberman was [disfmarker] was having. So I [disfmarker] I wasn't there. They [disfmarker] they [disfmarker] they [disfmarker] they h must have had some discussion about video and the visual aspect, and all that. [speaker002:] Big, big interest. [speaker004:] Yeah. [speaker002:] Huge. I mean, it [disfmarker] personally, I don't [disfmarker] I would never want to deal with it. But I'm just saying first of all there's a whole bunch of fusion issues that DARPA's interested in. [speaker004:] Yeah. Yeah. [speaker002:] You know, fusing gesture and face recognition, [speaker004:] Yeah. [speaker002:] even lip movement and things like that, for this kind of task. And there's also I think a personal interest on the part of Mark Liberman in this kind of [disfmarker] in storing these images in any data we collect [speaker004:] Mm hmm. [speaker002:] so that later we can do other things with it. [speaker004:] Yeah. So [disfmarker] so to address what [disfmarker] what Adam's saying, [speaker008:] Mmm. Mm hmm. 
[speaker004:] I mean, I think you [disfmarker] uh, that [speaker002:] And [disfmarker] [speaker004:] the key thing there is that this is a description of database collection effort that they're talking about doing. [speaker007:] Mm hmm. [speaker004:] And if the database exists and includes some visual information that doesn't mean that an individual researcher is going to make any use of it. Right? [speaker008:] Mm hmm. [speaker004:] So, uh [disfmarker] [speaker007:] But that [disfmarker] it's gonna be a lot of effort on our part to create it, and store it, and get all the standards, and to do anything with it. [speaker004:] Right. So we're gonna [disfmarker] So we're gonna do what we're gonna do, whatever's reasonable for us. [speaker007:] Yeah. [speaker002:] I think even doing something very crude [disfmarker] [speaker004:] But having [disfmarker] [speaker002:] Like I know with ATIS, we just had a tape recorder running all the time. [speaker007:] Mm hmm. [speaker002:] And later on it turned out it was really good that you had a tape recorder of what was happening, even though you w you just got the speech from the machine. So if you can find some really, you know, low, uh, perplexity, [speaker007:] Low fidelity. Yeah. [speaker002:] yeah, [comment] way of [disfmarker] of doing that, I think it would be worthwhile. [speaker008:] I agree. And if it's simple as [disfmarker] I mean, as simple as just the digital [disfmarker] [speaker002:] Otherwise you'd [disfmarker] you lose it. [speaker008:] Yeah. [speaker004:] Well, minimally, I mean, what [disfmarker] what Dan is referring to at least having some representation of the p the spatial position of the people, cuz we are interested in some spatial processing. [speaker008:] Mm hmm. [speaker004:] And so [disfmarker] [speaker007:] Right. [speaker001:] Mm hmm. [speaker004:] so, um [disfmarker] [speaker007:] Well, once the room is a little more fixed that's a little easier [speaker004:] Yeah. 
[speaker007:] cuz you'll [disfmarker] Well, the wireless. [speaker004:] Yeah. Yeah. [speaker007:] But [disfmarker] [speaker002:] Also CMU has been doing this and they were the most vocal at this meeting, Alex Waibel's group. And they have [pause] said, I talked to the student who had done this, [comment] that with two fairly inexpensive cameras they [disfmarker] they just recorded all the time [speaker004:] Mm hmm. [speaker002:] and were able to get all the information from [disfmarker] or maybe it was three [disfmarker] from all the parts of the room. So I think we would be [disfmarker] we might lose the chance to use this data for somebody later who wants to do some kind of processing on it if we don't collect it [comment] at all. [speaker007:] Yeah. I [disfmarker] I [disfmarker] I don't disagree. I think that if you have that, then people who are interested in vision can use this database. The problem with it is you'll have more people who don't want to be filmed than who don't want to be recorded. [speaker008:] Well, she's not [pause] making [disfmarker] [speaker007:] So that there's going to be another group of people who are gonna say "I won't participate". [speaker003:] Mmm. That's true. [speaker001:] Mm hmm. [speaker002:] Or you could put a paper bag over everybody's head [speaker007:] Um [disfmarker] [speaker002:] and not look at each other and not look at boards, and just all be sitting [vocalsound] talking. [speaker004:] Uh huh. [speaker008:] Well [disfmarker] [speaker002:] That would be an interes [vocalsound] Bu [speaker008:] Well, there's [disfmarker] that'd be the [disfmarker] the parallel, yeah. [speaker004:] Great idea. [speaker008:] But I think y she's [disfmarker] we're just proposing [pause] a minimal preservation of things on boards, [speaker002:] Yeah. I definitely won't participate if there's a camera. [speaker008:] sp spatial organization [disfmarker] And you could anonymize the faces for that matter. 
You know, I mean, this is [disfmarker] We can talk about the [disfmarker] [speaker007:] But, you know, that's a lot of infrastructure and work. [speaker008:] It's just one snapshot. [speaker007:] To set it up and then anonymize it? [speaker002:] No, it wa n not, um [disfmarker] [speaker008:] We're not talking about a movie. [speaker001:] No, no, no, no. [speaker008:] We're talking about a snapshot. [speaker002:] Not for [disfmarker] not for CMU. [speaker001:] So [disfmarker] [speaker007:] Mm hmm. [speaker002:] They have a pretty crude set up. [speaker008:] Yeah. [speaker007:] Mm hmm. [speaker002:] And they had [disfmarker] they just turn on these cameras. They were [disfmarker] they were not moving or anything. [speaker007:] Couldn't find it? [speaker002:] And stored it on analog media. [speaker007:] Hmm? [speaker008:] Hmm. [speaker002:] And they [disfmarker] they didn't actually align it or anything. They just [disfmarker] they have it, though. [speaker008:] Yeah. Well, it's worth considering. Maybe we don't want to [disfmarker] spend that much more time discussing it, [speaker006:] Did they store it digitally, [speaker008:] but [disfmarker] [speaker006:] or [disfmarker]? [speaker002:] Hmm mm. I think they just [disfmarker] [speaker006:] or just put it on videotape? [speaker002:] I think they just had the videotapes with a c you know, a counter or something. Um, [speaker004:] Mm hmm. Well, I think for [disfmarker] I mean, for our purposes we probably will d [speaker002:] I'm not sure. [speaker004:] we [disfmarker] we might try that some and [disfmarker] and we certainly already have some recordings that don't have that, uh, which, you know, we we'll [disfmarker] we'll get other value out of, I think. [speaker008:] Yeah. [speaker002:] Yeah. [speaker008:] Th The thing is, if it's easy to collect it [disfmarker] it th then I think it's a wise thing to do because once it's gone it's gone. 
And [disfmarker] [speaker002:] I'm just [disfmarker] The community [disfmarker] If LDC collects this data [disfmarker] u I mean, and L if Mark Liberman is a strong proponent of how they collect it and what they collect, there will probably be some video data in there. [speaker004:] There you go. [speaker002:] And so that could argue for us not doing it or it could argue for us doing it. The only place where it sort of overlaps is when some of the summarization issues are [disfmarker] actually could be, um, easier [disfmarker] made easier if you had [pause] the video. [speaker008:] Mm hmm. [speaker004:] I think at the moment we should be determining this on the basis of our own, uh, interests and needs rather than hypothetical ones from a community thing. As you say, if they [disfmarker] [vocalsound] if they decide it's really critical then they will collect a lot more data than we can afford to, uh, and [disfmarker] and will include all that. [speaker002:] Mmm. [speaker004:] Um, I [disfmarker] I [disfmarker] I'm not worried about the cost of setting it up. [speaker001:] e [speaker004:] I'm worried about the cost of people looking at it. In other words, it's [disfmarker] it [disfmarker] it'd be kind of silly to collect it all and not look at it at all. And so I [disfmarker] I [disfmarker] I think that we do have to do some picking and choosing of the stuff that we're doing. But I [disfmarker] I am int I do think that we m minimally want [disfmarker] something [disfmarker] we might want to look at [disfmarker] at some [disfmarker] some, uh, subsets of that. Like for a meeting like this, at least, uh, take a Polaroid of the [disfmarker] [vocalsound] of the [disfmarker] of the boards, [speaker002:] Of the board. [speaker008:] Yeah. [speaker002:] Or at least make sure that the note taker takes a sh you know, a snapshot of the board. [speaker008:] Exactly. 
[speaker004:] and [disfmarker] a and know the position of the people [disfmarker] [speaker002:] That'll make it a lot easier for meetings that are structured. [speaker008:] Exactly. [speaker001:] Mm hmm. [speaker002:] I mean, otherwise later on if nobody wrote this stuff on the board down we'd have a harder time summarizing it or agreeing on a summary. [speaker008:] We [disfmarker] And it [disfmarker] Especially since this is common knowledge. I mean, this is shared knowledge among all the participants, and it's a shame to keep it off the recording. s [speaker007:] Uh, except in [disfmarker] er, if we weren't recording this, this [disfmarker] this would get lost. Right? [speaker008:] Well, I don't understand that point. [speaker004:] Yeah. [speaker008:] I mean, I just think that the [disfmarker] [speaker007:] The point is that we're not saving it anyway. Right? [speaker008:] Well [disfmarker] [speaker007:] In [disfmarker] in [pause] our real life setting. [speaker001:] What do you mean we're not saving it anyway? I've written all of this down and it's getting emailed to you. [speaker003:] And you're gonna send it out by email, too. [speaker007:] Well, uh, in that case we don't need to take pictures of it. [speaker002:] Right. That would be the other alternative, to make sure that anything that was on the board, um, is in the record. [speaker001:] Yeah. [speaker003:] Yeah. [speaker008:] Well [disfmarker] [speaker001:] Well, that's why [disfmarker] that's why I'm saying that I think the note taking would be [disfmarker] I think in many [disfmarker] for many meetings there will be some sort of note taking, in which case, that's a useful thing to have [disfmarker] Uh, I mean, we [disfmarker] uh, we don't need to require it. [speaker007:] Mm hmm. [speaker001:] Just like the [disfmarker] I mean, I think it would be great if we try to get a picture with every meeting. Um, [speaker008:] I agree. 
[speaker001:] so [disfmarker] so we won't worry about requiring these things, but the more things that we can get it for, the more useful it will be for various applications. [speaker004:] So [disfmarker] [speaker001:] So. [speaker004:] So, I mean, departing for the moment from the data collection question but actually talking about, you know, this group and what we actually want to do, uh, so I guess that's th the way [disfmarker] what you were figuring on doing was [disfmarker] was [disfmarker] was, uh, putting together some notes and sending them to [disfmarker] to everybody from [disfmarker] from today? OK. So. Um [disfmarker] [speaker008:] That's great. [speaker004:] So [disfmarker] [vocalsound] so the question [comment] that [disfmarker] that we started with was whether there was anything else we should do during [disfmarker] during th during the collection. [speaker002:] Ow. [speaker004:] And I guess the CrossPads was certainly one idea, uh, and we'll get them from him and we'll just do that. Right? And then the next thing we talked about was the [disfmarker] was the summaries and are we gonna do anything about that. [speaker001:] Well, before we leave the CrossPads and [disfmarker] and call it done. [speaker004:] Oh, OK. [speaker001:] So, if I'm collecting data then there is this question of do I use CrossPads? [speaker004:] Yeah. [speaker001:] So, I think that if we really seriously have me collect data and I can't use CrossPads, it's probably less useful for you guys to go to the trouble of using it, um, unless you think that the CrossPads are gonna [disfmarker] n I'm not [disfmarker] I'm not sure what they're gonna do. But [disfmarker] but having a small percentage of the data with it, I'm not sure whether that's useful or not. [speaker004:] What [disfmarker] [speaker001:] Maybe [disfmarker] maybe it's no big deal. Maybe we just do it and see what happens. 
[speaker004:] I guess the point was to try [disfmarker] again, to try to collect more information that could be useful later for [disfmarker] for the UI stuff. [speaker001:] Mm hmm. [speaker004:] So it's sort of Landay supplying it so that Landay's stuff can be easier to do. [speaker001:] Right. [speaker004:] So it [disfmarker] it [disfmarker] Right now he's g operating from zero, [speaker001:] Nothing. [speaker004:] and so even if we didn't get it done from UW, it seems like that would [disfmarker] could still [disfmarker] You shou [speaker001:] OK. [speaker004:] I mean, at least try it. [speaker002:] I think it'd be useful to have a small amount of it just as a proof of concept. [speaker004:] Yeah. [speaker002:] It will [disfmarker] [speaker001:] Right. [speaker007:] And [disfmarker] and they seem to [pause] not be able to give enough of them away, [speaker002:] You know, what you can do with things. [speaker001:] OK. [speaker007:] so we could probably get more as well. [speaker002:] Yeah. But not [disfmarker] not to rely on them for [pause] basic modeling. [speaker001:] That's true. So if it [disfmarker] if it seems to be really useful to you guys, we could probably get a donation to me. [speaker007:] Yeah, I'm not sure. I think it it [disfmarker] it will again depend on Landay, and if he has a student who's interested, and how much infrastructure we'll need. I mean, if it's easy, we can just do it. [speaker004:] Yeah. [speaker007:] Um, but if it requires a lot of our time, we probably won't do it. [speaker001:] Right. [speaker004:] I guess a lot of the stuff we're doing now really is pilot in one sense or another. [speaker007:] Yeah. Yeah, we have to sort of figure out what we're gonna do. [speaker004:] And so we try it out and see how it works. [speaker007:] Right. [speaker001:] Yeah. [speaker002:] I just wouldn't base any of the modeling on having those. [speaker007:] Right. I ag I think I agree with that. [speaker001:] Right. [speaker002:] Yeah. 
It's just [disfmarker] [speaker001:] Right. OK. [speaker007:] I think, though, the importance marking is a [pause] good idea, though. That if [disfmarker] if people have something in front of them [disfmarker] [speaker002:] I'd be sort of cool. I mean, it would [disfmarker] Yeah. That w shouldn't be hard for [disfmarker] [speaker007:] Yeah. Do it on pilots or laptops or something. OK, if something's important everyone clap. [speaker001:] OK. So CrossPads, we're just gonna try it and see what happens. [speaker004:] OK. [speaker007:] Yeah. Um, I think that's right. [speaker004:] OK. [speaker001:] OK. The note taking [disfmarker] So, I [disfmarker] I think that this is gonna be useful. So if we record data I will definitely ask for it. So, I j I think we should just say this is not [disfmarker] we don't want to put any extra burden on people, but if they happen to generate minutes, could [disfmarker] could they send it to us? [speaker007:] Yeah. Oh, OK. That's fine. Absolutely. [speaker004:] Mm hmm. [speaker003:] Mm hmm. [speaker001:] And then [disfmarker] [speaker007:] Yeah. What I was gonna say is that I don't want to ask people to do something they wouldn't normally do in a meeting. It's ver I just want to keep away from the artificiality. But I think it [pause] definitely [speaker004:] Mm hmm. [speaker007:] if they exist. And then Jane's idea of summarization afterward I think is not a bad one. Um, picking out [disfmarker] basically to let you pick out keywords, um, and, uh, construct queries. [speaker004:] So who [disfmarker] who does this summarization? [speaker008:] Yeah, I'm thinking that [disfmarker] [speaker007:] People in the meeting. [speaker008:] Yeah. [speaker007:] You know, just at [disfmarker] at the end of the meeting, before you go, [speaker008:] Uh huh. [speaker002:] Without hearing each other though, probably. [speaker007:] go around the table. [speaker005:] Yeah. [speaker004:] Yeah. [speaker007:] Yeah. 
[speaker006:] Or even just have one or two people stay behind. [speaker007:] Ugh. Yeah. [speaker004:] Yeah. [speaker005:] People with radio mikes can go into separate rooms and continue recording without hearing each other. That's the nice thing. [speaker002:] Well, then you should try them a few weeks later [speaker008:] How fascinating. [speaker002:] and [disfmarker] [speaker007:] And see [disfmarker] score them? [speaker002:] They have all these memory experiments about how little you actually retain [speaker005:] That's right. Well, that's the interesting thing, though. [speaker002:] and wasn't [disfmarker] [speaker005:] If we do [disfmarker] if we collect four different summaries, you know, we're gonna get all this weird data about how people perceive things differently. [speaker007:] Oh. Hmm. [speaker005:] It's like [comment] this is not what we meant to research. [speaker002:] Right, right. [speaker004:] Oh. [speaker008:] That could be very interesting. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker006:] Hmm. [speaker008:] Yeah. [speaker004:] Ru [speaker001:] But [disfmarker] but again, like the CrossPads, I don't think I would base a lot of stuff on it, [speaker007:] I d yeah, I don't know how you would do it, though. Mm hmm. [speaker001:] because I think [disfmarker] I know when I see the [disfmarker] the clock coming near the end of the meeting, I'm like inching towards the door. [speaker007:] Running to [disfmarker] Yeah, fff! [speaker001:] So, [speaker005:] Hmm. [speaker001:] you're probably not gonna get [pause] a lot of people wanting to do this. [speaker008:] Well, I think if [disfmarker] [speaker007:] Maybe e Is email easier? I mean, I [disfmarker] when you first said do [disfmarker] do it, um, spoken, what I was thinking is, oh then people have to come up [speaker008:] Mm hmm. [speaker007:] and you have to hook them up to the recorder. 
So, if they're already here I think that's good, but if they're not already here for [disfmarker] I'd rather do email. [speaker001:] Right. [speaker008:] Yeah, I'd just try [disfmarker] Well, however the least intrusive and [disfmarker] and quickest way is, [speaker007:] I'm much faster typing than anything else. [speaker008:] and th and closest to the meeting time too, cuz people will start to forget it as soon as they l leave. [speaker007:] Yeah. [speaker001:] Yeah. I think that [disfmarker] I think doing it orally at the end of the meeting is the best time. [speaker007:] I don't know. At [disfmarker] [speaker008:] Mm hmm. [speaker001:] I just don't [disfmarker] because they're kind of a captive audience. [speaker007:] Mm hmm. [speaker001:] Once they leave, you know, forget it. But [disfmarker] but i [speaker007:] Yeah, read the digits, do the summary. [speaker001:] Right. But, uh, I don't think that they'll necessarily [disfmarker] you'll [disfmarker] you'll get many people willing to stay. [speaker004:] Hmm. [speaker008:] w [speaker001:] But, you know, if you get even one [disfmarker] [speaker008:] I would s [speaker004:] Well, I think it's like the note taking thing, [speaker008:] Yeah. [speaker004:] that [disfmarker] that y that you can't [disfmarker] certainly can't require it or people aren't gonna want to do this. But [disfmarker] but if there's some cases where they will, then it would be helpful. [speaker008:] And I'm also wondering, couldn't that be included in the data sample [speaker006:] Hmm. [speaker008:] so that you could increase the num you know, the words that are, uh, recognized by a particular individual? If you could include the person's meeting stuff and also the person's summary stuff, maybe that would be uh, [speaker005:] Yeah. [speaker008:] an ad addition to their database. [speaker005:] It's kind of nice. Yeah. [speaker004:] Hmm. 
[speaker008:] Under the same acoustic circumstance, cuz if they just walk next door with their set up, nothing's changed, [speaker005:] Right. [speaker008:] just [disfmarker] [speaker006:] So I have a question about queries, [speaker007:] God, that's bugging me. [speaker006:] which is, um, [speaker008:] You turn [disfmarker] [speaker007:] Can we turn that light off? [speaker006:] uh [disfmarker] [speaker007:] If [disfmarker] can we turn that just [disfmarker] that [disfmarker] that let [disfmarker] [speaker008:] Uh, let the record show the light is flickering. [speaker004:] The fl the fluorescent light is flickering. [speaker006:] I don't know. [speaker001:] Yeah. [speaker007:] Yeah, there's a [disfmarker] [speaker004:] Yeah. [speaker002:] Oh, it is [disfmarker] it is like [disfmarker] OK. [speaker007:] Very annoying. [speaker001:] Yeah. [speaker006:] There you go. [speaker007:] Oh, much better. [speaker006:] OK. [speaker003:] Oh, yeah. [speaker004:] Yeah. [speaker006:] Good. [speaker001:] For a little while I thought it was just that I was really tired. [speaker003:] That's better. [speaker007:] Too much caffeine. [speaker001:] That and y [comment] Too much caffeine and really tired, but then I thought "no, maybe that's real". [speaker004:] OK. [speaker006:] So, [speaker007:] I thought it was the projector for a moment. It was like, "what's going on?" [speaker004:] Yeah. [speaker006:] the question I had about queries was, um, so what we're planning to do is have people look at the summaries and then generate queries? [speaker007:] We [disfmarker] we've just been talking, how do we generate queries? [speaker006:] Are [disfmarker] are we gonna try and o Yeah. [speaker007:] And so that was one suggestion. [speaker006:] Well, I mean, so, the question I had is is have we given any thought to how we would generate queries automatically given a summary? I mean, I think that's a whole research topic un unto itself, [speaker004:] Mmm. 
[speaker006:] so that it may not be a feasible thing. But [disfmarker] n [speaker005:] Hello. Dan here. [speaker004:] Yeah. [speaker002:] Shouldn't Landay and his group be in charge of figuring out how to do this? I mean, this is an issue that goes a little bit beyond where [pause] we are right now. [speaker005:] OK. [speaker002:] They're the expert [speaker005:] Mari? [speaker001:] Yeah? [speaker005:] Someone wants to know when you're getting picked up. Is someone picking you up? [speaker001:] Um, [vocalsound] what's our schedule? [speaker004:] Well, you still wanted to talk with Liz. [speaker001:] Let's see, you and I need dis Uh, [speaker004:] And you and I need to [speaker001:] no, we did the Liz talk. [speaker004:] Oh, oh. You already did the Liz talk. [speaker001:] Yeah. [speaker004:] OK. [speaker001:] So [disfmarker] so that was the prosody thing. [speaker002:] We [vocalsound] I don't remember it. [speaker001:] Um, we need to finish the [disfmarker] [speaker004:] Oh, OK. [speaker001:] It's already four fifteen. [speaker002:] I have like no recall memory. [speaker004:] Yeah. [speaker001:] Uh, after. We need to [pause] finish this discussion, and you and I need a little time for wrap up and quad chart. So, [speaker007:] And what? [speaker004:] I'm at your disposal. [speaker001:] um [disfmarker] [speaker004:] So, up to you. [speaker001:] Um, what [disfmarker] what's the plan for this discussion? We should [disfmarker] [speaker004:] Um, I think we should be able to wind up in another half hour or something, [speaker007:] At least. Yeah. [speaker004:] you think? [speaker007:] m i Even if that much? [speaker001:] Uh, less. [speaker007:] Less. [speaker001:] Yeah. [speaker004:] Less? [speaker001:] So, I think [disfmarker] [speaker002:] It's interesting that he's got, like, [pause] this discussion free [speaker004:] Well, I mean, we still haven't talked about the action items from here and so on. [speaker001:] Action [disfmarker] Yeah. 
So, [speaker002:] yet it's separate. [speaker004:] And [disfmarker] [speaker001:] e e why don't you say five thirty? [speaker005:] OK, five thirty. [speaker001:] I don't [disfmarker] Is that OK? We'll probably hit horrible traffic. [speaker005:] Sounds [disfmarker] OK. h Thanks, bye. [speaker001:] That's not a lot of time, [speaker005:] That's that. [speaker001:] but [disfmarker] [speaker004:] Yeah. [speaker007:] Well, in answer to "is it Landay's problem?", um, he doesn't have a student who's interested right now in doing anything. So he has very little manpower. Um, there's very little allocated for him and also he's pretty focused on user interface. So I don't think he wants to do information retrieval, query generation, that sort of stuff. [speaker004:] Yeah, well there's gonna be these student projects that can do some things but it can't be, yeah, very deep. u I [disfmarker] I actually think that [disfmarker] that, uh, again, just as a bootstrap, [comment] if we do have something like summaries, then having the people who are involved in the meetings themselves, who are cooperative and willing to do yet more, come up with [disfmarker] with [disfmarker] with queries, uh, could at least give [disfmarker] give Landay an idea of the kind of things that people might want to know. I mean, ye Right? If he doesn't know anything about the area, and [disfmarker] the people are talking about and [disfmarker] and, uh [disfmarker] [speaker002:] But the people will just look at the summaries or the minutes and re and sort of back generate the queries. That's what I'm worried about. So you might as well just give him the summaries. [speaker004:] Mm hmm. [speaker002:] And [disfmarker] [speaker004:] Maybe. [speaker006:] Well, I'm not sure [disfmarker] I'm not sure that's a solved problem. [speaker007:] y Well, but I think [disfmarker] [speaker006:] Right? [speaker002:] Oh, OK. 
[speaker006:] Of how to [disfmarker] how to generate queries from a [disfmarker] [speaker002:] How to do this [speaker007:] I, uh [disfmarker] [speaker004:] Yeah. [speaker002:] from the summary. [speaker006:] That was sort of what my [pause] question was [pause] aimed towards. [speaker002:] So what you want to h to do is, people who were there, who later see, uh, minutes and s put in summary form, which is not gonna be at the same time as the meeting. There's no way that can happen. Are we gonna later go over it [speaker001:] Hmm. [speaker007:] Right. [speaker004:] Right. [speaker002:] and, like, make up some stuff to which these notes would be an answer, or [disfmarker] or a deeper [disfmarker] Yeah. [speaker007:] Or [disfmarker] or just a memory refresher. [speaker002:] I mean [disfmarker] But that's done off [disfmarker] they have to do that off line. [speaker007:] Yep. I agree. [speaker008:] I'm also wondering if we could ask the [disfmarker] the people a [disfmarker] a question [speaker002:] You [speaker008:] which would be "what was the most interesting thing you got out of this meeting?" Becau in terms of like informativeness, [speaker002:] That's a good one. [speaker008:] it might be, you know, that the summary would [disfmarker] would not in even include what the person thought was the most interesting fact. [speaker004:] I would think that would be the most likely thing. [speaker002:] Dan doesn't know what sex he is. [speaker005:] Yeah, really. [speaker001:] But actually I would say that's a better thing to ask than have them summarize the meeting. [speaker008:] I think you get two different types of information. [speaker001:] You get two [disfmarker] Yeah, that's true. [speaker005:] Yeah. [speaker008:] Because you get, like, the general structure of important points and what the [disfmarker] what the meeting was about. [speaker004:] Hey. [speaker002:] Ah [speaker004:] Yeah. [speaker007:] We're still here. 
[speaker008:] So you get the general structure, the important points of what the meeting was about [pause] with the summary. But with the "what's the most interesting thing you learned?" [disfmarker] Uh, so the fact that, uh, I know that Transcriber uses Snack is something that I thought was interesting [speaker002:] Going to see the kids. [speaker005:] You [disfmarker] you can keep it on. [speaker008:] and that [disfmarker] and that Dan worked on [disfmarker] on that. So I thought that was really [disfmarker] you know. So, I mean, you could ge pick up some of the micro items that wouldn't even occur as major headings [speaker001:] Mm hmm. [speaker008:] but could be very informative. [speaker001:] Yeah, that's actually a really good idea. [speaker008:] I think it wouldn't be too, uh, uh, cost intensive either. You know, I mean, it's like something someone can do pretty easily on the spur of the moment. [speaker003:] Are you thinking about just asking one participant or all of them? [speaker007:] As many are willing to do it. [speaker003:] Make it a voluntary thing, [speaker005:] Yeah. Cuz you'll get [disfmarker] cuz you'll get very different answers from everybody, right? [speaker003:] and then [disfmarker] Yeah. That's why I was wondering. [speaker005:] So [disfmarker] [speaker007:] Well, maybe one thing we could do is for the meetings we've already done [disfmarker] I mean, I [disfmarker] we didn't take minutes and we don't have summaries. But, uh, people could, like, listen to them a little bit and [pause] generate some queries. [speaker008:] Yeah. [speaker007:] Of course Jane doesn't need to. I'm sure you have that meeting memorized by now. [speaker005:] Yeah. [speaker001:] But actually it would be an easy thing to just go around the room and say [pause] what was the most interesting thing you learned, [speaker007:] Mmm. [speaker008:] Yeah. Yeah. [speaker001:] for those pe people willing to stay. 
[speaker008:] And that [disfmarker] I think it would pick up the micro structure, the [disfmarker] some [disfmarker] some of the little things that would be hidden. [speaker001:] And [disfmarker] and that might be something people are willing to stay for. [speaker004:] Boy, I [disfmarker] I don't know how we get at this [disfmarker] [speaker008:] That would be interesting. [speaker007:] Or want to get up and leave. [speaker003:] Yeah, but when you go around the room you might just get the effect that somebody says something and then you go around the room and they say "yeah, me too, I agree." [speaker007:] Me too, me too, me too. [speaker004:] Yeah. [speaker005:] On the other hand people might try and come up with different ones, right? [speaker001:] That's fine. [speaker003:] So [disfmarker] [speaker004:] Well [disfmarker] [speaker005:] They might say oh, I was gonna say that one [speaker007:] Well, [speaker005:] but now I have to think of something else ". [speaker007:] you have the other thing, that [disfmarker] that they know why we're doing it. We'll [disfmarker] I mean, we'll [disfmarker] we'll be telling them that the reason we're trying to do this is [disfmarker] is to d generate queries in the future, so try to pick things that other people didn't say. [speaker004:] It's gonna take some thought. I mean, It seemed [disfmarker] [vocalsound] The kind of, uh, interest that I had in this thing initially was, uh, that i basically the form that you're doing something else [pause] later, [speaker001:] Mm hmm. [speaker004:] and you want to pick up something from this meeting related to the something else. So it's really the imp the [disfmarker] the list of what's important's in the something else [speaker005:] Mmm. [speaker001:] Right. [speaker004:] rather than the [disfmarker] And it might be something minor [disfmarker] of minor importance to the meeting. [speaker007:] Right. [speaker003:] Mm hmm. 
[speaker004:] Uh, in fact if [disfmarker] if it was really major, if it's the thing that really stuck in your head, then you might not need to go back and [disfmarker] and [disfmarker] and check on it even. So it's [disfmarker] it's that you're trying to find [disfmarker] [comment] You're [disfmarker] you've now [disfmarker] You weren't interested [disfmarker] Say I [disfmarker] I said "well, I wasn't that much interested in dialogue, I'm more of an acoustics person". [speaker005:] Right. [speaker004:] But [disfmarker] but thr three months from now if for some reason I get really interested in dialogue, and I'm "well what is [disfmarker] what was that part that [disfmarker] that [disfmarker] that, uh, Mari was saying?" [speaker007:] Yeah, like Jim Bass says "add a few lines on dialogue in your next perf" [speaker001:] Uh [disfmarker] [speaker004:] Yeah. Yeah. [speaker001:] Yeah. [speaker004:] And then I'm trying to fi I mean, that's [disfmarker] that's when I look [disfmarker] in general when I look things up most, is when it's something that [vocalsound] didn't really stick in my head the first time around and [disfmarker] but for some [comment] new reason I'm [disfmarker] I'm [disfmarker] I'm interested in [disfmarker] in [disfmarker] in the old stuff. [speaker007:] But that [disfmarker] that's gonna be very hard to generate. [speaker004:] So, I don't [disfmarker] I don't know. [speaker001:] Well, I [disfmarker] That's hard to generate [speaker006:] Do we [disfmarker] [speaker003:] Mm hmm. [speaker001:] and [disfmarker] and I think that's half of what i I would use it for. But I also a lot of times um, make [disfmarker] you know, think to myself this is interesting, [speaker004:] Mm hmm. [speaker001:] I've gotta come back and follow up on it ". [speaker004:] Mm hmm. Mm hmm. [speaker001:] So, things that I think are interesting, um, I would be, uh, wanting to do a query about. 
And also, I like the idea of going around the room, because if somebody else thought something was interesting, I'd kind of want to know about it and then I'd want to follow up on it. [speaker005:] Hmm. [speaker004:] Yeah. That [disfmarker] that might get at some of what I was [disfmarker] I was concerned about, uh, being interested in something later that w uh, I didn't consider to be important the first time, which for me is actually the dominant thing, because if I thought it was really important it tends to stick more than if I didn't, but some new [pause] task comes along that makes me want to look up. [speaker007:] But [disfmarker] But what's interesting to me may not b have been interesting to you. [speaker004:] Yeah. So having multiple people might get at some of that. [speaker007:] By [disfmarker] so by going around [disfmarker] Yeah. [speaker004:] Yeah. [speaker007:] Yeah, I [disfmarker] I think [pause] you can't get at all of it, right? [speaker004:] Yeah. [speaker007:] W we just need to start somewhere. [speaker004:] Yeah, and this is a starting point. [speaker006:] The question [disfmarker] the question then is h h how much bias do we introduce by [disfmarker] you know, introduce by saying, you know, this was important now [speaker008:] Uh huh. [speaker006:] and, you know, maybe tha something else is important later? [speaker004:] Mm hmm. [speaker006:] I mean, does it [disfmarker] does the bias matter? I [disfmarker] I don't know. I mean, uh, that's, I guess, a question for you guys. But [disfmarker] [speaker008:] Well, and [disfmarker] and one thing, we [disfmarker] we're saying "important" and we're saying "interesting". And [disfmarker] and those [disfmarker] those can be two different things. [speaker006:] Uh, yeah, yeah. Sure, sure. [speaker001:] Mm hmm. [speaker006:] But I [disfmarker] I [disfmarker] I guess that's the question, really, is that [disfmarker] I mean, [speaker008:] Mm hmm. 
[speaker004:] W [speaker006:] does building queries based on what's important now introduce an irreversible bias on being able to do what Morgan wants to do later? [speaker008:] OK, good. [speaker004:] Well, irreversible. [speaker006:] That's [disfmarker] that's [disfmarker] [speaker008:] Yeah. [speaker004:] I [disfmarker] I [disfmarker] I mean, I guess what I what I [disfmarker] I keep coming back to in my own mind is that, um, the soonest we can do it, we need to get up some kind of system [speaker006:] Right. [speaker004:] so that people who've been involved in the meeting can go back later, even if it's a poor system in some ways, and, uh [disfmarker] and ask the questions that they actually want to know. If [disfmarker] you know, if [disfmarker] uh, as soon as we can get that going at any kind of level, then I think we'll have a much better handle on what kind of questions people want to ask than in any [disfmarker] anything we do before that. But obviously we have to bootstrap somehow, [speaker006:] Sure. [speaker001:] Right. [speaker004:] and [disfmarker] [speaker008:] Mm hmm. [speaker006:] I agree. [speaker001:] Right. [speaker008:] I will say that [disfmarker] that I [disfmarker] I chose "interesting" because I think it includes also "important" in some cases. But, um, I [disfmarker] I [disfmarker] I feel like the summary gets [pause] at a different type of information. [speaker006:] I think "important" can often be uninteresting. [speaker005:] Mmm. [speaker001:] Mmm. [speaker008:] Well, and [disfmarker] and also [disfmarker] [speaker007:] Hmm. [speaker005:] And "interesting" is more interesting than "important". [speaker008:] i it puts a lot of burden on the person to [disfmarker] to evaluate. You know, I think inter "interesting" is [disfmarker] is non threatening in [disfmarker] [speaker001:] OK OK. [speaker008:] Yeah. [speaker001:] In the interest of, um, [speaker008:] Yeah. [speaker007:] Importance? 
[speaker001:] generati [comment] generating an interesting summary, [comment] um [disfmarker] [speaker008:] Mm hmm. Yeah. [speaker001:] No, i in the interest of generating some minutes here, uh, and also moving on to action items and other things, let me just go through the things that I wrote down as being important, um, that we at least decided on. CrossPads we were going to try, um, if Landay can get the, uh [disfmarker] get them to [disfmarker] to you guys, um, and see if they're interesting. And if they are, then we'll try to get m do it more. Um, getting electronic summary from a note taking person if they happen to do it anyway. [speaker004:] Mm hmm. [speaker001:] Um, getting [pause] just, uh, digital pictures [disfmarker] a couple digital pictures of the [disfmarker] the table and boards to set the context of the meeting. Uh, and then going around the room at the end to just say [disfmarker] qu ask people to mention something interesting that they learned. So rather than say the most interesting thing, something interesting, [speaker008:] k Sure. [speaker001:] and that way you'll get more variety. [speaker004:] Mm hmm. [speaker008:] That's good. [speaker007:] I wouldn't even say that "that they learned". [speaker008:] I like that. I like that. [speaker001:] OK. [speaker005:] Yeah. [speaker007:] Uh, you might want to mention something that [disfmarker] that you brought up. [speaker001:] "Thing [pause] that was [pause] discussed." And then the last thing c would be for those people who are willing to stay afterwards and give an oral summary. [speaker008:] Mm hmm. [speaker001:] OK? Does that pretty much cover everything we talked about? [speaker008:] Mm hmm. [speaker001:] That [disfmarker] well, that we want to do? [speaker008:] A And one [disfmarker] and one qualification on [disfmarker] on the oral summaries. They'd be s they'd be separate. They wouldn't be hearing each other's summaries. [speaker001:] OK. 
[speaker007:] Yeah, that's like [disfmarker] n I think that's gonna predominantly end up being whoever [pause] takes down the equipment then. [speaker008:] And [disfmarker] and that would also be that the data would be included in the database. [speaker007:] Yeah, that would be, let's see, me. [speaker001:] Mm hmm. [speaker008:] Mm hmm. [speaker005:] I mean, there is still this hope that people might actually think of real queries they really want to ask at some point. [speaker008:] OK. [speaker005:] And that if [disfmarker] if that ever should happen, then we should try and write them down. [speaker008:] Mm hmm. [speaker004:] Right. [speaker007:] Give them a reward, [speaker003:] Mm hmm. [speaker007:] a dollar a query? [speaker005:] Yeah, really. [speaker004:] Yeah. [speaker005:] If they're real queries. [speaker008:] Yeah. [speaker001:] OK. So [disfmarker] [speaker004:] Well, and again, if we can figure out a way to jimmy a [disfmarker] a [disfmarker] a [disfmarker] a very rough system, say in a year, then [disfmarker] uh, so that in the second and third years we [disfmarker] we actually have something to [disfmarker] [speaker007:] Yeah. [speaker001:] Play with and generate real queries from. [speaker004:] ask queries. [speaker007:] Yeah. [speaker001:] Right. [speaker005:] Yeah. [speaker001:] OK. [speaker004:] Uh [disfmarker] [speaker001:] So. Yeah. [speaker002:] I think [disfmarker] I just wanted to say one thing about queries. I mean, [vocalsound] [vocalsound] the level of the query could be, you know, very low level or very high level. And it gets fuzzier and fuzzier as you go up, right? 
[speaker007:] Well, we're gonna [disfmarker] [speaker002:] So you need to have some sort of [disfmarker] if you start working with queries, some way of identifying what the [disfmarker] you know, if this is something that requires a [disfmarker] a one word answer or it's one place in the recording versus was there general agreement on this issue of all the people who ha [speaker005:] Hmm. [speaker002:] You know, you can gen you can ask queries that are meaningful for people. [speaker007:] Yep. [speaker002:] In fact, they're very meaningful cuz they're very high level. But they won't exist anywhere in the [pause] a you know [disfmarker] [speaker007:] Absolutely. So I think we're gonna have to start with keywords [speaker008:] Mm hmm. [speaker007:] and [disfmarker] and if someone becomes more interested we could work our way up. [speaker004:] I I'm [disfmarker] I I'm not so sure I agree with that. [speaker007:] But [disfmarker] [speaker002:] It [disfmarker] But it may well [disfmarker] [speaker007:] Really? [speaker004:] Because [disfmarker] uh, b because it depends on, uh, what our goal is. If our goal is Wizard of Oz ish, [speaker007:] Oh, that's true. [speaker004:] we might want to know what is it that people would really like to know about this data. [speaker005:] Yeah. [speaker004:] And if it's [disfmarker] if [disfmarker] if it's something that we don't know how to do yet, th great, [speaker005:] Yeah. [speaker001:] Right. [speaker004:] that's, you know, research project for year four or something. [speaker005:] Research, yeah. [speaker001:] Mm hmm. [speaker004:] You know? [speaker001:] Mm hmm. [speaker007:] Yeah, I was thinking about Wizard of Oz, but it requires the wizard to know all about the meetings. [speaker005:] We'd have to listen to all the data. [speaker004:] Um, well, not [disfmarker] maybe not true Wizard of Oz [speaker007:] So. Oh, yeah. I [disfmarker] I understand. 
[speaker004:] because people are too [speaker005:] Well just imagine if [disfmarker] [speaker004:] uh, aware of what's going on. But [disfmarker] but just [disfmarker] [speaker005:] Get people to ask questions that they def the machine definitely can't answer at the moment, [speaker007:] Yep. [speaker005:] but [disfmarker] [speaker004:] Yeah. w Just "what would you like to know?" [speaker005:] Yeah. [speaker007:] But that [disfmarker] neither could anyone else, though, is what, uh, my point is. [speaker005:] Yes. [speaker008:] I I was wondering if [disfmarker] if there might be one s more source of queries which is indicator phrases like "action item", [speaker001:] OK. [speaker008:] which could be obtained from the text [disfmarker] from the transcript. [speaker007:] Right. Since we have the transcript. [speaker008:] Yeah. [speaker007:] Dates maybe. I don't know. That's something I always forget. [speaker008:] Yeah, that's something to be determined, something to be specified, [speaker002:] Well, probably if you have to sit there at the end of a meeting and say one thing you remember, it's probably whatever action item was assigned to you. [speaker008:] but text oriented. [speaker002:] I mean, in gen that's all I remember from most meetings. [speaker008:] I think you'd remember that, yeah. [speaker007:] That [disfmarker] that's all I wrote down. [speaker002:] So, in general, I mean, that could be something you could say, right? I'm supposed to [pause] do this. [speaker008:] Yeah, that's true. [speaker002:] It [disfmarker] it doesn't [disfmarker] [speaker008:] Well, but then you could [disfmarker] you could prompt them to say, you know, "other than your action item", you know, whatever. But [disfmarker] but the action item would be a way to get, uh, maybe an additional query. [speaker005:] Well [disfmarker] [speaker002:] I mean, that's realistically what people might [pause] well be remembering. [speaker008:] So. [speaker002:] So. [speaker004:] Hmm. 
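(The indicator-phrase idea raised above — pulling candidate queries out of the transcript by searching for phrases like "action item" or dates — could be sketched roughly as follows. This is only an illustrative sketch; the phrase list, the date heuristic, and the (speaker, text) turn format are assumptions, not anything specified in the meeting.)

```python
import re

# Assumed indicator phrases; the meeting only mentions "action item"
# and dates explicitly, the rest are hypothetical additions.
INDICATOR_PHRASES = ["action item", "to do", "follow up", "deadline"]

# Crude date heuristic: month names only (an assumption, not a spec).
DATE_PATTERN = re.compile(
    r"\b(?:january|february|march|april|may|june|july|august|"
    r"september|october|november|december)\b", re.IGNORECASE)

def candidate_queries(turns):
    """Scan (speaker, text) transcript turns and return
    (speaker, trigger, text) triples wherever an indicator phrase
    or a month name occurs -- candidate seeds for queries."""
    hits = []
    for speaker, text in turns:
        lower = text.lower()
        for phrase in INDICATOR_PHRASES:
            if phrase in lower:
                hits.append((speaker, phrase, text))
        if DATE_PATTERN.search(text):
            hits.append((speaker, "date", text))
    return hits
```

Something like `candidate_queries([("speaker001", "That's an action item for July.")])` would then flag that turn twice, once for the phrase and once for the month.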
[speaker008:] Yeah. Well, but [disfmarker] you know, but you could get again [@ @] [disfmarker] [speaker001:] Well, we're piloting. We'll just do it and see what happens. [speaker002:] Yeah. Yeah. [speaker008:] Yeah. [speaker004:] Yeah. I usually don't remember my action items. But [disfmarker] [vocalsound] I'd [disfmarker] I [disfmarker] [speaker001:] OK OK. Speaking of action items, can we move on to action items? [speaker004:] Yeah. Mm hmm. [speaker008:] Yeah. yeah. [speaker007:] Sure. Can you hand me my note pad? [speaker001:] Um, or maybe we should wait until the summary of this [disfmarker] until this meeting is transcribed and then we will hav [speaker005:] Yeah. Then we'll know. [speaker004:] We [disfmarker] we had [disfmarker] I mean, [speaker007:] Thanks. [speaker004:] somewhere up there we had milestones, but I guess [disfmarker] Did y did you get enough milestone, uh, from the description things? [speaker001:] I got [disfmarker] Yeah. In fact, why don't you hand me those transparencies so that I remember to take them. eee, [speaker004:] OK. [speaker001:] OK. [speaker004:] And, you know, there's obviously [pause] detail behind each of those, as much as is needed. So, you just have to [pause] let us know. [speaker001:] OK. What I have down for action items is we're supposed to find out about our human subject, um, [vocalsound] requirements. [speaker008:] Good. [speaker007:] Yep. [speaker001:] Uh, people are supposed to send me URLs for their [disfmarker] for web pages, to c and I'll put together an overall cover. And you're s [speaker005:] Right. We [disfmarker] [speaker001:] Hmm? [speaker005:] we need to look at our web page and make one that's [disfmarker] [speaker001:] And [disfmarker] and you also need to look at your web page [speaker005:] that's p [speaker007:] Right. [speaker001:] and clean it up by mid July. [speaker005:] PDA free. Yeah. [speaker001:] Um, [speaker004:] Right. [speaker001:] let's see. Choo choo choo. [speaker007:] Mailing lists.

[speaker001:] We [disfmarker] Mailing list? Uh, you need to put together a mailing list. [speaker004:] Three of them. Well [disfmarker] [vocalsound] I mean, [speaker001:] Uh, I think w Yeah. [speaker004:] uh, [speaker001:] Um, [speaker004:] mostly together. [speaker001:] uh, I need to email Adam or Jane, um, about getting the data. Who should I email? [speaker007:] Uh, how quickly do you want it? [speaker001:] Um. [speaker007:] My July is really very crowded. And so, uh [disfmarker] [speaker001:] How about if I just c Uh, Right now all I want [disfmarker] I personally only want text data. I think the only thing Jeff would do anything with right now [disfmarker] But I'm just speaking fr based on a conversation with him two weeks ago I had in Turkey. But I think all he would want is the digits. Um, but I'll just speak for myself. I'm interested in getting the language model data. Eh, so I'm just interested in getting transcriptions. [speaker007:] Mm hmm. [speaker008:] OK. [speaker001:] So then just email you? [speaker008:] So y Sure, sure, sure. [speaker001:] OK. [speaker008:] You could email to both of us, uh, just [disfmarker] I mean, if you wanted to. [speaker004:] Wh Mm hmm. [speaker008:] I mean, I don't think either of us would mind recei [speaker001:] OK. [speaker004:] i [speaker008:] but [disfmarker] but in any case I'd be happy to send you the [disfmarker] [speaker007:] That's right. [speaker001:] And your email is? [speaker004:] i [speaker008:] Edwards at ICSI. [speaker001:] OK. [speaker004:] w [speaker007:] Dot Berkeley dot EDU, of course. [speaker004:] In [disfmarker] in our phone call, uh, before, we [disfmarker] we, uh [disfmarker] It turns out the way we're gonna send the data is by, uh, [speaker001:] And then [disfmarker] [speaker004:] And, uh [disfmarker] and then what they're gonna do is take the CD ROM and transfer it to analog tape and [vocalsound] give it to a transcription service, uh, that will [disfmarker] [speaker007:] Oh, is this IBM? 
[speaker004:] Yeah. [speaker008:] Yeah, using foot pedals and [disfmarker] [speaker004:] Yeah, foot [disfmarker] foot pedals and [disfmarker] [speaker007:] Uh, so do they [disfmarker] How are they gonna do the multi channel? [speaker004:] See, that's a good question. [speaker008:] Yeah. They [disfmarker] they don't have a way. [speaker007:] I thought so. [speaker008:] But they have a verification. [speaker004:] No, I mean, it'll be [speaker007:] Mix? [speaker004:] probably about like you did, and then there will be some things [disfmarker] you know, many things that don't work out well. And that'll go back to IBM and they'll [disfmarker] they'll, uh [disfmarker] they run their aligner on it and it kicks out things that don't work well, which [disfmarker] you know, the overlaps will certainly be examples of that. And, uh [disfmarker] I mean, what w we will give them all of it. Right? We'll give them all the [disfmarker] the multi channel stuff [speaker007:] OK. That's, uh, my question. So we'll give them all sixteen channels [speaker004:] and [disfmarker] [speaker007:] and they'll do whatever they want with it. [speaker004:] Yeah. [speaker002:] But you also should probably give them the mixed [disfmarker] You know, equal sound level [disfmarker] [speaker004:] Yeah. [vocalsound] Good idea. [speaker001:] Mm hmm. [speaker002:] I mean, they're not gonna easily be able to do that, probably. [speaker007:] It's not hard. [speaker004:] Well [disfmarker] [speaker002:] Ah, yeah. [speaker007:] So. [speaker004:] It's also won't be adding much to the data to give them the mixed. [speaker006:] But w [speaker002:] Mm hmm. [speaker006:] It's not [disfmarker] [speaker007:] Yep. Absolutely. [speaker002:] I [speaker006:] Right. It doesn't [disfmarker] it isn't difficult for us to do, [speaker001:] Right. [speaker002:] i You should [disfmarker] [speaker006:] so we might as well just do it. [speaker007:] Absolutely. [speaker002:] Yeah. 
You should [disfmarker] that may be all that they want to send off to their [pause] transcribers. [speaker007:] So, sure. [speaker001:] OK. Related to [disfmarker] to the conversation with Picheny, I need to email him, uh, my shipping address and you need to email them something which you already did. [speaker008:] I did. I [disfmarker] I m emailed them the Transcriber URL, um, the on line, uh, data that Adam set up, The URL so they can click on an utterance and hear it. and I emailed them the str streamlined conventions which you got a copy of today. [speaker004:] Right. And I was gonna m email them the [disfmarker] which I haven't yet, a pointer to [disfmarker] to the web pages that we [disfmarker] that we currently have, cuz in particular they want to see the one with the [disfmarker] the way the recording room is set up [speaker008:] Good. [speaker004:] and so on, your [disfmarker] your page on that. [speaker008:] Oh, excellent. Good. [speaker007:] And then p possibly [disfmarker] [speaker008:] I C I CC'd Morgan. I should have sent [disfmarker] I should have CC'd you as well. [speaker001:] OK. [speaker007:] Not an immediate action item but something we do have to worry about is data formats for [disfmarker] for higher level information. [speaker001:] OK. [speaker004:] Oh, yeah. [speaker007:] Well, or d or not even higher level, [speaker004:] We were gonna [disfmarker] [speaker007:] different level, prosody and all that sort of stuff. We're gonna have to figure out how we're gonna annotate that. [speaker004:] Yeah, we w [speaker001:] Yeah. We never had our data format discussion. [speaker008:] Oh, I thought we did. [speaker004:] Right. [speaker008:] We discussed, uh, musi musical score notation [speaker007:] But that's not [disfmarker] That's display. [speaker008:] and [disfmarker] [speaker001:] Oh, OK. [speaker008:] and its XML [disfmarker] [speaker007:] That's different than format. 
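(The "mixed, equal sound level" channel discussed a few turns earlier — summing all sixteen recorded channels for the transcription service so that no single mike dominates — could be sketched as below. This is a minimal illustrative sketch, assuming samples arrive as equal-length lists of floats; it is not any tool actually in use at ICSI.)

```python
import math

def mix_equal_level(channels, eps=1e-12):
    """Mix several audio channels into one at roughly equal loudness.

    channels: list of equal-length sample sequences (floats).
    Each channel is divided by its RMS level before summing, so quiet
    and loud mikes contribute comparably; the sum is then peak-
    normalized into [-1, 1] to avoid clipping.
    """
    n = len(channels[0])
    mixed = [0.0] * n
    for ch in channels:
        rms = math.sqrt(sum(s * s for s in ch) / n) + eps
        for i, s in enumerate(ch):
            mixed[i] += s / rms   # equalize per-channel level
    peak = max(abs(s) for s in mixed) + eps
    return [s / peak for s in mixed]
```

As noted in the discussion, this is cheap to compute from the multi-channel data, so it can be shipped alongside the sixteen raw channels rather than instead of them.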
[speaker001:] That's [disfmarker] [speaker008:] Well, um [disfmarker] [speaker001:] W My [disfmarker] my u feeling right now on format is you guys have been doing all the work [speaker005:] Well [disfmarker] uh, yeah. [speaker008:] Yeah. [speaker001:] and whatever you want, we're happy to live with. [speaker008:] OK, excellent. [speaker001:] Um, [speaker004:] OK. [speaker001:] other people may not agree with that, [speaker004:] So, what n important thing [disfmarker] [speaker008:] Well, it c [speaker001:] but [disfmarker] Cuz I'm not actually touching the data, [speaker005:] Right. [speaker001:] so I shouldn't be the one to talk. But [disfmarker] [speaker003:] No, I think that's fine. [speaker004:] So a key thing will be that you [disfmarker] we tell you [speaker008:] Great. [speaker001:] Yeah. [speaker004:] what it is. [speaker006:] " Here's a mysterious file [speaker004:] Uh, we also had [disfmarker] [speaker006:] and [disfmarker] " [speaker005:] Yeah. [speaker004:] We also had the, uh, uh [disfmarker] that we were s uh, that you were gonna get us the eight hundred number and we're all gonna [disfmarker] [speaker001:] Oh, yeah. [speaker004:] we're gonna call up your Communicator thing and [disfmarker] and we're gonna be good slash bad, depending on how you define it, uh, users. [speaker003:] Now, something that I mentioned earlier to Mari and Liz is that it's probably important to get as many non technical and non speech people as possible in order to get some realistic users. So if you could ask other people to call and use our system, that'd be good. Cuz we don't want people who already know how to deal with dialogue systems, [speaker001:] Yeah. [speaker003:] who know that [speaker001:] Or, [vocalsound] like if you have a [disfmarker] [speaker003:] you shouldn't hyper articulate, for instance, and things like that. [speaker001:] Or, like if you have somebody who makes your [disfmarker] your plane reservations for you, [speaker003:] So. [speaker004:] Yeah. 
[speaker001:] um, which is [speaker004:] Yeah, we can do that. [speaker001:] the n [speaker007:] Get my parents to do it. [speaker003:] Yeah, for instance. [speaker001:] Yeah. Yeah. Seriously. [speaker004:] Yeah. [speaker003:] Your grandmother. [speaker004:] Hmm. [speaker001:] Yeah. e You know, it could [pause] result in some good bloopers, which is always good for presentations. So [disfmarker] [speaker004:] Yeah. [speaker001:] Um, anyway [disfmarker] [speaker007:] I think my father would last through the second prompt before he hang [disfmarker] hung up. [speaker004:] My mother would have a very interesting conversation with it [speaker001:] Mmm. [speaker007:] He would never use it. [speaker004:] but it wouldn't have anything to do with the travel. [speaker001:] OK. [speaker004:] OK. [speaker001:] Um, other [disfmarker] Let's see, other action items. So I have the [disfmarker] [speaker004:] We talked about that we're getting the recording equipment running at UW. And so it depends, w e e e they're [disfmarker] you know, they're p m If that comes together within the next month, there at least will be, uh, uh, major communications between Dan and [vocalsound] UW folks [speaker005:] Yeah. I mean, [speaker001:] I'm [disfmarker] I'm shooting to try to get it done [disfmarker] get it put together by [pause] the beginning of August. [speaker004:] as to [disfmarker] [speaker005:] we should talk about it, but [disfmarker] [speaker008:] Mmm. [speaker001:] So, um, [speaker004:] But we have [disfmarker] it [disfmarker] it's [disfmarker] it's pretty [disfmarker] [speaker001:] you know, if [speaker004:] We don't know. I mean, he [disfmarker] he s uh, he said that it was sitting in some room collecting dust [speaker001:] We don't know. [speaker004:] and [disfmarker] and so we don't know, i e [speaker001:] i It's probably unlikely that we'll pull this off, but a at least it's worth trying. [speaker007:] Mm hmm. What is it? [speaker004:] We don't know. [speaker007:] Oh, OK. 
[speaker004:] "Recording equipment." [speaker005:] Yeah. [speaker007:] It's a tape recorder. [speaker004:] W We know it's eight channels. Uh, we know it's digital. [speaker007:] It's eight tape recorders. [speaker004:] We don't even know if there're microphones. So, we'll find out. [speaker001:] OK. Um, and I will email these notes [disfmarker] Um, I'm not sure what to do about action items for the data stuff, although, then somebody [disfmarker] I guess somebody needs to tell Landay that you want the pads. [speaker004:] Yeah, OK. I'll do that. [speaker001:] OK. [speaker004:] Um, and he also said something about outside [disfmarker] there [disfmarker] that came up about the outside text sources, that he [disfmarker] he may have [speaker007:] Mm hmm. [speaker001:] Oh! [speaker004:] some text sources that are close enough to the sort of thing that we can play with them for a language model. [speaker003:] Hmm. [speaker005:] Yeah, that was [disfmarker] uh, that was [disfmarker] What he was saying was this [disfmarker] he [disfmarker] this thing that, uh, Jason had been working on finds web pages that are thematically related to what you're talking about. Well, that's the idea. So that that [disfmarker] that would be a source of text which is [disfmarker] supposedly got the right vocabulary. [speaker004:] Mm hmm. [speaker001:] Right. [speaker005:] But it's obviously very different material. It's not spoken material, for instance, [speaker004:] Yeah. [speaker005:] so [disfmarker] [speaker004:] But it's p it might be [disfmarker] [speaker001:] But [disfmarker] but that's actually what I wanna do. [speaker005:] OK. [speaker001:] That's [disfmarker] that's what I wanna work with, is [disfmarker] is things that s the wrong material but the right da the right source. [speaker005:] Yeah. [speaker007:] Yeah. [speaker005:] Yeah. [speaker007:] Un unfortunately Landay told me that Jason is not gonna be working on that anymore. [speaker001:] Yeah. 
[speaker007:] He's switching to other stuff again. [speaker001:] Yeah. He seemed [disfmarker] when I asked him if he could actually supply data, he seemed a little bit more reluctant. So, I'll [disfmarker] I'll send him email. I'll put it in an action item that I send him email about it. And if I get something, great. If I don't get something [disfmarker] [speaker007:] Who? Landay or Jason? [speaker001:] Landay. [speaker007:] OK. [speaker004:] OK. [speaker001:] And, uh, um, you know, otherwise, if you guys have any papers or [disfmarker] I could [disfmarker] I could use, uh [disfmarker] I could use your web pages. That's what we could do. You've got all the web pages on the Meeting Recor [speaker004:] Yeah, why search for them? [speaker007:] True. [speaker004:] They're [disfmarker] we know where they are. [speaker001:] Yeah! [speaker007:] Absolutely. [speaker004:] Yeah, that's true. [speaker005:] Sure. [speaker001:] Oh, forget this! [speaker007:] Well, but that's not very much. [speaker001:] I [disfmarker] One less action item. I can use what web pages there are out there on meeting recorders. [speaker007:] Yep. [speaker005:] Right. [speaker007:] I mean, that [disfmarker] that's [disfmarker] Yeah. Basically what his software does is h it picks out keywords and does a Google like search. [speaker001:] Yeah. [speaker005:] Yeah. Yeah. [speaker004:] Yeah. So we can [disfmarker] we can [disfmarker] we can do better than that. [speaker005:] We can do that. Yeah. [speaker007:] So you could [disfmarker] Yeah. [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker004:] There's [disfmarker] there's some, uh, Carnegie Mellon stuff, right? On [disfmarker] on meeting recording, [speaker007:] Yep. [speaker002:] And Xerox. [speaker004:] and [disfmarker] [speaker001:] So, there's [disfmarker] there's ICSI, Xerox, [speaker004:] And Xerox. [speaker002:] And there's [disfmarker] You should l look under, like, intelligent environments, [speaker004:] Yeah. 
[speaker002:] smart rooms, [speaker007:] Um, the "Georgia Tech Classroom Two Thousand" is a good one. [speaker002:] um [disfmarker] [speaker001:] CMU, [speaker002:] Right. And then [disfmarker] Right. J There's [disfmarker] th That's where I thought you would want to eventually be able to have a board or a camera, because of all these classroom [disfmarker] [speaker001:] Mm hmm. [speaker007:] Well, Georgia Tech did a very elaborate instrumented room. And I want to try to stay away from that. [speaker002:] Yeah. [speaker001:] Mm hmm. [speaker007:] So [disfmarker] [speaker001:] OK. Great. That solves that problem. One less action item. Um [disfmarker] OK. I think that's good enou that's [disfmarker] that's pretty much all I can think of. [speaker008:] Can I ask, uh, one thing? It relates to data [disfmarker] data collection and I [disfmarker] and I'd [disfmarker] and we mentioned earlier today, this question of [disfmarker] um, so, um, I s I know that from [disfmarker] with the near field mikes some of the problems that come with overlapping speech, uh, are lessened. But I wonder if [disfmarker] Uh, is that sufficient or should we consider maybe getting some data gathered in such a way that, um, u w we would c uh, p have a meeting with less overlap than would otherwise be the case? So either by rules of participation, or whatever. [speaker001:] Oh, yeah. [speaker008:] Now, I mean, you know, it's true, I mean, we were discussing this earlier, that depending on the task [disfmarker] so if you've got someone giving a report you're not gonna have as much overlap. [speaker006:] Adam! [speaker008:] But, um, i i uh, so we're gonna have s you know, non overlapping samples anyway. But, um, in a meeting which would otherwise be highly overlapping, is the near field mike enough or should we have some rules of participation for some of our samples to lessen the overlap? [speaker004:] Hmm. 
[speaker005:] turn off [speaker001:] I don't think we should have rules of participation, but I think we should try to [pause] get a variety of meetings. That's something that if we get the [disfmarker] the meeting stuff going at UW, that I probably can do more than you guys, [speaker008:] OK. [speaker001:] cuz you guys are probably mostly going to get ICSI people here. [speaker004:] Mm hmm. [speaker001:] But we can get anybody in EE, uh, over [disfmarker] and [disfmarker] and possibly also some CS people, uh, over at UW. So, I think that [disfmarker] that there's a good chance we could get more variety. [speaker008:] OK. Just want to be sure there's enough data to [disfmarker] OK, good. [speaker002:] They're still gonna overlap, [speaker001:] Um, [speaker002:] but [disfmarker] Mark and others have said that there's quite a lot of found data [comment] from the discourse community that has this characteristic and also the political [disfmarker] Y you know, anything that was televised for a third party has the characteristic of not very much overlap. [speaker004:] Mm hmm. Wasn but w I think we were saying before also that the natural language group here had less overlap. [speaker008:] Mm hmm. [speaker001:] Mm hmm. [speaker002:] So. [speaker004:] So it also depends on the style of the group of people. [speaker002:] Like the, um, dominance relations of the people in the meeting. [speaker001:] Right. [speaker008:] Mm hmm. On the task, and the task. It's just [disfmarker] I just wanted to [disfmarker] uh, [speaker007:] Mm hmm. [speaker004:] Yeah. [speaker008:] because you know, it is true people can modify the amount of overlap that they do if [disfmarker] if they're asked to. [speaker004:] Yeah. [speaker008:] Not [disfmarker] not entirely modify it, but lessen it if [disfmarker] if it's desired. But if [disfmarker] if that's sufficient data [disfmarker] I just wanted to be sure that we will not be having a lot of data which can't be processed. [speaker001:] OK. 
So I'm just writing here, we're not gonna try to specify rules of interaction but we're gonna try to get more variety by i using different [pause] groups of people [speaker008:] Time. [speaker001:] and different sizes. [speaker008:] Fine. And I [disfmarker] you know, I [disfmarker] I know that the near f near field mikes will take care of also the problems to s to a certain degree. [speaker001:] e e Yeah. [speaker008:] I just wanted to be sure. [speaker001:] And then the other thing might be, um, uh, technical versus administrative. Cuz if I recorded some administrative meetings then that may have less overlap, because you might have more overlap when you're doing something technical and disagreeing or whatever. [speaker008:] Mm hmm. Mm hmm. Well, I [disfmarker] just as [disfmarker] as [disfmarker] as a contributary [disfmarker] eh, so I [disfmarker] I know that in l in legal depositions people are pr are prevented from overlapping. They'll just say, you know [disfmarker] you know, "wait till each person is finished before you say something". So it is possible to lessen if we wanted to. But [disfmarker] but these other factors are fine. I just wanted to raise the issue. [speaker001:] Well, the reason why I didn't want to is be why I personally didn't want to [comment] is because I wanted it to be [pause] as, uh, unintrusive as possi [speaker008:] Mm hmm. Mm hmm. [speaker001:] as you could be with these things hanging on you. [speaker008:] Oh, yeah. Yeah, I think that's always desired. I just want to be sure we don't [disfmarker] that we're able to process, i u uh, you know, as much data as we can. Yeah. [speaker004:] Yeah. Did they discuss any of that in the [disfmarker] the meeting they had with L Liberman? What [disfmarker] [speaker002:] Mm hmm. [speaker004:] What [disfmarker] what do they [disfmarker] [speaker002:] And there was a big division, so Liberman and others [pause] were interested in a lot of found data. [speaker004:] Yeah. 
[speaker002:] So there's lots of recordings that [disfmarker] [speaker004:] Yeah. [speaker002:] They're not close talk mike, but [disfmarker] And [disfmarker] and there's lots of television, you know, stuff on, um, political debates and things like that, congre congressional hearings. Boring stuff like that. Um, and then the CMU folks and I were sort of on the other side in [disfmarker] cuz they had collected a lot of meetings that were sort of like this and said that those are nothing like these meetings. Um, so there're really two different kinds of data. And, I guess we just left it as [disfmarker] [@ @] [comment] that [pause] if there's found data that can be transformed for use in speech recognition easily, then of course we would do it, [speaker004:] Mm hmm. [speaker002:] but newly collected data would [disfmarker] would be natural meetings. [speaker004:] Actually, th [@ @] [comment] the CMU folk have collected a lot of data. [speaker002:] So. [speaker004:] Is that [disfmarker] is that going to be publicly available, [speaker002:] As far as I know, they h have not. [speaker004:] or [disfmarker]? OK. [speaker002:] Um, but e [speaker007:] It's also [disfmarker] it's not [disfmarker] it's not near far, right? [speaker002:] I'm not sure. Um, if people were interested they could talk to them, but I [disfmarker] I got the feeling there was some politics involved. [speaker007:] I think [@ @] gonna add that to one of my action items. [speaker002:] No. [speaker004:] Just to check. Yeah. W we should know what's out there certainly. [speaker002:] I [disfmarker] I don't know. [speaker007:] Yeah. [speaker002:] I mean, the [disfmarker] [speaker004:] Yeah. [speaker007:] Cuz I had thought they'd only done far field, [speaker002:] I think you need to talk to Waibel and [disfmarker] [speaker007:] intelligent room sorts of things. [speaker005:] Oh, really? It's those guys. [speaker007:] I hadn't known that then [disfmarker] they'd done any more than that. 
[speaker004:] Oh, they only did the far field? [speaker007:] Yeah. [speaker004:] I see. [speaker002:] But they had multiple mikes and they did do recognition, and they did do real conversations. But as far as I know they didn't offer that data to the community at this meeting. [speaker007:] Mm hmm. [speaker002:] But that could change cuz Mark [disfmarker] you know, Mark's really into this. We should keep in touch with him. [speaker004:] Yeah. [speaker008:] Yeah, I think [disfmarker] [speaker004:] Well, once we send out [disfmarker] I mean, we still haven't sent out the first note saying "hey, this list exists". But [disfmarker] but, uh, once we do that [disfmarker] [speaker001:] Is that an action item? [speaker004:] Yeah. It's on [disfmarker] I already added that one on my board to do that. So, uh [disfmarker] uh, hopefully everybody here is on that list. We should at least check that everybody here [disfmarker]? [speaker007:] I think everyone here is on the list. [speaker004:] Yeah. [speaker006:] I'm not. [speaker008:] u e e [speaker007:] I think you are. [speaker004:] We haven't sent anything to the list yet. [speaker006:] Oh! [speaker008:] Yeah. [speaker006:] OK. [speaker004:] We're just compiling the list. [speaker006:] I see. [speaker007:] I [disfmarker] I added a few people who didn't [disfmarker] who I knew had to be on it even though they didn't tell me. [speaker005:] Who specifically ask not to be. [speaker007:] Like Jane, for example. [speaker004:] Mm hmm. Yeah. [speaker007:] You are on it, aren't you? [speaker008:] Yeah, I am. [speaker004:] Yeah. [speaker008:] So, I w uh, just [disfmarker] just for clarification. So "found data", they mean like established corpora of linguistics and [disfmarker] and other fields, right? [speaker002:] What they mean is stuff they don't have to fund to collect, [speaker008:] It sounds like such a t [speaker002:] and especially good [disfmarker] [speaker008:] Yeah, OK. 
[speaker002:] Well, I mean, "found" has, uh, also the meaning that's it very natural. It's things occur without any [disfmarker] You know, the pe these people weren't wearing close talking mikes, but they were recorded anyway, like the congressional hearings and, you know, for legal purposes or whatever. [speaker008:] OK. But it includes like standard corpora that have been used for years in linguistics and [pause] other fields. [speaker002:] Mark's aware of those, too. [speaker005:] "Hey, look what we found!" [speaker008:] OK. [speaker002:] That would be found data [speaker007:] Hmm. [speaker008:] Exactly. [speaker002:] because they found it [vocalsound] and it exists. [speaker005:] "I found this great corpora." Yeah. [speaker002:] They didn't have to collect it. [speaker007:] "Psst. [comment] Want to buy a corpora?" [speaker002:] Of course it's not "found" in the sense that at the time it was collected for the purpose. [speaker008:] Yeah. OK, OK. [speaker002:] But what he means is that [disfmarker] You know, Mark was really a fan of getting as much data as possible from [disfmarker] you know, reams and reams of stuff, of broadcast stuff, [speaker008:] That's interesting. [speaker002:] web stuff, [speaker008:] Mm hmm. [speaker002:] TV stuff, radio stuff. But he well understands that that's very different than these [disfmarker] this type of meeting. [speaker007:] It's not the same. [speaker002:] But, so what? It's still [disfmarker] it's interesting for other reasons. [speaker008:] OK. Yeah. Just wanted to know. [speaker004:] So, seems like we're winding down. Right? [speaker001:] Mm hmm. [speaker004:] Many [pause] ways. [speaker005:] So we should go [disfmarker] go around and s [speaker002:] You can [pause] tell [pause] by the [pause] prosody. [speaker001:] Yeah. [speaker005:] We should go around and say something interesting that happened at the meeting? [speaker001:] Oh. Yes, we should do that. [speaker002:] Rrrh! 
[speaker007:] Now, I was already thinking about it, so [disfmarker] [speaker004:] Oh! Good man. [speaker002:] This is painful task. [speaker003:] Hmm. [speaker007:] So, um, [speaker002:] I [disfmarker] [speaker007:] I really liked the idea of [disfmarker] what I thought was interesting was the combination of the CrossPad and the speech. Especially, um, the interaction of them rather than just note taking. So, can you [pause] determine the interesting points by who's writing? Can you do special gestures and so on that [disfmarker] that have, uh, special meaning to the corpora? I really liked that. [speaker008:] Well, I [disfmarker] I just realized there's another category of interesting things which is that, um, I [disfmarker] I found this discussion very, uh, i this [disfmarker] this question of how you get at queries really interesting. And [disfmarker] and the [disfmarker] and I [disfmarker] and the fact that it's sort of, uh, nebulous, what [disfmarker] what that [disfmarker] what kind of query it would be because it depends on what your purpose is. So I actually found that whole process of [disfmarker] of trying to think of what that would involve to be interesting. But that's not really a specific fact. I just sort of thought we [disfmarker] we went around a nice discussion of the factors involved there, which I thought was worthwhile. [speaker005:] I had a real revelation about taking pictures. I don't know why I didn't do this before and I regret it. So that was very interesting for me. [speaker008:] Mm hmm. [speaker007:] Did you take pictures of the boards? [speaker008:] Yeah. [speaker005:] Not that I [disfmarker] The boards aren't really related to this meeting. I mean, I will take pictures of them, but [disfmarker] [speaker008:] That's a good point. [speaker001:] They're related to this morning's meeting. [speaker005:] But [disfmarker] [speaker008:] Yeah. [speaker007:] To the pre previous meeting. That's right. [speaker005:] OK. 
Well, that's why I'll take pictures of them, then. [speaker006:] I'm gonna pass because I can't [disfmarker] I mean, of the [disfmarker] Jane took my answer. [speaker007:] Ah! [speaker008:] Oh. [speaker006:] So. Um, so I'm gonna pass for the moment but y come [disfmarker] come back to me. [speaker005:] For the moment. [speaker002:] Pass. [speaker001:] I think [disfmarker] I think "pass" is socially acceptable. But I will say [disfmarker] uh, I will actually [disfmarker] uh, a spin on different [disfmarker] slightly different spin on what you said, this issue of, uh, realizing that we could take minutes, and that actually may be a goal. So that [disfmarker] that may be kind of the test [disfmarker] in a sense, test data, uh, the [disfmarker] the template of what we want to test against, generating a summary. So that's an interesting new twist on what we can do with this data. [speaker003:] I agree with Jane and Eric. I think the question of how to generate queries automatically was the most interesting question that came up, and it's something that, as you said, is a whole research topic in itself, so I don't think we'll be able to do anything on it because we don't have funding on it, uh, in this project. But, um, [vocalsound] it's definitely something I would [pause] want to do something on. [speaker007:] I wonder if work's already been done on it. [speaker008:] Like e expert systems and stuff, or [disfmarker]? [speaker004:] Hmm. [speaker008:] Uh huh. [speaker004:] Well, being more management lately than [disfmarker] [vocalsound] than research, I think the thing that impressed me most was the people dynamics and not any of the facts. That is, I [disfmarker] I really enjoyed hanging out with this group of people today. So that's what really impressed me. [speaker005:] How are we gonna find that in the data? [speaker007:] Well, if we had people wearing the wireless mikes all the time [disfmarker] [speaker005:] Oh, yeah. 
[speaker007:] Yeah, I think [disfmarker] [speaker006:] Well, I mean, one thing you could search for is were people laughing a lot. Right? [speaker005:] Right. [speaker006:] So. [speaker005:] How happy were they? [speaker004:] Yeah. I'd probably search for something like that. [speaker007:] That actually has come up a couple times in queries. I was talking to Landay and that was one of his examples. [speaker004:] Yeah. Yeah. [speaker007:] When [disfmarker] when did people laugh? [speaker005:] That's great. [speaker004:] Find me a funny thing that Jeff said. [speaker007:] So we need a laugh detector. [speaker004:] Yeah. [speaker005:] Yeah. [speaker001:] Mm hmm. [speaker008:] Yeah. [speaker005:] Perfect. [speaker007:] Cuz that seems to be pretty common. Not in the congressional hearings. [speaker006:] No. [speaker007:] Quiet sobbing. [speaker004:] So I think we're done. [speaker005:] OK. [speaker001:] OK. [speaker007:] I think we're done. [speaker006:] OK. [speaker005:] Great. [speaker001:] Great. [speaker004:] Great. [speaker008:] h Do we need [disfmarker] do I need to turn something off here, or I do unplug this, or [disfmarker]? [speaker004:] Now these we turn off. Right? [speaker005:] Is it on? [speaker001:] OK, here we go. [speaker002:] Uh, so [disfmarker] uh, so I c I definitely have to leave at three thirty today [speaker001:] So [disfmarker] [speaker002:] for [pause] whatever. That's [disfmarker] [speaker001:] Definitely have to go to tea. So [disfmarker] [speaker002:] Yeah, they're [disfmarker] they're having birthday cake and such [speaker001:] That too, but [disfmarker] [speaker002:] and so I can't miss that. [speaker003:] You're a May birthday, right? [speaker002:] I'm a May birthday, yeah, so I have to [disfmarker] [speaker004:] Oops. [speaker003:] Barbara told me. [speaker002:] Yeah. [speaker003:] Barbara Peskin. [speaker001:] Um [disfmarker] So, only a couple of agenda items, since no one sent me email for agenda items. 
Uh, the first is the IBM transcripts. I'd like to uh, mention [disfmarker] I s I spoke with several of you about this, but just to brainstorm a little bit about what we can do with it. [speaker006:] Mm hmm. [speaker001:] So we got back the IBM transcript, and the accuracy looks OK. There are [disfmarker] I mean, there are areas which are clearly wrong and there are a lot of areas where they put question marks, where the acoustics weren't very good. And I think that's fine. Putting it in question marks is better than doing it wrong. The big problem was, they had the wrong number of beeps in it. [speaker004:] Oops. [speaker001:] So it didn't align. And that's exactly what I was afraid of. And I went in, and more or less by hand, corrected it, by loading it [disfmarker] I wrote a script that will convert it to the multi the Channeltrans format, looked at it in [disfmarker] in Channeltrans and found the places where it got a unsynchronized, and then either [disfmarker] [speaker006:] Hmm. [speaker004:] Inserted ones. [speaker001:] And in this case only [disfmarker] only had to remove beeps, because that was th always the error, is that they put in it extraneous beeps. [speaker006:] Hmm. [speaker001:] But it's not easy to do, because the whole thing gets offset, and basically, it's assigning the wrong speaker to the text, and it all gets unaligned. So it's very difficult to do. [speaker006:] Hmm. [speaker001:] So it took me several hours just to do that. [speaker002:] So this is [disfmarker] [speaker004:] For one meeting. [speaker002:] this is one [disfmarker] it's one hour of [disfmarker] of [disfmarker] of speech? [speaker001:] Yep. [speaker002:] And it took you a few hours to [disfmarker] to do it. [speaker001:] It took me a couple hours to write the scripts, and then it [disfmarker] it probably took me maybe forty five minutes, just going through it. Now obviously it will be a lot faster from now on, because now I sort of understand how it works. 
But, I think if we had some way [disfmarker] uh some tool that made it easier to insert and delete beeps it would make it a lot easier. Cuz one of the problems is that it takes a very long time when you make a change to load it back up in the channel transcriber. [speaker004:] Yep. [speaker006:] Yeah, that's true. I've noticed that. Yeah. [speaker005:] One of the things that Adam said that I thought would be a good idea would be, um, if we g used different beeps. [speaker001:] And so there may be [disfmarker] [speaker006:] Which makes sense. [speaker005:] So for example if we alternated between two beeps, a high and a low. [speaker002:] He said that? [speaker005:] th Uh, well in [disfmarker] in some email that, uh, we had. [speaker006:] Oh, on the email. [speaker004:] For [disfmarker] [speaker002:] Oh, oh, OK. [speaker006:] Yeah. [speaker001:] I didn't just say that, no. [speaker005:] Yeah. Um [disfmarker] Because then [disfmarker] [speaker001:] So th so this is the o the other approach which is, what do we do in the future, to try to make this less common? [speaker005:] Yeah. [speaker001:] But it's still gonna happen. [speaker006:] Well, and then the other possibility was [disfmarker] e that we provide them with a [disfmarker] a file that already has the beeps in it or something. [speaker001:] So [disfmarker] [speaker002:] Hunh. [speaker006:] Wasn't that what you said? [speaker005:] Sort of a text template, but I don't know how we can do that. [speaker006:] Uh huh. [speaker001:] Yeah, we don't know what their process is. So I had two ideas. The first was to provide them a text template that had both the beeps in it and the speaker I Ds. [speaker006:] Yeah. [speaker001:] You know, just "male female", "English nonEnglish", "one two three four". Um [disfmarker] [speaker002:] Can I [disfmarker] I [disfmarker] I just [disfmarker] [speaker001:] And they just filled it in. 
[speaker002:] How how many [disfmarker] how many beeps are there, and how many do we were they [disfmarker] d were [disfmarker] were off? [speaker001:] It was like a hundred twenty, and they had a hundred twenty three or something like that. [speaker002:] Alright. [speaker001:] Is that right? [speaker005:] Well, l I I thought it was a lot more than that. [speaker002:] And is it a [disfmarker] [speaker001:] Well I [disfmarker] I don't remember. [speaker005:] They were three off, basically. [speaker002:] Yeah, and it was a hundred ish. [speaker003:] And this is the mixed signal? [speaker006:] Oh! [speaker002:] So [disfmarker] so if you get one [disfmarker] [speaker006:] Hmm. [speaker002:] uh, uh, excuse me. If you get one off, it [disfmarker] t everything is off cuz of the alignment right? [speaker001:] Yep. [speaker004:] Yep. [speaker002:] Uh, and this is all in one file that's continuous. [speaker005:] Yeah. [speaker001:] Yep. [speaker002:] So why don't you make it multiple files? [speaker001:] How would that help? [speaker002:] Because then if they have one off, then [disfmarker] then it would only screw up that file. [speaker001:] Oh, you mean just break the one hour into a few chunks. [speaker002:] Yeah. I mean, it seems now, if you just ha make one mistake then everything is off. [speaker005:] Well the only [disfmarker] the only question about that that I have is, um, I think Brian has to put all this stuff onto a tape. So I don't know whether [disfmarker] [speaker002:] Yeah. [speaker001:] I'm sure they would prefer it sort of one hour. [speaker005:] Can he send them [disfmarker] I guess he could send them multiple tapes [speaker006:] Oh! [speaker005:] Uh [disfmarker] [speaker002:] Yeah, just say, "OK, is [disfmarker]" ten minutes is [disfmarker] I mean, maybe there's other things to do that are more clever, but this just seemed to be the sort of the obvious thing because, again, if you make one mistake, then everything after that mistake is off. 
[speaker005:] Mm hmm. [speaker002:] And, uh [disfmarker] [speaker005:] Well, the only other issue is we'd have to worry about reassembling them in the proper order. Uh [disfmarker] [speaker002:] Yeah. [speaker001:] That [disfmarker] I mean, just naming schemes will get [disfmarker] We can definitely do that. So t so one question is, are there good places in the files where we can really do that? [speaker003:] Well, the other thing [disfmarker] You can [disfmarker] [speaker001:] I mean, so [disfmarker] so [disfmarker] so someone will have to go through them and listen to them and pick a place to break them. [speaker004:] Yeah. [speaker003:] The other thing you can do This is on the mixed signal, or are these the individual [disfmarker] the ones with the beeps that they're getting, that they made mistakes on is the mixed one? [speaker001:] They are played from the individual. [speaker005:] Individual channels. [speaker004:] Individual channels. [speaker003:] Or they're indivi If it's individual, um [disfmarker] y c we can [disfmarker] probably run a forced alignment, where you c include the beeps as words and pretty much figure out where the errors are. I don't think we could do it on the mixed signal, cuz it [disfmarker] But on the individual channels that would probably work, if they have enough words in the transcript. [speaker005:] Yeah, we [disfmarker] In fact, that's [disfmarker] Yeah, maybe we [disfmarker] [speaker003:] So, I mean, that's pretty quick to run. It [disfmarker] it's, um [disfmarker] I mean, we could try it. Cuz if a beep is treated as a unique word, [speaker001:] You mean, so, actually put [disfmarker] put the wavefile in with beep in it. [speaker003:] "beep" [disfmarker] Put in [disfmarker] put "beep" in as a word. [speaker001:] And pu and have a beep model. [speaker005:] We have the original wavefile with all the beeps. [speaker003:] Yeah. Yeah. Well [disfmarker] the beep model [disfmarker] Actually, you could train a beep model. 
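The chunking idea raised here can be sketched in a few lines: if each chunk of the meeting is sent out with its own expected beep count, then an extra or missing beep in the returned transcript only flags that one chunk, instead of desynchronizing the whole hour. This is a minimal sketch of that check; the `<beep>` marker string and the function names are illustrative assumptions, not the actual transcription pipeline's conventions.

```python
# Hypothetical marker that the transcription service writes wherever it
# hears a beep; the real convention would come from the service itself.
BEEP = "<beep>"

def check_chunks(transcripts, expected_counts):
    """Return indices of chunks whose beep count disagrees with what was sent.

    transcripts     -- list of transcript strings, one per audio chunk
    expected_counts -- list of the number of beeps inserted into each chunk
    """
    bad = []
    for i, (text, expected) in enumerate(zip(transcripts, expected_counts)):
        # A mismatch means speaker turns after this point would be
        # misaligned, so this chunk needs hand correction.
        if text.count(BEEP) != expected:
            bad.append(i)
    return bad
```

With ten-minute chunks, a single transcriber error then costs at most ten minutes of realignment rather than the forty-five minutes of hand work described above.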
That [disfmarker] that's a good idea. [speaker005:] It would be very good. [speaker003:] Um [disfmarker] [speaker001:] Yes, it would be, cuz it's very, very regular. [speaker003:] Yeah. Yeah. So, um [disfmarker] [speaker002:] Oh. Kind of low variance, wouldn't that be? [speaker004:] Yeah. [speaker005:] Yeah, very low. [speaker001:] Yeah. [speaker002:] You sort of [pause] might end up with some infinities there, [speaker003:] Um, that might actually [disfmarker] [speaker001:] You have to [disfmarker] may [disfmarker] add some jitter to it. [speaker002:] but [disfmarker] Can I see that? [speaker003:] I mean, that might actually work, [speaker002:] Where's the case? [speaker003:] because you'll [disfmarker] I guess the trick is figuring out, you know, where it doesn't align? But if it's a problem of extra beeps, rather than missing beeps, then [disfmarker] um [disfmarker] that's easier [speaker005:] It could t it could [disfmarker] There could be some missing ones. [speaker003:] and it might work. Oh. [speaker005:] But [disfmarker] [speaker002:] So is this a [speaker001:] It's just in this case they were not. [speaker002:] seven or something? Or [disfmarker] What is it? [speaker001:] So I think [disfmarker] uh, Chuck had a good explanation for it which I liked, [speaker002:] M one hundred. [speaker001:] which is these [disfmarker] they're listening to it and so they write something down, and let's say they miss it, they rewind. Well, they hear the beep again. Was that that beep, or was that a different beep? [speaker003:] Oh, yeah. [speaker001:] So that's why Chuck was suggesting different tones. My original thought was, you could have a different tone for each speaker. You know, there's plenty of tone space. And that might help cue the transcriptionist. You know, give them a little cue about where they are. [speaker002:] Or you could [disfmarker] you could have ascending tones, or something. [speaker001:] What do you mean? 
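The beep-model idea above rests on the observation that the tone is "very, very regular," which suggests it could be spotted without a recognizer at all, using a single-frequency detector. A minimal sketch of that idea with the Goertzel algorithm follows; the 1 kHz tone frequency, frame size, and power threshold are assumptions for illustration, not values from the meeting:

```python
import math

def goertzel_power(frame, freq, rate):
    """Power of a single frequency in one frame (Goertzel algorithm)."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / rate)
    s_prev, s_prev2 = 0.0, 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def find_beeps(samples, rate, beep_freq=1000.0, frame=512, thresh=1e4):
    """Return sample offsets of beep onsets: frames whose power at the
    beep frequency exceeds thresh, with consecutive hot frames collapsed
    into a single onset per beep."""
    beeps, prev_hot = [], False
    for i in range(0, len(samples) - frame, frame):
        hot = goertzel_power(samples[i:i + frame], beep_freq, rate) > thresh
        if hot and not prev_hot:
            beeps.append(i)
        prev_hot = hot
    return beeps
```

Counting the detected onsets against the number of beep marks in a transcript would catch the "three off" situation directly, without any alignment pass.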
[speaker002:] Uh [disfmarker] Or [disfmarker] [speaker003:] Yeah. [speaker005:] Increasingly high [disfmarker] higher frequencies. [speaker003:] You could have, um, each speaker in the meeting say "beep" and record it and that will be their pro [speaker002:] No, I just [disfmarker] thi I don't know. If you have a hundred or something just [disfmarker] just sort of move them up i mu up a [disfmarker] a half tone or something for [disfmarker] for [vocalsound] each one, [speaker003:] Sorry. [speaker002:] and then [disfmarker] [speaker001:] That's what I was saying. Is that [disfmarker] that you could have [disfmarker] each speaker could have their own tone. [speaker002:] Oh was it? Oh. See, I should listen. Yeah? [speaker003:] Hmm. [speaker001:] And you know, since there are no more than, you know, ten speakers or so per meeting, there's plenty of tone space. [speaker006:] Oh. Well [disfmarker] But [disfmarker] [speaker002:] Uh, OK, yeah. So the only [disfmarker] That might work too. Yeah, I was just thinking that actually temporally just, uh, uh, uh, an increasing tones so that [speaker001:] Oh, OK. [speaker002:] they would really have a sense of whether they were going, you know, forward or backward [speaker001:] Bo eep [comment] be oop. [speaker004:] In the end? Yeah. [speaker006:] Well you th [speaker002:] or [disfmarker] [speaker006:] So just r r truly chron chromatically up [pause] all the way through the entire meeting. [speaker002:] Yeah. Yeah. Well, it's a thought. [speaker006:] Interesting. [speaker001:] Oh, oh, th oh, I see. [speaker006:] It'd be impossible to r to misorder them. [speaker002:] Yeah. [speaker006:] Yeah. [speaker003:] But yo I guess you'd still get the mistake that you mentioned, because the person [disfmarker] [speaker001:] Well [disfmarker] [speaker003:] I mean, if that's some of the errors, those would probably stay the same, regardless. [speaker001:] Yep. I mean, Chuck's suggestion of just two beeps is nice [speaker002:] Yeah, maybe. 
[speaker001:] because then you could have them actually transcribe "H beep" or "L beep", "high beep" or "low beep". [speaker005:] And then when we get them back, if we see two L beeps in a row [disfmarker] [speaker001:] It would be much easier. [speaker002:] That's probably better, [speaker003:] Yeah, you have like a sanity [disfmarker] sanity check on them. [speaker002:] yeah. S scratch my other idea. [speaker006:] It's pretty simple, pretty simple. [speaker003:] That's a good idea. [speaker002:] Yeah. [speaker001:] Yep. [speaker003:] Alternating. [speaker002:] Yeah. [speaker005:] And it would [disfmarker] it seems like it would help them, you know, if they just finished a long passage and they heard one beep and then they hear it again, they know it's the same beep. Versus if they heard a slightly different one. [speaker002:] Yeah. [speaker006:] Hmm, interesting. [speaker003:] Yeah, that's a good, like, error checking approach, I think. [speaker001:] Yeah, we can, yeah, like, do [disfmarker] do, uh, one for each ca one for each person in the meeting, you can only hire people who have perfect pitch, and they can say "A flat" [vocalsound] "B". [speaker006:] Well. [vocalsound] Well, "high", "low" is pretty straightforward, [speaker002:] Great idea, yeah. [speaker006:] yeah. [speaker002:] No, two is probably better than what I was suggesting, because what I was suggesting, in order to make it through a reasonable range for the whole thing, you'd have to have the [disfmarker] the tonal differences between the two kind of small, and actually you want them large. [speaker001:] Yep. [speaker002:] So. [speaker006:] You'd have to [disfmarker] they'd have to have the acoustics of dogs t [vocalsound] hear into the high high ranges. [speaker001:] Mm hmm. [speaker002:] Yeah. [speaker001:] So I think, certainly, doing two beeps is [disfmarker] is no brainer. [speaker006:] Yeah. 
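Chuck's two-tone scheme carries its own sanity check: in a correct transcript the high and low marks must strictly alternate, so two identical marks in a row pinpoint a missed or double-heard beep. A minimal sketch of that check, assuming the transcribers' "H"/"L" marks have already been extracted into a sequence:

```python
def check_alternating(marks):
    """Given a sequence of 'H'/'L' beep marks, return the positions where
    a mark repeats its predecessor -- the likely sites of a missed or
    duplicated beep in an alternating high/low scheme."""
    return [i for i in range(1, len(marks)) if marks[i] == marks[i - 1]]
```

An empty result means the sequence is internally consistent; any returned index localizes an error to between two specific beeps rather than leaving the whole file suspect.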
[speaker001:] And then the other question is, if they can [disfmarker] If they do something on a computer in a format we can handle, we could give them a text file that was a template with speaker ID and beeps already in it. And then they could just fill that in. [speaker005:] Yeah. Maybe I should write to Brian and tell him what the problem was [speaker002:] So what [disfmarker] [speaker005:] and what our proposed solutions are. [speaker001:] Right. [speaker002:] Yeah. [speaker005:] See what he thinks would work best with them. [speaker002:] I mean, what's going wrong on the other end? What are they doing? [speaker001:] Well, as I said, I think [disfmarker] We don't actually know, [speaker002:] What, uh [disfmarker] [speaker001:] but I think Chuck's hypothesis was a good one, which is, you re r you listen to something, you write down what you thought you heard, but you want to listen to it again, so you r rewind. [speaker005:] And there are some [disfmarker] [speaker001:] And then you hear the beep [comment] again, and then you say "well, is that that beep, or is that a new beep? I don't remember." And so, you know, a couple times they got it wrong. [speaker005:] And there are some cases where there's, um, very little speech before you hear a beep. So you hear a little bit of speech, beep, [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker005:] a little bit of speech, beep. Maybe three of those in a row, and was that three beeps, was that two beeps? Um [disfmarker] See, they have th Brian said the setup that they have is, um [disfmarker] uh [disfmarker] i um [disfmarker] They've got headphones and then I guess they have a computer, and they have a foot pedal which lets them quickly scan back and forth through the tape. I i t i s makes the tape go real fast forward or fast backward. And so they can hear something, then step on the pedal and quickly rewind and go back and hear it again. 
And so they're [disfmarker] I think they're probably going over and over f sections. And then they get confused about whether they've already [nonvocalsound] put that beep, or if they heard, you know, three beeps in a row, was that three or two, and [disfmarker] [speaker006:] It's a shame we can't really give a number. [speaker002:] OK, s [speaker006:] So it would be a number and a beep. e It's a shame we can't do that. [speaker001:] Well, we could. [speaker005:] Oh, that's an interesting idea. [speaker001:] We could, but then that starts getting pretty long. [speaker005:] No, no, no. But wait, maybe [disfmarker] maybe what we have instead of a beep is a, uh, synthesized number. [speaker004:] Yep. [speaker005:] And they put the number in when they hear it. [speaker006:] That'll work. [speaker005:] So, inst replacing the beep with "twenty three", "twenty four", "twenty five" [disfmarker] And then they have to transcribe what that number is each time. [speaker006:] Wow, that'd also be shorter to type. [speaker005:] Yeah, just a "two three, two four", that would be [disfmarker] [speaker001:] Well, they would have to have a mark. [speaker005:] There'd be some kind of mark in front of it. [speaker006:] Interesting. [speaker002:] Well, cuz they're s distinguishing between twenty three [disfmarker] that's your indicator there, and twenty three that somebody happens to say in a meeting. [speaker006:] Oh, that's true. [speaker005:] Well it would have t yeah, it would have to be um, you know either obviously a s sounding synthesized, uh, little [disfmarker] [speaker004:] Yeah. [speaker002:] But they might miss that, right? [speaker006:] Or maybe could it be a click and the number. [speaker005:] Yeah. Or a beep and the number. [speaker001:] Well, it's sequential. [speaker006:] Or a beep and the number, and then [disfmarker] and then you just do the [disfmarker] the num th you do it as a d a digit. [speaker001:] Right, you would go sequential. 
So unless you got pretty unlucky, [speaker005:] Uh, right. [speaker004:] Yeah. [speaker001:] what the person was saying and the number [disfmarker] [speaker005:] In fact, we could put it at the beginning of each utterance. We could say "beep one". And then they'll hear the speech. [speaker002:] I think having a beep too is good. [speaker001:] So beep number. Yeah the [speaker005:] Or a number beep [disfmarker] [speaker001:] It's just getting pretty long. You know, the utterances are very short. [speaker004:] Yep. [speaker005:] Mm hmm. [speaker001:] And so you're gonna be talking beep, number, "yes!" beep, number, "no". [speaker005:] Mm hmm. [speaker006:] Well, if [disfmarker] if it saves time, [speaker005:] Yeah, and u I think it would [disfmarker] I [disfmarker] Actually, I thi I like that. [speaker006:] yeah. [speaker001:] And when they transcribe this meeting it's gonna be really impossible. [speaker005:] I think it would [disfmarker] [speaker004:] Yeah, [comment] that's funny. [speaker002:] W we're saying "beep". Yeah [disfmarker] [comment] for instance. [speaker001:] Yeah, yep. [speaker005:] Yeah. But see that would [disfmarker] that would definitely keep [disfmarker] [speaker001:] It's not a bad idea. [speaker005:] Yeah. Keep things from getting out of sequence. If they heard it over again they would know. It would be obvious from their transcript whether they did that one or not. [speaker006:] It might actually help them in terms of their place holding. [speaker001:] That would help get them synchronized. [speaker006:] Yeah. And it might be helpful to them in terms of d act [speaker005:] Yeah. [speaker001:] I think it would be, because they would know their place. [speaker005:] And we have plenty of digits data, so. [speaker001:] Yeah. They would know their place, darn it! [speaker005:] We [speaker006:] Know [disfmarker] [speaker001:] Those transcriptionists need to know their place. [speaker004:] Oh. [speaker006:] Well, no no no no. 
[speaker002:] But it sounds like a k a [disfmarker] a [disfmarker] a short conversation with Brian might be [disfmarker] might be helpful to get a sense of what's going on. [speaker005:] Yeah. I'll [disfmarker] I'll [disfmarker] I'll write him an [disfmarker] [speaker001:] Yeah, the numbers are a good idea. [speaker005:] Yeah. [speaker006:] OK. [speaker004:] Or just [disfmarker] [speaker001:] Othe other than lengthening the transcript, I think it would be very helpful. [speaker004:] Just instead of the sequential number, for perhaps just the speaker ID [comment] or something. Just say speaker one [comment] or spea or just one, two, three, four [comment] for four speakers or something. Pppt! Could [disfmarker] could be helpful. [speaker006:] I [disfmarker] i [speaker001:] Yeah, you don't have to [disfmarker] you could keep them short by not s or just go one through ten, one through ten, one through ten, or one through twenty. [speaker004:] Yeah. Yeah. Yeah. Yep. [speaker006:] And then also, if we [disfmarker] I mean, I don't know what their setup is, but your template idea might be a way of [disfmarker] of of [disfmarker] [speaker001:] Mm hmm. [speaker006:] We could actually build the numbers into a template. [speaker001:] Yep. [speaker006:] Well. [speaker005:] I'll talk to Brian, and see what he [disfmarker] [speaker001:] Yeah, single digit numbers [disfmarker] That also appeals to me. So that you don't have to, you know [disfmarker] [speaker004:] Yeah. [speaker006:] That's a good idea. Very simple. [speaker001:] "One hundred twenty four" [speaker005:] Yeah. [speaker004:] Yeah. [speaker002:] Well, th although [disfmarker] [speaker001:] "Beep. One thousand three hundred forty two" [speaker002:] You're saying i it's [disfmarker] it's about an hour meeting, [speaker001:] Mm hmm. [speaker002:] and there were a hundred and twenty three [disfmarker] [speaker001:] I don't remember. [speaker006:] It's forty five minutes. 
[speaker001:] Chuck was saying there was more than that. [speaker002:] Hmm? [speaker006:] Forty five minutes for that one. [speaker001:] More beeps than that. [speaker005:] I was thinking there was on the order of fifteen hundred segments, but maybe I'm [disfmarker] [speaker003:] There must be. Because otherwise, you're going a whole [disfmarker] [speaker004:] I think it should be more, [speaker006:] It's possible. [speaker004:] yeah. [speaker003:] That was at least forty five minutes, right? [speaker006:] Yeah. That's right. [speaker001:] Was it that many? I just don't recall. [speaker006:] Yeah, it was forty five minutes. [speaker003:] That's the shortest meeting we have. Most of them are double that, or [speaker001:] Yep. [speaker002:] Well, I was just wondering what the average length thing was. [speaker003:] so [speaker005:] Oh, of a [disfmarker] of a segment that they're hearing? [speaker004:] It's [disfmarker] [speaker002:] Yeah. [speaker005:] I don't know. [speaker006:] Hmm, interesting. [speaker001:] Easy enough to figure out. [speaker002:] Uh [speaker005:] I don't know. [speaker003:] So if there are in the [disfmarker] in the hundreds and thou or thousands of these [disfmarker] [speaker005:] Yeah. [speaker001:] We have all that data. [speaker003:] It must be in the high hundreds of them, at least. [speaker002:] So, right. So i [speaker003:] So that's getting a little cumbersome for them to type, like, five hundred? [speaker004:] Yeah. [speaker006:] Well, with just one digit though, I think [disfmarker] [speaker003:] But yeah, if they recycle through [disfmarker] [speaker002:] Well [disfmarker] [speaker005:] Yeah, if we [disfmarker] if we did mod ten, then [disfmarker] [speaker004:] Yeah. [speaker002:] Right, but my point was, you know, if [disfmarker] It's a big difference to me if there's a thousand or there's a hundred. If there's a thousand, then that means that this is only happening three times out of a thousand. 
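The "mod ten" variant floated above has an equally simple consistency property: a correctly transcribed sequence must advance by exactly one, modulo ten, at every beep, so repeats and skips can be localized mechanically. A sketch, assuming the transcribed digits have already been pulled out of the text:

```python
def check_mod10_sequence(nums):
    """Given transcribed beep numbers cycling 0-9, return the positions
    where the sequence fails to advance by exactly one (mod 10): a zero
    step means a beep was transcribed twice, a larger step means at
    least one beep was missed."""
    return [i for i in range(1, len(nums))
            if (nums[i] - nums[i - 1]) % 10 != 1]
```

This keeps the spoken numbers down to single digits, as suggested, while still making every error show up at a specific position in the transcript.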
Which means, you know, even just splitting it up into a few files would probably take [disfmarker] You know, nearly all the time, there would be no [disfmarker] [speaker003:] Yeah. [speaker002:] W if it's a hundred, then [disfmarker] you know, that's not the issue, but if it's a hundred, then, the average length of each one is pretty [disfmarker] pretty long and so having a little thing at the front of it is no big deal. [speaker005:] Mm hmm. [speaker002:] So, that's why I was curious. [speaker001:] Yeah, you're right. It had to have been more than that. I mean, it was a forty five minute meeting, so [disfmarker] [speaker003:] Right. [speaker001:] And it certainly was not a minute a chunk, it was a few seconds a chunk. [speaker003:] That's what I [disfmarker] [speaker005:] Mm hmm. [speaker003:] Yeah, so it'd be a lot of overhead to type. [speaker001:] So I'm just mis remembering. I'm just mis remembering. [speaker003:] Oh, I [disfmarker] [speaker001:] So. [speaker006:] Well, and the other thing is, maybe the numbers would help them keep their place in the transcript and speed the process [disfmarker] [speaker003:] Yeah. [speaker001:] It might make the transcript faster [disfmarker] [speaker003:] Yeah. [speaker001:] W wel I mean I [disfmarker] A quick conversation with Brian would be good, [speaker006:] Mm hmm. [speaker005:] Yeah. [speaker001:] so. OK. And other than that [disfmarker] [speaker003:] Actually, if it is [disfmarker] u I mean, it's not a bad idea to try the alignment. Um, if there's only like three of them, then if you align, the first point at which things get messed up is sort of the location of the first problem. [speaker001:] Mm hmm. [speaker003:] And then, the second p I mean, you could do it that way, or you could [disfmarker] It's [disfmarker] it's [disfmarker] it would be able to handle any errors. 
So if they make, on the average, that many, and it costs them more time to do something different than what they're doing, which I don't know, [comment] but if it does, then we could try doing that as a post process, [speaker001:] Right. [speaker003:] and, um, have a student or a transcriber run this alignment, or we can do it [speaker001:] Yep. [speaker003:] and [disfmarker] [speaker005:] You know actually, what [disfmarker] all that we would [disfmarker] [speaker003:] then you can iteratively figure out [comment] where the problems are. [speaker005:] You know, Liz, all we would re [speaker003:] It would take a little work, but not any real human [disfmarker] not a lot of human work. [speaker001:] Right. I mean, that's what we want to avoid. [speaker005:] I mean if [disfmarker] in terms of the alignment, actually, all we would need is [disfmarker] we wouldn't even need words. We would need a general speech model and a beep model. Cuz all we're really concerned about is missing beeps or too many beeps. [speaker003:] Well, the for the forced alignment will run fa [speaker005:] So if we had a [disfmarker] [speaker003:] otherwise it'll take forever. I mean, to really run recognition? [speaker005:] Mm hmm. Well [disfmarker] l I guess what I'm thinking of is that w that a lot of times there will be [disfmarker] they'll put in question marks, that represent some unknown amount of speech, [speaker003:] Or [disfmarker]? Right. [speaker005:] so we're gonna have to have some kind of [disfmarker] [speaker003:] Yeah, then we just map it to a reject model. In [disfmarker] and in fact, that's what we do now. Cuz there's cases, even after Jane listens, where, you know, we have [disfmarker] [speaker005:] Right, h [speaker003:] an expert can't even tell what they're talking about. [speaker005:] Yeah. [speaker003:] Um, so that's OK. Just map that to reject [speaker005:] Right. Well, what I was thinking is, map all the speech to the reject model. 
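The iterative idea above, that the first point at which alignment breaks down marks the location of the first error, can be organized as a binary search over prefixes of the segment list, so each error costs only a logarithmic number of aligner runs. In this sketch, `aligns_ok(k)` is a hypothetical placeholder for actually running the forced aligner over the first `k` segments and checking whether it succeeds; it stands in for whatever alignment tool is used:

```python
def first_failure(n_segments, aligns_ok):
    """Binary-search for the smallest prefix length k at which forced
    alignment of segments [0, k) fails.  Assumes alignment of prefixes
    is monotone: once a prefix fails, every longer prefix fails too.
    Returns None if all n_segments segments align."""
    lo, hi = 0, n_segments + 1   # lo: known-good length, hi: known-bad (sentinel)
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if aligns_ok(mid):
            lo = mid
        else:
            hi = mid
    return hi if hi <= n_segments else None
```

After fixing the error found at that prefix length, the same search can be re-run on the remainder to find the next one, which matches the "iteratively figure out where the problems are" suggestion.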
[speaker003:] and [disfmarker] [speaker005:] And then you have a beep model. [speaker002:] Yeah. [speaker001:] But that won't work [speaker002:] Yeah, you're still talking a forced al [speaker001:] because it's [disfmarker] [speaker003:] The forced alignment might not work then, although we can try it. Um [disfmarker] [speaker001:] Well, but, all we'll get [disfmarker] [speaker002:] Well, with the beep models, is what's really [disfmarker] you'd have really g it would be really good though. [speaker005:] The beep model is [disfmarker] is gonna match, like, really well. [speaker003:] Right. [speaker002:] So [disfmarker] so it's p so it's really just doing, yeah, a keyword, like a spotting for beep with [disfmarker] [speaker003:] Right. But it'll [disfmarker] it'll grab the next beep, in other words. It'll be [disfmarker] You'll get back another offset. Um [disfmarker] I mean, I was thinking of [disfmarker] then I [disfmarker] and then I realized "Well, the recognizer will just go along and line up all the beeps, and then there'll be all the extra beeps at the end if there were more beeps then you wanted." [speaker001:] Right, you [disfmarker] you need [disfmarker] But you need the text [speaker003:] So you need the word [disfmarker] you need the word to c sort of control the relative location. [speaker001:] to tell it where it got it wrong. Yeah. [speaker005:] No, I'm not sug I'm not suggesting that we don't have anything between the beeps. [speaker003:] It will tell you [disfmarker] Oh. [speaker005:] But I [disfmarker] what I'm saying is, what's between the beeps we don't really care about. [speaker003:] I think you do care about it. [speaker001:] Because you have to know which ones match which. [speaker003:] Unless you wanna know if they're [disfmarker] Right. Otherwise you just count up the total number of beeps. [speaker001:] And we can do that with wordcount. Right? We already know that [disfmarker] that we're three short. [speaker003:] Yeah. 
[speaker001:] And so if we just did spee uh beep and non beep all you would get is that you have three mistakes, but you wouldn't tell you which ones were wrong. [speaker003:] Right. [speaker001:] You need some of the words in there, so that you can say "well, this segment matches this one, this segment matches this one, and this one doesn't match at all, unless we insert a beep". [speaker003:] I mean the only thing I'm worried about with that approach is that if we need to figure out the beep alignment problem before the transcribers do the corrections here, [comment] then we're in trouble. [speaker002:] OK. [speaker003:] In other words, if the transcripts aren't sort of good enough that the aligner constraints are good enough to sort of show you where the errors are, then it wouldn't work. But it might work. It might work to do this if their transcripts are pretty close on the words that usually get recognized correctly, which are the, you know, function words, the common words. [speaker001:] Well, actually, you know, there's an even easier way. We don't really need a beep model. Um, just extract the segments and do a forced alignment, and if the score is good, then you say it matches [disfmarker] [speaker003:] Right, you can also [disfmarker] In fact that's what we do [disfmarker] Right. The individual segments between the beeps [disfmarker] Um, if beeps were like the segments that we get from um, the transcription tool already, that that's what we have done, and it works very well. You can see these segments align and these don't. Um, then you just have to go back in and figure out where the endpoints of those segments are, cuz some of them will be wrong. Because the bi the beeps were missing, by definition. [speaker001:] Right. [speaker005:] So do you get a score for each alignment? [speaker003:] So it might actually work. [speaker005:] Is that what you get? 
[speaker003:] Um, you ca you get, um [disfmarker] Definitely when they don't align at all, it [disfmarker] it [disfmarker] it fails. I mean, that [disfmarker] that's how we found a lot of problems before, with, um, words being on the wrong place or something. [speaker005:] Mm hmm. [speaker003:] So a failed alignment is a very good indicator that [disfmarker] that [disfmarker] the words don't match up. [speaker005:] Mm hmm. I'm just wondering when [disfmarker] wha what it means to fail. [speaker003:] Um [disfmarker] [speaker005:] Is it the [disfmarker] the likelihood scores below some threshold [disfmarker] [speaker003:] Yeah, is, uh [disfmarker] it [disfmarker] Right. It can't match up the [disfmarker] That's why you actually need the text. In order to force you to try to match something that gives you a model to match against. So [disfmarker] It's just an idea, if [disfmarker] if it turns out that [disfmarker] I mean, I also like this idea of high and low beeps, or [disfmarker] But it [disfmarker] if [disfmarker] Suppose we get one or two errors, still, per [disfmarker] you know, per transcript, then we z we might wanna try some approach like that. [speaker001:] Well, the numbered ones would make it a lot easier, cuz you could then really localize where the error is. [speaker006:] Mm hmm. Yeah. [speaker003:] Where it is. Right. If [disfmarker] so, if that doesn't add time for them, that'd be great. [speaker006:] Yeah. [speaker001:] Yep. [speaker003:] You know, like [disfmarker] [speaker006:] I think it would save them time. [speaker003:] We have to ask them. [speaker006:] I think it would help them keep the track [disfmarker] track of it. [speaker002:] Uh [disfmarker] uh, again, especially if there's hundred [disfmarker] many hundreds or thousands I still think it would [disfmarker] it would cut the incidence of this a lot if it was possible to break it into a couple files. [speaker006:] Interesting, [speaker002:] Yeah. 
Cuz it, you know, it's r r [speaker006:] yeah. [speaker003:] I guess also even for the transcribers, it's quicker to load a smaller file [disfmarker] into the [disfmarker] You know, t for the checking problem. [speaker001:] Yeah, I mean, we could certainly break the meetings [disfmarker] We could certainly break the meetings into pi pieces. [speaker003:] Into twenty minutes chunks or something, or [disfmarker] [speaker001:] So, just [disfmarker] [speaker002:] I mean, you've already broken them into chunks, so if you have [disfmarker] if you have a th if you have a thousand chunks then make [pause] ten things of a hundred, or [disfmarker] or you know. [speaker005:] Just [disfmarker] Just accumulate it till you get twenty minutes worth? There's one file. [speaker001:] That's true, yeah. [speaker004:] Yep. [speaker002:] And then it will be, you know, this [disfmarker] Eve every now and then there'll be a [disfmarker] a beep missing. [speaker006:] So, someone's gonna talk to Brian. [speaker005:] Yeah, I'll [disfmarker] [speaker001:] I think that was Chuck. [speaker005:] I'll talk to him. [speaker006:] OK. [speaker001:] Yep. [speaker006:] I also looked thr over the text and I was impressed by the accuracy, it looks really good. [speaker001:] Yeah, it does look good. I mean, I found several errors, but they were not significant. They were all things that I could easily listen to and sometimes convince myself they said one thing and sometimes the other, so. [speaker006:] Yeah, it's there. Mm hmm. [speaker003:] I haven't looked at these, um [disfmarker] cuz I was gone last week, but [disfmarker] but Don had told me that there's a difference in some of the conventions? [speaker006:] Oh. Yes. But this is not important. [speaker003:] So [disfmarker] but those are all easy things, [speaker006:] It's systematic, [speaker003:] right? [speaker006:] yeah. [speaker003:] Yeah. OK. [speaker006:] And [disfmarker] and Brian actually forwarded me in advance. 
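The file-splitting scheme discussed above, accumulating existing segments until roughly twenty minutes have gone by and then cutting at a segment boundary so no beep is split across files, could be sketched like this; the twenty-minute target comes from the discussion, while the time representation and everything else are assumptions:

```python
def chunk_segments(segment_ends, target=20 * 60):
    """Group a meeting's segment end-times (seconds, ascending) into
    chunks of roughly `target` seconds each, always cutting at an
    existing segment boundary.  Returns (start_time, end_time) pairs
    covering the whole meeting in order."""
    chunks, start = [], 0.0
    for end in segment_ends:
        if end - start >= target:
            chunks.append((start, end))
            start = end
    # Whatever is left over becomes the final (possibly short) chunk.
    if not chunks or chunks[-1][1] != segment_ends[-1]:
        chunks.append((start, segment_ends[-1]))
    return chunks
```

Naming the resulting files with a zero-padded chunk index would handle the reassembly-order concern raised earlier, and a single beep miscount would then throw off only the chunk it occurs in.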
He [disfmarker] he wor he very nicely worked out a set of conventions which is intermediate between ours and the ones that they're used to. Which is [disfmarker] which is really a good [disfmarker] a good way to do it. [speaker003:] OK. So we can just map [disfmarker] map them to [disfmarker] [speaker006:] No problem. [speaker003:] Great. Hmm. [speaker001:] OK. Um [disfmarker] We still haven't really sat down and talked about file reorganization, and directory reorganization. So we still have to do that. But I don't think we need to do that in this meeting. But, uh [disfmarker] It is something that needs to get done, and I wanna also coordinate it with Dave so that we do a level zero backup right after. So we don't waste a lot of tapes. But, uh [disfmarker] So [disfmarker] let's [disfmarker] let's try to do that sometime, OK? ch OK? [speaker006:] Yeah. We can do it this week. Yeah. [speaker001:] Um [disfmarker] We also still [pause] have to make a decision about mike issues, what we wanna do with that. [speaker006:] I thought we were going to get two more of the ones [disfmarker] [speaker001:] And just swap them in and out? Yeah, we could certainly do that. [speaker004:] So why not? [speaker001:] So [disfmarker] uh, Morgan, just to [disfmarker] uh, since you weren't at the meeting last week, uh, apparently a bunch of the EDU fe folks really hate this style mike. [speaker003:] Um, I wouldn't [disfmarker] well, it [disfmarker] they didn't say "hate" but they [disfmarker] they c they come on time to their meetings in order to not be left the last person who has to sit by those mikes. [speaker006:] Can I [disfmarker] I c I [disfmarker] [speaker001:] I mean, that's [disfmarker] that's [disfmarker] that's a fairly strong indication of dislike. [speaker006:] I actually really like them though. [speaker001:] Um [disfmarker] [speaker002:] Which one is these? [speaker003:] There were a few people [disfmarker] you, and like, three or four people who really like them. 
[speaker002:] The uh [disfmarker] [speaker001:] The Crown. [speaker002:] Uh huh. [speaker006:] Yeah, I do. [speaker003:] And e and th all the others really don't. [speaker001:] So what I was thinking is, we could get a few more of the Sony ones and just unplug them and plug them in. And the only thing is that when you fill out the digit forms, you have to be sure to indicate which mike was actually used. So, I mean, that's easy to do. Um [disfmarker] it [disfmarker] it [disfmarker] moves us away from this uniform, all the mikes are the same, which some people had said was a benefit. But [disfmarker] [speaker006:] The other thing, too, is th y you said that the acoustic pra q quality of the one that you're wearing, the fancy one, is actually somewhat better than [speaker001:] It is. Yeah. [speaker006:] So I I don't know if it's a noticeable difference, but it is [disfmarker] I mean, I p I personally really prefer that model, so I think you know we've got this problem of people having different [disfmarker] different preferences. [speaker002:] You know, there are so many things about this that are non uniform that I think having [disfmarker] having some modicum of uniformity is a good thing, but you're just not going to [disfmarker] I mean we've [disfmarker] [speaker006:] OK. [speaker002:] uh [disfmarker] For instance, jus just the fact that people sit in different positions, different times, the [disfmarker] exactly where they put it with respect to their mouth is different each time. [speaker001:] Yep. [speaker002:] Uh [disfmarker] I [disfmarker] I just don't [disfmarker] I mean, I think we [disfmarker] we're [disfmarker] we're getting rid of major differences [speaker006:] Mm hmm. [speaker002:] like we're not using the lapel anymore, an and so forth. But the other stuff is just gonna [disfmarker] some variability we're gonna have to accept. [speaker006:] It's [disfmarker] I [disfmarker] I understand, that's fine with me. 
I [disfmarker] I g would say that um, I find this model easier to adjust, that's one of the reasons I don't like these, cuz I keep m you know, bending it, [speaker002:] Uh huh. [speaker006:] and then it's n and I never really know [disfmarker] [speaker002:] Yeah, see I [disfmarker] I happen to like the old one better. [speaker006:] You do? [speaker002:] But [disfmarker] [speaker006:] OK. [speaker002:] Yeah. [speaker003:] What about just a different headband thing? [speaker002:] And some people do, but [disfmarker] [speaker003:] Or even if we could attach two headband [disfmarker] [speaker001:] I mean [disfmarker] th [speaker003:] Like, I don't mind those, [speaker001:] We abs th [speaker003:] but it [disfmarker] they bounce around. [speaker001:] We have lots o of choices of microphones. [speaker003:] I mean, can we keep the microphones, and just somehow attach a more comfortable [comment] thing over your head? [speaker001:] Mm hmm. Mmm. Yeah. [speaker003:] That's my problem with this one. The [disfmarker] the ear thing comes out to here. It doesn't even fit over my ear. [speaker001:] Right. [speaker003:] And for some of the others, it's [disfmarker] their ears were shaped in a way that didn't hold the [disfmarker] [speaker006:] Hmm. [speaker001:] Right. [speaker006:] Huh. [speaker001:] I mean, so we can go microphone shopping, and get [disfmarker] get more microphones. It's just [disfmarker] w [speaker006:] Then they have to be rewired though, [speaker001:] I'm just not sure what we should do. [speaker004:] Yeah. [speaker006:] don't they? Specially. [speaker002:] You know. I mean, here's another argument against worrying too much about uniformity. We have something like forty five hours of data collected. And, uh, we're also gonna be integrating in this corpus stuff that's [disfmarker] that's being done at UW, uh, with entirely different microphones. [speaker001:] Yep. 
[speaker002:] So I [disfmarker] I think, you know, getting away from the lapel, having close mounted mikes, having people adjust them as they're most [disfmarker] as [disfmarker] as [disfmarker] as comfortable as they can [disfmarker] [speaker001:] Well, I [disfmarker] [speaker002:] I think that's sort of the big deal. [speaker001:] I agree with that, and so my feeling is, we should get mikes that people like. [speaker002:] Yeah. [speaker006:] Yeah. [speaker001:] And so I my feeling is, going out and buying a few more mikes is fine. [speaker002:] Yeah. [speaker004:] Yeah, that's right. [speaker001:] So should I just go do that? [speaker002:] Yeah. [speaker001:] OK. [speaker003:] What about these [disfmarker] [speaker001:] And then also, should we go ahead and get another wireless system? You know, for whatever it's gonna be, uh [disfmarker] [speaker006:] Mm hmm. [speaker002:] Uh, another wireless system. You mean, another transmitter? Or [disfmarker] or [disfmarker] [speaker001:] Uh, no, actually, get [disfmarker] we need another box. Because each [disfmarker] each box in the back room can only take six. [speaker002:] Oh, to get r to get rid of the wired things. [speaker001:] So we could Yep. [speaker002:] Yes, absolutely. [speaker001:] OK. [speaker002:] Yeah. Uh, because [disfmarker] cuz we've had [disfmarker] With the exception of forgetting to change the batteries from time to time, uh, we've really had no tr no trouble with this at all, [speaker001:] So I'll just go d I'll do that. Yep. [speaker002:] right? And [disfmarker] and the wired stuff is just gonna continue to degrade as, you know, and nobody will remember how to [disfmarker] [speaker003:] Another one just bit the dust. [speaker002:] yeah, nobody will remember how to repair a Jimbox or a Jimlet or a [disfmarker] [speaker005:] Really? [speaker003:] I think there's a problem with this one. [speaker001:] Yep. [speaker002:] you know, it's [disfmarker] [speaker001:] OK. 
Then I will just go do that, [speaker002:] Yeah. [speaker001:] and send the bill to [disfmarker] uh, [speaker002:] Whoever will pay, yeah. [speaker001:] what's her name. Whoever will pay. You. [speaker002:] Yeah, yeah. [speaker003:] I have a quick question about microphones, um [disfmarker] I got [disfmarker] this crazy idea that, um, i in the future, people will just walk around with the microphones that they use for their cell phones? You know, these little boom ones, like, and really go to meetings with close talking mikes. If they're their own personal microphone. And so I'm wondering if we can get a couple of tho I don't know how good quality they are, but it would be really interesting to see if they're good enough. [speaker002:] Do you mean the kind that hang from the ear, sort of thing, [speaker003:] The k kind that guys that like to look like they're really cool at airports wear. [speaker002:] or the [disfmarker] [speaker004:] Yeah. [speaker001:] Yeah. [speaker002:] Yeah, but I don't know that one. So what's that mean? [speaker001:] This [disfmarker] it's [disfmarker] it's a cell phone jack. [speaker003:] You can't miss it. [speaker005:] Oh, yeah, you do. [speaker003:] You cannot miss it. [speaker002:] Is this [disfmarker] just this little [disfmarker] [speaker003:] They're the guys going around [disfmarker] they're probably talking to nobody, but [speaker002:] Oh they [disfmarker] they have little headsets? [speaker004:] Yep. [speaker001:] Th d [speaker003:] Yes. [speaker002:] Or what do they [disfmarker] [speaker001:] Yep. [speaker003:] They wear the heads E Well, there you go. [speaker002:] Oh you've [disfmarker] you've got one. [speaker003:] Yes. That's [disfmarker] that [disfmarker] it looks sort of like that. [speaker001:] Except that one's even bigger than most of the ones I've seen. [speaker002:] So where [disfmarker] where does [disfmarker] how do you wear that? [speaker001:] Most of the ones are just a little boom. 
[speaker002:] What does that [disfmarker] [speaker006:] You [disfmarker] you put it over the corner of your ear. [speaker002:] Yeah, over the corner of the ear [disfmarker] [speaker006:] I can't [disfmarker] I'm having competing microphone [disfmarker] headphone problems here. [speaker005:] There's another kind which s goes in your ear [speaker002:] Huh. [speaker005:] and it hangs down. There's a little b [comment] [pause] bulb [speaker004:] Yep. It just ha just hangs down there. [speaker002:] Yeah, that I've seen. Yeah. [speaker005:] that hangs [disfmarker] has the mike on it. [speaker003:] OK, so whatever people sort of wear to use l [speaker006:] This one. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker003:] Yeah, something like tha with their cell phones, [speaker002:] Yeah. [speaker003:] it'd be really great, I think, if we can argue t that [disfmarker] [speaker006:] It's an interesting idea. [speaker003:] people like that. [speaker006:] The acoustics [disfmarker] the acoustics on this are really s m better than when I talk into it directly. [speaker001:] Yeah. The [disfmarker] the biggest issue is the stupid cable [speaker002:] But [disfmarker] what [disfmarker] [speaker003:] Are they? [speaker002:] Why would it be cool? [speaker001:] that Sony [disfmarker] The connector that Sony has, that, uh [disfmarker] [speaker003:] Oh, OK. So it's [disfmarker] they're not compatible? [speaker001:] It's non standard, and so you can't just plug something in. You need to get it wired. [speaker002:] Oh, I'm sorry. You're talking about that as a possible thing to plug into the Sonys. [speaker003:] Well, um, just as an example for, you know, the future, of the fact that maybe people will wear those microphones, [speaker001:] Right. [speaker002:] No? [speaker005:] How standard are these? [speaker006:] Works with this. I don't know. [speaker003:] or some people might, [speaker002:] Well, some people will, yeah. [speaker003:] to meetings, you know. 
Not a [disfmarker] I'm not saying there's not a far field microphone, uh, application, but [disfmarker] If we have a choi I've always wondered how well they would work. [speaker005:] It is an interesting idea. If we could build an adaptor that went from [disfmarker] cuz I think mine looks like this too. If we had this style plug that went to that, then we'd just have adaptors [speaker001:] Right. [speaker005:] and people would plug them in. You could use [disfmarker] they could use their own. [speaker003:] Oh, right. So they could use the [disfmarker] [speaker001:] I wonder if you could do an adaptor. I don't know if you can or not. [speaker003:] Anyway, it's just an idea. [speaker002:] Yeah. [speaker003:] Um [disfmarker] [speaker001:] An adaptor might be a better idea than do redoing the wiring all the time. [speaker003:] Mmm. [speaker002:] Well, I mean, another s An another side to that is that, um, in principle, uh, if it all comes together, supposedly we're gonna get two um, IRAM boards. [speaker001:] Right. [speaker002:] Uh, little later on in the year. And so, we're gonna wanna be thinking in terms of some wireless connection to it. [speaker001:] Mm hmm. [speaker002:] Um, so we have, you know, some kind of little something, and, uh, we have a wireless connection, this board, that's doing the computation for the f the, uh [disfmarker] the recognition. And so, you know, we wanna be able to show off to somebody, you know, we have a base station somewhere, and you're wandering around, uh, hands free, and [disfmarker] [speaker001:] Yep. [speaker005:] I don't understand. So [disfmarker] so there's a mike that plugs in [disfmarker] You're saying there's a mike that plugs in to the IRAM board? [speaker002:] No, no, what I'm saying is that we're gonna have to put together some kind of a little superstructure, d some sort of demo box that it [disfmarker] that doesn't do a whole lot, [speaker005:] or [disfmarker] [speaker001:] Demo. 
[speaker002:] but it somehow connects to [disfmarker] uh, remotely to this IRAM board which will be doing the real computation. [speaker005:] Oh, I see. [speaker002:] In other words, we're [disfmarker] we're not gonna attempt to build, uh, it [disfmarker] do the real work of building a portable device that's little, that is doing all the computation we wanna do for the recognition and the far field mike, uh, compensation and all that stuff. So we're gonna do all that on this powerful board, and, uh, have some wireless connection to it. And then the question is what is [disfmarker] what does it look like, does it look like something that you [disfmarker] you hold like this, or do we make a big deal out of it that you hold it arm [disfmarker] at arm's length th so that you can see some display, or, uh, do we have one of these [disfmarker] You know. I don't know. So, it th these are all possibilities. [speaker001:] Yep. I mean, what I was envisioning was a PDA with a cellular link in it. You know, with one of the r short range wireless. And let th Just do [disfmarker] capture the audio on the PDA and send the audio over the wireless net. [speaker002:] Yeah. So, is is Bluetooth the sort of thing that'd be, uh [disfmarker] [speaker001:] But. Uh, Bluetooth is shorter range, but you could use Bluetooth. Wh whatever the network guys already have hooked up for this floor. I don't remember what it's called. [speaker002:] Hmm. [speaker001:] OK, then uh, another issue on file reorganization is, uh, making data available to people outside ICSI. So, specifically, the U W folks have been wanting to get access to it. So I think the right thing to do for that, is figure out how to do CVS without uh, compromising security. Some pass SSH tunneled CVS, [speaker005:] Well, I I was [disfmarker] [speaker001:] and then give them access. [speaker005:] I was reading through some of the CVS documentation and you can just substitute CVS for your R S H command. 
[speaker001:] Yes, except that the command has to not take a password. [speaker005:] Hmm. [speaker001:] Because it it's non interactive. And so the only way to do that is with, uh [disfmarker] S hosts, which is insecure. [speaker005:] Hmm. [speaker001:] So we still have to sort of look at it a little bit. I mean, I'm sure it's been solved, and I just haven't found the solution yet. But I am sure that people have done it because it's gonna be a problem a lot of people have. But, um [disfmarker] [speaker002:] Who [disfmarker] Well, we're s talking about a fairly limited group, are [disfmarker] are [disfmarker] is it a small enough group that they would [disfmarker] could just have accounts? Does that solve anything? [speaker001:] Uh, n n not really, because then we have the coordination issue. I mean, so one thing they could do [disfmarker] I mean, I guess that's true. They could simply log in to ICSI [speaker002:] Yeah. [speaker005:] Well they're gonna have to have accounts [disfmarker] [speaker001:] and do everything locally. [speaker005:] If they use SSH they have to have an account anyways. [speaker001:] Yeah, except they could share one. [speaker002:] Well, but that's not [disfmarker] [speaker001:] The, uh [disfmarker] the SSH accounts, and the user accounts don't have to be similar. So. I mean, there are lots of ways of doing it, but w but if we can't figure out a remote way of doing it, just letting them log in to ICSI might be OK. [speaker003:] Mm hmm. [speaker001:] So the intention was to put everything except the audio files themselves under revision control. And the audio files, it's not worth doing cuz they're too big. You're never gonna be copying them around and making working copies of them. [speaker002:] Right. [speaker001:] And then the other issue Jane and I spoke of [pause] briefly is just general permission issues. That right now, lots of people create the files and then we have group "Meeting Recorder". 
But that means anyone in the Meeting Recorder group can overwrite the files. And so that's a pretty coarse level of [pause] granularity. So we might wanna think about doing a Meeting Recorder user [disfmarker] owner for those files, and then doing group slightly differently. But [disfmarker] I think what we have now [disfmarker] [speaker003:] Can you just do, like, SUX or something to modify [disfmarker] I mean, if you just have one user that [disfmarker] [speaker001:] Another option is to m make a non make it owned by a not real user, and then root will be able to do it. But root is pretty tightly controlled here. [speaker003:] Mm hmm. [speaker006:] I had another thought. [speaker001:] So th I don't think that's a good solution. [speaker006:] Would [disfmarker] w would it be possi There are scripts that uh, run, and [disfmarker] and when it's finished compressing, would it be possible for the script to change the permissions on the file to be more strict? Have them [disfmarker] [speaker001:] Yes, but if you do that, then someone who isn't the owner can't unchange it. [speaker006:] But after the meeting's done, would they want to? [speaker001:] Yeah, as I said, it's just a question of, uh, do you wanna have to track down root if you have to make any change. [speaker003:] But it doesn't have to be root. It could be some other user that we all can SUX to. [speaker006:] That's true. [speaker001:] You have security issues with that. So. [speaker006:] Yeah, that's true. [speaker003:] I mean [disfmarker] [speaker005:] That's what we do with doctor speech though, right? [speaker001:] No, with doctor speech, it's just a group. So [disfmarker] there're lots of different owners of i i of the files, but the group is always doctor speech. Or real, usually. [speaker005:] But there is a user doctor speech. Right? I mean, I [disfmarker] I SUX to doctor speech w when I need to make a [disfmarker] a directory on, uh, one of the [disfmarker] [speaker001:] Nnn Oh, do you? 
[speaker003:] Oh. So tha that's what I meant, it [disfmarker] just some user like "meeting root" or something. [speaker001:] Oh, OK. [speaker003:] I don't know. [speaker001:] I mean, I'm not too worried, because the group's pretty small. But [disfmarker] [speaker002:] Yeah. [speaker006:] Also it seems like once it's been stored that night, it's [disfmarker] it's possible to recover from backups, so that the main risk is avoiding it being accidentally deleted during the day that it was recorded. [speaker001:] Well, and just [disfmarker] You, then, also just have the general problem of permission on the other files, that, do you want people checking out a transcript file when they find an error, and correcting it, or do they wa do you want it to go through you? [speaker006:] Yeah. Yeah. And I'd prefer it to go through me. [speaker001:] OK, but that means that you're gonna have to be available for people, all the time, who say "there's an error here", rather than just [disfmarker] them just going in and correcting it. [speaker006:] Well I p At this point [disfmarker] [speaker005:] Well, with CVS, though, you don't actually have to do it that way. [speaker006:] Yeah. [speaker003:] Yeah. [speaker001:] Well, but [disfmarker] You have a choice. Either you let people do it themselves, or you don't. [speaker006:] Yeah. [speaker005:] What I'm saying is that it can be the case that people can do it themselves and it can be reviewed. [speaker006:] w w Well [speaker005:] Right? [speaker001:] Right. [speaker005:] I mean, it's not like it's one or the other. We can do both. [speaker003:] Yeah, you can [disfmarker] [speaker001:] Right, so you could get email. Send email to Jane that someone has done a revision. [speaker005:] Right. 
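The post-recording lock-down being proposed here (the compression script tightening permissions once a meeting's files are safely stored) could be sketched roughly as follows. The directory, file name, and group are hypothetical, and this is only an illustration of the permission change, not the actual ICSI script.

```shell
# Hypothetical sketch of the lock-down discussed above; paths and names are made up.
set -e
dir=$(mktemp -d)
audio="$dir/mr001.sph"
touch "$audio"

# While the meeting is being processed that day, the group may still write:
# chgrp meetrec "$audio"   # assumes a "meetrec" group exists
chmod 660 "$audio"

# After compression finishes, strip write permission so only the owner
# (or root) can deliberately restore it later:
chmod 440 "$audio"
stat -c '%a' "$audio"
```

The trade-off raised in the discussion still applies: once the file is read-only, a later correction means finding the owner (or root) to re-open it.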
[speaker006:] I [disfmarker] I wanna also say that, you know, this is a [disfmarker] uh, uh, My [disfmarker] my view on this is that once it's in a final version and I'm working toward all these being final versions, then it can go into this next ro this next realm. But it [disfmarker] it's sort of counter productive to have changes made which I would be making if I'd just gotten that far in the file. You know, it's [disfmarker] it's, um. [speaker001:] Well, so, you don't have to release it to the world until you're ready to. [speaker006:] Yeah. That's right. [speaker001:] So the question is s w [speaker006:] And then after that, then this would probably work. Yeah. [speaker001:] So you would keep it in a separate file structure until you were ready to put it in the repository. [speaker006:] Good, good. [speaker003:] I mean [disfmarker] [speaker001:] Or just keep the lock on it. [speaker003:] Is CVS, like [disfmarker] Yeah, just keep the lock on. I mean, just check it out [disfmarker] check it out [speaker001:] Until you're [disfmarker] Yep. [speaker003:] and don't let anyone check it out [disfmarker] [speaker005:] No, it does not [disfmarker] the CVS doesn't work that way. [speaker003:] Oh, it [disfmarker] Oh, it doesn't. [speaker005:] There's no [disfmarker] there's no lock when you check something out, [speaker003:] Oh, OK. [speaker005:] not like RCS. In other words, everybody can check out anything. [speaker003:] Oh. [speaker005:] The C stands for "concurrent". [speaker003:] So why not use RCS at that point? Just at that stage? [speaker001:] Cuz it's not remote. [speaker003:] It's not remote. [speaker001:] The nice thing about CVS is [disfmarker] is, you can be on a different machine. [speaker003:] But at that point [disfmarker] we only want, probably, Jane to be in control anyway, and she [disfmarker] I mean, it's just an i [speaker001:] No, because UW wants to have access to these files. That's the whole point. [speaker003:] But not to modify them, right? 
Just to read them. [speaker001:] Well, if they k en if they record a meeting, and they create a "key" file, and they find an error in their "key" file, they shouldn't have to tell us about that so that they can create [disfmarker] correct an error in one of their files. [speaker006:] Well, the "key" file is [disfmarker] [speaker001:] So that's why I wanna do it [disfmarker] [speaker003:] I don't know, [speaker006:] ke "key" file's different. [speaker003:] uh [disfmarker] I mean [disfmarker] [speaker006:] "Key" fi [speaker001:] Different than what? [speaker006:] Well, "key" file has the extra attributes that if you notice that there are lots of spikes on channel so and so, it's useful as a repository for putting that there. And also if you notice that [pause] there was an extra speaker, and didn't get mentioned in the "key" file, but they were only there for a second, but it's t useful to mention that they were there. So I th I think that the "key" file is usefu for notations related to the meeting, [speaker001:] Well [disfmarker] B But, I mean, the meetings are gonna be the same way. [speaker006:] but it's different from the transcript. [speaker001:] So UW records a meeting, we transcribe it. [speaker006:] Yeah. [speaker001:] A few months later they listen to it and they say "Ope! [comment] They got that acronym wrong." [speaker006:] Yeah. [speaker003:] Then they should send that through [disfmarker] [speaker001:] Why [disfmarker] why sh [speaker003:] Well [disfmarker] It's sort of not really a question about permissions, but more of procedure. Either those all go through Jane, or through someone, [speaker001:] Mm hmm. [speaker003:] or they all don't. But it's sort of [disfmarker] And if they all do, then there isn't a problem, right? Cuz once they give us the data, it's ours, and if they wanna make changes [disfmarker] I mean, ours to sort of transcribe and annotate. 
[speaker001:] The [disfmarker] the p the point here is that it would be nice t for there to be one repository. [speaker003:] And if they wanna make changes, they can [pause] do that. [speaker002:] Yeah [speaker006:] Yeah. [speaker001:] And if we don't let them modify their own data, [comment] they're not gonna store their data in our repository. Or if we make it inconvenient for them to change their data. [speaker003:] So then if they do have acc [speaker001:] I [disfmarker] I'm just looking at it the other way around. [speaker003:] OK, so what if they have [disfmarker] What if they have accounts here, and they use RCS, [speaker001:] So [disfmarker] so let's say we k [speaker003:] at that point, where you can l really lock a file. I mean, I'm worried. If you can't lock a file, this [disfmarker] this to me sounds very scary. [speaker006:] Hmm. [speaker003:] Um [disfmarker] And if they have accounts here and they're modifying it [disfmarker] If they're ac if they're so closely linked that they're actually modifying transcripts and "key" files, then they could do it by, you know, secure shelling into ICSI, under RCS, at that point. [speaker001:] Ye well, I don't see a real difference between RCS and any other system. I mean, it's just mechanisms. The nice thing about CVS is that you can have multiple people modifying and if the changes don't overlap each other, they can just do it. [speaker006:] Hmm. That's interesting. [speaker001:] So the problem with RCS is that if two people want to modify a file at the same time they can't, [speaker004:] Yeah. [speaker001:] because the granularity of locking is at the file level. [speaker002:] What what does CVS do if they [disfmarker] if they do overlap? [speaker001:] Uh, there're lots of different policies for dealing with it. [speaker003:] Mm hmm. [speaker005:] But typically a person has to, uh, mediate you know, which changes actually get [disfmarker] go through. [speaker001:] So [disfmarker] [speaker004:] Yeah. 
[speaker006:] Oh. [speaker004:] Yeah, to decide which one is [disfmarker] [speaker006:] Oh. [speaker004:] Yeah. [speaker002:] Oh, I [speaker003:] So, maybe there is a way in CVS to effectively [comment] lock something if you don't want people to make any changes? [speaker001:] At the repository level, there certainly is. [speaker003:] Uh huh. [speaker001:] You can mark a file as [disfmarker] as "you can't check this out". [speaker003:] Oh. So [disfmarker] [speaker005:] CVS is pretty good. [speaker003:] So then you could use CVS and, you know, just f have remote access, [speaker002:] OK. [speaker001:] Yep. [speaker003:] but then it's up to whoever is, sort of, responsible for that level of transcription to decide how and when to put these locks, [speaker001:] Well, it's [disfmarker] it's just like source code, [speaker003:] or [disfmarker] [speaker001:] right? So that, when you're developing it and it's really rough, you don't put it in the repository. [speaker003:] Yeah. [speaker001:] You wait to the point when you're ready to release it, and then you put it in the repository [speaker005:] Yeah, so the w [speaker006:] Mm hmm. [speaker001:] and if other people wanna [disfmarker] Copy. [speaker005:] In The [disfmarker] Maybe the model [disfmarker] It's a little bit different in CVS. When you check something out, you actually create your own directory with copies of everything in it, and that's what you work on. So, until you're ready to check it in, nobody sees anything you've done. And then when you check it in, it puts, sort of, you know [disfmarker] every change that you've done is [disfmarker] goes into the central [disfmarker] [speaker003:] Right. So if that's at an [speaker002:] Hmm. [speaker003:] I guess it depends on Jane's [disfmarker] Uh, you know, if that's [disfmarker] if that model works for the transcripts, then that's fine. 
But if that pr [@ @] um, allows someone to come in and modify while you're modifying, and they turn out to be changes that, you know, would have been better to wait until your version came out, then that's really up to you, not up to the software, [speaker006:] I also think, um, that the type of change ch ch varies an I think it's a very good point. Because what if the [disfmarker] uh, what if one set of changes one person was making was with reference to some typographical convention. [speaker001:] Mm hmm. [speaker006:] And the other person n unkn n quite unknowingly, was changing it the other way around. Then you could end up with a real mess, in terms of, like, consistency of conventions across a file, there is an argument to be made for having a single editor editing at that level. But I like b I like what Liz is saying about [disfmarker] [speaker005:] Well yeah, there would [disfmarker] there would have to be somebody to enforce the consistency. [speaker006:] That's right. And I also like [disfmarker] I l also like Liz's uh, k um, phraseology of "at this level of" [disfmarker] I don't remember how you put it, [comment] but this [disfmarker] "at this level of transcription" or whatever. [speaker005:] Yeah. [speaker006:] I mean it's like there are different layers of whatever and different types of conventions, and if it were an acronym, then I would [disfmarker] I think that it would make sense f for them to have input into that level of things, but I wouldn't want them to be changing [disfmarker] [speaker001:] I'm just [disfmarker] Imagine that we reverse this and they were keeping the repository and we were d collecting meetings and sending them to them. [speaker006:] Yeah. Mm hmm. [speaker001:] If I found an error, and I sent them email and said "here's this error", and I didn't get a response for a couple weeks, I would stop sending them the [disfmarker] my data and I would start collecting it here and we would end up with two corpora. 
[speaker006:] Well, [speaker001:] And I don't want that to happen. [speaker006:] yeah. I agree. [speaker001:] So we have to make it convenient for them to make changes that they wanna make. [speaker006:] Well [disfmarker] Well, I see now, we're k back into the same situation as when we were, uh, talking about a allowing people to edit the transcripts before they [comment] become public. So there's a question of how easy or how hard to modify, and how that impacts the amount of changing that's done. [speaker001:] Mm hmm. [speaker006:] And I [disfmarker] I think that it's, um, I [disfmarker] I have received some suggestions for change, which I've made. And um, I think with an acronym, that's a clear case where it would be an easy change to incorporate. But it would worry me to have people, um, having free reign to change these things, because of the need to have consistent conventions. And [pause] people who are only just visiting the data might not have a sense of the larger system. [speaker003:] Yeah, that's what I would be worried about too. [speaker005:] Well, I think [disfmarker] yeah, I think there are probably ways that you can say that, you know, only certain users are allowed to change things. [speaker006:] Yeah. Mm hmm. [speaker005:] Uh, so we can probably restrict it to be [disfmarker] You know, f casual users who are just browsing can just r get it read only, can't check in to the archive. [speaker006:] Mm hmm. [speaker005:] Whereas we could say "other people", you know, "these people are allowed to actually submit changes". [speaker003:] Right. [speaker005:] I mean, I [disfmarker] I think one thing that's probably everybody would agree on is that there will never be a point where you know, we can say that for sure there's no problems with any of the transcripts. I mean there's, uh [disfmarker] f You know, something's going to get overlooked at some point, or somebody's gonna wanna make a change somehow. 
So we need to have some mechanism for, um, handling changes in the future. [speaker006:] I agr I agree but I have to also add one other thing which i before I forget it, which is Adam's point about when you were reviewing the IBM transcripts. u This was mentioned at the beginning of the meeting. You said that there were some places where [pause] it wasn't what y was written already, but you could see how they might have thought that's what it said. So there [disfmarker] it's not just a right and wrong thing. Sometimes there tru there are absolutely stretches where there is no [disfmarker] no [disfmarker] k if you were at the meeting and you could ask the person, [speaker005:] Yeah. Exactly. Mm hmm. [speaker006:] maybe, m you know, they could [disfmarker] they could've told you, but five minutes later they might not have been able to tell you, [speaker005:] Right. [speaker006:] even if they were listening to it. So, sometimes it isn't right and wrong, [speaker005:] Yeah. [speaker006:] sometimes it's multiple interpretations, and [disfmarker] [speaker005:] Good point. Yeah. [speaker006:] Now [disfmarker] But of course, this does raise the possibility of allowing, um, an extra annotation for, you know, alternative interpretations of a stretch. [speaker005:] Mm hmm. [speaker006:] But I just wanted to say it isn't simply right or wrong. [speaker005:] Yeah, that's a good point. [speaker006:] Or overlooked or not overlooked. Sometimes they're truly ambiguous, and um, uh, um, [speaker005:] Mm hmm. [speaker006:] nice to have a notation for that, [speaker002:] Gotta go. [speaker006:] but. Yeah. [speaker005:] Yeah. [speaker003:] I mean, I like that idea because even a casual user can always send email to whoever's in charge, [speaker005:] Mm hmm. [speaker003:] and say, you know, "we'd like these changes" [speaker001:] Well [disfmarker] [speaker003:] and, you know, hopefully we'll give them a response. 
[speaker001:] I just [disfmarker] I don't s [speaker003:] And if they really do it a lot, and they say "we're a casual user but we want a chance to change the transcripts" then we can face that if it happens, [speaker001:] I don't see that this is e Yep. Can you write "unread" on that? [speaker003:] but [speaker001:] Yeah, thank you. [speaker003:] I don't really see the people at UW that I know of right now making huge amoun investing huge amounts of time in changing transcripts. But I could be wrong. [speaker001:] Probably not transcripts, but it would be nice to have one mechanism for all the files. And they're certainly gonna wanna change tools. [speaker003:] Yeah. [speaker001:] And so [disfmarker] you know, it's the same as source code. You [disfmarker] you release source code at some point. And, either you have to do everything or you share responsibility. [speaker005:] Yeah I mean a lot of the open source pro projects face this same problem. [speaker003:] Right. [speaker005:] You know, p anybody can check out the code, but not everybody can check it in. [speaker003:] Right. And there's some happy medium [speaker005:] Yeah. [speaker003:] and we don't know what that is yet until we get feedback from people, but what if it's OK to just handle it with sort of a person in charge of the philosophy behind the changes, and some people with permissions, maybe by request, to make changes, That we don't just give people permissions if they're not gonna make changes. Because I've overwritten a [disfmarker] a file by mistake, not wanting to have done that cuz I didn't think I had permission, when I did. [speaker001:] Mm hmm. [speaker003:] Um, and then just seeing if that is enough to [disfmarker] to handle the transcript changes. [speaker006:] Mm hmm. 
[speaker003:] I'm just worried about letting everybody go in and make changes, cuz it's real easy when you're trying to, I don't know, run alignments and there's a word you wanna fix, to go in and do that and then mess up other things, if you don't know, you know, the overall philosophy behind the [disfmarker] the conventions. [speaker006:] Mm hmm. [speaker001:] Right. [speaker005:] Then [disfmarker] There is also the possibility [disfmarker] Remember, in CVS, when we talk about making changes, there's sort of two types that can be made. [speaker006:] And als it also Mm hmm. [speaker001:] You can back everything out. [speaker005:] To your local, or to the global. [speaker003:] Mm hmm. [speaker005:] And so people can make changes to their local [speaker003:] Right. [speaker005:] and if they screw those up, that's, you know, in their [disfmarker] that's only on them. [speaker003:] Exactly. [speaker006:] Mm hmm. [speaker005:] It's w it's the checking in part that r we really care about, and that we can control with, you know, who can check things in and stuff. So we can [disfmarker] I think we can do this. [speaker003:] Or just, yeah, start by make it a really tight control [speaker006:] Mm hmm. [speaker005:] Mm hmm. [speaker003:] and then as people really need the control you can ascertain whether or not to [disfmarker] [speaker005:] Yeah. They can still check it out [disfmarker] Yeah, cuz they can still check out full copies and make whatever changes they want to their local copies. [speaker003:] Right. [speaker005:] And it won't affect the [disfmarker] the g the global, official, ones at all. [speaker003:] Right. [speaker001:] Right, but what I want to avoid is ending up with two corpora. [speaker004:] Yeah. [speaker005:] Yes. Definitely. [speaker001:] And so if you make it too hard for them to check back in, they'll check it out once, [speaker005:] Yep. [speaker001:] they'll make all the changes, they'll never tell us about the changes, [speaker005:] Mm hmm. 
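The remote, lock-free workflow described in this exchange, SSH as the transport (so no interactive password, unlike the `.shosts` approach), a private working copy, merging of non-overlapping edits, and a commit step that can be restricted to approved users, might look roughly like this. The host, repository path, and module name are invented for illustration.

```shell
# Illustrative only: the server, repository path, and module are hypothetical.
# With an SSH key loaded in an agent, the connection is non-interactive,
# which is what CVS needs from its transport command.
export CVS_RSH=ssh
export CVSROOT=:ext:uwuser@cvs.icsi.example.edu:/u/meetings/cvsroot

# Check out a private working copy; no lock is taken, unlike RCS:
cvs checkout mr-transcripts
cd mr-transcripts

#   ...fix an acronym in a transcript file locally; nobody else
#   sees the change until it is committed...

# Pick up others' committed changes first; CVS merges non-overlapping
# edits automatically and marks genuine conflicts in the file:
cvs update

# Publish the change -- this is the step that can be limited to the
# users who know the transcription conventions:
cvs commit -m "Correct acronym in mr001 transcript"
```

This matches the model described above: casual users can check out read-only copies freely, while check-in rights stay with a small reviewed group.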
[speaker001:] and we'll get two different corpora. [speaker005:] Mm hmm. [speaker001:] Right? [speaker003:] I think we're already gonna [disfmarker] I mean there's already some chance that different annotations, different places, you know [disfmarker] But you can control that by knowing you're making two corpora or knowing that you're adding um, annotations on one version and you don't have the latest corrections maybe at that point. And then you finish the project and you realize that there were corrections made on your originals and then you have to merge them. And the b the thing that makes it ea OK to do that, is knowing where the synch time boundaries are. Cuz you can automatically pretty much merge things if you've only got twenty words or so in an utterance. It's when you get the whole meeting and the synch times ch have changed, or you can't correspond to a previous version with synch times, that you get in trouble. So. Was there anything else on your list? [speaker001:] Nope. Shall we do some digits? [speaker003:] Now, wait, are [disfmarker] are we doing them simultaneously or one at a time? [speaker001:] I wasn't planning on doing it simultaneously. [speaker003:] Alright. [speaker005:] Lee, you liked that l huh? [speaker001:] Oop! [comment] Ouch! [speaker006:] Ooo! [speaker006:] OK. [speaker002:] Uh. Somebody else should run this. I'm sick of being the one to sort of go through and say, "Well, what do you think about this?" You wanna [disfmarker]? [speaker006:] Should we take turns? [speaker004:] Yeah. [speaker006:] You want me to run it today? [speaker002:] Yeah. Why don't you run it today? [speaker006:] OK. [speaker002:] OK. [speaker006:] OK. Um. Let's see, maybe we should just get a list of items [disfmarker] things that we should talk about. Um, I guess there's the usual [pause] updates, everybody going around and saying, uh, you know, what they're working on, the things that happened the last week. 
But aside from that is there anything in particular that anybody wants to bring up [speaker004:] Mmm. [speaker006:] for today? No? OK. So why don't we just go around and people can give updates. [speaker005:] Oh. [speaker006:] Uh, do you want to start, Stephane? [speaker003:] Alright. Um. Well, the first thing maybe is that the p Eurospeech paper is, uh, accepted. Um. Yeah. [speaker006:] This is [disfmarker] what [disfmarker] what do you, uh [disfmarker] what's in the paper there? [speaker003:] So it's the paper that describe basically the, um, system that were proposed for the [pause] Aurora. [speaker006:] The one that we s we submitted the last round? [speaker003:] Right, yeah. [speaker006:] Uh huh. [speaker004:] Yeah. [speaker003:] Um [disfmarker] Yeah. So and the, fff [comment] comments seems [disfmarker] from the reviewer are good. So. [speaker006:] Hmm. [speaker003:] Mmm [disfmarker] Yeah. [speaker006:] Where [disfmarker] where's it gonna be this year? [speaker003:] It's, uh, Aalborg in Denmark. [speaker006:] Oh, OK. [speaker003:] And it's, yeah, September. [speaker006:] Mmm. [speaker003:] Mmm [disfmarker] Yeah. Then, uh, whhh well, I've been working on [disfmarker] on t mainly on on line normalization this week. Uh, I've been trying different [disfmarker] slightly [disfmarker] slightly different approaches. Um, the first thing is trying to play a little bit again with the, um, time constant. Uh, second thing is, uh, the training of, uh, on line normalization with two different means, one mean for the silence and one for the speech. Um, and so I have two recursions which are controlled by the, um, probability of the voice activity detector. Mmm. This actually don't s doesn't seem to help, although it doesn't hurt. So. But [disfmarker] well, both [pause] on line normalization approach seems equivalent. [speaker006:] Are the means pretty different [pause] for the two? [speaker003:] Well, they [disfmarker] Yeah. They can be very different. Yeah. Mm hmm.
[speaker006:] Hmm. [speaker002:] So do you maybe make errors in different places? Different kinds of errors? [speaker003:] I didn't look, uh, more closely. Um. It might be, yeah. Mm hmm. Um. Well, eh, there is one thing that we can observe, is that the mean are more different for [disfmarker] for C zero and C one than for the other coefficients. And [disfmarker] Yeah. And [disfmarker] Yeah, it [disfmarker] the C one is [disfmarker] There are strange [disfmarker] strange thing happening with C one, is that when you have different kind of noises, the mean for the [disfmarker] the silence portion is [disfmarker] can be different. [speaker006:] Hmm. [speaker003:] And [disfmarker] So when you look at the trajectory of C one, it's [disfmarker] has a strange shape and I was expecting th the s that these two mean helps, especially because of the [disfmarker] the strange C ze C one shape, uh, which can [disfmarker] like, yo you can have, um, a trajectory for the speech and then when you are in the silence it goes somewhere, but if the noise is different it goes somewhere else. [speaker006:] Oh. [speaker003:] So which would mean that if we estimate the mean based on all the signal, even though we have frame dropping, but we don't frame ev uh, drop everything, but [disfmarker] uh, this can [disfmarker] hurts the estimation of the mean for speech, and [disfmarker] Mmm. [comment] But I still have to investigate further, I think. Um, a third thing is, um, [vocalsound] that instead of t having a fixed time constant, I try to have a time constant that's smaller at the beginning of the utterances to adapt more quickly to the r something that's closer to the right mean. T t um [disfmarker] Yeah. And then this time constant increases and I have a threshold that [disfmarker] [speaker002:] Mm hmm. 
[speaker003:] well, if it's higher than a certain threshold, I keep it to this threshold to still, uh, adapt, um, the mean when [disfmarker] [vocalsound] if the utterance is, uh, long enough to [disfmarker] to continue to adapt after, like, one second [speaker002:] Mm hmm. [speaker003:] or [disfmarker] [speaker002:] Mm hmm. [speaker003:] Mmm. Uh, well, this doesn't help neither, but this doesn't hurt. So, well. It seems pretty [disfmarker] [speaker006:] Wasn't there some experiment you were gonna try where you did something differently for each, um, [vocalsound] uh [disfmarker] I don't know whether it was each mel band or each, uh, um, FFT bin or someth There was something you were gonna [disfmarker] uh, [comment] some parameter you were gonna vary depending on the frequency. I don't know if that was [disfmarker] [speaker003:] I guess it was [disfmarker] I don't know. No. u Maybe it's this [disfmarker] this idea of having different [pause] on line normalization, um, tunings for the different MFCC's. [speaker006:] For each, uh [disfmarker] [speaker002:] Mm hmm. [speaker003:] But [disfmarker] Mm hmm. [speaker006:] Yeah. I [disfmarker] I thought, Morgan, you brought it up a couple meetings ago. And then it was something about, uh, some and then somebody said "yeah, it does seem like, you know, C zero is the one that's, you know, the major one" or, uh, s I can't remember exactly what it was now. [speaker003:] Mmm. Yeah. There [disfmarker] uh, actually, yeah. S um, it's very important to normalize C zero and [pause] much less to normalize the other coefficients. And, um, actu uh, well, at least with the current on line normalization scheme. And we [disfmarker] I think, we [vocalsound] kind of know that normalizing C one doesn't help with the current scheme. And [disfmarker] and [disfmarker] Yeah. 
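The scheme Stephane describes above, two running means (speech and silence) whose recursions are weighted by the voice-activity probability, plus a time constant that starts small and is capped at a threshold, can be sketched as follows. The update rule, parameter names, and numerical values are all assumptions; the transcript gives only the idea.

```python
import numpy as np

def online_normalize(frames, p_voice, alpha_start=0.9, alpha_max=0.995):
    """Two-mean on-line mean normalization controlled by a VAD probability.

    frames:  (T, D) array of cepstral frames (e.g. MFCCs).
    p_voice: (T,) array of voice-activity probabilities in [0, 1].

    One running mean is kept for speech and one for silence; each
    recursion is weighted by the VAD probability, and the time constant
    starts small (fast adaptation at utterance onset) and grows until it
    hits a cap.  All parameters here are illustrative, not the tuned ones.
    """
    m_speech = frames[0].copy()
    m_sil = frames[0].copy()
    alpha = alpha_start
    out = np.empty_like(frames)
    for t, x in enumerate(frames):
        p = p_voice[t]
        # VAD-weighted recursions: the speech mean moves mostly on
        # voiced frames, the silence mean mostly on unvoiced ones.
        m_speech = (1.0 - (1.0 - alpha) * p) * m_speech + (1.0 - alpha) * p * x
        m_sil = (1.0 - (1.0 - alpha) * (1.0 - p)) * m_sil \
            + (1.0 - alpha) * (1.0 - p) * x
        # subtract the mean the VAD says is currently active
        out[t] = x - (p * m_speech + (1.0 - p) * m_sil)
        # let the time constant grow, but keep it at the threshold once reached
        alpha = min(alpha_max, alpha + (1.0 - alpha) * 0.1)
    return out
```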
In my idea, I [disfmarker] I was thinking that the [disfmarker] the [disfmarker] the reason is maybe because of these funny things that happen between speech and silence which have different means. Um [disfmarker] Yeah. But maybe it's not so [disfmarker] [vocalsound] so easy to [disfmarker] [speaker002:] Um, I I really would like to suggest looking, um, a little bit at the kinds of errors. I know you can get lost in that and go forever and not see too much, but [disfmarker] [vocalsound] sometimes, [speaker003:] Mm hmm. [speaker002:] but [disfmarker] but, um, just seeing that each of these things didn't make things better may not be enough. It may be that they're making them better in some ways and worse in others, [speaker003:] Yeah. Mm hmm. [speaker002:] or increasing insertions and decreasing deletions, or [disfmarker] or, um, um, you know, helping with noisy case but hurting in quiet case. And if you saw that then maybe you [disfmarker] it would [disfmarker] [vocalsound] something would occur to you of how to deal with that. [speaker003:] Mm hmm. Mm hmm. [speaker004:] Hmm. [speaker003:] Alright. Mmm. Yeah. W um, So that's it, I think, for the on line normalization. Um [disfmarker] Yeah. I've been playing a little bit with some kind of thresholding, and, mmm, as a first experiment, I think I Yeah. Well, what I did is t is to take, um [disfmarker] [vocalsound] to measure the average [disfmarker] no, the maximum energy of s each utterance and then put a threshold [disfmarker] Well, this for each mel band. Then put a threshold that's fifteen DB below [disfmarker] well, uh, a couple of DB below this maximum, [speaker002:] Mm hmm. Mmm. [speaker003:] and [disfmarker] Actually it was not a threshold, it was just adding noise. [speaker002:] Mm hmm. [speaker003:] So I was adding a white noise energy, uh, that's fifteen DB below the maximum energy of the utterance. And [disfmarker] Yeah. 
When we look at [disfmarker] at the, um, MFCC that result from this, they are [pause] a lot more smoother. Um, when we compare, like, a channel zero and channel one utterance [disfmarker] um, so a clean and, uh, the same noisy utterance [disfmarker] well, there is almost no difference between the cepstral coefficients of the two. [speaker006:] Hmm. [speaker003:] Um. And [disfmarker] Yeah. And the result that we have in term of speech recognition, actually it's not [disfmarker] it's not worse, it's not better neither, [speaker006:] Hmm. [speaker003:] but it's, um, kind of surprising that it's not worse because basically you add noise that's fifteen DB [disfmarker] just fifteen DB below [pause] the maximum energy. [speaker001:] Sorry. [speaker003:] And [speaker006:] So why does that m [pause] smooth things out? [speaker003:] at least [disfmarker] [speaker006:] I don't [disfmarker] I don't understand that. [speaker002:] Well, there's less difference. Right? [speaker003:] It's [disfmarker] I think, it's whitening [disfmarker] This [disfmarker] the portion that are more silent, [speaker002:] Cuz it's [disfmarker] [speaker003:] as you add a white noise that are [disfmarker] has a very high energy, it whitens everything [speaker006:] Huh. Oh, OK. [speaker003:] and [disfmarker] and the high energy portion of the speech don't get much affected anyway by the other noise. And as the noise you add is the same is [disfmarker] [pause] the shape, it's also the same. [speaker006:] Hmm. [speaker002:] Yeah. [speaker003:] So they have [disfmarker] the trajectory are very, very similar. And [disfmarker] and [disfmarker] [speaker002:] So, I mean, again, if you trained in one kind of noise and tested in the same kind of noise, you'd [disfmarker] you know, given enough training data you don't do b do badly. The reason that we d that we have the problems we have is because [pause] it's different in training and test. 
Even if [vocalsound] the general kind is the same, the exact instances are different. [speaker006:] Mm hmm. [speaker002:] And [disfmarker] and so when you whiten it, then it's like you [disfmarker] the [disfmarker] the only noise [disfmarker] to [disfmarker] to first order, the only th noise that you have is white noise and you've added the same thing to training and test. [speaker006:] Mm hmm. [speaker002:] So it's, [speaker006:] Hmm. [speaker002:] uh [disfmarker] [speaker006:] So would that [pause] be similar to, like, doing the smoothing, then, over time or [disfmarker]? [speaker003:] Mm hmm. [speaker002:] Well, it's a kind of smoothing, [speaker003:] I think it's [disfmarker] I think it's different. [speaker002:] but [disfmarker] [speaker003:] It's [disfmarker] it's something that [disfmarker] yeah, that affects more or less the silence portions because [disfmarker] [speaker006:] Mm hmm. [speaker003:] Well, anyway, the sp the portion of speech that ha have high energy are not ch a lot affected by the noises in the Aurora database. [speaker002:] Mm hmm. [speaker003:] If [disfmarker] if you compare th the two shut channels of SpeechDat Car during speech portion, it's n n n the MFCC are not very different. They are very different when energy's lower, like during fricatives or during speech pauses. And, uh [disfmarker] [speaker002:] Yeah, but you're still getting more recognition errors, which means [vocalsound] that the differences, even though they look like they're not so big, [vocalsound] are [disfmarker] are hurting your recognition. [speaker003:] Ye [speaker002:] Right? [speaker003:] Yeah. So it distort [vocalsound] the speech. Right. Um. [speaker002:] Yeah. [speaker006:] So performance went down? [speaker003:] No. It didn't. [speaker006:] Oh. [speaker003:] But [disfmarker] Yeah. So, but in this case I [disfmarker] I really expect that maybe the [disfmarker] the two [disfmarker] these two stream of features, they are very different. 
I mean, and maybe we could gain something by combining them [speaker002:] Well, the other thing is that you just picked one particular way of doing it. [speaker003:] or [disfmarker] [speaker002:] Uh, I mean, first place it's fifteen DB, uh, [vocalsound] down across the utterance. And [vocalsound] maybe you'd want to have something that was a little more adaptive. Secondly, you happened to pick fifteen DB [speaker003:] Mmm. [speaker002:] and maybe twenty'd be better, [speaker003:] Yeah. [speaker002:] or [disfmarker] or twelve. [speaker003:] Yeah. Right. [speaker006:] So what was the [disfmarker] what was the threshold part of it? Was the threshold, uh, how far down [disfmarker]? [speaker002:] Yeah. Well, he [disfmarker] yeah, he had to figure out how much to add. So he was looking [disfmarker] he was looking at the peak value. [speaker006:] Uh huh. [speaker002:] Right? [speaker003:] Uh huh. [speaker002:] And then [disfmarker] [speaker006:] And [disfmarker] and so what's [disfmarker] ho I don't understand. How does it go? If it [disfmarker] if [disfmarker] if the peak value's above some threshold, then you add the noise? Or if it's below s [speaker003:] I systematically [comment] add the noise, but the, um, noise level is just [pause] some kind of threshold below the peak. [speaker006:] Oh, oh. I see. [speaker003:] Mmm. [speaker006:] I see. [speaker003:] Um. [speaker002:] Yeah. [speaker003:] Yeah. Which is not really noise, actually. It's just adding a constant to each of the mel, uh, energy. [speaker006:] Mm hmm. [speaker003:] To each of the [pause] mel filter bank. Yeah. [speaker006:] I see. [speaker003:] So, yeah, it's really, uh, white noise. I th [speaker006:] Mm hmm. [speaker002:] Yeah. So then afterwards a log is taken, and that's so sort of why the [disfmarker] [vocalsound] the little variation tends to go away. [speaker003:] Mm hmm. Um. Yeah. So may Well, the [disfmarker] this threshold is still a factor that we have to look at. 
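The flooring experiment described above, adding to each mel band a constant sitting fifteen dB below that band's peak energy for the utterance, then taking the log, might look like the following. The function name is invented and the dB value is just the one quoted in the meeting; as Morgan notes, it is a knob.

```python
import numpy as np

def add_noise_floor(mel_energies, floor_db=15.0):
    """Per-band constant 'noise' addition before the log.

    mel_energies: (T, B) linear mel filter-bank energies for one utterance.
    For each band, a constant equal to floor_db below that band's peak
    energy is added to every frame before the log is taken.  This masks
    the low-energy regions (silence, fricatives) where training/test
    noises differ most, which is why the resulting cepstra look smoother
    and more similar across channels.  floor_db is an illustrative value.
    """
    peak = mel_energies.max(axis=0)             # per-band maximum energy
    floor = peak * 10.0 ** (-floor_db / 10.0)   # constant 15 dB below it
    return np.log(mel_energies + floor)
```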
And I don't know, maybe a constant noise addition would [disfmarker] [vocalsound] would be fine also, or [disfmarker] Um [disfmarker] [speaker002:] Or [disfmarker] or not constant but [disfmarker] but, uh, varying over time [pause] in fact is another way [pause] to go. [speaker003:] Mm hmm. Mm hmm. [speaker002:] Um. [speaker003:] Yeah. Um [disfmarker] [speaker002:] Were you using the [disfmarker] the normalization in addition to this? I mean, what was the rest of the system? [speaker003:] Um [disfmarker] Yeah. It was [disfmarker] it was, uh, the same system. Mm hmm. [speaker002:] OK. [speaker003:] It was the same system. Mmm. Oh, yeah. A third thing is that, um, [vocalsound] I play a little bit with the, um [disfmarker] [vocalsound] finding what was different between, um, And there were a couple of differences, like the LDA filters were not the same. Um, he had the France Telecom blind equalization in the system. Um, the number o of MFCC that was [disfmarker] were used was different. You used thirteen and we used fifteen. Well, a bunch of differences. And, um, actually the result that he [disfmarker] he got were much better on TI digits especially. So I'm kind of investigated to see what was the main factor for this difference. And it seems that the LDA filter is [disfmarker] is [disfmarker] was hurting. Um, [vocalsound] so when we put s some noise compensation the, um, LDA filter that [disfmarker] that's derived from noisy speech is not more [disfmarker] anymore optimal. And it makes a big difference, um, [vocalsound] on TI digits trained on clean. Uh, if we use the [disfmarker] the old LDA filter, I mean the LDA filter that was in the proposal, we have, like, eighty two point seven percent recognition rate, um, on noisy speech when the system is trained on clean speech. But [disfmarker] and when we use the filter that's derived from clean speech we jumped [disfmarker] so from eighty two point seven to eighty five point one, which is a huge leap. 
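The LDA filters compared above are temporal filters applied to each cepstral trajectory; swapping the noisy-derived taps for clean-derived taps is just a change of FIR coefficients. A minimal sketch of the application step, with illustrative names (the actual taps and latency constraints are not given in the transcript):

```python
import numpy as np

def apply_temporal_filter(cepstra, fir_taps):
    """Apply a data-derived (e.g. LDA) temporal filter to cepstra.

    cepstra:  (T, D) cepstral trajectories.
    fir_taps: (L,) FIR taps of the filter.  Each dimension is filtered
    independently along time.  The meeting's finding is that taps
    derived from clean speech worked better than noisy-derived ones
    once noise compensation was added in front of them.
    """
    return np.apply_along_axis(
        lambda x: np.convolve(x, fir_taps, mode="same"), 0, cepstra)
```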
[speaker002:] Mm hmm. [speaker003:] Um. Yeah. So now the results are more similar, and I don't [disfmarker] I will not, I think, investigate on the other differences, which is like the number of MFCC that we keep and other small things that we can I think optimize later on anyway. [speaker002:] Sure. But on the other hand if everybody is trying different kinds of noise suppression things and so forth, it might be good to standardize on the piece [vocalsound] that we're not changing. Right? So if there's any particular reason to ha pick one or the other, I mean [disfmarker] Which [disfmarker] which one is closer to what the proposal was that was submitted to Aurora? Are they [disfmarker] they both [disfmarker]? Well, I mean [disfmarker] [speaker003:] I think [disfmarker] Yeah. I think th th uh, the new system that I tested is, I guess, closer because it doesn't have [disfmarker] it have less of [disfmarker] of France Telecom stuff, [speaker004:] You mean the [disfmarker] [speaker003:] I [disfmarker] [speaker004:] The [disfmarker] whatever you, uh, tested with recently. Right? [speaker003:] Mmm? Yeah. [speaker004:] Yeah? [speaker002:] Well, no, I [disfmarker] I'm [disfmarker] I [disfmarker] Yeah, you're trying to add in France Telecom. [speaker003:] But, we [disfmarker] [speaker002:] Tell them about the rest of it. Like you said the number of filters might be [vocalsound] different or something. Right? [speaker004:] The number of cepstral coefficients is what? [speaker002:] Or [disfmarker] Cep [speaker003:] Mm hmm. [speaker002:] Yeah. So, I mean, I think we'd wanna standardize there, wouldn't we? [speaker003:] Yeah, yeah. [speaker002:] So, sh you guys should pick something [speaker004:] Yeah. [speaker002:] and [disfmarker] [speaker004:] Yeah. [speaker002:] Well, all th all three of you. 
[speaker003:] I think we were gonna work with [disfmarker] with this or this new system, or with [disfmarker] [speaker004:] Uh, so the [disfmarker] the [disfmarker] right now, the [disfmarker] the system that is there in the [disfmarker] what we have in the repositories, with [disfmarker] uses fifteen. [speaker003:] So [disfmarker] Right. Yeah. [speaker004:] Yeah, so [disfmarker] Yeah, so [disfmarker] Yep. [speaker003:] But we will use the [disfmarker] the LDA filters f derived from clean speech. [speaker004:] Yeah, yeah. [speaker003:] Well, yeah, actually it's [disfmarker] it's not the [disfmarker] the LDA filter. [speaker004:] So [disfmarker] [speaker003:] It's something that's also short enough in [disfmarker] in latency. [speaker004:] Yeah. Well. [speaker003:] So. [speaker004:] Yeah. So, we haven't [disfmarker] w we have been always using, uh, fifteen coefficients, not thirteen? [speaker003:] Yeah. Mm hmm. [speaker004:] Yeah. Well, uh, that's [disfmarker] something's [disfmarker] Um. Yeah. Then [disfmarker] mmm [disfmarker] [speaker002:] I think as long as you guys agree on it, it doesn't matter. I think we have a maximum of sixty, [vocalsound] uh, features that we're allowed. So. [speaker003:] Yeah. [speaker004:] Yeah. Ma maybe we can [disfmarker] I mean, at least, um, I'll t s run some experiments to see whether [disfmarker] once I have this [vocalsound] [comment] noise compensation to see whether thirteen and fifteen really matters or not. [speaker003:] Mm hmm. Mm hmm. [speaker004:] Never tested it with the compensation, but without, [vocalsound] uh, compensation it was like fifteen was s slightly better than thirteen, [speaker003:] Yeah. [speaker004:] so that's why we stuck to thirteen. [speaker003:] Yeah. And there is [disfmarker] there is also this log energy versus C zero. [speaker004:] Sorry, fifteen. Yeah, the log energy versus C zero. [speaker003:] Well. [speaker004:] Uh, that's [disfmarker] that's the other thing. 
[speaker003:] W w if [disfmarker] if [disfmarker] [speaker004:] I mean, without noise compensation certainly C zero is better than log energy. Be I mean, because the [disfmarker] there are more, uh, mismatched conditions than the matching conditions for testing. [speaker003:] Mm hmm. [speaker004:] You know, always for the matched condition, you always get a [pause] slightly better performance for log energy than C zero. [speaker003:] Mm hmm. [speaker004:] But not for [disfmarker] I mean, for matched and the clean condition both, you get log energy [disfmarker] I mean you get a better performance with log energy. [speaker003:] Mm hmm. [speaker004:] Well, um, maybe once we have this noise compensation, I don't know, we have to try that also, whether we want to go for C zero or log energy. [speaker003:] Mm hmm. [speaker004:] We can see that. [speaker003:] Yeah. [speaker004:] Hmm. [speaker003:] Mmm. [speaker006:] So do you have [pause] more, Stephane, or [disfmarker]? [speaker003:] Uh, that's it, I think. Mmm. [speaker006:] Do you have anything, Morgan, or [disfmarker]? [speaker002:] Uh, no. I'm just, you know, being a manager this week. So. [speaker006:] How about you, Barry? [speaker001:] Um, [vocalsound] still working on my [disfmarker] my quals preparation stuff. Um, [vocalsound] so I'm [disfmarker] I'm thinking about, um, starting some, [vocalsound] uh, cheating experiments to, uh, determine the, um [disfmarker] [vocalsound] the relative effectiveness of, um, some intermediate categories that I want to classify. So, for example, um, [vocalsound] if I know where voicing occurs and everything, um, [vocalsound] I would do a phone [disfmarker] um, phone recognition experiment, um, somehow putting in the [disfmarker] the, uh [disfmarker] the perfect knowledge that I have about voicing. 
So, um, in particular I was thinking, [vocalsound] um, in [disfmarker] in the hybrid framework, just taking those LNA files, [vocalsound] and, um, [vocalsound] setting to zero those probabilities that, um [disfmarker] that these phones are not voicing. So say, like, I know this particular segment is voicing, [speaker006:] Mm hmm. [speaker001:] um, [vocalsound] I would say, uh, go into the corresponding LNA file and zonk out the [disfmarker] the posteriors for, um, those phonemes that, um, are not voiced, [speaker006:] Mm hmm. [speaker001:] and then see what kinds of improvements I get. [speaker006:] Hmm. [speaker001:] And so this would be a useful thing, um, to know [vocalsound] in terms of, like, which [disfmarker] which, um [disfmarker] which of these categories are [disfmarker] are good for, um, speech recognition. [speaker006:] Mm hmm. [speaker001:] So, that's [disfmarker] I hope to get those, uh [disfmarker] those experiments done by [disfmarker] by the time quals come [disfmarker] come around in July. [speaker006:] So do you just take the probabilities of the other ones and spread them out evenly among the [disfmarker] the remaining ones? [speaker001:] Yeah. I [disfmarker] I [disfmarker] I was thinking [disfmarker] OK, so just set to [disfmarker] set to some really low number, the [disfmarker] the non voiced, um, phones. [speaker006:] Mm hmm. [speaker001:] Right? And then renormalize. [speaker006:] Mmm. [speaker001:] Right. [speaker006:] Cool. [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker006:] That will be really interesting to see, you know. So then you're gonna feed the [disfmarker] those into [pause] some standard recognizer. [speaker001:] Mm hmm. [speaker006:] Uh, wh are you gonna do digits [speaker001:] Yeah, m [speaker006:] or [disfmarker]? [speaker001:] Um, well, I'm gonna f work with TIMIT [disfmarker] [speaker006:] With TIMIT. OK. [speaker001:] TIMIT [disfmarker] uh, phone recognition with TIMIT. [speaker006:] Mm hmm. 
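The cheating experiment Barry describes, setting the posteriors of phones that conflict with the oracle category (e.g. voicing) to some really low number and renormalizing each frame, can be sketched as below. The mask representation and the epsilon value are assumptions.

```python
import numpy as np

def constrain_posteriors(post, allowed, eps=1e-8):
    """Impose oracle knowledge on per-frame phone posteriors.

    post:    (T, P) phone posteriors (each row sums to 1), as would be
             read from an LNA file.
    allowed: (T, P) boolean mask, True where the oracle label (e.g.
             voiced/unvoiced) permits that phone in that frame.
    Disallowed phones are 'zonked out' to a tiny probability eps and
    each frame is renormalized, as suggested in the discussion above.
    """
    out = np.where(allowed, post, eps)
    return out / out.sum(axis=1, keepdims=True)
```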
[speaker001:] And, um [disfmarker] [speaker006:] Oh, so then you'll feed those [disfmarker] Sorry. So where do the outputs of the net go into if you're doing phone recognition? [speaker001:] Oh. Um, the outputs of the net go into the standard, h um, ICSI hybrid, um, recognizer. So maybe, um, Chronos [speaker006:] An and you're gonna [disfmarker] the [disfmarker] you're gonna do phone recognition with that? [speaker001:] or [disfmarker] Phone recognition. [speaker006:] OK, OK. [speaker001:] Right, right. [speaker006:] I see. [speaker001:] So. And, uh, another thing would be to extend this to, uh, digits or something where I can look at whole words. [speaker006:] Mm hmm. [speaker001:] And I would be able to see, uh, not just, like, phoneme events, but, um, [vocalsound] inter phoneme events. [speaker006:] Mm hmm. [speaker001:] So, like, this is from a stop to [disfmarker] to a vo a vocalic segment. You know, so something that is transitional in nature. [speaker006:] Right. Cool. [speaker001:] Yeah. [speaker006:] Great. Uh [disfmarker] [speaker001:] So that's [disfmarker] that's it. [speaker006:] OK. Um [disfmarker] [speaker001:] Yeah. [speaker006:] Let's see, I haven't done a whole lot on anything related to this this week. I've been focusing mainly on Meeting Recorder stuff. [speaker003:] Oh. [speaker006:] So, um, [vocalsound] I guess I'll just pass it on to Dave. [speaker007:] Uh, OK. Well, in my lunch talk last week I [disfmarker] I said I'd tried phase normalization and gotten garbage results using that l um, long term mean subtraction approach. It turned out there was a bug in my Matlab code. So I tried it again, um, and, um, the results [vocalsound] were [disfmarker] were better. I got intelligible speech back. But they still weren't as good as just subtracting the magnitude [disfmarker] the log magnitude means. 
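The baseline Dave says still worked best, subtracting the per-bin mean of the log magnitude while leaving the phase alone, might be sketched as follows. The STFT framing and the small epsilon guard are assumptions.

```python
import numpy as np

def subtract_log_magnitude_mean(stft_frames):
    """Long-term log-magnitude mean subtraction.

    stft_frames: (T, F) complex STFT frames of one utterance.
    The per-bin mean of the log magnitude is subtracted over the whole
    utterance and the original phase is kept; this is the variant that
    beat the phase-normalized one in the experiments described above.
    """
    log_mag = np.log(np.abs(stft_frames) + 1e-12)  # guard against log(0)
    phase = np.angle(stft_frames)
    log_mag -= log_mag.mean(axis=0)                # remove per-bin mean
    return np.exp(log_mag) * np.exp(1j * phase)    # resynthesize spectrum
```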
And also I've been talking to, um, Andreas and Thilo about the, um, SmartKom language model and about coming up with a good model for, um, far mike use of the SmartKom system. So I'm gonna be working on, um, implementing this mean subtraction approach in the [vocalsound] far mike system [disfmarker] for the SmartKom system, I mean. And, um, one of the experiments we're gonna do is, um, we're gonna, um, train the [disfmarker] a Broadcast News net, which is because that's what we've been using so far, and, um, adapt it on some other data. Um, An Andreas wants to use, um, data that resembles read speech, like [pause] these digit readings, because he feels that the SmartKom system interaction is not gonna be exactly conversational. [speaker006:] Mm hmm. [speaker007:] S so actually I was wondering, how long does it take to train that Broadcast News net? [speaker002:] The big one takes a while. Yeah. That takes two, three weeks. [speaker007:] Two, three weeks. [speaker002:] So [disfmarker] but, you know, uh, you can get [disfmarker] I don't know if you even want to run the big one, uh, um, in the [disfmarker] in the final system, cuz, you know, it takes a little while to run it. So, [vocalsound] um, you can scale it down by [disfmarker] I'm sorry, it was two, three weeks for training up for the large Broadcast News test set [disfmarker] training set. [speaker007:] Oh. [speaker002:] I don't know how much you'd be training on. [speaker007:] OK. [speaker002:] The full? Uh, i so if you trained on half as much [vocalsound] and made the net, uh, uh, half as big, then it would be one fourth [pause] the amount of time [speaker007:] OK. [speaker002:] and it'd be nearly as good. So. [speaker007:] OK. [speaker002:] Yeah. Also, I guess we had [disfmarker] we've had these, uh, little di discussions [disfmarker] I guess you ha haven't had a chance to work with it too much [disfmarker] about [disfmarker] about, uh [disfmarker] uh, uh m other ways of taking care of the phase. 
[speaker007:] Mm hmm. [speaker002:] So, I mean, I [disfmarker] I guess that was something I could say would be that we've talked a little bit about you just doing it all with complex arithmetic and, uh [disfmarker] and not [disfmarker] not, uh, doing the polar representation with magnitude and phase. But [vocalsound] it looks like there's ways that one could potentially just work with the complex numbers and [disfmarker] and [disfmarker] and in principle get rid of the [vocalsound] effects of the average complex spectrum. But [disfmarker] [speaker007:] And, um, actually, regarding the phase normalization [disfmarker] So I did two experiments, and one is [disfmarker] So, phases get added, modulo two pi, and [disfmarker] because you only know the phase of the complex number t t to a value modulo two pi. And so I thought at first, um, that, uh, what I should do is unwrap the phase because that will undo that. Um, but I actually got worse results doing that unwrapping using the simple phase unwrapper that's in Matlab than I did not unwrapping at all. [speaker006:] Hmm. [speaker004:] Mm hmm. [speaker002:] Yeah. P So. [speaker007:] And that's all I have to say. [speaker006:] Hmm. [speaker002:] Yeah. So I'm [disfmarker] I'm still hopeful that [disfmarker] that [disfmarker] I mean, we [disfmarker] we don't even know if the phase [vocalsound] is something [disfmarker] the average phase is something that we do want to remove. I mean, maybe there's some deeper reason why it isn't the right thing to do. But, um, at least in principle it looks like there's [disfmarker] there's, uh, a couple potential ways to do it. One [disfmarker] one being to just work with the complex numbers, um, and, uh [disfmarker] in rectangular kind of coordinates. And the other is [vocalsound] to, uh, do a Taylor series [disfmarker] Well. 
So you work with the complex numbers and then when you get the spectrum [disfmarker] the average complex spectrum [disfmarker] um, actually divide it out, um, as opposed to taking the log and subtracting. So then, um, um, you know, there might be some numerical issues. We don't really know that. The other thing we talked a little bit about was Taylor series expansion. And, um, uh, actually I was talking to Dick Karp about it a little bit, and [disfmarker] and [disfmarker] and, since I got thinking about it, and [disfmarker] and, uh, so one thing is that y you'd have to do, I think, uh [disfmarker] we may have to do this on a whiteboard, but I think you have to be a little careful about scaling the numbers that you're [vocalsound] taking [disfmarker] the complex numbers that you're taking the log of because [vocalsound] the Taylor expansion for it has, you know, a square and a cube, and [disfmarker] and so forth. And [disfmarker] and so if [disfmarker] [vocalsound] if you have a [disfmarker] a number that is modulus, you know, uh, very different from one [disfmarker] [vocalsound] It should be right around one, if it's [disfmarker] cuz it's a expansion of log one [disfmarker] one minus epsilon or o is [disfmarker] is [vocalsound] one plus epsilon, or is it one plus [disfmarker]? Well, there's an epsilon squared over two [speaker007:] OK. [speaker002:] and an epsilon cubed over three, and so forth. So if epsilon is bigger than one, then it diverges. [speaker007:] Oh. [speaker002:] So you have to do some scaling. But that's not a big deal cuz it's the log of [disfmarker] [vocalsound] of K times a complex number, then you can just [disfmarker] that's the same as log of K plus [vocalsound] log of the complex number. [speaker007:] Oh. OK. [speaker002:] Uh, so there's [disfmarker] converges. But. [speaker006:] Hmm. OK. How about you, Sunil? [speaker004:] So, um, I've been, uh, implementing this, uh, Wiener filtering for this Aurora task. 
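Morgan's alternative, working with the complex numbers directly and dividing out the average complex spectrum instead of subtracting a mean log magnitude, could look like this. The epsilon guard against a near-zero mean bin is an assumption, and whether the scheme is numerically well-behaved (or even the right thing to remove) is exactly the open question raised above.

```python
import numpy as np

def remove_mean_complex_spectrum(stft_frames, eps=1e-12):
    """Mean removal done directly in the complex domain.

    stft_frames: (T, F) complex STFT frames of one utterance.
    The per-bin average complex spectrum is divided out frame by frame,
    which removes an average magnitude and an average phase in one step,
    without ever taking a log or unwrapping a phase.
    """
    mean_spec = stft_frames.mean(axis=0)
    return stft_frames / (mean_spec + eps)
```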
And, uh, I [disfmarker] I actually thought it was [disfmarker] it was doing fine when I tested it once. I it's, like, using a small section of the code. And then I ran the whole recognition experiment with Italian and I got, [vocalsound] like, worse results than not using it. Then I [disfmarker] So, I've been trying to find where the problem came from. And then it looks like I have some problem in the way [disfmarker] there is some [disfmarker] some very silly bug somewhere. And, ugh! I [disfmarker] I mean, i uh, it actually [disfmarker] i it actually made the whole thing worse. I was looking at the spectrograms that I got and it's, like [disfmarker] w it's [disfmarker] it's very horrible. Like, when I [disfmarker] [speaker002:] I [disfmarker] I missed the v I'm sorry, I was [disfmarker] I was distracted. I missed the very first sentence. So then, I'm a little lost on the rest. [speaker004:] Oh, I mean [disfmarker] [speaker002:] What [disfmarker] what [disfmarker] what [disfmarker]? [speaker004:] Oh, yeah. I actually implemented the Wiener f f fil filtering as a module and then tested it out separately. [speaker002:] Yeah, I see. Oh, OK. [speaker004:] And it [disfmarker] it [disfmarker] it gave, like [disfmarker] I just got the signal out and it [disfmarker] it was OK. So, I plugged it in somewhere and then [disfmarker] I mean, it's like I had to remove some part and then plugging it in somewhere. And then I [disfmarker] in that process I messed it up somewhere. So. [speaker002:] OK. [speaker004:] So, it was real I mean, I thought it was all fine and then I ran it, and I got something worse than not using it. So, I was like [disfmarker] I'm trying to find where the m m problem came, [speaker002:] Uh huh. [speaker004:] and it seems to be, like, somewhere [disfmarker] [speaker002:] OK. [speaker004:] some silly stuff. 
And, um, the other thing, uh, was, uh, uh [disfmarker] Well, Hynek showed up one [disfmarker] suddenly on one day and then I was t talking wi [speaker002:] Right. Yeah. As [disfmarker] as he is wont to do. Yeah. [speaker004:] Uh, yeah. So I was actually [disfmarker] that day I was thinking about d doing something about the Wiener filtering, and then Carlos matter of stuff. And then he showed up and then I told him. And then he gave me a whole bunch of filters [disfmarker] what Carlos used for his, uh, uh, thesis and then [vocalsound] that was something which came up. And then, um [disfmarker] So, uh, I'm actually, [vocalsound] uh, thinking of using that also in this, uh, W Wiener filtering because that is a m modified Wiener filtering approach, where instead of using the current frame, it uses [vocalsound] adjacent frames also in designing the Wiener filter. So instead of designing our own new Wiener filters, I may just use one of those Carlos filters in [disfmarker] in this implementation [speaker002:] Mm hmm. [speaker004:] and see whether it [disfmarker] it actually gives me something better than using just the current f current frame, which is in a way, uh, something like the smoothing [disfmarker] the Wiener filter [disfmarker] [speaker002:] Mm hmm. [speaker004:] but [@ @] [disfmarker] S so, I don't know, I was h I'm [disfmarker] I'm [disfmarker] I'm, like [disfmarker] that [disfmarker] so that is the next thing. Once this [disfmarker] I [disfmarker] once I sort this pro uh, problem out maybe I'll just go into that also. And the [disfmarker] the other thing was about the subspace approach. So, um, I, like, plugged some groupings for computing this eigen uh, uh, uh, s values and eigenvectors. So just [disfmarker] I just [@ @] some small block of things which I needed to put together for the subspace approach. And I'm in the process of, like, building up that stuff. And, um, uh [disfmarker] [nonvocalsound] Yeah. I guess [disfmarker] Yep. I guess that's it. 
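The exact filter design in Carlos's thesis isn't specified here; the sketch below shows one common form of the idea being described — smoothing the noisy power spectrum over adjacent frames before computing a spectral-subtraction-style Wiener gain, rather than designing the filter from the current frame alone. The function name, the flooring constant, and the uniform smoothing kernel are illustrative assumptions.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, context=1):
    """Per-frame Wiener-style gain H = max(P_yy - P_nn, 0) / P_yy.

    noisy_psd : (frames, bins) noisy power spectra
    noise_psd : (bins,) noise power spectrum estimate
    context   : frames of context on each side; context > 0 smooths
                the noisy PSD over adjacent frames before designing
                the filter (the 'modified' variant discussed above).
    """
    if context > 0:
        kernel = np.ones(2 * context + 1) / (2 * context + 1)
        smoothed = np.apply_along_axis(
            lambda col: np.convolve(col, kernel, mode="same"),
            axis=0, arr=noisy_psd)
    else:
        smoothed = noisy_psd
    clean_est = np.maximum(smoothed - noise_psd, 0.0)  # spectral subtraction
    return clean_est / np.maximum(smoothed, 1e-12)     # gain in [0, 1]
```

Applying the gain is then just `gain * noisy_psd` bin by bin; the temporal averaging trades a little time resolution for a less jumpy filter, which is the smoothing effect mentioned above.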
And, uh, th th that's where I am right now. So. [speaker006:] Oh. How about you, Carmen? [speaker005:] Mmm. I'm working with VTS. Um, I do several experiment with the Spanish database first, only with VTS and nothing more. Not VAD, no LDA, nothing more. [speaker006:] What [disfmarker] what is VTS again? [speaker004:] New [disfmarker] [speaker005:] Eh, Vectorial Taylor Series. [speaker006:] Oh, yes. Right, right. [speaker005:] To remove the noise too. [speaker006:] I think I ask you that every single meeting, don't I? [speaker005:] What? [speaker006:] I ask you that question every meeting. [speaker005:] Yeah. If [disfmarker] Well [disfmarker] [speaker002:] So, that'd be good from [disfmarker] for analysis. It's good to have some, uh, cases of the same utterance at different [disfmarker] different times. [speaker006:] Yeah. "What is VTS?" [speaker002:] Yeah. [speaker005:] VTS. I'm sor Well, um, the question is that [disfmarker] Well. Remove some noise but not too much. And when we put the [disfmarker] m m the, em, VAD, the result is better. And we put everything, the result is better, but it's not better than the result that we have without VTS. No, no. [speaker002:] I see. So that [@ @] [comment] given that you're using the VAD also, the effect of the VTS is not [pause] so far [disfmarker] [speaker005:] Is not. [speaker002:] Do you [disfmarker] How much of that do you think is due to just the particular implementation and how much you're adjusting it? Or how much do you think is intrinsic to [disfmarker]? [speaker005:] Pfft. I don't know because [disfmarker] Hhh, [speaker003:] Are you still using only the ten first frame for noise estimation or [disfmarker]? [speaker005:] Uh, I do the experiment using only the f onl eh, to use on only one fair estimation of the noise. [speaker003:] Or i? Yeah. Hmm. [speaker005:] And also I did some experiment, [vocalsound] uh, doing, um, a lying estimation of the noise. 
And, well, it's a little bit better but not [disfmarker] n [speaker003:] Maybe you have to standardize this thing also, noise estimation, because all the thing that you are testing use a different [disfmarker] [speaker005:] Mmm. [speaker004:] Mmm. [speaker003:] They all need some [disfmarker] some noise [disfmarker] noise spectra [speaker005:] No, I do that two [disfmarker] t did two time. [speaker003:] but they use [disfmarker] every [disfmarker] all use a different one. [speaker002:] I have an idea. If [disfmarker] if, uh, uh, y you're right. I mean, each of these require this. Um, given that we're going to have for this test at least of [disfmarker] uh, boundaries, what if initially we start off by using [pause] known sections of nonspeech [pause] for the estimation? [speaker005:] Mm hmm. [speaker003:] Mm hmm. [speaker002:] Right? [speaker003:] Yeah. [speaker002:] S so, e um, [speaker003:] Mm hmm. [speaker002:] first place, I mean even if ultimately we wouldn't be given the boundaries, [vocalsound] uh, this would be a good initial experiment to separate out the effects of things. I mean, how much is the poor [disfmarker] [vocalsound] you know, relatively, uh, unhelpful result that you're getting in this or this or this is due to some inherent limitation to the method for these tasks and how much of it is just due to the fact that you're not accurately [vocalsound] finding enough regions that [disfmarker] that are really [vocalsound] n noise? [speaker004:] Mmm. [speaker005:] Mm hmm. [speaker003:] Mm hmm. [speaker002:] Um. So maybe if you tested it using that, [vocalsound] you'd have more reliable [pause] stretches of nonspeech to do the estimation from and see if that helps. [speaker005:] Yeah. Another thing is the, em [disfmarker] the codebook, the initial codebook. That maybe, well, it's too clean and [disfmarker] [speaker002:] Mm hmm. [speaker005:] Cuz it's a [disfmarker] I don't know. 
The methods [disfmarker] If you want, you c I can say something about the method. [speaker002:] Mm hmm. [speaker005:] Yeah. In the [disfmarker] Because it's [vocalsound] a little bit different of the other method. Well, we have [disfmarker] If this [disfmarker] if this is the noise signal, [nonvocalsound] uh, in the log domain, we have something like this. Now, we have something like this. And the idea of these methods is to [disfmarker] [vocalsound] n given a, um [disfmarker] How do you say? I will read because it's better for my English. I i given is the estimate of the PDF of the noise signal when we have a, um, a statistic of the clean speech and an statistic of the noisy speech. And the clean speech [disfmarker] the statistic of the clean speech is [pause] from a [pause] codebook. Mmm? This is the idea. Well, like, this relation is not linear. The methods propose to develop this in a vectorial Taylor series [pause] approximation. [speaker002:] I I'm actually just confused about [pause] the equations you have up there. So, uh, the top equation is [disfmarker] is [disfmarker] is [disfmarker] [speaker005:] No, this in the [disfmarker] it's [disfmarker] this is the log domain. I [disfmarker] I must to say that. [speaker002:] Which is [disfmarker] which is the log domain? [speaker005:] Is the T [disfmarker] is egual [disfmarker] [comment] is equal to, uh, log of [disfmarker] [speaker002:] And [disfmarker] but Y is what? Y of [disfmarker] the spectrum [speaker005:] Uh, this [disfmarker] this is this [speaker002:] or [disfmarker]? [speaker005:] and this is this. [speaker002:] No, no. The top Y is what? [speaker005:] Mm hmm. [speaker002:] Is that power spectrum? [speaker005:] Uh, this is the noisy speech. [speaker003:] p s this [disfmarker] [speaker002:] No, is that power spectrum? Is it [disfmarker]? [speaker003:] Yeah. I guess it's the power spectrum of noisy speech. [speaker005:] Yeah. It's the power spectrum. [speaker002:] Oh, OK. [speaker003:] Yeah. 
And [disfmarker] [speaker002:] So that's uh [disfmarker] [speaker005:] This is the noisy [disfmarker] Yeah, it's [disfmarker] of the value [disfmarker] [speaker002:] OK. Yeah, OK. So this [disfmarker] it's the magnitude squared or something. [speaker005:] Yeah. [speaker002:] OK, so you have power spectrum added there and down here you have [disfmarker] [vocalsound] you [disfmarker] you put the [disfmarker] depends on T, [speaker005:] w o [speaker002:] but [disfmarker] b all of this is just [disfmarker] you just mean [disfmarker] [speaker005:] Yeah. It's the same. [speaker002:] you just mean the log of the [disfmarker] of the one up above. [speaker005:] Yeah. [speaker003:] Mm hmm. [speaker002:] And, uh, so that is X times, uh, [speaker005:] Yeah, maybe [disfmarker] [speaker004:] One [disfmarker] one plus N by X. [speaker005:] But, n [speaker002:] o [speaker005:] Well, y we can expre we can put this expression [disfmarker] [speaker002:] X times one plus, uh, N [disfmarker] uh, N [disfmarker] N [disfmarker] N minus X? [speaker005:] The [disfmarker] Yeah. [speaker002:] And then, [speaker005:] And the noise signal. [speaker002:] uh [disfmarker] So that's log of X plus log of one plus, uh [disfmarker] Well. Is that right? Log of [disfmarker] [speaker004:] One plus N by X. [speaker005:] Well, mmm [disfmarker] [speaker002:] I actually don't see how you get that. [speaker005:] Well, if we apply the log, we have E is n [speaker002:] Uh. [speaker003:] Mmm. [speaker005:] uh, log [disfmarker] [nonvocalsound] E is equal, oh, to log of X plus N. [speaker004:] Uh, and [disfmarker] [speaker002:] Yeah. [speaker005:] And, well, [speaker004:] And, log of [disfmarker] [speaker005:] uh, we can say that E [nonvocalsound] [vocalsound] is equal to log of, [nonvocalsound] [nonvocalsound] um, exponential of X plus exponential of N. [speaker002:] Uh [disfmarker] [speaker004:] Mm hmm. [speaker002:] No. [speaker004:] No. [speaker002:] That doesn't follow. 
[speaker004:] Well, if E restricts [disfmarker] It is y [speaker005:] Well, this is [disfmarker] this is in the ti the time domain. Well, we have that, um [disfmarker] We have first that, for example, X is equal, uh [disfmarker] Well. This is the frequency domain and we can put [vocalsound] u that n the log domain [disfmarker] [speaker002:] Yeah. [speaker005:] log of X omega, but, well, in the time domain we have an exponential. No? No? Oh, maybe it's I am [disfmarker] I'm problem. [speaker002:] Yeah. I mean, just never mind what they are. Uh, it's just if X and N are variables [disfmarker] Right? [speaker004:] What is, uh [disfmarker]? [speaker002:] The [disfmarker] the [disfmarker] the log of X plus N is not the same as the log of E to the X plus E to the N. [speaker005:] Yeah. But this i Well, I don't [disfmarker] Well, uh, [speaker002:] Maybe we can take it off line, [speaker005:] maybe [disfmarker] [speaker002:] but I [disfmarker] I don't know. [speaker005:] I [disfmarker] I can do this incorrectly. Well, the expression that appear in the [disfmarker] in the paper, [nonvocalsound] is, uh [disfmarker] [speaker004:] The log [disfmarker] the Taylor series expansion for log one plus N by X is [disfmarker] [speaker005:] is X [disfmarker] [speaker003:] Is it the first order expansion? [speaker002:] OK. I i [speaker004:] Yeah, the first one. [speaker003:] Yeah, I guess. [speaker004:] Yeah. Yeah. [speaker002:] OK. [speaker003:] Yeah. [speaker002:] Yeah. Cuz it doesn't just follow what's there. [speaker003:] Uh huh. [speaker004:] Y yeah. If [disfmarker] if you take log X into log one plus N by X, and then expand the log one plus N by X into Taylor series [disfmarker] [speaker002:] It has to be some, uh, Taylor series [disfmarker] [speaker003:] Yeah. 
[speaker005:] Now, this is the [disfmarker] and then [disfmarker] [speaker003:] Yeah, but the [disfmarker] the second [pause] expression that you put is the first order expansion of the nonlinear relation between [disfmarker] [speaker005:] Not exactly. [speaker002:] No. [speaker005:] No, no, no. It's not the first space. Well, we have [disfmarker] pfft, uh, em [disfmarker] Well, we can put that X is equal [disfmarker] I is equal to log of, uh, mmm [disfmarker] [speaker002:] That doesn't follow. [speaker005:] Well, we can put, uh, this? [speaker004:] Mmm. No. [speaker002:] That [disfmarker] I mean, that [disfmarker] the f top one does not [pause] imply the second one. [speaker005:] The top? [speaker002:] Because [disfmarker] cuz the log of a sum is not the same as [pause] th [speaker005:] Yeah, yeah, yeah, yeah, yeah. [speaker002:] I mean, as [disfmarker] Yeah. [speaker005:] But we can [disfmarker] uh, we [disfmarker] we know that, for example, the log of [vocalsound] E plus B is equal to log of E plus log to B. [speaker002:] Right. [speaker005:] And we can say here, it i [speaker002:] Right. So you could s [speaker003:] What is that? [speaker005:] And we can, uh, put this inside. [speaker002:] Yeah. [speaker005:] And then we can, uh, you know [disfmarker] [speaker002:] N no, but [disfmarker] [speaker005:] Yeah. [speaker004:] Uh. [speaker002:] I don't see how you get the second expression from the top one. The [disfmarker] I mean, just more generally here, [vocalsound] if you say "log of, um, A plus B", the log of [disfmarker] log of A plus B is not [disfmarker] or A plus B is not the, um, log of E to the A plus E to the B. [speaker005:] No, no, no, no, no, no, no. This not. No. [speaker002:] Right? And that's what you seem to be saying. [speaker005:] No. It's not. But this is the same [disfmarker] oh. [speaker002:] Right? Cuz you [disfmarker] cuz you [disfmarker] up here you have the A plus B [disfmarker] [speaker005:] No. 
I say if I apply log, I have, uh, log of E is equal to log of, uh [disfmarker] in this side, is equal to log of X [speaker002:] Plus N. [speaker005:] plus N. [speaker002:] Right. [speaker005:] No? Right. [speaker002:] Right. And then how do you go from there to the [disfmarker]? [speaker005:] This is right. And then if I apply exponential, to have here E [disfmarker] [speaker002:] Look. OK, so let's [disfmarker] I mean, C equals A plus B, [speaker003:] It's log o of capital Y. Yeah, right. [speaker002:] and then [disfmarker] [speaker005:] Yeah. [speaker003:] Capital [pause] Y. [speaker004:] X. X. This is X, inside. [speaker003:] Mm hmm. [speaker005:] We have this, [speaker002:] Right. [speaker005:] no? [speaker002:] Yeah. That one's right. [speaker005:] Mm hmm. [speaker004:] One and [disfmarker] [speaker005:] S uh, i th we can put here the set transformation. [speaker002:] Oh. I see. [speaker005:] No? [speaker002:] I see. OK, I understand now. Alright, thanks. [speaker005:] Yeah. In this case, well, we can put here a [nonvocalsound] Y. [speaker002:] OK. So, yeah. It's just by definition [pause] that the individual [disfmarker] [vocalsound] that the, uh [disfmarker] So, capital X is by definition the same as E to the little X because she's saying that the little X is [disfmarker] is the, uh [disfmarker] is the log. Alright. [speaker005:] Now we can put this. [speaker002:] Yeah. [speaker005:] No? And here we can multiply by X. [speaker002:] Alright. I think these things are a lot clearer when you can use fonts [disfmarker] different fonts there [speaker005:] Oh, yes. [speaker002:] so you know which is which. [speaker005:] Yeah, yeah. [speaker002:] But I [disfmarker] I under I understand what you mean now. [speaker005:] That's true. That's true. [speaker002:] OK. [speaker005:] But this [disfmarker] this is correct? [speaker002:] Sure. [speaker005:] And now I can do it, uh [disfmarker] pfff! 
I can put log [nonvocalsound] of EX [vocalsound] plus log [disfmarker] [speaker002:] Oh. Yes. I understand now. [speaker005:] And this is [disfmarker] [speaker002:] And that's where it comes from. Yeah. [speaker003:] Yeah. Right. [speaker002:] Right. [speaker005:] Now it's correct. [speaker002:] Right. OK. Thanks. [speaker005:] Well. The idea [disfmarker] Well, we have fixed this equa [speaker002:] OK. So now once you get that [disfmarker] that one, then you [disfmarker] then you do a first or second order, or something, Taylor [vocalsound] series expansion of this. [speaker005:] Yeah. This is another linear relation that this [disfmarker] to develop this in [vocalsound] vector s Taylor series. [speaker003:] Yeah, sure. [speaker002:] Right. [speaker005:] Mm hmm. And for that, well, the goal is to obtain, um [disfmarker] [vocalsound] [vocalsound] est estimate a PDF for the noisy speech when we have a [disfmarker] [vocalsound] a statistic for clean speech and for the noisy speech. Mmm? And when w the way to obtain the PDF for the noisy speech is [disfmarker] well, we know this statistic and we know the noisy st well, we can apply first order of the vector st Taylor series of the [disfmarker] of the [disfmarker] of [disfmarker] well, the order that we want, increase the complexity of the problem. [speaker002:] Mm hmm. [speaker005:] And then when we have a expression, uh, for the [vocalsound] mean and variance of the noisy speech, we apply a technique of minimum mean square estimation [speaker002:] Mm hmm. [speaker005:] to obtain the expected value of the clean speech given the [disfmarker] this [vocalsound] statistic for the noisy speech [disfmarker] [speaker002:] Mm hmm. [speaker005:] the statistic for clean speech and the statistic of the noisy speech. This only that. But the idea is that [disfmarker] [speaker003:] And the [disfmarker] the model of clean speech is a codebook. Right? [speaker005:] u Yeah. 
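Collecting the derivation worked out at the board in one place (per frequency bin; capitals are power spectra, lower case their logs, so x = log X by definition — the point that resolved the confusion above):

```latex
% Additive noise in the power-spectral domain:
%   Y = X + N
% Taking logs, with x = \log X, n = \log N, y = \log Y:
\begin{align}
y \;=\; \log Y \;=\; \log\!\left(e^{x} + e^{n}\right)
  \;=\; x + \log\!\left(1 + e^{\,n - x}\right).
\end{align}
% The nonlinear term g(x,n) = \log(1 + e^{n-x}) is what VTS expands
% in a (here first-order) Taylor series about the Gaussian means
% (\mu_x, \mu_n):
\begin{align}
y \;\approx\; x + g(\mu_x, \mu_n)
  \;+\; \frac{\partial g}{\partial x}\Big|_{(\mu_x,\mu_n)} (x - \mu_x)
  \;+\; \frac{\partial g}{\partial n}\Big|_{(\mu_x,\mu_n)} (n - \mu_n),
\end{align}
% which yields a mean and variance for the noisy speech from the
% clean-speech and noise statistics.
```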
We have our codebook with different density [vocalsound] Gaussian. [speaker002:] Mm hmm. [speaker005:] We can expre we can put that the [vocalsound] PDF [comment] for the clean test, probability of the clean speech is equal to [disfmarker] [speaker002:] Yeah. [speaker003:] Mm hmm. [speaker002:] So, um, how [disfmarker] h how much [disfmarker] in [disfmarker] in the work they reported, how much noisy speech did you need to get, uh, good enough statistics for the [disfmarker] to get this mapping? [speaker005:] I don't know exactly. [speaker002:] Yeah. [speaker005:] I [disfmarker] I need to s [speaker002:] Yeah. [speaker005:] I don't know exactly. [speaker002:] Cuz I think what's certainly characteristic of a lot of the [pause] data in this test is that, um, you don't have [disfmarker] [vocalsound] the [disfmarker] the training set may not be a [disfmarker] a great estimator for the noise in the test set. Sometimes it is and sometimes it's not. [speaker005:] Yeah. I [disfmarker] the clean speech [disfmarker] the codebook for clean speech, I am using TIMIT. And I have now, uh, sixty four [nonvocalsound] Gaus Gaussian. [speaker002:] Uh huh. And what are you using for the noisy [disfmarker]? Y y doing that strictly [disfmarker] [speaker005:] Of the noise [disfmarker] I estimate the noises wi Well, for the noises I only use one Gaussian. [speaker002:] Mm hmm. And [disfmarker] and you [disfmarker] and you train it up entirely from, uh, nonspeech sections in the test? [speaker003:] Hmm. [speaker005:] Uh, yes. The first experiment that I do it is solely to calculate the, mmm [disfmarker] well, this value [disfmarker] [speaker002:] Yeah. [speaker005:] uh, the compensation of the dictionary o one time using the [disfmarker] the noise at the f beginning of the sentence. This is the first experiment. [speaker002:] Mm hmm. Yeah. [speaker005:] And I fix this for all the [disfmarker] all the sentences. 
Uh, because [disfmarker] well, the VTS methods [disfmarker] In fact the first thing that I do is to [disfmarker] to obtain, uh, an expression for E [disfmarker] probability e expression of [disfmarker] of E. That mean that the VTS [disfmarker] mmm, with the VTS we obtain, uh [disfmarker] well, we [disfmarker] we obtain the means for each Gaussian [comment] and the variance. [speaker002:] Mm hmm. [speaker005:] This is one. Eh, this is the composition of the dictionary. [speaker002:] Mm hmm. [speaker005:] This one thing. And the other thing that this [disfmarker] with these methods is to, uh, obtain [disfmarker] to calculate this value. [speaker002:] Mm hmm. [speaker005:] Because we can write [disfmarker] uh, we can write that [vocalsound] the estimation of the clean speech is equal at an expected value of the clean speech conditional to, uh, the noise signal [disfmarker] [vocalsound] the probability f of the [disfmarker] the statistic of the clean speech and the statistic of the noise. [speaker002:] Mm hmm. Mm hmm. [speaker005:] This is the methods that say that we're going obtain this. [speaker002:] Mm hmm. [speaker005:] And we can put that this is equal to the estimated value of E minus a function that conditional to E to the T [disfmarker] to the noise signal. Well, this is [disfmarker] this function is the [vocalsound] the term [disfmarker] after develop this, the term that we [disfmarker] we take. Give PX and, uh, P the noise. [speaker004:] X K C noise. [speaker002:] Mmm. [speaker005:] And I can [vocalsound] put that this is equal to [pause] the [pause] noise signal minus [disfmarker] Well, I put before [pause] this name, uh [disfmarker] And I can calculate this. [speaker002:] What is the first variable in that probability? [speaker005:] Uh, this is the Gaussian. [speaker002:] No, no. I'm sorry. In [disfmarker] in the one you pointed at. [speaker005:] v [speaker002:] What's that variable? [speaker005:] Uh, this is the [disfmarker] [speaker004:] Weak. 
So probably it [disfmarker] it would do that. [speaker005:] like this, [speaker003:] It's one mixture of the model. Right? [speaker005:] but conditional. No, it's condition it's not exactly this. It's modify. Uh, if we have clean speech [disfmarker] we have the dictionary for the clean speech, we have a probability f of [disfmarker] our [disfmarker] our weight for each Gaussian. No. And now, this weight is different now [speaker002:] OK. Yes. [speaker005:] because it's conditional. And this I need to [disfmarker] to calcu I know this [speaker002:] Uh huh. [speaker005:] and I know this because this is from the dictionary that you have. [speaker002:] Uh huh. [speaker005:] I need to calculate this. And for calculate this, [vocalsound] I have an [disfmarker] I [disfmarker] I can develop an expression that is [speaker002:] Yes. [speaker004:] It's overlapping. [speaker005:] that. I can calculate [disfmarker] I can [disfmarker] I calculated this value, [vocalsound] uh, with the statistic of the noisy speech that I calculated before with the VTS approximation. [speaker002:] Mm hmm. [speaker005:] And [disfmarker] well, normalizing. And I know everything. Uh, with the, nnn [disfmarker] when I develop this in s Taylor [disfmarker] Taylor series, I can't, um, [vocalsound] calculate the mean and the variance [vocalsound] of the [disfmarker] for each of the Gaussian of the dictionary for the noisy speech. Now. And this is fixed. [speaker002:] Mm hmm. [speaker005:] If I never do an estimat a newer estimation of the noise, this mean as [disfmarker] mean and the variance are fixed. [speaker002:] Mm hmm. [speaker005:] And for each s uh, frame of the speech the only thing that I need to do is to calculate this in order to calculate the estimation of the clean speech given our noisy speech. 
[speaker002:] So, I'm [disfmarker] I'm not following this perfectly but, um, I [disfmarker] Are you saying that all of these estimates are done [pause] using, um, estimates of the probability density for the noise that are calculated only from the first ten frames? [speaker005:] Yeah. [speaker002:] And never change throughout anything else? [speaker005:] Never cha This is one of the approximations that I am doing. [speaker002:] Per [disfmarker] per [disfmarker] per utterance, or per [disfmarker]? [speaker005:] Per utterance. Yes. [speaker002:] Per utterance. [speaker005:] Per utterance. Yes. [speaker002:] OK. So it's done [disfmarker] it's done new for each new utterance. [speaker005:] And th Yeah. [speaker002:] So this changes the whole mapping for every utterance. [speaker005:] It's not [disfmarker] Yeah. Yeah. It's fixed, the dictionary. [speaker002:] OK. [speaker005:] And the other estimation is when I do the uh on line estimation, I change the means and variance of th for the noisy speech [speaker002:] OK. Yeah? [speaker005:] each time that I detect noise. [speaker002:] Mm hmm. [speaker005:] I do it uh again this develop. Estimate the new mean and the variance of the noisy speech. And with th with this new s new mean and variance I estimate again this. [speaker002:] So you estimated, uh, f completely forgetting what you had before? [speaker005:] Um, [speaker002:] Uh, or is there some adaptation? [speaker005:] no, no, no. It's not completely [disfmarker] No, it's [disfmarker] I am doing something like an adaptation of the noise. [speaker002:] OK. Now do we know, either from their experience or from yours, that, uh, just having, uh, two parameters, the [disfmarker] the mean and variance, is enough? Yeah. I mean, I know you don't have a lot of data to estimate with, but [disfmarker] but, uh, um [disfmarker] [speaker005:] I estimate mean and variance for each one of the Gaussian of the codebook. [speaker002:] No, I'm talking about the noise. [speaker005:] Oh, um. 
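A minimal sketch of the estimator Carmen describes: compensate each clean-speech codebook Gaussian to the noisy domain with the correction g = log(1 + exp(μn − μx)), compute posterior weights P(k | y) against the compensated Gaussians, and subtract the posterior-weighted correction from the observation. All variable names are illustrative, and the zeroth-order simplification (evaluating g only at the means, with uncompensated variances) is an assumption — the meeting discusses a first-order expansion, which would also adjust the variances.

```python
import numpy as np

def vts_mmse_estimate(y, mu_x, var_x, weights, mu_n):
    """Zeroth-order VTS / MMSE clean-speech estimate for one frame.

    y       : (bins,) log-mel noisy observation
    mu_x    : (K, bins) clean-speech codebook means (log-mel)
    var_x   : (K, bins) codebook variances (diagonal covariances)
    weights : (K,) codebook priors
    mu_n    : (bins,) single-Gaussian noise mean (log-mel),
              e.g. estimated from leading nonspeech frames

    Returns x_hat = y - sum_k P(k | y) * g_k, where
    g_k = log(1 + exp(mu_n - mu_x[k])) compensates Gaussian k.
    """
    g = np.log1p(np.exp(mu_n - mu_x))          # (K, bins) corrections
    mu_y = mu_x + g                            # compensated means
    # Log-likelihood of y under each compensated diagonal Gaussian.
    log_lik = -0.5 * np.sum(
        np.log(2 * np.pi * var_x) + (y - mu_y) ** 2 / var_x, axis=1)
    log_post = np.log(weights) + log_lik
    log_post -= np.max(log_post)               # numerical stability
    post = np.exp(log_post)
    post /= post.sum()                         # P(k | y)
    return y - post @ g
```

With a fixed noise mean per utterance this reduces, per frame, to the single weighted-sum computation Carmen points at; re-estimating μn during nonspeech (the "on line" variant) just means recomputing g and mu_y when the noise statistics change.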
Well, only one [disfmarker] [speaker002:] There's only one Gaussian. [speaker005:] I am only [disfmarker] using only one. [speaker002:] Right. [speaker005:] I don't know i [speaker002:] And you [disfmarker] and [disfmarker] and it's, uh, uh [disfmarker] right, it's only [disfmarker] it's only one [disfmarker] Wait a minute. This is [disfmarker] what's the dimensionality of the Gaussian? This is [disfmarker] [speaker005:] Uh, it's in [disfmarker] after the mel filter bank. [speaker002:] So this is twenty or something? [speaker005:] Twenty three. [speaker002:] Twenty? So it's [disfmarker] Yeah. So it's actually forty numbers [pause] that you're getting. Yeah, maybe [disfmarker] maybe you don't have a [disfmarker] [speaker005:] Uh, the original paper say that only one Gaussian for the noise. [speaker002:] Well, yeah. But, I mean, [vocalsound] no [disfmarker] no paper is [disfmarker] is a Bible, [speaker005:] Yeah, maybe isn't the right thing. [speaker002:] you know. This is [disfmarker] this is, uh [disfmarker] [speaker005:] Yeah, yeah, yeah. [speaker002:] The question is, um, [vocalsound] whether it would be helpful, i particularly if you used [disfmarker] if you had more [disfmarker] So, suppose you did [disfmarker] This is almost cheating. It certainly isn't real time. But if y suppose you use the real boundaries that [disfmarker] that you were [disfmarker] in fact were given [vocalsound] by the VAD and so forth or I [disfmarker] I guess we're gonna be given even better boundaries than that. And you look [disfmarker] you take all o all of the nonspeech components in an utterance, so you have a fair amount. Do you benefit from having a better model for the noise? That would be another question. [speaker005:] Maybe. [speaker002:] So first question would be [vocalsound] to what extent i are the errors that you're still seeing [vocalsound] based on the fact that you have poor boundaries for the, uh, uh, nonspeech? 
And the second question might be, given that you have good boundaries, could you do better if you used more parameters to characterize the noise? Um. Also another question might be [disfmarker] Um, they are doing [disfmarker] they're using first term only of the vector Taylor series? [speaker005:] Yeah. [speaker002:] Um, if you do a second term does it get too complicated cuz of the nonlinearity? [speaker005:] Yeah. It's quite complicated. [speaker002:] Yeah, OK. No, I won't ask the next question then. [speaker005:] Oh, it's [disfmarker] it's the [disfmarker] for me it's the first time that I am working with VTS. Uh [disfmarker] [speaker002:] Yeah. No, it's interesting. Uh, w we haven't had anybody work with it before, so it's interesting to get your [disfmarker] get your feedback about it. [speaker005:] It's another type of approximation because i because it's a statistic [disfmarker] statistic approximation to remove the noise. I don't know. [speaker002:] Right. [speaker006:] Great. OK. Well, I guess we're about done. Um, so some of the digit forms don't have digits. Uh, [vocalsound] we ran out there were some blanks in there, so not everybody will be reading digits. But, um, I guess you've got some. Right, Morgan? [speaker002:] I have some. [speaker006:] So, why don't you go ahead and start. And I think it's [pause] just us down here at this end that have them. [speaker004:] S [speaker005:] um [speaker006:] So. [speaker002:] Uh, OK. [speaker004:] S so, we switch off with this [speaker006:] Whenever you're ready. [speaker004:] or n? [speaker006:] Uh, leave it on, [speaker004:] No. OK. [speaker006:] uh, [speaker002:] They prefer to have them on [speaker006:] and the [disfmarker] [speaker002:] just so that they're continuing to get the distant, uh, information. [speaker006:] Yeah. [speaker004:] OK. OK. [speaker006:] OK. [speaker002:] OK. S [speaker005:] I guess. [speaker001:] OK, we're on. 
So just make sure that th your wireless mike is on, if you're wearing a wireless. [speaker005:] Check one. Check one. [speaker001:] And you should be able to see which one [disfmarker] which one you're on by, uh, watching the little bars change. [speaker002:] So, which is my bar? Mah! Number one. [speaker001:] Yep. [speaker005:] Sibilance. Sibilance. [speaker001:] So, actually, if you guys wanna go ahead and read digits now, as long as you've signed the consent form, that's alright. [speaker005:] Are we supposed to read digits at the same time? [speaker001:] No. [speaker005:] Oh, OK. [speaker001:] No. Each individually. We're talking about doing all at the same time but I think cognitively that would be really difficult. [vocalsound] To try to read them while everyone else is. [speaker005:] Everyone would need extreme focus. [speaker001:] So, when you're reading the digit strings, the first thing to do is just say which transcript you're on. [speaker003:] Other way. We m We may wind up with ver We [disfmarker] we may need versions of all this garbage. [speaker002:] For our stuff. [speaker003:] Yeah. [speaker002:] Yeah. [speaker001:] Um. So the first thing you'd wanna do is just say which transcript you're on. [speaker003:] Yeah. [speaker001:] So. You can see the transcript? There's two large number strings on the digits? So you would just read that one. And then you read each line with a small pause between the lines. And the pause is just so the person transcribing it can tell where one line ends and the other begins. And I'll give [disfmarker] I'll read the digit strings first, so can see how that goes. Um. Again, I'm not sure how much I should talk about [pause] stuff before everyone's here. [speaker003:] Mmm. Well, we have one more coming. [speaker001:] OK. Well, why don't I go ahead and read digit strings and then we can go on from there. [speaker003:] OK. Well, we can start doing it. [speaker001:] Thanks. So, uh, just also a note on wearing the microphones. 
All of you look like you're doing it reasonably correctly, but you want it about two thumb widths away from your mouth, and then, at the corner. And that's so that you minimize breath sounds, so that when you're breathing, you don't breathe into the mike. Um. Yeah, that's good. And uh [disfmarker] So, everyone needs to fill out, only once, the speaker form and the consent form. And the short form [disfmarker] I mean, you should read the consent form, but uh, the thing to notice is that we will give you an opportunity to edit a all the transcripts. So, if you say things and you don't want them to be released to the general public, which, these will be available at some point to anyone who wants them, uh, you'll be given an opportunity by email, uh, to bleep out any portions you don't like. Um. On the speaker form just fill out as much of the information as you can. If you're not exactly sure about the region, we're not exactly sure either. So, don't worry too much about it. The [disfmarker] It's just self rating. Um. And I think that's about it. I mean, should I [disfmarker] Do you want me to talk at all about why we're doing this and what this project is? [speaker003:] Um, yeah. [speaker001:] or [disfmarker]? [speaker003:] No. There was [disfmarker] there was [disfmarker] Let's see. [speaker005:] Does Nancy know that we're meeting in here? [speaker003:] Oh [disfmarker] She got an emai she was notified. [speaker002:] I sent an email. [speaker005:] Oh yeah, she got an e Yeah, yeah. [speaker003:] Whether she knows [vocalsound] is another question. Um. So are the people going to be identified by name? [speaker001:] Well, what we're gonna [disfmarker] we'll anonymize it in the transcript. [speaker003:] Right. [speaker001:] Um, but not in the audio. [speaker003:] OK. So, then in terms of people worrying about, uh, excising things from the transcript, it's unlikely. [speaker001:] So the [speaker003:] Since it [disfmarker] it does isn't attributed. 
Oh, I see, but the a but the [disfmarker] but the [disfmarker] [speaker001:] Right, so if I said, "Oh, hi Jerry, how are you?", we're not gonna go through and cancel out the "Jerry"s. [speaker003:] Yeah. Sure. [speaker001:] Um, so we will go through and, in the speaker ID tags there'll be, you know, M one O seven, M one O eight. [speaker003:] Right. Right. [speaker001:] Um, but uh, um, it w uh, I don't know a good way of doing it on the audio, and still have people who are doing discourse research be able to use the data. [speaker003:] OK. Mm hmm. No, I [disfmarker] I wasn't complaining, I just wanted to understand. [speaker001:] Yep. Right. [speaker003:] OK. [speaker002:] Well, we can make up aliases for each of us. [speaker001:] Yeah, I mean, whatever you wanna do is fine, [speaker003:] Right. [speaker006:] OK. [speaker001:] but we find that [disfmarker] We want the meeting to be as natural as possible. [speaker003:] OK. [speaker001:] So, we're trying to do real meetings. And so we don't wanna have to do aliases [speaker003:] Right. [speaker001:] and we don't want people to be editing what they say. [speaker002:] Right. [speaker003:] Right. [speaker001:] So I think that it's better just as a pro post process to edit out every time you bash Microsoft. [speaker002:] Mm hmm. [speaker001:] You know? [speaker003:] Right. Um, OK. So why don't you tell us briefly your [disfmarker] give [disfmarker] give your e normal schpiel. [speaker001:] OK. So th Um. So this is [disfmarker] The project is called Meeting Recorder and there are lots of different aspects of the project. Um. So my particular interest is in the PDA of the future. This is a mock up of one. Yes, we do believe the PDA of the future will be made of wood. Um. [comment] The idea is that you'd be able to put a PDA at the table at an impromptu meeting, and record it, and then be able to do querying and retrieval later on, on the meeting. 
So that's my particular interest, is a portable device to do m uh, information retrieval on meetings. Other people are interested in other aspects of meetings. Um. So the first step on that, in any of these, is to collect some data. And so what we wanted is a room that's instrumented with both the table top microphones, and these are very high quality pressure zone mikes, as well as the close talking mikes. What the close talk ng talking mikes gives us is some ground truth, gives us, um, high quality audio, um, especially for people who aren't interested in the acoustic parts of this corpus. So, for people who are more interested in language, we didn't want to penalize them by having only the far field mikes available. And then also, um, it's a very, very hard task in terms of speech recognition. Um. And so, uh, on the far field mikes we can expect very low recognition results. So we wanted the near field mikes to at least isolate the difference between the two. So that's why we're recording in parallel with the close talking and the far field at the same time. And then, all these channels are recorded simultaneously and framed synchronously so that you can also do things like, um, beam forming on all the microphones and do research like that. Our intention is to release this data to the public, um, probably through f through a body like the LDC. And, uh, just make it as a generally available corpus. Um. [vocalsound] There's other work going on in meeting recording. So, we're [disfmarker] we're working with SRI, with UW, Um. NIST has started an effort which will include video. We're not including video, obviously. And uh [disfmarker] and then also, um, a small amount of assistance from IBM. Is also involved. Um. Oh, and the digit strings, this is just a more constrained task. Um. So because the general environment is so challenging, we decided to [disfmarker] to do at least one set of digit strings to give ourselves something easier. 
And it's exactly the same digit strings as in TI digits, which is a common connected digits corpus. So we'll have some, um, comparison to be able to be made. [speaker003:] OK. [speaker001:] Anything else? [speaker003:] No. [speaker001:] OK, so when the l last person comes in, just have them wear a wireless. It should be on already. Um. Either one of those. And uh, read the digit strings and [disfmarker] and fill out the forms. So, the most important form is the consent form, so just be s be sure everyone signs that, if they consent. [speaker002:] I'm sure it's pretty usual for meetings that people come late, so you will have to leave what you set. [speaker001:] Yeah. Right. And uh, just give me a call, which, my number's up there when your meeting is over. [speaker003:] Yep. [speaker001:] And [disfmarker] I'm going to leave the mike here but it's n [nonvocalsound] Uh, but I'm not gonna be on so don't have them use this one. It'll just be sitting here. [speaker002:] Input? Yeah. There we go. [speaker003:] By the way, Adam, we will be using the, uh, screen as well. [speaker002:] Yep. [speaker003:] So, you know. Wow! Organization. So you guys who got email about this [pause] oh f uh, Friday or something about what we're up to. [speaker005:] No. [speaker006:] No. [speaker002:] I got it. [speaker005:] What was the nature of the email? [speaker003:] Oh, this was about [pause] um, inferring intentions from features in context, and the words, like "s go to see", or "visit", or some [speaker002:] Wel we I [disfmarker] uh [disfmarker] I [disfmarker] I [disfmarker] [speaker003:] You didn't get it? [speaker005:] I don't think I did. [speaker003:] I guess these g have got better filters. Cuz I sent it to everybody. You just blew it off. [speaker005:] Ah. [speaker003:] OK. [speaker002:] It's really simple though. So this is the idea. Um. 
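Because the channels are recorded simultaneously and frame-synchronously, simple array processing like the beamforming mentioned above becomes possible on this corpus. The following is only a hedged sketch of delay-and-sum beamforming on synthetic data, not the actual Meeting Recorder pipeline: the four-microphone setup, the integer sample delays, and the noise level are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A clean "speech" signal and 4 far-field channels: each channel is a
# delayed copy of the signal plus independent sensor noise.
T = 2000
clean = np.sin(2 * np.pi * np.arange(T) / 40.0)
delays = [0, 3, 5, 8]                  # integer sample delay per mic
channels = [np.roll(clean, d) + 0.5 * rng.standard_normal(T)
            for d in delays]

def delay_and_sum(channels, delays):
    """Align each channel by its (here, known) delay and average:
    the coherent speech adds up, the independent noise partly cancels."""
    aligned = [np.roll(ch, -d) for ch, d in zip(channels, delays)]
    return np.mean(aligned, axis=0)

beamformed = delay_and_sum(channels, delays)

# The beamformed output should be closer to the clean signal than any
# single channel (compare mean squared error, ignoring the roll edges).
sl = slice(20, T - 20)
mse_single = np.mean((channels[0][sl] - clean[sl]) ** 2)
mse_beam = np.mean((beamformed[sl] - clean[sl]) ** 2)
print(mse_single, mse_beam)
```

With four mics and independent noise, averaging cuts the noise variance by roughly a factor of four; in a real room the delays would have to be estimated (e.g., by cross-correlation) rather than known, which is exactly why the frame synchrony matters.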
We could pursue, um, if we thought it's [disfmarker] it's worth it but, uh, I think we [disfmarker] we will agree on that, um, to come up with a [disfmarker] with a sort of very, very first crude prototype, and do some implementation work, and do some [disfmarker] some research, and some modeling. So the idea is if you want to go somewhere, um, and focus on that object down [disfmarker] Oh, I can actually walk with this. This is nice. down here. That's the Powder Tower. Now, um, [vocalsound] we found in our, uh, data and from experiments, that there's three things you can do. Um, you can walk this way, and come really, really close to it. And touch it. But you cannot enter or do anything else. Unless you're interested in rock climbing, it won't do you no good standing there. It's just a dark alley. But you can touch it. If you want to actually go up or into the tower, you have to go this way, and then through some buildings and up some stairs and so forth. If you actually want to see the tower, and that's what actually most people want to do, is just have a good look of it, take a picture for the family, [comment] you have to go this way, and go up here. And there you have a vre really view [disfmarker] It exploded, the [disfmarker] during the Thirty years war. Really uh, interesting sight. [speaker005:] Mmm. [speaker002:] And um, these uh [disfmarker] these lines are, um, paths, or so That's ab er, i the street network of our geographic information system. And you can tell that we deliberately cut out this part. Because otherwise we couldn't get our GIS system to take [disfmarker] to lead people this way. It would always use the closest point to the object, and then the tourists would be faced, you know, in front of a wall, but it would do them absolutely no good. So, [vocalsound] what we found interesting is, first of all, intentions differ. Maybe you want to enter a building. Maybe you want to see it, take a picture of it. 
Or maybe you actually want to come as close as possible to the building. For whatever reason that may be. [speaker005:] What's it [disfmarker] what's it made out of? [speaker002:] Um, r red limestone. [speaker005:] So maybe you would wanna touch it. [speaker002:] Yeah, maybe you would want to touch it. Um. Okay, I [disfmarker] This, um [disfmarker] These intentions, we [disfmarker] w w we could, if we want to, call it the [disfmarker] the Vista mode, where we just want to [disfmarker] eh [disfmarker] s get the overview or look at it, the Enter mode, and the, well, Tango mode. I always come up with [disfmarker] with silly names. So this "Tango" means, literally translated, "to touch". So [disfmarker] But sometimes the [disfmarker] the Tango mode is really relevant in the [disfmarker] in the sense that, um, if you want to, uh [disfmarker] If you don't have the intention of entering your building, but you know that something is really close to it, and you just want to approach it, or get to that building. Consider, for example, the Post Office in Chicago, a building so large that it has its own zip code. So the entrance could be miles away from the closest point. So sometimes it m m m makes sense maybe to d to distinguish there. So, um, I've looked, uh, through twenty some [disfmarker] Uh, I didn't look through all the data. um, and there [disfmarker] there's uh, a lot more different ways in people [disfmarker] uh, the ways people phrase how to g get [disfmarker] if they want to get to a certain place. And sometimes here it's b it's a little bit more obvious [disfmarker] Um. [vocalsound] Maybe I should go back a couple of steps and go through the [disfmarker] [speaker003:] No, OK come in, sit down. If you grab yourself a microphone. [speaker002:] You need to sign some stuff and read some digits. [speaker003:] Well, you can sign afterwards. [speaker002:] O or later. [speaker005:] You have to al also have to read some digits. [speaker003:] Afterwards. [speaker004:] OK. 
[comment] OK. Afterwards is fine. [speaker002:] They are uncomfortable. Mm hmm. [speaker004:] Really small? OK. I see. OK. [speaker002:] Yep. [speaker004:] Thank you. [speaker002:] OK, but that was our idea. [speaker003:] And it [disfmarker] it [disfmarker] it [disfmarker] it it also has to be switched on, Nance. [speaker002:] Is [disfmarker] [speaker005:] No, that one's already on, I thought he said. [speaker002:] I [disfmarker] I think [disfmarker] [speaker003:] It's on? OK, good. [speaker005:] Yeah. [speaker004:] OK. It's on. [speaker002:] OK. That was the idea. Um, people, when they w when they want to go to a building, sometimes they just want to look at it. Sometimes they want to enter it. And sometimes they want to get really close to it. That's something we found. It's just a truism. And the places where you will lead them for these intentions are sometimes ex in incredibly different. I [disfmarker] I gave an example where the point where you end up if you want to look at it is completely different from where [disfmarker] if you want to enter it. So, this is sort of how people may, uh [disfmarker] may phrase those requests to a [disfmarker] a [disfmarker] a mock up system at least that's the way they did it. And we get tons of [disfmarker] of these "how do I get to", "I want to go to", but also, "give me directions to", and "I would like to see". And um, what we can sort of do, if we look closer a closer at the [disfmarker] the data [disfmarker] That was the wrong one. um, we can look at some factors that may make a difference. First of all, very important, and um, that [disfmarker] I've completely forgot that when we talked. This is of course a crucial factor, "what type of object is it?" So, some buildings you just don't want to take pictures of. Or very rarely. But you usually want to enter them. Some objects are more picturesque, and you [disfmarker] more f more highly photographed. 
Then of course the [disfmarker] the actual phrases may give us some idea of what the person wants. Um. Sometimes I found in the [disfmarker] Uh, looking at the data, in a superficial way, I found some s sort of modifiers that [disfmarker] that m may also give us a hint, um, "I'm trying to get to" Nuh? "I need to get to". Sort of hints to the fact that you're not really sightseeing and [disfmarker] and just f there for pleasure and so forth and so on. And this leads us straight to the context which also should be considered. That whatever it is you're doing at the moment may also inter influence the interpretation of [disfmarker] of a phrase. So, this is, uh, really uh, uh, uh [disfmarker] My suggestion is really simple. We start with, um [disfmarker] Now, Let me, uh, say one more thing. What we do know, is that the parser we use in the SmartKom system will never differentiate between any of these. So, basically all of these things will result in the same XML M three L structure. Sort of action "go", and then an object. [speaker004:] Mm hmm. [speaker002:] Yeah? and a source. So it's [disfmarker] it's [disfmarker] it's way too crude to d capture those differences in intentions. So, I thought, "Mmm! Maybe for a deep understanding task, that's a nice sort of playground or first little thing." Where we can start it and n sort of look [disfmarker] OK, we need, we gonna get those M three L structures. The crude, undifferentiated parse. Interpreted input. We may need additional part of speech, or maybe just some information on the verb, and modifiers, auxiliaries. We'll see. And I will try to [disfmarker] to sort of come up with a list of factors that we need to get out of there, and maybe we want to get a g switch for the context. So this is not something which we can actually monitor, [vocalsound] now, but just is something we can set. And then you can all imagine sort of a [disfmarker] a constrained satisfaction program, depending on [disfmarker] on what, um, comes out. 
We want to have an [disfmarker] a structure resulting if we feed it through a belief net or [disfmarker] or something along those lines. We'd get an inferred intention, we [disfmarker] we produce a structure that differentiates between the Vista, the Enter, and the, um, Tango mode. Which I think we maybe want to ignore. But. That's my idea. It's up for discussion. We can change all of it, any bit of it. Throw it all away. [speaker006:] Now [@ @] this email that you sent, actually. [speaker003:] What? [speaker006:] Now I remember the email. [speaker003:] OK. [speaker005:] Huh. Still, I have no recollection whatsoever of the email. I'll have to go back and check. [speaker003:] Not important. So, what is important is that we understand what the proposed task is. And, the [disfmarker] the i uh, Robert and I talked about this some on Friday. And we think it's well formed. So we think it's a well formed, uh, starter task for this, uh, deeper understanding in the tourist domain. [speaker006:] So, where exactly is the, uh, deeper understanding being done? Like I mean, s is it before the Bayes net? Is it, uh [disfmarker] [speaker003:] Well, it's the [disfmarker] it's [disfmarker] it's always all of it. So, in general it's always going to be, the answer is, everywhere. Uh, so the notion is that, uh, this isn't real deep. But it's deep enough that you can distinguish between these th three quite different kinds of, uh, going to see some tourist thing. And, so that's [disfmarker] that's the quote "deep" that we're trying to get at. And, Robert's point is that the current front end doesn't give you any way to [disfmarker] Not only doesn't it do it, but it also doesn't give you enough information to do it. It isn't like, if you just took what the front end gives you, and used some clever inference algorithm on it, you would be able to figure out which of these is going on. 
So, uh, and this is [disfmarker] Bu I in general it's gonna be true of any kind of deep understanding, there's gonna be contextual things, there're gonna be linguistic things, there're gonna be discourse things, and they gotta be combined. And, my idea on how to combine them is with a belief net, although it may turn out that t some totally different thing is gonna work better. Um, the idea would be that [vocalsound] you, uh, take your [disfmarker] You're editing your slide? [speaker002:] Yeah. As i a sort of, as I get ideas, uh w uh. [speaker003:] Oh. [speaker002:] So, discourse [disfmarker] I [disfmarker] I [disfmarker] I thought about that. Of course that needs to sort of go in there. [speaker003:] Oh. I'm sorry. OK. So. This is minutes [disfmarker] taking minutes as we go, in his [disfmarker] in his own way. [speaker002:] Yep. [speaker003:] Um, but the p the [disfmarker] Anyway. So the thing is, [vocalsound] i uh, d naively speaking, you've [disfmarker] you've got a [disfmarker] for this little task, a belief net, which is going to have as output, the conditional pr probability of one of three things, that the person wants to [disfmarker] uh, to View it, to Enter it, or to Tango with it. Um. So that [disfmarker] the [disfmarker] the output of the belief net is pretty well formed. And, then the inputs are going to be these kinds of things. And, then the question is [disfmarker] there are two questions [disfmarker] is, uh, one, where do you get this i [comment] information from, and two, what's the structure of the belief net? So what are the conditional probabilities of this, that, and the other, given these things? And you probably need intermediate nodes. I [disfmarker] we don't know what they are yet. So it may well be that, uh, for example, that, uh, knowing whether [disfmarker] Oh, another thing you want is some information abou I think, about the time of day. Now, they may wanna call that part of context. But the time of day matters a lot. 
[speaker002:] Mm hmm. [speaker003:] And, if things are obviously closed, then, you [disfmarker] [speaker002:] People won't want to enter it. [speaker003:] Pe people don't wanna enter them. And, if it's not obvious, you may want to actually uh, point out to people that it's closed [disfmarker] you know, what they're g going to is closed and they don't have the option of entering it. [speaker002:] s b [speaker003:] So another thing that can come up, and will come up as soon as you get serious about this is, that another option of course is to have a [disfmarker] more of a dialogue. So if someone says something you could ask them. [speaker005:] Yeah. [speaker003:] OK. And [disfmarker] Now, one thing you could do is always ask them, but that's boring. And it also w it also be a pain for the person using it. So one thing you could do is build a little system that, said, "whenever you got a question like that I've got one of three answers. Ask them which one you want." OK. But that's, um, not what we're gonna do. [speaker002:] But maybe that's a false state of the system, that it's too close to call. [speaker003:] Oh yeah. You want the [disfmarker] you want the ability to a You want the ability to ask, but what you don't wanna do is onl build a system that always asks every time, and i That's not getting at the scientific problem, and it's [disfmarker] [speaker002:] Mm hmm. [speaker003:] In general you're [disfmarker] you know, it's gonna be much more complex than that. a This is purposely a really simple case. [speaker002:] Yeah. [speaker003:] So, uh [disfmarker] Yeah. [speaker002:] I have one more point to [disfmarker] to Bhaskara's question. Um, I think also the [disfmarker] the [disfmarker] the deep understanding part of it is [disfmarker] is going to be in there to the extent that we um, want it in terms of our modeling. 
We can start, you know, basic from human beings, model that, its motions, going, walking, seeing, we can mem model all of that and then compose whatever inferences o we make out of these really conceptual primitives. That will be extremely deep in the [disfmarker] in [disfmarker] in [disfmarker] in my understanding. [speaker003:] Yeah. S so [disfmarker] so the way that might come up, if you wanna [disfmarker] Suppose you wanted to do that, you might say, "Um, as an intermediate step in your belief net, is there a Source Path Goal schema involved?" OK? And if so, uh, is there a focus on the goal? Or is there a focus on the path? or something. And that could be, uh, one of the conditiona you know, th the [disfmarker] In some piece of the belief net, that could be the [disfmarker] the appropriate thing to enter. [speaker006:] So, where would we extract that information from? From the M three L? [speaker003:] No. No. See, the M three L is not gonna give th What he was saying is, the M three L does not have any of that. All it has is some really crude stuff saying, "A person wants to go to a place." [speaker006:] Right. [speaker005:] The M three L is the old SmartKom output? [speaker003:] Right. [speaker005:] OK. [speaker003:] M three well, M three L itself refers to Multimedia Mark up Language. [speaker005:] It's just a language. Right, yeah. [speaker003:] So we have th w we we we have to have a better w way of referring to [disfmarker] [speaker002:] The parser output? [speaker003:] Mm hmm. Yeah. [speaker002:] "Analyzed speech" I think it's what they call it, [speaker003:] The [disfmarker] Well, OK. [speaker002:] really, oder [disfmarker] [speaker003:] Yeah. [speaker002:] o th No, actually, intention lattices is what we're gonna get. [speaker003:] Is i but they c they call it intention lattice, but tha [speaker002:] In in a intention lattice k Hypothesis. [speaker003:] Anyway. [speaker002:] They call it intention hypotheses. [speaker003:] Right. 
So, th they're gonna give us some cr uh [disfmarker] or [disfmarker] We can assume that y you get this crude information. About intention, and that's all they're going to provide. And they don't give you the kind of object, they don't give you any discourse history, if you want to keep that you have to keep it somewhere else. [speaker002:] Well, they keep it. We have to request it. [speaker003:] Right. [speaker002:] Nuh? But it's not in there. [speaker003:] Well, they [disfmarker] they kee they keep it by their lights. It may [disfmarker] it may or may not be what [disfmarker] what we want. [speaker002:] Hmm. Yeah, or i [speaker003:] Yeah. [speaker005:] So, if someone says, "I wanna touch the side of the Powder Tower", that would [disfmarker] basically, we need to pop up Tango mode and the [disfmarker] and the directions? [speaker003:] If i if [disfmarker] Yeah, if it got as simple as that, yeah. [speaker005:] Yeah. [speaker003:] But it wouldn't. [speaker005:] OK. But that doesn't necessarily [disfmarker] But we'd have to infer a Source Path Goal to some degree for touching the side, right? [speaker002:] Well [disfmarker] Uh, th the there is a p a point there if I understand you. Correct? Um, because um, sometimes people just say things [disfmarker] This you find very often. "Where is the city hall?" And this do they don't wanna sh see it on a map, or they don't wanna know it's five hundred yards away from you, or that it's to the [disfmarker] your north. They wanna go there. That's what they say, is, "Where is it?". Where is that damn thing? [speaker005:] And the parser would output [disfmarker] [speaker002:] Well, that's a [disfmarker] a question mark. sh A lot of parsers, um, just, uh [disfmarker] That's way beyond their scope, is [disfmarker] of interpreting that. You know? But um, still outcome w the outcome will be some form of structure, with the town hall and maybe saying it's a WH focus on the town hall. But to interpret it, [speaker004:] Mm hmm. 
[speaker002:] you know? somebody else has to do that job later. [speaker005:] I'm just trying to figure out what the SmartKom system would output, [speaker003:] Yeah. [speaker005:] depending on these things. [speaker002:] Um, it will probably tell you how far away it is, at least that's [disfmarker] That's even what Deep Map does. It tells you how far away it is, and [disfmarker] and shows it to you on a map. Because i we can not differentiate, at the moment, between, you know, the intention of wanting to go there or the intention of just know wanting to know where [disfmarker] where it is. [speaker004:] People no might not be able to infer that either, right? Like the fact [disfmarker] Like, I could imagine if someone came up to me and asked, "Where's the city hall?", I might say, g ar "Are you trying to get there?" Because how I describe um, t its location [disfmarker] uh, p probably depend on whether I think I should give them, you know, directions now, or say, you know, whatever, "It's half a mile away" or something like that. [speaker002:] Mm hmm. It's a granularity factor, [speaker003:] Yeah. [speaker002:] because where people ask you, "Where is New York?", you will tell them it's on the East Coast. [speaker004:] Uh huh. Yeah. Exactly. Right. [speaker002:] Y y eh [disfmarker] you won't tell them how to get there, ft you know, take that bus to the airport and blah blah blah. [speaker004:] Right. Yeah. Right. [speaker002:] But if it's the post office, you will tell them how to get there. [speaker004:] Mm hmm. [speaker002:] So th They have done some interesting experiments on that in Hamburg as well. [speaker004:] Right. Right. [speaker002:] So. [speaker003:] But [disfmarker] i Go [disfmarker] go back to the [disfmarker] the uh, th Yeah, that slide. [speaker002:] So I w this is [disfmarker] "onto" is [disfmarker] is knowledge about buildings, their opening times, and then t coupled with time of day, um, this should [disfmarker] You know. 
[speaker004:] So that context was like, um, their presumed purpose context, i like business or travel, as well as the utterance context, like, "I'm now standing at this place at this time". [speaker003:] Yeah, well I think we ought to d a As we have all along, d We [disfmarker] we've been distu distinguishing between situational context, which is what you have as context, and discourse context, [speaker002:] Mm hmm. [speaker003:] which you have as DH, I don't know what the H means. [speaker002:] Nuh. History. Discourse history. [speaker003:] OK. Whatever. [speaker002:] Yeah. [speaker003:] So we can work out terminology later. [speaker002:] Yep. [speaker003:] So, they're [disfmarker] they're quite distinct. I mean, you need them both, but they're quite distinct. And, so what we were talking about doing, a a as a first shot, is not doing any of the linguistics. Except to find out what seems to be [pause] useful. So, the [disfmarker] the [disfmarker] the reason the belief net is in blue, is the notion would be [disfmarker] Uh, this may be a bad dis bad idea, but the idea is to take as a first goal, see if we could actually build a belief net that would make this three way distinction uh, in a plausible way, given these [disfmarker] We have all these transcripts and we're able to, by hand, extract the features to put in the belief net. Saying, "Aha! here're the things which, if you get them out of [disfmarker] out of the language and discourse, and put them into the belief net, it would tell you which of these three uh, intentions is most likely." And if [disfmarker] to actually do that, build it, um [disfmarker] you know, run it [disfmarker] y y run it on the data where you hand transcribe the parameters. And see how that goes. If that goes well, then we can start worrying about how we would extract them. So [disfmarker] where would you get this information? And, expand it to [disfmarker] to other things like this. But if we can't do that, then we're in trouble. 
I mean th th i i if you can't do this task, um [disfmarker] [speaker002:] We need a different, uh, engine. [speaker003:] Uh, uh, yeah, or something. [speaker002:] Machine, I mean. [speaker003:] Well it [disfmarker] i I if it [disfmarker] if it's the belief nets, we we'll switch to you know, logic or some terrible thing, but I don't think that's gonna be the case. I think that, uh, if we can get the information, a belief net is a perfectly good way of doing the inferential combination of it. The real issue is, do what are the factors involved in determining this? And I don't know. [speaker002:] Hmm. But, only w [speaker003:] Hold on a s Hold on a second. [speaker002:] Muh. [speaker003:] So, I know. Uh, uh, is it clear what's going on here? [speaker006:] Yep. [speaker004:] Um, I missed the beginning, but, um I guess [disfmarker] could you back to the slide, the previous one? So, is it that it's, um [disfmarker] These are all factors that uh, a These are the ones that you said that we are going to ignore now? or that we want to [vocalsound] take into account? You were saying n [speaker003:] Take them into account. [speaker004:] Take the [disfmarker] the linguistic factors too. [speaker003:] But [disfmarker] but you don't worry about [disfmarker] h [speaker004:] Oh, how to extract these features. [speaker003:] how to extract them. [speaker004:] OK. [speaker003:] So, f let's find out which ones we need first, [speaker004:] Got it. OK. And [disfmarker] and it's clear from the data, um, like, sorta the correct answer in each case. [speaker003:] and [disfmarker] No. [speaker004:] But l OK. [speaker002:] No. [speaker004:] That's [disfmarker] that's the thing I'm curious ab [speaker002:] But [disfmarker] [speaker003:] Let's go back to th Let's go back to the [disfmarker] the [disfmarker] the slide of data. [speaker004:] Like do we know from the data wh which [disfmarker] [speaker002:] Um [disfmarker] [speaker004:] OK. So [disfmarker] [speaker002:] Not from that data. 
But, um, since we are designing a [disfmarker] a [disfmarker] a [disfmarker] an, compared to this, even bigger data collection effort, [comment] um, we will definitely take care to put it in there, [speaker004:] Mm hmm. Mm hmm. Mm hmm. [speaker002:] in some shape, way, form over the other, [speaker003:] Yeah. [speaker002:] to see whether we can, then, get sort of empirically validated data. [speaker004:] Right. [speaker002:] Um, from this, we can sometimes, you know [disfmarker] an and that's [disfmarker] that [disfmarker] but that [disfmarker] isn't that what we need for a belief net anyhow? is sort of [disfmarker] s sometimes when people want to just see it, they phrase it more like this? [speaker004:] Mm hmm. [speaker002:] But it doesn't exclude anybody from phrasing it totally differently, even if they still [disfmarker] [speaker004:] Right. Right. [speaker002:] you know? But then other factors may come into play that change the outcome of their belief net. [speaker004:] Right. [speaker002:] So, um, this is exactly what [disfmarker] Because y you can never be sure. And I'm sure even i the most, sort of, deliberate data collection experiment will never give you data that say, "Well, if it's phrased like that, the intention is this." [speaker004:] Sure. [speaker002:] You know, because then, uh, you [disfmarker] [speaker004:] u u I mean, the only way you could get that is if you were to give th the x subjects a task. Right? Where you have [disfmarker] where your, uh, current goal is to [disfmarker] [speaker002:] We Yeah! That's what we're doing. [speaker004:] So that's what you want? [speaker002:] But [disfmarker] but we will still get the phrasing all over the place. [speaker004:] OK. So you will know. Mm hmm. [speaker002:] I'm sure that, you know [disfmarker] [speaker003:] Yeah. [speaker004:] The [disfmarker] No, that's fine. I guess, it's just knowing the intention from the experimental subject. [speaker003:] Yeah. [speaker002:] Mm hmm. 
[speaker003:] From that task, yeah. So, uh, I think you all know this, but we are going to actually use this little room and start recording subjects probably within a month or something. So, this is not any [disfmarker] lo any of you guys' worry, except that we may want to push that effort to get information we need. So our job [vocalsound] is to figure out how to solve these problems. If it turns out that we need data of a certain sort, then the sort of data collection branch can be, uh, asked to do that. And one of the reasons why we're recording the meeting for these guys is cuz we want their help when we d we start doing uh, recording of subjects. So, yeah [disfmarker] y you're absolutely right, though. No, you [disfmarker] you will not have, and there it is, and, uh [disfmarker] But you know, y y the, um [disfmarker] [speaker004:] And I think the other concern that has come up before, too, is if it's [disfmarker] um [disfmarker] I don't know if this was collected [disfmarker] what situation this data was collected in. Was it [disfmarker] is it the one that you showed in your talk? Like people [disfmarker] [speaker002:] No, no. [speaker004:] But [speaker002:] No. [speaker004:] OK. So was this, like, someone actually mobile, like [disfmarker] s using a device? [speaker002:] Uh, N no, no not [disfmarker] i it was mobile but not [disfmarker] not with a w a real wizard system. So there were never answers. [speaker004:] Uh huh. OK. OK. But, is it [disfmarker] I guess I don't know [disfmarker] The situation of [disfmarker] of collecting th the data of, like [disfmarker] Here you could imagine them being [disfmarker] walking around the city. as like one situation. And then you have all sorts of other c situational context factors that would influence w how to interpret, like you said, the scope and things like that. [speaker002:] Mm hmm. 
[speaker004:] If they're doing it in a [disfmarker] you know, "I'm sitting here with a map and asking questions", I [disfmarker] I would imagine that the data would be really different. Um, so it's just [disfmarker] [speaker002:] Yeah. But [disfmarker] It was never th th the goal of that data collection to [disfmarker] to serve for sat for such a purpose. So that's why for example the tasks were not differentiated by intentionality, [speaker004:] Mm hmm. [speaker002:] there was n there was no label, [speaker004:] Mm hmm. Right. [speaker002:] you know, intention A, intention B, intention C. Or task A, B, C. Um I'm sure we can produce some if we need it, [speaker004:] Mm hmm. [speaker002:] um, that [disfmarker] that will help us along those lines. But, you know, you gotta leave something for other people to model. So, to [disfmarker] Finding out what, you know, situational con what the contextual factors of the situation really are, you know is an interesting s interesting thing. [speaker004:] Mm hmm. Mm hmm. [speaker002:] u u Sort of I'm, at the moment, curious and I'm [disfmarker] I'm [disfmarker] s w want to approach it from the end where we can s sort of start with this toy system that we can play around with, [speaker004:] Mm hmm. [speaker002:] so that we get a clearer notion of what input we need for that, [speaker004:] Mm hmm. [speaker002:] what suffices and what doesn't. And then we can start worrying about where to get this input, what [disfmarker] what do we need, you know [disfmarker] Ultimately once we are all experts in changing that parser, for example, maybe, there's just a couple three things we need to do and then we get more whatever, part of speech and more construction type like stuff out of it. [speaker004:] Mm hmm. Hmm. [speaker002:] It's a m pragmatic approach, uh, at the moment. [speaker005:] How exactly does the data collection work? Do they have a map, and then you give them a scenario of some sort? [speaker002:] OK. 
Imagine you're the [disfmarker] the subject. You're gonna be in here, and somebody [disfmarker] And [disfmarker] and you see, uh, either th the three D model, or uh, a QuickTime animation of standing u in a square in Heidelberg. So you actually see that. Um. The uh, um, first thing is you have to read a text about Heidelberg. So, just off a textbook, uh, tourist guide, to familiarize, uh, yourself with that sort of odd sounding German street names, like Fischergasse and so forth. So that's part one. Part two is, you're told that this huge new, wonderful computer system exists, that can y tell you everything you want to know, and it understands you completely. And so you're gonna pick up that phone, dial a number, and you get a certain amount of tasks that you have to solve. First you have to know [disfmarker] find out how to get to that place, maybe with the intention of buying stamps in there. Maybe [disfmarker] So, the next task is to get to a certain place and take a picture for your grandchild. The third one is to get information on the history of an object. The fourth one [disfmarker] And then the g system breaks down. It crashes, [speaker004:] a At the third? [speaker002:] And [disfmarker] [speaker004:] Right then? [speaker002:] After the third task. [speaker004:] OK. [speaker002:] And then [disfmarker] Or after the fourth. Some find [disfmarker] [@ @] [comment] Forget that for now. And then, a human operator comes on, and [disfmarker] and exp apologizes that the system has crashed, but, you know, urges you to continue, you know? now with a human operator. And so, you have basically the same tasks again, just with different objects, and you go through it again, and that was it. Oh, and one [disfmarker] one little bit [disfmarker] w And uh, the computer you are [disfmarker] you are being told the computer system knows exactly where you are, via GPS. When the human operator comes on, um, that person does not know. So the GPS is crashed as well. 
So the person first has to ask you "Where are you?". And so you have to do some [disfmarker] s tell the person sort of where you are, depending on what you see there. Um, this is a [disfmarker] a [disfmarker] a [disfmarker] a [disfmarker] a bit that I d I don't think we [disfmarker] Did we discuss that bit? Uh, I just sort of squeezed that in now. But it's something, uh, that would provide some very interesting data for some people I know. So. [speaker004:] So, in the display you can [disfmarker] Oh, you said that you cou you might have a display that shows, like, the [disfmarker] [speaker002:] Yeah. a Additionally, y you have a [disfmarker] a [disfmarker] a sort of a map type display. [speaker004:] a w your perspective? sort of? And so, as you [disfmarker] [speaker002:] Uh, two D. n [speaker004:] Oh, two D. OK. [speaker002:] Two D. [speaker004:] So as you move through it that's they just track it on the [disfmarker] for themselves [speaker002:] Yeah. b y You don't [disfmarker] [speaker004:] there. [speaker002:] That's [disfmarker] I don't know. I but y I don't think you really move, [speaker004:] OK. [speaker002:] sort of. [speaker004:] So [speaker002:] Yeah? I mean that would be an [disfmarker] an [disfmarker] an enormous technical effort, unless we would [disfmarker] We can show it walks to, you know. We can have movies of walking, you walking through [disfmarker] through Heidelberg, and u ultimately arriving there. [speaker004:] Mm hmm. [speaker002:] Maybe we wanna do that. Yeah. [speaker004:] Uh, I was just trying to figure out how [disfmarker] how ambitious the system is. [speaker002:] The map was sort of intended to [disfmarker] You want to go to that place. [speaker004:] Mm hmm. [speaker002:] You know, and it's sort of there. And you see the label of the name [disfmarker] So we get those names, pronunciation stuff, and so forth, [speaker004:] Mm hmm. [speaker002:] and we can change that. [speaker004:] Mm hmm. 
So your tasks don't require you to [disfmarker] I mean, uh [disfmarker] yo you're told [disfmarker] So when your task is, I don't know, "Go buy stamps" or something like that? So, do you have to respond? or does your [disfmarker] Uh, what are you ste what are you supposed to be telling the system? Like, w what you're doing now? or [disfmarker] [speaker002:] Well, we'll see what people do. [speaker004:] There's no [disfmarker] OK, so it's just like, "Let's figure out what they would say under the circumstances". [speaker002:] Yeah, and [disfmarker] and we will record both sides. I mean, we will record the Wi the Wizard [disfmarker] [speaker004:] Uh huh. Uh huh. [speaker002:] I mean, in both cases it's gonna be a human, in the computer, and in the operator case. And we will re there will be some dialogue, you know? So, you first have to do this, and that, [speaker004:] Yep. [speaker002:] and [disfmarker] and [disfmarker] [speaker004:] Mm hmm. [speaker002:] see wh what they say. We can ins instruct the, uh, wizard in how expressive and talkative he should be. But um, maybe the [disfmarker] maybe what you're suggesting [disfmarker] Is what you're suggesting that it might be too poor, the data, if we sort of limit it to this ping pong one t uh, task results in a question and then there's an answer and that's the end of the task? You wanna m have it more [disfmarker] more steps, sort of? [speaker004:] Yeah, I [disfmarker] I don't know how much direction is given to the subject about what their interaction [disfmarker] I mean, th they're unfamiliar w with interacting with the system. All they know is it's this great system that could do s stuff. [speaker002:] Mm hmm. Mm hmm. [speaker003:] Oh yeah, but [disfmarker] to some extent this is a different discussion. [speaker004:] Right? So [disfmarker] [speaker003:] OK? So. Uh, we [disfmarker] we have to have this discussion of th the experiment, and the data collection, and all that sorta stuff [speaker004:] Uh huh. 
[speaker003:] and we do have, um, a student who is a candidate for wizard. Uh, she's gonna get in touch with me. It's a student of Eve's. FEY, Fey? Spelled FEY. Do you [disfmarker] do you [disfmarker] [speaker004:] Oh, Fey Parrill. [speaker003:] You know her? [speaker004:] Yeah. Uh huh. [speaker003:] OK. Sh Is sh [speaker004:] She started taking the class last year and then didn't [disfmarker] um, you know, didn't continue. I g She's a g Is she an undergradua [speaker003:] She's graduated. [speaker004:] She is a graduate, OK. [speaker003:] Yeah. [speaker004:] Yeah, I m I know her very, very briefly. I know she was inter you know, interested in aspect and stuff like that. [speaker003:] OK. So, anyway, she's looking for some more part time work w while she's waiting actually for graduate school. And she'll be in touch. So we may have someone, uh, to do this, and she's got you know, some background in [disfmarker] in all this stuff. And is a linguist st and, so So. [vocalsound] That's [disfmarker] So, Nancy, we'll have an At some point we'll have another discussion on exactly wha t t you know, how that's gonna go. [speaker004:] Mm hmm. [speaker003:] And um, Jane, but also, uh, Liz have offered to help us do this, uh, data collection and design and stuff. [speaker004:] Mm hmm. Mmm. [speaker003:] So, when we get to that we'll have some people doing it that know what they're doing. [speaker004:] OK. I guess the reason I was asking about the sort of the de the details of this kind of thing is that, um, it's one thing to collect data for, I don't know, speech recognition or various other tasks that have pretty c clear correct answers, but with intention [vocalsound] um, obviously, as you point out, there's a lot of di other factors and [disfmarker] I'm not really sure, um, how [disfmarker] how [disfmarker] e the question of how to make it a t appropriate toy version of that [disfmarker] Um, it's ju it's just hard. 
So, I mean, obviously it's a [disfmarker] [speaker005:] Yeah, uh, actually I guess that was my question. Is the intention implicit in the scenario that's given? Like, do the [disfmarker] [speaker004:] It is, if they have these tasks that they're supposed to [disfmarker] [speaker005:] Yeah, I just wasn't sure to what level of detail the task was. [speaker004:] to [disfmarker] to give [disfmarker] Yeah, uh [disfmarker] [speaker002:] Mm hmm. [speaker004:] Right. [speaker002:] n No one is, at the moment. [speaker004:] Right. [speaker005:] OK. [speaker003:] So, we that's part of what we'll have to figure out. [speaker004:] Right. Mm hmm. [speaker003:] But, uh, the [disfmarker] The problem that I was tr gonna try to focus on today was, let's suppose by magic you could collect dialogues in which, one way or the other, you were able to, uh, figure out both the intention, and set the context, and know what language was used. So let's suppose that we can get that kind of data. Um. The issue is, can we find a way to, basically, featurize it so that we get some discrete number of features so that, uh, when we know the values to all those features, or as many as possible, we can w come up with the best estimate of which of the, in this case three little intentions, are most likely. [speaker004:] w What are the t three intentions? Is it to go there, to see it, and [disfmarker] [speaker002:] To come as close as possible to it. [speaker003:] Th the terminology we're using is to [disfmarker] [speaker004:] Yeah, it's [@ @]. [speaker003:] Go back. To v [speaker004:] OK. [speaker003:] to View it. OK? To Enter it. Now those [disfmarker] It seems to me those are cl you c you have no trouble with those being distinct. "Take a picture of it" [speaker004:] Mm hmm. [speaker003:] you [disfmarker] you might well want to be a really rather different place than entering it. [speaker004:] Mm hmm. 
[speaker003:] And, for an object that's at all big, uh, sort of getting to the nearest part of it uh, could be quite different than either of those. [speaker004:] Mm hmm. Mm hmm. Mm hmm. [speaker003:] Just sort of [disfmarker] [speaker004:] OK, so now I understand the referent of Tango mode. [speaker005:] See, I would have thought it was more of a waltz. [speaker004:] I didn't get that before. [speaker002:] S To "Waltz" it? [speaker004:] Yeah, like, how close are you gonna be? [speaker003:] Well. [speaker004:] Like, [vocalsound] Tango's really close. [speaker005:] Yeah, cuz a tango [disfmarker] Yeah. [speaker003:] Well, anyway. [speaker006:] All these [speaker003:] So [disfmarker] [speaker006:] So, like, the question is how what features can [disfmarker] like, do you wanna try to extract from, say, the parse or whatever? [speaker003:] Right. [speaker006:] Like, the presence of a word or the presence of a certain uh, stem, or [disfmarker] certain construction or whatever. [speaker003:] Right. Is there a construction, or the kind of object, or w uh, anything else that's in the si It's either in the [disfmarker] in the s the discourse itself or in the context. So if it turns out that, whatever it is, you want to know whether the person's uh, a tourist or not, OK? that becomes a feature. Now, how you determine that is another issue. But fo for the current problem, it would just be, "OK, if you can be sure that it's a tourist, versus a businessman, versus a native," or something, uh, that would give you a lot of discriminatory power and then just have a little section in your belief net that said, "pppt!" Though sin f in the short run, you'd set them, [speaker006:] Mm hmm. [speaker003:] and see ho how it worked, and then in the longer run, you would figure out how you could derive them. From previous discourse or w any anything else you knew. [speaker006:] Right. So, how should [disfmarker] What's the uh, plan? 
Like, how should we go about figuring out these [disfmarker] [speaker003:] OK. So, first of all is, uh, do e either of you guys, you got a favorite belief net that you've, you know, played with? JavaBayes or something? [speaker006:] Oh. No, not really. [speaker003:] OK. Well, anyway. f Get one. OK? So [disfmarker] y so one of th one of the things we wanna do is actually, uh, pick a package, doesn't matter which one, uh, presumably one that's got good interactive abilities, cuz a lot of what we're gonna be d You know, we don't need the one that'll solve massive, uh, belief nets quickly. d w These are not gonna get big in [disfmarker] in the foreseeable future. But we do want one in which it's easy to interact with and, uh, modify. Because i that's [disfmarker] A lot of what it's gonna be, is, um, playing with this. And probably one in which it's easy to have, um, what amounts to transcript files. So that if [disfmarker] if we have all these cases [disfmarker] OK? So we make up cases that have these features, OK, and then you'd like to be able to say, "OK, here's a bunch of cases" [disfmarker] There're even ones tha that you can do learning OK? So you have all their cases and [disfmarker] and their results and you have a [disfmarker] algorithms to go through and run around trying to set the [disfmarker] the probabilities for you. Um, probably that's not worth it. I mean, my guess is we aren't gonna have enough data that's good enough to make the [disfmarker] these data fitting ones worth it, but I don't know. So I would say you guy the first task for you two guys is to um, pick a package. OK, and you wanna it s You know, the standard things you want it stable, you want it [disfmarker] yeah, [@ @]. And, as soon as we have one, we can start trying to, uh, make a first cut at what's going on. [speaker002:] An Nuh. [speaker003:] But it [disfmarker] what I like about it is it's very concrete. OK? 
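[Editor's note: the evidence combination being proposed here — discrete features feeding a belief net that outputs the most likely of the three intentions (Enter, View, Tango) — can be sketched in miniature. Every feature name and probability below is an invented placeholder standing in for the hand-set values the speakers plan to tune; this is a naive-Bayes-style toy, not the actual net.]

```python
# Toy sketch of combining discrete observed features into a posterior over
# the three intentions (Enter / View / Tango). All feature names and
# probabilities are invented placeholders to be hand-tuned or learned.

PRIORS = {"Enter": 0.5, "View": 0.3, "Tango": 0.2}

# P(feature is present | intention) for a few hypothetical binary features.
LIKELIHOODS = {
    "verb_is_enter":    {"Enter": 0.60, "View": 0.10, "Tango": 0.20},
    "mentions_picture": {"Enter": 0.05, "View": 0.70, "Tango": 0.10},
    "user_is_tourist":  {"Enter": 0.50, "View": 0.80, "Tango": 0.40},
}

def posterior(observed):
    """observed maps feature name -> True/False; unobserved features are skipped."""
    scores = dict(PRIORS)
    for feat, present in observed.items():
        for intent in scores:
            p = LIKELIHOODS[feat][intent]
            scores[intent] *= p if present else (1.0 - p)
    z = sum(scores.values())
    return {intent: s / z for intent, s in scores.items()}

print(posterior({"mentions_picture": True, "user_is_tourist": True}))
```

With those invented numbers, a picture-taking tourist comes out most likely to be in View mode, which is the kind of discriminatory behavior being asked for.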
We [disfmarker] we have a [disfmarker] we know what the outcomes are gonna be, and we have some [disfmarker] some data that's loose, we can use our own intuition, and see how hard it is, and, importantly, what intermediate nodes we think we need. So it [disfmarker] if it turns out that just, thinking about the problem, you come up with things you really need to [disfmarker] You know, this is the kind of thing that is, you know, an intermediate little piece in your belief net. That'd be really interesting. [speaker005:] Mm hmm. [speaker002:] And it [disfmarker] and it may serve as a platform for a person, maybe me, or whoever, who is interested in doing some linguistic analysis. I mean, w we have the For FrameNet group here, and we can see what they have found out about those concepts already, that are contained in the data, um, you know, to come up with a nice little set of features and um, maybe even means of s uh, extracting them. And [disfmarker] and that altogether could also be [disfmarker] uh, become a nice paper that's going to be published somewhere, if we sit down and write it. And um [disfmarker] When you said JavaBayes belief net you were talking about ones that run on coffee? or that are in the program language Java? [speaker003:] No, th It turns out that there is a, uh [disfmarker] The new end of Java libraries. OK, and it turns out one called Which is one that fair [disfmarker] people around here use a fair amount. [speaker002:] Mmm. OK. [speaker003:] I have no idea whether that's [disfmarker] The obvious advantage of that is that you can then, relatively easily, get all the other Java packages for GUIs or whatever else you might want to do. [speaker002:] Mm hmm. [speaker003:] So that i that's I think why a lot of people doing research use that. But it may not be [disfmarker] I have no idea whether that's the best choice an and there're plenty of people around, students in the department who, you know, live and breathe Bayes nets. 
So, uh, [speaker004:] There's the m tool kit that um, Kevin Murphy has developed, [speaker003:] Right. It's OK. [speaker004:] which might be useful too. [speaker006:] Right. [speaker003:] So, yeah, Kevin would be a good person to start with. [speaker004:] And it's available Matlab code. [speaker003:] Nancy knows him well. I don't know I don't know whether you guys have met Kevin yet or not, [speaker002:] Mm hmm. [speaker003:] but, uh [disfmarker] [speaker006:] Yeah, I know him. [speaker002:] But i [speaker003:] Yeah. [speaker002:] But since we all probably are pretty sure that, um, the [disfmarker] For example, this th th the dialogue history is [disfmarker] is um, producing XML documents. M three L of course is XML. And the ontology that um, uh the student is [disfmarker] is constructing for me back in [disfmarker] in EML is in OIL and that's also in XML. And so that's where a lot of knowledge about bakeries, about hotels, about castles and stuff is gonna come from. [speaker003:] Mm hmm. Yeah. [speaker002:] Um, so, if it has that IO capability and if it's a Java package, it will definitely be able [disfmarker] We can couple. [speaker003:] Yeah. So, yeah, we're sort of [nonvocalsound] committed to XML as the kind of, uh, interchange. But that's, you know, not a big deal. [speaker002:] Who isn't, nuh? [speaker003:] So, in terms of [disfmarker] of [disfmarker] interchanging in and out of any module we build, It'll be XML. And if you're going off to queries to the ontology, for example, you'll have to deal with its interface. But that's [disfmarker] that's fine an and um, all of these things have been built with much bigger projects than this in mind. So they [disfmarker] they have worked very hard. It's kind of blackboards and multi wave blackboards and ways of interchanging and registering your a And so forth. So, that I don't think is even worth us worrying about just yet. 
I mean if we can get the core of the thing to work, in a way that we're comfortable with, then we ca we can get in and out of it with, uh, XML, um, little descriptors. I believe. I don't [disfmarker] I don't see [disfmarker] [speaker002:] Hmm. Yeah. Yeah, I like, for example, the [disfmarker] what you said about the getting input from [disfmarker] from just files about where you h where you have the data, have specified the features and so forth. That's, of course, easy also to do with, you know, XML. [speaker003:] Uh, you could have an X [disfmarker] yeah, you could make and XML format for that. Sure. [speaker002:] So r [speaker003:] That [disfmarker] that [disfmarker] um, you know, feature value XML format is probably as good a way as any. So it's als Yeah, I guess it's also worth, um, while you're poking around, poke around for XML packages that um, do things you'd like. [speaker006:] Doesn't [disfmarker] does SmartKom system have such packages? [speaker002:] Yeah. [speaker003:] Sure. [speaker002:] The [disfmarker] the lib M three L library does that. [speaker003:] And the question is, d you c you [disfmarker] you'll have to l We'll have to l [speaker002:] It's also [disfmarker] [speaker003:] That should be [disfmarker] ay We should be able to look at that [disfmarker] [speaker002:] No, u u y um the [disfmarker] What I [disfmarker] What sort of came to my mind i is [disfmarker] was the notion of an idea that if [disfmarker] if there are l nets that can actually lear try to set their own, um, probability factors based on [disfmarker] on [disfmarker] on [disfmarker] on input [disfmarker] [speaker003:] Yeah. [speaker002:] which is in file format, if we, um, get really w wild on this, we may actually want to use some [disfmarker] some corpora that other people made and, for example, if [disfmarker] if they are in [disfmarker] in MATE, then we get X M L documents with discourse annotations, t [speaker006:] Mm hmm. 
[speaker002:] you know, t from the discourse act down to the phonetic level. Um, Michael has a project where [disfmarker] you know, recognizing discourse acts and he does it all in MATE, and so they're actually annotating data and data and data. So if we w if we think it's worth it one of these days, not [disfmarker] not with this first prototype but maybe with a second, and we have the possibility of [disfmarker] of taking input that's generated elsewhere and learn from that, that'd be nice. [speaker006:] Right. [speaker003:] It'd be nice, but [disfmarker] but I [disfmarker] I [disfmarker] I do I don't wanna count on it. I mean, you can't [disfmarker] you can't run your project based on the speculation that [disfmarker] that the data will come, [speaker002:] No, no, uh, just for [disfmarker] [speaker003:] and you don't have to actually design the nets. [speaker002:] Nuh. Just a back door that [disfmarker] I [disfmarker] I think we should devote m [speaker003:] Could happen. Yeah. So in terms of [disfmarker] of the, um [disfmarker] the [disfmarker] what the SmartKom gives us for M three L packages, it could be that they're fine, or it could be eeh. You don't [disfmarker] You know, you don't really like it. So we're not [disfmarker] we're not abs we're not required to use their packages. We are required at the end to give them stuff in their format, but hey. [speaker006:] Right. [speaker003:] Um, it's, uh [disfmarker] It doesn't control what you do in you know, internally. [speaker005:] What's the time frame for this? [speaker002:] Two days? [speaker003:] Huh? [speaker002:] Two, three days? [speaker003:] Yeah bu w I'd like that this [disfmarker] y yeah, this week, to ha to n to [vocalsound] have y guys, uh, you know, pick [vocalsound] the [disfmarker] y you know, belief net package [speaker002:] No. [speaker003:] and tell us what it is, and give us a pointer so we can play with it or something. [speaker006:] Sure. 
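[Editor's note: the "learning" option mentioned here — algorithms that set the net's probabilities from a file of labeled cases — amounts, for discrete nodes, to counting over the cases with smoothing. A minimal sketch, with an invented case format and toy data:]

```python
from collections import Counter

# Toy parameter estimation: each case pairs a feature dict with the true
# intention label. Laplace smoothing (alpha) avoids zero probabilities
# for feature/intention combinations unseen in the data.
cases = [
    ({"mentions_picture": True},  "View"),
    ({"mentions_picture": False}, "Enter"),
    ({"mentions_picture": True},  "View"),
]

def estimate(cases, feature, alpha=1.0):
    """Estimate P(feature is present | intention) by smoothed counting."""
    present = Counter()  # cases per intention where the feature is present
    total = Counter()    # cases per intention overall
    for feats, intent in cases:
        total[intent] += 1
        if feats.get(feature):
            present[intent] += 1
    return {i: (present[i] + alpha) / (total[i] + 2 * alpha) for i in total}

print(estimate(cases, "mentions_picture"))
```

As the speakers note, whether the collected data will be plentiful enough to make such fitting worthwhile is an open question; with only a handful of cases the estimates stay close to the smoothing prior.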
[speaker003:] And, then as soon as we have it, I think we should start trying to populate it for this problem. Make a first cut at, you know, what's going on, and probably the ea easiest way to do that is some on line way. I mean, you can f figure out whether you wanna make it a web site or [disfmarker] You know, how [speaker002:] Uh [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] um, OK, I [disfmarker] t Yeah. I was actually more joking. With the two or three days. [speaker003:] OK, I wasn't. [speaker002:] So this was [disfmarker] was a usual jo Um, it will take as long as y y yo you guys need for that. [speaker003:] Yeah. Right. [speaker002:] But um, maybe it might be interesting if [disfmarker] if the two of you can agree on who's gonna be the speaker next Monday, to tell us something about the net you picked, and what it does, and how it does that. [speaker003:] Well, y Well, or both of them speak. [speaker006:] Sure. [speaker003:] We don't care. [speaker002:] Yeah, or you can split it up. So, y [speaker006:] Hmm. [speaker002:] So that will be sort of the assignment for next week, is to [disfmarker] to [disfmarker] for slides and whatever net you picked and what it can do and [disfmarker] and how far you've gotten. Pppt! [speaker003:] Well, I'd like to also, though, uh, ha have a first cut at what the belief net looks like. Even if it's really crude. OK? So, you know, here a here are [disfmarker] [speaker005:] So we're supposed to [@ @] about features and whatnot, and [disfmarker] [speaker003:] Right. Yeah. [speaker006:] Mm hmm. [speaker003:] And, as I said, what I'd like to do is, I mean, what would be really great is you bring it in [disfmarker] If [disfmarker] if [disfmarker] if we could, uh, in the meeting, say, you know, "Here's the package, here's the current one we have," uh, you know, "What other ideas do you have?" and then we can think about this idea of making up the data file. 
Of, uh, you know, get a [disfmarker] t a p tentative format for it, let's say XML, that says, l you know, "These are the various scenarios we've experienced." We can just add to that and there'll be this [disfmarker] this file of them and when you think you've got a better belief net, You just run it against this, um [disfmarker] this data file. [speaker006:] So we'll be like, hand, uh, doing all the probabilities. OK. [speaker003:] Oh, yeah, unt until we know more. [speaker005:] And what's the relation to this with [disfmarker] Changing the table so that the system works in English? [speaker002:] OK. So this is [disfmarker] Whi while you were doing this, I received two lovely emails. The [disfmarker] the full NT and the full Linux version are there. I've downloaded them both, and I started to unpack the Linux one [disfmarker] Uh, the NT one worked fine. and I started unta pack the Linux one, it told me that I can't really unpack it because it contains a future date. So this is the time difference between Germany. I had to wait until one o'clock this afternoon before I was able to unpack it. Now, um [disfmarker] Then it will be my job to get this whole thing running both on Swede and on this machine. And so that we have it. And then um [disfmarker] Hopefully that [disfmarker] hoping that my urgent message will now come through to Ralph and Tilman that it will send some more documentation along, we [disfmarker] I control p Maybe that's what I will do next Monday is show the state and show the system and show that. [speaker003:] Yeah. Yeah. So the answer, Johno, is that these are, at the moment, separate. Uh, what one hopes is that when we understand how the analyzer works, we can both worry about converting it to English and worry about how it could ex extract the parameters we need for the belief net. [speaker005:] I guess my question was more about time frame. So we're gonna do belief nets this week, and then [disfmarker] [speaker003:] Oh, yeah. I don't know.
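[Editor's note: the tentative XML feature-value case file proposed here could look something like the following. The tag and attribute names are invented for illustration — they are not part of SmartKom's M3L or any agreed schema — and the parsing uses only the Python standard library.]

```python
import xml.etree.ElementTree as ET

# Hypothetical XML case-file format; tag and attribute names are invented.
DOC = """
<cases>
  <case intention="View">
    <feature name="mentions_picture" value="true"/>
    <feature name="user_is_tourist" value="true"/>
  </case>
  <case intention="Enter">
    <feature name="verb_is_enter" value="true"/>
  </case>
</cases>
"""

def load_cases(xml_text):
    """Parse the case file into (feature dict, intention label) pairs."""
    cases = []
    for case in ET.fromstring(xml_text).findall("case"):
        feats = {f.get("name"): f.get("value") == "true"
                 for f in case.findall("feature")}
        cases.append((feats, case.get("intention")))
    return cases

print(load_cases(DOC))
```

A format like this would let scenarios accumulate in one file, so each new candidate belief net can simply be rerun against the whole collection, as suggested above.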
n None of this is i n Neither of these projects has got a real tight time line, in the sense that over the next month there's a [disfmarker] there's a deliverable. [speaker005:] OK. [speaker003:] OK. S so uh, it's opportu in that sense it's opportunistic. If [disfmarker] if [disfmarker] you know, if we don't get any information for these guys f for several weeks then we aren't gonna sit around, you know, wasting time, trying to do the problem or guess what they [disfmarker] You know, just pppt! go on and do other things. [speaker005:] OK. [speaker002:] Yeah, but uh [disfmarker] but the uh [disfmarker] This point is really [disfmarker] I think very, very valid that ultimately we hope that [disfmarker] that both will merge into a harmonious and, um, wonderful, um, state where we can not only do the bare necessities, IE, changing the table so it does exactly in English what it does in German, but also that we can sort of have the system where we can say, "OK, this is what it usually does, and now we add this little thing to it", you know? whatever, Johno's and Bhaskara's great belief net, and we plug it in, and then for these certain tasks, and we know that navigational tasks are gonna be a core domain of the new system, it all [disfmarker] all of a sudden it does much better. Nuh? Because it can produce better answers, tell the person, as I s showed you on this map, n you know, produce either you know, a red line that goes to the Vista point or a red line that goes to the Tango point or red line that goes to the door, which would be great. So not only can you show that you know something sensible but ultimately, if you produce a system like this, it takes the person where it wants to go. Rather than taking him always to the geometric center of a building, [speaker006:] Mmm. [speaker002:] which is what they do now. And we even had to take out a bit. Nancy, you missed that part. We had to take out a bit of the road work. 
So that it doesn't take you to the wall [vocalsound] every time. [speaker004:] Oh, really? [speaker002:] So. Um [disfmarker] So this was actually an actual problem that we encountered, which nobody have [disfmarker] has [disfmarker] because car navigation systems don't really care. You know, they get you to the beginning of the street, some now do the house number. [speaker004:] Hmm. Mm hmm. [speaker002:] But even that is problematic. If you go d If you wanna drive to the SAP in Waldorf, I'm sure the same is true of Microsoft, it takes you to the [disfmarker] the address, whatever, street number blah blah blah, you are miles away from the entrance. [speaker003:] Yep. [speaker002:] Because the s postal address is maybe a mailbox somewhere. [speaker004:] Mm hmm. [speaker002:] Nuh? but the entrance where you actually wanna go is somewhere completely different. So unless you're a mail person you really don't wanna go there. [speaker004:] Right, yeah. [speaker003:] Probably not then, cuz y you probably can't drop the mail there anyway. [speaker002:] Probably neither [disfmarker] e not even that. [speaker003:] Yeah. Clear? [speaker006:] OK. Sounds good. [speaker005:] The Powder Tower is made of red limestone. [speaker004:] I was wondering. OK. [speaker002:] Do you wanna see a picture? [speaker004:] Sure! [speaker005:] Sure! [speaker002:] Have to reboot for that though. [speaker004:] Um. So, you two, who'll be working on this, li are [disfmarker] are you gl will you be doing [disfmarker] Well, I mean are you supposed to just do it by thinking about the situation? Can you use the sample data? Is it like [disfmarker] [speaker003:] Of course they use the sample data. [speaker004:] Yeah, I mean, ho is there more than [disfmarker] Is there a lot s of sample data that is beyond what you [disfmarker] what you have there? 
[speaker002:] There [disfmarker] there's more than I showed, but um, um, I think this is sort of um, in part my job to look at that and [disfmarker] and to see whether there are features in there that can be extracted, [speaker004:] Yeah. [speaker002:] and to come up with some features that are not you know, empirically based on [disfmarker] on a real experiment or on [disfmarker] on [disfmarker] on reality [speaker004:] Right. Mm hmm. [speaker002:] but sort of on your intuition of you know, "Aha! [speaker006:] Mm hmm. [speaker004:] Mm hmm. [speaker002:] This is maybe a sign for that, and this is maybe a sign for this." [speaker004:] Mm hmm. [speaker005:] Mm hmm. [speaker006:] So, yeah. Later this week we should sort of get together, and sort of start thinking about that, hopefully. [speaker002:] Talk features. Yep. [speaker003:] OK. We can end the meeting and call Adam, and then we wanna s look at some filthy pictures of Heidelberg. We can do that as well. Uh, is that OK? [speaker002:] Well they had [disfmarker] they used the ammunition [disfmarker] They stored the ammunition in that tower. [speaker003:] Alright. [speaker002:] And that's why, when it was hit by uh, a cannon ball, it exploded. [speaker005:] It exploded. [speaker003:] Oh. Ni [speaker005:] That's why they call it the Powder Tower. [speaker002:] Ahh. [speaker005:] OK. I first thought it had something to do with the material that it [disfmarker] w that's why I asked. [speaker004:] That's right, OK. [speaker002:] Mmm. [speaker001:] Alright. We're on. [speaker002:] Test, um. Test, test, test. Guess that's me. Yeah. OK. [speaker004:] Ooh, Thursday. [speaker002:] So. There's two sheets of paper in front of us. [speaker005:] Yeah. So. [speaker001:] What are these? [speaker002:] This is the arm wrestling? [speaker003:] Uh. Yeah, we formed a coalition actually. [speaker005:] Yeah. Almost. [speaker003:] We already made it into one. [speaker002:] Oh, good. [speaker003:] Yeah. [speaker005:] Yeah.
[speaker002:] Excellent. That's the best thing. [speaker005:] Mm hmm. [speaker002:] So, tell me about it. [speaker005:] So it's [disfmarker] well, it's [pause] spectral subtraction or Wiener filtering, um, depending on if we put [disfmarker] if we square the transfer function or not. [speaker002:] Right. [speaker005:] And then with over estimation of the noise, depending on the, uh [disfmarker] the SNR, with smoothing along time, um, smoothing along frequency. [speaker002:] Mm hmm. [speaker005:] It's very simple, smoothing things. [speaker002:] Mm hmm. [speaker005:] And, um, [vocalsound] the best result is [vocalsound] when we apply this procedure on FFT bins, uh, with a Wiener filter. [speaker002:] Mm hmm. [speaker005:] And there is no noise addition after [disfmarker] after that. [speaker002:] OK. [speaker005:] So it's good because [vocalsound] [vocalsound] it's difficult when we have to add noise to [disfmarker] to [disfmarker] to find the right level. [speaker002:] OK. [speaker001:] Are you looking at one in [disfmarker] in particular of these two? [speaker005:] Yeah. So the sh it's the sheet that gives fifty f three point sixty six. [speaker002:] Mm hmm. [speaker005:] Um, [vocalsound] the second sheet is abo uh, about the same. It's the same, um, idea but it's working on mel bands, [vocalsound] and it's a spectral subtraction instead of Wiener filter, and there is also a noise addition after, uh, cleaning up the mel bins. Mmm. Well, the results are similar. [speaker002:] Yeah. I mean, [vocalsound] it's [disfmarker] [comment] it's actually, uh, very similar. [speaker005:] Mm hmm. [speaker002:] I mean, [vocalsound] if you look at databases, uh, the, uh, one that has the smallest [disfmarker] smaller overall number is actually better on the Finnish and Spanish, uh, but it is, uh, worse on the, uh, Aurora [disfmarker] [speaker005:] It's worse on [disfmarker] [speaker002:] I mean on the, uh, TI TI digits, [speaker005:] on the multi condition in TI digits. Yeah. 
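[Transcriber's note: the cleanup step being described here — spectral subtraction or Wiener filtering depending on whether the transfer function is squared, with SNR-dependent over-estimation of the noise and simple smoothing — can be sketched roughly as follows. This is a minimal illustration, not the actual Aurora front-end code; the function name, the over-estimation schedule, and the smoothing width are all assumptions.]

```python
import numpy as np

def clean_power_spectrum(X_pow, N_pow, wiener=True, floor=0.01):
    """Attenuate an estimated noise spectrum from a noisy power spectrum.

    X_pow  : noisy power spectrum, one frame (FFT bins)
    N_pow  : estimated noise power spectrum
    wiener : True -> square the transfer function (Wiener filtering);
             False -> apply it once (spectral subtraction)
    """
    snr = np.maximum(X_pow / np.maximum(N_pow, 1e-10) - 1.0, 0.0)
    # Over-estimate the noise more aggressively at low SNR (assumed schedule).
    alpha = np.where(snr < 1.0, 2.0, 1.0)
    gain = np.maximum(1.0 - alpha * N_pow / np.maximum(X_pow, 1e-10), floor)
    if wiener:
        gain = gain ** 2  # the "square the transfer function" variant
    # Very simple smoothing along frequency (3-bin average, edge-normalized).
    kernel = np.ones(3) / 3.0
    gain = (np.convolve(gain, kernel, mode="same")
            / np.convolve(np.ones_like(gain), kernel, mode="same"))
    return gain * X_pow
```

[Running this on FFT bins with the Wiener setting, and skipping any noise re-addition afterwards, corresponds to the better of the two variants being compared in the meeting.]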
[speaker002:] uh, uh. Um. [speaker005:] Mmm. [speaker002:] So, it probably doesn't matter that much either way. But, um, when you say u uh, unified do you mean, uh, it's one piece of software now, or [disfmarker]? [speaker005:] So now we are, yeah, setting up the software. [speaker002:] Mm hmm. [speaker005:] Um, it should be ready, uh, very soon. Um, and we [speaker001:] So what's [disfmarker] what's happened? I think I've missed something. [speaker002:] OK. So a week ago [disfmarker] maybe you weren't around when [disfmarker] when [disfmarker] when Hynek and Guenther and I [disfmarker]? [speaker003:] Hynek was here. [speaker001:] Yeah. I didn't. [speaker002:] Oh, OK. So [disfmarker] Yeah, let's summarize. Um [disfmarker] And then if I summarize somebody can tell me if I'm wrong, which will also be possibly helpful. What did I just press here? I hope this is still working. [speaker005:] p p p [speaker002:] We, uh [disfmarker] we looked at, [nonvocalsound] uh [disfmarker] anyway we [disfmarker] [vocalsound] after coming back from QualComm we had, you know, very strong feedback and, uh, I think it was [vocalsound] Hynek and Guenter's and my opinion also that, um, you know, we sort of spread out to look at a number of different ways of doing noise suppression. But given the limited time, uh, it was sort of time to [pause] choose one. [speaker001:] Mm hmm. Mmm. [speaker002:] Uh, and so, uh, th the vector Taylor series hadn't really worked out that much. Uh, the subspace stuff, uh, had not been worked with so much. Um, so it sort of came down to spectral subtraction versus Wiener filtering. [speaker001:] Hmm. [speaker002:] Uh, we had a long discussion about how they were the same and how they were d uh, completely different. [speaker001:] Mm hmm. 
[speaker002:] And, uh, I mean, fundamentally they're the same sort of thing but the math is a little different so that there's a [disfmarker] a [disfmarker] [vocalsound] there's an exponent difference in the index [disfmarker] you know, what's the ideal filtering, and depending on how you construct the problem. [speaker001:] Uh huh. [speaker002:] And, uh, I guess it's sort [disfmarker] you know, after [disfmarker] after that meeting it sort of made more sense to me because [vocalsound] um, if you're dealing with power spectra then how are you gonna choose your error? And typically you'll do [disfmarker] choose something like a variance. And so that means it'll be something like the square of the power spectra. [speaker003:] Mm hmm. [speaker002:] Whereas when you're [disfmarker] when you're doing the [disfmarker] the, uh, um, [vocalsound] looking at it the other way, you're gonna be dealing with signals and you're gonna end up looking at power [disfmarker] uh, noise power that you're trying to reduce. And so, eh [disfmarker] so there should be a difference [vocalsound] of [disfmarker] you know, conceptually of [disfmarker] of, uh, a factor of two in the exponent. [speaker001:] Mm hmm. [speaker002:] But there're so many different little factors that you adjust in terms of [disfmarker] of, uh, [vocalsound] uh, over subtraction and [disfmarker] and [disfmarker] and [disfmarker] and [disfmarker] and so forth, um, that [vocalsound] arguably, you're c and [disfmarker] and [disfmarker] and the choice of do you [disfmarker] do you operate on the mel bands or do you operate on the FFT beforehand. There're so many other choices to make that are [disfmarker] are almost [disfmarker] well, if not independent, certainly in addition to [pause] the choice of whether you, uh, do spectral subtraction or Wiener filtering, that, um, [vocalsound] [@ @] again we sort of felt the gang should just sort of figure out which it is they wanna do and then let's pick it, go forward with it. 
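[Transcriber's note: the "exponent difference" under discussion can be written out. With clean-speech and noise power spectra $S(\omega)$ and $N(\omega)$ and a priori SNR $\xi = S/N$, the two gains, in a standard textbook form rather than the exact parameterization used by the group, are:]

```latex
% Wiener gain (minimizes mean-square error on the signal):
H_W(\omega) = \frac{S(\omega)}{S(\omega) + N(\omega)} = \frac{\xi(\omega)}{1 + \xi(\omega)}
% applied to the amplitude, so the power spectrum is scaled by H_W^2.

% Power spectral subtraction, with over-subtraction \alpha and floor \beta:
|\hat{S}(\omega)|^2 = \max\!\bigl(|X(\omega)|^2 - \alpha\,|\hat{N}(\omega)|^2,\;
                                  \beta\,|X(\omega)|^2\bigr)
% i.e. a power-domain gain of roughly (1 - \alpha N/|X|^2):
% one power of essentially the same ratio where Wiener uses two.
```

[So the two methods differ by a factor of two in the exponent, with the over-subtraction factor, the floor, and the FFT-bin versus mel-band choice all tuned on top, which is the point being made here.]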
So that's [disfmarker] that was [disfmarker] that was last week. And [disfmarker] [vocalsound] and, uh, we said, uh, take a week, go arm wrestle, you know, [speaker004:] Oh. [speaker002:] figure it out. I mean, and th the joke there was that each of them had specialized in one of them. And [disfmarker] and so they [disfmarker] [speaker001:] Oh, OK. [speaker002:] so instead they went to Yosemite and bonded, and [disfmarker] and they came out with a single [disfmarker] single piece of software. So it's [vocalsound] another [disfmarker] another victory for international collaboration. So. Uh. [speaker001:] So [disfmarker] so you guys have combined [disfmarker] or you're going to be combining the software? [speaker005:] Oh boy. [speaker003:] Well, the piece of software has, like, plenty of options, like you can parse command line arguments. So depending on that, it [disfmarker] it becomes either spectral subtraction or Wiener filtering. [speaker001:] Oh, OK. [speaker003:] So, ye [speaker002:] Well, that's fine, [speaker001:] They're close enough. [speaker002:] but the thing is [disfmarker] the important thing is that there is a piece of software that you [disfmarker] that we all will be using now. [speaker003:] Yeah. Yeah. [speaker002:] Yes. [speaker005:] Yeah. [speaker003:] There's just one piece of software. [speaker002:] Yeah. [speaker005:] I need to allow it to do everything and even more [disfmarker] more than this. Well, if we want to, like, optimize different parameters of [disfmarker] [speaker003:] Right. Parameters. Yeah. [speaker002:] Sure. [speaker005:] Yeah, we can do it later. But, still [disfmarker] so, there will be a piece of software with, [vocalsound] [vocalsound] uh, will give this system, the fifty three point sixty six, by default and [disfmarker] [speaker002:] Mm hmm. [speaker001:] How [disfmarker] how is [disfmarker] how good is that? [speaker005:] Mm hmm. 
[speaker001:] I [disfmarker] I [disfmarker] I don't have a sense of [disfmarker] [speaker005:] It's just one percent off of the [pause] best proposal. [speaker003:] Best system. [speaker005:] It's between [disfmarker] i we are second actually if we take this system. Right? [speaker003:] Yeah. [speaker002:] Yeah. [speaker001:] OK. Compared to the last evaluation numbers? Yeah. [speaker002:] But, uh [disfmarker] [speaker005:] Mm hmm. Yeah. [speaker003:] Yeah. [speaker002:] w which we sort of were before but we were considerably far behind. And the thing is, this doesn't have neural net in yet for instance. [speaker005:] Mm hmm. [speaker002:] You know? [speaker001:] Hmm. [speaker002:] So it [disfmarker] so, um, it's [disfmarker] it it's not using our full bal bag of tricks, if you will. [speaker001:] Mm hmm. [speaker002:] And, uh, and it [disfmarker] it is, uh, very close in performance to the best thing that was there before. Uh, but, you know, looking at it another way, maybe more importantly, uh, [vocalsound] we didn't have any explicit noise, uh, handling [disfmarker] stationary [disfmarker] dealing with [disfmarker] e e we didn't explicitly have anything to deal with stationary noise. [speaker001:] Mm hmm. [speaker002:] And now we do. [speaker001:] So will the [pause] neural net operate on the output from either the Wiener filtering or the spectral subtraction? [speaker002:] Well, so [disfmarker] so [disfmarker] so argu arguably, I mean, what we should do [disfmarker] [speaker001:] Or will it operate on the original? [speaker002:] I mean, I gather you have [disfmarker] it sounds like you have a few more days of [disfmarker] of nailing things down with the software and so on. But [disfmarker] and then [disfmarker] but, um, [vocalsound] arguably what we should do is, even though the software can do many things, we should for now pick a set of things, th these things I would guess, [speaker005:] Mm hmm. [speaker002:] and not change that. 
And then focus on [pause] everything that's left. And I think, you know, that our goal should be by next week, when Hynek comes back, [vocalsound] uh, to [disfmarker] uh, really just to have a firm path, uh, for the [disfmarker] you know, for the time he's gone, of [disfmarker] of, uh, what things will be attacked. But I would [disfmarker] I would [disfmarker] I would thought think that what we would wanna do is not futz with this stuff for a while because what'll happen is we'll change many other things in the system, [speaker001:] Mm hmm. [speaker002:] and then we'll probably wanna come back to this and possibly make some other choices. But, um. [speaker001:] But just conceptually, where does the neural net go? Do [disfmarker] do you wanna h run it on the output of the spectrally subtracted [disfmarker]? [speaker005:] Mmm. [speaker002:] Well, depending on its size [disfmarker] Well, one question is, is it on the, um, server side or is it on the terminal side? Uh, if it's on the server side, it [disfmarker] you probably don't have to worry too much about size. [speaker001:] Mm hmm. [speaker002:] So that's kind of an argument for that. We do still, however, have to consider its latency. So the issue is [disfmarker] is, um, [vocalsound] for instance, could we have a neural net that only looked at the past? [speaker001:] Right. [speaker002:] Um, what we've done in uh [disfmarker] in the past is to use the neural net, uh, to transform, [vocalsound] um, all of the features that we use. So this is done early on. This is essentially, [vocalsound] um, um [disfmarker] I guess it's [disfmarker] it's more or less like a spee a speech enhancement technique here [disfmarker] right? [disfmarker] [speaker001:] Mm hmm. [speaker002:] where we're just kind of creating [vocalsound] new [disfmarker] if not new speech at least new [disfmarker] new FFT's that [disfmarker] that have [disfmarker] you know, which could be turned into speech [disfmarker] [speaker005:] Mm hmm. 
[speaker002:] uh, that [disfmarker] that have some of the noise removed. [speaker001:] Mm hmm. [speaker002:] Um, after that we still do a mess of other things to [disfmarker] to produce a bunch of features. [speaker001:] Right. [speaker002:] And then those features are not now currently transformed [vocalsound] by the neural net. And then the [disfmarker] the way that we had it in our proposal two before, we had the neural net transformed features and we had [vocalsound] the untransformed features, which I guess you [disfmarker] you actually did linearly transform with the KLT, [speaker005:] Yeah. Yeah. Right. [speaker002:] but [disfmarker] but [disfmarker] but [disfmarker] uh, to orthogonalize them [disfmarker] but [disfmarker] [vocalsound] but they were not, uh, processed through a neural net. And Stephane's idea with that, as I recall, was that [vocalsound] you'd have one part of the feature vector that was very discriminant and another part that wasn't, [speaker001:] Mm hmm. [speaker002:] uh, which would smooth things a bit for those occasions when, uh, the testing set was quite different than what you'd trained your discriminant features for. So, um, all of that is [disfmarker] is, uh [disfmarker] still seems like a good idea. The thing is now we know some other constraints. We can't have unlimited amounts of latency. Uh, y you know, that's still being debated by the [disfmarker] by people in Europe but, [vocalsound] uh, no matter how they end up there, it's not going to be unlimited amounts, so we have to be a little conscious of that. [speaker001:] Yeah. [speaker002:] Um. So there's the neural net issue. There's the VAD issue. And, uh, there's the second stream [pause] thing. And I think those that we [disfmarker] last time we agreed that those are the three things that have to get, uh, focused on. [speaker001:] What was the issue with the VAD? [speaker002:] Well, better [comment] ones are good. 
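[Transcriber's note: the two-stream arrangement recalled here — a discriminant, net-transformed stream alongside an untransformed stream that is only linearly orthogonalized with the KLT — can be sketched as below. This is a toy illustration; `mlp_transform` stands in for the trained neural net, which is an assumption, and the real system has further processing around both streams.]

```python
import numpy as np

def two_stream_features(raw_feats, mlp_transform):
    """Concatenate a discriminant stream with a KLT-orthogonalized raw stream.

    raw_feats     : (T, D) frame features
    mlp_transform : callable mapping (T, D) -> (T, K) discriminant features
                    (a stand-in for the trained net)
    """
    # KLT / PCA: project the raw features onto their own eigenbasis,
    # which decorrelates them without making them discriminant.
    centered = raw_feats - raw_feats.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    _, eigvecs = np.linalg.eigh(cov)
    klt_stream = centered @ eigvecs[:, ::-1]  # highest-variance components first
    return np.hstack([mlp_transform(raw_feats), klt_stream])
```

[The idea, as summarized in the meeting, is that the non-discriminant half smooths things out when the test data differ from what the net was trained on.]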
[speaker001:] And so the w the default, uh, boundaries that they provide are [disfmarker] they're OK, but they're not all that great? [speaker002:] I guess they still allow two hundred milliseconds on either side or some? [speaker005:] Mm hmm. [speaker002:] Is that what the deal is? [speaker005:] Uh, so th um, they keep two hundred milliseconds at the beginning and end of speech. And they keep all the [disfmarker] [speaker001:] Outside the beginnings and end. [speaker005:] Yeah. And all the speech pauses, [speaker001:] Uh huh. [speaker005:] which is [disfmarker] Sometimes on the SpeechDat Car you have pauses that are more than one or two seconds. [speaker001:] Wow. [speaker005:] More than one second for sure. Um. [speaker001:] Hmm. [speaker005:] Yeah. And, yeah, it seems to us that this way of just dropping the beginning and end is not [disfmarker] We cou we can do better, I think, [speaker001:] Mm hmm. [speaker005:] because, um, [vocalsound] with this way of dropping the frames they improve [pause] over the baseline by fourteen percent and [vocalsound] Sunil already showed that with our current VAD we can improve by more than twenty percent. [speaker001:] On top of the VAD that they provide? [speaker003:] No. [speaker005:] Just using either their VAD or our current VAD. [speaker003:] Our way. [speaker001:] Oh, OK. [speaker005:] So, our current VAD is [disfmarker] is more than twenty percent, while their is fourteen. [speaker001:] Theirs is fourteen? [speaker005:] Yeah. [speaker001:] I see. Huh. [speaker005:] So. Yeah. And [pause] another thing that we did also is that we have all this training data for [disfmarker] let's say, for SpeechDat Car. We have channel zero which is clean, channel one which is far field microphone. And if we just take only the, um, VAD probabilities computed on the clean signal and apply them on the far field, uh, test utterances, [vocalsound] then results are much better. [speaker001:] Mm hmm. 
[speaker005:] In some cases it divides the error rate by two. [speaker001:] Wow. [speaker005:] So it means that there are stim [comment] still [disfmarker] If [disfmarker] if we can have a good VAD, well, it would be great. [speaker001:] How [disfmarker] how much latency does the, uh [disfmarker] does our VAD add? Is it significant, [speaker005:] Uh, right now it's, um, a neural net with nine frames. [speaker001:] or [disfmarker]? [speaker005:] So it's forty milliseconds plus, um, the rank ordering, which, uh, should be [speaker003:] Like another ten frames. [speaker005:] ten [disfmarker] Yeah. So, right now it's one hundred and forty [pause] milliseconds. [speaker004:] Rank. Oh. [speaker002:] With the rank ordering [disfmarker]? I'm sorry. [speaker003:] The [disfmarker] the [disfmarker] the smoothing [disfmarker] the m the [disfmarker] the filtering of the probabilities. [speaker005:] The [disfmarker] The, um [disfmarker] [speaker003:] on the R. [speaker005:] Yeah. It's not a median filtering. It's just [disfmarker] We don't take the median value. We take something [disfmarker] Um, so we have eleven, um, frames. [speaker002:] Oh, this is for the VAD. [speaker005:] And [disfmarker] [speaker003:] Yeah. [speaker005:] for the VAD, yeah [disfmarker] and we take th the third. [speaker003:] Yeah. [speaker002:] Oh, OK. [speaker003:] Yeah. [speaker004:] Dar [speaker005:] Um. [speaker002:] Yeah. Um. [speaker005:] Mmm. [speaker002:] So [disfmarker] [comment] Yeah, I was just noticing on this that it makes reference to delay. So what's the [disfmarker]? If you ignore [disfmarker] Um, the VAD is sort of in [disfmarker] in parallel, isn't i isn't it, with [disfmarker] with the [disfmarker]? I mean, it isn't additive with the [disfmarker] the, uh, LDA and the Wiener filtering, and so forth. [speaker003:] The LDA? Yeah. So [disfmarker] so what happened right now, we removed the delay of the LDA. [speaker002:] Right? [speaker005:] Mm hmm. [speaker002:] Yeah. 
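[Transcriber's note: the smoothing being described — an eleven-frame window where "we don't take the median value, we take the third" — is a rank-order filter. A minimal sketch follows; the window length and rank come from the discussion, everything else is assumed, and it is written causally here for simplicity, whereas the real filter looks ahead, which is where part of the extra delay comes from.]

```python
import numpy as np

def rank_order_smooth(probs, win=11, rank=2):
    """Smooth per-frame speech probabilities with a rank-order filter.

    For each frame, look at a window of `win` frames and take the
    (rank+1)-th largest value (rank=2 -> third largest) rather than the
    median.  Whether "the third" means third-largest or some other fixed
    position in the sorted window is an assumption here.
    """
    probs = np.asarray(probs, dtype=float)
    out = np.empty_like(probs)
    for i in range(len(probs)):
        window = probs[max(0, i - win + 1): i + 1]  # past `win` frames
        out[i] = np.sort(window)[::-1][min(rank, len(window) - 1)]
    return out
```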
[speaker003:] So we [disfmarker] I mean, if [disfmarker] so if we [disfmarker] if [disfmarker] so which is like if we reduce the delay of VA So, the f the final delay's now ba is f determined by the delay of the VAD, because the LDA doesn't have any delay. So if we re if we reduce the delay of the VAD, I mean, it's like effectively reducing the delay. [speaker001:] How [disfmarker] how much, uh, delay was there on the LDA? [speaker003:] So the LDA and the VAD both had a hundred millisecond delay. So and they were in parallel, so which means you pick either one of them [disfmarker] [speaker001:] Mmm. [speaker003:] the [disfmarker] the biggest, whatever. [speaker002:] Mm hmm. [speaker001:] I see. [speaker003:] So, right now the LDA delays are more. [speaker002:] And there [disfmarker] [speaker001:] Oh, OK. [speaker002:] And there didn't seem to be any, uh, penalty for that? [speaker003:] Pardon? [speaker002:] There didn't seem to be any penalty for making it causal? [speaker003:] Oh, no. It actually made it, like, point one percent better or something, actually. [speaker002:] OK. [speaker003:] Or something like that [speaker002:] Well, may as well, then. [speaker003:] and [disfmarker] [speaker002:] And he says Wiener filter is [disfmarker] is forty milliseconds delay. [speaker003:] Yeah. So that's the one which Stephane was discussing, like [disfmarker] [speaker005:] Mmm. [speaker002:] So is it [disfmarker]? The smoothing? [speaker003:] Yeah. The [disfmarker] you smooth it and then delay the decision by [disfmarker] So. [speaker002:] Right. OK. So that's [disfmarker] that's really not [disfmarker] not bad. So we may in fact [disfmarker] we'll see what they decide. We may in fact have, [vocalsound] um, the [disfmarker] the, uh, latency time available for [disfmarker] to have a neural net. I mean, sounds like we probably will. [speaker003:] Mm hmm. [speaker002:] So. That'd be good. Cuz I [disfmarker] cuz it certainly always helped us before. So. Uh. 
[speaker001:] What amount of latency are you thinking about when you say that? [speaker002:] Well, they're [disfmarker] you know, they're disputing it. You know, they're saying, uh [disfmarker] one group is saying a hundred and thirty milliseconds and another group is saying two hundred and fifty milliseconds. [speaker001:] Mmm. [speaker002:] Two hundred and fifty is what it was before actually. So, [speaker001:] Oh. [speaker002:] uh, some people are lobbying [disfmarker] lobbying [comment] to make it shorter. [speaker001:] Hmm. [speaker002:] Um. And, um. [speaker001:] Were you thinking of the two fifty or the one thirty when you said we should [pause] have enough for the neural net? [speaker002:] Well, it just [disfmarker] it [disfmarker] when we find that out it might change exactly how we do it, is all. I mean, how much effort do we put into making it causal? [speaker001:] Oh, OK. [speaker002:] I mean, [vocalsound] I think the neural net will probably do better if it looks at a little bit of the future. [speaker001:] Mm hmm. [speaker002:] But, um, it will probably work to some extent to look only at the past. And we ha you know, limited machine and human time, and [vocalsound] effort. And, you know, how [disfmarker] how much time should we put into [disfmarker] into that? So it'd be helpful if we find out from the [disfmarker] the standards folks whether, you know, they're gonna restrict that or not. [speaker001:] Mm hmm. [speaker002:] Um. But I think, you know, at this point our major concern is making the performance better and [disfmarker] and, um, [vocalsound] if, uh, something has to take a little longer in latency in order to do it that's [pause] you know, a secondary issue. [speaker001:] Mm hmm. [speaker002:] But if we get told otherwise then, you know, we may have to c clamp down a bit more. [speaker004:] Mmm. 
[speaker003:] So, the one [disfmarker] one [disfmarker] one difference is that [disfmarker] was there is like we tried computing the delta and then doing the frame dropping. [speaker004:] S [speaker005:] Mm hmm. [speaker003:] The earlier system was do the frame dropping and then compute the delta on the [disfmarker] [speaker002:] Uh huh. [speaker003:] So this [disfmarker] [speaker001:] Which could be a kind of a funny delta. Right? [speaker003:] Yeah. [speaker002:] Oh, oh. So that's fixed in this. Yeah, we talked about that. [speaker003:] Yeah. [speaker005:] Yeah. Uh huh. [speaker003:] So we have no delta. And then [disfmarker] [speaker002:] Good. [speaker003:] So the frame dropping is the last thing that we do. So, yeah, what we do is we compute the silence probability, convert it to that binary flag, [speaker002:] Uh huh. [speaker003:] and then in the end you c up upsample it to [vocalsound] match the final features number of [disfmarker] [speaker005:] Mm hmm. [speaker001:] Did that help then? [speaker003:] It seems to be helping on the well matched condition. So that's why this improvement I got from the last result. So. And it actually r reduced a little bit on the high mismatch, so in the final weightage it's b b better because the well matched is still weighted more than [disfmarker] [speaker002:] So, [@ @] I mean, you were doing a lot of changes. Did you happen to notice how much, [vocalsound] uh, the change was due to just this frame dropping problem? What about this? [speaker003:] Uh, y you had something on it. Right? [speaker005:] Just the frame dropping problem. Yeah. But it's [disfmarker] it's difficult. Sometime we [disfmarker] we change two [disfmarker] two things together and [disfmarker] But it's around [pause] maybe [disfmarker] it's less than one percent. [speaker002:] Uh huh. [speaker003:] Yeah. [speaker005:] It [disfmarker] [speaker002:] Well. 
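[Transcriber's note: the ordering fix described here — compute deltas on the full feature stream first, then drop the silence-marked frames, with the per-frame flag carried along to the final feature rate — might look schematically like this. It is a toy sketch; the real delta window and the flag transport are more involved.]

```python
import numpy as np

def deltas(feats, w=2):
    """Simple regression deltas over +/- w frames (edge frames repeated)."""
    padded = np.pad(feats, ((w, w), (0, 0)), mode="edge")
    num = sum(k * (padded[w + k: len(feats) + w + k]
                   - padded[w - k: len(feats) + w - k])
              for k in range(1, w + 1))
    return num / (2 * sum(k * k for k in range(1, w + 1)))

def finalize(feats, speech_flag):
    """Compute deltas on ALL frames, then drop the silence-marked ones.

    Doing it in this order avoids the odd deltas you get when the delta
    window spans a gap left by already-dropped frames.
    """
    full = np.hstack([feats, deltas(feats)])
    return full[np.asarray(speech_flag, dtype=bool)]
```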
[vocalsound] But like we're saying, if there's four or five things like that then [vocalsound] pretty sho soon you're talking real improvement. [speaker005:] Yeah. Yeah. And it [disfmarker] Yeah. And then we have to be careful with that also [disfmarker] with the neural net [speaker002:] Yeah. [speaker005:] because in [comment] the proposal the neural net was also, uh, working on [disfmarker] after frame dropping. [speaker002:] Mm hmm. [speaker005:] Um. [speaker002:] Oh, that's a real good point. [speaker005:] So. Well, we'll have to be [disfmarker] to do the same kind of correction. [speaker002:] It might be hard if it's at the server side. Right? [speaker005:] Mmm. Well, we can do the frame dropping on the server side or we can just be careful at the terminal side to send a couple of more frames before and after, and [disfmarker] So. I think it's OK. [speaker002:] OK. [speaker001:] You have, um [disfmarker] So when you [disfmarker] Uh, maybe I don't quite understand how this works, but, um, couldn't you just send all of the frames, but mark the ones that are supposed to be dropped? Cuz you have a bunch more bandwidth. Right? [speaker002:] Well, you could. Yeah. I mean, it [disfmarker] it always seemed to us that it would be kind of nice to [disfmarker] in addition to, uh, reducing insertions, actually use up less bandwidth. [speaker001:] Yeah. Yeah. [speaker002:] But nobody seems to have [vocalsound] cared about that in this [pause] evaluation. [speaker001:] And that way the net could use [disfmarker] [speaker002:] So. [speaker001:] If the net's on the server side then it could use all of the [pause] frames. [speaker003:] Yes, it could be. It's, like, you mean you just transferred everything and then finally drop the frames after the neural net. Right? [speaker001:] Mm hmm. [speaker005:] Mm hmm. [speaker003:] Yeah. That's [disfmarker] that's one thing which [disfmarker] [speaker001:] But you could even mark them, before they get to the server. [speaker003:] Yeah. 
Right now we are [disfmarker] Uh, ri Right now what [disfmarker] wha what we did is, like, we just mark [disfmarker] we just have this additional bit which goes around the features, [vocalsound] saying it's currently a [disfmarker] it's a speech or a nonspeech. [speaker001:] Oh, OK. [speaker003:] So there is no frame dropping till the final features, like, including the deltas are computed. [speaker001:] I see. [speaker003:] And after the deltas are computed, you just pick up the ones that are marked silence and then drop them. [speaker001:] Mm hmm. I see. [speaker002:] So it would be more or less the same thing with the neural net, I guess, actually. [speaker005:] Mm hmm. [speaker001:] I see. [speaker003:] So. Yeah, that's what [disfmarker] that's what [disfmarker] that's what, uh, this is doing right now. [speaker001:] I see. OK. [speaker002:] Yeah. [speaker005:] Mm hmm. [speaker002:] Um. OK. So, uh, what's, uh [disfmarker]? That's [disfmarker] that's a good set of work that [disfmarker] that, uh [disfmarker] [speaker003:] Just one more thing. Like, should we do something f more for the noise estimation, because we still [disfmarker]? [speaker002:] Yeah. I was wondering about that. [speaker003:] Yeah. [speaker005:] Mm hmm. [speaker002:] That was [disfmarker] I [disfmarker] I had written that down there. Um [disfmarker] [speaker005:] So, we, uh [disfmarker] actually I did the first experiment. This is [pause] with just fifteen frames. Um. We take the first fifteen frame of each utterance to it, [speaker002:] Yeah. [speaker005:] and average their power spectra. Um. I tried just plugging the, um, [vocalsound] uh, Guenter noise estimation on this system, and it [disfmarker] uh, it got worse. Um, but of course I didn't play [pause] with it. [speaker002:] Uh huh. [speaker005:] But [disfmarker] Mm hmm. Uh, I didn't [pause] do much more [pause] for noise estimation. I just tried this, and [disfmarker] [speaker002:] Hmm. Yeah. 
Well, it's not surprising it'd be worse the first time. [speaker005:] Mm hmm. [speaker002:] But, um, it does seem like, you know, i i i i some compromise between always depending on the first fifteen frames and a a always depending on a [disfmarker] a pause is [disfmarker] is [disfmarker] is a good idea. Uh, maybe you have to weight the estimate from the first teen [disfmarker] fifteen frames more heavily than [disfmarker] than was done in your first attempt. But [disfmarker] [speaker005:] Mm hmm. [speaker002:] but [disfmarker] [speaker005:] Yeah, I guess. [speaker002:] Yeah. Um. No, I mean [disfmarker] Um, do you have any way of assessing how well or how poorly the noise estimation is currently doing? [speaker005:] Mmm. No, we don't. [speaker002:] Yeah. [speaker005:] We don't have nothing [pause] that [disfmarker] [speaker003:] Is there [disfmarker] was there any experiment with [disfmarker]? Well, I [disfmarker] I did [disfmarker] The only experiment where I tried was I used the channel zero VAD for the noise estimation and frame dropping. So I don't have a [disfmarker] [vocalsound] I don't have a split, like which one helped more. [speaker005:] Yeah. [speaker003:] So. It [disfmarker] it was the best result I could get. [speaker005:] Mm hmm. [speaker003:] So, that's the [disfmarker] [speaker002:] So that's something you could do with, um, this final system. Right? Just do this [disfmarker] everything that is in this final system except, [vocalsound] uh, use the channel zero. [speaker003:] Mm hmm. For the noise estimation. [speaker002:] Yeah. [speaker003:] Yeah. We can try something. [speaker002:] And then see how much better it gets. [speaker003:] Mm hmm. Sure. [speaker002:] If it's, you know, essentially not better, then [pause] it's probably not worth [speaker005:] Yeah. [speaker002:] any more. [speaker003:] Yeah. But the Guenter's argument is slightly different. 
It's, like, ev even [disfmarker] even if I use a channel zero VAD, I'm just averaging the [disfmarker] [vocalsound] the s power spectrum. But the Guenter's argument is, like, if it is a non stationary [pause] segment, then he doesn't update the noise spectrum. So he's, like [disfmarker] he tries to capture only the stationary part in it. So the averaging is, like, [vocalsound] different from [pause] updating the noise spectrum only during stationary segments. So, th the Guenter was arguing that, I mean, even if you have a very good VAD, averaging it, like, over the whole thing is not a good idea. Because you're averaging the stationary and the non stationary, and finally you end up getting something [speaker002:] I see. [speaker003:] which is not really the s because, you [disfmarker] anyway, you can't remove the stationary part fr I mean, non stationary part from [vocalsound] the signal. So [disfmarker] [speaker002:] Not using these methods anyway. Yeah. [speaker003:] Yeah. So you just [pause] update only doing [disfmarker] or update only the stationary components. Yeah. So, that's [disfmarker] so that's still a slight difference from what Guenter is trying [speaker002:] Well, yeah. And [disfmarker] and also there's just the fact that, um, eh, uh, although we're trying to do very well on this evaluation, um, we actually would like to have something that worked well in general. [speaker003:] Yeah, yeah. [speaker002:] And, um, relying on having fifteen frames at the front or something is [disfmarker] is pretty [disfmarker] I mean, you might, you might not. [speaker003:] Mmm. [speaker005:] Mm hmm. [speaker002:] So, um. Um, it'd certainly be more robust to different kinds of input if you had at least some updates. Um. [speaker005:] Mm hmm. [speaker002:] But, um. Well, I don't know. What [disfmarker] what do you, uh [disfmarker] what do you guys see as [disfmarker] as being what you would be doing in the next week, given wha what's [pause] happened? 
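[Transcriber's note: the scheme attributed to Guenter here — update the noise estimate recursively, but only in frames judged stationary, rather than averaging over every non-speech frame — could be sketched as below. The stationarity test and the smoothing constant are assumptions; the actual method is not specified in the meeting.]

```python
import numpy as np

def update_noise(noise_est, frame_pow, lam=0.95, max_ratio=2.0):
    """Recursive noise estimate, updated only during stationary frames.

    A frame is treated as stationary if its power spectrum stays within a
    factor `max_ratio` of the current estimate in most bins (an assumed,
    deliberately simplistic test).  Non-stationary frames leave the
    estimate untouched, even if a VAD would call them non-speech.
    """
    ratio = frame_pow / np.maximum(noise_est, 1e-10)
    stationary = np.mean((ratio < max_ratio) & (ratio > 1.0 / max_ratio)) > 0.8
    if stationary:
        return lam * noise_est + (1.0 - lam) * frame_pow
    return noise_est
```

[This contrasts with the fifteen-initial-frames average discussed just above: the recursive update keeps tracking, but only when the spectrum looks stationary.]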
[speaker003:] Cure the VAD? [speaker005:] Yeah. [speaker001:] What was that? [speaker003:] VAD. [speaker001:] Oh. [speaker003:] And [disfmarker] [speaker002:] OK. [speaker005:] So, should we keep the same [disfmarker]? I think we might try to keep the same idea of having a neural network, but [vocalsound] training it on more data and adding better features, I think, but [disfmarker] because the current network is just PLP features. Well, it's trained on noisy [pause] PLP [disfmarker] [speaker003:] Just the cepstra. Yeah. [speaker005:] PLP features computed on noisy speech. But [vocalsound] [vocalsound] there is no nothing particularly robust in these features. [speaker001:] So, I I uh [disfmarker] [speaker003:] No. [speaker005:] There's no RASTA, no [disfmarker] [speaker001:] So, uh, I [disfmarker] I don't remember what you said [vocalsound] the answer to my, uh, question earlier. Will you [disfmarker] will you train the net on [disfmarker] after you've done the spectral subtraction or the Wiener filtering? [speaker002:] This is a different net. [speaker001:] Oh. [speaker003:] So we have a VAD which is like neur that's a neural net. [speaker005:] Oh, yeah. Hmm. [speaker001:] Oh, you're talking about the VAD net. [speaker003:] Yeah. [speaker001:] OK. [speaker005:] Mm hmm. [speaker001:] I see. [speaker003:] So that [disfmarker] that VAD was trained on the noisy features. [speaker001:] Mm hmm. [speaker003:] So, right now we have, like, uh [disfmarker] we have the cleaned up features, so we can have a better VAD by training the net on [pause] the cleaned up speech. [speaker001:] Mm hmm. I see. I see. [speaker003:] Yeah, but we need a VAD for uh noise estimation also. So it's, like, where do we want to put the VAD? Uh, it's like [disfmarker] [speaker001:] Can you use the same net to do both, or [disfmarker]? [speaker003:] For [disfmarker] [speaker001:] Can you use the same net that you [disfmarker] that I was talking about to do the VAD? [speaker003:] Mm hmm. 
Uh, it actually comes at v at the very end. So the net [disfmarker] the final net [disfmarker] I mean, which is the feature net [disfmarker] [speaker001:] Mm hmm. [speaker003:] so that actually comes after a chain of, like, LDA plus everything. So it's, like, it takes a long time to get a decision out of it. And [disfmarker] [vocalsound] and you can actually do it for final frame dropping, but not for the VA f noise estimation. [speaker001:] Mm hmm. [speaker002:] You see, the idea is that the, um, initial decision to [disfmarker] that [disfmarker] that you're in silence or speech happens pretty quickly. [speaker001:] Oh, OK. [speaker003:] Hmm. [speaker002:] And that [disfmarker] [speaker001:] Cuz that's used by some of these other [disfmarker]? [speaker002:] Yeah. And that's sort of fed forward, and [disfmarker] and you say "well, flush everything, it's not speech anymore". [speaker001:] Oh, OK. I see. [speaker003:] Yeah. [speaker001:] I thought that was only used for doing frame dropping later on. [speaker002:] Um, it is used, uh [disfmarker] Yeah, it's only used f Well, it's used for frame dropping. Um, it's used for end of utterance [speaker005:] Mmm. [speaker002:] because, you know, there's [disfmarker] [vocalsound] if you have [pause] more than five hundred milliseconds of [disfmarker] of [disfmarker] of nonspeech then you figure it's end of utterance or something like that. [speaker001:] Mm hmm. [speaker002:] So, um. [speaker005:] And it seems important for, like, the on line normalization. Um. We don't want to update the mean and variance during silen long silence portions. Um. So it [disfmarker] it has to be done before [speaker001:] Oh. I see. [speaker005:] this mean and variance normalization. Um. [speaker002:] Um. Yeah. So probably the VAD and [disfmarker] and maybe testing out the noise [pause] estimation a little bit. I mean, keeping the same method but [disfmarker] but, uh, [vocalsound] seeing if you cou but, um noise estimation could be improved. 
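The feed-forward endpointing rule described here — more than five hundred milliseconds of non-speech means end of utterance — reduces to a counter over per-frame VAD decisions. A minimal sketch, with a hypothetical function name and an assumed 10 ms frame step:

```python
def end_of_utterance(vad_flags, frame_ms=10, silence_ms=500):
    """Declare end of utterance once more than `silence_ms` of consecutive
    non-speech has been seen.

    `vad_flags` is a per-frame sequence of booleans (True = speech).
    Returns the frame index at which the rule fires, or None.
    """
    needed = silence_ms // frame_ms  # consecutive non-speech frames required
    run = 0
    for i, is_speech in enumerate(vad_flags):
        run = 0 if is_speech else run + 1
        if run > needed:
            return i
    return None
```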
[speaker005:] Mm hmm. [speaker002:] Those are sort of related issues. It probably makes sense to move from there. And then, uh, [vocalsound] later on in the month I think we wanna start including the [pause] neural net at the end. Um. OK. Anything else? [speaker005:] The Half Dome was great. [speaker002:] Good. Yeah. You didn't [disfmarker] didn't fall. [speaker003:] Well, yeah. [speaker002:] That's good. Our e our effort would have been devastated if you guys had [comment] [vocalsound] run into problems. [speaker001:] So, Hynek is coming back next week, you said? [speaker002:] Yeah, that's the plan. [speaker001:] Hmm. [speaker002:] I guess the week after he'll be, uh, going back to Europe, and so we wanna [disfmarker] [speaker001:] Is he in Europe right now or is he up at [disfmarker]? [speaker002:] No, no. He's [disfmarker] he's [disfmarker] he's dropped into the US. Yeah. Yeah. [speaker001:] Oh. Hmm. [speaker002:] So. Uh. [vocalsound] So, uh. Uh, the idea was that, uh, we'd [disfmarker] we'd sort out where we were going next with this [disfmarker] with this work before he, uh, left on this next trip. Good. [vocalsound] [vocalsound] Uh, Barry, you just got through your [vocalsound] quals, so I don't know if you [vocalsound] have much to say. But, uh. [speaker004:] Mmm. No, just, uh, looking into some [disfmarker] some of the things that, um, [vocalsound] uh, John Ohala and Hynek, um, gave as feedback, um, as [disfmarker] as a starting point for the project. Um. In [disfmarker] in my proposal, I [disfmarker] I was thinking about starting from a set of, uh, phonological features, [vocalsound] or a subset of them. Um, but that might not be necessarily a good idea according to, um, John. [speaker001:] Mm hmm. [speaker004:] He said, uh, um, these [disfmarker] these phonological features are [disfmarker] are sort of figments of imagination also. [speaker001:] Mm hmm. [speaker004:] Um. S [speaker002:] In conversational speech in particular. 
[speaker004:] Ye [speaker002:] I think you can [disfmarker] you can put them in pretty reliably in synthetic speech. But [vocalsound] we don't have too much trouble recognizing synthetic speech since we create it in the first place. So, it's [disfmarker] [speaker004:] Right. Yeah. So, um, a better way would be something more [disfmarker] more data driven, [speaker001:] Mm hmm. [speaker004:] just looking at the data and seeing what's similar and what's not similar. [speaker001:] Mm hmm. [speaker004:] So, I'm [disfmarker] I'm, um, taking a look at some of, um, [vocalsound] Sangita's work on [disfmarker] on TRAPS. She did something where, um [disfmarker] [vocalsound] w where the TRAPS learn She clustered the [disfmarker] the temporal patterns of, um, certain [disfmarker] certain phonemes in [disfmarker] in m averaged over many, many contexts. And, uh, some things tended to cluster. [speaker001:] Mm hmm. [speaker004:] Right? You know, like stop [disfmarker] stop consonants clustered really well. [speaker001:] Hmm. [speaker004:] Um, silence was by its own self. [speaker001:] Mm hmm. [speaker004:] And, uh, um, [vocalsound] v vocalic was clustered. [speaker001:] Mm hmm. [speaker004:] And, [vocalsound] um, so, [vocalsound] those are [pause] interesting things to [disfmarker] [speaker001:] So you're [disfmarker] now you're sort of looking to try to gather a set of these types of features? [speaker004:] Right. [speaker001:] Mm hmm. [speaker004:] Yeah. Just to see where [disfmarker] where I could start off from, [speaker001:] Mm hmm. [speaker004:] uh, you know? A [disfmarker] a [disfmarker] a set of small features and continue to iterate and find, uh, a better set. [speaker001:] Mm hmm. [speaker004:] Yeah. [speaker002:] OK. Well, short meeting. That's OK. [speaker001:] Yeah. [speaker002:] OK. So next week hopefully we'll [disfmarker] can get Hynek here to [disfmarker] to join us and, uh, uh. [speaker001:] Should we do digits? [speaker002:] Digits, digits. OK, now. 
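The clustering of averaged temporal patterns mentioned above (stops grouping together, silence on its own, vocalics grouping) can be illustrated with a toy similarity ranking over per-class patterns. This is an assumption-laden stand-in, not the actual TRAPS clustering procedure; the input format and correlation measure are invented for illustration.

```python
import numpy as np

def most_similar_patterns(traps):
    """Given averaged temporal patterns per phone class (dict: label -> 1-D
    array), return class pairs sorted by correlation, most similar first.

    Classes whose averaged patterns correlate highly would be candidates to
    merge into one broad feature class.
    """
    labels = sorted(traps)
    pairs = []
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            r = np.corrcoef(traps[a], traps[b])[0, 1]
            pairs.append((r, a, b))
    return sorted(pairs, reverse=True)
```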
[speaker001:] Go ahead, Morgan. [speaker002:] Alright. Let me get my glasses on so I can [pause] see them. [speaker001:] You can start. [speaker002:] OK. [speaker001:] OK. And we're off. [speaker002:] Mm [speaker005:] Yeah. [speaker002:] Um, so. If we can't, we can't. But uh we're gonna try to make this an abbreviated meeting cuz the [disfmarker] the next [disfmarker] next occupants were pushing for it, so. Um. So. Agenda is [disfmarker] according to this, is transcription status, DARPA demos XML tools, disks, backups, et cetera and [speaker008:] Does anyone have anything to [pause] add to the agenda? [speaker002:] OK. Should we just go in order? Transcription status? Who's [disfmarker] that's probably you. [speaker001:] I can do that quickly. Um I hired several more transcribers, They're making great progress. [speaker002:] Seven? [speaker001:] Seve several, several. [speaker002:] Oh. [speaker001:] And uh [disfmarker] and uh, uh I've been uh finishing up the uh double checking. I hoped to have had that done by today but it's gonna take one more week. [speaker008:] Um as a somewhat segue into the next topic, um could I get a hold of uh the data even if it's not really corrected yet [speaker004:] I g [speaker008:] just so I can get the data formats and make sure the information retrieval stuff is working? [speaker001:] Certainly. Yeah I mean, it's in the same place it's been. [speaker008:] So can you just [disfmarker] Oh, it is. OK. [speaker001:] Uh huh. No change. [speaker008:] Just [disfmarker] So, "transcripts" is the sub directory? [speaker001:] Uh [disfmarker] Yes. Uh huh. [speaker008:] OK. So I'll [disfmarker] I'll probably just make some copies of those rather than use the ones that are there. [speaker001:] OK. [speaker008:] Um and then just [disfmarker] we'll have to remember to delete them once the corrections are made. [speaker001:] OK. [speaker002:] OK, wh [speaker004:] I also got anot a short remark to the transcription. 
I've uh just processed the first five EDU meetings and they are chunked up so they would [disfmarker] they probably can be sent to IBM whenever they want them. [speaker006:] Well the second one of those [speaker003:] Cool. [speaker004:] Yep. It's already at IBM, [speaker006:] is already at IBM. [speaker004:] but the other ones [disfmarker] [speaker006:] That's the one that [pause] we're waiting to hear from them on. [speaker004:] Yeah. Yeah. [speaker006:] Yeah. [speaker001:] OK. These are separate from the ones that [disfmarker] [speaker006:] As soon as [disfmarker] [speaker001:] I mean, these are [disfmarker] [speaker006:] They're the IBM set. [speaker008:] It's this one. [speaker004:] Yep. [speaker001:] Excellent. [speaker006:] Yeah. [speaker001:] Good. [speaker008:] Is my mike on? [speaker006:] And so as soon as we hear from Brian that this one is OK [speaker008:] Yeah. [speaker006:] and we get the transcript back and we find out that hopefully there are no problems matching up the transcript with what we gave them, then uh we'll be ready to go and we'll just send them the next four as a big batch, [speaker001:] Excellent. [speaker006:] and let them work on that. [speaker008:] And so we're doing those as disjoint from the ones we're transcribing here? [speaker006:] Yes, [speaker008:] OK, good. [speaker006:] exactly. We're sort of doing things in parallel, that way we can get as much done a at once. [speaker008:] Yeah, I think that's the right way to do it, [speaker006:] Yeah. [speaker008:] especially for the information retrieval stuff. Anything else on transcription status? [speaker001:] Hm mmm. [speaker008:] OK. [speaker002:] DARPA demos, we had the submeeting the other day. 
[speaker008:] Right, which uh [disfmarker] So I've been working on using the THISL tools to do information retrieval on meeting data and the THISL tools are [disfmarker] there're two sets, there's a back end and a front end, so the front end is the user interface and the back end is the indexing tool and the querying tool. And so I've written some tools to convert everything into the right for file formats. And the command line version of the indexing and the querying is now working. So at least on the one meeting that I had the transcript for uh conveniently you can now do information retrieval on it, do [disfmarker] type in a [disfmarker] a string and get back a list of start end times for the meeting, uh of hits. [speaker006:] What [disfmarker] what kind of uh [disfmarker] what does that look like? The string that you type in. What are you [disfmarker] are you [disfmarker] are they keywords, or are they [disfmarker]? [speaker008:] Keywords. [speaker006:] OK. I see. [speaker008:] Right? And so [disfmarker] and then it munges it to pass it to the THISL IR which uses an SGML like format for everything. [speaker006:] I see. [speaker002:] And then does it play something back or that's something you're having to program? [speaker008:] Um, right now, I have a tool that will do that on a command line using our standard tools, [speaker002:] Yeah. [speaker008:] but my intention is to do a prettier user interface based either [disfmarker] So [disfmarker] so that's the other thing I wanted to discuss, is well what should we do for the user interface? We have two tools that have already been written. Um the SoftSound guys did a web based one, [speaker002:] Mm hmm. [speaker008:] um, which I haven't used, haven't looked at. Dan says it's pretty good [speaker002:] Mm hmm. [speaker008:] but it does mean you need to be running a web server. [speaker002:] Mm hmm. [speaker008:] And so it [disfmarker] it's pretty big and complex. 
Uh and it would be difficult to port to Windows because it means porting the web server to Windows. [speaker002:] Mm hmm. [speaker008:] Uh the other option is Dan did the Tcl TK THISL GUI front end for Broadcast News [speaker002:] Yeah. [speaker008:] which I think looks great. I think that's a nice demo. Um and that would be much easier to port to Windows. And so I think that's the way we should go. [speaker001:] I [disfmarker] Can I ask a question? [speaker008:] Mm hmm. [speaker001:] So um as it stands within the [disfmarker] the Channeltrans interface, it's possible to do a find and a play. You can find a searched string and play. So e Are you [disfmarker] So you're adding like um, I don't know, uh are they fuzzy matches or are they [pause] uh [disfmarker]? [speaker008:] It's a sort of standard, text retrieval based [disfmarker] So it's uh term frequency, inverse document frequency scoring. [speaker001:] OK. [speaker008:] Um and then there are all sorts of metrics for spacing how far apart they have to be and things like that. So it [disfmarker] it's [speaker001:] It's a lot more sophisticated than the uh the basically Windows based [disfmarker] [speaker008:] i it's like doing a Google query or anyth anything else like that. So i it uses [disfmarker] So it pr produces an index ahead of time [speaker001:] OK. [speaker008:] so you don't [disfmarker] you're not doing a linear search through all the documents. Cuz you can imagine if [disfmarker] with [disfmarker] if we have the sixty hours' worth you do [disfmarker] wouldn't wanna do a search. [speaker001:] Hm mmm. [speaker008:] Um you have to do preindexing [speaker001:] Good. [speaker008:] and so that [disfmarker] these tools do all that. And so the work to get the front end to work would be porting it [disfmarker] well [disfmarker] uh to get it to work on the UNIX systems, our side is just rewriting them and modifying them to work for meetings. 
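The pre-indexing plus term-frequency / inverse-document-frequency scoring described here can be illustrated with a toy inverted index. This is not the THISL IR code, just a sketch of the general technique, treating each meeting segment as a "document"; all names are hypothetical.

```python
import math
from collections import Counter

def build_index(docs):
    """Toy inverted index: term -> set of document ids (built ahead of time,
    so a query never does a linear scan over all documents)."""
    index = {}
    for doc_id, text in docs.items():
        for term in set(text.lower().split()):
            index.setdefault(term, set()).add(doc_id)
    return index

def tf_idf_score(query, docs, index):
    """Rank documents by summed tf-idf over query terms (illustrative only)."""
    n_docs = len(docs)
    scores = Counter()
    for term in query.lower().split():
        postings = index.get(term, set())
        if not postings:
            continue
        idf = math.log(n_docs / len(postings))  # rarer terms weigh more
        for doc_id in postings:
            tf = docs[doc_id].lower().split().count(term)
            scores[doc_id] += tf * idf
    return scores.most_common()
```

In the real system each hit would carry the segment's start/end times so the audio can be played back.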
So that it understands that they're different speakers and that it's one big audio file instead of a bunch of little ones and just sorta things like that. [speaker002:] Mm hmm. Mm hmm. [speaker007:] Mm hmm. [speaker006:] So what does the user see as the result of the query? [speaker008:] On which tool? [speaker006:] THISL. [speaker008:] The THISL GUI tool which is the one that Dan wrote, Tcl TK [speaker006:] Yeah. [speaker008:] um you type in a query and then you get back a list of hits and you can type on them and listen to them. Click on them rather [comment] with a mouse. [speaker006:] Ah. So if you typed in "small heads" or something you could [speaker002:] Mmm [speaker008:] Right, you'd get [disfmarker] [speaker006:] get back a uh uh [comment] something that would let you click and listen to some audio where that phrase had occurred [speaker008:] something [disfmarker] You [disfmarker] you'd get to listen to "beep". [speaker006:] or some [speaker002:] That was a really good look. It's too bad that that couldn't [vocalsound] come into the [disfmarker] [speaker008:] You couldn't get a video. [speaker007:] Guess who I practice on? [speaker001:] At some point we're gonna have to say what that private joke is, that keeps coming up. [speaker002:] Yeah. And then again, maybe not. So, [vocalsound] uh [disfmarker] Yeah, that soun that sounds reasonable. Yeah, it loo it [disfmarker] my [disfmarker] my recollection of it is it's [disfmarker] it's a pretty reasonable uh demo sort of format. [speaker008:] Right. [speaker006:] Yeah that sounds good. [speaker008:] And so I think there'd be minimal effort to get it to work, minimally [speaker006:] That sounds really neat. [speaker008:] and then we'd wanna add things like query by speaker and by meeting and all that sort of stuff. Um Dave Gelbart expressed some interest in working on that so I'll work with him on it. And it [disfmarker] it's looking pretty good, you know, the fact that I got the query system working. 
So if we wanna just do a video based one I think that'll be easy. [speaker002:] Mm hmm. [speaker008:] If we wanna get it to Windows it's gonna be a little more work because the THISL IR, the information retrieval tool's [disfmarker] um, I had difficulty just compiling them on Solaris. [speaker002:] Mm hmm. [speaker008:] So getting them to compile on Windows might be challenging. [speaker002:] Mm hmm. [speaker006:] But you were saying that [disfmarker] that the uh [disfmarker] that there's that set of tools, uh, Cygnus tools, that [disfmarker] [speaker008:] So. It certainly helps. [speaker006:] Uh huh. [speaker008:] Um, I mean without those I wouldn't even attempt it. [speaker006:] Yeah. [speaker002:] Mm hmm. [speaker008:] But what those [disfmarker] they [disfmarker] what those do is provide sort of a BSD compatibility layer, so that the normal UNIX function calls all work. [speaker002:] Mm hmm. Mm hmm. [speaker008:] Um, [speaker006:] And you have to have all the o [speaker008:] But the problem is that [disfmarker] that the THISL tools didn't use anything like Autoconf and so you have the normal porting problems of different header files and th some things are defined and some things aren't and uh different compiler work arounds and so on. So the fact that um it took me a day to get it c to compile under Solaris means it's probably gonna take me s significantly more than that to get it to compile under Windows. [speaker002:] How about having it run under free BSD? [speaker005:] Well what you need [disfmarker] [speaker008:] Free BSD would probably be easier. [speaker005:] All you need to do is say to Dan "gee it would be nice if this worked under Autoconf" and it'll be done in a day. [speaker008:] That's true. [speaker005:] Right? [speaker004:] Uh [disfmarker] [speaker008:] Actually you know I should check because he did port it to SPRACHcore [speaker005:] Right. [speaker008:] so he might have done that already. 
[speaker005:] I [disfmarker] I [disfmarker] I wouldn't be surprised. [speaker002:] So [disfmarker] [speaker008:] I'll check at that [disfmarker] [speaker006:] How does it play? [speaker005:] What I [disfmarker] [speaker002:] But it would [disfmarker] what would serve [disfmarker] would serve both purposes, is if you contact him and ask him if he's already done it. [speaker008:] Yeah, right. [speaker002:] If he has then you learn, if he hasn't then he'll do it. [speaker008:] Right. [speaker001:] Wow. [speaker006:] I hope he never listens to these meetings. [speaker008:] That's right. So, and I've been corresponding with Dan and also with uh uh, SoftSound guy, uh [disfmarker] [speaker001:] It's amazing. [speaker002:] Yeah. [speaker008:] Blanking on his name. [speaker006:] Tony Robinson? [speaker002:] Tony Robinson? [speaker008:] Do I mean Tony? I guess I do. [speaker002:] Yeah. [speaker008:] Or S or Steve Renals. [speaker005:] James Christie. [speaker008:] Which one do I mean? [speaker002:] Steve Renal Steve Renals. [speaker005:] Steve Renals is not SoftSound, is he? [speaker002:] No. [speaker008:] My brain is not working, I don't remember who I've been corresponding with. [speaker002:] OK. [speaker005:] Steve wro i it's Ste Steve Renals wrote THISL IR. [speaker008:] Then it's Steve Renals. [speaker005:] OK. [speaker002:] Oh, OK. [speaker008:] So uh just getting documentation and uh and f and formats, [speaker002:] Yeah. [speaker008:] so that's all going pretty well, I think we'll be OK with that. [speaker005:] Right. [speaker002:] Assuming we're [disfmarker] [speaker006:] What about issues of playing sound files [@ @] between the two platforms? [speaker008:] Um we have [disfmarker] Well, that's a good point too. [speaker005:] Here's a [disfmarker] here's a crazy idea [pause] actually. [speaker008:] I don't know. [speaker005:] Why don't you try and merge [pause] Transcriber [pause] and THISL IR? 
[speaker008:] Well this is one of the reasons [disfmarker] [speaker005:] They're both Tcl interfaces. [speaker008:] This is the [disfmarker] one of the reasons that I'm gonna have uh Dave Gelbart [disfmarker] Gelbart [disfmarker] Having him volunteer to work on it is a really good thing because he's worked on the Transcriber stuff [speaker005:] Right. [speaker008:] and he's more familiar with Tcl TK than I am. [speaker005:] And then you get [disfmarker] they [disfmarker] then you get the Windows media playing for free. [speaker008:] Well that's Snack, not [disfmarker] not Transcriber. [speaker005:] Right. But the point is that the Transcriber uses Snack and then you can [disfmarker] but you can use a [disfmarker] a lot of the same functionality [speaker008:] Yeah, yeah, I mean, I [disfmarker] I think THISL [disfmarker] THISL GUI probably uses Snack. [speaker005:] and it's [disfmarker] [speaker008:] And so my intention was just to base it on that. [speaker005:] Yeah. Well my thought was is that it would be nice [disfmarker] it would be nice to have the running transcripts um eh you know, from speaker to speaker. [speaker008:] And if it doesn't [disfmarker] [speaker005:] Right? Do you have [disfmarker] you have, you know, a speaker mark here and a speaker mark here? [speaker008:] Right, we'll have to figure out a user interface for that, so. [speaker005:] Right. Well that [disfmarker] eh my thought was if you had like Multitrans or whatever do it. Or whatever. [speaker008:] Yeah. It might be fairly difficult to get that to work in [comment] the little short segments we'd be talking about and having the search tools and so on. We [disfmarker] we can look into it, but [disfmarker] [speaker005:] Yeah. 
[speaker002:] The thing I was asking about with, um, free BSD is that it might be easier to get PowerPoint shows running in free BSD than to get this other package running in [disfmarker] [speaker008:] Yeah, I mean we have to [disfmarker] I have to sit down and try it before I make too many judgments, [speaker002:] Yeah. [speaker008:] so uh Um My experience with the Gnu compatibility library is really it's just as hard and just as easy to port to any system. Right? The Windows system isn't any harder because it [disfmarker] it looks like a BSD system. [speaker002:] Mm hmm. [speaker008:] It's just, you know, just like all of them, the "include" files are a little different and the function calls are a little different. [speaker002:] Right. [speaker008:] So I [disfmarker] it might be a little easier but it's not gonna be a lot easier. [speaker002:] OK. So there was that demo, which was one of the main ones, then we talked about um some other stuff which would basically be um showing off the [disfmarker] the Transcriber interface itself and as you say, maybe we could even merge those in some sense, but [disfmarker] but um, uh [disfmarker] and part of that was showing off what the speech non uh nonspeech [comment] stuff that Thilo has done [pause] s [pause] looks like. [speaker004:] Yeah. [speaker001:] Can I ask one more thing about THISL? [speaker008:] Mm hmm. [speaker001:] So with the IR stuff then you end up with a somewhat prioritized um [disfmarker]? [speaker008:] Mm hmm, ranked. [speaker001:] Excellent. Excellent. Yeah. [speaker007:] So another idea I w t had just now actually for the demo was whether it might be of interest to sh to show some of the prosody uh [vocalsound] work that Don's been doing. [speaker002:] Mm hmm. [speaker007:] Um actually show some of the features and then show for instance a task like finding sentence boundaries or finding turn boundaries. Um, you know, you can show that graphically, sort of what the features are doing. 
It, you know, it doesn't work great but it's definitely giving us something. [speaker002:] Well I think [speaker007:] I don't know if that would be of interest or not. [speaker002:] at [disfmarker] at the very least we're gonna want something illustrative with that cuz I'm gonna want to talk about it and so i if there's something that shows it graphically it's much better than me just having a bullet point [speaker007:] Yeah. [speaker002:] pointing at something I don't know much about, [speaker007:] I mean, you're looking at this now [disfmarker] [speaker002:] so. [speaker007:] Are you looking at Waves or Matlab? [speaker003:] Um yeah I'm starting to and um [disfmarker] Yeah we can probably find some examples of different type of prosodic events going on. [speaker007:] Yeah def [speaker002:] S so when we here were having this demo meeting, what we're sort of coming up with is that we wanna have all these pieces together, to first order, by the end of the month [speaker007:] I [speaker002:] and then that'll give us a week or so. [speaker007:] Oh, the end of this month or next month? [speaker003:] Ooo. The end of [disfmarker] [speaker008:] This month. [speaker007:] Oh, you mean like today? [speaker002:] Ju [speaker007:] Oh. [speaker008:] Oh sorry, next month. [speaker002:] June. June. June. [speaker007:] Next month. [speaker003:] Yeah. [speaker007:] Sorry. [speaker008:] Today isn't June first, [speaker006:] There's another one. [speaker008:] is it. [speaker005:] Exactly. [speaker002:] Uh [disfmarker] that'll [disfmarker] that'll give us [disfmarker] that'll give us a week or so to uh [disfmarker] to port things over to my laptop and make sure that works, [speaker007:] Sorry. I think, I mean eh where [disfmarker] [speaker002:] yeah. [speaker003:] Yeah, I mean I'll be here. [speaker007:] Yeah if d if Don can sort of talk to whoever's [disfmarker] cuz we're doing this anyway as part of our [disfmarker] you know, the research, [speaker002:] Yeah. 
[speaker007:] visualizing what these features are doing [speaker002:] Yeah. [speaker007:] and so either [disfmarker] it might not be integrated but it [disfmarker] it could potentially be in it. Could find some. [speaker002:] Yeah. Well, this is to an audience of researchers so I mean, you know, to let s the goal is to let them know what it is we're doing. [speaker007:] I mean it's different. [speaker002:] So that's [disfmarker] [speaker007:] I don't think anyone has done this on meeting data so it might be neat, you know. [speaker002:] Yeah. Good. Done with that. XML tools? [speaker008:] Um. So I've been doing a bunch of XML tools where you [disfmarker] we're sort of moving to XML as the general format for everything and I think that's definitely the right way to go because there are a lot of tools that let you do extraction and reformatting of XML tools. Um. So yet again we should probably meet to talk about transcription formats in XML because I'm not particularly happy with what we have now. I mean it works with Transcriber but it [disfmarker] it's a pain to use it in other tools uh because it doesn't mark start and end. [speaker006:] Start and end of each [disfmarker]? [speaker008:] Uh [disfmarker] [speaker004:] Yeah. [speaker008:] Utterance. [speaker006:] Utterance. Just marks [disfmarker]? [speaker008:] So it's implicit in [disfmarker] in there but you have to do a lot of processing to get it. [speaker006:] Yeah. Right. Right. [speaker008:] And so [disfmarker] and also I'd like to do the indirect time line business. Um but regardless, I mean, w that's something that you, me, and Jane can talk about later. 
Um, but I've installed XML tools of various sorts in various languages and so if people are interested in doing [disfmarker] extracting any information from any of these files, either uh information on users because the user database is that way [disfmarker] I'm converting the Key files to XML so that you can extract m uh various inf uh sorted information on individual meetings [speaker003:] Cool. [speaker004:] Yeah. [speaker008:] and then also the transcripts. And so l just let me know there [disfmarker] it's mostly Java and Perl but we can get other languages too if [disfmarker] if that's desirable. [speaker007:] Oh, quick question on that. Is [disfmarker] do we have the [disfmarker] [vocalsound] the seat information? In [disfmarker] in the Key files now? [speaker001:] Mm hmm. [speaker008:] The seat information is on the Key files for the ones which [speaker001:] Ah. [speaker007:] Oh in [disfmarker] [speaker008:] it's been recorded, [speaker007:] For the new one OK. [speaker008:] yeah. [speaker002:] Seat? [speaker007:] Great. Sea yeah. [speaker008:] Where [disfmarker] where you're sitting. [speaker002:] Oh! Not [disfmarker] not the quality or anything. No. [speaker004:] n [speaker008:] Right. [speaker002:] OK. I see. [speaker008:] "It's pretty soft and squishy." [speaker002:] Yeah. [speaker007:] Alright. [speaker002:] Yeah. OK. [speaker008:] Oh, but that might just be me. [speaker007:] OK. [speaker008:] Um. [speaker007:] Alright. [speaker002:] That's more seat information than we wanted. [speaker007:] Never mind. [speaker005:] Hmm. [speaker007:] I'm just trying to figure out, you know, when Morgan's voice appears on someone's microphone are they next to him or are they across from him? [speaker008:] Maybe we should bleep that out. [speaker002:] Yeah. Mmm, yeah. [speaker006:] Wait a minute, [speaker008:] Right. [speaker006:] how [disfmarker] how w eh where is it in the Key file? [speaker002:] Yeah. [speaker008:] The square bracket. 
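Extracting per-meeting information such as participants and seats from an XML-converted Key file, as described above, might look like the following sketch. The element and attribute names here are invented for illustration, not the real Key-file schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML rendering of a Key-file record (schema invented here).
KEY_XML = """<meeting id="Bmr012">
  <participant channel="0" seat="3">speaker002</participant>
  <participant channel="1" seat="5">speaker008</participant>
  <description>weekly meeting</description>
</meeting>"""

def participants_by_seat(xml_text):
    """Extract (seat, channel, speaker) tuples, sorted by seat."""
    root = ET.fromstring(xml_text)
    rows = [(int(p.get("seat")), int(p.get("channel")), p.text)
            for p in root.findall("participant")]
    return sorted(rows)
```

This kind of query is the payoff of moving everything to XML: once the records are structured, sorted extractions like this are a few lines in any language with an XML parser.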
[speaker007:] Cuz I mean I haven't been putting it in and [disfmarker] in by [disfmarker] [speaker008:] You haven't been putting it in. [speaker007:] Right. I have not. [speaker001:] Well bu [speaker008:] Oh, OK. [speaker001:] Isn't it always on the digits? [speaker007:] And [disfmarker] [speaker002:] Some of these are missing. Aren't they? [speaker001:] Isn't it always on the digits forms? [speaker002:] Some fall out of [disfmarker] [speaker007:] Well it [speaker008:] Yeah so we can go back and fill them in for the ones we have. [speaker003:] Ooo. [speaker007:] I mean they're on th right, these, but I just hadn't ever been putting it in the Key files. [speaker006:] Yeah I [disfmarker] I never [disfmarker] [speaker007:] And I don't think Chuck was either [speaker006:] I never knew we were supposed to put it in the Key file. [speaker007:] cuz [disfmarker] [speaker008:] I had told you guys about it [speaker007:] Oh, so we're both sorry. [speaker008:] but [disfmarker] [speaker006:] Oh really? [speaker007:] So [disfmarker] [speaker008:] I mean this is why I wanna use a g a tool to do it rather than the plain text [speaker007:] OK. [speaker008:] because with the plain text it's very easy to skip those things. [speaker006:] OK. [speaker007:] OK. [speaker008:] So. Um if you use the Edit key, or Key edit [disfmarker] I think it's Edit key, [comment] command [disfmarker] [speaker004:] Edit key. [speaker008:] Did I show you guys that? [speaker004:] Yep. [speaker008:] I did show it to you, [speaker006:] You mentioned it, [speaker008:] but I think you both said "no, you'll just use text file". [speaker006:] yeah. Yeah. Text. [speaker008:] Um it has it in there, a place to fill it in. [speaker007:] OK. [speaker006:] OK. [speaker008:] Yeah, and so if you don't fill it in, you're not gonna get it in the meetings. [speaker007:] So if [disfmarker] [speaker008:] So. [speaker007:] Right. Well I [disfmarker] I just realized I hadn't been doing it [speaker008:] Yep. 
[speaker007:] and probably [disfmarker] So [disfmarker] [speaker008:] Yeah and then the other thing also that Thilo noticed is, on the microphone, [speaker003:] u [speaker008:] on channel zero it says hand held mike or Crown mike, [speaker007:] Yeah. Right. [speaker008:] you actually have to say which one. [speaker007:] I know [disfmarker] Yeah, I usually delete the [disfmarker] [speaker008:] So. [speaker007:] I don't, [speaker006:] Oh! OK. I didn't do that either. [speaker007:] maybe I forgot to d [speaker004:] Yeah. [speaker006:] Takes me no time at all to edit these. [speaker007:] But it's almost [disfmarker] Yeah. [speaker008:] Yeah that's cuz you kn [speaker006:] I'm not doing anything. [speaker008:] I [disfmarker] I know why. [speaker007:] And I was [disfmarker] I was looking at Chuck's, like, "oh what did Chuck do, OK I'll do that". So. [speaker008:] And then uh also in a couple of places instead of filling the participants under "participants" they were filled in under "description". [speaker002:] Ah, OK. [speaker007:] Uh [disfmarker] [speaker008:] And so that's also a problem. So anyway. [speaker007:] We will do better. [speaker008:] That's it. Oh uh also I'm working on another version of this tool, the [disfmarker] the one that shows up here, [comment] that will flash yellow if the mike isn't connected. And it's not quite ready to go yet because um it's hard to tell whether the mike's connected or not because the best quality ones, the Crown ones, [comment] are about the same level if they're off and no one's o off or if they're on and no one's talking. [speaker003:] Huh. [speaker008:] Um these [disfmarker] these ones, they are much easier, there's a bigger difference. So I'm working on that and it [disfmarker] it sorta works and so eventually we will change to that and then you'll be able to see graphically if your mike is dropping in or out. [speaker003:] Will that also include like batteries dying? [speaker008:] Yep. 
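(Editor's sketch: the Key-file-to-XML conversion discussed above suggests a simple extraction pattern. The tag and attribute names used here, `meeting`, `participant`, `name`, `channel`, and `seat`, are invented for illustration; the real schema of the converted files is not specified in the discussion, and the speakers mention using Java and Perl rather than Python.)

```python
# Hypothetical sketch of pulling participant and seat information out of
# an XML-converted Key file. All element and attribute names below are
# assumptions, not the actual converted-file schema.
import xml.etree.ElementTree as ET

def participants_and_seats(xml_text):
    """Return a list of (name, channel, seat) tuples for one meeting."""
    root = ET.fromstring(xml_text)
    info = []
    for p in root.iter("participant"):
        # Seat may be missing, as noted in the meeting: older Key files
        # were filled in without it.
        info.append((p.get("name"), p.get("channel"), p.get("seat", "unknown")))
    return info

example = """<meeting id="Bmr021">
  <participant name="speaker008" channel="0" seat="3"/>
  <participant name="speaker007" channel="1"/>
</meeting>"""

print(participants_and_seats(example))
```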
[speaker003:] Just a any time the mike's putting out zeros basically. [speaker008:] Yep. Yep. [speaker006:] But with the screensaver kicking in, it [disfmarker] [speaker008:] Now [disfmarker] [speaker004:] But y yeah. [speaker008:] Well I'll turn off the screensaver too. [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Oops. [speaker008:] Um the other thing is as I've said before, it is actually on [speaker003:] Speaking of which. [speaker008:] the thing. There's a little level meter but of course no one ever pays attention to it. So I think having it on the screen is more easy to notice. [speaker001:] It would be nice if [disfmarker] if these had little light indicators, little L E Ds for [disfmarker] [speaker008:] Uh buzzer. [speaker001:] Yeah, a buzzer. [speaker008:] "Bamp, bamp!" [speaker002:] Small shocks [speaker001:] Yeah. Actually [disfmarker] [speaker002:] administered to the [disfmarker] OK. [speaker008:] OK, disk backup, et cetera? [speaker002:] Oh [disfmarker] [speaker008:] Um I spoke with Dave Johnson about putting all the Meeting Recorder stuff on non backed up disk to save the overhead of backup and he pretty much said "yeah, you could do that if you want" but he thought it was a bad idea. In fact what he said is doing the manual one, [comment] doing uh NW archive to copy it [comment] is a good idea and we should do that and have it backed up. He w he's a firm believer in [disfmarker] in lots of different modalities of backup. I mean, his point was well taken. This data cannot be recovered. [speaker007:] Yeah. [speaker008:] And so if a mistake is made and we lose the backup we should have the archive and if then a mistake is made and we lose the archive we should have the backup. [speaker002:] Well I guess it is true that even with something that's backed up it's not gonna [disfmarker] if it's stationary it's not going to go through the increment it's not gonna burden things in the incremental backups. 
[speaker008:] Just [disfmarker] just the monthly full. [speaker001:] Hmm. [speaker002:] Yeah, so the monthly full will be a bear but [disfmarker] [speaker008:] Yeah. But he said that [disfmarker] that we sh shouldn't worry too much about that, that we're getting a new backup system and we're far enough away from saturation on full backups that it's w probably OK. [speaker002:] Really? [speaker008:] And uh, so the only issue here is the timing between getting more disks and uh recording meetings. [speaker002:] So I guess the idea is that we would be reserving the non backed up space for things that took less than twenty four hours to recreate or something like that, [speaker008:] Things that are recreatable easily and also [disfmarker] Yeah, basically things that are recreatable. [speaker002:] right? Yeah. Yeah. [speaker008:] The expanded files and things like that. [speaker002:] OK. [speaker008:] They take up a lot more room anyway. [speaker002:] Yeah. [speaker008:] Uh but we do need more disk. [speaker002:] So we can get more disk. [speaker008:] Yeah. [speaker002:] Yeah. So. [speaker008:] And I [disfmarker] I think I agree with him. I mean his point was well taken that if we lose one of these we cannot get it back. [speaker002:] OK. [speaker008:] I don't think there was any other et cetera there. [speaker002:] Well I was allowing someone else to come up with something related that they had uh [disfmarker] [speaker005:] I thought you guys were gonna burn C Ds? [speaker008:] Um unfortunately [disfmarker] we could burn C Ds but first of all it's a pain. [speaker005:] Yeah. [speaker008:] Because you have to copy it down to the PC and then burn it and that's a multi step procedure. And second of all the [disfmarker] the write once burners as opposed to a professional press don't last. [speaker005:] Yeah. [speaker008:] So I think burning them for distribution is fine but burning them for backup is not a good idea. [speaker005:] I see. 
[speaker008:] Cuz th they [disfmarker] they fail after a couple years. [speaker005:] OK. Alright. [speaker001:] I do have uh uh [disfmarker] It's a different topic. Can I add one top topic? We have time? I wanted to ask, I know that uh that Thilo you were, um, bringing the Channeltrans interface onto the Windows machine? [speaker004:] Yeah it's [disfmarker] it [disfmarker] Basically it's done, [speaker001:] And I wanted to know is th [speaker004:] yeah. [speaker001:] It's all done? That's g wonderful. Great. [speaker004:] Yeah. [speaker008:] Yes, since Tcl TK runs on it, basically things'll just work. [speaker004:] Yeah it [disfmarker] Yeah, it was just a problem with the Snack version and the Transcriber version but it's solved. So. [speaker001:] Does [disfmarker] and that [disfmarker] does that mean, I [disfmarker] maybe I should know this but I don't. Does this mean that the [disfmarker] that this could be por uh ported to a Think Pad note or some other type of uh [disfmarker] [speaker004:] Yeah, basically uh I did install it on my laptop and yeah [speaker001:] Wonderful. [speaker004:] it worked. [speaker001:] Wonderful. [speaker002:] Hmm! Good. CrossPads? CrossPads? [speaker008:] Uh got an email from uh James Landay who basically said "if you're not using them, could you return them?" So he said he doesn't need them, he just periodically w at the end of each term sends out email to everyone who was recorded as having them and asks them if they're still using them. [speaker002:] So we've never used them. [speaker001:] We used them once. [speaker008:] We [disfmarker] we used them a couple times, [speaker002:] Once? [speaker001:] Mm hmm. [speaker006:] Them? [speaker008:] but [disfmarker] [speaker001:] Couple times. [speaker006:] There's more than one? [speaker008:] Yeah, we have two. [speaker001:] Yeah. [speaker002:] i [speaker008:] Um. 
[speaker002:] But [disfmarker] [speaker008:] My opinion on it is, first, I never take notes anyway so I'm not gonna use it, um and second, it's another level of infrastructure that we have to deal with. [speaker001:] And I have [disfmarker] uh so my [disfmarker] my feeling on it is that I think in principle it's a really nice idea, and you have the time tags which makes it better tha than just taking ra raw notes. On the other hand, I [disfmarker] the down side for me was that I think the pen is really noisy. So you have ka kaplunk, kaplunk, kaplunk. And I [disfmarker] and I don't know if it's audible on the [disfmarker] but I [disfmarker] I sort of thought that was a disadvantage. I do take notes, I mean, I could be taking notes on these things and I guess the plus with the CrossPads would be the time markings but [disfmarker] I don't know. [speaker004:] Uh, what is a CrossPad? [speaker002:] So it's [disfmarker] it's um [disfmarker] it's a regular pad, just a regular pad of paper but there's this pen which indicates position. [speaker003:] Thank you. [speaker002:] And so you have time and position stuff stored [speaker004:] OK. [speaker002:] so that you can [disfmarker] you have a record of whatever it is you've written. [speaker004:] OK. [speaker008:] And then you can download it and they have OCR and searching and all sorts of things. [speaker004:] OK. OK. [speaker008:] So i if you take notes it's a great little device. [speaker001:] Could [disfmarker] Mm hmm. [speaker008:] But I don't take notes, [speaker002:] And one of the reasons that it was brought up originally was because uh we were interested in [disfmarker] in higher level things, [speaker008:] so. [speaker002:] not just the, you know, microphone stuff but also summarization and so forth and the question is if you were going to go to some gold standard of what wa what was it that happened in the meeting you know, where would it come from? And um I think that was one of the things, [speaker004:] Yeah. 
[speaker008:] Yep. [speaker007:] Yeah. [speaker002:] right? And so the [disfmarker] it seemed like a neat idea. We'll have a [disfmarker] you know, have a scribe, have somebody uh take good notes and then that's part of the record of the meeting. And then we did it once or twice and we sort of [disfmarker] [speaker008:] Yep, and then just sort of died out. [speaker002:] probably chose the wrong scribe but it was [disfmarker] [vocalsound] It's uh [disfmarker] [speaker008:] Yeah that's right. [speaker007:] I mean [disfmarker] [speaker001:] Well I did it one time but um [disfmarker] [speaker008:] Yep. [speaker002:] Yeah. [speaker001:] u but I guess the [disfmarker] the other thing I'm thinking is if we wanted that kind of thing I wonder if we'd lose that much by having someone be a scribe by listening to the tape, to the recording afterwards and taking notes in some other interface. [speaker006:] I mean we're transcribing it anyways, why do we need notes? [speaker001:] Oh it's la it's useful, [speaker008:] Because that's summary. [speaker001:] have a summary and high points. [speaker007:] I think [disfmarker] [speaker002:] Summary. [speaker007:] there's also [disfmarker] there's this use that [disfmarker] [speaker006:] Summarize it from the transcription. [speaker007:] the [disfmarker] Well, what if you're sitting there and you just wanna make an X and you don't wanna take notes and you're [disfmarker] you just wanna [speaker006:] Doodle. [speaker007:] get the summary of the transcript from this time location like [disfmarker] you know, and [disfmarker] and then while you're bored you don't do anything and once in a while, maybe there's a joke and you put a X and [disfmarker] [comment] But [disfmarker] in [disfmarker] in other words you can use that just to highlight times in a very simple way. 
Also with [disfmarker] I was thinking and I know Morgan disagrees with me on this but suppose you have a group in here and you wanna let them note whenever they think there might be something later that they might not wanna distribute in terms of content, they could just sort of make an X near that point or a question mark that sort of alerts them that when they get the transcript back they c could get some red flags in that transcript region and they can then look at it. So. I know we haven't been using it but I w I can imagine it being useful just for sort of marking time periods [speaker008:] Right. [speaker007:] which you then get back in a transcript [speaker001:] Well. [speaker002:] I guess [disfmarker] so, you know, what [disfmarker] what makes one think i is maybe we should actually schedule some periods where people go over something later [speaker007:] so. [speaker002:] and [disfmarker] and [disfmarker] and put some kind of summary or something uh you know, some [disfmarker] there'd be some scribe who would actually listen, w who'd agreed to actually listen to the whole thing, not transcribe it, but just sort of write down things that struck them as important. But [disfmarker] then you don't [disfmarker] you don't have the time reference uh that you'd have if you had it live. [speaker007:] Right. And you don't have a lot of other cues that might be useful, [speaker002:] Yeah. [speaker006:] How do you synchronize the time in the CrossPad and the time of the recording? [speaker007:] so. [speaker008:] I mean that was one of the issues we talked about originally and that that's w part of the difficulty is that we need an infrastructure for using the time [disfmarker] the CrossPads and so that means synchronizing the time [disfmarker] You know you want it pretty close [speaker001:] Mm hmm. 
[speaker008:] and there's a fair amount of skew because it's a hand held unit with a battery [speaker001:] Well when [disfmarker] when I d [speaker008:] and so you [disfmarker] [speaker001:] OK. [speaker008:] so you have to synchronize at the beginning of each meeting all the pads that are being used, so that it's synchronized with the time on that and then you have to download to an application, and then you have to figure out what the data formats are and convert it over if you wanna do anything with this information. And so there's a lot of infrastructure which [speaker001:] w Mm hmm. [speaker005:] Why [disfmarker] [speaker001:] There is an alternative. [speaker008:] unless someone [disfmarker] [speaker001:] There is an alternative, I mean, it's still, there's uh [disfmarker] you know, your point stands about there be [disfmarker] needing to be an infrastructure, but it doesn't have to be synchronized with the little clock's timer on it. You c I mean, I [disfmarker] when I [disfmarker] when I did it I synchronized it by voice, by whispering "one, two, three, four" onto the microphone [speaker008:] Hmm. [speaker001:] and uh, you know. [speaker008:] Well, but then there's the infrastructure at the other end [speaker005:] Right. [speaker008:] which someone has to listen to that and find that point, [speaker001:] Yeah, it's transcribed. [speaker008:] and then mark it. [speaker001:] It's in the transcript. [speaker008:] So. [speaker001:] Yeah. Well it's in the transcript. [speaker007:] Well, could we keep one of these things for another year? Would h I mean is there a big cau [speaker008:] We can keep all [disfmarker] both of them for the whole whole year. 
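(Editor's sketch: the synchronization problem being discussed reduces to establishing one common instant, for example the whispered "one, two, three, four" at a known pad time and recording time, after which every pad timestamp maps onto the meeting timeline by a fixed offset. This sketch ignores clock skew, which, as noted, is a real issue for a battery-powered hand-held unit; the numbers in the usage example are made up.)

```python
# Map CrossPad timestamps onto the meeting recording's clock, given one
# synchronization point observed on both clocks.

def pad_to_meeting_time(pad_ts, sync_pad_ts, sync_meeting_ts):
    """Convert a pad timestamp (seconds) to meeting-recording time (seconds)."""
    offset = sync_meeting_ts - sync_pad_ts
    return pad_ts + offset

# Hypothetical example: the pad clock read 1000.0 s when the recording
# was at 12.5 s, so a mark made at pad time 1360.0 s falls at 372.5 s
# of audio.
print(pad_to_meeting_time(1360.0, 1000.0, 12.5))
```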
[speaker007:] just [disfmarker] just in case we [disfmarker] [speaker008:] I mean, it's just [disfmarker] [speaker007:] even maybe some of the transcribers who might be wanting to annotate uh f just there's a bunch of things that might be neat to do but I [disfmarker] it might not be the case that we can actually synchronize them and then do all the infrastructure but we could at least try it out. [speaker002:] Well [disfmarker] one thing that we might try um is on some set of meetings, some collection of meetings, maybe EDU is the right one or maybe something else, we [disfmarker] we get somebody to buy into the idea of doing this as part of the task. I mean, [speaker007:] Right. [speaker002:] uh part of the reason [disfmarker] I think part of the reason that Adam was so interested in uh the SpeechCorder sort of f idea from the beginning is he said from the beginning he hated taking notes [speaker008:] Yep. [speaker002:] and [disfmarker] and so forth so and [disfmarker] and Jane is more into it but eh uh you know I don't know if you wanna really do [disfmarker] do this all the time so I think the thing is to [disfmarker] to get someone to actually buy into it and have at least some series of meetings where we do it. Um [disfmarker] and if so, it's probably worth having one. The p the [disfmarker] the problem with the [disfmarker] the more extended view, all these other you know with uh quibbling about particular applications of it is that it looks like it's hard to get people to um uh routinely use it, I mean it just hasn't happened anyway. 
But maybe if we can get a person to [disfmarker] [speaker007:] Yeah I don't think it has to be part of a what everybody does in a meeting but it might be a useful, neat part of the project that we can, you know, show off as a mechanism for synchronizing events in time that happen that you just wanna make a note of, like what Jane was talking about with some later browsing, just [disfmarker] just as a convenience, even if it's not a full blown note taking substitute. [speaker005:] Well if you wanted to do that maybe the right architecture for it is to get a PDA with a wireless card. And [disfmarker] and that way you can synchronize very easily with the [disfmarker] the [disfmarker] the meeting because you'll be synchroni you can synchronize with the [disfmarker] the Linux server and uh [disfmarker] [speaker007:] So what kind of input would you be [disfmarker]? [speaker005:] so [disfmarker] so, I mean, if you're not worried about [disfmarker] [speaker008:] Buttons. [speaker007:] You'd just be pressing like a [disfmarker] a [disfmarker] [speaker005:] Well [disfmarker] well you have a PDA and may and you could have the same sort of X interface or whatever, I mean, you'd have to do a little eh a little bit of coding to do it. [speaker007:] Mm hmm. [speaker005:] But you could imagine, [speaker007:] Yeah, that be good. [speaker005:] I mean, if [disfmarker] if all you really wanted was [disfmarker] you didn't want this secondary note taking channel but just sort of being able to use m markers of some sort, a PDA with a l a wireless card would be the [disfmarker] probably the right way to go. I mean even buttons you could do, sort of, I mean, as you said. [speaker008:] I mean for what [disfmarker] what you've been describing buttons would be even more convenient than anything else, [speaker007:] M right. [speaker008:] right? [speaker007:] That would be fine too. [speaker008:] You have the [disfmarker] [speaker005:] Right. 
[speaker007:] I mean, I don't have, you know, grandiose ideas in mind but I'm just sort of thinking well we've [disfmarker] we're getting into the next year now and we have a lot of these things worked out at [disfmarker] in terms of the speech maybe somebody will be interested in this and [disfmarker] [speaker001:] I like this PDA idea. Yeah. [speaker002:] Yeah, I do like the idea of having a couple buttons [speaker008:] Well I'm sure there would [disfmarker] [speaker007:] Yeah. [speaker002:] where like one [disfmarker] one button was "uh oh" [speaker001:] Yeah. [speaker002:] and then another button was "that's great" and another button "that's f" [speaker007:] Or like this is my "I'm supposed to do this" kind of button, [speaker001:] Yeah. [speaker007:] like "I better remember to [disfmarker]" [speaker008:] Action item. [speaker007:] Yeah something like that or [disfmarker] [speaker008:] I mean I think the CrossPad idea is a good one. [speaker001:] And then [disfmarker] Uh huh. [speaker008:] It's just a question of getting people to use it and getting the infrastructure set up in such a way that it's not a lot of extra work. I mean that's part of the reason why it hasn't happened is that it's been a lot of extra work for me [speaker007:] Yeah. [speaker002:] Right. [speaker008:] and [disfmarker] [speaker001:] Well, and not just for you. [speaker007:] W [speaker001:] But it's also, it has this problem of having to go from an analog to a d a digital record too, doesn't it? [speaker008:] Well it's digital but it's in a format that is not particularly standard. [speaker001:] I mean [disfmarker] But I mean, say, if i if [disfmarker] if you're writing [disfmarker] if you're writing notes in it does [disfmarker] it [disfmarker] it can't do handwriting recognition, right? [speaker002:] No, no, but it's just [disfmarker] it's just storing the pixel informa position information, [speaker001:] OK. [speaker002:] it's all digital. 
[speaker001:] I [disfmarker] I guess what I'm thinking is that the PDA solution you h you have it already without needing to go from the pixelization to a [disfmarker] to a [disfmarker] I mean [disfmarker] [speaker002:] Right. You don't have to [disfmarker] [speaker005:] The transfer function is less errorful, yes. [speaker007:] Yeah, yeah. [speaker002:] Yeah. [speaker001:] Oh, nicely put. [speaker002:] Yeah. Yeah. [speaker007:] Well it also [disfmarker] it's maybe realistic cuz people are supposed to be bringing their P D As to the meeting eventually, right? That's why we have this little [disfmarker] I don't know what [disfmarker] I don't wanna cause more work for anyone but I can imagine some interesting things that you could do with it and so if we don't have to return it and we can keep it for a year [disfmarker] I don't know. [speaker008:] Well [disfmarker] w we don't [disfmarker] we certainly don't have to return it, as I said. All [disfmarker] all he said is that if you're not using it could you return it, if you are using it feel free to keep it. The point is that we haven't used it at all and are we going to? [speaker002:] So we have no but [disfmarker] uh by I [disfmarker] I would suggest you return one. [speaker008:] OK. [speaker007:] Yeah. [speaker002:] Because we [disfmarker] we you know, we [disfmarker] we haven't used it at all. [speaker007:] We c [speaker002:] We have some aspirations of using them [speaker007:] One would probably be fine. Maybe we could do like a student project, [speaker002:] and [disfmarker] [speaker007:] you know, maybe someone who wants to do this as their main like s project for something would be cool. [speaker002:] Yeah. [speaker008:] Yep. I mean if we had them out and sitting on the table people might use them a little more [speaker002:] Maybe Jeremy could sit in some meetings and press a button when there [disfmarker] when [disfmarker] when somebody laughed. 
[speaker008:] although there is a little [disfmarker] [speaker007:] Well, I'm [disfmarker] yeah, that's not a bad [disfmarker] [speaker002:] Yeah, yeah. Yeah. [speaker007:] Jeremy's gonna be an [disfmarker] he's a new student starting on modeling brea breath and laughter, actually, which sounds funny but I think it should be cool, so. [speaker002:] Yeah. [speaker008:] Sounds breathy to me. [speaker007:] OK. [speaker008:] Breath and lau [speaker007:] "Ha ha ha." [speaker008:] "ha ha ha ha". "Ha ha ha ha." [speaker002:] Well dear. [speaker008:] Um. [speaker002:] Hmm. [speaker008:] That reminded me of something. Oh well, too late. It slipped out. [speaker002:] OK. [speaker008:] Oh, [speaker007:] You're [disfmarker] you're gonna tease me? [speaker008:] equipment. [speaker007:] OK. [speaker008:] Ordered [disfmarker] Uh, well I'm always gonna do that. W uh [disfmarker] [comment] We ordered uh more wireless, and so they should be coming in at some point. [speaker007:] Great. [speaker008:] And then at the same time I'll probably rewire the room as per Jane's suggestion so that uh the first N channels are wireless, eh are the m the close talking and the next N are far field. [speaker002:] You know what he means but isn't that funny sounding? "We ordered more wireless." It's like wires are the things so you're wiring [disfmarker] you're [disfmarker] you're [disfmarker] you [disfmarker] we're [disfmarker] we ordered more absence of the thing. [speaker007:] That's a very philosophical statement from Morgan. [speaker008:] wired less, wired more. [speaker007:] I just [disfmarker] it's sort of a anachronism, I mean it's like [disfmarker] It's great. [speaker008:] Should we do digits? [speaker002:] Anyway. [speaker008:] Do we have anything else? [speaker002:] Yeah. [speaker001:] OK. 
[speaker002:] I mean there's [disfmarker] there's all this stuff going on uh between uh Andreas and [disfmarker] and [disfmarker] and Dave and Chuck and others with various kinds of runs uh um [disfmarker] recognition runs, trying to figure things out about the features but it's [disfmarker] it's all sort of in process, so there's not much to say right now. Uh why don't we start with our [disfmarker] our esteemed guest. [speaker005:] OK. Alright. [speaker008:] So just the transcript number and then the [disfmarker] then the [disfmarker] [speaker005:] This is [disfmarker] Yes, this is number two for me today. [speaker002:] See all you have to do is go away to move way up in the [disfmarker] [speaker005:] Oh. [speaker007:] We could do simultaneous. Initiate him. [speaker002:] We [disfmarker] we could. [speaker008:] Should we do simultaneous? [speaker007:] Well, I'm just thinking, are you gonna try to save the data before this next group comes in? [speaker002:] Yeah. [speaker008:] Yeah, absolutely. [speaker007:] Yeah, so we might wanna do it simultaneous. [speaker008:] I mean you hav sorta have to. [speaker007:] Right, so [disfmarker] [speaker002:] Well OK, so let's do one of those simultaneous ones. [speaker007:] so we might n we might need to do that actually. [speaker008:] OK. [speaker002:] That sounds good. [speaker005:] OK. [speaker002:] Everybody ready? [speaker001:] Yeah. [speaker002:] A one. [speaker007:] You have to plug your ears, by the way uh Eric, [speaker008:] Well I have to, [speaker007:] or [disfmarker] [speaker008:] I don't know about other people. [speaker007:] or you start laughing. [speaker005:] OK, alright. [speaker004:] You don't have to. [speaker002:] OK, a one and a two and a three. OK, babble, take five. [speaker001:] OK, we're on. [speaker003:] OK, what are we talking about today? [speaker002:] I don't know. Do you have news from the conference talk? Uh, that was programmed for yesterday [disfmarker] I guess. 
[speaker003:] Uh [disfmarker] [speaker004:] Yesterday [speaker003:] Uh [disfmarker] [speaker004:] Yesterday morning on video conference. [speaker003:] Uh, [speaker002:] Well [speaker003:] oh, [speaker005:] Oh. Conference call. [speaker003:] I'm sorry. I know [disfmarker] now I know what you're talking about. No, nobody's told me anything. [speaker002:] Alright. [speaker001:] Oh, this was the, uh, talk where they were supposed to try to decide [disfmarker] [speaker002:] To [disfmarker] to decide what to do, yeah. [speaker004:] Yeah. [speaker001:] Ah, right. [speaker003:] Yeah. No, that would have been a good thing to find out before this meeting, that's. No, I have no [disfmarker] I have no idea. Um, Uh, so I mean, let's [disfmarker] let's assume for right now that we're just kind of plugging on ahead, [speaker002:] Yeah. [speaker003:] because even if they tell us that, uh, the rules are different, uh, we're still interested in doing what we're doing. So what are you doing? [speaker002:] Mm hmm. Uh, well, we've [disfmarker] a little bit worked on trying to see, uh, what were the bugs and the problem with the latencies. [speaker004:] To improve [disfmarker] [speaker002:] So, We took [disfmarker] first we took the LDA filters and, [vocalsound] uh, we designed new filters, using uh recursive filters actually. [speaker003:] So when you say "we", is that something Sunil is doing or is that [disfmarker]? [speaker002:] I'm sorry? [speaker003:] Who is doing that? [speaker002:] Uh, us. [speaker003:] Oh, oh. [speaker002:] Yeah. [speaker003:] Oh, OK. [speaker004:] But [disfmarker] [speaker002:] So we took the filters [disfmarker] the FIR filters [vocalsound] and we [comment] designed, uh, IIR filters that have the same frequency response. [speaker003:] Mm hmm. [speaker002:] Well, similar, but that have shorter delays. [speaker003:] Mm hmm. [speaker002:] So they had two filters, one for the low frequency bands and another for the high frequency bands. 
And so we redesigned two filters. And the low frequency band has sixty four milliseconds of delay, and the high frequency band filter has something like eleven milliseconds compared to the two hundred milliseconds of the FIR filters. But it's not yet tested. So we have the filters but we still have to implement a routine that does recursive filtering [speaker003:] OK. [speaker002:] and [disfmarker] [speaker003:] You [disfmarker] you had a discussion with Sunil about this though? [speaker002:] No. No. [speaker003:] Uh huh. Yeah, you should talk with him. [speaker002:] Yeah, yeah. [speaker003:] Yeah. No, I mean, because the [disfmarker] the [disfmarker] the [disfmarker] the whole problem that happened before was coordination, right? [speaker002:] Mm hmm. [speaker003:] So [disfmarker] so you need to discuss with him what we're doing, [speaker002:] Yeah. [speaker003:] uh, cuz they could be doing the same thing and [disfmarker] or something. [speaker002:] Mm hmm. Uh, I [disfmarker] yeah, I don't know if th that's what they were trying to [disfmarker] [speaker003:] Right. [speaker002:] They were trying to do something different like taking, uh [disfmarker] well, using filter that takes only a past and this is just a little bit different. But I will I will send him an email and tell him exactly what we are doing, so. [speaker003:] Yeah, yeah. Um, I mean [disfmarker] [speaker002:] Um, [speaker003:] We just [disfmarker] we just have to be in contact more. I think that [disfmarker] the [disfmarker] the fact that we [disfmarker] we did that with [disfmarker] had that thing with the latencies was indicative of the fact that there wasn't enough communication. [speaker002:] Mm hmm. [speaker003:] So. OK. [speaker002:] Alright. Um, Yeah. Well, there is w one, um, remark about these filters, that they don't have a linear phase. So, [speaker003:] Right. [speaker002:] Well, I don't know, perhaps it [disfmarker] perhaps it doesn't hurt because the phase is almost linear but.
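(Editor's sketch of the trade being described. The actual LDA filter coefficients are not given in the discussion, so the filters below are illustrative: a linear-phase FIR filter on a 100 Hz feature-frame stream delays its output by exactly (N-1)/2 frames, while a recursive (IIR) smoother with comparable smoothing has a much smaller, but frequency-dependent and non-linear-phase, delay. The tap count and smoothing constant are chosen only to reproduce the rough 200 ms versus tens-of-milliseconds scale quoted in the meeting.)

```python
# Compare the fixed delay of a symmetric FIR filter with the DC delay
# of a one-pole IIR smoother, on a 100 Hz feature-frame stream.
frame_rate = 100.0                  # feature frames per second (10 ms hop)

# 41-tap symmetric (linear-phase) FIR: group delay is (41 - 1) / 2 frames.
fir_taps = 41
fir_delay_ms = (fir_taps - 1) / 2 / frame_rate * 1000.0

# One-pole IIR smoother y[t] = (1 - a) * y[t-1] + a * x[t]:
# its group delay at DC is (1 - a) / a frames.
a = 0.4                             # illustrative smoothing constant
iir_delay_ms = (1 - a) / a / frame_rate * 1000.0

print(fir_delay_ms, iir_delay_ms)
```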
Um, and so, yeah, for the delay I gave you here, it's [disfmarker] it's, uh, computed on the five hertz modulation frequency, which is the [disfmarker] mmm, well, the most important for speech so. Uh, this is the first thing. [speaker004:] The low f f [speaker003:] So that would be, uh, a reduction of a hundred and thirty six milliseconds, [speaker002:] Yeah. [speaker003:] which, uh [disfmarker] What was the total we ended up with through the whole system? [speaker002:] Three hundred and thirty. [speaker003:] So that would be within [disfmarker]? [speaker002:] Yeah, but there are other points actually, uh, which will perhaps add some more delay. Is that some other [disfmarker] other stuff in the process were perhaps not very [disfmarker] um perf well, not very correct, like the downsampling which w was simply dropping frames. [speaker003:] Yeah. [speaker002:] Um, so we will try also to add a nice downsampling having a filter that [disfmarker] that [disfmarker] [speaker003:] Uh huh. [speaker002:] well, a low pass filter at [disfmarker] at twenty five hertz. Uh, because wh when [disfmarker] when we look at the LDA filters, well, they are basically low pass but they leave a lot of what's above twenty five hertz. [speaker003:] Yeah. [speaker002:] Um, and so, yeah, this will be another filter which would add ten milliseconds again. [speaker003:] Yeah. [speaker002:] Um, yeah, and then there's a third thing, is that, um, basically the way on line normalization was done uh, is just using this recursion on [disfmarker] on the um, um, on the feature stream, [speaker003:] Yeah. [speaker002:] and [disfmarker] but this is a filter, so it has also a delay. Uh, and when we look at this filter actually it has a delay of eighty five milliseconds. So if we [disfmarker] [speaker003:] Eighty five. [speaker002:] Yeah. 
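(Editor's sketch of the kind of first-order on-line mean normalization just described. The smoothing constant `alpha` here is back-computed from the roughly 85 ms delay quoted for the mean-estimation filter, assuming a 100 Hz frame rate; it is not taken from the actual system.)

```python
# Recursive on-line mean normalization of a feature stream:
#   m[t] = m[t-1] + alpha * (x[t] - m[t-1]);  y[t] = x[t] - m[t]
# The mean estimator is a one-pole low-pass whose delay at DC is
# (1 - alpha) / alpha frames.
frame_rate = 100.0     # frames per second (assumed)
alpha = 0.105          # chosen so the estimator lags about 8.5 frames

def online_normalize(xs, alpha=alpha):
    """Subtract a recursively estimated running mean from a feature stream."""
    y = []
    m = 0.0
    for x in xs:
        m += alpha * (x - m)       # recursive mean estimate
        y.append(x - m)
    return y

delay_ms = (1 - alpha) / alpha / frame_rate * 1000.0
print(f"mean-estimator delay: {delay_ms:.0f} ms")
```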
If we want to be very correct, so if we want to [disfmarker] the estimation of the mean t t to [disfmarker] to be [disfmarker] well, the right estimation of the mean, we have to t to take eighty five milliseconds in the future. Mmm. [speaker003:] Hmm! That's a little bit of a problem. [speaker002:] Yeah. Um, But, well, when we add up everything it's [disfmarker] it will be alright. We would be at six so, sixty five, plus ten, plus [disfmarker] for the downsampling, plus eighty five for the on line normalization. So it's [speaker003:] Uh, yeah, but then there's [disfmarker] [speaker002:] plus [disfmarker] plus eighty for the neural net and PCA. [speaker003:] Oh. [speaker002:] So it would be around two hundred and forty [disfmarker] so, well, [speaker003:] Just [disfmarker] just barely in there. [speaker002:] plus [disfmarker] plus the frames, but it's OK. [speaker001:] What's the allowable? [speaker003:] Two fifty, unless they changed the rules. [speaker002:] Hmm. [speaker003:] Which there is [disfmarker] there's some discussion of. But [disfmarker] [speaker001:] What were they thinking of changing it to? [speaker002:] Yeah. [speaker003:] Uh, well the people who had very low latency want it to be low [disfmarker] uh, very [disfmarker] [vocalsound] very very narrow, uh, latency bound. And the people who have longer latency don't. So. [speaker001:] Huh. [speaker002:] So, yeah. [speaker003:] Unfortunately we're the main ones with long latency, but [speaker001:] Ah! [speaker003:] But, uh, you know, it's [disfmarker] [speaker002:] Yeah, and basically the best proposal had something like thirty or forty milliseconds of latency. [speaker003:] Yeah. [speaker002:] So. Well. [speaker003:] Yeah, so they were basically [disfmarker] I mean, they were more or less trading computation for performance and we were, uh, trading latency for performance. 
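The latency budget being added up in this exchange can be checked in a few lines. The stage names are paraphrases of the discussion, and the 250 ms bound is the Aurora limit quoted just afterwards:

```python
# Rough latency budget for the front end, using the per-stage delays
# quoted in the discussion (all values in milliseconds).  Stage names
# are paraphrases, not official labels from any proposal.
STAGES_MS = {
    "band-pass filtering (low band)": 65,
    "downsampling low-pass filter": 10,
    "on-line mean normalization": 85,
    "neural net + PCA context": 80,
}
LATENCY_BOUND_MS = 250  # the Aurora bound mentioned in the meeting

total = sum(STAGES_MS.values())
print(total)                       # 240
print(total <= LATENCY_BOUND_MS)   # True: "just barely in there"
```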
And they were dealing with noise explicitly and we weren't, and so I think of it as complementary, that if we can put the [disfmarker] [speaker001:] Think of it as what? [speaker003:] Complementary. [speaker001:] Hmm. [speaker003:] I think the best systems [disfmarker] so, uh, everything that we did in in a way it was [disfmarker] it was just adamantly insisting on going in with a brain damaged system, which is something [disfmarker] actually, we've done a lot over the last thirteen years. Uh, [vocalsound] which is we say, well this is the way we should do it. And then we do it. And then someone else does something that's straight forward. So, w th w this was a test that largely had additive noise and we did [disfmarker] we adde did absolutely nothing explicitly to handle ad additive noise. [speaker001:] Right. [speaker003:] We just, uh, you know, trained up systems to be more discriminant. And, uh, we did this, uh, RASTA like filtering which was done in the log domain and was tending to handle convolutional noise. We did [disfmarker] we actually did nothing about additive noise. So, um, the, uh, spectral sub subtraction schemes a couple places did seem to seem to do a nice job. And so, uh, we're talking about putting [disfmarker] putting some of that in while still keeping some of our stuff. I think you should be able to end up with a system that's better than both but clearly the way that we're operating for this other stuff does involved some latency to [disfmarker] to get rid of most of that latency. To get down to forty or fifty milliseconds we'd have to throw out most of what we're doing. And [disfmarker] and, uh, I don't think there's any good reason for it in the application actually. I mean, you're [disfmarker] you're [disfmarker] you're speaking to a recognizer on a remote server and, uh, having a [disfmarker] a [disfmarker] a quarter second for some processing to clean it up. It doesn't seem like it's that big a deal. [speaker001:] Mm hmm. 
[speaker003:] These aren't large vocabulary things so the decoder shouldn't take a really long time, and. So. [speaker001:] And I don't think anybody's gonna notice the difference between a quarter of a second of latency and thirty milliseconds of latency. [speaker003:] No. What [disfmarker] what does [disfmarker] wa was your experience when you were doing this stuff with, uh, the [disfmarker] the [disfmarker] the surgical, uh, uh, microscopes and so forth. Um, how long was it from when somebody, uh, finished an utterance to when, uh, something started happening? [speaker001:] Um, we had a silence detector, so we would look for the end of an utterance based on the silence detector. [speaker003:] Mm hmm. [speaker001:] And I [disfmarker] I can't remember now off the top of my head how many frames of silence we had to detect before we would declare it to be the end of an utterance. [speaker003:] Mm hmm. Mm hmm. [speaker001:] Um, but it was, uh, I would say it was probably around the order of two hundred and fifty milliseconds. [speaker003:] Yeah, and that's when you'd start doing things. [speaker001:] Yeah, we did the back trace at that point [speaker003:] Yeah. [speaker001:] to get the answer. [speaker003:] Of course that didn't take too long at that point. [speaker001:] No, no it was pretty quick. [speaker003:] Yeah. Yeah, [speaker001:] So [disfmarker] [speaker003:] so you [disfmarker] you [disfmarker] so you had a so you had a [disfmarker] a quarter second delay before, uh, plus some little processing time, [speaker001:] this w Right. [speaker003:] and then the [disfmarker] the microscope would start moving or something. [speaker001:] Right. [speaker003:] Yeah. [speaker001:] Right. [speaker003:] And there's physical inertia there, so probably the [disfmarker] the motion itself was all [disfmarker] [speaker001:] And it felt to, uh, the users that it was instantaneous. I mean, as fast as talking to a person. 
It [disfmarker] th I don't think anybody ever complained about the delay. [speaker003:] Yeah, so you would think as long as it's under half a second or something. [speaker001:] Yeah. [speaker003:] Uh, I'm not an expert on that but. [speaker001:] Yeah. I don't remember the exact numbers but [speaker003:] Yeah. [speaker001:] it was something like that. I don't think you can really tell. A person [disfmarker] I don't think a person can tell the difference between, uh, you know, a quarter of a second and a hundred milliseconds, and [disfmarker] [speaker003:] Yeah. [speaker001:] I'm not even sure if we can tell the difference between a quarter of a second and half a second. [speaker003:] Yeah. [speaker001:] I mean it just [disfmarker] it feels so quick. [speaker003:] I mean, basically if you [disfmarker] yeah, if you said, uh, um, "what's the, uh, uh [disfmarker] what's the shortest route to the opera?" and it took half a second to get back to you, [speaker001:] Yeah. [speaker003:] I mean, [vocalsound] it would be f I mean, it might even be too abrupt. You might have to put in a s a s [vocalsound] a delay. [speaker001:] Yeah. I mean, it may feel different than talking to a person [speaker003:] Yeah. [speaker001:] because when we talk to each other we tend to step on each other's utterances. So like if I'm asking you a question, you may start answering before I'm even done. [speaker003:] Yeah. [speaker001:] So it [disfmarker] it would probably feel different [speaker003:] Right. [speaker001:] but I don't think it would feel slow. [speaker003:] Right. Well, anyway, I mean, I think [disfmarker] we could cut [disfmarker] we know what else, we could cut down on the neural net time by [disfmarker] by, uh, playing around a little bit, going more into the past, or something like that. We t we talked about that. [speaker001:] So is the latency from the neural net caused by how far ahead you're looking? [speaker003:] Mm hmm. [speaker002:] Mm hmm. 
[speaker003:] And there's also [disfmarker] well, there's the neural net and there's also this, uh, uh, multi frame, uh, uh, KLT. [speaker001:] Wasn't there [disfmarker] Was it in the, uh, recurrent neural nets where they weren't looking ahead at all? [speaker003:] They weren't looking ahead much. They p they looked ahead a little bit. [speaker001:] A little bit. OK. [speaker003:] Yeah. Yeah, I mean, you could do this with a recurrent net. And [disfmarker] and then [disfmarker] But you also could just, um, I mean, we haven't experimented with this but I imagine you could, um, uh, predict a, uh [disfmarker] um, a label, uh, from more in the past than in [disfmarker] than [disfmarker] than in the future. I mean, we've d we've done some stuff with that before. I think it [disfmarker] it works OK. [speaker002:] Mm hmm. [speaker003:] So. [speaker001:] We've always had [disfmarker] usually we used the symmetric windows but [speaker003:] Yeah, [speaker001:] I don't think [disfmarker] [speaker003:] but we've [disfmarker] but we played a little bit with [disfmarker] with asymmetric, guys. [speaker001:] Yeah. [speaker003:] You can do it. So. So, that's what [disfmarker] that's what you're busy with, s messing around with this, [speaker002:] Uh, yeah. [speaker003:] yeah. And, uh, [speaker004:] Also we were thinking to [disfmarker] to, uh, apply the eh, spectral subtraction from Ericsson [speaker002:] Yeah. [speaker003:] Uh huh. [speaker004:] and to [disfmarker] to change the contextual KLT for LDA. [speaker001:] Change the what? [speaker004:] The contextual KLT. [speaker001:] I'm missing that last word. Context [speaker003:] K [disfmarker] KLT. [speaker005:] Oh. KLT. [speaker004:] KLT [disfmarker] [speaker001:] KLT. Oh, KLT. [speaker003:] Mm hmm. [speaker004:] KLT, [speaker001:] Uh huh. [speaker004:] I'm sorry. Uh, to change and use LDA discriminative. [speaker002:] Yeah. [speaker004:] But [disfmarker] [speaker003:] Uh huh. [speaker004:] I don't know. 
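The asymmetric-window idea mentioned here, predicting a label from more past than future context to cut the net's look-ahead, can be sketched minimally. The 7-past/1-future split is illustrative, not a tuned configuration from any experiment:

```python
def context_window(frames, t, past=7, future=1):
    """Stack an asymmetric context window around frame t.

    Using more past than future context (here 7 past frames and 1
    future frame, both illustrative) trades a symmetric window's
    look-ahead for latency: the delay contributed by the net drops
    from half the window to `future` frames.  Edges are padded by
    repeating the first or last frame.
    """
    n = len(frames)
    idx = range(t - past, t + future + 1)
    return [frames[min(max(i, 0), n - 1)] for i in idx]

# With 10 frames, the window at t=0 is all left-padding plus one
# future frame; at t=9 the last frame is repeated once.
frames = list(range(10))
print(context_window(frames, 0))  # [0, 0, 0, 0, 0, 0, 0, 0, 1]
print(context_window(frames, 9))  # [2, 3, 4, 5, 6, 7, 8, 9, 9]
```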
[speaker003:] Uh, [speaker001:] What is the advantage of that? [speaker004:] Uh [disfmarker] [speaker002:] Well, it's that by the for the moment we have, uh, something that's discriminant and nonlinear. And the other is linear but it's not discriminant at all. Well, it's it's a linear transformation, that [disfmarker] Uh [disfmarker] [speaker003:] So at least just to understand maybe what the difference was between how much you were getting from just putting the frames together and how much you're getting from the discriminative, what the nonlinearity does for you or doesn't do for you. Just to understand it a little better I guess. [speaker002:] Mmm. Well [disfmarker] uh [disfmarker] yeah. Actually what we want to do, perhaps it's to replace [disfmarker] to [disfmarker] to have something that's discriminant but linear, also. And to see if it [disfmarker] if it improves ov over [disfmarker] over the non discriminant linear transformation. And if the neural net is better than this [speaker001:] Hmm. [speaker002:] or, well. [speaker003:] Yeah, well, that's what I meant, is to see whether [disfmarker] whether it [disfmarker] having the neural net really buys you anything. [speaker002:] So. Ye Mmm. [speaker003:] Uh, I mean, it doe did look like it buys you something over just the KLT. [speaker002:] Yeah. [speaker003:] But maybe it's just the discrimination and [disfmarker] and maybe [disfmarker] yeah, maybe the nonlinear discrimination isn't necessary. [speaker004:] S maybe. [speaker002:] Yeah. Mm hmm. [speaker004:] Maybe. [speaker003:] Could be. Good [disfmarker] good to know. But the other part you were saying was the spectral subtraction, so you just kind of, uh [disfmarker] [speaker002:] Yeah. [speaker003:] At what stage do you do that? Do you [disfmarker] you're doing that, um [disfmarker]? [speaker002:] So it would be on the um [disfmarker] [speaker004:] We was think [speaker002:] on [disfmarker] on the mel frequency bands, so. 
Yeah, [speaker004:] Yeah, we [disfmarker] no [disfmarker] nnn [speaker002:] be before everything. [speaker003:] OK, so just do that on the mel f [speaker004:] We [disfmarker] we was thinking to do before after VAD or Oh, [comment] we don't know exactly when it's better. [speaker002:] Yeah, um [disfmarker] [speaker004:] Before after VAD or [disfmarker] and then [speaker003:] So [disfmarker] so you know that [disfmarker] that [disfmarker] that the way that they're [disfmarker] [speaker002:] Um. [speaker003:] uh, one thing that would be no [disfmarker] good to find out about from this conference call is that what they were talking about, what they're proposing doing, was having a third party, um, run a good VAD, and [disfmarker] and determine boundaries. [speaker004:] Yeah. [speaker003:] And then given those boundaries, then have everybody do the recognition. [speaker004:] Begin to work. [speaker003:] The reason for that was that, um, uh [disfmarker] if some one p one group put in the VAD and another didn't, uh, or one had a better VAD than the other since that [disfmarker] they're not viewing that as being part of the [disfmarker] the task, and that any [disfmarker] any manufacturer would put a bunch of effort into having some s kind of good speech silence detection. It still wouldn't be perfect but I mean, e the argument was "let's not have that be part of this test." "Let's [disfmarker] let's separate that out." And so, uh, I guess they argued about that yesterday and, yeah, I'm sorry, I don't [disfmarker] don't know the answer but we should find out. I'm sure we'll find out soon what they, uh [disfmarker] what they decided. So, uh [disfmarker] Yeah, so there's the question of the VAD but otherwise it's [disfmarker] it's on the [disfmarker] the, uh [disfmarker] the mel fil filter bank, uh, energies I guess? [speaker004:] Mm hmm. [speaker003:] You do [disfmarker] doing the [disfmarker]? [speaker002:] Mmm, yeah. [speaker004:] Mm hmm. 
[speaker003:] And you're [disfmarker] you're subtracting in the [disfmarker] in the [disfmarker] in the [disfmarker] I guess it's power [disfmarker] power domain, uh, or [disfmarker] or magnitude domain. Probably power domain, right? why [speaker002:] I guess it's power domain, yeah. I don't remember exactly. [speaker004:] I don't remember. [speaker003:] Yeah, [speaker002:] But [disfmarker] yeah, so it's before everything else, [speaker003:] yep. [speaker002:] and [disfmarker] [speaker003:] I mean, if you look at the theory, it's [disfmarker] it should be in the power domain but [disfmarker] but, uh, I've seen implementations where people do it in the magnitude domain [speaker002:] Yeah. [speaker003:] and [disfmarker] [speaker002:] Mmm. [speaker003:] I have asked people why and they shrug their shoulders and say, "oh, it works." So. [speaker002:] Yeah. [speaker003:] Uh, and there's this [disfmarker] I guess there's this mysterious [disfmarker] I mean people who do this a lot I guess have developed little tricks of the trade. I mean, there's [disfmarker] there's this, um [disfmarker] you don't just subtract the [disfmarker] the estimate of the noise spectrum. You subtract th that times [disfmarker] [speaker002:] A little bit more and [disfmarker] [speaker003:] Or [disfmarker] or less, [speaker002:] Yeah. [speaker003:] or [disfmarker] [speaker001:] Really? [speaker002:] Yeah. [speaker001:] Huh! [speaker003:] Yeah. [speaker002:] And generated this [disfmarker] this, [speaker003:] Uh. [speaker002:] um, so you have the estimation of the power spectra of the noise, and you multiply this by a factor which is depend dependent on the SNR. [speaker004:] Hmm, maybe. [speaker002:] So. Well. [speaker001:] Hmm! [speaker002:] When the speech lev when the signal level is more important, compared to this noise level, the coefficient is small, and around one. But when the power le the s signal level is uh small compared to the noise level, the coefficient is more important. 
And this reduce actually the music musical noise, uh [speaker001:] Oh! [speaker002:] which is more important during silence portions, when the s the energy's small. [speaker001:] Uh huh. Hmm! [speaker002:] So there are tricks like this but, mmm. [speaker001:] Hmm! [speaker003:] Yeah. So. [speaker002:] Yeah. [speaker001:] Is the estimate of the noise spectrum a running estimate? [speaker002:] Yeah. [speaker003:] Yeah. [speaker001:] Or [disfmarker] [speaker002:] Yeah. [speaker003:] Well, that's [disfmarker] I mean, that's what differs from different [disfmarker] different tasks and different s uh, spectral subtraction methods. I mean, if [disfmarker] if you have, uh, fair assurance that, uh, the noise is [disfmarker] is quite stationary, then the smartest thing to do is use as much data as possible to estimate the noise, [speaker001:] Hmm! [speaker003:] get a much better estimate, and subtract it off. [speaker001:] Mm hmm. [speaker003:] But if it's varying at all, which is gonna be the case for almost any real situation, you have to do it on line, uh, with some forgetting factor or something. [speaker001:] So do you [disfmarker] is there some long window that extends into the past over which you calculate the average? [speaker003:] Well, there's a lot of different ways of computing the noise spectrum. So one of the things that, uh, Hans Guenter Hirsch did, uh [disfmarker] and pas and other people [disfmarker] actually, he's [disfmarker] he wasn't the only one I guess, was to, uh, take some period of [disfmarker] of [disfmarker] of speech and in each band, uh, develop a histogram. So, to get a decent histogram of these energies takes at least a few seconds really. But, uh [disfmarker] I mean you can do it with a smaller amount but it's pretty rough. And, um, in fact I think the NIST standard method of determining signal to noise ratio is based on this. So [disfmarker] [speaker001:] A couple seconds? 
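The SNR-dependent over-subtraction trick described in this exchange might look roughly like the following. The exact alpha schedule and the flooring constant are illustrative assumptions, not values from any of the Aurora proposals:

```python
import math

def spectral_subtract(power, noise, floor_db=-20.0):
    """SNR-dependent over-subtraction on one frame of mel-band powers.

    `power` and `noise` are per-band power estimates.  The subtraction
    factor alpha shrinks toward 1 at high SNR and grows at low SNR,
    which is the trick described in the meeting for taming musical
    noise in silence portions.  The alpha schedule and floor here are
    illustrative, not any proposal's actual parameters.
    """
    out = []
    for p, n in zip(power, noise):
        snr_db = 10.0 * math.log10(max(p, 1e-12) / max(n, 1e-12))
        # Illustrative schedule: alpha = 1 at >= 20 dB SNR, up to 4 near 0 dB.
        alpha = min(4.0, max(1.0, 4.0 - 0.15 * snr_db))
        # Subtract scaled noise, then floor to avoid negative power.
        floor = n * 10.0 ** (floor_db / 10.0)
        out.append(max(p - alpha * n, floor))
    return out

# High-SNR band barely over-subtracts; low-SNR band is floored.
print(spectral_subtract([100.0, 1.0], [1.0, 1.0]))
```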
[speaker003:] No, no, it's based on this kind of method, this histogram method. [speaker001:] Hmm. [speaker003:] So you have a histogram. Now, if you have signal and you have noise, you basically have these two bumps in the histogram, which you could approximate as two Gaussians. [speaker001:] But wh don't they overlap sometimes? [speaker003:] Oh, yeah. [speaker001:] OK. [speaker003:] So you have a mixture of two Gaussians. [speaker001:] Yeah. [speaker003:] Right? And you can use EM to figure out what it is. You know. [speaker001:] Yeah. [speaker003:] So [disfmarker] so basically now you have this mixture of two Gaussians, you [disfmarker] you n know what they are, and, uh [disfmarker] I mean, sorry, you estimate what they are, and, uh, so this gives you what the signal is and what the noise e energy is in that band in the spectrum. And then you look over the whole thing and now you have a noise spectrum. So, uh, Hans Guenter Hirsch and others have used that kind of method. And the other thing to do is [disfmarker] which is sort of more trivial and obvious [comment] [disfmarker] is to, uh, uh, determine through magical means that [disfmarker] that, uh, there's no speech in some period, and then see what the spectrum is. [speaker001:] Mm hmm. [speaker003:] Uh, but, you know, it's [disfmarker] that [disfmarker] that [disfmarker] that's tricky to do. It has mistakes. Uh, and if you've got enough time, uh, this other method appears to be somewhat more reliable. Uh, a variant on that for just determining signal to noise ratio is to just, uh [disfmarker] you can do a w a uh [disfmarker] an iterative thing, EM like thing, to determine means only. I guess it is EM still, but just [disfmarker] just determine the means only. Don't worry about the variances. [speaker001:] Mm hmm. [speaker003:] And then you just use those mean values as being the [disfmarker] the, uh uh signal to noise ratio in that band. 
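The means-only, EM-like estimate described above can be sketched as a 1-D two-means iteration over a band's energy values. This is a deliberate simplification (hard assignments, no variances), not Hirsch's actual histogram algorithm:

```python
def two_means(energies, iters=20):
    """Means-only, EM-like estimate of noise and speech levels in one band.

    Each energy value is assigned to the nearer of two means, which are
    then re-estimated (1-D 2-means).  The lower mean is taken as the
    noise level and the upper as the speech level, as in the histogram
    method sketched in the meeting; the per-band difference of the two
    means then gives a band SNR estimate.
    """
    lo, hi = min(energies), max(energies)
    for _ in range(iters):
        a = [e for e in energies if abs(e - lo) <= abs(e - hi)]
        b = [e for e in energies if abs(e - lo) > abs(e - hi)]
        if a:
            lo = sum(a) / len(a)
        if b:
            hi = sum(b) / len(b)
    return lo, hi  # (noise mean, speech mean)

# Two well-separated energy clusters: the two bumps in the histogram.
print(two_means([1.0, 1.2, 0.9, 1.1, 9.8, 10.1, 10.3, 9.9]))
```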
[speaker001:] But what is the [disfmarker] it seems like this kind of thing could add to the latency. I mean, depending on where the window was that you used to calculate [pause] the signal to noise ratio. [speaker002:] Yeah, sure. But [disfmarker] Mmm. [speaker003:] Not necessarily. Cuz if you don't look into the future, right? [speaker001:] OK, well that [disfmarker] I guess that was my question, [speaker003:] if you just [disfmarker] yeah [disfmarker] I mean, if you just [disfmarker] [speaker001:] yeah. [speaker003:] if you [disfmarker] you, uh [disfmarker] a at the beginning you have some [disfmarker] [speaker001:] Guess. [speaker003:] esti some guess and [disfmarker] and, uh, uh [disfmarker] [speaker002:] Yeah, but it [disfmarker] [speaker003:] It's an interesting question. I wonder how they did do it? [speaker002:] Actually, it's a mmm [disfmarker] If if you want to have a good estimation on non stationary noise you have to look in the [disfmarker] in the future. I mean, if you take your window and build your histogram in this window, um, what you can expect is to have an estimation of th of the noise in [disfmarker] in the middle of the window, not at the end. So [disfmarker] [speaker004:] Mm hmm. [speaker003:] Well, yeah, but what does [disfmarker] what [disfmarker] what [disfmarker] what does Alcatel do? [speaker002:] the [disfmarker] but [disfmarker] but people [disfmarker] [speaker003:] And [disfmarker] and France Telecom. [speaker002:] The They just look in the past. I guess it works because the noise are, uh pret uh, almost stationary [speaker003:] Pretty stationary. [speaker005:] Pretty stationary, [speaker002:] but, [speaker005:] yeah. 
[speaker002:] um [disfmarker] [speaker003:] Well, the thing, e e e e Yeah, y I mean, you're talking about non stationary noise but I think that spectral subtraction is rarely [disfmarker] is [disfmarker] is not gonna work really well for [disfmarker] for non stationary noise, [speaker002:] Well, if y if you have a good estimation of the noise, [speaker003:] you know? [speaker002:] yeah, because well it it has to work. [speaker003:] But it's hard to [disfmarker] [speaker002:] i [speaker003:] but that's hard to do. [speaker002:] Yeah, that's hard to do. Yeah. [speaker003:] Yeah. So [disfmarker] so I think that [disfmarker] that what [disfmarker] what is [disfmarker] wh what's more common is that you're going to be helped with r slowly varying or stationary noise. [speaker002:] But [disfmarker] Mm hmm. [speaker003:] That's what spectral subtraction will help with, practically speaking. [speaker002:] Mm hmm. Mm hmm. [speaker003:] If it varies a lot, to get a If [disfmarker] if [disfmarker] to get a good estimate you need a few seconds of speech, even if it's centered, right? if you need a few seconds to get a decent estimate but it's changed a lot in a few seconds, then it, you know, i it's kind of a problem. [speaker002:] Mm hmm. Yeah. [speaker003:] I mean, imagine e five hertz is the middle of the [disfmarker] of the speech modulation spectrum, [speaker002:] Mmm. [speaker003:] right? So imagine a jack hammer going at five hertz. [speaker002:] Yeah, [speaker003:] I mean, good [disfmarker] good luck. [speaker002:] that's [disfmarker] [speaker003:] So, [speaker002:] So in this case, yeah, sure, you cannot [disfmarker] [speaker003:] Yeah. [speaker002:] But I think y um, Hirsch does experiment with windows of like between five hundred milliseconds and one second. And well, five hundred wa was not so bad. 
I mean and he worked on non stationary noises, like noise modulated with well, wi with amplitude modulations and things like that, and [disfmarker] [speaker001:] Were his, uh, windows centered around the [disfmarker] [speaker002:] But [disfmarker] Um, yeah. Well, I think [disfmarker] Yeah. Well, in [disfmarker] in the paper he showed that actually the estimation of the noise is [disfmarker] is delayed. Well, it's [disfmarker] there is [disfmarker] you [disfmarker] you have to center the window, yeah. [speaker003:] Yeah. [speaker002:] Mmm. [speaker003:] No, I understand it's better to do but I just think that [disfmarker] that, uh, for real noises wh what [disfmarker] what's most likely to happen is that there'll be some things that are relatively stationary [speaker002:] Mmm. [speaker003:] where you can use one or another spectral subtraction thing and other things where it's not so stationary [speaker002:] Yeah. [speaker003:] and [disfmarker] I mean, you can always pick something that [disfmarker] that falls between your methods, uh, [speaker002:] Hmm. [speaker003:] uh, but I don't know if, you know, if sinusoidally, uh, modul amplitude modulated noise is [disfmarker] is sort of a big problem in [disfmarker] in in [disfmarker] practice. I think that [vocalsound] it's uh [disfmarker] [speaker002:] Yeah. [speaker001:] We could probably get a really good estimate of the noise if we just went to the noise files, and built the averages from them. [speaker003:] Yeah. Well. [speaker002:] What [disfmarker] What do you mean? [speaker003:] Just cheat [disfmarker] You're saying, cheat. [speaker002:] But if the [disfmarker] if the noise is stationary perhaps you don't even need some kind of noise estimation algorithm. [speaker003:] Yeah. Yeah. [speaker002:] We just take th th th the beginning of the utterance and [speaker003:] Oh, yeah, sure. [speaker004:] It's the same. [speaker002:] I I know p I don't know if people tried this for Aurora. 
Well, everybody seems to use some kind of adaptive, well, scheme [speaker004:] Yeah. [speaker003:] But [disfmarker] but [disfmarker] [speaker004:] A dictionary. [speaker002:] but, [speaker003:] you know, stationary [disfmarker] [speaker002:] is it very useful [speaker001:] Very slow adaptation. [speaker002:] and is the c [speaker003:] Right, [speaker001:] th [speaker003:] the word "stationary" is [disfmarker] has a very precise statistical meaning. But, you know, in [disfmarker] in signal processing really what we're talking about I think is things that change slowly, uh, compared with our [disfmarker] our processing techniques. So if you're driving along in a car I [disfmarker] I would think that most of the time the nature of the noise is going to change relatively slowly. [speaker002:] Mm hmm. [speaker003:] It's not gonna stay absolute the same. If you [disfmarker] if you check it out, uh, five minutes later you may be in a different part of the road or whatever. [speaker002:] Mm hmm. [speaker003:] But it's [disfmarker] it's [disfmarker] i i i using the local characteristics in time, is probably going to work pretty well. [speaker002:] Mm hmm. [speaker003:] But you could get hurt a lot if you just took some something from the beginning of all the speech, of, you know, an hour of speech and then later [disfmarker] [speaker002:] Yeah. [speaker003:] Uh, so they may be [disfmarker] you know, may be overly, uh, complicated for [disfmarker] for this test but [disfmarker] but [disfmarker] but, uh, I don't know. But what you're saying, you know, makes sense, though. I mean, if possible you shouldn't [disfmarker] you should [disfmarker] you should make it, uh, the center of the [disfmarker] center of the window. But [disfmarker] uh, we're already having problems with these delay, uh [disfmarker] [vocalsound] delay issues. [speaker002:] Yeah, so. [speaker003:] So, uh, we'll have to figure ways without it. 
Um, [speaker001:] If they're going to provide a, uh, voice activity detector that will tell you the boundaries of the speech, then, couldn't you just go outside those boundaries and do your estimate there? [speaker003:] Oh, yeah. You bet. Yeah. So I [disfmarker] I imagine that's what they're doing, right? Is they're [disfmarker] they're probably looking in nonspeech sections and getting some, uh [disfmarker] [speaker002:] Yeah, they have some kind of threshold on [disfmarker] on the previous estimate, and [disfmarker] So. Yeah. I think. Yeah, I think Ericsson used this kind of threshold. Yeah, so, they h they have an estimate of the noise level and they put a threshold like six or ten DB above, and what's under this threshold is used to update the estimate. Is [disfmarker] is that right [speaker004:] Yeah. [speaker002:] or [disfmarker]? [speaker004:] I think so. I have not here the proposal. [speaker002:] So it's [disfmarker] it's [disfmarker] Yeah. [speaker003:] Does France Telecom do this [disfmarker] [speaker002:] It's like saying what's under the threshold is silence, and [disfmarker] [speaker005:] Hmm. [speaker003:] Does France Telecom do th do the same thing? More or less? [speaker002:] I d I [disfmarker] Y you know, perhaps? [speaker004:] No. I do I have not here the proposal. [speaker003:] OK. Um, OK, if we're [disfmarker] we're done [disfmarker] done with that, uh, let's see. Uh, maybe we can talk about a couple other things briefly, just, uh, things that [disfmarker] that we've been chatting about but haven't made it into these meetings yet. So you're coming up with your quals proposal, and, uh [disfmarker] Wanna just give a two three minute summary of what you're planning on doing? [speaker005:] Oh, um, two, three, it can be shorter than that. Um. [speaker003:] Yeah. [speaker005:] Well, I've [disfmarker] I've talked to some of you already. Um, but I'm, uh, looking into extending the work done by Larry Saul and John Allen and uh Mazin Rahim. 
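The threshold-gated running noise update tentatively attributed to Ericsson in this exchange, combined with the past-only forgetting-factor recursion mentioned earlier, might be sketched like this. The forgetting factor and the 10 dB gate are illustrative values:

```python
def update_noise(noise, frame, forget=0.98, gate_db=10.0):
    """Past-only running noise estimate with a forgetting factor.

    Per the threshold trick described in the meeting, a band only
    contributes to the noise update when its energy is within
    `gate_db` of the current estimate; louder frames are assumed to
    contain speech and leave the estimate unchanged.  The constants
    are illustrative, not any proposal's actual settings.
    """
    out = []
    for n, x in zip(noise, frame):
        gate = n * 10.0 ** (gate_db / 10.0)
        out.append(forget * n + (1.0 - forget) * x if x <= gate else n)
    return out

# A quiet frame nudges the estimate; a loud (speechy) frame is skipped.
print(update_noise([1.0], [0.5]))    # slightly below 1.0
print(update_noise([1.0], [100.0]))  # unchanged
```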
Um, they [disfmarker] they have a system that's, uh, a multi band, um, system but their multi band is [disfmarker] is a little different than the way that we've been doing multi band in the past, where um [disfmarker] Where we've been [@ @] [comment] uh taking [pause] um [pause] [vocalsound] sub band features and i training up these neural nets and [disfmarker] on [disfmarker] on phonetic targets, and then combining them some somehow down the line, um, they're [disfmarker] they're taking sub band features and, um, training up a detector that detects for, um, these phonetic features for example, um, he presents um, uh, a detector to detect sonorance. And so what [disfmarker] what it basically is [disfmarker] is, um [disfmarker] it's [disfmarker] there's [disfmarker] at the lowest level, there [disfmarker] it's [disfmarker] it's an OR ga I mean, it's an AND gate. So, uh, on each sub band you have several independent tests, to test whether um, there's the existence of sonorance in a sub band. And then, um, it c it's combined by a soft AND gate. And at the [disfmarker] at the higher level, for every [disfmarker] if, um [disfmarker] The higher level there's a soft OR gate. Uh, so if [disfmarker] if this detector detects um, the presence of [disfmarker] of sonorance in any of the sub bands, then the detect uh, the OR gate at the top says, "OK, well this frame has evidence of sonorance." And these are all [disfmarker] [speaker001:] What are [disfmarker] what are some of the low level detectors that they use? [speaker005:] Oh, OK. Well, the low level detectors are logistic regressions. Um, and the, uh [disfmarker] the one o [speaker003:] So that, by the way, basically is a [disfmarker] is one of the units in our [disfmarker] in our [disfmarker] our neural network. So that's all it is. It's a sig it's a sigmoid, [speaker005:] Yeah. [speaker003:] uh, with weighted sum at the input, [speaker001:] Hmm. [speaker005:] Right. 
[speaker003:] which you train by gradient [pause] descent. [speaker005:] Yeah, so he uses, um, an EM algorithm to [disfmarker] to um train up these um parameters for the logistic regression. The [disfmarker] [speaker003:] Well, actually, yeah, so I was using EM to get the targets. So [disfmarker] so you have this [disfmarker] this [disfmarker] this AND gate [disfmarker] what we were calling an AND gate, but it's a product [disfmarker] product rule thing at the output. And then he uses, uh, i u and then feeding into that are [disfmarker] I'm sorry, there's [disfmarker] it's an OR at the output, isn't it? Yeah, [speaker005:] Mm hmm. [speaker003:] so that's the product. And then, um, then he has each of these AND things. And, um, but [disfmarker] so they're little neural [disfmarker] neural units. Um, and, um, they have to have targets. And so the targets come from EM. [speaker001:] And so are each of these, low level detectors [comment] [disfmarker] are they, uh [disfmarker] are these something that you decide ahead of time, like "I'm going to look for this particular feature or I'm going to look at this frequency," or [disfmarker] What [disfmarker] what [disfmarker] what are they looking at? [speaker005:] Um [disfmarker] [speaker001:] What are their inputs? [speaker005:] Uh Right, so the [disfmarker] OK, so at each for each sub band [comment] there are basically, uh, several measures of SNR and [disfmarker] and correlation. [speaker001:] Ah, OK, [speaker005:] Um, [speaker001:] OK. [speaker005:] um and he said there's like twenty of these per [disfmarker] per sub band. Um, and for [disfmarker] for every s every sub band, e you [disfmarker] you just pick ahead of time, um, "I'm going to have like five [pause] i independent logistic tests." [speaker001:] Mm hmm. [speaker005:] And you initialize these parameters, um, in some [disfmarker] some way and use EM to come up with your training targets for a [disfmarker] for the [disfmarker] the low level detectors. 
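The detector architecture described here, per-band logistic tests combined by a soft AND within each sub-band and a soft OR across sub-bands, can be sketched as follows. The weights and biases are placeholders for illustration, not trained parameters from Saul, Allen, and Rahim's system:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sonorance_detector(subband_feats, weights, biases):
    """Soft AND within each sub-band, soft OR across sub-bands.

    Each sub-band runs several independent logistic tests on its
    features (e.g. sub-band SNR and correlation measures); their
    probabilities are multiplied (soft AND: all tests must fire),
    and band scores combine as 1 - prod(1 - p_band) (soft OR: any
    band showing sonorance suffices).  Weights and biases here are
    placeholders, not trained parameters.
    """
    band_probs = []
    for feats, W, b in zip(subband_feats, weights, biases):
        tests = [sigmoid(sum(w * x for w, x in zip(row, feats)) + b_i)
                 for row, b_i in zip(W, b)]
        band_probs.append(math.prod(tests))  # soft AND over tests
    miss = 1.0
    for p in band_probs:                     # soft OR over bands
        miss *= (1.0 - p)
    return 1.0 - miss

# One confident band is enough to fire the OR gate at the top.
print(sonorance_detector([[1.0], [1.0]],
                         [[[5.0]], [[-5.0]]],
                         [[0.0], [0.0]]))
```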
[speaker001:] Mm hmm. [speaker005:] And then, once you get that done, you [disfmarker] you [disfmarker] you train the whole [disfmarker] whole thing on maximum likelihood. Um, and h he shows that using this [disfmarker] this method to detect sonorance is it's very robust compared to, um [disfmarker] [vocalsound] to typical, uh, full band Gaussian mixtures um estimations of [disfmarker] of sonorance. [speaker001:] Mm hmm. Mm hmm. [speaker005:] And, uh so [disfmarker] so that's just [disfmarker] that's just one detector. So you can imagine building many of these detectors on different features. You get enough of these detectors together, um, then you have enough information to do, um, higher level discrimination, for example, discriminating between phones [speaker001:] Mm hmm. [speaker005:] and then you keep working your way up until you [disfmarker] you build a full recognizer. [speaker001:] Mm hmm. [speaker005:] So, um, that's [disfmarker] that's the direction which I'm [disfmarker] I'm thinking about going in my quals. [speaker001:] Cool. [speaker003:] You know, it has a number of properties that I really liked. I mean, one is the going towards, um, using narrow band information for, uh, ph phonetic features of some sort rather than just, uh, immediately going for the [disfmarker] the typical sound units. [speaker001:] Right. [speaker003:] Another thing I like about it is that you t this thing is going to be trained [disfmarker] explicitly trained for a product of errors rule, which is what, uh, Allen keeps pointing out that Fletcher observed in the twenties, [speaker001:] Mm hmm. [speaker003:] uh, for people listening to narrow band stuff. That's Friday's talk, by the way. And then, um, Uh, the third thing I like about it is, uh, and we've played around with this in a different kind of way a little bit but it hasn't been our dominant way of [disfmarker] of operating anything, um, this issue of where the targets come from. 
So in our case when we've been training it multi band things, the way we get the targets for the individual bands is, uh, that we get the phonetic label [disfmarker] for the sound there [speaker001:] Mm hmm. [speaker003:] and we say, "OK, we train every [disfmarker]" What this is saying is, OK, that's maybe what our ultimate goal is [disfmarker] or not ultimate but penultimate [vocalsound] goal is getting these [disfmarker] these small sound units. But [disfmarker] but, um, along the way how much should we, uh [disfmarker] uh, what should we be training these intermediate things for? I mean, because, uh, we don't know uh, that this is a particularly good feature. I mean, there's no way, uh [disfmarker] someone in the audience yesterday was asking, "well couldn't you have people go through and mark the individual bands and say where the [disfmarker] where it was sonorant or not?" [speaker001:] Mm hmm. [speaker003:] But, you know, I think having a bunch of people listening to critical band wide, [vocalsound] uh, chunks of speech trying to determine whether [disfmarker] [comment] I think it'd be impossible. [speaker005:] Ouch. [speaker003:] It's all gonna sound like [disfmarker] like sine waves to you, more or less. [speaker001:] Mm hmm. [speaker003:] I mean [disfmarker] Well not I mean, it's g all g narrow band uh, i I m I think it's very hard for someone to [disfmarker] to [disfmarker] a person to make that determination. So, um, um, we don't really know how those should be labeled. It could sh be that you should, um, not be paying that much attention to, uh, certain bands for certain sounds, uh, in order to get the best result. [speaker001:] Mm hmm. [speaker003:] So, um, what we have been doing there, just sort of mixing it all together, is certainly much [disfmarker] much cruder than that. We trained these things up on the [disfmarker] on the, uh the final label. 
Now we have I guess done experiments [disfmarker] you've probably done stuff where you have, um, done separate, uh, Viterbis on the different [disfmarker] [speaker005:] Yeah. Forced alignment on the sub band labels? [speaker003:] Yeah. [speaker005:] Yeah. [speaker003:] You've done that. Did [disfmarker] did that help at all? [speaker005:] Um, it helps for one or t one iteration but um, anything after that it doesn't help. [speaker003:] So [disfmarker] so that may or may t it [disfmarker] that aspect of what he's doing may or may not be helpful because in a sense that's the same sort of thing. You're taking global information and determining what you [disfmarker] how you should [disfmarker] But this is [disfmarker] this is, uh, I th I think a little more direct. [speaker001:] How did they measure the performance of their detector? [speaker003:] And [disfmarker] Well, he's look he's just actually looking at, uh, the confusions between sonorant and non sonorant. [speaker001:] Mm hmm. [speaker003:] So he hasn't applied it to recognition or if he did he didn't talk about it. It's [disfmarker] it's just [disfmarker] And one of the concerns in the audience, actually, was that [disfmarker] that, um, the, uh, uh [disfmarker] he [disfmarker] he did a comparison to, uh, you know, our old foil, the [disfmarker] the nasty old standard recognizer with [vocalsound] mel [disfmarker] mel filter bank at the front, and H M Ms, and [disfmarker] and so forth. And, um, it didn't do nearly as well, especially in [disfmarker] in noise. But the [disfmarker] one of the good questions in the audience was, well, yeah, but that wasn't trained for that. 
I mean, this use of a very smooth, uh, spectral envelope is something that, you know, has evolved as being generally a good thing for speech recognition but if you knew that what you were gonna do is detect sonorants or not [disfmarker] So sonorants and non sonorants is [disfmarker] is [disfmarker] is almost like voiced unvoiced, except I guess that the voiced stops are [disfmarker] are also called "obstruents". Uh, so it's [disfmarker] it's [disfmarker] uh, but with the exception of the stops I guess it's pretty much the same as voiced unvoiced, right? [speaker001:] Mm hmm. [speaker003:] So [disfmarker] so [disfmarker] Um. So, um, if you knew you were doing that, if you were doing something say for a [disfmarker] a, uh [disfmarker] a [disfmarker] a Vocoder, you wouldn't use the same kind of features. You would use something that was sensitive to the periodicity and [disfmarker] and not just the envelope. Uh, and so in that sense it was an unfair test. Um, so I think that the questioner was right. It [disfmarker] it was in that sense an unfair test. Nonetheless, it was one that was interesting because, uh, this is what we are actually using for speech recognition, these smooth envelopes. And this says that perhaps even, you know, trying to use them in the best way that we can, that [disfmarker] that [disfmarker] that we ordinarily do, with, you know, Gaussian mixtures and H M Ms [comment] and so forth, you [disfmarker] you don't, uh, actually do that well on determining whether something is sonorant or not. [speaker001:] Didn't they [disfmarker] [speaker003:] Which means you're gonna make errors between similar sounds that are son sonorant or obstruent. [speaker001:] Didn't they also do some kind of an oracle experiment where they said if we [pause] could detect the sonorants perfectly [pause] and then show how it would improve speech recognition? I thought I remember hearing about an experiment like that. [speaker003:] The these same people? [speaker001:] Mm hmm. 
[speaker003:] I don't remember that. [speaker001:] Hmm. [speaker003:] That would [disfmarker] that's [disfmarker] you're right, that's exactly the question to follow up this discussion, is suppose you did that, uh, got that right. Um, Yeah. [speaker001:] Hmm. [speaker002:] What could be the other low level detectors, I mean, for [disfmarker] [comment] Other kind of features, or [disfmarker]? in addition to detecting sonorants or [disfmarker]? [speaker005:] Um [disfmarker] [speaker002:] Th that's what you want to [disfmarker] to [disfmarker] to go for also [speaker005:] What t [speaker002:] or [disfmarker]? [speaker005:] Oh, build other [disfmarker] other detectors on different [pause] phonetic features? [speaker002:] Other low level detectors? Yeah. [speaker005:] Um, uh Let's see, um, Yeah, I d I don't know. e Um, um, I mean, w easiest thing would be to go [disfmarker] go do some voicing stuff but that's very similar to sonorance. [speaker002:] Mm hmm. [speaker005:] Um, [speaker001:] When we [disfmarker] when we talked with John Ohala the other day we made a list of some of the things that w [speaker005:] Yeah. Oh! OK. [speaker001:] like frication, [speaker005:] Mm hmm. [speaker001:] abrupt closure, [speaker005:] Mm hmm. [speaker001:] R coloring, nasality, voicing [disfmarker] [speaker003:] Yeah, so there's a half dozen like that that are [disfmarker] [speaker001:] Uh. [speaker005:] Yeah, nasality. [speaker003:] Now this was coming at it from a different angle but maybe it's a good way to start. Uh, these are things which, uh, John felt that a [disfmarker] a, uh [disfmarker] a human annotator would be able to reliably mark. [speaker005:] Oh, OK. [speaker003:] So the sort of things he felt would be difficult for a human annotator to reliably mark would be tongue position kinds of things. [speaker005:] Placing stuff, [speaker001:] Mm hmm. [speaker003:] Yeah. [speaker005:] yeah. [speaker003:] Uh [disfmarker] [speaker001:] There's also things like stress. 
You can look at stress. [speaker005:] Mm hmm. [speaker003:] But stress doesn't, uh, fit in this thing of coming up with features that will distinguish words from one another, right? It's a [disfmarker] it's a good thing to mark and will probably help us ultimate with recognition [speaker001:] Yeah, there's a few cases where it can like permit [comment] and permit. [speaker003:] but [disfmarker] [speaker001:] But [disfmarker] that's not very common in English. In other languages it's more uh, important. [speaker003:] Well, yeah, but i either case you'd write PERMIT, right? So you'd get the word right. [speaker001:] No, I'm saying, i i e I thought you were saying that stress doesn't help you distinguish between words. [speaker003:] Um, [speaker001:] Oh, I see what you're saying. As long as you get [disfmarker] The sequence, [speaker003:] We're g if we're doing [disfmarker] if we're talking about transcription as opposed to something else [disfmarker] [speaker001:] right? Yeah. Yeah, yeah, yeah. Yeah. [speaker003:] Yeah. [speaker001:] Right. So where it could help is maybe at a higher level. [speaker005:] Like a understanding application. [speaker003:] Right. [speaker001:] Yeah. Understanding, yeah. [speaker003:] Yeah. [speaker005:] Yeah. [speaker001:] Exactly. [speaker003:] But that's this afternoon's meeting. Yeah. We don't understand anything in this meeting. Yeah, so that's [disfmarker] yeah, that's, you know, a neat [disfmarker] neat thing and [disfmarker] and, uh [disfmarker] [speaker005:] S so, um, Ohala's going to help do these, uh [pause] transcriptions of the meeting data? [speaker003:] So. [speaker001:] Uh, well I don't know. We d we sort of didn't get that far. Um, we just talked about some possible features that could be marked by humans and, um, [speaker005:] Hmm. [speaker001:] because of having maybe some extra transcriber time we thought we could go through and mark some portion of the data for that. And, uh [disfmarker] [speaker005:] Hmm. 
[speaker003:] Yeah, I mean, that's not an immediate problem, that we don't immediately have a lot of extra transcriber time. [speaker001:] Yeah, [speaker003:] But [disfmarker] but, uh, in the long term I guess Chuck is gonna continue the dialogue with John [speaker001:] right. [speaker003:] and [disfmarker] and, uh, and, we'll [disfmarker] we'll end up doing some I think. [speaker001:] I'm definitely interested in this area, too, f uh, acoustic feature stuff. [speaker003:] Uh huh. [speaker005:] OK. [speaker001:] So. [speaker003:] Yeah, I think it's an interesting [disfmarker] interesting way to go. [speaker005:] Cool. [speaker003:] Um, I say it like "said int". I think it has a number of good things. Um, so, uh, y you want to talk maybe a c two or three minutes about what we've been talking about today and other days? [speaker006:] Ri Yeah, OK, so, um, we're interested in, um, methods for far mike speech recognition, um, [pause] mainly, uh, methods that deal with the reverberation [pause] in the far mike signal. So, um, one approach would be, um, say MSG and PLP, like was used in Aurora one and, um, there are other approaches which actually attempt to [pause] remove the reverberation, instead of being robust to it like MSG. And so we're interested in, um, comparing the performance of [pause] um, a robust approach like MSG with these, um, speech enhancement or de reverber de reverberation approaches. [speaker002:] Mm hmm. [speaker006:] And, um, [vocalsound] it looks like we're gonna use the Meeting Recorder digits data for that. [speaker002:] And the de reverberation algorithm, do you have [disfmarker] can you give some more details on this or [disfmarker]? Does it use one microphone? [speaker006:] o [speaker002:] Several microphones? [speaker006:] o [speaker002:] Does it [disfmarker]? [speaker006:] OK, well, um, there was something that was done by, um, a guy named Carlos, I forget his last name, [comment] who worked with Hynek, who, um, [speaker003:] Avendano. 
[speaker006:] OK. Who, um, [speaker003:] Yeah. [speaker002:] Mm hmm. [speaker006:] um, it was like RASTA in the sense that of it was, um, de convolution by filtering um, except he used a longer time window, [speaker002:] Mm hmm. [speaker006:] like a second maybe. And the reason for that is RASTA's time window is too short to, um include the whole, um, reverberation [disfmarker] um, I don't know what you call it the reverberation response. I if you see wh if you see what I mean. The reverberation filter from my mouth to that mike is like [disfmarker] it's t got it's too long in the [disfmarker] in the time domain for the um [disfmarker] for the RASTA filtering to take care of it. And, um, then there are a couple of other speech enhancement approaches which haven't been tried for speech recognition yet but have just been tried for enhancement, which, um, have the assumption that um, you can do LPC um analysis of th of the signal you get at the far microphone and the, um, all pole filter that you get out of that should be good. It's just the, um, excitation signal [comment] that is going to be distorted by the reverberation and so you can try and reconstruct a better excitation signal and, um, feed that through the i um, all pole filter and get enhanced speech with reverberation reduced. [speaker002:] Mm hmm. Mm hmm. [speaker003:] There's also this, uh, um, uh, echo cancellation stuff that we've sort of been chasing, so, uh we have, uh [disfmarker] and when we're saying these digits now we do have a close microphone signal and then there's the distant microphone signal. And you could as a kind of baseline say, "OK, given that we have both of these, uh, we should be able to do, uh, a cancellation." 
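[Editorial aside: the all-pole idea just described might be sketched like this. A toy, whole-signal version under loose assumptions: real systems work frame by frame with windowing, and the actual excitation-reconstruction step — whatever cleans up the residual — is left out entirely, so this shows only the analysis/synthesis plumbing. `residual` and `resynthesize` are exact inverses of each other.]

```python
import numpy as np

def lpc(signal, order):
    """Autocorrelation-method LPC via Levinson-Durbin. Returns a so
    that the predictor is x[n] ~ sum_j a[j] * x[n-j-1]."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:][: order + 1]
    a = np.zeros(order)
    err = r[0]
    for i in range(order):
        acc = r[i + 1] - np.dot(a[:i], r[i:0:-1])
        k = acc / err
        a_new = a.copy()
        a_new[i] = k
        a_new[:i] = a[:i] - k * a[:i][::-1]  # reflect earlier coefficients
        a = a_new
        err *= (1.0 - k * k)
    return a

def residual(signal, a):
    # Inverse-filter the far-mike signal to get the LPC excitation.
    e = signal.copy()
    for i in range(len(signal)):
        for j in range(len(a)):
            if i - j - 1 >= 0:
                e[i] -= a[j] * signal[i - j - 1]
    return e

def resynthesize(excitation, a):
    # Drive the all-pole filter with a (possibly cleaned-up)
    # excitation to get the enhanced signal back.
    y = np.zeros(len(excitation))
    for i in range(len(excitation)):
        y[i] = excitation[i]
        for j in range(len(a)):
            if i - j - 1 >= 0:
                y[i] += a[j] * y[i - j - 1]
    return y
```

[The enhancement approaches mentioned would replace the raw residual with a reconstructed, less-reverberant excitation before resynthesis.]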
So that, uh, um, we [disfmarker] we, uh, essentially identify the system in between [disfmarker] the linear time invariant system between the microphones and [disfmarker] and [disfmarker] and [disfmarker] and re and invert it, uh, or [disfmarker] or cancel it out to [disfmarker] to some [disfmarker] some reasonable approximation [speaker002:] Mm hmm. [speaker003:] through one method or another. Uh, that's not a practical thing, uh, if you have a distant mike, you don't have a close mike ordinarily, but we thought that might make [disfmarker] also might make a good baseline. Uh, it still won't be perfect because there's noise. Uh, but [disfmarker] And then there are s uh, there are single microphone methods that I think people have done for, uh [disfmarker] for this kind of de reverberation. Do y do you know any references to any? Cuz I [disfmarker] I w I was [disfmarker] w w I [disfmarker] I lead him down a [disfmarker] a bad path on that. [speaker002:] Uh, I g I guess [disfmarker] I guess when people are working with single microphones, they are more trying to do [disfmarker] [speaker003:] But. [speaker002:] well, not [disfmarker] not very [disfmarker] Well, there is the Avendano work, [speaker003:] Right. [speaker002:] but also trying to mmm, uh [disfmarker] trying to f t find the de convolution filter but in the um [disfmarker] not in the time domain but in the uh the stream of features uh I guess. [speaker003:] Yeah, OK. [speaker002:] Well, [@ @] [comment] there [disfmarker] there's someone working on this on i in Mons So perhaps, yeah, we should try t to [disfmarker] He's working on this, on trying to [disfmarker] [speaker003:] Yeah. [speaker002:] on re reverberation, um [disfmarker] [speaker003:] The first paper on this is gonna have great references, I can tell already. [speaker002:] Mm hmm. 
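[Editorial aside: that close-mike/distant-mike baseline could be sketched as below — hypothetical code, not anything the group actually ran. The close-mike signal is the reference, an FIR filter is adapted toward the distant-mike signal with normalized LMS, so the adapted filter approximates the room path and the error signal is what that estimated path fails to explain.]

```python
import numpy as np

def nlms(reference, observed, taps=64, mu=0.5, eps=1e-8):
    """Identify the LTI path from the close mike (reference) to the
    distant mike (observed) with a normalized-LMS adaptive FIR filter.
    Returns the filter estimate and the residual error signal."""
    w = np.zeros(taps)
    err = np.zeros(len(observed))
    for n in range(len(observed)):
        # Most recent reference samples, newest first, zero-padded
        # at the start of the signal.
        x = reference[max(0, n - taps + 1): n + 1][::-1]
        x = np.pad(x, (0, taps - len(x)))
        y = w @ x                                # predicted distant-mike sample
        err[n] = observed[n] - y
        w += mu * err[n] * x / (eps + x @ x)     # NLMS update
    return w, err
```

[As noted above this is only a baseline: it assumes you have the close-mike signal, which an actual distant-mike system would not.]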
[speaker003:] It's always good to have references, especially when reviewers read it or [disfmarker] or one of the authors and, [vocalsound] feel they'll "You're OK, you've r You cited me." [speaker002:] So, yeah. Well, he did echo cancellation and he did some fancier things like, uh, [vocalsound] [vocalsound] uh, training different network on different reverberation conditions and then trying to find the best one, but. Well. [speaker003:] Yeah. [speaker002:] Yeah. [speaker003:] The oth the other thing, uh, that Dave was talking about earlier was, uh, uh, multiple mike things, uh, where they're all distant. So, um, I mean, there's [disfmarker] there's all this work on arrays, but the other thing is, uh, [pause] what can we do that's cleverer that can take some advantage of only two mikes, uh, particularly if there's an obstruction between them, as we [disfmarker] as we have over there. [speaker002:] If there is [disfmarker]? [speaker003:] An obstruction between them. [speaker002:] Ah, yeah. [speaker003:] It creates a shadow which is [disfmarker] is helpful. It's part of why you have such good directionality with, [vocalsound] with two ears even though they're not several feet apart. [speaker002:] Mm hmm. [speaker003:] For most [disfmarker] for most people's heads. [speaker001:] That could help though. [speaker003:] So that [disfmarker] Yeah, the [disfmarker] the head, in the way, is really [disfmarker] that's what it's for. It's basically, [speaker001:] That's what the head's for? [speaker003:] Yeah, it's to separate the ears. [speaker001:] To separate the ears? [speaker003:] That's right, yeah. Yeah. Uh, so. Anyway, O K. Uh, I think that's [disfmarker] that's all we have this week. [speaker005:] Oh. [speaker003:] And, uh, I think it's digit time. [speaker001:] Actually the, um [disfmarker] For some reason the digit forms are blank. [speaker003:] Yeah? 
[speaker001:] Uh, I think th that may be due to the fact that [comment] Adam ran out of digits, [comment] uh, and didn't have time to regenerate any. [speaker003:] Oh! Oh! I guess it's [disfmarker] Well there's no real reason to write our names on here then, [speaker001:] Yeah, if you want to put your credit card numbers and, uh [disfmarker] [speaker003:] is there? [speaker005:] Oh, no [disfmarker]? [speaker003:] Or do [disfmarker] did any [disfmarker] do we need the names for the other stuff, or [disfmarker]? [speaker001:] Uh, yeah, I do need your names and [disfmarker] and the time, and all that, [speaker003:] Oh, OK. [speaker001:] cuz we put that into the "key" files. [speaker003:] Oh, OK. [speaker001:] Um. But w [speaker003:] OK. [speaker001:] That's why we have the forms, uh, even if there are no digits. [speaker003:] OK, yeah, I didn't notice this. I'm sitting here and I was [disfmarker] I was about to read them too. It's a, uh, blank sheet of paper. [speaker001:] So I guess we're [disfmarker] we're done. [speaker003:] Yeah, yeah, I'll do my credit card number later. OK. [speaker004:] And we already got the crash out of the way. It did crash, so I feel much better, earlier. [speaker006:] Yeah. [speaker005:] Interesting. [speaker006:] Will you get the door, [speaker005:] Hmm. [speaker006:] and [disfmarker]? OK. [speaker004:] OK, so um. [speaker006:] You collected an agenda, huh? [speaker004:] I did collect an agenda. So I'm gonna go first. Mwa ha ha! It shouldn't take too long. [speaker005:] Yeah. [speaker004:] Um, so we're pretty much out of digits. We've gone once through the set. Um, so the only thing I have to do [speaker006:] No there's only ten. [speaker004:] Yeah, that's right. so I [disfmarker] I just have to go through them [speaker006:] Well, OK. [speaker004:] and uh pick out the ones that have problems, and either correct them or have them re read. So we probably have like four or five more forms to be read, to be once through the set. 
I've also extracted out about an hour's worth. We have about two hours worth. I extracted out about an hour's worth which are the f digits with [disfmarker] for which whose speaker have speaker forms, have filled out speaker forms. Not everyone's filled out a speaker form. So I extracted one for speakers who have speaker forms and for meetings in which the "key" file and the transcript files are parsable. Some of the early key files, it looks like, were done by hand, and so they're not automatically parsable and I have to go back and fix those. So what that means is we have about an hour of transcribed digits that we can play with. Um, [speaker006:] So you think two [disfmarker] you think two hours is the [disfmarker] is the total that we have? [speaker004:] Liz [disfmarker] Yep, yeah. [speaker006:] And you think we th uh, I [disfmarker] I didn't quite catch all these different things that are not quite right, but you think we'll be able to retrieve the other hour, reasonably? [speaker004:] Yes, absolutely. [speaker006:] OK. [speaker004:] So it's just a question of a little hand editing of some files and then waiting for more people to turn in their speaker forms. I have this web based speaker form, and I sent mail to everyone who hadn't filled out a speaker form, and they're slowly s trickling in. [speaker006:] So the relevance of the speaker form here, s [speaker004:] It's for labeling the extracted audio files. [speaker006:] Oh, OK. [speaker004:] By speaker ID and microphone type. [speaker006:] Wasn't like whether they were giving us permission to use their digits or something. [speaker004:] No, I spoke with Jane about that and we sort of decided that it's probably not an issue that [disfmarker] We edit out any of the errors anyway. Right? [speaker006:] Yeah. [speaker004:] So the there are no errors in the digits, you'll always read the string correctly. So I can't imagine why anyone would care. 
So the other topic with digits is uh, Liz would like to elicit different prosodics, and so we tried last week with them written out in English. And it just didn't work at all because no one grouped them together. So it just sounded like many many more lines instead of anything else. So in conversations with Liz and uh Jane we decided that if you wrote them out as numbers instead of words it would elicit more phone number, social security number like readings. The problem with that is it becomes numbers instead of digits. When I look at this, that first line is "sixty one, sixty two, eighteen, eighty six, ten." Um, and so the question is does anyone care? Um, I've already spoken with Liz and she feels that, correct me if I'm wrong, that for her, connected numbers is fine, [speaker005:] Mm hmm. [speaker004:] as opposed to connected digits. Um, I think two hours is probably fine for a test set, but it may be a little short if we actually wanna do training and adaptation and all that other stuff. [speaker006:] Yeah Um, do um you want different prosodics, so if you always had the same groupings you wouldn't like that? Is that correct? [speaker007:] Well, we actually figured out a way to [disfmarker] [speaker004:] Yeah, the [disfmarker] the [disfmarker] [speaker007:] the [disfmarker] the groupings are randomly generated. [speaker006:] No but, I was asking if that was something you really cared about because if it wasn't, it seems to me if you made it really specifically telephone groupings that maybe people wouldn't, uh, go and do numbers so much. You know if it if it's [disfmarker] [speaker007:] I think they may still do it, [speaker001:] Uh [disfmarker] [speaker007:] um, [speaker006:] Maybe some, but I probably not so much. [speaker002:] What about putting a hyphen between the numbers in the group? [speaker007:] And [disfmarker] [speaker006:] Right? So if you [disfmarker] if [disfmarker] if you have uh [speaker004:] Six dash one, you mean? 
[speaker006:] if you go six six six uh dash uh two nine three one. [speaker007:] I [disfmarker] well OK [disfmarker] I [disfmarker] it might help, I would like to g get away from having only one specific grouping. [speaker006:] That's what I was asking, yeah. [speaker007:] Um, so if that's your question, [speaker006:] Yeah. [speaker007:] but I mean it seems to me that, at least for us, we can learn to read them as digits if that's what people want. [speaker005:] Yeah. [speaker007:] I [disfmarker] I'm [speaker005:] Yeah. [speaker007:] don't think that'd be that hard to read them as single digits. [speaker005:] I agree. [speaker007:] Um, and it seems like that might be better for you guys since then you'll have just more digit data, [speaker004:] Right. [speaker007:] and that's always a good thing. It's a little bit better for me too [speaker004:] Yep. [speaker007:] because the digits are easier to recognize. They're better trained than the numbers. [speaker006:] Right. [speaker004:] So we could just, uh, put in the instructions "read them as digits". [speaker007:] Right. Right, read them as single digits, so sixty one w is read as six one, [speaker005:] Mm hmm. [speaker007:] and if people make a mistake we [disfmarker] [speaker004:] How about "O" versus "zero"? [speaker006:] I mean, the other thing is we could just bag it because it's [disfmarker] it's [disfmarker] it's I'm not worrying about it I mean, because we do have digits training data that we have from uh from OGI. I'm sorry, digits [disfmarker] numbers training that we have from OGI, we've done lots and lots of studies with that. And um. [speaker007:] But it's nice to get it in this room with the acous [speaker006:] Yeah. [speaker007:] I mean [disfmarker] for [disfmarker] it's [disfmarker] [speaker006:] No, no, I guess what I'm saying is that [speaker004:] Just let them read it how they read it. 
[speaker006:] to some extent maybe we could just read them [disfmarker] have them read how [disfmarker] how they read it and it just means that we have to expand our [disfmarker] our vocabulary out to stuff that we already have. [speaker007:] Right. Well that's fine with me as long as [disfmarker] It's just that I didn't want to cause the people who would have been collecting digits the other way to not have the digits. [speaker006:] Yeah. [speaker007:] So [disfmarker] [speaker006:] We can go back to the other thing later. I mean we s we [disfmarker] we've [disfmarker] We can do this for awhile [speaker007:] OK. [speaker006:] and then go back to digits for awhile, or um. Do yo I mean, do you want [disfmarker] do you want this [disfmarker] Do you need training data or adaptation data out of this? [speaker007:] OK. [speaker006:] How much of this do you need? with uh the [disfmarker] [speaker007:] It's actually unclear right now. I just thought well we're [disfmarker] if we're collec collecting digits, and Adam had said we were running out of the TI forms, I thought it'd be nice to have them in groups, and probably, all else being equal, it'd be better for me to just have single digits since it's, you know, a recognizer's gonna do better on those anyway, [speaker006:] OK. [speaker007:] um, and it's more predictable. So we can know from the transcript what the person said and the transcriber, in general. [speaker006:] OK, well if you pre [speaker007:] But if they make mistakes, it's no big deal if the people say a hundred instead of "one OO". and also w maybe we can just let them choose "zero" versus "O" as they [disfmarker] as they like because even the same person c sometimes says "O" and sometimes says "zero" in different context, [speaker006:] Yeah. [speaker007:] and that's sort of interesting. 
So I don't have a specific need cuz if I did I'd probably try to collect it, you know, without bothering this group, but if we can try it [disfmarker] [speaker004:] OK so [disfmarker] so I can just add to the instructions to read it as digits not as connected numbers. [speaker007:] Right, and you can give an example [speaker005:] Mm hmm. [speaker007:] like, you know, "six [disfmarker] sixty one would be read as six one". [speaker004:] Right. [speaker007:] And I think people will get it. [speaker005:] Mm hmm. And i actually it's no more artificial than what we've been doing with words. I'm sure people can adapt to this, [speaker007:] Right, right. [speaker005:] read it single. [speaker007:] It's just easier to read. [speaker005:] The spaces already bias it toward being separated. [speaker007:] Right. [speaker005:] And I know I'm gonna find this easier than words. [speaker004:] Oh yeah, absolutely, [speaker007:] OK [speaker004:] cognitively it's much easier. [speaker007:] I also had a hard [disfmarker] hard time with the words, but then we went back and forth on that. [speaker006:] Yeah. [speaker007:] OK, so let's give that a try [speaker004:] OK. And is the spacing alright or do you think there should be more space between digits and groups? [speaker006:] OK. [speaker007:] and [disfmarker] I mean what do other people think [speaker004:] Or is that alright? [speaker007:] cuz you guys are reading [comment] them. [speaker005:] I think that i it's fine. I it [disfmarker] it [disfmarker] to me it looks like you've got the func the idea of grouping and you have the grou the idea of separation [speaker004:] OK. [speaker007:] OK. [speaker005:] and, you know, it's just a matter of u i the instructions, that's all. [speaker007:] Great. OK. Well let's give it a try. [speaker004:] And I think there are about ten different grouping patterns [speaker006:] Let's try it. [speaker004:] isn't that right, Liz?
[speaker007:] Righ right, and you just [disfmarker] they're randomly [nonvocalsound] generated and randomly assigned to digits. [speaker004:] That we did. [speaker005:] I did [disfmarker] [speaker006:] So we have [disfmarker] [speaker005:] Mm hmm. [speaker006:] Sorry, [speaker005:] Go ahead. [speaker006:] I [disfmarker] I was just gonna say, so we have in the vicinity of forty hours of [disfmarker] of recordings now. And you're saying two hours, uh, is digits, so that's roughly the ratio then, something like twenty [disfmarker] twenty to one. [speaker004:] Yep. [speaker006:] Which I guess makes [disfmarker] makes sense. So if we did another forty hours of recordings then we could get another couple hours of this. [speaker004:] Right. [speaker006:] Um, yeah like you say, I think a couple hours for a [disfmarker] for a [disfmarker] for a test [disfmarker] test set's OK. It'd be nice to get, you know, more later because we'll [disfmarker] we might use [disfmarker] use this up, uh, in some sense, [speaker005:] Mm hmm. [speaker006:] but [disfmarker] but uh [disfmarker] [speaker004:] Right. [speaker005:] Yeah, I also would like to argue for that cuz it [disfmarker] it seems to me that, um, there's a real strength in having the same test replicated in [disfmarker] a whole bunch of times and adding to that basic test bank. [speaker004:] Right. [speaker005:] Hmm? Cuz then you have, you know, more and more, u chances to get away from random errors. And I think, um, the other thing too is that right now we have sort of a stratified sample with reference to dialect groups, and it might be [disfmarker] there might be an argument to be made for having uh f for replicating all of the digits that we've done, which were done by non native speakers so that we have a core that totally replicates the original data set, which is totally American speakers, and then we have these stratified additional language groups overlapping certain aspects of the database. [speaker004:] Right. 
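[Editorial aside: the form generation being described — three to five groups per line, two to four digits per group, groups and digits drawn at random, wider spacing between groups — might look something like this. A hypothetical reconstruction, not the actual form-generation script.]

```python
import random

def digit_line(rng):
    """One digit-form line: 3-5 groups of 2-4 random digits each,
    with wider spacing between groups than within them."""
    n_groups = rng.randint(3, 5)
    groups = []
    for _ in range(n_groups):
        n_digits = rng.randint(2, 4)
        groups.append(" ".join(str(rng.randint(0, 9)) for _ in range(n_digits)))
    return "   ".join(groups)  # triple space separates groups

def digit_form(n_lines=10, seed=None):
    """A full form: n_lines independently generated digit lines."""
    rng = random.Random(seed)
    return [digit_line(rng) for _ in range(n_lines)]
```

[The single spaces within a group and the triple spaces between groups are what cue readers to pause between groups, as discussed above.]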
I think that uh trying to duplicate, spending too much effort trying to duplicate the existing TI digits probably isn't too worthwhile because the recording situation is so different. [speaker006:] Yeah. [speaker004:] It's gonna be very hard to be comparable. [speaker005:] Except that if you have the stimuli [pause] comparable, then it says something about the [disfmarker] the contribution of setting [speaker006:] No it's [disfmarker] it's not the same. A little bit, [speaker005:] and [disfmarker] [speaker006:] but the other differences are so major. [speaker005:] OK. [speaker006:] They're such major sources of variance that it's [disfmarker] it's [disfmarker] it's uh [disfmarker] [speaker004:] Yeah I mean read versus not. [speaker005:] What's an example of a [disfmarker] of m some of the other differences? Any other a difference? [speaker006:] Well i i individual human glottis [vocalsound] is going to be different for each one, [speaker005:] OK. [speaker006:] you know, it's just [disfmarker] There's so many things. [speaker005:] OK. [speaker006:] it's [disfmarker] it [disfmarker] and [disfmarker] and enunciation. [speaker004:] Well, and not just that, I mean the uh the corpus itself. I mean, we're collecting it in a read digit in a particular list, and I'm sure that they're doing more specific stuff. I mean if I remember correctly it was like postman reading zipcodes and things like that. [speaker006:] TI digits was? I thought [disfmarker] I thought it was read. [speaker004:] I thought so. Was it read? [speaker006:] Yeah, I think the reading zipcode stuff you're thinking of would be OGI. [speaker004:] Oh, I may well be. [speaker006:] Yeah, no TI digits was read in th in read in the studio I believe. [speaker004:] I haven't ever listened to TI digits. [speaker006:] Yeah. [speaker004:] So I don't really know how it compares. [speaker006:] Yeah. 
But it [disfmarker] but [disfmarker] [speaker004:] But [disfmarker] but regardless it's gonna [disfmarker] it's hard to compare cross corpus. [speaker006:] It it's different people [pause] is the [disfmarker] is the core thing. [speaker004:] So. [speaker006:] And they're different circumstances with different recording environment and so forth, [speaker005:] OK, fine. [speaker006:] so it's [disfmarker] it's [disfmarker] it's really pretty different. But I think the idea of using a set thing was just to give you some sort of framework, so that even though you couldn't do exact comparisons, it wouldn't be s valid scientifically at least it'd give you some kind of uh frame of reference. Uh, you know it's not [disfmarker] [speaker005:] OK. [speaker002:] Hey Liz, What [disfmarker] what do the groupings represent? You said there's like ten different groupings? [speaker007:] Right, just groupings in terms of number of groups in a line, and number of digits in a group, and the pattern of groupings. [speaker002:] Mm hmm. Are the patterns [disfmarker] like are they based on anything or [speaker007:] Um, I [disfmarker] I just roughly looked at what kinds of digit strings are out there, and they're usually grouped into either two, three, or four, four digits at a time. [speaker002:] Oh. [speaker007:] And they can have, I mean, actually, things are getting longer and longer. In the old days you probably only had three sequences, and telephone numbers were less, and so forth. So, there's between, um [disfmarker] Well if you look at it, there are between like three and five groups, and each one has between two and four groupings and [disfmarker] I purposely didn't want them to look like they were in any kind of pattern. [speaker002:] Mmm. [speaker007:] So [speaker004:] And which group appears is picked randomly, and what the numbers are are picked randomly. [speaker002:] Mm hmm. [speaker007:] Right. 
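The random grouping scheme described just above — three to five groups per line, two to four digits per group, with both the grouping pattern and the digit values chosen at random — could be sketched as follows. This is only an illustrative reconstruction of what is said in the meeting, not the actual generation script; the function name and spacing choices are assumptions.

```python
import random

def generate_digit_line(rng):
    # One read-digit prompt line: 3-5 groups, each of 2-4 single digits,
    # with the grouping pattern and digit values both chosen at random.
    # (A sketch of the scheme described in the meeting, not the real script.)
    n_groups = rng.randint(3, 5)
    groups = []
    for _ in range(n_groups):
        size = rng.randint(2, 4)
        groups.append([str(rng.randint(0, 9)) for _ in range(size)])
    # Single spaces within a group, wider spacing between groups, so the
    # layout itself biases readers toward reading separated digits.
    return "   ".join(" ".join(g) for g in groups)

print(generate_digit_line(random.Random(7)))
```

A prompt sheet would then just repeat this for however many lines are wanted, so no two forms share the same pattern.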
[speaker004:] So unlike the previous one, which I d simply replicated TI digits, this is generated randomly. [speaker002:] Mmm, oh, OK. [speaker001:] Oh OK. [speaker007:] But I think it'd be great i to be able to compare digits, whether it's these digits or TI digits, to speakers, um, and compare that to their spontaneous speech, and then we do need you know a fair amount of [disfmarker] of digit data because you might be wearing a different microphone and, I mean [disfmarker] so it's [disfmarker] it's nice to have the digits you know, replicated many times. [speaker004:] Mm hmm. [speaker007:] Especially for speakers that don't talk a lot. So [vocalsound] um, [speaker004:] Yeah. [speaker007:] for adaptation. No, I'm serious, so we have a problem with acoustic adaptation, [speaker006:] Yeah. [speaker004:] Yeah all we have for some people is digits. [speaker001:] Yeah. [speaker007:] and we're not using the digit data now, but you know [disfmarker] [speaker004:] Oh, you're not. [speaker007:] Not for adaptation, nope. v W we're not [disfmarker] we were running adaptation only on the data that we ran recognition on and I'd [disfmarker] As soon as someone started to read transcript number, that's read speech and I thought well, we're gonna do better on that, [speaker004:] Oh I see. [speaker007:] that's not fair to use." [speaker004:] Oh yeah that's true, absolutely. [speaker007:] But, it might be fair to use the data for adaptation, [speaker001:] OK. [speaker007:] so. So those speakers who are very quiet, [comment] shy [disfmarker] [speaker004:] That would be interesting to see whether that helps. [speaker007:] r Right [disfmarker] [speaker004:] Do you think that would help adapting on [disfmarker] [speaker002:] Like Adam? [speaker004:] Yeah. [speaker006:] Yeah. [speaker004:] Yeah, I have a real problem with that. [speaker007:] Well, it sh I mean it's the same micropho see the nice thing is we have that in the [disfmarker] in the same meeting, [speaker004:] Right. 
[speaker006:] Yeah. [speaker007:] and so you don't get [disfmarker] [speaker004:] Same [disfmarker] same acoustics, same microphone, same channel. [speaker001:] Yeah. [speaker007:] Right, and so I still like the idea of having some kind of [pause] digit data. [speaker004:] OK. Good. [speaker006:] Yeah I mean, for the [disfmarker] for the um acoustic research, for the signal processing, farfield stuff, I see it as [disfmarker] as [disfmarker] as the place that we start. But, th I mean, it'd be nice to have twenty hours of digits data, but [disfmarker] but uh the truth is I'm hoping that we [disfmarker] we through the [disfmarker] the stuff that [disfmarker] that you guys have been doing as you continue that, we get, uh, the best we can do on the spontaneous stuff uh, uh nearfield, and then um, we do a lot of the testing of the algorithms on the digits for the farfield, and at some point when we feel it's mature and we understand what's going on with it then we [disfmarker] we have to move on to the spontaneous data with the farfield. So. [speaker007:] The only thing that we don't have, [speaker005:] Great. [speaker007:] I know this sounds weird, and maybe it's completely stupid, but we don't have any overlapping digits. [speaker004:] Yeah, we talked about that a couple times. [speaker007:] An yea I know it's weird, but um [disfmarker] [speaker001:] Overlapping digits! [speaker004:] The [disfmarker] the problem I see with trying to do overlapping digits is the cognitive load. [speaker007:] Alright everybody's laughing. OK. [speaker003:] Dueling digits. [speaker004:] No it's [disfmarker] it's not stupid, it's just [disfmarker] I mean, try to do it. [speaker007:] I'm just talkin for the stuff that like Dan Ellis is gonna try, [speaker004:] I mean, here, let's try it. [speaker007:] you know, cross talk cancellation. [speaker006:] Let's try it. [speaker004:] You read the last line, I'll read the first line. [speaker007:] OK. [speaker001:] Oh! 
[speaker007:] Wait [disfmarker] oh it [disfmarker] these are all the same forms. [speaker006:] Sixty one. [speaker007:] OK [comment] So but [disfmarker] [speaker004:] So [disfmarker] so you read the last line, I'll read the first line. [speaker007:] So you plu you plug your ears. [speaker006:] No, I'll p [speaker004:] Oh I guess if you plug your ears you could do it, but then you don't get the [disfmarker] the same effects. [speaker001:] Yeah. [speaker007:] Well, what I mean is actually no not the overlaps that are well governed linguistically, but the actual fact that there is speech coming from two people and the beam forming stuf [speaker004:] Yeah. [speaker007:] all the acoustic stuff that like Dan Ellis and [disfmarker] and company want to do. [speaker004:] Oh I see. [speaker007:] Digits are nice and well behaved, I mean Anyway, it's just a thought. [speaker004:] I guess we could try. We could try doing some. [speaker007:] It [disfmarker] it would go faster. [speaker002:] Parallel. [speaker007:] It would take one around [comment] amount of ti [speaker002:] It's the P make of digit reading. [speaker004:] Well [disfmarker] Well OK. Well let's try it. [speaker007:] That's right. I [disfmarker] I mea I'm [disfmarker] I was sort of serious, but I really, I mean, I'm [disfmarker] I don't feel strongly enough that it's a good idea, [speaker006:] See, y [speaker004:] You do the last line, I'll do the first line. [speaker007:] so. [speaker006:] OK. [speaker004:] O. [comment] That's not bad. [speaker006:] No, I can do it. [speaker007:] A and that prosody was great, by the way. [speaker002:] I couldn't understand a single thing you guys were saying. [speaker005:] I think it was numbers, but I'm not sure. [speaker007:] It [disfmarker] it sort of sounded like a duet, or something. [speaker001:] Yeah. [speaker002:] Performance art. [speaker006:] Alright, let's try three at once you [disfmarker] you pick one in the middle. [speaker001:] The Aurora theater. 
[speaker007:] OK. [speaker006:] Go. [speaker007:] I'm sorry. I'm mean I think it's doable, [speaker004:] The poor transcribers [speaker007:] I'm just [disfmarker] [speaker004:] they're gonna hate us. [speaker007:] So, we [disfmarker] we could have a round like where you do two at a time, and then the next person picks up when the first guy's done, or something. [speaker001:] So pairwise. [speaker006:] Oh like a round, yeah, like in a [disfmarker] a [disfmarker] [speaker007:] Like a, [speaker001:] Yeah, just pairwise, [speaker006:] yeah. [speaker007:] what do you call it? Li a r like [disfmarker] [speaker004:] A round. [speaker001:] or [speaker006:] Row, row, row your boat. [speaker003:] Round. [speaker001:] yeah. [speaker004:] Mm hmm. [speaker006:] Yeah. [speaker007:] yeah, like that. [speaker006:] OK. [speaker002:] It's gonna require some coordination. [speaker007:] Then it would go like h twice as fast, or [pause] a third as fast. Anyway, it's just a thought. [speaker005:] You have to have a similar pace. [speaker006:] Yeah. [speaker007:] I'm actually sort of serious if it would help people do that kind o but the people who wanna work on it we should talk to them. [speaker006:] I don't think we're gonna collect vast amounts of data that way, [speaker007:] So. [speaker004:] Mmm. [speaker006:] but I think having a little bit might at least be fun for somebody like Dan to play around with, [speaker007:] OK. [speaker004:] I think maybe if we wanted to do that we would do it as a separate session, [speaker006:] yeah. [speaker007:] Yeah. [speaker004:] something like that rather than doing it during a real meeting and you know, do two people at a time then three people at a time and things like that. So. [speaker007:] Can try it out. If we have nothing [disfmarker] if we have no agenda we could do it some week. [speaker004:] See [disfmarker] see what Dan thinks. Yeah, right. [speaker006:] Yeah, yeah. [speaker007:] OK. 
[speaker006:] Spend the whole time reading digits with different qu quantities. [speaker005:] c c Can I can I have an another [disfmarker] another question w about this? [speaker004:] I thought this was gonna be fast. Oh well. [speaker005:] So, um, there are these digits, which are detached digits, but there are other words that contain the same general phon phoneme sequences. Like "wonderful" has "one" in it and [disfmarker] and Victor Borge had a [disfmarker] had a piece on this where he inflated the digits. Well, I wonder if there's, um, an if there would be a value in having digits that are in essence embedded in real words to compare in terms of like the articulation of "one" in "wonderful" versus "one" as a digit being read. [speaker006:] That's "two" bad. Yeah. [speaker007:] I'm all "four" it. [speaker005:] There you go. [speaker004:] Not after I "eight" though. [speaker006:] Uh, they don't all work as well, do they? Hmm. What does nine work in? [speaker004:] Uh. [speaker006:] Uh, [speaker003:] Nein! You scream it. [speaker004:] Nein! [speaker006:] Oh. In German, [speaker004:] You have to be German, [speaker002:] It's great for the Germans. [speaker006:] yeah. [speaker001:] That's German, yeah. [speaker007:] Oh, oh! [speaker004:] yeah. [speaker005:] Nein. [speaker006:] That's right! [speaker001:] Yeah. [speaker004:] Oh! [speaker003:] It only sounds w good when you scream it, though. So. [speaker006:] I think everybody's a little punchy here [vocalsound] today. [speaker005:] Well, I mean, I just wanted to offer that as a possible task [speaker006:] Yes. [speaker005:] because, you know, if we were to each read his embedded numbers words in sent in sentences cuz it's like an entire sketch he does and I wouldn't take the inflated version. So he talks about the woman being "two derful", and [disfmarker] and [disfmarker] a But, you know, if it were to be deflated, just the normal word, it would be like a little story that we could read. [speaker006:] Mm hmm. 
[speaker005:] I don't know if it would be useful for comparison, but it's embedded numbers. [speaker006:] Well I don't know. [speaker004:] I think for something like that we'd be better off doing like uh TIMIT. [speaker006:] Well I think the question is what the research is, so I mean, I presume that the reason that you wanted to have these digits this way is because you wanted to actually do some research looking at the prosodic form here. [speaker004:] Hmm. [speaker007:] Right, yeah. [speaker006:] Yeah OK. So if somebody wanted to do that, if they wanted to look at the [disfmarker] the [disfmarker] the difference of the uh phones in the digits in the context of a word versus uh the digits [disfmarker] a [disfmarker] a non digit word versus in digit word, uh that would be a good thing to do, but I think someone would have to express interest in that. [speaker005:] I see. [speaker006:] I think, to [disfmarker] [speaker005:] OK. [speaker006:] I mean if you were interested in it then we could do it, for instance. [speaker005:] OK, thank you. Huh. [speaker004:] OK, are we done with digits? Um, We have ASR results from Liz, transcript status from Jane, and disk space and storage formats from Don. Does [disfmarker] do we have any prefer preference on which way we wanna [disfmarker] we wanna go? [speaker007:] Well I was actually gonna skip the ASR results part, in favor of getting the transcription stuff talked about [speaker004:] Mm hmm. [speaker007:] since I think that's more important to moving forward, but I mean Morgan has this paper copy and if people have questions, um, it's pretty preliminary in terms of ASR results because we didn't do anything fancy, but I think e just having the results there, and pointing out some main conclusions like it's not the speaking style that differs, it's the fact that there's overlap that causes recognition errors. 
And then, the fact that it's almost all insertion errors, which you would expect but you might also think that in the overlapped regions you would get substitutions and so forth, um, leads us to believe that doing a better segmentation, like your channel based segmentation, or some kind of uh, echo cancellation to get basically back down to the individual speaker utterances would be probably all that we would need to be able to do good recognition on the [disfmarker] on the close talking mikes. [speaker004:] Um, why don't you, if you have a hard copy, why don't you email it to the list. [speaker001:] So these [disfmarker] [speaker007:] So, that's about the summary [disfmarker] But this is [disfmarker] Morgan has this paper. [speaker001:] Yeah, yeah. [speaker007:] I mean he [disfmarker] he [disfmarker] [speaker006:] Yeah, so it's the same thing? [speaker004:] Oh it's in the paper. [speaker007:] it [disfmarker] it's that paper. [speaker006:] It's the same thing I mailed to every everybody that w where it was, [speaker004:] OK. [speaker007:] Yeah, yeah. So, we basically, um, did a lot of work on that [speaker004:] OK then, it's already been mailed. [speaker006:] yeah. [speaker007:] and it's [disfmarker] Let's see, th I guess the other neat thing is it shows for sure w that the lapel, you know within speaker is bad. [speaker004:] Horrible? [speaker007:] And it's bad because it picks up the overlapping speech. [speaker001:] So, your [disfmarker] your ASR results were run on the channels synchronized, [speaker007:] Yes, cuz that's all that w had been transcribed at the time, [speaker001:] OK. OK. OK. [speaker007:] um but as we [disfmarker] I mean I wanted to hear more about the transcription. If we can get the channel asynchronous or the [disfmarker] [speaker001:] Yeah. 
[speaker007:] the closer t that would be very interesting for us because we [disfmarker] [speaker002:] So if [disfmarker] [speaker006:] Yeah, that's [disfmarker] that's why I only used the part from use [speaker001:] Yeah. [speaker006:] which we had uh about uh about the alt over all the channels [speaker001:] Yeah. [speaker007:] Right. That's [disfmarker] [speaker001:] Yeah sure. Yeah. [speaker006:] or mixed channel rather mixed signal. [speaker001:] Yeah. [speaker002:] So if there was a segment of speech this long [speaker007:] cuz [disfmarker] [speaker001:] Yeah. [speaker004:] And someone said "oh" in the front [disfmarker] in the middle. [speaker002:] and oh and someone said "oh," the whole thing was passed to the recognizer? [speaker001:] There were several speakers in it, yeah. [speaker007:] That's right. In fact I [disfmarker] I pulled out a couple classic examples in case you wanna u use them in your talk of [speaker003:] Mm hmm. [speaker002:] That's why there's so many insertion errors? [speaker007:] Chuck on the lapel, so Chuck wore the lapel three out of four times. [speaker003:] Mmm. [speaker004:] I noticed that Chuck was wearing the lapel a lot. [speaker007:] Um, yeah, [speaker002:] Early on, yeah. [speaker007:] and I wore the lapel once, and for me the lapel was OK. I mean I still [disfmarker] and I don't know why. I'm [disfmarker] But um, for you it was [disfmarker] [speaker004:] Probably how you wear it [disfmarker] wore it I would guess. [speaker007:] Or who was next to me or something like that. [speaker003:] Yeah, where you were sitting probably affected it. [speaker001:] Yeah. 
[speaker007:] Right, but when Chuck wore the lapel and Morgan was talking there're a couple really long utterances where Chuck is saying a few things inside, and it's picking up all of Morgan's words pretty well and so the rec you know, there're error rates because of insertion [disfmarker] Insertions aren't bounded, so with a one word utterance and ten insertions you know you got huge error rate. [speaker004:] Uh huh. [speaker001:] Yeah. [speaker007:] And that's [disfmarker] that's where the problems come in. So I this is sort of what we expected, but it's nice to be able to [disfmarker] to show it. [speaker004:] Right. [speaker007:] And also I just wanted to mention briefly that, um, uh Andreas and I called up Dan Ellis who's still stuck in Switzerland, and we were gonna ask him if [disfmarker] if there're [disfmarker] you know, what's out there in terms of echo cancellation and things like that. Not that we were gonna do it, but we wanted to know what would need to be done. [speaker004:] And he said, "Lots lots lots lots." [speaker007:] And he [disfmarker] We've given him the data we have so far, so these synchronous cases where there are overlap. [speaker001:] Yep. [speaker007:] And he's gonna look into trying to run some things that are out there and see how well it can do because right now we're not able to actually report on recognition in a real paper, [speaker002:] So [disfmarker] [speaker007:] like a Eurospeech paper, because it would look sort of premature. [speaker002:] So [disfmarker] So the idea is that you would take this big hunk where somebody's only speaking a small amount in it, and then try to figure out where they're speaking [comment] based on the other peopl [speaker007:] Right. Or who's [disfmarker] At any point in time who's the foreground speaker, who's the background speaker. [speaker002:] I thought we were just gonna move the boundaries in. [speaker001:] So yeah [disfmarker] [speaker007:] So. 
[speaker001:] Yeah, should it [disfmarker] [speaker004:] Well that's with the hand stuff. [speaker007:] So there's like [disfmarker] [speaker004:] But how would you do that automatically? [speaker002:] Right. [speaker001:] Uh, I've actually done some experiments with cross correlation [speaker007:] Well ther there's [disfmarker] [speaker001:] and it seems to work pretty well to [disfmarker] to get rid of those [disfmarker] those overlaps, [speaker006:] Mm hmm. [speaker004:] I mean that that's the sort of thing that you would do. [speaker007:] Yeah. Yeah. Exactly, so it's [disfmarker] it's a [disfmarker] [speaker004:] So. [speaker001:] yeah. [speaker002:] So why do you want to do echo cancellation? [speaker007:] Um, it would be techniques used from adaptive [disfmarker] adaptive echo cancellation which I don't know enough about to talk about. [speaker002:] Uh huh. [speaker006:] It [disfmarker] just [disfmarker] it just to r to remove cross talk. [speaker007:] Um. But, [speaker003:] Yeah. [speaker006:] Yeah. [speaker007:] right, um, and that would be similar to what you're also trying to do, but using um, you know, more than energy [disfmarker] I [disfmarker] I don't know what exactly would go into it. [speaker001:] Yeah. Yeah, sure. [speaker007:] So the idea is to basically run this on the whole meeting. [speaker002:] So it would be [disfmarker] [speaker007:] and get the locations, which gives you also the time boundaries of the individual speak [speaker002:] OK. So do sort of what he's already [disfmarker] what he's trying to do. [speaker007:] Right. Except that there are many techniques for the kinds of cues, um, that you can use to do that. [speaker002:] OK, I s I see. [speaker001:] Yeah, in another way, yeah. Yeah. [speaker002:] Yeah. I see. [speaker006:] Yeah, Dave [disfmarker] Dave uh is, um, also gonna be doin usin playing around with echo cancellation for the nearfield farfield stuff, [speaker007:] So. 
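The cross-correlation cue mentioned here — comparing channels to decide whose speech a segment really belongs to — could be sketched like this. It is only a toy illustration of the idea, not the experiment actually described: a real channel-based segmenter would add framing, energy thresholds, and longer lag ranges.

```python
import math

def norm_xcorr(a, b, max_lag=4):
    # Maximum normalized cross-correlation of channel b against channel a
    # over a small range of sample lags. A value near 1 suggests b is
    # mostly cross-talk leaking from a's speaker; a low value suggests b
    # carries independent (foreground) speech.
    best = 0.0
    for lag in range(-max_lag, max_lag + 1):
        pairs = [(a[i], b[i + lag]) for i in range(len(a))
                 if 0 <= i + lag < len(b)]
        dot = sum(x * y for x, y in pairs)
        na = math.sqrt(sum(x * x for x, _ in pairs))
        nb = math.sqrt(sum(y * y for _, y in pairs))
        if na > 0 and nb > 0:
            best = max(best, dot / (na * nb))
    return best

mic_a = [0, 1, 2, 3, 2, 1, 0, -1, -2, -3, -2, -1] * 2  # a speaker's own mic
leak = [0.3 * x for x in mic_a]                        # attenuated cross-talk
other = [1, -1] * 12                                   # independent signal
print(norm_xcorr(mic_a, leak), norm_xcorr(mic_a, other))
```

A segment whose channel correlates strongly with another channel would then be treated as background cross-talk rather than passed to the recognizer.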
[speaker006:] so we'll be [disfmarker] [speaker007:] And I guess Espen? This [disfmarker] is [disfmarker] uh [disfmarker] is he here too? May also be working [disfmarker] [speaker006:] Yeah. [speaker007:] So it would just be ver that's really the next step because we can't do too much, you know, on term in terms of recognition results knowing that this is a big problem [speaker002:] Mm hmm. [speaker007:] um, until we can do that kind of processing. And so, once we have some [disfmarker] some of yours, [speaker001:] OK. Yeah I'm working on it. [speaker007:] and [@ @] we'll move on. [speaker002:] I think this also ties into one of the things that Jane is gonna talk about too. [speaker004:] Um, [speaker007:] OK. [speaker005:] Mm hmm. Mm hmm. [speaker004:] I also wanted to say I have done all this chopping up of digits, so I have some naming conventions that we should try to agree on. [speaker007:] Oh right. [speaker004:] So let's do that off line, we don't need to do it during the meeting. [speaker007:] Yeah. Right. [speaker003:] OK. [speaker007:] Definitely [disfmarker] Uh, and Don should [disfmarker] [speaker004:] And [disfmarker] and I have scripts that will extract it out from "key" files and [disfmarker] and do all the naming automatically, [speaker007:] OK. [speaker004:] so you don't have to do it by hand. [speaker003:] Alright. [speaker007:] Great. So that that's it for the [disfmarker] [speaker003:] You've compiled the list of, uh, speaker names? [speaker007:] Speakers and [disfmarker] [speaker004:] Mm hmm. [speaker007:] OK. [speaker003:] Not names, but I Ds. [speaker004:] Yep. Yeah, names [disfmarker] names in the [disfmarker] names to I Ds, so you [speaker003:] OK. [speaker007:] Great. [speaker004:] and it does all sorts of matches because the way people filled out names is different on every single file so it does a very fuzzy sort of match. [speaker007:] Right. [speaker003:] Cool. 
[speaker007:] So at this point we can sort of finalize the naming, and so forth, [speaker004:] Yep. [speaker007:] and we're gonna basically re rewrite out these waveforms that we did [speaker003:] Mm hmm. [speaker007:] because as you notice in the paper you're "M O" in one meeting and "M O two" in another meeting and it's [disfmarker] we just need to standardize the [speaker003:] Yeah. That was my fault. [speaker007:] um, no it's [disfmarker] it's [disfmarker] [speaker006:] No, I didn't notice that actually. [speaker007:] um, that's why those comments are s [vocalsound] are in there. [speaker003:] Yeah. [speaker004:] Yep. [speaker003:] Then disregard it then. [speaker004:] So th I now have a script that you can just say basically look up Morgan, [speaker007:] So [disfmarker] Right. [speaker006:] Yeah. [speaker007:] OK. [speaker004:] and it will give you his ID. [speaker007:] Great, great. [speaker003:] OK. [speaker004:] So. [speaker007:] Terrific. [speaker004:] Um, alright. Do we [disfmarker] Don, you had disk space and storage formats. Is that something we need to talk about at the meeting, or should you just talk with Chuck at some other time? [speaker003:] Um, I had some general questions just about the compression algorithms of shortening waveforms and I don't know exactly who to ask. I thought that maybe you would be the [disfmarker] the person to talk to. So, is it a lossless compression [comment] when you compress, [speaker004:] Mm hmm. [speaker003:] so [disfmarker] [speaker004:] Entropy coding. [speaker003:] It just uses entropy coding? [speaker004:] So. [speaker003:] OK. So, I mean, I guess my question would be is I just got this new eighteen gig drive installed. Um, yeah, which is [disfmarker] [speaker004:] And I assume half of it is scratch and half of it is [disfmarker]? [speaker003:] I'm not exactly sure how they partitioned it. [speaker004:] Probably, yeah. [speaker003:] But um, [speaker006:] That's typical, huh. 
[speaker003:] yeah, I don't know what's typical here, but um, it's local though, so [disfmarker] [speaker004:] That doesn't matter. [speaker003:] But [disfmarker] [speaker004:] You can access it from anywhere in ICSI. N [disfmarker] [speaker003:] OK. [speaker006:] In fact, this is an eighteen gig drive, [comment] or is it a thirty six gig drive with eighteen [disfmarker] [speaker003:] Alright. How do you do that? [speaker004:] N [disfmarker] [speaker003:] Eighteen. [speaker007:] Eigh eighteen. It was a spare that Dave had around [disfmarker] [speaker006:] Oh OK. [speaker004:] Slash N slash machine name, slash X A in all likelihood. [speaker003:] Oh I see. OK. Alright, I did know that. [speaker004:] Um, so the [disfmarker] the only question is how much of it [disfmarker] The distinction between scratch and non scratch is whether it's backed up or not. [speaker003:] Mm hmm. Right. [speaker004:] So what you wanna do is use the scratch for stuff that you can regenerate. [speaker003:] OK. [speaker004:] So, the stuff that isn't backed up is not a big deal because disks don't crash very frequently, [speaker003:] Right. [speaker004:] as long as you can regenerate it. [speaker003:] Right. [speaker007:] Yeah it's [disfmarker] [speaker003:] I mean all of this stuff can be regenerated, it's just a question [disfmarker] [speaker007:] Well the [disfmarker] [speaker004:] Then put it all on scratch because we're [disfmarker] ICSI is [disfmarker] is bottlenecked by backup. [speaker007:] Yeah. [speaker005:] Mm hmm, very good point. [speaker003:] OK. [speaker007:] Well I'd leave all the [disfmarker] All the transcript stuff shouldn't [disfmarker] should be backed up, [speaker004:] So we wanna put [disfmarker] [speaker005:] Mm hmm. [speaker007:] but all the waveform [disfmarker] [comment] Sound files should not be backed up, [speaker003:] Yeah, I guess [disfmarker] Right. [speaker007:] the ones that you write out. [speaker003:] OK. 
So, I mean, I guess th the other question was then, should we shorten them, downsample them, or keep them in their original form? Um [disfmarker] [speaker004:] It just depends on your tools. I mean, because it's not backed up and it's just on scratch, if your sc tools can't take shortened format, I would leave them expanded, [speaker003:] Right. [speaker004:] so you don't have to unshorten them every single time you wanna do anything. [speaker003:] OK. [speaker007:] We can downsample them, [speaker003:] Do you think that'd be OK? [speaker007:] so. Yeah. [speaker003:] To downsample them? [speaker007:] Yeah, we get the same performance. [speaker003:] OK. [speaker007:] I mean the r the front end on the SRI recognizer just downsamples them on the fly, [speaker003:] Yeah, I guess the only argument against downsampling is to preserve just the original files in case we want to experiment with different filtering techniques. [speaker007:] so [disfmarker] So that's [disfmarker] [speaker006:] I [disfmarker] I [disfmarker] I'm sorry [disfmarker] [speaker007:] Yeah, if [speaker006:] Yeah, l I mean over all our data, we [disfmarker] we want to not downsample. [speaker007:] fe You'd [disfmarker] you wanna not. OK. So we're [disfmarker] what we're doing is we're writing out [disfmarker] [speaker006:] Yeah. [speaker007:] I mean, this is just a question. We're writing out these individual segments, that wherever there's a time boundary from Thilo, or [disfmarker] or Jane's transcribers, you know, we [disfmarker] we chop it [pause] there. [speaker006:] Yeah. Mm hmm. [speaker007:] And the reason is so that we can feed it to the recognizer, [speaker006:] Mm hmm. [speaker007:] and throw out ones that we're not using and so forth. [speaker006:] Yeah. [speaker007:] And those are the ones that we're storing. [speaker004:] Yeah, as I said, since that's [disfmarker] it's regeneratable, what I would do is take [disfmarker] downsample it, [speaker007:] So [disfmarker] Yeah. 
[speaker004:] and compress it however you're e the SRI recognizer wants to take it in. [speaker007:] Yeah. So we can't shorten them, [speaker006:] ye [speaker007:] but we can downsample them. [speaker003:] Right. [speaker006:] Yeah, I mean [disfmarker] yeah, I'm sorry. [speaker007:] So. [speaker006:] As [disfmarker] yeah, as long as there is a [disfmarker] a form that we can come from again, that is not downsampled, [comment] then, [speaker003:] r Yeah. [speaker007:] Oh yeah th [speaker003:] Yeah those are gonna be kept. [speaker007:] Yeah. Yeah. That [disfmarker] that's why we need more disk space [speaker006:] uuu [speaker007:] cuz we're basically duplicating the originals, um [disfmarker] [speaker006:] Yeah. Then it's fine. [speaker004:] Right. [speaker006:] But for [disfmarker] for [disfmarker] fu future research we'll be doing it with different microphone positions and so on [speaker007:] Oh yeah. No. We always have the original long ones. [speaker004:] Yep. [speaker003:] Right. [speaker006:] we would like to [disfmarker] Yeah. [speaker002:] So the SRI front end won't take a uh [disfmarker] an [disfmarker] an [disfmarker] a large audio file name and then a [disfmarker] a list of segments to chop out [comment] from that large audio file? They actually have to be chopped out already? [speaker007:] Um, it's better if they're chopped out, and [disfmarker] and it [disfmarker] it will be [disfmarker] [speaker002:] Uh huh. [speaker007:] yeah, y we could probably write something to do that, but it's actually convenient to have them chopped out cuz you can run them, you know, in different orders. You c you can actually move them around. [speaker004:] And that's the whole point about the naming conventions [speaker007:] Uh, you can get rid of [speaker004:] is that you could run all the English speaking, [speaker007:] Yeah, it it's a lot faster. [speaker004:] all the native speakers, [speaker007:] Right. 
You can grab everything with the word "the" in it, [speaker004:] and all the non native speakers, and all the men, and all the women. [speaker007:] and it's [disfmarker] [speaker004:] Yeah. [speaker007:] That's a lot quicker than actually trying to access the wavefile each time, find the time boundaries and [disfmarker] So in principle, yeah, you could do that, [speaker002:] I don't [disfmarker] I don't think that's really right. [speaker007:] but it's [disfmarker] but it's um [disfmarker] [speaker004:] "That's just not right, man." [speaker007:] These are long [disfmarker] These are long [disfmarker] [speaker004:] The [disfmarker] the point [disfmarker] So [disfmarker] so s For example, what if you wanted to run [disfmarker] run all the native speakers. [speaker007:] You know. This is an hour of speech. [speaker004:] Right, so if [disfmarker] if you did it that way you would have to generate a program that looks in the database somewhere, extracts out the language, finds the time marks for that particular one, do it that way. The way they're doing it, you have that already extracted and it's embedded in the file name. And so, you know, you just say [disfmarker] [speaker007:] We yeah that's [disfmarker] so that's part of it [speaker004:] y so you just say you know "asterisk E asterisk dot wave", and you get what you want. [speaker007:] is [disfmarker] Right. And the other part is just that once they're written out it [disfmarker] it is a lot faster to [disfmarker] to process them. [speaker004:] Rather than doing seeks through the file. [speaker007:] So. Otherwise, you're just accessing [disfmarker] [speaker004:] This is all just temporary access, so I don't [disfmarker] I think [disfmarker] it's all just [disfmarker] It's fine. You know. Fine to do it however is convenient. [speaker007:] Right. [speaker006:] I mean it just depends how big the file is. If the file sits in memory you can do extremely fast seeks [speaker007:] Right. 
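The wildcard selection just described ("asterisk E asterwork asterisk dot wave" pulling all native-English segments with no database lookup) can be sketched as below; the filename fields shown are invented for illustration, not the corpus's actual naming convention.

```python
import fnmatch

# Hypothetical segment filenames: the real convention encodes speaker
# attributes in the name; the exact fields here are assumed examples.
segments = [
    "mr004_me011_E_c3_0012_0034.wav",   # E = native English speaker
    "mr004_mn015_N_c1_0034_0051.wav",   # N = non-native
    "mr004_fe008_E_c2_0051_0070.wav",
]

# "*E*.wav"-style selection: grab native-English segments by name alone,
# with no lookup of time marks in a separate database.
english = fnmatch.filter(segments, "*_E_*.wav")
print(english)
```

The same pattern extends to selecting by gender, channel, or meeting, which is the whole point of embedding those attributes in the file name.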
The other thing is that, believe it or not [disfmarker] I mean, we have some [disfmarker] [speaker006:] but. [speaker004:] Yeah and they don't. Two gig? [speaker007:] So we're also looking at these in Waves like for the alignments and so forth. You can't load an hour of speech into X Waves. [speaker006:] Yeah. [speaker007:] You need to s have these small files, and in fact, even for the Transcriber program Um [disfmarker] [speaker004:] Yes you can. [speaker002:] Yeah, you [disfmarker] you can give Waves a start and an end time. [speaker007:] Yeah, if you try to load s really long waveform into X Waves, you'll be waiting there for [disfmarker] [speaker002:] And middle. No, I [disfmarker] I'm not suggesting you load a long wave file, [speaker007:] Oh [speaker002:] I'm just saying you give it a start and an end time. And it'll just go and pull out that section. [speaker004:] I th w The transcribers didn't have any problem with that did they Jane? [speaker005:] What's th u w in what respect? [speaker007:] Loading the long [disfmarker] [speaker004:] They loaded [disfmarker] they loaded the long long files into X Waves. [speaker001:] No, with the Transcriber tool, it's no problem. [speaker007:] It takes a very long ti [speaker005:] In the [disfmarker] in [speaker001:] Yeah just to load a transcription [speaker005:] Mm hmm. [speaker007:] Right. It takes a l very long time. [speaker001:] takes a long time, but not for the wavefile. The wavefile is there immediately. [speaker005:] Mm hmm. Yeah. [speaker004:] Are you talking about Transcriber or X Waves? [speaker007:] Huh. [speaker001:] Yeah. Oh, I'm tr talking about Transcriber. [speaker007:] Actually, you're talking about Transcriber, right? [speaker001:] Yeah. [speaker005:] It was also true of the digits task which was X Waves. [speaker004:] Because [disfmarker] because i we used X Waves to do the digits. [speaker005:] Yeah. [speaker004:] And they were loading the full mixed files then, [speaker005:] Very quickly. 
[speaker004:] and it didn't seem to be any problem. [speaker005:] I agree. [speaker007:] Huh. Well we [disfmarker] we have a problem with that, you know, time wise on a [disfmarker] It it's a lot slower to load in a long file, [speaker004:] Hmm. Seemed really fast. [speaker007:] and also to check the file, so if you have a transcript, um, [speaker004:] Well regardless, it's [disfmarker] [speaker007:] I mean it's [disfmarker] [speaker006:] Yeah. [speaker007:] I [disfmarker] I think overall you could get everything to work by accessing the same waveform and trying to find two [disfmarker] you know, the begin and end times. Um, but I think it's more efficient, if we have the storage space, to have the small ones. [speaker004:] and, it's no problem, right? [speaker007:] Yeah, it's [disfmarker] [speaker004:] Because it's not backed up. [speaker007:] Yeah. It's [disfmarker] it's just [disfmarker] [speaker004:] So we just [disfmarker] If we don't have a spare disk sitting around we go out and we buy ourselves an eighty gigabyte drive and make it all scratch space. You know, it's not a big deal. [speaker005:] You're right about the backup being [pause] a bottleneck. [speaker007:] Right. [speaker005:] It's good to think towards scratch. [speaker007:] Yeah, so these wouldn't be backed up, the [disfmarker] [speaker005:] Yeah. [speaker006:] Yep. [speaker007:] Right. [speaker004:] So remind me afterward [speaker007:] And [disfmarker] [speaker004:] and I'll [disfmarker] and we'll look at your disk and see where to put stuff. [speaker003:] OK. Alright. I mean, I could just u do a DU on it right? And just see which [disfmarker] how much is on each [disfmarker] [speaker004:] Yep. [speaker003:] So. [speaker004:] Each partition. And you wanna use, either XA or scratch. [speaker003:] OK. [speaker004:] Well X question mark, anything starting with X is scratch. [speaker003:] OK. [speaker005:] With two [disfmarker] two digits. [speaker004:] Two digits, right, XA, XB, XC. 
[speaker006:] So, [@ @]. [speaker004:] OK? Jane? [speaker005:] OK. So I got a little print out here. So three on this side, three on this side. And I stapled them. OK. Alright so, first of all, um, there was a [disfmarker] an interest in the transcribe transcription, uh, checking procedures and [disfmarker] [vocalsound] and I can [vocalsound] tell you first, uh, to go through the steps although you've probably seen them. Um, as you might imagine, when you're dealing with, um, r really c a fair number of words, and uh, [@ @] [comment] natural speech which means s self repairs and all these other factors, that there're lots of things to be, um, s standardized and streamlined and checked on. And, um, so, I did a bunch of checks, and the first thing I did was obviously a spell check. And at that point I discovered certain things like, um, "accommodate" with one "M", that kind of thing. And then, in addition to that, I did an exhaustive listing of the forms in the data file, which included n detecting things like f faulty punctuation and things [disfmarker] [speaker002:] I'm [disfmarker] I'm sorry to interrupt [speaker005:] Yeah? [speaker002:] you could [disfmarker] could I just back up a little bit [speaker005:] Sure, please, yeah, please, please. [speaker002:] and [disfmarker] [speaker005:] Yeah, yeah, yeah. [speaker002:] So you're doing these [disfmarker] So [pause] the whole process is that the transcribers get the conversation and they do their pass over it. [speaker005:] Yes. [speaker002:] And then when they're finished with it, it comes to you, and you begin these sanit these quality checks. [speaker005:] That's right. Exactly. I do these checks. Uh huh. [speaker002:] OK. [speaker005:] Exactly. [speaker002:] OK. [speaker005:] Yeah. Thank you. 
And so, uh, I do a [disfmarker] an exhaustive listing of the forms [disfmarker] Actually, I will go through this in [disfmarker] in order, so if [disfmarker] if we could maybe wait and stick keep that for a second cuz we're not ready for that. [speaker004:] So on the fifth page, seven down [disfmarker] [speaker005:] Yeah, yeah, yeah, yeah. Exactly! Exactly! Alright so, [vocalsound] a spelling check first then an exhaustive listing of the, uh [disfmarker] all the forms in the data with the punctuation attached and at that point I pick up things like, oh, you know, word followed by two commas. And th and then another check involves, uh, being sure that every utterance has an identifiable speaker. And if not, then that gets checked. Then there's this issue of glossing s w so called "spoken forms". So there [disfmarker] mo for the most part, we're keeping it standard wo word level transcription. But there's [disfmarker] w And that that's done with the assumption that [pause] pronunciation variants can be handled. So for things like "and", the fact that someone doesn't say the "D", uh that's not important enough to capture in the transcription because a [disfmarker] a good pronunciation, uh, you know, model would be able to handle that. However, things like "cuz" where you're lacking an entire very prominent first syllable, and furthermore, it's a form that's specific to spoken language, those are r reasons [disfmarker] f for those reasons I [disfmarker] I kept that separate, and used the convention of using "CUZ" for that form, however, glossing it so that it's possible with the script to plug in the full orthographic form for that one, and a couple of others, not many. So "wanna" is another one, "going [disfmarker]" uh, "gonna" is another one, with just the assumption, again, that this [disfmarker] th these are things which it's not really fair to a c consider [disfmarker] expect that [disfmarker] a pronunciation model, to handle. 
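The sed pass just described — rewriting each spoken form into the form plus a curly-bracket gloss holding the full orthography — might look like this in Python; the exact brace syntax ({gloss="..."}) is inferred from the description above and is an assumption, not verified against the corpus files.

```python
import re

# Spoken forms that get a gloss with the full orthographic form,
# per the convention described above (brace syntax assumed).
GLOSSES = {"gonna": "going to", "wanna": "want to", "cuz": "because"}

def add_glosses(line: str) -> str:
    """Append a {gloss="..."} comment after each listed spoken form."""
    def repl(m):
        word = m.group(0)
        return '%s {gloss="%s"}' % (word, GLOSSES[word.lower()])
    pattern = r"\b(" + "|".join(GLOSSES) + r")\b"
    return re.sub(pattern, repl, line, flags=re.IGNORECASE)

print(add_glosses("cuz we're gonna check it"))
# -> cuz {gloss="because"} we're gonna {gloss="going to"} check it
```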
And Chuck, you in you indicated that "cuz" is [disfmarker] is one of those that's handled in a different way also, didn't you? Did I [disfmarker] [speaker002:] I don't remember. [speaker005:] OK. So [disfmarker] so it might not have been [disfmarker] [vocalsound] It might not have been you, [speaker002:] Hmm. [speaker005:] but someone told me that in fact "cuz" is treated differently in, um, i u in this context because of that r reason that, um, it's a little bit farther than a pronunciation variant. OK, so after that, let's see, um. [speaker002:] So that was part of the spell check, [comment] or was that [disfmarker] that was after the spell check? [speaker005:] Well so when I get the exhau So the spell check picks up those words because they're not in the dictionary. [speaker002:] Uh huh. [speaker005:] So it gets "cuz" and "wanna" and that [disfmarker] [speaker004:] And then you gloss them? [speaker005:] Yeah, mm hmm. Run it through [disfmarker] I have a sed [disfmarker] You know, so I do sed script saying whenever you see "gonna" you know, "convert it to gonna", you know, "gloss equals quote going to quote", you know. And with all these things being in curly brackets so they're always distinctive. [speaker004:] Mm hmm. [speaker005:] OK, I also wrote a script which will, um, retrieve anything in curly brackets, [vocalsound] or anything which I've classified as an acronym, and [disfmarker] a pronounced acronym. And the way I tag ac pronounced acronyms is that I have underscores between the components. So if it's "ACL" then it's "A" underscore "C" underscore "L". And the th [speaker004:] And so [disfmarker] so your list here, are these ones that actually occurred in the meetings? [speaker005:] Yes. Uh huh, yeah. [speaker004:] Whew! [speaker005:] OK, so now. Uh and [disfmarker] a [speaker004:] We are acronym loaded. [speaker007:] Um, can I ask a question about the glossing, uh before we go on? [speaker005:] Yeah. 
[speaker007:] So, for a word like "because" is it that it's always predictably "because"? I mean, is "CUZ" always meaning "because"? [speaker005:] Yes, but not the reverse. So sometimes people will say "because" in the meeting, and if [disfmarker] if they actually said "because", then it's written as "because" with no [disfmarker] w "cuz" doesn't even figure into the equation. [speaker007:] Beca because [disfmarker] [speaker006:] But [disfmarker] but in our meetings people don't say "hey cuz how you doing?" [speaker007:] Right. [comment] [vocalsound] Right. [speaker004:] Except right there. [speaker005:] Yeah. [speaker006:] Yeah. [speaker007:] Um, so, I guess [disfmarker] So, from the point of view of [disfmarker] [speaker005:] That's a good point. [speaker007:] The [disfmarker] the only problem is that with [disfmarker] for the recognition we [disfmarker] we map it to "because", and so if we know that "CUZ" [disfmarker] [speaker004:] Well, [speaker005:] That's fine. Well Don has a script. [speaker004:] but they have the gloss. [speaker003:] Yeah. [speaker007:] but, we don't [disfmarker] [speaker004:] You have the gloss form so you always replace it. [speaker005:] Exactly. [speaker004:] If that's how [disfmarker] what you wanna do. [speaker005:] Uh huh. And Don knows this, and he's bee he has a glo he has a script that [disfmarker] [speaker003:] Yeah. I replace the "cuz" with "because" if it's glossed. [speaker007:] S Right. But, if it's [disfmarker] OK. But then there are other glosses that we don't replace, right? [speaker003:] And [disfmarker] [speaker007:] Because [disfmarker] [speaker005:] Yes. And that's why there're different tags on the glosses, [speaker007:] OK. So, then it's fine. [speaker005:] on the different [disfmarker] on the different types of comments, which we'll [disfmarker] which we'll see in just a second. [speaker003:] Right. [speaker007:] OK. 
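Going the other direction — the script that plugs the full orthographic form back in wherever a gloss is present, leaving unglossed words alone — is roughly the following; again the {gloss="..."} syntax is the assumed convention from above.

```python
import re

# Replace a glossed spoken form with the orthography in its gloss.
# A literal "because" (no gloss) is untouched, so only forms the
# transcriber explicitly glossed get expanded.
GLOSSED = re.compile(r'(\S+)\s*\{gloss="([^"]*)"\}')

def expand_glosses(line: str) -> str:
    """Substitute each word+gloss pair with the gloss contents."""
    return GLOSSED.sub(lambda m: m.group(2), line)

print(expand_glosses('cuz {gloss="because"} I said because'))
# -> because I said because
```

This is why predictability doesn't matter for recognition scoring: the substitution is driven by the gloss, not by a fixed cuz-to-because mapping.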
[speaker005:] So the pronounceable acronyms get underscores, the things in curly brackets are viewed as comments. There're comments of four types. So this is a good time to introduce that. The four types. w And maybe we'll expand that [speaker004:] Um [disfmarker] [speaker005:] but the [disfmarker] but the comments are, um, of four types mainly right now. One of them is, um, the gloss type we just mentioned. [speaker004:] Can [disfmarker] ca [speaker005:] Another type is, um [disfmarker] [speaker004:] So a are we done with acronyms? Cuz I had a question on what [disfmarker] what this meant. [speaker005:] I'm still doing the overview. I haven't actually gotten here yet. [speaker004:] Oh I'm sorry. [speaker005:] OK so, gloss is things like replacing the full form u with the, um, more abbreviated one to the left. Uh, then you have if it's [disfmarker] uh, there're a couple different types of elements that can happen that aren't really properly words, and wo some of them are laughs and breathes, so we have [disfmarker] uh that's prepended with a v a tag of "VOC". [speaker001:] Whew! [speaker005:] And the non vocal ones are like door slams and tappings, and that's prepended with a no non vocalization. [speaker002:] So then it [disfmarker] just an ending curly brace there, or is there something else in there. [speaker005:] Oh yeah, so i e this would [disfmarker] Let's just take one example. [speaker004:] A comment, basically. [speaker002:] Oh, oh, oh. [speaker005:] And then the no non vocalization would be something like a door slam. They always end. So it's like they're paired curly brackets. And then the third type right now, [vocalsound] uh, is [pause] m things that fall in the category of comments about what's happening. So it could be something like, you know, "referring to so and so", "talking about such and such", uh, you know, "looking at so and so". [speaker002:] So on the m on the middle t So, in the first case that gloss applies to the word to the left. 
[speaker005:] Yeah. Yeah, and this gets substituted here. [speaker002:] But in the middle two [disfmarker] Th it's not applying to anything, right? [speaker004:] They're impulsive. [speaker005:] Huh uh. [speaker002:] OK. [speaker005:] No, they're events. They're actually [disfmarker] They have the status of events. [speaker002:] OK. [speaker004:] Well the "QUAL" can be [disfmarker] The "QUAL" is applying to the left. [speaker002:] Right, I just meant the middle two ones, [speaker004:] Yep. [speaker002:] yeah. [speaker005:] Well, and actually, um, it is true that, with respect to "laugh", there's another one which is "while laughing", [speaker004:] "While laughing". [speaker005:] and that is, uh, i i An argument could be made for this [disfmarker] tur turning that into a qualitative statement because it's talking about the thing that preceded it, but at present we haven't been, um, uh, coding the exact scope of laughing, you know, and so to have "while laughing", you know that it happened somewhere in there which could well mean that it occurred separately and following, or, you know, including some of the utterances to the left. Haven't been awfully precise about that, but I have here, now we're about to get to the [disfmarker] to this now, I have frequencies. So you'll see how often these different things occur. But, um, uh, the very front page deals with this, uh, final c pa uh, uh, aspect of the standardization which has to do with the spoken forms like "mm hmm" and "mm hmm" and "ha" and "uh uh" and all these different types. 
And, um, uh, someone pointed out to me, this might have been Chuck, [comment] about, um [disfmarker] about how a recognizer, if it's looking for "mm hmmm" with three M's, [vocalsound] and it's transcribed with two M's, [vocalsound] that it might [disfmarker] uh, that it might increase the error rate which is [disfmarker] which would really be a shame because um, I p I personally w would not be able to make a claim that those are dr dramatically different items. So, right now I've standardized across all the existing data with these spoken forms. [speaker004:] Oh good. [speaker005:] I [disfmarker] I should say [speaker004:] So it's a small list. [speaker005:] all existing data except thirty minutes which got found today. So, I'm gonna [disfmarker] [vocalsound] I'm gonna [disfmarker] [vocalsound] I'm gonna check [disfmarker] [speaker004:] That [disfmarker] that's known as "found data". [speaker005:] Yeah, yeah. Acsu actually yeah. I got [disfmarker] It was stored in a place I didn't expect, [speaker003:] It's like the z Zapruder Film. [speaker005:] so [disfmarker] and [disfmarker] and um, w we, uh, sh yea reconstructed how that happened. [speaker006:] I wanna work with lost data. [speaker004:] Yeah. It's much easier. [speaker005:] And this is [disfmarker] this'll be great. So I'll [disfmarker] I'll be able to get through that tonight, and then everyth i well, actually later today probably. [speaker004:] Hmm. [speaker005:] And so then we'll have everything following these conventions. But you notice it's really rather a small set of these kinds of things. [speaker004:] Yeah. [speaker005:] And I made it so that these are, um, with a couple exceptions but, things that you wouldn't find in the spell checker so that they'll show up really easily. And, um [disfmarker] [speaker003:] Jane, can I ask you a question? What's that very last one correspond to? [speaker005:] Sure. [speaker003:] I don't even know how to pronounce that. [speaker005:] Well, yeah. 
Now that [disfmarker] that s only occurs once, [speaker007:] Yeah. [speaker005:] and I'm thinking of changing that. [speaker007:] Right. [speaker005:] So c I haven't listened to it [speaker003:] Uh, is that like someone's like burning or some such thing? [speaker005:] so I don't know. I haven't heard it actually. [speaker003:] Like their hair's on fire? [speaker005:] I n I need to listen to that one. [speaker004:] Ah! [speaker007:] Actually we [disfmarker] we gave this to our pronunciation person, [speaker001:] It's the Castle of Ah! [speaker003:] Uh, it looks like that. [speaker007:] she's like, "I don't know what that is either". So. [speaker005:] Did she hear the th did she actually hear it? [speaker007:] No, we just gave her a list of words that, you know, weren't in our dictionary [speaker005:] Cuz I haven't heard it. [speaker007:] and so of course it picked up stuff like this, and she just didn't listen so she didn't know. We just [disfmarker] we're waiting on that [pause] just to do the alignments. [speaker005:] Yeah. Yeah I'm curious to se hear what it is, but I didn't know [disfmarker] wanna change it to something else until I knew. [speaker007:] Maybe it's "argh"? [speaker003:] Right. [speaker005:] Well, sss, [comment] you know [disfmarker] [speaker003:] But that's not really like [disfmarker] [speaker005:] Hhh. [speaker007:] Yeah. [speaker003:] No one really says "argh," you know, [speaker007:] Right, no one say [speaker006:] Well, you just did. [speaker003:] it's not [disfmarker] Well, [speaker005:] Yeah. [speaker003:] there's another [disfmarker] there's another word error. [speaker002:] Except for now! [speaker005:] That's right. [speaker004:] Yes, that's right. We're gonna have a big problem when we talk about that. [speaker003:] Cha ching. [speaker007:] Ah. [speaker002:] We're gonna never recognize this meeting. [speaker005:] OK. [speaker004:] In Monty Python you say "argh" a lot. [speaker003:] Oh yeah? [speaker004:] So. 
Well, or if you're a C programmer. [speaker003:] Mmm. [speaker005:] Yeah, that's right. [speaker006:] Yeah. [speaker004:] You say arg C and arg V all the time. [speaker005:] That's right. [speaker006:] Yeah [speaker003:] That's true. [speaker007:] But it has a different prosody. [speaker005:] Yeah. [speaker006:] Arg. [speaker004:] It does. [speaker006:] Arg [disfmarker] arg max, arg min, [speaker003:] Mm hmm. [speaker006:] yeah. [speaker004:] Ah! [speaker005:] Uh, [speaker007:] So, Jane, what's the [disfmarker] d [speaker004:] Maybe he died while dictating. [speaker005:] so. [speaker007:] I have one question about the the "EH" versus like the "AH" and the "UH". [speaker005:] That's partly a nonnative native thing, [speaker007:] OK. [speaker005:] but I have found "EH" in native speakers too. But it's mostly non native [disfmarker] [speaker007:] S OK. [speaker001:] H [speaker002:] That's "eh" versus "ah"? [speaker005:] Eh. [speaker004:] Eh? [speaker007:] "Eh," yeah right, cuz there were [disfmarker] were some speakers that did definite "eh's" [speaker005:] Mm hmm. [speaker007:] but right now we [disfmarker] [speaker002:] They were the Canadians, right? [speaker006:] Canadians, yeah, yeah, yeah. [speaker005:] That's right. [speaker007:] So, it [disfmarker] it's actually probably good for us to know the difference between the real "eh" and the one that's just like "uh" or transcribed "aaa" [speaker005:] Exactly. [speaker007:] cuz in [disfmarker] like in Switchboard, you would see e all of these forms, but they all were like "uh". [speaker004:] You mean just the single letter "a" [comment] as in the particle? [speaker001:] The transcription or [disfmarker] [speaker007:] No, no, I mean like the [disfmarker] the "UH", [speaker004:] Article. [speaker005:] "UH". [speaker004:] Oh. [speaker007:] or [disfmarker] the "UH", "EH", "AH" were all the same. And then, we have this additional non native version of [disfmarker] uh, like "eeh". 
[speaker003:] All the "EH"'s I've seen have been like that. They've been like "eh" like that have bee has been transcribed to "EH". [speaker005:] Mm hmm, that's right. [speaker003:] And sometimes it's stronger, like "eeh" [comment] which is like closer to "EH". [speaker005:] Mmm. [speaker007:] Right. [speaker003:] But. [speaker005:] Yeah. [speaker004:] I'm just [disfmarker] these poor transcribers, they're gonna hate this meeting. [speaker003:] I know. We should go off line. [speaker005:] Well, [vocalsound] we're not doing [disfmarker] We're not doing length. [speaker006:] Quick Thilo, do a [disfmarker] do a filled pause for us. [speaker005:] Yeah, that's right. [speaker001:] Ooo [comment] no. [speaker007:] But you're a native German speaker so it's not a [disfmarker] not a issue for [disfmarker] [speaker001:] Yeah. [speaker007:] It's only [disfmarker] [speaker004:] Them Canadians. [speaker007:] Onl yeah. No, only if you don't have lax vowels, I guess. Right. So it's [disfmarker] like Japanese and Spanish [speaker004:] Oh. [speaker005:] This makes sense. Yeah I [disfmarker] I think you've [disfmarker] uh huh, yeah. [speaker007:] and [disfmarker] [speaker006:] Uh huh. [speaker004:] Oh I see. [speaker005:] That makes sense. [speaker004:] I didn't get that, OK. [speaker005:] Yeah, and so, you know, I mean, th th I have [disfmarker] there are some, um, Americans who [disfmarker] who are using this "eh" too, and I haven't listened to it systematically, maybe with some of them, uh, they'd end up being "uh's" but, uh, I my spot checking has made me think that we do have "eh" in also, um, American e e data represented here. But any case, that's the [disfmarker] this is reduced down from really quite a long a much longer list, [speaker007:] Yeah this is great. [speaker005:] and this is [speaker003:] Mm hmm. [speaker004:] Yeah, it's good, [speaker007:] This is really really helpful. [speaker004:] yeah. 
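The standardization of spoken forms discussed above (so "mm hmm" with two versus three M's doesn't inflate the recognizer's error rate) amounts to a canonical-form lookup; the variant spellings listed here are illustrative only, not the actual reduced list on the handout.

```python
# Map variant spellings to one canonical spoken form so the recognizer
# and the lexicon see a single item; the variants below are invented
# examples of the kind of collapsing described, not the real list.
CANON = {
    "mm-hm": "mm-hmm", "mmhmm": "mm-hmm", "mm-hmmm": "mm-hmm",
    "uhhuh": "uh-huh", "uh-hum": "uh-huh",
}

def canonicalize(token: str) -> str:
    """Return the canonical spelling for a spoken form, else the token."""
    return CANON.get(token.lower(), token)

print(canonicalize("Mm-hmmm"))   # collapsed to the canonical form
print(canonicalize("OK"))        # unknown tokens pass through unchanged
```

Note the deliberate asymmetry preserved above for eh versus uh: a distinct non-native "eh" stays a separate form rather than being collapsed, since that contrast is informative.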
[speaker005:] functionally pretty, you know, also [disfmarker] It was fascinating, I was listening to some of these, uh, I guess two nights ago, and it's just hilarious to liste to [disfmarker] to do a search for the "mm hmm's". And you get "mm hmm" and diff everybody's doing it. [speaker004:] And just listen to them? Yeah. [speaker005:] Just [disfmarker] I wanted to say [disfmarker] I w think it would be fun to make a montage of it because there's a "Mm hmm. Mm hmm. Mm hmm." [speaker004:] Performance art, just extract them all. [speaker007:] Right. [speaker005:] It's really [disfmarker] it's really fun to listen to. [speaker002:] Morgan can make a song out of it. [speaker005:] All these different vocal tracts, you know, but it's [disfmarker] it's the same item. It's very interesting. OK. Uh, then the acronyms y and the ones in parentheses are ones which the transcriber wasn't sure of, and I haven't been able to listen to to [disfmarker] to clarify, [speaker004:] Oh I see. [speaker005:] but you can see that the parenthesis convention makes it very easy to find them [speaker004:] o [speaker005:] cuz it's the only place where [disfmarker] where they're used. [speaker004:] How about question mark? [speaker001:] The question marks, yeah. [speaker005:] Question mark is punctuation. [speaker001:] What are those? [speaker005:] So it [disfmarker] they said that [@ @] [disfmarker] [speaker004:] Oh. [speaker003:] Mm hmm. [speaker005:] um, "DC?" [speaker004:] So they [disfmarker] so it's "PLP?" [speaker001:] Ah. [speaker005:] Exactly. Exactly. Yeah, so the only [disfmarker] Well, and I do have a stress marker here. Sometimes the contrastive stress is showing up, and, um [disfmarker] [speaker006:] I'm sorry, I [disfmarker] I got lost here. What w what's the difference between the parenthesized acronym and the non parenthesized? [speaker005:] The parenthesized is something that the transcriber thought was ANN, but wasn't entirely sure. 
So I'd need to go back or someone needs to go back, and say, you know, yes or no, [speaker006:] Ah. [speaker007:] Right. [speaker005:] and then get rid of the parentheses. But the parentheses are used only in that context in the transcripts, of of noti noticing that there's something uncertain. [speaker004:] Yeah, P make is [disfmarker] [speaker007:] Yeah I mean cuz they [disfmarker] they have no idea, [speaker004:] That's a good one. That's correct. [speaker007:] right. If you hear CTPD, I mean, they do pretty well [speaker006:] Yeah. [speaker003:] Mm hmm. [speaker007:] but it's [disfmarker] [speaker006:] I [disfmarker] I don't recognize a lot of these. [speaker007:] you know how are [disfmarker] how are they gonna know? [speaker004:] I know! [speaker006:] I [disfmarker] [speaker004:] I [disfmarker] I was saying that I think a lot of them are the Networks meeting. [speaker003:] Yeah. [speaker005:] I think that's true. [speaker006:] Maybe. [speaker005:] Yeah, absolutely. [speaker007:] Yeah. [speaker005:] NSA, [speaker004:] I see a few. [speaker005:] a lot of these are [disfmarker] are coming from them. I listened to some of that. [speaker004:] Although I see [disfmarker] I see plenty of uh [speaker003:] Yeah, we don't have that many acronyms comparatively in this meeting. [speaker005:] Yeah. Yeah. I agree. [speaker003:] It's not so bad. [speaker007:] Right. [speaker005:] And Robustness has a fair amount, but the NSA group is just very very many. [speaker001:] Yeah. [speaker003:] Mmm. [speaker007:] The recognizer, it is funny. Kept getting PTA for PDA. This is close, [speaker006:] Yeah. [speaker004:] Yeah, that's pretty close. [speaker007:] right, and the PTA was in these, uh, topics about children, [speaker003:] That's not bad. [speaker005:] Yeah. [speaker007:] so, anyway. [speaker005:] That's interesting. [speaker007:] Is the P PTA working? 
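The retrieval script described earlier — pulling out everything in curly brackets, every underscore-tagged pronounced acronym, and every parenthesized uncertain item — comes down to three regular expressions, since each convention is used for exactly one purpose in the transcripts. A sketch, with an invented example line:

```python
import re

line = 'it was (ANN) {VOC laugh} on the A_C_L side maybe'

# Curly brackets are used only for comments.
comments  = re.findall(r"\{[^}]*\}", line)
# Pronounced acronyms are tagged with underscores between letters.
acronyms  = re.findall(r"\b(?:[A-Z]_)+[A-Z]\b", line)
# Parentheses are used only for transcriber-unsure items.
uncertain = re.findall(r"\(([^)]+)\)", line)

print(comments, acronyms, uncertain)
```

Because each delimiter is reserved for one convention, the patterns never collide, which is what makes the uncertain items "very easy to find."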
[speaker005:] Right and sometimes, I mean, you see a couple of these that are actually "OK's" so it's [disfmarker] it's [disfmarker] may be that they got to the point where [disfmarker] I mean it was low enough understandable [disfmarker] understandability that they weren't entirely sure the person said "OK." You know, so it isn't really necessarily a an undecipherable acronym, but just n needs to be double checked. [speaker003:] There's a lot of "OK's". [speaker005:] Now we get to the comments. [speaker006:] The number to the left is the number of incidences? [speaker005:] This [disfmarker] [speaker004:] Count. [speaker005:] Number of times out of the entire database, [speaker004:] Yep. [speaker006:] Uh huh. [speaker005:] w except for that last thirty minutes I haven't checked yet. [speaker006:] So CTS is really big here, [speaker004:] Yeah, I wonder what it is. [speaker006:] yeah. Yeah. [speaker001:] So what is the difference between "papers rustling" and "rustling papers"? [speaker006:] IP, I know what IP is. [speaker005:] I'd have to listen. I [disfmarker] I I agree. I w I'd like to standardize these down farther but, um, uh, uh, to me that sounds equivalent. [speaker001:] Yeah. [speaker005:] But, I [disfmarker] I'm a little hesitant to [disfmarker] to collapse across categories unless I actually listen to them. [speaker001:] Seems so. [speaker006:] OK. [speaker004:] Oh I'm sure we've said XML more than five times. [speaker005:] Well, then, at least now. [speaker006:] S s six now, yeah. [speaker001:] Now it's at least six times, yeah. [speaker005:] Yeah. Six. OK well [disfmarker] [speaker006:] Wh the self referential aspect of these [disfmarker] these p [speaker007:] I'm wai [speaker004:] Yes, it's very bad. [speaker003:] Yeah. [speaker007:] Well this is exactly how people will prove that these meetings do differ because we're recording, right? [speaker004:] Yes. [speaker007:] Y no normally you don't go around saying, Now you've said it six times. 
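The "number of times out of the entire database" column on the handout can be produced with a simple counter over all transcript lines; the comment tags and lines below are invented stand-ins for the real data.

```python
import re
from collections import Counter

# Count curly-bracket comment tokens across transcript lines, the way
# the handout's per-form frequency column was produced (example data).
lines = [
    'OK {VOC laugh} so {NONVOC door slam}',
    'right {VOC laugh} OK OK',
]
counts = Counter()
for line in lines:
    counts.update(re.findall(r"\{[^}]*\}", line))
print(counts.most_common())
```

Counting plain word tokens like "OK" works the same way, just with a word-level tokenizer instead of the curly-bracket pattern.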
[speaker004:] Yeah [comment] that's right. [speaker007:] Now you've said " [speaker005:] But did you notice that there were seven hundred and eighty five instances of "OK"? And that's just without the [disfmarker] without punc punctuation. [speaker004:] Yep. [speaker006:] No, I didn't. [speaker001:] Seven hundred eighty five instances. [speaker003:] Yeah. [speaker006:] Yeah. [speaker004:] And that's an underestimate [speaker005:] Extra forty one if it's questioned. [speaker004:] cuz they're [speaker002:] Where's that? [speaker006:] So th [speaker004:] Yep. [speaker005:] On the page two of acronyms. [speaker006:] Yeah. [speaker003:] Is this after [disfmarker] like did you do some uh replacements for all the different form of "OK" to this? [speaker006:] Seven hundred eighty. [speaker005:] Yeah. Of "OK", yes. [speaker003:] OK. [speaker005:] Mm hmm. So that's the single existing convention for "OK". [speaker002:] Wait a minute, w s [speaker006:] So now we're up to seven hundred and eighty eight. [speaker005:] Yeah that's [disfmarker] [speaker003:] Although, what's [disfmarker] there's one with a slash after it. That's kind of disturbing. [speaker005:] Yeah. That's [disfmarker] that's [disfmarker] I looked for that one. [speaker004:] Yeah, we'll have to look at it you know. [speaker007:] Yeah. [speaker003:] Anyway. [speaker005:] I actually explicitly looked for that one, [speaker003:] Mm hmm. [speaker005:] and I think that, um, I [disfmarker] I'm not exactly sure about that. [speaker002:] Was that somewhere where they were gonna say "new speaker" or something? [speaker005:] No, I looked for that, but that doesn't actually exist. And it may be, I don't [disfmarker] I can't explain that. [speaker003:] That's alright. I'm just pointing that out. [speaker005:] I i it's the only [disfmarker] [speaker003:] There's [disfmarker] [speaker005:] it's the only pattern that has a slash after it, [speaker007:] Well there's not [@ @]. 
[speaker005:] and I think it's [disfmarker] it's an epiphenomenon. [speaker004:] So I'll just [disfmarker] I was just looking at the bottom of page three there, is that "to be" or "not to be". [speaker005:] Yeah. Oh that's cute. [speaker002:] There's no tilde in front of it, so. [speaker005:] That's funny. Yeah. OK. [speaker004:] OK anyways, sorry. [speaker005:] There is th one [disfmarker] [speaker004:] "Try to stay on topic, Adam." [speaker005:] Y well, no, that's r that's legitimate. So now, uh, comments, you can see they're listed again, same deal, with exhaustive listing of everything found in everything except for these final th thirty minutes. [speaker004:] OK so, um, on some of these QUALs, [speaker005:] Yeah. [speaker004:] are they really QUALs, or are they glosses? So like there's a "QUAL TCL". [speaker005:] "TCL". Where do you see that? [speaker004:] Uh [speaker005:] Oh, oh. The reason is because w it was said "tickle". [speaker006:] What's a QUAL? [speaker004:] Oh I see, I see. [speaker003:] Hmm. [speaker004:] So it's not gloss. OK, I see. [speaker005:] Yep. [speaker004:] It wasn't said "TCL". [speaker003:] Sh shouldn't it be "QUAL TICKLE" or something? [speaker004:] Of course. [speaker003:] Like [disfmarker] it's not [disfmarker] [speaker005:] On the [disfmarker] in the actual script [disfmarker] in the actual transcript, I s I [disfmarker] So this [disfmarker] this happens in the very first one. [speaker003:] Mm hmm. [speaker005:] I actually wrote it as "tickle". [speaker003:] OK. [speaker005:] Because we [disfmarker] they didn't say "TCL", they said "tickle". [speaker003:] Yeah. [speaker007:] Right. [speaker005:] And then, following that is "QUAL TCL". [speaker003:] Oh I see. [speaker006:] I f I forget, [speaker003:] OK. [speaker006:] what's QUAL? [speaker005:] Qual qualifier. [speaker002:] It's just comment about what they said. [speaker005:] Comment. [speaker003:] Yeah. [speaker005:] Comment or contextual comment. 
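The frequency check being discussed, tallying how often each word (like "OK") appears across the transcripts, can be sketched as below. The tokenization and the sample lines are simplified assumptions, not the actual check run on the ICSI files, which would also separate curly-bracket comments from spoken words.

```python
from collections import Counter
import re

# Tally word frequencies across transcript lines, the kind of count
# that surfaces "785 instances of OK". Splitting on letter runs is a
# crude simplification of the real frequency check.
def word_frequencies(lines):
    counts = Counter()
    for line in lines:
        counts.update(w.upper() for w in re.findall(r"[A-Za-z']+", line))
    return counts

sample = ["OK, we're recording.", "OK. Check check.", "Alright. Good."]
freqs = word_frequencies(sample)
```

A count like this is also what kicks out spelling variants ("breathe" versus "breath") for later standardization.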
[speaker003:] It's not something you wanna replace [pause] with [speaker002:] So they didn't mean "tickle" as in Elmo, [speaker003:] but [disfmarker] [speaker006:] Yeah. [speaker001:] Tickle? [speaker002:] they meant "tickle" as in [disfmarker] [speaker004:] Yeah. [speaker007:] Huh. [speaker006:] Right. [speaker007:] But at some point [disfmarker] I mean, we probably shoul [speaker004:] We'll probably add it to the language model. [speaker007:] But we should add it to the dictionar No, to the pronunciation model. [speaker004:] Yeah. What did I say? [speaker007:] Language, uh [disfmarker] [speaker001:] To the language model [disfmarker] model. [speaker004:] Well both. We can go on lan lan add it to both dictionary and language model. [speaker007:] Oh lan Oh OK we OK [speaker002:] Add what, Liz? [speaker001:] Yeah. [speaker007:] it's in the language model, w yeah, but it so it's the pronunciation model that has to have a pronunciation of "tickle". [speaker004:] Well "tickle" was pronounced "tickle". Right? [speaker002:] What are you saying? [speaker004:] It's pronounced the same [disfmarker] it's pronounced the same as the verb. [speaker001:] "tickle" is pronounced "tickle"? [speaker007:] I'm sorry! [speaker004:] So I think it's the language model that makes it different. [speaker007:] Oh, sorry. What I meant is that there should be a pronunciation "tickle" for TCL as a word. [speaker004:] Oh I see. [speaker007:] And that word in the [disfmarker] in, you know, it stays in the language model wherever it was. [speaker001:] Yeah. [speaker006:] Mm hmm. [speaker004:] Right. Right. [speaker007:] Yeah you never would put "tickle" in the language model in that form, [speaker006:] Right. [speaker007:] yeah. [speaker004:] Right. [speaker007:] Right. There's actually a bunch of cases like this with people's names and [disfmarker] [speaker002:] So how w there'd be a problem for doing the language modeling then with our transcripts the way they are. [speaker007:] Yes. Yeah. 
Yeah so th th there there's a few cases like that where the um, the word needs to be spelled out in [disfmarker] in a consistent way as it would appear in the language, but there's not very many of these. TCL's one of them. [speaker004:] And [disfmarker] and you'll ha you'll have to do it synchronously. [speaker007:] Um, y yeah. [speaker004:] Right, so y so, whoever's creating the new models, will have to also go through the transcripts and change them synchronously. [speaker003:] It's just disturbing. [speaker007:] Right. Right. We have this [disfmarker] there is this thing I was gonna talk to you about at some point about, you know, what do we do with the dictionary as we're up updating the dictionary, [speaker002:] Hmm. [speaker007:] these changes have to be consistent with what's in the [disfmarker] Like spelling people's names and so forth. If we make a spelling correction to their name, like someone had Deborah Tannen's name misspelled, and since we know who that is, you know, we could correct it, [speaker004:] You can correct it. [speaker007:] but [disfmarker] [speaker004:] Yeah. [speaker007:] but we need to make sure we have the misspel If it doesn't get corrected we have to have a pronunciation as a misspelled word in the dictionary. Things like that. [speaker005:] Mm hmm. [speaker004:] These are so funny to read. [speaker005:] Well, of course now the [disfmarker] the Tannen corre the spelling c change. [speaker007:] So. [speaker005:] Uh, that's what gets [disfmarker] I [disfmarker] I picked those up in the frequency check. [speaker007:] Right. Right. So if there's things that get corrected before we get them, it's [disfmarker] it's not an issue, but if there's things that um, we change later, then we always have to keep our [disfmarker] the dictionary up to date. [speaker005:] Mm hmm. [speaker007:] And then, yeah, in the case of "tickle" I guess we would just have a, you know, word "TCL" which [disfmarker] [speaker002:] Mm hmm.
[speaker004:] You add it to the dictionary. [speaker007:] which normally would be an acronym, you know, "TCL" [speaker004:] Right. [speaker007:] but just has another pronunciation. [speaker004:] Yep. [speaker005:] "ICSI" is [disfmarker] is one of those that sometimes people pronounce and sometimes they say "ICSI." [speaker004:] Mm hmm. [speaker005:] So, those that are l are listed in the acronyms, I actually know [speaker007:] Oh yeah. [speaker005:] they were said as letters. The others, um, e those really do need to be listened to cuz I haven't been able to go to all the IC ICSI things, [speaker007:] Right, exactly. [speaker005:] and [disfmarker] [comment] and until they've been listened to they stay as "ICSI". [speaker004:] Mm hmm. [speaker007:] Right. [speaker006:] Don and I were just noticing, love this one over on page three, "vocal [disfmarker] vocal gesture mimicking sound of screwing something into head to hold mike in place." [speaker003:] That's great. [speaker004:] It's this, "rrre rrre rrre". It was me. [speaker005:] It was! In fact, it was! Yeah! [speaker004:] A lot of these are me [speaker005:] He [disfmarker] he s he said [disfmarker] he said get [disfmarker] [speaker004:] the [disfmarker] the "beep is said with a high pit high pitch and lengthening." [speaker001:] To head. [speaker005:] Yeah, that's it. [speaker004:] That was the [disfmarker] I was imitating uh, beeping out [disfmarker] [speaker006:] Beep. [speaker005:] Perfect. [speaker007:] Oh there is something spelled out "BEEEEEEP" [speaker006:] Yeah. [speaker005:] Yeah that's it. [speaker003:] Um [disfmarker] [speaker005:] That's it. Yeah, that's [disfmarker] that's been changed. [speaker003:] Yeah. [speaker007:] in the old [disfmarker] Thank you. Because he was saying, "How many E's do I have to allow for?" [speaker003:] You need a lot of [disfmarker] [speaker004:] What I meant was "beep". [speaker003:] You need a lot of qualification Adam. [speaker005:] That's been changed. 
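The multiple-pronunciation idea above (TCL said as "tickle" or letter by letter, ICSI likewise) could be represented in a pronunciation dictionary roughly like this. The ARPAbet strings are illustrative assumptions, not the actual ICSI dictionary entries.

```python
# Sketch of multi-pronunciation dictionary entries: the word keeps one
# spelling in the language model, but the pronunciation model lists
# every spoken variant. Phone strings here are illustrative only.
PRON_DICT = {
    "TCL":  ["T IY S IY EH L",      # said as letters: "T C L"
             "T IH K AH L"],        # said as the word "tickle"
    "ICSI": ["AY S IY EH S AY",     # said as letters: "I C S I"
             "IH K S IY"],          # said as a word
}

def pronunciations(word):
    """Return all pronunciation variants for a word (empty if unknown)."""
    return PRON_DICT.get(word.upper(), [])
```

This keeps the transcript and language model spelling consistent ("TCL") while letting the recognizer match either spoken form.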
[speaker004:] I guess so. [speaker005:] So, exactly, that's where the lengthening comment c came in. [speaker004:] Anyway. [speaker007:] Right, thanks, [speaker003:] Subtext. [speaker005:] s chan brought it down. [speaker007:] yeah. [speaker004:] So they're vocalization, [speaker007:] Right. [speaker005:] And those of course get [disfmarker] get picked up in the frequency check [speaker004:] glosses. [speaker005:] because you see "beep" [speaker007:] Right. [speaker005:] and you know [disfmarker] I mean it gets kicked out in the spelling, [speaker007:] Right. [speaker005:] and it also gets kicked out in the, uh, freq frequency listing. [speaker007:] Right. [speaker005:] I have the [disfmarker] there're various things like "breathe" versus "breath" versus "inhale" and, hhh, you know, I don't know. I [disfmarker] I think they don't have any implications for anything else so it's like I'm tempted to leave them for now an and [disfmarker] It's easy enough to find them when they're in curly brackets. We can always get an exhaustive listing of these things and find them and change them. [speaker007:] Yeah. [speaker006:] "Sings finale type song" [speaker007:] Yeah. [speaker003:] Yeah, that was in the first meeting. [speaker006:] that's [disfmarker] that's good. [speaker004:] Um, [speaker005:] Yeah, but I don't actually remember what it was. But that was [disfmarker] Eric did that. [speaker006:] Yeah. [speaker005:] Yeah. [speaker004:] So on [disfmarker] [speaker006:] Tah dah! I don't know. [speaker005:] I think maybe something like that. [speaker006:] Something like that maybe, yeah. [speaker005:] Well, that'd qualify. [speaker004:] On the glosses for numbers, [speaker005:] Yeah. [speaker004:] it seems like there are lots of different ways it's being done. [speaker005:] OK. Interesting question. [speaker004:] There's a [disfmarker] [speaker005:] Yes. OK, now first of all [disfmarker] Ooo ooo! Very important. [speaker004:] "Ooo ooo." 
[speaker005:] Uh Chuck [disfmarker] Chuck led to a refinement here which is to add "NUMS" if these are parts of the read numbers. Now you already know i that I had, uh, in places where they hadn't transcribed numbers, I put "numbers" in place of any kind of numbers, but there are places where they, um, it [disfmarker] th this convention came later an and at the very first digits task in some transcripts they actually transcribed numbers. And, um, d Chuck pointed out that this is read speech, and it's nice to have the option of ignoring it for certain other prob uh p uh, things. And that's why there's this other tag here which occurs a hundred and five [disfmarker] or three hundred and five times right now which is just [disfmarker] well n n "NUMS" by itself [speaker004:] "NUMS", yeah. [speaker005:] which means this is part of the numbers task. I may change it to "digits". I mean, i with the sed command you can really just change it however you want because it's systematically encoded, you know? [speaker004:] Yep. [speaker005:] Have to think about what's the best for [disfmarker] for the overall purposes, but in any case, um, "numbers" and "NUMS" are a part of this digits task thing. Um, now th Then I have these numbers that have quotation marks around them. Um, I didn't want to put them in as gloss comments because then you get the substitution. And actually, th um, [vocalsound] the reason I b did it this way was because I initially started out with the other version, you have the numbers and you have the full form and the parentheses, however sometimes people stumble over these numbers they're saying. So you say, "Seve seventy eight point two", or whatever. And there's no way of capturing that if you're putting the numbers off to the side. You can't have the seven and [disfmarker] [speaker004:] So what's to the left of these? [speaker005:] The left is i so example the very first one, it would be, spelled out in words, "point five". [speaker004:] Mm hmm. 
OK, that's what I was asking. Right. [speaker005:] Only it's spelled out in words. So i this is also spelled out in [disfmarker] in words. [speaker004:] Point FIVE, yeah. [speaker005:] "Point five." [speaker004:] Good. [speaker005:] And then, in here, "NUMS", so it's not going to be mistaken as a gloss. It comes out as "NUMS quote dot five". [speaker004:] OK now, the other example is, in the glosses right there, [speaker005:] Thank you. [speaker004:] "gloss one one one dash one three zero". [speaker003:] Right. [speaker005:] Well now [disfmarker] [speaker004:] What [disfmarker] what's to the left of that? [speaker005:] In that case it's people saying things like "one one one dash so and so" or they're saying uh "two [disfmarker] I mean zero" whatever. [speaker004:] OK. [speaker005:] And in that case, it's part of the numbers task, and it's not gonna be included in the read digits anyway, so [disfmarker] I m in the uh [disfmarker] [speaker002:] So there will be a "NUMS" tag on those lines? [speaker005:] There is. [speaker002:] Yeah. [speaker005:] Yeah. I've added that all now too. [speaker004:] Good. [speaker003:] There's a "numbers" tag [disfmarker] [speaker007:] Wait. [speaker003:] I'm sorry I'm [disfmarker] I didn't follow that last thing. [speaker005:] So, so gloss [disfmarker] in the same line that would have "gloss quote one one one dash one thirty", you'd have a gloss at the end of the line saying, uh, "curly bracket NUMS curly bracket". [speaker003:] Right. [speaker005:] So if you [disfmarker] if you did a, uh, a "grep minus V nums" [speaker007:] Oh, so you could do "grep minus V nums". So that's the [disfmarker] [speaker005:] and you get rid of anything that was read. [speaker007:] yeah. So there wouldn't be something like [speaker003:] OK. [speaker007:] i if somebody said something like, "Boy, I'm really tired, OK." and then started reading that would be on a separate line? [speaker005:] Yes. [speaker007:] OK great. 
Cuz I was doing the "grep minus V" quick and dirty and looked like that was working OK, [speaker005:] Mm hmm. [speaker007:] but [disfmarker] [speaker005:] Good. [speaker007:] Great. [speaker005:] Yep. [speaker007:] Now why do we [disfmarker] what's the reason for having like the point five have the "NUMS" on it? Is that just like when they're talking about their data or something? [speaker005:] This is more because [disfmarker] [speaker007:] Or [disfmarker] [speaker005:] Yeah. Oh these are all these, the "NUMS point", this all where they're saying "point" something or other. [speaker007:] These are all like inside the spontaneous [disfmarker] [speaker005:] And the other thing too is for readability of the transcript. I mean if you're trying to follow this while you're reading it it's really hard to read, you know [disfmarker] eh, "so in the data column five has", you know, "one point five compared to seventy nine point six", it's like when you see the words it's really hard to follow the argument. And this is just really a [disfmarker] a way of someone who would handle th the data in a more discourse y way to be able to follow what's being said. [speaker007:] Oh OK. [speaker004:] Label it. [speaker007:] I see. [speaker005:] So this is where Chuck's, um, overall h architecture comes in, where we're gonna have a master file of the channelized data. Um, there will be scripts that are written to convert it into these t these main two uses and th some scripts will take it down th e into a f a for ta take it to a format that's usable for the recognizer an uh, other scripts will take it to a form that's usable for the [disfmarker] for linguistics an and discourse analysis. And, um, the implication that [disfmarker] that I have is that th the master copy will stay unchanged. These will just be things that are generated, [speaker004:] Right [speaker005:] and [speaker007:] OK. [speaker005:] e by using scripts. [speaker004:] Master copies of superset. 
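The "grep minus V" idea above, generating a recognizer-ready view from the master transcript by dropping every line tagged as read digits, might look like this inside one of the conversion scripts. The bracketed `{NUMS}` tag follows the convention described in the discussion; the line format is an assumption.

```python
import re

# Drop every line carrying the bare read-digits tag, the equivalent of
# "grep -v" on "{NUMS}". Matching the full curly-bracket form means an
# ordinary word containing the letters "nums", or the inline spelled-
# number qualifier, can never be swept away by mistake.
READ_DIGITS = re.compile(r"\{NUMS\}")

def spontaneous_only(lines):
    """Return the lines that are not part of the read digits task."""
    return [ln for ln in lines if not READ_DIGITS.search(ln)]

sample = [
    'so in the data column five has one point five {NUMS ".5"}',
    'eight two four one {NUMS}',
]
kept = spontaneous_only(sample)
```

Note the first line survives: its tag is the inline `{NUMS ".5"}` qualifier, not the bare read-digits marker, which is exactly the distinction the curly-bracket matching preserves.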
[speaker005:] When things change then the [disfmarker] the script will cham change but the [disfmarker] but there won't be stored copies of [disfmarker] in different versions of things. [speaker004:] Good. [speaker007:] So, I guess I'd have one request here which is just, um, maybe to make it more robust, th that the tag, whatever you would choose for this type of "NUMS" [comment] where it's inside the spontaneous speech, is different than the tag that you use for the read speech. [speaker002:] Right. Right. That would argue for changing the other ones to be "digits" or something. [speaker007:] Um, that way w if we make a mistake parsing, or something, we don't see the "point five", or [disfmarker] or it's not there, then we [speaker002:] Mm hmm. [speaker007:] a Just [disfmarker] an And actually for things like "seven eighths", or people do fractions too I guess, you [disfmarker] maybe you want one overall tag for sort of that would be similar to that, [speaker005:] Except [disfmarker] [speaker007:] or [disfmarker] As long as they're sep as they're different strings that we [disfmarker] that'll make our p sort of processing more robust. [speaker005:] Well [disfmarker] [speaker007:] Cuz we really will get rid of everything that has the "NUMS" string in it. [speaker002:] I suppose what you could do is just make sure that you get rid of everything that has "curly brace NUMS curly brace". [speaker005:] Well [disfmarker] Ex exactly. Exactly. [speaker002:] I mean that would be the [disfmarker] [speaker005:] That was [disfmarker] that was my motivation. [speaker007:] Yeah. [speaker005:] And i these can be changed, like I said. You know, I mean, as I said I was considering changing it to "digits". And, it just [disfmarker] i you know, it's just a matter of deciding on whatever it is, and being sure the scripts know. [speaker002:] Right. [speaker007:] It would probably be safer, if you're willing, to have a separate tag just because um, then we know for sure. 
And we can also do counts on them without having to do the processing. But you're right, we could do it this way, it [disfmarker] it should work. Um, [speaker002:] Yeah, and it makes it [disfmarker] I guess the thing about [disfmarker] [speaker007:] but it it's probably not hard for a person to tell the difference because one's in the context of a [disfmarker] you know, a transcribed word string, [speaker002:] Yeah. Right. [speaker005:] The other thing is you can get really so minute with these things [speaker007:] and [disfmarker] So [disfmarker] [speaker005:] and increase the size of the files and the re and decrease the readability to such an extent by simply something like "percent". Now I [disfmarker] I could have adopted a similar convention for "percent", but somehow percent is not so hard, you know? [speaker004:] Hmm. [speaker005:] i It's just when you have these points and you're trying to figure out where the decimal places are [disfmarker] And we could always add it later. Percent's easy to detect. Point however is [disfmarker] is uh a word that has a couple different meanings. And you'll find both of those in one of these meetings, where he's saying "well the first point I wanna make is so and so" and he goes through four points, and also has all these decimals. [speaker002:] So Liz, what does the recognizer do, [speaker005:] So. [speaker002:] uh, [speaker006:] Hmm. [speaker002:] what does the SRI recognizer output for things like that? "seven point five". Does it output the word [disfmarker] [speaker007:] "Seven point five". [speaker002:] Right, [speaker004:] Well, the numbers? [speaker002:] the word "seven"? The number "seven"? [speaker007:] The word. [speaker002:] The word "seven", OK. [speaker006:] Yeah. [speaker007:] Yeah. [speaker006:] So I'd [disfmarker] so "I'd like [disfmarker] I'd like to talk about point five". [speaker007:] And [disfmarker] and actually, you know the language [disfmarker] it's the same point, actually, [speaker006:] Yeah. 
[speaker007:] the [disfmarker] the p you know, the word "to" and the word y th "going to" and "to go to" those are two different "to's" and so there's no distinction there. [speaker002:] Mm hmm. [speaker007:] It's just [disfmarker] just the word "point" has [disfmarker] Yeah, every word has only one, yeah e one version even if [disfmarker] even if it's [disfmarker] A actually even like the word "read" [comment] and "read" Those are two different words. They're spelled the same way, right? [speaker002:] Mm hmm. [speaker007:] And they're still gonna be transcribed as READ. [speaker006:] Right. [speaker002:] Mm hmm. [speaker007:] So, yeah, I [disfmarker] I like the idea of having this in there, I just [disfmarker] I was a little bit worried that, um, the tag for removing the read speech [disfmarker] because i What if we have like "read letters" or, I don't know, [speaker004:] We might wanna [disfmarker] just a separate tag that says it's read. [speaker007:] like "read something" like "read" yeah, basically. [speaker003:] Mm hmm. [speaker007:] But other than that I it sounds great. [speaker004:] Yeah. OK? Are we done? [speaker005:] Well I wanted to say also regarding the channelized data, [speaker004:] Oh, I guess we're not done. [speaker005:] that, um, Thilo requested, um, that we ge get some segments done by hand to e e s reduce the size of the time bins [speaker002:] Yeah. [speaker005:] wh like was Chuc Chuck was mentioning earlier that, um, that, um, if you [disfmarker] if you said, "Oh" and it was in part of a really long, s complex, overlapping segment, that the same start and end times would be held for that one [speaker004:] Well [disfmarker] [speaker005:] as for the longer utterances, [speaker004:] We did that for one meeting, right, [speaker005:] and [disfmarker] [speaker004:] so you have that data don't you? [speaker001:] Yeah, that's the training data. 
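Renaming the read-digits tag to something unmistakable, the "digits" option floated above, is the kind of systematic one-liner the earlier sed remark refers to. A Python equivalent, with `{DIGITS}` as an assumed replacement name rather than a settled convention:

```python
import re

# Rename the bare read-digits marker {NUMS} to {DIGITS} across the
# master transcript, leaving the inline {NUMS "..."} spelled-number
# qualifiers untouched, so the read-task tag and the in-speech number
# gloss can never collide in later processing.
BARE_NUMS = re.compile(r"\{NUMS\}")

def retag(line):
    return BARE_NUMS.sub("{DIGITS}", line)
```

Because the tag is systematically encoded, the change is safe to apply mechanically, which was the point of making it a bracketed convention in the first place.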
[speaker005:] And he requested that there be, uh, similar, uh, samples done for five minute stretches c involving a variety of speakers and overlapping secti sections. He gave me [disfmarker] he did the [disfmarker] very nice, he [disfmarker] he did some shopping through the data [speaker001:] Yeah. [speaker005:] and found segments that would be useful. And at this point, all four of the ones that he specified have been done. In addition the I've [disfmarker] I have the transcribers expanding the amount that they're doing actually. [speaker001:] Oh great. [speaker005:] So right now, um, I know that as of today we got an extra fifteen minutes of that type, and I'm having them expand the realm on either side of these places where they've already started. [speaker001:] Oh great. OK. [speaker005:] But if [disfmarker] if [disfmarker] you know, and I [disfmarker] and he's gonna give me some more sections that [disfmarker] that he thinks would be useful for this purpose. [speaker001:] Yeah. Yeah. [speaker005:] Because it's true, I mean, if we could do the [disfmarker] the more fine grained tuning of this, uh, using an algorithm, that would be so much more efficient. And, um. So this is gonna be [pause] useful to expand this. [speaker001:] So I [disfmarker] I thought we [disfmarker] we sh we sh perhaps we should try to [disfmarker] to start with those channelized versions just to [disfmarker] just to try it. Give it [disfmarker] Give one tr transcriber the [disfmarker] the channelized version of [disfmarker] of my speech nonspeech detection and look if [disfmarker] if that's helpful for them, or just let them try if [disfmarker] if that's better or If they [disfmarker] if they can [disfmarker] [speaker005:] You mean to start from scratch f in a brand new transcript? [speaker001:] Yeah. Yeah. Yeah. [speaker005:] That'd be excellent. Yeah, that'd be really great. 
As it stands we're still in the phase of sort of, um, cleaning up the existing data getting things, uh, in i m more tight tightly time [disfmarker] uh, aligned. I also wanna tell [disfmarker] um, I also wanted to r raise the issue that [disfmarker] OK so, there's this idea we're gonna have this master copy of the transcript, it's gonna be modified by scripts t into these two different functions. And actually the master [disfmarker] [speaker002:] Two or more. Two or more different functions. [speaker005:] Two [disfmarker] two or more. And that the master is gonna be the channelized version. [speaker002:] Right. [speaker005:] So right now we've taken this i initial one, it was a single channel basically the way it was input. And now, uh, thanks to the advances made in the interface, we can from now on use the channelized part, and, um, any changes that are made get made in the channelized version kind of thing. But I wanted to get all the finished [disfmarker] all the checks [disfmarker] [speaker002:] Yeah, so that has implications for your script. [speaker003:] Yeah. So, uh, have those [disfmarker] e e the vis the ten hours that have been transcribed already, have those been channelized? [speaker005:] Yes, they have. [speaker003:] And I know [disfmarker] I've seen [@ @] [disfmarker] I've seen they've been channelized, [speaker004:] All ten hours? [speaker005:] Except for the missing thirty minutes. [speaker003:] but [speaker004:] Great. [speaker003:] have they uh [disfmarker] have they been [disfmarker] has the time [disfmarker] have the time markings been adjusted, uh, p on a per channel [disfmarker] [speaker005:] Uh, for [disfmarker] for a total of like twenty m f for a total of [disfmarker] Let's see, four times [disfmarker] total of about an [disfmarker] [pause] thirty minutes. That's [disfmarker] that's been the case. [speaker003:] So, [speaker005:] And plus the training, whatever you have. 
[speaker003:] I guess, I mean, I don't know if we should talk about this now, or not, but I [speaker004:] Well it's just we're [pause] missing tea. [speaker003:] Yeah, I know. [speaker004:] So. [speaker003:] No, but I mean my question is like should I wait until all of those are processed, and channelized, like the time markings are adjusted before I do all the processing, and we start like branching off into the [disfmarker] into the [disfmarker] our layer of uh transcripts. [speaker005:] Well, you know the problem [disfmarker] the problem is that some [disfmarker] some of the adjustments that they're making are to bring [disfmarker] are to combine bins that were [disfmarker] time bins which were previously separate. And the reason they do that is sometimes there's a word that's cut off. [speaker003:] Right. [speaker005:] And so, i i i it's true that it's likely to be adjusted in the way that the words are more complete. And, [speaker003:] OK. [speaker005:] so I [disfmarker] it's gonna be a more reliable thing [speaker003:] No I know [disfmarker] I know that adjusting those things are gonna [disfmarker] is gonna make it better. [speaker005:] and I'm not sure [disfmarker] Yeah. [speaker003:] I mean I'm sure about that, but do you have like a time frame when you can expect like all of it to be done, or when you expect them to finish it, or [disfmarker] [speaker005:] Well partly it depends on how [disfmarker] um, how e effective it will be to apply an algorithm because i this takes time, [speaker003:] Yeah. [speaker005:] you know, it takes a couple hours t to do, uh, ten minutes. [speaker003:] Yeah. Yeah, I don't doubt it. Um, so. [speaker002:] So right now the [disfmarker] what you're doing is you're taking the [disfmarker] uh, the o original version and you're sort of channelizing yourself, right? [speaker003:] Yeah. I'm doing it myself. 
I mean i if the time markings aren't different across channels, like the channelized version really doesn't have any more information. [speaker002:] Mm hmm. [speaker003:] So, I was just [disfmarker] I mean, originally I had done before like the channelized versions were coming out. [speaker002:] Right. Right. [speaker003:] Um, and so it's a question of like what [disfmarker] [speaker002:] So I [disfmarker] I th I think probably the way it'll go is that, you know, when we make this first general version and then start working on the script, that script [@ @] that will be ma you know primarily come from what you've done, um, we'll need to work on a channelized version of those originals. [speaker003:] Mm hmm. [speaker005:] Mm hmm. [speaker003:] Mm hmm. [speaker002:] And so it should be pretty much identical to what you have [speaker005:] Yep. [speaker002:] t except for the one that they've already tightened the boundaries on. [speaker005:] Mm hmm. [speaker003:] Right. [speaker002:] Um, [speaker005:] Yeah, I mean [disfmarker] [speaker002:] So [speaker005:] yeah. [speaker002:] uh, and then probably what will happen is as the transcribers finish tightening more and more, you know, that original version will get updated and then we'll rerun the script and produce better uh versions. [speaker003:] OK. [speaker002:] But the [disfmarker] I guess the ef the effect for you guys, because you're pulling out the little wave forms into separate ones, that would mean these boundaries are constantly changing you'd have to constantly re rerun that, [speaker003:] I know. [speaker002:] so, maybe [disfmarker] [speaker003:] Right. [speaker007:] But that [disfmarker] that's not hard. [speaker005:] But that [disfmarker] Mm hmm. [speaker003:] No. [speaker007:] I I think the harder part is making sure that the transc the transcription [disfmarker] [speaker002:] OK. 
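Re-cutting the per-segment waveform files whenever the transcribers tighten a boundary, the step described here as easy to rerun, can be sketched as below. The segment format and the 16 kHz rate are simplifying assumptions, not the actual extraction script.

```python
# Re-extract per-segment audio whenever time boundaries change.
# Segments are (start_sec, end_sec) pairs against one channel's
# sample stream; a 16 kHz rate is assumed for illustration.
SAMPLE_RATE = 16000

def cut_segments(samples, segments):
    """Return one slice of samples per (start, end) segment, in seconds."""
    cuts = []
    for start, end in segments:
        lo = int(round(start * SAMPLE_RATE))
        hi = int(round(end * SAMPLE_RATE))
        cuts.append(samples[lo:hi])
    return cuts
```

Since the cut is driven entirely by the time stamps in the master transcript, rerunning it after a boundary change is cheap; the harder bookkeeping, as noted below, is knowing which words moved when a bin is split.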
[speaker007:] So if you b merge two things, then you know that it's the sum of the transcripts, but if you split inside something, you don't know where the word [disfmarker] which words moved. [speaker002:] Mm hmm. Mm hmm. [speaker007:] And that's wh that's where it becomes a little bit [disfmarker] uh, having to rerun the processing. [speaker002:] Mm hmm. [speaker007:] The cutting of the waveforms is pretty trivial. [speaker003:] Yeah. I mean as long as it can all be done automatically, I mean, then that's not a concern. You know, if I just have to run three scripts to extract it all and let it run on my computer for an hour and a half, or however long it takes to parse and create all the reference file, that's not a problem. [speaker007:] Right. [speaker002:] Mm hmm. Yeah. Uh huh. Mm hmm. [speaker003:] Um, so yeah. As long as we're at that point. And I know exactly like what the steps will work [disfmarker] what's going on, in the editing process, [speaker002:] Yeah. [speaker003:] so. OK. [speaker005:] So that's [disfmarker] I I mean I could [disfmarker] there were other checks that I did, but it's [disfmarker] I think that we've [disfmarker] unless you think there's anything else, I think that I've covered it. [speaker006:] Yeah. [speaker002:] I can't think of any of the [disfmarker] other ones. [speaker005:] OK. Great. [speaker006:] OK. [speaker004:] Oop! Man! [speaker001:] Am I on? I guess so. Radio two. Hmm. Radio two. [speaker005:] Hello? [speaker001:] Wow. [speaker005:] Mm hmm. Hi? [speaker002:] Blow into it, it works really well. [speaker006:] Channel B. [speaker001:] People say the strangest things when their microphones are on. [speaker004:] Channel four. Test. [speaker003:] Uh oh. [speaker004:] OK. [speaker003:] Radio four. [speaker005:] Hello? [speaker004:] Today's [speaker001:] So everybody everybody's on? Yeah. So y you guys had a [disfmarker] a meeting with uh [disfmarker] with Hynek which I unfortunately had to miss. Um [speaker003:] Mmm.
[speaker001:] and uh somebody eh e and uh I guess Chuck you weren't there either, so the uh [speaker002:] I was there. [speaker001:] Oh you were there? [speaker002:] With Hynek? [speaker001:] Yeah. [speaker002:] Yeah. [speaker001:] So everybody knows what happened except me. OK. [vocalsound] Maybe somebody should tell me. [speaker003:] Oh yeah. Alright. Well. Uh first we discussed about some of the points that I was addressing in the mail I sent last week. [speaker001:] Uh huh. [speaker003:] So. Yeah. About the um, well [disfmarker] the downsampling problem. [speaker001:] Yeah. [speaker003:] Uh and about the f the length of the filters and [disfmarker] Yeah. [speaker001:] What was the [disfmarker] w what was the downsampling problem again? [speaker003:] So we had [disfmarker] [speaker001:] I forget. [speaker003:] So the fact that there [disfmarker] there is no uh low pass filtering before the downsampling. Well. [speaker001:] Uh huh. [speaker003:] There is because there is LDA filtering but that's perhaps not uh the best w m [speaker001:] Depends what its frequency characteristic is, yeah. [speaker003:] Well. Mm hmm. [speaker004:] System on [speaker001:] So you could do a [disfmarker] you could do a stricter one. Maybe. Yeah. [speaker003:] Yeah. So we discussed about this, about the um [disfmarker] [speaker001:] Was there any conclusion about that? [speaker003:] Uh "try it". Yeah. [speaker001:] I see. [speaker003:] I guess. Uh. [speaker001:] Yeah. So again this is th this is the downsampling [vocalsound] uh of the uh [disfmarker] the feature vector stream and um Yeah I guess the [disfmarker] the uh LDA filters they were doing do have um [vocalsound] uh let's see, so the [disfmarker] the [disfmarker] the feature vectors are calculated every ten milliseconds so uh the question is how far down they are at fifty [disfmarker] fifty hertz. [speaker003:] Yeah. [speaker001:] Uh. [vocalsound] Um. [speaker003:] Mm hmm.
[speaker001:] Sorry at twenty five hertz since they're downsampling by two. So. Does anybody know what the frequency characteristic is? [speaker003:] We don't have yet um [vocalsound] So, yeah. [speaker001:] Oh OK. [speaker003:] We should have a look first at, perhaps, [vocalsound] the modulation spectrum. [speaker001:] OK. Yeah. [speaker003:] Um. So there is this, there is the um length of the filters. Um. [vocalsound] [vocalsound] So the i this idea of trying to find filters with shorter delays. Um. We started to work with this. [speaker001:] Hmm hmm. [speaker003:] Mmm. And the third point um [vocalsound] [vocalsound] was the um, yeah, [vocalsound] the on line normalization where, well, the recursion f recursion for the mean estimation [vocalsound] is a filter with some kind of delay [speaker001:] Yeah. [speaker003:] and that's not taken into account right now. Um. Yeah. And there again, yeah. For this, the conclusion of Hynek was, well, "we can try it but [disfmarker]" [speaker001:] Uh huh. [speaker003:] Um. [speaker001:] Try [disfmarker] try what? [speaker003:] So try to um [vocalsound] [vocalsound] um take into account the delay of the recursion for the mean estimation. [speaker001:] OK. [speaker003:] Mmm. And this [disfmarker] we've not uh worked on this yet. Um, yeah. And so while discussing about these [disfmarker] these LDA filters, some i issues appeared, like well, the fact that if we look at the frequency response of these filters it's uh, well, we don't know really what's the important part in the frequency response and there is the fact that [vocalsound] in the very low frequency, these filters don't [disfmarker] don't really remove a lot. [vocalsound] compared to the [disfmarker] to the uh standard RASTA filter. Uh and that's probably a reason why, yeah, on line normalization helps because it [disfmarker] it, [speaker001:] Right. [speaker003:] yeah, it removed this mean. Um. 
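The downsampling question above (feature vectors every 10 ms, i.e. a 100 Hz stream, dropped by two so the new Nyquist is 25 Hz) can be made concrete with a small sketch. The filter here is a generic windowed-sinc low-pass chosen purely for illustration; it is an assumption, not the LDA filters whose frequency characteristic is being asked about.

```python
import numpy as np

def decimate_by_two(feat_stream, num_taps=31):
    """Low-pass a 100 Hz feature-value stream at the new 25 Hz Nyquist
    before dropping every other frame, so the downsampling doesn't alias.

    The windowed-sinc design is an illustrative assumption, not the
    LDA filters discussed above.
    """
    n = np.arange(num_taps) - (num_taps - 1) / 2
    # Cutoff at 0.25 cycles/frame = 25 Hz for a 100 Hz frame rate
    h = 0.5 * np.sinc(0.5 * n) * np.hamming(num_taps)
    smoothed = np.convolve(feat_stream, h, mode="same")
    return smoothed[::2]
```

Checking "how far down" a given filter is at 25 Hz, as discussed, amounts to looking at `np.abs(np.fft.rfft(h, 1024))` for the actual filter taps.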
Yeah, but perhaps everything could [disfmarker] should be [disfmarker] could be in the filter, I mean, uh the [disfmarker] the mean normalization and [disfmarker] Yeah. So. Yeah. So basically that was [disfmarker] that's [vocalsound] all we discussed about. We discussed about [vocalsound] good things to do also uh well, generally good stuff [vocalsound] to do for the research. [speaker001:] Mm hmm. [speaker003:] And this was this LDA uh tuning perhaps and [vocalsound] Hynek proposed again to his uh TRAPS, so. [speaker001:] OK. [speaker003:] Yeah, um. [speaker001:] I mean I g I guess the key thing for me is [disfmarker] is figuring out how to better coordinate between the two sides [speaker003:] Mm hmm. [speaker001:] cuz [disfmarker] because um uh I was talking with Hynek about it later and the [disfmarker] the [disfmarker] sort of had the sense sort of that [disfmarker] that neither group of people wanted to [disfmarker] to bother the other group too much. And [disfmarker] and I don't think anybody is, you know, closed in in their thinking or are unwilling to talk about things but I think that [vocalsound] you were sort of waiting for them to [vocalsound] tell you that they had something for you and [disfmarker] and that [disfmarker] and expected that they would do certain things [speaker003:] Mm hmm. [speaker001:] and they were sor they didn't wanna bother you and [vocalsound] they were sort of waiting for you and [disfmarker] and [disfmarker] and uh we ended up with this thing where they [disfmarker] they were filling up all of the possible latency themselves, and they just had hadn't thought of that. So. Uh. [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] I mean it's true that maybe [disfmarker] maybe no one really thought about that [disfmarker] that this latency thing would be such a [disfmarker] a strict issue [speaker003:] Yeah. Well, but. Yeah. Yeah. 
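The recursion-delay point raised above — that the running-mean estimate used for on-line normalization is itself a filter with a delay that is not currently taken into account — can be sketched as follows. For the first-order recursion the low-frequency group delay is alpha/(1 - alpha) frames; the smoothing constant below is an assumed placeholder, not the value in the actual system.

```python
import numpy as np

def running_mean_delay(alpha):
    """Group delay (in frames) of m[t] = alpha*m[t-1] + (1-alpha)*x[t],
    i.e. how far the mean estimate trails the signal it tracks."""
    return alpha / (1.0 - alpha)

def online_normalize(frames, alpha=0.99):
    """Subtract a recursively estimated mean from each feature frame.
    alpha=0.99 is an illustrative assumption."""
    mean = frames[0]
    out = np.empty_like(frames, dtype=float)
    for t, x in enumerate(frames):
        mean = alpha * mean + (1.0 - alpha) * x
        out[t] = x - mean
    return out
```

With alpha = 0.99 the estimate lags by about 99 frames, i.e. roughly a second at a 10 ms frame step — which is why compensating for it, as proposed, is worth trying.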
Well [disfmarker] [speaker001:] in [disfmarker] in uh [disfmarker] the other [disfmarker] [speaker003:] Yeah I don't know what happened really, but [speaker001:] Yeah. [speaker003:] I guess it's [disfmarker] it's also so uh the time constraints. Because, [vocalsound] well, we discussed about that [disfmarker] about this problem and they told us "well, we will do all that's possible to have enough space for a network" but then, yeah, perhaps they were too short with the time and [speaker001:] Then they couldn't. I see. [speaker003:] uh yeah. But there was also problem [disfmarker] perhaps a problem of communication. So, yeah. Now we will try to [disfmarker] [speaker001:] Just talk more. [speaker003:] Yeah, slikes and send mails. u s [speaker001:] Yeah. [speaker003:] o o Yeah. [speaker001:] Yeah. [speaker003:] Uh. OK. [speaker001:] So there's um [disfmarker] Alright. Well maybe we should just uh I mean you're [disfmarker] you're bus other than that you folks are busy doing all the [disfmarker] all the things that you're trying that we talked about before right? And this [disfmarker] machines are busy and [vocalsound] you're busy [speaker003:] Yeah. [speaker001:] and [speaker003:] Basically. [speaker001:] Yeah. OK. [speaker003:] Um. [speaker001:] Oh. Let's [disfmarker] let's, I mean, I think that as [disfmarker] as we said before that one of the things that we're imagining is that uh there [disfmarker] there will be [vocalsound] uh in the system we end up with there'll be something to explicitly uh uh do something about noise [speaker003:] Mm hmm. [speaker001:] in addition to the uh other things that we're talking about and that's probably the best thing to do. And there was that one email that said that [vocalsound] it sounded like uh uh things looked very promising up there in terms of uh I think they were using Ericsson's [vocalsound] approach or something and [vocalsound] in addition to [disfmarker] They're doing some noise removal thing, right? 
[speaker003:] Yeah, yeah. So yeah we're [disfmarker] will start to do this also. [speaker001:] Yeah. [speaker003:] Uh so Carmen is just looking at the Ericsson [disfmarker] Ericsson code. [speaker004:] Yeah. We modif [speaker001:] Mm hmm. [speaker004:] Yeah, I modified it [disfmarker] [speaker003:] And [speaker004:] well, modifying [disfmarker] [vocalsound] I studied Barry's sim code, more or less. to take [@ @] the first step the spectral subtraction. and we have some [disfmarker] the feature for Italian database and we will try with this feature with the filter to find the result. [speaker001:] Mm hmm. Mm hmm. [speaker004:] But we haven't result until this moment. [speaker001:] Yeah, sure. [speaker004:] But well, we are working in this also [speaker001:] Yeah. [speaker004:] and maybe try another type of spectral subtraction, I don't [disfmarker] [speaker001:] When you say you don't have a result yet you mean it's [disfmarker] it's just that it's in process or that you [disfmarker] [vocalsound] it finished and it didn't get a good result? [speaker004:] No. No, no n we have n we have do the experiment only have the feature [disfmarker] the feature but the experiment have [speaker003:] Yeah. [speaker004:] we have not make the experiment [speaker001:] Oh. [speaker004:] and [speaker001:] OK. [speaker004:] maybe will be good result or bad result, we don't know. [speaker001:] Yeah. [speaker003:] Yeah. [speaker001:] Yeah. OK. So um I suggest actually now we [disfmarker] we [disfmarker] we sorta move on and [disfmarker] and hear what's [disfmarker] what's [disfmarker] what's happening in [disfmarker] in other areas like [vocalsound] what's [disfmarker] what's happening with your [vocalsound] investigations [vocalsound] about echos and so on. [speaker006:] Oh um Well um I haven't started writing the test yet, I'm meeting with Adam today [speaker001:] Mm hmm. 
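As a rough illustration of the spectral-subtraction step being adapted from the Ericsson code, a minimal magnitude-domain version looks like the following. The over-subtraction factor and spectral floor are assumed illustrative values, not what that code actually uses.

```python
import numpy as np

def spectral_subtract(noisy_mag, noise_mag, over_sub=1.0, floor=0.1):
    """Basic magnitude-domain spectral subtraction, per frequency bin.

    Subtracts a (possibly over-weighted) noise magnitude estimate and
    clamps to a floor, since negative magnitudes are meaningless and
    hard clipping at zero is a classic source of musical noise.
    """
    clean = noisy_mag - over_sub * noise_mag
    return np.maximum(clean, floor * noise_mag)
```

In a full front end this would be applied per frame to the magnitude (or power) spectrum, with `noise_mag` estimated from non-speech frames, before the cepstral analysis and any filtering.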
[speaker006:] um and he's going t show me the scripts he has for um [vocalsound] [vocalsound] running recognition on mee Meeting Recorder digits. [speaker001:] Mm hmm. [speaker006:] Uh [vocalsound] I also um [vocalsound] [vocalsound] haven't got the code yet, I haven't asked Hynek for [disfmarker] for the [disfmarker] for his code yet. Cuz I looked at uh Avendano's thesis and [vocalsound] I don't really understand what he's doing yet but it [disfmarker] [vocalsound] it [disfmarker] it sounded like um [vocalsound] the channel normalization part [vocalsound] um of his thesis um [vocalsound] was done in a [disfmarker] a bit of I don't know what the word is, a [disfmarker] a bit of a rough way um [vocalsound] it sounded like he um he [disfmarker] he [disfmarker] it [disfmarker] it wasn't really fleshed out and maybe he did something that was [vocalsound] interesting for the test situation but I [disfmarker] I'm not sure if it's [vocalsound] what I'd wanna use so I have to [disfmarker] I have to read it more, I don't really understand what he's doing yet. [speaker001:] OK. [speaker004:] It's my [speaker001:] Yeah I haven't read it in a while so I'm not gonna be too much help unless I read it again, [speaker004:] I know this is mine here. [speaker003:] Oh yeah? [speaker001:] so. OK. Um. [vocalsound] The um [disfmarker] so you, and then [vocalsound] you're also gonna be doing this echo cancelling between the [disfmarker] the close mounted and the [disfmarker] [vocalsound] and the [disfmarker] the [disfmarker] the [disfmarker] what we're calling a cheating experiment uh of sorts [speaker006:] Uh I I'm ho [speaker001:] between the distant [disfmarker] [speaker006:] Right. Well [disfmarker] [vocalsound] or I'm hoping [disfmarker] I'm hoping Espen will do it. Um [speaker001:] Ah! OK. F um [speaker006:] u [speaker001:] Delegate. That's good. It's good to delegate. 
[speaker006:] I [disfmarker] I think he's at least planning to do it for the cl close mike cross talk and so maybe I can just take whatever setup he has and use it. [speaker001:] Great. Great. Yeah actually um he should uh I wonder who else is I think maybe it's Dan Ellis is going to be doing uh a different cancellation. Um. [vocalsound] One of the things that people working in the meeting task wanna get at is they would like to have cleaner [vocalsound] close miked recordings. So uh this is especially true for the lapel but even for the close [disfmarker] close miked uh cases um we'd like to be able to have [vocalsound] um other sounds from other people and so forth removed from [disfmarker] So when someone isn't speaking you'd like the part where they're not speaking to actually be [disfmarker] So [vocalsound] what they're talking about doing is using ec uh echo cancellation like techniques. It's not really echo but [vocalsound] uh just um uh taking the input from other mikes and using uh [vocalsound] uh a uh [disfmarker] [vocalsound] an adaptive filtering approach to remove the effect of that uh other speech. So. Um what was it, there was [disfmarker] there was some [disfmarker] some [disfmarker] some point where [vocalsound] eh uh Eric or somebody was [disfmarker] was speaking and he had lots of [vocalsound] silence in his channel and I was saying something to somebody else uh [vocalsound] which was in the background and it was not [disfmarker] it was recognizing my words, which were the background speech [vocalsound] on the close [disfmarker] [vocalsound] close mike. [speaker006:] Hmm. [speaker002:] Oh the [disfmarker] What we talked about yesterday? [speaker001:] Yes. [speaker002:] Yeah that was actually my [disfmarker] I was wearing the [disfmarker] I was wearing the lapel and you were sitting next to me, [speaker001:] Oh you [disfmarker] it was you I was Yeah. [speaker002:] and I only said one thing but you were talking and it was picking up all your words. 
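The "echo-cancellation-like" adaptive filtering described above — predicting the cross-talk on a close mike from another mike's signal and subtracting the prediction — can be sketched with a normalized-LMS filter. The filter length and step size below are assumptions for illustration, not whatever Dan Ellis's setup would use.

```python
import numpy as np

def nlms_cancel(mic, ref, num_taps=16, mu=0.5, eps=1e-8):
    """Subtract the component of `mic` that is linearly predictable from
    `ref` using a normalized-LMS adaptive filter; returns the residual
    (the 'cleaned' close-mike channel)."""
    w = np.zeros(num_taps)
    out = np.zeros(len(mic))
    for t in range(len(mic)):
        # Most recent num_taps reference samples, newest first
        x = ref[max(0, t - num_taps + 1):t + 1][::-1]
        x = np.pad(x, (0, num_taps - len(x)))
        y = w @ x                          # cross-talk estimate
        e = mic[t] - y                     # residual after cancellation
        w += mu * e * x / (eps + x @ x)    # NLMS weight update
        out[t] = e
    return out
```

Where the wearer is silent, the residual should drop toward zero, which is exactly the behavior wanted for the lapel cross-talk case described above.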
[speaker001:] Yeah. Yeah. So they would like clean channels. Uh and for that [disfmarker] mmm uh [disfmarker] that purpose uh they'd like to pull it out. So I think [disfmarker] [vocalsound] I think Dan Ellis or somebody who was working with him was going to uh work on that. So. OK. Right? Um. [vocalsound] And uh I don't know if we've talked lately about the [disfmarker] the plans you're developing that we talked about this morning uh I don't remember if we talked about that last week or not, but [vocalsound] maybe just a quick reprise of [disfmarker] of what we were saying this morning. [speaker005:] OK. Um. [comment] So continuing to um extend [speaker001:] Uh. [speaker002:] What about the stuff that um Mirjam has been doing? And [disfmarker] and S Shawn, yeah. Oh. So they're training up nets to try to recognize these acoustic features? I see. [speaker001:] But that's uh uh all [disfmarker] that's [disfmarker] is a [disfmarker] a certainly relevant [comment] [vocalsound] uh study and, you know, what are the features that they're finding. We have this problem with the overloading of the term "feature" so [speaker002:] Yeah. [speaker001:] uh [vocalsound] what are the variables, what we're calling this one, what are the variables that they're found [disfmarker] finding useful [speaker003:] Hmm. [speaker001:] um [speaker002:] And their [disfmarker] their targets are based on canonical mappings of phones to acoustic f features. [speaker001:] for [disfmarker] Right. And that's certainly one thing to do and we're gonna try and do something more f more fine than that but uh um so um So I guess you know what, I was trying to remember some of the things we were saying, do you ha still have that [disfmarker]? [speaker005:] Oh yeah. [speaker001:] Yeah. 
There's those [vocalsound] [pause] that uh yeah, some of [disfmarker] some of the issues we were talking about was in j just getting a good handle on [disfmarker] on uh [vocalsound] what "good features" are and [disfmarker] [speaker002:] What does [disfmarker] what did um Larry Saul use for [disfmarker] it was the sonorant uh detector, right? How did he [disfmarker] H how did he do that? Wh what was his detector? Mm hmm. Mm hmm. Oh, OK. Mm hmm. So how did he combine all these features? What [disfmarker] what r mmm classifier did he Hmm. Oh right. You were talking about that, yeah. I see. [speaker001:] And the other thing you were talking about is [disfmarker] is [disfmarker] is where we get the targets from. So I mean, there's these issues of what are the [disfmarker] what are the variables that you use and do you combine them using the soft "AND OR" or you do something, you know, more complicated um and then the other thing was so where do you get the targets from? The initial thing is just the obvious that we're discussing is starting up with phone labels [vocalsound] from somewhere and then uh doing the transformation. But then the other thing is to do something better and eh w why don't you tell us again about this [disfmarker] this database? This is the [disfmarker] [speaker002:] Hmm! [speaker001:] And then tell them to talk naturally? Yeah, yeah. [speaker002:] Pierced tongues and Yeah. You could just mount it to that and they wouldn't even notice. Weld it. Zzz. [speaker001:] Maybe you could go to these parlors and [disfmarker] and you could, you know [disfmarker] you know have [disfmarker] have, you know, reduced rates if you [disfmarker] [vocalsound] if you can do the measurements. [speaker002:] Yeah. I That's right. You could [disfmarker] what you could do is you could sell little rings and stuff with embedded you know, transmitters in them and things [speaker001:] Yeah. [speaker002:] and [speaker001:] Yeah, be cool and help science. [speaker002:] Yeah. 
[speaker001:] OK. [speaker002:] Hmm! There's a bunch of data that l around, that [disfmarker] people have done studies like that w way way back right? I mean [vocalsound] I can't remember where [disfmarker] uh Wisconsin or someplace that used to have a big database of [disfmarker] Yeah. I remember there was this guy at A T andT, Randolph? or r What was his name? Do you remember that guy? Um, [vocalsound] [vocalsound] researcher at A T andT a while back that was studying, trying to do speech recognition from these kinds of features. I can't remember what his name was. Dang. Now I'll think of it. [speaker001:] Do you mean eh [disfmarker] but you [disfmarker] I mean [disfmarker] Mar [speaker002:] That's interesting. [speaker003:] Well he was the guy [disfmarker] the guy that was using [disfmarker] [speaker001:] you mean when was [disfmarker] was Mark Randolph there, or [disfmarker]? [speaker002:] Mark Randolph. [speaker001:] Yeah he's [disfmarker] he's [disfmarker] he's at Motorola now. [speaker002:] Oh is he? Oh OK. [speaker001:] Yeah. Yeah. [speaker002:] Yeah. [speaker003:] Is it the guy that was using the pattern of pressure on the tongue or [disfmarker]? [speaker002:] I can't remember exactly what he was using, now. But I know [disfmarker] I just remember it had to do with you know [vocalsound] uh positional parameters [speaker003:] What [disfmarker] Yeah. Mm hmm. [speaker002:] and trying to m you know do speech recognition based on them. [speaker001:] Yeah. So the only [disfmarker] the only uh hesitation I had about it since, I mean I haven't see the data is it sounds like it's [disfmarker] it's [vocalsound] continuous variables and a bunch of them. [speaker002:] Hmm. [speaker001:] And so I don't know how complicated it is to go from there [disfmarker] What you really want are these binary [pause] labels, and just a few of them. 
And maybe there's a trivial mapping if you wanna do it and it's e but it [disfmarker] I [disfmarker] I [disfmarker] I worry a little bit that this is a research project in itself, whereas um [vocalsound] if you did something instead that [disfmarker] like um having some manual annotation by [vocalsound] uh you know, linguistics students, this would [disfmarker] there'd be a limited s set of things that you could do a as per our discussions with [disfmarker] with John before [speaker002:] Mm hmm. [speaker001:] but the things that you could do, like nasality and voicing and a couple other things you probably could do reasonably well. [speaker002:] Mm hmm. [speaker001:] And then there would [disfmarker] it would really be uh this uh uh binary variable. Course then, that's the other question is do you want binary variables. So. I mean the other thing you could do is [vocalsound] boot trying to [disfmarker] to uh get those binary variables and take the continuous variables from [vocalsound] uh the uh [vocalsound] uh the data itself there, but I [disfmarker] I'm not sure [disfmarker] [speaker002:] Could you cluster the [disfmarker] just do some kind of clustering? [speaker001:] Guess you could, yeah. [speaker002:] Bin them up into different categories and [disfmarker] [speaker001:] Yeah. So anyway that's [disfmarker] that's uh [disfmarker] that's another whole direction that cou could be looked at. Um. [vocalsound] Um. [vocalsound] I mean in general it's gonna be [disfmarker] for new data that you look at, it's gonna be hidden variable because we're not gonna get everybody sitting in these meetings to [vocalsound] wear the pellets and [disfmarker] [speaker005:] Right. [speaker001:] Um. So. [speaker005:] Right. [speaker002:] So you're talking about using that data to get uh instead of using canonical mappings of phones. [speaker005:] Right. [speaker002:] So you'd use that data to give you sort of what the [disfmarker] [vocalsound] the true mappings are for each phone? 
[speaker005:] Mm hmm. [speaker002:] I see. [speaker005:] Mm hmm. [speaker001:] Yeah. So wh yeah, where this fits into the rest in [disfmarker] in my mind, I guess, is that um [vocalsound] we're looking at different ways that we can combine [vocalsound] uh different kinds of [disfmarker] of rep front end representations [vocalsound] um in order to get robustness under difficult or even, you know, typical conditions. And part of it, this robustness, seems to come from [vocalsound] uh multi stream or multi band sorts of things and Saul seems to have [vocalsound] a reasonable way of looking at it, at least for one [disfmarker] [vocalsound] [vocalsound] one um articulatory feature. The question is is can we learn from that [vocalsound] to change some of the other methods we have, since [disfmarker] I mean, one of the things that's nice about what he had I thought was that [disfmarker] that it [disfmarker] it um [disfmarker] the decision about how strongly to train the different pieces is based on uh a [disfmarker] a reasonable criterion with hidden variables rather than [vocalsound] um just assuming [vocalsound] that you should train e e every detector uh with equal strength [vocalsound] towards uh it being this phone or that phone. [speaker002:] Hmm. [speaker001:] Right? So it [disfmarker] so um [vocalsound] he's got these um uh uh he "AND's" between these different [vocalsound] features. It's a soft "AND", I guess but in [disfmarker] in principle [vocalsound] you [disfmarker] you wanna get a strong concurrence of all the different things that indicate something and then he "OR's" across the different [disfmarker] soft "OR's" across the different uh [vocalsound] multi band channels. And um [vocalsound] the weight yeah, the target for the training of the "AND" [disfmarker] "AND'd" things [vocalsound] is something that's kept [vocalsound] uh as a hidden variable, and is learned with EM. 
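The soft "AND"/"OR" combination described above — requiring concurrence of the detectors within a band, then accepting evidence from any band — is in the spirit of noisy-AND/noisy-OR combination. A minimal sketch follows; it is not Saul's actual model, which in addition trains the per-band contributions as hidden variables with EM rather than using fixed hard targets per band.

```python
import numpy as np

def noisy_and(probs):
    """Soft AND of per-feature detector probabilities within one band:
    high only when all detectors agree."""
    return float(np.prod(probs))

def noisy_or(probs):
    """Soft OR across per-band outputs: fires if any band fires."""
    return 1.0 - float(np.prod(1.0 - np.asarray(probs)))
```

So two bands each reporting 0.5 combine to 0.75 under the soft OR, whereas two within-band detectors at 0.5 combine to 0.25 under the soft AND — the asymmetry that makes a single noisy band unable to veto the others.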
[speaker002:] So he doesn't have [disfmarker] [speaker001:] Whereas what we were doing is [disfmarker] is uh [vocalsound] taking [vocalsound] the phone target and then just back propagating from that which means that it's [disfmarker] [vocalsound] it's uh i It could be for instance [vocalsound] that for a particular point in the data [vocalsound] you don't want to um uh train a particular band [disfmarker] train the detectors for a particular band. You [disfmarker] you wanna ignore [vocalsound] that band, cuz that's a [disfmarker] Ban band is a noisy [disfmarker] noisy measure. [speaker002:] Mm hmm. [speaker001:] And we don't [disfmarker] We're [disfmarker] we're still gonna try to train it up. In our scheme we're gonna try to train it up to do as well [disfmarker] well as it can at predicting. Uh. Maybe that's not the right thing to do. [speaker002:] So he doesn't have to have truth marks or [disfmarker] Ho [speaker005:] F right, and uh he doesn't have to have hard labels. [speaker001:] Well at the [disfmarker] at the tail end, yeah, he has to know what's [disfmarker] where it's sonorant. [speaker005:] Right. For the full band. [speaker001:] But he's [disfmarker] but what he's but what he's not training up [disfmarker] uh what he doesn't depend on as truth is um I guess one way of describing would be if [disfmarker] if a sound is sonorant is it sonorant in this band? Is it sonorant in that band? Is it sonorant in that band? [speaker005:] Right. [speaker001:] i It's hard to even answer that what you really mean is that the whole sound is sonorant. [speaker002:] Mm hmm. [speaker001:] So [speaker002:] OK. [speaker001:] then it comes down to, you know, to what extent should you make use of information from particular band [vocalsound] towards making your decision. [speaker002:] I see. 
[speaker001:] And um [vocalsound] uh we're making in a sense sort of this hard decision that you should [disfmarker] you should use everything [vocalsound] uh with [disfmarker] with uh equal strength. And uh because in the ideal case we would be going for posterior probabilities, if we had [vocalsound] uh enough data to really get posterior probabilities and if the [disfmarker] if we also had enough data so that it was representative of the test data then we would in fact be doing the right thing to train everything as hard as we can. But um this is something that's more built up along an idea of robustness from [disfmarker] from the beginning and so you don't necessarily want to train everything up towards the [disfmarker] [speaker002:] So where did he get his [disfmarker] uh his tar his uh high level targets about what's sonorant and what's not? [speaker005:] From uh canonical mappings [comment] um at first [speaker002:] OK. [speaker001:] Yeah. [speaker005:] and then it's unclear um eh [speaker002:] Using TIMIT? or using [disfmarker] [speaker005:] using TIMIT right, right. [speaker002:] Uh huh. [speaker001:] Yeah. [speaker005:] And then uh he does some fine tuning um for um special cases. Yeah. [speaker001:] Yeah. I mean we ha we have a kind of [vocalsound] iterative training because we do this embedded Viterbi, uh so there is some something that's suggested, based on the data but it's [disfmarker] it's not [disfmarker] I think it s doesn't seem like it's quite the same, cuz of this [disfmarker] cuz then whatever [vocalsound] that alignment is, it's that for all [disfmarker] all bands. [speaker002:] Mm hmm. [speaker001:] Well no, that's not quite right, we did actually do them separate [disfmarker] tried to do them separately so that would be a little more like what he did. Um. But it's still [vocalsound] not quite the same because then it's [disfmarker] it's um setting targets based on where you would say [vocalsound] the sound begins in a particular band. 
Where he's s this is not a labeling per se. Might be closer I guess if we did a [vocalsound] soft [disfmarker] soft target uh [vocalsound] uh embedded [vocalsound] neural net training like we've done a few times uh [vocalsound] f the forward um [disfmarker] do the forward calculations to get the gammas and train on those. Mmm. Uh what's next? [speaker002:] I could say a little bit about w stuff I've been playing with. [speaker001:] Oh. [speaker002:] I um [speaker001:] You're playing? [speaker002:] Huh? [speaker001:] You're playing? [speaker002:] Yes, I'm playing. Um [vocalsound] so I wanted to do this experiment to see um [vocalsound] uh what happens if we try to uh improve the performance of the back end recognizer for the Aurora task and see how that affects things. And so I had this um [disfmarker] I think I sent around last week a [disfmarker] [vocalsound] this plan I had for an experiment, this matrix where [vocalsound] I would take the um [disfmarker] the original um the original system. So there's the original system trained on the mel cepstral features and then com and then uh optimize the b HTK system and run that again. So look at the difference there and then uh do the same thing for [vocalsound] the ICSI OGI front end. [speaker001:] What [disfmarker] which test set was this? [speaker002:] This is [disfmarker] that I looked at? [speaker001:] Mm hmm. [speaker002:] Uh I'm looking at the Italian right now. [speaker001:] Mm hmm. [speaker002:] So as far as I've gotten is I've uh [vocalsound] been able to go through from beginning to end the um full HTK [vocalsound] system for the Italian data and got the same results that um [disfmarker] that uh [vocalsound] Stephane had. So um I started looking [disfmarker] to [disfmarker] and now I'm [disfmarker] I'm sort of lookin at the point where I wanna know what should I change in the HTK back end in order to try to [disfmarker] uh to improve it. So. 
One of the first things I thought of was the fact that they use [vocalsound] the same number of states for all of the models [speaker001:] Mm hmm. [speaker002:] and so I went on line and I uh found a pronunciation dictionary for Italian digits [speaker001:] Mm hmm. [speaker002:] and just looked at, you know, the number of phones in each one of the digits. Um you know, sort of the canonical way of setting up a [disfmarker] an HMM system is that you use [vocalsound] um three states per phone and um [vocalsound] so then the [disfmarker] the total number of states for a word would just be, you know, the number of phones times three. And so when I did that for the Italian digits, I got a number of states, ranging on the low end from nine to the high end, eighteen. Um. [vocalsound] Now you have to really add two to that because in HTK there's an initial null and a final null so when they use [vocalsound] uh models that have eighteen states, there're really sixteen states. They've got those initial and final null states. And so um [vocalsound] [vocalsound] their guess of eighteen states seems to be pretty well matched to the two longest words of the Italian digits, the four and five [vocalsound] which um, according to my, you know, sort of off the cuff calculation, should have eighteen states each. [speaker001:] Mm hmm. [speaker002:] And so they had sixteen. So that's pretty close. Um [vocalsound] [vocalsound] but for the [disfmarker] most of the words are sh much shorter. So the majority of them wanna have nine states. And so theirs are s sort of twice as long. So [vocalsound] my guess [disfmarker] uh And then if you [disfmarker] I [disfmarker] I printed out a confusion matrix um [vocalsound] uh for the well matched case, and it turns out that the longest words are actually the ones that do the best. 
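The state-count arithmetic above — three emitting states per phone, plus HTK's non-emitting initial and final states — can be written down directly. Three states per phone is the canonical rule of thumb being applied here, not a property fixed by the Aurora setup.

```python
def htk_word_model_states(num_phones, states_per_phone=3):
    """Emitting-state count for a whole-word HMM, plus the total HTK
    declares once the non-emitting entry/exit states are included."""
    emitting = num_phones * states_per_phone
    return emitting, emitting + 2
```

So a six-phone digit wants 18 emitting states, while an HTK model declared with 18 states really has only 16 emitting ones — which is why the fixed-length models roughly fit the longest Italian digits but are about twice too long for the three-phone ones.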
So my guess about what's happening is that [vocalsound] you know, if you assume a fixed [disfmarker] the same amount of training data for each of these digits and a fixed length model for all of them but the actual words for some of them are half as long you really um have, you know, half as much training data for those models. Because if you have a long word and you're training it to eighteen states, [vocalsound] [vocalsound] uh you've got [disfmarker] you know, you've got the same number of Gaussians, you've gotta train in each case, [speaker001:] Mm hmm. [speaker002:] but for the shorter words, you know, the total number of frames is actually half as many. [speaker001:] Mm hmm. [speaker002:] So [vocalsound] it could be that, you know, for the short words there's [disfmarker] because you have so many states, you just don't have enough data to train all those Gaussians. So um I'm going to try to um create more word specific [vocalsound] um uh prototype H M Ms to start training from. [speaker001:] Yeah, I mean, it's not at all uncommon you do worse on long word on short words than long words anyway just because you're accumulating more evidence for the [disfmarker] for the longer word, [speaker002:] Mm hmm. [speaker001:] but. [speaker002:] Yeah so I'll [disfmarker] I'll, the next experiment I'm gonna try is to just um you know create [vocalsound] uh models that seem to be more w matched to my guess about how long they should be. [speaker001:] Mm hmm. [speaker002:] And as part of that um I wanted to see sort of how the um [disfmarker] how these models were coming out, you know, what w [vocalsound] when we train up uh th you know, the model for "one", which wants to have nine states, you know, what is the [disfmarker] uh what do the transition probabilities look like [disfmarker] in the self loops, [comment] look like in [disfmarker] in those models? 
And so I talked to Andreas and he explained to me how you can [vocalsound] calculate the expected duration of an HMM just by looking at the transition matrix [speaker001:] Mm hmm. [speaker002:] and so I wrote a little Matlab script that calculates that and so I'm gonna sort of print those out for each of the words to see what's happening, you know, how these models are training up, [speaker001:] Mm hmm. [speaker002:] you know, the long ones versus the short ones. [speaker001:] Mm hmm. [speaker002:] I d I did [disfmarker] quickly, I did the silence model and [disfmarker] [vocalsound] and um that's coming out with about one point two seconds as its average duration and the silence model's the one that's used at the beginning and the end of each of the [vocalsound] string of digits. [speaker001:] Wow. Lots of silence. [speaker002:] Yeah, yeah. And so the S P model, which is what they put in between digits, I [disfmarker] I haven't calculated that for that one yet, but um. So they basically [disfmarker] their [disfmarker] [vocalsound] their model for a whole digit string is silence [vocalsound] digit, SP, digit, SP blah blah blah and then silence at the end. And so. [speaker001:] Are the SP's optional? I mean skip them? [speaker002:] I have to look at that, but I'm not sure that they are. Now the one thing about the S P model is really it only has a single s emitting state to it. [speaker001:] Mm hmm. [speaker002:] So if it's not optional, you know, it's [disfmarker] it's not gonna hurt a whole lot [speaker001:] I see. [speaker002:] and it's tied to the center state of the silence model so it's not its own [disfmarker] um It doesn't require its own training data, [speaker001:] Mm hmm. [speaker002:] it just shares that state. [speaker001:] Mm hmm. [speaker002:] So it, I mean, it's pretty good the way that they have it set up, but um i So I wanna play with that a little bit more. 
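The expected-duration calculation mentioned above (the Matlab script) follows from the standard absorbing-Markov-chain identity: with Q the transition matrix restricted to the transient (non-final) states, solving (I - Q)n = 1 gives the expected visit counts, and the entry for the start state is the expected duration in frames. For a pure left-to-right model with self-loop probabilities p_i this reduces to summing 1/(1 - p_i). A Python analogue of that script might look like:

```python
import numpy as np

def expected_duration_frames(trans):
    """Expected number of frames spent before reaching the final
    (absorbing) state, starting from state 0, given the full HMM
    transition matrix `trans` (final state last)."""
    Q = np.asarray(trans, dtype=float)[:-1, :-1]   # transient submatrix
    visits = np.linalg.solve(np.eye(len(Q)) - Q, np.ones(len(Q)))
    return float(visits[0])
```

At a 10 ms frame step, an expected duration of 120 frames would correspond to the roughly 1.2 seconds quoted for the trained silence model.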
I'm curious about looking at, you know [vocalsound] how these models have trained and looking at the expected durations of the models and I wanna compare that in the [disfmarker] the well matched case f to the unmatched case, and see if you can get an idea of [disfmarker] just from looking at the [vocalsound] durations of these models, you know, what what's happening. [speaker001:] Yeah, I mean, I think that uh, as much as you can, it's good to d sort of not do anything really tricky. [speaker002:] Mm hmm. [speaker001:] Not do anything that's really finely tuned, but just sort of eh you know you t you i z [speaker002:] Yeah. [speaker001:] The premise is kind of you have a [disfmarker] a good person look at this for a few weeks and what do you come up with? [speaker002:] Mm hmm. Mm hmm. [speaker001:] And uh [speaker002:] And Hynek, when I wa told him about this, he had an interesting point, and that was th um [vocalsound] the [disfmarker] the final models that they end up training up have I think probably something on the order of six Gaussians per state. So they're fairly, you know, hefty models. And Hynek was saying that well, probably in a real application, [vocalsound] you wouldn't have enough compute to handle models that are very big or complicated. So in fact what we may want are simpler models. [speaker001:] Could be. [speaker002:] And compare how they perform to that. But [vocalsound] you know, it depends on what the actual application is and it's really hard to know what your limits are in terms of how many Gaussians you can have. [speaker001:] Right. And that, I mean, at the moment that's not the limitation, so. [speaker002:] Mm hmm. [speaker001:] I mean, I [disfmarker] I [disfmarker] I [disfmarker] what I thought you were gonna say i but which I was thinking was um where did six come from? Probably came from the same place eighteen came from. [speaker002:] Yeah. [speaker001:] You know, so. [speaker002:] Right. 
[speaker001:] Uh [vocalsound] that's another parameter, right? [speaker002:] Yeah, yeah. [speaker001:] that [disfmarker] that maybe, you know, uh [disfmarker] you really want three or nine or [disfmarker] [speaker002:] Well one thing [disfmarker] I mean, if I [disfmarker] if [disfmarker] if I start um reducing the number of states for some of these shorter models [vocalsound] that's gonna reduce the total number of Gaussians. So in a sense it'll be a simpler system. [speaker001:] Right. Yeah. Yeah. But I think right now again the idea is doing just very simple things [speaker002:] Yeah. [speaker001:] how much better can you make it? [speaker002:] Mm hmm. [speaker001:] And um since they're only simple things there's nothing that you're gonna do that is going to blow up the amount of computation [speaker002:] Right. [speaker001:] um so [speaker002:] Right. [speaker001:] if you found that nine was better than six that would be O K, I think, actually. [speaker002:] Mm hmm. Yeah. [speaker001:] Doesn't have to go down. [speaker002:] I really wasn't even gonna play with that part of the system yet, I was just gonna change the [disfmarker] the t [speaker001:] Mm hmm, OK. Yeah, just work with the models, [speaker002:] yeah, just look at the length of the models and just see what happens. [speaker001:] yeah. Yeah. [speaker002:] So. [speaker001:] Cool. OK. So uh [vocalsound] what's uh I guess your plan for [disfmarker] You [disfmarker] you [disfmarker] you guys' plan for the next [disfmarker] next week is [vocalsound] just continue on these [disfmarker] these same things we've been talking about for Aurora and [speaker003:] Yeah, I guess we can try to [vocalsound] have some kind of new baseline for next week perhaps. with all these minor things [vocalsound] [vocalsound] modified. And then do other things, play with the spectral subtraction, and [vocalsound] retry the MSG and things like that. [speaker001:] Yeah. Yeah. Yeah we [disfmarker] we have a big list. 
[speaker003:] Big list? [speaker001:] You have a big list of [disfmarker] [vocalsound] of things to do. So. Well that's good. I think [vocalsound] that after all of this uh um confusion settles down in another [disfmarker] some point a little later next year there will be some sort of standard and it'll get out there and [vocalsound] hopefully it'll have some effect from something [vocalsound] that [disfmarker] [vocalsound] that has uh been done by our group of people but uh e even if it doesn't there's [disfmarker] [vocalsound] there's go there'll be standards after that. So. [speaker002:] Does anybody know how to um [vocalsound] run Matlab sort of in batch mode like you c send it [vocalsound] s a bunch of commands to run and it gives you the output. Is it possible to do that? [speaker005:] I [disfmarker] I think uh Mike tried it [speaker002:] Yeah? [speaker005:] and he says it's impossible so he went to Octave. Octave is the um UNIX clone of [disfmarker] of Matlab which you can batch. [speaker002:] Octave. Ah! OK. Great. Thanks. [speaker005:] Yeah. [speaker002:] I was going crazy trying to do that. [speaker001:] Huh. [speaker005:] Yeah. [speaker003:] What is Octave so? [speaker005:] What's that? [speaker003:] It's a free software? [speaker005:] Uh, Octave? [speaker003:] Yeah. [speaker005:] Yeah it's [disfmarker] it's [disfmarker] it's free. I think we have it here [comment] r running somewhere. [speaker002:] Great! [speaker005:] Yeah. [speaker003:] And it does the same syntax and everything eh [speaker005:] Um [vocalsound] [comment] i it's a little behind, [speaker003:] like Matlab, or [disfmarker]? [speaker005:] it's the same syntax but it's a little behind in that [comment] Matlab went to these like um you can have cells and you can [disfmarker] you can [comment] uh implement object oriented type things with Matlab. Uh Octave doesn't do that yet, so I think you, Octave is kinda like Matlab um four point something or. 
[speaker002:] If it'll do like a lot of the basic matrix and vector stuff [speaker005:] The basic stuff, right. [speaker002:] that's perfect. [speaker005:] Yeah. [speaker002:] Great! [speaker001:] OK, guess we're done. [speaker005:] OK. [speaker006:] Well, although by the way. [speaker006:] And we're on. [speaker004:] OK. Might wanna [vocalsound] close the door so that [disfmarker] Uh, Stephane will [disfmarker] [speaker006:] I'll get it. [speaker004:] Yeah [speaker006:] Hey Dave? Could you go ahead and turn on, uh, Stephane's [disfmarker] [speaker003:] Mm hmm. [speaker004:] So that's the virtual Stephane over there. [speaker006:] OK. [speaker007:] Do you use a PC for recording? Or [disfmarker] [speaker006:] Uh, yeah, a Linux box. Yeah. [speaker007:] Uh huh. [speaker006:] It's got, uh, like sixteen channels going into it. [speaker007:] Uh huh. The quality is quite good? Or [disfmarker]? [speaker006:] Mm hmm. Yeah, so far, it's been pretty good. [speaker007:] Mm hmm. [speaker004:] Yeah. So, uh, yeah [disfmarker] the suggestion was to have these guys start to [disfmarker] [speaker006:] OK. Why don't you go ahead, Dave? [speaker003:] OK. Um, so, yeah, the [disfmarker] this past week I've been main mainly occupied with, um, getting some results, u from the SRI system trained on this short Hub five training set for the mean subtraction method. And, um, I ran some tests last night. But, um, c the results are suspicious. Um, it's, um, [vocalsound] cuz they're [disfmarker] the baseline results are worse than, um, Andreas [disfmarker] than results Andreas got previously. And [vocalsound] it could have something to do with, um [disfmarker] [speaker006:] That's on digits? [speaker003:] That's on digits. It c it [disfmarker] it could h it could have something to do with, um, downsampling. [speaker006:] Hmm. [speaker003:] That's [disfmarker] that's worth looking into. 
Um, d and, um, ap ap apart from that, I guess the [disfmarker] the main thing I have t ta I have to talk is, um, where I'm planning to go over the next week. Um. So I've been working on integrating this mean subtraction approach into the SmartKom system. And there's this question of, well, so, um, in my tests before with HTK I found it worked [disfmarker] it worked the best with about twelve seconds of data used to estimate the mean, but, we'll often have less [comment] in the SmartKom system. Um. So I think we'll use as much data as we have [pause] at a particular time, and we'll [disfmarker] [vocalsound] we'll concatenate utterances together, um, to get as much data as we possibly can from the user. But, [vocalsound] um, [vocalsound] there's a question of how to set up the models. So um, we could train the models. If we think twelve seconds is ideal we could train the models using twelve seconds to calculate the mean, to mean subtract the training data. Or we could, um, use some other amount. So [disfmarker] like I did an experiment where I, um, was using six seconds in test, um, but, for [disfmarker] I tried twelve seconds in train. And I tried, um, um, the same in train [disfmarker] I'm a I tried six seconds in train. And six seconds in train [vocalsound] was about point three percent better. Um, and [disfmarker] [vocalsound] um, it's not clear to me yet whether that's [vocalsound] something significant. So I wanna do some tests and, um, [vocalsound] actually make some plots of, um [disfmarker] for a particular amount of data and test what happens if you vary the amount of data in train. [speaker006:] Mm hmm. [speaker004:] Uh, Guenter, I don't know if you t [vocalsound] followed this stuff but this is, uh, [vocalsound] a uh, uh, long term [disfmarker] long term window F F [speaker007:] Yeah, [speaker004:] Yeah. Yeah, he [disfmarker] you talked about it. [speaker007:] we [disfmarker] we spoke about it already, [speaker004:] Oh, OK. [speaker007:] yeah. 
[speaker004:] So you know what he's doing. Alright. [speaker003:] y s so I was [disfmarker] I actually ran the experiments mostly and I [disfmarker] I was [disfmarker] I was hoping to have the plots with me today. I just didn't get to it. But, um [disfmarker] yeah, I wou I would be curious about people's feedback on this cuz I'm [disfmarker] [vocalsound] [@ @] [comment] I p I think there are some I think it's [disfmarker] it's kind of like a [disfmarker] a bit of a tricky engineering problem. I'm trying to figure out what's the optimal way to set this up. So, um, [vocalsound] I'll try to make the plots and then put some postscript up on my [disfmarker] on my web page. And I'll mention it in my status report if people wanna take a look. [speaker004:] You could clarify something for me. You're saying point three percent, you take a point three percent hit, [vocalsound] when the training and testing lengths are [disfmarker] don't match or something? [speaker005:] Hello. [speaker004:] Is that what it is? Or [disfmarker]? [speaker003:] w Well, it c I [disfmarker] I don't think it [disfmarker] it's [vocalsound] just for any mismatch [vocalsound] you take a hit. [speaker004:] Yeah. [speaker003:] i In some cases it might be u better to have a mismatch. [speaker004:] Yeah. [speaker003:] Like I think I saw something like [disfmarker] like if you only have two seconds in test, or, um, maybe it was something like four seconds, you actually do a little better if you, um, [vocalsound] train on six seconds than if you train on four seconds. [speaker004:] Right. [speaker003:] Um, but the case, uh [disfmarker] with the point three percent hit was [vocalsound] using six seconds in test, um, comparing train on twelve seconds [comment] versus train on six seconds. [speaker004:] And which was worse? [speaker003:] The train on twelve seconds. [speaker004:] OK. But point three percent, uh, w from what to what?
That's point three percent [disfmarker] [speaker003:] On [disfmarker] The [disfmarker] the [disfmarker] the accuracies [vocalsound] w went from [disfmarker] it was something vaguely like ninety five point six accuracy, um, improved to ninety five point nine wh when I [disfmarker] [speaker004:] So four point four to four point one. [speaker003:] OK. [speaker004:] So [disfmarker] yeah. So about a [disfmarker] about an eight percent, uh, seven or eight percent relative? [speaker003:] OK. [speaker004:] Uh, Yeah. Well, I think in a p You know, if [disfmarker] if you were going for an evaluation system you'd care. But if you were doing a live system that people were actually using nobody would notice. It's [disfmarker] uh, I think the thing is to get something that's practical, that [disfmarker] that you could really use. [speaker003:] Huh. That's [disfmarker] that's interesting. Alright, the e uh, I see your point. I guess I was thinking of it as, um, [vocalsound] an interesting research problem. [speaker004:] Yeah. [speaker003:] The [disfmarker] how to g I was thinking that for the ASRU paper we could have a section saying, [vocalsound] "For SmartKom, we [disfmarker] we d in [disfmarker] we tried this approach in, uh, [vocalsound] interactive system", which I don't think has been done before. [speaker004:] Mm hmm. Mm hmm. [speaker003:] And [disfmarker] and then there was two research questions from that. And one is the k does it still work if you just use the past history? [speaker004:] Mm hmm. [speaker003:] Alright, and the other was this question of, um what I was just talking about now. So I guess that's why I thought it was interesting. [speaker004:] I mean, a short time FFT [disfmarker] short time cepstrum calculation, uh, mean [disfmarker] u mean calculation work that people have in commercial systems, they do this all the time. They [disfmarker] the [disfmarker] they calculate it from previous utterances and then use it, you know. [speaker003:] Yeah, um. 
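[Editor's note: the conversion Morgan does above — from a 0.3% absolute accuracy difference to "seven or eight percent relative" — is just the relative reduction in error rate:]

```python
def relative_error_reduction(acc_before, acc_after):
    """Relative reduction in error rate given two accuracies (in %)."""
    err_before = 100.0 - acc_before
    err_after = 100.0 - acc_after
    return (err_before - err_after) / err_before

# 95.6% -> 95.9% accuracy is 4.4% -> 4.1% error:
print(relative_error_reduction(95.6, 95.9))  # about 0.068, i.e. ~7% relative
```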
[speaker004:] But [disfmarker] but, uh, as you say, there hasn't been that much with this long [disfmarker] long time, uh, spectra work. Uh, [speaker003:] Oh, o Oh, OK. So that's [disfmarker] that's [disfmarker] that's standard. Um [disfmarker] [speaker004:] Yeah. Pretty common. Yeah. [speaker003:] OK. [speaker004:] Um, but, u uh, yes. No, it is interesting. And the other thing is, I mean, there's two sides to these really small, uh, gradations in performance. Um, I mean, on the one hand in a practical system if something is, uh, four point four percent error, four point one percent error, people won't really tell [disfmarker] be able to tell the difference. On the other hand, when you're doing, uh, research, you may, eh [disfmarker] you might find that the way that you build up a change from a ninety five percent accurate system to a ninety eight percent accurate system is through ten or twelve little things that you do that each are point three percent. So [disfmarker] so the [disfmarker] they [disfmarker] they [disfmarker] it's [disfmarker] I don't mean to say that they're [disfmarker] they're irrelevant. Uh, they are relevant. But, um, [vocalsound] i for a demo, you won't see it. [speaker003:] Mm hmm. Right. OK. [speaker004:] Yeah. [speaker003:] And, um, Let's [disfmarker] l let's see. Um, OK. And then there's um, another thing I wanna start looking at, um, [vocalsound] wi is, um, the choice of the analysis window length. So I've just been using two seconds just because that's what Carlos did before. Uh, I wrote to him asking about he chose the two seconds. And it seemed like he chose it a bit informally. So, um, with the [disfmarker] with the HTK set up I should be able to do some experiments, on just varying that length, say between one and three seconds, in a few different reverberation conditions, um, say this room and also a few of the artificial impulse responses we have for reverberation, just, um, making some plots and seeing how they look. 
And, um, so, with the [disfmarker] the sampling rate I was using, one second or two seconds or four seconds is at a power of two um, number of samples and, um, I'll [disfmarker] I'll jus f for the ones in between I guess I'll just zero pad. [speaker004:] Mm hmm. I guess one thing that might also be an issue, uh, cuz part of what you're doing is you're getting a [disfmarker] a spectrum over a bunch of different kinds of speech sounds. Um, and so it might matter how fast someone was talking for instance. [speaker003:] Oh. [speaker004:] You know, if you [disfmarker] if [disfmarker] if [disfmarker] if there's a lot of phones in one second maybe you'll get a [disfmarker] a really good sampling of all these different things, and [disfmarker] [vocalsound] and, uh, on the other hand if someone's talking slowly maybe you'd need more. So [disfmarker] I don't know if you have some samples of faster or slower speech [speaker003:] Huh. [speaker004:] but it might make a difference. I don't know. [speaker003:] Uh, yeah, I don't [disfmarker] I don't think the TI digits data that I have, um, [vocalsound] i is [disfmarker] would be appropriate for that. [speaker004:] Yeah, probably not. Yeah. [speaker003:] But what do you [disfmarker] What about if I w I fed it through some kind of, um, speech processing algorithm that changed the speech rate? [speaker004:] Yeah, but then you'll have the degradation of [disfmarker] of, uh, whatever you do uh, added onto that. But maybe. Yeah, maybe if you get something that sounds [disfmarker] that [disfmarker] that's [disfmarker] does a pretty job at that. [speaker003:] Yeah. Well, uh, just if you think it's worth looking into. [speaker004:] You could imagine that. [speaker003:] I mean, it [disfmarker] it is getting a little away from reverberation. [speaker004:] Um, yeah. 
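[Editor's note: padding the in-between window lengths up to a power-of-two FFT size, as Dave proposes above, might look like the sketch below. The 8 kHz sampling rate is a hypothetical value chosen for illustration, not taken from the actual setup.]

```python
def next_pow2(n):
    """Smallest power of two that is >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def zero_pad(frame):
    """Zero-pad an analysis frame to the next power-of-two length, so
    window lengths between one and three seconds still get an
    efficient FFT size."""
    return frame + [0.0] * (next_pow2(len(frame)) - len(frame))

# 1.5 s at a hypothetical 8 kHz = 12000 samples, padded up to 16384
print(len(zero_pad([0.0] * 12000)))  # 16384
```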
It's just that you're making a choice [disfmarker] uh, I was thinking more from the system aspect, if you're making a choice for SmartKom, that [disfmarker] that [disfmarker] that it might be that it's [disfmarker] it c the optimal number could be different, depending on [disfmarker] [speaker003:] Yeah. Right. [speaker004:] Could be. I don't know. [speaker003:] And [disfmarker] and th the third thing, um, uh, is, um, Barry explained LDA filtering to me yesterday. And so, um, Mike Shire in his thesis um, [vocalsound] did a [disfmarker] a series of experiments, um, training LDA filters in d on different conditions. And you were interested in having me repeat this for [disfmarker] for this mean subtraction approach? Is [disfmarker] is that right? Or for these long analysis windows, I guess, is the right way to put it. [speaker004:] I guess, the [disfmarker] the [disfmarker] the issue I was [disfmarker] the general issue I was bringing up was that if you're [disfmarker] have a moving [disfmarker] [vocalsound] moving window, uh, a wa a [disfmarker] a set of weights times things that, uh, move along, shift along in time, that you have in fact a linear time invariant filter. And you just happened to have picked a particular one by setting all the weights to be equal. And so the issue is what are some other filters that you could use, uh, in that sense of "filter"? [speaker003:] Mm hmm. [speaker004:] And, um, as I was saying, I think the simplest thing to do is not to train anything, but just to do some sort of, uh, uh, hamming or Hanning, uh, kind of window, kind of thing, [speaker003:] Right. Mm hmm. [speaker004:] just sort of to de emphasize the jarring. So I think that would sort of be the first thing to do. But then, yeah, the LDA i uh, is interesting because it would sort of say well, suppose you actually trained this up to do the best you could by some criterion, what would the filter look like then? [speaker003:] Uh huh. 
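[Editor's note: Morgan's point above — that a running mean over a shifting window is a linear time-invariant filter with all weights equal, and a Hamming/Hanning taper is simply a different choice of weights — can be illustrated on a toy trajectory. The values and the Hanning choice are illustrative only.]

```python
import math

def window_mean(xs, weights):
    """One output of the 'filter' view of mean estimation: a weighted
    mean over a window of the trajectory.  Equal weights give the plain
    running mean; tapered weights de-emphasize the window edges."""
    return sum(w * x for w, x in zip(xs, weights)) / sum(weights)

def hann(n):
    """Hanning window: one possible set of tapered weights."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]

traj = [1.0, 2.0, 3.0, 10.0, 20.0]   # toy (log-)spectral trajectory
boxcar = [1.0] * 5                   # equal weights: the plain mean

print(window_mean(traj, boxcar))     # 7.2 -- edges count fully
print(window_mean(traj, hann(5)))    # 4.5 -- edges de-emphasized
```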
[speaker004:] Uh, and, um, that's sort of what we're doing in this Aur Aurora stuff. And, uh, it's still not clear to me in the long run whether the best thing to do would be to do that or to have some stylized version of the filter that looks like these things you've trained up, because you always have the problem that it's trained up for one condition and it isn't quite right for another. So. uh [disfmarker] that's [disfmarker] that's why [disfmarker] that's why RASTA filter has actually ended up lasting a long time, people still using it quite a bit, because y you don't change it. So doesn't get any worse. Uh, [speaker003:] Huh. [speaker004:] Anyway. [speaker003:] o OK. So, um, a actually I was just thinking about what I was asking about earlier, wi which is about having [vocalsound] less than say twelve seconds in the SmartKom system to do the mean subtraction. You said in [vocalsound] systems where you use cepstral mean subtraction, they concatenate utterances and, [vocalsound] do you know how they address this issue of, um, testing versus training? Can [disfmarker] [speaker007:] I think what they do is they do it always on line, [speaker004:] Go ahead. [speaker007:] I mean, that you just take what you have from the past, that you calculate the mean of this and subtract the mean. [speaker003:] OK. Um [disfmarker] [speaker007:] And then you can [disfmarker] yeah, you [disfmarker] you can increase your window whi while you get [disfmarker] while you are getting more samples. [speaker003:] OK, um, and, um, so [disfmarker] so in tha in that case, wh what do they do when they're t um, performing the cepstral mean subtraction on the training data? So [disfmarker] because you'd have hours and hours of training data. So do they cut it off and start over? At intervals? Or [disfmarker]? [speaker007:] So do you have [disfmarker] uh, you [disfmarker] you mean you have files which are hours of hours long? Or [disfmarker]? [speaker003:] Oh, well, no. I guess not. 
[speaker007:] Yeah. [speaker003:] But [disfmarker] [speaker007:] I mean, usually you have in the training set you have similar conditions, I mean, file lengths are, I guess the same order or in the same size as for test data, or aren't they? [speaker003:] OK. But it's [disfmarker] OK. So if someone's interacting with the system, though, uh, Morgan [disfmarker] uh, Morgan said that you would [vocalsound] tend to, um, [vocalsound] chain utterances together um, r [speaker004:] Well, I think what I was s I thought what I was saying was that, um, at any given point you are gonna start off with what you had from before. [speaker003:] Oh. [speaker004:] From [disfmarker] and so if you're splitting things up into utterances [disfmarker] So, for instance, in a dialogue system, [comment] where you're gonna be asking, uh, you know, th for some information, there's some initial th something. And, you know, the first time out you [disfmarker] you might have some general average. But you [disfmarker] you d you don't have very much information yet. But at [disfmarker] after they've given one utterance you've got something. You can compute your mean cepstra from that, [speaker003:] Mm hmm. [speaker004:] and then can use it for the next thing that they say, uh, so that, you know, the performance should be better that second time. Um, and I think the heuristics of exactly how people handle that and how they handle their training I'm sure vary from place to place. But I think the [disfmarker] ideally, it seems to me anyway, that you [disfmarker] you would wanna do the same thing in training as you do in test. But that's [disfmarker] that's just, uh, a prejudice. And I think anybody working on this with some particular task would experiment. [speaker003:] Right. I g I guess the question I had was, um, amount of data e u was the amount of data that you'd give it to, um [vocalsound] update this estimate. 
Because say you [disfmarker] if you have say five thousand utterances in your training set, [vocalsound] um, and you [disfmarker] you keep the mean from the last utterance, by the time it gets to the five thousandth utterance [disfmarker] [speaker004:] No, but those are all different people with different [disfmarker] I mean, i in y So for instance, in [disfmarker] in the [disfmarker] in a telephone task, these are different phone calls. So you don't wanna [@ @] [comment] chain it together from a [disfmarker] from a different phone call. [speaker003:] OK, so [disfmarker] so [disfmarker] so they would [disfmarker] [speaker004:] So it's within speaker, [speaker003:] g s [speaker007:] Yeah. [speaker004:] within phone call, if it's a dialogue system, it's within whatever this characteristic you're trying to get rid of is expected to be consistent over, [speaker007:] Hmm. [speaker003:] r and it [disfmarker] [speaker004:] right? [speaker003:] right. OK, so you'd [disfmarker] you [disfmarker] and so in training you would start over at [disfmarker] at every new phone call or at every [vocalsound] new speaker. [speaker004:] Yeah. [speaker003:] Yeah, OK. [speaker004:] Yeah. Now, [vocalsound] you know, maybe you'd use something from the others just because at the beginning of a call you don't know anything, and so you might have some kind of general thing that's your best guess to start with. But [disfmarker] So, s I [disfmarker] I [disfmarker] you know, a lot of these things are proprietary so we're doing a little bit of guesswork here. I mean, what do comp what do people do who really face these problems in the field? Well, they have companies and they don't tell other people exactly what they do. But [disfmarker] but I mean, when you [disfmarker] the [disfmarker] the hints that you get from what they [disfmarker] when they talk about it are that they do [disfmarker] they all do something like this. [speaker003:] R right. Right, OK. I see. 
Bec because I [disfmarker] so this SmartKom task first off, it's this TV and movie information system. [speaker004:] Yeah, but you might have somebody who's using it [speaker003:] And [disfmarker] [speaker004:] and then later you might have somebody else who's using it. [speaker003:] Yeah. Yeah. [speaker004:] And so you'd wanna set some [disfmarker] [speaker003:] Right. Right. I [disfmarker] I see. I was [disfmarker] I was about to say. So if [disfmarker] if you ask it "What [disfmarker] what movies are on TV tonight?", [speaker004:] Yeah. Yeah. [speaker003:] if I look at my wristwatch when I say that it's about two seconds. [speaker004:] Yeah. [speaker003:] The way I currently have the mean subtraction, um, set up, the [disfmarker] the analysis window is two seconds. So what you just said, about what do you start with, raises a question of [vocalsound] what do I start with then? [speaker004:] Mm hmm. [speaker003:] I guess it [disfmarker] because [disfmarker] [speaker004:] Well, w OK, so in that situation, though, th maybe what's a little different there, is I think you're talking about [disfmarker] there's only one [disfmarker] it [disfmarker] it [disfmarker] it also depends [disfmarker] we're getting a little off track here. [speaker003:] Oh, [speaker004:] r But [disfmarker] but [disfmarker] but [disfmarker] [speaker003:] right. [speaker004:] Uh, there's been some discussion about whether the work we're doing in that project is gonna be for the kiosk or for the mobile or for both. And I think for this kind of discussion it matters. If it's in the kiosk, then the physical situation is the same. It's gonna [disfmarker] you know, the exact interaction of the microphone's gonna differ depending on the person and so forth. But at least the basic acoustics are gonna be the same. 
So f if it's really in one kiosk, then I think that you could just chain together and [disfmarker] and you know, as much [disfmarker] as much speech as possible to [disfmarker] because what you're really trying to get at is the [disfmarker] is the reverberation characteristic. [speaker003:] Yeah. [speaker004:] But in [disfmarker] in the case of the mobile, uh, [comment] presumably the acoustic's changing all over the place. [speaker003:] Right. [speaker004:] And in that case you probably don't wanna have it be endless because you wanna have some sort of [disfmarker] it's [disfmarker] it's not a question of how long do you think it's [disfmarker] you can get an approximation to a stationary something, given that it's not really stationary. [speaker003:] Right. [speaker007:] Hmm. [speaker003:] Right. [speaker004:] So. [speaker003:] And I [disfmarker] I g I guess I s just started thinking of another question, which is, [vocalsound] for [disfmarker] for the very first frame, w what [disfmarker] what do I do if I'm [disfmarker] if I take [disfmarker] if I use that frame to calculate the mean, then I'm just gonna get n nothing. [speaker004:] Mm hmm. Right. [speaker003:] Um, so I should probably have some kind of default [vocalsound] mean for the first f couple of frames? [speaker004:] Yeah. Yeah. [speaker003:] OK. [speaker004:] Yeah. Or subtract nothing. I mean, it's [disfmarker] [speaker003:] Or subtract nothing. And [disfmarker] and that's [disfmarker] that's [disfmarker] I guess that's something that's p people have figured out how to deal with in cepstral mean subtraction as well? [speaker004:] Yeah, yeah. Yeah, people do something. They [disfmarker] they, uh, they have some, um, uh, in [disfmarker] in cepstral mean subtraction, for short term window [disfmarker] analysis windows, as is usually done, you're trying to get rid of some very general characteristic. 
And so, uh, if you have any other information about what a general kind of characteristic would be, then you [disfmarker] you can do it there. [speaker006:] You can also [disfmarker] you can also reflect the data. So you take, uh [disfmarker] you know, I'm not sure how many frames you need. But you take that many from the front and flip it around to [disfmarker] a as the negative value. [speaker003:] Uh huh. [speaker004:] Yeah, that's [disfmarker] [speaker006:] So you can always [disfmarker] [speaker004:] Yeah. The other thing is that [disfmarker] and [disfmarker] and [disfmarker] I [disfmarker] I remember B B N doing this, is that if you have a multi pass system, um, if the first pass ta it takes most of the computation, the second and the third pass could be very, very quick, just looking at a relatively small n small, uh, space of hypotheses. [speaker003:] Mmm. Uh huh. [speaker004:] Then you can do your first pass [vocalsound] without any subtraction at all. [speaker003:] Oh. [speaker004:] And then your second pass, uh, uh, eliminates those [disfmarker] most of those hypotheses by, uh [disfmarker] by having an improved [disfmarker] improved version o of the analysis. [speaker003:] OK. OK. [speaker004:] So. [speaker003:] OK. So that was all I had, for now. [speaker004:] Yeah. [speaker006:] Do you wanna go, Barry? [speaker001:] Yeah, OK. Um, so for the past, [vocalsound] uh, week an or two, I've been just writing my, uh, formal thesis proposal. Um, so I'm taking [vocalsound] this qualifier exam that's coming up in two weeks. And I [disfmarker] I finish writing a proposal and submit it to the committee. Um. And uh, should I [disfmarker] should I explain, uh, more about what [disfmarker] what I'm proposing to do, and s and stuff? [speaker004:] Yes, briefly. [speaker006:] Yeah briefly. [speaker001:] OK. 
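[Editor's note: the online scheme discussed above — a mean re-estimated from all past frames of the current call, seeded with a default so the very first frames are not subtracted down to nothing, and restarted at each new speaker or phone call — might be sketched as below. Class and parameter names are mine; this is not the code of any of the proprietary systems Morgan alludes to.]

```python
class OnlineCMS:
    """Online cepstral mean subtraction: subtract a running mean of
    all past frames in the current call, starting from a default mean
    (e.g. a global training-set average) treated as `prior_weight`
    frames' worth of evidence."""

    def __init__(self, dim, prior_mean=None, prior_weight=1):
        self.prior = list(prior_mean) if prior_mean else [0.0] * dim
        self.prior_weight = prior_weight if prior_mean else 0
        self.reset()

    def reset(self):
        # start over at a new speaker / new phone call
        self.mean = list(self.prior)
        self.count = self.prior_weight

    def process(self, frame):
        # subtract the current estimate, then fold this frame in
        out = [x - m for x, m in zip(frame, self.mean)]
        self.count += 1
        self.mean = [m + (x - m) / self.count
                     for m, x in zip(self.mean, frame)]
        return out
```

Per the discussion, in training one would call `reset()` at every new call or speaker, so the estimate never chains across the characteristic it is trying to remove.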
Um, so briefly, [vocalsound] I'm proposing to do a n a new p approach to speech recognition using um, a combination of, uh, multi band ideas and ideas, um, [vocalsound] [vocalsound] [comment] about the uh, acoustic phonec phonetic approach to speech recognition. Um, so I will be using [vocalsound] these graphical models that [disfmarker] um, that implement the multi band approach [vocalsound] to recognize a set of intermediate categories that might involve, uh, things like phonetic features [vocalsound] or other [disfmarker] other f feature things that are more closely related to the acoustic signal itself. Um, and the hope in all of this is that by going multi band and by going into these, [vocalsound] um intermediate classifications, [vocalsound] that we can get a system that's more robust to [disfmarker] to unseen noises, and situations like that. Um, and so, some of the research issues involved in this are, [vocalsound] um, [vocalsound] [comment] one, what kind of intermediate categories do we need to classify? Um, another one is [vocalsound] um, what [disfmarker] what other types of structures in these multi band graphical models should we consider in order to um, combine evidence from [vocalsound] the sub bands? And, uh, the third one is how do we [disfmarker] how do we merge all the, uh, information from the individual uh, multi band classifiers to come up with word [disfmarker] word recognition or [disfmarker] or phone recognition things. Um, so basically that's [disfmarker] that's what I've been doing. [speaker006:] So you've got two weeks, huh? [speaker001:] And, I got two weeks to brush up on d um, presentation stuff and, um, [speaker004:] Oh, I thought you were finishing your thesis in two weeks. [speaker001:] But. Oh, that too. [speaker004:] Yeah. [speaker001:] Yeah. [speaker006:] Are you gonna do any dry runs for your thing, or are you just gonna [disfmarker] [speaker001:] Yes. Yes. I, um [disfmarker] I'm [disfmarker] I'm gonna do some. 
Would you be interested? [speaker006:] Sure. [speaker001:] To help out? [speaker006:] Sure. [speaker001:] OK. Thanks. Yeah. [speaker006:] Is that it? Hhh. [speaker001:] That's it. [speaker006:] OK. Uh. Hhh. Let's see. So we've got forty minutes left, and it seems like there's a lot of material. An any suggestions about where we [disfmarker] where we should go next? [speaker002:] Mmm, [@ @]. [speaker006:] Uh. Do you wanna go, Sunil? Maybe we'll just start with you. [speaker002:] Yeah. But I actually stuck most of this in our m last meeting with Guenter. Um, but I'll just [disfmarker] Um, so the last week, uh, I showed some results with only SpeechDat Car which was like some fifty six percent. And, uh, I didn't h I mean, I [disfmarker] I found that the results [disfmarker] I mean, I wasn't getting that r results on the TI digit. So I was like looking into "why, what is wrong with the TI digits?". Why [disfmarker] why I was not getting it. And I found that, the noise estimation is a reason for the TI digits to perform worse than the baseline. So, uh, I actually, picked th I mean, the first thing I did was I just scaled the noise estimate by a factor which is less than one to see if that [disfmarker] because I found there are a lot of zeros in the spectrogram for the TI digits when I used this approach. So the first thing I did was I just scaled the noise estimate. And I found [disfmarker] So the [disfmarker] the results that I've shown here are the complete results using the new [disfmarker] Well, the n the new technique is nothing but the noise estimate scaled by a factor of point five. So it's just an ad hoc [disfmarker] I mean, some intermediate result, because it's not optimized for anything. So the results [disfmarker] The trend [disfmarker] the only trend I could see from those results was like the [disfmarker] the p the current noise estimation or the, uh, noise composition scheme is working good for like the car noise type of thing. 
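The ad hoc fix Sunil describes, scaling the noise estimate by 0.5 so fewer spectrogram bins are driven to zero before subtraction, is roughly the following sketch; the floor value is a placeholder, not something stated in the meeting:

```python
import numpy as np

def scaled_spectral_subtract(power_spec, noise_est, alpha=0.5, floor=1e-3):
    """Subtract a down-scaled noise estimate; alpha < 1 reduces
    over-subtraction.  The residual is floored at a small fraction
    of the input rather than left at zero."""
    power_spec = np.asarray(power_spec, dtype=float)
    clean = power_spec - alpha * np.asarray(noise_est, dtype=float)
    return np.maximum(clean, floor * power_spec)
```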
Because I've [disfmarker] the only [disfmarker] only [disfmarker] p very good result in the TI digits is the noise [disfmarker] car noise condition for their test A, which is like the best I could see that uh, for any non stationary noise like "Babble" or "Subway" or any [disfmarker] "Street", some "Restaurant" noise, it's like [disfmarker] it's not performing w very well. So, the [disfmarker] [vocalsound] So that [disfmarker] that's the first thing I c uh, I could make out from this stuff. And [disfmarker] [speaker007:] Yeah, I think what is important to see is that there is a big difference between the training modes. Uh huh. [speaker002:] Yeah. [speaker007:] If you have clean training, you get also a fifty percent improvement. [speaker002:] Yeah. [speaker007:] But if you have multi condition training you get only twenty percent. [speaker002:] Yeah. Yeah. [speaker005:] Mm hmm. [speaker007:] Mm hmm. [speaker002:] Uh, and in that twenty percent [@ @] it's very inconsistent across different noise conditions. [speaker007:] Mmm. [speaker002:] So I have like a forty five [vocalsound] percent for "Car noise" and then there's a minus five percent for the "Babble", [speaker007:] Mmm. [speaker002:] and there's this thirty three for the "Station". And so [vocalsound] it's [disfmarker] it's not [disfmarker] it's not actually very consistent across. So. The only correlation between the SpeechDat Car and this performance is the c stationarity of the noise that is there in these conditions and the SpeechDat Car. [speaker007:] Mm hmm. [speaker002:] And, uh [disfmarker] so [disfmarker] so the overall result is like in the last page, which is like forty seven, which is still very imbalanced because there are like fifty six percent on the SpeechDat Car and thirty five percent on the TI digits. And [disfmarker] uh, ps the fifty six percent is like comparable to what the French Telecom gets, but the thirty five percent is way off.
[speaker004:] I'm sort of confused but [disfmarker] this [disfmarker] I'm looking on the second page, [speaker002:] Oh, yep. [speaker004:] and it says "fifty percent" [disfmarker] looking in the lower right hand corner, "fifty percent relative performance". [speaker007:] For the clean training. [speaker004:] Is that [disfmarker] [speaker007:] u And if you [disfmarker] if you look [disfmarker] [speaker004:] is that fifty percent improvement? [speaker002:] Yeah. [speaker007:] Yeah. [speaker002:] For [disfmarker] that's for the clean training and the noisy testing for the TI digits. [speaker004:] So it's improvement over the baseline mel cepstrum? [speaker002:] Yeah. Yeah. [speaker004:] But the baseline mel cepstrum under those training doesn't do as well I [disfmarker] I'm [disfmarker] I'm trying to understand why it's [disfmarker] it's eighty percent [disfmarker] That's an accuracy number, I guess, [speaker002:] Yeah, yeah, yeah. [speaker004:] right? So that's not as good as the one up above. [speaker002:] No. [speaker004:] But the fifty is better than the one up above, [speaker002:] Yeah. [speaker004:] so I'm confused. [speaker002:] Uh, actually the noise compensation whatever, uh, we are put in it works very well for the high mismatch condition. I mean, it's consistent in the SpeechDat Car and in the clean training also it gives it [disfmarker] But this fifty percent is [disfmarker] is that the [disfmarker] the high mismatch performance [disfmarker] equivalent to the high mismatch performance in the speech. [speaker006:] So n s So since the high mismatch performance is much worse to begin with, it's easier to get a better relative improvement. [speaker002:] Yeah. Yeah. I do. Yeah, yeah. So by putting this noise [disfmarker] [speaker005:] Yeah. Yeah, if we look at the figures on the right, we see that the reference system is very bad. [speaker004:] Oh. [speaker002:] Yeah. The reference drops like a very fast [disfmarker] [speaker004:] Oh, oh, oh, oh, oh, oh. 
[speaker005:] Like for clean [disfmarker] clean training condition. [speaker004:] I see. [speaker002:] Yeah. [speaker004:] I see. [speaker005:] Nnn. [speaker004:] This is [disfmarker] this is TI digits [comment] we're looking at? [speaker002:] Yeah. [speaker004:] This whole page is TI digits [speaker002:] Yeah. Oh [disfmarker] [speaker004:] or this is [disfmarker]? [speaker002:] Oh. Yeah. It's not written anywhere. Yeah, it's TI digits. The first r spreadsheet is TI digits. [speaker004:] Mmm. [speaker007:] Hmm. [speaker004:] How does clean training do for the, uh, "Car" stuff? [speaker002:] The "Car"? Oh. Still [disfmarker] it still, uh [disfmarker] that [disfmarker] that's still consistent. I mean, I get the best performance in the case of "Car", which is the third column in the A condition. [speaker004:] No. I mean, this is added noise. I mean, this is TI digits. I'm sorry. I meant [disfmarker] in [disfmarker] in the [disfmarker] in the, uh, multi language, uh, uh, Finnish and [disfmarker] [speaker002:] Uh [disfmarker] [speaker007:] This is next [disfmarker] next page. [speaker002:] That's the next [disfmarker] next spreadsheet, is [disfmarker] [speaker007:] Hmm. [speaker002:] So that is the performance for Italian, Finnish and Spanish. [speaker004:] "Training condition" [disfmarker] Oh, right. So "clean" corresponds to "high mismatch". [speaker002:] Yeah. [speaker004:] And "increase", That's increase e [speaker007:] Improvement. [speaker002:] Improvement. [speaker007:] Yeah. [speaker002:] That's [disfmarker] "Percentage increase" is the percentage improvement over the baseline. [speaker007:] It's [disfmarker] it's a [disfmarker] [speaker002:] So that's [disfmarker] [speaker004:] Which means decrease in word error rate? [speaker002:] Yeah. [speaker004:] OK, so "percentage increase" means decrease? OK. [speaker002:] Yeah, yeah. [speaker007:] Yeah. 
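The "percentage increase" being untangled here is really a relative decrease in word error rate over the baseline; per condition it is just the following (the official spreadsheet additionally weights and averages across conditions, a scheme settled at the Amsterdam meeting):

```python
def relative_improvement(wer_baseline, wer_system):
    """Relative WER reduction over the baseline, in percent.
    A worse baseline makes large relative numbers easier to get,
    which is why the high-mismatch rows look so big."""
    return 100.0 * (wer_baseline - wer_system) / wer_baseline

# e.g. baseline 40% WER, system 20% WER -> 50.0% relative improvement
```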
The [disfmarker] the w there was a very long discussion about this on [disfmarker] on the [disfmarker] on the, uh, Amsterdam meeting. [speaker004:] Yeah. [speaker007:] How to [disfmarker] how to calculate it then. [speaker002:] Yeah. There's [disfmarker] there's a [disfmarker] [speaker007:] I [disfmarker] I [disfmarker] I guess you are using finally this [disfmarker] the scheme which they [disfmarker] [speaker002:] Which is there in the spreadsheet. I'm not changing anything in there. [speaker007:] OK. Mmm. [speaker004:] Alright. [speaker002:] So. Uh, yeah. So all the hi H M numbers are w very good, in the sense, they are better than what the French Telecom gets. So. But the [disfmarker] the only number that's still [disfmarker] I mean, which Stephane also got in his result was that medium mismatch of the Finnish, which is very [disfmarker] [vocalsound] which is a very strange situation where we used the [disfmarker] we changed the proto for initializing the HMM [disfmarker] I mean, this [disfmarker] this is basically because it gets stuck in some local minimum in the training. That seventy five point seven nine in the Finnish mismatch which is that [disfmarker] the eleven point nine six what we see. [speaker004:] Uh huh. [speaker007:] Mmm. [speaker002:] Yeah. [speaker004:] So we have to jiggle it somehow? [speaker002:] Yeah [disfmarker] so we start with that different proto and it becomes eighty eight, which is like some fifty percent improvement. [speaker004:] S Wait a minute. Start with a different what? [speaker002:] Different prototype, which is like a different initialization for the, uh, s transition probabilities. It's just that right now, the initialization is to stay more in the current state, which is point four point six, right? [speaker005:] Yeah. [speaker002:] Yeah. And if it changes to point five point five, which is equal [@ @] for transition and self loop where it becomes eighty eight percent. 
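The proto change described, a 0.6 self-loop / 0.4 transition initialization versus an even 0.5 / 0.5, amounts to initializing the left-to-right transition matrix differently before training. A sketch, with the state count and final-state layout as illustrative assumptions:

```python
import numpy as np

def init_transitions(n_states, self_loop=0.6):
    """Left-to-right HMM transition initialization.  self_loop=0.6
    is the default proto ('stay more in the current state');
    0.5 is the alternative that avoided the bad local minimum on
    the Finnish medium-mismatch set."""
    A = np.zeros((n_states, n_states))
    for i in range(n_states - 1):
        A[i, i] = self_loop
        A[i, i + 1] = 1.0 - self_loop
    A[-1, -1] = 1.0          # final state absorbs
    return A
```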
[speaker006:] Well, but that involves mucking with the back end, which is not allowed. [speaker002:] Yeah. We can't do it. [speaker006:] Yeah. [speaker002:] Yeah. [speaker005:] Mmm. [speaker002:] So. [speaker007:] I mean, it uh, like, i i i It is well known, this [disfmarker] this medium match condition of the Finnish data has some strange effects. [speaker002:] Very s [speaker005:] Yeah. [speaker002:] It has a very few at [disfmarker] uh, actually, c uh, tran I mean, words also. [speaker007:] I mean, that is [disfmarker] Yeah, that too. [speaker002:] It's a very, very small set, actually. [speaker007:] Yeah. Uh huh. There is a l a [disfmarker] There is a lot of [disfmarker] Uh, there are a lot of utterances with music in [disfmarker] with music in the background. [speaker002:] So there is [disfmarker] Yeah. Yeah, yeah, yeah. [speaker007:] Mmm. [speaker002:] Yeah. [speaker004:] Uh huh. [speaker002:] Yeah. It has some music also. I mean, very horrible music like like I know. [speaker004:] So maybe for that one you need a much smarter VAD? Mmm, if it's music. [speaker002:] Uh [disfmarker] So, that [disfmarker] that's the [disfmarker] that's about the results. And, uh, the summary is like [disfmarker] OK. So there are [disfmarker] the other thing what I tried was, which I explained in the last meeting, is using the channel zero for, uh, for both dropping and estimating the noise. And that's like just to f n get a feel of how good it is. I guess the fifty six percent improvement in the SpeechDat Car becomes like sixty seven percent. Like ten percent better. But that's [disfmarker] that's not a [disfmarker] that's a cheating experiment. So. That's just [disfmarker] So, m w [speaker007:] But the [disfmarker] but the, uh, forty seven point nine percent which you have now, that's already a remarkable improvement in comparison to the first proposal. [speaker002:] Yeah. So we had forty four percent in the first proposal. [speaker007:] OK. Mm hmm. [speaker002:] Yeah. 
We have f a big im So [vocalsound] the major improvement that we got was in all the high mismatch cases, because all those numbers were in sixties and seventies because we never had any noise compensations. [speaker007:] Mmm. [speaker002:] So that's where the biggest improvement came up. Not much in the well match and the medium match and TI digits also right now. So this is still at three or four percent improvement over the first proposal. [speaker007:] Mmm. Mmm. [speaker004:] Yeah, so that's good. [speaker002:] Yeah. So. [speaker004:] Then if we can improve the noise estimation, then it should get better. [speaker007:] Yeah, I [disfmarker] I started thinking about also [disfmarker] I mean yeah, uh, [vocalsound] I discovered the same problem when I started working on [disfmarker] uh, on this Aurora task [vocalsound] almost two years ago, that you have the problem with this mulit a at the beginning we had only this multi condition training of the TI digits. [speaker002:] Yeah. [speaker007:] And, uh, I [disfmarker] I found the same problem. Just taking um, what we were used to u [vocalsound] use, I mean, uh, some type of spectral subtraction, [comment] y [vocalsound] you get even worse results than [vocalsound] the basis [speaker002:] Yeah. [speaker007:] and uh [disfmarker] [speaker002:] Yeah, yeah. [speaker007:] I [disfmarker] I tried to find an explanation for it, so [disfmarker] [speaker004:] Mmm. [speaker002:] So. Yes. Stephane also has the same experience of using the spectral subtraction right? [speaker007:] Mmm. [speaker005:] Mm hmm. [speaker002:] Yeah. [speaker005:] Yeah. [speaker002:] So here [disfmarker] here I mean, I found that it's [disfmarker] if I changed the noise estimate I could get an improvement. So that's [disfmarker] so it's something which I can actually pursue, is the noise estimate. [speaker007:] Mm hmm. 
[speaker002:] And [disfmarker] [speaker007:] Yeah, I think what you do is in [disfmarker] when [disfmarker] when you have the [disfmarker] the [disfmarker] this multi condition training mode, um then you have [disfmarker] then you can train models for the speech, for the words, as well as for the pauses where you really have all information about the noise available. [speaker002:] Yeah. [speaker007:] And it was surprising [disfmarker] At the beginning it was not surprising to me that you get really the best results on doing it this way, I mean, in comparison to any type of training on clean data and any type of processing. But it was [disfmarker] So, u u it [disfmarker] it seems to be the best what [disfmarker] wh wh what [disfmarker] what we can do in this moment is multi condition training. And every when we now start introducing some [disfmarker] some noise reduction technique we [disfmarker] we introduce also somehow artificial distortions. [speaker002:] Yeah. [speaker007:] And these artificial distortions [disfmarker] uh, I have the feeling that they are the reason why [disfmarker] why we have the problems in this multi condition training. That means the H M Ms we trained, they are [disfmarker] they are based on Gaussians, [speaker002:] Yeah. [speaker007:] and on modeling Gaussians. And if you [disfmarker] Can I move a little bit with this? Yeah. And if we introduce now this [disfmarker] this u spectral subtraction, or Wiener filtering stuff [disfmarker] So, usually what you have is maybe, um [disfmarker] I'm [disfmarker] I'm showing now an envelope um maybe you'll [disfmarker] f for this time. So usually you have [disfmarker] maybe in clean condition you have something which looks like this. And if it is noisy it is somewhere here. And then you try to subtract it or Wiener filter or whatever. And what you get is you have always these problems, that you have this [disfmarker] these [disfmarker] these [disfmarker] these zeros in there. [speaker002:] Yeah. 
[speaker007:] And you have to do something if you get these negative values. I mean, this is your noise estimate and you somehow subtract it or do whatever. Uh, and then you have [disfmarker] And then I think what you do is you introduce some [disfmarker] some artificial distribution in this uh in [disfmarker] in the models. I mean, i you [disfmarker] you train it also this way but, i somehow there is [disfmarker] u u there is no longer a [disfmarker] a Gaussian distribution. It is somehow a strange distribution which we introduce with these [vocalsound] artificial distortions. And [disfmarker] and I was thinking that [disfmarker] that might be the reason why you get these problems in the [disfmarker] especially in the multi condition training mode. [speaker004:] Mm hmm. [speaker002:] Yeah, yeah. Th That's true. Yeah [disfmarker] the c the models are not complex enough to absorb that additional variability that you're introducing. [speaker007:] s [speaker006:] Thanks Adam. [speaker007:] Yeah. Yes. [speaker002:] Well, that's [disfmarker] Yeah. So [disfmarker] [speaker005:] I also have the feeling that um, the reason ye why it doesn't work is [disfmarker] yeah, that the models are much [disfmarker] are t um, not complex enough. Because I [disfmarker] actually I als always had a good experience with spectral subtraction, just a straight spectral subtraction algorithm when I was using neural networks, big neural networks, which maybe are more able to model strange distributions [speaker007:] Mm hmm. [speaker005:] and [disfmarker] But [disfmarker] Yeah. Then I tried the same [disfmarker] exactly the same spectral subtraction algorithm on these Aurora tasks and it simply doesn't work. [speaker007:] Hmm. [speaker005:] It's even [disfmarker] it, uh, hurts even. So. [speaker004:] We probably should at some point here try the tandem [disfmarker] the [disfmarker] the [disfmarker] the system two kind of stuff with this, with the spectral subtraction for that reason. 
[speaker007:] Hmm. Mm hmm. [speaker004:] Cuz [vocalsound] again, it should do a transformation to a domain where it maybe [disfmarker] looks more Gaussian. [speaker005:] Mm hmm. [speaker007:] Hmm. Yeah, y I [disfmarker] I was [disfmarker] whe w w just yesterday when I was thinking about it [vocalsound] um w what [disfmarker] what we could try to do, or do about it [disfmarker] I mean, if you [disfmarker] if you get at this [disfmarker] in this situation that you get this [disfmarker] this negative values and you simply set it to zero or to a constant or whatever [vocalsound] if we [disfmarker] if we would use there a somehow, um [disfmarker] a random generator which [disfmarker] which has a certain distribution, u not a certain [disfmarker] [comment] yeah, a special distribution we should see [disfmarker] we [disfmarker] we have to think about it. [speaker002:] It's [disfmarker] [speaker007:] And that we, so, introduce again some natural behavior in this trajectory. [speaker004:] Mm hmm. [speaker002:] Mm hmm. Very different from speech. Still, I mean, it shouldn't confuse the [disfmarker] [speaker007:] Yeah, I mean, similar to what [disfmarker] what you see really u in [disfmarker] in the real um noisy situation. [speaker002:] OK. Mm hmm. [speaker007:] Or i in the clean situation. But [disfmarker] but somehow a [disfmarker] a natural distribution. [speaker004:] But isn't that s again sort of the idea of the additive thing, if it [disfmarker] as [disfmarker] as we had in the J stuff? I mean, basically if [disfmarker] [vocalsound] if you have random data, um, in [disfmarker] in the time domain, then when you look at the s spectrum it's gonna be pretty flat. [speaker007:] Mm hmm. [speaker004:] And [disfmarker] and, uh, so just add something everywhere rather than just in those places. It's just a constant, right? [speaker005:] Mm hmm. [speaker007:] Yeah. I think [disfmarker] e yeah. 
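The idea floated here, replacing the hard zero-or-constant floor with values drawn from a random generator so the residual keeps a more natural distribution, might be sketched as follows. The fill level beta and the uniform draw are assumptions; which distribution to use is exactly the open question raised:

```python
import numpy as np

def subtract_with_random_floor(power_spec, noise_est, beta=0.05, seed=0):
    """Where subtraction goes negative, fill with low-level random
    values scaled by the local noise estimate instead of a constant,
    so the floored bins are not all identical."""
    rng = np.random.default_rng(seed)
    power_spec = np.asarray(power_spec, dtype=float)
    noise_est = np.asarray(noise_est, dtype=float)
    clean = power_spec - noise_est
    neg = clean <= 0.0
    clean[neg] = beta * noise_est[neg] * rng.random(neg.sum())
    return clean
```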
It's [disfmarker] it's just especially in these segments, I mean, you introduce, um, very artificial behavior. [speaker004:] Yeah. [speaker007:] And [disfmarker] [speaker004:] Yeah. Well, see if you add something everywhere, it has almost no effect up [disfmarker] up [disfmarker] up on [disfmarker] on top. [speaker005:] Mm hmm. [speaker004:] And it [disfmarker] and it [disfmarker] and it has significant effect down there. That was, sort of the idea. [speaker007:] Mm hmm. [speaker002:] Hmm. Yeah the [disfmarker] that's true. [speaker007:] I [speaker002:] That [disfmarker] those [disfmarker] those regions are the cause for this [@ @] [disfmarker] those negative values or whatever you get. [speaker007:] Mm hmm. Mm hmm. [speaker002:] Yeah. So. [speaker007:] I mean, we [disfmarker] we could trit uh, we [disfmarker] we could think how w what [disfmarker] what we could try. [speaker002:] Yeah. [speaker007:] I mean, [vocalsound] it [disfmarker] it was just an idea. [speaker002:] Yeah, yeah. [speaker005:] Mm hmm. [speaker007:] I mean, we [disfmarker] [speaker004:] I think when it's noisy people should just speak up. [speaker007:] to [disfmarker] Mmm. [speaker002:] So [disfmarker] [speaker005:] If we look at the France Telecom proposal, they use some kind of noise addition. They have a random number generator, right? And they add noise on the trajectory of, uh, the log energy only, right? [speaker004:] Oh, they do! Oh. [speaker002:] Yep. C z C zero and log energy also, [speaker005:] Yeah. [speaker002:] yeah. [speaker005:] Um, But I don't know how much effect it [disfmarker] this have, but they do that. [speaker002:] Now? [speaker005:] Yeah. [speaker007:] Uh huh. [speaker002:] Oh. [speaker004:] Hmm. [speaker007:] So it [disfmarker] it [disfmarker] it [disfmarker] it [disfmarker] it is l somehow similar to what [disfmarker] [speaker005:] I think because they have th log energy, yeah, and then just generate random number. 
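The "add something everywhere" point (the flat additive constant from the J stuff) is easy to see numerically: before the log, a constant is nearly invisible on strong bins but dominates the near-zero valleys that subtraction distorts. The constant here is illustrative:

```python
import numpy as np

def log_with_flat_noise(power_spec, const=1.0):
    """Add a flat constant to every bin before the log.  In practice
    const would be tied to an assumed noise floor."""
    return np.log(np.asarray(power_spec, dtype=float) + const)

strong, weak = log_with_flat_noise(np.array([1000.0, 0.01]))
# the strong bin barely moves; the weak bin is lifted far above
# log(0.01), smoothing exactly the troublesome valleys
```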
They have some kind of mean and variance, and they add this number to [disfmarker] to the log energy simply. Um [disfmarker] [speaker004:] To the l [speaker002:] Yeah [disfmarker] the [disfmarker] the log energy, the [disfmarker] after the clean [disfmarker] cleaning up. [speaker005:] Mm hmm. [speaker002:] So they add a random [disfmarker] random noise to it. [speaker004:] To the [disfmarker] just the energy, or to the mel [disfmarker] uh, to the mel filter? [speaker002:] No. On only to the log energy. [speaker005:] Only [disfmarker] Yeah. [speaker004:] Oh. [speaker007:] Uh huh. [speaker004:] So it [disfmarker] Cuz I mean, I think this is most interesting for the mel filters. [speaker007:] Uh huh. [speaker004:] Right? Or [disfmarker] or F F one or the other. [speaker007:] But [disfmarker] but they do not apply filtering of the log energy or what [disfmarker] [speaker002:] Like, uh [disfmarker] I mean [disfmarker] [speaker007:] like [disfmarker] like a spectral subtraction or [disfmarker] [speaker002:] No [disfmarker] their filter is not M domain. [speaker007:] Yeah. [speaker002:] S so they did filter their time signal [speaker007:] I kn [speaker002:] and then what [@ @] [disfmarker] u [speaker007:] And then they calculate from this, the log energy or [disfmarker]? [speaker002:] Yeah [disfmarker] then after that it is s almost the same as the baseline prop system. [speaker007:] Mm hmm. [speaker002:] And then the final log energy that they [disfmarker] that they get, that [disfmarker] to the [disfmarker] to that they add some random noise. [speaker004:] Yeah, but again, that's just log energy as opposed to [vocalsound] filter bank energy. [speaker002:] Yeah. [speaker007:] Mmm. [speaker002:] So it's not the mel. You know, it's not the mel filter bank output. [speaker004:] Yeah. [speaker007:] Mm hmm. [speaker002:] These are log energy computed from the time s domain signal, [speaker005:] Mm hmm. [speaker002:] not from the mel filter banks. 
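The France Telecom trick as described, adding random values to the final log-energy term only, after the cleanup, could look like this. The Gaussian shape and sigma are assumptions, since the meeting does not give their actual mean and variance:

```python
import numpy as np

def jitter_log_energy(log_e, sigma=0.1, seed=0):
    """Add random noise to the per-frame log energy (and nothing
    else), widening that one feature's distribution and so reducing
    its weight in the overall feature vector."""
    rng = np.random.default_rng(seed)
    log_e = np.asarray(log_e, dtype=float)
    return log_e + rng.normal(0.0, sigma, size=log_e.shape)
```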
So [disfmarker] [speaker004:] Hmm. [speaker005:] Maybe it's just a way to decrease the importance of this particular parameter in the [disfmarker] in the world feature vector [speaker002:] did [disfmarker] [speaker005:] cu if you add noise to one of the parameters, you widen the distributions [speaker004:] Hmm. [speaker002:] Becomes flat. The variance, yeah, reduces, [speaker005:] and [disfmarker] [speaker002:] so. Hmm, yeah. [speaker005:] Eee sss uh. [speaker004:] So it could reduce the dependence on the amplitude and so on. Yeah. [speaker005:] Yeah. [speaker002:] Yeah. Although [disfmarker] [speaker004:] Maybe. [speaker006:] So is, uh [disfmarker] Is that about it? [speaker005:] Mm hmm. [speaker002:] Uh, [speaker006:] Or [disfmarker]? [speaker002:] so the [disfmarker] OK. So the other thing is the [disfmarker] I'm just looking at a little bit on the delay issue where the delay of the system is like a hundred and eighty millisecond. So [vocalsound] I just [disfmarker] just tried another sk system [disfmarker] I mean, another filter which I've like shown at the end. Which is very similar to the existing uh, filter. Only [disfmarker] Uh, only thing is that the phase is [disfmarker] is like a totally nonlinear phase because it's a [disfmarker] it's not a symmetric filter anymore. [speaker006:] This is for the LDA? [speaker002:] Yeah [disfmarker] so [disfmarker] so this [disfmarker] this is like [disfmarker] So this makes the delay like zero for LDA because it's completely causal. [speaker006:] Oh. [speaker002:] So [disfmarker] So I got actually just the results for the Italian for that and that's like [disfmarker] So the fifty one point O nine has become forty eight point O six, which is like three percent relative degradation. [speaker005:] Mm hmm. [speaker002:] So I have like the fifty one point O nine and [disfmarker] So. I don't know it f fares for the other conditions. 
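The zero-delay LDA variant Sunil mentions works because a fully causal FIR filter uses only current and past frames; a symmetric, linear-phase filter of length L needs (L-1)/2 future frames, which is where the filter's share of the latency comes from. A minimal sketch of the causal case:

```python
import numpy as np

def causal_fir(x, h):
    """Causal filtering: output frame t depends on frames t, t-1, ...
    only, so the filter adds no look-ahead delay (at the cost of a
    nonlinear phase response, as in the filter discussed)."""
    return np.convolve(x, h)[:len(x)]
```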
So it's just like [disfmarker] it's like a three percent relative degradation, with the [disfmarker] [speaker007:] But [disfmarker] but is there [disfmarker] is there a problem with the one hundred eighty milliseconds? Or [disfmarker]? [speaker004:] Th Well, this is [disfmarker] [speaker002:] u Uh, may [speaker007:] Yeah, I mean, I talked to [disfmarker] to [disfmarker] uh, I ta Uh, I talked, uh, about it with [disfmarker] with Hynek. I mean, there is [disfmarker] [speaker004:] This is [disfmarker] So [disfmarker] So, basically our [disfmarker] our position is [vocalsound] that, um, we shouldn't be unduly constraining the latency at this point because we're all still experimenting with trying to make the performance better in the presence of noise. Uh, there is a minority in that group who is a arguing [disfmarker] who are arguing for [vocalsound] um, uh, having a further constraining of the latency. So we're s just continuing to keep aware of what the trade offs are and, you know, what [disfmarker] what do we gain from having longer or shorter latencies? [speaker007:] Mmm. [speaker004:] But since we always seem to at least get something out of longer latencies not being so constrained, we're tending to go with that if we're not told we can't do it. [speaker007:] Mm hmm. [speaker006:] What [disfmarker] where was the, um [disfmarker] the smallest latency of all the systems last time? [speaker002:] The French Telecom. [speaker004:] Well, France Telecom was [disfmarker] was [disfmarker] was very short latency [speaker007:] It's [disfmarker] [speaker004:] and they had a very good result. [speaker006:] What [disfmarker] what was it? [speaker004:] It was thirty five. [speaker007:] It was in the order of thirty milliseconds or [disfmarker] [speaker004:] Yeah. [speaker006:] Thirteen? [speaker004:] th th [speaker007:] Thirty. [speaker006:] Thirty. [speaker002:] Thirty four. [speaker004:] Yeah. [speaker007:] Yeah. 
[speaker004:] Yeah, so it's possible to get very short latency. But, again, we're [disfmarker] the [disfmarker] the approaches that we're using are ones that [vocalsound] take advantage of [disfmarker] [speaker006:] Yeah. I was just curious about where we are compared to, you know, the shortest that people have done. [speaker007:] But [disfmarker] but I think this thirty milliseconds [disfmarker] they [disfmarker] they did [disfmarker] it did not include the [disfmarker] the delta calculation. [speaker002:] Yeah. [speaker007:] And this is included now, [speaker002:] Yeah. Yeah. Yeah. [speaker007:] you know? [speaker002:] Yeah. So if they include the delta, it will be an additional forty millisecond. [speaker005:] Mm hmm. [speaker007:] Yeah. [speaker004:] Yeah. [speaker007:] I [disfmarker] I don't remember the [disfmarker] i th They were not using the HTK delta? [speaker002:] No, they're using a nine point window, which is like a four on either side, [speaker007:] Nine point. [speaker002:] which is like [disfmarker] [speaker007:] OK. [speaker002:] f so [disfmarker] [speaker007:] Mmm. [speaker002:] they didn't include that. [speaker004:] Yeah. [speaker007:] Mm hmm. [speaker002:] So [disfmarker] [speaker006:] OK. [speaker005:] Where does the comprish compression in decoding delay comes from? [speaker002:] That's the way the [disfmarker] the [disfmarker] the frames are packed, like you have to wait for one more frame to pack. Because it's [disfmarker] the CRC is computed for two frames always. [speaker005:] Mm hmm. [speaker004:] Well, that [disfmarker] the they would need that forty milliseconds also. Right? [speaker002:] No. They actually changed the compression scheme altogether. [speaker005:] Mm hmm. [speaker002:] So they have their own compression and decoding scheme and they [disfmarker] I don't know what they have. [speaker004:] Oh. [speaker002:] But they have coded zero delay for that. Because they ch I know they changed it, their compression. 
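The nine-point delta window mentioned, four frames on either side, is the standard regression delta; the four future frames are the additional forty milliseconds of latency at a 10 ms frame step:

```python
import numpy as np

def deltas(feats, K=4):
    """Regression deltas over a (2K+1)-point window; needs K future
    frames, i.e. K * 10 ms of look-ahead at a 10 ms frame step.
    Edges are handled by repeating the first/last frame."""
    feats = np.asarray(feats, dtype=float)
    T = len(feats)
    pad_width = ((K, K),) + ((0, 0),) * (feats.ndim - 1)
    p = np.pad(feats, pad_width, mode='edge')
    den = 2.0 * sum(k * k for k in range(1, K + 1))
    out = np.zeros_like(feats)
    for k in range(1, K + 1):
        out += k * (p[K + k:K + k + T] - p[K - k:K - k + T])
    return out / den
```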
They have their own CRC, their [disfmarker] their own [vocalsound] error correction mechanism. [speaker004:] Oh. [speaker002:] So they don't have to wait more than one more frame to know whether the current frame is in error. [speaker004:] Oh, OK. [speaker002:] So they changed the whole thing so that there's no delay for that compression and [disfmarker] part also. [speaker004:] Hmm. [speaker007:] Mm hmm. [speaker002:] Even you have reported actually zero delay for the [pause] compression. I thought maybe you also have some different [disfmarker] [speaker007:] Mmm. Mmm. No, I think I [disfmarker] I used this scheme as it was before. [speaker002:] OK. Ah. Mm hmm. [speaker006:] OK, we've got twenty minutes so we should [vocalsound] probably try to move along. Uh, did you wanna go next, Stephane? [speaker005:] I can go next. Yeah. Mmm. [speaker004:] Oh. [speaker005:] It's [disfmarker] [speaker004:] Wait a minute. It's [disfmarker] [speaker005:] Yeah, we have to take [disfmarker] [speaker004:] Wait a minute. I think [vocalsound] I'm confused. [speaker005:] Well [disfmarker] OK. [speaker004:] Alright. [speaker005:] So you have w w one sheet? This one is [disfmarker] you don't need it, alright. [speaker004:] Uh [disfmarker] [speaker005:] So you have to take the whole [disfmarker] the five. There should be five sheets. [speaker004:] OK, I have four now because I left one with Dave because I thought I was dropping one off and passing the others on. So, no, we're not. OK. [speaker002:] Thanks. [speaker008:] Please give me one. [speaker004:] Ah, we need one more over here. [speaker005:] OK, maybe there's not enough for everybody. [speaker006:] I can share with Barry. [speaker005:] But [disfmarker] [speaker001:] Yeah. [speaker007:] OK. [speaker004:] Oh, OK. [speaker005:] Can we look at this? [speaker003:] Yeah. [speaker005:] So, yeah, there are two figures showing actually the, mmm, um, performance of the current VAD. 
So it's a n neural network based on PLP parameters, uh, which estimate silence probabilities, and then I just put a median filtering on this to smooth the probabilities, right? Um [disfmarker] I didn't use the [disfmarker] the scheme that's currently in the proposal because [vocalsound] I don't want to [disfmarker] In the proposal [disfmarker] Well, in [disfmarker] in the system we want to add like speech frame before every word and a little bit of [disfmarker] of, uh, s a couple of frames after also. Uh, but to estimate the performance of the VAD, we don't want to do that, because it would artificially increase the um [disfmarker] the false alarm rate of speech detection. Right? Um, so, there is u normally a figure for the Finnish and one for Italian. And maybe someone has two for the Italian because I'm missing one figure here. [speaker002:] No. [speaker005:] Well [disfmarker] Well, whatever. Uh [disfmarker] Yeah, so one surprising thing that we can notice first is that apparently the speech miss rate is uh, higher than the false alarm rate. So. [speaker007:] So [disfmarker] so what is the lower curve and the upper curve? [speaker005:] It means [disfmarker] Mm hmm. Yeah, there are two curves. [speaker007:] Yeah. [speaker005:] One curve's for the close talking microphone, which is the lower curve. [speaker007:] Ah, OK. [speaker005:] And the other one is for the distant microphone which has more noise so, it's logical that [vocalsound] it performs worse. So as I was saying, the miss rate is quite important uh, which means that we tend to label speech as [disfmarker] as a silence. And, uh, I didn't analyze further yet, but [vocalsound] I think it's [disfmarker] it may be due to the fricative sounds which may be [disfmarker] in noisy condition maybe label [disfmarker] labelled as silence. And it may also be due to the alignment because [disfmarker] well, the reference alignment. 
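The miss/false-alarm split being discussed is standard frame-level VAD scoring against a reference alignment; a sketch, with labels 1 = speech and 0 = silence:

```python
def vad_error_rates(ref, hyp):
    """Miss rate: fraction of reference speech frames labelled
    silence.  False-alarm rate: fraction of reference silence
    frames labelled speech."""
    speech = [h for r, h in zip(ref, hyp) if r == 1]
    silence = [h for r, h in zip(ref, hyp) if r == 0]
    miss = speech.count(0) / max(len(speech), 1)
    false_alarm = silence.count(1) / max(len(silence), 1)
    return miss, false_alarm
```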
Because right now I just use an alignment obtained from [disfmarker] from a system trained on channel zero. And I checked it a little bit but there might be alignment errors. Um, yeah, e like the fact that [vocalsound] [vocalsound] the [disfmarker] the models tend to align their first state on silence and their last state o on silence also. So the reference [disfmarker] reference alignment would label as speech some silence frame before speech and after speech. This is something that we already noticed before when [disfmarker] mmm, So this cus this could also explain, uh, the high miss rate maybe. [speaker007:] And [disfmarker] and this [disfmarker] this curves are the average over the whole database, [speaker005:] Uh [disfmarker] [speaker007:] so. [speaker005:] Yeah. Right. [speaker007:] Mmm. [speaker005:] Um [disfmarker] Yeah, and the different points of the curves are for five uh, thresholds on the probability [comment] uh from point three to point seven. [speaker002:] So that threshold [disfmarker] [speaker005:] Mm hmm. Yeah. [speaker002:] OK. S OK [disfmarker] so d the detection threshold is very [disfmarker] [speaker005:] So the v The VAD? [speaker002:] Yeah, yeah. [speaker005:] Yeah. There first, a threshold on the probability [comment] [@ @] [comment] That puts all the values to zero or one. [speaker002:] Mmm. [speaker005:] And then the median filtering. [speaker002:] Yeah, so the median filtering is fixed. You just change the threshold? [speaker005:] Yeah. It's fixed, yeah. [speaker002:] Yeah. [speaker005:] Mm hmm. So, going from channel zero to channel one, uh, almost double the error rate. Um, Yeah. Well, so it's a reference performance that we can [disfmarker] you know, if we want to [disfmarker] to work on the VAD, [comment] we can work on this basis [speaker002:] Mm hmm. [speaker005:] and [disfmarker] [speaker002:] OK. [speaker001:] Is this [disfmarker] is this VAD a MLP? [speaker005:] Yeah. [speaker001:] OK. How [disfmarker] how big is it? 
[speaker005:] It's a very big one. I don't remember. [speaker002:] So three [disfmarker] three hundred and fifty inputs, [speaker005:] m [speaker002:] uh, six thousand hidden nodes and two outputs. t t [speaker001:] OK. [speaker002:] Yeah. [speaker005:] Mm hmm. [speaker004:] Middle sized one. [speaker002:] Yeah. [speaker005:] Mm hmm. Yeah. Uh, ppp. I don't know, you have questions about that, or suggestions? [speaker002:] Mmm. S so [disfmarker] [speaker005:] It seems [disfmarker] the performance seems worse in Finnish, which [disfmarker] [speaker002:] Well, it's not trained on Finnish. [speaker005:] uh [disfmarker] [speaker008:] It's worse. [speaker005:] It's not trained on Finnish, yeah. [speaker004:] What's it trained on? [speaker002:] I mean, the MLP's not trained on Finnish. [speaker004:] Right, what's it trained on? [speaker002:] Oh [disfmarker] oh. Sorry. Uh, it's Italian TI digits. [speaker004:] Yeah. Oh, it's trained on Italian? [speaker002:] Yeah. [speaker004:] Yeah, [speaker005:] Mm hmm. [speaker004:] OK. [speaker005:] And [disfmarker] [speaker002:] That's right. [speaker004:] OK. [speaker005:] And also there are like funny noises on Finnish more than on Italian. [speaker007:] Mm hmm. [speaker005:] I mean, like music [speaker002:] Yeah. [speaker005:] and [vocalsound] um [disfmarker] [speaker002:] Yeah, the [disfmarker] Yeah, it's true. [speaker005:] So, yeah, we were looking at this. But for most of the noises, noises are [disfmarker] um, I don't know if we want to talk about that. But, well, the [disfmarker] the "Car" noises are below like five hundred hertz. And we were looking at the "Music" utterances and in this case the noise is more about two thousand hertz. Well, the music energy's very low apparently. [speaker002:] Yeah. [speaker005:] Uh, uh, from zero to two [disfmarker] two thousand hertz. So maybe just looking at this frequency range for [disfmarker] from five hundred to two thousand would improve somewhat the VAD [speaker002:] Mmm. 
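The VAD post-processing Stephane describes — threshold the network's per-frame probability to zero or one, then median-filter the binary decisions — can be sketched as below. The window length and threshold defaults are illustrative assumptions, not the actual ICSI settings; the curves in the figures sweep the threshold from 0.3 to 0.7.

```python
import numpy as np

def vad_decisions(speech_prob, threshold=0.5, win=11):
    """Binarize per-frame speech probabilities, then median-filter.

    speech_prob : per-frame P(speech) from the MLP (assumed 1 - P(silence))
    threshold   : operating point; the reported curves sweep 0.3 .. 0.7
    win         : odd median-filter length in frames (illustrative value)
    """
    binary = (np.asarray(speech_prob) >= threshold).astype(int)
    half = win // 2
    padded = np.pad(binary, half, mode="edge")  # hold the endpoint values
    smoothed = np.array([np.median(padded[i:i + win])
                         for i in range(len(binary))]).astype(int)
    return smoothed

# An isolated silence decision inside a speech run is smoothed away:
probs = [0.1, 0.2, 0.9, 0.8, 0.2, 0.9, 0.9, 0.9, 0.1]
print(vad_decisions(probs, threshold=0.5, win=3))
```

Raising the threshold trades false alarms for misses, which is the sweep shown in the two curves per figure (close-talking vs. distant microphone).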
[speaker005:] and [disfmarker] [speaker002:] Yeah. [speaker005:] Mmm [disfmarker] [speaker002:] So there are like some [disfmarker] some s some parameters you wanted to use or something? [speaker005:] Yeah, but [disfmarker] Yes. [speaker002:] Or [disfmarker] Yeah. [speaker005:] Mm hmm. Uh, the next, um [disfmarker] [speaker007:] So is the [disfmarker] is the [disfmarker] is the training [disfmarker] is the training based on these labels files which you take as reference here? [speaker005:] Oh, it's there. [speaker007:] Wh when you train the neural net y y you [disfmarker] [speaker005:] Yeah. No. It's not. It's [disfmarker] it was trained on some alignment obtained um, uh [disfmarker] For the Italian data, I think we trained the neural network on [disfmarker] with embedded training. So re estimation of the alignment using the neural network, I guess. That's right? [speaker002:] Yeah. We actually trained, uh, the [disfmarker] on the Italian training part. [speaker005:] Yeah. [speaker002:] We [disfmarker] we had another [vocalsound] system with u [speaker005:] So it was a f f a phonetic classification system for the Italian Aurora data. [speaker002:] Yeah. It must be somewhere. Yeah. [speaker005:] For the Aurora data that it was trained on, it was different. Like, for TI digits you used a [disfmarker] a previous system that you had, I guess. [speaker002:] What [disfmarker] No it [disfmarker] Yeah, yeah. That's true. [speaker005:] So the alignments from the different database that are used for training came from different system. [speaker002:] Syste Yeah. [speaker005:] Then we put them tog together. Well, you put them together and trained the VAD on them. [speaker002:] Yeah. [speaker005:] Mmm. [speaker002:] Yeah. [speaker007:] Hmm. [speaker005:] Uh, But did you use channel [disfmarker] did you align channel one also? Or [disfmarker] [speaker002:] I just took their entire Italian training part. [speaker005:] Yeah. 
[speaker002:] So it was both channel zero plus channel one. [speaker005:] So di Yeah. So the alignments might be wrong then on channel one, right? [speaker002:] On one. Possible. [speaker005:] So we might, yeah, [speaker002:] We can do a realignment. That's true. [speaker005:] at least want to retrain on these alignments, which should be better because they come from close talking microphone. [speaker007:] Yeah, the [disfmarker] that was my idea. I mean, if [disfmarker] if it ha if it is not the same labeling which is taking the spaces. [speaker002:] Yeah. [speaker005:] OK. [speaker002:] Yeah, possible. [speaker005:] Yeah. [speaker007:] Mmm. [speaker002:] I mean, it [disfmarker] [speaker005:] Yeah. [speaker002:] so the system [disfmarker] so the VAD was trained on maybe different set of labels for channel zero and channel one [speaker005:] Mm hmm. [speaker002:] and [disfmarker] [speaker007:] Mm hmm. [speaker005:] Mm hmm. [speaker002:] was the alignments were w were different for [disfmarker] s certainly different because they were independently trained. We didn't copy the channel zero alignments to channel one. [speaker005:] Mm hmm. [speaker007:] Mm hmm. [speaker002:] Yeah. [speaker005:] Yeah. [speaker002:] But for the new alignments what you generated, you just copied the channel zero to channel one, right? [speaker005:] Right. Yeah. [speaker002:] Yeah. [speaker005:] Um. And eh, hhh actually when we look at [disfmarker] at the VAD, [vocalsound] for some utterances it's almost perfect, I mean, it just dropped one frame, the first frame of speech or [disfmarker] So there are some utterances where it's almost one hundred percent VAD performance. [speaker007:] Hmm. [speaker005:] Uh, but [disfmarker] Yeah. Mmm [disfmarker] Yep. So the next thing is um, I have the spreadsheet for three different system. 
But for this you only have to look right now on the SpeechDat Car performance uh, because I didn't test [disfmarker] so [disfmarker] I didn't test the spectral subtraction on TI digits yet. Uh, so you have three she sheets. One is the um proposal one system. Actually, it's not exe exactly proposal one. It's the system that Sunil just described. Um, but with uh, Wiener filtering from um, France Telecom included. Um, so this gives like fifty seven point seven percent, uh, s uh, error rate reduction on the SpeechDat Car data. Mmm, and then I have two sheets where it's for a system where [disfmarker] uh, so it's again the same system. But in this case we have spectral subtraction with a maximum overestimation factor of two point five. Uh, there is smoothing of the gain trajectory with some kind of uh, low pass filter, which has forty milliseconds latency. And then, after subtraction um, I add a constant to the energies and I have two cases d where [disfmarker] The first case is where the constant is twenty five DB below the mean speech energy and the other is thirty DB below. Um, and for these s two system we have like fifty five point, uh, five percent improvement, and fifty eight point one. So again, it's around fifty six, fifty seven. Uh [disfmarker] [speaker004:] Cuz I notice the TI digits number is exactly the same for these last two? [speaker005:] Yeah, because I didn't [disfmarker] For the France Telecom uh, spectral subtraction included in the [disfmarker] our system, the TI digits number are the right one, but not for the other system because I didn't test it yet [disfmarker] this system, including [disfmarker] with spectral subtraction on the TI digits data. I just tested it on SpeechDat Car. [speaker007:] Mm hmm. [speaker004:] Ah! So [disfmarker] so that means the only thing [disfmarker] [speaker007:] So [disfmarker] so [disfmarker] so these numbers are simply [disfmarker] [speaker005:] This, we have to [disfmarker] Yeah. [speaker007:] Yeah. 
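The subtraction step in the system Stephane describes — power-domain spectral subtraction with a noise overestimation factor of up to 2.5, followed by flooring — can be sketched as follows. Using a single fixed alpha per frame is a simplification; an SNR-dependent overestimation schedule, which such systems often use, is not shown.

```python
import numpy as np

def spectral_subtract(power, noise_power, alpha=2.5, floor=0.0):
    """Power-domain spectral subtraction with noise overestimation.

    power       : noisy power spectrum of one frame
    noise_power : current noise-power estimate (same shape)
    alpha       : overestimation factor (2.5 is the maximum quoted above)
    floor       : values below this are clipped (the "set to zero" step)
    """
    power = np.asarray(power, dtype=float)
    noise_power = np.asarray(noise_power, dtype=float)
    clean = power - alpha * noise_power
    return np.maximum(clean, floor)

# Bins dominated by noise are driven to the floor:
print(spectral_subtract([10.0, 4.0, 1.0], [1.0, 1.0, 1.0]))
```

The bins clipped at the floor are the source of the variance problem that the additive constant discussed next is meant to repair.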
[speaker004:] Yeah. [speaker002:] But this number. [speaker007:] OK. [speaker005:] Yes. [speaker004:] So you [disfmarker] so you just should look at that fifty eight perc point O nine percent and so on. [speaker005:] Right. Right. [speaker004:] OK. Good. [speaker005:] Mm hmm. Um, Yeah. [speaker002:] So this [disfmarker] [speaker007:] s [speaker002:] So by [disfmarker] uh, by [disfmarker] by reducing the noise a [disfmarker] a decent threshold like minus thirty DB, it's like [disfmarker] Uh, you are like r r reducing the floor of the noisy regions, right? [speaker005:] Yeah. Yeah. The floor is lower. Um, [speaker002:] Uh huh. [speaker005:] mm hmm. [speaker004:] I'm sorry. So when you say minus twenty five or minus thirty DB, with respect to what? [speaker005:] To the average um, speech energy which is estimated on the world database. [speaker004:] OK, so basically you're creating a signal to noise ratio of twenty five or thirty DB? [speaker005:] Yeah. But it's not [disfmarker] [speaker007:] I [disfmarker] I [disfmarker] I think what you do is this. [speaker004:] uh r [speaker005:] it [disfmarker] it's [disfmarker] [speaker007:] i When [disfmarker] when you have this, [vocalsound] after you subtracted it, I mean, then you get something w w with this, uh, where you set the values to zero and then you simply add an additive constant again. [speaker005:] Yeah. [speaker007:] So you shift it somehow. This [disfmarker] this whole curve is shifted again. [speaker005:] Right. [speaker004:] But did you do that before the thresholding to zero, [speaker005:] It's [disfmarker] [speaker004:] or [disfmarker]? [speaker005:] But, it's after the thresholding. So, [speaker004:] Oh, so you'd really want to do it before, [speaker005:] maybe [disfmarker] [speaker004:] right? [speaker005:] maybe we might do it before, yeah. [speaker004:] Yeah, because then the [disfmarker] then you would have less of that phenomenon. [speaker005:] Yeah. [speaker007:] E Hhh. [speaker004:] I think. 
[speaker005:] Uh [disfmarker] Yeah. [speaker004:] c [speaker005:] But still, when you do this and you take the log after that, it [disfmarker] it reduce the [disfmarker] the variance. [speaker004:] Yeah, it [disfmarker] it [disfmarker] [speaker005:] But [disfmarker] [speaker004:] Right. [speaker005:] Mmm, [speaker004:] Yeah, that will reduce the variance. That'll help. But maybe if you does [disfmarker] do it before you get less of these funny looking things he's drawing. [speaker007:] Mm hmm. [speaker005:] Um, [speaker002:] So before it's like adding this, col to the [disfmarker] to the [disfmarker] o exi original [disfmarker] [speaker007:] But [disfmarker] but [disfmarker] [speaker005:] We would [disfmarker] [speaker004:] Right at the point where you've done the subtraction. [speaker002:] OK. [speaker004:] Um, essentially you're adding a constant into everything. [speaker005:] Mm hmm. [speaker007:] But the way Stephane did it, it is exactly the way I have implemented in the phone, so. [speaker004:] Oh, yeah, better do it different, then. Yeah. [speaker005:] Um. Yeah. [speaker004:] Just you [disfmarker] you just ta you just set it for a particular signal to noise ratio that you want? [speaker007:] Yeah I [disfmarker] I made s similar investigations like Stephane did here, just uh, adding this constant and [disfmarker] and looking how dependent is it on the value of the constant [speaker004:] Yeah. Yeah. [speaker005:] Mm hmm. [speaker007:] and then, must choose them somehow [vocalsound] to give on average the best results for a certain range of the signal to noise ratios. [speaker005:] Yeah. [speaker004:] Uh huh. [speaker005:] Mm hmm. [speaker007:] So [disfmarker] [speaker005:] Oh, it's clear. I should have gi given other results. Also it's clear when you don't add noise, it's much worse. Like, around five percent worse I guess. [speaker004:] Uh huh. [speaker005:] And if you add too much noise it get worse also. 
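The two orderings being debated — Stephane's current one (clip at zero, then add the constant) versus Morgan's suggestion (add the constant before clipping) — can be sketched as below. The add-before-clipping branch is one plausible reading of the suggestion, with a small positive floor kept so a later log stays finite; both the floor and the example constant are assumptions.

```python
import numpy as np

def add_noise_floor(subtracted, c, before_threshold=False):
    """Re-add a small constant to the subtracted power spectrum.

    c : additive constant, e.g. set 25-30 dB below the mean speech
        energy (here just a number; the dB bookkeeping is omitted)
    """
    x = np.asarray(subtracted, dtype=float)
    if before_threshold:
        # one reading of the add-before-clipping suggestion; the small
        # positive floor (an assumption) keeps a later log finite
        return np.maximum(x + c, 0.01 * c)
    # the order used in the experiments above: clip at zero, then add c
    return np.maximum(x, 0.0) + c

x = [-0.5, 0.01, 4.0]
print(add_noise_floor(x, c=0.1))                        # add after clipping
print(add_noise_floor(x, c=0.1, before_threshold=True)) # add before clipping
```

In the add-after ordering every clipped bin lands at exactly c, so after the log those bins collapse to a single value — that is the variance reduction Stephane mentions, and also the flat-floor artifact Morgan is objecting to.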
And it seems that [vocalsound] right now this [disfmarker] this is c a constant that does not depend on [disfmarker] [comment] on anything that you can learn from the utterance. It's just a constant noise addition. Um. And I [disfmarker] I think w w [speaker004:] I [disfmarker] I'm sorry. [speaker005:] I think [disfmarker] [speaker004:] Then [disfmarker] then I'm confused. I thought [disfmarker] you're saying it doesn't depend on the utterance but I thought you were adding an amount that was twenty five DB down from the signal energy. [speaker005:] Yeah, so the way I did that, [comment] i I just measured the average speech energy of the [disfmarker] all the Italian data. [speaker004:] Oh! [speaker005:] And then [disfmarker] I [disfmarker] I have [disfmarker] I used this as mean speech energy. Mm hmm. [speaker004:] Oh, it's just a constant amount over all. [speaker005:] Yeah. And [disfmarker] wha what I observed is that for Italian and Spanish, [comment] when you go to thirty and twenty five DB, [comment] uh it [disfmarker] it's good. [speaker002:] OK. Oh. [speaker005:] It stays [disfmarker] In this range, it's, uh, the p u well, the performance of the [disfmarker] this algorithm is quite good. But for Finnish, [vocalsound] you have a degradation already when you go from thirty five to thirty and then from thirty to twenty five. And [disfmarker] I have the feeling that maybe it's because just Finnish has a mean energy that's lower than [disfmarker] than the other databases. And due to this the thresholds should be [disfmarker] [speaker004:] Yeah. [speaker005:] the [disfmarker] the a the noise addition should be lower [speaker004:] But in [disfmarker] I mean, in the real thing you're not gonna be able to measure what people are doing over half an hour or an hour, or anything, right? [speaker005:] and [disfmarker] [speaker004:] So you have to come up with this number from something else. [speaker005:] Yeah. 
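The dB bookkeeping behind the fixed constant — 25 or 30 dB below the average speech energy measured once on the Italian training data — is just the following; the mean-energy value in the example is made up.

```python
def noise_constant(mean_speech_energy, db_below):
    """Additive constant placed db_below dB under the mean speech energy."""
    return mean_speech_energy * 10.0 ** (-db_below / 10.0)

# e.g. 30 dB below a mean energy of 1.0:
print(noise_constant(1.0, 30.0))
```

Because the mean energy was measured on Italian only, a database with a lower overall level — the hypothesis about Finnish above — gets an effectively too-large constant, which is the motivation for estimating it online instead.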
So [disfmarker] [speaker007:] Uh, but you are not doing it now language dependent? Or [disfmarker]? [speaker005:] It's not. It's just something that's fixed. [speaker007:] No. It's overall. [speaker005:] Yeah. [speaker007:] OK. [speaker005:] Mm hmm. Um [disfmarker] Yeah, so I g [speaker004:] But what he is doing language dependent is measuring what that number i reference is that he comes down twenty five down from. [speaker005:] No. It [disfmarker] No. Because I did it [disfmarker] I started working on Italian. [speaker004:] No? [speaker005:] I obtained this average energy [speaker004:] Yeah. [speaker005:] and then I used this one. [speaker002:] For all the languages. [speaker005:] Yeah. [speaker002:] OK. [speaker004:] So it's sort of arbitrary. I mean, so if y if [disfmarker] [speaker002:] Yeah. [speaker004:] Yeah. [speaker005:] Yep. Um, [speaker004:] Yeah. [speaker005:] yeah, so the next thing is to use this as [disfmarker] as maybe initialization and then use something on line. [speaker004:] Uh huh. Something more adaptive, [speaker005:] But [disfmarker] [vocalsound] And I expect improvement at least in Finnish because eh [disfmarker] the way [disfmarker] [speaker004:] yeah. OK. [speaker005:] Well, um, for Italian and Spanish it's [disfmarker] th this value works good but not necessarily for Finnish. Mmm. But unfortunately there is, like, this forty millisecond latency and, um [disfmarker] Yeah, so I would try to somewhat reduce this [@ @]. I already know that if I completely remove this latency, so. [vocalsound] um, [comment] it [disfmarker] um there is a three percent hit on Italian. [speaker007:] Mm hmm. [speaker002:] d Does latency [disfmarker] [speaker007:] i [speaker002:] Sorry. Go ahead. [speaker007:] Yeah. Your [disfmarker] your smoothing was [@ @] [comment] uh, over this s so to say, the [disfmarker] the factor of the Wiener. And then it's, uh [disfmarker] What was it? This [disfmarker] [speaker005:] Mm hmm. 
[speaker007:] this smoothing, it was over the subtraction factor, so to say. [speaker005:] It's a smoothing over the [disfmarker] the gain of the subtraction algorithm. [speaker007:] Was this done [disfmarker] Mm hmm. And [disfmarker] and you are looking into the future, into the past. [speaker005:] Right. [speaker007:] And smoothing. Mm hmm. [speaker005:] So, to smooth this [pause] thing. Yeah. Um [disfmarker] [speaker007:] And did [disfmarker] did you try simply to smooth um to smooth the [disfmarker] the [disfmarker] t to [disfmarker] to smooth stronger the [disfmarker] the envelope? [speaker005:] Um, no, I did not. [speaker007:] Mmm. [speaker005:] Mmm. [speaker007:] Because I mean, it should have a similar effect if you [disfmarker] [speaker005:] Yeah. [speaker007:] I mean, you [disfmarker] you have now several stages of smoothing, so to say. You start up. As far as I remember you [disfmarker] you smooth somehow the envelope, you smooth somehow the noise estimate, [speaker005:] Mm hmm. [speaker007:] and [disfmarker] [vocalsound] and later on you smooth also this subtraction factor. [speaker005:] Mmm [disfmarker] Uh, no, it's [disfmarker] it's just the gain that's smoothed actually [speaker002:] Uh, actually I d I do all the smoothing. [speaker005:] but it's smoothed [disfmarker] [speaker007:] Ah. Oh, it w it was you. [speaker002:] Yeah, yeah. [speaker005:] Uh [disfmarker] Yeah. [speaker007:] Yeah. [speaker005:] Yeah. [speaker007:] Yeah. [speaker005:] No, in this case it's just the gain. [speaker007:] Uh huh. [speaker005:] And [disfmarker] But the way it's done is that um, for low gain, there is this non nonlinear smoothing actually. For low gains um, I use the smoothed sm uh, smoothed version [speaker007:] Uh. [speaker005:] but [disfmarker] for high gain [@ @] [comment] it's [disfmarker] I don't smooth. [speaker007:] Mm hmm. 
I just, uh [disfmarker] it [disfmarker] Experience shows you, if [disfmarker] if you do the [disfmarker] The best is to do the smoo smoothing as early as possible. [speaker005:] Uh huh. [speaker007:] So w when you start up. I mean, you start up with the [disfmarker] with the [disfmarker] somehow with the noisy envelope. [speaker005:] Mm hmm. [speaker007:] And, best is to smooth this somehow. [speaker005:] Mm hmm. Uh, yeah, I could try this. Um. [speaker007:] And [disfmarker] [speaker002:] So, before estimating the SNR, [@ @] smooth the envelope. [speaker007:] Yeah. Yeah. Uh huh. [speaker005:] Mm hmm. But [disfmarker] Yeah. Then I [disfmarker] I would need to find a way to like smooth less also when there is high energy. Cuz I noticed that it [disfmarker] it helps a little bit to s like smooth more during low energy portions and less during speech, [speaker007:] Yes, y [speaker005:] because if you smooth then y you kind of distort the speech. [speaker007:] Yeah. Yeah. Right. [speaker005:] Um. Mm hmm. [speaker007:] Yeah, I think when w you [disfmarker] you could do it in this way that you say, if you [disfmarker] if I'm [disfmarker] you have somehow a noise estimate, [speaker005:] Mm hmm. [speaker007:] and, if you say I'm [disfmarker] I'm [disfmarker] with my envelope I'm close to this noise estimate, [speaker005:] Yeah. [speaker007:] then you have a bad signal to noise ratio and then you [disfmarker] you would like to have a stronger smoothing. [speaker005:] Mm hmm. [speaker007:] So you could [disfmarker] you could base it on your estimation of the signal to noise ratio on your actual [disfmarker] [speaker005:] Mm hmm. Mm hmm. Mmm. [speaker002:] Yeah, or some silence probability from the VAD if you have [disfmarker] [speaker005:] Um, yeah, but I don't trust [vocalsound] the current VAD. So. [speaker002:] Yeah, uh, so not [disfmarker] not right now maybe. [speaker005:] Well, maybe. Maybe. [speaker004:] The VAD later will be much better. Yeah. So. I see. 
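The nonlinear smoothing Stephane describes — use a smoothed gain trajectory when the gain is low, pass the raw gain through when it is high — could be sketched as a first-order recursive filter with a switch. The breakpoint and smoothing coefficient are invented values, and the causal recursion shown here ignores the 40 ms look-ahead of the actual filter.

```python
import numpy as np

def smooth_gain(gains, low_gain=0.3, beta=0.9):
    """Recursive smoothing applied only in the low-gain branch.

    gains    : per-frame subtraction gains in [0, 1]
    low_gain : breakpoint below which the smoothed value is used (assumed)
    beta     : smoothing coefficient for the low-gain branch (assumed)
    """
    out = np.empty(len(gains))
    state = gains[0]
    for i, g in enumerate(gains):
        state = beta * state + (1.0 - beta) * g   # smoothed trajectory
        out[i] = state if g < low_gain else g     # bypass for high gains
    return out

print(smooth_gain([0.1, 0.1, 0.9, 0.1]))
```

During speech the raw gain passes through, avoiding the speech distortion mentioned above; in low-energy regions the smoothed trajectory damps the frame-to-frame gain fluctuations. Hans's refinement would replace the fixed breakpoint with a decision driven by the local SNR estimate.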
[speaker006:] So is [pause] that it? [speaker005:] Uh, fff [comment] I think that's it. Yeah. Uh. [speaker007:] s So to summarize the performance of these, SpeechDat Car results is similar than [disfmarker] than yours so to say. [speaker002:] Yeah, [speaker005:] Yeah. [speaker002:] so the fifty eight is like the be some fifty six point [disfmarker] [speaker007:] Y you have [disfmarker] you have fifty six point four and [disfmarker] and [disfmarker] [vocalsound] and dependent on this additive constant, it is s better or [disfmarker] or worse. [speaker002:] Yeah, that's true. [speaker005:] Yeah. [speaker002:] Slightly better. [speaker005:] Mm hmm. [speaker002:] Yeah. [speaker007:] Yeah. [speaker005:] Mm hmm. And, [vocalsound] yeah, i i i the condition where it's better than your approach, it's [disfmarker] it [disfmarker] just because maybe it's better on well matched and that the weight on well matched is [disfmarker] is bigger, [speaker002:] Yeah. Yeah, you [disfmarker] you caught up. [speaker005:] because [disfmarker] [speaker002:] Yep, that's true. [speaker005:] if you don't weigh differently the different condition, you can see that your [disfmarker] well, the win the two stage Wiener filtering is maybe better or [disfmarker] It's better for high mismatch, right? [speaker002:] Yeah. Yeah, it's better for high mismatch. [speaker005:] Mm hmm. But a little bit worse for well matched. [speaker002:] So over all it gets, yeah, worse for the well matched condition, so y [speaker005:] Uh huh. [speaker006:] So we need to combine these two. [speaker002:] Uh, that's [disfmarker] that's the best thing, is like the French Telecom system is optimized for the well matched condition. [speaker005:] Mm hmm. [speaker002:] They c Yeah. So they know that the weighting is good for the well matched, and so there's [disfmarker] everywhere the well matched's s s performance is very good for the French Telecom. [speaker005:] Yeah. [speaker007:] Mm hmm. [speaker005:] Mm hmm. 
[speaker002:] T we are [disfmarker] we may also have to do something similar [@ @]. [speaker005:] Mm hmm. [speaker004:] Well, our tradition here has always been to focus on the mismatched. [speaker002:] Um the [disfmarker] [speaker004:] Cuz it's more interesting. [speaker007:] Mu my [disfmarker] mine was it too, I mean. [speaker004:] Yeah. [speaker007:] Before I started working on this Aurora. so. [speaker004:] Yeah. Yeah. Yeah. OK. [speaker006:] Carmen? Do you, uh [disfmarker] [speaker008:] Well, I only say that the [disfmarker] this is, a summary of the [disfmarker] of all the VTS experiments and say that the result in the last [comment] um, for Italian [disfmarker] the last experiment for Italian, [vocalsound] are bad. I make a mistake when I write. Up at D I copy [vocalsound] one of the bad result. [speaker002:] So you [disfmarker] [speaker008:] And [disfmarker] There. [vocalsound] You know, this. Um, well. If we put everything, we improve a lot u the spectral use of the VTS but the final result [vocalsound] are not still mmm, good [vocalsound] like the Wiener filter for example. I don't know. Maybe it's [disfmarker] [@ @] [comment] it's possible to [disfmarker] to have the same result. [speaker002:] That's somewhere [disfmarker] [speaker008:] I don't know exactly. Mmm. Because I have, [vocalsound] mmm, [comment] worse result in medium mismatch and high mismatch. [speaker002:] You s you have a better r Yeah. You have some results that are good for the high mismatch. [speaker008:] And [disfmarker] Yeah. I someti are more or less similar but [disfmarker] but are worse. And still I don't have the result for TI digits. The program is training. Maybe for this weekend I will have result TI digits and I can complete that s like this. Well. [speaker004:] Uh. Right. [speaker008:] One thing that I [comment] note are not here in this result [vocalsound] but are speak [disfmarker] are spoken before with Sunil I [disfmarker] I improve my result using clean LDA filter. 
[speaker002:] Mm hmm. [speaker004:] Mm hmm. [speaker008:] If I use, [vocalsound] eh, the LDA filter that are training with the noisy speech, [vocalsound] that hurts the res my results. [speaker004:] So what are these numbers here? Are these with the clean or with the noisy? [speaker008:] This is with the clean. [speaker004:] OK. [speaker008:] With the noise I have worse result, that if I doesn't use it. [speaker004:] Uh huh. [speaker008:] But m that may be because [vocalsound] with this technique [vocalsound] we are using really [disfmarker] really clean speech. The speech [disfmarker] the [comment] representation that go to the HTK is really clean speech because it's from the dictionary, the code book and maybe from that. I don't know. [speaker005:] Mm hmm. [speaker008:] Because I think that you [disfmarker] did some experiments using the two [disfmarker] the two LDA filter, clean and noi and noise, [speaker005:] It's [disfmarker] [speaker008:] and it doesn't matter too much. [speaker005:] Um, yeah, I did that but it doesn't matter on SpeechDat Car, but, it matters, uh, a lot on TI digits. [speaker008:] It's better to use clean. [speaker002:] Using the clean filter. [speaker005:] Yeah, d uh, it's much better when you [disfmarker] we used the clean derived LDA filter. [speaker008:] Mm hmm. Maybe you can do d also this. [speaker002:] Yeah. [speaker008:] To use clean speech. [speaker005:] Uh, [speaker002:] Yeah, I'll try. [speaker005:] but, yeah, Sunil in [disfmarker] in your result it's [disfmarker] [speaker002:] I [disfmarker] I'll try the cle No, I [disfmarker] I [disfmarker] my result is with the noisy [disfmarker] noisy LDA. [speaker005:] It's with the noisy one. Yeah. [speaker002:] Yeah. [speaker004:] Oh! [speaker002:] It's with the noisy. [speaker005:] So [disfmarker] [speaker002:] Yeah. It's [disfmarker] it's not the clean LDA. 
[speaker004:] Um [disfmarker] [speaker002:] It's [disfmarker] In [disfmarker] in the front sheet, I have like [disfmarker] like the summary. Yeah. [speaker004:] And [disfmarker] and your result [comment] is with the [disfmarker] [speaker005:] It's with the clean LDA. [speaker002:] Oh. This is [disfmarker] Your results are all with the clean LDA result? [speaker008:] Yeah, with the clean LDA. [speaker005:] Yeah. [speaker002:] OK. [@ @]. [speaker008:] Is that the reason? [speaker005:] And in your case it's all [disfmarker] all noisy, [speaker002:] All noisy, [speaker005:] yeah. [speaker002:] yeah. [speaker005:] But [disfmarker] [speaker008:] And [disfmarker] [speaker002:] Uh [disfmarker] [speaker005:] Yeah. [speaker002:] Uh [disfmarker] [speaker005:] But I observe my case it's in, uh, uh, at least on SpeechDat Car it doesn't matter but TI digits it's like two or three percent absolute, uh, [comment] better. [speaker002:] On TI digits this matters. Absolute. Uh [disfmarker] [speaker005:] So if [disfmarker] [speaker004:] So you really might wanna try the clean I think. [speaker002:] Yeah, I [disfmarker] I [disfmarker] I will have to look at it. Yeah, that's true. [speaker004:] Yeah. Yeah, that could be sizeable right there. [speaker008:] And this is everything. [speaker007:] Yeah. Maybe you [disfmarker] you are leaving in [disfmarker] in about two weeks Carmen. [speaker004:] OK. [speaker007:] No? [speaker008:] Yeah. [speaker007:] Yeah. So I mean, if [disfmarker] if [disfmarker] if I would put it [disfmarker] put on the head of a project mana manager [disfmarker] I [disfmarker] I [disfmarker] I I would say, uh, um [disfmarker] I mean there is not so much time left now. [speaker004:] Be my guest. 
[speaker007:] I mean, if [disfmarker] [vocalsound] um, what [disfmarker] what I would do is I [disfmarker] I [disfmarker] I would pick [@ @] [comment] the best constellation, which you think, and [vocalsound] c create [disfmarker] create all the results for the whole database that you get to the final number as [disfmarker] as Sunil did it [speaker008:] And prepare at the s [speaker007:] and [vocalsound] um and maybe also to [disfmarker] to write somehow a document where you describe your approach, and what you have done. [speaker008:] Yeah, I was thinking to do that next week. [speaker004:] Yeah. [speaker007:] Yeah. [speaker004:] Yeah, I'll [disfmarker] I'll borrow the head back and [disfmarker] and agree. [speaker008:] Yeah, I wi I [disfmarker] I will do that next week. [speaker004:] Yeah, that's [disfmarker] that's [disfmarker] Right. In fact, actually I g I guess the, uh [disfmarker] the Spanish government, uh, requires that anyway. They want some kind of report from everybody who's in the program. [speaker008:] Mm hmm. [speaker004:] So. And of course I'd [disfmarker] we'd [disfmarker] we'd like to see it too. So, [speaker008:] OK. [speaker004:] yeah. [speaker006:] So, um, what's [disfmarker] Do you think we, uh, should do the digits or skip it? Or what are [disfmarker] what do you think? [speaker004:] Uh, we have them now? [speaker006:] Yeah, got them. [speaker004:] Uh, why don why don't we do it? [speaker006:] OK. [speaker004:] Just [comment] [disfmarker] just take a minute. [speaker008:] I can send yet. [speaker006:] Would you pass those down? [speaker004:] Oh! Sorry. [speaker006:] OK, um, so I guess I'll go ahead. Um, [speaker004:] Seat? [speaker005:] Dave? Is it the channel, or the mike? I don't remember. It's the mike? [speaker004:] Mike? [speaker005:] It's not four. [speaker008:] This is date and time. No. On the channel, channel. [speaker007:] What is this?
[speaker002:] t [speaker006:] OK, if you could just leave, um, your mike on top of your, uh, digit form I can fill in any information that's missing. [speaker007:] OK. [speaker006:] That's uh [disfmarker] I didn't get a chance to fill them out ahead of time. Yeah, we're gonna have to fix that. Uh, let's see, it starts with one here, and then goes around and ends with nine here. [speaker001:] Seven. So I [disfmarker] I'm eight, [speaker006:] So he's eight, [speaker001:] you're seven. [speaker006:] you're seven, [speaker001:] Yeah. [speaker005:] Alright. So. [speaker003:] So are you [disfmarker] Are we going? [speaker005:] It is uh, must be February fifteenth. [speaker006:] Yeah. [speaker003:] Yu I think the date's written in there, yep. And actually if everyone could cross out the R nine next to "Session", and write MR eleven. [speaker005:] Yeah. Yeah. We didn't have a front end meeting today. [speaker003:] And let's remember also to make sure that one's [comment] gets marked as unread, unused. [speaker005:] OK. [speaker001:] MR eleven. [speaker003:] MR eleven. [speaker006:] That sounds like a spy code. [speaker005:] Mmm. OK. So. [speaker003:] There's lots of clicking I'm sure as I'm trying to get this to work [pause] correctly. [speaker005:] Agenda. Any agenda items today? [speaker003:] I wanna talk a little bit about getting [disfmarker] how we're gonna to get people to edit bleeps, parts of the meeting that they don't want to include. What I've done so far, and I wanna get some opinions on, how to [disfmarker] how to finish it up. [speaker005:] OK. [speaker006:] I wanna ask about um, some aud audio monitoring on some of the [pause] um [pause] well some of the equipment. In particular, the [disfmarker] well uh, that's just what I wanna ask. [speaker005:] OK audio monitoring, Jane. 
[speaker006:] Ba based on some of the tran uh [disfmarker] i In listening to [pause] some of these meetings that have already been recorded there are sometimes big spikes on particular things, and in pact [disfmarker] in fact this one I'm talking on is one of [disfmarker] of the ones that showed up in one of the meetings, so I [disfmarker] [speaker003:] Oh really. [speaker006:] Mm hmm. Yeah. [speaker002:] "Spikes", you mean like uh, instantaneous click type spikes, or [disfmarker]? [speaker001:] Spikes? [speaker003:] Clicks. [speaker006:] Yeah. [speaker005:] Hmm. [speaker006:] Yeah. [speaker002:] Huh. [speaker006:] And I don't know what the e electronics is but. Yeah. [speaker003:] Yeah. Well, I think it's [speaker001:] Touching. [speaker003:] uh, it [disfmarker] it could be a number of things. It could be touching and fiddling, [speaker001:] Yeah. [speaker003:] and the other thing is that it could [disfmarker] the fact that it's on a wired mike is suspicious. It might be a connector. [speaker006:] Oh, OK. Well maybe [disfmarker] Then we don't really have to talk about that as an [disfmarker] I [disfmarker] I take that off the agenda. [speaker002:] You could try an experiment and say "OK, I'm about to test for spikes", and then wiggle the thing there, and then go and when they go to transcribe it, it could, ask them to come and get you. [speaker003:] Yeah. Right. [speaker006:] Oh that [disfmarker] [speaker002:] "Come get me when you transcribe this and see if there's spikes." [speaker005:] Um. [speaker006:] Well, OK. [speaker002:] No I'm just [disfmarker] [speaker005:] I mean, were this a professional audio recording, [vocalsound] what we would do [disfmarker] [comment] what you would do is [disfmarker] in testing it is, you would actually do all this wiggling and make sure that [disfmarker] that [disfmarker] that things are not giving that kind of performance. And if they are, then they can't be used. [speaker003:] Right. [speaker005:] So. Um. Let's see. 
I guess [pause] I would like to have a discussion about you know where we are on uh, recording, transcription you know, basically you know where we are on the corpus. [speaker006:] Good. [speaker005:] And then um, the other thing which I would like to talk about which is a real meta quest, I think, deal is, uh, agendas. So maybe I'll [disfmarker] I'll start with that actually. Uh, um. [vocalsound] Andreas brought up the fact that he would kinda like to know, if possible, what we were gonna be talking about because he's sort of peripherally involved to this point, and if there's gonna be a topic about [disfmarker] discussion about something that he uh strongly cares about then he would come and [disfmarker] And I think part of [disfmarker] part of his motivation with this is that he's trying to help us out, in the [disfmarker] because of uh the fact that the meetings are [disfmarker] are tending to become reasonably large now on days when everybody shows up and so, he figures he could help that out by not showing [disfmarker] and [disfmarker] and I'm sure help out his own time. [speaker003:] Mmm. [speaker005:] by not showing up if it's a meeting that he's [disfmarker] he's [disfmarker] So, uh in order [disfmarker] I'd [disfmarker] I think that this is a wish on his part. Uh. It's actually gonna be hard because it seems like a lot of times uh things come up that are unanticipated and [disfmarker] and [disfmarker] But um, we could try anyway, [speaker003:] Right. [speaker005:] uh, do another try at coming up with the agenda uh, at some point before the meeting, uh, say the day before. [speaker003:] Well maybe it would be a good idea for one of us to [pause] like on Wednesday, or Tuesday send out a reminder for people to send in agenda items. [speaker001:] Yeah. [speaker005:] OK. You [disfmarker] you wanna volunteer to do that? [speaker003:] Sure. [speaker005:] OK. Alright so we'll send out agenda request. Uh. 
[speaker003:] Let me [speaker002:] That'll be [disfmarker] I think that'll help [disfmarker] [speaker003:] I'll put that on my spare brain or it will not [pause] get done. [speaker002:] That'll help a lot, actually. [speaker005:] Yeah, I have to tell you for the uh [disfmarker] for the admin meeting that we have, Lila does that um every time before an admin meeting. And uh, she ends up getting the agenda requests uh, uh ten minutes before the meeting. But [disfmarker] but [disfmarker] [comment] But. Uh. [comment] But we can try. Maybe it'll work. [speaker003:] Yeah. [speaker002:] Mmm. [speaker003:] Maybe. [speaker005:] Yeah. [speaker003:] Weirder things have happened. [speaker006:] I'm wondering if he were to just, uh, specify particular topics, I mean. Maybe we'd be able to meet that request of his a little more. [speaker002:] I would [disfmarker] I would also guess that as we get more into processing the data and things like that there'll be more things of interest to him. [speaker003:] Well then [disfmarker] [speaker005:] Yeah. Actually it [disfmarker] This [disfmarker] this maybe brings up another topic which is um [disfmarker] So we're done with that topic. The other topic I was thinking of was the sta status on microphones and channels, and all that. [speaker003:] Yeah, actually I [disfmarker] I was going to say we need to [pause] talk about that too. [speaker005:] Yeah. Why [disfmarker] why don't we do that. [speaker003:] OK. Um, the new microphones, the two new ones are in. Um. [pause] And they are being assembled as we speak, I hope. And I didn't bring my car today so I'm gonna pick them up tomorrow. Um, and then the other question I was thinking about is [disfmarker] well, a couple things. First of all, if the other headsets are a lot more comfortable, we should probably just go ahead and get them. So we'll have to evaluate that when they come in, and get people's opinions on [disfmarker] on what they think of them. 
Um, then the other question I had is maybe we should get another wireless. Another wireless setup. I mean it's expensive, but it does seem to be [pause] better than the wired. [speaker005:] So how many channels do you get to have in a wireless setup? [speaker003:] Um, well, I'm pretty sure that you can daisy chain them together so what we would do is replace the wired mikes with wireless. So we currently have one base station with six wireless mike, possibility of six wireless receivers, and apparently you can chain those together. And so we could replace our wired mikes with wireless if we bought another base station and more wireless mikes. [speaker005:] So, um. [speaker003:] And [disfmarker] [speaker005:] So let's see we [disfmarker] [speaker003:] So, you know it's still, it's fifteen minus six. Right? So we could have up to nine. [speaker005:] And right now we can have up to six. [speaker003:] Right. And we have five, we're getting one more. [speaker005:] Yeah. [speaker003:] And it's um, about nine hundred dollars for the base station, and then eight hundred per channel. [speaker005:] Oh. So yeah so the only [disfmarker] Beyond the mike [disfmarker] the cost of the mikes the only thing is the base station that's nine hundred dollars. [speaker003:] Right. [speaker005:] Oh, we should do it. [speaker003:] OK. [speaker005:] Yeah. [speaker003:] OK, so I'll look into how you daisy chain them and [disfmarker] and then just go ahead and order them. [speaker005:] Yeah. [speaker002:] I don't quite understand how that [disfmarker] how that works,. If [disfmarker] So we're not increasing the number of channels. [speaker003:] No, we're just replacing the wired [disfmarker] the two wired that are still working, [speaker002:] OK. OK. I see. [speaker003:] along with a couple of the wired that aren't working, one of the wired that's not working, with a wireless. [speaker005:] Yeah. Basically we found [disfmarker] [speaker002:] Three wireds work, right? 
[speaker003:] I [disfmarker] I guess three wireds work, yeah. [speaker002:] Yeah. Yeah. [speaker005:] Yeah. But we've had more problems with that. [speaker003:] Yep. [speaker005:] And that sort of bypasses the whole [disfmarker] the whole Jimbox thing and all that. [speaker003:] Right. [speaker005:] And so um, we [disfmarker] we seem to have uh, a reliable way of getting the data in, which is through the ra Sony radio mikes, as long as we're conscious about the batteries. That seems to be the key issue. [speaker003:] Right. Everyone's battery OK? [speaker002:] I checked them this morning, they should be. [speaker003:] OK. [speaker005:] Yeah. Um, [vocalsound] That's the only thing with them. But the quality seems really good and [disfmarker] Um I heard from UW that they're [disfmarker] they're uh very close to getting their, uh setup purchased. They're [disfmarker] they're [disfmarker] they're buying something that you can just sort of buy off the shelf. [speaker003:] Well we should talk to them about it because I know that SRI is also in the process of looking at stuff, and so, you know, what we should try to keep everyone [disfmarker] on the same page with that. [speaker005:] Yeah. [speaker002:] SRI, really? [speaker003:] Yeah. [speaker002:] Oh. [speaker003:] They got sa apparent Well, Maybe [pause] this needs to be bleeped out? I have no clue. [speaker005:] Uh, I don't know. [speaker003:] I don't know how much of it's public. [speaker005:] Probably we shouldn't [disfmarker] probably we shouldn't talk about funding stuff. [speaker003:] Right. [speaker005:] Yeah. But anyway there's [disfmarker] there's [disfmarker] there's uh, uh other activities that are going on there and [disfmarker] and uh [disfmarker] and NIST and UW. So. Um. But [disfmarker] but yeah I thin I think that at least the message we can tell other people is that our experience is [disfmarker] is quite positive with the Sony, uh, radio mikes. [speaker003:] Right. 
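The wireless-expansion arithmetic a little further up can be written out as a quick sketch; the $900 base-station and $800-per-channel figures are the ones quoted in the meeting, and the function name is ours:

```python
# Figures as quoted in the meeting: a second base station is about
# $900, and each additional wireless channel about $800.
BASE_STATION_COST = 900
PER_CHANNEL_COST = 800

def expansion_cost(extra_channels):
    """Cost of one more base station plus that many wireless mikes."""
    return BASE_STATION_COST + PER_CHANNEL_COST * extra_channels

# "Fifteen minus six": 15 recordable channels total, minus the 6 on
# the existing base station, leaves up to 9 for a daisy-chained one.
MAX_EXTRA_CHANNELS = 15 - 6

print(expansion_cost(3))  # replacing three wired mikes -> 3300
```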
[speaker005:] Now the one thing that you have said that actually concerns me a little is you're talking about changing the headsets meaning changing the connector, which means some hand soldering or something, right? [speaker003:] Uh, no, [speaker005:] No? [speaker003:] we're having the [disfmarker] them do it. So it's so hand soldering it, but I'm not doing it. [speaker005:] Oh. OK. [speaker003:] So, they [disfmarker] they charge [speaker005:] Nothing against you and your hand soldering [speaker003:] right. [speaker005:] but [disfmarker] [speaker003:] You've never seen my hand soldering. But uh, a as I said they're coming in. [speaker005:] Uh, OK, so that's being done professionally and [disfmarker] Yeah. [speaker003:] I [disfmarker] I mean [disfmarker] Yeah. I mean. [speaker005:] Yeah. [speaker003:] As professionally as I guess you can get it done. [speaker005:] Well, it could [disfmarker] if they do a lot of it, it's [disfmarker] [speaker003:] I mean i it's just their repair shop. Right? Their maintenance people. [speaker005:] Well, we'll see what it [disfmarker] it's like. [speaker003:] Yep. [speaker005:] That [disfmarker] tha that can be quite good. Th this [disfmarker] Yeah, OK. Good. Yeah. So let's go with that. Uh, [speaker003:] And, I mean we'll see, tomorrow, you know, what it looks like. [speaker005:] Yeah. So, um, uh, Dave isn't here but he was going to start working on some things with the digits. Uh, so he'll be interested in what's going on with that. I guess [disfmarker] Was [disfmarker] the decision last time was that the [disfmarker] the uh transcribers were going to be doing stuff with the digits as well? [speaker006:] Mm hmm. [speaker005:] Has that started, or is that [disfmarker]? [speaker006:] Yeah. Uh, it would be to use his interface and I was going to meet with him today about that. [speaker003:] Right, so, the decision was that Jane did not want the transcribers to be doing any of the paperwork. 
So I did the [disfmarker] all that last week. So all the [disfmarker] all the forms are now [pause] on the computer. And uh, then I have a bunch of scripts that we'll read those and let the uh [pause] transcribers use different tools. And I just want to talk to Jane about how we transition to using those. [speaker006:] Mm hmm. So he has a nice set up that they [disfmarker] it w it will be efficient for them to do that. [speaker005:] OK. So anyway [disfmarker] [speaker003:] I [disfmarker] I don't think it'll take too long. So, you know, just uh, a matter of a few days I suspect. [speaker005:] So anyway I think we [disfmarker] we have at least one uh, user for the digits once they get done, which will be Dave. [speaker003:] Right. I've already done five or six [pause] sets. [speaker006:] OK. [speaker003:] So if he wanted to, you know, just have a few to start with, he could. [speaker005:] Yeah, he might [disfmarker] he might be asking [disfmarker] [speaker003:] You know, and I also have a bunch of scripts that will, like, generate P files and run recognition on them also. [speaker005:] Right. OK. Uh, is Dave [disfmarker] I don't know if Dave is on the list, if he's invited to these meetings, uh [speaker006:] I don't tend to get an invitation myself for them even. [speaker005:] if he knows. [speaker001:] No, no. [speaker003:] Uh, we don't have a active one but I'll make sure he's on the list. [speaker006:] Yeah. Should we call him? I mean is he [disfmarker] d is he definitely not available today? [speaker005:] I don't know. [speaker006:] Should I call his office and see? [speaker005:] Uh, well i it's uh [disfmarker] [speaker001:] He was in. [speaker003:] I mean, he's still taking classes, so uh, he may well have conflicts. [speaker001:] Yeah. [speaker005:] Yeah. [speaker001:] Yeah, he was in [pause] s [speaker006:] He wasn't there at cof [speaker005:] Yeah, so this might be a conflict for him. [speaker001:] Yeah. [speaker006:] OK. [speaker005:] Yeah. OK. Uh, so. 
[speaker003:] Yeah didn't he say his signal processing class was like [pause] Tuesdays and Thursdays? [speaker001:] I think he has a class. Yeah. [speaker002:] Yeah. He might have. [speaker005:] Oh, OK. [speaker004:] You talking about David Gelbart? [speaker003:] Oh well, whatever. [speaker005:] Yeah. [speaker002:] Yeah. [speaker003:] Yeah. [speaker006:] Yes. [speaker001:] Yeah. [speaker004:] Yeah, I think he's taking two twenty five A which is now. [speaker005:] Yeah. [speaker006:] OK. [speaker004:] So. [speaker002:] Yeah. [speaker005:] OK. So, that's why we're not seeing him. OK. Uh, transcriptions, uh, beyond the digits, where we are, and so on. [speaker006:] OK. [speaker005:] And the [disfmarker] and the recordings also, [speaker006:] Um [disfmarker] [speaker005:] just where we are. Yeah. [speaker006:] Well, so um, should we [disfmarker] we don't wan wanna do the recording status first, or [disfmarker]? [speaker003:] Well, we have about thirty two hours uh as of, I guess a week and a half ago, so we probably now have about thirty five hours. [speaker005:] And [disfmarker] and that's [disfmarker] that's uh [disfmarker] How much of that is digits? It's uh [disfmarker] that's including digits, [speaker003:] That's including digits. [speaker005:] right? So [disfmarker] [speaker003:] I haven't separated it out so I have no clue how much of that is digits. [speaker005:] Yeah. So anyway there's at least probably thirty hours, or something of [disfmarker] There's got to be more than thirty hour [disfmarker] i it couldn't [disfmarker] [speaker003:] Of [disfmarker] of non digits? [speaker005:] of [disfmarker] Of non digits. [speaker001:] Mmm. [speaker003:] Yeah, absolutely. [speaker005:] Yeah, yeah. [speaker003:] I mean, the digits don't take up that much time. [speaker005:] OK. [speaker006:] OK, and the transcribers h I, uh, don't have the exact numbers, but I [pause] think it would come to about eleven hours that are finished uh, transcribing from them right now. 
The next step is to [disfmarker] that I'm working on is to ensure that the data are clean first, and then channelized. What I mean by clean is that they're spell checked, that the mark up is consistent all the way throughout, and also that we now incorporate these additional conventions that uh, Liz requested in terms of um, um [pause] in terms of having a s a systematic handling of numbers, and acronyms which I hadn't been specific about. Um, for example, i they'll say uh "ninety two". And you know, so how [disfmarker] you could [disfmarker] [speaker003:] Nine two, [speaker006:] e Exactly. [speaker003:] right. [speaker006:] So if you just say "nine two", the [disfmarker] there are many s ways that could have been expressed. An and I just had them [disfmarker] I [disfmarker] I mean, a certain number of them did put the words down, but now we have a convention which also involves having it followed by, um, a gloss th and things. [speaker002:] You know, Jane? [speaker003:] Mm hmm. [speaker002:] Um, one suggestion and you may already be doing this, but I've noticed in the past that when I've gone through transcriptions and you know in [disfmarker] in order to build lexicons and things, if you um, just take all the transcriptions and separate them into words and then alphabetize them, [comment] a lot of times just scanning down that list you'll find a lot of [pause] inconsistencies and mis [speaker003:] Misspelled. [speaker001:] Yeah. [speaker006:] You're talking about the type token frequency listings, and I use those too. Y you mean just uh [pause] on each [disfmarker] on each line there's a one word right? It's one token from the [disfmarker] from the corpus. [speaker002:] Mm hmm. Mm hmm. [speaker006:] Yeah, those are e extremely efficient and I and I [disfmarker] I agree that's a very good use of it. [speaker002:] Oh so you already have that, OK.
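The alphabetized type/token listing being discussed could be sketched minimally like this; the splitting and sorting details here are ours, not the transcribers' actual tool:

```python
from collections import Counter

def type_token_listing(text):
    """Alphabetized type/token listing: one word type per line with its
    frequency, so variant spellings sort next to each other and stand
    out on a visual scan."""
    counts = Counter(text.split())
    return [(word, counts[word]) for word in sorted(counts, key=str.lower)]

# Variants like "e-mail" / "email" land adjacent in the listing.
listing = type_token_listing("email e-mail email downsampled Downsampled")
```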
[speaker006:] Well that's [disfmarker] that's a way [disfmarker] that's [disfmarker] You know, the spell check basically does that but [disfmarker] but in addition [disfmarker] yes, that's [disfmarker] that's exactly the strategy I wanna do in terms of locating these things which are you know colloquial spoken forms which aren't in the lexicon. [speaker002:] Mm hmm. [speaker006:] Exactly. [speaker002:] Cuz a lot of times they'll appear next to each other, and uh, [speaker006:] And then you ca then you can do a s [speaker001:] Yeah. [speaker002:] i in alphabetized lists, they'll appear next to each other and [disfmarker] and so it makes it easier. [speaker006:] Absolutely. I agree. That's a very good [disfmarker] that's a very good uh, suggestion. And that was [disfmarker] that's my strategy for handling a lot of these things, in terms of things that need to be glossed. I didn't get to that point but [disfmarker] So there are numbers, then there are acronyms, and then um, there's a [disfmarker] he she wants the uh, actually a [disfmarker] an explicit marker of what type of comment this is, so i curly b inside the curly brackets I'm gonna put either "VOC" for vocalized, like cough or like laugh or whatever, "NONVOC" for door slam, and "GLOSS" for things that have to do with [disfmarker] if they said a s a spoken form with this [disfmarker] m this pronunciation error. [speaker003:] Right. [speaker006:] I already had that convention but I [disfmarker] I haven't been asking these people to do it systematically [speaker002:] Oh that's great. [speaker006:] cuz I think it most [disfmarker] ha most efficiently handled by uh [disfmarker] by a [disfmarker] a filter. That was what I was always planning on. So that, you know you get a whole long list [disfmarker] exactly what you're saying, you get a whole list of things that say "curly bracket laugh curly bracket", [speaker002:] Mm hmm.
[speaker006:] then y you know it's [disfmarker] it's [disfmarker] You [disfmarker] you risk less error if you handle it by a filter, than if you have this transcriber ch laboriously typing in sort of a VOC space, [speaker002:] Yeah. [speaker006:] so man So many ways that error prone. [speaker002:] Right. Right. [speaker006:] So, um, [vocalsound] um I'm [disfmarker] I'm going to convert that via a filter, into these tagged uh, subcategorized comments, and same thing with you know, we see you get a subset when you do what you're saying, you end up with a s with uh, you're collapsing across a frequency you just have the tokens [speaker002:] Mm hmm. [speaker006:] and you can um, have a filter which more efficiently makes those changes. [speaker002:] Mm hmm. [speaker006:] But the numbers and acronyms have to be handled by hand, because, you know I mean, jus [speaker003:] You don't know what they could be. [speaker006:] Yeah now TIMIT's clear um [pause] and PLP is clear [speaker002:] Yeah. [speaker006:] but uh there are things that are not so well known, in [disfmarker] or [disfmarker] or have variant [disfmarker] u u uses like the numbers you can say "nine two" or you can say "ninety two", and uh [speaker003:] So how are you doing the [disfmarker] [speaker006:] I'd handle the numbers individually. [speaker003:] How are you doing the uh, acronyms so if I say PZM what would it appear on the transcript? [speaker006:] It would be separate [disfmarker] The letters would be separated in space [speaker003:] OK. [speaker006:] and potentially they'll have a curly bracket thing afterwards e but I'm not sure if that's necessary, clarifying what it is, [speaker003:] Mm hmm. [speaker006:] so gloss of [pause] whatever. [speaker003:] Right. [speaker006:] I don't know if that's really necessary to do that. 
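The batch filter Jane describes, turning plain curly-bracket comments into the tagged VOC/NONVOC/GLOSS form, might look something like this; the event-to-tag table here is a small illustrative subset, not the full convention:

```python
import re

# Illustrative subset only: VOC for vocalized sounds (cough, laugh),
# NONVOC for non-vocal events like a door slam; GLOSS cases need more
# context and are left to the transcriber.
EVENT_TAGS = {
    "laugh": "VOC",
    "cough": "VOC",
    "door slam": "NONVOC",
}

def tag_comments(line):
    """Rewrite plain {laugh}-style comments as subcategorized
    {VOC laugh} comments, as a batch filter rather than having each
    transcriber laboriously type the tag by hand."""
    def repl(match):
        body = match.group(1).strip()
        tag = EVENT_TAGS.get(body.lower())
        return "{%s %s}" % (tag, body) if tag else match.group(0)
    return re.sub(r"\{([^{}]+)\}", repl, line)

tag_comments("so {laugh} that was it {door slam}")
# -> "so {VOC laugh} that was it {NONVOC door slam}"
```

Unrecognized comments pass through unchanged, so the filter can be rerun safely as the tag table grows.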
Maybe it's a nice thing to do because of it then indicating this is uh, a step away from i indicating that it really is intentional that those spaces are there, and indicating why they're there to indicate that it's uh [vocalsound] the you know, [comment] uh enumerated, or i [speaker003:] Mm hmm. [speaker006:] it's not a good way of saying [disfmarker] but it's [disfmarker] it's the [vocalsound] specific uh way of stating these [disfmarker] these letters. [speaker003:] Right. So it sounds good. [speaker006:] And so anyway, the clean [disfmarker] those are those things and then channelized is to then um, get it into this multichannel format. And at that point then it's ready for use by Liz and Don. But that's been my top priority [disfmarker] beyond getting it tanel channelized, the next step is to work on tightening up the boundaries of the time bins. [speaker001:] Yeah. [speaker003:] Right. [speaker006:] And uh, Thilo had a [disfmarker] e e a breakthrough with this [disfmarker] this last week in terms of getting the channel based um uh s s speech nonspeech segmentation um, up and running and I haven't [disfmarker] I haven't been able to use that yet cuz I'm working s re this is my top priority [disfmarker] get the data clean, and channelized. [speaker001:] I actually gave [speaker003:] Have you also been doing spot checks, Jane? [speaker006:] Oh yes. [speaker003:] OK, good. [speaker006:] Well you see that's part of the cleaning process. I spent um actually um I have a segment of ten minutes that was transcribed by two of our transcribers, [speaker003:] Oh good. Good. [speaker006:] and I went through it last night, it's [disfmarker] it's almost spooky how similar these are, word for word. And there are some differences in commas cuz commas I [disfmarker] I left them discretion at commas. [speaker003:] Right. [speaker006:] Uh [disfmarker] and so because it's not part of our st of our ne needed conventions. [speaker005:] Mm hmm.
[speaker006:] And um, and [disfmarker] so they'll be a difference in commas, but it's word by word the same, in [disfmarker] in huge patches of the data. And I have t ten minute stretch where I can [disfmarker] where I can show that. And [disfmarker] and sometimes it turns out that one of these transcribers has a better ear for technical jargon, and the other one has a better ear for colloquial speech. So um, the one i i the colloquial speech person picked up "gobbledy gook". [speaker002:] Hmm. [speaker006:] And the other one didn't. And on this side, this one's picking up things like "neural nets" and the one that's good on the sp o on th the vocabulary on the uh colloquial didn't. [speaker003:] Right. [speaker002:] When [disfmarker] for the person who missed "gobbledy gook" what did they put? [speaker006:] It was an interesting approximation, put in parentheses, cuz I have this convention that, i if they're not sure what it was, they put it in parentheses. [speaker002:] Oh. [speaker006:] So they tried to approximate it, but it was [disfmarker] [speaker002:] Oh good. [speaker006:] it was spelled GABBL [disfmarker] [speaker002:] Sort of how it sounds. [speaker006:] Yes. More of an attempt to [disfmarker] [speaker002:] Yeah. [speaker006:] I mean apparently it was very clear to her that these [disfmarker] the a this [disfmarker] this was a sound [disfmarker] these are the sounds, [speaker003:] It was a technical term that she didn't recognize, [speaker002:] Yeah. [speaker006:] but [disfmarker] Yeah. But she knew that she didn't know it. Maybe it was a technical ter exactly. But she [disfmarker] even though her technical perception is just really [disfmarker] uh you know I've [disfmarker] I'm tempted to ask her if she's taken any courses in this area or if she's taken cognitive science courses [speaker003:] Right. [speaker006:] then cuz "neural nets" and [disfmarker] oh she has some things that are [disfmarker] oh "downsampled", she got that right. [speaker002:] Hmm. 
[speaker006:] And some of these are rather [pause] uh unexpected. [speaker003:] Obscure, yeah. [speaker006:] But ch ten solid uh [disfmarker] m ch s chunk of ten solid minutes where they both coded the same data. And um [disfmarker] [speaker005:] And [disfmarker] and again the main track that you're working with is elev eleven hours? Is that right? [speaker006:] Yes exactly. [speaker005:] Yeah, OK. [speaker006:] And that's part of this [disfmarker] Eleven hours. [speaker005:] Is that [disfmarker] is that [disfmarker] that including digits? [speaker006:] Yes it is. [speaker005:] Yeah. So let's say roughly [pause] ten hours or so of [disfmarker] [speaker006:] Mm hmm. [speaker005:] I mean it's probably more than that but [disfmarker] but with [disfmarker] of [disfmarker] of non digits. [speaker006:] It'd be more than that [speaker001:] Yeah. [speaker006:] because I [disfmarker] my recollection is the minutes [disfmarker] that da digits don't take more than half a minute. Per person. [speaker005:] Oh, OK. [speaker006:] But um [pause] the [disfmarker] the total set that I gave them is twelve hours of tape, [speaker005:] Oh, I see. [speaker006:] But they haven't gotten to the end of that yet. So they're still working [disfmarker] some of them are [disfmarker] Two of them are still working on completing that. [speaker005:] Oh, I see. [speaker006:] Yeah. [speaker002:] Boy, they're moving right along. [speaker006:] Yeah. They are. [speaker005:] Yeah. [speaker006:] Mm hmm. They're very efficient. There're some who have more hours that they devote to it than others. [speaker005:] Mm hmm. [speaker002:] Mm hmm. [speaker006:] Yeah. [speaker005:] So what [disfmarker] what [disfmarker] what's the deal with [disfmarker] with your [disfmarker] [speaker001:] The channel u thing? [speaker005:] Yeah. 
[speaker001:] Oh, it's just uh, I ran the recognizer [disfmarker] uh, the [pause] [comment] speech nonspeech detector on different channels and, it's just in uh [disfmarker] in this new multi channel format and output, and I just gave one [disfmarker] one meeting to [disfmarker] to Liz who wanted to [disfmarker] to try it for [disfmarker] for the recognizer [speaker005:] Oh, I see. [speaker001:] as uh, apparently the recognizer had problems with those long chunks of speech, which took too much memory or whatever, [speaker005:] Right. Yeah. [speaker001:] and so [pause] she [disfmarker] she will try that I think and [disfmarker] I'm [disfmarker] I'm working on it. So, I hope [disfmarker] [speaker003:] Is this anything different than the HMM system you were using before? [speaker005:] Yeah. [speaker001:] No. Uh, I [pause] mmm, use some [disfmarker] some different features but not [disfmarker] not [disfmarker] [speaker003:] Mm hmm. [speaker001:] The basic thing is this HMM base. [speaker003:] So there's still no [disfmarker] no knowledge using different channels at the same time. You know what I mean? [speaker001:] There is some, uh as the energy is normalized across channels [speaker003:] Across all of them. [speaker001:] yeah. [speaker003:] OK. [speaker001:] So. But basically that's one of the main changes. [speaker003:] Mm hmm. [speaker005:] Mm hmm. What are some of the other features? Besides the energy? You said you're trying some different features, or something. [speaker001:] Oh I just uh [disfmarker] [vocalsound] Mmm, I just use um our loudness based things now as they [disfmarker] before there were [disfmarker] they were some in [disfmarker] in the log domain and I [disfmarker] I changed this to the [disfmarker] to the [disfmarker] [speaker005:] Cu Cube root? [speaker001:] Yeah. To [disfmarker] No, I changed this to the [disfmarker] to the [disfmarker] to the loudness thingy with the [disfmarker] with the [speaker003:] Hmm. [speaker005:] Ah. 
[speaker001:] how do you call it? I'm not sure. With the, uh [disfmarker] [speaker005:] Fletcher Munson? No. [speaker001:] I'm not sure about the term. [speaker005:] Oh, OK. [speaker001:] Uh, I'll look it up. [speaker005:] Yeah, alright. [speaker001:] And say it to you. Uh, OK, and [disfmarker] Yeah. That's [disfmarker] that's basically the [disfmarker] the [disfmarker] the thing. [speaker005:] OK. [speaker001:] Yeah, and I [disfmarker] and I tried t to normalize uh [disfmarker] uh the features, there's loudness and modified loudness, um, within one channel, because they're, [vocalsound] yeah to [disfmarker] to be able to distinguish between foreground and background speech. And it works quite well. But, not always. [speaker005:] Uh huh. Uh huh. [speaker001:] So. [speaker005:] OK. [speaker003:] Good. [speaker005:] Um, let's see. I think the uh [disfmarker] Were [disfmarker] were you basically done with the transcription part? So I guess the next thing is this uh [disfmarker] bleep editing. [speaker003:] Right. So the [disfmarker] The idea is that we need to have [disfmarker] We need to provide the transcripts to every participant of every meeting to give them an opportunity to bleep out sections they don't want. So I've written a bunch of tools that will generate web pages, uh with the transcription in it so that they can click on them and piece [disfmarker] pieces and they can scroll through and read them, and then they can check on each one if they want it excluded. And then, it's a form, HTML form, so they can submit it and it will end up sending me email with the times that they want excluded. And so, uh, some of the questions on this is what do we do about the privacy issue. [speaker005:] Yeah. [speaker003:] And so I thought about this a little bit and I think the best way to do it is every participant will have a password, a single password. Each person will have a single password, user name and password. 
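A minimal sketch of the review page described here, one checkbox per transcript segment, with checked spans reported back as times to exclude; the field names and layout are invented for illustration and are not the actual tool's:

```python
def bleep_form_html(segments):
    """Build a very rough per-meeting review form: one checkbox per
    transcript segment (start time, end time, text).  Checked boxes
    mark the time spans the participant wants bleeped out; the form
    submission would then be forwarded by email."""
    rows = []
    for start, end, text in segments:
        rows.append(
            '<label><input type="checkbox" name="exclude" '
            'value="%s-%s"> [%s-%s] %s</label><br>' % (start, end, start, end, text)
        )
    return ('<form method="post" action="submit_bleeps">\n'
            + "\n".join(rows)
            + '\n<input type="submit" value="Submit"></form>')

html = bleep_form_html([("0.0", "4.2", "OK, we're recording.")])
```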
And then each meeting, we'll only allow the participants who were at that meeting to look at it. And that way each person only has to remember one password. [speaker005:] I [disfmarker] I can't help but wonder if this is maybe a little more elaborate than is needed. I mean if people have [disfmarker] Uh, I mean, for me I would actually want to have some pieces of paper that had the transcription and I would sort of flip through it. And then [pause] um [pause] if I thought it was OK, I'd say "it's OK". [speaker003:] Mm hmm. [speaker005:] And, I [disfmarker] uh [disfmarker] I mean it depends how this really ends up working out, but I guess my thought was that the occasion of somebody wondering whether something was OK or not and needing to listen to it was gonna be extremely rare. [speaker003:] Right, I mean so th th th the fact that you could listen to it over the web is a minor thing that I had already done for [pause] other reasons. [speaker005:] OK. [speaker003:] And so that [disfmarker] that's a minor part of it, I just wanted some web interface so that people [disfmarker] you didn't actually have to send everyone the text. So m what my intention to do is that as the transcripts become ready, um [pause] I would take them, and generate the web pages and send email to every participant or contact them using the contact method they wanted, and just uh, tell them, "here's the web page", um, "you need a password". So th th question number one is how do we distribute the passwords, and question number two is how else do we wanna provide this information if they want it. [speaker005:] That's [disfmarker] I think what I was sort of saying is that if you just say [vocalsound] "here is a [disfmarker] here is [disfmarker]" I mean this maybe it sounds paleolithic but [disfmarker] but I just thought if you handed them some sheets of paper, that said, uh, "here's what was said in this transcription is it OK with you? 
and if it is, here's this other sheet of paper that you sign that says that it's OK". [speaker003:] I think that um there are a subset of people who will want printouts that we can certainly provide. [speaker005:] And then they'd hand it back to you. [speaker003:] But certainly I wouldn't want a printout. These are big, and I would much rather be [pause] ha be able to just sit and leaf through it. [speaker005:] You find it easier to go through a large [disfmarker] I mean how do you read books? [speaker003:] Well I certainly read books by hand. But for something like this, I think it's easier to do it on the web. [speaker005:] Really? I mean, it [disfmarker] [speaker003:] Cuz you're gonna get, you know, if I [disfmarker] I'm [disfmarker] I'm in a bunch of meetings and I don't wanna get a stack of these. I wanna just be able to go to [disfmarker] go to the web site [comment] and visit it as I want. [speaker005:] Going to a web site is easy, but flipping through a hundred pounds [disfmarker] a hundred pages of stuff is not easy on the web. [speaker003:] Well, I don't think it's that much harder than, paper. [speaker006:] I have one question. [speaker005:] Really? [speaker003:] So. [speaker006:] So are you thinking that um the person would have a transcript and go strictly from the transcript? Because I [disfmarker] I do think that there's a benefit to being able to hear the tone of voice and the [disfmarker] [speaker005:] So here's the way I was imagining it, and maybe I'm wrong, [speaker006:] Yeah. [speaker005:] but the way I imagined it was that um, the largest set of people is gonna go "oh yeah, I didn't say anything funny in that meeting just go ahead, where's the [disfmarker] where's the release?" And then there'll be a subset of people, right? [disfmarker] OK there's [disfmarker] I mean think of who it is we've been recording mostly. OK there'll be a subset of people, who um, will say uh "well, yeah, I really would like to see that." [speaker001:] Yeah. 
[speaker005:] And for them, the easiest way to flip through, if it's a really large document, I mean unless you're searching. Searching, of course, should be electronic, but if you're not [disfmarker] so if you provide some search mechanism you go to every place they said something or something like that, [speaker001:] Yeah. [speaker005:] but see then we're getting more elaborate with this thing. Um if [disfmarker] if uh you don't have search mechanisms you just sort of have this really, really long document, I mean whenever I've had a really, really long document that it was sitting on the web, I've always ended up printing it out. I mean, so it's [disfmarker] it's [disfmarker] I mean, you [disfmarker] you're [disfmarker] you're not necessarily gonna be sitting at the desk all the time, you wanna figure you have a train ride, and there's all these situations where [disfmarker] where I [disfmarker] I mean, this is how I was imagining it, anyway. And then I figured, that out of that group, there would be a subset who would go "hmm you know I'm really not sure about this section here," and then that group would need it [disfmarker] S It seems like i if I'm right in that, it seems like you're setting it up for the most infrequent case, rather than for the most frequent case. So that uh, now we have to worry about privacy, [speaker003:] Well, no fre for the most [disfmarker] [speaker005:] we have to worry about all these passwords, for different people [speaker003:] For the most frequent case they just say [pause] "it's OK" and then they're done. And I think [pause] almost everyone would rather do that by email than any other method. [speaker005:] Mm hmm. [speaker006:] The other thing too is it seems like [disfmarker] [speaker005:] Um, yeah, that's true. [speaker006:] Go ahead. [speaker003:] I mean, cuz you don't have to visit the web page if you don't want to. [speaker001:] Yeah. [speaker005:] I guess [disfmarker] Yeah, I guess we don't need their signature. 
I guess an email OK is alright. [speaker003:] Oh that was another thing I [disfmarker] I had assumed that we didn't need their signature, that it [disfmarker] that an email approval was sufficient. But [pause] I don't actually know. [speaker002:] Are [disfmarker] are people going to be allowed to bleep out sections of a meeting where they weren't speaking? [speaker003:] Yes. [speaker006:] I also [disfmarker] [speaker003:] If someone feels strongly enough about it, then I [disfmarker] I [disfmarker] I think they should be allowed to do that. [speaker006:] mm hmm. [speaker002:] So that means other people are editing what you say? [speaker005:] Uh [pause] I don't know about that. [speaker003:] Yeah. [speaker002:] I don't know if I like that. [speaker003:] Well, the only other choice is that the person would say "no, don't distribute this meeting at all", and I would rather they were able to edit out other people then just say "don't distribute it at all". [speaker005:] But th what they signed in the consent form, was something that said you can use my voice. [speaker003:] Well, but if [disfmarker] if someone is having a conversation, and you only bleep out one side of it, that's not sufficient. [speaker005:] Right? Yeah. Yeah, but that's our decision then. Right? [speaker003:] Um, I don't think so. [speaker005:] I think it is. [speaker003:] I mean, because if I object to the conversation. If I say "we were having a conversation, and I consider that conversation private," and I consider that your side of it is enough for other people to infer, I wanna be able to bleep out your side. [speaker006:] The [disfmarker] I agree that the consent forms were [disfmarker] uh, I cons agree with what Adam's saying, that [vocalsound] um, the consent form did leave open this possibility that they could edit things which they found offensive whe whether they said them or didn't say them. [speaker005:] I see. OK, well, if that's what it said. 
[speaker006:] And the other thing is from the standpoint of the l of the l I'm not a law lawyer, but it strikes me that [vocalsound] uh, we wouldn't want someone to say "oh yes, I was a little concerned about it but [vocalsound] it was too hard to access". So I think it's kind of nice to have this facility to listen to it. Now [disfmarker] in terms of like editing it by hand, I mean I think it's [disfmarker] i some people would find that easier to specify the bleep part by having a document they edited. But [disfmarker] but it seems to me that sometimes um, you know i if a person had a bad day, and they had a tone in their voice that they didn't really like, you know it's nice [disfmarker] it's nice to be able to listen to it and be sure that that was OK. [speaker003:] I mean I can certainly provide a printable version if people want it. [speaker005:] Um [pause] I mean it's also a mixture of people, [speaker003:] Um. [speaker005:] I mean some people are r do their work primarily by sitting at the computer, flipping around the web, and others do not. [speaker003:] Yep. [speaker005:] Others would consider it [disfmarker] this uh [disfmarker] a [disfmarker] a set of skills that they would have to gain. You know? [speaker003:] Well I think most of the people in the meetings are the former. [speaker005:] It depends on what meetings. [speaker006:] That's true. [speaker002:] So far. [speaker003:] So. [speaker005:] In the meetings so far, yeah. [speaker003:] Yep. [speaker005:] But we're trying to expand this, right? [speaker003:] Right. [speaker005:] So I [disfmarker] I [disfmarker] I actually think that paper is the more universal thing. [speaker003:] And that [disfmarker] [speaker006:] Mm hmm. [speaker003:] Well, but if they want to print it out that's alright. [speaker006:] Yeah. [speaker003:] I think everyone in the meeting can access the web. [speaker005:] No, I think we have to be able to print it out. It's not just if they want to print it out. 
[speaker003:] OK, so does that mean that I can't use email? [speaker005:] I [disfmarker] I think [disfmarker] [speaker003:] Or what? [speaker006:] Cuz you could send it through email you're thinking. [speaker005:] I [disfmarker] I th well [disfmarker] we [disfmarker] [speaker003:] Well, I don't think I [disfmarker] [speaker005:] there was this [disfmarker] [speaker003:] well I don't think we can send the text through email because of the privacy issues. [speaker006:] Good. [speaker005:] No. [speaker006:] For security? Yeah, OK good. [speaker001:] Yeah. [speaker005:] Right. [speaker006:] Good point. [speaker003:] Um. So giving them, you think a web site to say, "if you wanna print it out here it is", is not sufficient? [speaker001:] Yeah. I [speaker005:] Certainly for everybody who's been in the meetings so far it would be sufficient. I'm just wondering about [disfmarker] [speaker003:] Yeah, I'm just thinking for people that that's not sufficient for, what [disfmarker] the only sufficient thing would be for me to walk up to them and hand it to them. [speaker006:] You could mail it to them. Get an a mailing address. [speaker001:] Yeah. [speaker003:] Equivalent. [speaker006:] But I think it's easier to drop in the box. [speaker001:] But [disfmarker] Just put the button on [disfmarker] on the web page which say "please send me the [disfmarker] the scripts". [speaker006:] Oh that's interesting. [speaker003:] That's right. [speaker001:] Yeah. [speaker002:] What um [disfmarker] When you display it on the web page, what are [disfmarker] what are you showing them? Utterances, [speaker003:] Mm hmm. [speaker002:] or [disfmarker]? And so can they bleep within an utterance? [speaker003:] No. Whole utterances only. [speaker002:] Whole utterances. [speaker003:] And that was just convenience for my sake, that it's uh, uh it would end up being fairly difficult to edit the transcripts if we would do it at the sub utterance level. 
Because this way I can just delete an entire line out of a transcript file rather than have to do it by hand. [speaker005:] There's another aspect to this which maybe [disfmarker] is part of why this is bothering me. Um, I think you're really trying very hard to make this as convenient as possible for people to do this. [speaker003:] I mean that's why I did the web form, because for me that would be my most convenient. [speaker002:] Mmm. [speaker005:] I [disfmarker] I [disfmarker] I understand. I think that's the bad idea. [speaker002:] I know where you're going. [speaker003:] Oh. [speaker005:] See because you're gon you're [disfmarker] uh [disfmarker] Really. You're gonna end up with all these little patchy things, whereas really what we want to do is have the [disfmarker] the [disfmarker] the bias towards letting it go. Because nob you know it [disfmarker] There was a [disfmarker] one or twi once or twice, in the re in the meetings we've heard, where somebody said something that they might be embarrassed by, but overall people are talking about technical topics. Nobody's gonna get hurt. Nobody's being l libeled. You know, this is [disfmarker] this [disfmarker] we're [disfmarker] we're covering [disfmarker] We're playing the lawyer's game, and we're playing we're [disfmarker] we're [disfmarker] we're looking for the extreme case. If we really orient it towards that extreme case, make it really easy, we're gonna end up encouraging a headache. That [disfmarker] I think that's [disfmarker] I'm sort of psyching myself out here, I [disfmarker] I'm trying to [disfmarker] uh [disfmarker] but I [disfmarker] I think that's [disfmarker] [speaker003:] I guess I don't see having a few phrases here and there in a meeting being that mu much of a headache, bleeped out. [speaker005:] Well, it's [disfmarker] [speaker003:] So. [speaker005:] but i [speaker002:] I think what Morgan's saying is the easier it is, the more is gonna be bleeped. 
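The whole-utterance excision step described here (delete an entire line from the transcript file rather than edit within an utterance) could be sketched as below, assuming a hypothetical tab-separated row format of `start`, `end`, `speaker`, `text` and matching the mailed-back intervals with a small tolerance.

```python
# Hypothetical sketch of whole-utterance excision: given the time
# intervals a participant requested, drop matching lines from a
# transcript whose rows are "start<TAB>end<TAB>speaker<TAB>text".

def excise(lines, intervals, tol=0.01):
    """Remove any utterance line whose (start, end) matches a
    requested interval to within `tol` seconds."""
    kept = []
    for line in lines:
        start_s, end_s, _rest = line.split("\t", 2)
        start, end = float(start_s), float(end_s)
        if any(abs(start - a) <= tol and abs(end - b) <= tol
               for a, b in intervals):
            continue  # bleeped: delete the entire line
        kept.append(line)
    return kept
```

Working at the line level keeps the edit trivial, which is the convenience argument made in the transcript: sub-utterance bleeping would require re-timing words within a line.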
[speaker005:] And [disfmarker] and it really depends on what kind of research you're doing. I think some researchers who are gonna be working with this corpus years from now are really gonna be cursing the fact that there's a bunch of stuff in there [comment] that's missing from the dialogue. [speaker003:] Mm hmm. [speaker005:] You know, it depends on the kind of research they're doing, [speaker001:] Yeah. [speaker005:] but it might be, uh [pause] it might be really a [disfmarker] a pain. And, you know where it's really gonna hurt somebody, in some way [disfmarker] the one who said it or someone who is being spoken about, [comment] we definitely want to allow the option of it being bleeped out. But I really think we wanna make it the rare incidence. And [disfmarker] and uh, I am just a little worried about making it so easy for people to do, and so much fun! [vocalsound] that they're gonna go through and bleep out stuff. [speaker006:] So much fun. [speaker005:] and they can bleep out stuff they don't like too, right from somebody else, as you say, you know, so "well I didn't like what he said." [speaker003:] Well I don't see any way of avoiding that. I mean, we have to provi we have promised that we would provide them the transcript and that they can remove parts that they don't like. [speaker005:] Yeah. [speaker003:] So that the [disfmarker] [speaker005:] No, no, I [disfmarker] I [disfmarker] I don't [disfmarker] [speaker003:] The only question is [disfmarker] [speaker005:] You you've talked me into that, but I [disfmarker] I just think that we should make it harder to do. [speaker003:] The problem is if it's harder for them it's also harder for me. Whereas this web interface, I just get email, it's all formatted, it's all ready to go and I can just insert it. [speaker005:] So maybe you don't give them access to the web interface unless they really need it. [speaker006:] Well I guess [disfmarker] [speaker005:] So [disfmarker] so [disfmarker] so [speaker006:] Yeah. 
[speaker005:] I'm sorry [disfmarker] so [disfmarker] so [disfmarker] So maybe this is a s a way out of it. [speaker006:] Hmm. [speaker005:] You've provided something that's useful for you to do [disfmarker] handle, and useful for someone else if they need it. But I think the issue of privacy and ease and so forth should be that uh, they get access to this if they really need it. [speaker003:] Well [disfmarker] [speaker002:] So you're saying the [disfmarker] the sequence would be more like first Adam goes to the contact lists, contacts them via whatever their preferred method is, to see if they want to review the meeting. [speaker005:] Right. [speaker002:] And then if they don't, you're done. If they do, then he provides them access to the [disfmarker] the web site. [speaker005:] W w [speaker003:] Well, to some extent I have to do that anyway because as I said we have to distribute passwords. [speaker002:] Or [disfmarker] a printed out form. [speaker005:] There's [disfmarker] there [disfmarker] y [speaker003:] So, [speaker005:] but you don't necessarily have to distribute passwords is what I'm saying. So [disfmarker] [speaker003:] Well, but [disfmarker] [speaker002:] Only if they want it. [speaker003:] what I'm saying is that I can't just email them the password because that's not secure. [speaker005:] No, no, no. But you aren't necessarily giving them [disfmarker] [speaker003:] So they have to call me and ask. [speaker005:] Right. But [disfmarker] we don't even necessarily need to end up distributing passwords at all. [speaker003:] Well, we do because of privacy. We can't just make it openly available on the web. [speaker005:] No, no. You're missing the point. [speaker006:] Mm hmm. [speaker005:] We're [disfmarker] We're trying i We're trying to make it less of an obvious just l l l l uh fall off a log, to do this. [speaker006:] Not everyone gets a password, unless they ask for it. [speaker005:] Right? 
So th so what I would see, is that first you contact them and ask them if they would like to review it for to check for the [disfmarker] [speaker006:] Yeah. [speaker005:] not just for fun, OK? but to [disfmarker] to check this for uh things that they're worried about having said or if they're willing to just send an approval of it, at [disfmarker] from their memory. Um [disfmarker] and, uh, and we should think carefully actually we should review [disfmarker] go through how that's worded, OK? Then, if someone uh [disfmarker] wants to review it, uh, and I know you don't like this, but I'm offering this as a suggestion, is that [disfmarker] is that we then give them a print out. And then if they say that "I have a potential problem with these things," then, you [disfmarker] you say "OK well you might wanna hear this in context to s think if you need that," you issue them a password, i in the [disfmarker] [speaker003:] But the [disfmarker] the problem with what you're suggesting is it's not just inconvenient for them, it's inconvenient for me. Because that means multiple contacts every time [disfmarker] for every single meeting every time anyone wants anything. I would much prefer to have all be automatic, they visit the web site if they want to. Obviously they don't have to. [speaker005:] I know you'd prefer it, but the proble we have [disfmarker] [speaker003:] Yeah. [speaker005:] there's a problem with it. [speaker003:] So I think you're thinking people are going to arbitrarily start bleeping and I just don't think that's gonna happen. [speaker006:] I'm also concerned about the spirit of the [disfmarker] of the informed consent thing. 
Cuz I think if they feel that uh, it's [disfmarker] I th I th You know, if it turns out that something gets published in this corpus that someone really should have eliminated and didn't detect, then it could have been because of their own negligence that they didn't pursue that next level and get the password and do that, um, but [disfmarker] but they might be able to argue "oh well it was cumbersome, and I was busy and it was gonna take me too much time to trace it down". So it could that the burden would come back onto us. So I'm a little bit worried about uh, making it harder for them, from the legal standpoint. [speaker005:] Well you can go too far in that direction, and you need to find somewhere between I think, [speaker006:] Yeah. [speaker005:] because [disfmarker] [speaker003:] It seems to me that sending them email, saying if you have an O OK reply to this email and say OK, [speaker005:] Uh huh. [speaker003:] If you have a problem with it contact me and I'll give you a password ", seems like is a perfectly, reasonable compromise. And if they want a printout they can print it out themselves. [speaker006:] Or we could print it up for them, I mean we could offer that [disfmarker] [speaker001:] Yeah. [speaker006:] but [disfmarker] but there's uh, another aspect to that and that is that in the informed consent form, um, my impression is that they [disfmarker] that we offered them at the very least that they definitely would have access to the transcript. And [disfmarker] and I ha I don't know that there's a chance of really skipping that stage. [speaker005:] Yeah. [speaker006:] I mean I [disfmarker] I thought that you were [disfmarker] Maybe I misinterpreted what you said but it's [disfmarker] [speaker005:] Having access to it doesn't necessarily mean, that [disfmarker] having it [speaker003:] Having it. [speaker006:] Giving it to them. [speaker005:] right? [speaker006:] OK. [speaker005:] It just means they have the right to have it. 
[speaker003:] Well the in [disfmarker] [speaker006:] Alright. [speaker003:] the consent form is right in there if anyone wants to look at it, [speaker006:] Fine. OK. Fair enough. [speaker003:] so. [speaker005:] Yeah. [speaker006:] Sh sh [speaker003:] D you want me to grab one? [speaker006:] well I could [disfmarker] I'm closer. I could [disfmarker] [speaker003:] Yeah, but you're wired aren't you? [speaker006:] Yeah. That is true. [speaker005:] Um. [vocalsound] Yeah, I mean I don't wanna fool them, [speaker006:] I don't know [disfmarker] [speaker005:] I just meant that e every [disfmarker] ev any time you say anything to anyone there is in fact a [disfmarker] a bias that is presented, [speaker006:] Oh yeah yeah [disfmarker] oh I know. [speaker005:] right? [speaker006:] Yeah that's true. [speaker005:] of [disfmarker] and [disfmarker] [speaker003:] "If you agree to participate you'll have the opportunity to have anything ex anything excised, which you would prefer not to have included in the data set." [speaker006:] Yeah. [speaker005:] Yeah. [speaker003:] "Once a transcript is available we will ask your permission to include the data in the corpus for the r larger research community. There again you will be allowed to indicate any sections that you'd prefer to have excised from the database, and they will m be removed both from the transcript and the recording." [speaker006:] Hmm. Well that's more open than I realized. [speaker003:] Well, I mean it [disfmarker] The one question is definitely clear with anything as opposed to just what you said. [speaker005:] I [disfmarker] [speaker001:] Yeah. [speaker005:] Yeah, uh no that [disfmarker] it [disfmarker] tha [speaker006:] Tha that's true. That's more severe, but the next one says the transcript will be around. [speaker005:] that's right. [speaker006:] And it doesn't [comment] really say we'll send it to you, or wi it'll be available for you on the web, or anything. 
[speaker002:] I think it probably leaves it open how we get it to them. [speaker005:] I I [disfmarker] [speaker006:] At least it more often. Yeah. It means also we don't have to g To give it to them. I mean like [disfmarker] like Morgan was saying they [disfmarker] they [disfmarker] [speaker003:] They just have to make sure that it is available to them. [speaker006:] It's available to them if they ask for it. [speaker005:] Yeah, OK, so. wh um [disfmarker] I think I have an idea that may be sat may satisfy both you and me in this which is, um, it's a [disfmarker] it [disfmarker] we just go over carefully how these notes to people are worded. So I [disfmarker] I just want it to be worded in such a way where it gives the strong impre it gives very, I mean nothing hidden, v very strongly the bias that we would really like to use all of these data. [speaker003:] Right. [speaker005:] That [disfmarker] that we really would rather it wasn't a patchwork of things tossed out, [speaker006:] Good. [speaker005:] that it would be better for, um, our, uh, field if that is the case. But if you really think something is gonna [disfmarker] And I don't think there's anything in the legal aspects that [disfmarker] that is hurt by our expressing that bias. [speaker006:] Great. Great, great. [speaker005:] And then [disfmarker] then my concern about [disfmarker] which [disfmarker] [speaker006:] Yeah. I agree. [speaker005:] you know you might be right, it may be it was just paranoia on my part, uh but people just [disfmarker] See I'm [@ @] worried about this interface so much fun [vocalsound] that people start bleeping stuff out [comment] [vocalsound] just as [disfmarker] just because they can. [speaker003:] It's just a check box next to the text, it's not any fun at all. [speaker005:] Yeah. Well I don't know. I kind of had fun when you played me something that was bleeped out. You know. I [speaker003:] Well, but they won't get that feedback. All [disfmarker] [speaker005:] Oh they won't? 
[speaker003:] no because it doesn't automatically bleep it at the time. [speaker005:] Oh good. [speaker003:] It just sends me [disfmarker] [speaker005:] So you haven't made it so much fun. [speaker003:] Right. [speaker005:] Oh good. OK, [speaker003:] It just sends me the time intervals. And then at some point I'll incorporate them all and put bleeps. I mean I don't wanna have t ha do that yet until we actually release the data [speaker005:] Yeah. [speaker003:] because um, then we have to have two copies of every meeting and we're already short on disk space. [speaker005:] Yeah. [speaker003:] So I [disfmarker] I wanna [disfmarker] I [disfmarker] just keep the times until we actually wanna release the data and then we bleep it. [speaker005:] OK. Alright, so I think [disfmarker] Yeah so if we have if [disfmarker] i Again let's you know, sort of circulate the [disfmarker] the wording on each of these things and get it right, but [disfmarker] but [disfmarker] [speaker003:] Well since you seem to feel heart uh, strongest about it, would you like to do the first pass? [speaker005:] OK. Uh, fair enough. Turn about is fair play, [speaker006:] Al Also it ther there is this other question, the legal question that [disfmarker] that Adam's raised, uh about whether we need a concrete signature, or email c i suffices or whatever [speaker005:] Sorry. [speaker003:] Yeah. [speaker006:] and I don't know how that works. i There's something down there about "if you agree to [disfmarker]" [speaker005:] I'm [disfmarker] I'm [disfmarker] I'm [disfmarker] I thought [disfmarker] I [disfmarker] I thought about it with one of my background processes [speaker003:] I don't think so. [speaker005:] and I [disfmarker] uh it's [disfmarker] uh it's uh, it's fine to do the email. [speaker006:] Ah. Fine. Good. [speaker005:] OK. [speaker003:] Yeah because thi th they're signing here that they're agreeing to the paragraph which says "you'll be given an opportunity." [speaker006:] OK. [speaker005:] Yeah. 
And [disfmarker] [speaker003:] And so I don't think they need another signature. [speaker005:] Well and furthermore I [disfmarker] it's now fairly routine in a lot of arrangements that I do with people on contracts and so forth that [disfmarker] that uh if it's [disfmarker] if it's that sort of thing where you're you're saying uh "OK I agree, we want eighty hours of this person at such and such amount, and I agree that's OK," uh if it's a follow up to some other agreement where there was a signature it's often done in email now [speaker003:] Right. [speaker005:] so it's [disfmarker] it's OK. [speaker006:] Great. [speaker005:] Um. [speaker003:] So I guess I probably should at the minimum, think about how to present it in a printed form. I'm not really sure what's best with that. The problem is a lot of them are really short, [speaker006:] Well [disfmarker] [speaker003:] and so I don't necessarily wanna do one per line. [speaker006:] Well I s [speaker003:] But I don't know how else to do it. [speaker006:] I also have this [disfmarker] I [disfmarker] I think it's nice you have it uh, viewab her [comment] hearable on the [disfmarker] on the web for those who might wonder about um, the non nonverbal side, I mean I [disfmarker] I agree that our bias should be as [disfmarker] as expressed here, and [disfmarker] but I [disfmarker] I think it's nice that a person could check. Cuz sometimes you know you [disfmarker] the words on a [disfmarker] on the page, come out soun sounding different in terms of the [pause] social dynamics if they hear it. [speaker003:] Hmm. [speaker005:] Mm hmm. [speaker006:] And I realize we shouldn't emphasize that people [comment] you know, shouldn't borrow trouble. [speaker005:] Mm hmm. [speaker006:] What it comes down to but [disfmarker] [speaker003:] Yeah I think actually [disfmarker] my opinion probably is that the only time someone will need to listen to it is if the transcript is uh not good. 
You know, if [disfmarker] if there are lots of mumbles and parentheses and things like that. [speaker006:] Oh, you know, or what if there was an error in the transcript that didn't get detected and there was a whole uh [disfmarker] i segment a against some [disfmarker] [vocalsound] personal [vocalsound] i th [speaker003:] Right. That was all mumbled? I think Microsoft is [speaker001:] Yeah. [speaker006:] Yeah exactly [speaker001:] Oh, [speaker003:] Sorry transcribers. [speaker006:] Or [disfmarker] or even [disfmarker] [vocalsound] or even [comment] there was a [disfmarker] a line you know about how "hmm mmm mmm [comment] Bill Gates duh duh duh duh." [speaker005:] Yeah. [speaker006:] but [disfmarker] but it was all [disfmarker] the words were all visible, but they didn't end up i some there was a slip in the transcript. [speaker001:] Oh, God. [speaker003:] They're gonna hate this meeting. [speaker001:] Yeah. [speaker006:] Yeah that's true. [speaker003:] Actually Liz will like it. You know, but. [speaker005:] Liz will like it. We had a pretty strong disagreement going there. [speaker003:] Yep, yep, that's right. [speaker005:] Yeah. [speaker006:] Yeah. So I don't know. I mean, I [disfmarker] I guess we're assuming that the transcript is a close enough approximation and that [disfmarker] that my double checking will be [pause] so close to absolutely perfect that it [disfmarker] that nothing will slip by. [speaker003:] Mm hmm. [speaker005:] But it [disfmarker] the [disfmarker] some something might sometime, and they [disfmarker] uh [disfmarker] if [disfmarker] if it's something that they said, they might [disfmarker] i i I mean, you might be very accurate in putting down what they actually said, [speaker006:] Mm hmm. [speaker005:] but, when they hear it, themselves, they may hear something different because they know what they meant. [speaker006:] I don't know how to notate that. [speaker002:] Sarcasm, [speaker006:] Yeah, that's right. 
[speaker002:] how do you [disfmarker] how do you indicate sarcasm? [speaker006:] Yeah that's right. [speaker005:] No, I'm serious. So [disfmarker] the [disfmarker] so [disfmarker] i the [disfmarker] so we might [disfmarker] we might get some feedback from people that such and such was, you know, not [disfmarker] not really what I said. [speaker003:] Yeah. Well that would be good to get, definitely. [speaker005:] Yeah, but, Yeah, sure. [speaker003:] Just for corrections. [speaker005:] Yeah. [speaker003:] So um, in terms of password distribution, I think phone is really the only way to do it, phone and in person. [speaker006:] Yeah. [speaker003:] Or mail, physical mail. [speaker006:] Or if for leave it on their voice mail. [speaker002:] Any sub word level thing. [speaker003:] Any sub wor Yeah, OK. I mean you could do it with PGP or things like that but it's too complex. [speaker006:] You know I just realized something, which is of [disfmarker] e th this question about the [disfmarker] uh the possible mismatch of [disfmarker] I mean i well, and actually also the lawyer saying that um, we shouldn't really have them [disfmarker] have the people believing that they will be cleared by our checks. You know? I mean. So it's like i in a way it's [disfmarker] it's nice to have the responsibility still on them to listen to the tape and [disfmarker] and hear the transcript, [speaker005:] Mm hmm. [speaker006:] to have that be the [disfmarker] [speaker005:] Well yeah, but you can't dep I mean, most people will not wanna take the time to do that, though. [speaker006:] Yeah, OK, fair enough. And they're s they're absorbing the responsibility themselves. [speaker005:] And they [disfmarker] they have to [disfmarker] [speaker006:] So it's not [disfmarker] it's not um [disfmarker] Yeah, good. 
[speaker005:] But I mean if you were at a meeting, and [disfmarker] and you [disfmarker] you don't think, at least, that you said anything funny and the meeting was about, you know, some [disfmarker] some funny thing about semantics or something, or uh [disfmarker] [speaker003:] You probably won't listen to it. [speaker005:] Yeah. [speaker006:] It is true that tec that the content is technical, I [disfmarker] and so i and we're not having these discussions which [disfmarker] [speaker005:] Yeah. [speaker006:] I [disfmarker] I mean, when I listen to these things, I don't find things that are questionable, in other people's speech or in my own. [speaker005:] Yeah. You would think it would be rare, [speaker006:] Just [disfmarker] It should be very rare. [speaker005:] I mean we're not talking about the energy crisis or something, people have [disfmarker] [speaker006:] Yeah. Yeah, OK. [speaker003:] How about them energy crises. [speaker005:] Yeah. I think we're uh [disfmarker] [speaker003:] Done? [speaker005:] Kind of done. Actually, I was gonna [disfmarker] Di Did you have anything n that's going on, or [disfmarker] [speaker004:] Not really. No. Um, [vocalsound] my project is going along but um, I'm really just here to um fill the project uh [disfmarker] the overall progress. [speaker005:] Yeah. [speaker004:] I don't really have anything specific to [disfmarker] to talk about. [speaker005:] That's fine. I just didn't wanna go by you, if you had something. [speaker004:] Oh, OK. [speaker005:] You don't have anything to say. Nah. [speaker002:] No. [speaker005:] Transcribers, he was rattling the b marbles in his brain back and forth just then this [disfmarker] this [disfmarker] [speaker003:] Shall we do digits? [speaker005:] Oh yeah. [speaker003:] Um, oh by the way I did find a bunch [disfmarker] [speaker004:] It um [speaker003:] Uh, we should count out how many more digits to forms do we have back there? [speaker002:] There were quite a few. 
[speaker003:] That's what I thought. [speaker002:] Uh. [speaker003:] I f I was going through them all and I found actually a lot filed in with them, that were blanks, that no one had actually read. [speaker002:] Mmm. [speaker003:] And so we still have more than I thought we did. [speaker002:] Oh good. [speaker003:] So, we have a few more digits before we're done. [speaker002:] You know having this headset reminds me of like working at Burger King or something. [speaker003:] Oops. [speaker006:] Oh, did you do that? [speaker003:] I'd like a burger with that, [speaker001:] Burger King [speaker003:] do you want fries with that? [speaker002:] No I never did. [speaker005:] Wow. [speaker003:] And [pause] [speaker002:] But I feel like I could now. [speaker003:] Starts [disfmarker] No. [speaker004:] No. [speaker003:] No. [speaker004:] That's a different thing. [speaker003:] There's another [disfmarker] I don't know. It starts with a P or something. I forget the word for it, but it's [disfmarker] it's um [speaker004:] Oh. [speaker003:] Typically when you [disfmarker] you're ab r starting around forty for most people, it starts to harden and then it's just harder for the lens to shift things [speaker004:] Oh. [speaker003:] and th the [disfmarker] the symptom is typically that you [disfmarker] [vocalsound] you have to hold stuff uh uh further away to [disfmarker] to see it. [speaker005:] Uh huh. Yeah. [speaker003:] In fact, uh m my brother's a [pause] gerontological psychologist and he [disfmarker] he uh [vocalsound] came up with an [disfmarker] an uh [disfmarker] a uh body age test which uh gets down to sort of only three measurements that are good enough st statistical predictors of all the rest of it. And one of them is [disfmarker] is the distance [vocalsound] that you have to hold it at. [speaker004:] Give someone a piece of paper and then they [disfmarker] Oh. [speaker005:] Yeah. [speaker003:] Yeah. 
[speaker001:] We're [disfmarker] we're live by the way, so we've got a good intro here [speaker003:] Oh. Yeah. About how old I am. OK. [speaker001:] Yep. We can edit that out if you want. [speaker004:] Oh, that's optional. [speaker003:] No, that's OK. [speaker005:] Mm hmm. [speaker001:] OK. [speaker004:] You know. [speaker001:] So. This time the form discussion should be very short, right? [speaker003:] It also should be [pause] later. [speaker001:] OK. [speaker003:] Because Jane uh is not here yet. [speaker001:] Good point. [speaker003:] And uh she'll be most interested in that. Uh, she's probably least involved in the signal processing stuff so maybe we can just [disfmarker] just uh, I don't think we should go though an elaborate thing, but um uh Jose and I were just talking about [vocalsound] the uh [nonvocalsound] uh, speech e energy thing, [speaker005:] The [@ @] [disfmarker] Yeah. [speaker003:] and I uh [disfmarker] We didn't talk about the derivatives. But I think, you know, the [disfmarker] the [disfmarker] i if I can [disfmarker] if you don't mind my [disfmarker] my speaking for you for a bit, um [vocalsound] Uh. Right now, that he's not really showing any kind of uh distinction, but uh [disfmarker] but we discussed a couple of the possible things that uh he can look at. Um. And uh one is that uh this is all in log energy and log energy is basically compressing the distances [vocalsound] uh [pause] between things. Um [pause] Another is that he needs to play with the [disfmarker] the different uh [pause] uh temporal sizes. He was [disfmarker] he [disfmarker] he was taking everything over two hundred milliseconds uh, and uh he's going to vary that number and also look at moving windows, as we discussed before. 
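[Editor's note: the window experiments just described (pause-delimited segments of at least two hundred milliseconds, versus moving windows) can be sketched roughly as below. This is a hypothetical illustration, not the analysis code under discussion; the fixed window length and the z-score form of the normalization are assumed choices.]

```python
import math

def moving_window_normalize(log_energies, window=5):
    """z-score each frame's log energy against a centered moving window.

    A fixed-size moving window is one of the choices being discussed;
    pause-delimited segments of >= 200 ms are the other.
    """
    out = []
    half = window // 2
    for i, v in enumerate(log_energies):
        chunk = log_energies[max(0, i - half): i + half + 1]
        mean = sum(chunk) / len(chunk)
        var = sum((c - mean) ** 2 for c in chunk) / len(chunk)
        std = math.sqrt(var) or 1.0  # avoid dividing by zero on flat stretches
        out.append((v - mean) / std)
    return out

# Hypothetical frame energies with an overlap-like doubling in the middle.
frames = [1.0, 1.0, 1.0, 2.0, 2.0, 1.0, 1.0, 1.0]
normalized = moving_window_normalize([math.log(e) for e in frames])
print([round(z, 2) for z in normalized])
```

The doubled frames come out with positive z-scores against their local neighborhood, which is the kind of local contrast the windowed normalization is after.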
Um And uh [disfmarker] and the other thing is that the [disfmarker] yeah doing the [disfmarker] [vocalsound] subtracting off the mean and the variance in the [disfmarker] [pause] uh and dividing it by the [pause] standard deviation in the log domain, [vocalsound] may not be [pause] the right thing to do. [speaker005:] Hi. [speaker001:] Hi Jane! [speaker005:] Yeah. [speaker001:] We just started. Could you take that mike there? [speaker004:] Are these the long term means? Like, over the whole [disfmarker] I mean, the means of [pause] what? [speaker003:] Uh B Between [disfmarker] between [disfmarker] [speaker004:] All the frames in the conversation? [speaker001:] Thanks. [speaker004:] Or of things that [disfmarker] [speaker003:] No. Between [disfmarker] Neither. [speaker005:] No. [speaker003:] It's uh between the pauses [pause] uh for some segment. [speaker004:] Oh. [speaker003:] And so i i his [disfmarker] his [disfmarker] He's making the constraint it has to be at least two hundred milliseconds. [speaker004:] Oh. [speaker003:] And so you take that. And then he's [disfmarker] he's uh measuring at the frame level [disfmarker] still at the frame level, of what [disfmarker] [speaker004:] Right. [speaker003:] and then [disfmarker] and then just uh normalizing with that larger amount. um and [disfmarker] But one thing he was pointing out is when he [disfmarker] he looked at a bunch of examples in log domain, it is actually pretty hard to see [vocalsound] the change. And you can sort of [pause] see that, because of j of just putting it on the board that [vocalsound] if you sort of have the log of X plus X, that's the log of X plus the log of two [speaker005:] Yep. [speaker004:] Yeah, maybe it's not log distributed. [speaker005:] Mmm. Yeah. [speaker003:] and it's just, [pause] you know, it [disfmarker] it diminishes the [pause] effect of having two of them. [speaker004:] But you could do like a C D F there instead? [speaker003:] Um.
[speaker004:] I mean, we don't know that the distribution here is normally. [speaker003:] Yes, right. So [disfmarker] So what I was suggesting to him is that [disfmarker] [speaker004:] So just some kind of a simple [disfmarker] [speaker003:] Actually, a PDF. But, you know, uh But, either way. [speaker004:] PDF [speaker003:] Yeah. [speaker004:] Yeah. [speaker005:] Yeah. [speaker003:] Yeah, eith eith uh [vocalsound] B [speaker004:] Something like that where it's sort of data driven. [speaker003:] Yeah, but I think [pause] also u I think a good first indicator is when the [disfmarker] the [disfmarker] the researcher looks at [vocalsound] examples of the data and can not see a change [pause] in how big the [disfmarker] the signal is, [vocalsound] when the two speaker [disfmarker] [speaker005:] Yeah. Yeah. [speaker003:] Then, that's a problem right there. So. I think you should at least be able, [speaker004:] Oh yeah. [speaker003:] doing casual looking and can get the sense, "Hey, there's something there." and then you can play around with the measures. [speaker004:] Oh yeah. [speaker005:] Yeah. [speaker003:] And when he's looking in the log domain he's not really seeing it. So. And when he's looking in straight energy he is, so that's a good place to start. [speaker005:] Yeah. [speaker003:] Um. So that was [disfmarker] that was the discussion we just had. Um. [vocalsound] The other thing Actually we ca had a question for Adam in this. Uh, when you did the [vocalsound] sampling? uh [pause] over the [pause] speech segments or s or sampling over the [disfmarker] the individual channels in order to do the e uh the [pause] amplitude equalization, [vocalsound] did you do it over just the entire [disfmarker] everything in the mike channels? [speaker005:] How [disfmarker] [speaker003:] You didn't try to find speech? [speaker001:] No, I just took over the entire s uh entire channel um [pause] sampled ten minutes randomly. [speaker003:] Right, OK. 
So then that means that someone who didn't speak [pause] very much [vocalsound] would be largely represented by silence. [speaker001:] Yep. [speaker003:] And someone who would [disfmarker] who would be [disfmarker] So the normalization factor probably is [pause] i i i [pause] is [disfmarker] is [disfmarker] [speaker001:] Yeah, this was quite quick and dirty, and it was just for [pause] listening. [speaker005:] Yeah. [speaker003:] Yeah. OK. [speaker001:] And for listening it seems to work really well. [speaker005:] Yeah. [speaker003:] Yeah. [speaker005:] Yeah. [speaker001:] So. [speaker003:] Yeah. But that's [disfmarker] Right. [speaker001:] But, it's not [disfmarker] Not a good measure. [speaker003:] So th [speaker005:] Yeah. [speaker003:] OK. So yeah there [disfmarker] there [disfmarker] there [disfmarker] There's a good chance then given that different people do talk different amounts [pause] that there is [disfmarker] there [disfmarker] there is still a lot more to be gained from gain norm normalization with some sort [speaker005:] Yeah. Yeah. Mmm. [speaker001:] Yes, absolutely. [speaker003:] if [disfmarker] if we can figure out a way to do it. [speaker005:] Yeah. [speaker003:] Uh. But we were agreed that in addition to that [comment] uh there should be [pause] s stuff related to pitch and harmonics and so forth. [speaker005:] Yeah. [speaker003:] So we didn't talk at all about uh the other derivatives, but uh again just [disfmarker] just looking at [disfmarker] Uh, I think uh Liz has a very good point, that in fact it would be much more graphic just to show [disfmarker] [speaker005:] Yeah. [speaker003:] Well, actually, you do have some distributions here, uh for these cases. [speaker005:] Yeah. [speaker003:] You have some histograms, [speaker005:] Yeah. [speaker003:] um [pause] and [pause] uh, they don't look very separate. uh [vocalsound] [pause] separated. 
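[Editor's note: the silence bias in the quick-and-dirty gain estimate can be sketched as below. The data are synthetic and the RMS-over-frames measure and the simple energy gate are assumed stand-ins, not the actual equalization procedure; the point is only that sampling a whole channel, silence included, under-estimates the level of a speaker who rarely talks.]

```python
import random

random.seed(0)

def make_channel(n_frames, speech_fraction, speech_level, silence_level=0.01):
    """Hypothetical channel: a mix of speech and silence frame energies."""
    frames = []
    for _ in range(n_frames):
        if random.random() < speech_fraction:
            frames.append(speech_level)
        else:
            frames.append(silence_level)
    return frames

# Two hypothetical speakers, equally loud when they do speak,
# but one talks far less often.
talkative = make_channel(10_000, speech_fraction=0.5, speech_level=1.0)
quiet = make_channel(10_000, speech_fraction=0.05, speech_level=1.0)

def rms(frames):
    """Level estimate over randomly sampled frames, silence included."""
    return (sum(e * e for e in frames) / len(frames)) ** 0.5

# The quiet speaker's estimate is dominated by silence.
print(rms(talkative), rms(quiet))

def rms_speech_only(frames, threshold=0.1):
    """Level estimate gated to speech frames only."""
    speech = [e for e in frames if e > threshold]
    return (sum(e * e for e in speech) / len(speech)) ** 0.5

# Gated to speech, both speakers get roughly the same level, as they should.
print(rms_speech_only(talkative), rms_speech_only(quiet))
```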
[speaker005:] This is the [disfmarker] the first derivate of log of frame energy uh without any kind of normalization. [speaker004:] What [disfmarker] [speaker003:] Yeah. Yeah. Yeah. [speaker004:] Log energy. [speaker005:] These the These are the [disfmarker] the first experiments uh with comment uh [speaker004:] Sorry. Frame energy. [speaker001:] Except that [pause] it's hard to judge this because the [disfmarker] they're not normalized. It's just number of frames. [speaker004:] Yeah. [speaker003:] Yeah. [speaker005:] Yeah. [speaker004:] Yeah. [speaker001:] But yeah, even so. [speaker004:] W [vocalsound] I mean, what I meant is, even if you use linear, [pause] you know, raw [pause] measures, like [pause] raw energy or whatever, [speaker003:] "Number" [disfmarker] [speaker004:] maybe we shouldn't make any assumptions about the distribution's shape, and just use [disfmarker] you know, use the distribution to model the [disfmarker] [vocalsound] [pause] the mean, or what y you know, rather than the mean take some [disfmarker] [speaker003:] Yeah. But [disfmarker] And so in [disfmarker] in these he's got that. [speaker004:] Yeah. [speaker003:] He's got some pictures. [speaker005:] Yeah. [speaker003:] But he doesn't [disfmarker] he doesn't in the [disfmarker] he i just in derivatives, but not in the [disfmarker] [speaker004:] Yeah. Oh. [speaker003:] but he d but he doesn't [disfmarker] doesn't [disfmarker] [speaker004:] Right. So, we don't [pause] know what they look like [pause] on the, [pause] tsk [disfmarker] [comment] For the raw. [speaker003:] But he didn't h have it for the energy. He had it for the derivatives. [speaker004:] Yeah. So. [speaker003:] Yeah. [speaker004:] I mean, there might be something there. I don't know. Huh. [speaker003:] Yeah. 
[speaker001:] Interesting [speaker005:] Here I [disfmarker] I [speaker003:] Oh that [disfmarker] yeah that's a good q [speaker005:] in [disfmarker] No I [disfmarker] I [disfmarker] I haven't the result [speaker003:] did [disfmarker] did you have this sort of thing, for just the [disfmarker] just the l r uh the [disfmarker] the unnormalized log energy? OK. Yeah. So she [disfmarker] she's right. [speaker005:] but it's the [disfmarker] it's the [disfmarker] the [disfmarker] the following. [speaker003:] That's a [disfmarker] [speaker004:] Well it might be just good to know what it looks like. [speaker003:] Yeah. [speaker004:] Cuz [disfmarker] [speaker003:] That's [disfmarker] That's uh [pause] cuz I'd mentioned scatter plots before but she's right, [speaker005:] Huh? [speaker003:] I mean, even before you get the scatter plots, just looking at a single feature [vocalsound] uh, looking at the distribution, is a good thing to do. [speaker005:] Yeah. Catal uh [disfmarker] Combining the different possibilities of uh the parameters. I [disfmarker] I [disfmarker] I [disfmarker] I mean the [disfmarker] the [disfmarker] the scatter plot [pause] combining eh different [pause] n two combination. [speaker003:] Yeah, but [disfmarker] but what she's saying [pause] is, which is right, is [pause] le [speaker005:] combination of two, [pause] of energy and derivate [disfmarker] [speaker003:] I mean, let's start with the [disfmarker] Before we get complicated, let's start with the most basic wh thing, which is [pause] we're arguing that if you take energy [disfmarker] uh if you look at the energy, that, when two people are speaking at the same time, usually [vocalsound] [pause] there'll be more energy than when one is [speaker005:] Yeah. [speaker003:] right? That's [disfmarker] that sort of hypothesis. [speaker005:] That's right. 
[speaker003:] And the first way you'd look at that, uh s she's, you know, absolutely right, is that you would just take a look at the distribution of those two things, [speaker005:] Yeah. [speaker003:] much as you've plotted them here, You know, but just [disfmarker] but just [disfmarker] [pause] just uh do it [disfmarker] [speaker005:] Yeah. [speaker003:] Well in this case you have three. You have the silence, and that [disfmarker] that's fine. [speaker005:] Yeah. [speaker003:] So, uh with three colors or three shades or whatever, just [disfmarker] just look at those distributions. [speaker005:] Yeah. [speaker003:] And then, given that as a base, you can see if that gets improved, you know, or [disfmarker] or [disfmarker] [pause] or worsened [pause] by the [disfmarker] looking at regular energy, looking at log energy, we were just proposing that maybe it's [disfmarker] you know, it's harder to [pause] see with the log energy, [speaker005:] Yeah. [speaker003:] um and uh also these different normalizations, does a particular choice of normalization make it better? But I had maybe made it too complicated by suggesting early on, that you look at scatter plots because that's looking at a distribution in two dimensions. Let's start off just in one, uh, with this feature. [speaker005:] Yeah. Yeah. Yeah. [speaker003:] I think that's probably the most basic thing, before anything very complicated. [speaker004:] Yeah. [speaker003:] Um And then we w I think we're agreed that pitch related things are [disfmarker] are [disfmarker] are going to be a [disfmarker] a really likely candidate to help. [speaker005:] Yeah. I agree, yeah. Uh huh. 
[speaker003:] Um [pause] But [pause] since [disfmarker] [vocalsound] uh your intuition from looking at some of the data, is that when you looked at the regular energy, that it did in fact usually go up, [vocalsound] when two people were talking, [vocalsound] that's [disfmarker] eh you know, you should be able to come up with a measure which will [pause] match your intuition. [speaker005:] OK. Yeah. Yeah. Yeah, yeah, yeah. [speaker003:] And she's right, that a [disfmarker] that having a [disfmarker] having [disfmarker] [comment] having this table, with a whole bunch of things, [pause] with the standard deviation, the variance and so forth, it's [disfmarker] it's [disfmarker] it's harder to interpret than just looking at the [disfmarker] the same kind of picture you have here. [speaker005:] But [disfmarker] Uh huh. Yeah. But [disfmarker] It [disfmarker] it's curious but uh I f I found it in the [disfmarker] in the mixed file, in one channel [vocalsound] that eh in several [disfmarker] oh e eh several times eh you have an speaker talking alone with a high level of energy [speaker003:] Mm hmm. Mm hmm. [speaker005:] eh in the middle eh a zone of overlapping with mmm less energy [speaker003:] Mm hmm. [speaker005:] and eh come with another speaker with high energy [speaker003:] Mm hmm. [speaker005:] and the overlapping zone has eh less energy. [speaker003:] Yeah. So there'll be some cases for which [disfmarker] [speaker005:] Because there reach very many [speaker004:] Right. [speaker003:] But, the qu So [disfmarker] So they'll be [disfmarker] This is [disfmarker] [vocalsound] I w want to point [pause] to visual things, But I mean they [disfmarker] there'll be time [disfmarker] There'll be overlap between the distributions, but the question is, "If it's a reasonable feature at all, there's some separation." [speaker005:] Yeah. Yeah. [speaker004:] Especially locally. [speaker003:] Mm hmm. [speaker004:] So. Locally. [speaker005:] just locally, yeah. 
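[Editor's note: the single-feature picture being proposed, one energy distribution per class (silence, one speaker, overlapped speech) before any scatter plots, can be mocked up as below. The class means and spreads are invented for illustration only; real distributions will overlap more, as Jose's counterexamples suggest.]

```python
import random

random.seed(1)

# Hypothetical frame energies for the three classes under discussion:
# silence, a single speaker, and overlapped speech (roughly additive).
silence = [abs(random.gauss(0.02, 0.01)) for _ in range(2000)]
single = [abs(random.gauss(1.0, 0.3)) for _ in range(2000)]
overlap = [abs(random.gauss(2.0, 0.5)) for _ in range(2000)]

def histogram(values, lo, hi, bins=10):
    """Crude text histogram: counts per equal-width bin."""
    counts = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        i = min(int((v - lo) / width), bins - 1)
        counts[i] += 1
    return counts

# One one-dimensional distribution per class, side by side.
for name, values in [("silence", silence), ("single", single), ("overlap", overlap)]:
    counts = histogram(values, 0.0, 4.0)
    print(f"{name:8s}", " ".join(f"{c:4d}" for c in counts))
```

If the class histograms show some separation here, normalizations and two-dimensional scatter plots are the next step; if they do not, the measurement itself is suspect.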
[speaker001:] And [disfmarker] I was just going to say that [disfmarker] that [pause] right now we're just exploring. [speaker004:] And the other thing is I Sorry. I [disfmarker] [speaker005:] Yeah. [speaker001:] What you would imagine eventually, is that you'll feed all of these features into some [pause] discriminative system. [speaker003:] Yeah. [speaker005:] Yeah. [speaker001:] And so even if [disfmarker] if one of the features does a good job at one type of overlap, another feature might do a good job at another type of overlap. [speaker005:] Yeah. Yeah. Yeah, this is the [disfmarker] [speaker003:] Right. I mean the [disfmarker] the reason I had suggested the scatter f p features is I used to do this a lot, when we had thirteen or fifteen or twenty features [pause] to look at. um Because something is a good feature uh by itself, you don't really know how it'll behave in combination and so it's nice to have as many [disfmarker] as many together at the same time as possible in uh in some reasonable visual form. There's cool graphic things people have had sometimes to put together three or four in some funny [disfmarker] funny way. But it's true that you shouldn't do any of that unless you know that the individual ones, at least, have [disfmarker] have some uh [disfmarker] some hope [speaker005:] Yeah. [speaker004:] Well, especially for normalizing. I mean, it's really important to [pause] pick a normalization that matches the distribution for that feature. [speaker003:] Mm hmm. Mm hmm. [speaker004:] And it may not be the same for all the types of overlaps or the windows may not be the same. e Actually, I was wondering, [vocalsound] right now you're taking a [disfmarker] all of the [pause] speech, from the whole meeting, and you're trying to find points of overlap, but we don't really know which speaker is overlapping with which speaker, right? [speaker003:] Right. [speaker005:] Yeah. 
[speaker004:] So I mean another way would just be to take the speech from just, say, Morgan, And just Jane and then just their overlaps, like [disfmarker] but by hand, by cheating, and looking at you know, if you can detect something that way, because if we can't do it that way, there's no good way that we're going to be able to do it. [speaker001:] No prayer. [speaker004:] That [disfmarker] You know, there might be something helpful and cleaner about looking at just [pause] individuals and then that combination alone. [speaker005:] Yeah. [speaker004:] Plus, I think it has more elegant [disfmarker] e [speaker003:] Mm hmm. [speaker004:] The m the right model will be [pause] easier to see that way. So if [disfmarker] I don't know, if you go through and you find Adam, cuz he has a lot of overlaps and some other speaker who also has e enough speech [speaker005:] Yeah. [speaker004:] and just sort of look at those three cases of Adam and the other person and the overlaps, [speaker005:] Yeah. [speaker004:] maybe [disfmarker] and just look at the distributions, maybe there is a clear pattern [speaker005:] Yeah. Uh huh. [speaker004:] but we just can't see it because there's too many combinations of [disfmarker] of people that can overlap. [speaker005:] Yeah. [speaker002:] I had the same intuition last [disfmarker] last [disfmarker] last week. [speaker004:] So. [speaker005:] Yeah. [speaker004:] Just seems sort of complex. [speaker002:] I think it's [disfmarker] to start with it's s your [disfmarker] your idea of simplifying, starting with something that [pause] you can see [pause] eh you know without the [pause] extra [pause] layers of [disfmarker] [speaker004:] Right. Cuz if energy doesn't matter there, like [disfmarker] I don't think this is true, but what if [speaker005:] To study individual? [speaker004:] Hmm? [speaker002:] Sorry, what? [speaker005:] To study individual? 
[speaker002:] Well, you [disfmarker] you [disfmarker] you don't have to study everybody individually [speaker004:] Well, to study the simplest case to get rid of extra [disfmarker] [speaker005:] The [disfmarker] the [disfmarker] the [disfmarker] But [disfmarker] Consider [disfmarker] [speaker002:] but [pause] just simple case and the one that has the lot of data associated with it. [speaker004:] Right. Cuz what if it's the case and I don't think this is true [disfmarker] [speaker001:] That was a great overlap by the way. [speaker004:] What if it's the case that when two people overlap they equate their [disfmarker] you know, there's a [pause] conservation of energy and everybody [disfmarker] both people talk more softly? I don't think this happens at all. [speaker002:] Or [disfmarker] or what if what if the equipment [disfmarker] what if the equipment adjusts somehow, [speaker004:] Or they get louder. [speaker002:] there's some equalizing in there? [speaker004:] Yeah or [disfmarker] [speaker003:] Uh, no we don't have that. [speaker004:] I mean. [speaker003:] But. [speaker001:] Well, but [disfmarker] But I think that's what I was saying about different types of overlap. [speaker002:] OK. Saturation. [speaker004:] There are [disfmarker] there are different types, and within those types, like as Jose was saying, that [pause] sounded like a backchannel overlap, meaning the kind that's [pause] a friendly encouragement, like "Mm hmm.", "Great!", "Yeah!" [speaker005:] Yeah. [speaker004:] And it doesn't take [disfmarker] you don't take the floor. Um, but, some of those, as you showed, I think can be discriminated by the duration of the overlap. [speaker005:] Yeah. [speaker004:] So. It [disfmarker] Actually the s new student, Don, who um Adam has met, and he was at one of our meetings [disfmarker] He's [pause] getting his feet wet and then he'll be starting again [pause] in mid January. He's interested in trying to distinguish the types of overlap. 
I don't know if he's talked with you yet. [speaker005:] Yeah. [speaker004:] But in sort of honing in on these different types [speaker005:] I don't consi Now I don't consider that possibility. [speaker004:] and [disfmarker] So maybe [disfmarker] [speaker005:] This is a s a general studio of the overlapping we're studying the [disfmarker] i [speaker003:] Yeah. [speaker004:] So it might be something that we can [pause] help by categorizing some of them and then, you know, look at that. [speaker003:] Well [pause] I [disfmarker] I [disfmarker] I [disfmarker] I would s actually still recommend that he do the overall thing because [pause] it would be the quickest thing for him to do. He could [disfmarker] You see, he already has all his stuff in place, [speaker004:] Yeah. [speaker003:] he has the histogram mechanism, he has the stuff that subtracts out [disfmarker] and all he has to do is change it uh uh from [disfmarker] from log to plain energy and plot the histogram and look at it. And then he should go on and do the other stuff bec [speaker004:] Yeah. [speaker003:] but [disfmarker] But this will [disfmarker] [speaker004:] Yeah, no. I didn't mean that [disfmarker] that [disfmarker] for you to do that, but I was thinking if [disfmarker] if Don and I are trying to get [pause] categories [speaker003:] Mm hmm. [speaker004:] and we label some data for you, and we say this is what we think is going [disfmarker] [speaker003:] Mm hmm. [speaker004:] So you don't have to worry about it. And here's the three types of overlaps. [speaker005:] Yeah. [speaker004:] And we'll [disfmarker] we'll do the labelling for you. [speaker005:] Yeah. [speaker003:] Hm hmm. [speaker004:] Um. [speaker005:] Consider different class of overlap? [speaker004:] Yeah, that we would be working on anyway. [speaker005:] If there's time. Yeah. [speaker004:] Then maybe [pause] you can try some different things for those three cases, and see if that helps, or [disfmarker] [speaker005:] Yeah. 
This is the thing I [disfmarker] I comment with you before, that uh we have a great variation of th situation of overlapping. [speaker003:] Mm hmm. [speaker005:] And the behavior for energy is, uh log energy, [vocalsound] is not uh the same all the time. [speaker004:] Mm hmm. [speaker005:] And [disfmarker] [speaker003:] But I guess I was just saying that [disfmarker] that right now uh from the means that you gave, I don't have any sense of whether even, you know, there are any significant number of cases for which there is distinct [disfmarker] and I would imagine there should be some [disfmarker] you know, there should be [disfmarker] The distributions should be somewhat separated. [speaker005:] Yeah. Yeah. [speaker003:] Uh and I [disfmarker] I would still guess that if they are not separated at all, that there's some [disfmarker] there's [disfmarker] there's most likely something wrong in the way that we're measuring it. [speaker005:] Yeah. Yeah. Yeah. [speaker003:] Um, but [pause] um For instance, I mean I wouldn't expect that it was very common overall, that when two people were talking at the same time, that it would [disfmarker] that it really was lower, [speaker005:] Yeah. [speaker003:] although sometimes, as you say, it would. [speaker005:] Yeah. [speaker004:] Yeah, no, that was [disfmarker] That was a jok [speaker003:] So. So. [speaker004:] or a sort of, a case where [disfmarker] where you would never know that unless you actually go and look at two individuals. [speaker003:] Yeah. I mean. No. It could [disfmarker] it probably does happen sometimes. [speaker005:] Yeah. [speaker003:] Yeah. [speaker001:] Right. [speaker003:] Yeah. [speaker005:] Yeah. [speaker001:] Mind if I turned that light off? [speaker004:] So. [speaker001:] The flickering is annoying me. [speaker003:] OK. [speaker004:] It might the case, though, that the significant energy, just as Jose was saying, comes in the non backchannel cases. 
Because in back Most people when they're talking don't change their own [pause] energy when they get a backchannel, cuz they're not really predicting the backchannel. [speaker003:] Mm hmm. [speaker004:] And sometimes it's a nod and sometimes it's an "mm hmm". [speaker005:] Yeah. [speaker004:] And the "mm hmm" is really usually very low energy. [speaker005:] Yeah. [speaker004:] So maybe those don't actually have much difference in energy. But [pause] all the other cases might. and the backchannels are sort of easy to spot s in terms of their words or [disfmarker] [speaker003:] e [vocalsound] e and [disfmarker] and again what they [disfmarker] what difference there was would kind of be lost in taking the log, [speaker004:] I mean, just listen to it. [speaker005:] Yeah. [speaker004:] So. [speaker003:] so, [speaker004:] Well, it would be lost [pause] no matter what you do. [speaker005:] But [disfmarker] [speaker003:] as well. [speaker005:] Yeah. [speaker004:] It just [disfmarker] [speaker003:] Mmm, no, if it's [disfmarker] if i if it's [disfmarker] [speaker001:] Tone [speaker003:] Well, it won't be as big. [speaker004:] I mean, even if you take the log, you can [disfmarker] your model just has a more sensitive [pause] measures. [speaker005:] Yeah. [speaker001:] Sure, but tone might be very [speaker004:] So. [speaker001:] Yeah, you're "mm hmm" tone is going to be very different. [speaker004:] Yeah. Right. Right. [speaker001:] You could imagine doing specialized ones for different types of backchannels, if you could [disfmarker] if you had a good model for it. Your "mm hmm" detector. [speaker003:] If [disfmarker] if you're [disfmarker] a I guess my point is, if you're doing essentially a linear separation, taking the log first does in fact make it harder to separate. [speaker001:] Right. [speaker005:] Yeah. [speaker003:] So it's [disfmarker] So, uh if you i i So i if there [disfmarker] if there close to things it does [speaker004:] Yeah. 
[speaker003:] it's a nonlinear operation that does in fact change the distinction. If you're doing a non if you're doing some fancy thing then [disfmarker] then yeah. And right now we're essentially doing this linear thing by looking across here and [disfmarker] and saying we're going to cut it here. Um and that [disfmarker] that's the indicator that we're getting. But anyway, yeah, we're not [pause] disagreeing on any of this, we should look at it more uh [disfmarker] more finely, but uh uh I think that [disfmarker] This often happens, you do fairly complicated things, and then you stand back from them and you realize that you haven't done something simple. So uh, if you generated something like that just for the energy and see, and then, a a a as [disfmarker] as Liz says, when they g have uh uh smaller um, more coherent groups to look at, that would be another interesting thing later. [speaker005:] Uh huh. Uh huh. [speaker003:] And then that should give us some indication [disfmarker] between those, should give us some indication of whether there's anything to be achieved f from energy at all. And then you can move on to the uh [pause] uh more [nonvocalsound] pitch related stuff. [speaker005:] Mm hmm. I [disfmarker] I [disfmarker] I think this is a good idea. [speaker003:] OK. [speaker005:] Not consider the log energy. [speaker002:] Mm hmm. [speaker003:] Yeah. But then the [disfmarker] Have you started looking at the pitch related [pause] stuff at all, or [disfmarker]? [speaker005:] The [disfmarker]? [speaker003:] Pitch [pause] related? Harmonicity and so on? [speaker005:] I [disfmarker] I'm preparing the [disfmarker] the program but I don't [disfmarker] I don't begin because eh [vocalsound] I saw your email [speaker003:] Preparing to [disfmarker] Yeah. [speaker005:] and [pause] I agree with you it's better to [disfmarker] I suppose it's better to [disfmarker] to consider the [disfmarker] the energy this kind of parameter [vocalsound] bef [speaker003:] Yeah. 
Oh, that's not what I meant. No, no. I [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] Well, we certainly should see this but I [disfmarker] I [disfmarker] I [disfmarker] I think that the harm I certainly wasn't saying this was better than the harmonicity and pitch related things I was just saying [speaker005:] I [disfmarker] I go on with the [disfmarker] with the pitch, [speaker003:] Yeah. [speaker005:] aha! [pause] OK. [speaker003:] Yeah, I was just saying [disfmarker] [speaker005:] I [disfmarker] I [disfmarker] I [disfmarker] I understood uh that eh [pause] I [disfmarker] I had to finish [pause] by the moment with the [vocalsound] and [disfmarker] and concentrate my [disfmarker] my energy in that problem. [speaker003:] OK. OK. [vocalsound] OK. But I think, like, all these derivatives and second derivatives and all these other very fancy things, I think I would just sort of look at the energy [pause] and then get into the harmonicity as [disfmarker] as a suggestion. [speaker005:] OK. I go on with the pitch. [speaker003:] Uh OK. So maybe uh since w we're trying to uh compress the meeting, um, I know Adam had some form stuff he wanted to talk about and did you have some? [speaker002:] I wanted to ask just s something on the end of this top topic. So, when I presented my results about the uh distribution of overlaps and the speakers and the profiles of the speakers, at the bottom of that I did have a proposal, [speaker005:] Uh huh. [speaker002:] and I had plan to go through with it, of [disfmarker] of co coding the types of overlaps that people were involved in s just with reference to speaker style so, you know, with reference [disfmarker] [speaker004:] Oh. That'd be great. [speaker002:] and you know I said that on my [disfmarker] in my summary, [speaker004:] Yeah, I remem [speaker002:] that [pause] you know so it's like people may have different amounts of being overlapped with or overlapping [speaker004:] Right. 
[speaker002:] but that in itself is not informative without knowing what types of overlaps they're involved in so I was planning to do a taxonomy of types overlaps with reference to that. [speaker004:] That would be great. [speaker005:] Yeah. [speaker004:] That would be really great. [speaker002:] So, but it you know it's like it sounds like you also have uh something in that direction. [speaker001:] Hmm. [speaker002:] Is [disfmarker] is it [disfmarker] [speaker004:] We have nothing [disfmarker] You know, basically, we got [pause] his environment set up. He's [disfmarker] he's a double E [comment] you know. So. It's mostly that, [pause] if we had to [pause] label it ourselves, we [disfmarker] we would or we'd have to, to get started, but if [disfmarker] [pause] It [disfmarker] it would be much better if you can do it. You'd be much better [comment] at doing it also because [vocalsound] you know, I [disfmarker] I'm not [disfmarker] I don't have a good feel for how they should be sorted out, [speaker002:] Interesting. [speaker004:] and I really didn't wanna go into that if I didn't have to. So if [disfmarker] If you're w willing to do that or [disfmarker] or [disfmarker] [speaker002:] Well maybe we can [speaker001:] It would be interesting, though, to talk, maybe not at the meeting, but at some other time about what are the classes. [speaker002:] OK. [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker004:] Yeah. [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker004:] I think that's a research [pause] effort in and of itself, [speaker001:] Yeah, it would be interesting. [speaker004:] because you can read the literature, but I don't know how it'll [pause] turn out [speaker005:] Yeah. [speaker004:] and, You know, it's always an interesting question. [speaker005:] I would think it's interesting, yeah. [speaker002:] It seems like we also s with reference to a purpose, too, that we we'd want to have them coded. [speaker005:] Yeah. [speaker004:] That'd be great. 
[speaker005:] Yeah. [speaker001:] Yep. [speaker004:] That'd be really great. [speaker002:] OK. [speaker004:] And we'd still have some [pause] funding for this project, [speaker002:] I can do that. [speaker005:] uh uh [speaker004:] like probably, if we had to hire some [disfmarker] like an undergrad, because uh Don is being covered half time on something else [disfmarker] [speaker002:] Mm hmm. [speaker004:] I mean, he [disfmarker] we're not paying him [pause] the full RA ship for [disfmarker] all the time. So. [vocalsound] um If we got it to where we wanted [disfmarker] we needed someone to do that [disfmarker] I don't think there's really enough data where [disfmarker] where [disfmarker] [speaker002:] Mm hmm. Yeah, I see this as a prototype, to use the only the [disfmarker] the already transcribed meeting as just a prototype. [speaker005:] Yeah. [speaker004:] Yeah. [speaker005:] I [disfmarker] I think a a another parameter we c we [disfmarker] we can consider is eh the [pause] duration. [speaker004:] But [disfmarker] [speaker001:] Mm hmm. [speaker005:] Another e e m besides eh the [disfmarker] the class of overlap, the duration. Because is possible [vocalsound] eh some s s um eh some classes eh has eh [pause] a type of a duration, eh, [pause] a duration very short uh when we have [disfmarker] [vocalsound] we have overlapping with speech. [speaker002:] Mm hmm. [speaker004:] Yeah, definitely. [speaker002:] Yeah, maybe [disfmarker] It may be correlated. [speaker005:] Is possible to have. [speaker002:] Mm hmm. [speaker005:] And it's interesting, [pause] I think, [pause] to consider the [disfmarker] the window of normalization, normalization window. 
Eh [pause] because eh if we have a type of, [pause] a kind of eh overlap, eh backchannel overlap, with a short duration, is possible [pause] eh to normali i i that if we normalize eh with eh [pause] eh consider only the [disfmarker] the eh window eh by the left eh ri eh [pause] side on the right side overlapping with a [disfmarker] a very [pause] eh oh a small window eh the [disfmarker] if the fit of normalization is eh mmm bigger eh in that overlapping zone eh very short [speaker002:] Mm hmm. [speaker004:] Yeah, that's true. The window shouldn't be larger than the backchannel. [speaker005:] I [disfmarker] I me I [disfmarker] I understand. I mean that you have eh you have a backchannel, eh, eh [disfmarker] you have a overlapping zone very short [speaker004:] Yeah. [speaker005:] and you consider eh n eh all the channel to normalize this very short eh [disfmarker] [speaker003:] Mm hmm. Mm hmm. [speaker005:] for example "mmm mm hmm hmm" eh And the energy is not eh height eh I think if you consider all the channel to normalize and the channel is [pause] mmm bigger [pause] eh eh eh compared with the [disfmarker] with the overlapping eh duration, [speaker003:] Mm hmm. [speaker005:] eh the effect is mmm stronger eh [pause] that I [disfmarker] I mean the [disfmarker] the e effect of the normalization eh with the mean and the [disfmarker] and the variance eh is different that if you consider [pause] only a [pause] window compared eh with the n the duration of overlapping. [speaker003:] Mm hmm. [speaker005:] Not [disfmarker] [speaker003:] You [disfmarker] you want it around the overlapping part. You want it to include something that's not in overlapping [speaker005:] Yeah. Yeah. [speaker003:] but [disfmarker] but uh [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker005:] I [disfmarker] I don't know. Is [disfmarker] s If [disfmarker] [speaker003:] Yeah. [speaker004:] Well it's a sliding window, right? 
So if you take the [disfmarker] the measure in the center of the overlapped [pause] piece, you know, there'd better be some something. [speaker003:] Mm hmm. [speaker005:] Yeah. Yeah. [speaker004:] But if your window is really huge then yeah you're right you won't even [disfmarker] [speaker005:] Yeah, This is the [disfmarker] This is the [disfmarker] the idea, [vocalsound] to consider only the [disfmarker] the small window near [disfmarker] near [disfmarker] near the [disfmarker] the overlapping zone. [speaker004:] The portion of the [disfmarker] [comment] of the backchannel won't [disfmarker] won't effect anything. But you [disfmarker] Yeah. So. You know, you shouldn't be more than like [disfmarker] [pause] You should definitely not be three times as big as your [disfmarker] as your [pause] backchannel. [speaker005:] Yeah. [speaker004:] Then you're gonna w have a wash. [speaker002:] Mm hmm. [speaker004:] And hopefully it's more like on the order of [disfmarker] [speaker003:] I'm not sure that's [pause] necessarily true. [speaker005:] Yeah? [speaker002:] It is an empirical question, it seems like. [speaker004:] Yea [speaker003:] Because [disfmarker] because it [disfmarker] because um again if you're just compensating for the gain, [speaker005:] Yeah. [speaker002:] Yeah. [speaker003:] you know, the fact that this [disfmarker] this gain thing was crude, and the gain wh if someone is speaking relatively at consistent level, just to [disfmarker] to give a [disfmarker] an extreme example, all you're doing is compensating for that. And then you still s And then if you look at the frame with respect to that, it still should [disfmarker] should uh change [speaker004:] Yeah, it depends how different your normalization is, as you slide your window across. I mean. That's something we don't know. [speaker003:] Mm hmm. [speaker002:] It's possible to try it both ways, [speaker001:] Well, I mean we're also talking about a couple of different things. [speaker002:] isn't it? 
in this small [speaker001:] I mean, one is your analysis window and then the other is any sort of normalization that you're doing. [speaker004:] Yeah I was talking about the n normalization window. [speaker001:] And the [disfmarker] And they could be quite different. [speaker004:] Yeah. [speaker003:] Right. [speaker005:] Yeah. [speaker003:] This was sort of where [disfmarker] where we were last week. [speaker004:] Yeah. That's true. Yeah. [speaker001:] Yep. [speaker003:] But, anyway We [disfmarker] we'll have to look at some core things. [speaker004:] Um. [speaker005:] OK. [speaker002:] OK. [speaker004:] But that'd be great if [disfmarker] if you're marking those and [disfmarker] um. [speaker002:] Great. OK. [speaker005:] Yeah. [speaker004:] But it is definitely true that we need to have the time marks, [speaker002:] Mm hmm. [speaker004:] and I was assuming that will be inherited because, if you have the words and they're roughly aligned in time via forced alignment or whatever we end up using, then you know, this [pause] student and I would be looking at the time marks [speaker002:] Yep, I agree. Mm hmm. [speaker004:] and classifying all the frames inside those as whatever labels Jane gave [speaker002:] Coming off of the other [disfmarker] [speaker005:] Yeah. [speaker002:] Good. So, it wouldn't be [pause] I wasn't planning to label the time marks. [speaker005:] I can give you my transcription file, [speaker002:] I was thinking that that would come from the engineering side, [speaker004:] I don't think you need to. [speaker005:] no? [speaker002:] yeah. [speaker004:] Yeah. That should be linked to the words which are linked to time somehow, [speaker002:] There you go. [speaker004:] right? [speaker001:] Well we're not any time soon going to get a forced alignment. [speaker004:] Not now. [speaker001:] So. [speaker005:] Yeah. [speaker001:] Um If it's not hand marked then we're not going to get the times. 
[speaker004:] Well, it's something that w Well, we [disfmarker] we wouldn't be able to do any work without a forced alignment anyway, [speaker005:] Yes [speaker004:] so somehow if [disfmarker] once he gets going we're gonna hafta come up with one and [speaker003:] Yes. [speaker004:] Yeah. [speaker002:] Good. Good. [speaker001:] I mean w I guess we could do a very bad one with Broadcast News. [speaker004:] So whatever you would label would be attached to the words, I think. [speaker002:] Great! Good, good. [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker003:] Well again for the close [pause] mike stuff, we could come up [disfmarker] take a s take the Switchboard system or something, [speaker001:] That might be good enough. Yeah. [speaker003:] and [disfmarker] Um [speaker001:] It'd be worth a try. It would be interesting to see what we get. [speaker003:] Just, you know, low pass filter the speech and [disfmarker] [speaker004:] Cuz there's [disfmarker] there's a lot of work you can't do without that, I mean, how [disfmarker] how would you [disfmarker] [speaker003:] Yeah. [speaker004:] You'd have to go in and measure every start and stop point next to a word is y if you're interested in anything to do with words. [speaker002:] It would be very inefficient. [speaker001:] Yep. [speaker005:] Yeah. [speaker004:] So. [speaker002:] Mm hmm. [speaker004:] Anyway [pause] So that'd be great. [speaker002:] Good. OK. [speaker005:] Yeah. [speaker003:] There's something we should talk about later but maybe not just now. But, uh, should talk about our options as far as the uh uh [pause] transcription But. Well, w But we'll do that later. [speaker001:] Yep, if IBM doesn't [disfmarker] [speaker002:] OK. Good. [speaker004:] Do we hafta [pause] turn [disfmarker] Are we supposed to keep recording here? [speaker002:] Yeah. Let's do that later. Yeah. [speaker001:] Yeah [vocalsound] Right. [speaker003:] We'll talk about it later. [speaker002:] Yeah. 
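[Editor's note: the normalization-window idea debated above — whether to z-normalize a short overlap's energy against the whole channel or only a small window around it, so a brief backchannel is not washed out — can be sketched as below. This is an illustrative sketch, not anything produced in the meeting; all names (`normalize_overlap`, the frame values, the `context` size) are hypothetical.]

```python
# Sketch of the two normalization choices discussed: statistics taken over
# the whole channel vs. over a short window around the overlap region.

def mean_var_normalize(values):
    """Return z-scores (mean 0, unit variance) of a list of frame energies."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    std = var ** 0.5 or 1.0  # guard against a zero-variance window
    return [(v - mean) / std for v in values]

def normalize_overlap(energy, start, end, context=None):
    """Normalize the frames in [start, end) of an overlap region.

    With context=None the mean/variance come from the whole channel;
    otherwise only `context` frames on each side of the overlap are used,
    so the window stays comparable to the overlap's own duration.
    """
    if context is None:
        return mean_var_normalize(energy)[start:end]
    lo = max(0, start - context)
    hi = min(len(energy), end + context)
    normed = mean_var_normalize(energy[lo:hi])
    return normed[start - lo:start - lo + (end - start)]

# Toy example: a mostly quiet channel with a short, louder backchannel.
channel = [1.0] * 100
channel[50:53] = [2.0, 2.5, 2.0]  # 3-frame overlap region

global_z = normalize_overlap(channel, 50, 53)            # whole-channel stats
local_z = normalize_overlap(channel, 50, 53, context=3)  # +/- 3-frame window
```

The two results differ exactly as argued in the meeting: against whole-channel statistics the short backchannel stands out as a large positive z-score, while a window scaled to the overlap's duration yields much smaller values, so the choice of normalization window is itself a parameter worth testing both ways.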
[speaker003:] So [vocalsound] uh Uh "forms". You had something on forms. [speaker001:] Forms Next iteration of forms. Oops. [speaker002:] Oh! Oh good, OK. [speaker003:] Um. Oh. [speaker002:] How [disfmarker] So it's two pages per person? [speaker001:] Nope. One's a digit form, one's a speaker form. [speaker002:] Oh! [speaker001:] So one is a one time only [pause] speaker form and the other is the digits. [speaker002:] Oh, I see. [speaker005:] Oh it's the same. Oh no no. Is [disfmarker] is new Is OK. [speaker001:] So don't fill these out. [speaker002:] Alright. [speaker001:] This is just the suggestion for uh what the new forms would look like. So, they incorporate the changes [pause] that we talked about. [speaker002:] Date and time. Uh why did you switch the order of the Date and Time fields? This is rather a low level, but [speaker001:] On which one? [speaker002:] On [disfmarker] on the new one, Time comes first and then Date, but I thought [disfmarker] [speaker001:] Oh you mean on the digit form? [speaker002:] This is [disfmarker] this is rather a low level question, but [disfmarker] but it used [disfmarker] used to be Date came first. [speaker001:] Uh, because the user fills out the first three fields and I fill out the rest. [speaker002:] Oh I see. [speaker001:] So it was intentional. [speaker002:] Well, how would the [disfmarker] How would the user know the time if they didn't know the date? [speaker001:] It's an interesting observation, but it was intentional. Because the date is when you actually read the digits and the time and, excuse me, the time is when you actually read the digits, but I'm filling out the date beforehand. If you look at the form in front of you? that you're going to fill out when you read the digits? you'll see I've already filled in the date but not the time. [speaker002:] Yeah. I always assumed [disfmarker] So the time is supposed to be pretty exact, because I've just been taking beginning time [disfmarker] time of the meeting. 
[speaker004:] Yeah, me too. [speaker005:] Yeah, I [disfmarker] [speaker001:] Yeah, I've noticed that in the forms. [speaker005:] yeah. [speaker001:] The [disfmarker] the reason I put the time in, is so that the person who's extracting the digits, meaning me, will know where to look in the meeting, to try to find the digits. [speaker005:] Me too. Oh! [speaker002:] Oh dear. [speaker005:] But [disfmarker] I am put [disfmarker] I am putting the beginning of the meeting. [speaker002:] We've been [disfmarker] we've been messing up your forms. [speaker001:] I know. [speaker004:] So you should call it, like, "digits start time". Or. [speaker001:] And I haven't said anything. Yep. [speaker005:] in [disfmarker] on there. [speaker003:] Why [disfmarker] What [disfmarker] what were you putting in? [speaker002:] Oh, well, I was saying if we started the meeting at two thirty, [speaker005:] Yeah. [speaker002:] I'd put two thirty, and I guess d e everyone was putting two thirty, [speaker003:] Oh. [speaker005:] Yeah. [speaker002:] and I didn't realize there was "uh oh I'm about to read this and I should" [disfmarker] [speaker001:] No, it's about fifty fifty. Actually it's about one third each. About one third of them are blank, about one third of them are when the digits are read, and about one third of them are when the [pause] meeting starts. [speaker005:] Oh. [speaker001:] So. [speaker002:] This would be a radical suggestion but [disfmarker] [speaker001:] I could put instructions? [speaker002:] Ei either that or maybe you could maybe write down when people [vocalsound] start reading digits on that particular session. [speaker001:] Nah. [speaker005:] Yeah. [speaker001:] But if I'm not at the meeting, I can't do that. [speaker002:] I know, OK. [speaker003:] Yeah, he's been setting stuff up and going away. So. [speaker002:] That's a good point. I see. Good point good point. 
[speaker003:] For some reason he doesn't want to sit through every meeting that's [disfmarker] [speaker001:] Yep, but that is the reason Name, Email and Time are where they are. [speaker002:] Oh, OK. [speaker003:] Yeah. [speaker002:] Alright. I rest my [disfmarker] [speaker001:] And then the others are later on. [speaker003:] Uh huh. [speaker002:] OK. w [speaker005:] And the Seat is this number? [speaker001:] Mm hmm. [speaker002:] Seat and Session. [speaker004:] "For official use only" That's [disfmarker] [vocalsound] Well, he's very professional. [speaker005:] "use only" [speaker002:] Actually you could [disfmarker] Well that does raise another question, which is why is the "Professional use only" line not higher? Why doesn't it come in at the point of Date and Seat? Oh. Because we're filling in other things. [speaker001:] What? [speaker003:] What? [speaker002:] Well, because [disfmarker] If y your [disfmarker] your professional use, you're gonna already have the date, and the s [speaker001:] What [disfmarker] which form are you talking about? [speaker002:] Well I'm comparing the new one with the old one. This is the digit form. [speaker005:] Oh. [speaker001:] Oh you're talking about the digit form. [speaker005:] Yeah. [speaker003:] Digit. Digit form. [speaker002:] Oh! I wasn't supposed to [disfmarker] [speaker001:] The digit form doesn't [disfmarker] The digit [disfmarker] [speaker005:] Yeah. [speaker002:] Sorry. Sorry. [speaker001:] No, that's alright. The digit form doesn't have a "for official use only" line. It just has a line, [pause] which is what you're supposed to read. [speaker002:] That [disfmarker] uh OK. Sorry about that. [speaker001:] So on the digits form, everything above the line is a fill in form [speaker002:] Yeah. Yeah. [speaker001:] and everything below the line is digits that the user reads. [speaker002:] OK. Alright s but I didn't mean to derail our discussion here, so you really wanted to start with this other form. 
[speaker001:] No, either way is fine I just [disfmarker] You just started talking about something, and I didn't know which form you were referring to. [speaker002:] Alright yeah, I was comparing [disfmarker] so th this is [disfmarker] So I was looking at the change first. So it's like we started with this and now we've got a new version of it wi [pause] with reference to this. So the digit form, we had one already. Now the f the fields are slightly different. [speaker003:] So the main thing that the person fills out um [pause] is the name and email and time? [speaker005:] Yeah. [speaker001:] Right. [speaker005:] Ah! [speaker003:] You do the rest? [speaker001:] Yep. Just as uh [disfmarker] as I have for all the others. [speaker003:] Right. [speaker002:] What [disfmarker] And there's an addition of the native language, which is a bit redundant. This one has Native Language and this one does too. [speaker001:] That's because the one, the digit form that has native language is the old form [speaker002:] Oh! Thank you. [pause] Thank you, thank you. [speaker001:] not the new form. [speaker005:] Yeah. [speaker002:] There we go. Oh, yeah. I'll catch up here. OK, I see. [speaker003:] "South Midland, North Midland" [speaker002:] That's the old and that's the new. [speaker001:] Yeah this was the problem with these categories, I [disfmarker] I picked those categories from TIMIT. I don't know what those are. [speaker004:] Actually, the only way I know is from working with the database and having to figure it out. [speaker005:] What [disfmarker] [speaker001:] With TIMIT, [speaker005:] uh huh. [speaker001:] yeah? [speaker003:] So is South Midland like Kansas? [speaker005:] What i [speaker001:] So, I was gonna ask wh w I mean. [speaker003:] and North Midland like [disfmarker] like uh Illinois, or [disfmarker]? [speaker004:] Well yeah. [speaker005:] Yeah. [speaker004:] Nor um [disfmarker] [speaker001:] So [disfmarker] so what accent are we speaking? Western? 
[speaker003:] By definition? [speaker005:] And for simple for [disfmarker] for me? Is mean my native language Spanish [disfmarker] Spanish? [speaker003:] Well, [speaker004:] Probably Western, yeah. [speaker005:] eh The original is the center of Spain and the [vocalsound] beca [speaker001:] Yeah, I mean you could call it whatever you want. For the foreign language we couldn't classify every single one. So I just left it blank and you can put whatever you want. [speaker005:] Because is different, the Span uh [pause] the Spanish language from the [disfmarker] the north of Spain, of the south, of the west and the [disfmarker] [speaker001:] Sure. [speaker005:] But. [speaker001:] So I'm not sure what to do about the Region field for English variety. [speaker005:] Yeah. [speaker001:] You know, when I wrote [disfmarker] I was writing those down, I was thinking, "You know, these are great [pause] if you're a linguist". [speaker002:] Yeah. [speaker004:] Actually even if you [vocalsound] [pause] t [speaker001:] But I don't know how to [disfmarker] I don't know how to [disfmarker] I don't know how to categorize them. [speaker002:] Yeah. [speaker003:] If you're [disfmarker] if e [vocalsound] if y [speaker004:] This wasn't developed by [disfmarker] th these regions weren't [disfmarker] [speaker003:] if you're a TI or MIT [vocalsound] from [vocalsound] nineteen eighty five. [speaker001:] Yeah [speaker003:] Yeah. [speaker001:] So I guess my only question was if [disfmarker] if you were a South Midland speaking region, person? Would you know it? [speaker004:] Mm hmm. [speaker001:] Is that what you would call yourself? [speaker004:] I don't know. [speaker001:] Yeah. 
[speaker003:] You know, I think if you're talking [disfmarker] if you're thinking in terms of places, [pause] as opposed to [pause] names different peop names people have given to [pause] different ways of talking, [pause] I would think North Midwest, and South Midwest would be more common than saying Midland, right, I mean, I [disfmarker] I went to s [speaker004:] Yeah. Now [pause] the usage [disfmarker] Maybe we can give them a li [pause] like a little map? with the regions and they just [disfmarker] No, I'm serious. [speaker002:] No, that's not bad. [speaker004:] Because it takes less time, and it's sort of cute [speaker005:] i at this [disfmarker] [vocalsound] in that side [disfmarker] in that side of the [disfmarker] the paper. [speaker002:] Yeah. [speaker004:] there's no figure. [speaker003:] Well. [speaker004:] Well just a little [disfmarker] You know, it doesn't have all the detail, but you sort of [disfmarker] [speaker003:] But what if you moved five times and [disfmarker] and uh [speaker002:] Well, I was thinking you could have ma multiple ones and then the amount of time [disfmarker] [speaker004:] No, but you're categorized. That's the same [disfmarker] [speaker002:] so, roughly. So. You could say, [pause] you know "ten years [pause] on the east coast, five years on the west coast" or something or other. [speaker001:] Well, We [disfmarker] I think we don't want to get that level of detail at this form. I think that's alright if we want to follow up. But. [speaker003:] I guess we don't really know. [speaker004:] I mean I [disfmarker] As I said, I don't think there's a huge [pause] benefit to this region thing. 
It [disfmarker] it gets [disfmarker] The problem is that for some things it's really clear and usually listening to [comment] it you can tell right away if it's a New York or Boston accent, but New York and Boston are two [disfmarker] well, I guess they have the NYC, but New England has a bunch of very different dialects and [disfmarker] [speaker002:] Mm hmm. [speaker004:] and [pause] so does um S So do other places. [speaker001:] Yeah, so I picked these regions cuz we had talked about TIMIT, [speaker004:] Right. [speaker001:] and those are right from TIMIT. [speaker004:] And so these would be [pause] satisfying like a speech [pause] research [pause] community if we released the database, [speaker001:] So. [speaker004:] but as to whether subjects know where they're from, I'm not sure because um I know that they had to fill this out for Switchboard. This is i almost exactly the same as Switchboard regions [speaker002:] Oh. OK. [speaker004:] or very close. Yeah. Um And I don't know how they filled that out. But th if Midland [disfmarker] Yeah, Midland is the one that's difficult I guess. [speaker002:] I think a lot of people [disfmarker] [speaker004:] Also Northwest you've got Oreg Washington and Oregon now which uh y people don't know if it's western or northern. [speaker002:] Yeah. [speaker001:] Yeah, I certainly don't. I mean, I was saying I don't even know what I speak. [speaker004:] It's like Northwest [speaker003:] Oh, what is Northern? [speaker001:] Am I speaking [disfmarker] Am I speaking Western? [speaker003:] Well and what [disfmarker] and what's Northern? [speaker004:] I think originally it was North [disfmarker] Northwest [speaker001:] Northwest? [speaker005:] Yeah. [speaker004:] But [disfmarker] [speaker001:] Yeah, so this is a real problem. [speaker004:] Yeah. [speaker001:] I don't know what to do about it. [speaker002:] I wouldn't know how to characterize mine either. 
And [disfmarker] and so I would think [disfmarker] I would say, I've [disfmarker] I've got a mix of California and Ohio. [speaker001:] I c I think at the first level, for example, we speak the same. [speaker004:] I don't know. [speaker001:] our [disfmarker] our dialects Or [pause] whatever you [disfmarker] region are the same. [speaker002:] Uh huh. [speaker001:] But I don't know what it is. So. [speaker004:] Well, you have a like techno speak accent I think. [speaker001:] a techno speak accent? [speaker004:] Yeah, you know? [speaker005:] A techno [speaker001:] A [disfmarker] a geek region? [speaker004:] Well it's [disfmarker] I mean I [disfmarker] you can sort of identify [speaker002:] Geek region. [speaker004:] it f It's [disfmarker] it's [disfmarker] not [disfmarker] not that that's [disfmarker] [speaker005:] Is different. Is different. [speaker004:] but [disfmarker] but maybe that [disfmarker] maybe we could leave this and see what people [disfmarker] See what people choose and then um let them just fill in if they don't [disfmarker] I mean I don't know what else we can do, cuz [disfmarker] [vocalsound] That's North Midland. [speaker002:] I'm wondering about a question like, "Where are you from mostly?" [speaker005:] Yeah. [speaker003:] But I [disfmarker] I'm s I'm [disfmarker] now that you mentioned it though, I am [disfmarker] really am confused by "Northern". [speaker005:] Yeah. [speaker002:] I agree. I agree. [speaker003:] I really am. [speaker005:] Yeah. [speaker003:] I mean, if [disfmarker] if you're [pause] in New England, that's North. [speaker002:] I agree. [speaker003:] If you're [disfmarker] i if you're [speaker002:] Scandinavian, the Minnesota area's north. [speaker003:] Uh yeah. That's [disfmarker] But that's also North Midland, [speaker005:] Yeah. [speaker003:] right? [speaker002:] Oh, [@ @]. [disfmarker] OK. 
[speaker003:] And [disfmarker] and [disfmarker] and Oregon and [disfmarker] and Oregon and Washington are [disfmarker] are Western, but they're also Northern. [speaker004:] Yeah. Of course, that's very different from, like, Michigan, or [disfmarker] [speaker005:] Mmm. [speaker002:] Mm hmm. [speaker003:] uh, Idaho? [speaker004:] Well there are hardly any subjects from Idaho. [speaker003:] Montana? [speaker001:] No problem. [speaker002:] Just rule them out. [speaker004:] There's only a few people in Idaho. [speaker001:] There are hardly any subjects from "beep" [speaker005:] Yeah. [speaker004:] Sorry. No, that's [disfmarker] [speaker003:] Maybe [disfmarker] Maybe we [disfmarker] Maybe we should put a little map and say "put an X on where you're from", [speaker005:] And [disfmarker] is [disfmarker] in those [disfmarker] [speaker001:] Yeah really. [speaker005:] And if you put [disfmarker] [speaker004:] We could ask where they're from. [speaker002:] It'd be pretty simple, yeah. [speaker004:] Yeah. But We went back to that. [speaker005:] Yeah. If you put eh the state? [speaker001:] Well well we sort of [disfmarker] [speaker002:] Where are you from mostly? [speaker004:] We [disfmarker] we went [disfmarker] we went around this and then [pause] a lot of people ended up saying that it [disfmarker] [speaker005:] Uh huh. Mm hmm. [speaker004:] You know. [speaker001:] Well, I like the idea of asking "what variety of English do you speak" as opposed to where you're from Because th if we start asking where we're from, again you have to start saying, "well, is that the language you speak or is that just where you're from?" [speaker005:] Yeah. [speaker002:] Hmm? [speaker004:] Right. Right. [speaker005:] Yeah. [speaker003:] Let's [disfmarker] [speaker004:] I mean it gives us good information on where they're from, but that doesn't [comment] tell us anything [disfmarker] [speaker003:] Mm hmm. 
[speaker001:] And [disfmarker] [speaker003:] We could always ask them if they're from [disfmarker] [speaker004:] well, enough about their [disfmarker] [speaker001:] I mean. So [disfmarker] so I would say Germany [speaker004:] like [disfmarker] [speaker001:] You know am I speaking with German accent [speaker002:] Oh. [speaker001:] I don't think so. [speaker004:] Right. [speaker002:] Well, see, I'm thinking "Where are you from mostly" [speaker001:] Oh, OK yeah. [speaker002:] because, you know, then you have some [disfmarker] some kind of subjective amount of time factored into it. [speaker005:] Yeah. Yeah. [speaker001:] Yep. [speaker005:] Yeah. [speaker001:] Yeah, I guess I could try to put [disfmarker] squeeze in a little map. I mean there's not a lot of r of room [speaker003:] I'd say, uh, "Boston, New York City, the South and Regular". [speaker002:] Well [disfmarker] [speaker001:] I think of those, Northern is the only one that I don't even know what they're meaning. [speaker004:] Oh, I don't know. [speaker005:] And [disfmarker] And [disfmarker] [vocalsound] Um [pause] And usually here [disfmarker] people here know what is their kind of mmm lang English language? [speaker002:] Yeah. Yeah. [speaker003:] That's a joke. That's [disfmarker] [speaker004:] So let's make it up. S I mean, who cares. Right? We can make up our own [disfmarker] So we can say "Northwest", "Rest of West" or something. You know. "West" and I mean. [speaker001:] Ye I don't think the Northwest people speak any differently than I do. [speaker004:] It doesn't even [disfmarker] Yeah, exactly. That's not really a region. [speaker003:] "Do you come from the Louisiana Purchase?" [speaker002:] I [disfmarker] [speaker004:] So we could take out "North" [disfmarker] "Northern". [speaker001:] That [disfmarker] that's [speaker005:] eh here Is easy for people to know? [speaker001:] exactly what we're arguing about. 
[speaker004:] That's [disfmarker] Yeah, w It's [disfmarker] In [disfmarker] It's [disfmarker] it's harder in America anywhere else, basically. [speaker001:] We don't know. [speaker005:] because you have [disfmarker] [speaker001:] I mean some of them are very obvious. If you [disfmarker] if you talk to someone speaking with Southern drawl, you know. [speaker005:] N m Yeah. [speaker002:] Yeah, or Boston. [speaker001:] Or Boston, yeah. [speaker002:] I can't do it, but [disfmarker] [speaker005:] Or Boston? [speaker004:] And those people, if you ask them to self identify their accent they know. [speaker003:] Yeah. [speaker005:] Yeah. [speaker002:] Yeah, they do. [speaker004:] They know very well. [speaker002:] Yeah I agree I agree. I agree. [speaker004:] They know they don't speak the same as the [speaker001:] But is Boston New England? [speaker002:] And they're proud of it. [speaker004:] day o [speaker002:] Yeah. [speaker004:] Yeah, exactly. [speaker002:] It's identity thing. [speaker004:] And they're glad to tell you. [speaker005:] style. [speaker004:] Well. Depends who you ask, I suppose. [speaker001:] W [vocalsound] I guess that's the problem with these categories. [speaker004:] But that's why they have New York City but [disfmarker] [speaker002:] Well, we ca Well, why can't we just say characterize [disfmarker] something like char characterize your accent [speaker003:] Well, Boston's [@ @], too. [speaker004:] Or [disfmarker] "Characterize your accent [pause] if you can." [speaker002:] and [disfmarker] and so I would say, "I don't know". [speaker004:] Yeah. Right, which probably means you have a very [disfmarker] [speaker002:] But someone from Boston with a really strong coloration would know. And so would an R less Maine [disfmarker] or something, [speaker004:] And that's actually good. [speaker005:] Yeah. [speaker004:] I was [disfmarker] I was thinking of something along that line [speaker002:] yeah. 
[speaker003:] How [speaker004:] because [pause] if you don't know, then, you know, ruling out the fact that you're totally [pause] inept or something, [speaker002:] Good. Hmm. [speaker004:] if somebody doesn't know, it probably means their accent isn't very strong compared to the sort of midwest standard. [speaker003:] Well, [vocalsound] I mean, it wasn't that long ago that we had somebody here who was from Texas who was absolutely sure that he didn't have any accent left. [speaker002:] Hmm? [speaker003:] And [disfmarker] and had [disfmarker] he had a pretty [vocalsound] noticeable drawl. [speaker001:] OK, so. I propose, [pause] take out Northern add, don't know. [speaker002:] Oh. [pause] Yeah. I [disfmarker] I would say more [disfmarker] more sweepingly, "how would you characterize your accent?" [speaker005:] Yeah. [speaker001:] So you want to change the instructions also not just say region? [speaker004:] W [speaker002:] Well, I think this discussion has made me think that's s something to consider. [speaker001:] I don't know if I [disfmarker] if I read this form, I think they're going to ask [comment] it [disfmarker] they're going to answer the same way if you say, "What's variety of English do you speak? Region." as if you say "what variety of region [disfmarker] region [comment] do you speak? Please characterize your accent?" They're going to answer the same way. [speaker002:] I guess [disfmarker] Well, I was not sure that [disfmarker] [speaker005:] Mmm. [speaker002:] I [disfmarker] So. I was suggesting not having the options, just having them [disfmarker] [speaker001:] Oh, I see. [speaker005:] Huh. [speaker001:] Well what we talked about with that is [pause] is so that they would understand the granularity. [speaker002:] Yes, but if, as Liz is suggesting, people who have strong accents know that they do [disfmarker] [speaker001:] I mean that's what I had before, and you told me to list the regions to list them. 
[speaker002:] and are [disfmarker] [speaker004:] Each [disfmarker] each one has pros and cons [speaker002:] Well, I know. [speaker003:] Right. Right. [speaker001:] So. [speaker004:] I mean we [disfmarker] we [disfmarker] [speaker003:] Right. [speaker002:] That's true. [speaker003:] Yeah last week [disfmarker] last week I was sort of r arguing for having it wide open, but then everybody said "Oh, no, but then it will be hard to interpret because some people will say Cincinnati and some will say Ohio". [speaker001:] I mean I had it wide open last week and [disfmarker] and you said TIMIT. [speaker003:] And. [speaker004:] What if we put in both? [speaker002:] Yeah. [speaker003:] Yeah. [speaker004:] And [disfmarker] Would people [disfmarker] [speaker001:] That's what the "Other" is for. [speaker004:] No, I mean what if we put in both ways of asking them? So. One is [pause] Region and the another one is "if you had to characterize yourself [disfmarker] your accent, what would you say?" [speaker001:] Won't they answer the same thing? [speaker004:] Well they might only answer only one of the questions but if [speaker002:] Yeah that's fine. [speaker004:] You know. [speaker002:] They might say "Other" for Region because they don't know what category to use [speaker004:] Actually [disfmarker] [speaker002:] but they might have something [disfmarker] [speaker004:] Right. It just [disfmarker] And we [disfmarker] [speaker002:] because it is easier to have it open ended. [speaker004:] we might learn from what they say, as to which one's a better [comment] way to ask it. But [disfmarker] I [disfmarker] Cuz I really don't know. [speaker003:] W This is just a small thing but um It says "Variety" and then it gives things that e have American as one of the choices. But then it says "Region", but Region actually just applies to uh, US, right? [speaker001:] Right. I mean that's why I put the "Other" in. [speaker002:] Well, we thought about it. [speaker003:] Ah, OK. 
[speaker002:] Yeah, OK. [speaker003:] S [speaker002:] We just [disfmarker] We sort of thought, "yes, [disfmarker]" y y I mean [disfmarker] At the last meeting, my recollection was that [pause] we felt people would have uh less [disfmarker] that [disfmarker] that there are so many types and varieties of [pause] these other languages and we are not going to have that many subjects from these different language groups and that it's a huge waste of [disfmarker] of space. [speaker001:] Yep. [speaker003:] OK. [speaker001:] So I mean, I [disfmarker] I mean the way I had it last time [pause] was Region was blank, [speaker002:] That's what I thought. [speaker001:] it just said Region colon. [speaker002:] Yeah. Yeah. [speaker001:] And [disfmarker] and I think that that's the best way to do it, because [disfmarker] because of the problems we're talking about but what we said last week, was no, put in a list, so I put in a list. [speaker004:] Maybe we can make the list a little smaller. [speaker001:] So should we go back to [disfmarker] Well, certainly dropping "Northern" I think is right, because none of us know what that is. [speaker004:] Cuz, I mean [disfmarker] And keeping "Other", and then [pause] maybe this North Midland, we call it "North Midwest". South [pause] Midwest, or just [disfmarker] [speaker003:] Yes I [disfmarker] I [disfmarker] I think so. Yeah. [speaker004:] South Midwest. Does that make sense? [speaker005:] South Midwest? [speaker004:] That would help me [disfmarker] Yeah. Cuz [disfmarker] [speaker003:] U unless you're from Midland, Kansas. [speaker004:] Midland [disfmarker] [speaker003:] But. Yeah. [speaker004:] I don't know where Midland is [speaker003:] There's a [disfmarker] Or Midland [disfmarker] Midland [disfmarker] Is it Midland [disfmarker] Midland [disfmarker] Midland, Texas or Midland, Kansas? [speaker001:] Is "Midwest" one word? [speaker004:] Y yeah, one w [speaker003:] I forget. But there's a town. [speaker004:] Oh. 
[speaker003:] in [disfmarker] in there. I forget what it is [@ @]. [speaker002:] I don't think that's what they mean. [speaker004:] But, yeah. So. Kansas would be [pause] South Midland. [speaker003:] Yeah. [speaker004:] Right? [speaker003:] Y yeah. [speaker004:] And [disfmarker] and wouldn't [disfmarker] Yeah. So, th I'm from Kansas, actually. [speaker003:] And Colorado, right across the border, would be [vocalsound] North Midland. [speaker005:] Southern Midland. [speaker004:] Yeah. Colora [speaker001:] And uh [disfmarker] [speaker004:] Oh, right. And then, the [disfmarker] the [disfmarker] dropping North, so it would be Western. It's just one big shebang, where, of course, you have huge variation in dialects, [speaker003:] But you do in the others, too. So. [speaker001:] But that's true of New England too. [speaker004:] but [disfmarker] [comment] but so do you [disfmarker] Yeah. [speaker001:] So. I mean only one [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. Yeah. [speaker001:] Well, I shouldn't say that. I have no clue. I was going to say the only one that doesn't have a huge variety is New York City. But I have no idea whether it does or not. [speaker002:] It does seem [disfmarker] [speaker003:] U [speaker002:] I mean. I [disfmarker] I would think that these categories would be more [disfmarker] w would be easier for an an analyst to put in rather than the subject himself. [speaker001:] I think that [disfmarker] that was what happened with TIMIT, was that it was an analyst. [speaker002:] OK. [speaker003:] Wait a minute. Where does [disfmarker] Where does [disfmarker] [pause] d w Where [disfmarker] Where's [disfmarker] where does uh [vocalsound] New [disfmarker] New York west of [disfmarker] west of uh New York City and [pause] Pennsylvania [pause] uh and uh [speaker004:] Yeah, I don't know how it came from. [speaker002:] OK. [speaker004:] So. That's New England I think. [speaker001:] New England [speaker003:] N No, it's not. [speaker004:] Yeah. 
[speaker001:] Oh, no. [speaker003:] Oh no. [speaker002:] I sort of thought they were part of the [disfmarker] one of the Midlands. [speaker003:] No, no. [pause] No. Pennsylvania is not [disfmarker] [speaker001:] "Other", it goes under "Other", definitely under "Other". [speaker004:] Well, you know, Pennsylvania has a pretty strong dialect and it's totally different than [disfmarker] [speaker003:] Pennsylvania [disfmarker] Yeah. Pennsylvania is not New England. and uh New Jersey is not New England and Maryland is not New England and none of those are the South. [speaker001:] OK. So. Another suggestion. [speaker002:] Yeah. [speaker001:] Rather than have circle fill in forms, say "Region, open paren, E G Southern comma Western comma close paren colon." [speaker002:] OK. [speaker004:] OK! [speaker003:] That's good. [speaker005:] Yeah. [speaker002:] Fine by me, fine by me. [speaker004:] Sure! [speaker003:] I like that. [speaker002:] Yeah. [speaker004:] Let's just [disfmarker] [speaker005:] Yeah. [speaker004:] And we'll see what we get. [speaker003:] We're all [pause] sufficiently [pause] tired of this that we're agreeing with you. [speaker002:] Be easier on the subjects. I think that's fine. [speaker003:] So. [speaker002:] No. I think [disfmarker] I like that. I like that. [speaker003:] You like it? OK. [speaker002:] Yeah, I do. [speaker003:] Good. [speaker001:] Actually, maybe we do one non English one as well. Southern, Cockney? [speaker004:] Yeah, and [disfmarker] [speaker005:] Yeah. [speaker001:] Is that a [pause] real accent? [speaker002:] Sure, yeah! [speaker005:] Yeah. [speaker001:] How do you spell it? [speaker002:] I think that's fine. [speaker003:] Cockney? CO [disfmarker] [speaker001:] N E [speaker003:] Yeah. [speaker004:] You could say Liverpool. [speaker003:] Liverpudlian. [speaker002:] Yeah. [speaker004:] Actually, Liverpool doesn't l [speaker002:] Alright. [speaker004:] Yeah. It's [disfmarker] [vocalsound] I'm s I ha [speaker002:] Well. Well.
I mean, pure [disfmarker] [speaker001:] OK, we'll do it that way. Actually, I like that a lot. [speaker003:] OK. [speaker001:] Because that gets at both of the things we were trying to do, the granularity, and the person can just self assess and we don't have to argue about what these regions are. [speaker002:] That's right. [speaker005:] Yeah. [speaker003:] OK. [speaker002:] And it's easy on the subjects. [speaker001:] Yep. [speaker002:] Now I have one suggestion on the next section. [speaker005:] Mm hmm. [speaker001:] Mm hmm. [speaker002:] So you have native language, you have region, and then you have time spent in English speaking country. Now, I wonder if it might be useful to have another open field saying "which one parenthesis S [comment] paren closed parenthesis". Cuz if they spent [pause] time in [disfmarker] in Britain and America [disfmarker] [speaker003:] Yes. [speaker002:] It doesn't have to be ex all [disfmarker] at all exact, just in the same open field format that you have. [speaker001:] Yep, just which one. I think that's fine. [speaker002:] Mm hmm. [speaker005:] Mm hmm. [speaker002:] with a [disfmarker] with an S which one sss, [comment] optional S. [speaker003:] OK. [speaker002:] Yeah. [speaker003:] We uh [disfmarker] We done? [speaker001:] Yep. [speaker002:] Yeah, that's good. [speaker003:] OK. um s e Any [disfmarker] any other uh open mike topics or should we go [pause] right to the digits? [speaker001:] Um, did you guys get my email on the multitrans? That [disfmarker] OK. [speaker002:] Isn't that wonderful! Yeah. Excellent! [speaker001:] Yeah. So. So. I [disfmarker] I have a version also which actually displays all the channels. [speaker002:] Thank you! [speaker004:] It's really great. [speaker001:] But it's hideously slow. [speaker002:] So you [disfmarker] this is n Dan's patches, Dan Ellis's patches. [speaker001:] The [disfmarker] what [disfmarker] the ones I applied, that you can actually do are Dan's, because it doesn't slow it down.
[speaker004:] M [speaker002:] Fantastic! [speaker001:] Just uses a lot of memory. [speaker004:] So when you say "slow", does that mean to [disfmarker] [speaker001:] No, the [disfmarker] the one that's installed is fine. It's not slow at all. I wrote another version. Which, instead of having the one pane with the one view, It has multiple panes [pause] with the views. [speaker005:] Yeah. [speaker004:] Mm hmm. [speaker001:] But the problem with it is the drawing of those waveforms is so slow that every time you do anything it just crawls. [speaker002:] Mm hmm. [speaker001:] It's really bad. [speaker004:] It's [disfmarker] So, it [disfmarker] it's the redrawing of the w [speaker002:] That's a consideration. [speaker001:] Mm hmm. [speaker004:] oh uh huh, w as you move. [speaker001:] As you play, as you move, as you scroll. Just about anything, and it [disfmarker] it was so slow it was not usable. [speaker002:] And this'll be a [disfmarker] hav having the multiwave will be a big help [speaker001:] So that's why I didn't install it and didn't pursue it. [speaker002:] cuz [disfmarker] in terms of like disentangling overlaps and things, that'll be a big help. [speaker004:] Oh yeah. [speaker001:] So. I think that the one Dan has is usable enough. [speaker004:] Yeah. [speaker001:] It doesn't display the others. It displays just the mixed signal. [speaker002:] Mm hmm. [speaker001:] But you can listen to any of them. [speaker002:] That's excellent. [speaker005:] Yeah. [speaker002:] He also has version control which is another nice e so you [disfmarker] e the patches that you [disfmarker] [speaker001:] No, he suggested that, but he didn't [disfmarker] [pause] It's not installed. [speaker002:] Oh, I thought it was in one of those patches. [speaker001:] No. No. [speaker002:] Oh OK. Well. Alright. [speaker004:] So is there any hope for actually displaying the wave form? [speaker001:] Um, not if we're going to use Tcl TK At least not if we're going to use Snack. [speaker004:] OK. 
[speaker001:] I mean you would have to do something ourselves. [speaker002:] Well, or use the one that crawls. [speaker004:] OK. Well, I'm [disfmarker] I probably would be trying to use the [disfmarker] [comment] whatever's there. And it's useful to have the [disfmarker] [speaker001:] Why don't we [disfmarker] we see how Dan's works and if it [disfmarker] If we really need the display [disfmarker] [speaker004:] Yeah. I mean. I wonder [disfmarker] I'm just wondering if we can display things other than the wave form. So. Suppose we have a feature [disfmarker] a feature stream. And it's just, you know, a [disfmarker] a uni dimensional feature, varying in time. [speaker005:] Yeah. [speaker004:] And we want to plot that, instead of the whole wave form. [speaker001:] I mean. [speaker005:] Yeah. [speaker004:] That might be faster. Right? [speaker005:] Yeah. [speaker004:] So. [speaker001:] We [disfmarker] we could do that but that would mean changing the code. I mean this isn't a program we wrote. [speaker003:] Yeah. [speaker001:] This is a program that we got from someone else, and we've done patches on. [speaker004:] OK. OK. Well, I'll talk to you about it and we can see [speaker002:] Mm hmm. [speaker005:] Yeah. [speaker003:] Cou i e I mean, y [speaker001:] So. Yeah. [speaker004:] but it's definitely [pause] great to have the other one. That's [disfmarker] [speaker003:] If there was some [disfmarker] Is there some way to [pause] have someone write patches in something faster and [disfmarker] and [disfmarker] link it in, or something? Or is that [disfmarker] [speaker001:] Not easily. I mean y yes we could do that. You could [disfmarker] you can write widgets in C. [speaker003:] Yeah. [speaker001:] And try to do it that way but I just don't think [disfmarker] it [disfmarker] Let's try it with Dan's and if that isn't enough, we can do it otherwise. [speaker004:] Right. [speaker001:] I think it is, cuz when I was playing with it, the mixed signal has it all in there. 
And so it's really [disfmarker] It's not too bad to find places in the [disfmarker] in the stream where things are happening. [speaker004:] OK. [speaker002:] And it's also [disfmarker] also the case that [disfmarker] that uh this multi wave thing [pause] is proposed to the [disfmarker] [speaker005:] Hmm? [speaker001:] So I [disfmarker] I don't think it'll be bad. [speaker002:] So. Dan proposed it to the Transcriber central people, and it's likely that uh [disfmarker] So. And [disfmarker] and they responded favorably looks as though it will be incorporated in the future version. [speaker004:] Oh. [speaker002:] They said that the only reason they hadn't had the multi the parallel uh stream one before was simply that [pause] they hadn't had time to do it. And uh [pause] so it's likely that this [disfmarker] this may be entered into the ch this central [@ @]. [speaker005:] Yeah. [speaker003:] They may well have not had much demand for it. [speaker001:] And if [disfmarker] if [disfmarker] [speaker002:] Well that's [disfmarker] that's [disfmarker] that's true, too. [speaker003:] Yeah. [speaker002:] This is a [disfmarker] [pause] a useful thing for us. [speaker004:] So. You mean they could [disfmarker] they could do it and it would be [pause] fast enough if they do it? [speaker005:] Yeah. [speaker002:] Oh. No. I just mean [disfmarker] I just mean that it's [disfmarker] that [disfmarker] that his [disfmarker] [speaker001:] Depends on how much work they did. [speaker004:] Or [disfmarker]? [speaker005:] Oh. [speaker002:] So. [pause] This one that we now have does have the status of [pause] potentially being incorporated l likely being incorporated into the central code. [speaker004:] Mm hmm. OK. [speaker002:] Now, tha Now, if we develop further then, y uh, I don't [disfmarker] I mean it's [disfmarker] I think it's a nice feature to have it [disfmarker] set that way. 
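The redraw cost being discussed above is the usual bottleneck for long-waveform displays. A minimal sketch of the standard workaround, per-pixel min/max "envelope" downsampling, is below. This is hypothetical illustration only, not the actual Transcriber or Snack code (which is Tcl/Tk): instead of plotting every audio sample on each scroll or zoom, each screen pixel gets one (min, max) bar, so a 16 kHz minute-long channel collapses from ~960,000 samples to a few hundred values per redraw. A 1-D feature stream, as suggested in the discussion, is cheaper still since it is already only ~100 values per second.

```python
# Hypothetical sketch: collapse a long list of samples into one (min, max)
# pair per screen pixel, the common trick for fast waveform drawing.
# The function name and interface are illustrative, not from any real tool.

def envelope(samples, n_pixels):
    """Return n_pixels (min, max) pairs summarizing the sample list."""
    n = len(samples)
    if n == 0 or n_pixels <= 0:
        return []
    pairs = []
    for p in range(n_pixels):
        # Integer slicing so the chunks tile the signal without gaps.
        lo = p * n // n_pixels
        hi = max(lo + 1, (p + 1) * n // n_pixels)
        chunk = samples[lo:hi]
        pairs.append((min(chunk), max(chunk)))
    return pairs

# e.g. a 960,000-sample channel reduced to 800 vertical bars to draw:
# bars = envelope(channel_samples, 800)
```

The same reduction would apply whether the input is raw audio or a precomputed feature stream; only the drawing layer (Tk canvas, a C widget, or anything else) changes.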
[speaker001:] I think if [disfmarker] if [disfmarker] if one of us sat down and coded it, so that it could be displayed fast enough I'm sure they would be quite willing to incorporate it. [speaker002:] Mm hmm. [speaker001:] But it's not a trivial task. [speaker004:] OK. [speaker002:] Mm hmm. Yeah. I just like the idea of it being something that's, you know, tied back into the original, so that other people can benefit from it. [speaker001:] Yeah. [speaker002:] Yeah. However. I also understand that you can have [pause] widgets that are very useful for their purpose and that you don't need to always go that w route. Yeah. [speaker005:] OK. [speaker001:] anyway, shall we do digits? [speaker002:] Yeah. [speaker003:] Yeah. Let's do digits, uh, and then we'll turn off the mikes, and then I have one other thing to discuss. [speaker002:] OK. [speaker004:] I actually have to leave. [speaker001:] OK. [speaker004:] So. Um. I mean [pause] I had to leave at three thirty, so I can [disfmarker] [vocalsound] Well, I can wait [pause] for the digits but I can't stay for the discussion [speaker002:] Uh oh. [speaker003:] Oh. [speaker001:] OK. Well, you want to go first? Or. [speaker004:] I c [pause] I have to make a call. So. [speaker003:] OK. Well, we'll talk to you about it [disfmarker] [vocalsound] Uh [speaker002:] Well, should we [disfmarker] e should we switch off the g [speaker004:] Um. [speaker001:] Do you wanna go do digits or do you wanna just skip digits? [speaker004:] No, I can do digits if [disfmarker] if [disfmarker] But I don't wanna butt in, or something. [speaker001:] Then [disfmarker] Alright. You go ahead. [speaker004:] But if there's something on the rest of the [disfmarker] I'm [disfmarker] I'll be around just have to make call before quarter of. So. [speaker005:] Mm hmm. [speaker004:] So I [disfmarker] Or we can talk about it. [speaker002:] Ke [speaker001:] Why don't you read the digits? [speaker004:] OK. [vocalsound] Alright. 
[speaker003:] Yeah, why don't you read the digits and then you can [pause] go. [speaker004:] Oh, this is the new one. [speaker003:] Yeah. [speaker001:] Yeah, don't [disfmarker] Don't read the old one. [speaker004:] Alright. The [disfmarker] And the time is. OK. [speaker001:] OK [speaker003:] OK. [speaker002:] Turn it off. But wait till he [disfmarker] OK. [speaker001:] And [speaker004:] OK, we can start? OK. Thank you, Adam. I will call you then when we are ready. I believe, maybe one hour, in that sense. OK, welcome to our next meeting. Um [pause] I have couple of points we should discussed. One is related to the proposal, Michael, please can you close the door? [speaker006:] Yep. [speaker004:] Thank you. And, um furthermore, as most of you know I will leave in tenth of December, I think we should discuss how we proceed in the meantime, because I get, I would like to say, different signals from Siemens, I believe, in the meantime it's a higher probability that I will return, but based on the pro uh, procedure to get a J one visa and the IP six six it will be [disfmarker] I would like to say, earliest stage will be end of January, so that means [pause] we have to get over six weeks and [disfmarker] Then furthermore I would [pause] like to know the status of everybody. I know for sure, OK, [vocalsound] um [pause] Miguel, you will [pause] have your plans for this, uh, ongoing process, uh, concerning your work. OK, from you it's definitely clear, in principle. OK, for Michael, what are your current plans and Dietmar, maybe you then, also. 
What we have discussed, in principle, concerning the ongoing process with U [disfmarker] [vocalsound] USAIA and also this, uh, your current work, and from my point, it seems to be clear that I will go back definitely at tenth of December, and, as I mentioned before, probably will be [disfmarker] more than fifty percent, I would like to say, that I will be come back to get [pause] the freedom to launch here the project in the sense in whatever kind of form and get, in principle, paid by another project within Siemens. So [pause] the point is how long I will stay. I believe, a couple of months definitely and based on the success, I think that's always an ongoing process to go back to [disfmarker] uh, to Germany for a couple of weeks and then come here and [disfmarker] and go on with [disfmarker] with this work and [disfmarker] so that is [pause] my current opinion, concerning how Siemens deal with [disfmarker] with my, uh, ongoing effort. And, furthermore, I would like to [disfmarker] First of all, thanks to uh [pause] Dietmar because he started to use a testbed. [speaker002:] Oh. [speaker004:] Eh in principle, in his spare time. Yeah! [speaker002:] Yeah. [speaker004:] Uh I will [disfmarker] So [disfmarker] so the first, [vocalsound] uh uh, box, this morning. I will set up, definitely, in my spare time, in the evenings and the weekends the second box, and I'm looking for [pause] two other volunteers who are setting the third and the fourth box. [speaker003:] But wh wh wh what are the box? [speaker004:] A box is a [pause] s P C. [speaker002:] P Cs. P Cs. 
[speaker003:] But what w [speaker004:] Equipped with [disfmarker] with [disfmarker] we should equipped it with [disfmarker] with Linux or maybe too with Free BSD, [speaker002:] Well the first [speaker004:] I doubt [disfmarker] I don't know [speaker002:] stop, [disfmarker] [speaker004:] and we'll [disfmarker] [speaker002:] the first thing [pause] the most, eh, complicated is to fit them with the removal [disfmarker] removable [disfmarker] um, hard disk. [speaker004:] And now we have an expert. [speaker002:] But it's [disfmarker] Yeah. So the first computer I had to [disfmarker] um the Oh, what's the right word? I had to put everything out to [disfmarker] to put the [pause] removable hard disk in [speaker004:] Yeah, to remov So that you can pull eh the [disfmarker] plug in the removable hard disk to the ID bus [speaker002:] and [disfmarker] and it's [disfmarker] [speaker003:] OK. [speaker002:] Yeah, it's [disfmarker] [speaker004:] and [disfmarker] and OK, maybe it's a little bit [disfmarker] [speaker002:] but i but it's not [disfmarker] not, um [pause] very, uh [pause] difficult or something like that. [speaker004:] Uh huh, [speaker002:] So you just have to [disfmarker] [speaker004:] And [disfmarker] Why you are laughing? I ju I don't know, I cou I s I didn't see him, so I don't know what he did. I know in the meantime it's running, [speaker002:] Uh! [speaker004:] and I will definitely set up the next one. [speaker002:] So Wilbert will set up the second computer. [speaker004:] Yeah, though my idea [disfmarker] maybe you eh can equip it with a I don't know what your time constrain whether the time constraint allows you to set up one with [disfmarker] with Free BSD, for instance. [speaker003:] You know, I [disfmarker] I can [disfmarker] I can build, eh, any computer uh you n [disfmarker] installing whatever. It's needed, [speaker004:] Yeah. OK, uh huh. [speaker003:] I mean, I will enjoy doing it. [speaker004:] Do you find the time? [speaker003:] So. That's OK for me. 
[speaker005:] Mmm. [speaker004:] Maybe also to uh support your [disfmarker] [speaker005:] No promises. [speaker004:] No promises? [speaker005:] No. [speaker004:] and even maybe to [disfmarker] to, uh [pause] put your I G M P stuff on it. Eh [speaker005:] No promises. [speaker004:] No promises. [speaker005:] No. [speaker004:] Ah, he's very hard to sell. [speaker002:] So it's about one hour to put this, uh [pause] hard disk in and a further hour to, uh [disfmarker] [pause] Well, two or three hours [vocalsound] to install the operating systems. [speaker004:] Nah, because then the network is still running then. Could we wait until the network is running? Because currently it's not possible, uh, based on certain fire wall constraints. [speaker002:] Yeah. At the moment. [speaker004:] I believe David is, uh, currently working on that. [vocalsound] to set up the fire wall, and he's not an expert for that and when I get the signal I will ask one more time. Nah, but nevertheless thanks to Dietmar that he [@ @]. [speaker005:] Did the networks have internet connectivity? [speaker004:] Yes, we have, eh, for [disfmarker] for secure socket, and, uh [pause] for web access, the fire wall is open. That's all. And [disfmarker] But currently it's not working, right? [speaker002:] Right, um [pause] On Friday when I installed, um [pause] uh [pause] Linux the first time [vocalsound] I got access to the local network here, um, but not to any machines outside of, uh, ICSI. And, till Monday there's no access at all to any machines, also not to inside machines. [speaker004:] But then, uh [speaker002:] So, I think it's related to the work, um [pause] they've done on Sunday here and I sent [disfmarker] [speaker005:] But normally you should have access to th all the internet, r to outside. [speaker004:] Yes. Yeah, You have internet outside access, [speaker005:] Full. [speaker004:] but no telnet and not R login or whatever kind of things, you know. [speaker005:] A restricted in. Not. Huh.
Not a [disfmarker] no FTP. [speaker004:] Yeah, FTP yes. [speaker005:] H [speaker004:] But no telnet. [speaker005:] Hmm. [speaker004:] Uh [pause] because you can [disfmarker] The [disfmarker] the [disfmarker] the point is that we will use it, as it is uh described several times, that we will use it as a local network, in principle, but it doesn't make sense to use certain kinds of uh, media transportation f to put some stuff you need from your PC or your LS [pause] to this, uh [disfmarker] uh, [vocalsound] uh, to this testbed, you know. [speaker003:] Mm hmm. [speaker004:] So you download it, if that s must be possible from the network, but normally it should be used as um [disfmarker] uh for demonstrations or to figure out if you have a scenario where you have to use more than, uh, one box which you have on your desktop, cuz that is the primary, uh uh [pause] reason for this testbed. So, definitely you have to have web access to download the necessary files. [speaker003:] Hmm hmm hmm. [speaker004:] So and then furthermore, one more thanks to Claudia. He [comment] did a lot [pause] of work [speaker001:] Right. [speaker004:] and I could only pass her some advices and some discussions with her concerning the web pages. [speaker001:] Ah, the messages. Oh. [speaker004:] And now we are, in principle, in the state we have to do it with a project area. But they're also some contributions from you guys requested. Um and the alumni page, in principle, we have to, you know, but it's primarily my job, I believe, uh, to set up so that it is, uh adjusted really to the people who are worth to be alumnis [disfmarker] alumnis, yeah. Is it "alumni" [disfmarker] "alumni"? I'm [disfmarker] [speaker001:] "Alumni". [speaker004:] "Alumni". I'm not sure how to pronounce this word. [speaker006:] But that's being changed now. [speaker004:] Huh? [speaker006:] That's being changed.
[speaker004:] And, um [disfmarker] But, um I think everybody should take a look to the web pages, and inform Claudia if something, uh, has to be improved or if there's some comments [speaker003:] Mm hmm. [speaker004:] or uh He's laughing all the time. Why? Uh because it is [disfmarker] you know, that's very important maybe now not for us but maybe for our successors and maybe somebody will come back once a day or send some students or some colleagues or whatever. So, my point of view [disfmarker] I will doc I will documented everything when I will leave here in the sense that potential successor as a group leader can hook in with these things and that these web pages will be updated every time when it is necessary. We tried to do it, so that it must be at least updated only when really major changes happen. So that means somebody's leaving the group, uh, somebody is joining the group, a new project is started, but, uh, the effort should be minimized as possible. And, so please take a look to the web pages and I have announced in the project area the major headlines. And especially, Wilbert, maybe you can write something about yours? I call it, I believe, "Multicast" and then "Development deployment", eh, for [disfmarker] for source specific multicas maybe a few lines, maybe a picture. And something like that should be added to USAIA? OK, that's my stuff. And um then the next point is then this active routing stuff? Um I would like to discuss it before, but you were busy. [speaker003:] Yeah, [speaker004:] And so [disfmarker] [speaker003:] OK. [speaker004:] um, I believe, it must be adjusted to the current activities a little bit more. [speaker003:] Mm hmm. [speaker004:] And, um I [disfmarker] I think it doesn't [disfmarker] It is not worth to discuss it now because nobody raised it before. [speaker003:] Yeah, I think so. [speaker004:] And, uh, so l let us have a bilateral discussion afterwards. 
I would [disfmarker] what I would like to have is that um, your contributions is related to certain kind of things happened here. And that means really to the project proposal itself and also maybe to the USAIA architecture, in that sense, that at certain stages, cuz it's a general architecture, certain stages maybe to future [vocalsound] mobile core networks, that it is related to these kind of things. Um, having [disfmarker] uh, I [disfmarker] what I have in mind is that things should not be started from the end system. That's my principle concern, because that will never be happen, from my point of view. It should be started as the edge device, so that it cu certain kind of mechanism, whatever, is enabled between the ingress edge device and the outgress a edge device, because it will be at the certain stage IP based in the future generation telecommunication networks before its goal. [speaker003:] Mm hmm. That's OK. [speaker004:] Fuel it to the [disfmarker] over the internet, and b between these both boxes, I can assume uh because the mobility happens there as a normal natural thing and the forwarding process and the routing process should be no more separated, from my point of view. And to have there a certain kind of intelligent mechanism, that the packets can be used to set up the routes because they have all the information. They know where they have to go because IP addresses are still there and [disfmarker] within the packet. [speaker003:] Hmm hmm hmm. [speaker004:] I mean, how can this use, uh, within this area, and how this area is then aligned to the more traditional routing within the internet backbone itself. That's my question. And I think that is good a research area, then I think there's a lot of [disfmarker] I'm not a e xpert at all for this. But I [disfmarker] I [disfmarker] I believe, there's a lot of potential for that. And we should discuss it maybe bilateral afterwards. [speaker003:] OK. OK. [speaker004:] Yeah? 
So, one more time, everybody should take a look at the web pages from Claudia, at the general remarks, and should write, based on the, uh, headlines in the project proposal, the um things that have happened, the things they still have in mind, uh [pause] so that we can update these last [@ @] you know [@ @] present the last major part of the web pages. [speaker002:] The pages are already on the web? [speaker003:] OK. [speaker001:] Yeah, yeah. [speaker004:] Yes, [speaker002:] Yeah. [speaker001:] It is. [speaker004:] you can now access it normally with maybe WWW ICSI Berkeley EDU, [speaker001:] So there are some gaps. [speaker002:] OK. Hmm. [speaker004:] and so. [speaker002:] OK. [speaker001:] Yeah, so there are some gaps left and, I don't know, you [disfmarker] Maybe you can also read through all the text which is on the web pages, cuz I'd like to change the text a bit, cuz sometimes it's too long, sometimes it's too short, maybe the English is not that good, so um, but anyways [disfmarker] So I tried to do this today and if you could do it afterwards it would be really nice, cuz I'm quite sure that I can't find every, like, orthographic mistake in it or something. [speaker003:] Uh, there [disfmarker] there [disfmarker] there are a couple of comments I [disfmarker] I have about the web pages. [speaker001:] Good. [speaker003:] One is [disfmarker] is the, uh, I think it's Joe's, uh, graphic, I think it should be better. [speaker001:] Yeah, yeah, yeah, [speaker004:] Yeah. [speaker001:] the graphics are too bad. [speaker003:] Maybe you are working on it. [speaker001:] I know. [speaker004:] You are definitely right. It was in good shape, but [vocalsound] then we disturbed it. [speaker003:] OK. And then maybe the [disfmarker] the ICSI logo could be [disfmarker] [vocalsound] could be [disfmarker] [vocalsound] could be, [speaker001:] I can't change anything with the logo.
[speaker003:] uh [disfmarker] No, it is just, uh [disfmarker] Maybe we can [disfmarker] we can, uh, turn the white color into a transparent color so it will [pause] look more natural. [speaker001:] Yeah, OK. Good, yeah I ju [speaker003:] It's just a suggestion. [speaker001:] Another thing is I want to add a link on the logo so that you can go from the logo back to the [pause] ICSI web p home page. [speaker003:] To the [pause] home page. OK. That's [disfmarker] that's OK. [speaker001:] So. These are th some things I have to do. Yeah, and if you have something which I should put on the web page for you, so yeah, some pictures would be nice, a few sentences would be nice, and yeah, it depends on how much time you have. And then the page for inf about [disfmarker] So, some information for people who are newcomers, or, I don't know a good word for it, should we use uh "newcomers" or [disfmarker] So, for somebody who is new, arriving here, or [vocalsound] wants to [disfmarker] to, uh, send an application? [speaker005:] Newbies. [speaker001:] S Huh? [speaker005:] Newbies. [speaker001:] Newbies? Yeah, OK. So, there I have to add some information, I don't know, I t I tried to set up everything by today for this page cuz I've started with this. Um, if you have any kind of ideas what was important for you, then just let m cuz I have some [vocalsound] points like, um, [pause] I don't know, f yeah, like Pacific Bell, like uh, long-distance carriers, then like uh, cars and car insurance, health insurance [disfmarker] So, health insurance is k something I can only say for German people. [speaker003:] Hmm hmm hmm. [speaker001:] So maybe I don't know if this is different in Spain. That would be fine too, to have some information, or in D nnn [vocalsound] Holland. [speaker003:] OK. I [speaker001:] I don't know. [speaker003:] I will take a look and I will tell you something. [speaker001:] Yeah, [speaker004:] Mm hmm.
[speaker001:] so how [disfmarker] maybe how you did it or whatsoever. [speaker004:] OK, I will send you some, because I have something in mind. [speaker001:] So yeah, then what else did I have? Then another point is like, uh, what about the bank, uh, so, transferring your money from Europe over here, what is the cheapest way, or what could you recommend or something. Um, another point, yeah, and then there's [disfmarker] there's another point like going out in Berkeley, or what to do in Berkeley or in the neighborhood. [speaker005:] This [disfmarker] a lot of this stuff is not networking-group [disfmarker] NSA, uh, restricted. [speaker001:] No, I know. [speaker004:] Yeah, but nobody does that, you know. [speaker001:] I know. But there's nothing like this on the [disfmarker] on the ICSI page, as far as I know. [speaker004:] So, that's the point. We discussed it a couple of weeks or months ago. Uh, the point is, normally it is an ICSI task anyway. [speaker003:] Mm hmm. [speaker005:] M [speaker004:] But [speaker005:] Yeah, but a at least then it should be [disfmarker] mmm, I think, uh, an ICSI part, [pause] and not network restricted. [speaker004:] Right. [speaker003:] Well, th they have, uh, a [disfmarker] a white booklet, you know, probably [disfmarker] [speaker004:] Yeah, that is from nineteen ninety-one or nineteen ninety-two, you know. A [speaker001:] Yeah, but it's not in the [disfmarker] This is not on the web. [speaker004:] A And this is not on the web, [speaker003:] No, no, no. [speaker004:] and the [disfmarker] the [disfmarker] the [disfmarker] [speaker003:] I [disfmarker] I just [disfmarker] I [speaker004:] people normally [disfmarker] that was my [disfmarker] my, uh [disfmarker] [speaker003:] In fact the Pacific Bell phone number was not working; for example, when I tried the [disfmarker] the phone number in this booklet it was not working. [speaker004:] Hmm.
[speaker003:] But the thing is that maybe [vocalsound] this could be a starting point or [disfmarker] or [pause] if [disfmarker] digest [speaker001:] Yeah. I don't know, maybe I'll talk with Jane about this. If we can use this information, I don't know, somewhere on the ICSI home page too. [speaker005:] M move it onto the normal ICSI site. [speaker001:] Yeah. Maybe. [speaker005:] Maybe every group has somewhere hidden, uh [pause] a gro uh deep [pause] outgoing [disfmarker] [speaker004:] We can do that. Yeah, but the point is that nobody feels responsible for that. And if it's [disfmarker] [speaker005:] Yeah, but eh [disfmarker] but my point is that it should not be under the networking group thing. [speaker003:] It's a common [disfmarker] [speaker004:] Yes, Uh, yes, [speaker005:] eh [disfmarker] But [disfmarker] [speaker004:] I see your point there. [speaker005:] OK. [speaker004:] That's right. But if it is under the networking pages, the group leader is responsible for keeping the information up to date. I believe we are the group where people join and leave much faster than in other groups. [speaker003:] Mm hmm. [speaker004:] And so, the most probable [disfmarker] as of, uh, it's likely that, uh, the people here in this group need this information um, much more [pause] often than other groups. [speaker005:] But [disfmarker] b But that could still be applicable if you put it under the ICSI site, if you take the responsibility for that. [speaker004:] Yes, I see th [speaker001:] Yeah, I'll talk to Jane about this. [speaker005:] Uh, [speaker004:] Yeah, [speaker005:] So I think th I think both arguments are not ar arguments against having this under the normal ICSI page. [speaker004:] but we [disfmarker] we can do that. [speaker005:] Or only the networking, uh, page. [speaker006:] But the problem is, are you allowed to change anything on the general ICSI server? [speaker005:] Yeah.
[speaker001:] No, this is why I have to talk to Jane. [speaker006:] Yeah. [speaker004:] Yeah, that's [disfmarker] yeah, [speaker006:] So you may need just a link from the main ICSI page to the NSA page. [speaker004:] that's a [disfmarker] We have the [disfmarker] we have the directory structure of the real UNIX tree in compliance and in accordance with the structure of the web pages. And, um, that means, uh, [vocalsound] the group leader normally does not have access to the general web pages. He has access to the N S A pages. And, um, but I think that's an administration problem, and everybody agrees that the information [vocalsound] page must be on the general, on the highest level. OK, I have no problem with that. [speaker002:] But, uh [speaker004:] But [disfmarker] but I think we discussed it still [disfmarker] [speaker002:] your argument is, uh, I think, that we have to have the right to change it. [speaker006:] Yeah. [speaker002:] So if there's anything changing in insurance or something [disfmarker] something like that, [vocalsound] that we have access to this page further on [disfmarker] to chan to make any changes. [speaker005:] Yeah. [speaker001:] Yeah, but this is not a problem at all, I [disfmarker] I mean, if y you are coming from Germany and you know that something has changed with the health insurance, then you send Jane an email like "Could you please change this, blah blah blah." [speaker002:] Yeah, but it's much more complicated. [speaker006:] Yeah, but that's [disfmarker] that's effort. That's the whole problem, that nobody wants to do it. [speaker002:] Yeah. [speaker004:] Yeah. [speaker001:] Yeah, OK. [speaker004:] You know how it [disfmarker] you know the process here within the NSA group. You know [speaker001:] Yes.
[speaker004:] And [disfmarker] and that was the reason I [disfmarker] we finally decided here to put the information on the N S A page; we discussed this still a couple of months ago. [speaker002:] I think the best solution is really to put a link on the ICSI page to this stuff. [speaker005:] Yeah. [speaker006:] Yep. [speaker002:] And the stuff itself is, uh, [vocalsound] located on our server. [speaker004:] In our directory. [speaker002:] Yeah, in our directory, [speaker004:] Yeah. [speaker002:] yeah. [speaker004:] Yeah, we can do that. [speaker001:] Yes, also fine. [speaker004:] We then create a link and then we have to check with Jane, [speaker001:] Yeah. [speaker004:] OK? [speaker001:] Another point is I want [disfmarker] I would like to add another button, something which is called like "events" or, I don't know, yeah, I guess I'd call it "events" [speaker003:] Hmm? [speaker001:] like [vocalsound] where we could s uh, announce talks which are, you know, future [disfmarker] talks for the future, or [disfmarker] and where we could also have a link [vocalsound] for instance to the two workshops which were going on, uh, last summer, so, Hannes' workshop and then the workshop from Oliver, so that there might be a link behind these workshops and [disfmarker] or former talks we had, so like from people who have been here before and who want to, uh, publish their slides, or an article, or whatsoever, behind there. So that we have like a button for events where [vocalsound] workshops can be announced or old workshops could [disfmarker] could be, um, shown or whatever. [speaker003:] Mm hmm. [speaker004:] Of course that is a living thing. I think it's a good idea, not only for workshop and talk announcements, because they are announced in a broader sense.
Eh, but I think the idea from Claudia is very good, that the slides from the talks here, if it's worth having them on the web, anyway, um, should be published in such a way that people can access them. [speaker003:] Mm hmm. [speaker004:] For instance, I have my USAIA slides. Maybe, if some things are going on, maybe you have some Multicast slides concerning your work on IGMP [speaker003:] Yeah, that's OK, too. [speaker004:] and uh, and if we have some talks from EXTERM, and we had some in the past, we can also put those slides on [disfmarker] on these things [disfmarker] because then there's still a growing context for us, an opportunity for the people there, you know: "Oh, this guy did something there, maybe I can contact him and ask him, would you like to do something, contribute, whatever." [speaker003:] Hmm hmm hmm. [speaker004:] Uh, so that's [disfmarker] and I think it's a great idea from Claudia to [disfmarker] to have this event page because [vocalsound] it's not a big deal, you know. [speaker001:] No, it's easy to [disfmarker] to add it, [speaker004:] It's [disfmarker] it's [disfmarker] [speaker001:] I mean. [speaker004:] Yeah, and then it's [disfmarker] Nah, OK. If there are no events, then there are no events. [speaker001:] Yeah, yeah. [speaker004:] But we still have a few which happened in the last year, and OK, then it is a growing thing, but it's done automatically when the group leader [vocalsound] or the group, uh, puts some things, eh, also on the web. Something in the pipeline. Yeah? Does everybody agree? [speaker002:] Um, I don't know. [speaker003:] OK. [speaker002:] Maybe it could be a good thing, if we have such an event button, [vocalsound] that, um, somewhere beneath this button or wherever, [vocalsound] there's a date when the last update was. So that you know whether it's worth clicking this button to see if there's a new, um, [vocalsound] announcement or not. [speaker001:] Yeah, OK.
[speaker002:] S So otherwise you are often clicking this button and you get your old information or even an [disfmarker] uh empty page. [speaker004:] Maybe there's [disfmarker] [speaker001:] Yeah, but this will be down at the bottom of the page anyway. [speaker004:] Yeah, maybe we can have "updated on", you know, on the button itself? [speaker002:] So Yeah, something like that. [speaker004:] And then, "updated" and the current date, when [disfmarker] when something happened. [speaker002:] "Last updated", yeah. [speaker004:] I don't know how we deal with that. [speaker001:] Yep. [speaker004:] Is it another thing we have to? [speaker001:] Yep. [speaker002:] So it's just a su suggestion, I don't know how easy it is to [disfmarker] to implement something like that. [speaker004:] Yeah. [speaker001:] Huh! OK. [speaker004:] OK you Well. Let's consider [vocalsound] [vocalsound] "update event button". OK. So, and then the next point, I believe, yeah, [nonvocalsound] So, [comment] not the last one, [@ @] is that I would like to know the status, as far as it is possible, from everybody concerning his future intentions here at ICSI: when he will leave, whether he can contribute, or whether he currently considers [vocalsound] contributing here to the proposal. Because you know, I'm facing the problem: if I go back to Siemens and I say "OK, there's potential." "There are potential contacts." Currently, maybe no money, but if we go in one direction [vocalsound] it [disfmarker] it is worth knowing if people can still go on in that area of that kind of work when they are back at their university or their company or whatever. And even if it's only part-time jobs, and [disfmarker] and if it comes to a real proposal, maybe the people can hook in, uh, one more time. Because I need this information definitely t uh, to inform the uh, the Siemens management that it's worth sending me back here. So maybe Wilbert, you can start.
You will leave at the end of January, definitely. And currently you see no chance, as far as I know, uh, to stay here longer, and either you will go back to KPN or you will stay here and find a job opportunity at another company, is that right? [speaker005:] Yes, yeah. [speaker004:] Can you give us more details concerning the status? [speaker005:] No, [speaker004:] No? [speaker005:] no. Pending. [speaker004:] Pending. [speaker005:] Yeah. [speaker001:] See, for me it's the same, I can also say, uh, for me everything has completely changed, so now I go back and I don't know what to do, so it's like I have to sort everything out when I'm back in Germany [speaker004:] Yes, but you have no [disfmarker] [speaker001:] but I won't go back to the university [speaker004:] yeah? [speaker001:] so I don't think so. [speaker004:] So you are looking for a job. That's probably then [disfmarker] OK. In principle, I would say you are lost for the NSA group, [speaker001:] Then it's like yeah. [speaker004:] because ffft [comment] that's uh, it would be a [disfmarker] a wonder if, uh, you hooked in in the same area, you know. [speaker001:] Yeah, No, no, no. Yeah. [speaker005:] Uh, but that's also true for me, [speaker004:] Yeah. [speaker005:] I'm kind of lost for the NSA group. [speaker004:] Yeah. Yeah. I think so, yeah. [speaker001:] "Lost" is maybe a bit like [disfmarker] [speaker005:] And oh, also [disfmarker] also [speaker004:] Yes, from the work contributions, you know. [speaker005:] Well, b but I think even [disfmarker] also K P N i is lost for the, uh, for the I C [speaker004:] Yeah. Yeah. Uh, do you know whether they have [disfmarker] [speaker005:] There are a lot of, uh, z budget cuts going on and nothing is possible anymore, so. [speaker004:] Um, so from your side the statement is, um, that, uh, independently of your person, uh, KPN is not [disfmarker] they're not willing to send us another guy here to ICSI anyway. [speaker005:] Yes. Absolutely.
[speaker004:] And, if they are interested in, uh, a certain kind of project, then it must be related to a certain kind of real service or a certain kind of end system or real product or whatever kind of thing [vocalsound] with respect to UMTS, [speaker005:] I it [disfmarker] It will be a different thing. [speaker004:] right? [speaker005:] It will be a [disfmarker] it will be a start-up. And I know they're [disfmarker] they're working on a w a few of them in this area, but it's [pause] ou outside of ICSI. [speaker004:] Mmm. So, KPN and [disfmarker] and ICSI in principle. [speaker005:] Yeah. [speaker004:] After you leave here, that is. Yeah, and then [comment] what do you intend to do in the last two months, besides, of course, looking for, [speaker005:] u [speaker002:] Holidays [speaker005:] Th [speaker004:] yeah. [speaker005:] Um, I am working now on [disfmarker] on something, uh, an open source router. [speaker004:] This is the one from Mark, right? [speaker005:] No, in [disfmarker] [speaker004:] Not? [speaker005:] well, it's a little bit outside of Mark's. I'm [disfmarker] Right now I'm interested and I'm looking into it and I'm working [disfmarker] started yesterday, actually, working on it, to do an implementation of the protocol that was designed by Mark, [vocalsound] um, on an open source router that's different from the one in Mark's project. [speaker004:] Mm hmm. Yeah, and do you believe that you can finish that by the end of January? [speaker005:] S s I hope so. I [speaker004:] It sounded [disfmarker] sounded to me a little bit [disfmarker] little bit [pause] like a hard [disfmarker] hard time constraint, right? [speaker005:] That would be cool. Yeah. It's hard work. Yeah. That's right. But I'm working on it. [speaker004:] Mm hmm. OK, Mi Miguel, maybe you can say something also. [speaker003:] Oh, I'll be [disfmarker] I'll be here until the end of July.
And, uh, well, eh, even later, [vocalsound] if, uh, there is some interesting work, uh, going on, well I [disfmarker] I will [vocalsound] be able to [vocalsound] to do some work from Spain too. [speaker004:] Mm hmm. [speaker003:] So, that's my [pause] future [vocalsound] uh forecast. [speaker004:] Mm hmm, and, uh, do you have some students who could join if there is [disfmarker] [speaker003:] Yeah. [speaker004:] I assume there is a certain activity. Maybe i if you will leave in July [speaker003:] Yeah, well, you know, uh, the problem [disfmarker] I think that we have a [disfmarker] a problem [disfmarker] or a difference in Spain, at least, [vocalsound] compared with the Americans. And it is that our students are, uh [disfmarker] [vocalsound] are not getting involved [vocalsound] in research until they have, uh, finished their studies. So, uh, it's something a little bit, uh, strange but this [disfmarker] this is the w [speaker004:] What? They are not [disfmarker] sorry? They are not involved in [disfmarker] in research? [speaker003:] Yeah. You know [disfmarker] [speaker004:] What are they doing then? I'm sorry. [speaker005:] Lectures. [speaker004:] Huh? [speaker003:] You know, they [disfmarker] [speaker004:] Oh, I see, OK. [speaker003:] You [disfmarker] you have to [disfmarker] to pass the different tests and you have to study the different subjects. And, [vocalsound] until you have [vocalsound] finished with this, uh, well [disfmarker] OK, you can do it on [disfmarker] on your own, [speaker004:] Mm hmm. [speaker003:] but [disfmarker] but the thing is that [vocalsound] you're not supposed to do it. And usually students are pretty busy, [vocalsound] too busy to do other things. [speaker004:] Mm hmm. [speaker003:] Uh. [speaker002:] And there's no final master's thesis or something like that that they have to do? [speaker003:] Ah yeah em nnn, [comment] not really. You know, there is. There is. [speaker002:] Mm hmm.
[speaker003:] But, [vocalsound] uh, it is not research work. Usually it's application work. So you build a web site, your own E-commerce site, uh, [vocalsound] whatever, [speaker002:] Mm hmm. [speaker003:] or a program or [disfmarker] [speaker004:] Why didn't I study in Spain? [speaker003:] but [disfmarker] But it is not, uh, something where your teacher provides you some research papers and you, [vocalsound] let's say, build a simulator or try to. [speaker004:] Mmm. Mm hmm. [speaker002:] Mm hmm. [speaker004:] There are certain tasks [speaker002:] OK. [speaker004:] and so? [speaker003:] So we have, uh, other students, which are, uh, mmm, people who have a grant, and they are [disfmarker] [vocalsound] they are starting as, uh, researchers, aaa, [comment] but this is only connected [vocalsound] to your own funding, so if you [disfmarker] if you have projects [vocalsound] you can have, uh, people working with you. [speaker004:] Mm hmm. [speaker003:] If you don't have [vocalsound] a research project, [vocalsound] well, you cannot pay them, so u u usually [vocalsound] you don't have anybody. So it's kind of, uh, [vocalsound] dependent on the funding we have in some projects. So w it's not sure, [speaker004:] Hmm? No, [speaker003:] not really sure. [speaker004:] there. We're not talking about sending students here to I C S I. I'm talking about whether, if there is an activity, at least with the NSA group, certain kinds of activities could also apply at the university for master's works, diploma theses, [pause] uh, whatever kind of thing that [disfmarker] [speaker003:] Yeah, we [disfmarker] we [disfmarker] I think we could [disfmarker] We could find, uh, some, eh, work force. OK? [speaker004:] Mm hmm.
[speaker003:] Because [disfmarker] well, you [disfmarker] you can, eh, fo [vocalsound] redirect, you know, the [disfmarker] the work as [disfmarker] uh, let's say, as subject work of something [vocalsound] that you can [disfmarker] [vocalsound] you can try to, eh, s In some subjects, in fact, [vocalsound] eh, the [disfmarker] the [disfmarker] the, uh, topics are more open, so [vocalsound] you can introduce some research topics too. But the main thing is that there is this big difference, because, eh, students are not supposed to do research, eh, work until they have finished their gra their degree. [speaker004:] Mm hmm. [speaker005:] You can change that. [speaker003:] Yeah, sure. [speaker005:] M uh, maybe you sh [speaker003:] Sure. [speaker005:] I think it's pretty healthy for a student to [disfmarker] [vocalsound] to have a project, eh, where you actually have to do some part of research or. [speaker003:] Yeah, I think so too, but, eh, you know, it also depends on the subjects, because sometimes subjects are fixed [speaker005:] Sure. [speaker003:] so you cannot [disfmarker] you cannot, uh uh, give any [disfmarker] any stuff you want. [speaker005:] OK. [speaker003:] You have to [disfmarker] [vocalsound] to provide a certain [disfmarker] a certain frame [pause] for the subject too, [speaker005:] O Of course. [speaker003:] but, you know, sometimes, especially in the [disfmarker] [vocalsound] in the latest, eh uh, courses, uh, things are probably more open and, well, it [disfmarker] it is something that can be changed. [speaker004:] Mm hmm. OK. Thank you. Claudia, as I mentioned, do you have other things? For me, as I said before, I will be funded by a d uh, by the ministry of, uh, the German government, for research in the project called Invinet. That seems to be certain.
And I got absolute freedom to do what I would like to do, and I would like to go on with the QS-in-advance mechanism, and, uh, in relation to USAIA I also have the [vocalsound] activities with Georgia Tech. They have a PHD now. It is settled, and my next, uh, generation of contract is also going in that direction, so the topic is still going [disfmarker] going on. And when I come back here, I would like to use these things too, to maybe come to a [vocalsound] closer, bigger project, um, outside of this goal, with contributions here, first of all from the NSA group and then from [disfmarker] from other partners. But this I have definitely ou uh, discussed with my management, and [pause] this will happen, I believe, in December, maybe, and in the beginning of January. And I will keep the management of ICSI informed anyway, but also, my suggestion is that we have a weekly phone conference and email exchange [pause] during that time, so that we can inform each other [vocalsound] concerning what are the questions, what are the problems, what is the status and so on. Uh? But I think we will uh, detail these things in one of the last meetings. So how is your status and [disfmarker] and your ideas and your plans and your future and [disfmarker] [speaker002:] Yeah. Well I will [disfmarker] [speaker004:] You're addicted to the testbeds you run on the hour, [speaker002:] No, no, no, no, no. [speaker004:] so that's [disfmarker] [speaker002:] So I will stay here till the end of February, and afterwards, um, in Berlin I'm, yeah, funded by an institution of the German government, so they are paying me, and I have to do research for them, of course. I have to, [vocalsound] um, well, uh, reach some goals and so on. But on the other hand there will be some time left, I think, to do some work in [pause] this area I started here.
And for me, myself, I'm also very interested in [disfmarker] in continuing this work, so that will be manageable, and, um, on the other hand there are also [vocalsound] uh, some students, or [disfmarker] well, at the moment it's [pause] quite difficult to find students, eh, who are doing their master's theses and so on, because there are not a lot of students, uh, at this stage in Germany now. But um, about two years ago the number of students, um, studying computer science [vocalsound] started to increase again, and so in one or two years we will have [vocalsound] uh, many students, uh, who are doing their master's theses, and so [vocalsound] I think there will also be, uh, many students [vocalsound] joining in this work. So, hopefully, [vocalsound] uh, I will found [disfmarker] will find some students who join in, and, um, so, hopefully this work will go on. [speaker004:] Mm hmm. OK, sounds good to me. And Michael? [speaker006:] So, [pause] I won't change the topic then. I'm here until the end of May, and after that I'm presumably going to a Siemens project with some [disfmarker] department of Siemens, but related to the topic of what I have done here. So, it's a sort of pre-stage [pause] to that, so it might be more [vocalsound] related or loosely related. At least it's on the same topic. And [pause] progress, it's [pause] going on, but it's not going on as fast as I would have liked. [speaker004:] Mm hmm. [speaker006:] Oh, I would like to have it. Sorry. Well, but, right? [speaker004:] Yeah, then, what is the current status anyway? [speaker006:] Um, I'm [disfmarker] currently I'm looking into [vocalsound] Quality of Service Routing, [speaker004:] this is. [speaker006:] and I had discussions with a post-doc from UCB. Well, we want to [disfmarker] we started to collect information to [vocalsound] write a paper about Quality of Service [pause] routing [vocalsound] for MPLS, in an MPLS domain.
[speaker004:] Mm hmm. [speaker006:] So you have [pause] um, distributed routing at the edges, and one aspect is [vocalsound] that the information is outdated, so that you don't always have the exact information about your link states and so on. That's what we have planned. And we're trying to [vocalsound] look at some routing [pause] policies, or routing algorithms. [speaker004:] Mm hmm. Now as I see it, in principle, uh, fff [comment] that's a really raw view, MPLS offers a certain kind of subset of active routing, because in principle the forwarding path is a little bit decoupled from this [disfmarker] from the routing path, [speaker006:] Yeah. [speaker004:] and this was, in principle [disfmarker] OK, it's [disfmarker] it's not really the subject [speaker006:] It [disfmarker] it [disfmarker] [speaker004:] but in a broader sense it could be seen in that sense. [speaker006:] It d yeah. [speaker004:] I think, uh, there is to a certain degree some alignment with [disfmarker] with both of you guys, [speaker006:] Yeah. [speaker004:] right? [speaker006:] So at least by the desh definition of Christian Tschudin, MPLS is a subset of [disfmarker] [vocalsound] of active [pause] route active routing. So it's very [disfmarker] [pause] Is it static? It's [disfmarker] No, active? But it's sort of active [pause] because they have these program statements. [speaker004:] Yeah. Yeah, maybe it's a separation, in principle, in that I do not use the normal traditional [vocalsound] uh, data forwarding path [vocalsound] and update, in principle, only the routing information, uh, to [disfmarker] to [disfmarker] to use it. [speaker006:] But y Yeah. Yeah. [speaker003:] Well, they are [disfmarker] they are using the [disfmarker] the [disfmarker] the same idea of labelling [pause] packets [vocalsound] with a [disfmarker] with a content descriptor or something. [speaker004:] Yeah. Yeah. [speaker006:] Yeah. That's right.
[speaker004:] Yeah, so what I mean is, to a certain degree, yeah, that has to be figured out. [speaker003:] Yeah. There's a connection point. [speaker004:] There is, uh, there is some linkage between them, [@ @] in both of your [disfmarker] your work. [speaker003:] Mm hmm. [speaker006:] Yeah. [speaker004:] So, my last point, uh, in principle, is the proposal itself. Um, I know, in the past it was always: if I'm not writing something, nothing is going to happen. So, I believe, Wilbert, you are out of this activity. You have to use your time to finish your work as you, uh, have in mind, until your end of February, or is that right? [speaker005:] January, yes. [speaker004:] End of January, sorry, so you will not [disfmarker] y you will not find any time to contribute anymore to this activity. [speaker005:] No. [speaker004:] OK. Then from Miguel's side I expect a more detailed vision. We can do it, as I mentioned before, bilaterally afterwards, [vocalsound] uh, concerning your work and how you think it could be a beneficial contribution to the activity, maybe based on the generic, uh, uh, USAIA architecture. [speaker003:] Later. [speaker004:] I will write, as long as I have time here, uh, on the proposal itself, and in the near-term future I will distribute a list for the small cluster of people. I tried to reach Juan Peire several times, he didn't answer, because he had mentioned that there's some minor funding that I think would help, for instance, to have an initial meeting either here or in Europe or [disfmarker] or wherever. I've, uh, contacted one more time the professor, what is his name? Rao? or [disfmarker]? uh, L Landay mentioned, uh, [speaker006:] L Lan Landay? [speaker004:] Rao, right? Yeah. Because he's a professor [disfmarker] uh, responsible for the infrastructure at UCB, and he has some projects which are related to [vocalsound] um, these [disfmarker] these, uh, activities [speaker006:] Landay. Ah. [speaker004:] I would c c call it the intelligent classroom.
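The MPLS point made in this exchange — that the forwarding path is decoupled from the routing path, with per-packet forwarding reduced to a label lookup-and-swap while the (possibly QoS-aware) route computation only writes the table ahead of time — can be sketched in a few lines. This is a hedged toy illustration of the label-swapping idea, not real router code; the class and parameter names are invented.

```python
# Minimal sketch of the decoupling discussed above: in a label-switch
# router, the data plane only does a constant-time label lookup-and-swap,
# while the control plane (routing, possibly QoS routing) writes the
# label table ahead of time. Names are hypothetical, for illustration.

class LabelSwitchRouter:
    def __init__(self):
        # in_label -> (out_interface, out_label); filled by the control plane
        self.label_table = {}

    def install_path(self, in_label, out_interface, out_label):
        """Control plane: route computation writes the table."""
        self.label_table[in_label] = (out_interface, out_label)

    def forward(self, in_label):
        """Data plane: swap the label, pick the interface -- no routing."""
        out_interface, out_label = self.label_table[in_label]
        return out_interface, out_label

lsr = LabelSwitchRouter()
lsr.install_path(in_label=17, out_interface="if1", out_label=42)
print(lsr.forward(17))  # ("if1", 42)
```

This separation is why updating the routing information (e.g. when link-state data goes stale) does not touch the per-packet forwarding step, which is the connection point to the QoS-routing work mentioned earlier.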
I don't know whether I can succeed to get an appointment then but I will distribute a list [disfmarker] contact list and [@ @] the interests of each partner so that everybody can see [vocalsound] and everybody knows, yeah, I have in mind maybe it's a small NSA group activity itself, with no linkage to the outer world maybe it's a NSA group activity with a linkage to the outer world based on the uh universities at home or the companies at home there's some linkage and the third stage is, let's say there's a NSA group activity in this area, [vocalsound] uh, with the supported, um, home entities and with certain kind of linkage here to the companies or u or universities here in this area. The last one, I think it's, uh, the best one. But, you see the question is always "Where's the money?" and I say "Oh poo, currently I don't know." The second question is how long I stay here and say ooo, [comment] [pause] I don't know ", so everybody's waiting. Yeah, and the only rea um, activity could come from the NSA group itself, but I think it's a [disfmarker] a work in process and that it will take some time and, uh, [vocalsound] everything must be just settled a little bit more. So, but anyway, I would like to get contributions from the people here of the NSA group concerning proposals, this is a living thing and it's a pity that Multicast is, in principle, removed and we do not have any expert from Multicast anymore. And, uh, that's makes very hard, I believe. Because that is a major [@ @], at least twenty five percent of the whole network uh, stuff and, um, but nevertheless we have to live with this situation. Right. You are laughing, again? [speaker005:] Right. Right. 
[speaker004:] So maybe we have to [nonvocalsound] rewrite something, have new ideas going forward more in certain kind of really telecommunication network and the routing and QS stuff, and remove, in principle, the Multicast stuff which, in principle, a pity because I think that's a really really good eh [disfmarker] uh [pause] mmm source of research work for the future telecommunication core networks because they are still not able to deal with Multicast, they still have there that tunnel generations and all those kind of things and I [disfmarker] I believe that would be very beneficial to use it for the really [pause] uh, mobile scenarios in [disfmarker] in that network, so [disfmarker] but anyway we have to live with a real situation and that means we have to drop Multicast from the proposal. Any comments? [speaker003:] Oh, not really, you know, uh, I [disfmarker] I [disfmarker] I [disfmarker] I think that [disfmarker] Well, in the future, somebody will come to the NSA group eh [speaker004:] Then we can extend the proposal one more time with Multicast. [speaker003:] Ah, so [disfmarker] No, I mean [vocalsound] that [disfmarker] OK, I [disfmarker] I [disfmarker] I think that it cannot, eh, be, eh considered the first type task to do because nobody is [disfmarker] is knowledgeable about multicasting but I think that, eh, any of us could address these things in the future, or maybe a new person could address this. in the future. So if [disfmarker] I mean, wh what I think is that, uh, it [disfmarker] it makes no sense to me [vocalsound] to reduce, [vocalsound] uh, uh, uh, uh, something which is more or less complete, because we don't have the person. 
I think that the [disfmarker] the way to [disfmarker] to go is probably the opposite, to try to find [vocalsound] something which has [vocalsound] a meaning by itself and then try to get the resources to [disfmarker] to [disfmarker] [vocalsound] to do this [disfmarker] [speaker004:] Oh, maybe you understand me in the wrong way. Uh, I do not mean to drop it, to remove the text from the proposal. [speaker003:] OK, OK. [speaker004:] I mean, activities, planning activities. [speaker003:] Sorry. Yeah, that's OK. [speaker004:] Work packages. [speaker003:] Sorry, sorry, heh? [speaker004:] Yeah, [speaker003:] Now I [speaker004:] so no, no, because that's, uh, that's why I mentioned that it is important thing. But, currently there's only one new group member, Sven Buchholz, who will figure out the thing, hmm how to say, um, the taking into account the end system, or the limited end system capabilities with respect to mobility and the network itself. Where it should be ada [@ @] [disfmarker] where should be, uh, the adaptation process performed [vocalsound] to deal with the different devices, for ubiquitous access to the network? Should it be in the end system itself? Should be in the edge device? It should it be in the core? should it be on the server? These are the questions he would like to figure out. And to p uh, to have, in principle, the pro and cons [nonvocalsound] of, uh, different [disfmarker] of all these different approaches. So that's the only new member. [speaker005:] Who is that? [speaker004:] Sven Buchholz. He was here. I invited him and discussed with him his potential work, I [@ @]. quite sure [speaker005:] Oh, he was here one day. [speaker004:] Yeah. [speaker001:] Yeah. [speaker006:] Hmm. [speaker004:] But he showed up the [pause] next day [speaker006:] Several days. [speaker004:] but only for administration stuff, [speaker006:] Yeah. [speaker004:] but for the technical discussion he was here one day. [speaker006:] Yeah. Yeah. 
[speaker004:] I I'm do not remember whether [disfmarker] w whether you were [disfmarker] [speaker001:] That must be three weeks ago, cuz I r I [pause] just met him when I arrived. [speaker005:] Yeah, I do remember him, [speaker006:] Yeah. [speaker005:] but I didn't talk with him about, uh [disfmarker] [pause] about the research topic. [speaker004:] Mm hmm. [speaker005:] So I didn't know what he was going to do. [speaker004:] Yeah, he will come mid of January, I believe, yeah, [speaker006:] Mm hmm. [speaker004:] as far as I know. So, but this is the only guy, you know, but that is not reflected in that sense that we remove the text. what I mean is [disfmarker] is, uh, the [disfmarker] that we [disfmarker] what's kind of activity we are going more in detail in describing, the network packages and [disfmarker] and detailed, uh, uh, descriptions [disfmarker] more detailed descriptions. [speaker003:] OK. [speaker004:] So Any other question? Any other comment? Oh. Then we pu should start with reading the numbers one more time. [speaker001:] Oh, again? [speaker005:] We have to do that? Oh! [speaker004:] Yes, we [pause] have to do it, [speaker001:] I don't know. [speaker004:] so that means we have to read the transcript script number and then the numbers, and if there's a new line, a short break. And, afterwards we have to, uh, that device is switched on, so please do not switch off the devices. And I will call Herr Janin and that [disfmarker] Adam [@ @]. So I will start. one [comment] OK, [speaker001:] Should I? [speaker004:] Oh. Yeah, yeah. Fine. [speaker001:] OK, [speaker004:] He's laughing all the time. [speaker001:] Oh, oh Wilbert Oh, I'm sorry. [speaker005:] Did you get that? [speaker001:] Oh, I'm so sorry. That was mean, OK. [speaker005:] No, it wasn't. I [disfmarker] I [disfmarker] I saw it already standing there [comment] when I was here [comment] so then I started laughing, oh, oh, [speaker004:] OK, Michael please. [speaker002:] That's it. [speaker004:] OK. 
Thank you. OK, then I will call Adam. Adam? [speaker004:] OK. [speaker001:] Mike. Mike one? [speaker002:] Ah. [speaker004:] We're on? Yes, please. I mean, we're testing noise robustness but let's not get silly. OK, so, uh, you've got some, uh, Xerox things to pass out? That are [disfmarker] [speaker001:] Yeah, [speaker004:] Yeah. [speaker001:] um. Yeah. Yeah, I'm sorry for the table, but as it grows in size, uh, it. [speaker004:] Uh, so for th the last column we use our imagination. OK. [speaker002:] Ah, yeah. [speaker004:] Ah. [speaker001:] Uh, yeah. [speaker002:] Uh, do you want [@ @]. [speaker004:] This one's nice, though. This has nice big font. [speaker001:] Yeah. [speaker003:] Let's see. Yeah. [speaker004:] Yeah. [speaker003:] Chop! [speaker004:] When you get older you have these different perspectives. [speaker001:] So [speaker004:] I mean, lowering the word error rate is fine, but having big font! [speaker001:] Next time we will put colors or something. [speaker004:] That's what's [disfmarker] Yeah. It's mostly big font. [speaker001:] Uh. [speaker004:] OK. Uh [disfmarker] [speaker001:] OK, s so there is kind of summary of what has been done [disfmarker] [speaker004:] Go ahead. [speaker001:] It's this. [speaker004:] Oh. OK. [speaker001:] Summary of experiments since, well, since last week and also since the [disfmarker] we've started to run [disfmarker] work on this. Um. [pause] So since last week we've started to fill the column with um [vocalsound] uh features w with nets trained on PLP with on line normalization but with delta also, because the column was not completely [disfmarker] [speaker004:] Mm hmm. Mm hmm. [speaker001:] well, it's still not completely filled, but [pause] we have more results to compare with network using without PLP and [pause] finally, hhh, [comment] um [pause] ehhh [comment] PL uh delta seems very important. Uh [pause] I don't know. 
If you take um, let's say, anyway Aurora two B, so, the next [disfmarker] t the second, uh, part of the table, [speaker004:] Mm hmm. [speaker001:] uh [pause] when we use the large training set using French, Spanish, and English, you have one hundred and six without delta and eighty nine with the delta. [speaker004:] a And again all of these numbers are with a hundred percent being, uh, the baseline performance, [speaker001:] Yeah, on the baseline, yeah. [speaker004:] but with a mel cepstra system going straight into the HTK? [speaker001:] So [disfmarker] Yeah. Yeah. [speaker004:] Yes. [speaker001:] So now we see that the gap between the different training set is much [pause] uh uh much smaller [speaker003:] It's out of the way. [speaker001:] um [disfmarker] But, actually, um, for English training on TIMIT is still better than the other languages. And Mmm, [pause] Yeah. And f also for Italian, actually. If you take the second set of experiment for Italian, so, the mismatched condition, [speaker004:] Mm hmm. [speaker001:] um [pause] when we use the training on TIMIT so, it's multi English, we have a ninety one number, [speaker004:] Mm hmm. [speaker001:] and training with other languages is a little bit worse. [speaker004:] Um [disfmarker] Oh, I see. Down near the bottom of this sheet. Uh, [comment] [pause] yes. [speaker001:] So, yeah. [speaker004:] OK. [speaker001:] And, yeah, and here the gap is still more important between using delta and not using delta. [speaker004:] Yes. [speaker001:] If y if I take the training s the large training set, it's [disfmarker] we have one hundred and seventy two, [speaker004:] Yeah. [speaker001:] and one hundred and four when we use delta. [speaker004:] Mm hmm. [speaker001:] Uh. [pause] Even if the contexts used is quite the same, because without delta we use seventeenths [disfmarker] seventeen frames. Uh. Yeah, um, so the second point is that we have no single cross language experiments, uh, that we did not have last week. 
Uh, so this is training the net on French only, or on English only, and testing on Italian. [speaker004:] Mm hmm. [speaker001:] And training the net on French only and Spanish only and testing on, uh TI digits. [speaker004:] Mm hmm. [speaker001:] And, fff [comment] um, yeah. What we see is that these nets are not as good, except for the multi English, which is always one of the best. Yeah, then we started to work on a large dat database containing, uh, sentences from the French, from the Spanish, from the TIMIT, from SPINE, uh from [comment] uh English digits, and from Italian digits. So this is the [disfmarker] another line [disfmarker] another set of lines in the table. [speaker004:] Ah, yes. [speaker001:] Uh, [@ @] with SPINE [speaker004:] Mm hmm. [speaker001:] and [pause] uh, actually we did this before knowing the result of all the data, uh, so we have to to redo the uh [disfmarker] the experiment training the net with, uh PLP, but with delta. [speaker004:] Mm hmm. [speaker001:] But um this [disfmarker] this net performed quite well. Well, it's [disfmarker] it's better than the net using French, Spanish, and English only. Uh. So, uh, yeah. We have also started feature combination experiments. Uh many experiments using features and net outputs together. And this is [disfmarker] The results are on the other document. Uh, we can discuss this after, perhaps [disfmarker] well, just, [@ @]. Yeah, so basically there are four [disfmarker] four kind of systems. The first one, yeah, is combining, um, two feature streams, uh using [disfmarker] and each feature stream has its own MPL. So it's the [disfmarker] kind of similar to the tandem that was proposed for the first. The multi stream tandem for the first proposal. The second is using features and KLT transformed MLP outputs. And the third one is to u use a single KLT trans transform features as well as MLP outputs. Um, yeah. Mmm. 
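[Editor's note: the second tandem configuration described here, KLT-transformed MLP outputs concatenated with straight features, can be sketched roughly as below. All dimensions (27 MLP outputs, 39 PLP-plus-delta features, 13 retained KLT components) and the toy data are illustrative stand-ins, not the exact numbers used in these experiments.]

```python
import numpy as np

def klt(X, n_components):
    """KLT (PCA): project zero-mean data onto the top eigenvectors of its covariance."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]            # descending eigenvalue order
    return Xc @ eigvecs[:, order[:n_components]]

# toy stand-ins: 100 frames of 27 MLP log-posteriors and 39 PLP+delta features
rng = np.random.default_rng(0)
mlp_out = np.log(rng.dirichlet(np.ones(27), size=100))
plp = rng.standard_normal((100, 39))

# configuration 2: decorrelate/reduce the MLP outputs, then append the raw features
combined = np.hstack([klt(mlp_out, 13), plp])
print(combined.shape)   # (100, 52)
```

[The combined frames would then go to HTK in place of the plain feature stream.]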
You know you can [disfmarker] you can comment these results, [speaker002:] Yes, I can s I would like to say that, for example, um, mmm, if we doesn't use the delta delta, uh we have an improve when we use s some combination. But when [speaker001:] Yeah, we ju just to be clear, the numbers here are uh recognition accuracy. [speaker002:] w Yeah, this [disfmarker] Yeah, this number recognition acc [speaker001:] So it's not the [disfmarker] [vocalsound] Again we switch to another [disfmarker] [speaker002:] Yes, and the baseline [disfmarker] the baseline have [disfmarker] i is eighty two. [speaker004:] Baseline is eighty two. [speaker002:] Yeah [speaker001:] So it's experiment only on the Italian mismatched for the moment for this. [speaker004:] Uh, this is Italian mismatched. [speaker002:] Yeah, by the moment. [speaker001:] Um. [speaker004:] OK. [speaker001:] Mm hmm. [speaker002:] And first in the experiment one I [disfmarker] I do [disfmarker] I [disfmarker] I use different MLP, [speaker004:] Mm hmm. [speaker002:] and is obviously that the multi English MLP is the better. Um. for the ne [disfmarker] rest of experiment I use multi English, only multi English. And I try to combine different type of feature, but the result is that the MSG three feature doesn't work for the Italian database because never help to increase the accuracy. [speaker001:] Yeah, eh, actually, if w we look at the table, the huge table, [speaker004:] Mm hmm. [speaker001:] um, we see that for TI digits MSG perform as well as the PLP, but this is not the case for Italian what [disfmarker] where the error rate is c is almost uh twice the error rate of PLP. [speaker004:] Mm hmm. [speaker001:] So, um [vocalsound] uh, well, I don't think this is a bug but this [disfmarker] this is something in [disfmarker] probably in the MSG um process that uh I don't know what exactly. 
Perhaps the fact that the [disfmarker] the [disfmarker] there's no low pass filter, well, or no pre emp pre emphasis filter and that there is some DC offset in the Italian, or, well, something simple like that. But [disfmarker] that we need to sort out if want to uh get improvement by combining PLP and MSG [speaker004:] Mm hmm. [speaker001:] because for the moment MSG do doesn't bring much information. [speaker004:] Mm hmm. [speaker001:] And as Carmen said, if we combine the two, we have the result, basically, of PLP. [speaker004:] I Um, the uh, baseline system [disfmarker] when you said the baseline system was uh, uh eighty two percent, that was trained on what and tested on what? That was, uh Italian mismatched d uh, uh, digits, uh, is the testing, [speaker002:] Yeah. [speaker004:] and the training is Italian digits? [speaker002:] Yeah. [speaker004:] So the "mismatch" just refers to the noise and [disfmarker] and, uh microphone and so forth, [speaker002:] Yeah. [speaker004:] right? [speaker001:] Yeah. [speaker004:] So, um did we have [disfmarker] So would that then correspond to the first line here of where the training is [disfmarker] is the uh Italian digits? The [disfmarker] [speaker002:] The train the training of the HTK? Yes. [speaker004:] Yes. [speaker002:] Ah yes! This h Yes. [speaker004:] Yes. [speaker002:] Th Yes. [speaker004:] Training of the net, yeah. [speaker002:] Yeah. [speaker004:] So, um [disfmarker] So what that says is that in a matched condition, [vocalsound] we end up with a fair amount worse putting in the uh PLP. Now w would [disfmarker] do we have a number, I suppose for the matched [disfmarker] I [disfmarker] I don't mean matched, but uh use of Italian [disfmarker] training in Italian digits for PLP only? [speaker002:] Uh [pause] yes? [speaker001:] Uh [pause] yeah, so this is [disfmarker] basically this is in the table. Uh [pause] so the number is fifty two, [speaker002:] Another table. 
[speaker001:] uh [disfmarker] [speaker004:] Fifty two percent. [speaker001:] Fift So [disfmarker] No, it's [disfmarker] it's the [disfmarker] [speaker002:] No. [speaker004:] No, fifty two percent of eighty two? [speaker001:] Of [disfmarker] of [disfmarker] of uh [pause] eighteen [disfmarker] [speaker002:] Eighty. Eighty. [speaker001:] of eighteen. So it's [disfmarker] it's error rate, basically. [speaker002:] It's plus six. [speaker001:] It's er error rate ratio. So [disfmarker] [speaker004:] Oh this is accuracy! [speaker002:] Yeah. [speaker004:] Oy! [comment] OK. [speaker001:] Uh, so we have nine [disfmarker] nine [disfmarker] let's say ninety percent. [speaker004:] Ninety. [speaker001:] Yeah. Um [comment] which is uh [comment] what we have also if use PLP and MSG together, [speaker004:] Yeah. [speaker001:] eighty nine point seven. [speaker004:] OK, so even just PLP, uh, it is not, in the matched condition [disfmarker] Um I wonder if it's a difference between PLP and mel cepstra, or whether it's that the net half, for some reason, is not helping. [speaker001:] Uh. P PLP and Mel cepstra give the same [disfmarker] same results. [speaker004:] Same result pretty much? [speaker001:] Well, we have these results. I don't know. It's not [disfmarker] [speaker004:] So, s [speaker001:] Do you have this result with PLP alone, [comment] j fee feeding HTK? That [disfmarker] That's what you mean? [speaker002:] Yeah, [speaker001:] Just PLP at the input of HTK. [speaker002:] yeah yeah yeah yeah, at the first [disfmarker] and the [disfmarker] Yeah. [speaker001:] Yeah. So, PLP [disfmarker] [speaker004:] Eighty eight point six. [speaker001:] Yeah. [speaker004:] Um, so adding MSG [speaker001:] Um [disfmarker] [speaker004:] um [disfmarker] Well, but that's [disfmarker] yeah, that's without the neural net, right? [speaker001:] Yeah, that's without the neural net and that's the result basically that OGI has also with the MFCC with on line normalization. 
[speaker004:] But she had said eighty two. [speaker001:] This is the [disfmarker] w well, but this is without on line normalization. [speaker004:] Right? Oh, this [disfmarker] the eighty two. [speaker001:] Yeah. Eighty two is the [disfmarker] it's the Aurora baseline, so MFCC. Then we can use [disfmarker] well, OGI, they use MFCC [disfmarker] th the baseline MFCC plus on line normalization [speaker004:] Oh, I'm sorry, I k I keep getting confused because this is accuracy. [speaker001:] Yeah, sorry. Yeah. [speaker002:] Yeah. [speaker004:] OK. Alright. [speaker001:] Yeah. [speaker004:] Alright. So this is [disfmarker] I was thinking all this was worse. OK so this is all better because eighty nine is bigger than eighty two. [speaker002:] Yes, better. [speaker001:] Mm hmm. [speaker004:] OK. [speaker002:] Yeah. [speaker004:] I'm [disfmarker] I'm all better now. OK, go ahead. [speaker001:] So what happ what happens is that when we apply on line normalization we jump to almost ninety percent. [speaker004:] Yeah. Mm hmm. [speaker001:] Uh, when we apply a neural network, is the same. We j jump to ninety percent. [speaker002:] Nnn, we don't know exactly. [speaker004:] Yeah. [speaker001:] And [disfmarker] And um [disfmarker] whatever the normalization, actually. If we use n neural network, even if the features are not correctly normalized, we jump to ninety percent. [speaker004:] So we go from eighty si eighty eight point six to [disfmarker] to ninety, or something. [speaker001:] So [disfmarker] Well, ninety [disfmarker] No, I [disfmarker] I mean ninety It's around eighty nine, ninety, [speaker004:] Eighty nine. [speaker001:] eighty eight. [speaker002:] Yeah. [speaker001:] Well, there are minor [disfmarker] minor differences. [speaker004:] And then adding the MSG does nothing, basically. [speaker001:] No. [speaker004:] Yeah. OK. [speaker001:] Uh For Italian, yeah. [speaker004:] For this case, right? [speaker001:] Um. [speaker004:] Alright. 
So, um [disfmarker] So actually, the answer for experiments with one is that adding MSG, if you [disfmarker] uh does not help in that case. [speaker001:] Mm hmm. [speaker004:] Um [disfmarker] [speaker001:] But w [speaker004:] The other ones, we'd have to look at it, [speaker001:] Yeah. [speaker004:] but [disfmarker] And the multi English, does uh [disfmarker] So if we think of this in error rates, we start off with, uh eighteen percent error rate, roughly. [speaker001:] Mm hmm. [speaker004:] Um [pause] and [pause] we uh almost, uh cut that in half by um putting in the on line normalization and the neural net. [speaker001:] Yeah [speaker004:] And the MSG doesn't however particularly affect things. [speaker001:] No. [speaker004:] And we cut off, I guess about twenty five percent of the error. Uh [pause] no, not quite that, is it. Uh, two point six out of eighteen. About, um [pause] sixteen percent or something of the error, um, if we use multi English instead of the matching condition. [speaker001:] Mm hmm. [speaker004:] Not matching condition, [speaker001:] Yeah. [speaker004:] but uh, the uh, Italian training. [speaker002:] Yeah. [speaker001:] Mm hmm. [speaker004:] OK. [speaker001:] Mmm. [speaker002:] We select these [disfmarker] these [disfmarker] these tasks because it's the more difficult. [speaker004:] Yes, good. OK? So then you're assuming multi English is closer to the kind of thing that you could use since you're not gonna have matching, uh, data for the [disfmarker] uh for the new [disfmarker] for the other languages and so forth. Um, one qu thing is that, uh [disfmarker] I think I asked you this before, but I wanna double check. When you say "ME" in these other tests, that's the multi English, [speaker001:] That's [disfmarker] it's a part [disfmarker] it's [disfmarker] [speaker004:] but it is not all of the multi English, right? It is some piece of [disfmarker] part of it. [speaker001:] Or, one million frames. 
[speaker004:] And the multi English is how much? [speaker002:] You have here the information. [speaker001:] It's one million and a half. Yeah. [speaker004:] Oh, so you used almost all You used two thirds of it, [speaker001:] Yeah. [speaker004:] you think. So, it it's still [disfmarker] it hurts you [disfmarker] seems to hurt you a fair amount to add in this French and Spanish. [speaker001:] Mmm. [speaker002:] Yeah. [speaker004:] I wonder why Yeah. Uh. [speaker003:] Well Stephane was saying that they weren't hand labeled, [speaker002:] Yeah. [speaker001:] Yeah, it's [disfmarker] Yeah. [speaker003:] the French and the Spanish. [speaker002:] The Spanish. Maybe for that. [speaker004:] Hmm. [speaker001:] Mmm. [speaker004:] It's still [disfmarker] OK. Alright, go ahead. And then [disfmarker] then [disfmarker] [speaker002:] Um. Mmm, with the experiment type two, I [disfmarker] first I tried to to combine, nnn, some feature from the MLP and other feature [disfmarker] another feature. [speaker004:] Mm hmm. [speaker002:] And we s we can [disfmarker] first the feature are without delta and delta delta, and we can see that in the situation, uh, the MSG three, the same help nothing. [speaker004:] Mm hmm. [speaker002:] And then I do the same but with the delta and delta delta [disfmarker] PLP delta and delta delta. And they all p but they all put off the MLP is it without delta and delta delta. [speaker004:] Mm hmm. [speaker002:] And we have a l little bit less result than the [disfmarker] the [disfmarker] the baseline PLP with delta and delta delta. Maybe if [disfmarker] when we have the new [disfmarker] the new [pause] neural network trained with PLP delta and delta delta, maybe the final result must be better. I don't know. Uh [disfmarker] [speaker001:] Actually, just to be some more [disfmarker] Do This number, this eighty seven point one number, has to be compared with the [speaker004:] Yes, yeah, I mean it can't be compared with the other [speaker001:] Which number? 
[speaker004:] cuz this is, uh [disfmarker] with multi English, uh, training. [speaker002:] Mm hmm. [speaker004:] So you have to compare it with the one over that you've got in a box, which is that, uh the eighty four point six. [speaker002:] Mm hmm. [speaker004:] Right? [speaker001:] Uh. [speaker004:] So [disfmarker] [speaker001:] Yeah, but I mean in this case for the eighty seven point one we used MLP outputs for the PLP net [speaker004:] Yeah. [speaker001:] and straight features with delta delta. [speaker004:] Yeah. [speaker002:] Mm hmm. [speaker001:] And straight features with delta delta gives you what's on the first sheet. [speaker004:] Not t not [speaker001:] It's eight eighty eight point six. [speaker004:] tr No. No. No. [speaker002:] Yes. [speaker004:] Not trained with multi English. [speaker002:] No, but they [disfmarker] they feature [@ @] without [disfmarker] [speaker001:] Uh, yeah, but th this is the second configuration. So we use feature out uh, net outputs together with features. So yeah, this is not [disfmarker] perhaps not clear here but in this table, the first column is for MLP and the second for the features. [speaker004:] Eh. [comment] Oh, I see. Ah. So you're saying w so asking the question, What [disfmarker] what has adding the MLP done to improve over the, [speaker001:] So, just [disfmarker] Yeah so, actually it [disfmarker] it [disfmarker] it decreased the [disfmarker] the accuracy. [speaker004:] uh [disfmarker] [speaker002:] Yeah. [speaker004:] Yes. Uh huh. [speaker001:] Because we have eighty eight point six. And even the MLP alone [disfmarker] What gives the MLP alone? Multi English PLP. Oh no, it gives eighty three point six. So we have our eighty three point six and now eighty eighty point six, [speaker002:] But [disfmarker] [speaker001:] that gives eighty seven point one. [speaker004:] Mm hmm. Eighty s I thought it was eighty Oh, OK, eighty three point six and eighty [disfmarker] eighty eight point six. 
[speaker001:] Eighty three point six. Eighty [disfmarker] [speaker004:] OK. [speaker001:] Is th is that right? Yeah? [speaker002:] Yeah. But [disfmarker] I don't know [disfmarker] but maybe if we have the neural network trained with the PLP [pause] delta and delta delta, maybe tha this can help. [speaker001:] Perhaps, yeah. [speaker004:] Well, that's [disfmarker] that's one thing, but see the other thing is that, um, I mean it's good to take the difficult case, but let's [disfmarker] let's consider what that means. What [disfmarker] what we're saying is that one o one of the things that [disfmarker] I mean my interpretation of your [disfmarker] your s original suggestion is something like this, as motivation. When we train on data that is in one sense or another, similar to the testing data, then we get a win by having discriminant training. [speaker001:] Mm hmm. [speaker004:] When we train on something that's quite different, we have a potential to have some problems. [speaker001:] Mm hmm. [speaker004:] And, um, if we get something that helps us when it's somewhat similar, and doesn't hurt us too much when it [disfmarker] when it's quite different, that's maybe not so bad. [speaker001:] Yeah. [speaker004:] So the question is, if you took the same combination, and you tried it out on, uh [disfmarker] on say digits, [speaker001:] Mmm. On TI digits? OK. [speaker004:] you know, d Was that experiment done? [speaker001:] No, not yet. [speaker004:] Yeah, OK. Uh, then does that, eh [disfmarker] you know maybe with similar noise conditions and so forth, [comment] does it [disfmarker] does it then look much better? [speaker001:] Mm hmm. [speaker004:] And so what is the range over these different kinds of uh [disfmarker] of tests? So, an anyway. OK, go ahead. [speaker001:] Yeah. [speaker002:] And, with this type of configuration which I do on experiment using the new neural net with name broad klatt s twenty seven, [speaker004:] Mm hmm. 
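[Editor's note: the accuracy-to-error-rate arithmetic being done aloud here can be written as a small helper. The 82% and 89.7% figures are taken from the discussion; the helper name is invented for illustration.]

```python
def rel_error_reduction(baseline_acc, new_acc):
    """Relative error-rate reduction (in %) when moving from baseline to new accuracy."""
    base_err = 100.0 - baseline_acc
    new_err = 100.0 - new_acc
    return 100.0 * (base_err - new_err) / base_err

# baseline MFCC/HTK at 82.0% accuracy (18% error); on-line norm + net at ~89.7%
print(round(rel_error_reduction(82.0, 89.7), 1))   # 42.8 -> roughly "cut in half"

# "two point six out of eighteen" points of error, as a relative figure
print(round(100 * 2.6 / 18, 1))                    # 14.4 percent
```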
[speaker002:] uh, d I have found more or less the same result. [speaker001:] So, it's slightly better, [speaker002:] Little bit better? [speaker001:] yeah. [speaker004:] Slightly better. [speaker001:] Yeah. [speaker002:] Slightly bet better. Yes, is better. [speaker004:] And [disfmarker] and you know again maybe if you use the, uh, delta [pause] there, uh, you would bring it up to where it was, uh you know at least about the same for a difficult case. [speaker002:] Yeah, maybe. Maybe. Maybe. [speaker001:] Yeah. [speaker002:] Oh, yeah. [speaker001:] Yeah. [speaker004:] So. [speaker002:] Oh, yeah. [speaker001:] Well, so perhaps let's [disfmarker] let's jump at the last experiment. It's either less information from the neural network if we use only the silence output. [speaker002:] i [speaker004:] Mm hmm. [speaker001:] It's again better. So it's eighty nine point [disfmarker] point one. [speaker004:] Mm hmm. [speaker002:] Yeah, and we have only forty [disfmarker] forty feature [speaker001:] So. [speaker002:] because in this situation we have one hundred and three feature. [speaker004:] Yeah. [speaker002:] Yeah. And then w with the first configuration, I f I am found that work, uh, doesn't work [disfmarker] [speaker004:] Yeah. [speaker002:] uh, well, work, but is better, the second configuration. Because I [disfmarker] for the del Engli PLP delta and delta delta, here I have eighty five point three accuracy, and with the second configuration I have eighty seven point one. [speaker004:] Um, by the way, there is a another, uh, suggestion that would apply, uh, to the second configuration, um, which, uh, was made, uh, by, uh, Hari. And that was that, um, if you have [disfmarker] uh feed two streams into HTK, um, and you, uh, change the, uh variances [disfmarker] if you scale the variances associated with, uh these streams um, you can effectively scale [pause] the streams. Right? 
So, um, you know, without changing the scripts for HTK, which is the rule here, uh, you can still change the variances [speaker001:] Mm hmm. [speaker004:] which would effectively change the scale of these [disfmarker] these, uh, two streams that come in. [speaker001:] Uh, [comment] yeah. [speaker004:] And, um, so, um, if you do that, for instance it may be the case that, um, the MLP should not be considered as strongly, for instance. [speaker001:] Mmm. [speaker004:] And, um, so this is just setting them to be, excuse me, of equal [disfmarker] equal weight. Maybe it shouldn't be equal weight. [speaker002:] Maybe. [speaker004:] Right? You know, I I'm sorry to say that gives more experiments if we wanted to look at that, but [disfmarker] but, uh, um, you know on the other hand it's just experiments at the level of the HTK recognition. [speaker001:] Mmm. [speaker004:] It's not even the HTK, uh, uh [disfmarker] [speaker001:] Yeah. [speaker002:] Yeah. Yeah. [speaker004:] Well, I guess you have to do the HTK training also. [speaker002:] so this is what we decided to do. [speaker004:] Uh, do you? Let me think. Maybe you don't. Uh. Yeah, you have to change the [disfmarker] No, you can just do it in [disfmarker] as [disfmarker] once you've done the training [disfmarker] [speaker003:] And then you can vary it. [speaker004:] Yeah, the training is just coming up with the variances [speaker003:] Yeah. [speaker004:] so I guess you could [disfmarker] you could just scale them all. [speaker001:] Scale [speaker004:] Variances. [speaker001:] Yeah. But [disfmarker] Is it [disfmarker] i th I mean the HTK models are diagonal covariances, so I d Is it [disfmarker] [speaker004:] That's uh, exactly the point, I think, that if you change [disfmarker] um, change what they are [disfmarker] [speaker001:] Hmm. [speaker004:] It's diagonal covariance matrices, [speaker001:] Mm hmm. [speaker004:] but you say what those variances are. [speaker001:] Mm hmm. 
[speaker004:] So, that [disfmarker] you know, it's diagonal, but the diagonal means th that then you're gonna [disfmarker] it's gonna [disfmarker] it's gonna internally multiply it [disfmarker] and [disfmarker] and uh, [vocalsound] uh, i it im uh implicitly exponentiated to get probabilities, and so it's [disfmarker] it's gonna [disfmarker] it's [disfmarker] it's going to affect the range of things if you change the [disfmarker] change the variances [pause] of some of the features. [speaker001:] Mmm. Mmm. [speaker002:] do? [speaker004:] So, i it's precisely given that model you can very simply affect, uh, the s the strength that you apply the features. That was [disfmarker] that was, uh, Hari's suggestion. [speaker001:] Yeah. Yeah. [speaker004:] So, um [disfmarker] [speaker002:] Yeah. [speaker004:] Yeah. So. So it could just be that h treating them equally, tea treating two streams equally is just [disfmarker] just not the right thing to do. Of course it's potentially opening a can of worms because, you know, maybe it should be a different [vocalsound] number for [disfmarker] for each [vocalsound] kind of [pause] test set, or something, [speaker001:] Mm hmm. [speaker004:] but [disfmarker] OK. [speaker001:] Yeah. [speaker004:] So I guess the other thing is to take [disfmarker] you know [disfmarker] if one were to take, uh, you know, a couple of the most successful of these, [speaker001:] Yeah, and test across everything. [speaker004:] and uh [disfmarker] Yeah, try all these different tests. [speaker001:] Mmm. [speaker002:] Yeah. [speaker001:] Yeah. [speaker004:] Alright. Uh. [speaker001:] So, the next point, yeah, we've had some discussion with Steve and Shawn, um, about their um, uh, articulatory stuff, um. So we'll perhaps start something next week. [speaker004:] Mm hmm. 
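[Editor's note: the variance-scaling trick Hari suggests above works because, in a diagonal-covariance Gaussian, inflating every variance of one stream shrinks that stream's data-dependent term in the log-likelihood by the same factor, effectively down-weighting the stream without touching the HTK scripts. A minimal sketch of the arithmetic; the function names and toy values are illustrative, not taken from HTK:]

```python
import math

def diag_gauss_loglik(x, mean, var):
    """Log-likelihood of frame x under a diagonal-covariance Gaussian."""
    ll = 0.0
    for xi, mi, vi in zip(x, mean, var):
        ll += -0.5 * (math.log(2 * math.pi * vi) + (xi - mi) ** 2 / vi)
    return ll

def scale_stream_variances(var, k):
    """Multiply every variance of one stream by k; k > 1 shrinks the
    stream's squared-error term by 1/k, i.e. de-emphasizes the stream."""
    return [vi * k for vi in var]

x = [1.0, -0.5, 2.0]
mean = [0.0, 0.0, 0.0]
var = [1.0, 1.0, 1.0]

base = diag_gauss_loglik(x, mean, var)
deweighted = diag_gauss_loglik(x, mean, scale_stream_variances(var, 4.0))
# After scaling, the quadratic (data-dependent) part of the score is a
# quarter of what it was, so this stream sways the total score less.
```

[The normalizer term also changes, but only by a constant per frame, so the relative weighting between competing models is what moves.]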
[speaker001:] Um, discussion with Hynek, Sunil and Pratibha for trying to plug in their our [disfmarker] our networks with their [disfmarker] within their block diagram, uh, where to plug in the [disfmarker] the network, uh, after the [disfmarker] the feature, before as um a as a plugin or as a anoth another path, discussion about multi band and TRAPS, um, actually Hynek would like to see, perhaps if you remember the block diagram there is, uh, temporal LDA followed b by a spectral LDA for each uh critical band. And he would like to replace these by a network which would, uh, make the system look like a TRAP. Well, basically, it would be a TRAP system. Basically, this is a TRAP system [disfmarker] kind of TRAP system, I mean, but where the neural network are replaced by LDA. Hmm. [vocalsound] Um, yeah, and about multi band, uh, I started multi band MLP trainings, um mmh [comment] Actually, I w I w hhh [comment] prefer to do exactly what I did when I was in Belgium. So I take exactly the same configurations, seven bands with nine frames of context, and we just train on TIMIT, and on the large database, so, with SPINE and everything. And, mmm, I'm starting to train also, networks with larger contexts. So, this would [disfmarker] would be something between TRAPS and multi band because we still have quite large bands, and [disfmarker] but with a lot of context also. So Um Yeah, we still have to work on Finnish, um, basically, to make a decision on which MLP can be the best across the different languages. For the moment it's the TIMIT network, and perhaps the network trained on everything. So. Now we can test these two networks on [disfmarker] with [disfmarker] with delta and large networks. Well, test them also on Finnish [speaker002:] Mmm. [speaker001:] and see which one is the [disfmarker] the [disfmarker] the best. Uh, well, the next part of the document is, well, basically, a kind of summary of what [disfmarker] everything that has been done. So. 
We have seventy nine M L Ps trained on one, two, three, four, uh, three, four, five, six, seven ten [disfmarker] on ten different databases. [speaker004:] Mm hmm. [speaker001:] Uh, the number of frames is bad also, so we have one million and a half for some, three million for other, and six million for the last one. Uh, yeah! [comment] As we mentioned, TIMIT is the only that's hand labeled, and perhaps this is what makes the difference. Um. Yeah, the other are just Viterbi aligned. So these seventy nine MLP differ on different things. First, um with respect to the on line normalization, there are [disfmarker] that use bad on line normalization, and other good on line normalization. Um. With respect to the features, with respect to the use of delta or no, uh with respect to the hidden layer size and to the targets. Uh, but of course we don't have all the combination of these different parameters Um. s What's this? We only have two hundred eighty six different tests And no not two thousand. [speaker004:] Ugh! I was impressed boy, two thousand. [speaker001:] Yeah. [speaker004:] OK. [speaker002:] Ah, yes. [speaker004:] Alright, now I'm just slightly impressed, [speaker002:] I say this morning that [@ @] thought it was the [disfmarker] [speaker004:] OK. [speaker001:] Um. Yeah, basically the observation is what we discussed already. The MSG problem, um, the fact that the MLP trained on target task decreased the error rate. but when the M MLP is trained on the um [disfmarker] is not trained on the target task, it increased the error rate compared to using straight features. Except if the features are bad [disfmarker] uh, actually except if the features are not correctly on line normalized. In this case the tandem is still better even if it's trained on [disfmarker] not on the target digits. [speaker004:] Yeah. So it sounds like [vocalsound] yeah, the net corrects some of the problems with some poor normalization. [speaker001:] Yeah. 
[speaker004:] But if you can do good normalization it's [disfmarker] it's uh [disfmarker] OK. [speaker001:] Yeah. [speaker002:] Yeah. [speaker001:] Uh, so the fourth point is, yeah, the TIMIT plus noise seems to be the training set that gives better [disfmarker] the best network. [speaker004:] So So Let me [disfmarker] bef before you go on to the possible issues. [speaker001:] Mm hmm. [speaker004:] So, on the MSG uh problem um, I think that in [disfmarker] in the [disfmarker] um, in the short [pause] time [pause] solution um, that is, um, trying to figure out what we can proceed forward with to make the greatest progress, [speaker001:] Mm hmm. [speaker004:] uh, much as I said with JRASTA, even though I really like JRASTA and I really like MSG, [speaker001:] Mm hmm. [speaker004:] I think it's kind of in category that it's, it [disfmarker] it may be complicated. [speaker001:] Yeah. [speaker004:] And uh it might be [disfmarker] if someone's interested in it, uh, certainly encourage anybody to look into it in the longer term, once we get out of this particular rush [pause] uh for results. [speaker001:] Mm hmm. [speaker004:] But in the short term, unless you have some [disfmarker] some s strong idea of what's wrong, [speaker001:] I don't know at all but I've [disfmarker] perhaps [disfmarker] I have the feeling that it's something that's quite [disfmarker] quite simple or just like nnn, no high pass filter [speaker004:] Yeah, probably. [speaker001:] or [disfmarker] Mmm. Yeah. [pause] My [disfmarker] But I don't know. [speaker004:] There's supposed to [disfmarker] well MSG is supposed to have a an on line normalization though, right? [speaker001:] It's [disfmarker] There is, yeah, an AGC kind of AGC. Yeah. [vocalsound] Yeah. Yeah. [speaker004:] Yeah, but also there's an on line norm besides the AGC, there's an on line normalization that's supposed to be uh, yeah, [speaker001:] Mmm. [speaker004:] taking out means and variances and so forth. So. [speaker001:] Yeah. 
[speaker004:] In fac in fact the on line normalization that we're using came from the MSG design, so it's [disfmarker] [speaker001:] Um. Yeah, but [disfmarker] Yeah. But this was the bad on line normalization. Actually. [speaker002:] Maybe, may [disfmarker] [speaker001:] Uh. Are your results are still with the bad [disfmarker] the bad [disfmarker] [speaker002:] No? With the better [disfmarker] No? [speaker001:] With the O OLN two? Ah yeah, you have [disfmarker] you have OLN two, [speaker002:] Oh! Yeah, yeah, yeah! With "two", with "on line two". [speaker001:] yeah. [speaker002:] Yeah, yeah, [speaker004:] "On line two" is good. [speaker001:] So it's, is the good yeah. [speaker002:] yeah. Yep, [speaker004:] "Two" is good? [speaker002:] it's a good. [speaker001:] And [disfmarker] [speaker004:] No, "two" is bad. [speaker001:] Yeah. [speaker004:] OK. [speaker002:] Well, actually, it's good with the ch with the good. [speaker004:] Yeah. So [disfmarker] Yeah, I [disfmarker] I agree. It's probably something simple uh, i if [disfmarker] if uh someone, you know, uh, wants to play with it for a little bit. I mean, you're gonna do what you're gonna do [speaker001:] Mmm. [speaker004:] but [disfmarker] but my [disfmarker] my guess would be that it's something that is a simple thing that could take a while to find. [speaker001:] But [disfmarker] Yeah. Mmm. I see, yeah. [speaker004:] Yeah. [speaker001:] And [disfmarker] [speaker004:] Uh. [comment] And the other [disfmarker] the results uh, observations two and three, Um, is [speaker001:] Mmm. [speaker004:] uh [disfmarker] Yeah, that's pretty much what we've seen. That's [disfmarker] that [disfmarker] what we were concerned about is that if it's not on the target task [disfmarker] If it's on the target task then it [disfmarker] it [disfmarker] it helps to have the MLP transforming it. [speaker001:] Mmm. 
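[Editor's note: the "on-line normalization" being contrasted here (the good and bad variants, "OLN two") is, in its general form, a running per-dimension mean-and-variance removal applied causally to the feature stream. A rough sketch of the idea; the smoothing constant is an assumption, and the actual recipes discussed differ in details not shown:]

```python
import math

def online_normalize(frames, alpha=0.995):
    """Causal per-dimension mean/variance normalization of a feature
    stream; alpha is an assumed exponential-smoothing constant."""
    dim = len(frames[0])
    mean = [0.0] * dim
    var = [1.0] * dim
    out = []
    for frame in frames:
        norm = []
        for i, xi in enumerate(frame):
            # update running statistics, then normalize the current frame
            mean[i] += (1 - alpha) * (xi - mean[i])
            var[i] += (1 - alpha) * ((xi - mean[i]) ** 2 - var[i])
            norm.append((xi - mean[i]) / math.sqrt(var[i] + 1e-8))
        out.append(norm)
    return out
```

[Because the statistics are estimated on-line, a constant channel offset is progressively absorbed into the running mean, which is the property that lets a net downstream recover from poorly normalized inputs.]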
[speaker004:] If it uh [disfmarker] if it's not on the target task, then, depending on how different it is, uh you can get uh, a reduction in performance. [speaker001:] Mmm. [speaker004:] And the question is now how to [disfmarker] how to get one and not the other? Or how to [disfmarker] how to ameliorate the [disfmarker] the problems. [speaker001:] Mmm. [speaker004:] Um, because it [disfmarker] it certainly does [disfmarker] is nice to have in there, when it [disfmarker] [vocalsound] when there is something like the training data. [speaker001:] Mm hmm. Um. Yeah. So, [pause] the [disfmarker] the reason [disfmarker] Yeah, the reason is that the [disfmarker] perhaps the target [disfmarker] the [disfmarker] the task dependency [disfmarker] the language dependency, [vocalsound] and the noise dependency [disfmarker] [speaker004:] So that's what you say th there. I see. [speaker001:] Well, the e e But this is still not clear because, um, I [disfmarker] I [disfmarker] I don't think we have enough result to talk about the [disfmarker] the language dependency. Well, the TIMIT network is still the best but there is also an the other difference, the fact that it's [disfmarker] it's hand labeled. [speaker004:] Hey! Um, just [disfmarker] you can just sit here. Uh, I d I don't think we want to mess with the microphones but it's uh [disfmarker] Just uh, have a seat. Um. s Summary of the first uh, uh forty five minutes is that some stuff work and [disfmarker] works, and some stuff doesn't OK, [speaker001:] We still have uh [pause] this [disfmarker] One of these perhaps? [speaker002:] Yeah. [speaker004:] Yeah, I guess we can do a little better than that [speaker001:] Mm hmm. [speaker004:] but [disfmarker] [vocalsound] I think if you [disfmarker] if you start off with the other one, actually, that sort of has it in words and then th that has it the [pause] associated results. [speaker002:] Um. [speaker004:] OK. 
So you're saying that um, um, although from what we see, yes there's what you would expect in terms of a language dependency and a noise dependency. That is, uh, when the neural net is trained on one of those and tested on something different, we don't do as well as in the target thing. But you're saying that uh, it is [disfmarker] Although that general thing is observable so far, there's something you're not completely convinced about. And [disfmarker] and what is that? I mean, you say "not clear yet". What [disfmarker] what do you mean? [speaker001:] Uh, mmm, uh, [comment] I mean, that the [disfmarker] the fact that s Well, for [disfmarker] for TI digits the TIMIT net is the best, which is the English net. [speaker004:] Mm hmm. [speaker001:] But the other are slightly worse. But you have two [disfmarker] two effects, the effect of changing language and the effect of training on something that's [pause] Viterbi aligned instead of hand [disfmarker] hand labeled. [speaker002:] Yeah. [speaker001:] So. Um. Yeah. [speaker004:] Do you think the alignments are bad? I mean, have you looked at the alignments at all? What the Viterbi alignment's doing? [speaker001:] Mmm. I don't [disfmarker] I don't know. Did did you look at the Spanish alignments Carmen? [speaker002:] Mmm, no. [speaker004:] Might be interesting to look at it. Because, I mean, that is just looking but um, um [disfmarker] It's not clear to me you necessarily would do so badly from a Viterbi alignment. It depends how good the recognizer is [speaker001:] Mm hmm. [speaker004:] that's [disfmarker] that [disfmarker] the [disfmarker] the engine is that's doing the alignment. [speaker001:] Yeah. But [disfmarker] Yeah. But, perhaps it's not really the [disfmarker] the alignment that's bad but the [disfmarker] just the ph phoneme string that's used for the alignment [speaker004:] Aha! [speaker002:] Yeah. [speaker001:] Mmm. 
[speaker004:] The pronunciation models and so forth [speaker001:] I mean [pause] for [disfmarker] We [disfmarker] It's single pronunciation, uh [disfmarker] [speaker004:] Aha. I see. [speaker001:] French [disfmarker] French s uh, phoneme strings were corrected manually so we asked people to listen to the um [disfmarker] the sentence and we gave the phoneme string and they kind of correct them. But still, there [disfmarker] there might be errors just in the [disfmarker] in [disfmarker] in the ph string of phonemes. Mmm. Um. Yeah, so this is not really the Viterbi alignment, in fact, yeah. Um, the third [disfmarker] The third uh issue is the noise dependency perhaps but, well, this is not clear yet because all our nets are trained on the same noises and [disfmarker] [speaker004:] I thought some of the nets were trained with SPINE and so forth. So it [disfmarker] And that has other noise. [speaker001:] Yeah. So [disfmarker] Yeah. But [disfmarker] Yeah. Results are only coming for [disfmarker] for this net. Mmm. [speaker004:] OK, yeah, just don't [disfmarker] just need more [disfmarker] more results there with that [@ @]. [speaker001:] Yeah. Um. So. Uh, from these results we have some questions with answers. What should be the network input? Um, PLP work as well as MFCC, I mean. Um. But it seems impor important to use the delta. Uh, with respect to the network size, there's one experiment that's still running and we should have the result today, comparing network with five hundred and [pause] one thousand units. So, nnn, still no answer actually. [speaker004:] Hm hmm. [speaker001:] Uh, the training set, well, some kind of answer. We can, we can tell which training set gives the best result, but [vocalsound] we don't know exactly why. [speaker004:] Uh. Right, I mean the multi English so far is [disfmarker] is the best. [speaker001:] Uh, so. Yeah. [speaker004:] "Multi multi English" just means "TIMIT", right? [speaker001:] Yeah. [speaker002:] Yeah. 
[speaker004:] So uh That's [disfmarker] Yeah. So. And [disfmarker] and when you add other things in to [disfmarker] to broaden it, it gets worse [pause] uh typically. [speaker001:] Mmm. [speaker004:] Yeah. [speaker001:] Mm hmm. Then uh some questions without answers. [speaker004:] OK. [speaker001:] Uh, training set, um, [speaker004:] Uh huh. [speaker001:] uh, training targets [disfmarker] [speaker004:] I like that. The training set is both questions, with answers and without answers. [speaker001:] It's [disfmarker] Yeah. Yeah. [speaker004:] It's sort of, yes [disfmarker] it's mul it's multi uh purpose. [speaker001:] Yeah. [speaker004:] OK. [speaker001:] Uh, training s Right. So [disfmarker] Yeah, the training targets actually, the two of the main issues perhaps are still the language dependency [vocalsound] and the noise dependency. And perhaps to try to reduce the language dependency, we should focus on finding some other kind of training targets. [speaker004:] Mm hmm. [speaker001:] And labeling s labeling seems important uh, because of TIMIT results. [speaker004:] Mm hmm. [speaker001:] Uh. For moment you use [disfmarker] we use phonetic targets but we could also use articulatory targets, soft targets, and perhaps even, um use networks that doesn't do classification but just regression so uh, train to have neural networks that um, um, uh, [speaker004:] Mm hmm. [speaker001:] does a regression and well, basically com com compute features and noit not, nnn, features without noise. I mean uh, transform the fea noisy features [vocalsound] in other features that are not noisy. But continuous features. Not uh uh, hard targets. [speaker004:] Mm hmm. Mm hmm. [speaker001:] Uh [disfmarker] [speaker004:] Yeah, that [pause] seems like a good thing to do, probably, not uh again a short term sort of thing. [speaker001:] Yeah. 
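[Editor's note: the regression idea floated above (train a net to map noisy features toward their clean counterparts, rather than to hard phone targets) can be illustrated with a deliberately tiny stand-in: a per-dimension least-squares linear map in place of the proposed regression MLP. Everything below is a hypothetical sketch, not any system from the meeting:]

```python
def fit_linear_map(noisy, clean):
    """Least-squares fit of clean ≈ a * noisy + b for one feature
    dimension; a one-parameter stand-in for the regression net."""
    n = len(noisy)
    mx = sum(noisy) / n
    my = sum(clean) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(noisy, clean))
    varx = sum((x - mx) ** 2 for x in noisy)
    a = cov / varx
    b = my - a * mx
    return a, b

def enhance(noisy, a, b):
    """Apply the learned noisy-to-clean map to new feature values."""
    return [a * x + b for x in noisy]
```

[The output stays continuous, which is the point of the proposal: no hard classification targets, so the mapping is less tied to one language's phone set.]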
[speaker004:] I mean one of the things about that is that um it's [disfmarker] e u the ri I guess the major risk you have there of being [disfmarker] is being dependent on [disfmarker] very dependent on the kind of noise and [disfmarker] and so forth. [speaker001:] Yeah. f But, yeah. [speaker004:] Uh. But it's another thing to try. [speaker001:] So, this is w w i wa wa this is one thing, this [disfmarker] this could be [disfmarker] could help [disfmarker] could help perhaps to reduce language dependency and for the noise part um we could combine this with other approaches, like, well, the Kleinschmidt approach. So the d the idea of putting all the noise that we can find inside a database. [speaker004:] Mm hmm. [speaker002:] Yeah. [speaker001:] I think Kleinschmidt was using more than fifty different noises to train his network, and [disfmarker] So this is one [vocalsound] approach [speaker004:] Mm hmm. [speaker001:] and the other is multi band [vocalsound] [vocalsound] uh, that I think is more robust to the noisy changes. [speaker004:] Mm hmm. [speaker001:] So perhaps, I think something like multi band trained on a lot of noises with uh, features based targets could [disfmarker] could [disfmarker] could help. [speaker004:] Yeah, if you [disfmarker] i i It's interesting thought maybe if you just trained up [disfmarker] I mean w yeah, one [disfmarker] one fantasy would be you have something like articulatory targets and you have [pause] um some reasonable database, um but then [disfmarker] which is um [vocalsound] copied over many times with a range of different noises, [speaker001:] Mm hmm. [speaker004:] And uh [disfmarker] [vocalsound] If [disfmarker] Cuz what you're trying to [pause] do is come up with a [disfmarker] a core, reasonable feature set which is then gonna be used uh, by the [disfmarker] the uh HMM [pause] system. [speaker001:] Mm hmm. [speaker004:] So. Yeah, OK. [speaker001:] So, um, yeah. 
The future work is, [pause] well, try to connect to the [disfmarker] to make [disfmarker] to plug in the system to the OGI system. Um, there are still open questions there, [speaker004:] Mm hmm. [speaker001:] where to put the MLP basically. Um. [speaker004:] And I guess, you know, the [disfmarker] the [disfmarker] the real open question, I mean, e u there's lots of open questions, but one of the core quote [comment] "open questions" for that is um, um, if we take the uh [disfmarker] you know, the best ones here, maybe not just the best one, but the best few or something [disfmarker] You want the most promising group from these other experiments. Um, how well do they do over a range of these different tests, not just the Italian? [speaker001:] Mmm, [speaker004:] Um. And y y [pause] Right? [speaker001:] Yeah, yeah. [speaker004:] And then um [disfmarker] then see, [pause] again, how [disfmarker] We know that there's a mis there's a uh [disfmarker] a [disfmarker] a loss in performance when the neural net is trained on conditions that are different than [disfmarker] than, uh we're gonna test on, but well, if you look over a range of these different tests um, how well do these different ways of combining the straight features with the MLP features, uh stand up over that range? [speaker002:] Mm hmm. [speaker004:] That's [disfmarker] that [disfmarker] that seems like the [disfmarker] the [disfmarker] the real question. And if you know that [disfmarker] So if you just take PLP with uh, the double deltas. Assume that's the p the feature. look at these different ways of combining it. And uh, take [disfmarker] let's say, just take uh multi English cause that works pretty well for the training. [speaker001:] Mm hmm. [speaker004:] And just look [disfmarker] take that case and then look over all the different things. How does that [disfmarker] How does that compare between the [disfmarker] [speaker001:] So all the [disfmarker] all the test sets you mean, yeah. [speaker002:] Yeah. 
[speaker004:] All the different test sets, [speaker001:] And [disfmarker] [speaker004:] and for [disfmarker] and for the couple different ways that you have of [disfmarker] of [disfmarker] of combining them. [speaker001:] Yeah. [speaker004:] Um. [pause] How well do they stand up, over the [disfmarker] [speaker002:] Mm hmm. [speaker001:] Mmm. And perhaps doing this for [disfmarker] cha changing the variance of the streams and so on [pause] getting different scaling [disfmarker] [speaker004:] That's another possibility if you have time, yeah. Yeah. [speaker001:] Um. Yeah, so thi this sh would be more working on the MLP as an additional path instead of an insert to the [disfmarker] to their diagram. Cuz [disfmarker] Yeah. Perhaps the insert idea is kind of strange because nnn, they [disfmarker] they make LDA and then we will again add a network does discriminate anal nnn, that discriminates, [speaker004:] Yeah. [pause] It's a little strange [speaker001:] or [disfmarker]? [speaker004:] but on the other hand they did it before. [speaker001:] Mmm? [speaker004:] Um the [speaker001:] Mmm. And [disfmarker] and [disfmarker] and yeah. And because also perhaps we know that the [disfmarker] when we have very good features the MLP doesn't help. So. I don't know. [speaker004:] Um, the other thing, though, is that um [disfmarker] So. Uh, we [disfmarker] we wanna get their path running here, right? If so, we can add this other stuff. [speaker001:] Um. [speaker004:] as an additional path right? [speaker001:] Yeah, the [disfmarker] the way we want to do [disfmarker] [speaker004:] Cuz they're doing LDA [pause] RASTA. [speaker001:] The d What? [speaker004:] They're doing LDA RASTA, [speaker001:] Yeah, the way we want to do it perhaps is to [disfmarker] just to get the VAD labels and the final features. [speaker004:] yeah? [speaker001:] So they will send us the [disfmarker] Well, provide us with the feature files, [speaker004:] I see. I see. 
[speaker001:] and with VAD uh, binary labels so that we can uh, get our MLP features and filter them with the VAD and then combine them with their f feature stream. [speaker004:] I see. [speaker001:] So. [speaker004:] So we [disfmarker] So. First thing of course we'd wanna do there is to make sure that when we get those labels of final features is that we get the same results as them. Without putting in a second path. [speaker001:] Uh. You mean [disfmarker] Oh, yeah! Just re re retraining r retraining the HTK? [speaker004:] Yeah just th w i i Just to make sure that we [pause] have [disfmarker] we understand properly what things are, our very first thing to do is to [disfmarker] is to double check that we get the exact same results as them on HTK. [speaker001:] Oh yeah. Yeah, OK. Mmm. [speaker004:] Uh, I mean, I don't know that we need to r [speaker002:] Yeah. [speaker001:] Yeah. [speaker004:] Um [pause] Do we need to retrain I mean we can just take the re their training files also. But. [pause] But, uh just for the testing, jus just make sure that we get the same results [pause] so we can duplicate it before we add in another [disfmarker] [speaker001:] Mmm. OK. [speaker004:] Cuz otherwise, you know, we won't know what things mean. [speaker001:] Oh, yeah. OK. And um. Yeah, so fff, LogRASTA, I don't know if we want to [disfmarker] We can try [pause] networks with LogRASTA filtered features. [speaker004:] Maybe. [speaker001:] Mmm. I'm sorry? Yeah. Well [disfmarker] Yeah. [speaker004:] Oh! You know, the other thing is when you say comb [speaker001:] But [disfmarker] [speaker004:] I'm [disfmarker] I'm sorry, I'm interrupting. [comment] that u Um, uh, when you're talking about combining multiple features, um [disfmarker] Suppose we said, "OK, we've got these different features and so forth, but PLP seems [pause] pretty good." If we take the approach that Mike did and have [disfmarker] [speaker001:] Mm hmm. 
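[Editor's note: the plumbing proposed above — take the other site's per-frame feature files plus binary VAD labels, filter the MLP stream with the VAD, and combine the streams — amounts to a frame-wise concatenation gated by the VAD. A minimal sketch under an assumed convention (label 1 = speech); the real file formats are not shown:]

```python
def combine_with_vad(base_feats, mlp_feats, vad_labels):
    """Concatenate baseline and MLP feature vectors frame by frame,
    keeping only frames the VAD marks as speech (assumed label 1)."""
    assert len(base_feats) == len(mlp_feats) == len(vad_labels)
    return [b + m
            for b, m, v in zip(base_feats, mlp_feats, vad_labels)
            if v == 1]
```

[Running the baseline features alone through the same VAD gate first, as discussed, is the sanity check that the combined pipeline reproduces the partner site's HTK results before the second path is added.]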
[speaker004:] I mean, one of the situations we have is we have these different conditions. We have different languages, we have different [disfmarker] [vocalsound] different noises, Um [pause] If we have some drastically different conditions and we just train up different M L Ps [pause] with them. And put [disfmarker] put them together. What [disfmarker] what [disfmarker] What Mike found, for the reverberation case at least, I mean [disfmarker] I mean, who knows if it'll work for these other ones. That you did have nice interpolative effects. That is, that yes, if you knew [pause] what the reverberation condition was gonna be and you trained for that, then you got the best results. But if you had, say, a heavily reverberation ca heavy reverberation case and a no reverberation case, uh, and then you fed the thing, uh something that was a modest amount of reverberation then you'd get some result in between the two. So it was sort of [disfmarker] behaved reasonably. Is tha that a fair [disfmarker] Yeah. [speaker001:] Yeah. So you [disfmarker] you think it's perhaps better to have several M L Yeah but [disfmarker] [speaker004:] It works better if [pause] what? [speaker001:] Yea [speaker004:] I see. Well, see, i oc You were doing some something that was [disfmarker] So maybe the analogy isn't quite right. You were doing something that was in way a little better behaved. You had reverb for a single variable which was re uh, uh, reverberation. Here the problem seems to be is that we don't have a hug a really huge net with a really huge amount of training data. But we have s f [pause] for this kind of task, I would think, [pause] sort of a modest amount. I mean, a million frames actually isn't that much. 
We have a modest amount of [disfmarker] of uh training data from a couple different conditions, and then uh [disfmarker] in [disfmarker] yeah, that [disfmarker] and the real situation is that there's enormous variability that we anticipate in the test set in terms of language, and noise type uh, and uh, [pause] uh, channel characteristic, sort of all over the map. A bunch of different dimensions. And so, I'm just concerned that we don't really have [pause] um, the data to train up [disfmarker] I mean one of the things that we were seeing is that when we added in [disfmarker] we still don't have a good explanation for this, but we are seeing that we're adding in uh, a fe few different databases and uh the performance is getting worse and uh, when we just take one of those databases that's a pretty good one, it actually is [disfmarker] is [disfmarker] is [disfmarker] is [disfmarker] is better. And uh that says to me, yes, that, you know, there might be some problems with the pronunciation models that some of the databases we're adding in or something like that. But one way or another [pause] we don't have uh, seemingly, the ability [pause] to represent, in the neural net of the size that we have, um, all of the variability [pause] that we're gonna be covering. So that I'm [disfmarker] I'm [disfmarker] I'm hoping that um, this is another take on the efficiency argument you're making, which is I'm hoping that with moderate size neural nets, uh, that uh if we [disfmarker] if they look at more constrained conditions they [disfmarker] they'll have enough parameters to really represent them. Mm hmm. Mm hmm. Mm hmm. Yeah. [speaker001:] So doing both is [disfmarker] is not [disfmarker] is not right, you mean, or [disfmarker]? Yeah. [speaker004:] Yeah. I [disfmarker] I just sort of have a feeling [disfmarker] [speaker001:] But [disfmarker] Yeah. Mm hmm. [speaker004:] Yeah. 
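[Editor's note: the multiple-nets idea discussed above — one modest MLP per condition, merged so that an intermediate condition lands somewhere between the trained ones — can be sketched as a weighted average of the nets' per-frame posteriors. The equal-weight default is an assumption; the cited reverberation experiments may have merged differently:]

```python
def merge_posteriors(posterior_streams, weights=None):
    """Weighted average of per-frame posterior vectors coming from
    several condition-specific nets; equal weights by default."""
    k = len(posterior_streams)
    if weights is None:
        weights = [1.0 / k] * k
    merged = []
    for frames in zip(*posterior_streams):
        # frames holds one posterior vector per net for the same frame
        merged.append([sum(w * p[i] for w, p in zip(weights, frames))
                       for i in range(len(frames[0]))])
    return merged
```

[With non-negative weights summing to one, each merged vector stays a valid posterior, and a test condition between two training conditions yields scores between the two nets' outputs — the interpolative behavior described.]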
I mean [disfmarker] [vocalsound] i i e The um [disfmarker] I think it's true that the OGI folk found that using LDA [pause] RASTA, which is a kind of LogRASTA, it's just that they have the [disfmarker] I mean it's done in the log domain, as I recall, and it's [disfmarker] it uh [disfmarker] it's just that they d it's trained up, right? That that um benefitted from on line normalization. So they did [disfmarker] At least in their case, it did seem to be somewhat complimentary. So will it be in our case, where we're using the neural net? I mean they [disfmarker] they were not [disfmarker] not using the neural net. Uh I don't know. OK, so the other things you have here are uh, trying to improve results from a single [disfmarker] Yeah. Make stuff better. OK. Uh. [vocalsound] Yeah. And CPU memory issues. Yeah. We've been sort of ignoring that, haven't we? But [disfmarker] [speaker001:] Yeah, so I don't know. [speaker004:] Yeah, but I li [speaker001:] But we have to address the problem of CPU and memory we [disfmarker] [speaker004:] Well, I think [disfmarker] My impression [disfmarker] You [disfmarker] you folks have been looking at this more than me. But my impression was that [vocalsound] uh, there was a [disfmarker] a [disfmarker] a [disfmarker] a strict constraint on the delay, [speaker002:] Yeah. [speaker004:] but beyond that it was kind of that uh using less memory was better, and [vocalsound] using less CPU was better. Something like that, right? [speaker001:] Yeah, but [disfmarker] Yeah. So, yeah, but we've [disfmarker] I don't know. We have to get some reference point to where we [disfmarker] Well, what's a reasonable number? Perhaps be because if it's [disfmarker] if it's too large or [disfmarker] large or [@ @] [disfmarker] [speaker004:] Um, well I don't think we're [vocalsound] um [vocalsound] completely off the wall. 
I mean I think that if we [disfmarker] if we have [disfmarker] Uh, I mean the ultimate fall back that we could do [disfmarker] If we find uh [disfmarker] I mean we may find that we [disfmarker] we're not really gonna worry about the M L You know, if the MLP ultimately, after all is said and done, doesn't really help then we won't have it in. [speaker001:] Mmm. [speaker004:] If the MLP does, we find, help us enough in some conditions, uh, we might even have more than one MLP. We could simply say that is uh, done on the uh, server. [speaker001:] Mmm. [speaker004:] And it's uh [disfmarker] We do the other manipulations that we're doing before that. So, I [disfmarker] I [disfmarker] I think [disfmarker] I think that's [disfmarker] [pause] that's OK. [speaker001:] And [disfmarker] Yeah. [speaker004:] So I think the key thing was um, this plug into OGI. Um, what [disfmarker] what are they [disfmarker] What are they gonna be working [disfmarker] Do we know what they're gonna be working on while we take their features, [speaker001:] They're [disfmarker] They're starting to wor work on some kind of multi band. [speaker004:] and [disfmarker]? [speaker001:] So. Um [disfmarker] This [disfmarker] that was Pratibha. Sunil, what was he doing, do you remember? [speaker002:] Sunil? [speaker001:] Yeah. He was doing something new or [disfmarker]? [speaker002:] I [disfmarker] I don't re I didn't remember. Maybe he's working with [pause] neural network. [speaker001:] I don't think so. Trying to tune wha networks? [speaker002:] Yeah, I think so. [speaker001:] I think they were also mainly, well, working a little bit of new things, like networks and multi band, but mainly trying to tune their [disfmarker] their system as it is now to [disfmarker] just trying to get the best from this [disfmarker] this architecture. [speaker002:] Yeah. [speaker004:] OK. 
So I guess the way it would work is that you'd get [disfmarker] There'd be some point where you say, "OK, this is their version one" or whatever, and we get these VAD labels and features and so forth for all these test sets from them, [speaker001:] Mm hmm. [speaker004:] and then um, uh, that's what we work with. We have a certain level we try to improve it with this other path and then um, uh, when it gets to be uh, January some point uh, we say, "OK we [disfmarker] we have shown that we can improve this, in this way. So now uh [pause] um [pause] what's your newest version?" And then maybe they'll have something that's better and then we [disfmarker] we'd combine it. This is always hard. I mean I [disfmarker] I [disfmarker] I used to work [pause] with uh folks who were trying to improve a good uh, HMM system with uh [disfmarker] with a neural net system and uh, it was [pause] a common problem that you'd [disfmarker] Oh, and this [disfmarker] Actually, this is true not just for neural nets but just for [disfmarker] in general if people were [pause] working with uh, rescoring uh, N best lists or lattices that come [disfmarker] came from uh, a mainstream recognizer. Uh, You get something from the [disfmarker] the other site at one point and you work really hard on making it better with rescoring. But they're working really hard, too. So by the time [pause] you have uh, improved their score, they have also improved their score [speaker001:] Mmm. [speaker004:] and now there isn't any difference, [speaker001:] Yeah. [speaker004:] because the other [disfmarker] [speaker002:] Yeah. [speaker004:] So, um, I guess at some point we'll have to [speaker001:] So it's [disfmarker] [speaker004:] uh [disfmarker] [comment] Uh, I [disfmarker] I don't know. I think we're [disfmarker] we're integrated a little more tightly than happens in a lot of those cases. 
I think at the moment they [disfmarker] they say that they have a better thing we can [disfmarker] we [disfmarker] e e What takes all the time here is that th we're trying so many things, [speaker001:] Mmm. [speaker004:] presumably uh, in a [disfmarker] in a day we could turn around uh, taking a new set of things from them and [disfmarker] and rescoring it, [speaker001:] Mmm. Yeah. Yeah, perhaps we could. [speaker004:] right? So. Yeah. Well, OK. No, this is [disfmarker] I think this is good. I think that the most wide open thing is the issues about the uh, you know, different trainings. You know, da training targets and noises and so forth. [speaker001:] Mmm. [speaker004:] That's sort of wide open. [speaker001:] So we [disfmarker] we can for [disfmarker] we c we can forget combining multiple features and MSG perhaps, or focus more on the targets and on the training data and [disfmarker]? [speaker004:] Yeah, I think for right now um, I th I [disfmarker] I really liked MSG. And I think that, you know, one of the things I liked about it is has such different temporal properties. And um, I think that there is ultimately a really good uh, potential for, you know, bringing in things with different temporal properties. Um, but um, uh, we only have limited time and there's a lot of other things we have to look at. [speaker001:] Mmm. [speaker004:] And it seems like much more core questions are issues about the training set and the training targets, and fitting in uh what we're doing with what they're doing, and, you know, with limited time. Yeah. I think [pause] we have to start cutting down. So uh [disfmarker] [speaker001:] Mmm. [speaker004:] I think so, yeah.
And then, you know, once we [disfmarker] Um, having gone through this [pause] process and trying many different things, I would imagine that certain things uh, come up that you are curious about uh, that you're not getting to and so when the dust settles from the evaluation uh, I think that would be time to go back and take whatever intrigued you most, you know, got you most interested uh and uh [disfmarker] and [disfmarker] and work with it, you know, for the next round. Uh, as you can tell from these numbers uh, nothing that any of us is gonna do is actually gonna completely solve the problem. So. [speaker001:] Mmm. [speaker004:] So, [comment] there'll still be plenty to do. Barry, you've been pretty quiet. [speaker003:] Just listening. [speaker004:] Well I figured that, but [disfmarker] [vocalsound] That [disfmarker] what [disfmarker] what [disfmarker] what were you involved in in this primarily? [speaker003:] Um, [vocalsound] helping out [vocalsound] uh, preparing [disfmarker] Well, they've been kind of running all the experiments and stuff and I've been uh, uh w doing some work on the [disfmarker] on the [disfmarker] preparing all [disfmarker] all the data for them to [disfmarker] to um, train and to test on. Um Yeah. Right now, I'm [disfmarker] I'm focusing mainly on this final project I'm working on in Jordan's class. [speaker004:] Ah! I see. [speaker003:] Yeah. [speaker004:] Right. What's [disfmarker] what's that? [speaker003:] Um, [vocalsound] I'm trying to um [disfmarker] So there was a paper in ICSLP about um this [disfmarker] this multi band um, belief net structure. [comment] This guy did [disfmarker] [speaker004:] Mm hmm. [speaker003:] uh basically it was two H M Ms with [disfmarker] with a [disfmarker] with a dependency arrow between the two H M [speaker004:] Uh huh. [speaker003:] And so I wanna try [disfmarker] try coupling them instead of t having an arrow that [disfmarker] that flows from one sub band to another sub band.
I wanna try having the arrows go both ways. And um, [vocalsound] I'm just gonna see if [disfmarker] if that [disfmarker] that better models [pause] um, uh asynchrony in any way or um [disfmarker] [pause] Yeah. [speaker004:] Oh! OK. Well, that sounds interesting. [speaker003:] Yeah. [speaker004:] OK. Alright. Anything to [disfmarker] [vocalsound] you wanted to [disfmarker] No. OK. Silent partner in the [disfmarker] [vocalsound] in the meeting. Oh, we got a laugh out of him, that's good. OK, everyone h must contribute to the [disfmarker] our [disfmarker] our sound [disfmarker] [vocalsound] sound files here. OK, so speaking of which, if we don't have anything else that we need [disfmarker] You happy with where we are? Know [disfmarker] know wher know where we're going? [speaker001:] Mmm. [speaker004:] Uh [disfmarker] [speaker001:] I think so, yeah. [speaker004:] Yeah, yeah. You [disfmarker] you happy? You're happy. OK everyone [pause] should be happy. OK. You don't have to be happy. You're almost done. Yeah, yeah. OK. [speaker005:] Al actually I should mention [disfmarker] So if [disfmarker] [comment] um, about the Linux machine "Swede." [speaker004:] Yeah. [speaker005:] So it looks like the um, neural net tools are installed there. [speaker001:] Mmm. [speaker005:] And um Dan Ellis [comment] I believe knows something about using that machine so If people are interested in [disfmarker] in getting jobs running on that maybe I could help with that. [speaker001:] Mmm. Yeah, but I don't know if we really need now a lot of machines. Well. we could start computing another huge table but [disfmarker] yeah, we [disfmarker] [speaker004:] Well. Yeah, I think we want a different table, at least [speaker001:] Yeah, sure. [speaker004:] Right? I mean there's [disfmarker] there's some different things that we're trying to get at now. [speaker001:] But [disfmarker] [speaker004:] But [disfmarker] [speaker001:] Yeah. Mmm. [speaker004:] So. 
Yeah, as far as you can tell, you're actually OK on C on CPU uh, for training and so on? Yeah. [speaker001:] Ah yeah. I think so. Well, more is always better, but mmm, I don't think we have to train a lot of networks, now that we know [disfmarker] We just select what works [pause] fine [speaker004:] OK. OK. [speaker002:] Yeah. [speaker001:] and try to improve this [speaker004:] And we're OK on [disfmarker] And we're OK on disk? [speaker002:] to work [speaker001:] and [disfmarker] It's OK, yeah. Well sometimes we have some problems. [speaker002:] Some problems with the [disfmarker] [speaker004:] But they're correctable, uh problems. [speaker002:] You know. [speaker001:] Yeah, restarting the script basically and [disfmarker] [speaker004:] Yes. Yeah, I'm familiar with [vocalsound] that one, OK. Alright, so uh, [comment] [vocalsound] since uh, we didn't ha get a channel on for you, [comment] you don't have to read any digits but the rest of us will. Uh, is it on? Well. We didn't uh [disfmarker] I think I won't touch anything cuz I'm afraid of making the driver crash which it seems to do, [pause] pretty easily. OK, thanks. OK, so we'll uh [disfmarker] I'll start off the uh um connect the [disfmarker] [speaker001:] My battery is low. [speaker004:] Well, let's hope it works. Maybe you should go first and see so that you're [disfmarker] OK. [speaker002:] batteries? [speaker003:] Yeah, your battery's going down too. [speaker004:] Transcript uh [speaker003:] Carmen's battery is d going down too. [speaker004:] two [disfmarker] Oh, OK. Yeah. Why don't you go next then. OK. Guess we're done. OK, uh so. Just finished digits. Yeah, so. Uh Well, it's good. I think [disfmarker] I guess we can turn off our microphones now. [speaker003:] Just pull the batteries out. [speaker001:] We're going? OK. [speaker002:] OK. [speaker001:] Sh Close your door on [disfmarker] door on the way out? [speaker002:] Thanks. [speaker001:] Thanks. [speaker002:] Oh. [speaker001:] Yeah. 
Probably wanna get this other door, too. OK. So. Um. [vocalsound] [vocalsound] What are we talking about today? [speaker005:] Uh, well, first there are perhaps these uh Meeting Recorder digits that we tested. [speaker001:] Oh, yeah. That was kind of uh interesting. [speaker005:] So. [speaker001:] The [disfmarker] both the uh [disfmarker] [vocalsound] the SRI System and the oth [speaker005:] Um. [speaker001:] And for one thing that [disfmarker] that sure shows the [vocalsound] difference between having a lot of uh training data [vocalsound] or not, [speaker005:] Of data? Yeah. [speaker001:] uh, the uh [disfmarker] [vocalsound] The best kind of number we have on the English uh [disfmarker] on near microphone only is [disfmarker] is uh three or four percent. [speaker005:] Mm hmm. [speaker001:] And uh it's significantly better than that, using fairly simple front ends [vocalsound] on [disfmarker] [vocalsound] on the uh [disfmarker] [vocalsound] uh, with the SRI system. [speaker005:] Mm hmm. [speaker001:] So I th I think that the uh [disfmarker] But that's [disfmarker] that's using uh a [disfmarker] a pretty huge amount of data, mostly not digits, of course, but [disfmarker] but then again [disfmarker] Well, yeah. In fact, mostly not digits for the actual training the H M Ms whereas uh in this case we're just using digits for training the H M [speaker005:] Yeah. Right. [speaker001:] Did anybody mention about whether the [disfmarker] the SRI system is a [disfmarker] [vocalsound] is [disfmarker] is doing the digits um the wor as a word model or as uh a sub s sub phone states? [speaker005:] I guess it's [disfmarker] it's uh allophone models, [speaker001:] Yeah. [speaker005:] so, well [disfmarker] [speaker001:] Probably. Huh? [speaker005:] Yeah. I think so, because it's their very d huge, their huge system. [speaker001:] Yeah. [speaker005:] And. But. So. 
There is one difference [disfmarker] Well, the SRI system [disfmarker] the result for the SRI system that are represented here are with adaptation. So there is [disfmarker] It's their complete system and [disfmarker] including on line uh unsupervised adaptation. [speaker001:] That's true. [speaker005:] And if you don't use adaptation, the error rate is around fifty percent worse, I think, if I remember. Yeah. [speaker001:] OK. It's tha it's that much, huh? [speaker005:] Nnn. It's [disfmarker] Yeah. It's quite significant. Yeah. [speaker001:] Oh. OK. Still. [speaker005:] Mm hmm. [speaker001:] But [disfmarker] but uh what [disfmarker] what I think I'd be interested to do given that, is that we [disfmarker] we should uh [vocalsound] take [disfmarker] I guess that somebody's gonna do this, right? [disfmarker] is to take some of these tandem things and feed it into the SRI system, right? [speaker005:] Yeah. [speaker001:] Yeah. [speaker005:] We can do something like that. Yeah. [speaker001:] Yeah. Because [disfmarker] [speaker005:] But [disfmarker] But I guess the main point is the data because uh [vocalsound] I am not sure. Our back end is [disfmarker] is fairly simple but until now, well, the attempts to improve it or [disfmarker] have fail Ah, well, I mean uh what Chuck tried to [disfmarker] to [disfmarker] to do [speaker001:] Yeah, but he's doing it with the same data, right? [speaker005:] Yeah. [speaker001:] I mean so to [disfmarker] [vocalsound] So there's [disfmarker] there's [disfmarker] there's two things being affected. [speaker005:] So it's [disfmarker] Yeah. [speaker001:] I mean. One is that [disfmarker] that, you know, there's something simple that's wrong with the back end. We've been playing a number of states [speaker005:] Mm hmm. [speaker001:] uh I [disfmarker] I don't know if he got to the point of playing with the uh number of Gaussians yet [speaker005:] Mm hmm. [speaker001:] but [disfmarker] but uh, uh, you know. 
But, yeah, so far he hadn't gotten any big improvement, [speaker005:] Mm hmm. [speaker001:] but that's all with the same amount of data which is pretty small. [speaker005:] Yeah. Mmm. [speaker001:] And um. [speaker005:] So, yeah, we could retrain some of these tandem on [disfmarker] on huge [disfmarker] [speaker001:] Well, you could do that, but I'm saying even with it not [disfmarker] with that part not retrained, [speaker005:] Ah, yeah. [speaker001:] just [disfmarker] just using [disfmarker] having the H M Ms [disfmarker] much better H M [speaker005:] Just [disfmarker] f for the HMM models. Yeah. [speaker001:] Yeah. [speaker005:] Mm hmm. Mm hmm. [speaker001:] Um. [vocalsound] But just train those H M Ms using different features, the features coming from our Aurora stuff. [speaker005:] Yeah. [speaker001:] So. [speaker005:] Yeah. But [vocalsound] what would be interesting to see also is what [disfmarker] what [disfmarker] perhaps it's not related, the amount of data but the um recording conditions. I don't know. Because [vocalsound] it's probably not a problem of noise, because our features are supposed to be robust to noise. It's not a problem of channel, because there is um [vocalsound] [vocalsound] normalization with respect to the channel. [speaker001:] Well, yeah. [speaker005:] So [disfmarker] [speaker001:] I [disfmarker] I [disfmarker] I'm sorry. What [disfmarker] what is the problem that you're trying to explain? [speaker005:] The [disfmarker] the fact that [disfmarker] the result with the tandem and Aurora system are [vocalsound] uh so much worse. [speaker001:] That the [disfmarker] Oh. So much worse? [speaker005:] Yeah. [speaker001:] Oh. I uh but I'm [disfmarker] I'm almost certain that it [disfmarker] it [disfmarker] [vocalsound] I mean, that it has to do with the um amount of training data. [speaker005:] It [disfmarker] [speaker001:] It [disfmarker] it's [disfmarker] it's orders of magnitude off. [speaker005:] Yeah but [disfmarker] Yeah. 
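The channel normalization being referred to here is presumably cepstral mean subtraction: a fixed (convolutional) channel shows up as an additive constant in the log/cepstral domain, so removing each coefficient's per-utterance mean removes the channel. A one-function sketch of that idea:

```python
import numpy as np

def cepstral_mean_subtraction(cepstra):
    """Subtract the per-utterance mean of each cepstral dimension.
    A stationary channel adds a constant offset in the cepstral
    domain, so removing the mean cancels it out."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)
```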
Yeah but we train only on digits and it's [disfmarker] it's a digit task, so. Well. [speaker001:] But [disfmarker] but having a huge [disfmarker] [speaker005:] It [disfmarker] [speaker001:] If [disfmarker] [vocalsound] if you look at what commercial places do, they use a huge amount of data. [speaker005:] Mm hmm. Alright. [speaker001:] This is a modest amount of data. [speaker005:] Yeah. Mm hmm. [speaker001:] So. [vocalsound] I mean, ordinarily you would say "well, given that you have enough occurrences of the digits, you can just train with digits rather than with, you know" [disfmarker] [speaker005:] Mm hmm. [speaker001:] But the thing is, if you have a huge [disfmarker] in other words, do word models [disfmarker] But if you have a huge amount of data then you're going to have many occurrences of similar uh allophones. [speaker005:] Right. Mmm. Yeah. [speaker001:] And that's just a huge amount of training for it. So it's [vocalsound] um [disfmarker] [vocalsound] I [disfmarker] I think it has to be that, because, as you say, this is, you know, this is near microphone, [speaker005:] Mm hmm. [speaker001:] it's really pretty clean data. [speaker005:] Mm hmm. [speaker001:] Um. Now, some of it could be the fact that uh [disfmarker] let's see, in the [disfmarker] in these multi train things did we include noisy data in the training? [speaker005:] Yeah. [speaker001:] I mean, that could be hurting us actually, for the clean case. [speaker005:] Yeah. Well, actually we see that the clean train for the Aurora proposals are [disfmarker] are better than the multi train, [speaker001:] It is if [disfmarker] [speaker005:] yeah. [speaker001:] Yeah. Yeah. Cuz this is clean data, and so that's not too surprising. [speaker005:] Mm hmm. [speaker001:] But um. Uh. So. [speaker005:] Well, o I guess what I meant is that well, let's say if we [disfmarker] if we add enough data to train on the um on the Meeting Recorder digits, I guess we could have better results than this. 
[speaker001:] Uh huh. Mm hmm. [speaker005:] And. What I meant is that perhaps we can learn something uh from this, what's [disfmarker] what's wrong uh what [disfmarker] what is different between TI digits and these digits and [disfmarker] [speaker001:] What kind of numbers are we getting on TI digits? [speaker005:] It's point eight percent, so. [speaker001:] Oh. I see. [speaker005:] Four Fourier. [speaker001:] So in the actual TI digits database we're getting point eight percent, [speaker005:] Yeah. Yeah. [speaker001:] and here we're getting three or four [disfmarker] three, [speaker005:] Mm hmm. [speaker001:] let's see, three for this? Yeah. Sure, but I mean, um point eight percent is something like double uh or triple what people have gotten who've worked very hard at doing that. [speaker005:] Mm hmm. [speaker001:] And [disfmarker] and also, as you point out, there's adaptation in these numbers also. [speaker005:] Mmm. [speaker001:] So if you, you know, put the ad adap take the adaptation off, then it [disfmarker] for the English Near you get something like two percent. And here you had, you know, something like three point four. [speaker005:] Mm hmm. [speaker001:] And I could easily see that difference coming from this huge amount of data that it was trained on. [speaker005:] Mm hmm. [speaker001:] So it's [disfmarker] You know, I don't think there's anything magical here. [speaker005:] Yeah. [speaker001:] It's, you know, we used a simple HTK system with a modest amount of data. And this is a [disfmarker] a, you know, modern [vocalsound] uh system [speaker005:] Yeah. [speaker001:] uh has [disfmarker] has a lot of nice points to it. [speaker005:] Mm hmm. [speaker001:] Um. So. I mean, the HTK is an older HTK, even. So. [speaker005:] Mm hmm. [speaker001:] Yeah it [disfmarker] it's not that surprising. 
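The error rates being compared here come from the standard edit-distance alignment of hypothesis words against reference words. For reference, a minimal word error rate computation:

```python
def word_error_rate(ref, hyp):
    """Levenshtein distance (substitutions + deletions + insertions)
    between reference and hypothesis word lists, divided by the
    reference length -- the usual WER definition."""
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

So "point eight percent" means roughly eight word errors per thousand reference digits under this alignment.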
But to me it just [disfmarker] it just meant a practical [vocalsound] point that um if we want to [vocalsound] publish results on digits that [disfmarker] that people pay [vocalsound] attention to we probably should uh [disfmarker] Cuz we've had the problem before that you get [disfmarker] show some [vocalsound] nice improvement on something that's [disfmarker] that's uh, uh [disfmarker] it seems like too large a number, [speaker005:] Mm hmm. [speaker001:] and uh [vocalsound] uh people don't necessarily take it so seriously. Um. Yeah. Yeah. So the three point four percent for this uh is [disfmarker] is uh [disfmarker] So why is it [disfmarker] It's an interesting question though, still. Why is [disfmarker] why is it three point four percent for the d the digits recorded in this environment as opposed to [vocalsound] the uh point eight percent for [disfmarker] for [disfmarker] for the original TI digits database? Um. [speaker005:] Yeah. th that's [disfmarker] th that's my point [speaker001:] Given [disfmarker] given the same [disfmarker] [speaker005:] I [disfmarker] I [disfmarker] I don't I [disfmarker] [speaker001:] Yeah. So ignore [disfmarker] ignoring the [disfmarker] the [disfmarker] the SRI system for a moment, [speaker005:] Mm hmm. [speaker001:] just looking at [vocalsound] the TI di the uh tandem system, if we're getting point eight percent, which, yes, it's high. It's, you know, it [disfmarker] it's not awfully high, [speaker005:] Mm hmm. [speaker001:] but it's, you know [disfmarker] it's [disfmarker] it's high. Um. [vocalsound] Why is it [vocalsound] uh four times as high, or more? [speaker005:] Yeah, I guess. [speaker001:] Right? I mean, there's [disfmarker] [vocalsound] even though it's close miked there's still [disfmarker] there really is background noise. [speaker005:] Mm hmm. [speaker001:] Um. 
And [vocalsound] uh I suspect when the TI digits were recorded if somebody fumbled or said something wrong or something that they probably made them take it over. [speaker005:] Mm hmm. [speaker001:] It was not [disfmarker] I mean there was no attempt to have it be realistic in any [disfmarker] in any sense at all. [speaker005:] Well. Yeah. And acoustically, it's q it's [disfmarker] I listened. It's quite different. TI digit is [disfmarker] it's very, very clean and it's like studio recording [speaker001:] Mm hmm. [speaker005:] whereas these Meeting Recorder digits sometimes you have breath noise and Mmm. [speaker001:] Right. Yeah. So I think they were [disfmarker] [speaker005:] It's [nonvocalsound] not controlled at all, I mean. [speaker001:] Bless you. [speaker002:] Thanks. [speaker001:] I [disfmarker] Yeah. I think it's [disfmarker] it's [disfmarker] [speaker005:] Mm hmm. [speaker001:] So. Yes. [speaker005:] But [speaker001:] It's [disfmarker] I think it's [disfmarker] it's the indication it's harder. [speaker005:] Yeah. [speaker001:] Uh. [vocalsound] Yeah and again, you know, i that's true either way. I mean so take a look at the uh [disfmarker] [vocalsound] um, the SRI results. I mean, they're much much better, but still you're getting something like one point three percent for uh things that are same data as in T [disfmarker] TI digits the same [disfmarker] same text. [speaker005:] Mm hmm. [speaker001:] Uh. And uh, I'm sure the same [disfmarker] same system would [disfmarker] would get, you know, point [disfmarker] point three or point four or something [vocalsound] on the actual TI digits. So this [disfmarker] I think, on both systems the [vocalsound] these digits are showing up as harder. [speaker005:] Mmm. [speaker001:] Um. [speaker005:] Mm hmm. [speaker001:] Which I find sort of interesting cause I think this is closer to [disfmarker] uh I mean it's still read. 
But I still think it's much closer to [disfmarker] to what [disfmarker] what people actually face, [vocalsound] um when they're [disfmarker] they're dealing with people saying digits over the telephone. I mean. [vocalsound] I don't think uh [disfmarker] I mean, I'm sure they wouldn't release the numbers, but I don't think that uh [vocalsound] the uh [disfmarker] the [disfmarker] the companies that [disfmarker] that do telephone [vocalsound] speech get anything like point four percent on their [vocalsound] digits. I'm [disfmarker] I'm [disfmarker] I'm sure they get [disfmarker] Uh, I mean, for one thing people do phone up who don't have uh uh Middle America accents and [speaker005:] Mm hmm. [speaker001:] it's a we we it's [disfmarker] it's [disfmarker] it's US. it has [disfmarker] has many people [vocalsound] [vocalsound] who sound in many different ways. So. Um. I mean. OK. That was that topic. What else we got? [speaker005:] Um. [speaker001:] Did we end up giving up on [disfmarker] on, any Eurospeech submissions, [speaker005:] But [disfmarker] [speaker001:] or [disfmarker]? I know Thilo and Dan Ellis are [disfmarker] are submitting something, but uh. [speaker005:] Yeah. I [disfmarker] [vocalsound] I guess e the only thing with these [disfmarker] the Meeting Recorder and, well, [disfmarker] So, I think, yeah [disfmarker] I think we basically gave up. [speaker001:] Um. [vocalsound] Now, actually for the [disfmarker] for the Aur uh [speaker005:] But [disfmarker] [speaker001:] we do have stuff for Aurora, right? Because [disfmarker] because we have ano an extra month or something. [speaker005:] Yeah. Yeah. Yeah. So. Yeah, for sure we will do something for the special session. [speaker001:] Yeah. Well, that's fine. So th so [disfmarker] so we have a couple [disfmarker] a couple little things on Meeting Recorder [speaker005:] Yeah. Mm hmm. [speaker001:] and we have [disfmarker] [vocalsound] We don't [disfmarker] we don't have to flood it with papers. 
We're not trying to prove anything to anybody. so. That's fine. Um. Anything else? [speaker005:] Yeah. Well. So. Perhaps the point is that we've been working on [vocalsound] is, yeah, we have put the um the good VAD in the system and [vocalsound] it really makes a huge difference. Um. So, yeah. I think, yeah, this is perhaps one of the reason why our system was not [disfmarker] [vocalsound] not the best, because with the new VAD, it's very [disfmarker] the results are similar to the France Telecom results and perhaps even better sometimes. [speaker001:] Hmm. [speaker002:] Huh. [speaker005:] Um. So there is this point. Uh. The problem is that it's very big and [vocalsound] [vocalsound] we still have to think how to [disfmarker] where to put it and [disfmarker] [vocalsound] um, [speaker001:] Mm hmm. [speaker005:] because it [disfmarker] it [disfmarker] well, this VAD uh either some delay and we [disfmarker] if we put it on the server side, it doesn't work, because on the server side features you already have LDA applied [vocalsound] from the f from the terminal side and [vocalsound] so you accumulate the delay so the VAD should be before the LDA which means perhaps on the terminal side and then smaller [vocalsound] and [speaker001:] So wha where did this good VAD come from? [speaker005:] So. It's um from OGI. So it's the network trained [disfmarker] it's the network with the huge amounts on hidden [disfmarker] of hidden units, and um nine input frames compared to the VAD that was in the proposal which has a very small amount of hidden units and fewer inputs. [speaker001:] This is the one they had originally? [speaker005:] Yeah. [speaker001:] Oh. Yeah, but they had to [pause] get rid of it because of the space, didn't they? [speaker005:] Yeah. So. Yeah. But the abso assumption is that we will be able to make a VAD that's small and that works fine. And. So we can [disfmarker] [speaker001:] Well. So that's a problem. Yeah. [speaker005:] Yeah but [disfmarker] nnn. 
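The VAD being described is an MLP that sees nine consecutive feature frames and emits a per-frame speech/non-speech decision used for frame dropping. A sketch of that decision rule, assuming pre-trained weights are given; the layer sizes below are illustrative, not those of the actual OGI network:

```python
import numpy as np

def mlp_vad(features, w1, b1, w2, b2, threshold=0.5, context=9):
    """Frame-level speech/non-speech decisions from an MLP whose input
    is a window of `context` feature frames (nine, as in the VAD being
    discussed). Weights are assumed pre-trained; sizes illustrative."""
    half = context // 2
    # Pad by repeating edge frames so every frame gets a full window.
    padded = np.pad(features, ((half, half), (0, 0)), mode='edge')
    decisions = []
    for t in range(len(features)):
        x = padded[t:t + context].ravel()            # stack the window
        h = np.tanh(w1 @ x + b1)                     # hidden layer
        p = 1.0 / (1.0 + np.exp(-(w2 @ h + b2)))     # P(speech | window)
        decisions.append(p > threshold)
    return np.array(decisions)
```

The size trade-off mentioned above is visible here: the parameter count is dominated by `w1`, which grows with both the hidden-layer size and the nine-frame input window.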
[speaker001:] But the other thing is uh to use a different VAD entirely. I mean, uh i if [disfmarker] if there's a [vocalsound] if [disfmarker] if [disfmarker] I [disfmarker] I don't know what the thinking was amongst the [disfmarker] the [disfmarker] the [vocalsound] the ETSI folk but um if everybody agreed sure let's use this VAD and take that out of there [disfmarker] [speaker005:] Mm hmm. Mm hmm. They just want, apparently [disfmarker] they don't want to fix the VAD because they think there is some interaction between feature extraction and [disfmarker] and VAD or frame dropping But they still [vocalsound] want to [disfmarker] just to give some um [vocalsound] requirement for this VAD because it's [disfmarker] it will not be part of [disfmarker] they don't want it to be part of the standard. [speaker001:] OK. [speaker005:] So. So it must be at least uh somewhat fixed but not completely. So there just will be some requirements that are still not [disfmarker] uh not yet uh ready I think. [speaker001:] Determined. I see. [speaker005:] Nnn. [speaker001:] But I was thinking that [disfmarker] that uh [vocalsound] s Sure, there may be some interaction, but I don't think we need to be stuck on using our or OGI's [pause] VAD. We could use somebody else's if it's smaller or [disfmarker] [speaker005:] Yeah. [speaker001:] You know, as long as it did the job. [speaker005:] Mm hmm. [speaker001:] So that's good. [speaker005:] Uh. So there is this thing. There is um [disfmarker] Yeah. Uh I designed a new [disfmarker] a new filter because when I designed other filters with shorter delay from the LDA filters, [vocalsound] there was one filter with fif sixty millisecond delay and the other with ten milliseconds and [vocalsound] uh Hynek suggested that both could have sixty five sixty s [speaker001:] Right. [speaker005:] I think it's sixty five. Yeah. [speaker001:] Yeah. [speaker005:] Both should have sixty five because [disfmarker] Yeah. 
[speaker001:] You didn't gain anything, right? [speaker005:] And. So I did that and uh it's running. So, [vocalsound] let's see what will happen. Uh but the filter is of course closer to the reference filter. [speaker001:] Mm hmm. [speaker005:] Mmm. Um. Yeah. I think [disfmarker] [speaker001:] So that means logically, in principle, it should be better. So probably it'll be worse. [speaker005:] Yeah [speaker001:] Or in the basic perverse nature uh of reality. Yeah. OK. [speaker005:] Yeah. Sure. [speaker003:] Yeah. [speaker001:] OK. [speaker005:] Yeah, and then we've started to work with this of um voiced unvoiced stuff. [speaker001:] Mm hmm. [speaker005:] And next week I think we will [vocalsound] perhaps try to have um a new system with uh uh MSG stream also see what [disfmarker] what happens. So, something that's similar to the proposal too, but with MSG stream. [speaker001:] Mm hmm. Mm hmm. [speaker005:] Mmm. [speaker001:] OK. [speaker004:] No, I w [vocalsound] I begin to play [vocalsound] with Matlab and to found some parameter robust for voiced unvoiced decision. But only to play. And we [disfmarker] [vocalsound] they [disfmarker] we found that maybe w is a classical parameter, the [vocalsound] sq the variance [vocalsound] between the um FFT of the signal and the small spectrum of time [vocalsound] we [disfmarker] after the um mel filter bank. [speaker001:] Uh huh. [speaker004:] And, well, is more or less robust. Is good for clean speech. Is quite good [vocalsound] for noisy speech. [speaker001:] Huh? Mm hmm. [speaker004:] but um we must to have bigger statistic with TIMIT, [speaker001:] Mm hmm. [speaker004:] and is not ready yet to use on, [speaker001:] Yeah. [speaker004:] well, I don't know. [speaker001:] Yeah. [speaker005:] Yeah. So, basically we wa want to look at something like the ex the ex excitation signal and [disfmarker] [speaker001:] Right. [speaker004:] Mm hmm. [speaker005:] which are the variance of it and [disfmarker] [speaker004:] I have here. 
I have here for one signal, for one frame. [speaker005:] Mmm. [speaker001:] Yeah. [speaker004:] The [disfmarker] the mix of the two, noise and unnoise, and the signal is this. [speaker001:] Uh huh. [speaker004:] Clean, and this noise. [speaker001:] Uh. [speaker004:] These are the two [disfmarker] the mixed, the big signal is for clean. [speaker001:] Well, I'm s uh [disfmarker] There's [disfmarker] None of these axes are labeled, so I don't know what this [disfmarker] What's this axis? [speaker004:] Uh this is uh [disfmarker] this axis is [vocalsound] nnn, "frame". [speaker001:] Frame. [speaker004:] Mm hmm. [speaker001:] And what's th what this? [speaker004:] Uh, this is uh energy, log energy of the spectrum. Of the this is the variance, the difference [nonvocalsound] between the spectrum of the signal and FFT of each frame of the signal and this smoothed spectrum of time after the f may fit for the two, [speaker001:] For this one. For the noi [speaker004:] this big, to here, they are to signal. This is for clean and this is for noise. [speaker001:] Oh. There's two things on the same graph. [speaker004:] Yeah. I don't know. I [disfmarker] I think that I have d another graph, but I'm not sure. [speaker005:] Yeah. [speaker001:] So w which is clean and which is noise? [speaker005:] I think the lower one is noise. [speaker004:] The lower is noise and the height is clean. [speaker001:] OK. So it's harder to distinguish [speaker004:] It's height. [speaker001:] but it [disfmarker] but it g [speaker005:] Yeah. [speaker001:] with noise of course but [disfmarker] but [disfmarker] [speaker004:] Oh. I must to have. [speaker001:] Uh. [speaker004:] Pity, but I don't have two different [speaker001:] And presumably when there's a [disfmarker] a [disfmarker] [speaker005:] So this should the [disfmarker] the [disfmarker] the t voiced portions. [speaker001:] Uh huh. [speaker004:] Yeah, it is the height is voiced portion. [speaker005:] The p the peaks should be voiced portion.
[speaker004:] And this is the noise portion. [speaker001:] Uh huh. [speaker004:] And this is more or less like this. But I meant to have see [@ @] two [disfmarker] two the picture. [speaker001:] Yeah. Yeah. [speaker004:] This is, for example, for one frame. [speaker001:] Yeah [speaker004:] the [disfmarker] the spectrum of the signal. And this is the small version of the spectrum after ML mel filter bank. [speaker001:] Yeah. And this is the difference? [speaker004:] And this is I don't know. This is not the different. This is trying to obtain [vocalsound] with LPC model the spectrum but using Matlab without going factor and s [speaker001:] No pre emphasis? Yeah. [speaker004:] Not pre emphasis. Nothing. [speaker001:] Yeah so it's [disfmarker] [speaker004:] And the [disfmarker] [speaker001:] doesn't do too well there. [speaker004:] I think that this is good. This is quite similar. this is [disfmarker] [vocalsound] this is another frame. ho how I obtained the [vocalsound] envelope, [nonvocalsound] this envelope, with the mel filter bank. [speaker001:] Right. So now I wonder [disfmarker] I mean, do you want to [disfmarker] I know you want to get at something orthogonal from what you get with the smooth spectrum Um. But if you were to really try and get a voiced unvoiced, do you [disfmarker] do you want to totally ignore that? I mean, do you [disfmarker] do you [disfmarker] I mean, clearly a [disfmarker] a very big [disfmarker] very big cues [vocalsound] for voiced unvoiced come from uh spectral slope and so on, [speaker005:] Mm hmm. [speaker001:] right? Um. [speaker005:] Yeah. Well, this would be [disfmarker] this would be perhaps an additional parameter, simply isn't [disfmarker] [speaker001:] Yeah. I see. [speaker005:] Yeah. [speaker004:] Yeah because when did noise clear [nonvocalsound] in these section is clear [speaker005:] Uh. [speaker001:] Mm hmm. 
[speaker004:] if s [@ @] [nonvocalsound] val value is indicative that is a voice frame and it's low values [speaker001:] Yeah. Yeah. Well, you probably want [disfmarker] I mean, [vocalsound] certainly if [vocalsound] you want to do good voiced unvoiced detection, you need a few features. Each [disfmarker] each feature is [vocalsound] by itself not enough. But, you know, people look at [disfmarker] at slope and [vocalsound] uh first auto correlation coefficient, divided by power. [speaker005:] Mmm. [speaker001:] Or [disfmarker] or uh um there's uh [disfmarker] I guess we prob probably don't have enough computation to do a simple pitch detector or something? I mean with a pitch detector you could have a [disfmarker] [vocalsound] have a [disfmarker] an estimate of [disfmarker] of what the [disfmarker] [speaker005:] Mmm. [speaker001:] Uh. Or maybe you could you just do it going through the P FFT's figuring out some um probable [vocalsound] um harmonic structure. Right. [speaker005:] Mmm. [speaker001:] And [disfmarker] and uh. [speaker004:] you have read up and [disfmarker] you have a paper, [vocalsound] the paper that you s give me yesterday. [speaker005:] Oh, yeah. [speaker004:] they say that yesterday [vocalsound] they are some [nonvocalsound] problem [speaker005:] But [disfmarker] Yeah, but it's not [disfmarker] it's, yeah, it's [disfmarker] it's another problem. [speaker004:] and the [disfmarker] [speaker005:] Yeah [speaker004:] Is another problem. [speaker005:] Um. Yeah, there is th this fact actually. If you look at this um spectrum, [speaker001:] Yeah. [speaker005:] What's this again? Is it [vocalsound] the mel filters? [speaker004:] Yeah like this. [speaker005:] Yeah. [speaker004:] Of kind like this. [speaker005:] OK. So the envelope here is the output of the mel filters [speaker001:] Mm hmm. 
[speaker005:] and what we clearly see is that in some cases, and it clearly appears here, and the [disfmarker] the harmonics are resolved by the f Well, there are still appear after mel filtering, [speaker001:] Mm hmm. [speaker005:] and it happens [vocalsound] for high pitched voice because the width of the lower frequency mel filters [vocalsound] is sometimes even smaller than the pitch. [speaker001:] Yeah. [speaker005:] It's around one hundred, one hundred and fifty hertz [vocalsound] Nnn. [speaker001:] Right. [speaker005:] And so what happens is that this uh, add additional variability to this envelope and [vocalsound] [vocalsound] um [speaker001:] Yeah. [speaker005:] so we were thinking to modify the mel spectrum to have something that [disfmarker] that's smoother on low frequencies. [speaker001:] That's as [disfmarker] as a separate thing. [speaker005:] i [speaker001:] Yeah. [speaker005:] Yeah. This is a separate thing. [speaker004:] Yeah. [speaker001:] Separate thing? Yeah. [speaker005:] And. [speaker001:] Yeah. Maybe so. Um. Yeah. So, what [disfmarker] Yeah. What I was talking about was just, starting with the FFT you could [disfmarker] you could uh do a very rough thing to estimate [disfmarker] estimate uh pitch. [speaker005:] Yeah. Mm hmm. [speaker001:] And uh uh, given [disfmarker] you know, given that, uh [vocalsound] you could uh uh come up with some kind of estimate of how much of the low frequency energy was [disfmarker] was explained by [disfmarker] [vocalsound] by uh uh those harmonics. [speaker005:] Mm hmm. [speaker001:] Uh. It's uh a variant on what you're s what you're doing. The [disfmarker] I mean, the [disfmarker] the [vocalsound] the mel does give a smooth thing. But as you say it's not that smooth here. 
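[Editor's note: the claim above — that the lowest mel filters are only on the order of 100–150 Hz wide, narrow enough for a high-pitched voice's harmonics to survive the smoothing — can be checked back-of-the-envelope with the standard HTK-style mel scale. The filter count and band edges below are assumptions for a typical 8 kHz-sampled front end, not the system actually used.]

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_edges(n_filters=23, f_low=0.0, f_high=4000.0):
    """Edge frequencies (Hz) of triangular mel filters: n_filters + 2 points,
    equally spaced on the mel scale."""
    mels = np.linspace(hz_to_mel(f_low), hz_to_mel(f_high), n_filters + 2)
    return mel_to_hz(mels)

edges = mel_filter_edges()
# The base of triangular filter i runs from edges[i] to edges[i + 2].
widths = edges[2:] - edges[:-2]
print("lowest filter width:  %.0f Hz" % widths[0])
print("highest filter width: %.0f Hz" % widths[-1])
```

Under these assumptions the lowest filter is only about 120 Hz wide, i.e. narrower than a high pitch, which is exactly the harmonic-leakage problem being described.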
And [disfmarker] and so if you [disfmarker] [vocalsound] if you just you know subtracted off uh your guess of the harmonics then something like this would end up with [vocalsound] quite a bit lower energy in the first fifteen hundred hertz or so and [disfmarker] and our first kilohertz, even. [speaker005:] Mm hmm. [speaker001:] And um [vocalsound] if was uh noisy, the proportion that it would go down would be [speaker005:] Mm hmm. [speaker001:] if it was [disfmarker] if it was unvoiced or something. So you oughta be able to [vocalsound] pick out voiced segments. At least it should be another [disfmarker] another cue. [speaker005:] Mm hmm. [speaker001:] So. [vocalsound] Anyway. OK? That's what's going on. Uh. What's up with you? [speaker002:] Um [vocalsound] our t I went to [vocalsound] talk with uh Mike Jordan this [disfmarker] this week [speaker001:] Mm hmm. [speaker002:] um [nonvocalsound] and uh [vocalsound] shared with him the ideas about um [vocalsound] extending the Larry Saul work and um I asked him some questions about factorial H M so like later down the line when [vocalsound] we've come up with these [disfmarker] these feature detectors, how do we [disfmarker] [vocalsound] how do we uh [vocalsound] you know, uh model the time series that [disfmarker] that happens um [vocalsound] [vocalsound] and [vocalsound] and we talked a little bit about [vocalsound] factorial H M Ms and how [vocalsound] um when you're doing inference [disfmarker] or w when you're doing recognition, there's like simple Viterbi stuff that you can do for [disfmarker] [vocalsound] for these H M and [vocalsound] the uh [disfmarker] [vocalsound] the great advantages that um a lot of times the factorial H M Ms don't [vocalsound] um [vocalsound] don't over alert the problem there they have a limited number of parameters and they focus directly on [disfmarker] [vocalsound] on uh the sub problems at hand so [vocalsound] you can imagine [vocalsound] um [vocalsound] five or so parallel 
[vocalsound] um features um transitioning independently and then [vocalsound] at the end you [disfmarker] you uh couple these factorial H M Ms with uh [disfmarker] [vocalsound] with uh undirected links um based on [disfmarker] [vocalsound] based on some more data. [speaker001:] Hmm. [speaker002:] So he [disfmarker] he seemed [disfmarker] he seemed like really interested in [disfmarker] [vocalsound] in um [disfmarker] in this and said [disfmarker] said this is [disfmarker] this is something very do able and can learn a lot and um yeah, I've just been [vocalsound] continue reading um about certain things. [speaker001:] Mm hmm. [speaker002:] um thinking of maybe using um [vocalsound] um m modulation spectrum stuff to [vocalsound] um [disfmarker] as features um also in the [disfmarker] in the sub bands [speaker001:] Mm hmm. [speaker002:] because [vocalsound] it seems like [vocalsound] the modulation um spectrum tells you a lot about the intelligibility of [disfmarker] of certain um words and stuff So, um. Yeah. Just that's about it. [speaker001:] OK. [speaker003:] OK. And um so I've been looking at Avendano's work and um uh I'll try to write up in my next stat status report a nice description of [vocalsound] what he's doing, but it's [disfmarker] it's an approach to deal with [vocalsound] reverberation or that [disfmarker] the aspect of his work that I'm interested in the idea is that um [vocalsound] [vocalsound] [vocalsound] normally an analysis frames are um [vocalsound] too short to encompass reverberation effects um in full. 
You miss most of the reverberation tail in a ten millisecond window and so [vocalsound] [vocalsound] you [disfmarker] you'd like it to be that [vocalsound] um [vocalsound] the reverberation responses um simply convolved um in, but it's not really with these ten millisecond frames cuz you j But if you take, say, a two millisecond [vocalsound] um window [disfmarker] I'm sorry a two second window then in a room like this, most of the reverberation response [vocalsound] is included in the window and the [disfmarker] then it um [vocalsound] then things are l more linear. It is [disfmarker] it is more like the reverberation response is simply c convolved and um [disfmarker] [vocalsound] and you can use channel normalization techniques [vocalsound] like uh in his thesis he's assuming that the reverberation response is fixed. He just does um [vocalsound] mean subtraction, which is like removing the DC component of the modulation spectrum and [vocalsound] that's supposed to d um deal [disfmarker] uh deal pretty well with the um reverberation and um [vocalsound] the neat thing is you can't take these two second frames and feed them to a speech recognizer um [vocalsound] so he does this [vocalsound] um [vocalsound] method training trading the um [vocalsound] the spectral resolution for time resolution [vocalsound] and um [vocalsound] come ca uh synthesizes a new representation which is with say ten second frames but a lower s um [vocalsound] frequency resolution. So I don't really know the theory. I guess it's [disfmarker] these are called "time frequency representations" and h he's making the [disfmarker] the time sh um finer grained and the frequency resolution um less fine grained. [speaker005:] Mm hmm. [speaker003:] s so I'm [disfmarker] I guess my first stab actually in continuing [vocalsound] his work is to um [vocalsound] re implement this [disfmarker] this thing which um [vocalsound] changes the time and frequency resolutions cuz he doesn't have code for me. 
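[Editor's note: since the meeting notes that Avendano's own code was not available, here is a minimal numpy sketch of just the two ideas described so far: analysis windows long enough (about two seconds) that the reverberation tail fits inside a frame and acts roughly as a convolution, and per-frequency mean subtraction as the channel normalization, i.e. removing the DC component of the modulation spectrum. The resolution-trading resynthesis step he uses afterwards is omitted, and all window/hop values are assumptions.]

```python
import numpy as np

def long_window_logspec(x, sr=16000, win_s=2.0, hop_s=0.25):
    """Log-magnitude spectrogram with very long analysis windows, so a room
    response much longer than 10 ms still behaves (roughly) as a
    convolution within each frame."""
    win, hop = int(win_s * sr), int(hop_s * sr)
    frames = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win] * np.hanning(win)
        frames.append(np.log(np.abs(np.fft.rfft(seg)) + 1e-10))
    return np.array(frames)          # shape: (n_frames, n_bins)

def mean_subtract(logspec):
    """Per-bin mean subtraction over time: with long windows this removes a
    fixed convolutive channel (the reverberation response, if stationary),
    since convolution adds a constant log|H(f)| to every frame."""
    return logspec - logspec.mean(axis=0, keepdims=True)
```

The point of the long window is visible in the math: only when the impulse response is short relative to the frame does `log|X·H| ≈ log|X| + log|H|`, which is what the mean subtraction then cancels.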
So that that'll take some reading about the theory. I don't really know the theory. [speaker005:] Mm hmm. [speaker003:] Oh, and um, [vocalsound] another f first step is um, so the [disfmarker] the way I want to extend his work is make it able to deal with a time varying reverberation response um [vocalsound] and um we don't really know [vocalsound] how fast the um [disfmarker] the reverberation response is varying the Meeting Recorder data um so um [vocalsound] we [disfmarker] we have this um block least squares um imp echo canceller implementation and um [vocalsound] I want to try [vocalsound] finding [vocalsound] the [disfmarker] the response, say, between a near mike and the table mike for someone using the echo canceller and looking at the echo canceller taps and then [vocalsound] see how fast that varies [vocalsound] from block to block. [speaker005:] Mm hmm. [speaker003:] That should give an idea of how fast the reverberation response is changing. [speaker005:] Mm hmm. [speaker001:] OK. Um. I think we're [vocalsound] sort of done. [speaker005:] Yeah. [speaker001:] So let's read our digits and go home. [speaker003:] Um. S so um y you do [disfmarker] I think you read some of the [disfmarker] the zeros as O's and some as zeros. [speaker001:] Yeah. [speaker003:] Is there a particular way we're supposed to read them? [speaker005:] There are only zeros here. Well. [speaker001:] No. "O" [disfmarker] "O" [disfmarker] "O" "O" [disfmarker] "O" [disfmarker] "O" and "zero" are two ways that we say that digit. [speaker005:] Eee. Yeah. [speaker001:] So it's [disfmarker] [speaker005:] But [disfmarker] [speaker002:] Ha! [speaker001:] so it's [disfmarker] i [speaker005:] Perhaps in the sheets there should be another sign for the [disfmarker] if we want to [disfmarker] the [disfmarker] the guy to say "O" or [speaker001:] No. I mean. I think people will do what they say. [speaker005:] It's [disfmarker] Yeah. [speaker001:] It's OK. [speaker005:] OK. [speaker003:] Alright. 
[speaker001:] I mean in digit recognition we've done before, you have [disfmarker] you have two pronunciations for that value, "O" and "zero". [speaker003:] OK. [speaker005:] But it's perhaps more difficult for the people to prepare the database then, if [disfmarker] because here you only have zeros [speaker001:] No, they just write [disfmarker] [speaker005:] and [disfmarker] and people pronounce "O" or zero [disfmarker] [speaker001:] they [disfmarker] they write down OH. or they write down ZERO a and they [disfmarker] and they each have their own pronunciation. [speaker005:] Yeah but if the sh the sheet was prepared with a different sign for the "O". [speaker001:] But people wouldn't know what that wa I mean [vocalsound] there is no convention for it. [speaker005:] OK. Yeah. OK. [speaker001:] See. I mean, you'd have to tell them [vocalsound] "OK when we write this, say it tha", you know, and you just [disfmarker] They just want people to read the digits as you ordinarily would [speaker005:] Mm hmm. Yeah. Yep. [speaker001:] and [disfmarker] and people say it different ways. [speaker003:] OK. Is this a change from the last batch of [disfmarker] of um forms? Because in the last batch it was spelled out which one you should read. [speaker005:] Yeah, it was orthographic, so. [speaker001:] Yes. That's right. It was [disfmarker] it was spelled out, and they decided they wanted to get at more the way people would really say things. [speaker003:] Oh. OK. [speaker001:] That's also why they're [disfmarker] they're bunched together in these different groups. [speaker003:] OK. [speaker001:] So [disfmarker] so it's [disfmarker] Yeah. So it's [disfmarker] it's [disfmarker] Everything's fine. [speaker003:] OK. [speaker001:] OK. Actually, let me just s since [disfmarker] since you brought it up, I was just [disfmarker] it was hard not to be self conscious about that when it [vocalsound] after we [disfmarker] since we just discussed it. 
But I realized that [disfmarker] that um [vocalsound] when I'm talking on the phone, certainly, and [disfmarker] and saying these numbers, [vocalsound] I almost always say zero. And uh [disfmarker] cuz [disfmarker] because uh i it's two syllables. It's [disfmarker] it's more likely they'll understand what I said. So that [disfmarker] that [disfmarker] that's the habit I'm in, but some people say "O" and [disfmarker] [speaker002:] Yeah I normally say "O" cuz it's easier to say. [speaker001:] Yeah it's shorter. Yeah. So it's [disfmarker] So. [vocalsound] So uh. [speaker002:] "O" [speaker001:] Now, don't think about it. [speaker002:] Oh, no! [speaker001:] OK. We're done. [speaker008:] st Yeah. [speaker006:] So we're on. [speaker008:] That's better. [speaker006:] And, [comment] somewhere is my agenda. I think the most important thing is Morgan wanted to talk about, uh, the ARPA [pause] demo. [speaker004:] Well, so, here's the thing. Um, why don't we s again start off with [disfmarker] with, uh, Yeah, I'll get it. I'll get the door. Um, I think we want to start off with the agenda. And then, given that, uh, Liz and Andreas are gonna be [pause] ten, fifteen minutes late, we can try to figure out what we can do most effectively without them here. So [disfmarker] [vocalsound] So [disfmarker] so, one thing is, yeah, talk about demo, [speaker006:] OK. So, uh [disfmarker] uh, IBM transcription status, [speaker004:] IBM transcription. Uh, what else? What's SmartKom? SmartKom? [speaker006:] Uh, we wanna talk about if w if we wanna add the data to the mar Meeting Recorder corpus. [speaker005:] The data. The data which we are collecting here. [speaker004:] What [disfmarker] what [disfmarker] what are we collecting here? [speaker005:] Data? [speaker006:] So why don't we have that on the agenda and we'll [disfmarker] we'll get to it and talk about it? [speaker005:] The SmartKom data? [speaker004:] Yeah, right. [speaker005:] Yeah. [speaker004:] Uh, right. Uh. 
[speaker006:] Uh, reorganization status. [speaker004:] Reorganization status. [speaker001:] Oh. Files and directories? [speaker004:] Files and directories. [speaker006:] Yep. Uh huh. Absinthe, which is the multiprocessor UNIX [disfmarker] Linux. I think it was [pause] Andreas wanted to talk about segmentation and recognition, and update on SRI recognition experiments. [speaker004:] Um [disfmarker] [speaker006:] And then if ti if there's time I wanted to talk about digits, but it looked like we were pretty full, so I can wait till next week. [speaker004:] Right. OK. Well, let's see. I think the a certainly the segmentation and recognition we wanna maybe focus on when An Andreas is here since that was particularly his thing. [speaker005:] And also the SmartKom thing should b [speaker004:] SmartKom also, Andreas. Absinthe, I think also he has sort of been involved in a lot of those things. [speaker006:] At least, yeah, [speaker004:] Yeah. [speaker006:] he'll t he'll probably be interested. But. [speaker004:] Yeah. Um So, I mean, I think they'll be inter I'll be interested in all this, but [disfmarker] but, uh, probably, if we had to pick something [pause] that we would talk on for ten minutes or so while they're coming here. Or I guess it would be, you think, reorganization status, or [disfmarker]? [speaker006:] Yeah. I mean, I think, Chuck was the one who added out the agenda item. I don't really have anything to say other than that we still haven't done it. So. 
[speaker002:] Well, I mean, I uh [disfmarker] [vocalsound] just basically that [disfmarker] maybe I said [disfmarker] maybe we said this before [disfmarker] just that we met and we talked about it and we sort of have a plan for getting things organized and [disfmarker] [speaker001:] And I [disfmarker] and I think a crucial part of that is the idea of [disfmarker] of not wanting to do it until right before the next level zero back up so that there won't be huge number of [disfmarker] of added, [speaker002:] Right. [speaker001:] uh [disfmarker] [speaker006:] Right. [speaker002:] That [disfmarker] that was basically it. Not [disfmarker] not much [@ @] [disfmarker] [speaker006:] Although Dave basically said that if we wanna do it, just tell him and he'll do a d level zero then. [speaker001:] Yeah. Uh huh. Oh, excellent. [speaker006:] So. [speaker002:] Oh, so maybe we should just go ahead and get everything ready, [speaker001:] Oh, good. [speaker002:] and [disfmarker] [speaker006:] Yep. So, I think we do need to talk a little bit about [disfmarker] Well, we don't need to do it during this meeting. We have a little more to discuss. [speaker002:] Yeah. [speaker006:] But, uh, we're [disfmarker] we're basically ready to do it. And, uh, I have some web pages on ts [comment] more of the background. So, naming conventions and things like that, that I've been trying to keep actually up to date. So. And I've been sharing them with U d UW folks also. [speaker001:] I'm sorry, you've been what? [speaker004:] OK. [speaker001:] Showing them? [speaker006:] Sharing them with the UW folks. [speaker001:] Sharing them. OK. OK. [speaker004:] OK. Well, maybe uh, since that [disfmarker] that was a pretty short one, maybe we should talk about the IBM transcription status. Someone can [vocalsound] fill in Liz and Andreas later. [speaker006:] OK. [speaker004:] Uh [speaker006:] So, we, uh [disfmarker] we did another version of the beeps, where we separated each beeps with a spoken digit. 
Chuck came up here and recorded some di himself speaking some digits, and so it just goes "beep one beep" and then the phrase, and then "beep two beep" and then the phrase. And that seems pretty good. Um, I think they'll have a b easier time keeping track of where they are in the file. [speaker005:] And we have done that on the [pause] automatic segmentations. [speaker006:] And we did it with the automatic segmentation, and I don't think [disfmarker] We ne we didn't look at it in detail. We just sent it to IBM. We [disfmarker] we sorta spot checked it. [speaker002:] I listened to [pause] probably, uh, five or ten minutes of it from the beginning. [speaker005:] Yeah. [speaker006:] Oh, really? OK. [speaker002:] Yeah. And [disfmarker] [speaker006:] I sorta spot checked here and there and it sounded pretty good. So. I think it'll work. [speaker004:] OK. [speaker006:] And, uh, we'll just hafta see what we get back from them. Uh [disfmarker] [speaker002:] And the main thing will be if we can align what they give us with what we sent them. I mean, that's the crucial part. [speaker006:] Right. [speaker002:] And I think we'll be able to do that at [disfmarker] with this new beep format. [speaker006:] Yep. Well, I think it's also they are much less likely to d have errors. [speaker002:] Mm hmm. [speaker006:] I mean, so the problem wi last time is that there were errors in the transcripts where they put beeps where there weren't any, or [disfmarker] and they put in extraneous beeps. [speaker002:] Right. Yeah. [speaker006:] And with the numbers there, it's much less likely. [speaker002:] Yeah, one interesting note is [disfmarker] uh, or problem [disfmarker] I dunno if this was just because of how I play it back, I say, uh, SND play and then the file, every once in a while, [@ @] [comment] uh, like a beep sounds like it's cut into two beeps. [speaker005:] Yeah. Into two pieces. Yeah. 
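[Editor's note: the numbered-beep scheme being described — "beep one beep ⟨phrase⟩ beep two beep ⟨phrase⟩" — can be modeled in a toy way as follows. The real markers are audio beeps with spoken digits, so this text-level sketch, with made-up function names, only illustrates why the spoken index makes the realignment robust to extra or missing beeps.]

```python
import re

def make_marked_script(phrases):
    """Lay out phrases the way the recording was built: each one preceded
    by 'beep <index> beep' so a transcriber's output can be realigned."""
    return " ".join("beep %d beep %s" % (i, phrase)
                    for i, phrase in enumerate(phrases, start=1))

def parse_marked_transcript(text):
    """Recover {index: phrase} from a transcript containing the markers.
    Spurious or missing beeps matter less than before, because the spoken
    index, not the beep count, carries the position."""
    out = {}
    pattern = r"beep\s+(\d+)\s+beep\s+(.*?)(?=\s*beep\s+\d+\s+beep|$)"
    for m in re.finditer(pattern, text, flags=re.S):
        out[int(m.group(1))] = m.group(2).strip()
    return out
```

Because segments carry their own numbers, a transcript with a dropped segment still maps every surviving phrase back to the right slot, which is the alignment property the old beeps-only format lacked.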
[speaker002:] Yeah, and I [disfmarker] I dunno if that's an, uh, artifact of playback [disfmarker] [speaker005:] Yep. [speaker002:] bu uh, I don't think it's probably in the original file. Um, but, uh [disfmarker] [speaker005:] I recognize that, too. Yeah. [speaker006:] Ha. That's interesting. I didn't hear that. [speaker002:] Yeah. But with this new format, um, that hopefully they're not hearing that, and if they are, it shouldn't throw them. [speaker005:] Yep. [speaker002:] So. [speaker006:] Well, maybe we better listen to it again, make sure, but, I mean, certainly the software shouldn't do that, [speaker002:] Yeah. That's what I thought. [speaker006:] so. [speaker002:] I it's probably just, you know, mmm, somehow the audio [pause] device gets hung for a second, [speaker001:] Mm hmm. [speaker005:] Yeah. Some latency or something. [speaker006:] Hiccups. [speaker005:] Yeah? [speaker002:] or [disfmarker] [speaker005:] Yeah. [speaker001:] As long as they have one number, and they know that there's only one beep maximum [vocalsound] that goes with that number. [speaker002:] Yeah. [speaker006:] Yeah. The only [disfmarker] the only part that might be confusing is when Chuck is reading digits. [speaker002:] Right. [speaker005:] Yep. [speaker002:] Right. So [vocalsound] th [speaker001:] Well, you know, actually, are we having them [disfmarker] [speaker006:] "Seven four eight beep seven beep [vocalsound] eight three two". [speaker001:] Yeah, but are we having them do digits? [speaker006:] Yes. Because, uh, we don't [disfmarker] we didn't [disfmarker] In order to cut them out we'd have to listen to it. [speaker002:] We [disfmarker] we didn't cut those out. [speaker005:] Yeah. They are not transcribed yet. So. Yeah. Yeah. [speaker006:] And we wanted to avoid doing that, [speaker001:] OK. [speaker006:] so we [disfmarker] they are transcribing the digits. [speaker001:] OK. OK. 
[speaker006:] Although we could tell them [disfmarker] [comment] [vocalsound] we could tell them, if you hear someone reading a digits string just say "bracket digit bracket" [speaker002:] We can [disfmarker] we can ignore it when we get it back, huh. [speaker006:] and don't bother actually computing the di writing down the digits. [speaker002:] Yeah. [speaker001:] That'd be great. That'd be what I'm having the transcribers here do, cuz it can be extracted later. [speaker006:] Yep. And then I wanted to talk about [disfmarker] but as I said I [disfmarker] we may not have time [disfmarker] what we should do about digits. We have a whole pile of digits that haven't been transcribed. [speaker004:] Le let's talk about it, because that's [disfmarker] that's something that I [disfmarker] I know Andreas is less interested in than Liz is, [speaker006:] OK. [speaker004:] so, you know. [speaker006:] Do we have anything else to say about transcription? [speaker004:] It's good [disfmarker] [speaker006:] About IBM stuff? [speaker002:] Uh, Brian [disfmarker] I [disfmarker] I [vocalsound] sent bresset [disfmarker] [vocalsound] [vocalsound] sent Brian a message about [pause] [vocalsound] the meeting and I haven't heard back yet. So. I g hope he got it and hopefully he's [disfmarker] [speaker006:] OK. [speaker001:] Hmm. [speaker002:] maybe he's gone, I dunno. He didn't even reply to my message. So. I should probably ping him just to make sure that he got it. [speaker006:] Alright. So, we have a whole bunch of digits, if we wanna move on to digits. [speaker004:] Actually, maybe I [disfmarker] One [disfmarker] one relate more related thing in transcription. So that's the IBM stuff. We've got that sorted out. Um, how're we doing on the [disfmarker] on the rest of it? [speaker001:] We're doing well. I [disfmarker] I hire [disfmarker] I've hired two extra people already, expect to hire two more. [speaker006:] Hmm. 
[speaker001:] And, um, [vocalsound] I've prepared, um, uh, a set of five which I'm [disfmarker] which I'm calling set two, which are now being edited by my head transcriber, [vocalsound] in terms of spelling errors and all that. She's also checking through and mar and [disfmarker] [vocalsound] and monitoring, um, the transcription of another transcriber. You know, I mean, she's going through and doing these kinds of checks. [speaker004:] Uh huh. [speaker001:] And, I've moved on now to what I'm calling set three. I sort of thought if I do it in sets [disfmarker] groups of five, then I can have, like, sort of a [disfmarker] a parallel processing through [disfmarker] through the [disfmarker] the current. [speaker004:] Uh huh. [speaker001:] And [disfmarker] and you indicated to me that we have a g a goal now, [vocalsound] for the [disfmarker] for the, um, [nonvocalsound] [vocalsound] the, uh, DARPA demo, of twenty hours. So, I'm gonna go up to twenty hours, be sure that everything gets processed, and released, and [disfmarker] [pause] [comment] and that's [disfmarker] that's what my goal is. Package of twenty hours right now, [vocalsound] and then once that's done, move on to the next. [speaker004:] Yeah, uh, so twenty hours. But I guess the other thing is that, um, that [disfmarker] that's kinda twenty hours ASAP because the longer before the demo we actually have the twenty hours, the more time it'll be for people to actually do cool things with it. [speaker001:] Mm hmm. [speaker004:] So. OK. [speaker001:] Good. I'm [disfmarker] I'm hiring people who, [vocalsound] uh, really are [disfmarker] They would like to do it full time, several of these people. [speaker004:] Mm hmm. [speaker001:] And [disfmarker] and I don't think it's [vocalsound] possible, really, to do this full time, but, that [disfmarker] what it shows is motivation to do as many hours as possible. [speaker006:] It'll keep your accuracy up. Yep. [speaker004:] Yeah. Yeah. 
[speaker001:] And they're really excellent. [speaker004:] Well, that's good. [speaker001:] Yeah. [speaker004:] Yeah, I mean, I guess the [disfmarker] So the difference if [disfmarker] if, um, if the IBM stuff works out, the difference in the job would be that they p primarily would be checking through things that were already done by someone else? [speaker001:] Got a good core group now. Again. Mm hmm. [speaker004:] Is that most of what it [disfmarker]? [speaker006:] And correcting. [speaker004:] I mean [disfmarker] Correcting. [speaker006:] Correcting. We'll [disfmarker] we'll expect that they'll have to move some time bins and do some corrections. [speaker001:] And I [disfmarker] you know, I've also d uh, discovered [disfmarker] So with the new transcriber I'm [disfmarker] um [disfmarker] So [disfmarker] Uh, lemme say that my, uh [disfmarker] So, um [disfmarker] At present, um, the people have been doing these transcriptions a channel at a time. And, that sort of, um, [vocalsound] is useful, and t you know, and then once in a while they'll have to refer to the other channels to clear something up. OK. Well, [vocalsound] I realize that, um, w i we we're using the pre segmented version, and, um, the pre segmented version is extremely useful, and wouldn't it be, useful also to have the visual representation of those segments? And so I've [disfmarker] [pause] uh, [pause] I, uh, uh, I've [comment] trained the new one [disfmarker] uh, the new the newest one, [vocalsound] to, um, [vocalsound] use the visual from the channel that is gonna be transcribed at any given time. And that's just amazingly helpful. Because what happens then, is you scan across the signal and once in a while you'll find a blip that didn't show up in the pre segmentation. [speaker006:] Oh, right. I see what you mean. [speaker001:] And that'll be something like [disfmarker] [vocalsound] I it's ver [disfmarker] it's interesting. 
[speaker006:] A backchannel, or [disfmarker] [speaker001:] Once in a while it's a backchannel. [speaker005:] Yep. [speaker001:] Sometimes it seems to be, um, similar to the ones that are being picked up. [speaker006:] Mm hmm. [speaker001:] And they're rare events, but you can really go through a meeting very quickly. You just [disfmarker] you just, you know, yo you s you scroll from screen to screen, looking for blips. And, I think that we're gonna end up with, uh [pause] better coverage of the backchannels, [speaker004:] Yeah. [speaker001:] but at the same time we're benefitting tremendously from the pre segmentation because [vocalsound] there are huge places where there is just absolutely no activity at all. [speaker004:] Mm hmm. [speaker001:] And, uh, the audio quality is so good [disfmarker] [speaker002:] So they can [disfmarker] they can, um, scroll through that pretty quick? [speaker001:] Yeah. Mm hmm. [speaker002:] That's great. [speaker001:] Yeah. So I think that that's gonna, also [pause] eh, [comment] you know, speed the efficiency of this part of the process. [speaker004:] Hmm. OK. Uh, yeah. So, uh [disfmarker] Yeah. So let's talk about the digits, since they're not here yet. [speaker006:] Uh, so, we have a whole bunch of digits that we've read and we have the forms and so on, um, but only a small number of that ha well, not a small number [disfmarker] only a subset of that has been transcribed. And so we need to decide what we wanna do. And, uh, Liz and Andreas [disfmarker] actually they're not here, but, they did say at one point that they thought they could do a pretty good job of just doing a forced alignment. And, again, I don't think we'll be able to do with that alone, because, um, sometimes people correct themselves and things like that. But [disfmarker] so, I was just wondering what people thought about how automated can we make the process of finding where the people read the digits, doing a forced alignment, and doing the timing. 
[speaker004:] Well, forced alignment would be one thing. What about just actually doing recognition? [speaker006:] Well, we [disfmarker] we know what they read, because we have the forms. [speaker004:] No, they make mistakes. [speaker006:] Right. But, the point is that we wanna get a set of clean digits. [speaker004:] Right. [speaker002:] You're talking about as a pre processing step. Right, Morgan? [speaker004:] Um [disfmarker] [speaker002:] Is that what you're [disfmarker]? [speaker004:] Yeah, I'm [disfmarker] I'm not quite sure what I'm talking about. I mean [disfmarker] I [disfmarker] I mean, uh, we're talking about digits now. And [disfmarker] and so, um, there's a bunch of stuff that hasn't been marked yet. Uh. And, um, [vocalsound] there's the issue that [disfmarker] that they [disfmarker] we know what [disfmarker] what was said, but do we? [speaker006:] I mean, so one option i [speaker004:] Because people make mistakes and stuff. I was just asking, just out of curiosity, if [disfmarker] if with, uh [disfmarker] uh, the SRI recognizer getting one percent word error, uh, would we [disfmarker] would we do [pause] better [disfmarker]? So, if you do a forced alignment but the force but the [disfmarker] but the transcription you have is wrong because they actually made mistakes, uh, or [vocalsound] false starts, it's [disfmarker] it's much less c [vocalsound] it's [pause] much less common than one percent? [speaker006:] But that's pretty uncommon. Um, if we could really get one percent on [disfmarker] Well, I guess [disfmarker] yeah, I guess if we segmented it, we could get one percent on digits. [speaker004:] We should be able to. Right? [speaker002:] Yeah. [speaker004:] Yeah. So that's just my question. [speaker006:] But, [speaker004:] I'm not saying it should be one way or the other, but it's [disfmarker] If [disfmarker] [speaker006:] Well, there [disfmarker] there're a couple different of doing it. 
We could use the tools I've already developed and transcribe it. Hire some people, or use the transcribers to do it. We could let IBM transcribe it. You know, they're doing it anyway, and unless we tell them different, they're gonna transcribe it. Um, or we could try some automated methods. And my [disfmarker] my tendency right now is, well, if IBM comes back with this meeting and the transcript is good, just let them do it. [speaker004:] Well [disfmarker] Yeah, it's [disfmarker] Y you raised a point, kind of, uh, euphemistically [disfmarker] but, I mean, m maybe it is a serious problem. Ho what will they do when they go [disfmarker] hear "beep [pause] seven [pause] beep [pause] seven three five two" [disfmarker] I mean, [vocalsound] you think they'll [disfmarker] we'll get [disfmarker]? [speaker006:] It's pretty distinct. The beeps are [pause] pre recorded. [speaker004:] Yeah? [speaker002:] It'll [comment] only be a problem for m for mine. [speaker005:] Yeah. [speaker001:] Well it [disfmarker] it [disfmarker] well, it'd be preceded by "I'm reading transcript so and so"? [speaker002:] Yeah. [speaker006:] Yes. [speaker001:] So, I think if they're processing it at [disfmarker] [speaker006:] I mean, it'll be [disfmarker] it will be in the midst of a digit string. So [disfmarker] [speaker004:] Yeah. [speaker006:] I mean it [disfmarker] sure, there [disfmarker] there might be a place where it's "beep seven [pause] beep eight [pause] beep [pause] eight [pause] beep". But, you know, they [disfmarker] they're [disfmarker] they're gonna macros for inserting the beep marks. And so, I [disfmarker] I don't think it'll be a problem. We'll have to see, but I don't think it's gonna be a problem. [speaker004:] OK. 
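[Editor's note: the semi-automated option discussed above — trusting the forms but flagging utterances where the reader deviated — can be sketched roughly as below. This is an illustrative assumption, not any tool actually used at ICSI; the function names, the whitespace-delimited string format, and the zero-distance threshold are all hypothetical.]

```python
# Hypothetical sketch: compare each recognized digit string against the
# string printed on the form, and flag any utterance where they differ
# (a misreading, false start, etc.) for hand correction.

def edit_distance(a, b):
    """Plain Levenshtein distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (x != y)))  # substitution
        prev = cur
    return prev[-1]

def flag_misread(form_strings, recognized_strings):
    """Return indices of digit strings where the recognizer output
    differs from the form, i.e. candidates for manual correction."""
    return [i for i, (ref, hyp)
            in enumerate(zip(form_strings, recognized_strings))
            if edit_distance(ref.split(), hyp.split()) > 0]
```

Utterances that come back with zero distance could then be force-aligned automatically, leaving only the flagged ones for a transcriber.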
Well, I [disfmarker] I [disfmarker] I dunno, I [disfmarker] I think that that's [disfmarker] if they are in fact going to transcribe these things, uh, certainly any process that we'd have to correct them, or whatever is [disfmarker] needs to be much less elaborate for digits than for other stuff. [speaker006:] Right. [speaker004:] So, why not? Sure. That was it? [speaker006:] That was it. Just, what do we do with digits? [speaker004:] OK. [speaker006:] We have so many of them, [vocalsound] and it'd be nice to [pause] actually do something with them. [speaker004:] Well, we [disfmarker] we [disfmarker] we wanna have them. [speaker009:] You mean there're more than ten? [speaker004:] Yeah, I [disfmarker] [speaker006:] Anything else? Your mike is a little low there. [speaker004:] I in Berkeley, yeah. So, [vocalsound] uh [pause] You [disfmarker] you have to go a little early, right? [speaker009:] Well, I can stay till about, uh, three forty. [speaker004:] At twenty [disfmarker] Alright. So le let's make sure we do the ones that [disfmarker] that, uh, saved you. [speaker009:] Yeah. Mm hmm. [speaker004:] So there was some [disfmarker] Uh [pause] [vocalsound] In [disfmarker] in [disfmarker] Adam's agenda list, he had something from you about segmentation this last recognition? [speaker009:] Well, yeah. So this is just partly to inform everybody, um, and [disfmarker] and of course to get, um, input. [speaker006:] Oops. [speaker009:] Um, so, [nonvocalsound] uh, we had a discussion [disfmarker] Don and Liz and I had discussion last week about how to proceed with, uh, you know, with Don's work, [speaker005:] Ch [speaker009:] and [disfmarker] [vocalsound] and [disfmarker] and, uh, one of the obvious things that occur to us was that we're [disfmarker] since we now have Thilo's segmenter and it works, you know, amazingly well, [vocalsound] um, we should actually basically re evaluate the recognition, um, results using [disfmarker] you know, without cheating on the segmentations. 
And, that should be fairly [disfmarker] [speaker005:] So [disfmarker] And how do we find the transcripts for those so that [disfmarker]? Yeah. [speaker009:] Oh, OK. [speaker005:] The references for [disfmarker] for [pause] those segments? [speaker009:] So, there's actually [disfmarker] [speaker005:] It's not that [disfmarker] [speaker009:] Why do you ask? [speaker006:] I could [disfmarker] [speaker009:] No, actually, um, NIST has, um m a fairly sophisticated scoring program [vocalsound] that you can give a, um [disfmarker] [vocalsound] a time, [speaker007:] Well [disfmarker] [speaker006:] Hand ones. [speaker005:] OK. [speaker009:] uh [disfmarker] You know, you basically just give two [pause] time marked sequences of words, and it computes the um [disfmarker] the, [comment] uh [disfmarker] [comment] you know, the [disfmarker] the [disfmarker] th [speaker002:] It does all the work for you. [speaker009:] it does all the work for you. [speaker002:] Yeah. [speaker005:] OK. [speaker009:] So, it [disfmarker] we just [disfmarker] and we use that actually in Hub five to do the scoring. Um. So what we've been using so far was sort of a [pause] simplified version of the scoring. And we can [disfmarker] we can handle the [disfmarker] the [disfmarker] the type of problem we have here. So, we ha [speaker005:] So, basically you give some time constraints for [disfmarker] for the references and for [disfmarker] for the hypothesis, [speaker009:] Yeah. Right. Right. [speaker005:] and [disfmarker] [speaker007:] Yeah. Maybe the [pause] start of your speech and the end of it, [speaker005:] Yeah, OK. [speaker009:] So do [speaker007:] or stuff like that. [speaker005:] OK. [speaker009:] Right. It does time constrained word alignment. [speaker005:] OK. [speaker009:] So. So that should be possible. I mean that shouldn't be a problem. Uh, so that was the one thing, and the other was that, um [disfmarker] What was the other problem? Oh! 
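[Editor's note: the time-constrained word alignment described above can be sketched as follows. This is a simplified illustration of the idea, not the actual NIST scoring program the speakers are referring to; the tuple format and function names are assumptions. The constraint is that a match or substitution is only allowed between words whose time intervals overlap — everything else counts as an insertion or deletion.]

```python
# Minimal sketch of time-constrained word alignment between two
# time-marked word sequences, each a list of (word, start, end) tuples.

def overlaps(a, b):
    """True if the time intervals of (word, start, end) tuples overlap."""
    return a[1] < b[2] and b[1] < a[2]

def time_constrained_wer(ref, hyp):
    """Return (substitutions, deletions, insertions) via edit distance,
    where match/substitution is only permitted for overlapping words."""
    n, m = len(ref), len(hyp)
    INF = float("inf")
    # cost[i][j] = (total_errors, subs, dels, ins) for ref[:i] vs hyp[:j]
    cost = [[(INF, 0, 0, 0)] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = (0, 0, 0, 0)
    for i in range(n + 1):
        for j in range(m + 1):
            c = cost[i][j]
            if c[0] == INF:
                continue
            if i < n:  # delete ref[i]
                cand = (c[0] + 1, c[1], c[2] + 1, c[3])
                if cand[0] < cost[i + 1][j][0]:
                    cost[i + 1][j] = cand
            if j < m:  # insert hyp[j]
                cand = (c[0] + 1, c[1], c[2], c[3] + 1)
                if cand[0] < cost[i][j + 1][0]:
                    cost[i][j + 1] = cand
            if i < n and j < m and overlaps(ref[i], hyp[j]):
                err = 0 if ref[i][0] == hyp[j][0] else 1
                cand = (c[0] + err, c[1] + err, c[2], c[3])
                if cand[0] < cost[i + 1][j + 1][0]:
                    cost[i + 1][j + 1] = cand
    _, subs, dels, ins = cost[n][m]
    return subs, dels, ins
```

With the automatic segmenter's boundaries on the hypothesis side and the hand segmentation on the reference side, this kind of alignment is what lets the re-evaluation proceed "without cheating" on the segmentations.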
That Thilo wanted to use [pause] the recognizer alignments to train up his, um, speech detector. [speaker005:] Yeah. [speaker009:] Um, so that we could use, uh [disfmarker] you know there wouldn't be so much hand [vocalsound] labelling needed to, uh [disfmarker] to generate training data for [disfmarker] for the speech detector. [speaker005:] Yeah. I'm just in progress of [disfmarker] of doing that. So. [speaker009:] And I think you're in the process of doing that. So, you can [disfmarker] [comment] you can [disfmarker] [speaker005:] Yeah. [speaker002:] It'll give you a lot more data, too. Won't it? [speaker005:] Yeah. So, it's basically [disfmarker] s I think, eight meetings or something which [disfmarker] which I'm using, and, [vocalsound] it's [disfmarker] [vocalsound] before it was twenty minutes of one meeting. [speaker009:] Mm hmm. [speaker005:] So [disfmarker] should [comment] be a little bit better. [speaker009:] Right. That won't be perfect [disfmarker] the alignments aren't perfect, [speaker002:] Great. [speaker005:] Yeah. [speaker009:] but, um, it's probably still better to have all this extra data, than [disfmarker] [speaker005:] But [disfmarker] [speaker007:] Yeah. [speaker005:] Yeah. Yep. [speaker009:] Yeah. [speaker005:] We'll see that. [speaker009:] Yeah. [speaker004:] OK. [speaker007:] Actually, I had a question about that. If you find that you can [vocalsound] lower the false alarms that you get where there's no speech, that would be useful [pause] for us to know. So, um [disfmarker] [speaker005:] There were the false alarms. [speaker007:] Yeah. So, [vocalsound] r right now you get f fal you know, false [disfmarker] false, uh, speech regions when it's just like, um, [vocalsound] breath or something like that, [speaker005:] OK. Yeah. [speaker007:] and I'd be interested to know the [disfmarker] wha if you retrain [speaker005:] Yep. [speaker007:] um, [speaker005:] Yeah. [speaker007:] do those actually go down or not? 
Because [pause] of [disfmarker] [speaker005:] Yeah. I'll [disfmarker] can make [disfmarker] an can, like, make a c comparison of [disfmarker] of the old system to the [disfmarker] to the new one, [speaker007:] Yeah, just to see if by doing nothing in the modeling of [disfmarker] just having that training data wh what happens. [speaker005:] and [pause] then [disfmarker] Yeah. Yeah. Yep. [speaker004:] Um another one that we had on Adam's agenda [pause] that definitely involved you was s something about SmartKom? [speaker006:] Right. So, Rob Porzel [disfmarker] eh, Porzel? and the, uh [disfmarker] Porzel [disfmarker] and the, uh, SmartKom group are collecting some dialogues. [speaker009:] Porzel. Porzel. [speaker006:] Basically they have one person sitting in here, looking at a picture, and a wizard sitting in another room somewhere. And, uh, they're doing a travel task. And, uh, it involves starting [disfmarker] I believe starting with a [disfmarker] It's [disfmarker] it's always the wizard, but it starts where the wizard is pretending to be a computer and it goes through a, uh, [vocalsound] speech generation system. [speaker005:] Yeah. Actually, it's changed to a synthesis for [disfmarker] for the first part now. [speaker006:] Synthesis system. [speaker005:] Yeah. [speaker006:] Um, and then, it goes to a real wizard and they're evaluating that. And they wanted to use this equipment, and so the w question came up, is [disfmarker] well, here's some more data. Should this be part of the corpus or not? And my attitude was yes, because there might be people who are using this corpus for [pause] acoustics, as opposed to just for language. Um, or also for dialogue of various sorts. Um, so it's not a meeting. Right? Because it's two people and they're not face to face. [speaker004:] Wait a minute. [speaker005:] Yeah. [speaker004:] So, I just wanted to understand it, cuz I [disfmarker] I'm [disfmarker] uh, hadn't quite followed this process. Um. 
So, it's wizard in the sen usual sense that the person who is asking the questions doesn't know that it's, uh, a machi not a machine? [speaker009:] Right. Actually [disfmarker] actually, w w the [disfmarker] the [disfmarker] We do this [disfmarker] [speaker006:] At the beginning. [speaker009:] I dunno who came up with it, but I think it's a really clever idea. We simulate a computer breakdown halfway through the session, and so then after that, the person's told that they're now talking to a, uh [disfmarker] to a human. [speaker004:] Yeah. [speaker005:] It's a human operator. Yeah. [speaker004:] Yeah. [speaker006:] But of course they don't know that it's the same person both times. [speaker009:] So, we [disfmarker] we collect [disfmarker] we collect both human computer and human human data, essentially, in the same session. [speaker004:] You might wanna try collecting it the other way around sometime, saying that th the computer isn't up yet [speaker001:] Hmm. [speaker004:] and then [disfmarker] so then you can separate it out whether it's the beginning or end kind of effects. [speaker009:] That's an idea. [speaker004:] But, [speaker006:] Yep. [speaker004:] yeah. [speaker009:] Yeah. [speaker001:] That's a good idea. [speaker006:] "I have to go now. You can talk to the computer." [speaker002:] It's a lot more believable, too, [speaker006:] "No!" [speaker002:] if you tell them that they're [disfmarker] the computer part is running on a Windows machine. And the whole breakdown thing kinda makes sense. [speaker009:] O Just [disfmarker] just reboot it. [speaker006:] Abort [disfmarker] abort, retry, fail? [speaker007:] So did they actually save the far field [pause] data? Cuz at first they weren't [disfmarker] they weren't sa [speaker006:] Well, this was [disfmarker] this was the question. [speaker005:] Yes. [speaker009:] Yeah. [speaker006:] So [disfmarker] so they were saying they were not going to, [speaker005:] Yeah. [speaker007:] OK. 
[speaker006:] and I said, "well that's silly, if [disfmarker] if we're gonna try to do it for a corpus, there might be people who are interested in acoustics." [speaker007:] Yeah. [speaker009:] Wow. [speaker005:] No. [speaker007:] Or [disfmarker] [speaker005:] projector [comment] We were not saying we are not [pause] doing it. [speaker007:] Yeah. [speaker004:] S [speaker009:] No, the [disfmarker] the question is do we save one or two far field channels or all of them? [speaker005:] We wer we just wanted to do [disfmarker] [speaker007:] Right. [speaker005:] Yeah. Yeah. [speaker006:] I [disfmarker] I see no reason not to do all of them. That [disfmarker] that if we have someone who is doing acoustic studies, uh, it's nice to have the same for every recording. [speaker004:] Um [disfmarker] [speaker007:] Nnn. [speaker009:] Hmm. [speaker007:] Yeah. [speaker004:] So, what is the purpose of this recording? [speaker009:] Mm hmm. [speaker004:] This is to get acoustic and language model training data for SmartKom. OK. [speaker009:] It's to be traini to b training data and development data for the SmartKom [pause] system. [speaker005:] The English system? [speaker009:] Yeah. [speaker005:] Yeah. [speaker009:] Right. Right. [speaker002:] Where does this [disfmarker]? [speaker007:] Maybe we can have him vary the microphones, too, [speaker004:] Well, [speaker007:] or they're different s speakers. [speaker005:] B [speaker006:] Right. So [disfmarker] so [disfmarker] so for their usage, they don't need anything. [speaker004:] so why not [disfmarker]? [speaker005:] Yeah. But [disfmarker] but I'm not sure about the legal aspect of [disfmarker] of that. [speaker006:] Right? [speaker005:] Is [disfmarker] is there some contract with SmartKom or something about the data? [speaker009:] Yeah. [speaker005:] What they [disfmarker] or, is [disfmarker] is that our data which we are collecting here, [speaker004:] We've never signed anything that said that we couldn't use anything that we did. 
[speaker005:] or [disfmarker]? OK. OK. [speaker009:] We weren't supposed to collect any data. [speaker005:] So. OK. [speaker009:] This was all [disfmarker] [speaker004:] Yeah. [speaker005:] So. Yeah, th th that was the question. If [disfmarker] if [disfmarker]? Yeah. [speaker009:] Yeah. [speaker005:] Basically. [speaker004:] No that's not a problem. I [disfmarker] L look, it seems to me that if we're doing it anyway and we're doing it for these [disfmarker] these purposes that we have, [vocalsound] and we have these distant mikes, we definitely should re should save it all as long as we've got disk space, [speaker009:] Mm hmm. [speaker004:] and disk is pretty cheap. [speaker009:] OK. [speaker004:] So should we save it? [speaker006:] And then [disfmarker] [speaker004:] Now th Yeah. So we save it because it's [disfmarker] it [disfmarker] it's potentially useful. [speaker006:] Right. [speaker004:] And now, what do we do with it is [disfmarker] is a s separate question. I mean, anybody who's training something up could [vocalsound] choose to put it [disfmarker] eh, to u include this or not. [speaker009:] Right. [speaker004:] I [disfmarker] I would not say it was part of the meetings corpus. It isn't. But it's some other data we have, and if somebody doing experiment wants to train up including that then they can. [speaker009:] Mm hmm. [speaker004:] Right? [speaker006:] So it's [disfmarker] It [disfmarker] it [disfmarker] I guess it [disfmarker] the [disfmarker] begs the question of what is the meeting corpus. So if, at UW they start recording two person hallway conversations is that part of the meeting corpus? [speaker004:] I think it's [disfmarker] I [disfmarker] I think [disfmarker] I th think the idea of two or more people conversing with one another is key. [speaker006:] Well, this has two or more people conversing with each other. [speaker004:] Nnn, well [speaker005:] Yeah. [speaker006:] They're just not face to face. 
[speaker007:] What if we just give it a [disfmarker] a name like we give these meetings a name? [speaker001:] Well this [disfmarker] [speaker004:] No, it doesn't. Right? It has [disfmarker] [speaker006:] I mean, that was my intention. [speaker007:] And then later on some people will consider it a meeting and some people won't, [speaker004:] Yeah. [speaker006:] That was my intention. [speaker001:] Well this [disfmarker] [speaker007:] and [disfmarker] [speaker006:] So [disfmarker] so [disfmarker] s [vocalsound] so part of the reason that I wanted to bring this up is, [vocalsound] do we wanna handle it as a special case or do we wanna fold it in, [speaker007:] Just give it a [vocalsound] title. [speaker001:] Oh. [speaker004:] I think it is a s [speaker006:] we give everyone who's involved as their own user ID, give it session I Ds, [vocalsound] let all the tools that handle Meeting Recorder handle it, or do we wanna special case it? And if we were gonna special case it, who's gonna do that? [speaker009:] Well, it [disfmarker] it makes sense to handle it with the same infrastructure, since we don't want to duplicate things unnecessarily. [speaker005:] So. It [disfmarker] it [disfmarker] it [disfmarker] [speaker001:] I think [disfmarker] [speaker009:] But as far as distributing it, we shouldn't label it as part of this meeting corpus. We should let it be its own corp [speaker004:] Yeah. [speaker006:] I don't see why not. [speaker001:] Well it's [disfmarker] it [disfmarker] well, because [disfmarker] [speaker006:] It's just a different topic. [speaker001:] I ha I have an extra point, which is the naturalness issue. Because we have, like, meetings that have a reason. That's one of the reasons that we were talking about this. And [disfmarker] and those [disfmarker] and this sounds like it's more of an experimental setup. [speaker004:] Yeah. It's scenario based, it's [disfmarker] it's human computer interface [disfmarker] [vocalsound] it's really pretty different. 
[speaker001:] It's got a different purpose. [speaker004:] But I I [disfmarker] I have no problem with somebody folding it in for some experiment they're gonna do, [speaker001:] Yeah. [speaker004:] but I don't think i it [disfmarker] it doesn't match anything that we've described about meetings. [speaker006:] Mm hmm. [speaker004:] Whereas everything that we talked about them doing at [disfmarker] at UW and so forth really does. [speaker006:] OK. [speaker004:] They're actually talking [disfmarker] [speaker006:] So w so what does that mean for how we are gonna organize things? [speaker001:] Hmm. [speaker005:] Yeah. [speaker004:] You can [disfmarker] you can [disfmarker] Again, as [disfmarker] as I think Andreas was saying, [vocalsound] if you wanna use the same tools and the same conventions, there's no problem with that. It's just that it's, you know, different directory, it's called something different, it's [disfmarker] you know. It is different. You can't just fold it in as if it's [disfmarker] I mean, digits are different, too. Right? [speaker009:] It might also be potentially confusing. [speaker006:] Yeah, but those are folded in, and it's just [disfmarker] you just mark the transcripts differently. So [disfmarker] so one option is you fold it in, [speaker009:] Right. [speaker006:] and just simply in the file you mark somewhere that this is this type of interaction, rather than another type of interaction. [speaker009:] Yeah, I th [speaker004:] Well, I don I wouldn't call reading digits "meetings". Right? I mean, we [disfmarker] we [disfmarker] we were doing [disfmarker] [speaker006:] Well, but [disfmarker] but, [vocalsound] I put it under the same directory tree. You know, it's in "user doctor speech data MR". [speaker004:] Well [disfmarker] [speaker007:] Can we just have a directory called, like, "other stuff"? And [disfmarker] [speaker006:] Other. [speaker007:] Well [disfmarker] or, I dunno. 
[speaker004:] I mean, I don't care what directory tree you have it under. [speaker007:] And [disfmarker] [vocalsound] and just, um, store it there. [speaker004:] Right? I mean that's just a [disfmarker] [speaker006:] OK. My preference is to have a single procedure so that I don't have to think too much about things. [speaker009:] Yes. [speaker007:] I mean [disfmarker] [speaker006:] And, just have a marking. [speaker004:] Yeah. [speaker006:] If we do it any other way that means that we need a separate procedure, and someone has to do that. [speaker004:] O You [disfmarker] you can use whatever procedure you want that's p convenient for you. All I'm saying is that there's no way that we're gonna tell people that reading digits is meetings. [speaker006:] Right. [speaker004:] And similarly we're not gonna tell them that someone talking to a computer to get travel information is meetings. Those aren't meetings. But if it makes it easier for you to pu fold them in the same procedures and have them under the same directory tree, knock yourself out. You know? [speaker002:] There's a couple other questions that I have too, and [disfmarker] and [pause] one of them is, what about, uh, consent issues? And the other one is, what about transcription? [speaker005:] Transcription is done in Munich. [speaker002:] Are [disfmarker]? OK. So we don't have to worry about transcribing it? [speaker005:] Yeah. [speaker004:] Alright. [speaker006:] So, w we will hafta worry about format. [speaker009:] That's a [disfmarker] that's another argument to keep it separate, because it's gonna follow the SmartKom transcription conventions and not the ICSI meeting transcription conventions. [speaker006:] Oh, OK. [speaker005:] Yeah. [speaker004:] Ah. [speaker006:] OK. Well, I didn't realize that. [speaker004:] Good point. [speaker006:] That's [disfmarker] that's a [disfmarker] [speaker004:] Good point. 
But I'm sure no one would have a problem with our folding it in for some acoustic modeling or [disfmarker] or some things. Um. Do we h do we have, uh, um, American born folk, uh, reading German [disfmarker] German, uh, pla uh, place names and so forth? [speaker005:] Yeah. [speaker004:] Is that [disfmarker]? [speaker009:] Exactly. [speaker006:] Yep. [speaker005:] Yeah. [speaker004:] Yeah, great. [speaker009:] Yeah. [speaker006:] They [disfmarker] they even have a reading list. [speaker002:] I bet that sounds good, [speaker006:] It's pretty funny. [speaker004:] Yeah. [speaker002:] huh? [speaker009:] Yeah. [speaker005:] You can do that if you want. [speaker004:] Yeah. [speaker002:] OK. [speaker009:] Yeah. [speaker002:] I dunno if you want that. [speaker004:] Right. [speaker006:] So [disfmarker] [speaker001:] Hmm. [speaker004:] Heidelberg [speaker009:] Exactly [speaker006:] Disk might eventually be an issue so we might [disfmarker] we [disfmarker] we might need to, uh, [vocalsound] get some more disk pretty soon. [speaker009:] Do you wanna be a subject? [speaker004:] Yeah, I be pretty good. [speaker009:] We [disfmarker] Yeah. [speaker006:] We're about [disfmarker] we're about half [disfmarker] halfway through our disk right now. [speaker009:] That was one of our concerns. [speaker002:] Yeah. Are we only half? I thought we were more than that. [speaker006:] We're probably a little more than that because we're using up some space that we shouldn't be on. So, once everything gets converted over to the disks we're supposed to be using we'll be probably, uh, seventy five percent. [speaker002:] Well, when I was looking for space for Thilo, I found one disk that had, uh, I think it was nine gigs and another one had seventeen. [speaker006:] Yep. [speaker002:] And everything else was sorta committed. Uh [disfmarker] [speaker006:] Were those backed up or non backed up? [speaker002:] Those were non backed up. [speaker005:] Non back up. [speaker006:] Right. So that's different. 
[speaker002:] S oh, you're talking about backed up. [speaker006:] I'm much more concerned about the backed up. The non backed up, [speaker002:] I haven't looked to see how much of that we have. [speaker006:] yeah, i is cheap. I mean, if we need to we can buy a disk, hang it off a s uh, workstation. If it's not backed up the sysadmins don't care too much. [speaker004:] Yeah. So, I mean, pretty much anytime we need a disk, we can get it at the rate that we're [disfmarker] [speaker009:] You can [disfmarker] I shouldn't be saying this, but, you can just [disfmarker] you know, since the back ups are every night, you can recycle the backed up diskspace. [speaker006:] Yeah. But that's [disfmarker] that's [disfmarker] [pause] that's risky. [speaker004:] Yeah. You really shouldn't be saying [disfmarker] [speaker006:] Mmm. [speaker009:] I didn't say that. [speaker006:] Mmm. Yeah, that's right. [speaker009:] I didn't say that. [speaker006:] Beep that out. [speaker004:] Da we had allowed Dave to listen to these [disfmarker] [vocalsound] these, [vocalsound] uh, recordings. [speaker009:] Right. [speaker004:] Um [disfmarker] [vocalsound] Yeah, I me and there's been this conversation going on about getting another file server, and [disfmarker] and [vocalsound] we can do that. [speaker009:] Mm hmm. [speaker004:] We'll take the opportunity and get another big raft of [disfmarker] [vocalsound] of disk, I guess. [speaker006:] Yeah. [speaker009:] Well, I think [disfmarker] [comment] I think there's an argument for having [disfmarker] you know, you could use our old file server for [disfmarker] for disks that have data that [pause] is very rarely accessed, [speaker006:] It's really the back up issue rather than the file server issue. [speaker009:] and then have a fast new file server for data that is, um, heavily accessed. [speaker006:] Yeah. My understanding is, the issue isn't really the file server. [speaker009:] Yeah. [speaker006:] We could always put more disks on. 
[speaker009:] Yeah. It's the back it's the back up capaci [speaker006:] It's the back up system. [speaker009:] Yeah. [speaker006:] So [disfmarker] which is near saturation, apparently. So. [speaker004:] Soon. [speaker002:] I think [disfmarker] I think the file server could become an issue as we get a whole bunch more new compute machines. And we've got, you know, fifty machines trying to access data off of Abbott at once. [speaker006:] Well, we're alright for now because the network's so slow. [speaker009:] I mean, I think [disfmarker] I think we've raised this before and someone said this is not a reliable way to do it, but the [disfmarker] What about putting the stuff on, like, C CD ROM or DVD or something? [speaker006:] Yeah. That was me. I was the one who said it was not reliable. [speaker009:] OK. [speaker006:] The they [disfmarker] they wear out. [speaker009:] Oh, OK. [speaker006:] Yeah. The [disfmarker] the [disfmarker] th [speaker009:] But they wear out just from sitting on the shelf? [speaker006:] Yep. Absolutely. [speaker009:] Or from being [pause] read and read? [speaker006:] No. Read and write don't hurt them too much unless you scratch them. [speaker009:] Oh, OK. [speaker006:] But the r the write once, and the read writes, don't last. So you don't wa you don't wanna put ir un reproduceable data [pause] on them. [speaker009:] Uh huh. [speaker002:] Wear out after what amount of time? [speaker006:] Year or two. [speaker001:] Would it be [disfmarker]? [speaker004:] Year or two? [speaker006:] Yep. [speaker004:] Wow. [speaker001:] Hmm. [speaker009:] But if that [disfmarker] then you would think you'd [pause] hear much more clamoring about data loss [speaker005:] Yeah. [speaker009:] and [disfmarker] [speaker004:] I mean, yeah, all the L [speaker006:] I [disfmarker] I don't know many people who do it on CD. I mean, they're [disfmarker] the most [disfmarker] fo [speaker004:] LDC all the LDC distributions are on CD ROM. [speaker007:] Yeah. 
[speaker006:] They're on CD, but they're not [disfmarker] tha that's not the only source. [speaker007:] Like [disfmarker] [speaker006:] They have them on disk. And they burn new ones every once in a while. [speaker009:] But, you know, we have [disfmarker] [speaker006:] But if you go [disfmarker] [vocalsound] if you go k [speaker007:] But we have like thirty [pause] you know, from [pause] ten years ago? No. [speaker004:] We have all sorts of CD ROMs from a long time ago. [speaker007:] Yeah! [speaker009:] Right. [speaker007:] Ten years ago. [speaker005:] Yeah. [speaker006:] Well, th th OK. [speaker007:] Ninety one, and they're still all fine. [speaker008:] Were they burned or were they pressed? [speaker004:] Yeah. [speaker007:] Uh, both. I've burned them and they're still OK. [speaker008:] Yeah. [speaker007:] I mean, usually they're [disfmarker] [speaker006:] The [disfmarker] the pressed ones last for well, not forever, they've been finding even those degrade. [speaker004:] Oh, I see. [speaker006:] But, uh, the burned ones [disfmarker] I mean, when I say two or three years what I'm saying is that I have had disks which are gone in a year. [speaker007:] That's what I [disfmarker] [speaker006:] On the average, it'll probably be three or four years. But, uh [disfmarker] I [disfmarker] I [disfmarker] you don't want to per p have your only copy on a media that fails. [speaker009:] Mmm. [speaker006:] And they do. [speaker009:] So how about [disfmarker]? [speaker006:] Um, if you have them professionally pressed, y you know, they're good for decades. [speaker009:] So [disfmarker] so how about putting them on that plus, like on a [disfmarker] on [disfmarker] on DAT or some other medium that isn't risky? [speaker006:] I think th um, we can already put them on tape. [speaker009:] OK. [speaker006:] And the tape is hi is very reliable. [speaker009:] Mm hmm. [speaker006:] So the [disfmarker] the only issue is then [pause] if we need access to them. [speaker009:] Right. 
[speaker006:] So that's fine f if we don't need access to them. [speaker009:] Well, if [disfmarker] if [disfmarker] if you [disfmarker] if they last [disfmarker] Say, they actually last, like, five years, huh, in [disfmarker] in the typical case, and [disfmarker] and occasionally you might need to recreate one, and then you get your tape out, but otherwise you don't. Can't you just [disfmarker] you just put them on [disfmarker]? [speaker008:] So you just archive it on the tape, and then put it on CD as well? [speaker009:] Yeah. Right. [speaker006:] Oh. So you're just saying put them on C Ds for normal access. [speaker009:] Right. [speaker008:] Yeah. [speaker006:] Yeah. I mean, you can do that [speaker002:] What you [disfmarker] [speaker006:] but that's pretty annoying, because the C Ds are so slow. [speaker007:] See [disfmarker] [speaker008:] Yeah. [speaker009:] Mmm. [speaker007:] Yeah. [speaker002:] What'd be nice is a system that re burned the C Ds every [vocalsound] year. [speaker007:] H everytime it was a "gonna" [disfmarker] "gonna die". [speaker006:] Well, I mean, the C Ds are [disfmarker] are an op [speaker004:] Well [disfmarker] [speaker005:] Yeah. [speaker009:] It's like [disfmarker] like dynamic ra DRAM. [speaker005:] Just before. [speaker007:] Just before they be before it goes bad, it burns them in. [speaker002:] Yeah. [speaker006:] The [disfmarker] the CD is an alternative to tape. [speaker008:] Yeah. [speaker006:] ICSI already has a perfectly good tape system and it's more reliable. [speaker004:] You know [disfmarker] I would think [disfmarker] [speaker006:] So for archiving, we'll just use tape. [speaker009:] One [disfmarker] one thing I don't understand is, if you have the data [disfmarker] if [disfmarker] if you if the meeting data is put on disk exactly once, then it's backed up once and the back up system should never have to bother with it, uh, more than once. 
[speaker006:] Well, regardless [disfmarker] Well, first of all there was, um, a problem with the archive in that I was every once in a while doing a chmod on all the directories an or recursive chmod and chown, because [vocalsound] they weren't getting set correctly every once in a while, [speaker009:] Mm hmm. [speaker006:] and I was just, [vocalsound] doing a minus R star, [vocalsound] not realizing that that caused [pause] it to be re backed up. [speaker009:] Mm hmm. [speaker007:] Ah. [speaker006:] But normally you're correct. But even without that, the back up system is becoming saturated. [speaker009:] But [disfmarker] but this back up system is smart enough to figure out that something hasn't changed and doesn't need to be [pause] backed up again. [speaker006:] Sure, but we still have enough changed that the nightly back ups are starting to take too long. [speaker004:] The b I think th the [disfmarker] at least the once tha that you put it on, it would [disfmarker] [vocalsound] it would [comment] [vocalsound] kill that. [speaker009:] OK. So [disfmarker] so then, if [disfmarker] So [disfmarker] so then, let's [disfmarker] [speaker006:] It has nothing to do with the meeting. [speaker004:] So. [speaker006:] It's just the general ICSI back up system is becoming saturated. [speaker009:] Right. OK. Right. So, what if we buy, uh [disfmarker] uh, what [disfmarker] what do they call these, um [pause] high density [disfmarker]? [speaker006:] Well, why don't you have this [disfmarker] have a [disfmarker] this conversation with Dave Johnson tha rather than with me? [speaker009:] No, no. Because this is [pause] maybe something that we can do without involving Dave, and [disfmarker] and, putting more burden on him. How about we buy, uh [disfmarker] uh [disfmarker] uh, one of these high density tape drives? And we put the data actually on non backed up disks. 
And we do our own back up once and for all [disfmarker] all, and then [disfmarker] and we don't have to bother this [@ @] up? [speaker006:] Actually, you know, we could do that just with the tape [disfmarker] with the current tape. [speaker009:] I dunno what the these tapes [disfmarker] uh, at some point these [disfmarker] I dunno. What kind of tape drive is it? Is it [disfmarker] is [disfmarker]? [speaker006:] I dunno but it's an automatic robot so it's very convenient. [speaker004:] Wh [speaker009:] Right. [speaker006:] You just run a program to restore them. [speaker004:] The o the one that we have? [speaker006:] Yeah. [speaker009:] But it might interfere with their back up schedule, [speaker004:] The [disfmarker] I mean [disfmarker] [speaker007:] But [disfmarker] [speaker004:] No, we have s we [disfmarker] Don't we have our own? [speaker009:] eh. [speaker004:] Something wi th that doesn't [disfmarker] that isn't used by the back up gang? Don't we have something downstairs? [speaker001:] Well they [disfmarker] [speaker002:] What kinda tape drive? [speaker004:] Just in [disfmarker]? [speaker006:] Well [disfmarker] [speaker004:] Yeah. [speaker006:] but [disfmarker] no, but Andreas's point is a good one. And we don't have to do anything ourselves to do that. [speaker009:] Right. [speaker006:] They're already right now on tape. Right. So your [disfmarker] your point is, and I think it's a good one, that we could just get more disk and put it there. [speaker009:] Mmm. On an XH [disfmarker] uh, X [disfmarker] X whatever partition. [speaker006:] Yeah. That's not a bad idea. [speaker009:] Yeah. [speaker004:] Yeah, that's basically what I was gonna say, is that a disk is [disfmarker] is so cheap it's es essentially, you know, close to free. [speaker006:] So once it's on tape [disfmarker] [speaker009:] Right. [speaker004:] And the only thing that costs is the back up [pause] issue, [vocalsound] eh, to first order. [speaker009:] Right. 
[speaker004:] And we can take care of that by putting it on non back [pause] up drives and just backing it up once onto this tape. [speaker009:] Mm hmm. [speaker006:] I think that's a good idea. [speaker009:] Right. [speaker004:] Oh. [speaker009:] OK. [speaker004:] Yeah. Good. It's good. [speaker007:] So, who's gonna do these back ups? The people that collect it? [speaker006:] Uh Well, I'll talk to Dave, and [disfmarker] and see what th how [disfmarker] [nonvocalsound] what the best way of doing that is. [speaker002:] It's probably gonna n [speaker006:] There's a little utility that will manually burn a tape for you, and that's probably the right way to do it. [speaker002:] Yeah, and we should probably make that part of the procedure for recording the meetings. [speaker007:] Well, s [speaker006:] Yep. [speaker007:] Yeah. That's what I'm wondering, if [disfmarker] [speaker006:] Well [pause] we're g we're gonna automate that. [speaker007:] OK. [speaker006:] My intention is to [pause] do a script that'll do everything. [speaker007:] I mean, you don't have to physically put a tape in the drive? [speaker006:] No. [speaker007:] Or s? s? [comment] Oh, OK. [speaker006:] It's all tape robot, [speaker007:] So it's just [disfmarker] [speaker006:] so you just sit down at your computer and you type a command. [speaker007:] Oh, OK. [speaker009:] Yeah, but then you're effectively using the resources of the back up system. Or is that a different tape robot? [speaker007:] But not at the same time. [speaker006:] Yeah. But y but you would be anyway. [speaker009:] No, no. [speaker006:] Right? [speaker002:] No, no, no. [speaker006:] Because [disfmarker] [speaker009:] See [disfmarker] [speaker002:] He's saying get a whole different drive. [speaker009:] Yeah, just give a dedi [speaker006:] But there's no reason to do that. 
[speaker009:] Well, I'm saying is [@ @] i if you go to Dave, and [disfmarker] and [disfmarker] and ask him "can I use your tape robot?", [speaker006:] It [disfmarker] we already have it there and it [disfmarker] it's [disfmarker] [speaker009:] he will say, "well [pause] that's gonna screw up our back up operation." [speaker006:] No, we won't. He'll say "if [disfmarker] if that means [pause] that it's not gonna be backed up standardly, great." [speaker004:] He I [disfmarker] Dave has [disfmarker] has promoted this in the past. So I don't think he's actually against it. [speaker006:] Yeah. It's [disfmarker] it's definitely no problem. [speaker009:] Oh, OK. Alright. Alright. [speaker004:] Yeah. [speaker009:] Good. [speaker004:] OK. [speaker007:] What about if the times overlap with the normal back up time? [speaker006:] Um, it's [disfmarker] it's just [disfmarker] it's just a utility which queues up. [speaker007:] OK. [speaker006:] It just queues it up and [disfmarker] and when it's available, it will copy it. [speaker004:] Yeah. [speaker006:] And then you can tell it to then remove it from the disk or you can, you know, do it a a few days later or whatever you wanna do, after you confirm that it's really backed up. [speaker007:] OK. [speaker006:] NW [disfmarker]? [speaker001:] You saying NW archive? [speaker006:] NW archive. That's what it is. [speaker001:] Yep [comment] [vocalsound] And if you did that during the day it would never make it to the nightly back ups. [speaker004:] OK. [speaker006:] Right. [speaker001:] And then there wouldn't be this extra load. [speaker009:] Well, it [disfmarker] if he [disfmarker] you have to put the data on a [disfmarker] on a non backed up disk to begin with. So that [disfmarker] so that [disfmarker] otherwise you don't [disfmarker] you [disfmarker] [speaker006:] Right. [speaker001:] Well, but you can have it NW archive to [disfmarker] you can have, [vocalsound] uh, a non backed up disk NW archived, [speaker006:] Right. 
[speaker001:] and it'll never show up on the nightly back ups. [speaker006:] And then it never [disfmarker] [speaker009:] Right. Right. [speaker006:] Right. Which I'm sure would make ever the sysadmins very happy. [speaker009:] Right. [speaker001:] Yeah. [speaker006:] So, I think that's a good idea. [speaker009:] OK. [speaker006:] That's what we should do. [speaker009:] OK. [speaker006:] So, that means we'll probably wanna convert all [disfmarker] all those files [disfmarker] filesystems to non backed up media. [speaker002:] That sounds good. [speaker006:] Yep. [speaker004:] Yeah. Um, another, thing on the agenda said SRI recognition experiments? What's that? [speaker009:] SRI recognition? Oh. [speaker006:] That wasn't me. [speaker009:] Um. well, [speaker004:] Uh. Who's that? [speaker009:] we have lots of them. Uh, I dunno. Chuck, do you have any [disfmarker] any updates? [speaker002:] N I'm successfully, uh, increasing the error rate. Uh [disfmarker] [speaker006:] That's good. [speaker009:] Oh. [speaker008:] Mmm. [speaker007:] Lift the Herve approach. [speaker002:] Yeah. So, I mean I'm just playing with, um, the number of Gaussians that we use in the [disfmarker] the recognizer, and [disfmarker] [speaker009:] Well, you have to sa you have to [pause] tell people that you're [disfmarker] you're doing [disfmarker] you're trying the tandem features. [speaker002:] Yes, I'm using tandem features. [speaker006:] Oh you are? [speaker009:] A and I'm still tinkering with the PLP features. [speaker006:] Cool. [speaker002:] And [disfmarker] [speaker004:] Yeah, I got confused by the results. It sai because [disfmarker] uh, the [pause] meeting before, [vocalsound] you said "OK, we got it down to where they're [disfmarker] they're within a tenth of a percent". [speaker002:] That was on males. [speaker009:] Right. That was [disfmarker] that was before I tried it on the females. [speaker004:] Oh. [speaker009:] See, women are nothi are, trouble. 
[speaker004:] It's the women are the problem. OK. [speaker009:] Right? As we all know. So. [speaker007:] Well, let's just say that men are simple. [speaker009:] So [disfmarker] [comment] so, when [disfmarker] So I [disfmarker] I had [disfmarker] I ha [speaker006:] That was a quick response. [speaker009:] So, we had reached the point where [disfmarker] [speaker007:] I'm well rehearsed. [speaker004:] Yeah. [speaker009:] we had reached the point where, [comment] um, on the male portion of the [pause] development set, the, um [disfmarker] or one of the development sets, I should say [disfmarker] [vocalsound] the, um [disfmarker] the male error rate with, uh, ICSI PLP features was pretty much identical with, uh, SRI features. which are [pause] MFCC. So, um, then I thought, "Oh, great. I'll j I'll [disfmarker] just let's make sure everything works on the females." And the error rate [disfmarker] you know, there was a three percent difference. [speaker004:] Oh. Uh huh. [speaker009:] So, [speaker007:] Is there less training data? [speaker009:] uh [disfmarker] [speaker007:] I mean, we don [speaker009:] No, actually there's more training data. [speaker007:] This is on just digits? [speaker009:] No, no. [speaker007:] Oh, sorry. [speaker004:] No. [speaker006:] No. It's, uh, Swi [speaker007:] OK. [speaker002:] Hub five. [speaker007:] This is on [disfmarker] [speaker009:] This is Hub five. [speaker007:] Oh, OK. [speaker006:] Hub five. Yeah. [speaker009:] Yeah. Um, and the test data is CallHome and Switchboard. So, uh [disfmarker] so then [pause] um [disfmarker] Oh, and plus the [disfmarker] the vocal tract [pause] length normalization didn't [disfmarker] actually made things worse. So something's really seriously wrong. So [disfmarker] Um [disfmarker] [speaker004:] Aha! [speaker009:] So [disfmarker] So [disfmarker] [speaker004:] OK. 
So [disfmarker] but you see, now, between [disfmarker] between the males and the females, there's certainly a much bigger difference in the scaling range, than there is, say, just within the males. And what you were using before was scaling factors that were just from the [disfmarker] the m the [pause] SRI front end. And that worked [disfmarker] that worked fine. [speaker009:] That's true. Yeah. [speaker004:] Uh, but now you're looking over a larger range and it may not be so fine. [speaker009:] Well, um [disfmarker] So [disfmarker] I just [disfmarker] d so the one thing that I then tried was to put in the low pass filter, which we have in the [disfmarker] So, most [disfmarker] most Hub five systems actually band limit the [disfmarker] uh, at about, uh, thirty seven hundred, um, hertz. [speaker004:] Uh huh. [speaker009:] Although, you know, normally, I mean, the channel goes to four [disfmarker] four thousand. Right? So, um [disfmarker] And that actually helped, uh [disfmarker] uh, a little bit. [speaker004:] Uh huh. [speaker009:] Um [pause] and it didn't hurt on the males either. So, um [disfmarker] And I'm now, uh, trying the [disfmarker] Oh, and suddenly, also the v the vocal tract length normalization only in the test se on the test data. So, you can do vocal tract length normalization on the test data only or on both the training and the test. [speaker004:] Yeah. [speaker009:] And you expect it to help a little bit if you do it only on the test, and s more if you do it on both training and test. [speaker004:] Yeah. [speaker009:] And so the [disfmarker] It now helps, if you do it only on the test, and I'm currently retraining another set of models where it's both in the training and the test, and then we'll [disfmarker] we'll have, hopefully, even better results. So [disfmarker] But there's [disfmarker] It looks like there will still be some difference, maybe between one and two percent, um, for the females. [speaker004:] Huh. 
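The vocal tract length normalization being discussed warps each speaker's frequency axis by a per-speaker scale factor; a common form is a piecewise-linear warp that scales the low band and then bends so the band edge stays fixed. A minimal sketch, assuming an illustrative 4 kHz band edge and an 87.5% breakpoint (these values are assumptions for illustration, not SRI's actual settings):

```python
def vtln_warp(freq, alpha, f_max=4000.0, break_ratio=0.875):
    """Piecewise-linear VTLN warp (illustrative parameters).

    Below the breakpoint, frequencies are scaled by 1/alpha; above it, a
    linear segment maps the breakpoint's warped value back up to f_max,
    so the band edge is left fixed. alpha > 1 compresses the spectrum
    (roughly, a longer vocal tract); alpha < 1 stretches it.
    """
    f_break = break_ratio * f_max
    if freq <= f_break:
        return freq / alpha
    # linear segment joining (f_break, f_break / alpha) to (f_max, f_max)
    slope = (f_max - f_break / alpha) / (f_max - f_break)
    return f_break / alpha + slope * (freq - f_break)
```

Applying the same warp in both training and test, as described above, lets the models be estimated on normalized spectra, which is why train-plus-test normalization is expected to help more than test-only.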
[speaker009:] And so, um, you know, I'm open to suggestions. And it is true that the, uh [disfmarker] that the [disfmarker] [vocalsound] you know, we are using the [disfmarker] [speaker006:] Mm hmm. [speaker009:] But [disfmarker] it can't be just the VTL, [speaker004:] Uh huh. [speaker009:] because if you don't do VTL in both systems, uh, you know, the [disfmarker] the females are considerably worse in the [disfmarker] with the PLP features. [speaker004:] No [disfmarker] no. I [disfmarker] I remember that. [speaker006:] It's much worse. Yeah. [speaker009:] So there must be some [disfmarker] something else going on. [speaker007:] Well, what's the standard [disfmarker]? Yeah, so I thought the performance was actually a little better on females than males. [speaker006:] That's what I thought, too. [speaker009:] Um, [pause] that [pause] ye [comment] overall, yes, but on this particular development test set, they're actually a little worse. But that's beside the point. We're looking at the discrepancy between the SRI system and the SRI system when trained with ICSI features. [speaker007:] Right. I'm just wondering if that [disfmarker] if [disfmarker] if you have any indication of your standard features, [speaker006:] What's [disfmarker] Are the freq? [speaker007:] you know, if that's also different [pause] or in the same direction or not. [speaker004:] You're [disfmarker] This is [disfmarker] lemme ask a q more basic que [speaker007:] Cuz [disfmarker] [speaker009:] Mm hmm. [speaker004:] I mean, is this, uh [disfmarker] uh, iterative, Baum Welch training? Or is it Viterbi training? Or [disfmarker]? [speaker009:] It's Baum Welch training. [speaker004:] Baum Welch training. And how do you determine when to [disfmarker] to stop iterating? [speaker009:] Um [disfmarker] Well, actually, we [disfmarker] we just basically do a s a fixed number of iterations. [speaker006:] Hmm. [speaker009:] Uh, in this case four. 
Um, which [disfmarker] Eh, we used to do only three, and then we found out we can squeeze [disfmarker] And it was basically, we're s we're keeping it on the safe side. But you're d Right. It might be that one more iteration [vocalsound] would [disfmarker] would help, but it's sort of you know. [speaker004:] Or maybe [disfmarker] or maybe you're doing one too many. I mean it's [disfmarker] it's [disfmarker] [speaker009:] No, but with Baum Welch, there shouldn't be an over fitting issue, really. [speaker004:] Uh. [comment] Well, there can be. Sure. [speaker009:] Um. [speaker006:] Well, you can try each one on a cross validation set, [speaker004:] It d if you [disfmarker] if you remember some years ago Bill Byrne did a thing where he was [disfmarker] he was looking at that, [speaker006:] can't you? [speaker004:] and he showed that you could get it. [speaker009:] Yeah. [speaker004:] So. But [disfmarker] [comment] but [disfmarker] [vocalsound] but, um [disfmarker] [speaker009:] Well, yeah. We can [disfmarker] Well, that's [disfmarker] that's the easy one to check, because we save all the intermediate models [speaker004:] Yeah. [speaker006:] Do you [disfmarker]? [speaker009:] and we can [disfmarker] [speaker004:] And in each case, ho [speaker006:] What [disfmarker]? [speaker004:] um, I'm sorry [disfmarker] in each case how do you determine, you know, the [disfmarker] the usual [pause] fudge factors? The, uh [disfmarker] [vocalsound] the, uh, language, uh, scaling, acoustic scaling, uh, uh [disfmarker] [speaker009:] Um [pause] I uh [disfmarker] [comment] I'm actually re optimizing them. Although that hasn't shown to make [pause] a big difference. [speaker004:] OK. And the pru the question he was asking at one point about pruning, uh [disfmarker] [speaker009:] Pruning [disfmarker]? [speaker004:] Remember that one? [speaker009:] Pruning in the [disfmarker]? 
[speaker004:] Well, he was [disfmarker] he's [disfmarker] it looked like the probabil at one point he was looking at the probabilities he was getting out [disfmarker] at the likelihoods he was getting out of PLP versus mel cepstrum, and they looked pretty different, [speaker002:] Yeah, the likelihoods were [pause] lower for the PLP. [speaker004:] as I recall. [speaker007:] Oh. [speaker004:] And so, uh, there's the question [disfmarker] [speaker009:] I you mean [disfmarker] did you see this in the SRI system? [speaker002:] Mm hmm. [speaker009:] Um. Well, the likelihoods are [disfmarker] [speaker002:] Was just looking through the log files, and [disfmarker] [speaker009:] You can't directly compare them, because, for every set of models you compute a new normalization. And so these log probabilities, they aren't directly comparable [speaker002:] Oh. [speaker009:] because you have a different normalization constants for each model you train. So [disfmarker] [speaker002:] Hmm. [speaker004:] But, still it's a question [disfmarker] if you have some threshold somewhere in terms of beam search or something, [speaker002:] Well, yeah. [speaker004:] or [disfmarker]? [speaker002:] That's what I was wondering. [speaker009:] W yeah. I mean [disfmarker] Uh [disfmarker] [speaker002:] I mean, if you have one threshold that works well because the range of your likelihoods is in this area [disfmarker] [speaker009:] We prune very conservatively. I mean, as we saw with the meeting data, um [pause] we could probably tighten the pruning without really [disfmarker] So we we basically we have a very open beam. [speaker004:] But, you're only talking about a percent or two. [speaker002:] Yeah. [speaker004:] Right? Here we're we're saying that we there [disfmarker] gee, there's this b eh, there's this difference here. And [pause] it [disfmarker] See cuz, i i [comment] there could be lots of things. Right? [speaker009:] Right. 
[speaker004:] But [disfmarker] but [disfmarker] but [disfmarker] but, um, let's suppose just for a second that, uh, we've sort of taken out a lot of the [disfmarker] the major differences, uh, between the two. [speaker009:] Course. Mm hmm. Right. [speaker004:] I mean, we're already sort of using the mel scale and we're using the same style filter integration, and [vocalsound] and, well, we're making sure that low and high [disfmarker] [speaker009:] Actually, there is [disfmarker] the difference in that. So, for the PLP features we use the triangular filter shapes. And for the [disfmarker] in the SRI front end we use the trapezoidal one. [speaker006:] And what's the top frequency of each? [speaker009:] Well, now it's the same. It's thirty [disfmarker] thirty to seven hundred and sixty hertz. [speaker006:] Yeah. Exp one's triangular, one's trapezoidal. So [disfmarker] [speaker009:] No, no. But [disfmarker] [speaker004:] Before we [disfmarker] i i th with straight PLP, it's trapezoidal also. [speaker009:] Well [disfmarker] But [disfmarker] [speaker004:] But then we had a slight difference in the [disfmarker] in the scale. Uh, so. [speaker009:] Since currently the Feacalc program doesn't allow me to change [pause] the filter shape independently of the scale. [speaker006:] Uh huh. [speaker009:] And, I did the experiment on the SRI front end where I tried the [disfmarker] y where the standard used to be to use trapezoidal filters. You can actually continuously vary it between the two. And so I wen I swi I tried the trap eh, triangular ones. And it did slightly worse, but it's really a small difference. [speaker006:] Hmm. [speaker009:] So [disfmarker] [speaker004:] Coup Couple tenths of a percent or something. [speaker006:] OK. [speaker009:] Yeah, exactly. [speaker006:] So it's not just losing some [vocalsound] frequency range. [speaker004:] Right. [speaker009:] So, it's not [disfmarker] I don't think the filter shape by itself will make a huge [comment] difference. 
[speaker004:] Yeah. Right. So the oth [vocalsound] the other thing that [disfmarker] [speaker006:] Yeah. [speaker004:] So, f i We've always viewed it, anyway, as the major difference between the two, is actually in the smoothing, [speaker009:] Mm hmm. [speaker004:] that the [disfmarker] that the, um, [vocalsound] PLP, and [disfmarker] and the reason PLP has been advantageous in, uh, slightly noisy situations is because, [vocalsound] PLP does the smoothing at the end by an auto regressive model, [speaker009:] Mm hmm. [speaker004:] and mel cepstrum does it by just computing the lower cepstral coefficients. [speaker009:] Mm hmm. [speaker004:] Um. So, um [disfmarker] Mm hmm. [speaker009:] OK. So [pause] one thing I haven't done yet is to actually do all of this with a much larger [disfmarker] with our full training set. So right now, we're using a [disfmarker] I don't know, forty? I i it's [disfmarker] it's [disfmarker] eh [comment] it's a f training set that's about, um, you know, by a factor of four smaller than what we use when we train the full system. So, some of these smoothing issues are over fitting for that matter. And the Baum Welch should be much less of a factor, if you go full [disfmarker] whole hog. [speaker004:] Mm hmm. Could be. Yeah. [speaker009:] And so, w so, just um [disfmarker] so the strategy is to first sort of treat things [pause] with fast turn around on a smaller training set and then, [vocalsound] when you've sort of, narrowed it down, you try it on a larger training set. [speaker004:] Yeah. [speaker009:] And so, we haven't done that yet. [speaker004:] Now the other que related question, though, is [disfmarker] is, [vocalsound] uh, what's the boot models for these things? [speaker009:] Th th the boot models are trained from scratch. So we compute, um [disfmarker] So, we start with a, um, alil alignment that we computed with the b sort of the best system we have. And [disfmarker] and then we train from scratch. 
So we com we do a, you know, w um [disfmarker] [vocalsound] We collect the [disfmarker] uh, the observations from those alignments under each of the feature sets that [disfmarker] that we [pause] train. And then, from there we do, um [disfmarker] There's a lot of, actually [disfmarker] [vocalsound] The way it works, you first train a phonetically tied mixture model. Um. You do a total of [disfmarker] First you do a context independent PTM model. Then you switch to a context [disfmarker] You do two iterations of that. Then you do two iterations of [disfmarker] of [disfmarker] of context dependent phonetically tied mixtures. And then from that you [disfmarker] you do the [disfmarker] you [disfmarker] you go to a state clustered model, [speaker004:] Yeah. [speaker009:] and you do four iterations of that. So there's a lot of iterations overall between your original boot models and the final models. I don't think that [disfmarker] Hmm. We have never seen big differences. Once I thought "oh, I can [disfmarker] Now I have these much better models. I'll re generate my initial alignments. Then I'll get much better models at the end." Made no difference whatsoever. It's [disfmarker] I think it's [disfmarker] eh, i [speaker004:] Right. [speaker009:] the boot models are recur [speaker004:] Well, mis for making things better. Yeah. But, this for making things worse. This it migh Th the thought is [disfmarker] is [disfmarker] is possible [disfmarker] another possible [pause] partial cause is if the boot models [vocalsound] used a comple used a different feature set, that [disfmarker] [speaker009:] Mm hmm. Mm hmm. But there are no boot models, in fact. You [disfmarker] you're not booting from initial models. You're booting from initial alignments. [speaker004:] Which you got from a different feature set. [speaker009:] That's correct. [speaker004:] So, those features look at the data differently, actually. 
[speaker009:] Yeah, but [disfmarker] [speaker004:] I mean, you know, they [disfmarker] they will find boundaries a little differently, though [disfmarker] You know, all th all that sort of thing is actually slightly different. I'd expect it to be a minor effect, [speaker009:] But [disfmarker] but [disfmarker] but, what I'm [disfmarker] what I'm saying is [disfmarker] [speaker004:] but [disfmarker] [speaker009:] So, we e w f w For a long time we had used boot alignments that had been trained with a [disfmarker] [vocalsound] with the same front end but with acoustic models that were, like, fifteen percent worse than what we use now. [speaker004:] Mm hmm. [speaker009:] And with a dict different dictionary [disfmarker] with a considerably different dictionary, which was much less detailed and much less well suited. [speaker004:] Mm hmm. Yeah. [speaker009:] And so, [vocalsound] then we switched to new boot alignments, which [disfmarker] which now had the benefit of all these improvements that we've made over two years in the system. [speaker004:] Right. [speaker009:] And, the result in the end was no different. [speaker004:] Right. [speaker009:] So, what I'm saying is, the exact nature of these boot alignments is probably not [pause] a big factor in the quality of the final models. [speaker004:] Yeah, maybe not. But [pause] it [disfmarker] it [disfmarker] I st still see it as [disfmarker] I mean, [vocalsound] there's [disfmarker] there's a history to this, too, [speaker009:] Yeah. [speaker004:] but I [disfmarker] uh, I don't wanna go into, [speaker009:] Mm hmm. [speaker004:] but [disfmarker] but I [disfmarker] I [disfmarker] I th I think it could be the things [pause] that it [disfmarker] the data is being viewed in a certain way, uh, that a beginning is here rather than there and so forth, [speaker009:] Yeah. Right. [speaker004:] because the actual signal processing you're doing is slightly different. [speaker009:] Right. 
[speaker004:] But, [vocalsound] it's [disfmarker] it's [disfmarker] that's probably not it. [speaker009:] Yeah. Anyway, I [disfmarker] I [disfmarker] I should really reserve, uh, any conclusions until we've done it on the large training set, um, and until we've seen the results with the [disfmarker] with the VTL in training. So. [speaker004:] Yeah. At some point you also might wanna take the same thing and try it on, uh, some Broadcast News data or something else that actually has [disfmarker] has some noisy [disfmarker] [vocalsound] noisy components, so we can see if any conclusions we come to holds [vocalsound] across [pause] different data. [speaker009:] Yeah. Right. [speaker004:] Uh [disfmarker] [speaker009:] And, uh, with this, I have to leave. [speaker008:] Hmm! [speaker004:] OK. So, is there something quick about Absinthe [pause] that you [disfmarker]? [speaker009:] With this said. [speaker006:] Uh. Just what we were talking about before, which is that I ported a Blass library to Absinthe, and then got [disfmarker] got it working with fast forward, and got [vocalsound] [vocalsound] a speedup roughly proportional to the number of processors times the clock cycle. [speaker009:] Oh. [speaker006:] So, that's pretty good. [speaker009:] Oh! Cool. [speaker006:] Um, I'm in the process of doing it for Quicknet, but there's something going wrong and it's about half the speed that I was estimating it should be, and I'm not sure why. [speaker009:] Mm hmm. [speaker006:] But I'll keep working on it. But the [disfmarker] what it means is that it's likely that for net training and forward passes, we'll [disfmarker] Absinthe will be a good machine. Especially if we get a few more processors and upgrade the processors. [speaker009:] A few more processors? How many are you shooting for? [speaker006:] There're five now. It can hold eight. [speaker009:] Oh, OK. [speaker004:] Yeah, we'll just go buy them, I guess. 
[speaker006:] And it's also five fifty megahertz and you can get a gigahertz. [speaker009:] Yeah. [speaker006:] So. [speaker009:] Can you mix [pause] t uh, processors of different speed? [speaker006:] I don't think so. I think we'd have to do all [disfmarker] [speaker009:] OK. [speaker004:] Probably just throw away the old ones, [speaker006:] Yep. [speaker004:] and [disfmarker] Thank you [pause] for the box, and [disfmarker] [vocalsound] I'll just go buy their process. [speaker009:] Oh, OK. [speaker008:] Hmm! [speaker009:] Maybe we can stick them in another system. I dunno. [speaker006:] We'd have to get a [disfmarker] almost certainly have to get a, uh, Netfinity server. [speaker009:] I see. [speaker006:] They're pretty [disfmarker] pretty specialized. [speaker004:] Yeah. [speaker009:] OK. [speaker004:] OK. Is [disfmarker] is Liz coming back, do you know, or [disfmarker]? I dunno. Yeah. Oh, you don't. OK. Alright. Alright. See you. Um. Alright. So [disfmarker] Uh, they're having tea out there. So I guess the other thing that we were gonna talk about is [disfmarker] is, uh, demo. And, um, so, these are the demos for the [pause] uh, July, uh, meeting [pause] and, um [disfmarker] DARPA mee [speaker006:] July what? Early July? Late July? [speaker004:] Oh, I think it's July fifteenth. [speaker001:] Sixteen to eighteen, I think. [speaker004:] Is that it? Yeah, sixteenth, eighteenth. [speaker001:] Roughly. [speaker004:] Yeah. So, we talked about getting something together for that, but maybe, uh [disfmarker] maybe we'll just put that off for now, given that [disfmarker] But I think maybe we should have a [disfmarker] a sub meeting, I think, uh, probably, uh, Adam and [disfmarker] and, uh, Chuck and me should talk about [disfmarker] should get together and talk about that sometime soon. [speaker006:] Over a cappuccino tomorrow? [speaker004:] Yeah [comment] something like that. 
Um, uh, you know, maybe [disfmarker] maybe we'll involve Dan Ellis at some [disfmarker] some level as well. [speaker006:] Mm hmm. [speaker004:] Um. OK. The [disfmarker] the tea is [disfmarker] is going, so, uh, I suggest we do, uh [disfmarker] uh, a unison. [speaker006:] A unison digits? [speaker001:] OK. [speaker004:] Yeah. [speaker006:] Which is gonna be a little hard for a couple people because we have different digits forms. [speaker004:] Gets our [disfmarker] [speaker005:] Oops. [speaker006:] We have a [disfmarker] I found a couple of old ones. [speaker008:] Hmm. [speaker004:] Oh. Well, that'll be interesting. So, uh [disfmarker] [speaker006:] Have you done digits before? [speaker004:] No. [speaker003:] I haven't done it. [speaker006:] OK. So, uh, the idea is just to read each line [pause] with a short pause between lines, [speaker003:] Alright. [speaker006:] not between [disfmarker] And, uh, since we're in a hurry, we were just gonna read everyone all at once. So, if you sorta plug your ears and read [disfmarker] [speaker003:] OK. [speaker006:] So first read the transcript number, and then start reading the [pause] digits. [speaker003:] Sure. [speaker006:] OK? One, two, three. [speaker004:] OK we're done. [speaker006:] And [disfmarker] [speaker001:] Yeah, I think I got my mike on. OK. Let's see. [speaker002:] OK. Ami, do yours then we'll open it and I think it'll be enough. [speaker001:] Mmm [disfmarker] Doesn't, uh [disfmarker] It should be the other way. Yeah, now it's on. [speaker006:] Right. OK. [speaker002:] OK. So, we all switched on? [speaker001:] We are all switched on, yeah. [speaker002:] Alright. [speaker006:] We are all switched on. [speaker002:] Anyway. So, uh, before we get started with the, uh, technical part, I just want to review what I think is happening with the [disfmarker] our data collection. So [disfmarker] [vocalsound] Uh, probably after today, [vocalsound] that shouldn't come up in this meeting. 
Th this [disfmarker] this is s should be im it isn't [disfmarker] There's another thing going on of gathering data, and that's pretty much independent of this. But, uh, I just want to make sure we're all together on this. What we think is gonna happen is that, uh, in parallel starting about now [vocalsound] we're gonna get Fey [vocalsound] to, where you're working with me and Robert, draft a note that we're gonna send out to various CogSci c and other classes saying, "here's an opportunity to be a subject. Contact Fey." And then there'll be a certain number of um, hours during the week which she will be available and we'll bring in people. Uh, roughly how many, Robert? We d Do we know? [speaker003:] Um, fifty was our [disfmarker] sort of our first [disfmarker] [speaker002:] OK. So, we're looking for a total of fifty people, not necessarily by any means all students but we'll s we'll start with [disfmarker] with that. In parallel with that, we're gonna need to actually do the script. And, so, I guess there's a plan to have a meeting Friday afternoon Uh, with [disfmarker] uh, Jane, and maybe Liz and whoever, on actually getting the script worked out. But what I'd like to do, if it's O K, [vocalsound] is to s to, as I say, start the recruiting in parallel and possibly start running subjects next week. The week after that's Spring Break, and maybe we'll look for them [disfmarker] some subjects next door [speaker003:] Yeah. [speaker002:] or [pause] i [speaker003:] Yeah. Also, Fey will not be here during spring break. [speaker002:] Oh, OK, then we won't do it. [speaker003:] So. [speaker002:] OK. So that's easy. Um. So, is [disfmarker] Is that make sense to everybody? [speaker003:] Yeah. Also, um, F [vocalsound] both Fey and I will, um, [vocalsound] do something of which I may, eh [disfmarker] kindly ask you to [disfmarker] to do the same thing, which is we gonna check out our social infrastructures for possible subjects. 
Meaning, [vocalsound] um, kid children's gymnastic classes, pre school parents and so forth. They also sometimes have flexible schedules. So, if you happen to be sort of in a non student social setting, and you know people who may be interested in being subjects [disfmarker] We also considered using the Berkeley High School and their teachers, maybe, and get them interested in stuff. [speaker002:] That's a good idea. [speaker003:] And, um. So that's as far as our brainstorming was concerned. [speaker002:] Oh, yeah. The high school's a great idea. [speaker003:] So. But I [disfmarker] I will just make a first draft of the, uh, note, the "write up" note, send it to you and Fey and then [disfmarker] [speaker002:] And why don't you also copy Jane on it? [speaker003:] And, um, Are we [disfmarker] Have we concurred that, uh, these [disfmarker] these forms are sufficient for us, and necessary? [speaker002:] Uh, th I think they're necessary. This [disfmarker] The permission form. [speaker003:] Mmm. Nuh. N. [speaker002:] Uh, there has to be one, and I think we're just gonna use it as it is, and [pause] Um [speaker003:] N. You happy with that? [speaker002:] Well, yeah. There's one tricky part about, um, they have the right um I The last paragraph [comment] "if you agree to participate you have the opportunity to have anything excised which you would prefer not to have included in the data set." OK? Now that, we had to be included for this other one which might have, uh, meetings, you know, about something. [speaker003:] Mm hmm. [speaker002:] In this case, it doesn't really make sense. Um, so what I'd like to do is also have our subjects sign a waiver saying "I don't want to see the final transcript". [speaker003:] Mm hmm. [speaker002:] And if they don't [disfmarker] If they say "no, I'm not willing to sign that", then we'll show them the final transcript. But, um. [speaker003:] Yep. Makes sense. 
[speaker002:] That, uh [disfmarker] yeah, so we might actually, um S i Jane may say that, "you know, you can't do this", uh, "on the same form, we need a separate form." But anyway. I'd [disfmarker] I'd [disfmarker] I'd like to, e e um, add an a little thi eh [disfmarker] a thing for them to initial, saying "nah, do I don't want to see the final transcript." [speaker003:] Mm hmm. [speaker002:] But other than that, that's one's been approved, this really is the same project, uh, rec you know. And so forth. So I think we just go with it. [speaker003:] Yeah. Yeah. OK. So much for the data, except that with Munich everything is fine now. They're gonna [vocalsound] transcribe. They're also gonna translate the, uh, German data from the TV and cinema stuff for Andreas. So. They're [disfmarker] they all seem to be happy now, [vocalsound] with that. So. w c sh should we move on to the technical sides? [speaker002:] Yep. [speaker003:] Well I guess the good [disfmarker] good news of last week was the parser. So, um Bhaskara and I started working on the [disfmarker] [vocalsound] the parser. Then Bhaskara went to class and once he came back, um, [vocalsound] it was finished. So. It, uh [disfmarker] I didn't measure it, but it was about an hour and ten minutes. [speaker004:] Yep. Something like that. [speaker003:] And, um [disfmarker] and now it's [disfmarker] We have a complete English parser that does everything the German parser does. [speaker002:] Which is [vocalsound] not a lot. [speaker004:] That's the, uh, point. [speaker002:] But [disfmarker] [speaker003:] The [disfmarker] uh, that's not a lot. [speaker004:] Yes. [speaker002:] OK. Right. [speaker003:] And um. [speaker005:] What did you end up having to do? I mean, wha Was there anything [pause] interesting about it at all? [speaker003:] Well, if you, eh [disfmarker] [speaker004:] We'll show you. [speaker002:] Yeah, we can show us, [speaker005:] or are we gonna see that? [speaker002:] right? 
[speaker003:] Well, w w We d The first we did is we [disfmarker] we tried to [disfmarker] to do [disfmarker] change the [disfmarker] the "laufen" into "run", [vocalsound] or "running", [vocalsound] or "runs". [speaker002:] Yep. [speaker005:] Mm hmm. [speaker003:] And we noticed that whatever we tried to do, it no effect. [speaker005:] OK. [speaker003:] And we were puzzled. [speaker005:] Mm hmm. [speaker003:] And, uh, the reason was that the parser i c completely ignores the verb. [speaker005:] Hmm. [speaker003:] So this sentence [disfmarker] sentence is [disfmarker] parses the p the same output, [speaker005:] Interesting parser property. [speaker003:] um, even if you leave out, um, all [disfmarker] all of this. [speaker005:] I see. Yeah. [speaker003:] So it's basically feature film and TV. [speaker005:] Today [speaker003:] That's what you need. [speaker005:] OK. And the [disfmarker] t and the time, right? [speaker003:] If [disfmarker] if you'd add [disfmarker] add Today and Evening, it'll add Time or not. So it [disfmarker] i it does look at that. [speaker005:] OK. [speaker003:] But all the rest is p simply frosting on the cake, and it's optional for that parser. [speaker005:] True. [speaker002:] So, you can sho You [disfmarker] you [disfmarker] Are [disfmarker] are you gonna show us the little templates? [speaker003:] And [disfmarker] [speaker005:] S [speaker003:] Yeah. We ar we can sh er [disfmarker] I can show you the templates. [speaker005:] The former end g " [speaker003:] I [disfmarker] I also have it running here, [speaker005:] Oh, I see. Uh huh. [speaker003:] so if I [vocalsound] do this now, um, [vocalsound] you can see that it parsed the wonderful English sentence, "Which films are on the cinema today [pause] evening?" But, um. [speaker002:] Well, that sounds [disfmarker] [speaker003:] Uh do don't worry about it. 
It could be "this evening, which [disfmarker] which films are on the cinema", [speaker002:] No i [speaker003:] or "running in the cinema, which [disfmarker]" [speaker005:] OK. [speaker003:] uh, "today evening", uh i "Is anything happening in the cinema this evening?" [speaker005:] OK. Key words, e basically. [speaker002:] Well [speaker003:] Ge elaborate, or, more or less, [vocalsound] uh [disfmarker] [speaker002:] Actually, it's a little tricky, in that there's some allowable German orders which aren't allowable English orders and so forth. And it is order based. So it [disfmarker] it [disfmarker] Isn't it? [speaker004:] No. [speaker003:] No. [speaker002:] Oh. So it [disfmarker] it doe I it [disfmarker] These [disfmarker] u these optional elements, [speaker003:] It is not [disfmarker] [speaker002:] it's [disfmarker] it's actually a set, not a sequence? [speaker003:] Yeah. We were [disfmarker] I was afraid that, um [disfmarker] [speaker002:] Oh! [speaker005:] So it really is key word matching, basically. [speaker002:] Really a se [speaker006:] e yeah. [speaker003:] Um. [speaker006:] Mm hmm. [speaker002:] Oh, wow. [speaker003:] Um, I mean, these sentences are just silly. [speaker005:] Hmm. [speaker003:] I mean, uh, d these were not the ones we [disfmarker] we actually did it. Um. What's an idiomatic of phrasing this? Which films are [pause] showing? [speaker004:] Are pl playing at the cinema? [speaker003:] playing? [speaker004:] Yeah. [speaker005:] Tonight? [speaker004:] I changed that file, actually, where it's on my account. [speaker005:] This [disfmarker] this evening? [speaker006:] Actually, you would say, "which films are on tonight?" [speaker004:] You want to get it? Or [disfmarker] is [disfmarker] di was it easy to get it? [speaker003:] Um. I have no net here. [speaker004:] Oh, OK. [speaker002:] Do I? [speaker003:] OK. So. Wonderful parse, same thing. Um. [speaker002:] Right. 
[speaker003:] Except that we d w we don't have this, uh, time information here now, which is, um [disfmarker] Oh. This [disfmarker] are the reserve. Anyways. [vocalsound] So. Um. These are the [disfmarker] sort of the ten different sentence types that the uh [disfmarker] [vocalsound] the parser was able to do. And it still is, now in English. [speaker002:] Yeah. [speaker005:] Mm hmm. [speaker003:] And, um [disfmarker] Sorry. And, um you have already to make it a little bit more elaborate, right? [speaker004:] Yeah, I mean I changed those sentences to make it, uh, more, uh, idiomatic. And, of course, you can have i many variations in those sentences, they will still parse fine. So, in a sense it's pretty broad. [speaker002:] OK. [speaker003:] OK. So, if you want to look at the templates, [vocalsound] [vocalsound] they're conveniently located in a file, "template". Um, and this is what I had to do. I had to change, [@ @] [comment] "Spielfilm" to "film", uh, "Film" to "movie", cinem "Kino" to "cinema" [disfmarker] to "today" [disfmarker] heu "heute" to "today", [speaker005:] Huh. [speaker003:] evening [disfmarker] "Abend" to "evening" [speaker002:] Capitalized as well [speaker001:] Hmm. [speaker003:] And, um. [speaker004:] One thing I was wondering, was, those functions there, are those things that modify the M three L basically? [speaker002:] Y i [speaker003:] Yep. [speaker004:] OK. [speaker003:] And that's [disfmarker] that's the next step, but we'll get to that in a second. [speaker002:] p [speaker003:] And so this means, um, [speaker002:] Oh. [speaker003:] "this" and "see" are not optional. "Want I like" is all maybe in there, but may also not be in there. [speaker002:] So [disfmarker] so, the point is, if it says "this" and "see", it also will work in "see" and "this"? [speaker005:] S [speaker002:] In the other order? [speaker003:] Yeah. [speaker002:] with those two key words? [speaker003:] Should we try it? 
[speaker002:] "This is the one I want to see" or whatever. [speaker003:] OK. "Action watch", [speaker004:] Hmm. [speaker003:] whatever. Nothing was specialfi specified. except that it has some references to audio visual media here. [speaker004:] AV medium. Yeah. [speaker003:] Where it gets that from [disfmarker] It's correct, but I don't know where it gets it from. [speaker004:] "See". [speaker003:] Oh, "see". Yeah. [speaker004:] I mean it's sort of [disfmarker] [speaker003:] Yeah. Yep. OK. And "see this" [comment] is exactly the same thing. [speaker002:] OK, so it is set based. Alright. [speaker004:] One thing I was wondering was, [vocalsound] those percentage signs, right? So, I mean, why do we even have them? Because [disfmarker] if you didn't have them [disfmarker] [speaker003:] Yep. Uh, I'll tell you why. Because it gives a [disfmarker] you a score. [speaker005:] Mm hmm. [speaker004:] Oh. [speaker003:] And the value of the score is, v I assume, I guess, the more of these optional things that are actually in there, the higher the r score [vocalsound] it is. [speaker004:] OK. [speaker006:] Right. [speaker005:] It's a match. [speaker004:] So that's the main purpose. Alright. [speaker005:] Mm hmm. [speaker002:] OK. [speaker003:] So we [disfmarker] we shouldn't belittle it too much. It's doing something, some things, and it's very flexible. I've just tried to [speaker005:] Mm hmm. [speaker006:] Right. [speaker003:] be nice. [speaker002:] No, no. [speaker005:] Right [disfmarker] Yeah. [speaker002:] Fine. Yeah, yeah, yeah, flexible it is. [speaker006:] But [disfmarker] [speaker003:] OK. [vocalsound] Um, let's hope that the generation will not be more difficult, even though the generator is a little bit more complex. Uh but we'll [disfmarker] Mmm, that means we may need two hours and twenty minutes rather than an hour ten minutes, [speaker004:] Right. [speaker003:] I hope. [speaker002:] Alright. 
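[Editor's note: the template mechanism just described — required key words that may appear in any order, %-marked optional elements, and a score that grows with the number of optional items matched — might be sketched roughly like this. This is a guess at the behavior being discussed, not the actual SmartKom parser code; all names are hypothetical.]

```python
# Hypothetical sketch of set-based template matching as described above:
# every required key word must appear (in any order, extra words ignored),
# while optional ones ("%"-marked in the real template files) only raise
# the score when present.

def match_template(words, required, optional):
    """Return a match score in (0, 1], or None if a required word is missing."""
    bag = set(words)
    if not set(required) <= bag:
        return None
    # The more optional elements actually in the input, the higher the score.
    hits = sum(1 for w in optional if w in bag)
    return (len(required) + hits) / (len(required) + len(optional))

# "see" and "this" are required; "want I like" is "all maybe in there".
score_a = match_template("I want to see this".split(),
                         ["see", "this"], ["want", "I", "like"])
score_b = match_template("this see".split(),
                         ["see", "this"], ["want", "I", "like"])
```

Both orders match, as in the "see this" / "this see" exchange above, but the fuller sentence scores higher because more optional elements are present.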
[speaker003:] And the next thing I would like to be able to do, and it seems like this would not be too difficult either, is [vocalsound] to say, "OK let's now pretend we actually wanted to not only change the [disfmarker] [vocalsound] the mapping of [disfmarker] of, uh, words to the M three L but we also wanted to change [disfmarker] add a new sentence type and and make up some [disfmarker] some new M three L [disfmarker] s" [speaker002:] Yep. So That'd be great. It would be a good exercise to just see [vocalsound] whether one can get that to run. [speaker003:] See th Mm hmm. [vocalsound] Yep. And, um, [speaker004:] So, that's [disfmarker] Fine, yeah. [speaker003:] that's [disfmarker] shouldn't be too tough. [speaker004:] Yeah, so where are those [disfmarker] those functions "Action", "Goodbye", and so on, right? Are they actually, um, [vocalsound] Are they going to be called? Um, are they present in the code for the parser? [speaker003:] Yeah. I think what it does, it i i it does something sort of fancy. It loads um [disfmarker] It has these style sheets and also the, um, schemata. So what it probably does, is it takes the, uh, [vocalsound] um [disfmarker] Is this where it is? This is already the XML stuff? This is where it takes its own, um, syntax, and converts it somehow. Um. Where is the uh [disfmarker] [speaker004:] What are you looking for? [speaker003:] Um, where it actually produces the [disfmarker] the XML out of the, uh, parsed [pause] stuff. [speaker004:] Oh, OK. [speaker003:] No, this is not it. Uh. I can't find it now. You mean, where the [disfmarker] where the act how the action "Goodbye" maps into something [disfmarker] [speaker004:] Yeah. [speaker001:] Yeah, where are those constructors defined? [speaker004:] Oh. [speaker003:] Nope. [speaker004:] No, that's not it. [speaker003:] Yeah. This is sort of what happens. This is what you would need to [disfmarker] to change [disfmarker] to get the, uh, XML changed. 
So when it encounts encounters "Day", [vocalsound] it will, uh, activate those h classes in the [disfmarker] in the XML stuff But, um [disfmarker] I saw those actions [disfmarker] uh, the "Goodbye" stuff somewhere. Hmm, hmm, hmm, hmm, hmm. [speaker001:] Grep for it? [speaker003:] Yeah. Let's do that. Oh. [speaker004:] Mmm. M three L dot DTD? [speaker003:] Yep. [speaker004:] That's just a [pause] specification for the XML format. [speaker003:] Yep. Well, we'll find that out. So whatever [disfmarker] n this does [disfmarker] I mean this is, basically, looks l to me like a function call, right? [speaker002:] Hmm? Oh, yeah. [speaker003:] And, um [disfmarker] So, whenever it [disfmarker] it encounters "Goodbye", which we can make it do in a second, here [speaker001:] That function automatically generates an initialized XML structure? [speaker004:] I think each of those functions act on the current XML structure, and change it in some way, for example, by adding a [disfmarker] a l a field to it, or something. [speaker003:] I [speaker002:] y Yeah. They also seem to affect state, [speaker003:] Mm hmm. [speaker002:] cause some of them [disfmarker] there were other actions uh, that [disfmarker] that s seemed to step [disfmarker] state variables somewhere, [speaker004:] Right. [speaker002:] like the n s "Discourse Status Confirm". [speaker003:] Yep. [speaker002:] OK. So that's going to be a call on the discourse and [vocalsound] confirm that it's [disfmarker] [speaker003:] W we Mm hmm [speaker004:] Oh, you mean that's not going to actually modify the tree, but it's going to change the event. [speaker002:] I think that's right. [speaker003:] e [speaker002:] I think it's actually [disfmarker] That looks like it's state modification. [speaker004:] Oh. Oh. [speaker003:] e mmm Um, well i There is a feature called "Discourse Status", [speaker004:] When there's a feature. [speaker002:] Yeah. 
[speaker003:] And so whenever I just say, "Write", it will [disfmarker] it will put this in here. [speaker002:] Oh, so it always just [disfmarker] Is it [disfmarker] So it [disfmarker] Well, go back, then, [speaker003:] h [speaker002:] cuz it may be that all those th things, while they look like function calls, are just a way of adding exactly that to the XML. [speaker003:] Yep. [speaker002:] Uh huh! I'm not [disfmarker] I'm not sure. [speaker003:] So, this [disfmarker] [speaker002:] e I'm not sure [disfmarker] e that [disfmarker] [speaker003:] Um [disfmarker] well, we [disfmarker] we'll see, when we say, let's test something, "Goodbye", causes it to c to create basically an "Action Goodbye End Action". Which is a means of telling the system to shut down. [speaker002:] Right. Right. [speaker003:] Now, if we know that "Write" produces a "Feature Discourse Status Confirm Discourse Status". So if I now say "Write, Goodbye," it should do that. It sho it creates this, [speaker004:] Mm hmm. [speaker002:] Right. [speaker003:] "Confirm Goodbye". [speaker004:] Right there. [speaker002:] Yep. [speaker004:] But there is some kind of function call, because how does it know to put Goodbye in Content, but, uh, Confirm in Features? So [speaker003:] Oh. It d it [disfmarker] n That's because [disfmarker] [speaker004:] So, it's not just that it's adding that field. It's [speaker002:] Right. Absolutely. [speaker004:] OK. [speaker002:] Good point. It's [disfmarker] it's [disfmarker] the [disfmarker] It's under what sub type you're doing it. [speaker003:] Mm hmm. [speaker002:] Yeah. [speaker003:] Yeah. [speaker001:] It's mystery functions. [speaker004:] Well, they're defined somewhere, presumably. [speaker003:] Well, sometimes it m Sometimes, i [speaker002:] Yeah, each is [disfmarker] [speaker003:] When it [disfmarker] [speaker002:] S so that's funny. You bury the s the state in the function Alright. 
Uh [speaker001:] Well, it just automatically initializes things that are common, right? [speaker003:] it [disfmarker] [speaker001:] So it's just a shorthand. [speaker002:] Yeah. [speaker003:] For example [disfmarker] Oh, this is German. Sorry. e So, now, this, it cannot do anymore. Nothing comes out of here. [speaker001:] A "not a number" is a value. Awesome. [speaker003:] So, it doesn't speak German anymore, but it does speak English. And there is, here, a reference [disfmarker] So, this tells us that whatever is [disfmarker] has the ID "zero" is referenced here [disfmarker] by [@ @] [comment] the restriction seed and this is exa "I want [disfmarker]" What was the sentence? [speaker002:] "I want two seats here." [speaker003:] "need two seats here." Nuh. "And where is it playing?" There should also be a reference to something, maybe. Our d This is re um Mmm. Here, we change [disfmarker] and so, we [disfmarker] Here we add something to the Discourse Status, that the user wants to change something that was sort of done before And, uh [disfmarker] and that, whatever is being changed has something to do with the cinema. [speaker001:] So then, whatever takes this M three L is what actually changes the state, not the [disfmarker] Yeah, OK. [speaker002:] No, right, the Discourse Maintainer, [speaker001:] Yeah. [speaker002:] yeah. I see. And it [disfmarker] and it runs around looking for Discourse Status tags, and doing whatever it does with them. And other people ignore those tags. Alright. So, yeah. I definitely think it's [disfmarker] [vocalsound] It's worth the exercise of trying to actually add something that isn't there. [speaker003:] Hmm? [speaker002:] Uh Disc [speaker003:] Sort of get a complete understanding of the whole thing. [speaker002:] Yeah, a kid understanding what's going on. Then the next thing we talked about is actually, [vocalsound] um, figuring out how to add our own tags, and stuff like that. [speaker003:] OK. Point number two. 
I got the, uh, M three L for the routes today. Uh, so I got some more. This is sort of the uh, [vocalsound] um, Hmm. Interesting. It's just going up, it's not going back down. So, this is [disfmarker] um, what I got today is [vocalsound] the [disfmarker] the new [vocalsound] um [vocalsound] M three L for um, [vocalsound] [vocalsound] the Maps, [speaker002:] Yep. [speaker003:] uh, and with some examples [disfmarker] So, this is the XML and this is sort of what it will look like later on, even though it [disfmarker] you can't see it on [disfmarker] on this resolution. And this is what it [disfmarker] sort of is the [disfmarker] the structure of Map requests, um also not very interesting, and here is the more interesting stuff for us, is the routes, route elements, and, again, as we thought it's really simple. This is sort of the, uh, [vocalsound] [vocalsound] um, [vocalsound] parameters. We have [@ @] [comment] simple "from objects" and "to objects" and so forth, points of interest along the way [disfmarker] And, um, I asked them whether or not we could, um [disfmarker] First of all, I was little bit [disfmarker] It seemed to me that this m way of doing it is sort of a stack a step backwards from the way we've done it before. t It seems to me that some notions were missing. So these are [disfmarker] these are [disfmarker] [speaker002:] S So these are [disfmarker] these are your friends back at EML. [speaker003:] Yep. Who are doing this. [speaker002:] So this is not a complicated negotiation. [speaker003:] No. [speaker002:] There's [disfmarker] there's not seven committees, or anything, right? [speaker003:] No, this is very straightforward. [speaker002:] Great. So this is just trying to [disfmarker] It's a design thing, not a political thing. [speaker003:] Yeah. [speaker002:] Once we've [disfmarker] eh [disfmarker] We can just sort of agree on what oughta be done. [speaker003:] Exactly. [speaker002:] Good. 
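[Editor's note: the route markup just described — a route cut into segments, each a "route element" with a "from object", a "to object", and points of interest along the way — is simple enough to sketch as a data structure. The names below are illustrative, not the EML tag set; the global-goal and action-schema slots stand for the kinds of additions the discussion turns to (a visible overall goal, and Enter/Vista/Approach modes, with Approach as the default).]

```python
# Illustrative model of the M3L route structure described above: a route is a
# sequence of route elements, each with a source, a destination, and points
# of interest. The global_goal and action_schema fields are NOT in the EML
# draft; they represent the proposed extensions discussed in the meeting.
from dataclasses import dataclass, field

@dataclass
class RouteElement:
    from_object: str
    to_object: str
    points_of_interest: list = field(default_factory=list)
    action_schema: str = "Approach"   # default mode, as noted in the meeting

@dataclass
class Route:
    global_goal: str                  # the overall goal of the whole route
    elements: list = field(default_factory=list)

# From A to B cut up in steps, each step a route element:
route = Route(
    global_goal="Powder Tower",
    elements=[
        RouteElement("Hotel", "Market Square", ["City Hall"]),
        RouteElement("Market Square", "Powder Tower", action_schema="Vista"),
    ],
)
```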
[speaker003:] And, um [disfmarker] And, uh [disfmarker] However, the, uh [disfmarker] e So that you understand, it is really simple. Uh [disfmarker] You [disfmarker] you have a route, and you cut it up in different pieces. And every [disfmarker] every element of that e r r f of that [disfmarker] Every segment we call a "route element". And so, from A to B we cut up in three different steps, and every step has a "from object" where you start, a "to object" where y where [pause] you sort of end, and some points of interest along the way. What w I was sort of missing here, and uh, maybe it was just me being too stupid, is, [vocalsound] I didn't sort of get the [disfmarker] the notion of the global goal of the whole route. Really, s was not straightforward visibly for me. And some other stuff. And I [vocalsound] suggested that they should n be [disfmarker] k uh, kind enough to do s two things for us, is one, um, [vocalsound] [vocalsound] Also allocating, uh, some tags for our Action Schema Enter Vista Approach, and [disfmarker] And also, um, since you had suggested that [disfmarker] that, um, we figure out if we ever, for a demo reason, wanted to shortcut directly to the g GIS and the Planner, of how we can do it. Now, what's the state of the art of getting to entrances, um, what's the syntax for that, how get getting to [vocalsound] vista points and calculating those on the spot. And the Approach mode, anyhow, is the default. That's all they do it these days. Wherever you'll find a route planner it n does nothing but get to the closest point where the street network is [vocalsound] at minimal distance to the geometric center. [speaker002:] Mm hmm. [speaker003:] So. [speaker002:] So, well, let [disfmarker] Now, this is important. Let, uh [disfmarker] I want a a Again, outside of m almost managerial point, um [disfmarker] You're in the midst of this, so you know better. 
But it seems to me it's probably a good idea to li uh [disfmarker] minimize the number of uh, change requests we make of them. So it seemed to me, what we ought to do is get our story together. OK? And think about it some, internally, before asking them to make changes. [speaker003:] Mm hmm. [speaker002:] Oh. Does this [disfmarker] does this make sense to you guys? It [disfmarker] I mean you're [disfmarker] you're doing the [disfmarker] the interaction but it seemed to me that [vocalsound] what we ought to do is come up with a [disfmarker] uh, something where you, um [disfmarker] And I [disfmarker] I don't know who's mok working most closely on it. Probably Johno. OK. Uh, take what they have, send it to everybody saying "this is what they have, this is what we think we should add", OK? and then have a d a [disfmarker] an iteration within our group saying "Hmm, well [disfmarker]" OK? And get our best idea of what we should add. [speaker003:] Mm hmm. [speaker002:] And then go back to them. Is i or, I don't know does this make sense to you? Or [speaker003:] Yeah. [vocalsound] Especially if we want [disfmarker] Sort of, what I [disfmarker] my feeling was eh we [disfmarker] we sort of reserved something that has a r eh an OK label. That's [disfmarker] th that was my th first sort of step. [speaker002:] Mm hmm. [speaker003:] I w No matter how we want to call it, [vocalsound] this is sort of our playground. And if we get something in there that is a structure elaborate and [disfmarker] and [disfmarker] and [disfmarker] and [disfmarker] and complex enough to [disfmarker] to [disfmarker] to maybe enable a whole simulation, one of these days, that would be [disfmarker] u the [disfmarker] the perfect goal. [speaker002:] Right. Right. That's right. So. So, Yeah. The problem isn't the short ra range optimization. It's the sort of [disfmarker] o one or two year kind of thing. OK. What are the thl class of things we think we might try to do in a year or two? 
How [disfmarker] how would we try to characterize those and what do we want to request now [vocalsound] that's leave enough space to do all that stuff? [speaker003:] Mm hmm. Yep. [speaker002:] Right. And that re that requires some thought. [speaker003:] Yep. [speaker002:] And [disfmarker] so that sounds like a great thing to do [vocalsound] as the priority item um, as soon as we can do it. [speaker003:] Yep. [speaker002:] So y so you guys will [vocalsound] send to the rest of us um [pause] [vocalsound] a version of um, this, and [disfmarker] the [disfmarker] uh, description [disfmarker] [speaker001:] With sugge yeah, suggested improvements and [disfmarker] [speaker002:] Well b Yeah. So, the [disfmarker] the [disfmarker] uh [disfmarker] Not everyone uh, reads German, so if you'd um [speaker001:] Mmm. [speaker002:] tu uh, tur change the description to, uh, English [speaker001:] OK. [speaker002:] and, um, Then [disfmarker] then, yeah. Then, with some sug s suggestions about where [disfmarker] where do we go from here? Uh, this [disfmarker] and this, of course, was just the [vocalsound] [vocalsound] action end. [speaker001:] OK. [speaker002:] Uh, at some point we're going to have to worry about the language end. But for the moment just [vocalsound] uh, t for this class of [disfmarker] of things, we might want to try to encompass. And [disfmarker] [speaker001:] Then the scope of this is beyond [pause] Approach and Vis or Vista. [speaker002:] Oh, yeah, yeah yeah yeah. [speaker001:] Yeah, yeah. [speaker002:] This is [disfmarker] this is everything that [disfmarker] that, um, [pause] [vocalsound] you know, um [pause] we might want to do in the next couple years. [speaker001:] Yeah, yeah. So what would [disfmarker] [speaker003:] Hmm? [speaker001:] OK. [speaker002:] We don't [disfmarker] I mean, that's an issue. We don't know what, entirely. [speaker001:] Uh, yeah. but I'm just [disfmarker] But the [disfmarker] Yeah, OK. 
So I just [disfmarker] this XML stuff here just has to do with Source Path Goal type stuff, in terms of traveling through Heidelberg. [speaker003:] Hmm. [speaker002:] Right. [speaker001:] Or travel, specifically. So, but this O Is the domain greater than that? [speaker002:] No. I think [disfmarker] I think the i the idea is [pause] that [disfmarker] [speaker001:] OK. [speaker002:] Oh. It's beyond Source Path Goal, but I think we don't need to get beyond it [@ @] [comment] [disfmarker] tourists in Heidelberg. [speaker001:] OK. [speaker002:] It seems to me we can get [vocalsound] all the complexity we want in actions and in language without going outside of tourists in Heidelberg. OK? But you know, i depending on what people are interested in, one could have, [vocalsound] uh, tours, one could have [vocalsound] um, explanations of why something is [disfmarker] is, you know, why [disfmarker] why was this done, or [disfmarker] I mean, no [disfmarker] there's no end to the complexity you can build into the [disfmarker] [vocalsound] uh, what a tourist in Heidelberg might ask. [speaker001:] Mmm. [speaker002:] So, at least [disfmarker] unless somebody else wants t to suggest otherwise I think [vocalsound] the general domain we don't have t to uh, broaden. That is, tourists in Heidelberg. And if there's something somebody comes up with that can't be done that way, then, sure. W we'll [disfmarker] we'll look at that, but [vocalsound] uh I'd be s I I'd be surprised at [disfmarker] if there's any [disfmarker] [vocalsound] important issue that [disfmarker] that [disfmarker] And, um [disfmarker] I mean if [disfmarker] if you want to [pause] uh, push us into reference problems, that would be great. [speaker006:] OK. [speaker002:] OK, so this is [disfmarker] his specialty is [disfmarker] reference, [speaker003:] Mm hmm. [speaker002:] and [disfmarker] you know, what [disfmarker] what are these things referring to? 
Not only [vocalsound] anaphora, but, uh, more generally the, uh [disfmarker] this whole issue of, uh, referring expressions, and, what is it that they're actually dealing with in the world? [speaker003:] Mm hmm. [speaker002:] And, again, this is li in the databa this is also pretty well formed because there is an ontology, and the database, and stuff. So it isn't like, [vocalsound] um, you know, the Evening Star or stuff like that. [speaker006:] Right. [speaker002:] I i it [disfmarker] All the entities do have concrete reference. [speaker006:] Right. [speaker002:] Although th the [vocalsound] To get at them from a language may not be trivial. [speaker006:] Right. [speaker002:] There aren't really deep mysteries about um, what w what things the system knows about. [speaker006:] Right. And you have both proper names and descriptions and y and you can ask for it. [speaker002:] All those things. Yeah. [speaker003:] Mm hmm. [speaker002:] You have proper names, and descriptions. [speaker006:] Right. OK. [speaker002:] And a l and a lot [disfmarker] and [disfmarker] and anaphora, and pronouns, [speaker003:] Nuh. [speaker006:] Right. [speaker002:] and [pause] all those things. [speaker006:] Right. [speaker003:] Now, we hav the [disfmarker] the whole [disfmarker] Unfortunately, the whole database is, uh, [vocalsound] in German. We have just commissioned someone to translate some bits of it, IE the e the shortest k the [disfmarker] the more general descriptions of all the objects and, um, persons and events. So, it's a relational database with persons, events, [vocalsound] and, um, objects. And it's [disfmarker] it's quite, um, [vocalsound] there. But did y I [disfmarker] uh [disfmarker] I think there will be great because the reference problem really is not trivial, even if you have such a g well defined world. [speaker002:] He knows. [speaker003:] Ah he you are not, uh, throwing uh, uh, carrying owls to Athens. 
[speaker001:] Could you give me an example of a reference problem? so [disfmarker] so l I can make it more concrete? [speaker003:] Well [disfmarker] [vocalsound] How do I get to the Powder Tower? We sort of t think that our bit in this problem is interesting, but, just to get from Powder Tower to an object I ID in a database is also not really trivial. [speaker006:] Or [disfmarker] or if you take something even more scary, um, "how do I get to the third building after the Tower? the Ple Powder Tower?" [speaker001:] Mmm. [speaker006:] Uh, you need some mechanism for [speaker002:] Yeah. Or, you know, the church across from City Hall, or [disfmarker] [speaker001:] Or the re the restaurant where they wear lederhosen? [speaker003:] Or the [speaker006:] Right. [speaker001:] Or is that [disfmarker] [speaker002:] Yeah, that would be fine. [speaker006:] Right. [speaker001:] OK. [speaker002:] Yeah. [speaker006:] Right. [speaker003:] O or [disfmarker] or tower, or this tower, or that building, or [disfmarker] [speaker005:] Uniquely. [speaker003:] hmm? [speaker001:] OK. [speaker002:] Or you can say "how [disfmarker]" you know, "how do I get back?" [speaker001:] Trying to [disfmarker] Yeah, yeah. [speaker002:] OK. And, again, it's just a question of which of these things, uh, people want to [vocalsound] dive into. What, uh, I think I'm gonna try to do, and I guess, pwww! let's say that by the end of spring break, I'll try to come up with some [vocalsound] general story about, um, construction grammar, and what constructions we'd use and how all this might fit together. There's this whole framework problem that I'm feeling really uncomfortable about. And I haven't had a chance to [vocalsound] think about it seriously. But I [disfmarker] I want to [disfmarker] I want to do that early, rather than late. And you and I will probably have to talk about this some. 
[speaker003:] u u u u That's what strikes me, that we sort of [disfmarker] the de g uh, small [disfmarker] Something, uh, maybe we should address one of these days, is to [disfmarker] [vocalsound] That most of the work people actually always do is look at some statements, and [disfmarker] and analyze those. Whether it's abstracts or newspapers and stuff like this. [speaker002:] Hmm. [speaker003:] But the whole [disfmarker] i is it [disfmarker] is it really relevant that we are dealing mostly with, sort of, questions? Uh, you know [disfmarker] [speaker002:] Oh, yeah? Well, I mean yeah, I d [speaker003:] And this is [disfmarker] It seems to me that we should maybe at least spend a session or [disfmarker] or brainstorm a little bit about whether that l this is special case in that sense. [speaker004:] Mm hmm. [speaker003:] Um, I don't know. You know [disfmarker] Did we ever find m metaphorical use in [disfmarker] in questions in [disfmarker] in that sense, really? [speaker002:] Yeah. You will. [speaker003:] And how soon, I don't know. [speaker002:] Oh, yeah. I mean, uh, we could take all the standard metaphor examples and make question versions of them. OK. [speaker003:] "Who got kicked out of France?" [speaker006:] Muh [speaker003:] Nuh. [speaker002:] Yeah, or, you know. "Wh why is he [disfmarker] why is he pushing for promotion?" [speaker006:] Right. [speaker002:] or, "who's pushing proof" [speaker003:] Nuh. [speaker002:] er, just pick [disfmarker] pick any of them and just [vocalsound] do the [disfmarker] eh [disfmarker] [speaker003:] Mm hmm. [speaker002:] So I don't [disfmarker] I don't think, [vocalsound] uh, it's at all difficult [disfmarker] Uh, to convert them to question forms that really exist and people say all the time, um [disfmarker] And [disfmarker] sort of [disfmarker] we don't know how to handle them, too. Right? 
I mean, it's [disfmarker] I d It [disfmarker] We don't know how to handle the declarative forms, [@ @] [comment] really, and, then, the interrogative forms, ah oh. Uh. [speaker004:] Ooo! [speaker002:] Yeah. Nancy, it looked like you were s [speaker005:] Oh. it's just that [disfmarker] that the goals are g very different to cases [disfmarker] So we had this problem last year when we first thought about this domain, actually, was that [vocalsound] most of the things we talked about are our story understanding. Uh, we're gonna have a short discourse [speaker002:] Right. [speaker005:] and [vocalsound] the person talking is trying to, I don't know, give you a statement and tell you something. And here, [vocalsound] it's th [speaker003:] Help you create a mental model, blah blah blah. Yeah. [speaker005:] Yea eh [disfmarker] y Yeah, I guess so. [speaker002:] Yes. [speaker005:] And then here, y you are j uh, the person is getting information and they or may not be following some larger plan, [vocalsound] you know, that we have to recognize or, you know, infer. And th th the [disfmarker] their discourse patterns probably [nonvocalsound] don't follo follow quite as many [vocalsound] logical connec [speaker002:] Right. No, I think that's one of things that's interesting, [speaker005:] Yeah. [speaker002:] is [disfmarker] is in this sort of over arching story we [disfmarker] we worked it out for th as you say, this [disfmarker] the storytelling scenario. [speaker005:] Mm hmm. Mm hmm. [speaker002:] And I think it's really worth thinking through [vocalsound] [vocalsound] what it looks like. [speaker005:] Mm hmm. [speaker002:] What is the simspec mean, et cetera. [speaker005:] M Right. Cuz for a while we were thinking, "well, how can we change the, [vocalsound] um, data to sort of elicit tha [vocalsound] elicit, um, actions that are more like what we are used to?"
But obviously we would rather, you know, try to figure out what's [disfmarker] what's, you know [disfmarker] [speaker002:] Well, I don't know. I mean, maybe [disfmarker] maybe that's what we'll do is [disfmarker] is s u e We can do anything we want with it. [speaker005:] Yep. [speaker002:] I mean, once we have fulfilled these requirements, [speaker005:] Mmm [disfmarker] Mm hmm. Mm hmm. [speaker002:] OK, and the one for next uh, summer is just half done and then the other half is this, um, "generation thing" which we think isn't much different. [speaker005:] Mm hmm. [speaker002:] So once that's done, then all the rest of it is, uh, sort of, you know, what we want to do for the research. And we can [disfmarker] w we can do all sorts of things that don't fit into their framework at all. Th there's no reason why we're c we're constrained to do that. [speaker005:] Mm hmm. [speaker002:] If we can use all the, uh, execution engines, then we can, [vocalsound] you know, really [nonvocalsound] try things that would be too [disfmarker] too much pain to do ourselves. [speaker005:] Mm hmm. [speaker002:] But there's no obligation on any of this. So, if we want to turn it into u understan standing stories about Heidelberg, we can do that. I mean, that would just be a t a um [disfmarker] [speaker003:] Or, as a matter of fact, we need [disfmarker] [vocalsound] and if we if we' eh [disfmarker] take a ten year perspective, we need to do that, because w e w a Assuming we have this, um, we we ta in that case we actually do have these wonderful stories, and historical anecdotes, [speaker002:] Yeah. [speaker003:] and knights jumping out of windows, [speaker005:] Mmm. [speaker003:] and and and [disfmarker] [comment] [comment] tons of stuff. So, th the database is huge, and if we want to answer a question on that, we actually have to go one step before that, and understand that. [speaker005:] Mm hmm. [speaker003:] In order to e do sensible information extraction. [speaker005:] Yeah. 
Mm hmm. [speaker003:] And so, [speaker002:] You might, yeah. [speaker006:] Mwa [speaker003:] um, this has been a [disfmarker] a [disfmarker] a Deep Map research issue that was [disfmarker] is [disfmarker] is part of the unresolved, and to do's, and something for the future, is [vocalsound] how can we sort of run our our text, our content, through a machine [vocalsound] that will enable us, later, to retrieve or answer e questions more sensibly? [speaker006:] Mm hmm. [speaker005:] Mm hmm. Mmm. [speaker002:] Right. Anyway. S Who's going? [speaker006:] So, uh [disfmarker] So, uh, I was just going to ask, um, [vocalsound] so, what is the [disfmarker] the basic thing that [disfmarker] that you are, um, obligated to do, um, uh, by the summer before w uh y c we can move [disfmarker] [speaker002:] Ah! OK. So [disfmarker] eh [disfmarker] Yeah. So, what happened is, there's this, eh, uh [disfmarker] Robert was describing the [disfmarker] There's two packages there's a, uh, quote parser, there's a particular piece [vocalsound] of this big system, which, in German, uh, takes these t sentence templates and produces XML structures. [speaker006:] Right. [speaker002:] And one of our jobs was to make the English equivalent of that. That, these guys did in a [disfmarker] in a day. [speaker006:] Right. Right. [speaker002:] The other thing is, at the other end, roughly at the same level, there's something that takes, uh, X M L structures, produces an output XML structure which is instructions for the generator. [speaker006:] Right. [speaker002:] OK? And then there's a language generator, and then after that a s a synthesizer that goes from an XML structure to, uh, language generation, to actual specifications for a synthesizer. Eh, but again, there's one module in which there's one piece [vocalsound] that we have to convert to English. [speaker006:] Right. Right. Got it. [speaker002:] Is that [disfmarker] OK. 
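A minimal sketch of the module chain being described, with stand-in functions for the parser, the generator-instruction mapper, the generator, and the synthesizer. Every function name and data shape below is an assumption for illustration, not the actual SmartKom interfaces:

```python
# Sketch of the pipeline: sentence template -> analysis XML structure,
# analysis XML -> generator-instruction XML, generation, synthesis.
# Plain dicts stand in for the XML structures.

def parse_to_xml(utterance: str) -> dict:
    # Stand-in for the "quote parser" module.
    return {"intent": "route-query", "object": utterance.split()[-1].rstrip("?")}

def to_generator_instructions(xml: dict) -> dict:
    # The module at the other end: analysis XML -> instructions for the generator.
    return {"act": "give-directions", "target": xml["object"]}

def generate(instr: dict) -> str:
    # Language generation from the instruction structure.
    return f"To reach the {instr['target']}, ..."

def synthesize(text: str) -> bytes:
    # Stand-in for the speech-synthesizer front end.
    return text.encode("utf-8")

audio = synthesize(generate(to_generator_instructions(
    parse_to_xml("How do I get to the Powder Tower?"))))
print(audio)
```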
And that [disfmarker] But as I say, this is [disfmarker] all along was viewed as a kind of [disfmarker] [vocalsound] a m a minor thing, necessary, but [disfmarker] but not [disfmarker] [speaker006:] Right. [speaker002:] OK? [speaker006:] Right. [speaker002:] And much more interesting is the fact that, [vocalsound] as part of doing this, we [disfmarker] we are, you know, inheriting this system that does all sort of these other [vocalsound] things. [speaker006:] That's great! Right. [speaker002:] Not precisely what we want, and that's [disfmarker] [vocalsound] that's wh where it [disfmarker] it gets difficult. And I [disfmarker] I don't pretend to understand yet what I think we really ought to do. [speaker003:] OK. So, e enough of that, but I, uh, um, mmm, the e sort of, Johno and I will take up that responsibility, and, um, get a first draft of that. Now, we have um just, I think two more short things. [speaker002:] OK. [speaker003:] Um, y you guys sort of started fighting, uh, on the Bayes net "Noisy OR" front? [speaker004:] Hmm. Yeah, I thought I should, um, talk a little bit about that, because that might be a good, uh, sort of architecture to have, in general for, uh, problems with, [vocalsound] you know, multiple inputs to a node. [speaker002:] Good! OK. Good. And what's the other one? so that [disfmarker] just we know what the d agenda is? [speaker003:] Um, the Wu paper, I think maybe [disfmarker] [speaker002:] Oh, yeah. I've got a couple new Wu papers as well. Uh, so I [disfmarker] I've been in contact with Wu, so, probably let's put that off till I [disfmarker] I [disfmarker] till I understand better, [vocalsound] uh, what he's doing. It's just a little embarrassing cause all this was in his thesis and I was on his thesis committee, and, so, [vocalsound] I r really knew this at one time. [speaker006:] Ugh. 
[speaker002:] But, I [disfmarker] I [disfmarker] It's not only uh Is [disfmarker] Part of what I haven't figured out yet is [disfmarker] is how all this goes together. So I'll dig up some more stuff from Dekai. And [disfmarker] so why don't we just do the, uh [disfmarker] [speaker004:] OK. So [disfmarker] should I [disfmarker] Is there a white board here that I can use? [speaker002:] Yeah. You could [disfmarker] [speaker004:] Uh [disfmarker] [speaker005:] Yeah. [speaker004:] Or shall I just use this? [speaker002:] squealing sound? It's probably just as easy. I [speaker006:] Yeah. [speaker004:] Yeah. [speaker001:] You can put the microphone in your pocket. [speaker004:] Hey! [speaker001:] I was envying you and your pocket cause I don't have one. [speaker005:] It was a quick one, huh? [speaker002:] That's why they invented "pocket T's". [speaker005:] They have clips! [speaker001:] exactly [speaker005:] Huh. [speaker004:] Yeah. So, um [disfmarker] [vocalsound] Recall that, uh, we want to have this kind of structure in our Bayes nets. Namely, that, um [disfmarker] [vocalsound] [vocalsound] You have these nodes that have several bands, right? So [disfmarker] Does I mean, they sort of [disfmarker] the typical example is that, um, these are all a bunch of cues for something, and this is a certain effect that we'd like to conclude. So, uh [disfmarker] Like, let's just look at the case when, um, this is actually the [disfmarker] the final action, right? So this is like, uh, [vocalsound] you know, touch, [speaker003:] Y [speaker004:] or [disfmarker] Sorry. [speaker003:] E EVA [speaker004:] Uh Yeah, E [vocalsound] EVA, right? [speaker003:] Yeah. [speaker004:] Enter, V View, Approach, right? [speaker006:] W what was this? It [disfmarker] i i i ehhh, [comment] i ehhh. [speaker004:] So, this is [disfmarker] [speaker002:] Wri write it out for for [disfmarker] [speaker004:] Yeah. [speaker006:] I mean [disfmarker] [speaker004:] Enter, View, Approach. [speaker006:] OK. Right. 
[speaker004:] Right. So, I mean, we'd like to [disfmarker] take all these various cues, right? [speaker006:] Like the army. [speaker004:] So this one might be, say, uh [disfmarker] [speaker006:] Yeah. [speaker005:] New terminology? [speaker004:] Well, let me pick a random one [speaker003:] Hmm? [speaker005:] I haven't heard that before. [speaker004:] and say, uh [disfmarker] I don't know, it could be, like [disfmarker] [vocalsound] This isn't the way it really is, but let me say [disfmarker] that, suppose someone mentioned, uh, admission fees Ah, it takes too long. Try [disfmarker] let me just say "Landmark". If the thing is a landmark, you know, [vocalsound] um [disfmarker] then there's another thing that says if [disfmarker] [vocalsound] um [disfmarker] if it's closed or not, at the moment. Alright, so you have nodes. Right? And the, uh, problem that we were having was that, you know, given N nodes, there's "two to the N" Given N nodes, and furthermore, the fact that there's three things here, we need to specify "three times", uh, "two to the N" probabilities. Right? That's assuming these are all binary, which f they may not be. For example, they could be "time of day", in which case we could, uh, say, you know, "Morning, afternoon, evening, night". So, this could be more So, it's a lot, anyway. And, that's a lot of probabilities to put here, which is kind of a pain. So [pause] Noisy ORs are a way to, uh, [vocalsound] sort of deal with this. Um Where should I put this? So, the idea is that, um, [vocalsound] Let's call these, uh, C one, C two, C three, and C four, and E, for Cause and Effect, I guess. The idea is to have these intermediate nodes. Right. Well, actually, the idea, first of all, is that each of these things has a [disfmarker] quote unquote distinguished state, which means that this is [vocalsound] the state in which we don't really know anything about it. So [disfmarker] right? 
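The table-size blowup being counted out here can be made concrete with a small sketch. It assumes binary cues and a three-valued effect node (Enter/View/Approach); the specific numbers are only illustrative:

```python
# Full conditional probability table (CPT) size versus the factored,
# Noisy-OR-style parameterization under discussion.

def full_cpt_size(n_cues: int, n_effect_states: int = 3) -> int:
    # One distribution over the effect for every combination of cue values.
    return n_effect_states * 2 ** n_cues

def factored_size(n_cues: int, n_effect_states: int = 3) -> int:
    # One small table per cue (3 numbers each) plus a deterministic combiner.
    return n_effect_states * n_cues

print(full_cpt_size(4), factored_size(4))    # 48 versus 12
print(full_cpt_size(10), factored_size(10))  # 3072 versus 30
```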
So, for example, if we don't really know [vocalsound] if the thing is a landmark or not, Or, i if that just doesn't seem relevant, then that would be th sort of the Disting the distinguished state. It's a really, you know, [vocalsound] if there is something for the person talking about the admission fee, you know, if they didn't talk about it, that would be the distinguished state. So [disfmarker] [speaker003:] S so, this is a fanciful way of saying "default"? [speaker004:] Yeah, yeah. [speaker003:] OK. [speaker004:] That's just what they [disfmarker] the word they used in that paper. [speaker003:] Mm hmm. [speaker004:] So, the idea is that, um, [vocalsound] you have these intermediate nodes, right? E one, E two, E three and E four? [speaker002:] So, this is the Heckerman paper you're working with? [speaker004:] Yeah. [speaker002:] Good. [speaker004:] So [pause] The idea is that, each of these EI [disfmarker] is [disfmarker] [vocalsound] represents what this would be [disfmarker] if all the other ones were in the distinguished state. Right? So, for example, suppose that the person [disfmarker] I mean, suppose the thing that they talked about is a landmark. But none of the other [disfmarker] [vocalsound] sort of cues really apply. Then, [vocalsound] this would be [disfmarker] W The [vocalsound] this would just represent the probability distribution of this, assuming that this cue is turned on and the other ones just didn't apply? So, you know, if it is a landmark, and no none of the other things really ap applicable, then [disfmarker] this would represent the probability distribution. So maybe in this case [disfmarker] [vocalsound] Maybe we just t k Maybe we decide that, if the thing's a landmark and we don't know anything else, then we're gonna conclude that, um [disfmarker] [vocalsound] They want to view it with probability, you know, point four.
They want to enter it with probability, uh [disfmarker] with probability point five and they want to approach it probability point one, say [disfmarker] Right? So we come up with these l little tables for each of those OK. And the final thing is that, um [disfmarker] [vocalsound] [vocalsound] this is a deterministic function of these, so we don't need to specify any probabilities. We just have to, um, say what function this is, right? So we can let this be, um [disfmarker] [vocalsound] G of E one comma E two. E three, E four. Right? and our example G would be, um, [vocalsound] a majority vote? Right? [speaker002:] Well. OK, so th so the important point [disfmarker] is [disfmarker] W not what the G function is. The important point is [disfmarker] that [disfmarker] Um [disfmarker] There is a [disfmarker] a [disfmarker] a general kind of idea of shortcutting the full CPT. Th c the full conditional probability table [disfmarker] with some function. OK? Which y w you choose appropriately for each case. So, depending on [vocalsound] what your situation is, there are different functions which are most appropriate. And [disfmarker] So I gave [disfmarker] eh [disfmarker] Bhaskara a copy of this, eh [disfmarker] sort of "ninety two" [comment] paper. [speaker004:] Mm hmm. [speaker002:] D and you got one, Robert. I don't know who else has seen it. [speaker004:] There's [disfmarker] I mean [disfmarker] yeah. it's Heckerman and Breese. [speaker002:] It's short. It's short. [speaker004:] Yeah. [speaker002:] So, I u w Um, yo uh [disfmarker] you [disfmarker] Have you read it yet? [speaker004:] Uh, you can [disfmarker] Yeah, you should take a look at it, I guess. [speaker002:] OK, so you should take a look. [speaker001:] OK [speaker002:] Nancy, I'm sure you read it at some point in life. [speaker005:] I [disfmarker] yeah. I [disfmarker] I think so, yeah. [speaker002:] OK. And [disfmarker] so, you other guys can decide how interested [disfmarker] [speaker005:] Yeah, [@ @]. 
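The deterministic combiner G being written on the board might be sketched like this, using the majority vote mentioned as the example G. The value names and the tie-breaking rule are assumptions added for illustration:

```python
# Deterministic node G(E1, ..., En): no probabilities to specify, just a
# function from the intermediate-node values to the effect value.
from collections import Counter

EVA = ("enter", "view", "approach")

def g_majority(*e_values: str) -> str:
    # Majority vote over the intermediate nodes; ties are broken by
    # EVA order, purely to keep the function deterministic.
    counts = Counter(e_values)
    return max(EVA, key=lambda v: (counts[v], -EVA.index(v)))

print(g_majority("enter", "view", "enter", "approach"))  # -> "enter"
```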
[speaker002:] Anyway. So the paper isn't th isn't real hard. [speaker006:] OK. [speaker002:] And [disfmarker] [vocalsound] Uh [disfmarker] One of the questions just come at Bhaskara is, "How much of this does JavaBayes support?" [speaker004:] Yeah, it's a good question. Um [pause] [vocalsound] [nonvocalsound] The [disfmarker] so what we want, is basically JavaBayes to support deterministic, uh, functions. [speaker002:] Right. [speaker004:] And, um [disfmarker] [vocalsound] In a sense it sup we can make it supported by, um, [vocalsound] manually, uh, entering, you know, probabilities that are one and zeros, right? [speaker002:] Right. So the little handout that [disfmarker] The little thing that I sent [disfmarker] I sent a message saying, uh, here is a way to take [disfmarker] One thing you could do, which is kind of s in a way, stupid, is take this deterministic function, and use it to build the CPT. So, if Ba JavaBayes won't do it for you, [speaker003:] Mmm. [speaker002:] that you can convert all that into what the CPT would be. Um [disfmarker] and, what I sent out about a week ago, was an idea of how to do that, for, um, evidence combination. So one of [disfmarker] one function that you could use as your "G function" is an e e Evidence Combining. So you just take [vocalsound] the [disfmarker] uh, if each of th if each of the ones has its own little table like that, [vocalsound] then you could take the, uh, strength of each of those, times its little table, and you'd add up the total evidence for "V", "E", and "A". [speaker004:] Mmm. I don't think you can do this, [speaker005:] Mm hmm. [speaker004:] because [disfmarker] [vocalsound] G is a function from [pause] that [vocalsound] to that. [speaker002:] Yep. Right. [speaker004:] Right? So there's no numbers. There's just [disfmarker] quadruplets of [disfmarker] well, N duplets of, uh, E Vs. 
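The evidence-combining G from that earlier message could be sketched as a weighted sum of the per-cue tables, normalized into one EVA distribution. The weights and tables here are invented:

```python
# Each active cue contributes its small (enter, view, approach) table,
# scaled by a strength; the weighted tables are summed and normalized.

def combine_evidence(tables, strengths):
    """tables: list of (p_enter, p_view, p_approach); strengths: list of floats."""
    totals = [0.0, 0.0, 0.0]
    for table, w in zip(tables, strengths):
        for i, p in enumerate(table):
            totals[i] += w * p
    z = sum(totals)
    return tuple(t / z for t in totals)

# A landmark cue favoring entering, and a second cue weighted twice as heavily:
dist = combine_evidence(
    tables=[(0.5, 0.4, 0.1), (0.7, 0.2, 0.1)],
    strengths=[1.0, 2.0],
)
print(dist)  # a normalized distribution over (enter, view, approach)
```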
[speaker002:] I i i No, no [disfmarker] But I'm saying is [disfmarker] There [disfmarker] [vocalsound] [vocalsound] There is a w I mean, if y if [disfmarker] if you decide what's [disfmarker] what is appropriate, is probabilistic evidence combination, you can write a function that does it. It's a pui it's actually one of the examples he's got in there. But, anyway, s skipping [disfmarker] skipping the question of exactly which functions [disfmarker] now is it clear that you might like to be able to shortcut the whole conditional probability table. [speaker003:] I mean, in some [disfmarker] it seems very plausible in some sense, where we will be likely to not be [disfmarker] observe some of the stuff. Cuz we don't have the a access to the information. [speaker004:] Oops, [comment] sorry. [speaker002:] Right. That's one of the problems, is, W Is [disfmarker] is, Where would th Where would it all come from? [speaker003:] Yeah. So. [speaker004:] Is [disfmarker] [speaker005:] Mmm. [speaker004:] Oh, right. W would not be ab able to observe What? [speaker003:] I if it's a [disfmarker] a [disfmarker] a discar Discourse Initial Phrase, we will have nothing in the discourse history. So, if [disfmarker] if we ever want to wonder what was mention [speaker004:] Oh [disfmarker] Oh. A are you saying that we'll not be able to observe certain nodes? That's fine. That is sort of orthogonal thing. [speaker002:] Yeah, so there's [disfmarker] there's two separate things, Robert. The f the [disfmarker] the [disfmarker] the Bayes nets in general are quite good at saying, "if you have no current information about this variable just take the prior for that." OK? Th that's what they're real good at. So, if you don't have any information about the discourse, you just use your priors of [disfmarker] of whatever [disfmarker] eh the [disfmarker] discourse [disfmarker] uh, eh, basically whatever w it's [disfmarker] Probabilistically, whatever it would be.
And it's [disfmarker] it's sort of not a great estimate, [speaker003:] Mm hmm. [speaker002:] but [disfmarker] it's the best one you have, and, so forth. So that, they're good at. But the other problem is, how do you fill in all these numbers? And I think that's the one he was getting at. [speaker005:] Mm hmm. [speaker004:] Yeah. [speaker005:] Yeah. [speaker004:] So, specifically in this case you have to [disfmarker] f have this many numbers, whereas in this case you just have to have three for this, three for this, three for this. Right? So you have to have just three N? [speaker003:] Mm hmm. [speaker004:] So, this is much smaller than that. [speaker001:] Asymptotically. [speaker005:] Mm hmm. [speaker004:] Yeah. [speaker002:] Well, pretty quickly. [speaker004:] Right. [speaker001:] U yeah, yeah. [speaker005:] So, you don't need da data enough to cover [disfmarker] uh, nearly as much stuff. [speaker002:] I mean [disfmarker] [speaker004:] I don't know. [speaker001:] So, really, i What a [disfmarker] A Noisy OR seems to kind of [pause] "neural net acize" these Bayes nets? [speaker002:] Eh [disfmarker] well to some No, no. So, "Noisy OR" is a funny way of referring to this, because [vocalsound] the Noisy OR is only one instance. [speaker004:] Yeah. This isn't a Noisy OR anymore. [speaker002:] That one actually isn't a Noisy OR. So we'll have to think of [vocalsound] of a way t t [speaker001:] Yeah. [speaker004:] it's a Noisy arg max or a Noisy whatever. [speaker002:] Yeah, whatever. Yeah. So [disfmarker] Eh [disfmarker] [comment] Um [speaker001:] Well, my point was more that we just [disfmarker] eh [disfmarker] With the neural net, right, eh, things come in, you have a function that combines them and [disfmarker] [speaker002:] Yeah, it [disfmarker] it [disfmarker] Tha that's true. It is a is also more neural net like, although [disfmarker] [vocalsound] Uh, it isn't necessarily sum [disfmarker] uh, s you know, sum of weights or anything like that. [speaker001:] Right. 
[speaker002:] I mean i You could have, uh, like the Noisy OR function, really is one that's essentially says, uh, take the max. [speaker004:] Well, the "OR". [speaker002:] Same. [speaker004:] Right. I guess you're right. Yeah. [speaker002:] Uh But anyway. So [disfmarker] And, I thi I think that's the standard way people get around the [disfmarker] uh There are a couple other ones. There are ways of breaking this up into s to [disfmarker] to subnets and stuff like that. But, um The I think we definitely [disfmarker] I think it's a great idea tha to [disfmarker] to pursue that. [speaker004:] Yep. So [speaker003:] Wha still sort of leaves one question. It [disfmarker] I mean you [disfmarker] you can always uh [disfmarker] see easily that [disfmarker] that I'm not grasping everything correctly, but [vocalsound] what seemed attractive to me in im uh in the last discussion we had, was [vocalsound] that we find out a means of [disfmarker] of getting these point four, point five, point one, of C four, not because, you know, A is a Landmark or not, but we [disfmarker] we [disfmarker] we label this whatever object type, and if it's a garden, it's point three, point four, point two. If it's a castle, it's point eight, point one, point one. If it's, [vocalsound] uh, a town hall, it's point two, point three, point five. [speaker002:] Right. [speaker003:] And so forth. And we don't want to write this down [disfmarker] necessarily every time for something but, uh [disfmarker] [speaker004:] It'll be students [disfmarker] [speaker003:] let's see. [speaker004:] Where else would it be stored? That's the question. [speaker003:] Well, in the beginning, we'll write up a flat file. We know we have twenty object types [speaker002:] Oh. Yeah. [speaker003:] and we'll write it down in a flat file. [speaker002:] No. So, i is Well, let me say something, guys, cuz there's not [disfmarker] There's a pretty point about this we might as well get in right now. [speaker004:] Mm hmm. 
[speaker002:] Which is [disfmarker] The hierarchy that s comes with the ontology is just what you want for this. So that [disfmarker] [vocalsound] Uh, if you know about it [disfmarker] let's say, a particular town hall [disfmarker] [vocalsound] that, it's one that is a monument, [vocalsound] then, that would be stored there. If you don't, you look up the hierarchy, Eh [disfmarker] so, you [disfmarker] you [disfmarker] you may or [disfmarker] So, then you'd have this little vector of, um, you know, Approach Mode or EVA Mode. Let's [disfmarker] OK, so we have [vocalsound] the EVA vector for [disfmarker] for various kinds of landmarks. If you know it for a specific landmark you put it there. If you don't, you just go up the hierarchy to the first place you find one. [speaker004:] OK. So, is the idea to put it in the ontology? [speaker002:] Absolutely. [speaker004:] OK. [speaker002:] Uh, or, link to [disfmarker] or [disfmarker] but [disfmarker] but in any case [disfmarker] i View it logically as being in the ontology. It's part of what you know about [disfmarker] a [disfmarker] an object, [vocalsound] is its EVA vector. [speaker004:] OK. Mm hmm. [speaker002:] And, if yo As I say, if you know about a specific object, you put it there. This is part of what Dekai was doing. So, when we get to Wu, The e We'll see w what he says about that. [speaker004:] Right. [speaker002:] And, then if you [disfmarker] If it isn't there, it's higher, and if you don't know anything except that it's a b it's [disfmarker] it's a [disfmarker] building, then up at the highest thing, you have the pr what amounts to a prior. If you don't know anything else about a building, [vocalsound] uh, you just take whatever your crude approximation is up at that level, [speaker004:] Right. [speaker002:] which might be equal, or whatever it is. [speaker004:] Yeah. [speaker002:] So, that's a very pretty relationship between these local vectors and the ontology. 
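That lookup scheme can be sketched directly: store an EVA vector on a node when it is known, otherwise walk up the type hierarchy to the first ancestor that has one, bottoming out at a prior. All node names and numbers below are invented:

```python
# Toy type hierarchy: node -> (parent, optional EVA vector).
ONTOLOGY = {
    "thing":              (None,        (1 / 3, 1 / 3, 1 / 3)),  # crude prior at the top
    "building":           ("thing",     None),
    "town-hall":          ("building",  (0.2, 0.3, 0.5)),
    "heidelberg-rathaus": ("town-hall", None),  # no instance-specific vector
}

def eva_vector(node: str):
    # Return the first EVA vector found going up the hierarchy.
    while node is not None:
        parent, vec = ONTOLOGY[node]
        if vec is not None:
            return vec
        node = parent
    raise KeyError("no prior stored at the root")

print(eva_vector("heidelberg-rathaus"))  # inherits the town-hall vector
print(eva_vector("building"))            # falls back to the top-level prior
```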
And it seems to me the obvious thing to do, unless [vocalsound] we find a reason to do something different. [speaker004:] Yeah. [speaker002:] Does this make sense to you? [speaker004:] So [disfmarker] [speaker002:] Bhask -? [speaker004:] Yeah. So, we are [disfmarker] but we [disfmarker] we're not doing the ontology, so we have to get to whoever is doing the [disfmarker] u ultimately, [speaker002:] Indeed. [speaker004:] we have to get them to [disfmarker] [speaker002:] So, that's another thing we're gonna need to do, is [disfmarker] is, to, either [disfmarker] We're gonna need some way to either get a p tag in the ontology, or add fields, or [disfmarker] [comment] [comment] [vocalsound] some way to associate [disfmarker] Or, w It may be that all we can do is, um, some of our own hash tables that it [disfmarker] Th the [disfmarker] th you know, there's always a way to do that. It's a just a question of [disfmarker] i [speaker001:] Yeah, hash on object name to, you know, uh, the probabilities or whatever. [speaker002:] th Yeah. e Right. And, so, i uh [disfmarker] [speaker003:] But it's, uh [disfmarker] [vocalsound] Well, it strikes me as a What For If we get the mechanism, that will be sort of the wonderful part. And then, [vocalsound] how to make it work is [disfmarker] is the second part, in the sense that [disfmarker] I mean, m the guy who was doing the ontology [disfmarker] eh, eh, s ap apologized that i it will take him another through [disfmarker] two to three days because they're having really trouble getting the upper level straight, and right now. The reason is, [vocalsound] given the craw bet uh, the [disfmarker] the [disfmarker] the projects that all carry their own taxonomy and, on all history, [vocalsound] they're really trying to build one top level ontology ft that covers all the EML projects, and that's, uh, uh, sort of a tough cookie, a little bit tougher than they [vocalsound] figured. I could have told them s so. [speaker002:] Right. 
[speaker003:] Uh. [speaker002:] Yeah. [speaker003:] But, nevertheless, it's going to be there by n by, uh, next Monday and I will show you what's [disfmarker] what some examples [vocalsound] from that for towers, and stuff. And, um, what I don't think is ever going to be in the ontology, is sort of, you know, the likelihood of, eh, people entering r town halls, and looking at town halls, and approaching town halls, especially since we are b dealing with a case based, not an instance based ontology. So, there will be nothing on [disfmarker] on that town hall, or on the Berkeley town hall, or on the [vocalsound] Heidelberg town hall, it'll just be information on town halls. [speaker002:] Well, they [disfmarker] they [disfmarker] they [disfmarker] How ar What are they gonna do with instances? [speaker003:] But what [disfmarker] [speaker002:] I mean, you [disfmarker] y [speaker003:] Well, that's [disfmarker] Hhh. That's [disfmarker] that's al different question. I mean, th the [disfmarker] first, they had to make a design question, [vocalsound] "do we take ontologies that have instances? or just one that does not, that just has the types?" [speaker002:] OK. [speaker003:] And, so, since the d decision was on types, on a d simply type based, [vocalsound] we now have to hook it up to instances. I mean this is one [disfmarker] [speaker002:] But what i What is SmartKom gonna do about that? Cuz, they have instances all the time. [speaker003:] Yeah, but the ontology is really not a SmartKom thing, in [disfmarker] in and of itself. That's more something that [vocalsound] I kicked loose in [disfmarker] in EML. So it's a completely EML thing. [speaker002:] But [disfmarker] Uh [disfmarker] uh [disfmarker] SmartKom's gonna need an ontology. [speaker003:] Yes, u a w a lot of people are aware of that. [speaker002:] I understand, [vocalsound] but is anybody doing anything about it? [speaker003:] Um [disfmarker] [speaker002:] OK. It's a political problem. We won't worry about it. 
[speaker003:] No, but [disfmarker] th the r eh [disfmarker] I th I still think that there is enough information in there. For example, whether [disfmarker] OK. So, th it will know about the twenty object types there are in the world. Let's assume there are only twenty object types in this world. And it will know if any of those have institutional meanings. So, in a sense, "I" used as Institutions for some s in some sense or the other. Which makes them [disfmarker] enterable. Right? In a sense. You know. [speaker002:] Yeah. Anyway. So we may have to [disfmarker] [speaker003:] Yep. [speaker002:] This is with the whole thing, we may have to build another data stru [speaker003:] Yep. [speaker002:] Conceptually, we know what should be done. When we see what people have done, it may turn out that the easiest thing to do [vocalsound] is to build a [disfmarker] a separate thing that [disfmarker] that just pools i i Like, i i it [disfmarker] it may be, that, the [disfmarker] the instance [disfmarker] w That we have to build our own instance, uh, things, that, with their types, [speaker004:] Yeah, it's [disfmarker] Right, we can just assume [disfmarker] [speaker002:] and then it goes off to the ontology once you have its type. So we build a little data structure And so what we would do in that case, is, in our instance gadget have [vocalsound] our E V And if we d there isn't one we'd get the type and then have the E V As for the type. So we'd have our own little, [vocalsound] uh, EVA tree. [speaker004:] Yeah. [speaker002:] And then, for other, uh, vectors that we need. [speaker004:] Right. [speaker002:] So, we'd have our own little [vocalsound] things so that whenever we needed one, we'd just use the ontology to get the type, [speaker004:] Mm hmm. Mm hmm. [speaker002:] and then would hash or whatever we do to say, "ah! If it's that type of thing, and we want its EVA vector, pppt pppt! [comment] it's that." So, I I think we can handle that. 
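The "instance gadget" described above (try an instance-specific EVA vector first, and only if there is none, ask the ontology for the instance's type and use the type-level vector) might look like this. The ontology lookup is stubbed out with a plain dict, and every name and number is a hypothetical placeholder:

```python
# Instance-level vectors we maintain ourselves (the ontology is type-based
# and knows nothing about individual town halls).
instance_eva = {
    "berkeley_town_hall": {"enter": 0.7, "view": 0.2, "approach": 0.1},
}

# Stand-in for the ontology's type lookup: instance name -> type name.
type_of = {
    "berkeley_town_hall":   "town_hall",
    "heidelberg_town_hall": "town_hall",
}

# Type-level EVA vectors, one per object type.
type_eva = {
    "town_hall": {"enter": 0.5, "view": 0.3, "approach": 0.2},
}

def eva_for(instance):
    """Instance-specific EVA vector if we have one, else its type's vector."""
    if instance in instance_eva:
        return instance_eva[instance]
    return type_eva.get(type_of.get(instance))
```

So `berkeley_town_hall` gets its own vector, while `heidelberg_town_hall` inherits the generic `town_hall` one, which matches the fallback the speakers describe.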
And then [disfmarker] But, the combination functions, and whether we can put those in Java Bayes, and all that sort of stuff, is, uh [disfmarker] is the bigger deal. [speaker004:] Yeah. [speaker002:] I think that's where we have to get technically clever. [speaker004:] Um [disfmarker] [speaker001:] We could just steal the classes in JavaBayes and then interface to them with our own code. [speaker002:] Well, I me ye [nonvocalsound] eh, yeah, the [disfmarker] [speaker004:] That requires understanding the classes in JavaBayes, I guess. [@ @]. [speaker002:] Yeah, I mean, it's, uh, e e e e e cute. I mean, you've been around enough to [disfmarker] I mean [disfmarker] Just? [speaker001:] Well, it depends on [disfmarker] [speaker002:] I mean, there's this huge package which [disfmarker] which may or may not be consistent and [disfmarker] you know. But, yeah, we could look at it. [speaker001:] Well, I was j OK. Yeah. [speaker002:] Yeah. It's b It [disfmarker] It's an inter sort of a kind of a [disfmarker] it [disfmarker] The thing is, it's kind of an interpreter and i i it expects its data structures to be in a given form, and if you say, "hey, we're gonna [vocalsound] make a different kind of data structure to stick in there [disfmarker]" [speaker001:] Well, no, but that just means there's a protocol, right? That you could [disfmarker] [speaker002:] It may or may not. I don't know. That's the question is "to what extent does it allow us to put in these G functions?" And I don't know. [speaker001:] Well, no, but [disfmarker] I mean [disfmarker] What I uh the [disfmarker] So you could have four different Bayes nets that you're running, and then run your own [disfmarker] write your own function that would take the output of those four, and make your own "G function", is what I was saying. [speaker002:] Yeah, that's fine if it's [disfmarker] if it comes only at the end. But suppose you want it embedded? 
[speaker001:] Well, then you'd have to break all of your Bayes nets into smaller Bayes nets, with all the [disfmarker] [speaker002:] Oh, that [disfmarker] Yeah, that's a truly horrible way to do d it. One would hope [disfmarker] [speaker001:] Yeah, but I'm just [disfmarker] [speaker003:] Mm hmm. [speaker002:] Yeah, yeah, yeah, yeah, yeah, you bet. But, at that point you may say, "hey, Java Bayes isn't the only package in town. Let's see if there's another package that's, eh, more civilized about this." Now, Srini is worth talking to on this, [speaker004:] Mmm. [speaker002:] cuz he said that he actually did hack some combining functions into [speaker004:] Ah! [speaker002:] But he doesn't remember [disfmarker] at least when I talked to him, he didn't remember [vocalsound] whether it was an e an easy thing, a natural thing, or whether he had to do some violence to it to make it work. Uh. But he did do it. [speaker004:] Yeah. I don't see why the, uh, combining f functions have to be directly hacked into I mean, they're used to create tables so we can just make our own little functions that create tables in XML. [speaker002:] Well, I say that's one way to do it, is [disfmarker] is to just convert it int into a [disfmarker] into a C P T that you zip [disfmarker] It's blown up, and is a [disfmarker] it's, uh [disfmarker] it's huge, but [disfmarker] [vocalsound] it doesn't require any data fitting or complication. [speaker004:] Mm hmm. Yeah. I don't think [disfmarker] I mean, the fact that it blown u blows up is a huge issue in the sense that [disfmarker] I mean, OK. So say it blows up, right? So there's, like, the you know, ten, f ten, fifteen, uh, things. It's gonna be like, two to the [disfmarker] that, which isn't so bad. [speaker002:] I I understand. I'm just saying tha that w That was wi that was my note. The little note I sent said that. [speaker004:] Mm hmm. [speaker002:] It said, "Here's the way you'd take the logical f G function and turn it into a CPT." 
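The note referred to here, turning a deterministic combining ("G") function into an explicit CPT by enumerating every parent configuration, can be sketched as below. This is not JavaBayes API code, just the table-expansion idea; the max combiner is one example of an evidence-combining function:

```python
# Expand a deterministic combining function over binary parents into a
# full CPT: one row per parent assignment, giving P(child = True).
# The table blows up as 2^n, but needs no data fitting.
from itertools import product

def g_to_cpt(g, n_parents):
    """Map every 0/1 parent assignment to P(child=True) = g(assignment)."""
    return {bits: float(g(bits)) for bits in product((0, 1), repeat=n_parents)}

# "Evidence combining" via max over binary parents is just logical OR:
cpt = g_to_cpt(max, 3)
```

For ten to fifteen parents this gives on the order of a thousand to thirty-odd thousand rows, which is large but, as noted in the discussion, not obviously prohibitive.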
[speaker004:] Mm hmm. [speaker002:] I mean that [disfmarker] the Max the Evidence Combining function. So we could do that. And maybe that's what we'll do. But, um don't know. So, I will, e [vocalsound] e before next week, uh, [@ @] [comment] p push [disfmarker] push some more on [disfmarker] on this stuff that Dekai Wu did, and try to understand it. Uh, you'll make a couple of more copies of the Heckerman paper to give to people? [speaker004:] p Sure. [speaker006:] Yeah, I [disfmarker] I would like a copy, y y yeah. [speaker004:] OK. [speaker002:] And, um [speaker006:] OK. [speaker002:] I think [disfmarker] [speaker003:] OK. And I I'll [disfmarker] I'll think s through this, uh, [vocalsound] eh [disfmarker] getting EVA vectors dynamically out of ontologies one more time because I s I [disfmarker] I [disfmarker] I'm not quite sure whether we all think of the same thing or not, here. [speaker002:] Well, you and I should talk about it. [speaker003:] Yeah, uh huh. OK. [speaker002:] Alright, great! And, Robert, thank you for [vocalsound] coming in under [disfmarker] He [disfmarker] he's been sick, Robert. [speaker003:] And. [speaker005:] Mm hmm. [speaker001:] I was thinking maybe we should just cough into the microphone and see if they can't [disfmarker] th see if they can handle it. [speaker004:] Yep. [speaker005:] Sure. [speaker003:] Um [disfmarker] is this, uh [disfmarker] [speaker005:] As usual. [speaker002:] Yes. Whew! I almost forgot [pause] about the meeting. I woke up twenty minutes ago, thinking, what did I forget? [speaker004:] It's great how the br brain sort of does that. [speaker005:] Something's not right here. [speaker002:] Internal alarms. [speaker004:] OK. So the news for me is A, my forthcoming travel plans in two weeks from today? [speaker002:] Yes. [speaker004:] Yeah? More or less? I'll be off to Sicily and Germany for a couple, three days. [speaker002:] Now what are y what are you doing there? I forgot?
[speaker004:] OK, I'm flying to Sicily basically to drop off Simon there with his grandparents. And then I'm flying to Germany t to go to a MOKU Treffen which is the meeting of all the module responsible people in SmartKom, [speaker002:] Mmm. [speaker004:] and, represent ICSI and myself I guess there. And um. That's the mmm actual reason. And then I'm also going up to EML for a day, and then I'm going to [vocalsound] meet the very big boss, Wolfgang Wahlster, in Saarbruecken and the system integration people in Kaiserslautern and then I'm flying back via Sicily pick up my son come back here on the fourth of July. And uh. [speaker005:] What a great time to be coming back to the [speaker002:] God bless America. [speaker005:] You'll see maybe [disfmarker] see the fireworks from your plane coming in. [speaker004:] And I'm sure all the [disfmarker] the people at the airport will be happy to work on that day. [speaker005:] Yeah. You'll get even better service than usual. [speaker004:] Mm hmm. [speaker002:] Wait, aren't you flying on Lufthansa though? [speaker004:] Alitalia. [speaker002:] Oh. Well then the [disfmarker] you know, it's not a big deal. Once you get to the United States it'll be a problem, but [speaker004:] Yeah. And um, that's that bit of news, and the other bit of news is we had [disfmarker] you know, uh, I was visited by my German project manager who A, did like what we did [disfmarker] what we're doing here, and B, is planning to come here either three weeks in July or three weeks in August, to actually work. [speaker002:] On [disfmarker]? [speaker004:] With us. [speaker002:] Oh. [speaker004:] And we sat around and we talked and he came up [disfmarker] we came up [disfmarker] with a pretty strange idea. And that's what I'm gonna lay on you now. And um, maybe it might be ultimately the most interesting thing for Eva because she has been known to complain about the fact that the stuff we do here is not weird enough. [speaker003:] OK.
[speaker004:] So this is so weird it should even make you happy. [speaker003:] Uh. [comment] OK. [speaker005:] Oh great. [speaker004:] Imagine if you will, [vocalsound] that we have a system that does all that understanding that we want it to do based on utterances. [speaker002:] Mm hmm. [speaker004:] It should be possible to make that system produce questions. So if you have the knowledge of how to interpret "where is X?" under given conditions, situational, user, discourse and ontological [vocalsound] conditions, you should also be able to make that same system ask "where is X?" [speaker005:] Mm hmm. [speaker004:] in a sper certain way, based on certain intentions. So in instead of just being able to observe phenomenon, um, and, guess the intention we might be able just to sort of give it an intention, and make it produce an utterance. [speaker005:] Hmm. [speaker002:] Well, like in AI they generally do the take in, and then they also do the generation phase, like Nancy's thing. Or uh, you remember, in the [disfmarker] the hand thing in one eighty two, like not only was it able to recognize but it was also to generate based upon situations. You mean that sort of thing? [speaker004:] Absolutely. [speaker002:] OK. [speaker004:] And once you've done that what we can do is have the system ask itself. And answer, understand the answer, ask something else, and enter a dialogue with itself. So the [disfmarker] the ba basic [disfmarker] the same idea as having two chess computers play against each other. [speaker005:] Except this smacks a little bit more of a schizophrenic computer than AI. [speaker004:] Yeah you c if you want, you can have two parallel [vocalsound] machines um, asking each other. What would that give us? Would A be something completely weird and strange, and B, i if you look at all the factors, we will never observe people let's say, in wheelchairs under [disfmarker] you know, in [disfmarker] under all conditions, [speaker005:] That's good. 
[speaker004:] you know, when they say "X", and there is a ride at the goal, and the parking is good, we can never collect enough data. [speaker005:] Mm hmm. [speaker004:] It's [disfmarker] it's [disfmarker] it's not possible. [speaker005:] Right, right. [speaker004:] But maybe one could do some learning. If you get the system to speak to itself, you may find n break downs and errors and you may be able to learn. And make it more robust, maybe learn new things. And um, so there's no [disfmarker] no end of potential things one could get out of it, if that works. And he would like to actually work on that with us. So [speaker002:] Well then, he probably should be coming back a year [pause] from now. [speaker004:] Yeah, I w See the [disfmarker] the generation bit, making the system generate [disfmarker] generate something, [comment] is [disfmarker] shouldn't be too hard. [speaker002:] Well, once the system understands things. [speaker005:] Yeah. No problem. [speaker002:] I just don't think [disfmarker] I think we're probably a year away from getting the system to understand things. [speaker004:] Yeah. Well, if we can get it to understand one thing, like our "where is" run through we can also, maybe, e make it say, or ask "where is X?" Or not. [speaker005:] Mmm, I don't know. e I'm sort of [disfmarker] have the impression that getting it to say the right thing in the right circumstances is much more difficult than getting it to understand something given the circumstances and so on, you know, I mean just cuz it's sort of harder to learn to speak correctly in a foreign language, rather than learning to understand it. Right? I mean just the fact that we'll get [disfmarker] The point is that getting it to understand one construction doesn't mean that it will n always know exactly when it's correct to use that construction. Right? 
[speaker004:] It's [disfmarker] it's uh [disfmarker] Well, I've [disfmarker] I've done generation and language production research for fo four [disfmarker] four and a half years. And so it's [disfmarker] it's [disfmarker] you're right, it's not the same as the understanding. It's in some ways easier and some ways harder. nuh? [speaker005:] Yeah. [speaker004:] But, um, I think it'd be fun to look at it, or into that question. [speaker005:] Nnn, yeah. [speaker004:] It's a pretty strange idea. And so that's [disfmarker] that's [disfmarker] But [disfmarker] [speaker002:] The basic idea I guess would be to give [disfmarker] allow the system to have intentions, basically? Cuz that's basically what needs to be added to the system for it. [speaker004:] Well, look at th eee, I think even [disfmarker] think even [disfmarker] What it [disfmarker] would be the [disfmarker] the prior intention. So let's uh [disfmarker] uh, let's say we have this [disfmarker] [speaker002:] Well we'd have to seed that, I mean. [speaker004:] No. Let's [disfmarker] we have to [disfmarker] we have some [disfmarker] some top down processing, given certain setting. OK, now we change nothing, and just say ask something. Right? What would it ask? [speaker002:] It wouldn't know what to ask. I mean. [speaker004:] It shur [speaker002:] Unless it was in a situation. We'd have to set up a situation where, it didn't know where something was and it wanted to go there. [speaker004:] Yeah! [speaker003:] Mm hmm. [speaker004:] Yeah. [speaker002:] Which means that we'd need to set up an intention inside of the system. Right? [speaker004:] Eh, n [speaker002:] Which is basically, "I don't know where something is and I need to go there". [speaker005:] Yeah. [speaker004:] Ooh, do we really need to do that? Because, [speaker002:] Well, no I guess not. Excel [speaker004:] s It's [disfmarker] i I know it's [disfmarker] it's strange, but look at it [disfmarker] look at our Bayes net. 
If we don't have [disfmarker] Let's assume we don't have any input from the language. Right? So there's also nothing we could query the ontology, but we have a certain user setting. If you just ask, what is the likelihood of that person wanting to enter some [disfmarker] something, it'll give you an answer. Right? That's just how they are. [speaker002:] Sure. [speaker004:] And so, [@ @] whatever that is, it's the generic default intention. That it would find out. Which is, wanting to know where something is, maybe nnn [disfmarker] and wanting [disfmarker] I don't know what it's gonna be, but there's gonna be something that [speaker005:] Well you're not gonna [disfmarker] are you gonna get a variety of intentions out of that then? I mean, you're just talking about like given this user, what's the th what is it [disfmarker] what is that user most likely to want to do? And, have it talk about [disfmarker] [speaker004:] Well you can observe some user and context stuff and ask, what's the posterior probabilities of all of our decision nodes. [speaker005:] OK. [speaker004:] You could even say, "let's take all the priors, let's observe nothing", and query all the posterior probabilities. It it's gonna tell us something. Right? And [disfmarker] [speaker002:] Well, it will d r assign values to all the nodes. Yes. [speaker004:] Yes. And come up with posterior probabilities for all the values of the decision nodes. Which, if we have an algorithm that filters out whatever the [disfmarker] the best or the most consistent answer out of that, will give us the intention ex nihilo. And that is exactly what would happen if we ask it to produce an utterance, it would be b based on that extension, ex nihilo, which we don't know what it is, but it's there. So we wouldn't even have to [disfmarker] t to kick start it by giving it a certain intention or observing anything on the decision node. 
And whatever that [disfmarker] maybe that would lead to "what is the castle?", [speaker002:] I'm just [disfmarker] [speaker004:] or "what is that whatever". [speaker002:] I guess what I'm afraid of is if we don't, you know, set up a [pause] situation, [comment] we'll just get a bunch of garbage out, like you know, everything's exactly thirty percent. [speaker004:] No [disfmarker] [speaker003:] Mmm. [speaker004:] Yeah. So what we actually then need to do is [disfmarker] is write a little script that changes all the settings, you know, go goes through all the permutations, which is [disfmarker] we did a [disfmarker] didn't we calculate that once? [speaker002:] Well that was [disfmarker] that was absurdly low, in the last meeting, [speaker004:] It's a [disfmarker] [speaker003:] Uh, [speaker002:] cuz I went and looked at it cuz I was thinking, that could not be right, and it would [disfmarker] it was on the order of twenty output nodes and something like twenty [disfmarker] [speaker003:] And like thirty input nodes [speaker002:] thirty input nodes. [speaker003:] or some [disfmarker] [speaker002:] So to test every output node, uh, would at least [disfmarker] Let's see, so it would be two to the thirty for every output node? Which is very th very large. [speaker004:] Oh! That's n [speaker005:] Oh. [speaker004:] that's [disfmarker] that's nothing for those neural guys. I mean, they train for millions and millions of epochs. So. [speaker002:] Well, I'm talking about Oh, I was gonna take a drink of my water. I'm talking about billions and billions and billions and a number [disfmarker] two to the thirty is like a Bhaskara said, we had calculated out and Bhaskara believes that it's larger than the number of particles in the universe. And if i [speaker005:] I don't know if that's right or not. Th that's big. That's just [disfmarker] That's uh [disfmarker] It's a billion, right? [speaker002:] Two to the thirty? 
Well, two to the thirty is a billion, but if we have to do it two to the twenty times, then that's a very very large number. [speaker005:] Right. Argh. Oh, OK. Yeah. Yeah, that's big. [speaker002:] Cuz you have to query the node, for every a uh, or query the net two to the twenty times. [speaker005:] Sure. Alright. [speaker002:] Or not two to th excuse me, twenty times. [speaker005:] OK. So, is it t comes to twenty billion or something? [speaker002:] Yes. [speaker005:] That's pretty big, though. [speaker002:] As far as [disfmarker] That's [@ @] [disfmarker] That's big. Actually [disfmarker] Oh! We calculated a different number before. How did we do that? [speaker003:] Hmm. [speaker005:] I remember there being some other one floating around. But anyway, uh. [speaker003:] I don't really know. [speaker005:] Yeah, it's g Anyway, the point is that given all of these different factors, it's uh e it's [disfmarker] it's still going to be impossible to run through all of the possible situations or whatever. [speaker003:] Ooo, it's just big. [speaker005:] But I mean, this'll get us a bit closer at least, right? I mean. [speaker002:] If it takes us a second to do, for each one, and let's say it's twenty billion, [comment] then that's twenty billion seconds, which is [disfmarker] [speaker005:] Yeah. [speaker002:] Eva, do the math. [speaker005:] Long! [speaker003:] Can't. [speaker002:] Hours and hours and hours and hours. But we can do randomized testing. [speaker005:] Tah dah! [speaker002:] Which probabilistically will be good enough. [speaker004:] Mm hmm. Yeah. So, it be it it's an idea that one could n for [disfmarker] for example run [disfmarker] run past, um, what's that guy's name? You know? He he's usually here. Tsk. [speaker005:] Here in the group? [speaker004:] J J Jer Jerj [speaker005:] Jerry Feldman. [speaker004:] Oh, yeah. That's the guy. We [disfmarker] we [disfmarker] we [disfmarker] we g [speaker002:] Wait, who? 
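The randomized-testing alternative mentioned above, sample random input settings instead of enumerating all of them exhaustively, could be sketched like this. `query_net` is a stand-in for whatever inference call the real system ends up using:

```python
# Monte Carlo testing of a net with many binary inputs: rather than
# enumerating all 2^30-odd settings, draw random assignments and query
# the net on each. query_net is a placeholder for real inference.
import random

def query_net(assignment):
    """Placeholder for the real Bayes-net query."""
    return sum(assignment) % 2

def randomized_test(n_inputs, n_samples, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        assignment = tuple(rng.randint(0, 1) for _ in range(n_inputs))
        results.append((assignment, query_net(assignment)))
    return results

samples = randomized_test(30, 1000)   # 1000 samples instead of 2**30 cases
```

A thousand samples take about a thousand queries, versus the billions the exhaustive sweep would need, which is the sense in which randomized testing is "probabilistically good enough".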
[speaker005:] Yeah, i that would the g the bald guy. [speaker002:] Oh! My advisor! [speaker004:] And um. so this is just an idea that's floating around and we'll see what happens. And um, hmm, what other news do I have? Well we fixed some more things from the SmartKom system, but that's not really of general interest, Um, Oh! Questions, yeah. I'll ask Eva about the E Bayes and she's working on that. How is the generation XML thing? [speaker002:] I'm gonna work on that today and tomorrow. [speaker004:] OK. No need to do it today or tomorrow even. Do it next week or [disfmarker] [speaker002:] I'm gonna finish it today, uh hopefully. [speaker004:] OK. [speaker002:] I wanna do one of those things where I stay here. Cuz uh, if I go home, I can't finish it. I've tried about five times so far, where I work for a while and then I'm like, I'm hungry. So I go home, and then I think [disfmarker] [speaker005:] I'm not going back. [speaker002:] Yeah. Either that or I think to myself, I can work at home. And then I try to work at home, but I fail miserably. [speaker005:] Yeah. [speaker002:] Like I ended up at Blakes last night. [speaker005:] Non conducive. [speaker002:] No. I almost got into a brawl. But I did not finish the uh, But I've been looking into it. I th [@ @] It's not like it's a blank slate. I found everything that I need and stu and uh, [speaker004:] But st [speaker002:] At the b uh furthermore, I told Jerry that I was gonna finish it before he got back. So. [speaker004:] OK. [speaker005:] That's approaching. He's coming back when? Uh next [disfmarker] [speaker002:] Well, I think [disfmarker] we think we'll see him definitely on Tuesday for the next [disfmarker] Or, no, wait. The meetings are on Thursday. [speaker004:] Maybe. Who knows. [speaker002:] Maybe. [speaker005:] OK. [speaker002:] Well, we'll see him next week. [speaker005:] Alright. [speaker004:] That's good. Yeah. The paper. [speaker005:] Hmm. [speaker004:] Hmm. [speaker002:] I was thinking about that. 
I think I will try to work on the SmartKom stuff and I'll [disfmarker] if I can finish it today, I'll help you with that tomorrow, if you work on it? I don't have a problem with us working on it though? So. [speaker004:] OK. So you would say it's funky [speaker002:] And it [disfmarker] [speaker004:] cool. [speaker002:] I mean we just [disfmarker] I mean it wouldn't hurt to write up a paper, cuz then, I mean, yeah [disfmarker] I was talking with Nancy and Nancy said, you don't know whether you have a paper to [pause] write up until you write it up. So. [speaker005:] Yeah. [speaker004:] Well [speaker002:] And since Jerry's coming back, we can run it by him too. So. [speaker004:] Yep. Um, what's your input? [speaker005:] Well, um, I don't have much experience with uh, conference papers for compu in the computer science realm, and so when I looked at what you had, which was apparently a complete submission, I just sort of said what [disfmarker] just [disfmarker] I [disfmarker] I didn't really know what to do with it, like, this is the sort of the basic outline of the system or whatever, or [disfmarker] or "here's an idea", right? That's what that paper was, "here's [disfmarker] here's one possible thing you could do", [speaker004:] Mm hmm. [speaker005:] short, eight pages, and I just don't know what you have in mind for expanding. Like I'd [disfmarker] I [disfmarker] what I didn't do is go to the web site of the conference and look at what they're looking for or whatever. [speaker004:] Mm hmm. Well, it seems to me that um [disfmarker] [speaker002:] Wait, is this a computer science conference or is it a [disfmarker] [speaker004:] Um, well it's more [disfmarker] It's both, right? It's [disfmarker] it's sort of t cognitive, neural, psycho, linguistic, but all for the sake of doing computer science. So it's sort of cognitive, psycho, neural, plausibly motivated, architectures of natural language processing. 
So it seems pretty interdisciplinary, and I mean, w w the keynote speaker is Tomasello [speaker005:] Right. Oh, yeah. [speaker004:] and blah blah blah, so, W the [disfmarker] the question is what could we actually do and [disfmarker] and [disfmarker] and keep a straight face while doing it. And i [speaker002:] Well, I really can't keep a straight face doing anything. [speaker004:] My idea is, [speaker005:] Setting that aside. [speaker004:] well, you can say we have done a little bit and that's this, and uh sort of the rest is position paper, "we wanna also do that". Which is not too good. Might be more interesting to do something like let's assume um, we're right, we have as Jerry calls it, a delusion of adequacy, and take a "where is X" sentence, [speaker005:] Mm hmm. [speaker004:] and say, "we will just talk about this, and how we cognitively, neurally, psycho linguistically, construction grammar ally, motivated, envision uh, understanding that". [speaker005:] Mmm. [speaker004:] So we can actually show how we parse it. That should be able to [disfmarker] we should be able to come up with, you know, a sort of a [disfmarker] a parse. [speaker005:] Right. [speaker004:] It's on, just [disfmarker] just put it on. [speaker001:] I'm [speaker002:] Did Ben harass you? [speaker001:] OK. Yes. [speaker002:] Good. [speaker001:] Was he supposed to harass me? [speaker002:] Yes. [speaker001:] Well, he just told me that you came looking for me. [speaker004:] You don [speaker002:] Oh. [speaker004:] You will suffer in hell, [speaker001:] figure this out. [speaker004:] you know that. [speaker005:] Backwards. There's a s diagram somewhere which tells you how to put that [disfmarker] [speaker001:] I know, I didn't understand that either! [speaker004:] This is it. [speaker002:] No wait. You have to put it on exactly like that, [speaker004:] Yeah. [speaker002:] so put that [disfmarker] those things over your ears like that. [speaker001:] OK. 
[speaker002:] See the p how the plastic things ar arch out like that? There we go. [speaker001:] OK. It hurts. [speaker002:] It hurts. It hurts real bad. [speaker001:] It does! [speaker005:] But that's what you get for coming late to the meeting. [speaker001:] I'm sorry I didn't mean to [disfmarker] I'm sorry. I'm sorry, oh these are all the same. OK! th this is not very [pause] on target. [speaker002:] Is your mike on? [speaker003:] An [speaker004:] Yeah, it is. [speaker001:] Shoot. [speaker002:] OK. [speaker001:] Alright, you guys can continue talking about whatever you were talking about before. [speaker005:] Um, [speaker004:] We're talking about this um, alleged paper that we may, just, sort of w [speaker001:] Oh! Which Johno mentioned to me. [speaker004:] Yeah. [speaker001:] Uh huh. [speaker004:] And I just sort of brought forth the idea that we take a sentence, "Where is the Powder Tower", [speaker001:] Mm hmm. [speaker004:] and we [disfmarker] we p pretend to parse it, we pretend to understand it, and we write about it. [speaker005:] Hmm. About how [vocalsound] all of these things [disfmarker] [speaker001:] What's the part that's not pretend? The writing? [speaker004:] OK, then we pretend to write about. [speaker005:] The submitting to a major international conference. [comment] [comment] Yeah. [speaker001:] Tha [vocalsound] Which conference is it for? [speaker004:] It's the whatever, architectures, eh you know, where [disfmarker] There is this conference, it's the seventh already international conference, on neu neurally, cognitively, motivated, architectures of natural language processing. [speaker001:] Oh. Wow. Interesting. [speaker004:] And the keynote speakers are Tomasello, MacWhinney? We MacWhinney, I think. [speaker001:] Whinney. [comment] MacWhinney. Uh huh. So, interesting, both, like, child language people. [speaker004:] Yeah. Yep. [speaker001:] OK. [speaker004:] So maybe you wanna write something too. [speaker001:] Yeah, maybe I wanna go. 
[speaker005:] Mmm. [vocalsound] Mmm. [speaker001:] Um, why are they speaking at it if it [disfmarker] is [disfmarker] is it normally like [disfmarker] like, dialogue systems, or, you know, other NLP ish things? [speaker004:] No no no no no no no no. It's [disfmarker] it's like a [disfmarker] [speaker001:] Oh, it's cognitive. OK. [speaker004:] Yeah. Yeah. Even neuro. [speaker001:] And uh, both learning and like, comprehension, production, that kinda stuff. [speaker004:] Psycho. You could look at the web site. [speaker001:] OK. [speaker004:] I'll [disfmarker] [speaker001:] OK. [speaker004:] And the ad and [disfmarker] and the deadline is the fifteenth of June. [speaker001:] I don't know about it. Yeah that's pretty soon. [speaker005:] Mmm. [speaker004:] Hey. Plenty of time. [speaker005:] Why, we've got over a week! [speaker004:] It would be nice to go write two papers actually. Yeah. And one [disfmarker] one from your perspective, and one from our peve per per [speaker001:] Mm hmm. I mean, th that's the kinda thing that maybe like, um, the general uh con sort of like NTL ish like, whatever, the previous simulation based pers [comment] maybe you're talking about the same kind of thing. A general paper about the approach here would probably be appropriate. [speaker004:] Yeah. [speaker001:] And good to do at some point anyway. [speaker004:] Yeah. [speaker001:] Um. [speaker004:] Well, I [disfmarker] I also think that if we sort of write about what we have done in the past six months, we [disfmarker] we [disfmarker] we could sort of craft a nice little paper that [pause] if it gets rejected, which could happen, doesn't hurt [speaker001:] Mm hmm. [speaker004:] because it's something we eh [disfmarker] [speaker001:] Having it is still a good thing. [speaker004:] having it is a good [disfmarker] good thing. [speaker001:] Yeah. [speaker004:] It's a nice exercise, it's [disfmarker] I usually enjoy writing papers. It's not [disfmarker] I don't re regard it as a painful thing. 
[speaker001:] Mm hmm. It's fun. [speaker004:] And um, we should all do more for our publication lists. And. It just never hurts. And Keith and or Johno will go, probably. [speaker002:] Will I? [speaker001:] When is it and where? [speaker005:] Hmm! [speaker004:] In case of [disfmarker] It's on the twenty second of September, in Saarbruecken Germany. [speaker001:] Ah, it's in Germany. Ah, OK. I s I see. Tomasello's already in Germany anyway, so makes sense. OK. [speaker005:] Just [disfmarker] [speaker001:] Um. OK. So, is the [disfmarker] What [disfmarker] Are you just talking about you know, the details of how to do it, or whether to do it, or what it would be? [speaker005:] What would one possibly put in such a paper? [speaker004:] What to write about. [speaker001:] Or what to write about? [speaker004:] What is our [disfmarker] what's our take home message. What [disfmarker] what do we actually [disfmarker] Because I mean, it [disfmarker] I don't like papers where you just talk about what you plan to do. I mean, it's obvious that we can't do any kind of evaluation, and have no [disfmarker] you know, we can't write an ACL type paper where we say, OK, we've done this [speaker001:] Mm hmm. [speaker004:] and now we're whatever percentage better than everybody else ". You know. [speaker001:] Mm hmm. [speaker004:] It's far too early for that. But uh, we [disfmarker] we can tell them what we think. I mean that's [disfmarker] never hurts to try. And um, maybe even [disfmarker] That's maybe the time to introduce the [disfmarker] the new formalism that you guys have cooked up. [speaker001:] Mm hmm. [speaker005:] Are in the process of [disfmarker] [speaker002:] But that [disfmarker] [speaker001:] How many pages? [speaker002:] don't they need to finish the formalism? [speaker004:] It's just like four pages. [speaker001:] Four pages? [speaker004:] I mean it's [disfmarker] it's not even a h [speaker005:] Yeah. [speaker001:] OK, so it's a little thing. [speaker004:] Mm hmm. 
[speaker001:] Oh. [speaker002:] Well, you said it was four thousand lines? [speaker005:] Oh. [speaker002:] Is that what you s [speaker001:] OK. Four pages is, like, really not very much space. [speaker004:] I don't know w Did you look at it? Yeah, it depends on the format. [speaker005:] Oh my gosh. Oh, I thought you were [disfmarker] I thought we were talking about something which was much more like ten or something. [speaker004:] No that's [disfmarker] I mean that's actually a problem. It's difficu it's more difficult to write on four pages than on eight. [speaker001:] It's [disfmarker] Yeah. [speaker005:] Yeah. [speaker001:] And it's also difficult to [disfmarker] even if you had a lot of substance, it's hard to demonstrate that in four pages, basically. [speaker005:] Yeah. [speaker001:] Um. [speaker005:] That would be hard. [speaker004:] Well I uh maybe it's just four thousand lines. [speaker001:] I mean it's still [disfmarker] it's still [disfmarker] [speaker004:] I do I don't [disfmarker] They don't want any [disfmarker] They don't have a TeX f style [@ @] guide. [speaker001:] Uh huh, uh huh. [speaker004:] They just want ASCII. Pure ASCII lines, [speaker001:] OK. [speaker004:] whatever. Why, for whatever reason, I don't know. [speaker001:] Not including figures and such? [speaker004:] I don't know. Very unspecific unfortunately. [speaker001:] OK. Well, [speaker004:] We'll just uh [disfmarker] [speaker002:] I would say that's closer to six pages actually. Four thousand lines of ASCII? [speaker004:] OK then. [speaker005:] Four thousand lines. [speaker004:] It's [disfmarker] [speaker005:] I mean. Isn't a isn't it about fifty s fifty five, sixty lines to a page? [speaker004:] I d don't quote me on this. This is numbers I [disfmarker] I have from looking o [speaker002:] How many characters are on a line? [speaker004:] OK. [speaker001:] ASCII? 
[speaker004:] Let's [disfmarker] let's [disfmarker] wh wh what should we [disfmarker] should [disfmarker] should we uh, um, discuss this over tea and all of us look at the web? Oh, I can't. I'm wizarding today. [speaker001:] OK, look at the web page? [speaker004:] Um. [speaker001:] Wha w [speaker004:] Look at the web page and let's talk about it maybe tomorrow afternoon? [speaker001:] More cues for us to find it are like, neural cons [speaker004:] Johno will send you a link. [speaker001:] Oh, you have a link. OK. OK. [speaker002:] I got an email. [speaker001:] OK. [speaker002:] By the way, Keith is comfortable with us calling him "cool Keith". [speaker001:] Oh. Cool. Keith. [speaker005:] He [disfmarker] he decided [vocalsound] I'm chilling in the five one O. [speaker001:] Cool, "cool Keith". [speaker005:] Yeah. [speaker001:] Excellent. [speaker004:] OK. [speaker001:] That's a very cool T shirt. [speaker005:] Thank you. [speaker004:] And I'm also flying [disfmarker] [speaker005:] I got this from the two one two. [speaker001:] New York? Excellent. [speaker005:] Yeah. [speaker001:] Sorry. Yes? [speaker004:] I'm flying to Sicily next [disfmarker] in a w two weeks from now, [speaker001:] Oh, lucky you. [speaker004:] w and a week of business in Germany. I should mention that for you. And otherwise you haven't missed much, except for a really weird idea, but you'll hear about that soon enough. [speaker001:] The idea that you and I already know about? That you already told me? Not that [disfmarker] [speaker004:] No, no, no. [speaker001:] OK. [speaker004:] Yeah, that is something for the rest of the gang to [disfmarker] to g [speaker005:] The thing with the goats and the helicopters? [speaker004:] Change the watchband. It's time to walk the sheep. [speaker003:] like [speaker001:] OK. [speaker004:] Um. Did you catch that allusion? It's time to walk the sheep? [speaker005:] No. 
[speaker004:] It's a a uh presumably one of the Watergate codes they uh [disfmarker] [speaker005:] Oh. [speaker004:] Anyways, th um, um, don't make any plans for spring break next year. That's [disfmarker] [speaker005:] Oh, shoot. [speaker004:] That's the other thing. We're gonna do an int EDU internal workshop in Sicily. [speaker001:] That's what [disfmarker] That's what he says. [speaker004:] I've already got the funding. [speaker001:] I kn That's great! [speaker004:] So, I mean. [speaker001:] Does that mean [disfmarker] Does that mean you'll get [disfmarker] you'll fly us there? [speaker005:] We'll see. [speaker004:] No, that's [disfmarker] Yeah, that's what it means. [speaker001:] Hhh! OK, cool. [speaker005:] Huh. [speaker001:] Uh a a [speaker002:] And he'll put us up, too. [speaker001:] I know [disfmarker] I know about that part. I know about the [disfmarker] the almond trees and stuff. Not joking. [speaker004:] OK. [speaker001:] Name a vegetable, OK. [vocalsound] Oh, um, kiwi? [speaker005:] Yeah. [speaker004:] Mmm, too easy. [speaker001:] Coconut. [speaker004:] Ki [speaker001:] Pineapple. See? Mango? [speaker004:] Too easy. [speaker001:] OK. OK. Too easy? [speaker004:] Yeah, mangos go everywhere. So do kiwi. [speaker001:] Really? Oh. OK, but I was trying to find something that he didn't grow on his farm. [speaker004:] But coconut anana pineapple, that's [disfmarker] that's tricky, yeah. [speaker001:] Sorry. Anyway. Cantaloupe. [speaker005:] So, but we have to decide what, like, sort of the general idea of [disfmarker] [speaker002:] Potatoes. So. Sorry! [speaker005:] Um, I mean, we're gonna have an example case um, right? I m the [disfmarker] the point is to [disfmarker] like this "where is" case, or something. 
[speaker004:] Yeah, maybe you have [disfmarker] It would be kind of [disfmarker] The paper ha would have, in my vision, a nice flow if we could say, well here is th the [disfmarker] th here is parsing if you wanna do it c right, here is understanding if you wanna do it right, and you know [disfmarker] without going into technical [disfmarker] [speaker005:] Mm hmm. [speaker001:] But then in the end we're not doing like those things right yet, right? Would that be clear in the paper or not? [speaker004:] That would be clear, we would [disfmarker] I [disfmarker] I mailed around a little paper that I have [disfmarker] [speaker001:] OK. It would be like, this is the idea. [speaker004:] w we could sort of say, this is [disfmarker] [speaker001:] Oh, I didn't get that, did I? Oops. [speaker004:] No, [speaker001:] Did I? Oops. [comment] Sorry. [speaker002:] No, y I don't think you got it. [speaker004:] See this, if you if you're not around, and don't partake in the discussions, and you don't get any email, [speaker001:] I'm sorry. I'm sorry, I'm sorry. Sorry. [speaker004:] and [speaker001:] OK, go on. So parsing done right [vocalsound] is like chicken done right. [speaker004:] Su So we could [disfmarker] we could say this is what [disfmarker] what's sort of state of the art today. Nuh? [speaker001:] OK. [speaker004:] And say, this is bad. Nuh? [speaker001:] Yeah. [speaker004:] And then we can say, uh well what we do is this. [speaker001:] OK. [speaker004:] Yeah. [speaker001:] Parsing done right, interpretation done right, example. [speaker004:] Mm hmm. Yeah. And [speaker001:] And how much to get into the cognitive neural part? [speaker004:] We [speaker002:] That's the only [disfmarker] That's the question mark. Don't you need to reduce it if it's a [disfmarker] or reduce it, if it's a cognitive neuro [disfmarker] [speaker001:] Well, you don't have t I mean the conference may be cognitive neural, doesn't mean that every paper has to be both. 
[speaker004:] Yeah, and you can [disfmarker] you can just point to the [disfmarker] to the literature, [speaker005:] Mmm. [speaker001:] Like, NLP cognitive neural. [speaker004:] you can say that construction based You know [disfmarker] [speaker001:] So i so this paper wouldn't particularly deal with that side although it could reference the NTL ish sort of, like, um, approach. [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker004:] Yeah. [speaker001:] The fact that the methods here are all compatible with or designed to be compatible with whatever, neurological [disfmarker] neuro neuro biol su stuff. [speaker004:] Mm hmm. [speaker001:] Yeah, I guess four pages you could [disfmarker] I mean you could definitely [disfmarker] it's definitely possible to do it. It's just [disfmarker] It'd just be small. Like introducing the formalism might be not really possible in detail, but you can use an example of it. [speaker005:] Well, l looking at [disfmarker] yeah, looking at that paper that [disfmarker] that you had, I mean you know, like, you didn't really explain in detail what was going on in the XML cases or whatever you just sorta said well, you know, here's the general idea, some stuff gets put in there. You know, hopefully you can [disfmarker] you can say something like constituents tells you what the construction is made out of, you know, without going into this intense detail. [speaker001:] Yeah, yeah. So it be like using the formalism rather than you know, introducing it per se. [speaker005:] Yeah. [speaker001:] So. [speaker005:] Give them the one paragraph whirlwind tour of w w what this is for, [speaker001:] Yeah. [speaker005:] and [disfmarker] Yeah. [speaker004:] Mm hmm. [speaker001:] And people will sort of figure out or ask about the bits that are implicit. [speaker004:] Yeah. So this will be sort of documenting what we think, and documenting what we have in terms of the Bayes net stuff. [speaker001:] Mm hmm. 
[speaker004:] And since there's never a bad idea to document things, no? [speaker001:] That's th that's definitely a good idea. [speaker004:] That would be my, uh [disfmarker] We [disfmarker] we should sketch out the details maybe tomorrow afternoon ish, if everyone is around. I don't know. [speaker005:] I think so. [speaker004:] You probably wouldn't be part of it. Maybe you want? Think about it. Um, You may [disfmarker] may ruin your career forever, if you appear. [speaker002:] Yeah, you might get blacklisted. [speaker004:] And um, the uh, other thing, yeah we actually [disfmarker] Have we made any progress on what we decided, uh, last week? I'm sure you read the transcript of last week's meeting in red so sh so you're up to dated [disfmarker] caught up. [speaker001:] No. Sorry. [speaker004:] We decided t that we're gonna take a "where is something" question, and pretend we have parsed it, and see what we could possibly hope to observe on the discourse side. [speaker002:] Remember I came in and I started asking you about how we were sor going to sort out the uh, decision nodes? [speaker001:] Yes! What'd you say? [speaker002:] I remember you talking to me, just not what you said. [speaker001:] I do remember you talking to me. Um, a few more bits. [speaker002:] Well, there was like we needed to [disfmarker] or uh, in my opinion we need to design a Bayes [disfmarker] another sub Bayes net [disfmarker] You know, it was whether [disfmarker] it was whether we would have a Bayes net on the output and on the input, [speaker001:] Oh. [speaker002:] or whether the construction was gonna be in the Bayes net, [speaker001:] Oh, yeah. OK. [speaker002:] a and outside of it, and [disfmarker] [speaker001:] OK. So that was [disfmarker] was that the question? Was that what [disfmarker] [speaker002:] Well that was related to what we were talking about. [speaker004:] Should I introduce it as SUDO square? [speaker002:] Yeah sure. [speaker004:] We have to put this in the paper. 
If we write it. This is [disfmarker] this is my only constraint. The [disfmarker] th So. The SUDO square [nonvocalsound] is, [vocalsound] "Situation", "User", "Discourse", right? "Ontology". [speaker005:] Oh I saw the diagram in the office, [speaker001:] Oh my god, that's amazing! [speaker004:] Mmm. Yeah. Whatever. [speaker001:] No way. [speaker005:] Way! [speaker004:] Is it? [speaker001:] Someone's gonna start making Phil Collins jokes. [speaker004:] Yeah. Hmm? [speaker001:] Sorry. [speaker005:] Oh, god, I hope not. [speaker002:] What? [speaker001:] You guys are too young. [speaker005:] You know like "Sussudio", that horrible, horrible song that should never have been created. [speaker001:] Yeah, come on. [speaker002:] Oh, oh, oh, oh. [speaker001:] I know, that was horrible. [speaker002:] I've blocked every aspect of Phil Collins out of my mind. [speaker001:] Sussudio. [speaker003:] What? [speaker001:] I'm sorry, I haven't. Not on purpose. [speaker005:] in here [speaker004:] Oh [disfmarker] [vocalsound] Well, also he's talking about suicide, and that's [disfmarker] that's not a notion I wanna have evoked. [speaker001:] No, he's not. [speaker004:] He is. [speaker001:] Really? Oops. [comment] I didn't really listen to it, [speaker004:] The [disfmarker] [speaker001:] I was too young. [speaker005:] Hmm. It sounds too rocking for that. [speaker001:] Anyway. Yeah. [speaker005:] Anyway. So, what's going on here? So what are [disfmarker] what [disfmarker] Was wollte der Kuenstler uns damit sagen? ["What was the artist trying to tell us with this?"] [speaker004:] So, [speaker001:] Stop excluding me. [speaker004:] OK, so we have tons of little things here, and we've [speaker001:] I can't believe that that's never been thought of before. [speaker002:] Wait, what are the dots? I don't remember what the dots were. [speaker005:] Those are little bugs. [speaker004:] OK. [speaker001:] Cool Keith.
[speaker004:] You know, these are our, whatever, belief net decision nodes, and they all contribute to these [pause] [nonvocalsound] things down here. [speaker002:] Oh, oh. [speaker001:] Wait, wait, what's the middle thing? [speaker004:] That's EDU. [speaker005:] That's a c [speaker004:] e e Our e e e [speaker001:] But wh I mean [disfmarker] [speaker005:] That's [disfmarker] [speaker004:] You. We. Us. [speaker001:] But what is it? [speaker004:] Well, in the moment it's a Bayes net. And it has sort of fifty not yet specified interfaces. OK. Eh [pause] I have taken care that we actually can build little interfaces, [nonvocalsound] to other modules that will tell us whether the user likes these things and, n the [disfmarker] or these things, and he [disfmarker] whether he's in a wheelchair or not, [speaker001:] OK. Is that supposed to be the international sign for interface? [speaker004:] I think so, yeah. [speaker001:] Mmm. OK. [speaker002:] I'd [disfmarker] I'd never seen it before either. [speaker001:] OK. Just t Cool. [speaker004:] Mmm. So. [speaker005:] Cuz things fit onto that, see? [speaker001:] Yeah. Cool. [speaker005:] In a vaguely obscene fashion. [speaker004:] No, this is a RME core by agent design, I don't know. [speaker001:] That's so great. [speaker004:] There's maybe a different [speaker005:] So wait, what a what are these letters again, Situr [comment] Situation, User, Discourse and [speaker004:] Situation, user, d ontology. [speaker001:] User? [speaker005:] Ontology. [speaker001:] What about the utterance? [speaker004:] That's here. [speaker005:] It's [disfmarker] [speaker003:] Discourse. [speaker001:] Oh, discourse. [speaker004:] Yeah. [speaker005:] Discourse is all things linguistic, yeah. [speaker001:] So that's not like context, OK. [speaker004:] So this [disfmarker] this includes the [disfmarker] the current utterance plus all the previous utterances. [speaker001:] Interesting, uh huh. User. 
[speaker004:] And for example w i s I Irena Gurevich is going to be here eh, end of July. [speaker001:] User. [speaker004:] She's a new linguist working for EML. And what she would like to do for example is great for us. She would like to take the ent ontolog [speaker003:] Ouch. [speaker004:] So, we have discussed in terms of the EVA [disfmarker] [speaker001:] Grateful for us? [speaker004:] uh [disfmarker] [speaker001:] Did you just say grateful for us? OK, sorry. Anyway. [speaker004:] Think of [disfmarker] back at the EVA vector, and Johno coming up with the idea that if the person discussed the [disfmarker] discussed the admission fee, in [disfmarker] eh previously, that might be a good indication that, "how do I get to the castle?", actually he wants to enter. [speaker001:] Mm hmm. [speaker004:] Or, you know, "how do I get to X?" discussing the admission fee in the previous utterance, is a good indication. [speaker005:] Mm hmm. [speaker004:] So we don't want a hard code, a set of lexemes, or things, that person's you know, sort of filter, or uh search the discourse history. [speaker001:] Mm hmm. [speaker004:] So what would be kind of cool is that if we encounter concepts that are castle, tower, bank, hotel, we run it through the ontology, and the ontology tells us it has um, admission, opening times, it has admission fees, it has this, it has that, and then we [disfmarker] we [disfmarker] we make a thesaurus lexicon, look up, and then search dynamically through the uh, discourse history for [pause] occurrences of these things in a given window of utterances. [speaker001:] Mm hmm. [speaker004:] And that might, you know, give us additional input to belief A versus B. Or E versus A. [speaker001:] So it's not just a particular word's [disfmarker] OK, so the [disfmarker] you're looking for a few keys that you know are cues to [disfmarker] sorry, a few specific cues to some intention. [speaker002:] You can dynamically look up keys, yeah. [speaker004:] Yeah. 
[speaker005:] Uh, so, wait [disfmarker] so um, since this [disfmarker] since this sort of technical stuff is going over my head, [speaker001:] OK. [speaker002:] And then grep, basically. [speaker005:] the [disfmarker] the point is that you uh [disfmarker] that when someone's talking about a castle, you know that it's the sort of thing that people are likely to wanna go into? Or, is it the fact that if there's an admission fee, then one of the things we know about admission fees is that you pay them in order to go in? And then the idea of entering is active in the discourse or something? And then [speaker004:] Well [speaker005:] blah blah blah? I mean. [speaker004:] the [disfmarker] the idea is even more general. The idea is to say, we encounter a certain entity in a [disfmarker] in a in a utterance. So le let's look up everything we [disfmarker] the ontology gives us about that entity, what stuff it does, what roles it has, what parts, whatever it has. Functions. And, then we look in the discourse, whether any of that, or any surface structure corresponding to these roles, functions aaa [comment] has ever occurred. [speaker005:] Oh, OK. [speaker004:] And then, the discourse history can t tell us, "yeah", or "no". [speaker005:] OK. [speaker004:] And then it's up for us to decide what to do with it. [speaker005:] OK. [speaker004:] t So i [speaker005:] So [disfmarker] No, go ahead. [speaker004:] So, we may think that if you say um, [vocalsound] [vocalsound] "where is the theater", um, whether or not he has talked about tickets before, then we [disfmarker] he's probably wanna go there to see something. [speaker005:] Mm hmm. OK. [speaker004:] Or where is the opera in Par Paris?, yeah? Lots of people go to the opera to take pictures of it and to look at it, [speaker005:] Mm hmm. OK. [speaker004:] and lots of people go to attend a performance. [speaker005:] Mm hmm. 
[speaker004:] And, the discourse can maybe tell us w what's more likely if we know what to look for in previous statements. And so we can hard code for opera, look for tickets, look for this, look for that, [speaker005:] OK. OK. [speaker004:] or look for Mozart, look for thi but the smarter way is to go via the ontology and dynamically, then look up u stuff. [speaker005:] OK. But you're still doing look up so that when the person [disfmarker] So the point is that when the person says, "where is it?" then you sort of say, let's go back and look at other things and then decide, rather than the other possibility which is that [pause] all through discourse as they talk about different things [disfmarker] You know like w prior to the "where is it" question they say, you know, "how much does it cost to get in, you know, to [disfmarker] to see a movie around here", um, [vocalsound] "where is the closest theater" [disfmarker] The [disfmarker] the [disfmarker] the point is that by mentioning admission fees, that just sort of stays active now. [speaker004:] Yeah. [speaker005:] You know. That becomes part of like, their sort of current ongoing active conceptual structure. [speaker004:] Mm hmm. [speaker005:] And then, um, over in your Bayes net or whatever, when [disfmarker] when the person says "where is it", you've already got, you know since they were talking about admission, and that evokes the idea of entering, um, then when they go and ask "where is it", then you're Enter node is already active because that's what the person is thinking about. [speaker004:] Mm hmm. Yeah. [speaker005:] I mean that's the sort of cognitive linguistic y way, and probably not practical. [speaker004:] Yeah, e ultimately that's also what we wanna get at. I think that's [disfmarker] that's the correct way. So, of course we have to keep memory of what was the last intention, [speaker005:] Mm hmm. 
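The dynamic look-up speaker004 describes here (ontology gives the entity's roles, a thesaurus gives surface lexemes for each role, then grep the recent discourse history) can be sketched as below. This is only an illustration: the ontology entries, the thesaurus, and the name `find_active_cues` are all invented for the example, not part of the actual EDU modules.

```python
# Toy ontology: each entity type lists the roles/functions it has.
toy_ontology = {
    "castle": ["admission_fee", "opening_times", "guided_tour"],
    "opera":  ["admission_fee", "performance", "tickets"],
}

# Thesaurus/lexicon look-up: surface lexemes that can realize each role.
thesaurus = {
    "admission_fee": ["admission", "fee", "entrance fee", "how much"],
    "opening_times": ["open", "close", "hours"],
    "guided_tour":   ["tour", "guide"],
    "performance":   ["performance", "show", "concert"],
    "tickets":       ["ticket", "tickets", "seat"],
}

def find_active_cues(entity, discourse_history, window=5):
    """Return the entity's ontology roles whose lexemes occur in the
    last `window` utterances of the discourse history."""
    recent = " ".join(discourse_history[-window:]).lower()
    active = []
    for role in toy_ontology.get(entity, []):
        if any(lex in recent for lex in thesaurus.get(role, [])):
            active.append(role)
    return active

history = ["how much is the admission fee for the castle?",
           "where is the castle?"]
print(find_active_cues("castle", history))  # → ['admission_fee']
```

Nothing is hard-coded per entity: mentioning a new entity type only requires its ontology entry, and the discourse search follows from that, which is the "smarter way" being argued for.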
[speaker004:] and how does it fit to this, and what does it tell us, in terms of [disfmarker] of the [disfmarker] the [disfmarker] what we're examining. [speaker005:] Mmm, yeah. [speaker004:] And furthermore, I mean we can idealize that, you know, people don't change topics, [speaker005:] Mm hmm. [speaker004:] but they do. But, even th for that, there is a student of ours who's doing a dialogue act um, recognition module. [speaker005:] Right. Mm hmm. [speaker004:] So, maybe, we're even in a position where we can take your approach, which is of course much better, as to say how [disfmarker] how do these pieces [disfmarker] [speaker005:] Mmm. And much harder to r program. [speaker004:] Hmm? [speaker005:] And much harder to p to program. [speaker004:] Yeah. How [disfmarker] how do these pieces fit together? Uh huh. And um. But, OK, nevertheless. So these are issues but we [disfmarker] what we actually decided last week, is to, and this is, again, for your benefit [disfmarker] is to um, pretend we have observed and parsed an utterance such as "where is the Powder Tower", or "where is the zoo", and specify um, what [disfmarker] what we think the [disfmarker] the output uh, observe, out [disfmarker] i input nodes for our Bayes nets for the sub sub D, for the discourse bit, should be. So that [disfmarker] And I will [disfmarker] I will then [comment] [vocalsound] come up with the ontology side uh, bits and pieces, so that we can say, OK we [disfmarker] we always just look at this utterance. That's the only utterance we can do, it's hard coded, like Srini, sort of hand parsed, hand crafted, but this is what we hope to be able to observe in general from utterances, and from ontologies, and then we can sort of fiddle with these things to see what it actually produces, in terms of output. [speaker005:] Uh [speaker004:] So we need to find out what the "where is X" construction will give us in terms of semantics and [vocalsound] Simspec type things. 
[speaker001:] Just [disfmarker] OK. Just "where is X"? [speaker004:] Mm hmm. [speaker001:] Or any variants of that. [speaker004:] Yeah. No! Um, look at it this way, i Yeah. What did we decide. We decided sort of the [disfmarker] the prototypical "where is X", where you know, we don't really know, does he wanna go there, or just wanna know where it is. [speaker005:] Well we were [speaker004:] So the difference of "where is the railway station", versus where [disfmarker] where [disfmarker] "where is Greenland". Nuh? [speaker005:] Mm hmm. [speaker002:] Uh s I was just dancing, sorry. [speaker004:] We're not videotaping any of this. So. [speaker002:] Uh [disfmarker] ah [disfmarker] [speaker005:] So, um, we're supposed to [disfmarker] I mean we're talking about sort of anything that has the semantics of request for location, right? actually? Or, I mean, anyway, the node in the uh [disfmarker] the ultimate, uh, in [disfmarker] in the Bayes net thing when you're done, the [disfmarker] the node that we're talking about um, is one that says "request for location, true", or something like that, right? Um, and [disfmarker] and exactly how that gets activated, you know, like whether we want the sentence "how do I get there?" to activate that node or not, you know, that's [disfmarker] that's sort of the issue that sort of the linguistic y side has to deal with, right? [speaker004:] Yeah, but it [disfmarker] Yea Nnn Well actually more [disfmarker] m more the other way around. We wanted something that represents uncertainty uh we in terms of going there or just wanting to know where it is, for example. Some generic information. [speaker005:] OK. [speaker004:] And so this is prototypically [@ @] found in the "where is something" question, surface structure, [speaker005:] OK. [speaker002:] We [speaker004:] which can be p you know, should be maps to something that activates both. I mean the idea is to [disfmarker] [speaker002:] I don't [disfmarker] [speaker005:] Alright, OK. 
[speaker002:] Hhh. [speaker004:] let's have it fit nicely with the paper. [speaker002:] I guess. I don't [disfmarker] I don't see unde how we would be able to distinguish between the two intentions just from the g utterance, though. [speaker004:] The [disfmarker] [speaker002:] I mean, uh bef or, before we don't [disfmarker] before we cranked it through the Bayes net. I mean. [speaker004:] Yeah, we [disfmarker] we wouldn't. That's exactly what we want. We want to get [disfmarker] [speaker002:] We would? [speaker004:] No. We wouldn't. [speaker002:] OK, but then so basically it's just a [disfmarker] for every construction we have a node in the net, right? And we turn on that node. [speaker004:] Yeah. [speaker005:] Oy. [speaker004:] What [disfmarker] what is this gonna [disfmarker] Exactly. What is the uh [disfmarker] Well [disfmarker] [speaker002:] And then given that we know that [pause] the construction [pause] has these two things, we can set up probabilities [disfmarker] we can s basically define all the tables for ev for those [disfmarker] [speaker004:] Yeah, it should be [disfmarker] So we have um, i let's assume we [disfmarker] we call something like a loc X node and a path X node. And what we actually get if we just look at the discourse, "where is X" should activate or should [disfmarker] [speaker005:] Mmm. [speaker004:] Hmm. Should be both, whereas maybe "where is X located", we find from the data, is always just asked when the person wants to know where it is, and "how do I get to" is always asked when the person just wants to know how to get there. Right? So we want to sort of come up with what gets uh, input, and how inter in case of a "where is" question. So what [disfmarker] what would the outcome of [disfmarker] of your parser look like? And, what other discourse information from the discourse history could we hope to get, squeeze out of that utterance? 
So define the [disfmarker] the input into the Bayes net [vocalsound] based on what the utterance, "where is X", gives us. So definitely have an Entity node here which is activated via the ontology, [speaker001:] s [speaker004:] so "where is X" produces something that is s stands for X, whether it's castle, bank, restroom, toilet, whatever. And then the ontology will tell us [disfmarker] [speaker001:] That it has a location or something like that? [disfmarker] or th the ontology will tell us where actually it is located? [speaker004:] No. Not at all. [speaker001:] OK. [speaker004:] Where it is located, we have, a user proximity node here somewhere, [speaker001:] OK. OK. [speaker004:] e which tells us how far the user [disfmarker] how far away the user is in respect to that uh entity. [speaker001:] OK. So you're talking about, for instance, the construction obviously involves this entity or refers [disfmarker] refers to this entity, [speaker004:] Mm hmm. [speaker001:] and from the construction also you know that it is a location [disfmarker] is [disfmarker] or a thing [disfmarker] thing that can be located. Right? Ontology says this thing has a location slot. Sh and that's the thing that is being [disfmarker] that is the content of the question that's being queried by one interpretation of "where is X". And another one is, um, path from current [disfmarker] user current location to [comment] that location. [speaker004:] Mm hmm. [speaker001:] So. So is the question [disfmarker] I mean it's just that I'm not sure what the [disfmarker] Is the question, for this particular construction how we specify that that's the information it provides? Or [disfmarker] or asked for? b Both sides, right? [speaker004:] Yeah, you don't need to even do that. It's just sort of what [vocalsound] what would be [@ @] [comment] observed in uh [disfmarker] in that case. [speaker001:] Observed when you heard the speaker say "where is X", or when [disfmarker] when that's been parsed? 
[speaker004:] Mm hmm. [speaker001:] So these little circles you have by the D? Is that [disfmarker]? [speaker004:] That's exactly what we're looking for. [speaker001:] OK. OK. [speaker002:] I d I just [disfmarker] I don't like having [disfmarker] characterizing the constructions with location and path, or li characterizing them like that. Cuz you don't [disfmarker] It seems like in the general case you wouldn't know how [disfmarker] how to characterize them. [speaker004:] You wouldn't. [speaker002:] I mean [disfmarker] or, for when. There could be an interpretation that we don't have a node for in the [disfmarker] I mean it just seems like [@ @] has to have uh [disfmarker] a node for the construction and then let the chips fall where they may. Versus uh, saying, this construction either can mean location or path. And, in this cas and since [disfmarker] since it can mean either of those things, it would light both of those up. [speaker004:] It's the same. [speaker002:] Thoughts? Questions? [speaker005:] I'm thinking about it. Um [disfmarker] [speaker004:] It will be the same. So I think r in here we have "I'll go there", right? [speaker002:] Answers? [speaker004:] And we have our Info on. So in my c my case, this would sort of make this [pause] happy, and this would make the Go there happy. What you're saying is we have a Where X question, Where X node, that makes both happy. Right? That's what you're proposing, which is, in my mind just as fine. So w if we have a construction [pause] node, "where is X", it's gonna both get the po posterior probability that [disfmarker] it's Info on up, [speaker002:] Mmm, yeah. [speaker004:] Info on is True up, and that Go there is True up, as well. Which would be exactly analogous to what I'm proposing is, this makes [disfmarker] uh makes something here true, and this makes something [disfmarker] also something here true, and this makes this True up, and this makes this True up as well. 
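The point being settled here (one node per construction, feeding the intention nodes, rather than pre-labeling constructions as "location" or "path") can be shown with a toy posterior computation. The CPT numbers are invented for illustration, and the two intentions are treated as mutually exclusive here only to keep the arithmetic minimal; nothing about the real EDU net is implied.

```python
# P(construction | intention) for two intention hypotheses: the generic
# "where is X" supports both, the more specific forms favor one.
cpt = {
    "where_is_X":         {"info_on": 0.5, "go_there": 0.5},
    "where_is_X_located": {"info_on": 0.9, "go_there": 0.1},
    "how_do_I_get_to_X":  {"info_on": 0.1, "go_there": 0.9},
}

prior = {"info_on": 0.5, "go_there": 0.5}

def posterior(construction):
    """P(intention | observed construction) by Bayes' rule."""
    unnorm = {i: cpt[construction][i] * prior[i] for i in prior}
    z = sum(unnorm.values())
    return {i: unnorm[i] / z for i in unnorm}

print(posterior("where_is_X"))         # both intentions stay plausible
print(posterior("how_do_I_get_to_X"))  # mass shifts to go_there
```

Observing the generic construction leaves the ambiguity intact (both posteriors up, as speaker004 says), while the specific constructions resolve it; the CPT entries are exactly the "tables" speaker002 mentions defining per construction.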
[speaker005:] I kinda like it better without that extra level of indirection too. You know with [disfmarker] with this points to this points to that, and so on because [vocalsound] I don't know, it [disfmarker] [speaker004:] Yeah, because we get [disfmarker] we get tons of constructions I think. [speaker001:] Is uh, [speaker005:] Yeah. [speaker004:] Because, you know, mmm people have many ways of asking for the same thing, [speaker002:] Yeah, sure. [speaker001:] Yeah. [speaker004:] and [disfmarker] [speaker001:] So un [speaker002:] I change I changed my mind actually. [speaker005:] OK. [speaker001:] So I agree with that. I have a different kinda question, might be related, which is, OK so implicitly everything in EDU, we're always inferring the speaker intent, right? Like, what they want either, the information that they want, or [disfmarker] It's always information that they want probably, of some kind. Right? Or I [disfmarker] I don't know, [speaker004:] The system doesn't massage you, no. No. [speaker001:] or what's something that they [disfmarker] I [disfmarker] I [disfmarker] I don't [disfmarker] OK. So, um, let's see. So I don't know if the [disfmarker] I mean i if th just there's more s here that's not shown that you [disfmarker] it's already like part of the system whatever, but, "where is X", like, the fact that it is, you know, a speech act, whatever, it is a question. It's a question that, um, queries on some particular thing X, and X is that location. There's, like, a lot of structure in representing that. [speaker004:] Yep. Yeah. [speaker001:] So that seems different from just having the node "location X" and that goes into EDU, right? [speaker004:] Yeah. [vocalsound] Precisely. That's [disfmarker] that's [disfmarker] So, w [speaker001:] So tha is that what you're t talking about? [speaker004:] Exactly. We have su we have specified two. [speaker001:] wh what kinds of structure we want. [speaker004:] OK, the next one would be here, just for mood. 
[speaker005:] Mm hmm. [speaker001:] Mm hmm. [speaker004:] The next one would be what we can squeeze out of the uh I don't know, maybe we wanna observe the uh, um, [vocalsound] [vocalsound] uh the length of [disfmarker] of the words used, and, or the prosody [speaker001:] Mmm. [speaker004:] and g a and t make conclusions about the user's intelligence. [speaker001:] OK. So in some ways [disfmarker] [speaker004:] I don't know, yeah. [speaker001:] um, so in some ways in the other sort of parallel set of mo more linguistic meetings we've been talking about possible semantics of some construction. [speaker005:] Mm hmm. [speaker001:] Right? Where it was the simulation that's, according to it [disfmarker] you know, that [disfmarker] that corresponds to it, and as well the [disfmarker] as discourse, [speaker004:] Mm hmm. [speaker001:] whatever, conte infor in discourse information, such as the mood, and, you know, other stuff. So, are we looking for a sort of abbreviation of that, that's tailored to this problem? Cuz that [disfmarker] that has, you know, basically, you know, s it's in progress still it's in development still, but it definitely has various feature slots, attributes, um, bindings between things [disfmarker] [speaker004:] Mm hmm. Yeah. U that's exactly r um, why I'm proposing [disfmarker] It's too early to have [disfmarker] to think of them [disfmarker] of all of these discourse things that one could possibly observe, [speaker001:] Uh huh. Mm hmm. [speaker004:] so let's just assume [speaker001:] For the subset of [disfmarker] [speaker004:] human beings are not allowed to ask anything but "where is X". [speaker001:] OK. [speaker004:] This is the only utterance in the world. What could we observe from that? [speaker001:] OK. That exactly "where is X", [speaker004:] In ter [speaker001:] not the [disfmarker] the choices of "where is X" or "how do I get to X". Just "where is X". [speaker005:] Yeah. [speaker004:] Just [disfmarker] just "where is X". 
[speaker001:] OK. [speaker004:] And, but you know, do it [disfmarker] do it in such a way that we know that people can also say, "is the town hall in front of the bank", so that we need something like a w WH focus. Nuh? Should be [disfmarker] should be there, that, you know, this [disfmarker] the [disfmarker] whatever we get from the [disfmarker] [speaker001:] Wait, so do, or do not take other kinds of constructions into account? [speaker004:] Well, if you [disfmarker] if you can, oh definitely do, [speaker001:] OK. Where possible. [speaker004:] where possible. Right? [speaker001:] OK. [speaker004:] If i if [disfmarker] if it's not at all triggered by our thing, then it's irrelevant, [speaker001:] Mm hmm. [speaker004:] and it doesn't hurt to leave it out for the moment. Um, but [disfmarker] [speaker001:] OK. Um, it seems like for instance, "where is X", the fact that it might mean um, "tell me how to get to X", like [disfmarker] Do y So, would you wanna say that those two are both, like [disfmarker] Those are the two interpretations, right? the [disfmarker] the ones that are location or path. So, you could say that the s construction is a question asking about this location, and then you can additionally infer, if they're asking about the location, it's because they wanna go to that place, [speaker004:] Mm hmm. [speaker001:] in which case, the [disfmarker] you're jumping a step [disfmarker] step and saying, oh, I know where it is [speaker005:] Yeah. [speaker001:] but I also know how to get [disfmarker] they wanna seem [disfmarker] they seem to wanna get there so I'm gonna tell them ". So there's like structure [speaker005:] Right, th this [disfmarker] it's not [disfmarker] it's not that this is sort of like semantically ambiguous between these two. [speaker001:] i do you kn sort of uh, that [disfmarker] [speaker004:] Mm hmm. [speaker005:] It's really about this but why would you care about this? 
Well, it's because you also want to know this, or something like that right? [speaker004:] Mm hmm. [speaker001:] So it's like you infer the speaker intent, and then infer a plan, a larger plan from that, for which you have the additional information, [speaker005:] Yeah. [speaker004:] Mm hmm. [speaker001:] you're just being extra helpful. [speaker004:] Yep. [speaker001:] Um. [speaker004:] Think [disfmarker] Uh, well this is just a mental exercise. [speaker001:] Yeah. [speaker004:] If you think about, focus on this question, how would you design [pause] that? [speaker005:] Mm hmm. [speaker004:] Is it [disfmarker] do you feel confident about saying this is part of the language already to [disfmarker] to detect those plans, and why would anyone care about location, if not, you know [speaker005:] Mmm. [speaker004:] and so forth. Or do you actually, I mean this is perfectly legitimate, and I [disfmarker] I would not have any problems with erasing this and say, that's all we can activate, based on the utterance out of context. [speaker001:] Mm hmm. And just by an additional link [disfmarker] Oh. [speaker004:] What? [speaker005:] Right. [speaker001:] Right, [speaker004:] And then the [disfmarker] the [disfmarker] the miracle that we get out the intention, Go there, happens, based on what we know about that entity, about the user, about his various beliefs, goals, desires, blah blah blah. [speaker001:] like, with context and enough user information, yeah. [speaker005:] Yeah. [speaker004:] Absolutely fine. But this is the sort of thing, I [disfmarker] I propose that we think about, [speaker001:] OK. [speaker004:] so that we actually end up with um, um, nodes for the discourse and ontology so that we can put them into our Bayes net, never change them, so we [disfmarker] all there is is "where is X", and, Eva can play around with the observed things, and we can run our better JavaBayes, and have it produce some output. 
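The fixed little net being proposed — "where is X" as the only utterance, observe it, and read off posteriors over the two intentions — can be sketched as a toy in a few lines. Everything here (node names, probabilities) is invented purely for illustration; the actual net was being assembled in JavaBayes with discourse and ontology nodes.

```python
# Toy sketch of the "where is X" belief net under discussion.
# All structure and numbers are invented for illustration; the real
# net was being built in JavaBayes with discourse/ontology nodes.

def posterior(prior, likelihood, evidence=True):
    """P(intention | observed construction) by Bayes' rule."""
    joint = {i: prior[i] * likelihood[i][evidence] for i in prior}
    z = sum(joint.values())
    return {i: p / z for i, p in joint.items()}

# Prior over the user's intention (hypothetical numbers).
prior = {"Info-on": 0.5, "Go-there": 0.5}

# How likely each intention is to be voiced as "where is X":
# the one construction that "lights up" both interpretations.
likelihood = {
    "Info-on":  {True: 0.9, False: 0.1},
    "Go-there": {True: 0.6, False: 0.4},
}

post = posterior(prior, likelihood)  # -> {'Info-on': 0.6, 'Go-there': 0.4}
```

Whether "where is X" is one node feeding both intentions or two construction nodes, the posterior arithmetic comes out the same, which is the point made above.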
And for the first time in th in [disfmarker] in the world, we look at our output, and um [disfmarker] and see uh whether it [disfmarker] it's any good. [speaker001:] OK. [speaker004:] You know? I mean, [speaker005:] Here's hoping. [speaker004:] Hmm? [speaker005:] Here's hoping. Right? Now cross your fingers. [speaker004:] Yeah, I [disfmarker] I mean, for me this is just a ba matter of curiosity, I wanna [disfmarker] would like to look at uh, what this ad hoc process of designing a belief net would actually produce. [speaker005:] Yeah. [comment] Yeah. [speaker001:] Mmm. [speaker004:] If [disfmarker] if we ask it where is something. And, maybe it also h enables you to think about certain things more specifically, um, come up with interesting questions, to which you can find interesting answers. And, additionally it might fit in really nicely with the paper. [speaker005:] Um hmm. [speaker004:] Because if [disfmarker] if [disfmarker] if we want an example for the paper, I suggest there it is. [speaker005:] Yeah. [speaker004:] So th this might be a nice opening paragraph for the paper as saying, "you know people look at kinds of [disfmarker] [vocalsound] at ambiguities", and um, in the literature there's "bank" and whatever kinds of garden path phenomenon. [speaker005:] Mm hmm. [speaker004:] And we can say, well, that's all nonsense. A, A, uh these things are never really ambiguous in discourse, B, B, don't ever occur really in discourse, but normal statements that seem completely unambiguous, such as "where is the blah blah", actually are terribly complex, and completely ambiguous. [speaker005:] Mm hmm. Mm hmm. [speaker004:] And so, what every everybody else has been doing so far in [disfmarker] in [disfmarker] in [disfmarker] you know, has been completely nonsensical, and can all go into the wastepaper bin, and the only [disfmarker] [speaker005:] That's always a good way to begin. Yeah. Yeah. [speaker004:] Yeah. 
And the [disfmarker] the [disfmarker] the only [disfmarker] [speaker002:] I am great. [speaker004:] Yeah. [speaker005:] All others are useless. [speaker004:] Yeah. [speaker005:] That's good. [speaker004:] Nice overture, but, you know, just not really [disfmarker] OK, I'm eja exaggerating, but that might be, you know, saying "hey", you know, some stuff is [disfmarker] is actually complex, if you look at it in [disfmarker] in [disfmarker] in the vacuum [speaker005:] Mm hmm. [speaker004:] and [disfmarker] and ceases to be complex in reality. And some stuff that's as [disfmarker] that's absolutely straightforward in the vacuum, is actually terribly complex in reality. Would be nice sort of, uh, also, nice, um bottom up linguistics, um, type message. [speaker005:] Mm hmm. True. [speaker004:] Versus the old top down school. I'm running out of time. OK. [speaker002:] When do you need to start wizarding? [speaker004:] At four ten. OK, this is the other bit of news. The subjects today know Fey, so she can't be here, and do the wizarding. [speaker005:] Huh. [speaker004:] So I'm gonna do the wizarding and Thilo's gonna do the instructing. [speaker002:] Mmm. [speaker004:] Also we're getting a [disfmarker] a person who just got fired uh, from her job. Uh a person from Oakland who is interested in maybe continuing the wizard bit once Fey leaves in August. And um, she's gonna look at it today. Which is good news in the sense that if we want to continue, after the thir thir after July, we can. We could. And, um [disfmarker] and that's also maybe interesting for Keith and whoever, if you wanna get some more stuff into the data collection. Remember this, we can completely change the set up any time we want. [speaker005:] Mm hmm. OK. [speaker004:] Look at the results we've gotten so far for the first, whatever, fifty some subjects? [speaker001:] Fifty? You've had fifty so far, or [disfmarker]? [speaker004:] No, we're approaching twenty now. [speaker001:] OK. 
[speaker004:] But, until Fey is leaving, we surely will hit the [disfmarker] some of the higher numbers. [speaker001:] Yeah. Hmm. [speaker004:] And um, so that's cool. Can a do more funky stuff. [speaker005:] Sure. Yeah, I'll have to look more into that data. Is that around? Like, cuz that's pretty much getting posted or something right away when you get it? Or [disfmarker]? [speaker004:] Um. [speaker005:] I guess it has to be transcribed, huh? [speaker004:] We have uh, eh found someone here who's hand st hand transcribing the first twelve. [speaker005:] OK. [speaker004:] First dozen subjects [speaker005:] Uh huh. [speaker004:] just so we can build a [disfmarker] a language model for the recognizer. [speaker005:] OK. [speaker004:] But, um [disfmarker] So those should be available soon. [speaker005:] OK. [speaker004:] The first twelve. And I can ch ch st e [speaker005:] You know [disfmarker] I mean you know that I [disfmarker] that I looked at the first [disfmarker] the first one and got enough data to keep me going for, you know, probably most of July. So. [vocalsound] But, um. Yeah, a probably not the right way to do it actually. [speaker004:] But you can listen to [disfmarker] a y y y You can listen to all of them from your Solaris box. [speaker005:] OK. [speaker004:] If you want. [speaker005:] Right. [speaker004:] It's always fun. [speaker002:] Is it starting now? [speaker005:] Yep. [speaker002:] So what [disfmarker] what [disfmarker] from [disfmarker] what [disfmarker] Whatever we say from now on, it can be held against us, [speaker001:] Hello? [speaker002:] right? [speaker005:] That's right. [speaker002:] and uh [speaker001:] It's your right to remain silent. [speaker002:] Yeah. So I [disfmarker] I [disfmarker] the [disfmarker] the problem is that I actually don't know how th these held meetings are held, if they are very informal and sort of just people are say what's going on [speaker005:] Yeah. 
[speaker002:] and [speaker005:] Yeah, that's usually what we do. We just sorta go around and people say what's going on, what's the latest uh [disfmarker] [speaker002:] OK. Yeah. OK. So I guess that what may be a [disfmarker] reasonable is if I uh first make a report on what's happening in Aurora in general, at least what from my perspective. [speaker005:] Yeah. That would be great. [speaker004:] Uh o [speaker002:] And [disfmarker] and uh so, I [disfmarker] I think that Carmen and Stephane reported on uh Amsterdam meeting, which was kind of interesting because it was for the first time we realized we are not friends really, but we are competitors. Cuz until then it was sort of like everything was like wonderful and [disfmarker] [speaker005:] Yeah. It seemed like there were still some issues, [speaker002:] Yeah. [speaker005:] right? that they were trying to decide? [speaker002:] There is a plenty of [disfmarker] there're plenty of issues. [speaker005:] Like the voice activity detector, [speaker002:] Well and what happened was that they realized that if two leading proposals, which was French Telecom Alcatel, and us both had uh voice activity detector. [speaker005:] Right. [speaker002:] And I said "well big surprise, I mean we could have told you that [pause] n n n four months ago, except we didn't because nobody else was bringing it up". Obviously French Telecom didn't volunteer this information either, [speaker005:] Right. [speaker002:] cuz we were working on [disfmarker] mainly on voice activity detector for past uh several months because that's buying us the most uh thing. And everybody said "Well but this is not fair. We didn't know that." And of course uh the [disfmarker] it's not working on features really. [speaker005:] Right. [speaker002:] And be I agreed. 
I said "well yeah, you are absolutely right, I mean if I wish that you provided better end point at speech because uh [disfmarker] or at least that if we could modify the recognizer, uh to account for these long silences, because otherwise uh that [disfmarker] that [disfmarker] th that wasn't a correct thing." And so then ev ev everybody else says "well we should [disfmarker] we need to do a new eval evaluation without voice activity detector, or we have to do something about it". [speaker005:] Right. [speaker002:] And in principle I [disfmarker] uh I [disfmarker] we agreed. [speaker005:] Mm hmm. [speaker002:] We said uh "yeah". Because uh [disfmarker] but in that case, uh we would like to change the uh [disfmarker] the algorithm because uh if we are working on different data, we probably will use a different set of tricks. [speaker005:] Right. [speaker002:] But unfortunately nobody ever officially can somehow acknowledge that this can be done, because French Telecom was saying "no, no, no, now everybody has access to our code, so everybody is going to copy what we did." Yeah well our argument was everybody ha has access to our code, and everybody always had access to our code. We never uh [disfmarker] uh denied that. We thought that people are honest, that if you copy something and if it is protected [disfmarker] protected by patent then you negotiate, or something, [speaker005:] Yeah. Right. [speaker002:] right? I mean, if you find our technique useful, we are very happy. [speaker005:] Right. Mm hmm. [speaker002:] But [disfmarker] And French Telecom was saying "no, no, no, there is a lot of little tricks which uh sort of uh cannot be protected and you guys will take them," which probably is also true. I mean, you know, it might be that people will take uh uh th the algorithms apart and use the blocks from that. But I somehow think that it wouldn't be so bad, as long as people are happy abou uh uh uh honest about it. [speaker005:] Yeah. 
[speaker002:] And I think they have to be honest in the long run, because winning proposal again [disfmarker] uh what will be available th is [disfmarker] will be a code. [speaker005:] Mm hmm. [speaker002:] So the uh [disfmarker] the people can go to code and say "well listen this is what you stole from me" [speaker005:] Right. [speaker002:] you know? [speaker005:] Right. [speaker002:] "so let's deal with that". So I don't see the problem. The biggest problem of course is that f that Alcatel French Telecom cl claims "well we fulfilled the conditions. We are the best. Uh. We are the standard." And e and other people don't feel that, because they [disfmarker] so they now decided that [disfmarker] that [disfmarker] is [disfmarker] the whole thing will be done on well endpointed data, essentially that somebody will endpoint the data based on clean speech, because most of this the SpeechDat Car has the also close speaking mike and endpoints will be provided. [speaker005:] Mm hmm. Ah. [speaker002:] And uh we will run again [disfmarker] still not clear if we are going to run the [disfmarker] if we are allowed to run uh uh new algorithms, but I assume so. Because uh we would fight for that, really. uh but [disfmarker] since uh u u n u [disfmarker] at least our experience is that only endpointing a [disfmarker] a mel cepstrum gets uh [disfmarker] gets you twenty one percent improvement overall and twenty seven improvement on SpeechDat Car [speaker005:] Hmm. [speaker002:] then obvious the database [disfmarker] uh I mean the [disfmarker] the [disfmarker] the [disfmarker] uh the baseline will go up. [speaker005:] Right. [speaker002:] And nobody can then achieve fifty percent improvement. So they agreed that uh there will be a twenty five percent improvement required on [disfmarker] on uh h u m bad mis badly mismatched [disfmarker] [speaker005:] But wait a minute, I thought the endpointing really only helped in the noisy cases. 
[speaker002:] It uh [disfmarker] [speaker005:] Oh, but you still have that with the MFCC. OK. [speaker002:] Y yeah. [speaker005:] Yeah. [speaker002:] Yeah but you have the same prob I mean MFCC basically has an enormous number of uh insertions. [speaker005:] Right. Yeah. Yeah. Yeah. [speaker002:] And so, so now they want to say "we [disfmarker] we will require fifty percent improvement only for well matched condition, and only twenty five percent for the serial cases." [speaker005:] Hmm. [speaker002:] And uh [disfmarker] and they almost agreed on that except that it wasn't a hundred percent agreed. And so last time uh during the meeting, I just uh brought up the issue, I said "well you know uh quite frankly I'm surprised how lightly you are making these decisions because this is a major decision. For two years we are fighting for fifty percent improvement and suddenly you are saying" oh no we [disfmarker] we will do something less ", but maybe we should discuss that. And everybody said "oh we discussed that and you were not a mee there" and I said "well a lot of other people were not there because not everybody participates at these teleconferencing c things." Then they said "oh no no no because uh everybody is invited." However, there is only ten or fifteen lines, so people can't even con you know participate. So eh they agreed, and so they said "OK, we will discuss that." Immediately Nokia uh raised the question and they said "oh yeah we agree this is not good to to uh dissolve the uh uh [disfmarker] the uh [disfmarker] the criterion." [speaker005:] Mm hmm. [speaker002:] So now officially, Nokia is uh uh complaining and said they [disfmarker] they are looking for support, uh I think QualComm is uh saying, too "we shouldn't abandon the fifty percent yet. We should at least try once again, one more round." [speaker005:] Mm hmm. Mm hmm. [speaker002:] So this is where we are. [speaker005:] Hmm. 
[speaker002:] I hope that [disfmarker] I hope that this is going to be a adopted. Next Wednesday we are going to have uh another uh teleconferencing call, so we'll see what uh [disfmarker] where it goes. [speaker005:] So what about the issue of um the weights on the [disfmarker] for the different systems, the well matched, and medium mismatched and [disfmarker] [speaker002:] Yeah, that's what [disfmarker] that's a g very good uh point, because David says "well you know we ca we can manipulate this number by choosing the right weights anyways." [speaker005:] Mm hmm. [speaker002:] So while you are right but [disfmarker] uh you know but Uh yeah, if of course if you put a zero [disfmarker] uh weight zero on a mismatched condition, or highly mismatched then [disfmarker] then you are done. [speaker005:] Mm hmm. [speaker002:] But weights were also deter already decided uh half a year ago. [speaker005:] And they're the [disfmarker] staying the same? [speaker002:] So [disfmarker] Well, of course people will not like it. [speaker005:] Mm hmm. [speaker002:] Now [disfmarker] What is happening now is that I th I think that people try to match the criterion to solution. They have solution. Now they want to [vocalsound] make sure their criterion is [disfmarker] [speaker005:] Right. [speaker002:] And I think that this is not the right way. [speaker005:] Yeah. [speaker002:] Uh it may be that [disfmarker] that [disfmarker] Eventually it may ha may ha it may have to happen. [speaker005:] Mm hmm. [speaker002:] But it's should happen at a point where everybody feels comfortable that we did all what we could. [speaker005:] Mm hmm. [speaker002:] And I don't think we did. Basically, I think that [disfmarker] that this test was a little bit bogus because of the data and uh essentially [pause] there were these arbitrary decisions made, and [disfmarker] and everything. So, so [disfmarker] so this is [disfmarker] so this is where it is. 
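The scoring dispute is easier to see with numbers: the headline Aurora figure is a weighted average of per-condition relative improvements, so the choice of weights directly moves the result. The weights and improvement figures below are invented placeholders — the transcript only says the real weights were fixed half a year earlier.

```python
# Hypothetical per-condition relative improvements over the baseline
# and hypothetical condition weights -- illustration only.
weights = {"well-matched": 0.40, "medium-mismatch": 0.35, "high-mismatch": 0.25}
improvement = {"well-matched": 0.50, "medium-mismatch": 0.30, "high-mismatch": 0.20}

overall = sum(weights[c] * improvement[c] for c in weights)
# 0.40*0.50 + 0.35*0.30 + 0.25*0.20 = 0.355

# Zeroing the mismatched weights, as jokingly noted above, trivially
# makes the overall number equal the well-matched improvement alone.
rigged = {"well-matched": 1.0, "medium-mismatch": 0.0, "high-mismatch": 0.0}
rigged_overall = sum(rigged[c] * improvement[c] for c in rigged)  # 0.50
```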
So what we are doing at OGI now is uh uh uh working basically on our parts which we I think a little bit neglected, like noise separation. Uh so we are looking in ways is [disfmarker] in uh which [disfmarker] uh with which we can provide better initial estimate of the mel spectrum basically, which would be a l uh, f more robust to noise, and so far not much uh success. [speaker005:] Hmm. [speaker002:] We tried uh things which uh a long time ago Bill Byrne suggested, instead of using Fourier spectrum, from Fourier transform, use the spectrum from LPC model. Their argument there was the LPC model fits the peaks of the spectrum, so it may be m naturally more robust in noise. And I thought "well, that makes sense," but so far we can't get much [disfmarker] much out of it. [speaker005:] Hmm. [speaker002:] uh we may try some standard techniques like spectral subtraction and [disfmarker] [speaker005:] You haven't tried that yet? [speaker002:] not [disfmarker] not [disfmarker] not much. [speaker005:] Hmm. [speaker002:] Or even I was thinking about uh looking back into these totally ad hoc techniques like for instance uh Dennis Klatt was suggesting uh the one way to uh deal with noisy speech is to add noise to everything. [speaker005:] Hmm! [speaker002:] So. [comment] I mean, uh uh add moderate amount of noise to all data. [speaker005:] Oh! I see. [speaker002:] So that makes uh th any additive noise less addi less a a effective, right? [speaker005:] Right. [speaker002:] Because you already uh had the noise uh in a [disfmarker] And it was working at the time. It was kind of like one of these things, you know, but if you think about it, it's actually pretty ingenious. So well, you know, just take a [disfmarker] take a spectrum and [disfmarker] and [disfmarker] and add of the constant, C, to every [disfmarker] every value. [speaker005:] Well you're [disfmarker] you're basically y Yeah. So you're making all your training data more uniform. [speaker002:] Exactly. 
And if [disfmarker] if then [disfmarker] if this data becomes noisy, it b it becomes eff effectively becomes less noisy basically. [speaker005:] Hmm. [speaker002:] But of course you cannot add too much noise because then you'll s then your clean recognition goes down, but I mean it's yet to be seen how much, it's a very simple technique. [speaker005:] Mm hmm. [speaker002:] Yes indeed it's a very simple technique, you just take your spectrum and [disfmarker] and use whatever is coming from FFT, [pause] add constant, [speaker005:] Hmm. [speaker002:] you know? on [disfmarker] onto power spectrum. That [disfmarker] that [disfmarker] Or the other thing is of course if you have a spectrum, what you can s start doing, you can leave [disfmarker] start leaving out the p the parts which are uh uh low in energy and then perhaps uh one could try to find a [disfmarker] a all pole model to such a spectrum. Because a all pole model will still try to [disfmarker] to [disfmarker] to put the [disfmarker] the continuation basically of the [disfmarker] of the model into these parts where the issue set to zero. That's what we want to try. I have a visitor from Brno. He's a [disfmarker] kind of like young faculty. pretty hard working so he [disfmarker] so he's [disfmarker] so he's looking into that. [speaker005:] Hmm. [speaker002:] And then most of the effort is uh now also aimed at this e e TRAP recognition. This uh [disfmarker] this is this recognition from temporal patterns. [speaker005:] Hmm! What is that? [speaker002:] Ah, you don't know about TRAPS! [speaker001:] Hmm. [speaker005:] The TRAPS sound familiar, I [disfmarker] but I don't [disfmarker] [speaker002:] Yeah I mean tha This is familiar like sort of because we gave you the name, but, what it is, is that normally what you do is that you recognize uh speech based on a shortened spectrum. [speaker005:] Mm hmm. Mm hmm. [speaker002:] Essentially L P LPC, mel cepstrum, uh, everything starts with a spectral slice. 
Uh so if you s So, given the spectrogram you essentially are sliding [disfmarker] sliding the spectrogram along the uh f frequency axis [speaker005:] Mm hmm. [speaker002:] and you keep shifting this thing, and you have a spectrogram. [speaker005:] Mm hmm. [speaker002:] So you can say "well you can also take the time trajectory of the energy at a given frequency", [speaker005:] Mm hmm. [speaker002:] and what you get is then, that you get a p [pause] vector. And this vector can be a [disfmarker] a [disfmarker] s assigned to s some phoneme. Namely you can say i it [disfmarker] I will [disfmarker] I will say that this vector will eh [disfmarker] will [disfmarker] will describe the phoneme which is in the center of the vector. And you can try to classify based on that. [speaker005:] Hmm. [speaker002:] And you [disfmarker] so you classi so it's a very different vector, very different properties, we don't know much about it, [speaker005:] Hmm. [speaker002:] but the truth is [disfmarker] [speaker005:] But you have many of those vectors per phoneme, [speaker002:] Well, so you get many decisions. [speaker005:] right? Uh huh. [speaker002:] And then you can start dec thinking about how to combine these decisions. [speaker005:] Hmm. [speaker002:] Exactly, that's what [disfmarker] yeah, that's what it is. [speaker005:] Hmm. [speaker002:] Because if you run this uh recognition, you get [disfmarker] you still get about twenty percent error [disfmarker] uh twenty percent correct. [speaker005:] Hmm. [speaker002:] You know, on [disfmarker] on like for the frame by frame basis, so [pause] uh [disfmarker] uh so it's much better than chance. [speaker005:] How wide are the uh frequency bands? [speaker002:] That's another thing. Well c currently we start [disfmarker] I mean we start always with critical band spectrum. For various reasons. 
But uh the latest uh observation uh is that you [disfmarker] you [disfmarker] you are [disfmarker] you can get quite a big advantage of using two critical bands at the same time. [speaker001:] Are they adjacent, or are they s [speaker002:] Adjacent, adjacent. [speaker001:] OK. [speaker002:] And the reasons [disfmarker] there are some reasons for that. Because there are some reasons I can [disfmarker] I could talk about, will have to tell you about things like masking experiments which uh uh uh uh yield critical bands, and also experiments with release of masking, which actually tell you that something is happening across critical bands, across bands. And [disfmarker] [speaker005:] Well how do you [disfmarker] how do you uh convert this uh energy over time in a particular frequency band into a vector of numbers? [speaker002:] It's uh uh uh I mean time T zero is one number, [pause] time t [speaker005:] Yeah but what's the number? Is it just the [disfmarker] [speaker002:] It's a spectral energy, logarithmic spectral energy, [speaker005:] it's just the amount of energy in that band from f in that time interval. [speaker002:] yeah. Yes, yes. Yes, yes. [speaker005:] OK. [speaker002:] And that's what [disfmarker] that's what I'm saying then, so this is a [disfmarker] this is a starting vector. It's just like shortened f [pause] spectrum, or something. [speaker005:] Mm hmm. [speaker002:] But now we are trying to understand what this vector actually represents, for instance a question is like "how correlated are the elements of this vector?" Turns out they are quite correlated, because I mean, especially the neighboring ones, [speaker005:] Yeah. [speaker002:] right? They [disfmarker] they represent the same [disfmarker] almost the same configuration of the vocal tract. [speaker005:] Yeah. Mm hmm. [speaker002:] So there's a very high correlation. So the classifiers which use the diagonal covariance matrix don't like it. So we're thinking about de correlating them. 
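The vector being described — the log-energy trajectory of a single critical band, labeled with the phoneme at its center — can be sliced straight out of a spectrogram. A rough sketch, where the band count, context length, and the random "spectrogram" are all invented stand-ins:

```python
import numpy as np

# Toy log-energy spectrogram: rows = critical bands, columns = frames.
rng = np.random.default_rng(0)
log_spec = rng.standard_normal((15, 200))

def trap_vector(log_spec, band, center, half_context=50):
    """Temporal pattern: log energies of one band over a window
    centered on the frame whose phoneme label the vector carries."""
    return log_spec[band, center - half_context : center + half_context + 1]

v = trap_vector(log_spec, band=7, center=100)
# one such vector per (band, frame); in real speech its neighboring
# elements are highly correlated, since adjacent frames share nearly
# the same vocal-tract configuration
```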
[speaker005:] Hmm. [speaker002:] Then the question is uh "can you describe elements of this vector by Gaussian distributions", or to what extent? Because uh [disfmarker] And [disfmarker] and [disfmarker] and so on and so on. So we are learning quite a lot about that. [speaker005:] Hmm. [speaker002:] And then another issue is how many vectors we should be using, I mean the [disfmarker] so the minimum is one. [speaker005:] Mm hmm. [speaker002:] But I mean is the [disfmarker] is the critical band the right uh uh dimension? So we somehow made arbitrary decision, "yes". Then [disfmarker] but then now we are thinking a lot how to [disfmarker] uh how to use at least the neighboring band because that seems to be happening [disfmarker] This I somehow start to believe that's what's happening in recognition. Cuz a lot of experiments point to the fact that people can split the signal into critical bands, but then oh uh uh so you can [disfmarker] you are quite capable of processing a signal in uh uh independently in individual critical bands. That's what masking experiments tell you. But at the same time you most likely pay attention to at least neighboring bands when you are making any decisions, you compare what's happening in [disfmarker] in this band to what's happening to the band [disfmarker] to [disfmarker] to [disfmarker] to the [disfmarker] to the neighboring bands. And that's how you make uh decisions. That's why the articulatory events, which uh F F Fletcher talks about, they are about two critical bands. You need at least two, basically. You need some relative, relative relation. [speaker001:] Hmm. [speaker005:] Hmm. [speaker002:] Absolute number doesn't tell you the right thing. You need to [disfmarker] you need to compare it to something else, what's happening but it's what's happening in the [disfmarker] in the close neighborhood. 
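For the decorrelation the group mentions wanting (so that diagonal-covariance Gaussian classifiers are happier), one generic option — not necessarily what was actually used — is an orthogonal transform such as the DCT, which approximately diagonalizes the covariance of smoothly varying trajectories:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis."""
    k = np.arange(n)[:, None]
    t = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * k * (2 * t + 1) / (2 * n))
    m[0, :] = np.sqrt(1.0 / n)
    return m

# Correlated toy "TRAP" vectors: an AR(1) trajectory per vector, so
# neighboring elements are strongly correlated (an invented stand-in
# for real log-energy trajectories).
rng = np.random.default_rng(1)
n_vec, n_dim = 2000, 32
x = np.zeros((n_vec, n_dim))
x[:, 0] = rng.standard_normal(n_vec)
for t in range(1, n_dim):
    x[:, t] = 0.9 * x[:, t - 1] + 0.436 * rng.standard_normal(n_vec)

y = x @ dct_matrix(n_dim).T   # DCT coefficients of each vector

def mean_offdiag(a):
    c = np.abs(np.corrcoef(a.T))
    return (c.sum() - np.trace(c)) / (c.size - len(c))
# mean_offdiag(y) is much smaller than mean_offdiag(x)
```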
So if you are making decision what's happening at one kilohertz, you want to know what's happening at nine hundred hertz and it [disfmarker] and maybe at eleven hundred hertz, but you don't much care what's happening at three kilohertz. [speaker005:] So it's really w It's sort of like saying that what's happening at one kilohertz depends on what's happening around it. It's sort of relative to it. [speaker002:] To some extent, it [disfmarker] that is also true. [speaker005:] Mm hmm. [speaker002:] Yeah. But it's [disfmarker] but for [disfmarker] but for instance, [vocalsound] th uh [vocalsound] uh what [disfmarker] what uh humans are very much capable of doing is that if th if they are exactly the same thing happening in two neighboring critical bands, recognition can discard it. [speaker005:] Hmm. [speaker002:] Is what's happening [disfmarker] Hey! [speaker001:] Hey! [speaker002:] OK, we need us another [disfmarker] another voice here. [speaker005:] Hey Stephane. [speaker002:] Yeah, I think so. [speaker005:] Yep. [speaker002:] Yeah? [speaker005:] Sure. Go ahead. [speaker002:] And so so [disfmarker] so for instance if you d if you a if you add the noise that normally masks [disfmarker] masks the uh [disfmarker] the [disfmarker] the signal [speaker005:] Mm hmm. [speaker002:] right? and you can show that in [disfmarker] that if the [disfmarker] if you add the noise outside the critical band, that doesn't affect the [disfmarker] the decisions you're making about a signal within a critical band. [speaker005:] Hmm. [speaker002:] Unless this noise is modulated. If the noise is modulated, with the same modulation frequency as the noise in a critical band, the amount of masking is less. [speaker005:] Mmm. [speaker002:] The moment you [disfmarker] moment you provide the noise in n neighboring critical bands. 
So the s m masking curve, normally it looks like sort of [disfmarker] I start from [disfmarker] from here, so you [disfmarker] [comment] you have uh no noise then you [disfmarker] you [disfmarker] you are expanding the critical band, so the amount of masking is increasing. And when you e hit a certain point, which is a critical band, then the amount of masking is the same. [speaker005:] Mmm. [speaker002:] So that's the famous experiment of Fletcher, a long time ago. Like that's where people started thinking "wow this is interesting!" [speaker005:] Yeah. [speaker002:] So. But, if you [disfmarker] if you [disfmarker] if you modulate the noise, the masking goes up and the moment you start hitting the [disfmarker] another critical band, the masking goes down. So essentially [disfmarker] essentially that's a very clear indication that [disfmarker] that [disfmarker] that [pause] cognition can take uh uh into consideration what's happening in the neighboring bands. But if you go too far in a [disfmarker] in a [disfmarker] if you [disfmarker] if the noise is very broad, you are not increasing much more, so [disfmarker] so if you [disfmarker] if you are far away from the signal [disfmarker] uh from the signal f uh the frequency at which the signal is, then the m even the [disfmarker] when the noise is co modulated it [disfmarker] it's not helping you much. [speaker005:] Mm hmm. Yeah. Mm hmm. [speaker002:] So. [speaker001:] Hmm. [speaker002:] So things like this we are kind of playing with [disfmarker] with [disfmarker] with the hope that perhaps we could eventually u use this in a [disfmarker] in a real recognizer. [speaker001:] Mm hmm. [speaker002:] Like uh partially of course we promised to do this under the [disfmarker] the [disfmarker] the Aurora uh program. [speaker005:] But you probably won't have anything before the next time we have to evaluate, right? [speaker002:] Probably not. [speaker005:] Yeah. 
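[Editor's note: the band-comparison idea described above — judging a band relative to its immediate neighbors, so that identical activity in adjacent critical bands can be discarded — can be sketched as a simple relative feature. This is purely a hypothetical illustration, not the speakers' actual front end.]

```python
import numpy as np

def neighbor_band_features(log_band_energies):
    """Each band's log energy minus the mean of its two immediate
    neighbors, so identical activity in adjacent bands cancels out.
    Toy formulation for illustration only."""
    e = np.asarray(log_band_energies, dtype=float)
    padded = np.concatenate(([e[0]], e, [e[-1]]))  # repeat edge bands
    neighbor_mean = (padded[:-2] + padded[2:]) / 2.0
    return e - neighbor_mean

# A flat spectrum (same level in every band) yields all-zero features,
# while a local peak stands out relative to its neighborhood:
flat = neighbor_band_features([3.0, 3.0, 3.0, 3.0])
peaked = neighbor_band_features([0.0, 0.0, 5.0, 0.0, 0.0])
```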
[speaker002:] Well, maybe, most likely we will not have anything which c would comply with the rules. [speaker005:] Ah. [speaker002:] like because uh uh [speaker005:] Latency and things. [speaker002:] latency currently chops the require uh significant uh latency amount of processing, [speaker005:] Mm hmm. [speaker002:] because uh we don't know any better, yet, than to use the neural net classifiers, uh and uh [disfmarker] and uh TRAPS. [speaker005:] Yeah. [speaker001:] Mm hmm. [speaker002:] Though the [disfmarker] the work which uh everybody is looking at now aims at s trying to find out what to do with these vectors, so that a g simple Gaussian classifier would be happier with it. [speaker003:] Hmm. [speaker005:] Mm hmm. [speaker002:] or to what extent a Gaussian classifier should be unhappy uh that, and how to Gaussian ize the vectors, and [disfmarker] [speaker005:] Hmm. [speaker002:] So this is uh what's happening. Then Sunil is uh uh uh asked me f for one month's vacation and since he did not take any vacation for two years, I had no [disfmarker] I didn't have heart to tell him no. So he's in India. [speaker005:] Wow. [speaker002:] And uh [disfmarker] [speaker005:] Is he getting married or something? [speaker002:] Uh well, he may be looking for a girl, for [disfmarker] for I don't [disfmarker] I don't [disfmarker] I don't ask. I know that Naran when last time Narayanan did that he came back engaged. [speaker005:] Right. Well, I mean, I've known other friends who [disfmarker] they [disfmarker] they go to Ind they go back home to India for a month, they come back married, [speaker002:] Yeah. I know. I know, I know, [speaker005:] you know, huh. [speaker002:] and then of course then what happened with Narayanan was that he start pushing me that he needs to get a PHD because they wouldn't give him his wife. 
And she's very pretty and he loves her and so [disfmarker] so we had to really [disfmarker] [speaker005:] So he finally had some incentive to finish, [speaker002:] Oh yeah. [speaker005:] huh? [speaker002:] We had [disfmarker] well I had a incentive because he [disfmarker] he always had this plan except he never told me. [speaker005:] Oh. [speaker002:] Sort of figured that [disfmarker] That was a uh that he uh he told me the day when we did very well at our NIST evaluations of speaker recognition, the technology, and he was involved there. We were [disfmarker] after presentation we were driving home and he told me. [speaker005:] When he knew you were happy, huh? [speaker002:] Yeah. So I [disfmarker] I said "well, yeah, OK" so he took another [disfmarker] another three quarter of the year but uh he was out. So I [disfmarker] wouldn't surprise me if he has a plan like that, though [disfmarker] though uh Pratibha still needs to get out first. [speaker005:] Hmm. [speaker002:] Cuz Pratibha is there a [disfmarker] a year earlier. [speaker005:] Hmm. [speaker002:] And S and Satya needs to get out very first because he's [disfmarker] he already has uh four years served, though one year he was getting masters. So. So. [speaker003:] Hmm. [speaker005:] So have the um [disfmarker] when is the next uh evaluation? June or something? [speaker002:] Which? Speaker recognition? [speaker005:] No, for uh Aurora? [speaker002:] Uh there, we don't know about evaluation, next meeting is in June. [speaker005:] Hmm. [speaker002:] And uh uh but like getting [disfmarker] get together. [speaker005:] Oh, OK. Are people supposed to rerun their systems, [speaker002:] Nobody said that yet. [speaker005:] or [disfmarker]? [speaker002:] I assume so. [speaker005:] Hmm. [speaker002:] Uh yes, uh, but nobody even set up yet the [pause] date for uh delivering uh endpointed data. [speaker005:] Wow. [speaker002:] And this uh [disfmarker] that [disfmarker] that sort of stuff. 
But I uh, yeah, what I think would be of course extremely useful, if we can come to our next meeting and say "well you know we did get fifty percent improvement. If [disfmarker] if you are interested we eventually can tell you how", but uh we can get fifty percent improvement. [speaker005:] Mm hmm. [speaker002:] Because people will s will be saying it's impossible. [speaker005:] Hmm. Do you know what the new baseline is? Oh, I guess if you don't have [disfmarker] [speaker002:] Twenty two [disfmarker] t twenty [disfmarker] twenty two percent better than the old baseline. [speaker005:] Using your uh voice activity detector? [speaker002:] u Yes. Yes. But I assume that it will be similar, I don't [disfmarker] I [disfmarker] I don't see the reason why it shouldn't be. [speaker005:] Similar, yeah. Mm hmm. [speaker002:] I d I don't see reason why it should be worse. [speaker005:] Yeah. [speaker002:] Cuz if it is worse, then we will raise the objection, we say "well you know how come?" Because eh if we just use our voice activity detector, which we don't claim even that it's wonderful, it's just like one of them. [speaker003:] Mm hmm. [speaker005:] Yeah. [speaker002:] We get this sort of improvement, how come that we don't see it on [disfmarker] on [disfmarker] on [disfmarker] on your endpointed data? [speaker003:] Yeah. I guess it could be even better, because the voice activity detector that I choosed is something that cheating, it's using the alignment of the speech recognition system, [speaker002:] I think so. Yeah. C yeah uh [speaker003:] and only the alignment on the clean channel, and then mapped this alignment to the noisy channel. [speaker002:] and on clean speech data. [speaker005:] Oh, OK. [speaker002:] Yeah. 
Well David told me [disfmarker] David told me yesterday or Harry actually he told Harry from QualComm and Harry uh brought up the suggestion we should still go for fifty percent. He says "are you aware that your system does only thirty percent uh comparing to [disfmarker] to endpointed baselines?" [speaker003:] Yeah. [speaker002:] So they must have run already something. [speaker005:] Hmm. [speaker002:] So. And Harry said "Yeah. But I mean we think that we [disfmarker] we didn't say the last word yet, that we have other [disfmarker] other things which we can try." [speaker005:] Hmm. [speaker002:] So. So there's a lot of discussion now about this uh new criterion. [speaker003:] Mm hmm. [speaker002:] Because Nokia was objecting, with uh QualComm's [disfmarker] we basically supported that, we said "yes". [speaker003:] Mm hmm. [speaker002:] Now everybody else is saying "well you guys might [disfmarker] must be out of your mind." uh The [disfmarker] Guenter Hirsch who d doesn't speak for Ericsson anymore because he is not with Ericsson and Ericsson may not [disfmarker] may withdraw from the whole Aurora activity because they have so many troubles now. [speaker005:] Wow. [speaker002:] Ericsson's laying off twenty percent of people. [speaker001:] Wow. [speaker005:] Where's uh Guenter going? [speaker002:] Well Guenter is already [disfmarker] he got the job uh already was working on it for past two years or three years [disfmarker] [speaker005:] Mm hmm. [speaker002:] he got a job uh at some [disfmarker] some Fachschule, the technical college not too far from Aachen. [speaker005:] Hmm! [speaker002:] So it's like professor [disfmarker] u university professor [speaker005:] Mm hmm. [speaker002:] you know, not quite a university, not quite a sort of [disfmarker] it's not Aachen University, but it's a good school and he [disfmarker] he's happy. [speaker005:] Mm hmm. Hmm! 
[speaker002:] And he [disfmarker] well, he was hoping to work uh with Ericsson like on t uh like consulting basis, [speaker005:] Mm hmm. [speaker002:] but right now he says [disfmarker] says it doesn't look like that anybody is even thinking about speech recognition. [speaker005:] Wow! [speaker002:] They think about survival. Yeah. [speaker005:] Hmm. [speaker002:] So. So. But this is being now discussed right now, and it's possible that uh [disfmarker] that [disfmarker] that it may get through, that we will still stick to fifty percent. [speaker003:] Mm hmm. [speaker002:] But that means that nobody will probably get this im this improvement. yet, wi with the current system. Which event es essentially I think that we should be happy with because that [disfmarker] that would mean that at least people may be forced to look into alternative solutions [speaker003:] Mm hmm. Mm hmm. [speaker002:] and [disfmarker] [speaker003:] But maybe [disfmarker] I [disfmarker] I mean we are not too far from [disfmarker] from fifty percent, from the new baseline. [speaker002:] Uh, but not [disfmarker] [speaker003:] Which would mean like sixty percent over the current baseline, which is [disfmarker] [speaker002:] Yeah. Yes. Yes. We [disfmarker] we getting [disfmarker] we getting there, right. [speaker003:] Well. We are around fifty, fifty five. [speaker002:] Yeah. [speaker003:] So. [speaker002:] Yeah. [speaker003:] Mm hmm. [speaker002:] Is it like sort of [disfmarker] is [disfmarker] How did you come up with this number? If you improve twenty [disfmarker] by twenty percent the c the f the all baselines, it's just a quick c comp co computation? [speaker003:] Yeah. I don't know exactly if it's [disfmarker] [speaker002:] Uh huh. I think it's about right. [speaker003:] Yeah, because it de it depends on the weightings and [disfmarker] [speaker002:] Yeah, yeah. [speaker003:] Yeah. But. Mm hmm. [speaker005:] Hmm. 
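[Editor's note: the back-of-the-envelope behind the "like sixty percent" figure is that relative error-rate reductions compose multiplicatively. A quick illustrative check — the actual Aurora metric weights several conditions, as the speakers note, so this is only an approximation:]

```python
def combined_improvement(r1, r2):
    """Two successive relative error-rate reductions compose as:
    surviving error = (1 - r1) * (1 - r2)."""
    return 1.0 - (1.0 - r1) * (1.0 - r2)

# The new baseline is 22% better than the old one; a system 50% better
# than the new baseline is then about 61% better than the old one,
# consistent with the rough "sixty percent" figure in the discussion:
total = combined_improvement(0.22, 0.50)
```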
How's your documentation or whatever it w what was it you guys were working on last week? [speaker003:] Yeah, finally we [disfmarker] we've not finished with this. We stopped. [speaker004:] More or less it's finished. [speaker003:] Yeah. [speaker004:] Ma nec to need a little more time to improve the English, and maybe s to fill in something [disfmarker] some small detail, something like that, [speaker003:] Mm hmm. [speaker005:] Hmm. [speaker004:] but it's more or less ready. [speaker003:] Yeah. Well, we have a document that explain a big part of the experiments, [speaker004:] Necessary to [disfmarker] to include the bi the bibliography. [speaker003:] but [speaker004:] Mm hmm. [speaker003:] it's not, yeah, finished yet. Mm hmm. [speaker005:] So have you been running some new experiments? I [disfmarker] I thought I saw some jobs of yours running on some of the machine [disfmarker] [speaker003:] Yeah. Right. We've fff [comment] done some strange things like removing C zero or C one from the [disfmarker] [vocalsound] [vocalsound] the vector of parameters, and we noticed that C one is almost not useful at all. You can remove it from the vector, it doesn't hurt. [speaker005:] Really?! That has no effect? [speaker003:] Um. [speaker005:] Eh [disfmarker] Is this in the baseline? or in uh [disfmarker] [speaker003:] In the [disfmarker] No, in the proposal. [speaker005:] in [disfmarker] uh huh, uh huh. [speaker002:] So we were just discussing, since you mentioned that, in [disfmarker] it w [speaker003:] Mm hmm. [speaker002:] driving in the car with Morgan this morning, we were discussing a good experiment for b for beginning graduate student who wants to run a lot of [disfmarker] who wants to get a lot of numbers on something [speaker003:] Mm hmm. [speaker002:] which is, like, imagine that you will [disfmarker] you will start putting every co any coefficient, which you are using in your vector, in some general power. [speaker005:] In some what? [speaker002:] General pow power. 
Like sort of you take a s power of two, or take a square root, or something. [speaker005:] Mm hmm. [speaker003:] Mm hmm. [speaker005:] Mm hmm. [speaker002:] So suppose that you are working with a s C zer C one. So if you put it in a s square root, that effectively makes your model half as efficient. Because uh your uh Gaussian mixture model, right? computes the mean. [speaker005:] Mm hmm. [speaker002:] And [disfmarker] and uh i i i but it's [disfmarker] the mean is an exponent of the whatever, the [disfmarker] the [disfmarker] this Gaussian function. [speaker005:] You're compressing the range, right? of that [disfmarker] [speaker002:] So you're compressing the range of this coefficient, so it's becoming less efficient. Right? [speaker005:] Mm hmm. [speaker002:] So. So. Morgan was [@ @] and he was [disfmarker] he was saying well this might be the alternative way how to play with a [disfmarker] with a fudge factor, you know, uh in the [disfmarker] [speaker005:] Oh. [speaker002:] you know, just compress the whole vector. [speaker005:] Yeah. [speaker002:] And I said well in that case why don't we just start compressing individual elements, like when [disfmarker] when [disfmarker] because in old days we were doing [disfmarker] when [disfmarker] when people still were doing template matching and Euclidean distances, we were doing this liftering of parameters, right? [speaker005:] Uh huh. [speaker002:] because we observed that uh higher parameters were more important than lower for recognition. And basically the [disfmarker] the C ze C one contributes mainly slope, [speaker005:] Right. [speaker002:] and it's highly affected by uh frequency response of the [disfmarker] of the recording equipment and that sort of thing, [speaker003:] Mm hmm. [speaker005:] Mm hmm. [speaker002:] so [disfmarker] so we were coming with all these f various lifters. [speaker005:] Mm hmm. 
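[Editor's note: the "put it in a square root" idea above — compressing a coefficient's range so its Gaussian effectively counts for less — can be checked numerically. A toy sketch with synthetic C1-like values, not any real front end:]

```python
import numpy as np

rng = np.random.default_rng(0)
c1 = rng.normal(loc=5.0, scale=2.0, size=10000)  # toy C1-like coefficient

# "Put it in a square root": compress the coefficient's range,
# preserving sign for negative values.
compressed = np.sign(c1) * np.abs(c1) ** 0.5

# The spread shrinks, so this dimension contributes less to a
# Gaussian score relative to the uncompressed dimensions.
```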
[speaker002:] uh Bell Labs had he [disfmarker] this uh uh r raised cosine lifter which still I think is built into H [disfmarker] HTK for reasons n unknown to anybody, but [disfmarker] but uh we had exponential lifter, or triangle lifter, basic number of lifters. [speaker005:] Hmm. [speaker002:] And. But so they may be a way to [disfmarker] to fiddle with the f with the f [speaker005:] Insertions. [speaker002:] Insertions, deletions, or the [disfmarker] the [disfmarker] giving a relative [disfmarker] uh basically modifying relative importance of the various parameters. [speaker003:] Mm hmm. [speaker002:] The only of course problem is that there's an infinite number of combinations [speaker005:] Oh. Uh huh. [speaker002:] and if the [disfmarker] if you s if y [speaker005:] You need like a [disfmarker] some kind of a [disfmarker] [speaker002:] Yeah, you need a lot of graduate students, and a lot of computing power. [speaker005:] You need to have a genetic algorithm, that basically tries random permutations of these things. [speaker002:] I know. Exactly. Oh. If you were at Bell Labs or [disfmarker] I d d I shouldn't be saying this in [disfmarker] on [disfmarker] on a mike, right? Or I [disfmarker] uh [disfmarker] IBM, that's what [disfmarker] maybe that's what somebody would be doing. [speaker005:] Yeah. [speaker001:] Hmm. [speaker002:] Oh, I mean, I mean the places which have a lot of computing power, [speaker005:] Mm hmm. [speaker002:] so because it is really it's a p it's a [disfmarker] it's [disfmarker] it will be reasonable search [speaker005:] Yeah. [speaker002:] uh but I wonder if there isn't some way of doing this uh search like when we are searching say for best discriminants. [speaker005:] You know actually, I don't know that this wouldn't be all that bad. I mean you [disfmarker] you compute the features once, [speaker002:] Yeah. [speaker005:] right? [speaker002:] Yeah. 
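[Editor's note: the raised-cosine/raised-sine lifter mentioned here is the one HTK applies via its CEPLIFTER parameter, with L = 22 by default. A minimal sketch of that weighting, which boosts mid-order cepstra relative to low-order ones like C1:]

```python
import numpy as np

def raised_sine_lifter(cepstra, L=22):
    """Raised-sine cepstral lifter (the form HTK uses, L = 22 default):
        c'_n = (1 + (L/2) * sin(pi * n / L)) * c_n
    Low-order coefficients are left nearly unweighted while mid-order
    coefficients are emphasized."""
    c = np.asarray(cepstra, dtype=float)
    n = np.arange(len(c))
    weights = 1.0 + (L / 2.0) * np.sin(np.pi * n / L)
    return weights * c

# The weight on C0 is exactly 1, while higher-order terms are boosted:
w = raised_sine_lifter(np.ones(13))
```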
[speaker005:] And then these exponents are just applied to that [disfmarker] [speaker002:] Absolutely. And hev everything is fixed. [speaker005:] So. [speaker002:] Everything is fixed. [speaker005:] And is this something that you would adjust for training? [speaker002:] Each [disfmarker] each [disfmarker] [speaker005:] or only recognition? [speaker002:] For both, you would have to do. [speaker005:] You would do it on both. [speaker002:] Yeah. [speaker005:] So you'd actually [disfmarker] [speaker002:] You have to do bo both. Because essentially you are saying "uh this feature is not important". [speaker005:] Mm hmm. [speaker002:] Or less important, so that's [disfmarker] th that's a [disfmarker] that's a painful one, yeah. [speaker005:] So for each [disfmarker] uh set of exponents that you would try, it would require a training and a recognition? [speaker002:] Yeah. But [disfmarker] but wait a minute. You may not need to re uh uh retrain the m model. You just may n may need to c uh give uh less weight to [disfmarker] to uh a mod uh a component of the model which represents this particular feature. You don't have to retrain it. [speaker005:] Oh. So if you [disfmarker] Instead of altering the feature vectors themselves, you [disfmarker] you modify the [disfmarker] the [disfmarker] the Gaussians in the models. [speaker002:] You just multiply. Yeah. Yep. You modify the Gaussian in the model, but in the [disfmarker] in the test data you would have to put it in the power, but in a training what you c in a training uh [disfmarker] in trained model, all you would have to do is to multiply a model by appropriate constant. [speaker005:] Uh huh. But why [disfmarker] if you're [disfmarker] if you're multi if you're altering the model, why w in the test data, why would you have to muck with the uh cepstral coefficients? [speaker002:] Because in uh test [disfmarker] in uh test data you ca don't have a model. You have uh only data. But in a [disfmarker] in a tr [speaker005:] No. 
But you're running your data through that same model. [speaker002:] That is true, but w I mean, so what you want to do [disfmarker] You want to say if uh obs you [disfmarker] if you observe something like Stephane observes, that C one is not important, you can do two things. [speaker005:] Mm hmm. Mm hmm. [speaker002:] If you have a trained [disfmarker] trained recognizer, in the model, you know the [disfmarker] the [disfmarker] the [disfmarker] the component which [disfmarker] I [disfmarker] I mean di dimension [vocalsound] wh [speaker005:] Mm hmm. All of the [disfmarker] all of the mean and variances that correspond to C one, you put them to zero. [speaker002:] To the s you [disfmarker] you know it. [speaker005:] Yeah. [speaker002:] But what I'm proposing now, if it is important but not as important, you multiply it by point one in a model. But [disfmarker] but [disfmarker] but [disfmarker] [speaker005:] But what are you multiplying? Cuz those are means, right? I mean you're [disfmarker] [speaker001:] You're multiplying the standard deviation? So it's [disfmarker] [speaker002:] I think that you multiply the [disfmarker] I would [disfmarker] I would have to look in the [disfmarker] in the math, I mean how [disfmarker] how does the model uh [disfmarker] [speaker005:] I think you [disfmarker] Yeah, I think you'd have to modify the standard deviation or something, so that you make it [vocalsound] wider or narrower. [speaker002:] Yeah. [speaker001:] Cuz [disfmarker] [speaker002:] Yeah. [speaker001:] Yeah. [speaker003:] Yeah. [speaker002:] Effectively, that's [disfmarker] that [disfmarker] that's [disfmarker] I [disfmarker] Exactly. That's what you do. That's what you do, you [disfmarker] you [disfmarker] you modify the standard deviation as it was trained. [speaker001:] Yeah. [speaker002:] Effectively you, you know y in f in front of the [disfmarker] of the model, you put a constant. 
S yeah effectively what you're doing is you [disfmarker] is you are modifying the [disfmarker] the [disfmarker] the deviation. Right? [speaker005:] Oop. [speaker001:] The spread, [speaker005:] Sorry. [speaker001:] right. [speaker002:] Yeah, the spread. [speaker005:] So. [speaker001:] It's the same [disfmarker] same mean, [speaker002:] And [disfmarker] and [disfmarker] and [disfmarker] [speaker001:] right? [speaker005:] So by making th the standard deviation narrower, [comment] uh your scores get worse for [disfmarker] [speaker002:] Yeah. [speaker005:] unless it's exactly right on the mean. [speaker002:] Your als No. By making it narrower, [speaker005:] Right? I mean there's [disfmarker] you're [disfmarker] you're allowing for less variance. [speaker002:] uh y your [disfmarker] [speaker001:] Mm hmm. [speaker002:] Yes, so you making this particular dimension less important. Because see what you are fitting is the multidimensional Gaussian, right? [speaker005:] Mm hmm. [speaker002:] It's a [disfmarker] it has [disfmarker] it has uh thirty nine dimensions, or thirteen dimensions if you g ignore deltas and double deltas. [speaker005:] Mm hmm. Mm hmm. [speaker002:] So in order [disfmarker] if you [disfmarker] in order to make dimension which [disfmarker] which Stephane sees uh less important, uh uh I mean not [disfmarker] not useful, less important, what you do is that this particular component in the model you can multiply by w you can [disfmarker] you can basically de weight it in the model. But you can't do it in a [disfmarker] in a test data because you don't have a model for th I mean uh when the test comes, but what you can do is that you put this particular component in [disfmarker] and [disfmarker] and you compress it. That becomes uh th gets less variance, subsequently becomes less important. [speaker005:] Couldn't you just do that to the test data and not do anything with your training data? 
[speaker002:] That would be very bad, because uh your t your model was trained uh expecting uh, that wouldn't work. Because your model was trained expecting a certain var variance on C one. [speaker005:] Uh huh. [speaker002:] And because the model thinks C one is important. After you train the model, you sort of [disfmarker] y you could do [disfmarker] you could do still what I was proposing initially, that during the training you [disfmarker] you compress C one that becomes [disfmarker] then it becomes less important in a training. [speaker005:] Mm hmm. [speaker002:] But if you have [disfmarker] if you want to run e ex extensive experiment without retraining the model, you don't have to retrain the model. You train it on the original vector. But after, you [disfmarker] wh when you are doing this parametric study of importance of C one you will de weight the C one component in the model, and you will put in the [disfmarker] you will compress the [disfmarker] this component in a [disfmarker] in the test data. s by the same amount. [speaker005:] Could you also if you wanted to [disfmarker] if you wanted to try an experiment uh by [pause] leaving out say, C one, couldn't you, in your test data, uh modify the [disfmarker] all of the C one values to be um way outside of the normal range of the Gaussian for C one that was trained in the model? So that effectively, the C one never really contributes to the score? [speaker003:] Mm hmm. [speaker005:] Do you know what I'm say [speaker002:] No, that would be a severe mismatch, right? what you are proposing? [speaker005:] Yeah, someth [speaker002:] N no you don't want that. Because that would [disfmarker] then your model would be unlikely. Your likelihood would be low, right? [speaker005:] Mm hmm. [speaker002:] Because you would be providing severe mismatch. [speaker005:] But what if you set if to the mean of the model, then? 
And it was a cons you set all C ones coming in through your test data, you [disfmarker] you change whatever value that was there to the mean that your model had. [speaker002:] No that would be very good match, right? [speaker005:] Yeah. [speaker002:] That you would [disfmarker] [speaker003:] Which [disfmarker] Well, yeah, but we have several means. So. [speaker002:] I see what you are sa [pause] saying, [speaker003:] Right? [speaker001:] Saying. [speaker002:] but uh, [vocalsound] no, no I don't think that it would be the same. I mean, no, the [disfmarker] If you set it to a mean, that would [disfmarker] No, you can't do that. [speaker005:] Oh, that's true, [speaker002:] Y you ca you ca Ch Chuck, you can't do that. [speaker005:] right, yeah, because you [disfmarker] you have [disfmarker] [speaker003:] Wait. Which [disfmarker] [speaker005:] Yeah. [speaker002:] Because that would be a really f fiddling with the data, you can't do that. [speaker005:] Mm hmm. Mm hmm. [speaker002:] But what you can do, I'm confident you ca well, I'm reasonably confident and I putting it on the record, right? I mean y people will listen to it for [disfmarker] for centuries now, is [pause] what you can do, is you train the model uh with the [disfmarker] with the original data. [speaker001:] Mm hmm. [speaker002:] Then you decide that you want to see how important C [disfmarker] C one is. So what you will do is that a component in the model for C one, you will divide it by [disfmarker] by two. And you will compress your test data by square root. [speaker005:] Mm hmm. [speaker002:] Then you will still have a perfect m match. Except that this component of C one will be half as important in a [disfmarker] in a overall score. [speaker005:] Mm hmm. Mm hmm. [speaker002:] Then you divide it by four and you take a square, f fourth root. 
Then if you think that some component is more [disfmarker] is more important then th th th it then [disfmarker] then uh uh i it is, based on training, then you uh multiply this particular component in the model by [disfmarker] by [disfmarker] by [disfmarker] [speaker005:] You're talking about the standard deviation? [speaker002:] yeah. [speaker005:] Yeah. [speaker002:] Yeah, multiply this component uh i it by number b larger than one, [speaker005:] Mm hmm. [speaker002:] and you put your data in power higher than one. Then it becomes more important. In the overall score, I believe. [speaker005:] But [pause] don't you have to do something to the mean, also? [speaker003:] Yeah, but, at the [disfmarker] [speaker002:] No. [speaker003:] No. [speaker002:] No. [speaker001:] Yeah. [speaker003:] But I think it's [disfmarker] uh the [disfmarker] The variance is on [disfmarker] on the denominator in the [disfmarker] in the Gaussian equation. So. I think it's maybe it's the contrary. If you want to decrease the importance of a c parameter, you have to increase it's variance. [speaker002:] Yes. Right. Yes. [speaker004:] Multiply. [speaker002:] Exactly. Yeah. So you [disfmarker] so you may want to do it other way around, [speaker003:] Hmm. That's right. OK. [speaker002:] yeah. [speaker003:] Mm hmm. [speaker001:] Right. [speaker005:] But if your [disfmarker] If your um original data for C one had a mean of two. [speaker002:] Uh huh. [speaker005:] And now you're [disfmarker] you're [disfmarker] you're changing that by squaring it. Now your mean of your C one original data has [disfmarker] [comment] is four. But your model still has a mean of two. So even though you've expended the range, your mean doesn't match anymore. [speaker003:] Mm hmm. [speaker005:] Do you see what I mean? [speaker002:] Let's see. [speaker003:] I think [disfmarker] What I see [disfmarker] What could be done is you don't change your features, which are computed once for all, [speaker002:] Uh huh. 
[speaker003:] but you just tune the model. So. You have your features. You train your [disfmarker] your model on these features. [speaker005:] Mm hmm. [speaker003:] And then if you want to decrease the importance of C one you just take the variance of the C one component in the [disfmarker] in the model and increase it if you want to decrease the importance of C one or decrease it [disfmarker] [speaker005:] Yeah. [speaker002:] Yeah. [speaker005:] Right. [speaker002:] Yeah. You would have to modify the mean in the model. I [disfmarker] you [disfmarker] I agree with you. Yeah. Yeah, [speaker005:] Yeah, so y [speaker002:] but I mean, but it's [disfmarker] it's i it's do able, [speaker003:] Well. [speaker002:] right? I mean, it's predictable. Uh. Yeah. [speaker005:] It's predictable, yeah. [speaker002:] Yeah. Yeah, it's predictable. [speaker005:] Yeah. [speaker003:] Mmm. [speaker005:] But as a simple thing, you could just [disfmarker] just muck with the variance. [speaker003:] Just adjust the model, yeah. [speaker005:] to get uh this [disfmarker] uh this [disfmarker] the effect I think that you're talking about, [speaker002:] Mm hmm. [speaker005:] right? [speaker002:] It might be. [speaker005:] Could increase the variance to decrease the importance. [speaker002:] Mm hmm. [speaker003:] Mm hmm. [speaker005:] Yeah, because if you had a huge variance, you're dividing by a large number, [comment] you get a very small contribution. [speaker003:] Yeah, it becomes more flat [speaker001:] Doesn't matter [disfmarker] [speaker003:] and [disfmarker] [speaker001:] Right. [speaker002:] Yeah. [speaker005:] Yeah. [speaker003:] Yeah. [speaker005:] Hmm. [speaker001:] Yeah, the sharper the variance, the more [disfmarker] more important to get that one right. [speaker002:] Mm hmm. [speaker005:] Yeah, you know actually, this reminds me of something that happened uh when I was at BBN. We were playing with putting um pitch into the Mandarin recognizer. [speaker002:] Mm hmm. 
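[Editor's note: the conclusion the speakers converge on — inflate a dimension's variance in the trained model to de-weight it, with no retraining — can be verified with a toy diagonal-covariance Gaussian. This is an illustrative sketch, not any specific recognizer's scoring code:]

```python
import numpy as np

def diag_gauss_loglik(x, mean, var):
    """Log-likelihood of x under a diagonal-covariance Gaussian."""
    x, mean, var = (np.asarray(a, dtype=float) for a in (x, mean, var))
    return float(np.sum(-0.5 * (np.log(2 * np.pi * var)
                                + (x - mean) ** 2 / var)))

mean = np.array([0.0, 0.0])
matched = np.array([0.0, 0.0])
mismatched = np.array([0.0, 3.0])   # dimension 1 (say, C1) is off

# Score gap between matched and mismatched frames, before de-weighting:
var = np.array([1.0, 1.0])
gap_before = (diag_gauss_loglik(matched, mean, var)
              - diag_gauss_loglik(mismatched, mean, var))

# Inflate dimension 1's variance in the model only; the gap collapses,
# i.e. that dimension barely influences the score any more:
var_deweighted = np.array([1.0, 100.0])
gap_after = (diag_gauss_loglik(matched, mean, var_deweighted)
             - diag_gauss_loglik(mismatched, mean, var_deweighted))
```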
[speaker005:] And this particular pitch algorithm um when it didn't think there was any voicing, was spitting out zeros. So we were getting [disfmarker] uh when we did clustering, we were getting groups uh of features [speaker002:] p Pretty new outliers, interesting outliers, [speaker005:] yeah, with [disfmarker] with a mean of zero and basically zero variance. [speaker002:] right? Variance. [speaker005:] So, when ener [comment] when anytime any one of those vectors came in that had a zero in it, we got a great score. I mean it was just, [nonvocalsound] you know, incredibly [nonvocalsound] high score, and so that was throwing everything off. [speaker003:] Mm hmm. [speaker005:] So [vocalsound] if you have very small variance you get really good scores when you get something that matches. [speaker002:] Yeah. [speaker005:] So. [vocalsound] So that's a way, yeah, yeah [disfmarker] [speaker003:] Mm hmm. [speaker005:] That's a way to increase the [disfmarker] yeah, n That's interesting. So in fact, that would be [disfmarker] That doesn't require any retraining. [speaker002:] Yeah. [speaker003:] No, [speaker002:] No. No. [speaker003:] that's right. [speaker005:] So that means it's just [speaker003:] So it's just tuning the models and testing, actually. [speaker002:] Yeah. [speaker005:] recognitions. Yeah. [speaker002:] Yeah. [speaker005:] You [disfmarker] you have a step where you you modify the models, make a d copy of your models with whatever variance modifications you make, and rerun recognition. [speaker003:] It would be quick. [speaker002:] Yeah. [speaker003:] Mm hmm. [speaker002:] Yeah. Yeah. Yeah. [speaker005:] And then do a whole bunch of those. [speaker002:] Yeah. [speaker003:] Mm hmm. [speaker005:] That could be set up fairly easily I think, and you have a whole bunch of you know [disfmarker] [speaker002:] Chuck is getting himself in trouble. [speaker005:] That's an interesting idea, actually. For testing the [disfmarker] Yeah. Huh! 
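[Editor's note: the pathology in this anecdote — a cluster trained on constant zeros acquires near-zero variance, so any incoming zero gets an enormous score — falls directly out of the Gaussian density. A toy demonstration:]

```python
import math

def gauss_logpdf(x, mean, std):
    """Log density of a one-dimensional Gaussian."""
    return (-math.log(std * math.sqrt(2 * math.pi))
            - 0.5 * ((x - mean) / std) ** 2)

# An ordinary, well-trained dimension:
normal_score = gauss_logpdf(0.0, 0.0, 1.0)

# A cluster trained on constant zeros: mean 0, almost no variance.
# A matching zero then gets a huge log score that swamps everything else:
degenerate_score = gauss_logpdf(0.0, 0.0, 1e-6)
```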
[speaker001:] Didn't you say you got these uh HTK's set up on the new Linux boxes? [speaker005:] That's right. In fact, and [disfmarker] and they're just t right now they're installing uh [disfmarker] increasing the memory on that uh [disfmarker] the Linux box. [speaker001:] Yeah. [speaker002:] Hey! And Chuck is sort of really fishing for how to keep his computer busy, [speaker001:] Right. [speaker005:] Yeah. [speaker002:] right? [speaker005:] Absinthe. Absinthe. [speaker002:] Well, you know, that's [disfmarker] [speaker005:] We've got five processors on that. [speaker002:] that's [disfmarker] yeah, that's a good thing [speaker001:] Oh yeah. [speaker005:] And two gigs of memory. [speaker001:] That's right. [speaker002:] because then y you just write the "do" loops and then you pretend that you are working while you are sort of [disfmarker] you c you can go fishing. [speaker003:] Yeah. [speaker005:] Yeah. Exactly. [speaker001:] Pretend, yeah. [speaker005:] Yeah. [speaker004:] Go fishing. [speaker005:] See how many cycles we used? [speaker002:] Yeah. Then you are sort of in this mode like all of those ARPA people are, [speaker005:] Yeah. [speaker002:] right? Uh, since it is on the record, I can't say uh which company it was, but it was reported to me that uh somebody visited a company and during a [disfmarker] d during a discussion, there was this guy who was always hitting the carriage returns uh on a computer. [speaker005:] Uh huh. [speaker002:] So after two hours uh the visitor said "wh why are you hitting this carriage return?" And he said "well you know, we are being paid by a computer ty I mean we are [disfmarker] we have a government contract. And they pay us by [disfmarker] by amount of computer time we use." It was in old days when there were uh [disfmarker] of PDP eights and that sort of thing. [speaker005:] Oh, my gosh! 
So he had to make it look like [disfmarker] [speaker002:] Because so they had a [disfmarker] they literally had to c monitor at the time [disfmarker] at the time on a computer how much time is being spent I [disfmarker] i i or on [disfmarker] on this particular project. [speaker005:] Yeah. How [disfmarker] Idle time. Yeah. [speaker001:] Yeah. [speaker002:] Nobody was looking even at what was coming out. [speaker005:] Have you ever seen those little um [disfmarker] It's [disfmarker] it's this thing that's the shape of a bird and it has a red ball and its beak dips into the water? [speaker002:] Yeah, I know, right. [speaker005:] So [vocalsound] if you could hook that up so it hit the keyboard [disfmarker] [speaker002:] Yeah. Yeah. Yeah. Yeah. [speaker005:] That's an interesting experiment. [speaker002:] It would be similar [disfmarker] similar to [disfmarker] I knew some people who were uh that was in old Communist uh Czechoslovakia, right? so we were watching for American airplanes, coming to spy on [disfmarker] on uh [disfmarker] on us at the time, [speaker005:] Mm hmm. Mm hmm. [speaker002:] so there were three guys uh uh stationed in the middle of the woods on one l lonely uh watching tower, pretty much spending a year and a half there because there was this service right? [speaker005:] Ugh! [speaker002:] And so they [disfmarker] very quickly they made friends with local girls and local people in the village and [disfmarker] [speaker005:] Yeah. [speaker002:] and so but they [disfmarker] there was one plane flying over s always uh uh above, and so that was the only work which they had. They [disfmarker] like four in the afternoon they had to report there was a plane from Prague to Brno Basically f flying there, [speaker005:] Yeah. 
[speaker002:] so they f very q f first thing was that they would always run back and [disfmarker] and at four o'clock and [disfmarker] and quickly make a call, "this plane is uh uh passing" then a second thing was that they [disfmarker] they took the line from this u u post to uh uh a local pub. And they were calling from the pub. And they [disfmarker] but third thing which they made, and when they screwed up, they [disfmarker] finally they had to p the [disfmarker] the p the pub owner to make these phone calls because they didn't even bother to be there anymore. And one day there was [disfmarker] there was no plane. At least they were sort of smart enough that they looked if the plane is flying there, right? [speaker005:] Yeah. [speaker002:] And the pub owner says "oh my [disfmarker] four o'clock, OK, quickly p pick up the phone, call that there's a plane flying." There was no plane for some reason, [speaker005:] And there wasn't? [speaker002:] it was downed, or [disfmarker] [vocalsound] and [disfmarker] so they got in trouble. But. [vocalsound] But uh. [speaker005:] Huh! Well that's [disfmarker] that's a really i [speaker002:] So. So. Yeah. [speaker005:] That wouldn't be too difficult to try. Maybe I could set that up. [speaker002:] Yeah. Yeah. [speaker005:] And we'll just [disfmarker] [speaker002:] Well, at least go test the s test the uh assumption about C C one I mean to begin with. But then of course one can then think about some predictable result to change all of them. [speaker003:] Mm hmm. [speaker002:] It's just like we used to do these uh [disfmarker] these uh [disfmarker] um the [disfmarker] the uh distance measures. It might be that uh [disfmarker] [speaker005:] Yeah, so the first set of uh variance weighting vectors would be just you know one [disfmarker] modifying one and leaving the others the same. [speaker002:] Yeah. Yeah. Yeah. Yeah. Yeah. [speaker003:] Yeah. Maybe. [speaker005:] And [disfmarker] and do that for each one.
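The experiment design just proposed — one variance-weighting vector per coefficient, modifying one and leaving the others the same — amounts to generating a simple sweep. A minimal sketch (the coefficient count and scale factor here are placeholders, not values from the meeting):

```python
def variance_weight_sweeps(n_coeffs, scale):
    """One weight vector per coefficient: scale that coefficient's
    model variance, leave all the others untouched."""
    sweeps = []
    for i in range(n_coeffs):
        w = [1.0] * n_coeffs
        w[i] = scale       # only coefficient i is modified in this sweep
        sweeps.append(w)
    return sweeps

# e.g. 13 cepstral coefficients, doubling one variance at a time; each
# vector would be applied to a copy of the trained models before
# rerunning recognition -- no retraining needed.
sweeps = variance_weight_sweeps(13, 2.0)
```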
[speaker002:] Because you see, I mean, what is happening here in a [disfmarker] in a [disfmarker] in a [disfmarker] in such a model is that it's [disfmarker] tells you yeah what has a low variance uh is uh [disfmarker] is uh [disfmarker] is more reliable, [speaker005:] That would be one set of experiment [disfmarker] [speaker002:] right? How do we [disfmarker] [speaker005:] Wh yeah, when the data matches that, then you get really [disfmarker] [speaker002:] Yeah. Yeah. Yeah. Yeah. Yeah. [speaker005:] Yeah. Right. [speaker002:] How do we know, especially when it comes to noise? [speaker005:] But there could just naturally be low variance. [speaker002:] Yeah? [speaker005:] Because I [disfmarker] Like, I've noticed in the higher cepstral coefficients, the numbers seem to get smaller, right? So d [speaker003:] They [disfmarker] t [speaker005:] I mean, just naturally. [speaker003:] Yeah. [speaker002:] Yeah, th that's [disfmarker] [speaker003:] They have smaller means, also. [speaker005:] Yeah. [speaker003:] Uh. [speaker005:] Exactly. And so it seems like they're already sort of compressed. [speaker003:] Uh huh. [speaker005:] The range [pause] of values. [speaker002:] Yeah that's why uh people used these lifters were inverse variance weighting lifters basically that makes uh uh Euclidean distance more like uh Mahalanobis distance with a diagonal covariance when you knew what all the variances were over the old data. [speaker005:] Mm hmm. Mm hmm. Hmm. [speaker002:] What they would do is that they would weight each coefficient by inverse of the variance. Turns out that uh the variance decreases at least as fast, I believe, as the index of the cepstral coefficients. [speaker005:] Mm hmm. [speaker002:] I think you can show that uh uh analytically. [speaker005:] Hmm. [speaker002:] So typically what happens is that you [disfmarker] you need to weight the [disfmarker] uh weight the higher coefficients more than uh the lower coefficients. [speaker005:] Mm hmm. Hmm.
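The inverse-variance liftering described here — weighting each cepstral coefficient so that plain Euclidean distance on the liftered features behaves like a diagonal-covariance Mahalanobis distance on the raw ones — can be sketched as follows. The toy variances and frames are invented for illustration; the only point is the equivalence of the two distances.

```python
import math

def inverse_variance_lifter(cepstra, variances):
    """Scale each cepstral coefficient by the inverse standard deviation
    estimated over training data."""
    return [[c / math.sqrt(v) for c, v in zip(frame, variances)]
            for frame in cepstra]

def euclidean_sq(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mahalanobis_sq(a, b, variances):
    return sum((x - y) ** 2 / v for x, y, v in zip(a, b, variances))

# Toy variances that shrink with the cepstral index, as discussed above,
# so the lifter weights the higher coefficients more heavily.
variances = [1.0, 0.25, 0.04]
f1 = [1.0, 0.5, 0.1]
f2 = [0.0, 0.3, 0.05]

lifted = inverse_variance_lifter([f1, f2], variances)
# Euclidean distance on the liftered frames equals diagonal-covariance
# Mahalanobis distance on the raw frames.
```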
[speaker003:] Mmm. [speaker002:] So. When [disfmarker] Yeah. When we talked about Aurora still I wanted to m make a plea [disfmarker] uh encourage for uh more communication between [disfmarker] between uh [pause] uh different uh parts of the distributed uh [pause] uh center. Uh even when there is absolutely nothing to [disfmarker] to s to say but the weather is good in Ore in [disfmarker] in Berkeley. I'm sure that it's being appreciated in Oregon and maybe it will generate similar responses down here, like, uh [disfmarker] [speaker003:] We can set up a webcam maybe. [speaker002:] Yeah. [speaker001:] Yeah. [speaker002:] What [disfmarker] you know, nowadays, yeah. It's actually do able, almost. [speaker005:] Is the um [disfmarker] if we mail to "Aurora inhouse", does that go up to you guys also? [speaker002:] I don't think so. [speaker003:] No. [speaker002:] No. [speaker005:] OK. So i What is it [disfmarker] [speaker002:] So we should do that. [speaker005:] Yeah we sh [speaker002:] We should definitely set up [disfmarker] [speaker001:] Yeah. [speaker005:] Do we have a mailing list that includes uh the OGI people? [speaker002:] Yeah. [speaker003:] Uh no. We don't have. [speaker005:] Oh! [speaker002:] Uh huh. [speaker005:] Maybe we should set that up. That would make it much easier. [speaker002:] Yeah. Yeah. Yeah, that would make it easier. [speaker005:] So maybe just call it "Aurora" or something that would [disfmarker] [speaker002:] Yeah. Yeah. And then we also can send the [disfmarker] the dis to the same address right, and it goes to everybody [speaker005:] Mm hmm. Mm hmm. [speaker003:] Yeah. [speaker005:] OK. Maybe we can set that up. [speaker002:] Because what's happening naturally in research, I know, is that people essentially start working on something and they don't want to be much bothered, right? 
but what the [disfmarker] the [disfmarker] then the danger is in a group like this, is that two people are working on the same thing and i c of course both of them come with the s very good solution, but it could have been done somehow in half of the effort or something. [speaker005:] Mm hmm. [speaker002:] Oh, there's another thing which I wanted to uh uh report. Lucash, I think, uh wrote the software for this Aurora two system, reasonably uh good one, because he's doing it for Intel, but I trust that we have uh rights to uh use it uh or distribute it and everything. Cuz Intel's intentions originally was to distribute it free of charge anyways. [speaker005:] Hmm! [speaker002:] u s And so [disfmarker] so uh we [disfmarker] we will make sure that at least you can see the software and if [disfmarker] if [disfmarker] if [disfmarker] if it is of any use. Just uh [disfmarker] [speaker003:] Mm hmm. [speaker002:] It might be a reasonable point for p perhaps uh start converging. [speaker003:] Mm hmm. [speaker002:] Because Morgan's point is that [disfmarker] He is an experienced guy. He says well you know it's very difficult to collaborate if you are working with supposedly the same thing, in quotes, except which is not s is not the same. [speaker005:] Mm hmm. [speaker002:] Which [disfmarker] which uh uh one is using that set of hurdles, another one set [disfmarker] is using another set of hurdles. So. And [disfmarker] And then it's difficult to c compare. [speaker003:] What about Harry? Uh. We received a mail last week and you are starting to [disfmarker] to do some experiments. [speaker002:] He got the [disfmarker] he got the software. Yeah. They sent the release. [speaker003:] And use this Intel version. [speaker002:] Yeah. Yeah. Yeah. Yeah. [speaker003:] Hmm. [speaker002:] Yeah because Intel paid us uh should I say on a microphone? uh some amount of money, not much. Not much I can say on a microphone.
Much less than we should have gotten [vocalsound] for this amount of work. And they wanted uh to [disfmarker] to have software so that they can also play with it, which means that it has to be in a certain environment [disfmarker] [speaker005:] Hmm. [speaker002:] they use actu actually some Intel libraries, but in the process, Lucash just rewrote the whole thing because he figured rather than trying to f make sense uh of uh [disfmarker] including ICSI software uh not for training on the nets [speaker005:] Hmm. [speaker001:] Oh. [speaker002:] but I think he rewrote the [disfmarker] the [disfmarker] the [disfmarker] or so maybe somehow reused over the parts of the thing so that [disfmarker] so that [disfmarker] the whole thing, including MLP, trained MLP is one piece of uh software. [speaker005:] Mm hmm. Wow! [speaker002:] Is it useful? Yeah? [speaker001:] Ye Yeah. I mean, I remember when we were trying to put together all the ICSI software for the submission. [speaker002:] Or [disfmarker] That's what he was saying, right. He said that it was like [disfmarker] it was like just so many libraries and nobody knew what was used when, and [disfmarker] and so that's where he started and that's where he realized that it needs to be [disfmarker] needs to be uh uh at least cleaned up, [speaker001:] Yeah. [speaker003:] Mm hmm. [speaker002:] and so I think it [disfmarker] this is available. [speaker001:] Hmm. [speaker003:] Yeah. [speaker002:] So [disfmarker] [speaker003:] Well, the [disfmarker] the only thing I would check is if he [disfmarker] does he use Intel math libraries, because if it's the case, it's maybe not so easy to use it on another architecture. [speaker002:] uh e ev n not maybe [disfmarker] Maybe not in a first [disfmarker] maybe not in a first ap approximation [speaker003:] Ah yeah. [speaker002:] because I think he started first just with a plain C [disfmarker] C or C plus plus or something before [disfmarker] [speaker003:] Mm hmm.
[speaker002:] I [disfmarker] I can check on that. Yeah. [speaker003:] Yeah. [speaker005:] Hmm. [speaker002:] And uh in [disfmarker] otherwise the Intel libraries, I think they are available free of f freely. But they may be running only on [disfmarker] on uh [disfmarker] on uh Windows. [speaker003:] Yeah. [speaker002:] Or on [disfmarker] on the [disfmarker] [speaker003:] On Intel architecture maybe. [speaker002:] Yeah, on Intel architecture, may not run in SUN. [speaker003:] I'm [disfmarker] Yeah. Yeah. [speaker002:] Yeah. That is p that is [disfmarker] that is possible. That's why Intel of course is distributing it, [speaker003:] Well. [speaker002:] right? Or [disfmarker] [vocalsound] That's [disfmarker] [speaker003:] Yeah. Well there are [disfmarker] at least there are optimized version for their architecture. [speaker002:] Yeah. [speaker003:] I don't know. I never checked carefully these sorts of [disfmarker] [speaker002:] I know there was some issues that initially of course we d do all the development on Linux but we use [disfmarker] we don't have [disfmarker] we have only three uh uh uh uh s SUNs and we have them only because they have a SPERT board in. Otherwise [disfmarker] otherwise we t almost exclusively are working with uh PC's now, with Intel. In that way Intel succeeded with us, because they gave us too many good machines for very little money or nothing. [speaker005:] Yeah. [speaker002:] So. So. So we run everything on Intel. [speaker005:] Wow! Hmm. [speaker002:] And [disfmarker] [speaker005:] Does anybody have anything else? to [disfmarker] Shall we read some digits? [speaker003:] Yeah. [speaker002:] Yes. I have to take my glasses [disfmarker] [speaker005:] So. Hynek, I don't know if you've ever done this. The way that it works is each person goes around in turn, [comment] and uh you say the transcript number and then you read the digits, the [disfmarker] the strings of numbers as individual digits. [speaker002:] No. Mm hmm. 
[speaker005:] So you don't say "eight hundred and fifty", you say "eight five oh", and so forth. [speaker002:] OK. OK. [speaker005:] Um. [speaker002:] So can [disfmarker] maybe [disfmarker] can I t maybe start then? [speaker005:] Sure. [speaker002:] OK. So. [speaker003:] I think that's recording now. [speaker002:] OK, I'm on [disfmarker] [vocalsound] I'm talking on mike two. So, Which channel is that? [speaker003:] Um, do it again. [speaker002:] Mike number two, I'm talking right now on? [speaker003:] That's uh [disfmarker] That's in channel one. [speaker002:] Mike number two? Channel one? Is that channel one or channel zero? [speaker003:] Channel one. Let me make sure. [speaker001:] Channel one. [speaker003:] There are no [disfmarker] There's only one connection element. [speaker002:] which is zero based or one based? [speaker003:] It's zero based. [speaker002:] OK, So we're not going to do that. OK. [speaker003:] Unfortunately the mikes are [pause] you know [disfmarker] This [disfmarker] this has the disadvantage of needing to put the numbers on the mikes. [speaker001:] off by one. [speaker003:] Consistently not equal to the numbers [disfmarker] numbers on the channels. [speaker001:] Right. [speaker002:] Wh that's why I wanted to write it down, but Dan just doesn't want to do that. [speaker004:] Well [speaker003:] Well, you can write it down. I mean [pause] Write down whatever you like. [speaker002:] Thank you. [speaker004:] Write down that poem. Let us know when you're done, though cuz we want to start the meeting. Now, how do you do the mike? [speaker002:] OK, now, that means [disfmarker] if I'm going to be standing in there looking, [comment] then someone else has to write them down. [speaker004:] I keep getting it in my nose. [speaker003:] I don't know. You can do whatever [disfmarker] any way you like. [speaker004:] Oh! Oh! This is good. Oh! I like this. 
[speaker002:] because I won [speaker003:] The weird thing is that um it means that it varies when you move your head around [speaker004:] I like this. [speaker002:] OK. Fine. [speaker003:] but [disfmarker] [vocalsound] [pause] I think it's probably the better solution. [speaker002:] Let's not try [pause] to do anything too easy. [speaker004:] I like this, this is much more comfortable. [speaker003:] OK? [speaker002:] Testing. [speaker004:] I could have it more at the midline. Boy! It's very [pause] adjustable now. [speaker001:] So go ahead. [speaker004:] I'm on number four. [speaker003:] Do we have the numbers? the the sheets, that we did from last time? so we can do them again? [speaker001:] Can [disfmarker] So we can do them again? [speaker002:] Yep. They're [pause] in the folder right there. [speaker001:] Yeah. [speaker003:] Great. [speaker004:] May I pass the folder to Dan? [speaker002:] You may. [speaker004:] OK. [speaker002:] So, Jane, can you talk into your mike? You said it's mike four? [speaker004:] I'm mike [disfmarker] I'm on mike four. and it's a very nice microphone, very modern. [speaker002:] Well, hmm, I'm glad you like it. [speaker004:] I like it very much. [speaker002:] Uh Eric, what mike are you? [speaker001:] I'm on channel one [disfmarker] [speaker002:] Mike one [disfmarker] [speaker001:] Mike one, channel zero, probably. [speaker002:] Looks like it. S talk again? [speaker001:] Channel zero. [speaker002:] Thank you. [speaker003:] I'm on uh mike three. This is mike three, coming in here. It's moving around as I move my head backwards and forwards. So that's just the way it goes. That's life. That's just what it does. [speaker002:] OK. [speaker004:] You didn't get the s head restraint with your system? [speaker003:] That's right. [speaker001:] Head restraint system? [speaker004:] I did modify my my mike uh distance from my speaking apparatus. [speaker002:] Um, can you [pause] do taps on the P D [speaker004:] Slightly. [speaker001:] OK. 
[speaker002:] and the P Z [speaker003:] OK. [speaker002:] And you have to actually check what the nu [speaker003:] P Z Ms? or [disfmarker] [pause] or not P Z [speaker002:] The Start with P Z [speaker003:] Because this is PZM number three. [speaker002:] Thank you. [speaker003:] Let's list them in order. [speaker001:] OK. [speaker003:] PZM number three was the furthest away. This is the next one in, which is PZM number four, [comment] apparently. [speaker002:] I can't tell. [speaker003:] PZ What Can you [disfmarker] can you see other stuff happening? [speaker002:] Yep. [speaker003:] This is PZM number four. [speaker002:] Thanks. [speaker004:] That's a good way. [speaker001:] OK. This is PZM two. [speaker002:] F [comment] Thank you. [speaker001:] And this is PZM one. [speaker003:] which is closest to the machine room. [speaker001:] Which [disfmarker] [speaker004:] Should I draw a map? [speaker002:] OK. [speaker004:] OK. [speaker003:] I mean [comment] I don't know. [speaker002:] And PDA, left, right? [speaker003:] It's all hopeless. This is uh one side of PDA, I guess. We could actually give these sides names, right? This would be left, wouldn't it. [speaker002:] I don't see anything. [speaker003:] I know, [speaker001:] Huh uh. [speaker003:] but I'm just [pause] asking. [speaker004:] He's labelling [disfmarker] he's labelling right now. [speaker003:] I wasn't asking you. [speaker001:] He's? [speaker002:] Ah. [speaker003:] This is [disfmarker] this is PDA left. [speaker004:] Now. [speaker003:] You getting something there? [speaker002:] Yep. [speaker003:] And this is PDA right. [speaker002:] All OK. [speaker003:] They will vary in future. [speaker002:] Thank you. [speaker004:] OK. [speaker001:] Right. [speaker003:] Because it depends which one of these shows more consistency then. [speaker002:] Did we just record all that? Should we start it over? [speaker003:] No. [speaker002:] OK. 
[speaker003:] Well, we're w we're so ahead of the game now, cuz we've got built in downsampling now. [speaker004:] We have what? What do we have? [speaker003:] We've got built in downsampling. And so it's only recording [pause] sixteen kilohertz data, and we've got [disfmarker] [speaker001:] Wait a second. [speaker003:] We're not select We're not recording the empty channels. [speaker002:] The ones that aren't filled out are the ones you want to hand out. [speaker003:] What about the ones that we didn't record? [speaker001:] No, I was looking for the ones [disfmarker] [speaker002:] I threw the ones out, that we already recorded but we didn't record. [speaker001:] Oh. OK. [speaker003:] OK? [speaker001:] Well, then, fine. [speaker003:] Fine. Then it doesn't matter, [speaker004:] Well done. [speaker001:] See if we care. [speaker003:] does it. [speaker004:] Well done. [speaker001:] OK. [speaker004:] You're probably wondering why I brought you all here today. [speaker001:] Right. But before w before we [disfmarker] [speaker003:] Before that. [speaker002:] Right. So the first task is to read some digits. [speaker001:] Before that. [speaker004:] Yeah. [speaker002:] So I'll go ahead and start. And [pause] This is Adam on mike number two, So. If you could just fill out the form, and then later on we'll read it, with pauses between. [speaker004:] Good. [speaker002:] And we're session two. [speaker001:] Do you wanna read next, [speaker004:] Shall I [disfmarker] n [speaker001:] or d? [speaker004:] I'd like to hear one [disfmarker] [speaker002:] Let her fill it out and go ahead and read. [speaker004:] Yeah, I'd like to hear one more. [speaker001:] uh OK. Um, this is Eric on [vocalsound] mike one, the wireless lapel, channel zero. um, [speaker004:] Could I hear you next? [speaker003:] OK. So, this is Dan on mike three, wireless headset, I'm wearing it around my neck. which may be different than wearing it on the head. [speaker004:] OK. 
So, basically they're kind of like sentences but with n numbers instead of words. OK. So, I'm [disfmarker] I'm on uh number four with the microphone around my neck and I will read these. [speaker002:] So you wanna pause [pause] briefly between each one so that the person transcribing can tell [pause] where one ends and one begins. [speaker004:] OK. [speaker003:] And [disfmarker] [speaker002:] So, a short sentence between each line. [speaker004:] It really sounds like sentences. Your intonation sounds like [disfmarker] somewhat like sentence intonation. [speaker002:] Well, [pause] you can intone it however you want. Imagine that you're [vocalsound] reading a number [disfmarker] phone number to someone. [speaker004:] OK. Yeah? [speaker002:] Yep. [speaker004:] Um, OK. [speaker002:] OK. Good. Thanks. [speaker003:] Great. [speaker002:] So we'll probably do another one at the end [pause] of the meeting. just to get some more digits. [speaker001:] Right. [speaker002:] And, uh [speaker001:] OK. [speaker002:] Thank you. [speaker004:] If I were to be dictating a phone number, I would have done that more slowly, but uh [disfmarker] which would have been easier for the [disfmarker] of course, for the machine. But. Was that an OK pace? [speaker001:] Sure. [speaker003:] It's kind of weird. [speaker001:] Yeah. It's fine. [speaker003:] That was great, what you did. Um [speaker004:] OK. Just as long as it's with [disfmarker] you know, uh within the constraints. [speaker002:] What what's your mike number, Jane? [speaker004:] Number four. Oh, Sorry. [speaker002:] Dan doesn't remember what sex he is. [speaker003:] Yeah, cuz I figure no one's going to be able to figure it out from the [disfmarker] from my name. [speaker002:] That's true, it's [disfmarker] it's really difficult. [speaker003:] So, Yeah. [speaker004:] and the voice. Just no way. [speaker002:] Well, don't you [disfmarker] I [disfmarker] I wrote a scanner program that scans these forms in. 
[speaker003:] Oh, right, right, right. [speaker002:] Anyway. So, why did I call you all here? [speaker004:] Yes. [speaker002:] Um, what I'd like to talk about is, um the transcription. [speaker004:] Oh, you did. [speaker002:] So, as part of this um project, we need to transcribe these meetings. um, to do training and test and so on. and, uh, so we need word transcripts and, uh, speaker change identification. So, speaker ID. And, uh so I'd sort of like to know [pause] who's going to do that, hi Jane, and uh [comment] what tools to use, what other resources you might need, and also what we [disfmarker] things like just the data format and how we're going to do it. [speaker001:] So, I'm just curious how [disfmarker] how [disfmarker] h how exactly you got roped into this. I mean [disfmarker] [speaker004:] Oh. Um, because my name was [disfmarker] was mentioned when I wasn't around, but I'm most happy. [speaker001:] OK. [speaker002:] Oh really? [speaker001:] Good. [speaker002:] Cuz Morgan said he asked you. [speaker004:] Oh, uh he did. and [disfmarker] and I approved. [speaker002:] Mm hmm. [speaker004:] but I think that I was uh proposed before I was asked? And that doesn't matter cuz I think there was probably an a uh y an indication that [disfmarker] that I was interested in advance [speaker001:] Oh. Yeah. That would be m That would be my fault. [speaker004:] and, in which case, I thank you. [speaker001:] Right. [speaker004:] Yeah. [speaker001:] Uh right. [speaker002:] And so, one of the first questions is, "do you need help?" [speaker001:] OK. [speaker003:] Well, hang on. [speaker001:] Oh? [speaker002:] That's not the first question. [speaker003:] Well, [vocalsound] Well, I guess. [speaker001:] OK. [speaker003:] Do you [disfmarker] So, do you [disfmarker] What are [disfmarker] what are we talking about? That you're gonna [disfmarker] you're gonna do transcriptions? 
You're going to listen to these things and [disfmarker] [speaker004:] I'm going to provide [disfmarker] I'm going to be [disfmarker] once I'm given the data, I will provide w word level transcripts with uh indication of speakers, [speaker003:] OK. [speaker004:] and um I guess speaker change in any case, so speaker ID, and words. [speaker001:] OK. So you're gonna do the actual transcription yourself. [speaker004:] Yeah. [speaker001:] OK. [speaker003:] D I And do you have a clear [disfmarker] I mean, is that [disfmarker] Do you sort of [disfmarker] Are you going to have to make up what you do or do you have a very clear idea of what you're going to do? So I mean, is it [disfmarker] is it [disfmarker] Is this l Is it [comment] like something you've done before? [speaker004:] Well [speaker003:] or [disfmarker] [speaker004:] I've transcribed a lot. [speaker003:] Yeah. [speaker004:] But um e you know, the [disfmarker] I mean, there are always ways of doing things more efficiently or more uh [disfmarker] in a more technical technically sophisticated way? [speaker003:] And, Yeah. [speaker004:] Do you have to speak true English sentences when you do this? [speaker003:] Yeah, because as people are going to be listening to it afterwards and saying "Oh listen to her. She couldn't even speak true English sentences!" [speaker004:] "She can't speak!" "It's terrible!" OK, So [vocalsound] um [speaker003:] Oh, you're meant [disfmarker] You're meant to speak naturally. You're meant to say "Oh, I didn't realize we were being recorded!" [speaker002:] Ah [vocalsound] right. [speaker004:] Yeah. "No one told me that." "This microphone, what's this doing around my neck?" [speaker002:] Uh, that's [disfmarker] I [disfmarker] I was saying before we don't really have to worry about people being surprised by it. [speaker003:] It's just [disfmarker] It's just jewelry. 
[speaker004:] OK, so um typically, what I w you know I [disfmarker] in [disfmarker] uh i what I've done before is I have a [disfmarker] uh tape recorder, with a reverse button. [speaker003:] Yeah. [speaker004:] And you just, you know, if it's word level e you know, you do a [disfmarker] e a r I just hit the r re the rewind button every couple of words [speaker002:] Play. [speaker004:] and type [pause] what I know [speaker003:] Yeah. [speaker004:] and then [disfmarker] and then at the end do a breeze through to be sure that everything's correct. [speaker003:] Yeah. [speaker004:] However, they do have software these days which [disfmarker] with a digitized uh record you can have control over replay through some sort of an interface. [speaker003:] Yeah. [speaker004:] Um. I've never used one of those. I know it would be an [disfmarker] a wonderful idea. It's just I haven't transcribed in [disfmarker] in a while. [speaker003:] The problem is [disfmarker] The nice thing about transcribers is they have foot pedals, [speaker004:] I mean [disfmarker] [speaker001:] Right. [speaker004:] Foot pedals? I've used transcribing machines. That's fine. um But I frankly didn't find it any more use to me than my [pause] hand held tape recorder. [speaker003:] Oh. I see. [speaker004:] However, if we have a transcribing machine I would accept it. The other thing is, you know, having [disfmarker] getting hold of a transcriber. I think that CogSci has one. [speaker002:] Mm hmm. [speaker001:] Yeah. [speaker004:] And I could probably borrow one. [speaker001:] Well, one th [speaker003:] I [disfmarker] I [disfmarker] I actually have one. [speaker004:] You have one yourself? [speaker003:] Well, Sarah has one [speaker004:] Oh, I see! [speaker003:] because she does a lot of transcribing of interviews. [speaker004:] Oh, super! [speaker003:] But The crazy thing is we'd have to convert the s we'd have to dump the audio files onto cassette to use it.
[speaker001:] The other thing is is that we might be [disfmarker] Oh. [speaker004:] Oh, I see. OK. [speaker003:] I mean, given that we've already got them digitally it would be [disfmarker] you know, it's quite tempting to use some kind of uh software thing. [speaker004:] Oh, OK. [speaker003:] But [disfmarker] but then there's the question of the ergonomic interface, that [disfmarker] [speaker004:] I kn [speaker002:] Oh, I actually have some foot pedals for the P C. [speaker001:] Well, that's what I was gonna say, is that Adam has foot pedals that we could probably try and rig something up to [disfmarker] [speaker003:] OK. [speaker004:] Oh, that'd be super. [speaker003:] What do they connect to? [speaker002:] Uh, keyboard. and they generate keystrokes. [speaker003:] Oh, that's totally cool. now how [disfmarker] how do you [disfmarker] how do you decide which keystrokes they generate? [speaker002:] Um There's a little program that downloads into it. Um. [speaker003:] Hmm. [speaker004:] This is great. [speaker003:] And this is Windows? [speaker002:] But the program probably only runs in Windows, although I bet we could program it once [speaker003:] Uh huh. [speaker002:] And [disfmarker] and then put it on another machine and it would work. [speaker001:] Except for the fact that we would have to unplug it and pow and change [disfmarker] it w [speaker002:] I haven't used them. [speaker001:] Does it lose the memory once you've unpowered it? [speaker002:] I don't know. I don't think it does. [speaker003:] So hang on. You have [disfmarker] You have these foot pedals but you d you're not using them. [speaker002:] I'm not using them anymore. I used to use them. So they're currently set up for Shift, Control, and Alt. [speaker003:] Yeah. [speaker002:] But I r I don't use them. They're not even plugged in. [speaker003:] Wow. That's so cool. OK. [speaker004:] This sounds wonderful. [speaker002:] And so tha uh if that would help, uh I'm certainly willing to donate them. 
[speaker003:] Well, [speaker004:] I'm [disfmarker] I'm open to [disfmarker] [speaker003:] the d the [disfmarker] well, the uh The part that's not wonderful is someone has to write the software. [speaker001:] Right. [speaker004:] Yeah, that's true. [speaker002:] And it [disfmarker] it is a PC connection, [speaker004:] Well, you know [disfmarker] [speaker002:] so it would have to go on a P C somewhere. [speaker003:] Yeah. [speaker004:] Now, we're at a pilot phase right now, [speaker002:] Mm hmm. [speaker004:] and [pause] what I was told is that beginning, you know, that the [disfmarker] the pilot data would be something like half hour of uh [disfmarker] or f you know [disfmarker] half hour to forty minutes of running text. And so, [pause] for the pilot thing, you know, of course this would be useful. [speaker001:] Right, this is the pilot data. [speaker004:] And I w I wou I'm assu I'm assuming that I would not be uh you know e eh uh transcribing all of it. [speaker002:] Yeah, yeah, you are [pause] generating it. Well, that [disfmarker] that was part of the question, is that if, for [disfmarker] if we're doing this regularly [disfmarker] [speaker003:] Right. [speaker002:] Imagine we're doing a couple of hours a week of transcripts [speaker004:] Yeah. [speaker002:] um, of recording, uh, I wouldn't expect you to do it all. [speaker004:] No. No, no. [speaker002:] So the question is do you know of resources that we could use. So are there grad students or undergrads or just commercial services? [speaker004:] Yes. Mm hmm, yeah, OK, so now I've [disfmarker] I've been involved with some of that. At [disfmarker] at CogSci, um [disfmarker] So, Susan E Susan Ervin Tripp, who you know um would typically, and I've done this too, supervise undergrads to do this kind of thing [speaker001:] Hmm. 
[speaker004:] and they c you know, you can give them like [disfmarker] if you give them a little bit of instruction and [disfmarker] and make it interesting for them they can do it as a two ninety eight or s or, not two ninety eight, what is it, one whatever, undergraduate research credit. Because what you do is, you can [disfmarker] you know, build it into some sort of a [disfmarker] [speaker003:] Mm hmm. [speaker002:] Oh, really? So we only have to give them credits, We don't have to give them money? [speaker004:] Well uh, you have to be sure you give them compensation for [disfmarker] you know. of some [disfmarker] of some sort. [speaker002:] Oh, you do have to pay them. [speaker001:] Awesome. [speaker004:] What I mean is it's not necessarily monetary. But if they feel like they're getting something out of it, sometimes you can do it that way. [speaker002:] Mm hmm. Mm hmm. [speaker004:] And otherwise, as part of a grant, you can [disfmarker] you can do it. some sort of minimal wage. [speaker003:] Yeah. [speaker002:] Well, we [disfmarker] we need to [disfmarker] We need to find out how much that will cost, so I can pass that on to Morgan, [speaker004:] Yeah. [speaker002:] cuz right now we have almost no money. [speaker004:] Yeah, OK. [speaker002:] So [disfmarker] so the cheaper we can do it, the better. [speaker003:] Well, [disfmarker] [speaker004:] OK. Well, you also want to get quality. I mean, someone's going to have to pass through [disfmarker] Either a higher level person will have to pass through, or you have to be sure you get good quality to begin with. [speaker003:] Yeah, absolutely. [speaker002:] Mm hmm. Right. [speaker004:] But um, yeah, that's quite [disfmarker] you know [disfmarker] we've [disfmarker] That's been done. I can ask Susan Ervin Tripp what she would recommend on that. [speaker003:] Yeah. 
[speaker004:] She's really [disfmarker] She's done [disfmarker] Of the people that I know, she's the one who's supervised the most of those things locally. [speaker002:] Mm hmm. [speaker003:] What's her name? [speaker004:] Uh uh Doctor Susan Ervin Tripp. [speaker003:] OK. [speaker004:] faculty member in Psychology, recently retired, [speaker003:] Uh huh. So [disfmarker] [speaker004:] and very well known in discourse analysis. [speaker003:] Oh, OK. So, the p I mean, what I'm hearing is that the problem of starting off with some recordings and trying to get word transcripts is kind of [disfmarker] I mean, it's not a [disfmarker] it's not an unfamiliar problem. So that [disfmarker] [speaker004:] Oh, it's very common. [speaker003:] So that I mean that basically, there are standard solutions and we should just adopt [disfmarker] adopt them [speaker004:] Yeah. [speaker003:] and it's going to cost however much it's going to cost, I mean there's [disfmarker] We're [disfmarker] we're probably not going to have a lot of flexibility in bringing down the cost cuz probably the low cost solution's already been established. And so that's just whatever it is. [speaker002:] Mm hmm. [speaker003:] That's how we'll do it. See what I mean? [speaker004:] I think there's always this question of the trade off between uh paying more for the first pass, and then h and then editing less later. [speaker003:] Mm hmm. Uh huh. [speaker004:] or paying less the first pass and editing more later. So [speaker003:] Yeah. Well, whatever they [disfmarker] people who have had to do this [disfmarker] I mean the only thing is I wonder if our requirements are different from um [disfmarker] I mean there's [disfmarker] than [disfmarker] than what's been done in the past than discourse analyzers because maybe we have different requirements in terms of time align you know, how much time [pause] aligned detail we need and uh how much detail in terms of people. [speaker004:] Oh, But that's a s Mm hmm. OK. 
[speaker003:] I don I don't know how much ti I don't think we do need any time aligned detail. I think we just have basically one text file which runs from beginning to end. But. [speaker002:] Well, if [disfmarker] in terms of transcripts, sure, but in terms of speaker change it might be nice to get the actual time. [speaker003:] Sure. But not if it costs more. [speaker002:] Right. [speaker001:] Well, OK. So what happened with Switchboard was at least, I mean, the output was that they had speaker turns and they um analyzed it in terms of, you know, "this is where this speaker change came in" and of course that gets really difficult when you're talking about [disfmarker] I mean, this is some of the stuff that you were talking about in uh your lunch talk, [speaker003:] Mm hmm. [speaker001:] is that [disfmarker] is that you know, you get backchannels and stuff like that, that that, you know, disrupt the um the segment you know the segmentation. [speaker003:] Right. Yeah. Right, I mean, the [disfmarker] it's sort of it's [disfmarker] a little [disfmarker] In some sense it's a little bit easier with it [disfmarker] with Switchboard because the telephone is so [disfmarker] actually people are less flexible in what they do. In this kind of situation where we've got nonverbal channels then there's a lot of overlap and there's just that it's not a very good model. [speaker001:] Right. [speaker002:] Oh, in terms of data format, what are we going to do about that? I mean, you have the S T M format, where you have you know, every phrase, every utterance is [disfmarker] is a single speaker. I mean, is that the right way to do it? Or [disfmarker] [speaker003:] Well, every utterance is a single speaker, right? It's just that utterances can overlap. I mean, [speaker002:] Is that the right way to do it? [speaker003:] I think so. [speaker002:] OK. 
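The representation being settled on here (each utterance belongs to a single speaker, but utterances are free to overlap in time, so a backchannel does not break the longer turn it falls inside) can be sketched as follows. This is an illustrative data structure only, not the format the group actually adopted; the speaker IDs and times are made up:

```python
from dataclasses import dataclass

@dataclass
class Utterance:
    speaker: str   # one speaker per utterance
    start: float   # start time in seconds
    end: float     # end time in seconds
    text: str      # word-level transcript

def overlaps(a: Utterance, b: Utterance) -> bool:
    """True if two utterances overlap in time
    (e.g. a backchannel inside a longer turn)."""
    return a.start < b.end and b.start < a.end

# A long turn with a backchannel inside it: two utterances,
# each a single speaker, overlapping rather than splitting the turn.
turn = Utterance("speaker003", 10.2, 15.8, "so Eric made a sentence and that should be one utterance")
backchannel = Utterance("speaker001", 12.0, 12.4, "right")

print(overlaps(turn, backchannel))  # True
```

Keeping overlaps as separate records, rather than cutting the longer turn at each backchannel, matches what speaker003 argues for in the discussion above.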
[speaker003:] I think [disfmarker] I think [disfmarker] I mean, I think when I'm [disfmarker] when [disfmarker] Like, So, Eric made a s made a sentence, and that should be like one utterance and then I had a couple of backchannels in there and they should be just overlapping things that [disfmarker] I don't think that we should break Eric's sentences for those. I think [disfmarker] [speaker002:] OK. So that means that for each utterance, we'll need the time marks, [speaker001:] Right. [speaker002:] the start and end of each utterance. [speaker004:] Yeah. [speaker001:] So. So [disfmarker] [speaker003:] Well, we'll have to go in from somewhere but we can't necessarily [disfmarker] I mean, we [disfmarker] we may be able to get the transcriber to give them to us if we give them software but we may not in which case we'll have to get them some other way, like with forced alignment later, I mean you know all this can be done. [speaker002:] Mm hmm. [speaker001:] So we [disfmarker] maybe we should look at the um [disfmarker] the tools that Mississippi State has. [speaker003:] Yeah. [speaker001:] Because, I d I [disfmarker] I know that they published um i [comment] annotation tools, you know, for their ins [speaker002:] Well, X Waves have some as well, but they're pretty low level. They're designed for um [disfmarker] for phoneme level [speaker001:] Yeah. [speaker003:] Phoneme [disfmarker] phoneme [disfmarker] phoneme transcriptions. Yeah. [speaker004:] I should've [disfmarker] [speaker002:] Although, they actually have a nice tool for [disfmarker] that could be used for speaker change [pause] marking. [speaker003:] There's a [disfmarker] there are [disfmarker] there's a whole bunch of tools. There's a [disfmarker] some web page, where they have a list of like ten of them or something. [speaker004:] Yes. Are you speaking about Mississippi State per se? [speaker001:] OK. [speaker004:] or are y [speaker003:] No, no, no. 
There's some [disfmarker] I mean, I just [disfmarker] there are [disfmarker] there are a lot of a lot of [@ @] these things. [speaker004:] Yeah. Actually, I wanted to mention [disfmarker] There are two projects, which are international, huge projects focused on this kind of thing, actually. One of them's MATE, one of them's EAGLES. and um. And both of them have [disfmarker] [speaker003:] Oh, EAGLES. Yeah, that's the European one. Yeah. [speaker004:] You know, I saw I know you know about the big book. [speaker001:] Yeah. [speaker004:] I think you got it as a prize or something. [speaker003:] Mmm. [speaker001:] Yeah. [speaker004:] Got a surprise. And um and they have you know, a lot of [disfmarker] and MATE is a project which is being run out of Denmark and um they have a lot uh [disfmarker] big uh, e you know, they're trying to set up a [disfmarker] a work bench or something or other. And they have a whole bunch of tools that are associated with them. [speaker003:] Mm hmm. [speaker004:] So, I mean, there are things that in terms of like the time alignment issue [speaker003:] Mm hmm. [speaker004:] I mean, [pause] you guys are more expert on that, I mean, than I am, but I want to say that I know for a fact, I keep running across these, and those would be my first two choices of looking at. EAGLES and MATE. And I have the documentation downstairs. [speaker002:] OK. [speaker004:] I didn't think to bring it. [speaker002:] Ha Have you looked at it already? [speaker004:] Oh, I have, but I wasn't interested in that particular issue. [speaker002:] Mm hmm. a a is this [disfmarker] [speaker004:] So, we're talking now, once you have words you know, how do you time align it. and that's. [speaker002:] Oh. So, these are p particularly for time alignment? 
[speaker004:] Well, these are uh tools [disfmarker] So, MATE and EAGLES are both within the so called uh Language Engineering branch, and um so they're intended for this kind of a comp compu s computer science application type of approach. [speaker003:] Yeah. [speaker004:] These are not discourse types of tools. [speaker003:] Right. [speaker004:] So. They [disfmarker] [speaker002:] Right. [disfmarker] are they just for time alignment or are they for transcription as well? [speaker004:] Can I be excused, and I'll go down and get my stuff? because um I [disfmarker] I do think [disfmarker] [speaker003:] You don't [disfmarker] Just take the thing with you. [speaker002:] Th i It's wireless. [speaker004:] Are you serious? [speaker002:] I mean it won't [disfmarker] [speaker003:] Well, I don't know what will happen, but [pause] you may as well. [speaker004:] I [disfmarker] but [disfmarker] but [disfmarker] what uh [disfmarker] I mean, can you spare like a minute or two of a [disfmarker] [speaker003:] Sure. Sure. Sure. [speaker002:] Sure. [speaker001:] Sure. [speaker004:] OK. And I go get my books and I'll be right back. I didn't think of bringing this. Do I need this? I need my keys. OK. [speaker003:] My uh My s [speaker004:] Oops! I need my card key. [speaker001:] We can hear all your conversations along the way. [speaker003:] Yeah, so be just [disfmarker] [speaker004:] I'll try not to say, "So, Oh dear, Boy! That's itching there!" [speaker002:] Yeah, don't talk to yourself too much. [speaker003:] Yeah. [speaker004:] So, [speaker003:] Yeah, that's true. You can leave it behind as well if you want. [speaker002:] So, For the digits [disfmarker] For the d [speaker003:] I I assume that these are projects where they do the whole thing, right? [speaker002:] Right. [speaker003:] So they probably will have tools to address the actual the [disfmarker] the [disfmarker] the transcriber support [speaker004:] Oops. 
[speaker003:] and then automated alignment of what the transcribers give you, which is, you know, exactly the problem we have. [speaker002:] Mm hmm. [speaker001:] Right, but we can do automated alignment. That's not a problem [speaker004:] OK [speaker003:] Right. [speaker001:] I think [speaker003:] Right. But then again we don't have to do it. [speaker002:] Well, I'm not sure if we can do automated alignment or not. [speaker003:] and [disfmarker] [speaker002:] I mean, e especially in e with speaker change in a meeting like this, where we're overlapping so much, [speaker001:] I th [speaker002:] I don't think we're gonna have a [disfmarker] a prayer. [speaker003:] Oh sure, we are. We've got individual microphones. [speaker001:] Well, to get [disfmarker] to get the r to get the actual transcript, I think we'll do OK because of [disfmarker] because of the separate channels. [speaker002:] Oh, that's true. Right. We could just do it through these. [speaker003:] Yeah. [speaker001:] Right. [speaker002:] I hadn't even thought about that. Sure. Um. In terms of data formats, uh what I did for the Digits stuff was just to hand rolled my own. That looks like this form. You know, it's just uh something that's easy to parse in Perl. Um. Probably for the full one, we'll want to do something like S T I mean, some format that our tools already work with. [speaker003:] Yeah. Well, Yes, although that might [@ @] presumably we'll have one [disfmarker] we'll have several formats that you can interconvert between. [speaker001:] Right. [speaker002:] Yeah, presumably. [speaker003:] When we want to do recognition we'll do STM, but if we're [disfmarker] rather than [disfmarker] rather than designing our own format, we'll just take whatever comes out of the tools that we can find to use, I think. [speaker002:] Yeah, I guess that's true. Yeah. [speaker003:] I [disfmarker] I [disfmarker] I've got no idea what that'll look like. [speaker001:] Right. 
[speaker003:] but [speaker001:] Well I know that [disfmarker] I know that um the Mississippi State tools, at least th what they produce does not come out in STM format. [speaker003:] on our speaker phones. Right. [speaker001:] They [disfmarker] they basically give um They subdivide into WAVE files and then they give [disfmarker] for each, they give a one [disfmarker] oh, you know, a WAVE file that g indicates where the start and stop times are, [speaker003:] Mmm hmm. [speaker001:] and then [disfmarker] um and then give the uh whatever [disfmarker] the sentence information. [speaker003:] Yeah. [speaker001:] But I mean Ni i i i it [disfmarker] as you said, it was easy to convert back and forth between um that file and STM format, in fact what I did is I took the Mississippi State data, and just made STM files out of it. [speaker003:] Mm hmm. OK. Good. [speaker002:] OK. "MATE Mark up Framework". OK? [speaker003:] Right. [speaker004:] "Specification of uh Coding workbench", [speaker001:] Mm hmm. [speaker004:] That's one o thing, and then, back in here [speaker002:] Oh, no! It has the word "X M L" in it! [speaker003:] Oh, good. It must be [disfmarker] It must be fundable then. [speaker004:] "KCM file formats", [speaker001:] Oh, looks like [disfmarker] Looks like uh work packages. [speaker004:] This is the [speaker001:] Must be [pause] a European [pause] project. [speaker003:] Yeah. Oh, yeah. [speaker004:] Yeah, it is European. Both of these are Eur European, but they have you know, American contributions, and stuff. [speaker001:] Oh, so they're using the Alembic Workbench, from Mitre. [speaker003:] Alembic? [speaker001:] I guess. [speaker004:] Yeah. [speaker002:] Is that free? [speaker001:] I don't know. [speaker004:] Well i but you see they have a whole bunch of things. What they're doing is trying to bring together [disfmarker] and they rate things. I wasn't focused on that so I didn't [disfmarker] Let me show y Have you seen this one, Dan? [speaker003:] No.
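The conversion described above (turning per-utterance records from a transcription tool into STM files for the scoring tools) can be sketched like this. The STM line layout follows the NIST convention (waveform file, channel, speaker, begin time, end time, transcript); the input record layout and the meeting name are hypothetical stand-ins for whatever the tool actually emits, not the group's real pipeline:

```python
def to_stm(records, waveform="meeting1", channel="1"):
    """Convert (speaker, start, end, text) records into NIST-style STM lines.

    STM fields: <file> <channel> <speaker> <begin> <end> <transcript>
    Lines are sorted by begin time, as scoring tools generally expect.
    """
    lines = []
    for speaker, start, end, text in sorted(records, key=lambda r: r[1]):
        lines.append(f"{waveform} {channel} {speaker} {start:.2f} {end:.2f} {text}")
    return "\n".join(lines)

# Hypothetical utterance records from a transcription tool:
records = [
    ("speaker002", 3.10, 5.45, "is that the right way to do it"),
    ("speaker003", 0.00, 2.50, "every utterance is a single speaker"),
]
print(to_stm(records))
```

Going the other way (STM back to the tool's native record format) is a similarly mechanical split on whitespace, which is presumably why the back-and-forth conversion was described as easy.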
[speaker004:] And I'll try to get one for you, too. Oh, you've got one. What are you looking at here? He's got the Mark up Framework. Oh, yeah. Different coding schemes. [speaker002:] But it doesn't have enough pictures. So, I think it will require too much reading to understand whether this is anything that we can easily use. [speaker004:] OK. [speaker001:] Oh, this is, I see. They're reviewing, and [disfmarker] [speaker004:] What they're doing is they're comp comparing all these existing things that have been submitted and And it includes, this one includes the TEI [speaker001:] I see. [speaker004:] and uh so it's [disfmarker] it's um pretty broad based. [speaker002:] "TEI" is [disfmarker]? [speaker004:] These things do seem to be [disfmarker] [speaker001:] Mitre, [speaker004:] That's the Text Encoding Initiative. [speaker002:] Oh. [speaker004:] That's a way of marking mark up. so, you know, but focus on [pause] tools to do this stuff um, I do have a file also in addition to those things, but it'll take me just a second here [speaker002:] Well, I guess XML makes some amount of sense. I'm not actually familiar with it but it seems to be the sort of thing that this would be good for. But [disfmarker] [speaker004:] Well, yeah, I mean, if you start with a [disfmarker] a basic transcription and a good filter. You can [disfmarker] [speaker002:] Mm hmm. [speaker004:] I mean, that's sort of my feeling about it, that you should be [disfmarker] you should be able to [pause] translate between an XML and non XML version cuz most [disfmarker] at least, you know, with a focus on discourse, mostly. [speaker002:] Right. [speaker004:] Oh! I wanted to This [disfmarker] this approach here [disfmarker] This is Uli Heid from Stuttgart [disfmarker] is um a query language for research in phonetics where he wants to build in ToBI tags and [disfmarker] so you can determine um [disfmarker] I wanted to mention this to you. I was going to come by and ask you about this. 
[speaker003:] Mm hmm. [speaker004:] Do you know about the Stuttgart um people? Because he's saying if you represent [disfmarker] [speaker003:] I don't think so. [speaker004:] He says that uh in German "selbst" is used in two different ways so you have It either means uh "himself" or it means "even" and you can tell the difference in terms of uh intonation. [speaker003:] Mm hmm. [speaker004:] So, they have the recordings such that they can locate things [pause] uh with [disfmarker] by searching for part of speech and also intonational [pause] tags. [speaker003:] Hmm. Mm hmm. [speaker004:] Anyway, [pause] that's [pause] that. [speaker003:] Because the transcriber's put in intonational tags? [speaker004:] This is true. This is true. Which of course is another issue. [speaker003:] Or maybe not. [speaker004:] Exactly so. and then [disfmarker] I have one more thing I need to find. [speaker003:] Well. [speaker001:] But that kind of thing I think we would probably want to extract automatically. [speaker004:] And then I'll be [disfmarker] [speaker001:] I mean, e this is, the [disfmarker] this is a conversation that we should have with Liz. [speaker004:] Yeah. Yes. Oh. [pause] OK. [speaker001:] Right, I think, because she's going to be [disfmarker] She's going to be the one who's the most concerned about this kind of thing. [speaker004:] OK. [speaker001:] And so she may have some ideas. on [disfmarker] on that particular issue. [speaker003:] Mm hmm. [speaker002:] Right, but it's [disfmarker] I think for the initial task, just getting the word transcripts and speaker change is highest on my list. [speaker004:] Yeah. I w I [disfmarker] Yeah. Well that's what Morgan mentioned to me, too. [speaker002:] So. [speaker004:] I mean, if [disfmarker] in terms of the t the type [disfmarker] [speaker001:] Yeah. [speaker004:] But you asked me about uh tools and things and um There's this one [speaker003:] Yeah. [speaker004:] um This is called "Transcriber". 
um "binary distribution for [disfmarker] [pause] for uh Linux and Solaris and SGI and Windows". [speaker002:] Mm hmm. [speaker003:] Mm hmm. [speaker004:] "Transcriber is a tool for assisting creation of speech corpora." [speaker003:] Who's it from? Right. [speaker004:] "It allows to manually segment, label and transcribe speech signals" And, [speaker003:] Is it the French one? [speaker004:] uh, let's see, where did I get this? I got this from U Penn. [speaker003:] U Penn. I think that I've heard of this one. [speaker004:] But [disfmarker] [speaker003:] It was the one that Steve Renals mentioned. [speaker004:] It is possible. It could be French. I can't really tell. [speaker002:] Well I think that uh [disfmarker] [speaker001:] Can I see that? [speaker002:] It sounds like you're at least a little familiar with these tools, Jane, so Are y Do you feel comfortable just picking one? Um, because [disfmarker] [speaker004:] I've never used any of them. [speaker002:] OK. [speaker004:] And so um I think it would be something that we should work uh together with, in terms of uh seeing how this [disfmarker] [speaker003:] Sure. [speaker001:] Right. [speaker004:] Here's another thing I got. [speaker001:] Oh, so this'd [disfmarker] this w is probably what the L D C uses. I mean they do a lot of transcription at the L D [speaker004:] OK. [speaker001:] I could ask my contacts at the LDC what [disfmarker] what it is they actually use. [speaker004:] Oh! [speaker001:] I mean, I know, I mean I've got [disfmarker] I've got contacts up there, [speaker004:] Good idea. Great idea. [speaker002:] Signed language is interesting. [speaker001:] so. [speaker004:] Excellent idea. [speaker001:] So. [speaker004:] "Automatic annotation of prosody in Verbmobil." [speaker003:] Can I see? [speaker001:] Yeah. [speaker004:] This is the other thing that [pause] um [disfmarker] This is part of EAGLES, but um, but here again, it's focusing on representation rather than the tools to do this.
Why do I feel like I'm whispering? [speaker002:] Well, so, how should we proceed on picking a tool? Jane, do you want to [pause] pick a few and we'll download them and try them out? Is that the right thing to do? [speaker004:] Oh! I think that would be lovely. Now, so, are we talking about the stage of [disfmarker] uh simply the word stage? or are you talking about tools that would allow the time alignment as well? [speaker002:] No, we're not going to do time alignment. [speaker001:] No. [speaker003:] Well [disfmarker] Well, We are going to do time alignment. [speaker004:] OK. [speaker003:] but we'll probably do it ourselves, right? So we're not [disfmarker] we're not concerned about getting [disfmarker] getting [disfmarker] about buying and [disfmarker] or bringing in tools. [speaker001:] Right. We're very good at time alignments. [speaker003:] We're not very good [speaker004:] OK. [speaker003:] but we're proud of ourselves. [speaker001:] Right. OK. [speaker003:] We're very proud of our time alignment. [speaker004:] Absolutely. Yeah, OK. [speaker002:] Right, and so [disfmarker] but, in terms of what the transcriber needs to do, they're not going to have to do time alignment. [speaker004:] Good. Oh good, [pause] OK. [speaker002:] Except maybe for word speaker change, [speaker004:] Fine. [speaker002:] but [disfmarker] [speaker003:] Right. I mean, it's not a bad [disfmarker] Oh look! It uses Snack! Um. [speaker001:] Uses what? [speaker003:] It uses Snack. That's a [pause] TCL [pause] audio [pause] thing that I [pause] had some involvement with. [speaker002:] OK, then we'll have to use it. [speaker004:] Oh, cool. Very cool. [speaker002:] No [disfmarker] no question. [speaker003:] Yeah. If I have to use it I'm sure [disfmarker] I'm sure [disfmarker] It's a TCL based thing. I'm sure it's just [pause] wonderful. [speaker004:] Cool. Cool. [speaker003:] Yes, T Transcriber was developed with the scripting language TCL TK and C extensions. [speaker004:] Yeah. 
[speaker003:] It relies on the Snack sound extension, which allows support for most common audio formats. " [speaker004:] Cool. [speaker002:] Boy, the recognition on what you just mumbled is going to be really horrible. [speaker003:] Yeah. It's the French one. [speaker004:] Oh, cool! [speaker002:] Yeah. [speaker004:] OK. There. There you go. [speaker003:] I mean, no, I [disfmarker] I [disfmarker] [speaker004:] I I love it when they [speaker003:] But I've never downloaded it. [speaker004:] c [speaker003:] Someone mentioned it to me. And that's probably why they mentioned it to me, because they thought I would be excited because it's TCL based. [speaker004:] Did you see this one? The -, Did you see this one here? [speaker003:] Oh "Deliverable D three point one", Oh yeah. [speaker002:] OK, So, so, it's called "Transcriber" you said? [speaker004:] OK. [speaker002:] gacks [speaker003:] Transcriber [speaker004:] Whoa! [speaker001:] So it's very interest I mean, it's interesting that one of the coordinators on that is Mark Liberman from the LDC [speaker002:] Mm hmm. [speaker004:] Yeah. [speaker001:] Which means that [disfmarker] I don't know what "coordinator" means in [disfmarker] in terms of that. I mean [speaker002:] Well, I think [disfmarker] [speaker001:] What [disfmarker] what is [disfmarker] what is [disfmarker] what is this web page from? [speaker004:] Well, I apparently, [disfmarker] you know, I roam around on [disfmarker] on topics like this [speaker001:] Sure. [speaker004:] And I think I got this off the U Penn and it does in fact cite a French site, so um they have some sort of an arrangement. [speaker002:] Well, I just wrote down the French site [speaker001:] I see. [speaker002:] and I will check it and try to download it [speaker001:] Right. [speaker002:] And uh see what it looks like. [speaker004:] Good. 
You you can access it through um [disfmarker] through the, uh, whatever [disfmarker] I can't remember which aspect of U of U Penn I was following, [speaker001:] Right. [speaker004:] but It must have been the LDC. I [disfmarker] I [disfmarker] I bet you anything. [speaker001:] Yeah. [speaker002:] Yeah, they have a [disfmarker] I vaguely recall that they have a pages uh tools page somewhere, [speaker004:] Mm hmm. OK. [speaker002:] with like lists and lists and lists of uh various and assorted tools. [speaker001:] OK. [speaker004:] Yes. Well there's [disfmarker] They have a special project, as I'm remembering, and um, usually it doesn't deal with much that has to do with discourse, [speaker002:] Mm hmm. [speaker004:] but they uh [disfmarker] they show up in resources. [speaker002:] So, Um, we'll start with "Transcriber" and if it looks like that's not gonna [pause] do [pause] what we need, [pause] we'll [pause] move on to the Mississippi State and if it doesn't look like that's going to work, we'll look at something else. [speaker004:] I like that very much, and I and I like the idea of also doing one that's tied in with these larger corp [pause] corpus projects. [speaker002:] OK. [speaker004:] So that, you know, the fact that, yeah, it's U Penn and all that. [speaker002:] Mm hmm. [speaker003:] Mmm. [speaker001:] Yeah. [speaker004:] Yeah. [speaker001:] Right. [speaker002:] Also, I wanted to mention just in passing that uh another thing that we're going to want to start collecting is queries for the meetings. So if over the next couple of days you find yourself wondering what we talked about in the meeting, write down the sort of query you'd like to make. [speaker004:] Oh, good idea. [speaker002:] Do you understand what I mean? [speaker004:] It's like the PDA thing. [speaker003:] Mm hmm. [speaker002:] So, imagine that we had Meeting Recorder all done, and you had one, what might you do with it? So, if they occur to you. 
[speaker004:] So, like "did the French system use Tcl?" [speaker002:] Right. That's a good one. [speaker001:] Actually what I would have come up with was w w "What was the web address of that [vocalsound] thing", [speaker002:] Send m So, if you could just email me them. [speaker001:] which we didn't [disfmarker] [speaker002:] Yeah, except we didn't record. [speaker001:] we didn't record that. [speaker002:] So. [speaker001:] Right. [speaker002:] It's WWW dot ETCA dot FR. [speaker003:] That's a fine question. [speaker004:] if anyone wants to know. [speaker003:] Wow. [speaker004:] But you could get it through the U Penn [disfmarker] [speaker001:] slash [disfmarker] slash [pause] capital CTA [pause] slash, l [speaker002:] s l slash lots of other stuff, yeah. [speaker001:] Yeah. Yeah. [speaker003:] So this thing, this uh deliverable D three point one of the MATE project is a review of a lot of different tools. [speaker001:] Right. [speaker004:] Yes! Yeah, there you go. Exactly. [speaker003:] They're actually, they seem to be more the tool the main focus of the tools is you've already got a transcription but you're trying to tag the different turns transi transcriptions with uh discourse roles and things like that. [speaker001:] I see. [speaker003:] But I mean a lot them are tools that will also do transcription as well. And I could be wrong. [speaker004:] Well, my my impression I [disfmarker] I [disfmarker] I really thought of these resources in the context of you [disfmarker] you asking about time aligning the [disfmarker] you know, What would you need to indicate [speaker003:] Right. [speaker004:] and were there tools for that because um I suspect that would be uh pretty time intensive. I also wanted to say, in terms of like disc you know, if there are discussions about intonation contours I do have a certain amount of background in that and I really would be [disfmarker] I'd like to be involved in that. [speaker001:] Right. 
No, right, I didn't mean to imply that that we [disfmarker] that [disfmarker] that we shouldn't discuss this now, [speaker004:] Yeah. [speaker001:] but I'm [disfmarker] I'm just saying that [disfmarker] [speaker004:] Oh, not right now, but I mean in the future. [speaker001:] Right. [speaker004:] So at this meeting with Liz I [disfmarker] you know, I mean [disfmarker] I [disfmarker] I do [disfmarker] I'd like to [disfmarker] I like that stuff. [speaker001:] Sure, sure. [speaker002:] So, when is she showing up? [speaker001:] Well, [pause] I mean, they're coming in April. [speaker002:] April. OK. [speaker001:] Right. But, um You know, we [disfmarker] we may or may not talk before then on that. [speaker002:] Mm hmm. [speaker001:] Um. I [disfmarker] I think we won't probably talk until we have more data, I would think. Right? [speaker002:] Right, since we have almost none right now, let's [speaker001:] Yeah. [speaker002:] We need to move forward from there. But uh, yeah, I guess it's O K. And then, uh there's also just the general question of [pause] will it handle multiple channels or should we just [disfmarker] Dan and I spoke about this briefly, do we want to try to combine channels somehow to make the transcriber's job easier? I was listening to some of it and uh like, if you just listen to the far P Z M the person sitting at this end of the table you can hardly hear at all. on [disfmarker] on the transcript, so there may be [disfmarker] it may be worthwhile to at least try a little bit to combine the channels. [speaker001:] Right. [speaker003:] I [disfmarker] Uh, the stuff I did, I combined pairs of these into stereo pairs, listened to them in stereo. [speaker002:] Huh. [speaker003:] And I was very impressed by how well you could hear separate speakers, [speaker002:] Neat! [speaker003:] but I haven't [disfmarker] I haven't done very extensive tests. [speaker002:] Oh, so, that would be interesting as well to do that. [speaker001:] Oh! 
Yeah, cuz you'd get you get [disfmarker] [speaker003:] you get a certain amount of spatial information. [speaker001:] Right! [speaker003:] It's not [disfmarker] It's not consistent but it's [disfmarker] Gives you some noise [disfmarker] [speaker001:] Wow. [speaker002:] Well, I bet if you used this one [disfmarker] the two s most separated P Z Well, maybe, maybe not. [speaker003:] Yeah, yeah, you'd get [disfmarker] get wide coverage. [speaker001:] Well, you would [disfmarker] you would [disfmarker] sound like you'd have like in this e big big head. [speaker002:] Big ears. [speaker003:] Right. Right. But I mean [disfmarker] But you would [disfmarker] [pause] you would [disfmarker] You know, you'd get good [disfmarker] You'd get good pick up at both ends of the table. [speaker001:] Right. [speaker003:] So. [speaker002:] Right. [speaker001:] Right. [speaker002:] OK, do we have anything else to talk about with transcribes? [speaker003:] Well, what is [disfmarker] what is the goal? again? We're going to end up with some [disfmarker] I mean, ultimately, we're going to end up with transcriptions of these meetings. [speaker002:] And we'll run a [disfmarker] train a neural net and create a recognizer and do information retrieval [speaker003:] What, and then [disfmarker] Then [disfmarker] Then we'll be done, [speaker002:] and [disfmarker] [speaker003:] huh? [speaker002:] Well, that's my interest. I mean, other people have different interests, [speaker003:] Um. [speaker002:] so. That's what I want. [speaker003:] Of course, there are different kinds of recos so we're going to actually train a recognizer based on these uh, far field mikes [speaker002:] Right. [speaker003:] cuz what we [disfmarker] that's the problem we're trying to solve. [speaker002:] Right. [speaker003:] OK. 
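(Editor's sketch of the channel-pairing idea Dan describes above: interleaving two of the recorded mono channels into one stereo file so a transcriber hears each speaker on a different side. The file-name arguments and the 16-bit PCM assumption are mine, not from the meeting.)

```python
import wave

def combine_to_stereo(left_path, right_path, out_path):
    """Interleave two mono 16-bit WAV files into one stereo WAV,
    so each channel plays on its own side of the headphones."""
    with wave.open(left_path, "rb") as l, wave.open(right_path, "rb") as r:
        assert l.getnchannels() == 1 and r.getnchannels() == 1
        assert l.getsampwidth() == 2 and r.getsampwidth() == 2  # 16-bit PCM
        assert l.getframerate() == r.getframerate()
        rate = l.getframerate()
        n = min(l.getnframes(), r.getnframes())
        left, right = l.readframes(n), r.readframes(n)
    frames = bytearray()
    for i in range(0, 2 * n, 2):      # 2 bytes per 16-bit sample
        frames += left[i:i + 2]       # left-side sample
        frames += right[i:i + 2]      # right-side sample
    with wave.open(out_path, "wb") as out:
        out.setnchannels(2)
        out.setsampwidth(2)
        out.setframerate(rate)
        out.writeframes(bytes(frames))
```

A real version would also want to pad the shorter channel rather than truncate, but this is enough to audition a pair of mikes in stereo.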
We need a lot of [disfmarker] I mean [disfmarker] [speaker002:] Well, what [disfmarker] the [disfmarker] so, the reason we're doing near field as well is if that doesn't work we wanna back we want some [disfmarker] something that will work. [speaker003:] Well [disfmarker] and also the near field will allow us to do the time alignment accurately. [speaker002:] Right. Right. [speaker003:] Um. [speaker004:] So these are near [disfmarker] near field? [speaker002:] Mm hmm. [speaker003:] Yeah. [speaker004:] there OK. [speaker003:] Yeah. Close mikes. I mean I [disfmarker] yeah, I feel like [disfmarker] that [disfmarker] What we're actually going to do with these data is very unclear to me [speaker002:] Why? [speaker003:] Well, because I don't think that's a [disfmarker] That's not going to work. [speaker002:] Y [speaker003:] Right? OK. We're going to get horrible performance on that. [speaker002:] The question is, "is it [disfmarker] will it work well enough to do information retrieval?" [speaker003:] Uh huh. [speaker002:] So, w will the application [disfmarker] the Meeting Recorder application, work? [speaker003:] Yeah. [speaker002:] So, Is [disfmarker] Part of the research is um "is fifty percent accuracy enough to do a useful information retrieval?" [speaker001:] Well, part [speaker002:] It might be. [speaker001:] I [speaker003:] Well, we know the answer to that, but unfortunately, will we get fifty percent p accuracy? [speaker002:] That's right, Well, but you know we have to try. [speaker003:] Yeah. [speaker002:] We [disfmarker] we [disfmarker] we could simply not do the project. But I think that's a bad solution. [speaker001:] W We also need to d do this in terms of SmartKom, I mean, we're collecting this as [disfmarker] to build general English acoustic models. that can be that they can say "OK here's the vocabulary and here's the [disfmarker] you know, here are the pronunciations. And go to it." [speaker003:] And are they specifically far field mikes? 
[speaker001:] Yeah. Yeah. [speaker002:] Oh, I didn't know that. [speaker003:] OK. [speaker004:] You know, if [disfmarker] if we were to add [pause] in addition, and I'm offering this, um to the word level, also the [disfmarker] uh, so, stressed words, within utterances [speaker003:] Yeah. [speaker004:] I do think that that would help the information retrieval. [speaker003:] Well, I mean, that would be tremendously useful information [speaker004:] OK. [speaker003:] assuming that we actually pursue it, but um we've got a huge problem, because if we ge if we want train a recognizer we need tens of hours of labelled data. [speaker004:] Oh, good point. [speaker003:] And um it's just eh [disfmarker] it's very frightening. [speaker002:] Yep. [speaker004:] Do you think [disfmarker] So, are [disfmarker] are you familiar with this the MARS the MARSEC project? [speaker003:] uh uh [speaker004:] Um, So, I'll put this over here, cuz he seems to be the chairman. Um. [speaker003:] Who, Eric? [speaker004:] Oh, he's not? [speaker001:] No. [speaker002:] He has [disfmarker] he has the open piece of paper. [speaker004:] Well Uh huh. [speaker002:] So, open notebook. [speaker004:] He [disfmarker] Yeah. Well, he can [disfmarker] y y [speaker001:] Well, Dan's got an open notebook, too. [speaker004:] discuss [disfmarker] discuss among yourselves. [speaker003:] Outrageous! Laying out his folders. [speaker002:] I have one but I've carefully covered it so that it doesn't [disfmarker] [speaker001:] Right. [speaker004:] So, this is a project that um [comment] has time aligned spoken data which is um [disfmarker] I see it as sort of in between um these fields because a lot of the time when people do rich um intonational and [disfmarker] and stress oriented transcription they don't have the digital record at the same time and they don't have it time aligned. [speaker003:] Mm hmm. [speaker004:] I don't know how large this is, actually. 
But I just was sort of thinking that um I'm [disfmarker] I'm [disfmarker] and it's British [speaker003:] Mm hmm. [speaker004:] And you know [speaker002:] Well, I think to some extent with things like the stress and the other prosodic things [speaker004:] So. Uh huh. [speaker002:] an and tighter time alignments that we can do I think we should probably do those but only once we have a consumer of that data [speaker004:] Mm hmm. OK. [speaker002:] ' You know, so right now we have a consumer of the word transcripts and the speaker change. [speaker004:] Mm hmm. [speaker002:] Me. I want those things to work. And so I would like to get that done as quickly as possible. [speaker004:] Mm hmm. [speaker002:] And so, [pause] if [disfmarker] if Liz or someone else wants [pause] to do some work on uh prosody and on stress and so on then we can go back to the data and generate the uh the transcripts that we need. [speaker004:] I'm d Mm hmm. OK. One thing about that is that is that um in the process of [disfmarker] of doing the word level? it really is just as easy to put the stress in at that point. [speaker002:] Even for an undergrad? [speaker004:] simply the stressed [disfmarker] Oh, oh, well, see in my pilot stuff, so in the half hour or so that I'm doing. I'd just as soon put it in, because I do think that you'd find that those would be words [disfmarker] [speaker002:] Mm hmm. [speaker003:] OK. [speaker004:] I mean [disfmarker] You know of course, Rosaria's pursuing this [disfmarker] this area [speaker002:] Mm hmm. [speaker004:] but I really do think from everything that I've run across that um you [disfmarker] you should get gain in terms of the uh uh what you were saying, the information retrieval aspects from that. [speaker002:] Right. 
[speaker004:] And I wouldn't be too surprised [disfmarker] I mean, it's [disfmarker] it's like uh I mean I don't know how [disfmarker] how reliable Lea is, but [pause] if [pause] this idea of islands of [disfmarker] of um for di whatever they are. r r islands of reliability or whatever it is. [speaker002:] Accuracy? [speaker004:] The stressed w stressed words. [speaker002:] Certainty, islands of certainty. [speaker004:] Is that it? OK. [speaker002:] Yeah. [speaker004:] I mean [speaker001:] Are you certain? [speaker003:] Lea? [speaker002:] No. [speaker004:] The claim that uh words that get stressed are easier to recognize. [speaker003:] I see. [speaker004:] So, I mean, I [disfmarker] I think it c there's be sort of a benefit with very minimal input. [speaker003:] Yeah. [speaker004:] Cuz when you listen to a sentence, you can tell [disfmarker] If it's contrastive stress you say "Oh! that word was stressed" and why the heck not put it in at the word level? So [disfmarker] so, I mean, the part that I'm doing, I'm really happy to supply that without any [disfmarker] cuz it [disfmarker] it's just [disfmarker] it's there. [speaker002:] Mm hmm. [speaker003:] OK. If you think it won't slow you down at all. [speaker004:] It's not [disfmarker] it doesn't take any thinking. [speaker003:] Yeah. [speaker004:] I'm not going to talk about, you know uh, trying grade them in any kind of careful way, but just indicate [disfmarker] if th if there's a prominent word, to indicate it. [speaker002:] Sure. [speaker003:] Yeah, that's great. [speaker002:] All caps. [speaker003:] No, well, presumably there's a an annotation standard for prefix [pause] characters or something like that. [speaker004:] Whatever. Whatever. People [disfmarker] [speaker003:] That's what they're using here. [speaker004:] Yeah, people do different [disfmarker] different ways, [speaker003:] OK. [speaker004:] a and so just as long as it's systematic in my view, you can always filter it into some other format. 
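(Editor's sketch of Jane's point above that any systematic stress marking can later be filtered into another format: if stressed words are marked in all caps, as Adam suggests, converting them to a prefix-character convention is a few lines. The `*` prefix is an invented target format, not an agreed standard.)

```python
import re

def caps_to_prefix(line, prefix="*"):
    """Rewrite ALL-CAPS stressed words (two or more letters) as a
    lowercase word carrying a stress-prefix character, leaving single
    capitals like "I" alone:  "I want THAT one" -> "I want *that one"."""
    return re.sub(r"\b[A-Z]{2,}\b",
                  lambda m: prefix + m.group(0).lower(), line)
```

A real filter would also need to skip genuine acronyms such as "PZM" or "WWW", say via an exception list.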
[speaker003:] Yeah, exactly. Exactly. Exactly. [speaker002:] Anything else? [speaker003:] Well, so, um What is the plan? We're gonna [disfmarker] I mean, How many [disfmarker] How many [disfmarker] How many meetings? [speaker002:] I'm gonna go download "Transcriber" and Jane and I will look at it [speaker003:] What about r our recording schedule? [speaker002:] Oh [speaker003:] How many meetings are we going to record? Um. Which ones are we going to transcribe? Um. Do we [disfmarker] [speaker002:] Well I think the answer is we need to record as many as we can. [speaker003:] OK. [speaker002:] Right? I don't see any reason not to try to record a meeting or two a week. [speaker003:] Absolutely. [speaker002:] Um. I mean, more if we can. And uh so I wanna [disfmarker] um I wanna start by in the next couple weeks just doing the Speech group until we get to the point where we're fairly comfortable with the hardware and the software and everything works. [speaker003:] Yeah. [speaker002:] and then I think I'll [disfmarker] also uh a Jerry volunteered to have his group do a few. So, I'll uh try to get those as well. [speaker004:] How many of these do you have, [speaker003:] OK. [speaker004:] these near mikes? [speaker003:] Five. [speaker002:] We have five of the wireless. [speaker004:] Oh, OK. OK. [speaker002:] And then we were [disfmarker] going to be getting a bunch of wired ones as well. I mean notice that n [speaker003:] Well. [speaker002:] your [disfmarker] you [disfmarker] you stood up and walked around. None of the rest of us have. And so we could have wired mikes and it wouldn't be a big deal. [speaker004:] Oh. Oh, I see. [speaker002:] for bigger meetings. [speaker004:] OK. I see. [speaker002:] So. [speaker004:] Interesting. Very interesting. Wow, I'm just really impressed by this, um, yeah. [speaker002:] It really makes you feel like you're doing real work when you get all this hardware. 
[speaker004:] Well, this is, I mean, compared to the way people normally do, like, discourse stuff? you know, it's like you got your [disfmarker] I mean they've gotten away from w reel to reel tapes, [speaker003:] Uh huh. [speaker004:] but [disfmarker] and they do have DAT recorders, I mean you do have [disfmarker] [speaker003:] Mm hmm. [speaker004:] but I mean this is just wonderful, the uh added level of yeah, compactness in the channels. [speaker003:] If we can do anything with it, yeah. [speaker002:] Yep. [speaker004:] I do find myself whispering. I mean, I suspect [disfmarker] My normal s. [speaker002:] Why? [speaker004:] I don't know why. [speaker002:] I had noticed that you were talking quietly, [speaker004:] It's bizarre. [speaker002:] and I was actually going to ask you. [speaker004:] yeah, this is not really my normal [disfmarker] pattern. [speaker002:] I was going to ask you to speak up. But I didn't want you to feel self conscious. [speaker004:] I think the problem is that I'm afraid that I'm talking into someone's ear. [speaker002:] Oh. [speaker004:] I really do. So it's [disfmarker] so it's like [disfmarker] And I realize now [comment] that you have volume uh control. But uh Yeah. [speaker003:] I don't feel like you're speaking unnaturally quietly. [speaker004:] Oh good. [speaker003:] I mean [disfmarker] [pause] I feel like it's all quite normal. [speaker004:] OK. Good. [speaker001:] Well, I hear what you're saying. [speaker004:] I'm glad. [speaker003:] Yeah. [speaker004:] Good. [speaker003:] Eric's really got it easy cuz he's got the lapel mike, which is so unobtrusive. [speaker004:] That's great. [speaker001:] I asked if somebody else wanted the lapel mike. Somebody else can take it next time. [speaker002:] I've been trying to rotate it through. So, Dan will get it next time. [speaker004:] Are these more expensive or less expensive than this kind? [speaker003:] They're much more expensive. [speaker004:] OK. 
[speaker003:] I [disfmarker] I [disfmarker] Which I don't understand but there it is. [speaker004:] It would be like the television. effect or something, maybe. [speaker003:] Right, it's because they make people feel like television stars. [speaker002:] Do you feel like a television star? [speaker003:] So you can charge more for it. [speaker004:] That's probably why I thought he was the chairman. [speaker003:] Yeah, that's it. That must be it. [speaker002:] That [disfmarker] that makes sense, [speaker001:] That's it! [speaker002:] right? Cuz he's not heavily encumbered with this [disfmarker] [speaker003:] Yeah. [speaker001:] Right. [speaker002:] N You should have run it under your sweater, though. [speaker001:] Right. Yes. [speaker003:] OK. Well, I don't mind. Oh, we should read some more numbers, then. Are we [disfmarker] are we [disfmarker] are we there? [speaker004:] Oh good. [speaker001:] Yeah. [speaker003:] Are we at that point? [speaker002:] I think we're there. [speaker004:] Good. [speaker001:] How much space do we have left? [speaker004:] Good. [speaker003:] More space than you can [disfmarker] [speaker002:] uh [speaker003:] We've got two hours of space now. [speaker002:] He [disfmarker] he [disfmarker] he changed the downsampling. [speaker001:] Oh really. [speaker003:] Yeah. [speaker002:] OK, so this is Adam, on uh [speaker001:] Oh! [speaker002:] mike number two, wireless headset [speaker003:] In theory. Assuming it's recording anything. [speaker001:] OK, this is Eric [pause] on microphone number one, uh channel zero [speaker004:] Here, here! OK, this is Jane. OK. [speaker002:] OK? [speaker003:] OK. [speaker002:] Thanks very much. Just control C, Dan? [speaker003:] Yep. [speaker004:] Alrighty. [speaker004:] How many batteries do you go through? [speaker002:] Thank you. [speaker003:] Alright. [speaker001:] Sure. [speaker003:] Good. Yeah. OK so, let's get started. Nancy said she's coming and that means she will be. Um. 
My suggestion is that Robert and Johno sort of give us a report on last week's adventures uh to start. So everybody knows there were these guys f uh from Heidelber uh, uh, actually from uh DFKI uh, part of the German SmartKom project, who were here for the week and, I think got a lot done. [speaker005:] Yeah, I think so too. Um. The [disfmarker] we got to the point where we can now speak into the SmartKom system, and it'll go all the way through and then say something like "Roman numeral one, am Smarticus." It actually says, "Roemisch einz, am Smarticus," which means it's just using a German synthesis module for English sentences. [speaker002:] OK. OK. [speaker005:] So uh, [speaker003:] It doesn't know "I". [speaker002:] OK. [speaker005:] Um, the uh [speaker002:] Oh, Am Spartacus. [speaker004:] "I am Sm I am Smarticus" is what it's saying. [speaker002:] Verstehe. [speaker001:] Right. [speaker004:] I gue [speaker002:] OK. [speaker005:] The uh synthesis is just a question of um, hopefully it's just a question of exchanging a couple of files, once we have them. And, um, it's not going to be a problem because we decided to stick to the so called concept to speech approach. So I'm [disfmarker] I'm [disfmarker] I'm going backwards now, so "synthesis" is where you sort of make this [disfmarker] uh, make these sounds, and "concept to speech" is feeding into this synthesis module giving it what needs to be said, and the whole syntactic structure so it can pronounce things better, presumably, than just with text to speech. [speaker002:] Mm hmm. [speaker005:] And, uh, Johno learned how to write XML tags. Uh, and did write the tree adjoining grammar for some [disfmarker] some sentences. No, right? Yeah, for a couple [disfmarker] [speaker004:] Yeah. So. 
Bu Uh, i The way the uh, the dialogue manager works is it dumps out what it wants to know, or what it wants to tell the person, to a [disfmarker] er in XML and there's a conversion system for different uh, to go from XML to something else. And th so, the knowledge base for the system, that generates the syntasti syntactic structures for the ge generation is uh, in a LISP like [disfmarker] the knowledge base is in a LISP like form. And then the thing that actually builds these syntactic structures is something based on Prolog. So, you have a [disfmarker] basically, a goal and it, you know, says OK, well I'm gonna try to do the Greet the person goal, [speaker002:] Mm hmm. [speaker004:] so it just starts [disfmarker] uh, it binds some variables and it just decides to, you know, do some subgoals. Basically, it just means "build the tree." And then it passes the tree onto, uh, the ge the generation module. [speaker002:] OK. [speaker005:] But I think that the point is that out of the twelve possible utterances that the German system can do, we've already written the [disfmarker] the syntax trees for three or four. [speaker004:] We yeah. So, the syntax trees are very simple. It's like most of the sentences in one tree, [speaker002:] Mm hmm. [speaker004:] and instead of, you know, breaking down to, like, small units and building back up, they basically took the sentences, and basically cut them in half, or you know, into thirds or something like that, and made trees out of those. And so uh, uh Tilman wrote a little tool that you could take LISP notation and generate an XML, uh, tree. Uh, S what do ca structure from the [disfmarker] from the LISP. And so basically you just say, you know, "noun goes to", you know, Er, nah, I don't re I've never been good at those. So there's like the VP goes to N and those things in LISP, and it will generate for you. [speaker002:] OK. N, N, V yeah, OK. Alright. 
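(Editor's sketch of the kind of LISP-notation-to-XML conversion Johno describes Tilman's tool doing. The actual tool wasn't shown; the bracket and tag conventions here are assumed.)

```python
def sexp_to_xml(sexp):
    """Turn a LISP-style tree like "(S (NP I) (VP (V am) (N Smarticus)))"
    into nested XML: <S><NP>I</NP>...</S>."""
    tokens = sexp.replace("(", " ( ").replace(")", " ) ").split()

    def parse(pos):
        assert tokens[pos] == "("
        label = tokens[pos + 1]       # node label, e.g. VP
        pos += 2
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":    # nested subtree
                child, pos = parse(pos)
            else:                     # leaf word
                child = tokens[pos]
                pos += 1
            children.append(child)
        return "<%s>%s</%s>" % (label, "".join(children), label), pos + 1

    xml, _ = parse(0)
    return xml
```

For example, `sexp_to_xml("(VP (V am) (N Smarticus))")` yields `<VP><V>am</V><N>Smarticus</N></VP>`.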
[speaker005:] And because we're sticking to that structure, the synthesis module doesn't need to be changed. So all that f fancy stuff, and the text to speech version of it, which is actually the simpler version, is gonna be done in October which is much too late for us. So. This way we [disfmarker] we worked around that. The, uh [disfmarker] the system, um [disfmarker] I can show you the system. I actually want, at least, maybe, you should be able to start it on your own. If you wanna play around with it, in th in the future. Right now it's brittle and you need to ch start it up and then make ts twenty changes on [disfmarker] on [disfmarker] on [disfmarker] on seventeen modules before they actually can stomach it, anything. And send in a [disfmarker] a [disfmarker] a couple of side queries on some dummy center set up program so that it actually works because it's designed for this SiViT thing, where you have the gestural recognition running with this s Siemens virtual touch screen, which we don't have here. [speaker002:] Mm hmm. [speaker005:] And so we're doing it via mouse, but the whole system was designed to work with this thing and it was [disfmarker] It was a lot of engineering stuff. No science in there whatsoever, but it's working now, and um, that's the good news. So everything else actually did prove to be language independent except for the parsing and the generation. [speaker004:] Why [disfmarker] I had [disfmarker] I did need to chan generate different trees than the German ones, mainly because you know like uh, the gerund in [disfmarker] in German is automatically taken care of with just a regular verb, [speaker005:] You have to switch it on. [speaker002:] Mm hmm. [speaker004:] so I'd uh have to add "am walking," or I'd have to add a little stem for the "am", when I build the [disfmarker] built the tree. [speaker002:] OK. OK. Yeah, I noticed that um, that some of the examples they had, had you know, non English word orders and so on, you know. 
And then all that good stuff. So. [speaker004:] Yeah. [speaker003:] Alright. [speaker002:] Like. [speaker003:] So it might be worth, Keith, you looking at this, [speaker002:] Yeah. [speaker003:] um [speaker004:] Well Tilman s [speaker002:] I [disfmarker] I still don't [disfmarker] I still don't really understand e like [disfmarker] I mean we sort of say, um [disfmarker] You know, I [disfmarker] I still don't exactly understand sort of the information flow uh in [disfmarker] in this thing, or what the modules are and so on. So, you know, like just that such and such module uh um decides that it wants to achieve the goal of greeting the user, and then magically it sort of s [speaker003:] Yeah [disfmarker] [speaker002:] I mean, how does it know which syntactic structure to pull out, and all that? [speaker003:] I thi Yeah. So. I think it's not worth going over in the group, [speaker002:] R uh [speaker003:] but sort of when you get free and you have the time uh either Robert or Johno or I can walk you through it. [speaker002:] Sure. Yeah, soon. OK. [speaker003:] And you can ask all the questions about how this all fits together. [speaker002:] That's fine. [speaker003:] It's eee [comment] messy but once you understand it you understand it. It's [disfmarker] it's [disfmarker] There's nothing really complicated about it. [speaker002:] OK. [speaker005:] No. [speaker002:] And I remember one thing that [disfmarker] that came up in the talk last Wednesday. Um, was this, I [disfmarker] I think he talked about the idea of like, um [disfmarker] He was talking about these lexicalized uh, uh, tree adjoining grammars where you sort of [disfmarker] for each word you, um [disfmarker] [speaker004:] OK, you know how to do it? [speaker002:] For each lexical item, the lexical entry says what all the uh trees are that it can appear in. And of course, that's not v That's the opposite of constructional. That's, you know, that's [disfmarker] that's HPSG or whatever. [speaker003:] Right. 
Right. [speaker002:] You know? [speaker003:] Now, we're [disfmarker] we're not committed for our research to [pause] do any of those things. [speaker002:] Yeah. Mm hmm. [speaker003:] So uh we are committed for our funding. [speaker002:] Right. [speaker003:] OK? to [pause] uh [disfmarker] [speaker002:] Make our stuff fit to that. [speaker003:] Yeah, to [disfmarker] n no, to just get the dem get the demos they need. [speaker002:] Uh huh. [speaker003:] OK? So between us all we have t to get th the demos they need. If it turns out we can also give them lots more than that by, you know, tapping into other things we do, that's great. [speaker004:] You should probably move the microphone closer to your face. [speaker002:] Mm hmm. [speaker003:] But i it turns out not to be in an any of the contracts [speaker004:] There's like a little [disfmarker] The twisty thing, you can move it with. [speaker002:] OK. [speaker003:] and, s deliberately. So, the reason I'd like you to understand uh what's going on in this demo system is not because it's important to the research. It's just for closure. So that if we come up with a question of "could we fit this deeper stuff in there?" or something. You know what the hell we we're talking about fitting in. [speaker002:] Right. OK. [speaker003:] So it's just, uh in the sam same actually with the rest of us we just need to really understand what's there. Is there anything we can make use of? Uh, is there anything we can give back, beyond th the sort of minimum requirements? But none of that has a short time fuse. [speaker002:] OK. [speaker003:] So th the demo the demo requirements for this Fall are sort of taken care of as of later this week or something. And then [disfmarker] So, it's probably fifteen months or something until there's another serious demo requirement. That doesn't mean we don't think about it for fifteen months, [speaker002:] Oh OK. [speaker003:] but it means we can not think about it for six months. [speaker002:] Right. 
Right, yeah. [speaker003:] So. The plan for this summer uh, really is to step back from the applied project, [speaker005:] Right. [speaker003:] keep the d keep the context open, but actually go after the basic issues. [speaker002:] Hmm. Oh OK. [speaker003:] And, so The idea is there's this uh, other subgroup that's worrying about formalizing the nota getting a notation. But sort of in parallel with that, uh, the hope is tha in particularly you will work on constructions in English Ge and German for this domain, [speaker002:] Mm hmm. [speaker003:] but y not worry about parsing them or fitting them into SmartKom or any of the other [disfmarker] anything lik any other constraints for the time being. [speaker002:] Yeah. OK. Got it. [speaker003:] It's hard enough to get it semantically and syntactically right and then [disfmarker] and get the constructions in their form and stuff. [speaker002:] Yeah. [speaker003:] And, I don I don't want you f feeling that you have to somehow meet all these other constraints. [speaker002:] Right, OK. [speaker003:] Um. And similarly with the parsing, uh we're gonna worry about parsing uh, the general case you know, construction parser for general constructions. And, if we need a cut down version for something, or whatever, we'll worry about that later. [speaker002:] OK. [speaker003:] So I'd like to, for the summer turn into science mode. [speaker002:] OK. [speaker003:] And I assume that's also, uh, your plan as well. Right. [speaker002:] So I mean, the [disfmarker] the point is that like the meetings um so far that I've been at have been [disfmarker] sort of been geared towards this demo, [speaker003:] Yeah. Yeah. But [disfmarker] but we we're swit [speaker002:] and then that's going to go away pretty soon. [speaker003:] Right. [speaker002:] OK. And then we'll sort of shift gears a Fairly substantially, [speaker003:] Yeah. [speaker005:] It's [disfmarker] [speaker003:] Yeah. [speaker005:] It's got. [speaker002:] huh? [speaker003:] Yeah. 
[speaker005:] What I [disfmarker] what I think is [disfmarker] is a good idea that I can [disfmarker] can show to anyone who's interested, we can even make a [disfmarker] sort of an internal demo, and I [disfmarker] I show you what I do, [speaker002:] Mm hmm. [speaker005:] I speak into it and you hear it talk, [speaker002:] OK. [speaker005:] and I can sort of walk f through the information. So, this is like in half hour or forty five minutes. Just fun. [speaker002:] OK. [speaker005:] And so you [disfmarker] when somebody on the streets com comes up to you and asks you what is SmartKom so you can, sort of, give a sensible answer. [speaker002:] Right. OK. [speaker003:] So, c sh we could set that up as actually an institute wide thing? Just give a talk in the big room, and [disfmarker] and so peo people know what's going on? when you're ready? [speaker005:] Absolutely. [speaker003:] Yeah I mean, that's the kind of thing [disfmarker] That's the level at which you know we can just li invite everybody and say "this is a project that we've been working on and here's a demo version of it" and stuff like that. [speaker002:] Yeah. [speaker005:] OK. Well d we [disfmarker] we do wanna have all the bugs out b where you have to sort of pipe in extra XML messages from left and right before you're [disfmarker] [speaker002:] Uh huh. [speaker003:] Indeed. [speaker005:] Yeah. OK. Makes sense. [speaker003:] But any so that [disfmarker] e e It's clear, then, I think. Actually, roughly starting uh let's say, nex next meeting, cuz this meeting we have one other thing to tie up besides the trip report. [speaker002:] Yeah. OK. [speaker003:] But uh starting next meeting I think we want to flip into this mode where [disfmarker] Uh. I mean there are a lot of issues, what's the ontology look like, [speaker002:] Mm hmm. [speaker003:] you know what do the constructions look like, what's the execution engine look like, mmm lots of things. [speaker002:] Mm hmm. 
[speaker003:] But, more focused on uh an idealized version than just getting the demo out. Now before we do that, let's get back in [disfmarker] Oh! But, it's still, I think, useful for you to understand the demo version enough, so that you can [disfmarker] can see what [disfmarker] what it is that [disfmarker] that uh it might eventually get retro fitted into or something. [speaker002:] Yeah. OK, right. [speaker003:] And Johno's already done that, uh, looked at the dem uh the [disfmarker] looked at the SmartKom stuff. [speaker004:] Wa uh [disfmarker] To some de uh what [disfmarker] what part of th the SmartKom stuff? [speaker003:] Well, the parser, and that stuff. [speaker004:] Oh yeah [disfmarker] yeah. [speaker003:] OK. Anyway. So, the trip [disfmarker] the report on these [disfmarker] the last we we sort of interrupted you guys telling us about what happened last week. [speaker002:] Yeah. It's alright. [speaker005:] Um. [vocalsound] Well it was just amazing to [disfmarker] to see uh how [disfmarker] how instable the whole thing is, [speaker003:] Maybe you're done, then. [speaker005:] and if you just take the [disfmarker] And I g I got the feeling that we are [pause] the only ones right now who have a running system. I don't know what the guys in Kaiserslautern have running because e the version [disfmarker] that is, the full version that's on the server d does not work. And you need to do a lot of stuff to make it work. And so it's [disfmarker] And even Tilman and Ralf sort of said "yeah there never was a really working version that uh did it without th all the shortcuts that they built in for the uh October [@ @] version". So we're actually maybe ahead of the System Gruppe by now, the system [disfmarker] the integration group. 
And it was, uh [disfmarker] It was fun to some extent, but the uh the outcome that is sort of of scientific interest is that I think both Ralf and Tilman [disfmarker] um, I know that they enjoyed it here, and they r they [disfmarker] they liked, uh, a lot of the stuff they saw here, what [disfmarker] what we have been thinking about, and they're more than willing to [disfmarker] to um, cooperate, by all means. And um, part of my responsibility is uh to use our internal "group ware" server at EML, make that open to all of us and them, so that whatever we discuss in terms of parsing and [disfmarker] and generating and constructions w we [disfmarker] we sort of uh put it in there and they put what they do in there and maybe we can even um, get some overlap, get some synergy out of that. And um, the, uh [disfmarker] If I find someone at [disfmarker] in EML that is interested in that, um I [disfmarker] I may even think that we could look [disfmarker] take constructions and [disfmarker] and generate from them because the tree adjoining grammars that [disfmarker] that Tilman is using is as you said nothing but a mathematical formalism. And you can just do anything with it, whether it's syntactic trees, H P S G like stuff, or whether it's construction. So if you ever get to the generation side of constructing things and there might be something of interest there, but in the moment we're of course definitely focused on the understanding, um, pipeline. [speaker003:] Anyth any other [vocalsound] [comment] uh repo visit reports sort of stories? uh we [disfmarker] so we now know I think, what the landscape is like. [speaker002:] Mm hmm. [speaker003:] And so we just push on and [disfmarker] and uh, do what we need to do. And one of the things we need to do is the um, and this I think is relatively tight [disfmarker] tightly constrained, is to finish up this belief net stuff. So. Uh. 
And I was going to switch to start talking about that unless there're m other more general questions. OK so here's where we are on the belief net stuff as far as I understand it. Um. Going back I guess two weeks ago uh Robert had laid out this belief net, missing only the connections. Right? That is [disfmarker] [comment] So, he'd put all th all the dots down, and we went through this, and, I think, more or less convinced ourselves that at least the vast majority of the nodes that we needed for the demo level we were thinking of, were in there. Yeah [comment] we may run across one or two more. But of course the connections weren't. So, uh Bhaskara and I went off and looked at some technical questions about were certain operations sort of legitimate belief net computations and was there some known problem with them or had someone already uh, solved you know how to do this and stuff. And so Bhaskara tracked that down. The answer seems to be uh, "no, no one has done it, but yes it's a perfectly reasonable thing to do if that's what you set out to do". And, so the current state of things is that, again, starting now, um we'd like to actually get a running belief net for this particular subdomain done in the next few weeks. So Bhaskara is switching projects as of the first of June, and uh, he's gonna leave us an inheritance, which is a uh [disfmarker] hopefully a belief net that does these things. And there're two aspects to it, one of which is, you know, technical, getting the coding right, and making it run, and uh stuff like that. And the other is the actual semantics. OK? What all [disfmarker] you know, what are the considerations and how and what are the ways in which they relate. So he doe h he doesn't need help from this group on the technical aspects or if he does uh we'll do that separately. [speaker002:] Mm hmm. [speaker003:] But in terms of what are the decisions and stuff like that, that's something that we all have to work out. 
Is [disfmarker] is that right? I mean that's [disfmarker] that's both you guys' understanding of where we are? [speaker005:] Absolutely. [speaker003:] OK. [speaker007:] So, I guess, um [disfmarker] Is there like a latest version of the belief net [disfmarker] of the proposed belief net? Like [disfmarker] [speaker005:] We had um decided [disfmarker] [speaker007:] like [disfmarker] [speaker005:] Um. Well, no, we didn't decide. We wanted to look into maybe getting it, the visualization, a bit clearer, but I think if we do it, um, sort of a paper version of all the nodes and then the connections between them, that should suffice. [speaker007:] Mm hmm. Yeah, that should be fine. [speaker004:] Yeah, I [disfmarker] [speaker003:] Yeah I mean, that's a separate problem. We do in the long run wanna do better visualization and all that stuff. [speaker005:] Yeah. [speaker003:] That's separable, yeah. [speaker004:] I did look into that, uh in terms of, you know, exploding the nodes out and down ag [speaker003:] Yep. Right. [speaker004:] JavaBayes does not support that. I can imagine a way of hacking at the code to do that. It'd probably take two weeks or so to actually go through and do it, [speaker003:] Not [disfmarker] not at this point. [speaker004:] and I went through all the other packages on Murph Kevin Murphy's page, [speaker003:] Right. [speaker004:] and I couldn't find the necessary mix of free and uh with the GUI and, with this thing that we want. [speaker003:] Well, we can p If it's [disfmarker] If we can pay [disfmarker] Yeah. If you know it's paying a thousand dollars or something we can do that. OK? [speaker004:] OK. [speaker003:] So [disfmarker] so don't view free as [disfmarker] as a absolute constraint. [speaker004:] OK, so then I'll go back and look at the ones on the list that [disfmarker] [speaker003:] OK. [speaker007:] Yeah. [speaker005:] But [disfmarker] [speaker003:] And you can ask Kevin. 
[speaker007:] Yeah, the one that uh people seem to use is uh Hugin or whatever? [speaker004:] Mmm. [speaker005:] But [disfmarker] [speaker003:] Hugin, yeah that's free. [speaker007:] How exp I don't think it's [disfmarker] Is it free? Because I've seen it advertised in places so I [disfmarker] it seems to [disfmarker] [speaker003:] Uh, it may be free to academics. Like I [disfmarker] I don't know. I have a co [comment] I have a copy [comment] that I l I downloaded. [speaker007:] OK. [speaker003:] So, at one point it was free. [speaker007:] OK. [speaker003:] Uh but yo I noticed people do use Hugin so um, [speaker004:] How do you spell that? [speaker006:] Why [speaker003:] HUGIN. And Bhaskara can give you a pointer. So then, in any case, um [disfmarker] But paying a lit You know, if i if it's uh [disfmarker] Probably for university, it's [disfmarker] it's gonna be real cheap anyway. But um, you know, if it's fifty thousand dollars we aren't gonna do it. I'm mean, we have no need for that. [speaker005:] I [disfmarker] I also s would suggest not to d spend two weeks in [disfmarker] in [disfmarker] in changing the [disfmarker] the JavaBayes code. [speaker003:] No, he's not gonna do that. [speaker002:] Yeah. [speaker004:] OK. [speaker005:] I [disfmarker] I will send you a pointer to a Java applet that does that, it's sort of a fish eye. You [disfmarker] you have a node, and you click on it, and it shows you all the connections, and then if you click on something else that moves away, that goes into the middle. [speaker004:] Mmm. [speaker005:] And maybe there is an easy way of interfacing those two. If that doesn't work, it's not a problem we [disfmarker] we need to solve right now. What I'm [disfmarker] what my job is, I will, um, give you the input in terms of [disfmarker] of the internal structure. Maybe node by node, or something like this? Or should I collect it all [speaker007:] Mm hmm. [speaker005:] and [disfmarker] [speaker003:] Doesn't matter. 
[speaker007:] Um, just any like [disfmarker] like sort of rough representation of the entire belief net is probably best. [speaker005:] OK. And um you're gonna be around? t again, always Tuesdays and Thursdays afternoon ish? As usual? [speaker007:] Yeah [disfmarker] [speaker005:] Or will that change? [speaker007:] I mean, yeah, I can [disfmarker] like I c Um. This week I guess um, kind of [disfmarker] I have a lot of projects and stuff but after that I will generally be more free. So yes, I might [disfmarker] I can be around. And g I mean, generally if you email me also I can be around on other days. [speaker005:] Yeah. OK. [speaker003:] Yeah and this is not a crisis that [disfmarker] I mean, you do, e everybody who's a student should, you know do their work, get their c courses all in good shape and [disfmarker] and [disfmarker] and [disfmarker] and then we'll dig [disfmarker] d dig down on this. [speaker005:] Yeah, that's [disfmarker] Yeah. OK. No, that's good. That means I have I h I can spend this week doing it. So. [speaker007:] OK. [speaker002:] How do you go about this process of deciding what these connections are? I know that there's an issue of how to weight the different things too, and stuff. [speaker003:] Right. [speaker002:] Right? I mean do you just sort of guess and see if it sort of [disfmarker] [speaker003:] Well there [disfmarker] there [disfmarker] there There're two different things you do. [speaker005:] It's [disfmarker] [speaker003:] One is you design and the other is you learn. OK? So uh what we're gonna do initially is [disfmarker] is do design, and, i if you will, guess. [speaker002:] OK. [speaker003:] OK. Uh that is you know use your best knowledge of [disfmarker] of the domain to uh, hypothesize what the dependencies are and stuff. [speaker002:] Right. OK. [speaker003:] If it's done right, and if you have data then, there are techniques for learning the numbers given the structure [speaker002:] Yeah. 
[speaker003:] and there are even techniques for learning the structure, although that takes a lot more data, and it's not as [@ @] and so forth and so on. So uh but for the limited amount of stuff we have for this particular exercise I think we'll just design it. [speaker002:] Alright. [speaker005:] Yeah. Fo Hopefully as time passes we'll get more and more data from Heidelberg and from people actually using it and stuff. [speaker002:] OK. [speaker005:] So but this is the [pause] [vocalsound] [comment] [pause] long run. [speaker002:] Yeah. [speaker005:] But to solve our problems ag uh a mediocre design will do I think in the beginning. [speaker002:] Yeah, that's right. Yeah, oh, and by the way, speaking of data, um, are there I could swore [disfmarker] uh, I could swear I saw it sitting on someone's desk at some point, but is there a [disfmarker] um a transcript of any of the, sort of, initial interactions of people with the [disfmarker] with the system? [speaker003:] Mm hmm. [speaker002:] Cuz you know, I'm still sort of itching to [disfmarker] to look at what [disfmarker] look at the stuff, and see what people are saying. [speaker003:] Yeah. Yeah make yourself a note. So and [disfmarker] and, of course Keith would like the German as well as the English, so whatever you guys can get. [speaker005:] The German. Oh yeah, of course, German. Yeah. [speaker003:] Yeah, the y your native language, right? You remember that one. [speaker005:] OK. That's important, yeah. [speaker003:] So he'll get you some data. [speaker002:] Yeah, u OK. Yeah, I mean I [disfmarker] I sort of um found the uh, uh the audio of some of those, [speaker005:] Hmm. [speaker002:] and um, it kind of sounded like I didn't want to trudge through that, you know. It was just [disfmarker] Strange, but. [speaker003:] Yep. [speaker005:] We probably will not get those to describe because they were trial runs. [speaker002:] Oh yeah, OK. 
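The design-versus-learning distinction discussed above can be made concrete with a tiny hand-designed belief net. This is only an illustrative sketch: the node names (Tourist, TimeAvailable, Enter) and all the probabilities are invented stand-ins, not the actual nodes of the demo subdomain, and the query is answered by brute-force enumeration rather than any particular belief net package.

```python
# Hand-designed ("guessed") CPTs for a toy two-parent belief net:
#   Tourist -> Enter <- TimeAvailable
# All numbers here are illustrative assumptions, not project values.

# P(Tourist)
p_tourist = {True: 0.7, False: 0.3}
# P(TimeAvailable)
p_time = {True: 0.5, False: 0.5}
# P(Enter | Tourist, TimeAvailable) -- the designed dependencies
p_enter = {
    (True, True): 0.8,
    (True, False): 0.4,
    (False, True): 0.6,
    (False, False): 0.2,
}

def posterior_tourist_given_enter(enter=True):
    """P(Tourist | Enter=enter), summing out the hidden TimeAvailable node."""
    joint = {}
    for t in (True, False):
        total = 0.0
        for avail in (True, False):
            p_e = p_enter[(t, avail)] if enter else 1 - p_enter[(t, avail)]
            total += p_tourist[t] * p_time[avail] * p_e
        joint[t] = total
    z = joint[True] + joint[False]
    return joint[True] / z

print(round(posterior_tourist_given_enter(True), 4))   # -> 0.7778
```

With data in hand later, the same structure could keep the arrows fixed and have the CPT numbers estimated from counts instead of guessed, which is exactly the "learn the numbers given the structure" step mentioned above.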
[speaker005:] Um, but uh that's th but we have data in English and German already. So. [disfmarker] [speaker002:] OK, yeah, I mean. [speaker005:] Transcribed. I will send you that. OK. [speaker003:] OK, so while we're still at this sort of top level, anything else that we oughta talk about today? [speaker005:] Ho how was your thingy. [speaker002:] Oh, um, I just wanted to, uh, s like mention as an issue, um, you know last meeting I wasn't here because I went to a linguistics colloquium on the fictive motion stuff, [speaker003:] Oh right. [speaker002:] and that was pretty interesting and you know, I mean, seems to me that that will fairly obviously be of relevance to uh [disfmarker] to what we're doing here because you know people are likely to give descriptions like you know, "What's that thing uh right where you start to go up the hill," or something like that, you know, meaning a few feet up the hill or whatever from some reference point and all that stuff so I mean, I'm sure in terms of you know, people trying to state locations or, you know, all that kind of stuff, this is gonna be very relevant. So, um, now that was [disfmarker] the talk was about English versus Japanese, um, which obviously the Japanese doesn't affect us directly, except that, um, some of the construction [disfmarker] he'd [disfmarker] what he talked about was that you know in English we say things like th you know, "your bike is parked across the street" and we use these prepositional phrases, you know, "well, if you were to move across the street you would be at the bike", but um in [disfmarker] in Japanese the [disfmarker] the more conventionalized tendency is to use a [disfmarker] sort of a description of "where one has crossed to the river, there is a tree". Um, and you know, you can actually say things like, um, "there's a tree where one has crossed the river, but no one has ever crossed the river", or something like that. 
So the idea is that this really is you know that's supposed to show that it's really fictive and so on. But um [disfmarker] But the point is that that kind of construction is also used in English, you know, like "right where you start to go up the hill", or "just when you get off the train", or something like that to [disfmarker] uh, to indicate where something is. [speaker003:] Mmm. [speaker002:] So we'll have to think about [disfmarker] [speaker003:] So [disfmarker] how much is that used in German? [speaker005:] Um. The uh [disfmarker] Well [disfmarker] I wa I was on a uh [disfmarker] on a [disfmarker] on a different sidetrack. I mean, the [disfmarker] the Deep Map project which um is undergoing some renovation at [disfmarker] at the moment, [speaker003:] Oh, OK. [speaker005:] but this is a [disfmarker] a three language project: German, English, Japanese. [speaker002:] OK. [speaker005:] And um, we have a uh, uh [disfmarker] I have taken care that we have the [disfmarker] the Japanese generation and stuff. And so I looked into uh spatial description. So we can generate spatial descriptions, how to get from A to B. And [disfmarker] and information on objects, in German, English, and Japanese. [speaker002:] Mm hmm. [speaker005:] And there is a huge uh project on spatial descriptions uh [disfmarker] differences in spatial descriptions. Well, if yo if you're interested in that, so how [disfmarker] how, I mean it does sort of go d all the way down to the conceptual level to some extent. [speaker002:] OK. [speaker005:] So. Um. [speaker003:] So, where is this huge project? [speaker005:] It's KLEIST. It's the uh Bielefeld generation of uh spatial descriptions and whatever. [speaker003:] Mm hmm. Well, that may be another thing that Keith wants to look at. [speaker002:] OK. [speaker005:] But um, I [disfmarker] I think we should leave Japanese constructions maybe outside of the scope for [disfmarker] for now, [speaker002:] Yeah.
[speaker005:] but um definitely it's interesting to look at [disfmarker] at cross the bordered there. [speaker003:] Mm hmm. [speaker001:] Are [disfmarker] are you going to p pay any attention to the relative position [disfmarker] of [disfmarker] of the direction relative [disfmarker] relative to the speaker? For example, there are some differences between Hebrew and English. We can say um "park in front of the car" as you come beh you drive behind the car. In Hebrew it means "park behind the car", because to follow the car is defined as it faces you. [speaker005:] Mm hmm. Intrinsic, yeah. [speaker001:] While in English, front of the car is the absolute front of the car. [speaker002:] OK. Right, so the canonical direction of motion determines where the front is. [speaker001:] So. Right. Right. [speaker002:] OK. [speaker001:] So, i i i is German uh closer to [disfmarker] to E uh, uh, uh, uh [disfmarker] to E I mean uh [speaker005:] Mm hmm. [speaker001:] I don't think it [disfmarker] it's related to syntax, though, so it may be entirely different. [speaker005:] Um, as a matter of fact [disfmarker] [speaker003:] No, it's not. [speaker002:] Right. [speaker001:] Yeah. [speaker005:] Um. Did you ever get to look at the [disfmarker] the rou paper that I sent you on the [disfmarker] on that problem in English and German? [speaker002:] I think [disfmarker] [speaker005:] Carroll, ninety three. Um. I [disfmarker] There is a [disfmarker] a study on the differences between English and German on exactly that problem. [speaker001:] Hmm. [speaker005:] So it's [disfmarker] they actually say "the monkey in front of the car, where's the monkey?" [speaker002:] Mm hmm. [speaker005:] And, um, they found statistically very significant differences in English and German, so I [disfmarker] I [disfmarker] I [disfmarker] It might be, since there are only a finite number of ways of doing it, that [disfmarker] that German might be more like Hebrew in that respect. 
The solution they proposed was that it was due to syntactic factors. [speaker002:] Hmm. [speaker001:] That [disfmarker] but it wasn't [disfmarker] was [disfmarker] [speaker005:] That syntactic facto factors do [disfmarker] do play a role there, wh whether you're more likely, you know, to develop uh, choices that lead you towards using uh intrinsic versus extrinsic reference frames. [speaker001:] Right. Mm hmm. Right. [speaker003:] Hmm. [speaker002:] I mean, it seems to me that you can get both in [disfmarker] in English depending o You know, like, "in front of the car" could you know [disfmarker] Like, here's the car sideways to me in between me and the car or something's in front of the car, or whatever. I could see that, [speaker003:] Absolutely. [speaker002:] but [disfmarker] But anyway, so you know, I mean, this was [disfmarker] this was a [disfmarker] a very good talk on those kinds of issues and so on. So uh. [speaker005:] I can also give you uh, a pointer to a paper of mine which is the [disfmarker] the ultimate taxonomy of reference frames. [speaker002:] Alright! [speaker005:] So. [speaker002:] Cool! [speaker005:] I'm the only person in the world who actually knows how it works. [speaker003:] Oh. Oh. [speaker005:] Not really. [speaker003:] Great. No, I've not seen that. [speaker005:] It's called a [disfmarker] [speaker001:] What do you mean. Um. "reference frames"? uh uh [speaker005:] It's [disfmarker] it's spatial reference frames. You actually have only [disfmarker] Um. If you wanna have a [disfmarker] This is usually um [disfmarker] I should [disfmarker] there should be an "L", though. Well actually you have [disfmarker] only have two choices. You can either do a two point or a three point which is you You're familiar with th with the "origo"? where that's the center [disfmarker] "Origo" is the center of the f frame of reference. And then you have the reference object and the object to be localized. [speaker002:] Hmm. Hmm. [speaker005:] OK? 
[speaker001:] Mm hmm. [speaker005:] In some cases the origo is the same as the reference object. [speaker006:] This was like [disfmarker] [speaker003:] So that would be "origin" in English, [speaker002:] The origin. [speaker003:] right? [speaker002:] Yeah. [speaker001:] Right [speaker005:] "Origo" is a terminus technicus in that sense, that's even used in the English literature. "Origo." [speaker002:] Oh, OK. [speaker003:] Alright. [speaker002:] I never heard it. [speaker001:] OK. [speaker002:] OK. [speaker005:] And um, so, this video tape is in front of me. I'm the origo and I'm also the reference object. [speaker002:] Mm hmm. Mm hmm. [speaker005:] Those are two point. [speaker001:] Right. [speaker003:] Mm hmm. [speaker005:] And three point relations is if something has an intrinsic front side like this chair then your f shoe is behind the chair. [speaker003:] Yeah. [speaker002:] Mm hmm. [speaker005:] And, reference object and [disfmarker] Um. No, from [disfmarker] from my point of view your shoe is left of the chair. [speaker002:] Right. You [disfmarker] you can actually say things like, um, "it's behind the tree from me" or something like that, I think, in [disfmarker] in [disfmarker] in certain circumstances in English, right? [speaker005:] Yeah. [speaker002:] As sort of "from where I'm standing it would appear that" [disfmarker] [speaker005:] Yeah. So, [speaker006:] Looks a little bit like Reichenbach for time. [speaker003:] Yeah, it sounds like it, doesn't it, [speaker006:] It's a lot like it. [speaker003:] yeah. [speaker002:] Yeah. [speaker005:] And then [disfmarker] and then here you [disfmarker] [speaker006:] Um. [speaker005:] On this scale, you have it either be ego or allocentric. [speaker003:] Mm hmm. [speaker005:] And that's [comment] [disfmarker] that's basically it. So. Egocentric two point, egocentric three point, or you can have allocentric. So, "as seen from the church, the town hall is right of that um, fire station". [speaker002:] Oh, OK.
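The reference-frame taxonomy laid out above (origo, reference object, object to be localized; two-point when the origo coincides with the reference object; egocentric when the origo is the speaker) is simple enough to write down directly. This is only a sketch of that classification; the class and field names are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SpatialDescription:
    origo: str       # center of the frame of reference
    reference: str   # reference object
    localized: str   # object to be localized
    speaker: str = "speaker"

    def arity(self) -> str:
        # origo == reference object -> two-point relation, else three-point
        return "two-point" if self.origo == self.reference else "three-point"

    def perspective(self) -> str:
        # origo is the speaker -> egocentric, otherwise allocentric
        return "egocentric" if self.origo == self.speaker else "allocentric"

# "This video tape is in front of me": speaker is both origo and reference.
tape = SpatialDescription(origo="speaker", reference="speaker", localized="tape")

# "As seen from the church, the town hall is right of the fire station."
hall = SpatialDescription(origo="church", reference="fire station",
                          localized="town hall")

print(tape.arity(), tape.perspective())   # two-point egocentric
print(hall.arity(), hall.perspective())   # three-point allocentric
```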
[speaker005:] aa huh [comment] It's hardly ever used but it's w [speaker001:] I'd love to see it if you [disfmarker] if you have a copy kind of. [speaker003:] Yeah. I see this is [disfmarker] this is getting into Ami's thing. [speaker002:] Yeah. [speaker001:] Uh. [speaker002:] Mm hmm. [speaker003:] He's [disfmarker] he's very interested in that. [speaker001:] Here [speaker005:] OK. [speaker003:] So. Uh. Yeah. [speaker002:] Me too. [speaker003:] Well, why don't you just put it on the web page? There's this EDU [disfmarker] Right? [speaker005:] Yeah it's [disfmarker] or [disfmarker] or just [disfmarker] Yeah. It's also all on my [disfmarker] my home page at EML. [speaker003:] Or a link to it. [speaker005:] It's called "An Anatomy of a Spatial Description". [speaker003:] Just [speaker005:] But I'll send that link. [speaker001:] OK, great. [speaker003:] Maybe just put a link on. [speaker005:] Yep. [speaker003:] Yeah. [speaker005:] Yep. [speaker003:] By the way, there [disfmarker] something that I didn't know until about a week ago or so, is apparently, there are separate brain areas for things within reach, and things that are out of reach. So there's [disfmarker] there's uh all this linguistic stuff about you know, near and far, or yon and [disfmarker] and so forth. [speaker002:] Huh. Mm hmm. [speaker003:] So this is all [disfmarker] This is [disfmarker] There's this linguistic facts. But apparently, the [disfmarker] Uh. Here's the way the findings go. That, you know they do MRI, and [disfmarker] and if you're uh [disfmarker] got something within reach then there's one of your areas lights up, and if something's out of reach uh a different one. But here's the [disfmarker] the amazing result, um, they say. You get someone with a [disfmarker] with a deficit so that they have a perfectly normal ability at distance things. So the s typical task is subdivision. 
So there's a [disfmarker] a line on the wall over there, and you give them a laser pointer, and you say, "Where's the midpoint?" And they do fine. If you give them the line, and they have to touch it, they can't. There's just that part of the brain isn't functioning, so they can't do that. Here's the real experiment. The same thing on the wall, you give them a laser, "where is it?", they do it. [speaker002:] Mm hmm. [speaker003:] Give them a stick, long stick, and say "do it", they can't do it. So there's a remapping of distant space into nearby space. [speaker001:] Right. So they doubled [disfmarker] [speaker006:] Because it's within reach now? [speaker001:] the [disfmarker] the end [disfmarker] the end of this [disfmarker] [speaker003:] It's not within reach and you use the Within Reach uh, mechanism. [speaker002:] Yeah, [speaker006:] Oh. Wow. [speaker002:] yeah. Circuits. [speaker003:] So I'll d I'll dig you up this reference. [speaker001:] Right. [speaker002:] That's cool. [speaker003:] And so this doe This is, uh [disfmarker] First of all, it explains something that I've always wondered about and I'll do this [disfmarker] this test on you guys as well. So. Uh. How I have had an experience, not often, but a certain number of times, when, for example, I'm working with a tool, a screwdriver or something, for a long time, I start feeling the tip directly. Not indirectly, but you actually can feel the tip. [speaker002:] Yeah yeah. [speaker003:] And people who are uh accomplished violinists and stuff like that, claim they also have this kind of thing where you get a direct sensation of, physical sensation, of the end effector. [speaker002:] Yeah. What's going on at the end of the tool, yeah. [speaker001:] The ext the [disfmarker] the [disfmarker] The extension, [speaker003:] Huh? [speaker002:] What's going on at the end of the tool, or whatever. [speaker003:] Yeah, within [disfmarker] [speaker001:] right. [speaker003:] Huh?
[speaker001:] The extension of [disfmarker] of your hand, right. [speaker003:] Yeah, right. Have you hav y h had this? [speaker001:] The [disfmarker] I [disfmarker] I think so. I mean i i it's not exactly the th same thing, but [disfmarker] but s it [disfmarker] it [disfmarker] it's getting close to that. [speaker006:] W what does it feel like? [speaker002:] Yeah. [speaker003:] Oh i it feels like your [disfmarker] as if your uh neurons had extended themselves out to this tool, and you're feeling forces on it and so forth and [disfmarker] and you deal directly with it. [speaker001:] I once [disfmarker] I [disfmarker] I was playing you know with those um uh devices that allow you to manipulate objects when it's dangerous to get close? [speaker003:] Right, yeah [disfmarker] yeah [disfmarker] yeah. [speaker002:] Oh, OK. [speaker001:] So you can insert your hand something [speaker003:] Yeah. [speaker001:] and there's a correspondence between [disfmarker] [speaker003:] Yeah. [speaker001:] So I played with it. After a while, you don't feel the difference anymore. [speaker003:] Yeah, right. [speaker001:] I [disfmarker] I mean it's kind of [disfmarker] [speaker002:] Mm hmm. [speaker001:] Very [disfmarker] kind of [disfmarker] you stop back and suddenly it goes away and you have to kind of work again to recapture it, but yeah. [speaker002:] Yeah. [speaker003:] Right, Yeah, so anyway, so [disfmarker] So this was the first actual experimental evidence I'd seen that was consistent with this anecdotal stuff. [speaker002:] That's cool. [speaker003:] And of course it makes a lovely def uh story about why languages uh, make this distinction. Of course there are behavioral differences too. Things you can reach are really quite different than things you can't. [speaker002:] Yeah. [speaker003:] But there seems to be an actu really deep embodied neural difference. And i this is, um [disfmarker] So. In addition to the e [speaker005:] This is more proximal distal. 
[speaker003:] Yeah uh exactly. So in addition to e ego and allocentric uh which appear all over the place, you also apparently have this proximal distal thing which is very deeply uh embedded. S [speaker005:] Well, Dan Montello sort of, he [disfmarker] he does the uh uh [disfmarker] th the cognitive map world, down in Santa Barbara. And he [disfmarker] he always talks about these [disfmarker] He [disfmarker] he already [disfmarker] well [disfmarker] i probably most likely without knowing this [disfmarker] this evidence uh is talking about these small scale spaces that you can manipulate versus large scale environmental spaces. [speaker003:] Yeah. Well there's [disfmarker] there's uh been a lot of behavioral things o on this, but that was the first neur neuro physiological thing I saw. Anyway yeah, so we'll [disfmarker] we'll look at this. And. So, all of these issues now [disfmarker] are now starting to come up. So, now [disfmarker] we're now done with demos. We're starting to do science, right? And so these issues about uh, reference, and [disfmarker] spatial [comment] reference, discourse reference, uh uh uh uh [comment] all this sort of stuff, uh, deixis which is part of what you were talking about, [speaker002:] Mm hmm. Mm hmm. [speaker003:] um [disfmarker] So, all of this stuff is coming up essentially starting now. So we gotta do all this. So there's that. And then there's also a set of system things that come up. So "OK, we're not using their system. That means we need our system." [speaker002:] Mm hmm. [speaker003:] Right? It [disfmarker] it follows. [speaker002:] Yeah. [speaker003:] And so, uh, in addition to the business about just getting the linguistics right, and the formalism and stuff, we're actually gonna build something and uh, Johno is point person on the parser, analyzer, whatever that is, and we're gonna start on that in parallel with the um, the grammar stuff. 
But to do that we're gonna need to make some decisions like ontology, [speaker002:] Alright. [speaker003:] so, um [disfmarker] And so this is another thing where we're gonna, you know, have to get involved and make s relatively early I think, make some decisions on uh, "is there an ontology API that [disfmarker] that" [disfmarker] There's a sort of standard way of getting things from ontologies and we build the parser and stuff around that, or is there a particular ontology that we're gonna standardize on, and if so [disfmarker] For example, is there something that we can use there. i Does uh either the uh SmartKom project or one of the projects at EML have something that we can just p pull out, for that. Uh, so there are gonna be some [disfmarker] some [disfmarker] some things like that, which are not science but system. But we aren't gonna ignore those cuz we're [disfmarker] we're not only going [disfmarker] The plan is not only to lay out this thing, but to actually uh build some of it. And how much we build, and [disfmarker] and so forth. [speaker002:] I [disfmarker] [speaker003:] Uh. Part of it, if it works right, is wh It looks like we're now in a position that the construction analyzer that we want for this applied project can be the same as the construction analyzer that Nancy needs for the child language modeling. So. It's always been out of phase but it now seems that um, there's a good shot at that. So we've talked about it, and the hope is that we can make these things the same thing, [speaker002:] OK. [speaker003:] and of course it's only w In both cases it's only one piece of a bigger system. [speaker002:] Mm hmm. [speaker003:] But it would be nice if that piece were exactly the same piece. It was just this uh construction analyzer. [speaker002:] Right. [speaker003:] And so we think [disfmarker] we think we have a shot at [disfmarker] at that. [speaker002:] OK. [speaker003:] So. The for So. 
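The "ontology API versus particular ontology" question raised above is essentially about fixing an interface the parser can program against, whichever ontology sits behind it. A hedged sketch of what such a minimal interface might look like follows; the method names and the toy IS-A facts are invented for illustration and are not taken from SmartKom or any EML ontology.

```python
class Ontology:
    """A minimal IS-A hierarchy behind a fixed query interface."""

    def __init__(self):
        self._isa = {}  # concept -> immediate parent concept

    def add_isa(self, child, parent):
        self._isa[child] = parent

    def ancestors(self, concept):
        """All superconcepts of `concept`, nearest first."""
        out = []
        while concept in self._isa:
            concept = self._isa[concept]
            out.append(concept)
        return out

    def is_a(self, concept, candidate):
        """True if `candidate` is a (transitive) superconcept of `concept`."""
        return candidate in self.ancestors(concept)

# Illustrative facts a construction analyzer might query:
onto = Ontology()
onto.add_isa("castle", "building")
onto.add_isa("building", "physical-object")

# The analyzer asks type questions without caring which ontology
# implementation answers them:
print(onto.is_a("castle", "physical-object"))   # True
```

If the group standardizes on an interface like this rather than on one ontology, either a SmartKom resource or an EML one could be plugged in behind it later without touching the parser.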
To [disfmarker] to come full circle on that, this formalization task, OK? is trying to get the formalism into [disfmarker] into a shape where it can actually uh [speaker002:] Yeah. Be of use to someone who's trying to do this, right? [speaker003:] d Well, yeah, where it actually is [disfmarker] is [disfmarker] covers the whole range of things. And the [disfmarker] the [disfmarker] the [disfmarker] the thing that got Mark into the worst trouble is he had a very ambitious thing he was trying to do, and he insisted on trying to do it with a limited set of mechanisms. It turned out, inherently not to cover the space. [speaker002:] OK. [speaker003:] And it just [disfmarker] it was just terribly frustrating for him, [speaker002:] Yeah. [speaker003:] and he seemed fully committed to both sides of this i i irreconcilable thing. [speaker002:] I see. Right. [speaker003:] And. Uh. Johno is much more pragmatic. [speaker002:] OK. [speaker003:] Uh. Huh? [speaker002:] Good to know. [speaker003:] Is [disfmarker] This is true, is it not? [speaker004:] Yes. [speaker003:] OK. So there's you know sort of, yeah, deep, really deep, emotional commitment to a certain theory being uh, complete. [speaker002:] Oh, OK. [speaker006:] You don't have a hidden purist streak? [speaker004:] Oh no. [speaker006:] OK. [speaker003:] We well it hasn't it [disfmarker] it certainly hasn't been observed, in any case. [speaker006:] Just checking. [speaker004:] No sir. [speaker002:] Alright. [speaker003:] Um. Now, you do, but that's OK. Uh. So. For [disfmarker] for [disfmarker] [speaker002:] Cuz I don't have to implement anything. [speaker003:] Exactly right. [speaker006:] I have a problem, then. [speaker003:] Exactly. [speaker006:] It's [disfmarker] So. Whether I do depends on whether I'm talking to him or him probably. [speaker001:] Hmm. [speaker002:] Yeah, right. [speaker003:] Right. [speaker006:] Which meeting I'm in. 
[speaker003:] Why [disfmarker] a actually, uh, the thing is, you [disfmarker] you do but, th the thing you have to im implement is so small that [disfmarker] Uh. [speaker006:] It's OK to be purist within that context. [speaker003:] Within that, [speaker006:] Yes, [speaker003:] yeah, [speaker006:] good. [speaker003:] and uh, it's [disfmarker] a and still, I think, you know, get something done. [speaker002:] Cool! [speaker003:] But to try to do something upscale and purist Particularly if [disfmarker] if um what you're purist about doesn't actually work, [vocalsound] is real hard. [speaker006:] Yay. [speaker002:] Mm hmm. Yeah. [speaker003:] OK. And then the other thing is while we're doing this uh Robert's gonna pick a piece of this space, [speaker001:] It's possible yeah. [speaker002:] OK. [speaker003:] OK, uh, for his absentee thesis. I think you all know that [disfmarker] that you can just, in Germany [disfmarker] almost just send in your thesis. [speaker002:] Just a drive up. Ca chuk! [speaker003:] Yeah right. [speaker001:] Um [speaker002:] There you go. [speaker003:] OK. [speaker005:] The th There [disfmarker] there's a drive in thesis uh sh [vocalsound] joint over in Saarbruecken. [speaker002:] Exactly. Drive through, yeah. [speaker003:] It costs a lot. The [disfmarker] the amount [disfmarker] You put in your credit card and [disfmarker] [vocalsound] as well. But, uh, [disfmarker] But anyway, so, uh, that's um, also gotta be worked out, hopefully over the next few weeks, so that [disfmarker] that it becomes clear uh, what piece uh, Robert wants to jump into. And, while we're at this level, uh, there's at least one new doctoral student in computer science who will be joining the project, either next week or the first of August, depending on the blandishments of Microsoft. [speaker002:] OK. [speaker003:] So, de Uh. And her name is Eva. [speaker002:] OK. [speaker003:] It really is. 
Nobody believed th th that [disfmarker] [speaker006:] Yeah, I thought it had to be a joke, of your part, you know like [disfmarker] [comment] Johno made it up, [speaker003:] Yeah. [speaker006:] I'm sure. " [speaker007:] Is this person someone who's in first year this year, [speaker003:] No, first year coming. [speaker007:] or [speaker003:] So, she's [disfmarker] she's now out here she's moved, [speaker007:] OK. [speaker003:] and she'll be a student as of then. And probably she'll pick up from you on the belief net stuff, so sh she'll be chasing you down and stuff like that. [speaker007:] OK. [speaker005:] Document. [speaker003:] Uh. [speaker007:] Right. [speaker003:] Uh, against all traditions. And actually I talked today to a uh undergraduate who wants to do an honors thesis on this. Uh [disfmarker] [speaker006:] Someone from the class? [speaker003:] No, interestingly enough. [speaker006:] We always get these people who are not in the class, who [disfmarker] [speaker003:] Some of th some of them, yeah. [speaker006:] It's interesting. [speaker003:] So anyway, uh, but uh she's another one of these ones with a three point nine average and so forth and so on. [speaker002:] Mm hmm. [speaker003:] Uh, so, um, I've give I've given her some things to read. So we'll see how this goes. Oh there's yet another one of the incoming first [disfmarker] [comment] incoming first year graduate students who's expressed interest, so we'll see how that goes. Um, anyway, so, I think as far as this group goes, um, it's certainly worth continuing for the next few weeks to get closure on the uh belief net and the ideas that are involved in that, and what are th what are the concepts. We'll see whether it's gonna make sense to have this be separate from the other bigger effort with the formalization stuff or not, I'm not sure. It partly depends on w what your thesis turns out to be and how that goes. S so, we'll see. 
And then, Ami, you can decide, you know, how much time you wanna put into it and uh, it it's beginning to take shap shape, [speaker001:] OK. [speaker003:] so uh and, [speaker001:] Right [speaker003:] I think you will find that if you want to look technically at some of the [disfmarker] your traditional questions in this light, uh Keith, who's buil building constructions, will be quite happy to uh see what, you know, you envision as the issues and the problems and um, how they might uh get reflected in constructions. [speaker002:] Sure. [speaker003:] I suspect that's right. [speaker002:] Yeah. Yeah. [speaker001:] I [disfmarker] I may have to go to Switzerland for [disfmarker] in June or beginning of July for between two weeks and four weeks, but uh, after that or before that. [speaker003:] OK, fine. And, um, if it's useful we can probably arrange for you to drop by and visit either at Heidelberg or at the German AI center, while you're in [disfmarker] in the neighborhood. [speaker001:] Right. Yeah be uh actu actually I'm invited to do some consulting with a bank in Geneva which has an affiliation with a research institute in Geneva, which I forgot the name of. [speaker003:] Yeah. Yep. E o do y Well, we we're connected to uh [disfmarker] [speaker001:] Yeah. [speaker003:] There's a [disfmarker] there's a [disfmarker] a very significant connection between [disfmarker] We'll [disfmarker] we'll go through this, [speaker001:] Yeah. [speaker003:] ICSI and EPFL, which is the, uh [disfmarker] It's the [disfmarker] Fr Ge Germany's got two big technical institutes. There's one in [disfmarker] in Zurich, [speaker001:] Mm hmm. [speaker003:] E T and then there's one, the French speaking one, in Lausanne, OK? [speaker002:] Oh, so in Switzerland. [speaker003:] which is uh E P F L. So find out who they are associated with in Geneva. [speaker001:] Great. [speaker003:] Probably we're connected to them. [speaker001:] Right. Great. I'll let you know. S I'll send you email. 
[speaker003:] OK. Yeah, and so anyway we c uh [disfmarker] We can m undoubtedly get Ami uh to give a talk at uh EML or something like that. While he's in [disfmarker] in uh [disfmarker] [speaker005:] Hmm. Uh. I [disfmarker] I think the one you [disfmarker] you gave here a couple of weeks ago would be of interest there, too. [speaker001:] Sure, yeah. [speaker003:] A lot of interest. Actually, either place, DFKI or uh [disfmarker] Yeah, so, and [disfmarker] and if there is a book, that you'll be building up some audience for it. [speaker001:] Yeah. Right. [speaker003:] And you'll get feedback from these guys. [speaker001:] Great, yeah. [speaker003:] Cuz they've actually [disfmarker] these DFKI guys have done as much as anyone over the last decade in trying to build them. So we'll set that up. [speaker001:] Cool. [speaker003:] OK. So, uh, unless we wanna start digging into the [disfmarker] uh the belief net and the decisions now, which would be fine, it's probably [disfmarker] [speaker005:] I [disfmarker] I tho It's probably better if I come next week with the um version O point nine of the structure. [speaker003:] OK. So, how about if you two guys between now and next week come up with something that is partially proposal, and partially questions, saying "here's what we think we understand, here are the things we think we don't understand". And that we as a group will try to [disfmarker] to finish it. What I'd like to do is shoot f for finishing all this next Monday. [speaker007:] Sure. [speaker003:] OK? Uh, "these are the decisions" [disfmarker] I don't think we're gonna get lots more information. It's a design problem. [speaker002:] Mm hmm. [speaker007:] Yeah. [speaker003:] You know. We [disfmarker] Yeah. And let's come up with a first cut at what this should look like. And then finish it up. [speaker002:] OK. [speaker003:] Does that so make sense? [speaker002:] OK. 
[speaker005:] And um, the [disfmarker] the sem semester will be over next week but then you have projects for one more week to come? [speaker007:] No, I [disfmarker] I think I'll be done [disfmarker] everything by this uh [disfmarker] by the end of this week. [speaker005:] Same with you? No. [speaker004:] Nnn. This [disfmarker] Well, I've [disfmarker] I have projects, but then the [disfmarker] my prof professor of one of my classes also wa has a final that he's giving us. And he's giving us five days to do it which means it going to be hard. [speaker003:] Yeah. [speaker002:] Yeah. [speaker003:] Oh. is it a take home final? [speaker004:] Yeah. [speaker003:] Who's doing this? [speaker004:] Aikin, Alex, yeah. [speaker003:] Yeah, figured. That would have been i my guess. [speaker007:] Hmm. [speaker003:] Right. Um, But anyway, yeah. [speaker002:] Pretty soon. [speaker005:] OK. [speaker003:] OK, so I guess that's [speaker004:] So, the seventeenth will definitely be the last day, like it or not for me. [speaker003:] Right. right. So let's do this, and then we we well there's gonna be some separate co these guys are talking, uh we have a group on the formalization, uh Nancy and Johno and I are gonna talk about parsers. So there're various kinds of uh [disfmarker] [speaker002:] OK. [speaker003:] Of course, nothing gets done even in a meeting of seven people, right? So, um, two or three people is the size in which actual work gets done. [speaker002:] Right. [speaker005:] Mmm. [speaker002:] Yeah. [speaker003:] So we'll do that. Great. Oh, the other thing we wanna do is catch up with uh, Ellen and see what she's doing because the um image schemas are going to be um, an important pa [speaker002:] Yeah. Quite relevant, yeah. [speaker003:] We [disfmarker] we want those, right? [speaker002:] Yeah, [speaker003:] And we want them formalized and stuff like that. [speaker002:] oh yeah. Yeah. [speaker003:] So let me [disfmarker] let me make a note to do that. [speaker002:] OK. 
Yeah, I'm actually probably going to be in contact with her uh pretty soon anyway because of various of us students were going to have a reading group about precisely that sort of thing over the summer, [speaker004:] OK. [speaker003:] Oh right! Right right right! That's great! [speaker002:] so. [speaker003:] Yeah, I [disfmarker] I [disfmarker] Shweta mentioned that, although she said it's a secret. [speaker002:] OK. [speaker004:] Hi [speaker002:] Right, [speaker003:] Th the faculty aren't [disfmarker] faculty aren't supposed to know. [speaker002:] no faculty! [speaker004:] Wednesday's much better for me, yeah. [speaker003:] But um, I'm sufficiently clueless that I count as a [disfmarker] [speaker002:] Yeah, right. It's as if we didn't tell anyone at all, [speaker004:] Bhaskara. [speaker002:] right. [speaker004:] OK. [speaker003:] OK. We can start? [speaker004:] OK. [speaker003:] OK. So, welcome to our next meeting. Uh, today we will have a talk from Miguel Sanchez [pause] concerning his [pause] uh [pause] work related to ad hoc networks. We have here a guest, a former [pause] ICSI member of the NSA group, Jordi Domingo Pasqual. He is, uh, assistant professor at UPC. And maybe Jordi, you have a few words [pause] concerning your work [disfmarker] one or two sentence and [pause] because we will have tomorrow a talk from him in more detail concerning what's going on. [speaker005:] Eh, you [disfmarker] you want me to [pause] say [disfmarker] [speaker003:] Yeah, only a short introduction, [speaker005:] Oh! [speaker003:] ve very short introduction, please. [speaker005:] Of course. Eh, [pause] well, I'm working in the em [pause] Broadband Commu [speaker003:] Uh, sorry, one interruption. This is not related to hearing, it's only for [pause] fixing it. It's not a earphone. [speaker005:] Ah, it's not a hear [disfmarker] [speaker003:] No. So. [speaker005:] It works [pause] like [disfmarker] like this. [speaker003:] Yeah, yeah. Like this one. Yeah. 
[speaker005:] Ah, [speaker003:] It's only a microphone. [speaker005:] that's [disfmarker] Ah, [speaker003:] Yeah? [speaker005:] that's uh [pause] much more comfortable. [speaker003:] OK. Yeah. OK. [speaker005:] Thank [vocalsound] thank you. I'm uh [pause] working in the Broadband Communications Group and eh, eh well, the research topics at this moment is uh, [pause] I P eighty traffic monitoring and characterization and em [pause] eh [pause] tool [disfmarker] developing tools uh for quality of service, eh [pause] performance analysis. That's eh [disfmarker] [pause] And we have several projects eh [pause] both Spanish and European projects, ongoing eh [pause] in this area. [speaker003:] OK. Thank you. And after the talk of [pause] uh, Miguel, I [pause] will [pause] give [pause] you [pause] all [pause] a few slides concerning the project proposal. I will um [pause] say something about in which direction it will go and uh Maybe it was my fault that I assumed that the pointer of [pause] Miguel [pause] concerning UMTS services was [pause] deployed to everybody, but it was only sent to me, so in the meantime everybody got, I believe, this pointer from me, and maybe we [disfmarker] It's a homework, in principle, for the next meeting, to get [vocalsound] [pause] a little bit familiar with these things. [speaker001:] Worse than no work. [speaker003:] And [pause] um so I believe we will not have [pause] today a very hard technical discussion, maybe more a few topic and items we will focus in the next days, until the next meeting. And when we are ready with our meeting. Everything is done. Then, uh, we have to read these number and we go around here, every member. [speaker002:] OK. [speaker003:] So. [speaker004:] OK. [speaker003:] Miguel, please. [speaker004:] Thank you. Well, eh, I [disfmarker] I have [pause] prepared, eh, a really short talk about some of the topics I [disfmarker] I have been doing some w some research work [pause] uh before coming here. 
And eh, well, mainly I will present uh [disfmarker] [pause] Here you have more or less the sketch of the presentation. I will present some ideas and some concepts about [pause] what ad hoc networks are, about the issues in media access control in wireless networks, also about routing in such kind of networks and, eh, I will make some comments also about [pause] uh [pause] the current standards of wireless local area networks and finally, I will present the blueprint of [disfmarker] of my research project or my research uh proposal here. So [pause] the first thing is eh what an ad hoc network is. You know, uh, an ad hoc network is made of wireless nodes. These nodes of course can [pause] be mobile nodes, and, eh in this network there is no [pause] network [pause] infrastructure. So [pause] eh [pause] the only [pause] active thing we have is the mobile nodes. And eh end to end communication, is sometimes, uh [pause] [vocalsound] done by direct communication from one node to the other, because [pause] they can talk to each other. But sometimes it is not possible, this direct communication, because nodes, end [disfmarker] uh source or [disfmarker] and destination nodes are um are too far away. And in this case, [pause] it can be done, communication can be done, by uh using multi hop routing. So an intermediate, or a set of intermediate nodes will [pause] route or will forward the packet from the sender to the destination. So uh [pause] in a sense, what we have in this kind of networks, [pause] is that, uh [pause] nodes are acting as both uh [pause] endpoints and also routers of other neighboring nodes. And this makes a uh difference from [pause] other kind of network. 
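The multi-hop idea just described (every node is both an endpoint and a router for its neighbors, and a packet is relayed through intermediate nodes when sender and destination are out of direct range) can be sketched as a shortest-path search over the radio connectivity graph. This is an illustrative Python sketch added for clarity, not code from the talk; the node names, coordinates, and function names are hypothetical.

```python
from collections import deque

def neighbors(nodes, radius):
    """Build the connectivity graph: two nodes are neighbors if they lie
    within each other's transmission range (simple disc model)."""
    adj = {n: set() for n in nodes}
    for a, (ax, ay) in nodes.items():
        for b, (bx, by) in nodes.items():
            if a != b and (ax - bx) ** 2 + (ay - by) ** 2 <= radius ** 2:
                adj[a].add(b)
    return adj

def multihop_route(nodes, radius, src, dst):
    """Shortest multi-hop route (BFS) from src to dst, or None if the
    destination is unreachable through any chain of relays."""
    adj = neighbors(nodes, radius)
    parent = {src: None}
    q = deque([src])
    while q:
        cur = q.popleft()
        if cur == dst:
            path = []
            while cur is not None:      # walk back through the relays
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        for nxt in adj[cur]:
            if nxt not in parent:
                parent[nxt] = cur
                q.append(nxt)
    return None
```

With a 1.2-unit radio range, S can reach D only through the relay A; with a large enough range the direct one-hop route appears instead.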
And well, regarding [pause] the purposes of [pause] what can this kind of network be used for, [vocalsound] eh [pause] we can say that, well, cellular networks are infrastructure based networks, in this kind of ad hoc networks, we don't have [vocalsound] anything uh, with the exception of the network nodes. So, this uh network can be useful for eh [vocalsound] uh [pause] occasions where no available infrastructure is present, like for example, in [disfmarker] in [disfmarker] eh after some kind of [disfmarker] of [disfmarker] eh um [pause] catastrophe catastrophic event, or, also when [disfmarker] even when there is a network available, [pause] we don't want, for different reasons, to use the available network. This latt later case can be, for example, the use in military systems, where [pause] maybe we don't want to use the other [disfmarker] the enemy network infrastructure, for evident reasons. Yeah. [speaker003:] Hmm? But you can also, I believe, uh to have an ad hoc network, where one node is then connecting to the outer world. For instance, what I have in mind, you have a car, where you plug in different devices and they talk to each other, for instance, but one device decided to go to the G S M network or GPS network, or whatever kind of things. [speaker004:] That's OK. [speaker003:] So there's a certain kind of task uh, for one node within the ad hoc network to provide connectivity to the outer world, right? [speaker004:] That's O K. [speaker003:] OK. [speaker004:] That's perfect, you know. [speaker003:] Yeah. [speaker004:] This [disfmarker] in this case, we will say that this node is acting as a [disfmarker] as a gat external gateway, from this network [pause] to connect to other networks, [speaker003:] Right. [speaker004:] and of course the underlying [pause] uh protocols can can be, of course, uh IP based. Uh, OK. Here you have several [disfmarker] several uh, proposed, uh, scenarios for [disfmarker] for used this networks. 
When no network is available, OK? And another [disfmarker] another application, for example, NASA is [disfmarker] is uh studying, is uh the use of these networks, these ad hoc networks, for sending multiple probes, uh, um [pause] uh [pause] missions. So, for example, all of you are aware of this uh [disfmarker] the last Atlas, the last successful uh Mars mission, the cellular vehicle was only one, and was uh walking around, uh roaming around the base station, which in this case was the gateway uh node of this network, but it was only one mobile node, of course. Nothing prevents, except well maybe the cost, to send not only one mobile rover, but ten tons of them, or [disfmarker] [pause] or at least uh several of them. So, [pause] uh higher area could be covered, could be explored and uh of course, all the different nodes could also use some kind of ad hoc networking,, ad hoc routing, to extend the distance, the maximum distance, than uh a single radio transmission will [disfmarker] would allow. OK? And in the other case, or the other area uh of [disfmarker] of [disfmarker] of application, [pause] is the so called network of sensors, where [pause] we can deploy a set of sensors in a maybe uninhabited area, let's say, for example, the forest, and we can uh take some information uh, for example for [disfmarker] for fire fighting, we can just launch from a plane, several small sensors, and uh these sensors can be for example, uh checking that there is no fire, they can have some kind of uh let's say infrared technology sensor, so we can cover a huge area with only a small set of [disfmarker] of sensors, and without any other infrastructure. Yeah. [speaker002:] Um, is that really an ad hoc network? Because the sensors are not moving. And I think it's [disfmarker] makes no sense with moving sensors because then you don't know where the sensors are. [speaker004:] Yeah. [speaker002:] So. [speaker004:] Well, You know, it's an interesting question. 
Uh, the [disfmarker] the main thing about an ad hoc network is not that nodes move. It is the way topology changes can happen. The idea that mmm, any change is possible at any given moment. But you can of course ask, "but if the nodes are not moving, what kind of topology changes can we expect?" [speaker002:] Yeah. That's it. [speaker004:] Well, eh, the thing is that usually in [disfmarker] in a n in a network of sensors, what we have, or the idea is to have um, an important number of sensors, and maybe with a high failure rate. [speaker003:] Yeah. [speaker004:] maybe the batteries are uh [pause] [vocalsound] exhausted in some nodes because maybe uh they are not being able to take uh, let's say, any sun to recharge its cells, [speaker002:] OK. Mm hmm. [speaker004:] uh maybe some of them can be uh just uh out of order [pause] for other [pause] eh environmenta environmental conditions. So uh the idea is that even when these nodes are not moving, they can start and stop working at uh, let's say, higher pace. Then, uh [pause] what will happen in a [disfmarker] in a regular network, uh in a network where you don't have or [disfmarker] either mobility or uh this uh on off uh rate, high rate. [speaker003:] Mm hmm. Let me go back to your uh uh example. [speaker002:] Hmm. [speaker004:] Yeah. [speaker003:] Have in mind some routers are burnt [pause] here, at ICSI or whatever kind of thing, what will happen then even in a fixed network? [speaker002:] Mm hmm. [speaker003:] So, if some sensors are getting out of uh [pause] service and whatever kind of thing, so the topology has changed, even if no node is moving. [speaker002:] OK. Mm hmm, [speaker003:] Yeah. [speaker002:] OK. [speaker003:] And how to provide the connectivity and the intercommunication things wi within this network. [speaker004:] Thank you, Joe. [speaker002:] Yeah, I see. Mm hmm. 
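The exchange above makes the point that a field of non-moving sensors still has an ad hoc topology: when a battery dies or a node burns out, routes change exactly as if nodes had moved. A toy reachability check over the sensor graph illustrates it (a hypothetical sketch added for clarity, not code from the talk; the graph and names are invented).

```python
def connected(adj, src, dst, dead=frozenset()):
    """Is dst still reachable from src once the failed ('dead') sensors
    are removed from the graph?  Plain depth-first search."""
    seen, stack = {src}, [src]
    while stack:
        cur = stack.pop()
        if cur == dst:
            return True
        for nxt in adj[cur]:
            if nxt not in seen and nxt not in dead:
                seen.add(nxt)
                stack.append(nxt)
    return False
```

Here the middle sensor B relays between A and C; once B's battery is exhausted, C becomes unreachable even though nothing moved.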
[speaker004:] Well, uh, you know, and finally, uh, another non military applications, where uh maybe a network is available, like for example, a room like this, where we have, in fa in fact, a wired infrastructure, but [pause] this uh [pause] possible application of ad hoc networks could be uh, any place, any meeting room, where we arrive with our computers and eh we just [pause] can communicate seamlessly [pause] to each other without [pause] any kind [pause] of access point, any kind of uh mmm external network, because [pause] the kind of work we want to do is just only local based, it's only to chat to each other or to exchange files with the person, which is [pause] in front of us. OK, but we n don't want to product [disfmarker] pro provide, eh, access to the internet or something. OK? And, of course, the same eh [disfmarker] the same scenario, [pause] could also happen in a place where there is no [pause] uh, infrastructure available. Let's say a meeting room uh, in [disfmarker] in the middle of the desert, OK?, without any other network. So the point [disfmarker] or [disfmarker] or the question could be for some of us, the idea of saying "OK, That sounds OK, but why [pause] not to use a longer transmission range, so we can reach any destination in just one hop?" This seems to make sense because, [pause] well, uh we are [pause] this way avoiding all the routing problem that, well, we will talk a little bit uh, about routing [pause] later, but uh it [disfmarker] it seems that i probably that the problem is [disfmarker] is not an easy one, because uh a node's moving, the route is changing, so w one can think that routing in this uh scenario is [disfmarker] is a complex thing. 
But the problem is [pause] if w we just uh [pause] try to put more power in each tr transmitter, so this uh transmitter can reach any destination [pause] in only one hop, what we are doing is uh to rising the power needs of each node, and also we are reducing the network uh throughput, because we are only allowing one transmissio one transmission at a time. Because uh, well, we cannot allow several transmissions, several high powered transmissions at the same time because one is interfering with the other. OK? We are [disfmarker] uh I think I'm not telling [pause] it yet, but [pause] we are assuming [pause] that we are using a single common channel for all the nodes. OK? So that's why [pause] we cannot allow several transmitters in the same channel at the same time. OK? So, the idea of hav having a longer transmission range is not [pause] a good [pause] one. or at least it has the drawbacks um, um, I was telling you. Eh, let's speak a little bit about the media access issues in [disfmarker] in wireless networks, Uh, you know, the [vocalsound] easiest eh way to [disfmarker] to show how this works is [disfmarker] or at least one of [disfmarker] I think one of the easiest [disfmarker] is that this drawing, or this picture, what [disfmarker] what we have is a set of nodes, each one at a different location. At [disfmarker] at [disfmarker] at a [disfmarker] at an [disfmarker] at a given moment this eh node, this dash node is [disfmarker] is transmitting and [pause] uh, the arrows are representing this transmission and the receiving nodes, these three nodes, are receiving the transmission, and I'm trying to uh [pause] show the area, the covered area, for [disfmarker] of this transmission by means of this uh dashed eh line circle. OK? So, uh, any transmission [vocalsound] has a maximum transmission range, and uh in the [pause] simpler way, we can see this like a circle with the center in the transmitter and a certain radius. OK? 
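The coverage picture Miguel is describing (a single shared channel, with each transmission heard inside a disc of some radius around the transmitter) can be put in code. This is an illustrative sketch added for clarity, not code from the talk; the node names, coordinates, and function names are invented.

```python
def in_range(tx, rx, radius):
    """Simplified disc model: rx hears tx iff their distance <= radius.
    (The talk notes real radio behavior is messier; this is the toy model.)"""
    (tx_x, tx_y), (rx_x, rx_y) = tx, rx
    return (tx_x - rx_x) ** 2 + (tx_y - rx_y) ** 2 <= radius ** 2

def receivers(nodes, sender, radius):
    """All nodes inside the sender's coverage disc, i.e. the nodes the
    dashed circle on the slide would enclose."""
    return {name for name, pos in nodes.items()
            if name != sender and in_range(nodes[sender], pos, radius)}
```

With three nearby nodes and one far-away node, only the nearby ones appear in the sender's disc.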
The value of this radius is uh [disfmarker] [pause] in principle, is a fixed value and it depends on the uh power of the transmitter. Of course, uh [pause] this is only our uh [pause] simplife simplified model and of course this can be uh [pause] improved uh [pause] a lot, OK? And in fact the real behavior is not exactly this one, but uh for [disfmarker] for what I want to say it's [disfmarker] it's [disfmarker] OK, it's enough. OK? So what are the differences uh, in this [disfmarker] in this eh [pause] wireless transmission, regarding uh to the wireless [disfmarker] to the wired ones. What [disfmarker] what are the d the main differences. Well, the main differences come from the fact that, uh in uh [disfmarker] in uh wireless transmission we have a special domain we don't have when we are using a wire for transmitting a signal. And the point I'm trying to show in this picture, is that if for example, this other node wants also to transmit while this one was transmitting, if we are using some kind of carry detect scheme, like for example, CSMA, or something uh uh derivated from it, the problem is that this node will hear the g the m the transmission media, it won't be able to hear anything so, he will conclude that [pause] no other transmission is going on, and he will start transmitting. And if he does, what will happen is that this node and this other node will also receive this transmission, and this will produce, at least at these two nodes, a collision or a m a m a [disfmarker] a re a bad reception. OK? So none of these two simultaneous transmissions will be properly received at this node nor at this other node. OK, eh, this problem is what is called [disfmarker] Oh, sorry. uh, the [disfmarker] I think this is the previous one? This is the German [disfmarker] [speaker003:] Uh. [speaker001:] No. One [speaker003:] Yes, [speaker004:] OK, [speaker003:] this one. [speaker004:] thank you. Eh this is what is called a "hidden terminal" problem. OK? 
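The hidden-terminal situation just named (two senders that both reach the nodes in the middle but cannot hear each other, so carrier sensing at one never detects the other and their frames collide at the common receivers) can be found mechanically in the same disc model. A hypothetical sketch, not from the talk:

```python
def hidden_pairs(nodes, receiver, radius):
    """Pairs of nodes that both reach 'receiver' but are out of range of
    each other: the hidden-terminal configuration, where CSMA-style
    carrier sensing cannot prevent a collision at the receiver."""
    def dist2(a, b):
        (ax, ay), (bx, by) = nodes[a], nodes[b]
        return (ax - bx) ** 2 + (ay - by) ** 2

    r2 = radius ** 2
    near = [n for n in nodes if n != receiver and dist2(n, receiver) <= r2]
    return {frozenset((a, b))
            for i, a in enumerate(near)
            for b in near[i + 1:]
            if dist2(a, b) > r2}
```

In the example below, A and B are hidden from each other with respect to R, while C hears everyone and so is never part of a hidden pair.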
because well uh this node is not able to hear this other node [pause] because of the distance [pause] uh, between them. But Uh, this problem, as you can see in the [disfmarker] in the picture, is uh really producing [pause] at least some reception problem at these two nodes. OK, so it is [disfmarker] it is a problem that we have to address and it is a new problem compared to wired uh media access control. And, uh, well, some devel some protocols have been developed to avoid this problem and to try to address this particular behavior of wireless transmissions and the main idea is to use some kind of a reservation mechanism. So we are um reserving a certain, and a specific uh, area of space so we can [vocalsound] guarantee that nobody is trying to use the same space at the same time. OK? This is the basic idea. [speaker003:] Mm hmm. Um [speaker004:] Yeah? [speaker003:] Miguel, can you give some examples for such kind of protocols? [speaker004:] Yeah. Yeah, in fact [disfmarker] [speaker003:] One is really [disfmarker] really [disfmarker] [speaker004:] This one. This one, n next slide is [disfmarker] [speaker003:] OK. Yeah, yeah. OK. [speaker004:] OK, in eh the [disfmarker] the [disfmarker] the main idea and [disfmarker] and the base of most of this protocols, is what this [disfmarker] the so called RTS CTS uh dialogue. The idea is that, uh, of course, this is only intended for unicast transmissions, I mean, only for when a node wants to transmit something to a neighbor node. OK? But not to broadcast. Uh, to all the nodes. OK. For this [disfmarker] for this eh, work, the transmitter has to send a [disfmarker] a [disfmarker] a reservation packet called RTS, "Request To Send", before eh [pause] being able to send data. And the receiver is expected to return back another packet [pause] called CTS, "Clear To Send", to allow the transmitter to send the data. 
If the transmitter receives successfully the CTS packet from the destination, then it will be able to send eh the packet, the data packet. If not, it will h uh, back off, and it will retry uh sometime later. OK? So the main idea [disfmarker] [speaker003:] And the media is [disfmarker] is [disfmarker] is still occupied during that period, or [disfmarker]? [speaker004:] Uh. Yes. Yes. [speaker003:] OK. [speaker004:] The idea is uh, if I want to send you something, I [disfmarker] first of all, I need you to answer me to be sure that you are my neighbor now because maybe you are off or maybe you are uh too far away, and also because that way Miguel, for example, it may be is not being able to hearing me will hear your CTS and will know that you are about to receive uh data from me. OK? So in fact, forcing the receiver to transmit for a short period of time is also uh, used to signal all the uh receiver neighbors that a recept a reception is taking place. OK? [speaker003:] Yeah. I didn't get it. Um, what happened when I sent an RTS packet? Is then the media still occupied for me? Or, is anybody allowed also to send RTS packets? [speaker004:] Or everybody [disfmarker] [speaker003:] Because if I go to an intermediate node, maybe you are communicating to me and I am the intermediate node to the final destination Michael. And Michael send also some RTS packet in the time and [disfmarker] and [disfmarker] uh uh uh Jordi too and whatever. [speaker004:] Yeah. [speaker003:] Then you will never come to a really [disfmarker] then you're always exchanging RTS and non acknowledgement packets for the RTS. [speaker004:] OK. Two [disfmarker] two things. [speaker003:] So that's uh [disfmarker] that's uh, my [disfmarker] my problem. [speaker004:] First of all, [pause] eh, this [disfmarker] this mechanism is only a media access controlled mechanism, so it is not connected [pause] with routing [pause] eh, anyhow. [speaker003:] Then, fine. [speaker004:] And, second point, [speaker003:] OK. 
[speaker004:] eh [speaker003:] So it's only related to the direct neighbor within the scope of the [pause] transmissions? [speaker004:] RTS [disfmarker] Yeah, RTS packets can collide [pause] to [pause] other RTS packets. [speaker003:] OK. [speaker004:] So, [pause] this mechanism [pause] is not [pause] uh preventing [pause] collisions, it is preventing only data collisions. OK? [speaker003:] OK. [speaker004:] So the [disfmarker] And [disfmarker] and [disfmarker] you can say "OK, but what is the [disfmarker] what the benefit is?" The benefit is that, eh RTS packets and CTS packets are short packets, really short packets. Let's say eh [pause] eh 30 bytes or less. [speaker003:] I assume so. Yeah. [speaker004:] while data packets can be much mor much bigger. Let's say uh one K, five K s. OK? [speaker003:] Yeah. [speaker004:] So [speaker003:] Fine. [speaker004:] a collision in this short packet [pause] is only wasting a small amount of time. And, uh if that way we can warranty that there is [disfmarker] there are [disfmarker] there is no collision uh uh, between data packet, well, this [disfmarker] this is [disfmarker] this means a [disfmarker] a huge eh increase in network throughput, because we well, we [disfmarker] we only waste really small time colliding. [speaker003:] OK, [speaker004:] OK? [speaker003:] but that means, in principle, that on the MAC layer you have always a RTS CTS [pause] packet between neighboring nodes, [speaker004:] Yeah. [speaker003:] and you send afterwards the data. But the data will be maybe delayed in the [pause] intermediate node because you have always be assured to be uh s uh uh RTS and CTS packets are changed before [disfmarker] [vocalsound] before you can forewarning the data, right? [speaker004:] Yeah. It is a reservation mechanism, [speaker003:] Yeah. OK. [speaker004:] so if reservation [pause] doesn't work OK, if uh one reservation don't succeed then, uh, this transmission should have to wait. 
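The reservation dialogue as explained above (RTS asks, CTS grants, no CTS means back off and retry, a node takes part in only one transfer at a time, and only the short ~30-byte control frames ever collide) might be caricatured with a toy state machine. This is an illustrative sketch of the idea only, not the IEEE 802.11 mechanism itself; the class and method names are invented.

```python
class Node:
    """Toy unicast MAC illustrating the RTS/CTS reservation dialogue."""

    def __init__(self, name):
        self.name = name
        self.busy = False          # already committed to a transfer?

    def send_rts(self, receiver):
        """Try to reserve the medium; data may be sent only after a CTS."""
        cts = receiver.recv_rts(self)
        if cts:
            self.busy = True       # transfer in progress until data is sent
        return cts

    def recv_rts(self, sender):
        """Answer CTS only if not already involved in another transfer."""
        if self.busy:
            return False           # sender must back off and retry later
        self.busy = True
        return True

    def recv_data(self, sender):
        """Data reception complete: both ends are free again."""
        sender.busy = False
        self.busy = False
```

A third node asking a busy receiver gets no CTS and must retry after the ongoing data exchange finishes, which is exactly the "reservation failed, transmission has to wait" behavior described above.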
[speaker003:] Now the p [speaker004:] until the reservation [disfmarker] OK, yeah, let me [disfmarker] [speaker003:] Maybe it's better to [disfmarker] to uh, [pause] draw it on the [disfmarker] Oops. [speaker004:] l let me [disfmarker] [speaker003:] So, maybe my idea first. So this is what, node one? This is node two and this is the source, [speaker004:] Yeah. [speaker003:] and this is node three that is the final destination. [speaker004:] Yeah. Fo fo for forget [disfmarker] forget about routing. [speaker003:] Yeah? Yeah, yeah. OK. But but anyway, I would like to do that in the ad hoc network. This is maybe the sco this is within the scope [pause] of this, uh [pause] special network. That means here that is uh within the transmission power. [speaker004:] OK. [speaker003:] So if I send [pause] an RTS, and get back, [pause] um, a CTS, then he is allowed to send data, right? [speaker004:] Mm hmm. a CTS. That's OK. [speaker003:] So. In the meantime, when he gets this one, he also had to send RTS [nonvocalsound] and then CTS because this is the next next scope of the transmission area, [speaker004:] Mm hmm. [speaker003:] and then he can send the data, right? [speaker004:] OK. [speaker003:] That's the case? [speaker004:] That's it. That's [disfmarker] [speaker003:] Yeah? And that's what I mean. In the meantime, he c still can get some data here, where he try to get certain connectivity on the MAC layer. [speaker004:] Mm hmm. Yeah, but eh [pause] a node is only [disfmarker] is only involved in one transmission at a given time. So if now two has uh mmm answer with the [disfmarker] with the CTS, it will be waiting until the data that has to come. [speaker003:] Yeah? OK, [speaker004:] OK? [speaker003:] and then he try to forward this data. [speaker004:] Yeah. [speaker003:] After that. [speaker004:] But n not before. [speaker003:] So he is definitely first to stop. He has to collect all the data? [speaker004:] Yeah. 
Yeah, you cannot [disfmarker] [speaker003:] Could it not be a little bit strange because if that is starting a file transfer with millions of bytes, he have to collect all the bytes before he forwarding to [disfmarker] to whatever. [speaker004:] No, no, no. But [disfmarker] we are [disfmarker] we are talking only about a data packet. [speaker003:] Huh? One data p [speaker004:] at the MAC layer. [speaker003:] He have to do it for all data packets? [speaker004:] Yeah. [speaker003:] OK. OK. So but, in principle, [pause] when he receives the data packet then he [disfmarker] The same applies for the next data packet. [speaker004:] Yeah. All the packets, all unicast packets. [speaker003:] but anyway in the meantime he has to forward the packet to the final destination. [speaker004:] Yeah. That [disfmarker] that true. [speaker003:] Right? [speaker004:] That's true. [speaker003:] So it's acting, in [disfmarker] in prin [speaker004:] But not in the meantime. After receiving [disfmarker] You know, in fact, probably the routing protocol [pause] will need to check the address [disfmarker] the [disfmarker] the let's say IP address, which is inside the data packet, [speaker003:] OK. Yeah. [speaker004:] so, you cannot [disfmarker] u usually you cannot, uh, read the data packet until you [disfmarker] [pause] the reception is over. OK? because [pause] usually your network hardware only provides you a copy of a packet after ou a [disfmarker] a successful reception and usually you know that reception is successful if every bit has passed the CRC check. OK? You have checked that the [disfmarker] that this frame has received [pause] uh without errors. [speaker003:] Naja to [disfmarker] to [disfmarker] Yeah. [speaker004:] OK? [speaker003:] Mm hmm. Yeah, [@ @]. OK. [speaker004:] So [speaker002:] Um, a further question. [speaker004:] Yeah. [speaker002:] Um. [speaker004:] That's OK. 
[speaker002:] Am I right that this um dialogue does not prevent um any disturbance of other nodes in the um sending range of, yeah, N one. [speaker004:] Yeah, let me [disfmarker] let me put some special representation of what we are doing. [speaker002:] S [speaker004:] OK. This is the transmitting [pause] node, this is the receiving [pause] node, and these two areas, of course, there's something like this, are the uh places of the [pause] total space where you can hear the [disfmarker] Here, this one is the RTS and this one is the CTS. [speaker002:] Mm hmm. [speaker004:] So, any node, for example, this node [pause] is a neighbor of the receiving node, [speaker002:] Mm hmm. [speaker004:] but it is not a neighbor of the transmitting node. [speaker002:] Yeah. That's it. [speaker004:] So when this node hears the CTS transmission that is performed by the receiver, this will be uh [pause] informed, that a recev a reception is taking place in [disfmarker] in [disfmarker] in it's area. [speaker002:] Mm hmm. Mm hmm. [speaker004:] OK? So in fact, this node will avoid any further transmission until uh this reception, that will start in [disfmarker] in a moment, is over. [speaker002:] But then this node, um, this transmission is uh aborted. [speaker004:] OK? [speaker003:] And delayed. [speaker002:] And so there's no advantage to uh [disfmarker] if it's aborted, by [disfmarker] by any data packets. [speaker003:] Yeah. [speaker002:] So this um node has to stop each transmission. [speaker004:] Yeah. [speaker002:] So, even the at the moment ongoing transmission. [speaker004:] Mm hmm. [speaker002:] Is that right? [speaker004:] Yeah. [speaker002:] So, his transmission is aborted. [speaker003:] Yeah, but this is typical for MAC layer protocols, you know. [speaker004:] Not [disfmarker] No. No. [speaker003:] That's [disfmarker] [speaker004:] Let me [disfmarker] let me [disfmarker] let me [disfmarker] show you one thing. 
If this node was transmitting before, it should uh it should ha uh [pause] transmit an RTS packet in advance, OK? [speaker002:] Mm hmm. Mm hmm. [speaker004:] So this node should hear it before. [speaker002:] Yeah. [speaker003:] Yeah. [speaker002:] That's right. [speaker003:] Yeah. [speaker002:] Mm hmm. [speaker004:] So this node [pause] is [disfmarker] W won't be sending the eh eh CTS packet to the transmitter. [speaker002:] OK. Yeah. [speaker003:] Yeah. [speaker002:] Mm hmm. [speaker004:] So this transmission [pause] won't take place. [speaker002:] OK, yeah. [speaker003:] Yeah. It's always true that only [pause] one node gets a media access, for one frame. [speaker002:] Mm hmm. Mm hmm. [speaker003:] That's the point. [speaker004:] M Yeah. [speaker003:] We always said "from data" but it's really one frame. [speaker004:] Yeah. [speaker003:] Yeah, OK. [speaker004:] And, in fact, uh there is another detail, it is not uh uh uh uh in [disfmarker] on the [disfmarker] on the slide, and it is that both the RTS and the [pause] CTS packet uh, have a [pause] field, a control field where [pause] the [pause] total length of the data packet is shown. [speaker002:] Mm hmm. [speaker004:] So when you hear one uh CTS or one uh RTS packet, you know that the frame that they are trying to transmit [disfmarker] or, that this one is trying to transmit this one [pause] is about to receive [disfmarker] You know the length. You know the bit rate, [speaker002:] Mm hmm. [speaker004:] so you know that the amount of time you have to wait until you can try to get the control, or t try to send your own transmission. [speaker002:] Mm hmm. OK, but this scenario is also nov uh, only working if um both or all nodes have the same range of transmission. So, the s same radius of the sending range. [speaker004:] Mmm, yeah. [speaker002:] So otherwise um this one node maybe [pause] um [pause] would hear [pause] his neighbor but not vice versa. 
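The length field just described is what lets an overhearing node compute exactly how long to stay silent: knowing the announced frame length and the bit rate, it can work out when the reserved transmission will be over. A minimal sketch of that deferral computation (the optional link-layer acknowledgement some protocols add is folded in as extra bytes; exact timings of any particular standard are not modeled):

```python
def defer_until(now_s, frame_bytes, bitrate_bps, ack_bytes=0):
    """Earliest time (seconds) an overhearing node may contend again,
    given the data-frame length announced in an overheard RTS or CTS
    and the known bit rate. `ack_bytes` models a short link-layer
    acknowledgement, if the protocol uses one."""
    on_air_s = (frame_bytes + ack_bytes) * 8 / bitrate_bps
    return now_s + on_air_s
```

For a 1000-byte frame at 1 Mbit/s, a node overhearing the CTS at time 0 would defer until t = 8 ms, the frame's airtime.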
[speaker004:] Uh, well, we are assuming that uh mmm all the [disfmarker] all the transmissions are bidirectional. I mean, uh you know it depends. It depends mainly on [disfmarker] on [pause] the antenna gain and the transmission power and the receiver uh [pause] threshold. [speaker002:] Mm hmm. Mm hmm. [speaker004:] OK? [speaker003:] Uh huh. [speaker002:] But I'm thinking about this example you gave [pause] before. [speaker003:] Maybe [disfmarker] maybe you are right. [speaker004:] S So what [disfmarker] what we are assuming is that all the nodes behave exactly the same regarding to uh transmission and receiving signals. [speaker003:] Yeah, but, Miguel, this is a good point from [disfmarker] from Dietmar, [speaker002:] Yeah. [speaker003:] but maybe here is a wall. And this guy is behind the wall, so [pause] his transmission rate is something like that. [speaker004:] Yeah but [disfmarker] but [disfmarker] yeah but the [disfmarker] but in this case, you know this [disfmarker] this idea, this uh geometric idea, can be [disfmarker] can be, if you want, uh deformed because of some obstacles. OK? But this wall is working both ways. It is not only making this node not to hear this one, it's also making this one not to hear this node. [speaker002:] But one of the [disfmarker] [speaker004:] So if they [disfmarker] Because of this wall, if they are not neighbors, well, we can represent this putting this node like here. [speaker002:] But one of the um senders could be much stronger than the other one. [speaker004:] OK? No, but this is not the scenario we are assuming. [speaker002:] So [disfmarker] OK. So that was my question. [speaker004:] Yeah. [speaker003:] Mm hmm. OK. [speaker004:] We are assuming all the nodes are exactly the same. [speaker002:] OK. [speaker004:] Uh, so they transmit the same power, and they uh have the same uh uh sensitivity. [speaker002:] Mm hmm. [speaker004:] So they can hear [disfmarker] [speaker002:] OK. 
[speaker004:] If [disfmarker] if [disfmarker] if I can hear you, you can hear me. [speaker002:] Mm hmm. [speaker004:] and [disfmarker] and the opposite works also. [speaker002:] Mm hmm. [speaker004:] OK, if you cannot hear me, I cannot hear you. [speaker002:] But [disfmarker] Mm hmm. OK. It's a assumption it's [disfmarker] it's clear. But on the other hand, um, in a real world it isn't, or [disfmarker]? Um. [speaker004:] In a? [speaker002:] In real scenarios, uh, you have to [disfmarker] [speaker001:] It [disfmarker] It is actually [disfmarker] [speaker004:] Uh [pause] not really, but also depends on uh you know, for example power relationships, and antenna relationships. [speaker002:] Yeah, [speaker004:] You know, eh [speaker002:] eh that's it. [speaker004:] Because [disfmarker] [speaker002:] A car or [disfmarker] [speaker004:] because for example, uh [pause] one kind of thing, OK, if I use a directional antenna, this is making that in one direction, [speaker002:] Yeah. Mm hmm. [speaker004:] I'm getting more uh [pause] let's say more gain than another. [speaker002:] Yeah, [speaker004:] And that's true [speaker002:] OK. [speaker004:] but th that [disfmarker] What is also true [pause] is that one directional antenna works both ways. [speaker002:] That [disfmarker] You're not assuming this. Yeah. [speaker004:] It provides you [pause] more gain [pause] when you are transmitting but also when you are receiving. so, in general, in general even when [disfmarker] if there [disfmarker] uh [pause] if there i are different antenna gains, uh, you're getting mainly bidirectional channels. Only when you have a an important uh power difference is when l um, the communication can only work one way. [speaker002:] Yeah. OK. [speaker004:] For example, you have one big station with uh [vocalsound] a lot of transmission power, and then you have a really small mobile uh thing that can hear perfectly the signal from this uh uh bigger station, [speaker002:] Yeah. 
Yeah, a cellular phone and a car. [speaker004:] but it has not enough power to transmit something that can be heard here. [speaker002:] Mm hmm. Yeah. [speaker004:] So um [pause] But this [disfmarker] this [disfmarker] this is not our scenario. [speaker002:] OK. [speaker004:] In our scenario, all the nodes are exactly the same. [speaker002:] Mm hmm. OK. [speaker004:] They [disfmarker] they have the same uh signal uh behavior. [speaker002:] Mm hmm. [speaker004:] So this is not a problem for us, at least for the moment, maybe later we'll see some differences. So You know, the idea is that this mechanism is a reservation mechanism. And what we are doing here is try to reserve the area of space we are covering and eh here's where our [pause] some of our work [pause] is done. The main idea [pause] for [disfmarker] for our work, is well, [pause] if we are using this reservation mechanism, we are reserving an area of space for a single transmission. Uh, if this area is bigger than what is really needed, we are, somehow, wasting a little bit of space. So, uh our first eh work in this area [pause] was [pause] to study different mechanisms [pause] to try to adjust [pause] these transmission values, these uh the area, the covered area, [pause] for a single transmission, trying to get a minimum value, so [pause] we can [pause] reduce [pause] the not used space for other simultaneous transmissions. And this will [pause] lead us to a [disfmarker] a better [disfmarker] [pause] to a better network throughput, to higher number of simultaneous transmissions. And, we have, uh mmm, worked about this problem [pause] uh, in two different uh ways. One of these ways has been [pause] try to find a mechanism [pause] to uh calculate [pause] an optimal value [pause] for the transmission range [pause] uh, and to get [pause] an equal value for all the transmitters. So, [pause] the idea in this first work [pause] was [pause] to obtain [pause] a fixed value [pause] for the transmission range. 
And the second eh work, uh, it is about [disfmarker] Oops! It is about uh, an algorithm to adapt the transmission power for every for every single transmission. So, uh, in this second part of our work, [pause] we are using a different value for every single transmission. So. First point, [pause] what we have proposed is [pause] a way [pause] to obtain [pause] the transmission [pause] range [pause] uh [pause] for uh for a scenario where we know the number of nodes and the area of deployment, [pause] we have a [disfmarker] a fixed area, where we deploy all the nodes, we have a fixed number of nodes, and, [pause] uh we assume that these nodes are moving [pause] freely around this [disfmarker] this area and, uh, what we have presented, [pause] is a mechanism [pause] to uh calculate [pause] to obtain [pause] the eh [pause] transmission [pause] radius that [pause] provides at the same time [disfmarker] that provide us [pause] the better throughput possible [disfmarker] the better possible throughput, OK? but without our network getting partitioned, which is a problem that we are trying to avoid. You know, if we have, because of node motion, we have something like this, this is the deployment area of [disfmarker] or [disfmarker] or node, and if we have some nodes here and some nodes here, [pause] well maybe these nodes in this area, cannot reach the nodes on this other area. OK? Uh, so [pause] what can we do to avoid this? Of course we can choose uh our transmission values long enough to reach from [pause] so the nodes from this area will be able to reach the nodes on this other area. OK? But, again, we can assume that eh if uh, nodes are moving, [pause] uh [pause] more or less uh [pause] in a random fashion, this configuration is not very likely to happen. OK? So [pause] the point we ha what we have done [pause] is to present a mechanism would pr which provides [pause] uh [pause] one uh [pause] value for a given uh [pause] mmm [pause] probability. OK?
So we can say that uh [pause] a given eh value are [pause] [@ @] for [disfmarker] for the transmission range, will be OK for a given number of nodes, if [pause] we [disfmarker] Sorry. for a certain, for a certain probability. OK? So, the point is uh, we cannot say that this is not gonna happen, what we can say is this is not very likely to happen, [pause] if nodes move uh uh random. [speaker003:] Yeah, but um Miguel, [pause] if uh in the previous slide you mentioned that you uh are working also on adaptive power control. [speaker004:] OK? Yeah. Yeah. [speaker003:] Is it not possible then, if I send something and I could not find any neighbor because one of the cluster below, [speaker004:] This is the next. Yeah, th [speaker003:] is it not possible to extend the power until the maximum [speaker004:] u [speaker003:] and if uh I do not have any connectivity with the maximum of power, OK, then I'm lost anyway. [speaker004:] e e Yeah, [speaker003:] Yeah, yeah. [speaker004:] but [disfmarker] this is [disfmarker] this is a way that it is [disfmarker] it is being explored by other researchers, but not [disfmarker] not uh, by me. But uh, there is, uh [pause] one, at least one person who is [disfmarker] who is doing what they call "topology control", [speaker003:] Mm hmm. [speaker004:] and this is, uh [pause] related to [disfmarker] to uh the control of the transmission power to get uh [pause] certain properties uh from your network, like for example, they are [disfmarker] what they are doing eh is eh some people working for BBN Ram Ramanathan is the researcher, [speaker003:] Mm hmm. [speaker004:] and the article is [disfmarker] is uh I think from [disfmarker] from this year, and eh these guys are doing eh what they call uh "biconnected networks", they are trying all the network nodes to have at least two neighbors. [speaker003:] OK, mm hmm. [speaker004:] OK?
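The notion of a transmission range that keeps the network connected "with a given probability" can be illustrated with a small Monte Carlo experiment: drop the nodes uniformly at random in the deployment area, build the unit-disk graph for a candidate range, and count how often the whole network comes out connected. This is only an illustration of the probabilistic claim, not the presenter's actual mechanism, and all parameter values are hypothetical:

```python
import random

def connected(points, r):
    """BFS over the graph where two nodes are neighbors iff their
    distance is at most r; returns True if all nodes are reachable."""
    n = len(points)
    seen = {0}
    stack = [0]
    while stack:
        i = stack.pop()
        xi, yi = points[i]
        for j in range(n):
            if j not in seen:
                xj, yj = points[j]
                if (xi - xj) ** 2 + (yi - yj) ** 2 <= r * r:
                    seen.add(j)
                    stack.append(j)
    return len(seen) == n

def p_connected(n, area_side, r, trials=200, seed=1):
    """Monte Carlo estimate of the probability that n uniformly placed
    nodes in an area_side x area_side square form a connected network
    at transmission range r."""
    rng = random.Random(seed)
    ok = 0
    for _ in range(trials):
        pts = [(rng.uniform(0, area_side), rng.uniform(0, area_side))
               for _ in range(n)]
        ok += connected(pts, r)
    return ok / trials
```

Sweeping `r` upward, the estimated probability rises toward 1; the "fixed value" described above would be the smallest range whose estimated probability meets the chosen target.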
And they are controlling the power to eh this g uh mmm get this [disfmarker] [pause] this number of neighbors, but this is not what I am doing and uh the kind of power control I am doing uh, um, is more or less [pause] presented here, and what we are doing is to transmit uh [pause] with the minimum power needed depending on the distance to the destination. OK? Every transmission is supposed to be done to a certain neighbor node; we call the neighbors those nodes which are [pause] uh closer than a given distance, OK? and because we wanted to [disfmarker] to compare our algorithm with something, we needed to compare it with the no power control, which is the usual uh mechanism at this eh media access control level. So, what we are doing here is to [disfmarker] to compare our mechanism against the no power control, but again we have to fix [disfmarker] in order to be able to compare, we have to fix a given whatever, but a given uh maximum transmission range to compare. And, uh [pause] what we are doing is [pause] we are uh [pause] taking the advantage that we are u we have this RTS CTS dialogue, in advance, before sending the data, to uh [pause] agree with the receiver, the power level he is willing to get from us. OK, so every single transmission [disfmarker] in every single transmission we are checking with the receiver the power level we have to use for the data transmission. That's the main thing. And [disfmarker] as, uh [disfmarker] [speaker003:] And how do you [disfmarker] sorry Miguel [disfmarker] how do you check if [disfmarker] if the nodes are moving randomly? [speaker004:] Well, uh we are assuming that e y for [disfmarker] at least for uh [pause] veh uh you know, uh, car and [disfmarker] and walking or [disfmarker] or people riding a mo motorcycle the f um frame length is so small that only um [pause] very [disfmarker] very small um [pause] space can be [disfmarker] can be eh travelled while the transmission is taking place.
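One plausible way the RTS/CTS dialogue can set the data-frame power, consistent with what is described here but with the mechanics filled in as an assumption: the receiver measures the received power of the RTS, infers the channel attenuation from the known RTS transmit power, and asks the sender (in the CTS) for just enough power to be decoded. This relies on the symmetric, quasi-static channel assumed earlier in the discussion; the function and its parameters are illustrative, not the presenter's algorithm:

```python
def negotiated_tx_power_mw(rts_tx_mw, rts_rx_mw, wanted_rx_mw):
    """Data-frame transmit power the receiver would request in the CTS.

    rts_tx_mw    -- known (maximum) power the RTS was sent with
    rts_rx_mw    -- power at which the receiver actually heard the RTS
    wanted_rx_mw -- minimum received power the receiver needs
                    (sensitivity plus any safety margin)
    """
    attenuation = rts_rx_mw / rts_tx_mw   # linear channel gain, assumed symmetric
    return wanted_rx_mw / attenuation     # just enough power for the data frame
```

Closer neighbors suffer less attenuation, so the requested power drops well below the maximum, which is where the savings in reserved area come from.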
You know, in one millisec [speaker003:] Yeah, OK, maybe that's applied for RTS CTS, but for the real frame? [speaker004:] Yeah, well, you just take the [disfmarker] [speaker003:] The frame could also be uh whatever, [pause] I don't know but [pause] a few hundred bytes, you know, that takes a little bit time. [speaker004:] Let's [disfmarker] let's [disfmarker] no but [disfmarker] Yeah but, i let's say you have one kilobyte. OK? One kilobyte is uh [pause] eight thousand uh bits. OK? [speaker003:] Mm hmm. [speaker004:] And eh, if you are transmitting at, let's say, uh at one megabit per second, this means one microsecond per bit. OK? So this only is taking eight milliseconds, which is not a lot of time if you are travelling, let's say, at uh one hundred kilometers per second. [speaker003:] No, s [speaker004:] Oh per [disfmarker] per hour. [speaker003:] no sorry m Yeah, that's right. [speaker004:] Sorry. [speaker003:] But first of all it took place RTS CTS, and there could be, you mentioned, if it's still occu if the medium is still occupied by another one, it will be delayed anyway to a randomly time. [speaker004:] Yeah, it will be delayed but you will start from scratch again. I mean, it will be a start with another RTS CTS. [speaker003:] Oh, so! OK, y OK. OK. I see. Yeah. OK. [speaker004:] OK? But, anyway, [pause] uh, well, depending on [disfmarker] on the network, you know, the [disfmarker] the [disfmarker] the [disfmarker] this [disfmarker] I will have to translate this, uh It shou should be something [pause] ipsh So, this should be something like uh thirty meters per second. OK? [speaker003:] Oh yeah, something like that. [speaker004:] So in [disfmarker] in eight milliseconds, well, the amount of [disfmarker] of space you are travelling is not too much. [speaker003:] It's OK.
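The back-of-the-envelope argument just made can be checked directly: a 1 kilobyte frame (8000 bits) at 1 Mbit/s is on the air for 8 ms, and a node moving at about 30 m/s (roughly 100 km/h) covers only a fraction of a meter in that time:

```python
def displacement_during_frame_m(frame_bytes, bitrate_bps, speed_mps):
    """Distance a node travels while one frame is on the air.
    Backs the claim that, at the MAC timescale, the network can be
    treated as a set of static nodes."""
    airtime_s = frame_bytes * 8 / bitrate_bps
    return speed_mps * airtime_s

# 1000-byte frame, 1 Mbit/s, 30 m/s -> about 0.24 m of motion per frame
moved = displacement_during_frame_m(1000, 1_000_000, 30.0)
```

Even at vehicular speeds the per-frame displacement is centimeters, negligible against transmission ranges of tens or hundreds of meters, which is why motion matters for routing but not at the MAC level.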
[speaker004:] For most of [disfmarker] of [disfmarker] of uh [pause] the wireless technologies, maybe not for all of them, OK, but [disfmarker] I'm [disfmarker] [pause] of course, I'm also uh using a conservative value for the transmission speed, now you can get much more speed than [disfmarker] speedier interfaces than this one, but [disfmarker] the thing is, uh, that we are assuming that for a given transmission, we can uh [pause] consider the network as if it were built of a set of static nodes. OK? Motion will be important for routing, but for the media access control level, we are considering that motion can be con [pause] is not [disfmarker] is not important. OK? [speaker003:] Yeah, what I think [disfmarker] Maybe some, uh [pause] cases why I go this way with my node, and suddenly, I s go out of the scope of the transmission area, because it's really a random movement. Then I still hear [disfmarker] this is during the data transmission. [speaker004:] Yeah, but [disfmarker] again [disfmarker] again, what we are adding with this behavior is, uh [pause] uh [pause] a little bit more, probably some [disfmarker] some uh small part to the uh bit error rate. [speaker003:] Yeah? then [disfmarker] [speaker004:] OK? Maybe this frame will [disfmarker] won't be received properly, and, should eh [pause] have to be eh dropped, [speaker003:] Oh, yeah. [speaker004:] but anyway, eh [pause] even if nodes are completely static, eh some problems will happen [pause] in the wireless transmission, so we won't have a hundred percent eh perfect eh channel anyway. [speaker003:] Yep. Right. [speaker004:] So eh [pause] if this problem, as you show it [pause] probably it's [disfmarker] it's eh [disfmarker] [pause] Y you can [disfmarker] you can see that it is not very likely to happen, because [pause] you [disfmarker] you [disfmarker] you eh have to consider the probability that o of happening this [disfmarker] [speaker003:] No. That's really right, [speaker004:] Hmm?
[speaker003:] but what I would like to say is that besides the RTS CTS behavior, you need certain kind of additional MAC layer functionality to deal with these packet. [speaker004:] Hmm hmm hmm. [speaker003:] For example, with transmission or [disfmarker] or whatever, [speaker004:] Yeah, sure, sure. [speaker003:] for whatever control, or whatever kind of things, you know. [speaker004:] In fact [disfmarker] in fact, you know I [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] I [pause] maybe I [disfmarker] I [disfmarker] I [disfmarker] had to [disfmarker] to prepare a more detailed eh, view, I [disfmarker] I just was trying to do something soft, not to [disfmarker] to bore th the public but of course there are a lot of other smart eh things that are built in, in these protocols, [speaker003:] Right. Yeah. [speaker004:] like for example, an automatic, uh [pause] acknowledgement. Some protocols used uh [pause] in the same way as this RTS and CTS packet [disfmarker] some of them use, an eh acknowledge from the receiver, which is also a short packet, so you [disfmarker] you are getting some kind of link layer uh [pause] acknowledgement scheme which can save a lot of data transmissions because eh usually when you send a [disfmarker] [pause] a [disfmarker] a [disfmarker] a frame, you will want to [disfmarker] some kind of acknowledgement back. So, if this acknowledgement has to go through the process of a new RTS and a new CTS, well, this takes longer than just a link layer uh, extra packet. That, of course uh [pause] is also contemplated in this eh packet length [disfmarker] length or [disfmarker] included in the RTS and CTS packets, so, eh, you know, everybody can wait until this uh acknowledgement is also transmitted by the receiver. 
So in fact, eh nnn [pause] some people proposing this mechanism is [disfmarker] is showing a [disfmarker] a b a big improvement and eh there are a lot of other mechanisms, also regarding and a time out mechanism, or the time out algorithm for these retries when you [pause] try to send something [pause] and you don't succeed, uh, what [disfmarker] what you the amount of time you have to wait until you retry again. Well, uh, some people is also proposing different things, different eh algorithms with different results. So, in fact, there are a lot of good [disfmarker] I just [pause] was trying to present a [disfmarker] you know [pause] the main ideas. So, I [disfmarker] I [disfmarker] I [disfmarker] you know, ask uh [disfmarker] You [disfmarker] Feel free to ask uh whatever you want, but [pause] I [disfmarker] I want to insist that [pause] it was just an [disfmarker] uh a simplified uh [pause] version, a light version. OK? well, uh [pause] you know finally this algorithm, eh, as I was telling, eh what it's doing is to check [pause] with the receiver [pause] the power level to be used. And it is very simple to see that some advantages [pause] can [disfmarker] some advantage can be obtained, because [pause] the difference is [pause] eh if you don't have power control, you are using all the time the maximum power, if you have power control, [pause] sometimes you will use [pause] less [pause] than the maximum power How often [pause] do you use less power than the maximum? Well, this will [disfmarker] [pause] this will be connected to the motion pattern. OK? Depending how nodes are moving, well [pause] maybe [pause] all the time [pause] all your neighbors are at the maximum distance But, this probably, doesn't l look like uh something very [pause] very uh [pause] [pause] very likely. OK? because well, l [pause] probably y [pause] what you will have [pause] will be some neighbors that are [pause] really close [speaker003:] Hmm. 
[speaker004:] others that are a little bit [pause] far away and finally, [pause] these neighbors that are in the [disfmarker] [pause] in the frontier [pause] of your [disfmarker] of your coverage area. OK. [speaker003:] Hmm. [speaker004:] Yeah. [speaker003:] OK, this is a question not directly related to these things, but maybe [disfmarker] that's why maybe only a short answer. Uh, do you assume that [pause] power control will be issue also in the [pause] next years? In other words, um [pause] is [pause] the [pause] um [pause] success of having more power within the devices in maybe three year, four years, five years, for batteries, and [disfmarker] and [disfmarker] and [disfmarker] [@ @] and so forth and so on uh will it be succeeded that you have so much power available that this [pause] difficult mechanism who is in the wireless network will be uh out of scope? [speaker004:] Mmm. Yeah. [speaker003:] Because if I have a device which uh can I use maybe for twenty four hours, I do not have, in principle, really the constraint [pause] to uh deal with [disfmarker] with uh power savings. [speaker004:] Yeah, let [disfmarker] let me [disfmarker] let me answer you uh with a question. Uh, If, for example, your car [pause] uh you [disfmarker] you have uh an endless [pause] uh source of [disfmarker] of gas, do you think [pause] it will [disfmarker] it will be a problem, [pause] the amount of gas your [disfmarker] your car is [disfmarker] is using? OK. It won't be [pause] a problem for your pocket, for your wallet, but maybe [pause] it will be a problem for [pause] the environment. So, [pause] I think that the answer [pause] to your question is [pause] in this line, that [pause] uh if you have an endless uh source of power, [pause] well, maybe it's not a problem, [pause] for the battery uh lasting. But, it is a problem because [pause] you are using more area [pause] than you should. And therefore you are preventing other transmissions to take place [pause] at the same time.
And therefore [pause] uh the problem is not only connected to [disfmarker] to the [disfmarker] to the power it is also connected to the net to the global network throughput. OK? [speaker003:] OK. [speaker004:] So [disfmarker] But, [pause] again, uh, you know, uh [pause] the [disfmarker] the m If you ask me "what [disfmarker] what do you think what will happen, [pause] in the industry?" What I think that will happen is that they will [pause] try to get [pause] uh [pause] mmm [pause] better [pause] hardware than we have now because now [pause] the problem [pause] with transceivers [pause] is that a lot of power [pause] is uh [pause] consumed [pause] at the logic of the transceiver, and not [pause] uh transmitted to the air. It is not power transferred to the air. So, [pause] the problem now [pause] is that eh most of the power [pause] uh, uh t uh a wireless network [pause] card [pause] is consuming, [pause] is [disfmarker] is [disfmarker] is drawn [pause] by the [disfmarker] by the eh logic control of the card. [speaker003:] Yeah. [speaker004:] All the [disfmarker] all the calculations, [speaker003:] That's what I [disfmarker] [speaker004:] all the hardware [disfmarker] [speaker003:] Yeah, that was the reason I'm talking about this is uh [disfmarker] [speaker004:] So, I am sure that eh [disfmarker] that manufacturers will try to [disfmarker] to squeeze until the [disfmarker] the [disfmarker] [pause] the last ampere f of [disfmarker] [pause] of this eh, let's say, [pause] eh [pause] wasted power, because [pause] what you cannot avoid is, uh to p l the part of the [disfmarker] of the radio frequency power you have to transmit. I mean, this is [disfmarker] this cannot be avoided. But what you can avoid or you can reduce and probably because now it is the main factor, the main consumption factor, [pause] is the logic inside. In the same way, at [disfmarker] as [disfmarker] as it happened with [disfmarker] with processors, for example. 
Now [pause] processors [pause] are much more [pause] uh power efficient than before. OK? [speaker003:] Yeah, but anyway, that was the reason for my question. If I save with a lot of difficult mechanism, within the air interface and between the base station or access point and the mobile node, one percent of the power [pause] whereas ninety nine percent is consumed by packet processing and [disfmarker] and [disfmarker] and [disfmarker] and [disfmarker] uh uh C P U or whatever, [speaker004:] Mm hmm. [speaker003:] uh [pause] is it worth to have it? If I really, with a normal device [pause] uh with a battery lifetime of maybe twenty four hours, thirty six hours or whatever kind of thing [disfmarker] And I think it will be increased. As the more mobile nodes are in the world the better the battery will be. So that's [disfmarker] was the reason for my question. [speaker004:] Yeah, but e [speaker003:] Yeah, yeah, yeah. [speaker004:] uh but [disfmarker] Oh, sorry! But even if you [disfmarker] if you don't have to [disfmarker] to worry about eh your power source, [pause] you have to worry about your environment, about not using more resources [pause] that you don't own [speaker003:] Yeah. Yeah. [speaker004:] that are uh [pause] shared about [disfmarker] w w among the community. [speaker003:] Yeah, I g got it. Yeah. Yeah. OK. [speaker005:] Excuse me [@ @]. [speaker004:] OK. [speaker005:] If I understood correctly, the mechanism eh [pause] you should eh [pause] look for um the minimum eh [pause] coverage range [speaker004:] Yeah. [speaker005:] in order to maximize the throughput of the network. [speaker004:] Yeah. [speaker005:] So there [disfmarker] the [disfmarker] the power control eh [pause] algorithm should work, in order to use at each instant of time the minimum power need to reach [speaker004:] The destination. [speaker005:] the neighbor.
That [disfmarker] Because this is the way [pause] eh [pause] eh more transmissions uh may take [disfmarker] take place at the same time. [speaker004:] Can take place at the same time. [speaker003:] Yeah, yeah, yeah, yeah. [speaker005:] So you need the adaptive mechanism to minimize the coverage range. [speaker003:] Yeah. [speaker004:] That's OK. And to [disfmarker] and to get the maximum network throughput. [speaker005:] That [disfmarker] [speaker004:] That's [disfmarker] that's the idea. However [disfmarker] however, and uh, it is always [pause] some however, uh [pause] by [pause] applying this uh [pause] greedy mechanism, [pause] in [disfmarker] in [disfmarker] or, [pause] more than greedy in this case, it could be, [pause] say, this stingy mechanism to [disfmarker] trying to avoid [pause] to use any extra space [pause] is also leading [pause] to [pause] a higher level of interference [pause] at the receivers. So, uh at the end of the road, [pause] we have to [pause] uh look for some trade offs [pause] to avoid too much [pause] uh interference [pause] at the receivers. OK? So, [pause] uh at the end we are using a little bit more power [pause] than these [pause] mathematically minimum values. OK? because if not, [pause] the uh the frame error rate or the bit error rate [pause] uh, skyrockets. [speaker003:] Mm hmm. [speaker004:] OK? because of the [disfmarker] of high interference levels. A lot of simultaneous transmissions [pause] also means [pause] for [pause] each receiver point of view, [pause] a lot of [pause] neighbors [pause] or [disfmarker] or nodes [pause] transmitting at the same time. And, all the transmitters [pause] uh with the exception of the transmitter [disfmarker] of my transmitter, [pause] are my interferers and are [disfmarker] [pause] and all of them are producing [pause] less quality reception, so [pause] if they are a lot of them, [pause] well, uh the reception can be [pause] can be uh disturbed. OK?
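The trade-off being described is the standard signal-to-interference-plus-noise ratio at a receiver: every concurrent transmitter other than the intended sender adds to the interference term, so denser spatial reuse lowers reception quality. A minimal sketch with illustrative linear (mW) values:

```python
def sinr(signal_mw, interferer_mw, noise_mw):
    """Signal-to-interference-plus-noise ratio at a receiver.
    `interferer_mw` is the list of received powers from all the
    other simultaneous transmitters; more of them means a larger
    denominator and hence a worse SINR, which is why transmitting
    at the bare mathematical minimum can make error rates skyrocket."""
    return signal_mw / (sum(interferer_mw) + noise_mw)
```

With the desired signal barely above sensitivity and many simultaneous transmissions nearby, SINR drops below the decoding threshold, so a small power margin above the minimum is the trade-off mentioned.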
And [disfmarker] [pause] but um, um [pause] this is taking longer than I expected. OK. Let's [disfmarker] let's uh take a k a [disfmarker] a really [disfmarker] a really light uh view at the [disfmarker] at the routing in this [disfmarker] in this network, which is [pause] another important [pause] area of [disfmarker] of research. You know, [pause] in fact there is a [disfmarker] a group at the IETF [pause] [nonvocalsound] uh called "MANET". This working group is [disfmarker] is devoted to [pause] uh try to [disfmarker] to present or to agree [pause] a common protocol [pause] for routing IP traffic [pause] on these eh kind of networks, on these ad hoc networks, Uh, and at this moment [pause] there are, I would say, a lot [pause] of proposals [pause] of different protocols uh for uh end to end traffic and also for multicast traffic, on these networks And eh well some day in the future eh, a protocol should be [pause] chosen but, [pause] for the moment eh [pause] we have several of them [pause] on the table. and eh [pause] any protocol has some advantages and some disadvantages. The basic [disfmarker] [pause] the basic problem we have to solve [pause] in [disfmarker] in these eh [pause] routing protocols [pause] is [pause] the fast eh [disfmarker] [pause] the fast pace at which eh topology changes [pause] can happen. So the problem here is that [pause] now [pause] uh [pause] one node is my neighbor, and next second, [pause] it is not. So any route that was going through this neighbor [pause] now is [disfmarker] is not possible and should change. And the problem is that [pause] well uh an an [pause] if you let me a personal comment, [pause] I think that one error on the group is not to agree [pause] a set [pause] of mobility patterns [pause] to [disfmarker] to do the [disfmarker] [pause] the tests.
because [pause] here, [pause] well, mobility is [pause] making [pause] these uh neighborhood relationships to appear or to disappear, So [pause] uh, it should be interesting if we agree [pause] uh a common [disfmarker] a common [pause] set of motion patterns [pause] to perform the tests because if not, well, it's difficult to [disfmarker] [pause] to perform comparisons [pause] uh between one [pause] protocol and another, a different one. But anyway, [pause] now what we have is [disfmarker] is two basic approaches to this [disfmarker] [pause] this problem. Eh, mainly [pause] what they call the proactive protocols which are um extensions of [disfmarker] of uh [pause] um Bellman Ford and link state routing protocols [disfmarker] distance vector, sorry, and [disfmarker] and link state protocols, eh, which are [pause] usually table driven, so [pause] you [pause] have a process, [pause] trying to [disfmarker] to maintain up to date [pause] a set of routing table tables in the nodes and eh every routing de decision [pause] is eh just eh looking up [pause] the table, and sending to the proper [pause] neighbor node and, eh the other [disfmarker] the other set of protocols are using a reactive [pause] uh [pause] approach and these protocols are trying to work on demand, so they are trying [pause] to resolve only [pause] those routes [pause] that are needed so when I want to send [pause] uh, a packet to a [disfmarker] a given destination, [pause] first of all, I have to [pause] figure out the proper route [pause] for this packet. And then, [pause] uh [pause] I'm using [pause] source routing. I'm [disfmarker] I'm including on the [disfmarker] on the [disfmarker] uh header of this packet, [pause] the set of eh nodes [pause] that should be forwarding this packet to the destination.
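(The table-driven versus on-demand distinction just described can be sketched in a few lines. The class names and the discovery callback below are hypothetical, purely for illustration of the two approaches.)

```python
class ProactiveRouter:
    """Table-driven: a background process keeps the routing table
    up to date, so forwarding is just a table lookup (sketch)."""

    def __init__(self):
        self.table = {}  # destination -> next-hop neighbor

    def on_topology_update(self, dest, next_hop):
        # Maintained continuously, whether or not traffic is flowing.
        self.table[dest] = next_hop

    def next_hop(self, dest):
        return self.table[dest]  # O(1) at send time


class ReactiveRouter:
    """On-demand: routes are discovered only when a packet needs one."""

    def __init__(self, discover):
        self.discover = discover  # e.g. a flood-based route request
        self.cache = {}

    def route_for(self, dest):
        if dest not in self.cache:
            # The first packet to a destination pays the discovery latency.
            self.cache[dest] = self.discover(dest)
        return self.cache[dest]
```

The trade-off the talk is circling: the proactive router spends constant maintenance effort under fast topology change, while the reactive one spends nothing until a packet arrives but then stalls on discovery.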
eh [pause] of course, [pause] uh these uh two approaches can be [disfmarker] can be uh somehow mixed and some hybrid proposals are also [pause] on the [pause] on the [pause] paper, and eh these proposals are [disfmarker] are trying to get [pause] the advantages of both [pause] uh mechanisms eh some sometimes [pause] making a kind of combination of [disfmarker] of them uh, let's say, for example, [pause] using [pause] uh [pause] proactive approach [pause] for the closer set of nodes, let's say [pause] for the neighbors and maybe [pause] the next level, and, using a reactive approach I mean, a source routing approach, [pause] for [pause] nodes that [pause] are [pause] farther than this proactive area. OK? and eh, [pause] well, u e a some multicast protocols [pause] have been proposed [pause] for this scenario too, and, well, they have to deal with the same problems as the [pause] uh [pause] end to end routing protocols. [speaker003:] Mmm. What do you mean by multicast protocols? [speaker004:] Yeah. [speaker003:] Routing? [speaker004:] Routing protocols intended for multicast. [speaker003:] OK. Yeah. OK. [speaker004:] So, for [disfmarker] for [disfmarker] uh, for sending one to many. [speaker003:] Yeah. [speaker004:] Hmm? Uh [pause] well, [pause] the main [disfmarker] the main problems that some of the solutions eh are exhibiting, [pause] is uh [pause] the lack of eh scalability uh mainly [pause] the [pause] source routing protocols [pause] have a problem when you [disfmarker] when you scale from [disfmarker] or scale [pause] scale [pause] badly [pause] from tens of nodes to [disfmarker] to thousands of nodes because, well, you cannot have [pause] uh, uh [pause] let's say, [pause] uh, an endless uh header length. You have a limited [disfmarker] a limited header length. [speaker003:] And maybe we will have to explain a little bit for those who are not familiar with source routing.
[speaker004:] So [disfmarker] [speaker003:] You can have the destinations you would like to [disfmarker] or the in [disfmarker] the I P addresses of the intermediate nodes you still know [speaker004:] Yeah. [speaker003:] even if it's not the direct neighbor, but one of uh where you really want that the packet went through, [pause] put it in the I P header. [speaker002:] Mm hmm. [speaker003:] So, and, this one is going then to this guy and so on and so on, until finally it's reached destination. [speaker002:] So there are [disfmarker] [speaker003:] And, that's what I mean, you cannot put it [disfmarker] thousand intermediate nodes in for example, [speaker002:] OK. [speaker003:] but to put a thousand I P addresses in the header, so that the packet is for uh forwarded accordingly to [disfmarker] [speaker002:] So there's more than one IP address in the header, [speaker003:] Yeah, yeah, yeah, yeah. [speaker002:] and [disfmarker] OK, I see. [speaker004:] Something like this So you [disfmarker] your packet [pause] should look like [disfmarker] like this way. [speaker002:] Do they have to [disfmarker] [speaker003:] Yeah. [speaker002:] Do they have to be in order? So [pause] you are looking first for node two and then for node three and so on. [speaker004:] Yeah. [speaker003:] Yeah. It's uh [disfmarker] You'll know to remove h [pause] it's h from the header, in principle, and [disfmarker] and so on, until it uh reaches the final destination. [speaker002:] OK. OK. [speaker004:] So this [disfmarker] this uh header, well, [pause] it's [disfmarker] it's a problem. Of course, there are also some [disfmarker] [pause] some solutions like uh [pause] uh employing some kind of [disfmarker] of uh [pause] hierarchical source route routing, so you don't have to put [pause] all [pause] the [disfmarker] all the route [disfmarker] [speaker003:] This one, for instance. that may that the different parts could be one to number four. [speaker002:] Mm hmm. Mm hmm. 
[speaker003:] Maybe two B or three B Yeah, but nevertheless this guy is a [disfmarker] is an gateway in the router for [disfmarker] for this packet, you know. And they remove itself from the header. [speaker002:] OK. [speaker003:] Yeah. [speaker004:] Anoth [speaker003:] It must be not the whole path, you know. [speaker004:] Another problem [pause] eh, is [pause] eh trying to provide some quality of service [pause] in these networks, because, well, [pause] em uh change is [disfmarker] is the only thing you know for sure, [pause] here So, [pause] eh [pause] providing quality of service is also [disfmarker] [pause] is quite a challenge [pause] in [disfmarker] in this network. But, again [pause] uh there are some proposals. [speaker003:] Um, Miguel, are you familiar with these kind of proposals? [speaker004:] Not really. Not really, because I don't trust [disfmarker] You know, uh, I'm sure that the proposals are OK, but eh [pause] I don't [disfmarker] [pause] I don't think that [disfmarker] [pause] You cannot warr you [disfmarker] you [disfmarker] [pause] I don't think you can warranty [pause] too much in these networks. So, I see [pause] this like a [pause] you know, a [disfmarker] a topic I'm not convinced about. I don't trust you can really provide quality of service, but of course [disfmarker] [speaker003:] That depends on the service level you want to be. Definitely not a guarantee level. But maybe something like a control load level or something like that, seems to be for me [pause] possible. [speaker004:] Yeah. [speaker003:] Right? [speaker004:] You know, uh if you are curious, uh I can [disfmarker] I can [disfmarker] [pause] I can provide you the n [pause] some of the proposals [speaker003:] Yeah. Hmm. [speaker004:] which include [disfmarker] [speaker003:] Yeah, we can discuss later, off line. [speaker004:] Yeah. [speaker003:] OK. 
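(The source-route forwarding walked through above — each intermediate node removing itself from the header before passing the packet on — might look like this toy model. It is not real IP source-route option handling; the dict layout is an assumption for illustration.)

```python
def forward(packet):
    """One forwarding step at the node currently holding the packet.

    packet = {"route": [n2, n3, ..., dest], "payload": ...}
    The head of the route list is this node; it removes itself and
    hands the packet to the next listed node. (Toy model: a real
    header carries IP addresses in a limited-length field, which is
    why source routing scales badly to thousands of hops.)
    """
    route = packet["route"]
    hop = route.pop(0)  # this node removes itself from the header
    if route:
        return ("forward_to", route[0])  # next intermediate node
    return ("deliver", hop)              # we were the destination
```

Each call models the packet arriving at the next listed node, with the header shrinking by one entry per hop — which is also why a hierarchical scheme (naming a whole segment instead of every hop) keeps the header bounded.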
[speaker004:] And eh, finally, [pause] the other [disfmarker] the other thing, which is also under research or under study [pause] is, uh to have some kind of a power aware routing. There are some proposals, too. So that way you could [disfmarker] routing function could be [pause] connected [pause] with uh global or local power consumption. So, for example, you could ask [pause] to u nnn, the network to transmit your packet [pause] with the lowest [pause] uh global [pause] uh consumption or with your [pause] lowest [pause] local consumption, for example. OK? I mean, would the option for me to [disfmarker] to use less power [pause] when sending a packet is to send it to [pause] the closest neighbor. OK, assuming that I'm [disfmarker] I'm also using s or having some kind of power control thing. so that way [pause] transmitting to the closer [disfmarker] close [disfmarker] to my closer node, [pause] is, uh [pause] something that requires [pause] less effort [pause] than sending to a more distant node. Maybe [pause] the closer node [pause] is not a good uh [disfmarker] it is not in the good direction to the [disfmarker] to reach the destination. But [pause] it is for sure [pause] the [disfmarker] the less expensive uh [pause] thing for me. OK? But probably this uh [disfmarker] [pause] this is not [pause] something really useful. And that ca [pause] what can be u really useful [pause] is to reduce the global [pause] power consumption in the routing function. 
So, because in this way, well, maybe one node is [disfmarker] [pause] is [pause] being forced to use more power but [disfmarker] but if we take a look at the whole thing, [pause] well, we can ask for a [disfmarker] a low [pause] consumption route, well, the same way [speaker003:] Yeah, but the [disfmarker] [speaker004:] as we are asking, for example, when we want [pause] uh some packet to be shipped in the real life, [pause] we can go and say well, [pause] we will [disfmarker] we are happy with this [pause] UPS [pause] ground system. What we want w is two day [pause] delivery service, [pause] or w twenty four hours [pause] service, and we are willing to pay more or less [pause] for this service. [speaker003:] Yeah, but it's [pause] really happened also in the wired case, you know. If I get the wrong route with the ICMP redirects, it's, in principle, then the same. [speaker004:] Yeah. Yeah. That's O K. [speaker003:] Yeah, yeah. [speaker004:] So, these are more or less uh some open problems, and uh [pause] well [pause] just a quick comment about [disfmarker] probably all of us are [pause] now aware [pause] of the availability [pause] of this eh standard [pause] the I triple E eight hundred and two [pause] point eleven uh [pause] in fact it is [disfmarker] it is [pause] powering [pause] you know, uh the evolution of the [disfmarker] [pause] of the [disfmarker] of networking industry in the wireless area. And [pause] in the beginning, [pause] the first adaptors [pause] uh Oops! were [pause] only one and two megabits per second, but now we [disfmarker] you [disfmarker] you can buy for [disfmarker] for the same price [disfmarker] for even the same price [pause] uh, the new eleven megabits per second version I've been [pause] use it for [disfmarker] for a while, and it w [pause] can tell you it works pretty well. and uh, [pause] the thing is that these [pause] devices [pause] have [pause] two different modes of operation.
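(Asking the network for the route with the lowest global power consumption, as in the UPS analogy above, amounts to a shortest-path computation where link weights are transmit-power costs instead of hop counts. The graph encoding and cost units below are illustrative assumptions.)

```python
import heapq

def min_power_route(graph, src, dst):
    """Dijkstra over per-link transmit-power costs (e.g. milliwatts).

    graph: {node: {neighbor: power_cost}}.  Minimizing the sum gives
    the route with the lowest *global* power consumption, even if one
    node on it spends more than it would under a greedy local choice.
    """
    dist = {src: 0.0}
    prev = {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, cost in graph.get(u, {}).items():
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None, float("inf")
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[dst]
```

Swapping the edge weights (local cost at the sender, monetary cost, latency) gives the different "service classes" of the analogy without changing the algorithm.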
One of them, and probably the most popular, [pause] is uh l what we call the "infrastructure mode", where [pause] you have [pause] a set of mobile nodes, and then you have [pause] one or more access points. These access points are uh [pause] in fact providing [pause] the eh [pause] required connectivity [pause] from the mobiles to the [pause] network [pause] to the [pause] m wired network, and from there to the internet usually. [speaker003:] Yeah, in principle, the base station. [speaker004:] Yeah, yeah, we can call it a base station. [speaker003:] Yeah. Yeah. [speaker004:] And, eh, the other mode of operation [pause] which should be appropriate, for example, for this meeting room example, [pause] uh [pause] doesn't require any [disfmarker] any access point, and in this second case, communication happens [pause] f eh um from one computer to the other at the link ly layer. OK? So, if I want to connect [pause] to uh, let's say, Miguel's, [pause] uh Michael's, uh I [disfmarker] I will send him [pause] a packet, and he will receive it, and that's it. OK, we can build on top of this [pause] some [pause] uh routing algorithm, some ad hoc routing algorithm, but this is out of the scope of the [disfmarker] of the standard. [speaker003:] Mm hmm. [speaker004:] And [disfmarker] [speaker003:] and Miguel, should [disfmarker] I should ask you one question. [speaker004:] Yeah. [speaker003:] And uh uh in version one and two you [disfmarker] you have different technologies. You know, frequency hopping, uh two modes and [disfmarker] and [disfmarker] and also infrared. [speaker004:] Yeah. Yeah. [speaker003:] Does it still apply for [disfmarker] for the uh [pause] eleven B? [speaker004:] For the [disfmarker] no. Eleven B is only for [disfmarker] for uh radio and it's only for uh [pause] direct sequences spread spectrum. [speaker003:] OK, DSS. OK. OK. 
[speaker004:] But, in fact, there is an ongoing new version, I think it's c [pause] version A, but it is not uh commercially available which w will use [pause] uh orthogonal frequency division multiplexing [speaker003:] OK. [speaker004:] and will get probably twenty five megabits per second. eh [pause] but, again, the will move to five gigahertz, uh five point five gigahertz, eh [pause] gigahertz, [pause] uh [pause] area of a spec expect [disfmarker] uh spectrum But now, uh current devices [pause] are split at [disfmarker] You know, that the old [disfmarker] [pause] the old waveLAN are nine hundred megahertz [pause] version and the new ones are the [disfmarker] the eleven megabits uh are uh two point four m uh ISM band. [speaker003:] Megahertz, yeah. Mm hmm. [speaker004:] However, [pause] uh [pause] some old uh waveLANs are also, but not the older ones [disfmarker] [pause] the oldest ones, [pause] are also uh two point four m eh gigahertz. [pause] S [speaker003:] Uh, gigahertz. Yeah. In the RSM band. [speaker004:] S And the good thing is that now, [pause] especially with the new version, [pause] all you [disfmarker] you ca you [disfmarker] you are [disfmarker] you start to see [pause] uh compatible products in the market. Because eh [disfmarker] [pause] maybe because it's the, let's say, second version, but also because [pause] I think it's the only way [pause] for this industry to take off. because [pause] uh [pause] previous versions, if you [pause] buy, let's say, waveLAN, [pause] you are tied to waveLAN [pause] al the rest of your life. But now you can [disfmarker] [pause] you can have interoperable products [pause] from several vendors, which is, I think, [pause] what, [pause] in the past, [comment] will [pause] power the ethernet uh revolution You can buy [pause] any [disfmarker] any compatible product and [disfmarker] and you know it will work with the other brand. [speaker003:] Yeah. 
And one additional question from myself, but maybe also short, because time is running fast. Uh, I have heard that uh wireless LAN and Bluetooth will have some problems uh that they influence each other. [speaker004:] In front? [speaker003:] Do you have additional information about this topic? [speaker004:] Yeah. Mmm. You know, in the beginning, uh what w [pause] the [disfmarker] I think that [pause] to put things in perspective, [pause] what we [pause] need to know is [pause] what Bluetooth is and what [pause] Bluetooth is for. And uh Bluetooth is mainly for [pause] cutting [pause] this cable. You know, the other day I was joking about [pause] that our wireless microphones [pause] were not, in fact, [pause] wireless because it had this wire, So, the point is [pause] that eh Bluetooth is mainly a technology [pause] for cutting these kind of cables. You know, sh oh short [disfmarker] short distance cables, [speaker003:] Yeah, that's right. Yeah. Yeah. [speaker004:] uh, carrying usually [pause] really s uh, sh [disfmarker] nnn, nnn slow signals, or [pause] let's say, voice and something like this. [speaker003:] Yeah, but nevertheless, you have the range of ten meters, or of one hundred meters. [speaker004:] Yeah. Yeah, but you know, uh in fact [speaker003:] OK. [speaker004:] uh [pause] the initial version [pause] had only the [disfmarker] the range of ten meters, and I guess that somebody at [pause] a certain meeting [pause] said "OK, why don't we have [pause] a l an extended range version, so we could [pause] re use the same technology for other purposes, different [pause] than the original ones."
So, em, uh, I have the feeling that this is [disfmarker] [pause] u this is what is uh [pause] introducing some confor confusion [pause] in the [disfmarker] [pause] in the Bluetooth eh [disfmarker] [pause] or in the way people is [disfmarker] is looking at Bluetooth because they can say OK, hundred meters is [disfmarker] is [disfmarker] [pause] is [pause] an acceptable distance, let's say, for communicating inside a building, inside a [disfmarker] a small office. So, [pause] uh some people, I think, is being tempted to [pause] think that Bluetooth could [pause] provide them [pause] uh, some kind of local connectivity, but [pause] uh for [disfmarker] for data for computing [pause] uses. Eh, however, I think that the [disfmarker] that the [disfmarker] [pause] When you have tried [pause] uh [pause] current eh standard eh working devices, [pause] you [disfmarker] you [disfmarker] working at [disfmarker] at eleven megabits per second, you cannot [disfmarker] [pause] you cannot switch to another thing, to another slower thing. But uh [pause] even when they are using the same [disfmarker] the same uh, um [pause] area of a s a spectrum, because Bluetooth is also intended to work in two point four uh gigahertz, [speaker003:] Yeah. [speaker004:] eh the technologies are different, you know eh Bluetooth [pause] eh is using a frequency hopping system and uh mmm both uh direct sequence and frequency hopping, [pause] spread spectrum techniques, [pause] are also intended [pause] to provide uh interference l uh less, uh [pause] data transmission. So, the idea is that [pause] these [pause] two techniques [pause] are intended not to interfere [pause] with any other thing. So, [pause] even when I haven't tried, [pause] and I cannot tell you for sure, [pause] I think that [pause] the only problem [pause] one network will encounter [pause] uh in the presence of the other [pause] is a little bit [pause] of more noise.
And this will produce a little bit [pause] less throughput, but nothing more. I don't expect any [disfmarker] any [pause] more problem. even [pause] the [disfmarker] the [disfmarker] the own Bluetooth specification [pause] is also assuming that another Bluetooth [pause] or several Bluetooth [pause] networks [pause] c will be [disfmarker] could be [pause] in the same area of space and they are using the same [pause] hopping technology, frequency hopping technology. But again, [pause] this technology [pause] is uh trying to avoid [pause] the other networks by choosing different hopping patterns. So [pause] uh [pause] if several Bluetooth networks can [disfmarker] can uh coexist without a problem, [pause] I don't see any problem will [disfmarker] will happen if [disfmarker] if we [pause] want to [disfmarker] to make coexist [pause] one uh Bluetooth with another wireless uh local area network technology. [speaker003:] Well that's a good news, OK? [speaker004:] Yeah. I think so. [speaker003:] But anyway, [vocalsound] um based on your uh uh [pause] initial scenario I believe because Bluetooth is on one chip, [speaker004:] Yeah. [speaker003:] like you have now your USB bus or whatever connectivity, you will have your Bluetooth connectivity. [speaker004:] Five bucks. [speaker003:] And that means your mouse, and [disfmarker] and [disfmarker] and [disfmarker] [speaker004:] Five bucks chip. [speaker003:] Yeah, and [disfmarker] and [disfmarker] and whatever kind of [disfmarker] of eh things connected to your laptop and also your microphone and whatever you will [disfmarker] will have, is Bluetooth based. [speaker004:] Hmm hmm hmm. [speaker003:] Yeah, or will be Bluetooth based. [speaker004:] Will be. [speaker003:] Yeah, and this is [disfmarker] this is eh within maybe an uh wireless LAN. [speaker004:] Will be.
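(Miguel's argument that independently chosen hopping patterns mostly miss each other can be checked with a toy simulation: two hoppers picking pseudo-random channels out of 79 — the Bluetooth channel count in the 2.4 GHz band — land on the same channel only occasionally, which shows up as a little extra noise rather than failure. Real Bluetooth hop sequences are deterministic, not random; this sketch only illustrates the order of magnitude.)

```python
import random

def collision_rate(n_channels=79, n_slots=10000, seed=1):
    """Fraction of time slots in which two independent frequency
    hoppers pick the same channel (toy model of two coexisting
    networks sharing the same band)."""
    rng_a = random.Random(seed)
    rng_b = random.Random(seed + 1)
    hits = sum(rng_a.randrange(n_channels) == rng_b.randrange(n_channels)
               for _ in range(n_slots))
    return hits / n_slots
```

With 79 channels the expected collision rate is about 1/79, i.e. on the order of a percent of slots: lost throughput, not a broken network — and fewer channels would make coexistence proportionally worse.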
[speaker003:] So, despite the fact that they are considered for different scenarios they are still [speaker004:] In fact [disfmarker] in fact Bluetooth is also designed [speaker003:] within one [disfmarker] one area. Yeah. [speaker004:] to be compatible with existing [pause] digital European cordless telephones. So, hopefully, [pause] we will be able [pause] to use some of [pause] our old devices [pause] or maybe our [disfmarker] our cordless phone base station at home, [pause] with a computer. S S [pause] So I think some really nice applications will [disfmarker] will appear. [speaker003:] OK we'll when I [disfmarker] I will have some problems in the future with my l [pause] wireless LAN and Bluetooth connectivity, I will come to you, you will solve it OK? [speaker004:] I'll try to. [speaker003:] OK, please go on. [speaker004:] So, [pause] let me [disfmarker] let me just uh finish, you know. [speaker003:] OK. [speaker004:] Uh [pause] well, [pause] this is something I, more or less, [pause] I told you. and eh [pause] you know, the point, or [disfmarker] or [disfmarker] or [disfmarker] what [disfmarker] [pause] the [disfmarker] the spark of [disfmarker] of my [disfmarker] [pause] of my work here, or at least, of what I [pause] was proposing to work here, [pause] is [pause] the fact that [pause] these two model uh two modes of operation, I was telling you here [disfmarker] [pause] that you have to choose at the driver level [pause] if your network adaptor [pause] is working in an [pause] access point based environment or in ad ho [pause] in an [pause] ad hoc environment. [pause] I see this as a [disfmarker] as a limitation because I think that [pause] uh when you have to choose among two options, well, mmm, [pause] maybe you are [pause] saying "no" to some of the benefits the other option, [pause] different than the one you are choosing, [pause] could provide you. 
And I envision an [disfmarker] an [disfmarker] an m [pause] [pause] a scenario [pause] where [pause] eh [disfmarker] Oops! Well, yeah. Let's go. where we can have [pause] a kind of uh dual mode uh wireless network adaptor So [pause] if we are not able [pause] to reach any access point [pause] but we are able to reach another network [pause] node, [pause] well, [pause] maybe this [pause] network node [pause] could do [pause] me the favor [pause] to reach or to forward my packets to the access point. So [pause] the main idea, the driving force here, [pause] is [pause] to uh [pause] try to obtain [pause] uh, let's say, hybrid [pause] mode which allows [pause] an adaptor to work either in the access point or in the [disfmarker] [pause] eh [pause] in the ad hoc mode [pause] in a seamless way. and to include in every single node driver, this forwarding or helper [pause] uh forwarding [pause] mechanism [pause] that [pause] will allow [pause] to extend the wireless coverage [pause] without a need of buying [pause] a lot of access points and putting even an access point inside the restroom. OK? Of course [pause] this idea mmm, [pause] can sound [pause] p can sound a little bit [pause] not very appealing for the manufacturers, at least for the manufacturers of access point because well, they probably are very happy of selling a lot of them. [speaker003:] But maybe for cars. Yeah, if your car is the infrastructure, for instance. [speaker004:] Yeah. [speaker003:] Yeah. OK. [speaker004:] So, [pause] this is [pause] the m I think this is near the end, this is the last [disfmarker] So One possible extension of the same idea [pause] and something uh I've been thinking for awhile, [pause] is the idea that, well, why can't [pause] do [pause] we [comment] extend this same concept to cellular systems?
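(A first cut at the hybrid-mode forwarding rule proposed here — use an access point when one is in range, otherwise hand the packet to a neighbor that can reach one — might look like this sketch. The function name, the "AP" naming convention, and the reachability encoding are all hypothetical.)

```python
def choose_uplink(reachable, neighbors):
    """Decide how to get a packet toward the wired network.

    reachable: set of station names this node can hear directly.
    neighbors: {neighbor_name: set of stations *it* can hear}.
    Returns ("infrastructure", ap), ("ad_hoc_relay", neighbor), or
    ("unreachable", None).  Toy model of the proposed hybrid mode.
    """
    aps = {s for s in reachable if s.startswith("AP")}
    if aps:
        # Infrastructure mode: talk to an access point directly.
        return ("infrastructure", min(aps))
    # No AP in range: ask a neighbor to forward for us -- the
    # "helper" mechanism that extends coverage without buying
    # more access points.
    for n in sorted(neighbors):
        if any(s.startswith("AP") for s in neighbors[n]):
            return ("ad_hoc_relay", n)
    return ("unreachable", None)
```

The seamlessness Miguel asks for is exactly that the caller never chooses a mode at the driver level; the decision falls out of current reachability.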
Why [pause] can't we, [pause] for example, think about [pause] some mmm, [pause] new [pause] mechanisms that users could get, [pause] uh [pause] connectivity by means of other users, [pause] and no directly to the base station, or, [pause] when we have [pause] several technologies [pause] uh in [disfmarker] [pause] overlapped in a certain area, [pause] why can't we use [pause] the closest and the cheapest, maybe, technology, or maybe [pause] the less [pause] power [pause] consumption technology, [pause] without the user to worry about if he is now [pause] using this or [pause] the other technology. And [pause] on the other hand, [pause] what are the billing [pause] implications of this behavior? Because [pause] it is clear that, well, [pause] when I'm [pause] only [pause] uh, talking to the [disfmarker] to the base station of my provider, well [pause] billing is [pause] more or less straightforward, but [pause] what happens [pause] if I'm working [pause] uh, let's say, uh across a mall, [pause] and [pause] at a given moment I'm using [pause] the small [pause] base station of a cordless phone [pause] that is inside the shop I'm at the window of? Well in this case, I'm using [pause] a resource [pause] which is not owned by my provider. I'm using, [pause] a [disfmarker] let's say, a private property. Maybe I have to pay [pause] to this private property, or maybe my carrier [pause] will have to pay or to refund some money to this guy. You know [pause] this something, uh [pause] I'm putting on the table I'm not sure about, [pause] uh [pause] if this can be really of interest or not but, [pause] it's an idea, and I'll be happy to discuss it with you. And I uh [disfmarker] I think [pause] my presentation is over, So. [speaker003:] Mm hmm. OK. many thanks Miguel. Are there any questions? [speaker004:] Oh, you are [pause] welcome. 
[speaker003:] Maybe we have also the time to discuss with Miguel off line the things if there is some [pause] uh question concerning uh his talk or maybe other questions. Maybe one last question from my side because I bring it up, [pause] uh concerning the proposal, some kind of active routing mechanism. [speaker004:] Mm hmm. [speaker003:] It's going in a quite different direction as [disfmarker] as [disfmarker] as [disfmarker] as the traditional things discussed in the IETF and I currently do not have any idea whether active routing is really a topic uh, within the IETF. Do you have certain kind of experience or do you know certain activities uh, in that area? [speaker004:] Nnn, no to both q [pause] to both questions. [speaker003:] OK. [speaker004:] I don't have previous experience I don't eh know if [disfmarker] uh where it will [disfmarker] [pause] it would fit. However, [pause] eh you know I'm just doing [pause] this presentation uh [pause] some [pause] somehow a review [pause] of my previous work. [speaker003:] I know, yeah. [speaker004:] So eh this eh doesn't mean that [disfmarker] I'm [disfmarker] I'm probably [pause] getting involved a little bit more in this active routing thing. [speaker003:] Mm hmm. [speaker004:] So uh [pause] I [disfmarker] I [disfmarker] maybe I will be able to [disfmarker] hopefully I will be able to [disfmarker] to give you a better answer in the future. [speaker003:] Oh, OK. One more time, many thanks Miguel. So. [speaker004:] Oh, than thanks, Michael, for your computer. [speaker001:] No problem. [speaker003:] So I will deploy a little bit information here from the uh project proposal. [speaker004:] Should I power off it? [speaker001:] Um, just log out. [speaker004:] OK. [speaker001:] Uh. No. Log out. [speaker004:] Oh, OK, sorry, sorry. [speaker003:] So I have only four. [speaker004:] Oh.
[speaker003:] Jordi is [pause] only a short period of time here, [speaker004:] W which is i which is proper [disfmarker] [speaker003:] so he will get the [pause] colored one. [speaker004:] Michael [speaker005:] Ah, thank you [disfmarker] thank you very much. [speaker001:] OK. [speaker004:] Michael, [speaker003:] and take a look to [disfmarker] [speaker004:] which is the proper answer? [speaker003:] uh [speaker001:] That's the proper answer. [speaker004:] Th this one? [speaker001:] Yeah. [speaker004:] OK. [speaker003:] Yeah. [speaker004:] Thank you. [speaker003:] Kill the computer. Uh, So I started uh, for [disfmarker] for advertising the project proposal here to provide a few slides I think, first of all, Wilbert, is not here, second um, the [pause] um link here [pause] was uh enabling UMTS services and application, uh, I assume [nonvocalsound] was not read [pause] yesterday night. So [pause] I suggest we go roughly to the slides here and um The [disfmarker] the problem is that, in principle, we have different scopes. And motivation of the project proposal from my point of view is that we will have the um Wireless access networks for the next generation will be more flexible. There's a [@ @] for uh current [disfmarker] uh uh uh for [disfmarker] for fast [pause] changes within this network. And that means, if you go to slide number three, [nonvocalsound] that [pause] we [pause] have [pause] um [pause] the possibility to [disfmarker] to um [pause] having normal features in the mobile access network, and [disfmarker] and [pause] um and [pause] um [pause] also based on that it belongs normally to one single provider domain, that it is possible to [disfmarker] to uh use novel things and [disfmarker] and [disfmarker] and new protocols and new technologies within this network. 
On the other hand, in the internet backbone, uh [pause] we [disfmarker] it is i impossible and we see it on several examples like this, MPLS, there's quality of service support, and multicast and all this kind of thing, that, based on the success of the internet backbone, it's become uh [pause] very unflexible, to p uh fo for the provision of these things. So, if [pause] we are going to offer building blocks, and provide novel solutions for the next generation of wireless access networks, taking into account, [pause] without modifying [pause] the end to end scope, then I think it's really beneficial work, also for the use of interactive multimedia application, so [pause] as it is described in the uh [pause] project proposal. Um [pause] the point is that um [pause] to start [pause] with um [pause] a bigger cluster of co uh [pause] of potential partners to do this really really really big work, seems not to be feasible in the next time. because if you have settled everything, and discussed everything and [disfmarker] and [disfmarker] and [disfmarker] and uh [disfmarker] without any money in the background, and a lot of money in the background, that is not uh feasible from my point of view. Nevertheless, I discussed, with a lot of partners, these things. And today, [pause] uh Michael and me, we were on the UCB to discuss with a guy who is a professor who is probably interested in the application stuff. They are going in that direction. They have a small project funded by NSA [disfmarker] NSF, um, f for this "intelligent classroom", is a little bit exaggerated, I believe, but going in that direction. So there are a lot of interest, but without really [pause] hard money in the background, it is very hard to come to the initial steps. And uh Juan Peire, one of the uh members of this NSA group has [pause] uh [pause] identified a certain kind of s start money, I would like to say. 
So maybe if you start with a small cluster of um the constraints of this funding, and that means uh six universities and [disfmarker] and institutions uh, three in USA and three in Europe. and of course, they are uh including the NSA group and maybe that seems to be very reasonably, for me. But then we have to align the project proposal in that sense. Because then we cannot do [disfmarker] do anything as it is described in the [disfmarker] in the bigger scope. So the point is [pause] um [pause] to reduce the overall picture, which I described in a few words, to the fact that we have really some work packages where we can focus on it, still fitting into the big picture. Yeah? And so that will be the task of the next weeks. And [pause] uh I [pause] hope that everybody is doing his homework concerning [@ @] for instance uh alternative to the intelligent classroom maybe that we [pause] h can start here only with the N S A group within this big picture um [pause] with some work that we put the pieces together. Because I believe many are very flexible here accounting also the [pause] future coming uh [pause] joining people of the NSA group, to do some work here. And if [pause] we are starting with a very very very small cluster with a common activity have maybe some starting funding for bigger uh [pause] radius of the [disfmarker] of the [disfmarker] the [disfmarker] the uh [pause] project proposal, with the mind that maybe at the final stage it will becoming a European proposal with this uh the [disfmarker] the whole funding. That's uh [pause] my point, and my picture I have in mind. And I will go definitely back to [pause] Germany the tenth of December. That is fixed now. And I hope we can use these remaining weeks [pause] to have really progress in [disfmarker] uh for the proposal in that sense what I mentioned before. [speaker004:] Hmm hmm. 
[speaker003:] Because then I will make [pause] the advertising that is still worth to stay here, And I hope then Siemens will say OK for me to stay longer here. And then I will come back maybe in the [disfmarker] mid of end of January. So that's uh it's a comment to the proposal. So I think everybody should [pause] consider carefully, [pause] based on this statement I made uh next steps and of course everybody has little bit other work to do but it should be [disfmarker] should be always an ongoing process until I will leave [disfmarker] until the tenth of December. [speaker002:] You mentioned um six partners of this um work. [speaker003:] OK. [speaker002:] So you know already what partners are included. OK I didn't have a look at uh slides, [speaker003:] Um yeah I will distribute all the things electronically [speaker002:] OK. [speaker003:] to all the [disfmarker] OK, take a look to slide of number two, [speaker002:] Just [disfmarker] I'm just curious. [speaker003:] Yeah! You see uh uh University of Mannheim, University also these are the official ones, [speaker002:] Yeah, sure. [speaker003:] You see that in the ICSI US [disfmarker] on the U S A side, I believe that the home companies and universities are still contributing. [speaker002:] Mm hmm. Yeah yeah. [speaker003:] Yeah? but these are then the official ones, and I would like to have them [disfmarker] [speaker002:] And all of them had agreed, or [disfmarker]? [speaker003:] They say they are very interested in it. [speaker002:] OK. [speaker003:] Yeah? Uh a few words, the University of Mannheim is at location where Joerg Widmer is uh working on it, and I got the statement uh they are very interested in that. They did a lot in the area of um intelligent classroom but nevertheless they did a lot in networking stuff. And University of Uppsala, Sweden, that's the reason I invited uh Christian Tschudin, the professor who did ten years work in [disfmarker] in active routing. 
And uh [pause] [disfmarker] OK the Universidad Nacional de Educat Yeah. Spanish. Maybe we have a better expert [vocalsound] to pronounce it. [speaker004:] Universidad Nacional de Educacion a Distancia [speaker003:] "UNED". I always said "UNED". in Spain, uh this is this university of Juan Peire. [speaker004:] Yeah. [speaker003:] And [pause] he is also the one who had recognized uh [speaker005:] Who is? [speaker003:] one pair pair [speaker005:] Perez. [speaker003:] Not Parez, Peire. PEIRE. [speaker001:] Paez. [speaker004:] Paez, Paez [speaker001:] Paez. [speaker005:] Ah, Peire. [speaker003:] No, no, no. [speaker004:] Paez. [speaker003:] Not [disfmarker] not Paez. Not. Peire, P [pause] E [pause] I R E. P [pause] E [pause] I R E. You never heard from him? [speaker005:] No. I don't know. [speaker003:] You can [disfmarker] I [disfmarker] you can mail him later on the informa [speaker005:] I don't know who he is. OK. [speaker003:] Then, of course, the whole N S A group and [pause] also of the coordinator of all the things, then Georgia Tech, I have good relationship to uh the long lasting uh cooperation with Georgia Tech and they did a lot in this area, also related to networking stuff as well as to the intelligent classroom stuff [pause] and the UCB. We still focusing on maybe we can have uh more professors involved which are [pause] working also on the networking stuff as well as on the application stuff, so that [pause] the UCB is clustered as one uh partner but nevertheless with different departments. [speaker004:] Hmm hmm hmm. [speaker003:] Yeah. So, [disfmarker] But take a look to the slides, I will uh deploy it um uh electronically. [speaker004:] Wher where your slides are available? [speaker003:] Sorry? [speaker004:] Wher where the slides are available? [speaker003:] I will send it electronically. [speaker004:] When? [speaker003:] After the meeting. [speaker004:] Ah! You will send it, sorry. [speaker003:] I will. Future. [speaker004:] Sorry. 
[speaker003:] And, [pause] maybe you take a look to the [pause] uh last two slides, um Um, maybe the GPRS picture is still uh available. So you have, in principle, here now the mapping of [pause] the different areas, um within GPS, and, in principle, the same applies for UMTS. There is no so much different. Um. You always have the SGSN and UMTS, they are called three GSGSN. And you have the protocol stack um in [disfmarker] on the slide number five, to see what uh is going on. And you see the difference. The amazing thing is that you have IP over IP. As you take a look to the [disfmarker] the uh GGSN, you have two IP layer functionality and [disfmarker] and all those things. And you have the [pause] GPS tunnel p uh protocol G T P between the G and the S and the GTSN. And [disfmarker] and so on. So uh I believe they are very very difficult protocols stuck between the uh [disfmarker] between the um um involved nodes, and [pause] making it necessary, in principle, to deal with our four building blocks in um [disfmarker] [vocalsound] yeah, I believe, in a difficult way, if, uh, we have in mind quality of service routing, and [disfmarker] and [disfmarker] uh qualit uh mobility aspects and [disfmarker] and all this kinds of things for [pause] I P layer functionality. So that's, in principle, uh an overview, of the protocol stack and the involved nodes, and I will provide more and more information but um eh I have to also figure out everything from the scratch. I am not a total expert for [disfmarker] for the future [pause] wireless networks. OK. Then I would like to thanks everybody and [pause] if there's no additional question, comment, or [pause] or [pause] other remarks, [pause] I would like to start with reading uh these numbers. [speaker004:] OK, we have [disfmarker] [speaker003:] OK, please the next one. 
OK, thank you Now, please let [disfmarker] don't switch off power, I have to call Adam, so So I suggest, [@ @] now we will have [disfmarker] in a few minutes we will have the coffee break, tea, cake, whatever, [@ @] will [disfmarker] will be offered, and then we will have a [disfmarker] we can have a discussion. [speaker003:] Yeah, we had a long discussion about how much w how easy we want to make it for people to bleep things out. So [disfmarker] Morgan wants to make it hard. [speaker004:] It [disfmarker] it doesn't [disfmarker] [speaker003:] Did [disfmarker] did [disfmarker] did it [disfmarker]? I didn't even check yesterday whether it was moving. [speaker004:] It didn't move yesterday either when I started it. [speaker003:] So. [speaker004:] So I don't know if it doesn't like both of us [disfmarker] [speaker003:] Channel three? Channel three? [speaker004:] You know, I discovered something yesterday on these, um, wireless ones. [speaker002:] Channel two. [speaker003:] Mm hmm? [speaker004:] You can tell if it's picking up [pause] breath noise and stuff. [speaker003:] Yeah, it has a little indicator on it [disfmarker] on the AF. [speaker004:] Mm hmm. So if you [disfmarker] yeah, if you breathe under [disfmarker] breathe and then you see AF go off, then you know [pause] it's p picking up your mouth noise. [speaker006:] Oh, that's good. Cuz we have a lot of breath noises. [speaker003:] Yep. Test. [speaker006:] In fact, if you listen to just the channels of people not talking, it's like "[@ @]". It's very disgust [speaker003:] What? Did you see Hannibal recently or something? [speaker006:] Sorry. Exactly. It's very disconcerting. OK. 
So, um, I was gonna try to get out of here, like, in half an hour, um, cuz I really appreciate people coming, and [vocalsound] the main thing that I was gonna ask people to help with today is [pause] to give input on what kinds of database format we should [pause] use in starting to link up things like word transcripts and annotations of word transcripts, so anything that transcribers or discourse coders or whatever put in the signal, [vocalsound] with time marks for, like, words and phone boundaries and all the stuff we get out of the forced alignments and the recognizer. So, we have this, um [disfmarker] I think a starting point is clearly the [disfmarker] the channelized [pause] output of Dave Gelbart's program, which Don brought a copy of, [speaker003:] Yeah. Yeah, I'm [disfmarker] I'm familiar with that. I mean, we [disfmarker] I sort of already have developed an XML format for this sort of stuff. [speaker006:] um, which [disfmarker] [speaker004:] Can I see it? [speaker003:] And so the only question [disfmarker] is it the sort of thing that you want to use or not? Have you looked at that? I mean, I had a web page up. [speaker006:] Right. So, I actually mostly need to be able to link up, or [disfmarker] [speaker003:] So [disfmarker] [speaker006:] I it's [disfmarker] it's a question both of what the representation is and [disfmarker] [speaker003:] You mean, this [disfmarker] I guess I am gonna be standing up and drawing on the board. [speaker006:] OK, yeah. So you should, definitely. [speaker003:] Um, so [disfmarker] so it definitely had that as a concept. So tha it has a single time line, [speaker006:] Mm hmm. [speaker003:] and then you can have lots of different sections, each of which have I Ds attached to it, and then you can refer from other sections to those I Ds, if you want to. So that, um [disfmarker] so that you start with [disfmarker] with a time line tag. "Time line". And then you have a bunch of times. 
I don't e I don't remember exactly what my notation was, [speaker001:] Oh, I remember seeing an example of this. [speaker003:] but it [disfmarker] [speaker006:] Right, right. [speaker001:] Yeah. [speaker003:] Yeah, "T equals one point three two", uh [disfmarker] And then I [disfmarker] I also had optional things like accuracy, and then "ID equals T one, uh, one seven". And then, [nonvocalsound] I also wanted to [disfmarker] to be i to be able to not specify specifically what the time was and just have a stamp. [speaker006:] Right. [speaker003:] Yeah, so these are arbitrary, assigned by a program, not [disfmarker] not by a user. So you have a whole bunch of those. And then somewhere la further down you might have something like an utterance tag which has "start equals T seventeen, end equals T eighteen". So what that's saying is, we know it starts at this particular time. We don't know when it ends. [speaker006:] OK. [speaker003:] Right? But it ends at this T eighteen, which may be somewhere else. We say there's another utterance. We don't know what the t time actually is but we know that it's the same time as this end time. [speaker001:] Mmm. [speaker003:] You know, thirty eight, whatever you want. [speaker001:] So you're essentially defining a lattice. [speaker003:] OK. Yes, exactly. [speaker001:] Yeah. [speaker003:] And then, uh [disfmarker] and then these also have I Ds. Right? So you could [disfmarker] you could have some sort of other [disfmarker] other tag later in the file that would be something like, um, oh, I don't know, [comment] uh, [nonvocalsound] "noise type equals [nonvocalsound] door slam". You know? And then, uh, [nonvocalsound] you could either say "time equals a particular time mark" or you could do other sorts of references. So [disfmarker] or [disfmarker] or you might have a prosody [disfmarker] "Prosody" right? D? T? D? T? T? [speaker006:] It's an O instead of an I, but the D is good. [speaker003:] You like the D? [speaker006:] Yeah. 
[speaker003:] That's a good D. Um, you know, so you could have some sort of type here, and then you could have, um [disfmarker] the utterance that it's referring to could be U seventeen or something like that. [speaker006:] OK. So, I mean, that seems [disfmarker] that seems g great for all of the encoding of things with time and, [speaker003:] Oh, well. [speaker006:] um [disfmarker] I [disfmarker] I guess my question is more, uh, what d what do you do with, say, a forced alignment? [speaker001:] How how [speaker006:] I mean you've got all these phone labels, and what do you do if you [disfmarker] just conceptually, if you get, um, transcriptions where the words are staying but the time boundaries are changing, cuz you've got a new recognition output, or s sort of [disfmarker] what's the, um, sequence of going from the waveforms that stay the same, the transcripts that may or may not change, and then the utterance which [disfmarker] where the time boundaries that may or may not change [disfmarker]? [speaker001:] Oh, that's [disfmarker] That's actually very nicely handled here [speaker006:] Um. [speaker001:] because you could [disfmarker] you could [disfmarker] all you'd have to change is the, [vocalsound] um, time stamps in the time line without [disfmarker] without, uh, changing the I Ds. [speaker006:] And you'd be able to propagate all of the [disfmarker] the information? [speaker003:] Right. That's, the who that's why you do that extra level of indirection. So that you can just change the time line. [speaker001:] Except the time line is gonna be huge. If you say [disfmarker] [speaker003:] Yes. [speaker006:] Yeah, yeah, especially at the phone level. [speaker001:] suppose you have a phone level alignment. [speaker006:] The [disfmarker] we [disfmarker] we have phone level backtraces. [speaker001:] You'd have [disfmarker] you'd have [disfmarker] [speaker003:] Yeah, this [disfmarker] I don't think I would do this for phone level. 
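A minimal sketch of the time-line indirection described on the board: a shared `timeline` of `time` elements carrying IDs, and `utterance` or `noise` tags that refer to those IDs for their start and end points, so that re-running an alignment only rewrites the time line and every tag that points at it moves with it. The tag and attribute names here are reconstructed from the whiteboard example, not the actual ICSI format.

```python
import xml.etree.ElementTree as ET

# One shared time line; everything else refers to it by ID.
doc = ET.fromstring("""
<annotation>
  <timeline>
    <time id="t17" value="1.32"/>
    <time id="t18" value="2.07"/>
    <time id="t38"/>  <!-- time not yet known: just a stamp -->
  </timeline>
  <utterance id="u17" start="t17" end="t18">so we have this</utterance>
  <utterance id="u18" start="t18" end="t38">channelized output</utterance>
  <noise type="door-slam" time="t18"/>
</annotation>
""")

def time_of(doc, tid):
    """Resolve a time ID against the shared time line (None if unknown)."""
    node = doc.find(f".//time[@id='{tid}']")
    v = node.get("value")
    return float(v) if v is not None else None

# Re-aligning the transcript only touches the time line; the utterance end,
# the next utterance's start, and the noise event all move together.
doc.find(".//time[@id='t18']").set("value", "2.11")

u17 = doc.find(".//utterance[@id='u17']")
print(time_of(doc, u17.get("start")), time_of(doc, u17.get("end")))  # 1.32 2.11
```

This is the "extra level of indirection" point: none of the utterance or event tags were edited, yet they all see the new boundary.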
[speaker006:] Um [disfmarker] [speaker003:] I think for phone level you want to use some sort of binary representation because it'll be too dense otherwise. [speaker006:] OK. So, if you were doing that and you had this sort of companion, uh, thing that gets called up for phone level, uh, what would that look like? How would you [disfmarker]? [speaker003:] I would use just an existing [disfmarker] an existing way of doing it. [speaker001:] Why Mmm. But [disfmarker] but why not use it for phone level? [speaker006:] H h [speaker001:] It's just a matter of [disfmarker] it's just a matter of it being bigger. But if you have [disfmarker] you know, barring memory limitations, or uh [disfmarker] I w I mean this is still the m [speaker003:] It's parsing limitations. I don't want to have this text file that you have to read in the whole thing to do something very simple for. [speaker001:] Oh, no. You would use it only [pause] for [pause] purposes where you actually want the phone level information, I'd imagine. [speaker006:] So you could have some file that configures how much information you want in your [disfmarker] in your XML or something. [speaker003:] Right. I mean, you'd [disfmarker] y [speaker006:] Um, cuz th it does get very bush with [disfmarker] [speaker003:] I [disfmarker] I am imagining you'd have multiple versions of this depending on the information that you want. [speaker001:] You [disfmarker] [speaker006:] Right. [speaker003:] Um, I'm just [disfmarker] what I'm wondering is whether [disfmarker] I think for word level, this would be OK. [speaker006:] Yeah. Yeah. [speaker003:] For word level, it's alright. [speaker006:] Definitely. [speaker001:] Mm hmm. [speaker003:] For lower than word level, you're talking about so much data that I just [disfmarker] I don't know. 
I don't know if that [disfmarker] [speaker006:] I mean, we actually have [disfmarker] So, one thing that Don is doing, is we're [disfmarker] we're running [disfmarker] For every frame, you get a pitch value, [speaker004:] Lattices are big, too. [speaker006:] and not only one pitch value but different kinds of pitch values depending on [disfmarker] [speaker003:] Yeah, I mean, for something like that I would use P file or [disfmarker] or any frame level stuff I would use P file. [speaker006:] Meaning [disfmarker]? [speaker003:] Uh, that's a [disfmarker] well, or something like it. It's ICS uh, ICSI has a format for frame level representation of features. [speaker006:] OK. [speaker003:] Um. [speaker006:] That you could call [disfmarker] that you would tie into this representation with like an ID. [speaker003:] Right. Right. Or [disfmarker] or there's a [disfmarker] there's a particular way in XML to refer to external resources. [speaker006:] And [disfmarker] OK. [speaker003:] So you would say "refer to this external file". Um, so that external file wouldn't be in [disfmarker] [speaker006:] So that might [disfmarker] that might work. [speaker004:] But what [disfmarker] what's the advantage of doing that versus just putting it into this format? [speaker003:] More compact, which I think is [disfmarker] is better. [speaker004:] Uh huh. [speaker003:] I mean, if you did it at this [disfmarker] [speaker006:] I mean these are long meetings and with [disfmarker] for every frame, [speaker003:] You don't want to do it with that [disfmarker] [speaker006:] um [disfmarker] [speaker003:] Anything at frame level you had better encode binary or it's gonna be really painful. [speaker001:] Or you just compre I mean, I like text formats. Um, b you can always, uh, G zip them, and, um, you know, c decompress them on the fly if y if space is really a concern. [speaker004:] Yeah, I was thi I was thinking the advantage is that we can share this with other people. 
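The suggestion above, keep a text format but gzip it and decompress on the fly, is cheap to do in practice. A sketch; the one-frame-per-line pitch-file layout here is hypothetical, chosen only to illustrate streaming through the compressed data:

```python
import gzip
import io

# Write a hypothetical one-frame-per-line ASCII pitch file, gzip-compressed
# in memory (a real file path works the same way with gzip.open).
buf = io.BytesIO()
with gzip.open(buf, "wt") as f:
    for frame, f0 in enumerate([112.0, 113.5, 0.0, 118.2]):
        f.write(f"{frame} {f0}\n")

# Decompress on the fly while reading; the uncompressed text never hits disk.
buf.seek(0)
with gzip.open(buf, "rt") as f:
    frames = [(int(a), float(b)) for a, b in (line.split() for line in f)]

print(frames[1])  # (1, 113.5)
```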
[speaker003:] Well, but if you're talking about one per frame, you're talking about gigabyte size files. You're gonna actually run out of space in your filesystem for one file. [speaker006:] These are big files. These are really [disfmarker] I mean [disfmarker] [speaker003:] Right? Because you have a two gigabyte limit on most O Ss. [speaker001:] Right, OK. I would say [disfmarker] OK, so frame level is probably not a good idea. [speaker006:] And th it's [disfmarker] [speaker001:] But for phone level stuff it's perfectly [disfmarker] Like phones, or syllables, or anything like that. [speaker006:] Phones are every five frames though, so. Or something like that. [speaker001:] But [disfmarker] but [disfmarker] but most of the frames are actually not speech. So, you know, people don't [disfmarker] [speaker006:] Yeah, [speaker001:] v Look at it, words times the average [disfmarker] The average number of phones in an English word is, I don't know, [comment] five maybe? [speaker006:] but we actually [disfmarker] [speaker001:] So, look at it, t number of words times five. [speaker006:] Oh, so you mean pause phones take up a lot of the [disfmarker] [speaker001:] That's not [disfmarker] that not [disfmarker] [speaker006:] long pause phones. [speaker003:] Yep. [speaker001:] Exactly. [speaker006:] Yeah. OK. [speaker001:] Yeah. [speaker006:] That's true. But you do have to keep them in there. Y yeah. [speaker003:] So I think it [disfmarker] it's debatable whether you want to do phone level in the same thing. [speaker006:] OK. [speaker003:] But I think, a anything at frame level, even P file, is too verbose. [speaker006:] OK. So [disfmarker] [speaker003:] I would use something tighter than P files. [speaker006:] Do you [disfmarker] Are you familiar with it? [speaker003:] So. [speaker006:] I haven't seen this particular format, [speaker001:] I mean, I've [disfmarker] I've used them. [speaker006:] but [disfmarker] [speaker001:] I don't know what their structure is. 
[speaker006:] OK. [speaker001:] I've forgot what the str [speaker004:] But, wait a minute, P file for each frame is storing a vector of cepstral or PLP values, right? [speaker003:] It's whatever you want, actually. [speaker004:] Right. [speaker003:] So that [disfmarker] what's nice about the P file [disfmarker] It [disfmarker] i Built into it is the concept of [pause] frames, utterances, sentences, that sort of thing, that structure. And then also attached to it is an arbitrary vector of values. [speaker006:] Oh. [speaker003:] And it can take different types. So it [disfmarker] th they don't all have to be floats. You know, you can have integers and you can have doubles, and all that sort of stuff. [speaker006:] So that [disfmarker] that sounds [disfmarker] that sounds about what I w [speaker003:] Um. Right? And it has a header [disfmarker] it has a header format that [pause] describes it [pause] to some extent. So, the only problem with it is it's actually storing the [pause] utterance numbers and the [pause] frame numbers in the file, even though they're always sequential. And so it does waste a lot of space. [speaker001:] Hmm. [speaker003:] But it's still a lot tighter than [disfmarker] than ASCII. And we have a lot of tools already to deal with it. [speaker006:] You do? OK. Is there some documentation on this somewhere? [speaker003:] Yeah, there's a ton of it. [speaker006:] OK, great. [speaker003:] Man pages and, uh, source code, and me. [speaker006:] So, I mean, that sounds good. I [disfmarker] I was just looking for something [disfmarker] I'm not a database person, but something sort of standard enough that, you know, if we start using this we can give it out, other people can work on it, [speaker003:] Yeah, it's not standard. [speaker006:] or [disfmarker] [comment] Is it [disfmarker]? [speaker003:] I mean, it's something that we developed at ICSI. 
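The kind of frame-level binary layout being described, each row carrying a sentence number, a frame number, and an arbitrary feature vector, can be mimicked with Python's `struct`. This is a generic sketch of the idea only, not the real P file layout (which, as noted above, also carries a descriptive header); it also illustrates the redundancy point, since the sequential sentence/frame numbers are stored in every row.

```python
import struct

N_FEATS = 3
# Per-row layout: two big-endian uint32s (sentence, frame), then N_FEATS float32s.
ROW = struct.Struct(f">II{N_FEATS}f")

def pack_rows(rows):
    """rows: iterable of (sentence, frame, [f0, f1, f2]) -> bytes."""
    return b"".join(ROW.pack(s, fr, *feats) for s, fr, feats in rows)

def unpack_rows(blob):
    out = []
    for off in range(0, len(blob), ROW.size):
        s, fr, *feats = ROW.unpack_from(blob, off)
        out.append((s, fr, feats))
    return out

blob = pack_rows([(0, 0, [1.0, 2.0, 3.0]), (0, 1, [1.5, 2.5, 3.5])])
# 2 rows * (2 uint32 + 3 float32) = 2 * 20 = 40 bytes, well under the same
# numbers written out as ASCII text.
print(len(blob))  # 40
```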
But, uh [disfmarker] [speaker006:] But it's [pause] been used here [speaker003:] But it's been used here [speaker006:] and people've [disfmarker] [speaker003:] and [disfmarker] and, you know, we have a [pause] well configured system that you can distribute for free, and [disfmarker] [speaker004:] I mean, it must be the equivalent of whatever you guys used to store feat your computed features in, right? [speaker006:] OK. [speaker001:] Yeah, th we have [disfmarker] Actually, we [disfmarker] we use a generalization of the [disfmarker] the Sphere format. [speaker004:] Mmm. [speaker001:] Um, but [disfmarker] Yeah, so there is something like that but it's, um, probably not as sophist [speaker004:] And I think there's [disfmarker] [speaker003:] Well, what does H T K do for features? Or does it even have a concept of features? [speaker001:] They ha it has its own [disfmarker] I mean, Entropic has their own feature format that's called, like, S SD or some so SF or something like that. [speaker006:] Yeah. [speaker004:] Yeah. [speaker003:] I'm just wondering, would it be worth while to use that instead? [speaker001:] Hmm? [speaker006:] Yeah. Th this is exactly the kind of decision [disfmarker] It's just whatever [disfmarker] [speaker004:] But, I mean, people don't typically share this kind of stuff, right? [speaker001:] Right. [speaker004:] I mean [disfmarker] [speaker003:] They generate their own. [speaker006:] Actually, I [disfmarker] I just [disfmarker] you know, we [disfmarker] we've done this stuff on prosodics [speaker004:] Yeah. [speaker006:] and three or four places have asked for those prosodic files, and we just have an ASCII, uh, output of frame by frame. [speaker003:] Ah, right. [speaker006:] Which is fine, but it gets unwieldy to go in and [disfmarker] and query these files with really huge files. [speaker003:] Right. [speaker006:] I mean, we could do it. 
I was just thinking if there's something that [disfmarker] where all the frame values are [disfmarker] [speaker003:] And a and again, if you have a [disfmarker] [speaker006:] Hmm? [speaker003:] if you have a two hour long meeting, that's gonna [disfmarker] [speaker006:] They're [disfmarker] they're fair they're quite large. And these are for ten minute Switchboard conversations, [speaker003:] Yeah, I mean, they'd be emo enormous. Right. [speaker006:] and [disfmarker] So it's doable, it's just that you can only store a feature vector at frame by frame and it doesn't have any kind of, um [disfmarker] [speaker004:] Is [disfmarker] is the sharing part of this a pretty important [pause] consideration or does that just sort of, uh [disfmarker] a nice thing to have? [speaker006:] I [disfmarker] I don't know enough about what we're gonna do with the data. But I thought it would be good to get something that we can [disfmarker] that other people can use or adopt for their own kinds of encoding. And just, I mean we have to use some we have to make some decision about what to do. [speaker003:] Yeah. [speaker006:] And especially for the prosody work, what [disfmarker] what it ends up being is you get features from the signal, and of course those change every time your alignments change. So you re run a recognizer, you want to recompute your features, um, and then keep the database up to date. [speaker003:] Right. [speaker006:] Or you change a word, or you change a [vocalsound] utterance boundary segment, which is gonna happen a lot. And so I wanted something where [pause] all of this can be done in a elegant way and that if somebody wants to try something or compute something else, that it can be done flexibly. Um, it doesn't have to be pretty, it just has to be, you know, easy to use, and [disfmarker] [speaker003:] Yeah, the other thing [disfmarker] We should look at ATLAS, the NIST thing, [speaker006:] Oh. [speaker001:] Mmm. 
[speaker003:] and see if they have anything at that level. [speaker006:] Uh [disfmarker] [speaker003:] I mean, I'm not sure what to do about this with ATLAS, because they chose a different route. I chose something that [disfmarker] Th there are sort of two choices. Your [disfmarker] your file format can know about [disfmarker] know that you're talking about language [pause] and speech, which is what I chose, and time, or your file format can just be a graph representation. And then the application has to impose the structure on top. So what it looked like ATLAS chose is, they chose the other way, which was their file format is just nodes and links, and you have to interpret what they mean yourself. [speaker006:] And why did you not choose that type of approach? [speaker003:] Uh, because I knew that we were doing speech, and I thought it was better if you're looking at a raw file to be [disfmarker] t for the tags to say "it's an utterance", as opposed to the tag to say "it's a link". [speaker006:] OK. OK. But other than that, are they compatible? [speaker003:] So, but [disfmarker] [speaker006:] I mean, you could sort of [disfmarker] [speaker003:] Yeah, they're reasonably compatible. [speaker006:] I mean, you [disfmarker] you could [disfmarker] [speaker004:] You could probably translate between them. [speaker006:] Yeah, that's w [speaker003:] Yep. [speaker006:] So, [speaker003:] So, well, the other thing is if we choose to use ATLAS, which maybe we should just do, we should just throw this out before we invest a lot of time in it. [speaker006:] OK. I don't [disfmarker] So this is what the meeting's about, just sort of how to [disfmarker] [speaker003:] Yeah. [speaker006:] Um, cuz we need to come up with a database like this just to do our work. And I actually don't care, as long as it's something useful to other people, what we choose. [speaker003:] Yeah. 
[speaker006:] So maybe it's [disfmarker] maybe oth you know, if [disfmarker] if you have any idea of how to choose, cuz I don't. [speaker003:] The only thing [disfmarker] Yeah. [speaker001:] Do they already have tools? [speaker003:] I mean, I [disfmarker] I chose this for a couple reasons. One of them is that it's easy to parse. You don't need a full XML parser. It's very easy to just write a Perl script [pause] to parse it. [speaker001:] As long as uh each tag is on one line. [speaker003:] Exactly. Exactly. Which I always do. [speaker006:] And you can have as much information in the tag as you want, right? [speaker003:] Well, I have it structured. Right? So each type tag has only particular items that it can take. [speaker006:] Can you [disfmarker] But you can add to those structures if you [disfmarker] [speaker003:] Sure. If you have more information. So what [disfmarker] What NIST would say is that instead of doing this, you would say something like link [nonvocalsound] start equals, um, you know, some node ID, [speaker006:] Yeah. So [disfmarker] [speaker003:] end equals some other node ID ", and then "type" would be "utterance". [speaker001:] Hmm. [speaker003:] You know, so it's very similar. [speaker006:] So why would it be a [disfmarker] a waste to do it this way if it's similar enough that we can always translate it? [speaker004:] It probably wouldn't be a waste. It would mean that at some point if we wanted to switch, we'd just have to translate everything. [speaker003:] Write a translator. But it se [speaker006:] But it [disfmarker] but that sounds [disfmarker] [speaker003:] Since they are developing a big [disfmarker] [speaker004:] But that's [disfmarker] I don't think that's a big deal. [speaker006:] As long as it is [disfmarker] [speaker003:] they're developing a big infrastructure. 
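The two external representations being compared, typed tags (`<utterance start=... end=...>`) versus ATLAS-style generic links with a `type` attribute, carry the same information, which is why translating between them should be mostly mechanical. A sketch of that translation, with element names reconstructed from the discussion rather than taken from either actual format:

```python
import xml.etree.ElementTree as ET

# Tags whose name encodes their annotation type in the "typed" style.
TYPED_TAGS = {"utterance", "noise", "prosody"}

def typed_to_links(doc):
    """Rewrite <utterance .../> etc. as generic <link type="utterance" .../>."""
    out = ET.Element(doc.tag)
    for child in doc:
        if child.tag in TYPED_TAGS:
            link = ET.SubElement(out, "link", dict(child.attrib, type=child.tag))
            link.text = child.text
        else:
            out.append(child)  # e.g. the shared <timeline> passes through as-is
    return out

doc = ET.fromstring(
    '<annotation><timeline/>'
    '<utterance id="u17" start="t17" end="t18">hello</utterance>'
    '</annotation>'
)
links = typed_to_links(doc)
print(ET.tostring(links, encoding="unicode"))
```

The reverse direction is the risky one alluded to above: a generic `link` may carry attributes the typed schema never anticipated, which is the "miss a little detail" worry.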
And so it seems to me that if [disfmarker] if we want to use that, we might as well go directly to what they're doing, rather than [disfmarker] [speaker001:] If we want to [disfmarker] Do they already have something that's [disfmarker] that would be useful for us in place? [speaker004:] Yeah. See, that's the question. I mean, how stable is their [disfmarker] Are they ready to go, [speaker003:] The [disfmarker] I looked at it [disfmarker] [speaker004:] or [disfmarker]? [speaker003:] The last time I looked at it was a while ago, probably a year ago, [speaker004:] Hmm. [speaker003:] uh, when we first started talking about this. And at that time at least [vocalsound] it was still not very [pause] complete. And so, specifically they didn't have any external format representation at that time. They just had the sort of conceptual [pause] node [disfmarker] uh, annotated transcription graph, which I really liked. And that's exactly what this stuff is based on. Since then, they've developed their own external file format, which is, uh, you know, this sort of s this sort of thing. Um, and apparently they've also developed a lot of tools, but I haven't looked at them. Maybe I should. [speaker001:] We should [disfmarker] we should find out. [speaker006:] I mean, would the tools [disfmarker] would the tools run on something like this, if you can translate them anyway? [speaker003:] Um, th what would [disfmarker] would [disfmarker] would [disfmarker] what would worry me is that maybe we might miss a little detail [speaker006:] I mean, that [disfmarker] I guess it's a question that [disfmarker] [speaker001:] It's a hassle [speaker006:] uh, yeah. [speaker001:] if [disfmarker] [speaker003:] that would make it very difficult to translate from one to the other. [speaker006:] OK. 
[speaker001:] I [disfmarker] I think if it's conceptually close, and they already have or will have tools that everybody else will be using, I mean, [vocalsound] it would be crazy to do something s you know, separate that [disfmarker] [speaker006:] OK. [speaker003:] Yeah, we might as well. Yep. [speaker006:] Yeah. [speaker003:] So I'll [disfmarker] I'll take a closer look at it. [speaker006:] Actually, so it's [disfmarker] that [disfmarker] that would really be the question, is just what you would feel is in the long run the best thing. [speaker003:] And [disfmarker] Right. [speaker006:] Cuz [vocalsound] once we start, sort of, doing this I don't [disfmarker] we don't actually have enough time to probably have to rehash it out again [speaker003:] The [disfmarker] Yep. [speaker006:] and [disfmarker] [speaker003:] The other thing [disfmarker] the other way that I sort of established this was as easy translation to and from the Transcriber format. [speaker006:] s Right. [speaker003:] Um, [speaker006:] Right. [speaker003:] but [disfmarker] [speaker006:] I mean, I like this. This is sort of intuitively easy to actually r read, [speaker003:] Yep. [speaker006:] as easy it could [disfmarker] as it could be. But, I suppose that [pause] as long as they have a type here that specifies "utt", um, [speaker003:] It's almost the same. [speaker006:] it's [disfmarker] yeah, close enough that [disfmarker] [speaker003:] The [disfmarker] the [disfmarker] the [disfmarker] the point is [disfmarker] with this, though, is that you can't really add any supplementary information. Right? So if you suddenly decide that you want [disfmarker] [speaker006:] You have to make a different type. [speaker003:] Yeah. You'd have to make a different type. 
[speaker006:] So [disfmarker] Well, if you look at it and [disfmarker] Um, I guess in my mind I don't know enough [disfmarker] Jane would know better, [comment] about the [pause] types of annotations and [disfmarker] and [disfmarker] But I imagine that those are things that would [disfmarker] well, you guys mentioned this, [comment] that could span any [disfmarker] it could be in its own channel, it could span time boundaries of any type, [speaker003:] Right. [speaker006:] it could be instantaneous, things like that. Um, and then from the recognition side we have backtraces at the phone level. If [disfmarker] if it can handle that, it could handle states or whatever. [speaker003:] Right. [speaker006:] And then at the prosody level we have frame [disfmarker] sort of like cepstral feature files, [speaker003:] Yep. [speaker006:] uh, like these P files or anything like that. And that's sort of the world of things that I [disfmarker] And then we have the aligned channels, of course, [speaker003:] Right. [speaker006:] and [disfmarker] [speaker001:] It seems to me you want to keep the frame level stuff separate. [speaker006:] Yeah. I [disfmarker] I definitely agree [speaker001:] And then [disfmarker] [speaker006:] and I wanted to find actually a f a nicer format or a [disfmarker] maybe a more compact format than what we used before. [speaker003:] Right. [speaker006:] Just cuz you've got [vocalsound] ten channels or whatever and two hours of a meeting. It's [disfmarker] it's a lot of [disfmarker] [speaker001:] Now [disfmarker] now how would you [disfmarker] how would you represent, um, multiple speakers in this framework? [speaker003:] Huge. [speaker001:] Were [disfmarker] You would just represent them as [disfmarker] [speaker003:] Um, [speaker001:] You would have like a speaker tag or something? 
[speaker003:] there's a spea speaker tag up at the top which identifies them and then each utt the way I had it is each turn or each utterance, [comment] I don't even remember now, had a speaker ID tag attached to it. [speaker001:] Mm hmm. OK. [speaker003:] And in this format you would have a different tag, which [disfmarker] which would, uh, be linked to the link. [speaker006:] Yeah. [speaker003:] So [disfmarker] so somewhere else you would have another thing [pause] that would be, um [disfmarker] Let's see, would it be a node or a link? Um [disfmarker] And so [disfmarker] so this one would have, um, an ID is link [disfmarker] [comment] link seventy four or something like that. And then somewhere up here you would have a link that [disfmarker] that, uh, you know, was referencing L seventy four and had speaker Adam. [speaker001:] Mm hmm. [speaker006:] Actually, it's the channel, I think, that [disfmarker] [speaker001:] Is i? [speaker003:] You know, or something like that. [speaker006:] I mean, w [speaker001:] Well, channel or speaker or whatever. [speaker006:] yeah, channel is what the channelized output out [speaker001:] It doesn't [disfmarker] [speaker003:] This isn't quite right. I have to look at it again. [speaker001:] Right. [speaker006:] Yeah, but [disfmarker] [speaker001:] But [disfmarker] but [disfmarker] so how in the NIST format do we express [vocalsound] a hierarchical relationship between, um, say, an utterance and the words within it? So how do you [pause] tell [pause] that [pause] these are the words that belong to that utterance? [speaker003:] Um, you would have another structure lower down than this that would be saying they're all belonging to this ID. [speaker001:] Mm hmm. [speaker004:] So each thing refers to the [pause] utterance that it belongs to. [speaker003:] Right. [speaker004:] So it's [disfmarker] it's not hi it's sort of bottom up. 
[speaker003:] And then each utterance could refer to a turn, and each turn could refer to something higher up. [speaker006:] And what if you actually have [disfmarker] So right now what you have as utterance, um, the closest thing that comes out of the channelized is the stuff between the segment boundaries that the transcribers put in or that Thilo put in, which may or may not actually be, like, a s it's usually not [disfmarker] um, the beginning and end of a sentence, say. [speaker003:] Well, that's why I didn't call it "sentence". [speaker006:] So, right. Um, so it's like a segment or something. [speaker003:] Yeah. [speaker006:] So, I mean, I assume this is possible, that if you have [disfmarker] someone annotates the punctuation or whatever when they transcribe, you can say, you know, from [disfmarker] for [disfmarker] from the c beginning of the sentence to the end of the sentence, from the annotations, this is a unit, even though it never actually [disfmarker] i It's only a unit by virtue of the annotations [pause] at the word level. [speaker003:] Sure. I mean, so you would [disfmarker] you would have yet another tag. [speaker006:] And then that would get a tag somehow. [speaker003:] You'd have another tag which says this is of type "sentence". [speaker006:] OK. OK. [speaker003:] And, [speaker006:] But it's just not overtly in the [disfmarker] [speaker003:] what [disfmarker] [speaker001:] OK. [speaker006:] Um, cuz this is exactly the kind of [disfmarker] [speaker001:] So [disfmarker] [speaker006:] I think that should be [pause] possible as long as the [disfmarker] But, uh, what I don't understand is where the [disfmarker] where in this type of file [pause] that would be expressed. [speaker003:] Right. You would have another tag somewhere. It's [disfmarker] well, there're two ways of doing it. [speaker006:] S so it would just be floating before the sentence or floating after the sentence without a time mark. 
[speaker003:] You could have some sort of link type [disfmarker] type equals "sentence", and ID is "S whatever". And then lower down you could have an utterance. So the type is "utterance" [disfmarker] equals "utt". And you could either say that [disfmarker] No. I don't know [disfmarker] I take that back. [speaker001:] So here's the thing. Um [disfmarker] [speaker006:] See, cuz it's [disfmarker] [speaker003:] Can you [disfmarker] can you say that this is part of this, [speaker001:] Hhh. [speaker006:] it's [disfmarker] [speaker004:] You would just have a r [speaker006:] S [speaker003:] or do you say this is part of this? I think [disfmarker] [speaker006:] But they're [disfmarker] [speaker004:] You would refer up to the sentence. [speaker001:] Well, the thing [disfmarker] [speaker006:] they're actually overlapping each other, sort of. [speaker003:] So [disfmarker] [speaker001:] the thing is that some something may be a part of one thing for one purpose and another thing of another purpose. [speaker003:] Right. [speaker006:] You have to have another type then, I guess. [speaker001:] So f s Um, well, [speaker003:] Well, I think I'm [disfmarker] I think w I had better look at it again [speaker001:] s let's [disfmarker] let's ta so let's [disfmarker] [speaker006:] Yeah. OK. [speaker001:] so [disfmarker] [speaker003:] because I [disfmarker] I'm [disfmarker] [speaker006:] OK. [speaker001:] y So for instance [@ @] [comment] sup [speaker003:] There's one level [disfmarker] there's one more level of indirection that I'm forgetting. [speaker001:] Suppose you have a word sequence and you have two different segmentations of that same word sequence. f Say, one segmentation is in terms of, um, you know, uh, sentences. And another segmentation is in terms of, um, [vocalsound] I don't know, [comment] prosodic phrases. And let's say that they don't [pause] nest. So, you know, a prosodic phrase may cross two sentences or something. [speaker003:] Right. 
[speaker001:] I don't know if that's true or not but [vocalsound] let's as [speaker006:] Well, it's definitely true with the segment. [speaker001:] Right. [speaker006:] That's what I [disfmarker] exactly what I meant by the utterances versus the sentence could be sort of [disfmarker] [speaker001:] Yeah. So, you want to be s you want to say this [disfmarker] this word is part of that sentence and this prosodic phrase. [speaker006:] Yeah. Yeah. [speaker001:] But the phrase is not part of the sentence and neither is the sentence part of the phrase. [speaker006:] Right. [speaker003:] I I'm pretty sure that you can do that, but I'm forgetting the exact level of nesting. [speaker001:] So, you would have to have [vocalsound] two different pointers from the word up [disfmarker] one level up, one to the sent [speaker003:] So [disfmarker] so what you would end up having is a tag saying "here's a word, and it starts here and it ends here". [speaker001:] Right. [speaker003:] And then lower down you would say "here's a prosodic boundary and it has these words in it". And lower down you'd have here's a sentence, [speaker006:] An Right. [speaker001:] Right. [speaker003:] and it has these words in it ". [speaker006:] So you would be able to go in and say, you know, "give me all the words in the bound in the prosodic phrase and give me all the words in the [disfmarker]" [speaker003:] Yep. [speaker006:] Yeah. [speaker003:] So I think that's [disfmarker] that would wor [speaker006:] Um, OK. [speaker003:] Let me look at it again. [speaker001:] Mm hmm. [speaker006:] OK. [speaker001:] The [disfmarker] the o the other issue that you had was, how do you actually efficiently extract, um [disfmarker] find and extract information in a structure of this type? [speaker003:] So. [speaker006:] That's good. 
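The bottom-up scheme just discussed can be sketched as follows: words are the shared layer, and two independent segmentations (sentences, prosodic phrases) each point down at word IDs, so a phrase may cross a sentence boundary without any nesting. The words and spans here are invented for illustration.

```python
# Words are the common layer both segmentations refer to.
words = {
    "w1": "so", "w2": "we", "w3": "met", "w4": "today", "w5": "and", "w6": "talked",
}
# Two segmentations of the same word sequence that do NOT nest:
sentences = {"s1": ["w1", "w2", "w3"], "s2": ["w4", "w5", "w6"]}
phrases = {"p1": ["w1", "w2"], "p2": ["w3", "w4", "w5", "w6"]}  # p2 crosses s1/s2

def words_in(unit_id, segmentation):
    """'Give me all the words in' a sentence or prosodic phrase."""
    return [words[w] for w in segmentation[unit_id]]

print(words_in("p2", phrases))  # ['met', 'today', 'and', 'talked']
```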
[speaker001:] So you gave some examples like [disfmarker] [speaker006:] Well, uh, and, I mean, you guys might [disfmarker] I don't know if this is premature because I suppose once you get the representation you can do this, but the kinds of things I was worried about is, [speaker001:] No, that's not clear. [speaker006:] uh [disfmarker] [speaker001:] I mean, yeah, you c sure you can do it, [speaker006:] Well, OK. So i if it [disfmarker] [speaker001:] but can you do it sort of l l you know, it [disfmarker] [speaker006:] I I mean, I can't do it, but I can [disfmarker] um, [speaker001:] y y you gotta [disfmarker] you gotta do this [disfmarker] you [disfmarker] you're gonna want to do this very quickly [speaker003:] Well [disfmarker] [speaker001:] or else you'll spend all your time sort of searching through very [vocalsound] complex data structures [disfmarker] [speaker006:] Right. You'd need a p sort of a paradigm for how to do it. But an example would be "find all the cases in which Adam started to talk while Andreas was talking and his pitch was rising, Andreas's pitch". That kind of thing. [speaker003:] Right. I mean, that's gonna be [disfmarker] Is the rising pitch a [pause] feature, or is it gonna be in the same file? [speaker006:] Well, the rising pitch will never be [pause] hand annotated. So the [disfmarker] all the prosodic features are going to be automatically [disfmarker] [speaker003:] But the [disfmarker] I mean, that's gonna be hard regardless, [speaker006:] So they're gonna be in those [disfmarker] [speaker003:] right? Because you're gonna have to write a program that goes through your feature file and looks for rising pitches. [speaker001:] Yeah. [speaker006:] So [disfmarker] Right. So normally what we would do is we would say "what do we wanna assign rising pitch to?" Are we gonna assign it to words? Are we gonna just assign it to sort of [disfmarker] when it's rising we have a begin end rise representation? 
But suppose we dump out this file and we say, uh, for every word we just classify it as, w you know, rise or fall or neither? [speaker003:] OK. Well, in that case you would add that to this [pause] format [speaker006:] OK. So we would basically be sort of, um, taking the format and enriching it with things that we wanna query in relation to the words that are already in the file, [speaker003:] r Right. [speaker006:] and then querying it. OK. [speaker001:] You want sort of a grep that's [disfmarker] that works at the structural [disfmarker] on the structural representation. [speaker003:] You have that. There's a [pause] standard again in XML, specifically for searching XML documents [disfmarker] structured X XML documents, where you can specify both the content and the structural position. [speaker001:] Yeah, but it's [disfmarker] it's not clear that that's [disfmarker] [speaker006:] If [disfmarker] [speaker001:] That's relative to the structure of the XML document, not to the structure of what you're representing in the document. [speaker003:] You use it as a tool. You use it as a tool, not an end user. It's not an end user thing. [speaker001:] Right. [speaker003:] It's [disfmarker] it's [disfmarker] you would use that to build your tool to do that sort of search. [speaker001:] Right. Be Because here you're specifying a lattice. [speaker006:] Uh [disfmarker] [speaker001:] So the underlying [disfmarker] that's the underlying data structure. [speaker006:] But as long as the [disfmarker] [speaker001:] And you want to be able to search in that lattice. [speaker003:] It's a graph, but [disfmarker] [speaker001:] That's different from searching through the text. [speaker006:] But it seems like as long as the features that [disfmarker] [speaker003:] Well, no, no, no. The whole point is that the text and the lattice are isomorphic. They [pause] represent each other [pause] completely. 
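The structural-search library idea above can be illustrated with ElementTree's restricted XPath subset, which is enough to specify both a tag's position and an attribute's content in one expression. The document and its tags are invented for the sketch.

```python
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<annotation>
  <link id="L1" type="utt" speaker="adam"/>
  <link id="L2" type="utt" speaker="andreas"/>
  <link id="L3" type="sentence"/>
</annotation>
""")

# Structural position plus attribute content, as a tool-builder would use it:
utts = doc.findall(".//link[@type='utt']")
print([u.get("id") for u in utts])  # ['L1', 'L2']
```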
[speaker001:] Um [disfmarker] [speaker003:] So that [disfmarker] I mean th [speaker006:] That's true if the features from your acoustics or whatever that are not explicitly in this are at the level of these types. [speaker001:] Hhh. [speaker006:] That [disfmarker] that if you can do that [disfmarker] [speaker003:] Yeah, but that's gonna be the trouble no matter what. Right? No matter what format you choose, you're gonna have the trou you're gonna have the difficulty of relating the [disfmarker] the frame level features [disfmarker] [speaker006:] That's right. That's true. That's why I was trying to figure out what's the best format for this representation. [speaker003:] Yep. [speaker006:] And it's still gonna be [disfmarker] it's still gonna be, uh, not direct. [speaker001:] Hmm. [speaker006:] You know, it [disfmarker] [speaker003:] Right. [speaker006:] Or another example was, you know, uh, where in the language [disfmarker] where in the word sequence are people interrupting? So, I guess that one's actually easier. [speaker004:] What about [disfmarker] what about, um, the idea of using a relational database to, uh, store the information from the XML? So you would have [disfmarker] XML basically would [disfmarker] Uh, you [disfmarker] you could use the XML to put the data in, and then when you get data out, you put it back in XML. So use XML as sort of the [disfmarker] the transfer format, [speaker003:] Transfer. [speaker004:] uh, but then you store the data in the database, which allows you to do all kinds of [pause] good search things in there. [speaker003:] The, uh [disfmarker] One of the things that ATLAS is doing is they're trying to define an API which is independent of the back store, [speaker006:] Huh. [speaker003:] so that, uh, you could define a single API and the [disfmarker] the storage could be flat XML files or a database. [speaker004:] Mm hmm. 
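Chuck's suggestion above in miniature: XML as the transfer format, a relational store (here in-memory SQLite) for the "good search things". Table and column names are made up for the sketch; `end_` avoids SQLite's `END` keyword.

```python
import sqlite3
import xml.etree.ElementTree as ET

xml_in = """<annotation>
  <utt id="u1" speaker="adam" start="0.0" end="1.5"/>
  <utt id="u2" speaker="andreas" start="1.2" end="2.0"/>
</annotation>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE utt (id TEXT, speaker TEXT, start REAL, end_ REAL)")
for u in ET.fromstring(xml_in):  # XML in, rows into the database
    db.execute("INSERT INTO utt VALUES (?, ?, ?, ?)",
               (u.get("id"), u.get("speaker"), float(u.get("start")), float(u.get("end"))))

# Example query: pairs of utterances that overlap in time.
rows = db.execute("""SELECT a.id, b.id FROM utt a, utt b
                     WHERE a.id < b.id AND a.start < b.end_ AND b.start < a.end_""").fetchall()
print(rows)  # [('u1', 'u2')]
```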
[speaker003:] My opinion on that is for the s sort of stuff that we're doing, [comment] I suspect it's overkill to do a full relational database, that, um, just a flat file and, uh, search tools I bet will be enough. [speaker001:] But [disfmarker] [speaker003:] But that's the advantage of ATLAS, is that if we actually take [disfmarker] decide to go that route completely and we program to their API, then if we wanted to add a database later it would be pretty easy. [speaker004:] Mm hmm. Mm hmm. [speaker006:] It seems like the kind of thing you'd do if [disfmarker] I don't know, if people start adding all kinds of s bells and whistles to the data. And so that might be [disfmarker] I mean, it'd be good for us to know [disfmarker] to use a format where we know we can easily, um, input that to some database if other people are using it. [speaker003:] Yep. [speaker006:] Something like that. [speaker003:] I guess I'm just a little hesitant to try to go whole hog on sort of the [disfmarker] the whole framework that [disfmarker] that NIST is talking about, with ATLAS and a database and all that sort of stuff, [speaker006:] So [disfmarker] [speaker003:] cuz it's a big learning curve, just to get going. [speaker004:] Hmm. [speaker001:] Hmm. [speaker003:] Whereas if we just do a flat file format, sure, it may not be as efficient but everyone can program in Perl and [disfmarker] and use it. [speaker006:] OK. [speaker003:] Right? So, as opposed to [disfmarker] [speaker001:] But this is [disfmarker] I [disfmarker] I'm still, um, [vocalsound] not convinced that you can do much at all on the text [disfmarker] on the flat file that [disfmarker] that [disfmarker] you know, the text representation. e Because the text representation is gonna be, uh, not reflecting the structure of [disfmarker] of your words and annotations. It's just [disfmarker] it's [disfmarker] [speaker003:] Well, if it's not representing it, then how do you recover it? Of course it's representing it. 
[speaker001:] No. You [disfmarker] you have to [disfmarker] [speaker003:] That's the whole point. [speaker001:] what you have to do is you have to basically [disfmarker] Y yeah. You can use Perl to read it in and construct a internal representation that is essentially a lattice. But, the [disfmarker] [speaker004:] Yeah. [speaker003:] OK. [speaker001:] and then [disfmarker] [speaker003:] Well, that was a different point. Right? So what I was saying is that [disfmarker] [speaker001:] Right. But that's what you'll have to do. [speaker003:] For Perl [disfmarker] if you want to just do Perl. [speaker001:] Bec be [speaker003:] If you wanted to use the structured XML query language, that's a different thing. And it's a set of tools [vocalsound] that let you specify given the D DDT [disfmarker] DTD of the document, um, what sorts of structural searches you want to do. So you want to say that, you know, you're looking for, um, a tag within a tag within a particular tag that has this particular text in it, um, and, uh, refers to a particular value. And so the point isn't that an end user, who is looking for a query like you specified, wouldn't program it in this language. What you would do is, someone would build a tool that used that as a library. So that they [disfmarker] so that you wouldn't have to construct the internal representations yourself. [speaker006:] Is a [disfmarker] See, I think the kinds of questions, at least in the next [disfmarker] to the end of this year, are [disfmarker] there may be a lot of different ones, but they'll all have a similar nature. They'll be looking at either a word level prosodic, uh, an [disfmarker] a value, like a continuous value, [speaker003:] Mm hmm. [speaker006:] like the slope of something. But you know, we'll do something where we [disfmarker] some kind of data reduction where the prosodic features are sort o uh, either at the word level or at the segment level, [speaker003:] Right. 
[speaker006:] or [disfmarker] or something like that. They're not gonna be at the phone level and they're no not gonna be at the frame level when we get done with sort of giving them simpler shapes and things. And so the main thing is just being able [disfmarker] Well, I guess, the two goals. Um, one that Chuck mentioned is starting out with something that we don't have to start over, that we don't have to throw away if other people want to extend it for other kinds of questions, [speaker003:] Right. [speaker006:] and being able to at least get enough, uh, information out on [disfmarker] where we condition the location of features on information that's in the kind of file that you [pause] put up there. And that would [disfmarker] that would do it, I mean, for me. [speaker003:] Yeah. I think that there are quick and dirty solutions, and then there are long term, big infrastructure solutions. And so [vocalsound] we want to try to pick something that lets us do a little bit of both. [speaker006:] In the between, right. And especially that the representation doesn't have to be thrown away, [speaker003:] Um [disfmarker] Right. [speaker006:] even if your tools change. [speaker003:] And so it seems to me that [disfmarker] I mean, I have to look at it again to see whether it can really do what we want, but if we use the ATLAS external file representation, um, it seems like it's rich enough that you could do quick tools just as I said in Perl, [speaker006:] Yeah. [speaker003:] and then later on if we choose to go up the learning curve, we can use the whole ATLAS inter infrastructure, [speaker006:] I mean, that sounds good to me. [speaker003:] which has all that built in. [speaker006:] I [disfmarker] I don't [disfmarker] So if [disfmarker] if you would l look at that and let us know what you think. [speaker003:] Sure. 
[speaker006:] I mean, I think we're sort of guinea pigs, cuz I [disfmarker] I want to get the prosody work done but I don't want to waste time, you know, getting the [disfmarker] [speaker001:] Oh, maybe [disfmarker] [speaker006:] Yeah? [speaker001:] um [disfmarker] [speaker003:] Well, I wouldn't wait for the formats, because anything you pick we'll be able to translate to another form. [speaker001:] Well [disfmarker] Ma [speaker006:] OK. [speaker001:] well, maybe you should actually look at it yourself too to get a sense of what it is you'll [disfmarker] you'll be dealing with, because, um, you know, Adam might have one opinion but you might have another, [speaker002:] Yeah. [speaker001:] so [speaker006:] Yeah, definitely. [speaker001:] I think the more eyes look at this the better. [speaker006:] Especially if there's, e um [disfmarker] you know, if someone can help with at least the [disfmarker] the setup of the right [disfmarker] Oh, hi. [speaker003:] Hi, Jane. [speaker001:] Mmm. [speaker006:] the right representation, then, i you know, I hope it won't [disfmarker] We don't actually need the whole full blown thing to be ready, [speaker003:] Can you [disfmarker] Oh, well. [speaker006:] so. Um, so maybe if you guys can look at it and sort of see what, [speaker003:] Sure. [speaker002:] Yeah. [speaker006:] um [disfmarker] I think we're [disfmarker] we're [disfmarker] [vocalsound] we're actually just [disfmarker] [speaker003:] We're about done. [speaker006:] yeah, wrapping up, [speaker002:] Hmm. [speaker006:] but, um [disfmarker] Yeah, sorry, it's a uh short meeting, but, um [disfmarker] Well, I don't know. Is there anything else, like [disfmarker] I mean that helps me a lot, [speaker003:] Well, I think the other thing we might want to look at is alternatives to P file. 
[speaker006:] but [disfmarker] [speaker003:] I mean, th the reason I like P file is I'm already familiar with it, we have expertise here, and so if we pick something else, there's the learning curve problem. But, I mean, it is just something we developed at ICSI. And so [disfmarker] [speaker001:] Is there an [disfmarker] is there an IP API? [speaker003:] Yeah. [speaker001:] OK. [speaker003:] There's an API for it. And, uh, [speaker001:] There used to be a problem that they get too large, [speaker003:] a bunch of libraries, P file utilities. [speaker001:] and so [pause] basically the [disfmarker] uh the filesystem wouldn't [disfmarker] [speaker003:] Well, that's gonna be a problem no matter what. You have the two gigabyte limit on the filesystem size. And we definitely hit that with Broadcast News. [speaker001:] Maybe you could extend the API to, uh, support, uh, like splitting up, you know, conceptually one file into smaller files on disk so that you can essentially, you know, have arbitrarily long f [speaker003:] Yep. Most of the tools can handle that. So that [speaker001:] Yeah. [speaker003:] we didn't do it at the API level. We did it at the t tool level. That [disfmarker] that [disfmarker] most [disfmarker] many of them can s you can specify several P files and they'll just be done sequentially. [speaker001:] OK. [speaker003:] So. [speaker006:] So, I guess, yeah, if [disfmarker] if you and Don can [disfmarker] if you can show him the P file stuff and see. [speaker003:] Sure. [speaker006:] So this would be like for the F zero [disfmarker] [speaker003:] I mean, if you do "man P file" or "apropos P file", you'll see a lot. [speaker002:] True. I've used the P file, I think. I've looked at it at least, briefly, I think when we were doing s something. [speaker001:] What does the P stand for anyway? [speaker003:] I have no idea. [speaker002:] Oh, in there. [speaker003:] I didn't de I didn't develop it. You know, it was [disfmarker] I think it was Dave Johnson. 
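The tool-level workaround described above (several P files handled sequentially, conceptually one stream) can be sketched as a generator that chains files. The file contents and the line-per-frame format are invented stand-ins for real P files.

```python
import os
import tempfile

def frames(paths):
    """Yield frames from each file in turn, as if they were one big file."""
    for path in paths:
        with open(path) as f:
            for line in f:
                yield line.strip()

# Simulate a feature stream split across two files to stay under a size limit.
parts = []
for chunk in (["frame0", "frame1"], ["frame2", "frame3"]):
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "w") as f:
        f.write("\n".join(chunk) + "\n")
    parts.append(path)

all_frames = list(frames(parts))
print(all_frames)  # ['frame0', 'frame1', 'frame2', 'frame3']
for p in parts:
    os.remove(p)
```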
So it's all part of the Quicknet library. It has all the utilities for it. [speaker001:] No, P files were around way before Quicknet. [speaker003:] Oh, were they? [speaker001:] P files were [disfmarker] were around when [disfmarker] w with, um, [vocalsound] RAP. [speaker004:] Mm hmm. [speaker001:] Right? [speaker006:] It's like the history of ICSI. [speaker001:] You worked with P files. [speaker006:] Like [disfmarker] [speaker003:] Mm hmm. [speaker004:] No. [speaker001:] I worked with P files. [speaker006:] Yeah? [speaker004:] I don't remember what the "P" is, though. [speaker001:] No. [speaker003:] But there are ni they're [disfmarker] The [pause] Quicknet library has a bunch of things in it to handle P files, [speaker001:] Yeah. [speaker003:] so it works pretty well. [speaker006:] And that isn't really, I guess, as important as the [disfmarker] the main [disfmarker] I don't know what you call it, the [disfmarker] the main sort of word level [disfmarker] [speaker003:] Neither do I. [speaker004:] Probably stands for "Phil". Phil Kohn. [speaker003:] It's a Phil file? [speaker004:] Yeah. That's my guess. [speaker006:] Huh. OK. Well, that's really useful. I mean, this is exactly the kind of thing that I wanted to settle. Um, so [disfmarker] [speaker003:] Yeah, I've been meaning to look at the ATLAS stuff again anyway. [speaker006:] Great. [speaker003:] So, just keep [disfmarker] [speaker006:] Yeah. I guess it's also sort of a political deci I mean, if [disfmarker] if you feel like that's a community that would be good to tie into anyway, then it's [disfmarker] sounds like it's worth doing. [speaker003:] Yeah, I think it [disfmarker] it w [speaker001:] j I think there's [disfmarker] [speaker003:] And, w uh, as I said, I [disfmarker] what I did with this stuff [disfmarker] I based it on theirs. It's just they hadn't actually come up with an external format yet. So now that they have come up with a format, it doesn't [disfmarker] it seems pretty reasonable to use it. 
[speaker001:] Mmm. [speaker003:] But let me look at it again. [speaker006:] OK, great. [speaker003:] As I said, that [disfmarker] [speaker006:] Cuz we actually can start [disfmarker] [speaker003:] There's one level [disfmarker] there's one more level of indirection and I'm just blanking on exactly how it works. I gotta look at it again. [speaker006:] I mean, we can start with, um, I guess, this input from Dave's, which you had printed out, the channelized input. Cuz he has all of the channels, you know, with the channels in the tag and stuff like that. [speaker003:] Yeah, I've seen it. [speaker006:] So that would be i directly, [speaker003:] Yep. [speaker006:] um [disfmarker] [speaker003:] Easy [disfmarker] easy to map. [speaker006:] Yeah. And so then it would just be a matter of getting [disfmarker] making sure to handle the annotations that are, you know, not at the word level and, um, t to import the [speaker002:] Where are those annotations coming from? [speaker006:] Well, right now, I g Jane would [disfmarker] [vocalsound] would [disfmarker] [speaker003:] Mm hmm. [speaker006:] Yeah. [speaker005:] Are you talking about the overlap a annotations? [speaker006:] Yeah, any kind of annotation [pause] that, like, isn't already there. Uh, you know, anything you can envision. [speaker005:] Yeah. So what I was imagining was [disfmarker] um, so Dave says we can have unlimited numbers of green ribbons. And so put, uh, a [disfmarker] a green ribbon on for an overlap code. And since we w we [disfmarker] I [disfmarker] I think it's important to remain flexible regarding the time bins for now. And so it's nice to have [disfmarker] However, you know, you want to have it, uh, time time uh, located in the discourse. So, um, if we [disfmarker] if we tie the overlap code to the first word in the overlap, then you'll have a time marking. 
It won't [disfmarker] it'll be independent of the time bins, however these e evolve, shrink, or whatever, increase, or [disfmarker] Also, you could have different time bins for different purposes. And having it tied to the first word in an overlap segment is unique, uh, you know, anchored, clear. And it would just end up on a separate ribbon. So the overlap coding is gonna be easy with respect to that. [speaker003:] Right. [speaker005:] You look puzzled. [speaker004:] I [disfmarker] I just [disfmarker] I don't quite understand what these things are. [speaker005:] OK. What, the codes themselves? [speaker004:] Uh. Well, th overlap codes. [speaker005:] Or the [disfmarker]? [speaker004:] I'm not sure what that [@ @] [disfmarker] [speaker003:] Well, I mean, is that [disfmarker] [speaker005:] Well, we don't have to go into the codes. [speaker004:] It probably doesn't matter. [speaker005:] We don't have to go into the codes. [speaker004:] No, I d [speaker003:] I mean, it doesn't. I mean, that [disfmarker] not for the topic of this meeting. [speaker005:] But let me just [disfmarker] No. W the idea is just to have a separate green ribbon, you know, and [disfmarker] and [disfmarker] and let's say that this is a time bin. There's a word here. This is the first word of an overlapping segment of any length, overlapping with any other, uh, word [disfmarker] uh, i segment of any length. And, um, then you can indicate that this here was perhaps a ch a backchannel, or you can say that it was, um, a usurping of the turn, or you can [disfmarker] you know, any [disfmarker] any number of categories. But the fact is, you have it time tagged in a way that's independent of the, uh, sp particular time bin that the word ends up in. If it's a large unit or a small unit, or [speaker001:] Mm hmm. [speaker005:] we sh change the boundaries of the units, [speaker006:] Right. [speaker005:] it's still unique and [disfmarker] and, uh, fits with the format, flexible, all that. 
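Jane's anchoring scheme above in miniature: the overlap code lives on its own ribbon, keyed only to the start time of the first overlapped word, so it survives any later change to the time bins. Words, times, and code labels are all invented.

```python
words = [
    {"word": "yeah", "start": 3.10, "speaker": "A"},
    {"word": "so",   "start": 3.25, "speaker": "B"},  # B starts over A here
    {"word": "then", "start": 3.60, "speaker": "B"},
]

# Separate ribbon: codes anchored by time, independent of any segmentation.
overlap_ribbon = [{"time": 3.25, "code": "backchannel"}]

def code_at_word(word, ribbon):
    """Look up an overlap code anchored at this word's start time."""
    return next((a["code"] for a in ribbon if a["time"] == word["start"]), None)

print(code_at_word(words[1], overlap_ribbon))  # backchannel
print(code_at_word(words[0], overlap_ribbon))  # None
```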
[speaker001:] Um, it would be nice [disfmarker] um, eh, gr this is sort of r regarding [disfmarker] uh, uh it's related but not directly germane to the topic of discussion, but, when it comes to annotations, um, you often find yourself in the situation where you have [pause] different annotations [pause] of the same, say, word sequence. [speaker005:] Yeah. [speaker001:] OK? [speaker005:] Yeah. [speaker001:] And sometimes the word sequences even differ slightly because they were edited s at one place but not the other. So, once this data gets out there, some people might start annotating this for, I don't know, dialogue acts or, um, you know, topics or what the heck. You know, there's a zillion things that people might annotate this for. And the only thing that is really sort of common among all the versi the various versions of this data is the word sequence, [speaker005:] Yep. [speaker001:] or approximately. [speaker006:] Or the time. [speaker001:] Or the times. But, see, if you'd annotate dialogue acts, you don't necessarily want to [disfmarker] or topics [disfmarker] you don't really want to be dealing with time marks. [speaker006:] I guess. [speaker001:] You'd [disfmarker] it's much more efficient for them to just see the word sequence, right? [speaker006:] Mm hmm. [speaker001:] I mean, most people aren't as sophisticated as [disfmarker] as we are here with, you know, uh, time alignments and stuff. So [disfmarker] So the [disfmarker] the [disfmarker] the point is [disfmarker] [speaker003:] Should [disfmarker] should we mention some names on the people who are n? [speaker001:] Right. So, um, the p my point is that [pause] you're gonna end up with, uh, word sequences that are differently annotated. And [pause] you want some tool, uh, that is able to sort of merge these different annotations back into a single, uh, version. OK? 
Um, and we had this problem very massively, uh, at SRI when we worked, uh, a while back on, [vocalsound] uh [disfmarker] well, on dialogue acts as well as, uh, you know, um, what was it? uh, [speaker006:] Well, all the Switchboard in it. [speaker001:] utterance types. [speaker006:] Yeah. [speaker001:] There's, uh, automatic, uh, punctuation and stuff like that. Because we had one set of [pause] annotations that were based on, uh, one version of the transcripts with a particular segmentation, and then we had another version that was based on, uh, a different s slightly edited version of the transcripts with a different segmentation. So, [vocalsound] we had these two different versions which were [disfmarker] you know, you could tell they were from the same source but they weren't identical. So it was extremely hard [vocalsound] to reliably merge these two back together to correlate the information from the different annotations. [speaker003:] Yep. I [disfmarker] I don't see any way that file formats are gonna help us with that. [speaker001:] No. [speaker003:] It's [disfmarker] it's all a question of semantic. [speaker001:] No. But once you have a file format, I can imagine writing [disfmarker] not personally, but someone writing a tool that is essentially an alignment tool, um, that mediates between various versions, [speaker006:] Mm hmm. [speaker003:] Yeah. [speaker001:] and [disfmarker] uh, sort of like th uh, you know, you have this thing in UNIX where you have, uh, diff. [speaker003:] Diff. [speaker006:] W diff or diff. [speaker001:] There's the, uh, diff that actually tries to reconcile different [disfmarker] two diffs f [comment] based on the same original. [speaker006:] Yeah. [speaker005:] Is it S diff? [speaker003:] Yep. [speaker005:] Mmm. [speaker001:] Something like that, um, but operating on these lattices that are really what's behind this [disfmarker] uh, this annotation format. [speaker003:] Yep. 
[speaker001:] So [disfmarker] [speaker006:] You could definitely do that with the [disfmarker] [speaker003:] There's actually a diff library you can use [pause] to do things like that that [disfmarker] so you have different formats. [speaker001:] So somewhere in the API you would like to have like a merge or some [disfmarker] some function that merges two [disfmarker] two versions. [speaker003:] Yeah, I think it's gonna be very hard. Any sort of structured anything when you try to merge is really, really hard [speaker001:] Right. [speaker003:] because you ha i The hard part isn't the file format. The hard part is specifying what you mean by "merge". [speaker001:] Is [disfmarker] Exactly. [speaker006:] But the one thing that would work here actually for i that is more reliable than the utterances is the [disfmarker] the speaker ons and offs. [speaker003:] And that's very difficult. [speaker006:] So if you have a good, um [disfmarker] [speaker003:] But this is exactly what I mean, is that [disfmarker] that the problem i [speaker006:] Yeah. You just have to know wha what to tie it to. [speaker003:] Yeah, exactly. The problem is saying what are the semantics, [speaker006:] And [disfmarker] [speaker003:] what do you mean by "merge"? " [speaker006:] Right, right. [speaker001:] Right. [speaker003:] So. [speaker001:] So [disfmarker] so just to let you know what we [disfmarker] where we kluged it by, uh, doing [disfmarker] uh, by doing [disfmarker] Hhh. Both were based on words, so, bo we have two versions of the same words intersp you know, sprinkled with [disfmarker] with different tags for annotations. [speaker003:] And then you did diff. [speaker001:] And we did diff. [speaker003:] Yeah, [speaker001:] Exactly! [speaker003:] that's just what I thought. [speaker001:] And that's how [disfmarker] [speaker003:] That's just wh how I would have done it. [speaker001:] Yeah. But, you know, it had lots of errors and things would end up in the wrong order, and so forth. 
Uh, so, [speaker003:] Yep. [speaker001:] um, if you had a more [disfmarker] Uh, it [disfmarker] it was a kluge because it was basically reducing everything to [disfmarker] uh, to [disfmarker] uh, uh, to textual alignment. [speaker003:] A textual [disfmarker] [speaker001:] Um, so [disfmarker] [speaker006:] But, d isn't that something where whoever [disfmarker] if [vocalsound] [disfmarker] if the people who are making changes, say in the transcripts, cuz this all happened when the transcripts were different [disfmarker] ye um, if they tie it to something, like if they tied it to the acoustic segment [disfmarker] if they [disfmarker] You know what I mean? Then [disfmarker] Or if they tied it to an acoustic segment and we had the time marks, that would help. But the problem is exactly as Adam said, that you get, you know, y you don't have that information or it's lost in the merge somehow, [speaker003:] Yep. [speaker006:] so [disfmarker] [speaker005:] Well, can I ask one question? It [disfmarker] it seems to me that, um, we will have o an official version of the corpus, which will be only one [disfmarker] one version in terms of the words [disfmarker] where the words are concerned. We'd still have the [disfmarker] the merging issue maybe if coding were done independently of the [disfmarker] [speaker001:] And you're gonna get that [speaker005:] But [disfmarker] but [disfmarker] [speaker001:] because if the data gets out, people will do all kinds of things to it. And, uh, s you know, several years from now you might want to look into, um, the prosody of referring expressions. And someone at the university of who knows where has annotated the referring expressions. [speaker003:] Right. [speaker001:] So you want to get that annotation and bring it back in line with your data. [speaker003:] But unfortunately they've also hand edited it. [speaker001:] OK? [speaker005:] OK, then [disfmarker] [speaker006:] But they've also [disfmarker] Exactly. 
And so that's exactly what we should [disfmarker] somehow when you distribute the data, say that [disfmarker] you know, that [disfmarker] have some way of knowing how to merge it back in and asking people to try to do that. [speaker001:] Yeah. [speaker003:] Yep. [speaker001:] Right. [speaker005:] Well, then the [disfmarker] [speaker004:] What's [disfmarker] what's wrong with [pause] doing times? [speaker005:] I agree. [speaker004:] I [disfmarker] [speaker005:] That was what I was wondering. [speaker006:] Uh, yeah, time is the [disfmarker] [speaker005:] Time is unique. [speaker003:] Well, [speaker005:] You were saying that you didn't think we should [disfmarker] [speaker006:] Time is passing! [speaker001:] Time [disfmarker] time [disfmarker] times are ephemeral. [speaker005:] Andreas was saying [disfmarker] Yeah. [speaker003:] what if they haven't notated with them, times? [speaker006:] Yeah. He [disfmarker] he's a language modeling person, though. [speaker001:] Um [disfmarker] [speaker003:] So [disfmarker] so imagine [disfmarker] I think his [disfmarker] his example is a good one. Imagine that this person who developed the corpus of the referring expressions didn't include time. [speaker001:] Mm hmm. Yeah. [speaker003:] He included references to words. [speaker005:] Ach! Well, then [disfmarker] [speaker003:] He said that at this word is when [disfmarker] when it happened. [speaker001:] Yeah. Or she. [speaker005:] But then couldn't you just indirectly figure out the time [pause] tied to the word? [speaker003:] Or she. [speaker006:] But still they [disfmarker] Exactly. [speaker003:] Sure. [speaker006:] Yeah. [speaker003:] But what if [disfmarker] what if they change the words? [speaker005:] Not [disfmarker] Well, but you'd have some anchoring point. He couldn't have changed all the words. [speaker004:] But can they change the words without changing the time of the word? [speaker003:] Sure. But they could have changed it a little. 
The [disfmarker] the point is, that [disfmarker] that they may have annotated it off a word transcript that isn't the same as our word transcript, so how do you merge it back in? I understand what you're saying. [speaker001:] Mmm. Mm hmm. [speaker003:] And I [disfmarker] I guess the answer is, um, it's gonna be different every time. It's j it's just gonna be [disfmarker] [speaker006:] Yeah. [speaker005:] Yeah. [speaker006:] You only know the boundaries of the [disfmarker] [speaker003:] I it's exactly what I said before, which is that "what do you mean by" merge "? "So in this case where you have the words and you don't have the times, well, what do you mean by" merge "? [speaker006:] Right. [speaker003:] If you tell me what you mean, I can write a program to do it. [speaker006:] Right. You can merge at the level of the representation that the other person preserved and that's it. [speaker003:] Right. And that's about all you can do. [speaker006:] And beyond that, all you know is [disfmarker] is relative ordering and sometimes even that is wrong. [speaker003:] So [disfmarker] so in [disfmarker] so in this one you would have to do a best match between the word sequences, [speaker006:] So. [speaker001:] Mm hmm. [speaker003:] extract the times f from the best match of theirs to yours, and use that. [speaker006:] And then infer that their time marks are somewhere in between. [speaker003:] Right. [speaker006:] Yeah, exactly. [speaker005:] But it could be that they just [disfmarker] uh, I mean, it could be that they chunked [disfmarker] they [disfmarker] they lost certain utterances and all that stuff, [speaker003:] Right, exactly. So it could get very, very ugly. [speaker005:] or [disfmarker] [speaker006:] Definitely. Definitely. [speaker005:] Yeah. [speaker006:] Alright. Well, I guess, w I [disfmarker] I didn't want to keep people too long [speaker005:] That's interesting. [speaker006:] and Adam wanted t people [disfmarker] I'll read the digits. 
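[Editor's sketch] The "best match" merge just described, aligning an outside annotator's edited, time-free word transcript against our timed one and carrying our times over where the words agree, can be sketched with a standard sequence alignment. The word lists and times below are made up for illustration; where the transcripts diverge, the gap is left as `None` rather than guessed, exactly as discussed.

```python
# Sketch: transfer start times from our timed transcript onto a third
# party's slightly edited, untimed word sequence via best-match alignment.
from difflib import SequenceMatcher

ours   = [("so", 0.0), ("we", 0.3), ("did", 0.5), ("diff", 0.8), ("yeah", 1.2)]
theirs = ["so", "we", "ran", "diff", "yeah"]  # one edited word, no times

def merge_times(ours, theirs):
    matcher = SequenceMatcher(a=[w for w, _ in ours], b=theirs)
    times = [None] * len(theirs)  # None = unrecoverable, leave the gap
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "equal":  # transfer times only for exactly matched words
            for i, j in zip(range(i1, i2), range(j1, j2)):
                times[j] = ours[i][1]
    return times

times = merge_times(ours, theirs)
# The edited word ("ran") gets no time; everything else inherits ours.
```

As noted in the discussion, this is only as good as the overlap between the two word sequences: heavy editing, dropped utterances, or reordering will leave large unrecoverable gaps.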
If anyone else offers to, that'd be great. And [speaker001:] Ah, well. [speaker006:] if not, I guess [disfmarker] [speaker003:] Yeah. More digits, the better. [speaker001:] For th for the [disfmarker] [nonvocalsound] for the benefit of science we'll read the digits. [speaker003:] OK, [speaker006:] Thanks [disfmarker] thanks a lot. [speaker003:] this is [speaker006:] It's really helpful. I mean, Adam and Don [nonvocalsound] will sort of meet and I think that's great. Very useful. Go next. [speaker004:] Scratch that. [speaker005:] O three [speaker003:] Oh, right. [speaker004:] Channel one. [speaker007:] Test. [speaker005:] Hello. [speaker004:] Channel three. [speaker007:] Test. [speaker001:] Uh oh. [speaker006:] So you think we're going now, yes? OK, good. Alright Going again Uh [disfmarker] So we're gonna go around as before, and uh do [disfmarker] do our digits. Uh transcript one three one one dash one three three zero. [comment] three two three [comment] four seven six five [comment] five three one six two four one [comment] six seven [comment] seven [comment] eight [comment] nine zero nine four zero zero three [comment] zero one five eight [comment] one seven three five three [comment] two six eight zero [comment] three six two four three zero seven [comment] four [comment] five zero six nine four [comment] seven four [comment] eight five seven [comment] nine six one five [comment] O seven eight O two [comment] zero nine six zero four zero zero [comment] one [comment] two [comment] Uh [disfmarker] Yeah, you don't actually n need to say the name. [speaker003:] OK, [vocalsound] this is Barry Chen and I am reading transcript [speaker006:] That'll probably be bleeped out. [speaker003:] OK. [speaker006:] So. That's if these are anonymized, but [vocalsound] Yeah [disfmarker] [speaker003:] Oh. [comment] OK. 
[speaker006:] uh [disfmarker] I mean [disfmarker] not that there's anything defamatory about uh [disfmarker] eight five seven or [vocalsound] or anything, but Uh, anyway. [speaker003:] OK. [speaker006:] Uh [disfmarker] so here's what I have for [disfmarker] I [disfmarker] I was just jotting down things I think th w that we should do today. Uh [disfmarker] This is what I have for an agenda so far Um, We should talk a little bit about the plans for the uh [disfmarker] the field trip next week. Uh [disfmarker] a number of us are doing a field trip to uh Uh [disfmarker] OGI And uh [disfmarker] mostly uh First though about the logistics for it. Then maybe later on in the meeting we should talk about what we actually you know, might accomplish. Uh [disfmarker] [speaker003:] OK. [speaker006:] Uh, in and [pause] kind of go around [disfmarker] see what people have been doing [disfmarker] talk about that, [pause] a r progress report. Um, Essentially. Um [disfmarker] And then uh [disfmarker] Another topic I had was that uh [disfmarker] uh [disfmarker] Uh [disfmarker] Dave here had uh said uh "Give me something to do." And I [disfmarker] I have [disfmarker] I have uh [disfmarker] failed so far in doing that. And so maybe we can discuss that a little bit. If we find some holes in some things that [disfmarker] that [disfmarker] someone could use some help with, he's [disfmarker] he's volunteering to help. [speaker001:] I've got to move a bunch of furniture. [speaker006:] OK, always count on a [vocalsound] serious comment from that corner. So, um, uh, and uh, then uh, talk a little bit about [disfmarker] about disks and resource [disfmarker] resource issues that [disfmarker] that's starting to get worked out. And then, anything else anybody has that isn't in that list? Uh [disfmarker] [speaker004:] I was just wondering, does this mean the battery's dying and I should change it? [speaker006:] Uh I think that means the battery's O K. 
[disfmarker] d [disfmarker] do you [speaker001:] Let me see. [speaker004:] Oh OK, so th [speaker001:] Yeah, that's good. [speaker006:] Yeah. [speaker001:] You're alright? [speaker004:] Cuz it's full. [speaker006:] Yeah. Yeah. [speaker004:] Alright. [speaker006:] Yeah. It looks full of electrons. OK. Plenty of electrons left there. OK, so, um, uh. OK, so, uh, I wanted to start this with this mundane thing. Um [disfmarker] Uh [disfmarker] I [disfmarker] I [disfmarker] it was [disfmarker] it was kind of my bright idea to have us take a plane that leaves at seven twenty in the morning. Um. [speaker003:] Oh, yeah, that's right. [speaker006:] Uh [vocalsound] this is uh [disfmarker] The reason I did it uh was because otherwise for those of us who have to come back the same day it is really not much of a [disfmarker] of a visit. Uh [disfmarker] So um the issue is how [disfmarker] how [disfmarker] how would we ever accomplish that? Uh [disfmarker] what [disfmarker] what [disfmarker] what part of town do you live in? [speaker003:] Um, I live in, um, the corner of campus. The, um, southeast corner. [speaker006:] OK. OK, so would it be easier [disfmarker] those of you who are not, you know, used to this area, it can be very tricky to get to the airport at [disfmarker] at uh, you know, six thirty. Um. So. Would it be easier for you if you came here and I drove you? Yeah? [speaker007:] Yeah, perhaps, yeah. [speaker006:] Yeah, yeah, OK. [speaker005:] Yeah. [speaker003:] Yeah. Sure. [speaker006:] OK, so if [disfmarker] if everybody can get here at six. [speaker005:] At six. [speaker006:] Yeah, I'm afraid we need to do that to get there on time. [speaker003:] Six, OK. [speaker006:] Yeah, so. Oh boy. Anyway, so. [speaker001:] Will that [pause] be enough time? [speaker006:] Yeah. Yeah, so I'll just pull up in front at six and just be out front. And, uh, and yeah, that'll be plenty of time. 
It'll take [disfmarker] it [disfmarker] it [disfmarker] it won't be bad traffic that time of day and [disfmarker] and uh [speaker001:] I guess once you get past the bridge [pause] that that would be the worst. [speaker006:] Going to Oakland. [speaker002:] Yeah, Oakland. [speaker003:] Oakland. [speaker001:] Yeah. [speaker006:] Bridge [speaker001:] Once you get past the turnoff to the [pause] Bay Bridge. [speaker006:] oh, the turnoff to the bridge [speaker001:] Yeah. [speaker006:] Won't even do that. I mean, just go down Martin Luther King. [speaker002:] Yeah. [speaker001:] Yeah. OK. [speaker006:] And then Martin Luther King to nine eighty to eight eighty, [speaker001:] Mm hmm. Yeah. [speaker006:] and it's [disfmarker] it'd take us, tops uh thirty minutes to get there. [speaker001:] Oh, I [disfmarker] [speaker006:] So that leaves us fifty minutes before the plane [disfmarker] it'll just [disfmarker] yeah. So Great, OK so that'll It's [disfmarker] I mean, it's still not going to be really easy but [disfmarker] well Particularly for [disfmarker] for uh [disfmarker] for Barry and me, we're not [disfmarker] we're not staying overnight so we don't need to bring anything particularly except for [vocalsound] uh [disfmarker] a pad of paper and [disfmarker] [vocalsound] So, and, uh you, two have to bring a little bit [speaker003:] OK. [speaker006:] but uh [disfmarker] you know, don't [disfmarker] don't bring a footlocker and we'll be OK So. [speaker003:] s [speaker006:] W you're staying overnight. [speaker003:] So just [disfmarker] [speaker006:] I figured you wouldn't need a great big suitcase, [speaker007:] Oh yeah. Yeah. [speaker006:] yeah. That's sort of [pause] [vocalsound] one night. So. Anyway. [speaker003:] So, s six AM, in front. [speaker006:] OK. Six AM in front. [speaker003:] OK. [speaker006:] Uh, I'll be here. 
Uh [disfmarker] I'll [disfmarker] I'll [disfmarker] I'll [disfmarker] I'll give you my phone number, If I'm not here for a few m after a few minutes then [speaker003:] Wake you up. [speaker006:] Nah, I'll be fine. I just, uh [disfmarker] it [disfmarker] for me it just means getting up a half an hour earlier than I usually do. Not [disfmarker] not [disfmarker] not a lot, [speaker003:] OK. Wednesday. [speaker006:] so OK, that was the real real important stuff. Um, I [disfmarker] I [disfmarker] I figured maybe wait on the potential goals for the meeting uh [disfmarker] until we talk about wh what's been going on. So, uh, what's been going on? Why don't we start [disfmarker] start over here. [speaker007:] Um. [vocalsound] Well, preparation of the French test data actually. [speaker006:] OK. [speaker007:] So, [vocalsound] it means that um, well, it is, uh, a digit French database of microphone speech, downsampled to eight kilohertz and I've added noise to one part, with the [disfmarker] actually the Aurora two noises. And, [@ @] so this is a training part. And then [pause] the remaining part, I use for testing and [disfmarker] with other kind of noises. So we can [disfmarker] So this is almost ready. I'm preparing the [disfmarker] the HTK baseline for this task. And, yeah. [speaker006:] OK Uh, So the HTK base lines [disfmarker] so this is using mel cepstra and so on, or [disfmarker]? [speaker007:] Yeah. [speaker006:] Yeah. OK. And again, I guess the p the plan is, uh, to uh [disfmarker] then given this [disfmarker] What's the plan again? [speaker007:] The plan with [pause] these data? [speaker006:] With [disfmarker] So [disfmarker] So [disfmarker] Does i Just remind me of what [disfmarker] what you were going to do with the [disfmarker] what [disfmarker] what [disfmarker] what [disfmarker] what's [disfmarker] y You just described what you've been doing. [speaker007:] Yeah. [speaker006:] So if you could remind me of what you're going to be doing. 
Oh, this is [disfmarker] [speaker007:] Uh, yeah. [speaker006:] yeah, yeah. [speaker003:] Tell him about the cube. [speaker007:] Well. The cube? I should tell him about the cube? [speaker003:] Yeah. [speaker006:] Oh! Cube. Yeah. [speaker007:] Yeah. Uh we [disfmarker] actually we want to, mmm, Uh, [vocalsound] uh, analyze three dimensions, [speaker005:] Fill in the cube. [speaker007:] the feature dimension, the [pause] training data dimension, and the test data dimension. Um. Well, what we want to do is first we have number for each [pause] uh task. So we have the um, TI digit task, the Italian task, the French task [pause] and the Finnish task. [speaker006:] Yeah? [speaker007:] So we have numbers with [pause] uh [disfmarker] systems [disfmarker] I mean [disfmarker] I mean neural networks trained on the task data. And then to have systems with neural networks trained on, [vocalsound] uh, data from the same language, if possible, with, well, using a more generic database, which is phonetically [disfmarker] phonetically balanced, and. Um. Yeah. So. [speaker006:] So so we had talked [disfmarker] I guess we had talked at one point about maybe, the language ID corpus? Is that a possibility for that? [speaker007:] Ye uh [disfmarker] [pause] Yeah, but, uh these corpus, w w there is a CallHome and a CallFriend also, The CallFriend is for language ind identification. Well, anyway, these corpus are all telephone speech. So, um. [vocalsound] This could be a [disfmarker] [pause] a problem for [disfmarker] Why? Because uh, uh, the [disfmarker] the SpeechDat databases are not telephone speech. They are downsampled to eight kilohertz but [disfmarker] but they are not [vocalsound] uh with telephone bandwidth. [speaker006:] Yeah. That's really funny isn't it? I mean cuz th this whole thing is for [pause] developing new standards for the telephone. [speaker003:] Telephone. [speaker006:] Yeah. 
[speaker007:] Yeah, but the [disfmarker] the idea is to compute the feature before [pause] the [disfmarker] before sending them to the [disfmarker] Well, [pause] you don't [disfmarker] do not send speech, you send features, computed on th the [disfmarker] [pause] the device, [speaker006:] Mm hmm. Yeah, I know, but the reason [disfmarker] [speaker007:] or [disfmarker] Well. [speaker006:] Oh I see, so your point is that it's [disfmarker] it's [disfmarker] it's uh [disfmarker] the features are computed locally, and so they aren't necessarily telephone bandwidth, uh or telephone distortions. [speaker007:] So you [disfmarker] Yeah. Yeah. [speaker001:] Did you [pause] happen to find out anything about the OGI multilingual database? [speaker006:] Yeah, that's wh that's wh that's what I meant. [speaker007:] Yeah, it's [disfmarker] [speaker006:] I said [disfmarker] [@ @], there's [disfmarker] there's [disfmarker] there's an OGI language ID, not the [disfmarker] not the, uh [disfmarker] the CallFriend is a [disfmarker] is a, uh, LDC w thing, right? [speaker007:] Yea Yeah, there are also two other databases. One they call the multi language database, and another one is a twenty two language, something like that. But it's also telephone speech. [speaker001:] Oh, they are? OK. [speaker007:] Uh. Well, nnn. [speaker006:] But I'm not sure [disfmarker] [speaker007:] So [disfmarker] [speaker006:] I mean, we' e e The bandwidth shouldn't be such an issue right? Because e e this is downsampled and [disfmarker] and filtered, right? So it's just the fact that it's not telephone. And there are so many other differences between these different databases. I mean some of this stuff's recorded in the car, and some of it's [disfmarker] I mean there's [disfmarker] there's many different acoustic differences. [speaker007:] Yeah. [speaker006:] So I'm not sure if [disfmarker]. 
I mean, unless we're going to include a bunch of car recordings in the [disfmarker] in the training database, I'm not sure if it's [disfmarker] completely rules it out if our [disfmarker] if we [disfmarker] if our major goal is to have phonetic context and you figure that there's gonna be a mismatch in acoustic conditions does it make it much worse f to sort of add another mismatch, if you will. [speaker007:] Mmm. [speaker006:] Uh, i i I [disfmarker] I guess the question is how important is it to [disfmarker] for us to get multiple languages uh, in there. [speaker007:] Yeah, but [disfmarker] Mm hmm. [vocalsound] Um. Yeah. Well, actually, for the moment if we w do not want to use these phone databases, we [disfmarker] we already have uh [disfmarker] English, Spanish and French uh, with microphone speech. [speaker006:] Mm hmm. Yeah. [speaker007:] So. [speaker006:] So that's what you're thinking of using is sort of the multi the equivalent of the multiple? [speaker007:] Well. Yeah, for the multilingual part we were thinking of using these three databases. [speaker006:] And for the difference in phonetic context [pause] that you [disfmarker]? Provide that. [speaker007:] Well, this [disfmarker] Uh, actually, these three databases are um generic databases. [speaker005:] Yeah. [speaker007:] So w f for [disfmarker] for uh Italian, which is close to Spanish, French and, i i uh, TI digits we have both uh, digits [pause] training data and also [pause] more general training data. So. Mmm. [speaker006:] Well, we also have this Broadcast News that we were talking about taking off the disk, which is [disfmarker] [vocalsound] is microphone data for [disfmarker] for English. [speaker007:] Yeah. Yeah, perhaps [disfmarker] yeah, there is also TIMIT. We could use TIMIT. [speaker006:] Yeah. Right. Yeah, so there's plenty of stuff around. OK, so anyway, th the basic plan is to, uh, test this cube. [speaker007:] Yeah. [speaker006:] Yes. [speaker005:] To fill in the cube. 
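[Editor's sketch] The experiment "cube" being filled in has three axes: feature set, training data, and test task. A minimal bookkeeping sketch is below; the axis labels are placeholders loosely drawn from the discussion (TI digits, Italian, French, Finnish), not the actual experiment matrix, and the one filled-in score is a dummy value.

```python
# Sketch: the 3-D experiment grid; each cell holds one run's result.
from itertools import product

features = ["baseline", "discriminative"]            # feature dimension
train_on = ["task-data", "generic-same-language", "multilingual"]
test_on  = ["TIdigits", "Italian", "French", "Finnish"]

# Initialize every cell of the cube as not-yet-run.
cube = {cell: None for cell in product(features, train_on, test_on)}

# Each training/evaluation run fills in one cell, e.g. a word error rate:
cube[("baseline", "task-data", "TIdigits")] = 1.2  # dummy placeholder
```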
[speaker006:] To fill i fill it in, yeah. OK. [speaker007:] Yeah, and perhaps, um [disfmarker] [pause] We were thinking that perhaps the cross language issue is not, uh, so big of a issue. Well, w w we [disfmarker] perhaps we should not focus too much on that cross language stuff. I mean, uh, training [disfmarker] training a net on a language and testing a for another language. [speaker006:] Uh huh. But that's [disfmarker] [speaker007:] Mmm. Perhaps the most important is to have neural networks trained on the target languages. But, uh, with a general database [disfmarker] general databases. u So that th Well, the [disfmarker] the guy who has to develop an application with one language can use the net trained o on that language, or a generic net, [speaker006:] Uh, depen it depen it depends how you mean "using the net". [speaker007:] but not trained on a [disfmarker] [speaker006:] So, if you're talking about for producing these discriminative features [pause] that we're talking about [pause] you can't do that. [speaker007:] Mmm. [speaker006:] Because [disfmarker] because the [disfmarker] what they're asking for is [disfmarker] is a feature set. Right? And so, uh, we're the ones who have been weird by [disfmarker] by doing this training. [speaker007:] Yeah. [speaker006:] But if we say, "No, you have to have a different feature set for each language," I think this is ver gonna be very bad. [speaker003:] Oh. [speaker006:] So [disfmarker] [speaker007:] You think so. [speaker005:] Oh. [speaker003:] That's [disfmarker] [speaker007:] Mmm. [speaker006:] Oh yeah. Yeah. I mean, in principle, I mean conceptually, it's sort of like they want a re [@ @] [comment] well, they want a replacement for mel cepstra. [speaker007:] Mmm. [speaker006:] So, we say "OK, this is the year two thousand, we've got something much better than mel cepstra. It's, you know, gobbledy gook." OK? 
And so [vocalsound] we give them these gobbledy gook features but these gobbledy gook features are supposed to be good for any language. [speaker007:] Hmm. [speaker006:] Cuz you don't know who's gonna call, and you know, I mean so it's [disfmarker] it's [disfmarker] it's, uh, uh [disfmarker] how do you know what language it is? Somebody picks up the phone. So thi this is their image. [speaker007:] Well, I [comment] chh [disfmarker] [speaker006:] Someone picks up the phone, right? And [disfmarker] and he [disfmarker] he picks up the ph [speaker007:] Yeah, but the [disfmarker] the application is [disfmarker] there is a target language for the application. So, if a [disfmarker] [speaker006:] Yeah. y y y Well. [speaker007:] Well. [speaker006:] But, no but, y you [disfmarker] you pick up the phone, [speaker007:] Yeah? [speaker006:] you talk on the phone, and it sends features out. OK, so the phone doesn't know what a [disfmarker] what [disfmarker] what your language is. [speaker007:] Yeah, if [disfmarker] Yeah. If it's th in the phone, but [disfmarker] well, it [disfmarker] that [disfmarker] that could be th at the server's side, [speaker006:] But that's the image that they have. [speaker007:] and, well. Mmm, yeah. [speaker006:] It could be, but that's the image they have, right? So that's [disfmarker] that's [disfmarker] I mean, one could argue all over the place about how things really will be in ten years. But the particular image that the cellular industry has right now is that it's distributed speech recognition, where the, uh, uh, probabilistic part, and [disfmarker] and s semantics and so forth are all on the servers, and you compute features of the [disfmarker] uh, on the phone. So that's [disfmarker] that's what we're involved in. We might [disfmarker] might or might not agree that that's the way it will be in ten years, but that's [disfmarker] that's [disfmarker] that's what they're asking for. 
So [disfmarker] so I think that [disfmarker] th th it is an important issue whether it works cross language. Now, it's the OGI, uh, folks' perspective right now that probably that's not the biggest deal. And that the biggest deal is the, um envir acoustic environment mismatch. And they may very well be right, but I [disfmarker] I was hoping we could just do a test and determine if that was true. If that's true, we don't need to worry so much. Maybe [disfmarker] maybe we have a couple languages in the training set and that gives us enough breadth uh, uh, that [disfmarker] that [disfmarker] that the rest doesn't matter. Um, the other thing is, uh, this notion of training to uh [disfmarker] which I [disfmarker] I guess they're starting to look at up there, [comment] training to something more like articulatory features. Uh, and if you have something that's just good for distinguishing different articulatory features that should just be good across, you know, a wide range of languages. [speaker007:] Yeah. [speaker006:] Uh, but [disfmarker] Yeah, so I don't th I know [disfmarker] unfortunately I don't [disfmarker] I see what you're comi where you're coming from, I think, but I don't think we can ignore it. [speaker007:] So we [disfmarker] we really have to do test with a real cross language. I mean, tr for instance training on English and testing on Italian, or [disfmarker] Or we can train [disfmarker] or else, uh, can we train a net on, uh, a range of languages and [disfmarker] which can include the test [disfmarker] the test [@ @] the target language, [speaker003:] Test on an unseen. [speaker007:] or [disfmarker] [speaker006:] Yeah, so, um, there's [disfmarker] there's, uh [disfmarker] This is complex. So, ultimately, uh, as I was saying, I think it doesn't fit within their image that you switch nets based on language. [speaker007:] Yeah. [speaker006:] Now, can you include, uh, the [disfmarker] the target language? 
Um, from a purist's standpoint it'd be nice not to because then you can say when [disfmarker] because surely someone is going to say at some point, OK, so you put in the German and the Finnish. [speaker007:] Mmm. [speaker006:] Uh, now, what do you do, uh, when somebody has Portuguese? "you know? Um, and [disfmarker] Uh, however, you aren't [disfmarker] it isn't actually a constraint in this evaluation. So I would say if it looks like there's a big difference to put it in, then we'd make note of it, and then we probably put in the other, because we have so many other problems in trying to get things to work well here that [disfmarker] that, you know, it's not so bad as long as we [disfmarker] we note it and say," Look, we did do this ". [speaker007:] Mmm? [speaker001:] And so, ideally, what you'd wanna do is you'd wanna run it with and without the target language and the training set for a wide range of languages. [speaker006:] Uh. Yeah. [speaker007:] Yeah, perhaps. Yeah. Yeah. [speaker006:] Yeah. [speaker001:] And that way you can say, "Well," you know, "we're gonna build it for what we think are [pause] the most common ones", but if that [disfmarker] somebody uses it with a different language, you know, "here's what's you're l here's what's likely to happen." [speaker006:] Yeah, cuz the truth is, is that it's [disfmarker] it's not like there are [disfmarker] I mean, al although there are thousands of languages, uh, from uh, uh, the point of view of cellular companies, there aren't. There's [disfmarker] [vocalsound] you know, there's fifty or something, [speaker001:] Right. [speaker006:] you know? So, uh, an and they aren't [disfmarker] you know, with the exception of Finnish, which I guess it's pretty different from most [disfmarker] most things. uh, it's [disfmarker] it's, uh [disfmarker] most of them are like at least some of the others. And so, our guess that Spanish is like Italian, and [disfmarker] and so on. 
I guess Finnish is a [disfmarker] is [disfmarker] is a little bit like Hungarian, supposedly, right? Or is [disfmarker] I think [disfmarker] [speaker001:] I don't know anything about Finnish. [speaker006:] well, I kn oh, well I know that H uh, H I mean, I'm not a linguist, but I guess Hungarian and Finnish and one of the [disfmarker] one of the languages from the former Soviet Union are in this sort of same family. But they're just these, you know, uh [disfmarker] countries that are pretty far apart from one another, have [disfmarker] I guess, people rode in on horses and brought their [disfmarker] [speaker001:] Hmm. Hmm. [speaker006:] OK. [speaker007:] The [disfmarker] Yeah. [speaker006:] Your turn. [speaker003:] Oh, my turn. Oh, OK. Um, Let's see, I [disfmarker] I spent the last week, uh, looking over Stephane's shoulder. And [disfmarker] [vocalsound] and understanding some of the data. I re installed, um, um, HTK, the free version, so, um, everybody's now using three point O, which is the same version that, uh, OGI is using. [speaker006:] Oh, good. [speaker003:] Yeah. So, without [disfmarker] without any licensing big deals, or anything like that. And, um, so we've been talking about this [disfmarker] this, uh, cube thing, and it's beginning more and more looking like the, uh, the Borg cube thing. It's really gargantuan. Um, but I I'm [disfmarker] Am I [disfmarker] [speaker006:] So are [disfmarker] are you going to be assimilated? [speaker001:] Resistance is futile. [speaker003:] Exactly. Um, yeah, so I I've been looking at, uh, uh, TIMIT stuff. Um, the [disfmarker] the stuff that we've been working on with TIMIT, trying to get a, um [disfmarker] a labels file so we can, uh, train up a [disfmarker] train up a net on TIMIT and test, um, the difference between this net trained on TIMIT and a net trained on digits alone. [speaker006:] Mm hmm. [speaker003:] Um, and seeing if [disfmarker] if it hurts or helps.
[speaker006:] And again, when y just to clarify, when you're talking about training up a net, you're talking about training up a net for a tandem approach? [speaker003:] Anyway. Yeah, yeah. Um. [speaker006:] And [disfmarker] and the inputs are PLP and delta and that sort of thing, [speaker003:] Mm hmm. Well, the inputs are one dimension of the cube, [speaker006:] or [disfmarker]? [speaker003:] which, um, we've talked about it being, uh, PLP, um, M F C Cs, um, J JRASTA, JRASTA LDA [disfmarker] [speaker007:] Hmm. [speaker006:] Yeah, but your initial things you're making one choice there, [speaker003:] Yeah, [speaker006:] right? Which is PLP, or something? [speaker003:] right. [speaker006:] Yeah. [speaker003:] Um, I [disfmarker] I haven't [disfmarker] I haven't decided on [disfmarker] on the initial thing. Probably [disfmarker] probably something like PLP. Yeah. [speaker007:] Hmm. [speaker006:] Yeah. Um, so [disfmarker] so you take PLP and you [disfmarker] you, uh, do it [disfmarker] uh, you [disfmarker] you, uh, use HTK with it with the transformed features using a neural net that's trained. And the training could either be from Digits itself or from TIMIT. And that's the [disfmarker] [speaker003:] Right. [speaker006:] and, and th and then the testing would be these other things which [disfmarker] which [disfmarker] which might be foreign language. [speaker003:] Right. [speaker006:] I see. [speaker003:] Right. [speaker006:] I [disfmarker] I [disfmarker] I get in the picture about the cube. [speaker003:] Yeah. [speaker006:] OK. [speaker003:] Maybe [disfmarker] OK. [speaker006:] OK. [speaker003:] Uh huh. [speaker006:] Um, I mean, those listening to this will not have a picture either, so, um, I guess I'm [disfmarker] I'm not any worse off. But but at some point [disfmarker] somebody should just show me the cube. It sounds s I [disfmarker] I get [disfmarker] I think I get the general idea of it, [speaker003:] Yeah, yeah, [speaker006:] yeah. 
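The tandem approach being clarified in this exchange (a phone-classifier net whose outputs become features for an HTK system) can be sketched roughly as below. The layer sizes, the use of PCA as the decorrelating (KL-style) transform, and all function names here are illustrative assumptions, not the actual ICSI/OGI pipeline.

```python
import numpy as np

def tandem_features(frames, net, n_keep=24):
    """Rough sketch of a tandem front end: run acoustic frames
    (e.g. PLP plus deltas) through a phone-classifier net, take log
    posteriors, and decorrelate them with a PCA (KL-style) transform
    so they better suit HTK's diagonal-covariance Gaussians."""
    post = net(frames)                  # (T, n_phones) posteriors
    logp = np.log(post + 1e-8)          # logs roughly gaussianize outputs
    logp = logp - logp.mean(axis=0)     # center before PCA
    cov = np.cov(logp, rowvar=False)
    _, vecs = np.linalg.eigh(cov)       # eigh: ascending eigenvalues
    basis = vecs[:, ::-1][:, :n_keep]   # keep the top components
    return logp @ basis                 # (T, n_keep) features for HTK

# Toy stand-in "net": softmax over a random projection of the frames.
rng = np.random.default_rng(0)
W = rng.normal(size=(39, 56))           # hypothetical: 39-dim in, 56 phones

def toy_net(x):
    z = x @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

feats = tandem_features(rng.normal(size=(100, 39)), toy_net)
```

The real net would be trained on digits or on TIMIT, as discussed; only the shape of the data flow is meant to carry over from this sketch.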
[speaker003:] b May [speaker001:] So, when you said that you were getting the labels for TIMIT, [comment] um, are y what do you mean by that? [speaker003:] Mm hmm. Oh, I'm just [disfmarker] I'm just, uh, transforming them from the, um, the standard TIMIT transcriptions into [disfmarker] into a nice long huge P file to do training. [speaker001:] Mmm. Were the digits, um, hand labeled for phones? [speaker003:] Um, the [disfmarker] the digits [disfmarker] [speaker001:] Or were they [disfmarker] those labels automatically derived? [speaker003:] Oh yeah, those were [disfmarker] those were automatically derived by [disfmarker] by Dan using, um, embedded [disfmarker] embedded training and alignment. [speaker001:] Mmm. [speaker006:] Ah, but which Dan? [speaker003:] Uh, Ellis. [speaker006:] OK. [speaker003:] Right? [speaker006:] OK. [speaker003:] Yeah. So. [speaker001:] I was just wondering because that test you're t [speaker003:] Uh huh. [speaker001:] I [disfmarker] I think you're doing this test because you want to determine whether or not, uh, having s general speech performs as well as having specific [pause] speech. [speaker003:] That's right. [speaker006:] Well, especially when you go over the different languages again, because you'd [disfmarker] the different languages have different words for the different digits, [speaker001:] Mm hmm. And I was [disfmarker] [speaker006:] so it's [disfmarker] [speaker001:] yeah, so I was just wondering if the fact that TIMIT [disfmarker] you're using the hand labeled stuff from TIMIT might be [disfmarker] confuse the results that you get. [speaker006:] I [disfmarker] I think it would, but [disfmarker] but on the other hand it might be better. [speaker001:] Right, but if it's better, it may be better because [pause] it was hand labeled. [speaker006:] Oh yeah, but still [@ @] probably use it. [speaker001:] Yeah. OK. 
[speaker006:] I mean, you know, I [disfmarker] I [disfmarker] I guess I'm sounding cavalier, but I mean, I think the point is you have, uh, a bunch of labels and [disfmarker] and they're han hand uh [disfmarker] hand marked. Uh, I guess, actually, TIMIT was not entirely hand marked. It was automatically first, and then hand [disfmarker] hand corrected. But [disfmarker] but, um, uh, it [disfmarker] it, um, it might be a better source. [speaker001:] Oh, OK. [speaker006:] So, i it's [disfmarker] you're right. It would be another interesting scientific question to ask, "Is it because it's a broad source or because it was, you know, carefully?" uh. And that's something you could ask, [speaker001:] Mm hmm. [speaker006:] but given limited time, I think the main thing is if it's a better thing for going across languages on this training tandem system, [speaker001:] Yeah. [speaker006:] then it's probably [disfmarker] [speaker001:] Right. What about the differences in the phone sets? [speaker003:] Uh, between languages? [speaker001:] No, between TIMIT and the [disfmarker] the digits. [speaker003:] Oh, um, right. Well, there's a mapping from the sixty one phonemes in TIMIT to [disfmarker] to fifty six, the ICSI fifty six. [speaker005:] Sixty one. [speaker001:] Oh, OK. I see. [speaker003:] And then the digits phonemes, um, there's about twenty twenty two or twenty four of them? Is that right? [speaker001:] Out of that fifty six? [speaker007:] Yep. [speaker003:] Out of that fifty six. [speaker001:] Oh, OK. [speaker003:] Yeah. So, it's [disfmarker] it's definitely broader, yeah. [speaker007:] But, actually, the issue of phoneti phon uh phone phoneme mappings will arise when we will do severa use several languages [speaker005:] Yeah. 
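The 61-to-56 phone folding mentioned just above amounts to a lookup table applied to every label. The particular merges shown below (silence variants and the glottal stop collapsed to one silence symbol, echoing the "basically a short period of silence" remark later in the meeting) are illustrative guesses, not the actual ICSI-56 table.

```python
# Hypothetical TIMIT phone folding: a label maps to itself unless an
# explicit merge is listed. These merges are examples in the spirit of
# the discussion, not the real 61 -> 56 mapping.
FOLD = {
    "h#": "sil",    # utterance-initial/final silence
    "pau": "sil",   # pause
    "epi": "sil",   # epenthetic silence
    "q": "sil",     # glottal stop, realized as a short silence
}

def fold_phone(label):
    return FOLD.get(label, label)

seq = ["h#", "dh", "ax", "q", "ih", "pau", "s", "h#"]
folded = [fold_phone(p) for p in seq]
```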
[speaker007:] because you [disfmarker] Well, some phonemes are not, uh, in every languages, and [disfmarker] So we plan to develop a subset of the phonemes, uh, that includes, uh, all the phonemes of our training languages, and use a network with kind of one hundred outputs or something like that. [speaker001:] Mm hmm. [speaker006:] Mm hmm. You mean a superset, sort of. [speaker007:] Uh, yeah, superset, [speaker006:] Yeah. [vocalsound] Yeah. [speaker007:] yeah. [speaker005:] Yeah. I th I looks the SAMPA SAMPA phone. [speaker007:] Yeah. [speaker005:] SAMPA phone? For English [disfmarker] uh American English, and the [disfmarker] the [disfmarker] the language who have more phone are the English. [speaker001:] Mmm. [speaker005:] Of the [disfmarker] these language. But n for example, in Spain, the Spanish have several phone that d doesn't appear in the E English and we thought to complete. But for that, it needs [disfmarker] we must r h do a lot of work [vocalsound] because we need to generate new tran transcription for the database that we have. [speaker006:] Mm hmm. Mm hmm. [speaker002:] Other than the language, is there a reason not to use the TIMIT phone set? Cuz it's larger? As opposed to the ICSI [pause] phone set? [speaker003:] Oh, you mean why map the sixty one to the fifty six? [speaker002:] Yeah. [speaker003:] I don't know. I have [disfmarker] [speaker006:] Um, I forget if that happened starting with you, or was it [disfmarker] o or if it was Eric, afterwards who did that. But I think, basically, there were several of the phones that were just hardly ever there. [speaker001:] Yeah, and I think some of them, they were making distinctions between silence at the end and silence at the beginning, [speaker002:] Oh. [speaker001:] when really they're [pause] both silence. I th I think it was things like that that got it mapped down to fifty six. [speaker002:] OK. [speaker006:] Yeah, especially in a system like ours, which is a discriminative system. 
You know, you're really asking this net to learn. [speaker002:] Yeah. [speaker001:] Yeah. [speaker006:] It's [disfmarker] it's kind of hard. [speaker001:] There's not much difference, really. And [pause] the ones that are gone, I think are [disfmarker] I think there was [disfmarker] they also in TIMIT had like a glottal stop, which was basically a short period of silence, [speaker002:] Mm hmm. [speaker001:] and so. [speaker002:] Well, we have that now, too, right? [speaker001:] I don't know. [speaker002:] Yeah. [speaker006:] i It's actually pretty common that a lot of the recognition systems people use have things like [disfmarker] like, say thirty nine, phone symbols, [speaker001:] So. [speaker006:] right? Uh, and then they get the variety by [disfmarker] by bringing in the context, the phonetic context. Uh. So we actually have an unusually large number in [disfmarker] in what we tend to use here. Um. So, a a actually [disfmarker] maybe [disfmarker] now you've got me sort of intrigued. What [disfmarker] there's [disfmarker] Can you describe what [disfmarker] what's on the cube? [speaker003:] Yeah, [speaker006:] I mean [disfmarker] [speaker003:] w I th I think that's a good idea [speaker006:] Yeah, yeah. [speaker003:] to [disfmarker] to talk about the whole cube [speaker005:] Yeah. Yeah. [speaker003:] and maybe we could sections in the cube for people to work on. [speaker006:] Yeah. Yeah. [speaker003:] Um, OK. [speaker006:] OK, [speaker003:] Uh, do you wanna do it? [speaker006:] so even [disfmarker] even though the meeting recorder doesn't [disfmarker] doesn't, uh [disfmarker] and since you're not running a video camera we won't get this, but if you use a board it'll help us anyway. [speaker003:] OK. [speaker006:] Uh, point out one of the limitations of this [vocalsound] medium, [speaker003:] OK. [speaker006:] but you've got the wireless on, right? [speaker003:] Yeah, I have the wireless. [speaker006:] Yeah, so you can walk around. [speaker003:] OK. 
Can y can you walk around too? No. OK, well, [speaker006:] Uh, he can't, actually, [speaker003:] um, [speaker006:] but [disfmarker] He's tethered. [speaker003:] s basically, the [disfmarker] the cube will have three dimensions. The first dimension is the [disfmarker] the features that we're going to use. And the second dimension, um, is the training corpus. And that's the training on the discriminant neural net. Um and the last dimension happens to be [disfmarker] [speaker006:] Yeah and again [disfmarker] Yeah. So the [disfmarker] the training for HTK is always [disfmarker] that's always set up for the individual test, right? That there's some training data and some test data. So that's different than this. [speaker003:] Right, right. This is [disfmarker] this is for [disfmarker] for ANN only. And, yeah, the training for the HTK models is always, uh, fixed for whatever language you're testing on. [speaker006:] Right. [speaker003:] And then, there's the testing corpus. So, then I think it's probably instructive to go and [disfmarker] and [disfmarker] and show you the features that we were talking about. Um, so, let's see. Help me out with [disfmarker] [speaker007:] PLP. [speaker003:] With what? [speaker007:] PLP. [speaker003:] PLP? OK. [speaker007:] MSG. [speaker003:] MSG. [speaker007:] Uh, JRASTA. [speaker003:] JRASTA. [speaker007:] And JRASTA LDA. [speaker003:] JRASTA LDA. [speaker007:] Um, multi band. [speaker003:] Multi band. [speaker007:] So there would be multi band before, um [disfmarker] before our network, I mean. [speaker003:] Yeah, just the multi band features, right? [speaker007:] And [disfmarker] Yeah. [speaker003:] Yeah. [speaker007:] So, something like, uh, s DCT within bands [speaker006:] Uh huh. Ah. Ah. [speaker007:] and [disfmarker] Well. And then multi band after networks. Meaning that we would have, uh, neural networks, uh, discriminant neural networks for each band. Uh, yeah.
And using the [disfmarker] the outputs of these networks or the linear outputs or something like that. Uh, yeah. [speaker001:] What about mel cepstrum? [speaker003:] Oh, um [disfmarker] [speaker001:] Or is that [disfmarker] you don't include that because it's part of the base or something? [speaker006:] Well, y you do have a baseline system that's m that's mel cepstra, [speaker005:] Yeah databases. Yeah. [speaker006:] right? So. [speaker005:] Mm hmm. [speaker007:] But, uh, well, not for the [disfmarker] the ANN. I mean [disfmarker] [speaker006:] OK. [speaker007:] So, yeah, we could [disfmarker] we could add [pause] MFCC also. [speaker003:] We could add [disfmarker] [speaker006:] Probably should. I mean at least [disfmarker] at least conceptually, you know, it doesn't meant you actually have to do it, [speaker007:] Yeah. [speaker005:] Yeah. [speaker006:] but conceptually it makes sense as a [disfmarker] as a base line. [speaker001:] It'd be an interesting test just to have [disfmarker] just to do MFCC with the neural net [speaker005:] Without the [disfmarker] Yeah. [speaker001:] and everything else the same. Compare that with just M MFCC without the [disfmarker] the net. [speaker007:] Yeah. [speaker005:] Mm hmm. [speaker003:] I think [disfmarker] I think Dan did some of that. [speaker001:] Oh. [speaker003:] Um, in his previous Aurora experiments. And with the net it's [disfmarker] it's wonderful. Without the net it's just baseline. [speaker006:] Um, I think OGI folks have been doing that, too. D Because I think that for a bunch of their experiments they used, uh, mel cepstra, actually. [speaker003:] Yeah. Yeah. [speaker006:] Um, of course that's there and this is here and so on. OK? [speaker003:] OK. Um, for the training corpus [disfmarker] corpus, um, we have, um, the [disfmarker] the d [pause] digits [nonvocalsound] from the various languages. Um, English Spanish um, French What else do we have? [speaker007:] And the [pause] Finnish. [speaker003:] Finnish. 
[speaker005:] And Italian. [speaker001:] Where did th where did that come from? [speaker005:] Uh, no, Italian no. [speaker001:] Digits? [speaker005:] Italian no. [speaker001:] Oh. [speaker003:] Oh. Italian. [speaker005:] I Italian yes. Italian? [speaker006:] Italian. [speaker001:] Is that [disfmarker] Was that distributed with Aurora, or [disfmarker]? [speaker003:] One L or two L's? [speaker006:] The newer one. [speaker001:] Where did that [disfmarker]? [speaker007:] So English, uh, Finnish and Italian are Aurora. [speaker006:] Yeah. [speaker007:] And Spanish and French is something that we can use in addition to Aurora. Uh, well. [speaker006:] Yeah, so Carmen brought the Spanish, and Stephane brought the French. [speaker003:] OK. And, um, oh yeah, [speaker006:] Is it French French or Belgian French? [speaker003:] and [disfmarker] [speaker006:] There's a [disfmarker] [speaker007:] It's, uh, French French. [speaker003:] French French. [speaker005:] Like Mexican Spain and Spain. [speaker006:] Yeah. [speaker002:] Or Swiss. [speaker005:] I think that is more important, [speaker002:] Swiss German. [speaker005:] Mexican Spain. [speaker006:] Yeah. [speaker005:] Because more people [disfmarker] [speaker006:] Yeah, probably so. [speaker005:] Yeah. [speaker006:] Yeah. Yeah, Herve always insists that Belgian is [disfmarker] i is absolutely pure French, has nothing to do with [disfmarker] but he says those [disfmarker] those [disfmarker] those Parisians talk funny. [speaker007:] Yeah, yeah, yeah. They have an accent. [speaker006:] Yeah they [disfmarker] they do, yeah. Yeah. [pause] But then he likes Belgian fries too, so. OK. [speaker003:] And then we have, uh, um, broader [disfmarker] broader corpus, um, like TIMIT. TIMIT so far, [speaker005:] And Spanish too. [speaker003:] right? Spanish [disfmarker] Oh, Spanish stories? [speaker005:] Albayzin is the name. [speaker001:] What about TI digits? 
[speaker003:] Um, TI digits [disfmarker] uh all these Aurora f d data p data is from [disfmarker] is derived from TI digits. [speaker001:] Uh huh. Oh. Oh OK. [speaker003:] Um, basically, they [disfmarker] they corrupted it with, uh, different kinds of noises at different SNR levels. [speaker001:] Ah. I see. [speaker003:] Yeah. [speaker006:] y And I think Stephane was saying there's [disfmarker] there's some broader s material in the French also? [speaker007:] Yeah, we cou we could use [disfmarker] Yeah. [speaker003:] OK. [speaker007:] The French data. [speaker005:] Spanish stories? No. [speaker003:] No. Sp Not Spanish stories? [speaker005:] No. [speaker006:] Spanish [disfmarker] [speaker005:] No. Albayz [speaker003:] Spanish something. [speaker005:] Yeah. [speaker003:] OK. [speaker002:] Did the Aurora people actually corrupt it themselves, or just specify the signal and the signal t [speaker003:] They [disfmarker] they corrupted it, um, themselves, [speaker002:] OK. [speaker003:] but they also included the [disfmarker] the noise files for us, right? Or [disfmarker] [speaker007:] Yeah. [speaker003:] so we can go ahead and corrupt other things. [speaker006:] I'm just curious, Carmen [disfmarker] I mean, I couldn't tell if you were joking or [disfmarker] i Is it [disfmarker] is it Mexican Spanish, or is it [disfmarker] [speaker005:] No no no no. [speaker006:] Oh, no, no. [speaker005:] No no no no. [speaker006:] It's [disfmarker] it's Spanish from Spain, Spanish. [speaker005:] Spanish from Spain. [speaker006:] Yeah, OK. [speaker003:] From Spain. [speaker006:] Alright. Spanish from Spain. Yeah, we're really covered there now. OK. And the French from France. [speaker003:] OK. [speaker007:] Yeah, the [disfmarker] No, the French is f yeah, from, uh, Paris, [speaker003:] Oh, from Paris, OK. [speaker006:] Yeah. [speaker007:] OK. [speaker003:] And TIMIT's from [pause] lots of different places. [speaker006:] From TI. From [disfmarker] i It's from Texas. 
So may maybe it's [disfmarker] [speaker002:] From the deep South. [speaker006:] So s so it's not really from the US either. Is that [disfmarker]? OK. [speaker003:] Yeah. Yeah. OK. And, um, with within the training corporas um, we're, uh, thinking about, um, training with noise. So, incorporating the same kinds of noises that, um, Aurora is in incorporating in their, um [disfmarker] in their training corpus. Um, I don't think we we're given the, uh [disfmarker] the unseen noise conditions, though, right? [speaker006:] I think what they were saying was that, um, for this next test there's gonna be some of the cases where they have the same type of noise as you were given before hand and some cases where you're not. [speaker003:] Like [disfmarker] Mm hmm. [speaker007:] Mm hmm. [speaker003:] OK. [speaker006:] So, presumably, that'll be part of the topic of analysis of the [disfmarker] the test results, is how well you do when it's matching noise and how well you do where it's not. [speaker003:] Right. [speaker006:] I think that's right. [speaker003:] So, I guess we can't train on [disfmarker] on the [disfmarker] the unseen noise conditions. [speaker006:] Well, not if it's not seen, [speaker003:] Right. If [disfmarker] Not if it's unseen. [speaker006:] yeah. [speaker003:] Yeah. [speaker006:] OK. I mean, i i i i it does seem to me that a lot of times when you train with something that's at least a little bit noisy it can [disfmarker] it can help you out in other kinds of noise even if it's not matching just because there's some more variance that you've built into things. But, but, uh, [speaker007:] Mm hmm. [speaker006:] uh, exactly how well it will work will depend on how near it is to what you had ahead of time. So. OK, so that's your training corpus, [speaker007:] Mm hmm. [speaker006:] and then your testing corpus [disfmarker]? [speaker003:] Um, the testing corporas are, um, just, um, the same ones as Aurora testing. 
And, that includes, um, the English Spa um, Italian. Finnish. [speaker005:] Finnish. [speaker003:] Uh, we' we're gonna get German, right? Ge [comment] At the final test will have German. [speaker006:] Well, so, yeah, the final test, on a guess, is supposed to be German and Danish, [speaker007:] Uh, yeah. [speaker006:] right? [speaker003:] Right. [speaker007:] The s yeah, the Spanish, perhaps, [speaker003:] Spanish. [speaker007:] we will have. [speaker003:] Oh yeah, we can [disfmarker] we can test on s Spanish. [speaker007:] Yeah. But the [disfmarker] the Aurora Spanish, I mean. [speaker003:] Oh yeah. Mm hmm. [speaker006:] Oh, there's a [disfmarker] there's Spanish testing in the Aurora? [speaker007:] Uh, not yet, but, uh, yeah, uh, e [speaker005:] Yeah, it's preparing. [speaker007:] pre they are preparing it, and, well, [speaker005:] They are preparing. [speaker007:] according to Hynek it will be [disfmarker] we will have this at the end of November, or [disfmarker] Um. [speaker006:] OK, so, uh, [speaker007:] Yeah [disfmarker] [speaker006:] something like seven things in each, uh [disfmarker] each column. So that's, uh, three hundred and forty three, uh, [vocalsound] different systems that are going to be developed. There's three of you. Uh, so that's [speaker003:] Yeah. One hundred each, about. [speaker006:] hundred and [disfmarker] [vocalsound] hundred and fourteen each. [speaker004:] What a what about noise conditions? [speaker006:] What? [speaker004:] w Don't we need to put in the column for noise conditions? [speaker006:] Are you just trying to be difficult? [speaker004:] No, I just don't understand. [speaker003:] Well, th uh, [speaker006:] I'm just kidding. [speaker003:] when [disfmarker] when I put these testings on there, I'm assumi [speaker006:] Yeah. [speaker003:] There there's three [disfmarker] three tests. Um, type A, type B, and type C. And they're all [disfmarker] they're all gonna be test tested, um, with one training of the HTK system. 
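The size of the "cube" quoted here comes straight from counting: seven entries per dimension gives 7 × 7 × 7 = 343 runs, or about 114 per person across three people. The entry names below paraphrase what was listed on the board and are approximate; only the count matters.

```python
from itertools import product

# Approximate renderings of the three cube dimensions from the meeting.
features = ["PLP", "MFCC", "MSG", "JRASTA", "JRASTA-LDA",
            "multiband-pre-net", "multiband-post-net"]
training = ["English", "Spanish", "French", "Finnish", "Italian",
            "TIMIT", "Albayzin"]
testing = ["English-A", "English-B", "English-C", "Italian",
           "Finnish", "Spanish", "German"]

runs = list(product(features, training, testing))
per_person = len(runs) // 3   # "hundred and fourteen each"
```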
Um, there's a script that tests all three different types of noise conditions. Test A is like a matched noise. Test B is a [disfmarker] is a slightly mismatched. And test C is a, um, mismatched channel. [speaker004:] And do we do all our [pause] training on clean data? [speaker003:] Um, [speaker005:] Also, we can clean that. [speaker003:] no, no, [speaker007:] No. [speaker003:] we're [disfmarker] we're gonna be, um, training on the noise files that we do have. [speaker006:] So, um [disfmarker] Yeah, so I guess the question is how long does it take to do a [disfmarker] a training? I mean, it's not totally crazy t I mean, these are [disfmarker] a lot of these are built in things and we know [disfmarker] we have programs that compute PLP, we have MSG, we have JRA you know, a lot of these things will just kind of happen, won't take uh a huge amount of development, it's just trying it out. So, we actually can do quite a few experiments. But how [disfmarker] how long does it take, do we think, for one of these [pause] [comment] trainings? [speaker003:] Mm hmm. That's a good question. [speaker001:] What about combinations of things? [speaker006:] Oh yeah, that's right. I mean, cuz, so, for instance, I think the major advantage of MSG [disfmarker] [speaker003:] Oh! [speaker006:] Yeah, [speaker003:] Och! [speaker006:] good point. A major advantage of MSG, I see, th that we've seen in the past is combined with PLP. [speaker005:] Yeah. [speaker006:] Um. [speaker003:] Now, this is turning into a four dimensional cube? [speaker002:] Or you just add it to the features. [speaker005:] No. [speaker001:] Well, you just select multiple things on the one dimension. [speaker005:] Here. [speaker003:] Just [disfmarker] Oh, yeah. OK. [speaker006:] Yeah, so, I mean, you don't wanna, uh [disfmarker] Let's see, seven choose two would [disfmarker] [vocalsound] [disfmarker] be, uh, twenty one different combinations. Um. 
[speaker002:] It's not a complete set of combinations, though, [speaker006:] Probably [disfmarker] [speaker002:] right? [speaker006:] What? [speaker002:] It's not a complete set of combinations, though, [speaker003:] No. [speaker002:] right? [speaker006:] Yeah, I hope not. Yeah, there's [disfmarker] [speaker003:] That would be [disfmarker] [speaker006:] Uh, yeah, so PLP and MSG I think we definitely wanna try cuz we've had a lot of good experience with putting those together. [speaker005:] Mm hmm. [speaker006:] Um. Yeah. [speaker001:] When you do that, you're increasing the size of the inputs to the net. Do you have to reduce the hidden layer, or something? [speaker006:] Well, so [disfmarker] I mean, so i it doesn't increase the number of trainings. [speaker001:] No, no, I'm [disfmarker] I'm just wondering about number of parameters in the net. Do you have to worry about keeping that the same, or [disfmarker]? [speaker006:] Uh, I don't think so. [speaker002:] There's a computation limit, though, isn't there? [speaker006:] Yeah, I mean, it's just more compu Excuse me? [speaker002:] Isn't there like a limit [pause] on the computation load, or d latency, or something like that for Aurora task? [speaker006:] Oh yeah, we haven't talked about any of that at all, have we? [speaker003:] No. [speaker006:] Yeah, so, there's not really a limit. What it is is that there's [disfmarker] there's, uh [disfmarker] it's just penalty, you know? That [disfmarker] that if you're using, uh, a megabyte, then they'll say that's very nice, but, of course, it will never go on a cheap cell phone. Um. [speaker002:] OK. [speaker006:] And, u uh, I think the computation isn't so much of a problem. I think it's more the memory. Uh, and, expensive cell phones, exa expensive hand helds, and so forth, are gonna have lots of memory. So it's just that, uh, these people see the [disfmarker] the cheap cell phones as being still the biggest market, so. [speaker002:] Mm hmm. [speaker006:] Um. 
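Two side calculations in this stretch are easy to check in code: the number of unordered feature pairs (seven choose two is twenty-one), and the way concatenating two streams such as PLP and MSG simply widens the net's input layer frame by frame. The dimensions below are hypothetical.

```python
from math import comb

import numpy as np

pairs = comb(7, 2)   # unordered pairs of the seven feature types

# Concatenating two feature streams frame by frame: the combined width
# is just the sum of the two widths, so the net's input layer grows
# accordingly (dimensions here are illustrative, not the real ones).
T = 100
plp = np.zeros((T, 39))   # e.g. PLP + deltas + double deltas
msg = np.zeros((T, 28))   # e.g. MSG features
combined = np.hstack([plp, msg])
```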
But, yeah, I was just realizing that, actually, it doesn't explode out, um [disfmarker] It's not really two to the seventh. But it's [disfmarker] but [disfmarker] but [disfmarker] i i it doesn't really explode out the number of trainings cuz these were all trained individually. Right? So, uh, if you have all of these nets trained some place, then, uh, you can combine their outputs and do the KL transformation and so forth and [disfmarker] and, uh [disfmarker] [speaker003:] Mm hmm. [speaker006:] So, what it [disfmarker] it blows out is the number of uh testings. And, you know [disfmarker] and the number of times you do that last part. But that last part, I think, is so [disfmarker] has gotta be pretty quick, so. Uh. Right? I mean, it's just running the data through [disfmarker] [speaker003:] Oh. [speaker006:] Well, you gotta do the KL transformation, [speaker001:] But wh what about a net that's trained on multiple languages, though? [speaker007:] Eight [disfmarker] y [speaker006:] but [disfmarker] [speaker001:] Is that just separate nets for each language then combined, [speaker005:] Necessary to put in. [speaker001:] or is that actually one net trained on? [speaker007:] Uh, probably one net. [speaker006:] Good question. [speaker007:] Well. Uh. [speaker006:] One would think one net, [speaker007:] So. [speaker006:] but we've [disfmarker] I don't think we've tested that. Right? [speaker007:] So, in the broader training corpus we can [disfmarker] we can use, uh, the three, or, a combination of [disfmarker] of two [disfmarker] two languages. [speaker005:] Database three. [speaker001:] In one net. [speaker007:] Yeah. [speaker001:] Mm hmm. [speaker006:] Yeah, so, I guess the first thing is if w if we know how much a [disfmarker] how long a [disfmarker] a training takes, if we can train up all these [disfmarker] these combinations, uh, then we can start working on testing of them individually, and in combination. Right? 
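The point that combining already-trained nets "doesn't blow out the number of trainings" can be sketched as: run each trained net over the data, merge their log posteriors, and recompute only the cheap decorrelating transform. The merge by averaging and all sizes below are illustrative assumptions, not the method actually settled on in the meeting.

```python
import numpy as np

def combine_streams(posterior_streams):
    """Merge frame-level phone posteriors from several already-trained
    nets (e.g. one per feature type, or per language) by averaging
    their log probabilities; no net is retrained, only this cheap
    combination step and the following KL/PCA transform are rerun."""
    logs = [np.log(p + 1e-8) for p in posterior_streams]
    return np.mean(logs, axis=0)          # (T, n_phones)

# Two toy posterior streams standing in for two trained nets.
rng = np.random.default_rng(1)

def rand_posteriors(T=50, n=56):
    z = rng.random((T, n))
    return z / z.sum(axis=1, keepdims=True)

merged = combine_streams([rand_posteriors(), rand_posteriors()])
# "merged" would then feed the KL transformation and the HTK back end.
```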
Because the putting them in combination, I think, is not as much computationally as the r training of the nets in the first place. [speaker003:] Mm hmm. [speaker006:] Right? [speaker007:] Yeah. [speaker006:] So y you do have to compute the KL transformation. Uh, which is a little bit, but it's not too much. [speaker007:] It's not too much, no. [speaker006:] Yeah. So it's [disfmarker] [speaker007:] But [disfmarker] Yeah. But there is the testing also, which implies training, uh, the HTK models [speaker005:] The [disfmarker] the model [disfmarker] [speaker007:] and, well, [speaker005:] the HTK model. [speaker006:] Uh, right. [speaker007:] it's [disfmarker] [speaker006:] Right. [speaker007:] yeah. [speaker006:] So if you do have lots of combinations, it's [disfmarker] [speaker007:] But it's [disfmarker] it's [disfmarker] it's not so long. It [@ @] [disfmarker] Yeah. [speaker006:] How long does it take for an, uh, HTK training? [speaker007:] It's around six hours, I think. [speaker005:] It depends on the [disfmarker] [speaker007:] For training and testing, yeah. [speaker005:] More than six hours. [speaker007:] More. [speaker005:] For the Italian, yes. Maybe one day. [speaker007:] One day? [speaker006:] For HTK? [speaker005:] Yeah. [speaker006:] Really? [speaker005:] Well. [speaker006:] Running on what? [speaker005:] Uh, M [disfmarker] MFCC. [speaker006:] No, I'm sorry, ru running on what machine? [speaker005:] Uh, Ravioli. [speaker006:] Uh, I don't know what Ravioli is. Is it [disfmarker] is it an Ultra five, or is it a [disfmarker]? [speaker005:] mmm Um. Who is that? [speaker001:] I don't know. [speaker003:] I don't know. [speaker005:] I don't know. [speaker002:] I don't know what a Ravioli is. [speaker005:] I don't know. [speaker003:] I don't know. [speaker002:] We can check really quickly, [speaker007:] Yeah, [speaker002:] I guess. [speaker007:] I I think it's it's it's not so long because, well, the TI digits test data is about, uh how many hours? 
Uh, th uh, thirty hours of speech, I think, [speaker006:] It's a few hours. [speaker007:] something like that. [speaker002:] Hmm. [speaker006:] Yeah. [vocalsound] Right, [speaker007:] And it p Well. [speaker006:] so, I mean, clearly, there [disfmarker] there's no way we can even begin to do an any significant amount here unless we use multiple machines. [speaker007:] It's six hours. [speaker006:] Right? So [disfmarker] so [disfmarker] w we [disfmarker] I mean there's plenty of machines here and they're n they're often not in [disfmarker] in a great [disfmarker] great deal of use. So, I mean, I think it's [disfmarker] it's key that [disfmarker] that the [disfmarker] that you look at, uh, you know, what machines are fast, what machines are used a lot [disfmarker] Uh, are we still using P make? Is that [disfmarker]? [speaker003:] Oh, I don't know how w how we would P make this, though. Um. [speaker006:] Well, you have a [disfmarker] I mean, once you get the basic thing set up, you have just all the [disfmarker] uh, a all these combinations, [speaker003:] Yeah. [speaker006:] right? [speaker003:] Mm hmm. [speaker006:] Um. It's [disfmarker] it's [disfmarker] let's say it's six hours or eight hours, or something for the training of HTK. How long is it for training of [disfmarker] of, uh, the neural net? [speaker003:] The neural net? Um. [speaker007:] I would say two days. [speaker001:] Depends on the corpuses, right? [speaker005:] It depends. [speaker007:] Yeah. [speaker002:] It s also depends on the net. [speaker005:] Depends on the corpus. [speaker003:] Yeah. [speaker002:] How big is the net? [speaker005:] For Albayzin I trained on neural network, uh, was, um, one day also. [speaker006:] Uh, but on what machine? [speaker003:] On a SPERT board. [speaker005:] Uh. I [disfmarker] I think the neural net SPERT. [speaker003:] Y you did a [disfmarker] you did it on a SPERT board. [speaker006:] OK, [speaker005:] Yes. [speaker006:] again, we do have a bunch of SPERT boards. 
[speaker003:] Yeah. [speaker006:] And I think there [disfmarker] there [disfmarker] there's [disfmarker] I think you folks are probably go the ones using them right now. [speaker001:] Is it faster to do it on the SPERT, or [disfmarker]? [speaker006:] Uh, don't know. Used to be. [speaker003:] It's [disfmarker] it's still a little faster on the [speaker001:] Is it? [speaker003:] Yeah, yeah. Ad Adam [disfmarker] Adam did some testing. Or either Adam or [disfmarker] or Dan did some testing and they found that the SPERT board's still [disfmarker] still faster. And the benefits is that, you know, you run out of SPERT and then you can do other things on your [disfmarker] your computer, [speaker001:] Mm hmm. [speaker006:] Mm hmm. [speaker003:] and you don't [disfmarker] [speaker001:] Mm hmm. [speaker006:] Yeah. So you could be [disfmarker] we have quite a few SPERT boards. You could set up, uh, you know, ten different jobs, or something, to run on SPERT [disfmarker] different SPERT boards and [disfmarker] and have ten other jobs running on different computers. So, it's got to take that sort of thing, or [disfmarker] or we're not going to get through any significant number of these. [speaker003:] Yeah. [speaker006:] So this is [disfmarker] Yeah, I mean, I kind of like this because what it [disfmarker] No [disfmarker] uh, no, what I like about it is [speaker003:] OK. [speaker006:] we [disfmarker] we [disfmarker] we do have a problem that we have very limited time. You know, so, with very limited time, we actually have really quite a [disfmarker] quite a bit of computational resource available if you, you know, get a look across the institute and how little things are being used. And uh, on the other hand, almost anything that really i you know, is [disfmarker] is new, where we're saying, "Well, let's look at, like we were talking before about, uh, uh, voiced unvoiced silence detection features and all those sort [disfmarker]" that's [disfmarker] [speaker005:] Yeah. 
[speaker006:] I think it's a great thing to go to. But if it's new, then we have this development and [disfmarker] and [disfmarker] and learning process t to [disfmarker] to go through on top of [disfmarker] just the [disfmarker] the [disfmarker] all the [disfmarker] all the work. So, I [disfmarker] I [disfmarker] I don't see how we'd do it. So what I like about this is you basically have listed all the things that we already know how to do. And [disfmarker] and all the kinds of data that we, at this point, already have. [speaker005:] Yeah. [speaker006:] And, uh, you're just saying let's look at the outer product of all of these things and see if we can calculate them. a a Am I [disfmarker] am I interpreting this correctly? Is this sort of what [disfmarker] what you're thinking of doing in the short term? [speaker007:] Mmm. [speaker006:] OK. [speaker007:] Yeah. [speaker006:] So [disfmarker] so then I think it's just the [disfmarker] the missing piece is that you need to, uh, you know [disfmarker] you know, talk to [disfmarker] talk to, uh, Chuck, talk to, uh, Adam, uh, sort out about, uh, what's the best way to really, you know, attack this as a [disfmarker] as a [disfmarker] as a mass problem in terms of using many machines. Uh, and uh, then, you know, set it up in terms of scripts and so forth, and [disfmarker] uh, in [disfmarker] in kind o some kind of structured way. Uh. Um, and, you know, when we go to, uh, OGI next week, uh, we can then present to them, you know, what it is that we're doing. And, uh, we can pull things out of this list that we think they are doing sufficiently, [speaker003:] Mmm. [speaker006:] that, you know, we're not [disfmarker] we won't be contributing that much. [speaker003:] Mm hmm. [speaker006:] Um. And, uh [disfmarker] Then, uh, like, we're there. [speaker002:] How big are the nets you're using? [speaker003:] Um, for the [disfmarker] for nets trained on digits, [comment] um, we have been using, uh, four hundred order hidden units. 
And, um, for the broader class nets we're [disfmarker] we're going to increase that because the, um, the digits nets only correspond to about twenty phonemes. [speaker002:] Uh huh. [speaker003:] So. [speaker006:] Broader class? [speaker003:] Um, the broader [disfmarker] broader training corpus nets like TIMIT. Um, w we're gonna [disfmarker] [speaker006:] Oh, it's not actually broader class, it's actually finer class, but you mean [disfmarker] y You mean [vocalsound] more classes. [speaker003:] Right. Right. Yeah. More classes. Right, right. [speaker006:] Yeah. [speaker003:] More classes. [speaker006:] Yeah. [speaker003:] That's what I mean. [speaker006:] Yeah. [speaker003:] Mm hmm. And. Yeah. [speaker006:] Carmen, did you [disfmarker] do you have something else to add? We [disfmarker] you haven't talked too much, and [disfmarker] [speaker005:] D I begin to work with the Italian database to [disfmarker] nnn, to [disfmarker] with the f front end and with the HTK program and the [@ @]. And I trained eh, with the Spanish two neural network with PLP and with LogRASTA PLP. I don't know exactly what is better if [disfmarker] if LogRASTA or JRASTA. [speaker006:] Well, um, JRASTA has the potential to do better, but it doesn't always. It's [disfmarker] i i JRASTA is more complicated. It's [disfmarker] it's, uh [disfmarker] instead of doing RASTA with a log, you're doing RASTA with a log like function that varies depending on a J parameter, uh, which is supposed to be sensitive to the amount of noise there is. So, it's sort of like the right transformation to do the filtering in, is dependent on how much noise there is. [speaker005:] Hm hmm. [speaker006:] And so in JRASTA you attempt to do that. It's a little complicated because once you do that, you end up in some funny domain and you end up having to do a transformation afterwards, which requires some tables. And, uh, so it's [disfmarker] it's [disfmarker] it's a little messier, [speaker005:] Hm hmm. 
[speaker006:] uh, there's more ways that it can go wrong, uh, but if [disfmarker] if [disfmarker] if you're careful with it, it can do better. [speaker005:] It's a bit [disfmarker] I'll do better. [speaker006:] So, it's [disfmarker] So. [speaker005:] Um, and I think to [disfmarker] to [disfmarker] to recognize the Italian digits with the neural netw Spanish neural network, and also to train another neural network with the Spanish digits, the database of Spanish digits. And I working that. [speaker006:] Yeah. [speaker005:] But prepa to prepare the [disfmarker] the database are difficult. Was for me, n it was a difficult work last week with the labels because the [disfmarker] the program with the label obtained that I have, the Albayzin, is different w to the label to train the neural network. And [pause] [vocalsound] that is another work that we must to do, to [disfmarker] to change. [speaker006:] I [disfmarker] I didn't understand. [speaker005:] Uh, for example Albayzin database was labeled automatically with HTK. It's not hand [disfmarker] it's not labels by hand. [speaker006:] Oh, "l labeled". I'm sorry, [speaker005:] Labels. [speaker006:] I have a p I had a problem with [vocalsound] the pronunciation. [speaker005:] I'm sorry, I'm sorry. The labels. I'm sorry. The labels. [speaker006:] Yeah, OK. So, OK, [speaker005:] Oh, also that [disfmarker] [speaker006:] so let's start over. So, TI TIMI TIMIT's hand labeled, [speaker005:] Yes. [speaker006:] and [disfmarker] and you're saying about the Spanish? [speaker005:] The Spanish labels? That was in different format, [speaker006:] Oh, I see. [speaker005:] that the format for the em [disfmarker] the program to train the neural network. I necessary to convert. And someti well [disfmarker] [speaker001:] So you're just having a problem converting the labels. [speaker005:] It's [disfmarker] it's [disfmarker] Yeah. 
Yeah, but n yes, because they have one program, Feacalc, but no, l LabeCut, l LabeCut, but don't [disfmarker] doesn't, eh, include the HTK format to convert. [speaker006:] Mm hmm. [speaker005:] And, [speaker002:] Hmm. [speaker005:] I don't know what. I ask [disfmarker] e even I ask to Dan Ellis what I can do that, and h they [disfmarker] he say me that h he does doesn't any [disfmarker] any s any form to [disfmarker] to do that. And at the end, I think that with LabeCut I can transfer to ASCII format, and HTK is an ASCII format. And I m do another, uh, one program to put ASCII format of HTK to ase ay ac ASCII format to Exceed [speaker006:] Mm hmm. [speaker005:] and they used LabCut to [disfmarker] [vocalsound] to pass. [speaker006:] OK, yeah. [speaker005:] Actually that was complicated, [speaker006:] So you [speaker005:] but well, I know how we can did that [disfmarker] do that. [speaker006:] Sure. So it's just usual kind of uh [disfmarker] sometimes say housekeeping, right? To get these [disfmarker] get these things sorted out. [speaker005:] Yeah. [speaker006:] So it seems like there's [disfmarker] there's some peculiarities of the, uh [disfmarker] of each of these dimensions that are getting sorted out. And then, um, if [disfmarker] if you work on getting the, uh, assembly lines together, and then the [disfmarker] the pieces sort of get ready to go into the assembly line and gradually can start, you know, start turning the crank, more or less. And, uh, uh, we have a lot more computational capability here than they do at OGI, so I think that i if [disfmarker] What's [disfmarker] what's great about this is it sets it up in a very systematic way, so that, uh, once these [disfmarker] all of these, you know, mundane but real problems get sorted out, we can just start turning the crank [speaker005:] Mm hmm. [speaker006:] and [disfmarker] and push all of us through, and then finally figure out what's best. [speaker003:] Yeah. 
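The label-format headache described above (HTK's ASCII label files vs. the net trainer's format) comes down to a small converter. A sketch under stated assumptions: HTK label lines are "start end phone" with times in 100 ns units, and the target is one phone label per 10 ms frame; the trainer's actual target format is an assumption here.

```python
def htk_labels_to_frames(htk_lines, frame_step_s=0.010):
    """Convert HTK label lines ("start end phone", times in
    100 ns units) to one phone label per frame.  The per-frame
    target format is a stand-in, not the trainer's real format."""
    frames = []
    for line in htk_lines:
        parts = line.split()
        if len(parts) < 3:
            continue
        start, end, phone = int(parts[0]), int(parts[1]), parts[2]
        # HTK times are integer multiples of 100 ns (1e-7 s)
        n = round((end - start) * 1e-7 / frame_step_s)
        frames.extend([phone] * int(n))
    return frames

labels = ["0 1000000 sil", "1000000 2500000 ah", "2500000 3000000 sil"]
result = htk_labels_to_frames(labels)
```

Going through an intermediate ASCII representation, as described in the conversation, is essentially this plus the inverse direction.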
Um, I [disfmarker] I was thinking two things. Uh, the first thing was, um [disfmarker] we [disfmarker] we actually had thought of this as sort of like, um [disfmarker] not [disfmarker] not in stages, [comment] but more along the [disfmarker] the time axis. Just kind of like one stream at a time, [speaker006:] Mm hmm. [speaker003:] je je je je je [comment] check out the results and [disfmarker] and go that way. [speaker006:] Oh, yeah, yeah, sure. No, I'm just saying, I'm just thinking of it like loops, right? And so, y y y if you had three nested loops, that you have a choice for this, a choice for this, and a choice for that, [speaker003:] Uh huh. Yeah. [speaker006:] right? And you're going through them all. [speaker003:] Mm hmm. [speaker006:] That [disfmarker] that's what I meant. [speaker003:] Right, right. [speaker006:] And, uh, the thing is that once you get a better handle on how much you can realistically do, uh, um, [vocalsound] concurrently on different machines, different SPERTs, and so forth, uh, and you see how long it takes on what machine and so forth, you can stand back from it and say, "OK, if we look at all these combinations we're talking about, and combinations of combinations, and so forth," you'll probably find you can't do it all. [speaker003:] Mm hmm. OK. [speaker006:] OK, so then at that point, uh, we should sort out which ones do we throw away. Which of the combinations across [disfmarker] you know, what are the most likely ones, [speaker003:] Mm hmm. [speaker006:] and [disfmarker] And, uh, I still think we could do a lot of them. I mean, it wouldn't surprise me if we could do a hundred of them or something. But, probably when you include all the combinations, you're actually talking about a thousand of them or something, and that's probably more than we can do. Uh, but a hundred is a lot. And [disfmarker] and, uh, um [disfmarker] [speaker003:] OK. [speaker006:] Yeah. 
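The "nested loops" view of the experiment space described here (a choice per dimension, taking the outer product, then pruning to the combinations you can afford) can be sketched with `itertools.product`. The dimension values below are illustrative, not the group's actual lists:

```python
from itertools import product

# Illustrative experiment dimensions -- the actual feature,
# corpus and language lists are assumptions for this sketch.
features = ["plp", "lograsta_plp", "jrasta_plp", "msg"]
train_corpora = ["timit", "albayzin", "timit+albayzin"]
test_langs = ["english", "spanish", "italian"]

experiments = list(product(features, train_corpora, test_langs))
print(len(experiments))   # 4 * 3 * 3 = 36 combinations

for feat, corpus, lang in experiments[:2]:
    print(f"train net: {feat} on {corpus}; test HTK on {lang}")
```

Even a few four- or five-way dimensions multiply out to hundreds of runs, which is exactly the "a hundred is a lot, a thousand is too many" budgeting problem being discussed.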
[speaker003:] Yeah, and the [disfmarker] the second thing was about scratch space. And I think you sent an email about, um, e scratch space for [disfmarker] for people to work on. And I know that, uh, Stephane's working from an NT machine, so his [disfmarker] his home directory exists somewhere else. [speaker006:] His [disfmarker] his stuff is somewhere else, yeah. Yeah, I mean, my point I [disfmarker] I want to [disfmarker] Yeah, thanks for bring it back to that. My [disfmarker] th I want to clarify my point about that [disfmarker] that [disfmarker] that Chuck repeated in his note. Um. We're [disfmarker] over the next year or two, we're gonna be upgrading the networks in this place, [speaker003:] Mm hmm. [speaker006:] but right now they're still all te pretty much all ten megabit lines. And we have reached the [disfmarker] this [disfmarker] the machines are getting faster and faster. So, it actually has reached the point where it's a significant drag on the time for something to move the data from one place to another. [speaker003:] Mm hmm. [speaker006:] So, you [disfmarker] you don't w especially in something with repetitive computation where you're going over it multiple times, you do [disfmarker] don't want to have the [disfmarker] the data that you're working on distant from where it's being [disfmarker] where the computation's being done if you can help it. [speaker003:] Mm hmm. [speaker006:] Uh. Now, we are getting more disk for the central file server, which, since it's not a computational server, would seem to be a contradiction to what I just said. But the idea is that, uh, suppose you're working with, uh, this big bunch of multi multilingual databases. Um, you put them all in the central ser at the cen central file server. [speaker003:] Mm hmm. 
[speaker006:] Then, when you're working with something and accessing it many times, you copy the piece of it that you're working with over to some place that's close to where the computation is and then do all the work there. And then that way you [disfmarker] you won't have the [disfmarker] the network [disfmarker] you won't be clogging the network for yourself and others. [speaker003:] Mmm. [speaker006:] That's the idea. So, uh, it's gonna take us [disfmarker] It may be too late for this, uh, p precise crunch we're in now, but, uh, we're, uh [disfmarker] It's gonna take us a couple weeks at least to get the, uh, uh, the amount of disk we're gonna be getting. We're actually gonna get, uh, I think four more, uh, thirty six gigabyte drives and, uh, put them on another [disfmarker] another disk rack. We ran out of space on the disk rack that we had, so we're getting another disk rack and [vocalsound] four more drives to share between, uh [disfmarker] primarily between this project and the Meetings [disfmarker] Meetings Project. Um. But, uh, we've put another [disfmarker] I guess there's another eighteen gigabytes that's [disfmarker] that's in there now to help us with the immediate crunch. But, uh, are you saying [disfmarker] So I don't know where [pause] you're [disfmarker] Stephane, where you're doing your computations. If [disfmarker] i so, you're on an NT machine, so you're using some external machine [speaker007:] Yeah, it, uh [disfmarker] Well, to [disfmarker] It's Nutmeg and Mustard, I think, [speaker006:] Do you know these yet? [speaker007:] I don't know what kind. [speaker001:] Nuh uh. [speaker006:] Yeah, OK. Uh, are these [disfmarker] are these, uh, computational servers, or something? I'm [disfmarker] I've been kind of out of it. [speaker007:] Yeah, I think, yeah. I think so. Mmm. [speaker006:] Unfortunately, these days my idea of running comput of computa doing computation is running a spread sheet. So. [speaker007:] Mmm. 
[speaker006:] Uh, haven't been [disfmarker] haven't been doing much computing personally, so. Um. Yeah, so those are computational servers. So I guess the other question is what disk there i space there is there on the computational servers. [speaker001:] Right. Yeah, I'm not sure what's available on [disfmarker] is it [disfmarker] you said Nutmeg and what was the other one? [speaker007:] Mustard. [speaker001:] Mustard. OK. [speaker006:] Yeah, [speaker002:] Huh. [speaker006:] Well, you're the [disfmarker] you're the disk czar now. So [speaker001:] Right, right. Well, I'll check on that. [speaker006:] Yeah. Yeah, so basically, uh, Chuck will be the one who will be sorting out what disk needs to be where, and so on, and I'll be the one who says, "OK, spend the money." So. [vocalsound] Which, I mean, n these days, uh, if you're talking about scratch space, it doesn't increase the, uh, need for backup, and, uh, I think it's not that big a d and the [disfmarker] the disks themselves are not that expensive. Right now it's [disfmarker] [speaker001:] What you can do, when you're on that machine, is, uh, just go to the slash scratch directory, and do a DF minus K, and it'll tell you if there's space available. [speaker007:] Yeah. [speaker001:] Uh, and if there is then, uh [disfmarker] [speaker006:] But wasn't it, uh [disfmarker] I think Dave was saying that he preferred that people didn't put stuff in slash scratch. It's more putting in d s XA or XB or, [speaker001:] Well, there's different [disfmarker] there, um, there's [disfmarker] [speaker006:] right? [speaker001:] Right. So there's the slash X whatever disks, and then there's slash scratch. And both of those two kinds are not backed up. And if it's called "slash scratch", it means it's probably an internal disk to the machine. Um. 
And so that's the kind of thing where, like if [disfmarker] um, OK, if you don't have an NT, but you have a [disfmarker] a [disfmarker] a Unix workstation, and they attach an external disk, [comment] it'll be called "slash X something" uh, if it's not backed up and it'll be "slash D something" if it is backed up. And if it's inside the machine on the desk, it's called "slash scratch". But the problem is, if you ever get a new machine, they take your machine away. It's easy to unhook the external disks, put them back on the new machine, but then your slash scratch is gone. So, you don't wanna put anything in slash scratch that you wanna keep around for a long period of time. But if it's a copy of, say, some data that's on a server, you can put it on slash scratch because, um, first of all it's not backed up, and second it doesn't matter if that machine disappears and you get a new machine because you just recopy it to slash scratch. So tha that's why I was saying you could check slash scratch on those [disfmarker] on [disfmarker] on, um, Mustard and [disfmarker] and Nutmeg to see if [disfmarker] if there's space that you could use there. [speaker006:] I see. [speaker001:] You could also use slash X whatever disks on Mustard and Nutmeg. [speaker007:] Yeah, yeah. [speaker001:] Um. Yeah, and we do have [disfmarker] I mean, yeah, so [disfmarker] so you [disfmarker] yeah, it's better to have things local if you're gonna run over them lots of times so you don't have to go to the network. [speaker006:] Right, so es so especially if you're [disfmarker] right, if you're [disfmarker] if you're taking some piece of the training corpus, which usually resides in where Chuck is putting it all on the [disfmarker] on the, uh, file server, uh, then, yeah, it's fine if it's not backed up because if it g g gets wiped out or something, y I mean it is backed up on the other disk. So, [speaker001:] Mm hmm. [speaker006:] yeah, OK. 
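The quick "DF minus K on slash scratch" check described here can be approximated from Python with `shutil.disk_usage`; the candidate directory names below just follow the local naming convention from the discussion and may not exist on every host, so only existing ones are checked.

```python
import os
import shutil

def free_space_gb(path):
    """Free space at `path` in GiB -- a quick df -k equivalent."""
    usage = shutil.disk_usage(path)
    return usage.free / (1024 ** 3)

# Check only the scratch candidates that exist on this host.
candidates = ["/scratch", "/tmp", "/var/tmp"]
available = {p: free_space_gb(p) for p in candidates if os.path.isdir(p)}
for path, gb in sorted(available.items(), key=lambda kv: -kv[1]):
    print(f"{path}: {gb:.1f} GiB free")
```

Per the convention described, anything placed under a non-backed-up scratch or /X disk should be a recopyable cache of data whose master copy lives on the backed-up file server.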
[speaker001:] Yeah, so, [vocalsound] one of the things that I need to [disfmarker] I've started looking at [disfmarker] Uh, is this the appropriate time to talk about the disk space stuff? [speaker006:] Sure. [speaker001:] I've started looking at, um, disk space. Dan [disfmarker] David, um, put a new, um, drive onto Abbott, that's an X disk, which means it's not backed up. So, um, I've been going through and copying data that is, you know, some kind of corpus stuff usually, that [disfmarker] that we've got on a CD ROM or something, onto that new disk to free up space [pause] on other disks. And, um, so far, um, I've copied a couple of Carmen's, um, databases over there. We haven't deleted them off of the slash DC disk that they're on right now in Abbott, um, uh, but we [disfmarker] I would like to go through [disfmarker] sit down with you about some of these other ones and see if we can move them onto, um, this new disk also. [speaker007:] Yeah, OK. [speaker001:] There's [disfmarker] there's a lot more space there, and it'll free up more space for doing the experiments and things. So, anything that [disfmarker] that you don't need backed up, we can put on this new disk. Um, but if it's experiments and you're creating files and things that you're gonna need, you probably wanna have those on a disk that's backed up, just in case something [comment] goes wrong. So. Um So far I've [disfmarker] I've copied a couple of things, but I haven't deleted anything off of the old disk to make room yet. Um, and I haven't looked at the [disfmarker] any of the Aurora stuff, except for the Spanish. So I [disfmarker] I guess I'll need to get together with you and see what data we can move onto the new disk. [speaker007:] Yeah, OK. [speaker006:] Um, yeah, I [disfmarker] I just [disfmarker] an another question occurred to me is [disfmarker] is what were you folks planning to do about normalization? [speaker007:] Um. 
Well, we were thinking about using this systematically for all the experiments. Um. [speaker006:] This being [disfmarker]? [speaker007:] So, but [disfmarker] Uh. So that this could be another dimension, but we think perhaps we can use the [disfmarker] the best, uh, um, uh, normalization scheme as OGI is using, so, with parameters that they use there, [speaker006:] Yeah, I think that's a good idea. [speaker007:] u [vocalsound] u [speaker006:] I mean it's i i we [disfmarker] we seem to have enough dimensions as it is. So probably if we [vocalsound] sort of take their [disfmarker] [speaker007:] Yeah, yeah, yeah. [speaker006:] probably the on line [disfmarker] line normalization because then it [disfmarker] [comment] it's [disfmarker] if we do anything else, we're gonna end up having to do on line normalization too, so we may as well just do on line normalization. [speaker007:] Mm hmm. [speaker006:] So. Um. So that it's plausible for the final thing. Good. Um. So, I guess, yeah, th the other topic [disfmarker] I [disfmarker] maybe we're already there, or almost there, is goals for the [disfmarker] for next week's meeting. Uh. i i i it seems to me that we wanna do is flush out what you put on the board here. Uh. You know, maybe, have it be somewhat visual, a little bit. [speaker003:] OK. Like a s like a slide? [speaker006:] Uh, so w we can say what we're doing, yeah. [speaker003:] OK. [speaker006:] And, um, also, if you have [pause] sorted out, um, this information about how long i roughly how long it takes to do on what and, you know, what we can [disfmarker] how many of these trainings, uh, uh, and testings and so forth that we can realistically do, uh, then one of the big goals of going there next week would be to [disfmarker] to actually settle on which of them we're gonna do. And, uh, when we come back we can charge in and do it. Um. 
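The on-line normalization agreed on above (running mean and variance estimates updated frame by frame, so it remains plausible for a real-time final system) can be sketched as below. The time constant `alpha` and epsilon are assumed values for illustration, not OGI's actual parameters:

```python
import numpy as np

def online_normalize(frames, alpha=0.995, eps=1e-6):
    """On-line mean/variance normalization: update running
    estimates per frame and normalize with them immediately.
    alpha (forgetting factor) is an assumed setting."""
    mean = np.zeros(frames.shape[1])
    var = np.ones(frames.shape[1])
    out = np.empty_like(frames, dtype=float)
    for t, x in enumerate(frames):
        mean = alpha * mean + (1 - alpha) * x
        var = alpha * var + (1 - alpha) * (x - mean) ** 2
        out[t] = (x - mean) / np.sqrt(var + eps)
    return out

feats = np.random.randn(200, 13) * 3.0 + 5.0   # toy cepstral features
norm = online_normalize(feats)
```

Unlike per-utterance normalization, this never looks ahead, which is why settling on it early avoids redoing experiments later.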
Anything else that [disfmarker] I a a Actually [disfmarker] started out this [disfmarker] this field trip started off with [disfmarker] with, uh, Stephane talking to Hynek, so you may have [disfmarker] you may have had other goals, uh, for going up, and any anything else you can think of would be [disfmarker] we should think about [pause] accomplishing? I mean, I'm just saying this because [pause] maybe there's things we need to do in preparation. [speaker007:] Oh, I think basically, this is [disfmarker] this is, uh, yeah. [speaker006:] OK. OK. Uh. Alright. And uh [disfmarker] and the other [disfmarker] the [disfmarker] the last topic I had here was, um, uh d Dave's fine offer to [disfmarker] to, uh, do something [pause] [vocalsound] on this. I mean he's doing [disfmarker] [vocalsound] [disfmarker] he's working on other things, but to [disfmarker] to do something on this project. So the question is, "Where [disfmarker] where could we, uh, uh, most use Dave's help?" [speaker007:] Um, yeah, I was thinking perhaps if, um, additionally to all these experiments, which is not really research, well I mean it's, uh, running programs [speaker006:] Yeah. [speaker007:] and, um, [vocalsound] trying to have a closer look at the [disfmarker] perhaps the, um, [vocalsound] speech, uh, noise detection or, uh, voiced sound unvoiced sound detection and [disfmarker] Which could be important in [disfmarker] i for noise [disfmarker] noise [disfmarker] [speaker001:] I think that would be a [disfmarker] I think that's a big [disfmarker] big deal. Because the [disfmarker] you know, the thing that Sunil was talking about, uh, with the labels, uh, labeling the database when it got to the noisy stuff? The [disfmarker] That [disfmarker] that really throws things off. You know, having the noise all of a sudden, your [disfmarker] your, um, speech detector, I mean the [disfmarker] the, um [disfmarker] What was it? What was happening with his thing? 
He was running through these models very quickly. He was getting lots of, uh, uh insertions, is what it was, in his recognitions. [speaker006:] The only problem [disfmarker] I mean, maybe that's the right thing [disfmarker] the only problem I have with it is exactly the same reason why you thought it'd be a good thing to do. Um, I [disfmarker] I think that [disfmarker] Let's fall back to that. But I think the first responsibility is sort of to figure out if there's something [pause] that, uh, an [disfmarker] an additional [disfmarker] Uh, that's a good thing you [disfmarker] remove the mike. Go ahead, good. Uh, uh. What an additional clever person could help with when we're really in a crunch for time. Right? Cuz Dave's gonna be around for a long time, right? [speaker005:] Yeah. [speaker006:] He's [disfmarker] he's gonna be here for years. [speaker007:] Yeah. [speaker006:] And so, um, over years, if he's [disfmarker] if he's interested in, you know, voiced unvoiced silence, he could do a lot. But if there [disfmarker] if in fact there's something else [pause] that he could be doing, that would help us when we're [disfmarker] we're sort of uh strapped for time [disfmarker] We have [disfmarker] we [disfmarker] we've, you know, only, [pause] uh, another [disfmarker] another month or two [pause] to [disfmarker] you know, with the holidays in the middle of it, um, to [disfmarker] to get a lot done. If we can think of something [disfmarker] some piece of this that's going to be [disfmarker] The very fact that it is sort of just work, and i and it's running programs and so forth, is exactly why [pause] it's possible that it [disfmarker] some piece of could be handed to someone to do, because it's not [disfmarker] Uh, yeah, so that [disfmarker] that's the question. 
And we don't have to solve it right this s second, but if we could think of some [disfmarker] some piece that's [disfmarker] that's well defined, that he could help with, he's expressing a will willingness to do that. [speaker001:] What about training up a, um, a multilingual net? [speaker006:] Uh. [speaker005:] Yes, [speaker007:] Yeah. [speaker005:] maybe to, mmm, put together the [disfmarker] the label [disfmarker] the labels between TIMIT and Spanish or something like that. [speaker007:] Yeah, so defining the superset, [speaker005:] Yes. [speaker007:] and, uh, joining the data and [disfmarker] Mmm. [speaker005:] Yeah. [speaker007:] Yeah. [speaker006:] Uh. Yeah, that's something that needs to be done in any event. [speaker005:] Yeah. [speaker006:] So what we were just saying is that [disfmarker] that, um [disfmarker] I was arguing for, [pause] if possible, coming up with something that [disfmarker] that really was development and wasn't research because we [disfmarker] we're [disfmarker] we have a time crunch. And so, uh, if there's something that would [disfmarker] would save some time that someone else could do on some other piece, then we should think of that first. See the thing with voiced unvoiced silence is I really think that [disfmarker] that it's [disfmarker] to do [disfmarker] to do a [disfmarker] a [disfmarker] a [disfmarker] a poor job is [disfmarker] is pretty quick, uh, or, you know, a so so job. You can [disfmarker] you can [disfmarker] you can throw in a couple fea we know what [disfmarker] what kinds of features help with it. [speaker005:] Hmm. [speaker006:] You can throw something in. You can do pretty well. But I remember, in fact, when you were working on that, and you worked on for few months, as I recall, and you got to, say ninety three percent, and getting to ninety four [pause] [vocalsound] really really hard. [speaker001:] Mm hmm. Another year. [speaker006:] Yeah, yeah. 
So, um [disfmarker] And th th the other tricky thing is, since we are, uh, even though we're not [disfmarker] we don't have a strict prohibition on memory size, and [disfmarker] and computational complexity, uh, clearly there's some limitation to it. So if we have to [disfmarker] if we say we have to have a pitch detector, say, if we [disfmarker] if we're trying to incorporate pitch information, or at least some kind of harmonic [disfmarker] harmonicity, or something, this is another whole thing, take a while to develop. Anyway, it's a very very interesting topic. I mean, one [disfmarker] I think one of the [disfmarker] a lot of people would say, and I think Dan would also, uh, that one of the things wrong with current speech recognition is that we [disfmarker] we really do throw away all the harmonicity information. Uh, we try to get spectral envelopes. Reason for doing that is that most of the information about the phonetic identity is in the spectral envelopes are not in the harmonic detail. But the harmonic detail does tell you something. Like the fact that there is harmonic detail is [disfmarker] is real important. So. Um. So, uh. So I think [disfmarker] Yeah. So [disfmarker] wh that [disfmarker] so the [disfmarker] the other suggestion that just came up was, well what about having him [pause] work on the, uh, [pause] multilingual super f superset [pause] kind of thing. Uh, coming up with that and then, you know, training it [disfmarker] training a net on that, say, um, from [disfmarker] from, uh [disfmarker] from TIMIT or something. Is that [disfmarker] or uh, for multiple databases. What [disfmarker] what would you [disfmarker] what would you think it would [disfmarker] wh what would this task consist of? [speaker007:] Yeah, it would consist in, uh, well, um, creating the [disfmarker] the superset, and, uh, modifying the lab labels for matching the superset. Uh. 
[speaker006:] Uh, creating a superset from looking at the multiple languages, [speaker007:] Well, creating the mappings, actually. [speaker006:] and then creating i m changing labels on TIMIT? [speaker007:] Yeah. [speaker006:] Or on [disfmarker] or on multiple language [disfmarker] [vocalsound] multiple languages? [speaker005:] No. [speaker007:] Yeah, yeah, [speaker005:] The multiple language. [speaker007:] with the [@ @] three languages, [speaker005:] Maybe for the other language because TIMIT have more phone. [speaker006:] Yeah. Uh. [speaker001:] So you'd have to create a mapping from each language to the superset. [speaker005:] Yeah. [speaker007:] From each language to the superset, [speaker005:] Mm hmm. [speaker007:] yeah. [speaker005:] Yeah. [speaker003:] There's, um [disfmarker] Carmen was talking about this SAMPA thing, and it's, um, [vocalsound] it's an effort by linguists to come up with, um, a machine readable IPA, um, sort of thing, right? And, um, they [disfmarker] they have a web site that Stephane was showing us that has, um [disfmarker] has all the English phonemes and their SAMPA correspondent, um, phoneme, [speaker006:] Yeah. [speaker003:] and then, um, they have Spanish, they have German, they have all [disfmarker] all sorts of languages, um, mapping [disfmarker] mapping to the SAMPA phonemes, which [disfmarker] [speaker005:] Yeah, the tr the transcription, though, for Albayzin is n the transcription are of SAMPA the same, uh, how you say, symbol that SAMPA appear. [speaker002:] SAMPA? What does "SAMPA" mean? [speaker006:] Mm hmm. Hmm. [speaker005:] But I don't know if TIMIT o how is TIMIT. [speaker002:] So, I mean [disfmarker] [speaker006:] What [disfmarker] [speaker002:] I'm sorry. [speaker006:] Go ahead. [speaker002:] I was gonna say, does that mean IPA is not really international? [speaker003:] No, it's [disfmarker] it's saying [disfmarker] [speaker001:] It uses special diacritics and stuff, which you can't do with ASCII characters. 
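The mapping task just described (each language's phone labels into one shared SAMPA-style superset) can be sketched as below. All symbols and tables here are invented for illustration; they are not the actual TIMIT or Albayzin inventories.

```python
# Sketch of mapping per-language phone labels into a shared superset,
# roughly the scheme discussed above. Symbols and tables are illustrative,
# not the real TIMIT / Albayzin phone sets.

# Hypothetical per-language phone -> SAMPA-style superset symbol tables.
ENGLISH_TO_SUPERSET = {"iy": "i", "aa": "A", "sh": "S", "th": "T"}
SPANISH_TO_SUPERSET = {"i": "i", "a": "A", "rr": "r"}

def remap_labels(labels, table):
    """Rewrite a label sequence into superset symbols; an unmapped label
    raises KeyError so gaps in the mapping are caught early."""
    return [table[label] for label in labels]

def superset(*tables):
    """Union of all target symbols across the per-language tables."""
    return sorted({sym for t in tables for sym in t.values()})

print(superset(ENGLISH_TO_SUPERSET, SPANISH_TO_SUPERSET))
# -> ['A', 'S', 'T', 'i', 'r']
print(remap_labels(["iy", "sh", "aa"], ENGLISH_TO_SUPERSET))
# -> ['i', 'S', 'A']
```

The net would then be trained on the remapped labels, so one output layer covers all languages.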
[speaker003:] y can't print on ASCII. [speaker005:] Yeah. [speaker002:] Oh, I see. [speaker001:] So the SAMPA's just mapping those. [speaker002:] Got it. [speaker006:] What, uh [disfmarker] Has OGI done anything about this issue? Do they have [disfmarker] Do they have any kind of superset that they already have? [speaker007:] I don't think so. Well, they [disfmarker] they [disfmarker] they're going actually the [disfmarker] the other way, defining uh, phoneme clusters, apparently. Well. [speaker006:] Aha. That's right. Uh, and that's an interesting [pause] way to go too. [speaker001:] So they just throw the speech from all different languages together, then cluster it into sixty or fifty or whatever clusters? [speaker007:] I think they've not done it, uh, doing, uh, multiple language yet, but what they did is to training, uh, English nets with all the phonemes, and then training it in English nets with, uh, kind of seventeen, I think it was [disfmarker] seventeen, uh, broad classes. [speaker001:] Automatically derived [disfmarker] Mm hmm. [speaker007:] Yeah. [speaker001:] Automatically derived broad classes, or [disfmarker]? [speaker007:] Yeah, I think so. [speaker001:] Uh huh. [speaker007:] Uh, and, yeah. And the result was that apparently, when testing on cross language it was better. I think so. But Hynek didn't add [disfmarker] didn't have all the results when he showed me that, so, well. But [disfmarker] [speaker006:] So that does make an interesting question, though. Is there's some way that we should tie into that with this. Um. Right? I mean, if [disfmarker] if in fact that is a better thing to do, [pause] should we leverage that, rather than doing, [pause] um, our own. Right? So, if i if [disfmarker] if they s I mean, we have [disfmarker] [pause] i we have the [disfmarker] the trainings with our own categories. And now we're saying, "Well, how do we handle cross language?" 
And one way is to come up with a superset, but they are als they're trying coming up with clustered, and do we think there's something wrong with that? [speaker007:] I think that there's something wrong [speaker006:] OK. [speaker007:] or [disfmarker] Well, because [disfmarker] [speaker006:] What w [speaker007:] Well, for the moment we are testing on digits, and e i perhaps u using broad phoneme classes, it's [disfmarker] it's OK for um, uh classifying the digits, but as soon as you will have more words, well, words can differ with only a single phoneme, and [disfmarker] which could be the same, uh, class. Well. [speaker006:] I see. [speaker007:] So. [speaker006:] Right. Although, you are not using this for the [disfmarker] [speaker007:] So, I'm [speaker006:] You're using this for the feature generation, though, not the [disfmarker] [speaker007:] Yeah, but you will ask the net to put one for th th the phoneme class [speaker006:] Yeah. [speaker007:] and [disfmarker] So. [speaker006:] Yeah. [speaker001:] So you're saying that there may not be enough information coming out of the net to help you discriminate the words? [speaker007:] Well. Yeah, yeah. Mmm. [speaker002:] Fact, most confusions are within the phone [disfmarker] phone classes, right? I think, uh, Larry was saying like obstruents are only confused with other obstruents, et cetera, et cetera. [speaker006:] Yeah. [speaker001:] Hmm. [speaker006:] Yeah. Yeah. [speaker007:] Yeah, this is another p yeah, another point. [speaker006:] Yeah. [speaker003:] So [disfmarker] so, maybe we could look at articulatory type stuff, [speaker006:] But that's what I thought they were gonna [disfmarker] [speaker003:] right? [speaker006:] Did they not do that, or [disfmarker]? [speaker007:] I don't think so. 
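The objection just raised (words can differ by a single phoneme that falls in the same broad class) can be shown with a toy example; the class table and the minimal pair are made up for illustration.

```python
# Illustrates the objection above: collapsing phones into broad classes
# can merge minimal pairs. Classes and words are invented examples.
BROAD_CLASS = {"p": "stop", "b": "stop", "i": "vowel", "n": "nasal"}

def broad_transcription(phones):
    """Map a phone string to its broad-class string."""
    return [BROAD_CLASS[p] for p in phones]

pin = ["p", "i", "n"]
bin_ = ["b", "i", "n"]
# Both words map to stop-vowel-nasal, so the broad-class net
# gives the recognizer nothing to tell them apart.
print(broad_transcription(pin) == broad_transcription(bin_))  # -> True
```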
Well, they were talking about, perhaps, [speaker006:] So [disfmarker] [speaker007:] but they d I d [speaker006:] They're talking about it, [speaker007:] w [speaker006:] but that's sort of a question whether they did [speaker007:] Yeah. [speaker006:] because that's [disfmarker] that's the other route to go. Instead of this, you know [disfmarker] [speaker003:] Mm hmm. Superclass. [speaker006:] Instead of the [disfmarker] the [disfmarker] the [disfmarker] the superclass thing, which is to take [disfmarker] So suppose y you don't really mark arti To really mark articulatory features, you really wanna look at the acoustics and [disfmarker] and see where everything is, and we're not gonna do that. So, uh, the second class way of doing it is [pause] to look at the, uh, phones that are labeled and translate them into acoustic [disfmarker] uh, uh [disfmarker] articulatory, uh, uh, features. So it won't really be right. You won't really have these overlapping [pause] things and so forth, [speaker001:] So the targets of the net [disfmarker] are these [disfmarker]? [speaker006:] but [disfmarker] Articulatory feature. [speaker001:] Articulatory features. [speaker006:] Right. [speaker001:] But that implies that you can have more than one on at a time? [speaker006:] That's right. [speaker001:] Ah. [speaker006:] You either do that or you have multiple nets. [speaker001:] OK. I see. [speaker006:] Um. And, um I don't know if our software [disfmarker] this [disfmarker] if the qu versions of the Quicknet that we're using allows for that. Do you know? [speaker003:] Allows for [disfmarker]? [speaker006:] Multiple targets being one? [speaker003:] Oh, um, we have gotten soft targets to [disfmarker] to work. [speaker006:] OK. So that [disfmarker] that'll work, yeah. [speaker003:] Yeah. [speaker006:] OK. 
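The second-class recoding described above (phone labels translated into articulatory-feature targets, several of which can be on at once) can be sketched as follows. The feature inventory and the phone-to-feature table are illustrative only, not a real phonological analysis.

```python
# Sketch of recoding phone labels as multi-hot articulatory-feature
# targets, as discussed above. Inventory and table are made up.
FEATURES = ["voiced", "nasal", "stop", "fricative", "front", "high"]
PHONE_FEATURES = {
    "b": {"voiced", "stop"},
    "m": {"voiced", "nasal", "stop"},
    "s": {"fricative"},
    "iy": {"voiced", "front", "high"},
}

def feature_target(phone):
    """Return a 0/1 target vector; several features can be on at once,
    which is why more than one net output must be allowed to be 1."""
    on = PHONE_FEATURES[phone]
    return [1 if f in on else 0 for f in FEATURES]

print(feature_target("m"))  # -> [1, 1, 1, 0, 0, 0]
```

Distinct phones still get distinct vectors, so no phone contrast is lost, but the targets are defined in terms that should transfer across languages better than language-specific phone labels.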
So, um, that's another thing that could be done [disfmarker] is that we could [disfmarker] we could, uh, just translate [disfmarker] instead of translating to a superset, [pause] just translate to articulatory features, some set of articulatory features and train with that. [speaker002:] Um. [speaker006:] Now the fact [disfmarker] even though it's a smaller number, [pause] it's still fine because you have the [disfmarker] the, uh, combinations. So, in fact, it has every, you know [disfmarker] it had [disfmarker] has [disfmarker] has every distinction in it that you would have the other way. [speaker007:] Yeah. [speaker006:] But it should go across languages better. [speaker001:] We could do an interesting cheating experiment with that too. We could [disfmarker] I don't know, if you had uh the phone labels, you could replace them by their articulatory features and then feed in a vector with those uh, things turned on based on what they're supposed to be for each phone to see if it [disfmarker] if you get a big win. Do you know what I'm saying? [speaker006:] No. [speaker001:] So, um, I mean, if your net is gonna be outputting, uh, a vector of [disfmarker] basically of [disfmarker] well, it's gonna have probabilities, but let's say that they were ones and zeros, then y and you know for each, um, I don't know if you know this for your testing data, but if you know for your test data, you know, what the string of phones is and [disfmarker] and you have them aligned, then you can just [disfmarker] instead of going through the net, just create the vector for each phone and feed that in to see if that data helps. Eh, eh, what made me think about this is, I was talking with Hynek and he said that there was a guy at AT&T who spent eighteen months working on a single feature. And because they had done some cheating experiments [disfmarker] [speaker006:] This was the guy that we were just talking a that we saw on campus.
So, this was Larry Saul who did this [disfmarker] did this. [speaker001:] Oh, OK. [speaker006:] He used sonorants. [speaker001:] Right, OK, [speaker006:] Was what he was doing. [speaker001:] right. [speaker006:] Yeah. [speaker001:] And they [disfmarker] they had done a cheating experiment or something, right? [speaker006:] He [disfmarker] he di he didn't mention that part. [speaker001:] and determined that [disfmarker] [speaker006:] But. [speaker001:] Well, Hynek said that [disfmarker] that, I guess before they had him work on this, they had done some experiment where if they could get that one feature right, it dramatically improved the result. [speaker006:] I see. OK. [speaker001:] So I was thinking, you know [disfmarker] it made me think about this, that if [disfmarker] it'd be an interesting experiment just to see, you know, if you did get all of those right. [speaker006:] Should be. Because if you get all of them in there, that defines all of the phones. So that's [disfmarker] that's equivalent to saying that you've got [disfmarker] [vocalsound] got all the phones right. [speaker001:] Right. [speaker006:] So, if that doesn't help, there's [disfmarker] [speaker001:] Yeah. [speaker006:] Although, yeah, it would be [disfmarker] make an interesting cheating experiment because we are using it in this funny way, where we're converting it into features. [speaker001:] Yeah. And then you also don't know what error they've got on the HTK side. You know? [speaker006:] Yeah. [speaker001:] It sort of gives you your [disfmarker] the best you could hope for, kind of. [speaker003:] Mmm. Mmm, I see. [speaker002:] The soft training of the nets still requires the vector to sum to one, though, [speaker003:] To sum up to one. [speaker002:] right? So you can't really feed it, like, two articulatory features that are on at the same time with ones cuz it'll kind of normalize them down to one half or something like that, for instance. 
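The normalization worry just raised can be shown numerically: a softmax output layer forces the outputs to sum to one, so two features that are both strongly "on" get squashed toward one half each, while independent sigmoid outputs can both approach 1. The activation values below are arbitrary illustrative numbers.

```python
# Numeric illustration of the softmax sum-to-one problem with
# multi-hot targets, versus independent sigmoid outputs.
import math

def softmax(xs):
    es = [math.exp(x) for x in xs]
    z = sum(es)
    return [e / z for e in es]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

acts = [4.0, 4.0, -4.0]  # two features strongly on, one off
print([round(p, 2) for p in softmax(acts)])    # -> [0.5, 0.5, 0.0]
print([round(sigmoid(a), 2) for a in acts])    # -> [0.98, 0.98, 0.02]
```

With sigmoids, each output is its own independent probability, which is what the multi-hot articulatory targets need.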
[speaker007:] But perhaps you have the choice of the [pause] final nonl [speaker003:] Right. Nonlinearity? [speaker007:] uh, nonlinearity, yeah. [speaker003:] Um, [speaker007:] Is it always softmax [speaker003:] it's sig [speaker007:] or [disfmarker]? [speaker003:] No, it's actually sigmoid X [speaker007:] Yeah. [speaker003:] for the [disfmarker] [speaker007:] So if you choose sigmoid it's o it's OK? [speaker003:] You, [speaker006:] Did we just run out of disk, [speaker003:] um [disfmarker] I think [disfmarker] I think apparently, the, uh [disfmarker] [speaker006:] or [disfmarker]? [speaker002:] Why don't you just choose linear? [speaker003:] What's that? [speaker002:] Right? Linear outputs? [speaker003:] Linear outputs? [speaker002:] Isn't that what you'll want? [speaker003:] Um. [speaker002:] If you're gonna do a KL Transform on it. [speaker003:] Right, right. Right, but during the training, we would train on sigmoid X [speaker002:] Oh, you [disfmarker] Yeah? [speaker003:] and then at the end just chop off the final nonlinearity. [speaker002:] Hmm. [speaker006:] So, we're [disfmarker] we're [disfmarker] we're off the air, or [disfmarker]? About to be off the air. [speaker003:] Now can you give me the uh [pause] remote T? [speaker004:] OK, so Eva, co uh [disfmarker] could you read your numbers? [speaker001:] Go ahead and read. [speaker004:] Yeah. [speaker001:] OK. [speaker003:] Alright. [speaker004:] Yeah, let's get started. Um [disfmarker] Hopefully Nancy will come, if not, she won't. [speaker002:] Uh, Robert, do you uh have any way to turn off your uh screensaver on there so that it's not going off every [disfmarker] uh, it seems to have about at two minute [disfmarker] [speaker003:] Yeah, I've [disfmarker] I [disfmarker] uh [disfmarker] it's not that I didn't try. [speaker002:] OK. [speaker003:] and um I [disfmarker] I told it to stay on forever and ever, but if it's not plugged in it just doesn't obey my commands. [speaker002:] OK. [speaker003:] It has a mind. 
[speaker002:] Got it. [speaker005:] Wants to conserve. [speaker003:] But I I just [disfmarker] You know, sort of keep on wiggling. [speaker002:] Yeah, OK. [speaker003:] But uh [disfmarker] we'll just be m m working on it at intensity so it doesn't happen. We'll see. Should we plunge right into it? [speaker004:] Yeah. [speaker003:] So, would you like to [disfmarker] [speaker004:] I think so. [speaker003:] So what I've tried to do here is list all the decision nodes that we have identified on this [pause] side. Commented and [disfmarker] what they're about and sort of [disfmarker] the properties we may um give them. And here are the uh [disfmarker] tasks to be implemented via our data collection. So all of these tasks [disfmarker] The reading is out of these tasks more or less imply that the user wants to go there, sometime or the other. And analogously for example, here we have our EVA um [disfmarker] intention. And these are the data tasks where w we can assume the person would like to enter, view or just approach the thing. Analogously the same on the object information we can see that, you know, we have sort of created these tasks before we came up with our decision nodes so there's a lot of things where we have no analogous tasks, and [pause] that may or may not be a problem. We can change the tasks slightly if we feel that we should have data for e sort of for every decision node so [disfmarker] trying to im um [disfmarker] implant the intention of going to a place now, going to a place later on the same tour, or trying to plant the intention of going sometime on the next tour, or the next day or whenever. [speaker004:] Right, right. [speaker003:] But I think that might be overdoing it a little. [speaker004:] So [disfmarker] Yeah. So let me pop up a level. And uh s s make sure that we're all oriented the same. So What we're gonna do today is two related things. 
Uh one of them is to work on the semantics of the belief net which is going to be the main inference engine for thi the system uh making decisions. And decisions are going to turn out to be parameter choices for calls on other modules. so f the natural language understanding thing is uh, we think gonna only have to choose parameters, but You know, a fairly large set of parameters. So to do that, we need to do two things. One of which is figure out what all the choices are, which we've done a fair amount. Then we need to figure out what influences its choices and finally we have to do some technical work on the actual belief relations and presumably estimates of the probabilities and stuff. But we aren't gonna do the probability stuff today. Technical stuff we'll do [disfmarker] uh [disfmarker] another day. Probably next week. But we are gonna worry about all the decisions and the things that pert that contribute to them. And we're also, sort of uh in the same process, going to work with Fey on what there should be in the dialogues. So One of the s steps that's coming up real soon is to actually get subjects uh [disfmarker] in here, and have them actually record like this. Uh record dialogues more or less. And [disfmarker] depending on what Fey sort of provokes them to say, we'll get information on different things. So [disfmarker] [speaker003:] Well how people phrase different intentions more or less, [speaker004:] Fo v yeah people with the [disfmarker] phrase them [speaker003:] huh? [speaker004:] and so [disfmarker] Uh for, you know, Keith and people worrying about what constructions people use, uh [disfmarker] we have some i we have some ways to affect that by the way the dialogues go. So what Robert kindly did, is to lay out a table of the kinds of uh [pause] things that [disfmarker] that might come up, and, the kinds of decisions. So the uh [disfmarker] uh [disfmarker] on the left are decision nodes, and discrete values.
So if [disfmarker] if we're right, you can get by with um just this middle column worth of decisions, and it's not all that many, and it's perfectly feasible technically to build belief nets that will do that. And he has a handout. [speaker003:] Yeah. Maybe it was too fast plunging in there, because j we have two updates. [speaker004:] Yeah. [speaker003:] Um you can look at this if you want, these are what our subject's going to have to fill out. Any comments I can [disfmarker] can still be made and the changes will be put in correspondingly. [speaker005:] m [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] Yes. [speaker003:] Let me summarize in two sentences, mainly for Eva's benefit, who probably has not heard about the data collection, at all. [speaker001:] OK. [speaker003:] Or have you heard about it? [speaker001:] Not that much [speaker003:] No. [speaker001:] you didn't. [speaker003:] OK. We were gonna put this in front of people. They give us some information on themselves. [speaker001:] OK. [speaker003:] Then [disfmarker] then they will read uh [disfmarker] a task where lots of German words are sort of thrown in between. And um [disfmarker] and they have to read isolated proper names And these change [disfmarker] [speaker004:] S I don't see a release [speaker003:] No, this is not the release form. This is the speaker information form. [speaker004:] Got it. OK, fine. OK. [speaker003:] The release form is over there in that box. [speaker004:] Alright, fair enough. [speaker003:] And um [disfmarker] And then they gonna have to f um um choose from one of these tasks, which are listed here. They [disfmarker] they pick a couple, say three [disfmarker] uh [disfmarker] uh six as a matter of fact. Six different things they sort of think they would do if they were in Heidelberg or traveling someplace [disfmarker] and um [disfmarker] and they have a map. [speaker002:] Hmm. [speaker003:] Like this. Very sketchy, simplified map. 
And they can take notes on that map. And then they call this computer system that works perfectly, and understands everything. [speaker001:] OK. [speaker003:] And um [disfmarker] The comp Yeah, the computer system sits right in front of you, [speaker002:] This is a fictional system obviously, huh. [speaker003:] that's Fey. [speaker005:] I've [disfmarker] I understand everything. [speaker004:] And she does know everything. [speaker005:] Yes I do. [speaker003:] And she has a way of making this machine talk. So she can copy sentences into a window, or type really fast and this machine will use speech synthesis to produce that. So if you ask "How do I get to the castle" then a m s several seconds later it'll come out of here "In order to get to the castle you do [disfmarker]" OK? [speaker002:] Yeah. [speaker003:] And um [disfmarker] And then after three tasks the system breaks down. And Fey comes on the phone as a human operator. And says "Sorry the system broke down but let's continue." And we sort of get the idea what people do when they s think they speak to a machine and what people say when they think they speak to a human, or know, or assume they speak to a human. [speaker001:] OK. Huh. [speaker002:] Mm hmm. Mm hmm. [speaker003:] That's the data collection. And um [disfmarker] And Fey has some thirty subjects lined up? Something? [speaker005:] Yeah. And more and more every day. [speaker003:] And um [disfmarker] And they're [disfmarker] r ready uh [disfmarker] to roll. And we're gonna start tomorrow at three? four? one? [speaker005:] Tomorrow, well we don't know for sure. Because we don't know whether that person is coming or not, [speaker003:] OK. [speaker005:] but [disfmarker] [speaker003:] Around four ish. And um we're still l looking for a room on the sixth floor because they stole away that conference room. Um [disfmarker] behind our backs. 
But [disfmarker] [speaker004:] Well, there are these [disfmarker] uh [disfmarker] uh [disfmarker] oh, I see, we have to [disfmarker] Yeah, it's tricky. We'll [disfmarker] let's [disfmarker] let [disfmarker] we'll do that off line, OK. [speaker003:] Yeah, but I [disfmarker] i i it's happening. David and [disfmarker] and Jane and [disfmarker] and Lila are working on that as we speak. [speaker004:] OK. [speaker003:] OK. That was the uh [disfmarker] the data collection in a nutshell. And um [disfmarker] I can report a [disfmarker] so I did this but I also tried to do this [disfmarker] so if I click on here, Isn't this wonderful? we get to the uh [disfmarker] uh belief net just focusing on [disfmarker] on the g Go there node. uh [disfmarker] Analogously this would be sort of the reason node and the timing node and so forth. [speaker002:] Mm hmm. [speaker003:] And what w what happened is that um design wise I'd sort of n noticed that we can [disfmarker] we still get a lot of errors from a lot of points to one of these sub Go there User Go there Situation nodes. So I came up with a couple of additional nodes here where um whether the user is thrifty or not, and what his budget is currently like, is going to result in some financial state of the user. How much will he [disfmarker] is he willing to spend? Or can spend. Being the same at this [disfmarker] just the money available, which may influence us, whether he wants to go there if it is [disfmarker] you know [disfmarker] charging tons of dollars for admission or its gonna g cost a lot of t e whatever. Twenty two million to fly to International Space Station, you know. [speaker004:] Right. [speaker003:] just [disfmarker] Not all people can do that. 
So, and this actually turned out to be pretty key, because having specified sort of these [disfmarker] uh [disfmarker] this [disfmarker] this [disfmarker] intermediate level Um and sort of noticing that everything that happens here [disfmarker] let's go to our favorite endpoint one is again more or less [disfmarker] we have [disfmarker] um [disfmarker] then the situation nodes contributing to the [disfmarker] the endpoint situation node, which contributes to the endpoint and so forth. um [disfmarker] I can now sort of draw straight lines from these to here, meaning it g of course goes where the sub S [disfmarker] everything that comes from situation, everything that comes from user goes with the sub U, and whatever we specify for the so called "Keith node", or the discourse, what comes from the [disfmarker] um [disfmarker] parser, construction parser, um will contribute to the D and the ontology to the sub O node. And um one just s sort of has to watch which [disfmarker] also final decision node so it doesn't make sense [disfmarker] t to figure out whether he wants to enter, view or approach an object if he never wants to go there in the first place. But this makes the design thing fairly simple. And um now all w that's left to do then is the CPG's, the conditional probabilities, for the likelihood of a person having enough money, actually wanting to go a place if it costs, you know this or that. And um [disfmarker] OK. and once um Bhaskara has finished his classwork that's where we're gonna end up doing. You get involved in that process too. And um [disfmarker] And for now uh the [disfmarker] the question is "How much of these decisions do we want to build in explicitly into our data collection?" 
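The design just described (a "financial state" node mediating between "thrifty"/"budget" and the Go-there decision, with conditional probabilities still to be filled in) can be sketched with a toy marginalization. All node names and numbers below are invented for illustration, not the system's actual CPTs.

```python
# Toy sketch of the conditional-probability reasoning described above:
# an intermediate money_ok node is marginalized out between the
# thrifty / budget inputs and the Go-there decision. Numbers invented.
P_ENOUGH_MONEY = {            # P(money_ok | thrifty, budget)
    (True, "high"): 0.9, (True, "low"): 0.5,
    (False, "high"): 0.95, (False, "low"): 0.2,
}
P_GO = {True: 0.8, False: 0.3}  # P(go_there | money_ok), costly admission

def p_go_there(p_thrifty, p_budget_high):
    """Sum over the parent configurations and the hidden money_ok node."""
    total = 0.0
    for thrifty in (True, False):
        pt = p_thrifty if thrifty else 1 - p_thrifty
        for budget in ("high", "low"):
            pb = p_budget_high if budget == "high" else 1 - p_budget_high
            p_ok = P_ENOUGH_MONEY[(thrifty, budget)]
            total += pt * pb * (p_ok * P_GO[True] + (1 - p_ok) * P_GO[False])
    return total

print(round(p_go_there(0.5, 0.5), 3))  # -> 0.619
```

The intermediate node is what keeps the design simple: each new user or situation feature feeds one of the sub-nodes instead of adding another parent directly to the final decision.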
So [disfmarker] Um, one could [disfmarker] sort of [disfmarker] think of [disfmarker] you know we could call the z see or [disfmarker] you know, people who visit the zoo we could s call it "Visit the zoo tomorrow", so we have an intention of seeing something, but not now [disfmarker] but later. [speaker004:] Right. Yeah. Yeah, so [disfmarker] let's s uh s see I th I think that from one point of view, Uh, um, all these places are the same, so that d d That, um [disfmarker] in terms of the linguistics and stuff, there may be a few different kinds of places, so I th i it seems to me that We ought to decide you know, what things are k are actually going to matter to us. And um, so the zoo, and the university and the castle, et cetera. Um are all big ish things that um [disfmarker] you know [disfmarker] have different parts to them, and one of them might be fine. [speaker003:] Hmm. Hmm, hmm. [speaker004:] And [disfmarker] [speaker003:] Yeah [disfmarker] The [disfmarker] the reason why we did it that way, as a [disfmarker] as a reminder, is uh [disfmarker] no person is gonna do all of them. They're just gonna select u um, according to their preferences. [speaker004:] Yeah, yeah. [speaker003:] "Ah, yeah, I usually visit zoos, or I usually visit castles, or I usually [disfmarker]" And then you pick that one. [speaker004:] Right, no no, but [disfmarker] but s th point is to [disfmarker] to y to [disfmarker] build a system that's got everything in it that might happen you do one thing. [speaker005:] They're redundant. [speaker004:] T to build a system that um [disfmarker] had the most data on a relatively confined set of things, you do something else. And the speech people, for example, are gonna do better if they [disfmarker] if [disfmarker] things come up uh [disfmarker] repeatedly. Now, of course, if everybody says exactly the same thing then it's not interesting. 
So, all I'm saying is i th there's [disfmarker] there's a kind of question of what we're trying t to accomplish. and [disfmarker] I think my temptation for the data gathering would be to uh, you know [disfmarker] And each person is only gonna do it once, so you don't have to worry about them being bored, so if [disfmarker] if it's one service, one luxury item, you know, one big ish place, and so forth and so on, um [disfmarker] then my guess is that [disfmarker] that the data is going to be easier to handle. Now of course you have this I guess possible danger that somehow there're certain constructions that people use uh when talking about a museum that they wouldn't talk about with a university and stuff, um [disfmarker] but I guess I'm [disfmarker] I uh m my temptation is to go for simpler. You know, less variation. But I don't know what other people think about this in terms of [disfmarker] uh [disfmarker] [speaker002:] So I don't exactly understand [disfmarker] like I I [disfmarker] I guess we're trying to [disfmarker] limit the detail of our ontology or types of places that someone could go, right? But who is it that has to care about this, or what component of the system? [speaker004:] Oh, well, uh [disfmarker] th I think there are two places where it comes up. One is uh [disfmarker] in the [disfmarker] th these people who are gonna take this and [disfmarker] and try to do speech with it. [speaker002:] Mm hmm. [speaker004:] uh [disfmarker] Lots of pronunciations of th of the same thing are going to give you better data than l you know, a few pronunciations of lots more things. [speaker002:] OK. [speaker004:] That's one. [speaker002:] So we would rather just ask [disfmarker] uh have a bunch of people talk about the zoo, uh and assume that that will [disfmarker] that the constructions that they use there will give us everything we need to know about these sort of zoo, castle, whatever type things, these bigger places. 
[speaker004:] Bigger [disfmarker] Y yeah thi well this is a question for [disfmarker] [speaker002:] And that way you get the speech data of people saying "zoo" over and over again or whatever too. [speaker004:] Yeah. Yeah. Yeah. [speaker002:] OK. [speaker004:] So this is a question for you, [speaker002:] Mm hmm. [speaker004:] and, you know, if we [disfmarker] if we do, and we probably will, actually try to uh build a prototype, uh probably we could get by with the prototype only handling a few of them anyway. So, Um [disfmarker] [speaker003:] Yeah, the this was sort of [disfmarker] these are all different sort of activities. Um But I think y I [disfmarker] I got the point and I think I like it. We can do [disfmarker] put them in a more hierarchical fashion. So, "Go to place" and then give them a choice, you know either they're the symphony type or opera type or the tourist site guide type or the nightclub disco type person and they say "yeah this is [disfmarker] on that" go to big ish place ", this is what I would do. " [speaker002:] Mm hmm. [speaker003:] And then we have the "Fix" thing, and then maybe "Do something the other day" thing, so. My question is [disfmarker] I guess, to some extent, we should [disfmarker] y we just have to try it out and see if it works. It would be challenging, in [disfmarker] in a sense, to try to make it so [disfmarker] so complex that they even really should schedule, or to plan it, uh, a more complex thing in terms of OK, you know, they should get the feeling that there are these s six things they have to do and they sh can be done maybe in two days. [speaker004:] Well [disfmarker] yeah. Well I think th th [speaker003:] So they make these decisions, [speaker004:] yeah. [speaker003:] "Can I go there tomorrow?" or [disfmarker] you know [disfmarker] influences [speaker004:] Yeah. [speaker002:] Mm hmm. [speaker004:] Well, I think it's easy enough to set that up if that's your expectation. 
So, the uh system could say, Well, uh we'd like to [disfmarker] to set up your program for two days in Heidelberg, you know, let's first think about all the things you might like to do. So there [disfmarker] th i i in [disfmarker] I mean [disfmarker] in [disfmarker] I th I [disfmarker] I'm sure that if that's what you did then they would start telling you about that, and then you could get into um various things about ordering, if you wanted. [speaker003:] Mm hmm. Yeah. Yeah, but I think this is part of the instructor's job. And that can be done, sort of to say, "OK now we've picked these six tasks." "Now you have you can call the system and you have two days." And th w [speaker004:] I'm sorry. No, we have to help [disfmarker] we have to decide. Fey will p carry out whatever we decide. But we have to decide you know, what is the appropriate scenario. That's what we're gonna talk about t yeah. [speaker003:] Yep, yep. [speaker006:] But these are two different scenarios entirely. I mean, one is a planner [disfmarker] The other, it kind of give you instructions on the spot [speaker003:] Yeah, but th the [disfmarker] I don't [disfmarker] I'm not really interested in sort of "Phase planning" capabilities. But it's more the [disfmarker] how do people phrase these planning requests? So are we gonna masquerade the system as this [disfmarker] as you said simple response system, "I have one question I get one response", or should we allow for a certain level of complexity. And a I w think the data would be nicer if we get temporal references. [speaker004:] Well, so Keith, what do you think? [speaker002:] Well, um it seems that [disfmarker] Yeah, I mean, off the top of my head it kinda seems like you would probably just want, you know, richer data, more complex stuff going on, people trying to do more complex sets of things. 
I mean [pause] you know, if our goal is to really sort of be able to handle a whole bunch of different stuff, then throwing harder situations at people will get them to do more linguistic [disfmarker] more interesting linguistic stuff. But I mean [disfmarker] I'm [disfmarker] I'm not really sure Uh, because I don't fully understand like what our choices are of ways to do this here yet. [speaker003:] I mean w we have tested this and a y have you heard [disfmarker] listen to the f first two or th as a matter of fact the second person is uh [disfmarker] is [disfmarker] was faced with exactly this kind of setup. And [disfmarker] [speaker002:] I started to listen to one and it was just like, um, uh, sort of depressing. I thought I'd just sort of listen to the beginning part and the person was just sort of reading off her script or something. And. [speaker003:] Oh, OK. [speaker004:] Yeah. [speaker003:] That was the first subject. [speaker002:] Yeah. [speaker004:] First one wasn't very good. [speaker002:] Yeah. [speaker003:] Yeah. [speaker002:] So um, I [disfmarker] [speaker005:] Although [disfmarker] [speaker003:] Um, it is [disfmarker] already with this it got pretty [disfmarker] with this setup and that particular subject it got pretty complex. [speaker002:] Mm hmm. [speaker003:] Maybe [disfmarker] I suggest we make some fine tuning of these, get [disfmarker] sort of [disfmarker] run through ten or so subjects [speaker002:] Mm hmm. [speaker003:] and then take a breather, and see whether we wanna make it more complex or not, depending on what [disfmarker] what sort of results we're getting. [speaker002:] Right. Yeah. It [disfmarker] In fact, um, I am just you know [disfmarker] today, next couple days gonna start really diving into this data. 
I've basically looked at one of the files [disfmarker] you know one of these [disfmarker] l y y y you gave me those dozens of files and I looked at one of them which was about ten sentences, found fifteen, twenty different construction types that we would have to look for and so on and like, "alright, well, let's start here." Um. So I haven't really gone into the, you know [disfmarker] looked at all of the stuff that's going on. So I don't really [disfmarker] Right, I mean, once I start doing that I'll have more to say about this kind of thing. [speaker004:] OK. [speaker003:] And y and always [disfmarker] [speaker004:] But well th but you did say something important, which is that um you can probably keep yourself fairly well occupied uh [disfmarker] with the simple cases for quite a while. [speaker002:] Yeah. [speaker004:] Although, obviously th so [disfmarker] so that sa s does suggest that [disfmarker] Uh, now, I have looked at all the data, and it's pre it's actually at least to an amateur, quite redundant. That [disfmarker] that it was [disfmarker] it was very stylized, and quite a lot of people said more or less the same thing. [speaker002:] Yeah, Yeah. I um [disfmarker] I did sort of scan it at first and noticed that, and then looked in detail at one of them. [speaker004:] Yeah. [speaker002:] But yeah, yeah I noticed that, too. [speaker004:] So, we [disfmarker] we [disfmarker] we wanna do more than that. [speaker003:] And with this we're getting more. [speaker004:] OK. [speaker003:] No question. [speaker004:] Right. So [disfmarker] [speaker003:] uh w do we wanna get going beyond more, which is sort of the [disfmarker] [speaker004:] Well, OK, so let's [disfmarker] let's take [disfmarker] let's I [disfmarker] I think your suggestion is good, which is we'll do a b uh [disfmarker] a batch. OK. And, uh, Fey, How long is it gonna be till you have ten subjects? Couple days? Or thr f a A week? 
Or [disfmarker] I don't [disfmarker] I don't have a feel for th [speaker005:] Um [disfmarker] I can [disfmarker] Yeah, I mean I s I think can probably schedule ten people, uh, whenever. [speaker004:] Well, it's [disfmarker] it's up to you, I mean I j I [disfmarker] uh e We don't have any huge time pressure. It's just [disfmarker] when you have t [speaker005:] How long will it be? Um [disfmarker] I [disfmarker] I would say maybe two weeks. [speaker004:] Yeah. Oh, OK. So let's do this. Let's plan next Monday, OK, to have a review of what we have so far. and [disfmarker] [speaker003:] This means audio, but [disfmarker] [speaker004:] Huh? [speaker003:] no transcriptions of course, [speaker004:] No, we won't have the transcriptions, [speaker003:] yeah. [speaker004:] but what we should be able to do and I don't know if, Fey, if you will have time to do this, but it would be great if you could, um, not transcribe it all, but pick out uh, some stuff. I mean we could lis uh [disfmarker] just sit here and listen to it all. Are you gonna have the audio on the web site? OK. [speaker003:] Until we reach the gigabyte thing and David Johnson s ki kills me. And we're gonna put it on the web site. Yeah. [speaker004:] Oh, we could get [disfmarker] I mean, you can buy another disk for two hundred dollars, right? I mean it's [disfmarker] it's not like [disfmarker] OK. So, we'll take care of David Johnson. OK. [speaker003:] No, he [disfmarker] uh, he [disfmarker] he has been solving all our problems or [disfmarker] is wonderful, [speaker005:] Take [disfmarker] care of him. [speaker004:] OK. [speaker003:] so s [speaker004:] Alright. So we'll buy a disk. But anyway, so, um, If you [disfmarker] if you can think of a way [disfmarker] to uh, point us to th to interesting things, sort of as you're doing this or [disfmarker] or something uh, make your [disfmarker] make notes or something that [disfmarker] that this is, you know, something worth looking at. 
And other than that, yeah I guess we'll just have to uh, listen [disfmarker] although I guess it's only ten minutes each, right? Roughly. [speaker005:] Well, I guess. I'm not sure how long it's actually going to take. [speaker003:] The reading task is a lot shorter. That was cut by fifty percent. And the reading, nobody's interested in that except for the speech people. [speaker004:] Right. No, we don't care about that at all. [speaker003:] So. It's actually like five minutes dialogue. [speaker004:] I b My guess is it's gonna be ten. People [disfmarker] [speaker003:] Ten minutes is long. [speaker004:] I understand, but people [disfmarker] people [disfmarker] you know uh [disfmarker] [speaker005:] It feels like a long time but. [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] It feels like forever when you're doing it, but then it turns out to be three minutes and forty five seconds. [speaker004:] Yeah. Could be. [speaker002:] Yeah. [speaker004:] OK. I was thinking people would, you know, hesitate and [disfmarker] Whatever. Whatever it is we'll [disfmarker] we'll deal with it. [speaker003:] Yeah, it's not [disfmarker] [speaker004:] OK, so that'll be [disfmarker] that'll be [disfmarker] um [disfmarker] on [disfmarker] on the web page. [speaker003:] And it's fun. OK. [speaker004:] That's great. Um But anyway [disfmarker] yeah, so I think [disfmarker] it's a good idea to start with the sort of relatively straight forward res just response system. And then if we want to uh [disfmarker] get them to start doing [disfmarker] uh [disfmarker] multiple step planning with a whole bunch of things and then organize them an um tell them which things are near each other and [disfmarker] you know, any of that stuff. uh [disfmarker] You know, "Which things would you like to do Tuesday morning?" [speaker003:] Yeah. [speaker004:] So yeah I [disfmarker] th that seems [disfmarker] pretty straight forward. 
[speaker005:] But were you saying that [disfmarker] [speaker003:] I need those back by the way. [speaker004:] OK. [speaker005:] Yeah. [speaker002:] OK. [speaker003:] That's for [disfmarker] [speaker004:] I'm sorry, Fey, what? [speaker005:] That w maybe one thing we should do is go through this list and sort of select things that are categories and then o offer only one member of that category? [speaker004:] That's what I was suggesting for the first round, yeah. [speaker005:] OK. [speaker002:] So rather than having zoo and castle. [speaker005:] And then, I mean, they could be alternate versions of the same [disfmarker] If you wanted data on different constructions. [speaker004:] They could, but i but i uh tha eh they c yeah, but [disfmarker] uh [disfmarker] but [disfmarker] [speaker005:] Like one person gets the version with the zoo as a choice, and the other person gets the [disfmarker] [speaker004:] You could, but i but I [disfmarker] I [disfmarker] I think in the short run, [disfmarker] [speaker003:] And no, th the per the person don't get it. I mean, this is why we did it, because when we gave them just three tasks for w part A and three tasks for part B a [speaker004:] Right. Yeah. [speaker005:] Well no, they could still choose. They just wouldn't be able to choose both zoo and say, touring the castle. [speaker003:] Exactly. This is limiting the choices, but yeah. Right. OK, sorry. But um I [disfmarker] I think this approach will very well work, but the person was able to look at it and say "OK, This is what I would actually do." [speaker005:] Yeah. [speaker004:] OK. [speaker003:] Yeah. OK. [speaker005:] He was vicious. [speaker003:] OK, we gotta [disfmarker] we gotta disallow uh [disfmarker] traveling to zoos and uh castles at the same time, sort of [disfmarker] [speaker005:] I mean there [disfmarker] they are significantly different, but. 
[speaker003:] But no, they're [disfmarker] I mean they're sort of [disfmarker] this is where tour becomes [disfmarker] you know tourists maybe a bit different [speaker005:] Yeah, I guess so. [speaker003:] and, um, these are just places where you [disfmarker] you enter um, much like here. [speaker004:] Yeah. [speaker003:] But we can uh [disfmarker] [speaker004:] Yeah, in fact if y if y if you use the right verb for each in common, like at you know, "attend a theater, symphony or opera" is [disfmarker] is a group, and "tour the university, castle or zoo", [speaker003:] mm hmm Yeah. [speaker004:] all of these d do have this kind of "tour" um [disfmarker] aspect about the way you would go to them. And uh, the movie theater is probably also uh [disfmarker] e is a "attend" et cetera. [speaker003:] Attend, yeah. [speaker004:] So it may turn out to be not so many different kinds of things, [speaker003:] Hmm, mm hmm. [speaker004:] and then, what one would expect is that [disfmarker] that the sentence types would [disfmarker] uh their responses would tend to be grouped according to the kind of activity, you would expect. [speaker002:] Mm hmm. [speaker006:] But I mean i it seem that um [disfmarker] there is a difference between going [disfmarker] to see something, and things like "exchange money" or "dine out" [speaker004:] Oh, absolutely. Yeah. [speaker006:] uh [disfmarker] [@ @] function, yeah. [speaker003:] Yeah, this is where [disfmarker] yeah [disfmarker] th the function stuff is definitely different and the getting information or g stuff [disfmarker] yeah. OK. But this is open. So since people gonna still pick something, we we're not gonna get any significant amount of redundancy. And for reasons, we don't want it, really, in that sense. And um we would be ultimately more interested in getting all the possible ways of people asking, oh, for different things with [disfmarker] or with a computer. 
And so if you can think of any other sort of high level tasks a tourist may do just always [disfmarker] just m mail them to us and we'll sneak them into the collection. We're not gonna do much statistical stuff with it. [speaker004:] We don't have enough. [speaker003:] No. But it seems like since we [disfmarker] since we are getting towards uh subject [disfmarker] uh fifty subjects and if we can keep it up um to a [disfmarker] uh [disfmarker] sort of five four ish per week rate, we may even reach the one hundred before Fey t takes off to Chicago. [speaker005:] That means that one hundred people have to be interested. [speaker002:] Good luck. [speaker005:] Yeah. [speaker004:] Well, um, these are all f people off campus s from campus so far, [speaker005:] Yeah. [speaker004:] right? [speaker005:] Yeah. [speaker004:] So we [disfmarker] yeah we don't know how many we can get next door at the [disfmarker] uh shelter for example. [speaker002:] Hmm. [speaker004:] Uh for ten bucks, probably quite a few. [speaker002:] Yeah. That's right. [speaker004:] Yeah. So, alright, so let's go [disfmarker] let's go back then, to the [disfmarker] the chart with all the decisions and stuff, and see how we're doing. [speaker003:] Yep. [speaker004:] Do [disfmarker] do people think that, you know this is [disfmarker] is gonna [disfmarker] um cover what we need, or should we be thinking about more? [speaker003:] Okay, in terms of decision nodes? [speaker004:] Yep. [speaker003:] I mean, Go there is [disfmarker] is a yes or no. Right? [speaker004:] Yep. [speaker002:] Mm hmm. [speaker003:] I'm also interested in th in this "property" uh line here, so if you look at [disfmarker] sorry, look at that um, timing was um [disfmarker] I have these three. Do we need a final differentiation there? Now, later on the same tour, sometimes on the next tour. [speaker002:] What's this idea of "next tour"? 
I mean [disfmarker] [speaker003:] It's sort of next day, so you're doing something now and you have planned to do these three four things, [speaker002:] Mm hmm. [speaker003:] and you can do something immediately, [speaker002:] Mm hmm. [speaker003:] you could sort of tag it on to that tour [speaker002:] Or [disfmarker] OK. [speaker003:] or you can say this is something I would do s I wanna do sometime l in my life, basically. [speaker002:] OK. OK. So [disfmarker] so this tour is sort of just like th the idea of current s round of [disfmarker] of touristness or whatever, [speaker004:] Right. Yeah. [speaker002:] OK. [speaker004:] Yeah, probably between stops back at the hotel. [speaker002:] OK. Got it. [speaker004:] I mean if you [disfmarker] if [disfmarker] if you wanted precise about it, uh you know, [speaker002:] Got it. [speaker004:] uh [disfmarker] and I think that's the way tourists do organize their lives. [speaker002:] Sure, sure, sure. [speaker004:] You know, "OK, we'll go back to the hotel and then we'll go off and [disfmarker]" [speaker002:] OK. [speaker006:] So all tours [disfmarker] b a tour happens only within one day? [speaker004:] Yes. [speaker002:] OK. [speaker006:] So the next tour will be tomorrow? [speaker004:] It [disfmarker] Right. For this. [speaker002:] OK. Just to be totally clear. OK. [speaker003:] Well, my visit to Prague there were some nights where I never went back to the hotel, so whether that counts as a two day tour or not we'll have to [vocalsound] think. [speaker002:] You just spend the whole time at U Fleku or something, [speaker006:] Yeah. [speaker004:] I [disfmarker] w we will [disfmarker] we will not ask you more. [speaker002:] ri [speaker005:] Right. [speaker006:] Right. [speaker005:] That's enough. [speaker003:] I don't know. What is the uh [disfmarker] the [disfmarker] the English co uh um cognate if you want, for "Sankt Nimmerlandstag"? 
[speaker002:] Keine Ahnung (no idea) [speaker003:] Sort of We'll do it on [disfmarker] when you say on that d day it means it'll never happen. [speaker004:] Yeah. Right. [speaker002:] OK. [speaker003:] Do you have an expression? Probably you sh [speaker002:] Not that I know of actually. [speaker003:] Yeah, when hell [disfmarker] Yep, we'll do it when hell freezes over. [speaker004:] Yeah. [speaker003:] So maybe that should be another [vocalsound] property in there. [speaker006:] Right. [speaker005:] Never. [speaker004:] Yeah. Yeah. No. [speaker003:] OK. Um, the reason why [disfmarker] why do we go there in the first place IE uh [disfmarker] it's either uh [disfmarker] for sightseeing, for meeting people, for running errands, or doing business. Entertainment is a good one in there, I think. I agree. [speaker002:] So, business is supposed to uh, be sort of [disfmarker] it [disfmarker] like professional type stuff, right, or something like that? [speaker003:] Yep. [speaker002:] OK. Um. [speaker003:] I mean [disfmarker] this w this is uh an old uh Johno thing. He sort of had it in there. "Who is the [disfmarker] the tour is the person?" So it might be a tourist, it might be a business man who's using the system, who wants to sort of go to some [disfmarker] [speaker002:] Mm hmm. Yeah. [speaker004:] Yeah, or [disfmarker] or both. [speaker002:] Yeah. Yeah, I mean like for example my [disfmarker] my father is about to travel to Prague. [speaker003:] Yep. [speaker002:] He'll be there for two weeks. He is going to uh [disfmarker] He's there to teach a course at the business school but he also is touring around and so he may have some mixture of these things. [speaker004:] Mmm. [speaker003:] Yep. [speaker004:] Sure. [speaker003:] Yep. [speaker004:] Right. [speaker003:] He would [disfmarker] [speaker006:] What ab What do you have in mind in terms of um [disfmarker] socializing? What kind of activities? [speaker003:] Eh, just meeting people, basically.
[speaker006:] Oh [disfmarker] [speaker003:] "I want to meet someone somewhere", which be puts a very heavy constraint on the "EVA" you know, because then if you're meeting somebody at the town hall, you're not entering it usually, [speaker002:] Yeah. [speaker003:] you're just [disfmarker] want to approach it. [speaker002:] So [disfmarker] I mean, does this capture, like, where do you put [disfmarker] "Exchange money" is an errand, right? But what about uh [disfmarker] [speaker004:] Mm hmm [speaker003:] Yep. [speaker002:] So, like "Go to a movie" is now entertainment, "Dine out" is [disfmarker] [speaker006:] Socializing, I guess. [speaker004:] No, I I well, I dunno. Let [disfmarker] Let [disfmarker] well, we'll put it somewhere, [speaker002:] So I mean [disfmarker] [speaker004:] but [disfmarker] but [disfmarker] um [disfmarker] [speaker002:] Right. [speaker004:] I would say that if "Dine out" is a special c uh [disfmarker] if you're doing it for that purpose then it's entertainment. [speaker002:] Yeah. [speaker004:] And [disfmarker] we'll also as y as you'll s further along we'll get into business about "Well, you're [disfmarker] you know [disfmarker] this is going over a meal time, do you wanna stop for a meal or pick up food or something?" [speaker002:] Mm hmm. [speaker004:] And that's different. That's [disfmarker] that's sort of part of th that's not a destination reason, that's sort of "en passant," right. [speaker002:] Right. [speaker003:] That goes with the "energy depletion" function, blech. [speaker002:] Yeah. [speaker004:] Right, yeah. [speaker003:] OK, "endpoint". [speaker002:] "Tourist needs food, badly" [speaker004:] Right. [speaker003:] "Endpoint" is pretty clear. Um, "mode", uh, I have found three, "drive there", "walk there" uh [disfmarker] or "be driven", which means bus, taxi, BART. [speaker004:] OK. [speaker003:] Yeah. Yep. 
[speaker004:] Obviously taxis are very different than buses, but on the other hand the system doesn't have any public transport [disfmarker] This [disfmarker] the planner system doesn't have any public transport in it yet. [speaker003:] So this granularity would suffice, I think w if we say the person probably, based on the utterance we on the situation we can conclude wants to drive there, walk there, or use some other form of transportation. [speaker002:] H How much of Heidelberg can you get around by public transport? I mean in terms of the interesting bits. There's lots of bits where you don't really I've only ev was there ten years ago, for a day, so I don't remember, but. [speaker004:] Mm Well, [speaker002:] I mean, like the [disfmarker] sort of the tourist y bits [disfmarker] [speaker003:] Everywhere. [speaker002:] is it like [disfmarker] [speaker004:] you can't get to the Philosophers' Way very well, but, [speaker002:] Yeah. [speaker004:] I mean there are hikes that you can't get to, but [disfmarker] [speaker003:] Yeah. [speaker002:] OK. [speaker004:] but I think other things you can, if I remember right. [speaker001:] So is like "biking there" [disfmarker] part of like "driving there", [speaker003:] Yeah, um we actually [disfmarker] biking should be [disfmarker] should be a separate point because we have a very strong bicycle planning component. [speaker001:] or [disfmarker]? [speaker004:] Oh! [speaker003:] So. [speaker005:] Mmm g that's good. [speaker003:] Um. [speaker004:] Put it in. [speaker003:] Bicycles c should be in there, but, will we have bic I mean is this realistic? I mean [disfmarker] [speaker002:] Yeah. [speaker004:] OK, we can leave it out, I guess. [speaker002:] Yeah. [speaker003:] We can [disfmarker] we can sort of uh, drive [disfmarker] [speaker002:] I would [disfmarker] I would lump it with "walk" because hills matter. [speaker003:] Yeah. [speaker002:] Right? You know. Things like that. [speaker003:] Yeah. [speaker004:] OK. 
Skateboards right, [speaker006:] Right. [speaker004:] anyway. Scooters, right? [speaker003:] Yep. [speaker004:] Alright. [speaker003:] OK, "Length" is um, you wanna get this over with as fast as possible, you wanna use some part of what [disfmarker] of the time you have. Um, they can. [speaker004:] Ye [speaker003:] But we should just make a decision whether we feel that they want to use some substantial or some fraction of their time. You know, they wanna do it so badly that they are willing to spend uh [disfmarker] you know the necessary and plus time. [speaker002:] Hmm. [speaker003:] And um [disfmarker] And y you know, if we feel that they wanna do nothing but that thing then, you know, we should point out that [disfmarker] to the planner, that they probably want to use all the time they have. So, stretch out that visit for that. [speaker004:] Mm hmm. [speaker002:] Wow [disfmarker] It seems like this would be really hard to guess. I mean, on the part of the system. It seems like it [disfmarker] I mean you're [disfmarker] you're talking about rather than having the user decide this you're supposed t we're supposed to figure it out? [speaker004:] w well [speaker003:] Th the user can always s say it, but it's just sort of we [disfmarker] we hand over these parameters if we make [disfmarker] if we have a feeling that they are important. [speaker002:] Overrider [speaker004:] Yeah. [speaker002:] Mm hmm. [speaker003:] And that we can actually infer them to a significant de degree, or we ask. [speaker004:] And [disfmarker] And par yeah, and part of the system design is that if it looks to be important and you can't figure it out, then you ask. [speaker002:] OK. [speaker003:] Yeah. [speaker002:] OK. [speaker004:] But hopefully you don't ask you know, a all these things all the time. Or [disfmarker] eh so, y but there's th but definitely a back off position to asking. [speaker002:] Yeah. Yeah. Right. Yeah. 
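The "infer, else ask" design just described (the system hands over a parameter only when it believes it matters, infers it when it can, and backs off to asking the user otherwise) can be sketched minimally as follows. The function name, posterior representation, and confidence threshold are illustrative assumptions, not the actual system's API:

```python
def resolve_parameter(name, posterior, threshold=0.8, ask=input):
    """Return the most probable value for a decision node if the belief
    net is confident enough; otherwise back off to asking the user.

    `posterior` is assumed to be a dict mapping values to probabilities,
    e.g. {"walk": 0.9, "drive": 0.1}.
    """
    # Pick the value the belief net currently favors.
    best_value = max(posterior, key=posterior.get)
    if posterior[best_value] >= threshold:
        return best_value
    # Not confident enough: fall back to an explicit question.
    return ask(f"Please specify {name}: ")
```

As a usage sketch, `resolve_parameter("mode", {"walk": 0.9, "drive": 0.1})` would return `"walk"` without asking, while a flat posterior would trigger the question.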
[speaker003:] And if no [disfmarker] no part of the system ever comes up with the idea that this could be important, no planner is ever gonna ask for it. [speaker002:] Yeah. [speaker003:] y so [disfmarker] And I like the idea that, you know, sort of [disfmarker] Jerry pushed this idea from the very beginning, that it's part of the understanding business to sort of make a good question of what's s sort of important in this general picture, what you need t [speaker002:] Mm hmm. [speaker003:] If you wanna simulate it, for example, what parameters would you need for the simulation? And, Timing, uh, uh, Length would definitely be part of it, "Costs", "Little money, some money, lots of money"? [speaker004:] Mm hmm. [speaker003:] Actually, maybe uh F [comment] uh so, F Yeah, OK. Hmm? [speaker002:] You could say "some" in there. [speaker006:] I must say that thi this one looks a bit strange to me. Um [disfmarker] maybe [disfmarker] It seems like appropriate if I go to Las Vegas. Well [disfmarker] but I decide k kind of how much money uh I'm willing to lose. But a I as a tourist, I'll just paying what's [disfmarker] what's more or less is required. [speaker004:] Well, no. I think there are [disfmarker] there're different things where you have a ch choice, [speaker005:] Mmm. [speaker004:] for example, uh this t interacts with "do am I do oh are you willing to take a taxi?" [speaker002:] Yeah. Dinner. [speaker004:] Or uh, you know, if [disfmarker] if you're going to the opera are you gonna l look for the best seats or the peanut gallery [speaker006:] The best seat or [disfmarker] or [disfmarker] Right. [speaker004:] or, you know, whatever? [speaker002:] OK. So [disfmarker] [speaker004:] S so I think there are a variety of things in which um [disfmarker] Tour tourists really do have different styles eating. Another one, you know. [speaker002:] Yeah. [speaker006:] Right, that's true. [speaker005:] Right. 
[speaker003:] The [disfmarker] what [disfmarker] what my sort of sentiment is they're [disfmarker] Well, I [disfmarker] I once had to write a [disfmarker] a [disfmarker] a [disfmarker] a charter, a carter for a [disfmarker] a student organization. And they had [disfmarker] wanted me to define what the quorum is going to be. And I looked at the other ones and they always said ten percent of the student body has to be present at their general meeting otherwise it's not a [disfmarker] And I wrote in there "En Enough" people have to be there. And it was hotly debated, but people agreed with me that everybody probably has a good feeling whether it was a farce, a joke, or whether there were enough people. [speaker002:] Yeah. [speaker003:] And if you go to Turkey, you will find when people go shopping, they will say "How much cheese do you want?" and they say "Ah, enough." And the [disfmarker] and the [disfmarker] this used all over the place. Because the person selling the cheese knows, you know, that person has two kids and you know, a husband that dislikes cheese, so this is enough. [speaker002:] Mm hmm. [speaker003:] And um so the middle part is always sort of the [disfmarker] the golden way, right? So you can s you can be really [disfmarker] make it as cheap as possible, or you can say "I want, er, you know, I don't care" [speaker002:] Money is no object. Mm hmm. [speaker004:] Yeah. [speaker003:] Money is no object, or you say "I just want to spend enough". [speaker002:] Mm hmm. [speaker003:] Or the sufficient, or the the appropriate amount. [speaker002:] Yeah. [speaker003:] But, Then again, this may turn out to be insufficient for our purposes. But well, this is my first guess, [speaker002:] I mean y [speaker003:] in much the same way as how [disfmarker] how d you know [disfmarker] should the route be? [speaker002:] Yeah. [speaker003:] Should it be the easiest route, even if it's a b little bit longer? [speaker002:] Mm hmm. [speaker003:] No steep inclinations? 
Go the normal way? Whatever that again means, er [disfmarker] or do you [disfmarker] does the person wanna rough it? [speaker002:] Mm hmm. I mean [disfmarker] th so there's a couple of different ways you can interpret these things right? You know [disfmarker] "I want to go there and I don't care if it's really hard." Or if you're an extreme sport person, you know. "I wanna go there and I insist on it being the hard way." [speaker004:] Right. [speaker002:] Right? you know, so I assume we're going for the first interpretation, [speaker005:] Right. [speaker002:] right? Something like [disfmarker] I'll go th I mean [disfmarker] I'd li I dunno. [speaker004:] No, I think he was going for the second one ar actually. [speaker002:] It's different from thing to [disfmarker] Yeah? [speaker004:] Anyway, we'll sort th yeah, we'll sort that out. [speaker002:] I [disfmarker] I [disfmarker] OK. [speaker004:] Right. [speaker002:] Yeah. [speaker004:] Absolutely. [speaker003:] Well, this is all sort of um, top of my head. No [disfmarker] no research behind that. [speaker002:] Yeah. [speaker003:] Um [disfmarker] "Object information", "Do I [disfmarker] do I wanna know anything about that object?" is either true or false. And. if I care about it being open, accessible or not, I don't think there's any middle ground there. Um, either I wanna know where it is or not, I wanna know about it's history or not, or, um I wanna know about what it's good for or not. Maybe one could put scales in there, too. So I wanna know a l lot about it. [speaker004:] Yeah, now ob OK, I'm sorry, go ahead, what were you gonna say? [speaker003:] One could put scales in there. So I wanna know a lot about the history, just a bit. [speaker004:] Yeah, right well y i w if we [disfmarker] w right. So "object" becomes "entity", right? [speaker003:] Yep, that's true. [speaker004:] Yeah, but we don't have to do it now. [speaker003:] Yep. That was the wrong shortcut anyhow. 
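The decision-node inventory walked through in this stretch of the meeting can be summarized in a small sketch. All node names and value sets below are reconstructed from the conversation and are assumptions, not the group's actual belief-net implementation:

```python
# Illustrative inventory of the decision nodes discussed: each node maps
# to its candidate value set (reconstructed from the meeting, not canonical).
DECISION_NODES = {
    "go_there": ["yes", "no"],
    "timing": ["now", "later_same_tour", "next_tour"],
    "reason": ["sightseeing", "socializing", "errands",
               "business", "entertainment"],
    "endpoint": ["enter", "view", "approach"],      # the "EVA" distinction
    "mode": ["drive", "walk", "be_driven"],          # bus, taxi, BART, etc.
    "length": ["as_little_time_as_possible", "some_time",
               "all_available_time"],
    "costs": ["little_money", "some_money", "lots_of_money"],
    "route_type": ["easiest", "normal", "rough_it"],
    "object_info": ["true", "false"],  # per aspect: open, location,
                                       # history, function
}

def is_valid(node, value):
    """Check that a proposed value belongs to a node's value set."""
    return value in DECISION_NODES.get(node, [])
```

This is only the node structure; the CPTs mentioned later in the meeting would sit on top of it.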
[speaker004:] And we think that's it, interestingly enough, that um, you know, th or [disfmarker] or [disfmarker] or something very close to it is going to be uh [disfmarker] going to be enough. And [disfmarker] [speaker005:] Still wrong. [speaker003:] Yeah. [speaker002:] OK. [speaker004:] Alright, so um [disfmarker] So I think the order of things is that um, Robert will clean this up a little bit, although it looks pretty good. And [disfmarker] [speaker003:] What, well this is the part that [disfmarker] [speaker004:] Huh? [speaker003:] this is the part that needs the work. [speaker004:] Right. Yeah, so [disfmarker] [speaker002:] Yeah. [speaker004:] right, so [disfmarker] So, um In parallel, uh [disfmarker] three things are going to happen. Uh Robert and Eva and Bhaskara are gonna actually [disfmarker] build a belief net that [disfmarker] that, um, has CPT's and, you know, tries to infer this from various kinds of information. And Fey is going to start collecting data, and we're gonna start thinking a about [disfmarker] uh [disfmarker] what constructions we want to elicit. And then w go it may iterate on uh, further data collection to elicit [disfmarker] [speaker002:] D Do you mean [disfmarker] Do you mean eliciting particular constructions? Or do you mean like what kinds of things we want to get people talking about? Semantically speaking, eh? [speaker004:] Well, yes. Both. [speaker002:] OK. [speaker004:] Uh, and [disfmarker] Though for us, constructions are primarily semantic, right? [speaker002:] Right. [speaker004:] And [disfmarker] And so [disfmarker] uh [disfmarker] [speaker002:] Sure. I mean from my point of view I'm [disfmarker] I'm trying to care about the syntax, so you know [disfmarker] [speaker004:] Well that too, but um [disfmarker] You know if th if we in [disfmarker] if we you know, make sure that we get them talking about temporal order. [speaker002:] OK. Yeah. 
[speaker004:] OK, that would be great and if th if they use prepositional phrases or subordinate clauses or whatever, [speaker002:] Mm hmm. Right. OK. [speaker004:] um [disfmarker] W You know, whatever form they use is fine. [speaker002:] OK. [speaker004:] But I [disfmarker] I think that probably we're gonna try to look at it as you know, s what semantic constructions d do we [disfmarker] do we want them to uh do direc [speaker002:] OK. [speaker004:] you know, um, "Caused motion", I don't know, something like that. [speaker002:] OK. [speaker004:] Uh But, Eh uh this is actually a conversation you and I have to have about your thesis fantasies, and how all this fits into that. [speaker002:] Got it. Yeah. Uh Yeah. OK. [speaker004:] But uh [disfmarker] [speaker003:] Well, I will tell you the German tourist data. [speaker002:] OK. [speaker003:] Because I have not been able to dig out all the stuff out of the m ta thirty D V [speaker002:] OK. [speaker003:] Um [disfmarker] If you [disfmarker] [speaker002:] Is that roughly the equivalent of [disfmarker] of what I've seen in English or is it [disfmarker] [speaker003:] No, not at all. [speaker002:] OK. [speaker003:] Dialogues. SmartKom [disfmarker] [speaker002:] OK. [speaker003:] SmartKom [disfmarker] Human. Wizard of Oz. [speaker002:] OK. Same [disfmarker] OK, that. Got it. Like what [disfmarker] What have I got now? I mean I have uh what [disfmarker] what I'm loo what I [disfmarker] Those files that you sent me are the user side of some interaction with Fey? [speaker003:] A little bit of data, I [disfmarker] [speaker002:] Is that what it is? Or [disfmarker]? [speaker003:] With nothing. [speaker004:] No, no. [speaker002:] Just talking into a box and not hearing anything back. [speaker003:] Yep. [speaker002:] OK. [speaker003:] Yep. Some data I collected in a couple weeks for training recognizers and email way back when. [speaker002:] OK. OK. [speaker003:] Nothing to write home about. [speaker002:] OK. 
[speaker003:] And um [disfmarker] the [disfmarker] see this [disfmarker] this [disfmarker] this [disfmarker] uh [disfmarker] ontology node is probably something that I will try to expand. Once we have the full ontology API, what can we expect to get from the ontology? And hopefully you can sort of also try to find out, you know, sooner or later in the course of the summer what we can expect to get from the discourse that might, you know [disfmarker] or the [disfmarker] [speaker002:] Mm hmm. [speaker004:] mm hmm. [speaker003:] not the discourse, the utterance as it were, uh, [speaker002:] Mm hmm. [speaker004:] Right. [speaker003:] in terms of uh [disfmarker] [speaker004:] Right, but we're not expecting Keith to actually build a parser. [speaker003:] No, no, no, no, no. [speaker002:] Right, Right. [speaker004:] OK. We are expecting Johno to build a parser, [speaker003:] Uh, this is [disfmarker] Yes. [speaker004:] but that's a [disfmarker] [speaker002:] By the end of the summer, too. [speaker004:] No. [speaker003:] No. [speaker004:] No. Uh [disfmarker] He's g he's hoping to do this for his masters' thesis s by a year from now. [speaker003:] But it's sort of [disfmarker] it's [disfmarker] [speaker002:] Right. Hmm. Still, pretty formidable actually. [speaker004:] Eh absolutely. Uh [disfmarker] limited. I mean, you know, the idea is [disfmarker] is, [speaker002:] Yeah. [speaker004:] Well, the hope is that the parser itself is, uh, pretty robust. But it's not popular [disfmarker] it's only p only [disfmarker] [speaker002:] Right, Right. Existence proof, you know. Set up the infrastructure, [speaker004:] Right. It's only popula [speaker002:] yeah. [speaker004:] Right. [speaker002:] Um sometime, I have to talk to some subset of the people in this group, at least about um what sort of constructions I'm looking for. 
I mean, you know obviously like just again, looking at this one uh thing, you know, I saw y things from [disfmarker] sort of as general as argument structure constructions. Oh, you know, I have to do Verb Phrase. I have to do uh [disfmarker] uh [disfmarker] unbounded dependencies, you know, which have a variety of constructions in [disfmarker] uh [disfmarker] uh [disfmarker] instantiate that. On the other hand I have to have, you know, there's particular uh, fixed expressions, or semi fixed expressions like "Get" plus path expression for, you know, "how d ho how do I get there?", [speaker004:] Mm hmm. [speaker002:] "How do I get in?", "How do I get away?" [speaker004:] Right. [speaker002:] and all that kind of stuff. Um, so there's a variety of sort of different sorts of constructions [speaker004:] Absolutely. [speaker002:] and it [disfmarker] you know it's [disfmarker] it's sort of like anything goes. Like [disfmarker] [speaker004:] OK, so this is [disfmarker] I think we're gonna mainly work on with George. OK, and hi [speaker002:] OK. [speaker004:] let me f th [disfmarker] say what I think is [disfmarker] is [disfmarker] so the idea is [disfmarker] uh [disfmarker] first of all I misspoke when I said we thought you should do the constructions. Cause apparently for a linguist that means to do completely and perfectly. So what I [disfmarker] yeah, OK, [disfmarker] So what [disfmarker] what I meant was "Do a first cut at". [speaker002:] er [disfmarker] that's what Yeah, yeah. [speaker004:] OK, Because uh [disfmarker] we do wanna get them r u perfectly [disfmarker] but I think we're gonna have to do a first cut at a lot of them to see how they interact. [speaker002:] Of course. Right, exactly. Now it [disfmarker] w we talked about this before, right. And I [disfmarker] I me it would it would be completely out of the question to really do more than, say, like, oh I don't know, ten, over the summer, [speaker004:] Yeah. 
[speaker002:] but uh, but you know obviously we need to get sort of a general view of what things look like, so yeah. [speaker004:] Right. So the idea is going to be to do [disfmarker] sort of like Nancy did in some of the er these papers where you do enough of them so you can go from top to bottom [disfmarker] so you can do f you know, f f uh [disfmarker] have a complete story ov of s of some piece of dialogue. [speaker002:] Mm hmm. [speaker004:] And that's gonna be much more useful than having all of the clausal constructions and nothing else, or [disfmarker] or [disfmarker] or something like that. [speaker002:] Yeah. Sure. Yeah. [speaker004:] So that the [disfmarker] the trick is going to be t to take this and pick a [disfmarker] some sort of lattice of constructions, [speaker002:] Mm hmm. [speaker004:] so some lexical and some phrasal, and [disfmarker] and, you know, [speaker002:] Mm hmm. [speaker004:] whatever you need in order to uh, be able to then, uh, by hand, you know, explain, some fraction of the utterances. [speaker002:] Mm hmm. Yeah. [speaker004:] And so, exactly which ones will partly depend on your research interests and a bunch of other things. [speaker002:] Mm hmm. Sure. OK. But I mean in terms of the s th sort of level of uh [disfmarker] of analysis, you know, these don't necessarily have to be more complex than like the "Out of" construction in the BCP paper where it's just like, you know, half a page on each one or something. [speaker004:] Correct. Oh yeah [disfmarker] yeah. V a half a page is [disfmarker] is what we'd like. [speaker002:] Yeah. [speaker004:] And if [disfmarker] if there's something that really requires a lot more than that then it does and we have to do it, but [disfmarker] [speaker002:] Yeah. For the first cut, that should be fine, yeah. [speaker004:] Yeah. [speaker003:] We could sit down and think of sort of the [disfmarker] the ideal speaker utterances, and I mean two or three that follow each other, [speaker002:] Mm hmm. 
[speaker003:] so, where we can also sort of, once we have everything up and running, show the tremendous, insane inferencing capabilities of our system. [speaker002:] Mm hmm. [speaker003:] So, you know, as [disfmarker] as the SmartKom people have. This is sort of their standard demo dialogue, which is, you know, what the system survives and nothing but that. [speaker002:] Mm hmm. Mm hmm. [speaker003:] Um, we could also sor sort of have the analogue of o our sample sentences, the ideal sentences where we have complete construction coverage and, sort of, they match nicely. [speaker002:] Mm hmm. [speaker003:] So the [disfmarker] the "How do I get to X?", you know, that's definitely gonna be uh, a major one. [speaker002:] Yeah. Yeah. That's about six times in this little one here, so uh, [vocalsound] yeah. [speaker004:] Right. [speaker003:] Yep. "Where is X?" might be another one which is not too complicated. [speaker002:] Yeah. Mm hmm. [speaker003:] And um "Tell me something about X." [speaker002:] Yeah. [speaker003:] And hey, that's [disfmarker] that's already covering eighty percent of the system's functionality. [speaker004:] Ye Right, but it's not covering eighty percent of the intellectual interest. [speaker002:] Yeah. [speaker003:] No, we can w throw in an "Out of Film" construction if you want to, [speaker004:] No, no, no. [speaker003:] but [disfmarker] [speaker004:] Well the [disfmarker] th the thing is there's a lot that needs to be done to get this right. OK, I th [speaker003:] OK. [speaker004:] We done? [speaker003:] I have one bit of news. [speaker004:] Good. [speaker003:] Um, the action planner guy has wrote [disfmarker] has written a [disfmarker] a p lengthy [disfmarker] proposal on how he wants to do the action planning. [speaker004:] Good. [speaker003:] And I responded to him, also rather lengthy, how he should do the action planning. And [disfmarker] [speaker004:] "Action planning" meaning "Discourse Modeling"? [speaker003:] Yes.
And I tacked on a little paragraph about the fact that the whole world calls that module a dis disc dialogue manager, [speaker004:] Right. [speaker003:] and wouldn't it make sense to do this here too? [speaker004:] Right. [speaker003:] And also Rainer M Malaka is going to be visiting us shortly, most likely in the beginning of June. [speaker004:] Uh huh, I'll be gone. [speaker003:] Yeah. He he's just in a conference somewhere and he is just swinging through town. [speaker004:] Sure, OK. [speaker003:] And um [disfmarker] m making me incapable of going to NAACL, for which I had funding. But. No, no Pittsburg this year. [speaker002:] Hmm. [speaker004:] S [speaker003:] When is the uh Santa Barbara? Who is going to? uh should a lot of people. That's something I will [disfmarker] would [disfmarker] sort of enjoy. [speaker004:] Probably should go. That was [disfmarker] that's one you should probably go to. [speaker003:] Yep. [speaker002:] How much does it cost? [speaker003:] There's [speaker002:] I haven't planned to go. [speaker004:] Uh, probably we can uh [disfmarker] pay for it. [speaker002:] OK. [speaker004:] Um a student rate shouldn't be very high. So, if we all decide it's a good idea for you to go then you'll [disfmarker] we'll pay for it. [speaker002:] Right. Sure. [speaker005:] Then you can go. [speaker004:] I mean I [disfmarker] I don't have a feeling one way or the other at the moment, but it probably is. [speaker002:] OK. [speaker004:] OK, great. [speaker002:] Thanks. [speaker007:] Time. [speaker003:] Thanks. [speaker007:] Are you Fey? [speaker004:] I am Fey, [speaker007:] Oh. [speaker004:] yeah. [speaker007:] Hi. I think we've met before, [speaker004:] Hi. [speaker002:] What day is today? [speaker007:] like, I remember talking to you about Aspect or something like that at some point or other. [speaker004:] A couple times yeah. [speaker006:] It's the uh twenty [disfmarker] nineteenth. [speaker002:] Nineteenth? [speaker004:] That's right, yeah. 
[speaker007:] So. [speaker004:] And you were my GSI briefly, until I dropped the class. [speaker002:] Right, right. [speaker007:] Oh that's right. [speaker004:] But. [speaker007:] Well. No offense. [speaker003:] OK, wh wh [speaker007:] Like. [speaker003:] Yeah. OK. Some in some introductions are in order. [speaker007:] Oh, OK sorry. [speaker003:] OK. [speaker007:] Getting ahead of myself. [speaker003:] So. Um. For those who don't know [disfmarker] Everyone knows me, this is great. [speaker007:] Yay! [speaker003:] Um, apart from that, sort of the old gang, Johno and Bhaskara have been with us from [disfmarker] from day one [speaker005:] Hi. [speaker003:] and um they're engaged in [disfmarker] in various activities, some of which you will hear about today. Ami is um our counselor and spiritual guidance and um also interested in problems concerning reference of the more complex type, [speaker005:] Oh wow. [speaker001:] Well. [speaker003:] and um he sits in as a interested participant and helper. Is that a good characterization? [speaker001:] u That's pretty good, I think. [speaker003:] I don't know. [speaker001:] Yeah. Thanks. [speaker003:] OK. Keith is not technically one of us yet, [speaker005:] Not yet. [speaker003:] ha ha. [speaker007:] "One of us." [speaker003:] but um it's too late for him now. [speaker005:] Yeah right. [speaker003:] So. [speaker005:] I've got the headset on after all. [speaker003:] Um. Officially I guess he will be joining us in the summer. [speaker005:] yes. [speaker003:] And um hopefully it is by [disfmarker] by means of Keith that we will be able to get a b a better formal and a better semantic um idea of what a construction is and um how we can make it work for us. Additionally his interest um surpasses um English because it also entails German, an extra capability of speaking and writing and understanding and reading that language. And um, is there anyone who doesn't know Nancy? Do you [disfmarker] do you know Nancy? [speaker007:] Me? 
Mm hmm. [speaker005:] I know Nancy. [speaker003:] OK. [speaker002:] I made that joke already, Nancy, sadly. [speaker007:] What? [speaker002:] The "I don't know myself" joke. [speaker007:] You did? When? [speaker002:] Uh before you came in. [speaker007:] Oh. About me or you? [speaker005:] Man! [speaker002:] About me. [speaker007:] OK. [vocalsound] OK. [speaker001:] You could do it about you. [speaker007:] Well I didn't know. [speaker002:] Yeah. [speaker007:] I didn't mean to be humor copying, but OK, sorry. Yes, I know myself. It's OK. It's a [disfmarker] [speaker003:] OK. And um Fey is with us as of six days ago officially? [speaker004:] Officially, yeah. [speaker003:] Officially, but in reality already um much much longer and um um next to some [disfmarker] some more or less bureaucratic uh stuff with the [disfmarker] the data collection she's also the wizard in the data collection [speaker007:] Of Oz. [speaker003:] Um, [speaker004:] It's very exciting. [speaker003:] we're sticking with the term "wizard", [speaker004:] Yes. Yes. [speaker003:] OK. and um [speaker007:] Not witch like. [speaker002:] Wizardette. [speaker005:] Wizard. [speaker006:] Wizardess. [speaker003:] Sorceress, I think. [speaker007:] OK. [speaker004:] Wizard. [speaker003:] wizard [speaker007:] OK. [speaker003:] uh by by popular vote um [speaker007:] Didn't take a vote? OK. [speaker003:] OK, um, why don't we get started on that subject anyways. Um, so we're about to collect data and um the uh s the following things have happened since we last met. When will we three meet again? And um [speaker007:] More than three of us. [speaker003:] what happened is that um, "A", [comment] there was some confusion between you and Jerry with the [disfmarker] that leading to your talking to Catherine Snow, and he was uh he [disfmarker] he agreed completely that some something confusing happened. Um his idea was to get sort of the l the lists of mayors of the department, the students. 
It [disfmarker] it's exactly how you interpreted it, [speaker005:] The list of majors in the department? [speaker003:] sort of s [speaker004:] M m Majors? [speaker003:] Ma majors, majors. [speaker004:] Majors? OK, mayor [disfmarker] [speaker003:] "Mayors". [speaker004:] Something I don't know about these [speaker003:] Majors. [speaker007:] The department has many mayors. [speaker003:] Majors [speaker004:] OK. [speaker003:] and um just sending the [disfmarker] the little write up that we did on to those email lists [speaker004:] OK. Yeah, yeah, yeah. But [disfmarker] Yeah. [speaker003:] uh [disfmarker] [speaker004:] So it was really Carol Snow who was confused, not me and not Jerry. [speaker003:] Yep, yep, yep. OK. So. [speaker004:] That's good. [speaker003:] So, that is uh [disfmarker] [speaker004:] So I should still do that. [speaker003:] Yep. [speaker004:] OK. And using the thing that you wrote up. [speaker003:] And [disfmarker] Yep. [speaker004:] OK. [speaker003:] Wonderful. And um we have a little description of asking peop subjects to contact Fey for you know recruiting them for our thing and um there was some confusion as to the consent form, which is basically that [disfmarker] that what what you just signed [speaker007:] Right. [speaker003:] and since we have one already um [disfmarker] [speaker007:] Did Jerry talk to you about maybe using our class? the students in the undergrad class that he's teaching? [speaker003:] Um well he said um we [disfmarker] definitely "yes", [speaker007:] e [speaker003:] however there is always more people in a [disfmarker] in a facul uh in a department than are just taking his class or anybody else's class at the moment [speaker007:] Yeah. [speaker003:] and one should sort of reach out and try and get them all. [speaker007:] OK, but th I guess it's that um people in his class cover a different set so [disfmarker] than the c is the CogSci department that you were talking about? [speaker004:] I guess. 
[speaker007:] uh reaching out to? [speaker004:] See [speaker007:] Cuz we have you know people from other areas [speaker004:] that's what I suggested to him, that people like [disfmarker] like Jerry and George and et cetera just [disfmarker] [speaker003:] Yeah. [speaker007:] advertise in their classes as well. [speaker004:] Yeah or even I could [disfmarker] you know I could do the actual [disfmarker] [speaker007:] Cuz I mean I [disfmarker] I know how to contact our students, [speaker003:] Mm hmm. [speaker004:] That's generally the way it's done. [speaker007:] so if there's something that you're sending out you can also s um send me a copy, [speaker003:] Yeah. [speaker007:] me or Bhaskara could [disfmarker] either of us could post it to uh is it [disfmarker] if it's a general solicitation that you know is just contact you then we can totally pro post it to the news group [speaker004:] A mailing list. [speaker003:] Mm hmm. [speaker004:] Yeah. [speaker003:] Yeah. [speaker007:] so. [speaker003:] Do it. [speaker004:] That's [disfmarker] [speaker003:] Yeah. [speaker007:] OK, so you'll send it or something so. [speaker003:] As a matter of fact, if you [disfmarker] [speaker004:] I can send it. I'll send it, [speaker007:] You can send it to me. [speaker003:] if [disfmarker] [speaker007:] OK. [speaker004:] yeah. [speaker003:] Now, i [speaker007:] Don't worry, we [disfmarker] this doesn't concern you anymore, Robert. It's fine. [speaker003:] How [disfmarker] however I suggest that if you [disfmarker] if you look at your email carefully you may think [disfmarker] you may find that you already have it. [speaker007:] Oops. Already? Really? Oops. [speaker004:] Probab [speaker003:] Maybe. OK. [speaker007:] I don't remember getting anything. [speaker003:] W we'll see. Anyhow, um the uh Yeah, not only Also we will talk about Linguistics and of course Computer Science. [speaker007:] Mm hmm. 
[speaker003:] Um and then, secondly, we had, you may remember, um the problem with the re phrasing, that subject always re phrase sort of the task that uh we gave them, [speaker002:] Right. [speaker003:] and so we had a meeting on Friday talking about how to avoid that, and it proved finally fruitful in the sense that we came up with a new scenario for how to get the [disfmarker] the subject m to really have intentions and sort of to act upon those, and um there the idea is now that next actually we [disfmarker] we need to hire one more person to actually do that job because it [disfmarker] it's getting more complicated. So if you know anyone interested in [disfmarker] in what i'm about to describe, tell that person to [disfmarker] to write a mail to me or Jerry soon, fast. Um [vocalsound] the idea now is to sort of come up with a high level of sort of abstract tasks "go shopping" um "take in uh a batch of art" um "visit [disfmarker] do some sightseeing" blah blah blah blah blah, sort of analogous to what Fey has started in [disfmarker] in [disfmarker] in compiling [disfmarker] compiling here and already [disfmarker] she has already gone to the trouble of [disfmarker] of anchoring it with specific um o [comment] um entities and real world places you will find in Heidelberg. And um. So out of these f s these high level categories the subject can pick a couple, such as if [disfmarker] if there is a cop uh a category in emptying your roll of film, the person can then decide "OK, I wanna do that at this place", sort of make up their own itinerary a and [disfmarker] and tasks and the person is not allowed to take sort of this h high level category list with them, but uh the person is able to take notes on a map that we will give him and the map will be a tourist's sort of schematic representation with [disfmarker] with symbols for the objects. 
And so, the person can maybe make a mental note that "ah yeah I wanted to go shopping here" and "I wanted to maybe take a picture of that" and "maybe um eat here" and then goes in and solves the task with the system, IE [comment] Fey, and um and we're gonna try out that [disfmarker] Any questions? [speaker007:] so um y you'll have those say somewhere what their intention was [disfmarker] so you still have the [disfmarker] the nice thing about having data where you know what the actual intention was? [speaker003:] Mm hmm. Yeah. [speaker007:] But they will um [disfmarker] There's nothing that says you know "these are the things you want to do" so they'll say "well these are the things I want to do" and [disfmarker] Right, so they'll have a little bit more natural interaction? OK. [speaker003:] Hopefully. [speaker007:] Mm hmm. [speaker006:] So they'll be given this map, which means that they won't have to like ask the system for in for like high level information about where things are? [speaker003:] Yeah it's a schematic tourist map. So it'll be uh i it'll still require the [disfmarker] that information and An [speaker007:] It w it doesn't have like streets on it that would allow them to figure out their way [disfmarker] [speaker003:] N not [disfmarker] not [disfmarker] not really the street network. [speaker007:] OK. [speaker003:] Nuh. [speaker005:] So you're just saying like what part of town the things are in or whatever? [speaker003:] Yeah a and um the map is more a means for them to have the buildings and their names and maybe some ma ma major streets and their names [speaker007:] Mm hmm. 
[speaker003:] and we want to maybe ask them, if you have [disfmarker] get it sort of isolated street the [disfmarker] the, whatever, "River Street", and they know that [disfmarker] they have decided that, yes, that's where they want to do this kind of action um that they have it with them and they can actually read them or sort of have the label for the object because it's too hard to memorize all these st strange German names. And then we're going to have another [disfmarker] we're gonna have w another trial run IE the first with that new setup tomorrow at two and we have a real interesting subject which is Ron Kay for who [disfmarker] those who know him, he's the founder of ICSI. So he'll [disfmarker] he's around seven seventy years old, or something. [speaker007:] I didn't know he was the founder. That's [disfmarker] OK. [speaker003:] And he also approached me and he offered to help [vocalsound] um our project and he was more thinking about some high level thinking tasks and [vocalsound] I said "sure we need help you can come in as a subject" and he said "OK". So that's what's gonna happen, tomorrow, [speaker007:] Using this new [disfmarker] new um plan, [speaker003:] data. New [disfmarker] new set up. [speaker007:] OK. [speaker003:] Yeah. Which I'll hopefully sort of scrape together t But, thanks to Fey, we already have sort of a nice blueprint and I can work with that. Questions? Comments on that? If not, we can move on. No? No more questions? [speaker007:] So what's the s this is what you made, Fey? [speaker005:] I'm not sure I totally understand this but [disfmarker] [speaker003:] Hmm? [speaker005:] I'm not sure I totally understand everything that's being talked about [speaker007:] Like so [disfmarker] So it's just based on like the materials you had about Heidelberg. [speaker005:] but I [disfmarker] I imagine I'll c just catch on. [speaker003:] Um are you familiar with [disfmarker] with the [disfmarker] with the very rough setup of the data?
[speaker004:] Based on the web site, yeah, at the [disfmarker] [speaker007:] Oh OK there's a web site and then you could like um figure out what the cate [speaker003:] experiment? [speaker004:] Right. [speaker005:] Uh, this is where they're supposed to [disfmarker] [speaker004:] It's a tourist information web site, [speaker007:] OK. [speaker004:] so. [speaker007:] OK. [speaker003:] Talk to a machine and it breaks down and then the human comes on. [speaker005:] Yeah. Yeah. [speaker003:] The question is just sort of how do we get the tasks in their head that they have an intention of doing something and have a need to ask the system for something without giving them sort of a clear wording or phrasing of the task. [speaker005:] OK. OK. OK. OK. [speaker003:] Because what will happen then is that people repeat [disfmarker] repeat, [comment] or as much as they can, of that phrasing. [speaker007:] Hmm. Um, are you worried about being able to identify [disfmarker] [speaker005:] OK. [speaker007:] Um. The [disfmarker] The goals that we've d you guys have been talking about are this [disfmarker] these you know identifying which of three modes um their question uh concerns. [speaker003:] Mm hmm. [speaker007:] So it's like the Enter versus View [disfmarker] [speaker003:] Yeah, we [disfmarker] we [disfmarker] we will sort of get a protocol of the prior interaction, [speaker007:] Uh huh. [speaker003:] right? That's where the instructor, the person we are going to hire, um and the subjects sit down together with these high level things [speaker007:] Uh huh. Mm hmm. [speaker003:] and so th the q first question for the subject is, "so these are things, you know, we thought a tourist can do. Is there anything that interests you?" [speaker007:] Mm hmm. [speaker003:] And the person can say "yeah, sure sh this is something I would do. I would go shopping". Yeah? [speaker007:] Mm hmm. 
[speaker003:] and then we can sort of [disfmarker] this s instructor can say "well, uh then you [disfmarker] you may want to find out how to get over here because this is where the shopping district is". [speaker007:] So the interaction beforehand will give them hints about how specific or how whatever though the kinds of questions that are going to ask during the actual session? [speaker003:] No. Just sort of [disfmarker] OK, what [disfmarker] what [disfmarker] what would you like to buy and then um OK there you wanna buy a whatever cuckoo clocks [speaker007:] Yeah. [speaker003:] OK and the there is a store there. [speaker005:] Mm hmm. [speaker003:] So the task then for that person is t finding out how to get there, [speaker007:] Mm hmm. [speaker003:] right? That's sort of what's left. [speaker007:] Mm hmm. [speaker003:] And we know that the intention is to enter because we know that the person wants to buy a cuckoo clock. [speaker007:] OK, that's what I mean so like those tasks are all gonna be um unambiguous about which of the three modes. [speaker003:] Hopefully. [speaker007:] Right. OK. So. [speaker001:] Well, so the idea is to try to get the actual phrasing that they might use and try to interfere as little as possible with their choice of words. [speaker003:] Hopefully. [speaker007:] t [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] That they'll be here? [speaker003:] Yes. In a sense that's exactly the [disfmarker] the [disfmarker] the idea, [speaker001:] uh uh [speaker003:] which is never possible in a [disfmarker] in a s in a lab situation, [speaker001:] Well, u u the one experiment th that [disfmarker] that [disfmarker] that I've read somewhere, it was [disfmarker] they u used pictures. [speaker003:] nuh? Yep. [speaker005:] Mm hmm. [speaker001:] So to [disfmarker] to uh actually um uh specify the [disfmarker] the tasks.
Uh, but you know i i [speaker003:] Yeah. We had exactly that on our list of possible way things so we [disfmarker] uh I even made a sort of a silly thing how that could work, how you control you are here you [disfmarker] you want to know how to get someplace, and this is the place and it's a museum and you want to do some and [disfmarker] and [disfmarker] and there's a person looking at pictures. So, you know, this is exactly getting someplace with the intention of entering and looking at pictures. [speaker001:] Right. [speaker003:] However, not only was [disfmarker] the common consensus were [disfmarker] among all participants of Friday's meeting was it's gonna be very laborious to [disfmarker] to make these drawings for each different things, all the different actions, if at all possible, [speaker001:] Right. [speaker003:] and also people will get caught up in the pictures. So all of a sudden we'll get descriptions of pictures in there. [speaker001:] Right. [speaker003:] And people talking about pictures and pictorial representations [speaker005:] Hmm. [speaker003:] and [disfmarker] um [speaker001:] Right. [speaker003:] I would s I would still be willing to try it. [speaker001:] I mean, I I'm [disfmarker] I'm not saying it's necessary but [disfmarker] but uh i uh uh i [vocalsound] you might be able to combine you know text uh and [disfmarker] and some sort of picture and also uh I think it [disfmarker] it will be a good idea to show them the text and kind of chew the task and then take the test away [disfmarker] the [disfmarker] the [disfmarker] the [disfmarker] the [disfmarker] the text away [speaker003:] Mm hmm. Yeah. [speaker001:] so that they are not uh guided by [disfmarker] by by what you wrote, [speaker003:] We will [disfmarker] [speaker001:] but can come up with their [disfmarker] with their own [disfmarker] [speaker003:] Yeah, they will have no more linguistic matter in front of them when they enter this room. [speaker001:] Right. [speaker003:] OK.
Then I suggest we move on to the [disfmarker] to we have um uh the EDU Project, let me make one more general remark, has sort of two [disfmarker] two side uh um actions, its um action items that we're do dealing with, one is modifying the SmartKom parser and the other one is modifying the SmartKom natural language generation module. And um this is not too complicated but I'm just mentioning it [disfmarker] put it in the framework because this is something we will talk about now. Um, I have some news from the generation, do you have news from the parser? [speaker006:] Um, not [disfmarker] [speaker003:] By that look I [disfmarker] [speaker006:] Yes, uh, I would really p It would be better if I talked about it on Friday. [speaker003:] OK. [speaker006:] If that's OK. [speaker003:] Yeah, wonderful. Um, did you run into problems or did you run into not h having time? [speaker006:] Yeah. But not [disfmarker] not any time part. [speaker003:] OK, so that's good. That's better than running into problems. [speaker006:] OK. [speaker003:] And um I [disfmarker] I do have some good news for the natural language generation however. And the good news is I guess it's done. Uh, meaning that Tilman Becker, who does the German one, actually took out some time and already did it in English for us. And so the version he's sending us is already producing the English that's needed to get by in version one point one. [speaker006:] So I take it that was similar to the [disfmarker] what [disfmarker] what we did for the parsing? [speaker003:] Yeah. I [disfmarker] I [disfmarker] it [disfmarker] even though the generator is a little bit more complex and it would have been, not changing one hundred words but maybe four hundred words, [speaker006:] OK. [speaker003:] but it would have been [speaker006:] OK. [speaker003:] but this [disfmarker] this is I guess good news, and the uh [disfmarker] the time and especially Bhaskara and uh [disfmarker] and um [disfmarker] Oh do I have it here? No. 
The time is now pretty much fixed. It's the last week of April until the fourth of May so it's twenty sixth through fourth. That they'll be here. So it's [disfmarker] it's extremely important that the two of you are also present in this town during that time. [speaker002:] Wait, what [disfmarker] what are the days? April twenty sixth to the [disfmarker] May fourth? [speaker003:] Yeah, something like that. [speaker002:] I'll probably be here. [speaker006:] Yeah. [speaker003:] It's [disfmarker] [speaker005:] You will be here. [speaker003:] There is a d Isn't finals coming up then pretty much after that? [speaker006:] Finals was that. [speaker007:] Yeah w it doesn't really have much meaning to grad students but final projects might. [speaker003:] OK. [speaker006:] Yeah actually, that's true. [speaker007:] That [disfmarker] [speaker003:] Anyway, so this is [disfmarker] [speaker002:] Well I'll be here working on something. Guaranteed, it's just uh will I be here, you know, in uh [disfmarker] I'll be here too actually but [disfmarker] [speaker001:] Hmm. [speaker003:] No it's just um you know they're coming for us so that we can bug them [speaker007:] Ye [speaker003:] and ask them more questions and sit down together and write sensible code and they can give some nice talks and stuff. But uh just make a [disfmarker] [speaker002:] But it's not like we need to be with them twenty four hours a day s for the seven days that they're here. [speaker003:] Not [disfmarker] not unless you really really want to. [speaker005:] They're very dependent [speaker003:] Not unless you really want to. And they're both nice guys so you may [disfmarker] may want to. OK, that much from the parser and generator side, unless there are more questions on that. [speaker007:] So, no sample generator output yet? [speaker003:] No. [speaker007:] OK. 
[speaker003:] It [disfmarker] Just a mail that, you know, he's sending me the [disfmarker] the [disfmarker] the stuff soon [speaker007:] This is being sent, mm hmm. OK. Mm hmm. [speaker003:] and I was completely flabbergasted here and I [disfmarker] and that's also it's [disfmarker] it's going to produce the concept to speech uh blah blah blah information for [disfmarker] necessary for one point one in English [disfmarker] based on the English, you know, in English. So. I was like OK, [speaker005:] We're done. [speaker003:] we're done!" [speaker007:] So that was like one of the first l You know, the first task was getting it working for English. So that's basically over now. Is that right? [speaker003:] Yeah. [speaker007:] So the basic requirement fulfilled. [speaker003:] Um, the basic requirement is fulfilled almost. When Andreas Stolcke and [disfmarker] and his gang, [speaker007:] Mm hmm. [speaker003:] when they have um changed the language model of the recognizer and the dictionary, then we can actually a put it all together [speaker007:] Mm hmm. So the speech recognizer also works. Uh huh. Mm hmm. [speaker003:] and you can speak into it and ask for TV and movie information [speaker005:] Great. [speaker003:] and then when if [disfmarker] if something actually happens and some answers come out, then we're done. [speaker007:] Mm hmm. If [disfmarker] and they're kind of correct. [speaker005:] So it's not done basically. [speaker003:] Hmm? [speaker007:] And they kind of are [disfmarker] are correct. [speaker005:] Right. [speaker007:] It's not just like anything. [speaker005:] Perhaps if the answers have something to do with the questions for example. [speaker007:] And they're mostly in English. So. [speaker003:] Then um [disfmarker] [speaker007:] Are they [disfmarker] is it using the database? the German TV movie. [speaker003:] Yeah. [speaker007:] OK. So [vocalsound] all the actual data might be German names?
[speaker003:] Um well actually th um [speaker007:] Or are they all like American TV programs? [speaker003:] um well [disfmarker] [speaker005:] I want to see "Die Dukes Von Hazard" [speaker003:] The [disfmarker] OK, so you don't know how the German dialogue [disfmarker] uh the German [disfmarker] the demo dialogue actually works. It works [disfmarker] the first thing is what's, you know, showing on TV, and then the person is presented with what's running on TV in Germany on that day, on that evening [speaker007:] Mm hmm, mm hmm. [speaker003:] and so you take one look at it and then you say "well that's really nothing [disfmarker] there's nothing for me there" "what's running in the cinemas?" [speaker007:] Mm hmm. [speaker003:] So maybe there's something better happening there. And then you get [disfmarker] you're shown what movies play which films, and it's gonna be of course all the Heidelberg movies and what films they are actually showing. [speaker007:] Mm hmm. [speaker003:] And most of them are going to be Hollywood movies. [speaker007:] Hmm. [speaker003:] So, "American Beauty" is "American Beauty", right? Yeah. [speaker005:] Mm hmm. [speaker007:] Right. [speaker003:] And um. [speaker007:] But they're shown like on a screen. It's a [disfmarker] I mean so would the generator, [speaker003:] N [speaker007:] like the English language sentence of it is [disfmarker] "these are the follow you know the following films are being shown" or something like that? [speaker003:] Yeah, but it in that sense it doesn't make [disfmarker] In that case uh it doesn't really make sense to read them out loud. [speaker007:] S Right. So it'll just display [disfmarker] [speaker003:] if you're displaying them. [speaker007:] OK. So we don't have to worry about um [disfmarker] [speaker003:] But uh it'll tell you that this is what's showing in Heidelberg and there you go. [speaker007:] Yeah. [speaker003:] And the presentation agent will go "Hhh!" [comment] Nuh? [speaker007:] OK. 
[speaker003:] Like that [disfmarker] the avatar. [speaker007:] OK. [speaker003:] And um. And then you pick [disfmarker] pick a movie and [disfmarker] and [disfmarker] and it show shows you the times and you pick a time and you pick seats and all of this. So. [speaker007:] OK. [speaker003:] Pretty straightforward. [speaker005:] OK. [speaker003:] But it's [disfmarker] so this time we [disfmarker] we are at an advantage because it was a problem for the German system to incorporate all these English movie titles. [speaker007:] Yeah. [speaker003:] Nuh? [speaker007:] Right. [speaker003:] But in English, that's not really a problem, [speaker007:] Mm hmm. [speaker003:] unless we get some [disfmarker] some topical German movies that have just come out and that are in their database. [speaker007:] Right. [speaker003:] So the person may select "Huehner Rennen" or whatever. [speaker005:] "Chicken Run". [speaker003:] OK. Then uh on to the modeling. Right? [speaker002:] Yeah, yeah, I guess. [speaker003:] Um then modeling, there it is. [speaker005:] OK. What's the next thing? [speaker002:] Yep. e [speaker003:] This is very rough but this is sort of what um Johno and I managed to come up with. The idea here is that [disfmarker] [speaker002:] This is the uh s the schema of the XML here, not an example or something like that. [speaker003:] Yeah this is not an XML [speaker005:] OK. [speaker003:] this is sort of towards an [disfmarker] a schema, nuh? [speaker001:] Right. [speaker003:] definition. The idea is, so, imagine we have a library of schema such as the Source Path Goal and then we have forced uh motion, we have caused action, [speaker005:] Mm hmm. [speaker007:] Mm hmm. [speaker003:] we have a whole library of schemas. And they're gonna be, you know, fleshed out in [disfmarker] in their real ugly detail, Source Path Goal, and there's gonna be s a lot of stuff on the Goal and blah blah blah, that a goal can be and so forth.
What we think is [disfmarker] And all the names could [disfmarker] should be taken "cum grano salis". So. This is a [disfmarker] the fact that we're calling this "action schema" right now should not entail that we are going to continue calling this "action schema". But what that means [vocalsound] is we have here first of all on the [disfmarker] in the [disfmarker] in the first iteration a stupid list of Source Path Goal actions [speaker002:] Actions that can be categorized with [disfmarker] or that are related to Source Path Goal. [speaker003:] wi [speaker005:] OK. [speaker003:] to that schema [speaker007:] Mm hmm. [speaker003:] and we will have you know forced motion and caused action actions. [speaker002:] And then those actions can be in multiple categories at the same time if necessary. [speaker003:] So a push may be in [disfmarker] in [disfmarker] in both you know push uh in this or this [speaker007:] Forced motion and caused action for instance, [speaker003:] uh [disfmarker] Exactly. [speaker007:] OK. [speaker003:] Yeah. Also, these things may or may not get their own structure in the future. So this is something that, you know, may also be a res As a result of your work in the future, we may find out that, you know, there're really s these subtle differences between um even within the domain of entering in the light of a Source Path Goal schema, that we need to put in [disfmarker] fill in additional structure up there. But it gives us a nice handle. So with this we can basically um you know s slaughter the cow any anyway we want. Uh. It [disfmarker] it is [disfmarker] It was sort of a [disfmarker] it gave us some headache, how do we avoid writing down that we have sort of the Enter Source Path Goal that this [disfmarker] But this sort of gets the job done in that respect and maybe it is even conceptually somewhat adequate in a sense that um we're talking about two different things.
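[Editor's note: the flat per-schema action lists just described, with one action allowed to sit in several categories at once, can be sketched roughly as follows. This is a minimal illustration; the category and action names are placeholders, not the actual inventory under discussion.]

```python
# Minimal sketch of the "stupid list" of actions per schema category.
# All names are illustrative; "push" shows one action sitting in two
# categories at once, as discussed in the meeting.
categories = {
    "SPG_Action":          ["enter", "view", "approach"],
    "ForcedMotion_Action": ["push", "shove"],
    "CausedAction":        ["push"],
}

def categories_of(action):
    """Return every schema category that lists the given action."""
    return sorted(cat for cat, acts in categories.items() if action in acts)

print(categories_of("push"))   # ['CausedAction', 'ForcedMotion_Action']
print(categories_of("enter"))  # ['SPG_Action']
```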
We're talking more on the sort of intention level, up there, and more on the [disfmarker] this is the [disfmarker] your basic bone um schema, down there. [speaker002:] Uh one question, Robert. When you point at the screen is it your shadow that I'm supposed to look at? [speaker007:] Yeah. It's the shadow. [speaker002:] OK. Whereas I keep looking where your hand is, and it doesn't [disfmarker] [speaker003:] Well, that wouldn't have helped you at all. [speaker007:] Right. [speaker002:] Yeah. Basically, what this is [disfmarker] is that there's an interface between what we are doing and the action planner [speaker005:] Spit right here. [speaker002:] and right now the way the interface is "action go" and then they have the [disfmarker] what the person claimed was the source and the person claimed as the goal passed on. [speaker007:] Mm hmm. [speaker002:] And the problem is, is that the current system does not distinguish between goes of type "going into", goes of type "want to go to a place where I can take a picture of", et cetera. [speaker003:] So this is sort of what it looks like now, some simple "Go" action from it [disfmarker] from an object named "Peter's Kirche" of the type "Church" to an object named "Powder Tower" of the type "Tower". [speaker007:] This is the uh [disfmarker] what the action planner uses? [speaker003:] Right? [speaker007:] This is [disfmarker] [speaker002:] Right. Currently. [speaker007:] OK. [speaker003:] Currently. [speaker007:] And is that [disfmarker] and tha that's changeable? or not? [speaker003:] Yeah, well [disfmarker] [speaker007:] Like are we adapting to it? Or [disfmarker] [speaker003:] No. We [disfmarker] This is the output, sort of, of the natural language understanding, [speaker007:] Oh, yeah. Uh huh. [speaker003:] right? the input into the action planning, as it is now. [speaker007:] Mm hmm. Mm hmm. 
[speaker003:] And what we are going to do, we going to [disfmarker] and you can see here, and again for Johno please [disfmarker] please focus the shadow, [speaker002:] OK. [speaker003:] um we're gon uh uh here you have the action and the domain object and w and on [disfmarker] on [disfmarker] [speaker007:] What did you think he was doing? OK, sorry. [speaker002:] I just [disfmarker] [speaker005:] A laser pointer would be most appropriate here I think. [speaker004:] Eee. [speaker003:] Yeah I [disfmarker] I um have [disfmarker] I have no [disfmarker] [speaker002:] Robert likes to be abstract and that's what I just thought he was doing. [speaker007:] You look up here. OK. [speaker003:] Sort of between here and here, so as you can see this is on one level and we are going to add another um "Struct", if you want, IE a rich action description on that level. [speaker005:] Mm hmm. [speaker003:] So in the future [disfmarker] [speaker007:] So it's just an additional information [disfmarker] [speaker003:] Exactly. In the future though, the content of a hypothesis will not only be an object and an [disfmarker] an action and a domain object but an action, a domain object, and a rich action description, [speaker007:] Right? that doesn't hurt the current way. Mm hmm. Mm hmm. [speaker003:] which is [disfmarker] [speaker002:] Which [disfmarker] which we're abbreviating as "RAD". [speaker007:] Good. Hmm. [speaker005:] Rad! [speaker006:] So um you had like an action schema and a Source Path Goal schema, [speaker007:] Hmm. Hmm. Mm hmm. [speaker006:] right? So how does this Source Path Goal schema fit into the uh action schema? Like is it one of the tags there? [speaker007:] Yeah can you go back to that one? 
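[Editor's note: a possible sketch of the extended hypothesis just described: the existing action and domain objects stay as they are, and a parallel rich action description (RAD) block is added alongside them. All tag names here are invented for illustration; the real M three L schema defines its own.]

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: the hypothesis keeps its current action and
# domain objects, and gains a parallel rich action description (RAD)
# block. Tag names are invented, not the actual M3L schema.
hypothesis = ET.fromstring("""
<hypothesis>
  <action>go</action>
  <domainObject name="Peters Kirche" type="Church" role="source"/>
  <domainObject name="Powder Tower" type="Tower" role="goal"/>
  <richActionDescription>
    <actionSchema><spgAction>Approach</spgAction></actionSchema>
    <spgSchema>
      <source name="Peters Kirche" type="Church"/>
      <goal name="Powder Tower" type="Tower"/>
    </spgSchema>
  </richActionDescription>
</hypothesis>
""")

# An action planner that ignores the RAD still sees the old content:
print(hypothesis.find("action").text)        # go
# A RAD-aware consumer can read the finer-grained intention:
print(hypothesis.find(".//spgAction").text)  # Approach
```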
[speaker002:] So the Source Path Goal schema in this case, I've [disfmarker] if I understand how we described [disfmarker] we set this up, um cuz we've been arguing about it all week, but uh we'll hold the [disfmarker] the [disfmarker] Well in this case it will hold the [disfmarker] [vocalsound] I mean the [disfmarker] the features I guess. I'm not [disfmarker] it's hard for me to exactly s So basically that will store the [disfmarker] the object that is w the Source will store the object that we're going from, [speaker005:] Mm hmm. [speaker002:] the Goal will store the [disfmarker] the f [speaker007:] So the fillers of the role source. [speaker002:] we'll fill those in fill those roles in, [speaker007:] OK. [speaker002:] right? [speaker003:] Yeah. [speaker002:] The S Action schemas basically have extra [disfmarker] See we [disfmarker] so those are [disfmarker] schemas exist because in case we need extra information instead of just making it an attribute and which [disfmarker] which is just one thing we [disfmarker] we decided to make it's own entity so that we could explode it out later on in case there is some structure that [disfmarker] that we need to exploit. [speaker007:] OK, so th sorry I just don't kn um um um [disfmarker] This is just uh XML mo notational but um the fact that it's action schema and then sort of slash action schema that's a whole entit [speaker002:] That's a block, [speaker007:] That's a block, [speaker002:] yeah. [speaker007:] whereas source is just an attribute? Is that [disfmarker] [speaker003:] No, no, no. Source is just not spelled out here. [speaker007:] Oh, OK, OK. [speaker003:] Source meaning [disfmarker] Source will be uh will have a name, a type, maybe a dimensionality, [speaker007:] Uh huh, uh huh. [speaker003:] maybe canonical uh orientation [disfmarker] [speaker007:] OK could it [disfmarker] it could also be blocked out then as [disfmarker] [speaker002:] Yeah, the [disfmarker] So [disfmarker] [speaker007:] OK. 
[speaker002:] Yeah. [speaker003:] s Source it will be, you know we'll f we know a lot about sources so we'll put all of that in Source. [speaker007:] OK. [speaker003:] But it's independent whether we are using the SPG schema in an Enter, View, or Approach mode, [speaker007:] Mm hmm. [speaker003:] right? This is just properties of the SPG [comment] schema. We can talk about Paths being the fastest, the quickest, the nicest and so forth, uh or [disfmarker] or [disfmarker] and the Trajector should be coming in there as well. [speaker007:] OK. [speaker003:] And then G the same about Goals. [speaker007:] OK. So I guess the question is when you actually fill one of these out, it'll be under action schema? Those are [disfmarker] It's gonna be one [disfmarker] y you'll pick one of those for [disfmarker] [speaker002:] Right. [speaker007:] OK these are [disfmarker] this is just a layout of the possible that could go [disfmarker] play that role. [speaker002:] Right, so the [disfmarker] the [disfmarker] the roles will be filled in with the schema [speaker003:] Hmm? [speaker007:] OK, go it. Uh huh. [speaker002:] and then what actual a action is chosen is [disfmarker] will be in the [disfmarker] in the action schema section. [speaker007:] OK. OK. S S OK, so one question. This was [disfmarker] in this case it's all um clear, sort of obvious, but you can think of the Enter, View and Approach as each having their roles, right? the [disfmarker] I mean it's [disfmarker] it's implicit that the person that's moving is doing entering viewing and approaching, but you know the usual thing is we have bindings between sort of [disfmarker] they're sort of like action specific roles and the more general Source Path Goal specific roles. So are we worrying about that or not for now? [speaker003:] Yes, yes. [speaker007:] OK. [speaker003:] Since you bring it up now, we will worry about it. [speaker007:] OK. [speaker003:] Tell us more about it. 
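[Editor's note: the role structure being described, a Source or Goal carrying a name, a type, and perhaps a dimensionality or canonical orientation, independent of whether the surrounding action is Enter, View, or Approach, might be modeled along these lines. A hedged sketch only; the field names are guesses, not the settled design.]

```python
from dataclasses import dataclass
from typing import Optional

# Guessed role structure: everything we know about sources and goals
# lives here, reusable by any SPG action (Enter, View, or Approach).
@dataclass
class Role:
    name: str
    type: str
    dimensionality: Optional[str] = None        # e.g. "3D region"
    canonical_orientation: Optional[str] = None

@dataclass
class SPGSchema:
    source: Role
    goal: Role
    path: Optional[str] = None  # e.g. "quickest", "nicest"

schema = SPGSchema(
    source=Role("Peters Kirche", "Church"),
    goal=Role("Powder Tower", "Tower"),
    path="quickest",
)
print(schema.goal.type)  # Tower
```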
What do you [disfmarker] what do you [disfmarker] [speaker007:] What's that? Oh I guess it [disfmarker] I [disfmarker] I may be just um reading this and interpreting it into my head in the way that I've always viewed things [speaker003:] Hmm. [speaker005:] Hmm. [speaker007:] and [vocalsound] that [disfmarker] that may or may not be what you guys intended. But if it is, then the top block is sort of like um, you know, you have to list exactly what X schema or in this action schema, there'll be a certain one, that has its own s structure and maybe it has stuff about that specific to entering or viewing or approaching, but those could include roles like the thing that you're viewing, the thing that you're entering, the thing that you're whatever, you know, that [disfmarker] which are [disfmarker] [speaker005:] So very specific role names are "viewed thing", "entered thing" [disfmarker] [speaker007:] think [disfmarker] think of enter, view and approach as frames and they have frame specific parameters and [disfmarker] and roles [speaker005:] Yeah. [speaker003:] Mm hmm. Yeah. [speaker007:] and you can also describe them in a general way as Source Path Goal schema and maybe there's other image schemas that you could you know add after this that you know, how do they work in terms of you know a force dynamics [speaker003:] Mm hmm. [speaker007:] or how do they work in f terms of other things. [speaker003:] Mm hmm, Mm hmm, Mm hmm. [speaker007:] So all of those have um basically f either specific [disfmarker] frame specific roles or more general frame specific roles that might have binding. So the question is are um [disfmarker] how to represent when things are linked in a certain way. So we know for Enter that there's Container potentially involved [speaker003:] Mm hmm. 
[speaker007:] and it's not [disfmarker] uh I don't know if you wanna have in the same level as the action schema SPG schema it [disfmarker] it's somewhere in there that you need to represent that there is some container and the interior of it corresponds to some part of the Source Path Goal um you know goal [disfmarker] uh goal I guess in this case. [speaker003:] Mm hmm. [speaker007:] So uh is there an easy way in this notation to show when there's identity basically between things [speaker003:] Yeah. [speaker007:] and I di don't know if that's something we need to invent or you know just [disfmarker] [speaker002:] The [disfmarker] wa wasn't there supposed to be a link in the [speaker006:] Right. [speaker002:] I don't know if this answers your question, I was just staring at this while you were talking, sorry. [speaker007:] It's OK. [speaker002:] Uh a link between the action schema, a field in the s in the schema for the image schemas that would link us to which action schema we were supposed to use so we could [disfmarker] [speaker003:] Yeah. Um, well that's [disfmarker] that's one [disfmarker] one thing is that we can link up, think also that um we can have one or m as many as we want links from [disfmarker] from the schema up to the s action um description of it. [speaker007:] Hmm. [speaker003:] But the notion I got from Nancy's idea was that we may f find sort of concepts floating around i in the a action description of the action f "Enter" frame up there that are, e when you talk about the real world, actually identical to the goal of the [disfmarker] the S Source Path Goal schema, [speaker007:] Exactly. Right, right. [speaker003:] and do we have means of [disfmarker] of telling it within that [speaker007:] Right. [speaker003:] a and the answer is absolutely. [speaker007:] Yeah. [speaker003:] The way [disfmarker] we absolutely have those means that are even part of the M three L A API, [speaker007:] Oh great. s Uh huh. [speaker003:] meaning we can reference. 
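[Editor's note: the referencing being alluded to can be illustrated with plain id/idref attributes: a slot in the action frame and a role in the schema point at the same object, so the Enter frame's container and the SPG goal are identified. This is only a stand-in for the actual M three L referencing mechanism; all names are invented.]

```python
import xml.etree.ElementTree as ET

# Invented sketch of identity links: the Enter frame's container and
# the SPG schema's goal both reference the same object by id.
doc = ET.fromstring("""
<rad>
  <actionSchema>
    <enter containerRef="obj1"/>
  </actionSchema>
  <spgSchema>
    <goal idref="obj1"/>
  </spgSchema>
  <object id="obj1" name="Powder Tower" type="Tower"/>
</rad>
""")

def resolve(ref):
    # Follow an idref to the element that carries the matching id.
    return doc.find(f".//*[@id='{ref}']")

goal_ref = doc.find(".//goal").attrib["idref"]
print(resolve(goal_ref).attrib["name"])  # Powder Tower
```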
So meaning [disfmarker] [speaker007:] Great. That's exactly what is necessary. [speaker002:] Yeah. [speaker003:] And um. [speaker002:] St [speaker003:] This referencing thing however is of temporary nature because sooner or later the W three C will be finished with their X path, uh, um, specification and then it's going to be even much nicer. Then we have real means of pointing at an individual instantiation of one of our elements here [speaker007:] Mm hmm. Mm hmm. [speaker003:] and link it to another one, and this not only within a document but also via documents, [speaker005:] Mm hmm. [speaker007:] OK. [speaker003:] and [disfmarker] and all in a v very easy e homogenous framework. [speaker007:] So you know [disfmarker] happen to know how [disfmarker] what [disfmarker] what "sooner or later" means like in practice? [speaker003:] That's [speaker007:] Or estimated. [speaker003:] but it's soon. [speaker007:] OK, OK. [speaker003:] So it's g it's [disfmarker] the spec is there and it's gonna part of the M three L AP [disfmarker] API filed by the end of this year so that this means we can start using it basically now. [speaker007:] Mm hmm. [speaker003:] But this is a technical detail. [speaker007:] So a pointer [disfmarker] a way to really say pointers. [speaker002:] Basically references from the roles in the schema [disfmarker] the bottom schemas to the action schemas is wha uh I'm assuming. [speaker007:] Yeah. OK, yeah. [speaker003:] Yeah. Yeah, [speaker007:] OK. [speaker003:] I mean personally, I'm looking even more forward to the day when we're going to have X forms, which l is a form of notation where it allows you to say that if the SPG action up there is Enter, then the goal type can never be a statue. [speaker007:] Uh huh. Right. [speaker005:] Mm hmm. [speaker007:] So you have constraints that are dependent on the c actual s specific filler, uh, of some attribute. [speaker003:] Mm hmm, yeah. W Yeah e exactly. 
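[Editor's note: the kind of filler-dependent constraint just mentioned, "if the SPG action up there is Enter, then the goal type can never be a statue", could be checked along these lines. This is a hand-rolled sketch of the idea, not actual XForms.]

```python
# Sketch of a co-occurrence constraint between an action and the type
# of its goal filler. The table is illustrative, not a real rule set.
FORBIDDEN_GOALS = {"Enter": {"Statue"}}

def valid(action: str, goal_type: str) -> bool:
    """Reject fillers that are forbidden for the given SPG action."""
    return goal_type not in FORBIDDEN_GOALS.get(action, set())

print(valid("Enter", "Statue"))  # False
print(valid("View", "Statue"))   # True
```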
Um, you know this, of course, does not make sense in light of the Statue of Liberty, [speaker007:] Uh huh. [speaker005:] Right. [speaker003:] however [vocalsound] it is uh you know sort of [disfmarker] these sort of things are imaginable. [speaker007:] Tsk. Yeah. Or the Gateway Arch in St. [speaker003:] Yeah? [speaker006:] S So um, like are you gonna have similar schemas for FM [speaker007:] Louis. [speaker003:] Yeah. [speaker007:] So. [speaker006:] like forced motion and caused action and stuff like you have for SPG? [speaker003:] Yeah. [speaker006:] And if so like can [disfmarker] are you able to enforce that you know if [disfmarker] if it's [disfmarker] if it's SPG action then you have that schema, if it's a forced motion then you have the other schema present in the [disfmarker] [speaker003:] Um we have absolute [disfmarker] No. We have absolutely no means of enforcing that, so it would be considered valid if we have an SPG action "Enter" and no SPG schema, but a forced action schema. Could happen. [speaker007:] Whi which is not bad, because I mean, that there's multiple sens I mean that particular case, there's mult there [disfmarker] there's a forced side of [disfmarker] of that verb as well. [speaker003:] Hmm. It [disfmarker] maybe it means we had nothing to say about the Source Path Goal. [speaker006:] OK. [speaker003:] What's also nice, and for a i for me in my mind it's [disfmarker] it's crucially necessary, is that we can have multiple schemas and multiple action schemas in parallel. [speaker006:] Right. [speaker003:] And um we started thinking about going through our bakery questions, so when I say "is there a bakery here?" you know I do ultimately want our module to be able to first of all f tell the rest of the system "hey this person actually wants to go there" and "B", [comment] that person actually wants to buy something to eat there. Nuh? 
And if these are two different schemas, IE the Source Path Goal schema of getting there and then the buying snacks schema, nuh? [disfmarker] [speaker007:] Would they both be listed here in [disfmarker] [speaker003:] Yes. [speaker007:] OK. Under so o under action schema there's a list that can include both [disfmarker] both things. [speaker003:] ye Yeah, they they would [disfmarker] both schemas would appear, [speaker002:] Right. [speaker003:] so what is the [speaker005:] Snack action. [speaker003:] uh is [disfmarker] is there a "buying s snacks" schema? [speaker007:] That's interesting. [speaker003:] What is the uh [disfmarker] [vocalsound] have [speaker007:] What? [speaker003:] the buying snack schema? [speaker005:] See. [speaker004:] Buying [disfmarker] [vocalsound] buying his food [disfmarker] [speaker005:] I'm sure there's a commercial event schema in there somewhere. [speaker007:] Oop. I [vocalsound] d f [speaker003:] Yeah, a "commercial event" or something. [speaker007:] Yeah I [disfmarker] I [disfmarker] [speaker003:] Yeah? So uh [disfmarker] [vocalsound] so we would [disfmarker] we would instantiate the SPG schema with a Source Path Goal blah blah blah [speaker007:] I see. [speaker003:] and the buying event you know at which [disfmarker] however that looks like, the place f thing to buy. [speaker007:] Uh huh. Uh huh. Interesting. Would you say that the [disfmarker] like [disfmarker] I mean you could have a flat structure and just say these are two independent things, but there's also this sort of like causal, well, so one is really facilitating the other and it's part of a compound action of some kind, which has structure. [speaker003:] Yeah. 
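[Editor's note: the two parallel schemas for the bakery utterance, a Source Path Goal schema for getting there plus some buying-snacks schema, might sit side by side in one rich action description like this. Tag names are invented; "commercialEventSchema" in particular is only a placeholder for whatever the commercial-event schema ends up being called.]

```python
import xml.etree.ElementTree as ET

# Invented sketch of one hypothesis carrying two schemas in parallel
# for "is there a bakery here?": getting there (SPG) and buying
# something there (a commercial-event schema). Not real M3L tags.
rad = ET.fromstring("""
<richActionDescription>
  <spgSchema>
    <goal id="obj1" name="bakery" type="Shop"/>
  </spgSchema>
  <commercialEventSchema>
    <place idref="obj1"/>
    <goods type="Snack"/>
  </commercialEventSchema>
</richActionDescription>
""")

schemas = [child.tag for child in rad]
print(schemas)  # ['spgSchema', 'commercialEventSchema']
# The bakery is the SPG goal and, via the idref, also the place
# where the commercial event happens.
print(rad.find(".//place").attrib["idref"])  # obj1
```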
Now it's technically possible that you can fit schema within schema, [speaker007:] uh I [disfmarker] I think that's nicer for a lot of reasons [speaker003:] and schema within schemata [disfmarker] [speaker007:] but might be a pain so uh [disfmarker] [speaker003:] um Well, for me it seems that uh [disfmarker] [speaker007:] I mean there are truly times when you have two totally independent goals that they might express at once, [speaker003:] r Yes. [speaker007:] but in this case it's really like there's a purpo means that you know f for achieving some other purpose. [speaker003:] Well, if I'm [disfmarker] if I'm recipient of such a message and I get a Source Path Goal where the goal is a bakery and then I get a commercial action which takes place in a bakery, right? [speaker007:] Uh huh. [speaker003:] and [disfmarker] and [disfmarker] and they [disfmarker] they are obviously, via identifiers, identified to be the same thing here. [speaker007:] Yeah. See that [disfmarker] that bothers me that they're the same thing. [speaker003:] No, no, just the [disfmarker] Yeah? [speaker007:] Yeah because they're two different things one of which is l you could think of one a sub you know pru whatever pre condition for the second. [speaker003:] Yeah, yeah! [speaker007:] Right. Yeah, yeah. So. So. OK. So there's like levels of granularity. So uh there's [disfmarker] there's um a single event of which they are both a part. And they're [disfmarker] independently they [disfmarker] they are events which have very different characters as far as Source Path Goal whatever. [speaker003:] Mm hmm, yeah. [speaker007:] So when you identify Source Path Goal and whatever, there's gonna to be a desire, whatever, eating, hunger, whatever other frames you have involved, they have to match up in [disfmarker] in nice ways. So it seems like each of them has its own internal structure and mapping to these schemas [speaker003:] Mm hmm. 
[speaker007:] you know from the other [disfmarker] But you know that's just [disfmarker] That's just me. Like [disfmarker] [vocalsound] I [disfmarker] I [disfmarker] [speaker003:] Well, I think we're gonna hit a lot of interesting problems and as I prefaced it this is the result of one week of arguing [vocalsound] about it [speaker007:] Mm hmm. Between you guys uh [speaker002:] Yeah. [speaker007:] OK. [speaker003:] and um [disfmarker] and so [disfmarker] [speaker005:] Yeah I mean I [disfmarker] I still am not entirely sure that I really fully grasp the syntax of this. You know, like what [disfmarker] [speaker002:] Well it's not [disfmarker] it's not actually a very [disfmarker] actually, it doesn't actually [disfmarker] [speaker003:] Um it occur [disfmarker] it occurs to me that I mean ne [speaker005:] Right. Or the intended interpretation of this. Yeah. [speaker003:] um well I should have [disfmarker] we should have added an ano an XML example, or some XML examples [speaker005:] Yeah. [speaker007:] yeah that would be [disfmarker] that would be nice. [speaker003:] and [disfmarker] and this is on [disfmarker] on a [disfmarker] on [disfmarker] on my list of things until next [disfmarker] next week. [speaker005:] OK. [speaker003:] It's also a question of the recursiveness and [disfmarker] and a hier hierarchy um in there. [speaker007:] Yeah. Yeah. [speaker003:] Do we want the schemas just blump blump blump blump? I mean it's [disfmarker] if we can actually you know get it so that we can, out of one utterance, activate more than one schema, I mean, then we're already pretty good, [speaker007:] Mm hmm. [speaker003:] right? [speaker001:] Well [disfmarker] well you have to be careful with that uh uh thing because uh [vocalsound] I mean many actions presuppose some [disfmarker] um almost [vocalsound] infinitely many other actions. So if you go to a bakery [pause] you have a general intention of uh not being hungry. [speaker007:] Yeah. Mayb yeah. 
[speaker001:] You have a specific intentions to cross the traffic light to get there. [speaker007:] Mm hmm. [speaker005:] Yeah. [speaker001:] You have a further specific intentions to left [disfmarker] to lift your right foot [speaker003:] Hmm? [speaker001:] and so uh uh I mean y you really have to focus on on [disfmarker] on [speaker007:] Right. [speaker001:] and decide the level of [disfmarker] of abstraction that [disfmarker] that you aim at it kind of zero in on that, [speaker007:] Right. [speaker003:] Yeah. [speaker001:] and more or less ignore the rest, unless there is some implications that [disfmarker] that you want to constant draw from [disfmarker] from sub tasks um that are relevant [speaker007:] M Th [speaker001:] uh I mean but very difficult. [speaker007:] The other thing that I just thought of is that you could want to go to the bakery because you're supposed to meet your friend there or som [speaker001:] Yeah. [speaker003:] Mm hmm. [speaker007:] you know so you [disfmarker] like being able to infer the second thing is very useful and probably often right. [speaker002:] Well the [disfmarker] the [disfmarker] the utterance was "is there a bakery around here?", [speaker007:] But having them separate [disfmarker] [speaker002:] not "I want to go to a bakery." [speaker007:] Well maybe their friend said they were going to meet them in a bakery around the area. [speaker001:] Right. [speaker007:] And I'm, yeah [disfmarker] I'm [disfmarker] I'm inventing contexts which are maybe unlikely, [speaker001:] Right. [speaker002:] Sure it [disfmarker] OK. [speaker007:] but yeah I mean like [disfmarker] [speaker002:] Yeah. [speaker007:] but it's still the case that um you could [disfmarker] you could override that default by giving extra information [speaker003:] Mm hmm, yeah. [speaker007:] which is to me a reason why you would keep the inference of that separate from the knowledge of "OK they really want to know if there's a bakery around here", [speaker003:] Yeah. 
[speaker007:] which is direct. [speaker003:] Well there [disfmarker] there [disfmarker] there should never be a hard coded uh [vocalsound] shortcut from [pause] the bakery question to the uh double schema thing, [speaker007:] Right. [speaker003:] how uh [disfmarker] And, as a matter of fact, when I have traveled with my friends we make these [disfmarker] exactly these kinds of appointments. [speaker007:] Mm hmm. Yeah. [speaker005:] Mm hmm. [speaker007:] Exactly. [speaker003:] We o o [speaker007:] It's [disfmarker] I met someone at the bakery you know in the Victoria Station t you know [vocalsound] train station London before, [speaker005:] Mm hmm. [speaker001:] Right. [speaker005:] Yep. [speaker001:] Well. [speaker007:] yeah. It's like [disfmarker] [speaker001:] I have a question about the slot of the SPG action. So [vocalsound] the Enter View Approach the [disfmarker] the [disfmarker] the EVA um, those are fixed slots in this particular action. Every action of this kind will have a choice. Or [disfmarker] or [disfmarker] or [disfmarker] or will it just um uh [disfmarker] [speaker005:] Every SPG [disfmarker] every SPG action either is an Enter or a View or an Approach, [speaker001:] is it change [disfmarker] Right, right. [speaker003:] Mm hmm. [speaker005:] right? OK. [speaker001:] So [disfmarker] so I [disfmarker] I mean for [disfmarker] for each particular action that you may want to characterize you would have some number of slots that define uh uh uh you know in some way what this action is all about. It can be either A, B or C. Um. So is it a fixed number or [disfmarker] or do you leave it open [disfmarker] it could be between one and fifteen uh [disfmarker] it's [disfmarker] it's [disfmarker] it's flexible. [speaker003:] Um, the uh [disfmarker] Well, it sort of depends on [disfmarker] on if you actually write down the [disfmarker] the schema then you have to say it's either one of them or it can be none, or it can be any of them. 
However the uh [disfmarker] it seems to be sensible to me to r to view them as mutually exclusive um maybe even not. [speaker007:] J Do you mean within the Source Path Goal actions? [speaker001:] uh [vocalsound] ye uh uh b I uh I [disfmarker] u I understand [speaker003:] Yeah. [speaker007:] Those three? [speaker001:] uh but [disfmarker] [speaker003:] And um how [disfmarker] how where is the end? So that's [disfmarker] [speaker001:] No, no. There [disfmarker] a a actually by I think my question is simpler than that, um [vocalsound] is [disfmarker] OK, so you have an SPG action and [disfmarker] and it has three different um uh aspects um because you can either enter a building or view it or [disfmarker] or approach it and touch it or something. Um now you define uh another action, [speaker003:] Forced action or forced motion. [speaker001:] it's [disfmarker] it's called um uh s S P G one [speaker003:] Yeah. [speaker001:] action a different action. Um and this [disfmarker] uh action two would have various variable possibilities of interpreting what you would like to do. And [disfmarker] i in [disfmarker] in a way similar to either Enter View Approach you may want to send a letter, read a letter, or dictate a letter, let's say. [speaker002:] Oh the [disfmarker] OK [speaker001:] So, h [speaker002:] uh maybe I'd [disfmarker] The uh [disfmarker] These actions [disfmarker] I don't know if I'm gonna answer your question or not with this, but the categories inside of action schemas, so, SPG action is a category. Real although I think what we're specifying here is this is a category where the actions "enter, view and approach" would fall into because they have a related Source Path Goal schema in our tourist domain. Cuz viewing in a tourist domain is going up to it and [disfmarker] or actually going from one place to another to take a picture, in this [disfmarker] in a [disfmarker] [speaker001:] Right. 
Oh, s so it's sort of automatic derived fr from the structure that [disfmarker] that is built elsewhere. [speaker002:] derived I don't know if I u [speaker005:] This is a cate this a category structure here, right? Action schema. [speaker002:] Right. [speaker005:] What are some types of action schemas? Well one of the types of action schemas is Source Path Goal action. And what are some types of that? And an Enter, a View, an Approach. [speaker002:] Right. [speaker003:] Hmm. [speaker005:] Those are all Source Path Goal actions. [speaker002:] Inside of Enter there will be roles that can be filled basically. So if I want to go from outside to inside [vocalsound] then you'd have the roles that need to filled, where you'd have a Source Path Goal set of roles. So you'd the Source would be outside and Path is to the door or whatever, right? [speaker001:] Right. [speaker002:] So if you wanted to have a new type of action you'd create a new type of category. Then this category would [disfmarker] we would put it [disfmarker] or not necessarily [disfmarker] We would put a new action in the m uh in the categories that [disfmarker] in which it has the um [disfmarker] Well, every action has a set of related schemas like Source Path Goal or force, whatever, [speaker005:] Mm hmm. [speaker002:] right? [speaker001:] Right. [speaker002:] So we would put "write a letter" in the categories uh that [disfmarker] in which it had [disfmarker] it w had uh schemas u [speaker005:] There could be a communication event action or something like that [speaker003:] Mm hmm. [speaker002:] Exactly. [speaker005:] and you could write it. [speaker002:] Schemas uh that of that type. And then later, you know, there [disfmarker] the [disfmarker] we have a communication event action where we'd define it down there as [disfmarker] [speaker007:] Hmm. So there's a bit a redundancy, right? 
in [disfmarker] in which the things that go into a particular [disfmarker] You have categories at the top under action schema and the things that go under a particular category are um supposed to have a corresponding schema definition for that type. So I guess what's the function of having it up there too? I mean I guess I'm wondering whether [disfmarker] You could just have under action schema you could just sort of say whatever you know it's gonna be Enter, View or Approach or whatever number of things [speaker003:] Mm hmm. [speaker007:] and pos partly because you need to know somewhere that those things fall into some categories. And it may be multiple categories as you say which is um the reason why it gets a little messy [speaker003:] Yeah. [speaker007:] um but if it has [disfmarker] if it's supposed to be categorized in category X then the corresponding schema X will be among the structures that [disfmarker] that follow. [speaker002:] Right. [speaker003:] Yeah. [speaker002:] Well, this is one of things we were arguing about. [speaker007:] That's like [disfmarker] [speaker003:] th this is [disfmarker] this r [speaker007:] OK, sorry. [speaker003:] this is [disfmarker] this is more [disfmarker] this is probably the way that th that's the way that seemed more intuitive to Johno I guess [speaker007:] You didn't tell me to [disfmarker] [speaker003:] also for a while [disfmarker] for [speaker007:] Uh huh. But now you guys have seen the light. [speaker003:] No, no, no. Uh we have not [disfmarker] we have not seen the light. [speaker007:] So. [speaker002:] No. [speaker007:] I it's easy to go back and forth [speaker002:] The [disfmarker] the reason [disfmarker] One reason we're doing it this way is in case there's extra structure that's in the Enter action that's not captured by the schemas, [speaker007:] isn't it? Uh huh. I agree. Right. Right. [speaker002:] right? 
[speaker007:] Which is why I would think you would say Enter and then just say all the things that are relevant specifically to Enter. And then the things that are abstract will be in the abstract things as well. And that's why the bindings become useful. [speaker002:] Right, but [disfmarker] [speaker005:] Ri You'd like [disfmarker] so you're saying you could practically turn this structure inside out? or something, or [disfmarker]? [speaker007:] Um Ye I see what you mean by that, [speaker003:] No basically w [speaker007:] but I [disfmarker] I don't if I would [disfmarker] I would need to have t have that. [speaker003:] Get [disfmarker] get rid of the sort of SPG slash something [speaker007:] Right. [speaker003:] uh or the sub actions category, because what does that tell us? [speaker007:] Uh huh. Yeah. [speaker003:] Um and I agree that you know this is something we need to discuss, [speaker007:] I in fact what you could say is for Enter, [speaker003:] yeah. [speaker007:] you could say here, list all the kinds of schemas that [disfmarker] on the category that [disfmarker] [speaker005:] List all the parent categories. [speaker007:] you know i list all the parent categories ". It's just like a frame hierarchy, right? [speaker005:] Yeah. [speaker007:] like you have these blended frames. [speaker003:] Mm hmm. [speaker007:] So you would say enter and you'd say my parent frames are such and such, h and then those are the ones that actually you then actually define and say how the roles bind to your specific roles which will probably be f richer and fuller and have other stuff in there. [speaker005:] Yeah. This sounds like a paper I've read around here recently in terms of [disfmarker] [speaker007:] Yeah it could [vocalsound] be not a coincidence. Like I said, I'm sure I'm just hitting everything with a hammer that I developed, [speaker005:] Yeah. 
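The "inside out" organization proposed here (each concrete action lists its parent schemas and binds their roles to its own, richer role set) can be sketched in a few lines. This is purely illustrative, not the project's actual data structures; all class, action, and role names below are hypothetical.

```python
# Illustrative sketch of the proposal: rather than a top-down taxonomy
# (SPG-action -> Enter/View/Approach), each action declares its parent
# schemas ("list all the parent categories") and binds the parent's
# roles to its own. Names are invented for illustration.

class Schema:
    """An abstract image schema with named roles, e.g. Source-Path-Goal."""
    def __init__(self, name, roles):
        self.name = name
        self.roles = roles


SPG = Schema("SourcePathGoal", ["source", "path", "goal"])


class Action:
    """A concrete action that lists its parent schemas and binds each
    parent role to one of its own (possibly richer) roles."""
    def __init__(self, name, parents, bindings, extra_roles=()):
        self.name = name
        self.parents = parents          # parent Schema objects
        self.bindings = bindings        # parent role -> own role
        self.roles = set(bindings.values()) | set(extra_roles)

    def categories(self):
        # The parent categories are recoverable from the action itself,
        # without a separate top-down listing.
        return [p.name for p in self.parents]


enter = Action(
    "Enter",
    parents=[SPG],
    bindings={"source": "outside", "path": "doorway", "goal": "inside"},
    extra_roles=["portal"],  # structure specific to Enter, not in SPG
)

print(enter.categories())
print(sorted(enter.roles))
```

An action belonging to multiple categories (the "messy" multiple-inheritance case mentioned above) would simply list several parents, each with its own binding map.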
[speaker007:] but I mean you know uh it's [disfmarker] I'm just telling you what I think, you just hit the button and it's like [disfmarker] [speaker003:] And, I guess fr uh [speaker005:] Yeah I mean but there's a good question here. Like, I mean uh do you [disfmarker] When do you need [disfmarker] Damn this headset! When you this uh, [speaker007:] Metacomment. [speaker005:] eh [disfmarker] Yeah. [comment] That's all recorded. Um. [speaker007:] "Damn this project." [speaker005:] Why do you [disfmarker] [speaker007:] No just kidding. [speaker005:] I don't know. Like [disfmarker] How do I [disfmarker] how do I come at this question? Um. I just don't see why you would [disfmarker] I mean does th Who uses this uh [disfmarker] this data structure? You know? Like, do you say "alright I'm going to uh [disfmarker] [pause] do an SPG action". And then you know somebody ne either the computer or the user says "alright, well, I know I want to do a Source Path Goal action so what are my choices among that?" And "oh, OK, so I can do an Enter View Approach". It's not like that, right? It's more like you say "I want to, uh [disfmarker] [pause] I want to do an Enter." [speaker002:] Well only one of [disfmarker] [speaker005:] And then you're more interested in knowing what the parent categories are of that. Right? So that the um [disfmarker] the uh sort of representation that you were just talking about seems more relevant to the kinds of things you would have to do? [speaker007:] Hmm. [speaker002:] I'd [disfmarker] I I think I'd [disfmarker] I'm not sure if I understand your question. Only one of those things are gonna be lit up when we pass this on. [speaker005:] OK. [speaker002:] So only Enter will be [disfmarker] if we [disfmarker] if our [disfmarker] if our module decided that Enter is the case, View and Approach will not be there. [speaker005:] OK. OK. 
[speaker003:] Well [vocalsound] uh it's [disfmarker] it sort of came into my mind that sometimes even two could be on, and would be interesting. [speaker007:] Yeah. [speaker003:] um nevertheless um [speaker005:] Mayb Well maybe I'm not understanding where this comes from and where this goes to. [speaker002:] Well in that case, we can't [disfmarker] we can't w if [disfmarker] if [disfmarker] [speaker007:] OK. [speaker003:] l let's [disfmarker] let's not [disfmarker] [speaker002:] well the thing is if that's the case we [disfmarker] our [disfmarker] I don't think our system can handle that currently. [speaker005:] What are we doing with this? [speaker003:] No, not at all. But [disfmarker] U s [vocalsound] t So [disfmarker] [speaker005:] In principle. [speaker007:] "Approach and then enter." [speaker003:] the [disfmarker] I think the [disfmarker] in some sense we [disfmarker] we ex get the task done extremely well [speaker007:] Run like this uh [disfmarker] [speaker003:] because this is exactly the discussion we need [disfmarker] need. [speaker007:] Mm hmm. [speaker003:] Period. No more qualifiers than that. [speaker007:] No, this is the useful, [speaker003:] So. [speaker007:] you know, don don't worry. [speaker003:] and um and [disfmarker] and I th I hope um uh let's make a [disfmarker] a [disfmarker] a [disfmarker] a sharper claim. We will not end this discussion anytime soon. [speaker007:] Yeah, I can guarantee that. [speaker003:] And it's gonna get more and more complex the [disfmarker] the l complexer and larger our domains get. [speaker005:] Sigh. [speaker003:] And I think um we will have all of our points in writing pretty soon. So this is nice about being being recorded also. [speaker005:] Right. [speaker003:] The um [disfmarker] [speaker004:] That's true. [speaker002:] The r uh the [disfmarker] in terms of why is [disfmarker] it's laid out like this versus some other [disfmarker] [speaker003:] the people [disfmarker] [speaker005:] Yeah. Yeah. 
[speaker002:] um that's kind of a contentious point between the two of us but [vocalsound] this is one wa so this is a way to link uh the way these roles are filled out to the action. [speaker005:] In my view. Mm hmm. [speaker002:] Because if we know that Enter is a t is an SPG action, [speaker005:] Mm hmm. [speaker002:] right? we know to look for an SPG schema and put the appropriate [disfmarker] fill in the appropriate roles later on. [speaker005:] Mm hmm. [speaker007:] And you could have also indicated that by saying "Enter, what are the kinds of action I am?" [speaker005:] Yeah. [speaker003:] Mm hmm, yeah. [speaker002:] Right. [speaker005:] Yeah. [speaker007:] Right? So there's just like sort of reverse organization, right? So like unless [@ @] [disfmarker] Are there reasons why one is better than the other I mean that come from other sources? [speaker005:] Again [disfmarker] [speaker003:] Yes because nobod no the modules don't [disfmarker] [speaker007:] Yeah. uh [speaker003:] This is [disfmarker] this is a schema that defines XML messages that are passed from one module to another, [speaker007:] Mm hmm. [speaker003:] mainly meaning from the natural language understanding, or from the deep language understanding to the action planner. [speaker007:] Mm hmm. [speaker003:] Now the [disfmarker] the reason for [disfmarker] for not using this approach is because you always will have to go back, each module will try [disfmarker] have to go back to look up which uh you know entity can have which uh, you know, entity can have which parents, and then [disfmarker] So you always need the whole body of [disfmarker] of y your model um to figure out what belongs to what. Or you always send it along with it, [speaker007:] Mm hmm. Mm hmm. [speaker003:] nuh? So you always send up "here I am [disfmarker] I am this person, and I can have these parents" [speaker007:] Mm hmm. [speaker003:] in every message. [speaker007:] OK, so it's just like a pain to have to send it. 
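To make the message-passing concrete: the discussion is about XML messages from the deep language understanding module to the action planner, in which only one sub-action is "lit up" under its category. The following is a hypothetical sketch of such a message; the element and attribute names are invented for illustration and the real schema may differ.

```python
# Hypothetical sketch of an XML message of the kind discussed: the
# Enter/View/Approach sub-actions sit under their SPG-action category,
# only one of them lit up, with the corresponding Source-Path-Goal
# schema roles filled in alongside. All tag names are invented.
import xml.etree.ElementTree as ET

msg = ET.Element("action")
spg = ET.SubElement(msg, "spg-action")
for name in ("enter", "view", "approach"):
    sub = ET.SubElement(spg, name)
    # Only "enter" is lit up in this example message.
    sub.set("value", "true" if name == "enter" else "false")

schema = ET.SubElement(msg, "source-path-goal")
ET.SubElement(schema, "source").text = "outside"
ET.SubElement(schema, "goal").text = "inside the bakery"

print(ET.tostring(msg, encoding="unicode"))
```

Keeping the category structure in the schema (rather than having each message carry its list of parents) is exactly the trade-off being argued about above.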
[speaker003:] which e It may or may not be a just a pain it's [disfmarker] it's [disfmarker] I'm completely willing to [disfmarker] to [disfmarker] to throw all of this away [speaker007:] OK, I understand. [speaker003:] and completely redo it, [speaker005:] Well [disfmarker] [speaker003:] you know and [disfmarker] and [disfmarker] and it after some iterations we may just do that. [speaker007:] Mm hmm. [speaker005:] I [disfmarker] I would just like to ask um like, if it could happen for next time, I mean, just beca cuz I'm new and I don't really just [disfmarker] I just don't know what to make of this [speaker003:] Mm hmm. [speaker005:] and what this is for, and stuff like that, you know, so if someone could make an example of what would actually be in it, [speaker003:] Yeah. [speaker005:] like first of all what modules are talking to each other using this, [speaker003:] Yeah, we [disfmarker] I will promise for the next time to have fleshed out N [comment] XML examples for a [disfmarker] a run through and [disfmarker] and see how this [disfmarker] this then translates, [speaker005:] right? And [disfmarker] OK. [speaker007:] Be great. [speaker003:] and how this can come about, nuh? including the sort of "miracle occurs here" um part. [speaker005:] Right. [speaker003:] And um is there more to be said? I think um [disfmarker] In principle what I [disfmarker] I think that this approach does, and e e whether or not we take the Enter View and we all throw up [disfmarker] up the ladder um wha how do how does Professor Peter call that? [speaker007:] Yeah. [speaker003:] The uh hhh, [comment] silence su sublimination? Throwing somebody up the stairs? Have you never read the Peter's Principle [speaker005:] Nope. [speaker003:] anyone here? [speaker001:] Oh, uh [speaker006:] People reach their level of uh max their level of [disfmarker] at which they're incompetent or whatever. [speaker001:] Yeah. 
[speaker003:] Maximum incompetence and then you can throw them up the stairs [speaker001:] Yeah. [speaker005:] Alright. [speaker001:] Right, right. [speaker007:] Oh! [speaker003:] um. Yeah. [speaker001:] Promote them, yeah. [speaker003:] OK, so we can promote Enter View all [disfmarker] all up a bit and and get rid of the uh blah blah X blah uh asterisk sub action item altogether. [speaker005:] OK. [speaker003:] No [disfmarker] no problem with that and we [disfmarker] w we [disfmarker] we will play around with all of them but the principal distinction between having the [disfmarker] the pure schema and their instantiations on the one hand, and adding some whatever, more intention oriented specification um on parallel to that [disfmarker] that [disfmarker] this approach seems to be uh workable to me. I don't know. If you all share that opinion then that made my day much happier. [speaker007:] Uh yeah wait [disfmarker] [speaker002:] This is a simple way to basically link uh roles to actions. [speaker007:] R Yeah, yeah. That's fine. [speaker005:] Sure. [speaker002:] That's the [disfmarker] that was the intent of [disfmarker] of it, basically. [speaker007:] Uh that's true. [speaker005:] Sure. [speaker007:] Although um roles [disfmarker] [speaker003:] Yeah. [speaker002:] So I [disfmarker] I do I'm [disfmarker] I'm not [disfmarker] [speaker007:] Yeah I [disfmarker] I [disfmarker] [speaker003:] I'm [disfmarker] I'm never happy when he uses the word "roles", [speaker007:] Yeah. I was going to [disfmarker] [speaker003:] I'm [disfmarker] [speaker002:] I b I mean ROLLS so [speaker007:] Bread rolls? [speaker005:] Oh you meant pastries, then? [speaker002:] Yeah, pastries is what I'm talking about. [speaker007:] Pastry oh ba oh the bak bakery example. [speaker004:] Bakery. [speaker005:] This is the bakery example. [speaker004:] Bakery. [speaker007:] I see. [speaker005:] Got it. Alright. [speaker007:] Right. OK. [speaker005:] Help! [speaker007:] I guess I'll agree to that, then. 
[speaker003:] OK. That's all I have for today. Oh no, there's one more issue. Bhaskara brought that one up. Meeting time rescheduling. [speaker007:] I n Didn't you say something about Friday, or [disfmarker]? [speaker003:] Yeah. [speaker007:] Hmm. [speaker003:] So it looks like you have not been partaking, the Monday at three o'clock time has turned out to be not good anymore. So people have been thinking about an alternative time and the one we came up with is Friday two thirty? three? What was it? [speaker002:] You have class until two, right? so if we don't want him [disfmarker] if we don't want him to run over here [speaker006:] Mm hmm. [speaker003:] Two th Two thirty ish or three or Friday at three or something around that time. [speaker007:] So do I. Yeah. [speaker005:] Mm hmm. [speaker002:] two thirty ish or three is [disfmarker] [speaker006:] Yeah. Yeah. e [speaker003:] Um how [disfmarker] how are your [disfmarker] [speaker007:] That would be good. [speaker001:] uh Friday uh Yeah, that's fine. [speaker005:] Uh [disfmarker] [speaker003:] And I know that you have until three [disfmarker] You're busy? [speaker004:] Yeah. [speaker007:] So three is [disfmarker] sounds good? [speaker001:] Yeah. [speaker005:] Yeah. [speaker007:] I'll be free by then. [speaker005:] I could do that. Yeah I mean earlier on Friday is better but three [disfmarker] you know I mean [disfmarker] if it were a three or a three thirty time then I would take the three or whatever, [speaker003:] Mm hmm. [speaker005:] but yeah sure three is fine. [speaker003:] Yeah, and you can always make it shortly after three probably. [speaker005:] I mean. [speaker004:] Yeah, and I don't need to be here particularly deeply. [speaker003:] Often, no, but uh, [speaker004:] Yeah. [speaker003:] whenever. [speaker004:] But yeah. [speaker003:] You are more than welcome if you think that this kind of discussion gets you anywhere in [disfmarker] in your life then uh you're free to c [speaker004:] It's fascinating.
[speaker007:] "That's the right answer." [speaker004:] I'm just glad that I don't have to work it out because. [speaker005:] Yeah. [speaker003:] Hmm? [speaker004:] I'm just glad that don't have to work it out myself, that I'm not involved at all in the working out of it because. [speaker005:] Yeah. [speaker003:] Uh but you're a linguist. You should [disfmarker] [speaker004:] Oh yeah. That's why I'm glad that I'm not involved in working it out. [speaker003:] OK. [speaker001:] So it's at Friday at three? there that's [speaker003:] And um [speaker005:] So already again this week, huh? [speaker003:] How diligent do we feel? Yeah. Do feel that we have done our chores for this week or [disfmarker] [speaker006:] Yeah. So I mean clearly there's [disfmarker] I can talk about the um the parser changes on Friday at least, [speaker003:] OK, Bhaskara will do the big show on Friday. [speaker006:] so. [speaker007:] And you guys will argue some more? [speaker005:] Yeah. Between now and then. [speaker002:] And between now and then yeah. [speaker007:] and have some? probably. [speaker005:] Promise? [speaker003:] We will [disfmarker] r [speaker001:] Yeah. [speaker002:] We will. Don't worry. [speaker007:] Yeah. And we'll get the summary [speaker001:] Yeah. [speaker007:] like, this [disfmarker] the c you know, short version, like [disfmarker] [speaker001:] An and I would like to second Keith's request. [speaker007:] S [speaker001:] An example wo would be nice [speaker003:] Yes. [speaker005:] Yeah. [speaker001:] t to have kind of a detailed example. [speaker003:] Yes. I've [disfmarker] I've [disfmarker] I've [disfmarker] I guess I'm on record for promising that now. [speaker001:] OK. [speaker003:] So um [disfmarker] [speaker007:] Like have it [disfmarker] we'll have it in writing. So. or, better, speech. So. [speaker003:] This is it and um [speaker002:] The other good thing about it is Jerry can be on here on Friday and he can weigh in as well. [speaker003:] Yeah. 
and um if you can get that binding point also maybe with a nice example that would be helpful for Johno and me. [speaker007:] Oh yeah uh OK. let's uh yeah they're [disfmarker] [speaker003:] Give us [disfmarker] [speaker004:] No problem, [speaker005:] I think you've got one on hand, huh? [speaker004:] yeah. [speaker007:] I have several in my head, yeah. Always thinking about binding. [speaker003:] Well the [disfmarker] the [disfmarker] the binding is technically no problem but it's [disfmarker] it [disfmarker] for me it seems to be conceptually important that we find out if we can s if [disfmarker] if there [disfmarker] if there are things in there that are sort of a general nature, we should distill them out [speaker007:] Mm hmm. [speaker003:] and put them where the schemas are. If there are things that you know are intention specific, then we should put them up somewhere, [speaker007:] So, in general they'll be bindings across both intentions and the actions. [speaker003:] a Yep. [speaker007:] So [disfmarker] [speaker003:] That's wonderful. [speaker007:] Yeah. So it's gen it's general across all of these things it's like [disfmarker] I mean Shastri would say you know binding is like [vocalsound] an essential cognitive uh process. [speaker003:] Yeah. [speaker007:] So. [vocalsound] Um. [speaker003:] OK. [speaker007:] So I don't think it will be isolated to one or the two, but you can definitely figure out where [disfmarker] Yeah, sometimes things belong and [disfmarker] So actually I'm not sure [disfmarker] I would be curious to see how separate the intention part and the action part are in the system. Like I know the whole thing is like intention lattice, or something like that, [speaker003:] Mm hmm. [speaker007:] right? So is the ri right now are the ideas the rich [disfmarker] rich the RAD or whatever is one you know potential block inside intention. It's still [disfmarker] it's still mainly intention hypothesis [speaker003:] Yeah. Yeah. 
[speaker007:] and then that's just one way to describe the [disfmarker] the action part of it. [speaker003:] Yeah. [speaker007:] OK. [speaker003:] It's [disfmarker] [speaker002:] It's an a attempt to refine it basically. [speaker003:] And yeah, [speaker007:] OK, great uh huh. [speaker003:] it's an [disfmarker] an [disfmarker] it's [disfmarker] it's sort of [disfmarker] [speaker007:] Not just that you want to go from here to here, it's that the action is what you intend [speaker003:] Yeah. [speaker007:] and this action consists of all com complicated modules and image schemas and whatever. So. [speaker003:] Yeah. And [disfmarker] and there will be a [disfmarker] a [disfmarker] a relatively high level of redundancy in the sense that um ultimately one [disfmarker] [speaker007:] Mm hmm. which is, yeah, It's fine [speaker003:] so th so that if we want to get really cocky we we will say "well if you really look at it, you just need our RAD." You can throw the rest away, [speaker007:] Mm hmm. [speaker003:] right? [speaker007:] Right. [speaker003:] Because you're not gonna get anymore information out of the action a as you find it there in the domain object. [speaker007:] Right. Mm hmm. [speaker003:] But then again um in this case, the domain object may contain information that we don't really care about either. [speaker007:] Mm hmm. [speaker003:] So. H But w we'll see that then, and how [disfmarker] how it sort of evolves. [speaker007:] Mm hmm. [speaker003:] I mean if [disfmarker] if people really like our [disfmarker] our RAD, I mean w what might happen is that they will get rid of that action thing completely, you know, and leave it up for us to get the parser input um [speaker007:] Mmm. We know the things that make use of this thing so that we can just change them so that they make use of RAD. [speaker003:] Yeah. Yeah. [speaker007:] I can't believe we're using this term. [speaker004:] You don't have to use the acronym. [speaker007:] So I'm like RAD! 
Like every time I say it, it's horrible. OK. I see what you mean. [speaker003:] Mm hmm. [speaker002:] RAD's a great term. [speaker007:] Is the [disfmarker] But what is the "why"? [speaker005:] It's rad, even! [speaker002:] Why? [speaker007:] Why? [speaker005:] It happened to c be what it stands for. [speaker003:] Well [disfmarker] [speaker002:] It just happened to be the acronym. [speaker007:] Yeah. That's [disfmarker] doesn't make it a great term. It's just like those jokes where you have to work on both levels. [speaker003:] ye no but i [speaker004:] Just think of it as [disfmarker] as "wheel" in German. [speaker007:] Do you see what I mean? [speaker003:] but if you [disfmarker] if you [disfmarker] if you work in th in that XML community it is a great acronym [speaker007:] Like [speaker003:] because it e evokes whatever RDF [disfmarker] [speaker007:] Oh. [speaker003:] RDF is the biggest thing right? That's the rich [disfmarker] sort of "Resource Description Framework" [speaker005:] Oh "rich de" [speaker007:] Oh. [speaker003:] and um [disfmarker] and also [disfmarker] So, description, having the word d term "description" in there is wonderful, [speaker007:] Mm hmm. [speaker003:] uh "rich" is also great, rwww. [speaker006:] Hmm. [speaker002:] Who doesn't like to be a [speaker005:] Everybody likes action. [speaker007:] Oh. Yeah. OK. [speaker002:] Yeah. [speaker005:] Plus it's hip. [speaker007:] But what if it's not an action? [speaker005:] The kids'll like it. [speaker004:] Yeah all the kids'll love it. [speaker006:] Hmm. [speaker003:] It's [disfmarker] it's rad, yeah. [speaker007:] And intentions will be "RID"? Like, "OK". Um are the [disfmarker] are the sample data that you guys showed sometime ago [disfmarker] like the things [disfmarker] maybe [disfmarker] maybe you're gonna run a trial tomorrow. I mean, I'm just wondering whether the ac some the actual sentences from this domain will be available. 
Cuz it'd be nice for me to like look if I'm thinking about examples I'm mostly looking at child language which you know will have some overlap but not total with the kinds of things that you guys are getting. So you showed some in this [disfmarker] here before [speaker003:] Mm hmm. [speaker007:] and maybe you've posted it before but where would I look if I want to see? [speaker003:] Oh I [disfmarker] You want audio? [speaker007:] You know. [speaker003:] or do you want transcript? [speaker007:] No just [disfmarker] just transcript. [speaker003:] Yeah, well just transcript is just not available because nobody has transcribed it yet. [speaker007:] Sorry. Oh, OK. [speaker003:] Um I can e I can uh I'll transcribe it though. [speaker007:] I take that back then. [speaker003:] It's no problem. [speaker007:] OK, well don't [disfmarker] don't make it a high priority [disfmarker] I [disfmarker] In fact if you just tell me like you know like two examples [speaker003:] Yeah. [speaker007:] I mean, y [speaker003:] Mm hmm. [speaker007:] The [disfmarker] the [disfmarker] the representational problems are [disfmarker] I'm sure, will be there, [speaker003:] OK. [speaker007:] like enough for me to think about. So. [speaker003:] OK, so Friday, whoever wants and comes, and can. [speaker007:] OK. [speaker005:] OK. [speaker007:] Here. [speaker003:] This Friday. [speaker007:] OK. [speaker003:] The big parser show. Now you can all turn off your [disfmarker] [speaker002:] OK So uh today we're looking at a number of uh things we're trying and uh fortunately for listeners to this uh we lost some of it's visual but um got tables in front of us. Um what is [disfmarker] what does combo mean? [speaker003:] So combo is um a system where we have these features that go through a network and then this same string of features but low pass filtered with the low pass filter used in the MSG features. 
And so these low pass filtered goes through M eh [disfmarker] another MLP and then the linear output of these two MLP's are combined just by adding the values and then there is this KLT. Um the output is used as uh features as well. [speaker002:] Um so let me try to restate this and see if I have it right. There is uh [disfmarker] there is the features uh there's the OGI features and then um those features um go through a contextual [disfmarker] uh l l let's take this bottom arr one pointed to by the bottom arrow. Um those features go through a contextualized KLT. [speaker003:] Yeah. [speaker002:] Then these features also uh get um low pass filtered [speaker003:] Yeah so yeah I could perhaps draw this on the blackboard [speaker002:] Sure. Yeah. Yeah. [speaker003:] Yeah. [speaker004:] The graph, yeah another one. [speaker002:] Yeah, that's good. [speaker003:] So we have these features from OGI that goes through the three paths. [speaker002:] So Yeah. Three, OK. [speaker003:] The first is a KLT using several frames of the features. [speaker002:] Yeah. Yeah. [speaker003:] The second path is uh MLP also using nine frames [disfmarker] several frames of features [speaker002:] Yeah. Uh huh. [speaker003:] The third path is this low pass filter. [speaker002:] Uh huh. [speaker003:] Uh, MLP [speaker002:] Aha! aha! [speaker003:] Adding the outputs just like in the second propose the [disfmarker] the proposal from [disfmarker] for the first evaluation. [speaker002:] Yeah? Yeah. Yeah. [speaker003:] And then the KLT and then the two together again. [speaker002:] No, the KLT. And those two together. [speaker004:] Two HTK. [speaker002:] That's it. [speaker003:] Um. So this is [disfmarker] [speaker002:] OK so that's [disfmarker] that's this bottom one. [speaker003:] yeah [speaker002:] And so uh and then the [disfmarker] the [disfmarker] the one at the top [disfmarker] and I presume these things that uh are in yellow are in yellow because overall they're the best? 
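The three-path "combo" system just described can be sketched as a toy pipeline: the features take (1) a KLT path, (2) an MLP path, and (3) a low-pass-filtered copy through a second MLP; the two MLPs' linear outputs are summed, passed through a KLT, and concatenated with path (1) for the HTK back end. The MLPs and the filter below are crude stand-ins (random weights, a moving average), not the trained networks, and the dimensions are invented.

```python
# Toy sketch of the "combo" feature path, matching the blackboard
# description: three streams from the same features, the two MLP linear
# outputs added, a KLT on the sum, then concatenation with the KLT of
# the raw stream. Weights and dimensions are placeholders.
import numpy as np

rng = np.random.default_rng(0)


def klt(x):
    """Karhunen-Loeve transform: project the mean-centered features
    onto the eigenvectors of their covariance (PCA without reduction)."""
    xc = x - x.mean(axis=0)
    _, vecs = np.linalg.eigh(np.cov(xc, rowvar=False))
    return xc @ vecs


def mlp(x, w1, w2):
    """One hidden layer with a linear output, as in the combined system."""
    return np.tanh(x @ w1) @ w2


frames = rng.standard_normal((100, 15))  # 100 frames, 15 features

# Path 1: KLT of the raw feature stream.
path1 = klt(frames)

# Paths 2 and 3: an MLP on the raw stream and one on a low-passed copy.
w1a, w2a = rng.standard_normal((15, 20)), rng.standard_normal((20, 15))
w1b, w2b = rng.standard_normal((15, 20)), rng.standard_normal((20, 15))
lowpass = np.vstack([np.convolve(f, np.ones(5) / 5, mode="same")
                     for f in frames.T]).T  # moving-average stand-in
combined = mlp(frames, w1a, w2a) + mlp(lowpass, w1b, w2b)

# KLT of the summed MLP outputs, then concatenate with path 1 for HTK.
features = np.hstack([path1, klt(combined)])
print(features.shape)
```

The "first yellow line" variant discussed next is the same pipeline with the low-pass stream dropped, i.e. `combined = mlp(frames, w1a, w2a)` only.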
[speaker003:] Yeah that's the reason, yeah. [speaker002:] Oh let's focus on them then so what's the block diagram for the one above it? [speaker003:] For the f the f first yellow line you mean? [speaker002:] Yeah. [speaker003:] Yeah so it's uh basically s the same except that we don't have this uh low pass filtering so we have only two streams. [speaker004:] Step. [speaker003:] Well. There's [disfmarker] there's no low [disfmarker] low pass processing used as additional feature stream. [speaker002:] Mm hmm. Mm hmm. [speaker003:] Um [speaker002:] Do you e um they mentioned [disfmarker] made some [disfmarker] uh when I was on the phone with Sunil they [disfmarker] they mentioned some weighting scheme that was used to evaluate all of these numbers. [speaker003:] Yeah. Uh actually the way things seems to um well it's uh forty percent for TI digit, sixty for all the SpeechDat Cars, well all these languages. Ehm the well match is forty, medium thirty five and high mismatch twenty five. Yeah. [speaker002:] Um and we don't have the TI digits part yet? [speaker003:] Uh, no. [speaker002:] OK. [speaker003:] But yeah. Generally what you observe with TI digits is that the result are very close whatever the [disfmarker] the system. [speaker002:] OK. And so have you put all these numbers together into a single number representing that? [speaker003:] Yeah. [speaker002:] I mean not [disfmarker] [speaker003:] Uh not yet. No. [speaker002:] OK so that should be pretty easy to do and that would be good [disfmarker] [speaker003:] Mmm yeah, yeah. [speaker002:] then we could compare the two and say what was better. [speaker003:] Mmm. Yeah. [speaker002:] Um and how does this compare to the numbers [disfmarker] oh so OGI two is just the top [disfmarker] top row? [speaker004:] Yeah. 
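Putting the numbers together into the single weighted figure described above is simple arithmetic: 40% TI-digits, 60% SpeechDat-Car, with the SpeechDat-Car conditions weighted 40% well-matched, 35% medium-mismatch, 25% high-mismatch. The accuracy values in the example call are placeholders, not results from the tables.

```python
# The weighted evaluation measure as stated in the meeting: 40/60 split
# between TI-digits and SpeechDat-Car, and 40/35/25 across the
# well-matched, medium-mismatch, and high-mismatch conditions.

def weighted_score(ti_digits, well, medium, high):
    speechdat_car = 0.40 * well + 0.35 * medium + 0.25 * high
    return 0.40 * ti_digits + 0.60 * speechdat_car


# Placeholder accuracies, for illustration only.
print(weighted_score(ti_digits=95.0, well=92.0, medium=85.0, high=60.0))
```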
[speaker003:] So yeah to [disfmarker] actually OGI two is the [disfmarker] the baseline with the OGI features but this is not exactly the result that they have because they've [disfmarker] they're still made some changes in the features [speaker002:] OK. [speaker003:] and [disfmarker] well but uh actually our results are better than their results. Um I don't know by how much because they did not send us the new results [speaker002:] OK. [speaker003:] Uh [speaker002:] Uh OK so the one [disfmarker] one place where it looks like we're messing things up a bit is in the highly mismatched Italian. [speaker003:] Yeah. Yeah. [speaker002:] An [speaker003:] Yeah there is something funny happening here because [disfmarker] yeah. [speaker002:] Yeah. [speaker003:] But there are thirty six and then sometimes we are [disfmarker] we are [disfmarker] we are around forty two and [speaker002:] Now up [speaker003:] but [speaker002:] Uh so one of the ideas that you had mentioned last time was having a [disfmarker] a second um silence detection. [speaker003:] Yeah. So there are some results here [speaker004:] For the Italian. [speaker003:] uh so the third and the fifth line of the table [speaker004:] For this one. [speaker002:] So filt is what that is? [speaker003:] Filt, yeah [speaker004:] Yeah. [speaker003:] Um yeah so it seems f for the [disfmarker] the well match and mismatched condition it's uh it brings something. Uh but uh actually apparently there are [disfmarker] there's no room left for any silence detector at the server side because of the delay. Uh well [speaker002:] Oh we can't do it. Oh OK. [speaker003:] No. [speaker004:] For that [disfmarker] for that we [disfmarker] [speaker002:] Oh. [speaker003:] Uh [speaker002:] Too bad. Good idea, but can't do it. [speaker003:] Yeah. [speaker002:] OK. [speaker003:] Except I don't know because they [disfmarker] I think they are still working well. [speaker002:] Uh huh. 
[speaker003:] Uh t two days ago they were still working on this trying to reduce the delay of the silence detector so but yeah if we had time perhaps we could try to find uh some kind of compromise between the delay that's on the handset and on the server side. Perhaps try to reduce the delay on the handset and [disfmarker] but well hmm For the moment they have this large delay on the [disfmarker] the feature computation and so we don't [speaker002:] OK. So Alright so for now at least that's not there you have some results with low pass filter cepstrum doesn't have a huge effect but it [disfmarker] but it looks like it you know maybe could help in a couple places. [speaker003:] I th Yeah. [speaker002:] Uh little bit. Um and um um Yeah and uh let's see What else did we have in there? Uh I guess it makes a l um at this point this is I [disfmarker] I guess I should probably look at these others a little bit uh And you [disfmarker] you yellowed these out uh but uh uh Oh I see yeah that [disfmarker] that one you can't use because of the delay. Those look pretty good. Um let's see that one Well even the [disfmarker] just the [disfmarker] the second row doesn't look that bad right? That's just uh [speaker003:] Yep. [speaker002:] yeah? And [disfmarker] and that looks like an interesting one too. [speaker004:] Mmm yeah. [speaker002:] Uh [speaker003:] Actually the [disfmarker] yeah the second line is uh pretty much like the first line in yellow except that we don't have this KLT on the first [disfmarker] on the left part of the diagram. We just have the features as they are. [speaker002:] Mm hmm. [speaker003:] Um [speaker002:] Yeah. Yeah so when we do this weighted measure we should compare the two cuz it might even come out better. [speaker003:] Mm hmm. [speaker002:] And it's [disfmarker] it's [disfmarker] it's a little [disfmarker] slightly simpler. [speaker003:] Yeah. 
[speaker002:] So [disfmarker] so there's [disfmarker] so I [disfmarker] I would put that one also as a [disfmarker] as a maybe. Uh and it [disfmarker] yeah and it's actually [vocalsound] does [disfmarker] does significantly better on the uh uh highly mismatched Italian, so s and little worse on the mis on the MM case, but uh Well yeah it's worse than a few things [speaker003:] Mm hmm. [speaker002:] so uh let's see how that c that c c see how that comes out on their [disfmarker] their measure and [disfmarker] are [disfmarker] are we running this uh for TI digits or uh [speaker003:] Yeah. Yeah. [speaker002:] Now is TI di [disfmarker] is is that part of the result that they get for the uh development [disfmarker] th the results that they're supposed to get at the end of [disfmarker] end of the month, the TI digits are there also? [speaker003:] Yeah. It's included, yeah. [speaker002:] Oh OK. OK. And see what else there is here. Um Oh I see [disfmarker] the one [disfmarker] I was looking down here at the [disfmarker] the o the row below the lower yellowed one. [speaker003:] Mm hmm? [speaker002:] Uh that's uh that's with the reduced uh KLT size [disfmarker] reduced dimensionality. [speaker003:] Yeah. Yeah. [speaker002:] What happens there is it's around the same and so you could reduce the dimension as you were saying before a bit perhaps. [speaker003:] Yeah, it's [disfmarker] it's significantly worse well but [disfmarker] Mm hmm. [speaker002:] It's significantly worse [disfmarker] it's [disfmarker] it's uh it's [disfmarker] it's mostly worse. [speaker003:] Exc except for the HM [speaker004:] For many a mismatch it's worse. [speaker003:] but [speaker002:] Yeah. But it is little. I mean not [disfmarker] not by a huge amount, I don't know. What are [disfmarker] what are the sizes of any of these sets, I [disfmarker] I'm [disfmarker] I'm sure you told me before, but I've forgotten. So [disfmarker] you know how many words are in uh one of these test sets? 
[speaker003:] Uh [speaker004:] I don't remember. [speaker003:] Um it's [disfmarker] it depends [disfmarker] well [disfmarker] the well matched is generally larger than the other sets [speaker002:] About? [speaker003:] and I think it's around two thousand or three thousand words perhaps, at least. [speaker004:] Ye But words [disfmarker] well word [disfmarker] I don't know. [speaker003:] Hmm? The words, yeah. [speaker004:] Sentences. [speaker003:] S sentences. Some sets have five hundred sentences, so. [speaker004:] Yeah. [speaker003:] Mmm. [speaker002:] So the [disfmarker] so the sets [disfmarker] so the test sets are between five hundred and two thousand sentences, let's say and each sentence on the average has four or five digits or is it [disfmarker] most of them longer or [speaker004:] Yeah for the Italian even seven digits y more or less [speaker003:] Yeah. It [disfmarker] it d [speaker004:] but sometime the sentence have only one digit [speaker003:] Seven digits. [speaker004:] and sometime uh like uh the number of uh credit cards, something like that. [speaker002:] Mm hmm. Right, so between one and sixteen. See the [disfmarker] I mean the reason I'm asking is [disfmarker] is [disfmarker] is we have all these small differences and I don't know how seriously to take them, right? [speaker003:] Mm hmm? [speaker002:] So uh i if [disfmarker] if you had uh just you know [disfmarker] to give an example, if you had uh um if you had a thousand words then uh a [disfmarker] a tenth of a percent would just be one word, [speaker003:] Yeah. [speaker002:] right? So [disfmarker] so it wouldn't mean anything. [speaker004:] Yeah. [speaker003:] Yeah. [speaker002:] Oh um so um yeah it be kind of [disfmarker] I'd kind of like to know what the sizes of these test sets were actually. [speaker004:] The size that we have? [speaker003:] Yeah. 
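Morgan's point about set sizes can be made concrete with a rough binomial error bar on a WER estimate: on a thousand words, a tenth of a percent is a single word, far inside the statistical noise. Treating word errors as independent is an optimistic assumption (errors in real speech cluster), so this if anything understates the uncertainty.

```python
import math

def wer_stderr(wer_percent, n_words):
    """Rough binomial standard error of a WER estimate, in percent,
    treating word errors as independent (optimistic for real speech)."""
    p = wer_percent / 100.0
    return 100.0 * math.sqrt(p * (1.0 - p) / n_words)

# A 0.1% absolute difference on a 1000-word set is one word --
# well inside one standard error of a 10% WER estimate.
se = wer_stderr(10.0, 1000)    # roughly 0.95% absolute
meaningful = 0.1 > 2 * se      # crude two-sigma check
```

A proper comparison between two systems on the same test set would use a matched-pair test rather than this sketch, but the order of magnitude is the point.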
We could [disfmarker] we could run [disfmarker] run some kind of significance tests [speaker002:] Yeah since these [disfmarker] well also just to know the numbers, [speaker003:] or [speaker004:] Yeah. [speaker002:] right. So these [disfmarker] these are word error rates [speaker003:] Yeah. [speaker002:] so this is on how many words. [speaker003:] Yep. [speaker004:] Yeah we have the result that the output of the HTK [speaker002:] Yeah. [speaker004:] The number of [disfmarker] of sentences, no it's the number isn't. [speaker003:] Yeah sure [disfmarker] sure. Yeah sure. [speaker002:] Yeah [speaker003:] Yeah. [speaker002:] so anyway if you could just mail out what those numbers are and then [disfmarker] then [disfmarker] that [disfmarker] that be great. [speaker004:] Yeah. [speaker002:] Um [vocalsound] what else is there here? Um see the second [disfmarker] second from the bottom it says SIL, but this is some different kind of silence or thing or [disfmarker] what was that? [speaker003:] Uh [speaker004:] It the [disfmarker] the output silence of the MLP. [speaker003:] Oh yeah I see. [speaker004:] It's only one small experiment to know what happened. To apply also to in include also the [disfmarker] the silence of the MLP we have the fifty six form and the silence to pick up the silence and we include those. [speaker002:] Yes. Uh huh, uh huh. The silence plus the KLT output? Oh so you're only using the silence. [speaker003:] Yeah. [speaker004:] Yeah, because when we apply the KLT [speaker003:] No they're [disfmarker] I think there is this silence in addition to the um KLT outputs [speaker002:] No. [speaker004:] in addition, yes. In addition t [speaker003:] it is because we [disfmarker] we [disfmarker] we just keep uh we don't keep all the dimensions after the KLT and [disfmarker] yeah. [speaker004:] and we not s we are not sure if we pick [disfmarker] we have the silence. 
[speaker003:] So we try to add the silence also in addition to the [disfmarker] these twenty eight dimensions. [speaker002:] I see. OK. And what [disfmarker] and what's OGI forty five? [speaker003:] Uh it's o it's OGI two, [speaker002:] The bottom one there? [speaker003:] it's [disfmarker] so the [disfmarker] th it's the features from the first line [speaker004:] It's in fact OGI two. [speaker003:] and [disfmarker] yeah. [speaker002:] S Right, but I mean what's the [disfmarker] what does the last row mean? [speaker003:] So it's uh basically this but without the KLT on the [disfmarker] from the left path. [speaker002:] I thought that was the one [disfmarker] I thought that was the second row. So what's the difference between the second [speaker003:] Uh the second line you don't have this combo stuff so you just [speaker002:] Oh. [speaker003:] uh [speaker002:] So this is like the second line but with [disfmarker] with the combo stuff. [speaker003:] Yeah. [speaker004:] And with the [disfmarker] all the output of the combo. [speaker003:] Yeah. [speaker002:] OK. Yeah. [speaker003:] Yeah. [speaker004:] Uh [speaker002:] OK, so [disfmarker] alright so it looks to me [disfmarker] I guess the same [disfmarker] given that we have to take the filt ones out of the [disfmarker] the running because of this delay problem [disfmarker] so it looks to me like the ones you said I agree are [disfmarker] are the ones to look at [speaker003:] Mm hmm. [speaker002:] but I just would add the [disfmarker] the [disfmarker] the second row one [speaker003:] Yeah. [speaker002:] and then um if we can um [speaker003:] Mmm. [speaker002:] oh yeah also when [disfmarker] when they're using this weighting scheme of forty, thirty five, twenty five is that on the percentages or on the raw errors? I guess it's probably on the percentages right? [speaker003:] Uh [vocalsound] I guess, yeah. [speaker002:] Yeah OK. [speaker003:] I guess, yeah. Mmm. [speaker002:] Alright. [speaker003:] It's not clear here. 
[speaker002:] OK. Maybe [disfmarker] maybe they'll argue about it. Um OK so if we can know what [disfmarker] how many words are in each and then um Dave uh Dave promised to get us something tomorrow which will be there as far as they've gotten [vocalsound] Friday [speaker003:] Mm hmm. Yeah. [speaker002:] and then we'll operate with that and uh how long did it I guess if we're not doing all these things [disfmarker] if we're only doing um um I guess since this is development data it's legitimate to do more than one, right? I mean ordinarily if [disfmarker] in final test data you don't want to do several and [disfmarker] and take the best [speaker003:] Yeah. Mmm. [speaker002:] that's [disfmarker] that's [disfmarker] that's not proper but if this is development data we could still look at a couple. [speaker003:] Yeah. We can [disfmarker] yeah. Sure. But we have to decide [disfmarker] I mean we have to fix the system on this d on this data, to choose the best [speaker002:] Yeah. I [speaker003:] and these [speaker002:] Right. But the question is when [disfmarker] when do we fix the system, [speaker003:] But we could [speaker002:] do we fix the system uh tomorrow or do we fix the system on Tuesday? [speaker003:] it d I think we fixed on Tuesday, yeah. [speaker002:] I [disfmarker] Yeah, OK except that we do have to write it up. [speaker003:] Yeah. Mm hmm. Mm hmm. Yeah. [speaker002:] Also, so [speaker003:] Yeah. [speaker002:] Um [speaker003:] Uh yeah well. Well basically it's this with perhaps some kind of printing and some [disfmarker] some other [@ @]. [speaker002:] Right so maybe what we do is we [disfmarker] we [disfmarker] we uh as soon as we get the data from them we start the training and so forth [speaker003:] Yeah but Mm hmm. [speaker002:] but we start the write up right away because as you say there [disfmarker] there's only minor differences between these. [speaker003:] I think you [disfmarker] we could [disfmarker] we could start soon, yeah. [speaker002:] Yeah. 
[speaker003:] Write up something. [speaker002:] Yeah, and [disfmarker] and I [disfmarker] I would [disfmarker] you know, I would [disfmarker] I'd kind of like to see it [speaker003:] Um yeah. Mm hmm. [speaker002:] maybe I can [disfmarker] I can edit it a bit uh sure. The [disfmarker] my [disfmarker] what in this si i in this situation is my forte which is English. [speaker003:] Yeah. [speaker002:] Uh so [speaker003:] Mmm. [speaker002:] uh H yeah. Have y have you seen alt d do they have a format for how they want the system descriptions or anything? [speaker003:] Uh not really. [speaker002:] OK. [speaker003:] Um There is the format of the table which is [vocalsound] quite impressive. [speaker002:] Yeah? Uh I see. Yes, for those who are listening to this and not looking at it uh it's not really that impressive, it's just tiny. It's all these little categories set a, set b, set c, multi condition, clean. Uh No mitigation. Wow. Do you know what no [disfmarker] what no mitigation means here? [speaker003:] Um it should be the the problem with the error [disfmarker] channel error [speaker002:] Oh that's probably the [disfmarker] [speaker003:] or [speaker002:] this is probably channel error stuff [speaker003:] well, you [disfmarker] Yeah. [speaker002:] huh? Oh this is i right, it says right above here channel [disfmarker] channel error resilience, [speaker003:] Yeah. [speaker002:] yeah. So recognition performance is just the top part, actually. Uh and they have [disfmarker] yes, split between seen databases and non seen so basically between development and [disfmarker] and evaluation. [speaker003:] Yeah. [speaker002:] And [vocalsound] so [disfmarker] right, it's presumed there's all sorts of tuning that's gone on on the see what they call seen databases and there won't be tuning for the uh unseen. Multi condition [disfmarker] multi condition. So they have [disfmarker] looks like they have [speaker003:] Mm hmm. 
[speaker002:] uh uh so they splitting up between the TI digits and everything else, I see. So the everything else is the SpeechDat Car, that's the multi multilingual [speaker003:] Yeah, so it's not divided between languages you mean or [disfmarker] [speaker002:] Well, it is. [speaker003:] it just [speaker002:] It is, but there's also [disfmarker] there's these tables over here for the [disfmarker] for the TI digits and these tables over here for the car data [speaker003:] Oh yeah. [speaker002:] which is [disfmarker] which is I guess all the multilingual stuff and then uh there's [disfmarker] they also split up between multi condition and clean only. [speaker003:] Yeah. For TI digits. [speaker002:] Yes. [speaker003:] Yeah, actually yeah. For the TI digits they want to train on clean and on noisy [speaker002:] Yeah. [speaker003:] and [disfmarker] yeah. [speaker002:] So we're doing that also, I guess. [speaker003:] Uh yeah. But uh we actually [disfmarker] do we have the features? Yeah. For the clean TI digits but we did not test it yet. Uh the clean training stuff. [speaker002:] OK. [speaker003:] Mmm. [speaker002:] Well anyway, sounds like there'll be a lot to do just to [vocalsound] work with our partners to fill out the tables [vocalsound] over the next uh next few days [speaker003:] Mm hmm. [speaker004:] Yes. [speaker002:] I guess they have to send it out [disfmarker] let's see the thirty first is uh uh Wednesday and I think the [disfmarker] it has to be there by some hour uh European time on Wednesday [speaker003:] Hmm hmm. [speaker002:] so [vocalsound] I think basically [speaker004:] We lost time uh Wednesday maybe because [vocalsound] that the difference in the time may be [disfmarker] is a long different of the time. [speaker002:] E excuse me? [speaker004:] Maybe the Thursday the twelfth of the night of the Thurs thirty one is [disfmarker] is not valid in Europe. [speaker003:] Yeah. [speaker004:] We don't know is happening. 
[speaker002:] Yes, so I mean [disfmarker] I think we have to actually get it done Tuesday [speaker004:] Tuesday. [speaker003:] Yeah, well. [speaker002:] right because I [disfmarker] I think [speaker003:] Except if [disfmarker] if it's the thirty one at midnight [speaker002:] uh Uh [speaker003:] or I don't know [disfmarker] we can [vocalsound] still do some work on Wednesday morning. [speaker002:] yeah well. [speaker003:] Yeah, well. [speaker002:] W i is but is [disfmarker] is it midni I thought it was actually something like five PM on [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker002:] was like [disfmarker] I thought it was five PM or something, I didn't think it was midnight. I thought they said they wanted everything by [speaker004:] Yeah, five PM. [speaker002:] well, so five PM their time is [disfmarker] is [disfmarker] [speaker004:] Not five PM, three PM. [speaker002:] if three PM. [speaker004:] Three PM. [speaker002:] Alright, that's six in the morning here. [speaker003:] It's d [speaker004:] Uh no [speaker003:] no. [speaker004:] three [disfmarker] three A three PM? [speaker003:] No, we are wondering about the [disfmarker] the [disfmarker] the hour that we have to eh I don't know if it's three PM [disfmarker] it's [speaker004:] Oh yeah, yeah, yeah, yeah. Three PM here is in Europe midnight. [speaker003:] Yeah, it's [disfmarker] it's midnight but [speaker002:] Yes, yes, but I didn't think it was midnight that it was due, [speaker004:] Oh OK. [speaker002:] I thought it was due at some hour during the day like five PM or something. [speaker004:] Mm hmm. Mm hmm, maybe. [speaker002:] In which case so I [disfmarker] I [disfmarker] uh well we should look but my assumption is that we basically have to be done Tuesday. Um so then next Thursday we can sort of have a little aftermath [speaker004:] Yeah. [speaker002:] but then [disfmarker] then we'll actually have the new data which is the German and the Danish [speaker003:] Yeah. 
[speaker002:] but that really will be much less work because uh the system will be fixed [speaker003:] Yeah. [speaker002:] so all we'll do is take whatever [vocalsound] they have and [disfmarker] and uh and run it through the process. [speaker003:] Yeah. Mm hmm. [speaker002:] Uh we won't be changing the training on anything so there'll be no new training, there'll just be new HTK runs, so that's means in some sense we can kind of relax from this after [disfmarker] after Tuesday and [disfmarker] and uh maybe next meeting we can start talking a little bit about where we want to go from here uh in terms of uh the research. [speaker003:] Mm hmm. [speaker002:] Um you know what things uh did you think of when you were uh doing this process that uh you just didn't really have time to adequately work on uh uh so [speaker003:] Mm hmm. Yeah. [speaker002:] What? [speaker001:] Oh, Stephane always has these great ideas and [disfmarker] oh, but uh we don't have time. [speaker003:] Sure. [speaker002:] Yeah. [speaker001:] Yeah. [speaker002:] Yeah. [speaker003:] I'm not sure these are great ideas. [speaker002:] But they're ideas. Yeah? Oh, that was good. [speaker003:] Yeah. [speaker001:] Yeah. [speaker002:] And [disfmarker] and uh also it's still true that uh I think it's true that [disfmarker] that we [disfmarker] we at least got fairly consistent i improved results by running uh the uh neural net transformation in parallel with the features [speaker003:] But [speaker002:] rather than uh in sequence which was [disfmarker] was your suggestion and that [disfmarker] that [disfmarker] that seems to have been borne out. [speaker003:] Mm hmm. Mm hmm. [speaker002:] The fact that none of these are [disfmarker] are [disfmarker] you know, enormous is [disfmarker] is [disfmarker] is not too surprising [disfmarker] most improvements aren't enormous and [vocalsound] uh [speaker003:] Yeah. 
[speaker002:] some of them are but uh I mean you have something really really wrong [vocalsound] and you fix it [vocalsound] you can get big and really enormous improvements [speaker003:] Mm hmm. [speaker002:] but [vocalsound] uh [vocalsound] um Cuz our best improvements over the years that we've gotten from finding bugs, but Anyway OK well I [disfmarker] I think [disfmarker] I see where we are and everybody knows what they're doing and is there [disfmarker] is there anything else we should talk about or [disfmarker] or [disfmarker] are we done? [speaker003:] Mm hmm. I think it's OK um. We so basically we will [disfmarker] I think we'll try to [disfmarker] to focus on these three architectures and [disfmarker] and perhaps I was thinking also a fourth one with just [disfmarker] just a single KLT because we did not really test that [disfmarker] removing all these KLT's and putting one single KLT at the end. [speaker002:] Uh huh. Yeah, I mean that would be pretty low maintenance to try it. [speaker003:] Yeah. [speaker002:] Uh if you can fit it in. [speaker003:] Mm hmm. [speaker002:] Oh I have [disfmarker] yeah I do have one other piece of information which uh I should tell people outside of this group too uh I don't know if we're gonna need it uh but uh Jeff up at the uh University of Washington has uh gotten a hold of a uh uh some kind of server farm of uh of ten uh uh multiprocessor uh IBM machines RS six thousands [speaker003:] Mm hmm. [speaker002:] and [disfmarker] and uh so I think each one is four processors or something or [disfmarker] I don't know, eight hundred megahertz or something and there's four processors in a box and there's ten boxes and there's some kind of ti so if [disfmarker] you know he's got a lot of processing power and um we'd have to schedule it but if we have some big jobs and we wanna [disfmarker] wanna [disfmarker] wanna run them he's [disfmarker] he's offering it. [speaker003:] Mm hmm. [speaker002:] So. 
It's uh when he was here eh uh he [disfmarker] he used i not only every machine here but every machine on campus as far as I could tell, so [disfmarker] so in some ways he just got his payback, but uh again I [disfmarker] I don't know if we'll end up with [disfmarker] if we're gonna be CPU limited on anything that we're doing in this group [speaker003:] Mm hmm. [speaker002:] but [disfmarker] but if [disfmarker] if we are that's an offer. OK well uh you guys doing great stuff so that's [disfmarker] that [disfmarker] that's really neat and uh we'll uh uh g don't think we need to uh um Oh well the other thing I guess that I will say is that uh the digits that we're gonna record momentarily is starting to get [disfmarker] are starting to get into a pretty good size collection and um in addition to the SpeechDat stuff we will have those to work with really pretty soon now so that's [disfmarker] that's another source of data. Um which is s under somewhat better control and that we can [disfmarker] we can make measurements of the room the [disfmarker] uh that [disfmarker] you know if we feel there's other measurements we don't have that we'd like to have we can make them and uh Dave and I were just talking about that a little while ago [speaker003:] Mm hmm. [speaker002:] so uh that's another [disfmarker] another possibility for this [disfmarker] this kind of work. K, uh if nobody has anything else maybe we should go around do [disfmarker] do our digits [disfmarker] do our digits duty. OK. OK I'll start. Uh, let me say that again. OK. I guess we're done. [speaker005:] OK. [speaker002:] OK, so [pause] We [disfmarker] we had a meeting with, uh [disfmarker] with Hynek, um, in [disfmarker] in which, uh, uh, Sunil and Stephane, uh [vocalsound] summarized where they were and [disfmarker] and, uh, talked about where we were gonna go. So that [disfmarker] that happened sort of mid week. [speaker005:] D did [disfmarker] did you guys get your code pushed together? [speaker002:] Uh. 
[speaker004:] Oh, yeah. Yeah. It's [disfmarker] it's [disfmarker] it's [disfmarker] it was updated yesterday, [speaker005:] Cool. [speaker004:] right? [speaker001:] Yeah. [speaker004:] Yeah. [speaker005:] Oh, right, I saw [disfmarker] I saw the note. [speaker001:] You probably received the mail. Yeah. [speaker005:] Mm hmm. [speaker002:] What was the update? [speaker001:] What was the update? [speaker002:] Yeah. [speaker001:] So there is th then [disfmarker] the [disfmarker] all the new features that go in. The, um, noise suppression, the re synthesis of speech after suppression. [speaker005:] Is the, um [disfmarker] the CVS mechanism working [pause] well? [speaker001:] These are the [disfmarker] Yeah. [speaker005:] Are [disfmarker] are people, uh, up at OGI grabbing code uh, via that? Or [disfmarker]? [speaker004:] Uh, I don't think [disfmarker] I don't think [disfmarker] [speaker001:] I don't know if they use it, but. [speaker005:] Uh huh. [speaker004:] Yeah, I I don't think anybody up there is like [pause] working on it right now. [speaker005:] Mmm. [speaker002:] I think it more likely that what it means is that when Sunil is up there [vocalsound] he will grab it. [speaker004:] Yeah. Yeah. [speaker005:] Yeah. [speaker004:] So right now nobody's working on Aurora there. [speaker005:] I see. [speaker002:] They're [disfmarker] Yeah. They're working on a different task. [speaker005:] I see. [speaker004:] Yeah. [speaker005:] OK. [speaker002:] But what'll happen is [disfmarker] is he'll go back up there and, uh, Pratibha will come back from [disfmarker] from, uh, the east coast. [speaker005:] Mm hmm. [speaker002:] Uh. And, uh [disfmarker] and [disfmarker] and I guess actually, uh, after Eurospeech for a little bit, uh, he'll go up there too. So, actually everybody [vocalsound] who's working on it [comment] will be up there for at least a little while. So they'll remotely access it [vocalsound] from there. 
[speaker005:] So has [disfmarker] Has anybody tried remotely accessing the CVS using, uh, uh, SSH? [speaker002:] Yeah. [speaker001:] Um, I don't know if Hari did that or [disfmarker] You d [speaker004:] I [comment] can actually do it today. I mean, I can just log into [disfmarker] [speaker005:] Have you tried it yet? [speaker004:] No, I didn't. [speaker005:] OK. [speaker004:] So I I'll try it today. [speaker002:] Good idea. Yeah. [speaker004:] Yeah. [speaker001:] Actually I [disfmarker] I tried wh while [disfmarker] when I installed the [pause] repository, I tried from Belgium. I logged in there and I tried [pause] to import [disfmarker] [speaker005:] Yeah? It worked good? [speaker001:] Yeah, it works. [speaker005:] Oh, good! Great! [speaker004:] Oh. [speaker001:] But it's [disfmarker] So, right now it's the mechanism with SSH. I don't [pause] s I didn't set up [disfmarker] You can also set up a CVS server [pause] on a new port. It's like well [pause] uh, a main server, or d You can do a CVS server. [speaker005:] Yeah. Right. Then that's using the CVS password mechanism and all that, [speaker001:] But. [speaker005:] right? [speaker001:] Yeah, right. But I didn't do that because I was not sure about [pause] security problems. I [disfmarker] I would have to [disfmarker] [speaker005:] So w when you came in from Belgian [disfmarker] [comment] Belgium, using SSH, uh, was it asking you for your own [pause] password into ICSI? So if yo you can only do that if you have an account at ICSI? [speaker001:] Right. Yeah. [speaker005:] OK. [speaker001:] Yeah. [speaker005:] Cuz there is an [disfmarker] a way to set up anonymous CVS right? So that [disfmarker] [speaker001:] Yeah, you ha in this way you ca you have to set up a CVS server but then, yeah, you can access it. [speaker005:] Oh, OK. So the anonymous mechanism [disfmarker] [speaker001:] you [disfmarker] you can set up priorities. 
You can access them and mostly if you [disfmarker] if y the set the server is set up like this. [speaker005:] OK. Because a lot of the open source stuff works with anonymous CVS and I'm just wondering [disfmarker] Uh, I mean, for our transcripts we may want to do that. [speaker001:] Mm hmm. [speaker002:] Yeah. [speaker005:] Uh. [speaker002:] Yeah, for this stuff I don't think we're [pause] quite up to that. [speaker005:] Mm hmm. [speaker002:] I mean, we're still so much in development. [speaker005:] Yeah, yeah, yeah. [speaker002:] We want to have just the insiders. [speaker005:] Oh, I wasn't suggesting for this. I'm [pause] thinking of the Meeting Recorder [comment] stuff but. [speaker002:] Yeah. [speaker005:] Yeah. OK. Cool. [speaker002:] Yeah. So, uh [disfmarker] [speaker005:] What's new? [speaker002:] Well, I mean, I think maybe the thing to me might be [disfmarker] I me I'm sure you've just been working on [disfmarker] on, uh, details of that since the meeting, right? And so [disfmarker] [speaker001:] Mmm, since the meeting, [speaker002:] That was [disfmarker] that was Tuesday. [speaker001:] well, I [disfmarker] I've been [disfmarker] I've been train training a new VAD and a new [pause] feature net. [speaker002:] OK. [speaker001:] So they should be ready. Um. [speaker002:] But I guess maybe the thing [disfmarker] since you weren't [disfmarker] yo you guys weren't at that [disfmarker] that meeting, might be just [disfmarker] just to, um, sort of recap, uh, the [disfmarker] the conclusions of the meeting. [speaker005:] Oh, great. [speaker001:] Mm hmm. [speaker005:] You're talking about the meeting with Hynek? [speaker002:] So. Yeah. Cuz that was sort of, uh [disfmarker] we [disfmarker] we'd sort of been working up to that, that [disfmarker] that, uh, he would come here this week and [disfmarker] and we would sort of [disfmarker] [speaker005:] Uh huh. 
[speaker002:] Since he's going out of town like now, and I'm going out town in a couple weeks, uh, and time is marching, sort of, given all the mu many wonderful things we could be working on, what [disfmarker] what will we actually focus on? [speaker005:] Mm hmm. [speaker002:] And, uh [disfmarker] and what do we freeze? And, you know, what do we [disfmarker]? So, um. I mean, this [pause] software that these guys created was certainly a [disfmarker] a key part. So then there's something central and there aren't at least a bunch of different versions going off in [disfmarker] in ways that [pause] differ [pause] trivially. [speaker005:] Yeah. [speaker002:] Uh, um, and, um, [speaker005:] That's [disfmarker] that's nice. [speaker002:] and then within that, I guess the idea was to freeze a certain set of options for now, to run it, uh, a particular way, and decide on what things are gonna be experimented with, as opposed to just experimenting with everything. So keep a certain set of things constant. So, um. Uh, maybe describe roughly what [disfmarker] what we are keeping constant for now, or [disfmarker]? [speaker001:] Yeah. Well. So we've been working like six weeks on [disfmarker] on the noise compensation and we end up with something that seems reasonable. Um. [speaker005:] Are you gonna use [disfmarker] which of the two techniques? [speaker001:] So finally it's [disfmarker] it's, um, Wiener filtering on FFT bins. And it uses, uh, two steps, smoothing of the transfer function, the first step, that's along time, which use recursion. And [vocalsound] after this step there is a further smoothing along frequency, which use a sliding window of twenty FFT bins. Mmm. And, uh [disfmarker] [speaker005:] So this is on the [disfmarker] uh, before any mel scaling has been done? [speaker001:] Yeah, [speaker005:] This is [disfmarker] [speaker001:] yeah. [speaker002:] This [disfmarker] this smoothing is done on the estimate, um, of what you're going to subtract? 
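The noise-suppression scheme Stephane goes on to describe (a Wiener transfer function per FFT bin, smoothed first along time with a recursion and then along frequency with a sliding window of twenty bins) can be sketched as below. The recursion coefficient, the frame and bin counts, and the noise spectrum being given in advance are all assumptions for illustration; the actual system estimates the noise itself.

```python
import numpy as np

def wiener_gains(noisy_psd, noise_psd, alpha=0.9, freq_win=20):
    """Per-bin Wiener transfer function H = S_x / (S_x + S_n), smoothed
    first along time (first-order recursion; 'alpha' is an assumed
    value) and then along frequency with a 'freq_win'-bin window."""
    clean_est = np.maximum(noisy_psd - noise_psd, 1e-10)
    H = clean_est / (clean_est + noise_psd)   # raw Wiener gain per bin
    # Step 1: recursive smoothing along time.
    for t in range(1, H.shape[0]):
        H[t] = alpha * H[t - 1] + (1 - alpha) * H[t]
    # Step 2: moving-average smoothing along frequency.
    kernel = np.ones(freq_win) / freq_win
    return np.apply_along_axis(
        lambda h: np.convolve(h, kernel, mode='same'), 1, H)

rng = np.random.default_rng(1)
noisy = rng.random((50, 129)) + 1.0   # 50 frames x 129 FFT bins (assumed)
noise = np.full_like(noisy, 0.5)      # assumed known noise floor
G = wiener_gains(noisy, noise)
# Multiply the noisy FFT bins by G before the mel stage; re-synthesis
# from the cleaned bins is the optional branch mentioned in the meeting.
```

This applies the filtering on raw FFT bins rather than mel bands, which is the configuration the group says it settled on after trying both.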
Or on the thing that has already had something subtracted? [speaker001:] It was [disfmarker] Yeah. Uh, [vocalsound] it's on the transfer function. So [disfmarker] [speaker002:] Oh, it's on the transfer function for the Wiener filter. [speaker001:] Yeah. [speaker002:] Yeah, OK. [speaker001:] Yeah, so basically we tried [vocalsound] different configuration within this idea. We tried u u applying this on mel bands, having spectral subtraction instead of Wiener filtering. Um. Well, finally we end up with [pause] this configuration that works, uh, quite well. So we are going to fix this for the moment and work on the other aspects of [vocalsound] the whole system. [speaker005:] Mm hmm. [speaker001:] So [disfmarker] [speaker002:] Actually, let me int eh, Dave isn't here to talk about it, but let me just interject. This module, in principle, i I mean, you would know whether it's [vocalsound] true in fact, is somewhat independent from the rest of it. I mean, because you [disfmarker] you re synthesize speech, right? [speaker001:] Mm hmm. [speaker002:] So, um. Uh, well you don't [disfmarker] I guess you don't re synthesize speech, but you could [disfmarker] [speaker001:] We [disfmarker] we do not fo [speaker002:] Uh, but you could. [speaker001:] Well [disfmarker] well, we do, but we don't [disfmarker] don't re synthesize. In [disfmarker] in the program we don't re synthesize and then re analyze once again. We just use the clean FFT bins. [speaker002:] But you have a re synthesized thing that you [disfmarker] that's an [disfmarker] an option here. [speaker001:] This is an option that [disfmarker] then you can [disfmarker] Yeah. [speaker002:] Yeah, I gu I guess my point is that, um, i in some of the work he's doing in reverberation, one of the things that we're finding is that, uh, it's [disfmarker] it's [disfmarker] for the [disfmarker] for an artificial situation, we can just deal with the reverberation and his techniques work really well.
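[A minimal sketch of the two-step smoothing described in this exchange: a recursion along time, then a sliding window along frequency, applied to per-frame Wiener-filter gains over FFT bins. Only the 20-bin frequency window comes from the discussion; the function name, the recursion constant `alpha`, and the use of a simple moving average are illustrative assumptions, not the actual system's settings.]

```python
import numpy as np

def smooth_transfer_function(H, alpha=0.9, win=20):
    """Two-step smoothing of a Wiener-filter transfer function.
    H: array of shape (n_frames, n_fft_bins) holding per-frame gains.
    alpha is an assumed recursion constant; win=20 FFT bins is the
    sliding-window width mentioned in the meeting."""
    # Step 1: first-order recursive smoothing along time
    H_t = np.empty_like(H)
    H_t[0] = H[0]
    for t in range(1, H.shape[0]):
        H_t[t] = alpha * H_t[t - 1] + (1 - alpha) * H[t]
    # Step 2: moving-average smoothing along frequency (20-bin window)
    kernel = np.ones(win) / win
    H_tf = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, H_t)
    return H_tf
```

[A constant gain passes through unchanged away from the band edges, which is a quick sanity check that the smoothing only removes frame-to-frame and bin-to-bin fluctuations.]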
But for the real situation uh, problem is, is that you don't just have reverberation, you have reverberation in noise. And if you don't include that in the model, it doesn't work very well. So in fact it might be a very nice thing to do, to just take the noise removal part of it and put that in front of what he's looking at. And, uh, generate new files or whatever, and [disfmarker] and, uh, uh [disfmarker] and then do the reverberation part. [speaker001:] Mm hmm. [speaker002:] So it's [disfmarker] [speaker004:] Mmm. [speaker002:] Anyway. [speaker005:] So Dave hasn't [pause] tried that yet? [speaker002:] No, no. He's [disfmarker] I mean, e [speaker005:] I guess he's busy with [disfmarker] [speaker002:] Yeah, prelims, right. [speaker003:] Pre prelim hell. [speaker005:] Yeah. [speaker002:] Yeah. So. [speaker005:] Yeah. [speaker002:] Uh, but [disfmarker] but, you know, that'll [disfmarker] uh, it's clear that we, uh [disfmarker] we are not [disfmarker] with the real case that we're looking at, we can't just look at reverberation in isolation because the interaction between that and noise is [disfmarker] is considerable. And that's I mean, in the past we've looked at, uh, and this is hard enough, the interaction between channel effects and [disfmarker] and, uh [disfmarker] and additive noise, uh, so convolutional effects and [disfmarker] and additive effects. And that's hard enough. I mean, I don't think we really [disfmarker] I mean, we're trying to deal with that. In a sense that's what we're trying to deal with in this Aurora task. And we have, uh, the, uh, uh, LDA stuff that in principle is doing something about convolutional effects. And we have the noise suppression that's doing something about noise. Uh, even that's hard enough. And [disfmarker] and the on line normalization as well, in that s category. i i There's all these interactions between these two and that's part of why these guys had to work so hard on [disfmarker] on juggling everything around. 
But now when you throw in the reverberation, it's even worse, because not only do you have these effects, but you also have some long time effects. And, um, so Dave has something which, uh, is doing some nice things under some conditions with [disfmarker] with long time effects but when it's [disfmarker] when there's noise there too, it's [disfmarker] it's [disfmarker] it's pretty hard. So we have to start [disfmarker] Since any [disfmarker] almost any real situation is gonna have [disfmarker] uh, where you have the microphone distant, is going to have both things, we [disfmarker] we actually have to think about both at the same time. [speaker005:] Hmm. [speaker002:] So, um [disfmarker] So there's this noise suppression thing, which is sort of worked out and then, uh, maybe you should just continue telling what [disfmarker] what else is in the [disfmarker] the form we have. [speaker001:] Yeah, well, [vocalsound] the, um, the other parts of the system are the [disfmarker] the blocks that were already present before and that we did not modify a lot. [speaker002:] So that's [disfmarker] again, that [disfmarker] that's the Wiener filtering, followed by, uh [disfmarker] uh, that's done at the FFT level. Then [disfmarker] [speaker001:] Yeah, th then the mel filter bank, [speaker002:] Mm hmm. [speaker001:] then the log operation, [speaker002:] Mm hmm. [speaker001:] Mmm. [speaker002:] The [disfmarker] the [disfmarker] the filtering is done in the frequency domain? [speaker001:] Yeah. [speaker002:] Yeah, OK. And then the mel and then the log, and then the [speaker001:] Then the LDA filter, [speaker002:] LDA filter. [speaker001:] mmm, then the downsampling, [speaker002:] And then uh downsample, [speaker001:] DCT, [speaker002:] DCT, [speaker001:] then, um, on line normalization, [speaker002:] on line norm, [speaker001:] followed by [pause] upsampling. Then finally, we compute delta and we put the neural network also. 
[speaker002:] Right, and then in parallel with [disfmarker] an [disfmarker] a neural net. And then following neural net, some [disfmarker] probably some orthogonalization. Uh [disfmarker] [speaker001:] Yeah. [speaker002:] Um. [speaker001:] And finally frame dropping, which um, [vocalsound] would be a neural network also, used for estimated silence probabilities. And the input of this neural network would be somewhere between log [pause] mel bands or one of the earlier stages of the processing. [speaker002:] Mm hmm. So that's sort of [disfmarker] most of this stuff is [disfmarker] yeah, is operating parallel with this other stuff. [speaker001:] Mm hmm. [speaker002:] Yeah. So the things that we, um, uh, I guess we sort of [disfmarker] uh, There's [disfmarker] there's some, uh, neat ideas for [vocalsound] V A So, I mean, in [disfmarker] I think there's sort of like [disfmarker] There's a bunch of tuning things to improve stuff. There's questions about [pause] various places where there's an exponent, if it's the right exponent, or [pause] ways that we're estimating noise, that we can improve estimating noise. And there's gonna be a host of those. But structurally it seemed like the things [disfmarker] the main things that [disfmarker] that we brought up that, uh, are [disfmarker] are gonna need to get worked on seriously are, uh, uh, a [disfmarker] [vocalsound] a significantly better VAD, uh, putting the neural net on, um, which, you know, we haven't been doing anything with, the, uh, neural net at the end there, and, uh, the, uh, [vocalsound] opening up the second front. Uh. [speaker005:] The other half of the channel? That what you mean? [speaker002:] Yeah, yeah, I mean, cuz we [disfmarker] we have [disfmarker] we have, uh, uh, half the [disfmarker] the, uh, data rate that they allow. And, uh, so the initial thing which came from, uh, the meeting that we had down south was, uh, that, um, we'll initially just put in a mel spectrum as the second one. 
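[The front-end stage ordering just enumerated (noise-suppressed log mel energies, LDA temporal filtering, downsampling, DCT, on-line normalization, upsampling, deltas) can be sketched roughly as below. Every concrete value here is an assumption for illustration: the FIR taps standing in for the trained LDA filters, the downsampling factor, the recursive-mean normalization, and the two-frame deltas are placeholders, not the real system's parameters.]

```python
import numpy as np

def frontend_sketch(log_mel, lda_fir, down=2, ncep=13, alpha=0.99):
    """Illustrative ordering of the stages described in the meeting.
    log_mel: (n_frames, n_bands) log mel energies after noise
    suppression. lda_fir: 1-D FIR taps standing in for the LDA filter."""
    # LDA-style temporal filtering: FIR filter applied per band over time
    filtered = np.apply_along_axis(
        lambda band: np.convolve(band, lda_fir, mode="same"), 0, log_mel)
    # Downsample in time (the filtering removed high modulation freqs)
    ds = filtered[::down]
    # DCT-II across bands to get the first ncep cepstral coefficients
    n = ds.shape[1]
    k = np.arange(ncep)[:, None]
    basis = np.cos(np.pi * k * (2 * np.arange(n) + 1) / (2 * n))
    cep = ds @ basis.T
    # On-line normalization: subtract a recursively updated mean estimate
    norm = np.empty_like(cep)
    mean = np.zeros(ncep)
    for t in range(cep.shape[0]):
        mean = alpha * mean + (1 - alpha) * cep[t]
        norm[t] = cep[t] - mean
    # Upsample back to the original frame rate (simple frame repetition)
    up = np.repeat(norm, down, axis=0)[: log_mel.shape[0]]
    # Deltas: a simple two-frame difference as a placeholder
    delta = np.vstack([np.zeros((1, ncep)), np.diff(up, axis=0)])
    return np.hstack([up, delta])
```

[The neural-net stream and the frame dropping driven by the VAD would run in parallel with this path, as the speakers note.]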
It's, you know, [pause] cheap, easy. Uh. There's a question about exactly how we do it. We probably will go to something better later, [speaker005:] Mm hmm. [speaker002:] but the initial thing is that cepstra and spectra behave differently, so. Um, [comment] I think Tony Robinson used to do [disfmarker] I was saying this before. I think he used to do mel, uh, spectra and mel cepstra. He used them as alternate features. [speaker005:] Hmm. [speaker002:] Put them together. Uh. [speaker005:] So if you took the system the way it is now, the way it's fro you're gonna freeze it, and it ran it on the last evaluation, where it would it be? [speaker001:] Mm hmm. It, uh, [speaker005:] In terms of ranking? [speaker004:] Second. [speaker001:] Ri right now it's second. Um. [speaker005:] Mm hmm. [speaker002:] Although you [disfmarker] you know, you haven't tested it actually on the German and Danish, have you? [speaker001:] No, we didn't. No, um. [speaker002:] Yeah. [speaker005:] So on the ones that you did test it on it would have been second? [speaker002:] Yeah. Would it [disfmarker] I mean [disfmarker] But [disfmarker] When you're saying second, you're comparing to the numbers that the, uh [disfmarker] that the best system before got on, uh [disfmarker] also without German and Danish? [speaker001:] Yeah, yeah. [speaker002:] Yeah, OK. [speaker004:] And th the ranking actually didn't change after the German and Danish. So, yeah. [speaker002:] Well ranking didn't before, but I'm just asking where this is to where theirs was without the German and Danish, [speaker001:] Yeah. [speaker004:] Yeah. Yeah. [speaker001:] Mmm. [speaker004:] Yeah, yeah. [speaker002:] right? So. [speaker005:] Where [disfmarker] where [disfmarker] where were we actually on the last test? [speaker002:] Oh, we were also esp essentially second, although there were [disfmarker] there were [disfmarker] I mean, we had a couple systems and they had a couple systems. 
And so, I guess by that [pause] we were third, but I mean, there were two systems that were pretty close, that came from the same place. [speaker005:] Uh huh. I see. OK. [speaker002:] Uh, so institutionally we were [disfmarker] [vocalsound] we were second, with, uh, the third [disfmarker] third system. [speaker005:] We're [disfmarker] so this second that you're saying now is system wide second? [speaker002:] See [disfmarker] Uh, no I think it's also institutional, [speaker005:] Still institutionally second? [speaker002:] isn't it? Right? I mean, I think both of their systems probably [disfmarker] [speaker001:] Uh, we are between their two systems. [speaker002:] Oh, are we? [speaker001:] So I [disfmarker] It is a triumph. [speaker004:] Yeah. Their [disfmarker] their first system is fifty four point something. [speaker002:] Is it? [speaker004:] And, uh, we are fifty three point something. And their second system is also fifty three point something. [speaker001:] But everything is [pause] within the range of one [disfmarker] one percent. [speaker004:] Yeah, one percent. [speaker005:] Oh, wow! [speaker002:] Yeah, so [disfmarker] so basically they're all [disfmarker] they're all pretty close. [speaker005:] That's very close. [speaker001:] So. [speaker004:] Yeah. [speaker005:] Yeah. [speaker002:] And [disfmarker] and, [vocalsound] um, you know, in some sense we're all doing fairly similar things. Uh, I mean, one could argue about the LDA and so forth but I [disfmarker] I think, you know, in a lot of ways we're doing very similar things. But what [disfmarker] what [disfmarker] [speaker005:] So how did they fill up this [disfmarker] all these [disfmarker] these bits? I mean, if we're u [speaker002:] Um, why are we using half? [speaker005:] Yeah. 
[speaker002:] Well, so you could [disfmarker] you c [speaker005:] Or how are they using more than half, I guess maybe is what I [disfmarker] [speaker002:] Yeah, so I [disfmarker] I think [disfmarker] uh, you guys are closer to it than me, so correct me if I'm wrong, but I [disfmarker] I think that what's going on is that in [disfmarker] in both cases, some kind of normalization is done to deal with convola convolutional effects. Uh, they have some cepstral [pause] modification, right? [speaker001:] Mm hmm. [speaker002:] In our case we have a couple things. We have the on line normalization and then we have the LDA RASTA. And [pause] they seem to comple complement each other enough and be different enough that they both seem to help [disfmarker] help us. But in any event, they're both doing the same sort of thing. But there's one difference. The LDA RASTA, uh, throws away high modulation frequencies. And they're not doing that. [speaker005:] So th So [disfmarker] [speaker002:] So that if you throw away high modulation frequencies, then you can downsample. [speaker003:] Get down. [speaker005:] I see. I see. [speaker002:] So [disfmarker] [speaker005:] So what if you didn't [disfmarker] So do you explicitly downsample then? Do we explicitly downsample? [speaker002:] Yeah. [speaker001:] Yeah. [speaker005:] And what if we didn't do that? Would we get worse performance? [speaker002:] I think it doesn't affect it, [speaker001:] Um [pause] Yeah, not better, not worse. [speaker002:] does it? [speaker005:] I see. OK. [speaker002:] Yeah. So I think the thing is, since we're not evidently throwing away useful information, let's try to put in some useful information. [speaker005:] Yeah. Yeah. [speaker002:] And, uh, so I [disfmarker] you know, we [disfmarker] we've found in a lot of ways for quite a while that having a second stream uh, helps a lot. 
So that's [disfmarker] that's put in, and you know, it may even end up with mel spectrum even though I'm saying I think we could do much better, just because it's simple. [speaker005:] Mm hmm. [speaker002:] Um. And you know, in the long run having something everybody will look at and say, "oh, yeah, I understand", is [disfmarker] is very helpful. [speaker005:] So you would [disfmarker] you're [disfmarker] You're thinking to put the, uh, mel spectrum in before any of the noise removal stuff? or after? [speaker002:] Well, that's a question. I mean, we were talking about that. It looks like it'd be straightforward to [disfmarker] to, uh, remove the noise, um, and, uh, [speaker005:] Cuz that happens before the mel conversion, right? [speaker002:] Yeah. So, I mean, to do it after the mel conversion [disfmarker] uh, after the noise removal, after the mel conversion. There's even a question in my mind anyhow of whether th you should take the log or not. Uh. I sort of think you should, but I don't know. [speaker001:] What about norm normalizing also? [speaker002:] Right. Uh. Well, but normalizing spectra instead of cepstra? [speaker001:] Yeah. [speaker002:] Yeah, probably. Some kind would be good. You know? I would think. [speaker004:] Well, it [disfmarker] it [disfmarker] it [disfmarker] it [disfmarker] so it actually makes it dependent on the overall energy of the [disfmarker] uh, the frame. [speaker002:] If you do or don't normalize? [speaker004:] If yo if you don't normalize and [disfmarker] if [disfmarker] if you don't normalize. [speaker002:] Right. Yes, so I mean, one would think that you would want to normalize. But I [disfmarker] I [disfmarker] w w My thought is, uh, particularly if you take the log, try it. And then if [disfmarker] if normalization helps, then y you have something to compare against, and say, "OK, this much effect" [disfmarker] I mean, you don't want to change six things and then see what happens. You want to change them one at a time. 
[speaker004:] Mm hmm. [speaker002:] So adding this other stream in, that's simple in some way. And then [pause] saying, oh [disfmarker] uh [disfmarker] particularly because we've found in the past there's all these [disfmarker] these [disfmarker] these different results you get with slight modifications of how you do normalization. Normalization's a very tricky, sensitive thing and [pause] you learn a lot. So, I would think you would wanna [pause] have some baseline that says, "OK, we don't normalize, this is what we get", when we do this normalization, when we do that normalization. But [disfmarker] but the other question is [disfmarker] So I think ultimately we'll wind up doing some normalization. I agree. [speaker005:] So this second stream, will it add latency to the system or [disfmarker]? [speaker002:] No, it's in parallel. [speaker003:] Para [speaker005:] S [speaker002:] We're not talking about computation time here. We're ta I think we're pretty far out. [speaker005:] Yeah. [speaker002:] So it's just in terms of what data it's depending on. It's depending on the same data as the other. [speaker005:] Same data. OK. [speaker002:] So it's in parallel. Uh huh. [speaker003:] So with this, uh, new stream would you train up a VAD on both [disfmarker] both features, somehow? [speaker004:] No, I guess the VAD has its own set of features. [speaker003:] OK. [speaker004:] I mean, which could be this [disfmarker] one of these streams, or it can be something derived from [pause] these streams. [speaker003:] that's [disfmarker] [speaker002:] Yeah. [speaker003:] OK. [speaker001:] And there is also the idea of using TRAPS, maybe, for the VAD, which, um [disfmarker] [speaker004:] Yeah, that's also [disfmarker] [speaker001:] Well, Pratibha apparently showed, when she was at IBM, that it's a good idea. So. [speaker003:] Would [disfmarker] would that fit on the handset, or [disfmarker]? Oh! [speaker001:] I have no idea. [speaker003:] OK.
[speaker004:] Well, it has t I mean the [disfmarker] th [speaker001:] It would have to fit but [disfmarker] [speaker004:] Yeah, if it has to fit the delays and all this stuff. [speaker001:] Yeah. [speaker002:] Well, there's the delays and the storage, [speaker003:] OK. [speaker002:] yeah. [speaker004:] Yeah. [speaker002:] But I don't think the storage is so big for that. [speaker003:] Right. [speaker002:] I think th the biggest we've run into for storage is the neural net. [speaker004:] Yeah. [speaker002:] Right? Yeah. Um. And so I guess the issue there is, are we [disfmarker] are we using neural net based TRAPS, and [disfmarker] and how big are they? [speaker003:] Oh, right. [speaker002:] So that'll [disfmarker] that'll be, you know, an issue. [speaker003:] Yeah. Cuz sh [speaker002:] Maybe they can be little ones. [speaker003:] Right. Cuz she also does the, uh [disfmarker] the correlation based, uh, TRAPS, with without the neural net, just looking at the correlation between [disfmarker] [speaker002:] Mini TRAPS. Right. And maybe for VAD they would be OK. Yeah. [speaker003:] Yeah. [speaker002:] Yeah. [speaker004:] Yeah. [speaker002:] That's true. Or a simple neural net, right? I mean, the thing is, if you're doing correlation, you're just doing a simple [disfmarker] uh, uh [disfmarker] uh, dot product, you know, with some weights which you happened to learn from this [disfmarker] learn from the data. [speaker003:] Mm hmm. [speaker002:] And so, uh, putting a nonlinearity on it is, [pause] you know, not that big a deal. [speaker003:] Mm hmm. [speaker002:] It certainly doesn't take much space. [speaker003:] Right. [speaker002:] So, uh, the question is, how complex a function do you need? Do you need to have an added layer or something? In which case, uh, potentially, you know, it could be big. So. [speaker003:] Mm hmm. [speaker002:] So, uh, uh [disfmarker] So what's next? Maybe s s remind us. 
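[The point made here about correlation-based TRAPS can be sketched as follows: the classifier is just a dot product between a normalized temporal trajectory in one critical band and a template learned from data, and putting a sigmoid nonlinearity on top adds almost no storage or computation. The function and the normalization details are illustrative assumptions, not the actual correlation-TRAP setup.]

```python
import numpy as np

def trap_score(trajectory, template, nonlinear=True):
    """Correlation-style TRAP score: normalized dot product between a
    temporal trajectory (e.g. one second of one critical band) and a
    learned template, optionally passed through a sigmoid."""
    def normalize(v):
        return (v - v.mean()) / (v.std() + 1e-8)
    # Plain correlation: a dot product with learned weights
    score = np.dot(normalize(trajectory), normalize(template)) / len(trajectory)
    if nonlinear:
        # The "nonlinearity on top": a sigmoid, essentially free to add
        score = 1.0 / (1.0 + np.exp(-score))
    return score
```

[Matching a trajectory against itself gives a correlation of 1, so the linear score is bounded and cheap; an extra hidden layer, as the speakers note, is what would start to cost storage.]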
[speaker005:] So the meeting with Hynek that you guys just had was to decide exactly what you were gonna freeze in this system? Is that [disfmarker]? Or was there [disfmarker]? Were you talking about what t new stuff, or [disfmarker]? [speaker002:] What to freeze and then what to do after we froze. [speaker005:] Mmm. [speaker002:] Yeah. And like I was saying, I think the [disfmarker] you know, the basic directions are, uh, uh [disfmarker] I mean, there's lots of little things, such as improve the noise estimator but the bigger things are adding on the neural net and, uh, the second stream. And then, uh, improving the VAD. Uh. [speaker004:] So, I'll, um [disfmarker] I'll actually [disfmarker] after the meeting I'll add the second stream to the VAD and maybe I'll start with the feature net in that case. [speaker002:] So. [speaker004:] It's like, you're looking at the VAD, right? [speaker001:] Uh, yeah. [speaker004:] I'll [disfmarker] [speaker001:] I I've a new feature net ready also. [speaker004:] For the VAD? [speaker001:] No, uh. Well p two network, one VAD and one [pause] feature net. [speaker004:] Oh, you already have it? [speaker001:] Mm hmm. [speaker004:] OK, so just figure how to take the features from the final [disfmarker] [speaker001:] Yeah. [speaker004:] OK. [speaker001:] Um. But, yeah, I think there are plenty of issues to work on for the feature net [@ @]. [speaker005:] What about the, um [disfmarker] uh, the new part of the evaluation, [speaker003:] Feature net. [speaker005:] the, uh, Wall Street Journal part? [speaker002:] Right. Right. Um. Have you ever [disfmarker]? Very good question. Have you ever worked with the Mississippi State h uh, software? [speaker001:] Sorry. [speaker005:] No. Not yet. [speaker002:] Oh. Well you [disfmarker] you may be called upon to help, uh, uh, on account of, uh, all the work in this stuff here has been, uh, with small vocabulary. [speaker005:] OK. Mm hmm. 
So what [disfmarker] how is the, uh, interaction supposed to happen? Uh, I remember the last time we talked about this, it was sort of up in the air whether they were going to be taking, uh, people's features and then running them or they were gonna give the system out or [disfmarker] [speaker004:] Yeah. Yeah. [speaker005:] Oh, so they're gonna just deliver a system basically. [speaker004:] Yeah, yeah. [speaker002:] Do we already have it? [speaker004:] Yeah, th I [disfmarker] I guess it's almost ready. [speaker005:] Uh huh. [speaker004:] So [disfmarker] That's what [disfmarker] So they have released their, uh, document, describing the system. [speaker005:] I see. [speaker002:] Maybe you could, uh, point it [pause] at Chuck, [speaker004:] Sure. [speaker002:] because, I mean [disfmarker] [speaker005:] So we'll have to grab this over CVS or something? [speaker004:] It no, it's just downloadable from their [disfmarker] from their web site. [speaker005:] Is that how they do it? OK. OK. [speaker002:] Cuz one of the things that might be helpful, if you've [disfmarker] if you've got time in all of this is, is if [disfmarker] if these guys are really focusing on improving, uh, all the digit stuff, uh, maybe [disfmarker] and you got the front end from them, maybe you could do the runs for the [disfmarker] [speaker005:] Mm hmm. Sure. [speaker002:] and [disfmarker] and, you know, iron out hassles that [disfmarker] that you have to, uh, tweak Joe about or whatever, because you're more experienced with running the large vocabulary stuff. [speaker005:] OK. [speaker004:] So I'll point you to the web site and the mails corresponding. [speaker002:] S [speaker004:] So I [speaker005:] And it [disfmarker] but it's not ready yet, the system? [speaker004:] Uh, I [disfmarker] I think they are still, uh, tuning something on that. So they're like, d they're varying different parameters like the insertion penalty and other stuff, and then seeing what's the performance. 
[speaker005:] Are those going to be parameters that are frozen, nobody can change? Or [disfmarker]? [speaker004:] Uh, w I guess there is, uh, time during which people are gonna make suggestions. [speaker005:] Oh, but everybody's gonna have to use the same values. [speaker004:] After that. [speaker005:] Oh! Interesting. OK. [speaker004:] Yeah, I guess. So these sugges these [disfmarker] this, uh, period during which people are gonna make suggestions is to know whether it is actually biased towards any set of features or [disfmarker] [speaker002:] Yeah, so I th th certainly the thing that I would want to know about is whether we get really hurt, uh, on in insertion penalty, language model, scaling, sorts of things. [speaker005:] Using our features. [speaker002:] Yeah, [speaker005:] Yeah. [speaker002:] yeah. Uh, in which case, um, H Hari or Hynek will need to, you know, push the case [pause] more about [disfmarker] about this. [speaker005:] Mm hmm. [speaker002:] Um. [speaker005:] And we may be able to revisit this idea about, you know, somehow modifying our features to work with [disfmarker] [speaker002:] Yes. In this case, that's right. [speaker005:] Yeah. [speaker002:] That's right. Um, some of that may be, uh, a last minute rush thing because if the [disfmarker] if our features are changing [disfmarker] [speaker005:] Yeah. [speaker002:] Uh. Uh. But, um. Yeah, the other thing is that even though it's months away, uh, it's starting to seem to me now like November fifteenth is right around the corner. And, um, if they haven't decided things like this, like what the parameters are gonna be for this, uh, when "deciding" is not just somebody deciding. I mean, in fact there should be some understanding behind the, uh, [vocalsound] deciding, which means some experiments and [disfmarker] and so forth. It [disfmarker] it [disfmarker] it seems pretty tight to me. [speaker005:] So wha what's the significance of November fifteenth? [speaker002:] That's when the evaluation is. 
[speaker005:] OK. [speaker002:] Yeah. So, yeah, so after [disfmarker] But, you know, they may even decide in the end to push it off. It wouldn't, you know, entirely surprise me. But, uh, due to other reasons, like some people are going away, I'm [disfmarker] I'm hoping it's not pushed off for [vocalsound] a l a long while. That would be, uh [disfmarker] put us in an awkward position. But [disfmarker] Anyway. [speaker005:] OK. [speaker002:] Great. Yeah, I think that'll be helpful. There's [disfmarker] there's not anybody OGI currently who's [disfmarker] who's, uh, working with this and [disfmarker] and [speaker005:] Is [disfmarker] is this part of the evaluation just a small part, or ho how important is this to the overall [disfmarker]? [speaker002:] I [disfmarker] I think it's [disfmarker] it's, um [disfmarker] it depends how badly [vocalsound] you do. I mean, I think that it [disfmarker] it is [disfmarker] Uh. [speaker004:] b [speaker005:] This is one of those things that will be debated afterwards? [speaker002:] Yeah. Well, I mean, it's [disfmarker] it's [disfmarker] Conceptually, it [disfmarker] my impression, again, you guys correct me if I'm wrong, but [pause] my impression is that, um, they want it as a double check. That you haven't come across [disfmarker] you haven't invented features which are actually gonna do badly for a [disfmarker] a significantly different task, particularly one with larger vocabulary. [speaker005:] Mmm. [speaker002:] And, um, but it's not the main emphasis. I mean, the truth is, most of the applications they're looking at are pretty small vocabulary. [speaker005:] Mmm. [speaker002:] So it's [disfmarker] it's a double check. So they'll probably assign it some sort of low weight. [speaker005:] Seems to me that if it's a double check, they should give you a one or a zero. Y you passed the threshold or you didn't pass the threshold, and they shouldn't even care about what the score is. [speaker002:] Yeah. 
But, I mean, we'll [disfmarker] we'll [disfmarker] we'll see what they come up with. [speaker005:] Yeah. [speaker002:] Uh, but in [disfmarker] in the current thing, for instance, where you have this well matched, moderately matched, and [disfmarker] and mis highly mismatched, uh, the emphasis is somewhat on the [disfmarker] on the well matched, but it's only a [disfmarker] a marginal, right? It's like forty, thirty five, twenty five, or something like that. [speaker004:] Yeah. [speaker002:] So you still [disfmarker] if you were way, way off on the highly mismatched, it would have a big effect. [speaker005:] Mm hmm. [speaker002:] And, um, it wouldn't surprise me if they did something like that with this. So again, if you're [disfmarker] if you get [disfmarker] If it doesn't help you much, uh, for noisy versions of this [disfmarker] of large vocabulary data, then, uh, you know, it may not hurt you that much. [speaker005:] Oh. [speaker002:] But if it [disfmarker] if you don't [disfmarker] if it doesn't help you much at all, um, or to put it another way, if it helps some people a lot more than it helps other people, uh, if their strategies do, then [disfmarker] [speaker005:] Mm hmm. So is this, uh [disfmarker]? Uh, Guenter was putting a bunch of Wall Street Journal data on our disks. [speaker002:] That's it. [speaker005:] So that's the data that we'll be running on? [speaker002:] Yeah. [speaker005:] I see. OK. [speaker002:] Yeah. So [pause] we have the data, just not the recognizer. OK. [speaker005:] So this test may take quite a while to run, then. May judging by the amount of data that he was putting on. [speaker002:] Uh, well there's training and test, right? [speaker005:] I [disfmarker] I guess, I'm not sure. I just [disfmarker] [speaker002:] No, I mean, if it's like the other things, there's [disfmarker] there's data for training the H M Ms and [disfmarker] and data for testing it. So I wouldn't [disfmarker] So it [disfmarker] it's [disfmarker] [speaker005:] OK. 
So there's [disfmarker] [speaker002:] So, training the recognizer, but, um Um. But I think it's trained on clean and [disfmarker] Is it trained on clean and [disfmarker] and test on [disfmarker]? [speaker004:] The Wall Street? [speaker002:] Yeah. [speaker001:] Apparently, no. It's training on a range between ten and twenty DB, I think, and testing between five and fifteen. [speaker004:] Mm hmm. Yeah. [speaker001:] That's what I got [pause] on [disfmarker] [speaker002:] OK. [speaker004:] It's, uh [disfmarker] It's like a medium [disfmarker] medium mismatch condition, sort of. [speaker002:] I see. [speaker001:] Yeah, and [disfmarker] So the noise is [disfmarker] There is a range of different noises also [disfmarker] um [disfmarker] which are selected randomly and added randomly, uh, to the files. And there are noises that are different from the noises used [pause] on TI digits. [speaker002:] Yeah. Yeah. I mean, I wouldn't imagine that the amount of testing data was that huge. They probably put training [disfmarker] uh, almost certain they put training data there too. Maybe not. So. That's that. Anybody have anything else? [speaker005:] Uh, one [disfmarker] one last question on that. When did they estimate that they would have that system available for download? [speaker004:] Um, I guess [disfmarker] I guess one [disfmarker] some preliminary version is already there. [speaker005:] Oh, so there's w something you can download to just learn? [speaker004:] Yeah, it's already there. [speaker005:] OK, [speaker004:] Yeah. [speaker005:] good. [speaker004:] But they're actually parallelly doing some modifications also, I think. [speaker005:] OK. [speaker004:] So I guess the f final system will be frozen by middle of, like, one more week maybe. [speaker002:] Oh, well that's pretty soon. [speaker004:] Yeah, that's just one more. [speaker003:] Is this their, um, SVM recognizer? [speaker004:] No, it's just a straightforward HMM. [speaker003:] Oh, OK.
[speaker002:] You know, their [disfmarker] their [disfmarker] They have a lot of options [pause] in their recognizer and [disfmarker] and the SVM is one of the things they've done with it, but it's not their more standard thing. [speaker003:] Uh huh. [speaker002:] For the most part it's [disfmarker] it's Gaussian mixtures. [speaker003:] Oh, OK. Oh, OK. [speaker002:] Yeah. [speaker004:] It's just a HMM, Gaussian mixture model. [speaker003:] Gaussian mixture HMM. [speaker002:] Yeah. [speaker003:] OK. [speaker002:] Yeah, the SVM thing was an HMM also. It was just a [disfmarker] it [disfmarker] it [disfmarker] it was like a hybrid, like [disfmarker] [speaker004:] Yeah, this is a g [speaker003:] Mm hmm. [speaker004:] yeah, this i yeah. [speaker002:] what? Yeah. [speaker005:] So, just so that I understand, they're providing scripts and everything so that basically, uh, you [disfmarker] you push a button and it does training, and then it does test, and everything? Is that [pause] the idea? [speaker004:] I [disfmarker] I [disfmarker] I think [disfmarker] yeah, I [disfmarker] I guess something like that. [speaker005:] Mm hmm. [speaker004:] It's like [vocalsound] [disfmarker] as painless as possible, is what [disfmarker] Do they provide all the scripts, everything, and then [disfmarker] Just, [speaker005:] I see. Hmm. Somehow yo there's hooks to put your features in and [disfmarker] [speaker004:] ju Yeah, I th I think. [speaker005:] Hmm. [speaker002:] Hmm. Yeah, um. In fact, I mean, if you look into it a little bit, it might be reasonable [disfmarker] You know Joe, right? [speaker005:] Mm hmm. [speaker002:] Yeah. Just to sort of ask him about the issue of, um, different features having different kinds of, uh, scaling characteristics and so on. So that, you know, w w possibly having entirely different optimal values for [disfmarker] for the usual twiddle factors and what's [disfmarker] what's the plan about that? [speaker005:] OK. 
[speaker004:] So sh shall we, like, add Chuck also to the mailing lists? It may be better, I mean, in that case if he's going to [disfmarker] [speaker002:] Yeah. [speaker004:] Because there's a mailing list for this. [speaker002:] Is that OK? [speaker005:] Yeah, that'd be great. [speaker004:] Yeah, I guess maybe Hari or Hynek, one of them, has to [pause] send a mail to Joe. Or maybe if you [disfmarker] [speaker005:] I [disfmarker] I could send him an email. [speaker004:] Well, yeah, to add or maybe wh [speaker005:] I [disfmarker] I know him really well. I [disfmarker] I was just talking with him on email the other day actually. [speaker004:] Yeah, so that's just fine. So [disfmarker] So [disfmarker] [speaker002:] Uh, yeah, and just, um, se maybe see. [speaker005:] About other things, but. [speaker002:] Do you have Hari's, uh [disfmarker]? [speaker005:] I have Hari's [disfmarker] [speaker002:] Yeah, so maybe just CC Hari and say that you've just been asked to handle the large vocabulary part here, [speaker005:] OK. [speaker002:] and, uh, you know, [speaker005:] Would it be better if I asked Hari to ask Joe? [speaker002:] Uh. Why don't you just ask Joe but CC Hari, and then in the note say, "Hari, hopefully this is OK with you". [speaker005:] OK. OK. [speaker002:] And then if Joe feels like he needs a confirmation, Hari can answer it. [speaker004:] Yeah. [speaker002:] That way you can get started asking [comment] Joe quickly while he's [disfmarker] while he's maybe still, you know, putting in nails and screws and Yeah. [speaker004:] And there is an, uh, archive of all the mails that has been [vocalsound] gon that has gone, uh, between these people [disfmarker] among these people. So just you can see all this [pause] mails in the ISIP web site [disfmarker] [speaker005:] OK. OK. [speaker004:] Mississippi web site. [speaker005:] Is that a password controlled [disfmarker]? [speaker004:] Yeah, it's password protected. [speaker005:] OK. 
[speaker004:] So, like [disfmarker] like, it's, like [disfmarker] [speaker002:] Have you thought about [pause] how long [pause] would be uh, most useful for you to go up to OGI? [speaker001:] I don't know, uh. We can [disfmarker] [vocalsound] For September, we can set up a work schedule and we can maybe work independently. And then at some point it maybe be better to work together again. [speaker002:] Oh, so you're [disfmarker] you're imagining more that you would come back here first for a while and then [disfmarker] and then go up there? [speaker001:] I [disfmarker] [speaker002:] I mean, it's to you. [speaker001:] Maybe, yeah. [speaker002:] I ju you guys are Well, y anyway, you don't have to decide this second but thi think about it [disfmarker] about what [disfmarker] what you would think would be the [disfmarker] the best way to work it. I'll [speaker001:] But, uh [pause] Huh. Mm hmm. [speaker002:] support it either way, so. [speaker001:] Mm hmm Right. [speaker002:] OK. Uh. Got anything to tell us? [speaker003:] Um. Well, I've been reading some literature about clustering of data. Just, um, I guess, let me put it in context. OK, so we're talking about discovering intermediate categories to, um [disfmarker] to classify. And, uh, I was looking at some of the work that, uh, Sangita was doing on these TRAPS things. So she has, um [disfmarker] she has temporal patterns for, um, a certain set of phonemes, from [disfmarker] from TIMIT, right? the most common phonemes. And each one of them has [disfmarker] has a [disfmarker] a nice pattern over time, a one [disfmarker] one second window. And it has [disfmarker] has these patterns. Um, so she has, um a TRAP for each one of the phonemes, um, times fifteen, for each of the fifteen critical bands. 
And, um, [vocalsound] she does this agglomerative hierarchical clustering which [disfmarker] which basically, um, is a clustering algorithm that, uh, starts with many, many, many different points [disfmarker] many different clusters [disfmarker] uh, corresponding to the number of data, uh, patterns that you have in the data. And then you have this distance mej metric which, uh, measures how [disfmarker] how closely related they are. And you start, um [vocalsound] by merging the patterns that are most closely related. [speaker005:] And you create a tree. [speaker003:] And y yeah, yeah, a dendrogram tree. [speaker005:] Mm hmm. [speaker003:] Um. [speaker005:] And then you can pick, uh, values anywhere along that tree to fix your set of clusters. [speaker003:] Right, usually it's when, um [disfmarker] when the sol similarity measures, um, don't go down as much. [speaker005:] Mm hmm. [speaker003:] And so, uh [disfmarker] so you stop at that point. And what she found was, sh um, was there were five broad, um [disfmarker] broad categories, uh, corresponding to, uh, things like, uh, fricatives and, uh, vocalic, um, and, uh, stops. [speaker002:] Mm hmm. [speaker003:] And, uh, one for silence and [disfmarker] and another one for schwa [disfmarker] schwa sounds. Um, and, um, I was thinking about ways to [disfmarker] to generalize this because w you're [disfmarker] it's sort of like a [disfmarker] it's not a completely automatic way of clustering, because yo beforehand you have these [disfmarker] these TRAPS and you're saying that [disfmarker] that these frames correspond to this particular phoneme. Um, and that's [disfmarker] that's constraining your [disfmarker] your clustering to [disfmarker] to the set of phonemes that you already have. Um, whereas maybe we want to just take [disfmarker] take a look at, um, arbitrary windows in time, um, of varying length, um, and cluster those. [speaker005:] Mm hmm. Mm hmm. 
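[Editor's note: the bottom-up clustering procedure described in this exchange can be sketched in a few lines. This is a toy single-linkage version on scalar "patterns" with made-up numbers; Sangita's actual TRAP clustering operates on one-second temporal patterns with its own similarity measure and builds a full dendrogram rather than stopping at a threshold.]

```python
# Toy agglomerative (bottom-up) clustering: start with one cluster per
# pattern, repeatedly merge the two closest clusters, and stop when the
# closest remaining pair is no longer close, i.e. when the similarity
# measure "doesn't go down as much".

def single_link_distance(a, b):
    """Cluster distance = distance between the clusters' closest members."""
    return min(abs(x - y) for x in a for y in b)

def agglomerate(points, stop_distance):
    clusters = [[p] for p in points]   # every pattern starts as its own cluster
    while len(clusters) > 1:
        # find the closest pair of clusters
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: single_link_distance(clusters[ij[0]], clusters[ij[1]]),
        )
        if single_link_distance(clusters[i], clusters[j]) > stop_distance:
            break   # similarity has jumped: stop and keep current clusters
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

print(agglomerate([1.0, 1.1, 1.2, 5.0, 5.1, 9.0], stop_distance=1.0))
```

Recording the merge order instead of stopping early is what yields the dendrogram tree mentioned above.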
[speaker003:] And I'm thinking if we [disfmarker] if we do that, then we would probably, um, at some point in the clustering algorithm find that we've clustered things like, OK, thi this is a transition, um, this is a relatively stable [disfmarker] stable point. Um, and I'm hoping to find other things of [disfmarker] of similarity and maybe use these things as the intermediate, um [disfmarker] intermediate categories that, uh, um, I'll later classify. [speaker002:] Are you looking at these in narrow bands? [speaker005:] Mm hmm. [speaker003:] Um, right. F um, I'm [disfmarker] [speaker002:] Cuz that's what you're gonna be using, right? [speaker003:] Yeah, yeah. I [disfmarker] I haven't exactly figured out, um, the exact details for that but, uh, the [disfmarker] the representation of the data that I was thinking of, was using, um, critical band, um, energies, [vocalsound] um, over different lengths of time. So [disfmarker] Yeah. [speaker002:] Yeah, I mean, it seems somehow that needs th uh, there's a couple things that I wonder about with this. [speaker003:] OK. [speaker002:] I mean, so one is [disfmarker] is, [pause] again, looking at the same representation, I mean, if you're going for this sort of thing where you have [pause] uh, little detectors that are looking at narrow bands, then what you're going to be looking for should be some category that you can find with the narrow bands. [speaker003:] Mm hmm. [speaker002:] That [disfmarker] that seems to be kind of fundamental to it. [speaker003:] Right. [speaker002:] Um, and then the other thing, uh, is [disfmarker] that I wonder about with it, and [disfmarker] and don't take this in the wrong way, like I [disfmarker] I know what I'm doing or anything, but, I mean. [vocalsound] Um, just wondering really. [speaker003:] Mm hmm. 
[speaker002:] Um, the sort of standard answer about this sort of thing is that if you're trying to find [pause] the right system in some sense, whether you're trying by categories or [disfmarker] or parameters [pause] um, and your goal is discrimination, then having choices based on discrimination as opposed to, um, unsupervised nearness of things, um, is actually better. [speaker003:] Hmm. [speaker002:] Um, and I don't know if that [disfmarker] I mean, since you're dealing with issues of robustness, you know, maybe [disfmarker] maybe this isn't right, but it'd be something I'd be concerned about. Because, for instance, you can imagine, uh, uh, i i if you remember from [disfmarker] from, uh [disfmarker] from your [disfmarker] your quals, John Ohala saying that, uh, "buh" [comment] and "puh" [comment] differed, uh, not really cuz of voicing but because of aspiration. [speaker003:] Mm hmm. [speaker002:] I mean, in as far as wha what's really there in the acoustics. So, um, if you looked [disfmarker] if you were doing some coarse clustering, you probably would put those two sounds together. And yet, I would gue I would guess that many of your recognition errors were coming from, uh, um, pfft, [comment] screwing up on this distinction. [speaker003:] Mm hmm. [speaker002:] So, in fact, it's a little hard because recognizers, to first order, sort of work. And the reasons we're doing the things we're doing is because they don't work as well as we'd like. And since they sort of work, uh, it means that they are already doing [disfmarker] if you go and take any recognizer that's already out there and you say, "how well is it distinguishing between [pause] schwas and stops?" [speaker003:] Mm hmm. [speaker002:] Boy, I bet they're all doing nearly perfectly on this, [speaker003:] Mm hmm. [speaker002:] right? So these [disfmarker] these big categories that differ in huge obvious ways, we already know how to do. So, what are we bringing to the party? 
I mean, in fact what we wanna do is have something that, particularly in the presence of noise, uh, is better at distinguishing between, uh, categories that are actually close to one another, and hence, would probably be clustered together. [speaker003:] Mmm. [speaker002:] So that's th that's the hard thing. I mean, I understand that there's this other constraint that you're considering, is that you wanna have categories that, uh [disfmarker] that would be straightforward for, say, a human being to mark if you had manual annotation. And it's something that you really think you can pick up. But I think it's also essential that you wanna look at what are the [vocalsound] confusions that you're making and how can you come up with, uh, categories that, uh, can clarify these confusions. [speaker003:] Mm hmm. Hmm. [speaker002:] So, I mean, the standard sort of way of doing that is take a look at the algorithms you're looking at, but then throw in some discriminative aspect to it. Y y this is more like, you know, how does LDA differ from PCA? I mean, they're the same sort of thing. [speaker003:] Right. [speaker002:] They're both orthogonalizing. But, you know [disfmarker] and [disfmarker] and, um, this is a little harder because you're not just trying to find parameters. You're actually trying to find the [disfmarker] the [disfmarker] the [disfmarker] the categories themselves. Uh, so a little more like brain surgery, I think on yourself. Uh. [speaker003:] Yeah. [speaker002:] So, uh Um, anyway. That's my [pause] thought. [speaker003:] OK. [speaker002:] You've been thinking about this problem for a long time actually. I mean, well [disfmarker] W actually, you stopped thinking about it for a long time, but you used to think about it [vocalsound] a lot. [speaker005:] Yeah. [speaker002:] And you've been thinking about it more now, [speaker004:] Yeah. [speaker005:] Yeah. [speaker002:] these categories. [speaker005:] I guess [disfmarker] [speaker002:] Mm hmm. 
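[Editor's note: the LDA-versus-PCA point, that a discriminative criterion uses class labels where unsupervised nearness does not, can be illustrated with a toy Fisher-style ratio per feature. The numbers below are invented purely for illustration: feature 0 has large overall variance but the two classes overlap (so an unsupervised criterion would favor it), while feature 1 has small variance but cleanly separates the classes, loosely analogous to aspiration distinguishing /b/ from /p/.]

```python
# Fisher-style discriminative criterion: between-class spread divided by
# within-class spread. Unlike total variance (the PCA-style criterion),
# it rewards the feature that actually separates the two classes.

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def fisher_ratio(class_a, class_b):
    between = (mean(class_a) - mean(class_b)) ** 2
    within = var(class_a) + var(class_b)
    return between / within

# feature 0: big variance, overlapping classes
b_f0, p_f0 = [0.0, 4.0, 8.0], [1.0, 5.0, 9.0]
# feature 1: small variance, cleanly separated classes
b_f1, p_f1 = [0.0, 0.1, 0.2], [1.0, 1.1, 1.2]

print(fisher_ratio(b_f0, p_f0))   # small ratio: barely discriminates
print(fisher_ratio(b_f1, p_f1))   # large ratio: discriminates well
```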
[speaker005:] I don't [disfmarker] I don't [disfmarker] um, it's not clear to me how to reconcile, you know, what you're saying, which I think is right, with [pause] the way I've been looking at it. That it's [disfmarker] it's [disfmarker] it's all not very clear to me. But it seems to me that the desire [disfmarker] the desirable feature to have is something that, um, is bottom up. You know, however we do that. [speaker002:] Mm hmm. [speaker005:] And and so I guess what I don't understand is how to do that and still be discriminative, because to be discriminative you have to have categories and the only categories that we know of to use are sort of these human [disfmarker] human sig significant [disfmarker] categories that are significant to humans, like phonemes, things like that. [speaker002:] Right. [speaker005:] But that's sort of what you want to avoid. And so that feels [disfmarker] I don't know how to get out of this. [speaker002:] Well, here's a [disfmarker] here's a, uh, uh Here's a generic and possibly useless thought, which is, [vocalsound] um, what do you really [disfmarker] I mean, in a sense the only s s systems that make sense, uh, are ones that [disfmarker] that have something from top down in th in them. Right? Because if e even the smallest organism that's trying to learn to do anything, if it doesn't have any kind of reward for doing [disfmarker] or penal penalty for doing anything, then it's just going to behave randomly. [speaker005:] Mm hmm. [speaker002:] So whether you're talking about something being learned through evolution or being learned through experience, it's gotta have something come down to it that gives its reward or, you know, at least some reinforcement learning, [speaker005:] Right. So the question is, how far down? [speaker002:] right? And [speaker005:] We could stop at words, but we don't, right? We go all the way down to phonemes. 
[speaker002:] Right, but I me I [disfmarker] I think that maybe in some ways part of the difficulty is [disfmarker] is trying to deal with the [disfmarker] with these phonemes. [speaker005:] Mm hmm. [speaker002:] You know, and [disfmarker] and [disfmarker] and i it's almost like you want categories if [disfmarker] if our [disfmarker] if our, uh, um, [vocalsound] metric of [disfmarker] of goodness, uh, i if our [disfmarker] correction [disfmarker] if our metric of badness [vocalsound] is word error rate then, um, maybe we should be looking at words. [speaker005:] Mm hmm. [speaker002:] I mean, for [disfmarker] for [disfmarker] for very nice, uh, reasons we've looked for a while at syllables, and they have a lot of good properties, but i i i if you go all the way to words, I mean, that's really [disfmarker] I mean, d w In many applications you wanna go further. You wanna go to concepts or something, or have [disfmarker] have [disfmarker] have concepts, actions, this sort of thing. [speaker005:] Yeah. But words would be a nice [disfmarker] [speaker002:] But, words aren't bad, yeah. And [disfmarker] and [speaker005:] Yeah, so the common [disfmarker] right, the common wisdom is you can't do words because there's too many of them, right? So you have to have some smaller set that you can use, uh, and [disfmarker] and so everybody goes to phonemes. But the problem is that we [disfmarker] we build models of words in terms of phonemes and these models are [disfmarker] are really cartoon ish, right? So when you look at conversational speech, for example, you don't see the phonemes that you [disfmarker] that you have in your word models. [speaker002:] Yeah. But [disfmarker] but [disfmarker] but we're not trying for models of words here. 
See, so her here's maybe where [disfmarker] If the issue is that we're trying to come up with, um, some sort of intermediate categories which will then be useful for later stuff, uh, then [pause] maybe it doesn't matter that we can't have enough [disfmarker] [speaker005:] Mm hmm. Mm hmm. [speaker002:] I mean, what you wanna do is [disfmarker] is build up these categories that are [disfmarker] that are best for word recognition. [speaker005:] Right. Right. [speaker002:] And [disfmarker] and somehow if that's built into the loop of what the categories [disfmarker] I mean, we do this every day in this very gross way of [disfmarker] of running o a thousand experiments [speaker005:] Right. [speaker002:] because we have fast computers and picking the thing that has the best word error rate. [speaker005:] Yeah. [speaker002:] In some way [disfmarker] I mean, we derive that all the time. In some ways it's really not [comment] a bad [disfmarker] bad thing to do because it tells you in fact how your adjustments at the very low level affect the [disfmarker] the final goal. [speaker005:] Mm hmm. Mm hmm. [speaker002:] Um, so maybe there's a way to even put that in in a much more automatic way, [speaker005:] Right. [speaker002:] where you take, you know, something about the error at the level of the word or some other [disfmarker] it could be syllable [disfmarker] but in some large unit, [speaker005:] Uh huh. [speaker002:] uh, and uh [disfmarker] yeah, you may not have word models, you have phone models, whatever, but you sort of [pause] don't worry about that, and just somehow feed it back through. [speaker005:] Mm hmm. [speaker002:] You know, so that's, uh, wh what I called a useless comments because I'm not really telling you how to do it. 
But I mean, it's a [disfmarker] [vocalsound] it's [disfmarker] it's, you know [disfmarker] it [speaker005:] No, but I think the important part in there is that, you know, if you want to be discriminative, you have to have uh, you know, categories. [speaker002:] Right. [speaker005:] And I think this [disfmarker] the important categories are the words, and [pause] not the phones. [speaker002:] Yeah. Yeah. [speaker005:] Maybe. And so [disfmarker] Right. If you can put the words in to the loop somehow for determining goodness of your sets of clusters [disfmarker] Uh [disfmarker] [speaker002:] Now, that being said, I think that [disfmarker] that if you have something that is, um [disfmarker] i Once you start dealing with spontaneous speech, all the things you're saying are [disfmarker] are really true. [speaker005:] Mm hmm. [speaker002:] If you [pause] have read speech that's been manually annotated, like TIMIT, then, you know, i i you the phones are gonna be right, actually, [vocalsound] for the most part. [speaker005:] Yeah. Yeah, yeah. [speaker002:] So [disfmarker] so, uh, it doesn't really hurt them to [disfmarker] to do that, to put in discrimination at that level. Um, if you go to spontaneous speech then it's [disfmarker] it's trickier and [disfmarker] and [disfmarker] and, uh, the phones are [disfmarker] uh, you know, it's gonna be based on bad pronunciation models that you have of [disfmarker] [speaker005:] Mmm. 
[speaker002:] and, um [disfmarker] And it won't allow for the overlapping phenomenon [speaker005:] So it's almost like there's this mechanism that we have that, you know, when [disfmarker] when we're hearing read speech and all the phonemes are there you know, we [disfmarker] we deal with that, but [disfmarker] but when we go to conversational, and then all of a sudden not all the phonemes are there, it doesn't really matter that much to us as humans because we have some kind of mechanism that allows for these word models, whatever those models are, to be [pause] munged, you know, and [disfmarker] and it doesn't really hurt, and I'm not sure how [disfmarker] [vocalsound] how to build that in. Uh. [speaker002:] Yeah, I mean, I guess the other thing i is [disfmarker] is to think of a little bit [disfmarker] I mean, we when y when you start looking at these kind of results I think it usually is [disfmarker] is pretty intuitive, but start looking at um, what are the kinds of confusions that you do make, uh, you know, between words if you want or [disfmarker] or [disfmarker] or, uh, even phones in [disfmarker] in [disfmarker] in [disfmarker] in read speech, say, uh, when there is noise. You know, so is it more across place or more across manner? [speaker003:] Mm hmm. [speaker002:] Or is it cor you know, is it [disfmarker]? I mean, I know one thing that happens is that you [disfmarker] you [disfmarker] you, uh, you lose the, um, uh, low energy phones. I mean, if there's added noise then low energy phones [vocalsound] sometimes don't get heard. And if that [disfmarker] if that is [disfmarker] if it [disfmarker] uh, if that turns it into another word or [disfmarker] or different [disfmarker] you know, or another pair of words or something, then it's more likely to happen. But, um, I don't know, I w I would [disfmarker] I would guess that you'd [disfmarker] [speaker003:] Mm hmm. [speaker002:] W I don't know. 
Anyway, that's [disfmarker] [speaker005:] I think part of the difficulty is that a l a lot of the robustness that we have is probably coming from a much higher level. You know, we understand the context of the situation when we're having a conversation. [speaker002:] Mm hmm. [speaker005:] And so if there's noise in there, you know, our brain fills in and imagines what [disfmarker] what should be there. [speaker002:] Well that [disfmarker] [speaker003:] Yeah. We're [disfmarker] we're doing some sort of prediction of what [disfmarker] [speaker005:] Yeah, exactly. [speaker002:] Oh, sure, that's really big. [speaker003:] Yeah. [speaker002:] Uh, but I mean, even if you do um, uh, diagnostic rhyme test kind of things, you know, where there really isn't an any information like that, uh, people are still better in noise than they [disfmarker] than they are in [disfmarker] in, uh [disfmarker] uh, than the machines are. [speaker005:] Hmm. [speaker002:] So, I mean, that's [disfmarker] i Right. We can't [disfmarker] we can't get it at all without any language models. Language models are there and important but [disfmarker] but, uh [disfmarker] Uh. If we're not working on that then [vocalsound] we should work on something else and improve it, but [disfmarker] especially if it looks like the potential is there. So [disfmarker] Should we do some digits? [speaker005:] Yeah. [speaker002:] Since we're here? [speaker005:] Go ahead, Morgan. [speaker002:] OK. [speaker005:] OK. [speaker002:] That's all folks. [speaker005:] Hello? [speaker002:] OK. We're on. [speaker001:] OK, so uh [vocalsound] had some interesting mail from uh Dan Ellis. 
Actually, I think he [disfmarker] he [vocalsound] redirected it to everybody also so uh [vocalsound] the PDA mikes uh have a big bunch of energy at [disfmarker] at uh five hertz uh where this came up was that uh I was showing off these wave forms that we have on the web and [disfmarker] and uh [vocalsound] I just sort of hadn't noticed this, but that [disfmarker] the major, major component in the wave [disfmarker] in the second wave form in that pair of wave forms is actually the air conditioner. [speaker003:] Huh. [speaker001:] So. So. I [vocalsound] [vocalsound] I have to be more careful about using that as a [disfmarker] as a [disfmarker] [vocalsound] as a good illustration, uh, in fact it's not, of uh [disfmarker] [vocalsound] of the effects of room reverberation. It is isn't a bad illustration of the effects of uh room noise. [vocalsound] on [disfmarker] on uh some mikes uh but So. And then we had this other discussion about um [vocalsound] whether this affects the dynamic range, cuz I know, although we start off with thirty two bits, you end up with uh sixteen bits and [vocalsound] you know, are we getting hurt there? But uh Dan is pretty confident that we're not, that [disfmarker] that quantization error is not [disfmarker] is still not a significant [vocalsound] factor there. So. So there was a question of whether we should change things here, whether we should [vocalsound] change a capacitor on the input box for that or whether we should [speaker002:] Yeah, he suggested a smaller capacitor, right? For the P D [speaker001:] Right. But then I had some other uh thing discussions with him and the feeling was [vocalsound] once we start monk monkeying with that, uh, many other problems could ha happen. And additionally we [disfmarker] we already have a lot of data that's been collected with that, so. [speaker002:] Yeah. 
[speaker001:] A simple thing to do is he [disfmarker] he [disfmarker] he has a [disfmarker] I forget if it [disfmarker] this was in that mail or in the following mail, but he has a [disfmarker] a simple filter, a digital filter that he suggested. We just run over the data before we deal with it. [speaker002:] Mm hmm. [speaker001:] um The other thing that I don't know the answer to, but when people are using Feacalc here, uh whether they're using it with the high pass filter option or not. And I don't know if anybody knows. [speaker005:] Um. [vocalsound] I could go check. [speaker001:] But. Yeah. So when we're doing all these things using our software there is [disfmarker] um if it's [disfmarker] if it's based on the RASTA PLP program, [vocalsound] which does both PLP and RASTA PLP [vocalsound] um then [vocalsound] uh there is an option there which then comes up through to Feacalc which [vocalsound] um allows you to do high pass filtering and in general we like to do that, because of things like this and [vocalsound] it's [disfmarker] it's pretty [disfmarker] it's not a very severe filter. Doesn't affect speech frequencies, even pretty low speech frequencies, at all, but it's [speaker002:] What's the [pause] cut off frequency it used? [speaker001:] Oh. I don't know I wrote this a while ago [speaker002:] Is it like twenty? [speaker001:] Something like that. [speaker002:] Yeah. [speaker001:] Yeah. I mean I think there's some effect above twenty but it's [disfmarker] it's [disfmarker] it's [disfmarker] it's mild. So, I mean it probably [disfmarker] there's probably some effect up to a hundred hertz or something but it's [disfmarker] it's pretty mild. I don't know in the [disfmarker] in the STRUT implementation of the stuff is there a high pass filter or a pre pre emphasis or something in the [disfmarker] [speaker006:] Uh. I think we use a pre emphasis. Yeah. Yeah. [speaker001:] So. 
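[Editor's note: a generic first-order high-pass (DC-blocking) filter of the kind being discussed can be sketched as below. The pole value r = 0.992, giving a cutoff of roughly (1 - r) * fs / (2*pi), about 20 Hz at 16 kHz to match the "like twenty" hertz mentioned, is an assumed illustration, not the actual coefficients of Dan Ellis's suggested filter or of the RASTA-PLP high-pass option.]

```python
import math

# First-order DC-blocking high-pass: y[n] = x[n] - x[n-1] + r * y[n-1].
# With r = 0.992 at fs = 16 kHz the cutoff sits near 20 Hz, so a 5 Hz
# "air conditioner" hum is attenuated by roughly 12 dB while speech
# frequencies pass essentially untouched (first-order, so the roll-off
# is mild, consistent with the discussion above).

def highpass(x, r=0.992):
    y, prev_x, prev_y = [], 0.0, 0.0
    for sample in x:
        out = sample - prev_x + r * prev_y
        y.append(out)
        prev_x, prev_y = sample, out
    return y

fs = 16000
hum = [math.sin(2 * math.pi * 5 * n / fs) for n in range(fs)]   # 1 s of 5 Hz
filtered = highpass(hum)
print(max(abs(s) for s in hum))
print(max(abs(s) for s in filtered[fs // 2:]))   # steady-state amplitude
```

Running such a filter over the waveform before feature extraction is the "just run it over the data before we deal with it" step described above.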
We [disfmarker] we [disfmarker] we want to go and check that in i for anything that we're going to use the P D A mike for. [vocalsound] uh He says that there's a pretty good roll off in the PZM mikes so [vocalsound] we don't need [disfmarker] need to worry about them one way or the other but if we do make use of the cheap mikes, [vocalsound] uh we want to be sure to do that [disfmarker] that filtering before we [vocalsound] process it. And then again if it's uh depending on the option that the [disfmarker] our [disfmarker] our software is being run with, it's [disfmarker] it's quite possible that's already being taken care of. uh But I also have to pick a different picture to show the effects of reverberation. uh [speaker002:] Did somebody notice it during your talk? [speaker001:] uh No. [speaker002:] Huh. [speaker001:] Well. uh Well. If they made output they were [disfmarker] they were, you know [disfmarker] they were nice. [speaker002:] Didn't say anything? [speaker001:] But. [vocalsound] I mean the thing is it was since I was talking about reverberation and showing this thing that was noise, it wasn't a good match, but it certainly was still uh an indication of the fact that you get noise with distant mikes. [speaker002:] Mm hmm. [speaker001:] uh It's just not a great example because not only isn't it reverberation but it's a noise that we definitely know what to do. So, I mean, it doesn't take deep [disfmarker] [vocalsound] a new [disfmarker] bold new methods to get rid of uh five hertz noise, so. [speaker002:] Yeah. [speaker001:] um [vocalsound] uh But. So it was [disfmarker] it was a bad example in that way, but it's [disfmarker] it still is [disfmarker] it's the real thing that we did get out of the microphone at distance, so it wasn't [vocalsound] it w it w wasn't wrong it was inappropriate. So. [vocalsound] So uh, but uh, Yeah, someone noticed it later pointed it out to me, and I went "oh, man. Why didn't I notice that?" [speaker002:] Hmm. 
[speaker001:] um. So. [vocalsound] um So I think we'll change our [disfmarker] our picture on the web, when we're [@ @]. One of the things I was [disfmarker] I mean, I was trying to think about what [disfmarker] what's the best [vocalsound] way to show the difference an and I had a couple of thoughts one was, [vocalsound] that spectrogram that we show [vocalsound] is O K, but the thing is [vocalsound] the eyes uh and the [vocalsound] the brain behind them are so good at picking out patterns [vocalsound] from [disfmarker] from noise [vocalsound] that in first glance you look at them it doesn't seem like it's that bad uh because there's many features that are still preserved. So one thing to do might be to just take a piece of the spec uh of the spectrogram where you can see [vocalsound] that something looks different, an and blow it up, and have that be the part that's [disfmarker] just to show as well. [speaker002:] Mm hmm. Mm hmm. [speaker001:] You know. i i Some things are going to be hurt. um [vocalsound] Another, I was thinking of was um [vocalsound] taking some spectral slices, like uh [disfmarker] like we look at with the recognizer, and look at the spectrum or cepstrum that you get out of there, and the [disfmarker] the uh, um, [vocalsound] the reverberation uh does make it [disfmarker] does change that. [speaker002:] Hmm. [speaker001:] And so maybe [disfmarker] maybe that would be more obvious. [speaker003:] Spectral slices? [speaker001:] Yeah. [speaker003:] W w what d what do you mean? [speaker001:] Well, I mean um all the recognizers look at frames. [speaker002:] So like one instant in time. [speaker001:] So they [disfmarker] they look at [disfmarker] [speaker003:] OK. [speaker001:] Yeah, look at a [disfmarker] So it's, yeah, at one point in time or uh twenty [disfmarker] over twenty milliseconds or something, [vocalsound] you have a spectrum or a cepstrum. [speaker003:] OK. [speaker001:] That's what I meant by a slice. [speaker003:] I see. 
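[Editor's note: a "spectral slice" in the sense used here, the short-time spectrum of one analysis frame of roughly twenty milliseconds, can be sketched with a plain DFT. The 1 kHz tone, 8 kHz sampling rate, and 160-sample (20 ms) frame are illustrative assumptions; an actual front end would apply a window and go on to mel filterbank and cepstral processing.]

```python
import math

# Log magnitude spectrum of a single analysis frame, computed with a
# naive DFT. This is the per-frame "slice" the recognizer looks at.

def log_spectrum(frame):
    n = len(frame)
    spec = []
    for k in range(n // 2 + 1):
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(math.log(math.hypot(re, im) + 1e-10))
    return spec

fs = 8000
frame = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(160)]  # 20 ms of 1 kHz
spec = log_spectrum(frame)

# bin k corresponds to k * fs / 160 Hz; the peak should land at 1 kHz
peak_hz = max(range(len(spec)), key=lambda k: spec[k]) * fs / 160
print(peak_hz)
```

Comparing such slices from a close-talking and a distant microphone at the same instant is one way to make the reverberation difference visible.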
[speaker001:] Yeah. And [vocalsound] if you look at [disfmarker] [speaker002:] You could just [disfmarker] you could just throw up, you know, uh [vocalsound] the uh [disfmarker] some MFCC feature vectors. You know, one from one, one from the other, and then, you know, you can look and see how different the numbers are. [speaker001:] Right. Well, that's why I saying [speaker002:] I'm just kidding. [speaker001:] either [vocalsound] [vocalsound] Well, either spectrum or cepstrum but [disfmarker] [vocalsound] but I think the thing is you wanna [disfmarker] [speaker002:] I don't mean a graph. I mean the actual numbers. [speaker001:] Oh. I see. Oh. That would be lovely, [speaker002:] Yeah. [speaker001:] yeah. [speaker002:] "See how different these [vocalsound] sequences of numbers are?" [speaker001:] Yeah. Or I could just add them up and get a different total. [speaker002:] Yeah. It's not the square. [speaker001:] OK. Uh. What else [disfmarker] wh what's [disfmarker] what else is going on? [speaker006:] Uh, yeah. Yeah, at first I had a remark why [disfmarker] I am wondering why the PDA is always so far. I mean we are always meeting at the [vocalsound] beginning of the table and [vocalsound] the PDA's there. [speaker001:] Uh. I guess cuz we haven't wanted to move it. We [disfmarker] we could [disfmarker] [vocalsound] we could move us, [speaker006:] Yeah? [speaker001:] and. [speaker006:] OK. [speaker005:] That's right. [speaker006:] Well, anyway. Um. Yeah, so. Uh. Since the last meeting we've [disfmarker] we've tried to put together um [vocalsound] the clean low pass um downsampling, upsampling, I mean, Uh the new filter that's replacing the LDA filters, and also [vocalsound] the um delay issue so that [disfmarker] We considered th the [disfmarker] the delay issue on the [disfmarker] for the on line normalization. Mmm. So we've put together all this and then we have results that are not um [vocalsound] [vocalsound] very impressive. 
Well, there is no [vocalsound] real improvement. [speaker001:] But it's not wer worse [speaker006:] It's not [disfmarker] [speaker001:] and it's better [disfmarker] better latency, right? [speaker006:] Yeah. Yeah. Well. Actually it's better. It seems better when we look at the mismatched case but [vocalsound] I think we are like [disfmarker] like cheated here by the [disfmarker] th this problem that [vocalsound] uh in some cases when you modify slight [disfmarker] slightly modify the initial condition you end up [vocalsound] completely somewhere air somewhere else in the [disfmarker] in the space, [vocalsound] the parameters. [speaker001:] Yeah. [speaker006:] So. Well. The other system are for instance. For Italian is at seventy eight [vocalsound] percent recognition rate on the mismatch, and this new system has eighty nine. But I don't think it indicates something, really. I don't [disfmarker] I don't think it means that the new system is more robust [speaker001:] Uh huh. [speaker006:] or [disfmarker] It's simply the fact that [disfmarker] Well. [speaker001:] Well, the test would be if you then tried it on one of the other test sets, if [disfmarker] if it was [disfmarker] [speaker006:] Y [speaker001:] Right. So this was Italian, right? [speaker006:] Yeah. Yeah. [speaker001:] So then if you take your changes [speaker006:] It's similar for other test sets [speaker001:] and then [disfmarker] [speaker006:] but I mean [vocalsound] from this se seventy eight um percent recognition rate system, [vocalsound] I could change the transition probabilities for the [disfmarker] the first HMM and [pause] it will end up to eighty nine also. [speaker001:] Uh huh. [speaker006:] By using point five instead of point six, point four [vocalsound] as in the [disfmarker] the HTK script. [speaker001:] Uh huh. Yeah. [speaker006:] So. Well. That's [disfmarker] [speaker002:] Yeah. Yeah I looked at um [disfmarker] [vocalsound] looked at the results when Stephane did that [speaker006:] Well. 
Eh uh [disfmarker] [speaker002:] and it's [disfmarker] it's really wo really happens. [speaker006:] This really happens. [speaker002:] I mean th the only difference is you change the self loop transition probability by a tenth of a percent [speaker006:] Yeah. [speaker001:] Yeah. [speaker002:] and it causes ten percent difference in the word error rate. [speaker001:] A tenth of a per cent. [speaker006:] Even tenth of a percent? [speaker002:] Yeah. From point [disfmarker] [speaker006:] Well, we tried [disfmarker] we tried point one, [speaker002:] I [disfmarker] I'm sorry f for point [disfmarker] from [disfmarker] You change at point one [speaker006:] yeah. Hmm. [speaker001:] Oh! [speaker002:] and n not tenth of a percent, one tenth, alright? [speaker001:] Yeah. [speaker002:] Um so from point five [disfmarker] so from point six to point five and you get ten percent better. [speaker001:] Mm hmm. [speaker006:] Mm hmm. [speaker002:] And it's [disfmarker] [vocalsound] I think it's what you basically hypothesized in the last meeting [vocalsound] about uh it just being very [disfmarker] and I think you mentioned this in your email too [disfmarker] [speaker006:] Mmm, yeah. [speaker002:] it's just very um [disfmarker] you know get stuck in some local minimum and this thing throws you out of it I guess. [speaker006:] Mm hmm. [speaker001:] Well, what's [disfmarker] what are [disfmarker] according to the rules what [disfmarker] what are we supposed to do about the transition probabilities? Are they supposed to be point five or point six? [speaker002:] I think you're not allowed to [disfmarker] Yeah. That's supposed to be point six, for the self loop. [speaker006:] Yeah. [speaker001:] Point [disfmarker] It's supposed to be point six. [speaker002:] Yeah. But changing it to point five I think is [disfmarker] which gives you much better results, but that's [vocalsound] not allowed. [speaker001:] But not allowed? Yeah. OK. [speaker002:] Yeah. 
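The self-loop change being debated here maps directly onto expected state duration; a minimal sketch (the 0.6 and 0.5 values are the ones mentioned in the discussion):

```python
def expected_state_duration(self_loop_prob):
    # An HMM state with self-loop probability p has a geometric
    # dwell-time distribution, so its expected duration is
    # 1 / (1 - p) frames.
    return 1.0 / (1.0 - self_loop_prob)

# Moving the self-loop from 0.6 to 0.5 (the tweak discussed above)
# shortens the expected dwell per state from 2.5 to 2.0 frames.
print(expected_state_duration(0.6))
print(expected_state_duration(0.5))
```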
[speaker006:] Yeah, but even if you use point five, I'm not sure it will always give you the better results on other test set or it [speaker002:] Yeah. Right. We only tested it on the [disfmarker] the medium mismatch, [speaker006:] on the other training set, I mean. [speaker002:] right? [speaker006:] Yeah. [speaker002:] You said on the other cases you didn't notice [disfmarker] [speaker006:] But. I think, yeah. I think the reason is, yeah, I not I [disfmarker] it was in my mail I think also, [vocalsound] is the fact that the mismatch is trained only on the far microphone. Well, in [disfmarker] for the mismatched case everything is um using the far microphone training and testing, whereas for the highly mismatched, training is done on the close microphone so [vocalsound] it's [disfmarker] it's clean speech basically so you don't have this problem of local minima probably and for the well match, it's a mix of close microphone and distant microphone and [disfmarker] Well. [speaker002:] I did notice uh something [disfmarker] [speaker006:] So th I think the mismatch is the more difficult for the training part. [speaker002:] Somebody, I think it was Morgan, suggested at the last meeting that I actually count to see [vocalsound] how many parameters and how many frames. [speaker006:] Mm hmm. [speaker001:] Mm hmm. [speaker002:] And there are uh almost one point eight million frames of training data and less than forty thousand parameters in the baseline system. [speaker001:] Hmm. [speaker006:] Yeah. [speaker002:] So it's very, very few parameters compared to how much training data. [speaker004:] Mm hmm. [speaker001:] Well. Yes. So. And that [disfmarker] that says that we could have lots more parameters actually. [speaker002:] Yeah. [speaker006:] Mm hmm. [speaker002:] Yeah. 
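As a rough sanity check on that parameter count, here is a back-of-the-envelope calculation under an assumed topology; the numbers below (11 whole-word digit models, 16 emitting states each, 3 diagonal-covariance Gaussians per state, 39-dimensional features) are illustrative guesses, not figures stated in the discussion, which only quotes the totals:

```python
# Back-of-the-envelope Gaussian parameter count for a whole-word
# digit HMM system under an ASSUMED topology.
n_models, n_states, n_mix, n_dim = 11, 16, 3, 39

# Each diagonal Gaussian: mean vector + variance vector + weight.
params_per_gaussian = n_dim + n_dim + 1
total = n_models * n_states * n_mix * params_per_gaussian
print(total)  # 41712, i.e. on the order of forty thousand
```

With around 1.8 million training frames, that is roughly forty frames per parameter, which is why adding more mixtures looks affordable.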
I did one quick experiment just to make sure I had everything worked out and I just [disfmarker] [vocalsound] uh f for most of the um [disfmarker] For [disfmarker] for all of the digit models, they end up at three mixtures per state. And so I just did a quick experiment, where I changed it so it went to four and um [vocalsound] it it [disfmarker] it didn't have a r any significant effect at the uh medium mismatch and high mismatch cases and it had [disfmarker] [vocalsound] it was just barely significant for the well matched better. Uh so I'm r gonna run that again but [vocalsound] um with many more uh mixtures per state. [speaker001:] Yeah. Cuz at forty thou I mean you could you could have uh [disfmarker] Yeah, easily four times as many [vocalsound] parameters. [speaker002:] Mm hmm. And I think also [vocalsound] just seeing what we saw [vocalsound] uh in terms of the expected duration of the silence model? when we did this tweaking of the self loop? [speaker006:] Yeah. [speaker002:] The silence model expected duration was really different. And so in the case where [vocalsound] um [vocalsound] it had a better score, the silence model expected duration was much longer. [speaker006:] Yeah. [speaker002:] So it was like [disfmarker] [vocalsound] it was a better match. I think [vocalsound] you know if we make a better silence model I think that will help a lot too um for a lot of these cases so but one one thing I [disfmarker] I wanted to check out before I increased the um [vocalsound] number of mixtures per state was [vocalsound] uh [vocalsound] in their [vocalsound] default training script they do an initial set of three re estimations and then they built the silence model and then they do seven iterations then the add mixtures and they do another seven then they add mixtures then they do a final set of seven and they quit. 
Seven seems like a lot to me and it also makes the experiments go take a really long time I mean to do one turn around of the well matched case takes like a day. [speaker001:] Mm hmm. Mm hmm. [speaker002:] And so [vocalsound] you know in trying to run these experiments I notice, you know, it's difficult to find machines, you know, compute the run on. And so one of the things I did was I compiled HTK for the Linux [vocalsound] machines [speaker001:] Mm hmm. [speaker002:] cuz we have this one from IBM that's got like five processors in it? [speaker001:] Right. [speaker002:] and so now I'm [disfmarker] you can run stuff on that and that really helps a lot because now we've got [vocalsound] you know, extra machines that we can use for compute. [speaker004:] Mm hmm. [speaker002:] And if [disfmarker] I'm do running an experiment right now where I'm changing the number of iterations? [vocalsound] from seven to three? [speaker001:] Yeah. [speaker002:] just to see how it affects the baseline system. And so if we can get away with just doing three, we can do [vocalsound] many more experiments more quickly. And if it's not a [disfmarker] a huge difference from running with seven iterations, [vocalsound] um, you know, we should be able to get a lot more experiments done. [speaker006:] Hmm. [speaker002:] And so. I'll let you know what [disfmarker] what happens with that. But if we can [vocalsound] you know, run all of these back ends f with many fewer iterations and [vocalsound] on Linux boxes we should be able to get a lot more experimenting done. [speaker001:] Mm hmm. [speaker002:] So. So I wanted to experiment with cutting down the number of iterations before I [vocalsound] increased the number of Gaussians. [speaker001:] Right. Sorry. So um, how's it going on the [disfmarker] [speaker006:] Um. [speaker001:] So. You [disfmarker] you did some things. They didn't improve things in a way that convinced you you'd substantially improved anything. [speaker006:] Yeah. 
[speaker001:] But they're not making things worse and we have reduced latency, right? [speaker006:] Yeah. But actually [disfmarker] um actually it seems to do a little bit worse for the well matched case and we just noticed that [disfmarker] Yeah, actually the way the final score is computed is quite funny. It's not a mean of word error rate. It's not a weighted mean of word error rate, it's a weighted mean of improvements. [speaker001:] Uh huh. [speaker006:] So. Which means that [vocalsound] actually the weight on the well matched is [disfmarker] Well I well what what [disfmarker] What happened is that if you have a small improvement or a small if on the well matched case [vocalsound] it will have uh huge influence on the improvement compared to the reference because the reference system is [disfmarker] is [disfmarker] is quite good for [disfmarker] for the well ma well matched case also. [speaker002:] So it [disfmarker] it weights the improvement on the well matched case really heavily compared to the improvement on the other cases? [speaker006:] No, but it's the weighting of the [disfmarker] of the improvement not of the error rate. [speaker002:] Yeah. Yeah, and it's hard to improve on the [disfmarker] on the best case, cuz it's already so good, right? [speaker006:] Yeah but [pause] what I mean is that you can have a huge improvement on the H [disfmarker] HMK's, uh like five percent uh absolute, and this will not affect the final score almost [disfmarker] Uh this will almost not affect the final score because [vocalsound] this improvement [disfmarker] because the improvement [vocalsound] uh relative to the [disfmarker] the baseline is small [disfmarker] Uh. [speaker001:] So they do improvement in terms of uh accuracy? rather than word error rate? [speaker006:] Uh improvement? No, it's compared to the word er it's improvement on the word error rate, [speaker001:] So [disfmarker] [speaker006:] yeah. Sorry. [speaker001:] OK. 
So if you have uh ten percent error and you get five percent absolute uh [vocalsound] improvement then that's fifty percent. [speaker006:] Mm hmm. [speaker001:] OK. So what you're saying then is that if it's something that has a small word error rate, [vocalsound] then uh a [disfmarker] even a relatively small improvement on it, in absolute terms, [vocalsound] will show up as quite [disfmarker] quite large in this. [speaker006:] Mm hmm. [speaker001:] Is that what you're saying? [speaker006:] Yeah. [speaker001:] Yes. [speaker006:] Yeah. [speaker001:] OK. But yeah that's [disfmarker] that's [disfmarker] it's the notion of relative improvement. [speaker006:] Yeah. [speaker001:] Word error rate. [speaker006:] Sure, but when we think about the weighting, which is point five, point three, point two, [vocalsound] it's on absolute on [disfmarker] on relative figures, not [disfmarker] [speaker001:] Yeah. [speaker006:] So when we look at this error rate [speaker001:] Yeah. No. [speaker006:] uh [disfmarker] [speaker001:] That's why I've been saying we should be looking at word error rate uh and [disfmarker] and not [disfmarker] not at [vocalsound] at accuracies. [speaker006:] Mmm, yeah. Mmm, yeah. [speaker001:] It's [disfmarker] [speaker006:] Mm hmm. [speaker001:] I mean uh we probably should have standardized on that all the way through. It's just [disfmarker] [speaker002:] Well. [speaker006:] Mm hmm. [speaker002:] I mean, it's not [disfmarker] it's not that different, right? I mean, just subtract the accuracy. [speaker001:] Yeah [speaker002:] I mean [disfmarker] [speaker001:] but you're [disfmarker] but when you look at the numbers, your sense of the relative size of things is quite different. [speaker002:] Oh. Oh, I see. Yeah. [speaker001:] If you had ninety percent uh correct [vocalsound] and five percent, five over ninety doesn't look like it's a big difference, but [vocalsound] five over ten is [disfmarker] is big. [speaker002:] Mm hmm. [speaker006:] Mm hmm. 
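The scoring arithmetic just described can be sketched as follows; the 0.5/0.3/0.2 weights are the ones mentioned, while the example error rates are purely illustrative:

```python
def relative_improvement(baseline_wer, new_wer):
    # Relative improvement: the reduction in word error rate as a
    # fraction of the baseline word error rate.
    return (baseline_wer - new_wer) / baseline_wer

def weighted_score(baseline_wers, new_wers, weights=(0.5, 0.3, 0.2)):
    # Weighted mean of per-condition relative improvements
    # (well matched, medium mismatch, high mismatch), as described:
    # the weighting is applied to relative figures, not to the
    # absolute error rates themselves.
    return sum(w * relative_improvement(b, n)
               for w, b, n in zip(weights, baseline_wers, new_wers))

# The worked example from the discussion: 10% WER improved by
# 5% absolute is a 50% relative improvement.
print(relative_improvement(0.10, 0.05))
```

This is why a small absolute gain on an already-good condition can dominate the final number.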
[speaker001:] So just when we were looking at a lot of numbers and [vocalsound] getting sense of what was important. [speaker002:] I see. I see. Yeah. That makes sense. [speaker001:] Um. [speaker006:] Mmm. [speaker001:] Um. [speaker006:] Well anyway uh. So. Yeah. So it hurts a little bit on the well match and yeah. [speaker001:] What's a little bit? Like [disfmarker] [speaker006:] Like, it's difficult to say because again um [vocalsound] [vocalsound] I'm not sure I have the um [disfmarker] [speaker002:] Hey Morgan? Do you remember that Signif program that we used to use for testing signi? Is that still valid? I [disfmarker] I've been using that. [speaker001:] Yeah. Yeah, it was actually updated. [speaker002:] OK. [speaker001:] Uh. [vocalsound] Jeff updated it some years ago [speaker002:] Oh, it was. Oh, I shoul [speaker001:] and [disfmarker] and uh cleaned it up made some things better in it. [speaker002:] OK. [speaker001:] So. [speaker002:] I should find that new one. I just use my old one from [vocalsound] ninety two or whatever [speaker001:] Yeah, I'm sure it's not that different but [disfmarker] but he [disfmarker] [vocalsound] he uh [disfmarker] he was a little more rigorous, as I recall. [speaker002:] OK. [speaker006:] Right. So it's around, like, point five. No, point six [comment] uh percent absolute on Italian [disfmarker] [speaker001:] Worse. [speaker006:] Worse, yep. [speaker001:] Out of what? I mean. s [speaker006:] Uh well we start from ninety four point sixty four, and we go to ninety four point O four. [speaker001:] Uh huh. So that's six [disfmarker] six point th [speaker006:] Uh. [speaker002:] Ninety three point six four, right? is the baseline. [speaker006:] Oh, no, I've ninety four. Oh, the baseline, you mean. [speaker002:] Yeah. [speaker006:] Well I don't [disfmarker] I'm not talking about the baseline here. [speaker002:] Oh. Oh. [speaker006:] I [speaker002:] I'm sorry. [speaker006:] uh [disfmarker] My baseline is the submitted system. 
[speaker002:] Ah! OK. Ah, ah. [speaker006:] Hmm. [speaker002:] Sorry. [speaker001:] Yeah. [speaker006:] Oh yeah. For Finnish, we start to ninety three point eight four and we go to ninety three point seventy four. And for Spanish we are [disfmarker] we were at ninety five point O five and we go to ninety three s point sixty one. [speaker001:] OK, so we are getting hurt somewhat. [speaker006:] So. [speaker001:] And is that wh what [disfmarker] do you know what piece [disfmarker] you've done several changes here. Uh, do you know what pie [speaker006:] Yeah. I guess [disfmarker] I guess it's [disfmarker] it's the filter. Because nnn, well uh we don't have complete result, but the filter [disfmarker] So the filter with the shorter delay hurts on Italian well matched, which [disfmarker] And, yeah. And the other things, like um [vocalsound] downsampling, upsampling, don't seem to hurt and [vocalsound] the new on line normalization, neither. [speaker002:] I'm [disfmarker] [speaker006:] So. [speaker002:] I'm really confused about something. If we saw that making a small change like, you know, a tenth, to the self loop had a huge effect, [vocalsound] can we really make any conclusions about differences in this stuff? [speaker006:] Mm hmm. Yeah that's th Yeah. [speaker002:] I mean, especially when they're this small. I mean. [speaker006:] I think we can be completely fooled by this thing, but [disfmarker] I don't know. [speaker001:] Well, yeah. [speaker006:] So. There is first this thing, and then the [disfmarker] yeah, I computed the um [disfmarker] [vocalsound] like, the confidence level on the different test sets. And for the well matched they are around um [vocalsound] point six uh percent. For the mismatched they are around like let's say one point five percent. And for the well m uh HM they are also around one point five. 
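One simple way to arrive at confidence figures like the ones just quoted is a binomial approximation; this is a generic sketch, not the Signif tool mentioned earlier (which handles matched pairs more carefully), and the word count below is an assumed figure:

```python
import math

def wer_confidence_halfwidth(wer, n_words, z=1.96):
    # Approximate 95% confidence half-width for a word error rate,
    # treating word errors as independent Bernoulli trials.  A crude
    # binomial approximation, for intuition only.
    return z * math.sqrt(wer * (1.0 - wer) / n_words)

# Illustrative only: a 5% WER measured over 10,000 words gives a
# half-width of roughly 0.4% absolute.
print(wer_confidence_halfwidth(0.05, 10000))
```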
[speaker001:] But [disfmarker] OK, so you [disfmarker] these [disfmarker] these degradations you were talking about were on the well matched case [speaker006:] So. Yeah. [speaker001:] Uh. Do the [disfmarker] does the new filter make things uh better or worse for the other cases? [speaker006:] But. Uh. About the same. It doesn't hurt. Yeah. [speaker001:] Doesn't hurt, but doesn't get a little better, or something. [speaker006:] No. [speaker001:] No. OK, so [vocalsound] um I guess the argument one might make is that, Yeah, if you looked at one of these cases [vocalsound] and you jiggle something and it changes [vocalsound] then uh you're not quite sure what to make of it. But when you look across a bunch of these and there's some [disfmarker] some pattern, um [disfmarker] I mean, so eh h here's all the [disfmarker] if [disfmarker] if in all these different cases [vocalsound] it never gets better, and there's significant number of cases where it gets worse, [vocalsound] then you're probably [pause] hurting things, [vocalsound] I would say. So um [vocalsound] I mean at the very least that would be a reasonable prediction of what would happen with [disfmarker] with a different test set, that you're not jiggling things with. So I guess the question is if you can do better than this. If you can [disfmarker] if we can approximate [vocalsound] the old numbers while still keeping the latency down. [speaker006:] Mmm. Yeah. [speaker001:] Uh, so. Um. What I was asking, though, is uh [disfmarker] are [disfmarker] what's [disfmarker] what's the level of communication with uh [vocalsound] the O G I gang now, about this and [disfmarker] [speaker006:] Well, we are exchanging mail as soon as we [disfmarker] [vocalsound] we have significant results. [speaker001:] Yeah. [speaker006:] Um. Yeah. For the moment, they are working on integrating [vocalsound] the um [vocalsound] spectral subtraction apparently from Ericsson. [speaker001:] Mm hmm. [speaker006:] Um. Yeah. And so. Yeah.
We are working on our side on other things like [vocalsound] uh also trying a sup spectral subtraction but of [disfmarker] of our own, I mean, another [vocalsound] spectral subtraction. [speaker001:] Mm hmm. [speaker006:] Um. Yeah. So I think it's [disfmarker] it's OK. It's going [disfmarker] [speaker001:] Is there any further discussion about this [disfmarker] this idea of [disfmarker] of having some sort of source code control? [speaker006:] Yeah. Well. For the moment they're [disfmarker] uh everybody's quite um [disfmarker] There is this Eurospeech deadline, so. [speaker001:] I see. [speaker006:] Um. And. Yeah. But yeah. As soon as we have something that's significant and that's better than [disfmarker] than what was submitted, we will fix [disfmarker] fix the system and [disfmarker] But we've not discussed it [disfmarker] it [disfmarker] it [disfmarker] this yet, yeah. [speaker001:] Yeah. Sounds like a great idea but [disfmarker] but I think that [disfmarker] that um [vocalsound] he's saying people are sort of scrambling for a Eurospeech deadline. [speaker006:] Mmm. [speaker001:] But that'll be uh, uh done in a week. So, maybe after [vocalsound] this next one. [speaker006:] Yeah. [speaker002:] Wow! Already a week! Man! [speaker001:] Yeah. [speaker002:] You're right. That's amazing. [speaker001:] Yeah. Anybo anybody in the [disfmarker] in this group do doing anything for Eurospeech? [speaker006:] S [speaker001:] Or, is that what [disfmarker] is that [disfmarker] [speaker006:] Yeah we are [disfmarker] [vocalsound] We are trying to [disfmarker] to do something with the Meeting Recorder digits, [speaker001:] Right. [speaker006:] and [disfmarker] But yeah. Yeah. And the good thing is that [pause] there is this first deadline, [speaker001:] Yeah. [speaker006:] and, well, some people from OGI are working on a paper for this, but there is also the um [vocalsound] special session about th Aurora which is [disfmarker] [vocalsound] uh which has an extended deadline. So.
The deadline is in May. [speaker001:] For uh [disfmarker] [vocalsound] Oh, for Eurospeech? [speaker006:] For th Yeah. So f only for the experiments on Aurora. [speaker001:] Oh! [speaker006:] So it [disfmarker] it's good, yeah. [speaker001:] Oh, a special dispensation. That's great. [speaker002:] Mm hmm. Where is Eurospeech this year? [speaker006:] It's in Denmark. [speaker001:] Aalborg [disfmarker] Aalborg uh [speaker002:] Oh. [speaker001:] So the deadline [disfmarker] When's the deadline? [speaker006:] Hmm? [speaker001:] When's the deadline? [speaker006:] I think it's the thirteenth of May. [speaker001:] That's great! It's great. So we should definitely get something in for that. [speaker006:] Yeah. [speaker001:] But on meeting digits, maybe there's [disfmarker] Maybe. [speaker006:] Yeah. So it would be for the first deadline. [speaker001:] Maybe. Yeah. [speaker006:] Nnn. [speaker001:] Yeah. So, I mean, I [disfmarker] I think that you could certainly start looking at [disfmarker] at the issue uh but [disfmarker] but uh [vocalsound] I think it's probably, on s from what Stephane is saying, it's [disfmarker] it's unlikely to get sort of active participation from the two sides until after they've [disfmarker] [speaker002:] Well I could at least [disfmarker] Well, I'm going to be out next week but I could [pause] try to look into like this uh CVS over the web. That seems to be a very popular [vocalsound] way of [pause] people distributing changes and [disfmarker] over, you know, multiple sites and things [speaker001:] Mm hmm. [speaker002:] so maybe [vocalsound] if I can figure out how do that easily and then pass the information on to everybody so that it's [vocalsound] you know, as easy to do as possible and [disfmarker] and people don't [disfmarker] it won't interfere with [comment] their regular work, then maybe that would be good. And I think we could use it for other things around here too. So. [speaker001:] Good. [speaker003:] That's cool. 
And if you're interested in using CVS, I've set it up here, so. [speaker002:] Oh great. OK. [speaker003:] um j [speaker002:] I used it a long time ago but it's been a while so maybe I can ask you some questions. [speaker003:] Oh. So. I'll be away tomorrow and Monday but I'll be back on Tuesday or Wednesday. [speaker002:] OK. [speaker001:] Yeah. Dave, the other thing, actually, is [disfmarker] is this business about this wave form. Maybe you and I can talk a little bit at some point about [vocalsound] coming up with a better [vocalsound] uh demonstration of the effects of reverberation for our web page, cuz uh [vocalsound] [disfmarker] the uh [vocalsound] um I mean, actually the [disfmarker] the uh It made a good [disfmarker] good audio demonstration because when we could play that clip the [disfmarker] the [disfmarker] the really [vocalsound] obvious difference is that you can hear two voices and [disfmarker] [vocalsound] [vocalsound] in the second one and only hear [disfmarker] [speaker002:] Maybe we could just [pause] like, talk into a cup. [speaker001:] Yeah. [speaker002:] Some good reverb. [speaker001:] No, I mean, it sound [disfmarker] it sounds pretty reverberant, but I mean you can't [disfmarker] when you play it back in a room with a [disfmarker] you know a big room, [vocalsound] nobody can hear that difference really. [speaker003:] Yeah. Uh huh. [speaker001:] They hear that it's lower amplitude and they hear there's a second voice, um [vocalsound] but uh that [disfmarker] actually that makes for a perfectly good demo because that's a real obvious thing, that you hear two voices. [speaker002:] But not of reverberation. [speaker001:] Yeah. [speaker003:] A boom. [speaker001:] Well that [disfmarker] that [disfmarker] that's OK. But for the [disfmarker] the visual, just, you know, I'd like to have uh [vocalsound] uh, you know, the spectrogram again, [speaker003:] Yeah. 
[speaker001:] because your [disfmarker] your [disfmarker] your visual [vocalsound] uh abilities as a human being are so good [vocalsound] you can pick out [disfmarker] you know, you [disfmarker] you look at the good one, you look at the cru the screwed up one, and [disfmarker] and you can see the features in it without trying to [@ @] [disfmarker] [speaker002:] I noticed that in the pictures. [speaker001:] yeah. [speaker002:] I thought "hey, you know th" I [disfmarker] My initial thought was "this is not too bad!" [speaker001:] Right. But you have to [disfmarker] you know, if you look at it closely, you see "well, here's a place where this one has a big formant [disfmarker] uh uh formant [disfmarker] maj major formants here are [disfmarker] [vocalsound] are moving quite a bit." And then you look in the other one and they look practically flat. [speaker002:] Mm hmm. [speaker001:] So I mean you could [disfmarker] that's why I was thinking, in a section like that, you could take a look [disfmarker] look at just that part of the spectrogram and you could say "Oh yeah. This [disfmarker] this really distorted it quite a bit." [speaker002:] Yeah. The main thing that struck me in looking at those two spectrograms was the difference in the high frequencies. It looked like [vocalsound] for the one that was farther away, you know, it really [disfmarker] everything was attenuated and [disfmarker] [speaker001:] Right. [speaker002:] I mean that was the main visual thing that I noticed. [speaker001:] Right. But it's [disfmarker] it's uh [disfmarker] So. Yeah. So there are [disfmarker] clearly are spectral effects. Since you're getting all this indirect energy, then a lot of it does have [disfmarker] have uh [vocalsound] reduced high frequencies. But um the other thing is the temporal courses of things really are changed, and [disfmarker] [vocalsound] and uh we want to show that, in some obvious way.
The reason I put the wave forms in there was because [vocalsound] uh they [disfmarker] they do look quite different. Uh. And so I thought "Oh, this is good." but I [disfmarker] [vocalsound] I just uh [disfmarker] After [disfmarker] after uh they were put in there I didn't really look at them anymore, cuz I just [disfmarker] they were different. So [vocalsound] I want something that has a [disfmarker] is a more interesting explanation for why they're different. Um. [speaker003:] Oh. So maybe we can just substitute one of these wave forms and um [vocalsound] then do some kind of zoom in on the spectrogram on an interesting area. [speaker001:] Something like that. Yeah. [speaker003:] Uh huh. [speaker001:] The other thing that we had in there that I didn't like was that um [vocalsound] the most obvious characteristic of the difference uh when you listen to it is that there's a second voice, and the [disfmarker] the [disfmarker] the [disfmarker] the [disfmarker] the uh [vocalsound] cuts that we have there actually don't correspond to the full wave form. It's just the first [disfmarker] I think there was something where he was having some trouble getting so much in, or. I [disfmarker] I forget the reason behind it. But [vocalsound] it [disfmarker] it's um [disfmarker] [vocalsound] it's the first six seconds or something [vocalsound] of it and it's in [vocalsound] the seventh or eighth second or something where [@ @] the second voice comes in. So we [disfmarker] we would like to actually see [vocalsound] the voice coming in, too, I think, [speaker003:] Mm hmm. [speaker001:] since that's the most obvious thing [pause] when you listen to it. So. Um. [speaker006:] Uh, yeah. Yeah. I brought some [disfmarker] I don't know if [disfmarker] [vocalsound] some [vocalsound] figures here. Well. I start [disfmarker] we started to work on spectral subtraction. And [vocalsound] um [vocalsound] the preliminary results were very bad. 
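Spectral subtraction in its most basic form can be sketched as below; this is a generic textbook version, not the Ericsson variant or the group's own implementation referred to in the discussion:

```python
import numpy as np

def spectral_subtraction(power_spec, noise_est, over_sub=1.0, floor=0.01):
    # Generic power-spectral subtraction: subtract a noise estimate
    # from each frame's power spectrum and floor the result so no
    # bin goes negative.
    cleaned = power_spec - over_sub * noise_est
    return np.maximum(cleaned, floor * power_spec)

# noise_est would typically be an average over frames that a voice
# activity detector labels as non-speech, which is one reason the
# interaction between subtraction and VAD matters here.
```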
So the thing that we did is just to add spectral subtraction before this, the whole uh process, which contains LDA on line normalization. [speaker001:] Uh huh. [speaker006:] And it hurts uh a lot. [speaker001:] Uh huh. [speaker006:] And so we started to look at [disfmarker] at um things like this, which is, well, it's [disfmarker] Yeah. So you have the C zero parameters for one uh Italian utterance. [speaker004:] You can [@ @]. [speaker006:] And I plotted this for two channels. Channel zero is the close mic microphone, and channel one is the distant microphone. And it's perfectly synchronized, so. And the sentence contain only one word, which is "Due" And it can't clearly be seen. Where [disfmarker] where is it? Where is the word? [speaker001:] Uh huh. [speaker002:] This is [disfmarker] this is, [speaker005:] Hmm. [speaker002:] oh, a plot of C zero, [speaker006:] So. [speaker002:] the energy. [speaker006:] This is a plot of C zero, uh when we don't use spectral subtraction, and when there is no on line normalization. So. [speaker001:] Mm hmm. [speaker006:] There is just some filtering with the LDA and [vocalsound] and some downsampling, upsampling. [speaker002:] C zero is the close talking? [disfmarker] [speaker006:] So. [speaker002:] uh the close channel? [speaker006:] Yeah. Yeah. [speaker002:] and s channel one is the [disfmarker] [speaker006:] Yeah. So C zero is very clean, actually. [speaker002:] Yeah. [speaker006:] Uh then when we apply mean normalization it looks like the second figure, though it is not. Which is good. Well, the noise part is around zero and [disfmarker] [vocalsound] [vocalsound] And then the third figure is what happens when we apply mean normalization and variance normalization. [speaker001:] Mm hmm. [speaker006:] So.
What we can clearly see is that on the speech portion [vocalsound] the two channel come [disfmarker] becomes very close, but also what happens on the noisy portion is that the variance of the noise is [disfmarker] [speaker001:] Mm hmm. [speaker002:] This is still being a plot of C zero? [speaker006:] Yeah. This is still C zero. [speaker002:] OK. Can I ask um what does variance normalization do? w What is the effect of that? [speaker001:] Normalizes the variance. [speaker006:] So it [disfmarker] it [disfmarker] Yeah. It normalized th the standard deviation. [speaker002:] I mean [speaker006:] So it [disfmarker] [speaker002:] y Yeah. [speaker006:] You [disfmarker] you get an estimate of the standard deviation. [speaker002:] No, I understand that, but I mean [disfmarker] [speaker006:] That's um [disfmarker] [speaker002:] No. No, I understand what it is, but I mean, what does it [disfmarker] what's [disfmarker] what is [speaker006:] Yeah but. [speaker002:] uh [disfmarker] [speaker001:] What's the rationale? [speaker002:] We Yeah. Yeah. Why [disfmarker] why do it? [speaker006:] Uh. [speaker001:] Well, I mean, because [vocalsound] everything uh [disfmarker] If you have a system based on Gaussians, everything is based on means and variances. [speaker002:] Yeah. [speaker001:] So if there's an overall [vocalsound] reason [disfmarker] You know, it's like uh if you were doing uh image processing and in some of the pictures you were looking at, uh there was a lot of light uh and [disfmarker] and in some, there was low light, [speaker002:] Mm hmm. [speaker001:] you know, you would want to adjust for that in order to compare things. [speaker002:] Mm hmm. [speaker001:] And the variance is just sort of like the next moment, you know? 
So uh [vocalsound] what if um one set of pictures was taken uh so that throughout the course it was [disfmarker] went through daylight and night uh [vocalsound] um um ten times, another time it went thr I mean i is, you know, how [disfmarker] how much [disfmarker] [vocalsound] how much vari [speaker002:] Oh, OK. [speaker001:] Or no. I guess a better example would be [vocalsound] how much of the light was coming in from outside rather than artificial light. So if it was a lot [disfmarker] [vocalsound] if more was coming from outside, then there'd be the bigger effect of the [disfmarker] of the [disfmarker] of the change in the [disfmarker] So every mean [disfmarker] every [disfmarker] all [disfmarker] all of the [disfmarker] the parameters that you have, especially the variances, are going to be affected by the overall variance. [speaker002:] Oh, OK. Uh huh. [speaker001:] And so, in principle, you [disfmarker] if you remove that source, then, you know, you can [disfmarker] [speaker002:] I see. OK. So would [disfmarker] the major effect is [disfmarker] that you're gonna get is by normalizing the means, but it may help [disfmarker] [speaker001:] That's the first order but [disfmarker] thing, [speaker002:] First order effects. And it may help to do the variance. [speaker001:] but then the second order is [disfmarker] is the variances [speaker002:] OK. OK. [speaker001:] because, again, if you [disfmarker] if you're trying to distinguish between E and B [speaker002:] Mm hmm. [speaker001:] if it just so happens that the E's [vocalsound] were a more [disfmarker] you know, were recorded when [disfmarker] when the energy was [disfmarker] was [disfmarker] was larger or something, [speaker002:] Mm hmm. Mm hmm. [speaker001:] or the variation in it was larger, [vocalsound] uh than with the B's, then this will be [disfmarker] give you some [disfmarker] some bias. 
So the [disfmarker] [vocalsound] it's removing these sources of variability in the data [vocalsound] that have nothing to do with the linguistic component. [speaker002:] OK. [speaker006:] Mmm. [speaker002:] Gotcha. OK. Sorry to interrupt. [speaker006:] Yep. [speaker001:] But the [disfmarker] the uh [disfmarker] but let me as ask [disfmarker] ask you something. [speaker006:] And it [disfmarker] and this [disfmarker] [speaker001:] i is [disfmarker] if [disfmarker] If you have a good voice activity detector, isn't [disfmarker] isn't it gonna pull that out? [speaker006:] Yeah. Sure. If they are good. Yeah. Well what it [disfmarker] it shows is that, yeah, perhaps a good voice activity detector is [disfmarker] is good before on line normalization and that's what uh [vocalsound] we've already observed. But uh, yeah, voice activity detection is not [vocalsound] [vocalsound] an easy thing neither. [speaker002:] But after you do this, after you do the variance normalization [disfmarker] [speaker006:] Mm hmm. [speaker002:] I mean. I don't know, it seems like this would be a lot easier than this signal to work with. [speaker006:] Yeah. So. What I notice is that, while I prefer to look at the second figure than at the third one, well, because you clearly see where speech is. [speaker002:] Yeah. [speaker001:] Yeah. [speaker006:] But the problem is that on the speech portion, channel zero and channel one are more different than when you use variance normalization where channel zero and channel one become closer. [speaker001:] Right. [speaker002:] But for the purposes of finding the speech [disfmarker] [speaker006:] And [disfmarker] Yeah, but here [disfmarker] Yeah. [speaker002:] You're more interested in the difference between the speech and the nonspeech, [speaker006:] Yeah. [speaker002:] right? [speaker006:] So I think, yeah. 
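The per-utterance mean and variance normalization being discussed can be sketched as follows. This is a hypothetical illustration in NumPy, not the actual Aurora front-end code; the function name and the whole-utterance (rather than on-line) treatment are assumptions:

```python
import numpy as np

def mean_variance_normalize(feats, eps=1e-8):
    """Per-utterance mean and variance normalization of a feature matrix.

    feats: (num_frames, num_coeffs) array of e.g. cepstral coefficients.
    Subtracting the mean removes the first-order channel effect;
    dividing by the standard deviation removes the second-order effect
    (the overall spread) that would otherwise bias the Gaussians.
    """
    mu = feats.mean(axis=0)
    sigma = feats.std(axis=0)
    return (feats - mu) / (sigma + eps)
```

After this step, C0 trajectories from channel zero and channel one end up on a comparable scale, which is the effect visible in the plots under discussion.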
For I th I think that it [disfmarker] perhaps it shows that [vocalsound] uh the parameters that the voice activity detector should use [disfmarker] uh have to use should be different than the parameter that have to be used for speech recognition. [speaker001:] Yeah. So basically you want to reduce this effect. [speaker006:] Well, y [speaker001:] So you can do that by doing the voi voice activity detection. You also could do it by spect uh spectral subtraction before the [vocalsound] variance normalization, right? [speaker006:] Yeah, but it's not clear, yeah. [speaker001:] So uh [disfmarker] [speaker006:] We So. Well. It's just to [speaker001:] Yeah. [speaker006:] the [disfmarker] the number that at that are here are recognition experiments on Italian HM and MM [vocalsound] with these two kinds of parameters. And, [pause] well, it's better with variance normalization. [speaker001:] Yeah. Yeah. So it does get better even though it looks ugly. [speaker006:] Uh [disfmarker] [speaker001:] OK. but does this have the voice activity detection in it? [speaker006:] Yeah. [speaker001:] OK. [speaker006:] Um. [speaker005:] OK. [speaker001:] So. [speaker002:] Where's th [speaker006:] But the fact is that the voice activity detector doesn't work on channel one. So. Yeah. [speaker001:] Uh huh. [speaker002:] Where [disfmarker] at what stage is the voice activity detector applied? Is it applied here or a after the variance normalization? [speaker006:] Hmm? [speaker001:] Spectral subtraction, I guess. [speaker006:] It's applied before variance normalization. [speaker002:] or [disfmarker] [speaker006:] So it's a good thing, because I guess voice activity detection on this should [disfmarker] could be worse. [speaker002:] Oh. Yeah. Is it applied all the way back here? [speaker006:] It's applied the um on, yeah, something like this, yeah. [speaker002:] Maybe that's why it doesn't work for channel one. [speaker006:] Perhaps, yeah. 
[speaker001:] Can I [disfmarker] [speaker006:] So we could perhaps do just mean normalization before VAD. [speaker002:] Mm hmm. [speaker001:] Mm hmm. Can I ask a, I mean [disfmarker] a sort of top level question, which is [vocalsound] um "if [disfmarker] if most of what the OGI folk are working with is trying to [vocalsound] integrate this other [disfmarker] other uh spectral subtraction, [vocalsound] why are we worrying about it?" [speaker006:] Mm hmm. About? Spectral subtraction? [speaker001:] Yeah. [speaker006:] It's just uh [disfmarker] Well it's another [disfmarker] They are trying to u to use the um [disfmarker] [vocalsound] the Ericsson and we're trying to use something [disfmarker] something else. And. Yeah, and also to understand what happens because [speaker001:] OK. [speaker006:] uh fff Well. When we do spectral subtraction, actually, I think [vocalsound] that this is the [disfmarker] the two last figures. [speaker001:] Yeah. [speaker006:] Um. It seems that after spectral subtraction, speech is more emerging now uh [vocalsound] than [disfmarker] than before. [speaker001:] Mm hmm. [speaker002:] Speech is more what? [speaker006:] Well, the difference between the energy of the speech and the energy of the n spectral subtrac subtracted noise portion is [disfmarker] is larger. [speaker001:] Mm hmm. [speaker006:] Well, if you compare the first figure to this one [disfmarker] Actually the scale is not the same, but if you look at the [disfmarker] the numbers um [vocalsound] you clearly see that the difference between the C zero of the speech and C zero of the noise portion is larger. Uh but what happens is that after spectral subtraction, [vocalsound] you also increase the variance of this [disfmarker] of C zero. And so if you apply variance normalization on this, it completely sc screw everything. [speaker001:] Mm hmm. [speaker006:] Well. [speaker001:] Mm hmm. [speaker006:] Um. Uh. Yeah. So yeah. 
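The spectral subtraction step under discussion, and the flooring that leaves the "musical noise" residual which inflates the variance of C0, might look roughly like this. A minimal sketch only, with an assumed flooring scheme; it is neither the Ericsson module nor the OGI implementation:

```python
import numpy as np

def spectral_subtract(power_spec, noise_est, floor=0.01):
    """Basic power-spectral subtraction (sketch).

    power_spec: (frames, bins) noisy power spectrum.
    noise_est:  (bins,) noise power estimate, e.g. averaged over
                frames the VAD labeled as nonspeech.
    The flooring of negative differences is what leaves the randomly
    fluctuating residual ("musical noise") in the nonspeech portions.
    """
    clean = power_spec - noise_est
    return np.maximum(clean, floor * noise_est)
```

Because the residual fluctuates frame to frame, a variance normalization applied after this step sees an artificially large noise variance, which matches the interaction described in the meeting.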
And what they did at OGI is just [vocalsound] uh they don't use on line normalization, for the moment, on spectral subtraction and I think [disfmarker] Yeah. I think as soon as they will try on line normalization [vocalsound] there will be a problem. So yeah, we're working on the same thing but [vocalsound] I think uh with different [disfmarker] different system and [disfmarker] [speaker001:] Right. I mean, i the Intellectually it's interesting to work on things th uh one way or the other [speaker006:] Mm hmm. [speaker001:] but I'm [disfmarker] I'm just wondering if um [disfmarker] [vocalsound] on the list of things that there are to do, if there are things that we won't do because [vocalsound] we've got two groups doing the same thing. [speaker006:] Mm hmm. [speaker001:] Um. [speaker006:] Mm hmm. [speaker001:] That's [disfmarker] Um. Just [disfmarker] just asking. Uh. I mean, it's [disfmarker] [speaker006:] Yeah, well, uh. [speaker002:] There also could be [disfmarker] I mean. I can maybe see a reason f for both working on it too if [vocalsound] um you know, if [disfmarker] if [disfmarker] if you work on something else and [disfmarker] and you're waiting for them to give you [vocalsound] spectral subtraction [disfmarker] I mean it's hard to know whether [vocalsound] the effects that you get from the other experiments you do will [vocalsound] carry over once you then bring in their spectral subtraction module. So it's [disfmarker] it's almost like everything's held up waiting for this [vocalsound] one thing. I don't know if that's true or not, but I could see how [disfmarker] [speaker006:] Mmm. [speaker002:] Maybe that's what you were thinking. [speaker001:] I don't know. I don't know. [vocalsound] I mean, we still evidently have a latency reduction plan which [disfmarker] which isn't quite what you'd like it to be. That [disfmarker] that seems like one prominent thing. 
And then uh weren't issues of [disfmarker] of having a [disfmarker] a second stream or something? That was [disfmarker] Was it [disfmarker] There was this business that, you know, we [disfmarker] we could use up the full forty eight hundred bits, and [disfmarker] [speaker006:] Yeah. But I think they ' I think we want to work on this. They also want to work on this, so. Uh. [vocalsound] yeah. We [disfmarker] we will try MSG, but um, yeah. And they are t I think they want to work on the second stream also, but more with [vocalsound] some kind of multi band or, well, what they call TRAP or generalized TRAP. [speaker001:] Mm hmm. [speaker006:] Um. So. [speaker001:] OK. Do you remember when the next meeting is supposed to be? [speaker006:] It's uh in June. [speaker001:] the next uh [disfmarker] In June. OK. [speaker006:] Yeah. [speaker001:] Yeah. Um. Yeah, the other thing is that you saw that [disfmarker] that mail about uh the VAD [disfmarker] V A Ds performing quite differently? That that uh So um. This [disfmarker] there was this experiment of uh "what if we just take the baseline?" [speaker006:] Mmm. [speaker001:] set uh of features, just mel cepstra, and you inc incorporate the different V A And it looks like the [disfmarker] the French VAD is actually uh better [disfmarker] significantly better. [speaker002:] Improves the baseline? [speaker001:] Yeah. Yeah. [speaker006:] Yeah but I don't know which VAD they use. Uh. If the use the small VAD I th I think it's on [disfmarker] I think it's easy to do better because it doesn't work at all. So. I [disfmarker] I don't know which [disfmarker] which one. It's Pratibha that [disfmarker] that did this experiment. [speaker004:] Yeah. [speaker006:] Um. We should ask which VAD she used. [speaker004:] I don't [@ @]. He [disfmarker] Actually, I think that he say with the good VAD of [disfmarker] from OGI and with the Alcatel VAD. And the experiment was sometime better, sometime worse. 
[speaker006:] Yeah but I [disfmarker] it's uh [disfmarker] I think you were talking about the other mail that used VAD on the reference features. [speaker001:] Yes. [speaker006:] Yeah. [speaker004:] I don't remember. [speaker001:] And on that one, uh the French one is [disfmarker] was better. It was just better. [speaker004:] Mm hmm. [speaker001:] I mean it was enough better that [disfmarker] that it would [vocalsound] uh account for a fair amount of the difference between our performance, actually. [speaker006:] Mm hmm. [speaker004:] Mm hmm. [speaker001:] So. [vocalsound] Uh. So if they have a better one, we should use it. I mean. You know? it's [disfmarker] you can't work on everything. [speaker006:] Yeah. [speaker001:] Uh. [vocalsound] Uh. Yeah. [speaker006:] Yeah, so we should find out if it's really better. I mean if it [disfmarker] the [disfmarker] compared to the small or the big network. [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker006:] And perhaps we can easily improve if [disfmarker] if we put like mean normalization before the [disfmarker] before the VAD. Because [disfmarker] [vocalsound] as [disfmarker] as you've [pause] mentioned. [speaker001:] Yeah. [speaker006:] Mmm. [speaker001:] H Hynek will be back in town uh the week after next, back [disfmarker] back in the country. So. And start [disfmarker] start organizing uh [vocalsound] more visits and connections and so forth, [speaker006:] Mm hmm. [speaker001:] and [disfmarker] uh working towards June. [speaker006:] Yeah. Mm hmm. [speaker004:] Also is Stephane was thinking that [vocalsound] maybe it was useful to f to think about uh [vocalsound] voiced unvoiced [disfmarker] to work uh here in voiced unvoiced detection. [speaker006:] Yeah. [speaker004:] And we are looking [vocalsound] [vocalsound] in the uh signal. [speaker006:] Yeah. 
Yeah, my feeling is that um actually [vocalsound] when we look at all the proposals, ev everybody is still using some kind of spectral envelope and um it's [disfmarker] [speaker001:] Right. No use of pitch uh basically. Yeah. [speaker006:] Yeah, well, not pitch, but to look at the um fine [disfmarker] at the [disfmarker] at the high re high resolution spectrum. [speaker001:] Yeah. [speaker006:] So. We don't necessarily want to find the [disfmarker] the pitch of the [disfmarker] of the sound [speaker001:] Well, it [disfmarker] [speaker006:] but uh [disfmarker] Cuz I have a feeling that [vocalsound] when we look [disfmarker] when we look at the [disfmarker] just at the envelope there is no way you can tell if it's voiced and unvoiced, if there is some [disfmarker] It's [disfmarker] it's easy in clean speech because voiced sound are more low frequency and. So there would be more, [speaker001:] Yeah. [speaker006:] uh [disfmarker] there is the first formant, which is the larger and then voiced sound are more high frequencies cuz it's frication and [disfmarker] [speaker001:] Right. [speaker006:] But, yeah. When you have noise there is no um [disfmarker] [vocalsound] if [disfmarker] if you have a low frequency noise it could be taken for [disfmarker] for voiced speech and. [speaker001:] Yeah, you can make these mistakes, [speaker006:] So. [speaker001:] but [disfmarker] but [disfmarker] [speaker002:] Isn't there some other [speaker006:] S [speaker002:] uh d [speaker006:] So I think that it [disfmarker] it would be good [disfmarker] Yeah, yeah, well, go [disfmarker] go on. [speaker002:] Uh, I was just gonna say isn't there [disfmarker] [vocalsound] aren't [disfmarker] aren't there lots of ideas for doing voice activity, or speech nonspeech rather, [comment] um by looking at [vocalsound] um, you know, uh [vocalsound] I guess harmonics or looking across time [disfmarker] [speaker001:] Well, I think he was talking about the voiced unvoiced, though, [speaker006:] Mmm. 
[speaker001:] right? [speaker002:] Yeah. [speaker001:] So, not the speech nonspeech. [speaker002:] Well even with e uh w ah [speaker001:] Yeah. [speaker002:] you know, uh even with the voiced non [pause] voiced unvoiced [speaker006:] Mmm. [speaker002:] um [disfmarker] I thought that you or [pause] somebody was talking about [disfmarker] [speaker001:] Well. [speaker006:] So. [speaker001:] Uh yeah. B We should let him finish what he w he was gonna say, [speaker002:] OK. [speaker001:] and [disfmarker] [speaker002:] So go ahead. [speaker006:] Um yeah, so yeah, I think if we try to develop a second stream well, there would be one stream that is the envelope and the second, it could be interesting to have that's [disfmarker] something that's more related to the fine structure of the spectrum. And. Yeah, so I don't know. We were thinking about like using ideas from [disfmarker] from Larry Saul, have a good voice detector, have a good, well, voiced speech detector, that's working on [disfmarker] on the FFT and [vocalsound] uh Larry Saul could be an idea. [speaker001:] U [speaker006:] We were are thinking about just [vocalsound] kind of uh taking the spectrum and computing the variance of [disfmarker] of the high resolution spectrum [vocalsound] and things like this. [speaker001:] So u s u OK. So [disfmarker] So many [vocalsound] tell you something about that. Uh we had a guy here some years ago who did some work on [vocalsound] um [vocalsound] making use of voicing information uh to [vocalsound] help in reducing the noise. [speaker006:] Yeah? Mm hmm. [speaker001:] So what he was doing is basically y you [disfmarker] [vocalsound] you do estimate the pitch. And um you [disfmarker] from that you [disfmarker] you estimate [disfmarker] or you estimate fine harmonic structure, whichev ei either way, it's more or less the same. 
But [vocalsound] uh the thing is that um you then [vocalsound] can get rid of things that are not [disfmarker] i if there is strong harmonic structure, [vocalsound] you can throw away stuff that's [disfmarker] that's non harmonic. [speaker006:] Mm hmm. Mm hmm. [speaker001:] And that [disfmarker] that is another way of getting rid of part of the noise [speaker006:] Yeah. Yeah. [speaker001:] So um that's something [vocalsound] that is sort of finer, brings in a little more information than just spectral subtraction. Um. [speaker006:] Mm hmm. [speaker001:] And he had some [disfmarker] I mean, he did that sort of in combination with RASTA. It was kind of like RASTA was taking care of convolutional stuff [speaker006:] Mmm. Mm hmm. [speaker001:] and he was [disfmarker] and [disfmarker] and got some [disfmarker] some decent results doing that. So that [disfmarker] that's another [disfmarker] another way. [speaker006:] Yeah. [speaker001:] But yeah, there's [disfmarker] there's [disfmarker] [speaker006:] Mmm. [speaker001:] Right. There's all these cues. We've [speaker006:] But [disfmarker] [speaker001:] actually back when Chuck was here we did some voiced unvoiced uh [vocalsound] classification using a bunch of these, and [disfmarker] and uh works OK. Obviously it's not perfect but [speaker006:] Mm hmm. [speaker001:] um [disfmarker] But the thing is that you can't [disfmarker] given the constraints of this task, we can't, [vocalsound] in a very nice way, feed [pause] forward to the recognizer the information [disfmarker] the probabilistic information that you might get about whether it's voiced or unvoiced, [speaker006:] Mm hmm. [speaker001:] where w we can't you know affect the [disfmarker] [vocalsound] the uh distributions or anything. But we [disfmarker] what we uh [disfmarker] I guess we could Yeah. [speaker002:] Didn't the head dude send around that message? 
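The fine-structure voicing cue being proposed, as opposed to an envelope-based one, could be sketched with a normalized autocorrelation peak in the pitch lag range. A hypothetical illustration: the lag range, sample rate, and use of a single scalar score are all assumptions, and this is not the harmonic-filtering system described above:

```python
import numpy as np

def harmonicity(frame, fs=8000, fmin=60.0, fmax=400.0):
    """Crude voicing cue: height of the normalized autocorrelation
    peak within the plausible pitch period range.  Values near 1
    indicate strong periodicity (voiced); noise scores much lower."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0
    ac = ac / ac[0]                       # normalize by zero-lag energy
    lo = int(fs / fmax)                   # shortest pitch period in samples
    hi = min(int(fs / fmin), len(ac) - 1) # longest pitch period in samples
    return float(ac[lo:hi + 1].max())
```

A cue like this looks at the harmonic fine structure directly, so its errors should be roughly independent of an envelope-based classifier's, which is the property argued for below.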
Yeah, I think you sent us all a copy of the message, where he was saying that [disfmarker] I I'm not sure, exactly, what the gist of what he was saying, but something having to do with the voice [vocalsound] activity detector and that it will [disfmarker] [vocalsound] that people shouldn't put their own in or something. It was gonna be a [disfmarker] [speaker001:] That [disfmarker] But [disfmarker] OK. So that's voice activity detector as opposed to voicing detector. [speaker006:] They didn't. [speaker001:] So we're talking about something a little different. [speaker006:] Mmm. [speaker002:] Oh, I'm sorry. [speaker006:] Mmm. [speaker002:] I [disfmarker] I missed that. [speaker001:] Right? I guess what you could do, maybe this would be w useful, if [disfmarker] if you have [disfmarker] if you view the second stream, yeah, before you [disfmarker] before you do KLT's and so forth, if you do view it as probabilities, and if it's an independent [disfmarker] So, if it's [disfmarker] if it's uh not so much [vocalsound] envelope based by fine structure based, uh looking at harmonicity or something like that, um if you get a probability from that information and then multiply it by [disfmarker] you know, multiply by all the voiced [vocalsound] outputs and all the unvoiced outputs, you know, then [vocalsound] use that as the [speaker006:] Mm hmm. [speaker001:] uh [disfmarker] take the log of that or [vocalsound] uh pre pre uh [disfmarker] pre nonlinearity, [speaker006:] Yeah. i if [disfmarker] [speaker001:] uh and do the KLT on the [disfmarker] on [disfmarker] on that, [speaker006:] Yeah. [speaker001:] then that would [disfmarker] that would I guess be uh a reasonable use of independent information. So maybe that's what you meant. And then that would be [disfmarker] [speaker006:] Yeah, well, I was not thinking this [disfmarker] yeah, this could be an yeah So you mean have some kind of probability for the v the voicing and then use a tandem system [speaker001:] R Right. 
So you have a second neural net. It could be pretty small. Yeah. If you have a tandem system and then you have some kind of [disfmarker] it can be pretty small [disfmarker] net [disfmarker] [speaker006:] Mm hmm. [speaker001:] we used [disfmarker] we d did some of this stuff. Uh I [disfmarker] I did, some years ago, [speaker006:] Yeah. [speaker001:] and the [disfmarker] and [disfmarker] and you use [disfmarker] [vocalsound] the thing is to use information primarily that's different as you say, it's more fine structure based than [disfmarker] than envelope based [speaker006:] Mm hmm. [speaker001:] uh so then it you [disfmarker] you [disfmarker] you can pretty much guarantee it's stuff that you're not looking at very well with the other one, and uh then you only use for this one distinction. [speaker006:] Alright. [speaker001:] And [disfmarker] and so now you've got a probability of the cases, and you've got uh the probability of the finer uh categories on the other side. You multiply them where appropriate and uh [vocalsound] um [speaker006:] I see, yeah. Mm hmm. [speaker001:] if they really are from independent [pause] information sources then [vocalsound] they should have different kinds of errors [speaker006:] Mm hmm. [speaker001:] and roughly independent errors, [speaker006:] Mm hmm. [speaker001:] and [vocalsound] it's a good choice for [disfmarker] [speaker006:] Mm hmm. Yeah. [speaker001:] Uh. Yeah, that's a good idea. [speaker006:] Yeah. Because, yeah, well, spectral subtraction is good and we could u we could use the fine structure to [disfmarker] to have a better estimate of the noise but [vocalsound] still there is this issue with spectral subtraction that it seems to increase the variance of [disfmarker] of [disfmarker] of [speaker001:] Yeah. [speaker006:] um Well it's this musical noise which is annoying if you d you do some kind of on line normalization after. [speaker001:] Right. [speaker006:] So. Um. Yeah. Well. 
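The stream combination just described — multiplying an independent voicing probability into the tandem net's voiced and unvoiced posteriors, taking the log, and only then doing the KLT — can be sketched as follows. Names and array shapes are assumptions for illustration:

```python
import numpy as np

def combine_streams(phone_post, p_voiced, voiced_mask, eps=1e-10):
    """Multiply an independent voicing probability into phone posteriors.

    phone_post : (frames, classes) posteriors from the envelope-based net.
    p_voiced   : (frames,) voicing probability from the fine-structure stream.
    voiced_mask: (classes,) boolean, True for voiced phone classes.
    Returns log-domain features, ready for a KLT stage.
    """
    scale = np.where(voiced_mask[None, :],
                     p_voiced[:, None],          # scale voiced classes
                     (1.0 - p_voiced)[:, None])  # scale unvoiced classes
    combined = phone_post * scale
    combined /= combined.sum(axis=1, keepdims=True)  # renormalize per frame
    return np.log(combined + eps)                    # log before the KLT
```

If the two streams really draw on independent information (fine structure vs. envelope), their errors should be roughly independent, which is the argument for this product combination.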
Spectral subtraction and on line normalization don't seem to [disfmarker] to go together very well. I [speaker001:] Or if you do a spectral subtraction [disfmarker] do some spectral subtraction first and then do some on line normalization then do some more spectral subtraction [disfmarker] I mean, maybe [disfmarker] maybe you can do it layers or something so it doesn't [disfmarker] doesn't hurt too much or something. [speaker006:] Ah, yeah. [speaker001:] But it [disfmarker] but uh, anyway I think I was sort of arguing against myself there by giving that example [speaker006:] Yeah. [speaker001:] uh I mean cuz I was already sort of [vocalsound] suggesting that we should be careful about not spending too much time on exactly what they're doing In fact if you get [disfmarker] if you go into uh [disfmarker] a uh harmonics related thing [vocalsound] it's definitely going to be different than what they're doing and uh uh [speaker006:] Mm hmm. [speaker001:] should have some interesting properties in noise. Um. [vocalsound] I know that when have people have done [pause] um sort of the obvious thing of taking [vocalsound] uh your feature vector and adding [pause] in some variables which are [vocalsound] pitch related or uh that [disfmarker] it hasn't [disfmarker] my impression it hasn't particularly helped. [speaker006:] It [disfmarker] it i [speaker001:] Uh. Has not. [speaker006:] has not, yeah. [speaker001:] Yeah. [speaker006:] Oh. [speaker001:] But I think uh [pause] that's [disfmarker] that's a question for this uh you know extending the feature vector versus having different streams. [speaker006:] Was it nois noisy condition? the example that you [disfmarker] you just [speaker001:] And [disfmarker] and it may not have been noisy conditions. [speaker006:] Yeah. [speaker001:] Yeah. I [disfmarker] I don't remember the example but it was [disfmarker] [vocalsound] it was on some DARPA data and some years ago and so it probably wasn't, actually [speaker006:] Mm hmm. Mm hmm. 
Yeah. But we were thinking, we discussed with Barry about this, and [vocalsound] perhaps [vocalsound] thinking [disfmarker] we were thinking about some kind of sheet cheating experiment where we would use TIMIT [speaker001:] Uh huh. [speaker006:] and see if giving the d uh, this voicing bit would help in [disfmarker] in terms of uh frame classification. [speaker001:] Why don't you [disfmarker] why don't you just do it with Aurora? [speaker006:] Mmm. Yeah, but [disfmarker] but [disfmarker] B but we cannot do the cheating, this cheating thing. [speaker001:] Just any i in [disfmarker] in each [disfmarker] in each frame [speaker005:] We're [disfmarker] [speaker001:] uh [disfmarker] [speaker005:] We need labels. [speaker006:] Well. Cuz we don't have [disfmarker] [speaker001:] Why not? [speaker006:] Well, for Italian perhaps we have, but we don't have this labeling for Aurora. We just have a labeling with word models but not for phonemes. [speaker001:] I see. [speaker004:] Not for foreigners. [speaker005:] we don't have frame [disfmarker] frame level transcriptions. [speaker004:] Right. [speaker001:] Um. [speaker006:] Um. [vocalsound] Yeah. [speaker001:] But you could [disfmarker] I mean you can [disfmarker] you can align so that [disfmarker] It's not perfect, but if you [disfmarker] if you know what was said and [disfmarker] [speaker002:] But the problem is that their models are all word level models. So there's no phone models [pause] that you get alignments for. [speaker006:] Mm hmm. [speaker001:] Oh. [speaker002:] You [disfmarker] So you could find out where the word boundaries are but that's about it. [speaker001:] Yeah. I see. [speaker005:] S But we could use uh the [disfmarker] the noisy version that TIMIT, which [vocalsound] you know, is similar to the [disfmarker] the noises found in the TI digits [vocalsound] um portion of Aurora. [speaker006:] Yeah. noise, yeah. Yeah, that's right, yep. Mmm. 
Well, I guess [disfmarker] I guess we can [disfmarker] we can say that it will help, [speaker001:] Yeah. [speaker006:] but I don't know. If this voicing bit doesn't help, uh, I think we don't have to [disfmarker] to work more about this because [disfmarker] Uh. It's just to know if it [disfmarker] how much i it will help [speaker001:] Uh. Yeah. [speaker006:] and to have an idea of how much we can gain. [speaker001:] Right. I mean in experiments that we did a long time ago [speaker006:] Mmm. [speaker001:] and different ta it was probably Resource Management or something, um, I think you were getting [pause] something like still eight or nine percent error on the voicing, as I recall. And um, so um [speaker005:] Another person's voice. [speaker001:] what that said is that, sort of, left to its own devices, like without the [disfmarker] a strong language model and so forth, that you would [disfmarker] [vocalsound] you would make significant number of errors [vocalsound] just with your uh probabilistic machinery in deciding [speaker002:] It also [disfmarker] [speaker001:] one oh [speaker002:] Yeah, the [disfmarker] though I think uh there was one problem with that in that, you know, we used canonical mapping so [vocalsound] our truth may not have really been [pause] true to the acoustics. [speaker001:] Uh huh. [speaker005:] Hmm. [speaker002:] So. [speaker006:] Mmm. [speaker001:] Yeah. Well back twenty years ago when I did this voiced unvoiced stuff, we were getting more like [vocalsound] ninety seven or ninety eight percent correct in voicing. But that was [vocalsound] speaker dependent [vocalsound] actually. We were doing training [vocalsound] on a particular announcer [speaker006:] Mm hmm. [speaker001:] and [disfmarker] and getting a [vocalsound] very good handle on the features. [speaker006:] Mm hmm. 
[speaker001:] And we did this complex feature selection thing where we looked at all the different possible features one could have for voicing and [disfmarker] [vocalsound] and [disfmarker] and uh [disfmarker] and exhaustively searched [vocalsound] all size subsets [speaker002:] Wow! [speaker001:] and [disfmarker] and uh [disfmarker] for [disfmarker] for that particular speaker and you'd find you know the five or six features which really did well on them. [speaker006:] Mm hmm. [speaker001:] And then doing [disfmarker] doing all of that we could get down to two or three percent error. [speaker006:] Mm hmm. [speaker001:] But that, again, was speaker dependent with [vocalsound] lots of feature selection and a very complex sort of thing. [speaker006:] Mmm. [speaker001:] So I would [disfmarker] I would believe [vocalsound] that uh it was quite likely that um looking at envelope only, that we'd be [vocalsound] significantly worse than that. [speaker006:] Mm hmm. [speaker001:] Uh. [speaker006:] And the [disfmarker] all the [disfmarker] the SpeechCorders? what's the idea behind? Cuz they [disfmarker] they have to [disfmarker] Oh, they don't even have to detect voiced spe speech? [speaker001:] The modern ones don't do a [disfmarker] [vocalsound] a simple switch. [speaker006:] They just work on the code book and find out the best excitation. [speaker001:] They work on the code book excitation. Yeah they do [vocalsound] analysis by synthesis. They try [disfmarker] they [disfmarker] they try every [disfmarker] every possible excitation they have in their code book and find the one that matches best. [speaker006:] Yeah. Mmm. Alright. Yeah. So it would not help. [speaker001:] Yeah. [speaker005:] Hmm. [speaker001:] Uh. O K. [speaker002:] Can I just mention one other interesting thing? [speaker001:] Yeah. [speaker002:] Um. One of the ideas that we [pause] had come up with last week for things to try to [vocalsound] improve the system [disfmarker] Um. 
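The exhaustive feature-subset search described for that speaker-dependent voicing work can be sketched generically. A toy illustration only: `score_fn` stands in for whatever voicing classification accuracy was actually measured, and exhaustive search is of course only feasible for a small feature set:

```python
from itertools import combinations

def best_feature_subset(score_fn, n_features, max_size):
    """Exhaustively score every feature subset up to max_size and keep
    the best one.  score_fn(subset) returns e.g. voicing classification
    accuracy for that subset of feature indices."""
    best_subset, best_score = None, float('-inf')
    for k in range(1, max_size + 1):
        for subset in combinations(range(n_features), k):
            s = score_fn(subset)
            if s > best_score:
                best_subset, best_score = subset, s
    return best_subset, best_score
```

This is the kind of search that would find "the five or six features which really did well" for one particular speaker.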
Actually I [disfmarker] I s we didn't [disfmarker] I guess I wrote this in after the meeting b but [vocalsound] the thought I had was um looking at the language model that's used in the HTK recognizer, which is basically just a big [vocalsound] loop, [speaker005:] Mm hmm. [speaker002:] right? So you [disfmarker] it goes "digit" [speaker004:] Mm hmm. [speaker002:] and then that can be [disfmarker] either go to silence or go to another digit, which [disfmarker] That model would allow for the production of [vocalsound] infinitely long sequences of digits, right? [speaker001:] Right. [speaker002:] So. I thought "well I'm gonna just look at the [disfmarker] what actual digit strings do occur in the training data." [speaker001:] Right. [speaker002:] And the interesting thing was it turns out that there are no sequences of two long or three long digit strings [pause] in any of the Aurora training data. So it's either one, four, five, six, uh up to eleven, and then it skips and then there's some at sixteen. [speaker001:] But what about the testing data? [speaker002:] Um. I don't know. I didn't look at the test data yet. [speaker001:] Yeah. [speaker002:] So. [speaker001:] I mean if there's some testing data that has [disfmarker] has [disfmarker] [vocalsound] has two or three [disfmarker] [speaker002:] Yeah. But I just thought that was a little odd, that there were no two or three long [disfmarker] Sorry. So I [disfmarker] I [disfmarker] just for the heck of it, I made a little grammar which um, you know, had it's separate path [pause] for each length digit string you could get. So there was a one long path and there was a four long and a five long [speaker001:] Mm hmm. [speaker002:] and I tried that and it got way worse. There were lots of deletions. So it was [disfmarker] [vocalsound] you know, I [disfmarker] I didn't have any weights of these paths [speaker001:] Mm hmm. [speaker002:] or [disfmarker] I didn't have anything like that. [speaker001:] Mm hmm. 
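The check described here — tallying which digit-string lengths actually occur in the training data — amounts to something like the following. A toy sketch: real Aurora label files would need to be parsed into word lists first, and the variable names are illustrative:

```python
from collections import Counter

def digit_length_histogram(transcripts):
    """Count how many training utterances have each digit-string length.

    transcripts: iterable of word lists, e.g. [['one', 'five', 'oh'], ...]
    A gap in the histogram (e.g. no 2- or 3-long strings) suggests the
    open digit loop over-generates relative to the data.
    """
    return Counter(len(t) for t in transcripts)
```

A length-constrained grammar built from this histogram would still need per-path weights, as noted above, or the decoder is free to trade insertions for deletions badly.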
[speaker002:] And I played with tweaking the [vocalsound] word transition penalties a bunch, but I couldn't go anywhere. But um. [speaker001:] Hmm. [speaker002:] I thought "well if I only allow [disfmarker]" Yeah, I guess I should have looked at [disfmarker] to see how often there was a mistake where a two long or a three long path was actually put out as a hypothesis. Um. But. [speaker001:] Hmm. [speaker002:] So to do that right you'd probably want to have [disfmarker] [vocalsound] allow for them all but then have weightings and things. So. I just thought that was a interesting [vocalsound] thing about the data. [speaker001:] OK. So we're gonna read some more digit strings I guess? [speaker002:] Yeah. You want to go ahead, Morgan? [speaker001:] Sure. [speaker007:] How about channel [speaker001:] OK. [speaker005:] We're recording. [speaker003:] Yeah, go ahead. [speaker007:] Alright. [speaker003:] Alright, and no crash. [speaker005:] I pre crashed it. [speaker001:] Hmm. [speaker003:] Yeah. [speaker006:] Pre crashed! [speaker004:] It never crashes on me. [speaker005:] I think it's actually [disfmarker] [speaker004:] What is [disfmarker] what is that? [speaker005:] it depends on if the temp files are there or not, that [disfmarker] at least that's my current working hypothesis, [speaker004:] Ah. [speaker005:] that I think what happens is it tries to clear the temp files and if they're too big, it crashes. [speaker004:] Ah. [speaker002:] When the power went out the other day and I restarted it, it crashed the first time. [speaker005:] Oh, that's right. [speaker002:] After the power out [speaker004:] So then there would be no temp files. [speaker003:] Yeah. [speaker005:] Uh, no, it doesn't [disfmarker] it doesn't clear those necessarily, [speaker004:] OK. [comment] Hmm. Oh wait [disfmarker] It [disfmarker] it doesn't clear them, OK. [speaker005:] so. [speaker003:] Hmm, no connection. 
[speaker005:] It's [disfmarker] i they're called temp files, but they're not actually in the temp directory they're in the scratch, so. They're not backed up, but they're not erased either on power failure. [speaker004:] But that's usually the meeting that I recorded, and it neve it doesn't crash on me. [speaker005:] Oh well. [speaker002:] Well this wasn't [disfmarker] Actually, this wasn't a before your meeting, this was, um, Tuesday afternoon when, um, uh, Robert just wanted to do a little recording, [speaker004:] Oh [disfmarker] Oh, right. OK. [speaker002:] and the power had gone out earlier in the day. [speaker004:] Huh, OK. [speaker003:] I don't know when would be a good excuse for it, but I just can't wait to be giving a talk t and [disfmarker] and [disfmarker] and use the example from last week with everybody t doing the digits at once. [speaker005:] Yeah. [speaker003:] I'd love to play somebody that. [speaker001:] That was fun. That was fun. [speaker004:] It was quick. [speaker003:] It was. It was really efficient. [speaker002:] Talk about a good noise shield. You know? You wanted to pe keep people from listening in, you could like have that playing outside the room. Nobody could listen in. [speaker003:] Yeah. [speaker004:] Well, I had this idea we could make our whole meeting faster that way. [speaker003:] Yeah. Everybody give the reports about what they were doing at exactly the same time, [speaker004:] And we'll just all leave, and [disfmarker] [speaker002:] And then we'll [disfmarker] we'll go back later and review the individual channels, [speaker003:] yeah. [speaker005:] Yep, and then everyone can listen to it later. Yes. Absolutely. [speaker003:] Actually isn't that what we have been doing? [speaker004:] Yeah. [speaker002:] right? If you wanna know what [disfmarker] [speaker005:] It's what it sounds like. [speaker002:] Practically, huh. With all the overlaps. [speaker001:] Yeah. [speaker003:] What are we doing? 
[speaker005:] I [disfmarker] Since I've been gone all week, I didn't send out a reminder for an agenda, so. [speaker003:] Yeah, and I'm just [disfmarker] [speaker005:] Do we have anything to talk about or should we just read digits and go? [speaker002:] I wouldn't mind hearing how the conference was. [speaker003:] What conference? [speaker004:] Uh, I had one question about [disfmarker] [speaker005:] Yeah, really. It's all a blur. [speaker004:] Aren't the UW folks coming this weekend? [speaker005:] Yep. [speaker006:] No. The next, right? [speaker005:] Next weekend, week from [disfmarker] [speaker004:] Next weekend? [speaker003:] That is right. [speaker004:] Sorry, not [disfmarker] not [disfmarker] not the days coming up, but [disfmarker] [speaker003:] The next weekend. [speaker006:] It's like the [disfmarker] [speaker005:] A week from Saturday. [speaker004:] Yeah, within ten days. [speaker003:] That's when they're coming. That's correct. [speaker004:] So, are we [disfmarker] do we have like an agenda or anything that we should be [disfmarker] [speaker003:] No, but that would be a good idea. [speaker004:] OK. [speaker006:] So [disfmarker] so the deal is that I can, um, [vocalsound] uh, I can be available after, uh, like ten thirty or something. [speaker003:] Why don't we w [speaker006:] I don't know how s how early you wanted to [disfmarker] [speaker003:] They're not even gonna be here until eleven or so. [speaker006:] Oh, OK. [speaker005:] That's good. [speaker006:] So [disfmarker] [speaker004:] Wait, this is on [disfmarker] on Sunday? [speaker003:] Cuz they're flying up that day. Saturday. [speaker004:] Or Saturday? [speaker003:] Saturday. [speaker006:] Saturday. [speaker004:] OK. [speaker005:] Well, y [speaker003:] S Saturday. [speaker006:] Mm hmm. [speaker003:] Yeah. [speaker005:] Eurospeech is due on Friday and then I'm going down to San [disfmarker] uh, San Jose Friday night, so, if [disfmarker] you know, if we start nice and late Saturday that's a good thing. 
[speaker003:] No, I mean, they're flying up from [disfmarker] from [disfmarker] [speaker005:] Seattle. [speaker003:] down from Seattle. [speaker005:] They're flying from somewhere to somewhere, [speaker003:] Yeah, and they'll end up here. So b and also Brian Kingsbury is actually flying from, uh, the east coast on that [disfmarker] that morning. [speaker001:] Excellent. [speaker003:] So, i I [disfmarker] I will be [disfmarker] I mean, he's taking a very early flight [speaker006:] Oh. [speaker003:] and we do have the time work difference running the right way, but I still think that there's no way we could start before eleven. It might end up really being twelve. So when we get closer we'll find people's plane schedules, and let everybody know. Uh, So. That's good. [speaker005:] But, uh, yeah maybe an agenda, or at least some things to talk about would be a good idea. [speaker003:] Well we can start gathering those [disfmarker] those ideas, but then we [disfmarker] we should firm it up by next [disfmarker] next Thursday's meeting. [speaker001:] Will we have time to, um, to prepare something that we [disfmarker] in the format we were planning for the IBM transcribers by then, or [disfmarker]? [speaker005:] Oh yeah. Absolutely. [speaker001:] OK. [speaker005:] So have you heard back from Brian about that, Chuck? [speaker002:] Yes, um, he's [disfmarker] I [disfmarker] I'm sorry, I should have forwarded that along. Uh, [vocalsound] oh I [disfmarker] I think I mentioned at the last meeting, he said that, um, he talked to them and it was fine [disfmarker] with the beeps they would be [disfmarker] That's easy for them to do. [speaker005:] Great. OK. So, uh, oh, though Thi Thilo isn't here, um, but, uh, I [disfmarker] I have the program to insert the beeps. What I don't have is something to parse the output of the channelized transcripts to find out where to put the beeps, but that should be really easy to do. 
So do we have a meeting that that's been done with, that we've tightened it up to the point where we can actually give it to IBM and have them try it out? [speaker001:] He's [disfmarker] he's [disfmarker] He generated, um, a channel wise presegmented version of a meeting, but it was Robustness rather than EDU so I guess depends on whether we're willing to use Robustness? [speaker005:] Mm hmm. [speaker002:] Well for this experiment I think we can use pre pretty much anything. [speaker001:] OK. [speaker002:] This experiment of just [disfmarker] [speaker005:] Well we had [disfmarker] we had talked about doing maybe EDU as a good choice, though. Well, [vocalsound] whatever we have. [speaker002:] Well we've talked about that as being the next ones we wanted to transcribe. [speaker005:] Right. [speaker002:] But for the purpose of sending him a sample one to [disfmarker] f [speaker001:] OK. [speaker005:] Yeah, maybe it doesn't matter. [speaker001:] Great. [speaker002:] I [disfmarker] I don't think it matte [speaker001:] I'll [disfmarker] I'll [disfmarker] I'll, um, get [disfmarker] make that available. [speaker005:] OK, and has it been corrected? [speaker001:] Oh, well, wait. Um [disfmarker] [speaker005:] Hand checked? Cuz that was one of the [vocalsound] processes we were talking about as well. [speaker002:] Right, so we need to run Thilo's thing on it, [speaker001:] That's right. [speaker002:] and then we go in and adjust the boundaries. [speaker001:] Yeah that's right. Yeah, we haven't done that. [speaker005:] And time how long it takes. [speaker001:] I [disfmarker] I could set someone on that tomorrow. [speaker002:] Right. OK. And we probably don't have to do necessarily a whole meeting for that if we just wanna send them a sample to try. [speaker001:] I think they're coming [disfmarker] OK. What would be a good number of minutes? [speaker002:] I don't know, maybe we can figure out how long it'll take [@ @] to [disfmarker] to do. 
[speaker005:] Um, I don't know, it seems to me w we probably should go ahead and do a whole meeting because we'll have to transcribe the whole meeting anyway sometime. [speaker003:] Yes except that if they had [disfmarker] if there was a choice between having fifteen minutes that was fully the way you wanted it, and having a whole meeting that didn't get at what you wanted for them [disfmarker] It's just dependent of how much [disfmarker] [speaker005:] Like I [disfmarker] I mean I guess if we have to do it again anyway, but, uh [speaker003:] Yeah. [speaker002:] I guess, the only thing I'm not sure about is, um, how quickly can the transcribers scan over and fix the boundaries, [speaker001:] Mm hmm. [speaker002:] and [disfmarker] I mean, is it pretty easy? [speaker005:] I think it's gonna be one or two times real time at [disfmarker] Wow, excuse me, two or more times real time, right? Cuz they have to at least listen to it. [speaker003:] Can we pipeline it so that say there's, uh, the transcriber gets done with a quarter of the meeting and then we [disfmarker] you run it through this other [disfmarker] other stuff? [speaker005:] Well the other stuff is I B [speaker003:] Uh, [speaker005:] I'm just thinking that from a data [disfmarker] keeping track of the data point of view, it may be best to send them whole meetings at a time and not try to send them bits and pieces. [speaker003:] OK, so. Oh, that's right. So the first thing is the automatic thing, [speaker005:] Right. [speaker003:] and then it's [disfmarker] then it's [disfmarker] then it's the transcribers tightening stuff up, and then it's IBM. [speaker001:] Mm hmm. Mm hmm, mm hmm. [speaker005:] Right. [speaker003:] OK, so you might as well ha run the automatic thing over the entire meeting, and then [disfmarker] and then, uh, you would give IBM whatever was fixed. [speaker001:] And have them fix it over the entire meeting too? [speaker005:] Right. 
[speaker003:] Well, yeah, but start from the beginning and go to the end, right? So if they were only half way through then that's what you'd give IBM. [speaker001:] OK. [speaker003:] Right? [speaker002:] As of what point? I mean. The [disfmarker] I guess the question on my mind is do we wait for the transcribers to adjust the marks for the whole meeting before we give anything to IBM, or do we go ahead and send them a sample? [speaker003:] Why wouldn't we s [@ @] w [speaker002:] Let their [disfmarker] [speaker003:] i if they were going sequentially through it, why wouldn't we give them [disfmarker] I mean i are we trying to get something done by the time Brian comes? [speaker002:] Well I [disfmarker] I [disfmarker] I mean, I don't know. [speaker005:] That was the question. Though. [speaker003:] So if we [disfmarker] if we were, then it seems like giving them something, whatever they had gotten up to, would be better than nothing. [speaker002:] Yeah. Uh. That [disfmarker] I agree. I agree. [speaker005:] Well, I don't think [disfmarker] I mean, h they [disfmarker] they typically work for what, four hours, something like that? [speaker001:] Hmm, I gue hmm. [speaker005:] I think the they should be able to get through a whole meeting in one sitting. I would think, unless it's a lot harder than we think it is, which it could be, certainly. [speaker001:] If it's got like for speakers then I guess [disfmarker] I mean if [disfmarker] [speaker005:] Or seven or eight. [speaker002:] We're just doing the individual channels, right? [speaker001:] Individual channels. Yeah. 
[speaker002:] So it's gonna be, depending on the number of people in the meeting, um, [speaker001:] I guess there is this issue of, you know, if [disfmarker] if the segmenter thought there was no speech on [disfmarker] on a particular stretch, on a particular channel, [speaker005:] Well [disfmarker] [speaker001:] and there really was, then, if it didn't show up in a mixed signal to verify, then it might be overlooked, so, I mean, the question is "should [disfmarker] should a transcriber listen to the entire thing or can it g can it be based on the mixed signal?" And I th eh so far as I'm concerned it's fine to base it on the mixed signal at this point, and [disfmarker] [speaker005:] That's what it seems to me too, in that if they need to, just like in the other cases, they can listen to the individual, if they need to. [speaker001:] And that cuts down the time. Yeah. [speaker005:] But they don't have to for most of it. [speaker001:] Yeah, that's good. So. Yeah. [speaker002:] I don't see how that will work, though. [speaker001:] Good, good, good. What [disfmarker] what aspect? [speaker003:] So you're talking about tightening up time boundaries? [speaker002:] Yeah. [speaker005:] So, they have the normal channeltrans interface where they have each individual speaker has their own line, [speaker003:] So how do you [disfmarker] [speaker002:] Yeah. [speaker005:] but you're listening to the mixed signal and you're tightening the boundaries, correcting the boundaries. You shouldn't have to tighten them too much because Thilo's program does that. [speaker001:] Should be pretty good, yeah. [speaker004:] Except for [vocalsound] it doesn't do well on short things, remember. [speaker005:] Right, so [disfmarker] so you'll have to I [disfmarker] [speaker004:] It will miss them. It will miss most of the really short things. [speaker005:] Uh huh. [speaker004:] Like that. [speaker001:] But those would be [disfmarker] those would be [disfmarker] [speaker004:] Uh huh. 
[speaker005:] Uh huh! [speaker004:] It will [disfmarker] it will miss [disfmarker] Yeah, you have to say "uh huh" more slowly to [disfmarker] to get c [speaker005:] Sorry. I'll work on that. [speaker004:] No, I'm s I'm actually serious. So it will miss stuff like that which [disfmarker] [speaker005:] Well, so [disfmarker] so that's something that the transcribers will have to [disfmarker] have to do. [speaker002:] I [disfmarker] [speaker001:] Yeah, but presumably, most of those they should be able to hear from the mixed signal unless they're embedded in the heavil heavy overlap section when [disfmarker] in which case they'd be listening to the channels anyway. [speaker004:] Right, [speaker002:] That's [disfmarker] that's what I'm [disfmarker] I'm concerned about the part. [speaker004:] and that's what I'm not sure about. [speaker001:] Yeah, I am too. And I think it's an empirical question. [speaker002:] Can't we [disfmarker] uh couldn't we just have, um, I don't know, maybe this just doesn't fit with the software, but I guess if I didn't know anything about Transcriber and I was gonna make something to let them adjust boundaries, I would just show them one channel at a time, with the marks, and let them adju [speaker001:] Oh they can [disfmarker] [speaker005:] Well, but then they have to do [disfmarker] but then they [disfmarker] for this meeting they would have to do seven times real time, and it would probably be more than that. [speaker001:] Yeah, that's it. Yeah. [speaker005:] Right? Because they'd have to at least listen to each channel all the way through. [speaker002:] But i but it's very quick, [speaker001:] And if [disfmarker] Uh huh. [speaker002:] right? I mean, you scan [disfmarker] I mean, if you have a display of the waveform. [speaker005:] Oh, you're talking about visually. [speaker001:] Yeah. [speaker005:] I just don't think [disfmarker] [speaker001:] w Well, the other problem is the breaths cuz you also see the breaths on the waveform. 
I've [disfmarker] I've looked at the int uh, s I've tried to do that with a single channel, and [disfmarker] and you do see all sorts of other stuff besides just the voice. [speaker002:] Uh huh. [speaker005:] Yeah, and I [disfmarker] I think that they're going much more on acoustics than they are on visuals. [speaker001:] Well that [disfmarker] that I'm not sure. [speaker005:] So. [speaker001:] What you [disfmarker] the digital [disfmarker] what the digital task that you had your interface? Um, I know for a fact that one of those [disfmarker] sh she could really well [disfmarker] she could judge what th what the number was based on the [disfmarker] on the waveform. [speaker005:] Yeah, that's actually true. Yeah, you're right. You're absolutely right. Yeah, I found the same thing that when I was scanning through the wave form [vocalsound] I could see when someone started to read digits just by the shapes. [speaker001:] Yeah, she could tell which one was seven. [speaker005:] Um, maybe. [speaker001:] Yeah. [speaker003:] So I don't [disfmarker] [speaker005:] But [disfmarker] [speaker003:] I'm [disfmarker] I'm now entirely confused about what they do. So, they're [disfmarker] they're looking at a mixed signal, or they're looking [disfmarker] what [disfmarker] what are they looking at visually? [speaker001:] Well, they have a choice. They could choose any signal to look at. I've tried lookin but usually they look at the mixed. But I've [disfmarker] I've tried looking at the single signal and [disfmarker] and in order to judge when it [disfmarker] when it was speech and when it wasn't, [speaker005:] Oh. [speaker001:] but the problem is then you have breaths which [disfmarker] which show up on the signal. [speaker003:] But the procedure that you're imagining, I mean, people vary from this, is that they have the mixed signal wave form in front of them, [speaker001:] Yes. Yes. 
[speaker003:] and they have multiple, uh, well, let's see, there isn't [disfmarker] we don't have transcription yet. So [disfmarker] but there's markers of some sort that have been happening automatically, [speaker005:] Right. [speaker001:] Yes. [speaker003:] and those show up on the mixed signal? There's a [@ @] clicks? [speaker005:] N the t [speaker001:] Oh, they show up on the separate ribbons. [speaker005:] Right. [speaker001:] So you have a separate ribbon for each channel, [speaker003:] There're separate ribbons. [speaker001:] and [disfmarker] and i i it'll be [disfmarker] because it's being segmented as channel at a time with his [disfmarker] with Thilo's new procedure, then you don't have the correspondence of the times across the bins [disfmarker] uh across the ribbons uh you could have [disfmarker] [speaker003:] And is there a line moving across the waveform as it goes? [speaker005:] Yes. [speaker001:] Yes. [speaker003:] OK, so the way you're imagining is they kind of play it, and they see oh this happened, then this happened, then [disfmarker] and if it's about right, they just sort of let it slide, [speaker005:] Right. [speaker001:] Yeah. [speaker005:] Right. [speaker003:] and if it [disfmarker] if it [disfmarker] there's a question on something, they stop and maybe look at the individual wave form. [speaker005:] Right. [speaker001:] Oh, well not [disfmarker] not "look". [speaker005:] Well, they wouldn't look at it [pause] at this point. They would just listen. [speaker003:] They [disfmarker] they might look at it, [speaker005:] Well, the problem is that the [disfmarker] the interface doesn't really allow you to switch visuals. [speaker003:] right? [speaker001:] Not very quickly. [speaker005:] The problem is that [disfmarker] that [disfmarker] the Tcl TK interface with the visuals, it's very slow to load waveforms. [speaker001:] You can but it takes time. That's it. [speaker003:] Uh huh.
[speaker005:] And so when I tried [disfmarker] that [disfmarker] that was the first thing I tried when I first started it, [speaker001:] Oh, oh. Visually. [speaker005:] right? [speaker001:] You can [disfmarker] you can switch quickly between the audio, but you just can't get the visual display to show quickly. So you have to [disfmarker] It takes, I don't know, three, four minutes to [disfmarker] Well, I mean, it takes [disfmarker] it takes long enough [disfmarker] [speaker004:] Yeah, it's very slow to do that. [speaker001:] It takes long enough cuz it has to reload the I [disfmarker] I don't know exactly what it's doing frankly cuz [disfmarker] but it t it takes long enough that it's just not a practical alternative. [speaker004:] That w [speaker005:] Well it [disfmarker] it does some sort of shape pre computation so that it can then scroll it quickly, [speaker007:] But you can cancel that. [speaker005:] yeah. [speaker004:] Yeah. [speaker005:] But then you can't change the resolution or scroll quickly. [speaker007:] Oh, really? [speaker005:] So. [speaker007:] Huh! [speaker001:] Now you could set up multiple windows, each one with a different signal showing, and then look between the windows. Maybe that's the solution. [speaker005:] I mean, we [disfmarker] we could do different interfaces, [speaker007:] What if you preload them all? [speaker005:] right? I mean, so [disfmarker] so we could use like X Waves instead of Transcriber, [speaker001:] Yeah. [speaker005:] and it loads faster, certainly. [speaker007:] What if you were to preload all the channels or [disfmarker] or initially [disfmarker] like doesn't [disfmarker] [speaker005:] Well that's what I tried originally. So I [disfmarker] I actually before, uh, Dave Gelbart did this, I did an interface which showed each waveform and ea a ribbon for each waveform, [speaker007:] Mm hmm. [speaker005:] but the problem with it is even with just three waveforms it was just painfully slow to scroll. [speaker007:] Oh, OK. 
[speaker005:] So you just scroll a screen and it would, you know go "kur chunk!" [speaker001:] Mm hmm. [speaker005:] And so it just was not doable with the current interface. [speaker001:] You know, I am thinking if we have a meeting with only four speakers and, you know, you could fire up a Transcriber interface for, y you know, in different windows, multiple ones, one for each channel. And it's sort of a [disfmarker] a hack but I mean it would be one way of seeing the visual form. [speaker005:] I think that if we decide that we need [disfmarker] that they need to see the visuals, we need to change the interface so that they can do that. [speaker001:] Yeah. Yeah. [speaker004:] That's actually what I thought of, loading the chopped up waveforms, I mean, you know, that [disfmarker] that would make it faster [disfmarker] [speaker005:] An [speaker003:] So [disfmarker] [speaker007:] Hmm. [speaker005:] But isn't [disfmarker] The chopped up waveforms. [speaker002:] The problem is if [disfmarker] if anything's cut off, you can't expand it from the chopped up [disfmarker] [speaker004:] So. [speaker005:] Isn't that [disfmarker] [speaker007:] Right. [speaker004:] Right, but if you [speaker005:] And wouldn't that be the same [comment] as the mixed signal? [speaker004:] a at some point [disfmarker] No, I mean the individual channels that were chopped up that [disfmarker] it'd be nice to be able to go back and forth between those short segments. [speaker003:] Mm hmm. [speaker004:] Cuz you don't really nee like nine tenths of the time you're throwing most of them out, but what you need are tho that particular channel, or that particular location, [speaker005:] Yeah. [speaker004:] and, um, might be nice, cuz we save those out already, [comment] um, to be able to do that. [speaker001:] Yeah. [speaker004:] But it won't work for IBM of course, it only works here cuz they're not saving out the individual channels. 
[speaker001:] Well, I [disfmarker] I do think that this [disfmarker] this will be a doable procedure, [speaker003:] Yeah. [speaker001:] and have them starting with mixed [speaker003:] OK. [speaker001:] and, um, then when they get into overlaps, just have them systematically check all the channels to be sure that there isn't something hidden from [disfmarker] from audio view. [speaker005:] Yeah. Yeah, hopefully, I mean [disfmarker] The mixed signal, the overlaps are pretty audible because it is volume equalized. So I think they should be able to hear. The only problem is [disfmarker] is, you know, counting how many and if they're really correct or not. So, I don't know. [speaker004:] I don't know that you can locate them very well from the mixed signal, [speaker005:] Right but [disfmarker] but once [disfmarker] once you know that they happen, you can at least listen to the close talking, [speaker004:] but you would know that they were there, and then you would switch. Right. And then you would switch into the other [disfmarker] [speaker005:] so. [speaker003:] But right now, to do this limitation, the switching is going to be switching of the audio? Is what she's saying. [speaker005:] Right. [speaker001:] Yeah. [speaker003:] So [disfmarker] [speaker005:] Right, so [disfmarker] so did Dave [disfmarker] Did Dave do that change where you can actually just click rather than having to go up to the menu to listen to the individual channels? [speaker003:] so they're using their ears to do these markings anyway. [speaker001:] Yes. [speaker004:] Yeah. [speaker001:] Yes. Click [disfmarker] Um, [speaker005:] I had suggested it before. I just don't know whether he did it or not. [speaker001:] I'm not sure what [disfmarker] click what [disfmarker] click on the ribbon? Yeah, you can get that [disfmarker] [speaker005:] Yeah. [speaker001:] oh, oh, get [disfmarker] you can get the, uh [disfmarker] you can get it to switch audio? [speaker005:] Yeah. 
[speaker001:] Uh, not last I tried, but, um, maybe he's changed it again. [speaker005:] We should get him to do that because, uh, I think that would be much, much faster than going to the menu. [speaker001:] I disagree. There's a reason I disagree, and that is that, uh, you [disfmarker] it's very good to have a dissociation between the visual and the audio. There're times when I wanna hear the mixed signal, bu but I want to transcribe on the single channel. So right now [disfmarker] [speaker005:] Then maybe just buttons down at the bottom next to it. [speaker001:] Maybe, I just don't [disfmarker] I don't see that it's a [disfmarker] [speaker005:] Just something so that it's not in the menu option so that you can do it much faster. [speaker001:] Well, I mean, that's the i I [disfmarker] I think that might be a personal style thing. I find it really convenient the way [disfmarker] the way it's set up right now. [speaker005:] Well it just seems to me that if you wanna quickly [disfmarker] "well was that Jane, no, was that Chuck, no, was that Morgan", right now, you have to go up to the menu, and each time, go up to the menu, select it, listen to that channel then click below, and then go back to the menu, select the next one, and then click below. [speaker001:] That's fine. Yeah, it's true. [speaker005:] So you can definitely streamline that with the i with the interface. [speaker001:] Yeah, it could be faster, but, you know, I mean, th in the ideal world [disfmarker] [speaker005:] What? [speaker001:] Yeah. No I [disfmarker] I agree that'd be nice. [speaker005:] OK. [speaker001:] Yeah. OK. [speaker003:] So, um, Done with that? Does any [disfmarker] I forget, does anybody, uh, working on any [disfmarker] any Eurospeech submission related to this? [speaker005:] I would like to try to do something on digits but I just don't know if we have time. I mean, it's due next Friday so we have to do the experiments and write the paper. 
So, I'm gonna try, but, uh, we'll just have to see. So actually I wanna get together with both Andreas and, uh, uh, Stephane with their respective systems. [speaker003:] Yeah. Yeah there was that [disfmarker] we that's right, we had that one conversation about, uh, what [disfmarker] what [disfmarker] what did it mean for, uh, one of those speakers to be pathological, [speaker005:] Right, [speaker003:] was it a [disfmarker] [speaker005:] and I haven't had s chance to sit down and listen. [speaker006:] Oh, I haven't [disfmarker] I haven't listened to them either, [speaker005:] I was going to do that this afternoon. [speaker006:] but there must be something wrong, I mean, unless our [disfmarker] [speaker005:] Well, Morgan and I were [disfmarker] were having a debate [comment] about that. Whereas I think it it's probably something pathologic and actually Stephane's results, I think confirm that. He s he did the Aurora system also got very lousy average error, like fifteen or [disfmarker] or, uh, fifteen to twenty percent average? But then he ran it just on the lapel, and got about five or six percent word error? So that [disfmarker] that means to me that somewhere in the other recordings there are some pathological cases. But, you know, we [disfmarker] th that may not be true. It may be just some of the segments they're just doing a lousy job on. So I'll [disfmarker] I'll listen to it and find out since you'd actually split it up by segment. [speaker003:] Right. [speaker005:] So I can actually listen to it. [speaker006:] Yeah. [speaker002:] Did you run the [disfmarker] Andreas [disfmarker] the r SRI recognizer on the digits? [speaker006:] Yeah. [speaker005:] Oh, I thought he had sent that around to everyone, did you just sent that to me? [speaker006:] No, I d I didn't. [speaker005:] Oh. [speaker006:] Since I considered those preliminary, I didn't. But, yeah, if you take [disfmarker] [speaker002:] I it wasn't [disfmarker] [speaker005:] It was bimodal. 
[speaker006:] So if you [disfmarker] Yeah, it's actually, um, it [disfmarker] uh [disfmarker] it was trimodal, actually [disfmarker] [speaker005:] Oh, was it trimodal, OK. [speaker006:] trimodal, [speaker003:] Yeah. [speaker006:] so [speaker003:] There's zero, a little bit, and a lot. [speaker006:] there were [disfmarker] [vocalsound] t there was [disfmarker] there was one h one bump at ze around zero, which were the native speakers, [speaker003:] Yeah. [speaker006:] the non pathological native speakers. [speaker003:] Yeah. [speaker002:] Zero percent error? [speaker003:] Y yeah. [speaker006:] Then there was another bump at, um, [vocalsound] [vocalsound] oh, like fifteen or something. [speaker003:] Oh was it fifteen? [speaker002:] This is error you're talking about? [speaker006:] whe [speaker005:] Yeah. [speaker003:] Yeah. [speaker006:] Yeah. [speaker002:] OK. [speaker006:] Those were the non natives. And then there was another distinct bump at, like, a hundred, [vocalsound] which must have been some problem. [speaker001:] Oh, wow! [speaker006:] I can't imagine that [disfmarker] [speaker001:] Oh, OK. [speaker007:] What is patho what do you mean by pathological? I'm sorry, I don't [disfmarker] [speaker005:] Just [disfmarker] just something really wrong with [disfmarker] [speaker007:] Oh. [speaker006:] In the recording [speaker005:] A bug is what I mean, so that it's like [disfmarker] [speaker007:] Oh, OK. I see. [speaker006:] And there was this one meeting, I forget which one it was, where like, uh, six out of the eight channels were all, like [disfmarker] had a hundred percent error. [speaker005:] Which probably means like there was a [disfmarker] th the recording interface crashed, [speaker007:] Right. [speaker005:] or there was a short [disfmarker] you know, someone was jiggling with a cord [speaker006:] But [disfmarker] [speaker005:] or, uh, I extracted it incorrectly, [speaker006:] But [disfmarker] [speaker005:] it was labeled [disfmarker] [speaker007:] Mm hmm. 
[speaker005:] it was transcribed incorrectly, something really bad happened, [speaker007:] OK. [speaker005:] and I just haven't listened to it yet to find out what it was. [speaker006:] So, if I excluded the pathological ones, [vocalsound] by definition, those that had like over ninety five percent error rate, [vocalsound] and the non natives, then the average error rate was like one point four or something, [speaker003:] What we're calling. [speaker001:] Oh. [speaker007:] Hmm! [speaker006:] which [disfmarker] which seemed reasonable given that, you know, the models weren't tuned for [disfmarker] [vocalsound] for it. [speaker001:] Oh. [speaker003:] Yeah. [speaker006:] And the grammar wasn't tuned either. It was just a [@ @]. [speaker002:] And it didn't matter whether it was the lapel or whether it was the [disfmarker] [speaker006:] I haven't split it up that way, but it would be [disfmarker] [speaker004:] But there's no overlap during the digit readings, so it shouldn't really matter. [speaker007:] Right. [speaker006:] Right. [speaker002:] Yeah. [speaker006:] So it should [disfmarker] [speaker003:] No, but there's a little difference, [speaker005:] There's a lot. [speaker003:] and we haven't looked at it for digits, [speaker005:] Yeah. [speaker003:] right? And so, [speaker002:] Yeah, so I was curious about that. [speaker003:] cuz [disfmarker] because what he was [disfmarker] what I was saying when I looked at those things is it [disfmarker] it [disfmarker] I was almost gonna call it quadrimodal because [vocalsound] [disfmarker] because there was a whole lot of cases where it was zero percent. [speaker006:] Mm hmm. [speaker003:] They just plain got it all right. [speaker006:] Yeah. [speaker003:] And then there [disfmarker] and then there was another bunch that were couple percent or something. 
[speaker006:] But if you p if you actually histogrammed it, and [disfmarker] it was a nice [disfmarker] uh, you know, it [disfmarker] it was [disfmarker] zero was the most of them, [speaker003:] Yeah. [speaker005:] A normal. Yeah. [speaker006:] but then there were [disfmarker] the others were sort of decaying from there. [speaker003:] Yeah, yeah. [speaker006:] And then there was the bump for the non natives and then the pathological ones, [speaker003:] I see. [speaker006:] so. [speaker003:] I see. [speaker005:] Yeah, cuz some of our non natives are pretty non native. So. [speaker003:] Yeah. [speaker001:] You [disfmarker] did you have, uh, something in the report about, uh, [disfmarker] about, uh, for f uh, forced alignment? Have you [disfmarker] have you started on that? [speaker006:] Oh, well, yeah, so I've been struggling with the forced alignments. Um. [vocalsound] [vocalsound] So the scheme that I drew on the board last time where we tried to, um [vocalsound] allow reject models for the s speech from other speakers, um, [vocalsound] [vocalsound] most of the time it doesn't work very well. So, [vocalsound] um, [vocalsound] and the [disfmarker] I haven't done [disfmarker] I mean, the only way to check this right now was for me to actually [vocalsound] load these into X Waves and, you know, plus the alignments, and s play them and see where the [disfmarker] [speaker003:] Hmm. [speaker006:] And it looks [disfmarker] And so I looked at all of the utterances from you, Chuck, in that one conversation, I don't know which [disfmarker] You probably know which one I mean, it's where you were on the lapel [vocalsound] and Morgan was sitting next to you and we can hear everything Morgan says. [speaker001:] Hmm. [speaker006:] But [disfmarker] and [disfmarker] and some of what you [disfmarker] I mean, you also appear quite a bit in that cross talk. 
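The exclusion procedure described above (drop channels over a pathological error cutoff, set the non-natives aside, and average the rest) could be sketched roughly as follows; the function name, channel names, and 95% cutoff are illustrative assumptions, not the actual scripts used:

```python
def summarize_error_rates(rates, pathological_cutoff=95.0, non_native=()):
    """Split per-channel error rates (percent) into pathological and clean
    native groups, and average the clean natives.  Sketch only; the cutoff
    and the channel naming are assumptions."""
    pathological = {ch: r for ch, r in rates.items() if r >= pathological_cutoff}
    clean_native = {ch: r for ch, r in rates.items()
                    if ch not in pathological and ch not in non_native}
    mean = sum(clean_native.values()) / len(clean_native) if clean_native else None
    return pathological, clean_native, mean
```

Histogramming the `clean_native` values would then show the bump at zero and the decay away from it that the discussion mentions.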
So, [vocalsound] I actually went through all of those, there were I think fifty five segments, [vocalsound] um, in [disfmarker] in X Waves, and [disfmarker] and sort of did a crude check, and [vocalsound] more often than not, it [disfmarker] it gets it wrong. So there's either the beginning, mostly the beginning word, [vocalsound] where th you, um, you know, Chuck talks somewhere into the segment, but the first, um, word of what he says, often "I" but it's very reduced "I," that's just aligned [vocalsound] to the beginning of someone else's speech, uh in that segment, which is cross talk. So, [vocalsound] um, [vocalsound] I'm still tinkering with it, but it might well be that we can't get clean alignments out of this [disfmarker] out of those, uh, [vocalsound] channels, so. [speaker003:] Unless maybe we do this, uh, um, cancellation business. [speaker004:] Right, but that's [disfmarker] I mean, that was our plan, [speaker006:] Yeah, right. [speaker004:] but it's clear from Dan that this is not something you can do in a short amount of time. [speaker003:] Oh, the short amount of time thing, right. [speaker004:] So [disfmarker] so we [disfmarker] you know, we had spent a lot of time, um, writing up the HLT paper and we wanted to use that, uh, kind of analysis, [speaker003:] Yeah. [speaker004:] but the HLT paper has, you know, it's a very crude measure of overlap. It's not really something you could scientifically say is overlap, it's just whether or not the, um, the segments that were all synchronized, whether there was some overlap somewhere. [speaker005:] c High correlation. [speaker004:] And, you know, that pointed out some differences, so he thought well if we can do something quick and dirty because Dan said the cross cancellation, it's not straightforward.
If it were straightforward then we would try it, but [disfmarker] so, it's sort of good to hear that it was not straightforward, thinking if we can get decent forced alignments, then at least we can do sort of a overall report of what happens with actual overlap in time, but, um [disfmarker] [speaker002:] I didn't think that his message said it wasn't straightforward. [speaker005:] Well if we'd just [disfmarker] [speaker003:] Well [speaker005:] Um hmm. [speaker002:] I thought he's just saying you have to look over a longer time window when you do it. [speaker004:] and the [disfmarker] but there are some issues of this timing, um, in the recordings [speaker003:] Yeah. [speaker004:] and [disfmarker] [speaker002:] Right. So you just have to look over longer time when you're trying to align the things, you can't [disfmarker] you can't just look [disfmarker] [speaker005:] Well. Are you talking about the fact that the recording software doesn't do time synchronous? Is that what you're referring to? That seems to me you can do that over the entire file and get a very accurate [disfmarker] [speaker006:] I don't thi I d I don't think that was the issue. [speaker004:] I [disfmarker] yeah, that was sort of a side issue. [speaker006:] The issue was that you have [disfmarker] to [disfmarker] you have have [disfmarker] you first have to have a pretty good speech detection on the individual channels. [speaker005:] I didn't think so either. [speaker004:] And it's dynamic, so I guess it was more dynamic than some simple models would be able t to [disfmarker] so [disfmarker] so there are some things available, and I don't know too much about this area where if people aren't moving around much then you could apply them, and it should work pretty well if you took care of this recording time difference. [speaker005:] Right, which should be pretty straightforward. [speaker004:] Which a at least is well defined, and [speaker005:] Yeah.
[speaker004:] um, but then if you add the dynamic aspect of adapting distances, then it wasn't [disfmarker] I guess it just wasn't something that he could do quickly [pause] and not [disfmarker] in time for us to be able to do something by two weeks from now, so. Well less than a week. So [disfmarker] um, so I don't know what we can do if anything, that's sort of worth, you know, a Eurospeech paper at this point. [speaker002:] Well, Andreas, how well did it work on the non lapel stuff? [speaker005:] Yeah. That's what I was gonna say. [speaker006:] I haven't checked those yet. [speaker005:] C [speaker006:] It's very tedious to check these. [speaker002:] Mmm. [speaker006:] Um, we would really need, ideally, a transcriber [vocalsound] to time mark the [disfmarker] you know, the be at least the beginning and s ends [comment] of contiguous speech. Um, [vocalsound] [vocalsound] and, you know, then with the time marks, you can do an automatic comparison of your [disfmarker] of your forced alignments. [speaker002:] Because [disfmarker] really the [disfmarker] the [disfmarker] at least in terms of how we were gonna use this in our system was to get an ideal [disfmarker] an idea, uh, for each channel about the start and end boundaries. [speaker005:] Oh, MNCM. [speaker006:] Mm hmm. [speaker002:] We don't really care about like intermediate word boundaries, [speaker006:] No, that's how I've been looking at it. [speaker002:] so [disfmarker] [speaker006:] I mean, I don't care that the individual words are aligned correctly, [speaker004:] Right. [speaker002:] Yeah. Yeah. [speaker006:] but [vocalsound] you don't wanna, uh, infer from the alignment that someone spoke who didn't. [speaker002:] Right, exactly. [speaker006:] so, so [disfmarker] [speaker002:] So that's why I was wondering if it [disfmarker] I mean, maybe if it doesn't work for lapel stuff, we can just not use that [speaker006:] Yeah. 
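The automatic comparison being proposed here, checking a forced alignment against hand-marked start/end times while ignoring word-internal boundaries and caring only whether speech is attributed to a channel where no one was talking, might look like this minimal sketch; the interval-list format is an assumption:

```python
def false_speech_time(hyp, ref):
    """Total time (seconds) where the forced alignment (hyp) claims the
    channel's speaker is talking but the hand-marked reference (ref) has
    no speech.  Both arguments are lists of (start, end) intervals; ref
    intervals are assumed non-overlapping.  Hypothetical sketch."""
    def covered(t0, t1, intervals):
        # how much of [t0, t1] is covered by the reference intervals
        total = 0.0
        for s, e in intervals:
            lo, hi = max(t0, s), min(t1, e)
            if hi > lo:
                total += hi - lo
        return total
    return sum((e - s) - covered(s, e, ref) for s, e in hyp)
```

A per-channel number like this would flag exactly the failure mode described: a reduced "I" being aligned onto the start of someone else's cross talk.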
I haven't [disfmarker] I ha just haven't had the time to, um, do the same procedure on one of the [disfmarker] [speaker002:] and [disfmarker] [speaker006:] so I would need a k I would need a channel that has [vocalsound] a speaker whose [disfmarker] who has a lot of overlap but s you know, is a non lapel mike. And, um, [vocalsound] [vocalsound] where preferably, also there's someone sitting next to them who talks a lot. [speaker005:] Hmm! [speaker006:] So, I [disfmarker] [speaker005:] So a meeting with me in it. [speaker006:] maybe someone can help me find a good candidate and then I would be willing to [speaker002:] We c you know what? Maybe the best way to find that would be to look through these. [speaker006:] you know, hand [speaker002:] Cuz you can see the seat numbers, and then you can see what type of mike they were using. And so we just look for, you know, somebody sitting next to Adam at one of the meetings [disfmarker] [speaker004:] Actually y we can tell from the data that we have, [speaker006:] From the insertions, maybe? fr fr from the [disfmarker] [speaker004:] um, yeah, there's a way to tell. It might not be a single person who's always overlapping that person but any number of people, [speaker006:] Right. [speaker004:] and, um, if you align the two hypothesis files across the channels, you know, just word alignment, you'd be able to find that. So [disfmarker] so I guess that's sort of a last [disfmarker] ther there're sort of a few things we could do. One is just do like non lapels if we can get good enough alignments. Another one was to try to get [disfmarker] somehow align Thilo's energy segmentations with what we have. But then you have the problem of not knowing where the words are because these meetings were done before that segmentation. But maybe there's something that could be done. [speaker002:] What [disfmarker] what is [disfmarker] why do you need the, um, the forced alignment for the HLT [disfmarker] I mean for the Eurospeech paper? 
[speaker004:] Well, I guess I [disfmarker] I wanted to just do something not on recognition experiments because that's ju way too early, but to be able to report, you know, actual numbers. Like if we [disfmarker] if we had hand transcribed pe good alignments or hand checked alignments, then we could do this paper. It's not that we need it to be automatic. But without knowing where the real words are, in time [disfmarker] [speaker002:] So it was to get [disfmarker] it was to get more data and better [disfmarker] to [disfmarker] to squeeze the boundaries in. [speaker004:] To [disfmarker] to know what an overlap really [disfmarker] if it's really an overlap, or if it's just a [disfmarker] a [disfmarker] a segment correlated with an overlap, [speaker002:] Ah, OK. Yeah. [speaker004:] and I guess that's the difference to me between like a real paper and a sort of, promissory paper. So, um, if we d it might be possible to take Thilo's output and like if you have, um, like right now these meetings are all, [speaker005:] Ugh! I forgot the digital camera again. [speaker004:] um, [speaker005:] Every meeting! [speaker004:] you know, they're time aligned, so if these are two different channels and somebody's talking here and somebody else is talking here, just that word, [speaker005:] Mm hmm. [speaker004:] if Thilo can tell us that there're boundaries here, we should be able to figure that out because the only thing transcribed in this channel is this word. But, um, you know, if there are things [disfmarker] [speaker005:] Two words. [speaker004:] Yeah, if you have two and they're at the edges, it's like here and here, and there's speech here, then it doesn't really help you, so, um [disfmarker] [speaker002:] Thilo's won't put down two separate marks in that case [disfmarker] [speaker004:] Well it w it would, but, um, we don't know exactly where the words are because the transcriber gave us two words in this time bin [speaker005:] Thilo's will. But. 
[speaker004:] and we don't really know, [speaker001:] Well it's a merging problem. [speaker004:] I mean, yeah it's [disfmarker] [speaker001:] If you had a [disfmarker] if you had a s if you had a script which would [disfmarker] I've thought about this, [speaker004:] I mean, if you have any ideas. [speaker001:] um, and I've discussed [disfmarker] I've discussed it with Thilo, [speaker004:] I would [disfmarker] [speaker001:] um, the, I mean, I [disfmarker] I [disfmarker] in principle I could imagine writing a script which would approximate it to some degree, but there is this problem of slippage, [speaker005:] Well maybe [disfmarker] Maybe that will get enough of the cases to be useful. [speaker001:] yeah. [speaker004:] Right. I mean, that [disfmarker] that would be really helpful. That was sort of another possibility. [speaker005:] You know s cuz it seemed like most of the cases are in fact the single word sorts, or at least a single phrase [speaker006:] Mmm. [speaker001:] Well they [disfmarker] they can be stretched. [speaker005:] in most of the bins. [speaker001:] I wouldn't make that generalization [speaker004:] Yeah. [speaker001:] cuz sometimes people will say, "And then I" and there's a long pause and finish the sentence and [disfmarker] and sometimes it looks coherent and [disfmarker] and the [disfmarker] I mean it's [disfmarker] it's not a simple problem. But it's really [disfmarker] And then it's coupled with the problem that sometimes, you know, with [disfmarker] with a fricative you might get the beginning of the word cut off and so it's coupled with the problem that Thilo's isn't perfect either. I mean, we've i th it's like you have a merging problem plus [disfmarker] so merging plus this problem of, uh, not [disfmarker] [speaker005:] Right. Hmm! [speaker001:] y i i if the speech nonspeech were perfect to begin with, the detector, that would already be an improvement, but that's impossible, you know, i that's too much to ask. [speaker004:] Right. 
[speaker005:] Yes. [speaker001:] And so i and may you know, I mean, it's [disfmarker] I think that there always [disfmarker] th there would have to be some hand tweaking, but it's possible that a script could be written to merge those two types of things. I've [disfmarker] I've discussed it with Thilo and I mean [disfmarker] in terms of not him doing it, but we [disfmarker] we discussed some of the parameters of that and how hard it would be to [disfmarker] in principle [disfmarker] to write something that would do that. [speaker004:] I mean, I guess in the future it won't be as much as an issue if transcribers are using the tightened boundaries to start with, then we have a good idea of where the forced alignment is constrained to. [speaker001:] Well, it's just, you know, a matter of we had the revolution [disfmarker] we had the revolution of improved, uh, interface, um, one month too late, [speaker005:] Oh. [speaker004:] So I'm no I don't know if this [speaker005:] Tools. [speaker001:] but it's like, you know, it's wonderful to have the revolution, [speaker004:] Oh it's [disfmarker] it's a [disfmarker] [speaker001:] so it's just a matter of [disfmarker] of, you know, from now on we'll be able to have things channelized to begin with. [speaker004:] yeah. [speaker005:] Right. And we'll just have to see how hard that is. [speaker001:] Yeah, that's right. [speaker005:] So [disfmarker] so whether the corrections take too much time. [speaker001:] That's right. [speaker005:] I was just thinking about the fact that if Thilo's missed these short segments, that might be quite time consuming for them to insert them. [speaker004:] Yeah. [speaker001:] Good point. [speaker004:] But he [disfmarker] he also can adjust this minimum time duration constraint and then what you get is noises mostly, [speaker001:] Yeah. [speaker005:] Spurious. [speaker004:] but that might be OK, [speaker005:] It might be easier to delete something that's wrong than to insert something that's missing. 
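One way to picture the merging script under discussion: take one transcriber time bin with its words, plus the speech/nonspeech detector's sub-segments inside that bin, and resolve only the unambiguous cases, leaving the rest for hand tweaking. Everything here is a hypothetical sketch, not an existing tool:

```python
def merge_bin(words, subsegments):
    """Assign the transcribed words of one time bin to the detector's
    sub-segments.  Only clearly decidable cases are resolved; None flags
    the bin for hand correction.  Hypothetical sketch."""
    if len(subsegments) == 1:
        # one detected region: all the bin's words stay together
        return [(subsegments[0], list(words))]
    if len(words) == len(subsegments):
        # one word per region: unambiguous assignment
        return [(seg, [w]) for seg, w in zip(subsegments, words)]
    # e.g. two words at the edges with detected speech in between:
    # the slippage problem, so punt to a human
    return None
```

The `None` cases are exactly the "And then I ... long pause ... finish the sentence" stretches that make the problem not simple.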
[speaker004:] an Right. And you can also see in the waveform [disfmarker] [speaker005:] What do you think, Jane? [speaker004:] exac yeah. [speaker003:] If you can feel confident that what the [disfmarker] yeah, that there's actually something [disfmarker] [speaker004:] Yeah. [speaker005:] Yeah. Cuz then [disfmarker] then you just delete it, and you don't have to pick a time. [speaker003:] that you're not gonna miss something, yeah. [speaker004:] I think it's [disfmarker] [speaker001:] Well the problem is I [disfmarker] you know [disfmarker] I [disfmarker] I [disfmarker] it's a [disfmarker] it's a really good question, and I really find it a pain in the neck to delete things because you have to get the mouse up there on the t on the text line and i and otherwise you just use an arrow to get down [disfmarker] I mean, i it depends on how lar th there's so many extra things that would make it one of them harder than the other, or [disfmarker] or vice versa. It's not a simple question. But, you know, I mean, in principle, like, you know, if one of them is easier then to bias it towards whichever one's easier. [speaker005:] Yeah, I guess the semantics aren't clear when you delete a segment, right? Because you would say [disfmarker] You would have to determine what the surroundings were. [speaker004:] You could just say it's a noise, though, and write, you know, a post processor will just [disfmarker] all you have to do is just [disfmarker] [speaker005:] If it's really a noise. [speaker004:] or just say it's [disfmarker] just put "X," you know, like "not speech" or something, and then you can get [disfmarker] [speaker001:] I think it's easier to add than delete, frankly, [speaker004:] Yeah, or [speaker001:] because you have to, uh, maneuver around on the [disfmarker] on both windows then. [speaker005:] To add or to delete? [speaker001:] To delete. [speaker005:] OK. 
[speaker004:] Anyways, so I [disfmarker] I guess [disfmarker] [speaker005:] That [disfmarker] Maybe that's an interface issue that might be addressable. But I think it's the semantics that are [disfmarker] that are questionable to me, [speaker001:] It's possible. [speaker005:] that you delete something [disfmarker] So let's say someone is talking to here, and then you have a little segment here. Well, is that part of the speech? Is it part of the nonspeech? I mean, w what do you embed it in? [speaker004:] There's something nice, though, about keeping, and this is probably another discussion, keeping the stuff that Thilo's detector detected as possible speech and just marking it as not speech than deleting it. Because then when you align it, then the alignment can [disfmarker] you can put a reject model or whatever, [speaker005:] Oh, I see. So then they could just like put [disfmarker] Oh that's what you meant by just put an "X" there. [speaker004:] and you're consistent with th the automatic system, [speaker005:] Uh, that's an interesting idea. [speaker004:] whereas if you delete it [disfmarker] [speaker005:] So [disfmarker] so all they [disfmarker] So that all they would have to do is put like an "X" there. [speaker004:] Yeah, or some, you know, dummy reject mod [speaker005:] So blank for [disfmarker] blank for silence, "S" "S" for speech, "X" "X" for something else. [speaker004:] whatever, yeah. That's actually a better way to do it cuz the a the forced alignment will probably be more consistent than [disfmarker] [speaker001:] Well, like, I think there's a complication which is that [disfmarker] that you can have speech and noise in s [speaker004:] I mean if it's just as easy, but [disfmarker] [speaker001:] uh, you know, on the same channel, the same speaker, so now sometimes you get a ni microphone pop and, uh, I mean, there're these fuzzy hybrid cases, and then the problem with the boundaries that have to be shifted around. 
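The labeling scheme floated just above, blank for silence, "S" for speech, "X" for something else, amounts to keeping a doubtful detector segment and relabeling it rather than deleting it, so a later forced alignment can map it onto a reject model. A minimal sketch, with the (start, end, label) tuple format as an assumption:

```python
def relabel_segment(segments, index, label="X"):
    """Keep a detector segment but mark it as non-speech ('X') instead of
    deleting it, staying consistent with the automatic system's reject
    models.  Segments are (start, end, label) tuples; sketch only."""
    start, end, _ = segments[index]
    out = list(segments)  # leave the original list untouched
    out[index] = (start, end, label)
    return out
```

Because the segment and its boundaries survive, the transcriber never has to pick a new time, which was the stated advantage over deletion.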
It's not a simple [disfmarker] not a simple problem. [speaker004:] Anyway, quick question, though, at a high level do people think, let's just say that we're moving to this new era of like using the, um, pre segmented t you know, non synchronous conversations, does it make sense to try to take what we have now, which are the ones that, you know, we have recognition on which are synchronous and not time tightened, and try to get something out of those for sort of purposes of illustrating the structure and the nature of the meetings, or is it better to just, you know, forget that and tr [speaker005:] Well, I think we'll have to, eventually. [speaker004:] I mean, it's [disfmarker] [speaker005:] And my hope was that we would be able to use the forced alignment to get it. [speaker004:] Right. [speaker005:] But if we can't [disfmarker] [speaker004:] That was everybody's hope. [speaker005:] But if we can't, then maybe we just have to [disfmarker] [speaker004:] And maybe we can for the non lapel, but is it worth [disfmarker] if we can't then we can fake it even if we're [disfmarker] we report, you know, we're wrong twenty percent of the time or ten percent of the time. [speaker005:] Well, I'm thinking [disfmarker] are you talking about for a paper, or are talking about for the corpus. [speaker004:] Uh [disfmarker] uh, that's a good question actually. [speaker005:] I mean cuz for the corpus it would be nice if everything were [disfmarker] [speaker004:] Actually that's a good question because we'd have to completely redo those meetings, and we have like ten of them now. [speaker005:] We wouldn't have to re do them, we would just have to edit them. [speaker001:] Well, and also, I mean, I still haven't [disfmarker] I still haven't given up on forced alignment. 
[speaker004:] No, you're right, actually [disfmarker] [speaker001:] I think that when Brian comes, this'll be uh an interesting aspect to ask him as well b [speaker005:] When [disfmarker] [speaker001:] when Brian Kingsbury comes. [speaker005:] Oh, Brian. You s I thought you said Ryan. And it's like, "Who's Ryan?" OK. [speaker001:] Yeah, good question. Well, Ryan could come. [speaker004:] Uh, no, that's a good point, though, because for feature extraction like for prosody or something, I mean, the meetings we have now, it's a good chunk of data [disfmarker] [speaker005:] Yep. [speaker004:] we need to get a decent f OK. [speaker001:] That's what my hope has been, [speaker004:] So we should at least try it even if we can't, [speaker001:] and that's what [disfmarker] that's what [disfmarker] you know, ever since the [disfmarker] the February meeting that I transcribed from last year, forced alignment has been on the [disfmarker] on the table as a way of cleaning them up later. [speaker004:] right? [speaker005:] On the table, right? [speaker001:] And [disfmarker] and so I'm hopeful that that's possible. I know that there's complication in the overlap sections and with the lapel mikes, [speaker006:] There's [disfmarker] Yeah. [speaker001:] but [disfmarker] [speaker004:] I mean, we might be able, at the very worst, we can get transcribers to correct the cases where [disfmarker] I mean, you sort of have a good estimate where these places are because the recognition's so poor. Right? And so you're [disfmarker] [speaker002:] Yeah, we were never just gonna go with these as the final alignments. [speaker001:] I agree. [speaker004:] Yeah. [speaker001:] I agree. [speaker002:] We were always gonna run them past somebody. [speaker004:] So we need some way to push these first chunk of meetings into a state where we get good alignments. [speaker001:] Absolutely. 
[speaker006:] I'm probably going to spend another day or so trying to improve things by, um, [vocalsound] [vocalsound] by using, um, acoustic adaptation. Um, the [disfmarker] [vocalsound] Right now I'm using the unadapted models for the forced alignments, and it's possible that you get considerably better results if you, uh, manage to adapt the, [vocalsound] uh, phone models to the speaker and the reject model to the [disfmarker] to [disfmarker] to all the other speech. Um, so [speaker002:] Could you [disfmarker] could you at the same time adapt the reject model to the speech from all the other channels? [speaker003:] That's what he just said. [speaker005:] That's what he was saying. [speaker006:] That's what I just said. [speaker004:] Yeah. [speaker002:] Oh, not just the speech from that [disfmarker] of the other people from that channel, [speaker006:] Right. [speaker004:] Right. [speaker002:] but the speech from the a actual other channels. [speaker006:] Oh, oh, I see. Um, [speaker005:] I don't think so. [speaker003:] Oh. [speaker005:] I don't think that would work, [speaker006:] No, it [disfmarker] [speaker005:] right? Because you'd [disfmarker] A lot of it's dominated by channel properties. [speaker006:] th Exactly. [speaker004:] But what you do wanna do is take the, even if it's klugey, take the segments [disfmarker] the synchronous segments, the ones from the HLT paper, where only that speaker was talking. [speaker006:] So you want to u [speaker004:] Use those for adaptation, cuz if you [disfmarker] if you use everything, then you get all the cross talk in the adaptation, and it's just sort of blurred. [speaker006:] That's a good point. Yep. [speaker002:] If you [disfmarker] [speaker004:] And that we know, I mean, we have that. And it's about roughly two thirds, I mean, very roughly averaged. [speaker006:] Yeah. [speaker004:] That's not completely negligible. [speaker006:] Mm hmm. [speaker004:] Like a third of it is bad for adaptation or so. 
[speaker005:] Cool. I thought it was higher than that, that's pr [speaker004:] It really [disfmarker] it depends a lot. This is just sort of an overall [disfmarker] [speaker006:] So. [speaker003:] Well I know what we're not turning in to Eurospeech, a redo of the HLT paper. [speaker005:] Right. [speaker003:] That [disfmarker] I don't wanna do that, [speaker005:] Yeah, I'm doing that for AVIOS. [speaker003:] but. [speaker004:] Yeah. But I think we're [disfmarker] oh, Morgan's talk went very well, I think. [speaker003:] Bleep. [speaker005:] Uh, "bleep". Yeah, really. [speaker004:] I think Morgan's talk went very well it woke [disfmarker] [speaker001:] Excellent. [speaker004:] you know, it was really a well presented [disfmarker] and got people laughing [disfmarker] [speaker006:] Some good jokes in it? [speaker001:] Yeah. [speaker005:] Especially the batteried meter popping up, that was hilarious. [speaker004:] Yeah. [speaker005:] Right when you were talking about that. [speaker003:] You know, that wa that was the battery meter saying that it was fully charged, [speaker005:] It's full. Yeah. [speaker003:] yeah. [speaker001:] You said, "Speaking about energy", or [vocalsound] something. [speaker005:] But that was funny. [speaker003:] Yeah. [speaker005:] He [disfmarker] he [disfmarker] he was onto the bullet points about talking about the [disfmarker] you know [disfmarker] the little hand held, and trying to get lower power and so on, [speaker001:] That was very nice. [speaker006:] Po low power [speaker005:] and Microsoft pops up a little window saying "Your batteries are now fully charged." [speaker001:] That's great. [speaker003:] Yeah, yeah, yeah. [speaker005:] I'm thinking about scripting that for my talk, you know, put [disfmarker] put a little script in there to say "Your batteries are low" right when I'm saying that. [speaker003:] Yeah. Yeah. 
No I mean, i in [disfmarker] in your case, I mean, you were joking about it, but, I mean, your case the fact that you're talking about similar things at a couple of conferences, it's not [disfmarker] these are conferences that have d really different emphases. Whereas HLT and [disfmarker] and Eurospeech, pretty [disfmarker] pretty [disfmarker] pretty similar, so I [disfmarker] I [disfmarker] I can't see really just putting in the same thing, [speaker005:] Are too close, yeah. [speaker004:] No, I d I don't think that paper is really [disfmarker] [speaker003:] but [disfmarker] [speaker004:] the HLT paper is really more of a introduction to the project paper, and, um [disfmarker] [speaker003:] Yeah. [speaker005:] Yeah, for Eurospeech we want some results if we can get them. [speaker004:] Well, yeah, it [disfmarker] it's [disfmarker] probably wouldn't make sense, [speaker003:] Or some [disfmarker] or some [disfmarker] [speaker004:] but [disfmarker] [speaker003:] I mean, I would see Eurospeech [disfmarker] if we have some Eurospeech papers, these will be paper p p uh, submissions. These will be things that are particular things, aspects of it that we're looking at, rather than, you know, attempt at a global paper about it. [speaker005:] Detail, [speaker004:] Right, right. [speaker005:] yeah. Overall. [speaker001:] I did go through one of these meetings. I had, uh, one of the transcribers go through and tighten up the bins on one of the, uh, NSA meetings, and then I went through afterwards and double checked it so that one is really very [disfmarker] very accurate. [speaker004:] Oh. [speaker001:] I men I mentioned the link. [speaker004:] Oh, [speaker001:] I sent [disfmarker] You know that one? [speaker004:] so [disfmarker] [speaker007:] The [disfmarker] which one? I'm sorry. [speaker001:] Um, I'm trying to remember [disfmarker] [speaker005:] Those are all [disfmarker] [speaker001:] I don't remember the number off hand. It's one of the NSA's.
I sent email before the conference, before last week. [speaker007:] Oh, OK. [speaker004:] That might [disfmarker] might have been the one [disfmarker] one of the ones that we did. [speaker001:] Bef What I mean is Wednesday, Thursday. [speaker007:] Mm hmm. [speaker001:] I'm sure that that one's accurate, I've been through it [vocalsound] myself. [speaker007:] OK. [speaker004:] So that might actually be useful but they're all non native speakers. [speaker006:] So we could compare before and after and see [disfmarker] [speaker005:] Yeah. Yeah, that's what I was gonna say. [speaker007:] Yeah, that's the problem with the NSA speakers. [speaker006:] oh, Darn! [speaker005:] The problem with those, they're all German. [speaker004:] And e and e and extremely hard to follow, like word wise, [speaker005:] So. [speaker007:] Yeah. [speaker004:] I bet the transcri I mean, I have no idea what they're talking about, so, [speaker001:] I corrected it for a number of the words. [speaker004:] um, [speaker001:] I'm sure that, um, they're [disfmarker] they're accurate now. [speaker006:] Uh, actually I have to [disfmarker] [vocalsound] to go. [speaker004:] I mean, this is tough for a language model probably [disfmarker] [speaker007:] Right. [speaker004:] but [disfmarker] but that might be useful just for speech. [speaker003:] Well. [speaker005:] OK, Andreas is leaving [disfmarker] leaving the building. Mm hmm. [speaker003:] Yeah. [speaker005:] See ya. [speaker003:] See ya. [speaker005:] Um, oh, before you l go [disfmarker] [speaker003:] I don't think we'll go much longer. [speaker005:] I guess it's alright for you to talk a little without the mike [disfmarker] I noticed you adjusting the mike a lot, did it not fit you well? Oh. [speaker001:] Well I won I noticed when you turned your head, it would [disfmarker] it would tilt. 
[speaker005:] Maybe it wasn't just tightened enough, or [disfmarker] [speaker004:] Maybe the [disfmarker] yeah, the s thing that you have tightened [@ @], [speaker002:] Actually if [disfmarker] if you have a larger head, that mike's gotta go farther away which means the [disfmarker] the balance is gonna make it wanna tip down. [speaker004:] oh. [speaker005:] OK. Anyway. [speaker003:] Yeah. OK, [speaker005:] Cuz, I'm just thinking, you know, we were [disfmarker] we're [disfmarker] we've been talking about changing the mikes, uh, for a while, [speaker003:] see ya. [speaker001:] Yeah. [speaker005:] and if these aren't [disfmarker] acoustically they seem really good, but if they're not comfortable, we have the same problems we have with these stupid things. [speaker001:] I think it's com This is the first time I've worn this, I find it very comfortable. [speaker005:] I find it very comfortable too, but, uh, it looked like Andreas was having problems, and I think Morgan was saying it [disfmarker] [speaker003:] Well, but I had it on [disfmarker] I had it on this morning and it was fine. [speaker002:] Can I see that? [speaker005:] Oh, oh you did wear it this morning? [speaker003:] Yeah. [speaker005:] OK, it's off, so you can put it on. [speaker002:] I [disfmarker] yeah, I don't want it on, I just [disfmarker] I just want to, um, say what I think is a problem with this. If you are wearing this over your ears and you've got it all the way out here, then the balance is gonna want to pull it this way. [speaker004:] Yeah. [speaker005:] Right. [speaker002:] Where as if somebody with a smaller head has it back here, [speaker005:] It's more balanced. [speaker002:] right? [speaker004:] So we have to [speaker001:] Oh! [speaker002:] Yeah. 
Then it [disfmarker] then it falls back this way so it's [disfmarker] [speaker005:] Well wh what it's supposed to do is the backstrap is supposed to be under your crown, and so that should be [disfmarker] should be [disfmarker] if it's right against your head there, which is what it's supposed to be, that balances it [speaker001:] Ah. [speaker005:] so it doesn't slide up. [speaker002:] So this is supposed to be under that little protuberance. [speaker005:] Yep, right [disfmarker] right below [disfmarker] if you feel the back of your head, you feel a little lump, um, and so it's supposed to be right under that. [speaker002:] Yeah. So it's really supposed to go more like this than like this. [speaker005:] Yes, exactly. [speaker002:] But then isn't that going to [disfmarker] [speaker005:] That [disfmarker] that [disfmarker] that tilts, [speaker002:] Well, I guess you can control that. [speaker005:] right? In lots and lots of different ways. [speaker004:] So I'm not saying anything about bias towards small headsize, [speaker005:] About heads? [speaker004:] but does seem, uh [disfmarker] [speaker002:] It would be an advantage. [speaker001:] Well, wonder if it's [disfmarker] if [disfmarker] if he was wearing it over his hair instead of under his hair. [speaker003:] Well, we should [disfmarker] We shou we should work on compressing the heads, and [disfmarker] [speaker005:] I think probably it was [disfmarker] Yeah. It probably just wasn't tight enough to the back of his head. I mean, so the directions do talk about bending it to your size, which is not really what we want. [speaker002:] The other thing that would do it would be to hang a five pound weight off the back. [speaker003:] Yeah [speaker004:] Right. [speaker003:] that's good! [speaker001:] What did you say? [speaker005:] wh [speaker004:] A little, [speaker003:] Hang a five pound weight off the [disfmarker] off the back. 
[speaker005:] We did that [disfmarker] [speaker004:] um, [speaker002:] Hang a five pound weight off the back. [speaker005:] We [disfmarker] at Boeing I used [disfmarker] I was doing augmented reality so they had head mounts on, [speaker003:] Weight. [speaker005:] and we [disfmarker] we had a little jury rigged one with a welder's helmet, [speaker002:] Counter balance. [speaker005:] and we had just a bag with a bunch of marbles in it [vocalsound] as a counter balance. [speaker003:] Or maybe this could be helpful just for evening the conversation between people. If people [disfmarker] those who talk a lot have to wear heavier weights or something, [speaker005:] Yeah! [speaker003:] and [disfmarker] and [disfmarker] um, [speaker005:] Anyway. [speaker003:] um, so, uh, what was I gonna say? Oh, yeah, I was gonna say, uh, I had these, uh, conversations with NIST folks also while I was there [speaker005:] Yep. [speaker003:] and [disfmarker] and, uh, um, so they [disfmarker] they have their [disfmarker] their plan for a room, uh, with, um, mikes in the middle of the table, and, uh, close mounted mikes, and they're talking about close mounted and lapels, just cuz [speaker004:] And arrays, [speaker005:] And arrays, [speaker003:] sort of [disfmarker] and the array. Yeah, so they were [disfmarker] [speaker005:] yep. [speaker004:] which is the i interesting [disfmarker] [speaker005:] And cameras. [speaker004:] and video, right. [speaker003:] And yeah, like multiple [disfmarker] multiple video cameras coverin covering every [disfmarker] everybody [disfmarker] every place in the room, uh, the [disfmarker] yeah [disfmarker] the [disfmarker] the mikes in the middle, the head mounted mikes, the lapel mikes, the array, uh, with [disfmarker] well, there's some discussion of fifty nine, [speaker005:] Fifty nine elements. [speaker003:] they might go down to fifty seven. Because, uh, there is, uh, some pressure from a couple people at the meeting for them to use a KEMAR head.
[speaker004:] Mm hmm. [speaker003:] I forget what KEMAR, uh, stands for, but what it is is it's dummy head that is very specially designed, [speaker005:] Oh, that's right. Yep. [speaker004:] Right. [speaker003:] and [disfmarker] and [disfmarker] and, so what they're actually doing is they're really [disfmarker] there's really two recording systems. [speaker004:] That's a great idea. [speaker003:] So they may not be precisely synchronous, but the but there's two [disfmarker] two recording systems, one with, I think, twenty four channels, and one with sixty four channels. And the sixty four channel one is for the array, but they've got some empty channels there, and anyway they [disfmarker] like they're saying they may give up a couple or something if [disfmarker] for [disfmarker] for the KEMAR head if they go [disfmarker] go with that. [speaker005:] Right. Yeah, it is a good idea. Yeah, h uh, J Jonathan Fiscus did say that, uh, they have lots of software for doing calibration for skew and offset between channels [speaker003:] So. [speaker004:] Mm hmm [speaker005:] and that they've found that's just not a big deal. [speaker003:] Yeah. [speaker005:] So. [speaker003:] Yeah, I'm not [pause] too worried about that. I was thinking [disfmarker] [speaker004:] But they're still planning to do like fake [disfmarker] [speaker005:] Scenario based. [speaker004:] they have to do something like that, [speaker005:] Y right. Their [disfmarker] their legal issues won't allow them to do otherwise. [speaker004:] right. [speaker003:] Yeah. [speaker005:] But it sounded like they were [pause] pretty well thought out [speaker004:] Yeah, th that's true. [speaker005:] and they're [disfmarker] they're gonna be real meetings, it's just that they're with str with people who would not be meeting otherwise. [speaker001:] Mm hmm. [speaker004:] Mm hmm. [speaker002:] Did [disfmarker] did they give a talk on this or was this informal? [speaker005:] So. No. [speaker004:] No. 
[speaker005:] It's just informal. [speaker003:] No, we just had some discussions, various discussions with them. [speaker001:] Mm hmm. Mm hmm. Yeah. [speaker005:] Yeah, I also sat and chatted with several of the NIST folks. They seemed like a good group. [speaker002:] What was the, um [disfmarker] the paper by, um, Lori Lamel that you mentioned? [speaker003:] Um, yeah, we sh we should just have you [disfmarker] have you read it, but, I mea ba i i uh, we've all got these little proceedings, [speaker001:] Mmm, yeah. [speaker003:] but, um, basically, it was about, um, uh, going to a new task where you have insufficient data and using [disfmarker] using data from something else, and adapting, and how well that works. Uh, so in [disfmarker] in fact it was pretty related to what Liz and Andreas did, uh, except that this was not with meeting stuff, it was with [speaker005:] Right. [speaker003:] uh, like I think they s didn't they start off with Broadcast News system? [speaker005:] The their Broadcast News was their acoustic models [speaker003:] And then they went to [disfmarker] [speaker005:] and then all the other tasks were much simpler. [speaker003:] Yeah. [speaker005:] So they were command and control and that sort of thing. [speaker003:] TI digits was one of them, and, uh, Wall Street Journal. [speaker005:] Yep. [speaker002:] What was their rough [disfmarker] what was their conclusion? [speaker005:] Yeah, read Wall Street Journal. It works. [speaker003:] Yeah. [speaker004:] Well, it's [disfmarker] it's a good paper, I mean [disfmarker] [speaker005:] Yeah. [speaker003:] Yeah, yeah. [speaker005:] Yeah, that was one of the ones that I liked. [speaker004:] Bring the [disfmarker] [speaker005:] That [disfmarker] It not only works, in some cases it was better, which I thought was pretty interesting, but that's cuz they didn't control for parameters. So. You know, the Broadcast News nets were [disfmarker] not nets, [speaker003:] Probably. [speaker004:] Right. 
[speaker005:] acoustic models [comment] were a lot more complex. [speaker002:] Did they ever try going [disfmarker] going the other direction from simpler task to more complicated tasks, [speaker005:] n Not in that paper. [speaker002:] or [disfmarker]? [speaker003:] That might be hard. [speaker005:] Yeah, well, one of the big problems with that is [disfmarker] is often the simpler task isn't fully [disfmarker] doesn't have all the phones in it, and that [disfmarker] that makes it very hard. [speaker003:] Yeah. [speaker002:] Mm hmm. [speaker003:] Yeah. [speaker005:] But I've done the same thing. I've been using Broadcast News nets for digits, [speaker003:] Yeah. [speaker005:] like for the spr speech proxy thing that I did? That's what I did. [speaker003:] Yeah, sure. [speaker005:] So. It works. [speaker003:] Yeah. Yeah, and they have [disfmarker] I mean [disfmarker] they have better adaptation than we had than that [disfmarker] that system, [speaker005:] Yep. [speaker003:] so they [disfmarker] um, [speaker005:] You mean they have some. [speaker003:] yeah, we should probably what would [disfmarker] actually what we should do, uh, I haven't said anything about this, but probably the five of us should pick out a paper or two that [disfmarker] that, uh, you know, got our interest, and we should go around the room at one of the Tuesday lunch meetings and say, you know, what [disfmarker] what was good about the conference, [speaker005:] Present. Yep. Do a trip report. [speaker003:] yeah. [speaker004:] Well, the summarization stuff was interesting, I mean, I don't know anything about that field, but for this proposal on meeting summarization, um, I mean, it's sort of a far cry because they weren't working with meeting type data, but he got sort of an overview on some of the different approaches, [speaker005:] Right. [speaker002:] Do you remember who the groups were that we're doing? [speaker004:] so. Well there're [disfmarker] [speaker005:] A lot of different ones. 
[speaker004:] this was the last day, [speaker001:] R I think [disfmarker] [speaker004:] but, I mean, there's [disfmarker] that's a huge field and probably the groups there may not be representative of the field, [speaker001:] Mm hmm. [speaker004:] I [disfmarker] I don't know exactly that everyone submits to this particular conference, [speaker002:] Was [disfmarker] were there folks from BBN presenting? [speaker004:] but yet there was, let's see, this was on the last day, Mitre, BBN, and, um, Prager [disfmarker] [speaker005:] Mitre, BBN, IBM. Uh, [speaker001:] Maryland. [speaker004:] um, I wo it was [disfmarker] [speaker003:] Columbia have anything? No. [speaker004:] no it was [disfmarker] [speaker005:] Wasn't [disfmarker] Who [disfmarker] who [disfmarker] who did the order one? [speaker004:] this was Wednesday morning. The sentence ordering one, was that Barselou, and these guys? [speaker005:] Ugh! [comment] I'm just so bad at that. [speaker001:] Oh. [speaker004:] Anyway, I [disfmarker] I [disfmarker] it's in the program, I should have read it to remind myself, but that's sort of useful and I think like when Mari and Katrin and Jeff are here it'd be good to figure out some kinds of things that we can start doing maybe just on the transcripts cuz we already have [disfmarker] [speaker005:] Yeah, we do have word transcripts. [speaker003:] Mm hmm. [speaker004:] you know, [speaker005:] So. [speaker004:] yeah. [speaker001:] Well, I like the idea that Adam had of [disfmarker] of, um, z maybe generating minutes based on some of these things that we have because it would be easy to [disfmarker] to [disfmarker] to do that just, you know, [speaker004:] Right. [speaker001:] and [disfmarker] and it has to be, though, someone from this group because of the technical nature of the thing. [speaker005:] Someone who actually does take notes, um, [vocalsound] I'm very bad at note taking. 
[speaker004:] But I think what's interesting is there's all these different evaluations, like [disfmarker] just, you know, how do you evaluate whether the summary is good or not, [speaker005:] I always write down the wrong things. [speaker001:] I do take notes. [speaker004:] and that's what's [disfmarker] was sort of interesting to me is that there's different ways to do it, [speaker005:] A judge. [speaker004:] and [disfmarker] [speaker005:] Yep. [speaker002:] Was SRA one of the groups talking about summarization, no? [speaker004:] Hm umm. No. [speaker001:] It was an interesting session. [speaker005:] And as I said, I like the Microsoft talk on [pause] scaling issues in, uh, word sense disambiguation, [speaker001:] One of those w [speaker004:] Yeah. [speaker005:] that was interesting. [speaker003:] Yeah, that was an interesting discussion, [speaker005:] The [disfmarker] [speaker003:] uh, I [speaker005:] It [disfmarker] it [disfmarker] it was the only one [disfmarker] It was the only one that had any sort of real disagreement about. [speaker004:] The data issue comes up all the ti [speaker005:] So. [speaker003:] Well, I didn't have as much disagreement as I would have liked, but I didn't wanna [disfmarker] I wouldn I didn't wanna get into it because, uh, you know, it was the application was one I didn't know anything about, [speaker005:] Yep. [speaker003:] uh, it just would have been, you know, me getting up to be argumentative, but [disfmarker] but, uh, I mean, the missing thi so [disfmarker] so what they were saying [disfmarker] it's one of these things [disfmarker] is [disfmarker] you know, all you need is more data, sort of [disfmarker] But I mea i wh it [disfmarker] [@ @] that's [disfmarker] that's dissing it, uh, improperly, I mean, it was a nice study. Uh, they were doing this [disfmarker] it wasn't word sense disambiguation, it was [disfmarker] [speaker004:] Yeah [disfmarker] yeah [disfmarker] yeah [disfmarker] [speaker005:] Well, it sort of was. 
[speaker003:] was it w was it word sense? [speaker005:] But it was [disfmarker] it was a very simple case of "to" versus "too" versus "two" and "there", "their", "they're" [disfmarker] [speaker003:] Yes. [speaker004:] And there and their and [disfmarker] [speaker003:] Yeah, yeah. OK. [speaker004:] and that you could do better with more data, I mean, that's clearly statistically [disfmarker] [speaker003:] Right. [speaker005:] Yeah. [speaker003:] And so, what they did was they had these different kinds of learning machines, and they had different amounts of data, and so they did like, you know, eight different methods that everybody, you know, uh, argues about [disfmarker] about, "Oh my [disfmarker] my kind of learning machine is better than your kind of learning machine." And, uh, they were [disfmarker] started off with a million words that they used, which was evidently a number that a lot of people doing that particular kind of task had been using. So they went up, being Microsoft, they went up to a billion. And then they had this log scale showing a [disfmarker] you know, and [disfmarker] and naturally everything gets [disfmarker] [speaker005:] Them being beep, [comment] they went off to a billion. [speaker003:] they [disfmarker] well, it's a big company, I didn't [disfmarker] I didn't mean it as a ne anything negative, [speaker005:] Yeah. [speaker003:] but i i i [speaker004:] You mean the bigger the company the more words they use for training? [speaker005:] Well, I think the reason they can do that, is that they assumed that text that they get off the web, like from Wall Street Journal, is correct, and edit it. So that's what they used as training data. [speaker003:] Yeah. [speaker005:] It's just saying if it's in this corpus it's correct. [speaker003:] OK. But, I mean, yes. Of course there was the kind of effect that, you know, one would expect that [disfmarker] uh [disfmarker] that you got better and better performance with more and more data. 
Um, but the [disfmarker] the real point was that the [disfmarker] the different learning machines are sort of all over the place, and [disfmarker] and by [disfmarker] by going up significantly in data you can have much bigger effect than by switching learning machines and furthermore which learning machine was on top kind of depended on where you were in this picture, so, [speaker002:] This was my concern about the recognizer in Aurora. [speaker003:] uh, That [disfmarker] [speaker002:] That the differences we're seeing in the front end is b [speaker003:] Yeah. [speaker005:] Are irrelevant. [speaker002:] are irrelevant once you get a real recognizer at the back end. [speaker003:] Yeah. [speaker004:] If you add more data? Or [disfmarker] [speaker003:] Yeah. [speaker002:] You know? [speaker004:] Huh. [speaker003:] Yeah, could well be. So [disfmarker] so, I mean, that was [disfmarker] that was kind of, you know, it's a good point, but the problem I had with it was that the implications out of this was that, uh, the kind of choices you make about learning machines were therefore irrelevant which is not at [disfmarker] n t as far as I know in [disfmarker] in tasks I'm more familiar with [@ @] is not at all true. What i what is [disfmarker] is true is that different learning machines have different properties, and you wanna know what those properties are. And someone else sort of implied that well we s you know, a all the study of learning machine we still don't know what those properties are. We don't know them perfectly, but we know that some kinds use more memory and [disfmarker] and some other kinds use more computation and some are [disfmarker] are hav have limited kind of discrimination, but are just easy to use, and others are [disfmarker] [speaker002:] But doesn't their conclusion just sort of [disfmarker] you could have guessed that before they even started?
Because if you assume that these learning things get better and better and better, [speaker003:] You would guess [disfmarker] [speaker002:] then as you approach [disfmarker] there's a point where you can't get any better, right? You get everything right. [speaker003:] Yeah. [speaker004:] It's just no [disfmarker] [speaker005:] But [disfmarker] No, but there was still a spread. [speaker002:] So they're all approaching. [speaker005:] They weren't all up They weren't converging. They were all still spread. [speaker003:] It w [speaker002:] But what I'm saying is that th they have to, as they all get better, they have to get closer together. [speaker005:] But they [disfmarker] Right, right. Sure. But they hadn't even come close to that point. All the tasks were still improving when they hit a billion. [speaker003:] Yeah. [speaker002:] But they're all going the same way, right? So you have to get closer. [speaker003:] Eventually. O one would [speaker005:] But they didn't get closer. [speaker002:] Oh they didn't? [speaker005:] They just switched position. [speaker003:] Well [disfmarker] well that's getting cl I mean, yeah, the spread was still pretty wide that's th that's true, [speaker005:] Yep. [speaker003:] but [disfmarker] but, uh, I think it would be irntu intu intuition that this would be the case, but, uh, to really see it and to have the intuition is quite different, I mean, I think somebody w w let's see who was talking about earlier that the effect of having a lot more data is quite different in Switchboard than it is in [disfmarker] in Broadcast News, [speaker004:] Well it's different for different tasks. [speaker005:] Yeah. It was Liz. Yeah. [speaker003:] yeah. [speaker004:] So it depends a lot on whether, you know, it [disfmarker] disambiguation is exactly the case where more data is better, right? You're [disfmarker] you're [disfmarker] you can assume similar distributions, [speaker003:] Yeah. 
[speaker004:] but if you wanted to do disambiguation on a different type of, uh, test data then your training data, then that extra data wouldn't generalize, [speaker005:] Right. [speaker004:] so. [speaker003:] Right. [speaker005:] But, I think one of their p They [disfmarker] they had a couple points. w [comment] Uh, I think one of them was that "Well, maybe simpler algorithms and more data are [disfmarker] is better". Less memory, faster operation, simpler. Right? Because their simplest, most brain dead algorithm did pretty darn well [speaker003:] Mm hmm. [speaker005:] when you got [disfmarker] gave it a lot more data. And then also they were saying, "Well, m You have access to a lot more data. Why are you sticking with a million words?" I mean, their point was that this million word corpus that everyone uses is apparently ten or fifteen years old. And everyone is still using it, so. [speaker003:] Yeah. But anyway, I [disfmarker] I [disfmarker] I think it's [disfmarker] it's just the [disfmarker] the i it's [disfmarker] it's [disfmarker] it's not really the conclusion they came to so much, as the conclusion that some of the, uh, uh, commenters in the crowd [vocalsound] came up with [speaker005:] But we could talk about this stuff, I think this would be fun to do. Right. [speaker003:] that, uh, you know, this therefore is further evidence that, you know, more data is really all you should care about, and that I thought was just kind of going too far the other way, [speaker005:] Machine learning. [speaker003:] and [disfmarker] and the [disfmarker] the, uh, one [disfmarker] one person ga g g got up and made a [disfmarker] a brief defense, uh, but it was a different kind of grounds, it was that [disfmarker] that, uh, i w the reason people were not using so much data before was not because they were stupid or didn't realize data was important, but in fact th they didn't have it available. 
Um, but the other point to make a again is that, uh, machine learning still does matter, but it [disfmarker] it matters more in some situations than in others, and it [disfmarker] and also there's [disfmarker] there's not just mattering or not mattering, but there's mattering in different ways. I mean, you might be in some situation where you care how much memory you're using, [speaker005:] Right. [speaker003:] or you care, you know, what recall time is, or you care, you know, and [disfmarker] and [disfmarker] [speaker005:] Or you only have a million words [pause] for your [disfmarker] some new task. [speaker003:] Yeah, or [disfmarker] or, uh [disfmarker] [speaker004:] Or done another language, or [disfmarker] [speaker005:] Yep. [speaker004:] I mean, you [disfmarker] so there's papers on portability and rapid prototyping and blah blah blah, [speaker003:] Yeah. [speaker005:] Right. [speaker004:] and then there's people saying, "Oh, just add more data." [speaker003:] Yeah. [speaker004:] So, [speaker003:] And there's cost! [speaker005:] Mm hmm. [speaker004:] these are like two different religions, basically. [speaker005:] Cost. Yeah. [speaker003:] There's just plain cost, [speaker005:] That's a big one. [speaker003:] you know, so [disfmarker] so these, I mean th the [disfmarker] in the [disfmarker] in the speech side, the thing that [@ @] always occurs to me is that if you [disfmarker] if you [disfmarker] uh [disfmarker] one person has a system that requires ten thousand hours to train on, and the other only requires a hundred, and they both do about the same because the hundred hour one was smarter, that's [disfmarker] that's gonna be better. [speaker005:] Yep. [speaker003:] because people, I mean, there isn't gonna be just one system that people train on and then that's it for the r for all of time. I mean, people are gonna be doing other different things, and so it [disfmarker] these [disfmarker] these things matters [disfmarker] matter. 
[speaker001:] Yeah, that's it. So, I mean, this was a very provocative slide. [speaker005:] Yeah, so that's one of the slides they put up. [speaker001:] She put this up, and it was like this is [disfmarker] this p people kept saying, "Can I see that slide again?" [speaker003:] Yeah. [speaker004:] Yeah, yeah. [speaker001:] and then they'd make a comment, and one person said, well known person said, um, you know, "Before you dismiss forty five years including my work [disfmarker]" [speaker005:] Forty five years of research. [speaker007:] Yeah. [speaker004:] Yeah. But th you know, the same thing has happened in computational linguistics, right? You look at the ACL papers coming out, and now there's sort of a turn back towards, OK we've learned statistic [disfmarker] you know, we're basically getting what we expect out of some statistical methods, and, you know, the there's arguments on both sides, [speaker005:] Yep. I think the matters is the thing that [disfmarker] that was misleading. [speaker004:] so [disfmarker] [speaker001:] That was very offending, [speaker004:] Yeah, yeah. [speaker001:] very offending. [speaker005:] Is that [disfmarker] all [disfmarker] all of them are based on all the others, right? Just, you [disfmarker] you can't say [disfmarker] [speaker003:] Right. [speaker002:] Maybe they should have said "focus" or something. [speaker005:] Yeah. I mean, so. [disfmarker] And I'm saying the same thing happened with speech recognition, right? For a long time people were hand c coding linguistic rules and then they discovered machine learning worked better. And now they're throwing more and more data and worrying [disfmarker] perhaps worrying less and less about, uh, the exact details of the algorithms. [speaker004:] And [disfmarker] and then you hit this [disfmarker] [speaker005:] Except when they have a Eurospeech paper. [speaker003:] Yeah. [speaker001:] Yeah. [speaker005:] Anyway. [speaker003:] Anyway, [speaker005:] Shall we read some digits? 
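[Transcriber's note: the effect debated above — a deliberately simple confusion-set disambiguator that keeps improving as training data grows — can be sketched as a toy experiment. The context words, data sizes, and previous-word rule below are invented for illustration; nothing here reproduces the actual study's setup.]

```python
import random
from collections import Counter, defaultdict

# Toy confusion-set disambiguation: choose among "to" / "too" / "two"
# from the previous word only.  Context lists are made up.
CONTEXTS = {
    "to": ["want", "going", "have", "need", "used"],
    "too": ["me", "late", "much", "way", "far"],
    "two": ["chapter", "number", "channel", "page", "only"],
}
ALL_WORDS = [w for words in CONTEXTS.values() for w in words]

def sample(n, rng):
    """Draw n (previous_word, target) pairs; 10% get a random, misleading context."""
    pairs = []
    for _ in range(n):
        target = rng.choice(list(CONTEXTS))
        prev = rng.choice(ALL_WORDS if rng.random() < 0.1 else CONTEXTS[target])
        pairs.append((prev, target))
    return pairs

def train(pairs):
    """Count previous_word -> target co-occurrences."""
    table = defaultdict(Counter)
    for prev, target in pairs:
        table[prev][target] += 1
    return table

def accuracy(table, pairs, fallback="to"):
    """Score the simplest rule: most frequent target seen after this word."""
    correct = 0
    for prev, target in pairs:
        guess = table[prev].most_common(1)[0][0] if prev in table else fallback
        correct += guess == target
    return correct / len(pairs)

rng = random.Random(0)
held_out = sample(2000, rng)
for n in (50, 500, 5000):
    print(f"{n:5d} training pairs -> accuracy {accuracy(train(sample(n, rng)), held_out):.2f}")
```

With more training pairs, fewer contexts fall back to the default guess, so the same naive rule climbs the learning curve — the "more data beats a cleverer machine" point, on a synthetic scale.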
[speaker003:] tea is [disfmarker] tea is, uh, starting. [speaker005:] Are we gonna do one at a time? Or should we read them all agai at once again. [speaker003:] Let's do it all at once. We [disfmarker] [@ @] [disfmarker] let's try that again. [speaker004:] Yes! [speaker001:] Yeah, that's good. [speaker005:] OK. [speaker004:] So, and maybe we won't laugh this time also. [speaker005:] So remember to read the transcript number so that, uh, everyone knows that [disfmarker] what it is. And ready? [speaker001:] Yeah. [speaker005:] Three, two, one. [speaker003:] Boy, is that ever efficient. [speaker005:] Yep. That's really fast. [speaker003:] Yeah. Yeah. [speaker002:] OK we're on [speaker001:] OK. [speaker003:] Yes. [speaker002:] and we seem to be working. [speaker003:] One, two, three, four, f [speaker001:] OK. [speaker002:] We didn't crash [disfmarker] we're not crashing anymore and it really bothers me. [speaker007:] I do. [speaker001:] Yeah? [speaker003:] No crashing. [speaker007:] I crashed when I started this morning. [speaker002:] You crashed [disfmarker] crashed this morning? [speaker003:] Yeah? [speaker002:] I did not crash this morning. [speaker001:] Oh! Well maybe it's just, you know, how many t u u u u how many times you crash in a day. [speaker007:] Really? Yeah. Maybe, yeah. [speaker001:] First time [disfmarker] first time in the day, you know. [speaker007:] Or maybe it's once you've [pause] done enough meetings [comment] it won't crash on you anymore. [speaker005:] Yeah. [speaker003:] No? [speaker006:] Yeah. [speaker007:] It's a matter of experience. [speaker005:] Yeah. [speaker001:] Yeah. [speaker006:] Self learning, yeah. [speaker001:] That's [disfmarker] that's great. [speaker007:] Yeah. [speaker003:] Yeah. [speaker001:] Uh. Do we have an agenda? Liz [disfmarker] Liz and Andreas can't sh can't [disfmarker] uh, can't come. [speaker002:] I do. [speaker001:] So, they won't be here. 
[speaker002:] I have agenda [speaker007:] Did [disfmarker] [speaker002:] and it's all me. Cuz no one sent me anything else. [speaker007:] Did they send, uh, the messages to you about the meeting today? [speaker002:] I have no idea but I just got it a few minutes ago. [speaker007:] Oh. Oh. [speaker002:] Right when you were in my office it arrived. [speaker007:] OK, cuz I checked my mail. I didn't have anything. [speaker002:] So, does anyone have any a agenda items other than me? I actually have one more also which is to talk about the digits. [speaker001:] Uh, right, so [disfmarker] so I [disfmarker] I was just gonna talk briefly about the NSF ITR. [speaker003:] Mm hmm. Yeah. [speaker002:] Oh, great. [speaker001:] Uh, and then, you have [disfmarker] [speaker006:] Can w [speaker001:] I mean, I won't say much, but [disfmarker] [comment] uh, but then, uh, you said [disfmarker] wanna talk about digits? [speaker002:] I have a short thing about digits and then uh I wanna talk a little bit about naming conventions, although it's unclear whether this is the right place to talk about it. So maybe just talk about it very briefly and take the details to the people who [disfmarker] for whom it's relevant. [speaker001:] Right. [speaker003:] Yeah. [speaker006:] I could always say something about transcription. I've been [disfmarker] [vocalsound] but [disfmarker] but [disfmarker] uh, well [disfmarker] [speaker001:] Well if we [disfmarker] Yeah, we shouldn't add things in just to add things in. I'm actually pretty busy today, [speaker006:] Yeah. [speaker001:] so if we can [disfmarker] [comment] [vocalsound] we [disfmarker] [speaker006:] Yeah, yeah, yeah. [speaker001:] a short meeting would be fine. [speaker006:] This does sound like we're doing fine, yeah. That won't do. [speaker002:] So the only thing I wanna say about digits is, we are pretty much done with the first test set. There are probably forms here and there that are marked as having been read that weren't really read. 
So I won't really know until I go through all the transcriber forms and extract out pieces that are in error. So I wa Uh. Two things. The first is what should we do about digits that were misread? My opinion is, um, we should just throw them out completely, and have them read again by someone else. You know, the grouping is completely random, [speaker003:] Uh huh. [speaker002:] so it [disfmarker] it's perfectly fine to put a [disfmarker] a group together again of errors and have them re read, just to finish out the test set. [speaker006:] Oh! By [disfmarker] throw them out completely? [speaker002:] Um, the other thing you could do is change the transcript to match what they really said. So those are [disfmarker] those are the two options. [speaker006:] Mm hmm. [speaker003:] Yeah. [speaker001:] But there's often things where people do false starts. I know I've done it, where I say [disfmarker] say a [disfmarker] [speaker002:] What the transcribers did with that is if they did a correction, and they eventually did read the right string, [comment] you extract the right string. [speaker007:] Oh, you're talking about where they completely read the wrong string and didn't correct it? [speaker005:] Yeah. [speaker002:] Yeah. And didn't notice. [speaker007:] Ah. [speaker005:] Yeah. [speaker002:] Which happens in a few places. [speaker003:] Yeah. [speaker006:] Well, and s and you're talking string wise, [speaker002:] So [disfmarker] so [disfmarker] [speaker006:] you're not talking about the entire page? [speaker005:] Yeah. [speaker002:] Correct. [speaker006:] I get it. [speaker002:] And so the [disfmarker] the two options are change the transcript to match what they really said, but then [disfmarker] but then the transcript isn't the Aurora test set anymore. I don't think that really matters because the conditions are so different. And that would be a little easier. [speaker007:] Well how many are [disfmarker] how [disfmarker] how often does that happen? 
[speaker002:] Mmm, five or six times. [speaker007:] Oh, so it's not very much. [speaker002:] No, it's not much at all. [speaker007:] Seems like we should just change the transcripts [speaker005:] Yeah. [speaker002:] OK. [speaker007:] to match. [speaker001:] Yeah, it's five or six times out of [pause] thousands? [speaker003:] Yeah. [speaker002:] Four thousand. [speaker003:] Four thous Ah! Four thousand. [speaker001:] Four thousand? [speaker007:] Yeah, it's [disfmarker] [speaker001:] Yeah, I would, uh, [vocalsound] tak do the easy way, [speaker007:] Yeah. [speaker002:] OK. [speaker001:] yeah. [speaker003:] Yeah. [speaker001:] It [disfmarker] it's kinda nice [disfmarker] [speaker003:] Mmm. [speaker001:] I mean, wh who knows what studies people will be doing on [disfmarker] on speaker dependent things and so I think having [disfmarker] having it all [disfmarker] [speaker003:] Yeah. [speaker001:] the speakers who we had is [disfmarker] is at least interesting. [speaker007:] So you [disfmarker] um, how many digits have been transcribed now? [speaker002:] Four thousand lines. And each line is between one and about ten digits. [speaker007:] Four thousand lines? [speaker002:] I didn't [disfmarker] I didn't compute the average. I think the average was around four or five. [speaker001:] So that's a couple hours of [disfmarker] of, uh, speech, probably. [speaker007:] Wow. [speaker002:] Yep. Yep. [speaker001:] Which is a yeah reasonable [disfmarker] reasonable test set. [speaker003:] Mm hmm. [speaker007:] Mm hmm. [speaker002:] And, Jane, I do have a set of forms which I think you have copies of somewhere. [speaker006:] Mm hmm. Yeah, true. [speaker002:] Oh you do? [speaker006:] Mm hmm. Mm hmm. [speaker002:] Oh OK, good, good. Yeah, I was just wond I thought I had [disfmarker] had all of them back from you. [speaker006:] No, not yet. 
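[Transcriber's note: the cleanup policy settled on above can be sketched as a single rule per digit line — keep the corrected reading when the speaker eventually read the right string, and for uncorrected misreads patch the reference transcript to match what was actually said. The record format here is hypothetical, not the corpus's actual tooling.]

```python
def clean_digit_line(prompt, read):
    """Apply the policy discussed above to one digit string.

    prompt: the digit string the speaker was asked to read, e.g. "4 3 9".
    read:   what the transcriber heard, possibly with a false start.
    Returns the reference string to keep for the test set.
    """
    p = prompt.split()
    r = read.split()
    if r == p:
        return prompt                     # read correctly as prompted
    if len(r) > len(p) and r[-len(p):] == p:
        return prompt                     # false start, then corrected: extract the right string
    return read                           # uncorrected misread: patch transcript to match speech

# e.g. clean_digit_line("4 3 9", "4 4 3 9") -> "4 3 9"
#      clean_digit_line("4 3 9", "4 3 5")   -> "4 3 5"
```

The third branch is the "five or six times out of four thousand" case: the reference no longer matches the original Aurora prompt, but the audio and transcript stay consistent for the same speaker.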
[speaker002:] And then the other thing is that, uh, the forms in front of us here that we're gonna read later, were suggested by Liz because she wanted to elicit some different prosodics from digits. [speaker003:] Mm hmm. [speaker002:] And so, uh, I just wanted people to, take a quick look at the instructions [speaker005:] Eight eight two two two nine. [speaker002:] and the way it wa worked and see if it makes sense and if anyone has any comments on it. [speaker001:] I see. And the decision here, uh, was to continue with uh the words rather than the [disfmarker] the numerics. [speaker002:] Uh, yes, although we could switch it back. The problem was O and zero. Although we could switch it back and tell them always to say "zero" or always to say "O". [speaker006:] Oh [disfmarker] [speaker001:] Or neither. [speaker003:] Yeah. [speaker001:] But it's just two thing [disfmarker] ways that you can say it. [speaker002:] Mm hmm. [speaker001:] Right? [speaker006:] Oh. [speaker002:] Sure. [speaker005:] Yeah. [speaker001:] Um [disfmarker] um, that's the only thought I have because if you t start talking about these, you know u tr She's trying to get at natural groupings, [speaker002:] Right. [speaker001:] but it [disfmarker] there's [disfmarker] there's nothing natural about reading numbers this way. I mean if you saw a telephone number you would never see it this way. [speaker002:] The [disfmarker] the problem also is she did want to stick with digits. I mean I'm speaking for her since she's not here. [speaker001:] Yeah. [speaker002:] But, um, the other problem we were thinking about is if you just put the numerals, [comment] they might say forty three instead of four three. [speaker005:] Yeah. [speaker007:] Mmm. [speaker003:] Yeah. [speaker005:] Yeah. [speaker003:] Yeah. [speaker006:] Well, if there's space, though, between them. I mean, you can [disfmarker] With [disfmarker] when you space them out they don't look like, uh, forty three anymore. [speaker005:] Yeah. 
[speaker002:] Well, she and I were talking about it, [speaker001:] Yeah. [speaker002:] and she felt that it's very, very natural to do that sort of chunking. [speaker001:] She's right. It's [disfmarker] it [disfmarker] it's a different problem. I mean it's a [disfmarker] it's a [disfmarker] it's an interesting problem [disfmarker] I mean, we've done stuff with numbers before, and yeah sometimes people [disfmarker] If you say s "three nine eight one" sometimes people will say "thirty nine eighty one" or "three hundred [disfmarker] three hundred eighty nine one", [speaker003:] Yeah. [speaker001:] or [disfmarker] I don't think they'd say that, but [disfmarker] [speaker002:] Not very frequently [speaker001:] but th no [disfmarker] [speaker002:] but, [vocalsound] they certainly could. [speaker001:] But [disfmarker] Yeah. Uh, th thirty eight ninety one is probably how they'd do it. [speaker002:] So. I mean, this is something that Liz and I spoke about [speaker001:] But [disfmarker] [speaker002:] and, since this was something that Liz asked for specifically, I think we need to defer to her. [speaker001:] I see. Mm hmm. [speaker003:] Yeah. [speaker001:] OK. Well, we're probably gonna be collecting meetings for a while and if we decide we still wanna do some digits later we might be able to do some different ver different versions, [speaker002:] Do something different, yeah. [speaker001:] but this is the next suggestion, so. OK. OK, so uh e l I guess, let me, uh, get my [disfmarker] my short thing out about the NSF. I sent this [disfmarker] actually this is maybe a little side thing. Um, I [disfmarker] I sent to what I thought we had, uh, in some previous mail, as the right joint thing to send to, [speaker002:] It was. [speaker001:] which was "M [disfmarker] MTG RCDR hyphen joint". [speaker002:] Joint. Yep. 
[speaker001:] But then I got some sort of funny mail saying that the moderator was going to [disfmarker] [speaker002:] It's [disfmarker] That's because they set the one up at UW [disfmarker] that's not on our side, that's on the U dub [comment] side. [speaker001:] Oh. [speaker002:] And so U UW set it up as a moderated list. [speaker006:] Yeah. [speaker001:] Oh, OK. [speaker002:] And, I have no idea whether it actually ever goes to anyone so you might just wanna mail to Mari [speaker001:] No [disfmarker] no, th I got [disfmarker] I got, uh, little excited notes from Mari and Jeff and so on, [speaker002:] and [disfmarker] OK, good. [speaker001:] so it's [disfmarker] Yeah. [speaker002:] So the moderator actually did repost it. [speaker001:] Yeah. [speaker002:] Cuz I had sent one earlier [disfmarker] Actually the same thing happened to me [disfmarker] I had sent one earlier. The message says, "You'll be informed" and then I was never informed but I got replies from people indicating that they had gotten it, so. [speaker001:] Right. [speaker002:] It's just to prevent spam. [speaker001:] I see. Yeah so O [disfmarker] OK. Well, anyway, I guess [disfmarker] everybody here [disfmarker] Are y are [disfmarker] you are on that list, [speaker007:] Mm hmm. [speaker001:] right? So you got the note? [speaker007:] Yeah. [speaker001:] Yeah? OK. Um, so this was, uh, a, uh, proposal that we put in before on [disfmarker] on more [disfmarker] more higher level, uh, issues in meetings, from [disfmarker] I guess higher level from my point of view. Uh, [vocalsound] and, uh, meeting mappings, and, uh [disfmarker] so is i for [disfmarker] it was a [vocalsound] proposal for the ITR program, uh, Information Technology Research program's part of National Science Foundation. It's the [pause] second year of their doing, uh, these grants. 
They're [disfmarker] they're [disfmarker] a lot of them are [disfmarker] some of them anyway, are larger [disfmarker] larger grants than the usual, small NSF grants, and. So, they're very competitive, and they have a first phase where you put in pre proposals, and we [disfmarker] we, uh, got through that. And so th the [disfmarker] the next phase will be [disfmarker] we'll actually be doing a larger proposal. And I'm [disfmarker] I [disfmarker] I hope to be doing very little of it. And [disfmarker] uh, [vocalsound] which was also true for the pre proposal, so. Uh, there'll be bunch of people working on it. So. [speaker002:] When's [disfmarker] when's the full proposal due? [speaker001:] Uh, I think April ninth, or something. So it's about a month. [speaker005:] p s [speaker002:] Yep. [speaker001:] Um [disfmarker] [speaker002:] And they said end of business day you could check on the reviewer forms, [speaker007:] u Tomorrow. [speaker002:] is that [disfmarker] [speaker001:] Tomorrow. [speaker005:] Tomorrow? [speaker001:] March second, I said. [speaker005:] Yeah. [speaker003:] Tomorrow. [speaker002:] I've been a day off all week. I guess that's a good thing cuz that way I got my papers done early. [speaker007:] It would be interesting [disfmarker] [speaker001:] So that's amazing you showed up at this meeting! [speaker002:] It is. It is actually quite amazing. [speaker005:] Yeah. [speaker007:] It'll be interesting to see the reviewer's comments. [speaker001:] Yeah. Yeah. My favorite is was when [disfmarker] when [disfmarker] when one reviewer says, uh, "you know, this should be far more detailed", and the nex the next reviewer says, "you know, there's way too much detail". [speaker002:] Yep. Or "this is way too general", and the other reviewer says, "this is way too specific". [speaker003:] Yeah. [speaker001:] Yeah. [speaker003:] Yeah. [speaker001:] Yeah. [speaker002:] "This is way too hard", "way too easy". [speaker001:] We'll see. Maybe there'll be something useful. 
And [disfmarker] and, uh [disfmarker] [speaker002:] Well it sounded like they [disfmarker] they [disfmarker] the first gate was pretty easy. Is that right? That they didn't reject a lot of the pre proposals? [speaker001:] Do you know anything about the numbers? [speaker002:] No. [speaker007:] It's just from his message it sounded like that. [speaker002:] Just [disfmarker] just th [speaker005:] Yeah. Yeah. I said something, yeah. [speaker007:] Gary Strong's [disfmarker] there was a sentence at the end of one of his paragraphs [speaker001:] I [speaker007:] I [disfmarker] [speaker005:] Yeah. [speaker001:] I should go back and look. I didn't [disfmarker] I don't think that's true. [speaker002:] Yeah, OK. [speaker007:] Mmm. He said the next phase'll be very, competitive [speaker005:] Very [disfmarker] very, [speaker007:] because we didn't want to weed out much in the first phase. [speaker005:] yeah. Yeah. [speaker002:] Or something like that, [speaker001:] Well we'll have to see what the numbers are. [speaker003:] Mm hmm. [speaker002:] so. [speaker003:] Hmm. [speaker001:] Yeah. But they [disfmarker] they have to weed out enough so that they have enough reviewers. [speaker002:] Right. [speaker003:] Yeah. [speaker001:] So, uh, you know, maybe they didn't r weed out as much as usual, but it's [disfmarker] it's usually a pretty [disfmarker] But it [disfmarker] Yeah. It's [disfmarker] it's certainly not [disfmarker] I'm sure that it's not down to one in two or something [speaker002:] Right. [speaker001:] of what's left. I'm sure it's, you know [disfmarker] [speaker002:] How [disfmarker] how many awards are there, do you know? [speaker001:] Well there's different numbers of w awards for different size [disfmarker] They have three size grants. This one there's, um [disfmarker] See the small ones are less than five hundred thousand total over three years and that they have a fair number of them. 
Um, and the large ones are, uh, boy, I forget, I think, more than, uh, more than a million and a half, more than two million or something like that. And [disfmarker] and we're in the middle [disfmarker] [speaker002:] Mm hmm. [speaker001:] middle category. I think we're, uh, uh, I forget what it was. But, um [disfmarker] Uh, I don't remember, but it's [pause] pr probably along the li I [disfmarker] I could be wrong on this yeah, but probably along the lines of fifteen or [disfmarker] that they'll fund, or twenty. I mean when they [disfmarker] Do you [disfmarker] do you know how many they funded when they f in [disfmarker] in Chuck's, that he got last year? [speaker007:] I don't [disfmarker] I don't know. [speaker001:] Yeah. [speaker002:] I thought it was smaller, that it was like four or five, wasn't it? [speaker007:] I [disfmarker] I'm [disfmarker] [speaker001:] Well they fund [disfmarker] [speaker007:] I don't remember. [speaker001:] they [disfmarker] yeah. [speaker002:] Uh it doesn't matter, [speaker001:] I mean [disfmarker] [speaker002:] we'll find out one way or another. [speaker001:] Yeah. I mean last time I think they just had two categories, small and big, [speaker002:] Mm hmm. [speaker001:] and this time they came up with a middle one, so it'll [disfmarker] there'll be more of them that they fund than [disfmarker] of the big. [speaker007:] If we end up getting this, um, what will it mean to ICSI in terms of, w wh where will the money go to, what would we be doing with it? [speaker001:] Uh. [speaker002:] Exactly what we say in the proposal. [speaker007:] I [disfmarker] I mean uh which part is ICSI though. [speaker001:] You know, it [disfmarker] i [speaker007:] I mean [disfmarker] [speaker001:] None of it will go for those yachts that we've talking about. [speaker007:] Dang! [speaker001:] Um, well, no, I mean it's [disfmarker] u It [disfmarker] [speaker007:] It's just for the research [disfmarker] to continue the research on the Meeting Recorder stuff? 
[speaker001:] It's extending the research, right? [speaker007:] Yeah. [speaker001:] Because the other [disfmarker] [speaker002:] Yeah it's go higher level stuff than we've been talking about for Meeting Recorder. [speaker001:] Yeah. Yeah the other things that we have, uh, been working on with, uh, the c with Communicator [disfmarker] uh, especially with the newer things [disfmarker] with the more acoustically oriented things are [disfmarker] are [disfmarker] are [disfmarker] are lower level. And, this is dealing with, uh, mapping on the level of [disfmarker] of, um, the conversation [disfmarker] [speaker007:] Mm hmm. [speaker001:] of mapping the conversations [speaker007:] Right, right. [speaker001:] to different kind of planes. So. Um. But, um. So it's all it's all stuff that none none of us are doing right now, or none of us are funded for, so it's [disfmarker] so it's [disfmarker] it would be new. [speaker007:] So assuming everybody's completely busy now, it means we're gonna hafta, hire more students, or, something? [speaker001:] Well there's evenings, and there's weekends, and [disfmarker] Uh. Yeah, there [disfmarker] there would be [disfmarker] there would be new hires, and [disfmarker] and there [disfmarker] there would be expansion, but, also, there's always [disfmarker] [vocalsound] for everybody there's [disfmarker] there's always things that are dropping off, grants that are ending, or other things that are ending, [speaker007:] Right. [speaker001:] so, [speaker003:] Mm hmm. [speaker001:] there's [disfmarker] there's a [vocalsound] continual need to [disfmarker] to bring in new things. [speaker007:] Right. [speaker002:] Yep. [speaker007:] I see. [speaker001:] But [disfmarker] but there definitely would be new [disfmarker] new [disfmarker] new, uh, students, and so forth, both at [disfmarker] at UW and here. [speaker002:] Are there any students in your class who are [vocalsound] expressing interest? [speaker001:] Um, not [pause] clear yet. 
[speaker002:] Other than the one who's already here. [speaker001:] Not clear yet. I mean we got [vocalsound] [disfmarker] we have [disfmarker] yeah, two of them are [disfmarker] two in the c There're [pause] two in the class already here, and then [disfmarker] and [disfmarker] and, uh [disfmarker] uh, then there's a third who's doing a project here, [speaker002:] Mm hmm. [speaker001:] who, uh [disfmarker] But he [disfmarker] he [disfmarker] he won't be in the country that long, and, maybe another will end up. [speaker002:] Yep. [speaker001:] Actually there is one other guy who's looking [disfmarker] that [disfmarker] that's that guy, uh, Jeremy? [comment] I think. [speaker003:] Mm hmm. [speaker001:] Anyway, yeah that's [disfmarker] that's all I was gonna say is that [disfmarker] that that's [disfmarker] you know, that's nice and we're sorta preceding to the next step, and, [vocalsound] it'll mean some more work, uh, you know, in [disfmarker] in March in getting the proposal out, and then, it's, uh, you know [disfmarker] We'll see what happens. Uh, the last one was [disfmarker] that you had there, [comment] was about naming? [speaker002:] Yep. It just, uh [disfmarker] we've been cutting up sound files, in [disfmarker] for ba both digits and for, uh, doing recognition. And Liz had some suggestions on naming and it just brought up the whole issue that hasn't really been resolved about naming. So, uh, one thing she would like to have is for all the names to be the same length so that sorting is easier. [speaker003:] Yeah. [speaker002:] Um, same number of characters so that when you're sorting filenames you can easily extract out bits and pieces that you want. And that's easy enough to do. And I don't think we have so many meetings that that's a big deal just to change the names. So that means, uh, instead of calling it "MR one", "MR two", you'd call it "MRM zero zero one", "MRM zero zero two", things like that. 
Just so that they're [disfmarker] they're all the same length. [speaker006:] But, you know, when you, do things like that you can always [disfmarker] as long as you have [disfmarker] uh, you can always search from the beginning or the end of the string. You know, so "zero zero two" [disfmarker] [speaker002:] The problem is that they're a lot of fields. [speaker006:] Yeah. [speaker002:] Alright, so we [disfmarker] we have th we're gonna have the speaker ID, the session, uh [disfmarker] uh, [speaker006:] Yeah, well, your example was really [disfmarker] [speaker002:] information on the microphones, [speaker006:] i [speaker003:] Uh huh. [speaker002:] information on the speak on the channels and all that. [speaker006:] OK. [speaker002:] And so if each one of those is a fixed length, the sorting becomes a lot easier. [speaker004:] She wanted to keep them [vocalsound] the same lengths across different meetings also. So like, the NSA meeting lengths, [comment] all filenames are gonna be the same length as the Meeting Recorder meeting names? [speaker002:] Yep. And as I said, the it's [disfmarker] we just don't have that many that that's a big deal. [speaker007:] Cuz of digits. [speaker002:] And so, uh, um, at some point we have to sort of take a few days off, let the transcribers have a few days off, make sure no one's touching the data and reorganize the file structures. And when we do that we can also rationalize some of the naming. [speaker006:] I [disfmarker] I would think though that the transcribe [disfmarker] the transcripts themselves wouldn't need to have such lengthy names. So, I mean, you're dealing with a different domain there, and with start and end times and all that, [speaker002:] Right. [speaker006:] and channels and stuff, so, it's a different [pause] set. [speaker002:] Right. So the only thing that would change with that is just the directory names, I would change them to match. So instead of being MR one it would be MRM zero zero one. 
But I don't think that's a big deal. [speaker006:] Fine. Fine. [speaker002:] So for [disfmarker] for m the meetings we were thinking about three letters and three numbers for meeting I Ds. Uh, for speakers, M or F and then three numbers, For, uh [disfmarker] and, uh, that also brings up the point that we have to start assembling a speaker database so that we get those links back and forth and keep it consistent. Um, and then, uh, the microphone issues. We want some way of specifying, more than looking in the "key" file, what channel and what mike. What channel, what mike, and what broadcaster. Or [disfmarker] I don't know how to s say it. So I mean with this one it's this particular headset with this particular transmitter w [pause] as a wireless. [speaker003:] Yeah. [speaker005:] Yep. [speaker002:] And you know that one is a different headset and different channel. And so we just need some naming conventions on that. [speaker003:] Yeah. [speaker002:] And, uh, [speaker003:] Uh huh. [speaker002:] that's gonna become especially important once we start changing the microphone set up. We have some new microphones that I'd like to start trying out, um, once I test them. And then we'll [disfmarker] we'll need to specify that somewhere. So I was just gonna do a fixed list of, uh, microphones and types. [speaker003:] Yeah. [speaker005:] OK. [speaker002:] So, as I said [disfmarker] [speaker007:] That sounds good. [speaker003:] Yeah. [speaker001:] Um, [pause] [vocalsound] since we have such a short agenda list I guess I wi I will ask how [disfmarker] how are the transcriptions going? Yeah. [speaker006:] The [disfmarker] the news is that I've [disfmarker] I uh [disfmarker] s So [disfmarker] in s um [disfmarker] So I've switched to [disfmarker] Start my new sentence. 
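[Editor's note: the fixed-length naming scheme discussed above — three letters plus three digits for meeting IDs, M or F plus three digits for speakers, so that sorting and field extraction are trivial — could be sketched as below. The helper names and exact field layout are illustrative assumptions, not the project's actual convention.]

```python
def meeting_id(series: str, number: int) -> str:
    """Build a fixed-length meeting ID, e.g. ('MRM', 1) -> 'MRM001'."""
    assert len(series) == 3 and series.isalpha()
    return f"{series.upper()}{number:03d}"

def speaker_id(sex: str, number: int) -> str:
    """Build a fixed-length speaker ID, e.g. ('M', 42) -> 'M042'."""
    assert sex in ("M", "F")
    return f"{sex}{number:03d}"

def parse_meeting_id(mid: str) -> tuple[str, int]:
    """Invert meeting_id: fixed widths make slicing trivial,
    which is the point Liz raises about sorting filenames."""
    return mid[:3], int(mid[3:])
```

Because every field has a fixed width, a plain lexicographic sort of filenames groups meetings correctly ('MRM002' sorts before 'MRM010'), which fails with variable-length names like 'MR2' vs 'MR10'.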
I [disfmarker] I switched to doing the channel by channel transcriptions to provide, uh, the [disfmarker] uh, tighter time bins for [disfmarker] partly for use in Thilo's work and also it's of relevance to other people in the project. And, um, I discovered in the process a couple of [disfmarker] of interesting things, which, um, one of them is that, um, it seems that there are time lags involved in doing this, uh, uh, using an interface that has so much more complexity to it. And I [disfmarker] and I wanted to maybe ask, uh, Chuck to help me with some of the questions of efficiency. Maybe [disfmarker] I was thinking maybe the best way to do this in the long run may be to give them single channel parts and then piece them together later. And I [disfmarker] I have a script, I can piece them together. I mean, so it's like, I [disfmarker] I know that I can take them apart and put them together and I'll end up with the representation which is where the real power of that interface is. And it may be that it's faster to transcribe a channel at a time with only one, uh, sound file and one, uh, set of [disfmarker] of, uh, utterances to check through. [speaker001:] Mm hmm. [speaker003:] Yeah. [speaker001:] I'm a little confused. I thought that [disfmarker] that one of the reason we thought we were so much faster than [disfmarker] than, uh, the [disfmarker] the other transcription, uh, thing was that [disfmarker] that we were using the mixed [pause] file. [speaker006:] Oh, yes. OK. But, um, with the mixed, when you have an overlap, you only have a [disfmarker] a choice of one start and end time for that entire overlap, which means that you're not tightly, uh, tuning the individual parts th of that overlap by different speakers. [speaker001:] Mm hmm. [speaker006:] So someone may have only said two words in that entire big chunk of overlap. [speaker001:] Yeah. Yeah. 
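[Editor's note: the "take them apart and put them together" step Jane describes — transcribing one channel at a time, then merging the per-channel pieces back into one multi-channel representation — might look like the following sketch. The (start, end, text) tuple format for an utterance is an assumption; the project's actual transcript format is not specified here.]

```python
def merge_channels(channels):
    """Merge per-channel transcripts into one time-ordered list.

    channels: dict mapping channel name -> list of (start, end, text)
    utterances for that channel. Returns a single list of
    (start, end, channel, text) tuples sorted by start time, so
    overlapping speech from different channels interleaves correctly.
    """
    merged = [(start, end, ch, text)
              for ch, utts in channels.items()
              for (start, end, text) in utts]
    return sorted(merged)
```

This keeps each speaker's tight, per-channel time bins (useful for training speech/nonspeech segmentation) while still yielding one combined transcript for the meeting.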
[speaker006:] And for purposes of [disfmarker] of, uh, things like [disfmarker] well, so things like training the speech nonspeech segmentation thing. [speaker005:] Yeah. [speaker006:] Th it's necessary to have it more tightly tuned than that. [speaker001:] OK. [speaker006:] And w and w and, you know, is a It would be wonderful if, uh, it's possible then to use that algorithm to more tightly tie in all the channels after that but, um, you know, I've [disfmarker] th the [disfmarker] So, I I don't know exactly where that's going at this point. But m I was experimenting with doing this by hand and I really do think that it's wise that we've had them start the way we have with, uh, m y working off the mixed signal, um, having the interface that doesn't require them to do the ti uh, the time bins for every single channel at a t uh, through the entire interaction. [speaker001:] Mm hmm. [speaker006:] Um, I did discover a couple other things by doing this though, and one of them is that, um, um, once in a while a backchannel will be overlooked by the transcriber. As you might expect, [speaker001:] Mm hmm. [speaker006:] because when it's [speaker001:] Sure. [speaker006:] a b backchannel could well happen in a very densely populated overlap. And if we're gonna study types of overlaps, which is what I wanna do, an analysis of that, then that really does require listening [comment] to every single channel all the way through the entire [comment] length for all the different speakers. Now, for only four speakers, that's not gonna be too much time, but if it's nine speakers, then that i that is more time. So it's li you know, kind of wondering [disfmarker] And I think again it's like this [disfmarker] it's really valuable that Thilo's working on the speech nonspeech segmentation because maybe, um, we can close in on that wi without having to actually go to the time that it would take to listen to every single channel from start to finish through every single meeting. 
[speaker005:] Yeah, but those backchannels will always be a problem I think. Uh especially if they're really short and they're not very loud and so it [disfmarker] it can [disfmarker] it [disfmarker] it will always happen that also the automatic s detection system will miss some of them, so. [speaker006:] OK. Well so then [disfmarker] then, maybe the answer is to, uh, listen especially densely in places of overlap, just so that they're [disfmarker] they're not being overlooked because of that, [speaker005:] Yeah. [speaker006:] and count on accuracy during the sparser phases. [speaker005:] Yeah. [speaker006:] Cuz there are large s spaces of the [disfmarker] That's a good point. There are large spaces where there's no overlap at all. Someone's giving a presentation, [speaker005:] Yeah. [speaker006:] or whatever. That's [disfmarker] that's a good [disfmarker] that's a good thought. And, um, let's see, there was one other thing I was gonna say. I [disfmarker] I think it's really interesting data to work with, I have to say, it's very enjoyable. I really, not [disfmarker] not a problem spending time with these data. Really interesting. And not just because I'm in there. No, it's real interesting. [speaker001:] Uh, well I think it's a short meeting. Uh, you're [disfmarker] you're [disfmarker] you're still in the midst of what you're doing from what you described last time, I assume, [speaker003:] Is true. I haven't results, eh, yet [speaker001:] and [disfmarker] [speaker003:] but, eh, [speaker001:] Yeah. [speaker003:] I [disfmarker] I'm continue working with the mixed signal now, [comment] after the [disfmarker] the last experience. [speaker001:] Yeah. Yeah. [speaker003:] And [disfmarker] and I'm tried to [disfmarker] to, uh, adjust the [disfmarker] to [disfmarker] to improve, eh, an harmonicity, eh, detector that, eh, I [disfmarker] I implement. [speaker001:] Yeah. [speaker003:] But I have problem because, eh, I get, eh, eh, very much harmonics now. [speaker001:] Yeah. 
[speaker003:] Um, harmonic [disfmarker] possi possible harmonics, uh, eh, and now I'm [disfmarker] I'm [disfmarker] I'm trying to [disfmarker] to find, eh, some kind of a, um [disfmarker] [vocalsound] of h of help, eh, using the energy to [disfmarker] to distinguish between possible harmonics, and [disfmarker] and other fre frequency peaks, that, eh, corres not harmonics. And, eh, I have to [disfmarker] to talk with y with you, with the group, eh, about the instantaneous frequency, because I have, eh, an algorithm, and, I get, mmm, eh, t t results [disfmarker] similar results, like, eh, the paper, eh, that I [disfmarker] I am following. But, eh, the [disfmarker] the rules, eh, that, eh, people used in the paper to [disfmarker] to distinguish the harmonics, is [disfmarker] doesn't work well. [speaker001:] Mm hmm. [speaker003:] And I [disfmarker] I [disfmarker] I [disfmarker] I not sure that i [vocalsound] eh, the [disfmarker] the way [disfmarker] o to [disfmarker] ob the way to obtain the [disfmarker] the instantaneous frequency is [pause] right, or it's [disfmarker] it's not right. Eh, [speaker001:] Yeah. [speaker003:] I haven't enough file feeling to [disfmarker] to [disfmarker] to distinguish what happened. [speaker001:] Yeah, I'd like to talk with you about it. If [disfmarker] if [disfmarker] if, uh [disfmarker] If I don't have enough time and y you wanna discuss with someone else [disfmarker] some someone else besides us that you might want to talk to, uh, might be Stephane. [speaker003:] Yeah. I talked with Stephane and [disfmarker] and Thilo [speaker001:] Yeah and [disfmarker] and Thilo, yeah. 
[speaker003:] and, [speaker001:] Yeah, but [disfmarker] [speaker003:] they [disfmarker] nnn they [disfmarker] [vocalsound] they [comment] [disfmarker] [vocalsound] they [vocalsound] didn't [disfmarker] [speaker005:] I'm not too experienced with [vocalsound] harmonics [speaker003:] they think that [comment] the experience is not enough to [disfmarker] [speaker001:] I see. [speaker005:] and [disfmarker] [speaker007:] Is [disfmarker] is this the algorithm where you hypothesize a fundamental, and then get the energy for all the harmonics of that fundamental? [speaker003:] No, no it's [disfmarker] No [disfmarker] No. [speaker007:] And then hypothesize a new fundamental and get the energy [disfmarker] [speaker003:] No. [speaker001:] Yeah, that's wh [speaker003:] No. I [disfmarker] I [disfmarker] I [disfmarker] I don't proth process the [disfmarker] the fundamental. I [disfmarker] [vocalsound] I, ehm [disfmarker] I calculate the [disfmarker] the phase derivate using the FFT. [speaker001:] Yeah. [speaker003:] And [disfmarker] The algorithm said that, eh, [vocalsound] if you [disfmarker] if you change the [disfmarker] the [disfmarker] [vocalsound] the, eh, nnn [disfmarker] the X the frequency "X", eh, using the in the instantaneous frequency, you can find, eh, how, eh, in several frequencies that proba probably the [disfmarker] the harmonics, eh, [speaker001:] Uh huh. [speaker003:] the errors of peaks [disfmarker] the frequency peaks, eh, eh, move around [pause] these, eh [disfmarker] eh frequency harmonic [disfmarker] the frequency of the harmonic. And, [vocalsound] eh, if you [disfmarker] if you compare the [disfmarker] the instantaneous frequency, [vocalsound] eh, [vocalsound] of the [disfmarker] of the, eh, continuous, eh, [vocalsound] eh, filters, that, eh [disfmarker] that, eh, they used eh, to [disfmarker] [vocalsound] to [disfmarker] to get, eh, the [disfmarker] the instantaneous frequency, [speaker001:] Mm hmm. 
[speaker003:] it probably too, you can find, [vocalsound] eh, that the instantaneous frequency [vocalsound] for the continuous, eh, [vocalsound] eh [disfmarker] the output of the continuous filters are very near. And in [pause] my case [disfmarker] i in [disfmarker] equal with our signal, [vocalsound] it doesn't happened. [speaker001:] Yeah. I'd hafta look at that and think about it. [speaker003:] And [disfmarker] [speaker001:] It's [disfmarker] it's [disfmarker] it's [disfmarker] I haven't worked with that either so I'm not sure [disfmarker] The way [disfmarker] [vocalsound] the simple minded way I suggested was what Chuck was just saying, is that you could make a [disfmarker] a sieve. [speaker003:] Yeah. [speaker001:] You know, y you actually say that here is [disfmarker] Let's [disfmarker] let's hypothesize that it's this frequency or that frequency, and [disfmarker] and, uh, [speaker003:] Yeah. [speaker001:] maybe you [disfmarker] maybe you could use some other cute methods to, uh, short cut it by [disfmarker] by uh, making some guesses, but [disfmarker] but uh [disfmarker] uh [disfmarker] uh, I would, uh [disfmarker] I mean you could make some guesses from, uh [disfmarker] from the auto correlation or something but [disfmarker] but then, given those guesses, try, um, uh, only looking at the energy at multiples of the [disfmarker] of that frequency, and [disfmarker] and see how much of the [disfmarker] take the one that's maximum. Call that the [disfmarker] [speaker003:] Yeah. Using the energy of the [disfmarker] of the multiple of the frequency. [speaker001:] But [disfmarker] Of all the harmonics of that. Yeah. [speaker003:] Yeah. [speaker007:] Do you hafta do some kind of, uh, low pass filter before you do that? [speaker003:] I don't use. [speaker007:] Or [disfmarker] [speaker003:] But, I [disfmarker] I know many people use, eh, low pass filter to [disfmarker] to [disfmarker] to get, eh, the pitch. [speaker001:] No. To get the pitch, yes. 
[speaker003:] I don't use. [speaker005:] To get the pitch, yeah. [speaker003:] To get the pitch, yes. But the harmonic, no. [speaker007:] But i But the harmonics are gonna be, uh, uh, I don't know what the right word is. Um, they're gonna be dampened by the uh, vocal tract, right? The response of the vocal tract. [speaker001:] Yeah? [speaker003:] Yeah? [speaker007:] And so [disfmarker] just looking at the energy on those [disfmarker] at the harmonics, is that gonna [disfmarker]? [speaker001:] Well so the thing is that the [disfmarker] This is for, uh, a, um [disfmarker] [speaker007:] I m what you'd like to do is get rid of the effect of the vocal tract. Right? [speaker005:] Yeah. [speaker007:] And just look at the [disfmarker] at [disfmarker] at the signal coming out of the glottis. [speaker001:] Yeah. Uh, well, yeah that'd be good. [speaker003:] Yeah. [speaker001:] But, uh [disfmarker] but I [disfmarker] but [disfmarker] [vocalsound] but I don't know that you need to [disfmarker] [speaker002:] Open wide! [speaker001:] but I don't need you [disfmarker] know if you need to get rid of it. I mean that'd [disfmarker] that'd be nice but I don't know if it's ess if it's essential. Um, I mean [disfmarker] cuz I think the main thing is that, uh, you're trying [disfmarker] [speaker007:] Uh huh. [speaker001:] wha what are you doing this for? You're trying distinguish between the case where there is, uh [disfmarker] where [disfmarker] where there are more than [disfmarker] uh, where there's more than one speaker and the case where there's only one speaker. [speaker002:] Sorry. [speaker001:] So if there's more than one speaker, um [disfmarker] yeah I guess you could [disfmarker] I guess [disfmarker] yeah you're [disfmarker] so you're not distinguished between voiced and unvoiced, so [disfmarker] so, i if you don't [disfmarker] if you don't care about that [disfmarker] [speaker003:] Yeah. 
[speaker001:] See, if you also wanna [vocalsound] just determine [disfmarker] if you also wanna determine whether it's unvoiced, [vocalsound] then I think you want to [pause] look [disfmarker] look at high frequencies also, because the f the fact that there's more energy in the high frequencies is gonna be an ob sort of obvious cue that it's unvoiced. [speaker007:] Yeah. [speaker001:] But, i i uh [disfmarker] [vocalsound] I mean i i but, um, other than that I guess as far as the one person versus two persons, it would be [pause] primarily a low frequency phenomenon. And if you looked at the low frequencies, yes the higher frequencies are gonna [disfmarker] there's gonna be a spectral slope. The higher frequencies will be lower energy. But so what. I mean [disfmarker] [vocalsound] that's [disfmarker] that's w [speaker003:] I will prepare for the next week eh, all my results about the harmonicity and [pause] will [disfmarker] will try to come in and to discuss here, because, eh, I haven't enough feeling to [disfmarker] [vocalsound] to u [vocalsound] many time to [disfmarker] [vocalsound] to understand what happened with the [disfmarker] with, eh, so many peaks, eh, eh, and [vocalsound] I [disfmarker] I see the harmonics there many time but, eh, [vocalsound] there are a lot of peaks, eh, that, eh, they are not harmonics. Um, [speaker001:] Yeah. [speaker003:] I have to discover what [disfmarker] what is the [disfmarker] the w the best way to [disfmarker] to [comment] [disfmarker] to [comment] c to use them [speaker001:] Well, but [disfmarker] yeah I don't think you can [disfmarker] I mean you're not gonna be able to look at every frame, so I mean [disfmarker] I [disfmarker] I mean I [disfmarker] I really [disfmarker] I I really thought that the best way to do it, and I'm speaking with no experience on this particular point, but, [vocalsound] my impression was that the best way to do it was however you [disfmarker] You've used instantaneous frequency, whatever. 
[comment] However you've come up [disfmarker] you [disfmarker] with your candidates, you wanna see how much of the energy is in that [speaker003:] Yeah. Yeah. [speaker001:] as coppo as opposed to all of the [disfmarker] all [disfmarker] the total energy. And, um, if it's voiced, I guess [disfmarker] so [disfmarker] so y I think maybe you do need a voiced unvoiced determination too. But if it's voiced, [speaker003:] Yeah. [speaker001:] um, and the, uh [disfmarker] e the fraction of the energy that's in the harmonic sequence that you're looking at is relatively low, then it should be [disfmarker] then it's more likely to be an overlap. [speaker003:] Is height. Yeah. This [disfmarker] this is the idea [disfmarker] the idea I [disfmarker] I [disfmarker] I had to [disfmarker] to compare the [disfmarker] the ratio of the [disfmarker] [vocalsound] the energy of the harmonics with the [disfmarker] eh, with the, eh, total energy in the spectrum and try to get a ratio to [disfmarker] to distinguish between overlapping and speech. Mmm. [speaker001:] But you're looking a y you're looking at [disfmarker] Let's take a second with this. Uh, uh, you're looking at f at the phase derivative, um, in [disfmarker] in, uh, what domain? I mean this is [disfmarker] this is in [disfmarker] in [disfmarker] in [disfmarker] in bands? Or [disfmarker] or [disfmarker] [speaker003:] No, no, no. [speaker001:] Just [disfmarker] just overall [disfmarker] [speaker003:] It's a [disfmarker] it's a [disfmarker] o i w the band [disfmarker] the band is, eh, from zero to [disfmarker] to four kilohertz. And I [disfmarker] I ot I [disfmarker] [speaker001:] And you just take the instantaneous frequency? [speaker003:] Yeah. I u m t I [disfmarker] I used two m two method [disfmarker] two methods. Eh, one, eh, based on the F [disfmarker] eh, FTT. 
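[Editor's note: the ratio discussed here — the fraction of total spectral energy lying on the detected harmonics, with a low ratio on a voiced frame suggesting overlapped speech — might look like the sketch below. The bin width around each harmonic, the toy spectra, and all parameter values are assumptions of this sketch.]

```python
def harmonicity_ratio(power, sample_rate, n_fft, f0, n_harmonics=10, width=1):
    """Fraction of spectral energy lying on (or near) harmonics of f0.

    A high ratio suggests a single voiced talker; a low ratio on a voiced
    frame suggests overlapped speech. `width` is how many neighboring bins
    to count as part of each harmonic peak (an assumption of this sketch).
    """
    total = sum(power)
    if total == 0.0:
        return 0.0
    harmonic = 0.0
    for h in range(1, n_harmonics + 1):
        center = int(round(h * f0 * n_fft / sample_rate))
        for k in range(center - width, center + width + 1):
            if 0 <= k < len(power):
                harmonic += power[k]
    return min(harmonic / total, 1.0)

# Toy spectra (20 Hz bins): one where all energy sits on harmonics of
# 200 Hz, one where half the energy is off-harmonic, as in an overlap.
clean = [0.0] * 200
for b in (10, 20, 30):          # 200, 400, 600 Hz
    clean[b] = 1.0
overlap = list(clean)
for b in (7, 13, 26):           # off-harmonic energy from a second talker
    overlap[b] = 1.0

print(harmonicity_ratio(clean, 8000, 400, 200))    # → 1.0
print(harmonicity_ratio(overlap, 8000, 400, 200))  # → 0.5
```

A real system would still need the voiced/unvoiced decision Morgan mentions, since unvoiced single-speaker frames also score low on this ratio.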
to FFT to [disfmarker] to obtain the [disfmarker] or to study the harmonics from [disfmarker] from the spectrum directly, and to study the energy and the multiples of [speaker001:] Yeah. Yeah. [speaker003:] frequency. And another [disfmarker] another algorithm I have is the [disfmarker] in the [pause] instantaneous frequency, based on [disfmarker] on [disfmarker] on the FFT to [disfmarker] to [disfmarker] to calculate the [disfmarker] the phase derivate in the time. Eh, uh n the d I mean I [disfmarker] I have two [disfmarker] two algorithms. [speaker001:] Right. [speaker003:] But, eh, in m [pause] i in my opinion the [disfmarker] the [disfmarker] the instantaneous frequency, the [disfmarker] the [disfmarker] the behavior, eh, was [disfmarker] th it was very interesting. Because I [disfmarker] I saw [vocalsound] eh, how the spectrum [pause] concentrate, eh, [speaker001:] Oh! [speaker003:] around the [disfmarker] the harmonic. But then when I apply the [disfmarker] the rule, eh, of the [disfmarker] in the [disfmarker] [pause] the instantaneous frequency of the ne of the continuous filter in the [disfmarker] the near filter, the [disfmarker] the rule that, eh, people propose in the paper doesn't work. And I don't know why. [speaker001:] But the instantaneous frequency, wouldn't that give you something more like the central frequency of the [disfmarker] you know, of the [disfmarker] where most of the energy is? I mean, I think if you [disfmarker] Does i does it [disfmarker] Why would it correspond to pitch? [speaker003:] Yeah. I [disfmarker] I [disfmarker] I not sure. I [disfmarker] I [disfmarker] I try to [disfmarker] to [disfmarker] [speaker001:] Yeah. [speaker003:] When [vocalsound] first I [disfmarker] [pause] [vocalsound] I calculate, eh, using the FFT, [speaker006:] Di digital camera. [speaker003:] the [disfmarker] the [disfmarker] [speaker002:] Keep forgetting. [speaker003:] I get the [disfmarker] [pause] the spectrum, [speaker001:] Yeah. 
[speaker003:] and I represent all the frequency. [speaker001:] Yeah. [speaker003:] And [disfmarker] when ou I obtained the instantaneous frequency. And I change [vocalsound] the [disfmarker] the [disfmarker] the [@ @], using the instantaneous frequency, here. [speaker001:] Oh, so you scale [disfmarker] you s you do a [disfmarker] a scaling along that axis according to instantaneous [disfmarker] [speaker003:] I use [disfmarker] Yeah. Yeah. [speaker001:] It's a kinda normalization. [speaker003:] Yeah. Because when [disfmarker] when [disfmarker] [speaker001:] OK. [speaker003:] eh, when i I [disfmarker] I use these [disfmarker] these frequency, eh, the range is different, and the resolution is different. [speaker001:] Yeah. [speaker003:] And I observe more [disfmarker] more or less, thing like this. And the paper said that, eh, these frequencies are probably, eh, harmonics. [speaker001:] I see. [speaker003:] But, eh, they used, [speaker001:] Huh. [speaker003:] eh, a rule, eh, based in the [disfmarker] in the [disfmarker] because to [disfmarker] to calculate the instantaneous frequency, they use a Hanning window. [speaker001:] Yeah. [speaker003:] And, they said that, eh, if [pause] these [pause] peak are, eh, harmonics, the f instantaneous frequency, of the contiguous, eh [disfmarker] w eh eh, filters are very near, or have to be very near. But, eh, phh! I don't [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] I don I I [disfmarker] and I don't know what is the [disfmarker] what is the distance. And I tried to [disfmarker] to put different distance, eh, to put difference, eh [disfmarker] eh, length of the window, eh, different front sieve, Pfff! and I [disfmarker] I not sure what happened. [speaker001:] OK, yeah well I [disfmarker] I guess I'm not following it enough. 
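[Editor's note: one common way to compute the instantaneous frequency being discussed — the time derivative of the phase of a Hanning-windowed STFT bin, estimated from two frames one sample apart — is sketched below. This is a generic textbook formulation, not the specific algorithm from the paper under discussion, and the contiguous-filter closeness rule the paper proposes is not implemented here; the signal and parameters are illustrative.]

```python
import cmath, math

def stft_bin(signal, start, n_fft, k):
    """One DFT bin of a Hanning-windowed frame starting at `start`."""
    total = 0j
    for t in range(n_fft):
        w = 0.5 - 0.5 * math.cos(2 * math.pi * t / n_fft)   # Hanning window
        total += w * signal[start + t] * cmath.exp(-2j * math.pi * k * t / n_fft)
    return total

def instantaneous_frequency(signal, start, n_fft, k, sample_rate, hop=1):
    """IF of bin k from the phase difference of two frames `hop` samples apart."""
    p0 = cmath.phase(stft_bin(signal, start, n_fft, k))
    p1 = cmath.phase(stft_bin(signal, start + hop, n_fft, k))
    dphi = p1 - p0
    # Unwrap around the bin's expected phase advance per hop.
    expected = 2 * math.pi * k * hop / n_fft
    dphi = expected + ((dphi - expected + math.pi) % (2 * math.pi)) - math.pi
    return dphi * sample_rate / (2 * math.pi * hop)

# A 210 Hz sinusoid at 8 kHz: bin 10 of a 400-point FFT is centered at
# 200 Hz, but its instantaneous frequency should land near the true 210 Hz.
sr, n_fft = 8000, 400
sig = [math.sin(2 * math.pi * 210 * t / sr) for t in range(n_fft + 8)]
print(instantaneous_frequency(sig, 0, n_fft, 10, sr))  # ≈ 210
```

This is what makes the "remapping" in the discussion work: near a harmonic, the IF of several contiguous bins collapses onto the harmonic's true frequency, which is the closeness criterion the paper reportedly exploits.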
I'll [comment] probably gonna hafta look at the paper, but [disfmarker] which I'm not gonna have time to do in the next few days, but [disfmarker] [vocalsound] but I'm [disfmarker] I'm curious about it. [speaker003:] Yeah. [speaker001:] Um, uh, OK. [speaker006:] I I did i it did occur to me that this is [disfmarker] uh, the return to the transcription, that there's one third thing I wanted to [disfmarker] to ex raise as a to as an issue which is, um, how to handle breaths. So, I wanted to raise the question of whether people in speech recognition want to know where the breaths are. And the reason I ask the question is, um, aside from the fact that they're obviously very time consuming to encode, uh, the fact that there was some [disfmarker] I had the indication from Dan Ellis in the email that I sent to you, and you know about, [speaker005:] Yeah. [speaker006:] that in principle we might be able to, um, handle breaths by accessi by using cross talk from the other things, be able that [disfmarker] in principle, maybe we could get rid of them, so maybe [disfmarker] And I was [disfmarker] I [disfmarker] I don't know, I mean we had this an and I didn't [disfmarker] couldn't get back to you, [speaker005:] Yeah. [speaker006:] but the question of whether it'd be possible to eliminate them from the audio signal, which would be the ideal situation, cuz [disfmarker] [speaker001:] I don't know [disfmarker] think it'd be ideal. [speaker007:] Uh uh. [speaker001:] We See, we're [disfmarker] we're dealing with real speech and we're trying to have it be as real as possible [speaker005:] Yeah. [speaker001:] and breaths are part of real speech. [speaker006:] Well, except that these are really truly [disfmarker] I mean, ther there's a segment in o the one I did [disfmarker] n the first one that I did for [disfmarker] i for this, [speaker005:] Yeah. 
[speaker006:] where truly w we're hearing you breathing like [disfmarker] as if we're [disfmarker] you're in our ear, you know, and it's like [disfmarker] it's like [disfmarker] [speaker001:] Yeah. [speaker006:] I y i I mean, breath is natural, but not [speaker001:] It is [disfmarker] but it is if you record it. [speaker003:] Yeah. [speaker006:] Except that we're [disfmarker] we're trying to mimic [disfmarker] Oh, I see what you're saying. You're saying that the PDA application would have [disfmarker] uh, have to cope with breath. [speaker001:] Yeah. [speaker002:] Well [speaker006:] But [disfmarker] [speaker007:] An any application may have to. [speaker005:] No. [speaker002:] The P D A might not have to, [speaker005:] No [disfmarker] i [speaker002:] but more people than just PDA users are interested in this corpus. [speaker005:] Yeah. [speaker006:] OK, then the [disfmarker] then [disfmarker] I have two questions. [speaker002:] So [disfmarker] so mean you're right we could remove it, [speaker006:] Yeah? [speaker002:] but I [disfmarker] I think [disfmarker] we don't wanna w remove it from the corpus, [pause] in terms of delivering it because the [disfmarker] people will want it in there. [speaker006:] OK, so maybe the question is notating it. [speaker001:] Yeah. If it gets [disfmarker] [speaker006:] Yeah? [speaker001:] Yeah [disfmarker] i Right. If [disfmarker] if it gets in the way of what somebody is doing with it then you might wanna have some method which will allow you to block it, but you [disfmarker] it's real data. 
You don't wanna b but you don't [disfmarker] [speaker006:] OK, well [disfmarker] [speaker001:] If s you know, if there's a little bit of noise out there, and somebody is [disfmarker] is talking about something they're doing, that's part of what we accept as part of a real meeting, even [disfmarker] And we have the f uh [disfmarker] the uh [disfmarker] the [disfmarker] the fan and the [disfmarker] in the projector up there, and, uh, this is [disfmarker] it's [disfmarker] this is actual stuff that we [disfmarker] we wanna work with. [speaker006:] Well this is in very interesting [speaker001:] So. [speaker006:] because i it basically has a i it shows very clearly the contrast between, uh, speech recognition research and discourse research because in [disfmarker] in discourse and linguistic research, what counts is what's communit communicative. [speaker007:] Mm hmm. [speaker006:] And [disfmarker] breath, you know, everyone breathes, they breathe all the time. And once in a while breath is communicative, but r very rarely. OK, so now, I had a discussion with Chuck about the data structure [speaker001:] Mm hmm. [speaker006:] and the idea is that the transcripts will [disfmarker] that [disfmarker] get stored as a master there'll be a master transcript which has in it everything that's needed for both of these uses. [speaker001:] Mm hmm. [speaker006:] And the one that's used for speech recognition will be processed via scripts. You know, like, Don's been writing scripts and [disfmarker] and, [speaker001:] Mm hmm. [speaker006:] uh, to process it for the speech recognition side. Discourse side will [vocalsound] have this [disfmarker] this side over he the [disfmarker] we we'll have a s ch Sorry, not being very fluent here. But, um, this [disfmarker] the discourse side will have a script which will stri strip away the things which are non communicative. OK. 
So then the [disfmarker] then [disfmarker] let's [disfmarker] let's think about the practicalities of how we get to that master copy with reference to breaths. So what I would [disfmarker] r r what I would wonder is would it be possible to encode those automatically? Could we get a breath detector? [speaker002:] Oh, just to save the transcribers time. [speaker006:] Well, I mean, you just have no idea. I mean, if you're getting a breath several times every minute, [speaker002:] Mm hmm. [speaker006:] and just simply the keystrokes it takes to negotiate, to put the boundaries in, to [disfmarker] to type it in, i it's just a huge amount of time. [speaker002:] Mm hmm. [speaker005:] Oops. [speaker003:] Yeah. [speaker001:] Wh what [disfmarker] [speaker006:] And you wanna be sure it's used, and you wanna be sure it's done as efficiently as possible, and if it can be done automatically, that would be ideal. [speaker001:] what if you put it in but didn't put the boundaries? [speaker006:] Well, but [disfmarker] [speaker001:] So you just know it's between these other things, [speaker006:] Well, OK. So now there's [disfmarker] there's another [disfmarker] another possibility [speaker001:] right? [speaker006:] which is, um, the time boundaries could mark off words [comment] from nonwords. And that would be extremely time effective, if that's sufficient. [speaker001:] Yeah I mean I'm think if it's too [disfmarker] if it's too hard for us to annotate the breaths per se, [vocalsound] we are gonna be building up models for these things and these things are somewhat self aligning, so if [disfmarker] so, [vocalsound] we [disfmarker] i i if we say there is some kind of a thing which we call a "breath" or a "breath in" or "breath out", [vocalsound] the models will learn that sort of thing. Uh, so [disfmarker] but you [disfmarker] but you do want them to point them at some region where [disfmarker] where the breaths really are. [speaker006:] OK. 
But that would maybe include a pause as well, [speaker001:] So [disfmarker] [speaker007:] Well, there's a there's [disfmarker] [speaker006:] and that wouldn't be a problem to have it, uh, pause plus breath plus laugh plus sneeze? [speaker001:] Yeah, i You know there is [disfmarker] there's this dynamic tension between [disfmarker] between marking absolutely everything, as you know, and [disfmarker] and [disfmarker] and marking just a little bit and counting on the statistical methods. Basically the more we can mark the better. But if there seems to be a lot of effort for a small amount of reward in some area, and this might be one like this [disfmarker] Although I [disfmarker] I [disfmarker] I'd be interested to h get [disfmarker] get input from Liz and Andreas on this to see if they [disfmarker] Cuz they've they've got lots of experience with the breaths in [disfmarker] in, uh, uh, their transcripts. [speaker007:] I [disfmarker] [speaker002:] They have lots of experience with breathing? [speaker001:] Actually [disfmarker] Well, [vocalsound] yes they do, but we [disfmarker] [vocalsound] we can handle that without them here. But [disfmarker] but [disfmarker] but, uh, you were gonna say something about [disfmarker] [speaker007:] Yeah, I [disfmarker] I think, um, one possible way that we could handle it is that, um, you know, as the transcribers are going through, and if they get a hunk of speech that they're gonna transcribe, u th they're gonna transcribe it because there's words in there or whatnot. If there's a breath in there, they could transcribe that. [speaker005:] Yeah. [speaker006:] That's what they've been doing. [speaker005:] Yeah. [speaker007:] Right. [speaker006:] So, within an overlap segment, they [disfmarker] they do this. [speaker007:] But [disfmarker] Right. But if there's a big hunk of speech, let's say on Morgan's mike where he's not talking at all, um, don't [disfmarker] don't worry about that. [speaker005:] Yeah. 
[speaker007:] So what we're saying is, there's no guarantee that, um [disfmarker] So for the chunks that are transcribed, everything's transcribed. But outside of those boundaries, there could have been stuff that wasn't transcribed. So you just [disfmarker] somebody can't rely on that data and say "that's perfectly clean data". Uh [disfmarker] do you see what I'm saying? [speaker006:] Yeah, you're saying it's [disfmarker] uncharted territory. [speaker007:] So I would say don't tell them to transcribe anything that's outside of a grouping of words. [speaker001:] That sounds like a reasonable [disfmarker] reasonable compromise. [speaker005:] Yeah, and that's [disfmarker] that [disfmarker] that quite co corresponds to the way I [disfmarker] I try to train the speech nonspeech detector, as I really try to [disfmarker] not to detect those breaths which are not within a speech chunk but with [disfmarker] which are just in [disfmarker] in a silence region. [speaker001:] Yeah. [speaker005:] And they [disfmarker] so they hopefully won't be marked in [disfmarker] in those channel specific files. [speaker007:] Yeah, so [disfmarker] [speaker005:] But [disfmarker] [speaker001:] u I [disfmarker] I wanted to comment a little more just for clarification about this business about the different purposes. [speaker006:] Mm hmm. [speaker001:] See, in a [disfmarker] in a way this is a really key point, that for speech recognition, uh, research, uh, um, e a [disfmarker] it's not just a minor part. In fact, the [disfmarker] I think I would say the core thing that we're trying to do is to recognize the actual, meaningful components in the midst of other things that are not meaningful. So it's critical [disfmarker] it's not just incidental it's critical for us to get these other components that are not meaningful. Because that's what we're trying to pull the other out of. That's our problem. If we had nothing [disfmarker] [speaker006:] Yeah. 
[speaker001:] if we had only linguistically relevant things [disfmarker] if [disfmarker] if we only had changes in the spectrum that were associated with words, with different spectral components, and, uh, we [disfmarker] we didn't have noise, we didn't have convolutional errors, we didn't have extraneous, uh, behaviors, and so forth, and [vocalsound] moving your head and all these sorts of things, then, actually speech recognition i i isn't that bad right now. I mean you can [speaker003:] Yeah. [speaker001:] you know it's [disfmarker] [vocalsound] it's [disfmarker] the technology's come along pretty well. The [disfmarker] the [disfmarker] the reason we still complain about it is because is [disfmarker] when [disfmarker] when you have more realistic conditions then [disfmarker] then things fall apart. [speaker006:] OK, fair enough. I guess, um, I [disfmarker] uh, what I was wondering is what [disfmarker] what [disfmarker] at what level does the breathing aspect enter into the problem? Because if it were likely that a PDA would be able to be built which would get rid of the breathing, so it wouldn't even have to be processed at thi at this computational le well, let me see, it'd have to be computationally processed to get rid of it, but if there were, uh, like likely on the frontier, a good breath extractor then, um, and then you'd have to [disfmarker] [speaker001:] But that's a research question, you know? And so [disfmarker] [speaker006:] Yeah, well, see and that's what I wouldn't know. [speaker001:] that [disfmarker] And we don't either. I mean so [disfmarker] so the thing is it's [disfmarker] it [disfmarker] right now it's just raw d it's just data that we're collecting, and so [vocalsound] we don't wanna presuppose that people will be able to get rid of particular degradations because that's actually the research that we're trying to feed. [speaker006:] OK. 
[speaker001:] So, you know, an and maybe [disfmarker] maybe in five years it'll work really well, and [disfmarker] and it'll only mess up ten percent of the time, but then we would still want to account for that ten percent, [speaker006:] I guess there's another aspect [speaker001:] so. [speaker006:] which is that as we've improved our microphone technique, we have a lot less breath in the [disfmarker] in the more recent, uh, recordings, so it's [disfmarker] in a way it's an artifact that there's so much on the [disfmarker] on the earlier ones. [speaker001:] Uh huh. I see. [speaker007:] One of the [disfmarker] um, just to add to this [disfmarker] one of the ways that we will be able to get rid of breath is by having models for them. I mean, that's what a lot of people do nowadays. [speaker002:] Right. [speaker001:] Right. [speaker007:] And so in order to build the model you need to have some amount of it marked, [speaker003:] Yeah. [speaker007:] so that you know where the boundaries are. [speaker003:] Hmm. [speaker001:] Yeah. [speaker007:] So [disfmarker] I mean, I don't think we need to worry a lot about breaths that are happening outside of a, you know, conversation. We don't have to go and search for them to [disfmarker] to mark them at all, but, I mean, if they're there while they're transcribing some hunk of words, I'd say put them in if possible. [speaker006:] OK, and it's also the fact that they differ a lot from one channel to the other because of the way the microphone's adjusted. [speaker003:] Yeah. [speaker007:] Mm hmm. [speaker006:] OK. [speaker001:] Should we do the digits? [speaker002:] Yep. OK. [speaker003:] OK. [speaker004:] Mmm. Alright. [speaker001:] OK, we're recording. [speaker006:] We can say the word "zero" all we want, [speaker007:] I'm doing some [speaker006:] but just [disfmarker] [speaker007:] square brackets, coffee sipping, square brackets. [speaker002:] That's not allowed, I think. [speaker003:] Cur curly brackets. 
[speaker005:] Is that voiced or unvoiced? [speaker002:] Curly brackets. [speaker006:] Curly brackets. [speaker001:] Curly brackets. Right. [speaker006:] Well, [speaker002:] Oops. [speaker006:] correction for transcribers. [speaker007:] Mmm! [comment] [vocalsound] Gar darn! [speaker006:] Yeah. [speaker003:] Channel two. [speaker001:] Do we use square brackets for anything? [speaker003:] Yeah. [speaker005:] These poor transcribers. [speaker003:] Uh [disfmarker] [speaker006:] u [speaker003:] Not ri not right now. I mean [disfmarker] No. [speaker004:] There's gonna be some zeros from this morning's meeting because I noticed that [speaker006:] u [speaker004:] Barry, I think maybe you turned your mike off before the digits were [disfmarker] Oh, was it during digits? Oh, so it doesn't matter. [speaker006:] Yeah. [speaker002:] So it's not [disfmarker] it's not that bad if it's at the end, [speaker001:] It's still not a good idea. [speaker002:] but it's [disfmarker] in the beginning, it's [pause] bad. [speaker004:] Yeah. Yeah. [speaker001:] Yeah, you wanna [disfmarker] you wanna keep them on so you get [pause] good noise [disfmarker] noise floors, [speaker003:] That's interesting. [speaker001:] through the whole meeting. [speaker006:] Uh, I probably just should have left it on. [speaker003:] Hmm. [speaker006:] Yeah I did have to run, but [disfmarker] [speaker005:] Is there any way to change that in the software? [speaker001:] Change what in the software? [speaker005:] Where like you just don't [disfmarker] like if you [disfmarker] if it starts catching zeros, like in the driver or something [disfmarker] in the card, or somewhere in the hardware [disfmarker] Where if you start seeing zeros on w across one channel, you just add some [vocalsound] random, [@ @] [comment] noise floor [disfmarker] like a small noise floor. [speaker001:] I mean certainly we could do that, but I don't think that's a good idea. 
We can do that in post processing if [disfmarker] if the application needs it. [speaker005:] Yeah. [speaker006:] Well, I [disfmarker] u I actually don't know what the default [comment] is anymore as to how we're using the [disfmarker] the front end stuff but [disfmarker] for [disfmarker] for [disfmarker] when we use the ICSI front end, [speaker002:] Manual post processing. [speaker006:] but um, [speaker001:] As an argument. [speaker006:] there is an [disfmarker] there is an o an option in [disfmarker] in RASTA, which, um, [vocalsound] in when I first put it in, uh, back in the days when I actually wrote things, uh, [vocalsound] I [pause] did actually put in a random bit or so that was in it, [speaker005:] OK. [speaker006:] but [vocalsound] then I realized that putting in a random bit was equivalent to adding uh [disfmarker] adding flat spectrum, [speaker005:] Right. [speaker006:] and it was a lot faster to just add a constant to the [disfmarker] [vocalsound] to the spectrum. So then I just started doing that [speaker005:] Mmm. [speaker006:] instead of calling "rand" [comment] or something, [speaker005:] OK. Right. [speaker006:] so. So it d it does that. Gee! Here we all are! [speaker001:] Uh, so the only agenda items were Jane [disfmarker] was Jane wanted to talk about some of the IBM transcription process. [speaker006:] There's an agenda? [speaker001:] I sort of [vocalsound] condensed the three things you said into that. And then just [disfmarker] I only have like, this afternoon and maybe tomorrow morning to get anything done before I go to Japan for ten days. So if there's anything that n absolutely, desperately needs to be done, you should let me know now. [speaker006:] Uh, and you just sent off a Eurospeech paper, so. [speaker007:] Right. I hope they accept it. I mean, I [disfmarker] [speaker006:] Right. [speaker007:] both actu as [disfmarker] as a submission and [disfmarker] [vocalsound] you know, as a paper. 
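[Editor's note: the trick Morgan describes — instead of dithering the waveform with a random bit, add a small constant to the spectrum before the log, which is equivalent in expectation to adding flat-spectrum noise but deterministic and cheaper — might be sketched as below. This is a reconstruction of the idea, not RASTA's actual code; the floor value and frame are illustrative.]

```python
import math

def power_spectrum(frame):
    """Naive DFT power spectrum of a real frame (demo-sized only)."""
    n = len(frame)
    out = []
    for k in range(n // 2):
        re = sum(x * math.cos(-2 * math.pi * k * t / n) for t, x in enumerate(frame))
        im = sum(x * math.sin(-2 * math.pi * k * t / n) for t, x in enumerate(frame))
        out.append(re * re + im * im)
    return out

def floored_log_spectrum(frame, floor=1.0):
    """Add a small constant to every power bin before taking the log.

    Equivalent in expectation to dithering the waveform with low-level
    white noise (whose spectrum is flat), but deterministic and cheaper
    than calling rand() per sample.
    """
    return [math.log(p + floor) for p in power_spectrum(frame)]

# An all-zero frame (e.g. a channel muted mid-meeting) no longer
# produces log(0):
silent = [0.0] * 64
print(floored_log_spectrum(silent)[:3])  # → [0.0, 0.0, 0.0]
```

This is also why the zero-filled channels discussed above matter: without some floor, downstream log-spectral processing blows up on exact zeros.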
Um [disfmarker] but [disfmarker] [speaker001:] Well yeah, you sent it in [pause] late. [speaker006:] Yeah, I guess you [disfmarker] first you have to do the first one, [speaker001:] Yeah. [speaker006:] and then [disfmarker] Yeah. [speaker007:] We actually exceeded the delayed deadline by o another day, so. [speaker002:] Oops. [speaker006:] Oh they [disfmarker] they had some extension that they announced or something? [speaker007:] Well yeah. Liz had sent them a note saying "could we please [pause] have another" [comment] [pause] I don't know, "three days" or something, and they said yes. [speaker004:] And then she said "Did I say three? I meant four." [speaker001:] Oh, that was the other thing uh, [speaker007:] But u [speaker001:] uh, Dave Gelbart sent me email, I think he sent it to you too, [comment] that um, there's a special topic, section in si in Eurospeech on new, corp corpors corpora. And it's not due until like May fifteenth. [speaker006:] Oh this isn't the Aurora one? It's another one? [speaker001:] No. [speaker002:] No [speaker001:] It's a different one. [speaker002:] it's [disfmarker] [speaker005:] Huh! [speaker002:] Yeah. Yeah. [speaker001:] And uh, [speaker006:] Oh! [speaker002:] I got this mail from [disfmarker] [speaker001:] I s forwarded it to Jane as I thought being the most relevant person. Um [disfmarker] So, I thought it was highly relevant [disfmarker] [speaker006:] That's [disfmarker] [speaker003:] Yeah I'm [disfmarker] Yeah. I think so too. [speaker001:] have you [disfmarker] did you look at the URL? [speaker003:] Um, I haven't gotten over to there yet, but what [disfmarker] our discussion yesterday, I really [disfmarker] I [disfmarker] I wanna submit one. [speaker001:] Mm hmm. [speaker002:] Was this [pause] SmartKom message? [speaker003:] Yeah. [speaker002:] I think [pause] Christoph Draxler sent this, [speaker003:] And, you offered to [disfmarker] to join me, if you want me to. [speaker002:] yeah. 
[speaker001:] I'll help, but obviously I can't, really do, most of it, [speaker003:] Yeah. [speaker007:] I think several people [disfmarker] sent this, [speaker003:] Yeah, that's right. [speaker002:] Yeah. [speaker001:] so. [speaker007:] yeah. [speaker003:] Uh huh. [speaker001:] But any [disfmarker] any help you need I can certainly provide. [speaker006:] Well, [speaker007:] Yeah. [speaker006:] that's [disfmarker] that's a great idea. [speaker007:] Well [disfmarker] there [disfmarker] there were some interesting results in this paper, though. For instance that Morgan [disfmarker] uh, accounted for fifty six percent of the Robustness meetings in terms of number of words. [speaker001:] Wow. [speaker003:] In [disfmarker] in terms of what? [speaker007:] Number of words. [speaker003:] In term One? Wow! OK. [speaker001:] That's just cuz he talks really fast. [speaker003:] Do you mean, [speaker006:] n No. [speaker002:] Oh. Short words. [speaker003:] because [disfmarker] [speaker001:] I know [speaker003:] is it partly, eh, c correctly identified words? Or is it [disfmarker] [speaker007:] No. Well, according to the transcripts. [speaker003:] or just overall volume? [speaker006:] Yeah. [speaker001:] But re well regardless. I think it's [disfmarker] he's [disfmarker] he's in all of them, [speaker003:] Oh. OK. [speaker007:] I mean, we didn't mention Morgan by name [speaker001:] and he talks a lot. [speaker007:] we just [disfmarker] [speaker006:] Well [disfmarker] we have now, [speaker001:] One participant. [speaker007:] We [disfmarker] we [disfmarker] we [disfmarker] something about [disfmarker] [speaker006:] but [disfmarker] [speaker001:] Did you identify him as a senior [pause] member? [speaker007:] No, we as identify him as the person dominating the conversation. [speaker006:] Well. [speaker001:] Yeah. [speaker003:] OK. [speaker006:] I mean I get these AARP things, but I'm not se really senior yet, [speaker007:] Right [speaker006:] but [disfmarker] [speaker007:] Hmm. 
[speaker006:] Um, but uh, other than that delightful result, what was the rest of the paper about? [speaker007:] Um, well it was about [disfmarker] it had three sections [speaker006:] You sent it to me but I haven't seen it yet. [speaker007:] uh [disfmarker] three kinds of uh results, if you will. Uh, the one was that the [disfmarker] just the [disfmarker] the amount of overlap [speaker001:] The good, the bad, and the ugly. [speaker007:] um, s in terms of [disfmarker] in terms of number of words and also we computed something called a "spurt", which is essentially a stretch of speech with uh, no pauses exceeding five hundred milliseconds. Um, and we computed how many overlapped i uh spurts there were and how many overlapped words there were. [vocalsound] Um, for four different [pause] corpora, the Meeting Recorder meetings, the Robustness meetings Switchboard and CallHome, and, found [disfmarker] and sort of compared the numbers. Um, and found that the, uh, you know, as you might expect the Meeting Recorder [pause] meetings had the most overlap uh, but next were Switchboard and CallHome, which both had roughly the same, almost identical in fact, and the Robustness meetings were [disfmarker] had the least, so [disfmarker] One sort of unexpected result there is that uh two party telephone conversations have [vocalsound] about the same amount of overlap, [speaker001:] I'm surprised. [speaker007:] sort of in gen you know [disfmarker] order of magnitude wise as, uh [disfmarker] as face to face meetings with multiple [disfmarker] [speaker001:] I have [disfmarker] I had better start changing all my slides! [speaker007:] Yeah. Also, I [disfmarker] in the Levinson, the pragmatics book, [comment] in you know, uh, textbook, [vocalsound] there's [disfmarker] I found this great quote where he says [vocalsound] you know [disfmarker] you know, how people [disfmarker] it talks about how uh [disfmarker] how [disfmarker] how people are so good at turn taking, [speaker003:] Mm hmm. 
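[Editor's note: the "spurt" defined above — a stretch of one talker's speech with no internal pause exceeding 500 milliseconds — and the overlapped-spurt statistic could be computed roughly as below. The timings in the demo are hypothetical, not from any of the corpora discussed.]

```python
def to_spurts(segments, max_pause=0.5):
    """Merge one talker's (start, end) segments into spurts: stretches
    with no internal pause longer than `max_pause` seconds."""
    spurts = []
    for start, end in sorted(segments):
        if spurts and start - spurts[-1][1] <= max_pause:
            spurts[-1] = (spurts[-1][0], max(spurts[-1][1], end))
        else:
            spurts.append((start, end))
    return spurts

def overlapped_fraction(spurts_by_speaker):
    """Fraction of all spurts that overlap in time with some other
    speaker's spurt."""
    all_spurts = [(s, spk) for spk, sps in spurts_by_speaker.items() for s in sps]
    def overlaps(a, b):
        return a[0] < b[1] and b[0] < a[1]
    n_overlapped = sum(
        1 for s, spk in all_spurts
        if any(overlaps(s, t) for t, other in all_spurts if other != spk)
    )
    return n_overlapped / len(all_spurts)

# Hypothetical timings (seconds): speaker A's two close segments merge
# into one spurt (0.3 s gap < 0.5 s); B's reply overlaps A's first spurt.
spurts = {
    "A": to_spurts([(0.0, 1.0), (1.3, 2.0), (5.0, 6.0)]),
    "B": to_spurts([(1.8, 3.0)]),
}
print(spurts["A"])                  # → [(0.0, 2.0), (5.0, 6.0)]
print(overlapped_fraction(spurts))  # → 2/3
```

The normalization question raised later in the discussion — whether to divide by the number of participants — would be a further step on top of this raw fraction.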
Yeah. Yeah. [speaker007:] and [vocalsound] so [disfmarker] they're so good that [vocalsound] generally, u the overlapped speech does not [disfmarker] is less than five percent. [speaker003:] Oh, that's interesting. [speaker007:] So, [speaker003:] Yeah. [speaker007:] this is way more than five percent. [speaker005:] Did he mean face [disfmarker] like face to face? Or [disfmarker]? [speaker007:] Well, in real conversations, everyday conversations. [speaker005:] Hmm. [speaker007:] It's s what these conversation analysts have been studying for years and years there. [speaker003:] Mm hmm. Mm hmm. Well, of course, no, it doesn't necessarily go against what he said, [speaker002:] But [disfmarker] [speaker003:] cuz he said "generally speaking". In order to [disfmarker] to go against that kind of a claim you'd have to big canvassing. [speaker001:] Hmm. [speaker002:] And in f [speaker007:] Well, he [disfmarker] he made a claim [disfmarker] [speaker001:] Well [disfmarker] [speaker007:] Well [disfmarker] [speaker006:] Yeah, we [disfmarker] we have pretty limited sample here. [speaker002:] But [disfmarker] Five percent of time or five percent of what? [speaker003:] Yeah. [speaker001:] Yeah, I was gonna ask that too. [speaker007:] Well it's time. [speaker002:] Yeah. [speaker003:] Exactly. [speaker007:] So [disfmarker] [vocalsound] but still [disfmarker] but still [disfmarker] u [speaker003:] It's [disfmarker] i it's not against his conclusion, [speaker002:] Yeah, so [disfmarker] [speaker003:] it just says that it's a bi bell curve, and that, [vocalsound] you have something that has a nice range, in your sampling. [speaker007:] Yeah. 
So there are slight [disfmarker] There are differences in how you measure it, but still it's [disfmarker] [vocalsound] You know, the difference between um [disfmarker] between that number and what we have in meetings, which is more like, [vocalsound] you know, close to [disfmarker] in meetings like these, uh [disfmarker] you know, close to twenty percent. [speaker003:] Mm hmm. [speaker006:] But what was it like, say, in the Robustness meeting, for instance? [speaker003:] Mm hmm. [speaker007:] That [disfmarker] [speaker001:] But [disfmarker] [speaker007:] Robustness meeting? It was [vocalsound] about half of the r So, [vocalsound] in terms of number of words, it's like seventeen or eigh eighteen percent for the Meeting Recorder meetings and [vocalsound] about half that for, [vocalsound] uh, the Robustness. [speaker006:] Maybe ten percent? [speaker001:] But I don't know if that's really a fair way of comparing between, multi party, conversations and two party conversations. [speaker002:] Then [disfmarker] then [disfmarker] then you have to [disfmarker] [speaker001:] Yeah. I [disfmarker] I [disfmarker] I don't know. I mean that's just something [disfmarker] [speaker004:] Yeah, I just wonder if you have to normalize by the numbers of speakers or something. [speaker003:] Yeah. [speaker002:] Then [disfmarker] Yeah, then normalize by [disfmarker] by something like that, [speaker007:] Well, we didn't get to look at that, [speaker003:] Yeah, that's a good point. [speaker002:] yeah. [speaker003:] Yeah. [speaker007:] but this obvious thing to see if [disfmarker] if there's a dependence on the number of uh [disfmarker] participants. [speaker003:] Good idea. [speaker001:] I mean [disfmarker] I bet there's a weak dependence. [speaker002:] Yeah. [speaker001:] I'm sure it's [disfmarker] it's not a real strong one. [speaker007:] Right. [speaker001:] Right? [speaker004:] Cuz not everybody talks. [speaker001:] Because you [speaker007:] Right. [speaker001:] Right. [speaker004:] Yeah. 
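[A minimal sketch of the spurt and overlap statistics Andreas describes above. The 500 ms pause threshold is his definition; the per-channel word-interval data layout here is a hypothetical illustration, not the actual corpus format:]

```python
# Sketch of the spurt / overlapped-word statistics described above.
# Assumes each channel's words are time-sorted (start, end) pairs in
# seconds; this data layout is hypothetical, not the corpus format.

PAUSE_THRESH = 0.5  # spurts are bounded by pauses of >= 500 ms

def spurts(words):
    """Group one channel's time-sorted words into spurts."""
    out = []
    for start, end in words:
        if out and start - out[-1][1] < PAUSE_THRESH:
            out[-1] = (out[-1][0], end)   # extend the current spurt
        else:
            out.append((start, end))      # pause long enough: new spurt
    return out

def overlapped_word_fraction(channels):
    """Fraction of words overlapped by speech on some other channel."""
    total = overlapped = 0
    for i, words in enumerate(channels):
        others = [w for j, ch in enumerate(channels) if j != i for w in ch]
        for s, e in words:
            total += 1
            if any(s < oe and os_ < e for os_, oe in others):
                overlapped += 1
    return overlapped / total if total else 0.0
```

[The same word-level count could then be compared across corpora, as in the paper being discussed.]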
[speaker001:] You have a lot of [disfmarker] a lot of two party, subsets within the meeting. [speaker007:] Right. [speaker003:] Uh huh. [speaker001:] Well regardless [disfmarker] [speaker007:] So [disfmarker] [speaker001:] it's an interesting result regardless. [speaker007:] Right. And [disfmarker] and [disfmarker] and then [disfmarker] and we also d computed this both with and without backchannels, [speaker003:] Yes, that's right. Mm hmm. [speaker007:] so you might think that backchannels have a special status because they're essentially just [disfmarker] [speaker001:] Uh huh. So, did [disfmarker] we all said "uh huh" and nodded at the same time, [speaker007:] R right. But, even if you take out all the backchannels [disfmarker] [speaker001:] so. [speaker007:] so basically you treat backchannels l as nonspeech, as pauses, [speaker001:] Mm hmm. [speaker006:] Mm hmm. [speaker007:] you still have significant overlap. You know, it goes down from maybe [disfmarker] For Switchboard it goes down from [disfmarker] I don't know [disfmarker] f um [disfmarker] [comment] I don't know [disfmarker] f fourteen percent of the words to maybe [vocalsound] uh I don't know, eleven percent or something [disfmarker] it's [disfmarker] it's not a dramatic change, so it's [disfmarker] [speaker001:] Mm hmm. [speaker007:] Anyway, so it's uh [disfmarker] That was [disfmarker] that was one set of [pause] results, and then the second one was just basically the [disfmarker] [vocalsound] the stuff we had in the [disfmarker] in the HLT paper on how overlaps effect the [pause] recognition performance. [speaker003:] Hmm. [speaker001:] Nope. Right. [speaker006:] Mm hmm. [speaker007:] And we rescored things um, a little bit more carefully. We also fixed the transcripts in [disfmarker] in numerous ways. 
Uh, but mostly we added one [disfmarker] one number, which was what if you [pause] uh, basically score ignoring all [disfmarker] So [disfmarker] so the [disfmarker] the conjecture from the HLT results was that [vocalsound] most of the added recognition error is from insertions [vocalsound] due to background speech. So, we scored [vocalsound] all the recognition results, [vocalsound] uh, in such a way that the uh [disfmarker] [speaker001:] Oh by the way, who's on channel four? [speaker002:] Yeah. [speaker001:] You're getting a lot of breath. [speaker005:] That's [disfmarker] [speaker002:] I j was just wondering. Yeah. [speaker005:] That's me. [speaker007:] uh, well Don's been working hard. [speaker005:] That's right. [speaker007:] OK, so [disfmarker] [vocalsound] so if you have the foreground speaker speaking here, and then there's some background speech, may be overlapping it somehow, um, and this is the time bin that we used, then of course you're gonna get insertion errors here and here. [speaker001:] Right. [speaker007:] Right? So we scored everything, and I must say the NIST scoring tools are pretty nice for this, where you just basically ignore everything outside of the, [vocalsound] uh, region that was deemed to be foreground speech. And where that was we had to use the t forced alignment, uh, results from s for [disfmarker] so [disfmarker] That's somewhat [disfmarker] that's somewhat subject to error, but still we [disfmarker] we [disfmarker] [vocalsound] Uh, Don did some ha hand checking and [disfmarker] and we think that [disfmarker] based on that, we think that the results are you know, valid, although of course, some error is gonna be in there. But basically what we found is after we take out these regions [disfmarker] so we only score the regions that were certified as foreground speech, [comment] [vocalsound] the recognition error went down to almost [vocalsound] uh, the [pause] level of the non overlapped [pause] speech. 
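[The foreground-only scoring just described — ignore everything outside the region certified as foreground speech, so background-speech insertions are not counted — can be sketched roughly as below. The (word, start, end) hypothesis layout and the midpoint rule are illustrative assumptions; the actual work used the NIST scoring tools:]

```python
# Sketch of foreground-only scoring as described above: drop any
# time-aligned hypothesis word falling outside the window deemed to
# be foreground speech. The (word, start, end) layout and the
# midpoint test are assumptions for illustration only.

def restrict_to_foreground(hyp_words, fg_start, fg_end):
    """Keep only hypothesis words whose midpoint lies in the window."""
    kept = []
    for word, start, end in hyp_words:
        mid = (start + end) / 2.0
        if fg_start <= mid <= fg_end:
            kept.append(word)
    return kept
```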
So that means that [vocalsound] even if you do have background speech, if you can somehow separate out or find where it is, [vocalsound] uh, the recognizer does a good job, [speaker001:] That's great. [speaker002:] Yeah. [speaker007:] even though there is this back [speaker001:] Yeah, I guess that doesn't surprise me, because, with the close talking mikes, the [disfmarker] the signal will be so much stronger. [speaker007:] Right. Right. Mm hmm. [speaker006:] Mm hmm. [speaker007:] Um, so [disfmarker] [speaker001:] What [disfmarker] what sort of normalization do you do? [speaker007:] Uh, well, we just [disfmarker] [@ @] [comment] we do [disfmarker] u you know, vit [speaker001:] I mean in you recognizer, in the SRI recognizer. [speaker007:] Well, we do uh, VTL [disfmarker] [vocalsound] vocal tract length normalization, w and we uh [disfmarker] you know, we [disfmarker] we uh, [vocalsound] make all the features have zero mean and unit variance. [speaker006:] And [disfmarker] [speaker001:] Over an entire utterance? [speaker007:] Over [disfmarker] over the entire c over the entire channel. [speaker001:] Or windowed? [speaker002:] Don't [pause] train [disfmarker] [speaker007:] Over the [disfmarker] but you know. [speaker001:] Hmm. [speaker007:] Um, now we didn't re align the recognizer for this. We just took the old [disfmarker] So this is actually a sub optimal way of doing it, [speaker006:] Right. [speaker007:] right? [speaker001:] Right. [speaker007:] So we took the old recognition output and we just scored it differently. So the recognizer didn't have the benefit of knowing where the foreground speech [disfmarker] a start [speaker006:] Were you including the [disfmarker] the lapel [pause] in this? [speaker007:] Yes. [speaker006:] And did the [disfmarker] did [disfmarker] did the la did the [disfmarker] the problems with the lapel go away also? [speaker007:] Um, it [disfmarker] Yeah. [speaker006:] Or [disfmarker] fray for [disfmarker] for insertions? 
[speaker007:] It u not per [disfmarker] I mean, not completely, but yes, dramatically. [speaker006:] Less so. [speaker007:] So we have to um [disfmarker] [speaker006:] I mean, you still [disfmarker] [speaker007:] Well I should bring the [disfmarker] should bring the table with results. Maybe we can look at it [pause] Monday. [speaker006:] I would presume that you still would have somewhat higher error with the lapel for insertions than [disfmarker] [speaker007:] Yes. It's [disfmarker] It's [disfmarker] Yes. [speaker006:] Yeah. [speaker007:] Yeah. [speaker006:] Cuz again, looking forward to the non close miked case, I think that we s still [disfmarker] [speaker007:] Mm hmm. [speaker001:] I'm not looking forward to it. [speaker006:] i it's the high signal to noise ratio [speaker007:] Right. [speaker006:] here that [disfmarker] that helps you. [speaker007:] u s Right. So [disfmarker] so that was number [disfmarker] that was the second set of [disfmarker] uh, the second section. And then, [vocalsound] the third thing was, we looked at, [vocalsound] [vocalsound] uh, what we call "interrupts", although that's [disfmarker] that may be [vocalsound] a misnomer, but basically [vocalsound] we looked at cases where [disfmarker] Uh, so we [disfmarker] we used the punctuation from the original transcripts and we inferred the beginnings and ends of sentences. So, you know [disfmarker] [speaker003:] Di did you use upper lower case also, or not? [speaker007:] Um [disfmarker] Hmm? [speaker003:] U upper lower case or no? [speaker007:] No, we only used, you know, uh periods, uh, question marks and [pause] exclamation. [speaker003:] OK. [speaker007:] And we know that there's th that's not a very g I mean, we miss a lot of them, [speaker003:] Yeah. [speaker007:] but [disfmarker] but it's f i i [speaker003:] That's OK but [disfmarker] Comma also or not? [speaker007:] No commas. No. 
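[The per-channel feature normalization Andreas mentioned a moment ago — zero mean, unit variance over the entire channel — is roughly the following; a minimal pure-Python sketch, leaving out VTLN and everything else a real front end does:]

```python
# Sketch of the per-channel normalization described above: make each
# feature dimension zero-mean and unit-variance over the whole
# channel. A minimal illustration; real systems also apply VTLN etc.

def normalize_features(frames):
    """frames: list of equal-length feature vectors for one channel."""
    dims = len(frames[0])
    n = len(frames)
    means = [sum(f[d] for f in frames) / n for d in range(dims)]
    stds = []
    for d in range(dims):
        var = sum((f[d] - means[d]) ** 2 for f in frames) / n
        stds.append(var ** 0.5 or 1.0)  # guard constant dimensions
    return [[(f[d] - means[d]) / stds[d] for d in range(dims)]
            for f in frames]
```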
And then [vocalsound] we looked at locations where, uh, if you have overlapping speech and someone else starts a sentence, you know, where do these [disfmarker] where do other people start their [vocalsound] turns [disfmarker] not turns really, but you know, sentences, [speaker002:] Ah. [speaker007:] um [disfmarker] So we only looked at cases where there was a foreground speaker and then at the to at the [disfmarker] so the [disfmarker] the foreground speaker started into their sentence and then someone else started later. OK? [speaker002:] Somewhere in between the start and the end? [speaker007:] And so what [disfmarker] [speaker002:] OK. [speaker007:] Sorry? [speaker002:] Somewhere in between the start and the end of the foreground? [speaker007:] Yes. Uh, so that such that there was overlap between the two sentences. [speaker002:] Yeah. [speaker007:] So, the [disfmarker] the question was how can we [disfmarker] what can we say about the places where the second or [disfmarker] or actually, several second speakers, [vocalsound] um [pause] start their [pause] "interrupts", as we call them. [speaker004:] Three words from the end. [speaker001:] At pause boundaries. [speaker007:] w And we looked at this in terms of um [disfmarker] [speaker001:] On T closures, only. [speaker007:] So [disfmarker] so we had [disfmarker] [vocalsound] we had um u to [disfmarker] for [disfmarker] for the purposes of this analysis, we tagged the word sequences, and [disfmarker] and we time aligned them. Um, and we considered it interrupt [disfmarker] if it occurred in the middle of a word, we basically [disfmarker] you know, considered that to be a interrupt as if it were at [disfmarker] at the beginning of the word. So that, [vocalsound] if any part of the word was overlapped, it was considered an interrupted [pause] word. [speaker006:] Mm hmm. 
[speaker007:] And then we looked at the [disfmarker] the locatio the, [vocalsound] um, you know, the features that [disfmarker] the tags because we had tagged these word strings, [comment] [vocalsound] um, that [disfmarker] that occurred right before these [disfmarker] these uh, interrupt locations. [speaker002:] Tag by uh [speaker007:] And the tags we looked at are [vocalsound] the spurt tag, which basically says [disfmarker] or actually [disfmarker] Sorry. End of spurt. So [disfmarker] [vocalsound] whether there was a pause essentially here, because spurts are a [disfmarker] defined as being you know, five hundred milliseconds or longer pauses, and then we had things like discourse markers, uh, backchannels, uh, disfluencies. um, uh, filled pauses [disfmarker] So disfluen the D's are for, [vocalsound] um, [vocalsound] the interruption points of a disfluency, so, where you hesitate, or where you start the repair there. Uh, what else do we had. Uh, repeated [disfmarker] you know, repeated words is another of that kind of disfluencies and so forth. So we had both the beginnings and ends of these [disfmarker] uh so, the end of a filled pause and the end of a discourse marker. And we just eyeballed [disfmarker] I mean [vocalsound] we didn't really hand tag all of these things. We just [pause] looked at the distribution of words, and so every [vocalsound] "so yeah", and "OK", uh, and "uh huh" were [disfmarker] were the [disfmarker] were deemed to be backchannels and [vocalsound] "wow" and "so" and [vocalsound] uh "right", uh were um [disfmarker] [pause] Not "right". "Right" is a backchannel. But so, we sort of [disfmarker] just based on the lexical [disfmarker] [vocalsound] um, identity of the words, we [disfmarker] we tagged them as one of these things. And of course the d the interruption points we got from the original transcripts. 
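[The rule described above — an interrupt falling in the middle of a word is treated as if it were at the beginning of that word, and the tag just before that point is what gets counted — might be sketched like this. The (item, start, end) tagged-word layout is a hypothetical stand-in for the actual tagged, time-aligned word strings:]

```python
# Sketch of locating the context item just before an interrupt, per
# the rule above: a mid-word interrupt snaps to the word's beginning.
# The (tag_or_word, start, end) layout is hypothetical.

def context_before_interrupt(tagged_words, t_interrupt):
    """Return the tagged item immediately preceding the interrupt."""
    prev = None
    for item, start, end in tagged_words:
        if start <= t_interrupt < end:   # mid-word: snap to word start
            return prev
        if end <= t_interrupt:
            prev = item
        else:
            break
    return prev
```

[Tallying the returned items over all interrupt points would give the kind of tag histogram discussed below.]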
So, and then we looked at the disti so we looked at the [pause] distribution of these different kinds of tags, overall uh, and [disfmarker] and [disfmarker] and particularly at the interruption points. And uh, we found that there is a marked difference so that for instance after [disfmarker] so at the end after a discourse marker or after backchannel or after filled pause, you're much more likely to be interrupted [vocalsound] than before. OK? And also of course after spurt ends, which means basically in p inside pauses. So pauses are always an opportunity for [disfmarker] So we have this little histogram which shows these distributions and, [vocalsound] um, [speaker004:] I wonder [disfmarker] [speaker007:] you know, it's [disfmarker] it's [disfmarker] it's not [disfmarker] No big surprises, but it is [pause] sort of interesting from [disfmarker] [speaker001:] It's nice to actually measure it though. [speaker007:] Yeah. [speaker004:] I wonder about the cause and effect there. In other words uh [pause] if you weren't going to pause you [disfmarker] you will because you're g being interrupted. [speaker007:] Well we're ne [speaker004:] Uh [disfmarker] [speaker007:] Right. There's no statement about cause and effect. [speaker004:] Yeah, [speaker007:] This is just a statistical correlation, [speaker004:] right. No, no, no. Right, I [disfmarker] I see. [speaker007:] yeah. [speaker004:] Yeah. [speaker006:] But he [disfmarker] yeah, he's [disfmarker] he's right, y I mean maybe you weren't intending to pause at all, but [disfmarker] [vocalsound] You were intending to stop for fifty seven milliseconds, [speaker007:] Right. Right. [speaker006:] but then Chuck came in and so you [vocalsound] paused for a second [speaker004:] Yeah. [speaker007:] Right. Anyway. [comment] So, [speaker006:] or more. [speaker007:] uh, and that was basically it. 
And [disfmarker] and we [disfmarker] so we wrote this and then, [vocalsound] we found we were at six pages, and then we started [vocalsound] cutting furiously [speaker002:] Oops. [speaker007:] and [vocalsound] threw out half of the [vocalsound] material again, and uh played with the LaTeX stuff and [disfmarker] uh, and [disfmarker] until it fi [speaker001:] Made the font smaller and the narrows longer. [speaker002:] Font smaller, yeah. [speaker007:] No, no. W well, d you couldn't really make everything smaller but we s we put [disfmarker] [speaker002:] Put the abstract end. [speaker007:] Oh, I [disfmarker] I [disfmarker] you know the [disfmarker] the gap between the two columns is like ten millimeters, [speaker001:] Took out white space. [speaker002:] Yeah. [speaker007:] so I d shrunk it to eight millimeters and that helped some. And stuff like that. [speaker004:] Wasn't there [disfmarker] wasn't there some result, Andreas [disfmarker] [speaker006:] Yeah [disfmarker] [speaker004:] I [disfmarker] I thought maybe Liz presented this at some conference a while ago about [vocalsound] uh, backchannels [speaker007:] Mm hmm. Mm hmm. [speaker004:] uh, and that they tend to happen when uh [pause] the pitch drops. You know you get a falling pitch. [speaker007:] Yeah. [speaker004:] And so that's when people tend to backchannel. [speaker007:] Well [disfmarker] [speaker004:] Uh i i do you rem [speaker007:] y We didn't talk about, uh, prosodic, uh, properties at all, [speaker004:] Right. Right. [speaker007:] although that's [disfmarker] I [disfmarker] I take it that's something that uh Don will [disfmarker] will look at [speaker004:] But [disfmarker] [speaker007:] now that we have the data and we have the alignment, [speaker005:] Yeah, we're gonna be looking at that. [speaker007:] so. This is purely based on you know the words and [disfmarker] [speaker004:] Mm hmm. [speaker003:] I have a reference for that though. [speaker004:] Oh you do. [speaker007:] Yeah. [speaker003:] Uh huh. 
[speaker004:] So am I recalling correctly? [speaker007:] Anyway, so. [speaker004:] About [disfmarker] [speaker003:] Well, I didn't know about Liz's finding on that, [speaker004:] Uh huh. [speaker003:] but I know of another paper that talks about something [speaker004:] Hmm. [speaker003:] that [disfmarker] [speaker005:] I'd like to see that reference too. [speaker004:] It made me think about a cool little device that could be built [speaker003:] OK. [speaker004:] to uh [disfmarker] to handle those people that call you on the phone and just like to talk and talk and talk. And you just have this little detector that listens for these [vocalsound] drops in pitch and gives them the backchannel. And so then you [vocalsound] hook that to the phone and go off [speaker001:] Yeah. [speaker004:] and do the [vocalsound] [disfmarker] do whatever you r wanna do, [speaker001:] Uh huh. [speaker007:] Oh yeah. Well [disfmarker] [speaker004:] while that thing keeps them busy. [speaker007:] There's actually [disfmarker] uh there's this a former student of here from Berkeley, Nigel [disfmarker] Nigel Ward. Do you know him? [speaker004:] Uh huh. Sure. Yeah. [speaker007:] He did a system uh, in [disfmarker] he [disfmarker] he lives in Japan now, and he did this backchanneling, automatic backchanneling system. [speaker006:] Right. [speaker007:] It's a very [disfmarker] [speaker004:] Oh! [speaker007:] So, exactly what you describe, but for Japanese. [speaker004:] Huh. [speaker007:] And it's apparently [disfmarker] for Japa in Japanese it's really important that you backchannel. It's really impolite if you don't, and [disfmarker] So. [speaker006:] Huh. Actually for a lot of these people I think you could just sort of backchannel continuously and it would [pause] pretty much be fine. [speaker004:] It wouldn't matter? [speaker005:] Yeah. [speaker004:] Yeah. [speaker005:] That's w That's what I do. [speaker004:] Random intervals. 
[speaker001:] There was [disfmarker] there was of course a Monty Python sketch with that. Where the barber who was afraid of scissors was playing a [disfmarker] a tape of clipping sounds, and saying "uh huh", "yeah", "how about them sports teams?" [speaker007:] Anyway. So the paper's on line and y I [disfmarker] I think I uh [disfmarker] I CC'd a message to Meeting Recorder with the URL so you can get it. [speaker001:] Yep. [speaker006:] Yeah. [speaker001:] Printed it out, [speaker007:] Um, uh one more thing. [speaker001:] haven't read it yet. [speaker006:] Yeah. [speaker007:] So I [disfmarker] I'm actually [disfmarker] [vocalsound] about to send Brian Kingsbury an email saying where he can find the [disfmarker] the s the m the material he wanted for the s for the speech recognition experiment, so [disfmarker] but I haven't sent it yet because actually my desktop locked up, like I can't type anything. Uh b so if there's any suggestions you have for that I was just gonna send him the [disfmarker] [speaker004:] Is it the same directory that you had suggested? [speaker007:] I made a directory. I called it um [disfmarker] Well this isn't [disfmarker] [speaker003:] He still has his Unix account here, you know. [speaker007:] He does? [speaker003:] Yeah. [speaker007:] Yeah [speaker003:] And he [disfmarker] and he's [disfmarker] [speaker007:] but [disfmarker] but [disfmarker] but he has to [disfmarker] [speaker003:] I'd hafta add him to Meeting Recorder, I guess, [speaker007:] he prefe he said he would prefer FTP [speaker003:] but [disfmarker] OK. [speaker007:] and also, um, the other person that wants it [disfmarker] There is one person at SRI who wants to look at the [vocalsound] um, you know, the uh [disfmarker] the data we have so far, [speaker003:] OK. [speaker007:] and so I figured that FTP is the best [pause] approach.
So what I did is I um [disfmarker] [vocalsound] [vocalsound] [@ @] [comment] I made a n new directory after Chuck said that would c that was gonna be a good thing. Uh, so it's FTP [vocalsound] [pause] pub [speaker001:] Pub real. [speaker007:] real [disfmarker] Exactly. MTGC [disfmarker] What is it again? CR [disfmarker] [speaker006:] u R D [disfmarker] RDR, [speaker001:] Ask Dan Ellis. [speaker007:] Or [disfmarker] [speaker006:] yeah. [speaker007:] Yeah. Right? The same [disfmarker] the same as the mailing list, and [disfmarker] [speaker006:] Yeah, [speaker007:] Yeah. [speaker006:] the [disfmarker] [pause] No vowels. [speaker007:] Um, and then under there [disfmarker] [speaker006:] Yeah [speaker007:] Um actually [disfmarker] Oh and this directory, [vocalsound] is not readable. It's only uh, accessible. So, [vocalsound] in other words, to access anything under there, you have to [vocalsound] be told what the name is. So that's sort of a g [vocalsound] quick and dirty way of doing access control. [speaker001:] Right. [speaker006:] Mm hmm. [speaker007:] So [disfmarker] uh, and the directory for this I call it I "ASR zero point one" because it's sort of meant for recognition. [speaker006:] So anyone who hears this meeting now knows the [disfmarker] [speaker001:] Beta? [speaker007:] And then [disfmarker] then in there I have a file that lists all the other [vocalsound] files, so that someone can get that file and then know the file names and therefore download them. If you don't know the file names you can't [disfmarker] [speaker006:] Is that a dash or a dot in there? [speaker007:] I mean you can [disfmarker] [speaker001:] Don't [disfmarker] don't [disfmarker] don't say. [speaker007:] Dash. Anyway. 
So all I [disfmarker] all I was gonna do there was stick the [disfmarker] the transcripts after we [disfmarker] the way that we munged them for scoring, because that's what he cares about, and [disfmarker] um, and also [disfmarker] and then the [disfmarker] the [pause] waveforms that Don segmented. I mean, just basically tar them all up f I mean [disfmarker] w for each meeting I tar them all into one tar file and G zip them and stick them there. [speaker001:] I uh, put digits in my own home directory [disfmarker] home FTP directory, [speaker007:] And so. [speaker001:] but I'll probably move them there as well. [speaker007:] Oh, OK. OK. [speaker004:] So we could point Mari to this also for her [vocalsound] March O one request? [speaker007:] Yeah. March O one. Oh! [speaker004:] Or [disfmarker] You n Remember she was [disfmarker] [speaker007:] Oh she wanted that also? [speaker004:] Well she was saying that it would be nice if we had [disfmarker] they had a [disfmarker] Or was she talking [disfmarker] Yeah. She was saying it would be nice if they had eh [pause] the same set, so that when they did experiments they could compare. [speaker007:] Right, but they don't have a recognizer even. [speaker004:] Yeah. [speaker005:] Um [disfmarker] [speaker007:] But yeah, [speaker005:] I [speaker007:] we can send [disfmarker] I can CC Mari on this so that she knows [disfmarker] [speaker004:] Yeah. So, for the thing that [disfmarker] [speaker003:] That's good. [speaker004:] We need to give Brian the beeps file, [speaker007:] Right. [speaker004:] so I was gonna probably put it [disfmarker] [speaker001:] We can put it in the same place. [speaker004:] Yeah, it [speaker001:] Just put in another directory. [speaker007:] Well, make ano make another directory. [speaker004:] I'll make another directory. Yeah. [speaker007:] You don't n m [speaker004:] Exactly. [speaker007:] Yeah. [speaker004:] Yeah. [speaker007:] Yeah. [speaker005:] And, Andreas, um, sampled? [speaker007:] They are? 
[speaker005:] I think so. Yeah. [speaker007:] OK. [speaker005:] Um, so either we should regenerate the original [vocalsound] versions, [comment] [pause] or um, we should just make a note of it. [speaker007:] Oh. Beca Well [disfmarker] OK, because in one directory there's two versions. [speaker005:] Yeah, that's the first meeting I cut both versions. [speaker007:] OK. [speaker005:] Just to check which w if there is a significant difference. [speaker007:] And so I [disfmarker] but [disfmarker] OK so [disfmarker] but for the other meetings it's the downsampled version that you have. [speaker005:] They're all downsampled, yeah. [speaker007:] Oh, OK. Oh that's th important to know, OK so we should probably [disfmarker] uh [pause] give them the non downsampled versions. [speaker005:] Yeah. So [disfmarker] [speaker007:] OK. Alright, then I'll hold off on that and I'll wait for you um [disfmarker] [speaker005:] Probably by tomorrow [speaker007:] gen [speaker005:] I can [disfmarker] [speaker007:] OK. [speaker005:] I'll send you an email. [speaker007:] Alright. OK. Yeah, definitely they should have the full bandwidth version, yeah. OK. [speaker005:] Yeah, because I mean [disfmarker] I I think Liz decided to go ahead with the [pause] downsampled versions cuz we can [disfmarker] There was no s like, r significant difference. [speaker007:] Well, it takes [disfmarker] it takes up less disk space, for one thing. [speaker005:] It does take up less disk space, and apparently it did even better [pause] than the original [disfmarker] than the original versions, [speaker007:] Yeah. Yeah. Right. [speaker005:] which you know, is just, probably random. [speaker007:] Yeah, it was a small difference but yeah. [speaker005:] But, um [pause] they probably w want the originals. [speaker007:] Yeah. OK. OK, good. Good that [disfmarker] Well, it's a good thing that [disfmarker] [speaker001:] OK, I think we're losing, Don and Andreas at three thirty, right? [speaker007:] Yeah. 
[speaker005:] Hey mon hafta booga. [speaker001:] OK. [speaker006:] So, that's why it was good to have Andreas, say these things but [disfmarker] So, we should probably talk about the IBM transcription process stuff that [disfmarker] [speaker003:] OK. [speaker006:] Hmm. [speaker003:] So, um you know that Adam created um, a b a script to generate the beep file? To then create something to send to IBM. And, um, you [disfmarker] you should probably talk about that. But [disfmarker] but you were gonna to use the [pause] originally transcribed file because I tightened the time bins and that's also the one that they had already [vocalsound] in trying to debug the first stage of this. And uh, my understanding was that, um [disfmarker] I haven't [disfmarker] [vocalsound] I haven't listened to it yet, but it sounded very good [speaker001:] Mm hmm. [speaker003:] and [disfmarker] and I understand that you guys [vocalsound] were going to have a meeting today, before this meeting. [speaker001:] It was just to talk about how to generate it. Um, just so that while I'm gone, you can regenerate it if you decide to do it a different way. [speaker003:] Excellent. [speaker001:] So uh, Chuck and Thilo should, now more or less know how to generate the file [speaker003:] OK. [speaker001:] and, [vocalsound] the other thing Chuck pointed out is that, um, [vocalsound] since this one is hand marked, [vocalsound] there are discourse boundaries. Right? [speaker003:] Mm hmm. [speaker001:] So [disfmarker] so when one person is speaking, there's breaks. Whereas Thilo's won't have that. [speaker003:] Oh! OK. [speaker001:] So what [disfmarker] what we're probably gonna do is just write a script, that if two, chunks are very close to each other on the same channel we'll just merge them. [speaker003:] Ah, interesting. Yeah. Yeah. Oh, sure. Yeah, sure. Makes sense. 
[speaker001:] So, uh, and that will get around the problem of, the, [vocalsound] you know "one word beep, one word beep, one word beep, one word beep". [speaker003:] Yeah. Ah! Clever. Yes. Clever. Yeah. Excellent. [speaker004:] Yeah, in fact after our meeting uh, this morning Thilo came in and said that [vocalsound] um, there could be [pause] other differences between [vocalsound] the uh [pause] already transcribed meeting with the beeps in it and one that has [pause] just r been run through his process. [speaker003:] And that's the purpose. Yeah. [speaker004:] So tomorrow, [vocalsound] when we go to make the um [pause] uh, chunked file [vocalsound] for IBM, we're going to actually compare the two. So he's gonna run his process on that same meeting, [speaker003:] Great idea! [speaker004:] and then we're gonna do the beep ify on both, and listen to them and see if we notice any real differences. [speaker007:] Beep ify! [speaker003:] OK, now one thing that prevented us from apply you [disfmarker] you from applying [disfmarker] Exactly. The training [disfmarker] So that is the training meeting. [speaker004:] Yeah, w and we know that. [speaker003:] OK. [speaker004:] Wel uh we just wanna if [disfmarker] if there're any major differences between [vocalsound] doing it on the hand [speaker003:] Uh huh. Oh, interesting. Ah! OK. [speaker001:] Hmm. [speaker003:] Interesting idea. [speaker007:] So this training meeting, uh w un is that uh [pause] some data where we have uh very um, [vocalsound] you know, accurate [pause] time marks? for [disfmarker] [speaker003:] Great. I went back and hand marked the [pause] ba the bins, I ment I mentioned that last week. [speaker007:] OK, yeah. [speaker004:] But the [disfmarker] but there's [disfmarker] yeah, but there is this one issue with them in that there're [disfmarker] [vocalsound] there are time boundaries in there that occur in the middle of speech. 
[speaker007:] Because [disfmarker] [speaker004:] So [disfmarker] Like when we went t to um [disfmarker] When I was listening to the original file that Adam had, it's like you [disfmarker] you hear a word then you hear a beep [vocalsound] and then you hear the continuation of what is the same sentence. [speaker001:] That's on the other channel. That's because of channel overlap. [speaker004:] Well, and [disfmarker] and so the [disfmarker] th [speaker003:] Hmm. [speaker004:] So there are these chunks that look like uh [disfmarker] [vocalsound] that have uh [disfmarker] [speaker001:] It's [disfmarker] i I mean that's not gonna be true of the foreground speaker. That'll only be if it's the background speaker. [speaker004:] Right. So you'll [disfmarker] you'll have a chunk of, you know, channel [vocalsound] A which starts at zero and ends at ten, and then the same channel starting at eleven, ending at fifteen, and then again, starting at sixteen, ending at twenty. Right, so that's three chunks where [vocalsound] actually we w can just make one chunk out of that [speaker007:] Mm hmm. [speaker004:] which is A, zero, twenty. [speaker003:] Yeah. Sure. [speaker001:] That's what I just said, [speaker003:] Sure. [speaker001:] yeah. [speaker004:] Yeah. So I just wanted to make sure that it was clear. [speaker003:] Yeah, I thought that was [disfmarker] [speaker004:] So [vocalsound] if you were to use these, you have to be careful not to pull out these individual [disfmarker] [speaker003:] Yeah. [speaker007:] Oh! I mean it [disfmarker] Right, I mean w I mean what I would [disfmarker] I was interested in is having [disfmarker] [vocalsound] a se having time marks for the beginnings and ends of speech by each speaker. [speaker001:] Well, that's definitely a problem. [speaker007:] Uh, because we could use that to fine tune our alignment process to make it more accurate. [speaker004:] Yeah. [speaker001:] Battery. [speaker002:] Battery? [speaker004:] Mm hmm. 
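The chunk-merging rule described above, where channel A's segments 0–10, 11–15, and 16–20 collapse into a single chunk A 0–20, can be sketched as a small script. The gap threshold is an assumption for illustration; the meeting only says chunks that are "very close", not a specific number:

```python
def merge_chunks(chunks, max_gap=1.0):
    """Merge same-channel chunks whose gap is at most max_gap seconds.

    chunks: list of (channel, start, end) tuples. max_gap is a
    hypothetical threshold; the discussion above does not fix a value.
    """
    merged = []
    for chan, start, end in sorted(chunks):
        if merged and merged[-1][0] == chan and start - merged[-1][2] <= max_gap:
            # Extend the previous chunk instead of emitting
            # "one word beep, one word beep" fragments.
            prev = merged[-1]
            merged[-1] = (chan, prev[1], max(prev[2], end))
        else:
            merged.append((chan, start, end))
    return merged
```

A backchannel on a different channel is never absorbed, since merging only applies within one channel; the concern raised later about displaced backchannels is about where they end up in the beep file's playback order, not about this merge step.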
[speaker007:] So [disfmarker] uh, it [disfmarker] I don't care that you know, there's actually abutting segments that we have to join together. That's fine. [speaker004:] OK. [speaker007:] But what we do care about is that [vocalsound] the beginnings and ends um [pause] are actually close to the speech [vocalsound] inside of that [speaker004:] Yeah, [speaker007:] uh [disfmarker] [speaker004:] I think Jane tightened these up by hand. [speaker007:] OK, [speaker002:] OK. [speaker003:] Yeah. [speaker007:] so what is the [disfmarker] sort of how tight are they? [speaker006:] Uh, it looks much better. [speaker002:] Yeah. Looks good. [speaker003:] They were, um, reasonably tight, but not excruciatingly tight. [speaker007:] Oh. [speaker003:] That would've taken more time. I just wanted to get it so tha So that if you have like "yeah" [comment] in a [disfmarker] swimming in a big bin, then it's [disfmarker] [speaker007:] No, no! I don actually I [disfmarker] I [disfmarker] [speaker001:] Let me make a note on yours. [speaker007:] I [disfmarker] it's f [speaker002:] Yeah. [speaker007:] That's fine because we don't want to [disfmarker] th that's perfectly fine. In fact it's good. You always want to have a little bit of pause or nonspeech around the speech, say for recognition purposes. Uh, but just [disfmarker] just u w you know get an id I just wanted to have an idea of the [disfmarker] [vocalsound] of how much extra you allowed um [disfmarker] so that I can interpret the numbers if I compared that with a forced alignment segmentation. So. [speaker003:] I can't answer that, but [disfmarker] but my main goal was [pause] um, in these areas where you have a three way overlap [vocalsound] and one of the overlaps involves "yeah", [vocalsound] and it's swimming in this huge bin, [vocalsound] I wanted to get it so that it was clo more closely localized. [speaker007:] Mm hmm. Mm hmm. Right. But are we talking about, I don't know, [pause] a [vocalsound] [pause] tenth of a second? 
a [disfmarker]? You know? How [disfmarker] how much [disfmarker] how much extra would you allow at most [disfmarker] [speaker003:] I [disfmarker] I wanted to [disfmarker] I wanted it to be able to [disfmarker] l he be heard normally, [speaker007:] Mm hmm. [speaker003:] so that if you [disfmarker] if you play [pause] back that bin and have it in the mode where it stops at the boundary, [vocalsound] it sounds like a normal word. [speaker007:] OK. [speaker003:] It doesn't sound like the person [disfmarker] i it sounds normal. It's as if the person could've stopped there. [speaker007:] Mm hmm. OK. [speaker003:] And it wouldn't have been an awkward place to stop. Now sometimes you know, it's [disfmarker] these are involved in places where there was no time. And so, [vocalsound] [vocalsound] there wouldn't be [pause] a gap afterwards because [disfmarker] [speaker007:] OK. [speaker003:] I mean some cases, there're some people [pause] um, who [disfmarker] who have very long [pause] segments of discourse where, [vocalsound] you know, they'll [disfmarker] they'll breath [pause] and then I put a break. [speaker007:] Mm hmm. [speaker003:] But other than that, it's really pretty continuous and this includes things like going from one sentence into the [disfmarker] u one utterance into the next, one sentence into the next, um, w without really stopping. I mean [disfmarker] i they, i you know in writing you have this [vocalsound] two spaces and a big gap [speaker007:] Mm hmm. [speaker003:] you know. [speaker007:] Right. [speaker003:] But [disfmarker] but uh [pause] [vocalsound] i some people are planning and, you know, I mean, a lot [disfmarker] we always are planning [pause] what we're going to say next. [speaker007:] OK. 
[speaker003:] But uh, in which case, the gap between [pause] these two complete syntactic units, [vocalsound] um, which of course n spoken things are not always complete syntactically, but [disfmarker] [vocalsound] but it would be a shorter p shorter break [vocalsound] than [vocalsound] maybe you might like. [speaker007:] Mm hmm. [speaker003:] But the goal there was to [pause] not have [vocalsound] the text be so [disfmarker] so crudely [pause] parsed in a time bin. I mean, because [vocalsound] from a discourse m purpose [pause] it's [disfmarker] [vocalsound] it's more [disfmarker] [vocalsound] it's more useful to be able to see [disfmarker] and also you know, from a speech recognition purpose my impression is that [vocalsound] if you have too long a unit, it's [disfmarker] it doesn't help you very much either, cuz of the memory. [speaker007:] Well, yeah. That's fine. [speaker003:] So, that means that [vocalsound] the amount of time after something is variable depending partly on context, but my general goal [vocalsound] when there was [pause] sufficient space, room, pause [pause] after it [pause] to have it be [pause] kind of a natural feeling [pause] gap. [speaker007:] OK. [speaker003:] Which I c I don't know what it would be quantified as. You know, Wally Chafe says that [vocalsound] um, [vocalsound] in producing narratives, the spurts that people use [vocalsound] tend to be, [vocalsound] uh, that the [disfmarker] the [disfmarker] what would be a pause might be something like two [disfmarker] two seconds. [speaker007:] Mmm. [speaker003:] And um, that would be, you know one speaker. The discourse [disfmarker] [vocalsound] the people who look at turn taking often do use [disfmarker] [speaker007:] Mm hmm. [speaker003:] I was interested that you chose uh, [vocalsound] you know um, [comment] the [disfmarker] you know that you use cuz I think that's a unit that would be more consistent with sociolinguistics. 
[speaker007:] Well we chose um, you know, half a second [speaker003:] Yeah. [speaker007:] because [vocalsound] if [disfmarker] if you go much larger, you have a [disfmarker] y you know, your [disfmarker] your statement about how much overlap there is becomes less, [vocalsound] um, precise, because you include more of actual pause time into what you consider overlap speech. [speaker003:] Mm hmm. [speaker007:] Um, so, it's sort of a compromise, and [disfmarker] [vocalsound] it's also based [disfmarker] [speaker002:] Yeah. [comment] [vocalsound] [vocalsound] Yeah, I also used I think something around zero point five seconds for the speech nonspeech detector [disfmarker] [speaker007:] I mean Liz suggested that value based on [vocalsound] the distribution of pause times that you see in Switchboard and [disfmarker] and other corpora. [speaker003:] Mm hmm. [speaker007:] Um [disfmarker] So [disfmarker] [speaker002:] for the minimum silence length. [speaker007:] Mm hmm. I see. [speaker002:] So. [speaker007:] Yeah. [speaker003:] Mm hmm. [speaker007:] OK. [speaker003:] In any case, this [disfmarker] this uh, meeting [pause] that I hand [disfmarker] I [disfmarker] I hand adjusted two of them I mentioned before, [speaker007:] Mm hmm. OK, [speaker003:] and I sent [disfmarker] I sent email, [speaker007:] So [disfmarker] so at some point we will try to fine tune our forced alignment [speaker003:] so [disfmarker] And I sent the [comment] [pause] path. [speaker007:] maybe using those as references because you know, what you would do is you would play with different parameters. And to get an object You need an objective [vocalsound] measure of how closely you can align the models to the actual speech. And that's where your your data would be [pause] very important to have. 
So, I will [disfmarker] Um [disfmarker] [speaker002:] Yeah and hopefully the new meetings [pause] which will start from the channelized version will [disfmarker] will have better time boundaries [pause] and alignments. [speaker007:] Mm hmm. Right. [speaker003:] But I like this idea of [disfmarker] uh, for our purposes for the [disfmarker] for the IBM preparation, [vocalsound] uh, n having these [pause] joined together, [speaker002:] Yeah. [speaker003:] and uh [disfmarker] [speaker002:] Yeah. [speaker003:] It makes a lot of sense. And in terms of transcription, it would be easy to do it that way. [speaker007:] Yeah. [speaker003:] The way that they have with the longer units, [speaker007:] Yeah. [speaker003:] not having to fuss with adding these units at this time. [speaker002:] Yeah. [speaker007:] Right. [speaker002:] Whi which could have one drawback. If there is uh a backchannel in between those three things, [speaker003:] Mm hmm. [speaker002:] the [disfmarker] the n the backchannel will [disfmarker] will occur at the end of [disfmarker] of those three. [speaker003:] Yes. [speaker002:] And [disfmarker] and in [disfmarker] in the [disfmarker] in the previous version where in the n which is used now, [vocalsound] there, the backchannel would [disfmarker] would be in between there somewhere, [speaker003:] I see. [speaker002:] so. [speaker003:] Yeah. [speaker002:] That would be more natural [speaker003:] Well, [speaker002:] but [disfmarker] [speaker003:] that's [disfmarker] that's right, but you know, thi this brings me to the other f stage of this which I discussed with you earlier today, [speaker002:] Yeah. [speaker003:] which is [vocalsound] the second stage is [vocalsound] um, w what to do [pause] in terms of the transcribers adjustment of these data. I discussed this with you too. 
Um, the tr so the idea initially was, we would get [vocalsound] uh, for the new meetings, so the e EDU meetings, that [vocalsound] Thilo ha has now presegmented all of them for us, on a channel by channel basis. And um, so, I've assigned [disfmarker] I've [disfmarker] I've assigned them to our transcribers and um, so far I've discussed it with one, with uh [disfmarker] And I had a [pause] about an hour discussion with her about this yesterday, we went through [vocalsound] uh EDU one, at some extent. And it occurred to me that [vocalsound] um [disfmarker] that [vocalsound] basically what we have in this kind of a format is [disfmarker] you could consider it as a staggered mixed file, we had some discussion over the weekend a about [disfmarker] at [disfmarker] at this other meeting that we were all a at [disfmarker] um, [vocalsound] about whether the tran the IBM transcribers should hear a single channel audio, or a mixed channel audio. And um, [vocalsound] in [disfmarker] in a way, by [disfmarker] by having this [disfmarker] this chunk and then the backchannel [vocalsound] after it, it's like a stagal staggered mixed channel. And um, [vocalsound] it occurred [pause] to me in my discussion with her yesterday that um, um, the [disfmarker] [pause] the [disfmarker] the maximal gain, it's [disfmarker] from the IBM [pause] people, may be in long stretches of connected speech. So it's basically a whole bunch of words [vocalsound] which they can really do, because of the continuity within that person's turn. So, what I'm thinking, and it may be that not all meetings will be good for this, [comment] but [disfmarker] but what I'm thinking is that [vocalsound] in the EDU meetings, they tend to be [vocalsound] driven by a couple of dominant speakers. And, if the chunked files focused on the dominant speakers, [vocalsound] then, when [disfmarker] when it got s patched together when it comes back from IBM, we can add the backchannels. 
It seems to me [vocalsound] that [vocalsound] um, you know, the backchannels per se wouldn't be so hard, but then there's this question of the time [pause] [@ @] [comment] uh, marking, and whether the beeps would be [vocalsound] uh y y y And I'm not exactly sure how that [disfmarker] how that would work with the [disfmarker] with the backchannels. And, so um [disfmarker] And certainly things that are [vocalsound] intrusions of multiple words, [vocalsound] taken out of context and displaced in time from where they occurred, [vocalsound] that would be hard. So, m my [vocalsound] thought is [pause] i I'm having this transcriber go through [vocalsound] the EDU one meeting, and indicate a start time [nonvocalsound] f for each dominant speaker, endpoi end time for each dominant speaker, and the idea that [vocalsound] these units would be generated for the dominant speakers, [vocalsound] and maybe not for the other channels. [speaker001:] Yeah the only, um, disadvantage of that is, then it's hard to use an automatic method to do that. The advantage is that it's probably faster to do that than it is to use the automated method and correct it. So. [speaker003:] Well, it [disfmarker] OK. [speaker001:] We'll just have to see. [speaker003:] I think [disfmarker] [vocalsound] I [disfmarker] I think um, you know, the original plan was that the transcriber would adjust the t the boundaries, and all that for all the channels but, [vocalsound] you know, that is so time consuming, and since we have a bottleneck here, we want to get IBM things that are usable s as soon as possible, then this seemed to me it'd be a way of gett to get them a flood of data, which would be useful when it comes back to us. 
And um [disfmarker] Oh also, at the same time she [disfmarker] when she goes through this, she'll be [vocalsound] uh [disfmarker] If there's anything that [vocalsound] was encoded as a pause, but really has something transcribable in it, [vocalsound] then she's going to [vocalsound] uh, make a mark [disfmarker] [speaker001:] Yeah. [speaker003:] w uh, so you know, so [vocalsound] that [disfmarker] that bin would be marked as it [disfmarker] as double dots and she'll just add an S. And in the other [disfmarker] in the other case, if it's marked as speech, [vocalsound] and really there's nothing transcribable in it, then she's going to put a s dash, and I'll go through and it [disfmarker] and um, you know, with a [disfmarker] [vocalsound] [vocalsound] with a substitution command, get it so that it's clear that those are the other category. I'll just, you know, recode them. But um, [vocalsound] um, the transcribable events [pause] that um, I'm considering in this, [vocalsound] uh, continue to be [vocalsound] laugh, as well as speech, and cough and things like that, so I'm not stripping out anything, just [disfmarker] just you know, being very lenient in what's considered speech. [speaker004:] Jane? [speaker003:] Yeah? [speaker004:] In terms of the [disfmarker] this new procedure you're suggesting, [vocalsound] um, u what is the [disfmarker] [speaker001:] It's not that different. [speaker004:] So I'm a little confused, because how do we know where to put beeps? Is it [disfmarker] i d y is it [disfmarker] [speaker003:] Oh, OK. So what it [disfmarker] what it [disfmarker] what it involves is [disfmarker] is really a s uh, [vocalsound] uh, the original pr procedure, [speaker001:] Transcriber will do it. [speaker003:] but [vocalsound] only applied to [pause] uh, a certain [pause] strategically chosen [pause] s aspect of the data. [speaker001:] We pick the easy parts of the data basically, [speaker003:] So [disfmarker] [speaker001:] and transcriber marks it by hand. 
[speaker003:] You got it. [speaker001:] And because [disfmarker] [speaker004:] But after we've done Thilo's thing. [speaker001:] No. [speaker003:] Yes! [speaker001:] Oh, after. [speaker003:] Yes! [speaker001:] Oh, OK, [speaker003:] Oh yeah! [speaker001:] I didn't [disfmarker] I didn't understand that. OK. [speaker002:] So, I'm [@ @] [disfmarker] now I'm confused. [speaker003:] OK. We start with your presegmented version [disfmarker] [speaker007:] OK, and I'm leaving. [speaker005:] Yeah, I have to go as well. [speaker007:] So, um [disfmarker] [speaker001:] OK, leave the mikes on, and just put them on the table. [speaker005:] OK. Thanks. [speaker003:] We start with the presegmented version [disfmarker] [speaker001:] Let me mark you as no digits. [speaker002:] You start with the presegmentation, r [vocalsound] yeah? [speaker003:] Yeah. And then um, [vocalsound] the transcriber, [vocalsound] instead of going painstakingly through all the channels and moving the boundaries around, and deciding if it's speech or not, but not transcribing anything. OK? Instead of doing that, which was our original plan, [vocalsound] the tra They focus on the dominant speaker [disfmarker] [speaker004:] Mm hmm. They just [vocalsound] do that on [pause] the main channels. [speaker003:] Yeah. So what they do is they identify who's the di dominant speaker, and when the speaker starts. [speaker004:] OK. [speaker002:] Yeah? OK. [speaker003:] So I mean, you're still gonna [disfmarker] So we're [disfmarker] [speaker002:] And you just [disfmarker] [speaker003:] It's based on your se presegmentation, that's the basic [pause] thing. [speaker002:] and you just use the s the segments of the dominant speaker then? For [disfmarker] for sending to [disfmarker] to IBM or [disfmarker]? [speaker003:] Yeah. Exactly. 
[speaker004:] So, now Jane, my question is [vocalsound] when they're all done adjusting the w time boundaries for the dominant speaker, [comment] have they then also erased the time boundaries for the other ones? [speaker003:] Mm hmm. Uh No. No, no. [speaker004:] So how will we know who [disfmarker] [speaker003:] Huh uh. S [speaker002:] Yeah. [speaker003:] That's [disfmarker] that's why she's notating the start and end points of the dominant speakers. So, on a [disfmarker] you know, so [vocalsound] i in EDU one, i as far as I listened to it, you start off with a [disfmarker] a s section by Jerry. So Jerry starts at minute so and so, and goes until minute so and so. And then Mark Paskin comes in. And he starts at [vocalsound] minute such and such, and goes on till minute so and so. OK. And then [vocalsound] meanwhile, she's listening to [vocalsound] [pause] both of these guys' channels, determining if there're any cases of misclassification of speech as nothing, and nothing as speech, [speaker004:] Mm hmm. OK. [speaker003:] and [vocalsound] a and adding a tag if that happens. [speaker004:] So she does the adjustments on those guys? [speaker003:] But you know, I wanted to say, his segmentation is so good, that [vocalsound] um, the part that I listened to with her yesterday [vocalsound] didn't need any adjustments of the bins. [speaker002:] On that meeting. [speaker004:] Mm hmm. [speaker003:] So far we haven't. So this is not gonna be a major part of the process, at least [disfmarker] least not in [disfmarker] not on ones that [disfmarker] that really [disfmarker] [speaker004:] So if you don't have to adjust the bins, why not just do what it [disfmarker] for all the channels? [speaker003:] Mm hmm? [speaker004:] Why not just throw all the channels to IBM? [speaker003:] Well there's the question o of [pause] whether [disfmarker] Well, OK. 
She i It's a question of how much time we want our transcriber to invest here [vocalsound] when she's gonna have to invest that when it comes back from IBM anyway. [speaker004:] Mm hmm. [speaker003:] So if it's only inserting "mm hmm"s here and there, then, wouldn't that be something that would be just as efficient to do at this end, instead of having it go through I B M, then be patched together, then be double checked here. [speaker004:] Mm hmm. Right. [speaker002:] Yeah. But [disfmarker] But then we could just use the [disfmarker] the output of the detector, and do the beeping on it, and send it to I B [speaker004:] Without having her check anything. [speaker006:] Right. [speaker002:] Yeah. [speaker003:] Well, I guess [disfmarker] [speaker002:] For some meetings, I'm [disfmarker] I'm sure it [disfmarker] i [speaker001:] I think we just [disfmarker] we just have to listen to it and see how good they are. [speaker002:] n [speaker003:] I'm [disfmarker] I'm open to that, it was [disfmarker] [speaker006:] Yeah, if it's working well, that sounds like a good idea [speaker002:] That's [disfmarker] And some [disfmarker] on some meetings it's good. [speaker006:] since as you say you have to do stuff with the other end anyway. [speaker002:] Yeah. [speaker003:] Well yea OK, good. [speaker004:] Yeah, I mean we have to fix it when it comes back anyhow. [speaker003:] I mean the detector, this [disfmarker] [speaker002:] Yeah. [speaker003:] Now, you were saying that they [disfmarker] they differ in how well they work depending on channel s sys systems and stuff. [speaker002:] Yeah. So we should perhaps just select meetings on which the speech nonspeech detection works well, [speaker003:] But EDU is great. [speaker002:] and just use, [vocalsound] those meetings to [disfmarker] to [disfmarker] to send to IBM and, do the other ones. [speaker001:] Release to begin with. [speaker003:] How interesting. [speaker006:] What's the problem [disfmarker] the l I forget. 
[speaker003:] You know [disfmarker] [speaker006:] Is the problem the lapel, or [disfmarker] or [disfmarker] [speaker002:] Uh, it really depends. Um, my [disfmarker] my [disfmarker] my impression is that it's better for meetings with fewer speakers, and it's better for [disfmarker] [vocalsound] for meetings where nobody is breathing. [speaker006:] Oh, the dead meetings. [speaker002:] Yeah, get [disfmarker] That's it. [speaker004:] So in fact this might suggest an alternative sort of a [disfmarker] a c a hybrid between these two things. [speaker001:] No, the undead meeting, yeah. [speaker003:] Yeah. Yeah? [speaker004:] So the [disfmarker] the one suggestion is you know we [disfmarker] [vocalsound] we run Thilo's thing and then we have somebody go and adjust all the time boundaries [speaker003:] Yeah? [speaker004:] and we send it to IBM. [speaker002:] Yeah. [speaker004:] The other one is [vocalsound] we just run his thing and send it to IBM. There's a [disfmarker] a another possibility if we find that there are some problems, [speaker002:] Yeah. Yeah. [speaker004:] and that is [vocalsound] if we go ahead and we [vocalsound] just run his, and we generate the beeps file, then we have somebody listen beeps file. [speaker002:] Yeah. Yeah. [speaker004:] And they listen to each section and say "yes, no" whether that section is [speaker002:] And erase [disfmarker] Yeah. [speaker003:] Is intelligible. [speaker004:] i i intelligible or not. And it just [disfmarker] You know, there's a little interface which will [disfmarker] for all the "yes" es it [disfmarker] then that will be the final [vocalsound] beep file. [speaker002:] Yeah. [speaker001:] Blech. [speaker003:] That's interesting! Cuz that's [disfmarker] that's directly related to the e end task. [speaker001:] Stress test. [speaker004:] Mm hmm. [speaker003:] How interesting! [speaker004:] Yeah. I mean it wouldn't be that much fun for a transcriber to sit there, hear it, beep, yes or no. [speaker002:] Nope. 
[speaker006:] I [disfmarker] I [disfmarker] I don't know. [speaker004:] But it would be quick. [speaker006:] It would be [disfmarker] kind of quick but they're still listening to everything. [speaker004:] But there's no adjusting. And that's what's slow. There's no adjusting of time boundaries. [speaker003:] Well, [vocalsound] eh, listening does take time too. [speaker006:] Yeah. [speaker004:] Yeah. [speaker006:] I don't know, I [disfmarker] I think I'm [disfmarker] I'm really tending towards [disfmarker] [speaker001:] One and a half times real time. [speaker006:] I mean, [vocalsound] what's the worst that happens? Do the transcribers [disfmarker] I mean as long as th on the other end they can say there's [disfmarker] there's something [disfmarker] conventions so that they say "huh?" [speaker004:] Yeah. Right. [speaker006:] and then we can flag those later. [speaker004:] They [disfmarker] they [disfmarker] Yeah. [speaker006:] i i It [disfmarker] i [speaker004:] That's true. We can just catch it at the [disfmarker] catch everything at this side. [speaker006:] Yeah. [speaker004:] Well maybe that's the best way to go, just [disfmarker] [speaker003:] How interesting! [speaker001:] I mean it just depends on how [disfmarker] [speaker003:] Well EDU [disfmarker] [speaker002:] Yeah, u u u [speaker001:] Sorry, go ahead. [speaker003:] So I was gonna say, EDU one is good enough, [speaker002:] Yeah. [speaker003:] maybe we could include it in this [disfmarker] in this set of uh, this stuff we send. [speaker002:] Yeah there's [disfmarker] I [disfmarker] I think there are some meetings where it would [disfmarker] would [disfmarker] It's possible like this. [speaker001:] Yeah I [disfmarker] I think, we won't know until we generate a bunch of beep files automatically, listen to them and see how bad they are. [speaker002:] Yeah. Yeah. [speaker004:] Yeah. We won't be able to s include it with this first thing, [speaker003:] Mm hmm. Hmm. 
[speaker001:] If [disfmarker] [speaker003:] Oh, OK. [speaker004:] because there's a part of the process of the beep file which requires knowing the normalization coefficients. [speaker003:] Oh, I see. [speaker004:] And [disfmarker] [vocalsound] So a [speaker001:] That's not hard to do. Just [disfmarker] it takes [disfmarker] you know, it just takes five minutes rather than, taking a second. [speaker004:] OK [speaker002:] Yeah. [speaker004:] Right, except I don't think that [disfmarker] the c the instructions for doing that was in that directory, [speaker001:] So. I just hand [disfmarker] hard coded it. [speaker004:] right? I [disfmarker] I didn't see where you had gener [speaker001:] No, but it's easy enough to do. [speaker002:] What [disfmarker] [speaker006:] But I [disfmarker] but I have a [disfmarker] [speaker002:] Doing the gain? It's no problem. [speaker004:] n Doing th [speaker002:] Adjusting the gain? [speaker004:] No, getting the coefficients, for each channel. [speaker003:] Know what numbers. [speaker002:] Yeah, that's no problem. [speaker004:] OK. So we just run that one [disfmarker] [speaker002:] We can do that. [speaker001:] There are lots of ways to do it. I have one program that'll do it. [speaker002:] Yeah. [speaker001:] You can find other programs. [speaker002:] I [disfmarker] I used it, [speaker004:] We just run that [speaker002:] so. [speaker001:] Yep. [speaker006:] Yeah. [speaker002:] Yeah. [speaker004:] J sound stat? OK. [speaker001:] Minus D, capital D. [speaker006:] But [disfmarker] [speaker002:] Yeah. [speaker006:] but [disfmarker] but I [disfmarker] I [disfmarker] I have [pause] another suggestion on that, which is, [vocalsound] since, really what this is, is [disfmarker] is [disfmarker] is trying to in the large, send the right thing to them and there is gonna be this [disfmarker] this post processing step, um, why don't we check through a bunch of things by sampling it? [speaker004:] Mm hmm. [speaker006:] Right? 
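A per-channel normalization coefficient of the kind discussed above could be estimated directly from the channel's samples. This peak-based sketch is an assumption about what such a coefficient might be; it is not a description of the actual program ("J sound stat") mentioned in the meeting:

```python
def channel_gain(samples, target_peak=0.9, full_scale=32767):
    """Estimate a multiplicative gain for one channel.

    samples: iterable of 16-bit PCM sample values (16-bit width is an
    assumption). The returned gain scales the absolute peak sample to
    target_peak * full_scale; peak-based normalization is itself an
    assumption, since the meeting does not define the coefficients.
    """
    peak = max((abs(s) for s in samples), default=0)
    # A silent or empty channel gets unity gain rather than a blow-up.
    return target_peak * full_scale / peak if peak else 1.0
```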
In other words, rather than, um, uh, saying we're gonna listen to everything [disfmarker] [speaker001:] I didn't mean listen to everything, I meant, just see if they're any good. [speaker006:] Yeah. So y you do a bunch of meetings, you listen to [disfmarker] to a little bit here and there, [speaker004:] Yeah. [speaker006:] if it sounds like it's almost always right and there's not any big problem you send it to them. [speaker004:] Send it to them. [speaker002:] Yeah. [speaker004:] OK. [speaker006:] And, you know, then they'll send us back what we [disfmarker] w what [disfmarker] what they send back to us, [speaker003:] Oh, that'd be great. [speaker006:] and we'll [disfmarker] we'll fix things up and [vocalsound] some meetings will cost more time to fix up than others. [speaker001:] We should [disfmarker] Yeah. [speaker002:] Yeah. [speaker001:] And we should just double check with Brian on a few simple conventions on how they should mark things. [speaker004:] OK. [speaker002:] Sure. [speaker004:] When they [disfmarker] when there's either no speech in there, [speaker002:] Yeah. [speaker004:] or [vocalsound] something they don't understand, [speaker002:] Yeah. [speaker003:] Yeah. [speaker004:] things like that. [speaker003:] Mm hmm. [speaker006:] Yeah. [speaker001:] Yeah, cuz [@ @] uh what I had originally said to Brian was well they'll have to mark, when they can't distinguish between the foreground and background, because I thought that was gonna be the most prevalent. [speaker004:] Mm hmm. [speaker001:] But if we send them without editing, then we're also gonna hafta have m uh, notations for words that are cut off, [speaker002:] Yeah. [speaker004:] Mm hmm. [speaker002:] Yeah. [speaker001:] and other sorts of, uh, acoustic problems. [speaker004:] And they may just guess at what those cut off words are, [speaker003:] They do already. Yeah. 
[speaker004:] but w I mean we're gonna adjust [disfmarker] everything when we come back [disfmarker] [speaker001:] But what [disfmarker] what we would like them to do is be conservative so that they should only write down the transcript if they're sure. [speaker002:] Yeah. [speaker001:] And otherwise they should mark it so that we can check. [speaker002:] Mark it. Sure. Yeah. Yeah. [speaker004:] Mm hmm. [speaker003:] Well, we have the unintelligibility [pause] convention. And actually they have one also, [speaker001:] Mm hmm. Right. [speaker003:] which [disfmarker] [speaker006:] i Can I maybe have [disfmarker] have an order of [disfmarker] it's probably in your paper that I haven't looked at lately, but [disfmarker] Uh, an order of magnitude [speaker003:] Certainty. [speaker006:] notion of [disfmarker] of how [disfmarker] on a good meeting, how often uh, do you get segments that come in the middle of words and so forth, and uh [disfmarker] in a bad meeting how [vocalsound] often? [speaker002:] Uh. [speaker003:] Was is it in a [disfmarker] in a [disfmarker] what [disfmarker] what is the t [speaker006:] Well he's saying, you know, that the [disfmarker] the EDU meeting was a good [disfmarker] good meeting, [speaker003:] In a good meeting, what? [speaker002:] Yeah. [speaker006:] right? [speaker003:] Yeah. [speaker006:] Uh, and so [disfmarker] so [disfmarker] so it was almost [disfmarker] it was almost always doing the right thing. [speaker003:] Oh I see, the characteristics. [speaker006:] So I wanted to get some sense of what [disfmarker] what almost always meant. And then, uh in a bad meeting, [vocalsound] or p some meetings where he said oh he's had some problems, what does that mean? [speaker003:] Uh huh. [speaker006:] So I mean does one of the does it mean one percent and ten percent? [speaker003:] OK. [speaker006:] Or does it mean [vocalsound] five percent and fifty percent? [speaker003:] OK. 
[speaker006:] Uh [disfmarker] [speaker002:] So [disfmarker] [speaker006:] Or [disfmarker] Maybe percentage isn't the right word, [speaker003:] Just [speaker006:] but you know how many [disfmarker] how many per minute, [speaker002:] Yeah th [speaker006:] or [disfmarker] You know. [speaker002:] Yeah, the [disfmarker] the problem is that, nnn, the numbers Ian gave in the paper is just uh, some frame error rate. So that's [disfmarker] that's not really [disfmarker] [vocalsound] What will be effective for [disfmarker] for the transcribers, is [disfmarker] They have to [disfmarker] yeah, in in they have to insure that that's a real s spurt or something. And [disfmarker] but, [vocalsound] the numbers [disfmarker] Oops. Um [disfmarker] [speaker003:] Hmm! [speaker002:] Let me think. So the [pause] speech [disfmarker] the amount of speech that is missed by the [pause] detector, for a good meeting, I th is around [pause] or under one percent, I would say. But there can be [disfmarker] Yeah. For [disfmarker] yeah, but there can be more [disfmarker] There's [disfmarker] There's more amount speech [disfmarker] uh, more amount of [disfmarker] Yeah well, the detector says there is speech, but there is none. So that [disfmarker] that can be a lot when [disfmarker] when it's really a breathy channel. [speaker006:] But I think that's less of a problem. They'll just listen. [speaker002:] Yeah. [speaker006:] It's just wasted time. [speaker002:] Yeah. [speaker006:] And th and that's for a good meeting. Now what about in a meeting that you said we've [disfmarker] you've had some more trouble with? [speaker002:] I can't [comment] really [disfmarker] hhh, [comment] [pause] Tsk. [comment] I [pause] don't have really representative numbers, I think. 
That's really [disfmarker] I [disfmarker] I did [pause] this on [disfmarker] on four meetings and only five minutes of [disfmarker] of every meet of [disfmarker] of these meetings so, [vocalsound] it's not [disfmarker] not that representative, but, it's perhaps, Fff. Um [disfmarker] Yeah, it's perhaps then [disfmarker] it's perhaps five percent of something, which s uh the [disfmarker] the frames [disfmarker] speech frames which are [disfmarker] which are missed, but um, I can't [disfmarker] can't really tell. [speaker006:] Right. So I [disfmarker] [vocalsound] So i Sometime, we might wanna go back and look at it more in terms of [vocalsound] how many times is there a spurt that's [disfmarker] that's uh, interrupted? [speaker002:] Yeah. Yeah. Yeah. [speaker006:] Something like that? [speaker003:] The other problem is, that when it [disfmarker] when it uh d i on the breathy ones, where you get [vocalsound] [vocalsound] breathing, uh, inti indicated as speech. [speaker006:] And [disfmarker] [speaker002:] So [disfmarker] [speaker003:] And I guess we could just indicate to the transcribers not to [pause] encode that if they [disfmarker] We could still do the beep file. [speaker006:] Yeah again I [disfmarker] I think that that is probably less of a problem because if you're [disfmarker] if there's [disfmarker] [vocalsound] If [disfmarker] if a [disfmarker] if a word is [disfmarker] is split, then they might have to listen to it a few times to really understand that they can't quite get it. [speaker003:] OK. OK. OK. [speaker006:] Whereas if they listen [nonvocalsound] to it and there's [disfmarker] don't hear any speech I think they'd probably just listen to it once. [speaker002:] But [disfmarker] Yeah. [speaker006:] So there'd [disfmarker] you'd think there'd be a [disfmarker] a factor of three or four in [disfmarker] in, uh, cost function, [speaker003:] OK. [speaker006:] you know, between them or something. 
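The metric speaker006 suggests here, counting spurts the detector interrupts rather than a frame error rate, could be computed roughly as follows. This is a hypothetical sketch: the segment representation, the function name, and the strict-boundary criterion are all assumptions, not anything specified in the meeting.

```python
def split_spurt_count(ref_segments, hyp_segments):
    """Count reference spurts that the speech/nonspeech detector cuts in
    the middle, i.e. where a hypothesis segment boundary falls strictly
    inside a reference speech segment.

    Both arguments are lists of (start, end) pairs in seconds.
    """
    # Collect every boundary the detector produced.
    boundaries = [b for hs, he in hyp_segments for b in (hs, he)]
    count = 0
    for rs, rend in ref_segments:
        # A boundary strictly inside the reference spurt means the
        # detector would hand the transcribers a word cut in half.
        if any(rs < b < rend for b in boundaries):
            count += 1
    return count
```

Divided by the meeting duration in minutes, this would give the "how many per minute" figure asked about, which is closer to the transcribers' actual cost than a frame-level percentage.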
[speaker002:] Yeah, so [disfmarker] but I think that's [disfmarker] n that really doesn't happen very often that [disfmarker] that [disfmarker] that a word is cut in the middle or something. That's [disfmarker] that's really not [disfmarker] not normal. [speaker006:] So [disfmarker] so what you're saying is that nearly always what happens when there's a problem is that [disfmarker] is that uh, there's [vocalsound] some uh, uh nonspeech that uh [disfmarker] that is b interpreted as speech. [speaker002:] That is marked as speech. Yeah. Yeah. [speaker006:] Well then, we really should just send the stuff. [speaker003:] That would be great. [speaker006:] Right? Because that doesn't do any harm. [speaker002:] Yeah, [speaker006:] You know, if they [disfmarker] they hear you know, a dog bark and they say what was the word, [speaker002:] it's [disfmarker] [speaker006:] they [comment] you know, they [disfmarker] [speaker002:] Yeah, I als I [disfmarker] [speaker006:] Ruff ruff! [speaker002:] Yeah I also thought of [disfmarker] there [disfmarker] there are really some channels where it is almost [comment] um, only bre breathing in it. [speaker006:] Yeah? [speaker002:] And to [disfmarker] to re run's Eh, um. Yeah. I've got a [disfmarker] a [pause] P a [pause] method with loops into the cross correlation with the PZM mike, [speaker006:] Uh huh. [speaker002:] and then to reject everything which [disfmarker] which seems to be breath. So, I could run this on those breathy channels, and perhaps throw out [disfmarker] [speaker001:] That's a good idea. [speaker003:] Wow, that's a great idea. [speaker006:] Yeah. But I think [disfmarker] I th Again, I think that sort of [disfmarker] that that would be good, and what that'll do is just cut the time a little further. [speaker002:] Yeah. [speaker004:] Mm hmm. [speaker002:] Yeah. [speaker006:] But I think none of this is stuff that really needs somebody doing these [disfmarker] these uh, uh, explicit markings. [speaker004:] Yeah. 
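The breath-rejection idea speaker002 describes, using cross-correlation with the PZM table mike, could look roughly like the sketch below. The premise is that real speech from a close-talking mike also shows up (delayed and attenuated) on the distant PZM, while breath noise is local to the close mike and barely correlates. Everything here is an assumption for illustration: the function name, the lag window, and the 0.2 threshold are not from the meeting or from the actual method.

```python
import numpy as np

def is_breath(close_seg, pzm_seg, threshold=0.2):
    """Return True if a segment on a close-talking channel looks like
    breath/channel noise, i.e. it has no correlated energy on the
    distant PZM mike covering the same time span."""
    close = np.asarray(close_seg, dtype=float)
    pzm = np.asarray(pzm_seg, dtype=float)
    close = close - close.mean()
    pzm = pzm - pzm.mean()
    denom = np.sqrt((close ** 2).sum() * (pzm ** 2).sum())
    if denom == 0.0:
        return True  # one channel is silent: nothing worth keeping
    # Search a small range of delays, since the room mike receives the
    # speech slightly later than the close mike.
    max_lag = 64
    best = 0.0
    for lag in range(max_lag):
        n = min(len(close), len(pzm) - lag)
        if n <= 0:
            break
        c = np.dot(close[:n], pzm[lag:lag + n]) / denom
        best = max(best, abs(c))
    return bool(best < threshold)
```

Run over the segments the detector already produced, this would simply drop the breathy false alarms before the beep file is built, cutting the transcribers' wasted listening time.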
[speaker003:] Excellent. Oh, I'd be delighted with that, I [disfmarker] I was very impressed with the [disfmarker] with the result. Yeah. [speaker006:] Yeah, cuz the other thing that was concerning me about it was that it seemed kind of specialized to the EDU meeting, and [disfmarker] and that then when you get a meeting like this or something, [speaker002:] Yeah. [speaker006:] and [disfmarker] [vocalsound] and you have a b a bunch of different dominant speakers [speaker003:] Oh yeah, interesting. [speaker006:] you know, how are you gonna handle it. Whereas this sounds like a more general solution [speaker003:] Oh yeah. [speaker006:] is [disfmarker] [speaker003:] Oh yeah, I pr I much prefer this, I was just trying to find a way [disfmarker] [speaker006:] Yeah. [speaker003:] Cuz I [disfmarker] I don't think the staggered mixed channel is awfully good as a way of handling overlaps. [speaker006:] Uh huh. [speaker003:] But [disfmarker] [speaker004:] Well good. [speaker003:] but uh [disfmarker] [speaker004:] That [disfmarker] that really simplifies thing then. [speaker003:] Yeah. [speaker004:] And we can just, you know, get the meeting, process it, put the beeps file, send it off to IBM. You know? [speaker003:] Mm hmm. [speaker004:] With very little [pause] work on our side. [speaker002:] Yeah. Process it, hear into it. I would [disfmarker] [speaker004:] Do what? [speaker002:] Um, [pause] listen to it, and then [disfmarker] [speaker001:] Or at least sample it. [speaker004:] Well, sample it. [speaker002:] Yeah. [speaker006:] I [disfmarker] I would just use some samples, [speaker004:] Sample it. [speaker002:] Yeah. Yeah. [speaker006:] make sure you don't send them three hours of "bzzz" [comment] or something. [speaker002:] Yeah. [speaker004:] Yeah. Yeah. [speaker002:] No. [speaker004:] Right. [speaker002:] That won't be good. [speaker004:] Yeah. [speaker003:] Yeah. [speaker004:] Yeah that would be very good. [speaker002:] Yeah. 
[speaker004:] And then we can you know [disfmarker] [speaker006:] Yeah. [speaker004:] That'll oughta be a good way to get the pipeline going. [speaker003:] Oh, I'd be delighted. Yeah. [speaker002:] And there's [disfmarker] there's one point which I [comment] uh [disfmarker] [vocalsound] yeah, which [disfmarker] which I r [vocalsound] we covered when I [disfmarker] when I r listened to one of the EDU meetings, [speaker006:] Great. [speaker002:] and that's [vocalsound] that somebody is playing sound from his laptop. [speaker001:] Uh huh [speaker002:] And i [vocalsound] the speech nonspeech detector just assigns randomly the speech to [disfmarker] to one of the channels, so. Uh I haven't I didn't think of [disfmarker] of s of [vocalsound] this before, [speaker001:] What can you do? [speaker002:] but what [disfmarker] what shall we do about s things like this? [speaker003:] Well you were suggesting [disfmarker] You suggested maybe just not sending that part of the meeting. [speaker001:] Yep. Mmm. [speaker003:] But [disfmarker] [speaker002:] But, sometimes the [disfmarker] [vocalsound] the [disfmarker] the laptop is in the background and some [disfmarker] somebody is [disfmarker] is talking, and, [vocalsound] that's really a little bit confusing, but [disfmarker] [speaker006:] That's life. [speaker001:] It's a little bit confusing. [speaker002:] Yeah. [speaker001:] I mean, [comment] what're we gonna do? [speaker002:] Yeah. OK. [speaker006:] Yeah. [speaker001:] Even a hand transcription would [disfmarker] [speaker003:] Do you [disfmarker] [speaker001:] a hand transcriber would have trouble with that. [speaker002:] Yeah, that's [disfmarker] that's a second question, [speaker001:] So. [speaker002:] "what [disfmarker] what will different transcribers do with [disfmarker] with the laptop sound?" [speaker006:] What was the l what was the laptop sound? [speaker003:] Would you [disfmarker] would [disfmarker] Yeah, go ahead. 
[speaker006:] I mean was it speech, or was it [disfmarker] [speaker002:] Yeah. It's speech. [speaker006:] Great. [speaker003:] Well, so [disfmarker] I mean [disfmarker] So my standard approach has been if it's not someone close miked, then, they don't end up on one of the close miked channels. They end up on a different channel. And we have any number of channels available, [speaker006:] Uh huh. [speaker003:] I mean it's an infinite number of channels. [speaker002:] Yeah. [speaker003:] So just put them on some other channel. [speaker002:] But, when thi when this is sent to [disfmarker] to the I M eh, I B M transcribers, I don't know if [disfmarker] if they can tell that's really [disfmarker] [speaker003:] Yeah, that's right. [speaker001:] Yeah cuz there will be no channel on which it is foreground. [speaker002:] Yeah. Yeah. [speaker003:] Well, they have a convention, in their own procedures, [vocalsound] which is for a background [pause] sound. [speaker001:] Uh [disfmarker] Right, but, uh, in general I don't think we want them transcribing the background, cuz that would be too much work. [speaker002:] Yeah. [speaker001:] Right? For it [disfmarker] because in the overlap sections, then they'll [speaker004:] Well I don't think Jane's saying they're gonna transcribe it, but they'll just mark it as being [disfmarker] there's some background stuff there, [speaker001:] But that's gonna be all over the place. [speaker004:] right? [speaker003:] Yeah. [speaker001:] How w how will they tell the difference between that sort of background and the dormal [disfmarker] normal background of two people talking at once? [speaker002:] Yeah. [speaker003:] Oh, I think [disfmarker] I think it'd be easy to to say "background laptop". [speaker004:] But wait a minute, [speaker001:] How would they know that? [speaker004:] why would they treat them differently? [speaker002:] Yeah. 
[speaker003:] Well because one of them [disfmarker] [speaker001:] Because otherwise it's gonna be too much work for them to mark it. They'll be marking it all over the place. [speaker002:] Yeah. [speaker003:] Oh, I s background laptop or, background LT [vocalsound] [vocalsound] wouldn't take any time. [speaker001:] Sure, but how are they gonna tell bet the difference between that and two people just talking at the same time? [speaker003:] And [disfmarker] Oh, you can tell. [speaker002:] Yeah. [speaker003:] Acoustically, can't you tell? [speaker002:] It's really good sound, so [disfmarker] [speaker003:] Oh is it? Oh! [speaker006:] Well, I mean, isn't there a category something like uh, "sounds for someone for whom there is no i close mike"? [speaker002:] Yeah that would be very important, [speaker003:] Yeah. [speaker001:] But how do we d how do we do that for the I B M folks? [speaker002:] yeah. [speaker001:] How can they tell that? [speaker004:] Well we may just have to do it when it gets back here. [speaker001:] Yes, that's my opinion as well. [speaker002:] Yeah. [speaker001:] So we don't do anything for it [disfmarker] with it. [speaker003:] OK. [speaker004:] Yeah. [speaker003:] That sounds good. That sounds good. [speaker004:] Yeah. [speaker001:] And they'll just mark it however they mark it, and we'll correct it when it comes back. [speaker002:] So th [speaker006:] Yeah. [speaker003:] OK. [speaker002:] there was a category for [@ @] [comment] speech. [speaker001:] Yeah, the default. [speaker003:] Yeah, s a [speaker002:] OK. [speaker001:] No, not default. [speaker003:] Well, as it comes back, we have a uh [disfmarker] when we can use the channelized interface for encoding it, then it'll be easy for us to handle. [speaker002:] Yeah. [speaker003:] But [disfmarker] [vocalsound] but if [disfmarker] if out of context, they can't tell if it's a channeled speak uh, you know, a close miked speaker or not, [vocalsound] then that would be confusing to them. 
[speaker002:] OK. [speaker001:] Right. [speaker002:] OK. [speaker003:] I don't know, I [disfmarker] it doesn't [disfmarker] I don't [disfmarker] Either way would be fine with me, I don't really care. [speaker006:] Yeah. So. Shall we uh, do digits and get out of here? [speaker003:] I have o I have one question. [speaker001:] Yep. [speaker003:] Do you think we should send the um [disfmarker] that whole meeting to them and not worry about pre processing it? [speaker006:] Yes ma ' [speaker003:] Or [disfmarker] Uh, what I mean is [vocalsound] we [disfmarker] we should [vocalsound] leave the [vocalsound] part with the audio in the uh, beep file that we send to IBM for that one, or should we [vocalsound] start after the [disfmarker] that part of the meeting is over [speaker006:] Which part? [speaker003:] in what we send. So, the part where they're using sounds from their [disfmarker] from their laptops. [speaker002:] With [disfmarker] with the laptop sound, or [disfmarker]? [speaker003:] w If we have speech from the laptop should we just uh, excise that from what we send to IBM, [speaker002:] just [disfmarker] [speaker003:] or should we [vocalsound] i give it to them and let them do with it what they can? [speaker004:] I think we should just [disfmarker] it [disfmarker] it's gonna be too much work if we hafta [vocalsound] worry about that I think. [speaker003:] OK, that'd be nice to have a [disfmarker] a uniform procedure. [speaker004:] Yeah, I think if we just [disfmarker] m send it all to them. you know. [speaker003:] Good. [speaker001:] Worry about it when we get back. [speaker004:] Let [disfmarker] [speaker003:] And see how well they do. [speaker004:] Yeah, worry about it when we get back in. [speaker006:] Yeah. [speaker003:] And give them freedom to [disfmarker] [vocalsound] to indicate if it's just not workable. [speaker004:] Yeah. [speaker003:] Yeah, [speaker006:] Yeah. 
[speaker003:] OK, [speaker006:] Cuz, I wouldn't [disfmarker] don't think we would mind [pause] having that [pause] transcribed, if they did it. [speaker003:] excellent. [speaker001:] I think [disfmarker] [speaker003:] Yeah, [speaker001:] As I say, we'll just have to listen to it and see how horrible it is. [speaker004:] Yeah, e [speaker003:] yeah. [speaker002:] Yeah. [speaker003:] OK. [speaker001:] Sample it, rather. [speaker003:] Alright. [speaker004:] Yeah. [speaker002:] I think that [disfmarker] that will be a little bit of a problem [speaker003:] That's great. [speaker002:] as it really switches around between [vocalsound] two different channels, I think. [speaker001:] Mm hmm, [speaker002:] What [disfmarker] what I would [disfmarker] [speaker001:] and [disfmarker] and they're very [disfmarker] it's very audible? on the close talking channels? [speaker002:] Yeah. [speaker001:] Oh well. [speaker006:] Yeah. [speaker001:] I mean, it's the same problem as the lapel mike. [speaker002:] Yeah. [speaker001:] But [disfmarker] [speaker003:] Oh, interesting. [speaker002:] Comparable, yeah. [speaker006:] Yeah. [speaker003:] OK, alright. [speaker002:] OK. [speaker006:] Let's do digits. [speaker003:] Digits. OK, so we read the transcript number first, right? [speaker001:] Are we gonna do it altogether or separately? [speaker002:] So [disfmarker] What time is it? [speaker006:] Uh, [vocalsound] why don't we do it together, [speaker003:] Uh, quarter to four. [speaker002:] Oh, OK. [speaker006:] that's [disfmarker] that's a nice fast way to do it. [speaker003:] Mm hmm. [speaker006:] One, two, three, go! [speaker003:] It's kind of interesting if there're any more errors in these, [vocalsound] than we had the first set. [speaker001:] Nnn, yeah, I think there probably will be. [speaker002:] Yeah. [speaker004:] Do you guys plug your ears when you do it? [speaker001:] I do. [speaker002:] No. [speaker003:] I usually do. [speaker004:] I do. [speaker003:] I didn't this time. 
[speaker002:] I don't. [speaker004:] You don't? [speaker006:] I haven't been, [speaker002:] No. [speaker004:] How can you do that? [speaker006:] no. [speaker004:] I [disfmarker] I [disfmarker] [speaker006:] Uh, concentration. [speaker002:] Perhaps there are [vocalsound] lots of errors in it [speaker004:] Gah! You hate to have your ears plugged? [speaker001:] Total concentration. Are you guys ready? [speaker006:] Yeah. [speaker004:] Really? [speaker004:] If you're popular. [speaker005:] One two three four five six [comment] [disfmarker] I'm on seven. [speaker001:] We're on. [speaker003:] It's probably the P Z [speaker006:] Yeah. [speaker001:] So, I think we pre crashed, so I think we're OK. [speaker006:] Pre crashed. [speaker001:] So it should be a really short meeting, I hope. Uh, agenda items, number one, I wanna talk [disfmarker] since we were just discussing that [disfmarker] is microphone issues. What the heck are we gonna do about microphones? So uh I got passed on that the EDU group doesn't like the uh, Crown mikes. [speaker006:] Oh. [speaker002:] Who does? [speaker001:] I do. [speaker006:] I do. [speaker001:] I l I think [disfmarker] I find them very comfortable. [speaker006:] Yeah, I do too. [speaker001:] Uh, but it seems to depend on your head shape. [speaker005:] So you guys need to start going to the EDU meetings. [speaker002:] I see. They don't work for me very well. [speaker001:] That's right. [speaker004:] Yeah right. [speaker002:] I much prefer these. [speaker001:] Yep. [speaker003:] Yeah. [speaker001:] Um, [speaker004:] Me too. [speaker003:] So do I. [speaker005:] I prefer these, but I don't mind using those. [speaker001:] Right. [speaker004:] Th those are intimidating. [speaker001:] So apparently they [disfmarker] they like It has one good effect, that people are trying to get there early because the people who get there early get to pick the mike. [speaker006:] Interesting. [speaker002:] OK. [speaker001:] People who show up late have to use these. 
So, um, we should probably get different mikes. So the question is, the easiest thing to do is certainly to just get two more, um, Sony mikes. Just two more of those, um. And that [disfmarker] that's easy and that will certainly work. The other option is to try yet another mike. Find one we like and potentially get six, all the same. [speaker006:] I have a question about this. Are the auditory quality [disfmarker] Is it, uh, much different between this kind and the [disfmarker] the fancy ones? [speaker001:] Um, these are better, if they're worn correctly. [speaker006:] They are. [speaker005:] Those are better than the Sonys? [speaker006:] OK. [speaker003:] Have you [disfmarker] Have you listened to [disfmarker] to [pause] them? [speaker001:] Yeah. Yeah, definitely. Yeah. [speaker003:] Yeah? [speaker001:] Yeah. [speaker003:] OK. [speaker001:] So, I mean, they're not a lot better, but they are a little better. [speaker004:] Are they more directional, the microphones, as far as [disfmarker] [speaker001:] Um, they're more directional, a little better error [disfmarker] uh, noise cancellation, and also you can really get it right in front of your mouth, like this, [speaker004:] OK. [speaker001:] whereas that one, to avoid breath noise, you really have to put it at [disfmarker] to the side. So you seem to get better signal with this one. [speaker006:] The other thing is, is it just a few people who don't like them in the EDU group? Cuz [disfmarker] [speaker001:] I don't know, but [disfmarker] you know, in [disfmarker] in sort of random polling [speaker002:] L Liz [disfmarker] Liz said something that leaves me believing that nobody likes them. They are ver [speaker006:] Gosh, cuz I much prefer them, I think they're a whole lot, multiple levels, better. It seems a shame t to discard [disfmarker] discard them if [disfmarker] if they're better auditory quality and there're only a few people who dis who object. [speaker001:] I mean, so s [speaker002:] Right. 
[speaker005:] Yeah, that's why I was saying if we could just unplug them and plug them in. [speaker001:] Well, I mean, that's the other option, is that we could switch the form so it's more obvious the distinction between channel and mike, [speaker002:] You know, pretty [disfmarker] [speaker001:] um, and then get duplicates. [speaker002:] Hmm. [speaker001:] I mean, there's no problem with that. It's just [disfmarker] what [disfmarker] Should we get just more Sony ones? I hate the Sony ones. [speaker004:] Which ones are those? [speaker001:] b the one you're wearing. [speaker004:] The one I'm wearing? [speaker003:] Those. [speaker001:] Because it [disfmarker] it pinches [disfmarker] pinches the temples too much. [speaker004:] Oh. Oh. [speaker001:] I mean, so, if you wear it sort of around the back, it's not too bad. [speaker006:] And [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker006:] I hate it [speaker001:] But I [disfmarker] [speaker006:] because it's hard to adjust the microphone. [speaker001:] Right. [speaker006:] I mean, I spend all this time fumbling around with it and still not reasonable, yeah. [speaker001:] Right. So, I mean, we could try another mike. But then we have the wiring issue, and [disfmarker] [comment] [pause] So I [disfmarker] I don't know what to do. What do [disfmarker] what do people think? [speaker003:] But the problem is again the [disfmarker] the plug, or [disfmarker]? Wh [speaker001:] The plug is proprietary. So that's why I was saying getting more Sony ones is trivial, [speaker003:] OK. OK. [speaker001:] because we can just go out and buy them. [speaker003:] OK. Yeah. [speaker001:] Any other ones we have to buy them in pigtail versions and get them wired. [speaker006:] OK, n now may maybe I just don't know this but, um, are the only two possibilities from Sony the two that we've tried? Or is there another [disfmarker] [speaker001:] Yes. So, the only possibilities from Sony are that one and the lapel mike. 
[speaker006:] I see. OK. OK. Oh, I see. [speaker001:] So this isn't Sony. This is Crown. The one we're wearing. [speaker006:] Isn't that something. OK. [speaker001:] And so we had these wired for us. [speaker006:] Oh. I see. OK. [speaker005:] Well, it s It seems like right now if [disfmarker] if u what they're complaining are those, [speaker001:] And so [disfmarker] [speaker006:] Huh, I remember that now. [speaker005:] if we just got two more of these [disfmarker] [speaker001:] I think that's probably the right first step is just get two more immediately, and have them available. [speaker005:] Yeah. [speaker006:] OK. [speaker005:] And then they can just unplug those from the [disfmarker] transmitter. [speaker001:] Right. And just make sure that you write down "Crown" or "Sony" on the mike number, which I'll change to mike type, or something like that. [speaker005:] Mm hmm. Mm hmm. [speaker003:] Yeah. [speaker006:] It might be good to double check at the end of the meeting too, cuz that would be an easy place for, uh, an error in the data. [speaker001:] Yep. [speaker006:] For it to be forgotten. [speaker001:] Well, it'll be me [disfmarker] It'll be whoever's setting up the meeting, who fills out the key file, so [speaker006:] OK, it's just [disfmarker] I'm just thinking that if [disfmarker] [speaker001:] It has the same potential for error as everything else. [speaker006:] it'd be [disfmarker] yeah, but I know [disfmarker] but I mean to have the user fill it out wouldn't be as reliable as have the [disfmarker] [speaker001:] No, no, we definitely would not have the user fill it out. [speaker006:] Good, OK. [speaker001:] It would be me, Chuck, or Liz, depending on which meeting it is. [speaker006:] Perfect. Perfect, OK. [speaker001:] OK, so we'll de definitely go ahead and do that. [speaker005:] How much is it just to buy the mike? [speaker001:] Couple hundred. [speaker005:] Really. [speaker001:] Yeah. [speaker006:] Is that more or less than you thought? 
[speaker001:] It depends on how good the mike is. [speaker005:] Oh that's way more than I thought. I [disfmarker] [speaker003:] Yeah. [speaker006:] Oh, OK. [speaker004:] For one of these? [speaker005:] yeah. [speaker001:] Well the Crown ones were like two fifty. I think those are like one ninety. [speaker004:] God. [speaker003:] Oops. [speaker005:] Wow! [speaker001:] But. [speaker005:] Is there any educational discount? [speaker001:] Yeah right. No. [speaker003:] Student discount. [speaker001:] Yeah, good mikes are expensive. [speaker005:] And when I was at Computer Motion we used Shure, [speaker001:] So. [speaker005:] the SM ten A's, [speaker001:] Mm hmm. [speaker005:] and I think they were only like eighty bucks or something. [speaker001:] Yeah, eighty or ninety for the Shures. [speaker005:] Yeah. [speaker001:] So. [speaker005:] But they were [disfmarker] they seemed pretty good. [speaker001:] I mean, the Sony ones are expensive because they're proprietary, so they can charge whatever they want. [speaker005:] Oh. [speaker001:] These are expensive because they're quite high quality. [speaker005:] Right, right. [speaker001:] So. So we should buzz [comment] that out if we send the data to Sony. [speaker002:] Ah, come on. [speaker006:] We should keep a list of things we're gonna bleep out and the conditions under which [disfmarker] [speaker001:] Well, I'm just joking. So, you do not have to bleep that out. I don't mind if Sony knows my opinion. So. [speaker002:] Hmm. [speaker001:] Um. Also, we're pro I wanna double check with Morgan, he did say yes before I went to Japan on buying another wireless system so that we can go all wireless, instead of the mix of wired and wireless. And I think that's the right thing to do. [speaker005:] So then all those red channels there would become wireless ones? [speaker001:] And I'm [disfmarker] [speaker003:] Yeah. [speaker001:] Yep. [speaker005:] Cool. [speaker001:] Yep. [speaker004:] Uh huh. [speaker001:] Four more wireless. Um. 
And also I'm gonna re probably replace the Andrea mike with a Shure, but I'll test it, uh, sometime today or tomorrow to make sure the Shure one really works, cuz I have an extra Shure in my office. [speaker005:] The Andrea mike? [speaker003:] Yeah. [speaker001:] Yeah, apparently it's had some problems. [speaker003:] That's causing problems, yeah. [speaker005:] Which one is the Andrea mike? [speaker006:] It was over here sometimes. [speaker005:] A wired one? [speaker003:] The [disfmarker] yeah a wired one. [speaker001:] Yeah. It's uh [disfmarker] [speaker005:] Well, if you're gonna go to all wireless, Oh, you mean in the meantime. [speaker001:] This one. In the meantime, right, [speaker005:] Ah, I see. [speaker001:] because uh [disfmarker] it'll [disfmarker] it'll probably take a couple weeks to get it delivered from Sony anyway. [speaker003:] Yeah. [speaker005:] Yeah. I haven't [disfmarker] In the meetings that I recorded s It's always been at most six people, [speaker001:] So [disfmarker] [speaker005:] so I've never had to [disfmarker] [speaker001:] Oh really? Yeah. [speaker005:] Recently I haven't had to used any of the wired ones at all. [speaker001:] I guess cuz everyone's been out of town. Probably over the summer it'll be the same [speaker005:] Yeah. [speaker001:] cuz it tends to be less, fewer people. Um, File [disfmarker] Uh, done with microphone issues, I think? [speaker006:] Should we close the door? [speaker001:] If you want. [speaker006:] Oh, I'm thinking [disfmarker] I don't know about the acoustics. That's [disfmarker] that's all I was wondering about. [speaker001:] And this way we can get a door slam in the uh [disfmarker] in the transcript file. [speaker003:] Yeah. [speaker006:] Yeah that's right, we gotta get the obligatory door slam. [speaker001:] Oh well. No not quite a slam. [speaker005:] There's some knocks. [speaker002:] Get a special phone for that. [speaker005:] Mm hmm. [speaker001:] Uh, the door slam phone? 
[speaker006:] I guess, right, the door slam phone. [speaker002:] Yeah. [speaker005:] You have a special phone? [speaker002:] No, we could add one. [speaker005:] Oh, we could add one, yeah. [speaker001:] And then we could have the phone phone, [speaker002:] Yeah. [speaker001:] for [disfmarker] for when the phone rings. [speaker006:] That's an idea. [speaker001:] Um, Uh, file reorganization. This is something we were talking about before I left and saying we should probably wait until after I'm back, and now I'm back, so, we should do that at some point. So we should get ourselves a list of everything we wanna do to reorganize the file structure and anything else. [speaker002:] Can I [disfmarker] can I just mention something? [speaker001:] Sure. [speaker002:] Um, uh, I think the file regards reorganization. Also, um, another issue there is disk space probably, right? [speaker001:] Mm hmm. [speaker002:] Um, so I know that the files that you've been cutting up for us f for the recognition experiments, [speaker004:] Mm hmm. [speaker002:] uh, one way [disfmarker] one really brain [disfmarker] uh, brain dead way of [disfmarker] of [disfmarker] of not causing any trouble, but saving disk space is to, uh, use the s the Sphere, the NIST, uh, W encode program. to [disfmarker] to encode, you know, to compress them. [speaker001:] Shorten. Is that the same as shorten? [speaker002:] Uh, yeah, but it does it s it happens so that the program that reads the waveforms does the unshortening transparently. [speaker004:] Yeah. [speaker001:] Well, OK, you mean it's built into the SRI, because we have the same thing with shorten in the sound tools. So. [speaker002:] Y uh, I guess, but it um, [speaker001:] So it's just a question of [disfmarker] of what decompression is built into your tools. [speaker002:] Well, it's [disfmarker] It's actually built into the Sphere library that NIST delivers, [speaker005:] Well like [disfmarker] [speaker001:] Oh really? 
[speaker002:] so [disfmarker] [speaker001:] I didn't know that. [speaker002:] Right. And actually, s the sound tools don't understand that. For the [disfmarker] [speaker005:] At least Feacalc doesn't. [speaker002:] At least Feacalc doesn't. So. [speaker001:] Well, that's not a sound tool, right. [speaker002:] But since [disfmarker] since these files are made to be used with the SRI recognizer, uh and the SRI front end uses the Sphere library which in turn does this transparently um, uh, that will be a quick and [disfmarker] quick and easy way to just, uh, get you know, uh, be able to use more [disfmarker] [speaker004:] Yeah. [speaker005:] Mm hmm. [speaker004:] The other thing I could do to relieve some of the pressure um is just move everything to my eighteen gig disk, which is local. [speaker002:] But that's gonna be only temporary. [speaker004:] Well, I mean, I don't know [disfmarker] [speaker002:] I mean, you should do that too, probably, but [disfmarker] but as you do that you can also just run the [disfmarker] [speaker004:] Yeah, it is kind of a temporary solution. To shorten everything. [speaker002:] well, actually, the [disfmarker] th what you do is you run [disfmarker] Oh, now I have another use for the [disfmarker] The way I recently used it, and there might be better ways [disfmarker] So the program's called W encode. [speaker004:] Mmm. [speaker002:] And I think the type, y you say um I think dash T and then there are different [disfmarker] different encoding methods, but if you wanna use the shorten one, you say d "minus T shorten", and then the old uh wavefile and the new wavefile, and then [disfmarker] Oops! And then I check, you know, if this works, so, you can use the [disfmarker] the shell "AND" operator or something. Then I just move the new wavefile to the, you know, to the old wavefile, [speaker001:] Right. 
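The compress-and-replace-on-success pattern speaker002 describes (the shell pipeline `w_encode -t shorten old new && mv new old`) could be wrapped in Python like this. A couple of caveats: the `-t shorten` flag spelling is quoted from the meeting and may not match the actual NIST SPHERE tool's options, and the function name and the injectable `encode_cmd` parameter are illustrative assumptions, not part of any real tool.

```python
import os
import subprocess
import tempfile

def compress_in_place(wavfile, encode_cmd=("w_encode", "-t", "shorten")):
    """Compress a SPHERE wavefile in place, replacing the original only
    if the encoder exits successfully, mirroring the shell "AND"
    operator in `w_encode -t shorten old new && mv new old`.

    encode_cmd is the encoder invocation minus its input/output
    arguments; the default is the command as described in the meeting.
    """
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(wavfile) or ".")
    os.close(fd)
    try:
        subprocess.run([*encode_cmd, wavfile, tmp], check=True)
        os.replace(tmp, wavfile)  # atomic on POSIX, so no torn files
    except Exception:
        # On any failure, leave the original untouched and clean up.
        if os.path.exists(tmp):
            os.remove(tmp)
        raise
```

Because the replacement only happens after the encoder succeeds, this is safe to run while another job is still reading from the same disk, which is exactly the situation described below.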
[speaker002:] and then you have replaced the old one with one that behaves identically as long as your programs that use it know how to decode it on the fly. [speaker004:] Mm hmm. [speaker002:] And that [disfmarker] that just saved my butt [speaker004:] OK. [speaker002:] because I actually was running [disfmarker] On a different experiment, I had segmented [disfmarker] I was processing the whole Switchboard two corpus, which is two hundred eighty hours of speech, and I was noticing, as I was almost finishing the processing, that I was running out of disk space. And [disfmarker] and so [speaker001:] Mm hmm. [speaker002:] I uh had this flash of inspiration of just uh the same [disfmarker] the same disk had the segmented waveforms on them, [speaker001:] Shortening everything on the fly. [speaker002:] so I [disfmarker] while this other thing was still going on, I was run I was running this [disfmarker] this thing. [speaker001:] You had another process running that was shortening it. [speaker006:] Wow, wow! [speaker001:] Yep. [speaker002:] And lo and behold I gained three g three gig of space and um, you know [disfmarker] [speaker005:] Wow! [speaker001:] Did you have to renice one of the processes to make sure that "shorten"? [speaker002:] No, no. [comment] Actually it was fast enough. This is very fast. This [disfmarker] this really runs quickly. [speaker006:] Wow. [speaker002:] And that's [disfmarker] [speaker006:] That must have been suspenseful. [speaker002:] That was very suspenseful. [speaker001:] Watching the disk meter. [speaker002:] That was [disfmarker] that was the most excitement I had all weekend. Uh, uh boy, it uh [pause] came out just fine. [speaker004:] To be OK. [speaker002:] So. [speaker005:] You know what would be a [disfmarker] u I don't know if this would mess other things up, but [disfmarker] It seems like kind of a pain to have all these split up files around. What would be easier would be like pointers. [speaker001:] The list, yep.
[speaker005:] You know, lists like "original wavefile, start, end". [speaker001:] Start end. Yep, the way Feacalc [disfmarker] calc does it. [speaker005:] This is [disfmarker] [speaker002:] Right, the [disfmarker] the only reason we do this is because the [disfmarker] the SRI front end doesn't have a way to [disfmarker] to um [vocalsound] go into a l a longer file with indices. Um, so I [disfmarker] I suppose someone could try to put a hack like that into the [disfmarker] [speaker004:] And segment on the fly. [speaker001:] It would be easy. It wouldn't be hard at all. Someone just needs to d sit down and do it who has some time. [speaker002:] So. But there's also some [disfmarker] I guess [disfmarker] [speaker001:] And that way we wouldn't have multiple versions floating around. About the only difficulty with that is if it's compressed. Then you really do have to decompress it first. [speaker002:] That's true. [speaker001:] Right? Because the pointers are [disfmarker] Y you don't know how much it's comp It doesn't compress it by a fixed amount. [speaker002:] Exactly. Right right right right right. [speaker005:] Well, is there [disfmarker] is there [disfmarker] is there a NIST routine which can seek in a compressed file but with uncompressed indices? [speaker002:] I don't think so. [speaker001:] And [pause] yeah I mean, that [disfmarker] it [disfmarker] [speaker002:] No, no. I mean, if you [disfmarker] Th the [disfmarker] the [disfmarker] the [disfmarker] If you can operate on the full [disfmarker] If you don't have to segment it, then there would be less of a reason to do the compression, because you don't have that wasted [disfmarker] that extra copy. [speaker005:] Right. Right we [disfmarker] I mean the original Switchboard files are not compressed, [speaker002:] So. [speaker003:] Yeah. [speaker005:] right? [speaker002:] Right. [speaker005:] So we could leave those as they are. [speaker001:] Well, I mean, it just depends on how much disk space is a problem. 
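The pointer-list idea being discussed ("original wavefile, start, end" instead of keeping cut-up copies) could look something like this minimal sketch. It uses the stdlib `wave` module on uncompressed WAV data rather than NIST Sphere headers, and, as the discussion notes, it only works because frame positions are seekable in uncompressed audio; a shortened file would have to be decoded first:

```python
import wave

def read_segment(wavfile, start_s, end_s):
    """Pull one segment out of a long, uncompressed waveform using
    start/end times (seconds), instead of keeping a split-up copy on
    disk.  Sketch only: the real data would be Sphere-headered, and
    this uses plain WAV via the stdlib."""
    with wave.open(wavfile, "rb") as w:
        rate = w.getframerate()
        w.setpos(int(start_s * rate))          # seek by frame index
        nframes = int((end_s - start_s) * rate)
        return w.readframes(nframes)           # raw sample bytes
```

A segment list would then just be lines of `wavfile start end`, with each recognition job calling `read_segment` on the fly.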
[speaker002:] Right. [speaker001:] I mean the [disfmarker] what you could do is decompress it to a temporary place and then operate on it and then delete it. But. [speaker002:] I mean, the segmentation also saves you space in the sense that you cut out all the nonspeech regions. [speaker003:] Just silences, yeah. [speaker002:] And if you have, you know, twenty channels and only five speakers, then it's [disfmarker] [speaker001:] That's true. [speaker004:] Mm hmm. [speaker005:] Well, assuming you'd [disfmarker] Yeah. Assuming that you then off load the original Switchboard files. [speaker002:] Yeah. So. [speaker004:] Mmm. [speaker001:] Yeah. Well, it seems like just shortening them is a good short term solution [speaker005:] Yeah, yeah. [speaker001:] so we don't have to do any coding. [speaker002:] Yeah. [speaker004:] But I think [disfmarker] kind of [disfmarker] We've had [disfmarker] There was a big disk crash when you were gone [speaker001:] So. [speaker004:] and [disfmarker] [speaker001:] No, it was [disfmarker] I was still here. It was the day I left. [speaker004:] Oh was that the day you left? That's suspicious. [speaker005:] He did it on purpose. [speaker004:] Yeah. Leave to Japan the day the disk crashes. [speaker001:] No if I [disfmarker] if I had done it on purpose I would have timed it right after I left. [speaker004:] But, um [disfmarker] Yeah so, Chuck helped me out in, uh, r regenerating all the cha the different channel files for like a few meetings [disfmarker] [speaker001:] Mm hmm. [speaker004:] for like six meetings. So I think they're split up even further. It's kind of even more disorganized now since we moved some of the meetings to different directories. They're on a different disk even, right? I think [disfmarker] you [disfmarker] didn't you expand them to XE on Abbott? [speaker005:] Are they? [speaker006:] XF. [speaker004:] No, you [disfmarker] the ones that You [disfmarker] the ones that you put them on when you put them on XE. 
[speaker006:] Oh, different ones? [speaker005:] I don't remember where I put them now. [speaker004:] I think you put them on XE. [speaker005:] OK. [speaker004:] So. [speaker006:] Well what we [disfmarker] what we found out was that um the disk that crashed was [disfmarker] it [disfmarker] w with a [disfmarker] with a meta disk allocation, you had both c t transcripts and the shortened files and the expanded files were all on XE [disfmarker] were on the same [disfmarker] sorry, different partitions of the same physical disk. [speaker005:] Physical disk. [speaker004:] Mmm. [speaker006:] And it's conceivable I mean, I [disfmarker] I don't [disfmarker] I mean, so um, I was told that it's possible that that might have, uh, caused additional wear on it. Maybe caused it to [disfmarker] to go bad sooner. [speaker004:] Hmm. [speaker001:] Well, I think there was something else going on, because uh Dave Johnson said that DD was getting accessed frequently. [speaker006:] Oh. Uh huh. [speaker001:] And it shouldn't be. Right? That data never gets touched, [speaker006:] Well, why not? [speaker001:] because we write it once and then we never touch it again. [speaker006:] Oh sure it does. [speaker005:] Yeah, after each meeting we copy the data to [disfmarker] [speaker006:] Except that [disfmarker] Well, I mean, so the wavefiles, or [disfmarker] or anything at all, because the transcripts are there as well. [speaker001:] The wave files. I mean, he was saying gigabytes. [speaker006:] Oh, gigabytes. I see. [speaker001:] And so, it has to be the wave files. [speaker005:] Yeah, there was something weird about that. [speaker006:] Yeah, that's right. [speaker001:] Yeah, so I asked Dave about it and he hasn't looked into it yet, but we should definitely double check on it. 
[speaker005:] Yeah like [disfmarker] like the same meeting [disfmarker] shortened, files that we pull off of Popcorn when we're done doing the recording, looked, to the backup software, as if they had been written, you know, every single night, i for [disfmarker] f you know, a week in a row, [speaker006:] Yeah. [speaker001:] Every night. [speaker006:] Is that right. That's very strange. [speaker005:] which is really weird. [speaker004:] Wow. [speaker006:] I didn't realize that. OK. [speaker001:] So at any rate, so for file reorganization we need to first decide what we're gonna do and then when we're gonna do it. [speaker005:] Yeah. [speaker001:] So I'm not sure who isn't involved with that. I mean, certainly me, Chuck and Jane. Anyone else [pause] care? [speaker002:] No. [speaker006:] And I [disfmarker] and I'd like, in terms of the conventions, to also, uh, you know, s send a bit to Dan Ellis to see if it's [disfmarker] if there's any [disfmarker] get his input on it. I don't think [disfmarker] I don't think that'll be [disfmarker] [speaker001:] Right. [speaker006:] Yeah. [speaker001:] So, maybe we should do that [speaker006:] Yeah. [speaker001:] not during this meeting but s another time, [speaker005:] Yeah. [speaker001:] and just get a list of everything we're gonna do. [speaker006:] Yeah. [speaker005:] Yeah. Maybe n next week if we could. [speaker001:] OK. [speaker005:] I'm trying to finish up some stuff. [speaker006:] Yeah sure. I'd like to [disfmarker] [speaker001:] So just update all the naming conventions and put all the files where they really belong, on one disk, [speaker005:] That sounds like a good idea. [speaker001:] and then leave [disfmarker] leave everything in place until the back up [disfmarker] until the next full back up and then delete the old ones. [speaker005:] Mm hmm. [speaker006:] OK, now [disfmarker] and you're just [disfmarker] you're not talking about the s the X disks [disfmarker] the X uh partitions, [speaker001:] So. 
[speaker006:] just the backed up space, or [disfmarker]? [speaker001:] Well, both need to be reorganized. [speaker006:] OK. [speaker001:] So um, various paths, th I mean this is why we have to do it in a synchronized way, because um [speaker005:] I think we should also at the same time try to, uh, convert over to your new naming conventions. [speaker001:] Yes, exactly. That too. [speaker005:] That'd be good. [speaker001:] So that's what I was saying. We need to get a list of all that stuff that we wanna do. [speaker005:] Yeah. [speaker006:] OK. [speaker001:] And so I didn't really get any responses from the naming conventions that I sent out, so I assume that's alright with everyone. [speaker005:] I actually haven't looked at it yet. I haven't had a chance, [speaker006:] I haven't either. [speaker001:] Oh. [speaker004:] No. [speaker003:] Hmm. [speaker006:] Sorry. [speaker005:] so. [speaker006:] I will by next week, though. [speaker001:] And [disfmarker] [speaker002:] Uh, I don't know about the naming, [speaker001:] OK. Then I should have made that as an agenda item. [speaker002:] I mean, Th so these names that we've been using so far are with uh uh uh I wouldn't just wanna change them you know, without some advance notice. [speaker001:] Right. [speaker002:] I mean, th that's all these segment names that we we've been using. [speaker004:] Yeah. [speaker002:] I would rather not mess with them until we have some closure on some of the things we are currently dealing with, [speaker004:] Yeah, I'm [disfmarker] Right. [speaker001:] Well, I mean, how you choose to do it [disfmarker] the naming is up to you. [speaker002:] so [disfmarker] [speaker004:] Well, [speaker002:] Right. [speaker001:] So. [speaker006:] You're talking about different files. [speaker004:] I mean, it should probably be [disfmarker] eventually should probably be consistent with what you're doing but, [speaker001:] Yep. 
[speaker004:] yeah, I kind of agree with Andreas, like I'm a little bit [disfmarker] [speaker005:] I mean, these [disfmarker] [speaker004:] I looked at the naming conventions and they look fine to me, but at the same time it was just like you know to rename everything would be [disfmarker] [speaker005:] Well, if we [disfmarker] If we change things, it won't really affect what you're doing, [speaker004:] No but I think just to be consistent we should also, I mean, have the same conventions, just in case you want [disfmarker] [speaker006:] This [disfmarker] [speaker005:] will it? [speaker001:] Right. [speaker005:] Yeah. So you [disfmarker] but you can switch that any time you want, [speaker004:] Yeah. Yeah. I mean, it's only gonna affect my work. [speaker005:] right? [speaker004:] So. Yeah, it's not gonna [disfmarker] [speaker005:] I mean, if we [disfmarker] [speaker004:] I assume you're not gonna go, like, into, you know, my directories and change my file names. [speaker002:] Right. [speaker004:] So. [speaker001:] I [disfmarker] Actually I was gonna do a global search replace on all entries at [disfmarker] [vocalsound] at ICSI [speaker005:] Everybody will use it. [speaker004:] Yeah. [speaker005:] Fine slash U slash star. [speaker001:] to change MR to MRM at all places at ICSI. [speaker004:] Yeah, that'd be great. Really enjoy that. [speaker006:] With no [disfmarker] with no advance warning. [speaker001:] Y you [disfmarker] Well, maybe I shouldn't say that on re record. [speaker006:] OK. [speaker001:] There was a typo in some of the contracts that Morgan got that someone, one of our sponsors, did a global sear search and replace for [disfmarker] between "sponsor" and their name. [speaker002:] Mm hmm. [speaker006:] Oh no, oh no. [speaker001:] And so, it [disfmarker] it was saying, uh [disfmarker] Well, anyway, I won't [disfmarker] [speaker006:] Yeah, [speaker001:] I'm not sure whether that's right. 
[speaker006:] one [disfmarker] one can imagine that that might be problematic. [speaker001:] Yeah, one can imagine the problems that that would [pause] engender. [speaker006:] But this [disfmarker] this name change affects a subset, [speaker001:] So. [speaker006:] doesn't need to reflect everything, yeah. [speaker004:] Right. [speaker001:] Right. Um, Thilo, you had [disfmarker] you wanted to talk about the [disfmarker] [speaker003:] Yeah, I had one [disfmarker] one short point. I have just installed a Transcriber version on one of our NT machines so it's available under Windows now. [speaker002:] Oh great. Actually someone [disfmarker] I just got an email this week from someone as [speaker006:] Isn't that great? [speaker003:] Yeah I re responded to sh I have already responded to him. I [disfmarker] I don't know what [disfmarker] what [disfmarker] what he [disfmarker] what the problem was. [speaker002:] To Anant? [speaker003:] It was really straightforward, really easy. [speaker006:] And this is not [disfmarker] [speaker002:] Oh, I'm sorry, I [disfmarker] I just [disfmarker] [speaker006:] Mm hmm. [speaker002:] So, who did you talk to? [speaker003:] There was some [disfmarker] some guy from SRI who wanted to [disfmarker] to install [disfmarker] [speaker002:] Anant? [speaker003:] Yeah. [speaker002:] Anant Venkataraman? [speaker003:] Yeah. [speaker002:] OK. [speaker003:] And he sent an email that he couldn't [disfmarker] couldn't install it [speaker002:] Yeah, OK. Great. [speaker003:] and I [disfmarker] I just described him, well, what I did [speaker002:] Yeah, OK. Great. Great. Thanks. Thanks. [speaker003:] and it was really straightforward, so. [speaker006:] And this is not just the Transcriber, this is the Channeltrans, right? [speaker003:] Yeah, it's the Channeltrans, [speaker002:] Yeah. [speaker003:] so the [disfmarker] the things that Dave Gelbart [disfmarker] [speaker006:] Yeah, excellent. [speaker002:] Cool. [speaker006:] Excellent. 
[speaker001:] So you should probably talk to a Sys Admin and get it put in some central place. So that it'll work on all the NT machines. [speaker006:] Well, I mean, as it stands, I [disfmarker] I guess [disfmarker] [speaker003:] OK. [speaker006:] yeah I see what you mean. It'll [disfmarker] it'll be on the [disfmarker] on the UNIX side [speaker003:] I've [disfmarker] [speaker006:] but accessible through the H drive. [speaker001:] Right. [speaker006:] Yeah. [speaker003:] OK. Yeah. I could do that. [speaker001:] So I assume Tcl TK wasn't already on the machine, so you had to install it. Yeah. [speaker003:] I had i to install it, yeah. [speaker006:] Huh. Yeah. [speaker002:] Uh [speaker005:] So Andreas, would it be appropriate to ask how the experiments are going? Uh [disfmarker] [speaker002:] Oh, well, yeah I [disfmarker] I [disfmarker] I actually wasn't sure whether this is the right meeting for it, because it has uh very little to do with [disfmarker] with meeting recordings, [speaker001:] Hmm. [speaker002:] but [@ @] [comment] you know, I did uh run [vocalsound] um some recognition experiments with ICSI front end. Um Uh, and [disfmarker] and you know, this is the j joint work with Chuck, and uh, um. So, first, uh, you know, we had [disfmarker] we figured out sometime last week how to um [disfmarker] and [disfmarker] and Chuck wrote this really nice little script [disfmarker] Perl script that takes a uh waveform, runs the feature calculation and then dumps it out into the [disfmarker] into um a f c so called uh cepstra file, which is what the SRI system uses to read features. It's essentially uh uh NIST headered uh waveform. You know, it looks like a waveform except instead of samples you have feature vectors following the header. [speaker004:] Hmm. Mmm, OK. [speaker001:] Mm hmm. [speaker002:] And um that's all done um by the script, and it works great. 
And uh I first trained up two systems, because it's gender dep you know, the SRI system is gender dependent so to be comparable, I trained uh um on a sh on a so called short training set um a male system and a female system, and uh also for debugging purposes, and for the heck of it, I trained um [disfmarker] trained uh on the same training set uh a standard system with the SRI front end from scratch, um, and compared the two [disfmarker] [speaker001:] So, w what features did you use? [speaker002:] Well, we used, uh, twelve PLP uh, uh [disfmarker] [speaker001:] So not RASTA, just PLP. [speaker005:] Just PLP. [speaker002:] Just PLP and actually that [disfmarker] uh, one of the questions I had was what the RASTA would possibly buy us. But um, we'll talk about that later. So, the uh [disfmarker] so the baseline system [disfmarker] w the SRI system was [disfmarker] uh used [disfmarker] uh also uh uh used t twelve uh mel [disfmarker] uh mel cepstra um [vocalsound] based on a twenty four filter bank um analysis. Um I do not know what [disfmarker] So the f the bandwidth of the um SRI front end is from a hundred hertz to th th thirty se [speaker005:] Thirty seven fifty. [speaker002:] thirty seven fifty or something like that. And I do not know what the um ICSI um front end would do. I mean, what the bandwidth is. Um, but the results are such that uh, let's see [disfmarker] [speaker005:] There's one other slight difference, right? Or two [disfmarker] two differences. [speaker002:] Oh yeah. So the SRI system also does um vocal tract length normalization and we couldn't figure out how to do that yet with the ICSI features. So that's one difference. And the other difference is that in the, uh [disfmarker] [vocalsound] in the SRI system, the uh th the first [disfmarker] the C zero, the energy uh feature is normalized slightly differently from the rest. And what they do is they d they subtract the maximum [disfmarker] [speaker001:] Huh.
[speaker002:] For each waveform segment they subtract the maximum of [disfmarker] of th over that waveform segment from from the values of [disfmarker] for that waveform. Which is a kind of automatic gain control, that is localized [disfmarker] [speaker005:] Do they subtract the max from each i one or do they subtract each one from the max? [speaker001:] Who cares? [speaker002:] They subtract [disfmarker] [speaker005:] Doesn't matter? [speaker001:] No, just would be a sign change. [speaker005:] Except you get a lot of negatives the other way. [speaker002:] Right, [vocalsound] right, right. Um, and then, after [disfmarker] But after they done this waveform based normalization, they then do a conversation length normalization just like all the other features. So it's their kind of two stage normalization. [speaker005:] Oh! [comment] Oh! [speaker002:] Um now, I understand that the common practice here has been to just do c standard uh mean subtraction, um on the waveform. Um. [speaker005:] For the C zero. [speaker002:] Right. But in what we've done so far, because we didn't have any special provision for C zero, we just treat it as [disfmarker] as any of the other features, we've done standard mean subtraction over the whole conversation side. So um since both SRI and ICSI use this sort of local normalization for C zero that's presumably, you know, someone has done some experiments to [disfmarker] and found out that that works better. Um, so that's another difference, and that might account for some of the discrepancies in the results. Um, but you know. So the the results are um Where should I start? Uh the [disfmarker] So there's a two [disfmarker] Oh. I tried it with and without. Uh. So without and with adaptation. [speaker001:] How many iterations? [speaker002:] For the adaptation? [speaker001:] Mm hmm. [speaker002:] Well, y we always do three EM iterations to [speaker001:] OK. 
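The two-stage C zero normalization being described (per-segment maximum subtraction as a local automatic gain control, followed by conversation-level mean subtraction like any other cepstral feature) could be sketched as follows. This is a reconstruction of the scheme as described in the discussion, not the SRI implementation:

```python
def normalize_c0(segments):
    """Two-stage C0 (energy) normalization as described in the meeting:
    1) per waveform segment, subtract that segment's maximum C0
       (a localized automatic gain control), then
    2) subtract the mean over the whole conversation side, as is done
       for every other feature.
    `segments` is a list of lists of C0 values, one inner list per
    waveform segment.  Sketch only."""
    # Stage 1: local max subtraction per segment.
    gained = [[c - max(seg) for c in seg] for seg in segments]
    # Stage 2: mean subtraction over the entire conversation side.
    flat = [c for seg in gained for c in seg]
    mean = sum(flat) / len(flat)
    return [[c - mean for c in seg] for seg in gained]
```

Treating C zero with only the second stage (plain conversation-side mean subtraction), as was done for the ICSI features in this experiment, is exactly the difference being flagged as a possible source of the discrepancy.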
[speaker002:] and it's [disfmarker] it's this [disfmarker] it's this quick and dirty [disfmarker] the phone loop adaptation which doesn't actually require prior recognition paths and [disfmarker] and so this is not the best you can do with adaptation, but it gives you sort of a first idea of what you could gain with it. And then, you know, so we have the the SRI front end and the ICSI front end and other than that the system configuration was identical. So it was the same [disfmarker] They came up with um you know, same number of uh Gaussians per state cluster Um, same [disfmarker] The clustering used the same information loss threshold, which actually led to roughly the same number of Gaussians overall. So that the system configuration is [disfmarker] is comparable. Um, and the [disfmarker] [vocalsound] Uh, so without adaptation, you had forty nine p [speaker003:] That's error rate or recognition rate? [speaker002:] This is error rate in percent. And with adaptation it's forty seven point one and this [disfmarker] this was fifty two point six. and fifty one point three [speaker001:] Hmm. [speaker002:] and then, when I combined them [disfmarker] I can actually combine them with something like ROVER. It's actually more sophisticated than ROVER but it's [disfmarker] Um, here I got forty eight point five and here I got forty six point five. [speaker001:] So this is just combination at the utterance level. [speaker002:] Um. At the utterance level, right. [speaker001:] Why do you think the ICSI front end is so much worse? [speaker002:] Good question. That's fine. [speaker001:] That seems really odd to me. [speaker002:] Um, so, one percent I would attribute to the lack of VTL, about one percent. [speaker001:] Oh right. Right, right, right. [speaker002:] OK. [speaker003:] OK. Ah OK. 
[speaker002:] And then maybe another up [disfmarker] I don't know how much the C zero normalization business really matters I can't it see, I mean can't see it [disfmarker] the [speaker001:] Ca can you run the SRI [disfmarker] Just as an experiment, run the SRI front end without vocal tract norma normalization, and see how much difference it makes? [speaker002:] I could. Yeah. I could certainly do that. Yeah. Um. We could also do the vocal tract length normalization with the ICSI features. [speaker003:] Yeah. [speaker002:] That's something we wanted to do [disfmarker] [speaker001:] If we could figure out how. [speaker002:] yeah. We could [disfmarker] I was actually thinking we could use the warping factors that we compute for the MFCC's and just try them with the ICSI uh front end. [speaker003:] Yeah. [speaker002:] Because we already have the capability to apply the warping to the um [disfmarker] to the PLP c [speaker005:] Yeah, Dan added that in, [speaker001:] Yeah, but the [disfmarker] [speaker005:] but [disfmarker] [speaker002:] uh Dan added the [disfmarker] So. [speaker001:] They won't [disfmarker] they don't correspond one to one though. [speaker002:] No, but they should be close, since this [disfmarker] I mean the [disfmarker] Anyway. But I can certainly try the SRI front end without uh VTL. That sh that's [disfmarker] that's certainly quick to do. [speaker001:] Yeah. [speaker002:] Um and so [disfmarker] Yeah, and [disfmarker] and then there's all these um [disfmarker] You know, the number of um [disfmarker] You know, this front end u had a fair amount of experimentation going into it. [speaker001:] Mm hmm. [speaker002:] You know, how many filter banks do you use, what [disfmarker] what bandwidth do you use, and stuff like that. And uh we could play the same kind of games with the ICSI front end. [speaker001:] Right. [speaker002:] Uh, actually, the analysis bandwidth played a very crucial role. We used to use a narrow bandwidth and uh uh that hurt us. 
So this is, um [disfmarker] And this is [disfmarker] We've now used roughly what everybody else is using. So [speaker001:] Hmm. [speaker002:] um, there's some room for improvements, I figure, in this [disfmarker] in the ICSI front end. Um. So. But the good news is that even with this [disfmarker] with the ICSI system being that much worse, you still get a win out of combining the two. So that gives some hope for the future. Um. Unfortunately however this seems to be reduced with adaptation, so. Um. Also interestingly the [disfmarker] Um, the difference actually widens. I would actually expect it or [disfmarker] or hope that the adaptation reduces the difference because it might um, for instance, um remove some of the um [disfmarker] You know, som If you [disfmarker] if you have some [disfmarker] some difference in the front end processing that uh is suboptimal, but can be possibly remedied by you know moving the um moving the models around. But [disfmarker] but apparently that doesn't [disfmarker] doesn't really [disfmarker] actually the difference becomes larger, so. Um. Anyway. So right now what I'm doing is um [disfmarker] Uh well, there's several things going on. One is that Chuck is working on uh getting the tandem features um into a form that we can train the tandem [disfmarker] the system on the tandem features. So that would actually be the more interesting experiment. Um, the other thing is I'm training uh retraining the models on the large training set that we usually use to build our evaluation models [speaker001:] Right. [speaker002:] and then we can [disfmarker] And I actually want to do the system combination um with our eval system um, on some subset of the data at least, probably only for the males, because I don't have time to train both males and females, but um. Uh and um [disfmarker] [speaker005:] What about ta concatenating the two feature vectors into a single one? [speaker001:] It gets pretty big. [speaker002:] It does get pretty big. Yeah. 
[speaker005:] Hmm. [speaker001:] And m my experience with that in Broadcast News was usually combining at other levels works better. [speaker005:] Hmm. [speaker001:] So. For [disfmarker] for whatever that's worth. [speaker002:] Oh, you tried that on Broadcast News? [speaker001:] Oh yeah. [speaker002:] Concatenating [speaker001:] Yeah. So w [speaker002:] different feature sets? [speaker001:] Yep. [speaker002:] Yeah. Did you try uh [disfmarker] [speaker001:] It was mostly MSG, PLP, RASTA. [speaker002:] I see. [speaker001:] So, you know, the feature sets we had available. [speaker002:] I see. [speaker001:] And it was almost always better to combine at the probability level. You know, so we'd run the neural nets and combine the probabilities. [speaker002:] OK. Alright. Oh. Yeah, and it does become sort of unwieldy to have these very large feature vectors. [speaker001:] Yep. [speaker002:] And that would blow up the [speaker001:] You'd also have to do some sort of normalization afterwards so that they're uh orthogonal. [speaker002:] uh [disfmarker] Right. [speaker001:] So you'd wanna do a linear transform also. [speaker002:] Right. Um so the [disfmarker] Yeah, and then we could start experimenting a little bit to try to get the ICSI front end to perform better. Um. And [disfmarker] and as a preliminary just sort of diagnostic experiment we can [disfmarker] I can certainly run a SRI system without VTL [speaker001:] Yep. [speaker002:] just uh to get [disfmarker] [speaker006:] Wi without what? Without VTL. [speaker003:] Just [disfmarker] [speaker001:] Vocal tract thing, [speaker002:] Wh Yeah. [speaker006:] OK. [speaker001:] yeah. [speaker002:] And that [disfmarker] that's quick to do. So. [speaker001:] I was thinking about tandem system [disfmarker] Well, let's not talk about it here, but I had some thoughts about the tandem system. [speaker002:] yeah so but things are moving ahead, so [speaker001:] OK, should we do digits? [speaker002:] Digits. Sure. 
[speaker001:] Do we have any other topics? OK, let's do them one at a time instead of simultaneous since we actually have time. [speaker002:] Poetic reading of digits. [speaker003:] Oh no. [speaker001:] Why don't we let Don go first before his battery dies? [speaker006:] Do you wanna say that one again? That last one? [speaker003:] Um, why? [speaker006:] Or did [disfmarker] did you correct the whole one? [speaker003:] I [disfmarker] yeah, sure. Yeah no I [disfmarker] I gotta write, so [disfmarker] I think. [speaker006:] Oh he did? Never mind. [speaker003:] Yeah. [speaker006:] OK good. Alright. So. [speaker001:] You can really tell from the prosody where it goes. [speaker002:] s meeting. I actually have one more thing that [disfmarker] I don't know if it's [disfmarker] i if [disfmarker] if it's allowed to [disfmarker] to bring up after the dis [speaker001:] After digits, I don't know. [speaker002:] Anyway. But it might be important. For um [disfmarker] [speaker001:] Go ahead. [speaker002:] So Liz remarked that she had recorded a meeting where it was later found that several of the s microphones were turned off, [speaker001:] Mm hmm. [speaker002:] um and this must become a problem especially with non speech meetings. So um is there a way that the software could warn you if it gets zeros from some of the channels, or [disfmarker]? [speaker001:] Probably. We could probably build that in to the front end. [speaker002:] Because it [disfmarker] you know it's really annoying if you go through all that trouble and then basically the meetings aren't useable because uh even [disfmarker] [speaker005:] What are people doing, they're switching their mikes off or something? [speaker002:] I don't know what they do. Maybe the batteries went dead, or th they just didn't [disfmarker] they played with the thing and it didn't leave it in the "on" position or whatever. [speaker001:] I don't know, uh eh. Fff. What would you like it to do when that happens? 
[speaker002:] Well, no, if [disfmarker] if you um [disfmarker] I mean obviously you always [disfmarker] I mean, there's never gonna be a signal from all the channels, right? because [disfmarker] or rarely. Um. But uh. [speaker001:] Well if an unblacked out channel is zero, is actually spitting out zeros, you can be pretty sure it's off. [speaker002:] Right. [speaker001:] Because it doesn't spit out zeros, it spits out epsilons. Right? Cuz there's little background noise. The question is, when the software detects it, what do you want it to do? [speaker002:] Exactly. That's a good question. I don't know. But is there some [disfmarker] We can collectively think of some [disfmarker] of some mechanism that might reduce the risk of [disfmarker] of just [disfmarker] [speaker001:] I mean, it [disfmarker] it [disfmarker] it [disfmarker] We [disfmarker] we already have visual feedback, right? You can see whether your mike is working or not. [speaker002:] Right. Right. [speaker001:] Um. [speaker002:] So maybe it's just to admonish people to actually look at the screen at the beginning of the meeting to make sure they get a signal. [speaker001:] Yep. [speaker005:] Test [disfmarker] [speaker001:] Turn off the screen saver during the meeting. [speaker005:] Tell them to test their mikes, or [disfmarker] [speaker002:] Yeah, something. [speaker006:] I think they sh [speaker001:] Yeah, I d I don't know what to do other than [disfmarker] [speaker004:] It [disfmarker] it can beep if one of the channels dies while recording. [speaker001:] There's no sound out right now. [speaker004:] Oh. Never mind. [speaker006:] It should give the electric shock to the person recording the meeting. [speaker001:] Yep. Yep. Yeah yeah. That would be good. [speaker002:] Oh yeah. That's a good one. [speaker004:] Wow, that's not a bad idea. [speaker001:] Well, we can think about what to do about it, [speaker002:] Yeah. OK. [speaker001:] but it [disfmarker] It's pretty clear we can detect it, so. 
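A warning of the kind being discussed (flag a channel that is emitting exact zeros, since a live mike always picks up at least the low-level "epsilons" of background noise) might be sketched like this. The interface and threshold are hypothetical, not part of the actual recording software:

```python
def dead_channels(frames, threshold=0):
    """Return the names of channels whose samples are all at or below
    `threshold` in absolute value over a block of audio.  A live,
    un-blacked-out channel produces small nonzero background-noise
    samples, so an all-zero channel is almost certainly switched off.
    `frames` maps channel name -> list of integer samples; this
    interface is illustrative only."""
    return sorted(ch for ch, samples in frames.items()
                  if samples and max(abs(s) for s in samples) <= threshold)
```

Run per recorded block, this could drive whatever feedback is chosen at the meeting's start: an on-screen warning, a beep, or (less advisably) the electric shock.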
[speaker002:] Yeah. [speaker001:] OK. Are we done? [speaker002:] Alright. [speaker006:] Yeah. OK. [speaker003:] Adam, what is the mike that, uh, Jeremy's wearing? [speaker006:] It's the ear plug mike. [speaker005:] That's good. [speaker001:] Ear plug. [speaker003:] Is that a wireless, or [disfmarker]? [speaker006:] No. [speaker003:] Oh. [speaker007:] It's wired. [speaker002:] Oh! [speaker001:] Is that [disfmarker] Does that mean you can't hear anything during the meeting? [speaker004:] It's old school. [speaker006:] Huh? What? Huh? [speaker002:] Should we, uh, close the door, maybe? [speaker006:] It [disfmarker] it's a fairly good mike, actually. [speaker002:] So it's [disfmarker] Yeah. Huh. [speaker006:] Well, I shouldn't say it's a good mike. All I really know is that the signal level is OK. I don't know if it's a [disfmarker] the quality. [speaker002:] Well, that's a [speaker006:] Ugh! So I didn't send out agenda items because until five minutes ago we only had one agenda item and now we have two. So. [vocalsound] And, uh. [speaker002:] OK. So, just to repeat the thing bef that we said last week, it was there's this suggestion of alternating weeks on [vocalsound] more, uh, automatic speech recognition related or not? [speaker006:] Right. [speaker002:] Was that sort of [pause] the division? So which week are we in? [speaker006:] Well [disfmarker] We haven't really started, but I thought we more [disfmarker] we more or less did Meeting Recorder stuff last week, so I thought we could do, uh [disfmarker] [speaker002:] I thought we had a thing about speech recognition last week too. [speaker006:] But I figure also if they're short agenda items, we could also do a little bit of each. [speaker003:] Yeah. [speaker002:] Yeah. [speaker006:] So. I seem to be having difficulty getting this adjusted. Here we go. [speaker002:] OK.
[speaker006:] So, uh, as most of you should know, I did send out the consent form thingies and, uh, so far no one has made any [disfmarker] Ach! [comment] [comment] any comments on them. So, no on no one has bleeped out anything. [speaker002:] Um. Yeah. [speaker006:] So. I don't expect anyone to. But. [speaker002:] Um. [vocalsound] So, w what follows? At some point y you go around and get people to sign something? [speaker006:] No. We had spoken w about this before and we had decided that they have [disfmarker] they only needed to sign once. [speaker002:] Yeah, but I've forgotten. [speaker006:] And the agreement that they already signed simply said that we would give them an opportunity. So as long as we do that, we're covered. [speaker002:] And how long of an opportunity did you tell them? [speaker006:] Uh, July fifteenth. [speaker002:] July fifteenth. Oh, so they have a plenty of time, [speaker006:] Yep. [speaker002:] and y Given that it's that long, um, um [disfmarker] Why was that date chosen? You just felt you wanted to [disfmarker]? [speaker006:] Jane told me July fifteenth. So, that's what I set it. [speaker002:] Oh, OK. [speaker001:] Oh. I just meant that that was [pause] the release date that you had on the [pause] data. [speaker006:] Oh. I [disfmarker] I didn't understand that there was something specific. [speaker001:] I, uh [disfmarker] I thought [disfmarker] [speaker006:] You [disfmarker] y you had [disfmarker] [speaker002:] I don't [disfmarker] [speaker006:] I had heard July fifteenth, so that's what I put. [speaker002:] No, the only [disfmarker] th the only [pause] mention I recall about that was just that July fifteenth or so is when [vocalsound] this meeting starts. [speaker001:] Mm hmm. [speaker006:] So. [speaker001:] That's right. That's why. [speaker002:] Oh, I see. [speaker001:] You said you wanted it to be available then. [speaker002:] OK. Right. [speaker001:] I didn't mean it to be the hard deadline. 
It's fine with me if it is, or we cou But I thought it might be good to remind people two weeks prior to that [speaker002:] w [speaker001:] in case, uh [disfmarker] you know, "by the way [pause] this is your last [disfmarker]" [speaker002:] Right. [speaker001:] Uh. Yeah. [speaker002:] We probably should have talked about it, cuz i because if we wanna be able to give it to people July fifteenth, if somebody's gonna come back and say "OK, I don't want this and this and this used", uh, clearly we need some time to respond to that. Right? [speaker006:] Yeah. As I said, we [disfmarker] I just got one date [speaker008:] Damn! [speaker002:] Yeah. [speaker006:] and that's the one I used. So. But I can send a follow up. [speaker002:] Yeah. [speaker006:] I mean, it's almost all us. I mean the people who are in the meeti this meeting was, uh, these [disfmarker] the meetings that [disfmarker] in [disfmarker] are in set one. [speaker003:] Was my [disfmarker] was my response OK? [speaker001:] That's right. [speaker003:] I just wrote you [disfmarker] replied to the email saying they're all fine. [speaker006:] Right. I mean, that's fine. [speaker003:] OK, good. [speaker006:] I [disfmarker] we don't [disfmarker] My understanding of what we [pause] had agreed upon when we had spoken about this months ago was that, uh, we don't actually need a reply. [speaker003:] That makes it easy. [speaker006:] We just need to tell them that they can do it if they want. [speaker002:] OK. I just didn't remember, but [disfmarker] [speaker006:] And so no reply is no changes. [speaker001:] And he's got it so that the default thing you see when you look at the page is "OK". [speaker002:] OK. [speaker001:] So that's very clear all the way down the page, "OK". And they have two options they can change it to. One of them is [pause] "censor", and the other one is "incorrect". Is it [disfmarker] is [disfmarker] your word is "incorrect"? [speaker006:] Right. 
[speaker001:] Which means also we get feedback on [pause] if [pause] um, there's something that they w that needs to be [pause] adjusted, because, I mean, these are very highly technical things. I mean, it's an added, uh, level of checking on the accuracy of the transcription, as I see it. But in any case, people can agree to things that are wrong. [speaker006:] Well [disfmarker] [speaker001:] So. [speaker006:] Yeah. The reason I did that it was just so that people would not censor [disfmarker] not ask to have stuff removed because it was transcribed incorrectly, [speaker001:] And the reason I liked it was because [disfmarker] [speaker006:] as opposed to, uh [disfmarker] [speaker001:] was because it, um [disfmarker] it gives them the option of, uh, being able to correct it. [speaker006:] Right. [speaker001:] Approve it and correct it. And [pause] um. So, you have [pause] it nicely set up so they email you and, uh [disfmarker] [speaker006:] When they submit the form, it gets processed and emailed to me. [speaker001:] Mm hmm. Mm hmm. [speaker006:] So. [speaker001:] And I wanted to say the meetings that are involved in that set are Robustness and Meeting Recorder. The German ones will be ready for next week. Those are three [disfmarker] three of those. A different set of people. And we can impose [disfmarker] [speaker003:] The German ones? [speaker001:] Uh, well. [speaker008:] Yeah. Those [disfmarker] uh [disfmarker] [speaker002:] NSA. [speaker001:] OK. I spoke loosely. The [disfmarker] the German, French [disfmarker] Sorry, the German, [vocalsound] Dutch, and Spanish ones. [speaker005:] Spanish. Yeah. [speaker006:] Mm hmm. [speaker003:] Oh, those are the NSA meetings? [speaker008:] Those are [disfmarker] [speaker005:] The non native [disfmarker] [speaker001:] Yeah. Uh huh. [speaker003:] Oh, oh! OK. [speaker002:] German, Dutch, Swiss and Spanish. 
[speaker005:] The all non native [disfmarker] [speaker001:] That's [disfmarker] that's [disfmarker] that's r [speaker006:] Mm hmm. [speaker008:] Uh huh. [speaker003:] OK. I'd [disfmarker] I d [speaker001:] Yeah. [pause] It's the other group. [speaker002:] I It was the network [disfmarker] network services group. [speaker003:] OK. [speaker001:] Uh huh. [speaker002:] Yeah. [speaker001:] Yeah, exactly. Yeah. I didn't mean to [pause] isolate them. [speaker002:] Otherwise known as the German, Dutch, and Spanish. [speaker001:] Yeah. Sorry. [speaker006:] Mm hmm. [speaker001:] It was [disfmarker] it was not the best characterization. But what [disfmarker] [vocalsound] what I meant to say was that it's the other group that's not [disfmarker] n no m no overlap with our present members. And then maybe it'd be good to set an explicit deadline, something like [pause] a week [pause] before that, uh, J July fifteenth date, or two weeks before. [speaker002:] I mean, I would suggest we discuss [disfmarker] I mean, if we're going to have a policy on it, that we discuss the length of time that we want to give people, [speaker006:] Mm hmm. [speaker002:] so that we have a uniform thing. So, tha that's a month, which is fine. [speaker003:] Twelve hours. [speaker006:] Well, the only thing I said in the email is that [pause] the data is going to be released on the fifteenth. [speaker002:] I mean, it seems [disfmarker] [speaker006:] I didn't give any other deadline. [speaker001:] Mm hmm. [speaker006:] So my feeling is if someone after the fifteenth says, "wow I suddenly found something", we'll delete it from our record. We just won't delete it from whatever's already been released. [speaker001:] Hmm. That's a little bit difficult. [speaker006:] What else can we do? [speaker001:] Yeah. [speaker006:] If someone says "hey, look, I [pause] found something in this meeting and [pause] it's libelous and I want it removed". What can we do? We have to remove it. [speaker001:] Well. 
[pause] That's true. I [disfmarker] I agree with that part, but I think that it would [disfmarker] it, uh [disfmarker] we need to have, uh, a [disfmarker] a [disfmarker] a message to them very clearly that [vocalsound] beyond this date, you can't make additional changes. [speaker002:] I mean, um, I [disfmarker] I [disfmarker] [vocalsound] i I think that somebody might [pause] request something even though we say that. But I think it's good to at least start some place like that. So if we agreed, [vocalsound] OK, how long is a reasonable amount of time for people to have [disfmarker] [speaker001:] Mm hmm. Good. [speaker002:] if we say two weeks, or if we say a month, I think we should just say that [disfmarker] say that, you know, i a as [pause] um, [vocalsound] "per the [disfmarker] the [disfmarker] the, uh, page you signed, you have the ability to look over this stuff" and so forth "and, uh, because we w" these, uh [disfmarker] I would [disfmarker] I would imagine some sort of generic thing that would say "because we, uh, will continually be making these things available [vocalsound] to other researchers, uh, this can't be open ended and so, uh, uh, please give us back your response within this am you know, within this amount of time", whatever time we agree upon. [speaker006:] Well, did you read the email and look at the pages I sent? [speaker002:] Did I? No, I haven't yet. [speaker006:] No. OK, well why don't you do that [speaker002:] No, just [disfmarker] [speaker006:] and then make comments on what you want me to change? [speaker002:] No, no. I'm not saying that you should change anything. I I'm [disfmarker] what I'm [disfmarker] what I'm [disfmarker] [vocalsound] I'm trying to spark a discussion hopefully among people who have read it so that [disfmarker] that you can [disfmarker] [vocalsound] you can, uh, decide on something. [speaker001:] Mm hmm. [speaker002:] So I'm not telling you what to decide. [speaker001:] Mm hmm. 
[speaker002:] I'm just saying you should decide something, [speaker006:] I already did decide something, [speaker001:] OK. [speaker002:] and then [disfmarker] [speaker006:] and that's what's in the email. [speaker001:] Yeah, yeah. OK, so [disfmarker] [speaker006:] And if you disagree with it, why don't you read it and give me comments on it? [speaker001:] Yeah. Well [disfmarker] I [disfmarker] I think that there's one missing line. [speaker002:] Well, the one thing that I did read and that you just repeated to me [pause] was that you gave the specific date of July fifteenth. [speaker006:] Mm hmm. [speaker002:] And you also just said that the reason you said that was because someone said it to you. [speaker006:] Right. [speaker002:] So what I'm telling you [pause] is that what you should do is come up with a length of time that you guys think is enough and you should use that rather than [pause] this date that you just got from somewhere. That's all I'm saying. [speaker006:] OK. [speaker002:] OK? [speaker001:] I ha I have one question. This is in the summer period and presumably people may be out of town. But we can make the assumption, can't we? that, um, they will be receiving email, uh, most of the month. Right? Because if someone [disfmarker] [speaker002:] It [disfmarker] well, it [disfmarker] well, you're right. Sometimes somebody will be [pause] away and, uh, you know, there's, uh [disfmarker] for any length of time that you [vocalsound] uh, choose [pause] there is some person sometime who will not [pause] end up reading it. [speaker001:] OK. [speaker002:] That's [disfmarker] it's, you know, just a certain risk to take. [speaker008:] S so maybe when [disfmarker] Am I on, by the way? [speaker006:] I don't know. [speaker008:] Oh. [speaker006:] You should be. [speaker008:] Hello? Hello? [speaker006:] You should be channel B. [speaker008:] Oh, OK. Alright. So. 
The, um [disfmarker] Maybe we should say in [disfmarker] w you know, when the whole thing starts, when they sign the [disfmarker] [vocalsound] the agreement [vocalsound] that [disfmarker] you know, specify exactly uh, what, you know, how [disfmarker] how they will be contacted and they can, you know [disfmarker] they can be asked to give a phone number and an email address, or both. And, um, then [disfmarker] [speaker001:] We did that, I [disfmarker] I believe. [speaker008:] Right. So. [vocalsound] A And, then, you know, say very clearly that if they don't [disfmarker] if we don't hear from them, you know, as Morgan suggested, by a certain time or after a certain [vocalsound] period after we contact them [vocalsound] that is implicitly giving their agreement. [speaker005:] Yeah. [speaker006:] Well, they've already signed a form. [speaker008:] Right. [speaker001:] And the form says [disfmarker] [speaker005:] And nobody [disfmarker] nobody really reads it anyway. [speaker006:] So. And the s and the form was approved by Human Subjects, [speaker008:] Says that. Right. Well, if that's i tha if that's already [disfmarker] if [disfmarker] [speaker001:] Uh, the f [speaker006:] so, eh, that's gonna be a little hard to modify. [speaker001:] Well, the form [disfmarker] Well, the form doesn't say, if [disfmarker] uh, you know, "if you don't respond by X number of days or X number of weeks [disfmarker]" [speaker008:] I see. Uh [disfmarker] Oh, OK. So what does it say about the [disfmarker] the [disfmarker] the process of [disfmarker] of, uh [disfmarker] [vocalsound] y the review process? [speaker001:] It doesn't have a time limit. That you'll be provided access to the transcripts [speaker008:] Oh, OK. [speaker001:] and then, uh, allowed to [pause] remove things that you'd like to remove, before it goes to the general [disfmarker] uh, larger audience. [speaker008:] Hmm. Right. Oh. [speaker006:] Here. You can read what you already signed. [speaker001:] There you go. 
[speaker005:] I guess when I [pause] read it, um [disfmarker] [speaker008:] OK. [speaker005:] I'm not as diligent as Chuck, but I had the feeling I should probably respond and tell Adam, like, "I got this and I will do it by this date, and if you don't hear from me by then [disfmarker]" You know, in other words responding to your email [pause] once, right away, saying "as soon as you get this could you please respond." [speaker006:] Right. [speaker005:] And then if you [disfmarker] if the person thinks they'll need more time because they're out of town or whatever, they can tell you at that point? Because [disfmarker] [speaker006:] Oh, I just [disfmarker] I didn't wanna do that, because I don't wanna have a discussion with every person [pause] if I can avoid it. [speaker005:] Well, it's [disfmarker] [speaker006:] So what I wanted to do was just send it out and say "on the fifteenth, the data is released, [speaker001:] Mm hmm. [speaker006:] if you wanna do something about it, do something about it, but that's it." [speaker001:] I [disfmarker] I kind of like this. [speaker005:] Well [disfmarker] OK. So, we're assuming that [disfmarker] [speaker001:] Yeah. [speaker008:] Well, that's [disfmarker] that would be great if [disfmarker] but you should probably have a [pause] legal person look at this and [pause] make sure it's OK. Because if you [disfmarker] if you, uh, do this [vocalsound] and you [disfmarker] then there's a dispute later and, uh, some [disfmarker] [vocalsound] you know, someone who understands these matters concludes that they didn't have, uh, you know, enough opportunity to actually [vocalsound] exercise their [disfmarker] their right [disfmarker] [speaker005:] Or they [disfmarker] they might never have gotten the email, because although they signed this, they don't know by which date to expect your email. And so [pause] someone whose machine is down or whatever [disfmarker] I mean, we have no [disfmarker] [speaker002:] Mm hmm.
[speaker005:] in internally we know that people are there, [speaker006:] Well, OK. l Let me [disfmarker] Let me reverse this. [speaker005:] but we have no confirmation that they got the mail. [speaker006:] So let's say someone [disfmarker] I send this out, and someone doesn't respond. Do we delete every meeting that they were in? I don't think so. [speaker005:] Well, then [disfmarker] [speaker008:] No. [speaker005:] It [disfmarker] we're hoping that doesn't happen, but that's why there's such a thing as registered mail [speaker006:] That will happen. [speaker005:] or [disfmarker] [speaker008:] That will happen. [speaker006:] That will absolutely happen. [speaker005:] Right. [speaker006:] Because people don't read their email, or they'll read and say "I don't care about that, I'm not gonna delete anything" and they don just won't reply to it. [speaker008:] Maybe [disfmarker] uh, do we have mailing addresses for these people? [speaker006:] No. [speaker008:] No. [speaker006:] We have what they put on the speaker form, [speaker008:] Oh. [speaker006:] which was just generic contact information. [speaker001:] Well [disfmarker] [comment] [vocalsound] Most [disfmarker] But the ones that we're dealing with now are all local, [speaker008:] Well, then [disfmarker] [speaker001:] except the ones who [disfmarker] I mean, we [disfmarker] we're totally in contact with all the ones in those two groups. [speaker008:] Mmm. OK. [speaker001:] So maybe, uh, I [disfmarker] you know, that's not that many people and if I [disfmarker] if, uh [disfmarker] i i there is an advantage to having them admit [disfmarker] and if I can help with [disfmarker] with processing that, I will. It's [disfmarker] it's [disfmarker] there is an advantage to having them be on record as having received the mail and indicating [disfmarker] [speaker006:] Yeah. I mean I thought we had discussed this, like, a year ago. [speaker001:] Yes, we did. 
[speaker006:] And so it seems like this is a little odd for it to be coming up yet again. [speaker001:] You're right. Well, I [disfmarker] you know. But sometimes [disfmarker] [speaker002:] Well, we [disfmarker] we haven't experienced it before. [speaker006:] So. [speaker001:] That's right. [speaker002:] Right? So [disfmarker] [speaker005:] You'll either wonder [pause] at the beginning or you'll wonder at the end. [speaker001:] Need to get it right. [speaker005:] I mean, there's no way to get around [disfmarker] I It's pretty much the same am amount of work except for an additional email just saying they got the email. [speaker001:] Yeah. [speaker005:] And maybe it's better legally to wonder before [disfmarker] you know, a little bit earlier than [disfmarker] [speaker006:] Well [disfmarker] [speaker001:] It's much easier to explain [pause] this way. [speaker006:] OK. Well, why don't you talk [pause] t [speaker001:] T t to have it on record. [speaker006:] Morgan, can you talk to our lawyer about it, and find out what the status is on this? Cuz I don't wanna do something that we don't need to. [speaker001:] Yeah, but w Mmm. [speaker006:] Because what [disfmarker] I'm telling you, people won't respond to the email. No matter what you do, you there're gonna be people who [pause] you're gonna have to make a lot of effort to get in contact with. [speaker004:] I mean, i it's k [speaker008:] Hmm. [speaker001:] Well, then we make the effort. [speaker006:] And do we want to spend that effort? [speaker004:] It's kind of like signing up for a mailing list. [speaker001:] We make the effort. [speaker004:] They have opt in and opt out. And there are two different ways. I mean, and either way works probably, I mean. 
[speaker001:] Except I really think in this case [disfmarker] I [disfmarker] I'm agr I agree with Liz, that we need to be [pause] in the clear and not have to after the fact say "oh, but I assumed", and "oh, I'm sorry that your email address was just accumulating mail without notifying you", [speaker002:] If this is a purely administrative task, we can actually have administration do it. [speaker001:] you know. Oh, excellent. [speaker002:] But the thing is that, you know, I [disfmarker] I [disfmarker] I think, without going through a whole expensive thing with our lawyers, [vocalsound] from my previous conversations with them, my [disfmarker] my sense very [pause] much is that we would want something on record [pause] as indicating that they actually were aware of this. [speaker001:] Yes. [speaker006:] Well, we had talked about this before and I thought that we had even gone by the lawyers asking about that [speaker002:] Mm hmm. [speaker006:] and they said you have to s they've already signed away the f with that form [disfmarker] that they've already signed once. [speaker001:] I don't remember that this issue of [pause] the time period allowed for response was ever covered. [speaker006:] Yeah. OK. [speaker002:] Yeah. We never really talked about that. [speaker005:] Or the date at which they would be receiving the email from you. [speaker002:] Yeah. [speaker001:] Or [disfmarker] or how they would indicate [disfmarker] [speaker005:] They probably forgot all about it. [speaker002:] We certainly didn't talk, uh, about [disfmarker] with them at all about, uh, the manner of them being [disfmarker] [vocalsound] made the, uh, uh, materials available. [speaker005:] Yeah. [speaker008:] We do it like with these [disfmarker] [speaker002:] That was something that was sort of just within our implementation. [speaker008:] We can use it [disfmarker] we can use a [disfmarker] a ploy [speaker006:] OK. 
[speaker008:] like they use to, um [disfmarker] you know, that when they serve, like [disfmarker] uh, uh, uh [disfmarker] [comment] uh, you know, like dead beat dads, they [disfmarker] they [disfmarker] they make it look like they won something in the lottery and then they open the envelope and that [disfmarker] [speaker004:] And they're served. [speaker008:] Right? Because [disfmarker] and then the [disfmarker] the [disfmarker] the [disfmarker] the thing is served. So you just make it, you know, "oh, you won [disfmarker] you know, go to this web site and you've, uh [disfmarker] you're [disfmarker]" [speaker005:] That's why you never open these things that come in the mail. [speaker001:] That one. [speaker002:] Yeah. [speaker006:] Well, it's just, we've gone from one extreme to the other, where at one point, a few months ago, Morgan was [disfmarker] you were saying let's not do anything, [speaker008:] Right. [vocalsound] Right. No, it I [disfmarker] it might [disfmarker] [speaker001:] Well, it doesn't matter. [speaker008:] i i it [disfmarker] it might well be the case [disfmarker] [speaker006:] and now we're [disfmarker] [speaker008:] it might [disfmarker] [speaker006:] we're saying we have to follow up each person and get a signature? [speaker008:] Right. [speaker006:] I mean, what are we gonna doing here? [speaker008:] It might well be the case that [disfmarker] that this is perfectly [disfmarker] you know, this is enough to give us a basis t to just, eh, assume their consent if they don't reply. [speaker002:] Well. [speaker006:] Mm hmm. [speaker008:] But, I'm not [disfmarker] you know, me not being a lawyer, I wouldn't just wanna do that without [pause] having the [disfmarker] the expert, uh, opinion on that. [speaker001:] And how many people? [speaker006:] Then I think we had better find out, [speaker001:] Al altogether we've got twenty people. These people are people who read their email almost all the time. 
[speaker006:] so that we can find a [disfmarker] [speaker008:] Yeah. [speaker002:] Let me look at this again. [speaker006:] Right. [speaker001:] I [disfmarker] I really don't see that it's a problem. I [disfmarker] I think that it's a common courtesy to ask them [disfmarker] uh, to expect for them to, uh, be able to have [@ @] [comment] us try to contact them, [speaker006:] For [disfmarker] for th [speaker001:] u just in case they hadn't gotten their email. I think they'd appreciate it. [speaker002:] Yeah. My [disfmarker] Adam, my [disfmarker] my view before was about [pause] the nature of what was [disfmarker] of the presentation, [speaker006:] Mm hmm. [speaker002:] of [disfmarker] of how [pause] my [disfmarker] my [disfmarker] the things that we're questioning were along the lines of how easy [disfmarker] uh, h how m how much implication would there be that it's likely you're going to be changing something, as opposed to [disfmarker] [speaker006:] Mm hmm. [speaker002:] That was the kind of dispute I was making before. [speaker001:] Mm hmm. I remember that. [speaker002:] But, um, the attorneys, I [disfmarker] uh, I can guarantee you, the attorneys will always come back with [disfmarker] and we have to decide how stringent we want to be in these things, but they will always come back with saying [vocalsound] that, um, you need to [disfmarker] you want to have someth some paper trail or [disfmarker] which includes electronic trail [disfmarker] [vocalsound] that they have, uh, in fact [pause] O K'd it. [speaker006:] Mm hmm. [speaker002:] So, um, I think that if you f i if [pause] we send the email as you have and if there's half the people, say, who don't respond [pause] at all by, you know, some period of time, [vocalsound] we can just make a list of these people and hand it to, uh [disfmarker] you know, just give it to me and I'll hand it to administrative staff or whatever, [speaker006:] Right. 
[speaker002:] and they'll just call them up and say, you know, "have you [disfmarker] Is [disfmarker] is this OK? And would you please mail [disfmarker] you know, mail Adam that it is, if i if it, you know, is or not." So, you know, we can [disfmarker] we can do that. [speaker005:] The other thing that there's a psychological effect that [disfmarker] at least for most people, that if they've responded to your email saying "yes, I will do it" or "yes, I got your email", they're more likely to actually do it [comment] [pause] later [pause] than to just ignore it. [speaker002:] Mm hmm. [speaker005:] And of course we don't want them to bleep things out, but it [disfmarker] it's a little bit better if we're getting the [disfmarker] their, uh, final response, once they've answered you once than if they never answer you'd [comment] at al at all. That's how these mailing houses work. So, I mean, it's not completely lost work because it might benefit us in terms of getting [pause] responses. [speaker001:] Mm hmm. [speaker005:] You know, an official OK from somebody [pause] is better than no answer, even if they responded that they got your email. And they're probably more likely to do that once they've responded that they got the email. [speaker001:] I also think they'd just simply appreciate it. I think it's a good [disfmarker] a good way of [disfmarker] of fostering goodwill among our subjects. Well, our participants. [speaker002:] I think the main thing is [disfmarker] I mean, what lawyers do is they always look at worst cases. [speaker006:] Sending lots of spam. [speaker002:] So they s so [disfmarker] so [disfmarker] Tha that's what they're paid to do. [speaker006:] Yep. [speaker002:] And so, [vocalsound] it is certainly possible that, uh, somebody's server would be down or something and they wouldn't actually hear from us, and then they find this thing is in there and we've already distributed it to someone. 
So, [vocalsound] what it says in there, in fact, is that they will be given an opportunity to blah blah blah, [speaker001:] Mm hmm. [speaker002:] but if in fact [disfmarker] if we sent them something or we thought we sent them something but they didn't actually receive it for some reason, [vocalsound] um, then we haven't given them that. [speaker006:] Well, so how far do we have to go? Do we need to get someone's signature? Or, is email enough? Do we have to have it notarized? [speaker002:] I i i em email is enough. [speaker006:] I mean [disfmarker] OK. [speaker002:] Yeah. I mean, I've been through this [disfmarker] I mean, I'm not a lawyer, but I've been through these things a f things f like this a few times with lawyers now [speaker006:] Mm hmm. [speaker002:] so I [disfmarker] I [vocalsound] I I'm pretty comfortable with that. [speaker003:] Do you track, um, when people log in to look at the [disfmarker]? [speaker006:] Uh. If they submit the form, I get it. [speaker003:] Uh huh. [speaker006:] If they don't submit the form, it goes in the general web log. But that's not sufficient. [speaker003:] Hmm. [speaker006:] Right? Cuz if someone just visits the web site that doesn't [pause] imply anything in particular. [speaker003:] Except that you know they got the mail. [speaker001:] Mm hmm. That's right. [speaker006:] Right. [speaker001:] I [disfmarker] I could get you on the notify list if you want me to. [speaker006:] I'm already on it. [speaker001:] For that directory? [speaker006:] Mm hmm. [speaker001:] OK, great. [speaker002:] So again, hopefully, um, this shouldn't be quite as odious a problem either way, uh, in any of the extremes we've talked about because [vocalsound] uh, we're talking a pretty small [pause] number of people. [speaker006:] W For this set, I'm not worried, because [pause] we basically know everyone on it. [speaker002:] Mm hmm. [speaker001:] Hmm. [speaker002:] Mm hmm. 
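The distinction drawn here — a submitted form is a positive record of consent, while a bare page visit in the general web log only proves the email was received, not that anything was approved — could be checked with a small log scan like the one below. The Apache-style log format and the `/review/<id>` URL path are hypothetical stand-ins, not the actual ICSI setup:

```python
import re

# Assumed URL scheme for a participant's review page; not the real path.
REVIEW_URL = re.compile(r'"GET /review/(\w+)')

def visited_but_silent(access_log_lines, submitted_ids):
    """Return participant IDs who opened their review page (so we know
    the notification reached them) but never submitted the form."""
    visitors = set()
    for line in access_log_lines:
        m = REVIEW_URL.search(line)
        if m:
            visitors.add(m.group(1))
    return sorted(visitors - set(submitted_ids))

# Hypothetical access-log excerpt: p01 and p02 visited; only p02 submitted.
log = [
    '1.2.3.4 - - [01/Jul/2001] "GET /review/p01 HTTP/1.0" 200 512',
    '5.6.7.8 - - [02/Jul/2001] "GET /review/p02 HTTP/1.0" 200 512',
    '9.9.9.9 - - [02/Jul/2001] "GET /index.html HTTP/1.0" 200 100',
]
print(visited_but_silent(log, submitted_ids={"p02"}))  # ['p01']
```

As noted in the discussion, a visit alone still doesn't imply agreement; a list like this would only tell you whom to follow up with by phone or a reminder email.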
[speaker006:] You know, they're all more or less here or it's [disfmarker] it's Eric and Dan and so on. But for some of the others, you're talking about visitors who are [pause] gone from ICSI, whose email addresses may or may not work, [speaker002:] Mm hmm. Mm hmm. [speaker001:] Oh. [speaker006:] and [disfmarker] So what are we gonna do when we run into someone that we can't get in touch with? [speaker001:] I don't think, uh [disfmarker] They're so recent, these visitors. I [disfmarker] and [disfmarker] and I [disfmarker] they're also so [disfmarker] [speaker006:] Mm hmm. [speaker002:] Mm hmm. [speaker001:] They're prominent enough that they're easy to find through [disfmarker] I [disfmarker] I mean, I [disfmarker] I w I'll be able to [disfmarker] if you have any trouble finding them, I really think I could find them. [speaker006:] Other methods? OK. [speaker002:] Yeah. Cuz it [disfmarker] what it [disfmarker] what it really does promise here is that we will ask their permission. Um, and I think, you know, if you go into a room and close the door and [disfmarker] and ask their permission [vocalsound] and they're not there, it doesn't seem [comment] that that's the intent of, uh, meaning here. So. [speaker006:] Well, the qu the question is just whether [disfmarker] how active it has to be. I mean, because they [disfmarker] they filled out a contact information and that's where I'm sending the information. [speaker002:] Right. [speaker006:] And so far everyone has done email. There isn't anyone who did, uh, any other contact method. [speaker002:] Well, the way ICSI goes, people, uh, who, uh, were here ten years ago still have acc [vocalsound] have forwards to other accounts and so on. [speaker006:] Yeah. 
[speaker002:] So it's unusual that [disfmarker] that they, uh [disfmarker] [speaker006:] So my original impression was that that was sufficient, that if they give us contact information and that contact information isn't accurate that [pause] we fulfilled our burden. [speaker002:] Yeah. [speaker005:] Then they just come back. [speaker003:] All my files were still here. [speaker005:] Same as us. [speaker001:] I just [disfmarker] [speaker002:] So if we get to a boundary case like that then maybe I will call the attorney about it. [speaker003:] Yeah. [speaker006:] OK. [speaker002:] But, you know, hopefully we won't need to. [speaker001:] I d I just don't think we will. [speaker006:] Alright. [speaker001:] For all the reasons that we've discussed. [speaker002:] So we'll [disfmarker] we'll see if we do or not. [speaker006:] Yep. And we'll see how many people respond to that email. [speaker001:] Yeah. [speaker006:] So far, two people have. [speaker002:] Yeah. I think very few people will [speaker006:] So. [speaker002:] and [disfmarker] and [disfmarker] and, you know, people [disfmarker] people see long emails about things that they don't think [vocalsound] is gonna be high priority, they typically, uh, don't [disfmarker] don't read it, or half read it. [speaker006:] Right. [speaker002:] Cuz people are swamped. But [disfmarker] [speaker001:] And actually, um, I [disfmarker] I [disfmarker] didn't anticipate this so I [disfmarker] that's why I didn't give this comment, and it [disfmarker] I [disfmarker] this discussion has made me think it might be nice to have a follow up email within the next couple of days saying "by the way, you know, we wanna hear back from you by X date and please [disfmarker]", and then add what Liz said [disfmarker] "please, uh, respond to [disfmarker] please indicate you received this mail." [speaker002:] Uh, or e well, maybe even additionally, uh, um, "Even if you've decided you have no changes you'd like to make, if you could tell us that". 
[speaker006:] Respond to the email. [comment] Yep. [speaker001:] Mm hmm. [speaker005:] Right. That would [disfmarker] that would definitely work on me. [speaker002:] Yeah. [speaker001:] It is the first time through the cycle. [speaker005:] You know, it makes you feel m like, um, if you were gonna p if you're predicting that you might not answer, you have a chance now to say that. Whereas, I [disfmarker] I mean, I would be much more likely myself, [speaker003:] And the other th [speaker005:] given all my email, t to respond at that point, saying "you know what, I'm probably not gonna get to it" or whatever, rather than just having seen the email, thinking I might get to it, and never really, [vocalsound] uh, pushing myself to actually do it until it's too late. [speaker003:] Yeah. I was [disfmarker] I was thinking that it also [pause] lets them know that they don't have to go to the page to [pause] accept this. [speaker005:] Right. R Right. That's true. [speaker002:] Right. [speaker003:] I mean, I [disfmarker] I [disfmarker] Yeah. So that way they could [disfmarker] they can see from that email that if they just write back and say "I got it, no changes", they're off the hook. [speaker002:] Yeah. [speaker003:] They don't have to go to the web page and [disfmarker] [speaker002:] I mean, the other thing I've learned from dealing with [disfmarker] dealing with people sending in reviews and so forth, uh, is, um, [vocalsound] if you say "you've got three months to do this review", [vocalsound] um, people do it, you know, [vocalsound] two and seven eighths months from now. [speaker005:] Yeah. That's true. [speaker006:] Right. [speaker002:] If you say "you've got three weeks to do this review", they do [disfmarker] do it, you know, two and seven eighths weeks from now [disfmarker] [vocalsound] they do the review. 
And, um [disfmarker] So, if we make it [pause] a little less time, I don't think it'll be that much [disfmarker] [speaker006:] Well, and also if we want it ready by the fifteenth, that means we better give them deadline of the first, if we have any prayer of actually getting everyone to respond in time. [speaker002:] There's the responding part and there's also what if, uh, I mean, I hope this doesn't happen, what if there are a bunch of deletions that have to get put in and changes? [speaker006:] Right. [speaker002:] Then [vocalsound] we actually have to deal with that [speaker001:] Mm hmm. Some lead time. [speaker002:] if we want it to [disfmarker] [speaker006:] Ugh! Disk space, oh my god! [speaker001:] By the way, has [disfmarker] has Jeremy signed the form? [speaker006:] I hadn't thought about that. [speaker001:] OK. [speaker006:] That for every meeting [disfmarker] any meeting which has any bleeps in it we need yet another copy of. [speaker008:] Oh. [speaker003:] Just that channel. [speaker004:] Can't you just do that channel? [speaker003:] Oh, no. We have to do [disfmarker] [speaker006:] No, of course not. You need all the channels. [speaker005:] Yeah. You have to do all of them, [speaker004:] Oh. [speaker003:] Do you have to do the other close talking? [speaker005:] as well as all of these. [speaker004:] Yeah. [speaker007:] Wow. [speaker006:] Yes. Absolutely. There's a lot of cross talk. [speaker005:] You have to do all [disfmarker] You could just do it in that time period, though, [speaker001:] Well [disfmarker] [speaker005:] but I guess it's a pain. [speaker006:] Well, but you have to copy the whole file. [speaker005:] Yeah. [speaker006:] Right? Because we're gonna be releasing the whole file. [speaker005:] Yeah. You're right. [speaker001:] Well [disfmarker] [vocalsound] I [disfmarker] you know, I think at a certain point, that copy that has the deletions will become the master copy. [speaker006:] Yeah. It's just I hate deleting any data. 
So I [disfmarker] I don't want [disfmarker] I really would rather make a copy of it, rather than bleep it out [speaker002:] Are you del are you bleeping it by adding? [speaker006:] and then [disfmarker] Overlapping. So, it's [disfmarker] it's exactly a censor bleep. So what I really think is "bleep" [speaker002:] I I I I understand, [speaker006:] and then I want to [disfmarker] [speaker002:] but is [disfmarker] is it summing signals or do you [pause] delete the old one and put the new one in? [speaker006:] I delete the old one, put the new one in. [speaker002:] Oh, OK. [speaker006:] There's nothing left of the original signal. [speaker002:] Cuz [disfmarker] Oh. Cuz if you were summing, you could [disfmarker] No. But anyway [disfmarker] [speaker006:] Yeah. It would be qui quite easy to get it back again. [speaker001:] But [disfmarker] [vocalsound] And then w I was gonna say also that the they don't have to stay on the system, as you know, [speaker002:] Yeah. [speaker005:] Then someday we can sell the [pause] unedited versions. [speaker001:] cuz [disfmarker] cuz the [disfmarker] the ones [disfmarker] [speaker006:] Say again? [speaker001:] Once it's been successfully bleeped, can't you rely on the [disfmarker]? [speaker003:] Or [pause] we'll tell people the frequency of the beep [speaker002:] Encrypt it. [speaker004:] You can hide it. Yeah. [speaker003:] and then they could subtract the beep out. [speaker008:] Oh, yeah. [speaker001:] Can't you rely on the archiving to preserve the older version? [speaker005:] Right. Exactly. I see. [speaker004:] It wouldn't be that hard to hide it. [speaker006:] Yeah, that's true. Yeah. Yep, that's true. [speaker005:] See, this is good. I wanted to create some [pause] side conversations in these meetings. [speaker002:] Yeah. You could encrypt it, [speaker001:] OK. [speaker004:] You can use spread spectrum. 
[speaker002:] you know, with a [disfmarker] with a two hundred bit [disfmarker] [vocalsound] thousand bit, uh [disfmarker] [speaker003:] Uh huh. [speaker005:] So [disfmarker] [speaker004:] Hide it. [speaker003:] Here we go. [speaker005:] Yeah. Yeah. [speaker004:] Yeah, there you go. [speaker005:] Cuz we don't have enough asides. [speaker008:] I have an idea. You reverse the signal, so it [disfmarker] it lets people say what they said backwards. [speaker004:] There you go. [speaker006:] Backwards. [speaker004:] Then you have, like, subliminal, uh, messages, [speaker006:] But, ha you've seen the [disfmarker] this [disfmarker] the speech recognition system that reversed very short segments. [speaker008:] Yeah. [speaker004:] like. [speaker006:] Did you read that paper? [speaker008:] No. [speaker006:] It wouldn't work. The speech recognizer still works. [speaker005:] Yeah. And if you do it backward then [disfmarker] [speaker002:] Yeah. [speaker003:] That's cuz they use forward backward. [speaker005:] H good old HMM. [speaker006:] Forward but backward. That's right. [speaker005:] No, it's backward forward. [speaker006:] Good point. A point. Well, I'm sorry if I sound a little peeved about this whole thing. It's just we've had meeting after meeting after meeting a on this and it seems like we've never gotten it resolved. [speaker002:] Well, but we never also [disfmarker] we've also never done it. [speaker001:] Hmm. [speaker005:] Uh. [speaker006:] So. Yeah. [speaker001:] This is the first cycle. [speaker005:] If it makes [disfmarker] [speaker001:] There're bound to be some glitches the first time through. [speaker002:] So. [vocalsound] And, uh [disfmarker] and I'm sorry responding without, uh, having much knowledge, but the thing is, uh, I am, like, one of these people who gets a gazillion mails and [disfmarker] and stuff comes in as [speaker006:] Well, and that's exactly why I did it the way I did it, which is the default is if you do nothing we're gonna release it. 
Because, you know, I have my [pause] stack of emails of to d to be done, [speaker002:] Yeah. [speaker006:] that, you know, fifty or sixty long, and the ones at the top I'm never gonna get to. [speaker002:] Right. [speaker006:] And, uh [pause] So [disfmarker] so [disfmarker] [speaker003:] Move them to the bottom. [speaker006:] Yeah, right. [speaker002:] So [disfmarker] so the only thing we're missing is [disfmarker] is some way to respond to easily to say, uh, "OK, go ahead" or something. [speaker006:] So, i this is gonna mean [disfmarker] [speaker003:] Just re mail them to yourself and then they're at the bottom. [speaker002:] Yeah. [speaker006:] Yeah. Yeah. That's actually definitely a good point. [speaker001:] Yeah. [speaker006:] The m email doesn't specify that you can just reply to the email, as op as opposed to going to the form [speaker008:] In [disfmarker] [speaker001:] Mm hmm. [speaker005:] Right. [speaker006:] and [disfmarker] [speaker001:] And it also doesn't give a [disfmarker] a specific [disfmarker] I didn't think of it. S I think it's a good idea [disfmarker] an ex explicit time by which this will be considered definite. [speaker006:] Yeah, release. [speaker001:] And [disfmarker] and it has to be a time earlier than that endpoint. [speaker002:] Yeah. It's converging. [speaker008:] This [disfmarker] um, I've seen this recently. [speaker001:] Yeah. That's right. [speaker008:] Uh, I got email, and it [disfmarker] i if I use a MIME capable mail reader, it actually says, you know, click on this button to confirm receipt [pause] of the [disfmarker] of the mail. [speaker004:] Hmm. [speaker001:] Oh, that's interesting. [speaker008:] So [disfmarker] [speaker006:] You [disfmarker] you can [disfmarker] A lot of mailers support return receipt. [speaker004:] It's like certified mail. [speaker001:] Could do that. [speaker008:] Right. [speaker006:] But it doesn't confirm that they've read it. [speaker008:] No, no, no. This is different. 
This is not [disfmarker] So, I [disfmarker] I know, you can tell, you know, the, uh, mail delivery agent to [disfmarker] to confirm that the mail was delivered to your mailbox. [speaker001:] Mmm. [speaker006:] Right. [speaker008:] But [disfmarker] but, no. This was different. Ins in the mail, there was a [disfmarker] [speaker006:] Oh, just a button. [speaker008:] uh, th there was a button that when you clicked on it, it would send, uh, you know, a actual acknowledgement to the sender that you had actually looked at the mail. [speaker006:] Oh, yeah. Yeah. Unfor Yeah, we could do that. But I hate that. [speaker008:] But it o but it only works for, you know, MIME capable [disfmarker] you know, if you use Netscape or something like that for your n [speaker006:] Yeah. [speaker005:] You might as well just respond to the mail. [speaker002:] And [disfmarker] And we actually need a third thing. [speaker008:] Right. [speaker005:] I mean [speaker002:] It's not that you've looked at it, it's that you've looked at it and [disfmarker] and [disfmarker] and agree with one of the possible actions. [speaker008:] No, no. You can do that. [speaker002:] Right? [speaker008:] You know, you can put this button anywhere you want, [speaker002:] Oh? [speaker008:] and you can put it the bottom of the message and say "here, by [disfmarker] you know, by clicking on this, I [disfmarker] I agree [disfmarker] [vocalsound] uh, you know, I acknowledge [disfmarker]" [speaker002:] Oh, I see. That i i my first born children are yours, and [disfmarker] Yeah. Yeah. [speaker005:] Quick question. Are, um [disfmarker] [speaker006:] Well, I could put a URL in there without any difficulty and [pause] even pretty simple MIME readers can do that. [speaker001:] But why shouldn't they just [pause] email back? [speaker006:] So. Yeah. Reply. [speaker002:] Yeah. [speaker001:] I don't see there's a problem. [speaker008:] Right. [speaker001:] It's very nice. [speaker002:] Yeah. 
[speaker001:] I [disfmarker] I like the high tech aspect of it, [speaker008:] No, no, no. [vocalsound] I actually don't. [speaker001:] but I think [disfmarker] I appreciate it. [speaker008:] I'm just saying that [speaker006:] Well, I [disfmarker] cuz I use a text mail reader. [speaker008:] if ev but I'm [disfmarker] [speaker005:] Don't you use VI for your mai? [speaker008:] Yeah. [speaker002:] Wow. That's [disfmarker] that's my guy. Alright. [speaker006:] You [disfmarker] you read email [pause] in VI? [speaker008:] Yeah. So [disfmarker] I [disfmarker] i [speaker001:] Yeah. [vocalsound] I like VI. [speaker008:] There's these logos [pause] that you can put at the bottom of your web page, like "powered by VI". [speaker004:] Yeah. [speaker002:] Wow. [speaker005:] Anyway, quick question. [speaker006:] You could put web bugs in the email. [speaker004:] I see. [speaker005:] How m [speaker008:] Yeah. [speaker005:] Like, there were three meetings this time, or so or how many? [speaker001:] Six. [speaker005:] Six? But, no of different people. So I guess if you're in both these types of meetings, you'd have a lot. But [disfmarker] [vocalsound] How [disfmarker] I mean, it also depends on how many [disfmarker] Like, if we release [disfmarker] this time it's a fairly small number of meetings, but what if we release, like, twenty five meetings to people? [speaker006:] Well, what my s expectation is, is that we'll send out one of these emails [pause] every time a meeting has been checked and is ready. [speaker005:] In th I don't know. Oh. Oh, OK. [speaker006:] So. Tha that was my intention. [speaker005:] So this time was just the first chunk. OK. [speaker006:] It's just [disfmarker] yeah [disfmarker] that we just happened to have a bunch all at once. [speaker005:] Well, that's a good idea. [speaker006:] I mean, maybe [disfmarker] Is that [pause] the way it's gonna be, you think, Jane? [speaker001:] I agree with you.
It's [disfmarker] we could do it, uh [disfmarker] I I could [disfmarker] I'd be happy with either way, batch wise [disfmarker] What I was thinking [disfmarker] Uh, so this one [disfmarker] That was exactly right, that we had a [disfmarker] uh, uh [disfmarker] I [disfmarker] I had wanted to get the entire set of twelve hours ready. Don't have it. But, uh, this was the biggest clump I could do by a time where I thought it was reasonable. [speaker002:] Mm hmm. [speaker001:] People would be able to check it and still have it ready by then. My, um [disfmarker] I was thinking that with the [pause] NSA meetings, I'd like [disfmarker] there are three of them, and they're [disfmarker] uh, I [disfmarker] I will have them done by Monday. Uh, unfortunately the time is later and I don't know how that's gonna work out, but I thought it'd be good to have that released as a clump, too, because then, [vocalsound] you know, they're [disfmarker] they [disfmarker] they have a [disfmarker] it it's in a category, it's not quite so distracting to them, is what I was thinking, and it's all in one chu But after that, when we're caught up a bit on this process, then, um, I could imagine sending them out periodically as they become available. [speaker005:] OK. [speaker001:] I could do it either way. I mean, it's a question of how distracting it is to the people who have to do the checking. [speaker002:] We heard anything from IBM? at all? [speaker003:] Uh. Let's see. We [disfmarker] Yeah, right. So we got the transcript back from that one meeting. Everything seemed fine. Adam [pause] had a script that will [pause] put everything back together and there was [disfmarker] Well, there was one small problem but it was a simple thing to fix. And then, um, [vocalsound] we, uh [disfmarker] I sent him a pointer to three more. And so he's [pause] off and [pause] working on those. [speaker006:] Yeah. 
Now we haven't actually had anyone go through that meeting, to see whether the transcript is correct and to see how much was missed and all that sort of stuff. [speaker001:] That's on my list. [speaker006:] So at some point we need to do that. [speaker003:] Yeah. [speaker001:] Well, that's on my list. [speaker003:] Yeah. It's gonna have to go through our regular process. [speaker006:] I mean, the one thing I noticed is it did miss a lot of backchannels. There are a fair number of "yeahs" and "uh huhs" that [disfmarker] it's just [disfmarker] that aren't in there. So. [speaker001:] Hmm. [speaker002:] But I think [disfmarker] Yeah. Like you said, I mean, that's [disfmarker] that's gonna be our standard proc that's what the transcribers are gonna be spending most of their time doing, I would imagine, [speaker001:] Mm hmm. Mm hmm, mm hmm. [speaker002:] once [disfmarker] once we [disfmarker] [speaker006:] Yes, absolutely. Yeah. [speaker001:] One question about the backchannels. [speaker002:] It's gonna [disfmarker] [speaker001:] Do you suppose that was because they weren't caught by the pre segmenter? [speaker006:] Yes, absolutely. Absolutely. [speaker001:] Oh, interesting. Oh, interesting. OK. [speaker006:] Yeah. They're [disfmarker] they're not in the segmented. It's not that the [pause] IBM people didn't do it. [speaker001:] OK. OK. [speaker006:] Just they didn't get marked. [speaker001:] OK. So maybe when the detector for that gets better or something [disfmarker] I w I [disfmarker] There's another issue which is this [disfmarker] we've been, uh, contacted by University of Washington now, of course, to, um [disfmarker] We sent them the transcripts that correspond to those [pause] six meetings and they're downloading the audio files. So they'll be doing that. Chuck's [disfmarker] Chuck's, uh, put that in. [speaker003:] Mm hmm. 
Yeah, I pointed them to the set that Andreas put, uh, on the [vocalsound] web so th if they want to compare directly with his results they can. And, um, then once, uh, th we can also point them at the, um, uh, the original meetings and they can grab those, too, with SCP. [speaker005:] Wait. So you put the reference files [disfmarker]? [speaker003:] No, no. They d they wanted the audio. [speaker005:] Or the [disfmarker]? [speaker003:] Jane sent them the, uh, transcripts. [speaker005:] No, I mean of the transcripts. Um. Well, we can talk about it off line. [speaker006:] There's another meeting in here, what, at four? [speaker003:] Mm hmm. [speaker006:] Right? Yeah, so we have to finish by three forty five. [speaker008:] D d So, does Washi does [disfmarker] does UW wanna u do this [disfmarker] wanna use this data for recognition or for something else? [speaker003:] Uh, for recognition. [speaker008:] Oh. [speaker005:] I think they're doing w didn't they want to do language modeling on, you know, recognition compatible transcripts [speaker008:] Oh. I see. [speaker002:] Yeah. [speaker005:] or [disfmarker]? [speaker001:] This is to show you, uh, some of the things that turn up during the checking procedure. Um [@ @] [comment] So, this is from one of the NSA meetings and, uh, i if you're familiar with the diff format, the arrow to the left is what it was, and the arrow to the right is [pause] what it was changed to. So, um. [vocalsound] And now the first one. "OK. So, then we started a weekly meeting. The last time, uh [disfmarker]" And the transcriber thought "little too much" But, [vocalsound] uh, really, um, it was "we learned too much", which makes more sense syntactically as well. [speaker008:] And these [disfmarker] the parentheses were f from [disfmarker] [speaker001:] Then [disfmarker] Oh, this [disfmarker] that's the convention for indicating uncertain. [speaker006:] U uncertains. [speaker001:] So the transcriber was right. 
[speaker008:] S [speaker001:] You know, she was uncertain about that. [speaker008:] OK. [speaker001:] So she's right to be uncertain. [speaker008:] Oh. [comment] OK. [speaker001:] And it's also a g a good indication of the [disfmarker] of that. The next one. This was about, uh, Claudia and [pause] she'd been really b busy with stuff, such as waivers. Uh, OK. Um, next one. Um. [vocalsound] This was [pause] an interesting one. So the original was "So that's not [disfmarker] so Claudia's not the bad master here", and then he laughs, [speaker006:] Web master. [speaker001:] but it really "web master". [speaker004:] Oh. [comment] Uh oh. [speaker001:] And then you see another type of uncertainty which is, you know, they just didn't know what to make out of that. So instead of "split upon unknown", [comment] it's "split in principle". [speaker006:] Yep. [speaker004:] Jane, these are from IBM? [speaker006:] Spit upon? [speaker004:] The top lines? [speaker001:] No, no. These are [disfmarker] these are our local transcriptions of the NSA meetings. [speaker006:] No, these are [pause] ours. [speaker004:] Oh. [speaker001:] The transcribers [disfmarker] transcriber's version ver versus the checked version. [speaker004:] Oh, I see. [speaker001:] My [disfmarker] my checked version, after I go through it. [speaker004:] OK. [speaker001:] Um, then you get down here. Um. Sometimes some speakers will insert foreign language terms. That's the next example, the next one. The, uh, version beyond this is [disfmarker] So instead of saying "or", especially those words, "also" and "oder" and some other ones. Those sneak in. Um, the next one [disfmarker] [speaker008:] Discourse markers. [speaker006:] That's cool. [speaker008:] Discourse markers. [speaker001:] S Sorry, what? Discourse markers? [speaker008:] Discourse markers. [speaker001:] Sure. Sure, sure, sure. [speaker008:] Yeah. Yeah. 
[speaker001:] And it's [disfmarker] and it makes sense cuz it's, like, below this [disfmarker] it's a little subliminal there. [speaker008:] Yeah. Yeah, yeah. [speaker001:] Um. OK, the next one, uh, [vocalsound] this is a term. The problem with terminology. Description with th the transcriber has "X as an advance". But really it's "QS in advance". I mean, I [disfmarker] I've benefited from some of these, uh, cross group meetings. OK, then you got, um, [vocalsound] uh, instead of "from something or other cards", [comment] it's "for multicast". And instead of "ANN system related", it's "end system related". This was changed to an acronym initially and it should shouldn't have been. And then, you can see here "GPS" was misinterpreted. It's just totally understanda This is [disfmarker] this is a lot of jargon. Um, and the final one, the transcriber had th "in the core network itself or the exit unknown, not the internet unknown". And it [disfmarker] it comes through as "in the core network itself of the access provider, not the internet backbone core". Now this is a lot of [pause] terminology. [speaker008:] Mmm. [speaker001:] And they're generally extremely good, but, you know in this [disfmarker] this area it really does pay to, um [disfmarker] to double check and I'm hoping that when the checked versions are run through the recognizer that you'll see s substantial improvements in performance cuz the [disfmarker] you know, there're a lot of these in there. [speaker008:] Yeah. So how often [disfmarker]? [speaker006:] Yeah, but I bet [disfmarker] I bet they're acoustically challenging parts anyway, though. [speaker004:] Mmm. [speaker001:] No, actually no. [speaker006:] Oh, really? [speaker001:] Huh uh. [speaker006:] Uh, it's [disfmarker] Oh, so it's just jargon. [speaker001:] It's jargon. Yeah. I mean this is [disfmarker] cuz, you know you don't realize in daily life how much you have top down influences in what you're hearing. 
[speaker008:] Well, but [disfmarker] But [disfmarker] but [disfmarker] [speaker001:] And it's jar it's jargon coupled with a foreign accent. [speaker008:] But we don't [disfmarker] I mean, our language model right now doesn't know about these words anyhow. So, you know, un until you actually [pause] get a decent language model, [@ @] [comment] Adam's right. [speaker002:] Yeah. [speaker006:] It probably won't do any better. [speaker008:] You probably won't notice a difference. But it's [disfmarker] I mean, it's definitely good that these are fixed. I mean, [vocalsound] obviously. [speaker002:] Yeah. [speaker001:] Well, also from the standpoint of getting people's approval, cuz if someone sees a page full of uh, um, barely decipherable w you know, sentences, and then is asked to approve of it or not, [vocalsound] it's, uh, uh [disfmarker] [speaker006:] Did I say that? [speaker002:] Yeah. Yeah. That would be a shame if people said "well, I don't approve it because [pause] the [disfmarker] it's not what I said". [speaker001:] OK. [speaker006:] Well, that's exactly why I put the extra option in, [speaker002:] Yeah. [speaker001:] Exactly. That's why we discussed that. [speaker006:] is that I was afraid people would say, "let's censor that because it's wrong", [speaker002:] Yeah. [speaker006:] and I don't want them to do that. [speaker002:] Yeah. [speaker001:] And then I also [disfmarker] the final thing I have for transcription is that I made a purchase of some other headphones [speaker008:] C [speaker001:] because of the problem of low gain in the originals. And [disfmarker] and they very much appro they mu much prefer the new ones, and actually I [disfmarker] I mean, I [disfmarker] I think that there will be fewer things to correct because of the [disfmarker] the choice. We'd originally chosen, uh, very expensive head headsets [speaker006:] Yeah. Ugh! 
[speaker001:] but, um, they're just not as good as these, um, in this [disfmarker] with this respect to this particular task. [speaker008:] Well, return the old ones. [speaker006:] It's probably impedance matching problems. But [disfmarker] [speaker001:] I don't know exactly, but we chose them because that's what's been used here by prominent projects in transcription. [speaker002:] Could be. [speaker006:] Mm hmm. [speaker001:] So it i we had every reason to think they would work. [speaker008:] So you have spare headsets? [speaker001:] Sorry, what? [speaker008:] You have spare headsets? [speaker006:] They're just earphones. They're not headsets. They're not microphones. [speaker008:] No, no. I mean, just earphones? [speaker005:] Right. [speaker008:] Um, because I, uh, I could use one on my workstation, just to t because sometimes I have to listen to audio files and I don't have to b go borrow it from someone and [disfmarker] [speaker001:] We have actua actually I have [disfmarker] W Well, the thing is, that if we have four people come to work [pause] for a day, I was [disfmarker] I was hanging on to the others for, eh [disfmarker] for spares, [speaker008:] Oh, OK. Sure. No problem. [speaker002:] No, but you'd [disfmarker] If you [disfmarker] Yeah, w we should get it. [speaker001:] but I can tell you what I recommend. [speaker006:] But if you need it, just get it. [speaker008:] I just [disfmarker] [speaker006:] Come on. [speaker008:] Right. [speaker002:] Yeah. If you need it. [speaker008:] Yeah. [speaker002:] Yeah. [speaker004:] Yeah, I still [disfmarker] I still need to get a pair, too. [speaker001:] It'd just have to be a s a separate order [disfmarker] an added order. [speaker002:] They're [disfmarker] they're [disfmarker] they're [disfmarker] they're pretty inexpensive. [speaker005:] Yeah, that [disfmarker] We should order a cou uh, t two or three or four, actually. [speaker004:] I'm using one of these. [speaker002:] Yeah. [speaker004:] Yeah. 
[speaker008:] I think I have a pair that I brought from home, [speaker005:] We have [disfmarker] [speaker008:] but it's f just for music listening [speaker002:] No. Just [disfmarker] just [disfmarker] just [disfmarker] just buy them. [speaker008:] and it's not [disfmarker] [speaker005:] Sh Just get the model number [speaker008:] Nnn. Yeah. [speaker005:] and [disfmarker] [speaker002:] Just buy them. [speaker005:] Where do you buy these from? [speaker008:] Yeah. [speaker001:] Cambridge SoundWorks, just down the street. [speaker005:] Like [disfmarker]? You just b go and b Oh. [speaker001:] Yeah. They always have them in stock. [speaker002:] Yeah. [speaker008:] Anyway. [speaker005:] That'd be a good idea. [speaker006:] W uh, could you email out the brand? [speaker002:] Yeah. [speaker001:] Oh, sure. Yeah. OK. [speaker006:] Cuz I think [disfmarker] sounds like people are interested. [speaker004:] Yeah. [speaker006:] So. [speaker004:] Definitely. [speaker002:] Yeah. [speaker001:] It's made a difference in [disfmarker] in how easy. Yeah. [speaker002:] I realized something I should talk about. So what's the other thing on the agenda actually? [speaker006:] Uh, the only one was Don wanted to, uh, talk about disk space yet again. [speaker004:] Yeah. u It's short. I mean, if you wanna go, we can just throw it in at the end. [speaker002:] No, no. Why don't you [disfmarker] why don't you go ahead since it's short. [speaker004:] Um, well, uh. [speaker006:] Oh, I thought you meant the disk space. [speaker008:] The disk space was short. [speaker006:] Yeah, we know disk space is short. [speaker008:] Yeah. That's what I thought too. [speaker005:] That's a great ambiguity. [speaker002:] Yeah. [speaker005:] It's one of these [disfmarker] it's [disfmarker] it's social [speaker002:] It's [disfmarker] [speaker005:] and, uh, discourse level [speaker002:] I i i it i [speaker004:] Yeah. [speaker005:] and [disfmarker] [speaker002:] Yeah, it's great. [speaker005:] Sorry. 
[speaker002:] Yeah, double [disfmarker] double [disfmarker] [speaker006:] Yeah, it was really goo [speaker005:] See, if I had that little [pause] scratch pad, I would have made an X there. [speaker004:] Thank you, thank you. [speaker006:] Uh, well, we'll give you one then. [speaker002:] Yeah. [speaker004:] Um. [vocalsound] So, um, without thinking about it, when I offered up my hard drive last week [disfmarker] [speaker006:] Oh, no. [speaker005:] It was while I was out of town. [speaker004:] Um, this is always a suspect phrase. But, um, no. I, uh [disfmarker] I realized that we're going to be doing a lot of experiments, um, [vocalsound] o for this, uh, paper we're writing, so we're probably gonna need a lot more [disfmarker] We're probably gonna need [vocalsound] that disk space that we had on that eighteen gig hard drive. But, um, we also have someone else coming in that's gonna help us out with some stuff. So [disfmarker] [speaker002:] We've just ordered a hundred gigabytes. [speaker005:] I think we need, like, another eighteen gig disk [pause] to be safe. [speaker004:] OK. We just need to [disfmarker] [speaker002:] Well, we're getting three thirty [disfmarker] thirty sixes. [speaker005:] So. [speaker004:] OK. [speaker002:] Right? That are going into the main f file server. [speaker005:] OK. [speaker003:] Markham's ordering and they should be coming in soon. [speaker004:] W Well. So [disfmarker] so [disfmarker] [speaker002:] So. [speaker006:] Soon? [speaker004:] Yeah. I mean, I guess the thing is is, all I need is to hang it off, like, [vocalsound] the person who's coming in, Sonali's, computer. [speaker008:] Oh, so [disfmarker] so, you mean the d the internal [disfmarker] the disks on the machines that we just got? [speaker004:] Whew. Or we can move them. [speaker006:] No. [speaker008:] Or extra disk? [speaker003:] These are gonna go onto Abbott. [speaker006:] Ne new disks. 
[speaker002:] Onto Abbott, [speaker004:] So are we gonna move the stuff off of my hard drive onto that when those come in? [speaker002:] the file server. [speaker008:] Oh, oh. OK. [speaker006:] On [disfmarker] Yeah. Once they come in. Sure. [speaker005:] Uh, i [speaker004:] OK. That's fine. [speaker005:] Do [disfmarker] when [disfmarker] when is this planned for [pause] roughly? [speaker003:] They should be [disfmarker] I [disfmarker] I imagine next week or something. [speaker005:] OK. [speaker004:] OK. [speaker006:] If you're [disfmarker] if you're desperate, I have some space on my drive. [speaker005:] So [disfmarker] [speaker004:] I think if I'm [disfmarker] [speaker006:] But I [disfmarker] I vacillate between no space free and [pause] a few gig free. [speaker005:] Yeah. [speaker004:] Yeah. I think I can find something if I'm desperate [speaker006:] So. [speaker004:] and, um, in the meantime I'll just hold out. [speaker006:] OK. [speaker004:] That was the only thing I wanted to bring up. [speaker003:] It should be soon. We [disfmarker] we should [disfmarker] [speaker004:] OK. [speaker002:] So there's another hundred gig. So. [speaker004:] Alright. Great. [speaker008:] Mm hmm. [speaker002:] OK. [speaker004:] That's it. [speaker002:] It's great to be able to do it, [speaker005:] Good. [speaker002:] just say oh yeah, a hundred gig, [speaker004:] Yeah. [speaker006:] Yeah. [speaker002:] no big deal ". [speaker006:] A hundred gig here, a hundred gig there. [speaker005:] Well, each meeting is like a gig or something, [speaker006:] It's eventually real disk space. [speaker002:] Yeah. [speaker005:] so it's really [disfmarker] [speaker002:] Yeah. Um. Yeah. I was just going to comment that I I'm going to, uh, be on the phone with Mari tomorrow, late afternoon. [speaker006:] Oh, yeah. [speaker002:] We're supposed to [vocalsound] get together and talk about, uh, where we are on things. Uh, there's this meeting coming up, uh, and there's also an annual report. 
Now, I never actually [disfmarker] I [disfmarker] I was asking about this. I don't really quite understand this. She was re she was referring to it as [disfmarker] I think this actually [pause] didn't just come from her, but this is [pause] what, uh, DARPA had asked for. Um, she's referring to it as the an annual report for the fiscal year. But of course the fiscal year starts in October, so I don't quite understand w w why we do an annual report that we're writing in July. [speaker003:] She's either really late or really early. [speaker006:] Huh. Or she's getting a good early start. [speaker002:] Uh, I think basically it it's none of those. It's that the meeting is in July so they [disfmarker] so DARPA just said do an annual report. So. So. So anyway, I'll be putting together stuff. I'll do it, uh, you know, as much as I can without bothering people, just by looking at [disfmarker] at papers and status reports. [speaker008:] Hmm. [speaker002:] I mean, the status reports you do are very helpful. Uh, so I can grab stuff there. And if, uh [disfmarker] if I have some questions I'll [disfmarker] [speaker006:] When we remember to fill them out. [speaker002:] Yeah. If [pause] people could do it as soon as [disfmarker] as you can, if you haven't done one si recently. Uh. [vocalsound] Uh, but, you know, I'm [disfmarker] I'm sure before [pause] it's all done, I'll end up bugging people for [disfmarker] for more clarification about stuff. Um. [vocalsound] But, um, I don't know, I guess [disfmarker] I guess I know pretty much what people have been doing. We have these meetings and [disfmarker] and there's the status reports. Uh. But, um. Um. Yeah. So that wasn't a long one. Just to tell you that. 
And if something [vocalsound] hasn't, uh [disfmarker] I'll be talking to her late tomorrow afternoon, and if something hasn't been in a status report and you think it's important thing to mention on [vocalsound] this kind of thing, uh, uh, just pop me a one liner and [disfmarker] and [disfmarker] and I'll [disfmarker] I'll have it in front of me for the phone conversation. [speaker008:] OK. [speaker002:] Uh. I guess, uh, you you're still pecking away at the [pause] demos and all that, probably. [speaker006:] Yep. And Don is [pause] gonna be helping out with that. [speaker002:] Oh, that's right. [speaker006:] So. [speaker002:] OK. [speaker006:] Did you wanna talk about that this afternoon? [speaker004:] Um. [speaker006:] Not here, but later today? [speaker004:] We should probably talk off line about when we're gonna talk off line. [speaker006:] OK. OK. [speaker002:] OK. Yeah, I might want to get updated about it in about a week cuz, um, I'm actually gonna have a [disfmarker] [vocalsound] a few days off the following week, a after the [disfmarker] after the picnic. So. [speaker006:] Oh, OK. [speaker002:] That's all I had. [speaker006:] So we were gonna do sort of status of speech transcription [disfmarker] automatic transcription, but we're kind of running late. So. [speaker005:] How long does it take you to save the data? [speaker006:] Fifteen minutes. [speaker005:] Yeah. [speaker006:] So. If you wanna do a quick [speaker005:] Yeah. [speaker006:] ten minute [disfmarker] [speaker005:] Guess we should stop, like, twenty of at the latest. [speaker002:] Uh [disfmarker] [speaker005:] We [disfmarker] we have another meeting coming in that they wanna record. So. [speaker002:] And there's the digits to do. So maybe [disfmarker] may maybe [disfmarker] maybe [disfmarker] [speaker006:] Yeah. Well, we can skip the digits. [speaker002:] We could. Fi five minute report or something. [speaker005:] It's up to you. I don't [disfmarker] [speaker006:] Whatever you want. 
[speaker002:] Yeah, yeah. Well, I would love to hear about it, [speaker006:] What do you have to say? I'm interested, [speaker002:] especially since [disfmarker] [speaker006:] so [disfmarker] [speaker002:] Yeah. Well, I'm gonna be on the phone tomorrow, so this is just a [pause] good example of [pause] the sort of thing I'd like to [pause] hear about. [speaker005:] Wait. Why is everybody looking at me? [speaker006:] Sorry. [speaker003:] I don't know. [speaker002:] Cuz he looked at you [speaker008:] What? [speaker002:] and says you're sketching. [speaker005:] Uh. I'm not sure what you were referring to. [speaker008:] I [disfmarker] I [disfmarker] I [disfmarker] I'm not [disfmarker] actually, I'm not sure what [disfmarker]? Are we supposed to have done something? [speaker006:] No. We were just talking before about alternating the subject of the meeting. [speaker008:] Oh. Alternating. [speaker005:] Uh huh. [speaker006:] And this week we were gonna try to do [pause] t automatic transcription [pause] status. [speaker005:] I wasn't here last week. [speaker008:] Oh! [speaker005:] Sorry. Oh. [speaker008:] We did that last week. [speaker006:] But we sort of failed. [speaker008:] Right? [speaker005:] Hhh. [speaker006:] No. [speaker002:] I thought we did. [speaker006:] Did we? [speaker008:] Yeah. We did. [speaker006:] OK. OK. So now [disfmarker] now we have the schedule. So next week we'll do automatic transcription status, plus anything that's real timely. [speaker008:] OK. [speaker005:] Oh. OK. [speaker001:] OK. [speaker002:] OK. Whew! [speaker006:] Whew! [speaker003:] Good update. [speaker002:] That was [disfmarker] [speaker006:] Dodged that bullet. [speaker002:] Yeah. Nicely done, Liz. [speaker001:] A woman of few words. [speaker002:] But [disfmarker] but lots of prosody. OK. [vocalsound] OK. [speaker008:] Uh, I mean, we [disfmarker] we really haven't done anything. [speaker006:] Th Excuse me? [speaker008:] Sorry. [speaker001:] Well, since last week. 
[speaker005:] Yeah, we're [disfmarker] [speaker008:] I mean, the [disfmarker] the next thing on our agenda is to go back and look at the, um [disfmarker] [vocalsound] the automatic alignments because, uh, I got some [disfmarker] I [disfmarker] I [disfmarker] I learned from Thilo what data we can use as a benchmark to see how well we're doing on automatic alignments of the background speech [disfmarker] or, of the foreground speech with background speech. So. [speaker006:] Yeah. [speaker008:] But, we haven't actually [disfmarker] [speaker005:] And then, uh, I guess, the new data that Don will start to process [disfmarker] the, um [disfmarker] when he can get these [disfmarker] You know, before we were working with these segments that were all synchronous and that caused a lot of problems [speaker008:] Mmm. [speaker006:] Oh. Right, right. [speaker005:] because you have timed sp at [disfmarker] on either side. [speaker006:] Mm hmm. [speaker005:] And so that's sort of a stage two of trying the same kinds of alignments with the tighter boundaries with them is really the next step. [speaker006:] Right. [speaker005:] We did get our, um [disfmarker] [speaker001:] I'll be interested. [speaker005:] I guess, good news. We got our abstract accepted for this conference, um [disfmarker] workshop, ISCA workshop, in, um, uh, New Jersey. And we sent in a very poor abstract, but they [disfmarker] very poor, very quick. Um, but we're hoping to have a paper for that as well, which should be an interesting [disfmarker] [speaker006:] When's it due? [speaker005:] The t paper isn't due until August. The abstracts were already due. So it's that kind of workshop. [speaker006:] Mm hmm. [speaker005:] But, I mean, the good news is that that will have sort of the European experts in prosody [disfmarker] sort of a different crowd, and I think we're the only people working on prosody in meetings so far, so that should be interesting. [speaker001:] What's the name of the meeting? 
[speaker005:] Uh, it's ISCA Workshop on Prosody in Speech Recognition and Understanding, or something like that [disfmarker] [speaker008:] It's called Prosody to [disfmarker] [speaker006:] Mm hmm. [speaker005:] some generic [disfmarker] [speaker001:] Good. [speaker005:] Uh, so it's focused on using prosody in automatic systems and there's a [disfmarker] um, a web page for it. [speaker002:] Y you going to, uh, Eurospeech? [speaker008:] Yeah. [speaker002:] Yeah. [speaker006:] I don't have a paper but I'd kinda like to go, if I could. Is that alright? [speaker002:] We'll discuss it. [speaker006:] OK. [vocalsound] I guess that's "no". [speaker002:] My [disfmarker] my [disfmarker] my car [disfmarker] my car needs a good wash, by the way. [speaker006:] OK. Well, that th Hey, if that's what it takes, that's fine with me. [speaker002:] Um. [speaker006:] I'll pick up your dry cleaning, too. Should we do digits? [speaker002:] Yeah. [speaker006:] Uh. [speaker008:] Can I go next? Because I have to leave, actually. [speaker006:] Yep. Go for it. Hmm! Thanks. Thank you. [speaker002:] So you get to be the one who has all the paper rustling. Right? [speaker003:] Nice. [speaker004:] OK. [speaker001:] to [disfmarker] to handle. [speaker004:] Is that good? [speaker003:] Right. Yeah, I've have never handled them. [speaker002:] Goats eat cans, to my understanding. [speaker004:] Did we need to do these things? [speaker002:] Tin cans. [speaker003:] Wow. [speaker004:] OK. [speaker002:] Could I hit [disfmarker] hit F seven to do that? on the [disfmarker] Robert? [speaker001:] I'm [speaker002:] Oh, the remote will do it [speaker004:] OK. [speaker002:] OK. Cuz I'm already up there? [speaker001:] in control here. [speaker002:] You are in control. Already? [speaker004:] Wow, we're all so high tech here. Yet another p PowerPoint presentation. [speaker002:] I [disfmarker] Well it makes it easier [pause] to do [speaker004:] Certainly does. [speaker002:] So, [pause] we were [disfmarker] Ah! 
[speaker003:] Johno, where are you? [speaker002:] OK. So, Let's see. Which one of these buttons will do this for me? Aha! OK. [speaker003:] Should you go back to the first one? [speaker002:] Do I wanna go back to the first one? [speaker003:] Well [disfmarker] [speaker002:] OK. [speaker004:] I'm sorry I [disfmarker] [speaker003:] Well, I mean, [pause] just to [disfmarker] [speaker002:] OK. Introduce. [speaker004:] OK. [speaker003:] Yeah, um [vocalsound] Well, "the search for the middle layer". It's basically uh talks about uh [disfmarker] [vocalsound] It just refers to the fact that uh [pause] one of main things we had to do was to [pause] decide what the intermediate sort of nodes were, [speaker004:] I can read! I'm kidding. Mm hmm. [speaker003:] you know, because [disfmarker] [speaker001:] But if you really want to find out what it's about you have to [pause] click on the little [pause] light bulb. [speaker002:] Although I've [disfmarker] I've never [disfmarker] I don't know what the light bulb is for. I didn't i install that into my [pause] PowerPoint presentation. [speaker001:] It opens the Assistant that tells you that the font type is too small. [speaker002:] Ah. [speaker001:] Do you wanna try? [speaker004:] Ach u [speaker002:] I'd prefer not to. [speaker001:] OK. Continue. [speaker004:] It's a needless good idea. Is that the idea? [speaker001:] Why are you doing this in this mode and not in the presentation mode? [speaker004:] OK. [speaker002:] Because I'm gonna switch to the JavaBayes program [speaker001:] Oh! OK. Of course. Mm hmm. [speaker002:] and then [pause] if I do that it'll mess everything up. [speaker004:] I was wondering. [speaker002:] Is that OK? [speaker004:] Yeah, it's OK. [speaker001:] Sure. [speaker003:] Can you maximize the window? [speaker004:] Proceed. [speaker002:] You want me to [disfmarker] Wait, what do you want me to do? [speaker003:] Can you maximize the window so all that stuff on the side isn't [disfmarker] doesn't appear? 
[speaker001:] No, It's OK. It's [disfmarker] It'll work. [speaker002:] Well I can do that, but then I have to end the presentation in the middle so I can go back to open up [speaker003:] OK, fine. [speaker002:] Here, let's see if I can [disfmarker] [speaker003:] Alright. [speaker004:] Very nice. [speaker002:] Is that better? [speaker003:] Yeah. [speaker002:] OK. Uh [disfmarker] I'll also get rid of this "Click to add notes". OK. [speaker004:] Perfect. [speaker002:] So then the features we decided [disfmarker] or we decided we were [disfmarker] talked about, right? Uh the [disfmarker] the prosody, the discourse, [pause] verb choice. You know. We had a list of things like "to go" and "to visit" and what not. The "landmark iness" of uh [disfmarker] I knew you'd like that. [speaker004:] Nice coinage. [speaker002:] Thank you. uh, of a [disfmarker] of a building. Whether the and this i we actually have a separate feature but I decided to put it on the same line [pause] for space. "Nice walls" [vocalsound] which we can look up because I mean if you're gonna [pause] get real close to a building in the Tango mode, right, there's gotta be a reason for it. And it's either because you're in route to something else or you wanna look at the walls. The context, which in this case we've limited to [pause] "business person", "tourist", or [pause] "unknown", the time of day, and "open to suggestions", isn't actually a feature. It's [pause] "We are open to suggestions." [speaker004:] Right. can I just ask the nice walls part of it is that [vocalsound] uh, in this particular domain [disfmarker] you said [disfmarker] be [disfmarker] i it could be on two different lines but are you saying that in this particular domain it happens the [disfmarker] that landmark iness cor is correlated with [speaker003:] No. [speaker002:] Oh [disfmarker] [speaker003:] We have a separate [speaker002:] They're separate things. [speaker004:] their being nice w [speaker003:] feature. [speaker004:] OK. 
[speaker002:] Yeah. I either could put "nice walls" on its own line or "open to suggestions" off the slide. [speaker004:] And [disfmarker] and [disfmarker] [speaker003:] Like you could have a p [speaker004:] By "nice" you mean [disfmarker] [speaker003:] You [disfmarker] Like you could have a post office with uh [disfmarker] you know, nice murals or something. [speaker002:] Right. [speaker004:] OK. [speaker002:] Or one time I was at this [disfmarker] [speaker004:] So "nice walls" is a stand in for like architecturally it, uh [disfmarker] significant [speaker003:] Architecturally appealing from the outside. [speaker004:] or something like that. [speaker002:] But see the thing is, if it's [disfmarker] [speaker004:] OK. [speaker002:] Yeah but if it's architecturally significant you might be able to see it from [disfmarker] Like you m might be able to "Vista" it, right? [speaker001:] Mm hmm. [speaker002:] And be able to [disfmarker] [speaker004:] Mm hmm. [speaker001:] Appreciate it. [speaker002:] Yeah, versus, like, I was at this place in Europe where they had little carvings of, like, dead people on the walls or something. [speaker004:] Mm hmm. [speaker002:] I don't remember w [speaker004:] Uh huh. [speaker002:] It was a long time ago. [speaker004:] There's a lot of those. [speaker002:] But if you looked at it real close, you could see the [disfmarker] the in intricacy of the [disfmarker] of the walls. [speaker004:] OK. So that count as [disfmarker] counts as a nice wall. [speaker002:] Right. [speaker004:] The [disfmarker] OK. [speaker001:] Mm hmm. [speaker004:] Right. Something you want to inspect at close range [pause] because it's interesting. [speaker001:] The [disfmarker] [speaker002:] Exactly. [speaker004:] OK. [speaker001:] Hmm. [speaker002:] Robert? [speaker001:] Well there [disfmarker] there is a term [pause] that's often used. That's "saliency", or the "salience" of an object. 
And I was just wondering whether that's the same as what you describe as "landmark iness". But it's really not. I mean an object can be very salient [speaker004:] Hmm. [speaker001:] but not a landmark at all. [speaker004:] Not a landmark at all. There's landmark for um, touristic reasons and landmark for I don't know navigational reasons or something. [speaker002:] Right. [speaker001:] Yep. [speaker003:] Yeah, we meant, uh, touristic reasons. [speaker002:] Yeah. [speaker004:] OK. [speaker001:] Hmm. [speaker002:] Right. [speaker004:] OK. but you can imagine maybe wanting the oth both kinds of things there for different um, goals. [speaker001:] Hmm. [speaker003:] Yeah. [speaker004:] Right? [speaker002:] Right. But [disfmarker] Yeah. Tourist y landmarks also happen to be [disfmarker] Wouldn't [disfmarker] couldn't they also be [disfmarker] They're not exclusive groups, are they? Like [pause] non tourist y landmarks and [speaker001:] Or it can be als [speaker004:] They're not mutually exclusive? [speaker002:] direct navigational [disfmarker] Yeah. [speaker004:] Right. [speaker002:] OK. [speaker004:] Right. Definitely. [speaker002:] OK, So our initial idea was not very satisfying, [pause] because [disfmarker] uh our initial idea was basically all the features pointing to the output node. Uh. [speaker004:] So, a big flat structure. Right? [speaker002:] Right. [speaker003:] Yep. [speaker002:] And uh, so we [disfmarker] Reasons being, you know, it'd be a pain to set up all the probabilities for that. If we moved onto the next step and did learning of some sort, uh according Bhaskara we'd be handicapped. I don't know belief nets very well. [speaker003:] Well usually, I mean, you know, N [disfmarker] If you have N features, then it's two to the N [disfmarker] [pause] or exponential in N. [speaker002:] And they wouldn't look pretty. So. [speaker003:] Yeah, they'd all be like pointing to the one node. [speaker001:] Mm hmm. [speaker002:] Uh. 
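The exponential blow-up Bhaskara mentions for the flat design (every feature pointing directly at the output node) can be made concrete with a quick back-of-the-envelope count. This is a hypothetical sketch; the feature counts and hidden-layer fan-in are illustrative, not the project's actual numbers:

```python
# Rough CPT-size comparison: a flat net (all features -> mode) versus a
# two-layer net with a small hidden middle layer.
# Illustrative numbers: 8 binary features, a 3-valued "mode" node, and
# 5 binary hidden nodes each conditioned on 3 features.

def flat_cpt_rows(n_features, mode_values=3):
    # Mode conditioned directly on all N binary features:
    # one probability entry per (mode value, joint parent assignment).
    return mode_values * (2 ** n_features)

def layered_cpt_rows(n_features, n_hidden=5, mode_values=3,
                     parents_per_hidden=3):
    # Each hidden node sees only a few features; mode sees only the hidden layer.
    hidden_entries = n_hidden * 2 * (2 ** parents_per_hidden)
    mode_entries = mode_values * (2 ** n_hidden)
    return hidden_entries + mode_entries

flat = flat_cpt_rows(8)        # 3 * 2^8 = 768 entries to hand-tune
layered = layered_cpt_rows(8)  # 80 + 96 = 176 entries
```

So even at eight features the flat table needs several times as many hand-set probabilities, and the gap doubles with every feature added, which is the "pain to set up all the probabilities" point.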
So then our next idea was to add a middle layer, right? So the thinking behind that was [vocalsound] we have the features that we've drawn [pause] from the communication of some [disfmarker] Like, the someone s The person at the screen is trying to communicate some abstract idea, like "I'm [disfmarker]" the [disfmarker] the abstract idea being "I am a tourist I want to go [pause] to this place." Right? So we're gonna set up features along the lines of where they want to go and [pause] what they've said previously and whatnot. And then we have [pause] the means [vocalsound] that they should use. Right? but the middle thing, we were thinking along the lines of maybe trying to figure out, like, the concept of whether they're a tourist [pause] or [pause] whether they're running an errand or something like that along those lines. Or [disfmarker] Yes, we could things we couldn't extract the [disfmarker] from the data, the hidden variables. Yes, good. So then the hidden variables [disfmarker] hair variables we came up with were whether someone was on a tour, running an errand, or whether they were in a hurry, because we were thinking uh, if they were in a hurry there'd be less likely to [disfmarker] like [disfmarker] or th [speaker003:] Want to do Vista, right? Because [pause] if you want to view things you wouldn't be in a hurry. [speaker002:] Right. Or they might be more likely to be using the place that they want to go to as a [disfmarker] like a [pause] navigational point to go to another place. [speaker004:] Mm hmm. [speaker002:] Whether the destination was their final destination, whether the destination was closed. Those are all [disfmarker] And then "Let's look at the belief net" [comment] OK. So that means that I should switch to the [pause] other program. Um right now it's still kind of [pause] in a toy [pause] version of it, because we didn't know the probabilities of [disfmarker] [pause] or [disfmarker] Well I'll talk about it when I get the picture up. 
[speaker001:] No one knows it. [speaker002:] OK. So this right [disfmarker] what we [disfmarker] Let's see. What happens if I maximize this? There we go. But uh [disfmarker] So. The mode [pause] basically has three different [pause] outputs. The probability [disfmarker] whether the probability of a Vista, Tango, or Enter. Um [disfmarker] The "context", we simplified. Basically it's just the businessman, the tourist, unknown. "Verb used" is actually personally amusing mainly because it's [disfmarker] it's just whether the verb is a Tango verb, an Enter verb, or a [pause] Vista verb. [speaker003:] Yeah, that one needs a lot of [disfmarker] [speaker004:] And are those mutually exclusive sets? [speaker003:] Not at all. That's [disfmarker] that [disfmarker] that needs a lot of work. [speaker002:] No. [speaker004:] Right. Got it. [speaker003:] But uh [vocalsound] [pause] that would've made the probably significantly be more complicated to enter, [speaker004:] Uh huh. Yeah. [speaker003:] so we decided that for the purposes of this [pause] it'd be simpler to just have three verbs. [speaker004:] Simple. [speaker002:] Yeah. [speaker004:] Stab at it. Yep. [speaker002:] Right. Um [disfmarker] Why don't you mention things about this, Bhaskara, that I am [disfmarker] not [disfmarker] that are not coming to my mind right now. [speaker003:] OK, so [disfmarker] Yeah, so note the four nodes down there, the [disfmarker] sort of, the things that are not directly extracted. Actually, the five things. The "closed" is also not directly extracted I guess, from the uh [disfmarker] [speaker002:] Well i it's [disfmarker] [speaker004:] From the [pause] utterance? [speaker003:] Hmm. Actually, no, [speaker002:] it's so it sort of is [speaker003:] wait. It is. [speaker002:] because it's [disfmarker] because have the [disfmarker] the time of day [speaker003:] OK, "closed" sort of is. [speaker002:] and the close it just had the [disfmarker] er and what time it closed. 
[speaker003:] Right, so f Right, but the other ones, the final destination, the whether they're doing business, whether they're in a hurry, and whether they're tourists, that kind of thing is all uh [vocalsound] sort of [disfmarker] you know probabilistically depends on the other things. [speaker004:] Inferred from the other ones? [speaker003:] Yeah. [speaker004:] OK. [speaker003:] And the mode, you know, depends on all those things only. [speaker002:] Yeah the [disfmarker] [pause] the actual parse is somewhere up around in here. [speaker003:] Yeah. So we haven't uh, managed [disfmarker] Like we don't have nodes for "discourse" and "parse", although like in some sense they are parts of this belief net. [speaker004:] Mm hmm. [speaker003:] But uh [disfmarker] [vocalsound] The idea is that we just extract those features from them, so we don't actually have a node for the entire parse, [speaker004:] Mm hmm. [speaker002:] Right. [speaker003:] because we'd never do inference on it anyway, so. [speaker004:] So some of the [disfmarker] the top row of things [disfmarker] What's [disfmarker] what's "Disc admission fee"? [speaker003:] whether they discuss the admission fees. So we looked at the data [speaker004:] Oh. [speaker003:] and in a lot of data people were saying things like [vocalsound] "Can I get to this place?" "What is the admission fee?". So that's like a huge uh clue that they're trying to Enter the place rather than uh to Tango or Vista, [speaker004:] Uh huh. [speaker002:] Right. [speaker004:] OK. [speaker003:] so. [speaker004:] I see. [speaker002:] There were [disfmarker] there'd be other things besides just the admission fee, but [pause] you know, [pause] we didn't have [disfmarker] [speaker004:] Mm hmm. [speaker003:] That was like our example. [speaker002:] That was the [pause] initial one that we found. [speaker001:] Mm hmm. [speaker004:] OK. 
So there are certain cues that are very strong [pause] either lexical or topic based um, concept cues [speaker002:] From the discourse that [disfmarker] Yeah. [speaker004:] for one of those. And then in that second row [pause] or whatever that row of Time of Day through that [disfmarker] So all of those [disfmarker] Some of them come from the utterance and some of them are sort of [vocalsound] either world knowledge or situational [pause] things. Right? So that you have no distinction between those and [speaker002:] Right. [speaker004:] OK. [speaker002:] One, uh [disfmarker] Uh. [vocalsound] Um, anything else you want to say Bhaskara? [speaker003:] Um. [speaker004:] "Unmark [@ @] Time of Day" [speaker003:] Yeah, I m I mean [disfmarker] [speaker002:] One thing [disfmarker] uh [disfmarker] [speaker001:] Yeah. They're [disfmarker] they're are a couple of more things. I mean Uh. I would actually suggest we go through this one more time so we [disfmarker] we all uh, agree on what [disfmarker] what the meaning of these things is at the moment and maybe [vocalsound] what changes we [disfmarker] [speaker002:] Yeah, th OK. so one thing I [disfmarker] I'm you know unsure about, is how we have the discus uh [disfmarker] the "admission fee" thing set up. So one [pause] thing that we were thinking was [vocalsound] by doing the layers like this, Uh [disfmarker] we kept um [disfmarker] things from directly affecting the mode [pause] beyond the concept, but you could see perhaps discus the "admission fee" going directly to the mode pointing at "Enter", [speaker001:] Mm hmm. [speaker002:] right? Versus pointing to just at "tourist", [speaker004:] Mm hmm. Mm hmm. [speaker002:] OK? But we just decided to keep all the things we extracted [pause] to point at the middle and then [pause] down. [speaker001:] Mm hmm. Why is the landmark [disfmarker] OK. The landmark is facing to the tourists. That's because we're talking about landmarks as touristic landmarks [speaker002:] Right. 
[speaker001:] not as possible um [speaker003:] Yeah. [speaker002:] Navigational landmarks, [speaker004:] Navigational cue. [speaker002:] yeah. [speaker001:] navigational landmarks so Mm hmm. [speaker002:] Yeah, [speaker001:] Then [disfmarker] [speaker002:] that would be [pause] whatever building they referred to. [speaker004:] Prosody. [speaker003:] Right. So let's see. The variables. Disc "admission fee" is a binary thing, [speaker001:] Mm hmm. [speaker003:] "time of day" is like morning, afternoon, night. Is that the deal? [speaker002:] That's how we have it currently set up, [speaker003:] Yeah. [speaker002:] but it could be, [pause] you know, based upon hour [speaker001:] Yep. [speaker003:] Yeah. Yeah. [speaker002:] or [pause] dis we could discrete it [disfmarker] des descret ize it. [speaker001:] Whatever granularity. [speaker003:] Yeah. [speaker004:] Mm hmm. [speaker001:] Uh huh. [speaker003:] Yeah. Yeah. Normally context will include a huge amount of information, but um, we are just using the particular [vocalsound] part of the context which consists of the switch that they flick to indicate whether they're a tourist or not, I guess. [speaker001:] Yep. [speaker004:] OK. So that's given in their input. [speaker002:] Right. [speaker003:] So [disfmarker] [speaker004:] Right? [speaker003:] Right, so it's not really all of context. Similarly prosody is not all of prosody but simply [vocalsound] for our purposes whether or not they appear tense or relaxed. [speaker001:] Mm hmm. that's very nice, huh? [speaker004:] OK. [speaker003:] and [speaker001:] The [disfmarker] the [disfmarker] So the context is a switch between tourist or non tourist? Or also unknown? [speaker002:] Or un [pause] unknown, [speaker003:] Yeah. [speaker002:] yeah. [speaker001:] OK. [speaker003:] Unknown, [speaker004:] So final dest [speaker003:] right? [speaker004:] So it seems like that would really help you for doing business versus tourist, [speaker003:] Which is th Which one? 
[speaker004:] but OK. so the the context being um, e I don't know if that question's sort of in general, "are you [disfmarker]" I mean the [disfmarker] ar ar are do they allow business people to be doing non business things at the moment? [speaker003:] Yeah, it does. [speaker004:] OK. So then you just have some probabilities over [disfmarker] [speaker003:] Everything is probablistic, and [disfmarker] There's always [disfmarker] [speaker004:] OK. [disfmarker] over which which of those it is. [speaker003:] Yeah. Um, right. So then landmark is [disfmarker] Oh, sorry. "Verb used" is like, right now we only have three values, but in general they would be a probability distribution over all verbs. [speaker004:] Mm hmm. [speaker003:] Rather, let me rephrase that. It [disfmarker] it can take values [vocalsound] in the set of all verbs, that they could possibly use. [speaker004:] Mm hmm. [speaker003:] Um "nice walls" is binary, "closed" is binary "final destination", again [disfmarker] Yeah, all those are binary I guess. And "mode" is one of three things. [speaker001:] So, the [disfmarker] the middle layer is also binary? No. [speaker003:] Yeah, anything with a question mark after it in that picture is a binary node. [speaker001:] Uh. It [disfmarker] Yeah. But all those things without question marks are also binary. Right? [speaker003:] Which things? [speaker004:] Mm hmm. [speaker001:] Nice walls? [speaker002:] Wi [speaker003:] Oh. "Nice walls" is uh [disfmarker] something that we extract from our world knowledge. [speaker001:] Mm hmm. [speaker003:] Yeah, a Oh yeah. Sorry. It is binary. [speaker002:] It is binary [speaker003:] That's true. [speaker002:] but it doesn't have question mark because it's extracted. [speaker003:] Yeah. OK, I see your point. [speaker002:] Yeah. [speaker001:] Yeah. OK. [speaker004:] Uh huh. [speaker001:] I [disfmarker] I gotcha. [speaker003:] Yeah, similarly "closed", I guess. 
[speaker001:] So we can either be in a hurry or not, but we cannot be in a medium hurry at the moment? [speaker003:] Well, we To do that we would add another uh [disfmarker] value for that. [speaker001:] Mm hmm. OK. [speaker003:] And that would require s updating the probability distribution for "mode" as well. [speaker004:] Mm hmm. [speaker003:] Because it would now have to like uh [disfmarker] take that possibility into account. [speaker001:] Mm hmm. [speaker004:] Take a conti [speaker001:] Mm hmm. [speaker004:] So um, of course this will happen when we think more about the kinds of verbs that are used in each cases [speaker001:] Yeah, yeah. [speaker004:] but you can imagine that it's verb plus various other things that are also not in the bottom layer that would [disfmarker] that would help you [disfmarker] [speaker003:] Yeah. [speaker004:] Like it's a conjunction of, I don't know, you know, the verb used and some other stuff that [disfmarker] that would [vocalsound] [pause] determine [disfmarker] [speaker003:] Right. Other syntactic information you mean? [speaker004:] Yeah. Exactly. [speaker003:] Yeah. [speaker004:] Um. [speaker001:] well the [disfmarker] the [disfmarker] sort of the landmark is [disfmarker] is sort of the object right? the argument in a sense? [speaker004:] Usually. I [disfmarker] I don't know if that's always the case I [disfmarker] I guess haven't looked at the data as much as you guys have. So. Um. [speaker001:] that's always warping on something [disfmarker] some entity, [speaker004:] Mm hmm. Mm hmm. [speaker001:] and um [disfmarker] Uh maybe at this stage we will [disfmarker] we do want to [disfmarker] uh sort of get [disfmarker] uh modifiers in there [speaker002:] Hmm. Yeah. [speaker001:] because they may also tell us whether the person is in a hurry or not [speaker003:] Yeah. [speaker002:] I want to get to the church quickly, [speaker004:] Mm hmm. [speaker002:] and uh [disfmarker] [speaker003:] Yeah, right. 
[speaker004:] That would be a cue. [speaker001:] what's the fastest way [speaker003:] Yeah, [speaker004:] Mm hmm. [speaker003:] correct. [speaker004:] Um. OK. [speaker002:] Right. Excellent. Do we have anything else to say about this? [speaker003:] We can do a little demo. [speaker002:] Oh the Yeah, we could. But the demo doesn't work very well. [speaker001:] No, then it wouldn't be a demo [speaker003:] I mean [disfmarker] We can do a demo in the sense that we can um, [vocalsound] just ob observe the fact that this will, in fact do inference. [speaker001:] I was just gonna s [speaker002:] Observe nodes. [speaker003:] So we can, you know, set some of the uh nodes [speaker004:] Yeah. Go ahead. [speaker003:] and then try to find the probability of other nodes. [speaker002:] OK. Dat dat dah. What should I observe? [speaker003:] Just se set a few of them. You don't have to do the whole thing that we did last time. Just like uh, [vocalsound] maybe the fact that they use a certain verb [disfmarker] [speaker002:] OK. [speaker003:] Actually forget the verb. [speaker002:] OK. [speaker003:] just uh [disfmarker] I don't know, say they discussed the admission fee [disfmarker] [speaker002:] OK. [speaker003:] and uh [disfmarker] [vocalsound] the place has nice walls [speaker002:] I love nice walls, OK? I'm a big fan. [speaker004:] it's starting to grow on me [speaker003:] and it's night. [speaker002:] And the time of day is night? [speaker003:] Yeah, no wait. That [disfmarker] that doesn't uh [disfmarker] it's not really consistent. They don't discuss the admission fee. Make that false. [speaker002:] Alright. [speaker003:] And it's night. [speaker002:] Oh, they [disfmarker] OK. Oh whoops. [speaker003:] That didn't work. [speaker002:] I forgot to uh [disfmarker] Ach! [speaker004:] I'd like to do that again. [speaker002:] One thing that bugs me about JavaBayes is you have to click that and do this. [speaker004:] Yeah. That seems kind of redundant but. [speaker003:] OK. 
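The demo they run — clamp a few evidence nodes, then query the posterior over "mode" — amounts to conditioning in the joint distribution and renormalizing. A toy sketch of that observe-and-query step, assuming Python and a single evidence node; every probability here is invented, not the hand-tuned numbers from the meeting:

```python
# Toy sketch of "observe evidence, query the mode": a prior over mode and
# one evidence node P(nice_walls | mode). All numbers are invented.

p_mode = {"Enter": 1 / 3, "Vista": 1 / 3, "Tango": 1 / 3}
p_walls_given_mode = {  # P(nice_walls = true | mode), hypothetical
    "Enter": 0.3,
    "Vista": 0.5,
    "Tango": 0.8,
}

def posterior_mode(nice_walls):
    """P(mode | nice_walls) by enumeration and normalization."""
    unnorm = {}
    for m in p_mode:
        like = p_walls_given_mode[m] if nice_walls else 1 - p_walls_given_mode[m]
        unnorm[m] = p_mode[m] * like
    z = sum(unnorm.values())
    return {m: v / z for m, v in unnorm.items()}

post = posterior_mode(nice_walls=True)
# With these invented numbers, nice walls shift mass toward "Tango".
```

That is also why hand-tuned tables produce the "it loves the Tango" behavior: whichever mode the conditional rows favor will dominate the renormalized posterior.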
[speaker002:] That all you want? [speaker003:] Yes. [speaker002:] OK. So let's see. I want [pause] to query, [speaker003:] "Go" and, right, "query". [speaker002:] right? the mode. OK, and then on here [disfmarker] So let's see. [speaker003:] So that [pause] is the probability that they're Entering, Vista ing or Tango ing. [speaker004:] Mm hmm. [speaker002:] Yeah. [speaker003:] And uh [disfmarker] [speaker004:] So slightly [pause] biased [pause] toward "Tango" ing [speaker003:] Yeah. [speaker004:] OK. [speaker002:] If it's night time, [pause] they have not discussed admission fee, and the n walls are nice. So, yeah. I guess that [pause] sort of makes sense. The reason I say the [pause] demo doesn't work very well is yesterday we uh [disfmarker] [pause] observed everything in favor of [pause] taking a tour, and it came up as "Tango", right? Over and over again. [speaker004:] So. [speaker002:] We couldn't [disfmarker] we couldn't figure out how to turn it off of "Tango". [speaker004:] Uh huh. Huh! [speaker003:] It loves the Tango. [speaker004:] Um. [speaker003:] Well, that's obviously just to do with our probabilities. [speaker002:] Yeah, yeah. [speaker003:] Like, [pause] we totally hand tuned the probabilities, [speaker004:] Yeah. [speaker003:] right. We were like [vocalsound] hmm, well if the person does this and this and this, let's say forty percent for this, [speaker004:] OK. [speaker003:] fifty per Like, you know. So obviously that's gonna happen. [speaker004:] Right. [speaker002:] Yeah. [speaker001:] Yeah but it [disfmarker] it [speaker004:] Maybe the bias toward "Tango" ing was yours, then? [speaker003:] Yeah. [speaker002:] Yeah, [speaker003:] It's [disfmarker] So we have to like fit the probabilities. [speaker002:] that's [disfmarker] that's at [disfmarker] Spent my youth practicing the tango de la muerte. [speaker004:] So, the real case? 
[speaker001:] However you know, it [disfmarker] The purpose was not really, at this stage, to come up with meaningful probabilities but to get thinking about that hidden middle layer. [speaker004:] Mm hmm. [speaker001:] And so th And [disfmarker] [speaker002:] We would actually [disfmarker] I guess once we look at the data more we'll get more hidden [pause] nodes, [speaker001:] Mm hmm. [speaker003:] Yeah. [speaker002:] but I'd like to see more. Not because it would [pause] expedite the probabilities, cuz it wouldn't. It would actually slow that down tremendously. [speaker003:] Um. Well, yeah, I guess. [speaker002:] But. [speaker003:] Not that much though. Only a little early. [speaker002:] No, I think we should have uh [disfmarker] exponentially more [pause] middle nodes than features we've extracted. [speaker003:] OK. [speaker002:] I'm ju I'm just jo [speaker004:] So. Are "doing business" versus "tourist" [disfmarker] They refer to your current task. Like [disfmarker] like current thing you want to do at this moment. [speaker003:] Um. Yeah, well [disfmarker] [vocalsound] That's [disfmarker] that's an interesting point. Whether you're [disfmarker] It's whether [disfmarker] It's not [disfmarker] [speaker004:] And are th [speaker003:] I think it's more like "Are you are tourist? are you in Ham like Heidelberg for a [disfmarker]" [speaker004:] Oh, so, I thought that was directly given by the context [pause] switch. [speaker003:] That's a different thing. What if the context, which is not set, but still they say things like, "I want to go [pause] uh, see the uh [disfmarker] the [disfmarker] the castle and uh, et cetera." [speaker001:] Is it [disfmarker] [speaker002:] Well the I kind of [pause] thought of "doing business" as more of running an errand type thing. [speaker003:] Yeah. Business on the other hand is, uh, definitely what you're doing. 
[speaker001:] So if you run out of cash as a tourist, and [disfmarker] and [disfmarker] and you need to go to the AT [speaker002:] So [pause] i wi th [speaker004:] OK. Oh, I see, you may have a task. wh you have to go get money and so you are doing business at that stage. [speaker001:] Mmm. [speaker003:] Yeah. [speaker002:] Right. [speaker001:] "How do I get to the bank?" [speaker004:] I see. Hmm. [speaker003:] And that'll affect whether you want to enter or you if you [disfmarker] kinda thing. [speaker004:] OK. So the "tourists" node [pause] should be [pause] um, very consistent with the context node. Right? If you say that's more their [disfmarker] in general what their background is. [speaker003:] Yeah, I think this context node is a bit of a [disfmarker] I don't know, like in d Uh [disfmarker] Do we [pause] wanna have [disfmarker] Like it's [disfmarker] [speaker004:] Are you assuming that or not? Like is that to be [disfmarker] I mean if that's accurate then that would determine tourist node. [speaker003:] If the context were to set one way or another, that like strongly uh um, says something about whether [disfmarker] whether or not they're tourists. [speaker004:] Mm hmm. Mm hmm. [speaker003:] So what's interesting is when it's not [disfmarker] when it's set to "unknown". [speaker004:] Mm hmm. [speaker001:] We what set the [disfmarker] they set the context to "unknown"? [speaker004:] OK. [speaker002:] OK. [speaker003:] Right now we haven't observed it, so I guess it's sort of averaging over all those three possibilities. [speaker004:] Mm hmm. [speaker001:] Mm hmm. [speaker002:] Right. [speaker003:] But yes, you can set it to un "unknown". [speaker001:] And if we now do [disfmarker] leave everything else as is the results should be the same, [speaker002:] Oops. [speaker001:] right? [speaker002:] No. 
[speaker003:] Well no, because we [disfmarker] Th the way we set the probabilities [vocalsound] might not have [disfmarker] Yeah, it's [disfmarker] it's an [disfmarker] it's an issue, right? Like [disfmarker] [speaker001:] Pretty much the same? [speaker003:] Yeah, it is. So the issue is that um in belief nets, it's not common to do what we did of like having, you know, a d bunch of values and then "unknown" as an actual value. [speaker004:] Yeah. [speaker003:] What's common is you just like don't observe the variable, [speaker004:] Yeah. [speaker003:] right, and then just marginalizes [disfmarker] [speaker001:] Yep. [speaker003:] But uh [disfmarker] [vocalsound] We didn't do this because we felt that there'd [disfmarker] I guess we were thinking in terms of a switch that actually [disfmarker] [speaker002:] We were thi Yeah, [speaker001:] Mm hmm. [speaker002:] We were th [speaker003:] But uh [disfmarker] I don't know y what the right thing is to do for that. I'm not [disfmarker] I don't know if I totally am happy with [vocalsound] the way it is. [speaker001:] Why don't we [disfmarker] Can we, um [disfmarker] How long would it take to [disfmarker] to add another [pause] node on the observatory and, um, play around with it? [speaker003:] Another node on what? [speaker002:] Uh, well it depends on how many things it's linked to. [speaker001:] Let's just say make it really simple. If we create [pause] something that for example would be um [disfmarker] So th some things can be landmarks in your sense but [pause] they [pause] can never be entered? So for example s a statue. [speaker003:] Good point. [speaker001:] Yeah? [speaker002:] Right. [speaker004:] Mm hmm. [speaker001:] So [pause] maybe we wanna have "landmark" [pause] meaning now "enterable landmark" versus, um something that's simply just a vista point, for example. [speaker002:] Yeah, that's true. [speaker001:] Yeah? 
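The point being debated — an explicit "unknown" value versus simply not observing the variable — can be made concrete: leaving "context" unobserved marginalizes it out under its prior, which is generally not the same as conditioning on a dedicated "unknown" state with its own table rows. A sketch, assuming Python; all numbers are hypothetical:

```python
# Sketch: explicit "unknown" value vs. the standard practice of not
# observing "context" and summing it out. All numbers are invented.

p_context = {"business": 0.3, "tourist": 0.5, "unknown": 0.2}
p_enter_given_context = {  # P(wants_to_enter = true | context), hypothetical
    "business": 0.7,
    "tourist": 0.4,
    "unknown": 0.5,
}

# (a) explicit value: condition on context = "unknown"
p_enter_explicit_unknown = p_enter_given_context["unknown"]

# (b) standard practice: leave context unobserved, marginalize under the prior
p_enter_marginal = sum(
    p_context[c] * p_enter_given_context[c]
    for c in ("business", "tourist", "unknown")
)
# The two need not agree, which is exactly the modeling choice in question.
```

So whether the results stay "pretty much the same" after setting context to "unknown" depends on how close the "unknown" rows are to the prior-weighted average of the other rows.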
uh, a statue or um [disfmarker] [speaker003:] So basically it's addressing a variable that's "enterable or not". So like an "enterable, question mark". [speaker002:] Also [disfmarker] [pause] you know, didn't we have a [pause] size as one? [speaker003:] What? [speaker002:] The size of the landmark. [speaker003:] Um. Not when we were doing this, [speaker002:] Cuz if it's [disfmarker] Yeah. [speaker003:] but I guess at some point we did. [speaker002:] For some reason I had that [disfmarker] OK, that was a thought that I had at one point but then went away. [speaker003:] So you want to have a [disfmarker] a node for like whether or not it can be entered? [speaker001:] Well, for example, if we include that, [speaker003:] Yeah. [speaker001:] yeah? um, accessibility or something, yeah? [speaker003:] Hmm. [speaker001:] "Is it [disfmarker] Can it be entered?" then of course, this is [pause] sort of binary as well. [speaker003:] Yeah. [speaker001:] And then um, there's also the question whether it may be entered. In the sense that, you know, if it's [pause] Tom [disfmarker] the house of Tom Cruise, you know, it's enterable but you may not enter it. You know? You're not allowed to. [speaker003:] Yeah. [speaker001:] Unless you are, whatever, his [disfmarker] his divorce lawyer or something. [speaker003:] Yeah. [speaker001:] Yeah? and um [disfmarker] And these are very observable sort of from the [disfmarker] from the ontology sort of things. [speaker002:] Way Does it actually help to distinguish between those two cases though? Whether it's practically speaking enterable, or actually physically enterable [pause] or not? [speaker004:] It seems like it would for uh, uh [pause] determining whether they wanna go into it or not. 
[speaker001:] y y If [disfmarker] If you're running an errand you maybe more likely to be able to enter places that are usually not [vocalsound] al w you're not usually [disfmarker] not allowed to uh m [speaker004:] Cuz they [disfmarker] [speaker002:] Well I can see why [disfmarker] [speaker001:] Let's get this uh b clearer. S so it's matrix between if it's not enterable, [speaker002:] Whether it's a [disfmarker] Whether it's a public building, and whether it's [disfmarker] actually has a door. [speaker001:] period. Yeah, exactly. [speaker002:] OK. [speaker001:] This is sort of uh [speaker004:] Mm hmm. [speaker002:] So Tom Cruise's house is [pause] not a public building but it has a door. [speaker004:] Right. [speaker003:] Mm hmm. [speaker002:] But the thing is [disfmarker] OK, sh explain to me why it's necessary [pause] to distinguish between whether something has a door and is [pause] not public. Or, if something [disfmarker] It seems like it's equivalent to say that it doesn't have a door a [pause] and it [disfmarker] [speaker001:] Mm hmm. [speaker002:] Or "not public" and "not a door" are equivalent things, it seems like in practice. [speaker001:] Yeah. Right. Yeah. So we would have [disfmarker] What does it mean, then, that we have to [disfmarker] we have an object type statue. That really is an object type. [speaker002:] Right. [speaker001:] So there is [disfmarker] there's gonna be a bunch of statues. And then we have, for example, an object type, [pause] hmm, that's a hotel. How about hotels? [speaker002:] OK. [speaker001:] So, the most famous building in Heidelberg is actually a hotel. It's the hotel Zum Ritter, which is the only Renaissance building in Heidelberg that was left after [pause] the big destruction and for the Thirty Years War, blah blah blah. [speaker002:] Hmm. Does it have nice walls? [speaker001:] It has wonderful walls. [speaker002:] Excellent. [speaker001:] Um And lots of detail, c and carvings, engravings and so forth, so. 
But, um, it's still an unlikely candidate for the Tango mode I [pause] must say. But. Um. So [disfmarker] s So if you are a d Well it's very tricky. So I guess your question is [disfmarker] so far I have no really arg no real argument why to differentiate between statues as [disfmarker] statues and houses of celebrities, from that point of view. Huh. OK. Let [disfmarker] Let's do a [disfmarker] Can we add, just so I can see how it's done, uh, a "has door" [pause] property or [disfmarker]? [speaker002:] OK. [speaker003:] What would it, uh, connect to? Like, what would, uh, it affect? [speaker001:] Um, I think, um, it might affect [disfmarker] Oh actually it's [disfmarker] it [disfmarker] [pause] it wouldn't affect any of our nodes, right? [speaker003:] What I was thinking was if you had a [disfmarker] like [disfmarker] [speaker001:] Oh it's [disfmarker] it affects th The "doing business" is certainly not. [speaker002:] You could affect [disfmarker] [pause] Theoretically you could affect "doing business" with "has door". [speaker003:] Yeah. [speaker004:] Hmm. [speaker003:] OK. [speaker001:] It should, um, inhibit that, [speaker003:] Right. [speaker002:] Let's see. [speaker001:] right? [speaker003:] Yeah, I don't know if JavaBayes is nice about that. It might be that if you add a new thing pointing to a variable, you just like [disfmarker] it just overwrites everything. But [pause] you can check. [speaker002:] Well, [pause] we have it saved. So. [vocalsound] We can rel open it up again. [speaker003:] OK. It's true. [speaker002:] The [pause] safety net. [speaker004:] I think you could just add it. I mean, I have before OK. Whew! [speaker003:] Well that's fine, but we have to see the function now. Has it become all point fives or not? [speaker004:] Oh, right. [speaker002:] Let's see. So this is "has door" Uh, true, false. That's acceptable. And I want to edit the function going to that, right? [speaker003:] No. This is fine, [speaker002:] Oh no. 
[speaker003:] this business. [speaker002:] Right. It was fine. added this one. [speaker003:] Yep. What would be nice if it [disfmarker] is if it just like kept the old function for either value [speaker002:] This [disfmarker] [speaker003:] but. Nope. Didn't [pause] do it. [speaker004:] Oh. [speaker002:] Oh wait, it might be [disfmarker] Did we w Yes, that's not good. [speaker003:] That's [pause] kind of annoying. [speaker001:] OK, so just dis dismiss everything. Close it and [disfmarker] and load up the old state so [pause] it doesn't screw [disfmarker] screw that up. [speaker002:] Let's see. Oops. [speaker003:] Hmm. [speaker001:] Maybe you can read in? [speaker003:] Ha So have you used JavaBayes a lot? [speaker004:] Yes. Really [vocalsound] I ha I've [disfmarker] I haven't used it a lot and I haven't used it in the last you know many months so [speaker003:] OK. [speaker004:] um, uh, we can ask someone. [speaker003:] It might be worth uh [disfmarker] asking around. [speaker004:] Um. [speaker003:] Like, we looked at sort of uh [disfmarker] a page that had like a bunch of [disfmarker] [speaker004:] Yeah. Srini [disfmarker] [speaker003:] OK. Yeah, [speaker004:] Srini's the one to ask I would say. [speaker003:] S I guess he'd be the person. [speaker004:] Um. He might know. [speaker003:] Yeah. Cuz [disfmarker] [speaker004:] And. [speaker003:] Yeah. I mean in a way this is a lot of good features in Java it's cra has a GUI and it's uh [disfmarker] [speaker004:] Mm hmm. [speaker003:] I guess those are the main two things. It does learning, it has [disfmarker] [speaker004:] Mm hmm. [speaker002:] No it doesn't, actually. [speaker004:] Yeah. [speaker003:] What? [speaker002:] I didn't think it did learning. [speaker003:] OK. [speaker002:] Maybe it did a little bit of learning, [speaker003:] Oh right. [speaker002:] I don't remember. [speaker003:] Maybe you're right. OK. Right. But uh [disfmarker] it's free. [speaker002:] Which is w quite positive, yeah. 
[speaker003:] But uh, yeah. Maybe another thing that uh [disfmarker] But I mean its interface is not the greatest. So. [speaker004:] Mm hmm. [speaker002:] But actually it had an interface. A lot of them were like, you know. [speaker004:] Yep. [speaker001:] Command line. [speaker002:] Huh. [speaker001:] What is the c code? Can w can we see that? How do you write the code [speaker002:] The c [speaker001:] or do you actually never have to write any code there? [speaker003:] Yeah. There is actually a text file that you can edit. But it's [disfmarker] You don't have to do that. [speaker002:] There's like an XML format for [pause] Bayes nets. [speaker003:] Is it XML? [speaker002:] The there is one. I don't know if this uses it. [speaker003:] Oh, I see. No this doesn't use it. [speaker002:] But it [disfmarker] [speaker003:] I didn't think it did. [speaker002:] Yeah, the [disfmarker] the [disfmarker] [speaker003:] You can look at the text file. [speaker002:] Yeah. [speaker003:] But do you have it here? Well, maybe you don't. [speaker002:] Uh, yes I do actually. Let me see. [speaker003:] Oh yes, of course. Like, there's the [disfmarker] [speaker002:] Oh man, I didn't n [pause] Is there an ampersand in DOS? [speaker003:] Nope. Just s l start up a new DOS. [speaker002:] We That's alright. I can probably double cli click on it. [speaker003:] Or [disfmarker] Yeah, right. [speaker001:] n uh [disfmarker] [speaker002:] Let's see. [speaker003:] Yep. [speaker002:] Let's see, come on. [speaker003:] It'll ask you what you [disfmarker] what it wants [disfmarker] what you want to open it with and see what BAT, I guess. [speaker002:] One of these days, it should open this, theoretically. [speaker001:] Go [disfmarker] Right mouse. Open with. [speaker003:] That's [disfmarker] Oh! [speaker002:] Oh there we go. Maybe it was just [disfmarker] Oh! [speaker001:] Oh. [speaker002:] W Ah, it was dead. To the world. [speaker004:] God! [speaker002:] OK. [speaker001:] Through the old Notepad. 
That's my favorite editor. [speaker002:] I like [disfmarker] I like Word Pad because it has the uh [disfmarker] the returns, [speaker001:] Wordpad? I [disfmarker] [speaker002:] the carriage returns on some of them. [speaker001:] Mm hmm. OK. [speaker002:] You know how they get "auto fills" I guess, [speaker001:] Mmm hmm. [speaker002:] or whatever you call it. [speaker003:] Anyway, there it is. [speaker001:] So this is sort of LISP y? No. [speaker003:] Uh, Yeah. [speaker002:] It just basically looks like it just specifies a bunch of [speaker001:] Mm hmm. [speaker003:] Yeah. That's how actual probability tables are specified. As, like, lists of numbers. [speaker002:] Yeah. [speaker004:] Mm hmm. Mm hmm. [speaker003:] So theoretically you could edit that. [speaker004:] Mm hmm. [speaker002:] It just that [disfmarker] it's [disfmarker] [speaker003:] But [pause] they're not very friendly. [speaker002:] Yeah the ordering isn't very clear on [disfmarker] [speaker004:] Right. [speaker003:] So you'd have to like figure out [disfmarker] Like you have to go and [disfmarker] [speaker004:] The layout [pause] of the table. [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Well I [disfmarker] [speaker002:] Actually we could write a program that could generate this. [speaker003:] Yeah. I think so. [speaker004:] You could. [speaker002:] Yeah you could. [speaker003:] it's not [disfmarker] [speaker002:] We were doing it [disfmarker] [speaker003:] Yeah we can maybe write an interface th for uh entering probability distributions easily, something like [disfmarker] like a little script. That might [vocalsound] be worth it. [speaker001:] And that might do. [speaker004:] Yeah. I actually seem to recall Srini complaining about something to do with Entering probability so this is probably [speaker003:] The other thing is it is in Java [speaker004:] Yeah, it's [disfmarker] Yeah. [speaker003:] so. [speaker002:] We could [pause] manipulate the source [pause] itself? [speaker004:] Yeah. 
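The "little script for entering probability distributions" they float could be as simple as flattening a readable per-parent-combination table into the flat list of numbers the net's text file wants. A sketch, assuming Python; the iteration order used here is an assumption, and the real row ordering JavaBayes expects would have to be checked against its format:

```python
# Sketch of a CPT-entry helper: map readable parent-value combinations to
# distributions, then flatten into one "list of numbers" like the text file.
# The row ordering here (parent combos outer, sorted child values inner)
# is an assumption; verify against the actual JavaBayes file before use.
from itertools import product

def flatten_cpt(parent_values, table):
    """table maps a tuple of parent values -> {child value: probability}."""
    flat = []
    for combo in product(*parent_values):
        dist = table[combo]
        assert abs(sum(dist.values()) - 1.0) < 1e-9, f"row must sum to 1: {combo}"
        flat.extend(dist[v] for v in sorted(dist))
    return flat

# Hypothetical example: P(mode | in_hurry) with one binary parent.
table = {
    ("true",):  {"Enter": 0.5, "Tango": 0.1, "Vista": 0.4},
    ("false",): {"Enter": 0.3, "Tango": 0.4, "Vista": 0.3},
}
numbers = flatten_cpt([("true", "false")], table)
```

The built-in sum check is the main win over editing the raw number lists by hand: malformed rows fail loudly instead of silently skewing the net.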
[speaker002:] Or [disfmarker] [speaker001:] Do you have the true source files [speaker002:] I don't know if he actually [disfmarker] [speaker001:] or just the class? [speaker003:] Yeah. Uh, yeah. we do [speaker002:] Does he [disfmarker] [speaker003:] I [disfmarker] I saw directory called "source", [speaker002:] Oh. [speaker004:] Mm hmm. [speaker002:] I didn't e [speaker003:] or [disfmarker] Yeah. Go up one? [speaker002:] Up one. [speaker003:] Yeah. [speaker002:] Ah yes, good. "Source". That's [disfmarker] that's quite nice. [speaker003:] I don't know if it actually manipulate the source, though. That might be a bit complicated. I think it might [disfmarker] it might be simpler to just [pause] have a script that, you know [disfmarker] [speaker001:] Mm hmm. [speaker003:] It's, like, friendly, [speaker004:] The d the data tables. [speaker003:] it allows you enter things well. [speaker004:] Yeah. [speaker002:] Right. [speaker001:] But if th if there is an XML [pause] file that [disfmarker] or format that it can also read [disfmarker] I mean it just reads this, right? When it starts. [speaker003:] Mm hmm. [speaker002:] Yeah I know there is an [disfmarker] I was looking on the we web page and he's updated it for an XML version of I guess Bayes nets. There's a Bayes net spec for [disfmarker] in XML. [speaker001:] Mm hmm. [speaker003:] He's [disfmarker] Like this guy has? The JavaBayes guy? [speaker002:] Yeah. [speaker003:] So [disfmarker] but, e he doesn't use it. So in what sense has he updated it? [speaker002:] Well th you can either [disfmarker] you ca or you can read both. [speaker003:] Oh. I see. [speaker002:] To my understanding. [speaker003:] OK. [speaker004:] Oh. [speaker003:] That would be awesome. [speaker002:] Because uh [disfmarker] Well at least the [disfmarker] uh [disfmarker] I could have misread the web page, I have a habit of doing that, but. [speaker001:] OK, wonderful. [speaker003:] OK. [speaker001:] So you got more slides? 
[speaker002:] Do I have more slides? Um yes, one more. "Future Work". I think every presentation have a should have a "Future Work" slide. But uh it's basically [disfmarker] we already talked about all this stuff, so. [speaker003:] Um. The additional thing is I guess learning the probabilities, [pause] also. E That's maybe, I don't know [disfmarker] [vocalsound] If [disfmarker] [speaker002:] Uh that's future future work. [speaker003:] Does [disfmarker] That's [disfmarker] Yeah. [speaker002:] Right. [speaker003:] Very future. [speaker001:] Mm hmm. [speaker002:] And of course if you have a presentation that doesn't [disfmarker] have something that doesn't work at all, then you have "What I learned", as a slide. [speaker004:] Can't you have both? [speaker002:] You could. My first approach failed. [speaker004:] Right. [speaker002:] What I learned. OK, so I think that uh our presentation's finished. [speaker001:] Good. [speaker002:] I know what I like about these meetings is one person will nod, and then the next person will nod, and then it just goes all the way around the room. [speaker001:] So the uh [disfmarker] [speaker004:] I missed my turn. [speaker002:] No I [disfmarker] Earlier I went [nonvocalsound] and Bhaskara went [nonvocalsound] and you did it. You did it. [speaker001:] It's like yawning. [speaker004:] It's like yawning. [speaker001:] And this announcement was in stereo. [speaker003:] Ha. [speaker001:] OK. So this means um [disfmarker] [speaker002:] Should I pull up the [pause] net again? [speaker004:] Yeah. Could you put the [disfmarker] the um, net up again? [speaker002:] Yes. [speaker004:] Thanks. [speaker002:] There we go. And actually I was [disfmarker] cuz I got a wireless mike on. 
[speaker004:] So a more general thing than "discussed admission fee" um, could be [disfmarker] I [disfmarker] I'm just wondering whether the context, the background context of the discourse [vocalsound] might be [disfmarker] I don't know, if there's a way to define it or maybe you know generalize it some way um, there might be other cues that, say, um, in the last few utterances there has been something that has strongly associated with say one of the particular modes uh, I don't know if that might be [disfmarker] [speaker001:] Mm hmm. [speaker004:] uh, and [disfmarker] and into that node would be various [disfmarker] various things that [disfmarker] that could have specifically come up. [speaker001:] I think we [disfmarker] I think a [disfmarker] a sort of general strategy here [disfmarker] You know, this is [disfmarker] this is excellent because [disfmarker] um it gets you thinking along these terms [disfmarker] is that maybe we ob we could observe a couple of um discourse phenomena such as the admission fee, and something else and something else, that happened in the discourse before. [speaker004:] Mm hmm. Right. [speaker001:] And um [disfmarker] let's make those four. And maybe there are two um [disfmarker] So maybe this could be sort of a separate region of the net, [pause] which has two [disfmarker] [pause] has its own middle layer. Maybe this, you know, has some [pause] kind [pause] of um, funky thing that di if this and this may influence these hidden nodes of the discourse which is maybe something that is uh, a more general version of the actual [pause] phenomenon that you can observe. So things that point towards [disfmarker] [speaker002:] So instead of single node, for [pause] like, if they said the word "admission fee" [disfmarker] [speaker004:] Exactly. Yeah. [speaker002:] "admission fee", or maybe, you know, "how much to enter" or you know something, [speaker004:] Opening hours or something like that. [speaker002:] other cues. Exactly.
That would all f funnel into one node that would [pause] constitute entrance requirements or something like that. [speaker001:] So "pay a visit" [disfmarker] [speaker004:] Mm hmm. [speaker001:] uh uh d [speaker003:] Sure. Yeah. [speaker001:] Yeah? [speaker004:] I mean it sort of get into plan recognition kinds of things in the discourse. I mean that's like the bigger um, version of it. [speaker001:] Exactly. Yeah? And then maybe there are some discourse acts if they happened before, um it's more for um a cue that the person actually wants to get somewhere else and that you are in a [disfmarker] in a [disfmarker] in a route um, sort of proceeding past these things, so this would be just something that [disfmarker] where you want to pass it. Hmm? Is that it? However these are of course then the [disfmarker] the nodes, the observed nodes, for your middle layer. So this [pause] again points to "final destination", "doing business", "tourist hurry" and so forth. [speaker004:] Mm hmm. [speaker002:] OK. [speaker001:] Yeah? And so then we can say, "OK. we have a whole [pause] region [disfmarker]" [vocalsound] in a e [speaker004:] That's a whole set of discourse related cues to your middle layer. [speaker001:] Yeah, exactly. [speaker004:] Right? [speaker001:] And this is just [disfmarker] then just one. So e because at the end the more we um [disfmarker] add, you know, the more spider web ish it's going to become in the middle and the more of hand editing. It's going to get very ugly. But with this way we could say OK, these are the [pause] discourse phenomena. They ra may have their own hidden layer [pause] that points to some of [pause] the [disfmarker] the real hidden layer, um or the general hidden layer. [speaker003:] Sure. [speaker001:] And the same we will be able to do for syntactic information, the verbs used, the object types used, modifiers. And maybe there's a hidden layer for that. [speaker003:] Yep. [speaker001:] And so forth and so forth.
Then we have context. [speaker003:] Yeah. So essentially a lot of those nodes can be expanded into little Bayes nets of their own. [speaker001:] Yep. [speaker004:] Mm hmm. [speaker001:] Precisely. So. [speaker002:] One thing that's kind of been bugging me when I [disfmarker] more I look at this is that the [disfmarker] I guess, the fact that the [disfmarker] there's a complete separation between the [pause] observed features and in the output. [speaker003:] Yeah. [speaker002:] I mean, it makes it cleaner, but then uh [disfmarker] I mean. [speaker003:] That's true. [speaker002:] For instance if the discourse does [disfmarker] [speaker004:] What do you mean by that? [speaker002:] well for instance, the "discourse admission fee" [pause] node seems like it should point directly to the [disfmarker] [speaker004:] Uh huh. [speaker002:] or increase the probability of "enter [pause] directly" versus "going there via tourist". [speaker003:] Yeah. Or we could like add more, uh, sort of middle nodes. Like we could add a node like do they want to enter it, which is affected by admission fee and by whether it's closed and by whether it has a door. [speaker001:] Mm hmm. [speaker003:] So it's like [disfmarker] There are [disfmarker] Those are the two options. [speaker002:] Right. [speaker003:] Either like make an arrow directly or put a new node. [speaker004:] Hmm. [speaker002:] Yeah, that makes sense. [speaker001:] Yeah. And if it [disfmarker] if you do it [disfmarker] If you could connect it too hard you may get such phenomenon that [disfmarker] like "So how much has it cost to enter?" and the answer is two hundred fifty dollars, and then the persons says um "Yeah I want to see it." Yeah? meaning "It's way out of my budget" um [disfmarker] [speaker002:] There are places in Germany where it costs two hundred fifty dollars to enter? [speaker001:] Um, nothing comes to mind. Without thinking too hard. Um, maybe, yeah of course, um opera premiers. [speaker002:] Really? 
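The intermediate node proposed here — a "do they want to enter it" node fed by admission fee, closed, and has-door — is one way to keep the middle layer from becoming the spider web they worry about. A common parameterization for such cue-combining nodes is a noisy-OR, where each new cue adds one weight instead of doubling the table. A sketch, assuming Python; the cue names follow the discussion, and the weights and leak are invented:

```python
# Sketch of a cue-combining intermediate node ("wants to enter") as a
# noisy-OR over its cues: each cue independently fails to trigger the node
# with probability (1 - weight). Cue names from the discussion; numbers invented.

def noisy_or(active_cues, weights, leak=0.05):
    """P(node = true) = 1 - (1 - leak) * prod over active cues of (1 - w)."""
    p_false = 1.0 - leak
    for cue in active_cues:
        p_false *= 1.0 - weights[cue]
    return 1.0 - p_false

weights = {
    "discussed_admission_fee": 0.6,
    "has_door": 0.3,
    "is_open": 0.4,
}

p_enter = noisy_or({"discussed_admission_fee", "has_door"}, weights)
```

Adding a "has door" cue then means adding one weight, rather than re-entering a conditional table whose size doubled.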
[speaker001:] So you know. [speaker004:] Hmm. [speaker001:] Or [disfmarker] or any good old Pink Floyd concert. [speaker002:] I see. If you want to see "The Magic Flute" or something. [speaker001:] Yeah. [speaker004:] Or maybe um, a famous restaurant. or, I don't know. There are various things that you might w not want to eat a [pause] meal there but [pause] your own table. [speaker002:] The Spagos of Heidelberg. [speaker001:] I think that the h I mean nothing beats the [disfmarker] the admission charge prices in Japan. So there, two hundred dollars is [disfmarker] is moderate for getting into a discotheque. You know. [speaker003:] Really. [speaker001:] Then again, everything else is free then once you're ins in there. Food and drink and so forth. So. I mean. But i you know, i we can [disfmarker] Something [disfmarker] Somebody can have discussed the admission fee and u the answer [pause] is s if we [disfmarker] um, you know, um [disfmarker] still, based on that result is never going to enter that building. [speaker002:] Hmm. [speaker001:] You know? Because it's just too expensive. [speaker002:] Oh yeah, I think I see. So the discourse refers to "admission fee" but it just turns out that they change their mind in the middle of the discourse. [speaker004:] Yeah. you have to have some notion of not just [disfmarker] I mean there's a [disfmarker] there's change across several turns of discourse [speaker002:] Right. [speaker004:] so [vocalsound] I don't know how [disfmarker] if any of this was discussed [disfmarker] but how i if it all this is going to interact with [vocalsound] whatever general [vocalsound] uh, other [disfmarker] other discourse processing that might be happen. [speaker001:] Mm hmm. Mm hmm. [speaker004:] I mean. [speaker003:] Yeah. [comment] [nonvocalsound] Yeah. [speaker002:] What sort of discourse [pause] processing is uh [disfmarker] are the [disfmarker] How much is built into SmartKom and [disfmarker] [speaker001:] It works like this. 
The uh, um [disfmarker] I mean. The first thing we get is that [pause] already the intention is sort of t They tried to figure out the intention, right? simply by parsing it. And this um [disfmarker] m won't differentiate between all modes, yeah? but at least it'll tell us "OK here we have something that [disfmarker] somebody that wants to go someplace, now it's up for us to figure out what kind of going there is [disfmarker] is [disfmarker] [pause] is happening, and um, if the discourse takes a couple of turns before everything [disfmarker] all the information is needed, what happens is you know the parser parses it and then it's handed on to the discourse history which is, um o one of the most elaborate [disfmarker] elaborate modules. It's [disfmarker] it's actually the [disfmarker] the whole memory of the entire system, that knows what [disfmarker] wh who said what, which was [disfmarker] what was presented. It helps an an anaphora resolution and it [disfmarker] and it fills in all the structures that are omitted, so, [pause] um, because you say" OK, [pause] how can I get to the castle? "Oh, how [disfmarker] how much is it?" and um "yeah I would like uh [disfmarker] um [disfmarker] to g let's do it" and so forth. So even without an a ana [pause] anaphora somebody has to make sure that information we had earlier on is still here. [speaker002:] Mm hmm. [speaker001:] Because not every module keeps a memory of everything that happened. so [vocalsound] whenever the uh, um person is not actually rejecting what happened before, so as in "No I really don't want to see that movie. I'd rather stay home and watch TV" um [disfmarker] What movie was selected in what cinema in what town is [disfmarker] is going to be sort of added into the disc into the representations every di at each dialogue step, by the discourse model [disfmarker] discourse [pause] model, Yeah, that's what it's called. 
and, um, it does some help in the anaphora resolution and it also helps in coordinating the gesture screen issues. So a person pointing to something on the screen, you know, [speaker002:] Hmm. [speaker001:] the discourse model actually stores what was presented at what location on the s on the screen so it's a [disfmarker] it's a rather huge [vocalsound] [disfmarker] huge thing but um [disfmarker] [vocalsound] [comment] um [disfmarker] [pause] [vocalsound] we can [pause] sort of [disfmarker] It has a very clear interface. We can query it whether admission fees were discussed in the last turn and [disfmarker] and the turn before that or you know how [pause] deep we want to search [speaker002:] OK. [speaker001:] um [disfmarker] which is a question. How deep do we want to sear, you know? Um [disfmarker] but we should [pause] try to keep in mind that, you know, we're doing this sort of for research, so we [disfmarker] we should find a limit that's reasonable and not go, you know, all the way back to Adam and Eve. You know, did that person ever discuss admissions fee [disfmarker] fees in his entire life? And the dialogues are pretty [disfmarker] pretty you know [vocalsound] concise and [disfmarker] Anyway. [speaker004:] So one thing that might be helpful which is implicit in the [pause] use of "admission fee discussion" as a cue for entry, [vocalsound] [pause] is thinking about the plans that various people might have. Like all the different [vocalsound] sort of general schemas that they might be following OK. This person is um, finding out information about this thing in order to go [pause] in as a tourist or finding out how to get to this place [pause] in order to do business. Um, because then [pause] anything that's a cue for one of the steps [pause] would be slight evidence for that overall plan. Um, I don't know. 
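[The query interface described above — asking "whether admission fees were discussed in the last turn and the turn before that", with some bounded search depth rather than going "all the way back to Adam and Eve" — might look roughly like the following. The class and method names are invented; SmartKom's actual discourse-model API is not specified in this discussion.]

```python
class DiscourseHistory:
    """Toy stand-in for the discourse-history module described above:
    it remembers which topics came up at each dialogue turn and answers
    depth-limited queries about earlier turns."""

    def __init__(self):
        self.turns = []  # one set of topic labels per dialogue turn

    def add_turn(self, topics):
        self.turns.append(set(topics))

    def was_discussed(self, topic, depth=2):
        """Was `topic` mentioned in any of the last `depth` turns?
        A finite depth keeps the search reasonable."""
        return any(topic in t for t in self.turns[-depth:])

history = DiscourseHistory()
history.add_turn(["route", "castle"])
history.add_turn(["admission_fee"])
history.add_turn(["opening_hours"])

recent = history.was_discussed("admission_fee", depth=2)   # True
only_last = history.was_discussed("admission_fee", depth=1)  # False
print(recent, only_last)
```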
They're [disfmarker] in [disfmarker] in non in sort of more traditional AI kinds of plan recognition things you sort of have [vocalsound] [pause] you know, some idea at each turn of agent doing something, "OK, wha what plans is this a [disfmarker] consistent with?" and then get s some more information and then you see [pause] "here's a sequence that this sort of roughly fits into". It [disfmarker] it might be useful here too. I [disfmarker] I don't know how [speaker001:] Mm hmm. [speaker004:] you know you'd have to [vocalsound] [pause] figure out what knowl what knowledge representation would work for that. [speaker001:] I mean the [disfmarker] [vocalsound] u u [speaker002:] Hmm. [speaker001:] It's in the [disfmarker] these [disfmarker] these [disfmarker] these plan schemas. I mean there are some [disfmarker] some of them are extremely elaborate, you know. "What do you need [disfmarker] need to buy a ticket?" [speaker004:] Mm hmm. [speaker001:] You know? [speaker004:] Mm hmm. [speaker001:] and it [disfmarker] it's fifty steps, [speaker004:] Mm hmm. [speaker001:] huh? just for buying a ticket [pause] at a ticket counter, you know, and [disfmarker] and maybe that's helpful to look at it [disfmarker] to look at those. It's amazing what human beings can do. W when we talked uh we had the example, you know, of you being uh [disfmarker] a s a person on a ticket counter working at railway station and somebody r runs up to you with a suitcase in his hands, says [vocalsound] New York and you say Track seven, huh? And it's because you know that that person actually is following, you know [disfmarker] You execute a whole plan of going through a hundred and fifty steps, you know, without any information other than "New York", huh? inferring everything from the context. So, works. Um, even though there is probably no train from here to New York, right? [speaker004:] Mmm. Not direct. [speaker002:] You'd uh probably have to transfer in Chicago. [speaker001:] Mm hmm. 
But uh [disfmarker] it's possible. Um, no you probably have to transfer also somewhere else. Right? Is that t San Francisco, Chicago? [speaker002:] I think [disfmarker] [speaker001:] Is that possible? [speaker002:] One time I saw a report on trains, and I think there is a l I don't know if [disfmarker] I thought there was a line that went from somewhere, maybe it was Sacramento to Chicago, but there was like a California to Chicago line of some sort. [speaker001:] Mm hmm. Hmm. [speaker002:] I could be wrong though. It was a while ago. [speaker004:] The Transcontinental Railroad, doesn't that ring a bell? I think it has to exist somewhere. [speaker002:] Yeah but I don't know if it's still [disfmarker] They might have blown it up. [speaker001:] Well it never went all the way, right? I mean you always had to change trains at Omaha, [speaker004:] Well most of the way. Uh. Mm hmm. [speaker001:] right? One track ended there and the other one started at [disfmarker] five meters away from that [speaker004:] Yeah. [speaker001:] and [vocalsound] sort of [disfmarker] [speaker004:] Well. [pause] You seem to know better than we do so. [speaker001:] yeah? Has anybody ever been on an Amtrak? [speaker004:] I have. But not transcontinentally. [speaker002:] I'm frightened by Amtrak myself. [speaker003:] What? Why? [speaker002:] I just [disfmarker] They seem to have a lot of accidents on the Amtrak. [speaker003:] Really? [speaker001:] Their reputation is very bad. [speaker002:] Yeah. Yeah. [speaker001:] huh? It's not maybe reality. [speaker004:] It's not like German trains. Like German trains are really great so. [speaker001:] But [disfmarker] you know, I don't know whether it's [disfmarker] which ones are safer, you know, statistically. [speaker004:] Um, but they're faster. [speaker003:] Yeah. [speaker001:] Much faster. Mm hmm. [speaker003:] And there's much more of them. 
Yeah, they're Yeah, it's [pause] way better [speaker001:] yeah I used [disfmarker] um Amtrak quite a bit on the east coast and I was surprised. [speaker004:] Mm hmm. [speaker001:] It was actually OK. You know, on Boston New York, [speaker004:] Yeah. [speaker001:] New York Rhode Island, [speaker003:] Yeah. I've done that kind of thing. [speaker001:] whatever, Boston. [speaker004:] Mm hmm. [speaker001:] Yeah. But [disfmarker] That's [pause] a different issue. [speaker002:] This is going to be an interesting transcript. [speaker001:] Hmm? [speaker003:] I [disfmarker] I want to see what it does with uh "landmark iness". That's [disfmarker] [speaker002:] Yeah. [speaker004:] Let's all say it a few more times. Just kidding. [speaker003:] So. [speaker002:] It'd help it figure it out. [speaker004:] Right. [speaker003:] Yeah. [speaker004:] So by the way tha that structure [pause] that Robert drew on the board was like more um, [vocalsound] [pause] cue type based, right, here's like we're gonna [vocalsound] segment off a bit of stuff that comes from discourse and then [pause] some of the things we're talking about here are more [disfmarker] you know, we mentioned maybe [pause] if they talk about [pause] um, I don't know, entering or som you know like they might be more task based. [speaker002:] Hmm. [speaker004:] So I [disfmarker] I don't know if there [disfmarker] There's obviously some [disfmarker] m more than one way of organizing [pause] the variables into something so. [speaker001:] I think that um [disfmarker] What you guys did is really nicely sketching out different tasks, and maybe some of their conditions. [speaker004:] Mm hmm. [speaker001:] One task is more likely you're in a hurry when you do that kind of s doing business, [speaker004:] Mm hmm. 
[speaker001:] and [disfmarker] and less in a hurry when uh [disfmarker] you're a tourist Um [disfmarker] [vocalsound] tourists may have [disfmarker] never have final destinations, you know because they are eternally traveling around so [vocalsound] maybe what [disfmarker] what [disfmarker] what happened [disfmarker] what might happen is that we do get this sort of task based middle layer, [speaker004:] Mm hmm. Mm hmm. [speaker001:] and then we'll get these sub middle layers, that are more cue based. [speaker004:] That feed into those? [speaker001:] Nah? [speaker004:] Mm hmm. [speaker001:] Might be [disfmarker] might be a nice dichotomy of [disfmarker] of the world. So, um I suggest w to [disfmarker] for [disfmarker] to proceed with this in [disfmarker] in the sense that [pause] maybe throughout this week the three of us will [disfmarker] will talk some more about maybe segmenting off different regions, and we make up some [disfmarker] [pause] some toy a observable "nodes" [disfmarker] is that what th [speaker002:] Refined y re just refine the [disfmarker] [speaker003:] OK. [speaker001:] What's the technical term? [speaker003:] For which? [speaker001:] For the uh [disfmarker] nodes that are observable? The "outer layer"? [speaker003:] Just observable nodes, evidence nodes? [speaker002:] The features, I don't know, whatever you [disfmarker] [speaker001:] Feature ma make up some features for those [disfmarker] [speaker003:] Yeah. [speaker001:] Identify [pause] four regions, maybe make up some features for each region and uh [disfmarker] [vocalsound] and uh, uh [disfmarker] and uh [disfmarker] middle layer for those. And then these should then connect somehow to the more plan based deep space [speaker003:] Yeah. [speaker002:] Basically just refine [pause] some of the more general nodes. [speaker001:] Yep. The they [disfmarker] they will be aud ad hoc for [disfmarker] for [disfmarker] for some time to come. 
[speaker003:] Yeah, this is totally like [disfmarker] The probabilities and all are completely ad hoc. We need to look at all of them. I mean but, they're even like [vocalsound] [pause] I mean like, close to the end we were like, uh, you know we were like uh [vocalsound] really ad hoc. [speaker004:] It's a even distribution. Like, whatever. [speaker003:] Right? Cuz if it's like, uh [disfmarker] If it's four things coming in, right? And, say, some of them have like three possibilities and all that. So you're thinking like [disfmarker] like a hundred and forty four or something [pause] possible things [disfmarker] numbers to enter, [speaker004:] And [disfmarker] That's terrible. [speaker003:] right? So. [speaker002:] Some of them are completely absurd too, [speaker004:] That's uh [disfmarker] [vocalsound] Well [disfmarker] [speaker002:] like [disfmarker] [vocalsound] they want to enter, but it's closed, it's night time, you know there are tourists and all this weird stuff happens at the line up and you're like [disfmarker] [speaker003:] Yeah, the only like possible interpretation is that they are like [disfmarker] come here just to rob the museum or [pause] something to that effect. [speaker002:] confused. [speaker004:] In which case you're supposed to alert the authorities, [vocalsound] and see appropriate action. [speaker003:] Yeah. [speaker002:] Yeah. [speaker003:] Yeah, another thing to do, um, is also to, um [disfmarker] I guess to ask around people about other Bayes net packages. Is Srini gonna be [pause] at the meeting tomorrow, do you know? [speaker004:] Maybe. [speaker001:] The day after tomorrow. [speaker003:] Wait [disfmarker] [speaker004:] Quite possibly. Oh, oh, sorry. [speaker001:] Wednesday. [speaker003:] Day after tomorrow. Yeah. [speaker004:] Sorry, Wednesday, yeah. [speaker003:] Maybe we can ask him about it. [speaker002:] Who's talking on Wednesday? [speaker004:] Mmm. 
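[The arithmetic behind the "hundred and forty four" complaint: a node's conditional probability table needs one distribution per combination of its parents' values, so its size is the product of the parent cardinalities. The particular sizes below are hypothetical — the transcript only says four things come in and some have three possibilities — but they show how quickly hand-filling the table becomes hopeless.]

```python
from math import prod

# Hypothetical cardinalities for four parent nodes; with values in the
# 3-4 range the table already needs well over a hundred rows to fill in,
# which is why doing it by hand felt "really ad hoc".
parent_cardinalities = [3, 3, 4, 4]
rows = prod(parent_cardinalities)  # one probability distribution per row
print(rows)
```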
[speaker002:] I haven't [disfmarker] J Jerry never sent out a [disfmarker] sent out an email, did he, ever? [speaker003:] No. But he mentioned at the last meeting that someone was going to be talking, I forget who. Uh. [speaker001:] Oh, isn't Ben? [speaker004:] Ben? I think it's Ben actually, [speaker001:] Ben, then, Ben. [speaker004:] yeah, [speaker002:] Ah! [speaker004:] um, giving his job talk I think. um, Sorry. I was just reading the screen. [speaker001:] OK. [speaker003:] Oh. [speaker002:] Yeah. [speaker001:] So the uh [disfmarker] That will be one [disfmarker] one thing we could do. I actually uh, have [disfmarker] Um, also we can uh, start looking at the SmartKom tables and I will [disfmarker] [speaker002:] Right. [speaker001:] I actually wanted to show that [pause] to you guys now but um. [speaker002:] Do you want to [pause] trade? [speaker001:] Um, no I [disfmarker] I actually made a mistake because it [disfmarker] it fell asleep and when Linux falls asleep on my machine it's [disfmarker] it doesn't wake up ever, so I had to reboot [speaker004:] Oh, no. [speaker001:] And if I reboot without a network, I will not be able to start SmartKom, because I need to have a network. [speaker002:] Uh [disfmarker] [speaker001:] So we'll do that t maybe uh [disfmarker] [speaker003:] But. OK. But once you start [disfmarker] sart start SmartKom you can be on [disfmarker] You don't have to be on a network anymore. Is that the deal? [speaker001:] Yep. [speaker003:] Ah, interesting. [speaker002:] Why does SmartKom need a network? [speaker001:] Um [disfmarker] [vocalsound] it looks up some stuff that, [pause] you know, is [disfmarker] is that [disfmarker] is in the [disfmarker] written by the operating system only if it [disfmarker] if you get a DHCP request, so it [disfmarker] you know, my computer does not know its IP address, you know? [speaker002:] Ah. [speaker001:] You know. So. Unless it boots up with networking. [speaker002:] It's plugged in. Yeah. 
[speaker001:] And I don't have an IP address, they can't look up [disfmarker] they don't know who localhost is, and so forth and so forth. [speaker004:] Hmm. [speaker001:] Always fun. But it's a, um, simple solution. We can just um, go downstairs and [disfmarker] and [disfmarker] and look at this, but maybe not today. The other thing um [disfmarker] I will [disfmarker] oh yeah, OK, I have to report [vocalsound] um, data collection. We interviewed Fey, [speaker004:] Mm hmm. [speaker001:] She's willing to do it, meaning be the wizard for the data collection, also [vocalsound] maybe transcribe a little bit, if she has to, but also recruiting subjects, organizing them, and so forth. So that looks good. Jerry however suggested that we should uh have a trial run with her, see whether she can actually do all the uh spontaneous, eloquent and creativeness that we uh expect of the wizard. And I talked to Liz about this and it looks as if Friday afternoon will be the time when we have a first trial run for the data. [speaker003:] So who would be the subject [pause] of this trial run? [speaker001:] Pardon me? [speaker003:] Who [disfmarker] Will there be a [disfmarker] Is one [disfmarker] Is you [disfmarker] one of you gonna be the subject? Like are you [disfmarker] [speaker001:] Um Liz also volunteered to be the first subject, [vocalsound] which I think might be even better than us guys. [speaker004:] Good. [speaker002:] One of us, yeah. [speaker001:] If we do need her for the technical stuff, then of course one of you has to sort of uh [disfmarker] jump in. [speaker002:] I like how we've [disfmarker] you guys have successfully narrowed it down. "Is one of you going to be the subject?" Is one of you [disfmarker] jump in. [speaker004:] Reference. I haven't done it yet. [speaker003:] Well I just figured it has to be someone who's, um, familiar enough with the data to cause problems for the wizard, so we can, uh, see if they're you know good. [speaker004:] Oh plants? 
e u someone who can plant difficult things. [speaker003:] Yeah. I mean that's what we wanna [pause] check, right? [speaker001:] Um, [speaker004:] Well, in this case it's a p it's a [pause] sort of testing of the wizard rather than of the subject. [speaker003:] Isn't that what it is? [speaker004:] It's uh [disfmarker] [speaker001:] yes w we [disfmarker] we would like to test the wizard, but [vocalsound] you know, if we take a subject that is completely unfamiliar with the task, or any of the set up, we get a more realistic [speaker003:] I guess that would be reasonable. [speaker004:] Yeah. [speaker002:] Yeah. [speaker001:] you know, set up as [disfmarker] [speaker002:] I know. That's probably a good enough test of [disfmarker] [speaker004:] Uh huh. [speaker003:] Sort of having an actively antagonistic, uh [disfmarker] [speaker001:] Yeah. [speaker004:] Yeah. That might be a little unfair. Um. [speaker001:] Yeah. [speaker004:] I'm sure if we uh, [disfmarker] You think there's a chance we might need Liz for, whatever, the technical side of things? I'm sure we can get [pause] other people around who don't know anything [pause] um, if we want another subject. [speaker001:] Yeah, yeah. [speaker004:] You know. Like I can drag Ben into it or something. Although he might cause problems but. So, is it a experimental setup for the um, data collection [pause] totally [pause] ready [disfmarker] determined? [speaker002:] I like that. "Test the wizard." I want that on a T shirt. [speaker001:] Um [disfmarker] I think it's [disfmarker] it's [disfmarker] it's [disfmarker] I mean [disfmarker] experimental setup u on the technical issue yes, except we st I think we still need uh [disfmarker] a recording device for the wizard, just a tape recorder that's running in a room. [speaker004:] Mm hmm. [speaker001:] But um [disfmarker] in terms of specifying the scenario, [vocalsound] um [disfmarker] uh [disfmarker] uh [disfmarker] we've gotten a little further [speaker004:] Mm hmm. 
[speaker001:] but um [disfmarker] we wanted to wait until we know who is the wizard, and have the wizard partake in the [pause] ultimate [pause] sort of definition probe. So [disfmarker] so if [disfmarker] if on Friday it turns out that she really likes it and [disfmarker] and we really like her, then nothing should stop us from sitting down next week and [vocalsound] [comment] getting all the details completely figured out. [speaker004:] Mm hmm. [speaker001:] And um [disfmarker] [speaker004:] OK. So the ideal task um, [pause] will have [pause] whatever I don't know how much the structure of the evolving Bayes net [vocalsound] [pause] will af affect [disfmarker] Like we wanna [disfmarker] we wanna be able to collect [vocalsound] as much of the variables that are needed for that, [speaker001:] Mmm yea some. [speaker004:] right? in the course of the task? Well not all of them but you know. [speaker001:] Bu e e e I'm even [disfmarker] This [disfmarker] this Tango, Enter, Vista is sort of, itself, an ad hoc scenario. [speaker004:] Mm hmm. Mm hmm. [speaker001:] The [disfmarker] the basic u um idea behind the uh [pause] data collection was the following. The data we get from Munich is [pause] very command line, simple linguistic stuff. [speaker004:] Mm hmm. [speaker001:] Hardly anything complicated. No metaphors whatsoever. [speaker004:] Mm hmm. [speaker001:] Not a rich language. So we wanted just to collect data, to get [disfmarker] that [disfmarker] that [disfmarker] that [pause] elicits more, uh, that elicits richer language. [speaker004:] Mm hmm. [speaker001:] And we actually did not want to constrain it too much, [speaker004:] Mm hmm. [speaker001:] you know? Just see what people say. And then maybe we'll discover the phenomenon [disfmarker] the phenomena that we want to solve, you know, with [pause] whatever engine we [disfmarker] we come up with. Um. So this [disfmarker] this [disfmarker] this is a parallel track, [speaker004:] OK. 
[speaker001:] you know, there [disfmarker] they hopefully meet, [speaker004:] So in other words this data collection is [pause] more general. [speaker001:] but since [disfmarker] [speaker004:] It could [disfmarker] it could [pause] be used for not just this task. [speaker001:] It should tell us, you know, what kind of phenomenon could occur, it should tell us also maybe something about the difference between people who think they speak to a computer versus people who think they speak to a human being [speaker004:] Mm hmm. [speaker001:] and the sort of differences there. So it may get us some more information on the human machine pragmatics, um, that no one knows anything about, as of yesterday. And uh [disfmarker] [vocalsound] nothing has changed [comment] since then, so. Uh. And secondly, now that of course we have sort of started to lick blood with this, and especially since um [disfmarker] [vocalsound] Johno can't stop Tango ing, [vocalsound] we may actually include, you know, those [disfmarker] those intentions. So now I think we should maybe have at least one navigational task with [disfmarker] with sort of explicit [disfmarker] uh [speaker004:] Mm hmm. [speaker001:] not ex it's implicit that the person wants to enter, [speaker004:] Mm hmm. [speaker001:] and maybe some task where it's more or less explicit that the person wants to take a picture, [speaker004:] Mm hmm. [speaker001:] or see it or something. So that we can label it. I mean, that's how we get a corpus that we can label. [speaker004:] Mm hmm. Exactly. [speaker001:] Whereas, you know, if we'd just get data we'd never know what they actually wanted, we'd get no cues. Yep. [speaker002:] Alrighty. [speaker003:] OK. [speaker001:] That was that. [speaker002:] So is this the official end of the meeting now? [speaker003:] Yep. [speaker004:] Looks like it. [speaker003:] So what's "Economics, the fallacy"? [speaker002:] I just randomly [pause] label things. 
[speaker001:] Ma [speaker002:] So that has nothing to do with economics or anything. [speaker003:] Oh, really? OK. [speaker001:] Maybe we ought to switch off these things before we continue. [speaker004:] OK. Switching o [speaker001:] OK. We seem to be recording. [speaker007:] Alright! We're not crashing. [speaker001:] So, sorry about not [disfmarker] [speaker004:] Number four. [speaker001:] not pre doing everything. The lunch went a little later than I was expecting, Chuck. [speaker005:] Hmm? [speaker007:] OK. [speaker002:] Chuck was telling too many jokes, or something? [speaker005:] Yeah. [speaker001:] Yep. Pretty much. [speaker007:] OK. [vocalsound] Does anybody have an agenda? [speaker001:] No. [speaker006:] Well, I'm [disfmarker] I sent a couple of items. They're [disfmarker] they're sort of practical. [speaker007:] I thought [pause] somebody had. [speaker006:] I don't know if you're [disfmarker] [speaker007:] Yeah, that's right. [speaker006:] if [disfmarker] if that's too practical for what we're [pause] focused on. [speaker007:] Yeah, we only want th useless things. [speaker001:] I mean, we don't want anything too practical. [speaker007:] Yeah. [speaker001:] Yeah, that would be [disfmarker] [speaker007:] No, why don't we talk about practical things? [speaker006:] OK. [speaker007:] Sure. [speaker006:] Well, um, I can [pause] give you an update on the [pause] transcription effort. [speaker007:] Great. [speaker006:] Uh, maybe [nonvocalsound] raise the issue of microphone, uh, um procedures with reference to the [pause] cleanliness of the recordings. [speaker007:] OK, transcription, uh, microphone issues [disfmarker] [speaker006:] And then maybe [nonvocalsound] ask, th uh, these guys. The [disfmarker] we have great [disfmarker] great, uh, p steps forward in terms of the nonspeech speech pre segmenting of the signal. [speaker007:] OK. [speaker001:] Well, we have steps forward. [speaker006:] Well, it's a [disfmarker] it's a big improvement. [speaker003:] Yeah. 
[speaker007:] Yes. [speaker003:] I would prefer this. [speaker007:] Yeah, well. OK. Uh [disfmarker] [speaker004:] We talk about the [disfmarker] [vocalsound] the results of [speaker007:] You have some [disfmarker] Yeah. OK. [speaker001:] I have a little bit of IRAM stuff [speaker004:] use [disfmarker] [speaker001:] but [pause] I'm not sure if that's of general interest or not. [speaker007:] Uh, bigram? [speaker001:] IRAM. [speaker007:] IRAM. [speaker004:] IRAM. [speaker007:] Well, m maybe. [speaker001:] IRAM, bigram, [speaker004:] Bi Bigram. [speaker007:] Yeah, let's [disfmarker] let's see where we are at three thirty. [speaker001:] you know. [speaker002:] Hmm. [speaker007:] Um [disfmarker] [speaker002:] Since, uh [disfmarker] since I have to leave as usual at three thirty, can we do the interesting stuff first? [speaker006:] I beg your pardon? [speaker007:] Well [disfmarker] [speaker003:] Which is [disfmarker]? [speaker006:] I beg your pardon? [speaker001:] What's the interesting stuff? [speaker007:] Yeah. Th now you get to tell us what's the interesting part. [speaker004:] Yeah. [speaker005:] Please specify. [speaker007:] But [disfmarker] [speaker003:] Yeah. [speaker002:] Well, uh, I guess the work that's been [pause] done on segmentation would be most [disfmarker] [speaker006:] I think that would be a good thing to start with. [speaker002:] Yeah. [speaker007:] OK. Um, and, um, [vocalsound] the other thing, uh, which I'll just say very briefly that maybe relates to that a little bit, which is that, um, uh, one of the suggestions that came up in a brief meeting I had the other day when I was in Spain with, uh, Manolo Pardo and [vocalsound] Javier, uh, Ferreiros, who was [pause] here before, was, um, why not start with what they had before but add in the non silence boundaries. So, in what Javier did before when they were doing, um [disfmarker] h he was looking for, uh, speaker change [pause] points. [speaker004:] Mm hmm. [speaker007:] Um. 
As a simplification, he originally did this only using [pause] silence as, uh, a [pause] putative, uh, speaker change point. [speaker003:] Yeah. [speaker007:] And, uh, he did not, say, look at points where you were changing broad sp uh, phonetic class, for instance. And for Broadcast News, that was fine. Here obviously it's not. [speaker004:] Yeah. [speaker007:] And, um, so one of the things that they were pushing in d in discussing with me is, um, w why are you spending so much time, uh, on the, uh, feature issue, uh, when perhaps if you sort of deal with what you were using before [speaker004:] Uh huh. [speaker007:] and then just broadened it a bit, instead of just ta using silence as putative change point also [disfmarker]? So then you've got [disfmarker] you already have the super structure with Gaussians and H you know, simple H M Ms and so forth. [speaker004:] Nnn, yeah. [speaker007:] And you [disfmarker] you might [disfmarker] So there was a [disfmarker] there was a little bit of a [disfmarker] a [disfmarker] a [disfmarker] a difference of opinion because I [disfmarker] I thought that it was [disfmarker] it's interesting to look at what features are useful. [speaker004:] Yeah. [speaker007:] But, uh, on the other hand I saw that the [disfmarker] they had a good point that, uh, if we had something that worked for many cases before, maybe starting from there a little bit [disfmarker] Because ultimately we're gonna end up [vocalsound] with some s su kind of structure like that, [speaker004:] Yeah. [speaker007:] where you have some kind of simple HMM and you're testing the hypothesis that, [vocalsound] uh, there is a change. [speaker004:] Yeah. [speaker007:] So [disfmarker] so anyway, I just [disfmarker] reporting that. But, uh, uh [disfmarker] [speaker004:] OK. [speaker007:] So. Yeah, why don't we do the speech nonspeech discussion? [speaker006:] Yeah. 
Do [disfmarker] I [disfmarker] I hear [disfmarker] you [disfmarker] you didn't [disfmarker] [speaker003:] Speech nonspeech? [speaker006:] Uh huh. [speaker003:] OK. [speaker006:] Yeah. [speaker003:] Um, so, uh, what we basically did so far was using the mixed file to [disfmarker] to detect s speech or nonspeech [pause] portions in that. [speaker007:] Mm hmm. [speaker003:] And what I did so far is I just used our old Munich system, which is an HMM ba based system with Gaussian mixtures for s speech and nonspeech. And it was a system which used only one Gaussian for silence and one Gaussian for speech. And now I added, uh, multi mixture possibility for [disfmarker] [vocalsound] for speech and nonspeech. [speaker007:] Mm hmm. Mm hmm. [speaker003:] And I did some training on [disfmarker] on one dialogue, which was transcribed by [disfmarker] Yeah. We [disfmarker] we did a nons s speech nonspeech transcription. [speaker004:] Jose. [speaker003:] Adam, Dave, and I, we did, for that dialogue and I trained it on that. And I did some pre segmentations for [disfmarker] for Jane. And I'm not sure how good they are or what [disfmarker] what the transcribers say. They [disfmarker] they can use it or [disfmarker]? [speaker006:] Uh, they [disfmarker] they think it's a terrific improvement. And, um, it real it just makes a [disfmarker] a world of difference. [speaker007:] Hmm. [speaker006:] And, um, y you also did some something in addition which was, um, for those in which there [nonvocalsound] was, uh, quiet speakers in the mix. [speaker003:] Yeah. Uh, yeah. That [disfmarker] that was one [disfmarker] one [disfmarker] one thing, uh, why I added more mixtures for [disfmarker] for the speech. So I saw that there were loud [disfmarker] loudly speaking speakers and quietly speaking speakers. [speaker007:] Mm hmm. [speaker003:] And so I did two mixtures, one for the loud speakers and one for the quiet speakers. 
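[The starting point Thilo describes below — one Gaussian for silence and one for speech, before adding more mixtures — can be sketched as a per-frame diagonal-Gaussian classifier over acoustic features. The toy 2-D features, the synthetic data, and the absence of any HMM transition modeling are all simplifications for illustration.]

```python
import numpy as np

def fit_gaussian(frames):
    """Fit a diagonal Gaussian to a set of frame feature vectors."""
    mu = frames.mean(axis=0)
    var = frames.var(axis=0) + 1e-6  # variance floor
    return mu, var

def log_likelihood(frames, mu, var):
    """Per-frame log-likelihood under a diagonal Gaussian."""
    return -0.5 * (np.log(2 * np.pi * var)
                   + (frames - mu) ** 2 / var).sum(axis=1)

# Toy 2-D "features" (think: loudness, zero-crossing rate).
rng = np.random.default_rng(0)
speech_frames = rng.normal([5.0, 2.0], 0.5, size=(200, 2))
silence_frames = rng.normal([1.0, 0.5], 0.5, size=(200, 2))

params = {"speech": fit_gaussian(speech_frames),
          "silence": fit_gaussian(silence_frames)}

def classify(frames):
    """Label each frame with the higher-likelihood class."""
    ll = {c: log_likelihood(frames, *p) for c, p in params.items()}
    return np.where(ll["speech"] > ll["silence"], "speech", "silence")

labels = classify(np.array([[4.8, 1.9], [0.9, 0.4]]))
print(labels)
```

[Replacing each single Gaussian with a mixture (e.g. one component per loud and per quiet speaker, as described below) changes only the emission model; the classification rule stays the same.]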
[speaker001:] And did you hand label who was loud and who was quiet, [speaker003:] I did that for [disfmarker] for five minutes of one dialogue [speaker001:] or did you just [disfmarker]? Right. [speaker003:] and that was enough to [disfmarker] to train the system. [speaker004:] Yeah. [speaker002:] W What [disfmarker]? [speaker003:] And so it [disfmarker] it adapts, uh, on [disfmarker] while running. So. [speaker002:] What kind of, uh, front end processing did you do? [speaker003:] Hopefully. [speaker004:] OK. [speaker003:] It's just our [disfmarker] our old Munich, uh, loudness based spectrum on mel scale twenty [disfmarker] twenty critical bands and then loudness. [speaker002:] Mm hmm. [speaker003:] And four additional features, which is energy, loudness, modified loudness, and zero crossing rate. So it's twenty four [disfmarker] twenty four features. [speaker007:] Mm hmm. [speaker002:] Mmm. [speaker006:] And you also provided me with several different versions, which I compared. [speaker003:] Yeah. Yeah. [speaker006:] And so you change [nonvocalsound] parameters. What [disfmarker] do you wanna say something about the parameters [nonvocalsound] that you change? [speaker003:] Yeah. You can specify [vocalsound] the minimum length of speech or [disfmarker] and silence portions which you want. And so I did some [disfmarker] some modifications in those parameters, basically changing the minimum [disfmarker] minimum [pause] length for s for silence to have, er to have, um [disfmarker] yeah [disfmarker] to have more or less, uh, silence portions in inserted. So. [speaker001:] Right. So this would work well for, uh, pauses and utterance boundaries and things like that. [speaker004:] Yeah. [speaker003:] Yeah. Yeah. [speaker001:] But for overlap I imagine that doesn't work at all, [speaker003:] Yeah. [speaker004:] Yeah. [speaker001:] that you'll have plenty of s sections that are [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] That's it. 
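[The minimum-length parameters Thilo mentions can be pictured as a post-pass over the frame-level decisions: any silence run shorter than the threshold is absorbed into the surrounding speech, so raising the minimum yields fewer inserted silence portions. The labels and threshold below are invented for illustration.]

```python
def enforce_min_silence(labels, min_len):
    """Relabel any run of 0s (silence) shorter than min_len as 1 (speech).
    Longer runs survive as genuine silence segments."""
    out = list(labels)
    i = 0
    while i < len(out):
        if out[i] == 0:
            j = i
            while j < len(out) and out[j] == 0:
                j += 1
            if j - i < min_len:        # run too short: merge into speech
                out[i:j] = [1] * (j - i)
            i = j
        else:
            i += 1
    return out

frames = [1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1]
smoothed = enforce_min_silence(frames, min_len=3)
print(smoothed)  # the 2-frame silence is removed; the 4-frame one stays
```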
[speaker006:] Mm hmm, mm hmm. [speaker003:] Yeah. [speaker004:] Yeah. [speaker006:] That's true. [speaker007:] Um [disfmarker] [speaker006:] But [nonvocalsound] it [disfmarker] it saves so much time [disfmarker] the [disfmarker] the [nonvocalsound] transcribers [speaker001:] But [disfmarker] Yep. [speaker006:] just enormous, enormous savings. [speaker007:] That's great. [speaker006:] Fantastic. [speaker007:] Um, just qu one quickly, uh, still on the features. So [vocalsound] you have these twenty four features. [speaker003:] Yeah. [speaker007:] Uh, a lot of them are spectral features. Is there a [disfmarker] a transformation, uh, like principal components transformation or something? [speaker003:] No. No. W w we [disfmarker] originally we did that [speaker001:] Yeah. It was IS two. [speaker007:] Just [disfmarker] [speaker003:] but we saw, uh, when we used it, uh, f for our close talking microphone, which [disfmarker] yeah, for our [disfmarker] for our recognizer in Munich [disfmarker] we saw that w it's [disfmarker] it's not [disfmarker] it's not so necessary. It [disfmarker] it works as well f with [disfmarker] with [disfmarker] without, uh, a LDA or something. [speaker007:] OK. OK. No, I was j [pause] curious. [speaker003:] Yeah. [speaker007:] Yeah, I don't think it's a big deal for this application, [speaker006:] Mm hmm. [speaker007:] but [disfmarker] but [disfmarker] Yeah, it's a [disfmarker] [speaker003:] Yeah. [speaker004:] Right. [speaker006:] Mm hmm. OK. But then there's another thing that also Thilo's involved with, which is, um [disfmarker] OK, and [disfmarker] and also Da Dave Gelbart. So there's this [disfmarker] this problem of [disfmarker] and w and [disfmarker] so we had this meeting. Th the [nonvocalsound] [disfmarker] also Adam, before the [disfmarker] the [disfmarker] before you went away. 
Uh we, um [disfmarker] regarding the representation [nonvocalsound] of overlaps, because at present, [nonvocalsound] [vocalsound] um, because [nonvocalsound] of the limitations of [vocalsound] th the interface we're using, overlaps are, uh, not being [nonvocalsound] encoded by [nonvocalsound] the transcribers in as complete [nonvocalsound] and, uh, detailed a way as it might be, and as might be desired [disfmarker] I think would be desired in the corpus ultimately. [speaker007:] Mm hmm. [speaker006:] So we don't have start and end points [nonvocalsound] at each point where there's an overlap. We just have the [disfmarker] the [nonvocalsound] overlaps [nonvocalsound] encoded in a simple bin. Well, OK. So [nonvocalsound] [@ @] the limits of the [nonvocalsound] over of [disfmarker] of the interface are [vocalsound] such that we were [disfmarker] at this meeting we were entertaining how we might either expand [nonvocalsound] the [disfmarker] the [vocalsound] interface or find other tools which already [pause] do what would be useful. Because what would ultimately be, um, ideal in my [disfmarker] my view and I think [disfmarker] I mean, I had the sense that it was consensus, is that, um, a thorough going musical score notation would be [nonvocalsound] the best way to go. Because [nonvocalsound] you can have multiple channels, there's a single time line, it's very clear, flexible, and all those nice things. [speaker007:] Mm hmm. [speaker006:] OK. So, um, um, I spoke [disfmarker] I had a meeting with Dave Gelbart on [disfmarker] on [disfmarker] and he had, uh, excellent ideas on how [pause] the interface could be [pause] modified to [disfmarker] to do this kind of representation. But, um, he [disfmarker] in the meantime you were checking into the existence of already, um, existing interfaces which might already have these properties. So, do you wanna say something about that? [speaker003:] Yes. 
Um, I [vocalsound] talked with, uh, Munich guys from [disfmarker] from Ludwi Ludwig Maximilians University, who do a lot of transcribing and transliterations. [speaker007:] Mm hmm. [speaker003:] And they basically said they have [disfmarker] they have, uh, a tool they developed [pause] themselves and they can't give away, uh, f it's too error prone, and had [disfmarker] it's not supported, a a a and [disfmarker] [speaker007:] Yeah. [speaker003:] But, um, Susanne Bur Burger, who is at se CMU, he wa who was formerly at [disfmarker] in Munich and w and is now at [disfmarker] with CMU, she said she has something which she uses to do eight channels, uh, trans transliterations, eight channels simultaneously, [speaker007:] Excuse me. [speaker003:] but it's running under Windows. [speaker006:] Under Windows. Mm hmm. [speaker003:] So I'm not sure if [disfmarker] if [disfmarker] if we can use it. [speaker006:] Mm hmm. [speaker003:] She said she would give it to us. It wouldn't be a problem. And I've got some [disfmarker] some kind of manual [pause] down in my office. [speaker001:] Well, maybe we should get it and if it's good enough we'll arrange Windows machines to be available. [speaker003:] Yeah. [speaker006:] Mm hmm. We could [disfmarker] uh, potentially [nonvocalsound] so. [speaker001:] So. [speaker006:] I also wanted to be sure [disfmarker] I mean, I've [disfmarker] I've seen the [disfmarker] this [disfmarker] this is called Praat, PRAAT, [nonvocalsound] which I guess means spee speech in Dutch or something. [speaker001:] Yep. [speaker003:] Yeah, but then I'm not sure [pause] that's the right thing for us. [speaker006:] But [disfmarker] [speaker007:] Yeah. [speaker006:] In terms [nonvocalsound] of it being [nonvocalsound] Windows [nonvocalsound] versus [disfmarker] But I'm just wondering, is [disfmarker]? [speaker001:] No, no. Praat isn't [disfmarker] Praat's multi platform. [speaker003:] No. No, Praat [disfmarker] Yeah. Yeah. [speaker006:] Oh! I see.
[speaker003:] Yeah. [speaker006:] Oh, I see. So Praat may not be [disfmarker] [speaker003:] That's not Praat. It's called "trans transedit" [pause] I think. [speaker006:] It's a different one. I see. [speaker003:] The [disfmarker] the, uh [disfmarker] the tool from [disfmarker] from Susanne. [speaker006:] Oh, I see. OK. OK. Alright. [speaker007:] The other thing, uh, to keep in mind, uh [disfmarker] I mean, we've been very concerned to get all this rolling so that we would actually have data, [speaker006:] Mmm, yeah. [speaker007:] but, um, I think our outside sponsor is actually gonna kick in [speaker006:] Mm hmm. [speaker007:] and ultimately that path will be smoothed out. So I don't know if we have a long term need to do lots and lots of transcribing. I think we had a very quick need to get something out and we'd like to be able to do some later because just it's inter it's interesting. But as far a you know, uh, with [disfmarker] with any luck we'll be able to wind down the larger project. [speaker001:] Oh. [speaker002:] But you s [speaker001:] What our decision was is that [pause] we'll go ahead with what we have with a not very fine time scale on the overlaps. [speaker003:] Yeah. [speaker007:] Right. Yeah. [speaker001:] And [disfmarker] and do what we can later [pause] to clean that up if we need to. [speaker006:] Mm hmm. [speaker007:] Right. [speaker006:] And [disfmarker] and I was just thinking that, um, [vocalsound] if it were possible to bring that in, like, [vocalsound] you know, this week, [speaker007:] Uh huh. [speaker006:] then [nonvocalsound] when they're encoding the overlaps [nonvocalsound] it would be nice for them to be able to specify when [disfmarker] you know, the start points and end points of overlaps. [speaker007:] Yeah. [speaker006:] uh Th they're [nonvocalsound] making really quick progress. [speaker007:] That's great. 
[speaker006:] And, um, so my [disfmarker] my goal was [disfmarker] w m my charge was to get eleven hours by the end of the month. And it'll be [disfmarker] I'm [disfmarker] I'm [disfmarker] I'm clear that we'll be able to do that. [speaker007:] That's great. Yeah. [speaker001:] And did you, uh, forward Morgan Brian's [pause] thing? [speaker006:] I sent [nonvocalsound] it to, um [disfmarker] who did I send that to? I sent it to a list and I thought [nonvocalsound] I sent it to [nonvocalsound] the [nonvocalsound] [disfmarker] e to the local list. [speaker005:] Meeting Recorder. [speaker001:] Oh, you did? [speaker006:] You saw that? [speaker001:] OK. So you probably did get that. [speaker006:] So Brian did tell [nonvocalsound] me that [nonvocalsound] in fact what you said, that, [nonvocalsound] uh [disfmarker] that [nonvocalsound] our [disfmarker] that they are [pause] making progress and that he's going [disfmarker] that [nonvocalsound] they're [nonvocalsound] going [disfmarker] he's gonna check the f the output of the first transcription and [disfmarker] [speaker007:] I mean, basically it's [disfmarker] it's all the difference in the world. [speaker006:] and [disfmarker] [speaker007:] I mean, basically he's [disfmarker] he's on it now. [speaker001:] Yeah. [speaker006:] Oh, that's [disfmarker] [speaker007:] So [disfmarker] so [disfmarker] so this is [disfmarker] so i it'll happen. [speaker006:] this is a new development. OK. Super. Super. OK. Great. [speaker007:] Yeah. I mean, basically it's just saying that one of our [disfmarker] one of our best people is on it, you know, who just doesn't happen to be here anymore. [speaker006:] Yeah. [speaker007:] Someone else pays him. So [disfmarker] [speaker006:] Isn't that great? [speaker002:] But about the need for transcription, I mean, don't we [disfmarker] didn't we previously [vocalsound] decide that the [pause] IBM [pause] transcripts would have to be [pause] checked anyway and possibly augmented? [speaker007:] So. 
[vocalsound] Yeah. [speaker006:] Yes. That's true. [speaker002:] So, I think having a good tool is worth something no matter what. [speaker006:] Mm hmm. [speaker007:] Yeah. S OK. That's [disfmarker] that's a good point. [speaker001:] Yeah, and Dave Gelbart did volunteer, [speaker006:] Good. [speaker001:] and since he's not here, I'll repeat it [disfmarker] to at least modify Transcriber, which, if we don't have something else that works, I think that's a pretty good way of going. [speaker003:] Mmm. [speaker006:] Mm hmm. [speaker001:] And we discussed on some methods to do it. My approach originally, and I've already hacked on it a little bit [disfmarker] it was too slow because I was trying to display all the waveforms. But he pointed out that you don't really have to. [speaker006:] Mm hmm. [speaker007:] Mm hmm. [speaker001:] I think that's a good point. [speaker007:] Hmm. [speaker001:] That if you just display the mix waveform and then have a user interface for editing the different channels, that's perfectly sufficient. [speaker006:] Yeah, exactly. And just keep those [nonvocalsound] things separate. And [disfmarker] and, um, Dan Ellis's hack already allows them to be [nonvocalsound] able to display [vocalsound] different [nonvocalsound] waveforms to clarify overlaps and things, so that's already [disfmarker] [speaker001:] No. They can only display one, but they can listen to different ones. [speaker006:] Oh, yes, but [disfmarker] Well, [vocalsound] uh, yes, but [nonvocalsound] what I mean is [pause] that, uh, from the transcriber's [nonvocalsound] perspective, uh, those [nonvocalsound] two functions are separate. And Dan Ellis's hack handles the, [vocalsound] um, choice [nonvocalsound] [disfmarker] the ability to choose different waveforms [vocalsound] from moment to moment. [speaker001:] But only to listen to, not to look at. [speaker003:] Yeah. [speaker006:] Um [disfmarker] [speaker001:] The waveform you're looking at doesn't change. [speaker003:] Yeah. 
[speaker006:] That's true. Yeah, but [nonvocalsound] that's [disfmarker] that's OK, [speaker001:] Yeah. [speaker006:] cuz they're [disfmarker] they're, you know, they're focused on the ear anyway. [speaker001:] Right. [speaker007:] Hmm. [speaker006:] And then [disfmarker] and then the hack to [vocalsound] preserve the overlaps [nonvocalsound] better would be one which creates different output files for each channel, which then [nonvocalsound] would also serve Liz's request [pause] of having, [speaker001:] Right. [speaker006:] you know, a single channel, separable, uh, cleanly, easily separable, [speaker007:] Mm hmm. [speaker006:] uh, transcript tied to a single channel, uh, audio. [speaker007:] Mm hmm. Have, uh, folks from NIST been in contact with you? [speaker006:] Not directly. I'm trying to think if [disfmarker] if I could have gotten it over a list. [speaker007:] OK. [speaker006:] I don't [disfmarker] I don't think so. [speaker007:] OK. Well, holidays may have interrupted things, cuz in [disfmarker] in [disfmarker] in [disfmarker] They [vocalsound] seem to want to [pause] get absolutely clear on standards for [disfmarker] transcription standards and so forth with [disfmarker] with us. [speaker006:] Oh! This was from before December. [speaker007:] Right. Because they're [disfmarker] they're presumably going to start recording next month. [speaker006:] Yeah. OK. OK. [speaker007:] So. [speaker001:] Oh, we should definitely get with them then, and agree upon a format. Though I don't remember email on that. So was I not in the loop on that? [speaker007:] Um. Yeah, I don't think I mailed anybody. I just think I told them to contact Jane [disfmarker] that, uh, if they had a [disfmarker] [speaker001:] Oh, OK. [speaker006:] That's right. [speaker007:] if, uh [disfmarker] that [disfmarker] that, uh, as the point person on it. But [disfmarker] [speaker001:] Yeah, I think that's right. Just, uh [disfmarker] [speaker007:] So, yeah. 
Maybe I'll, uh, ping them a little bit about it to [vocalsound] get that straight. [speaker006:] OK. I'm keeping the conventions [pause] absolutely [pause] as simple [nonvocalsound] as possible. [speaker007:] Yeah. So is it [disfmarker] cuz with any luck there'll actually be a [disfmarker] a [disfmarker] there'll be collections at Columbia, collections at [disfmarker] at UW [disfmarker] I mean Dan [disfmarker] Dan is very interested in doing some other things, [speaker001:] Right. [speaker006:] Yeah. Yeah. [speaker001:] Well, I think it's important both for the notation and the machine representation to be the same. [speaker007:] and collections at NIST. So [disfmarker] Yeah. [speaker001:] So. [speaker006:] N there was also this, [nonvocalsound] uh, email from Dan regarding the [pause] speech non nonspeech segmentation thing. [speaker001:] Yep. [speaker004:] Yeah. [speaker003:] Yeah. [speaker006:] I don't know if, uh, uh, we wanna, uh [disfmarker] and Dan Gel and Dave Gelbart is interested in pursuing the aspect [nonvocalsound] of using amplitude [nonvocalsound] as a [disfmarker] a [disfmarker] a [disfmarker] as a basis for the separation. [speaker007:] Oh, yeah. He was talking [disfmarker] he was talking [disfmarker] I mean, uh, we [disfmarker] he had [disfmarker] [speaker001:] Cross correlation. [speaker006:] Cross [speaker007:] Yeah, cross correlation. [speaker003:] Cross [speaker007:] I had mentioned this a couple times before, the c the commercial devices that do, uh, [vocalsound] uh, voice, uh [disfmarker] you know, active miking, [speaker006:] Uh huh. [speaker007:] basically look at the amp at the energy at each of the mikes. And [disfmarker] and you basically compare the energy here to [vocalsound] some function of all of the mikes. [speaker003:] Yeah. [speaker007:] So, [speaker006:] OK. [speaker004:] Yeah. 
[speaker007:] by doing that, you know, rather than setting any, uh, absolute threshold, you actually can do pretty good, uh, selection of who [disfmarker] who's talking. [speaker006:] OK. [speaker007:] Uh [disfmarker] And those [disfmarker] those systems work very well, by the way, I mean, so people use them in [vocalsound] panel discussions and so forth with sound reinforcement differing in [disfmarker] in sort of, [speaker004:] Uh huh. [speaker007:] uh [disfmarker] and, uh, those [disfmarker] if [disfmarker] Boy, the guy I knew who built them, built them like twenty [disfmarker] twenty years ago, so they're [disfmarker] [vocalsound] it's [disfmarker] the [disfmarker] the techniques work pretty well. [speaker001:] Hmm. [speaker006:] Fantastic. [speaker007:] So. [speaker006:] Cuz there is one thing that we don't have right now and that is the automatic, um, channel identifier. [speaker007:] Mm hmm. [speaker006:] That [disfmarker] that, you know, that would g help in terms of encoding of overlaps. The [disfmarker] the transcribers would have less, uh, disentangling to do [pause] if that were available. [speaker007:] Yeah. So I think, you know, basically you can look at some [disfmarker] p you have to play around a little bit, uh, to figure out what the right statistic is, [speaker006:] But. [speaker004:] Mm hmm. [speaker007:] but you compare each microphone to some statistic based on the [disfmarker] [vocalsound] on the overall [disfmarker] [speaker006:] Mm hmm. [speaker003:] Yeah. [speaker004:] Mm hmm. [speaker006:] OK. [speaker007:] Uh, and we also have these [disfmarker] we have the advantage of having [pause] distant mikes too. So that, you cou yo [speaker001:] Yeah, although the [disfmarker] the [disfmarker] using the close talking I think would be much better. Wouldn't it? [speaker006:] Yeah. [speaker007:] Um. I [disfmarker] I don't know. I just [disfmarker] it'd be [disfmarker] [speaker001:] Yeah. 
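The active-miking idea described here — comparing each microphone's energy to a statistic over all of the microphones, rather than to any absolute threshold — might be sketched like this. The choice of the cross-channel median as the reference statistic and the 6 dB margin are illustrative assumptions, not details of the commercial devices mentioned.

```python
import numpy as np

def active_channels(frame_energies, margin_db=6.0):
    """Given per-channel frame energies (channels x frames), mark a
    channel active in a frame when its energy exceeds the
    cross-channel median for that frame by margin_db decibels."""
    log_e = 10.0 * np.log10(frame_energies + 1e-12)
    reference = np.median(log_e, axis=0)   # one reference per frame
    return log_e > reference + margin_db   # boolean channel/frame mask
```

Because the reference tracks the room as a whole, a uniformly loud or quiet recording does not change who is selected — only a channel that stands out relative to the others does, which is the point of avoiding an absolute threshold.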
[speaker007:] If I was actually working on it, I'd sit there and [disfmarker] and play around with it, and [disfmarker] and get a feeling for it. I mean, the [disfmarker] the [disfmarker] the, uh [disfmarker] But, uh, you certainly wanna use the close talking, as a [disfmarker] at least. [speaker001:] Right. [speaker007:] I don't know if the other would [disfmarker] would add some other helpful dimension or not. [speaker006:] Mm hmm. [speaker004:] Mm hmm. OK. What [disfmarker] what are the different, uh, classes to [disfmarker] to code, uh, the [disfmarker] the overlap, you will use? [speaker006:] Um, to code d so types of overlap? [speaker004:] What you [disfmarker] you [disfmarker] Yeah. [speaker006:] Um, so [nonvocalsound] at a meeting that wasn't transcribed, we worked up a [disfmarker] a typology. [speaker004:] Yeah. [speaker006:] And, um [disfmarker] [speaker004:] Look like, uh, you t you explaining in the blackboard? The [disfmarker]? [speaker006:] Yes, exactly. [speaker004:] Yeah? Yeah. [speaker006:] That hasn't changed. So it [nonvocalsound] i the [disfmarker] it's basically a two tiered structure where the first one is whether [nonvocalsound] the person who's interrupted continues or not. And then below that there're [nonvocalsound] subcategories, uh, that have more to do with, [nonvocalsound] you know, is it, [vocalsound] uh, simply [nonvocalsound] backchannel [speaker004:] Mm hmm. [speaker006:] or is [nonvocalsound] it, um, someone completing someone else's thought, or is it someone in introducing a new thought. [speaker001:] Right. And I hope that if we do a forced alignment with the close talking mike, that will be enough to recover at least some of the time the time information of when the overlap occurred. [speaker004:] Huh. Yeah. [speaker006:] Mm hmm. Well, [vocalsound] one would [disfmarker] [speaker004:] Yeah. We hope. [speaker001:] Yeah. Who knows? [speaker006:] That'd be [disfmarker] that'd be nice. 
I mean, [vocalsound] I [disfmarker] I [disfmarker] I [disfmarker] I've [disfmarker] [speaker002:] So who's gonna do that? Who's gonna do forced alignment? [speaker001:] Well, u uh, IBM was going to. Um [disfmarker] [speaker002:] Oh, OK. [speaker004:] Oh. [speaker001:] and I imagine they still plan to but [disfmarker] but, you know, I haven't spoken with them about that recently. [speaker002:] OK. [speaker004:] Uh huh. [speaker007:] Well, uh, my suggestion now is [disfmarker] is on all of these things to, uh, contact Brian. [speaker001:] OK. I'll do that. [speaker006:] This is wonderful [nonvocalsound] to have a direct contact like that. [speaker004:] Yeah. [speaker007:] Yeah. [speaker006:] uh Well, th lemme ask [nonvocalsound] you this. [speaker007:] Yeah. [speaker006:] It occurs to me [disfmarker] [vocalsound] one of my transcribers t [nonvocalsound] told [nonvocalsound] me today that she'll [nonvocalsound] be finished with one meeting, [vocalsound] um, by [disfmarker] [speaker007:] Mm hmm. [speaker006:] well, she said tomorrow but then she said [nonvocalsound] [disfmarker] you know, but [nonvocalsound] [disfmarker] the, you know [disfmarker] let's [disfmarker] let's just, uh, say [speaker007:] Mm hmm. [speaker006:] maybe the day after just to be s on the safe side. I could send Brian the, [nonvocalsound] um [disfmarker] the [nonvocalsound] transcript. I know these [nonvocalsound] are [disfmarker] er, uh, I could send him that [nonvocalsound] if [nonvocalsound] it would be possible, [nonvocalsound] or a good idea or not, to [nonvocalsound] try [nonvocalsound] to do a s forced alignment on what we're [disfmarker] on the way we're encoding overlaps now. [speaker007:] Well, just talk to him about it. [speaker001:] Yep. [speaker006:] Good. [speaker007:] I mean, you know, basically he's [disfmarker] he just studies, he's a colleague, a friend, and, [speaker006:] Yeah! 
[speaker007:] uh, they [disfmarker] and [disfmarker] and, you know, the [disfmarker] the organization always did wanna help us. [speaker006:] Super. Super. [speaker007:] It was just a question of getting, you know, the right people connected in, who had the time. [speaker006:] Yeah, yeah. [speaker001:] Right. [speaker007:] So, um, eh [disfmarker] [speaker001:] Is he on the mailing list? The Meeting Recorder mailing li? [speaker006:] Oh! [speaker001:] We should add him. [speaker006:] Yeah. I [disfmarker] I [disfmarker] I don't know for sure. [speaker007:] Yeah. [speaker005:] Did something happen, Morgan, that he got put on this, or was he already on it, [speaker001:] Add him. [speaker005:] or [disfmarker]? [speaker007:] No, I, eh, eh, p It [disfmarker] it oc I [disfmarker] h it's [disfmarker] Yeah, something happened. I don't know what. [speaker002:] He asked for more work. [speaker005:] Huh. [speaker007:] But he's on it now. [speaker006:] That would be [nonvocalsound] like [disfmarker] that'd be like him. He's great. [speaker007:] Right. So, uh, where are we? Maybe, uh, uh, brief [disfmarker] Well, let's [disfmarker] why don't we talk about microphone issues? [speaker006:] Yeah. That'd be great. [speaker007:] That was [disfmarker] that was a [disfmarker] [speaker001:] Um, so one thing is that I did look on Sony's for a replacement for the mikes [disfmarker] [vocalsound] for the head m head worn ones cuz they're so uncomfortable. But I think I need someone who knows more about mikes than I do, because I couldn't find a single other model that seemed like it would fit the connector, which seems really unlikely to me. Does anyone, like, know stores or [vocalsound] know about mikes who [disfmarker] who would know the right questions to ask? [speaker007:] Oh, I probably would. I mean, my knowledge is twenty years out of date but some of it's still the same. So [disfmarker] [speaker001:] Mm hmm. [speaker007:] Uh, so maybe we c we can take a look at that. 
[speaker005:] You couldn't [disfmarker] you couldn't find the right connector to go into these things? [speaker001:] Yep. [speaker005:] Huh! [speaker001:] When I looked, i they listed one microphone and that's it as having that type of connector. But my guess is that Sony maybe uses a different number for their connector than everyone else does. And [disfmarker] and so [disfmarker] [speaker007:] Mm hmm. Well, let's look at it together [speaker001:] it seems [disfmarker] it seems really unlikely to me that there's only one. [speaker007:] and [disfmarker] [speaker006:] And there's no adaptor for it? Seems like there'd be a [disfmarker] [speaker003:] Yeah. [speaker006:] OK. [speaker001:] As I said, who knows? [speaker006:] Mm hmm. [speaker007:] Who [disfmarker] who are we buying these from? [speaker001:] Um, [speaker007:] That'd be [speaker001:] I have it downstairs. I don't remember off the top of my head. [speaker007:] Yeah. OK. Yeah. We [disfmarker] we can try and look at that together. [speaker001:] And then, uh [disfmarker] just in terms of how you wear them [disfmarker] I mean, I had thought about this before. I mean, when [disfmarker] when [disfmarker] when you use a product like DragonDictate, they have a very extensive description about how to wear the microphone and so on. [speaker006:] Oh. [speaker001:] But I felt that in a real situation we were very seldom gonna get people to really do it and maybe it wasn't worth concentrating on. But [disfmarker] [speaker007:] Well, I think that that's [disfmarker] that's a good back off position. That's what I was saying [vocalsound] earlier, th that, you know, we are gonna get some [vocalsound] recordings that are imperfect and, hey, that's life. But I [disfmarker] I think that it [disfmarker] it doesn't hurt, uh, the naturalness of the situation to try to have people [pause] wear the microphones properly, if possible, [speaker001:] Mm hmm. 
[speaker007:] because, [vocalsound] um, the natural situation is really what we have with the microphones on the table. I mean, I think, [vocalsound] you know, in the target applications that we're talking about, people aren't gonna be wearing head mounted mikes anyway. [speaker001:] Oh. That's true. [speaker004:] Yeah. [speaker003:] Yeah. [speaker007:] So this is just for u these head mounted mikes are just for use with research. [speaker001:] Mm hmm. [speaker007:] And, uh, it's gonna make [disfmarker] [speaker004:] Yeah. [speaker007:] You know, if [disfmarker] if An Andreas plays around with language modeling, he's not gonna be m wanna be messed up by people breathing into the microphone. [speaker001:] Right. [speaker007:] So it's [disfmarker] it's, uh, uh [disfmarker] [speaker001:] Well, I'll dig through the documentation to DragonDictate and ste s see if they still have the little [pause] form. [speaker007:] But it does happen. Right? I mean, and any [disfmarker] [speaker004:] Yeah. [speaker002:] It's interesting, uh, I talked to some IBM guys, uh, last January, I think, I was there. And [disfmarker] so people who were working on the [disfmarker] on their ViaVoice dictation product. [speaker007:] Yeah. [speaker002:] And they said, uh, the breathing is really a [disfmarker] a terrible problem [pause] for them, to [disfmarker] to not recognize breathing as speech. [speaker006:] Wow. [speaker002:] So, anything to reduce breathing is [disfmarker] is [disfmarker] is a good thing. [speaker007:] Yeah. [speaker001:] Well, that's the [disfmarker] It seemed to me when I was using Dragon that it was really microphone placement helped an [disfmarker] in, uh [disfmarker] an enormous amount. [speaker002:] Mm hmm. [speaker001:] So you want it enough to the side so that when you exhale through your nose, it doesn't [disfmarker] the wind doesn't hit the mike. [speaker002:] Right. Mm hmm. [speaker001:] And then, uh [disfmarker] Everyone's adjusting their microphones, of course. 
And then just close enough so that you get good volume. So you know, wearing it right about here seems to be about the right way to do it. [speaker004:] Yeah. [speaker007:] Yeah. I remember when I was [disfmarker] when I [disfmarker] I [disfmarker] I [disfmarker] I used, uh, um, [vocalsound] a prominent laboratory's, uh, uh, speech recognizer [speaker006:] Is [disfmarker] Uh huh. [speaker007:] about, [vocalsound] uh [disfmarker] This was, boy, this was a while ago, this was about twelve [disfmarker] twelve years ago or something. And, um, they were [disfmarker] they were perturbed with me because I was breathing in instead of breathing out. And they had models for [disfmarker] they [disfmarker] [vocalsound] they had Markov models for br breathing out but they didn't have them for breathing in. Uh [disfmarker] [speaker004:] Yeah. [speaker006:] That's interesting. Well, what I wondered is whether it's possible to have [disfmarker] [vocalsound] to maybe use the display at the beginning to be able to [disfmarker] to judge how [disfmarker] how correctly [disfmarker] [speaker001:] Yeah. [speaker006:] I mean, have someone do some routine whatever, and [disfmarker] and then see if when they're breathing it's showing. I don't know if the [disfmarker] if it's [disfmarker] [speaker001:] I mean, when [disfmarker] when it's on, you can see it. [speaker007:] I [disfmarker] [speaker001:] You can definitely see it. [speaker006:] Can you see the breathing? Cuz I [disfmarker] [speaker001:] Absolutely. [speaker006:] Oh. [speaker001:] Absolutely. [speaker007:] Yeah. I [speaker001:] And so, you know, I've [disfmarker] I've sat here and watched sometimes the breathing, and the bar going up and down, and I'm thinking, I could say something, but [speaker007:] I mean, I think [disfmarker] [speaker001:] I don't want to make people self conscious. Stop breathing! [speaker007:] It [disfmarker] it's going to be imperfect. You're not gonna get it perfect. [speaker006:] Yeah. Uh huh. 
[speaker007:] And you can do some, uh, you know, first order thing about it, which is to have people move it, uh, uh, a away from being just directly in front of the middle [speaker004:] Yeah. [speaker006:] Good. [speaker007:] but not too far away. [speaker006:] Yeah, i [speaker007:] And then, you know, I think there's not much [disfmarker] Because you can't al you know, interfere w you can't fine tune the meeting that much, I think. [speaker001:] Right. [speaker007:] It's sort of [disfmarker] [speaker004:] Yeah. [speaker006:] That's true. It just seems like i if something l simple like that can be tweaked [vocalsound] and the quality goes, you know, uh, dramatically up, then it might be worth [pause] doing. [speaker001:] Yep. And then also [disfmarker] the position of the mike also. If it's more directly, you'll get better volume. [speaker004:] Yeah. [speaker001:] So [disfmarker] so, like, yours is pretty far down [pause] below your mouth. Yeah. [speaker006:] But [disfmarker] Mm hmm. My [disfmarker] my feedback from the transcribers is he is always close to crystal clear and [disfmarker] and just fan fantastic to [disfmarker] [speaker003:] Yeah. [speaker004:] Mmm, yeah. [speaker007:] Mm hmm. [speaker001:] I don't know why that is. [speaker006:] Well, I mean, you [disfmarker] Yeah, of course. You're [disfmarker] you're also [disfmarker] uh, your volume is [disfmarker] is greater. But [disfmarker] but still, I mean, they [disfmarker] they say [disfmarker] [speaker001:] I've been eating a lot. [speaker006:] I it makes their [disfmarker] their job extremely easy. [speaker007:] Uh. [speaker006:] Yeah. [speaker007:] And then there's mass. [speaker006:] Mm hmm. [speaker007:] Anyway. [speaker006:] I could say something about [disfmarker] about the [disfmarker] Well, I don't know what you wanna do. Yeah. [speaker007:] About what? [speaker006:] About the transcribers or anything or [disfmarker]? I don't know. [speaker007:] Well, the other [disfmarker] why don't we do that? 
[speaker002:] But, uh, just to [disfmarker] to, um [disfmarker] One more remark, uh, concerning the SRI recognizer. Um. It is useful to transcribe and then ultimately train models for things like breath, and also laughter is very, very frequent and important to [disfmarker] [vocalsound] to model. [speaker007:] Mm hmm. [speaker002:] So, if you can in your transcripts mark [disfmarker] [speaker001:] So, mark them? [speaker002:] mark very audible breaths and laughter especially, [speaker003:] Mmm. [speaker006:] They are. [speaker002:] um [disfmarker] [speaker006:] They're putting [disfmarker] Eh, so in curly brackets they put "inhale" or "breath". [speaker002:] OK. Mm hmm. [speaker001:] Oh, great. [speaker006:] It [disfmarker] they [disfmarker] and then in curly brackets they say "laughter". Now they're [disfmarker] [vocalsound] they're not being [pause] awfully precise, uh, m So they're two types of laughter that are not being distinguished. [speaker002:] Mm hmm. [speaker006:] One is [vocalsound] when sometimes s someone will start laughing when they're in the middle of a sentence. [speaker002:] Mm hmm. [speaker006:] And [disfmarker] and then the other one is when they finish the sentence and then they laugh. So, um, I [disfmarker] I did s I did some double checking to look through [disfmarker] I mean, [vocalsound] you'd need to have extra e extra complications, like time tags indicating the beginning and ending of [disfmarker] [vocalsound] of the laughing through the utterance. [speaker002:] It's not so [disfmarker] I don't think it's, um [disfmarker] [speaker006:] And that [disfmarker] and what they're doing is in both cases just saying "curly brackets laughing" a after the unit. [speaker002:] As [disfmarker] as long as there is an indication that there was laughter somewhere between [pause] two words [vocalsound] I think that's sufficient, [speaker003:] Yeah. [speaker006:] Good. Oh! OK. [speaker001:] Against [disfmarker] they could do forced alignment. 
[speaker002:] because [speaker003:] Yeah. [speaker002:] actually the recognition of laughter once you kn um [disfmarker] you know, is pretty good. So as long as you can stick a [disfmarker] you know, a t a tag in there that [disfmarker] that indicates that there was laughter, [speaker001:] Oh, I didn't know that. [speaker006:] OK. [speaker002:] that would probably be, uh, sufficient to train models. [speaker001:] That would be a really interesting [pause] prosodic feature, [speaker006:] Then [disfmarker] And let me ask y [speaker004:] Yeah. [speaker006:] and I gotta ask you one thing about that. [speaker001:] when [disfmarker] [speaker006:] So, um, [speaker002:] Hmm. [speaker006:] if they laugh between two words, you [disfmarker] you'd get it in between the two words. [speaker002:] Mm hmm. [speaker006:] But if they laugh across three or four words you [disfmarker] you get it after those four words. [speaker002:] Right. [speaker006:] Does that matter? [speaker004:] Yeah. [speaker002:] Well, the thing that you [disfmarker] is hard to deal with is whe [vocalsound] when they speak while laughing. Um, and that's, uh [disfmarker] I don't think that we can do very well with that. [speaker001:] Right. [speaker002:] So [disfmarker] [speaker004:] Yeah. [speaker002:] But, um, that's not as frequent as just laughing between speaking, [speaker006:] OK. [speaker002:] so [disfmarker] [speaker001:] So are [disfmarker] do you treat breath and laughter as phonetically, or as word models, or what? [speaker007:] Uh is it? [speaker004:] Huh. [speaker006:] I think he's right. [speaker004:] I [disfmarker] I think it's frequent in [disfmarker] in the meeting. [speaker006:] Yeah. [speaker002:] We tried both. Uh, currently, um, we use special words. There was a [disfmarker] there's actually a word for [disfmarker] uh, it's not just breathing but all kinds of mouth [disfmarker] [speaker001:] Mm hmm. Mouth stuff? [speaker002:] uh, mouth [disfmarker] mouth stuff. 
And then laughter is a [disfmarker] is a special word. [speaker001:] How would we do that with the hybrid system? [speaker007:] Same thing. [speaker002:] Same thing? [speaker001:] So train a phone [pause] in the neural net? [speaker002:] Yeah. Yeah. You ha Oh. And each of these words has a dedicated phone. [speaker007:] No [disfmarker] [speaker001:] Oh, it does? [speaker002:] So the [disfmarker] so the [disfmarker] the mouth noise, uh, word has just a single phone, um, that is for that. [speaker001:] Right. So in the hybrid system we could train the net with a laughter phone and a breath sound phone. [speaker007:] Yeah. [speaker002:] Yeah. [speaker007:] I mean, it's [disfmarker] it's [disfmarker] it's always the same thing. [speaker002:] Yeah. [speaker007:] Right? [speaker006:] Mm hmm. [speaker007:] I mean, you could [disfmarker] you could say well, let [disfmarker] we now think that laughter should have three sub sub [vocalsound] sub units in the [disfmarker] the three states, uh [disfmarker] different states. [speaker003:] Yeah. [speaker007:] And then you would have three [disfmarker] I mean, you know, eh, eh, it's u [speaker002:] And the [disfmarker] the pronun the pronunciations [disfmarker] the pronunciations are l are somewhat non standard. [speaker001:] Do whatever you want. [speaker007:] Yeah, yeah. [speaker004:] Yeah. No. [speaker002:] They actually are [disfmarker] uh, it's just a single, s uh, you know, a single phone in the pronunciation, but it has a self loop on it, so it can [disfmarker] [speaker001:] To [pause] go on forever? [speaker002:] r can go on forever. [speaker001:] And how do you handle it in the language model? [speaker002:] It's just a [disfmarker] it's just a word. We train it like any other word. [speaker001:] It's just a word in the language model. Cool. [speaker002:] Yeah. We also tried, [vocalsound] um, absorbing these [disfmarker] uh, both laughter and [disfmarker] and actually also noise, and, um [disfmarker] [speaker004:] Yeah. 
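[Editor's note: the mechanics just described are easy to lose in conversation. The recognizer treats laughter and mouth noise as ordinary vocabulary words whose "pronunciation" is a single dedicated phone with a self-loop, so one state can absorb an event of any duration. A minimal sketch of the duration behavior that self-loop implies; all probabilities are invented for illustration, not taken from the SRI system.]

```python
# A single HMM state with self-loop probability p_stay has an implicit
# geometric duration distribution: stay (n - 1) times, then exit.  That is
# why one phone "can go on forever" and still fit a long laugh.

def self_loop_duration_prob(n_frames: int, p_stay: float) -> float:
    """Probability that one self-looping state emits exactly n_frames frames."""
    assert n_frames >= 1
    return (p_stay ** (n_frames - 1)) * (1.0 - p_stay)

# A sticky state (p_stay = 0.95) keeps a 50-frame laugh plausible, where a
# normal low-self-loop phone state would penalize it into oblivion:
print(self_loop_duration_prob(50, p_stay=0.95))
print(self_loop_duration_prob(50, p_stay=0.50))
```

[In the language model, as noted above, the laughter word then needs no special machinery at all: it is trained like any other word.]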
[speaker002:] Yes. OK. Anyway. We also tried absorbing that into the pause model [disfmarker] I mean, the [disfmarker] the [disfmarker] the model that [disfmarker] that matches the stuff between words. [speaker007:] Mm hmm. [speaker002:] And, um, it didn't work as well. So. [speaker001:] Huh. OK. [speaker006:] Mm hmm. [speaker001:] Can you hand me your digit form? [speaker002:] Sorry. [speaker001:] I just wanna mark that you did not read digits. [speaker007:] OK. Say hi for me. [speaker006:] Good. You [disfmarker] you did get me to thinking about [disfmarker] I [disfmarker] I'm not really sure which is more frequent, whether f f laughing [disfmarker] I think it may be an individual thing. Some people are more prone to laughing when they're speaking. [speaker007:] Yeah. Yeah. I think [disfmarker] [speaker001:] I was noticing that with Dan in the one that we, uh [disfmarker] we hand tran hand segmented, [speaker006:] But I can't [disfmarker] [speaker004:] Yeah. [speaker001:] that [disfmarker] th he has these little chuckles as he talks. [speaker006:] Yeah. OK. [speaker007:] I'm sure it's very individual. And [disfmarker] and [disfmarker] one thing that c that we're not doing, of course, is we're not claiming to, uh, get [disfmarker] be getting a representation of mankind in these recordings. We have [vocalsound] this very, very tiny sample of [disfmarker] of [disfmarker] [speaker004:] Yeah. [speaker006:] Yeah. [speaker004:] Yeah. [speaker001:] Speech researchers? [speaker007:] Uh, yeah. And [disfmarker] [vocalsound] Yeah, r right. [speaker004:] Speech research. [speaker007:] So, uh, who knows. Uh [disfmarker] Yeah. Why don why don't we just [disfmarker] since we're on this vein, why don't we just continue with, uh, what you were gonna say about the transcriptions and [disfmarker]? [speaker006:] OK. Um, um, the [disfmarker] I [disfmarker] I'm really very for I'm extremely fortunate with the people who, uh, applied and who are transcribing for us. 
They [vocalsound] are, um, um, uh really perceptive and very, um [disfmarker] and I'm not just saying that cuz they might be hearing this. [speaker001:] Cuz they're gonna be transcribing it in a few days. [speaker006:] No, they're super. They're [disfmarker] the they [disfmarker] very quick. [speaker005:] OK. Turn the mikes off and let's talk. [speaker006:] Yeah, I know. I am [disfmarker] I'm serious. They're just super. So I, um, e you know, I [disfmarker] I brought them in and, um, trained them in pairs because I think people can raise questions [disfmarker] you know, i i the they think about different things [speaker001:] That's a good idea. [speaker006:] and they think of different [disfmarker] and um, I trained them to, uh, f on about a minute or two of the one that was already transcribed. This also gives me a sense of [disfmarker] You know, I can [disfmarker] I can use that later, with reference to inter coder reliability kind of issues. But the main thing was to get them used to the conventions and, [vocalsound] you know, the idea of the [disfmarker] th th the size of the unit versus how long it takes to play it back so these [disfmarker] th sort of calibration issues. And then, um, I just set them loose and they're [disfmarker] they all have e a already background in using computers. They're, um [disfmarker] they're trained in linguistics. [speaker001:] Good. [speaker006:] They got [disfmarker] [speaker007:] Uh huh. [speaker001:] Oh, no. Is that good or bad? [speaker006:] Well, they they're very perce they'll [disfmarker] So one of them said "well, you know, he really said" n ", not really "and", [speaker004:] Yeah. [speaker006:] so what [vocalsound] [disfmarker] what should I do with that? " [speaker004:] Yeah. [speaker006:] And I said, well for our purposes, [speaker001:] Yeah. [speaker007:] Yeah. [speaker006:] I do have a convention. 
If it's an [disfmarker] a noncanonical p That one, I think we [disfmarker] you know, with Eric's work, I sort of figure we [disfmarker] we can just treat that as a variant. [speaker007:] OK. [speaker006:] But I told them if [disfmarker] if there's an obvious speech error, uh, like I said in one thing, [speaker007:] Yes. [speaker006:] and I gave my [disfmarker] my example, like I said, "microfon" [pause] in instead of "microphone". Didn't bother [disfmarker] I knew it when I said it. I remember s thinking "oh, that's not correctly pronounced". But it [disfmarker] but I thought [vocalsound] it's not worth fixing cuz often when you're speaking everybody knows what [disfmarker] what you mean. [speaker001:] You'll self repair. Yeah. [speaker003:] Yeah. [speaker006:] But I have a convention that if it's obviously a noncanonical pronunciation [disfmarker] a speech error with [disfmarker] you know, wi within the realm of resolution that you can tell in this native English [disfmarker] American English speaker, you know that I didn't mean to say "microfon." Then you'd put a little tick at the beginning of the word, [speaker007:] Yeah. [speaker006:] and that just signals that, um, this is not standard, and then in curly brackets "pron [nonvocalsound] error". And, um, and other than that, it's w word level. But, you know, the fact that they noticed, you know, the "nnn". "He said" nnn ", not "and". What shall I do with that? I mean, they're very perceptive. And [disfmarker] and s several of them are trained in IPA. C they really could do phonetic transcription if [disfmarker] if we wanted them to. [speaker007:] Mm hmm. Right. Well [disfmarker] [vocalsound] Well, you know, it might be something we'd wanna do with some, uh, s small subset [pause] of the whole thing. [speaker001:] Hmm. Where were they when [pause] we needed them? [speaker006:] I think [disfmarker] [speaker007:] We certainly wouldn't wanna do it with everything. 
[speaker006:] And I'm also thinking these people are a terrific pool. I mean, if, uh [disfmarker] so I [disfmarker] I told them that, um, we don't know if this will continue past the end of the month [speaker007:] Uh huh. [speaker006:] and I also [disfmarker] m I think they know that the data p source is limited and I may not be able to keep them employed till the end of the month even, although I hope to. [speaker007:] The other thing we could do, actually, uh, is, uh, use them for a more detailed analysis of the overlaps. [speaker006:] And [disfmarker] Oh, that'd be so super. They would be so [disfmarker] s so terrific. [speaker007:] Right? [speaker001:] I mean, this was something that we were talking about. We could get a very detailed overlap if they were willing to transcribe each meeting four or five times. Right? One for each participant. So they could by hand [disfmarker] [speaker007:] Well, that's one way to do it. [speaker001:] Yeah. [speaker007:] But I've been saying the other thing is just go through it for the overlaps. [speaker001:] Yeah. [speaker006:] Mm hmm, that's right. [speaker007:] Right? [speaker006:] And with the right in interface [disfmarker] [speaker007:] Given that y and [disfmarker] and do [disfmarker] so instead of doing phonetic, uh, uh, transcription for the whole thing, which [vocalsound] we know from the [disfmarker] Steve's experience with the Switchboard transcription is, you know, very, very time consuming. [speaker004:] Yeah. [speaker007:] And [disfmarker] [vocalsound] and you know, it took them I don't know how many months to do [disfmarker] to get four hours. And so [vocalsound] that hasn't been really our focus. Uh, we can consider it. But, I mean, the other thing is since we've been spending so much time thinking about overlaps is [disfmarker] is maybe get a much more detailed analysis of the overlaps. [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker007:] But anyway, I'm [disfmarker] I'm open to c our consideration. 
[speaker006:] That'd be great. [speaker007:] I [disfmarker] I don't wanna say that by fiat. [speaker004:] Hmm. [speaker007:] I'm open to every consideration of [vocalsound] what are some other kinds of detailed analysis that would be most useful. [speaker006:] Yeah. [speaker007:] And, uh, uh, [speaker006:] Mm hmm. [speaker004:] Hmm. [speaker007:] I [disfmarker] I [disfmarker] I think [vocalsound] this year we [disfmarker] we actually, uh, can do it. [speaker006:] Oh, wonderful. [speaker007:] It's a [disfmarker] we have [disfmarker] we have [disfmarker] due to [@ @] [comment] variations in funding we have [disfmarker] we seem to be doing, uh, very well on m money for this [disfmarker] this year, and [vocalsound] next year we may have [disfmarker] have much less. So I don't wanna hire a [disfmarker] [speaker001:] Is [disfmarker] you mean two thousand one? Calendar year or [disfmarker]? [speaker007:] Uh, I mean, calendar year two thousand one. [speaker001:] OK. [speaker007:] Yeah. So it's [disfmarker] uh, it's [disfmarker] we don't wanna hire a bunch of people, a long term staff, [speaker001:] Full time. [speaker006:] Mm hmm. Yeah. [speaker001:] Yeah. [speaker007:] because [vocalsound] the [disfmarker] the funding that we've gotten is sort of a big chunk for this year. But [vocalsound] having [pause] temporary people doing some specific thing that we need is actually a perfect match to that kind of, uh, funding. [speaker006:] Wonderful. And then school will start in [disfmarker] in the sixt on the sixteenth. [speaker004:] Yeah. [speaker007:] So. [speaker006:] Some of them will have to cut back their hours at that point. [speaker007:] Yeah. [speaker005:] Are they working full time now, or [disfmarker]? [speaker006:] But [disfmarker] [nonvocalsound] [vocalsound] Some of them are. Yeah. [speaker001:] Wow. [speaker006:] Well, why do I wouldn't say forty hour weeks. No. 
But what I mean is [disfmarker] Oh, I shouldn't say it that way because [nonvocalsound] that does sound like forty hour weeks. No. I th I [disfmarker] I would say they're probably [nonvocalsound] [disfmarker] they don't have o they don't have other things that are taking away their time. [speaker001:] I don't see how someone could do forty hours a week on transcription. [speaker005:] Hmm. [speaker006:] But [nonvocalsound] it's [disfmarker] you can't. [speaker007:] Yeah. Yeah. [speaker006:] No. You're right. It's [disfmarker] i it would be too taxing. But, um, they're putting [nonvocalsound] in a lot of [disfmarker] [speaker007:] Yeah. [speaker006:] And [disfmarker] and I checked them over. [speaker007:] I [disfmarker] [speaker006:] I [disfmarker] I [disfmarker] I haven't checked them all, but [pause] just spot checking. They're fantastic. [speaker007:] I remember when we were transcribing BeRP, uh, uh, [vocalsound] uh, Ron Kay, uh, volunteered to [disfmarker] to do some of that. [speaker001:] I think it would be [disfmarker] [speaker007:] And, he was [disfmarker] the first [disfmarker] first stuff he did was transcribing Chuck. And he's saying "You [disfmarker] you know, I always thought Chuck spoke really well." [speaker006:] Yeah. Yeah. Well, you know, and I also thought, y Liz has this, eh, you know, and I do also, this [disfmarker] this interest in the types of overlaps that are involved. These people would be [nonvocalsound] great choices for doing coding of that type if we wanted, [speaker001:] We'd have to mark them. [speaker006:] or [speaker007:] Mm hmm. [speaker006:] whatever. So, um. [speaker004:] Mm hmm. [speaker007:] Yeah. [speaker001:] I think it would also be interesting to have, uh, a couple of the meetings have more than one transcriber do, [speaker006:] Mm hmm. [speaker001:] cuz I'm curious about inter annotator agreement. [speaker004:] Yeah. [speaker006:] OK. Yeah. Th that'd be [disfmarker] I think that's a [disfmarker] a good idea. 
[speaker007:] Yeah. [speaker006:] You know, there's also, the e In my mind, I think A An Andreas was [pause] leading to this topic, the idea that, um, [vocalsound] we haven't yet seen the [disfmarker] the type of transcript that we get from IBM, and it may just be, you know, pristine. But on the other hand, given the lesser interface [disfmarker] Cuz this is, you know [disfmarker] we've got a good interface, we've got great headphones, m um [disfmarker] [speaker007:] It could be that they will uh [disfmarker] theirs will end up being a kind of fir first pass or something. [speaker006:] Something like that. [speaker007:] Maybe an elaborate one, cuz again they probably are gonna do these alignments, which will also clear things up. [speaker006:] That's [disfmarker] that's true. Al although you have to s Don't you have to start with a close enough approximation [nonvocalsound] of the [disfmarker] of the verbal part [nonvocalsound] to be able to [disfmarker]? [speaker007:] Well, tha that's [disfmarker] that's debatable. Right? I mean, so the [disfmarker] so the argument is that if your statistical system is good [vocalsound] it will in fact, uh, clean things up. [speaker006:] OK. [speaker007:] Right? [speaker006:] OK. [speaker007:] So it it's got its own objective criterion. [speaker004:] Yeah. [speaker007:] And, uh, so in principle you could start up with something that was kind of rough [disfmarker] I mean, to give an example of, um, something we used to do, uh, at one point, uh, back [disfmarker] back when Chuck was here in early times, is we would take, um, [vocalsound] da take a word and, uh, have a canonical pronunciation and, uh, if there was five phones in a word, [vocalsound] you'd break up the word, [vocalsound] uh, into five equal length pieces which is completely gross. [speaker001:] Wrong. [speaker007:] Right? [speaker004:] Yeah. [speaker007:] I mean, th the timing is off [pause] all over the place in just about any word. [speaker006:] Mm hmm. OK. 
[speaker007:] But it's O K. You start off with that and the statistical system then aligns things, and eventually you get something that doesn't really look too bad. [speaker006:] Oh, excellent. OK. [speaker007:] So [disfmarker] so I think using a [disfmarker] a good [pause] aligner, um, actually can [disfmarker] can help a lot. Um. [vocalsound] But, uh, you know, they both help each other. If you have a [disfmarker] if you have a better starting point, then it helps the aligner. If you have a good alignment, it helps the, uh, th the human in [disfmarker] in taking less time to correct things. So [disfmarker] so [disfmarker] [speaker006:] OK. Excellent. I guess there's another aspect, too, and I don't know [disfmarker] uh, this [disfmarker] this is [disfmarker] very possibly a different, uh, topic. But, [nonvocalsound] uh, just let me say [pause] with reference to this idea of, um, [vocalsound] higher order organization within meetings. So like in a [disfmarker] you know, the topics that are covered during a meeting with reference to the other, uh, uses of the data, [speaker007:] Mm hmm. [speaker006:] so being able to [pause] find where so and so talked about such and such, then, um, um [disfmarker] e I mean, I [disfmarker] I [disfmarker] I did sort of a [disfmarker] [vocalsound] a rough [pause] pass [nonvocalsound] on encoding, like, episode like level things on the, uh, transcribed meeting [disfmarker] [speaker007:] Mm hmm. [speaker006:] already transcribed meeting. [speaker007:] Mm hmm. [speaker006:] And I don't know if, um [disfmarker] where [nonvocalsound] that [disfmarker] i if that's something that we wanna do with each meeting, sort of like a, um [disfmarker] it's like a manifest, when you get a box full of stuff, [speaker007:] Mm hmm. [speaker006:] or [disfmarker] or if that's, um [disfmarker] I mean, i I [disfmarker] I don't know what uh, level of detail would be most useful. 
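[Editor's note: the flat-start initialization Morgan describes above, chopping a word's frame span into equal-length pieces, one per phone, and letting iterative realignment clean up the boundaries, is simple enough to sketch. The word, phone labels, and frame numbers below are hypothetical.]

```python
def uniform_segmentation(start, end, phones):
    """Crude initial alignment: give each phone an equal share of the frames
    in [start, end).  Later training passes re-estimate the models and
    re-align, tightening these "completely gross" boundaries."""
    n = len(phones)
    bounds = [start + ((end - start) * i) // n for i in range(n + 1)]
    return [(p, bounds[i], bounds[i + 1]) for i, p in enumerate(phones)]

# A five-phone word over (made-up) frames 200-300:
print(uniform_segmentation(200, 300, ["p1", "p2", "p3", "p4", "p5"]))
```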
I don't know i if that's something that [pause] I should do when I look over it, or if we want someone else to do, or whatever. [speaker007:] Mm hmm. [speaker006:] But this issue of the contents of the meeting in an outline form. OK. [speaker007:] Yeah. Meaning really isn't my thing. Um [disfmarker] [speaker001:] I think it just [disfmarker] whoever is interested can do that. I mean, so if someone wants to use that data [disfmarker] [speaker006:] OK. [speaker007:] We're running a little short here. We, uh, uh, cou trying to [disfmarker] [speaker006:] That's fine. I'm finished. [speaker007:] eh, was [disfmarker] p Well, you know, the thing I'm concerned about is we wanted to do these digits and [disfmarker] and I haven't heard, uh, from Jose yet. [speaker006:] Oh, yeah. Oh, yes. [speaker007:] So [disfmarker] [speaker006:] Mm hmm. [speaker004:] OK. What do you want? [speaker007:] Uh [disfmarker] [speaker001:] We could skip the digits. We don't have to read digits each time. [speaker007:] Uh [disfmarker] I [disfmarker] I [disfmarker] I think it [disfmarker] you know, another [disfmarker] another bunch of digits. More data is good. [speaker001:] OK. [speaker007:] So [disfmarker] so I'd like to do that. [speaker004:] Yeah. Sure. [speaker007:] But I think, do you, maybe, eh [disfmarker]? Did you prepare some whole thing you wanted us just to see? Or what was that? [speaker004:] Yeah. It's [disfmarker] it's prepared. [speaker007:] Yeah. Uh, how long a [disfmarker]? [speaker006:] Oh, k Sorry. [speaker004:] I [disfmarker] I think it's [disfmarker] it's fast, because, uh, I have the results, eh, of the study of different energy without the law length. Eh, um, eh, in the [disfmarker] in the measurement, uh, the average, uh, dividing by the [disfmarker] by the, um, variance. [speaker007:] Yeah. 
[speaker004:] Um, I [disfmarker] th i the other, uh [disfmarker] the [disfmarker] the last w uh, meeting [disfmarker] eh, I don't know if you remain we have problem to [disfmarker] with the [disfmarker] [vocalsound] with [disfmarker] with the parameter [disfmarker] with the representations of parameter, [speaker007:] Yes. [speaker004:] because the [disfmarker] the valleys and the peaks in the signal, eh, look like, eh, it doesn't follow to the [disfmarker] to the energy in the signal. [speaker007:] Right. [speaker004:] And it was a problem, uh, with the scale. Eh, the scale. [speaker001:] With what? [speaker006:] Scale. [speaker001:] Scale. [speaker004:] Eh, and I [disfmarker] I change the scale and we can see the [disfmarker] the variance. [speaker007:] OK. But the bottom line is it's still not, uh, separating out very well. [speaker004:] Yeah. [speaker007:] Right? [speaker004:] Yeah. [speaker007:] OK. [speaker004:] The distribution [disfmarker] the distribution is [disfmarker] is similar. [speaker007:] So that's [disfmarker] that's [disfmarker] that's enough then. OK. [speaker004:] Yeah. [speaker007:] No, I mean, that there's no point in going through all of that if that's the bottom line, really. [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker007:] So, I [disfmarker] I think we have to start [disfmarker] [speaker004:] Yeah. [speaker007:] Uh, I mean, there there's two suggestions, really, which is, uh [disfmarker] what we said before is that, [speaker004:] Mmm, yeah. [speaker007:] um, it looks like, at least that you haven't found an obvious way to normalize so that the energy is anything like a reliable, uh, indicator of the overlap. [speaker004:] Yeah. [speaker007:] Um, I [disfmarker] I'm [disfmarker] I'm still [pause] a little f think that's a little funny. These things l [@ @] seems like there should be, [speaker004:] Yeah. Yeah. 
[speaker007:] but [disfmarker] [vocalsound] but you don't want to keep, uh [disfmarker] keep knocking at it if it's [disfmarker] if you're not getting any [disfmarker] any result with that. But, I mean, the other things that we talked about is, uh, [vocalsound] pitch related things and harmonicity related things, [speaker004:] Yeah. [speaker007:] so [disfmarker] which we thought also should be some kind of a reasonable indicator. Um [disfmarker] But, uh, a completely different tack on it wou is the one that was suggested, uh, by your colleagues in Spain, [speaker004:] Yeah. [speaker007:] which is to say, don't worry so much about the, uh, features. [speaker004:] Yeah. [speaker007:] That is to say, use, you know, as [disfmarker] as you're doing with the speech, uh, nonspeech, use some very general features. [speaker003:] Yeah. [speaker007:] And, uh, then, uh, look at it more from the aspect of modeling. [speaker004:] Yeah. [speaker007:] You know, have a [disfmarker] have a couple Markov models [speaker004:] Yeah. [speaker007:] and [disfmarker] and, uh, try to indi try to determine, you know, w when is th when are you in an overlap, when are you not in an overlap. [speaker004:] Hmm. [speaker007:] And let the, uh, uh, statistical system [pause] determine what's the right way to look at the data. I [disfmarker] I, um, [speaker004:] Yeah. [speaker007:] I think it would be interesting to find individual features and put them together. I think that you'd end up with a better system overall. But given the limitation in time [vocalsound] and given the fact that Javier's system already exists [pause] doing this sort of thing, [speaker004:] Yeah. Yeah. [speaker007:] uh, but, uh, its main limitation is that, again, it's only looking at silences which would [disfmarker] [speaker004:] Yeah. [speaker007:] maybe that's a better place to go. [speaker004:] Yeah. Yeah. [speaker007:] So. [speaker006:] Mm hmm. 
[speaker004:] I [disfmarker] I [disfmarker] I think that, eh, the possibility, eh, can be that, eh, Thilo, eh, working, eh, with a new class, [speaker007:] Mm hmm. [speaker004:] not only, eh, nonspeech and speech, but, eh, in [disfmarker] in [disfmarker] in the speech class, [speaker007:] Mm hmm. [speaker004:] dividing, eh, speech, eh, of [disfmarker] from a speaker and overlapping, to try [disfmarker] to [disfmarker] to do, eh, eh, a fast [disfmarker] a fast, eh, [vocalsound] experiment to [disfmarker] to prove that, nnn, this fea eh, general feature, [vocalsound] eh, can solve the [disfmarker] the [disfmarker] the problem, [speaker007:] Yeah. [speaker004:] and wh what [disfmarker] nnn, how far is [disfmarker] [speaker007:] Maybe. Yeah. [speaker004:] And, I [disfmarker] I have prepared the [disfmarker] the pitch tracker now. [speaker001:] Mm hmm. [speaker004:] And I hope the [disfmarker] the next week I will have, eh, some results and we [disfmarker] we will show [disfmarker] we will see, eh, the [disfmarker] the parameter [disfmarker] the pitch, [vocalsound] eh, tracking in [disfmarker] with the program. [speaker007:] I see. Ha h have you ever looked at the, uh, uh [disfmarker] Javier's, uh, speech segmenter? [speaker004:] And, nnn, nnn [disfmarker] [speaker003:] No. [speaker007:] Oh. Maybe m you could, you kn uh show Thilo that. [speaker004:] No. [speaker003:] No. [speaker004:] Yeah. [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Sure. [speaker007:] Cuz again the idea is there [disfmarker] the limitation there again was that he was [disfmarker] he was only using it to look at silence as a [disfmarker] as a [disfmarker] as a [disfmarker] as a p putative split point between speakers. [speaker004:] Yeah. [speaker007:] But if you included, uh, broadened classes then [pause] in principle maybe you can [pause] cover the overlap cases. [speaker003:] OK. [speaker004:] Yeah. Mmm, yeah. 
[speaker003:] Yeah, but I'm not too sure if [disfmarker] if we can [pause] really represent [vocalsound] overlap with [disfmarker] with the s [pause] detector I [disfmarker] I [disfmarker] I used up to now, [speaker007:] Uh [disfmarker] [speaker006:] Mm hmm. [speaker003:] the [disfmarker] to speech nonspeech as [disfmarker] [speaker001:] I think with [disfmarker] [speaker004:] Ah. [speaker003:] it's only speech or it's [disfmarker] it's [disfmarker] it's nonspeech. [speaker001:] That's right. But I think Javier's [disfmarker] [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker007:] N n [speaker001:] I think Javier's might be able to. [speaker003:] So. [speaker001:] It doesn't have the same Gaus uh, H M M modeling, [speaker003:] Yeah. [speaker001:] which is I think a drawback. [speaker003:] OK. [speaker001:] But, uh [disfmarker] [speaker007:] Well, it's [disfmarker] sort of has a simple one. [speaker004:] Mmm, yeah. [speaker007:] Right? It's [disfmarker] [speaker001:] Does it? [speaker007:] it's just [disfmarker] it's just a [disfmarker] isn't it just a Gaussian for each [disfmarker]? [speaker004:] Yeah. [speaker001:] Yeah. [speaker004:] Hmm. [speaker006:] Mm hmm. [speaker007:] Yeah. [speaker001:] And then [pause] he ch you choose optimal splitting. [speaker007:] Oh, it doesn't have [disfmarker] it doesn't have any temporal, uh [disfmarker]? I thought it [disfmarker] [speaker001:] Maybe I'm misremembering, but I did not think it had a Markov [disfmarker] [speaker007:] Yeah. I gues I guess I don't remember either. Uh. It's been a while. [speaker004:] Javier [disfmarker] [speaker003:] Yeah. Uh, I could have a look at it. [speaker007:] Uh. [speaker004:] You mean Ja eh, eh, Javier program? [speaker003:] So. [speaker001:] Mm hmm. [speaker004:] No, Javier di doesn't worked with, uh, a Markov [disfmarker] [speaker001:] Yeah, I didn't think so. [speaker004:] He on only train [disfmarker] [speaker007:] Oh, OK. 
So he's just [disfmarker] he just computes a Gaussian over potential [disfmarker] [speaker001:] Yep. [speaker004:] Yeah. It was only Gaussian. [speaker007:] Oh, I see. I see. And [disfmarker] and [disfmarker] [speaker004:] This is the idea. [speaker001:] And so I [disfmarker] I think it would work fine for detecting overlap. It's just, uh, that i it [disfmarker] he has the two pass issue that [disfmarker] What he does is, as a first pass he [disfmarker] he [disfmarker] p he does, um, a guess at where the divisions might be and he overestimates. And that's just a data reduction step, so that you're not trying at every time interval. [speaker003:] OK. [speaker001:] And so those are the putative [pause] places where he tries. [speaker004:] Yeah. [speaker003:] Yeah. OK. [speaker001:] And right now he's doing that with silence and that doesn't work with the Meeting Recorder. [speaker003:] Yeah. [speaker001:] So if we used another method to get the first pass, I think it would probably work. [speaker003:] Yeah. Sure. Yeah. Yeah, OK. [speaker001:] It's a good method. As long as the len as long the segments are long enough. [speaker004:] Yeah. [speaker001:] That's the other problem. [speaker007:] O k OK. So let me go back to what you had, though. [speaker003:] So [disfmarker] Yeah. [speaker007:] Um. The other thing one could do is [disfmarker] Couldn't [disfmarker] [speaker004:] Mm hmm. [speaker007:] I mean, it's [disfmarker] So you have two categories and you have Markov models for each. [speaker003:] Yeah. [speaker007:] Couldn't you have a third category? So you have, uh [disfmarker] you have, [vocalsound] uh, nonspeech, single person speech, and multiple person speech? [speaker006:] He has this on his board actually. Don't you have, like those [disfmarker] those several different [vocalsound] categories on the board? [speaker007:] Right? And then you have a Markov model for each? [speaker003:] Um [disfmarker] I'm not sure. 
I [disfmarker] I thought about, uh, adding, uh, uh, another class too. But it's not too easy, I think, the [disfmarker] the transition between the different class, to model them in [disfmarker] in the system I have now. But it [disfmarker] it [disfmarker] it could be possible, I think, [speaker007:] I see. I see. [speaker003:] in principle. [speaker007:] Yeah, I mean, I [disfmarker] This is all pretty gross. I mean, the [disfmarker] th the reason why, uh, I was suggesting originally that we look at features is because I thought, well, we're doing something we haven't done before, [speaker004:] Yeah. [speaker003:] Yeah. [speaker007:] we should at least look at the space and understand [disfmarker] [speaker004:] Yeah. [speaker003:] Yeah. [speaker007:] It seems like if two people [disfmarker] two or more people talk at once, it should get louder, [speaker003:] Yeah. [speaker007:] uh, and, uh, uh, there should be some discontinuity in pitch contours, [speaker003:] I had the impression. [speaker004:] Yeah. [speaker007:] and, uh, there should overall be a, um, smaller proportion of the total energy that is explained by any particular harmonic [pause] sequence in the spectrum. [speaker003:] Yeah. Yeah. [speaker001:] Right. [speaker007:] So those are all things that should be there. So far, um, uh, Jose has [disfmarker] has been [disfmarker] [speaker004:] Mm hmm. [speaker003:] Yeah. [speaker007:] By the way, I was told I should be calling you Pepe, but [disfmarker] by your friends, [speaker004:] Yeah. [speaker007:] but Anyway, um, [speaker004:] Yeah. [speaker007:] uh, the [disfmarker] has [disfmarker] has, uh, been exploring, uh, e largely the energy issue and, um, as with a lot of things, it is not [disfmarker] uh, like this, it's not as simple as it sounds. And then there's, you know [disfmarker] Is it energy? Is it log energy? Is it LPC residual energy? Is it [disfmarker] is it [disfmarker] [vocalsound] is it, uh, delta of those things? [speaker003:] Yeah. 
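[Editor's note: Morgan's suggestion above of a third category, so nonspeech, single-speaker speech, and overlapped speech, each with its own Markov model, can be sketched as a tiny Viterbi decode over three sticky classes. Everything numeric here, including the 1-D "feature" and the class means, is invented for illustration and is not Thilo's actual detector.]

```python
import math

CLASSES = ["nonspeech", "speech", "overlap"]
MEANS = {"nonspeech": 0.0, "speech": 2.0, "overlap": 4.0}  # toy 1-D feature
STAY, SWITCH = math.log(0.9), math.log(0.05)  # sticky transition weights

def viterbi_labels(frames):
    """Most likely class per frame for a list of scalar features."""
    def emit(c, x):  # unit-variance Gaussian log likelihood (up to a const)
        return -0.5 * (x - MEANS[c]) ** 2

    score = {c: emit(c, frames[0]) for c in CLASSES}
    backptrs = []
    for x in frames[1:]:
        new, ptr = {}, {}
        for c in CLASSES:
            prev = max(CLASSES,
                       key=lambda p: score[p] + (STAY if p == c else SWITCH))
            new[c] = score[prev] + (STAY if prev == c else SWITCH) + emit(c, x)
            ptr[c] = prev
        score = new
        backptrs.append(ptr)
    # Trace back from the best final class.
    path = [max(CLASSES, key=score.get)]
    for ptr in reversed(backptrs):
        path.append(ptr[path[-1]])
    return path[::-1]
```

[The sticky transitions are what makes this a Markov model rather than frame-by-frame classification: isolated misclassified frames get smoothed over.]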
[speaker007:] Uh, what is it no Obviously, just a simple number [disfmarker] [vocalsound] absolute number isn't gonna work. So [vocalsound] it should be with [disfmarker] compared to what? Should there be a long window for the [vocalsound] normalizing factor and a short window for what you're looking at? [speaker003:] Yeah. [speaker007:] Or, you know, how b short should they be? So, th he's been playing around with a lot of these different things [speaker004:] Hmm. [speaker007:] and [disfmarker] and so far at least has not come up with [vocalsound] any combination that really gave you an indicator. [speaker004:] Yeah. [speaker007:] So I [disfmarker] I still have a hunch that there's [disfmarker] it's in there some place, but it may be [disfmarker] given that you have a limited time here, it [disfmarker] it just may not be the best thing to [disfmarker] [vocalsound] to [disfmarker] to focus on for the remaining of it. [speaker004:] Yeah. To overrule, yeah. [speaker007:] So pitch related and harmonic related, I'm [disfmarker] I'm [pause] somewhat more hopeful for it. [speaker004:] Yeah. Yeah. [speaker007:] But it seems like if we just wanna get something to work, [speaker003:] Yeah. [speaker004:] Yeah. [speaker007:] that, uh, their suggestion of [disfmarker] of [disfmarker] Th they were suggesting going to Markov models, uh, but in addition there's an expansion of what Javier did. And one of those things, looking at the statistical component, [speaker004:] One. [speaker003:] Yeah. [speaker007:] even if the features that you give it are maybe not ideal for it, it's just sort of this general filter bank or [disfmarker] [vocalsound] or cepstrum or something, um [disfmarker] [speaker003:] Yeah. [speaker007:] Eee [vocalsound] it's in there somewhere probably. [speaker004:] But, eh, what did you think about the possibility of using the Javier software? 
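Before the discussion turns to Javier's software, the normalization question Morgan poses above (a short analysis window compared against a long normalizing window) can be sketched concretely. The window lengths, the hop, and the simple log-energy ratio below are assumptions made for illustration; they are not the feature definitions Jose actually tried.

```python
import numpy as np

def normalized_log_energy(signal, short_win=256, long_win=4096, eps=1e-10):
    """Frame log energy minus a long-window running log energy.

    Positive values mark frames that are loud relative to their local
    context, one crude cue for overlapped speech.
    """
    x = np.asarray(signal, dtype=float) ** 2
    # Short-window energy, one value per hop of short_win samples.
    n = len(x) // short_win
    short_e = x[: n * short_win].reshape(n, short_win).mean(axis=1)
    # Long-window context: running mean over long_win // short_win frames.
    k = max(1, long_win // short_win)
    long_e = np.convolve(short_e, np.ones(k) / k, mode="same")
    return np.log(short_e + eps) - np.log(long_e + eps)
```

The open questions in the meeting map onto the parameters here: log versus linear energy is the final subtraction, and the "compared to what" question is the choice of `long_win`.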
Eh, I mean, the, uh [disfmarker] the, uh [disfmarker] the BIC criterion, the [disfmarker] the [disfmarker] t to train the [disfmarker] the Gaussian, eh, using the [disfmarker] the mark, eh, by hand, eh, eh, to distinguish be mmm, to train overlapping zone and speech zone. I mean, eh, [vocalsound] I [disfmarker] I [disfmarker] I think that an interesting, eh, experiment, eh, could be, th eh, to prove that, mmm, if s we suppose that, eh, the [disfmarker] the first step [disfmarker] [vocalsound] I mean, the [disfmarker] the classifier what were the classifier from Javier or classifier from Thilo? W What happen with the second step? I [disfmarker] I mean, what [disfmarker] what happen with the, eh [disfmarker] the, uh, clu the, uh [disfmarker] the clu the clustering process? [speaker001:] Mm hmm. [speaker004:] Using the [disfmarker] the Gaussian. [speaker001:] You mean Javier's? [speaker004:] Yeah. [speaker001:] What do you mean? [speaker004:] I [disfmarker] I mean, that is [disfmarker] is enough [disfmarker] is enough, eh, to work well, eh, to, eh, separate or to distinguish, eh, between overlapping zone and, eh, speaker zone? Because th [vocalsound] if [disfmarker] if we [disfmarker] if we, eh, nnn, develop an classifier [disfmarker] and the second step doesn't work [pause] well, eh, we have [pause] another problem. [speaker001:] I [disfmarker] Yeah. I had tried doing it by hand at one point with a very short sample, [speaker004:] N [speaker001:] and it worked pretty well, but I haven't worked with it a lot. So what I d I d I took a hand segmented sample [speaker004:] Nnn, yeah. [speaker001:] and I added ten times the amount of numbers at random, [speaker004:] Yeah. Oh. [speaker001:] and it did pick out pretty good boundaries. [speaker004:] Yeah. But is [disfmarker] is [disfmarker] if [disfmarker] [speaker001:] But this was just very anecdotal sort of thing. 
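Javier's segmenter is built around the BIC change-point criterion mentioned here. A generic version of that criterion (not his actual code) scores a candidate boundary by comparing one full-covariance Gaussian over the whole window against separate Gaussians on each side, minus a model-size penalty; the penalty weight `lam` is a tunable assumption.

```python
import numpy as np

def delta_bic(x, split, lam=1.0):
    """BIC change-point score at `split` for a (n, d) feature sequence.

    Positive values favour modelling the two halves with separate
    full-covariance Gaussians, i.e. a likely segment boundary.
    """
    x = np.asarray(x, dtype=float)
    n, d = x.shape

    def logdet(seg):
        cov = np.cov(seg, rowvar=False, bias=True)
        _, ld = np.linalg.slogdet(np.atleast_2d(cov))
        return ld

    n1, n2 = split, n - split
    # Penalty: extra parameters of the two-Gaussian model.
    penalty = 0.5 * (d + 0.5 * d * (d + 1)) * np.log(n)
    return (0.5 * n * logdet(x)
            - 0.5 * n1 * logdet(x[:split])
            - 0.5 * n2 * logdet(x[split:])
            - lam * penalty)
```

The experiment proposed in the meeting, feeding hand-marked boundaries into the second (clustering) step, would test whether this kind of score separates overlap zones from single-speaker zones even when the first-pass segmentation is perfect.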
[speaker004:] But it's possible with my segmentation by hand [pause] that we have information about the [disfmarker] the overlapping, [speaker001:] Right. So if we [disfmarker] if we fed the hand segmentation to Javier's and it doesn't work, then we know something's wrong. [speaker004:] uh [disfmarker] Yeah. The [disfmarker] N n Yeah. No. The demonstration by hand. Segmentation by hand I [disfmarker] I [disfmarker] I think is the fast experiment. [speaker001:] Yeah. I think that's probably worthwhile doing. [speaker007:] Uh huh. [speaker004:] Uh, we can prove that the [disfmarker] [speaker001:] Whether it'll work or not. [speaker004:] this kind o emph emphasises parameter and Gaussian [disfmarker] [speaker001:] Yeah. [speaker007:] Yeah. [speaker001:] Yep. Y do you know where his software is? Have you used it at all? [speaker004:] I yeah have. I have. [speaker001:] OK. So. I [disfmarker] I have as well, so if you need [disfmarker] need help let me know. [speaker004:] OK. [speaker007:] Let's read some digits. [speaker001:] OK. [speaker006:] Mm hmm. [speaker001:] uuh And we are [disfmarker] [speaker002:] OK. [speaker003:] Oh, I don't [disfmarker] [speaker001:] I think I'm zero. [speaker002:] Wow! [speaker005:] Ah [speaker006:] Wh what causes the crash? [speaker002:] Unprecedented. [speaker003:] Hello, hello, hello, hello. [speaker001:] Did you fix something? [speaker003:] Hello. [speaker005:] Five, five. [speaker003:] Hello, hello. [speaker006:] Oh, maybe it's the turning [disfmarker] turning off and turning on of the mike, right? [speaker002:] Uh, you think that's you? Oh. [speaker003:] Aaa aaa aaa. [speaker006:] Yeah, OK, mine's working. [speaker003:] OK. That's me. [speaker002:] OK. OK. So, um I guess we are [pause] um [pause] gonna do the digits at the end. Uh [speaker004:] Channel [disfmarker] channel three, yeah. [speaker005:] Mmm, channel five? [speaker004:] OK. [speaker003:] Channel two. [speaker005:] Doesn't work? [speaker003:] Two. 
[speaker002:] Yeah, that's the mike number there, uh [pause] Uh, mike number five, and [pause] channel [disfmarker] channel four. [speaker005:] No? Ah, [speaker001:] Is it written on her sheet, I believe. [speaker004:] Mike four. [speaker006:] Watch this. [speaker005:] it was four. [speaker006:] Yep, that's me. [speaker005:] Yeah. [speaker001:] But, channel [speaker005:] Yeah yeah yeah. [speaker002:] This is you. [speaker005:] OK. I saw that. Ah [disfmarker] yeah, it's OK. [speaker002:] Yeah. And I'm channel uh two I think, [speaker003:] Ooo. [speaker002:] or channel [disfmarker] [speaker003:] I think I'm channel two. [speaker002:] Oh, I'm channel [disfmarker] must be channel one. [speaker005:] Channel [disfmarker] [vocalsound] I decided to talk about that. [speaker002:] Channel one? Yes, OK. OK. So uh [pause] I also copied uh the results that we all got in the mail I think from uh [disfmarker] [pause] from OGI and we'll go [disfmarker] go through them also. So where are we on [disfmarker] [pause] on uh [vocalsound] [pause] our runs? [speaker004:] Uh so. [pause] uh [disfmarker] We [disfmarker] So [pause] As I was already said, we [disfmarker] we mainly focused on uh four kind of features. [speaker002:] Excuse me. [speaker004:] The PLP, the PLP with JRASTA, the MSG, and the MFCC from the baseline Aurora. [speaker002:] Mm hmm. [speaker004:] Uh, and we focused for the [disfmarker] the test part on the English and the Italian. Um. We've trained uh several neural networks on [disfmarker] so [disfmarker] on the TI digits English [pause] and on the Italian data and also on the broad uh [pause] English uh French and uh Spanish databases. Mmm, so there's our result tables here, for the tandem approach, and um, actually what we [disfmarker] we [@ @] observed is that if the network is trained on the task data it works pretty well. [speaker002:] OK. [speaker003:] Chicken on the grill.

[speaker002:] Our [disfmarker] our uh [disfmarker] [pause] There's a [disfmarker] [pause] We're pausing for a photo [disfmarker] [speaker003:] Try that corner. [speaker001:] How about over th from the front of the room? [speaker003:] Yeah, it's longer. [speaker002:] We're pausing for a photo opportunity here. Uh. [vocalsound] Uh. So. [speaker006:] Oh wait wait wait wait wait. Wait. [speaker003:] Get out of the [disfmarker] Yeah. [speaker006:] Hold on. Hold on. [speaker002:] OK. [speaker006:] Let me give you a black screen. [speaker002:] He's facing this way. What? OK, this [disfmarker] this would be a [pause] good section for our silence detection. [speaker006:] OK. [speaker003:] Mm hmm. [speaker002:] Um Oh. [speaker006:] Musical chairs everybody! [speaker002:] OK. So um, [pause] you were saying [pause] about the training data [disfmarker] [speaker004:] Yeah, [speaker002:] Yeah. [speaker004:] so if the network is trained on the task data um [pause] tandem works pretty well. And uh actually we have uh, results are similar Only on, [speaker001:] Do you mean if it's trained only on [disfmarker] On data from just that task, [speaker004:] yeah. [speaker001:] that language? [speaker004:] Just that task. But actually we didn't train network on [pause] uh both types of data I mean [pause] uh [pause] phonetically ba phonetically balanced uh data and task data. We only did either task [disfmarker] task data or [pause] uh broad [pause] data. [speaker001:] Mmm. Mm hmm. [speaker004:] Um [pause] Yeah. So, [speaker002:] So how [disfmarker] I mean [disfmarker] clearly it's gonna be good then [speaker001:] So what's th [speaker002:] but the question is how much [pause] worse is it [pause] if you have broad data? I mean, [pause] my assump From what I saw from the earlier results, uh I guess last week, [pause] was that um, [pause] if you [pause] trained on one language and tested on another, say, that [pause] the results were [disfmarker] were relatively poor. [speaker004:] Mmm. 
Yeah. [speaker002:] But [disfmarker] but the question is if you train on one language [pause] but you have a broad coverage [pause] and then test in another, [pause] does that [disfmarker] [pause] is that improve things [pause] i c in comparison? [speaker004:] If we use the same language? [speaker002:] No, no, no. Different lang So [pause] um [pause] If you train on TI digits [pause] and test on Italian digits, [pause] you do poorly, [pause] let's say. [speaker004:] Mm hmm. [speaker002:] I don't have the numbers in front of me, [speaker004:] But [disfmarker] Yeah but I did not uh do that. [speaker002:] so I'm just imagining. E So, you didn't train on [pause] TIMIT and test on [disfmarker] [pause] on Italian digits, say? [speaker004:] We [disfmarker] No, we did four [disfmarker] four kind of [disfmarker] of testing, actually. The first testing is [pause] with task data [disfmarker] So, with nets trained on task data. So for Italian on the Italian speech [@ @]. The second test is trained on a single language um with broad database, but the same language as the t task data. [speaker002:] OK. [speaker004:] But for Italian we choose Spanish which [pause] we assume is close to Italian. The third test is by using, um the three language database and the fourth is [speaker002:] W which in [disfmarker] It has three languages. That's including the w the [disfmarker] [pause] the [disfmarker] [speaker004:] This includes [disfmarker] [speaker002:] the one that it's [disfmarker] [speaker004:] Yeah. But [pause] not digits. [speaker001:] In [speaker004:] I mean it's [disfmarker] [speaker002:] Right. [speaker001:] The three languages [pause] is not digits, it's the broad [pause] data. [speaker004:] Yeah [speaker001:] OK. [speaker004:] And the fourth test is uh [pause] excluding from these three languages the language [pause] that is [pause] the task language. [speaker002:] Oh, OK, yeah, so, that is what I wanted to know. [speaker004:] Yeah. 
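The tandem setup being scored across these four training conditions feeds MLP phone posteriors, after log compression and decorrelation, into the standard HTK Gaussian-mixture back end. A minimal sketch of that feature transformation follows; the PCA/KLT matrix and dimensions are placeholders, not the configuration used in these experiments.

```python
import numpy as np

def tandem_features(mlp_posteriors, pca_matrix):
    """Convert MLP phone posteriors into tandem features for an HMM system.

    Log compression makes the highly skewed posteriors more
    Gaussian-shaped; the PCA/KLT matrix decorrelates them so that
    diagonal-covariance HMM Gaussians model them reasonably.
    """
    logp = np.log(np.maximum(mlp_posteriors, 1e-10))
    return (logp - logp.mean(axis=0)) @ pca_matrix
```

The four test conditions in the meeting vary only the data the MLP is trained on (task digits, same-language broad data, three-language broad data, and broad data excluding the task language); the transformation itself stays fixed.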
[speaker002:] I just wasn't saying it very well, I guess. [speaker004:] Uh, yeah. So um [pause] for uh TI digits for ins example [pause] uh when we go from TI digits training to [pause] TIMIT training [pause] uh we lose [pause] uh around ten percent, uh. The error rate increase u of [disfmarker] of [disfmarker] of ten percent, relative. [speaker002:] Relative. Right. [speaker004:] So this is not so bad. And then when we jump to the multilingual data it's uh it become worse and, well Around uh, let's say, [pause] twenty perc twenty percent further. [speaker002:] Ab about how much? [speaker004:] So. Yeah. [speaker002:] Twenty percent further? [speaker004:] Twenty to [disfmarker] to thirty percent further. Yeah. [speaker001:] And so, remind me, the multilingual stuff is just the broad data. Right? [speaker004:] Yeah. [speaker001:] It's not the digits. So it's the combination of [pause] two things there. It's [pause] removing the [pause] task specific [pause] training and [pause] it's adding other languages. [speaker004:] Yeah. Yeah. [speaker001:] OK. [speaker004:] But the first step is al already removing the task s specific from [disfmarker] from [disfmarker] [speaker001:] Already, right right right. [speaker004:] So. And we lose [disfmarker] [speaker001:] So they were sort of building [pause] here? [speaker004:] Yeah. [speaker001:] OK? [speaker004:] Uh [pause] So, basically when it's trained on the [disfmarker] the multilingual broad data [pause] um or number [disfmarker] so, the [disfmarker] the [pause] ratio of our error rates uh with the [pause] baseline error rate is around [pause] uh one point one. So. [speaker002:] Yes. 
[vocalsound] And it's something like one point three of [disfmarker] of the [pause] uh [disfmarker] I i if you compare everything to the first case at the baseline, you get something like one point one for the [disfmarker] for the using the same language but a different task, and something like one point three [pause] for three [disfmarker] three languages [pause] broad stuff. [speaker004:] No no no. Uh same language we are at uh [disfmarker] for at English at O point eight. So it improves, [pause] compared to the baseline. But [disfmarker] So. Le let me. [speaker002:] I [disfmarker] I [disfmarker] I'm sorry. [speaker004:] Tas task data we are u [speaker002:] I [disfmarker] I [disfmarker] I meant something different by baseline [speaker004:] Yeah. [speaker002:] So let me [disfmarker] let me [disfmarker] Um, [pause] so, [pause] um [disfmarker] [speaker004:] Mmm. [speaker002:] OK, fine. Let's [disfmarker] let's use the conventional meaning of baseline. [speaker004:] Hmm. [speaker002:] I [disfmarker] I [disfmarker] By baseline here I meant [pause] uh using the task specific data. [speaker004:] Oh yeah, the f Yeah, OK. Yeah. [speaker002:] But uh [disfmarker] [pause] uh, because that's what you were just doing with this ten percent. So I was just [disfmarker] I just trying to understand that. [speaker004:] Yeah. Sure. [speaker002:] So if we call [pause] a factor of w just one, just normalized to one, the word error rate [pause] that you have [pause] for using TI digits as [disfmarker] as [pause] training and TI digits as test, [speaker004:] Mmm. Mm hmm. [speaker002:] uh different words, I'm sure, but [disfmarker] [pause] but uh, uh the same [pause] task and so on. [speaker004:] Mm hmm. [speaker002:] If we call that "one", [pause] then what you're saying is [pause] that the word error rate [pause] for the same language but using [pause] uh different training data than you're testing on, say TIMIT and so forth, [pause] it's one point one. [speaker004:] Mm hmm. 
Yeah, it's around one point one. [speaker002:] Right. And if it's [disfmarker] [speaker004:] Yeah. [speaker002:] you [pause] do [pause] go to [pause] three languages including the English, [pause] it's something like one point three. [speaker004:] Ye [speaker002:] That's what you were just saying, I think. [speaker004:] Uh, more actually. [speaker001:] One point four? [speaker004:] If I [disfmarker] Yeah. [speaker001:] So, it's an additional thirty percent. [speaker004:] What would you say? Around one point four yeah. [speaker002:] OK. And if you exclude [pause] English, [pause] from this combination, what's that? [speaker004:] If we exclude English, [pause] um [pause] there is [pause] not much difference with the [pause] data with English. So. Yeah. [speaker002:] Aha! That's interesting. [pause] That's interesting. Do you see? Because [disfmarker] Uh, [speaker004:] Uh. [speaker002:] so [disfmarker] No, that [disfmarker] that's important. So what [disfmarker] what it's saying here is just that yes, there is a reduction [pause] in performance, [pause] when you don't [pause] um [pause] have the s [pause] when you don't have [pause] um [speaker001:] Task data. [speaker002:] Wait a minute, th th the [disfmarker] [speaker004:] Hmm. [speaker002:] No, actually [pause] it's interesting. So it's [disfmarker] So when you go to a different task, there's actually not so [pause] different. It's when you went to these [disfmarker] So what's the difference between two and three? Between the one point one case and the one point four case? I'm confused. [speaker001:] It's multilingual. [speaker004:] Yeah. The only difference it's [disfmarker] is that it's multilingual [disfmarker] Um [speaker002:] Cuz in both [disfmarker] in both [disfmarker] both of those cases, you don't have the same task. [speaker004:] Yeah. Yeah sure. 
[speaker002:] So is [disfmarker] is the training data for the [disfmarker] for this one point four case [disfmarker] does it include the training data for the one point one case? [speaker004:] Uh yeah. [speaker006:] Yeah, a fraction of it. [speaker004:] A part of it, yeah. [speaker002:] How m how much bigger is it? [speaker004:] Um [pause] It's two times, [speaker006:] Yeah, um. [speaker004:] actually? Yeah. Um. The English data [disfmarker] [pause] No, the multilingual databases are two times the [pause] broad English [pause] data. We just wanted to keep this, w well, not too huge. So. [speaker002:] So it's two times, but it includes the [disfmarker] but it includes the broad English data. [speaker004:] I think so. Do you [disfmarker] Uh, Yeah. [speaker002:] And the broad English data is what you got this one point one [pause] with. So that's TIMIT basically right? [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker002:] So it's band limited TIMIT. [speaker006:] Mm hmm. [speaker004:] Mm hmm. [speaker002:] This is all eight kilohertz sampling. [speaker004:] Yeah. [speaker006:] Downs Right. [speaker002:] So you have band limited TIMIT, [pause] gave you uh almost as good as a result as using TI digits [pause] on a TI digits test. OK? [speaker004:] Hmm? [speaker002:] Um [pause] and [pause] um But, [pause] when you add in more training data but keep the neural net the same size, [pause] it [pause] um performs worse on the TI digits. OK, now all of this is [disfmarker] [pause] This is noisy [pause] TI digits, I assume? Both training and test? Yeah. OK. Um OK. Well. 
[pause] We [disfmarker] we [disfmarker] we may just need to uh [disfmarker] So I mean it's interesting that h going to a different [disfmarker] different task didn't seem to hurt us that much, and going to a different language um It doesn't seem to matter [disfmarker] The difference between three and four is not particularly great, so that means that [pause] whether you have the language in or not is not such a big deal. [speaker004:] Mmm. [speaker002:] It sounds like um [pause] uh [pause] we may need to have more [pause] of uh things that are similar to a target language or [disfmarker] I mean. [pause] You have the same number of parameters in the neural net, you haven't increased the size of the neural net, and maybe there's just [disfmarker] [pause] just not enough [pause] complexity to it to represent [pause] the variab increased variability in the [disfmarker] in the training set. That [disfmarker] that could be. Um [pause] So, what about [disfmarker] So these are results with [pause] uh th [pause] that you're describing now, that [pause] they are pretty similar for the different features or [disfmarker] [pause] or uh [disfmarker] [speaker004:] Uh, let me check. Uh. [speaker002:] Yeah. [speaker004:] So. This was for the PLP, [speaker002:] Yeah. [speaker004:] Um. The [disfmarker] Yeah. For the PLP with JRASTA the [disfmarker] [pause] the [disfmarker] we [disfmarker] This is quite the same [pause] tendency, [pause] with a slight increase of the error rate, [pause] uh if we go to [disfmarker] to TIMIT. And then it's [disfmarker] it gets worse with the multilingual. Um. Yeah. There [disfmarker] there is a difference actually with [disfmarker] b between PLP and JRASTA is that [pause] JRASTA [pause] seems to [pause] perform better with the highly mismatched [pause] condition [pause] but slightly [disfmarker] slightly worse [pause] for the well matched condition. Mmm. 
[speaker002:] I have a suggestion, actually, even though it'll delay us slightly, would [disfmarker] would you mind [pause] running into the other room and making [pause] copies of this? Cuz we're all sort of [disfmarker] [speaker004:] Yeah, yeah. [speaker002:] If we c if we could look at it, while we're talking, I think it'd be [speaker004:] OK. [speaker002:] uh [disfmarker] [pause] Uh, I'll [disfmarker] I'll sing a song or dance or something while you [vocalsound] do it, too. [speaker006:] Alright. [speaker001:] So um [disfmarker] Go ahead. Ah, while you're gone I'll ask s some of my questions. [speaker002:] Yeah. [speaker001:] Um. [speaker002:] Yeah. Uh, this way and just slightly to the left, yeah. [speaker001:] The um [disfmarker] What was [disfmarker] Was this number [pause] forty or [disfmarker] It was roughly the same as this one, [pause] he said? [speaker002:] Um. [speaker001:] When you had the two language versus the three language? [speaker002:] That's what he was saying. [speaker001:] That's where he removed English, [speaker006:] Yeah. [speaker001:] right? [speaker002:] Right. [speaker006:] It sometimes, actually, depends on what features you're using. [speaker002:] Yeah. But [disfmarker] but i it sounds like [disfmarker] [speaker006:] Um, but [disfmarker] [vocalsound] [vocalsound] He [disfmarker] Mm hmm. [speaker002:] I mean. That's interesting because [pause] it [disfmarker] it seems like what it's saying is not so much that you got hurt [pause] uh because [pause] you [pause] uh didn't have so much representation of English, because in the other case you don't get hurt any more, at least when [pause] it seemed like uh it [disfmarker] it might simply be a case that you have something that is just much more diverse, [speaker001:] Mm hmm. [speaker002:] but you have the same number of parameters representing it. [speaker001:] Mm hmm. I wonder [disfmarker] were um all three of these nets [pause] using the same output? 
This multi language [pause] uh labelling? [speaker006:] He was using uh sixty four phonemes from [pause] SAMPA. [speaker001:] OK, OK. [speaker006:] Yeah. [speaker001:] So this would [disfmarker] [pause] From this you would say, "well, it doesn't really matter if we put Finnish [pause] into [pause] the training of the neural net, [pause] if there's [pause] gonna be, [pause] you know, Finnish in the test data." Right? [speaker002:] Well, it's [disfmarker] it sounds [disfmarker] [pause] I mean, we have to be careful, cuz we haven't gotten a good result yet. [speaker001:] Yeah. [speaker002:] And comparing different bad results can be [pause] tricky. [speaker001:] Hmm. [speaker002:] But I [disfmarker] I [disfmarker] I [disfmarker] [pause] I think it does suggest that it's not so much uh [pause] uh cross [pause] language as cross type of speech. [speaker001:] Mm hmm. [speaker002:] It's [disfmarker] it's um [disfmarker] [vocalsound] But we did [disfmarker] Oh yeah, the other thing I was asking him, though, is that I think that in the case [disfmarker] Yeah, you [disfmarker] you do have to be careful because of com compounded results. I think we got some earlier results [pause] in which you trained on one language and tested on another and you didn't have [pause] three, but you just had one [pause] language. So you trained on [pause] one type of digits and tested on another. Didn Wasn't there something of that? Where you, [pause] say, trained on Spanish and tested on [disfmarker] on TI digits, or the other way around? Something like that? [speaker005:] No. [speaker002:] I thought there was something like that, [pause] that he showed me [pause] last week. We'll have to wait till we get [disfmarker] [speaker001:] Yeah, that would be interesting. [speaker002:] Um, This may have been what I was asking before, Stephane, but [disfmarker] [pause] but, um, wasn't there something that you did, [pause] where you trained [pause] on one language and tested on another? 
I mean no [disfmarker] no mixture but just [disfmarker] [speaker006:] I'll get it for you. [speaker004:] Uh, no, no. [speaker002:] We've never just trained on one lang [speaker004:] Training on a single language, you mean, and testing on the other one? [speaker002:] Yeah. [speaker005:] Not yet. [speaker004:] Uh, no. So the only [pause] task that's similar to this is the training on two languages, and [comment] that [disfmarker] [speaker002:] But we've done a bunch of things where we just trained on one language. Right? I mean, you haven't [disfmarker] you haven't done all your tests on multiple languages. [speaker004:] Uh, No. Either thi this is test with [pause] uh the same language [pause] but from the broad data, or it's test with [pause] uh different languages also from the broad data, excluding the [disfmarker] [speaker005:] The early experiment that [disfmarker] [speaker004:] So, it's [disfmarker] it's three or [disfmarker] three and four. [speaker001:] Did you do different languages from digits? [speaker004:] Uh. No. You mean [pause] training digits [pause] on one language and using the net [pause] to recognize on the other? [speaker001:] Digits on another language? [speaker004:] No. [speaker002:] See, I thought you showed me something like that last week. You had a [disfmarker] you had a little [disfmarker] [speaker004:] Uh, [pause] No, I don't think so. [speaker002:] Um What [disfmarker] [speaker003:] These numbers are uh [pause] ratio to baseline? [speaker002:] So, I mean wha what's the [disfmarker] [speaker004:] So. [speaker002:] This [disfmarker] this chart [disfmarker] this table that we're looking at [pause] is um, show is all testing for TI digits, or [disfmarker]? [speaker006:] Bigger is worse. [speaker004:] So you have uh basically two [pause] uh parts. [speaker006:] This is error rate, I think. [speaker003:] Ratio. [speaker006:] No. [pause] No. [speaker004:] The upper part is for TI digits [speaker006:] Yeah, yeah, yeah. 
[speaker004:] and it's divided in three [pause] rows [pause] of four [disfmarker] four rows each. [speaker006:] Mm hmm. [speaker002:] Yeah. [speaker004:] And the first four rows is well matched, then the s the second group of four rows is mismatched, and [pause] finally highly mismatched. And then the lower part is for Italian and it's the same [disfmarker] [pause] the same thing. [speaker001:] So, so the upper part is training [pause] TI digits? [speaker004:] So. It's [disfmarker] it's the HTK results, I mean. So it's [pause] HTK training testings [pause] with different kind of features [speaker001:] Ah. [speaker004:] and what appears in the [pause] uh left column is [pause] the networks that are used for doing this. [speaker002:] Hmm. [speaker004:] So. Uh Yeah. [speaker002:] Well, What was is that i What was it that you had [pause] done [pause] last week when you showed [disfmarker] Do you remember? Wh when you showed me [pause] the [disfmarker] your table last week? [speaker004:] It It was part of these results. Mmm. Mmm. [speaker001:] So where is the baseline [pause] for the TI digits [pause] located in here? [speaker004:] You mean the HTK Aurora baseline? [speaker001:] Yeah. [speaker004:] It's uh the one hundred number. It's, well, all these numbers are the ratio [pause] with respect to the baseline. [speaker001:] Ah! Ah, OK, OK. [speaker002:] So this is word [disfmarker] word error rate, so a high number is bad. [speaker004:] Yeah, this is [pause] a word error rate ratio. [speaker005:] Yeah. [speaker004:] Yeah. [speaker001:] OK, I see. [speaker004:] So, seventy point two means that [pause] we reduced the error rate uh by thirty [disfmarker] thirty percent. So. [speaker001:] OK, OK, gotcha. [speaker004:] Hmm. [speaker002:] OK, [vocalsound] so if we take uh um let's see PLP [pause] uh with on line [pause] normalization and [pause] delta del so that's this thing you have circled here [pause] in the second column, [speaker004:] Yeah. 
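As just clarified, every number in the table is a word-error-rate ratio to the HTK Aurora baseline, while the earlier "ten percent relative" remarks are relative changes between two systems. Writing both conventions down explicitly (the WER values below are made up purely to reproduce the 70.2 figure quoted in the meeting):

```python
def wer_ratio(system_wer, baseline_wer):
    """WER as a percentage of the baseline WER.

    100 means identical to baseline; 70.2 means a 29.8 percent relative
    error-rate reduction; above 100 means worse than the baseline.
    """
    return 100.0 * system_wer / baseline_wer

def relative_change(system_wer, reference_wer):
    """Relative WER increase (+) or reduction (-) versus a reference system."""
    return (system_wer - reference_wer) / reference_wer
```

For example, a hypothetical 14.04 percent WER against a 20 percent baseline gives a ratio of 70.2, and moving from a 10 percent to an 11 percent WER is the "ten percent relative" increase discussed for the TIMIT-trained net.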
[speaker002:] um [pause] and "multi English" refers to what? [speaker004:] To TIMIT. Mmm. Then you have [pause] uh MF, [pause] MS and ME which are for French, Spanish and English. And, yeah. Actually I [disfmarker] [pause] I uh forgot to say that [pause] the multilingual net are trained [pause] on [pause] uh [pause] features without the s derivatives uh but with [pause] increased frame numbers. Mmm. And we can [disfmarker] we can see on the first line of the table that it [disfmarker] it [disfmarker] [pause] it's slightly [disfmarker] slightly worse when we don't use delta but it's not [disfmarker] [pause] not that much. [speaker002:] Right. So w w So, I'm sorry. I missed that. What's MF, MS and ME? [speaker001:] Multi French, [speaker004:] So. Multi French, Multi Spanish, and Multi English. [speaker001:] Multi Spanish [speaker002:] Uh OK. So, it's [pause] uh [pause] broader vocabulary. [speaker004:] Yeah. [speaker002:] Then [disfmarker] And [disfmarker] OK so I think what I'm [disfmarker] what I saw in your smaller chart that I was thinking of was [disfmarker] was [pause] there were some numbers I saw, I think, that included these multiple languages and it [disfmarker] and I was seeing [pause] that it got worse. I [disfmarker] I think that was all it was. You had some very limited results that [disfmarker] at that point [speaker004:] Yeah. [speaker002:] which showed [pause] having in these [disfmarker] these other languages. In fact it might have been just this last category, [pause] having two languages broad that were [disfmarker] where [disfmarker] where English was removed. So that was cross language and the [disfmarker] and the result was quite poor. What I [disfmarker] [pause] we hadn't seen yet was that if you added in the English, it's still poor. [speaker004:] Yeah. [speaker002:] Uh [vocalsound] [vocalsound] Um now, what's the noise condition [pause] um [pause] of the training data [disfmarker] [speaker004:] Still poor. 
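The "derivatives" the multilingual nets were trained without are the standard regression-based delta features. A generic sketch is below; the window size of 2 is the common choice, not necessarily the one used for these particular nets.

```python
import numpy as np

def delta(features, window=2):
    """Regression-based delta (time derivative) of a (T, d) feature matrix."""
    x = np.asarray(features, dtype=float)
    w = window
    pad = np.pad(x, ((w, w), (0, 0)), mode="edge")  # repeat edge frames
    num = sum(k * (pad[w + k:len(x) + w + k] - pad[w - k:len(x) + w - k])
              for k in range(1, w + 1))
    den = 2 * sum(k * k for k in range(1, w + 1))
    return num / den
```

Dropping these and instead widening the MLP's input context (the "increased frame numbers" mentioned above) trades an explicit derivative for letting the network learn its own temporal dynamics.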
[speaker002:] Well, I think this is what you were explaining. The noise condition is the same [disfmarker] It's the same uh Aurora noises uh, in all these cases [pause] for the training. [speaker004:] Yeah. Yeah. [speaker002:] So there's not a [pause] statistical [disfmarker] sta a strong st [pause] statistically different [pause] noise characteristic between [pause] uh the training and test [speaker004:] No these are the s s s same noises, [speaker002:] and yet we're seeing some kind of effect [disfmarker] [speaker004:] yeah. At least [disfmarker] at least for the first [disfmarker] [pause] for the well matched, [speaker006:] Well matched condition. [speaker004:] yeah. [speaker002:] Right. So there's some kind of a [disfmarker] a [disfmarker] an effect from having these [disfmarker] uh this broader coverage um Now I guess what we should try doing with this is try [pause] testing these on u this same sort of thing on [disfmarker] you probably must have this [pause] lined up to do. To try the same t [pause] with the exact same training, do testing on [pause] the other languages. [speaker004:] Mmm. [speaker002:] On [disfmarker] on um [disfmarker] So. Um, oh I well, wait a minute. You have this here, for the Italian. That's right. [speaker004:] Yeah. [speaker002:] OK, so, [pause] So. [speaker004:] Yeah, so for the Italian the results are [vocalsound] uh [pause] stranger um [pause] Mmm. So what appears is that perhaps Spanish is [pause] not very close to Italian because uh, well, [pause] when using the [disfmarker] the network trained only on Spanish it's [disfmarker] [pause] the error rate is [pause] almost uh twice [pause] the baseline error rate. [speaker002:] Mm hmm. [speaker004:] Mmm. [vocalsound] Uh. [speaker002:] Well, I mean, let's see. Is there any difference in [disfmarker] So it's in [pause] the uh [disfmarker] So you're saying that [pause] when you train on English [pause] and [pause] uh [pause] and [disfmarker] and test on [disfmarker] [speaker004:] Yeah. 
[speaker002:] No, you don't have training on English testing [disfmarker] [speaker004:] There [disfmarker] there is [disfmarker] another difference, is that the noise [disfmarker] the noises are different. Well, For [disfmarker] for the Italian part I mean [speaker002:] In [disfmarker] in what? [speaker004:] the [pause] uh [pause] the um [pause] networks are trained with noise from [pause] Aurora [disfmarker] TI digits, [speaker005:] Aurora two. [speaker004:] mmm. Yeah. [speaker002:] And the noise is different in th [speaker004:] And perhaps the noise are [pause] quite different from the noises [pause] in the speech that Italian. [speaker002:] Do we have any um [pause] test sets [pause] uh in [pause] any other language that um have the same noise as in [pause] the Aurora? [speaker004:] And [disfmarker] [speaker005:] Mmm, no. [speaker004:] No. [speaker001:] Can I ask something real quick? In [disfmarker] in the upper part [disfmarker] [pause] in the English [pause] stuff, [pause] it looks like the very best number is sixty point nine? and that's in the uh [disfmarker] [pause] the third [pause] section in the upper part under PLP JRASTA, sort of the middle column? [speaker004:] Yeah. [speaker001:] I is that [pause] a noisy condition? [speaker004:] Yeah. [speaker001:] So that's matched training? Is that what that is? [speaker004:] It's [disfmarker] no, the third part, so it's uh [pause] highly mismatched. So. Training and [pause] test noise are different. [speaker001:] So [disfmarker] why do you get your best number in [disfmarker] Wouldn't you get your best number in the clean case? [speaker003:] Well, it's relative to the um [pause] baseline mismatching [speaker004:] Yeah. [speaker001:] Ah, [speaker004:] Yeah. Yeah. [speaker001:] OK so these are not [disfmarker] OK, alright, I see. [speaker003:] Yeah. [speaker001:] OK. 
And then [disfmarker] so, in the [disfmarker] in the um [disfmarker] [pause] in the [pause] non mismatched clean case, [pause] your best one was under MFCC? That sixty one point four? [speaker004:] Yeah. [pause] But it's not a clean case. It's [pause] a noisy case but [pause] uh training and test noises are the same. [speaker001:] Oh! So this upper third? [speaker004:] So [disfmarker] Yeah. [speaker001:] Uh that's still noisy? [speaker004:] Yeah. [speaker001:] Ah, OK. [speaker004:] So it's always noisy basically, [speaker001:] Mm hmm. [speaker004:] and, [pause] well, the [disfmarker] [speaker001:] I see. [speaker004:] Mmm. [speaker002:] OK? Um [pause] So uh, I think this will take some [pause] looking at, thinking about. But, [pause] what is uh [disfmarker] what is currently running, that's [disfmarker] uh, i that [disfmarker] just filling in the holes here or [disfmarker] or [disfmarker]? [comment] [pause] pretty much? [speaker004:] Uh, no we don't plan to fill the holes but [pause] actually there is something important, [speaker002:] OK. [speaker004:] is that [pause] um we made a lot of assumption concerning the on line normalization and we just noticed [pause] uh recently that [pause] uh the [pause] approach that we were using [pause] was not [pause] uh [pause] leading to very good results [pause] when we [pause] used the straight features to HTK. Um [pause] [pause] Mmm. So basically d [pause] if you look at the [disfmarker] at the left of the table, [pause] the first uh row, [pause] with eighty six, one hundred, and forty three and seventy five, these are the results we obtained for Italian [pause] uh with [pause] straight [pause] mmm, PLP features [pause] using on line normalization. [speaker002:] Mm hmm. [speaker004:] Mmm. 
And the, mmm [disfmarker] what's [pause] in the table, just [pause] at the left of the PLP twelve [pause] on line normalization column, so, the numbers seventy nine, fifty four and [pause] uh forty two [pause] are the results obtained by uh Pratibha with [pause] uh his on line normalization [disfmarker] uh her on line normalization approach. [speaker001:] Where is that? seventy nine, fifty [speaker002:] Uh, it's just sort of sitting right on the uh [disfmarker] the column line. [speaker005:] Fifty one? [speaker004:] So. [speaker005:] This [disfmarker] [speaker002:] Uh. [pause] Yeah. [speaker001:] Oh I see, OK. [speaker004:] Just [disfmarker] uh Yeah. So these are the results of [pause] OGI with [pause] on line normalization and straight features to HTK. And the previous result, eighty six and so on, [pause] are with our [pause] features straight to HTK. [speaker002:] Yes. Yes. [speaker004:] So [pause] what we see that [disfmarker] is [disfmarker] there is that um [pause] uh the way we were doing this was not correct, but [pause] still [pause] the networks [pause] are very good. When we use the networks [pause] our number are better that [pause] uh Pratibha results. [speaker005:] We improve. [speaker002:] So, do you know what was wrong with the on line normalization, or [disfmarker]? [speaker004:] Yeah. There were diff there were different things and [pause] basically, [pause] the first thing is the mmm, [pause] alpha uh [pause] value. So, the recursion [pause] uh [pause] part. um, [pause] I used point five percent, [pause] which was the default value in the [disfmarker] [pause] in the programs here. And Pratibha used five percent. [speaker002:] Uh [speaker004:] So it adapts more [pause] quickly [speaker002:] Yes. Yeah. [speaker004:] Um, but, yeah. 
I assume that this was not important because [pause] uh previous results from [disfmarker] from Dan and [disfmarker] show that basically [pause] the [pause] both [disfmarker] both values g give the same [disfmarker] same [pause] uh results. It was true on uh [pause] TI digits but it's not true on Italian. [speaker002:] Mm hmm. [speaker004:] Uh, second thing is the initialization of the [pause] stuff. Actually, [pause] uh what we were doing is to start the recursion from the beginning of the [pause] utterance. And using initial values that are the global mean and variances [pause] measured across the whole database. [speaker002:] Right. Right. [speaker004:] And Pratibha did something different is that he [disfmarker] uh she initialed the um values of the mean and variance [pause] by computing [pause] this on the [pause] twenty five first frames of each utterance. Mmm. There were other minor differences, the fact that [pause] she used fifteen dissities instead s instead of thirteen, and that she used C zero instead of log energy. Uh, but the main differences concerns the recursion. So. [pause] Uh, I changed the code uh and now we have a baseline that's similar to the OGI baseline. [speaker002:] OK. [speaker004:] We [disfmarker] It [disfmarker] it's slightly [pause] uh different because [pause] I don't exactly initialize the same way she does. Actually I start, [pause] mmm, I don't wait to a fifteen [disfmarker] twenty five [disfmarker] twenty five frames [pause] before computing a mean and the variance [pause] to e to [disfmarker] to start the recursion. [speaker003:] Mm hmm. [speaker002:] Yeah. [speaker004:] I [disfmarker] I use the on line scheme and only start the re recursion after the twenty five [disfmarker] [pause] twenty fifth frame. But, well it's similar. So [pause] uh I retrained [pause] the networks with [pause] these [disfmarker] well, the [disfmarker] the [disfmarker] the networks are retaining with these new [pause] features. [speaker002:] Mm hmm. 
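[Editor's note: the on-line normalization scheme discussed above — a recursive mean/variance update with forgetting factor alpha (0.5% default vs. OGI's 5%) and statistics initialized from the first 25 frames of each utterance — can be sketched as below. This is an illustrative reconstruction of the described scheme, not the actual ICSI or OGI code.]

```python
import numpy as np

def online_normalize(frames, alpha=0.05, init_frames=25):
    """On-line mean/variance normalization as described in the meeting:
    running statistics updated recursively with forgetting factor alpha
    (alpha=0.05 is the 5% value OGI used; 0.005 was the slower default),
    initialized from the first `init_frames` frames of the utterance."""
    frames = np.asarray(frames, dtype=float)
    # initialize from the start of this utterance rather than global stats
    mean = frames[:init_frames].mean(axis=0)
    var = frames[:init_frames].var(axis=0) + 1e-8
    out = np.empty_like(frames)
    for t, x in enumerate(frames):
        # recursive update: the statistics adapt frame by frame
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
        out[t] = (x - mean) / np.sqrt(var)
    return out
```

A larger alpha adapts more quickly, which is the difference the speakers found mattered on Italian even though it had not mattered on TI digits.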
[speaker004:] And, yeah. [speaker002:] OK. [speaker004:] So basically what I expect is that [pause] these numbers will a little bit go down but [pause] perhaps not [disfmarker] not so much [speaker002:] Right. [speaker004:] because [pause] I think the neural networks learn perhaps [pause] to [disfmarker] even if the features are not [pause] normalized. It [disfmarker] it will learn how to normalize [speaker002:] Right. [speaker004:] and [disfmarker] [speaker002:] OK, but I think that [pause] given the pressure of time we probably want to draw [disfmarker] because of that [pause] especially, we wanna draw some conclusions from this, do some reductions [pause] in what we're looking at, [speaker004:] Yeah. [speaker002:] and make some strong decisions for what we're gonna do testing on before next week. [speaker004:] Yeah [vocalsound] I [speaker002:] So do you [disfmarker] are you [disfmarker] w did you have something going on, on the side, with uh multi band [pause] or [disfmarker] on [disfmarker] on this, [speaker004:] No, [speaker002:] or [disfmarker]? [speaker004:] I [disfmarker] we plan to start this uh so, act actually we have discussed uh [pause] [@ @] um, these [disfmarker] what we could do [pause] more as a [disfmarker] as a research and [disfmarker] [pause] and [pause] we were thinking perhaps that [pause] uh [pause] the way we use the tandem is not [disfmarker] Uh, well, there is basically perhaps a flaw in the [disfmarker] in the [disfmarker] the stuff because [pause] we [pause] trained the networks [disfmarker] If we trained the networks on the [disfmarker] on [pause] a language and a t or a specific [pause] task, [speaker002:] Mm hmm. [speaker004:] um, what we ask is [disfmarker] to the network [disfmarker] is to put the bound the decision boundaries somewhere in the space. [speaker002:] Mmm. 
[speaker004:] And uh [pause] mmm and ask the network to put one, [pause] at one side of the [disfmarker] for [disfmarker] for a particular phoneme at one side of the boundary [disfmarker] decision boundary and one for another phoneme at the other side. And [pause] so there is kind of reduction of the information there that's not correct because if we change task [pause] and if the phonemes are not in the same context in the new task, [pause] obviously the [pause] decision boundaries are not [disfmarker] [pause] should not be at the same [pause] place. But the way the feature gives [disfmarker] The [disfmarker] the way the network gives the features is that it reduce completely the [disfmarker] [pause] it removes completely the information [disfmarker] [pause] a lot of information from the [disfmarker] the features [pause] by uh [pause] uh [pause] placing the decision boundaries at [pause] optimal places for [pause] one kind of [pause] data [speaker002:] I di [speaker004:] but [pause] this is not the case for another kind of data. [speaker002:] It's a trade off, [speaker004:] So [disfmarker] [speaker002:] right? Any anyway go ahead. [speaker004:] Yeah. So uh what we were thinking about is perhaps [pause] um one way [pause] to solve this problem is increase the number of [pause] outputs of the neural networks. Doing something like, um [pause] um phonemes within context and, well, basically context dependent phonemes. [speaker002:] Maybe. I mean, I [disfmarker] I think [pause] you could make [pause] the same argument, it'd be just as legitimate, [pause] for hybrid systems [pause] as well. [speaker004:] Yeah but, we know that [disfmarker] [speaker002:] Right. And in fact, [pause] th things get better with context dependent [pause] versions. Right? [speaker004:] Ye yeah but here it's something different. We want to have features uh well, [pause] um. [speaker002:] Yeah. 
Yeah, but it's still true [pause] that what you're doing [pause] is you're ignoring [disfmarker] you're [disfmarker] you're coming up with something to represent, [pause] whether it's a distribution, [pause] probability distribution or features, you're coming up with a set of variables [pause] that are representing [pause] uh, [pause] things that vary w over context. [speaker004:] Mm hmm. [speaker002:] Uh, and you're [pause] putting it all together, ignoring the differences in context. That [disfmarker] that's true [pause] for the hybrid system, it's true for a tandem system. So, for that reason, when you [disfmarker] in [disfmarker] in [disfmarker] in a hybrid system, [pause] when you incorporate context one way or another, [pause] you do get better scores. [speaker004:] Yeah. [speaker002:] OK? But I [disfmarker] it's [disfmarker] it's a big deal [pause] to get that. I [disfmarker] I'm [disfmarker] I'm sort of [disfmarker] And once you [disfmarker] the other thing is that once you represent [disfmarker] start representing more and more context [pause] it is [pause] uh [pause] much more [pause] um specific [pause] to a particular task in language. So um Uh, the [disfmarker] [pause] the acoustics associated with [pause] uh a particular context, for instance you may have some kinds of contexts that will never occur [pause] in one language and will occur frequently in the other, so the qu the issue of getting enough training [pause] for a particular kind of context becomes harder. We already actually don't have a huge amount of training data um [speaker004:] Yeah, but [disfmarker] mmm, I mean, [pause] the [disfmarker] the way we [disfmarker] we do it now is that we have a neural network and [pause] basically [pause] the net network is trained almost to give binary decisions. [speaker002:] Right. [speaker004:] And [pause] uh [disfmarker] binary decisions about phonemes. Nnn [disfmarker] Uh It's [disfmarker] [speaker002:] Almost. 
But I mean it [disfmarker] it [disfmarker] it does give a distribution. [speaker004:] Yeah. [speaker002:] It's [disfmarker] and [disfmarker] and [pause] it is true that if there's two phones that are very similar, [pause] that [pause] uh [pause] the [disfmarker] [pause] i it may prefer one but it will [pause] give a reasonably high value to the other, too. [speaker004:] Yeah. Yeah, sure but uh [pause] So basically it's almost binary decisions and [pause] um the idea of using more [pause] classes is [pause] to [pause] get something that's [pause] less binary decisions. [speaker002:] Oh no, but it would still be even more of a binary decision. It [disfmarker] it'd be even more of one. Because then you would say [pause] that in [disfmarker] that this phone in this context is a one, [pause] but the same phone in a slightly different context is a zero. [speaker004:] But [disfmarker] yeah, but [disfmarker] [speaker002:] That would be even [disfmarker] even more distinct of a binary decision. I actually would have thought you'd wanna go the other way and have fewer classes. [speaker004:] Yeah, but if [disfmarker] [speaker002:] Uh, I mean for instance, the [disfmarker] the thing I was arguing for before, but again which I don't think we have time to try, [pause] is something in which you would modify the code so you could train to have several outputs on and use articulatory features [speaker004:] Mmm. Mm hmm. [speaker002:] cuz then that would [disfmarker] that would go [disfmarker] [pause] that would be much broader and cover many different situations. But if you go to very very fine categories, it's very [pause] binary. [speaker004:] Mmm. Yeah, but I think [disfmarker] Yeah, perhaps you're right, but you have more classes so [pause] you [disfmarker] you have more information in your features. So, [vocalsound] Um [pause] You have more information in the [pause] uh [speaker002:] Mm hmm. True. 
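[Editor's note: the alternative raised here — modifying the training code so several outputs can be "on" at once, targeting articulatory features instead of one-hot phones — amounts to multi-hot target vectors. The tiny feature inventory below is purely illustrative, not the inventory the speakers had in mind.]

```python
# Hypothetical articulatory-feature inventory for a few phones.
ARTIC = {
    "p": {"stop", "bilabial", "voiceless"},
    "b": {"stop", "bilabial", "voiced"},
    "m": {"nasal", "bilabial", "voiced"},
}
FEATURES = sorted({f for feats in ARTIC.values() for f in feats})

def multi_hot(phone):
    """Multi-hot target vector: several outputs on at once, so each
    feature is shared across many phones and contexts (broader coverage
    than one-hot phone targets)."""
    active = ARTIC[phone]
    return [1 if f in active else 0 for f in FEATURES]
```

Because each articulatory feature occurs in many phones, the targets are far less task- and language-specific than fine context-dependent classes, which is the point being argued.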
[speaker004:] posteriors vector um which means that [disfmarker] But still the information is relevant [speaker002:] Mm hmm. [speaker004:] because it's [disfmarker] it's information that helps to discriminate, [speaker002:] Mm hmm. [speaker004:] if it's possible to be able to discriminate [pause] among the phonemes in context. [speaker002:] Well it's [disfmarker] [speaker004:] But the [disfmarker] [speaker002:] it's [disfmarker] [pause] it's an interesting thought. I mean we [disfmarker] we could disagree about it at length [speaker004:] Mmm. Mmm. [speaker002:] but the [disfmarker] the real thing is if you're interested in it you'll probably try it and [disfmarker] [pause] and [pause] we'll see. But [disfmarker] but what I'm more concerned with now, as an operational level, is [pause] uh, you know, [speaker004:] Mmm. [speaker002:] what do we do in four or five days? Uh, and [disfmarker] [pause] so we have [pause] to be concerned [pause] with Are we gonna look at any combinations of things, you know once the nets get retrained so you have this problem out of it. [speaker004:] Mmm. [speaker002:] Um, are we going to look at [pause] multi band? Are we gonna look at combinations of things? Uh, what questions are we gonna ask, uh now that, I mean, [pause] we should probably turn shortly to this O G I note. Um, how are we going to [pause] combine [pause] with what they've been focusing on? Uh, [pause] Uh we haven't been doing any of the L D A RASTA sort of thing. [speaker004:] Mm hmm. [speaker002:] And they, although they don't talk about it in this note, um, [pause] there's um, [pause] the issue of the [pause] um Mu law [pause] business [pause] uh [pause] versus the logarithm, [speaker004:] Mm hmm. [speaker002:] um, [pause] so. So what i what is going on right now? What's right [disfmarker] you've got [pause] nets retraining, Are there [disfmarker] is there [disfmarker] are there any H T K [pause] trainings [disfmarker] testings going on? 
[speaker004:] N [speaker005:] I [disfmarker] I [disfmarker] I'm trying the HTK with eh, [pause] PLP twelve on line delta delta and MSG filter [pause] together. [speaker002:] The combination, I see. [speaker005:] The combination, yeah. But I haven't result [vocalsound] at this moment. [speaker002:] MSG and [disfmarker] and PLP. [speaker005:] Yeah. [speaker002:] And is this with the revised [pause] on line normalization? [speaker005:] Ye Uh, with the old [pause] older, [speaker004:] Yeah. [speaker002:] Old one. [speaker005:] yeah. [speaker002:] So it's using all the nets for that but again we have the hope that it [disfmarker] [pause] We have the hope that it [disfmarker] [pause] maybe it's not making too much difference, [speaker005:] Yeah. But [pause] We can know soon. Maybe. [speaker002:] but [disfmarker] but yeah. [speaker005:] I don't know. [speaker004:] Yeah. [speaker002:] Uh, OK. [speaker004:] Uh so there is this combination, yeah. Working on combination obviously. [speaker005:] Mm hmm. [speaker004:] Um, I will start work on multi band. And [pause] we [pause] plan to work also on the idea of using both [pause] features [pause] and net outputs. Um. And [pause] we think that [pause] with this approach perhaps [pause] we could reduce the number of outputs of the neural network. Um, So, get simpler networks, because we still have the features. So we have um [pause] come up with um [pause] different kind of [pause] broad phonetic categories. And we have [disfmarker] Basically we have three [pause] types of broad phonetic classes. Well, something using place of articulation which [disfmarker] which leads to [pause] nine, I think, [pause] broad classes. Uh, another which is based on manner, which is [disfmarker] is also something like nine classes. And then, [pause] something that combine both, and we have [pause] twenty f [pause] twenty five? [speaker006:] Twenty seven. [speaker004:] Twenty seven broad classes. 
So like, uh, oh, I don't know, like back vowels, front vowels. [speaker002:] So what you do [disfmarker] [speaker004:] Um For the moments we do not [disfmarker] don't have nets, [speaker002:] um I just wanna understand so [pause] You have two net or three nets? Was this? How many [disfmarker] how many nets do you have? [speaker004:] I mean, [pause] It's just [disfmarker] Were we just changing [pause] the labels to retrain nets [pause] with fewer out outputs. [speaker002:] No nets. [speaker005:] Begin to work in this. We are [@ @]. [speaker002:] Right. [speaker004:] And then [disfmarker] [speaker002:] But [disfmarker] but I didn't understand [disfmarker] [speaker004:] Mm hmm. [speaker002:] Uh. [pause] the software currently just has [disfmarker] uh a [disfmarker] allows for I think, the one [disfmarker] one hot output. So you're having multiple nets and combining them, or [disfmarker]? Uh, how are you [disfmarker] how are you coming up with [disfmarker] If you say [pause] uh [pause] If you have a place [pause] characteristic and a manner characteristic, how do you [disfmarker] [speaker004:] It It's the single net, [speaker001:] I think they have one output. [speaker004:] yeah. [speaker002:] Oh, it's just one net. [speaker005:] Yeah. [speaker006:] mm hmm [speaker004:] It's one net with [pause] um [pause] twenty seven outputs if we have twenty seven classes, [speaker002:] I see. [speaker004:] yeah. [speaker002:] I see, OK. [speaker004:] So it's [disfmarker] Well, it's basically a standard net with fewer [pause] classes. [speaker002:] So you're sort of going the other way of what you were saying a bit ago instead of [disfmarker] [speaker004:] Yeah, [speaker002:] yeah. [speaker004:] but I think [disfmarker] Yeah. [speaker006:] But including the features. [speaker004:] B b including the features, yeah. [speaker005:] Yeah. [speaker004:] I don't think this [pause] will work [pause] alone. 
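[Editor's note: the retraining step described — "just changing the labels" so the same net trains with fewer outputs — is a label-collapsing pass over the frame-level phone labels. The mapping below is a toy illustration; the actual place/manner/combined inventories (9, 9, and 27 classes) discussed in the meeting are not reproduced here.]

```python
# Illustrative phone -> broad-class mapping (manner-style); the real
# 27-class inventory combined place and manner of articulation.
BROAD_CLASS = {
    "aa": "vowel", "iy": "vowel", "uw": "vowel",
    "p": "stop", "t": "stop", "k": "stop",
    "s": "fricative", "f": "fricative",
    "m": "nasal", "n": "nasal",
    "sil": "silence",
}

def collapse_labels(phone_labels):
    """Rewrite a frame-level phone label sequence into broad classes,
    so the net can be retrained unchanged except for fewer outputs."""
    return [BROAD_CLASS.get(p, "other") for p in phone_labels]
```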
I think it will get worse because Well, I believe the effect that [disfmarker] of [disfmarker] of too reducing too much the information is [pause] basically [disfmarker] basically what happens [speaker002:] Uh huh. [speaker004:] and [disfmarker] [speaker002:] But you think if you include that [pause] plus the other features, [speaker004:] but [disfmarker] Yeah, because [pause] there is perhaps one important thing that the net [pause] brings, and OGI show showed that, is [pause] the distinction between [pause] sp speech and silence Because these nets are trained on well controlled condition. I mean the labels are obtained on clean speech, and we add noise after. So this is one thing And But perhaps, something intermediary using also [pause] some broad classes could [disfmarker] could bring so much more information. Uh. [speaker002:] So [disfmarker] so again then we have these broad classes and [disfmarker] well, somewhat broad. I mean, it's twenty seven instead of sixty four, [pause] basically. [speaker004:] Yeah. [speaker002:] And you have the original features. Which are PLP, or something. [speaker004:] Yeah. Mm hmm. [speaker002:] And then uh, just to remind me, all of that goes [pause] into [disfmarker] uh, that all of that is transformed by uh, uh, K KL or something, or [disfmarker]? [speaker004:] There will probably be, [speaker005:] Mu. [speaker004:] yeah, one single KL to transform everything or [vocalsound] [pause] uh, [speaker002:] Right. [speaker005:] No transform the PLP [speaker004:] per [speaker005:] and only transform the other I'm not sure. [speaker004:] This is [pause] still something [pause] that [speaker002:] Well no, I think [disfmarker] [speaker004:] yeah, we [pause] don't know [disfmarker] [speaker002:] I see. So there's a question of whether you would [disfmarker] [speaker005:] Two e [@ @] it's one. [speaker004:] Yeah. [speaker002:] Right. Whether you would transform together or just one. Yeah. Might wanna try it both ways. 
But that's interesting. So that's something that you're [disfmarker] you haven't trained yet but are preparing to train, and [disfmarker] [speaker004:] Yeah. [speaker002:] Yeah. Um [pause] [pause] Yeah, [speaker004:] Mmm. [speaker002:] so I think Hynek will be here Monday. Monday or Tuesday. So [speaker004:] Uh, yeah. [speaker002:] So I think, you know, we need to [pause] choose the [disfmarker] choose the experiments carefully, so we can get uh key [disfmarker] [pause] key questions answered [pause] uh before then [speaker004:] Mm hmm. [speaker002:] and [pause] leave other ones aside even if it [pause] leaves incomplete [pause] tables [vocalsound] [pause] someplace, uh [pause] uh, it's [disfmarker] it's really time to [disfmarker] [pause] time to choose. [speaker004:] Mm hmm. [speaker002:] Um, let me pass this out, [pause] by the way. Um These are [disfmarker] Did [disfmarker] did [disfmarker] [pause] did I interrupt you? [speaker005:] Yeah, I have one. [speaker002:] Were there other things that you wanted to [disfmarker] [speaker004:] Uh, no. I don't think so. Yeah, I have one. [speaker007:] Oh, thanks. [speaker002:] Ah! [pause] OK. [pause] OK, we have [pause] lots of them. [speaker005:] We have one. [speaker002:] OK, so [vocalsound] um, Something I asked [disfmarker] So they're [disfmarker] they're doing [pause] the [disfmarker] the VAD I guess they mean voice activity detection So again, it's the silence [disfmarker] So they've just trained up a net [pause] which has two outputs, I believe. Um [vocalsound] I asked uh [pause] Hynek whether [disfmarker] I haven't talked to Sunil [disfmarker] I asked Hynek whether [pause] they compared that to [pause] just taking the nets we already had [pause] and summing up the probabilities. [speaker004:] Mm hmm. [speaker002:] Uh. [pause] To get the speech [disfmarker] voice activity detection, or else just using the silence, [pause] if there's only one [pause] silence output. Um [pause] And, he didn't think they had, um. 
But on the other hand, maybe they can get by with a smaller net and [pause] maybe [pause] sometimes you don't run the other, maybe there's a computational advantage to having a separate net, anyway. [speaker004:] Mm hmm. [speaker002:] So um Their uh [disfmarker] [pause] the results look pretty good. [speaker004:] Yeah. [speaker002:] Um, [pause] I mean, not uniformly. I mean, there's a [disfmarker] an example or two [pause] that you can find, where it made it slightly worse, but [pause] uh in [disfmarker] in all but a couple [pause] examples. [speaker004:] Mmm. [speaker002:] Uh. [speaker005:] But they have a question of the result. Um how are trained the [disfmarker] the LDA filter? How obtained the LDA filter? [speaker004:] Mmm. [speaker002:] I I'm sorry. I don't understand your question. [speaker005:] Yes, um the LDA filter [pause] needs some [pause] training set [pause] to obtain the filter. Maybe I don't know exactly how [pause] they are obtained. [speaker002:] It's on [pause] training. [speaker005:] Training, with the training test of each [disfmarker] You understand me? [speaker002:] No. [speaker005:] Yeah, uh for example, [pause] LDA filter [pause] need a set of [disfmarker] [pause] a set of training [pause] to obtain the filter. [speaker002:] Yes. [speaker005:] And maybe [pause] for the Italian, for the TD [pause] TE on for Finnish, these filter are [disfmarker] are obtained with their own training set. [speaker002:] Yes, I don't know. That's [disfmarker] that's [disfmarker] so that's a [disfmarker] that's a very good question, then [disfmarker] now that it [disfmarker] [pause] I understand it. It's "yeah, where does the LDA come from?" In the [disfmarker] In [pause] earlier experiments, they had taken LDA [pause] from a completely different database, [speaker005:] Yeah. [speaker002:] right? [speaker005:] Yeah, because maybe it the same situation that the neural network training with their own [speaker004:] Mmm. [speaker005:] set. 
[speaker002:] So that's a good question. Where does it come from? Yeah, I don't know. Um, [pause] but uh to tell you the [pause] truth, I wasn't actually looking at the LDA so much when I [disfmarker] I was looking at it I was [pause] mostly thinking about the [disfmarker] [pause] the VAD. And um, it ap [pause] it ap Oh what does [disfmarker] what does ASP? Oh that's [disfmarker] [speaker004:] The features, yeah. Yeah. [speaker005:] I don't understand also [speaker002:] It says "baseline ASP". [speaker005:] what is [disfmarker] [pause] what is the difference between ASP and uh baseline over? [speaker004:] Yeah, I don't know. [speaker003:] ASP. [speaker005:] This is [disfmarker] [speaker003:] Oh. [speaker002:] Anybody know [pause] any [disfmarker] [speaker003:] There it is. [speaker002:] Um Cuz there's "baseline Aurora" [pause] above it. [speaker003:] Mm hmm. [speaker002:] And it's [disfmarker] This is mostly better than baseline, although in some cases it's a little worse, in a couple cases. [speaker003:] Well, it says baseline ASP is twenty three mill [pause] minus thirteen. [speaker005:] Yeah. [speaker002:] Yeah, it says what it is. But I don't how that's different [pause] from [disfmarker] [speaker003:] From the baseline. [comment] OK. [speaker002:] I think this was [disfmarker] [pause] I think this is the same point we were at when [disfmarker] when we were up in Oregon. [speaker005:] Yeah. [speaker004:] I think [disfmarker] [pause] I think it's the C zero [disfmarker] using C zero instead of log energy. [speaker005:] Ah, OK, mm hmm. [speaker004:] Yeah, it's this. [speaker005:] yeah. [speaker002:] Oh. OK. [speaker004:] It should be that, yeah. Because [disfmarker] [speaker002:] Shouldn't it be [disfmarker] [speaker001:] They s they say in here that the VAD is not used as an additional feature. Does [disfmarker] does anybody know how they're using it? [speaker002:] Yeah. So [disfmarker] so what they're doing here is, [pause] i [speaker004:] Yeah. 
[speaker002:] if you look down at the block diagram, [pause] um, [pause] they estimate [disfmarker] they get a [disfmarker] [pause] they get an estimate [pause] of whether it's speech or silence, [speaker001:] But that [disfmarker] [speaker002:] and then they have a median filter of it. [speaker001:] Mm hmm. [speaker002:] And so um, [pause] basically they're trying to find stretches. The median filter is enforcing a [disfmarker] i it having some continuity. [speaker001:] Mm hmm. [speaker002:] You find stretches where the [pause] combination of the [pause] frame wise VAD and the [disfmarker] [pause] the median filter say that there's a stretch of silence. And then it's going through and just throwing the data away. [speaker003:] Hmm. [speaker002:] Right? So um [disfmarker] [speaker001:] So it's [disfmarker] it's [disfmarker] I don't understand. You mean it's throwing out frames? Before [disfmarker] [speaker002:] It's throwing out chunks of frames, yeah. There's [disfmarker] the [disfmarker] the median filter is enforcing that it's not gonna be single cases of frames, or isolated frames. [speaker001:] Yeah. [speaker002:] So it's throwing out frames and the thing is [pause] um, [pause] what I don't understand is how they're doing this with H T [speaker001:] Yeah, [speaker002:] This is [disfmarker] [speaker001:] that's what I was just gonna ask. How can you just throw out frames? [speaker002:] Yeah. Well, you [disfmarker] you can, [speaker004:] i [speaker002:] right? I mean y you [disfmarker] you [disfmarker] [speaker004:] Yeah. [speaker002:] it stretches again. For single frames I think it would be pretty hard. [speaker001:] Yeah. [speaker002:] But if you say speech starts here, speech ends there. [speaker001:] Mm hmm. [speaker002:] Right? [speaker003:] Huh. [speaker004:] Yeah. Yeah, you can basically remove the [disfmarker] the frames from the feature [disfmarker] feature files. [speaker002:] Yeah. 
Yeah, so I mean in the [disfmarker] i i in the [disfmarker] in the decoding, you're saying that we're gonna decode from here to here. [speaker004:] I t [speaker001:] Mm hmm. [speaker002:] I think they're [disfmarker] they're [disfmarker] they're treating it, [pause] you know, like uh [disfmarker] well, it's not isolated word, but [disfmarker] but connected, you know, the [disfmarker] the [disfmarker] [speaker001:] In the text they say that this [disfmarker] this is a tentative block diagram of a possible configuration we could think of. So that sort of sounds like they're not doing that yet. [speaker002:] Well. [pause] No they [disfmarker] they have numbers though, right? So I think they're [disfmarker] they're doing something like that. I think that they're [disfmarker] they're [disfmarker] I think what I mean by tha that is they're trying to come up with a block diagram that's plausible for the standard. In other words, it's [disfmarker] uh [disfmarker] I mean from the point of view of [disfmarker] of uh reducing the number of bits you have to transmit it's not a bad idea to detect silence anyway. [speaker001:] Yeah. Yeah. [speaker002:] Um. [speaker001:] I'm just wondering what exactly did they do up in this table if it wasn't this. [speaker002:] But it's [disfmarker] the thing is it's that [disfmarker] that [disfmarker] that's [disfmarker] that's I [disfmarker] I [disfmarker] Certainly it would be tricky about it intrans in transmitting voice, [pause] uh uh for listening to, is that these kinds of things [pause] uh cut [pause] speech off a lot. Right? [speaker001:] Mm hmm. [speaker002:] And so [pause] um [speaker001:] Plus it's gonna introduce delays. [speaker002:] It does introduce delays but they're claiming that it's [disfmarker] it's within the [disfmarker] [pause] the boundaries of it. [speaker001:] Mmm. 
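[Editor's note: the VAD scheme in the block diagram — a frame-wise speech/silence decision smoothed by a median filter so only contiguous stretches, not isolated frames, get dropped — can be sketched as below. The threshold and window length are assumed values, not the actual OGI configuration, and `speech_prob` stands in for the output of their two-output VAD net.]

```python
import numpy as np

def drop_silence(features, speech_prob, threshold=0.5, win=11):
    """Frame dropping with a median-filtered VAD decision.
    The median filter enforces continuity: single-frame flips inside a
    speech or silence stretch are smoothed away, so whole chunks of
    frames are removed rather than isolated ones."""
    raw = (np.asarray(speech_prob) > threshold).astype(int)
    half = win // 2
    padded = np.pad(raw, half, mode="edge")
    smoothed = np.array([int(np.median(padded[i:i + win]))
                         for i in range(len(raw))])
    keep = smoothed.astype(bool)
    return features[keep], keep
```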
[speaker002:] And the LDA introduces delays, and b [pause] what he's suggesting this here is a parallel path so that it doesn't introduce [pause] uh, any more delay. I it introduces two hundred milliseconds of delay but at the same [pause] time the LDA [pause] down here [disfmarker] I don't know [disfmarker] Wh what's the difference between TLDA and SLDA? [speaker003:] Temporal and spectral. [speaker002:] Ah, thank you. [speaker005:] Temporal LDA. [speaker002:] Yeah, you would know that. [speaker003:] Yeah [speaker002:] So um. The temporal LDA does in fact include the same [disfmarker] so that [disfmarker] I think he [disfmarker] well, by [disfmarker] by saying this is a b a tentative block di diagram I think means [pause] if you construct it this way, this [disfmarker] this delay would work in that way and then it'd be OK. [speaker001:] Ah. [speaker002:] They [disfmarker] they clearly did actually remove [pause] silent sections in order [disfmarker] because they [pause] got these [pause] word error rate [pause] results. So um I think that it's [disfmarker] it's nice to do that in this because in fact, it's gonna give a better word error result and therefore will help within an evaluation. Whereas to whether this would actually be in a final standard, I don't know. Um. Uh, as you know, part of the problem with evaluation right now is that the [pause] word models are pretty bad and nobody wants [disfmarker] [pause] has [disfmarker] has approached improving them. So [pause] it's possible that a lot of the problems [pause] with so many insertions and so forth would go away if they were better word models [pause] to begin with. So [pause] this might just be a temporary thing. But [disfmarker] But, on the other hand, and maybe [disfmarker] maybe it's a decent idea. 
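The silence-dropping scheme described in this exchange can be sketched roughly as follows. This is only an illustration of the idea, not the proposal's actual implementation: the per-frame VAD, the median-filter window length, and the frame representation are all placeholder assumptions.

```python
# Sketch of the discussed approach: a per-frame speech/silence decision is
# smoothed with a median filter (so isolated frames can't flip the label),
# and then whole stretches marked silence are removed before recognition.

def median_smooth(flags, window=5):
    """Median-filter a list of 0/1 frame decisions (1 = speech)."""
    half = window // 2
    out = []
    for i in range(len(flags)):
        lo, hi = max(0, i - half), min(len(flags), i + half + 1)
        neighborhood = sorted(flags[lo:hi])
        out.append(neighborhood[len(neighborhood) // 2])
    return out

def drop_silence(frames, flags, window=5):
    """Keep only frames whose smoothed VAD decision says 'speech'."""
    smoothed = median_smooth(flags, window)
    return [f for f, keep in zip(frames, smoothed) if keep]
```

With a toy decision sequence, the two isolated flips get smoothed away and the long silent run in the middle is dropped, leaving only the speech stretches at the start and end.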
So um The question we're gonna wanna go [pause] through next week when Hynek shows up I guess is given that we've been [disfmarker] if you look at what we've been trying, we're uh looking at [pause] uh, by then I guess, combinations of features and multi band Uh, and we've been looking at [pause] cross language, cross [pause] task [pause] issues. And they've been not so much looking at [pause] the cross task uh multiple language issues. But they've been looking at uh [disfmarker] [pause] at these issues. At the on line normalization and the uh [pause] voice activity detection. And I guess when he comes here we're gonna have to start deciding about [pause] um what do we choose [pause] from what we've looked at [pause] to um blend with [pause] some group of things in what they've looked at And once we choose that, [pause] how do we split up the [pause] effort? Uh, because we still have [disfmarker] even once we choose, [pause] we've still got [pause] uh another [pause] month or so, I mean there's holidays in the way, but [disfmarker] but uh [pause] I think the evaluation data comes January thirty first so there's still a fair amount of time [pause] to do things together it's just that they probably should be somewhat more coherent between the two sites [pause] in that [disfmarker] that amount of time. [speaker001:] When they removed the silence frames, did they insert some kind of a marker so that the recognizer knows it's [disfmarker] [pause] knows when it's time to back trace or something? [speaker002:] Well, see they, I [disfmarker] I think they're Um. I don't know the [disfmarker] [pause] the specifics of how they're doing it. They're [disfmarker] [pause] they're getting around the way the recognizer works because they're not allowed to [pause] um, change the scripts [pause] for the recognizer, [pause] I believe. [speaker001:] Oh, right. [speaker002:] So. [speaker001:] Maybe they're just inserting some dummy frames or something? [speaker002:] Uh.
Uh, you know that's what I had thought. But I don't [disfmarker] I don't think they are. [speaker001:] Hmm. [speaker002:] I mean that's [disfmarker] sort of what [disfmarker] the way I had imagined would happen is that on the other side, yeah you p put some low level noise or something. Probably don't want all zeros. Most recognizers don't like zeros [speaker001:] Hmm. [speaker002:] but [vocalsound] but [pause] you know, [pause] put some epsilon in or some rand [speaker001:] Yeah. [speaker002:] sorry epsilon random variable [pause] in or something. [speaker001:] Some constant vector. I mean [speaker002:] Maybe not a constant but it doesn't, uh [disfmarker] don't like to divide by the variance of that, [speaker001:] i w Or something [disfmarker] [speaker002:] but I mean it's [speaker001:] That's right. But something that [disfmarker] what I mean is something that is [pause] very distinguishable from [pause] speech. [speaker002:] Mm hmm. [speaker001:] So that the [disfmarker] the silence model in HTK will always pick it up. [speaker002:] Yeah. So I [disfmarker] I [disfmarker] that's what I thought they would do. or else, uh [pause] uh maybe there is some indicator to tell it to start and stop, I don't know. [speaker001:] Hmm. [speaker002:] But whatever they did, I mean they have to play within the rules of this specific evaluation. [speaker001:] Yeah. [speaker002:] We c we can find out. [speaker001:] Cuz you gotta do something. Otherwise, if it's just a bunch of speech, stuck together [disfmarker] [speaker002:] No they're [disfmarker] [speaker001:] Yeah. [speaker002:] It would do badly and it didn't so badly, [speaker001:] Yeah, right. [speaker002:] right? So they did something. [speaker001:] Yeah, yeah. [speaker002:] Yeah. Uh. So, OK, So I think [pause] this brings me up to date a bit. It hopefully brings other [pause] people up to date a bit. 
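The alternative discussed here — padding the removed stretches with low-level random noise rather than zeros, so that variance normalization and the silence model behave sensibly — might look like this. The feature dimension and epsilon are illustrative guesses, not values from the evaluation setup.

```python
import random

def replace_silence_with_noise(frames, flags, dim=3, eps=1e-3):
    """Replace silence frames with small random vectors instead of zeros,
    as suggested above: most recognizers dislike all-zero frames, and a
    constant vector would give a zero variance to normalize by."""
    out = []
    for f, is_speech in zip(frames, flags):
        if is_speech:
            out.append(f)
        else:
            out.append([random.uniform(-eps, eps) for _ in range(dim)])
    return out
```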
And um Um [pause] I think [disfmarker] Uh, I wanna look at these numbers off line a little bit and think about it and [disfmarker] [pause] and talk with everybody uh, [pause] outside of this meeting. Um, but uh No I mean it sounds like [disfmarker] I mean [pause] there [disfmarker] there [disfmarker] there are the usual number of [disfmarker] of [pause] little [disfmarker] little problems and bugs and so forth but it sounds like they're getting ironed out. And now we're [pause] seem to be kind of in a position to actually [pause] uh, [pause] look at stuff and [disfmarker] and [disfmarker] and compare things. So I think that's [disfmarker] that's pretty good. Um [pause] I don't know what the [disfmarker] One of the things I wonder about, [pause] coming back to the first results you talked about, is [disfmarker] is [pause] how much, [pause] uh [pause] things could be helped [pause] by more parameters. And uh [disfmarker] [pause] And uh how many more parameters we can afford to have, [vocalsound] [pause] in terms of the uh computational limits. Because anyway when we go to [pause] twice as much data [pause] and have the same number of parameters, particularly when it's twice as much data and it's quite diverse, um, I wonder if having twice as many parameters would help. [speaker004:] Mm hmm. [speaker002:] Uh, just have a bigger hidden layer. Uh But [disfmarker] I doubt it would [pause] help by forty per cent. But [vocalsound] [pause] but uh [speaker004:] Yeah. [speaker002:] Just curious. How are we doing on the [pause] resources? Disk, and [disfmarker] [speaker004:] I think we're alright, um, [pause] not much problems with that. [speaker002:] OK. Computation? [speaker004:] It's OK. Well this table took uh [pause] more than five days to get back. [speaker002:] We [disfmarker] Yeah. Yeah, well. [speaker004:] But [disfmarker] Yeah. [speaker002:] Are [disfmarker] were you folks using Gin? That's a [disfmarker] that just died, you know? [speaker004:] Mmm, no. 
You were using Gin [comment] perhaps, yeah? No. [speaker005:] No. [speaker002:] No? Oh, that's good. [speaker006:] It just died. [speaker002:] OK. Yeah, [pause] we're gonna get a replacement [pause] server that'll be a faster server, [pause] actually. [speaker005:] Yes. [speaker004:] Hmm. [comment] Mm hmm. [speaker002:] That'll be [disfmarker] It's a [pause] seven hundred fifty megahertz uh SUN [speaker003:] Tonic. [speaker002:] uh [pause] But it won't be installed for [pause] a little while. U Go ahead. [speaker007:] Do we [disfmarker] Do we have that big new IBM machine the, I think in th [speaker002:] We have the [pause] little tiny IBM machine [vocalsound] [pause] that might someday grow up to be a big [pause] IBM machine. It's got s slots for eight, uh IBM was donating five, I think we only got two so far, processors. We had originally hoped we were getting eight hundred megahertz processors. They ended up being five fifty. So instead of having eight processors that were eight hundred megahertz, we ended up with two [pause] that are five hundred and fifty megahertz. And more are supposed to come soon and there's only a moderate amount of dat of memory. So I don't think [pause] anybody has been sufficiently excited by it to [pause] spend much time [pause] uh [pause] with it, but uh [vocalsound] Hopefully, [pause] they'll get us some more [pause] parts, soon and [disfmarker] Uh, yeah, I think that'll be [disfmarker] once we get it populated, [pause] that'll be a nice machine. I mean we will ultimately get eight processors in there. And uh [disfmarker] and uh a nice amount of memory. Uh so it'll be a pr pretty fast Linux machine. [speaker007:] And if we can do things on Linux, [pause] some of the machines we have going already, like Swede? [speaker002:] Mm hmm. [speaker007:] Um It seems pretty fast. [speaker002:] Mm hmm. [speaker007:] But [disfmarker] I think Fudge is pretty fast too. [speaker002:] Yeah, I mean you can check with uh [pause] Dave Johnson. 
I mean, it [disfmarker] it's [disfmarker] [pause] I think the machine is just sitting there. And it does have two processors, you know and [disfmarker] [pause] Somebody could do [disfmarker] [pause] you know, uh, check out [pause] uh the multi threading [pause] libraries. And [pause] I mean i it's possible that the [disfmarker] I mean, I guess the prudent thing to do would be for somebody to do the work on [disfmarker] [pause] on getting our code running [pause] on that machine with two processors [pause] even though there aren't five or eight. There's [disfmarker] there's [disfmarker] there's gonna be debugging hassles and then we'd be set for when we did have five or eight, to have it really be useful. But. [pause] Notice how I said somebody and [vocalsound] turned my head your direction. That's one thing you don't get in these recordings. You don't get the [disfmarker] [pause] don't get the visuals but [disfmarker] [speaker007:] I is it um [pause] mostly um the neural network trainings that are [pause] um slowing us down or the HTK runs that are slowing us down? [speaker002:] Uh, I think yes. Uh, [vocalsound] Isn't that right? I mean I think you're [disfmarker] you're sort of held up by both, right? If the [disfmarker] if the neural net trainings were a hundred times faster [pause] you still wouldn't [pause] be anything [disfmarker] running through these a hundred times faster because you'd [pause] be stuck by the HTK trainings, [speaker004:] Mmm. Yeah. [speaker002:] right? But if the HTK [disfmarker] I mean I think they're both [disfmarker] It sounded like they were roughly equal? Is that about right? [speaker004:] Yeah. [speaker002:] Yeah. [speaker007:] Because, um [pause] I think that'll be running Linux, and Sw Swede and Fudge are already running Linux so, [pause] um I could try to get [pause] um the train the neural network trainings or the HTK stuff running under Linux, and to start with I'm [pause] wondering which one I should pick first. 
[speaker002:] Uh, probably the neural net cuz it's probably [disfmarker] it [disfmarker] it's [disfmarker] [pause] it's um [disfmarker] Well, I [disfmarker] I don't know. They both [disfmarker] HTK we use for [pause] um [pause] this Aurora stuff Um [pause] Um, I think [pause] It's not clear yet what we're gonna use [pause] for trainings uh [disfmarker] Well, [pause] there's the trainings uh [disfmarker] is it the training that takes the time, or the decoding? Uh, is it about equal [pause] between the two? For [disfmarker] for Aurora? [speaker004:] For HTK? [speaker002:] For [disfmarker] Yeah. For the Aurora? [speaker004:] Uh Training is longer. [speaker002:] OK. [speaker004:] Yeah. [speaker002:] OK. Well, I don't know how we can [disfmarker] I don't know how to [disfmarker] Do we have HTK source? Is that [disfmarker] [speaker004:] Mmm. [speaker002:] Yeah. You would think that would fairly trivially [disfmarker] the training would, anyway, th the testing [pause] uh I don't [disfmarker] I don't [pause] think would [pause] parallelize all that well. But I think [pause] that [pause] you could [pause] certainly do d um, [pause] distributed, sort of [disfmarker] [pause] Ah, no, it's the [disfmarker] [pause] each individual [pause] sentence is pretty tricky to parallelize. But you could split up the sentences in a test set. [speaker001:] They have a [disfmarker] they have a thing for doing that and th they have for awhile, in H T [speaker002:] Yeah? [speaker001:] And you can parallelize the training. And run it on several machines [speaker002:] Aha! [speaker001:] and it just basically keeps counts. And there's something [disfmarker] [pause] a final [pause] thing that you run and it accumulates all the counts together. [speaker004:] Mmm. [speaker002:] I see. [speaker001:] I don't know what their scripts are [pause] set up to do for the Aurora stuff, but [disfmarker] [speaker004:] Yeah.
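The parallel-training idea described here — each machine keeps counts over its share of the data, and a final pass merges them — can be shown in miniature. A single Gaussian mean stands in for the full HMM re-estimation; this is the general sufficient-statistics pattern, not HTK's actual code.

```python
# Each worker runs an E-step over its own data chunk and returns
# sufficient statistics; a final combine step merges the counts and
# re-estimates the parameter.  Here the "model" is just a scalar mean.

def accumulate(chunk):
    """Per-machine pass: return (sum, count) for one data chunk."""
    return (sum(chunk), len(chunk))

def combine(stats):
    """Final pass: merge all per-machine counts, re-estimate the mean."""
    total = sum(s for s, _ in stats)
    n = sum(n for _, n in stats)
    return total / n
```

Because the statistics are additive, splitting the data across machines gives exactly the same estimate as a single pass over everything.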
[speaker002:] Something that we haven't really settled on yet is other than [pause] this Aurora stuff, [pause] uh what do we do, large vocabulary [pause] training slash testing [pause] for uh tandem systems. Cuz we hadn't really done much with tandem systems for larger stuff. Cuz we had this one collaboration with CMU and we used SPHINX. Uh, we're also gonna be collaborating with SRI and we have their [disfmarker] have theirs. Um [pause] So [pause] I don't know Um. So I [disfmarker] I think the [disfmarker] the advantage of going with the neural net thing is that we're gonna use the neural net trainings, no matter what, [speaker007:] OK. [speaker002:] for a lot of the things we're doing, whereas, w exactly which HMM [disfmarker] Gaussian mixture based HMM thing we use is gonna depend uh So with that, maybe we should uh [vocalsound] go to our [nonvocalsound] digit recitation task. And, it's about eleven fifty. Canned. Uh, I can [disfmarker] I can start over here. Great, uh, could you give Adam a call. Tell him to [speaker006:] Oh. [speaker002:] He's at two nine seven seven. OK. I think we can [vocalsound] [@ @] You know Herve's coming tomorrow, right? Herve will be giving a talk, yeah, talk at eleven. Did uh, did everybody sign these consent Er everybody Has everyone signed a consent form before, on previous meetings? You don't have to do it again each time Yes. microphones off [speaker001:] And we're going. So the only status ite well, first of all we h haven't decided whether we're [vocalsound] Meeting Recorder data issues, or recognition this week. I think we were recognition. [speaker003:] Wha what was on the list? Th the [disfmarker] I mean, I sent you a couple things, although I don't remember them. [speaker001:] You only sent me one thing, which was demo status. And asking which one we were on this week. [speaker003:] Right. Ah. That was the second thing. Right. [vocalsound] Right. 
[speaker001:] So should we simply assert that this week we are recognition, and next week data issues, [speaker006:] Hmm. [speaker002:] I think that's correct. [speaker001:] and [disfmarker]? [speaker006:] Yeah. I think so, too. [speaker002:] Yeah. [speaker001:] And, uh [disfmarker] [speaker006:] Yeah. [speaker001:] So, I think what we should probably do is any quick, small stuff we can do every week. So [disfmarker] like Morgan asked about the demo status. We can go ahead and talk about that a little bit. And then do [disfmarker] then alternate in more depth. [speaker003:] By the way, I'll [disfmarker] I [disfmarker] I won't be here [pause] next Thursday. I'll be out of town. [speaker001:] OK. [speaker003:] But [disfmarker] [speaker001:] Actually, I [pause] may not be here either. So I gotta double check the dates. But, [vocalsound] anyway. So, uh, demo status. First of all, I did a little thing for Liz with the Transcriber tool, that, um [disfmarker] first of all, it uses the forced alignments, so that the words appear, uh, in their own segments, rather than in long [disfmarker] in long chunks. [speaker008:] Oh. [speaker001:] She said that that [disfmarker] she thought that was a much better idea for the other stuff she's working on. [speaker008:] Yeah. That's great. [speaker001:] Um [disfmarker] And that works fine except it's even slower to load. It's already pretty slow to load. [speaker004:] Yeah. It's more segments? [speaker005:] Yeah. [vocalsound] It's slow. [speaker004:] Or [disfmarker]? Yeah. [speaker008:] Is that because the transcripts get longer? [speaker004:] Yeah. [speaker008:] The f transcript file gets longer? [speaker001:] Yeah. Yep. [speaker004:] Yep. [speaker001:] And the Transcriber tool is just not very good n at [disfmarker] [vocalsound] at that. [speaker008:] But th but that's [disfmarker] that's n You didn't have to change the software for that yet. Right? [speaker001:] Correct. 
[speaker008:] It's just formatting the right kind of, uh, XML [disfmarker]? [speaker001:] Yeah, it's just writing conversion tools from the format that the aligner [disfmarker] Actually th he did a SRT file for it. [speaker008:] Mm hmm. OK. [speaker001:] And then just back into Transcriber [disfmarker] Transcriber format. [speaker008:] Oh, good. That's very good. OK. [speaker001:] Yeah. So my [disfmarker] my decision was, for the first pass for this demo that Liz was talking about [pause] I decided that I would do, um, only enough to get it working, as opposed to any coding. [speaker008:] Mm hmm. Right. [speaker001:] And so the other thing I sh she wanted to display the stylized F zeroes, I think they're called? Is that right? [speaker005:] Yeah. The [pause] linear fit. [speaker008:] Mm hmm. [speaker001:] And, uh [disfmarker] So what I did is I just took the file with those in it, converted it so that it looks like an audio file. [speaker008:] Right. [speaker001:] And, so you s it shows that instead of the wavefile. And so that [disfmarker] that's working, [speaker008:] Cool. [speaker001:] and I think it actually looks pretty good. [speaker008:] Mm hmm. [speaker001:] Um, I'd like someone who's more familiar with it to look at it because when I was looking at it, [vocalsound] we seemed to have lots of stuff going on when no one's saying anything. [speaker005:] That's just background speech. [speaker001:] Ah. [speaker005:] Yeah. So. [speaker002:] So do you have to pad that out? Uh, so that it looks like it's an eight kilohertz sampled thing? [speaker001:] No. I [comment] [disfmarker] the audio file you can specify any sampling rate. [speaker002:] Or [disfmarker]? [speaker001:] And so I s I specified [disfmarker] instead of, you know, sixteen thousand or eight thousand, I specified a hundred. [speaker002:] Mmm. 
[speaker001:] Um, and, the only problem with that is that there's a bug in Transcriber, that if the sample rate is too low, when it tries to compute the shape file, it fails. [speaker008:] Hmm. [speaker001:] Um, and crashes. Um [disfmarker] But the solution to that is just, set the option so it doesn't compute the shape file, [vocalsound] and it will work and the only problem with that is you can't, uh, zoom out on it. You can zoom in, but not out. [speaker003:] What's a shape file? [speaker001:] The shape file is [disfmarker] If you think about a wavefile, sixteen thousand samples per second is way too many to display on the screen. [speaker003:] Mm hmm. [speaker001:] So what Transcriber does, is it computes a [disfmarker] another thing to display based on the waveform. [speaker005:] We're talking about [disfmarker] We're talking about that demo. [speaker007:] Yeah. [comment] [nonvocalsound] Tried that, and it died. [speaker001:] And it displays it at [disfmarker] [speaker005:] Yeah. Did it? [speaker001:] And it allows you to show m many different resolutions. So there's a little user interface component that lets you sh select the resolution. [speaker007:] Great! [speaker003:] I see. [speaker001:] And if you don't compute the wavefile, you can't zoom out. You can't get a larger view of it. But you can zoom in. [speaker008:] Hmm. [speaker001:] Um [disfmarker] And that's alright, because at [disfmarker] at a hundred samples that's already pretty far [pause] out. And, uh [disfmarker] so I think it looks pretty good, but I'll let Liz look at it and see what she thinks. [speaker007:] I [disfmarker] I got the wavefile but [disfmarker] Sorry. I got the wavefile, but I couldn't get the words yet. But the [disfmarker] the wavefile part looks [disfmarker] looks good. [speaker001:] OK. We should [disfmarker] If you were having problems with the words, we should figure out why. [speaker007:] I [disfmarker] I'll have done [disfmarker] I'm probably doing something wrong. 
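The trick described above — writing the stylized F0 values out as if they were an audio file, with a very low sample rate like 100 Hz, so Transcriber displays the pitch contour in place of a waveform — could be done along these lines with the Python standard library. The scaling and file layout here are assumptions; only the low-sample-rate idea comes from the discussion.

```python
import struct
import wave

def f0_to_wav(f0_values, path, rate=100):
    """Write per-frame F0 values as a 16-bit mono 'audio' file at a very
    low sample rate, so a waveform display shows the pitch contour."""
    peak = max(abs(v) for v in f0_values) or 1.0
    samples = [int(32767 * v / peak) for v in f0_values]
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)          # 16-bit samples
        w.setframerate(rate)       # e.g. 100 "samples" per second
        w.writeframes(struct.pack("<%dh" % len(samples), *samples))
```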
[speaker001:] OK. [speaker007:] Sorry this microphone's moving around. I can't put this over my ear. [speaker006:] It could [disfmarker] Do you have to put it in your ear? [speaker001:] You c you clip that part over your ear. [speaker007:] I can [disfmarker] I can do that but there's no orientation where the [disfmarker] Darn! [speaker001:] Anyway. [comment] We'll all watch Liz play with the mike. [speaker006:] I Does it really need to go in her ear? [speaker001:] Um [disfmarker] [speaker006:] That [disfmarker] that bud? It doesn't have to go in her ear. Right? [speaker001:] Uh, no. It doesn't have to, but that [disfmarker] that's [disfmarker] I find that's the only way to wear it. [speaker006:] Oh, wow. [speaker001:] Is that the bud's in the ear and that the link is over it. [speaker003:] This bud's for [disfmarker] [speaker001:] But, so, anyway, I think that looks pretty good. [speaker003:] No. [speaker001:] The only [disfmarker] the only other thing we might wanna do with that is be able to display more than one waveform. [speaker007:] Hmm. [speaker001:] And that actually shouldn't be too slow, uh, because it's much lower resolution than a full waveform. [speaker004:] Yeah. [speaker001:] The problem with it is just it does require coding. [speaker005:] Yeah. It migh [speaker001:] And so it would be much better to get, uh, Dave Gelbart to do that than me, because he's familiar with the code, and is more likely to be able to get it to work quickly. [speaker005:] It'd be nice if we can do, like, a quick hack, just so we can play [pause] the audio file, too. [speaker007:] Right. [speaker005:] Um, with th with the display. [speaker001:] Oh, OK. [speaker005:] Like, even if we [disfmarker] I think that even if we didn't display the waveform, [comment] it might be better to, rather, play the waveform than display it. I mean, like, if we were to choose [disfmarker] I don't know, if I were to choose between one or the other, I'd rather have it played. 
[speaker001:] Mm hmm. I understand what you mean. [speaker007:] Yeah. [speaker005:] And then displayed. [speaker001:] Right. Ps But for the demo maybe it doesn't matter. I'm not sure whether you wanna do the demo live anyway, or just screen shots of what we have. The problem with doing it live is it takes so long to load, that, um [disfmarker] [speaker003:] I don't know. [speaker008:] So this [disfmarker] the [disfmarker] this [disfmarker] uh, the sluggishness of the loading is all due to the parsing of the XML format. [speaker001:] Um, w I was talking to Dave Gelbart about that [speaker008:] Right? [speaker001:] and apparently it's not actually the parsing of the XML raw [disfmarker] that going from the XML to an internal [pause] t tree structure is pretty fast. [speaker008:] Hmm. [speaker001:] But then it walks the tree to assemble its dat internal data structures and that's slow. [speaker008:] Mmm. [speaker002:] Seems like you should be able to spawn that off into a background process, because [vocalsound] not everything is displayed in that tree at once. Right? [speaker001:] Uh, no. But what it does is it actually assembles all the user interface components then. [speaker008:] But [disfmarker] d y [speaker002:] Right. [speaker001:] And then displays all the user interface components. [speaker003:] U I'm [disfmarker] I'm confused. [speaker002:] Seems like you wanna ass [speaker003:] Uh, is [disfmarker] is this downloading something that happens once? [speaker001:] Yes. [speaker003:] And [disfmarker] and then when you d display different things, it's fine? So, in that case [disfmarker] [speaker008:] No. Whenever you load a new [pause] meeting or a new [pause] transcript. [speaker001:] A new transcript. [speaker004:] Yep. [speaker003:] Well, a new meeting, a transcript. Right. [speaker008:] Right. [speaker003:] But [disfmarker] but [disfmarker] i i for [disfmarker] but for [disfmarker] [speaker001:] Or audio file. 
Well, actually the audio files are pretty fast, too. [speaker003:] for presentation in, uh [disfmarker] Uh [disfmarker] I wouldn't be [disfmarker] [speaker004:] Yeah. [speaker005:] Yeah. [speaker008:] You just have to have the thing running before you open your laptop. [speaker005:] Yeah. [speaker007:] Yeah. [speaker001:] Right. The only problem with that is if anything goes wrong [pause] or if you wanna switch from one thing to another. [speaker007:] Right. [speaker005:] Yeah. [speaker003:] Go wrong? [speaker004:] Yeah. [speaker003:] I see. Yeah. [speaker007:] Right. I guess for the demo you can always play [disfmarker] just store the pieces that you're gonna display and play those as separate files, if we can't, you know, actually do it. [speaker008:] Just make shorter files. [speaker007:] But it's [disfmarker] you know, it [disfmarker] it's [disfmarker] [speaker001:] That's true. We could just subset it. [speaker007:] Yeah. [speaker001:] That's a good idea. [speaker008:] Yeah. [speaker001:] That's actually probably the right thing to do. [speaker007:] And just make it l [speaker004:] Yep. [speaker001:] You know, just take f ten minutes instead of an hour and a half. [speaker004:] Yeah. Th that's what I did for [disfmarker] for my talk. [speaker003:] Oh! Oh, you're downloading a whole meeting. [speaker001:] Yeah. [speaker003:] Oh, yeah that [@ @]. [speaker004:] Yeah. [speaker001:] Yeah. So that [disfmarker] that's actually [pause] the [disfmarker] definitely the way to do it. [speaker004:] Yeah. [speaker001:] That's a good idea. [speaker003:] Yeah. And then still do it ahead of time, but then at least you're covered if [disfmarker] if, uh [disfmarker] [vocalsound] if there's a problem. [speaker001:] Yeah, if there are any problems. [speaker007:] Right. [speaker001:] Yeah, I mean, even five minutes is probably enough. [speaker004:] Huh. [speaker007:] Right. So, what happened to [disfmarker] Is it st possible at all to display the words in their aligned locations? 
[speaker001:] That's what I did. [speaker007:] OK. So. Sorry, [speaker001:] Yeah. [speaker008:] You missed that part. [speaker007:] I missed [disfmarker] OK. Great! But it [disfmarker] OK. I couldn't get the words and the waveform at the same time for some reason, and there must be some [disfmarker] I'll [disfmarker] I'll work on it with Don and see what I'm doing wrong. [speaker001:] Yeah. I mean, just ask [disfmarker] Just come by my office. I can show you as well. [speaker007:] OK. Great! Well, thanks a lot. That's really great. [speaker001:] Right. And for the information retrieval, uh, Don has been working on that. So. [speaker005:] Yeah. So, it's coming along. [speaker001:] It looks like it [speaker005:] Um Just hacking [pause] Dan's code and m stepping through it. But, I think [pause] it's close. [speaker001:] Great. [speaker005:] And we should be there pretty soon, with at least, like [disfmarker] at least with, you know, being able to search over c certain amount of meetings, just, like, really basic stuff. Just asking fo looking for a word, and looking through a bunch of different meetings. And if we have time, I'll also add, you know, like, choosing which speakers you wanna include, and stuff. But [disfmarker] [speaker003:] OK. Well, I'm gonna start working on this the week after next, so that's the point when I'll need to look more carefully at what y what [disfmarker] what you guys have. [speaker005:] So, is the end of the month still [pause] the [disfmarker] the d? [speaker003:] Right. The week after [disfmarker] th th The Monday the week after next is July second, which is the first day I get back. [speaker005:] OK. [speaker003:] So [speaker005:] OK. [speaker001:] Yeah. So I think for the L stuff Liz was talking about, we have something that'll work now. And Liz can look at it and see if she wants anything else. Maybe we can work on doing [disfmarker] displaying multiple [disfmarker] or displaying one and playing back the other. 
[speaker007:] So do you think it's [pause] reasonable to display more than one [pause] before the demo? Cuz [disfmarker] [speaker001:] Um, I think I'd h I'd have to ask Dave. I did it once before and it was just so slow to scroll, that I gave up. [speaker007:] Huh. [speaker001:] But, the advantage is that these things are much lower sampling rate. [speaker007:] Right. [speaker001:] And so then it might be alright. [speaker007:] OK. Let me know. [speaker005:] Morgan, when's the demo? [speaker003:] Well, uh, I'm giving a talk on [vocalsound] July sixteenth. It's a Mon Monday in four weeks? [speaker008:] So [disfmarker] If [disfmarker] if raw speed is the problem [pause] This thing is written in Tcl. [speaker003:] Three weeks? [speaker001:] Tcl. [speaker008:] Right? Y I mean, John Ousterhout, uh [disfmarker] Y you know, he started his own company based on Tcl stuff, and maybe they have the native code compiler or something. [speaker001:] I mean, we could check. I don't think they do. Um, there was actually a [pause] Java back end that apparently is actually a little faster. It generates byte code. [speaker008:] Hmm. [speaker001:] But, uh [disfmarker] [speaker003:] It's always exciting to hear that Java's faster than something. [speaker001:] Yeah. Well, e everything is faster than Tcl TK. [speaker003:] Yeah. [speaker001:] It's a string substitution language, basically. I should probably beep that out in case John Ousterhout ever listens. [speaker006:] But Tcl is wonderful. [speaker001:] But [disfmarker] Well, it is wonderful. It is, for prototyping and user interface. It's just really [disfmarker] the language is awful. [speaker006:] There you go. Oh. [speaker003:] Beep. [speaker001:] Beep, y right. [speaker006:] Yeah. [comment] [vocalsound] Yeah. I like it. [speaker001:] But let me tell you how I really feel. [speaker003:] We're all entitled to our opinions here. [speaker001:] Yep. [speaker003:] Um. Yeah. So it's [disfmarker] Yeah.
I think it must be three and a half weeks. [speaker006:] It's great. [speaker003:] Uh, cuz July [disfmarker] [vocalsound] The [disfmarker] the meeting is July sixteenth through eighteenth. And, uh, my talk's the first day. So. [speaker005:] OK. [speaker003:] I'm flying out there the Sunday before. So. Um [disfmarker] [speaker008:] Hmm. [speaker003:] I guess, you know, it'd be desirable [pause] if, a week ahead of that, we basically had [disfmarker] thought we had it, which would allow a week for [speaker001:] Oh, that's right. [speaker005:] For realizing we don't? [speaker003:] re iterating. Yeah. Pretty much. Yeah. [speaker001:] Then the other issue related to that is data release. If we wanna show this in public, it should be releas So, I, uh, haven't gotten any other replies from the original email asking for approval. So I sent out another set this morning. [speaker003:] I th I saw that. [speaker001:] And, uh, we'll see if we get any responses. [speaker007:] I just did it. [speaker001:] Very good. [speaker007:] But it is [disfmarker] I did wanna say that, um [disfmarker] [speaker001:] Did you notice I put in the filter? [speaker007:] No! No! No! [speaker001:] Go ahead. [speaker007:] I just figured you p [speaker001:] There's a link there that now says if you want to search by [disfmarker] filter by a regular expression, you can. [speaker007:] Oh, my gosh. OK. [speaker001:] I put that in just for you. [speaker007:] Terrific. Well, since you didn't answer the emai So there was a q question I had asked Adam whether it's possible to search only for your own name [disfmarker] your own utterances, so that you know you don't have to go through the whole meeting, and [disfmarker] [vocalsound] Um, and I didn't hear back. So I thought "OK. It's probably too hard. He's overloaded. I won't say anything. I'll just do it". Great. OK. So, anyway I looked at everybody else's stu [speaker001:] It's actually an arbitrary [disfmarker] arbitrary regular expression. 
[speaker007:] Good. [speaker001:] But if you search your name, you'll get all of the things you said and any time anyone said your name. [speaker007:] So th That's great. [speaker001:] So. [speaker003:] And it's [disfmarker] it's case insensitive? [speaker001:] Correct. [speaker003:] Yeah. [speaker007:] That's great. [speaker002:] Did you actually look through your [pause] transcripts? Or [disfmarker] you just approved them all. [speaker007:] Uh [pause] Well [speaker002:] I just approved all mine. I didn't look at them. [speaker007:] I sort of spot checked. I was trying to remember [disfmarker] [speaker008:] Oh, darn. I haven't done that yet. Er [disfmarker] OK. [speaker007:] I couldn't find the [disfmarker] the keywords for things [vocalsound] that I thought I had said wrong. So. [speaker001:] It's hard to find. [speaker007:] That makes it g [speaker002:] That's a compliment to you. He said it's hard to find things you say wrong. [speaker001:] Hmm. [speaker007:] It's hard to find anything that you say in these. [speaker001:] Yep. [speaker007:] Great. Well, thanks for the filter. [speaker001:] You really do have to sort of r [speaker007:] Uh, it's really useful, tha Cuz if you're only at part of a meeting, or something [speaker003:] So, we have our first information retrieval example. It's a regular expression [speaker001:] Yeah. That's right. [speaker007:] Yeah! [speaker003:] searcher. [speaker007:] That's actually [disfmarker] Well, it's useful. [speaker001:] And it demonstrates why it doesn't work, because you really wanna go acro more than one meeting. [speaker003:] Yeah. Yeah. [speaker001:] And you need a better user interface for displaying the results. [speaker007:] But this helps a lot. [speaker001:] So. Yeah. [speaker003:] You wanna say, "Where are al where [disfmarker] Find all the contentious things I said." [speaker007:] Great. Thanks. [speaker001:] Yeah, really. [speaker002:] Find everything that should be bleeped. [speaker001:] That's right. 
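[Editor's note: the case-insensitive regex filter Adam describes, which returns both a speaker's own turns and any turn mentioning their name, could be sketched roughly as below. The one-line-per-turn transcript format and the function name are assumptions for illustration, not the actual tool's interface.]

```python
import re

def filter_transcript(lines, pattern):
    """Return transcript lines whose speaker tag or text matches
    the given regular expression, case-insensitively."""
    rx = re.compile(pattern, re.IGNORECASE)
    return [line for line in lines if rx.search(line)]

# Assumed one-turn-per-line transcript format for the sketch.
transcript = [
    "[jane:] I approved all my transcripts.",
    "[adam:] Jane, did you spot-check them?",
    "[morgan:] The demo is July sixteenth.",
]

# Matches Jane's own turn and the turn that mentions her name.
hits = filter_transcript(transcript, r"jane")
```

Because the filter takes an arbitrary regular expression, the same call also covers searches like "beep|bleep" across everyone's turns.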
Th we do have that bi nice marker [disfmarker] is that, n n because we all know we're being recorded, whenever anyone says anything like that, we then have a conversation about bleeping it out. [speaker007:] Right. We Yeah. You can search for "beep" or "bleep". [speaker001:] So. Yep. [speaker003:] Yeah. [speaker007:] In somebody else's turn. [speaker001:] Um. Oh. And also we actually have a few people who have still not filled out speaker forms. Specifically in the NSA ones, and I noticed that when I tried to, uh [disfmarker] uh, generate the transcripts for NSA. That there are a few with no speaker forms. And so, uh, I have a [disfmarker] I sent out yet another this morning, which I think makes six total emails that I've sent to these people, and so I think we need to escalate to some other method of trying to contact them. [speaker003:] Um [pause] [vocalsound] Right. [speaker008:] Stalk them [@ @] at their [disfmarker] [speaker003:] Has [disfmarker] has [disfmarker] has Joachim Sokol replied? [speaker008:] Like in the morning, when I leave for work. [speaker001:] Nope. [speaker003:] Or [disfmarker]? [speaker006:] I think, maybe talk to him first in person? That's what I would [pause] think. [speaker003:] He's [pause] not around, is the only problem. [speaker006:] Mm hmm. Oh, is that right? Oh. I saw him [disfmarker] I saw him on Tuesday. [speaker003:] Yeah. Otherwise, it'd be a good idea. Yeah. He popped in. But, I mean, he's basically gone. [speaker006:] Oh, OK. [speaker008:] Cold calling at lunch time? Uh, dinner time, I mean. [speaker001:] Well, if I could find phone numbers, that would certainly work. [speaker006:] Well, did you ask Lila? [speaker001:] But. [speaker006:] Cuz I bet she has this information. [speaker001:] Yeah, that's a good idea. I'll ask her if she can con track some of them down. [speaker003:] Yeah. [speaker006:] Yeah. [speaker003:] Yeah. And [disfmarker] and tell her [disfmarker] You know. Tell her your specific problem. 
She'll fix [disfmarker] [speaker006:] And it [disfmarker] M And, uh, then there's still m um [disfmarker] Miguel is still an active member of the group and he's [disfmarker] What I mean is, he's an active member and he's still here. [speaker003:] He's right there. Yeah. [speaker001:] Mm hmm. [speaker006:] Very helpful. [speaker001:] Yeah, I didn't actually see who they all were. Um, a couple of them were, like, people at IBM who were here for one of the IBM meetings [speaker003:] Yeah. [speaker001:] and one [disfmarker] a guy from SRI who was at one of the SRI meetings. And so, uh, those might be harder to track down. [speaker006:] Most of them, though, really were visitors here and Lila should have contacts with them. [speaker001:] Yep. [speaker006:] Nice meetings, by the way. Yeah. [speaker001:] I mean, they [disfmarker] they were people who didn't have accounts at ICSI, so they're [disfmarker] they're harder to find. [speaker006:] Well, not the ones that [disfmarker] Are you su? Are you sure? [speaker001:] Am I sure about what? [speaker006:] Yeah. NSA one and NSA three? [speaker001:] There were other people also. [speaker006:] We're talking about those? [speaker001:] There were other c other people also who didn't ha fill out the speaker forms, in addition to the N S [speaker003:] I i in [disfmarker] in other meetings. [speaker001:] Yeah. [speaker003:] Yeah. [speaker006:] Oh, oh. I see. OK. Fine. [speaker003:] Well, SRI people, easy to f find. [speaker007:] Yeah. [speaker001:] Yeah. [speaker003:] And, uh, IBM people, also. Just let us know. I mean [disfmarker] Just [disfmarker] we certainly have their email. [speaker006:] But I knew everybody in the NSA meetings. [speaker001:] Mm hmm. [speaker006:] So th I'm sure that we have, uh, fresh, you know, information on them. [speaker001:] Right. Yeah. None of the e emails bounced, so I know they're going somewhere. [speaker006:] Good. OK. [speaker003:] Right. [speaker001:] That's all I have. 
You wanna talk about recognition? [speaker008:] Chuck, you wanna talk about recognition? [speaker002:] I haven't done anything. [speaker001:] J T Liz, you wanna talk about recognition? [speaker002:] I was [disfmarker] [speaker001:] Thilo, you wanna talk about recognition? [speaker002:] I was away for a couple of days. So. I haven't done a [disfmarker] [speaker007:] We're sort of in a stage where we're [disfmarker] uh, Don's going through getting some of the next meetings that Jane s has. And, uh, you know, creating a second database. So we haven't actually run anything yet. We need to get a critical mass for that. [speaker008:] However, I just got an email from Thilo saying that we are ready to run [disfmarker] I mean, we have segmentations for the old meetings that [pause] are from his segmenter, [speaker007:] And check the segmentations. [speaker005:] Mm hmm. [speaker007:] Yeah. [speaker008:] and so [disfmarker] You [disfmarker] you had three different versions, with different, like, pause thresholds [pause] between the segments? [speaker007:] Great. [speaker004:] Yep. Yep. Yeah. Just [disfmarker] Yeah. Just smooth the [disfmarker] the output of the [disfmarker] of the detector. [speaker008:] Right. And you recommended using the one with two s maximum of two seconds? [speaker004:] Yeah. [speaker007:] A [disfmarker] What do you mean, a different pause threshold? [speaker004:] Yep. [speaker008:] But two s [speaker004:] Or you can [disfmarker] Yeah. You can use the one with one second or whatever. [speaker007:] Do you mean a [disfmarker]? [speaker004:] I [disfmarker] I [disfmarker] I [disfmarker] There's no [disfmarker] not much difference between the [disfmarker] the one second and the two second one. [speaker008:] Mm hmm. I mean, the only advantage to using the longer threshold would be that you run less risk of missing some [disfmarker] some speech. [speaker001:] Backchannel. [speaker008:] Right? 
[speaker004:] And I think [disfmarker] wouldn't it be better to [disfmarker] to have a little longer sequences for the recognizer? B eh because of the language model? As sometimes it happens that [disfmarker] that it cuts off within a s [speaker007:] Yeah. [speaker008:] But two seconds is pretty long. So [disfmarker] [speaker004:] Yeah. But, we can be sure that as [disfmarker] Or we can be [disfmarker] er not [disfmarker] not totally sure, [speaker007:] No, it's not bad. That's good. [speaker004:] but we can be somehow sure that there is nothing [disfmarker] not [disfmarker] no speech between those. [speaker008:] Hmm. Hmm. [speaker007:] i ye Yeah. th u [speaker004:] So, it [disfmarker] it [disfmarker] [speaker002:] What does the two second threshold mean? [speaker004:] I think i it doesn't [disfmarker] [speaker007:] I th I think that's g that's good. [speaker004:] It's the same as in the [disfmarker] the smoother for the [disfmarker] [vocalsound] IBM thing. [speaker001:] It combines them if it's [disfmarker] if the pause is longer than [disfmarker] [speaker004:] Yeah. [speaker007:] It's [disfmarker] it's no more than six words. So [disfmarker] Roughly. On average. [speaker002:] Ah. [speaker004:] So. Yeah. [speaker008:] Hmm. [speaker004:] So. [speaker007:] That's pretty good, I think. [speaker001:] Right. So the [disfmarker] the trade off is you get longer utterances, but you miss fewer utterances. [speaker007:] It's better than a [disfmarker] [speaker004:] But [disfmarker] Yeah. But [disfmarker] but the [disfmarker] the chunks are already sh i in general, are short. So [pause] I [disfmarker] I t I think it would be better to have [disfmarker] to have more of them concatenated together, in order to have better language model [speaker008:] Mmm. [speaker004:] or language modeling. [speaker008:] I think two seconds [disfmarker] mmm [speaker004:] I don't know. [speaker008:] I would maybe go with one second. 
I don't know, it's a [disfmarker] [speaker001:] Well, take a look [disfmarker] [speaker004:] Yeah. Do that. Yeah. [speaker008:] See what the length distribution is. [speaker004:] Yeah. But [disfmarker] but y there's [disfmarker] there's really not much difference between the one second and the two second. [speaker008:] Yeah. Really? [speaker002:] I wouldn't think that the language model would continue across two seconds. [speaker004:] So just [pause] take the one [disfmarker] the one second one. [speaker008:] Uh [disfmarker] bu I'm [disfmarker] I'm [disfmarker] I'm just scared that with two seconds you get [disfmarker] you get, um [disfmarker] [vocalsound] you [disfmarker] you get false recognitions. [speaker007:] Yeah. Well, yeah. Y you do, becau [speaker008:] You're gonna [disfmarker] Yeah, you're gonna hurt yourself occasionally by having [disfmarker] missing the language model context. But you might hurt yourself more by having misrecognitions due to [pause] background speech, or, uh, y noise, or whatever. [speaker004:] I [disfmarker] I'm not too afraid ab about that as w when there [disfmarker] when there would be something [disfmarker] some background speech or something [disfmarker] there wo there would be a [disfmarker] a chunk in another [vocalsound] channel, [speaker008:] Oh, I see. Then [disfmarker] Oh, I see. OK. [speaker007:] Yeah. I think it's better. [speaker004:] and when there is s something in between, I con I [disfmarker] I do not concatenate them. [speaker008:] Oh, right. Oh, that's [disfmarker] [speaker007:] The longer is better. [speaker004:] It's just when there is [disfmarker] when they are sequentially and [disfmarker] [speaker007:] There's [disfmarker] there's [disfmarker] th [speaker008:] Mm hmm. OK. Sure. [speaker004:] So. I wou I would use th [speaker008:] We can try them all and see which works better. 
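[Editor's note: the smoothing Thilo describes, concatenating consecutive segments on one channel when the pause between them is under a threshold (one or two seconds), but not when another channel has speech inside the gap, might look roughly like this. Segments as (start, end) tuples in seconds are an assumed representation, not the segmenter's real data format.]

```python
def merge_segments(segments, max_pause=1.0, other_channel=()):
    """Merge adjacent (start, end) segments separated by less than
    max_pause seconds, unless an interval in other_channel (speech
    detected on a different channel) falls inside the gap."""
    merged = []
    for start, end in sorted(segments):
        if merged:
            prev_start, prev_end = merged[-1]
            # The gap is "clear" only if no other-channel interval
            # intersects it.
            gap_clear = not any(s < start and e > prev_end
                                for s, e in other_channel)
            if start - prev_end < max_pause and gap_clear:
                merged[-1] = (prev_start, end)
                continue
        merged.append((start, end))
    return merged
```

Raising `max_pause` from one to two seconds trades longer chunks (better language-model context across "uh [pause] uh" hesitations) against a higher risk of pulling in background speech or noise between the original segments.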
[speaker007:] There's a l there's a lot of these cases just like now, where I c people say "uh [disfmarker] uh" when they're trying to ta [speaker008:] Mm hmm. [speaker007:] and there's about a half second pause to a second in between and then another word, [speaker004:] Yep. [speaker008:] OK. [speaker007:] and it's much better if we can keep those together, I think. [speaker001:] Yeah. It's, uh, funny looking at some of the transcripts. I was filtering by [pause] person, [speaker008:] Mm hmm. [speaker001:] and in one of the [disfmarker] one of the early meetings, one pers particular person, almost the only thing they said the entire meeting was "yeah", "uh huh". It was just a whole list of them. It was very funny. [speaker006:] I bet I know who that was. [speaker008:] OK. So [disfmarker] so we need to split the waveforms, then. [speaker006:] Yeah. [speaker008:] Or do you already have them split up? No, you don't. Right? [speaker004:] No. [speaker008:] So [disfmarker] so, I guess Don [pause] would need your help to [disfmarker] to create a new set of split, uh, meetings. [speaker005:] Sure. [speaker007:] You know, you just fake the format that you take as input with the synch times to a new set of synch times, [speaker008:] If [disfmarker] [speaker004:] We could [disfmarker] [speaker008:] Right. [speaker004:] Mmm. [speaker001:] Uh, do we know about disk? [speaker007:] and [disfmarker] [speaker008:] But th Oh, yeah. There's that pressure. [speaker001:] Uh, Abbott disk? [speaker003:] Uh. I know they're in. [speaker001:] OK. [speaker003:] And, uh [disfmarker] But I don't know. I think he was [disfmarker] Wasn't he asking about [disfmarker]? [speaker002:] Wel He had a problem. Right? [speaker001:] Yep. [speaker004:] Yep. [speaker007:] Oh. [speaker003:] Well, there was an issue. He wanted to take it down. And then he tried [disfmarker] [speaker002:] H he did, and then it didn't work, [speaker001:] Couldn't format them. [speaker002:] and [speaker001:] No. 
[speaker002:] I didn't hear anything after that. [speaker001:] The only reason I'm asking is, you're gonna need space to split them up. And so I wanted to make sure we had some available for you. [speaker005:] I still have, like, [vocalsound] probably [pause] six, seven, eight gig on my disk. [speaker001:] OK. So we're OK [disfmarker] we're still OK for another couple days, then? [speaker007:] And I have [disfmarker] I have like another un backed [disfmarker] I ha have another six gig, which Jeremy, if you're not using, can [disfmarker] on XA. [speaker005:] Yeah. [speaker003:] Wel [speaker001:] So we're OK for [disfmarker] for a couple weeks, then. [speaker003:] Markham probably needs u H he probably needs us to approve another time to take things down. [speaker007:] To you. [speaker003:] Right? In order to do that? [speaker002:] Yeah. He [disfmarker] he didn't say [disfmarker] he hasn't said anything to me about it. [speaker003:] I thought he [disfmarker] I thought he said in that mail that he would need to take it down another time. So I think he [disfmarker] [speaker002:] Yeah. I think [disfmarker] [speaker001:] Y yeah. He just didn't say when. So [disfmarker] [speaker002:] Yeah. He [disfmarker] [speaker003:] Well, no. I think he wanted u us to tell him when. [speaker007:] How about during the picnic? [speaker001:] Ah. Yeah. I'm sure he'd appreciate that. [speaker003:] Yeah. I'm sure he'd love to [disfmarker] [speaker001:] Well, m my feeling about that [speaker003:] Fortunately, Markham's not a transcriber. But, um [disfmarker] [speaker001:] is p is p [speaker007:] Sorry. [comment] [vocalsound] Beep. [speaker001:] Well, OK. That's the point. So it's Jane [pause] that we have to coordinate that through. Uh, what I was gonna say is, as soon as possible, and I'm willing to not work for an hour to get it done. Uh, but [disfmarker] [speaker003:] "I might not work." [speaker008:] Oops. [comment] I'm sorry. [speaker004:] Oops! [speaker005:] Whoa! 
[speaker003:] "I'm [disfmarker] I'm willing to not work for an hour." [speaker001:] Because when Abbott's [disfmarker] [speaker003:] I know you're willing to not work for an hour. [speaker001:] Yeah, right. [speaker003:] But, I th [speaker001:] Because when Abbott is down you can't work. [speaker007:] You're really dedicated, if you're [disfmarker] No matter how you parse that one. [speaker005:] Yea [speaker008:] OK. [speaker001:] But, uh, I think the per the people it disrupts the most are the transcribers. [speaker006:] Well, you know, uh, I [disfmarker] All I need to do is mail, um [disfmarker] send them m a mail like two days in advance so they can schedule their time. I did that with the last outage. I j I wrote to them letting them know that this w [speaker001:] OK. [speaker008:] Hmm. [speaker003:] I So, OK, it sounds like Markham should almost decide when he wants to do it, and tell us, as long as [disfmarker] [speaker006:] was not, um [speaker001:] So early next week. And just as long as we have a little warning. [speaker003:] Yeah. [speaker007:] So that means we can't, um, save meeting data either. [speaker006:] Yeah. [speaker007:] Right? [speaker001:] Uh, just not during that time when it's down. But that [disfmarker] it should be down for an hour. [speaker007:] So it just can't [disfmarker] so we can't have two meetings in a row where the first meeting's during that hour. That's all I meant. [speaker001:] Right. Well, no, we can store them here. [speaker007:] That [disfmarker] that's happening, like, today. [speaker001:] We can store them here. You [disfmarker] we just run the risk [pause] that if you have a crash we lose the data. [speaker007:] Temporarily. [speaker001:] So. [speaker002:] They're stored on Popcorn. [speaker006:] This is on Popcorn? Or [disfmarker]? [speaker007:] Well, no. I mea I mean, the fir Oh, jus OK. [speaker001:] Yep. [speaker006:] This is on Popcorn or something? [speaker001:] Yep. [speaker006:] Yeah? OK. 
[speaker003:] We store [disfmarker] we store our data on Popcorn. [speaker001:] Anything else? [speaker003:] That's really great. [speaker001:] Excuse me? [speaker003:] I was just thinking we store our data on Popcorn. How many [disfmarker] how many institutes can you [vocalsound] say do that? [speaker001:] Yep. [speaker008:] Pop goes the data. [speaker003:] OK. Uh, g megabytes and mega many megabytes, too. Um, what [disfmarker] what, uh [disfmarker]? [speaker001:] We have a kernel on Popcorn, too. [speaker008:] Right. [speaker006:] That's very good. [speaker003:] So uh, what [disfmarker] what's on your queue for [disfmarker] for recognition experiments? [speaker008:] u un Can I have butter on my meeting? [speaker003:] Let's talk about that for a second, maybe. [speaker006:] What was the question? [speaker002:] Uh [disfmarker] [speaker003:] What [disfmarker] what was on his queue for recognition experiments? [speaker006:] OK. [speaker002:] Um, I'm rebuilding the net that we're gonna use for the tandem stuff. And so what I'm doing is, [vocalsound] um, putting in the stream reader into the Quicknet libraries for the SRI feature files. [speaker001:] Mmm. [speaker002:] An Which is the right way to do it. I mean, when we did our first experiments and I was, [vocalsound] uh, creating SRI feature files from the ICSI front end, I just had perl scripts, you know, and [vocalsound] hacked a bunch of stuff together just to get it going. But the r the right way to do it is to integrate it in with the ICSI tools. And so, that's what I'm doing now. And so once I get that done, then I'll generate the P files I need. Cuz we already have the feature files in the SRI format. [speaker003:] Mm hmm. [speaker002:] So what I need to do is, make it so that the [disfmarker] the Quicknet stuff can read those. 
And, uh [disfmarker] [speaker008:] I is that independent or related to [pause] also being able to write out the, uh, feature file [pause] i in the SRI format f r [speaker002:] It's both. There's a [disfmarker] there's an input stream and an output stream. [speaker001:] Input reader and an output stream. [speaker008:] Oh, OK. So then you could use, um [disfmarker] You could use, um [disfmarker] [vocalsound] [comment] uh, like Feacalc and s just specify as an output format [speaker001:] Yeah. [speaker002:] Yeah. [speaker001:] Yeah. That's the point. [speaker008:] the [disfmarker] the [disfmarker] Oh, OK. Th t I'm just ignorant about the [vocalsound] sof software architecture of this thing. [speaker002:] Yeah. [speaker001:] Yeah. So, if [disfmarker] if you [disfmarker] Right. Quicknet is a very nice stream based library, [speaker008:] Uh huh. [speaker001:] so without too much effort, once he has the classes written we can incorporate it into all the standard tools. [speaker008:] Oh, cool. [speaker001:] So. [speaker008:] Great. [speaker002:] So, then it's t uh, tandem experiments after that? [speaker003:] Uh huh. [speaker001:] And at some point, I'd like to get back to, uh, porting Quicknet to the multiprocessor Linux box. [speaker003:] Yeah. [speaker001:] You know, I [disfmarker] I have forward passes working, but I haven't done training yet. [speaker008:] So, b speaking of Linux. So [disfmarker] [vocalsound] Th there's some i impetus at, um, [vocalsound] SRI to actually [vocalsound] u u p th [comment] uh, build [disfmarker] support Linux as a platform. So. [speaker001:] Mm hmm. [speaker008:] What that means is, [vocalsound] once we have, uh, everything running on Linux we can also use a Li [vocalsound] eh [speaker007:] n Run all our jobs on your machines. [speaker008:] Yeah. Exac [speaker002:] We don't have too many. 
We just have that [disfmarker] just have a few Linux machin [speaker008:] I mean, if you can't use all the processors on whatever machine, we'll help you with that. [speaker001:] Yeah. That's right. Well, that's the nice thing about it, is that [disfmarker] i since it's coarse parallelism you don't have to do anything special. [speaker008:] Right. Exactly. [speaker001:] So. I mean that would be a fine use for b for that machine. [speaker008:] Yeah. So it's just, uh [disfmarker] [speaker001:] Five more processors. [speaker003:] Oh, I know what it was [disfmarker] [speaker008:] Uh. Yeah. [speaker003:] Um. Yeah. U um [disfmarker] Some [disfmarker] [speaker008:] Or if [disfmarker] uh, you know, in the future, if Linux machines become like way cheaper, than, [vocalsound] uh, you know, Solaris machines, then [pause] you know, that wouldn't be a reason not to [pause] use Linux anymore. [speaker001:] Yep. [speaker008:] So. [speaker003:] Yeah. Um, I think it would be ne neat at some point [pause] in this [pause] to do, um, a recognition, uh, pass on one of the PZM mikes for these s same meetings that you've been gi n gi bi [speaker008:] For the meeting? [speaker007:] Yeah. [speaker003:] I mean, it's gonna be terrible, [speaker008:] Mm hmm. [speaker003:] but, you know, we [disfmarker] we just don't know how terrible. [speaker007:] Ye It's also an interesting problem to come up with the reference. [speaker008:] Mm hmm. [speaker003:] And [disfmarker] [speaker007:] So, the reference file for the relative times at which [disfmarker] [speaker001:] Oh, yeah. That's really hard. [speaker007:] So [disfmarker] Well, i it's an interesting question, because I was thinking well you can force align the transcriber transcripts, and then of course, you try to merge them in time. [speaker001:] It's not determined. [speaker003:] Mm hmm. [speaker007:] But how do you score? " [speaker001:] I think the first pass is [pause] throw out words which are overlapped. 
[speaker007:] I mean it's just an interesting problem. [speaker001:] That would be a good first pass. Just ignore everything that has any overlap. [speaker008:] Mmm. [speaker007:] Right. I mean, there's a whole sort of dis [speaker003:] Yeah. Cuz you have a set of scores about that, [speaker007:] Right. We [disfmarker] Right. [speaker003:] so maybe then, that wouldn't be so bad. [speaker007:] But there's a whole interesting discussion, cuz of course the alignments are not perfect either, [speaker003:] Oh. [speaker007:] and so [disfmarker] um. In fact, we actually don't have a [disfmarker] a p [speaker001:] Right. [speaker003:] Yeah. [speaker007:] I mean, it it's worth trying. [speaker003:] But [disfmarker] that'd be a hell of a lot better than what we do with just these. [speaker007:] I [disfmarker] [speaker003:] And [disfmarker] [speaker007:] Right. Right. [speaker003:] and [disfmarker] and, again, if you rule out the overlap, you have some numbers for that, cuz that's yet another [disfmarker] But, I [disfmarker] I'm just concerned, of course, about that [disfmarker] [speaker007:] Oh, I see. You mean when only one person is [pause] talki [speaker001:] Yep. [speaker007:] Yes. We could [disfmarker] we could try tha [speaker003:] Cuz you have scores for that for the other case. [speaker007:] Right. Right. Exactly. [speaker008:] Mm hmm. [speaker003:] And [disfmarker] [speaker007:] Yeah. We [disfmarker] we should try [disfmarker] [speaker003:] We just don't know how bad it will be. I mean, one of the things that Dave was noticing [disfmarker] we were talking this morning [disfmarker] is that i it seems like [disfmarker] and we do don't know this in detail, but it seems like you're getting a lot from the channel adaptation, the speaker adaptation, and so forth. [speaker005:] Hmm. [speaker008:] Mmm. Mm hmm. [speaker003:] Um. So you are, already, in that recognizer, doing something that is likely to affect, uh, the [disfmarker] the far field microphone, uh, formant. 
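[Editor's note: the first pass Adam suggests, ignore everything that has any overlap and score only the rest, could be sketched as below over time-marked reference words. The (speaker, start, end, word) tuple format is an assumption for illustration.]

```python
def drop_overlapped(words):
    """Keep only reference words whose time interval does not
    overlap any word from a different speaker."""
    def overlaps(a, b):
        # Intervals intersect and the speakers differ.
        return a[1] < b[2] and b[1] < a[2] and a[0] != b[0]

    return [w for w in words
            if not any(overlaps(w, other) for other in words
                       if other is not w)]

words = [
    ("A", 0.0, 0.5, "so"),
    ("A", 0.6, 1.0, "anyway"),
    ("B", 0.9, 1.3, "right"),   # overlaps A's "anyway"
    ("A", 1.5, 2.0, "great"),
]
kept = drop_overlapped(words)
```

Since the forced-alignment times feeding this are not perfect either, a real version would likely pad each interval by some tolerance before testing for overlap.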
[speaker008:] Mmm. Right. [speaker003:] So, it may not [disfmarker] I mean, it's gonna be bad, [speaker008:] Right. [speaker003:] but, uh, it may not be, like, "won't decode" kind of bad. [speaker008:] Right. [speaker003:] It might [disfmarker] might only be that it goes from forty percent to eighty, or something like that. [speaker001:] Yeah, eighty percent. Yeah. [speaker008:] Right. [speaker007:] Do you assume you know the speaker when you do this? [speaker003:] I want us to assume the exac whatever it was you assumed when you did the other [disfmarker] the [disfmarker] the close mike. [speaker008:] You just di you just [disfmarker] [speaker007:] Well, th r Well, there, there's only one person who it can be, because they own that microphone. [speaker008:] Right. [speaker007:] I'm just wondering [disfmarker] There's [disfmarker] [speaker008:] Yeah. That [disfmarker] that becomes another problem, actually. [speaker005:] Well, and then you have the gender detection [speaker007:] No It's not a pro [speaker004:] Yeah. [speaker001:] Well, but for a [disfmarker] But, for s for scoring, you can do it or not do it as you choose. [speaker007:] Right. And [disfmarker] [speaker003:] Oh, I'm [disfmarker] [speaker007:] R right. But in terms of [disfmarker] for norm [speaker008:] n s [speaker001:] So. [speaker003:] But you're saying for this [disfmarker] For the adaptation, you mean. [speaker008:] So, d s so [disfmarker] fo not [disfmarker] well, for everything. [speaker007:] for adaptation. Right. [speaker008:] For [disfmarker] s for [disfmarker] f even feature normalization, for, uh, vocal tract length estimation, [speaker007:] You know, do you do a supervised adaptation [disfmarker]? [speaker003:] Right. [speaker007:] Ye All of these adaptations ha [speaker008:] all of [disfmarker] all of these assume you know who's speaking. [speaker007:] assume that the same person [disfmarker] [speaker008:] So, [speaker004:] Mmm. 
[speaker008:] you would have to do a speaker segmentation first on the far field micropho signal. [speaker007:] Yeah. [speaker001:] N well, but you can use the [disfmarker] when you're doing the scoring [disfmarker] Since you're [disfmarker] you're gonna be scoring against transcript, you can use [disfmarker] [speaker008:] You mean you wanna cheat. [speaker001:] Well, you're doing that anyway. [speaker007:] N Well. I was just aski [speaker008:] No. If [disfmarker] i i [speaker006:] Ooo. I don't like that term. [speaker001:] So ch chea try to cheat in the same way that you're doing with the close talking. [speaker006:] I don't like that term. [speaker003:] I [disfmarker] [speaker007:] Actually, the s Those are two [disfmarker] [speaker006:] I don't like that term. [speaker003:] I have [disfmarker] [speaker008:] OK. We g we're gonna bleep that out. [speaker003:] I [disfmarker] I have a suggestion. Do the simplest thing first. [speaker001:] Yeah, right. [speaker003:] Because we're gonna want to know that anyway. [speaker007:] So, wait, the simplest thing is you cheat, saying [disfmarker] [speaker003:] In other words, if you d No. It's [disfmarker] No. It's even simpler thing than that, is just that you don't know. [speaker008:] You mean you do you don't do all those normalizations. [speaker001:] Yeah. [speaker003:] Yeah. [speaker007:] Oh, you m y totally unadapted. [speaker001:] Just do [disfmarker] or a free ri Yeah. [speaker003:] Yeah. Because you can get a number [disfmarker] uh, for that with the other as well. [speaker007:] Uh. Right. [speaker003:] Right? You can turn those things off. Right? [speaker008:] Um, actually [disfmarker] [vocalsound] We don't have any models. [speaker007:] Yeah. i i [speaker008:] Um, you can [disfmarker] [speaker007:] No. You can [disfmarker] [speaker008:] Um [disfmarker] [speaker007:] You can use a speaker [disfmarker] eh [disfmarker] What about gender detection? 
[speaker008:] A actually, it's that [disfmarker] it's [disfmarker] it's [disfmarker] We would have to retrain models that are not [disfmarker] that have none of that stuff, uh, in it. But actually we could [disfmarker] We can just run it, assuming that it's all one speaker, basically. And see what happens. [speaker003:] Yeah. Yeah. Yeah. And then put it in correctly and see how much that helps. [speaker007:] Yeah. [speaker008:] Yeah. [speaker003:] I mean, I was just thinking, do the one that's easiest first, because you wanna know [pause] how much that's helping you in these cases anyhow. [speaker008:] Actually [disfmarker] [speaker004:] But d do you have gender dependent models? [speaker008:] Actually, no. Th th Sorry. [speaker004:] A Are the models gender dependent? [speaker005:] Mmm. [speaker008:] Yeah. [speaker001:] Yeah. They're all gender dependent. [speaker007:] Yeah. [speaker004:] So [speaker007:] Yeah. So. You can r [speaker001:] So we would have to at least do that. [speaker008:] No, actually [disfmarker] [speaker007:] No. You can run both. [speaker004:] Yeah. [speaker007:] And you can s [speaker004:] Yeah. [speaker008:] No, actually, what [disfmarker] Here's [disfmarker] here's what we would usually do u under these circumstances. [speaker001:] And pick whichever's better. [speaker008:] We would actually [disfmarker] we would run some sort of segmentation. Thilo's is as [pause] good as any, probably. Um, and then we would do an unsupervised clustering of [disfmarker] of the segments, to [disfmarker] and [disfmarker] and put the similar ones into bins that would be sort of pseudo speakers. [speaker003:] Mm hmm. [speaker008:] And then we would do our standard processing on these pseudo speakers. And that turns out to work very well on Broadcast News, SPINE [disfmarker] those types of tasks, where you don't have the speaker segmentation given to you. [speaker001:] S does the clustering [disfmarker]? Do you give it sort of a target number of clusters? 
Or is it [disfmarker]? [comment] adapted in some way? [speaker008:] Um, you can either do it by target number or by some measure of dissimilarity that you use as a threshold. [speaker001:] Mm hmm. That's what I'm just thinking one of the big differences with Broadcast News and these meetings is we have m many fewer participants. [speaker007:] The other thing is that you actually have direction here. [speaker008:] Right. [speaker007:] So, unlike these corpora that are recorded with other microphones, like [disfmarker] The right way to do this I guess, w you know, in the future would be, well, in general, Thilo's sitting there, and this PZM is gonna [disfmarker] You know, he's roughly [pause] in that location. [speaker001:] Speaker ID. Yeah. Well, there're different ways of thinking about it. I mean that [disfmarker] that would be true if w you had a meeting situation with multiple mikes. But if you only had your PDA sitting in front of you [disfmarker] [speaker007:] Well, any case where the people are not all sitting at the same place and they're not moving around too much. [speaker001:] And you have more than one mike. [speaker003:] Yeah. If you don't have one [disfmarker] more than one mike, you don't have a very good handle on location. [speaker007:] W well, you have distance, and you have [disfmarker] [speaker003:] That's [disfmarker] it's not enough. [speaker007:] I mean, that, um, Jane's [disfmarker] i n The pickup of Adam on this mike is gonna be different than me, in terms of energy and so forth over the whole meeting. [speaker001:] Oh, so just from clustering. You might be able to cluster it better because of that. [speaker007:] You might get some clustering from the speaker and some of it from the characteristics of the distance, [speaker003:] Mmm. [speaker001:] From mike. Yeah. [speaker003:] Yeah. 
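[Editor's note: a toy version of the pseudo-speaker procedure Andreas describes, agglomeratively merging segments into bins either down to a target cluster count or until the closest pair exceeds a dissimilarity threshold. Real systems cluster over acoustic models of the segments; the per-segment feature vectors and Euclidean centroid distance here are stand-in assumptions.]

```python
def cluster_segments(vectors, threshold=None, n_clusters=None):
    """Greedy agglomerative clustering of segment feature vectors.
    Stops at n_clusters bins, or when the closest pair of clusters
    is farther apart than threshold."""
    clusters = [[v] for v in vectors]

    def centroid(c):
        dim = len(c[0])
        return [sum(v[i] for v in c) / len(c) for i in range(dim)]

    def dist(a, b):
        ca, cb = centroid(a), centroid(b)
        return sum((x - y) ** 2 for x, y in zip(ca, cb)) ** 0.5

    while len(clusters) > 1:
        # Find the closest pair of clusters.
        d, i, j = min((dist(clusters[i], clusters[j]), i, j)
                      for i in range(len(clusters))
                      for j in range(i + 1, len(clusters)))
        if n_clusters is not None and len(clusters) <= n_clusters:
            break
        if threshold is not None and d > threshold:
            break
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Each resulting bin is then treated as one pseudo-speaker for adaptation and feature normalization; with far-field audio the bins may partly reflect speaker distance and location rather than identity alone, as discussed above.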
[speaker007:] and [disfmarker] [speaker003:] But, say, if [disfmarker] if you had a [disfmarker] a cardioid mike or something sitting someplace, then [disfmarker] [vocalsound] sitting there, then it would [disfmarker] Its [disfmarker] its response to him would be about the same as the response to him, and so on. [speaker001:] And transfer functions. [speaker007:] Right. Exactly. [speaker008:] Well, you can do [disfmarker] [vocalsound] You can [disfmarker] you can [disfmarker] you can do certain normalizations like, you know, gain control, [speaker001:] Well, I think there're lots of [disfmarker] lots of ways of doing it. [speaker007:] They're both picked up in the clustering, I guess. [speaker003:] Mm hmm. [speaker008:] uh, before you do the clustering to rule out those [disfmarker] those types of things. [speaker007:] Or to just do the clustering, knowing that you're capturing both. It's just that the kind of clustering we've done before hasn't had that, uh, distance factor, or location factor in it in the same way. [speaker003:] I see. Yeah. [speaker007:] And so we're not really modeling it directly. If somebody does, we sh maybe we add that [speaker008:] Mmm. [speaker001:] Yeah. That's an interesting [disfmarker] [speaker007:] because I think it would be a pretty big difference. When you listen, you can sort of tell where people are, [speaker001:] It's a big difference. Yeah. [speaker008:] OK. That would be fun [disfmarker] fun to try. [speaker007:] not which s you know, side, but [disfmarker] [speaker001:] Well, humans are really good at that [disfmarker] transfer function through the head, and things like that. [speaker007:] Right. Even with one microphone. [speaker001:] So you know, even if you only have one ear, you can still get [disfmarker] get good transfers. [speaker003:] Yeah. Yeah. [speaker007:] Right. So our [disfmarker] our clustering is not gonna be intelligent that way. 
[speaker001:] So [speaker007:] It's just gonna pick up whatever energy difference, or whatever, is [disfmarker] [speaker001:] Yeah. [speaker003:] But [disfmarker] Anyway, I [disfmarker] I'd [disfmarker] it'd be neat to have that, because you know we've been at this for a little while and we don't have [disfmarker] have [disfmarker] any results yet with [disfmarker] with conversational speech at a distance. [speaker008:] Mm hmm. OK. [speaker003:] So, um [disfmarker] [speaker008:] Hmm. [speaker001:] Yeah. [speaker003:] We should at least get [pause] a first one. [speaker001:] Something. Yeah. [speaker007:] OK. [speaker006:] Mm hmm. [speaker003:] Um, and the other thing [disfmarker] This would kind of be a Hail Mary, but [disfmarker] but, uh [disfmarker] [vocalsound] Uh, Dave does have this stuff that is helping on digits, and, you know, and so with [disfmarker] then it'd be, you know, just throw that in and see. [speaker008:] Oh, yeah. Then we should [disfmarker] [speaker001:] Yeah. So it'd be cool to see if it helped. [speaker008:] Well, first you have to filter the whole training set and retrain. [speaker007:] Yeah. [speaker005:] Mm hmm. [speaker003:] Yeah. [speaker001:] That would be [disfmarker] ft quick, [speaker007:] Mm hmm. [speaker001:] since I think he did it in Matlab. [speaker003:] Uh, well, he can do it in something else. [speaker006:] Hmm. Interesting. [speaker001:] Yep. [speaker003:] But, I mean [disfmarker] [vocalsound] you know, it's [disfmarker] [speaker001:] Can't you export C from Matlab, [speaker003:] Um [disfmarker] Actually, we're experimenting with phase stuff now, [speaker001:] or is that Mathematica? [speaker003:] and [disfmarker] and, uh, thi this, uh [disfmarker] first result he got, uh, was really great. It actually, uh [disfmarker] uh [disfmarker] [vocalsound] didn't exactly eliminate the reverberation, but it completely got rid of the speech. [speaker008:] Oh. That would solve all of our problems. [speaker006:] Wow. [speaker003:] Yeah. 
Well, I was thinking that. [speaker008:] Wouldn't it? [speaker001:] Well, so just take the inverse and you're fine. [speaker003:] So, it's [disfmarker] That's [disfmarker] You think I didn't s [vocalsound] tell him that? No. I got pretty excited, because it completely got rid of the speech. So, I was thinking well, so, [speaker001:] So that's a speech detector. That's great. [speaker006:] That's interesting. [speaker003:] you know, it could be useful for lots of things ". So, we have to [disfmarker] [speaker006:] Did it get rid of other stuff, too, though? [speaker008:] Hmm. [speaker003:] What? [speaker006:] Did it get rid of other stuff besides the speech? [speaker001:] Yeah. Just subtract that [disfmarker] [speaker003:] Well, we have to sort of check that out. [speaker001:] subtract that from the original signal and you're set. [speaker003:] That's [disfmarker] [speaker006:] OK. Wow. Interesting. [speaker003:] Yeah. Yeah. [speaker008:] Right. Then you can do like an [disfmarker] you can estimate the [disfmarker] the noise estimates. [speaker001:] Noise estimate. Signal to noise. [speaker008:] Right? Yeah. [speaker001:] That's great. [speaker007:] Mm hmm. [speaker003:] Reminds me of when th when Herve and I were first playing with c uh, context dependent things for nets, and [disfmarker] and at one point we took out the speech input, so we [vocalsound] only had priors, and our performance went up. [speaker008:] Mmm [speaker006:] Wow. [speaker001:] I guess that's why Herve always talks about using the priors as one of the mixtures in [disfmarker] in his all ways combos. [speaker008:] Hmm. [speaker003:] S Yeah. Well, of course it was a bug. But, I mean, it's [disfmarker] it's a [disfmarker] but it was pretty f f [vocalsound] It was pretty funny anyway. [speaker001:] But still [disfmarker] [speaker006:] Wow. Interesting. [speaker008:] Yeah. [speaker006:] Hmm. 
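The idea floated above — that a residual containing only non-speech could serve as a noise estimate for a signal-to-noise figure — could be sketched like this. The power-subtraction formula is a standard rough estimate, not anything from Dave's actual phase experiment:

```python
import numpy as np

def estimate_snr_db(signal, noise_only):
    """Rough global SNR estimate: treat `noise_only` (e.g. a residual
    with the speech removed) as a noise reference, subtract its mean
    power from the total power of `signal`, and report the ratio in dB."""
    signal = np.asarray(signal, dtype=float)
    noise_only = np.asarray(noise_only, dtype=float)
    noise_power = np.mean(noise_only ** 2)
    total_power = np.mean(signal ** 2)
    # power left over after removing the noise estimate is taken as speech
    speech_power = max(total_power - noise_power, 1e-12)
    return 10.0 * np.log10(speech_power / max(noise_power, 1e-12))
```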
[speaker001:] So if you run [disfmarker] er, your recognizer with all probabilities equal, what do you get out? Probably garbage. [speaker002:] Whatever the language model says. [speaker001:] I bet the pruning [disfmarker] [speaker004:] Yep. [speaker001:] The pruning probably prunes everything out. [speaker008:] You get out Switchboard. That's just the lang the language model. [speaker001:] Yeah. That's right. [speaker003:] So that's how it was generated. [speaker008:] Ha Yeah. So we have this new speaker adaptation. Um A [disfmarker] b Oh, it's a s sort of feature normalization t uh, like f speaker adaptation, which, uh, which I wr which I [pause] wrote about in the last status report, which seems to be helping about a percent and a half on Hub five. So, um. We haven't tried that yet on the meetings, uh, but hopefully it'll help there, too. [speaker006:] I wanna ask, um [disfmarker] So, you know that the data [disfmarker] I' I have upgraded it considerably. So I've probably made [disfmarker] I probably corrected something like, well, s It's really a substantial amount of things that I've caught, changed [disfmarker] um, added to it. Including a lot of, uh, backchannels. So, when you're d running things, if you run it on the old [disfmarker] So. If you [disfmarker] if you [disfmarker] run it on the new version, then the numbers will be [disfmarker] um, and you compare it to the [disfmarker] to runs on the old version, then you're gonna end up with more of an improvement than would actually be the case. [speaker008:] Hmm. [speaker001:] Different. [speaker008:] Mm hmm. Well, we [disfmarker] we have a frozen [disfmarker] we do all our experiments with a frozen version of the transcripts as of, I don't know [speaker006:] As of February? [speaker008:] A a as of [disfmarker] no, a little [disfmarker] I don't know. When [disfmarker] when did we grab the transcripts? [speaker006:] So a l Like the HLT paper? [speaker005:] The b for the f? It was m m March, probably. 
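The "feature normalization"-style speaker adaptation mentioned above isn't specified in detail here; a standard example of that family is per-speaker cepstral mean and variance normalization, sketched below purely as an illustration of the idea:

```python
import numpy as np

def cmvn(features):
    """Cepstral mean and variance normalization over one speaker's
    frames: zero mean, unit variance per feature dimension."""
    features = np.asarray(features, dtype=float)
    mean = features.mean(axis=0)
    std = features.std(axis=0)
    std[std < 1e-8] = 1.0  # guard against constant dimensions
    return (features - mean) / std
```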
[speaker007:] F Sorry. [speaker008:] The [disfmarker] [speaker007:] For these meetings? [speaker008:] We're talking about which [disfmarker] which version we're using for evaluating the recognition. Which version of the transcripts. [speaker007:] Right. They're somewhere in between January and late March, or something like that? [speaker005:] Yeah. Something around there. [speaker006:] OK. Well, so long as you have the same baseline, then [pause] you'll be able to tell. [speaker001:] But they are channelized ones, though? [speaker008:] No, no. Yeah. O obviously. Yeah. [speaker007:] N N They're meetings that are now channelized, but they were not from [disfmarker] [speaker006:] So long as i so long as it's the same baseline you'll be able to tell. [speaker008:] The [disfmarker] uh, the other thing is [disfmarker] Th the [disfmarker] the [disfmarker] [speaker006:] But [disfmarker] but I'm just gonn I'm just saying that if you were to compare that with running that on the new data, that it would be an [disfmarker] a more optimistic outcome. [speaker008:] Hmm. And [disfmarker] Well, the [disfmarker] and the other thing is, it takes only a [disfmarker] a minute to rescore all the old outputs with [disfmarker] If you had new transcripts, then we j we just re rescore the old [disfmarker] [speaker001:] Right. Cuz you haven't done any training. [speaker008:] Sorry? [speaker001:] Right. Cuz we're not doing it for training. [speaker007:] Yeah. We haven't a modified the recognizer at all. [speaker001:] Right. So it's really [disfmarker] it would be really easy to re do it. [speaker007:] So, actually, um, at some point we should update and rescore everything with, you know, the corrected transcripts, [speaker008:] W we just [disfmarker] We save [disfmarker] [speaker001:] Yeah. It'd be interesting just to see i how much it changes. [speaker008:] Well, the [disfmarker] the [disfmarker] [speaker007:] just to h [speaker001:] I bet it wouldn't change a lot. [speaker008:] Right. 
[speaker005:] What are the nature of most of the changes? [speaker006:] Yeah. [speaker001:] We can take [disfmarker] we can have a pool. [speaker007:] Sometimes the changes are, um, cases where the recognizer would get it wrong anyway, cuz it was some word that we didn't have in the vocabulary, [speaker001:] That's just what I was thinking. [speaker008:] So, I [disfmarker] I [disfmarker] The thing is, when [disfmarker] [speaker007:] or [disfmarker] But it does help to get the backchannels back in, and things like that. [speaker008:] Right. So, whenever the [disfmarker] Right now, the s the scoring is based on segments. Um, which is not great because, for instance [disfmarker] So [disfmarker] so, the [disfmarker] the [nonvocalsound] other way to do the scoring is using a [disfmarker] a NIST format called STM. S uh, Segment Time Marked, [speaker001:] Yep. I [disfmarker] we know about it. [speaker008:] where [disfmarker] So, um, I have to convert the, uh, transcripts into this format, and then the scoring program actually looks at the times. And, uh, you know, it [disfmarker] You can have a different segmentation in your recognizer output than in your references, and it will still do the right thing. So, that's what we need to basically, uh, to [disfmarker] [speaker001:] Tran [speaker008:] Hmm? [speaker001:] Transcriber will export STM. [speaker008:] Oh. [speaker001:] In case you care. [speaker008:] Well. But then there's other changes. So. I mean, there's other [disfmarker] [speaker001:] OK. [speaker008:] We [disfmarker] we strip away a lot of the mark up, uh, in the transcripts, which, you know, isn't relevant to the scoring of the speech recognition output. [speaker001:] Right. [speaker008:] So [speaker007:] Would that [disfmarker]? Doesn't that only change the scoring if a word has moved into a different segment? I mean, I don't think that's [disfmarker] har I hardly ever see that. I think most of them are pretty good. [speaker008:] No. 
I mean if [disfmarker] suppo I [disfmarker] I assume you also changed some boundaries. [speaker006:] Mmm. I did. [speaker008:] Right? So, if we want to use new transcripts with a different segmentation, then we can't use them in the current way we do scoring. We have to [disfmarker] m m switch to this o [speaker007:] Oh, you mean you would need to re run the recognition. [speaker008:] No. We have to r There's a difference [disfmarker] [speaker007:] I mean, if you re ran the recognition, then you just run it [disfmarker] [speaker008:] Right. Right. [speaker007:] Oh, I see. If you wanna use the old [disfmarker] You can actually never, though, really infer what you would get with a different [disfmarker] [speaker008:] That's true. [speaker007:] You know? It's [disfmarker] it's probably more fair to re run the whole th re run that. Uh, in other words, it's not really a scoring script problem. It's [disfmarker] [speaker008:] No. But if you w just want to see what [disfmarker] Like, suppose you fixed some typ No, Jane fixed some typos and you wanna see what effect does that have on the word error, then we can f we c [speaker001:] Right. But [disfmarker] but [disfmarker] If the segments change, that won't work. [speaker007:] Y Yeah. Yeah. You [disfmarker] It'll sort of work, but it's not exactly what you would maybe get from recognition. [speaker001:] Well, I mean, what happens if you break one segment into two? [speaker007:] You never know. [speaker001:] Suddenly the don they don't match at all and you can't line them up anymore. [speaker008:] No, but that's what the pr Well, that's what I'm saying. [speaker007:] Well, you c You [disfmarker] you s [speaker008:] You can line them up based on the times. [speaker007:] You can [disfmarker] Yeah. [speaker008:] So the scoring program with [disfmarker] if you give it an STM reference file, it will actually compare the words based on their time marks. [speaker001:] Right. [speaker007:] Yeah. 
[speaker008:] So therefore, you can, um [disfmarker] [speaker001:] Does STM do it per word, or por per utterance? [speaker008:] It's per utterance, but it [disfmarker] it allows [disfmarker] As long as you hypothesize the word in the right segment in the reference, it gives you credit for that. [speaker001:] Th what I thought. Alright. [speaker008:] So it does a [disfmarker] [vocalsound] it does a word alignment, like you have to do for scoring, [speaker007:] Mm hmm. [speaker001:] And then it does also some [disfmarker] [speaker008:] but it constrains the words to lie within the t the time bins of the reference. [speaker001:] Within the segment. Yep. I see. [speaker008:] And [disfmarker] fo for you to get a credit for it. So. [speaker006:] That sounds great. [speaker008:] So [disfmarker] so, it's [disfmarker] it should be just a straightforward re formatting issue of the references. So. [speaker006:] Yeah. I mean, t y I was thinking, th the other day, that this [vocalsound] is not just conversational speech. The fact that is has so much technological jargon in it. I it makes it [disfmarker] u considerably harder to, um [disfmarker] me e e e to transcribe and to [disfmarker] to double check and all those things. So, [vocalsound] I think you're gonna find a substantial gain in terms of the word ac accuracy. So long as those words are in your vocabulary. [speaker007:] Well, it definitely helps with, um, forced alignment, too, [speaker001:] One percent. [speaker007:] because, you know, whe when we know the [disfmarker] the [disfmarker] the true words and we're adding them to the vocabulary and training a language model and so forth for future meetings, especially the [disfmarker] like, the front end meetings or meetings with a lot of [pause] jargon in them, that is not represented in Switchboard or CallHome, then it [disfmarker] then it really helps. [speaker006:] Well, all of them are. All of our [disfmarker] all of our meetings have a lot of jargon in them. 
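The reference re-formatting described above — converting transcripts into NIST's Segment Time Marked (STM) format so the scorer can align words by time rather than by matching segmentations — amounts to emitting one line per utterance with the waveform id, channel, speaker, start and end times, and the transcript words:

```python
def to_stm(filename, channel, speaker, segments):
    """Emit NIST STM (Segment Time Marked) reference lines, one per
    utterance: waveform id, channel, speaker, start time, end time,
    then the transcript words.  `segments` is a list of
    (start_sec, end_sec, text) tuples."""
    lines = []
    for start, end, text in segments:
        lines.append("%s %s %s %.2f %.2f %s" % (
            filename, channel, speaker, start, end, text))
    return "\n".join(lines)
```

(The full STM convention also allows an optional label field such as `<o,f0,male>` between the times and the text; it is omitted in this sketch, as is the markup stripping mentioned above.)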
But we know what those words are. [speaker007:] So Yeah. [speaker006:] I mean, I don't know how you get them into the vocabulary, but it would seem [@ @] [disfmarker] Now, I have to [disfmarker] I have to [vocalsound] [disfmarker] I really need to, uh, raise a question about the term "cheating". OK? And the reason is, um, if I understand that, "cheating" is a term which is used to apply for basically what all of linguistics [disfmarker] corpus linguistics does, and what [disfmarker] what my transcribers are doing and what I do, [vocalsound] which is a methodology whereby you actually physically mark things in the data. [speaker001:] All [disfmarker] al [speaker006:] Like the transcription, like the words that are actually there. [speaker001:] All we mean by that is that we're giving the recognizer more information than it would have if you were running it raw, over a meeting that no person has ever listened to or transcribed. [speaker006:] OK. I mean, that [disfmarker] that part is OK. But I [disfmarker] but I do wonder sometimes if it might be possible to use [vocalsound] a term that's a little bit less evaluative, like, um, "hand marked". [speaker003:] Uh, but cheating is [disfmarker] is [disfmarker] p Sure. But chea but cheating is [disfmarker] is [disfmarker] is [disfmarker] pretty commonly used to mean this. [speaker008:] It's a [disfmarker] it's a technical term. [speaker007:] It [disfmarker] I guess it [disfmarker] It's been around for a long time, [speaker003:] It really is [disfmarker] [speaker001:] It is. Yep. [speaker007:] and it [disfmarker] it's sort of, um [disfmarker] Because it's so strong a word, people don't take it that seriously, [speaker003:] Yeah. Yeah. I i i i [speaker007:] in [disfmarker] It's not negatively viewed. It just really means [disfmarker] [speaker003:] I it's [disfmarker] i It's even more than that. 
I think it [disfmarker] it really gives a very strong perspective, that you know that what you were doing is not an un biased experiment. If you don't say that, then people think "oh, they did that and they threw that, but that doesn't represent what would happen in the real world". But if you say "we did a n cheating experiment", which is really the standard way you'd say it, it says you deliberately put in a piece of me information that you would not have in the real world so that you can learn something. Just as part [disfmarker] as your process. So it's [disfmarker] I d I don't know, I [disfmarker] [speaker006:] OK. I guess what I'm [disfmarker] I'm thinking, is just in [disfmarker] if [disfmarker] uh, when these are presented in an inte interdisciplinary context, it might be nice to add that explanation? [speaker003:] i [speaker006:] Cuz otherwise it sounds like a pejorative statement on an alternative methodology? Cuz it sounds like [disfmarker] it sort of devalues the [disfmarker] the, um, all [disfmarker] other approach, which is to [disfmarker] to put those distinctions in. [speaker003:] Maybe. I [disfmarker] I've heard this at ICSLP for years and years and years, though. I mean, it's [disfmarker] it's pretty [disfmarker] pretty common. People say this. [speaker008:] If [disfmarker] i Well, you i i in a [disfmarker] To a broader audience you could call it a diagnostic experiment, rather than a cheating experiment. [speaker007:] Actually, Jane is right. Like, in conversation analysis, I've never heard people use this, because they're not using an automatic system. So, it really [disfmarker] [speaker008:] But they don't run experiments. [speaker003:] Yeah. [speaker007:] Right. I'm j Well, [speaker006:] They can do experiments. [speaker007:] you can do experiments, but the [disfmarker] it's cheating relative to what we call a system where we can completely control this black box. And it's not a very smart system. It [disfmarker] it only knows what we give it. 
And if w If it knows more than we would really give it when it runs, we call it cheating. But it's [disfmarker] Yeah, it's f only used in a community that does some type of computational modeling, [speaker003:] Yeah. In pattern recognition, or in [disfmarker] [speaker007:] I think. It's really not used in any kind of community doing experiments on human perception, [speaker003:] Yeah. [speaker007:] or [disfmarker] [speaker003:] Right. [speaker007:] You know. So [pause] she certainly can define it. [speaker003:] Yeah. It's machine [disfmarker] machine experiments. Um, [vocalsound] I mean, w when [disfmarker] when the, uh, the neural net w uh [disfmarker] wave hit in the mid eighties, and by the late eighties there were [disfmarker] uh, we were reviewing thousands of papers that were [vocalsound] coming out in neural nets. It was really hot. Everybody thought it would do everything. Um, and a really common error that people were making was, they were just reporting their, uh [disfmarker] uh, classification results on the data that they were training on. And so I think it was [disfmarker] it was [disfmarker] it was very important for people then who were doing something diagnostic to say "hey, I know I'm doing something that isn't kosher". [speaker008:] Hmm. [speaker003:] And to make it really clear that they knew it. So that was, I think, why [disfmarker] why it became a popular term. [speaker006:] Yeah, it just seems like, you know, i if it were "bootstrapping", or if it were [disfmarker] I mean, there are other ways to maybe get the point across. [speaker003:] But it isn't bootstrapping. Uh, it's [disfmarker] it's really cheating. [speaker001:] It's us it's using information you wouldn't normally have. [speaker003:] It's just ng Yeah. Yeah. [speaker006:] Hu Well. OK. [speaker003:] But it's cheating in a way that's pril It's [disfmarker] it's announcing to everybody "hey, I'm cheating, by doing this." [speaker001:] So. 
[speaker003:] It's saying [disfmarker] so it's all above the table. I'm actually using this other thing. [speaker006:] OK. [speaker003:] That's [disfmarker] that's [disfmarker] that's the reason. [speaker006:] OK. Alright. [speaker003:] Bootstrapping would imply it was actually legitimate in some kind of way. [speaker001:] Hmm. [speaker006:] Well, I [disfmarker] I think [disfmarker] [speaker003:] But [disfmarker] [speaker001:] We're not de legitimizing the data. We're de legitimizing the experiment. We're not saying that the data is cheating data. We're saying th we are cheating by using this data. [speaker003:] Yeah. [speaker006:] OK. [speaker003:] Yeah. [speaker001:] Because normally you wouldn't have that data available. [speaker006:] OK. OK. [speaker003:] Yeah. Anyway. [speaker006:] Does seem to me that it [disfmarker] that it carries over some baggage with it. That, uh, [vocalsound] i that [disfmarker] I can understand it in context as you described it. But, it [disfmarker] it seems to me that [disfmarker] u um, it does import some negative evaluation, that, um, would maybe not be good if [disfmarker] [speaker003:] It has some shock value. [speaker006:] It has shock value, and [speaker003:] Which is part of why it's used. [speaker006:] to the wr to the wrong audience, I think that that might be t [vocalsound] a negative shock. [speaker007:] Yeah. I thi I think to the wrong audience, I agree that, um, p [speaker003:] Uh huh. [speaker008:] It's like a disclaimer. [speaker007:] to the wrong audie we should just explain what it [disfmarker] what it means here, and that it's a common term, [speaker006:] Yeah. We started with hand marked data, or with, uh, t you know, hand transcribed data, [speaker007:] and we're [disfmarker] [speaker006:] or [disfmarker] [speaker001:] Shoot. [speaker006:] Yeah. Did you break the microphone? [speaker007:] Yeah. [speaker001:] Just the clip. [speaker006:] Nnn. OK. [speaker008:] Now we have to l delete that expletive. 
[speaker006:] OK. Well, I feel better now. Thank you very much. [speaker007:] Hmm. The first time I heard that, I [disfmarker] I thought, you know, the same thing [speaker005:] Thank you, Andreas. [speaker007:] and I guess you ju after a while it becomes almost a [disfmarker] It's a bu bit of a [disfmarker] a humbling thing when somebody says that. [speaker001:] It's p it [disfmarker] it really is part of the jargon. [speaker003:] But it's also is [disfmarker] [speaker007:] If they get good results but we were cheating on this feature because we took it for granted even though we [pause] can't really assume that, then it's actually the opposite. [speaker003:] Right. [speaker007:] It's [disfmarker] [speaker006:] The trouble is, that [disfmarker] you know, I understand it in that context, but it [disfmarker] but it is almost resentful. It's almost like, you know, resentful of the data, resentful of the, um, the hard work that's going into preparing the data [speaker001:] Well [disfmarker] [speaker006:] is [disfmarker] [speaker003:] No, no. It has [disfmarker] I don't think you're saying the data is cheating. [speaker007:] Well, it's [disfmarker] it's [disfmarker] [speaker003:] I think you're saying "in my experiment, I cheated in this way". [speaker001:] That's what I was saying. [speaker004:] Yeah. [speaker003:] I mean, another is what Liz was talking about, how in the Switchboard tests [disfmarker] in all the Switchboard tests we've been g doing, we've been making the same er uh, same [disfmarker] uh, uh, standard [disfmarker] using the same standard way of [disfmarker] of getting the data to test on. Which means that we weren't actually running it on [disfmarker] on, uh, on data that had [disfmarker] had no speech. And, in a sense, that was cheating. [speaker006:] OK. 
[speaker003:] So i so it [disfmarker] it's [disfmarker] it's a good wake up call if people "well, we have this performance but you have to keep in mind, we're doing [disfmarker] we're [disfmarker] we're not doing the whole real task in this way and this way and this way. But, I'll convince you that [disfmarker] that it's still important for you to listen to what I have to say next because of this and this." [speaker006:] OK. OK. [speaker003:] And so it's just a way of putting it all out on the table, uh, as opposed to [disfmarker] [speaker001:] Uh, and it's used for a lot of different types of data. So, segment whether you have segmentation or not, is it male or female or not [disfmarker] [vocalsound] Um, do you know the signal to noise? [speaker003:] Yeah. [speaker006:] Mmm. [speaker001:] Like, that's another one I see all the time, where [vocalsound] you assume it's known. [speaker003:] Yeah. Sure. [speaker006:] Oh, interesting. [speaker003:] Yeah. [speaker001:] And you say it's cheating because [pause] you don't actually compute it. [speaker008:] Hmm. [speaker006:] I didn't know that. Huh! [speaker003:] In [disfmarker] In the multi band experiments, in the first one, we really wanted to find out, "what if you knew which band was really noisy?" [speaker006:] Uh huh! [speaker001:] N Right. [speaker003:] I mean, suppose you just know that. [speaker001:] Same cheating. [speaker003:] And then [disfmarker] then, even if you know that, can that help you? I mean, what can [disfmarker] what strategy can you do, to do well without that particular band in the spectrum? [speaker008:] Hmm. [speaker003:] And so that was important to know as a baseline. And [disfmarker] and then, once you knew that, then s you go "well, now how do I know that that's noisy?" [speaker008:] Hmm. [speaker006:] OK. 
[speaker003:] And [disfmarker] and [disfmarker] and, uh [disfmarker] But, y you know, I [disfmarker] I should also say that there's a lot of [disfmarker] It's not just the word "cheating". But there's lots of other things that we talk about, [vocalsound] which, as soon as you go outside of your narrow little group, it gets very, very confusing to people. [speaker001:] Mm hmm. [speaker006:] Oh, sure. Terminology is always context dependent. [speaker003:] And [disfmarker] I i i i [speaker006:] No question about it. It's just [pause] that it seems like [pause] the [disfmarker] the point without the negative evaluation would be [disfmarker] However, you know, if i I understand your point. That it [disfmarker] th it has a long tradition in this field [disfmarker] [speaker001:] It's not even really negative. [speaker006:] I didn't realize. And is used [disfmarker] This is interesting what [disfmarker] what Adam said about it being used also for a bunch of other, uh, dimensions. [speaker003:] Well, the example I was thinking of, also, was this w thing that [disfmarker] that Herve and [disfmarker] and Hynek and I made about, uh, increasing the error rate. And, so we [disfmarker] we did a number of papers, and talks, and so forth about, uh [disfmarker] w uh, i the [disfmarker] the virtues of increasing the error rate. [speaker006:] Oh. [speaker003:] OK. And [disfmarker] I mean, and the whole point of it was, not that it was good to increase your error rate, but it was good to be willing to risk increasing your error rate by trying risky things, and trying [disfmarker] Because there's this notion of a local minimum, that if you just have [vocalsound] some system that's very complex, and you t you [disfmarker] [vocalsound] you turn some knobs to try to make it better and better, you'll never get out of this local minimum. [speaker001:] Fine tune little bits. [speaker006:] Mm hmm. [speaker003:] You have to be willing [vocalsound] to jump to something that's quite different. 
[speaker008:] Mmm. [speaker006:] Interesting. [speaker003:] And the first time you jump to something quite different, or maybe the first ten times or a hundred times you do, [vocalsound] it's gonna be much much worse, because you've optimized the other system. So the effect [disfmarker] the immediate effect is gonna be to increase your error rate. [speaker006:] Interesting. [speaker003:] And so it was [disfmarker] we had a couple papers like "Towards increasing the error rate in speech" and [disfmarker] and so on. [speaker001:] Hmm. [speaker006:] Huh! [speaker003:] And we really did get feedback from a few people, some of whom were, you know, fairly senior, that [disfmarker] that, um, "Well, you know, you sh you [disfmarker] you c We're c really concerned about you misleading people into thinking they should be increasing the error rate." [speaker008:] Hmm. [speaker006:] Yeah. [speaker003:] And, [vocalsound] uh, and we go [disfmarker] we thought "Oh, come on." But. [speaker001:] Did you read the paper? [speaker008:] But weren't you cheating in those experiments? [speaker006:] Yeah. [speaker003:] That's [disfmarker] we weren't. That's why we increased the error rate. [speaker006:] Interesting. [speaker007:] Actually, I think of cheating as [pause] a way to do some work, where you can't address all of the computational tasks. Like, if we wanna study, um, speaker habits, but we can't do speaker detection. But we wanna assume [disfmarker] Let's say, we know this is Jane and we know this is Chuck, even though automatically to look at the habits, we would need to also first figure that out. [speaker003:] Mm hmm. [speaker007:] But we can sort of cheat on that factor because it's somebody else's research. And then we just [disfmarker] So it's [disfmarker] Then we wanna make a proof of concept for [disfmarker] [speaker001:] What [disfmarker] what this [disfmarker] w Right. What would this be like if [disfmarker] w it were perfect? 
[speaker007:] If someone else [disfmarker] [speaker001:] If this component were perfect? [speaker007:] Right. Eh [disfmarker] or [disfmarker] w But we really can't work on that problem. [speaker004:] Yep. [speaker007:] We don't have time, we're not interested, or w whatever [disfmarker] [speaker001:] Hmm. [speaker007:] And so you [disfmarker] you assume it's given [speaker001:] It's too hard, usually. [speaker007:] because it's, um [disfmarker] you wanna go forward with your research, and assume that you have that information. [speaker006:] Mm hmm. [speaker007:] So, I guess if [disfmarker] I don't ever think of it as negative. More like, it's something we're not building. [speaker001:] It's not pejorative at all. [speaker006:] Well. [vocalsound] The term itself is. You know? [speaker001:] But, it's not pejorative towards the data. [speaker006:] But [disfmarker] but it [disfmarker] And it's pejorative with respect to a certain purpose. [speaker001:] It's pejorative towards [disfmarker] [speaker006:] I understand. [speaker001:] It's pejorative to o ourselves. [speaker006:] I [disfmarker] I wanted to raise the issue. [speaker008:] S self [disfmarker] [speaker001:] Right? To say "I am cheating in this experiment" is not saying that the data is bad. [speaker003:] Yeah. [speaker006:] Yeah. [speaker003:] But [disfmarker] [speaker001:] It's saying that my experiment is bad. [speaker003:] Yeah. Anyway, it's colloquial, [speaker006:] Anyway. [speaker003:] an and [disfmarker] and, uh, it's [disfmarker] it's interesting to hear that [disfmarker] that someone coming from a different direction [disfmarker] it s it sounds the way it sounds to you. But [disfmarker] but, uh, I [disfmarker] I'd never heard that before. [speaker001:] Yep. [speaker006:] OK. That's interesting. OK. [speaker003:] So, so i so, that's interesting. [speaker006:] Well, I wanted to raise the issue, and um, [speaker003:] Mm hmm. 
[speaker006:] I [disfmarker] I, uh [comment] [disfmarker] I appreciate the discussion. I wanted to ask one other question which is a different [pause] matter, which is with respect to, um, this thing that you've been r working on for the recording monitoring script? So. The idea [disfmarker] You had this script that you're working on, to be sure that the microphone values are in [disfmarker] are kept in [disfmarker]? [speaker001:] Oh, right. I haven't gotten back to that recently. [speaker006:] OK. [vocalsound] OK. [speaker001:] But [speaker008:] Hmm. [speaker001:] I assume you're saying you want me to [pause] get back to it. [speaker006:] Cuz, y you know [disfmarker] Well, I was just wondering if [disfmarker] i i Because I [disfmarker] I am finding that, um, in double checking, I've run across, u um, one data set where the microphone was off early on, and then two other [pause] speakers [disfmarker] their microphones went off later. [speaker001:] Hmm. [speaker006:] And this is out of, like, seven speakers. So out of seven speakers, four microphones [disfmarker] Y you know, basically, three [disfmarker] three microphones bak basically flaked, which makes it hard for the data to be [pause] used for all possible purposes. [speaker002:] D Do you think the battery ran out? [speaker006:] That's what I think. [speaker002:] Or [disfmarker]? [speaker006:] I think the two that flaked late, I think it was the battery. But, you know, if the script could, um, you know, alert the recording person to that [disfmarker] I mean, I don't know if there's a way to replace a battery, if it happens in the middle of a meeting. [speaker001:] Well [disfmarker] [speaker006:] Maybe that's hopeless anyway. [speaker001:] Well, you can, but you'll lose a lot of data. [speaker006:] But [disfmarker] [speaker004:] Yeah. [speaker001:] But, I mean, that doesn't really help because often the recording person isn't in the room. [speaker006:] Oh, I see. [speaker001:] So, what are you gonna do? 
I mean it will [disfmarker] w the person [disfmarker] if you're looking up at the board and I disable the screensaver, you will see that the mike is off, but that doesn't necessarily help. [speaker006:] OK. Then I guess that raises the question of, [vocalsound] uh, whether we should screen the data before they get transcribed. [speaker001:] So [speaker006:] Because, although I think that the data are still useful in terms of providing content, and [disfmarker] and that, I know that having three out of seven microphones out of commission during some part of the meeting restricts the, uh, usefulness of the data [pause] for other purposes. [speaker007:] Yeah. [speaker001:] Mm hmm. [speaker007:] I [disfmarker] I think that's a really good idea. [speaker006:] And maybe you don't wanna have it [disfmarker] [speaker007:] Because for all kinds of studies, we don't really enjoy meetings where some of the [disfmarker] where the signal's going off at times. It just makes it hard to study any kind of parameters. So, if there's a way to check the signal quality before transcribing it and you find any problem at all, it'd be much better to go to another meeting, I think. [speaker006:] I actually think that Thilo's [disfmarker] n in a [disfmarker] in a [disfmarker] k a [disfmarker] You know, when you do the pre segmenter and you [disfmarker] and you run across troubles [disfmarker] He [disfmarker] he runs across s some of these that are [disfmarker] [speaker004:] Yeah. Yeah. Some of these [pause] things are captured by [disfmarker] Yeah. Um, by looking at [disfmarker] at the minimum and maximum and whatever [speaker006:] So, we have some part of that. [speaker004:] you find in the channels, and [disfmarker] [speaker001:] It's just hard to tell between that and just someone not talking. [speaker006:] I c [speaker004:] But [disfmarker] Yep. Yep. Yeah. Yeah. [speaker001:] So. 
[speaker006:] Well, there the [disfmarker] And the other aspect of it is, um, that, when the microphone is not well adjusted, then y i i even if it's not a lapel mike, you can get lapel mike type behavior. [speaker001:] Right. [speaker006:] Cuz I'm [disfmarker] I'm expecting, for example, with this, that you're gonna end up, [vocalsound] you know, picking up other peoples' signals. [speaker007:] I've already written some notes here, so I [disfmarker] [speaker001:] Mm hmm. [speaker007:] Um, right. Uh [disfmarker] [speaker006:] Yeah. I mean, it's just [disfmarker] you know, the microphone is intended for certain adjustments. [speaker001:] Well, we should be getting new equipment in, so we don't have to use the earplug any more. [speaker007:] OK. But it's my ear. I'm sure it's something wrong with my head, [speaker001:] It's your hair. [speaker004:] Hmm. [speaker007:] but [disfmarker] Um, actually, is there a way, though, to use whatever you're using for background noise to check post hoc that a microphone was constantly on? I mean [disfmarker] [speaker001:] I mean, y d you [disfmarker] you can do sort of a check, [speaker007:] Uh, like this o [speaker001:] but it will be very hard to tell the difference between that and, um, someone not talking. [speaker003:] Just so it doesn't [disfmarker] when the microphone's dead, it doesn't put out zeros? [speaker001:] No. [speaker003:] Or [disfmarker]? Mm hmm. [speaker008:] What does it put out? [speaker005:] Really? [speaker003:] Johnson noise? [speaker007:] But then how are y how are you detecting during a meeting? [speaker001:] Little bit of noise. [speaker003:] Yeah. [speaker001:] I use a threshold. If it's below a particular value, it [disfmarker] it flashes yellow. So as I [disfmarker] eh [disfmarker] but it's not perfect. [speaker007:] But is it better than nothing? [speaker001:] Yeah, probably. 
[speaker007:] Cuz it would really be [disfmarker] [speaker001:] I mean, this is the reason why I haven't gotten back to it, is cuz my first pass at it didn't really work because all the mikes have different noise levels. [speaker007:] Right. [speaker008:] Hmm. [speaker001:] And so I have to do something a little more clever. [speaker005:] It's just the noise from the connections and the [disfmarker] everything. [speaker007:] But you could [disfmarker] [speaker001:] Yep. Yep. [speaker007:] So, you could run that post hoc on a already recorded meeting, in the sense that, you know, not everyone [disfmarker] as you just said, you won't be there. And then if we find any problems, have the transcribers listen and [disfmarker] [speaker001:] Yep. [speaker007:] I really think it's better not to transcribe a meeting that's gonna have problems once you've spent all this effort. [speaker001:] Yep. [speaker006:] The only argument for doing so would be with reference to the content. Content maps. [speaker007:] Yeah. Yeah. [speaker006:] Uh [disfmarker] but mayb maybe then it should be done in the old original way, w instead of channelizing and having the, uh, uh, you know, elaborate coding of the backchannels. [speaker008:] Mmm. Yeah. You could still [disfmarker] eh [speaker001:] Yeah. There's [disfmarker] eh [disfmarker] the s the standard deviation of the signal gives you a good clue. I mean, if that is too low, then you can be pretty sure that it's, uh, empty. [speaker007:] Mm hmm. [speaker005:] Hmm. [speaker004:] Yeah. Yeah. Sometimes you [disfmarker] you capture that when you mix it together, and [disfmarker] [speaker001:] Right. Exactly. [speaker004:] Yeah. Yeah. 
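The dead-mike check discussed here (a noise-floor threshold, refined by the standard deviation of the signal) might be sketched as follows; the threshold value and the integer-PCM sample scale are illustrative assumptions, not the actual monitoring script:

```python
def channel_std(samples):
    """Standard deviation of a list of integer PCM samples."""
    n = len(samples)
    mean = sum(samples) / n
    return (sum((s - mean) ** 2 for s in samples) / n) ** 0.5

def flag_dead_channels(channels, threshold=20.0):
    """Return indices of channels whose standard deviation falls below
    the noise-floor threshold. A dead mike still puts out a little
    noise rather than exact zeros, so the cutoff must sit above zero
    but below the quietest live channel."""
    return [i for i, ch in enumerate(channels) if channel_std(ch) < threshold]
```

As noted in the discussion, a single global threshold fails in practice because the mikes have different noise levels, and a quiet-but-live channel (someone not talking) can look like a dead one; a per-channel calibration pass would be needed to do this reliably.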
[speaker006:] And actually, an [disfmarker] an alternative to even doing that level of transcription would be to have a transcriber listen and [disfmarker] and m you know, maybe just [disfmarker] Oh, I don't know if a transcr uh [disfmarker] Well, someone who's associated with the meeting could have, like, a summary of points handled in the meeting. You know, maybe if we could pay one person who knows that subject matter to do an outline of the meeting's content. [speaker001:] Uh huh. [speaker006:] Instead of losing the content altogether. [speaker008:] Well, it's even [disfmarker] If y if you could still transcribe the words based on the [pause] far field microphone or something, [nonvocalsound] you could, uh, still use it for, say, language modeling, [speaker001:] Is there a [disfmarker]? Far field. Yeah. [speaker008:] you know. [speaker006:] I guess the troub trouble in my mind is that it [disfmarker] that it's not a very neat corpus if you say "these data are available, [speaker007:] Yeah. [speaker008:] Right. [speaker006:] but these are imperfect, [speaker007:] Yeah. [speaker006:] because of the f you know, the batteries flaked." [speaker001:] Well, we'll just [disfmarker] We'll [disfmarker] we'll just have to note those. [speaker007:] I'd mu We're [disfmarker] I'd much rather have, you know, meetings that have all the channels, even if we had to skip a meeting or something, [speaker008:] Well, if [disfmarker] Right. [speaker007:] just for [pause] all these other purposes. So. [speaker006:] And we have so many. [speaker008:] I guess there is no [disfmarker] [speaker005:] You could put them at the back of the queue or something. [speaker001:] Well, I [disfmarker] I [disfmarker] I think we can't throw away that data, [speaker007:] Yeah. [speaker001:] cuz otherwise we'll end up with very few meetings. [speaker007:] Right. But may just [disfmarker] [speaker008:] there's [disfmarker] there's no shortage of meetings, [speaker005:] Yeah.
[speaker001:] But [speaker004:] Yep. [speaker008:] so [speaker007:] Right. There [disfmarker] but there's a shortage of trans transcription power. [speaker008:] we can afford [disfmarker] we can a [speaker006:] We have such a backlog. I mean, to be able to get through the backlog of the [disfmarker] of the good meetings, it would be nice to not have energies diverted. [speaker008:] Hmm. [speaker007:] Yeah. [speaker001:] Do we have a [disfmarker]? I'm sorry for interrupting. Do we have an EDU meeting at four? [speaker006:] That's fine. [speaker008:] Hmm. [speaker006:] Oh, good question. [speaker007:] You know what? They can their four o'clock cancelled, [speaker004:] I don't think so. [speaker007:] and they're starting at four twenty, and I'll set up at four fifteen. [speaker001:] OK. Great. Good. [speaker008:] Hmm. [vocalsound] Hey, I bet there's tea. [speaker003:] Yeah, but there's [disfmarker] [speaker001:] Cuz otherwise I was gonna say we have to cancel. [speaker007:] Yeah. [speaker003:] There's tea, also, though. [speaker008:] Yeah. [speaker006:] OK. [speaker007:] Yeah. So you'll have si [speaker003:] So, maybe we should [disfmarker] maybe we should do a, uh, [vocalsound] simultaneous digits [disfmarker] digits read. In the interest of [disfmarker] of getting the snacks. [speaker001:] A simultaneous digit. OK. [speaker006:] Mm hmm. [speaker008:] Hey, I've never done one of those. [speaker003:] Oh, you never have? Oh, it's a treat. [speaker007:] You have to plug your ears or just prepare not to laugh. [speaker008:] Right. [speaker003:] OK. [speaker001:] Everyone ready? S reading simultaneous digits? Three! Two! One! [speaker006:] Yeah. Yep. Yep. [speaker008:] Sev [speaker003:] OK, another successful babble. [speaker001:] And [disfmarker] [speaker007:] OK. [speaker006:] Test. [speaker002:] Let's see, I should be [speaker005:] As close to your mouth as you can get it. [speaker002:] Two. [speaker004:] Up high [disfmarker] high as you can get.
[speaker002:] La [speaker008:] Channel one one one. [speaker007:] Yeah, on your upper lip. [speaker002:] Is this channel one? Gee, OK. Yes. OK. [speaker005:] OK, so for [disfmarker] for [disfmarker] For people wearing the wireless mikes, like [disfmarker] like this one, I find the easiest way to wear it is sorta this [disfmarker] this sorta like that. [speaker008:] This is [disfmarker] chan channel channel one one two three [speaker006:] Channel five, channel five. [speaker002:] Yeah. [speaker005:] It's actually a lot more comfortable then if you try to put it over your temples, [speaker002:] Mm hmm. What do you do, [speaker006:] Test, test test. [speaker005:] so [disfmarker] [speaker002:] you do it higher? Mm hmm. [speaker004:] Adam's just trying to generate good uh data for the recognizer there. [speaker007:] Yeah, I think we're supposed to [disfmarker] that's right. [speaker005:] And then also, for [disfmarker] for all of them, if your boom is adjustable, the boom should be towards the corner of your mouth, [speaker006:] Test test. [speaker001:] By the way, there was a bug. Yeah, i it wasn't using the proper [speaker004:] Oh it was. [speaker005:] and about a uh a thumb to a thumb and a half distance away from your mouth, [speaker001:] basically it wasn't adapting anything. [speaker004:] Oh. [speaker005:] so about like I'm wearing it now. [speaker004:] Oh that's interesting. So why didn't you get the same results and the unadapted? [speaker008:] It's not always possible. [speaker005:] so so Jane, you could actually do even a little closer to your mouth, [speaker001:] Hmm? [speaker007:] I could [disfmarker] can this be adjuste like this? [speaker004:] Why didn't you get the same results and the unadapted? [speaker005:] but [disfmarker] [speaker001:] Oh, because when it estimates the transformer pro produces like a single matrix or something. [speaker005:] Yep. [speaker007:] Is that [@ @]? OK, thank you. 
[speaker006:] Adam, I'm not [disfmarker] uh, looks kinda low on channel five [disfmarker] [speaker004:] O Oh oh I see. I see, I see. [speaker006:] no? [speaker002:] OK. [speaker005:] Channel five, s speak again. [speaker007:] Hello. [speaker006:] Maybe not. [speaker001:] Basically there were no counts [speaker005:] Yeah, that's alright. [speaker006:] Hello? [speaker008:] Yeah. [speaker006:] It's OK? [speaker005:] I mean, we could [disfmarker] we could up the gain slightly if you wanted to. [speaker006:] Is this OK? [speaker008:] OK. [speaker004:] I see what you mean. [speaker003:] Who's channel B? [speaker005:] but [disfmarker] Uh, channel B is probably Liz. [speaker003:] Uh oh. [speaker008:] Uh channel B [disfmarker] I am channel B. [speaker002:] You wanna close this, [speaker007:] Channel eight, eight. [speaker002:] or [speaker003:] No I [speaker005:] Thank you. [speaker008:] No, channel B. [speaker003:] yeah, [speaker001:] Hello, hello. [speaker003:] yeah, you're channel B. [speaker008:] Yeah, yeah. [speaker003:] So can you talk a bit? I thought it might be too [speaker008:] OK, yeah, channel B, one two three four five. [speaker003:] OK. [speaker005:] Yeah, it's alright. So, the gain isn't real good. [speaker002:] We're recording, [speaker003:] OK. [speaker005:] OK, so we are recording. [speaker008:] Ah. [speaker002:] right? Yeah. [speaker006:] Oh. [speaker005:] Um everyone should have at least two forms possibly three in front of you depending on who you are. [speaker001:] OK. [speaker005:] Um we [disfmarker] we're doing a new speaker form and you only have to spea fill out the speaker form once but everyone does need to do it. And so that's the name, sex, email, et cetera. [speaker008:] Mm hmm. [speaker005:] We [disfmarker] we had a lot of discussion about the variety of English and so on so if you don't know what to put just leave it blank. Um I [disfmarker] I designed the form and I don't know what to put for my own region, so [speaker001:] Mmm. 
[speaker004:] California. [speaker001:] I think [disfmarker] [speaker005:] California. [speaker008:] California. [speaker001:] Um may I make one suggestion? Instead of age put date of [disfmarker] uh year of birth [speaker005:] Sure. [speaker001:] because age will change, but the year of birth changes, you know, stays the same, usually. [speaker005:] Oh. Birth year? [speaker003:] A actually, wait a minute, [speaker007:] Although on [disfmarker] [speaker001:] Yeah. [speaker003:] shouldn't it be the other way around? [speaker004:] Not for me. [speaker007:] course on the other [disfmarker] on the other hand you could [disfmarker] you view it as the age at the time of the [disfmarker] [speaker003:] On the other side, yeah. [speaker001:] Well the thing is, if ten years from now you look at this form knowing that [disfmarker] [speaker007:] Yes, but what we care about is the age at [disfmarker] at the recording date rather than the [disfmarker] [speaker004:] But there's no other date on the form. [speaker003:] O yeah. W we don't care how they [disfmarker] old they really are. [speaker001:] Well [disfmarker] well I don't know. [speaker007:] Yes. [vocalsound] Unless we wanna send them a card. [speaker005:] Well I guess it depends on how long the corpus is gonna be collected for. [speaker007:] Yeah, that's true. [speaker001:] Anyway. [speaker003:] I still don't see the problem. [speaker005:] Either way yeah I think [disfmarker] I think age is alright and then um there will be attached to this a point or two these forms uh so that you'll be able to extract the date off that [speaker001:] OK. Mm hmm. [speaker005:] so, anyway. And so then you also have a digits form which needs to be filled out every time, the speaker form only once, the digit form every time even if you don't read the digits you have to fill out the digits form so that we know that you were at the meeting. OK? And then also if you haven't filled one out already you do have to fill out a consent form.
And that should just be one person whose name I don't know. OK? [speaker006:] Do you want this [pause] Adam? [speaker005:] Uh sure. Thank you. [speaker002:] So uh [speaker005:] OK so should we do agenda items? [speaker002:] Uh oh that's a good idea. I shouldn't run the meeting. [speaker005:] Uh well I have [disfmarker] I wanna talk about new microphones and wireless stuff. [speaker007:] Mmm. [speaker005:] And I'm sure Liz and Andreas wanna talk about recognition results. Anything else? [speaker003:] I guess [disfmarker] what time do we have to leave? Three thirty? [speaker001:] Yeah. [speaker003:] Yeah, [speaker005:] Why don't you go first then. [speaker003:] so. [speaker002:] Yeah, good idea. [speaker001:] OK. [speaker003:] Um Well, I [disfmarker] I sent out an email s couple hours ago so um with Andreas' help um Andreas put together a sort of no frills recognizer which is uh gender dependent but like no adaptation, no cross word models, no trigrams [disfmarker] a bigram recognizer and that's trained on Switchboard which is telephone conversations. Um and thanks to Don's help wh who [disfmarker] Don took the first meeting that Jane had transcribed and um [vocalsound] you know separated [disfmarker] used the individual channels we segmented it in into the segments that Jane had used and uh Don sampled that so [disfmarker] so eight K um and then we ran up to I guess the first twenty minutes, up to synch time of one two zero zero so is that [disfmarker] that's twenty minutes or so? Um yeah because I guess there's some, [speaker005:] Or so. [speaker003:] and Don can talk to Jane about this, there's some bug in the actual synch time file that ah uh I'm [disfmarker] we're not sure where it came from but stuff after that was a little messier. Anyway so it's twenty minutes and I actually [speaker005:] Hmm. [speaker003:] um [speaker005:] I [disfmarker] was that [disfmarker] did that [disfmarker] did that recording have the glitch in the middle? 
[speaker007:] I'm puzzled by that. I [disfmarker] oh [disfmarker] oh, I see. [speaker003:] There's [disfmarker] there's a [disfmarker] [speaker007:] Oh there was a glitch somewhere. [speaker003:] yeah, so that actually um [speaker006:] Was it twenty minutes in, [speaker007:] I forgot about that. [speaker003:] if it was twenty minutes in then I don't know [speaker006:] I thought [disfmarker] [speaker007:] Well, I mean, they [disfmarker] [speaker001:] Well it was interesting, [speaker007:] but I was able to can transcribe [speaker005:] I don't remember when it is. [speaker001:] suddenly [disfmarker] the [disfmarker] the overall error rate when we first ran it was like eighty percent but i looking at [disfmarker] the first sentences looked much better than that and then suddenly it turned very bad and then we noticed that the reference was always one off with the [disfmarker] [speaker006:] Yeah, that might be [disfmarker] that might be [disfmarker] that might be my fault. [speaker003:] Wel [speaker001:] it was actually recognized [speaker005:] Oh no. [speaker007:] Wow. [speaker001:] so [speaker006:] I'm not [disfmarker] [speaker005:] Oh so that was just a parsing mismatch. [speaker001:] OK. [speaker003:] No actually it was [disfmarker] yeah i it was a complicated bug because they were sometimes one off and then sometimes totally random so [speaker007:] Oh. [speaker006:] yeah, I was pretty certain that it worked up until that time, [speaker007:] That's not good. [speaker003:] um Yeah [speaker006:] so [speaker003:] so that's what we have [speaker001:] OK. [speaker005:] Alright. [speaker007:] Yeah. [speaker003:] but that [disfmarker] that will be completely gone if this synch time problem [speaker005:] The [disfmarker] the glitch [speaker001:] So [disfmarker] so we have everything recognized but we scored only the first uh whatever, up to that time to [speaker007:] And the only glitch [disfmarker] yeah. [speaker005:] yeah. [speaker003:] So you guys know. 
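The scoring bug described here, where the references ended up one segment off against the hypotheses, is easy to reproduce with a minimal word-error-rate computation. This Levenshtein-based scorer is a generic sketch, not the actual scoring tool the group used:

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + insertions + deletions) / len(ref),
    computed with the standard edit-distance recurrence over word tokens."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(r)][len(h)] / len(r)
```

Scoring each hypothesis against the wrong (adjacent) reference segment pushes the averaged rate toward 100% even when the recognition itself is good, which matches the sudden jump to roughly eighty percent described in the discussion.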
[speaker002:] S sorry I haven't seen the email, [speaker003:] Yeah. [speaker005:] Th the [speaker007:] The [disfmarker] the [disfmarker] well [disfmarker] wait [speaker003:] So here's the actual copy of the email [speaker002:] what was the score? [speaker007:] we should say something about the glitch. He [disfmarker] he can say something about the glitch. [speaker003:] um oh OK [speaker005:] yeah. [speaker007:] Cuz it's [disfmarker] it's [disfmarker] it's [disfmarker] h it's [disfmarker] it's very small [disfmarker] [speaker003:] so does this glitch occur at other [disfmarker] [speaker005:] There [disfmarker] there [disfmarker] there's an acoustic glitch that occurs where um the channels get slightly asynchronized [speaker007:] very small. Yep. [speaker003:] Oh. Right. [speaker005:] so the [disfmarker] that [disfmarker] that problem has gone away in the original driver believe it or not when the SSH key gen ran the driver paused for a fraction of a second [speaker001:] Mmm. [speaker006:] Hmm. [speaker002:] Hmm. [speaker005:] and so the channels get a little asynchronous and so if you listen to it in the middle there's a little part where it starts doing [disfmarker] doing click sounds. [speaker002:] So [disfmarker] [speaker003:] And is it only once that that happens? [speaker005:] But yeah it [disfmarker] [speaker003:] OK. [speaker005:] right once in the middle. [speaker003:] There's [disfmarker] the previous page has some more information about sort of what was wrong [speaker002:] so [disfmarker] [speaker003:] but [speaker005:] Um But that shouldn't affect anything [speaker002:] so un unsurprisingly Adam is the golden voice, [speaker007:] S [speaker003:] OK so that's actually [speaker007:] and it [disfmarker] [speaker002:] you see this here? 
[speaker005:] yeah yeah [speaker003:] It [disfmarker] y it's [disfmarker] [speaker005:] "bah" [speaker003:] OK no [disfmarker] [speaker001:] Oh, and [disfmarker] [speaker003:] What happens is it actually affects the script that Don [disfmarker] [speaker004:] Huh. [speaker003:] I mean if we know about it then I guess it could always be checked for it [speaker005:] Well the acoustic one shouldn't do anything. [speaker003:] but they [speaker006:] Yeah, I don't know exactly what affected it [speaker007:] I agree. I agree. [speaker006:] but I'll [disfmarker] I'll talk to you about it, [speaker001:] I [disfmarker] I have [disfmarker] [speaker005:] But I [disfmarker] I do remember [disfmarker] [speaker001:] Yeah. [speaker006:] I'll show you the point. [speaker007:] Yeah. It [disfmarker] it had no effect on my transcription, [speaker003:] Yeah. [speaker007:] you know, I mean I [disfmarker] I had no trouble hearing it and [disfmarker] and having time bins [speaker001:] Mmm. [speaker007:] but there was a [disfmarker] [speaker005:] I do remember seeing once the transcriber produce an incorrect XML file where one of the synch numbers was incorrect. [speaker007:] Oh. [speaker006:] That's what happened. [speaker003:] Well, the [disfmarker] the synch time [disfmarker] the synch numbers have more significant digits than they should, [speaker007:] Oh. [speaker008:] Yeah. [speaker006:] There was [disfmarker] [speaker005:] Where [disfmarker] where they weren't monotonic. [speaker006:] yeah, I mean [disfmarker] [speaker003:] right? There's things that are l in smaller increments than a frame. [speaker008:] Yeah. [speaker007:] Oh, interesting. [speaker005:] Oh OK so that's [speaker006:] Hmm. [speaker003:] And so then, I mean you look at that and it's got you know more than three significant digits in a synch time then that can't be right [speaker007:] Oh. [speaker005:] yeah sounds like a bug. [speaker007:] Yeah. [speaker001:] Mmm. 
[speaker003:] so anyway it's [disfmarker] it's just [disfmarker] that's why we only have twenty minutes but there's a significant amount of [disfmarker] [speaker006:] Non zero? Um there are like more [disfmarker] cuz there's a lot of zeros I tacked on just because of the way the script ran, [speaker005:] The other one I saw was that it [speaker006:] I mean but there were there was a point. [speaker005:] yeah. [speaker003:] Yeah that was fine. [speaker005:] The other one I saw was non non monotonic synch times [speaker003:] That [disfmarker] that was OK. [speaker006:] OK. [speaker005:] and that definitely indicra indicates a bug. [speaker006:] Uh. [speaker003:] Well that would really be a problem, yeah. So anyway these are just the ones that are the prebug for one meeting. [speaker006:] Yeah. [speaker003:] um and what's [disfmarker] which [disfmarker] [speaker005:] So that's very encouraging. [speaker003:] this is really encouraging cuz this is free recognition, [speaker008:] Yeah. [speaker002:] Hmm. Cool. [speaker003:] there's no I mean the language model for Switchboard is totally different [speaker001:] Mmm. [speaker003:] so you can see some like this Trent Lott which [speaker004:] Trent Lott. [speaker003:] um I mean these are sort of funny ones, [speaker004:] It'll get those though. [speaker003:] there's a lot of perfect ones and good ones and all the references, I mean you can read them and when we get more results you can look through and see [speaker005:] I and as I said I would like to look at the lattices [speaker001:] Mm hmm. [speaker003:] but um it's pretty good. [speaker005:] because it sounded like even the ones it got wrong it sort of got it right? [speaker003:] Well so I guess we can generate [speaker005:] Sounds likes? [speaker007:] Mm hmm. [speaker001:] There are a fair number of errors that are, you know where [disfmarker] got the plural S wrong or the inflection on the verb wrong. [speaker003:] um [speaker005:] Yeah, and who cares? 
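The two XML symptoms mentioned here, sync times carrying more significant digits than a frame allows and sync times that go backwards, could be screened for with a check along these lines. The 10 ms frame size is an assumption for illustration:

```python
def check_sync_times(times, frame=0.01, tol=1e-9):
    """Return a list of (index, time, problem) tuples for a sequence of
    sync times: values not on a frame boundary (too many significant
    digits) and values that decrease (non-monotonic)."""
    problems = []
    for i, t in enumerate(times):
        # off-grid time: t should be an integer multiple of the frame
        if abs(t / frame - round(t / frame)) > tol:
            problems.append((i, t, "sub-frame precision"))
        if i > 0 and t < times[i - 1]:
            problems.append((i, t, "non-monotonic"))
    return problems
```

Either kind of hit would flag a transcript for inspection before it went through the segmentation and scoring scripts, rather than surfacing twenty minutes in as a mysterious alignment failure.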
And [disfmarker] and there were lots of [disfmarker] of course the "uh uh" s, "in on" s "of uh" s. [speaker001:] Mmm, so if [disfmarker] [speaker003:] there's [disfmarker] No those are actually [speaker001:] Yeah. [speaker003:] a lot of the errors I think are out of vocabulary, so is it like PZM is three words, [speaker001:] Mm hmm. [speaker003:] it's PZM, [speaker001:] Mm hmm. [speaker003:] I mean there's nothing [speaker005:] Right. [speaker003:] There's no language model for PZM or [speaker005:] Ri ri right. [speaker003:] um [speaker005:] Did you say there's no language for PZM? [speaker003:] No language model, I mean those [disfmarker] [speaker005:] Do you mean [disfmarker] so every time someone says PZM it's an error? Maybe we shouldn't say PZM in these meetings. [speaker003:] Well [disfmarker] well there's all kinds of other stuff like Jimlet and I mean um anyway there [disfmarker] [speaker005:] Yeah, that's right, Jimlet. [speaker002:] Well, we don't even know what that means, [speaker003:] so [vocalsound] but this is really encouraging because [speaker005:] Yeah, that's right. [speaker002:] so I [speaker003:] so, I mean the bottom line is even though it's not a huge amount of data um it should be uh reasonable to actually run recognition and be like within the scope of [disfmarker] of r reasonable s you know Switchboard this is like h about how well we do on Switchboard two data with the Switchboard one trained [disfmarker] mostly trained recognizer [speaker005:] Right. [speaker003:] and Switchboard two is [disfmarker] got sort of a different population of speakers and a different topic [speaker005:] Excellent. [speaker003:] and they're talking about things in the news that happened after Switchboard one so there was [@ @] so that's great. [speaker002:] Yeah. Yeah so we're in better shape than we were say when we did [disfmarker] had the ninety three workshop [speaker003:] Um [speaker002:] and we were all getting like seventy percent error on Switchboard. 
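The out-of-vocabulary point above (PZM decodes as three words because it is not in the language model, likewise Jimlet) suggests a quick OOV screen on new transcripts before scoring; the word lists in the sketch are made up for illustration:

```python
def oov_rate(transcript_words, vocabulary):
    """Fraction of transcript tokens not covered by the recognizer's
    vocabulary, plus the distinct OOV forms. Each OOV token is a
    near-guaranteed recognition error before acoustics even matter."""
    vocab = set(w.lower() for w in vocabulary)
    oov = [w for w in transcript_words if w.lower() not in vocab]
    return len(oov) / len(transcript_words), sorted(set(oov))
```

Running this over a meeting transcript would separate the errors that vocabulary or dictionary additions can fix from the ones that need acoustic or language-model adaptation.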
[speaker001:] Mm hmm. [speaker003:] Oh yeah I mean this is really, [speaker002:] you know [speaker001:] Mmm. [speaker003:] and thanks to Andreas who, I mean this is a [speaker001:] Mmm. [speaker005:] Well especially for the very first run, I mean you [disfmarker] [speaker002:] Yeah. [speaker003:] eh um [speaker001:] Oh it's the [disfmarker] [speaker002:] Yeah. [speaker005:] the first run I ran of Switchboard I got a hundred twenty percent word error [speaker003:] yeah [speaker005:] but [speaker003:] So and what al also this means is that [speaker007:] Right. [speaker005:] Not Switchboard, [speaker003:] um [speaker005:] uh Broadcast News. [speaker001:] Well it's [disfmarker] [speaker003:] I mean there's a bunch of things in this note to various people especially I guess um with Jane that [disfmarker] that would help for [disfmarker] since we have this new data now uh in order to go from the transcripts more easily to um just the words that the recognizer would use for scoring. I had to deal with some of it by hand but I think a lot of it can be automated s by [disfmarker] [speaker002:] Oh one thing I guess I didn't get so you know the language model was straight from [disfmarker] from bigram from Switchboard the acoustic models were also from Switchboard or [disfmarker] or [speaker001:] Yeah. [speaker003:] Yeah. [speaker007:] That's amazing. [speaker002:] So they didn't have anything from this acoustic data in yet? [speaker005:] Yeah, so that's great. [speaker003:] No. And actually [disfmarker] we actually um used Switchboard telephone bandwidth models [speaker007:] That's amazing. [speaker002:] OK. Yeah. [speaker003:] which I guess [speaker004:] I was just gonna say, [speaker001:] Well that's [disfmarker] those are the only we ones there are, [speaker004:] yeah. [speaker003:] so that's the on that's the only acoustic training data that we have a lot of [speaker005:] Yeah. [speaker001:] I mean Right. 
[speaker003:] and I guess Ramana, so a guy at SRI said that um there's not a huge amount of difference going from [disfmarker] it's [disfmarker] it's not like we probably lose a huge amount [speaker002:] Right. [speaker003:] but we won't know because we don't have any full band models for s conversational speech. [speaker004:] It's probably not as bad as going f using full band models on telephone band speech [speaker003:] So. [speaker001:] Oh yeah. [speaker004:] right? [speaker003:] Right. Right, so it's [disfmarker] so [speaker002:] Yeah, [speaker001:] Yeah. [speaker002:] but for Broadcast News when we [disfmarker] we played around between the two there wasn't a huge loss. [speaker005:] Right, it was not a big deal. [speaker003:] Yeah so I wou [speaker001:] I should [disfmarker] I should say that [disfmarker] the language model is not just Switchboard [speaker003:] so that's good. [speaker005:] Although combining em worked well. [speaker001:] it's also [disfmarker] I mean there's uh actually more data is from Broadcast News but with a little less weight [speaker003:] Yeah. [speaker002:] Uh huh. [speaker003:] Like Trent Lott must have been from [speaker001:] uh because [speaker003:] I guess [vocalsound] Switchboard was before [speaker001:] mm hmm, right. Um By the way just [disfmarker] for fun we also ran, [speaker003:] uh. [speaker002:] Good point. [speaker001:] I mean our complete system starts by doing ge a gender detection [speaker002:] Mm hmm. [speaker001:] so just for the heck of it I ran that [speaker005:] And it said a hundred percent male? [speaker001:] um and it might be reassuring for everybody to know that it got all the genders right. [speaker003:] The j [speaker005:] Oh it did? [speaker001:] Yeah so [speaker007:] Oh that's [disfmarker] I'm glad. [speaker005:] It got all two genders? [speaker003:] Yeah but you know Jane and Adam have you kn about equal performance [speaker001:] Yeah. Yes. 
[speaker003:] and uh and that's interesting cuz I think the [disfmarker] their language models are quite different so and I [disfmarker] I'm pretty sure from listening to Eric that, you know given the words he was saying and given his pronunciation that the reason that he's so much worse is the lapel. [speaker005:] Right. [speaker007:] That makes a lot of sense, [speaker002:] Yeah. [speaker003:] So it's nice now if we can just sort of eliminate the lapel one when [disfmarker] when we get new microphones [speaker007:] yeah. Very possible. [speaker002:] Yeah I [disfmarker] I [disfmarker] I would bet on that too [speaker003:] that would be worth it um [speaker002:] cuz he certainly in that [disfmarker] when as a [disfmarker] as a BeRP user he was [disfmarker] he was a pretty uh strong one. [speaker005:] Sheep. [speaker003:] Yeah he [disfmarker] he [disfmarker] he sounded to me just from [disfmarker] he sounded like a, [speaker002:] Yeah. [speaker003:] what's it a sheep or a goat? [speaker005:] A sheep. Baah. [speaker002:] Sheep. [speaker003:] Sheep, right. [speaker002:] Yeah. Sheep is good. [speaker003:] Sounded good. [speaker002:] Yeah. [speaker003:] Right so um so I guess the good news is that [speaker007:] Mm hmm. [speaker003:] and [disfmarker] and again this is without a lot of the sort of bells and whistles that we c can do with the SRI system and we'll have more data and we can also start to maybe adapt the language models once we have enough meetings. So this is only twenty minutes of one meeting with no [disfmarker] no tailoring at all. [speaker001:] I mean clearly there are um with just a small amount of uh actual meeting transcriptions uh thrown into the language model you can probably do quite a bit better [speaker003:] Yeah. [speaker001:] because the [disfmarker] [speaker005:] Or just dictionary. [speaker003:] The voca the vocabulary especially yeah. Yeah, so.
[speaker001:] Not that much the vocabulary actually I think [disfmarker] um well we have to see but [disfmarker] it's uh [disfmarker] [speaker003:] Yeah. It's pretty good um so then [speaker002:] Have to add PZM and so on [speaker005:] And I have to try it on the far field mike [speaker002:] but [speaker005:] yeah. [speaker003:] PZM and then there's things like for the transcription I got when someone has a digit in the transcript I don't know if they said, you know one one or eleven and I don't know if they said Tcl or TCL. there's things like that where, you know the um we'll probably have to ask the transcribers to indicate some of those kinds of things but in general it was really good and I'm hoping [disfmarker] and this is [disfmarker] this is good news because that means the force alignments should be good and if the force alignments, I mean it's good news anyway but if the force alignments are good we can get all kinds of information. For example about, you know prosodic information and speaker overlaps and so forth directly from the aligned times. Um so that'll be something that actually in order to assess the forced alignment um we need s some linguists or some people to look at it and say are these boundaries in about the right place. Because it's just gonna give us time marks [speaker005:] Well we've done that for one meeting. [speaker004:] But you know [disfmarker] [speaker003:] so. For forced alignment. [speaker005:] Uh oh oh f not for words [speaker003:] Ye right. [speaker005:] I'm sorry just for overlaps is we did it for not [disfmarker] not for words. [speaker003:] Right. So this would be like if you take the words um you know and force align them on all the individual close talk uh close talking mikes then how good are these sort of in reality [speaker005:] Right. [speaker003:] and then I was thinking it [disfmarker] [speaker005:] So we might want to take twenty minutes and do a closer word level transcription. 
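Assessing the forced alignment as described above reduces to comparing the recognizer's time marks against a linguist's hand-marked boundaries. A minimal sketch of that comparison; the function name, the 50 ms tolerance, and the numbers in the example are all illustrative, not part of any existing tool:

```python
def boundary_deviation(ref_times, hyp_times, tol=0.05):
    """Compare forced-alignment word boundaries (in seconds) against hand marks.

    Assumes both lists mark the same boundaries in the same order.
    Returns (mean absolute deviation, max absolute deviation,
    fraction of boundaries within `tol` seconds).
    """
    if len(ref_times) != len(hyp_times):
        raise ValueError("boundary lists must line up one-to-one")
    devs = [abs(r - h) for r, h in zip(ref_times, hyp_times)]
    within = sum(d <= tol for d in devs)
    return sum(devs) / len(devs), max(devs), within / len(devs)
```

So hand marks for a twenty-minute stretch of one close-talking channel could be scored directly, giving a number for "how far away they are" rather than an impression.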
Maybe actually mark the word boundaries. [speaker003:] Oh or [disfmarker] i have someone look at the alignments uh maybe a linguist who can say um you know roughly if these are OK and how far away they are. [speaker002:] Yeah. [speaker003:] Um but I think it's gotta be pretty good because otherwise the word recognition would be really b crummy. [speaker005:] Right, right. [speaker003:] It wouldn't necessarily be the other way around, if the wor word recognition was crummy the alignment might be OK but if the word recognition is this good the alignment should be pretty good. So that's about it. [speaker004:] I wonder if this is a good thing or a bad thing though, [speaker002:] I r [speaker004:] I mean if we're pr [speaker005:] That we're starting so well? [speaker004:] yeah if we're producing a database that everybody's gonna do well on [speaker002:] Oh [speaker005:] Don't worry about it w d that's that's the close talking mikes. Try it on the P Z Ms and [disfmarker] and [speaker002:] Yeah, which [disfmarker] I would [disfmarker] which [disfmarker] well n n n n [speaker008:] Yeah, yeah, yeah, yeah. [speaker004:] So the real value of the database is these? [speaker005:] Yeah, abso well no but [speaker002:] I mean there's still just the w the percentages and, I mean they're not [disfmarker] a [speaker003:] This i [speaker002:] as we've talked about before there's probably overlaps [speaker003:] yeah. This is not that good. [speaker002:] there's probably overlaps in [disfmarker] in uh in fair number in Switchboard as well so but [disfmarker] but there's other phenomena, it's a meeting, it's a different thing and there's lots of stuff to learn with the close talking mikes but uh yeah certainly I'd like to see as soon as we could, I mean maybe get some of the glitches out of the way but soon as we could how well it does with say with the P Z Ms or maybe even one of the [speaker003:] Right. 
[speaker002:] and uh see if it's, you know is it a hundred twenty percent or maybe it's not maybe if with some adaptation you get this down to fifty percent or forty five percent or something and [disfmarker] and then if for the PZM it's seventy or something like that that's actually something we could sort of work with a little bit [speaker003:] Yeah. No I think it's really, [speaker002:] so [speaker003:] I mean this way we least have a baseline we know that for instance the transcripts are very good so once you can get to the words that the recognizer which is a total subset of the things you need to understand the [disfmarker] the text um yeah they're pretty good so and [disfmarker] and it's converting automatically from the XML to the chopping up the wave forms and so forth it's not the case that the end of one utterance is in the next segment and things like that which we had more problems with in Switchboard so that's good. And um let's see there was one more thing I wanted to [disfmarker] to mention [disfmarker] I can't remember um Sorry can't remember. anyway it's [disfmarker] [speaker007:] Congratulations is really great. [speaker003:] well it was, I mean I really didn't do this myself [speaker002:] Yeah. [speaker005:] Yeah, it's really good. [speaker003:] so Andreas set up this recognizer and [disfmarker] by the way the recognizer all the files I'm moving to SRI and running everything there so I brought back just these result files and people can look at them um so [speaker001:] We [disfmarker] we talked about setting up the SRI recognizer here. That's [disfmarker] you know if [disfmarker] if there are more machines um uh here plus people can [disfmarker] could run their own uh you know variants of [disfmarker] of [disfmarker] of the recognition [pause] runs um certainly doable. Um. [speaker002:] Yeah and [disfmarker] well certainly if the recognition as opposed to training, yeah. Seems reasonable. [speaker007:] I need t Hmm. I need to ask one question. 
[speaker001:] Yeah. Yeah. [speaker007:] Which is um so this issue [vocalsound] of the uh legalistic aspects of the pre sent you know pre adapted [disfmarker] Yeah, well, so what I mean is um the [disfmarker] uh the data that you take into SRI, first [disfmarker] first question, you're maintaining it in [disfmarker] in a place that wouldn't be publicly readable that [disfmarker] that kind of stuff, right? [speaker001:] U um [speaker003:] From the outside world or [speaker007:] By uh people uh who are not associated with this project. [speaker005:] It's human subjects issues, [speaker001:] Oh. [speaker005:] I told you about that. [speaker007:] Exactly. [speaker003:] Um oh. Well OK we have n no names. Although I sh um [speaker005:] That [disfmarker] that's not the issue, it's just the audio data itself, [speaker003:] de audio data itself? [speaker005:] until people have a chance to edit it. [speaker007:] Mm hmm, exactly. [speaker003:] Uh so well I can [disfmarker] I can protect my directories through there. [speaker007:] Yeah. Great. [speaker003:] Right now they're not [disfmarker] they're in the speech group directories which [disfmarker] so I will [disfmarker] I didn't know that actually. [speaker002:] Yeah so we just have to go through this process of having people approve the transcriptions, [speaker003:] Yeah OK. [speaker002:] say it's OK. [speaker007:] Yeah, we had to get them to approve em [speaker003:] Right OK. [speaker007:] and then i cuz [disfmarker] cuz the other question I was gonna ask is if we're having um you know it's but this [disfmarker] this meeting that you have, no problem cuz I [disfmarker] I well I mean I [disfmarker] I speak for myself [speaker005:] It's us. 
[speaker007:] but [disfmarker] but I think that we didn't do anything that but well anyway so [vocalsound] uh I wouldn't be too concerned about it with respect to that although we should clear it with Eric and Dan of course but these results are based on data which haven't had the uh haven't had the chance to be reviewed by the subjects [speaker003:] That's true. [speaker007:] and I don't know how that stands, I mean if you [disfmarker] if you get fantastic results and it's involving [comment] data which [disfmarker] which later end up being lessened by, you know certain elisions, then I don't know but I wanted to raise that issue, that's all. [speaker002:] Well we, I mean once we get all this streamlined it may be sh it [disfmarker] hopefully it will be fairly quick but we get the transcriptions, people approve them and so on [speaker005:] Alright [speaker002:] it's just that we're [speaker005:] we need to work at a system for doing that approval so that we can send people the transcripts [speaker007:] Great. [speaker001:] Mmm. [speaker007:] Yeah. [speaker005:] and get back any bleeps that they want [speaker003:] Yeah actually the bleeps are also an issue I thought. [speaker002:] It's gonna be a rare thing that there's a bleep for the most part. [speaker001:] U uh actually I had a question about the downsampling, um I don't know who, I mean how this was done but is [disfmarker] is there [disfmarker] are there any um [vocalsound] issues with downsampling [speaker003:] Don did this. [speaker001:] because I know that the recognizer um that we use h can do it sort of on the fly um so we wouldn't have to have it eh you know do it uh explicitly beforehand. And is there any um i are there other d sev uh is there more than one way to do the downsampling where one might be better than another? [speaker006:] There are lots of w [vocalsound] there are lots of ways to do the downsampling um different filters to put on, [speaker001:] OK. Right. 
[speaker006:] like anti aliasing stuff. [speaker001:] OK. So [disfmarker] so the [disfmarker] th [speaker005:] I don't think we even know which one I assume you're using syncat to do it? [speaker006:] No, I'm using uh SN SND uh are resample. [speaker005:] Or sound resample? [speaker003:] Re re ref [speaker005:] Resample. [speaker003:] yeah. [speaker005:] Yeah and Dan's archaic acronyms. [speaker006:] RSMP. Yeah, I don't really. [speaker003:] Missing all the vowels. [speaker006:] I just [disfmarker] yeah I found it. [speaker005:] Not all of them. [speaker003:] Some of the vowels, almost all the vowels, that's the hard part. [speaker001:] So [disfmarker] so the other thing we should try is to just take the original wave forms, [speaker005:] And a few of the consonants. [speaker001:] I mean segment them but not downsample them. [speaker003:] Yeah we could [disfmarker] we could try that and [disfmarker] and compare [speaker006:] Yeah, that's [disfmarker] [speaker001:] And [disfmarker] and feed them to [disfmarker] feed them to the SRI recognizer and see if [disfmarker] if the SRI front end does something. [speaker003:] Yeah. [speaker005:] I suspect that's sort of premature optimization, but Sure. [speaker003:] We can try it. I [disfmarker] I only downsampled them first cuz I was [speaker006:] I mean that's just one line [disfmarker] that's one line of code to comment at [speaker001:] Well [disfmarker] [speaker003:] yeah [speaker001:] Right and [disfmarker] and it doesn't [disfmarker] is no more work [vocalsound] for um you know for us. [speaker006:] so [speaker005:] Mm hmm. [speaker006:] Yeah. 
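On the anti-aliasing point: any downsampler has to low-pass filter below the new Nyquist frequency before discarding samples, and the filter design is exactly where implementations can differ. A windowed-sinc sketch of the idea follows; this is not the filter Dan's resample tool actually uses (which is unknown here), just one standard way to do it:

```python
import math

def design_lowpass(num_taps, cutoff):
    """Windowed-sinc FIR low-pass; cutoff is normalized (0 .. 0.5 = Nyquist)."""
    mid = (num_taps - 1) / 2
    taps = []
    for n in range(num_taps):
        t = n - mid
        # ideal low-pass impulse response
        h = 2 * cutoff if t == 0 else math.sin(2 * math.pi * cutoff * t) / (math.pi * t)
        # Hamming window to tame the truncation sidelobes
        h *= 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))
        taps.append(h)
    total = sum(taps)
    return [h / total for h in taps]  # unity gain at DC

def downsample(x, factor, num_taps=101):
    """Anti-alias filter, then keep every `factor`-th sample
    (e.g. 48 kHz -> 16 kHz with factor=3)."""
    # cut off a little below the new Nyquist to leave room for the transition band
    taps = design_lowpass(num_taps, 0.9 * 0.5 / factor)
    half = num_taps // 2
    out = []
    for k in range(0, len(x), factor):
        acc = 0.0
        for n, h in enumerate(taps):
            i = k + half - n
            if 0 <= i < len(x):
                acc += h * x[i]
        out.append(acc)
    return out
```

Different choices of window, tap count, and cutoff give different passband ripple and stopband rejection, which is why "which downsampler" could in principle matter to the recognizer front end.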
[speaker003:] Well they're just bigger to transfer, that's why I s downsampled them before but [speaker001:] Well but they're only twice as big so [speaker003:] Well I mean that was [disfmarker] if it's the same then we can downsample here [speaker001:] I mean it's [disfmarker] it's just a [speaker003:] but if it's [disfmarker] [speaker006:] Although those eighty meg files take a while to copy into my directories so, [speaker003:] Yeah. [speaker006:] but no, I mean it's not [disfmarker] i it wouldn't be a problem if you're interested in it [disfmarker] [speaker003:] We could try that. [speaker006:] it would [disfmarker] [speaker001:] Yeah I mean it would be uh you know it would probably take uh about um you know minus the transfer time it would [disfmarker] it would take uh you know ten minutes to try and [disfmarker] and [disfmarker] and [speaker006:] Yeah. [speaker005:] It's about a fifty minute drive, right? [speaker003:] Well it takes more disk space too [speaker001:] And [disfmarker] and if for some reason we see that it works better then we might investigate why [speaker003:] so I was just [disfmarker] [speaker001:] and, you know, what [disfmarker] [speaker006:] Mmm. [speaker001:] Yeah. [speaker006:] In the front end we could do that. [speaker001:] Yeah. [speaker002:] So you just train [disfmarker] just different filters [speaker006:] Yeah, I [disfmarker] [speaker002:] and so you're just wondering whether the filter is [speaker006:] Yeah, I can imagine it would be [disfmarker] [speaker001:] Right. Right. [speaker006:] I mean I guess there's some [disfmarker] [speaker003:] So we could try that with this particular twenty minutes of speech and sort of see if there's any differences. [speaker001:] You know a at some point someone might have optimized whatever filtering is done for the actual recognition um performance. [speaker006:] Hmm. [speaker001:] So in other words [speaker002:] Right. 
[speaker001:] right, [speaker005:] It just seems to me that, you know small changes to the language model and the vocabulary will so swamp that that it may be premature to worry about that. [speaker001:] so [speaker005:] I mean so one is a half a percent better than the other I don't think that gives you any information. [speaker003:] Well it's just as easy to [disfmarker] to give you the sixteen K individual, [speaker005:] Yep. [speaker003:] it was just more disk space you know for storing them [speaker002:] Are you [disfmarker] are you using uh uh mel cepstrum or PLP over there? [speaker003:] so [speaker001:] Mel cepstrum. [speaker002:] So probably doesn't matter. [speaker003:] Well we could try. [speaker006:] There's [disfmarker] there's your answer. [speaker002:] But [disfmarker] but it wouldn't hurt to try, [speaker003:] Could easily try [speaker001:] That's what I would assume but you never know, [speaker002:] yeah. [speaker003:] so [speaker001:] you know. [speaker002:] Sure. [speaker007:] Just [disfmarker] Mm hmm. [speaker002:] No the reason I say this PLP uses uh auto regressive filtering and uh modeling and so it can be sensitive to the kind of filtering that you're doing [speaker001:] Mm hmm. [speaker002:] but uh uh mel cepstrum uh might not [disfmarker] b you wouldn't expect to be so much but [speaker003:] Well we can try it if you generate like the same set of files just up to that point where we stopped anyway and just sti stick them somewhere [speaker006:] Yeah, it's [disfmarker] it's really not a problem. [speaker003:] and I'll rerun it with [speaker001:] Actually, no. Don't stop. [speaker006:] Keep going. [speaker001:] Don't stop at that part because we're actually using the entire conversation to estimate the speaker parameters, [speaker006:] Yeah. [speaker001:] so shouldn't use [disfmarker] you should s you know, get [speaker006:] Yeah, I mean I'll [disfmarker] I have to do is eh e the reference file would stay the same, [speaker003:] OK. 
[speaker001:] Right. [speaker006:] it's just the individual segments would be approximately twice as long [speaker001:] Mmm. [speaker003:] Right. Right. [speaker006:] and I could just replace them with the bigger ones in the directory, that's not a problem. [speaker005:] Yeah. [speaker001:] Right. [speaker003:] I mean I corrected all [disfmarker] I mean I hand edited the whole [disfmarker] the whole meeting so that can be run it's just [disfmarker] Once we get the [disfmarker] the bug out. [speaker001:] Mmm. [speaker007:] One [disfmarker] one question which is I [disfmarker] I had the impression [comment] from this [disfmarker] from this meeting that w that I transcribed that um that there was already automatic downsampling occurring, [speaker001:] Yeah. Mm hmm. [speaker007:] is that I thought that in order to [speaker005:] Yep. [speaker007:] so it was [disfmarker] so it's like there's already down [speaker005:] There's one level that's already happening right here. [speaker007:] OK. [speaker002:] This is being recorded at forty eight kilohertz. Which is more than anybody needs [speaker006:] Oh. [speaker005:] Right. And it gets downsampled to sixteen. [speaker007:] OK. [speaker003:] And that's actually said in your meeting, [speaker002:] so [speaker006:] Hmm. [speaker007:] Oh OK. [speaker003:] that's how I know that. [speaker007:] That's exactly, and that's how I know it. [speaker002:] Yeah. [speaker003:] I [disfmarker] I [vocalsound] It's like are we downsampling to sixteen? [speaker002:] It's a digital audio orientation for the board [speaker003:] Right. [speaker002:] it's in the monitor so it's [speaker001:] Mmm. [speaker003:] Thank God it's not [vocalsound] more than that. [speaker005:] So [disfmarker] [speaker006:] Is eight kilohertz [disfmarker] is [disfmarker] is eight kilohertz generally accepted as like standard for voice? [speaker002:] Yeah. [speaker005:] And I have no idea what filter it's using, so Telephone. [speaker002:] For telephone stuff.
[speaker004:] Telephone. [speaker006:] Yeah that's what I was gonna say, I mean like [disfmarker] so [speaker002:] So it's [disfmarker] it's [disfmarker] it's just that they were operating from Switchboard which was a completely telephone database [speaker006:] Oh, I see, so. OK. [speaker002:] and so that was a standard for that sixteen s [speaker005:] So sixteen seems to be pretty typical for with this sort of thing. [speaker006:] Right. [speaker002:] Sixteen is more common for [disfmarker] for uh broadband stuff that isn't [disfmarker] [speaker005:] That isn't music. [speaker002:] that isn't music and isn't telephone, [speaker003:] And I guess if you're comparing like [disfmarker] uh if you wanna run recognition on the PZM stuff you would want you don't want to downsample the wh that [speaker002:] yeah. [speaker005:] Why is that? [speaker003:] right? [speaker002:] I don't know. [speaker003:] Well I don I mean if it's any better [speaker002:] No actually I would think that you would [disfmarker] you would get better [disfmarker] you'd get better high frequencies in the local mike. [speaker005:] All the way around I'd think. [speaker002:] Uh but who knows? [speaker003:] Yeah [speaker005:] Yeah. [speaker002:] I mean we do [vocalsound] we [disfmarker] we [disfmarker] we [disfmarker] we [disfmarker] we wanna find all this stuff out, [speaker003:] well we could try it. [speaker002:] we don't know. [speaker005:] We're gonna have plenty of low frequency on the P Z Ms with the fans. [speaker003:] OK. Yeah. [speaker002:] Uh yeah. Yeah. 
[speaker003:] Oh yeah there was just one more thing I wanted to say which is totally unrelated to the recognition except that um well [disfmarker] well it's sort of related but um good news also uh I got [disfmarker] well Chuck Fillmore agreed to record meetings but he had too many people in his meetings and that's too bad cuz they're very animated and but uh Jerry also agreed so uh we're starting on [disfmarker] on [speaker001:] They're less animated. [speaker003:] Well but he has fewer [disfmarker] he [disfmarker] he won't have more than eight and it's a meeting on even deeper understanding, EDU, so that sounds interesting. [speaker005:] Dot EDU? [speaker003:] As a complement to our front end meeting and um so that's gonna start Monday and one of the things that I was realizing is um it would be really great if anyone has any ideas on some kind of time synchronous way that people in the meeting can make a comment to the person who's gonna transcribe it or [disfmarker] or put a [vocalsound] push a button or something when they wanna make a note about "oh boy you should probably erase those last few" or uh "wait I want this not to be recorded now" or uh something like that s [speaker002:] Weren't we gonna do something with a pad at one point? [speaker007:] The cross pads? [speaker005:] Yeah, we could do it with the cross pads. [speaker003:] Cuz I was thinking you know if [disfmarker] if the person who sets up the meeting isn't there and it's a group that we don't know um and this came up talking to [disfmarker] to Jerry also that you know is there any way for them to indicate [disfmarker] to make sure that the qu request that they have that they make explicitly get addressed somehow [speaker002:] Yeah.
[speaker003:] so I don't know if anyone has ideas or [disfmarker] you could even write down "oh it's about three twenty five and" [disfmarker] [speaker002:] Well what I was just suggesting is [disfmarker] is we have these [disfmarker] this cross pad just for this purpose [speaker005:] Yeah, and use that. Not a bad idea. [speaker002:] and just use that [speaker003:] That would be great. [speaker002:] and if we sync it in [disfmarker] The other thing is eh [speaker003:] That be great. [speaker002:] I don't know if you know this or if it's a question for the mail to Dan but is this thing of two eight channel boards a maximum for this setup or could we go to a third board? [speaker005:] I don't know. I don't know. I'll send mail to Dan and ask. I [disfmarker] I think that it's the maximum we can do without a lot of effort because it's one board with two digital channels. [speaker002:] Oh it is one board. [speaker005:] E eight each. So it [disfmarker] it takes two fibers in to the one board. And so w I think if we wanna do that [disfmarker] more than that we'd have to have two boards, and then you have the synchronization issue. [speaker002:] But that's a question because that would [disfmarker] if it was possible cuz it is i you know already we have a [disfmarker] a [disfmarker] a group of people in this room that cannot all be miked [speaker005:] Right. [speaker002:] and it's not just cuz we haven't been to the store, right it's [disfmarker] [speaker004:] What is the limit on each of those f fiber channels, is it the [speaker005:] Eight. [speaker004:] It just [disfmarker] it's eight channels come in, [speaker005:] It's eight. [speaker004:] does it have to do with the sampling rate? [speaker005:] I have no idea. But each [disfmarker] each fiber channel has eight [disfmarker] eight channels and there are two ch two fibers that go in to the card.
[speaker002:] It might be a hard limitation, [speaker005:] So [speaker002:] I mean one thing is it [disfmarker] the whole thing as I said is [disfmarker] is all structured in terms of forty eight kilohertz sampling so that pushes requirements up a bit [speaker005:] Yeah. [speaker004:] I was just wondering if [disfmarker] if that could change. [speaker002:] but [speaker005:] I mean then we'd also have to get another ADD and another mixer and all that sort of stuff. [speaker004:] If we could drop that. [speaker002:] Yeah. [speaker005:] So I [disfmarker] I'll send a mail to Dan and ask him. [speaker002:] Yeah. [speaker005:] OK on the uh are we done with that? So the oth topic is uh getting more mikes and different mikes, so I got a quote um We can fit [disfmarker] we have room for one more wireless and the wireless, this unit here is three fifty [disfmarker] three hundred fifty dollars, it [disfmarker] I didn't realize but we also have to get a tuner [disfmarker] the receiver [disfmarker] the other end, that's uh four thirty um and then also [speaker004:] Wow. [speaker003:] For [disfmarker] for each? I mean the tuner is four thirty for each. [speaker005:] Yep. [speaker003:] Wow. [speaker005:] And we just need one more so [disfmarker] so [speaker002:] Yeah at least w we got the good ones. [speaker005:] Yeah. So that's you know something like seven hundred eighty bucks for one more of these. [speaker002:] Yeah. OK. [speaker005:] Um and then also um It turns out that the connector that this thing uses is proprietary of Sony [speaker004:] Oh. [speaker005:] believe it or not and Sony only sells this headset. [speaker007:] Mmm. [speaker005:] So if we wanna use a different set [disfmarker] headset the solution that the guy suggested and they [disfmarker] apparently lots of people have done is Sony will sell you the jack with just wires coming out the end and then you can buy a headset that has pigtail and solder it yourself. 
And that's the other solution and so the jacks are forty bucks apiece and the [disfmarker] he recommended um a Crown CM three eleven AE headset for two hundred bucks apiece. [speaker002:] There isn't this some sort of thing that plugs in, you actually have to go and do the soldering yourself? [speaker005:] Becau the reason is the only [disfmarker] only thing you can get that will plug into this is this mike or just the connector. [speaker002:] No I understand. The reason I ask is these sort of handmade uh wiring jobs fall apart in use so the other thing is to see if we can uh get them to do a custom job and put it together for this. [speaker005:] Oh I'm sure they would, they would just charge us, [speaker004:] Well, and they'd probably want quantity too, [speaker005:] so. [speaker004:] they'd [speaker002:] Well no they'll just charge us more, so it's [disfmarker] this [speaker004:] Mmm. [speaker005:] So [disfmarker] so my question is should we go ahead and get na nine identical head mounted Crown mikes? [speaker002:] Not before having one come here and have some people try it out. [speaker005:] OK. [speaker002:] Because there's no point in doing that if it's not gonna be any better. [speaker005:] So why don't we get one of these with the Crown with a different headset? [speaker002:] Yeah. [speaker005:] And [disfmarker] and see if that works. [speaker002:] And see if it's preferable and if it is then we'll get more. [speaker003:] Comfort. [speaker005:] Yeah. [speaker002:] Yeah. [speaker003:] Cuz I mean I think the microphones are OK it's just the [disfmarker] the [speaker005:] Right, it's just they're not comfortable to wear. [speaker002:] Right. [speaker003:] Could make our own headbands and [speaker005:] Um, and he said they don't have any of these in stock but they have them in LA and so it will take about a week to get here. [speaker002:] Yeah well it's [disfmarker] [speaker005:] Um so OK to just go order? [speaker002:] We're in this for the long term, yeah.
Just order it. [speaker003:] It's a lot of money for a headband. [speaker005:] OK and who is the contact if I wanna do an invoice [speaker006:] Yeah. [speaker005:] cuz I think that's how we did it before. [speaker002:] Uh we'll do this off line, yeah. [speaker006:] It's a long time to get from LA. [speaker005:] OK. And then nine channels is the maximum we can do, so. [speaker002:] Uh y right [speaker005:] Without getting more stuff. [speaker002:] cuz [disfmarker] so one is for the daisy chain so that's fifteen instead of sixteen and there's six on the table [speaker005:] Right. [speaker002:] so that's nine. [speaker003:] Can I ask a really dumb question? [speaker002:] Yeah. [speaker005:] Probably. [speaker003:] Is [disfmarker] [vocalsound] is there any way we can have you know like a [disfmarker] a wireless microphone that you pass around to the people who you know the extra people for the times they wanna talk that [disfmarker] I mean [disfmarker] [speaker002:] That's a good idea. That's not a dumb question, it's a good idea, [speaker003:] Well I mean [disfmarker] [speaker001:] Like uh like you know Jerry Springer thing, [speaker005:] I'm just not sure how we would handle that in the [speaker002:] yeah. [speaker006:] That's like the Conch. [speaker003:] Well but [disfmarker] [speaker004:] Like at conferences [speaker003:] well but there might be a way to say that there are gonna be these different people [speaker001:] you know r [speaker006:] See, look. [speaker004:] so nail the chairs down. [speaker003:] um and I don't know identifying somehow? [speaker001:] Yeah. [speaker005:] Yeah, somehow. [speaker003:] You know I was just thinking of Jerry Springer. [speaker005:] It's not a bad idea. [speaker002:] No that [disfmarker] no [disfmarker] no that's a very [disfmarker] if we can't get another board and even if we can I have a feeling there'll be some work. [speaker003:] I mean for the few times that you might wanna have that. [speaker004:] The Springer mike.
[speaker002:] Let's figure that we have eight which are set up and then there's a ninth which is passed around to [disfmarker] [speaker005:] A hand held, yeah. [speaker002:] that's a good idea [speaker004:] Infinite expansion. [speaker002:] Right. Kind of rules out overlap but [disfmarker] [vocalsound] but uh [speaker003:] Well or also for you know if people are not [speaker002:] Yeah. [speaker005:] Well we could just hand around the lapel. [speaker002:] Uh no [disfmarker] [speaker005:] Rather than get a [disfmarker] [speaker002:] no that's [disfmarker] [speaker005:] do you want a handset? [speaker003:] No not the lapel. [speaker002:] No. [speaker005:] Well I mean is the [disfmarker] is the hand held really any better? [speaker004:] Liz hates the lapel. [speaker002:] Yes. [speaker005:] OK. [speaker003:] I don't know but I d I know the lapel is really suboptimal. [speaker002:] No it [disfmarker] [speaker005:] Is awful? [speaker002:] no it depends on the hand held but hand [disfmarker] many hand helds are built wi with sort of uh anti shock sort of things so that it [disfmarker] it is less uh susceptible to hand noises. [speaker004:] Mm hmm. [speaker002:] If you hold the lapel mike i you just get all k sorts of junk. [speaker003:] Right. [speaker005:] OK. [speaker003:] I mean the ones they really pass around must be sort of OK. [speaker002:] so [speaker005:] So I wonder if they have one that will hook up. [speaker002:] Yeah. They have [disfmarker] What? [speaker005:] I wonder if they have one that will hook up to this or whether again we'll have to wire it ourselves. [speaker004:] Well, you wouldn't want it to hook there you'd just want it to hook into the receiver in the other room, right? [speaker002:] No that's uh [disfmarker] you need a transmitter. [speaker005:] What? [speaker004:] Is th isn't that built into the mike? [speaker002:] Oh I see. Get a [disfmarker] get a different radio, yeah. 
[speaker003:] Yeah just these ones that they pass around with no you know wireless [speaker002:] Yeah. But you need a ra but it has to correspond to the receiver. [speaker004:] Have a little antenna coming out the bottom. [speaker005:] It's gonna be much easier to get one of these and just plug in a mike, isn't it? [speaker004:] But then the mike has to h [speaker001:] Do you have to hand it around and if you have two pieces of [speaker006:] Yeah. [speaker002:] No no [disfmarker] [speaker003:] Right. [speaker002:] so right, so this is a good point, so yeah you have these [disfmarker] these mikes with a little antenna on the end [speaker005:] OK. [speaker002:] right? [speaker005:] And do you think you would be able to use the same receiver? [speaker002:] I don't know. [speaker005:] OK I'll [disfmarker] I'll ask. [speaker002:] You'll have to check with them, [speaker005:] Yeah. [speaker002:] yeah. [speaker004:] It's just a frequency. [speaker002:] But that's [disfmarker] that's a great idea and then just sort of have that as the [disfmarker] and then you can have groups of twenty people or whatever and [disfmarker] and uh [speaker003:] Yeah because there's only I mean as Andreas pointed out actually I think in the large [disfmarker] the larger the group the less interaction [disfmarker] the less people are talking um over each other [disfmarker] [speaker001:] Pretty soon. [speaker004:] Mmm, yeah. [speaker003:] it just [disfmarker] there might be a lot of people that speak once or twice and [speaker002:] Right. [speaker001:] Um Gotta go. [speaker002:] Off you go, yeah. [speaker005:] OK so I guess people who have to leave can leave and do we have anything else to discuss or should we just do digits? [speaker007:] I [disfmarker] I thought of some extra [disfmarker] a couple of extra things I'd like to mention. [speaker005:] OK. [speaker007:] One of them is to give you a status in terms of the transcriptions so far. 
So um as of last night um I'd assigned twelve hours and they'd finished nine [speaker005:] uh Yep, [speaker007:] and my goal was to have eleven done by the end of the month, I think that by tomorrow we'll have ten. [speaker003:] Uh it's great [disfmarker] [speaker007:] So they're still working. [speaker002:] Pretty close, [speaker004:] Wow. [speaker003:] I j and this [disfmarker] I got this email from Jane at like two in the morning or something [speaker002:] that's good. [speaker005:] that's good. [speaker003:] so it's really great [speaker007:] It's working out, thanks. [speaker003:] It's really great. [speaker007:] Thanks. And then um also an idea for another meeting, which would be to have the transcribers talk about the data It's sort of a [disfmarker] a little bit [disfmarker] a little bit [speaker003:] That's a great idea. [speaker005:] Yep, [speaker002:] Super idea. [speaker005:] that'd be very interesting. [speaker006:] Yeah. [speaker003:] That's a great idea cuz I'd like to g have it recorded so that we can remember all the little things, [speaker005:] I'd love to hear what they have to say. [speaker007:] Yeah. [speaker004:] So if we got them to talk about this meeting, it would be a meta [disfmarker] meta meeting. [speaker003:] that's a great idea. [speaker007:] Yeah. Yeah, exa [vocalsound] exactly I guess [disfmarker] nested several layers, [speaker002:] Now you have eight transcribers and there's ten of us [speaker007:] but [speaker002:] so how do we do this, is the only thing. [speaker003:] Or just have them talk amongst themselves. [speaker004:] Have them have their own meeting. [speaker007:] Well that's what I'm thinking, [speaker003:] And have [speaker007:] yeah. [speaker002:] Oh. [speaker007:] Have them talk about the data and they [disfmarker] and they've made observations to me [speaker003:] that would be great. 
[speaker007:] like they say uh you know this meeting that we think has so much overlap, in fact it does but there are other groups of similar size that have very little, you know it's part of it's [disfmarker] it's the norm of the group and all that and they have various observations that would be fun, I think. [speaker003:] That's a great idea. [speaker005:] Yeah, I'd like to hear what they s say. [speaker007:] Yeah. [speaker003:] Be great. [speaker007:] OK. [speaker002:] So maybe we could [disfmarker] they could have a meeting more or less without us that [disfmarker] to do this and we should record it and then maybe one or two of them could come to one of these meetings and [disfmarker] and could you know could tell us about it. [speaker007:] Yeah. [speaker005:] Give us a status. [speaker003:] Yeah. [speaker007:] Oh good. OK. [speaker002:] Yeah. [speaker007:] What [disfmarker] what [speaker005:] That would be weird. [speaker003:] It's [disfmarker] they will get to transcribe their own meeting but they also get paid for having a break [speaker007:] yeah that's right. [speaker003:] and I think that's a good idea, [speaker007:] Yeah exactly, yeah. [speaker003:] get them involved. [speaker002:] Yeah. [speaker007:] Great. [speaker003:] Um that's a great idea. [speaker007:] Great. [speaker002:] Super. [speaker003:] I'm really sorry I have to g no I have to go as well. [speaker002:] OK. [speaker007:] And then I wanted to also um say something about the Fiscus uh uh John [disfmarker] John Fiscus visit tomorrow. 
And Which is to say that w it'll be from nine to one that I'm going to uh uh offer the organization [disfmarker] allow him to uh adjust it if he wishes but to be basically in three parts, the acoustic part coming first which would be basically the room engineering aspects um other things and he'll be also presenting what NIST is doing and [disfmarker] and uh then uh number two would be sort of a the [disfmarker] the transcription process so this would be a focus on like presegmentation and the modifications to the [disfmarker] the multitrans interface which allows more refined encoding of the beginnings and ends of the overlapping segments which uh Dave Gelbart's been doing and then um uh and of course the presegmentation Thilo's been doing and then um the third part would [disfmarker] and again he has some stuff that's i relevant with respect to NIST and then the third one would be focus on transcription standards so at NIST he's interested in this establishment of a global encoding standard I guess I would say and I want it, you know k yeah see what they're doing and also present what [disfmarker] what we've chosen as ours and [disfmarker] and discuss that kind of thing. And so but he's only here until until one and actually we're thinking of noon being uh lunch time so basically hoping that we can get as much of this done as possible before noon. S [speaker002:] OK. [speaker007:] And everybody who wants to attend is welcome. So yeah. [speaker005:] Oh, where you're gonna meet? [speaker007:] Here mostly but I've also reserved the BARCO room um eh to figure out how that works in terms of like maybe having a live demonstration. [speaker002:] OK but the nine o'l nine o'clock will be i be in here. [speaker007:] Yeah. Mm hmm. [speaker002:] Yeah, OK. [speaker005:] I assume we're not gonna try to record it? [speaker007:] Oh I think that would be hard, yeah. [speaker002:] Yeah, I think just adds [disfmarker] [speaker005:] Alright. [speaker007:] Yeah. [speaker002:] Um good.
[speaker007:] Thank you though, uh huh. [speaker002:] So maybe do digits and recess? [speaker007:] Yeah. [speaker005:] Unless there's anything else? [speaker007:] Yeah. [speaker004:] Do digital ones? [speaker002:] Uh OK. [speaker007:] Yeah. [speaker005:] Uh should y we make him wear Andreas' mike or would that just be too confusing? [speaker002:] Yeah. No I don't think it's confusing. Well, it doesn't confuse me. [speaker007:] When we do this in the key [disfmarker] in the key [disfmarker] in the key it has to indicate that channel change, [speaker004:] Does it mess up the forms? [speaker007:] right? [speaker005:] Uh yeah I just don't know how we would do that, so. [speaker007:] Well i have a time mark. [speaker005:] I mean other than free [disfmarker] free form. [speaker004:] The on switch is here on the [disfmarker] on the top there. [speaker007:] Yeah. [speaker002:] OK. [speaker005:] And just clip it to your collar. [speaker002:] That's fine. [speaker009:] OK, my name is uh Espen Eriksen. I'm a Norwegian. Um uh this is my second semester at Berkeley. Currently I'm taking uh my first graduate level courses in DSP and um when I come back to Norway I'm gonna continue with the [disfmarker] more of a research project work [disfmarker] kind of work. So this semester I'm starting up with a [disfmarker] with a small project through uh Dave Gelbart which I'm taking a course with I got in touch with him and he told me about this project. So with the help of uh Dan Ellis I'm gonna do small project associated to this. What I'm gonna try to do is uh use [disfmarker] use ech echo cancellation to uh to handle the periods where you have overlapping talk. To try to do something about that. So currently I'm um I'm just reading up on echo cancellation, s looking into the theory behind that and then uh hopefully I get some results. So it [disfmarker] it's a [disfmarker] it's a project goes over the course of one semester. [speaker005:] Great. 
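[Editor's note: the semester project Espen describes above, using echo cancellation to handle overlapping talk, is at its core an adaptive filtering problem. Below is a minimal sketch of the standard normalized-LMS (NLMS) approach in Python with NumPy; the tap count, step size, and synthetic signals are all invented for illustration and are not from the actual project.]

```python
import numpy as np

def nlms_echo_cancel(far_end, mic, n_taps=64, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate the echo of the
    far-end (reference) signal present in the mic signal and
    subtract it, leaving the near-end talker's residual."""
    w = np.zeros(n_taps)          # adaptive filter weights
    buf = np.zeros(n_taps)        # delay line of recent far-end samples
    out = np.zeros_like(mic, dtype=float)
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = far_end[n]
        echo_est = w @ buf        # current estimate of the echo sample
        e = mic[n] - echo_est     # error = mic minus estimated echo
        out[n] = e
        # NLMS update: step size normalized by the buffer energy
        w += mu * e * buf / (eps + buf @ buf)
    return out
```

With a simulated echo path (a short FIR filter applied to the far-end signal), the residual energy after convergence drops well below the raw echo energy.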
[speaker009:] So I'm just here today to introduce myself. Tell about I'll be [disfmarker] I'll be working on this. [speaker005:] And are you staying at Berkeley or is [disfmarker] are you just here a semester? [speaker009:] This is my second semester and last. [speaker005:] Ah second and last, [speaker009:] So I leave [speaker005:] OK. [speaker002:] Yeah. He's in the [disfmarker] he's in the cour two two five D course. [speaker009:] Yeah, I'm in Morgan's course, [speaker002:] So, [speaker009:] yeah. [speaker002:] yeah. [speaker005:] Good. [speaker004:] Welcome. [speaker007:] Then you [disfmarker] then you go back to Norway, that's [speaker009:] Yeah. [speaker007:] OK. [speaker006:] We were just talking about something like this yesterday or yeah yesterday with Liz. About doing some of the [vocalsound] echo cancellation stuff or possibly the spectroanalysis over the overlaps, so. Cool. [speaker009:] Yeah. [speaker002:] OK, [speaker005:] Digits? [speaker002:] let's do digits. OK. [speaker005:] And stop. [speaker002:] Sorry. Mental [disfmarker] mental Palm Pilot. Right. Hence [pause] no problem. [speaker006:] Let's see. So. What? I'm supposed to be on channel five? Her. Nope. Doesn't seem to be, yeah. [speaker002:] Hello [pause] I'm channel one. [speaker005:] What does your thing say on the back? [speaker006:] Nnn, five. [speaker004:] Testing. [speaker006:] Alright, I'm five. [speaker004:] Sibilance. Sibilance. [comment] [pause] Three, three. I am three. [speaker002:] Eh. [speaker004:] See, that matches the seat up there. [speaker006:] Yeah, well, I g guess [pause] it's coming up then, or [disfmarker] [speaker004:] So. Cuz it's [disfmarker] That starts counting from zero and these start counting from one. Ergo, the classic off by one error. [speaker002:] But mine is correct. [speaker004:] Is it? [speaker005:] No. [speaker002:] It's one. Channel one. [speaker005:] Look at the back. [speaker004:] Your mike [pause] number [pause] is what we're t [speaker002:] Oh, oh, oh! 
Oh. [speaker004:] Ho! [speaker002:] So [disfmarker] [speaker004:] I've bested you again, Nancy. [speaker002:] But your p No, but the paper's correct. [speaker004:] The paper is correct. [speaker002:] Look at the paper. [speaker004:] I didn't det I was saying the microphone, not the paper. [speaker003:] Nnn, it's n [speaker002:] Oh. [speaker003:] It's always offset. Yeah. [speaker002:] OK. Yes, you've bested me again. That's how I think of our continuing interaction. Damn! Foiled again! [speaker004:] So is Keith showing up? He's talking with George right now. Uh, is he gonna get a rip [disfmarker] uh [disfmarker] rip himself away from [disfmarker] from that? [speaker003:] What [disfmarker] [speaker002:] He'll probably come later. [speaker003:] He he he's probably not, is my guess. [speaker004:] Oh, then it's just gonna be the five of us? [speaker005:] Well, he [disfmarker] he was very affirmative in his way of saying he will be here at four. [speaker003:] Yeah. [speaker005:] But [pause] you know, that was before he knew about that George lecture probably. [speaker003:] Right. This [disfmarker] this is not [disfmarker] It's not bad for the project if Keith is talking to George. OK. So my suggestion is we just [speaker002:] Forge ahead. [speaker003:] Forge ahead, yeah. [speaker005:] Cool. [speaker002:] Are you in charge? [speaker005:] Sure. Um. Well, I sort of had informal talks with most of you. So, Eva just reported she's really happy about the [pause] CPT's being in the same order in the XML as in the um [disfmarker] be Java declaration format [speaker006:] Yeah. The e [speaker005:] so you don't have to do too much in the style sheet transversion. [speaker006:] Uh, yeah. Yeah, so.
[speaker005:] The [disfmarker] uh, Java [disfmarker] the embedded Bayes [pause] wants to take input [disfmarker] uh, uh, a Bayes net [disfmarker] in [disfmarker] in some Java notation and Eva is using the Xalan style sheet processor to convert the XML that's output by the Java Bayes for the [disfmarker] into the, uh, E Bayes input. [speaker004:] Mmm. [speaker006:] Actually, maybe I could try, like, emailing the guy and see if he has any something already. [speaker003:] Sure. [speaker005:] Hmm. [speaker006:] That'd be weird, that he has both the Java Bayes and the embedded Bayes in [disfmarker] Yeah. [speaker004:] But that's some sort of conversion program? [speaker006:] Yeah. And put them into different [pause] formats. Oh [disfmarker] Yep, he could do that, too. [speaker004:] I think you should demand things from him. [speaker003:] He charges so much. Right. [speaker004:] Yeah. [speaker003:] No, I think it's a good idea that you may as well ask. Sure. [speaker006:] Yeah. [speaker005:] And, um, well [pause] pretty mu pretty much on t on the top of my list, I would have asked Keith how the "where is X?" [pause] hand parse is standing. Um. [pause] But we'll skip that. Uh, there's good news from Johno. The generation templates are done. [speaker004:] So the trees [pause] for [disfmarker] the XML trees for the [disfmarker] for the gene for the synthesizer are written. So I just need to [pause] do the, uh [disfmarker] write a new set of [pause] tree combining rules. But I think those'll be pretty similar to the old ones. So. Just gonna be [disfmarker] you know [disfmarker] [speaker003:] Oh! You were gonna send me a note about hiring [disfmarker] [speaker005:] Yes. [speaker003:] I didn't finish the sentence but he understood it. [speaker004:] I know what he's talking about. [speaker003:] OK. But Nancy doesn't. [speaker002:] Hiring somebody. [speaker005:] We [disfmarker] w um [disfmarker] [speaker004:] The guy. 
[speaker005:] OK, so [pause] natural language generation [pause] produces not a [disfmarker] just a surface string that is fed into a text to speech but, a [pause] surface string with a syntax tree that's fed into a concept to speech. [speaker003:] No. [speaker002:] Yeah. Mm hmm. [speaker005:] Now and this concept to speech module has [pause] certain rules on how [pause] if you get the following syntactic structure, how to map this onto prosodic rules. [speaker002:] Better. Mm hmm. Sure. Mm hmm. [speaker005:] And Fey has foolheartedly agreed to rewrite uh, the German concept uh syntax to prosody rules [disfmarker] [speaker002:] I didn't know she spoke German. [speaker005:] No, she doesn't. [speaker002:] Oh, OK. [speaker005:] But she speaks English. [speaker002:] Oh. Rewrite the German ones into English. [speaker005:] Into English. [speaker002:] OK, got it. [speaker005:] And um therefore [pause] the, uh [disfmarker] if it's OK that we give her a couple of more hours per week, then [pause] she'll do that. [speaker002:] OK, got it. [speaker004:] What [pause] language is that [pause] written i Is that that Scheme thing that you showed me? [speaker005:] Yeah. That's the LISP type scheme. [speaker004:] She knows how to program in Scheme? I hope? [speaker005:] No, I [disfmarker] My guess is [disfmarker] I [disfmarker] I asked for a commented version of that file? If we get that, then it's [pause] doable, even without getting into it, even though the Scheme li uh, stuff is really well documented in the [pause] Festival. [speaker004:] Well, I guess if you're not used to functional programming, Scheme can be completely incomprehensible. Cuz, there's no [disfmarker] Like [pause] there's lots of unnamed functions [speaker003:] Syntax. Yeah. [speaker004:] and [disfmarker] You know? [speaker002:] Mm hmm. [speaker003:] Anyway, it [disfmarker] We'll sort this out. Um. But anyway, send me the note and then I'll I'll check with, uh, Morgan on the money. 
I [disfmarker] I don't anticipate any problem but we have to [pause] ask. Oh, so this was [disfmarker] [nonvocalsound] You know, on the generation thing, um if [comment] sh y she's really going to do that, then we should be able to get prosody as well. So it'll say it's nonsense with perfect intonation. [speaker004:] Are we gonna [disfmarker] Can we change the voice of the [disfmarker] of the thing, because right now the voice sounds like a murderer. [speaker005:] Yep. We ha we have to change the voice. [speaker002:] Wh Which one? [speaker004:] The [disfmarker] the little Smarticus [disfmarker] Smarticus sounds like a murderer. [speaker002:] Oh. [speaker001:] That's good to know. [speaker004:] "I have your reservations." [speaker001:] But I will not give them to you unless you come into my lair. [speaker005:] It is [disfmarker] Uh, we have the choice between the, uh, usual Festival voices, which I already told the SmartKom people we aren't gonna use because they're really bad. [speaker002:] Festival? [speaker003:] It's the name of some program, the [disfmarker] the synthesizer. [speaker002:] Oh, oh. Got it. OK. [speaker005:] But, um [speaker001:] You know, the usual party voices. [speaker002:] Yeah, I know. That doesn't sound, [vocalsound] exactly right either. [speaker005:] OGI has, uh, crafted a couple of diphone type voices that are really nice and we're going to use [pause] that. We can still, um, d agree on a gender, if we want. So we still have male or female. [speaker002:] I think [disfmarker] Well, let's just pick whatever sounds best. [speaker005:] Hmm? [speaker002:] Whatever sounds best. [speaker005:] Uh. [speaker004:] Does OGI stand for [disfmarker]? [comment] Original German Institute? [speaker002:] Unfortunately, probably male voices, a bit more research on. [speaker003:] Orego [speaker002:] So. [speaker003:] Or [speaker005:] Oregon. [speaker003:] Oregon [@ @] [comment] Graduate Institute [speaker005:] Try Oregon. [speaker004:] Oh. 
[speaker002:] Oregon Graduate Insti [speaker004:] Ah. [speaker003:] It turns out there's the long standing links with these guys in the speech group. [speaker002:] Hmm! [speaker003:] Very long. [speaker004:] Hmm! [speaker005:] Hmm. [speaker003:] In fact, there's this guy who's basically got a joint appointment, Hynek [pause] Hermansky. He's spends a fair amount of time here. Anyway. Leave it. Won't be a problem. [speaker005:] OK. And it's probably also absolutely uninteresting for all of you to, um learn that as of twenty minutes ago, David and I, per accident, uh managed to get the whole SmartKom system running on the [disfmarker] uh, ICSI Linux machines with the ICSI NT machines thereby increasing the number of running SmartKom systems in this house from [pause] one on my laptop to three. [speaker002:] Mmm, that's good. [speaker004:] How was this by accident? [speaker002:] Yeah, I know. Tha that's the part I didn't understand. [speaker005:] Um, I suggested to try something that was really kind of [disfmarker] even though against better knowledge shouldn't have worked, but it worked. [speaker002:] Hmm! [speaker005:] Intuition. Maybe [disfmarker] maybe [disfmarker] maybe a bit for the AI i intuition thing. [speaker002:] Will it work again, [speaker001:] Yeah. [speaker002:] or [disfmarker]? [speaker004:] Yeah. [speaker005:] OK. And, um, we'll never found out why. It it's just like why [disfmarker] why the generation ma the presentation manager is now working? [speaker001:] Hmm! [speaker005:] Which [speaker001:] This is something you ha you get used to as a programmer, right? You know, [comment] and it's cool, it works out that way. [speaker005:] Hmm. So, [vocalsound] the [disfmarker] the people at Saarbruecken and I decided not to touch it ever again. Yeah, that would work. OK. Um [disfmarker] I was gonna ask you where something is and what we know about that. [speaker001:] Where [disfmarker] [speaker002:] Where the "where is" construction is. [speaker001:] OK. 
[speaker005:] Where is X? [speaker001:] What [disfmarker] what thing is this? OK. [speaker005:] Oh, but by [disfmarker] Uh, we can ask, uh, did you get to read all four hundred words? Was it OK? [speaker003:] I did. [speaker005:] Was it? [speaker004:] I [disfmarker] I wa I was looking at it. [speaker003:] Yeah. [speaker004:] It doesn't follow logically. It doesn't [disfmarker] The first paragraph doesn't seem to have any link to the second paragraph. [speaker001:] And so on. [speaker005:] Hmm. That [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] You know, i Yeah, it [disfmarker] [speaker004:] Each paragraph is good, though. I li [speaker003:] I i Yeah. Well, it it's fine. [speaker001:] It was written by committee. [speaker003:] Anyway. Um. But c the meeting looks like it's, it's gonna be good. So. [speaker005:] Yeah. [speaker003:] I think it's uh [disfmarker] [speaker002:] Yeah, I didn't know about it until [pause] Robert told me, like, [speaker003:] Yeah, I [disfmarker] I ra I ran across it in [disfmarker] I don't even know where, you know [disfmarker] some just [disfmarker] some weird place. And, uh, yeah, I I'm surprised I didn't know about it [speaker002:] Y yeah. Well, yeah. I was like, why didn't Dan tell me? [speaker003:] since we know all the invited speakers, an Right, or some [speaker001:] Right. [speaker003:] Anyway. So [disfmarker] But anyway, yeah. I so I [disfmarker] I did see that. Oh wha Yeah. Before we get started on this st so I also had a nice email correspondence with Daphne Koller, who said yes indeed she would love to work with us on the, um, [disfmarker] you know, using these structured belief nets and stuff but [pause] starting in August, that she's also got a new student working on this and that we should get in touch with them again in August and then we'll figure out a way for you [disfmarker] uh [disfmarker] you to get seriously connected with, um their group. So that's, uh [disfmarker] looks pretty good.
And um [disfmarker] Yeah, I'll say it now. So, um [disfmarker] And it looks to me like [comment] we're now at a good point to do something [disfmarker] start working on something really hard. We've been so far working on things that are easy. [speaker001:] Oh! [speaker003:] Uh, w Which is [comment] mental spaces and uh [disfmarker] and or [disfmarker] [speaker001:] Hmm! [speaker002:] It's hard. [speaker003:] Huh? [speaker002:] Yeah, it's hard. [speaker003:] It's a hard puzzle. [speaker002:] Yeah. [speaker001:] Yeah. [speaker003:] But the other part of it is the way they connect to these, uh, probabilistic relational models. So [pause] there's all the problems that the linguists know about, about mental spaces, and the cognitive linguists know about, but then there's this problem of the belief net people have only done a moderately good job of dealing with temporal belief nets. Uh, which they call dynamic [disfmarker] they incorrectly call dynamic belief nets. [speaker002:] Mmm. [speaker003:] So there's a term "dynamic belief net", doesn't mean that. It means time slices. And Srini used those and people use them. Uh. But one of the things I w would like to do over the next, uh, month, it may take more, [comment] is to st understand to what extent we can not only figure out the constructions for them for multiple worlds and uh sort of what the formalism will look like and where the slots and fillers will be, but also what that would translate into in terms of belief net and the inferences. So the story is that if you have these probabilistic relational models, they're set up, in principle, so that you can make new instances and instances connect to each other, and all that sort of stuff, so it should be feasible to set them up in such a way that if you've got the past tense and the present tense and each of those is a separate [pause] uh, belief structure that they do their inferences with just the couplings that are appropriate. 
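[Editor's note: the "time slice" style of temporal belief net mentioned above, which the speaker notes is what "dynamic belief nets" usually denotes, can be illustrated with the simplest possible case: one binary state variable coupled across consecutive slices by a transition table, plus one observation per slice. All the probabilities below are invented for illustration:]

```python
import numpy as np

# P(x_t | x_{t-1}): the coupling between consecutive time slices.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])
# P(obs | x_t): how each slice's evidence bears on its own state.
O = np.array([[0.8, 0.2],
              [0.3, 0.7]])

def filter_belief(observations, prior):
    """Forward filtering over time slices: predict, condition, renormalize."""
    b = np.asarray(prior, dtype=float)
    for o in observations:
        b = b @ T          # roll the belief one slice forward
        b = b * O[:, o]    # fold in this slice's observation
        b = b / b.sum()    # renormalize to a proper distribution
    return b
```

Repeatedly observing the evidence value that state 0 favors drives the belief toward state 0, which is the "inference with just the couplings that are appropriate" behavior the slices are meant to give.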
But that's g that's, as far as I can tell, it's [disfmarker] it's putting together two real hard problems. One is the linguistic part of what are the couplings and [disfmarker] and when you have a certain, uh, construction, that implies certain couplings and other couplings, you know, between let's say between the past and the present, or any other one of these things and then we have this inference problem of exactly technically how does the belief net work if it's got um, let's say one in [disfmarker] in, you know, different tenses or my beliefs and your beliefs, or any of these other ones of [disfmarker] of multiple models. So um you know, in the long run we need to solve both of those and my suggestion is that we start digging into them both, uh, in a way we that, you know, th hopefully turns out to be consistent, so that the [disfmarker] Um. And sometimes it's actually easier to solve two hard problems than one [speaker001:] Yeah. [speaker003:] because they constrain each other. I mean if you've got huge ra huge range of possible choices um [disfmarker] We'll see. But anyway, so that's, um [disfmarker] [speaker001:] Oh yeah, like uh, I solved the [disfmarker] the problem of um [disfmarker] we were talking about how do you [disfmarker] various issues of how come a plural noun gets to quote "count as a noun phrase", you know, occur as an argument of a higher construction, but a bare singular stem doesn't get to act that way. [speaker003:] Right. [speaker001:] Um, and it would take a really long time to explain it now, but I'm about to write it up this evening. I solved that at the same time as "how do we keep adjectives from floating to the left of determiners and how do we keep all of that from floating outside the noun phrase" to get something like "I the kicked dog". Um. [speaker003:] That's great. [speaker001:] Did it [disfmarker] did it at once. So maybe [disfmarker] maybe it'll be a similar thing. [speaker003:] Yeah. [speaker002:] Cool. 
[speaker003:] No, I know, I th I I think that is gonna be sort of the key to this wh to th the big project of the summer of [disfmarker] of getting the constructions right is that people do manage to do this so there probably are some, uh, relatively clean rules, they're just not context free trees. [speaker001:] Right. [speaker003:] And if we [disfmarker] if the formalism is [disfmarker] is good, then we should be able to have, you know, sort of moderate scale thing. And that by the way is [disfmarker] is, Keith, what I encouraged George to be talking with you about. Not the formalism yet [speaker001:] Mm hmm. [speaker003:] but the phenomena. [speaker001:] Yeah. [speaker003:] The p And [disfmarker] Oh, another thing, um there was this, uh thing that Nancy agreed to in a [disfmarker] in a weak moment this morning that [speaker001:] Hmm! [speaker002:] I was really strong. [speaker001:] Hmm! [speaker006:] Hmm. [speaker003:] Uh, sorry. In a [disfmarker] in a friendly moment. [speaker001:] Same thing. [speaker003:] Anyway, uh, that we were [disfmarker] that we're gonna try to get a uh, first cut at the revised formalism by the end of next week. OK? [speaker001:] Alright. [speaker003:] Probably skipping the mental spaces part. [speaker002:] Seems [disfmarker] [speaker001:] Right. I do. [speaker003:] Uh, just trying to write up essentially what [disfmarker] what you guys have worked out so that everybody has something to look at. We've talked about it, but only the innermost inner group currently, uh, [speaker001:] Mm hmm. Knows. [speaker003:] knows, uh [speaker001:] OK. [speaker002:] Yeah, and [disfmarker] and not even all of them really do. [speaker001:] Yeah. [speaker002:] But like [disfmarker] [speaker003:] Right. [speaker001:] There's [disfmarker] The group as a whole knows but no individual member kno [speaker003:] Well that that [disfmarker] yeah th there's one of the advantages of a document, right?, [speaker001:] Yeah. 
[speaker003:] is [disfmarker] is that it actually transfers from head to head. [speaker002:] Right. [speaker003:] So anyway. [speaker001:] OK. [speaker003:] So um [disfmarker] [speaker002:] Ah, communication! [speaker003:] Huh? [speaker002:] Communication. [speaker003:] Communication, documentation and stuff. [speaker001:] Hunh! [speaker003:] Anyway, so, uh, with a little luck [disfmarker] Uh [disfmarker] l let's, let's have that as a goal anyway. And [disfmarker] [speaker001:] So, uh, what was the date there? Monday or [disfmarker]? [speaker003:] No, no, no. [speaker001:] It's a Friday. [speaker003:] No, w uh [disfmarker] we're talking about a week fr e end of next week. [speaker002:] But, uh, but [disfmarker] but the two of us will probably talk to you at well before th [speaker001:] End of next week. I thought you said beginning of n [speaker002:] I mean. [speaker001:] Yeah. [speaker002:] Anyway, w let's talk separately about how t [speaker001:] Yeah, I have a busy weekend but after that [disfmarker] [comment] [vocalsound] Yeah, gung ho. [speaker003:] OK. Yeah, so [disfmarker] so someti sometime next week. [speaker001:] Great, [speaker003:] Now if it turns out that that effort leads us into some big hole that's fine. [speaker001:] Mm hmm. OK. [speaker003:] You know, if you say we're [disfmarker] we're dump [disfmarker] dump [disfmarker] dump. There's a really hard problem we haven't solved yet [disfmarker] that, that's just fine. [speaker001:] OK. [speaker002:] Mm hmm. [speaker001:] But at [disfmarker] at least sort of try and work out what the state of the art is right now. [speaker003:] Right, t t if [disfmarker] to the extent that we have it, let's write it [speaker001:] OK. [speaker003:] and to the extent we don't, let's find out what we need to do. [speaker001:] OK. [speaker003:] So, uh [speaker005:] Can we [disfmarker]? 
[vocalsound] Is it worth [pause] thinking of an example out of our tourism thing domain, that involves a [disfmarker] a [disfmarker] a decent mental [pause] space shift [pause] or setting up [disfmarker] [speaker003:] I think it is, but [disfmarker] uh [disfmarker] but I interrupted before Keith got to tell us what happened with "where is the Powder Tower?" or whatever [speaker002:] Right. [speaker001:] Well. Uh, what was supposed to happen? I've sort of been actually caught up in some other ones, so, um, you know, I don't have a write up of [disfmarker] or I haven't elaborated on the ideas that we were already talking about [speaker005:] Hmm, yeah. I think [disfmarker] I think we already came to the conclusion that we have two alternative [pause] paths that we [disfmarker] two alternative ways of representing it. [speaker001:] which were [disfmarker] [speaker005:] One is sort of a [disfmarker] has a um [speaker001:] It's gone. [speaker005:] um [speaker001:] The question of whether the polysemy is sort of like in the construction or pragmatic. [speaker002:] One of them was th Right. [speaker005:] or comes [disfmarker] [speaker002:] Right. [speaker005:] is resolved later. Yeah. [speaker001:] I think it has to be the [disfmarker] the second case. [speaker005:] Yeah. [speaker001:] Um, so d'ou [disfmarker] Is it clear what we're talking about here? [speaker002:] I agree. [speaker003:] Uh [disfmarker] [speaker001:] The question is whether the construction is semantic or like ambiguous between asking for location and asking for path. [speaker005:] It's [disfmarker] [speaker002:] So you might be [disfmarker] yeah, y And asking for directions. [speaker001:] Um [speaker005:] Should we have a [disfmarker] a [disfmarker] a [disfmarker] [speaker001:] or [disfmarker] or whether the construction semantically, uh, is clearly only asking for location [speaker002:] Uh [disfmarker] [speaker001:] but pragmatically that's construed as meaning "tell me how to get there". 
[speaker003:] Mm hmm. [speaker005:] So [pause] assume these are two, uh, nodes we can observe in the Bayes net. [speaker003:] Yep. [speaker002:] Yeah. [speaker003:] Right. [speaker005:] So these are either true or false and it's also just true [pause] or false. If we encounter a phrase such as "where is X?", should that set this to true and this to true, and the Bayes net figures out which under the c situation in general is more likely? Um, or should it just activate this, have this be false, and the Bayes net figures out whether this actually now means [disfmarker]? [speaker003:] Uh w that's a s OK, so that's a [disfmarker] that's a separate issue. [speaker002:] Slightly different. [speaker001:] OK. [speaker003:] So I a I I th I agree with you that, um, it's a disaster to try to make separate constructions for every uh, pragmatic reading, [speaker001:] Mm hmm. [speaker003:] although there are some that will need to be there. [speaker002:] Good. Mm hmm. [speaker001:] Right. [speaker002:] Right. [speaker003:] I mean, there there's some that [disfmarker] [speaker002:] Or have every construction list all the possible pragmatic implications of the same one. [speaker003:] You can't do that either. Yeah. [speaker002:] Right. Yeah. [speaker003:] But, you know, c um [disfmarker] almost certainly "can you pass the salt" is a construction worth noting that there is this th this [disfmarker] this [disfmarker] this [disfmarker] uh [speaker001:] Yeah. [speaker002:] Request. [speaker001:] Mm hmm. Yeah. [speaker002:] Very yeah. [speaker001:] So right, this one is maybe in the gray area. Is it [disfmarker] is it like that or is it just sort of obvious from world knowledge that no one [disfmarker] you wouldn't want to know the location without wanting to know how to get there or whatever. [speaker003:] Ri [speaker002:] Mmm. [speaker003:] Yeah. [speaker005:] One Or in some cases, it's [disfmarker] it's quite definitely [speaker003:] Yeah. 
[speaker005:] s so that you just know [disfmarker] wanna know where it is. [speaker001:] Yeah. Well the question is basically, is this conventional or conversational implicature? [speaker003:] Exactly. Yeah. [speaker002:] Might be, yeah. [speaker003:] And I guess, see, the more important thing at this stage is that we should be able to know how we would handle it in ei f in the short run it's more important to know how we would treat [disfmarker] technically what we would do if we decided A and what we would do if we decided B, than it is t to decide A or B r right now. [speaker002:] Right. [speaker001:] OK, right. [speaker002:] Right. Which one it is. [speaker001:] Which of that is. [comment] Yeah, OK [speaker005:] Hmm. [speaker002:] Cuz there will be other k examples that are one way or the other. Right. [speaker003:] W we know for sure that we have to be able to do both. [speaker001:] Yeah. [speaker003:] So I guess [vocalsound] In the short run, let's [disfmarker] let's be real clear on h what the two alternatives would be. [speaker001:] OK. [speaker005:] And then the [vocalsound] we had another idea floating around um, which we wanted to, uh, get your input on, and that concerns the [disfmarker] But the nice thing is w we would have a person that would like to work on it, and that's Ir Irina Gurevich from EML [pause] who is going to be visiting us, uh, the week before, uh, August and a little bit into August. And she would like to [vocalsound] apply the [pause] ontology that is, um [vocalsound] being crafted at EML. That's not the one I sent you. The one I sent you was from GMD, out of a European CRUMPET. [speaker003:] It was terrible. [speaker005:] Agreed. 
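[Editor's sketch: the two alternative treatments of "where is X?" discussed above could be wired as follows. This is not the project's actual implementation; the node names, likelihoods, and threshold are all illustrative assumptions. Alternative B is shown in code: the construction only asserts the surface question, and the net resolves the reading (LOCATION vs. PATH) from a contextual prior. Under Alternative A one would instead enter soft evidence on both reading nodes and let the net arbitrate.]

```python
# Alternative B: "where is X?" observes only a surface-question node;
# the reading is inferred pragmatically from P(user wants to enter X).
# All probabilities below are illustrative assumptions, not estimates.

def infer_reading(p_enter_prior):
    """Resolve 'where is X?' given the prior belief that the user wants
    to enter X. Assumed likelihoods: a user who intends to enter is much
    more likely to mean 'how do I get there?' (PATH) than to be asking a
    pure location question."""
    p_path_given_enter = 0.9      # P(PATH reading | wants to enter)
    p_path_given_not_enter = 0.3  # P(PATH reading | does not want to enter)
    p_path = (p_path_given_enter * p_enter_prior
              + p_path_given_not_enter * (1.0 - p_enter_prior))
    return "PATH" if p_path > 0.5 else "LOCATION"
```

[This makes the A/B choice concrete: under B the polysemy lives entirely in the inference, so no second construction is needed.]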
Um, and one of the reas one of the [disfmarker] those ideas was, so, back to the old Johno observation that if y if you have a dialogue history [pause] and it said the word "admission fee" was uh, mentioned um, it's more likely that the person actually wants to enter [pause] than just take a picture of it from the outside. Now what could imagine [disfmarker] to, you know, have a list for each construction of things that one should look up in the discourse history, yeah? That's the really stupid way. Then there is the [pause] really clever way that was suggested by Keith and then there is the, uh, middle way that I'm suggesting and that is you [disfmarker] you get X, which is whatever, the castle. The ontology will tell us that castles have opening hours, that they have admission fees, they have whatever. And then, this is [disfmarker] We go via a thesaurus and look up [pause] certain linguistic surface structures [pause] that are related to these concepts and feed those through the dialogue history and check dynamically for each e entity. We look it up check whether any of these were mentioned and then activate the corresponding nodes on the discourse side. But Keith suggested that a [disfmarker] a much cleaner way would be [disfmarker] is, you know, to keep track of the discourse in such a way that you [disfmarker] if you know that something like that ha has been mentioned before, this just a continues to add up, you know, in th in a [disfmarker] [speaker001:] So if someone mentions admission f fees, that activates an Enter schema which sticks around for a little while in your rep in the representation of what's being talked about. And then when someone asks "where is X?" you've already got the [disfmarker] the Enter schema activated [speaker003:] Mm hmm. [speaker002:] Kind of a priming [speaker001:] and you're able to [disfmarker] to conclude on it. [speaker003:] Yeah. [speaker002:] priming a spreading activation [speaker003:] Right. [speaker001:] Yeah. 
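[Editor's sketch of the "middle way" just described: ontology lookup for the entity's concepts, thesaurus expansion to surface forms, then a dynamic check against the dialogue history. The ontology and thesaurus contents here are made-up examples, not drawn from the EML ontology or any real resource.]

```python
# Illustrative stand-ins for the ontology and thesaurus (hypothetical data).
ONTOLOGY = {
    "castle": ["admission_fee", "opening_hours"],
    "statue": [],
}
THESAURUS = {
    "admission_fee": ["admission fee", "ticket", "how much"],
    "opening_hours": ["opening hours", "when does it open"],
}

def activated_concepts(entity, dialogue_history):
    """The 'middle way': the ontology says which concepts an entity has
    (castles have admission fees, opening hours, ...), the thesaurus maps
    each concept to surface forms, and we check dynamically which forms
    actually occurred in the discourse history."""
    history = " ".join(dialogue_history).lower()
    active = []
    for concept in ONTOLOGY.get(entity, []):
        if any(form in history for form in THESAURUS[concept]):
            active.append(concept)
    return active
```

[The returned concepts are what would activate the corresponding nodes on the discourse side of the net.]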
[speaker003:] Yeah. So that's certainly [pause] more [pause] realistic. I m I mean psychologically. [speaker001:] Right. [speaker003:] Now technically [speaker001:] Yeah. [speaker003:] Um [speaker004:] Well, uh, is it [disfmarker] doesn't it seem like if you just managed the dialogue history with a [disfmarker] a thread, that you know, kept track of ho of the activity of [disfmarker] I mean, cuz it would [disfmarker] the [disfmarker] the thread would know what nodes [pause] like, needed to be activated, so it could just keep track of [pause] how long it's been since [pause] something's been mentioned, and [pause] automatically load it in. [speaker003:] Yeah. You could do that. Um. But here's [disfmarker] here's a way [disfmarker] in th in the bl Bayes net you could [disfmarker] you could think about it this way, that if um [pause] at the time "admissions fee" was mentioned [pause] you could increase the probability [pause] that someone wanted to enter. [speaker002:] Turn prior on. [speaker004:] We yeah [disfmarker] th th that's what I wa I wasn't [disfmarker] I was [disfmarker] I wasn't thinking in terms of Enter schemas. I was just [disfmarker] [speaker003:] Fair enough, OK, but, but, in terms of the c c the current implementation [disfmarker] right? so that um [speaker002:] It would already be higher in the [pause] context. [speaker003:] th that th the [disfmarker] the [disfmarker] the conditional probability that someone [disfmarker] So at the time you mentioned it [disfmarker] This is [disfmarker] this is essentially the Bayes net equivalent of the spreading activation. [speaker001:] Mm hmm. Yeah. [speaker003:] It's [disfmarker] In some ways it's not as good but it's [pause] the implementation we got. [speaker001:] Yeah, sure. [speaker003:] We don't have a connectionist implementation. 
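[Editor's sketch of the Bayes-net equivalent of spreading activation described above: when an entry-related cue like "admission fee" is mentioned, the belief that the user wants to enter is updated. The likelihood ratio of 3.0 is an illustrative assumption, not a value estimated from data.]

```python
def update_enter_belief(prior, entry_cue_mentioned, likelihood_ratio=3.0):
    """One Bayesian update on P(enter): an entry-related cue in the
    discourse history is assumed three times as likely to occur when the
    user actually intends to enter. With no cue, the prior is unchanged."""
    if not entry_cue_mentioned:
        return prior
    # Convert to odds, apply the likelihood ratio, convert back.
    odds = likelihood_ratio * prior / (1.0 - prior)
    return odds / (1.0 + odds)
```

[This is the "not as good as a connectionist implementation, but the implementation we got" version: a single conditional-probability bump rather than graded decaying activation.]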
[speaker001:] No, I mean [speaker003:] Now [disfmarker] Now my guess is that it's not a question of time but it is a question of whether another [pause] intervening object has been mentioned. [speaker002:] Yeah, relevance. Yeah. [speaker003:] I mean, we could look at dialo [speaker001:] Yeah. [speaker003:] this is [disfmarker] Of course the other thing we ha we do is, is we have this data coming [speaker001:] Yeah. [speaker003:] which probably will blow all our theories, but [disfmarker] [vocalsound] but skipping that [disfmarker] [speaker001:] Yeah, right. [speaker003:] so [disfmarker] so [disfmarker] but my guess is what [disfmarker] what'll probably will happen, Here's a [disfmarker] here's a proposed design. [comment] is that there're certain constructions which, uh, for our purposes do change the probabilities of EVA decisions and various other kinds and th that the, uh, standard way that [disfmarker] that the these contexts work is sort of stack like or whatever, but that's sort of the most recent thing. And so it could be that [pause] when another uh, en tourist entity gets mentioned, you [speaker002:] Renew [speaker003:] re re essentially re initiali you know, re i essentially re initialize the [pause] state. [speaker004:] Mmm. [speaker002:] Yeah. [speaker001:] Mm hmm. [speaker003:] And of course i if we had a fancier one with multiple worlds you could have [disfmarker] uh, you could keep track of what someone was [pause] uh saying about this and that. You know, I wanna go [disfmarker] in the morning [speaker001:] Yeah. [speaker003:] I wanna [disfmarker] " [speaker001:] "Here's my plan for today. Here's my plan for tomorrow." [speaker003:] Yeah, or [disfmarker] Yeah, in the morning morning I I'm planning t to go shopping, [speaker001:] hypothetically. [speaker003:] in the afternoon to the Powder Tower [disfmarker] [speaker001:] Yeah. [speaker003:] Uh, tal so I'm talking about shopping and then you say, uh, you know, well, um "What's it cost?" or something. 
[speaker001:] Mm hmm. [speaker003:] Or [disfmarker] Anyway. So one could well imagine, but not yet. [speaker001:] Yeah. [speaker003:] But I do th think that the [disfmarker] [comment] It'll turn out that it's gonna be [disfmarker] depend pretty much on whether there's been an override. [speaker005:] Yeah, I mean, if [disfmarker] if you ask "how much does a train ride and [disfmarker] and cinema around the vineyards cost?" and then somebody tells you it's sixty dollars and then you say "OK How much is, uh [disfmarker] I would like to [pause] visit the [disfmarker]" [vocalsound] whatever, something completely different, "then I go to, you know, Point Reyes", [speaker003:] Yeah. [speaker005:] it [disfmarker] it's not more likely that you want to enter anything, but it's, as a matter of fact, a complete rejection of entering by doing that. [speaker003:] Right. [speaker002:] Right. [speaker003:] Right. [speaker001:] Yeah. [speaker002:] So when you admit have admission fee and it changes something, it's only for that particular [disfmarker] It's relational, right? It's only for that particular object. [speaker003:] Yeah, I th th Yeah. Well, and [disfmarker] and [disfmarker] and the simple idea is that it's on it's only for m for the current uh, tourist e entity of instre interest. [speaker002:] Yeah. [speaker001:] Yeah. [speaker002:] Right. [speaker005:] Yeah. But that's [disfmarker] I mean this [disfmarker] this function, so, has the current object been mentioned in [disfmarker] in [disfmarker] with a question about [disfmarker] concerning its [disfmarker] [speaker003:] No, no. It's [disfmarker] it [disfmarker] It goes the other d it goes in the other direction. Is [disfmarker] When th When the [disfmarker] this is mentioned, [pause] the uh probability of [disfmarker] of, let's say, entering changes [speaker002:] Of that object. [speaker003:] changes. [speaker002:] For [disfmarker] But [disfmarker] Right. 
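[Editor's sketch of the simple, relational design just agreed on: cue-driven boosts apply only to the current tourist entity of interest, and mentioning a different entity re-initializes the state (the stack-like behavior, without the fancier multiple-worlds version). The base prior and boost values are illustrative assumptions.]

```python
class DiscourseContext:
    """Tracks beliefs about the current tourist entity only; switching
    entities resets the state, so 'admission fee' said about the vineyard
    tour does not raise P(enter) for Point Reyes."""

    def __init__(self, base_prior=0.5):
        self.base_prior = base_prior
        self.entity = None
        self.enter_prior = base_prior

    def mention_entity(self, entity):
        if entity != self.entity:        # new entity: re-initialize
            self.entity = entity
            self.enter_prior = self.base_prior

    def mention_entry_cue(self):         # e.g. "admission fee"
        self.enter_prior = min(1.0, self.enter_prior + 0.3)
```

[A fancier version would keep one such state per discussed entity ("in the morning shopping, in the afternoon the Powder Tower") instead of a single slot.]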
[speaker004:] You could just hav uh, just basically, ob it [disfmarker] It observes an [disfmarker] er, it sets the [disfmarker] a node for "entered" or "true" or something, [speaker003:] Yeah. Yeah. Now, uh [disfmarker] But I think Ro Robert's right, that to determine that, OK? you may well want to go through a th thesaurus [speaker004:] "discourse enter". [speaker003:] and [disfmarker] and [disfmarker] So, if the issue is, if [disfmarker] so now th this construction has been matched and you say "OK. Does this actually have any implications for our decisions?" Then there's another piece of code [vocalsound] that presumably [pause] does that computation. [speaker002:] So, sort of forward chaining in a way, rather than [pause] backward. [speaker003:] Yeah. Yeah. [speaker002:] OK. [speaker003:] But [disfmarker] but what's Robert's saying is [disfmarker] is, and I think he's right, [comment] is you don't want to try to build into the construction itself all the synonyms and all [disfmarker] you know, all the wo Uh maybe. I'll have to think about that. [speaker002:] Hmm. [speaker003:] I don't know. I mean it [disfmarker] th [vocalsound] I can thi I can think of arguments in either direction on that. But somehow you want to do it. [speaker005:] Mm hmm. Well, it's just another, sort of, construction side is how to get at the possible inferences we can draw from the discourse history or changing of the [pause] probabilities, and or [disfmarker] [speaker002:] Guess it's like [disfmarker] I g The other thing is, whether you have a m m user model that has, you know, whatever, a current plan, whatever, plans that had been discussed, and I don't know, I mean [disfmarker] [speaker004:] What [disfmarker] uh, what's the argument for putting it in the construction? Is it just that [pause] the s synonym selection is better, or [disfmarker]? 
[speaker003:] Oh, wel Well, the ar the [disfmarker] The argument is that you're gonna have the [disfmarker] If you've recognized the word, you've recognized the word, which means you have a lexical construction for it, so you could just as well tag the lexical construction with the fact that it's a uh, you know, thirty percent increase in probability of entering. You [disfmarker] So you could [disfmarker] you could [disfmarker] you could invert [disfmarker] invert the whole thing, so you s you tag that information on to [pause] the lexicon [speaker004:] Mmm. Oh, I see. [speaker003:] since you had to recognize it anyway. That [disfmarker] that's the argument in the other direction. at [disfmarker] at [disfmarker] Yeah, and this is [disfmarker] [speaker005:] Even though uh the lexical construction itself [disfmarker] out [disfmarker] out of context, uh, won't do it. I mean, y you have to keep track whether the person says "But I but I'm not interested in the opening times" [speaker002:] Yeah. [speaker005:] is sort of a more a V type. [speaker003:] Yeah there's, yeah ther there's that as well. [speaker005:] Yep. Hmm. So. But, we'll [disfmarker] uh, we have time to [disfmarker] This is a s just a sidetrack, but uh I think it's also something that people have not done before, is um, sort of abuse an ontology for these kinds of, uh, inferences, on whether anything relevant to the current something has been [disfmarker] [vocalsound] uh, has crept up in the dialogue history already, or not. And, um I have the, uh [disfmarker] If we wanted to have that function in the dialogue hi dialogue module of SmartKom, I have the written consent of Jan to put it in there. [speaker003:] Good. OK. [comment] [vocalsound] Well, this [disfmarker] this is highly relevant to someone's thesis. [speaker005:] Yes, um. That's [disfmarker] uh, I'm [disfmarker] I'm keeping on good terms with Jan. [speaker003:] You've noticed that. [speaker005:] Yeah. [speaker003:] OK. 
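[Editor's sketch of the other direction argued above: instead of a thesaurus pass, tag the lexical construction itself with its effect on the decision probabilities, applied at recognition time. The lexicon entries are hypothetical; the 0.3 increment echoes the "thirty percent increase" figure mentioned in the discussion. The `negated` flag gestures at the caveat that context like "but I'm NOT interested in the opening times" must be able to flip the cue.]

```python
# Hypothetical lexicon: each lexical construction carries (intention, boost).
LEXICAL_CUES = {
    "admission fee": ("enter", 0.3),
    "opening hours": ("enter", 0.2),
}

def apply_lexical_cue(word, priors, negated=False):
    """Adjust intention priors when `word` is recognized. Since the word
    had to be recognized anyway, the probability effect rides along with
    its lexical construction; a negated context flips the cue's sign."""
    if word not in LEXICAL_CUES:
        return priors
    intention, delta = LEXICAL_CUES[word]
    if negated:
        delta = -delta
    updated = dict(priors)
    updated[intention] = min(1.0, max(0.0, updated.get(intention, 0.5) + delta))
    return updated
```

[The trade-off versus the thesaurus approach is visible here: no synonym expansion is needed, but every relevant lexical entry must be annotated by hand.]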
So the point is, it's very likely that Robert's thesis is going to be along these lines, and the local rules are if it's your thesis, you get to decide how it's done. [speaker002:] Oh, s [speaker003:] OK. So if, you know [disfmarker] if this is [disfmarker] seriously, if this becomes part of your thesis, you can say, hey we're gonna do it this way, that's the way it's done. [speaker005:] Mmm. [speaker002:] Yay, it's not me. It's always me when it's someone's thesis. [speaker003:] No, no, no! No, no. We've got a lot [disfmarker] we've got a lot of theses going. [speaker001:] There's a few of us around now. [speaker002:] Now it's not. Yay! I know it is. [speaker003:] Yeah. Right. [speaker005:] Well, let's [disfmarker] let's talk after Friday the twenty ninth. Then we'll see how f f [speaker003:] Right. So h he's got a th he's got a meet meeting in Germany with his thesis advisor. [speaker002:] Yeah, he said he's gonna f finish his thesis by then. [speaker001:] Oh yeah. [speaker005:] Yeah. I should try to finish it by then. Yeah. [speaker003:] Oh, right. [speaker005:] So. [speaker003:] Um. Yeah. So I think [pause] in fact, That's the other thing. uh, this is [disfmarker] this is, speaking of hard problems, [comment] this is a very good time um, to start trying to make explicit where construal comes in and [disfmarker] you know, where c where the construction per se ends [pause] and where construal comes in, [speaker001:] Mm hmm. [speaker005:] Mm hmm. [speaker003:] cuz this is clearly part of th [speaker002:] Yeah, we've [disfmarker] we've done quite a bit of that. [speaker003:] Huh? [speaker001:] Yeah. [speaker002:] We've been doing quite a bit of that. [speaker001:] Yeah. [speaker003:] Well I said. But that's part of what the f [speaker002:] We have many jobs for you, Ro Robert. [speaker003:] Yeah. Well, he's gonna need this. [speaker002:] The conclusion. [speaker001:] Yeah, it seems to always land in your category. [speaker002:] Yeah. [speaker003:] Right. So. 
[vocalsound] Right. So thing [disfmarker] That's part of why we want the formalism, [speaker001:] You're lucky. [speaker003:] is [disfmarker] is because th it is gonna have implicit in it [speaker001:] Yeah. [speaker005:] Was I? In the room? [speaker002:] No, you weren't there [pause] on purpose. [speaker003:] Yeah. [speaker002:] Like [disfmarker] [speaker001:] Made it much easier to make these decisions. [speaker002:] Obviously. [speaker001:] Uh. [speaker002:] Yeah. [speaker003:] Right. Well I [disfmarker] That's tentative. They aren't decisions, [speaker001:] Yeah. Right, right, right. [speaker003:] they're ju they're just proposals. [speaker002:] No, they're decisions. [speaker001:] Yes. [vocalsound] Excuse me. [speaker002:] OK. [speaker003:] Yeah, that [disfmarker] That's the point, is [disfmarker] is th [speaker001:] Yeah. [speaker005:] Constraints. Let's call them constraints, around which one has to [disfmarker] [speaker003:] Yeah. Yeah. [comment] [pause] Anyway. But so that's that's w [speaker002:] Actually, yeah. [vocalsound] There's a problem with that word, too, though. [speaker004:] Yeah, but it [disfmarker] he the decisions I made wer had to do with my thesis. [speaker003:] Yeah. [speaker004:] So consequently don't I get to decide then that it's Robert's job? [speaker003:] No. Uh. [speaker001:] Anyhow. [speaker002:] Well, I'll just pick a piece of the problem and then just push the hard stuff into the center [pause] and say it's Robert's. [speaker005:] I've always been [pause] completely in favor of consensus decisions, [speaker002:] Like. [speaker005:] so we'll [disfmarker] we'll find a way. [speaker003:] Right. [speaker002:] I can [disfmarker] [speaker003:] Well, we [disfmarker] we [disfmarker] we will, but um [speaker002:] I haven't. [comment] OK. 
[speaker003:] not [disfmarker] [speaker005:] It [disfmarker] it might even be [pause] interesting then to [pause] say that I should be forced to um, sort of pull some of the ideas that have been floating in my head out of the, uh [disfmarker] out of the top hat [speaker003:] Yes. [speaker005:] and, um [disfmarker] [speaker001:] Always good. [speaker003:] Right. [speaker005:] That metaphor is not going anywhere, you know. [speaker003:] So [speaker001:] Yeah. [speaker003:] Ri No. Absolutely. So, uh, wh you had [disfmarker] you know you ha You had done one draft. [speaker005:] Yes, and, um, it's [disfmarker] Ha None of that is basically still around, [speaker003:] And a another draft [speaker002:] I didn't get [speaker005:] but it's [disfmarker] [speaker003:] OK. D i [speaker001:] That's normal. [speaker002:] Oh, I guess it's good I didn't read it. [speaker003:] I i I [disfmarker] this is [disfmarker] I'm shocked. This is the first time I've seen a thesis proposal change. Right. Anyway, uh. [vocalsound] So. [speaker002:] Really? [speaker003:] But, yeah, a second [disfmarker] that would be great. So, uh, a sec I mean you're gonna need it anyway. [speaker005:] Hmm. Yeah, and I would like to d discuss it [speaker003:] and [speaker005:] and, you know, get you guys's input [speaker003:] Right. [speaker005:] and make it sort of bomb proof. [speaker003:] Yep. [speaker002:] Bomb proof! [speaker001:] Good. [speaker002:] Oh! Oh, OK. [speaker005:] Bullet proof. That's the word I was looking for. [speaker003:] Both proof. [speaker001:] Either way. [speaker002:] Both. [speaker003:] Right. Uh [speaker002:] Good luck. [vocalsound] Really. [speaker003:] So that, so th thi this [disfmarker] I mean, so this is the point, is we [disfmarker] we're going to have to cycle through this, but th the draft of the p proposal on the constructions is [disfmarker] is going to tell us a lot about [pause] what [pause] we think needs to be done by construal. [speaker001:] Yeah. 
[speaker003:] And, um, we oughta be doing it. [speaker005:] OK. Yeah, we need [disfmarker] we need some [disfmarker] Then we need to make some dates. Um. Meeting [disfmarker] regular meeting time for the summer, we really haven't found one. We did [pause] Thursdays one for a while. I just talked to Ami. It's it's a coincidence that he can't do [disfmarker] couldn't do it today [pause] here. [speaker002:] Usually, he can. [speaker005:] Usually he has no real constraints. So [disfmarker] [speaker003:] And the NTL meeting moved to Wednesday, cuz of [disfmarker] of, uh [speaker005:] Yeah, it was just an exception. [speaker003:] Yeah, you weren't here, but [disfmarker] but [disfmarker] but [disfmarker] s uh, [disfmarker] And so, if that's OK with you, you would [disfmarker] [speaker001:] It's i Is it staying basically at the Wednesday noon? [speaker003:] Yeah, it was th [speaker001:] OK. It was th off this week, [speaker002:] Yeah. I always thought it was staying. [speaker001:] yeah. [speaker003:] Right. [speaker002:] Yeah, I thought it was just this week that we were changing it. [speaker005:] Mmm. [pause] Yeah. [speaker003:] OK. [speaker005:] And, um. How do we feel about doing it Wednesdays? Because it seems to me that this is sort of a time where when we [pause] have things to discuss with other people, there [disfmarker] they seem to be s tons of people around. [speaker003:] The only disadvantage [pause] is that it may interfere with other [speaker005:] Or [disfmarker] subgroup meetings [speaker003:] s you know, other [disfmarker] other [disfmarker] No, you [disfmarker] Uh, people in this group connecting with [disfmarker] with [speaker002:] Those people who [pause] happen to be around. [speaker003:] those people [pause] who [disfmarker] who might not be around so much. Uh, I don't care. 
I I uh you know I have no fixed [disfmarker] [speaker001:] To tell you the truth, I'd rath I'd, I'd [disfmarker] would like to avoid more than one ICSI meeting per day, if possible. [speaker003:] OK. [speaker001:] But [disfmarker] [vocalsound] I mean. I don't know. Whatever. [speaker003:] No, that's fine. I mean that [disfmarker] [speaker005:] The [disfmarker] I'd like to have them all in one day, so package them up and then [disfmarker] [speaker001:] Yeah, I can understand that. [speaker003:] Well p people [disfmarker] people differ in their tastes in this matter. [speaker001:] Yeah. [speaker003:] I [disfmarker] I'm neutral. [speaker001:] Yeah. [speaker002:] Yeah. [pause] I'm always here anyway, [speaker005:] It's OK, that [disfmarker] [speaker002:] so [disfmarker] [speaker003:] Yeah. [@ @] That's [disfmarker] Me too. [speaker002:] It doesn't matter. [speaker003:] I'm basically [disfmarker] I'm here. So. [speaker005:] Well, if [disfmarker] one [pause] sort of thing is, this room is taken at [disfmarker] after three thirty pr pretty much every day by the data collection. [speaker002:] Oh. [speaker005:] So we have subjects anyway [disfmarker] Except for this week, we have subjects in here. That's why it was one. [speaker002:] Oh. [speaker003:] OK. [speaker005:] So we just knew i [speaker002:] So did you just say that Ami can't make one o ' [speaker005:] No, he can. [speaker002:] Oh, OK. [speaker001:] Oh. [speaker005:] So let's say Thursday one. But for next week, this is a bit late. So [pause] I would suggest that we need to [disfmarker] to talk [disfmarker] [speaker002:] Oh, oh, OK. [speaker005:] OK. About the c the [disfmarker] th [speaker002:] Could we do Thursday at one thirty? Would that [disfmarker] that be horrible? [speaker005:] No. Yes. [speaker002:] Oh really? [speaker005:] Because, uh, this room is again taken at two thirty by Morgan. [speaker002:] Oh, OK. OK. You didn't tell me that. OK, that's fine. 
[speaker005:] And the [disfmarker] s meeting recorder meeting meeting meeting recording on meeting meetings [disfmarker] [speaker002:] OK, OK, OK. OK. [pause] Yeah. [speaker005:] So. [speaker003:] Interesting. [speaker001:] Ah, yeah. [speaker003:] So you're proposing that we meet Tuesday. [speaker005:] How about that? [speaker003:] I [disfmarker] I could [speaker001:] Next week. [speaker002:] Well, we're meeting Tuesday. I mean we usually meet Tuesday [disfmarker] or l like, linguists [pause] um, at two. [speaker004:] Would it [disfmarker] [speaker001:] That's right. [speaker002:] So. [speaker004:] And the s [speaker002:] Do you want to meet again here bef [speaker005:] I mean w [speaker004:] Is the Speech Gen meeting still at [disfmarker] on Tuesdays? [speaker005:] Well, actually we w we we did scrap our Monday time just because Bhaskara couldn't come Monday. [speaker002:] Hhh. [comment] Maybe I do need a Palm Pilot. [speaker005:] So there's [disfmarker] Nothing's impeding Monday anymore [pause] either. [speaker001:] That doesn't apply to a [disfmarker] [speaker005:] Get a fresh start [disfmarker] [speaker004:] Although I thought you wanted to go camping on Monday [disfmarker] er, take off Mondays a lot so you could go camping. [speaker005:] Yeah, that's another s thing. Yeah. But, um. I mean, there are also usually then holidays anyways. I mean [pause] like [disfmarker] [comment] Sometimes [pause] it works out that way. [speaker002:] Usually? [speaker005:] So. Hmm! [speaker002:] Well, I mean, the linguists' meeting [pause] i happens to be at two, but I think that's [disfmarker] I mean. [speaker001:] That should be relatively flexible be [speaker002:] pretty flexible, I think. [speaker001:] Yeah. There's just [pause] sort of the two to four of us. [speaker002:] So. The multiple meetings [speaker001:] Right? Yeah. [speaker002:] yeah. [speaker001:] So. [speaker002:] Right. 
[speaker001:] And, you know, of course Nancy and I are just sort of always talking anyway and sometimes we do it in that room. [speaker002:] Yeah. [speaker001:] So, you know, I mean. [speaker005:] OK, so [pause] l forget about the b the camping thing. So let's [disfmarker] eh, any other problems w w w? But, I suggested Monday. If that's a problem for me then I shouldn't [pause] suggest it. [speaker004:] Ha ha ha. [speaker003:] OK. [speaker005:] So. [speaker001:] Um, all of the proposed times sound fine with me. [speaker002:] Same here. [speaker005:] Monday? [speaker003:] OK, whate I mean [disfmarker] What I think Robert's saying is that [speaker001:] Earlier in the week [speaker003:] earlier we [disfmarker] At least for next week, there's a lot of stuff we want to get done, [speaker001:] Mm hmm. Yeah. [speaker003:] so why don't we plan to meet Monday [speaker005:] Mmm. [speaker003:] and [pause] we'll see if we want to meet any more than that. [speaker001:] OK. [speaker005:] OK. [speaker002:] What time? At o o o o one, two, three [disfmarker]? [speaker005:] One, two, three? Three's too late. Two thirty? [speaker003:] Oh, I i [pause] Yeah, I actually [disfmarker] Two is the earliest I can meet on Monday. [speaker005:] OK, two. [speaker003:] Here I'm blissfully agreeing to things and realizing that I actually do have some stuff scheduled on Monday. [speaker001:] Sure. Sounds great. Uh, so that's the eighteenth. [speaker002:] You guys will still remind me, right? [speaker004:] No way! [speaker002:] Y you'll come and take all the [disfmarker] [vocalsound] the headph the good headphones first and then remind me. [speaker005:] W why do you [disfmarker]? [speaker001:] Yeah, exactly. [speaker005:] And [speaker001:] Sorry, two PM. [speaker002:] Why do I have this unless I'm gonna write? [speaker005:] do I get to see th uh, your formalism before [pause] that? [speaker002:] Fine. Yes. Uh. Would you like to? [speaker005:] Mm hmm. 
I wo I would like [disfmarker] I would sort of [pause] get a [disfmarker] get a notion of what [disfmarker] what you guys have in store for me. [speaker002:] OK. I was actually gonna work on it for tomorrow [disfmarker] like this [disfmarker] this weekend. Yeah. [speaker003:] Well m [@ @] you know, w maybe Mond Maybe we can put [disfmarker] This is part of what we can do Monday, if we want. [speaker002:] Yeah. I OK. [speaker001:] Alright. [speaker005:] OK. [speaker002:] I mean, I [disfmarker] I [disfmarker] I [disfmarker] [speaker003:] Is some [disfmarker] some version [speaker002:] Yeah, so there was like, you know, m m in my head the goal to have like an intermediate version, like, everything I know. [speaker001:] Mm hmm. [speaker002:] And then, w I would talk to you and figure out everything you know, that [disfmarker] you know, see if they're consistent. [speaker001:] Yeah. OK. Why don't w Maybe you and I should meet sort of more or less first thing Monday morning and then we can work on this. [speaker002:] Yes. Yeah. That's f fine with me. [speaker001:] OK. [speaker002:] So. I might [disfmarker] I might [disfmarker] um, [speaker005:] You y [speaker002:] s You said you're busy [pause] over th until the weekend, right? [speaker001:] Yeah, sort of through the weekend because Kate has a photography show. [speaker002:] That's fine. So we might continue our email thing [speaker001:] Yeah. [speaker002:] and that might be fine, too. So, maybe I'll send you some [disfmarker] [speaker001:] Um, if you have time after this I'll show you the noun phrase thing. [speaker002:] OK. That would be cool. So. OK, and we'll [disfmarker] You wanna m [speaker005:] So the idea is on Monday at two we'll [disfmarker] we'll see an intermediate version of the formalism for the constructions, [speaker001:] Yeah. [speaker002:] So that's OK for you [disfmarker] [speaker005:] and do an on line merging with my construal [pause] ideas. [speaker002:] Sure, sure. [speaker003:] OK. 
[speaker001:] Alright. [speaker005:] So it won't be, like, a for semi formal presentation of my [pause] proposal. [speaker002:] That's OK. [speaker005:] It'll be more like towards [pause] finalizing that proposal. OK, that's fine. [speaker001:] OK. [speaker002:] Cuz then you'll find out more of what we're making you do. [speaker005:] Yep, and then [disfmarker] [speaker001:] Yeah. [speaker004:] Hmm, hmm. [speaker005:] Yikes. [speaker001:] Oy, [comment] deadlines. [speaker002:] We'll make a presentation of your propo [comment] of your proposal. [speaker005:] Perfect. Can you also write it up? [speaker002:] It's like, this is what we're doing. [speaker003:] Abso [speaker002:] And the complement is Robert. " [speaker005:] I'll [disfmarker] I'll send you [disfmarker] I'll [disfmarker] I'll send you a style file, right? [speaker002:] OK. [speaker005:] You just [disfmarker] [speaker002:] I already sent you my fi [comment] my bib file. So. [speaker005:] OK. And, um. Sounds good. [speaker001:] Someday we also have to [disfmarker] we should probably talk about the other side of the "where is X" construction, which is the issue of, um, how do you simulate questions? What does the simspec look like for a question? [speaker005:] Yeah. [speaker003:] Mm hmm. [speaker001:] Because [pause] it's a little different. [speaker002:] Yeah. [speaker003:] Yeah, now, we we w [speaker001:] We had to [disfmarker] we had an idea for this which seemed like it would probably work. [speaker003:] Great. OK. Yeah. Simspec may need [disfmarker] we may n need to re name that. [speaker002:] I [disfmarker] Yeah. I [disfmarker] [speaker001:] Yeah. [speaker003:] OK? So let's think of a name for [disfmarker] for whatever the [disfmarker] this intermediate structure is. Oh, we talked about semspec, for "semantic spec specification" [speaker001:] Mmm. [speaker003:] and that seems [disfmarker] Um. [speaker001:] It's more general [speaker003:] You know, so it's a m minimal change. 
[speaker002:] Only have to change one vowel. That's great. [speaker003:] Yeah. Just [disfmarker] [speaker002:] All the old like [vocalsound] graphs, [speaker003:] Right. [speaker002:] just change the [disfmarker] just, like, mark out the [disfmarker] [speaker001:] Cool. [speaker003:] Right, a little substi substi You know, that's what text substitution uh macros are for. [speaker001:] Yeah. It's good for you. [speaker002:] Yeah. [speaker003:] Anyway, uh, so let's [disfmarker] let's for the moment call it that until we think of something better. [speaker001:] OK. [speaker003:] And, yeah, we absolutely need to find [disfmarker] Part of what was missing were markings of all sorts that weren't in there, incl including the questions [disfmarker] We didn't [disfmarker] we never did figure out how we were gonna do emphasis in [disfmarker] in uh, the semspec. [speaker001:] Mm hmm. Yeah. [speaker002:] Yeah, we've talked a little bit about [pause] that, too, which [disfmarker] uh, uh, it's hard for me to figure out with sort of our general linguistic issues, how they map onto this particular one, but [disfmarker] [speaker001:] Yeah. [speaker002:] OK, yeah, understood. [speaker003:] But that's part of the formalism [disfmarker] is got to be uh, how things like that get marked. [speaker001:] Mm hmm. [speaker002:] W do you have data, like the [disfmarker] the [disfmarker] You have preliminary [pause] data? Cuz I know, you know, we've been using this one easy sentence and I'm sure you guys have [disfmarker] uh, maybe you are the one who've been looking at [pause] the rest of it [disfmarker] [speaker001:] Um, I [speaker002:] it'd [disfmarker] it'd be useful for me, if we want to [pause] have it a little bit more data oriented. [speaker001:] To tell you the truth, what I've been looking at has not been the data so far, [speaker002:] Yeah. Mm hmm [pause] mm hmm. 
[speaker001:] I just sort of said "alright let's see if I can get noun phrases and, uh, major verb co uh, constructions out of the way first." And I have not gotten them out of the way yet. [speaker002:] Mm hmm. [speaker001:] Surprise. So, um. [speaker002:] Yeah. [speaker001:] So, I have not really approached a lot of the data, but I mean obviously like these [disfmarker] the [disfmarker] the question one, since we have this idea about the indefinite pronoun thing and all that, you know, I ca can try and, um run with that, you know, try and do some of the sentence constructions now. [speaker005:] OK. Do you wanna run the indefinite pronoun idea past Jerry? [speaker001:] It would make sense. [speaker002:] OK. [speaker001:] Oh yeah, the basic idea is that um, uh [pause] you know [disfmarker] Uh, [vocalsound] let's see [pause] if I can [pause] formulate this. [speaker005:] So [pause] Mary fixed the car with a wrench. [speaker001:] Yeah. [speaker005:] So you perform the mental sum and then, you know, "who fixed the car with a wrench?" You [pause] basically are told, to [disfmarker] to do this In the [disfmarker] in [disfmarker] analogously to the way you would do "someone fixed the car with a wrench". And then you hand it back to your hippocampus and find out [pause] what that, you know, [speaker001:] Means. [speaker005:] means, and then [pause] come up with that [disfmarker] so who that someone was. [speaker001:] The WH question has this as sort of extra thing which says "and when you're done, tell me who fills that slot" or w you know. [speaker003:] Mm hmm. [speaker001:] So, um. 
And, you know, this is sort of a nice way to do it, the idea of sort of saying that you treat [disfmarker] from the simulation point of view or whatever [disfmarker] you treat, uh, WH constructions similarly to uh, indefinite pronouns like "someone fixed the car" because [pause] lots of languages, um, have WH questions with an indefinite pronoun in situ or whatever, [speaker002:] Use actually the same one. [speaker001:] and you just get intonation to tell you that it's a question. [speaker003:] Alright, which is [speaker001:] So it makes sense um [speaker003:] Skolemization. [speaker001:] Hmm? [speaker003:] In [disfmarker] in logic, it's [disfmarker] it's [disfmarker] [@ @] [comment] it's actual Huh? [speaker002:] Mmm. Right. [vocalsound] Let's put a Skolem [disfmarker] [vocalsound] Skolem constant in, [speaker001:] Yeah. shko [speaker003:] What? [speaker002:] yeah. [speaker001:] Sure. [speaker002:] Yeah. [pause] Right. [speaker001:] OK. [speaker003:] That that's not [disfmarker] that's not saying it's bad, it's just that [disfmarker] [speaker001:] Right. Right. No. Of course. [speaker002:] Mmm. [speaker003:] that [disfmarker] that [disfmarker] the logicians have [disfmarker] have, uh [disfmarker] [speaker001:] That's right. [speaker005:] come up with this [speaker001:] It makes sense from that point of view, too, which is actually better. So yeah, um. Anyway, but just that kind of thing and we'll figure out exactly how to write that up and so on, but [speaker003:] Good. [speaker001:] Uh, no, all the focus stuff. We sort of just dropped that cuz it was too weird and we didn't even know, like, what we were talking about [comment] exactly, what the object of study was. [speaker002:] Um mmm. [speaker001:] So. [speaker003:] Yeah. Well, if [disfmarker] if [disfmarker] I mean, i part of [disfmarker] of what the exercise is, t by the end of next week, is to say what are the things that we just don't have answers for yet. [speaker001:] Yeah. [speaker003:] That's fine. 
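The idea floated above — treat a WH-question like its indefinite-pronoun counterpart ("someone fixed the car"), run the simulation, and then report what ends up bound to that slot, i.e. Skolemization — can be sketched roughly as follows. Every name and data structure here is invented for illustration; it is not the actual simspec/semspec machinery being discussed.

```python
# Sketch of the "WH-word as indefinite pronoun + Skolem constant" idea.
# All names and structures here are hypothetical illustrations.

def skolemize_wh(parse):
    """Replace a WH word with a fresh Skolem constant (like the
    indefinite 'someone') and record which slot the question asks
    us to report back on."""
    skolem = "sk_0"          # fresh constant standing in for the unknown
    slots = {}
    query_slot = None
    for role, filler in parse.items():
        if filler == "who":
            slots[role] = skolem
            query_slot = role  # "when you're done, tell me who fills this"
        else:
            slots[role] = filler
    return slots, query_slot

def answer(slots, query_slot, memory):
    """Stand-in for 'hand it back to your hippocampus': match the
    non-Skolem slots against stored facts and report the binding
    of the queried slot."""
    for fact in memory:
        if all(fact.get(r) == f for r, f in slots.items()
               if not f.startswith("sk_")):
            return fact[query_slot]
    return None

memory = [{"agent": "Mary", "action": "fix", "patient": "car",
           "instrument": "wrench"}]

# "Who fixed the car with a wrench?" analyzed like
# "Someone fixed the car with a wrench."
slots, q = skolemize_wh({"agent": "who", "action": "fix",
                         "patient": "car", "instrument": "wrench"})
print(answer(slots, q, memory))  # Mary
```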
[speaker001:] Yep. [speaker003:] I mean [speaker002:] Mm hmm. [speaker005:] Well, if you [disfmarker] if you do wanna discuss focus [pause] background and then get me into that because [disfmarker] I mean, I wo I w scientifically worked on that for [disfmarker] for almost two years. [speaker001:] Yeah. OK, then certainly we will. [speaker002:] Yeah, you should definitely, um be on on that [disfmarker] [speaker001:] Good. [speaker002:] maybe [disfmarker] maybe by [disfmarker] after Monday we'll [disfmarker] y you can see what things we are and aren't [disfmarker] [speaker001:] Yeah. w We should figure out what our questions are, for example, [vocalsound] to ask you. [speaker002:] Yeah. Yeah. [speaker001:] So. [speaker002:] OK. [speaker001:] OK. [speaker003:] Wel then t Hans. Has [disfmarker] I haven't seen Hans Boas? [speaker002:] He's been around. [speaker001:] Yeah. [speaker003:] OK. So has he been [disfmarker] been involved with this, [speaker002:] Just maybe not today. [speaker003:] or [disfmarker]? [speaker002:] Eh. with us? Yeah. [speaker001:] Yeah. [speaker003:] Yeah. [speaker002:] I would say that tha that those discussions have been primarily, um, Keith and [disfmarker] Keith and me, but um like in th the meeting [disfmarker] I mean, he sort of [disfmarker] I thin like the last meeting we had, I think we were all very much part of it [speaker001:] Yeah. [speaker002:] but [pause] um [speaker001:] Sometimes Hans has been sort of coming in there as sort of like a [pause] devil's advocate type role or something, [speaker002:] but different perspec Yeah. [speaker001:] like [pause] "This make [disfmarker] you know, I'm going to pretend I'm a linguist who has nothing to do with this. This makes no sense." And he'll just go off on parts of it which [pause] definitely need fixing [speaker002:] Right. 
[speaker001:] but aren't where we're at right now, [speaker002:] Like [disfmarker] like what you call certain things, [speaker001:] so it's [speaker002:] which we decided long ago we don't care that much right now. [speaker001:] Yeah. [speaker003:] Right. OK. [speaker002:] But in a sense, it's good to know that he [pause] of all people [disfmarker] you know, like maybe a lot of people would have m much stronger reactions, so, you know, he's like a relatively friendly linguist [speaker001:] Yeah. [speaker002:] and yet a word like "constraint" causes a lot of problems. [speaker001:] Yeah. [speaker002:] And, so. [pause] Right. So. [speaker003:] OK. This is consistent with um the role I had suggested that he [disfmarker] he play, [speaker002:] Ah. [speaker001:] Mm hmm. [speaker003:] OK, which was [pause] that o one of the things I would like to see happen is a paper that was tentatively called "Towards a formal cognitive semantics" which was addressed to these linguists [pause] uh [pause] who haven't been following [pause] this stuff at all. [speaker001:] Yeah. [speaker003:] So [pause] it could be that he's actually, at some level, thinking about how am I going to [pause] communicate this story [disfmarker] [speaker001:] Yeah. Yeah. [speaker003:] So, internally, we should just do [pause] whatever works, [speaker001:] Yeah. [speaker003:] cuz it's hard enough. [speaker001:] Yeah. [speaker003:] But [pause] if he g if he turns [disfmarker] is [disfmarker] is really gonna turn around and help t to write this version that does [pause] connect with as many as possible of the [pause] other linguists in the world um [comment] then [disfmarker] then it becomes important to [pause] use terminology that doesn't make it hard [disfmarker] [speaker002:] Mm hmm. [speaker001:] Mm hmm. Yeah. Yeah. [speaker002:] Mm hmm. [pause] Sure. [speaker003:] I mean, it's gonna be plenty hard for [disfmarker] for people to understand it as it is, but y y you don't want to make it worse. 
[speaker001:] Yeah. Yeah. [speaker003:] So. [speaker001:] No, right. I mean, tha that role is [disfmarker] is, uh, indispensable but that's not where sort of our heads were at in these meetings. [speaker003:] Right. Yeah, yeah. [disfmarker] No, that's fine. [speaker001:] It was a little strange. [speaker003:] I just wanted t to I have to catch up with him, and I wanted t to get a feeling for that. OK. [speaker001:] Yeah. [speaker002:] Mm hmm. [speaker001:] So I don't know what his take will be on these meetings exactly, you know. [speaker003:] OK. Good. [speaker001:] Cuz sometimes he sort of sounds like we're talking a bunch of gobbledygook from his point of view. [speaker002:] I think it's good when we're [disfmarker] when we're into data and looking at the [disfmarker] some specific linguistic phenomenon [pause] in [disfmarker] in English or in German, in particular, whatever, that's great, [speaker003:] Yeah. [speaker001:] Mm hmm. [speaker002:] and Ben and [disfmarker] and Hans are, if [disfmarker] if anything, more [disfmarker] you know, they have more to say than, let's say, I would about some of these things. [speaker003:] Right. [speaker002:] But when it's like, well, w how do we capture these things, you know, I think it's definitely been Keith and I who have d you know, who have worried more about the [disfmarker] [speaker003:] Well, that's good. That's [disfmarker] I I I think that should be the [disfmarker] the core group [speaker001:] Mm hmm. [speaker002:] s Which is fine. [speaker001:] Yeah. [speaker002:] Mm hmm. [speaker003:] and [pause] um that's, you know, I think [pause] very close to the maximum number of people working together that can get something done. [speaker001:] Yeah. [speaker002:] Yes. Yeah. We actually have [disfmarker] I think we have been making progress, and it's sort of surprising. [speaker001:] Yeah. [speaker003:] I [disfmarker] I [disfmarker] I [disfmarker] I definitely get that impression. Yeah.
[speaker002:] You know, like [disfmarker] [speaker003:] That's great. [speaker001:] Yep. [speaker002:] Yeah. So anyone else would like uh [comment] ruin the balance of [disfmarker] Anyway. [speaker003:] Well, but [disfmarker] Well. But th th then w then we have to come back to the bigger group. [speaker002:] Right. [speaker001:] Yeah. [speaker003:] Yeah. [comment] [pause] Great. And then we're gon we're gonna [disfmarker] because of this other big thing we haven't talked about is [pause] actually implementing this stuff? So that I guess the three of us are gonna connect tomorrow about that. [speaker002:] Yeah, we could talk tomorrow. I was just gonna say, though, that, for instance, there was [disfmarker] you know, out of a meeting with Johno [pause] came the suggestion that "oh, could it be that the [pause] meaning [pause] constraints really aren't used for selection?" which has sort of been implicit [pause] in the parsing [pause] strategy we talked about. [speaker003:] Right. [speaker002:] In which case we w we can just say that they're the effects or the bindings. Which [pause] uh, so far, in terms of like putting up all the constraints as, you know, pushing them into type constraints, the [disfmarker] when I've, you know, propo then proposed it to linguists who haven't yet given me [disfmarker] you know, we haven't yet thought of a reason that that wouldn't work. Right? As long as we allow our type constraints to be reasonably [pause] complex. [speaker003:] Well, it [disfmarker] [speaker002:] So [disfmarker] Anyway, to be [disfmarker] to talk about later. [speaker003:] Yeah, it has to in the sense that you're gonna use them eventu it's [disfmarker] you know, it's sort of a, um, generate and test kind of thing, [speaker002:] Mm hmm. [pause] Mm hmm. [speaker003:] and if you over generate then you'll have to do more. 
I mean, if there are some constraints that you hold back and don't use uh, in your initial matching then you'll match some things [disfmarker] [speaker002:] Mm hmm. [pause] Mm hmm. [speaker003:] I mean, I [disfmarker] I d I don't think there's any way that it could completely fail. It [disfmarker] it could be that uh, you wind up [disfmarker] I mean [disfmarker] The original bad idea of purely context free grammars died because [pause] there were just vastly too many parses. You know, exponentially num num many parses. And so th the concern might be that [disfmarker] not that it would totally fail, but that [disfmarker] [speaker002:] Mm hmm. Mm hmm. That it would still generate too many. [comment] Right? [speaker003:] it would still genera [speaker002:] So by just having semantic even bringing semantics in for matching just in the form of j semantic types, right? Like "conceptually these have to be construed as this, this, and this" might still give us quite a few possibilities [speaker003:] Yeah. We don't know, [speaker002:] that, you know [disfmarker] And [disfmarker] and it certainly helps a lot. [speaker003:] but, yeah. [speaker002:] I mean, le let's put it that way. [speaker003:] No question. Yeah. [speaker002:] So. [speaker003:] And I think it's a [disfmarker] it's a perfectly fine place to start. You know, and say, let let's see how far we can go this way. [speaker002:] Mm hmm. [pause] Mm hmm. [speaker003:] And, uh [disfmarker] [speaker004:] Well it definitely makes the problem easier. [speaker003:] I'm [disfmarker] I'm in favor of that. Uh, cuz I think i I think it's [disfmarker] As you know, I think it's real hard and if w if we [disfmarker] Right. [speaker002:] So [pause] Friday, Monday [speaker001:] Yeah. [speaker002:] Monday. [speaker001:] Yeah. [speaker002:] So. OK, that's [disfmarker] Tuesday. [speaker003:] Yeah. [speaker001:] Yeah. [speaker002:] Like [disfmarker] [comment] th that's the conclusion. OK. 
[speaker005:] So, you your dance card is [pause] completely filled now? [speaker004:] Mm hmm. [speaker001:] Shoot. [speaker002:] Yeah, and I have nothing to do this weekend but work. [speaker005:] Why don't [disfmarker] [speaker002:] No, that's not really true, but like [disfmarker] [speaker001:] Bummer. [speaker004:] What about [disfmarker] What about DDR? [speaker002:] It's almost true. Oh, I don't have it this weekend, so, tsk [comment] don't have to worry about that. [speaker004:] Mmm. [speaker003:] DDR, he asked? [speaker002:] Speaking of dance, Dance Dance Revolution I can't believe I'm [disfmarker] It's a [disfmarker] it's like a game, but it's for, like, dancing. Hard to [disfmarker] It's like karaoke, but for dancing, and they tell you what [disfmarker] It's amazing. It's so much fun. Yeah, it's so good. My friend has a home version and he brought it over, and we are so into it. It's so amazing. Well, y you know of it? I i i it's one of your hobbies? It's great exercise, I must say. I can't wait to hear this. Uh huh. Oh, definitely. They have, like, places [disfmarker] instead of like [disfmarker] Yeah, instead of karaoke bars now that have, like, DDR, like [disfmarker] Yeah, yeah, I didn't until I started hanging out with this friend, who's like "Oh, well, I can bring over the DDR if you want." Oh, oh, Dance Dance Revolution [disfmarker] OK. He actually brought a clone called Stepping Selection, but it's just as good. So. Anyw [speaker005:] Let's see. Test? Test? Yeah. OK. [speaker002:] Channel one. [speaker001:] Hello? Hello? [speaker003:] Test. [speaker005:] I was saying Hynek'll be here next week, uh, Wednesday through Friday [disfmarker] uh, through Saturday, and, um, I won't be here Thursday and Friday. But my suggestion is that, uh, at least for this meeting, people should go ahead, uh, cuz Hynek will be here, and, you know, we don't have any Czech accent yet, uh, [vocalsound] as far as I know, so [disfmarker] [speaker006:] OK. 
[speaker005:] There we go. Um. So other than reading digits, what's our agenda? [speaker006:] I don't really have, uh, anything new. Been working on [pause] Meeting Recorder stuff. So. [speaker005:] OK. Um. Do you think that would be the case for next week also? Or is [disfmarker] is, uh [disfmarker]? What's your projection on [disfmarker]? [speaker006:] Um. [speaker005:] Cuz the one thing [disfmarker] the one thing that seems to me we really should try, if you hadn't tried it before, because it hadn't occurred to me [disfmarker] it was sort of an obvious thing [disfmarker] is, um, adjusting the, uh, sca the scaling and, uh, insertion penalty sorta stuff. [speaker006:] I did play with that, actually, a little bit. Um. What happens is, uh, [vocalsound] when you get to the noisy stuff, you start getting lots of insertions. [speaker005:] Right. [speaker006:] And, um, so I've tried playing around a little bit with, um, the insertion penalties and things like that. [speaker005:] Yeah. [speaker006:] Um. I mean, it [disfmarker] it didn't make a whole lot of difference. Like for the well matched case, it seemed like it was pretty good. Um. [vocalsound] I could do more playing with that, though. And, uh [disfmarker] and see. [speaker005:] But you were looking at mel cepstrum. [speaker006:] Yes. [speaker005:] Right. [speaker006:] Oh, you're talking about for th [vocalsound] for our features. [speaker005:] Right. So, I mean, i it it's not the direction that you were working with that we were saying what's the [disfmarker] uh, what's the best you can do with [disfmarker] with mel cepstrum. [speaker006:] Mmm. 
[speaker005:] But, they raised a very valid point, which, I guess [disfmarker] So, to first order [disfmarker] I mean, you have other things you were gonna do, but to first order, I would say that the conclusion is that if you, um, do, uh, some monkeying around with, uh, the exact HTK training and [@ @] [comment] with, uh, you know, how many states and so forth, that it [disfmarker] it doesn't particularly improve the performance. In other words, that even though it sounds pretty dumb, just applying the same number of states to everything, more or less, no matter what language, isn't so bad. Right? And I guess you hadn't gotten to all the experiments you wanted to do with number of Gaussians, [speaker006:] Right. [speaker005:] but, um, let's just [disfmarker] If we had to [disfmarker] if we had to draw a conclusion on the information we have so far, we'd say something like that. Right? [speaker006:] Mm hmm. [speaker005:] Uh, so the next question to ask, which is I think the one that [disfmarker] that [disfmarker] that Andreas was dre addressing himself to in the lunch meeting, is, um, we're not supposed to adjust the back end, but anybody using the system would. [speaker006:] Yeah. [speaker005:] So, if you were just adjusting the back end, how much better would you do, uh, in noise? Uh, because the language scaling and insertion penalties and so forth are probably set to be about right for mel cepstrum. [speaker006:] Mm hmm. [speaker005:] But, um, they're probably not at all set right for these things, particularly these things that look over, uh, larger time windows, in one way or another with [disfmarker] with LDA and KLT and neural nets and [vocalsound] all these things. In the fa past we've always found that we had to increase the insertion penalty to [disfmarker] to correspond to such things. So, I think that's, uh, [@ @] [comment] that's kind of a first order thing that [disfmarker] that we should try. 
[speaker006:] So for th so the experiment is to, um, run our front end like normal, with the default, uh, insertion penalties and so forth, and then tweak that a little bit and see how much of a difference it makes [speaker005:] So by "our front end" I mean take, you know, the Aurora two s take some version that Stephane has that is, you know, our current best version of something. [speaker006:] if we were [disfmarker] Mm hmm. [speaker005:] Um. I mean, y don't wanna do this over a hundred different things that they've tried but, you know, for some version that you say is a good one. You know? Um. How [disfmarker] how much, uh, does it improve if you actually adjust that? [speaker006:] OK. [speaker005:] But it is interesting. You say you [disfmarker] you have for the noisy [disfmarker] How about for the [disfmarker] for the mismatched or [disfmarker] or [disfmarker] or [disfmarker] or the [disfmarker] or the medium mismatched conditions? Have you [disfmarker]? When you adjusted those numbers for mel cepstrum, did it [disfmarker]? [speaker006:] Uh, I [disfmarker] I don't remember off the top of my head. Um. Yeah. I didn't even write them down. I [disfmarker] I [disfmarker] I don't remember. I would need to [disfmarker] Well, I did write down, um [disfmarker] So, when I was doing [disfmarker] I just wrote down some numbers for the well matched case. [speaker005:] Yeah. [speaker006:] Um. Looking at the [disfmarker] I wrote down what the deletions, substitutions, and insertions were, uh, for different numbers of states per phone. [speaker005:] Yeah. [speaker006:] Um, but, uh, that [disfmarker] that's all I wrote down. [speaker005:] OK. [speaker006:] So. I [disfmarker] I would [disfmarker] Yeah. I would need to do that. [speaker005:] OK. So [disfmarker] [speaker006:] I can do that for next week. [speaker005:] Yeah. And, um [disfmarker] Yeah. 
Also, eh, eh, sometimes if you run behind on some of these things, maybe we can get someone else to do it and you can supervise or something. But [disfmarker] but I think it would be [disfmarker] it'd be good to know that. [speaker006:] OK. I just need to get, um, [vocalsound] front end, uh, stuff from you [speaker002:] Hmm. [speaker006:] or you point me to some files [pause] that you've already calculated. [speaker002:] Yeah. Alright. [speaker005:] OK. Uh. [speaker006:] I probably will have time to do that and time to play a little bit with the silence model. [speaker005:] Mm hmm. [speaker006:] So maybe I can have that for next week when Hynek's here. [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker005:] Yeah. Cuz, I mean, the [disfmarker] the other [disfmarker] That, in fact, might have been part of what, uh, the difference was [disfmarker] at least part of it that [disfmarker] that we were seeing. Remember we were seeing the SRI system was so much better than the tandem system. [speaker006:] Hmm. [speaker005:] Part of it might just be that the SRI system, they [disfmarker] they [disfmarker] they always adjust these things to be sort of optimized, [speaker006:] Is there [disfmarker]? [speaker005:] and [disfmarker] [speaker006:] I wonder if there's anything that we could do [vocalsound] to the front end that would affect the insertion [disfmarker] [speaker005:] Yes. I think you can. [speaker006:] What could you do? [speaker005:] Well, um [disfmarker] uh, part of what's going on, um, is the, uh, the range of values. So, if you have something that has a much smaller range or a much larger range, and taking the appropriate root. [speaker006:] Oh. Mm hmm. [speaker005:] You know? If something is kind of like the equivalent of a bunch of probabilities multiplied together, you can take a root of some sort. If it's like seven probabilities together, you can take the seventh root of it or something, or if it's in the log domain, divide it by seven. [speaker006:] Mm hmm. 
[speaker005:] But [disfmarker] but, um, that has a similar effect because it changes the scale of the numbers [disfmarker] of the differences between different candidates from the acoustic model [speaker006:] Oh, right. [speaker005:] as opposed to what's coming from the language model. [speaker006:] So that w Right. So, in effect, that's changing the value of your insertion penalty. [speaker005:] Yeah. I mean, it's more directly like the [disfmarker] the language scaling or the, uh [disfmarker] the model scaling or acoustic scaling, [speaker006:] That's interesting. [speaker005:] but you know that those things have kind of a similar effect to the insertion penalty [speaker006:] Mm hmm. [speaker005:] anyway. They're a slightly different way of [disfmarker] of handling it. [speaker006:] Right. [speaker005:] So, um [disfmarker] [speaker006:] So if we know what the insertion penalty is, then we can get an idea about what range our number should be in, so that they [pause] match with that. [speaker005:] I think so. Yeah. Yeah. So that's why I think that's another reason other than curiosity as to why i it would in fact be kinda neat to find out if we're way off. [speaker006:] Mm hmm. [speaker005:] I mean, the other thing is, are aren't we seeing [disfmarker]? Y y I'm sure you've already looked at this bu in these noisy cases, are [disfmarker]? We are seeing lots of insertions. Right? The insertion number is quite high? I know the VAD takes pre care of part of that, [speaker002:] Yeah. [speaker006:] Yeah. [speaker005:] but [disfmarker] [speaker006:] I've seen that with the mel cepstrum. [speaker002:] Yeah. [speaker006:] I don't [disfmarker] I don't know about [pause] the Aurora front end, but [disfmarker] [speaker002:] I think it's much more balanced with, uh [disfmarker] when the front end is more robust. Yeah. I could look at it [disfmarker] at this. Yeah. [speaker005:] Yeah. Wha what's a typical number? [speaker002:] Mm hmm. I don't [disfmarker] I don't know. 
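The point being made here — that rescaling the acoustic scores (taking a root, or dividing by a constant in the log domain) has the same effect on hypothesis ranking as scaling the language-model weight and insertion penalty on the back end — can be shown with a toy calculation. The scoring function and all numbers below are made up for illustration; this is not HTK's actual decoder code.

```python
# Toy illustration: dividing acoustic log-probs by k is equivalent
# (up to an overall scale that doesn't change ranking) to scaling
# the LM weight and insertion penalty by k on the back end.
# Function shape and numbers are illustrative, not HTK internals.

def hyp_score(acoustic_logprobs, lm_logprob, n_words,
              lm_scale=1.0, ins_penalty=0.0):
    """Combined Viterbi-style score for one hypothesis."""
    return (sum(acoustic_logprobs)
            + lm_scale * lm_logprob
            - ins_penalty * n_words)

ac = [-2.0, -1.5, -2.5]   # made-up per-frame acoustic log-probs
s1 = hyp_score(ac, lm_logprob=-4.0, n_words=3)   # baseline: -10.0

# Front-end change: divide acoustic scores by k (the "take the
# k-th root" remark, in the log domain)...
k = 7.0
s2 = hyp_score([a / k for a in ac], lm_logprob=-4.0, n_words=3)

# ...back-end change: leave acoustics alone, multiply the LM
# weight by k, then divide the total by k. Same score, so the
# same ranking of competing hypotheses.
s3 = hyp_score(ac, lm_logprob=-4.0, n_words=3, lm_scale=k) / k
print(abs(s2 - s3) < 1e-9)  # True
```

So if the front end shrinks or inflates the dynamic range of its scores, that silently shifts the effective LM scale and insertion penalty the back end was tuned for — which is the concern raised above.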
[speaker005:] Do we [disfmarker]? Oh, you [disfmarker] oh, you don't know. OK. [speaker002:] I don't have this in [disfmarker] [speaker005:] I'm sure it's more balanced, but it [disfmarker] it [disfmarker] it wouldn't surprise me if there's still [disfmarker] [speaker002:] Mm hmm. [speaker005:] I mean, in [disfmarker] in the [disfmarker] the [disfmarker] the old systems we used to do, I [disfmarker] I [disfmarker] uh, I remember numbers kind of like insertions being half the number of deletions, as being [disfmarker] and both numbers being [disfmarker] tend to be on the small side comparing to [disfmarker] to, uh, substitutions. [speaker002:] Mm hmm. Mm hmm. [speaker006:] Well, this [disfmarker] the whole problem with insertions was what I think, um, we talked about when the guy from OGI came down [pause] that one time and [disfmarker] and that was when people were saying, well we should have a, uh, uh, voice activity detector [disfmarker] [speaker005:] Right. [speaker006:] that, because all that stuff [comment] that we're getting thr the silence that's getting through is causing insertions. So. [speaker005:] Right. [speaker002:] Mmm. [speaker006:] I'll bet you there's still a lot [vocalsound] of insertions. [speaker002:] Mm hmm. [speaker005:] Yeah. And it may be less of a critical thing. I mean, the fact that some get by may be less of a critical thing if you, uh, get things in the right range. [speaker006:] Mm hmm. [speaker005:] So, I mean, the insertions is [disfmarker] is a symptom. It's a symptom that there's something, uh, wrong with the range. [speaker006:] Right. [speaker005:] But there's [disfmarker] uh, your [disfmarker] your [disfmarker] your substitutions tend to go up as well. So, uh, I [disfmarker] I [disfmarker] I think that, [speaker006:] Mm hmm. [speaker005:] uh, the most obvious thing is just the insertions, [@ @]. But [disfmarker] Uh [disfmarker] um. 
If you're operating in the wrong range [disfmarker] I mean, that's why just in general, if you [vocalsound] change what these [disfmarker] these penalties and scaling factors are, you reach some point that's a [disfmarker] that's a minimum. So. Um. Um. We do have to do well over a range of different conditions, some of which are noisier than others. Um. But, um, I think we may get a better handle on that if we [disfmarker] if we see [disfmarker] Um, I mean we ca it's if we actually could pick a [disfmarker] a [disfmarker] a more stable value for the range of these features, it, um, uh, could [disfmarker] Uh [disfmarker] Even though it's [disfmarker] it's [disfmarker] it's true that in a real situation you can in fact adjust the [disfmarker] these [disfmarker] these scaling factors in the back end, and it's ar artificial here that we're not adjusting those, you certainly don't wanna be adjusting those all the time. [speaker006:] Hmm. [speaker005:] And if you have a nice front end that's in roughly the right range [disfmarker] I remember after we got our stuff more or less together in the previous systems we built, that we tended to set those scaling factors at kind of a standard level, and we would rarely adjust them again, even though you could get a [disfmarker] [speaker006:] Mm hmm. [speaker005:] for an evaluation you can get an extra point or something if you tweaked it a little bit. But, once we knew what rou roughly the right operating range was, it was pretty stable, and [disfmarker] Uh, we might just not even be in the right operating range. [speaker006:] So, would the [disfmarker]? Uh, would a good idea be to try to map it into the same range that you get in the well matched case? So, if we computed what the range was in well matched, and then when we get our noisy conditions out we try to make it have the same range as [disfmarker]? [speaker005:] No. You don't wanna change it for different conditions. No. No. 
I [disfmarker] I [disfmarker] I [disfmarker] What [disfmarker] what I'm saying [disfmarker] [speaker006:] Oh, I wasn't suggesting change it for different conditions. I was just saying that when we pick a range, we [disfmarker] we wanna pick a range that we map our numbers into [disfmarker] we should probably pick it based on the range that we get in the well matched case. [speaker005:] Yeah. [speaker006:] Otherwise, I mean, what range are we gonna choose to [disfmarker] to map everything into? [speaker005:] Well. It depends how much we wanna do gamesmanship and how much we wanna do [disfmarker] I mean, i if he it [disfmarker] to me, actually, even if you wanna be [disfmarker] play on the gamesmanship side, it can be kinda tricky. So, I mean, what you would do is set the [disfmarker] set the scaling factors, uh, so that you got the best number for this point four five times the [disfmarker] [vocalsound] you know, and so on. [speaker006:] Mm hmm. [speaker005:] But they might change that [disfmarker] those weightings. [speaker006:] Yeah. [speaker005:] Um. So [disfmarker] Uh [disfmarker] I just sorta think we need to explore the space. [speaker006:] Mm hmm. [speaker005:] Just take a look at it a little bit. [speaker006:] OK. [speaker005:] And we [disfmarker] we [disfmarker] we may just find that [disfmarker] that we're way off. [speaker006:] Mm hmm. [speaker005:] Maybe we're not. You know? As for these other things, it may turn out that, uh, [vocalsound] it's kind of reasonable. But then [disfmarker] I mean, Andreas gave a very reasonable response, and he's probably not gonna be the only one who's gonna say this in the future [disfmarker] of, you know, people [disfmarker] people within this tight knit community who are doing this evaluation [vocalsound] are accepting, uh, more or less, that these are the rules. [speaker006:] Yeah. [speaker005:] But, people outside of it who look in at the broader picture are certainly gonna say "Well, wait a minute. 
You're doing all this standing on your head, uh, on the front end, when all you could do is just adjust this in the back end with one s one knob." [speaker006:] Mm hmm. [speaker005:] And so we have to at least, I think, determine that that's not true, which would be OK, or determine that it is true, in which case we want to adjust that and then continue with [disfmarker] with what we're doing. [speaker006:] Right. [speaker005:] And as you say [disfmarker] as you point out [disfmarker] finding ways to then compensate for that in the front end [vocalsound] also then becomes a priority for this particular test, and saying you don't have to do that. [speaker006:] Mm hmm. [speaker005:] So. OK. So, uh [disfmarker] What's new with you? [speaker002:] Uh. So there's nothing [pause] new. [speaker005:] Uh, what's old with you that's developed? [speaker002:] Um. I'm sorry? [speaker005:] You [disfmarker] OK. What's old with you that has developed over the last week or two? [speaker002:] Mmm. Well, so we've been mainly working on the report and [disfmarker] and [disfmarker] Yeah. [speaker006:] Mainly working on what? [speaker002:] On the report [pause] of the work that was already done. [speaker006:] Oh. [speaker002:] Um. [speaker006:] How about that [disfmarker]? [speaker002:] Mm hmm. That's all. [speaker006:] Any anything new on the thing that, uh, you were working on with the, uh [disfmarker]? [speaker003:] I don't have results yet. [speaker006:] No results? Yeah. [speaker005:] What was that? [speaker006:] The [disfmarker] the, uh, [speaker001:] Voicing thing. [speaker006:] voicing detector. [speaker005:] I mean, what what's [disfmarker] what's going on now? What are you [pause] doing? [speaker003:] Uh, to try to found, nnn, robust feature for detect between voice and unvoice. And we [disfmarker] w we try to use [vocalsound] the variance [vocalsound] of the es difference between the FFT spectrum and mel filter bank spectrum. [speaker005:] Yeah. 
[speaker003:] Uh, also the [disfmarker] another parameter is [disfmarker] relates with the auto correlation function. [speaker005:] Uh huh. [speaker003:] R ze energy and the variance a also of the auto correlation function. [speaker005:] Uh huh. So, that's [disfmarker] Yeah. That's what you were describing, I guess, a week or two ago. So. [speaker003:] Yeah. But we don't have res we don't have result of the AURO for Aurora yet. [speaker005:] Mm hmm. [speaker003:] We need to train the neural network and [disfmarker] [speaker005:] So you're training neural networks now? [speaker003:] No, not yet. [speaker005:] So, what [disfmarker] wha [vocalsound] wh wha what what's going on? [speaker003:] Well, we work in the report, too, because we have a lot of result, [speaker005:] Uh huh. [speaker003:] they are very dispersed, and was necessary to [disfmarker] to look in all the directory to [disfmarker] to [disfmarker] to give some more structure. [speaker005:] So. B So [disfmarker] [speaker002:] Yea [speaker005:] Yeah. I if I can summarize, basically what's going on is that you're going over a lot of material that you have generated in furious fashion, f generating many results and doing many experiments and trying to pull it together into some coherent form to be able to see wha see what happens. [speaker003:] Hm hmm. [speaker002:] Uh, y yeah. Basically we we've stopped, uh, experimenting, [speaker005:] Yes? [speaker002:] I mean. We're just writing some kind of technical report. And [disfmarker] [speaker006:] Is this a report that's for Aurora? Or is it just like a tech report for ICSI, [speaker003:] No. [speaker006:] or [disfmarker]? [speaker002:] Yeah. [speaker003:] For ICSI. [speaker006:] Ah. I see. [speaker002:] Yeah. [speaker003:] Just summary of the experiment and the conclusion and something like that. [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker005:] OK. So, my suggestion, though, is that you [disfmarker] you not necessarily finish that. 
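The voiced/unvoiced cues described above (the variance of the difference between the FFT spectrum and the mel-filter-bank spectrum, plus autocorrelation-based measures such as the zero-lag energy) can be sketched roughly as follows. This is an illustrative reconstruction, not the actual feature extraction; in particular, the band averaging below is a crude stand-in for a real mel filter bank, which uses triangular filters on a mel-warped axis:

```python
import numpy as np

def voicing_features(frame, n_mel=20):
    """Illustrative voiced/unvoiced cues, roughly as described above.

    Voiced frames show harmonic ripple, so the FFT spectrum deviates
    more from a smooth filter-bank-like envelope, and their
    autocorrelation is strong and slowly varying.
    """
    spec = np.abs(np.fft.rfft(frame))
    # crude band-averaged "smooth" spectrum (assumption: equal-width
    # bands instead of a true mel filter bank)
    bands = np.array_split(spec, n_mel)
    smooth = np.concatenate([np.full(len(b), b.mean()) for b in bands])
    ripple_var = np.var(np.log(spec + 1e-8) - np.log(smooth + 1e-8))

    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    r0 = ac[0]                        # zero-lag autocorrelation = energy
    ac_var = np.var(ac / (r0 + 1e-8))  # variance of normalized autocorrelation
    return ripple_var, r0, ac_var
```

On a strongly harmonic frame both the spectral-ripple variance and the autocorrelation variance come out larger than on a white-noise frame, which is the separation such features rely on.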
But that you put it all together so that it's [disfmarker] you've got [disfmarker] you've got a clearer structure to it. You know what things are, you have things documented, you've looked things up that you needed to look up. So that, you know [disfmarker] so that such a thing can be written. [speaker002:] Mm hmm. [speaker005:] And, um [disfmarker] When [disfmarker] when [disfmarker] when do you leave again? [speaker003:] Uh, in July. First of July. [speaker005:] First of July? OK. And that you figure on actually finishing it in [disfmarker] in June. Because, you know, you're gonna have another bunch of results to fit in there anyway. [speaker002:] Mm hmm. [speaker003:] Mm hmm. [speaker005:] And right now it's kind of important that we actually go forward with experiments. [speaker003:] It's not. [speaker005:] So [disfmarker] so, I [disfmarker] I think it's good to pause, and to gather everything together and make sure it's in good shape, so that other people can get access to it and so that it can go into a report in June. But I think [vocalsound] to [disfmarker] to really work on [disfmarker] on fine tuning the report n at this point is [disfmarker] is probably bad timing, I [disfmarker] I [pause] think. [speaker002:] Mm hmm. Yeah. Well, we didn't [disfmarker] we just planned to work on it one week on this report, not [disfmarker] no more, anyway. Um. [speaker005:] But you ma you may really wanna add other things later anyway [speaker002:] Yeah. Mm hmm. [speaker005:] because you [disfmarker] [speaker002:] Mmm. [speaker005:] There's more to go? [speaker002:] Yeah. Well, so I don't know. There are small things that we started to [disfmarker] to do. But [disfmarker] [speaker006:] Are you discovering anything, uh, that makes you scratch your head as you write this report, like why did we do that, or why didn't we do this, [speaker002:] Uh. [speaker006:] or [disfmarker]? [speaker002:] Yeah. Yeah. 
And [disfmarker] Actually, there were some tables that were also with partial results. We just noticed that, wh while gathering the result that for some conditions we didn't have everything. [speaker006:] Mmm. [speaker002:] But anyway. Um. Yeah, yeah. We have, yeah, extracted actually the noises from [pause] the SpeechDat Car. And so, we can train neural network with speech and these noises. Um. It's difficult to say what it will give, because when we look at the Aurora [disfmarker] the TI digits experiments, um, they have these three conditions that have different noises, and apparently this system perform as well on the seen noises [disfmarker] on the unseen noises and on the seen noises. But, I think this is something we have to try anyway. So [disfmarker] adding the noises from [disfmarker] from the SpeechDat Car. Um. [speaker005:] That's [disfmarker] that's, uh [disfmarker] that's permitted? [speaker002:] Uh. Well, OGI does [disfmarker] did that. Um. At some point they did that for [disfmarker] for the voice activity detector. [speaker003:] Uh, for a v VAD. [speaker002:] Right? Um. [speaker006:] Could you say it again? What [disfmarker] what exactly did they do? [speaker002:] They used some parts of the, um, Italian database to train the voice activity detector, I think. It [disfmarker] [speaker005:] Yeah. I guess the thing is [disfmarker] Yeah. I guess that's a matter of interpretation. The rules as I understand it, is that in principle the Italian and the Spanish and the English [disfmarker] no, Italian and the Finnish and the English? [disfmarker] were development data [speaker002:] Yeah. And Spanish, yeah. [speaker005:] on which you could adjust things. And the [disfmarker] and the German and Danish were the evaluation data. [speaker002:] Mm hmm. [speaker005:] And then when they finally actually evaluated things they used everything. [speaker002:] Yeah. That's right. 
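Training the network on speech plus the noises extracted from SpeechDat Car, as mentioned above, typically means scaling a noise segment so the mixture hits a target signal-to-noise ratio before adding it. A minimal sketch; the function name and mixing recipe are assumptions for illustration, not the actual training pipeline:

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db, rng=None):
    """Mix a randomly chosen noise segment into speech at a target SNR."""
    if rng is None:
        rng = np.random.default_rng()
    # pick a random noise segment of matching length
    start = rng.integers(0, len(noise) - len(speech) + 1)
    seg = noise[start:start + len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(seg ** 2)
    # scale noise so that 10*log10(p_speech / p_noise_scaled) == snr_db
    gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + gain * seg
```

By construction, the power ratio of the clean speech to the added noise in the returned mixture equals the requested SNR exactly.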
[speaker005:] So [disfmarker] [speaker002:] Uh [disfmarker] [speaker005:] Uh, and it is true that the performance, uh, on the German was [disfmarker] I mean, even though the improvement wasn't so good, the pre the raw performance was really pretty good. [speaker002:] Mm hmm. [speaker005:] So [disfmarker] And, uh, it [disfmarker] it doesn't appear that there's strong evidence that even though things were somewhat tuned on those three or four languages, that [disfmarker] that going to a different language really hurt you. And the noises were not exactly the same. Right? Because it was taken from a different, uh [disfmarker] I mean they were different drives. [speaker002:] Different cars. [speaker005:] I mean, it was [disfmarker] it was actual different cars and so on. [speaker002:] Yeah. Yeah. [speaker005:] So. Um, it's somewhat tuned. It's tuned more than, you know, a [disfmarker] a [disfmarker] a [disfmarker] a [disfmarker] [speaker002:] Mm hmm. [speaker005:] You'd really like to have something that needed no particular noise at all, maybe just some white noise or something like that a at most. [speaker002:] Mm hmm. [speaker005:] But that's not really what this contest is. So. Um, I guess it's OK. [speaker002:] Mm hmm. [speaker006:] I think it's [disfmarker] [speaker005:] That's something I'd like to understand before we actually use something from it, because it would [disfmarker] [speaker006:] it's probably something that, mmm, the [disfmarker] you know, the, uh, experiment designers didn't really think about, because I think most people aren't doing trained systems, or, you know, uh, systems that are like ours, where you actually use the data to build models. I mean, they just [pause] doing signal processing. [speaker002:] Yeah. [speaker006:] So. [speaker005:] Well, it's true, except that, uh, that's what we used in Aurora one, and then they designed the things for Aurora two knowing that we were doing that. [speaker006:] Yeah. That's true. [speaker005:] Um. 
[speaker006:] And they didn't forbid us [disfmarker] right? [disfmarker] to build models on the data? [speaker005:] No. But, I think [disfmarker] I think that it [disfmarker] it [disfmarker] it probably would be the case that if, say, we trained on Italian, uh, data and then, uh, we tested on Danish data and it did terribly, uh, that [disfmarker] that it would look bad. And I think someone would notice and would say "Well, look. This is not generalizing." [speaker006:] Mm hmm. [speaker005:] I would hope tha I would hope they would. Um. But, uh, it's true. You know, maybe there's parameters that other people have used [disfmarker] you know, th that they have tuned in some way for other things. So it's [disfmarker] it's, uh [disfmarker] We should [disfmarker] we should [disfmarker] Maybe [disfmarker] that's maybe a topic [disfmarker] Especially if you talk with him when I'm not here, that's a topic you should discuss with Hynek to, you know, double check it's OK. [speaker002:] Mm hmm. [speaker006:] Do we know anything about [pause] the speakers for each of the, uh, training utterances? [speaker002:] What do you mean? [speaker006:] Do you have speaker information? [speaker002:] We [disfmarker] we [disfmarker] [speaker005:] Social security number [speaker006:] That would be good. [speaker002:] Like, we have [pause] male, female, [speaker006:] Bank PIN. [speaker003:] Hmm. [speaker006:] Just male f female? [speaker002:] at least. Mmm. [speaker005:] What kind of information do you mean? [speaker006:] Well, I was thinking about things like, you know, gender, uh [disfmarker] you know, gender specific nets and, uh, vocal tract length normalization. [speaker002:] Mm hmm. [speaker006:] Things like that. I d I don't [disfmarker] I didn't know what information we have about the speakers that we could try to take advantage of. [speaker002:] Mm hmm. [speaker005:] Hmm. Uh. Right. I mean, again, i if you had the whole system you were optimizing, that would be easy to see. 
But if you're [vocalsound] supposedly just using a fixed back end and you're just coming up with a feature vector, w w I'm not sure [disfmarker] I mean, having the two nets [disfmarker] Suppose you detected that it was male, it was female [disfmarker] you come up with different [disfmarker] [speaker006:] Well, you could put them both in as separate streams or something. Uh. [speaker002:] Mm hmm. [speaker005:] Maybe. [speaker006:] I don't know. I was just wondering if there was other information we could exploit. [speaker002:] Mm hmm. [speaker005:] Hmm. Yeah, it's an interesting thought. Maybe having something along the [disfmarker] I mean, you can't really do vocal tract normalization. But something that had some of that effect [speaker006:] Yeah. [speaker005:] being applied to the data in some way. [speaker006:] Mm hmm. [speaker005:] Um. [speaker002:] Do you have something simple in mind for [disfmarker] I mean, vocal tract length normalization? [speaker006:] Uh no. I hadn't [disfmarker] I hadn't thought [disfmarker] it was [disfmarker] thought too much about it, really. It just [disfmarker] something that popped into my head just now. And so I [disfmarker] I [disfmarker] I mean, you could maybe use the ideas [disfmarker] a similar [pause] idea to what they do in vocal tract length normalization. You know, you have some sort of a, uh, general speech model, you know, maybe just a mixture of Gaussians that you evaluate every utterance against, and then you see where each, you know, utterance [disfmarker] like, the likelihood of each utterance. You divide the [disfmarker] the range of the likelihoods up into discrete bins and then each bin's got some knob [disfmarker] uh, setting. [speaker005:] Yeah. But just listen to yourself. I mean, that uh really doesn't sound like a real time thing with less than two hundred milliseconds, uh, latency that [disfmarker] and where you're not adjusting the statistical engine at all. [speaker006:] Yeah. Yeah. [speaker002:] Mm hmm. 
[speaker006:] Yeah. That's true. Right. [speaker005:] You know, that just [disfmarker] [speaker002:] Hmm. [speaker006:] Could be expensive. [speaker005:] I mean [disfmarker] Yeah. No. Well not just expensive. I [disfmarker] I [disfmarker] I don't see how you could possibly do it. You can't look at the whole utterance and do anything. You know, you can only [disfmarker] [speaker006:] Oh, [speaker005:] Right? [speaker006:] right. [speaker005:] Each frame comes in and it's gotta go out the other end. [speaker006:] Right. [speaker005:] So, uh [disfmarker] [speaker006:] So whatever it was, it would have to be uh sort of on a per frame basis. [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker005:] Yeah. I mean, you can do, um [disfmarker] [speaker006:] Yeah. [speaker005:] Fairly quickly you can do male female [disfmarker] f male female stuff. [speaker006:] Yeah. [speaker005:] But as far as, I mean [disfmarker] Like I thought BBN did a thing with, uh, uh, vocal tract normalization a ways back. Maybe other people did too. With [disfmarker] with, uh, uh, l trying to identify third formant [disfmarker] average third formant [disfmarker] [vocalsound] using that as an indicator of [disfmarker] [speaker006:] I don't know. [speaker005:] So. You know, third formant [disfmarker] I if you imagine that to first order what happens with, uh, changing vocal tract is that, uh, the formants get moved out by some proportion [disfmarker] [speaker006:] Mm hmm. [speaker005:] So, if you had a first formant that was five hundred hertz before, if the fifty [disfmarker] if the vocal tract is fifty percent shorter, then it would be out at seven fifty hertz, and so on. So, that's a move of two hundred fifty hertz.
Whereas the third formant which might have started off at twenty five hundred hertz, you know, might be out to thirty seven fifty, you know so it's at [disfmarker] So, although, you frequently get less distinct higher formants, it's still [disfmarker] third formant's kind of a reasonable compromise, and [disfmarker] [speaker006:] Mm hmm. [speaker005:] So, I think, eh, if I recall correctly, they did something like that. [speaker006:] Hmm. [speaker005:] And [disfmarker] and [disfmarker] But [disfmarker] Um, that doesn't work for just having one frame or something. [speaker006:] Yeah. [speaker005:] You know? That's more like looking at third formant over [disfmarker] over a turn or something like that, [speaker002:] Mm hmm. Mm hmm. [speaker005:] and [disfmarker] [speaker006:] Right. [speaker005:] Um. So. But on the other hand, male female is a [disfmarker] is a [disfmarker] is a much simpler categorization than figuring out a [disfmarker] a factor to, uh, squish or expand the [disfmarker] the spectrum. [speaker006:] Mm hmm. [speaker005:] So, um. Y you could imagine that [disfmarker] I mean, just like we're saying voiced unvoiced is good to know [disfmarker] uh, male female is good to know also. [speaker006:] Mm hmm. [speaker005:] Um. But, you'd have to figure out a way to [disfmarker] to [disfmarker] to, uh, incorporate it on the fly. Uh, I mean, I guess, as you say, one thing you could do is simply, uh, have the [disfmarker] the male and female output vectors [disfmarker] you know, tr nets trained only on males and n trained only on females or [disfmarker] or, uh, you know. But [disfmarker] Um. I don't know if that would really help, because you already have males and females and it's mm hmm putting into one net. So is it [disfmarker]? [speaker006:] Is it balanced, um, in terms of gender [disfmarker] the data? [speaker005:] Do you know? [speaker002:] Mmm. Almost, yeah. [speaker006:] Hmm. [speaker002:] Mm hmm. [speaker005:] Hmm. OK. 
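The first-order picture described above, formants scaling by a common factor as vocal tract length changes, is easy to check numerically. Taking nominal values of 500 Hz and 2500 Hz for F1 and F3, a warp factor of 1.5 moves F1 by only 250 Hz but F3 by 1250 Hz, which is why the third formant is the more visible indicator of the warp. The values below are illustrative:

```python
# Sketch of the first-order vocal-tract-length model: a shorter vocal
# tract scales all formant frequencies up by roughly the same factor,
# so higher formants move further in absolute Hz.

def scale_formants(formants_hz, warp):
    """Apply a linear frequency warp (illustrative first-order model)."""
    return [f * warp for f in formants_hz]

# A vocal tract 50% shorter scales formants by 1.5:
f1, f2, f3 = scale_formants([500.0, 1500.0, 2500.0], 1.5)
# F1: 500 -> 750 Hz (a 250 Hz shift); F3: 2500 -> 3750 Hz (a 1250 Hz
# shift), so F3 gives a much stronger handle on the warp factor.
```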
Y you're [disfmarker] you were saying before [disfmarker]? [speaker002:] Uh. Yeah. So, this noise, um [disfmarker] Yeah. The MSG [disfmarker] Um. Mmm. There is something [disfmarker] perhaps, I could spend some days to look at this thing, cuz it seems that when we train networks on [disfmarker] let's say, on TIMIT with MSG features, they [disfmarker] they look as good as networks trained on PLP. But, um, when they are used on [disfmarker] on the SpeechDat Car data, it's not the case [disfmarker] oh, well. The MSG features are much worse, and so maybe they're, um, less [disfmarker] more sensitive to different recording conditions, [speaker005:] Shouldn't be. [speaker002:] or [disfmarker] Shou [speaker005:] They should be less so. [speaker002:] Yeah. But [disfmarker] [speaker005:] R right? [speaker002:] Mmm. [speaker005:] Wh -? But let me ask you this. What [disfmarker] what's the, um [disfmarker]? Do you kno recall if the insertions were [disfmarker] were higher with MSG? [speaker002:] I don't know. I cannot tell. But [disfmarker] It's [disfmarker] it [disfmarker] the [disfmarker] the error rate is higher. [speaker005:] Yeah. But you should always look at insertions, deletions, and substitutions. [speaker002:] So, I don Yeah. Mm hmm. [speaker005:] So [disfmarker] [speaker002:] Mm hmm. [speaker005:] so, uh [disfmarker] MSG is very, very dif Eh, PLP is very much like mel cepstrum. MSG is very different from both of them. [speaker002:] Mm hmm. [speaker005:] So, if it's very different, then this is the sort of thing [disfmarker] I mean I'm really glad Andreas brought this point up. I [pause] sort of had forgotten to discuss it. Um. You always have to look at how this [disfmarker] uh, these adjustments, uh, affect things. And even though we're not allowed to do that, again we maybe could reflect that back to our use of the features. So if it [disfmarker] if in fact, uh [disfmarker] [speaker002:] Mm hmm. 
[speaker005:] The problem might be that the range of the MSG features is quite different than the range of the PLP or mel cepstrum. [speaker002:] Mm hmm. [speaker005:] And you might wanna change that. [speaker002:] Mm hmm. But [disfmarker] Yeah. But, it's d it's after [disfmarker] Well, it's tandem features, so [disfmarker] Mmm. [speaker005:] Yeah. [speaker002:] Yeah. We [disfmarker] we have estimation of post posteriors with PLP and with MSG as input, [speaker005:] Yeah. [speaker002:] so I don Well. I don't know. [speaker005:] That means they're between zero and one. [speaker002:] Mm hmm. [speaker005:] But i it [disfmarker] it [disfmarker] it [disfmarker] it doesn't necessarily [disfmarker] You know, they could be, um [disfmarker] Do doesn't tell you what the variance of the things is. [speaker002:] Mmm. Mm hmm. [speaker005:] Right? Cuz if you're taking the log of these things, it could be, uh [disfmarker] Knowing what the sum of the probabilities are, doesn't tell you what the sum of the logs are. [speaker002:] Mm hmm. Yeah. [speaker005:] So. [speaker002:] Yeah. So we should look at the likelihood, or [disfmarker] or what? Or [disfmarker] well, at the log, perhaps, and [disfmarker] [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker005:] Yeah. Or what [disfmarker] you know, what you're uh [disfmarker] the thing you're actually looking at. [speaker002:] Mm hmm. [speaker005:] So your [disfmarker] your [disfmarker] the values that are [disfmarker] are actually being fed into HTK. [speaker002:] Mm hmm. [speaker005:] What do they look like? [speaker006:] No [speaker002:] But [disfmarker] [speaker006:] And so th the, uh [disfmarker] for the tandem system, the values that come out of the net don't go through the sigmoid. Right? They're sort of the pre nonlinearity values? [speaker005:] Right. [speaker002:] Yes. [speaker005:] So they're [pause] kinda like log probabilities is what I was saying. [speaker006:] And those [disfmarker] OK. 
And tho that's what goes [pause] into [pause] HTK? [speaker005:] Uh, almost. But then you actually do a KLT on them. [speaker006:] OK. [speaker005:] Um. They aren't normalized after that, are they? [speaker002:] Mmm. No, they are not [disfmarker] [speaker005:] No. [speaker002:] no. [speaker005:] OK. So, um. Right. So the question is [disfmarker] Yeah. Whatever they are at that point, um, are they something for which taking a square root or cube root or fourth root or something like that is [disfmarker] is gonna be a good or a bad thing? So. [speaker002:] Mm hmm. [speaker005:] Uh, and that's something that nothing [disfmarker] nothing else after that is gonna [disfmarker] Uh, things are gonna scale it [disfmarker] Uh, you know, subtract things from it, scale it from it, but nothing will have that same effect. Um. So. Um. Anyway, eh [disfmarker] [speaker006:] Yeah. Cuz if [disfmarker] if the log probs that are coming out of the MSG are really big, the standard [pause] insertion penalty is gonna have very little effect [speaker005:] Well, the [disfmarker] Right. [speaker006:] compared to, you know, a smaller set of log probs. [speaker005:] Yeah. No. Again you don't really [pause] look at that. It's something [disfmarker] that, and then it's going through this transformation that's probably pretty close to [disfmarker] It's, eh, whatever the KLT is doing. But it's probably pretty close to what a [disfmarker] a [disfmarker] a discrete cosine transformation is doing. [speaker006:] Yeah. [speaker005:] But still it's [disfmarker] it's not gonna probably radically change the scale of things. I would think. And, uh [disfmarker] Yeah. It may be entirely off and [disfmarker] and it may be [disfmarker] at the very least it may be quite different for MSG than it is for mel cepstrum or PLP. 
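The tandem path being discussed, pre-nonlinearity network outputs (roughly log-probability-like scores) followed by a KLT with no normalization afterwards, can be sketched as below. Estimating the KLT by PCA on the data itself is an assumption for illustration; the point is that nothing in this chain pins down the output scale, which is the concern raised above:

```python
import numpy as np

def tandem_postprocess(linear_outputs, kl_basis=None):
    """Sketch of the tandem path: center the net's pre-nonlinearity
    outputs, rotate them with a KLT, and hand them to the HMM back end
    unnormalized, so their scale is whatever the net produced."""
    x = np.asarray(linear_outputs, dtype=float)
    if kl_basis is None:
        # estimate the KLT from the data itself (illustrative assumption)
        xc = x - x.mean(axis=0)
        cov = np.cov(xc, rowvar=False)
        _, kl_basis = np.linalg.eigh(cov)
        kl_basis = kl_basis[:, ::-1]  # descending eigenvalue order
    return (x - x.mean(axis=0)) @ kl_basis
```

The rotation decorrelates the feature dimensions but, like a discrete cosine transform, does not radically change their overall scale, which is why a compression such as a root could still matter before this step.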
So that would be [disfmarker] So the first thing I'd look at without adjusting anything would just be to go back to the experiment and look at the, uh, substitutions, insertions, and deletions. And if the [disfmarker] if the, uh [disfmarker] i if there's a fairly large effect of the difference, say, uh, uh, the r ratio between insertions and deletions for the two cases then that would be, uh, an indicator that it might [disfmarker] might be in that direction. [speaker002:] Mm hmm. Mm hmm. [speaker005:] Anything else? [speaker002:] Yeah. But, my [disfmarker] my point was more that it [disfmarker] it works sometimes and [disfmarker] [speaker005:] Yeah. [speaker002:] but sometimes it doesn't work. So. [speaker005:] Well. [speaker002:] And it works on TI digits and on SpeechDat Car it doesn't work, and [disfmarker] [speaker005:] Yeah. [speaker002:] Mm hmm. Yeah. Well. [speaker005:] But, you know, some problems are harder than others, [speaker002:] Mm hmm. Yeah. [speaker005:] and [disfmarker] And, uh, sometimes, you know, there's enough evidence for something to work and then it's harder, it breaks. You know, so it's [disfmarker] [speaker002:] Mm hmm. [speaker005:] But it [disfmarker] but, um, i it [disfmarker] it could be that when you say it works maybe we could be doing much better, even in TI digits. Right? [speaker002:] Yeah. [speaker005:] So. [speaker002:] Yeah, sure. Uh. [speaker005:] Hmm? Yeah. [speaker002:] Yeah. Well, there is also the spectral subtraction, which, um [disfmarker] I think maybe we should, uh, try to integrate it in [disfmarker] in our system. [speaker005:] Yeah. [speaker002:] Mmm. Mm hmm. [speaker005:] Right. [speaker002:] But, [speaker005:] O [speaker002:] I think that would involve to [disfmarker] [vocalsound] to mmm [vocalsound] use a big [disfmarker] a [disfmarker] al already a big bunch of the system of Ericsson. 
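The diagnostic suggested above, looking at substitutions, insertions, and deletions separately rather than only the overall error rate, amounts to something like the following; all counts here are hypothetical:

```python
def error_breakdown(n_ref, subs, dels, ins):
    """Word error rate plus the insertion/deletion ratio, the quantity
    suggested above as a sign of a scale mismatch between a front end's
    scores and a fixed insertion penalty."""
    wer = 100.0 * (subs + dels + ins) / n_ref
    ins_del_ratio = ins / dels if dels else float("inf")
    return wer, ins_del_ratio

# Hypothetical counts for two front ends over 1000 reference words:
wer_a, ratio_a = error_breakdown(1000, subs=60, dels=20, ins=20)
wer_b, ratio_b = error_breakdown(1000, subs=60, dels=5, ins=60)
# A much larger insertion/deletion ratio for one front end (B here)
# would suggest its scores sit in a different range relative to the
# fixed back-end penalty.
```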
Because he has spectral subtraction, then it's followed by, [vocalsound] um, other kind of processing that's [disfmarker] are dependent on the [disfmarker] uh, if it's speech or noi or silence. [speaker005:] Mm hmm. [speaker002:] And there is this kind of spectral flattening after [disfmarker] if it's silence, and [disfmarker] and s I [disfmarker] I think it's important, um, [vocalsound] to reduce this musical noise and this [disfmarker] this increase of variance during silence portions. So. Well. This was in this would involve to take almost everything from [disfmarker] from the [disfmarker] this proposal and [disfmarker] and then just add some kind of on line normalization in [disfmarker] in the neural network. Mmm. [speaker005:] OK. Well, this'll be, I think, something for discussion with Hynek next week. [speaker002:] Yeah. Mm hmm. [speaker005:] Yeah. OK. Right. So. How are, uh, uh [disfmarker] how are things going with what you're doing? [speaker004:] Oh. Well, um, I took a lot of time just getting my taxes out of the way [disfmarker] multi national taxes. So, I'm [disfmarker] I'm starting to write code now for my work but I don't have any results yet. Um, i it would be good for me to talk to Hynek, I think, when he's here. [speaker005:] Yeah. [speaker004:] Do you know what his schedule will be like? [speaker005:] Uh, he'll be around for three days. Uh, we'll have a lot of time. [speaker004:] OK. So, y [speaker005:] So, uh [disfmarker] [speaker004:] OK. [speaker005:] Um. I'll, uh [disfmarker] You know, he's [disfmarker] he'll [disfmarker] he'll be talking with everybody in this room So. [speaker006:] But you said you won't [disfmarker] you won't be here next Thursday? [speaker005:] Not Thursday and Friday. Yeah. Cuz I will be at faculty retreat. [speaker006:] Hmm. [speaker005:] So. I'll try to [vocalsound] connect with him and people as [disfmarker] as I can on [disfmarker] on Wednesday. But [disfmarker] Um. Oh, how'd taxes go? Taxes go OK? [speaker004:] Mmm. 
Yeah. [speaker005:] Yeah. Oh, good. Yeah. Yeah. That's just [disfmarker] that's [disfmarker] that's one of the big advantages of not making much money is [vocalsound] the taxes are easier. Yeah. [speaker006:] Unless you're getting money in two countries. [speaker005:] I think you are. [speaker006:] They both want their cut. [speaker005:] Aren't you? [speaker002:] Hmm. [speaker004:] Hmm. Yeah. [speaker006:] Right? [speaker005:] Yeah. Yeah. Huh. Canada w Canada wants a cut? [speaker004:] Mm hmm. [speaker005:] Have to do [disfmarker] So you [disfmarker] you have to do two returns? [speaker004:] Mmm. W uh, for two thousand I did. Yeah. [speaker005:] Oh, oh. Yeah. For tw That's right, ju [speaker006:] But not for this next year? [speaker005:] Two thousand. Yeah. Probably not this next year, I guess. [speaker004:] Ye [speaker005:] Yeah. [speaker004:] Um. [speaker005:] Yeah. [speaker004:] Uh, I'll [disfmarker] I'll still have a bit of Canadian income but it'll be less complicated because I will not be a [disfmarker] considered a resident of Canada anymore, so I won't have to declare my American income on my Canadian return. [speaker005:] OK. Alright. Uh. Barry, do you wanna [pause] say something about your stuff here? [speaker001:] Oh, um. Right. I [pause] just, um, continuing looking at, uh, ph uh, phonetic events, and, uh, this Tuesday gonna be, uh, meeting with John Ohala with Chuck to talk some more about these, uh, ph um, phonetic events. Um, came up with, uh, a plan of attack, uh, gonna execute, and um [disfmarker] Yeah. It's [disfmarker] that's pretty much it. [speaker005:] Oh, well. No Um, why don't you say something about what it is? [speaker001:] Oh, you [disfmarker] oh, you want [disfmarker] you want details. Hmm. OK. [speaker005:] Well, we're all gathered here together. I thought we'd, you know [disfmarker] [speaker001:] I was hoping I could wave my hands. Um. So, um. 
So, once wa I [disfmarker] I was thinking getting [disfmarker] getting us a set of acoustic events to [disfmarker] um, to be able to distinguish between, uh, phones and words and stuff. And [vocalsound] um, once we [disfmarker] we would figure out a set of these events that can be, you know, um, hand labeled or [disfmarker] or derived, uh, from h the hand labeled phone targets. Um, we could take these events and, um, [vocalsound] do some cheating experiments, um, where we feed, um, these events into [pause] an SRI system, um, eh, and evaluate its performance on a Switchboard task. [speaker004:] Hey, Barry? [speaker001:] Uh, yeah. [speaker004:] Can you give an example of an event? [speaker001:] Yeah. Sure. Um, I [disfmarker] I can give you an example of [pause] twenty odd events. Um [disfmarker] So, he In this paper, um, it's talking about phoneme recognition using acoustic events. So, things like frication or, uh, nasality. [speaker005:] Whose paper is it? [speaker001:] Um, this is a paper by Hubener and Cardson [pause] Benson [disfmarker] Bernds Berndsen. [speaker005:] Yeah. Huh. From, uh, University of Hamburg and Bielefeld. [speaker001:] Mm hmm. [speaker005:] OK. [speaker001:] Um. [speaker006:] Yeah. I think the [disfmarker] just to expand a little bit on the idea of acoustic event. [speaker001:] Mm hmm. [speaker006:] There's, um [disfmarker] in my mind, anyways, there's a difference between, um, acoustic features and acoustic events. And I think of acoustic features as being, um, things that linguists talk about, like, um [disfmarker] [speaker005:] So, stuff that's not based on data. [speaker006:] Stuff that's not based on data, necessarily. [speaker005:] Yeah. Oh, OK. [speaker006:] Right. [speaker005:] Yeah. Yeah, [speaker006:] That's not based on, you know, acoustic data. [speaker005:] OK. [speaker006:] So they talk about features for phones, like, uh, its height, its tenseness, [speaker001:] Yeah. 
[speaker006:] laxness, things like that, [speaker001:] Mm hmm. [speaker006:] which may or may not be all that easy to measure in the acoustic signal. Versus an acoustic event, which is just [nonvocalsound] some [nonvocalsound] something in the acoustic signal [nonvocalsound] that is fairly easy to measure. Um. So it's, um [disfmarker] it's a little different, in [disfmarker] at least in my mind. [speaker005:] I mean, when we did the SPAM work [disfmarker] I mean, there we had [disfmarker] we had this notion of an, uh, auditory [disfmarker] [@ @] [comment] auditory event. [speaker001:] Good. That's great. [speaker005:] And, uh, um, called them "avents", [speaker006:] Mm hmm. [speaker005:] uh, uh, uh, with an A at the front. Uh. And the [disfmarker] the [disfmarker] the idea was something that occurred that is important to a bunch of neurons somewhere. So. [speaker001:] Mm hmm. [speaker005:] Um. A sudden change or a relatively rapid change in some spectral characteristic will [disfmarker] will do sort of this. I mean, there's certainly a bunch of [disfmarker] a bunch of places where you know that neurons are gonna fire because something novel has happened. That was [disfmarker] that was the main thing that we were focusing on there. But there's certainly other things beyond what we talked about there that aren't just sort of rapid changes, but [disfmarker] [speaker006:] It's kinda like the difference between top down and bottom up. [speaker005:] Yeah. [speaker006:] I think of the acoustic [disfmarker] you know, phonetic features as being top down. You know, you look at the phone and you say this phone is supposed to be [disfmarker] you know, have this feature, this feature, and this feature. Whether tha those features show up in the acoustic signal is sort of irrelevant. Whereas, an acoustic event goes the other way. Here's the signal. Here's some event. [speaker001:] Mm hmm. [speaker006:] What [disfmarker]? 
And then that [disfmarker] you know, that may map to this phone sometimes, and sometimes it may not. It just depen maybe depends on the context, things like that. And so it's sort of a different way of looking. [speaker005:] Mm hmm. Mm hmm. [speaker001:] Yeah. So. Yeah. [speaker004:] OK. [speaker001:] Mm hmm. Um [disfmarker] Using these [disfmarker] these events, um, you know, we can [disfmarker] we can perform these [disfmarker] these, uh, cheating experiments. See how [disfmarker] how [disfmarker] how good they are, um, in, um [disfmarker] in terms of phoneme recognition or word recognition. And, um [disfmarker] and then from that point on, I would, uh, s design robust event detectors, um, in a similar, um, wa spirit that Saul has done w uh, with his graphical models, and this [disfmarker] this probabilistic AND OR model that he uses. Um, eh, try to extend it to, um [disfmarker] to account for other [disfmarker] other phenomena like, um, CMR co modulation release. And, um [disfmarker] and maybe also investigate ways to [disfmarker] to modify the structure of these models, um, in a data driven way, uh, similar to the way that, uh, Jeff [disfmarker] Jeff, uh, Bilmes did his work. Um, and while I'm [disfmarker] I'm doing these, um, event detectors, you know, I can ma mea measure my progress by comparing, um, the error rates in clean and noisy conditions to something like, uh, neural nets. Um, and [disfmarker] So [disfmarker] so, once we have these [disfmarker] these, uh, event detectors, um, we could put them together and [disfmarker] and feed the outputs of the event detectors into [disfmarker] into the SRI, um, HMM [disfmarker] HMM system, and, um [disfmarker] and test it on [disfmarker] on Switchboard or, um, maybe even Aurora stuff. And, that's pretty much the [disfmarker] the big picture of [disfmarker] of um, the plan. 
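The bottom-up notion of an "acoustic event" being described here — something directly measurable in the signal, like a sudden energy or spectral change, as opposed to a top-down phonetic feature — could be illustrated with a toy detector. This is purely a sketch: the frame sizes, threshold, and synthetic signal are invented for illustration and are not part of any system discussed in the meeting.

```python
import math

def frame_energies(samples, frame_len=256, hop=128):
    """Short-time log energy per frame (a simple stand-in for a spectral feature)."""
    energies = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        e = sum(x * x for x in frame) / frame_len
        energies.append(math.log(e + 1e-10))
    return energies

def detect_events(energies, threshold=2.0):
    """Flag frame indices where log energy jumps by more than `threshold` --
    a crude 'something changed rapidly in the signal' event, found bottom-up
    with no reference to what phone is supposed to be there."""
    return [i for i in range(1, len(energies))
            if abs(energies[i] - energies[i - 1]) > threshold]

# Toy signal: near-silence, then a loud oscillation (e.g., an onset).
signal = [0.001] * 2048 + [0.5 * math.sin(0.3 * n) for n in range(2048)]
events = detect_events(frame_energies(signal))  # fires once, at the onset
```

A real event detector would of course look at richer cues (frication, nasality, co-modulation), but the data-driven flavor — "here's the signal, here's an event, it may or may not map to a phone" — is the same.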
[speaker005:] By the way, um, there's, uh, a couple people who are gonna be here [disfmarker] I forget if I already told you this, but, a couple people who are gonna be here for six months. [speaker001:] Mm hmm. [speaker005:] Uh [disfmarker] uh, there's a Professor Kollmeier, uh, from Germany who's, uh, uh, quite big in the, uh, hearing aid signal processing area and, um, Michael Kleinschmidt, who's worked with him, who also looks at [vocalsound] auditory properties inspired by various, uh, brain function things. [speaker001:] Hmm. [speaker005:] So, um, um, I think they'll be interesting to talk to, in this sort of issue as these detectors are [disfmarker] are, uh, developing. [speaker001:] Hmm. OK. [speaker005:] So, he looks at interesting [disfmarker] interesting things in [disfmarker] in the [disfmarker] [vocalsound] different ways of looking at spectra in order to [disfmarker] to get various speech properties out. So. [speaker001:] OK. [speaker005:] OK. Well, short meeting, but that's OK. And, uh, we might as well do our digits. And like I say, I [disfmarker] I encourage you to go ahead and meet, uh, next week with, uh, uh, Hynek. Alright, I'll [disfmarker] I'll start. It's, uh, one thirty five. seventeen OK [speaker001:] OK. [speaker003:] OK. So the one [disfmarker] [vocalsound] [vocalsound] one thing I knew I wanted to talk about was about, uh, sort of last minute stuff to to, uh, try to get some recognition results. [speaker005:] Recognition results [speaker003:] Yeah. [speaker005:] for [disfmarker] [speaker003:] So, uh, on [disfmarker] on [disfmarker] on meeting data. And so, I'm [disfmarker] I'm not sure exactly what you're doing already, and [disfmarker] and there's some stuff I've talked to Dave [disfmarker] [speaker005:] Well, we I just started recognition on the on Thilo's [vocalsound] segments, which was [disfmarker] but using the far far the close talking microphone. [speaker003:] OK. 
So [disfmarker] Um [disfmarker] [speaker005:] And you wanted [disfmarker] I know you wanted the far field [pause] data. [speaker003:] Right. So [disfmarker] so, we have some stuff with no overlap, uh, for which there would be near [disfmarker] near field results. [speaker005:] Uh huh. [speaker003:] We wanted to get the far field results for that. And then this [disfmarker] this real, uh, long shot thing would be, that we'd apply Dave's processing to, uh, potentially training and test data [speaker005:] Mm hmm. [speaker003:] and do the [disfmarker] look at the same thing. And in talking this morning with, uh, Chuck and with Dave [vocalsound] one thought was to use [disfmarker] We couldn't remember how different the numbers were, but if you just [pause] worked with males only and used the short training there're [disfmarker] there're [disfmarker] uh, I think Chuck's recollection was that when he was doing the feature stuff, it took maybe a day and a half to do the training. [speaker005:] To retrain? [speaker003:] Yeah. [speaker005:] Uh Yeah. Um. That's about right. Actually, it should probably be [disfmarker] It depends on who else is using machines, but we have more machines now. So. [speaker001:] That's true. [speaker005:] It's more like a day, probably. [speaker003:] Um, how much worse is the short training set than the large one, in terms of the ultimate performance? [speaker005:] Mmm, it's like [disfmarker] Something like three [disfmarker] three percent [disfmarker] three or four percent absolute. [speaker003:] Yep. So, it's [disfmarker] that should be fine for this, I would think. So we [disfmarker] an and you have the short [disfmarker] you have short training results for the close [pause] case? [speaker005:] Um, not for meetings. Because we didn't train [disfmarker] we didn't re ever recognize with the [disfmarker] with the small models [pause] on meeting data. 
But I [disfmarker] I have the models, so I could run reco [speaker003:] So, how do you know it's [disfmarker]? Oh, it's three percent on [disfmarker] on, uh, on Hub five. [speaker005:] On [disfmarker] [speaker001:] Hub five. [speaker003:] I see. Yeah. But we have the models so we could get that number, and [disfmarker] So the question is, what [disfmarker]? w [speaker005:] I mean, the [disfmarker] the [disfmarker] mmm The recognition also takes [pause] non negligible amount of time. So [disfmarker] we might wanna restrict it to, maybe, a few meetings, if you want to do a full comparison. [speaker003:] It has to be enough so that [disfmarker] [speaker005:] Uh. [speaker003:] I mean, it's the non overlap only. Um [disfmarker] [comment] And [disfmarker] it has to be enough to be sort of comparable to what you folks were seeing and what you reported already. [speaker005:] Hmm. [speaker003:] I mean [disfmarker] [speaker005:] Well, do we have the [disfmarker] do we have the processed data? That [disfmarker] that's also [disfmarker] [speaker003:] No. He has to create that. But [disfmarker] but [disfmarker] s but [disfmarker] so [disfmarker] [speaker005:] Right. [speaker003:] Um [disfmarker] We have [pause] a whole parallel set of things over here which are all with digits. [speaker005:] Mmm. I see. [speaker003:] And [disfmarker] and [disfmarker] and Dave has been working with that and there's all of those issues. But [disfmarker] [vocalsound] I know that if I [pause] go in with something that's not just digits it would be [pause] good. And, um, so [disfmarker] We already have these results that you [disfmarker] I mean, on [disfmarker] uh, a l a lot tha tha a particular test set, that you [disfmarker] that we reported at HLT. [speaker005:] Right. [speaker003:] Um, it'd be nice to have something more than that. And w we had talked about was having distant [disfmarker] Um, and then, uh, if we could on top of that [disfmarker] I mean, so this is gonna be a lot worse. Right? 
Whatever comparison we [disfmarker] w w one would presume. [speaker005:] Mm hmm. [speaker003:] But we don't know how much worse, [speaker005:] Right. [speaker003:] w w uh, which is certainly one interesting thing. And then, um, Dave [disfmarker] I think we figured that it'd probably take a day or two to compute the [disfmarker] the, uh [disfmarker] uh [disfmarker] Well [disfmarker] How many hours of training [disfmarker]? [speaker005:] Actually, I did retrain [disfmarker] I recently retrained, um, [vocalsound] for another reason, on the full training set. And that took only [disfmarker] I think it took only two days. [speaker003:] Yeah. [speaker005:] So, it's actually conceivable to do [disfmarker] use the full training set. [speaker003:] Yeah, but we also have to do this other processing, so having a smaller training set, if it's only a few percent difference, it might be [disfmarker] be worth doing it. [speaker005:] Oh, I see. [speaker003:] But m How big is the small [pause] training [pause] set? Do you remember? [speaker005:] Hmm. [comment] [vocalsound] [comment] Uh. I don't know, something [disfmarker] something like [disfmarker] something between [comment] thirty and fifty hours, maybe. I f I forget the exac [speaker003:] It's around there. And ha and [disfmarker] and male is roughly half of that, [speaker005:] Right. [speaker003:] or [disfmarker]? [speaker001:] Well, that's only male. [speaker003:] Or [disfmarker] or [disfmarker] or was that only male? [speaker005:] Uh. Actually, I don't know. I'd [disfmarker] [vocalsound] I [disfmarker] I can look it up. It's [disfmarker] it's [disfmarker] it's just, uh, I don't know the [disfmarker] remember the [disfmarker] the number. [speaker001:] We could only jus just do the male only [speaker003:] OK. [speaker001:] right? Or will we run into trouble when we g? [speaker005:] Well, the males [pause] account for most of this meeting data anyhow. [speaker001:] Yeah. [speaker005:] So [disfmarker] Yeah. 
I would say we [disfmarker] you do only males. [speaker003:] Yeah. So [disfmarker] Yeah. So that's certainly part of the issue, is that right now he's [disfmarker] he hasn't written his stuff for efficiency. Yeah. It's [disfmarker] it's in Matlab and so on, and [disfmarker] [vocalsound] And, uh, it's not an impossible amount of time. [speaker005:] Mm hmm. [speaker003:] We [disfmarker] we were guestimating it was like one and a half times faster than real time, or something? So, if there's thirty hours of data, you can calculate that he can do, uh, the enhancement in a day [speaker005:] u [speaker003:] and something. So [disfmarker] But [vocalsound] if we were dealing with two hundred hours or something, I think it'd be [pause] prohibitive. [speaker005:] No. It's definitely [disfmarker] It's less than a hundred hours, for sure. It's [disfmarker] [speaker003:] Yeah. [speaker005:] It's probably actually, uh [disfmarker] It's [disfmarker] uh, I think it's around thirty hours just for [disfmarker] for one gender. [speaker001:] That's what I was thinking. [speaker003:] Yeah. [speaker005:] Yeah. [speaker001:] Yeah. [speaker003:] Yeah. So, I mean, it's a bit of a push, but it seems like, OK, we've got some models, we've got some training data, we have software that works, he's got a method that helps with, you know, other ta another task. Um [disfmarker] It, you know, appears to be, you know, debugged. [speaker001:] S [speaker005:] Mm hmm. [speaker003:] Um [disfmarker] [speaker001:] So [disfmarker] We Yo So, one thing I was wondering is, did you already do that middle one or should we re do that one, too? [speaker005:] No. I didn't do that, because we haven't even cut the waveforms for that. [speaker001:] You didn't do that. OK. Yeah. That's what I was gonna say next. [speaker005:] So [disfmarker] [speaker001:] We hafta [pause] cut the, uh [disfmarker] [speaker005:] Right. So. [speaker001:] W s so is [disfmarker] Morgan, is the plan to just pick one of the far field mikes? 
[speaker004:] Yeah. [vocalsound] [vocalsound] [vocalsound] [vocalsound] [vocalsound] Yeah. We did. [speaker001:] Uh [disfmarker] [speaker005:] And there's a bit of a question whether you want to use [disfmarker] um, what segmentations you want to use. [speaker003:] Yes. [speaker005:] Uh [disfmarker] Uh, David just [disfmarker] Um, I'm sorry. Don just, uh, created a new version of the first meetings that we had previously recognized, but with different segmentation. And so [disfmarker] [vocalsound] [vocalsound] Um [disfmarker] It would be nice [disfmarker] I mean, if the results are comparable to what we had before [disfmarker] to use those segmentations, because th then we could claim that everything's automatic. [speaker001:] Do you know [pause] when he'll have the [disfmarker] the comparison? [speaker005:] Right. Well, I'm [disfmarker] as I said, I just started [pause] the recognizer, um [disfmarker] It will [disfmarker] [vocalsound] uh, [vocalsound] it will probably be [vocalsound] a couple hours before [disfmarker] before I have some results. [speaker003:] Oh. Well, that's not bad. [speaker001:] Oh, OK. [speaker005:] So [disfmarker] [speaker001:] So [disfmarker] cuz I don't think the new data will be ready, uh, for a couple days may probably. [speaker003:] You mean the training. [speaker001:] So, the training. [speaker005:] But the segmentations matter for the filtering. [speaker003:] Yeah. [speaker005:] Right? Because [disfmarker] [speaker003:] For the test. [speaker005:] For the test set. Yeah. So, [speaker003:] Yeah, fo [speaker001:] Yeah. [speaker005:] need to be [disfmarker] But, first [disfmarker] first, of course, you would wanna process the [pause] training data, because we wanna get that started. [speaker003:] Right. [speaker005:] Yeah. [speaker003:] Yeah. 
I mean, it [disfmarker] it'd be really great if it was all automatic, but I think that, you know, given the pressure of time, if [disfmarker] i i I mean, since you're gonna find out in a short amount of time, that's great. [speaker005:] Mm hmm. [speaker003:] But i if [disfmarker] if it doesn't work out, I think we would rather charge ahead with the older segmentations [speaker005:] Mm hmm. Right. [speaker003:] and [disfmarker] Um, and we were gonna use one of the P Z I don I don't know what [disfmarker] Probably whatever one you've been using for [disfmarker] for [disfmarker] for the digits. [speaker001:] I think it's that one. Right? F. [speaker003:] Is it this one? [speaker002:] It's e F. [speaker001:] F. Yeah, that's it. [speaker003:] I [disfmarker] I'd [disfmarker] which [disfmarker]? That's F? How do you know? [speaker001:] That's F. It's the second nearest the machine room. [speaker003:] Oh. Bien sur OK. [vocalsound] Alright, so [disfmarker] Bet they'll have fun with that one. OK, so. Uh [disfmarker] [speaker001:] OK. So, uh, I just want to make sure I understand what we need to run. Um [disfmarker] Let's see. So, it's [disfmarker] OK, so if we're talking about [disfmarker] Let's [disfmarker] let's assume that we're gonna use the new segmentations. We need to, um, [vocalsound] run recognition o just looking at the no overlap column. Basically, we have to d r do recognitions for all three of those cases. Right? [speaker003:] Mm hmm. Right. [speaker001:] Um, because [pause] we're gonna be using the [disfmarker] just the male, uh, model [disfmarker] short training set for the male. [speaker005:] Mm hmm. [speaker001:] So we need to have results for all three of those, even though we have [disfmarker] [speaker005:] Ma maybe you should limit ourselves to the Meeting Recorder meetings? [speaker001:] OK. [speaker005:] Um, if you were gonna cut down on the test set, I would suggest that. [speaker003:] Well [disfmarker] Maybe. 
But, I [disfmarker] I mean, how long does it take for the test? [speaker005:] Actually, the longer [disfmarker] the Robustness meetings take longer, because there's this one speaker who talks a lot. And so the [disfmarker] [vocalsound] Um. No. It's because for all the [disfmarker] for the adaptation and normalization steps, you cannot [disfmarker] you have to d you have to, uh, um, you cannot chop it up into small pieces. So, you're sort of limited by how long the longest speaker, uh, is s speaking. So how much data there is from the [disfmarker] the speaker who talks the most. So, [vocalsound] um, you parallelize across different speakers, [speaker001:] Mm hmm. [speaker005:] but [vocalsound] you know, if you have a bunch of speakers who speak very little and then one wh who [disfmarker] who speaks a lot, then [vocalsound] effectively, everybody waits for the longest one to process. [speaker003:] Right. [speaker001:] Right. Yeah. [speaker003:] Bu But what [disfmarker] what was your result for [disfmarker] uh, that we had at the HLT? [speaker005:] So. [speaker003:] Was that a combination of me? [speaker005:] That was both types of meetings, but most [disfmarker] but there were only two Robustness meetings, and four or five, uh, Meeting Recorders. [speaker001:] And if we're re doing the baseline anyways, it [disfmarker] it [disfmarker] it would be OK. [speaker005:] Right. [speaker001:] Right? I mean [disfmarker] To [disfmarker] to just limit ourselves to a smaller [disfmarker] How long would it take to run recognition, if we did that? [speaker005:] Oh. Uh, I [disfmarker] I [disfmarker] I don't have [disfmarker] I don't have a good [pause] gue [speaker001:] I mean, is [disfmarker] is it like a day or is it [pause] a few hours or [disfmarker] roughly? [speaker005:] For everything? For all the meetings? [speaker001:] For [disfmarker] yeah, let's say we just did the Meeting Recorder meetings for our test set. [speaker005:] Uh. Um. 
It's probably more than a day, but probably less than two. [speaker001:] Oh, really. I didn't realize each test took that long. [speaker005:] Well, i No. I mean for all the meetings. Because it's [disfmarker] [vocalsound] Again, it's, um [disfmarker] [speaker003:] So you were doing like [disfmarker] [speaker005:] So each meeting [disfmarker] each meetings takes, uh, something like [disfmarker] Again, we [disfmarker] we [disfmarker] I ran [disfmarker] when we ran these, we were sort of short on machines, and, um, I don't know, I [disfmarker] I would estimate maybe four hours per meeting. Something like that. [speaker001:] Four hours per meeting. [speaker005:] Right. [speaker001:] Wow. [speaker003:] Yeah. But if you [disfmarker] So, if you do half a dozen meetings, that's [disfmarker] that's about a day. We also have more machines now. [speaker005:] Right. Right. So that's why I'm saying I'm not sure how they would scale with more machines. [speaker003:] well Yeah. Yeah. I mean, if we had about six [disfmarker] A six hour test set's not bad. Right? [speaker005:] Right. [speaker001:] S six hour test set. Six meetings. OK. [speaker003:] You know? Right? I mean, a lot of the evaluations have been [disfmarker] [speaker005:] We have MR [disfmarker] two, th We have two, three, four, fi I think there are four Meeting Recorder meetings that we worked with. [speaker003:] Four that you worked with? [speaker004:] I think it's [disfmarker] Is it the same set as [pause] the alignments? [speaker005:] Yeah. [speaker004:] I think it's five Meeting Recorder meetings. [speaker005:] Five? OK. [speaker003:] That would be OK, too. I mean, I'm [disfmarker] i So, if they have a set that they worked with, and you [disfmarker] you got [disfmarker] Did you do similarly in performance between them and the other meetings, or was it [disfmarker]? [speaker005:] Uh, with the Robust compared to Robustness? Yeah. The big variation is by [disfmarker] whether it's a native speaker or not. 
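The back-of-envelope timing being worked out here — enhancement running at roughly 1.5x real time over ~30 hours, and decoding that parallelizes across speakers but waits on the speaker with the most audio — can be checked with simple arithmetic. The decode rate below is a made-up placeholder, not a measured figure from the meeting.

```python
def wall_hours(audio_hours, realtime_factor):
    """Wall-clock hours to process audio at the given speed
    (realtime_factor > 1 means faster than real time)."""
    return audio_hours / realtime_factor

def decode_bottleneck(speaker_minutes, decode_rt=10.0):
    """Adaptation/normalization must see all of one speaker's audio in one
    piece, so jobs parallelize across speakers but not within one: wall time
    is set by the speaker with the most audio (decode_rt is hypothetical)."""
    return max(speaker_minutes) * decode_rt

# ~30 hours of training audio at 1.5x real time: 20 hours, under a day.
enhance = wall_hours(30, 1.5)
# A meeting where one person dominates waits on that one channel.
meeting = decode_bottleneck([5, 7, 40])   # driven entirely by the 40-minute talker
```

This is why the Robustness meetings take longer despite parallelization: one speaker who talks a lot sets the makespan for the whole job.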
[speaker003:] Yeah. [speaker005:] And whether [vocalsound] it's, um [disfmarker] Uh, I think that's the o actually [disfmarker] and [disfmarker] and of course what, um, you know, whether it's lapel or, uh, headset microphone. [speaker003:] And overlap or not. [speaker005:] Yeah. [speaker003:] Yeah. So maybe just with the [disfmarker] the [disfmarker] the Meeting Recorder set of the [disfmarker] De th that you did before. [speaker005:] And we can exclude [disfmarker] we don't need to recognize the [pause] non natives, because we know that [disfmarker] I mean, in fact, we excluded them previously from [disfmarker] [speaker003:] Yeah. So we want to do the same [disfmarker] same thing. [speaker005:] Right. [speaker001:] OK. So [disfmarker] Alright. So if we got a list of the, uh, segmentations for these five Meeting Recorder meetings, we could start, uh, the first two experiments going right away, using the short male models. [speaker005:] Mm hmm. [speaker001:] So, you could get those going while Dave is, um, creating waveforms for the r retraining the short male models. [speaker003:] Yeah. [vocalsound] Once we know which segmentations we're using. Yeah. [speaker001:] Right. OK. OK. And then [disfmarker] OK. So, do we w also want to run that bottom experiment without retraining the short male models on his thing? Did you want that? Or [disfmarker]? [speaker003:] Um, I [disfmarker] I agree that that would be an interesting thing to do, but I sort of regard it as secondary. [speaker001:] OK. So, we'll save that. [speaker003:] So if there's sort of machines sitting around and people sitting around and they're waiting for [vocalsound] other things to finish, then sure. But [disfmarker] Uh, Chuck had been asking about that earlier as kind of a control to know, um [disfmarker] Cuz, I mean, you could imagine a fantasy in which you said that Dave's processing [vocalsound] made the, uh, far microphone like the near microphone. 
In which case you shouldn't have to actually retrain. [speaker005:] Mm hmm. [speaker003:] But it's [disfmarker] it's not really true. It's [disfmarker] it's sort of fantasy. It does [disfmarker] it does muck up the data in [disfmarker] in some funny ways. [speaker005:] Mm hmm. [speaker003:] And so, I'm [disfmarker] I'm kind of questioning that. [speaker001:] Well, i it [disfmarker] [speaker003:] But [disfmarker] But [disfmarker] [speaker001:] Well, on a more basic level, also, it means that that third experiment, there are actually two differences between the other experiments, not one. [speaker003:] Right. [speaker001:] So, it's hard to know [disfmarker] [speaker003:] It involves retraining and it involves a [disfmarker] [speaker001:] Right. [speaker003:] Uh, that's right. I mean the other thing which [disfmarker] which it might come in [disfmarker] to is if there was some problem in the retraining. I mean, maybe you'd just have some mechanical thing we do wrong. [speaker001:] Mm hmm. [speaker003:] Uh, that, uh [disfmarker] since Dave's experience was that it didn't help as much if you didn't retrain, but it does help some, that we would hopefully see that. [speaker001:] Right. Mm hmm. [speaker003:] So, that [disfmarker] that's [disfmarker] that's true. [speaker005:] Wait, did [disfmarker]? [speaker003:] I i [speaker005:] So when you used original [disfmarker] the original models, and you just process the test set in this way, d do you get any [disfmarker] do you get decent performance or not? [speaker002:] u [vocalsound] I [disfmarker] I [disfmarker] I think, um, for the far mike HTK system I was using, it did help somewhat. [speaker005:] Mm hmm. [speaker002:] I could re check that. But it was such a bad baseline that I don't know what that means. [speaker005:] Mmm. Right. Right. OK. [speaker002:] Cuz the baseline word error rate was around forty percent on digits. [speaker005:] Mm hmm. [speaker001:] On the far field? [speaker002:] Right. [speaker003:] Right. 
[vocalsound] So [disfmarker] [speaker005:] OK. Well, I'll [disfmarker] I can get started on the [disfmarker] well, the first [disfmarker] the one that already has a cross there. We need to re do that with small models. Right. [speaker003:] Yeah. [speaker005:] And then, have to ask, um, I guess, Don to, uh, cut the, um, cut the segments for the sh for the tis distant mike. Uh, uh [disfmarker] that's [disfmarker] [vocalsound] So we would be using the same channel for each [disfmarker] fo for everything? [speaker003:] Yeah. [speaker001:] Mm hmm. [speaker005:] OK. So. [speaker003:] I mean, do you ha do you have to rely on his segmentations at all to do the top one? [speaker005:] No, no. We would use the same segmentations, but he needs to extract [disfmarker] extract the wavef form segments from a different channel. [speaker003:] Oh. OK. Got it. [speaker005:] Right. [speaker001:] So when you said you were gonna start that top one, were you gonna use the new segmentations? [speaker005:] Mm hmm. Um [disfmarker] Yeah. [speaker001:] OK. [speaker005:] If [disfmarker] assuming that the performance turns out to be comparable with [disfmarker] with the old experiments and the old segmentations. [speaker001:] Right. OK. [speaker005:] Now there's the issue of [disfmarker] Oh, OK. So there's the issue of speaker normalization. So, with the distant microphone you wouldn't know which speaker is talking. Right? [speaker001:] Can we [disfmarker]? [speaker003:] We [disfmarker] we talked about this before. I think what we were saying was that, um, the very fact that in both cases we're ignoring the overlap section means that, um, uh, we're to some extent finessing that. So, um, I think, for the purposes of just determining whether a far field microphone [disfmarker] uh, what the effect of the far field microphone is, we should do the same to both. [speaker005:] Mm hmm. I see. So you want to cheat? 
[speaker003:] I mean [disfmarker] We want to i incorporate [comment] [vocalsound] certain data that would not be available during final tests, [comment] uh, under a [disfmarker] a full fair test of it, much as we are in the [disfmarker] all the numbers that we have so far. [speaker005:] OK. So we assume [disfmarker] we assume knowledge of the speakers as [disfmarker] as, um [disfmarker] [vocalsound] in a way that's compatible with the close talking test set. [speaker004:] Oops. [speaker003:] Yeah. We [disfmarker] we simply wanna determine what's the difference in performance due to being distant versus close. [speaker005:] OK. Oh, OK. [speaker001:] So, does that mean you turn off speaker normalization when you run it? Or you just let it do what it would do, an? [speaker005:] No. It means [disfmarker] No. It just means you [disfmarker] you group together the segments [pause] that by magic you know belong to one speaker, and [disfmarker] and treat [disfmarker] [speaker003:] I mean to a lesser extent you had that same magic the other way, too, because you have leakage into other microphones. Right? But, it's just you're using the fact that [vocalsound] this is where this person is. [speaker005:] Right. Right. [speaker003:] Right? So. [speaker005:] But, um [disfmarker] [speaker003:] It's just easier to do. [speaker005:] Well, in the new test, actually, that's not true. So [disfmarker] [vocalsound] Again, if this [disfmarker] if these new segmentations work OK, then we [disfmarker] then it's a fair [disfmarker] it's a completely fair [pause] test. [speaker003:] Yeah. So, how do you determine what you use to group together to be a [disfmarker] a [disfmarker]? [speaker005:] You group together all the data coming in through one channel and where [pause] Thilo's speech detector has [disfmarker] has determined that there is speech. And that speech is [disfmarker] is deemed to come from that speaker, whether that's true or not. 
So if you get some cross talk from another microphone, then you just process this [disfmarker] it as if it were from that speaker. [speaker003:] The only other alternative would be to turn off speaker adaptation in both. [speaker005:] Well, that's more of a problem. I mean, because it's [disfmarker] You can just pretend it's some kind of gene I mean you can pretend it's all from one speaker and do all this processing the same, but then you're gonna get results that are worse on account of not doing proper speaker normalization [speaker003:] Mm hmm. [speaker005:] and you're gonna have [disfmarker] So, you could certainly do better than that by doing, for instance [disfmarker] uh, cluster the segments, which is what we do, say, in a Broadcast News system, where you don't have speaker labels. But that would be another processing step that I'm [disfmarker] I would have to [pause] debug first, and so forth, and so we wanna avoid that. So I agree with you. We should [disfmarker] [vocalsound] we should, uh, do the [disfmarker] you know, this sort of cheating experiment. [speaker003:] OK. Yeah. An and e so that will tell us [pause] what the difference it between the mikes, and then, uh, in order to [disfmarker] The [disfmarker] the other difference that we'd have to take care of is that, uh [disfmarker] yeah, we [disfmarker] we don't have a mike that, uh, is particular to a person. And so we'll have to do some clustering, and that'll be another [disfmarker] [vocalsound] another, uh, issue, too. [speaker005:] Mm hmm. [speaker003:] But, it [disfmarker] it [disfmarker] I could be wrong, but it seems to me that [disfmarker] that the speaker [disfmarker] the [disfmarker] the level of degradation that you get from having the distant mike in a normal acoustic is much greater than what you get from, say, not applying speaker adaptation or applying speaker adaptation. I think that the [disfmarker] I mean, we'll see. 
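The "cheating" attribution just agreed on — every speech region the detector finds on a close-talking channel is credited to that channel's wearer, cross-talk and all — amounts to a simple grouping step. The segment record format below is invented for illustration; the actual detector output format is not specified in the discussion.

```python
def group_segments_by_channel(segments):
    """Each close-talking channel is assumed to belong to one wearer, so
    segments detected on that channel are attributed to that speaker --
    even if some are really cross-talk leaking in from another talker."""
    by_speaker = {}
    for seg in segments:
        by_speaker.setdefault(seg["channel"], []).append(seg)
    return by_speaker

# Hypothetical detector output: one dict per detected speech region.
segs = [{"channel": "chan1", "start": 0.0, "end": 2.1},
        {"channel": "chan2", "start": 1.8, "end": 3.0},
        {"channel": "chan1", "start": 4.0, "end": 5.5}]
groups = group_segments_by_channel(segs)   # chan1 gets two segments
```

The fair (non-cheating) alternative mentioned — clustering segments by acoustic similarity, as in a Broadcast News system — would replace the channel key with a learned cluster label, which is the extra step being avoided here for time reasons.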
But [disfmarker] but, I think that the kind of gains that we've seen from speaker adaptation [vocalsound] on Hub five sort of things are like a few percent. Right? And [disfmarker] [speaker005:] It's not just speaker adaptation. It's the whole norm feature normalization process. I it's spea uh, all that is speaker based. You know, so we [disfmarker] So, in that I'm, um [disfmarker] Y you know, d d b the most important, of course, is the [pause] cepstral mean subtraction. [speaker003:] Yeah. [speaker005:] And that [disfmarker] I don't know if we [disfmarker] we never really [disfmarker] I don't remember, because it's so far [disfmarker] [vocalsound] s so long ago that we didn't do that on a per speaker basis, [speaker003:] It doesn't make that much difference, I think. [speaker005:] but [disfmarker] [speaker003:] I would doubt that it would be a huge amount of difference for that. So, I mean, I [disfmarker] I think that that difference would definitely be marginal. I think the main thing is to do something [disfmarker] to do some cepstral mean subtraction on some level. And, uh, so what's different about this processing is just that we're doing it at a much longer time [pause] scale. Right? But [disfmarker] Um [disfmarker] [speaker005:] A and by the way, [speaker002:] What wo [speaker005:] it's [disfmarker] actually we're [disfmarker] we're already [disfmarker] If we use the same segmentations that we use for the close talking microphone, then [vocalsound] the segmentations assume that we have access to all channels and cross correlate them, [speaker003:] That's right. [speaker005:] so [vocalsound] there's no point in not using that knowledge for speaker identification. [speaker003:] Yeah. [speaker001:] Mm hmm. [speaker002:] Uh, I I think also for the log spectral mean subtraction, uh, we wanna know which speaker's talking when, [speaker003:] Yeah. 
[speaker002:] cuz we wanna chain together the audio from one particular speaker to calculate the mean [speaker005:] Yeah. [speaker002:] and subtract it, and we don't [disfmarker] [speaker005:] Right. [speaker003:] Right. [speaker005:] OK. [speaker003:] Um. Yeah, I guess. But, um, I also think that a again once we got into it, that, um, using some kind of clustering would probably work reasonably well there, too. Certainly for the [disfmarker] the two microphone case, which we're not gonna mess with because it's another whole deal with the low quality microphones, um, we ought to be able to at least tell that it appears that things are coming from a particular direction. So we ought to be able to use that information, um, as well. But, I think we might be able to do not too bad a job of separating out sp uh, segments that appear to come from a single speaker both in terms of s acoustic similarity and in terms of direction. So, I mean [disfmarker] But that's another research thing to do and [vocalsound] probably won't get done the next week. [speaker005:] Right. So what [disfmarker] what is the schedule here? [speaker003:] Well, I mean, I'm [disfmarker] I'm leaving for [vocalsound] for the, uh, the New Orleans meeting, uh, next Saturday, and [disfmarker] and, um, [speaker005:] Mm hmm. [speaker003:] it'd be kinda nice to have some results at least a day or two before that, so that I could figure out what I wanted to say about it. [speaker005:] Mm hmm. Oh, we'll call you when you get there. [speaker001:] You'll have email. Right? [speaker003:] Yeah. Uh, not to mention that [disfmarker] that, uh, Mari's putting together this report next week, too, you know. So [disfmarker] Uh, what we were hoping was that over the weekend we could do, uh, the, um, calculation on the training set and, uh [disfmarker] uh, maybe, you know, we could [disfmarker] by the end of the weekend, we could have the top one, and [disfmarker] and then [vocalsound] early next week, do these. 
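The per-speaker mean subtraction just described — chain one speaker's segments together, estimate a single cepstral (or log-spectral) mean over the pooled frames, and subtract it from every frame — might look like the minimal sketch below. Frames are plain lists here for illustration; a real system would operate on cepstral feature files.

```python
def cepstral_mean_subtraction(frames_per_segment):
    """Pool all of one speaker's segments, compute one mean vector over the
    pooled frames, and subtract it from every frame of every segment."""
    pooled = [f for seg in frames_per_segment for f in seg]
    dim = len(pooled[0])
    mean = [sum(f[d] for f in pooled) / len(pooled) for d in range(dim)]
    return [[[f[d] - mean[d] for d in range(dim)] for f in seg]
            for seg in frames_per_segment]

# Two segments from the same speaker; after CMS the pooled mean is zero.
segs = [[[1.0, 2.0], [3.0, 2.0]], [[2.0, 5.0]]]
normalized = cepstral_mean_subtraction(segs)
```

This is also why the speaker labels matter: pool the wrong speakers together and the estimated mean (and hence the channel compensation) is wrong for everyone.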
If we had enough machines, maybe do them in parallel. [speaker005:] Mm hmm. [speaker003:] So that by the middle of the week we had s had some kind of result. I mean, it's [disfmarker] it's one of these [pause] Hail Mary kinds of things. I mean, it [disfmarker] it, uh [disfmarker] might [disfmarker] might not [pause] work out. But, uh, f figured I may as well ask for it. [speaker005:] OK. I'll ask [disfmarker] [speaker001:] So [disfmarker]? [speaker005:] The other thing is, um [disfmarker] and I'll ask Don which is easier to process in terms of creating these [disfmarker] the [disfmarker] the test data for the far [disfmarker] far microphone. If [disfmarker] if it turns out that for some reason it's easier for him to [vocalsound] use the old [disfmarker] um, [vocalsound] the [disfmarker] the [disfmarker] the old, uh, [speaker001:] Segmentation? [speaker005:] segmentations, then we'll just use that, I figure. [speaker003:] Right. [speaker005:] Um [disfmarker] [speaker001:] S So, um. I don't want you to have to be burdened with doing a lot of stuff. What [disfmarker] what can I do to [disfmarker]? Y y you said it would be easy for you to do that [pause] top one there, and [pause] I guess Don can do the segmentations of the, uh, channel F? [speaker005:] Mm hmm. Um [disfmarker] [speaker001:] I mean, I can certainly help with, uh, retraining [pause] the short male models, [speaker005:] Right. [speaker001:] uh, once we have the new data. [speaker003:] You have models of short males? [speaker005:] Yeah. Um [disfmarker] Right. Uh, let's see. The [disfmarker] the You [disfmarker] you can [disfmarker] I mean [vocalsound] you could [disfmarker] you could run the, um, you c Basically, once the, um, whe the top [disfmarker] the top one's done, you could easily re run the whole set of experiments. [speaker001:] The top one is done. I c I can re do the [pause] next one. Yeah. Mm hmm. [speaker005:] Uh, I mean, manage the jobs and so forth, [speaker001:] Yeah. Sure. 
[speaker005:] uh [disfmarker] [vocalsound] Um, [speaker001:] Yeah, the [disfmarker] the bottom one would b just be a matter of pointing it at a new set of files and kicking it off, [speaker005:] that's all [disfmarker] [speaker001:] so that would be re I mean, th not the bottom one, but the middle one [speaker005:] Mm hmm. [speaker001:] would be really easy once you've got the top one going. [speaker005:] Yeah. [speaker001:] I could do that. [speaker005:] Right. [speaker001:] OK. So I guess I just need to get Don to, uh [disfmarker] [speaker005:] Right. So, somehow the [disfmarker] Assuming he uses the new naming scheme, then he should call [pause] the waveforms the [disfmarker] so, the waveform names have the, you know, meeting [disfmarker] meeting ID, and the [pause] microphone, and the, [vocalsound] um [disfmarker] I guess, the channel and the microphone and the speaker, um, [vocalsound] speaker [disfmarker] some something that identifies the speaker. [speaker001:] Mm hmm. [speaker005:] So [disfmarker] [speaker001:] To keep it the same but j just change them all to channel F? [speaker005:] Exactly. So, uh [disfmarker] well, you still need to be able to distinguish the different speakers. That's the key point. [speaker001:] Right. Right. [speaker005:] Because, if we wanna do what we just discussed [disfmarker] So [disfmarker] [vocalsound] Uh, uh, the [disfmarker] the best [disfmarker] the easiest way to do that would be to just take [disfmarker] You know, you make the channel be channel F, [speaker001:] Yeah. [speaker005:] but then keep the speaker names the same as they would be in the old [disfmarker] in the close talking, uh, version. [speaker001:] Yeah. OK. OK. OK. And so that's something that Don would do [disfmarker] right? [disfmarker] when he creates these. [speaker005:] Right. Exactly. [speaker001:] OK. OK. So will you talk to him about that, or do you want me to talk to him? [speaker005:] I, uh, I [disfmarker] I can talk to him. [speaker001:] OK. 
And then the bottom [pause] one in terms of the test will be [disfmarker]? Uh [disfmarker] That will just be a copy of the one above it, except for [pause] different models from training. [speaker002:] Well, we also have to mean subtract the test data. [speaker001:] Ah. OK. So, we need to run [disfmarker]? OK. Well, once we have the new, uh [disfmarker] Well, once I do that, uh, second experiment, we'll have the, um, files, and I can give you those to [disfmarker] to process. [speaker002:] OK. And there's, um, S so the way this means subtraction expects to work, is it expects to have, um, this continuous stream of audio data from a particular speaker to operate on. And it goes along it with this sliding window, calculating the mean using the data in the window, and then subtracting that. [speaker001:] So, do you create this continuous stream from the individual utterance files? [speaker002:] I mean, uh [disfmarker] That's [disfmarker] that's how I've been doing it, just by concatenating files together. Um, and if these files [disfmarker] and the since they're individual utterance files, um, s long silence periods are removed, which is a good thing. Because this method might estimate the mean badly, if it had to face long silence periods. But that does mean that I need as much [disfmarker] I need twice as much disk space as the original set cuz I need [disfmarker] while I'm running it [disfmarker] cuz I need to create this intermediate set, um, of these big files, [speaker001:] Yeah. [speaker002:] and then [pause] create the [disfmarker] finally, the mean subtracted, um, little files. And then I can get rid of the big files. But st while I'm doing the processing, I'll nee I need twice as much disk space. [speaker001:] OK. I'm gonna [disfmarker] I'll check with, um, Markham, and see what happened with the [disfmarker] the disks. He went to put those on a couple of weeks ago and something [disfmarker] [speaker003:] I think [pause] Markham's out on [pause] vacation. 
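[Editor's illustration: the per-speaker, sliding-window mean subtraction Dave describes above — concatenate one speaker's utterance files into a continuous stream, slide a window along it, subtract the local mean — can be sketched as below. This is a minimal sketch, not the actual ICSI processing; the function name and window size are hypothetical.]

```python
import numpy as np

def sliding_mean_subtract(feats, win=500):
    """Subtract a local mean from each frame of a feature stream.

    feats: (n_frames, n_dims) array of log-spectral (or cepstral)
           features for ONE speaker, e.g. built by concatenating that
           speaker's utterance files as described in the discussion.
    win:   window half-width in frames; the mean is estimated over the
           frames within +/- win of the current frame.  Removing long
           silence periods first (as noted above) keeps this estimate
           from being dominated by silence.
    """
    n = len(feats)
    out = np.empty_like(feats, dtype=float)
    for t in range(n):
        lo, hi = max(0, t - win), min(n, t + win + 1)
        out[t] = feats[t] - feats[lo:hi].mean(axis=0)
    return out
```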
I think, che check with Dave. [speaker001:] Oh, OK. I'll ask David, then. [speaker004:] Yeah. [speaker003:] Yeah. [speaker001:] You haven't seen new disks pop up, have you? [speaker004:] Nope. I was wondering [pause] if they were [pause] in. [speaker001:] OK. Yeah. [speaker005:] Well, they grow them on trees now. [speaker003:] I thought they were like mushrooms. [speaker001:] Well, he went to put them on [disfmarker] [speaker005:] Just, you shake them and they fall down. [speaker003:] They're popping up. [speaker001:] Yeah. He went to put them on and then something happened. And he sent around a note saying "Oh, uh, it didn't work, and we'll have to schedule another time." [speaker004:] Yeah. Yeah. Something w went wrong. [speaker001:] And then, didn't see [disfmarker] [speaker004:] Nothing happened. [speaker001:] nothing happened. [speaker004:] No? Yeah. [speaker001:] So [disfmarker] I'll check with David about that. OK. [speaker005:] OK. [speaker003:] Yeah. Cuz we still all have tha the [disfmarker] that other one going, which is the, uh [disfmarker] the Macrophone training. [speaker002:] Right, an and [disfmarker] So [disfmarker] so, Andreas, um, [speaker005:] Mm hmm? [speaker002:] in U doctor speech data SRI Hub five, there's this, uh, Hub five training set. [speaker005:] Mmm. Mm hmm. [speaker002:] Now, is that the long training set there? [speaker005:] Mm hmm. That's everything. Yeah, [speaker002:] OK. [speaker005:] so [disfmarker] So, I can give you a list of the short [pause] version. [speaker002:] OK. I th I think you already did, actually. [speaker005:] So you can [disfmarker] Oh, OK. [speaker002:] OK. And so, say the Macrophone files that are included in this short training, are just a subset of the Macrophone files. Right? [speaker005:] That's right. [speaker002:] OK. So [disfmarker] so, um [disfmarker] when you [disfmarker] You did some TI digits t t experiments training on Macrophone. [speaker005:] Yeah. [speaker002:] Um. 
But that's not necessarily any less data [pause] than the SRI Hub five set. It's not a [disfmarker] it's not a subset of the short SRI Hu Hub five set. Right? [speaker005:] Um [disfmarker] Wel No, it is. Th the [disfmarker] Sorry. Um. Can you repeat the question? [speaker002:] Uh, whe when you trained on Macrophone, um, to do those digits experiments, did you use the entire Macrophone corpus? [speaker005:] There wa it is a subse Yeah. No. Only the portion that was in the Hub five training set. [speaker002:] Oh. OK. [speaker003:] That was in the Hub five small training set? [speaker005:] Well, the Hub five small training set contains as much Macrophone as the large training set, for historical reasons. [speaker003:] Yes. [speaker005:] Yeah. [speaker002:] OK OK. So [disfmarker] Um [disfmarker] [speaker005:] So, do you have that processed there, then [disfmarker] right? Because you already did [disfmarker] did y didn't you already do that experiment? [speaker002:] I [comment] I [disfmarker] I got confused, cuz I thought [disfmarker] I thought you were using [vocalsound] the whole Macrophone set. [speaker005:] Mm hmm. No. [speaker002:] Um. OK. Well, if [disfmarker] if [disfmarker] if I just need to use that subset, I [disfmarker] I can get it processed. I actually [pause] got [disfmarker] I think I got f into it before, and then I thought I was doing the wrong thing and I stopped. And it shouldn it shouldn't take that long to do. [speaker005:] Mmm. OK. Right. [speaker003:] OK. [speaker005:] And you need only the males. So. [speaker002:] Oh, OK. [speaker001:] So, basically, Dave, so y for you to get your processing going, you need the list of the, uh, wave Well, I guess it'll dep you'll [disfmarker] you'll [disfmarker] we'll need to get the segmentations. [speaker002:] Yeah. [speaker001:] Figure out whether we're using the new or the old [pause] from Don. 
And then, uh, from that you need the [disfmarker] from the segmentations you'll have the list of wavefiles that the short, uh, set is trained on. And then [vocalsound] you'll need disk space. And once you've got [pause] those things, then you can start your processing. Right? [speaker002:] Yeah. [speaker001:] OK. [speaker005:] Does this [disfmarker] th this [disfmarker]? It it's sort of [disfmarker] f f not very nice to use the small training set for another reason, which is that the [vocalsound] you also are losing on [disfmarker] Again, because you don't use [pause] all the data you have for one speaker. So, the normalizations you compute for your training speakers will be, uh, crummier [pause] than they would in the large training set. So, [vocalsound] um, I have to [disfmarker] So, to make it really a matching experiment, I have to find [disfmarker] uh, I have to use short models that were trained on normalizations that were also only estimated on the [pause] short set. Which is, uh [disfmarker] [speaker001:] Do you have this, uh [disfmarker]? [speaker005:] I think so. I've [disfmarker] I [disfmarker] I have to check. In any case, I could retrain short models within [vocalsound] a few hours actually at [disfmarker] if I use machines at SRI. [speaker003:] I wonder about that, though. I mean, because all we're doing [disfmarker] The only reason we're using the short training set is [disfmarker] is for speed. And there [disfmarker] we're not really making any claims about using a smaller training set. [speaker005:] Yeah. [speaker003:] So as long as we're not using any testing data from [disfmarker] [speaker005:] No. But the thing is, if [disfmarker] if we used [disfmarker] if we used the whole training set for normalizations, then [pause] David would have to process much more data, [speaker003:] Yeah. 
[speaker005:] which [disfmarker] That's a [disfmarker] that's one bottleneck, for us right, in terms of get [speaker003:] Oh, you mean for [disfmarker] for his normalizations. [speaker005:] Yeah. [speaker003:] Oh, oh, oh. I'm sorry. [speaker005:] Right. So, you wanna do the exact same thing, [speaker003:] Right. [speaker005:] or else [pause] you'll have apples and oranges. [speaker003:] Yeah. [speaker005:] So. It doesn't make [disfmarker] I don't think it makes that much of a difference. It's just this little detail that if you can take care of that, then you should. [speaker003:] Mm hmm. [speaker005:] I [disfmarker] I think I have [disfmarker] I have the models, I have [disfmarker] [comment] I have, um [disfmarker] let's see, um [disfmarker] Yeah. And if not I can retrain [pause] those models very quickly. [speaker001:] Oh, there's [disfmarker] there's one other issue, uh, and that is that Dave throws out speakers that have less than twelve seconds of training data. And he said there were a few in the Macrophone set like that. So, do we need to wait to find out who he's gonna throw out, so that we create a new set of short models that don't include those speakers? [speaker005:] Uh [disfmarker] [speaker003:] Say this again? Sorry I missed it. [speaker001:] So i in [disfmarker] [vocalsound] Th the problem is that if we proceed like we just described, um, [vocalsound] when he goes to m um, [vocalsound] create the new s training [pause] data with his processing, he throws out some speakers. So the t two training sets won't be identical. [speaker005:] Yeah. [speaker003:] He throws out some speakers that are [disfmarker] that are very small. [speaker001:] Yeah. Have just a little bit. [speaker005:] Yeah. I don't think it'll make a [disfmarker] matter. [speaker003:] Yeah. I think there was only a few [speaker001:] OK. [speaker003:] we thought to be the case. [speaker005:] In fact, I thought about throwing those out too, [speaker003:] Righ -? 
[speaker005:] because when I heard how little speech there was for some of them, I thought they can only hurt your models, because they're [disfmarker] again their normalizations will be all [disfmarker] all [disfmarker] all over the map, [speaker001:] Yeah. [speaker003:] Right. [speaker005:] and you won't get very [disfmarker] very clean models from them, anyhow. So. [speaker001:] So you think it's OK, then? [speaker005:] Yeah. In fact, if [disfmarker] if you wanna do this, uh, to speed things up, um, you [disfmarker] we can leave out the Macrophone data altogether. That hurt [disfmarker] Actually. Oh, no. Sorry. Not in the short. Then you have too little data. OK. Sorry. Forget that. Um, When you use [disfmarker] when you go to the large training set, then [pause] leaving out Macrophone actually sometimes helps you, because it's [disfmarker] it it's just not relevant to the [disfmarker] to the meeting and [disfmarker] or to conversational speech anyway. [speaker001:] It's read. [speaker005:] OK. Yeah. Leave it out, and [disfmarker] [vocalsound] Um, in the event that I retrain the short models, um, why don't you give me a list of the files that you throw out, and I [disfmarker] I'll throw them out, too. [speaker002:] Right. [speaker005:] And then we have complete [disfmarker] [vocalsound] completely [pause] identical training conditions. [speaker001:] We sh M Right. A actually you should be able to figure out, Dave, right, once you know the segmentations, uh, who you're going to [disfmarker] which speakers will get left out, even before you run your processing. [speaker002:] Mm hmm. [speaker001:] Right? [speaker002:] The segmentations? [speaker001:] Yeah. The segmentations from Don? [speaker005:] But th the segmentations are only [disfmarker] they only affect the test set. [speaker001:] Once we [disfmarker]? [speaker002:] Oh. [speaker005:] We're talking about the training speakers. [speaker001:] Ah. Right, right, right, right, right. [speaker003:] No. 
The training set he could go through right now, and see how [disfmarker] how long the [disfmarker] [speaker005:] Right. [speaker001:] Right. But I'm just wondering how long it will take to get that information. [speaker002:] Um [disfmarker] [speaker005:] He already has the in you already have the information. [speaker002:] I [disfmarker] I have i I have it for [disfmarker] for Macrophone, um, already, I think, [speaker005:] Right? [speaker002:] and, um, I think by tomorrow I'll have it for th for the rest. [speaker005:] Mm hmm. OK. Alright. [speaker001:] OK. [speaker005:] OK. [speaker003:] At that one? [vocalsound] Maybe. Uh [disfmarker] Been looking at synthesizers? [speaker004:] Synthesizers? [speaker003:] You were looking at Festival. [speaker004:] Yeah, yeah. I was doing something for [pause] the SmartKom data collectioners. [speaker003:] Yeah. [speaker004:] Robert has taken his laptop t back to Germany so we needed a new synthesis machine. And we have now a SUN workstation in the library, which does the synthesis and the Festival speech s s system is running on [speaker001:] Sorry, Robert did what? Robert took what? [speaker004:] His laptop where, uh [disfmarker] which we used for the SmartKom data collection, for the synthesis. [speaker001:] Uh huh. [speaker004:] And so he took it to Germany and [pause] so we couldn't do any data collection. [speaker001:] Oh. Is he gone now? [speaker004:] Uh [disfmarker] No. He's just gone to a SmartKom workshop. [speaker001:] Oh. Oh, oh. Oh, OK. [speaker004:] And so, we have now the SUN in the library which can do that. And I looked into the ts F zero thing and talked to [disfmarker] to Liz, and it seems that it's quite s what she wants, but we'll [vocalsound] have to think about the [disfmarker] the energy thing [disfmarker] uh, what we wanna do. [speaker003:] Right. This was a business about, uh, um, coming up with something that [disfmarker] that was purely prosodic. [speaker005:] Mm hmm. 
[speaker003:] And so, uh, we're just gonna use a pitch detector, drive a synthesizer, and since it doesn't have a hook in it for, uh, modifying energy, we'll have a little box at the output that'll modify the energy. So [disfmarker] [speaker005:] OK. [speaker003:] So. Rrrrr rrrrm Something like that. [speaker001:] Are you [disfmarker] are you interfacing to that thing with the C plus plus Or are you [disfmarker] is there another interface that you use? [speaker004:] For w for Festival? [speaker001:] Yeah. [speaker004:] Uh, you can just [pause] use it from the [disfmarker] Yeah. Basically from the command line, and the defining the phones, whatever you want to have synthesized, [speaker001:] Oh. [speaker004:] and [vocalsound] give the F zero targets, [speaker001:] And it saves it in a [disfmarker] [speaker004:] and then ts get [disfmarker] gives out a [disfmarker] a waveform, and I [disfmarker] I want to manipulate the [disfmarker] the waveform, then. [speaker001:] I see. Oh, that's neat. [speaker003:] OK? Digits? [speaker001:] Ah. OK. [speaker003:] That's all folks. [speaker002:] OK, so we're live. [speaker001:] OK. [speaker002:] Are we live or are we Memorex? [speaker001:] We're somewhere in between. [speaker003:] OK, so that's [speaker001:] OK, so we have some sheets for uh [pause] some standard doohickies to [disfmarker] [speaker003:] Wait. [speaker002:] Right. So. [speaker003:] You're going to sit there? [speaker002:] Is that alright? [speaker003:] Yeah, OK, maybe I'll put this thing here then. [speaker002:] OK. [speaker001:] OK, so this is February second, eh, two thousand, this is eh meeting number one [pause] with Adam and Dan and Morgan, about five o'clock uh in the afternoon. A range of microphones. [speaker003:] Yeah. [speaker001:] What's this one here? This [disfmarker] this one is [disfmarker] has a couple of cheesy electrets and [disfmarker] [speaker003:] Yeah, that's [disfmarker] [pause] that's the dummy [disfmarker] dummy PDA.
[speaker001:] That's connected too, or [disfmarker]? [speaker002:] Yep. [speaker003:] Yeah, it is. [speaker001:] That's connected. [speaker002:] Yeah, both channels. [speaker001:] It's got two [disfmarker] to both channels. [speaker003:] Oh, hang on. [speaker001:] Yeah. [speaker003:] Yeah, it is at the moment. [speaker001:] OK. [speaker003:] Yep. [speaker001:] OK. This is [disfmarker] [speaker002:] Although the gain [pause] is pretty low. So for the um [pause] read numbers task, I have extracted these [disfmarker] [speaker001:] Mm hmm. [speaker003:] Hang on, let's [disfmarker] let's [disfmarker] OK let's just [disfmarker] let's just name the microphones. [speaker002:] OK. [speaker003:] So I'm speaking on the [pause] ear mounted [pause] uh wired headset thing. OK. [speaker002:] Right and I'm now talking on microphone number two, uh, the wireless microphone number two. [speaker003:] Yeah. [speaker001:] Um, I have no idea which one I'm [disfmarker] I'm on. [speaker002:] I'm [disfmarker] [speaker001:] Oh one! [speaker003:] You're on one. [speaker001:] Oh, I'm number one. [speaker003:] Absolutely. [speaker001:] OK, [vocalsound] which is a [disfmarker] a l a lapel, in fact it's sticking through my [pause] lapel. [speaker003:] Yes. OK so this is [disfmarker] this is the PZM nearest the eh eh the machine room end of the table. [speaker002:] Actually, i [speaker003:] Then [pause] there's [disfmarker] I know, but yeah. It's number three, it says, but [pause] we'll be able to figure it out. This is at the middle of the table This further down the table. And this [disfmarker] this one is right at the end of the table, OK? And then we've got the uh [disfmarker] This is the left side of the dummy PDA and this would be the right side. [speaker001:] I think listening to this is gonna be like watching somebody's home movies. [speaker002:] Yes, it'll be pretty horrible. [speaker001:] Yeah, OK. [speaker003:] Very exciting. 
[speaker002:] So, what I have on these forms here is for the read numbers [disfmarker] read digits tasks and it's extracted directly from Aurora and uh [disfmarker] So what I was thinking is we could start just by filling it out and then reading [disfmarker] reading the numbers on the form. [speaker001:] OK, what do we do with the stuff on top? [speaker002:] Fill it out. [speaker001:] But I [disfmarker] Do we say it after we [disfmarker] [speaker003:] No. [speaker002:] No, you don't need to. [speaker003:] Just write it down. [speaker001:] What are we [disfmarker] why are we writing it down? [speaker002:] So that [pause] when we [pause] transcribe the data we can figure out who said it and their gender and the date and the time and what mike they were on and all that sort of thing. [speaker001:] Oh. OK That's good. So I don't need to fill that out right now. So maybe we should just say it. [speaker002:] W Why don't you want to fill it out right now? [speaker003:] OK. [speaker002:] Just so that [disfmarker] [speaker001:] So then we're not spending the time writing it out. So that we're spending the time talking. [speaker003:] Well, how about [pause] you give me one of those and we can fill it out and Morgan can read and then we can do it like that. [speaker001:] I'm just uh concerned about a bunch of dead time while we all sit here and fill out the things. [speaker002:] OK. [speaker003:] Yeah, yeah, yeah. [speaker001:] Right? [speaker002:] And so you have to be sure to pause [pause] between each line, since we're going to be segmenting it and then [pause] doing it that way. [speaker001:] Oh. Right. OK, so this is Morgan. I'm going to read some numbers now. That's my group. [speaker002:] So I definitely wasn't pausing enough between them. [speaker003:] This is [disfmarker] these are things we will find out. [speaker002:] That'll be an issue. Yep. [speaker001:] Yeah. I don't know if I said mine was transcript zero, but it was. [speaker002:] Mm hmm! 
[speaker001:] Oh, good. [speaker003:] OK. [speaker001:] OK. That's a little bit. [speaker003:] Can I have something to write on the back of? [speaker001:] So, so where are we in this? [speaker003:] Thanks. [speaker001:] Let's see, so we now [disfmarker] we now have a few mikes that work. How long did it take you to set this up? [speaker003:] Oh! It didn't [disfmarker] doesn't take [disfmarker] Well, I mean as you see we haven't really set it up that well, but it doesn't [disfmarker] it doesn't take long to set up. I mean, We just came up here [pause] one night after recording so it took like twenty minutes something like that. [speaker002:] Right and that [disfmarker] that included a fair amount of fiddling around, since it's [disfmarker] it's more or less the first time we've tried to do it. [speaker003:] I mean it's [disfmarker] Yeah, [pause] I mean the equipment [disfmarker] Right. The equip Me and Adam went through it yesterday, just looked at everything. But it's all kind of here and set up, so it's just a question of turning it all on. And, um [disfmarker] running the program. [speaker002:] I mean, ev eventually we'll want to like tape that over and run them up through the center so that wires aren't [disfmarker] aren't [disfmarker] people aren't tripping over the wires and so on. [speaker003:] Right. [speaker001:] Are we gonna [disfmarker] I mean, is it gonna be over there, or is it gonna be in there? [speaker003:] Um, it's [disfmarker] it's gonna b it's [disfmarker] for reasons [disfmarker] because of the microphones runs we want that chunk there. There's more equipment in there. But there's [disfmarker] [pause] the idea is that there are [pause] these microphones on the table, there are some cables coming down [pause] sort of under a nice [pause] mat, and then that's [disfmarker] [pause] they [disfmarker] a bunch of them get sort of converged with digital there. 
And the idea is that this gets replaced by a cabinet with doors, so that [disfmarker] [pause] so that it's not [pause] open to fiddling. [speaker001:] So that's what's gonna happen. [speaker002:] Right. [speaker003:] That's [speaker002:] Do we have like a [pause] cabinet on order or do we just need to do that? [speaker003:] We need to do that. It's on a list of things to do. I keep on hoping that [pause] they're gonna get done but they don't. [speaker002:] Well, I [disfmarker] [vocalsound] I can help them get done um one question is budget. [speaker003:] I [disfmarker] Yeah! [speaker002:] Do we have any money at all that we can go out and spend on things like cabinets or a hard drive or things like that. [speaker001:] Oh [disfmarker] I mean, I don't know. Did we s did we spend our [disfmarker] the budget already that we had? [speaker003:] Yeah. [speaker001:] Uh, how much [pause] are we talking about here? [speaker003:] Um. I don't know. A cabinet is probably going to cost [pause] a hundred dollars, two hundred dollars something like that. [speaker001:] Yeah, I mean, you know, we [disfmarker] we can spend under a thousand dollars or something without [disfmarker] without worrying about it. [speaker003:] I mean. Yeah. [speaker001:] I mean, I think [disfmarker] um [pause] I'm more worried about things being kept in a funky state long enough that [disfmarker] that uh stuff gets broken or [disfmarker] or you know. [speaker003:] Yeah. [speaker002:] Well, I think our intention at this point is when we're not using it to record a meeting we'll coil them all up and put them under [pause] so that people will not be tripping over them and so on. [speaker001:] Yeah. [speaker002:] So. [speaker003:] But, [pause] well we should get to a [disfmarker] a nice kind of working state of our set up [pause] as quickly as possible. 
[speaker001:] Yeah, and there's also the minor matter, it'd be kind of nice to able to [pause] bring people in and say "yes, this is where we do such and such", and then be able to see uh, you know, [pause] some kind of [disfmarker] I mean it's less important than doing what we're doing right now [speaker002:] Right. Not have it [disfmarker] [speaker003:] Yeah. [speaker001:] but [disfmarker] [speaker003:] Yeah. [speaker001:] So. [speaker003:] I mean [pause] there's one big issue with the equipment still, which is um ultimately we're gonna [disfmarker] well the idea is to have any number of these wired headsets, but there's this amplification problem. I built this thing, it's [disfmarker] that's [disfmarker] it's very noisy. We're actually recording [disfmarker] I'm [disfmarker] I'm going through it at the moment, [pause] but it's so noisy that I am not using for this. it's [disfmarker] it's designed to be used for the [disfmarker] for the desktop meeting as well. But I'm using [disfmarker] running that through my uh [pause] damp machine as a pre amp as an alternative. [speaker001:] Well, we should be able to just go buy some preamps, [pause] I would think. [speaker003:] Yeah. It's just [disfmarker] it eh You see how clumsy this is right? I have to use these batteries to bias them. And I'm only get two channels and there're a bunch of [disfmarker] [pause] There are like three connectors in the circuit. The thing is, like, it's [disfmarker] it's not s it's not that the thing doesn't exist [pause] commercially, but there isn't one unit which does the whole thing in one [disfmarker] in one [pause] swoop. So it seemed like it was a good idea to make one. um And I'm still not sure. I mean, I think [pause] I probably know how to fix this. [speaker001:] Yeah. [speaker003:] I can probably build it better so it doesn't have such noise, but it's kind of a buy. [speaker002:] Well, ultimately we're gonna need to build one anyway. [speaker001:] Are we? [speaker002:] Sure. 
[speaker003:] Well, we could just [disfmarker] [speaker002:] Because we we're gonna have to put it in a PDA. [speaker001:] Well. [speaker002:] Right? If we have microphones in a PDA it's gonna need bias and [pause] a pre amp. [speaker001:] I guess I'd sort of figured that [disfmarker] that Jim would do that at some point. [speaker002:] Mm hmm. [speaker001:] Right? [speaker003:] Yeah. Well n Yeah. Jim is busy at the moment. That's the problem. I was kind of hoping that Jim was going to build this box. But um [disfmarker] but he hasn't had time. [speaker001:] Yeah. [speaker003:] Well, I think [disfmarker] I think he is quite interested in doing it. But. [speaker001:] OK. Well If it's too noisy and if there's something we can buy, again for, you know, moderate amount of price, even if it takes a couple boxes or something, maybe we should just do it. [speaker003:] Yeah. Yeah. [speaker001:] And then [disfmarker] and then later we can [pause] uh replace it with [disfmarker] Because there may be other considerations [pause] that will go into the design. I mean, uh our getting some experience with this may help to determine [pause] some of things that are [disfmarker] I don't know [disfmarker] might change this design, So. [speaker002:] Mm hmm. Right. [speaker001:] There was uh a statement made uh which I don't think is right, by the way, but [disfmarker] [pause] there was a statement made at the uh DARPA Communicator meeting I was at recently that um [pause] [vocalsound] microphone [disfmarker] multiple microphones [pause] at a distance of less than [pause] three feet or something like that were totally useless, completely irrelevant. [speaker003:] Oh! [speaker001:] Yeah. [speaker002:] You [disfmarker] you mean for noise cancelling, or [disfmarker]? [speaker001:] for nois noise cancelling. Um, my counter argument is [disfmarker] is an [disfmarker] uh uh um, a white rat. uh, which [pause] has th its two microphones exceedingly close together and it works pretty well.
[speaker003:] Oh. [speaker001:] Um but [disfmarker] Yeah. [speaker003:] OK. This happened. [speaker002:] Do they mean [pause] omnidirectional, is that their point? I mean all the noise cancelling mikes, they're [disfmarker] you know, they're a few m millimeters away from each other pointing in different directions. [speaker003:] I think [disfmarker] [pause] uh, I mean, uh my guess is that means it's just that if it's [disfmarker] if it's near field, [pause] the mathematics of beam [disfmarker] [pause] beam steering is different. [speaker002:] But. Oh, I see. [speaker003:] If the f if the wavefronts are not approximately planar [pause] then you can't just do delay and sum and expect it to work. [speaker002:] It's [disfmarker] Oh, OK, so they're not talking about [pause] how far apart the mikes are, they're talking about how far they are from the speaker. [speaker003:] I guess. Yeah. [speaker001:] Right. But I think that if [disfmarker] my point is I guess if you [disfmarker] if you have a um [disfmarker] a uh s a volume in between [pause] the two [disfmarker] the two uh uh sensors and you do something cleverer, there clearly are ways [pause] to get more out of it. [speaker003:] Yeah. Yeah. Yeah, yeah. Right, right, right. I mean the [disfmarker] um The [disfmarker] the weird thing is that actually [pause] delay in some beam forming is not [disfmarker] is not the clever thing to do. [speaker001:] Yeah. [speaker003:] The [disfmarker] the [disfmarker] the underwater guys do this stuff which is [pause] different. And it actually means that you should get the microphones as close together as possible. Rather than trying to get them far apart so you can get [pause] separation, you sort of [disfmarker] [pause] you want them very close. And [disfmarker] And [pause] the math is too difficult for me to understand but it's [disfmarker] but it's there. [speaker001:] Yeah. 
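[Editor's illustration: the "dumb thing" referred to here — delay-and-sum beamforming under the far-field, plane-wave assumption — can be sketched as below. This is a toy version with hypothetical names, not anything built at ICSI; as noted in the discussion, near-field sources (wavefronts that are not approximately planar) need different steering math.]

```python
import numpy as np

def delay_and_sum(channels, mic_positions, source_dir, fs, c=343.0):
    """Far-field delay-and-sum beamformer (plane-wave assumption).

    channels:      one equal-length 1-D signal per microphone
    mic_positions: (n_mics, 3) mic coordinates in meters
    source_dir:    unit vector pointing from the array toward the source
    fs:            sample rate in Hz;  c: speed of sound in m/s
    """
    # The wavefront reaches mics with larger (position . direction)
    # first; shift each channel forward by its relative lateness so
    # the arrivals line up before averaging.
    delays = -(mic_positions @ source_dir) / c
    delays -= delays.min()                      # make all delays >= 0
    shifts = np.round(delays * fs).astype(int)  # integer-sample shifts
    n = min(len(ch) for ch in channels) - int(shifts.max())
    aligned = [np.asarray(ch)[s:s + n] for ch, s in zip(channels, shifts)]
    return np.mean(aligned, axis=0)
```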
Well, there are various points in meeting where people were making uh uh very strong statements as a matter of fact, like one shouldn't even question it and [pause] this was one of them that [pause] was kind of [pause] funny. Um but I [disfmarker] I [disfmarker] I [disfmarker] what I take it to mean is doing [disfmarker] doing the dumb thing uh with two microphones would be very difficult to get too much out of it [speaker003:] Yeah. Yeah. [speaker001:] but I think it will be interesting to do other things that aren't dumb. So I [disfmarker] I still think it's potentially [comment] [pause] interesting to do that. [speaker002:] Hmm. [speaker001:] Although we we'll have to be uh a aware of this if we're writing proposals about it, wherever, that there may well be reviewers who will say, "well everyone knows that [pause] you can't get anything out of it if it's [disfmarker] [vocalsound] if it's close". [speaker003:] Yeah. [speaker002:] Do they believe you can get speaker location out of it? Cuz that's [disfmarker] that's one way we could sneak it in. [speaker001:] Uh, that is an interesting point. Yeah. [speaker003:] Yeah. [speaker001:] Certainly you can get some [disfmarker] [speaker003:] Well we could [disfmarker] I mean. By the time [disfmarker] [pause] We'll have [disfmarker] we'll have data so we can actually [pause] you know, demonstrate that. [speaker001:] Yeah. [speaker002:] Well. [speaker001:] Hopefully. [speaker002:] Oh, you mean for location? [speaker003:] Yeah. [speaker002:] OK. [speaker001:] Well, we're getting some [disfmarker] we we're getting some right now. [speaker002:] Depends on who's working on it. Yeah, right. but someone has to write some software to [disfmarker] to actually do it. [speaker003:] Right. Right. But I mean. [speaker001:] Well someone someone might be interested in [disfmarker] in it. [speaker002:] Someone might. 
Oh that's uh [disfmarker] I was just thinking one thing we may wanna do is put the fake PDA right in front of someone [pause] instead of in the middle of the table like we have it now. [speaker003:] Yeah. [speaker002:] on the thought that [disfmarker] my vision of it is you know each of us will have our little PDA in front of us [pause] and so the acoustics [disfmarker] uh you might want to try to match the acoustics. [speaker001:] Maybe. [speaker003:] Yeah. [speaker001:] But on the other hand, [pause] suppose that uh, you're the only one uh who is advanced enough in this thinking to actually bring such a PDA to [disfmarker] to a meeting [speaker002:] No, and I don't need to meet with those people. [speaker001:] and [disfmarker] and [disfmarker] [vocalsound] and therefore you bring it in, and you probably want it to be as near to people as possible so you might in fact shove it to the middle of the table. [speaker002:] Mm hmm. [speaker001:] Well. [speaker003:] Yep, yep. For those those we can make several [disfmarker] several more of these [comment] and then we can have actual [pause] recording with you know three [pause] P D [speaker001:] Yeah. Well, I think we should. I mean, I think at least with two. [speaker003:] Mm hmm. [speaker001:] Right? [speaker002:] Right. [speaker001:] Because uh one of the th one of the fantasies that was at least fun to talk about was that uh uh OK maybe you th [pause] you're [disfmarker] [pause] the range of things you can do with multiple microphones is limited if they're a few inches apart, but what if you have two of these things? [speaker003:] Yeah. [speaker001:] And you know we have this [disfmarker] this idea of the handshake [pause] back and forth and say "I'm here, I'm here" [speaker003:] Yeah. [speaker002:] Right. [speaker001:] "We're now an array." [speaker003:] Right, right. Yeah. 
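The "I'm here, I'm here / we're now an array" handshake the speakers are imagining can be caricatured in a few lines: each device announces itself on a shared channel, and any device that has heard at least one other treats itself as part of an ad-hoc array. Purely a toy in-memory sketch of the idea; every name here is hypothetical:

```python
class Device:
    """Toy model of the PDA handshake: devices on a shared channel
    announce themselves, and anyone who has heard a peer considers
    itself part of an ad-hoc microphone array."""

    def __init__(self, name, channel):
        self.name = name
        self.peers = set()
        self.channel = channel
        channel.append(self)          # join the shared medium

    def announce(self):
        """Broadcast 'I'm here' to everyone else on the channel."""
        for other in self.channel:
            if other is not self:
                other.peers.add(self.name)

    def in_array(self):
        """True once at least one other device has been heard."""
        return len(self.peers) > 0
```

A real implementation would of course need discovery over an actual wireless link plus clock synchronization between the devices, which is the hard part.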
And that [disfmarker] that seems very uh [pause] easy to believe scenario, I mean, given the number of palm pilots we've got on the table right now, so. [speaker002:] That's right. Three, right? [speaker003:] Yeah. [speaker001:] Yeah [speaker002:] So I guess this a pretty dense group but it it's getting a lot higher. [speaker003:] Yeah, but [disfmarker] But that [disfmarker] it's We're not, even, you know [pause] [disfmarker] It's [disfmarker] we're just [disfmarker] It's not like we [disfmarker] We [disfmarker] we may be technical, but we're not particularly PDA people. So it's [disfmarker] That's just how it is. [speaker002:] Well, I'm trying to be. So. [speaker003:] Well, I know. Well, that's true. I mean. [speaker002:] Except I s got a PDA way before [disfmarker] Well I was doing hand I was doing wearable computers at Boeing, so I guess I sort of count. [speaker003:] OK. [speaker002:] But. [speaker001:] So uh [disfmarker] so where are we [pause] uh now? I mean so we've just [disfmarker] uh We're doing this recording so we show [pause] we can [disfmarker] we're capable of doing recording [speaker003:] Right. [speaker001:] uh there's [pause] I guess a little more wiring to do or something to have [disfmarker] to have more [disfmarker] if we had more people participating in the meeting we'd wouldn't quite be ready for that. [speaker002:] Mm hmm. [speaker001:] Uh. [speaker003:] Well we've got [disfmarker] we've got [disfmarker] Right now we could have two more people on wireless, [speaker001:] Mm hmm. [speaker003:] with a trip to Leo's Audio we could get a third extra person on wireless, and uh [pause] right now we could have three more [disfmarker] [pause] three more people wearing mikes like this, and that would just work. I mean I'm not sure how [disfmarker] [pause] how good the quality is on this mike but I think it's going to be OK. 
So we could actually have [pause] eight people with [disfmarker] with headset mikes [pause] with, I mean, basically no extra effort. [speaker002:] Well. Five with wireless. So we could have eight. [speaker003:] Yeah. Nine. [speaker002:] Nine. Excuse me, nine. [speaker003:] Yeah. [speaker002:] Five with wireless and four without, and four with wired. [speaker003:] Yeah. [speaker002:] Um, we don't have [pause] headsets though. [speaker003:] Yeah. We have the um [disfmarker] the ones we've been using, the ones we got for the PC. [speaker002:] Oh, OK so we just have to c [pause] wander around and collect them up. [speaker003:] Yeah, yeah. I think we have three of those. [speaker002:] Mm hmm. [speaker003:] So, and the h [speaker002:] Uh, there's one in my office. So. [speaker003:] Right, right. They don't all work, but the nice ones, the um Plan Plantronic ones work. [speaker001:] OK. [speaker002:] So, we'll just have to see if that works. [speaker001:] So a little bit of futzing, there's the issue about a cabinet, we might want to change some pre amps. [speaker003:] Yeah. [speaker001:] So we can do some more recordings but there's [disfmarker] there's a little [disfmarker] little bit of ramp up [pause] on that over the next month, [pause] say. [speaker003:] Yeah, yeah. [speaker002:] Right and also I would like to work out some of the [pause] software issues a little more [pause] completely. So, um, you know, we're gonna want to resample and con and bring them over to another machine and uh [pause] um [pause] organize the [disfmarker] how we're gonna store it all. [speaker003:] Yeah. Yeah. Yeah. Yeah. 
[speaker001:] Yeah we were talking about that a little bit over coffee today, that [disfmarker] that um uh if you downsample it sounded like it came [disfmarker] it sounded like it was something like a gig gigabyte an hour if you didn't downsample and once you downsampled then it was something like four hundred megabytes an hour, if you used all [disfmarker] if you used all [disfmarker] [speaker003:] Per channel, or [disfmarker]? [speaker001:] no, if you used all the channels. Something like that? [speaker003:] We figured out that it was t twelve gig twelve gigabytes an hour? [speaker002:] It was more than that. [speaker003:] For [disfmarker] for all the channels? Was that with or without the [disfmarker] [speaker001:] I thought it was one gigabyte. It couldn't be twelve. [speaker003:] Cuz no, we've only [disfmarker] no, I mean we've only got four gigabytes and that [speaker002:] We said, four gigabytes [pause] would give us twenty minutes [pause] before we did anything else. [speaker003:] Yeah. Right. And then we [disfmarker] then we cut it in half. [speaker002:] And then we cut in half, [pause] by [disfmarker] [speaker001:] How'd you cut it? [speaker003:] But [disfmarker] By s [pause] saving sixteen bits on instead of thirty [disfmarker] instead of thirty two. [speaker001:] OK, so that's [disfmarker] so that's two gigabytes. And then [disfmarker] [speaker003:] Well no I mean it's uh four gigabytes for forty minutes. [speaker002:] That's forty minutes [speaker001:] Right. [speaker002:] So, six gigabytes. [speaker001:] But that's at forty four kilohertz with sixteen channels. [speaker003:] Forty eight kilohertz, [pause] yeah. So we can actually triple that. [speaker001:] So you can [disfmarker] right, so you can [speaker002:] Yeah. [speaker003:] So, hang on [disfmarker] so that would [disfmarker] OK [speaker002:] I came out at the gig an hour, [pause] at one point. That [disfmarker] that's one computation I did for sixteen channels, sixteen bit. 
[speaker003:] Well I think it's [disfmarker] I think it's two gig an hour. [speaker002:] Is it two gig an hour? I'm off by an order of magni That's alright. Order of two. [speaker003:] Cuz like, we don't [disfmarker] [pause] Maybe we don't [disfmarker] We don't need to save all channels, [speaker002:] Factor of two. [speaker003:] right? So, on average. We could [disfmarker] we don't [disfmarker] we [disfmarker] we wouldn't use all the channels, necessarily. [speaker001:] Oh, we'll do the higher math later, maybe but it's [disfmarker] so it's [disfmarker] so maybe like a gig or two a [disfmarker] an hour, um but [disfmarker] um [disfmarker] [speaker003:] Yeah. [speaker001:] OK, that is a bit more than I thought. [speaker002:] How much here would we have to reformat the drive in there to get some of the more space cuz it [disfmarker] I mean it's a nine gig drive and there's four gig on sc on scratch and there's [pause] about three gig unused, it looked like, on the home directory. [speaker003:] Yeah. I'm sure. Yeah. [speaker002:] I just don't know enough about LINUX. How hard is it to [pause] repartition? [speaker003:] It's a pain. [speaker002:] It's a pain. [speaker003:] I mean, you know, because [disfmarker] cuz we've got data on "user", right? And then we'd have to back it all up and [pause] copy it back on again. [speaker002:] Mm hmm. [speaker001:] Yeah so this'll take some thought cuz we have [disfmarker] we [disfmarker] um If we are in fact gonna have forty or fifty hours at some point and we [disfmarker] we wanna ha have part of our work be on feature extraction, it means that we need to have waveforms available [pause] somehow. [speaker003:] Yeah. Hmm. [speaker002:] Right. So I've [disfmarker] I've already created a "U [disfmarker] U, doctor speech, data, MR" directory, [pause] or had Jane do it and uh [pause] um [disfmarker] but there's not a lot of free space around. [speaker003:] No. 
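The figures being juggled in this exchange are all consistent once the parameters are pinned down: 16 channels of 32-bit samples at 48 kHz is about 11 GB/hour ("twelve gig an hour"), keeping 16 bits halves that ("four gigabytes for forty minutes"), and downsampling to 16 kHz cuts it by three again, landing near the "two gig an hour" figure. A quick check (taking 1 GB as 10^9 bytes):

```python
def rate_gb_per_hour(sample_rate_hz, bytes_per_sample, n_channels):
    """Raw PCM data rate in gigabytes per hour (1 GB = 1e9 bytes)."""
    return sample_rate_hz * bytes_per_sample * n_channels * 3600 / 1e9

# 48 kHz, 32-bit words, 16 channels -> the "twelve gig an hour" ballpark
full = rate_gb_per_hour(48_000, 4, 16)    # ~11.1 GB/hour
# keep 16 of the bits -> "four gigabytes for forty minutes"
halved = rate_gb_per_hour(48_000, 2, 16)  # ~5.5 GB/hour
# downsample to 16 kHz -> the "gig or two an hour" figure
down = rate_gb_per_hour(16_000, 2, 16)    # ~1.8 GB/hour
```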
[speaker002:] So it [disfmarker] it [disfmarker] it has currently about four gig on the drive that I think she's going to put it on um. [speaker003:] Mm hmm. Mm hmm. [speaker002:] And so we need [disfmarker] we definitely need to go [pause] buy [pause] uh [pause] you know forty gig [disfmarker] fifty gig of drive [disfmarker] drive space. [speaker003:] Yeah. We don't need all [disfmarker] we don't necessarily need it all on line. I mean it will be different subsets. So you won't [disfmarker] you won't, you know, do all [pause] sixteen channels in a single [disfmarker] [speaker002:] That's true. I hadn't even thought about that, although it'd be kind of interesting. [speaker003:] Yeah. [speaker001:] What would you do? [speaker003:] You typically train [disfmarker] I mean you'd have fi you'd have fifty hours [speaker001:] I mean [disfmarker] [speaker003:] we'd only use [pause] one of the PZM channels, something like this, and so that would you wouldn't have to use all sixteen channels [pause] at once. But it's kind of [pause] painful for [disfmarker] because basically, if we put them on C Ds then each CD will have [pause] all the channels for one [pause] recording, and then when you want to build the P files, you just want to get one of those files off each of [pause] the sessions and [disfmarker] [speaker002:] Yeah, it's pretty horrible. [speaker001:] So what would we do for training? So if we're [disfmarker] if we're training distant mi we'd just use one of them? [speaker003:] Yeah. Well, maybe two of them, but I mean we wouldn't use [disfmarker] we'd only use the distant mikes, [speaker001:] OK. [speaker003:] right? [speaker002:] Yeah so it might be four. Right? But it wouldn't be sixteen. [speaker001:] Well, although we wanted to get alignments from the close mike [pause] stuff. [speaker002:] But that's a separate training. [speaker001:] Right. But you don't want to be constantly take [disfmarker] putting that on disk and off disk and on disk. 
[speaker002:] No wait. More disk is better. Absolutely. I mean [pause] my [disfmarker] my preference would be to get [disfmarker] get ourselves a hundred gig of disk. [speaker001:] Yeah. The thing is it's really not very expensive right now. [speaker003:] Yeah. [speaker002:] And just not bother. [speaker001:] So it's more of an issue of [disfmarker] of structuring it with uh the uh Sys Admin folks so we don't panic them and figuring out where it goes. [speaker002:] And back up. [speaker001:] Well, we don't [disfmarker] it doesn't need to be backed up. [speaker003:] Right if [disfmarker] if we [disfmarker] if we burn it all on CD ROM, then, um, [speaker002:] Right, then it doesn't need to be backed up. [speaker003:] in fact, we could you know [disfmarker] we could throw away the Broadcast News stuff and put it on there or something like that. [speaker002:] Yeah, we were talking about that. [speaker001:] Yeah. [speaker002:] I'm not sure if anyone's [pause] still using it. [speaker001:] I think the only one using it has been Javier, [speaker003:] Somebody used it recently. [speaker001:] but at this point I think it's [disfmarker] he's only been using it for features that he calculated quite a while ago, so if nobody else is doing [pause] feature calculation [disfmarker] [speaker003:] Yeah. [speaker002:] I'll send mail to speech local. [speaker001:] Pardon? [speaker002:] I'll send mail to speech local and see if anyone's still using it. [speaker001:] How much space is that using up? Do you have any idea? [speaker003:] It's uh twenty C Ds, so that's twelve gigabytes. [speaker002:] So that would last us for a while. [speaker001:] Yeah, couple hours! Well, anyway for this [disfmarker] this recording we're fine [speaker003:] Yeah. [speaker001:] cuz we're only [disfmarker] we're only [disfmarker] how many mikes are we recording right now? [speaker002:] Well, right now we don't have any choice. It r still records all sixteen channels. It's just most of them are all zeroes. 
So that [disfmarker] that's on our list too. [speaker001:] OK. Yeah, well that we can do something about. [speaker003:] Yeah. [speaker002:] Yep. It's mostly a question of user interface. How do we [disfmarker] how do we specify which ones are on? [speaker001:] Yeah. [speaker002:] Yeah, so we don't store any of our audio formats compressed in any way do we? So these large sections when you guys aren't talking [disfmarker] [speaker003:] Yeah. Probably, we could [disfmarker] we could probably just run Shorten on them or something and it will get a big win. I think that's a good [disfmarker] [speaker002:] Run what on those? [speaker003:] Shorten. Tony Robinson's [pause] uh lossless compression thing. [speaker002:] OK. [speaker001:] What's it do? [speaker003:] Um, it [disfmarker] [pause] it does [pause] AR [pause] modeling and so it looks at a section, [pause] it runs an AR model to sort of r w to whiten it [pause] and then quantizes [pause] the residual [pause] with a nice [disfmarker] with the three bits it can get away with. Something like that. So I think it [disfmarker] On [disfmarker] on each block it'll [disfmarker] [pause] it'll compress you know if there's load on in any range it'll use ratio bits. [speaker001:] Mm hmm. [speaker003:] I think it does that. [speaker002:] Yeah, so, P Z [speaker003:] Yeah, well. [speaker002:] Yeah. Yeah, it's something that [disfmarker] that we definitely need to look at with this. Maybe we shouldn't've [pause] subsampled it down to sixteen bits [pause] for our first try. [speaker003:] Well, it's OK, it's only the first try. I [disfmarker] what I'm thinking is um [pause] obviously we can do the [disfmarker] we can [disfmarker] we can choose any sixteen bits from the twenty four, so we can just have different gain things that we use [pause] you know per channel. [speaker002:] Right. Yeah, so we might wanna see if maybe the low sixteen bits on the P Z Ms are better than the middle [pause] sixteen bits. [speaker003:] Yeah. 
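The core trick of Shorten as described above (predict each sample, then code only the small residual with however few bits it needs) can be illustrated with the simplest fixed predictor: predict each sample as the previous one. Because everything is integer arithmetic, decoding is exactly lossless. This is a sketch of the principle, not Shorten itself, which uses higher-order polynomial predictors and packs the residuals:

```python
def encode_residuals(samples):
    """First-order fixed predictor: predict x[n] ~= x[n-1] and emit the
    residuals. On smooth audio the residuals are small, so a back end
    can pack them in far fewer bits than the raw samples."""
    prev, out = 0, []
    for x in samples:
        out.append(x - prev)
        prev = x
    return out

def decode_residuals(residuals):
    """Exact inverse of encode_residuals: integer arithmetic, so the
    round trip is lossless."""
    prev, out = 0, []
    for r in residuals:
        prev += r
        out.append(prev)
    return out
```

On the long near-silent stretches when a given talker isn't speaking, the residuals hover near zero, which is where the "big win" in compression comes from.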
I think [disfmarker] [pause] probably the bottom four bits are not any good, [speaker002:] Regardless, yeah. [speaker003:] Right? Although you know it claims they are. But probably the [disfmarker] [speaker002:] W I was just noticing on the [disfmarker] on the [disfmarker] there's a view meter, software view meter and the PZM gain is real low. [speaker001:] Yeah, so I think initially we should take up more space until we've analyzed this and get a better sense of it [speaker002:] So. [speaker001:] because [pause] uh we don I mean we're not talking about forty or fifty hours at this point [speaker003:] Mm hmm. [speaker002:] Yeah. [speaker001:] we're talking about [pause] a half hour of [disfmarker] of speech [speaker002:] Yep. [speaker001:] and so [pause] uh we should look at this carefully [speaker003:] Yeah. [speaker001:] but [disfmarker] Yeah, I was [pause] interested when I saw the twenty four on there. [speaker003:] Oh, yeah! [speaker001:] Wow, I mean that's probably eight bits of noise. [speaker003:] Oh yeah! [speaker001:] Yeah. So. [speaker002:] Yeah, that's right. We've got lots of bits of noise. [speaker003:] These theories things are pretty amazing. [speaker001:] Yeah. [speaker003:] And And there's help [disfmarker] help they seem [disfmarker] they seem good though they seem to be [speaker001:] It's some sort of oversampling thing I imagine, right? Didn't we already get that? [speaker003:] Oh, God knows. Yeah, yeah. [speaker001:] Cuz you [disfmarker] you can't have [pause] uh component tolerances that are [pause] uh [pause] sixteen million [disfmarker] one in sixteen million parts, [speaker003:] Right. [speaker001:] so. [speaker003:] Right. [speaker002:] Right the other [disfmarker] other thing [pause] another issue is just getting a sound card up here. You know, I assumed that we could just put a sound blaster in if we want sound output. 
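Choosing "any sixteen bits from the twenty four" per channel amounts to an arithmetic shift plus clipping: a large shift keeps the top bits, while a smaller shift keeps more low-order bits for a quiet channel like a PZM (at the cost of clipping on loud peaks). A sketch; the function name and the clip-rather-than-wrap choice are assumptions:

```python
def pick_16_of_24(sample_24bit, shift):
    """Keep 16 bits of a signed 24-bit sample, starting `shift` bits up
    from the bottom (shift=8 keeps the top 16 bits; shift=0 keeps the
    bottom 16). Clips on overflow instead of wrapping."""
    x = sample_24bit >> shift            # arithmetic shift, sign-preserving
    return max(-32768, min(32767, x))
```

Per-channel gain staging then just means picking a different `shift` for each input depending on how hot its signal runs.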
[speaker003:] That [disfmarker] that card [disfmarker] that machine's got sound [disfmarker] They've got a sound blaster built into it. [speaker002:] Oh, it's just not working. [speaker003:] I just [disfmarker] I just disabled it cuz I wanted to find out [pause] how well the [pause] sound infrastructure would deal with both at once [speaker002:] OK. [speaker003:] but I think yeah, that's, uh [disfmarker] [pause] probably it will be fine. [speaker002:] Yeah, that way we don't [disfmarker] we can't listen to it [pause] on this machine. [speaker001:] I'm s [speaker003:] There's no, [speaker001:] Oh! [speaker003:] For a couple of reasons. Well, but the [@ @] [disfmarker] [pause] There are two ways we could listen to it. There's a sound [disfmarker] there's actually sound output built into this thing. [speaker001:] Yeah. [speaker003:] But the driver [disfmarker] I haven't [disfmarker] the driver may not support it yet. And then there's also the sound or the sound card built in. [speaker001:] Hmm. [speaker003:] But I don't know how to [disfmarker] I don't know how the [pause] operating system would like having two sound cards. [speaker001:] But we can FTP stuff over, right? [speaker002:] Right. [speaker001:] So [pause] it's connected. [speaker003:] Yeah. [speaker002:] And we've looked at it so we know it's working. I mean, you can see the little squiggles, and it looks like speech [pause] so [speaker003:] Yeah. [speaker001:] Yeah. [speaker002:] You just can't listen. [speaker001:] Yeah. OK. Um. One suggestion I have is that um I think this [disfmarker] [pause] maybe we should just read some more [disfmarker] you made more of these, right? [speaker002:] Right. [speaker001:] I think we should read some more because [disfmarker] because this is very f [speaker002:] OK. 
[speaker001:] we [disfmarker] [pause] we need to [disfmarker] to work the [disfmarker] the [disfmarker] the uh [disfmarker] the talkers more because we're not gonna have that many meetings [speaker002:] Mm hmm. [speaker001:] and this is a pretty tiny amount of data. [speaker002:] Mm hmm. [speaker001:] So maybe we should do another [pause] number list and [disfmarker] [speaker002:] Well right I [disfmarker] I just did ten per page I could easily do twenty per page. It's just a question of uh [pause] of how much uh annoyance I wanted to put uh people going to meetings through. [speaker001:] Yeah. Well this is us. We'll have a fair amount of annoyance [speaker002:] Mm hmm. [speaker003:] Yeah. [speaker001:] yeah I think uh. So, can we do [disfmarker] we do another set? [speaker002:] Sure. [speaker001:] Are we [disfmarker] are we [disfmarker] are we [disfmarker] sort of [disfmarker] [speaker003:] Sure. [speaker001:] we talked out about what we're [disfmarker] [speaker003:] Well no, OK, the question I still have about what we're going to do is what is [disfmarker] what [disfmarker] eh [disfmarker] OK, so we can get the equipment set up so that basically we're in a situation where we can [disfmarker] [pause] with, you know, ten minutes of preparation, we can bring [pause] nine people in here and do a multi channel recording. [speaker001:] OK. Right. [speaker003:] Um. What are [disfmarker] wh When are we [disfmarker] Are we going to do that? Who are they going to be? What meetings are we going to record? [speaker002:] Us, to begin with. [speaker003:] So which [disfmarker] which meetings [disfmarker] What meetings do we have? [speaker002:] I think that's the intention. [speaker003:] I mean like "us" being the Meeting Recorder people or "us" being anything that speech [pause] local does. [speaker002:] Well, I think [disfmarker] [pause] I think, um, certainly us, Meeting Recorder. [speaker003:] Yeah. [speaker002:] Um, us other groups. 
It just depends on how many meetings we're [disfmarker] we're having. Um. Jerry also said it would be [disfmarker] he would be willing to have his group, some of the AI meetings do it as well. [speaker003:] And we should have like [disfmarker] we should have Decoder meetings and stuff. [speaker002:] And uh Exactly exactly. I think there are enough [pause] incidental meetings for us to [disfmarker] to do this. And if [pause] we can get it set up so it is convenient enough, um, then you can just do spontaneous meetings as well. [speaker003:] Yeah, I mean that's the thing. It's OK to g Right. [speaker002:] Which I'm interested in. I mean, that's one of the things, the whole reason I want a PDA as opposed to a wired room is for spontaneous meetings. [speaker003:] Yeah. Right. Well, we've got to make it really really easy to start it going then in that case. [speaker002:] Right. [speaker003:] Cuz that [disfmarker] that's the real [pause] goal. [speaker002:] That's right. That's why I put ten on here instead of thirty or forty or fifty. [speaker003:] Right. But I mean if we [disfmarker] if we want [pause] Jerry's group to [pause] use it then we probably [@ @] won't ask them to read any numbers, right? [speaker002:] Right. [speaker003:] Cuz. [speaker002:] Well, I [disfmarker] I bet they would be willing to do it the first few times, just for the novelty. [speaker003:] Yeah. Yeah. [speaker001:] I think they would. And also he's [disfmarker] he's [disfmarker] he's got [pause] at least one woman. [speaker003:] That's great. [speaker002:] Oh, that's a good point! [speaker003:] I know. [speaker001:] And we're [disfmarker] we have a little problem of [pause] no women, [speaker002:] Boy, that's abs I had never [disfmarker] I hadn't even thought about that. [speaker003:] I know, it's a disaster. [speaker001:] Yeah, well, when L when Liz uh Shriberg starts coming, then there's another woman although she [disfmarker] [speaker002:] Oh, man! 
[speaker003:] It's alright she doesn't mind as long as we don't video her, [speaker001:] Yeah. [speaker003:] right? [speaker001:] Ah, that's right. That was it. So. [speaker002:] Maybe we should have the reading group here and force Jane to come. [speaker003:] Yeah. [speaker001:] Yeah. No, well, actually we could have Jane uh [disfmarker] Jane's interested and curious about this stuff so she could attend one of [disfmarker] [speaker002:] Mm hmm. [speaker003:] Yeah. [speaker001:] probably not the decoder meeting but some of the other stuff she [pause] would be interested enough and have her read [disfmarker] [speaker003:] We could [disfmarker] we could have the reading group in here. [speaker002:] Yeah, I mean it's a different style. [speaker003:] Yeah. [speaker001:] Yeah. [speaker003:] I mean. [speaker002:] But I think that's alright. [speaker003:] Yeah, it's great. [speaker001:] Yeah. [speaker002:] Um, [pause] [vocalsound] we were joking about having lunch up here but eating with the microphones [disfmarker] Um, [pause] I had another question briefly about transcription. [speaker003:] OK. [speaker002:] So are we going get an external service or are we gonna do it? I mean who's gonna [pause] generate the transcripts? [speaker001:] Um I think that one of the neat things about having this little pilot thing from today is we should try it a couple different ways, [speaker002:] Mm hmm. [speaker001:] so we should [pause] try to do it internally, maybe ask [disfmarker] ask Jane if we could get her to do it or [disfmarker] or [disfmarker] or have her [pause] supervise a linguistics student or somebody [speaker003:] Mm hmm [speaker002:] Mm hmm. [speaker001:] and uh see how long it takes, and you know, multiply out the cost, and then send [disfmarker] send the same thing out, or maybe split it in halves or something [disfmarker] something like that to a [disfmarker] to an outside service [speaker003:] Mm hmm. 
[speaker001:] and uh see what that costs and how much hassle that is. [speaker002:] Right. [speaker001:] And then we can assess it. [speaker002:] Yeah, the [disfmarker] the problem of course is that it's [pause] you know, in [disfmarker] in this meeting it's three four five six seven eight, it's nine transcripts, [pause] potentially. [speaker003:] Well, Except that you wouldn't transcribe them all individually, [speaker001:] No. [speaker003:] right? You would send [disfmarker] I mean, [pause] uh a transcriber is used [disfmarker] I mean, I don't know, assume [disfmarker] presumably a transcriber [pause] transcribes a single meeting, [speaker001:] No. [speaker003:] right? And so they [disfmarker] they just have to [pause] indicate that the [pause] speaker changes. [speaker001:] Right. This was [disfmarker] this was the point of having near zero skew. [speaker003:] Right? [speaker001:] Right? So you just pick one, with the best sounding one, and give it to somebody and they transcribe it. [speaker002:] Oh, you're right, yeah, yeah, yeah. [speaker003:] Yeah. [speaker002:] Right. So [pause] the only thing we'll have to be careful with is that they're [disfmarker] well, what are they listening to? [speaker003:] So [disfmarker] so they [disfmarker] they're gonna [disfmarker] they're gonna have to make speaker assignments [pause] or something like this. [speaker002:] They're going to have to make speaker assignments, and wh and which recording are they gonna listen to? [speaker003:] For one of [disfmarker] one of [@ @]. Or we could mix [disfmarker] we could mix the [disfmarker] the headset mikes, you know, just to make a single high q high quality [disfmarker] [speaker001:] Right. [speaker003:] and we could do [disfmarker] and we could do all kinds of weird things like having you know squelch so that [pause] you only, you know [disfmarker] you only [disfmarker] you cut it out when there's no energy. [speaker002:] Oh, that would be interesting. 
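The "mix the headset mikes with squelch" idea above is a frame-level energy gate: each close-talking channel contributes to the mono mix only in frames where its short-term energy clears a threshold, so silent talkers' breath and room noise doesn't pile up. A minimal sketch, assuming frames are pre-cut to equal length; the threshold value would need tuning in practice:

```python
def gated_mix(channel_frames, energy_threshold):
    """Mix several close-talking channels down to one mono track, frame
    by frame, squelching (zeroing) any channel whose short-term energy
    falls below energy_threshold in that frame.
    channel_frames: one list of equal-length sample frames per channel."""
    n_ch = len(channel_frames)
    mixed = []
    for frames in zip(*channel_frames):
        out = [0.0] * len(frames[0])
        for frame in frames:
            energy = sum(s * s for s in frame) / len(frame)
            if energy >= energy_threshold:
                for i, s in enumerate(frame):
                    out[i] += s / n_ch
        mixed.append(out)
    return mixed
```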
[speaker003:] So you don't [@ @] noise. [speaker002:] Mm hmm. [speaker001:] Yeah, you could do fancy things, but it seems like that's enough. [speaker002:] Yeah. [speaker003:] Yeah. I mean there's [disfmarker] it's [disfmarker] If we [disfmarker] if we do something which is have a little bit of human input to get the words then we can [pause] leverage that a lot by some clever cross thing to figure out which channel they really belong to and stuff like this, probably. [speaker002:] OK. Mm hmm. Yet. So that [disfmarker] I mean, that's one chunk of work and just segmenting for this is gonna be another chunk of work. Um. [speaker003:] Yeah. And [disfmarker] Yeah. As much work as you want. [speaker001:] Yeah But uh I don't know. This meeting, so, say it's maybe going to be a half hour or something, and [disfmarker] [pause] and uh we've heard estimates ranging between ten and twenty [pause] times uh real time to do this kind of transcription [speaker003:] Yeah. [speaker001:] um [pause] so you know in [disfmarker] in principle then it'll take somebody a day for somebody to deal with this. [speaker003:] Mm hmm. Wow. Yeah. [speaker001:] And uh we'll see [pause] which one of those seems closest to right and if they [disfmarker] [pause] ramped up, if towards the end it was faster you know, get some sort of sense from them uh from whoever does it [speaker002:] Mm hmm. [speaker001:] and um then uh like I say, check it out with an external service and see [pause] if it's faster, cheaper, better, worse, whatever. [speaker002:] Right. [speaker001:] Then we'll go from there. [speaker002:] Mm hmm. Yeah, one of the things I was thinking with transcript is that if we take the near field and [pause] we do an initial [pause] recognition pass, is it easier for a transcriber to correct a transcript, if the [pause] near field mike does a reasonably good job? [speaker003:] Hmm. [speaker002:] I bet it isn't, unless it's really close to right. [speaker003:] Yeah. 
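The effort estimate in the exchange above is simple multiplication, but worth writing down: at the quoted 10x to 20x real-time factors, a half-hour meeting costs five to ten transcriber hours, i.e. roughly a working day. A one-liner to make the arithmetic explicit:

```python
def transcription_hours(meeting_hours, realtime_factor):
    """Transcriber effort in hours, given a real-time multiplier
    (estimates quoted in the meeting range from 10x to 20x)."""
    return meeting_hours * realtime_factor

low = transcription_hours(0.5, 10)    # 5 hours
high = transcription_hours(0.5, 20)   # 10 hours -> about a day's work
```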
I have [disfmarker] just because it's such an un The transcribing is sort of something I can imagine doing, whereas having this distraction of having sort of something that looks like it might be right but has got some [comment] very wrong bits in is sort of a less [disfmarker] less easy thing to do. [speaker002:] Right. Some errors, yeah. [speaker001:] I don't know. I do I don't know the answer to that. [speaker003:] I'm sure the professional transcribers obviously don't have any use for [disfmarker] [speaker001:] I don't know. [speaker003:] Yeah Yeah, that's true, [speaker001:] Well, I mean there's this Cyber Transcriber service, right? [speaker003:] that's true. [speaker001:] And at least supposedly the way they work is by feeding them what is the output of the recognizer. [speaker003:] Yeah, by fixing up [pause] transcripts. [speaker001:] So maybe it depends on what the software tools are like. [speaker003:] Yeah, but [disfmarker] Yeah. [speaker001:] If it's [disfmarker] you know, if they can easily flip through stuff and get to the right thing, or do you point, or I don't know. There's supposed to be some out [disfmarker] out [disfmarker] out of work shepherders or something in [vocalsound] Scotland but [disfmarker] [speaker003:] Yeah. [speaker001:] Ah, the internet. [speaker003:] They should all be in India right now. That's where the good, but the cheap English speakers are. [speaker001:] Yeah. [speaker002:] Not in Scotland. [speaker001:] Inexpensive. [speaker003:] I'm sorry, yeah. [speaker002:] Not in Scotland. [speaker003:] Well they don't speak [pause] such clear English [speaker002:] They [disfmarker] they sheep [disfmarker] they s they [disfmarker] they speak cheap English, right? [speaker001:] OK so we're degenerating, right. Are [disfmarker] are [disfmarker] are we done with uh MR business, [speaker003:] Yep. [speaker002:] Sure. [speaker001:] or [disfmarker]? I mean what what's [disfmarker] maybe we should [disfmarker] what are our action items? 
So there's [disfmarker] there's the continuing process of [pause] cleaning up the stuff and making it easier to do that. There's taking [pause] the results of this meeting [pause] and putting it in some kind of decent form, probably getting [pause] some kind of single channel [pause] merged sum data or something so that you can give a simple [pause] sound file of some sort to a transcriber to deal with it rather than this massive thing. [speaker003:] Mm hmm. [speaker002:] Mm hmm. [speaker003:] Mm hmm. [speaker001:] That's a little bit of [pause] stuff to do. [speaker002:] Right. [speaker001:] Right? [speaker002:] And then [pause] do some segmenting and recognition [disfmarker] initial recognition would be interesting to do. [speaker001:] Yeah, although it [disfmarker] it [disfmarker] it [disfmarker] it may be separating out these numbers [pause] from the rest. [speaker002:] That's what I mean. [speaker001:] Yeah. And then [speaker002:] Yeah just doing a digits on it [disfmarker] uh, connected digits. [speaker001:] Yeah [pause] and uh Right. So, with this next [disfmarker] [speaker002:] You wanna read? [speaker001:] Sure. OK. How many more you got? [speaker002:] Oh I on I did ten but I can [disfmarker] I have a script to generate them automatically. [speaker001:] Well, let's do some more while we got them here. [speaker003:] Wow. [speaker002:] OK. [speaker003:] What fun [pause] it is. [speaker001:] Well, you know, it's not, but [speaker003:] It's interesting that um it m it's not clear whether it's better to have them written as [disfmarker] as words or [disfmarker] or [pause] figures. But, of course, to get zero and "O"s distinguished we [disfmarker] you have to do it. [speaker002:] That's why, yep. [speaker003:] But [pause] it's kind of weird to read. [speaker001:] Yeah, there's just two more. uh [speaker002:] Go ahead. [speaker001:] OK. So. [speaker003:] Thanks. [speaker002:] OK, go ahead and read it and then [pause] fill it out. [speaker001:] OK. 
[speaker003:] OK. [speaker001:] OK. I'd say we're done. [speaker002:] OK. [speaker003:] Oh, I wonder [disfmarker] I wonder if we ran out of disk space yet. [speaker001:] So, is it [disfmarker] it's going to disk, or is this [disfmarker] And how [disfmarker] how much disk do we have? [speaker001:] It's not very significant. [speaker002:] Uh, channel one. [speaker004:] Channel three. [speaker006:] Mm hmm. [speaker002:] Yes. OK. [speaker004:] Channel three. [speaker001:] Ta [speaker004:] Channel three. Alright. [speaker002:] OK, did you solve speech recognition last week? [speaker005:] Almost. [speaker002:] Alright! Let's do image processing. [speaker003:] Yes, again. [speaker001:] Great. [speaker003:] We did it again, Morgan. [speaker002:] Alright! [speaker005:] Doo doop, doo doo. [speaker001:] What's wrong with [disfmarker]? [speaker002:] OK. It's April fifth. Actually, Hynek should be getting back in town shortly if he isn't already. [speaker003:] Is he gonna come here? [speaker002:] Uh. Well, we'll drag him here. I know where he is. [speaker003:] So when you said "in town", you mean [pause] Oregon. [speaker002:] U u u u uh, I meant, you know, this end of the world, [speaker003:] Oh. [speaker002:] yeah, [vocalsound] is really what I meant, [speaker005:] Doo, doo doo. [speaker002:] uh, cuz he's been in Europe. [speaker005:] Doo doo. [speaker002:] So. [speaker003:] I have something just fairly brief to report on. [speaker002:] Mmm. [speaker003:] Um, I did some [pause] experim uh, uh, just a few more experiments before I had to, [vocalsound] uh, go away for the w well, that week. [speaker002:] Great! [speaker003:] Was it last week or whenever? Um, so what I was started playing with was the [disfmarker] th again, this is the HTK back end. And, um, I was curious because the way that they train up the models, [vocalsound] they go through about four sort of rounds of [disfmarker] of training. 
And in the first round they do [disfmarker] uh, I think it's three iterations, and for the last three rounds e e they do seven iterations of re estimation in each of those three. And so, you know, that's part of what takes so long to train the [disfmarker] the [disfmarker] the back end for this. [speaker002:] I'm sorry, I didn't quite get that. There's [disfmarker] there's four and there's seven and [disfmarker] I [disfmarker] I'm sorry. [speaker003:] Yeah. Uh, maybe I should write it on the board. So, [vocalsound] there's four rounds of training. Um, I g I g I guess you could say iterations. The first one is three, then seven, seven, and seven. And what these numbers refer to is the number of times that the, uh, HMM re estimation is run. It's this program called H E [speaker002:] But in HTK, what's the difference between, uh, a [disfmarker] an inner loop and an outer loop in these iterations? [speaker003:] OK. So what happens is, um, at each one of these points, you increase the number of Gaussians in the model. [speaker002:] Yeah. Oh, right! This was the mix up stuff. [speaker003:] Yeah. The mix up. Right. [speaker002:] That's right. I remember now. [speaker003:] And so, in the final one here, you end up with, uh [disfmarker] for all of the [disfmarker] the digit words, you end up with, uh, three [pause] mixtures per state, [speaker002:] Yeah. [speaker003:] eh, in the final [pause] thing. So I had done some experiments where I was [disfmarker] I [disfmarker] I want to play with the number of mixtures. [speaker002:] Mm hmm. [speaker003:] But, um, [speaker005:] Uh, one, two, [speaker003:] uh, I wanted to first test to see if we actually need to do [pause] this many iterations early on. [speaker002:] Mm hmm. [speaker003:] And so, um, I [disfmarker] I ran a couple of experiments where I [vocalsound] reduced that to l to be three, two, two, [vocalsound] uh, five, I think, and I got almost the exact same results. And [disfmarker] but it runs much much faster. 
[speaker002:] Mm hmm. [speaker003:] So, um, I [disfmarker] I think m [pause] it only took something like, uh, three or four hours to do the full training, [speaker002:] As opposed to [disfmarker]? [speaker006:] Good. [speaker003:] as opposed to wh what, sixteen hours or something like that? [speaker006:] Yeah. It depends. [speaker003:] I mean, it takes [disfmarker] you have to do an overnight basically, the way it is set up now. [speaker001:] Mm hmm. [speaker002:] Mm hmm. [speaker003:] So, uh, even we don't do anything else, doing something like this could allow us to turn experiments around a lot faster. [speaker002:] And then when you have your final thing, do a full one, so it's [disfmarker] [speaker003:] And when you have your final thing, we go back to this. [speaker006:] Yeah. [speaker003:] So, um, and it's a real simple change to make. I mean, it's like one little text file you edit and change those numbers, and you don't do anything else. [speaker006:] Oh, this is a [disfmarker] [speaker003:] And then you just run. [speaker001:] Mm hmm. [speaker006:] OK. [speaker003:] So it's a very simple change to make and it doesn't seem to hurt all that much. [speaker001:] So you [disfmarker] you run with three, two, two, five? That's a [speaker003:] So I [disfmarker] Uh, I [disfmarker] I have to look to see what the exact numbers were. [speaker001:] Yeah. [speaker003:] I [disfmarker] I thought was, like, three, two, two, five, but I I'll [disfmarker] I'll double check. [speaker001:] Mm hmm. [speaker003:] It was [vocalsound] over a week ago that I did it, [speaker001:] OK. [speaker005:] Oh. [speaker003:] so I can't remember exactly. [speaker001:] Mm hmm. [speaker003:] But, uh [disfmarker] [speaker002:] Mm hmm. [speaker003:] um, but it's so much faster. [speaker005:] Hmm. [speaker003:] I it makes a big difference. So we could do a lot more experiments and throw a lot more stuff in there. [speaker006:] Yeah. [speaker002:] That's great. [speaker003:] Um. 
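The schedule change being described, four rounds of HERest re-estimation cut from 3-7-7-7 passes down to roughly 3-2-2-5, can be sketched as simple arithmetic. The per-pass cost is not stated in the meeting (later rounds have more Gaussians and so cost more per pass, which is why the observed speedup was larger than the pass count alone suggests), so this only counts passes:

```python
# Sketch of the HTK digit-training schedule under discussion.
# Each round ends by splitting ("mixing up") Gaussians, then runs
# several passes of HERest re-estimation. The exact reduced schedule
# was remembered as "three, two, two, five" and is treated as such here.

full_schedule = [3, 7, 7, 7]   # re-estimation passes per round (default setup)
fast_schedule = [3, 2, 2, 5]   # reduced schedule tried in the experiment

def total_passes(schedule):
    """Total HERest re-estimation passes across all training rounds."""
    return sum(schedule)

print(total_passes(full_schedule))   # 24 passes
print(total_passes(fast_schedule))   # 12 passes
```

Halving the pass count alone would predict a 2x speedup; the reported drop from an overnight run (~16 hours) to 3-4 hours is bigger, consistent with most of the removed passes coming from the later, more expensive rounds.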
Oh, the other thing that I did was, um, [vocalsound] I compiled [pause] the HTK stuff for the Linux boxes. So we have this big thing that we got from IBM, which is a five processor machine. Really fast, but it's running Linux. So, you can now run your experiments on that machine and you can run five at a time and it runs, [vocalsound] uh, as fast as, you know, uh, five different machines. [speaker001:] Mm hmm. [speaker006:] Mm hmm. [speaker003:] So, um, I've forgotten now what the name of that machine is but I can [disfmarker] I can send email around about it. [speaker001:] Yeah. [speaker003:] And so we've got it [disfmarker] now HTK's compiled for both the Linux and for, um, the Sparcs. Um, you have to make [disfmarker] you have to make sure that in your dot CSHRC, [vocalsound] um, it detects whether you're running on the Linux or a [disfmarker] a Sparc and points to the right executables. Uh, and you may not have had that in your dot CSHRC before, if you were always just running the Sparc. So, um, uh, I can [disfmarker] I can tell you exactly what you need to do to get all of that to work. [speaker001:] Mm hmm. [speaker003:] But it'll [disfmarker] it really increases what we can run on. [speaker005:] Hmm. Cool. [speaker003:] So, [vocalsound] together with the fact that we've got these [pause] faster Linux boxes and that it takes less time to do [pause] these, um, we should be able to crank through a lot more experiments. [speaker001:] Mm hmm. [speaker003:] So. [speaker005:] Hmm. [speaker003:] So after I did that, then what I wanted to do [comment] was try [pause] increasing the number of mixtures, just to see, um [disfmarker] see how [disfmarker] how that affects performance. [speaker001:] Yeah. [speaker003:] So. [speaker002:] Yeah. In fact, you could do something like [pause] keep exactly the same procedure and then add a fifth thing onto it [speaker003:] Mm hmm. Exactly. [speaker002:] that had more. Yeah. [speaker003:] Right. Right. 
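The dot-CSHRC change being described, detecting whether the login is on a Linux box or a Sparc and pointing PATH at the matching HTK executables, would look something like the following. This is a hypothetical sketch in POSIX sh (the actual csh syntax in a .cshrc would be analogous), and the directory names are placeholders, not the group's real install paths:

```shell
# Hypothetical sketch of the architecture check described in the meeting:
# choose the HTK binary directory based on the machine type, so the same
# startup file works on both the Linux boxes and the Sparcs.
arch=$(uname -s)
case "$arch" in
  Linux)  htk_bin=/tools/htk/linux/bin ;;   # placeholder path
  SunOS)  htk_bin=/tools/htk/sparc/bin ;;   # placeholder path
  *)      htk_bin=/tools/htk/default/bin ;; # fallback, also a placeholder
esac
PATH="$htk_bin:$PATH"
export PATH
```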
[speaker005:] So at [disfmarker] at the middle o where the arrows are showing, that's [disfmarker] you're adding one more mixture per state, [speaker003:] Uh huh. [speaker005:] or [disfmarker]? [speaker003:] Uh, let's see, uh. It goes from this [disfmarker] uh, try to go it backwards [disfmarker] this [disfmarker] at this point it's two mixtures [pause] per state. So this just adds one. Except that, uh, actually for the silence model, it's six mixtures per state. [speaker002:] Mm hmm. [speaker005:] OK. [speaker003:] Uh, so it goes to two. Um. And I think what happens here is [disfmarker] [speaker002:] Might be between, uh, shared, uh [disfmarker] shared variances or something, [speaker003:] Yeah. I think that's what it is. [speaker002:] or [disfmarker] [speaker003:] Uh, yeah. It's, uh [disfmarker] Shoot. I [disfmarker] I [disfmarker] I can't remember now what happens at that first one. Uh, I have to look it up and see. [speaker005:] Oh, OK. [speaker003:] Um, there [disfmarker] because they start off with, uh, an initial model which is just this global model, and then they split it to the individuals. And so, [vocalsound] it may be that that's what's happening here. I [disfmarker] I [disfmarker] [vocalsound] I have to look it up and see. I [disfmarker] I don't exactly remember. [speaker005:] OK. [speaker002:] OK. [speaker003:] So. That's it. [speaker002:] Alright. So what else? [speaker001:] Um. Yeah. There was a conference call this Tuesday. Um. I don't know yet the [disfmarker] [vocalsound] what happened [vocalsound] Tuesday, but [vocalsound] the points that they were supposed to discuss is still, [vocalsound] uh, things like [vocalsound] the weights, uh [disfmarker] [speaker002:] Oh, this is a conference call for, uh, uh, Aurora participant sort of thing. [speaker005:] For [disfmarker] [speaker001:] Yeah. [speaker002:] I see. [speaker001:] Yeah. Mmm. 
[speaker002:] Do you know who was [disfmarker] who was [disfmarker] since we weren't in on it, uh, do you know who was in from OGI? Was [disfmarker] [vocalsound] was [disfmarker] was Hynek involved or was it Sunil [speaker001:] I have no idea. [speaker002:] or [disfmarker]? Oh, you don't know. [speaker001:] Mmm, I just [disfmarker] [speaker002:] OK. [speaker001:] Yeah. [speaker002:] Alright. [speaker001:] Um, yeah. So the points were the [disfmarker] the weights [disfmarker] how to weight the different error rates [vocalsound] that are obtained from different language and [disfmarker] and conditions. Um, it's not clear that they will keep the same kind of weighting. Right now it's a weighting on [disfmarker] on improvement. [speaker002:] Mm hmm. [speaker001:] Some people are arguing that it would be better to have weights on uh [disfmarker] well, to [disfmarker] to combine error rates [pause] before computing improvement. Uh, and the fact is that for [disfmarker] right now for [pause] the English, they have weights [disfmarker] they [disfmarker] they combine error rates, but for the other languages they combine improvement. So it's not very consistent. [speaker002:] Mm hmm. [speaker001:] Um [disfmarker] Yeah. The, um [disfmarker] Yeah. And so [disfmarker] Well, [vocalsound] this is a point. And right now actually there is a thing also, [vocalsound] uh, that happens with the current weight is that a very non significant improvement [pause] on the well matched case result in [pause] huge differences in [disfmarker] [vocalsound] in the final number. [speaker002:] Mm hmm. [speaker003:] Hmm. [speaker001:] And so, perhaps they will change the weights to [disfmarker] Yeah. [speaker003:] How should that be done? I mean, it [disfmarker] it seems like there's a simple way [disfmarker] [speaker001:] Mm hmm. [speaker003:] Uh, this seems like an obvious mistake or something. 
Th they're [disfmarker] [speaker002:] Well, I mean, the fact that it's inconsistent is an obvious mistake. But the [disfmarker] but, um, the other thing [disfmarker] [speaker001:] In [speaker002:] I don't know I haven't thought it through, but one [disfmarker] one would think that [vocalsound] each [disfmarker] It [disfmarker] it's like if you say what's the [disfmarker] what's the best way to do an average, an arithmetic average or a geometric average? [speaker003:] Mm hmm. [speaker002:] It depends what you wanna show. [speaker001:] Mm hmm. [speaker002:] Each [disfmarker] each one is gonna have a different characteristic. [speaker001:] Yeah. [speaker002:] So [disfmarker] [speaker003:] Well, it seems like they should do, like, the percentage improvement or something, rather than the [pause] absolute improvement. [speaker001:] Tha that's what they do. [speaker002:] Well, they are doing that. [speaker001:] Yeah. [speaker002:] No, that is relative. But the question is, do you average the relative improvements [pause] or do you average the error rates and take the relative improvement maybe of that? [speaker001:] Yeah. Yeah. [speaker002:] And the thing is it's not just a pure average because there are these weightings. [speaker003:] Oh. [speaker002:] It's a weighted average. Um. [speaker001:] Yeah. And so when you average the [disfmarker] the relative improvement it tends to [disfmarker] [vocalsound] to give a lot of [disfmarker] of, um, [vocalsound] importance to the well matched case because [pause] the baseline is already very good and, um, [speaker003:] Why don't they not look at improvements but just look at your av your scores? [speaker001:] i it's [disfmarker] [speaker003:] You know, figure out how to combine the scores [speaker001:] Mm hmm. [speaker003:] with a weight or whatever, and then give you a score [disfmarker] here's your score. And then they can do the same thing for the baseline system [disfmarker] and here's its score. 
And then you can look at [disfmarker] [speaker001:] Mm hmm. [speaker002:] Well, that's what he's seeing as one of the things they could do. It's just when you [disfmarker] when you get all done, I think that they pro [speaker001:] Yeah. [speaker002:] I m I [disfmarker] I wasn't there but I think they started off this process with the notion that [vocalsound] you should be [pause] significantly better than the previous standard. [speaker003:] Mm hmm. [speaker002:] And, um, so they said "how much is significantly better? what do you [disfmarker]?" And [disfmarker] and so they said "well, [vocalsound] you know, you should have half the errors," or something, "that you had before". [speaker001:] Mm hmm. Hmm. [speaker003:] Mm hmm. [speaker001:] Yeah. [speaker002:] So it's, uh, [speaker003:] Hmm. [speaker002:] But it does seem like i i it does seem like it's more logical to combine them first and then do the [disfmarker] [speaker001:] Combine error rates [speaker002:] Yeah. [speaker001:] and then [disfmarker] Yeah. [speaker002:] Yeah. [speaker001:] Well [disfmarker] But there is this [disfmarker] this [disfmarker] is this still this problem of weights. When [disfmarker] when you combine error rate it tends to [pause] give more importance to the difficult cases, [speaker002:] Oh, yeah? [speaker001:] and some people think that [disfmarker] well, they have different, [vocalsound] um, opinions about this. Some people think that [vocalsound] it's more important to look at [disfmarker] [vocalsound] to have ten percent imp relative improvement on [pause] well matched case than to have fifty percent on the m mismatched, and other people think that it's more important to improve a lot on the mismatch and [disfmarker] [speaker003:] It sounds like they don't really have a good idea about what the final application is gonna be. [speaker001:] So, bu l de fff! Mmm. 
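The two combination schemes being argued over, averaging the per-condition relative improvements versus weight-averaging the error rates first and then computing one relative improvement, genuinely give different numbers. A small worked example with hypothetical error rates and weights (the real Aurora figures and weightings are not fully stated in the meeting) shows the difference, and why the first scheme lets a small gain on an already-good condition dominate:

```python
# Hypothetical per-condition word error rates (percent) and weights.
baseline = {"well_matched": 6.0, "mismatched": 60.0}
proposal = {"well_matched": 5.0, "mismatched": 40.0}
weights  = {"well_matched": 0.5, "mismatched": 0.5}

def avg_of_improvements(base, prop, w):
    """Scheme A: relative improvement per condition, then weighted average."""
    return sum(w[c] * (base[c] - prop[c]) / base[c] for c in base)

def improvement_of_avgs(base, prop, w):
    """Scheme B: weight-average the error rates first, then one improvement."""
    b = sum(w[c] * base[c] for c in base)
    p = sum(w[c] * prop[c] for c in prop)
    return (b - p) / b

print(round(avg_of_improvements(baseline, proposal, weights), 3))  # 0.25
print(round(improvement_of_avgs(baseline, proposal, weights), 3))  # 0.318
```

With these numbers Scheme B weights the high-error mismatched condition more heavily, which matches the complaint in the meeting that combining error rates "gives more importance to the difficult cases," while Scheme A makes small changes on the low-error well-matched case loom large.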
[speaker002:] Well, you know, the [disfmarker] the thing is [vocalsound] that if you look at the numbers on the [disfmarker] on the more difficult cases, [vocalsound] um, if you really believe that was gonna be the predominant use, [vocalsound] none of this would be good enough. [speaker001:] Yeah. Mmm. [speaker003:] Mm hmm. [speaker001:] Yeah. [speaker002:] Nothing anybody's [disfmarker] whereas [vocalsound] you sort of with some reasonable error recovery could imagine in the better cases that these [disfmarker] these systems working. So, um, I think the hope would be that it would [disfmarker] [vocalsound] uh, it would work well [pause] for the good cases and, uh, it would have reasonable [disfmarker] reas [vocalsound] soft degradation as you got to worse and worse conditions. Um. [speaker003:] Yeah. I [disfmarker] I guess what I'm [disfmarker] I mean, I [disfmarker] I was thinking about it in terms of, if I were building the final product and I was gonna test to see which front end I'd [disfmarker] [vocalsound] I wanted to use, I would [vocalsound] try to [pause] weight things depending on the exact environment that I was gonna be using the system in. If I [disfmarker] [speaker002:] But [disfmarker] but [disfmarker] No. Well, no [disfmarker] well, no. I mean, [vocalsound] it isn't the operating theater. I mean, they don they [disfmarker] they don't [disfmarker] they don't really [pause] know, I think. [speaker003:] Yeah. [speaker002:] I mean, I th [speaker003:] So if [disfmarker] if they don't know, doesn't that suggest the way for them to go? Uh, you assume everything's equal. I mean, y y I mean, you [disfmarker] [speaker002:] Well, I mean, I [disfmarker] I think one thing to do is to just not rely on a single number [disfmarker] to maybe have two or three numbers, [speaker003:] Yeah. Right. 
[speaker002:] you know, and [disfmarker] and [disfmarker] and say [vocalsound] here's how much you, uh [disfmarker] you improve [vocalsound] the, uh [disfmarker] the [disfmarker] the relatively clean case and here's [disfmarker] or [disfmarker] or well matched case, and here's how [disfmarker] here's how much you, [speaker003:] Mm hmm. [speaker002:] uh [disfmarker] [speaker003:] So not [disfmarker] So not try to combine them. [speaker002:] So. Yeah. Uh, actually it's true. [speaker003:] Yeah. [speaker002:] Uh, I had forgotten this, uh, but, uh, well matched is not actually clean. What it is is just that, [speaker003:] The training and testing. [speaker002:] u uh, the training and testing are similar. [speaker001:] Mmm. [speaker002:] So, I guess what you would do in practice is you'd try to get as many, [vocalsound] uh, examples of similar sort of stuff as you could, [speaker003:] Yeah. [speaker002:] and then, uh [disfmarker] So the argument for that being the [disfmarker] the [disfmarker] the more important thing, [vocalsound] is that you're gonna try and do that, [vocalsound] but you wanna see how badly it deviates from that when [disfmarker] when [disfmarker] when the, uh [disfmarker] it's a little different. [speaker003:] So [disfmarker] [speaker002:] Um, [speaker003:] so you should weight those other conditions v very [disfmarker] you know, really small. [speaker002:] But [disfmarker] No. That's a [disfmarker] that's a [disfmarker] that's an arg [speaker003:] I mean, that's more of an information kind of thing. [speaker002:] that's an ar Well, that's an argument for it, but let me give you the opposite argument. [speaker003:] Uh huh. [speaker002:] The opposite argument is you're never really gonna have a good sample of all these different things. I mean, are you gonna have w uh, uh, examples with the windows open, half open, full open? Going seventy, sixty, fifty, forty miles an hour? On what kind of roads? [speaker003:] Mm hmm. 
[speaker002:] With what passing you? With [disfmarker] uh, I mean, [speaker003:] Mm hmm. [speaker002:] I [disfmarker] I [disfmarker] I think that you could make the opposite argument that the well matched case is a fantasy. [speaker003:] Mm hmm. [speaker005:] Uh huh. [speaker002:] You know, so, I think the thing is is that if you look at the well matched case versus the po you know, the [disfmarker] the medium and the [disfmarker] and the fo and then the mismatched case, [vocalsound] um, we're seeing really, really big differences in performance. Right? And [disfmarker] and y you wouldn't like that to be the case. You wouldn't like that as soon as you step outside [disfmarker] You know, a lot of the [disfmarker] the cases it's [disfmarker] is [disfmarker] [speaker003:] Well, that'll teach them to roll their window up. [speaker002:] I mean, in these cases, if you go from the [disfmarker] the, uh [disfmarker] I mean, I don't remember the numbers right off, but if you [disfmarker] if you go from the well matched case to the medium, [vocalsound] it's not an enormous difference in the [disfmarker] in the [disfmarker] the training testing situation, and [disfmarker] and [disfmarker] and it's a really big [vocalsound] performance drop. [speaker003:] Mm hmm. [speaker002:] You know, so, um [disfmarker] Yeah, I mean the reference one, for instance [disfmarker] this is back old on, uh [disfmarker] on Italian [disfmarker] uh, was like [pause] six percent error for the well matched and eighteen for the medium matched and sixty for the [disfmarker] [vocalsound] for highly mismatched. Uh, and, you know, with these other systems we [disfmarker] we [vocalsound] helped it out quite a bit, but still there's [disfmarker] there's something like a factor of two or something between well matched and medium matched. 
And [vocalsound] so I think that [vocalsound] if what you're [disfmarker] [vocalsound] if the goal of this is to come up with robust features, it does mean [disfmarker] So you could argue, in fact, that the well matched is something you shouldn't be looking at at all, that [disfmarker] that the goal is to come up with features [vocalsound] that will still give you reasonable performance, you know, with again gentle degregra degradation, um, even though the [disfmarker] the testing condition is not the same as the training. [speaker003:] Hmm. [speaker002:] So, you know, I [disfmarker] I could argue strongly that something like the medium mismatch, which is you know not compl pathological but [disfmarker] I mean, what was the [disfmarker] the medium mismatch condition again? [speaker001:] Um, [vocalsound] it's [disfmarker] Yeah. Medium mismatch is everything with the far [pause] microphone, but trained on, like, low noisy condition, like low speed and [disfmarker] or [pause] stopped car and tested on [pause] high speed conditions, I think, like on a highway and [disfmarker] [speaker002:] Right. So it's still the same [disfmarker] same microphone in both cases, [speaker001:] So [disfmarker] Same microphone but [disfmarker] Yeah. [speaker002:] but, uh, it's [disfmarker] there's a mismatch between the car conditions. And that's [disfmarker] uh, you could argue that's a pretty realistic situation [speaker003:] Yeah. [speaker001:] Mm hmm. [speaker002:] and, uh, I'd almost argue for weighting that highest. But the way they have it now, [vocalsound] it's [disfmarker] I guess it's [disfmarker] it's [disfmarker] They [disfmarker] they compute the relative improvement first and then average that with a weighting? [speaker001:] Yeah. [speaker002:] And so then the [disfmarker] that [disfmarker] that makes the highly matched the really big thing. [speaker001:] Mm hmm. 
[speaker002:] Um, so, u i since they have these three categories, it seems like the reasonable thing to do [vocalsound] is to go across the languages [pause] and to come up with an improvement for each of those. [speaker001:] Mm hmm. [speaker002:] Just say "OK, in the [disfmarker] in the highly matched case this is what happens, in the [disfmarker] [vocalsound] m the, uh [disfmarker] this other m medium if this happens, in the highly mismatched [pause] that happens". [speaker001:] Mm hmm. [speaker002:] And, uh, you should see, uh, a gentle degradation [pause] through that. [speaker001:] Mmm. [speaker002:] Um. But [disfmarker] I don't know. [speaker001:] Yeah. [speaker002:] I think that [disfmarker] that [disfmarker] I [disfmarker] I [disfmarker] I gather that in these meetings it's [disfmarker] it's really tricky to make anything [vocalsound] ac [vocalsound] make any [comment] policy change because [vocalsound] [vocalsound] everybody has [disfmarker] has, uh, their own opinion and [disfmarker] [speaker001:] Mm hmm. [speaker002:] I don't know. [speaker001:] Yeah. [speaker002:] Yeah. [speaker001:] Uh, so [disfmarker] Yeah. Yeah, but there is probably a [disfmarker] a big change that will [vocalsound] be made is that the [disfmarker] the baseline [disfmarker] th they want to have a new baseline, perhaps, which is, um, MFCC but with [vocalsound] a voice activity detector. And apparently, [vocalsound] uh, some people are pushing to still keep this fifty percent number. So they want [vocalsound] to have at least fifty percent improvement on the baseline, [speaker002:] Mm hmm. [speaker001:] but w which would be a much better baseline. [speaker002:] Mm hmm. [speaker001:] And if we look at the result that Sunil sent, [vocalsound] just putting the VAD in the baseline improved, like, more than twenty percent, [speaker002:] Mm hmm. 
[speaker001:] which would mean then [disfmarker] then [disfmarker] mean that fifty percent on this new baseline is like, well, more than sixty percent improvement on [disfmarker] [speaker002:] So nobody would [pause] be there, probably. Right? [speaker001:] on [disfmarker] o e e uh [disfmarker] Right now, nobody would be there, but [disfmarker] [speaker002:] Good. [speaker001:] Yeah. [speaker002:] Work to do. [speaker001:] Uh huh. [speaker002:] So whose VAD is [disfmarker] Is [disfmarker] is this a [disfmarker]? [speaker001:] Uh, they didn't decide yet. I guess i this was one point of the conference call also, but [disfmarker] mmm, so I don't know. Um, but [disfmarker] Yeah. [speaker005:] Oh. [speaker002:] Oh, I [disfmarker] I think th that would be [vocalsound] good. I mean, it's not that the design of the VAD isn't important, but it's just that it [disfmarker] it [disfmarker] it does seem to be i uh, a lot of [pause] work to do a good job on [disfmarker] on that and as well as being a lot of work to do a good job on the feature [vocalsound] design, [speaker001:] Yeah. [speaker002:] so [speaker001:] Yeah. [speaker002:] if we can [pause] cut down on that maybe we can make some progress. [speaker001:] M Yeah. [speaker005:] Hmm. [speaker001:] But I guess perhaps [disfmarker] I don't know w [vocalsound] Yeah. Uh, yeah. Per e s s someone told that perhaps it's not fair to do that because the, um [disfmarker] to make a good VAD [pause] you don't have enough to [disfmarker] with the [disfmarker] the features that are [disfmarker] the baseline features. So [disfmarker] mmm, you need more features. So you really need to put more [disfmarker] more in the [disfmarker] in [disfmarker] in the front end. [speaker002:] Yeah. [speaker001:] So i [speaker002:] Um, [speaker001:] S [speaker002:] sure. [speaker003:] Wait a minute. [speaker002:] But i bu [speaker003:] I [disfmarker] I'm confused. [speaker001:] Yeah. [speaker003:] Wha what do you mean? 
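The "more than sixty percent" remark follows from simple compounding of relative improvements. Treating the reported VAD gain as roughly 20% relative (the exact figure from Sunil's results is only approximated in the meeting), halving the new baseline's error works out to at least a 60% relative improvement over the old baseline:

```python
# Hedged arithmetic behind the remark: if adding a VAD to the old MFCC
# baseline removes ~20% of the errors (relative), then a 50% relative
# improvement on the *new* baseline is a larger improvement on the *old* one.
old_baseline_err = 1.0                      # normalized; absolute rate cancels out
vad_gain = 0.20                             # ~20% relative from adding the VAD
new_baseline_err = old_baseline_err * (1 - vad_gain)   # 0.8
target_err = new_baseline_err * 0.5         # 50% improvement on the new baseline
improvement_vs_old = 1 - target_err / old_baseline_err
print(improvement_vs_old)  # 0.6, i.e. 60% relative on the old baseline
```

Since the actual VAD gain was reported as "more than twenty percent," the required improvement on the old baseline is correspondingly more than sixty percent.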
[speaker001:] Yeah, [speaker002:] So y so you [speaker001:] if i [speaker002:] m s Yeah, but [disfmarker] Well, let's say for ins see, MFCC for instance doesn't have anything in it, uh, related to the pitch. So just [disfmarker] just for example. So suppose you've [disfmarker] that [vocalsound] what you really wanna do is put a good pitch detector on there and if it gets an unambiguous [disfmarker] [speaker003:] Oh, oh. I see. [speaker001:] Mm hmm. [speaker002:] if it gets an unambiguous result then you're definitely in a [disfmarker] in a [disfmarker] in a voice in a, uh, s region with speech. [speaker003:] So there's this assumption that the v the voice activity detector can only use the MFCC? [speaker002:] Uh. [speaker001:] That's not clear, but this [disfmarker] [vocalsound] e [speaker002:] Well, for the baseline. [speaker003:] Yeah. [speaker002:] So [disfmarker] so if you use other features then y But it's just a question of what is your baseline. Right? [speaker003:] I g [speaker002:] What is it that you're supposed to do better than? [speaker003:] Yeah. [speaker002:] And so [speaker003:] I don't s [speaker002:] having the baseline be the MFCC's [pause] means that people could [pause] choose to pour their ener their effort into trying to do a really good VAD [speaker003:] But they seem like two [pause] separate issues. [speaker002:] or tryi [speaker003:] Right? I mean [disfmarker] [speaker002:] They're sort of separate. Unfortunately there's coupling between them, which is part of what I think Stephane is getting to, is that [vocalsound] you can choose your features in such a way as to improve the VAD. [speaker001:] Yeah. [speaker002:] And you also can choose your features in such a way as to prove [disfmarker] improve recognition. [speaker003:] But it seems like you should do both. [speaker002:] They may not be the same thing. [speaker003:] Right? 
[speaker002:] You should do both and [disfmarker] and I [disfmarker] I think that this still makes [disfmarker] I still think this makes sense as a baseline. It's just saying, as a baseline, we know [disfmarker] [speaker001:] Mmm. [speaker002:] you know, we had the MFCC's before, lots of people have done voice activity detectors, [speaker001:] Mm hmm. [speaker002:] you might as well pick some voice activity detector and make that the baseline, just like you picked some version of HTK and made that the baseline. [speaker001:] Yeah. Right. [speaker002:] And then [pause] let's try and make everything better. Um, and if one of the ways you make it better is by having your features [pause] be better features for the VAD then that's [disfmarker] so be it. But, [speaker001:] Mm hmm. [speaker002:] uh, uh, uh, at least you have a starting point that's [disfmarker] um, cuz i i some of [disfmarker] the some of the people didn't have a VAD at all, I guess. Right? And [disfmarker] and [speaker001:] Yeah. [speaker002:] then they [disfmarker] they looked pretty bad and [disfmarker] and in fact what they were doing wasn't so bad at all. [speaker001:] Mm hmm. [speaker002:] But, [speaker001:] Mm hmm. [speaker003:] Yeah. It seems like you should try to make your baseline as good as possible. [speaker002:] um. [speaker003:] And if it turns out that [pause] you can't improve on that, well, I mean, then, you know, nobody wins and you just use MFCC. Right? [speaker002:] Yeah. I mean, it seems like, uh, it should include sort of the current state of the art [vocalsound] that you want [disfmarker] are trying to improve, and MFCC's, you know, or PLP or something [disfmarker] it seems like [vocalsound] reasonable baseline for the features, and anybody doing this task, [vocalsound] uh, is gonna have some sort of voice activity detection at some level, in some way. 
They might use the whole recognizer to do it [vocalsound] but [disfmarker] rather than [vocalsound] a separate thing, but [disfmarker] [vocalsound] but they'll have it on some level. So, um. [speaker003:] It seems like whatever they choose they shouldn't, [vocalsound] you know, purposefully brain damage a part of the system to [pause] make a worse baseline, or [disfmarker] You know? [speaker002:] Well, I think people just had it wasn't that they purposely brain damaged it. I think people hadn't really thought through [vocalsound] about the, uh [disfmarker] the VAD issue. [speaker003:] Mmm. [speaker001:] Mm hmm. [speaker002:] And [disfmarker] and then when the [disfmarker] the [disfmarker] the proposals actually came in and half of them had V A Ds and half of them didn't, and the half that did did well and the [vocalsound] half that didn't did poorly. [speaker003:] Mm hmm. [speaker002:] So it's [disfmarker] [speaker001:] Mm hmm. Um. [speaker002:] Uh. [speaker001:] Yeah. So we'll see what happen with this. And [disfmarker] Yeah. So what happened since, um, [vocalsound] last week is [disfmarker] well, from OGI, these experiments on [pause] putting VAD on the baseline. And these experiments also are using, uh, some kind of noise compensation, so spectral subtraction, and putting on line normalization, um, just after this. So I think spectral subtraction, LDA filtering, and on line normalization, so which is similar to [vocalsound] the pro proposal one, but with [pause] spectral subtraction in addition, and it seems that on line normalization doesn't help further when you have spectral subtraction. [speaker003:] Is this related to the issue that you brought up a couple of meetings ago with the [disfmarker] the [vocalsound] musical tones [speaker001:] I [disfmarker] [speaker003:] and [disfmarker]? [speaker001:] I have no idea, because the issue I brought up was with a very simple spectral subtraction approach, [speaker003:] Mmm. 
[speaker001:] and the one that [vocalsound] they use at OGI is one from [disfmarker] from [vocalsound] the proposed [disfmarker] the [disfmarker] the [disfmarker] the Aurora prop uh, proposals, which might be much better. So, yeah. I asked [vocalsound] Sunil for more information about that, but, uh, I don't know yet. Um. And what's happened here is that we [disfmarker] so we have this kind of new, um, reference system which [vocalsound] use a nice [disfmarker] a [disfmarker] a clean downsampling upsampling, which use a new filter [vocalsound] that's much shorter and which also cuts the frequency below sixty four hertz, [speaker002:] Right. [speaker001:] which was not done on our first proposal. [speaker002:] When you say "we have that", does Sunil have it now, too, [speaker001:] I [speaker002:] or [disfmarker]? [speaker001:] No. No. [speaker002:] OK. [speaker001:] Because we're still testing. So we have the result for, [vocalsound] uh, just the features [speaker002:] OK. [speaker001:] and we are currently testing with putting the neural network in the KLT. Um, it seems to improve on the well matched case, um, [vocalsound] but it's a little bit worse on the mismatch and highly mismatched [disfmarker] I mean when we put the neural network. And with the current weighting I think it's sh it will be better because the well matched case is better. Mmm. [speaker002:] But how much worse [disfmarker] since the weighting might change [disfmarker] how [disfmarker] how much worse is it on the other conditions, when you say it's a little worse? [speaker001:] It's like, uh, fff, fff [comment] [vocalsound] [pause] um, [comment] [vocalsound] [vocalsound] [pause] ten percent relative. Yeah. [speaker002:] OK. Um. [speaker001:] Mm hmm. [speaker002:] But it has the, uh [disfmarker] the latencies are much shorter. 
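The "very simple spectral subtraction approach" referred to above can be sketched as plain magnitude-domain subtraction with a spectral floor. This is a generic textbook variant, not the OGI/Aurora implementation; the over-subtraction factor `alpha` and floor `beta` are illustrative assumptions. Isolated bins that survive above the floor are what produce the "musical tone" artifacts mentioned earlier.

```python
import numpy as np

def spectral_subtract(mag_frames, noise_mag, alpha=2.0, beta=0.1):
    """Minimal magnitude-domain spectral subtraction (a sketch, not the
    Aurora proposal's method).  mag_frames: (n_frames, n_bins) magnitude
    spectra; noise_mag: (n_bins,) noise magnitude estimate.
    alpha over-subtracts the noise; beta sets a spectral floor so no bin
    goes negative.  Scattered bins left poking above the floor are the
    source of "musical tone" artifacts."""
    cleaned = mag_frames - alpha * noise_mag        # subtract scaled noise
    return np.maximum(cleaned, beta * noise_mag)    # clamp to the floor
```

More elaborate variants (e.g. smoothing the gain across time and frequency) exist mainly to suppress exactly those residual musical tones.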
That's [disfmarker] [speaker001:] Uh y w when I say it's worse, it's not [disfmarker] it's when I [disfmarker] I [disfmarker] uh, compare proposal two to proposal one, so, r uh, y putting neural network [vocalsound] compared to n not having any neural network. [speaker002:] Uh huh. [speaker001:] I mean, this new system is [disfmarker] is [disfmarker] is better, because it has [vocalsound] um, this sixty four hertz cut off, uh, clean [vocalsound] downsampling, and, um [disfmarker] what else? Uh, yeah, a good VAD. We put the good VAD. So. Yeah, I don't know. I [disfmarker] I [disfmarker] j uh, uh [disfmarker] pr [speaker002:] But the latencies [disfmarker] but you've got the latency shorter now. [speaker001:] Latency is short [disfmarker] [speaker006:] Isn't it [speaker001:] is [disfmarker] Yeah. [speaker002:] Yeah. [speaker001:] And so [speaker002:] So it's better than the system that we had before. [speaker001:] Yeah. Mainly because [pause] [vocalsound] of [pause] the sixty four hertz and the good VAD. [speaker002:] OK. [speaker001:] And then I took this system and, [vocalsound] mmm, w uh, I p we put the old filters also. So we have this good system, with good VAD, with the short filter and with the long filter, and, um, with the short filter it's not worse. So [disfmarker] well, is it [disfmarker] [speaker002:] OK. So that's [disfmarker] that's all fine. [speaker001:] it's in [disfmarker] [speaker002:] But what you're saying is that when you do these [disfmarker] [speaker001:] Yes. Uh [disfmarker] [speaker002:] So let me try to understand. When [disfmarker] when you do these same improvements [vocalsound] to proposal one, [speaker001:] Mm hmm. [speaker002:] that, uh, on the [disfmarker] i things are somewhat better, uh, in proposal two for the well matched case and somewhat worse for the other two cases. [speaker001:] Yeah. 
[speaker002:] So does, uh [disfmarker] when you say, uh [disfmarker] So [disfmarker] The th now that these other things are in there, is it the case maybe that the additions of proposal two over proposal one are [pause] less im important? [speaker001:] Yeah. Probably, yeah. [speaker002:] I get it. [speaker001:] Um [disfmarker] So, yeah. Uh. Yeah, but it's a good thing anyway to have [vocalsound] shorter delay. Then we tried, um, [vocalsound] to do something like proposal two but having, um, e using also MSG features. So there is this KLT part, which use just the standard features, [speaker002:] Mm hmm. Right. [speaker001:] and then two neura two neural networks. [speaker002:] Mm hmm. [speaker001:] Mmm, and it doesn't seem to help. Um, however, we just have [vocalsound] one result, which is the Italian mismatch, so. Uh. We have to wait for that to fill the whole table, but [disfmarker] [speaker002:] OK. There was a [vocalsound] start of some effort on something related to voicing or something. Is that [disfmarker]? [speaker001:] Yeah. Um, [vocalsound] yeah. So basically we try to, [vocalsound] [vocalsound] uh, find [vocalsound] good features that could be used for voicing detection, uh, but it's still, uh [disfmarker] on the, um [disfmarker] [speaker006:] Oh, well, I have the picture. [speaker001:] t we [disfmarker] w basically we are still playing with Matlab to [disfmarker] [vocalsound] to look at [disfmarker] at what happened, [speaker003:] What sorts of [disfmarker] [speaker006:] Yeah. [speaker001:] and [disfmarker] [speaker003:] what sorts of features are you looking at? [speaker006:] We have some [disfmarker] [speaker001:] So we would be looking at, um, the [pause] variance of the spectrum of the excitation, [speaker006:] uh, um, this, this, and this. [speaker001:] something like this, which is [disfmarker] should be high for voiced sounds. Uh, [speaker003:] Wait a minute. I [disfmarker] what does that mean? 
[speaker001:] we [disfmarker] [speaker003:] The variance of the spectrum of excitation. [speaker001:] Yeah. So the [disfmarker] So basically the spectrum of the excitation [vocalsound] for a purely periodic sig signal shou sh [speaker002:] OK. Yeah, w what yo what you're calling the excitation, as I recall, is you're subtracting the [disfmarker] the, um [disfmarker] the mel [disfmarker] mel [disfmarker] [vocalsound] mel filter, uh, spectrum from the FFT spectrum. [speaker001:] e That's right. Yeah. [speaker002:] Right. [speaker001:] So [disfmarker] [speaker006:] Mm hmm. [speaker001:] Yeah. So we have the mel f filter bank, we have the FFT, so we [pause] just [disfmarker] [speaker002:] So it's [disfmarker] it's not really an excitation, but it's something that hopefully tells you something about the excitation. [speaker001:] No. Yeah, that's right. [speaker002:] Yeah, yeah. [speaker001:] Um [disfmarker] Yeah. [speaker006:] We have here some histogram, but they have a lot of overlap. [speaker001:] E yeah, but it's [disfmarker] it's still [disfmarker] Yeah. So, well, for unvoiced portion we have something tha [vocalsound] that has a mean around O point three, and for voiced portion the mean is O point fifty nine. But the variance seem quite [vocalsound] high. [speaker003:] How do you know [disfmarker]? [speaker001:] So [disfmarker] Mmm. [speaker003:] How did you get your [pause] voiced and unvoiced truth data? [speaker001:] We used, uh, TIMIT and we used canonical mappings between the phones [speaker006:] Yeah. We, uh, use [pause] TIMIT on this, [speaker001:] and [speaker006:] for [disfmarker] [speaker001:] th Yeah. [speaker006:] But if we look at it in one sentence, it [disfmarker] apparently it's good, I think. [speaker001:] Yeah, but [disfmarker] Yeah. Uh, so it's noisy TIMIT. That's right. [speaker005:] It's noisy TIMIT. [speaker006:] Yeah. [speaker001:] Yeah. 
It seems quite robust to noise, so when we take [disfmarker] we draw its parameters across time for a clean sentence and then nois the same noisy sentence, it's very close. [speaker002:] Mm hmm. [speaker001:] Yeah. So there are [disfmarker] there is this. There could be also the, um [disfmarker] [vocalsound] something like the maximum of the auto correlation function or [disfmarker] [speaker003:] Is this a [disfmarker] a s a trained system? [speaker001:] which [disfmarker] [speaker003:] Or is it a system where you just pick some thresholds? Ho how does it work? [speaker001:] Right now we just are trying to find some features. [speaker003:] Mm hmm. [speaker001:] And, uh [disfmarker] Yeah. Hopefully, I think what we want to have is to put these features in s some kind of, um [disfmarker] well, to [disfmarker] to obtain a statistical model on these features and to [disfmarker] or just to use a neural network and hopefully these features w would help [disfmarker] [speaker003:] Because it seems like what you said about the mean of the [disfmarker] the voiced and the unvoiced [disfmarker] [comment] [vocalsound] that seemed pretty encouraging. [speaker001:] Mm hmm. [speaker002:] Well, yeah, except the variance was big. [speaker003:] Right? [speaker001:] Yeah. Except the variance is quite high. [speaker002:] Right? [speaker003:] Well, y Well, y I [disfmarker] I don't know that I would trust that so much [speaker001:] Yeah. [speaker003:] because you're doing these canonical mappings from TIMIT labellings. Right? [speaker001:] Uh huh. [speaker003:] So, really that's sort of a cartoon picture about what's voiced and unvoiced. So that could be giving you a lot of variance. I mean, [speaker001:] Yeah. [speaker003:] i it [disfmarker] it may be that [disfmarker] that you're finding something good and that the variance is sort of artificial because of how you're getting your truth. [speaker001:] Mm hmm. [speaker002:] Yeah. 
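The other candidate feature mentioned here, the maximum of the autocorrelation function, might look like the following rough sketch. The sampling rate and the pitch-search range are illustrative assumptions; the idea is just that the normalized autocorrelation peaks near 1 at the pitch lag for periodic (voiced) frames and stays low for unvoiced ones.

```python
import numpy as np

def autocorr_voicing(frame, sr=8000, f0_min=60.0, f0_max=400.0):
    """Peak of the normalized autocorrelation over plausible pitch lags:
    close to 1 for strongly periodic (voiced) frames, much lower for
    unvoiced ones.  sr and the F0 search range are assumptions."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0.0:                       # silent frame
        return 0.0
    ac = ac / ac[0]                        # normalize so lag 0 -> 1
    lo = int(sr / f0_max)                  # shortest pitch period, samples
    hi = min(int(sr / f0_min), len(ac) - 1)
    return float(np.max(ac[lo:hi]))
```

Like the excitation-variance feature, this would give a scalar per frame with overlapping voiced/unvoiced histograms, to be fed to a classifier rather than thresholded directly.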
But another way of looking at it [vocalsound] might be that [disfmarker] I mean, what w we we are coming up with feature sets after all. So another way of looking at it is that [vocalsound] um, the mel cepstru mel [pause] spectrum, mel cepstrum, [vocalsound] any of these variants, um, give you the smooth spectrum. It's the spectral envelope. By going back to the FFT, [vocalsound] you're getting something that is [pause] more like the raw data. So the question is, what characterization [disfmarker] and you're playing around with this [disfmarker] another way of looking at it is what characterization [vocalsound] of the difference between [pause] the raw data [pause] and this smooth version [pause] is something that you're missing that could help? So, I mean, looking at different statistical measures of that difference, coming up with some things and just trying them out and seeing if you add them onto the feature vector does that make things better or worse in noise, where you're really just i i the way I'm looking at it is not so much you're trying to f find the best [disfmarker] the world's best voiced unvoiced, uh, uh, classifier, [speaker003:] Mm hmm. [speaker001:] Mmm. [speaker002:] but it's more that, [vocalsound] you know, uh, uh, try some different statistical characterizations of that difference back to the raw data [speaker003:] Right. [speaker002:] and [disfmarker] and [speaker003:] Right. [speaker002:] m maybe there's something there that [pause] the system can use. [speaker001:] Yeah. Yeah, but ther more obvious is that [disfmarker] Yeah. The [disfmarker] the more obvious is that [disfmarker] that [disfmarker] well, using the [disfmarker] th the FFT, um, [vocalsound] you just [disfmarker] it gives you just information about if it's voiced or not voiced, ma mainly, I mean. But [disfmarker] So, [speaker002:] Yeah. 
[speaker001:] this is why we [disfmarker] we started to look [pause] by having sort of voiced phonemes [speaker002:] Well, that's the rea w w what I'm arguing is that's Yeah. I mean, uh, what I'm arguing is that that [disfmarker] that's givi you [disfmarker] gives you your intuition. [speaker001:] and [disfmarker] Mm hmm. [speaker002:] But in [disfmarker] in reality, it's [disfmarker] you know, there's all of this [disfmarker] this overlap and so forth, [speaker005:] Oh, sorry. [speaker002:] and [disfmarker] But what I'm saying is that may be OK, because what you're really getting is not actually voiced versus unvoiced, both for the fac the reason of the overlap and [disfmarker] and then, uh, th you know, structural reasons, uh, uh, like the one that Chuck said, that [disfmarker] that in fact, well, the data itself is [disfmarker] [vocalsound] that you're working with is not perfect. [speaker001:] Yeah. Mm hmm. [speaker002:] So, what I'm saying is maybe that's not a killer because you're just getting some characterization, one that's driven by your intuition about voiced unvoiced certainly, [speaker001:] Mm hmm. [speaker002:] but it's just some characterization [vocalsound] of something back in the [disfmarker] in the [disfmarker] in the almost raw data, rather than the smooth version. [speaker001:] Mm hmm. [speaker002:] And your intuition is driving you towards particular kinds of, [vocalsound] uh, statistical characterizations of, um, what's missing from the spectral envelope. [speaker001:] Mm hmm. [speaker002:] Um, obviously you have something about the excitation, um, and what is it about the excitation, and, you know [disfmarker] and you're not getting the excitation anyway, you know. 
So [disfmarker] so I [disfmarker] I would almost take a [disfmarker] uh, especially if [disfmarker] if these trainings and so forth are faster, I would almost just take a [vocalsound] uh, a scattershot at a few different [vocalsound] ways of look of characterizing that difference and, uh, you could have one of them but [disfmarker] and [disfmarker] and see, you know, which of them helps. [speaker001:] Mm hmm. [speaker003:] So i is the idea that you're going to take [pause] whatever features you develop and [disfmarker] and just add them onto the future vector? [speaker001:] OK. [speaker003:] Or, what's the use of the [disfmarker] the voiced unvoiced detector? [speaker001:] Uh, I guess we don't know exactly yet. But, [vocalsound] um [disfmarker] Yeah. [speaker003:] It's not part of a VAD system that you're doing? [speaker001:] Th [speaker006:] No. [speaker001:] Uh, no. [speaker003:] Oh, OK. [speaker001:] No. No, the idea was, I guess, to [disfmarker] to use them as [disfmarker] as features. [speaker003:] Features. I see. [speaker001:] Uh [disfmarker] Yeah, it could be, uh [disfmarker] it could be [vocalsound] a neural network that does voiced and unvoiced detection, [speaker003:] Mm hmm. [speaker001:] but it could be in the [disfmarker] also the big neural network that does phoneme classification. [speaker003:] Mm hmm. [speaker001:] Mmm. [speaker002:] But each one of the mixture components [disfmarker] [speaker001:] Yeah. [speaker002:] I mean, you have, uh, uh, variance only, so it's kind of like you're just multiplying together these, um, probabilities from the individual features [pause] within each mixture. So it's [disfmarker] so, uh, [speaker003:] I think it's a neat thing. [speaker002:] it seems l you know [disfmarker] [speaker003:] Uh, it seems like a good idea. [speaker002:] Yeah. Um. Yeah. 
I mean, [vocalsound] I know that, um, people doing some robustness things a ways back were [disfmarker] were just doing [disfmarker] just being gross and just throwing in the FFT and actually it wasn't [disfmarker] wasn't [disfmarker] wasn't so bad. Uh, so it would s and [disfmarker] and you know that i it's gotta hurt you a little bit to not have a [disfmarker] [vocalsound] a spectral, uh [disfmarker] a s a smooth spectral envelope, so there must be something else that you get [pause] in return for that [disfmarker] that, uh [disfmarker] [speaker001:] Mm hmm. [speaker002:] uh [disfmarker] So. [speaker003:] So how does [disfmarker] uh, maybe I'm going in too much detail, but [vocalsound] how exactly do you make the difference between the FFT and the smoothed [pause] spectral envelope? Wha wh i i uh, how is that, uh [disfmarker]? [speaker001:] Um, we just [disfmarker] How did we do it up again? [speaker006:] Uh, we distend the [disfmarker] we have the twenty three coefficient af after the mel f [vocalsound] filter, [speaker001:] Mm hmm. [speaker006:] and we extend these coefficient between the [disfmarker] all the frequency range. [speaker003:] Mm hmm. [speaker006:] And i the interpolation i between the point [vocalsound] is [disfmarker] give for the triang triangular filter, the value of the triangular filter and of this way we obtained this mode this model speech. [speaker002:] So you essentially take the values that [disfmarker] th that you get from the triangular filter and extend them [speaker001:] S [speaker002:] to sor sort of like a rectangle, that's at that m value. [speaker006:] Yeah. Mm hmm. [speaker001:] Yeah. I think we have linear interpolation. So we have [disfmarker] we have one point for [disfmarker] one energy for each filter bank, [speaker006:] mmm Yeah, it's linear. [speaker003:] Mmm. [speaker002:] Oh. [speaker006:] Yeah. 
[speaker001:] which is [pause] the energy [pause] that's centered on [disfmarker] on [disfmarker] on the triangle [disfmarker] [speaker006:] At the n at the center of the filter [disfmarker] [speaker003:] So you [disfmarker] you end up with a vector that's the same length as the FFT [pause] vector? [speaker001:] Yeah. [speaker006:] Yeah. [speaker001:] That's right. [speaker003:] And then you just, uh, compute differences [speaker006:] Yeah. I have here one example if you [disfmarker] if you want see something like that. [speaker003:] and, [speaker001:] Then we compute the difference. Yeah. Uh huh. [speaker003:] uh, sum the differences? [speaker002:] OK. [speaker001:] So. And I think the variance is computed only from, like, two hundred hertz to [pause] one [disfmarker] to fifteen hundred. [speaker003:] Oh! OK. [speaker002:] Mm hmm. [speaker006:] Two thou two [disfmarker] [comment] fifteen hundred? [speaker002:] Mm hmm. [speaker006:] No. [speaker001:] Because [disfmarker] [speaker002:] Right. [speaker006:] Two hundred and fifty thousand. [speaker001:] Fifteen hundred. Because [disfmarker] [speaker006:] Yeah. [speaker001:] Yeah. [speaker006:] Two thousand and fifteen hundred. [speaker001:] Above, um [disfmarker] [vocalsound] it seems that [disfmarker] Well, some voiced sound can have also, [vocalsound] like, a noisy [pause] part on high frequencies, and [disfmarker] [speaker002:] Yeah. [speaker001:] But [disfmarker] [speaker002:] No, it's [disfmarker] makes sense to look at [pause] low frequencies. [speaker001:] Well, it's just [disfmarker] [speaker003:] So this is [disfmarker] uh, basically this is comparing [vocalsound] an original version of the signal to a smoothed version of the same signal? [speaker006:] Yeah. [speaker002:] Right. So i so i i this is [disfmarker] I mean, i you could argue about whether it should be linear interpolation or [disfmarker] or [disfmarker] or [disfmarker] or zeroeth order, but [disfmarker] but [speaker003:] Uh huh. 
[speaker002:] at any rate something like this [pause] is what you're feeding your recognizer, typically. [speaker003:] Like which of the [disfmarker]? [speaker002:] No. Uh, so the mel cepstrum is the [disfmarker] is the [disfmarker] is the cepstrum of this [disfmarker] [vocalsound] this, uh, spectrum or log spectrum, [speaker001:] So this is [disfmarker] Yeah. [speaker003:] Yeah. Right, right. [speaker002:] whatever it [disfmarker] You you're subtracting in [disfmarker] in [disfmarker] in [vocalsound] power domain or log domain? [speaker001:] In log domain. [speaker006:] Log domain. [speaker001:] Yeah. [speaker002:] OK. So it's sort of like division, when you do the [disfmarker] yeah, the spectra. [speaker006:] Yeah. [speaker001:] Uh, yeah. [speaker003:] It's the ratio. [speaker002:] Um. Yeah. But, anyway, um [disfmarker] and that's [disfmarker] [speaker003:] So what's th uh, what's the intuition behind this kind of a thing? I [disfmarker] I don't know really know the signal processing well enough to understand what [disfmarker] [vocalsound] what is that doing. [speaker001:] So. Yeah. [speaker002:] Yeah. I guess that makes sense. [speaker001:] What happen if [disfmarker] what we have [disfmarker] have [disfmarker] what we would like to have is [pause] some spectrum of the excitation signal, [speaker002:] Yeah. [speaker001:] which is for voiced sound ideally a [disfmarker] a pulse train [speaker003:] Uh huh. [speaker001:] and for unvoiced it's something that's more flat. [speaker003:] Uh huh. Right. [speaker001:] And the way to do this [vocalsound] is that [disfmarker] well, we have the [disfmarker] we have the FFT because it's computed in [disfmarker] in the [disfmarker] in the system, and we have [vocalsound] the mel [vocalsound] filter banks, [speaker003:] Mm hmm. Mm hmm. [speaker001:] and so if we [disfmarker] if we, like, remove the mel filter bank from the FFT, [vocalsound] we have something that's [pause] close to the [pause] excitation signal. 
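A minimal sketch of the procedure just described: build a smooth envelope by linearly interpolating the mel filter-bank log energies back across the FFT bins, subtract it from the log FFT spectrum, and take the variance of the residual in a low-frequency band (roughly 200–1500 Hz, as mentioned above). The mel-scale formulas, band edges, and the shortcut of sampling the log spectrum at the filter center bins (rather than summing under triangular filters) are illustrative assumptions, not the exact front-end computation.

```python
import numpy as np

def excitation_residual(frame, sr=8000, n_fft=512, n_mels=23):
    """Log FFT spectrum minus a linearly interpolated mel envelope.
    Returns (residual, variance of residual in ~200-1500 Hz).  The
    residual shows harmonic ripple for voiced frames, so its variance
    should be higher than for unvoiced frames.  Constants and the
    center-sampling shortcut are illustrative assumptions."""
    spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame)), n_fft))
    log_spec = np.log(spec + 1e-10)

    # 23 filter centers equally spaced on the mel scale, 64 Hz .. sr/2
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    centers_hz = mel_to_hz(
        np.linspace(hz_to_mel(64.0), hz_to_mel(sr / 2), n_mels + 2))[1:-1]
    center_bins = np.round(centers_hz * n_fft / sr).astype(int)

    # smooth envelope: linear interpolation through the center samples
    envelope = np.interp(np.arange(len(log_spec)), center_bins,
                         log_spec[center_bins])
    residual = log_spec - envelope

    lo = int(200 * n_fft / sr)             # ~200 Hz
    hi = int(1500 * n_fft / sr)            # ~1500 Hz
    return residual, float(np.var(residual[lo:hi]))
```

Since the subtraction is in the log domain, this is a ratio of spectra: the detailed spectrum divided by its smooth envelope, i.e. something excitation-like in the simplified source-filter picture.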
[speaker005:] Oh. [speaker003:] OK. [speaker001:] It's something that's like [vocalsound] a [disfmarker] a a train of p a pulse train for voiced sound [speaker002:] Yeah. [speaker003:] Oh! OK. Yeah. [speaker001:] and that's [disfmarker] that should be flat for [disfmarker] [speaker002:] Yeah. [speaker003:] I see. So do you have a picture that sh? Is this for a voiced segment, [speaker001:] So It's [disfmarker] Y [speaker003:] this picture? [speaker001:] yeah. [speaker003:] What does it look like for unvoiced? [speaker006:] Yeah. [speaker001:] You have several [disfmarker] some unvoiced? [speaker006:] The dif No. Unvoiced, I don't have for unvoiced. [speaker001:] Oh. [speaker002:] Yeah. [speaker006:] I'm sorry. [speaker002:] So, you know, all [disfmarker] [speaker001:] But [disfmarker] Yeah. [speaker002:] Yeah. [speaker006:] Yeah. This is the [disfmarker] between [disfmarker] [speaker001:] This is another voiced example. [speaker006:] No. [speaker001:] Yeah. [speaker006:] But it's this, [speaker001:] Oh, yeah. This is [disfmarker] [speaker006:] but between the frequency that we are considered for the excitation [disfmarker] [speaker001:] Right. Mm hmm. [speaker006:] for the difference and this is the difference. [speaker003:] This is the difference. [speaker001:] Yeah. [speaker003:] OK. [speaker001:] So, of course, it's around zero, [speaker002:] Yeah. [speaker005:] Sure looks [disfmarker] [speaker001:] but [disfmarker] [speaker005:] Hmm. [speaker001:] Well, [speaker003:] Hmm. [speaker001:] no. [speaker006:] Yeah. [speaker001:] It is [disfmarker] [speaker006:] Because we begin, [vocalsound] uh, in fifteen [vocalsound] point [disfmarker] the fifteen point. [speaker003:] So, does [disfmarker] does the periodicity of this signal say something about the [disfmarker] the [disfmarker] [speaker006:] Fifteen p [speaker001:] So it's [disfmarker] [speaker002:] Pitch. [speaker001:] Yeah. [speaker003:] the pitch? [speaker001:] It's the pitch. [speaker003:] OK. 
[speaker002:] Yeah. [speaker001:] Yeah. Mm hmm. [speaker002:] That's like fundamental frequency. [speaker001:] Mm hmm. [speaker003:] OK. [speaker002:] So, I mean, i t t [speaker003:] I see. [speaker002:] I mean, to first order [vocalsound] what you'd [disfmarker] what you're doing [disfmarker] I mean, ignore all the details and all the ways which is [disfmarker] that these are complete lies. [speaker003:] Mm hmm. [speaker002:] Uh, the [disfmarker] the [disfmarker] you know, what you're doing in feature extraction for speech recognition is you have, [vocalsound] uh, in your head a [disfmarker] a [disfmarker] a [disfmarker] a simplified production model for speech, [speaker006:] Yeah. [speaker003:] Mm hmm. [speaker002:] in which you have a periodic or aperiodic source that's driving some filters. [speaker006:] This is the [disfmarker] the auto correlation [disfmarker] the R zero energy. [speaker001:] Do you have the mean [disfmarker] do you have the mean for the auto correlation [disfmarker]? [speaker002:] Uh, first order for speech recognition, you say "I don't care about the source". [speaker006:] For [disfmarker] Yeah. I have the mean. [speaker001:] Well, I mean for the [disfmarker] the energy. [speaker003:] Right. [speaker002:] Right? [speaker003:] Right. [speaker002:] And so you just want to find out what the filters are. [speaker006:] Yeah. [speaker002:] The filters [vocalsound] roughly act like a, um [disfmarker] [vocalsound] a, uh [disfmarker] [vocalsound] a an overall resonant [disfmarker] you know, f some resonances and so forth that th that's processing excitation. [speaker006:] Here. [speaker001:] They should be more close. [speaker006:] Ah, no. This is this? More close. Is this? And this. [speaker003:] Mm hmm. Mm hmm. [speaker001:] Yeah. So they are [disfmarker] this is [disfmarker] there is less difference. [speaker006:] Mm hmm. 
[speaker002:] So if you look at the spectral envelope, just the very smooth properties of it, [vocalsound] you get something closer to that. [speaker001:] This is less [disfmarker] it's less robust. [speaker006:] Less robust. Yeah. [speaker001:] Oh, yeah. [speaker002:] And the notion is if you have the full spectrum, with all the little nitty gritty details, [vocalsound] that that has the effect of both, [speaker003:] Yeah. Mm hmm. [speaker002:] and it would be a multiplication in [disfmarker] in frequency domain so that would be like an addition in log [disfmarker] [vocalsound] power spectrum domain. [speaker003:] Mm hmm. Mm hmm. [speaker002:] And so this is saying, well, if you really do have that [vocalsound] sort of vocal tract envelope, and you subtract that off, what you get is the excitation. And I call that lies because you don't really have that, you just have some kind of [vocalsound] signal processing trickery to get something that's kind of smooth. It's not really what's happening in the vocal tract [speaker003:] Yeah. [speaker002:] so you're not really getting the vocal excitation. [speaker003:] Right. [speaker002:] That's why I was going to the [disfmarker] why I was referring to it in a more [disfmarker] [vocalsound] a more, uh, [vocalsound] uh, [vocalsound] conservative way, when I was saying "well, it's [disfmarker] yeah, it's the excitation". But it's not really the excitation. It's whatever it is that's different between [disfmarker] [speaker003:] Oh. [speaker002:] So [disfmarker] so, stand standing back from that, you sort of say there's this very detailed representation. [speaker003:] This moved in the [disfmarker] Yeah. Mm hmm. [speaker002:] You go to a smooth representation. You go to a smooth representation cuz this typically generalizes better. [speaker003:] Mm hmm. [speaker002:] Um, but whenever you smooth you lose something, so the question is have you lost something you can you use? [speaker003:] Right. 
[speaker002:] Um, probably you wouldn't want to go to the extreme of just ta saying "OK, our feature set will be the FFT", cuz we really think we do gain something in robustness from going to something smoother, [speaker003:] Mm hmm. [speaker002:] but maybe there's something that we missed. [speaker003:] Yeah. [speaker002:] So what is it? And then you go back to the intuition that, well, you don't really get the excitation, but you get something related to it. [speaker003:] Mm hmm. Mm hmm. [speaker002:] And it [disfmarker] and as you can see from those pictures, you do get something [vocalsound] that shows some periodicity, uh, in frequency, [speaker003:] Hmm. [speaker002:] you know, and [disfmarker] and [disfmarker] and also in time. So [disfmarker] [speaker003:] That's [disfmarker] that's really neat. [speaker002:] so, [speaker003:] So you don't have one for unvoiced [pause] picture? [speaker006:] Uh, not here. [speaker003:] Oh. [speaker006:] No, I have s [speaker001:] Mm hmm. [speaker002:] Yeah. [speaker006:] But not here. [speaker002:] But presumably you'll see something that won't have this kind of, uh, uh, uh, regularity in frequency, uh, in the [disfmarker] [speaker001:] But [disfmarker] Yeah. Well. [speaker006:] Not here. [speaker003:] I would li I would like to see those [pause] pictures. [speaker006:] Well, so. [speaker002:] Yeah. [speaker006:] I can't see you [comment] now. [speaker003:] Yeah. [speaker002:] Yeah. Yeah. [speaker001:] Mm hmm. [speaker006:] I don't have. [speaker003:] And so you said this is pretty [disfmarker] doing this kind of thing is pretty robust to noise? [speaker001:] It seems, yeah. [speaker006:] Pfft. [speaker003:] Huh. [speaker001:] Um, [speaker006:] Oops. The mean is different [vocalsound] with it, because the [disfmarker] [vocalsound] the histogram for the [disfmarker] [vocalsound] the classifica [speaker001:] No, no, no. But th the kind of robustness to noise [disfmarker] [speaker006:] Oh! 
[speaker001:] So if [disfmarker] if you take this frame, [vocalsound] uh, from the noisy utterance and the same frame from the clean utterance [disfmarker] [speaker006:] Hmm. [speaker003:] You end up with a similar difference [speaker001:] Y y y yeah. We end up with [disfmarker] [speaker003:] over here? [speaker001:] Yeah. [speaker003:] OK. [speaker006:] I have here the same frame for the [pause] clean speech [disfmarker] [speaker003:] Cool! Oh, that's clean. [speaker006:] the same cle [speaker003:] Oh, OK [speaker006:] But they are a difference. Because here the FFT is only with [vocalsound] two hundred fifty six point [speaker001:] Yeah, that's [disfmarker] [speaker006:] and this is with five hundred [pause] twelve. [speaker003:] Oh. OK. [speaker001:] Yeah. This is kind of inter interesting also because if we use the standard, [vocalsound] uh, frame length of [disfmarker] of, like, twenty five milliseconds, [vocalsound] um, [vocalsound] what happens is that for low pitched voiced, because of the frame length, y you don't really have [disfmarker] [vocalsound] you don't clearly see this periodic structure, [speaker002:] Mm hmm. [speaker001:] because of the first lobe of [disfmarker] of each [disfmarker] each of the harmonics. [speaker003:] So this one inclu is a longer [disfmarker] Ah. [speaker001:] So, this is like [disfmarker] yeah, fifty milliseconds or something like that. [speaker006:] Fifty millis Yeah. [speaker001:] Yeah, but it's the same frame and [disfmarker] [speaker003:] Oh, it's that time frequency trade off thing. [speaker001:] Yeah. [speaker003:] Right? I see. Yeah. [speaker001:] So, yeah. [speaker002:] Mm hmm. [speaker003:] Oh. Oh, so this i is this the difference here, [speaker006:] No. This is the signal. [speaker003:] for that? [speaker006:] This is the signal. [speaker001:] I see that. Oh, yeah. [speaker006:] The frame. [speaker003:] Oh, that's the f the original. [speaker006:] This is the fra the original frame. [speaker001:] Yeah. 
So with a short frame basically you have only two periods [speaker003:] Yeah. [speaker001:] and it's not [disfmarker] not enough to [disfmarker] to have this kind of neat things. [speaker003:] Mm hmm. [speaker006:] Mm hmm. [speaker003:] Yeah. [speaker001:] But [disfmarker] [speaker006:] And here [disfmarker] No, well. [speaker001:] Yeah. So probably we'll have to use, [vocalsound] like, long f long frames. Mm hmm. [speaker003:] Mm hmm. [speaker005:] Hmm. [speaker003:] Oh. That's interesting. [speaker002:] Mmm. Yeah, maybe. Well, I mean it looks better, but, I mean, the thing is if [disfmarker] if, uh [disfmarker] if you're actually asking [disfmarker] you know, if you actually j uh, need to do [disfmarker] place along an FFT, it may be [disfmarker] it may be pushing things. [speaker001:] Yeah. [speaker002:] And [disfmarker] and, uh [disfmarker] [speaker003:] Would you [disfmarker] would you wanna do this kind of, uh, difference thing [vocalsound] after you do spectral subtraction? [speaker001:] Uh, [vocalsound] maybe. [speaker006:] No. Maybe we can do that. [speaker001:] Mmm. [speaker002:] Hmm. The spectral subtraction is being done at what level? Is it being done at the level of FFT bins or at the level of, uh, mel spectrum or something? [speaker001:] Um, I guess it depends. [speaker002:] I mean, how are they doing it? [speaker001:] How they're doing it? Yeah. Um, I guess Ericsson is on the, um, filter bank, [speaker006:] FFT. Filter bank, [speaker001:] no? [speaker006:] yeah. [speaker001:] It's on the filter bank, so. So, yeah, probably [disfmarker] [speaker002:] So in that case, it might not make much difference at all. [speaker001:] I i it [disfmarker] Yeah. [speaker003:] Seems like you'd wanna do it on the FFT bins. [speaker002:] Maybe. I mean, certainly it'd be better. [speaker003:] I I mean, if you were gonna [disfmarker] uh, for [disfmarker] for this purpose, that is. [speaker002:] Yeah. [speaker001:] Mm hmm. [speaker002:] Yeah. [speaker001:] Mm hmm. 
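The frame-length point made here, that a 25 ms window spanning only two pitch periods cannot resolve the harmonic structure that a 50 ms window shows, can be sketched numerically. Everything in this sketch is an illustrative assumption (8 kHz sample rate, 100 Hz pitch, a synthetic harmonic signal standing in for voiced speech); it is not the actual data being discussed.

```python
import numpy as np

def log_spectrum(frame, n_fft):
    """Log-magnitude FFT of a Hann-windowed frame (zero-padded to n_fft)."""
    win = frame * np.hanning(len(frame))
    return np.log(np.abs(np.fft.rfft(win, n_fft)) + 1e-10)

fs = 8000                        # sample rate (Hz) -- illustrative assumption
f0 = 100                         # low pitch: a 10 ms period
t = np.arange(fs) / fs
# Crude stand-in for a voiced frame: the first 19 harmonics of f0, in phase
x = sum(np.sin(2 * np.pi * k * f0 * t) for k in range(1, 20))

short = log_spectrum(x[:int(0.025 * fs)], 800)   # 25 ms: only 2-3 pitch periods
long_ = log_spectrum(x[:int(0.050 * fs)], 800)   # 50 ms: 5 pitch periods

# With 10 Hz bin spacing (8000 / 800), harmonics sit on bins 10, 20, 30, ...
# In `long_` each harmonic stands well above the inter-harmonic valleys;
# in `short` the wide main lobes blur the harmonic ridges together.
```

Zero-padding both frames to the same FFT length keeps the bins comparable; only the window length itself changes the frequency resolution, which is the time-frequency trade-off being discussed.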
[speaker002:] OK. [speaker001:] Mmm. [speaker002:] What else? [speaker001:] Uh. [vocalsound] Yeah, that's all. So we'll perhaps [vocalsound] [vocalsound] [vocalsound] try to convince OGI people to use the new [disfmarker] [vocalsound] the new filters and [disfmarker] Yeah. [speaker002:] OK. Uh, has [disfmarker] has anything happened yet on this business of having some sort of standard, uh, source, [speaker001:] Uh, [speaker002:] or [disfmarker]? [speaker001:] not yet but I wi I will [vocalsound] call them and [disfmarker] [speaker002:] OK. [speaker001:] now they are [disfmarker] I think they have more time because they have this [disfmarker] well, Eurospeech deadline is [vocalsound] over [speaker003:] When is the next, um, Aurora [pause] deadline? [speaker001:] and [disfmarker] It's, um, in June. [speaker003:] June. [speaker001:] Yeah. [speaker002:] Early June, late June, middle June? [speaker001:] I don't know w [speaker002:] Hmm. [speaker005:] Hmm. [speaker002:] OK. Um, and [pause] he's been doing all the talking but [disfmarker] but [vocalsound] these [disfmarker] [vocalsound] he's [disfmarker] he's, uh [disfmarker] [speaker006:] Yeah. [speaker002:] This is [disfmarker] this by the way a bad thing. We're trying to get, um, m more female voices in this record as well. So. Make sur make sure Carmen [vocalsound] talks as well. Uh, but has he pretty much been talking about what you're doing also, and [disfmarker]? [speaker006:] Oh, I [disfmarker] I am doing this. Yeah, yeah. [speaker002:] Yes. [speaker006:] I don't know. I'm sorry, but I think that for the recognizer for the meeting recorder that it's better that I don't speak. [speaker002:] Yeah, well. [speaker006:] Because [disfmarker] [speaker002:] You know, uh, we'll get [disfmarker] we'll get to, uh, Spanish voices sometime, and [vocalsound] we do [disfmarker] we want to recognize, [vocalsound] uh, you too. 
[speaker006:] After the [disfmarker] after, uh, the result for the TI digits [vocalsound] on the meeting record there will be foreigns people. [speaker001:] Yeah, but [disfmarker] [speaker003:] Y [speaker002:] Oh, no. We like [disfmarker] we [disfmarker] we're [disfmarker] we're [disfmarker] w we are [disfmarker] we're in the, uh, Bourlard Hermansky Morgan, uh, frame of mind. Yeah, we like high error rates. It's [disfmarker] [speaker001:] Yeah. [speaker002:] That way there's lots of work to do. So it's [disfmarker] Uh, anything to [speaker004:] N um, not not not much is new. [speaker002:] talk about? [speaker004:] So when I talked about what I'm planning to do last time, [vocalsound] I said I was, um, going to use Avendano's method of, um, [vocalsound] using a transformation, um, [vocalsound] to map from long analysis frames which are used for removing reverberation to short analysis frames for feature calculation. He has a trick for doing that [pause] involving viewing the DFT as a matrix. Um, but, uh, um, I decided [vocalsound] not to do that after all because I [disfmarker] I realized to use it I'd need to have these short analysis frames get plugged directly into the feature computation somehow [speaker002:] Mm hmm. [speaker004:] and right now I think our feature computation is set to up to, um, [vocalsound] take, um, audio as input, in general. So I decided that I [disfmarker] I'll do the reverberation removal on the long analysis windows and then just re synthesize audio and then send that. [speaker002:] This is in order to use the SRI system or something. [speaker004:] Um, [speaker002:] Right? [speaker004:] or [disfmarker] or even if I'm using our system, I was thinking it might be easier to just re synthesize the audio, [speaker002:] Yeah? [speaker004:] because then I could just feacalc as is and I wouldn't have to change the code. [speaker002:] Oh, OK. Yeah. I mean, it's [disfmarker] um, certainly in a short [disfmarker] short term this just sounds easier. 
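The processing plan described here, doing the dereverberation on long analysis windows and then re-synthesizing audio so that feacalc can be used unchanged, amounts to a windowed overlap-add loop. A minimal sketch follows; the actual dereverberation step is left as an identity placeholder, and the frame size, hop, and function names are all assumptions, not the speaker's implementation.

```python
import numpy as np

def process_spectrum(spec):
    """Placeholder for the per-frame dereverberation step (identity here)."""
    return spec

def ola_resynthesize(x, frame_len=1024, hop=256):
    """Long-window STFT analysis, spectral processing, overlap-add resynthesis."""
    win = np.hanning(frame_len)
    y = np.zeros(len(x) + frame_len)
    norm = np.zeros(len(x) + frame_len)
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * win
        spec = np.fft.rfft(frame)
        spec = process_spectrum(spec)           # modify the long-window spectrum
        out = np.fft.irfft(spec, frame_len) * win
        y[start:start + frame_len] += out       # overlap-add the synthesis window
        norm[start:start + frame_len] += win ** 2
    norm[norm < 1e-8] = 1.0                     # avoid divide-by-zero at the edges
    return (y / norm)[:len(x)]
```

With the identity placeholder, the interior of the signal is reconstructed essentially exactly, which is the property that makes "just re-synthesize the audio and run feacalc as is" a reasonable short-term plan.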
[speaker004:] Uh huh. [speaker002:] Yeah. I mean, longer term if it's [disfmarker] [vocalsound] if it turns out to be useful, one [disfmarker] one might want to do something else, [speaker004:] Right. That's true. [speaker002:] but [disfmarker] Uh, uh, I mean, in [disfmarker] in other words, you [disfmarker] you may be putting other kinds of errors in [pause] from the re synthesis process. [speaker004:] But [disfmarker] e u From the re synthesis? Um, [speaker002:] Yeah. [speaker004:] O OK. I don't know anything about re synthesis. Uh, how likely do you think that is? [speaker002:] Uh, it depends what you [disfmarker] what you do. I mean, it's [disfmarker] it's [disfmarker] it's, uh, um [disfmarker] Don't know. But anyway it sounds like a reasonable way to go for a [disfmarker] for an initial thing, and we can look at [disfmarker] [vocalsound] at exactly what you end up doing and [disfmarker] and then figure out if there's some [disfmarker] [vocalsound] something that could be [disfmarker] be hurt by the end part of the process. [speaker004:] OK. [speaker002:] OK. So that's [disfmarker] [speaker004:] That [disfmarker] Yeah, e That's it, that's it. [speaker002:] That was it, huh? OK. OK. [speaker004:] Uh huh. [speaker002:] Um, anything to [pause] add? [speaker005:] Um. Well, I've been continuing reading. I went off on a little tangent this past week, um, looking at, uh, [vocalsound] uh, modulation s spectrum stuff, um, and [disfmarker] and learning a bit about what [disfmarker] what, um [disfmarker] what it is, and, uh, the importance of it in speech recognition. And I found some [disfmarker] [vocalsound] some, uh, neat papers, [vocalsound] um, historical papers from, [vocalsound] um, [vocalsound] Kanedera, Hermansky, and Arai. [speaker002:] Yeah. 
[speaker005:] And they [disfmarker] they did a lot of experiments where th where, [vocalsound] um, they take speech [vocalsound] and, um, e they modify [vocalsound] the, uh [disfmarker] they [disfmarker] they [disfmarker] they measure the relative importance of having different, um, portions of the modulation spectrum intact. And they find that the [disfmarker] the spectrum between one and sixteen hertz in the modulation [vocalsound] is, uh [disfmarker] is im important for speech recognition. [speaker002:] Yeah. [speaker005:] Um. [speaker002:] Sure. I mean, this sort of goes back to earlier stuff by Drullman. [speaker005:] Yeah. [speaker002:] And [disfmarker] and, uh, the [disfmarker] the MSG features were sort of built up [vocalsound] with this notion [disfmarker] [speaker005:] Right. [speaker002:] But, I guess, I thought you had brought this up in the context of, um, targets somehow. [speaker005:] Right. [speaker002:] But i m [speaker005:] Um [disfmarker] [speaker002:] i it's not [disfmarker] I mean, they're sort of not in the same kind of category as, say, a phonetic target or a syllabic target [speaker005:] Mmm. Mm hmm. [speaker002:] or a [disfmarker] [speaker005:] Um, I was thinking more like using them as [disfmarker] as the inputs to [disfmarker] to the detectors. [speaker002:] or a feature or something. Oh, I see. [speaker005:] Yeah. [speaker002:] Well, that's sort of what MSG does. [speaker005:] Yeah. [speaker002:] Right? [speaker005:] Mm hmm. [speaker002:] So it's [disfmarker] But [disfmarker] but, uh [disfmarker] [speaker005:] S [speaker002:] Yeah. [speaker005:] Yeah. [speaker002:] Anyway, we'll talk more about it later. [speaker005:] OK. We can talk more about it later. [speaker002:] Yeah. Yeah. [speaker005:] Yeah. [speaker002:] Yeah. So maybe, [vocalsound] le [speaker003:] Should we do digits? [speaker002:] let's do digits. Let you [disfmarker] you start. [speaker004:] Oh, OK. [speaker005:] L fifty. [speaker001:] Right. 
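The modulation spectrum being discussed, where Kanedera, Hermansky, and Arai found the 1 to 16 Hz band important for recognition, is just the spectrum of a subband log-energy trajectory sampled at the frame rate. A small illustration, with an assumed 100 Hz frame rate and a synthetic envelope modulated near the syllable rate:

```python
import numpy as np

def modulation_spectrum(log_energy, frame_rate=100.0):
    """Magnitude spectrum of a log-energy trajectory; frequencies are modulation Hz."""
    traj = log_energy - np.mean(log_energy)       # remove DC (0 Hz modulation)
    spec = np.abs(np.fft.rfft(traj))
    freqs = np.fft.rfftfreq(len(traj), d=1.0 / frame_rate)
    return freqs, spec

# Synthetic example: an energy envelope modulated at 4 Hz (around syllable rate)
frame_rate = 100.0
t = np.arange(400) / frame_rate                   # 4 seconds of frames
env = 1.0 + 0.5 * np.sin(2 * np.pi * 4.0 * t)     # 4 Hz modulation depth 0.5
freqs, spec = modulation_spectrum(np.log(env + 1e-3), frame_rate)
peak_hz = freqs[np.argmax(spec)]                  # dominant modulation frequency
```

Band-pass filtering this trajectory to 1-16 Hz before feeding it to a detector is, as noted in the discussion, essentially what the MSG features already do.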
[speaker006:] OK, so we should be on. [speaker004:] So [speaker006:] And I'm gonna go ahead and start by reading the digit strings, just to give you an example of how to do it. And so the first thing to do is read the transcript number which is [nonvocalsound] on the right hand side there. And so if you could just go around the table and read them out. [speaker007:] OK, should I next? [speaker006:] Yep If you could sign that? Thanks. Thanks So, I wanted to just era it reiterate cuz we did have have one person come in a little later that, um there will be an opportunity to remove anything from the transcript that you don't want made public after it's transcribed, so that'll be some months off, but you will have that opportunity OK? Thanks. [speaker002:] OK. [speaker004:] OK. Thanks Adam. We could start with our meeting? [speaker006:] Yep! [speaker004:] Yep! [speaker006:] Thanks very much. [speaker004:] OK. You have to stay all the time here? [speaker006:] I don't have to Um. [speaker004:] But ch you wanna do? [speaker006:] Well [disfmarker] It's just [disfmarker] At the end of the meeting, don't turn off the mikes just leave them on. [speaker002:] OK. [speaker006:] And uh Uh. Actually could y actually, could you call me, At the end of the meeting? I'll leave my number on the board there [speaker004:] Yeah. OK. [speaker006:] Thanks. [speaker004:] We can do that. [speaker007:] Hey, Adam, you know what? It must be funny to do an a research on how people [pause] behave while [pause] reading these numbers. [speaker006:] Numbers are strange! [speaker007:] That must be [disfmarker] [speaker006:] It is! [speaker007:] Yeah! It must be much more interesting to [disfmarker] to see their behavior. [speaker006:] the [disfmarker] the meetings are strange. [speaker007:] Oh y yeah! [speaker006:] People interrupt each other much more than I thought they did. [speaker004:] Two nine seven seven? Right. [speaker006:] two nine seven seven. [speaker004:] OK. [speaker002:] OK. 
[speaker004:] Yeah. I will call you then. [speaker006:] Thanks. [speaker004:] Uh, OK. So, then let's start with [pause] our weekly meeting. The last time's uh we learned too much, I believe, so uh, today, I have two topics. And the first topic, um, we have a new member in our group, Miguel Sanchez. I believe everybody knows him very well in the meantime because he stayed here quite a little time [speaker002:] Hi. [speaker004:] and I think he is talking to everybody in the meantime. So, but, anyway, I would like that Miguel Sanchez will introduce himself a little bit, concerning his background, what he intend to do. And [disfmarker] [pause] Because I discussing with him uh, uh many things concerning, uh, his skills and what the [pause] NSA group intends to do. [speaker002:] Mm hmm. [speaker004:] I hope there will be a match, uh, then, for his future work. And the second topic is then [pause] to discuss a proposal in [pause] much more detail. I believe [pause] we still haven't left our starting position. And, um, yesterday I also discussed with Wilbert some things and [disfmarker] I would like to focus on the question [comment] "What problem we are going to solve with such a proposal." Not so will they have a common activity within the group itself and maybe with other partners outside, but, the technical problem. But that's under the s second topic, so first I would like that Miguel will told us a little bit about his skills and [disfmarker] and uh his background. [speaker002:] OK. OK, thank you Joe. [speaker004:] So, please. [speaker002:] Well, I, uh, as all of you know I came from [disfmarker] from Spain, from the uh [pause] Polytechnic University of Valencia and i uh [pause] I um, just finished my PHD thesis about uh [pause] um power saving [pause] techniques for wireless networking. uh, In fact, I have developed an algorithm to control the ra radio frequency power uh of a wireless transceiver. 
And, uh [pause] I've been working [disfmarker] also in routing issues, especially in the so called ad hoc netw wireless networks. Where uh, you have a set of, uh, mobile nodes and, no other infrastructure, no base stations. So all the routing functions are done by the node [disfmarker] by the same nodes. OK? So, nodes act as both end points and routers. eh I've in the study for some time these kind of networks. And, uh, in fact, there are a lot of [disfmarker] of [disfmarker] research uh articles about this [disfmarker] this kind of networks because You know, uh, mmm infrastructure based network [disfmarker] network has been a long topic of research. So, for example, routing on [disfmarker] on wired networks is a [disfmarker] a topic still to be researched but it has an important background. But mobile network this kind of mobile networks is a little bit [disfmarker] uh newer, especially because uh, until not so long uh, the [disfmarker] the things were too heavy to have a computer, a wireless transceiver, on a more or less portable thing. So, with the advent of this new technology, low power uh, high, uh, processing capability and wireless [disfmarker] eh [disfmarker] networking capability, in a really small package, new kind of devices are [disfmarker] are being built and are appearing, in the market. And, eh, well this is more or less what I have done. And the kind of, uh, things, I'm interested is more or less all, around this [disfmarker] these [pause] kind of networks. But I could say, uh in general, mobile service is not only about, uh these specific kind of networks. But, in general, about, uh services uh over mobile or services that can benefit from the [disfmarker] uh, capability of nodes to move around. And, uh, well, I [disfmarker] I've been teaching now for twelve years, comp eh At a computer sciences school in my university, I've been teaching computer networks. 
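The setting Miguel describes, no base stations, with every node acting as both endpoint and router, is what on-demand ad hoc routing protocols address by flooding a route request through the neighbor graph. A purely illustrative sketch of that discovery step (the topology, node names, and function are assumptions, not any specific protocol):

```python
from collections import deque

def discover_route(neighbors, src, dst):
    """Breadth-first flood of a route request; returns the hop path, or None.

    `neighbors` maps each node to the nodes in its radio range; every node
    forwards the request once, acting as both endpoint and router.
    """
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:        # walk the parents back to the source
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in neighbors.get(node, ()):
            if nxt not in parent:          # each node rebroadcasts only once
                parent[nxt] = node
                queue.append(nxt)
    return None                            # destination unreachable

# Example topology: A can reach D only through intermediate nodes B and C
topo = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
```

Real ad hoc protocols add sequence numbers, route caching, and repair on link breakage, but the flood-and-reverse-path core is the same idea.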
And, well, I am [disfmarker] more or less uh, knowledgable about, well, TCP IP networking, ISO uh, networking, and, uh well I'm [disfmarker] I'm a computer guy, so I'm really [disfmarker] um Well, I can say uh, proudly uh, more or less, computer skilled. I mean, uh, operating systems uh, programming languages uh, networking [@ @]. So this kind of things I think I'm more or less well trained. So this is, I think, um, my first presentation. Next [disfmarker] maybe next meeting I will uh, do a small sketch about uh, my past work, and [disfmarker] well, presenting some det a little bit more detailed approach with some schemes and [disfmarker] and some things from previous talks, so you can have a better idea of w the kind of work I've been doing. OK? [speaker004:] Mm hmm. [speaker002:] Thank you. [speaker004:] So, thanks, Miguel. Yeah, we m discussed it yesterday that you'll maybe [disfmarker] will have a talk [pause] next Tuesday [speaker002:] Y Yeah. [speaker004:] But, Give me a sign and send me an abstract so that I can announce it, then, for the next meeting. [speaker002:] Yeah. That's OK. [speaker004:] Um. One short other topic is Uh. Claudia did a lot of work concerning the web pages in the meantime. [speaker007:] Oh, OK. [speaker004:] And, despite [vocalsound] busy stuff with waivers and uh [vocalsound] and visas and whatever, uh, she found the time to um [pause] write, with what we discussed before and in [comment] the form of HTML stuff. And we wanna put it on the [pause] web pages in the next days. So, I will send an email, then, to everybody that they should check their web pages whether they're aligned with, um, their own views and opinions of what we should [disfmarker] what should be presented on the web pages. And any feedback is then, well appreciated. No? [speaker002:] OK. [speaker004:] So. Let's switch to the project proposal. Um, I mentioned that it is still f [speaker007:] Ah. Sorry. Can I? [speaker004:] Yeah? 
[speaker007:] Maybe I can add something. [speaker004:] Claudia [disfmarker] [speaker007:] Yeah, if there's anything else which we [disfmarker] what we could add on the web site. So, p For example, if you have a small abstract or some pictures or whatever, about the work you did before, what you are planning to do here, that would be fine, [speaker002:] OK. [speaker007:] cuz now we can add some more [pause] stuff there. Or if you have something [disfmarker] or if somebody has some slides, o or articles, whatever he wants to be [pause] published there. [speaker004:] Yeah. Then everybody sees the structure, [disfmarker] how it is uh, structured [disfmarker] really structured. [speaker007:] Yeah and [speaker004:] Then [pause] maybe you wi can easy offer some additional input which should be presented, then, on the web pages. [speaker007:] Yeah, it depends what [disfmarker] what everybody wants to do. [speaker004:] Yeah. Yeah. [speaker007:] So about the projects will be something, then, about publications will be another point, and [disfmarker] Yeah, additional information about how is life in Berkeley and i Yep. I don't know, [pause] some [disfmarker] some hints, some web sites or links which are useful when somebody arrives or stuff like that. [speaker004:] Yeah, it is. Yeah. [speaker002:] Hmm hmm hmm. [speaker004:] Half collection of necessary information. [speaker005:] But there's no personal homepage for everybody? Or, [pause] is it planned or not, [speaker004:] Yeah su You can have [disfmarker] [speaker005:] or [disfmarker]? [speaker004:] Yeah, you can have it. Because, as uh [disfmarker] Uh, uh, le [speaker005:] I'm not interested in this. That makes work. [speaker004:] No, but this is [disfmarker] oh, uh [disfmarker] Everybody's obliged to do it for his own, you know. [speaker005:] Yeah. [speaker004:] So there is not uh [disfmarker] Claudia is not the webmaster here to, uh, [vocalsound] [pause] get the collection for [disfmarker] for [disfmarker] for everything. 
[speaker005:] OK. [speaker007:] Oh, oh, no, I'm not doing it for everybody [speaker004:] So. [speaker007:] but what I can [disfmarker] [speaker004:] This is responsible for [disfmarker] uh, everybody's responsible for his own, yeah. [speaker007:] Yeah, but [disfmarker] but what is planned is that I do something like a previous, you know, layout, and give everybody the, uh, opin uh, the possibility to [disfmarker] the chance to put his [pause] content inside [speaker005:] Sure. OK, his name and so on. [speaker007:] so that the layout is for everybody the same, [speaker005:] Yeah. [speaker002:] Well, like have a [pause] blueprint. [speaker007:] so about the projects and stuff. [speaker005:] OK Mm hmm. [speaker007:] So, there will be something. [speaker004:] Yeah. Yeah. OK, thanks. So, let's switch to the proposal. I ask everybody whether [pause] read it in the meantime, and understand everything. Yeah. Everybody say yes? Yeah? [speaker005:] Yes. [speaker004:] Yes, OK. Um. From my point of view, As I mentioned before, we still in the starting position. I think the four building blocks on the network level will have really a major impact on [disfmarker] on f current um, available wireless networks and, uh, also for future generations. And, what the missing thing is really what kind of problem we are [disfmarker] wan wanna s solve with such a project proposal. And Wilbert yesterday mentioned, and he is right, [comment] [vocalsound] that, what kind of, maybe, service we are really going to offer. The linkage between such kind of application, also interactive multimedia application and, uh, this kind of networking stuff is, um, maybe not very well aligned, because there will be of course, a portion of mobile access to such kind of application. But, I think that's not the majority and that is not the major focus of um this kind of [disfmarker] um [pause] this kind of application anyway. 
So I would like to [disfmarker] to [disfmarker] uh split, in principle, the discussions of [pause] what kind of application tsk [comment] um f far away and I would like to [disfmarker] more to focus if we have certain kind of vision t concerning these building blocks and the network, Um, What kind of problem we are going to solve. What kind of, uh, um, um, glue between these four building blocks exists and what kind of synergy effects, in principle, as if they work and fit very well together exists for networking stuff. And what kind of service could we provide [pause] in that area. We will have these kind of, uh, really closely working together of these four building blocks. So maybe uh, Wilbert, you can start to discu or to [disfmarker] to tell the group about what you mentioned yesterday concerning certain kind of ideas. Maybe it's a brainstorming um, of [disfmarker] of these things and maybe we can comment later on th these [disfmarker] uh these ideas. And I saw you sk still discuss with [disfmarker] with [disfmarker] uh [disfmarker] Mark, right? some stuff. Was it also related with [disfmarker] with these things. [speaker003:] No. [speaker004:] No? OK, It was [@ @]. [speaker003:] This afternoon you mean? [speaker004:] Yeah. [speaker003:] No, this was [pause] unrelated to [disfmarker] to the things of yesterday and last weeks [speaker004:] OK Yeah, yeah. Then, maybe then, one [disfmarker] one w uh sentence before. [speaker003:] wha [speaker004:] I believe everybody fits very well in these kind of things. Because from the mathematical description with QS in Advance, me, as an expert for quality of service and, uh, Michael as an expert for uh, MPLS and Wilbert an expert for [disfmarker] for Multicast, and, uh, I discussed with Miguel. It's uh maybe a little bit the shift to active routing, but it is still in the area of routing. So.
My idea is, in principle, to have, independently of any funding, independently of any outer contact, such [disfmarker] such a [disfmarker] core activity, in principle. And we can see how much from the outer world can fit in. First of all, the first stage is the institution which is behind everybody. That means uh KPN in your case and Siemens in our case. And maybe, then, in the broader world here other partners registered in the proposal migh Tomorrow, I will go to Cisco, for instance, and I will discuss uh, all this kind of things. Maybe they have certain additional activities and they're interested in the results before [disfmarker] The benefit here is, in principle, that the results are available. They are not restricted any way. If you have your own common activity. That makes a little bit more easier. And, uh, if there is a technical linkage between such, uh um, between the outer world and us OK, the better it is. But, anyway, if we come to a certain kind of [pause] uh, common work within the core of the NSA group, I think that's really beneficial to everybody. And [pause] uh, especially, Wilbert, if you have in mind to go in the [comment] mmm, uh, really from the project proposal view, I think we can build it on this [disfmarker] this core. You know, and go to the outer world with, and [disfmarker] and [disfmarker] and align it in a certain way. For instance, that it also fit, maybe, to KPN, and Siemens, and in other companies. Yeah. [speaker003:] Yeah, and [disfmarker] [vocalsound] This probably has to be a brainstorm [comment] right now. [speaker004:] Yep. [speaker003:] Um [pause] What I told Joachim yesterday was that, uh uh for my company [disfmarker] the company I work for, they are more interested in [disfmarker] in an explicit service, than in core technology. u Me, on the other hand, I'm more interested in uh, core service, or c core technology, myself. So I'm u I was thinking about [pause] uh, how I could combine the both of them. 
KPN was, um [disfmarker] they're interested in having, uh like, six people or something, working on the project for one year or maybe, uh, three people fo working on the project for two years, and have a kind of a demonstration uh, for a specifically in UMTS service. [speaker002:] Mm hmm. [speaker003:] I was thinking that um as a [disfmarker] well, not really a UMTS service, but [disfmarker] [pause] yesterday, what I discussed with Joachim was uh, a w little bit based on your ad hoc networking also. That it would be interesting to have a kind of a device where you can switch from w a WaveLAN Technology to, um, UMTS. I think that's, uh, it's in [disfmarker] in the future that will be a main competitor of, uh, of UMTS, [speaker002:] Hmm hmm hmm. [speaker003:] too, like the [disfmarker] the [disfmarker] [pause] the Lucent, uh, equipment has a l huge scope already, like five hundred feet, or [disfmarker] or even more sometimes. so that's [disfmarker] it's might be interesting for, a service to walk into a building or whatever is available on that network. You can switch from UMTS or uh uh, to WaveLAN Technologies. [speaker004:] Hmm. [speaker002:] Hmm hmm hmm. [speaker004:] Uh, maybe I will write a few keywords on the [disfmarker] on the whiteboard [speaker003:] Sure. [speaker004:] OK. [speaker003:] Of course, it sounds strange for a company that does UMTS to switch [disfmarker] to [disfmarker] to be able to switch or provide the service to switch to WaveLAN but at the end it makes it easier for your [disfmarker] and cheaper for your customers. So, if they don't do it, somebody else will probably do it. [speaker002:] And you can build a [disfmarker] a [disfmarker] [vocalsound] an [disfmarker] a [disfmarker] a double technology adapter, too. So you can market these double technology adapters. And, this can be a [disfmarker] a potential uh um you know, uh, income, too. [speaker003:] Yeah. [speaker002:] Because you just have to license, uh uh WaveLAN technology. 
You can build, uh, the whole thing. [speaker003:] Yeah, [speaker002:] This U UMTS WaveLAN, uh mixed together. [speaker003:] and [disfmarker] Yeah, something like that could be possible, maybe even a UMTS to WaveLAN gateway, or [disfmarker] I'm not sure if that's possible like we U if UMTS re contains redirects or whatever, like switches. I'm not sure about it. It could be. [speaker002:] Mmm. I think, they are using different, uh, radio technologies. So, [pause] uh, at the physical level, they're [disfmarker] they're [disfmarker] they are not compatible. [speaker003:] It's not a problem. No, but s You could have a UMTS gateway that translates everything to that uh w local [@ @]. [speaker002:] Yeah. Yeah. [speaker003:] If you will [speaker002:] That [disfmarker] that's right. [speaker004:] Yet the basic idea is, in principle, to switch seamlessly between wireless [pause] LAN technology, whatever and [pause] UMTS, right? [speaker003:] Yes. [speaker004:] And [pause] you see the benefit that, in principle, the customer will save a lot of money. Because if [pause] in house communication like WaveLAN or maybe I see it also in the American market, some WaveLAN communication in the outer world will be available, the [disfmarker] m and that is your assumption, that the more cost uh extensive um U M T S connection could be principle used, I would like to say, over WaveLAN, and then I have a normal WaveLAN connection, right? [speaker003:] Yes. [speaker004:] And the router and the WaveLAN connection goes into the core internet elsewhere and the connection is not dropped between this seamless hand off I would like to say. [speaker003:] Mm hmm. [speaker004:] Right? or what [disfmarker] or [disfmarker] [speaker003:] Well, If [pause] connection is lost, I am [disfmarker] I'm not sure if sometimes you'd simply have to drop a connection. If you have outstanding connections. [speaker004:] What [disfmarker] [speaker001:] Yeah. 
[speaker003:] But that's uh one thing of the internet, right? It's [pause] should be able to unplug a host, bring it somewhere else and everything continues like it should be. I'm not sure i if it's uh [pause] if you really have to con uh save a connection for all applications, I [disfmarker] I don't think so. [speaker004:] Yeah, but if this is the final goal, it should be as much as possible available that you do not have to break the connection if you switch between the subnets' technology Right? [speaker002:] Yeah but I [disfmarker] I have a point uh regarding the [disfmarker] this question, is eh the cost function. I mean. uh What about the user who is switching from one [disfmarker] let's say [disfmarker] zero cost area network to another [pause] network where he has to pay. It is OK [pause] for the service to continue when he uh switches from one network to another. But probably, the user wants to be aware that he's switching from one place [disfmarker] from one network to another. Because maybe, [vocalsound] while he's uh having, let's say, a [pause] vide video conference uh over the [disfmarker] the WaveLAN oh he's not paying any extra cost. But when he continues with this service on the UMTS, he's having to cover uh an important [disfmarker] let's say, an important cost [disfmarker] or some cost, non null cost. [speaker004:] Yeah, non null [speaker002:] So [pause] uh this can be something that probably the user wants to be aware of. Of course, this doesn't mean that the user wants the connection to be dropped when he switches. to the w uh [disfmarker] r uh from the zero cost area network to [disfmarker] to the [disfmarker] uh um non null cost area network. But maybe some users want to do this. Because they say "OK [disfmarker] I'm not willing to pay any back for this service, because I'm just watching [pause] uh you know the sports news. 
So, if I switch [disfmarker] If [disfmarker] if, because of my motion, I'm switching, or I'm going out of the coverage of the [disfmarker] of the uh wave [disfmarker] uh wireless uh local area network, well, I [disfmarker] I want the service to be uh stopped." And this can be some pattern then that [pause] some users will follow. [speaker001:] Yeah. [speaker004:] Yeah, but if your s uh I believe everybody is aware about the USAIA architecture. And uh this one is, here, [disfmarker] this stuff is, here, also related to this kind what is mentioned there concerning the end system. And in my conf uh considered end system, there is a certain kind of policy, concerning how to use subnets technologies and [disfmarker] and all these kind of things. How to use quality of service, you know, in advance, because you have to pay for certain technologies and maybe for certain services. And that should not be done automatically. But it could be done automatically. But nevertheless there must be a certain kind of um user interaction always possible, that you could set up a certain kind of mini database how to deal with all these kind of things. I think that's not a problem. But uh what I [disfmarker] I would like to ask Wilbert here, and we discuss it yesterday, Is it only the usage of that PCMCIA card where you have [disfmarker] instead of GSM today, UMTS, and instead of wireless LAN today also wireless LAN. Remember your four and one [pause] card uh concerning um, what is the name, from Dataco -? No. Dacom [speaker003:] Oh, the one I have Sitcom? [speaker004:] Where you can enable four technologies, and you plug in, in your laptop [speaker003:] Yeah. [speaker004:] and then you have every four uh subnet layer technologies available in your laptop. Is that the problem? Yeah, that's not the problem. Right. [speaker003:] Um. Well, the switching itself could be a problem. [speaker004:] Why? 
[speaker003:] Becau well it's a depends if one of the other networks is available, maybe you are, I don't know. So that could be a t topic for research, [speaker004:] Yeah, but this i [speaker003:] so maybe you know more about it. [speaker004:] Yeah, but this is th only the probing to figure out that you get at a certain location the information that a s certain coverage of a certain subnet techn technology is available. And if so, then what Miguel mentions, then you add a certain kind of policy, to switch [pause] or not to switch, to this kind of technology, [speaker003:] Yeah. [speaker004:] right? [speaker003:] Y Yes. [speaker004:] But this is end system related. That's not network related. I see no network related stuff within these [disfmarker] in this uh idea. [speaker003:] Um. [speaker004:] It's only to set up i u For me it's like a laptop t to plug one PCMCIA [disfmarker] and currently will have two PCMCIA cards, and I compare GSM with UMTS. [speaker002:] Hmm hmm hmm. [speaker004:] And today I would like to have GSM, and then I have my GSM connectivity. And I cannot use it [pause] other technology, only GSM, because the interfaces, the driver interface, does not permit me to switch between the [disfmarker] between different technologies. So if I want to go to wireless LAN I have to stop my connection and put in my wireless LAN PCMCIA card, and then I can go on and set up the [disfmarker] the same connection one more time. And what you have in mind, to do it a bit more seamlessly, and that makes an only sense for me if both technologies, in principle, were close together in that sense for application. For instance, the application would be that I have my laptop here, and I work here with [disfmarker] wi with my device, any kind of device, and I work here in wireless LAN area. And then I go outside, and [pause] want to go still on, leaving the coverage of [disfmarker] uh of the [disfmarker] uh wireless LAN. 
And at a certain state, the device uh receives certain signals that the coverage of UMTS or GSM is available. And at that state, you automatically, or based on the certain kind of user profile, switched to the um UMTS [comment] or GSM st uh network. That's the only senseful application. From my understanding, that means only to have a certain kind of control layer for the different drivers, which you also mentioned as a USAIA architecture. and uh then to do the [pause] switching for an existing stream or application between these [disfmarker] uh, transparently to this application, between those uh these both technologies. Maybe there's more potential. I don't s see it, but maybe you have a additional idea. And I don't think that this [pause] end system device uh architecture, which will come automatically. And I believe there's a lot of activity throughout the world, because this seamlessly communication is always mentioned elsewhere where you cou where you listen to. So uh [pause] uh [speaker003:] If [disfmarker] if you can leave it out the network that will be really cool. That's what you w what I would like to do, so Yes. [speaker002:] But when quality of service comes to the uh scene, well, I think [pause] we have a [disfmarker] a bigger problem because maybe this [disfmarker] this uh uh seamless migration cannot be seamless at all, because maybe the [disfmarker] the quality of service we are getting when we are [pause] connected to a certain network, let's say high speed networking, cannot be [pause] sustained when we are switching to a different technology. So, this is another thing that is uh putting a little bit in more trouble our [disfmarker] our scheme or [disfmarker] [speaker004:] Now, I would like not to hear only three opinions. [speaker001:] Hm [speaker004:] But maybe there are also other opinions. [speaker002:] OK. Yeah. Oh, sorry. 
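The user-profile switching policy the speakers describe — continue seamlessly, continue but warn the user about the new cost, or stop the service when drifting from a zero-cost wireless LAN into a paid UMTS cell — can be sketched roughly as follows. All class names, costs, and thresholds here are invented for illustration, not taken from the USAIA architecture or any standard:

```python
from dataclasses import dataclass

@dataclass
class Subnet:
    name: str
    cost_per_minute: float   # 0.0 for a free wireless LAN
    kbps: int

@dataclass
class UserPolicy:
    max_cost_per_minute: float   # what the user is willing to pay
    notify_on_cost: bool = True

def decide_handoff(current: Subnet, candidate: Subnet, policy: UserPolicy) -> str:
    """When leaving the coverage of `current`, decide what to do with the
    ongoing session: switch silently, switch but make the user aware of
    the new cost, or stop the service entirely."""
    if candidate.cost_per_minute > policy.max_cost_per_minute:
        return "stop"                 # "I'm just watching the sports news"
    if policy.notify_on_cost and candidate.cost_per_minute > current.cost_per_minute:
        return "switch-and-notify"    # continue, but warn about the cost
    return "switch"                   # seamless, no interaction needed

wlan = Subnet("WaveLAN", 0.0, 2000)
umts = Subnet("UMTS", 0.10, 384)
print(decide_handoff(wlan, umts, UserPolicy(max_cost_per_minute=0.05)))  # stop
print(decide_handoff(wlan, umts, UserPolicy(max_cost_per_minute=0.50)))  # switch-and-notify
```

A "mini database" of such per-user policies, as mentioned above, would just be a table of `UserPolicy` records consulted at each coverage transition.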
[speaker004:] And maybe everybody should uh give a short [disfmarker] [speaker001:] I think you [disfmarker] [speaker004:] Let us understand, built for everybody and their com whether their comments. [speaker001:] Yeah. I think we should [disfmarker] In this thing of seamless handoff [disfmarker] or handovers [disfmarker] should think more maybe in [pause] different layers? So one layer is [pause] having the seamless handover. It's clear if a technology doesn't provide that [pause] quality of service, you can't do magic and have it. There are certain constraints on that. It's true. That's still in mind. But for [disfmarker] with the cases there [disfmarker] There, it is possible. So that should be made possible then? [speaker002:] Mm hmm. Yeah. Yeah. [speaker001:] So [speaker002:] It can be done. It is just [disfmarker] Mmm uh Trying to [disfmarker] to [disfmarker] to [disfmarker] to poi to pinpoint The [disfmarker] the possible problems. [speaker001:] Yeah. There are still enough problems i [pause] in doing it. Yeah. Yep. But it might be the wrong thing to promise, uh, ten megabits to everywhere. There you are. But saying "OK if it's [pause] a [disfmarker] if the technology is OK, you can do it." [speaker002:] Yeah. In fact, if you take the slower technology [pause] of the ones you are planning to [disfmarker] to support, you can offer this warran this [disfmarker] uh this throughput as uh the minimum warranty you [disfmarker] you can get. [speaker001:] Yep. Yep. Yeah. [speaker002:] So, if everybody is [disfmarker] is asking for less than this minimum amount, no problem at all. But in this case, maybe not too much effort should be put [pause] on providing this quality of service because you just have a [disfmarker] usually a huge [pause] bandwidth compared on a [disfmarker] a [disfmarker] w with what the user is [disfmarker] is using. [speaker001:] Yeah. Yep. Mm hmm. 
[speaker002:] I mean, if you have a ten megabits bandwidth and the user is asking two kilobits per second, of course, you can uh build uh certain reserve mechanisms to warranty the user these two kilobits per second bandwidth. But probably in [disfmarker] it isn't [disfmarker] it is not worth because you have so many uh available bandwidths that maybe uh it is not a problem. [speaker004:] Oh, please come in. [speaker002:] Hi. [speaker001:] Hi. [speaker007:] Hmm? Oh! OK. [speaker001:] is finished. [speaker004:] Uh. OK [speaker002:] Oh, OK. [speaker004:] Good luck. [speaker003:] The [disfmarker] the thing is that we are looking for a research [pause] topic on [disfmarker] on those uh four areas. [speaker004:] Yeah. Yeah. And the building blocks, yeah. Multicast, not a topic. [speaker003:] So [disfmarker] So we [disfmarker] [speaker004:] Routing? [speaker003:] So [disfmarker] so we went [pause] through [pause] uh [pause] uh to [disfmarker] through multicast. So this architecture with m well, at least no wireless environment or a mobile IP environment, comparable to uh UMTS environments. We really did not find any [disfmarker] well, difficulties there. On the blackboard, that is. Of course, in real life there will b probably be some problems. [speaker002:] Have you [disfmarker] have you read about this [disfmarker] uh Barwan project held at Berkeley University? [speaker004:] Well, what you [disfmarker] the pointer [disfmarker] the pointer you sent yesterday. [speaker003:] Iceberg? [speaker002:] Yeah. [speaker004:] Yeah, I c I didn't find the time, yeah, to do so, [speaker002:] OK. [speaker004:] but, uh [speaker003:] You mean the Iceberg project? [speaker002:] Barwan. [speaker003:] Barwan [pause] of [pause] uh ES [speaker002:] Barwan. [speaker001:] How's it start? [speaker003:] I know there's one that's of uh Randy Katz. [speaker002:] Uh da it was another related project called Daedalus. [speaker003:] Yes.
[speaker002:] And uh well it seems they what they were doing was some kind of [pause] uh things si somehow similar to this. [speaker003:] They worked together wi little bit with Nokia? [speaker002:] u Uh [pause] Don't know. [speaker003:] Palo Alto? Mountain View? [speaker002:] I [disfmarker] I [disfmarker] I [disfmarker] I just uh take a [disfmarker] a quick uh look at the [disfmarker] at the thing because I [disfmarker] I was [disfmarker] uh read about this uh a long time ago. And it [disfmarker] the [disfmarker] the latest report I [disfmarker] I found was in nineteen ninety eight. And in this report, well, they [disfmarker] they pres [pause] they have uh some slides, some uh papers, some [disfmarker] well, a lot of things. [speaker003:] Hmm. [speaker002:] And [pause] well the main thing and the quick uh thing I [disfmarker] I was uh looking at was a [disfmarker] a video available in MP thr [pause] in MPEG format. And uh well [pause] what this video was presenting was a kind of test of [disfmarker] of one guy with a [pause] portable computer, moving across different networks. So starting at the CS department at Berkeley and then going out the street switching to the [disfmarker] a campus wide network, uh um I think it's a Metricom uh network, an which is uh a test [disfmarker] [speaker004:] A Ricochet here, right? [speaker002:] Ricochet, yeah, [speaker004:] Yeah. Yeah. [speaker002:] that's right. [speaker004:] Yeah. [speaker002:] And then, finally switching to [disfmarker] I think it was CDPD or something like this. OK. I [disfmarker] as lo uh as the [disfmarker] as the user is moving out of the [disfmarker] of the coverage area of this second network. And, well, what they have built is a a [disfmarker] kind of uh proxy structure. So they are [disfmarker] they are putting most of the work of this adaptational layer on [disfmarker] [pause] on the side of the proxies. So the client or the server is not changed. It is the proxy, the one that does the work.
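The proxy placement described here for the Berkeley project — client and server untouched, the whole adaptation layer living in the proxy — can be caricatured in a few lines. The scaling rule below is a crude stand-in for real transcoding, and the function name and threshold are invented, not from the project:

```python
def proxy_adapt(content: bytes, link_kbps: float, full_quality_kbps: float = 500.0) -> bytes:
    """All adaptation happens at the proxy, between an unmodified client
    and an unmodified server: on a slow link, hand back a reduced
    version of the object; on a fast one, pass it through untouched."""
    if link_kbps >= full_quality_kbps:
        return content
    # placeholder for transcoding: keep a proportional prefix
    keep = max(1, int(len(content) * link_kbps / full_quality_kbps))
    return content[:keep]

page = b"x" * 1000
print(len(proxy_adapt(page, 1000.0)))   # 1000: fast link, unchanged
print(len(proxy_adapt(page, 50.0)))     # 100: 10% of full quality kept
```

The design point is the placement, not the scaling: when the user hands off between WaveLAN, Ricochet, and CDPD, only the proxy's view of the link changes.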
[speaker003:] Yeah. [speaker002:] OK, so you can even, for example, use [disfmarker] It's the same? [speaker004:] Ah we just discussed a lot about proxies. [speaker002:] Oh, OK. [speaker004:] That's a fun [speaker003:] I like them so [speaker002:] OK, I [disfmarker] It is just that [disfmarker] that [disfmarker] well, maybe we can [disfmarker] we can take a look at [disfmarker] oh maybe a more in depth look at this project just to see if we are doing the same, or if we are doing something similar. It would be important for us [disfmarker] to [disfmarker] to highlight the differences. [speaker003:] Yeah. [speaker002:] Because if not, well, they can say, "Oh, you are doing the same thing." "This was done [pause] now." [speaker003:] Or maybe talk to them [pause] about s operations they thought of but didn't touch yet. [speaker002:] Yeah, well we are close. [speaker004:] Mm hmm. Uh maybe a few comments. First of all, Um. Do you mind Mi Miguel, to write the name of the project on the whiteboard? [speaker002:] Yeah. No problem. [speaker004:] So that everybody [disfmarker] Uh [pause] I [disfmarker] I will send also the pointer which Miguel sent to me, to everybody so that everybody can take a look to this uh, project. [speaker003:] I was [disfmarker] [speaker004:] And uh I don't know what avail information is available because I didn't find the time currently to check it out. [speaker005:] It's a project of the UC. [speaker004:] But first of [disfmarker] [speaker002:] Yeah, [speaker005:] Oh, yeah. [speaker002:] UC Ber uh Berkeley, [speaker004:] Yeah. [speaker003:] Wh when did you go through this? [speaker002:] yeah. I'll [disfmarker] I [disfmarker] you [disfmarker] uh [pause] I seen uh uh reports from nineteen ninety uh ninety six, ninety seven and ninety eight. [speaker003:] Cuz yesterday I was uh Well [pause] coincidentally, I b bumped into this project here. [speaker002:] Maybe the project [disfmarker] n uh [speaker003:] It's funny. [speaker004:] Uh, sorry? 
Uh do [disfmarker] [speaker003:] Yeah, yesterday I was just surfing a little bit. [speaker004:] Mm hmm. Ah! [speaker003:] And then from Nokia I came to this [disfmarker] that [pause] project, [speaker004:] Ah! OK, I see. [speaker003:] so. [speaker004:] Yeah? [pause] [@ @] [speaker003:] Nokia research. [speaker002:] I think that [disfmarker] that the leader was uh Professor Katz. [speaker003:] And then [disfmarker] [speaker004:] OK. [speaker003:] That's right. And he's also board in Nokia research, [speaker002:] OK. [speaker003:] so [speaker004:] Mm hmm. Then the second thing is that uh Daimler Chrysler research here in Palo Alto as well as in Germany, Ulm, is doing, in principle, the same with a car. [speaker002:] Mm hmm. [speaker004:] They have a specific antenna for uh GSM [pause] Oh, GPS and [disfmarker] and fu for future extended G uh GSM networks and wireless LAN and [pause] and other thi and other network technology. In the roof of the car. They have a s lot of servers in the [pause] trunk of the car. Then they collect data how it is [disfmarker] with handoff and all these kind of things and they are driving around the different areas. And the same car exists here in Palo Alto uh based on wireless LAN, GSM and 2 x Ricochet networks. [speaker002:] Mm hmm. [speaker004:] They collect the dat collect [disfmarker] collecting a lot of data. concerning the seamless handoffs and whether it works very well with speed and um uh, Thomas is going to build mobile IP stuff in their servers. uh because they want to use current technology with a c a combination of mobile IP. So I think there is a lot of activities in the area. Uh. And my third uh comment is uh if we are going [pause] something in that direction that is quite different from this thing [pause] here. But, anyway, maybe as a core technology with a real product in mind. I see some difficulty because nobody currently has here certain kind uh of UMTS know how. [speaker002:] Mm hmm. 
[speaker004:] And you need it definitely because then [disfmarker] it j must be your uh Bible I would like to say that you are very familiar with UMTS if you wanna go in [disfmarker] in that way. And As far as I know the UMTS standardization is not finished yet. Ninety percent, maybe, or eighty percent. I kno I'm not sure. But [pause] uh [speaker002:] Yeah b le let me [disfmarker] let me ask you [vocalsound] a question about eh UMTS or [disfmarker] or third or possible fourth generation communic wireless communication systems. [speaker004:] that [disfmarker] Yeah. [speaker002:] Uh, from the point of view of the user, uh [pause] is there uh something more that [disfmarker] nnn, that [disfmarker] than more speed? Is there any other advantage, from the point of view of the user or of [disfmarker] of the terminals? [speaker004:] Users of [disfmarker] of UMTS? [speaker002:] Yeah. [speaker004:] You can request certain kind of QoS. You can even request it in GPRS but it's not end to end quality of service. [speaker002:] Hmm hmm hmm. [speaker004:] It is only um [pause] what the gateways get. [speaker002:] OK. Yeah. [speaker004:] And within the core network [pause] there are certain kind of tunnels set up in the core network itself of the access provider, not the internet backbone core. And you use certain kind of tunnel mechanism [pause] to combine, in principle, the base station cluster control. But this is [disfmarker] uh is controlled by a certain kind of [pause] node. [speaker002:] Hmm hmm hmm. [speaker004:] And you have the gateway to the [pause] PDN, or the Public Data Network. And between both there's a certain kind of tunnel set up. [speaker002:] But in fact [disfmarker] But in [disfmarker] fact now, at the GSM level, what you have is that every voice channel you use provides a [disfmarker] a p a predefined fixed uh data speed.
So, we can say that every voice channel has an i an incr uh [disfmarker] implied um [pause] quality of service, which is nnn ninety six hundred bits per second. Full duplex. [speaker004:] You mean now for GSM, [@ @], [speaker002:] Doesn't it? [speaker004:] or [disfmarker]? [speaker002:] Yeah. [speaker004:] Yeah. Yeah. [speaker002:] So. [speaker004:] Yeah, what is [disfmarker] what means quality of service. A certain degree. Right? [speaker002:] Yeah. [speaker004:] Uh. Sometimes we have some [disfmarker] [speaker002:] No, but [disfmarker] um but not only because of [disfmarker] of data speed, but also about delay, you know, because GSM networks were developed to y uh [pause] with a voice service in mind. Ah, delay is also [pause] considered in these networks. So, uh, in fact for every voice channel you have uh a predefined maximum uh delay and jitter, and a [disfmarker] a [disfmarker] a specified um data speed. So you can scale up [disfmarker] scale up this uh this um thing by using several channels. So, in fact, you have, using just GSM technology, you have more or less the same building blocks as you can get uh with the UMTS. I mean that one thing is that the technology is a little bit different, that uh the base station thing is different and maybe uh the [disfmarker] the internal routing uh inside the network is different but from the point of view of the mobile things uh I'm not sure if [disfmarker] if [disfmarker] [speaker004:] It's packet data oriented. It's not really circuit switched. In UMTS you have always GPRS as a [disfmarker] a involvement in the core network. You have, in principle, the path which still is available for [disfmarker] for the GSM network. Also they use, in principle, the same uh the same mechanism. [speaker002:] Mm hmm. [speaker004:] But you also have a [disfmarker] dat uh packet oriented communication that's on the GPRS stuff. And that's also related to UMTS.
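The "scale up by using several channels" point is simple arithmetic: each GSM channel carries an implied 9600 bit/s full duplex, so bundling channels multiplies the guaranteed rate. A trivial sketch (the bundling style resembles HSCSD-type multi-slot use, which is an assumption here, not something the speakers name):

```python
def aggregate_bps(channels: int, per_channel_bps: int = 9600) -> int:
    """Throughput from bundling several GSM channels, each with its
    implied QoS of 9600 bit/s full duplex, as in the discussion."""
    return channels * per_channel_bps

print(aggregate_bps(1))   # 9600
print(aggregate_bps(4))   # 38400
```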
[speaker002:] Yeah, but [disfmarker] The [disfmarker] You know, it is also being d uh developed, this G uh P DPRS thing, GPRS thing. [speaker004:] So, that is [disfmarker] [speaker002:] which is uh pack uh packet data over GSM. So, I mean. Uh, I don't see that a [disfmarker] a lot of difference. We can expect a lot of differences f uh from the uh mobile terminal point of view. Uh, when uh UMTS be deployed. What I'm trying to say is that [disfmarker] is that [disfmarker] well, uh maybe uh it is not a big problem that any of us uh be uh um were um a UMTS expert because uh I [disfmarker] I don't see [disfmarker] it [disfmarker] I think that the UMTS thing is more a technology issue, than a research issue. Maybe I'm wrong. Just es um [speaker004:] No, I think there's a lot [disfmarker] [speaker002:] explaining my [disfmarker] m [speaker004:] Yeah. [disfmarker] I think there's a lot of research. But the question is "is this uh a research wh in which direction we are going?" [speaker002:] I mean [disfmarker] to be done here, [speaker004:] And because [disfmarker] For inf [speaker002:] not [disfmarker] not at the [disfmarker] of course, you know, [vocalsound] I'm [disfmarker] I [disfmarker] I agree with you that there are a lot o of things to be developed but [disfmarker] But the point is that [disfmarker] uh this is [pause] probably not here. Because uh well, you know the [disfmarker] the [disfmarker] the orientation of our group which is uh "Networks Services and Applications", not uh wireless technology or the underlying wireless technology. [speaker005:] You think more about hardware technology. [speaker002:] Yeah. [speaker005:] So, there is a lot of hardware to develop, but [pause] not [disfmarker] [speaker002:] You know, radio frequency technology, wide band w uh CDMA. These kind of things all have a lot of things to be [disfmarker] to be [disfmarker] uh researched. But uh I don't think w this is not something we are doing [pause] here. 
[speaker004:] Right, but bear in mind the picture is a [disfmarker] is a little bit different. The picture I have in mind concerning UMTS is, first of all, if you use UMTS only for voice, Like you did it or are doing now for GSM, it's not worth to have this kind of network. And, [pause] currently in Europe, where it will be deployed first, I believe, it will be very hard, in principle, to get some money back if you have only for [disfmarker] using it for voice. [speaker002:] Voice, yeah. [speaker004:] Because they paid a lot of money. So. What is the point? The point is that you use the UMTS to connect to the internet and have certain kind of services which are still not available up to now. But the problem is if you have the internet content, and you have maybe your small uh um [pause] device of maybe only evolution based for [disfmarker] for these uh new technologies, for [disfmarker] for [disfmarker] certain mobile phones, then you have certain kind of problems, [speaker002:] Hmm hmm hmm. [speaker004:] you know, because you cannot use these things. And WAP is not an answer for [disfmarker] for these kind of things, you know. So. The [disfmarker] I believe there is a lot of research work and also if you see the end to end scope of the internet, from the mobile node up to the certain kind of server, "correspondent node", or whatever you call this [pause] destination, then you have certain kind of models, for instance, for the quality of service and all these kind of things. And then is UMTS only one link where you have certain kind of subnet layer technology. And you have to match these things anyway. It's like a [pause] normal ethernet. [speaker002:] Hmm hmm hmm. [speaker004:] Whatever you see, it is only an additional subnet layer technology. [speaker002:] Yeah. [speaker004:] And you have all the things mapped to the [disfmarker] uh to [disfmarker] to the [disfmarker] to the [disfmarker] the internet stuff, in principle, to this technology. 
And furthermore, you see that the trend is to put more IP technology, really, really IP technology, and not certain kind of modifications and adaptations, to their access network itself. And th there's a lot of potential, as I believe, for [disfmarker] for uh certain research work. But the question is, first of all, "Do we have the competence to do something like that?" First of all. Second, we are not hardware builders, in that sense to set up certain kind of end systems. We can design them, yeah, from the protocol layering and [disfmarker] and maybe have some simulations, or whatever. But isn't the final outcome really such a device? Prototype, Linux based. And we go around here and we have maybe a GPRS base station sponsored by Ericsson, since they are living not so far away and [disfmarker] and maybe WaveLAN connection, here now a testbed and then [disfmarker] then figure out that it works? So, that's my point. [speaker003:] Well, just to make the list a little bit longer. [speaker004:] Yep. [speaker003:] May Um, you could also think about uh session management. I do along with uh the network applications and services, uh as you might know. It's a [disfmarker] a if you can do it in the end systems or at sp specific servers, then, I'm in favor of that. But se session management, I mean uh the authentication, if you go from one network to the other, it's not solved. uh, Nobody really wants a [disfmarker] a guest on your network. Unless you get some money through some way [disfmarker] uh through some way. Then, there is also the session management in the sense that a lot of these things [disfmarker] at the Barwan project, as you said, go through proxies. There might be kind of handoffs at [disfmarker] uh at proxies, like the SIP uh [disfmarker] S SIP proxy that we [pause] also discussed before a little bit. That [disfmarker] that's als I think also in the line of uh session management. Then there is this uh perfect thing uh the Berkeley sockets.
It's almost perfect. They s kind of skipped uh layer five, uh session management or session layer, I think. That's um [pause] normally with [disfmarker] if you do a socket programming, it's directly the prot application protocol on layer four, uh TCP socket, or layer five for UDP socket. There might also be some research [disfmarker] Well, could you introduce such a session management, or [disfmarker] Norm what they do right now is they make it part of the application level protocol. That's what they do most often. Like with session management? for mobile the [disfmarker] uh applications such as ist instant messaging or uh talks, and so on, uh like SIP. It's just doing a [disfmarker] a reconnect. If they think that the network is different or different capabilities come into play, they simply do a reconnect so it's part of the protocol. That's all what I think about session management, like part of [disfmarker] in proxies or is it en embedded in application level protocols? [speaker004:] But it's really higher level. Right? It's no more related to a network. [speaker003:] I it's [disfmarker] [speaker004:] We are leaving the network area. That's dangerous. [speaker003:] So how do you want to focus on network services. [speaker004:] Cuz if you go to the app If you [disfmarker] For me that's starting with middleware aspects, you know? [speaker003:] Well the [speaker004:] And [disfmarker] [speaker003:] So "network applications", if you look at the name of [pause] the group, one thing is that um the they used to call a [disfmarker] a n a "network" that was uh [disfmarker] if it was like distributed between two systems connected by a network. If you look at "network" from the OSI layer, then it's uh, level two or something or [pause] maybe level three, but everything below [pause] three. So there are, I think, two [disfmarker] two different interpretations of uh "network".
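The "session management as part of the application-level protocol" pattern described here — the client simply tears down and re-establishes its connection when it notices the network has changed — might look like this minimal sketch. The function name, retry count, and timeout are arbitrary choices for illustration:

```python
import socket

def reconnect(host: str, port: int, attempts: int = 3, timeout: float = 2.0):
    """Session management by reconnect: when the underlying network
    changes (or the old connection dies), re-establish the connection
    as part of the application protocol itself. Returns a fresh
    connected socket, or None if every attempt fails."""
    for _ in range(attempts):
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue   # interface may still be coming up; keep trying
    return None
```

A messaging client would call something like this whenever it detects a changed address or capability set, then replay whatever session state its protocol requires on the new connection.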
[speaker004:] No, I believe "network" is, if you see it really from [disfmarker] from the IP protocol suite, is then layer one, in principle, up t including layer four, despite the fact that layer four is only residing, normally, in the end system and in the gateways. [speaker003:] But are there "network applications", then? [speaker004:] Yeah. [speaker003:] So, the [disfmarker] the name is "Network Services and Applications", s so what could be an en example of a network application if you look at the OSI stack? [speaker004:] Yeah, we have, I think, some. Yeah. Do you see this word "application" related to a network? Or do you see this word independently? That was never discussed in detail. "Network [pause] Services [pause] and [pause] Applications". So, everything in the world. [speaker003:] OK. [speaker004:] Or is the application related to the network? For my understanding and as far as we discussed, so [disfmarker] such a long time uh, these applications should [disfmarker] For instance, e commerce. [speaker003:] OK, OK, well, they will [disfmarker] I didn't want to get a discussion now. [speaker004:] And it's really, really yeah You know? That [disfmarker] Yeah. No no, uh that is a typical application, I think, which is, in principle, also a little bit related to [disfmarker] uh to the network. But not any application. And we also have the aspects for security. Miguel, you mentioned here authentication and [disfmarker] and triple A, but we do not have any expert anymore. Because Hannes was an expert for that. So the [pause] problem for me, it still exists that we have, in principle, here [pause] basic building blocks in the proposal from the networking side. We could have a certain kind of application and there's a lot of activities in the outside world and I believe there's a lot of contacts. Or there could be a lot of contacts uh with [disfmarker] with these people. Especially at uh Georgia Tech. I know on [disfmarker] next week on Tuesday. Tuesday, right?
We will go to UCB. There are three guys. Then Morgan mentioned one. I do not know exactly um uh what he is doing, but, he's also interesting in some kind of collaboration. He's also a professor in the UCB. And uh many universities maybe [pause] Claudia mentioned Duisburg. And uh [disfmarker] in principle, most of the university have some activities in that area. We will not take care about these one, but we can use them as um [pause] for field ex uh field trial access and [disfmarker] and [disfmarker] and maybe some support and [disfmarker] and whatever kind of thing. And we focus really on the networking stuff and going in more detail in this area. And the alternative is maybe what uh currently we have only uh this uh proposal from Wilbert to go more in the smaller scope. Figure out some potential from maybe this uh WaveLAN UMTS stuff and [disfmarker] and starting from that. Or, are there other suggestions? [speaker005:] I think we uh have already so much work put into this proposal, why to switch now? So I don't see the reason for that. Maybe the application we suggested is not the best one. So we have to think about an application which [disfmarker] which is um um, for mobile users, more relevant. I don't know. Maybe video games. If you are traveling uh, it's boring to travel and then you decide to play a g video game with somebody else who's traveling. [speaker004:] Yeah, you know, it's still mentioned in the proposal. [speaker005:] So. [speaker004:] And th um I [disfmarker] I'm sure, OK, that it is the wrong place to [pause] get the foundation why we are use this kind of application, but uh [disfmarker] And [disfmarker] In [disfmarker] in the end of the text [disfmarker] in the end of the pr text of the proposal. But it is mentioned that it is only an example. And that we use this application, in principle, while it is then very very easy to get a certain kind of access to these kind of things. 
When you have a video game server and [disfmarker] and you want to have mobile access you must at least have one in your consortium, or as a partner whatever, who provides this service. And then we [disfmarker] we are playing videos, in principle, if you have a field trial. And that is, I think, much more harder than to have a certain kind of [disfmarker] certain kind of access to the University, and uh to this server, which is run on the campus of the University. [speaker005:] OK, that's right. [speaker004:] And that is the point. You can have this f to figure out all the things and the [disfmarker] the um and the [disfmarker] the um um, now, how to say, the requirements of this application using this classroom scenario. But nevertheless, it's not the best for and [disfmarker] and [disfmarker] and [disfmarker] and uh s networking stuff. You are right. But it [disfmarker] Does anybody see certain kind of potential here to go on, and to start the work maybe in this little bit smaller oh uh scope based on these building blocks? Because Multicast you still mentio uh you mentioned that there is no potential. I think it's described here that the potential, if you use it for the mapping to the next generation networks in the wireless access area they are still some potential to map Multicast because they [pause] do not use it. And to figure out whether they're useful inse instead of using certain kind of tunneling mechanism and to set up these tunnels and have the mapping between a PSTN uh telephone number or whatever kind of number they are using to IP addresses and then there's the tunnel IDs, and [disfmarker] and all this kind of things, I think, is very beneficial as well for [disfmarker] for telecommunication providers. as well as for ISP's. I [disfmarker] I think the potential is still there. You [disfmarker] you neglected that, but I don't think that you are right in that sense.
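The multicast-mapping idea floated above — map a multicast group onto the access network's tunnel IDs, rather than keeping one unicast tunnel per receiver — could be sketched as a simple registry. Class and method names and the dotted-quad group addresses are illustrative only, not from any UMTS or GPRS specification:

```python
from collections import defaultdict

class GroupTunnelMap:
    """Registry from multicast group address to the set of access-network
    tunnel IDs that currently have a member behind them. A packet for a
    group is then replicated once per tunnel instead of once per
    receiver."""
    def __init__(self):
        self._tunnels = defaultdict(set)

    def join(self, group: str, tunnel_id: int) -> None:
        self._tunnels[group].add(tunnel_id)

    def leave(self, group: str, tunnel_id: int) -> None:
        self._tunnels[group].discard(tunnel_id)

    def fanout(self, group: str) -> list:
        """Tunnels a packet addressed to `group` must be copied onto."""
        return sorted(self._tunnels[group])

m = GroupTunnelMap()
m.join("239.0.0.1", 17)
m.join("239.0.0.1", 23)
m.join("239.0.0.1", 17)       # duplicate join from the same tunnel: no-op
print(m.fanout("239.0.0.1"))  # [17, 23]
```

The number-to-address mapping mentioned in the same breath (PSTN number to IP address to tunnel ID) would be a second lookup table feeding `tunnel_id` into this one.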
[speaker003:] Well, think it works, but uh yes, the mapping of course between multicasting and the UMTS or [disfmarker] or whatever [speaker004:] In next generation? Yeah. [speaker003:] That's [disfmarker] or CDMA or whatever [disfmarker] that's uh, yeah, probably a thing that has to be find out. [speaker004:] Yeah. [speaker003:] I don't know what statuses. Maybe they already did it par as part of UMTS, I'm not sure. I don't have a clue. I [disfmarker] I only know how it works on copper and like ethernet and on PS PPP over something and how it works like that, but on the wireless I'm not sure. [speaker004:] Now what do you see about the um, um, KPN interests? Because, I believe s if the argument changes as follows that you are here and that you are very familiar with uh the Multicast stuff, and KPN is very interesting in UMTS stuff, and this is one potential target network environment we are focusing on, but only for the uh IP layer technology, then, why do [disfmarker] the mapping of multica also potential mapping of Multicast, I think, must be of major interest for them. Right? [speaker002:] One question. [speaker003:] Yes, but if [disfmarker] if it's available that means so [disfmarker] so if uh if it's already part of the UMTS standardization then uh it's immediately finished, like, OK, you can tell, KPN well go to Ericsson or Nokia and buy it? And if it's not in there yet well, it could be a suggestion and then the hardware people will make it part u like the ETSI or whatever [disfmarker] where U [disfmarker] wherever UMTS is standardized they will d well, go for it. [speaker004:] Mm hmm. [speaker003:] Of course, yeah, this [disfmarker] this group could do that, too. Could be, but [speaker002:] I have a [disfmarker] an important doubt about the IP availability over UMTS. I mean [disfmarker] you know that the carriers and some manufacturers have done an important effort in developing this WAP thing and uh I'm not sure what they plan to do with next generation.
I mean, are they [pause] maintaining [disfmarker] are they holding the same web technology? Or, are ju just they [disfmarker] discarding this technology and forgetting about the thing? I [disfmarker] I don't know what will happen. uh, I [disfmarker] I don't have a clue. Probably, if after searching a little bit on the [disfmarker] on the net I can get the answer, but initially this is something to [disfmarker] to [disfmarker] to have in mind. I mean. Because maybe this UMTS thing will appear on the market without IP as a native protocol. And if this is the case well, thus seamless integration could be we strongly affected. At least, uh, if you [disfmarker] we want to use U M I'm [disfmarker] I'm [disfmarker] just uh throwing the question. uh I mean, looking for an answer because I don't know if uh some of you have a clue about this. I don't know what [disfmarker] what carriers are planning to do about this. [speaker003:] I'm not really sure, but I thought I [disfmarker] I had the impression that we can throw away our Nokia stuff when [disfmarker] our web gateways, when we g even go to GPRS, so let alone UMTS. [speaker004:] Now, WAP is of course not only related now to available bandwidth. [speaker003:] That's the impression I get. [speaker004:] That's not the only thing. It's only the adaptation towards the capabilities of the end device. Right? [speaker003:] Yeah, definitely. [speaker004:] So. uh [disfmarker] I never was a friend [disfmarker] [speaker003:] But [pause] but still a le d it's a [disfmarker] it's a fundamental [speaker004:] Huh. Yeah. [speaker003:] uh it's a [disfmarker] part of a [disfmarker] of design like the [disfmarker] the [disfmarker] the terminal has a WAP stack. And there's a box that has also a [disfmarker] a WAP stack. [speaker004:] So [pause] yeah. [speaker003:] And they do sometime [disfmarker] most cases, they do share an IP address. But doesn't uh have to deal with IP at all. It's l just a thirty bit number [speaker002:] Mm hmm.
[speaker003:] that a actually is an assigned number. But it's all only used for identification. That's the only thing. It's stored in a WAP cookie, much like HTTP in a cookie and from the WAP gateway then it will be HTTP [disfmarker] real HTTP [disfmarker] or something. So it's [disfmarker] it's really a different uh type of connection. It looks a little bit like uh the [disfmarker] the well, WML looks a little bit like HTML, but that's all what's uh [disfmarker] It's not IP at all. [speaker004:] Right. [speaker003:] Because there is a gateway, you can access with almost the same microbrowser, the same i uh web servers. If they provide the right content [disfmarker] uh, spit out the right uh WML language. So it's [disfmarker] it's really a different network [disfmarker] technology [disfmarker] I think. [speaker004:] Yeah. It's transparent, in principle, what you can carry, but [disfmarker] Anyway, you have the gateway. It must be uh [pause] The ca uh c uh t conversion of the media [speaker003:] Yeah. But [@ @] [speaker004:] of the contents, in principle. But, I don't see that uh [disfmarker] It's too late, I believe. If WAP was available in the beginning of GSM, then I think it was there would be a good chance that it is well deployed. [speaker002:] Mm hmm. [speaker004:] But now, the first generation of web browsers are in the ha and [disfmarker] and uh I would say "handies", [speaker003:] I i [speaker004:] so that's wrong [vocalsound] [disfmarker] in the mobile phones, but the [disfmarker] the next generation of networks are still available. and the ba uh the bandwidth constraints are no more applicable? [speaker003:] Well [disfmarker] Ri right now there is still a lot of WAP, actually. [speaker004:] And [disfmarker] [speaker003:] So that the [disfmarker] [speaker002:] Mm hmm. [speaker004:] Yeah, but [disfmarker] The hype but pffft uh [speaker003:] Well, users [disfmarker] usage. [speaker004:] I never saw a user using WAP. Please raise your hand, who is using WAP.
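The gateway arrangement speaker003 describes above — the same microbrowser reaching ordinary web servers as long as the content comes back in the right markup — amounts to a content-negotiation step at the gateway. A minimal sketch, purely illustrative (the function name, titles, and markup templates are assumptions, not anything from the discussion):

```python
def negotiate(accept_header, title, body):
    """Render the same content as WML for a WAP microbrowser
    or as HTML for an ordinary IP/HTTP client."""
    if "text/vnd.wap.wml" in accept_header:
        # WML deck/card structure, as a WAP gateway would pass along
        return ('<wml><card id="main" title="{t}"><p>{b}</p></card></wml>'
                .format(t=title, b=body))
    # plain HTML for regular web browsers
    return ('<html><head><title>{t}</title></head>'
            '<body><p>{b}</p></body></html>'.format(t=title, b=body))
```

The point of the sketch is only that the content, not the transport, is what the gateway adapts — which is why the speakers call WAP "a different network technology" rather than IP.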
[speaker003:] My colleagues are. [speaker004:] Yeah? [speaker003:] Yeah. Yeah, they are using WAP. [speaker004:] Yeah? OK, one of six. So. [speaker005:] I don't have a mobile phone. [speaker003:] Yeah. [speaker004:] Oh. [speaker003:] i [speaker001:] Who is using mobile phones? [speaker003:] Bu [pause] well, you don't have a mobile phone in [disfmarker] in the USA here. [speaker002:] No. No. [speaker003:] So that's maybe the reason. [speaker002:] Not me. [speaker003:] And I'm not [disfmarker] I doubt that it's not available in Germany. It's [disfmarker] would be unbelievable to me. [speaker004:] Now, do you [disfmarker] yeah, do you see the chance for WAP? I don't see it. [speaker003:] It i it is. In [disfmarker] in [disfmarker] in Sweden it is also there. And in the Netherlands, it is also available. [speaker002:] Mmm. [speaker003:] every provider provides [disfmarker] [speaker002:] In Spain it's available but uh you know, uh my main complaint has been all the time that it is difficult to me to believe that I can do interesting work with a 4 lines display. Ah [pause] Maybe, check the lottery result [vocalsound] or something like this, [speaker003:] Oh. Yeah, but [disfmarker] Oh! Bu I'm the first one to admit that's a h that it is a hype, [speaker002:] but [disfmarker] [speaker003:] but [disfmarker] and [disfmarker] but I'm [disfmarker] I just wanted to say it is being used. But if it makes sense to me and to [disfmarker] especially to have WAP in [disfmarker] in between, no. I'm not a supporter of it. [speaker002:] Huh huh huh. [speaker003:] So they have this optimized protocol to [disfmarker] to transmit a few ASCII bytes that can [pause] rescr refresh the screen with a different animation [pause] every, well, second, a few times. [speaker004:] Yeah. [speaker003:] So, that's [disfmarker] that's true. I immediately agree. But the thing was, y you just asked well, "will [disfmarker] will it be the same, will there be WAP for UMTS" and so on, GPRS.
[speaker002:] Well [speaker003:] And [disfmarker] well [disfmarker] I don't have the answer, [speaker002:] I [disfmarker] I don't know what will [disfmarker] [speaker003:] but I thought it was not. I'm just [disfmarker] It's weird [disfmarker] it's weird to stop doing it with GPRS. [speaker001:] Yeah. [speaker002:] Hmm hmm hmm. [speaker001:] But usually WAP shouldn't [disfmarker] is so general that it can be used with UMTS. So if you [pause] still want to use WAP with UMTS you might be able to do it. But [speaker003:] Yes. [speaker001:] that shouldn't be your only way of [disfmarker] [speaker004:] There's a gate You can put the gateway [disfmarker] [speaker003:] if you want to do it in a [disfmarker] in a mobile phone [disfmarker] want to implement it, as a light stack or something. [speaker004:] Yeah. Yeah. [speaker003:] Yes? [speaker004:] Yep. [speaker003:] Sure. [speaker001:] But usually [disfmarker] that's not the reason to have UMTS. [speaker003:] No, not [disfmarker] not if you have a big uh web pad or something. [speaker001:] Yep. [speaker003:] Then you m There [disfmarker] there's an IP stack in it [speaker004:] Mm hmm. [speaker003:] and [disfmarker] [speaker004:] Mm hmm. So maybe I would like to ask Claudia, [speaker003:] Hmm. [speaker004:] you read the proposal in the meantime? [speaker007:] Yeah, but the task changed really a lot, [speaker004:] Yeah. [speaker007:] so, I'd [disfmarker] I rather would like to [speaker004:] Yes, that's right. And do you have some comments on it from the next [disfmarker] this is version zero dot five right, yeah. [speaker007:] Yeah, but I'd rather like to follow the discussion before I gave any comments, cuz it's really The first point is it's much more than it was before I left, so it's like five weeks in between, so it's like that whew! that big now. [speaker004:] That's right. [speaker007:] And it really changed a lot [speaker004:] Yeah. 
[speaker007:] cuz the [disfmarker] I mean I found the classroom inside but, the thing where I left the discussion was that we are talking about uh a classroom where you know everybody could look on with any kind of device and follow lectures or do exercise or whatever. So. Now we are talking about something which is much more precise and the [disfmarker] the range of stuff is maybe more [pause] narrow than before so I'd rather like to [disfmarker] to follow it a bit, and then to get more into [pause] the ideas. [speaker003:] So [speaker004:] OK. [speaker003:] c Can I [pause] do another one, like UMTS or wireless [pause] u i generally are [disfmarker] will be m more expensive. And especially UMTS. If they want to get back the money. So is it maybe a research topic to focus on the [disfmarker] not really billing, of course. I wouldn't [disfmarker] but at least the different application of protocol based usage. So you have uh one connection, one application uses FTP whatever, bit voice or some other [disfmarker] That [disfmarker] those type of application you m may [disfmarker] might want to [pause] make more out of a megabit per second or something. Could that be something, or [disfmarker]? [speaker004:] Also we want to go more in the triple A area. Right? Because billing [pause] alone [speaker003:] Or is n No [disfmarker] Well, it's [disfmarker] [speaker004:] Or [pause] or ac um Billing is related to accounting, [speaker003:] e [speaker004:] Accounting is related to [pause] Authentification and Authorization. [speaker003:] Yeah, that's true. [speaker004:] So. And then you are still at the triple A service uh activity [speaker003:] Yeah, that's right. [speaker004:] and nobody is [disfmarker] has the skills to deal with that. And then we are leaving our scope anyway. [speaker003:] OK. [speaker001:] My impression on this proposal is that every single block [pause] has some [pause] research done already. So, multicast. This means that there are many things done. 
Oh, what was it? Maybe it was routing, mobility management But no one actually put them [pause] together. [speaker004:] Yeah. That's [disfmarker] [speaker001:] But [disfmarker] So that's the big problem. [speaker004:] That is a [disfmarker] [speaker001:] But the problem with this problem, might be that it's too [pause] big to put all these big pieces to [speaker007:] What do you mean by "put together"? [speaker001:] So that you really have a complete [disfmarker] a complete scenario. [speaker004:] As it [disfmarker] What I mean, the synergy effect, right? From [disfmarker] from [disfmarker] Yeah. [speaker001:] You have, in one scenario, multicast, [pause] quality of service, routing [pause] and mobile [pause] um networking. [speaker004:] Yeah. [speaker007:] Ah. Mm hmm. [speaker001:] Not just [pause] the islands alone. [speaker004:] Yeah. I [speaker007:] So why I wasn't, well, yeah, uh hold myself back a bit with comments, is uh cuz when I left the discussion, the discussion was what to [disfmarker] to have something like a big umbrella about everybody [pause] who was at this point, here. And so the discussion was much broader than it is maybe right now. And there I had the impression the umbrella was this kind of classroom [pause] thing. Right? Or [pause] wrong? [speaker004:] Yeah, in principle, it is related to the current state. But the networking stuff is more explained in much more detail [speaker007:] Yeah, yeah, yeah. It is. Of course. [speaker004:] and the work packages are better ident or what c could be derived for work packages is much better aligned. [speaker007:] Yeah. Yeah, yeah, yeah. That's [disfmarker] that's clear. [speaker004:] But that has not changed, yeah?
[speaker007:] But Yeah, that's [disfmarker] that's clear, but then I don't get [disfmarker] because when we were discussion [disfmarker] discussing this uh stuff weeks ago, uh I had the impression that, you know, everything of these work packages [pause] uh fit together under this one big umbrella. And now it sounds different. So that's what I don't understand. Ah but, it's also because I missed [pause] the last four weeks. So. [speaker004:] Yeah, the problem is if you [disfmarker] [speaker007:] It's a [disfmarker] a bit hard for me to follow the development what happened in the last four weeks, [speaker004:] Yeah OK maybe one sentence [disfmarker] one sentence. [speaker007:] why it changed so. [speaker004:] If you're [pause] focussing on the argument chain as follows. I have an application. This application has certain kind of requirements. These requirements are, maybe, Multicast and [disfmarker] and [disfmarker] and quality of service and mobility access and then mobility management and [disfmarker] [pause] That is it. I want to focus on it. Then you have in mind that [disfmarker] Or you must have in mind that all other applications and all other networking stuff is not affected by this. That you can use it with every application and if you introduce some new mechanism in the network, that is not related only to interactive multimedia applications like the classroom, it must be fit end to end. And if you assume something, like we discussed it, very hard with [disfmarker] with Multicast, that you assume that Multicast will be available [pause] at certain portions of the network or [pause] if you [disfmarker] if you want to have it end to end, this mechanism must be applied really for [disfmarker] all throughout of the internet. 
Otherwise it makes no sense, and we come to the conclusion, "OK, uh Multicast, we must deal with both approaches", that you have eh application layer Multicast as well as maybe some other Multicast and maybe really Multicast and then the point is how to select these mechanism if you want to have not a certain kind of access to the server elsewhere in the network you want to use any application. [speaker007:] So if I understandi it right, what has changed is before we were focussing on [disfmarker] [speaker004:] Yeah, and [disfmarker] [speaker007:] So [disfmarker] Cuz what's difficult for me is what is meant by "application"? So. Cuz I have sometimes the impression that everybody is defining "application" in different way. So, what exactly is mean by "application"? [speaker004:] The application, for instance, here, that eh the interactive classroom. But elsewhere any application in in in any kind of video audio [speaker007:] Yeah. Oh, OK. So, what has changed [disfmarker] [speaker004:] or whatever kind of [disfmarker] game server what whatever kind of interactive multimedia especially do uh downstreaming applications maybe eh video on demand then, eh whatever kind of application you can consider. [speaker007:] So what has changed is that we're not only focussing on this intelligent classroom, but on everything. [speaker004:] Not on everything. That's not true. But we s That's why we say here, that we have a certain kind of eh possibility to change in the access networks. [speaker007:] Mm hmm. [speaker004:] That means the wireless networks because that provides, in principle, the mobility. And these mechan And you see that there's a trend to provide IP functionality more and more in these kind of networks. But the problem is that you could not focus [pause] on this kind of um technology only for the access network. You have to have in mind always the end to end scope. 
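The selection problem stated above — native multicast only exists in islands, so an application-layer fallback must be chosen when no end-to-end multicast path is available — can be pictured as a small dispatch step. A sketch under stated assumptions (the group address and the returned structure are illustrative, not from the proposal):

```python
def select_multicast(native_available, peers):
    """Choose the delivery mechanism for a group session:
    native IP multicast when the path supports it end to end,
    otherwise application-layer replication (unicast fan-out)."""
    if native_available:
        # one packet to an illustrative group address; routers replicate
        return {"mechanism": "native", "destinations": ["224.0.1.1"]}
    # no end-to-end multicast island: the sender replicates per receiver
    return {"mechanism": "application-layer", "destinations": list(peers)}
```

The interesting research question the speakers raise is not the fan-out itself but how an end system discovers `native_available` for an arbitrary server it wants to reach.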
And if you see, for instance [disfmarker] We have routing algorithm but different protocols and whatever kinds of things, u especially also for Multicast, different things really for different providers, and uh and it's uh very easy to change it, maybe in the access network, for novel things. For instance, that's one provider domain and you can say well why not use active routing in that provider domain? So it means that [pause] packet are delivering information, and you do not have the separation of routing and [disfmarker] and [disfmarker] and forwarding process. It's impossible to deal with that in the whole internet, because security reasons and whatever kinds of things make it impossible. So how to combine these pos uh potential um extensions in the um access networks, and [disfmarker] without losing the end to end scope [pause] in the internet. Then you can have certain n novel features give the providers like KPN or whatever, the po the potential, in principle, to [disfmarker] to uh [pause] see the building blocks for the future networks because [pause] there's a three g third generation like UMTS, a fourth generation more IP related. There will be a fifth generation, I'm sure. And then you derive the certain kind of building blocks for this network, if you ever really full I [disfmarker] uh uh full I P, end to end. But nevertheless you can only [disfmarker] not only focus on "OK, I provide this mechanism" because u you are accessing a certain kind of server node in the internet, using any application. And then this mechanism must fit it there. But nevertheless eh internet backbone is in the meantime, based on its success, impossible to change. You see the difficulties with IP version six, eh eh you see the difficulties with Multicast. They are discussed since, you know, ten years or whatever. The specification are rather stable and [disfmarker] and available but we have only islands in that. We cannot assume that everything is end to end available. 
And so you have to mi to have to have in mind [pause] that certain kinds of mechanisms are [pause] maybe at a certain stage end to end available, maybe we have a lucky that [pause] the islands are connected to each other or [disfmarker] the same applies for MPLS, for instance, maybe they are only portions available, how to deal with that when the islands are not available and maybe nothing is available. But you have to select it because I have a mobile phone, a mobile node or whatever kind of device, and I want to contact a certain kind of [pause] content in the internet. So is it possible to d have certain kind of normal features in the access network without losing the end to end scope of [disfmarker] with these difficulties we are dealing with. I always say that the success of internet make it unflexible. It's impossible to change something [comment] [pause] very easy. s s It's easy maybe more on the access network as in the internet backbone. So that is eh overall picture that we have in mind with these things. And we can not u We [disfmarker] we are six people or seven or whatever if Hannes come, We are not able to solve the problems uh in [disfmarker] in uh w in the few years or whatever what the whole internet community has um done in [disfmarker] in [disfmarker] si since [disfmarker] uh since a decade. But we can f pick up some f some um potential and [disfmarker] and [disfmarker] and [disfmarker] to start with something. And that's my missing point. That we have here, in principle, really the description of this problem I mentioned and focus really on the reasonable size of the project maybe in the first stage fo for the NSA [pause] core itself, and maybe possible extension from, for instance, if KPN say "yes, that's great," "that's the right direction. We will support that." 
And if Siemens say "OK, great, we will support it", and [disfmarker] and what [disfmarker] it [disfmarker] mmm University of Berlin or something and maybe University of Mannheim or Duisburg or [pause] in Spain or whatever. So if we have all the similar project then you have only to take care about that's [pause] a little bit aligned, that there's some transparency concerning the results and and the activities are literally going in the [disfmarker] in the same um pace. So that's the basic idea. But really it's to [disfmarker] to figure out certain kind of smaller activities where we can start it. That's the point. And if we get some funding back, and we have certain [pause] potent uh potential partners in the boat, OK, we can go to a broader scope of this [pause] whole thing, yeah. [speaker002:] u Let me [disfmarker] let me propose a [disfmarker] a [disfmarker] a [disfmarker] a reasoning way. Uh let's assume that we are successful with this proposal, so we get the funding to do whatever what we want to do. So my question is, what we want to do is? Let's assume that we have the funding. What will be our first and or [disfmarker] or and second steps? Ah, what kind of things uh we [disfmarker] eh we will buy, what kind of things we will eh program or we will deploy? Because answering this [disfmarker] these questions [pause] we can [pause] know much better [pause] what the application we want. So let's assume [disfmarker] [speaker004:] Yeah, but the problem is [pause] that f f f funding is the second step. [speaker003:] Mmm. N Well, I'm not sure. I [disfmarker] I th I think I agree with him. But uh right now we're looking at b a th You want to do a big thing [pause] instead of a great thing. [speaker004:] Yeah. [speaker003:] So if you d assume that it should connect uh really good to Siemens and to uh KPN, and to the [disfmarker] then it starts to get big we should all go to the [disfmarker] in the same direction.
If you already assume "well, OK, they paid money", and now you think, OK, what [disfmarker] what great thing you want to do, then [pause] it gives you more freedom to think [pause] about this great thing you want to m do. [speaker004:] Yeah, but we have no eh [disfmarker] current no public funding, [speaker003:] No. [speaker004:] OK, everybody's paid here. [speaker002:] Yeah. [speaker001:] Yeah, but [disfmarker] but [disfmarker] [speaker003:] But you [disfmarker] but you don't get one if you [disfmarker] if you try to [disfmarker] to align KPN with Siemens, for instance. [speaker007:] Yeah. [speaker001:] Yeah. But [disfmarker] [speaker004:] Yeah. Yeah. [speaker001:] But for theoretical [disfmarker] or for historical reasons [pause] a really good question, "if we would have the money what would we do with that?" [speaker003:] Uh. [speaker001:] So if we don't know what we would like to do with the money, we probably wouldn't get any money to do what we don't know. [speaker004:] Mm hmm. That's right. [speaker002:] Absolutely. [speaker004:] That's right. Yeah. Yeah. [speaker001:] So that's why, I think, yeah, it's a good question. [speaker004:] Yeah. Now, that's the point why I am focussing on [disfmarker] more on the [disfmarker] [speaker001:] Yep. [speaker004:] Yeah. I want to have really the activities you know, what is really the outcome what is really the problem we are going to solve. These are my questions. And if we have identified this block then it is possible to raise some money. Yeah. [speaker001:] That's just the same question just other way around. [speaker002:] Let [disfmarker] let me put an example. You know I [disfmarker] Some days ago, I bought, like some of you, this kind of things. This is a PDA eh three com uh Palm Pilot compatible, [speaker001:] Modernistic stuff! [speaker002:] nothing [disfmarker] nothing that great, [speaker001:] Completely useless! [speaker002:] but it has some [disfmarker] some possibility of [pause] being expanded. 
But the point is eh, the only [disfmarker] in here [disfmarker] The only thing [disfmarker] wireless thing I can put on [disfmarker] on this is just uh this Omnisky modem, which is uh, I think it is a subscription based uh thing. You have to pay every [disfmarker] every month, and eh of course, I cannot imagine that uh tomorrow I'll be able to buy a different thing with four different networks, four different providers on it. Maybe I'm wrong but if I have the funding I say "OK, well, I can buy this or even I can buy a mobile computer, a notebook". OK? A lot more expensive, but [pause] that's OK, I have the funding Then I can add a couple of cards at most, because mmm the majority of the [disfmarker] of the notebooks only have two PC cards uh type two slots. So In [disfmarker] in the better case, maybe I'm only able to add two different networks to this thing. So ah the difficulty I see is that [pause] ah if we are [disfmarker] if you want to think in [disfmarker] in two three four different technologies um, maybe we are uh too advanced too, um, let's say, uh too in advance for the current technology. Which is not bad, but uh, I'm just saying that it will be able [disfmarker] It will [disfmarker] Sorry. It will be difficult for us to develop a [disfmarker] a reasonable test bed with the available technology. If somebody tell us, "OK, now here you have the thing, and just put these four networks you want on the thing." How can I do it now? So mmm I'm still trying to [disfmarker] you know, to get used to the uh [disfmarker] all the ideas and to see that all the things are really connecting because this the eh most difficult thing I see to [disfmarker] to [disfmarker] to get this alignment to this connection point [pause] among the different things we are trying to align. So that's why I was proposing this [disfmarker] this question. Let's assume we have the funding. Now let's proceed. 
What [disfmarker] what kind of things we can buy, we can put together, we can program. Because oh at the end uh somebody will uh be asking us OK, did you do your homework? Did you do what you [disfmarker] So. [speaker004:] It's clear. Now from my point of view in the beginning, one more time, if I see the four building blocks, and eh I could speak only for my skills, I believe to derive a certain kind of QS Reservation in Advance scheme Much more uh general as it is done for [disfmarker] for USAIA, [speaker002:] Mm hmm. [speaker004:] but mapped then really to desired network subnet layer technologies. [speaker002:] But for different networks? [speaker004:] Yes. [speaker002:] The point is for [speaker004:] For [disfmarker] No, for y UMTS. But first of all, a generic system, but because you can have several mechanism really using certain kind of [disfmarker] of um um quantitative description, you can have more of a [pause] qualitative description, you can have taken into account moving patterns, and [disfmarker] and [disfmarker] and [pause] whatever kind of things, but there are a lot of [disfmarker] lot of ways to have certain kind of QS Reservation in Advance, and I see it as definitely is necessary to have this one if I want be mobile and I want to have not a interruption of my [disfmarker] in my application qualities of service. [speaker002:] Hmm hmm hmm. [speaker004:] So. Um But then, to go, for instance, to UMTS, and figure out what is available within UMTS. And if I really have an IP application, where I need this kind of [disfmarker] of [pause] quality of service and having in mind that certain kind of [pause] QS will be available in the [pause] rest of the internet. So how can I map these things uh to the UMTS stuff? And what is the trend towards UMTS to the first generation? And how it is possible to map this one to this technology? 
And if you have the building blocks identified, what is really necessary [pause] for QS [disfmarker] eh QS in Advance for [disfmarker] for mobile systems, then you have, in principle, the ideas for the following generation what you could improve. [speaker002:] Hmm hmm. [speaker004:] Because then you are [pause] maybe no more related to this kind of thing. I believe the same applies for Multicast. And I believe the same applies for routing. So, in principle, in the beginning, there would be a certain kind of uh paperwork anyway, some theoretical examination of these problems, [speaker002:] I think so [speaker004:] and then [pause] some prototypes must be derived from that. And then [pause] maybe we will start here with a certain kind of uh testbed equipment we still have here. We have four uh P Cs available. Maybe we have to extend it then, and figure out some real scenarios, and maybe then pfff! [speaker002:] Oh yeah. [speaker004:] Because uh, despite the fa uh uh Hopefully, then we have with the money also not eh only for the room, [speaker002:] W [speaker004:] that would be great, but there are also some sponsors then then then we add a certain kind of field trial. But I think that's [disfmarker] [pause] pfff! [comment] that's years, you know. [speaker002:] A long way, yeah. [speaker004:] That's years. [speaker002:] That's right. [speaker004:] But in the beginning we must have a certain kind of [pause] theoretical framework with the relevant technology. Real technology. Maybe you can start with GPRS or [disfmarker] or UMTS, or whatever. because these technology will definitely available in Europe. There's some money [pause] spent on it, so that it will be in a certain kind of flavor. It will be available. [speaker002:] Hmm hmm hmm. [speaker004:] And it will be also available in the USA.
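The QoS Reservation in Advance scheme discussed above — committing capacity on a future time interval so a mobile user's quality of service is not interrupted at handover — has at its core an admission test: does the requested bandwidth fit, at every instant of the requested interval, next to the reservations already booked? A minimal sketch, assuming a single link with a fixed capacity (the data structure and names are illustrative, not USAIA's actual scheme):

```python
from dataclasses import dataclass

@dataclass
class Reservation:
    start: int        # interval start, seconds
    end: int          # interval end, seconds (exclusive)
    bandwidth: float  # Mbit/s

def admissible(existing, req, capacity):
    """Admit an advance reservation if, at every point where it overlaps
    existing bookings, committed bandwidth plus the request stays
    within link capacity."""
    # load can only change at interval boundaries, so checking those suffices
    points = sorted({req.start, req.end,
                     *[r.start for r in existing],
                     *[r.end for r in existing]})
    for t in points:
        if req.start <= t < req.end:
            load = sum(r.bandwidth for r in existing if r.start <= t < r.end)
            if load + req.bandwidth > capacity:
                return False
    return True
```

Mapping such a test onto what a UMTS bearer actually offers — the question raised above — is then a matter of translating the abstract (interval, bandwidth) request into the subnet layer's own reservation primitives.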
Because there is a big eh harmonization effort, despite the fact they are [pause] not aligned to each other anyway, but [pause] they will have s also certain kind of thr third generation and fourth generation networks. So. And after this theoretical framework, if this one is aligned, OK you have some specification and you set up some Linux or Free BSD or whatever kind of prototype. So that is what I have in mind with that. And then [disfmarker] OK, and then you will see. [speaker002:] Hmm hmm. [speaker004:] The problem is [disfmarker] which I see is that it could be fruitful to standardization that it could be fruitful to some extent to telecommunication providers. But there's not a real product like you say. There's not the selling idea behind it. [speaker002:] Hmm hmm hmm. [speaker004:] Yeah, if you go with something what [disfmarker] what Wilbert mentioned "OK, let's have this one here". We have a new PCMCIA card and [disfmarker] and extend something and the end system, or what you mentioned here was the seamless four technologies inter working in the end s, then you have an [disfmarker] an [disfmarker] a device which you can sell. If you go for a networking pfff [comment] um uh improvements, in principle, and [disfmarker] and understanding next generation building blocks, uh there's no product. [speaker002:] Hmm hmm. [speaker004:] But it is a long term [disfmarker] it's long term, I think, uh work you can do. And it's more related I believe to the skills to what is available and [disfmarker] and currently as I see what uh will be available within the NSA group. [speaker002:] OK. 
[speaker004:] And uh, the point is if we focus on [disfmarker] on this networking [disfmarker] small [disfmarker] more networking stuff and em some application guys came into the boat and they focus on their application stuff, whatever this means, It might be we have then the active classroom, but it could be another one that is not representative, [speaker007:] Yeah but there's something needed to sell it. [speaker004:] it's only [disfmarker] Hmm? [speaker007:] And I think [disfmarker] [speaker004:] Yeah. Then [disfmarker] then is something available but I don't know you know, I'm not an application expert, I [disfmarker] We have some content. We will then discuss it on next Tuesday as I mentioned. [speaker002:] Bless you. [speaker007:] Bless you. [speaker004:] Uh Anyway, but uh that were [disfmarker] is my answer to the question if some money is available. The good thing is that the money is available, because everybody is paid here. So that is money that the [disfmarker] the smallest portion of money which is available. [speaker002:] Hmm hmm hmm. [speaker004:] But first of all Wilbert need more, definitely, because maybe then KPN is uh no more interesting to go on with this work, or we can convince with these kind of things which we have in mind, uh companies like KPN. So And if we have other suggestions and eh have this as a long term NSA core group activity, and this theoretical framework, whether it is finished in half a year or one year doesn't matter, in principle, because the road map for the [disfmarker] the next generation networks is [disfmarker] is eh [disfmarker] is for many years. And b before UMTS will be deployed very well, I believe eh it will be common t uh two thousand four or something like that. So. And [disfmarker] and have [pause] an intermediate project, which is related in this direction, something like the seamless handover for two technologies or whatever. 
This is a smaller portion in the gen more generic framework, I would like to say, from this proposal. Why not go in this way? [speaker002:] OK [speaker004:] I [disfmarker] I don't know. But uh that's the point eh we are sitting here together. [speaker002:] Yeah. But [disfmarker] [speaker004:] But for [disfmarker] for Dietmar I think eh eh he mentioned it to me, it's very clear that he will go on with a portion of time, with this USAIA stuff, [speaker005:] That's right. It depends on my [disfmarker] [speaker004:] in the mathematical sense because that means for you in a certain kind [disfmarker] a certain extent, you are decoupled from networking stuff. Because you're focusing on the mathematical thing of whether it's a network or other thing, [speaker005:] Well, I'm foc [speaker004:] it doesn't matter [disfmarker] [speaker005:] I'm focused on performance evaluation [speaker004:] Yeah. Yeah. [speaker005:] and um [disfmarker] especially of communication networks and [disfmarker] [speaker004:] Yeah. [speaker005:] I never eh e um dealt with um mobile communications. So why not? [speaker004:] Yeah. So. [speaker005:] So that's eh, to open my mind. [speaker004:] There is still ongoing activity with uh maybe a portion of time at the University of Berlin [disfmarker] Berlin. And independently whether I will leave the uh uh ICSI in [disfmarker] in the mid of December or not, I will still focus on [disfmarker] on [disfmarker] on these QoS in Advance schemes eh also within Siemens and also with some project there. And I believe your routing stuff is also available, [speaker002:] Hmm mmm. [speaker004:] uh, And when you [disfmarker] when you will go back to Spain, right? [speaker002:] Yeah. Yeah. Sure. [speaker004:] Yeah, so. and [disfmarker] and that's why, in principle, the skills will be always available I guess. [speaker002:] Yeah.
And [disfmarker] and [speaker004:] if there is a certain type of project in the home countries and then the [pause] eh institutions and [disfmarker] [speaker002:] Mm hmm. N Not only this. We can even use this u platform to [disfmarker] to get some European projects in the future because you know, sometimes eh it's a good thing [disfmarker] [speaker004:] yeah maybe Currently no money is available, you know. [speaker002:] Yeah but [pause] uh, this will [disfmarker] I think w This will improve over time, I'm sure. So uh [disfmarker] Probably. You know, this kind of joint work of different countries de companies, universities. [pause] This is usually the best ah [disfmarker] the best place to [disfmarker] to put a project on. Because it is involving different partners from different points of [disfmarker] of Europe. [speaker004:] Now, I will not prevent anybody from getting some cake and we are sitting here together it's uh very uh very slow, in principle, but my last question is for Wilbert. Wilbert, you I [disfmarker] I think you got the basic idea because it was mostly discussed with you here from this current available scrap project proposal. Don't you think that KPN could be convinced to go on this way what I mentioned before? There's no way? [speaker003:] No. [speaker004:] They needed definitely, in principle, some kind of service or device or product or whatever. [speaker003:] Yeah. That's right now the s the status. Yes. [speaker004:] Can we both have together, maybe, I think it's a little bit more easier, a discussion on Thursday? [speaker003:] I'm out for th one [disfmarker] a week and a half. So on Wednesday, Thursday, Friday, I have a workshop at Stanford [speaker004:] OK. [speaker003:] and next week I have a week on holiday. [speaker004:] OK. Maybe that's enough time for you to consider a little bit more in this direction? [speaker003:] Could be. 
[speaker004:] And then we could have eh discussion and maybe this one is an overall work, in principle, where everybody could hook in and [disfmarker] and get certain kind of small portions to solve it [speaker003:] Yeah. So [disfmarker] so the only solution would [disfmarker] for [disfmarker] for KPN would be that there is a p a project in this area. [speaker004:] and [speaker003:] So [disfmarker] that makes sense to do in this ar This, I mean the Bay Area [speaker002:] Hm hmm. [speaker003:] with a group of uh a few people, like four or something, and that connects to this project. So, one somebody or two somebodies can be found uh funded [pause] or get money to join this project. But that the other four should build a service on top of it, that has a clear business focus. [speaker004:] mm hmm. [speaker003:] It means eh a [disfmarker] a UMTS service basically. [speaker004:] Mm hmm. Can you consider poten also in the meantime, some potential UMTS services besides this one? [speaker003:] Yeah. Yeah, but you see the [disfmarker] the [disfmarker] they ask something pretty strange. [speaker004:] And [disfmarker] [speaker003:] They [disfmarker] they want something pretty strange from me that I think of a service, a network service. There are sixty million billion people in the [vocalsound] [disfmarker] in the world that think of services all day and try to make a lot of money with it in the form of start ups or whatever. And just think of a service like that, if I could do this. [speaker004:] No, but, uh fff [comment] maybe based on this one and you see the networking stuff [speaker003:] w [speaker004:] that it is a little bit more evol evolved and [disfmarker] than in the [disfmarker] the core network technology in the access network, I would like to say. [speaker003:] Mm hmm. [speaker004:] And this is aligned then as much as possible to the internet. That's the basic idea, that you can have a certain kind of service which make use of these novel things.
[speaker003:] Mm hmm. [speaker004:] Because normally I [disfmarker] I believe the [disfmarker] the [disfmarker] the people are either on the application layer, [speaker003:] I [disfmarker] you know [disfmarker] [speaker004:] that means higher than layer four, or they are on layer f one to l l up to layer four. [speaker003:] Bu [speaker004:] And the service people are considering OK, UMTS you have some location information, and maybe you are [disfmarker] have the infrastructure in the car, so, if I provide a certain kind of service, independently what the network is offering, you know. Because they do not consider it very well. They know there is a certain kind of location information and you have a GPS, and they know, OK, it's based on satellite. And there's UMTS, that is based on base station. There's all that they consider. And then assume, OK, I can do that. And the networking guy said OK it would be great to have Multicast and all these kind of things in the network, but do not consider really what a sense of benefit uh for applications, there were new app old application make use of it and how can new application arise which we haven't considered up to now based on this new network function, LAT enrich [disfmarker] enriched network function, LAT. So, I think it's worth to [disfmarker] to consider something in that way [speaker003:] Well, u until now I have been trying to think about it, but eh f to me u what I have every [disfmarker] every time I encountered it, it is all [disfmarker] this is all pretty close by, like the industry is already behind this. [speaker004:] Yeah, what is our [disfmarker] [speaker003:] So they [disfmarker] they want to make this available pretty soon. So I'm just trying to think uh m more ahead, like uh use the wireless device as a [disfmarker] only as a signalling thing or something. 
So [disfmarker] so whenever you come in with your uh GSM phone and there is a PC available, like everything will be directed to the [disfmarker] the bigger device or something. So more services like that. I think [pause] the they are working pretty hard to get IP available very good and efficiently from uh two mobile devices or wireless devices. [speaker004:] No, currently there is a techno technology split. [speaker003:] a th thing [speaker004:] Of course you have IP in the mobile device but if you see the strange protocol stacks there's IP over IP and tunnels and [disfmarker] and whatever, so, but it's uh looks very strange to me because you have different numbering schemes, concerning you have a phone number, you have to map it to IP addresses and [disfmarker] and so on, but for me, you have not the end to end parts of for [disfmarker] for IP functionality. We are far away from that. And if you take a really look to the GPRS protocol stacks and I believe also they apply for UMTS but I'm not eh very familiar with that, and the next generations access network is [disfmarker] will come soon. You see it's every three years or four years they will have a new generation. [speaker003:] And there's a [disfmarker] We m [speaker004:] But anyway you [disfmarker] [speaker003:] Our group might fo want to focus on the uh IP over ether no nets [speaker004:] yeah? yeah? [speaker003:] or something like or over ether [disfmarker] [speaker004:] Why? What kind [disfmarker] what kind [disfmarker] IP over what do you call it? [speaker003:] N no nets, [speaker004:] No net. [speaker003:] Yeah. [speaker004:] That would be great! [speaker003:] Over [disfmarker] Like only over ether. Carefully integrated for net but there is no net. If you want to do [disfmarker] make like also for wireless devices end to end IP available, that could be a vision. Like. [speaker004:] Can you repeat it?
[speaker003:] Well, so he [disfmarker] he says well IP to IP or end to end IP, is far [disfmarker] [pause] well, it's far away. um So you [disfmarker] what you might want to do is well get U M T S out there and just do uh IP plain over radio. [speaker004:] Mm hmm. Something like that is my final vision. [speaker003:] Like [disfmarker] like SDH, IP over fiber or WDM WDM. [speaker004:] Mm hmm. [speaker003:] This gets rid of the SDH. Or, well, start with get rid of the ATM first [speaker004:] Yeah. ATM stuff. Yeah. Yeah. [speaker003:] and then get rid of SDH. [speaker004:] There's [disfmarker] the problem is [disfmarker] is, you see that the providers eh have always their broken view for [disfmarker] for the management. If you have different kind of protocol stacks and network management system, and the billing system and all these kinds of things which is besides the networking stuff, in principle, but you still need it, is eh not aligned. [speaker003:] Yeah. [speaker004:] We have it for different islands [speaker002:] Hmm hmm hmm. [speaker004:] and [disfmarker] and Well, that's my final vision you know. If you have really then IP networks at a certain generation where you have really the physical media and where necessary uh additional MAC-layer protocols, and then you have really always end to end. It makes you things much more simple much more easier, [speaker002:] Uh. [speaker004:] uh, but is far far away, you know. [speaker002:] Unless [disfmarker] unless we speak about quality of service, [speaker004:] Yeah. Yeah. Yeah. [speaker002:] which is not eh embedded on [disfmarker] on IP services. [speaker004:] Yeah but you have the extension for the eh quality of service mechanism [speaker002:] Yeah, but [disfmarker] [speaker004:] which you have to map then to the underlying technology. [speaker002:] Yeah but for example nobody's really using RSVP.
[speaker004:] And [disfmarker] Yeah but maybe in the LAN interface it makes sense to use it [speaker003:] Eh you guys do that discussion later on please, [speaker004:] And [disfmarker] and [disfmarker] But yeah [speaker003:] uh? [speaker004:] but I think we sh we will miss the cake [speaker002:] OK, sorry, sorry. [speaker003:] RSVP. [speaker004:] and I think we should stop now but please everybody, short eh s go on with considerations and potential and [disfmarker] and uh um we didn't make s much progress today in the meeting, but anyway this kind of activity lives from all, from the input of all. And so I really if somebody has an idea send an email and then I would like to enforce everybody not that I'm doing everything I could not [disfmarker] I f didn't have the time in the past and I will not have it in the future and maybe I will stay here only s four weeks or five weeks. So it would be really not the best thing if the activity is dead when I am leaving and if I stay longer of course we can go on but nevertheless I rely heavily uh also from the contributions of everybody, and it's in their own interest [speaker002:] Hmm hmm hmm. That's right. [speaker004:] of [disfmarker] of everybody, also from the institution behind. So If there is certain kind of idea or questions or whatever, we are s only a small group of people we can easily join on a white board and [disfmarker] and start a discussion. [speaker002:] Well, if [pause] Klaus [disfmarker] [speaker004:] It must not be on every Tuesday, you know, [speaker002:] OK. [speaker004:] at a certain time. And I try to motivate the people. That it's free. That's a starting. That's a ramp. And we can go in different directions. And if you have a real focus on the certain novel thing for the services or UMTS or whatever, great. If not, the long term and then these things must be, nevertheless [disfmarker] must be defined in much more detail. Yeah. OK?
So, thanks [speaker002:] OK [speaker003:] OK [speaker004:] please do not, um what Adam mentioned to [speaker005:] Not to switch off. [speaker004:] uh switch off, [speaker003:] Don't switch it off. [speaker007:] Do not [disfmarker] Don't switch it off. [speaker002:] OK, yeah, that's right. [speaker004:] So maybe I will find him [@ @]. [speaker002:] Oh, can you call Adam? [speaker004:] cake area. [speaker002:] At this number? [speaker004:] and now let me see whether he will be here, and We did not forget. We are done. [speaker006:] Yeah, just leave them on for now. Sometimes if you turn it off while it's recording it crashes [speaker004:] OK [pause] OK. Hopefully everything went fine. [speaker001:] Eh, we should be going. [speaker002:] So ne next week we'll have, uh, both Birger [pause] and, uh, Mike [disfmarker] Michael [disfmarker] Michael Kleinschmidt [speaker004:] Uh huh. [speaker002:] and Birger Kollmeier will join us. Um, and you're [disfmarker] [vocalsound] you're probably gonna go up in a couple [disfmarker] three weeks or so? When d when are you thinking of going up to, uh, OGI? [speaker004:] Yeah, like, uh, not next week but maybe the week after. [speaker002:] OK. Good. So at least we'll have one meeting with [vocalsound] yo with you still around, [speaker004:] Uh huh. [speaker002:] and [disfmarker] and [disfmarker] That's good. [speaker004:] Um, Yeah. Well, [vocalsound] maybe we can start with this. Mmm. [speaker002:] All today, huh? [speaker004:] Yeah. [speaker002:] Oh. [speaker004:] Um. Yeah. So there was this conference call this morning, um, and the only topic on the agenda was just to discuss a and to come at [disfmarker] uh, to get a decision about this latency problem. [speaker002:] No, this [disfmarker] I'm sorry, this is a conference call between different Aurora people [speaker004:] Uh, yeah. It's the conference call between the Aurora, [vocalsound] uh, group. [speaker002:] or just [disfmarker]? It's the main conference call. OK. 
[speaker004:] Uh, yeah. There were like two hours of [pause] discussions, and then suddenly, [vocalsound] uh, people were tired, I guess, and they decided on [nonvocalsound] a number, two hundred and twenty, um, included e including everything. Uh, it means that it's like eighty milliseconds [pause] less than before. Um. [speaker002:] And what are we sitting at currently? [speaker004:] So, currently d uh, we have system that has two hundred and thirty. [speaker002:] Yeah. [speaker004:] So, that's fine. [speaker002:] Two thirty. [speaker004:] Yeah. So that's the system that's described on the second point of [pause] this [vocalsound] document. [speaker002:] So it's [disfmarker] we have to reduce it by ten milliseconds somehow. [speaker004:] Yeah. But that's [disfmarker] Yeah. That's not a problem, I [disfmarker] I guess. [speaker002:] OK. [speaker004:] Um. [speaker002:] W It's [disfmarker] it's p d primary [disfmarker] primarily determined by the VAD at this point, right? [speaker004:] Yeah. Yeah. At this point, [speaker002:] S so we can make the VAD a little shorter. [speaker004:] yeah. [speaker002:] That's [disfmarker] [speaker004:] Yeah, uh huh. [speaker002:] Yeah. We probably should do that pretty soon so that we don't get used to it being a certain way. [speaker004:] Uh huh. [speaker002:] Yeah. [speaker004:] Um. [speaker002:] Was Hari on the [disfmarker] on the phone? [speaker004:] Yeah, sure. [speaker002:] OK. [speaker004:] Well, it was mainly a discussion [vocalsound] between Hari and [vocalsound] David, [speaker002:] Hmm. [speaker004:] who was like [disfmarker] [speaker002:] Yeah. [speaker004:] Uh, [speaker002:] OK. [speaker004:] mmm [disfmarker] Uh, yeah. So, the second thing is the system that we have currently. Oh, yes. 
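The budget arithmetic in this exchange is small but concrete: the front-end currently sits at 230 ms against the 220 ms ceiling agreed on the conference call, so roughly 10 ms must come out of the VAD. A minimal Python sketch; only the 230 ms total and the 220 ms budget come from the discussion, and the per-component split below is a made-up illustration:

```python
# Front-end latency budget check. The 230 ms current total and the
# 220 ms ceiling are from the meeting; the component split is invented.

BUDGET_MS = 220  # ceiling decided on the conference call

def total_latency(components):
    """Sum per-component algorithmic latencies, in milliseconds."""
    return sum(components.values())

def ms_to_shave(components, budget=BUDGET_MS):
    """How many milliseconds must still come out (0 if within budget)."""
    return max(0, total_latency(components) - budget)

current = {"filterbank": 100, "vad_smoothing": 80, "tandem_net": 50}  # 230 ms
# shaving ~10 ms off the VAD smoothing window brings this within budget
```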
We have, like, a system that gives sixty two percent improvement, but [vocalsound] if you want to stick to the [disfmarker] [vocalsound] this latency [disfmarker] Well, it has a latency of two thirty, but [vocalsound] if you want also to stick to the number [vocalsound] of features that [disfmarker] limit it to sixty, [vocalsound] then we go a little bit down but it's still sixty one percent. Uh, and if we drop the tandem network, then we have fifty seven percent. [speaker002:] Uh, but th the two th two thirty includes the tandem network? [speaker004:] Yeah. [speaker002:] OK. And i is the tandem network, uh, small enough that it will fit on the terminal size [speaker004:] Uh, no, I don't think so. [speaker002:] in terms of [disfmarker]? No. [speaker004:] No. [speaker002:] OK. [speaker004:] It's still [disfmarker] in terms of computation, if we use, like, their way of computing the [disfmarker] the maps [disfmarker] the [disfmarker] the MIPs, [vocalsound] I think it fits, [speaker002:] Mm hmm. Mm hmm. [speaker004:] but it's, uh, m mainly a problem of memory. [speaker002:] Right. [speaker004:] Um, and I don't know how much [pause] this can be discussed or not, because it's [disfmarker] it could be in ROM, so it's maybe not that expensive. But [disfmarker] [speaker002:] Ho how much memory d? H how many [disfmarker]? [speaker004:] I d I d uh, I [disfmarker] I don't kn remember exactly, but [disfmarker] [vocalsound] Uh. Yeah, I c I [disfmarker] I have to check that. [speaker002:] Yeah. I'd like to [pause] see that, cuz maybe I could think a little bit about it, cuz we [vocalsound] maybe we could make it a little smaller or [disfmarker] I mean, it'd be [disfmarker] it'd be neat if we could fit it all. [speaker004:] Uh huh. [speaker002:] Uh, I'd like to see how far off we are. [speaker004:] Mm hmm. [speaker002:] But I guess it's still within their rules to have [disfmarker] have it on the, uh, t uh, server side. [speaker004:] Yeah. Yeah. [speaker002:] Right? OK. 
[speaker004:] Mmm. [speaker002:] And this is still [disfmarker]? Uh, well, y you're saying here. I c I should just let you go on. [speaker004:] Yeah, there were small tricks to make this tandem network work. Uh, [vocalsound] mmm, and one of the trick was to, [vocalsound] um, use [vocalsound] some kind of hierarchical structure where [pause] the silence probability is not computed by [pause] the final tandem network but by the VAD network. Um, so apparently it looks better when, [vocalsound] uh, we use the silence probability from the VAD network and we re scale the other probabilities by one minus the silence probability. [speaker002:] Huh. [speaker004:] Um. So it's some kind of hierarchical thing, [vocalsound] uh, that Sunil also tried, um, [vocalsound] [vocalsound] on SPINE and apparently it helps a little bit also. Mmm. And. Yeah, the reason w why [disfmarker] why we did that with the silence probability was that, [vocalsound] um [disfmarker] [speaker002:] Could [disfmarker]? Uh, uh, I'm [disfmarker] I'm really sorry. Can you repeat what you were saying about the silence probability? [speaker004:] Mm hmm. [speaker002:] I only [disfmarker] [speaker004:] Yeah. [speaker002:] My mind was some [disfmarker] [speaker004:] So there is the tandem network that e e e estimates the phone probabilities [speaker002:] Yeah. Yeah. [speaker004:] and the silence probabilities also. [speaker002:] Right. [speaker004:] And [vocalsound] things get better when, instead of using the silence probability computed by the tandem network, we use the silence probability, uh, given by the VAD network, [speaker002:] Oh. [speaker004:] um, [speaker002:] The VAD network is [disfmarker]? [speaker004:] Which is smaller, but maybe, um [disfmarker] So we have a network for the VAD which has one hundred hidden units, and the tandem network has five hundred. Um. So it's smaller but th the silence probability [pause] from this network seems, uh, better. [speaker002:] OK. [speaker004:] Mmm. Uh. 
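The hierarchical trick described here — take the silence probability from the small VAD network and rescale the tandem network's other posteriors by one minus that value — can be sketched in a few lines. The class ordering (silence at index 0) is an assumption for illustration:

```python
import numpy as np

def merge_silence_posterior(tandem_post, p_sil_vad, sil_idx=0):
    """Replace the tandem net's silence posterior with the VAD net's
    estimate, and rescale the remaining phone posteriors by
    (1 - p_sil_vad) so the vector still sums to one.
    sil_idx is an assumed class ordering, not fixed by the meeting."""
    out = np.asarray(tandem_post, dtype=float).copy()
    speech_mass = 1.0 - out[sil_idx]          # tandem's own speech mass
    out *= (1.0 - p_sil_vad) / speech_mass    # renormalize phone classes
    out[sil_idx] = p_sil_vad                  # use the VAD net's estimate
    return out

# e.g. tandem says [sil=0.5, a=0.3, b=0.2] but the VAD net says sil=0.2:
merge_silence_posterior([0.5, 0.3, 0.2], p_sil_vad=0.2)  # -> [0.2, 0.48, 0.32]
```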
Well, it looks strange, but [disfmarker] [speaker002:] Yeah. But [disfmarker] [speaker004:] but it [speaker002:] OK. [speaker004:] Maybe it's [disfmarker] has something to do to [vocalsound] the fact that [vocalsound] we don't have infinite training data and [disfmarker] [speaker002:] We don't? [speaker004:] Well! And so [disfmarker] Well, things are not optimal and [disfmarker] [speaker002:] Yeah. [speaker004:] Mmm [disfmarker] [speaker005:] Are you [disfmarker] you were going to say why [disfmarker] what made you [disfmarker] wh what led you to do that. [speaker004:] Yeah. Uh, there was a p [comment] problem that we observed, um, [vocalsound] [vocalsound] that there was [disfmarker] there were, like, many insertions in the [disfmarker] in the system. [speaker002:] Mm hmm. [speaker004:] Mmm. [speaker002:] Hmm. [speaker004:] Actually plugging in the tandem network was increasing, I [disfmarker] I [disfmarker] I think, the number of insertions. [speaker002:] Mm hmm. [speaker004:] And, [vocalsound] um [disfmarker] So it looked strange and then just using the [disfmarker] the other silence probability helps. Mmm. Um [disfmarker] Yeah. The next thing we will do is train this tandem on more data. Um [disfmarker] [speaker002:] So, you know, in a way what it might [disfmarker] i it's [disfmarker] it's a little bit like [vocalsound] combining knowledge sources. Right? Because [vocalsound] the fact that you have these two nets that are different sizes [pause] means they behave a little differently, [speaker004:] Mm hmm. [speaker002:] they find different [pause] things. And, um, if you have, um [disfmarker] f the distribution that you have from, uh, f speech sounds is w [comment] sort of one source of knowledge. [speaker004:] Mm hmm. [speaker002:] And this is [disfmarker] and rather than just taking one minus that to get the other, which is essentially what's happening, you have this other source of knowledge that you're putting in there. 
So you make use of both of them [vocalsound] in [disfmarker] in [pause] what you're ending up with. Maybe it's better. [speaker004:] Yeah. [speaker002:] Anyway, you can probably justify anything if what's use [speaker004:] Yeah. And [disfmarker] and the features are different also. [speaker002:] Yeah. [speaker004:] I mean, the VAD doesn't use the same features there are. [speaker002:] Mm hmm. [speaker005:] Hmm. [speaker002:] Oh! [speaker004:] Um [disfmarker] [speaker002:] That might be the key, actually. [speaker004:] Mm hmm. [speaker002:] Cuz you were really thinking about speech versus nonspeech for that. [speaker004:] Mm hmm. [speaker002:] That's a good point. [speaker004:] Mmm. Uh. Well, there are other things that [vocalsound] we should do but, [vocalsound] um, [vocalsound] it requires time and [disfmarker] [vocalsound] We have ideas, like [disfmarker] so, these things are like hav having a better VAD. Uh, we have some ideas about that. It would [disfmarker] [vocalsound] probably implies working a little bit on features that are more [vocalsound] suited to a voice activity detection. [speaker002:] Mm hmm. [speaker004:] Working on the second stream. Of course we have ideas on this also, but [disfmarker] [vocalsound] w we need to try different things and [disfmarker] Uh, but their noise estimation, um [disfmarker] [vocalsound] uh [disfmarker] [speaker002:] I mean, back on the second stream, I mean, that's something we've talked about for a while. I mean, I think [nonvocalsound] that's certainly a high hope. [speaker004:] Yeah. [vocalsound] Mmm. [speaker002:] Um, so we have this [disfmarker] this default idea about just using some sort of purely spectral thing? [speaker004:] Uh, yeah. [speaker002:] for a second stream? [speaker004:] But, um, we [disfmarker] we did a first try with this, and it [disfmarker] it [vocalsound] clearly hurts. [speaker002:] But, uh, how was the stream combined? [speaker004:] Uh. 
[vocalsound] It was c it was just combined, um, by the acoustic model. So there was, no neural network for the moment. [speaker002:] Right. So, I mean, if you just had a second stream that was just spectral and had another neural net and combined there, that [disfmarker] that, uh, [vocalsound] might be good. [speaker004:] Mm hmm. Yeah. Mm hmm. Mm hmm. Mmm. Yeah. Um [disfmarker] Yeah, and the other thing, that noise estimation and th um, maybe try to train [disfmarker] uh, the training data for the t tandem network, right now, is like [disfmarker] i is using the noises from the Aurora task and [vocalsound] [vocalsound] I think that people might, [vocalsound] um, try to argue about that because [vocalsound] then in some cases we have the same noises in [disfmarker] for training the network [pause] than the noises that are used for testing, [speaker002:] Right. [speaker004:] and [disfmarker] So we have t n uh, to try to get rid of these [disfmarker] [vocalsound] this problem. [speaker002:] Yeah. Maybe you just put in some other noise, [speaker004:] Mm hmm. [vocalsound] Yeah. [speaker002:] something that's different. I mean, it [disfmarker] it's probably helpful to have [disfmarker] have a little noise there. [speaker004:] Uh huh. [speaker002:] But it may be something else th at least you could say it was. [speaker004:] Yeah. [speaker002:] And then [disfmarker] if it doesn't hurt too much, though. [speaker004:] Uh huh. [speaker002:] Yeah. That's a good idea. [speaker004:] Um. Yeah. The last thing is that I think we are getting close to human performance. Well, that's something I would like to investigate further, but, um, I did, like, um [disfmarker] I did, uh, listen to the m most noisy utterances of the SpeechDat Car Italian and tried to transcribe them. And, um [disfmarker] [speaker002:] So this is a particular human. This is [disfmarker] this i this is Stephane. [speaker004:] Yeah. So that's [disfmarker] that's [disfmarker] [speaker005:] St Stephane. 
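The two-stream idea raised here — a second, purely spectral stream with its own neural net, merged at the posterior level rather than inside the acoustic model — leaves the combination rule open. One common choice is a weighted geometric (log-domain) average of the two posterior vectors; the sketch below is that one plausible choice, not anything fixed in the meeting:

```python
import numpy as np

def combine_streams(post_a, post_b, weight=0.5):
    """Weighted geometric average of two posterior streams in the log
    domain, renormalized to sum to one. `weight` trades off the two
    streams; 0.5 treats them equally."""
    log_p = weight * np.log(post_a) + (1.0 - weight) * np.log(post_b)
    p = np.exp(log_p - log_p.max())  # subtract max for numerical stability
    return p / p.sum()
```

With equal weights, two streams that disagree symmetrically cancel out to a flat posterior, which is the intuition behind letting each net cover the other's weaknesses.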
[speaker002:] Yeah. [speaker004:] that's the [disfmarker] the flaw of the experiment. This is just [disfmarker] i j [comment] [vocalsound] [vocalsound] it's just one subject, [speaker002:] Yeah. [speaker005:] Getting close. [speaker004:] but [disfmarker] but still, uh, [vocalsound] what happens is [disfmarker] is that, [vocalsound] uh, the digit error rate on this is around one percent, [speaker002:] Yeah. [speaker004:] while our system is currently at seven percent. Um, but what happens also is that if I listen to the, um [disfmarker] [nonvocalsound] a re synthesized version of the speech and [pause] I re synthesized this using a white noise that's filtered by a LPC, uh, filter [disfmarker] [speaker002:] Yeah. [speaker004:] Um, well, you can argue, that, uh [disfmarker] that this is not speech, [speaker002:] Yeah. [speaker004:] so the ear is not trained to recognize this. But s actually it sound like [pause] whispering, so we are [disfmarker] [speaker002:] Well, I mean, it's [disfmarker] [speaker004:] eh [disfmarker] [speaker002:] There's two problems there. I mean [disfmarker] I mean, so [disfmarker] so the first is [vocalsound] that by doing LPC twelve with synthesized speech w like you're saying, uh, it's [disfmarker] [vocalsound] i i you're [disfmarker] you're adding other degradation. [speaker004:] Uh huh. [speaker002:] Right? So it's not just the noise but you're adding in fact some degradation because it's only an approximation. Um, and the second thing is [disfmarker] which is m maybe more interesting [disfmarker] is that, um, [comment] [vocalsound] if you do it with whispered speech, you get this number. What if you had [pause] done analysis [comment] re synthesis and taken the pitch as well? Alright? So now you put the pitch in. [speaker004:] Uh huh. [speaker002:] What would the percentage be then? [speaker004:] Um [disfmarker] [speaker002:] See, that's the question. 
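The listening experiment described — re-synthesizing the noisy utterances from an order-12 LPC envelope excited by white noise, so that all pitch information is discarded and the result sounds whispered — can be approximated as follows. The Levinson-Durbin recursion and the crude RMS gain match are standard textbook pieces, not details from the meeting:

```python
import numpy as np

def lpc_coeffs(frame, order=12):
    """Autocorrelation-method LPC via the Levinson-Durbin recursion.
    Returns a = [1, a1, ..., a_order]."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / err                       # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]  # update previous coefficients
        a[i] = k
        err *= 1.0 - k * k
    return a

def allpole_filter(a, x):
    """Direct-form all-pole IIR: y[n] = x[n] - sum_k a[k] * y[n-k]."""
    p = len(a) - 1
    y = np.zeros(len(x))
    for n in range(len(x)):
        hist = y[max(0, n - p):n][::-1]      # y[n-1], y[n-2], ...
        y[n] = x[n] - np.dot(a[1:1 + len(hist)], hist)
    return y

def whisper_resynth(frame, order=12, seed=0):
    """Re-synthesize a frame from its LPC envelope with a white-noise
    excitation: pitch is gone, so the result sounds whispered."""
    a = lpc_coeffs(frame, order)
    noise = np.random.default_rng(seed).standard_normal(len(frame))
    gain = np.sqrt(np.mean(frame ** 2))      # crude RMS gain match
    return gain * allpole_filter(a, noise)
```

Because the excitation carries no periodicity, this is roughly "what the system hears" in the sense Stephane means: the spectral envelope without the source.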
So, you see, if it's [disfmarker] if it's [disfmarker] if it's, uh [disfmarker] Let's say it's [pause] back down to one percent again. [speaker004:] Uh huh. [speaker002:] That would say at least for people, having the pitch is really, really important, which would be interesting in itself. [speaker004:] Uh, yeah. But [disfmarker] [speaker002:] Um, if i on the other hand, if it stayed up [pause] near five percent, [vocalsound] then I'd say "boy, LPC n twelve is pretty crummy". You know? [speaker004:] Uh huh. [speaker002:] So I I I'm not sure [disfmarker] I'm not sure how we can conclude from this anything about [disfmarker] that our system is close to [vocalsound] the human performance. [speaker004:] Ye Yeah. Well, the point is that eh l ey [disfmarker] the point is that, um, [vocalsound] what I [disfmarker] what I listened to when I re synthesized the LP the LPC twelve [pause] spectrum [vocalsound] is in a way what the system, uh, is hearing, cuz [@ @] [disfmarker] all the [disfmarker] all the, um, excitation [disfmarker] all the [disfmarker] well, the excitation is [disfmarker] is not taken into account. That's what we do with our system. And [speaker002:] Well, you're not doing the LPC [disfmarker] [speaker004:] in this case [disfmarker] [speaker002:] I mean, so [disfmarker] so what if you did a [disfmarker] [speaker004:] Well, it's not LPC, sure, but [disfmarker] [speaker002:] What if you did LPC twenty? [speaker004:] LPC [disfmarker]? [speaker002:] Twenty. Right? I mean, th the thing is LPC is not a [disfmarker] a really great representation of speech. [speaker004:] Mm hmm. Mm hmm. [speaker002:] So, all I'm saying is that you have in addition to the w the, uh, removal of pitch, [vocalsound] you also are doing, uh, a particular parameterization, [speaker004:] Mm hmm. [speaker002:] which, um, uh [disfmarker] [speaker004:] Mmm. [speaker002:] Uh, so, let's see, how would you do [disfmarker]? 
So, fo [speaker004:] But that's [disfmarker] that's what we do with our systems. And [disfmarker] [speaker002:] No. Actually, we d we [disfmarker] we don't, because we do [disfmarker] we do, uh, [vocalsound] uh, mel filter bank, for instance. [speaker004:] Yeah, but is it that [disfmarker] is it that different, I mean? [speaker002:] Right? Um, [vocalsound] I don't know what mel, [pause] uh, based synthesis would sound like, [speaker004:] I Mm hmm. [speaker002:] but certainly the spectra are quite different. [speaker004:] Mm hmm. [speaker001:] Couldn't you t couldn't you, um, test the human performance on just the original [pause] audio? [speaker004:] This is the one percent number. [speaker002:] Yeah, it's one percent. [speaker004:] Mm hmm. [speaker002:] He's trying to remove the pitch information [speaker001:] Oh, oh. OK, [speaker004:] Mm hmm. [speaker002:] and make it closer to what [disfmarker] to what we're seeing as the feature vectors. [speaker001:] I see. OK. So, y uh, your performance was one percent, [speaker004:] Uh huh. [speaker001:] and then when you re synthesize with LPC twelve it went to five. [speaker004:] Yeah. [speaker001:] OK. [speaker002:] I mean [disfmarker] We were [disfmarker] we were j It [disfmarker] it [disfmarker] it's a little bit still apples and oranges because we are choosing these features in order to be the best for recognition. [speaker004:] Uh huh. [speaker002:] And, um, i if you listen to them they still might not be very [disfmarker] Even if you made something closer to what we're gonna [disfmarker] i it might not sound very good. [speaker004:] Yeah. [speaker002:] Uh, and i the degradation from that might [disfmarker] might actually make it even harder, [vocalsound] uh, to understand than the LPC twelve. So all I'm saying is that the LPC twelve [vocalsound] puts in [disfmarker] synthesis puts in some degradation [speaker004:] Uh huh. 
[speaker002:] that's not what we're used to hearing, and is, um [disfmarker] It's not [disfmarker] it's not just a question of how much information is there, as if you will always take maximum [vocalsound] advantage of any information that's presented to you. [speaker004:] Mm hmm. [speaker002:] In fact, you [vocalsound] hear some things better than others. And so it [disfmarker] it isn't [disfmarker] [speaker001:] But [disfmarker] [speaker002:] But, [vocalsound] I agree that it says that, uh, the kind of information that we're feeding it is probably, [vocalsound] um, um, a little bit, um, minimal. There's definitely some things that we've thrown away. And that's why I was saying it might be interesting if you [disfmarker] [vocalsound] an interesting test of this would be if you [disfmarker] if you actually put the pitch back in. So, you just extract it from the actual speech and put it back in, and see does that [disfmarker] is that [disfmarker] does that make the difference? [speaker004:] Uh huh. [speaker002:] If that [disfmarker] if that takes it down to one percent again, [vocalsound] then you'd say "OK, it's [disfmarker] it's in fact having, um, [vocalsound] not just the spectral envelope but also the [disfmarker] also the [disfmarker] the pitch [vocalsound] that, uh, [comment] [@ @] [comment] has the information that people can use, anyway." [speaker004:] Mmm. [speaker001:] But from this it's pretty safe to say that the system is with either [vocalsound] two to seven percent away from [pause] the performance of a human. Right? So it's somewhere in that range. [speaker002:] Well, or it's [disfmarker] it's [disfmarker] Yeah, so [disfmarker] [speaker001:] Two [disfmarker] two to six percent. [speaker002:] It's [disfmarker] it's one point four times, uh, to, uh, seven times the error, [speaker004:] To f seven times, yeah. [speaker002:] for Stephane. [speaker004:] Um. [speaker002:] So, uh [disfmarker] uh, but i I don't know. 
[speaker004:] But [disfmarker] [comment] but [disfmarker] [speaker002:] I do don't wanna take you away from other things. But that's [disfmarker] [vocalsound] that's what [disfmarker] that's the first thing that I would be curious about, is, you know, i i [vocalsound] when you we [speaker004:] But the signal itself is like a mix of [disfmarker] um, of a [disfmarker] a periodic sound and, [pause] [@ @] [comment] uh, unvoiced sound, and the noise [speaker002:] Mm hmm. [speaker004:] which is mostly, [vocalsound] uh, noise. I mean not [pause] periodic. So, [pause] what [disfmarker] what do you mean exactly by putting back the pitch in? Because [disfmarker] [speaker001:] In the LPC synthesis? [speaker002:] Yeah. You did LPC re synthesis [disfmarker] [speaker001:] I think [disfmarker] [speaker004:] I Uh huh. [speaker002:] L PC re synthesis. So, [vocalsound] uh [disfmarker] and you did it with a noise source, [speaker004:] Mm hmm. [speaker002:] rather than with [disfmarker] with a s periodic source. Right? So if you actually did real re synthesis like you do in an LPC synthesizer, where it's unvoiced you use noise, where it's voiced you use, [vocalsound] uh, periodic pulses. [speaker004:] Um. [speaker002:] Right? [speaker004:] Yeah, but it's neither [pause] purely voiced or purely unvoiced. Esp especially because there is noise. [speaker002:] Well, it might be hard to do it [speaker004:] So [disfmarker] [speaker002:] but it but [disfmarker] but the thing is that if you [disfmarker] [vocalsound] um, if you detect that there's periodic [disfmarker] s strong periodic components, then you can use a voiced [disfmarker] voice thing. [speaker004:] Oh. Uh huh. Yeah. [speaker002:] Yeah. I mean, it's probably not worth your time. It's [disfmarker] it's a side thing and [disfmarker] and [disfmarker] and there's a lot to do. [speaker004:] Uh huh, yeah. [speaker002:] But I'm [disfmarker] I'm just saying, at least as a thought experiment, [vocalsound] that's what I would wanna test. 
[speaker004:] Mm hmm. [speaker002:] Uh, I wan would wanna drive it with a [disfmarker] a [disfmarker] a two source system rather than a [disfmarker] than a one source system. [speaker004:] Mm hmm. Mm hmm. [speaker002:] And then that would tell you whether in fact it's [disfmarker] Cuz we've talked about, like, this harmonic tunneling or [vocalsound] other things that people have done based on pitch, maybe that's really a key element. Maybe [disfmarker] maybe, uh, [vocalsound] uh, without that, it's [disfmarker] it's not possible to do a whole lot better than we're doing. That [disfmarker] that could be. [speaker004:] Yeah. That's what I was thinking by doing this es experiment, like [disfmarker] [speaker002:] Yeah. [speaker004:] Mmm. [vocalsound] Evi [speaker002:] But, I mean, other than that, I don't think it's [disfmarker] I mean, other than the pitch de information, [vocalsound] it's hard to imagine that there's a whole lot more [vocalsound] in the signal that [disfmarker] that, uh [disfmarker] that we're throwing away that's important. [speaker004:] Yeah, but [disfmarker] Yeah. [vocalsound] Mm hmm. Yeah, right. [speaker002:] Right? I mean, we're using [vocalsound] a fair number of filters in the filter bank and [disfmarker] uh [disfmarker] [speaker004:] Mm hmm. Uh, yeah. [speaker002:] Hmm. Yeah. [speaker004:] Um. Yeah, that's it. [speaker002:] Yeah. That look Yeah. That's [disfmarker] that's [disfmarker] I mean, one [disfmarker] one percent is sort of what I would [disfmarker] I would figure. If somebody was paying really close attention, you might get [disfmarker] I would actually think that if, [vocalsound] you looked at people on various times of the day and different amounts of attention, you might actually get up to three or four percent error on digits. [speaker004:] Mm hmm. [speaker002:] Uh, [vocalsound] uh [disfmarker] [speaker004:] Um. [speaker002:] So it's [disfmarker] you know, we're not [disfmarker] we're not incredibly far off. 
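The two-source resynthesis proposed in this exchange — noise excitation where the frame is unvoiced, a periodic pulse train where it is voiced — can be sketched roughly as follows. This is only an illustrative thought-experiment implementation, not the code used for the listening test: the frame size, the zero-crossing voicing heuristic, and the fixed pitch period are all assumptions.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order=12):
    """Autocorrelation-method LPC of the given order (12 here,
    matching the LPC-12 envelope discussed above)."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Toeplitz normal equations R a = r for the predictor
    return solve_toeplitz((r[:order], r[:order]), r[1:order + 1])

def resynthesize(frame, order=12, voiced=None, pitch_period=80):
    """Re-synthesize one frame from its LPC envelope.

    voiced=False gives the one-source (noise-excited) version that was
    listened to; voiced=True gives the pulse-train excitation of the
    proposed two-source version. If voiced is None, a crude
    zero-crossing test (an assumption, not a real voicing detector)
    decides."""
    a = lpc(frame, order)
    if voiced is None:
        zc = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        voiced = zc < 0.1  # few zero crossings -> treat as periodic
    if voiced:
        excitation = np.zeros_like(frame)
        excitation[::pitch_period] = 1.0   # periodic pulses
    else:
        excitation = np.random.randn(len(frame))
    # All-pole synthesis filter 1 / A(z), then match the input energy
    out = lfilter([1.0], np.concatenate(([1.0], -a)), excitation)
    out *= np.sqrt(np.sum(frame ** 2) / max(np.sum(out ** 2), 1e-12))
    return out
```

A real implementation would also have to handle the mixed voiced/unvoiced frames the speaker objects to, e.g. by mixing the two excitations per band rather than switching per frame.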
On the other hand, with any of these numbers except maybe the one percent, it's st it's not actually usable in a commercial system with a full telephone number or something. [speaker004:] Uh huh. Yeah. At these noise levels. Yeah. Mm hmm. [speaker002:] Yeah. Right. [speaker004:] Well, yeah. These numbers, I mean. Mmm. [speaker002:] Good. Um, while we're still on Aurora stuff [pause] maybe you can talk a little about the status with the, uh, [vocalsound] Wall Street Journal [vocalsound] things for it. [speaker001:] So I've, um, downloaded, uh, a couple of things from Mississippi State. Um, one is their [vocalsound] software [disfmarker] their, uh, LVCSR system. Downloaded the latest version of that. Got it compiled and everything. Um, downloaded the scripts. They wrote some scripts that sort of make it easy to run [vocalsound] the system on the Wall Street Journal, uh, data. Um, so I haven't run the scripts yet. Uh, I'm waiting [disfmarker] there was one problem with part of it and I wrote a note to Joe asking him about it. So I'm waiting to hear from him. But, um, I did print something out just to give you an idea about where the system is. Uh, [vocalsound] they [disfmarker] on their web site they, uh, did this little table of where their system performs relative to other systems that have done this [disfmarker] this task. And, um, the Mississippi State system [vocalsound] using a bigram grammar, uh, is at about eight point two percent. Other comparable systems from, uh [disfmarker] [vocalsound] were getting from, uh, like six point nine, six point eight percent. So they're [disfmarker] [speaker002:] This is on clean test set? [speaker001:] This is on clean [disfmarker] on clean stuff. Yeah. 
They [disfmarker] they've started a table [vocalsound] where they're showing their results on various different noise conditions but they [disfmarker] they don't have a whole lot of it filled in and [disfmarker] [vocalsound] and I didn't notice until after I'd printed it out that, um, [vocalsound] they don't say here [pause] what these different testing conditions are. You actually have to click on it on the web site to see them. So I [disfmarker] I don't know what those [pause] numbers really mean. [speaker002:] What kind of numbers are they getting on these [disfmarker] on the test conditions? [speaker001:] Well, see, I was a little confused because on this table, I'm [disfmarker] the they're showing word error rate. But on this one, I [disfmarker] I don't know if these are word error rates because they're really big. So, [vocalsound] under condition one here it's ten percent. Then under three it goes to sixty four point six percent. [speaker002:] Yeah, that's probably Aurora. I mean [disfmarker] [speaker001:] Yeah. So m I guess maybe they're error rates but they're, uh [disfmarker] they're really high. [speaker002:] I [disfmarker] I [disfmarker] I don't find that surpri [speaker001:] So [disfmarker] [speaker002:] I mean, we [disfmarker] W what's [disfmarker] what's some of the lower error rates on [disfmarker] on [disfmarker] on [disfmarker] uh, some of the higher error rates on, uh, [vocalsound] some of these w uh, uh, highly mismatched difficult conditions? What's a [disfmarker]? [speaker004:] Uh. Yeah, it's around fifteen to twenty percent. [speaker001:] Correct? [speaker004:] And the baseline, eh [disfmarker] [speaker001:] Accuracy? [speaker004:] Uh, error rate. [speaker002:] Yeah. [speaker004:] Twenty percent error rate, [speaker002:] Yeah. So twenty percent error rate on digits. [speaker004:] and [disfmarker] [speaker002:] So if you're doing [disfmarker] so if you're doing, [speaker004:] and [disfmarker] [speaker001:] Oh, oh, on digits. Yeah. 
[speaker004:] On digits. And this is so [disfmarker] so [disfmarker] still the baseline. [speaker001:] OK. [speaker002:] you know, sixty thousand [disfmarker] [speaker004:] Right? [speaker001:] Yeah. [speaker002:] Yeah, and if you're saying sixty thousand word recognition, getting sixty percent error on some of these noise condition not at all surprising. [speaker001:] Yeah. [speaker004:] The baseline is sixty percent also on digits, [speaker001:] Oh, is it? [speaker004:] on the m more [pause] mismatched conditions. [speaker001:] OK. [speaker004:] So. [speaker002:] Yeah. [speaker001:] So, yeah, that's probably what it is then. Yeah. So they have a lot of different conditions that they're gonna be filling out. [speaker002:] It's a bad sign when you [disfmarker] looking at the numbers, you can't tell whether it's accuracy or error rate. [speaker001:] Yeah. Yeah. It's [disfmarker] it's gonna be hard. Um, they're [disfmarker] I I'm still waiting for them to [pause] release the, um, [vocalsound] multi CPU version of their scripts, cuz right now their script only handles processing on a single CPU, which will take a really long time to run. So. [speaker002:] This is for the training? [speaker001:] But their s Uh [disfmarker] I beli Yes, for the training [pause] also. [speaker002:] OK. [speaker001:] And, um, they're supposed to be coming out with it any time, the multi CPU one. [speaker002:] OK. [speaker001:] So, as soon as they get that, then I'll [disfmarker] I'll grab those too and so w [speaker002:] Yeah. Cuz we have to get started, cuz it's [disfmarker] cuz, uh, [speaker001:] Yeah. Yeah. I'll go ahead and try to run it though with just the single CPU one, [speaker002:] if the [disfmarker] [speaker001:] and [disfmarker] I [disfmarker] they [disfmarker] they, [vocalsound] um, released like a smaller data set that you can use that only takes like sixteen hours to train and stuff. 
So I can [disfmarker] I can run it on that just to make sure that the [disfmarker] [vocalsound] the thing works and everything. [speaker002:] Oh! Good. Yeah. [speaker005:] Hmm. [speaker002:] Cuz we'll [disfmarker] I guess the actual evaluation will be in six weeks or something. So. Is that about right [pause] you think? [speaker004:] Uh, we don't know yet, I [disfmarker] I think. [speaker002:] Really, we don't know? [speaker004:] Uh huh. Um. [speaker002:] Hmm. [speaker001:] It wasn't on the conference call this morning? [speaker004:] No. [speaker001:] Hmm. Did they say anything on the conference call [pause] about, um, how the [pause] Wall Street Journal part of the test was going to be [pause] run? Because I [disfmarker] I thought I remembered hearing that some sites [vocalsound] were saying that they didn't have the compute to be able to run the Wall Street Journal stuff at their place, [speaker004:] No. Mmm. [speaker001:] so there was some talk about having Mississippi State run [pause] the systems for them. And I [disfmarker] Did [disfmarker] did that come up at all? [speaker004:] Uh, no. Well, this [disfmarker] first, this was not the point at all of this [disfmarker] the meeting today [speaker001:] Oh, OK. [speaker004:] and, uh, frankly, I don't know [speaker002:] Some [speaker004:] because I d [comment] didn't read also the [pause] most recent mails about [vocalsound] the large vocabulary task. But, [vocalsound] uh, did you [disfmarker] do you still, uh, get the mails? You're not on the mailing list or what? [speaker001:] Hmm mm. The only, um, mail I get is from Mississippi State [disfmarker] [speaker004:] Uh huh. [speaker001:] so [disfmarker] [speaker004:] Oh, yeah. So we should have a look at this. [speaker001:] about their system. 
I [disfmarker] I don't get any [pause] mail about [disfmarker] [speaker002:] I have to say, there's uh something funny sounding about saying that one of these big companies doesn't have enough cup compute power do that, so they're having to have it done by Mississippi State. [speaker001:] Yeah. [speaker002:] It just [disfmarker] [vocalsound] just sounds funny. [speaker001:] Yeah. It does. [speaker002:] But, anyway. [speaker001:] Yeah. I'm [disfmarker] I'm wondering about that because there's this whole issue about, you know, simple tuning parameters, like word insertion penalties. [speaker004:] Mm hmm. [speaker001:] And [pause] whether or not those are going to be tuned or not, and [disfmarker] [comment] So. [speaker004:] Mm hmm. [speaker001:] I mean, it makes a big difference. If you change your front end, you know, the scale is completely [disfmarker] can be completely different, so. It seems reasonable that that at least should be tweaked to match the front end. But [disfmarker] [speaker004:] You didn't get any answer from [pause] Joe? [speaker001:] I did, but Joe [pause] said, you know, what you're saying makes sense [speaker004:] Uh huh. [speaker001:] and [pause] I don't know ". [speaker004:] Uh huh. [speaker001:] So he doesn't know what the answer is. I mean, that's th We had this back and forth a little bit about, [vocalsound] you know, are sites gonna [disfmarker] are you gonna run this data for different sites? And, well, if [disfmarker] if Mississippi State runs it, then maybe they'll do a little optimization on that [pause] parameter, and, uh [disfmarker] But then he wasn't asked to run it for anybody. So i it's [disfmarker] it's just not clear yet what's gonna happen. [speaker004:] Mm hmm. [speaker001:] Uh, he's been putting this stuff out on their web site and [disfmarker] for people to grab but I haven't heard too much about what's happening. 
[speaker002:] So it could be [disfmarker] I mean, Chuck and I had actually talked about this a couple times, and [disfmarker] and [disfmarker] over some lunches, I think, [vocalsound] that, um, [vocalsound] one thing that we might wanna do [disfmarker] The there's this question about, you know, what do you wanna scale? Suppose y you can't adjust [vocalsound] these word insertion penalties and so forth, so you have to do everything at the level of the features. What could you do? And, uh, one thing I had suggested at an earlier time was maybe some sort of scaling, some sort of root or [disfmarker] or something of the, um, [vocalsound] uh, features. But the problem with that is that isn't quite the same, it occurred to me later, because what you really want to do is scale the, uh, [@ @] [comment] the range of the likelihoods rather than [disfmarker] [speaker004:] Nnn, the dist Yeah. [speaker002:] But, [vocalsound] what might get at something similar, it just occurred to me, is kind of an intermediate thing [disfmarker] is because we do this strange thing that we do with the tandem system, at least in that system what you could do [vocalsound] is take the, um, [vocalsound] uh, values that come out of the net, which are something like log probabilities, and scale those. And then, uh, um [disfmarker] [pause] then at least those things would have the right values or the right [disfmarker] the right range. And then that goes into the rest of it and then that's used as observations. So it's [disfmarker] it's, [vocalsound] um, another way to do it. [speaker004:] Mm hmm. Mm hmm. But, these values are not directly used as probabilities anyway. [speaker002:] I know they're not. [speaker004:] So there are [disfmarker] there is [disfmarker] [speaker002:] I know they're not. But [disfmarker] but, you know [disfmarker] [speaker004:] Uh huh. [speaker002:] So because what we're doing is pretty strange and complicated, we don't really know what the effect is [pause] at the other end. 
[speaker004:] Mm hmm. [speaker002:] So, [vocalsound] um, [pause] my thought was maybe [disfmarker] I mean, they're not used as probabilities, but the log probabilities [disfmarker] we're taking advantage of the fact that something like log probabilities has more of a Gaussian shape than Gaus than [vocalsound] probabilities, and so we can model them better. So, [pause] in a way we're taking advantage of the fact that they're probabilities, because they're this quantity that looks kind of Gaussian when you take it's log. So, [comment] [vocalsound] uh, maybe [disfmarker] maybe it would have a [disfmarker] a reasonable effect to do that. [speaker004:] Mm hmm. [speaker002:] I d I don't know. But, [pause] I mean, I guess we still haven't had a [disfmarker] [vocalsound] a ruling back on this. And we may end up being in a situation where we just you know really can't change the [vocalsound] word insertion penalty. But the other thing we could do [vocalsound] is [disfmarker] also we could [disfmarker] I mean, this [disfmarker] this may not help us, [vocalsound] uh, in the evaluation but it might help us in our understanding at least. We might, [vocalsound] just run it with different insper insertion penalties, and show that, uh, "well, OK, not changing it, [vocalsound] playing the rules the way you wanted, we did this. But in fact if we did that, it made a [disfmarker] [pause] a big difference." [speaker001:] I wonder if it [disfmarker] it might be possible to, uh, simulate the back end with some other system. So we [disfmarker] we get our f front end features, and then, uh, as part of the process of figuring out the scaling of these features, [comment] you know, if we're gonna take it to a root or to a power or something, [comment] [vocalsound] we have some back end that we attach onto our features that sort of simulates what would be happening. [speaker002:] Mm hmm. [speaker001:] Um, [speaker002:] And just adjust it until it's the best number? 
[speaker001:] and just adjust it until that [disfmarker] our l version of the back end, uh, decides that [disfmarker] that [disfmarker] [speaker002:] Well, we can probably use the real thing, can't we? And then jus just, uh, [vocalsound] use it on a reduced test set or something. [speaker001:] Yeah. Oh, yeah. That's true. [speaker002:] Yeah. [speaker001:] And then we just use that to determine some scaling factor that we use. [speaker002:] Yeah. So I mean, I I think that that's a reasonable thing to do and the only question is what's the actual knob that we use? And the knob that we use should [disfmarker] [speaker001:] Mm hmm. [speaker002:] uh, uh, unfortunately, like I say, I don't know the analytic solution to this cuz what we really want to do is change the scale of the likelihoods, not the cha not the scale of the [disfmarker] [vocalsound] the [pause] observations. [speaker001:] Mm hmm. [speaker002:] But [disfmarker] but, uh [disfmarker] [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker005:] Out of curiosity, what [disfmarker] what kind of recognizer [pause] is the one from Mississippi State? [speaker001:] Uh, w what do you mean when you say "what kind"? [speaker005:] Is it [disfmarker]? Um, is it like a [pause] Gaussian mixture model? [speaker001:] Yeah. Gaussian mixture model. [speaker005:] OK. [speaker001:] It's the same system that they use [pause] when they participate in the Hub five evals. It's a, [vocalsound] um [disfmarker] sort of [pause] came out of, uh [disfmarker] uh, looking a lot like HTK. I mean, they started off with [disfmarker] um, when they were building their system they were always comparing to HTK to make sure they were getting similar results. And so, [vocalsound] it's a Gaussian mixture system, uh [disfmarker] [speaker002:] Do they have the same sort of mix down sort of procedure, where they [vocalsound] start off with a small number of some things [speaker001:] I don't know. Yeah. And then [pause] divide the mixtures in half. 
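The knob being discussed here — scaling the log-probability-like outputs of the tandem net when the word insertion penalty itself cannot be touched, and tuning that scale against a reduced-test-set run of the real back end — can be sketched as below. The grid of scale values and the stand-in scoring callable are both assumptions for illustration; in practice `score_fn` would be a run of the actual recognizer.

```python
import numpy as np

def scale_log_probs(log_probs, alpha):
    """Scale the dynamic range of log-probability-like net outputs.

    Multiplying log probabilities by alpha amounts to raising the
    underlying probabilities to the power alpha, compressing
    (alpha < 1) or expanding (alpha > 1) the range of the resulting
    likelihood-like values."""
    return alpha * log_probs

def tune_scale(features, score_fn, grid=(0.25, 0.5, 1.0, 2.0, 4.0)):
    """Pick the scale maximizing a back-end score on a dev set.

    `score_fn` is a stand-in for running the real (untunable) back end
    on a reduced test set, as suggested in the discussion above."""
    return max(grid, key=lambda a: score_fn(scale_log_probs(features, a)))
```

As the discussion notes, this is not analytically the same as scaling the likelihoods the decoder computes, only a practical approximation applied at the feature level.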
[speaker002:] and [disfmarker]? Yeah. [speaker001:] I don't know if they do that. I'm not really sure. [speaker002:] Yeah. [speaker005:] Hmm. [speaker002:] D Do you know what kind of tying they use? Are they [disfmarker] they sort of [disfmarker] some sort of [disfmarker] a bunch of Gaussians that they share across everything? Or [disfmarker] [vocalsound] or if it's [disfmarker]? [speaker001:] Yeah, th I have [disfmarker] I [disfmarker] I [disfmarker] I don't have it up here but I have a [disfmarker] [pause] the whole system description, that describes exactly what their [pause] system is [speaker002:] OK. [speaker001:] and I [disfmarker] I'm not sure. But, um [disfmarker] [speaker002:] OK. [speaker001:] It's some kind of a mixture of Gaussians and, [vocalsound] uh, clustering and, uh [disfmarker] They're [disfmarker] they're trying to put in sort of all of the standard features that people use nowadays. [speaker005:] Mm hmm. [speaker002:] So the other, uh, Aurora thing maybe is [disfmarker] I I dunno if any of this is gonna [vocalsound] [pause] come in in time to be relevant, but, uh, we had talked about, uh, [comment] Guenter [vocalsound] playing around, uh, uh, over in Germany [speaker004:] Mm hmm. [speaker002:] and [disfmarker] and, [@ @] [comment] uh, [pause] possibly coming up with something [vocalsound] that would, uh, [pause] uh, fit in later. Uh, I saw that other mail where he said that he [disfmarker] [vocalsound] uh, it wasn't going to work for him to do CVS. [speaker004:] Yeah. Yeah. So now he has a version of the software. [speaker002:] So he just has it all sitting there. Yeah. [speaker004:] Yeah. Um [disfmarker] Mm hmm. [speaker002:] So if he'll [disfmarker] he might work on improving the noise estimate or on [vocalsound] some histogram things, [speaker004:] Yeah. [speaker002:] or [disfmarker] [speaker004:] Mm hmm. [speaker002:] Yeah. 
I just saw the Eurospeech [disfmarker] We [disfmarker] we didn't talk about it at our meeting but I just saw the [disfmarker] just read the paper. Someone, I forget the name, [comment] and [disfmarker] and Ney, uh, about histogram equalization? Did you see that one? [speaker004:] Um, it was a poster. Or [disfmarker] [speaker002:] Yeah. I mean, I just read the paper. [speaker004:] Yeah. [speaker002:] I didn't see the poster. [speaker004:] Yeah. Um [disfmarker] [vocalsound] It was something [pause] similar to n [vocalsound] on line normalization finally [disfmarker] I mean, in [vocalsound] the idea of [disfmarker] of normalizing [disfmarker] [speaker002:] Yeah. But it's a little more [disfmarker] it [disfmarker] it's a little finer, right? [speaker004:] Yeah. [speaker002:] So they had like ten quantiles and [disfmarker] [vocalsound] and they adjust the distribution. [speaker004:] Right. [speaker002:] So you [disfmarker] you have the distributions from the training set, [speaker004:] N [speaker002:] and then, uh [disfmarker] So this is just a [disfmarker] a histogram of [disfmarker] of [vocalsound] the amplitudes, I guess. Right? And then [disfmarker] [vocalsound] Um, people do this in image processing some. [speaker004:] Mm hmm. [speaker002:] You have this kind of [disfmarker] [vocalsound] of histogram of [disfmarker] of levels of brightness or whatever. And [disfmarker] and [disfmarker] and then, [vocalsound] when you get a new [disfmarker] new thing that you [disfmarker] you want to adjust to be [pause] better in some way, [vocalsound] you adjust it so that the histogram of the new data looks like the old data. [speaker001:] Hmm. [speaker002:] You do this kind of [vocalsound] piece wise linear or, [vocalsound] uh, some kind of piece wise approximation. They did a [disfmarker] uh one version that was piece wise linear and another that had a power law thing between them [disfmarker] [vocalsound] between the [pause] points. 
And, uh, they said they s they sort of see it in a way as s for the speech case [comment] [disfmarker] as being kind of a generalization of spectral subtraction in a way, because, you know, in spectral subtraction you're trying to [vocalsound] get rid of this excess energy. Uh, you know, it's not supposed to be there. Uh [disfmarker] [vocalsound] and, uh, this is sort of [pause] [vocalsound] adjusting it for [disfmarker] for a lot of different levels. And then they have s they have some kind of, [vocalsound] uh, [pause] a floor or something, [speaker005:] Hmm. [speaker002:] so if it gets too low you don't [disfmarker] don't do it. [speaker001:] Hmm. [speaker002:] And they [disfmarker] they claimed very nice results, [speaker004:] Mm hmm. [speaker002:] and [disfmarker] [speaker001:] So is this a histogram across different frequency bins? Or [disfmarker]? [speaker002:] Um, I think this i You know, I don't remember that. Do you remember [disfmarker]? [speaker004:] I think they have, yeah, different histograms. I uh [disfmarker] Something like one per [pause] frequency band, [speaker002:] One [disfmarker] [speaker004:] or [disfmarker] [speaker002:] One per critical [disfmarker] [speaker001:] So, one histogram per frequency bin. [speaker004:] But I did [disfmarker] Yeah, I guess. But I should read the paper. [speaker001:] And that's [disfmarker] [speaker004:] I just went [pause] through the poster quickly, [speaker002:] Yeah. And I don't remember whether it was [pause] filter bank things [speaker001:] So th [speaker004:] and I didn't [disfmarker] [speaker001:] Oh. [speaker002:] or whether it was FFT bins or [disfmarker] [speaker001:] Huh. And [disfmarker] and that [disfmarker] that, um, [pause] histogram represents [pause] the [pause] different energy levels that have been seen at that [pause] frequency? [speaker002:] I don't remember that. And how often they [disfmarker] you've seen them. Yeah. [speaker005:] Hmm. [speaker001:] Uh huh. [speaker002:] Yeah. 
And they do [disfmarker] they said that they could do it for the test [disfmarker] So you don't have to change the training. You just do a measurement over the training. And then, uh, for testing, uh, you can do it for one per utterance. Even relatively short utterances. And they claim it [disfmarker] it works pretty well. [speaker001:] So they, uh [disfmarker] Is the idea that you [disfmarker] you run a test utterance through some histogram generation thing and then you compare the histograms and that tells you [vocalsound] what to do to the utterance to make it more like [disfmarker]? [speaker002:] I guess in pri Yeah. In principle. I didn't read carefully how they actually implemented it, [speaker001:] I see. Hmm. Yeah. [speaker002:] whether it was some, [vocalsound] uh, on line thing, or whether it was a second pass, or what. But [disfmarker] but they [disfmarker] [vocalsound] That [disfmarker] that was sort of the idea. [speaker001:] Hmm. [speaker002:] So that [disfmarker] that seemed, you know, different. We're sort of curious about, uh, what are some things that are, u u um, [vocalsound] [@ @] [comment] [pause] conceptually quite different from what we've done. Cuz we [disfmarker] you know, one thing that w that, [speaker001:] Mm hmm. [speaker002:] uh, Stephane and Sunil seemed to find, [vocalsound] uh, was, you know, they could actually make a unified piece of software that handled a range of different things that people were talking about, and it was really just sort of setting of different [pause] constants. And it would turn, you know, one thing into another. It'd turn Wiener filtering into spectral subtraction, or whatever. But there's other things that we're not doing. So, we're not making any use of pitch, uh, uh, which again, might [disfmarker] might be important, uh, because the stuff between the harmonics is probably a schmutz. And [disfmarker] and the, [vocalsound] uh, transcribers will have fun with that. 
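The histogram equalization idea described here — about ten quantiles per band, mapping the test utterance's distribution onto the training distribution — can be sketched as a per-band piecewise-linear quantile mapping. This is a simplified reconstruction of the idea as recalled in the meeting; the paper's exact details (flooring, power-law interpolation between points, whether the bands are filterbank or FFT bins) are not reproduced.

```python
import numpy as np

def quantile_map(train_band, test_band, n_quantiles=10):
    """Map a test utterance's values in one band so its quantiles
    match the training distribution (piecewise-linear between the
    matched quantile points)."""
    q = np.linspace(0.0, 1.0, n_quantiles + 1)
    test_q = np.quantile(test_band, q)
    train_q = np.quantile(train_band, q)
    return np.interp(test_band, test_q, train_q)

def equalize(train_feats, test_feats, n_quantiles=10):
    """Apply the mapping independently in every frequency band
    (one histogram per band, as recalled from the poster)."""
    return np.column_stack([
        quantile_map(train_feats[:, b], test_feats[:, b], n_quantiles)
        for b in range(train_feats.shape[1])
    ])
```

Seen this way, spectral subtraction is indeed a special case: subtracting a noise estimate shifts the low end of the distribution, whereas quantile matching adjusts every level.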
Uh [disfmarker] [vocalsound] And, um, the, uh, stuff at the harmonics isn't so much. And [disfmarker] and, uh [disfmarker] And we there's this overall idea of really sort of matching the [disfmarker] the hi distributions somehow. Uh, not just, um, [vocalsound] um [disfmarker] not just subtracting off your estimate of the noise. So. So I guess, uh, [vocalsound] Guenter's gonna play around with some of these things now over this next [pause] period, [speaker004:] Uh, I dunno. [speaker002:] or [disfmarker]? [speaker004:] I don't have feedback from him, but [speaker002:] Yeah. [speaker004:] I guess he's gonna, maybe [disfmarker] [speaker002:] Well, he's got it anyway, so he can. [speaker004:] Yeah. Uh huh. [speaker002:] So potentially if he came up with something that was useful, like a diff a better noise estimation module or something, he could ship it to you guys u up there [speaker004:] Yeah. [speaker002:] and we could put it in. [speaker004:] Mm hmm. [vocalsound] Mm hmm. [speaker002:] Yeah. Yeah. So, that's good. So, why don't we just, uh, um [disfmarker] I think starting [disfmarker] [pause] starting a w couple weeks from now, especially if you're not gonna be around for a while, we'll [disfmarker] we'll be shifting more over to some other [disfmarker] [vocalsound] other territory. But, uh, uh, [comment] uh, n not [disfmarker] not so much in this meeting about Aurora, but [disfmarker] but, uh, uh, maybe just, uh, quickly today about [disfmarker] maybe you could just say a little bit about what you've been talking about with Michael. And [disfmarker] and then Barry can say something about [pause] what [comment] [disfmarker] what we're talking about. [speaker003:] OK. So Michael Kleinschmidt, who's a PHD student from Germany, [vocalsound] showed up this week. He'll be here for about six months. 
And he's done some work using [vocalsound] an auditory model [pause] of, um, [vocalsound] human hearing, and [pause] using that f uh, to generate speech recognition features. And [pause] he did [vocalsound] work back in Germany [vocalsound] with, um, a toy recognition system [vocalsound] using, um, isolated [vocalsound] digit recognition [vocalsound] as the task. It was actually just a single layer neural network [vocalsound] that classified words [disfmarker] classified digits, [vocalsound] in fact. Um, and [pause] he tried that on [disfmarker] I think on some Aurora data and got results that he thought [pause] seemed respectable. And he w he's coming here to u u use it on a [vocalsound] uh, a real speech recognition system. So I'll be working with him on that. And, um, maybe I should say a little more about these features, although I don't understand them that well. The [disfmarker] I think it's a two stage idea. And, um, [vocalsound] the first stage of these features correspond to what's called the peripheral [vocalsound] auditory system. And [vocalsound] I guess that is like [vocalsound] a filter bank with a compressive nonlinearity. And [vocalsound] I'm I'm not sure what we have [@ @] in there that isn't already modeled in something like, [vocalsound] um, [pause] PLP. I should learn more about that. And then [vocalsound] the second stage [pause] is, um, [vocalsound] the most different thing, I think, from what we usually do. It's, um [disfmarker] [vocalsound] [vocalsound] it computes features which are, [vocalsound] um, [vocalsound] based on [disfmarker] sort of like based on diffe different w um, wavelet basis functions [vocalsound] used to analyze [vocalsound] the input. So th he uses analysis functions called [vocalsound] Gabor functions, um, [vocalsound] which have a certain [vocalsound] extent, um, [vocalsound] in time and in frequency. 
And [vocalsound] the idea is these are used to sample, [vocalsound] um, the signal in a represented as a time frequency representation. So you're [pause] sampling some piece of this time frequency plane. And, um, [vocalsound] that, [vocalsound] um, is [disfmarker] is interesting, cuz, [vocalsound] [@ @] for [disfmarker] for one thing, you could use it, [vocalsound] um, in a [disfmarker] a multi scale way. You could have these [disfmarker] instead of having everything [disfmarker] like we use a twenty five millisecond or so analysis window, [vocalsound] typically, um, and that's our time scale for features, but you could [disfmarker] [vocalsound] using this, um, basis function idea, you could have some basis functions which have a lot longer time scale and, um, some which have a lot shorter, and [vocalsound] so it would be like [pause] a set of multi scale features. So he's interested in, um [disfmarker] Th this is [disfmarker] because it's, um [disfmarker] there are these different parameters for the shape of these [vocalsound] basis functions, [vocalsound] um [disfmarker] [vocalsound] there are a lot of different possible basis functions. And so he [disfmarker] [vocalsound] he actually does [vocalsound] an optimization procedure to choose an [disfmarker] [vocalsound] an optimal set of basis functions out of all the possible ones. [speaker001:] Hmm. H What does he do to choose those? [speaker003:] The method he uses is kind of funny [disfmarker] is, [comment] [vocalsound] um, [vocalsound] he starts with [disfmarker] he has a set of M of them. Um, he [disfmarker] and then [pause] he uses that to classify [disfmarker] I mean, he t he tries, um, [vocalsound] using [pause] just M minus one of them. So there are M possible subsets of this [vocalsound] length M vector. He tries classifying, using each of the M [vocalsound] possible sub vectors. [speaker004:] Hmm. 
[speaker003:] Whichever sub vector, [vocalsound] um, works the [disfmarker] the best, I guess, he says [disfmarker] [vocalsound] the [disfmarker] the fe feature that didn't use was the most useless feature, [speaker002:] Y yeah. Gets thrown out. Yeah. [speaker003:] so we'll throw it out and we're gonna randomly select another feature [pause] from the set of possible basis functions. [speaker001:] Hmm! [speaker002:] Yeah. [speaker001:] So it's a [disfmarker] [speaker002:] So i so it's actuall [speaker001:] it's a little bit like a genetic algorithm or something in a way. [speaker005:] It's like a greedy [disfmarker] [speaker002:] Well, it's [disfmarker] it's much simpler. But it's [disfmarker] but it's [disfmarker] uh, it's [disfmarker] there's a lot [disfmarker] number of things I like about it, let me just say. [speaker001:] Greedy. [speaker002:] So, first thing, well, you're absolutely right. I mean, [vocalsound] i i [nonvocalsound] in truth, [pause] both pieces of this are [disfmarker] have their analogies in stuff we already do. But it's a different take [vocalsound] at how to approach it and potentially one that's m maybe a bit more systematic than what we've done, uh, and a b a bit more inspiration from [disfmarker] from auditory things. So it's [disfmarker] so I think it's a neat thing to try. The primary features, [vocalsound] um, are in fact [disfmarker] Yeah, essentially, it's [disfmarker] it's, uh, you know, PLP or [disfmarker] or mel cepstrum, or something like that. You've [disfmarker] you've got some, [vocalsound] uh, compression. We always have some compression. We always have some [disfmarker] you know, the [disfmarker] the [disfmarker] the kind of filter bank with a kind of [vocalsound] [vocalsound] quasi log scaling. 
Um, [vocalsound] if you put in [disfmarker] if you also include the RASTA in it [disfmarker] i RASTA [disfmarker] the filtering being done in the log domain [vocalsound] has an AGC like, uh, characteristic, which, you know, people typi typically put in these kind of, [vocalsound] uh, [pause] um, [vocalsound] uh, auditory front ends. So it's very, very similar, uh, but it's not exactly the same. Um, I would agree that the second one is [disfmarker] is somewhat more different but, [vocalsound] um, it's mainly different in that the things that we have been doing like that have been [disfmarker] [vocalsound] um, had a different kind of motivation and have ended up with different kinds of constraints. So, for instance, if you look at the LDA RASTA stuff, [vocalsound] you know, basically what they do is they [disfmarker] they look at the different eigenvectors out of the LDA and they form filters out of it. Right? And those [pause] filters have different, uh, kinds of temporal extents and temporal characteristics. And so in fact they're multi scale. But, they're not sort of systematically multi scale, like "let's start here and go to there, and go to there, and go to there", and so forth. It's more like, [vocalsound] you run it on this, you do discriminant analysis, and you find out what's helpful. [speaker003:] I it's multi scale because you use several of these in parallel, is that right? [speaker002:] Yeah. They use several of them. [speaker003:] Of [disfmarker] OK. [speaker002:] Yeah. Uh, I mean, you don't have to but [disfmarker] but [disfmarker] but, uh, Hynek has. Um, but it's also, uh [disfmarker] Hyn when Hynek's had people do this kind of LDA analysis, they've done it on frequency direction and they've done it on the time direction. I think he may have had people sometimes doing it on both simultaneously [disfmarker] some two D [disfmarker] and that would be the closest to these Gabor function kind of things. 
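[Editor's illustration: the RASTA filtering mentioned here, band-pass filtering each channel's log-energy trajectory so that slow multiplicative channel effects become additive and are suppressed, might be sketched as below. The coefficients follow the commonly cited causal RASTA design (numerator taps 0.1*[2, 1, 0, -1, -2], pole at 0.98); the function name is just for illustration.]

```python
import numpy as np

def rasta_filter(log_spectrum):
    """Band-pass filter each spectral channel's log-energy trajectory
    over time.  Because the filter has zero DC gain, a constant
    (channel) component is removed, which is the AGC-like behavior
    of filtering in the log domain mentioned above.
    log_spectrum: array of shape (n_frames, n_channels)."""
    b = 0.1 * np.array([2.0, 1.0, 0.0, -1.0, -2.0])  # FIR numerator
    a = 0.98                                          # single IIR pole
    out = np.zeros_like(log_spectrum, dtype=float)
    for ch in range(log_spectrum.shape[1]):
        x = log_spectrum[:, ch]
        y_prev = 0.0
        for n in range(len(x)):
            acc = 0.0
            for k, bk in enumerate(b):
                if n - k >= 0:
                    acc += bk * x[n - k]
            y_prev = acc + a * y_prev
            out[n, ch] = y_prev
    return out
```

A constant input decays toward zero at the output (DC is rejected), while modulations at speech-like rates pass through.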
Uh, but I don't think they've done that much of that. And, uh, the other thing that's interesting [disfmarker] the [disfmarker] the, uh [disfmarker] the feature selection thing, it's a simple method, but I kinda like it. Um, [vocalsound] there's a [disfmarker] [pause] a old, old method for feature selection. I mean, [pause] eh, uh, I remember people referring to it as old when I was playing with it twenty years ago, so I know it's pretty old, uh, called Stepwise Linear Discriminant Analysis in which you [disfmarker] which [disfmarker] I think it's used in social sciences a lot. So, you [disfmarker] you [disfmarker] you [disfmarker] you pick the best feature. And then [vocalsound] you take [disfmarker] y you find the next feature that's the best in combination with it. And then so on and so on. And what [disfmarker] what Michael's describing seems to me much, much better, because the problem with the stepwise discriminant analysis is that you don't know that [disfmarker] you know, if you've [vocalsound] picked the right set of features. Just because something's a good feature doesn't mean that you should be adding it. So, [vocalsound] um, [pause] uh, here at least you're starting off with all of them, and you're [vocalsound] throwing out useless features. I think that's [disfmarker] that seems, uh [disfmarker] [vocalsound] that seems like a lot better idea. Uh, you're always looking at things in combination with other features. Um, so the only thing is, of course, there's this [disfmarker] this artificial question of [disfmarker] of, uh, [vocalsound] exactly how you [disfmarker] how you a how you assess it and if [disfmarker] if your order had been different in throwing them out. I mean, it still isn't necessarily really optimal, but it seems like a pretty good heuristic. [speaker005:] Hmm. [speaker002:] So I th I think it's [disfmarker] it's [disfmarker] I think it's kinda neat stuff. 
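[Editor's illustration: the two selection schemes contrasted above, the leave-one-out elimination with random replacement and the old stepwise (forward) selection, could be sketched as below. The scoring function is a stand-in for whatever classification accuracy the real systems measure, and all names are hypothetical.]

```python
import random

def eliminate_and_replace(features, candidates, score, n_rounds, rng):
    """Elimination with replacement, as described above: keep a working
    set of M features; each round, score every size-(M-1) sub-vector;
    the left-out feature of the best-scoring sub-vector is the most
    useless one, so drop it and draw a random replacement from the
    pool of unused candidate basis functions."""
    current = list(features)
    pool = [c for c in candidates if c not in current]
    for _ in range(n_rounds):
        if not pool:
            break
        best_score, useless_idx = None, None
        for i in range(len(current)):
            s = score(current[:i] + current[i + 1:])
            if best_score is None or s > best_score:
                best_score, useless_idx = s, i
        del current[useless_idx]
        current.append(pool.pop(rng.randrange(len(pool))))
    return current

def stepwise_forward(candidates, score, k):
    """The older stepwise scheme, for contrast: greedily add whichever
    candidate scores best in combination with the features already
    chosen; earlier choices are never revisited, which is the weakness
    discussed above."""
    chosen, remaining = [], list(candidates)
    while remaining and len(chosen) < k:
        best = max(remaining, key=lambda c: score(chosen + [c]))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

The elimination scheme always evaluates features in combination with the rest of the current set, whereas forward selection can lock in a feature that looks good early but is redundant later; as noted above, neither is guaranteed optimal.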
And [disfmarker] and [disfmarker] and, uh, the thing that I wanted to [disfmarker] to add to it also was to have us use this in a multi stream way. [speaker005:] Hmm. [speaker002:] Um, so [disfmarker] so that, um, [vocalsound] when you come up with these different things, [vocalsound] and these different functions, [vocalsound] you don't necessarily just put them all into one huge vector, but perhaps [vocalsound] you [vocalsound] have some of them in one stream and some of them in another stream, and so forth. And, um, um, [comment] um [disfmarker] And we've also talked a little bit about, uh, [vocalsound] uh, Shihab Shamma's stuff, in which [vocalsound] you [disfmarker] the way you look at it is that there's these different mappings and some of them emphasize, uh, upward moving, [vocalsound] uh, energy and fre and frequency. And some are emphasizing downward and [vocalsound] fast things and slow things and [disfmarker] and [pause] so forth. So. So there's a bunch of stuff to look at. But, uh, I think we're sorta gonna start off with what [vocalsound] he, uh, came here with and branch out [disfmarker] [vocalsound] branch out from there. And his advisor is here, too, [vocalsound] at the same time. So, he'll be another [pause] interesting source of [pause] wisdom. [speaker005:] Hmm. [speaker002:] So. [speaker005:] As [disfmarker] as we were talking about this I was thinking, [vocalsound] um, [vocalsound] whether there's a relationship between [disfmarker] [vocalsound] um, [vocalsound] [vocalsound] between Michael's approach to, uh, some [disfmarker] some sort of optimal brain damage or optimal brain surgeon on the neural nets. [speaker002:] Yeah. [speaker005:] So, like, if we have, [speaker003:] Hmm. 
[speaker005:] um [disfmarker] we have our [disfmarker] we have our RASTA features and [disfmarker] and presumably the neural nets are [disfmarker] are learning some sort of a nonlinear mapping, [vocalsound] uh, from the [disfmarker] the [disfmarker] the features [vocalsound] to [disfmarker] to this [disfmarker] this probability posterior space. [speaker002:] Mm hmm. [speaker005:] Right? And, um [disfmarker] [vocalsound] [vocalsound] and each of the hidden units is learning some sort of [disfmarker] some sort of [disfmarker] some sort of pattern. Right? And it could be, like [disfmarker] [vocalsound] like these, um [disfmarker] these auditory patterns that Michael [pause] is looking at. And then when you're looking at the [disfmarker] [vocalsound] the, uh, [pause] um, [vocalsound] the best features, [vocalsound] you know, you can take out [disfmarker] you can do the [disfmarker] do this, uh, brain surgery by taking out, [vocalsound] um, hidden units that don't really help at all. [speaker002:] Mm hmm. [speaker005:] And this is k sorta like [disfmarker] [speaker002:] Or the [disfmarker] or features. Right? [speaker005:] Yeah. [speaker002:] I mean, y actually, you make me think a [disfmarker] a very important point here is that, um, [vocalsound] if we a again try to look at how is this different from what we're already doing, [vocalsound] uh, there's a [disfmarker] a, uh [disfmarker] [vocalsound] a nasty argument that could be made th that it's [disfmarker] it's not different at [disfmarker] at all, because, uh [disfmarker] if you ignore the [disfmarker] the selection part because we are going into a [disfmarker] a very powerful, [vocalsound] uh, nonlinearity that, uh, in fact is combining over time and frequency, and is coming up with its own [disfmarker] you know, better than Gabor functions [speaker005:] Mm hmm. [speaker002:] its, you know, neural net functions, its [disfmarker] [comment] [vocalsound] whatever it finds to be best. 
Um, so you could argue that in fact it [disfmarker] But I [disfmarker] I don't actually believe that argument because I know that, um, [vocalsound] you can, uh [disfmarker] computing features is useful, even though [pause] in principle you haven't [pause] [vocalsound] added anything [disfmarker] in fact, you subtracted something, from the original waveform [disfmarker] You know, uh, if you've [disfmarker] you've processed it in some way you've typically lost something [disfmarker] some information. And so, [vocalsound] you've lost information and yet it does better with [disfmarker] [vocalsound] with features than it does with the waveform. So, uh, I [disfmarker] I know that i sometimes it's useful to [disfmarker] [pause] to constrain things. So that's [vocalsound] why it really seems like the constraint [disfmarker] in [disfmarker] in all this stuff it's the constraints that are actually what matters. Because if it wasn't [pause] the constraints that mattered, then we would've completely solved this problem long ago, because long ago we already knew how to put waveforms into powerful statistical mechanisms. So. [speaker004:] Yeah. Well, if we had infinite processing power and [pause] data, [comment] I guess, using the waveform could [disfmarker] [speaker005:] Right. [speaker002:] Yeah Uh, then it would work. Yeah, I agree. Yeah. There's the problem. [speaker004:] So, that's [disfmarker] [speaker002:] Yeah. Then it would work. But [disfmarker] but, I mean, i it's [disfmarker] [vocalsound] With finite [pause] of those things [disfmarker] I mean, uh, we [disfmarker] we have done experiments where we literally have put waveforms in and [disfmarker] and [disfmarker] and, uh, [speaker004:] Mm hmm. [speaker002:] we kept the number of parameters the same and so forth, and it used a lot of training data. 
And it [disfmarker] and it [disfmarker] it, uh [disfmarker] not infinite but a lot, and then compared to the number parameters [disfmarker] and it [disfmarker] it, uh [disfmarker] it just doesn't do nearly as well. [speaker004:] Mm hmm. [speaker002:] So, anyway the point is that you want to suppress [disfmarker] it's not just having the maximum information, you want to suppress, [vocalsound] uh, the aspects of the input signal that are not helpful for [disfmarker] for the discrimination you're trying to make. So. So maybe just briefly, uh [disfmarker] [speaker005:] Well, that sort of segues into [pause] what [disfmarker] what I'm doing. [speaker002:] Yeah. [speaker005:] Um, [vocalsound] so, uh, the big picture is k um, [vocalsound] come up with a set of, [vocalsound] uh, intermediate categories, then build intermediate category classifiers, then do recognition, and, um, improve speech recognition in that way. Um, so right now I'm in [disfmarker] in the phase where [vocalsound] I'm looking at [disfmarker] at, um, deciding on a initial set of intermediate categories. And [vocalsound] I'm looking [vocalsound] for data data driven [pause] methods that can help me find, [vocalsound] um, a set of intermediate categories [vocalsound] of speech that, uh, will help me to discriminate [pause] later down the line. And one of the ideas, [vocalsound] um, that was to take a [disfmarker] take a neural net [disfmarker] train [disfmarker] train an ordinary neural net [vocalsound] to [disfmarker] [vocalsound] uh, to learn the posterior probabilities of phones. And so, um, at the end of the day you have this neural net and it has hidden [disfmarker] [vocalsound] hidden units. And each of these hidden units is [disfmarker] [vocalsound] um, is learning some sort of pattern. And so, um, what [disfmarker] what are these patterns? [speaker001:] Hmm. [speaker005:] I don't know. 
Um, and I'm gonna to try to [disfmarker] [vocalsound] to look at those patterns [vocalsound] to [disfmarker] to see, [vocalsound] um, [vocalsound] from those patterns [disfmarker] uh, presumably those are important patterns for discriminating between phone classes. And maybe [disfmarker] [vocalsound] maybe some, uh, intermediate categories can come from [vocalsound] just looking at the patterns of [disfmarker] [vocalsound] um, that the neural net learns. [speaker002:] Be before you get on the next part l let me just point out that s there's [disfmarker] there's a [disfmarker] a pretty nice [comment] [vocalsound] relationship between what you're talking about doing and what you're talking about doing there. Right? [speaker005:] Yeah. [speaker002:] So, [vocalsound] it seems to me that, you know, if you take away the [disfmarker] the [disfmarker] [pause] the difference of this [pause] primary features, [vocalsound] and, say, you use [disfmarker] as we had talked about maybe doing [disfmarker] you use P RASTA PLP or something for the [disfmarker] the primary features, [vocalsound] um, then this feature discovery, [pause] uh, uh, thing [vocalsound] is just what he's talking about doing, too, except that he's talking about doing them in order to discover [pause] intermediate categories that correspond [vocalsound] to these [disfmarker] uh, uh, what these sub features are [disfmarker] are [disfmarker] are [disfmarker] are showing you. And, um, [vocalsound] the other difference is that, um, [vocalsound] he's doing this in a [disfmarker] in a multi band setting, which means that he's constraining himself [vocalsound] to look across time in some f relatively limited, uh, uh, spectral extent. Right? And whereas in [disfmarker] in this case you're saying "let's just do it unconstrained". So they're [disfmarker] they're really pretty related and maybe they'll be [disfmarker] at some point where we'll see the [disfmarker] the connections a little better [speaker003:] Hmm. 
[speaker002:] and [vocalsound] connect them. [speaker005:] Mm hmm. Um. Yeah, so [disfmarker] so that's the [disfmarker] that's the first part [disfmarker] uh, one [disfmarker] one of the ideas to get at some [disfmarker] [vocalsound] some patterns of intermediate categories. Um, [vocalsound] the other one [pause] was, [vocalsound] um, to, [vocalsound] uh, come up with a [disfmarker] a [disfmarker] a model [disfmarker] [comment] um, a graphical model, [vocalsound] that treats [pause] the intermediate categories [vocalsound] as hidden [disfmarker] hidden variables, latent variables, that we don't know anything about, but that through, [vocalsound] um, s statistical training and the EM algorithm, [vocalsound] um, at the end of the day, [vocalsound] we have, um [disfmarker] we have learned something about these [disfmarker] these latent, um [disfmarker] latent variables which happen to correspond to [vocalsound] intermediate categories. Um. [vocalsound] [nonvocalsound] Yeah, and so those are the [disfmarker] the two directions that I'm [disfmarker] I'm looking into right now. And, uh, [vocalsound] um [disfmarker] [vocalsound] [vocalsound] Yeah. I guess that's [disfmarker] that's it. [speaker002:] OK. Should we do our digits and get ou get our treats? [speaker005:] Oh, tea time? [speaker002:] Yeah. It's kind of like, you know, the little rats with the little thing dropping down to them. [speaker001:] That's ri [speaker002:] We do the digits and then we get our treats. [speaker005:] Oops. [speaker001:] OK. [speaker002:] OK. [speaker001:] Yeah, I do it My head's too big, also. [speaker004:] Or, I like it, also, like this. [speaker002:] OK Yeah. [speaker005:] Yeah I [disfmarker] I just tried it. [speaker004:] So you see the [disfmarker] [speaker005:] I think my [pause] head is too big. [speaker004:] So. 
[speaker006:] And there is a problem that you can hear it come and go, [speaker001:] It's too ba [speaker006:] cuz when you turn your head, then [pause] you drift away from the microphone. [speaker005:] Oh yeah. [speaker003:] Yeah. [speaker001:] Oh, that's true. [speaker002:] OK [pause] so [pause] let's uh [speaker005:] The distance isn't [disfmarker] [speaker001:] course that's true of a lapel as well. [speaker002:] Let's discuss agenda items. [speaker004:] I should have brought more [disfmarker] [speaker002:] What did we have? [speaker006:] Yes. [speaker002:] uh [pause] You [disfmarker] you were announce some things [speaker004:] I have a [pause] couple things about speaker form [disfmarker] about forms in general. [speaker002:] OK, forms [pause] is one thing uh, there's status on [pause] on the uh [vocalsound] transcription discussion [pause] [pause] which will take us about thirty seconds [speaker004:] Stuff. [speaker002:] uh [pause] and then uh Jose [pause] as usual [pause] uh, the one among us who's actually doing a bunch of things on this uh He has [pause] this and I think we can have some [disfmarker] I was just glancing through it, so I think we have something to discuss [pause] about. [speaker003:] Mm hmm. [speaker002:] um and um We just sent in that uh NSF pre proposal but I don't think there's much to say about that except we've sent it in and we'll see what happens um Anybody? Anything? Anything else [pause] going on? [speaker006:] That's it. [speaker002:] No. OK. So Why don't we do it in that order? [speaker004:] OK. [speaker002:] Go ahead. [speaker004:] I uh [pause] probably should have printed out more of these. So one of the things that I've been doing with the digits forms is I've been adding more and more speaker specific information to them and it's getting pretty crowded and you have to fill it out each time. So, what I thought was, it would be nice to have a single speaker form that you fill out once. 
And then, on the digits form you just have to put your name down and you can look it up from the speaker form. so, I had a first pass of it, and I [pause] spoke with Jane a bit about how to specify what language and education and that sort of stuff and so I have a first pass at it. And I was uh just wondering if people had any comments on information that should or shouldn't be on it. And I [pause] only did one copy which was really stupid of me. um So I'll just sort of say what it's on and then I can hand it around. So the top of it is [pause] uh, Name, Sex Email or Other Contact Information. And I put the contact information just in case we have people with the same name, so that you can distinguish them, and then also just to have a separate contact from the consent form. And then, uh, we talked a little bit about whether to do [disfmarker] how we get at the background. And we could do profession, but it's so uniform that we figured what was more important was [pause] actually the education level. um At least that would get at some idea of their background. So I have a [disfmarker] And s [speaker006:] Well [disfmarker] and it's also relevant to status. I c I can insert that. [speaker004:] and relevant to status. [speaker003:] Mm hmm. [speaker004:] So that [pause] what I have is uh [speaker005:] Status? [speaker006:] Which might in af affect the discourse aspects. So, go ahead. [speaker005:] Oh [disfmarker] oh, stats kind of status, OK. [speaker006:] His [disfmarker] Uh huh. Yeah. [speaker004:] So [disfmarker] uh, I have uh, five things that you can circle, [speaker006:] Categories. [speaker004:] Undergrad, Grad, Post doc, Professor, and then Other, with a colon and a place where you can fill in if there's some Other. So I think the problem with that is that there's not a necessarily easy equivalent for our visitors, who are [pause] not used to the [pause] US education levels. and [pause] may need some [pause] help to translate what that means. 
[speaker002:] How fine a resolution do you need on that for this? [speaker003:] Is the question. [speaker002:] I mean [pause] maybe not so much. [speaker004:] Mm hmm. [speaker005:] Student, Non student. [speaker006:] Oh [pause] I think undergrad [pause] is useful. [speaker001:] Can you just write something like "position"? and people can fill in their own [pause] you know, "Post doc" or [pause] "visitor" or whatever. [speaker002:] P prone. Yeah. [speaker005:] Sitting. [speaker004:] Well, the problem with that is um, I think the same as the problem of profession. That it [disfmarker] it will be [disfmarker] there will be a tendency all [disfmarker] for everyone to put down "speech researcher" or [disfmarker] or something at that granularity. [speaker001:] oh. [speaker004:] So I don't think there's a problem asking the education level, I th At least I don't feel like there is. um I think the only question is whether the [disfmarker] what categories should there be. [speaker006:] Yeah. I think that [disfmarker] so Uh [disfmarker] did [disfmarker] are [disfmarker] were you suggesting that it should be more fine grained than this? [speaker002:] No, no. Less. [speaker004:] Less. [speaker006:] Less? What would you suggest as a [disfmarker] as a change? [speaker002:] Um, Hmm um well, do you think there's [disfmarker] I don't really know. I mean, were you thinking, uh, that it would be useful for some research later to know say, you said undergrad for instance, so, undergrad versus grad, [speaker006:] Mm hmm. Mm hmm. [speaker002:] do you think that's important? [speaker006:] I do, and the reason is because I think that um, well having been a grad [pause] student as, yeah, a as others here, at [disfmarker] at Berkeley, that, um, I think there's almost like a quantum leap from undergrad to grad in terms of like the status. [speaker002:] OK. [speaker006:] I mean, I mean you notice it in various ways, like simply [pause] trying to meet with a professor during office hours. 
You know, in [disfmarker] in my department it was always the case grad students had a huge [pause] amount of different priority, and were treated as equals, g and [disfmarker] and help helpers, you know, in [disfmarker] in research, uh uh c uh [pause] roles and things that they [disfmarker] that they [disfmarker] In terms of like the s the social dynamics of interactions, I would expect them to be less deferent. [speaker002:] Mm hmm. [speaker006:] and [pause] um maybe [pause] less assertive of their own views and things like that. [speaker002:] I see. [speaker006:] I don't [pause] think that we're going to have a lot of undergrads, [speaker004:] Right. [speaker006:] unless we [pause] branch out to different types of meetings, but [disfmarker] [speaker005:] How about [disfmarker] You could say, if you're giving a talk, how likely are you to be challenged? [speaker006:] This is the dimension. Well, this is the dimension. [speaker004:] Well, one of the [disfmarker] one of the criteria I had for designing this form, is that I didn't want a separate instruction sheet. [speaker006:] Yeah, but [disfmarker] One [disfmarker] one dimension. [speaker003:] Yeah. [speaker004:] And so we'll get to that in a moment when we talk about language also. [speaker006:] Mm hmm. [speaker004:] Is I did not want to have to, you know come down and quiz each person and have a uh social dynamics expert being the one who's filling out the answers. I want it to be self evaluating. You can give it to them and they can understand what you're asking. [speaker006:] Mm hmm. And it's non threatening, I mean, this [disfmarker] this compared to, you know, wh exactly what level of professorship, and [disfmarker] [vocalsound] and [disfmarker] [speaker004:] Yep. [speaker005:] Well, sounds like you really need to know something kind of like that. You need to know sort of [vocalsound] Bas in in this meeting, sort of what is your social status? [speaker004:] That's a separate question. 
So, we'll [disfmarker] We'll get to that in a moment. [speaker006:] Well, I agree with him, though. I [disfmarker] Yeah. But I [disfmarker] I agree that this is that w the rea the motivation was things that could affect social dynamics in ways that are relevant to the research without being threatening to the person. [speaker005:] But [disfmarker] but [disfmarker] but it's gonna depend on who's [disfmarker] [speaker004:] And this particular form [disfmarker] I know. So, this particular form is speaker [disfmarker] is dependent only on the speaker, [speaker005:] Yeah. [speaker004:] not on the meeting. [speaker005:] Mm hmm. [speaker004:] You could imagine then having another form [vocalsound] that you might want to do during a meeting to get out the relationships [disfmarker] [speaker005:] Mm hmm. [speaker003:] Yeah. [speaker004:] But then that's interfering with the meeting. So, what I like about this information is you give it to them once, and they fill it out once, and [disfmarker] and then you have that information. [speaker006:] well, and we [disfmarker] also discussed the idea of having the separate D T Ds would handle maybe, s specific [disfmarker] meeting specific things that might be relevant. [speaker005:] I g [speaker002:] I DTD? [speaker006:] uh, the data, so that, you know, his XML thing? You got the Data Type Definition [disfmarker] the Document Type Definition p part that it's [disfmarker] can s be used for relational things, [speaker002:] Oh. Oh, OK. [speaker003:] Ah. Yeah. Yeah. DTD. [speaker006:] yeah. [speaker005:] I guess [disfmarker] I guess it seems to me like you know I [disfmarker] trying to deduce information about the person's status from the meeting independent form [pause] will be useless because [disfmarker] [speaker002:] See [speaker004:] Not at all. I mean, the fact that one person's an undergrad and the other's a full professor is interesting. 
So, the question is how much of this information um, can y [speaker005:] But, that will depend on what the meeting's about, right? [speaker004:] Well, absolutely! [speaker005:] Let's say the undergrad was an expert in physics, and [disfmarker] [speaker004:] But [disfmarker] but you want to have that information. [speaker003:] Yeah. [speaker004:] Right? So, if you don't have that information, how are you gonna do anything? [speaker005:] Mm hmm. I guess I'm wondering how can you make a conclusion based on that, if you don't know about the meeting? [speaker002:] yeah [speaker006:] Where'd [disfmarker] This is just [disfmarker] Well, you know, we're [disfmarker] we're talking at [disfmarker] at a general level of description. [speaker002:] yeah [speaker006:] You can always get more specific. And, it may be [disfmarker] [speaker004:] I mean, so what would you suggest? [speaker005:] I don't know. I'm just [disfmarker] That's why I was [pause] wondering about if you can put something about the d [speaker002:] Right. [speaker001:] Can you repeat the categories you had? [speaker003:] Yeah. [speaker001:] Maybe they're fine. [speaker004:] Undergrad, Grad, Post doc, Professor, Other [speaker001:] So that, yeah, I think that's fine except the one [pause] thing is there are a lot of ICSI v like I don't know what I would be here, just I'm not a professor I'm not a [disfmarker] So [disfmarker] so anything that, like a Visitor r ICSI Visitor or, ICSI y [speaker004:] oh, that's right, you know I was saying "Professor". [speaker001:] I mean a Visiting S Researcher or whatever. [speaker004:] Uh huh. Yeah, when I said "Professor", I guess what I was thinking in my mind was more "PHD but not Post doc". [speaker002:] How about [disfmarker] Yeah, how about Post PHD researcher or something [speaker001:] So if [disfmarker] if we can put something [disfmarker] [speaker004:] and [disfmarker] Or maybe Post doc [pause] change "Post doc" to "PHD"? 
[speaker001:] Yeah, that [disfmarker] then [disfmarker] then I think what you have is great. [speaker003:] Yeah. [speaker002:] Right? [speaker001:] for [disfmarker] for that purpose and [disfmarker] [speaker006:] So, change [disfmarker] Would we change "Post doc" to [pause] "Post PHD Researcher"? [speaker001:] or just [disfmarker] You could say [disfmarker] [speaker004:] Because I think PHD and Professor are d are distinct [speaker003:] Yeah. [speaker004:] aren't they? [speaker002:] Yeah. [speaker001:] Wait, what's a PHD? [speaker003:] It's possible. [speaker001:] Like, if you have a PHD and you're [pause] hanging around here, and you're not a professor and you're not a post doc, then that's like what we are [vocalsound] so There's a bunch of people like that. [speaker004:] Someone who has a PHD. [speaker002:] Yeah so for [disfmarker] for me there's [disfmarker] there's a [disfmarker] There are people [disfmarker] Um, this got very tricky in fact when we got involved with the Spanish program actually because [pause] uh, when we said, in our original forms, "post doc" what we were used to, from the German program and from the US standard, when you say "post doc" it meant somebody who had just gotten a PHD who was doing one year someplace. [speaker003:] Yeah. [speaker001:] Right. [speaker002:] But when we did the Spanish call, many people said "oh I want to be uh [disfmarker] have a post doc slot" and they were twenty years out [speaker003:] Yeah. Yeah. [speaker001:] Right. [speaker003:] Yeah. [speaker002:] because they were "post" their [pause] "doc" [speaker001:] Right. Yeah. [speaker003:] Yeah. [speaker001:] So, you can say Post PHD Researcher or what if there's one more category that's sort of general like that. [speaker002:] so [speaker003:] Yeah. Sure. [speaker002:] I [disfmarker] I [disfmarker] I think that [disfmarker] [speaker003:] Right. 
[speaker002:] It might be hard to do a finer thing than that because whether somebody is going to be dominant in a meeting is really I think it's going to be so clouded by everything else [disfmarker] Someone who's just gotten their PHD uh, could be very very uh, strongly opinionated about something and somebody who's been twenty years out could be shy, you know, s [speaker004:] So, maybe Undergrad, Grad, Post PHD and Other? [speaker006:] Mm hmm. [speaker003:] Yeah. [speaker006:] That's true. [speaker002:] What's [pause] "Other"? [speaker004:] um, other [disfmarker] [speaker005:] For the European [disfmarker] [speaker006:] It's just to allow. [speaker004:] If i Instead of [disfmarker] [comment] instead of [disfmarker] [comment] instead of Professor or instead of Post doc? [speaker001:] You I think you could keep Professor, but you could say Post PHD Researcher or something. [speaker003:] Yep. [speaker006:] It could be st It could be [disfmarker] That's true. [speaker001:] No, instead of Post [disfmarker] Post doc or Post PHD, just wrap all the people that are post PHD and non professors together. [speaker002:] so you want Professor to be in a separate category, you think [disfmarker] you think uh [disfmarker] think [disfmarker] think we [disfmarker] [speaker004:] So, as I said Undergrad [disfmarker] [speaker001:] I think professors probably want it I don't [vocalsound] care. [speaker004:] Undergrad, Grad, Post PHD, Professor, Other? [speaker001:] Yeah. Well, yeah the Post PHD one is the one that can be, like Post Post doc or Post PHD Researcher or whatever. [speaker005:] Mm hmm. [speaker001:] Yeah. [speaker004:] Right. [speaker001:] Then it'd be fine. I think it's fine. 
[speaker002:] So, I'm te I'm tending to push toward simpler even though I know that more detailed means more [pause] potential information for someone later who's doing research [pause] in [disfmarker] in this area I [disfmarker] uh [disfmarker] The [disfmarker] the [disfmarker] the thing [pause] making me lean toward simpler where possible is that uh [pause] the more a thing [disfmarker] Many of the meetings that we record uh, may be the kind where we just record somebody once even though [pause] we want to get a lot of data that has uh, many meetings with the same people. And so the more forms we have and the more lines in the forms that's th the more overhead for that one time thing makes it harder to do. [speaker004:] Well, the whole reason that I'm doing it this way is [disfmarker] is uh [disfmarker] my experience so far has been the exact reverse. That [disfmarker] [speaker002:] Right. [pause] That's what we've done so far. [speaker004:] Right. [speaker002:] Right. And so I'm saying I expect us to do both kinds. [speaker004:] I guess we could have another type of form [pause] for less frequent. [speaker001:] Yeah. [speaker006:] Well, one thing, so you understand he [disfmarker] he d he wants you to do this only once for each person? [speaker002:] Yes. [speaker006:] OK, OK. [speaker002:] So I'm saying there are going to be [disfmarker] What I imagine once we've collected a lot of data, a chunk of it will be uh, the same people many times [speaker006:] So. [speaker002:] and another chunk of it will be people [disfmarker] random meetings that we got with from different people [speaker006:] Mm hmm. Mm hmm. [speaker003:] Yeah. [speaker006:] Mm hmm. [speaker002:] and I think it's use We've talked about this before, it's useful to have both kinds of data. I think. [speaker004:] Well I think that if [disfmarker] if we're planning to do that, then we should probably have another set of forms. [speaker002:] OK. 
[speaker004:] You know, a single form that's the consent form and the speaker information form and not have them do digits. [speaker002:] OK. [speaker005:] No, I like the idea of putting down the [disfmarker] the status information because I think you probably can get a lot of interest there can be a lot of interesting research on that. [speaker006:] Mm hmm. [speaker003:] Mm hmm. [speaker005:] I'm wondering could we add something to the form that gets filled out at each meeting that would somehow [disfmarker] [speaker004:] This is a topic I want to bring up [pause] after I get through this. [speaker005:] Yeah. OK. [speaker004:] So, that [disfmarker] that [disfmarker] that's another form that I want to discuss. um, with the digits forms. [speaker005:] Well [disfmarker] I guess I was thinking maybe, you know how you were taking information off of the digits and putting it onto that? [speaker004:] So, s can we get [disfmarker] Mm hmm. [speaker005:] Could we put one more thing on here maybe? [speaker004:] That's what I want to talk about in a [disfmarker] Yeah, I'm [disfmarker] I have that as a topic, also. [speaker005:] OK. [speaker004:] So, let's finish with the speaker form [disfmarker] speaker form and then we'll get over [pause] o get over to that one. [speaker005:] Cuz you said separate forms so I thought maybe [disfmarker] OK. [speaker004:] So, Education Level, Undergrad, Grad, Post PHD, Professor, Other. OK? [speaker003:] OK. [speaker004:] um and then I put "Optional" with a big "Optional" in parentheses, Age. [speaker001:] Wait, why is that optional? [speaker004:] Because I think a lot of people are sensitive to it and won't want to write it down. [speaker003:] Yeah. [speaker004:] So I don't want to make them feel like they have to. [speaker006:] Yeah. [speaker001:] I don't know. It [disfmarker] it would be very good to get age for a lot of purposes. [speaker003:] Yeah. [speaker001:] I mean a lot of corpora [disfmarker] [speaker004:] Uh, sure. 
But if someone doesn't want to write it down I don't want them to say you can't record me [disfmarker] [speaker001:] yeah, so we ca we have these age ranges or something? [speaker005:] Oh, right. [speaker006:] Well it [disfmarker] [speaker002:] Well, wait [disfmarker] wait [disfmarker] wait [disfmarker] wait a minute, [speaker001:] You know, like [disfmarker] [speaker002:] um, It's not illegal or anything right, it's not pushing on anything unethical or illegal, whatever, to ask for their age, [speaker001:] No, not at all, [speaker002:] right? [speaker001:] every [disfmarker] most corpora have that, you know, information. [speaker002:] So [disfmarker] I suggest you ask for their age and if they say I don't want to give it say OK. [speaker006:] In [disfmarker] in like the London Lund [comment] they [disfmarker] they [disfmarker] the researchers themselves estimate the age afterw after the fact. [speaker003:] Yeah. [speaker006:] So they say, you know, uh middle thirties, you know [pause] fifties or sixties, [speaker003:] Yeah. [speaker006:] and they [disfmarker] and they just [disfmarker] they estimate for the person But [disfmarker] [speaker004:] Well, I think that just having on the form [disfmarker] saying Optional, Age, again means I don't have to be sitting here and explaining the form every time we do this. [speaker001:] yeah, that's OK. [speaker006:] Yeah. And it isn't [disfmarker] it isn't that boldf [speaker004:] And having a little instruction sheet and say "If you're in this age range do this, and if you're in this range do that." [speaker002:] I see. [speaker006:] Yeah. [speaker005:] Yeah. [speaker006:] And it's [disfmarker] and it isn't really that prominent. [speaker004:] So. [speaker005:] How about [disfmarker] [speaker006:] It's [disfmarker] you know his "Optional" is not really boldface, but [disfmarker] [speaker005:] How about "optional but highly desirable"? [speaker006:] Please, please! [speaker003:] Yeah. [speaker001:] Age or Approximate Age.
So somebody can say "thirties" if they're thirty nine. [speaker006:] Actually, "approximate"? [speaker002:] O opt age is optional but those who don't give it will be given the uncomfortable microphones? [speaker004:] Yeah, right. [speaker006:] That's right. [speaker001:] Well, those who don't give it we will be estimating your age for you [speaker004:] Mm hmm. [speaker005:] Yeah, [vocalsound] If not given we will estimate. [speaker001:] so maybe you'll prefer to [disfmarker] [speaker003:] I [disfmarker] I think is not [disfmarker] [speaker002:] The [disfmarker] the mean of the estimates from the [pause] group will be used [speaker005:] Yeah. [speaker003:] I think it's not necessary to put uh "Optional". [speaker001:] Yeah, people can leave it blank if they [disfmarker] [speaker003:] I think people decide uh, in that moment. [speaker001:] I [disfmarker] I think people in Speech are gonna wanna [speaker003:] Because if you put "Optional," I think you [disfmarker] you give eh [disfmarker] [speaker005:] They're very strong willed. You know? People [disfmarker] [vocalsound] [vocalsound] He [speaker004:] Well, I think maybe that's because no one here is sensitive about their age. [speaker001:] They can always lie. [speaker004:] But, I would rather let them leave it blank than lie. [speaker001:] OK, in some sense you could say that "Please leave blank but [disfmarker] but don't lie." [speaker004:] So that's why I want to put [disfmarker] That's why I want to put s "Optional, Age", so that if they don't want to put it, they leave it blank. [speaker002:] They might lie anyway. [speaker003:] Yeah. [speaker004:] But if you put "Optional", won't they just leave it blank rather than lie? [speaker002:] I don't know. [speaker004:] OK. [speaker005:] I think [disfmarker] I think [disfmarker] you can put "optional" [speaker002:] I mean [disfmarker] I mean the peop the people I've known who've lied about it I think would just lie about it. 
[speaker005:] it's not gonna [disfmarker] [speaker001:] Right. [speaker004:] You don't think they'd just leave it blank if it says [disfmarker] has a big "Optional" right next to it? [speaker003:] Yeah. [speaker005:] You know people who lie about it? [speaker002:] Oh yeah. [speaker001:] Well they might leave it blank if they didn't care about their age being known but it says "optional" they might not. [speaker002:] Hey, Jack Benny was thirty nine for forty years. [speaker001:] Anyway, it's fine I don't know. [speaker006:] Well, the thing abou the reason, uh, one reason for putting "Optional" might be if [pause] it [pause] uh, made the form seem gentler and k and not as intrusive kind of thing. [speaker002:] kinder and gentler. [speaker006:] Yeah, I noticed, I was afraid you'd [disfmarker] I [comment] shouldn't say that. [speaker003:] Yeah. [speaker006:] They uh [disfmarker] that, you know, i if you say "Optional" then it's sort of [disfmarker] if people who [disfmarker] get nervous about the age question then they realize that "Oh well, but they're not trying to [pause] ask me lots of [pause] things." [speaker005:] Yeah, I think it's OK. [speaker002:] I don't think it matters. [speaker005:] People will put it, I [disfmarker] I [disfmarker] [speaker002:] I [disfmarker] I don't think it matters. If it makes you more comfortable, put Optional [speaker004:] It [disfmarker] It [disfmarker] I think it's better to put "Optional". [speaker002:] I [disfmarker] I don't think it's [disfmarker] [speaker001:] OK. [speaker004:] I mean, that's why I put it there, s [speaker002:] Then [disfmarker] [speaker005:] OK. [speaker003:] OK. OK. [speaker004:] but it seems like [pause] maybe you disagree. [speaker006:] I like it. I think it's softening. [speaker002:] I [disfmarker] I [disfmarker] I don't think it's important but I also don't think it's an important point the other way [speaker004:] Mm hmm. 
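[Editor's sketch: the two fields settled in the discussion above — an enumerated education level and an explicitly optional age — could be represented as a small record like the following. This is a hypothetical illustration only; the field names and validation are assumptions, not any actual ICSI form definition, though the five category labels are the ones agreed on in the discussion.]

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical speaker-information record reflecting the discussion:
# education must be one of the agreed categories; age is "Optional" on
# the form, so leaving it blank (None) is allowed.
EDUCATION_LEVELS = {"Undergrad", "Grad", "Post-PhD Researcher", "Professor", "Other"}

@dataclass
class SpeakerInfo:
    education: str
    age: Optional[int] = None  # blank is allowed; no one is forced to lie

    def __post_init__(self):
        if self.education not in EDUCATION_LEVELS:
            raise ValueError(f"unknown education level: {self.education!r}")

# Leaving age blank is fine; an out-of-list education level is rejected.
s = SpeakerInfo(education="Grad")
assert s.age is None
```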
[speaker002:] and I don't [vocalsound] want to make you do it some different way than you want to do it. [speaker005:] Yeah. [speaker001:] Yeah, it's OK. [speaker002:] So. [speaker003:] Hmm. [speaker006:] OK. [speaker001:] As long as we put it high enough up that it doesn't sort of get lost in the optional, if there are a lot of other optional things on the form. [speaker004:] Well I didn't mark anything else as specifically optional, [speaker001:] That's [disfmarker] that's the only optional thing, [speaker006:] Let's see. That's the only other one. That's the only one. [speaker001:] OK. [speaker004:] So. Um, and then the next are two sections One indented separately. So, one's for native English speakers and one for non native English speakers. OK, so for the native English speaker it asks for a variety of English with three circle boxes American, British, Indian and Other, for write in. [speaker002:] Now what was this thing that [disfmarker] who was telling [disfmarker] was it Steve? Somebody was telling us about asking about uh, [speaker005:] Where you lived between the ages of [disfmarker] [speaker004:] Yep. Jane and I discussed this, and it was [disfmarker] it was again the same problem that [pause] it's gonna be a self evaluation anyway, and so I don't want to have to be standing over the person and asking them "Well, do you mean that your native an l an language was British when you were born, and then it changed to American when you were two, and then it changed to [disfmarker]" and so on. [speaker006:] Well also, you [disfmarker] you [disfmarker] you [disfmarker] he has a sep separate subject for Region. [speaker002:] Uh huh [speaker006:] and so it's a self assessment of what you think your closest variety is of English. And then following that, a characterization of [disfmarker] of region. Now, we discussed maybe indicating a t a time frame. So, "Where did you live during your, [pause] you know, childhood and adolescence?" [speaker004:] Right. 
[speaker005:] Mm hmm. [speaker006:] which would be [disfmarker] [speaker005:] Which is more objective. than somebody evalu saying, I think I speak this or that. [speaker006:] Yes. Mm hmm. Although you know I mean a lot of people there won't be, e much difference, [speaker004:] I mean, [pause] th there's room on the form to put al a a line of instruction, but I'm just not sure what it re should really be. [speaker002:] I [disfmarker] I [disfmarker] I'm sorry, so you're saying [speaker006:] but [disfmarker] Mm hmm. [speaker002:] uh I [disfmarker] I [disfmarker] I just missed something, though. Are you saying that you have both? [speaker006:] Mm hmm. [speaker002:] that you're saying, uh what is your [disfmarker] [speaker006:] Mm hmm. [speaker004:] So the [disfmarker] the [pause] for [pause] the non na for the native English speakers we have Variety of English and Region. [speaker002:] Yeah. [speaker004:] And that's it. [speaker006:] And that's partly to pick up [disfmarker] [speaker004:] And then [pause] we have another form later down which is um, list other language influences. [speaker006:] Yeah. S Mm hmm [speaker001:] So what does Region mean [speaker005:] Yeah, I was just gonna ask the same thing. [speaker001:] and like if I see this form I wouldn't be quite sure what you mean by [pause] Region. [speaker005:] Region [disfmarker] [speaker004:] Right, neither would I. [speaker001:] Do you want to have [disfmarker] give choices and have them circle one, or, I mean [speaker006:] Well, we [disfmarker] When we [disfmarker] when we discussed it it was [disfmarker] it was where [disfmarker] where you lived during your childhood and adolescence. [speaker004:] Um, [pause] I don't think we can do that because it would be different for [disfmarker] [speaker005:] Does that mean region of [disfmarker] Region of the US, or region of the world [disfmarker] [speaker004:] If you're American English it would be region of the US. 
[speaker006:] Well [disfmarker] It's [disfmarker] W it's where you lived. [speaker005:] OK. [speaker004:] That [disfmarker] that [disfmarker] so that's why I didn't have circle forms. [speaker001:] but I mean. [speaker004:] Because it would be different depending on what your language is. So [disfmarker] [speaker005:] Do you mean state, [nonvocalsound] or do you mean more broad than the state level? [speaker002:] Hmm. [speaker005:] You mean like you know southern US, or northeastern or [speaker004:] It I think it would be however you would identify your own. And, as I said I don't really see a way of doing a circle fill in, because there would be [disfmarker] there would have to be one for American English, one for British English, one for you know, one for each type. [speaker006:] Mm hmm. [speaker005:] If you wanted to get regions for American English you could copy what they did in TIMIT. [speaker006:] Well, and also [disfmarker] [speaker005:] They had divided things up into regions. [speaker004:] But [disfmarker] but [pause] that would mean I would have to do [disfmarker] generate a different form for circles, if you're American English or British English or Indian English. [speaker005:] No, no. It would only be f if you said American English [speaker001:] yeah maybe not for the others, because there aren't going to be enough people to really group them [speaker005:] then you'd put the regions. For the others, you would just leave it. Yeah. We wouldn't have enough to really make them [disfmarker] difference on those. [speaker001:] and you know when you talk about dialects in England, I mean, that's really [disfmarker] [speaker006:] Well [disfmarker] [speaker001:] Every block of [pause] London has a different [disfmarker] [speaker003:] Hmm. [speaker001:] so. [speaker002:] Yeah. [speaker006:] I haven't seen what [disfmarker] what they did in TIMIT. 
[speaker001:] My Fair Lady's [disfmarker] [speaker006:] Are you s Do they have a bunch of options like uh [disfmarker] [speaker005:] Oh, it's like six regions of the country or something. [speaker002:] Yeah. [speaker006:] I see. [speaker005:] Very [disfmarker] just a limited number [speaker006:] OK. [speaker005:] and then if you're American you would just circle the region where [disfmarker] [speaker002:] Yeah. [speaker006:] Well, that's the concept, yeah. [speaker004:] Mm hmm. [speaker002:] Yeah, and Henry Higgins could say which block you were from. [speaker006:] Well [disfmarker] [speaker002:] Right? [speaker005:] Yeah. [speaker002:] So that's [disfmarker] [speaker006:] That's right. Well that [disfmarker] you know that was the concept of having the region thing, I [disfmarker] It would be no problem to change it to options if um [disfmarker] if that were desirable. [speaker004:] Yeah, I just don't know what to do about the other varieties of English. [speaker006:] Although for different Well, that's true. [speaker005:] You'd just leave them. [speaker006:] I mean, Indian English I wouldn't know f [speaker005:] You don't even ask. [speaker001:] I [disfmarker] I [disfmarker] there are huge differences you know in the others [speaker006:] what [speaker004:] Well, that's why I wanted to just leave it something that you could write it down. [speaker001:] so it might be better to just do it for the yeah you could leave region for [disfmarker] for the others but have [pause] circled choice for the American [pause] English. [speaker005:] One [disfmarker] one good reason to use the TIMIT ones is if anybody asks you you know, about the data later on you can say "well I chose these based on TIMIT", And [disfmarker] [speaker001:] Right. [speaker006:] That is kind of nice. [speaker002:] What [disfmarker] what about just doing the [disfmarker] the sort of the Steve suggestion, uh, as [disfmarker] as an add on to it, right? 
Rather than saying Region, [vocalsound] just say "Where did you live between the ages of here and here?" [speaker006:] That was the concept of region. [speaker005:] And that would apply to everybody then? Yeah. [speaker002:] Just [disfmarker] [speaker006:] and [disfmarker] It's just the wording of it. [speaker005:] And then you could map those to the TIMIT regions later, if people wanted to. [speaker003:] Mm hmm. [speaker002:] And then they [disfmarker] they could s Right [disfmarker] yeah, they could say it different ways. [speaker005:] Yeah. [speaker003:] Yeah. [speaker005:] That's more general. [speaker002:] They could say Cincinnati or they could say Midwest or [disfmarker] [speaker005:] Yeah. That's true. [speaker003:] Yeah. [speaker001:] Actually that is true for them but it means that you're gonna get a bunch of different level [disfmarker] levels of resolution in your survey, it [disfmarker] it'll be more of a pain later, so. [speaker002:] Somebody's gonna have to do some data [disfmarker] [speaker004:] Well, there could be a circle one. [speaker001:] It [pause] depends. [speaker006:] I guess there's a [speaker004:] Right? So, you could say [pause] "What region did you grow up in from these years?" and have a set of circ of six or seven circles [disfmarker] [speaker001:] Yeah, that [disfmarker] that's good but if you [disfmarker] have it totally free form First of all some people take [disfmarker] you [speaker004:] Yeah. [speaker005:] A lot of post processing. [speaker001:] Well, and some people would give you more specific information than others and that's a waste because you can only use the least common denominator of specificity and generally. [speaker002:] Yeah but that [disfmarker] but the good thing about it is it [disfmarker] is it frees you from coming up with exactly what the right categories are. um, Right? 
[speaker001:] I don't know later on, if [disfmarker] if somebody just says Midwest, now all these other people who said you know area then that'll have to become your category. [speaker002:] Yeah. [speaker001:] and it's [disfmarker] It's harder later on, i I [disfmarker] It's easier for the person [speaker002:] I see. [speaker006:] There is another problem. [speaker001:] but it's harder for the r it actually less informative for us because you can't enforce any minimum level of specificity. [speaker002:] But I think that if you're going to have it be general uh, [speaker001:] so it's [disfmarker] [speaker002:] so the idea again was, you know, you might wanna also know if [disfmarker] if they grew up in Germany or if they grew up [disfmarker] and so [pause] uh, I think [pause] you wanna [disfmarker] The idea [disfmarker] the motivation I think for this being suggested before was that it did cover a range of these cases that might um [disfmarker] you know, if you say your native language was [disfmarker] was English but uh [pause] you [pause] you grew up in Germany, uh I think this [disfmarker] [speaker005:] Yeah, y there is actu [disfmarker] you made me think about another wrinkle in this whole thing, which is that [pause] just asking them where they grew up between certain ages isn't enough, because somebody could grow up in Germany but live on an army base speaking English. [speaker006:] Mm hmm. Yeah. Or someone could in Geor in [disfmarker] in New York and have, I mean, any number of different types of [pause] speech patterns. [speaker001:] Or their parents live [disfmarker] yeah. [speaker005:] Yeah, so you sort of have to say what languages were spoken in the home between the ages of [disfmarker] [speaker001:] and their parents [disfmarker] [speaker006:] Well, we have this [disfmarker] We hafta [disfmarker] So. 
He hasn't given you the overview, but you have, like, the Native Speaker category, the Non native Speaker [pause] category, and then you have Other Language Influences, for example bilingual dialects, th things like that. [speaker005:] Oh that's nice, OK. [speaker006:] Now there's this issue of not wa like you're saying, you don't want the forms to be hugely complicated and take half the meeting to complete. [speaker005:] Mm hmm. [speaker006:] So, it's trying to [disfmarker] e there's this [disfmarker] trying to [disfmarker] trying to hold this balance between the amount of information needed and uh, you know, not taking too much meeting time, and then in addition, add to it the third level of the balance, uh, um this [disfmarker] this issue of not being too intrusive, asking in a way that they don't mind. [speaker005:] Mm hmm. [speaker001:] If we release this corpus, what do we want, I mean if you do this free form um [pause] the free form solution where some questions are very open ended then you need [disfmarker] someone needs to enter that, and they'll be notes and it will be very difficult for somebody later actually use that without mapping it. [speaker006:] Well, I think rea I think that we need to remember that we have [disfmarker] First of all, it's not all free form. [speaker001:] It's OK. But it's [disfmarker] [speaker006:] We have several [pause] categories that are specifically constrained, [speaker001:] Mm hmm. [speaker006:] And in terms of the [pause] more open ones, we have the [disfmarker] we're in the enviable position of having the contact information and also knowing most of the people who will be participating in the meeting data. We can find it [disfmarker] We can resolve certain issues later, if [disfmarker] [speaker001:] Mmm. [speaker002:] Yeah, and the question is how extensive this gets. [speaker001:] I mean, I guess what you really want, is if there's some information you wanna know [disfmarker] [speaker006:] Mm hmm. 
[speaker001:] So we want some rough idea of their dialect, basically. [speaker006:] That's it. [speaker001:] then we should have some question where it's not free form, except for these people who are who have non American English accents. There we probably have to have a category "Non American English" and then a few choices. And we should have something that's simple, that people can circle, like these TIMIT regions or something, so that we have something that minimally you can [disfmarker] cuz a lot of times if you give the data out that's what people are gonna use. They're gonna want some small set of places, and then, but anything additional people can fill out, and we can put it in as a note or something, but if we don't have that, someone's gotta sit there and figure out you know, I've got twenty people who said Midwest, and I've got a few people who said something else and these don't overlap or these [speaker006:] Well Isn't it [disfmarker] I mean, I think [pause] that [pause] we're going a little bit in a circle, because I like the TIMIT suggestion very much, and I think it's a question of how to ge incorporate it, and [disfmarker] and we've already suggested [disfmarker] I mean, it's come up in this discussion, that [disfmarker] the suggestion that we could have [pause] um the TIMIT categories for the American, and [disfmarker] and then make the choice [pause] or not, as to whether we do it for British, [speaker001:] Oh, OK, so [disfmarker] yeah as long as you have [disfmarker] [speaker006:] I mean, w we sort of discussed one way or the other, and it doesn't matter to me. I don't know if [pause] Adam has [disfmarker] has a preference, [speaker002:] But [disfmarker] but [disfmarker] [speaker006:] but we could have [pause] you know, a sub a sub categorization for American, We could even use the TIMIT subcategorization. I don't know if it has one for British, or whatever, [speaker005:] Mm hmm. 
[speaker006:] But you know, just have it [disfmarker] you know, a short simple form that you can categorize yourself in a [disfmarker] in a systematic way. [speaker002:] What about the [disfmarker] the phrasing that um Chuck used, uh [pause] which was in [disfmarker] in relation to the question I was trying to formulate of uh Where [disfmarker] where did you [disfmarker] what [disfmarker] what was that [disfmarker] what w how did you phrase it? [speaker004:] What language was spoken in the home? [speaker002:] "What language was spok", yeah, so it's not where did you grow up, but what language was spoken in the home between the ages of, what would it be, five and twelve or something like that? [speaker003:] Mmm. Yeah. [speaker006:] It's a good idea. [speaker004:] Depends of who you ask, what the age range is. [speaker001:] Well, in the home, the influence of the home is much lower age than that, [speaker003:] Yeah. [speaker001:] I mean once you go beyond the age five or six then it's the [disfmarker] the school influence as well and [pause] and [speaker006:] And the peers. [speaker001:] yeah, so [pause] In the home it's sort of your phonetic first level, [speaker006:] That's one part though. [speaker001:] um [pause] a So it's [disfmarker] you know it starts to get sort of complicated. [speaker004:] Yeah, so this is why I [disfmarker] [speaker006:] Well, [disfmarker] It's [disfmarker] that's not a problem though, [speaker004:] So [disfmarker] so th [speaker006:] I mean. That would be a good question to ask, and then you could add a question. You know, would [disfmarker] uh "among your peers". You know. [speaker002:] cuz he does make a good point about the region [speaker006:] Specify the age range. [speaker002:] not necessarily again, if you're on the army base or [disfmarker] [speaker001:] Right. [speaker004:] Well, the way it's phrased right now, it's clearly asking about language. [speaker002:] Yeah. 
[speaker004:] And so, what it's trying to get at is what you think you speak. Now, I think that the point is well taken that sometimes people are wrong about what they think they speak. [speaker001:] Right. [speaker004:] But, other than having a trained linguist interview them [pause] and ask these very specific questions, I don't see a way around it. [speaker005:] Mm hmm. [speaker004:] Because as you said, what you're gonna have to stay [disfmarker] say is "Well, list the languages that your parents spoke when you were in home and how good they were at it and how long they spoke it, and now list the ones in preschool and now list the ones in first grade and now list the ones your friends speak [disfmarker]" [speaker005:] Sort of like as soon as you get to the [pause] cases on the edge, the complexity just shoots up. [speaker006:] Yeah, that's right. [speaker004:] Right. [speaker001:] Mm hmm. [speaker004:] So I want to try to keep the complexity so someone can fill out this form at the beginning of a meeting without taking f uh forty five minutes. [speaker003:] Mm hmm. [speaker006:] and we're [disfmarker] and then we [disfmarker] like I said, we have the enviable benefit of knowing most of these people. and being able to ask them follow ups if we need to. [speaker002:] For now, right so it's [disfmarker] I mean, again, we are hoping to expand this out, [speaker006:] For now. That's true. Washington and stuff, yeah. [speaker002:] yeah. [speaker006:] That's true. [speaker004:] So I think that adding the [disfmarker] the TIMIT categories is a good idea, because that's very easy for a person to fill out. But I think we still have this problem of is it gonna be a self evaluation? or are we gonna ask them "what was it when you were four years old, or what was it when you were six years old, or what wher" w What's the best way of phrasing the question? 
[speaker005:] Well, you know the other thing, too, is that it seems like th one of the reasons you would gather real detailed information about their native language is if you were gonna do fine grained phonetic analysis or something. and [disfmarker] I don't [disfmarker] [speaker006:] Well, it's [disfmarker] [speaker002:] And somebody might. [speaker004:] Or adapta or adaptation. [speaker001:] Most of [disfmarker] but most of those people don't rely on the self evaluations, anyway. [speaker005:] Well that's what I was gonna ask, is that [disfmarker] [speaker001:] They'll probably listen to the speech, so [speaker006:] Well, I [disfmarker] you know, I w I was thinking of it as [disfmarker] e that's a good point, but I was thinking uh, you know, with reference to the [pause] language model issue, which uh and I guess it could be done inductively, but if you have a bunch of, say, R less dialects and uh you know, if they [disfmarker] [speaker001:] But even you know in Switchboard, I [disfmarker] I remember looking really carefully at [disfmarker] They have information sort of like TIMIT I don't know it's [disfmarker] how well they categorized, but they had North Atlantic, Mid Atlantic, and [disfmarker] You know, Texas was definitely out on [disfmarker] I mean most of the people from cer certain places in the South were from specific Texas locations, but [pause] other than that you had a huge range of [disfmarker] You know, people would say Boston, and they had nothing like what I would call a Boston accent. [speaker006:] Mm hmm. [speaker001:] So it was [disfmarker] [speaker006:] Interesting. [speaker005:] Were they putting down where they lived [pause] at the moment they m recordings were made? Is that why? [speaker001:] Um No, the question was really geared towards trying to get people's self evaluation of their dialect region. [speaker005:] Rather than where they grew up? [speaker006:] Mm hmm. [speaker005:] Mmm. 
[speaker006:] But you have a lot of stratification in [disfmarker] any region. [speaker001:] and you know, for instance, you know, Minnesota Midwest is totally different than Michigan Midwest which is totally [disfmarker] [speaker005:] Yeah. [speaker002:] Right. [speaker006:] Interesting. [speaker001:] these were all settled by very different [pause] type people, so it [disfmarker] it's sort of a general help but I think that anyone who's doing phonetics wouldn't rely on that, they would probably use it as a first pass or something, [speaker005:] So, then we probably don't need the fine grained information. [speaker004:] Well maybe should we just [disfmarker] Then maybe we shouldn't [disfmarker] just shouldn't ask. [speaker001:] Well, I think we do need to make sure about the f the non native accents because those are, you know [disfmarker] [speaker005:] Yeah. [speaker001:] but we don't need to ha be really really specific about the others, because it won't be very w helpful anyway. [speaker005:] Mm hmm. [speaker004:] Well, maybe for the native English speakers we shouldn't even ask the region if it's that unreliable. [speaker001:] I think we should, because other databases do. It's sort of a political thing. And there are some general [disfmarker] like, you will find more New Yorkers in the New York category. but it's not [speaker005:] It can't be relied on only. [speaker001:] Right. [speaker004:] The other thing also is that I'm asking for the variety of English, and I did just put American, British, Indian and other. So, I didn't put Canadian, Australian, and all the other varieties. [speaker006:] Mm hmm. [speaker001:] But Other is there, and people will fill that out. [speaker004:] e But do you think those three are OK? [speaker001:] They'll see by example. Yeah that's [disfmarker] I think that's fine, [speaker005:] Yeah, I think those are good. [speaker004:] OK. [speaker001:] they'll say oh, I'm not one of these, I'm Canadian, or [speaker002:] Yeah. 
So internally, we have Texas and other. [speaker003:] Hmm. [speaker004:] So that for the Eng native English speakers, we'll do variety of English, regions from the TIMIT labels, and another field. [speaker001:] Right. I mean, even like Cal LA is totally different than [disfmarker] [speaker004:] And then [speaker001:] I mean, you really can't um [disfmarker] [speaker006:] Well [disfmarker] Mm hmm. [speaker001:] We could ask them for more information, but I don't think it's [disfmarker] [pause] necessary and it's not going to solve this problem. [speaker004:] I thought this was [comment] be fast. [speaker005:] Yeah. [speaker006:] Well that's what the [pause] final [disfmarker] the fin the third category was designed to do, because [disfmarker] I mean, you have a bunch of different types of accents in New York You know. Native speaking New Yorkers, as a [disfmarker] as a function of their social uh groups and their ethnic identifications and their [disfmarker] You know. [speaker001:] Oh yeah. [speaker002:] mm hmm [speaker006:] There are lots of things that are very identifiable as [pause] um [pause] you know, other [disfmarker] other aspects, which is why we have this third category. [speaker003:] Mm hmm. [speaker006:] So we wanted to [pause] certainly distinguish American versus British, and other types at that level, and then allow people to self identify if there are other specific things, and then in addition, this issue of regions with [pause] some sort of time frame for the re region because I think that if you live [pause] in Massachusetts now versus you know, in childhood and as adolescence [disfmarker] So if you lis if [disfmarker] if you [disfmarker] [speaker004:] Air conditioning. [speaker006:] You know. So, region with respect to a time frame is what I would suggest. "Where did you live?" I mean. What you said. you know, "where [disfmarker] where [disfmarker] wh where did you live?", or uh, what you were saying, "Languages in the home". 
But I think that uh a time frame on the region would be useful instead of [pause] "Region now", say. It's a little bit [pause] unbounded. [speaker002:] um [pause] so are you suggesting that you put something in about childhood [pause] in there. [speaker006:] well, you know, there's this issue of [disfmarker] of what age is relevant for the formation of your speech patterns, whatever they are. [speaker002:] Well you could distinguish between pre five and five t five to twelve, [speaker006:] And [disfmarker] Oh, that's interesting. [speaker002:] right? So depending on what someone's interest was in the formation of the phonetic kinds of [pause] categories or whether [disfmarker] or whether you're talking more about their linguistic speech patterns, you know uh so But, again, it depends how fine we wanna get, but we could say uh "where did you live before you were five?", "where did you live after you were five?", I don't know. [speaker006:] Oh that's interesting. from five to [disfmarker] fifteen or something, is a [disfmarker] [speaker002:] yeah [speaker001:] I mean, I think the question on regions is really to ask people how did they define their dialect region now, di the whether they picked it up at [disfmarker] the [disfmarker] one age or another. pretty much, I mean, most people will know whether they have a southern accent or not. [speaker006:] Mm hmm. [speaker001:] um, but they may [pause] have lived you know, in other places, too. So if we ask the question about dialect regions, it makes no sense to say where do you live now. [speaker004:] Right. [speaker001:] Right? You ask how would you best define your your accent or your dialect style. [speaker002:] Right. [speaker004:] Right, the way [disfmarker] the way the form reads now, I think it's pretty clear that's what we're asking [pause] cuz it says "Native English speakers, Variety of English, American, British, Indian, Other, Region". [speaker001:] OK [speaker006:] Mm hmm. 
[speaker004:] And then we'll have Southern English, you know, whatever [disfmarker] whatever the TIMIT categories are. So I think i if you [pause] look at this fr [speaker001:] Or you could say to somebody, how do you define your [disfmarker] your [disfmarker] the accent that you have. [speaker004:] Right. [speaker001:] um And then you can ask these other questions as well, but what we don't want them to think is that they're living in a certain place and that therefore they're [disfmarker] I mean, nowadays [pause] hardly anybody from California [vocalsound] has a [pause] native Californian accent, [speaker002:] Right. [speaker006:] Yeah. Well I [disfmarker] I g OK. [speaker001:] so. [speaker006:] I don't think it was ever the intent to [pause] talk about the region now, because [speaker004:] Right, so I [disfmarker] it seems to me that [pause] if you read this form, i w it's pretty obvious that it's a subcategory of the language [pause] that you think you speak, [speaker001:] OK, [speaker006:] It's [disfmarker] It's a sub categorization. [speaker001:] as long as it's clear. [speaker006:] Mm hmm. [speaker004:] not where you live. I mean, I don't say anything about where you live on the form. [speaker001:] OK. [speaker006:] Yeah. [speaker001:] OK. [speaker002:] OK. [speaker004:] So it's just in the section talking about language. [speaker001:] OK. [speaker002:] Why don't we try it out. and see [disfmarker] [vocalsound] see what kind of responses we get from people. [speaker004:] Yeah. [speaker001:] Yeah, [speaker002:] Anyway. [speaker004:] And then uh, as Jane said, the last question is "List other language influences, bilingual dialects, et cetera." And that's just an open ended place for them to [pause] do. [speaker001:] Oh. Mm hmm. Sounds good. [speaker006:] That's to handle that unusual percent [pause] or two. [speaker004:] And then [disfmarker] [speaker006:] Yeah.
[speaker004:] And then the bottom of the form is for [disfmarker] not for them to fill out, but for us to fill out, for um the aliases and database [disfmarker] speaker database that the person is in, since we'll eventually anonymize and this will be a form a place for us to connect the two. And then the meetings that they attended, [speaker001:] Mm hmm. [speaker004:] because there'll be one form for each person. [speaker001:] Sounds good. [speaker006:] It's very efficient. [speaker004:] OK. So. Yeah, I guess I'll fiddle around yet again with the language stuff. [speaker005:] I don't think you really have to do all that much. [speaker002:] d I yeah. [speaker001:] It [disfmarker] it gets into a can of worms. [speaker006:] Well, you [disfmarker] Mm hmm. [speaker001:] I mean, the more you ask, the more you realize that you're not getting great answers to what you're asking and [disfmarker] [speaker005:] Yeah. N Right. Yeah. [speaker006:] Well, y you didn't really talk about the non English speaker categories. [speaker001:] So. [speaker004:] Right. So, there's another area which is non native English speakers which asks for Native language, Region, and Variety of English. And that's it. [speaker006:] Because it used to be that Germans would learn British English, and now I think there's, you know, is a certain percentage of them who [disfmarker] who've [@ @] to tend toward American English. [speaker005:] N detec [speaker001:] What about proficiency in English? as well? [speaker004:] Proficiency. That's a good idea. [speaker002:] Uh, I [disfmarker] [speaker006:] That's hard though, for self identification [disfmarker] [speaker001:] I [disfmarker] You don't want to do it for political reasons, [speaker003:] Hmm. [speaker001:] I mean [disfmarker] I'm no I wasn't thinking about that, I was just thinking, "Gee, I'd like to know actually [disfmarker]" [speaker002:] I wasn't thinking [disfmarker] oh No, no. I wasn't thinking political at all. [speaker001:] Oh, OK. 
[speaker002:] I was just saying, looking at [pause] applications of people over the years for post docs here. They're r generally not very good at assessing their own abilities. [speaker004:] Well, do you think that's [disfmarker] That's also because they know that it's a re important for them to be good at it. [speaker002:] but [disfmarker] [speaker001:] OK. [speaker006:] I agree. [speaker002:] That could be. [speaker004:] So, don't [disfmarker] don't you think they overrate rather than underrate? [speaker002:] That could be. Yes. [speaker001:] Well, so maybe that's [disfmarker] [speaker004:] So, but no, I think proficiency is actually a good thing to have on this sort of form. [speaker001:] Maybe we [disfmarker] maybe we shouldn't ask it, [speaker004:] The [disfmarker] the question are [disfmarker] what are [disfmarker] what would the categories be? [speaker001:] I don't know. [speaker006:] Well, [vocalsound] e [vocalsound] [vocalsound] I'd [disfmarker] I'd have to [disfmarker] I [disfmarker] I'd wanna relate it back to training. [speaker002:] Good, Bad. [speaker006:] "How many years did you study English", or, you know, "have you lived in the America [disfmarker] in a c an English country before". But it seems like that's so [disfmarker] I mean, I think that you know, if a person who [disfmarker] [speaker001:] What about something like "How comfortable are you in a meeting conducted in English?", or "How easy is it for you?", or something like that where it's [disfmarker] [speaker002:] How b How [disfmarker] About "How long have you been uh [pause] in [disfmarker] in an English speaking country?" [speaker001:] Yeah, that's a good one. [speaker003:] Yeah. [speaker006:] That'd be OK. [speaker003:] Yeah. [speaker001:] Something [disfmarker] wha what do you think, what's it [disfmarker] [speaker003:] Is better. [speaker001:] I mean [disfmarker] [speaker003:] I think it's better. [speaker001:] How [disfmarker] yeah? 
How long have you been in an English speaking country. [speaker003:] Yeah. [speaker001:] OK. [speaker006:] Yeah, I like that. I like that. That's [disfmarker] that's non threatening, and it's also an i a good indicator. [speaker002:] Yeah [speaker006:] And there are people who [pause] haven't ever lived in an English speaking country and who are superb. However, um, that's [disfmarker] you know I do think it's a pretty good indicator. [speaker003:] Mmm. [speaker005:] And then there will be people who [pause] live in an English speaking country, but all their meetings are held in another language. [speaker006:] That's true too. Or even, you know, I mean, some people who just don't change much in their [disfmarker] [speaker005:] So, [speaker003:] Yeah. [speaker004:] Yeah, I'm [disfmarker] I don't know. [speaker003:] Yeah. [speaker005:] Yochai's company is like that. [speaker004:] Being totally monolingual, I'm not the right person to ask, to talk about this [speaker005:] Everybody speaks Hebrew. [speaker002:] Really? [speaker004:] but it just seems to me that that doesn't [disfmarker] the length of time doesn't really get at what we're asking. Right? It seems to be that asking about the proficiency, even if it's self rated, is what we wanna know. [speaker006:] Except that you have some many biasing factors, [speaker002:] Yeah. [speaker006:] you have [disfmarker] you have people who are highly confident when they really [pause] should be a little bit less confident, and then you have, [pause] uh [pause] other people who are just the reverse, and [disfmarker] and it [disfmarker] [speaker002:] Yes. [speaker006:] so many things that vary [disfmarker] I [disfmarker] And also we wanna keep it simple and non evaluative. [speaker004:] Mm hmm. OK. [speaker002:] We done? [speaker004:] With that form. [speaker002:] Yeah. Oh, there's another form. 
[speaker004:] Shall we wait for other forms later, or you [disfmarker] Keep going more form [disfmarker] [speaker002:] No, you got [disfmarker] what's the other form? [speaker004:] Uh, the other form is a modified digits form. uh and [pause] it's pretty sim this one's pretty simple, it's [disfmarker] it's very much what like the digits forms in front of us, except it doesn't ha ask for sex or native language. Um, so it's Name, Email, Time, Date, Seat, Session, Mike number, Channel, Mike type. [speaker006:] Why do we need that much information, why do we need the email address, for example? [speaker004:] Uh, in case people have the same name. [speaker006:] Oh, I see. How often would that happen? [speaker002:] In case people what? [speaker004:] So I wanted [disfmarker] Have the same name. [speaker002:] Oh. [speaker004:] Yeah, I'm particularly sensitive to this cuz a good friend of mine in S in Seattle was named Scott Smith and there were fourteen of them in [disfmarker] at Boeing. [speaker006:] I see. [speaker004:] So [comment] We could drop email. [speaker006:] So the only [disfmarker] so [disfmarker] [speaker004:] I mean that's not a big deal, it's just uh [disfmarker] [speaker006:] Well, it's an [disfmarker] it's an interesting identifier, I [disfmarker] I [disfmarker] now that I realize the und the reason for it. The [disfmarker] That's true too. [speaker002:] Well the other thing is that potentially if this is done over a period of time, someone's email could change. [speaker004:] Right. Well, the other way I was thinking about it was assigning them an ID [speaker006:] Yeah. [speaker004:] but I hate it when people do that to me, when they say, here's your ID, don't forget it. [speaker002:] Oh yeah, that's bad. [speaker004:] So, I [disfmarker] I wanted just something that would give us a hint of [pause] who the person is if the name field isn't readable or it's a duplicate. [speaker002:] Thanks [speaker006:] That's interesting.
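The speaker form discussed above (variety of English, TIMIT-style region with a time frame, an open-ended influences field, email as a tiebreaker for duplicate names) could be sketched roughly as follows. This is a minimal illustration only; every field name and type here is an assumption, not the actual Meeting Recorder form.

```python
from dataclasses import dataclass, field

@dataclass
class RegionPeriod:
    """A dialect region tied to a time frame, per the 'where did you
    live before you were five' discussion. Labels are TIMIT-style."""
    region: str      # e.g. "South Midland"
    start_age: int   # years
    end_age: int

@dataclass
class SpeakerForm:
    # Required identity fields; email disambiguates duplicate names.
    name: str
    email: str
    native_english: bool
    # Self-reported fields from the discussion (all names hypothetical).
    variety: str = "Other"   # American / British / Indian / Other
    regions: list = field(default_factory=list)  # RegionPeriod entries
    influences: str = ""     # open-ended "other language influences"
    years_in_english_country: float = 0.0  # the non-threatening proficiency proxy

def speaker_key(form):
    """Name plus email, so two Scott Smiths stay distinct."""
    return (form.name, form.email)
```

The `years_in_english_country` field reflects the group's preference for an objective question over a self-rated proficiency scale.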
[speaker001:] How about their birthday? [speaker006:] Good idea. [speaker001:] I don't know. I mean, not [disfmarker] not the year [pause] not the year [speaker006:] It's the age question again. [speaker004:] Your mother's maiden name [disfmarker] mother's maiden name [speaker002:] Oh, I see. Then, you don't hafta ask the age. Yeah. [speaker001:] not the year but just their birthday. [speaker003:] Not the year. [speaker001:] I don't know, what are the chances of people having the same name. [speaker004:] social security number, [speaker006:] Yeah, that's right. And a bank where you frequent. Yeah. [speaker005:] PIN. [speaker001:] They [disfmarker] [speaker002:] Social [disfmarker] social security number. [speaker003:] The PIN! [speaker004:] So, although the email [disfmarker] email will change, I think that's a good way of doing it. [speaker006:] Yeah. [speaker004:] We'll be able to backtrack from email. [speaker006:] I like that. I never thought of that as an identifier. That makes sense. [speaker004:] Um, and then the other thing I was thinking about and it probably wouldn't be on the digits form, but it [disfmarker] it is something that some form about the individual meeting. [speaker006:] Yeah. [speaker004:] Now, the problem I have with that is I don't know who would be filling it out. [speaker006:] Well, that I think, someone internal to the meeting could do after the fact. And this would [disfmarker] and [disfmarker] and we discussed this, this is with reference to like the DTD, what you would put in the DTD that's meeting specific. So. And it could be something like, "oh by the way, the guy giving the presentation today is the student of the professor who you know, whatever". So, speaker A is his professor and they [disfmarker] and he's preparing for his quals or some such thing like that. Or it could be you know, "so and so and so and so are married". [speaker003:] Hmm. [speaker004:] Just to pick up an arbitrary an [comment] example. 
[speaker003:] So and so and so [disfmarker] [speaker006:] We [disfmarker] this wouldn't be relevant all the time but you know, if there's a big discussion on [pause] you know, the [disfmarker] the [disfmarker] um, best way to move house, then [disfmarker] and these people are talking a whole lot, then it might be interesting information to say oh by the way they're married, or [disfmarker] I [disfmarker] I don't mean, uh, you know. [speaker004:] Right. So th so the sorts of questions I had were one Who's gonna fill it out, and who's gonna take responsibility for filling it out [pause] in each meeting, I mean, will we re will we require it. [speaker006:] I would tend to leave that e spontaneous, I think that e at the stage of uh Uh, sc scanning the [pause] transcripts after they've been produced by [disfmarker] by our [disfmarker] [pause] by IBM, At that point, I think that some of these dynamics will appear to be important, [speaker004:] Mm hmm. Mm hmm. [speaker006:] and you could [disfmarker] you could spend a lot of time enumerating all the different relationships that people have. [speaker004:] And that was my other question, which is [pause] um [pause] what sort of information do we want to record? [speaker006:] Yeah. [speaker004:] I mean, so I think pairwise relationships are pretty easy. [speaker006:] Mm hmm. [speaker004:] You know, source, destination, relation. Are there other sorts of things that might [disfmarker] we might want to record? 
[speaker006:] Well, I ju I was just thinking, with reference to uh, things that have [disfmarker] that bear on the content or the status relations, would be the things [disfmarker] Without being exhaustive, by any means, but just like I said, if there's a k a certain topic that comes up in the meeting, and [disfmarker] [pause] that knowing their relationship will clarify it, or [pause] if there's a certain dynamic that comes up [disfmarker] So, I mean, a person is asked a whole bunch of questions, more than you'd usually think they'd be asked, and it turns out it's because he's being prepared for a job interview or something like that, then it's useful to know that [disfmarker] that relationship. [speaker004:] Right. I think that fits in well with the whole meeting map [disfmarker] mapping meetings concept is that's another way of looking at [disfmarker] looking at it. [speaker006:] But, not to be exhaustive. Interesting. [speaker004:] So, are there anything other than pairwise? [speaker006:] Oh, well yeah, you could have people who are all part of the same football league, or [disfmarker] uh or chess club, or [disfmarker] [speaker004:] Oh yeah, OK. [speaker001:] So thi Isn't this an open ended, like basically a notes column? I mean you can imagine wanting you should be able always to attach some notes to [disfmarker] [speaker004:] Yes, shirt [disfmarker] certainly. [speaker001:] is this the [disfmarker] the place where we would do that, or is there another place. [speaker004:] I've been thi I [disfmarker] it seems to be that there is [disfmarker] We could attach more structure if we wanted to. Especially because a lot of these relationships are pairwise power relationships. 
[speaker002:] I Maybe I was missing something maybe, but I mean [pause] this is something one would try to infer, but how do [disfmarker] how would you How would you freeze that information about what, uh, I mean, we already have these things about someone's a professor and someone's a student, or something, but would you [disfmarker] would you [disfmarker] Some person would be writing OK this person was leading the meeting and that was or [disfmarker] or [disfmarker] [speaker006:] Mm hmm. [speaker002:] is that what you mean, or [disfmarker] What [disfmarker] what [disfmarker] what [disfmarker] uh, [speaker004:] Um, just as an example, it would be, "Source, Adam Janin", "Destination, Morgan", "Relationship, Advisor". [speaker002:] Oh, so you're just putting preexisting relationships that are [speaker004:] that are relevant. [speaker001:] I think that starts to [disfmarker] There's definitely some clear cases, [speaker002:] I see. [speaker001:] but there are a lot of gray areas, where [pause] the context happens just in that meeting, [speaker002:] yeah. [speaker004:] Yep. [speaker001:] or, I mean, where you don't wanna sort of have to make a distinction between a pre existing relationship, or [speaker002:] Yeah, I m [speaker006:] Well, realize, w this was [disfmarker] the idea was that this would be meeting specific. Otherwise, they would [disfmarker] it would be possible to have a higher level thing. [speaker001:] But I mean, there are some that generalize over many meetings and there are some that don't, and there are some things that happen in between meetings, [speaker002:] Right, but that example was not [disfmarker] [speaker006:] That's true. That's true. [speaker004:] Right, so my example was not a good one, because that's [disfmarker] [speaker002:] Yeah. So, I mean, for [disfmarker] today, for instance, [speaker001:] and [disfmarker] [speaker002:] this is the second meeting recorded today. 
The first meeting was a [disfmarker] was a front end meeting, and in that one, the basic form of that was uh, I was the leader, and I was saying uh, "what are the results you've gotten in the last week?" And they were [disfmarker] and uh one person was sort of taking the lead, in describing what the results were, and then uh, Chuck was taking the role of saying, "well, what did you mean by that?", so he was sort of like the out s semi outsider, asking curiosity questions. And then it would come back to me with saying "well, you know, you showed me that but you know, I think you should do this in the future," so it was very much sort of "Leader" and [disfmarker] and [disfmarker] and [disfmarker] and "Led" sort of relationship. And um [pause] the guy I was primarily doing this with uh is [disfmarker] is a visiting researcher, uh, in this l meeting, uh, you are my graduate student, but uh it's not taking that kind of role at all, right? [speaker004:] Right. [speaker002:] You're [disfmarker] you're [disfmarker] you're actually leading the meeting for the most part, most of this meeting, [speaker004:] Well, it's just cuz it's my turn. [speaker002:] so it's [disfmarker] Yeah, well, whatever. So [pause] Uh [pause] Yeah, so I [disfmarker] I don't know, so you're suggesting [disfmarker] [speaker001:] I think this one needs to be open ended, completely open ended, and the only concern I would have is if there's something we wanna be able to say and we can't put it in that field. [speaker006:] Well, the there's [disfmarker] [@ @] I mean. I [disfmarker] g you [disfmarker] You're the expert on the D T [speaker004:] Well, the D T Ds mean we can do whatever [disfmarker] however we want it. [speaker006:] but [disfmarker] Yeah. [speaker004:] But what I [disfmarker] what I was thinking uh [disfmarker] the reason that I didn't want to make it completely open ended is that it does seems like there is some structure which is common. 
Especially with these pairwise ones, where you can describe them and pairwise. [speaker001:] So you mean like [disfmarker] [speaker004:] And you might want a tool that used that information. [speaker001:] Is there a way that we can add keywords or sub I don't know what you would call them, but little labels for things that we decide are useful things to have [speaker004:] Sure, absolutely. [speaker001:] but You know, as long as we have this one place where it can all be stored, then we're fine, I think. [speaker002:] Right, cuz it seems like it's sort of a [disfmarker] [speaker001:] and those'll probably occur [disfmarker] I mean, to us, we'll think of a few of them, some other researcher will think of some totally different keyword. [speaker004:] Right. [speaker002:] Exactly. It's sort of a research question, you [disfmarker] you're analyzing the meeting and you're deciding, "ah, there's this kind of structure to it." [speaker001:] Y Yeah, it's [disfmarker] it's [disfmarker] it's gonna change [disfmarker] [speaker002:] Yeah. [speaker001:] It's gonna change, [speaker003:] Yeah. [speaker001:] and maybe we wanna regroup things under different labels later, [speaker003:] Interesting. [speaker001:] if we figure out that these pairwise things are [disfmarker] [speaker004:] Right [comment] Well I think [disfmarker] [comment] being [disfmarker] being in a common organization was one I hadn't thought of, that [disfmarker] [speaker006:] Well [disfmarker] [speaker004:] So your football league was a good one. So the ones I've been thinking of are all sort of uh, power, responsibility. I guess husband wife is yet another one. [speaker006:] And so, you know, in discourse [pause] analysis you get things like [@ @] the role. [speaker004:] Ach! [speaker006:] You have [disfmarker] you have a role in relationships and it has to do with your negotiating your [disfmarker] how you're perceived and how y and how the other [@ @] [disfmarker] [speaker004:] Hmm. [speaker003:] Role, yeah.
Yeah. [speaker006:] You know, and so there are various things you negotiate. And [pause] I'm [disfmarker] I'm wondering if we might be able to do it s in sort of a loose way, but e with [disfmarker] by having a searchable field like "role" and then you know "within this context this person has the following roles", and it could be very loose, [speaker004:] Mm hmm. [speaker003:] Yeah. [speaker006:] but then it would be possible to do something as simple as a grep across a database and if you're interested in you know which [disfmarker] which [disfmarker] which ones this person serves as an advisor in and versus you know a l uh s uh whatever, a leader, speaker, whatever, [pause] you could just grep for "role" and [disfmarker] and you know, some [disfmarker] some particular value. [speaker004:] Mm hmm. [speaker002:] So it sounds like having the facility is great, [speaker003:] Mm hmm. [speaker002:] and then over a period of time as the research happens on this we might develop [pause] some of these categories that would be generally useful [speaker004:] Some more. [speaker002:] and it [disfmarker] it sounds like a good thing to do. [speaker006:] And [disfmarker] [speaker004:] OK. [speaker006:] Interesting. Good. And systematize also maybe the encoding of it if [disfmarker] as we find some are useful and some are not so useful. [speaker004:] Nope. Yeah, so what I have right now is all pairwise, [speaker003:] Mm hmm. [speaker004:] so that's probably not sufficient. [speaker006:] I think you don't need to have that much structure built in, and that would be less time consuming to do it this way first, and then find out what structure's needed. [speaker004:] Yeah, maybe not. Yep. OK. [speaker005:] what about all of the meetings that have been recorded so far that [disfmarker] without this? 
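The two ideas above, pairwise relationship records ("Source, Adam Janin; Destination, Morgan; Relationship, Advisor") plus a loosely searchable "role" field that can be scanned grep-style across meetings, could be sketched as follows. The record layout and the sample meeting IDs are illustrative assumptions, not a real Meeting Recorder DTD.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Relationship:
    """A pre-existing pairwise relation, following Adam's example."""
    source: str
    destination: str
    relation: str  # e.g. "advisor", "spouse", "same football league"

@dataclass(frozen=True)
class Role:
    """A meeting-specific role, open-ended as Jane suggested."""
    person: str
    role: str  # e.g. "leader", "presenter", "semi-outsider"

def people_with_role(meetings, wanted):
    """Scan every meeting's metadata for a given role value, the
    'grep across a database' idea. `meetings` maps a meeting id to a
    list of Role/Relationship records."""
    hits = []
    for meeting_id, records in sorted(meetings.items()):
        for rec in records:
            if isinstance(rec, Role) and rec.role == wanted:
                hits.append((meeting_id, rec.person))
    return hits
```

Keeping `role` a free string rather than a fixed vocabulary matches the decision to stay open-ended first and systematize the encoding later, once the research shows which categories are useful.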
Will [disfmarker] Will somebody go back and fill that in, or we just won't have it, or [disfmarker] [speaker004:] Well, I think that in a lot of the meetings people won't be filling this out anyway. So really what I want to have is a place that if a researcher wants that information, they can add it. [speaker005:] Mmm. [speaker002:] Almost all of the meetings that we have are recorded with roughly this group [speaker004:] Right. [speaker002:] uh and uh [pause] you know and [pause] a fair minority with the group that we have in the morning, so this is not a lot of people. We could easily [disfmarker] There's just a couple other meetings that [speaker004:] Right. So it [disfmarker] We can regenerate a lot of this information. [speaker005:] So it wouldn't be that hard to go back and [disfmarker] Yeah. [speaker002:] Yeah. [speaker004:] But [disfmarker] Even if we can't, generate it, I think it's alright to have some of the sections in your database blank, that this information was not collected for this meeting. [speaker005:] Mm hmm. [speaker004:] And there's just not gonna be a way around that. That's just gonna happen like the N S A meeting, the networks meeting. I don't think they're gonna want to fill out that information. [speaker005:] Yeah. [speaker004:] So. OK, enough on forms. [speaker002:] OK, so we're running kind of late, [speaker003:] OK. [speaker002:] so we'll probably wanna zip through some other things, maybe do some more time on [disfmarker] on Jose's stuff later, um [pause] uh Status of Transcription, Uh, I guess, you know, we [disfmarker] we recently woke up um our friends and uh so I [disfmarker] I guess [disfmarker] you've sent off this [disfmarker] the uh CD ROMs and you got some response from [disfmarker] from uh IBM about [disfmarker] [speaker004:] They haven't received them yet, but I did get a response from them saying "Yes we're still alive, we haven't done anything yet." basically. [speaker002:] Yeah. 
[speaker004:] And uh [disfmarker] [speaker002:] Was there some suggestion that they might soon, or [disfmarker] [speaker004:] Yeah. [speaker002:] Yeah. [speaker004:] So [disfmarker] so, they've transferred the responsibility to someone else. I [pause] can't remember his name McCoond? [speaker002:] McCoond [speaker004:] McCoond? [speaker002:] Yeah. [speaker004:] Something like that, but uh [disfmarker] And so he sent me email and said he's gonna be starting to work on it, and I haven't received anything else. [speaker002:] Great, OK. [speaker004:] So. [speaker002:] Well, that was that one. [speaker003:] Mm hmm. [speaker002:] Um, [speaker004:] So tomorrow I'll send a email and just ask if he received it, he should be getting it today. Actually he should have gotten it yesterday. But. Give him a day. [speaker002:] um, and I think we should just sort of look at this thing from you, off line Jose cuz it's getting kind of late [speaker003:] Yeah. [speaker002:] but I mean I think the bottom line just glancing through it is that there's [disfmarker] there's a lot of overlap, in just energy related things [speaker003:] Yeah. [speaker002:] and so you need something else. I mean, that's sort of [disfmarker] [speaker003:] OK. [speaker002:] right? And uh what [disfmarker] the [disfmarker] so I'd [disfmarker] [speaker005:] What about that error that [disfmarker] that uh [pause] the [disfmarker] the supposed lub [speaker002:] The residual error? [speaker005:] no, the supposed bug [pause] that was [disfmarker] that we were [disfmarker] We were guessing you had a bug before because [pause] you went through and you concatenated all of these things together [disfmarker] [speaker006:] Mm hmm. [speaker003:] Ah, yeah yeah. I [disfmarker] I found the error, and I repeated the experiments eh and here, eh [pause] you can find the new results eh [pause] with [disfmarker] [speaker005:] Ah. [speaker002:] OK, so this is post bug fix. [speaker003:] Yeah, yeah. OK? [speaker002:] Good, OK. 
So um [disfmarker] so I had two thought just glancing through this. So, one is [disfmarker] one is that um that uh [pause] we might wanna talk about normalization. I mean, I don't know what normalizations you use. Sometimes things have more overlap before you've done some kind of normalization and we might think about what different kinds of normalizations are possible. [speaker003:] Yeah. [speaker002:] And the other thing is that, if that isn't [@ @] uh so much of the issue, then probably [disfmarker] [speaker003:] Yeah. [speaker002:] I mean, residual LPC energy and just plain energy are very closely related, even though they're different and um so um [pause] you really may need to go to something that looks at i in some sense harmonicity, or um [pause] um [nonvocalsound] relationship to or [disfmarker] fit [disfmarker] fit to pitch tracks on uh [disfmarker] on either side of [disfmarker] of it, for instance? [speaker003:] Yeah. [speaker004:] Hmm. [speaker003:] Yeah. [speaker002:] You know, so if [disfmarker] if there's uh a really bad fit between where the pitch is going in this suppose [disfmarker] uh hypothesized region of overlap, and the [disfmarker] the [disfmarker] and one or the other uh side, you know, then this [disfmarker] this [disfmarker] this might uh, suggest a [disfmarker] [speaker003:] Another side. an alteration in th in the tracking way. [speaker002:] Excuse me? [speaker003:] You mean that eh you ehm [pause] mmm [disfmarker] I understand that eh you [disfmarker] you mean eh to [disfmarker] to study the alteration between the [disfmarker] the tracking of the pitch eh if we compare the overlapping zone with a speaker zone. [speaker002:] I if you have two [disfmarker] if you have two [disfmarker] if you have [pause] uh [pause] two people speaking uh [pause] and there's an overlap, then [disfmarker] I mean the first thing is that there should be a mixture of harmonics [pause] uh during overlapped voice sections anyway. 
[speaker003:] in the co in the context. Yeah. Mm hmm. [speaker002:] And um the [disfmarker] any [disfmarker] a measure that you have of how much of the energy is due to your best guess at a particular harmonic sequence, it's also going to be a smaller fraction. This is, you know, related to the harmonicity sort of thing. [speaker003:] Uh huh. [speaker002:] And then the other thing I'm saying is that if you look at the temporal evolution [pause] of the pitch there should be something like a discontinuity there. [speaker003:] Yeah. Yeah. [speaker002:] Now, there's other places where you'll have discontinuities in pitch and speech, but [disfmarker] but [disfmarker] maybe it'll have something to [disfmarker] have a different character. But it seems like energy [disfmarker] energy tells you something, but uh if [disfmarker] unless this is a normalization problem, it looks from your results like there's so much overlap that [pause] certainly you need something that isn't energy like, in addition to it, at least. [speaker003:] Uh huh. OK. [speaker005:] Are [disfmarker] Are the things in the [disfmarker] [speaker003:] I don't normalicized eh in these er experiments. by the moment. I don't normalicize. [speaker002:] OK, so the question is [disfmarker] is [disfmarker] would some kind of normalization [pause] help. [speaker003:] Yeah? [speaker002:] You know, uh, i i it could be that if you normalized uh by the overall um [pause] energy uh for uh you know some [disfmarker] some longer period of time or something, uh [pause] that [disfmarker] that there would be more of a distinction. [speaker003:] Nnn. Yeah. The different zone. [speaker002:] uh, I [disfmarker] I don't know. [speaker005:] So, uh, from [disfmarker] from these charts, are [disfmarker] [speaker003:] Yeah. 
[speaker005:] hmm, I'm just trying to remember where we were, but uh, were [disfmarker] you were interested in finding out if you can tell the difference between [pause] overlap and single speakers by looking at the e energy, like say, for example, the mean of the energy. Was that the idea? [speaker003:] Yeah. [speaker005:] OK. [speaker003:] Yeah. [speaker005:] So, if you look at [disfmarker] on this page that has uh frame energy, you look at the mean for the overlap, that's forty four point sixty two versus the mean for the speakers, thirty nine point sixty two, [speaker003:] Yeah. Yeah. [speaker005:] so there seems to be more energy when there's overlapping which makes sense, [speaker002:] Which is the intuition, [speaker003:] Yeah. [speaker002:] un unfortunately there's a standard deviation of like ten, [speaker003:] Yeah. [speaker005:] but [disfmarker] [speaker003:] Yeah. [speaker005:] Yeah, [speaker006:] Oh. [speaker002:] so [pause] [vocalsound] For each one, [speaker003:] The product of the deviation is the vari is the variance. [speaker005:] different than standard deviation. [speaker002:] yeah. [speaker005:] Yeah. [speaker001:] But are we assuming that you could know the energy for a speaker when there's no overlap? I mean I can imagine a model where you say, OK, you know I'm [disfmarker] I'm gonna give myself [disfmarker] [speaker004:] And [disfmarker] [vocalsound] [vocalsound] [vocalsound] And [disfmarker] And did [disfmarker] is this from the mixed signal? [speaker001:] it's cheating, but not completely if they're often not overlapping and normalized for the speaker [speaker003:] Yeah. [speaker001:] or is that not allowed. I mean, what's the [disfmarker] [speaker002:] Well, that's what I was suggesting, that y you wanna do some kind of longer time normalization eh with the [disfmarker] [speaker003:] Yeah. Yeah. 
But [disfmarker] [speaker002:] even if you make a mistake occasionally, that [disfmarker] that i is roughly i is there some corresponding to that speaker, so that if you, if [disfmarker] if uh [disfmarker] [pause] if you [speaker001:] right [speaker003:] Yeah. [speaker002:] because that [disfmarker] that's gonna spread out these distributions really a lot, to not do any kind of normalization, [speaker001:] right [speaker002:] and so that [disfmarker] that could mask the effect somewhat. [speaker005:] so the um [disfmarker] just one more clarifying question [disfmarker] On the overlap category, [pause] is that overlapping of speakers only? [speaker003:] Yeah. [speaker005:] It doesn't include some of the other sounds? [speaker003:] No. [speaker005:] OK. [speaker003:] It's a pu pure overlapping zone. [speaker002:] So that's why it doesn't add up down [disfmarker] [speaker003:] I mean that eh overlapping between two, three, four eh eh speaker. [speaker005:] OK. OK. [speaker003:] Th Yeah. [speaker004:] OK, so there is some normalization already, in that they've been volume equalized uh over the entire signal. [speaker001:] Right, right. [speaker003:] Yeah. [speaker001:] Right. That's what makes it [pause] seem difficult, [speaker003:] Because they [disfmarker] they're [disfmarker] [speaker001:] because you're already left with a high standard deviation [speaker003:] Yeah. Yeah. [speaker001:] and they're already roughly so that each person's hearable, right? [disfmarker] so sort of equally loud, perceptually anyway. [speaker004:] Right. Right. [speaker006:] What about doing it with just the single channels? [speaker003:] Sorry? [speaker006:] What about, uh [speaker003:] Say this graphic? [speaker006:] Well, I [disfmarker] I'm thinking [disfmarker] so they're raising the question about the fact that if you use the mixed signal, there's already been a [disfmarker] [speaker003:] Yeah. 
I use the mixed signal without normalization [disfmarker] without normalization, [speaker004:] Well, but the mixed signal already has a normalization in it. [speaker006:] I [disfmarker] I'm wo [speaker003:] but [disfmarker] but [speaker006:] Exactly. So what I'm wondering is what about doing some of this with the single channel recordings. [speaker003:] Ah. I [disfmarker] I don't study the single channel yet but eh is eh [disfmarker] is an idea. [speaker004:] Well, I think it'll be worse, right, I think the mixed [disfmarker] [speaker003:] to [disfmarker] to begin to work with the [pause] eh [pause] with the one single channel. Now. [speaker002:] I have another [disfmarker] another thought. Um, this is [disfmarker] this is frame energy, of course frame [disfmarker] frames have a lot of variance to them because different sounds are different loudnesses. So I mean, what if you took uh one second chunks or something like that. [speaker003:] Yeah. [speaker002:] Or [disfmarker] or maybe not even a second, maybe a quarter of a second or something, so that you would typically have two or three basically syllable kind of length [disfmarker] [speaker001:] Yeah, like two hundred milliseconds [speaker002:] Yeah. [speaker001:] or twenty to twenty five frames, [speaker002:] Right. [speaker004:] Or window it [disfmarker] [speaker001:] or [speaker003:] Yeah. [speaker002:] So then you [disfmarker] you'll [disfmarker] you know [disfmarker] [speaker004:] hamming window. [speaker002:] Something, [speaker003:] Yeah. Yeah. [speaker002:] yeah. So [disfmarker] so I mean, so you know two hundred, three hundred four hundred mill some longer chunk of time, and then you looked at the amount of energy in that. and how does that vary over these different cases. [speaker003:] Uh huh. [speaker001:] or voiced regions, I mean can we tell? [speaker002:] or voiced regions. [speaker006:] Interesting. 
[speaker001:] can we [disfmarker] sorry I mean, is it known whether something's voiced here, or is it roughly estimate [disfmarker] estimateable? [speaker002:] Well, I mean, you could do a [disfmarker] you could first determine that something was voiced or not. [speaker001:] I mean like if one speaker's voicing and the other one isn't and they're overlapping, do we know that the first speaker's voicing, or not? [speaker002:] Well, I mean the overlaps are typically more than just a tiny little bit of time [speaker003:] Ah. [speaker002:] and so uh there could be voiced and non voiced pieces during the overlap. [speaker004:] Huh. [speaker001:] Mm hmm. But will we be able t to detect the voicing when there's overlap? That's what I just wondered. [speaker002:] I don't know, because it's [disfmarker] it's [disfmarker] this [disfmarker] this has to do with these questions of harmonicity, and so forth, [speaker001:] OK. So if we could, or if you could do some of it, [speaker003:] Yeah. Yeah. [speaker001:] yeah, [speaker003:] Yeah. [speaker002:] right? [speaker001:] then um, voiced regions would be, like, probably more informative, for energy [speaker002:] Yeah. [speaker001:] because [disfmarker] [speaker002:] Right, so you could do something like, uh [speaker001:] and pi and pitch for that matter. [speaker002:] but [disfmarker] Sure. But I think what will be easy for him to do would be to do this thing of looking over a large enough region of time. Cuz if you look over a large enough region of time, most of the time you'll have some voiced [disfmarker] a fair amount of voiced uh energy in it. [speaker004:] Right. [speaker002:] And so I think that would be [disfmarker] [speaker004:] And he has on his graph something like forty percent. It looked like fifty percent or more are point two seconds or longer overlap It's li I think the last page. [speaker003:] Yeah. [speaker002:] Is it? [speaker006:] That'll also lessen the [pause] standard deviation. 
[speaker003:] Here in the [disfmarker] yeah it [disfmarker] it's a new [disfmarker] [pause] new [disfmarker] a new diapositive with information about the context for overlapping zone. [speaker004:] Nope, not the last page. [speaker003:] Um. [speaker004:] Where was it? It was one of the tables. [speaker002:] What? [speaker003:] How define? [speaker002:] Are you using a pitch detector in this yet, in your experiments? [speaker003:] I Eh [disfmarker] [speaker002:] Do you have a pitch detector that you're using? [speaker003:] j my? [speaker002:] Do you [disfmarker] do you have a [disfmarker] a pitch detector that you are using in these experiments? [speaker006:] Pitch. [speaker003:] eh, I [disfmarker] I'm [disfmarker] working now with pitch detector. [speaker001:] Well, it's the fifth page, I guess. [speaker002:] Yeah. [speaker003:] I'm working now, but eh I haven't eh results yet. [speaker004:] Fifth page. [speaker003:] But eh at this moment I [disfmarker] I [disfmarker] I prepare, I am [disfmarker] I am preparing the [disfmarker] the pitch eh eh tracking. pitch tracker eh algorithm that I have [speaker004:] Oh, it's seventy five percent! [speaker002:] Does [disfmarker] does it have a voiced, unvoiced uh detector in it, [speaker003:] but eh [speaker002:] uh? [speaker003:] No. [speaker002:] It doesn't. it just finds the pi it just finds the pitch even when it's unvoiced? I mean, it must have some [disfmarker] It has to tell you something, [speaker003:] eh, I [disfmarker] I [disfmarker] [speaker001:] It has to tell you. It has to give you at least a zero, one, [speaker003:] Yeah. [speaker002:] right? [speaker001:] I mean uh d binary [disfmarker] [speaker002:] Yeah. [speaker003:] It's based on uh a correlation between frame the utterance [speaker002:] Yeah. [speaker003:] but eh I don't know what is ehm [disfmarker] eh [disfmarker] [speaker002:] Right. [speaker003:] because it's no my [disfmarker] my algorithm. [speaker002:] Yeah. 
So ordinarily in these things, it does tell you that it's [disfmarker] it's voiced with a pitch of such and such. [speaker003:] Is [disfmarker] [speaker002:] And it will [disfmarker] i i it will say something like zero [speaker003:] Uh huh. [speaker002:] or you know it'll have something that it says when it's [disfmarker] when it can't find a good [disfmarker] good uh uh period, [speaker003:] Uh huh. [speaker002:] right? So it's looking for some periodic behavior, and if it can't find a period it tells you for [disfmarker] for most things like that. [speaker003:] Uh huh. For most [@ @]. [speaker002:] So I mean, this is, getting back to what Liz was suggesting which I think actually is good, the more I think of it, that [disfmarker] that um [pause] for normalization you could do something like take voiced sections and normalize to equalize the energy in that. I'm just still going back to normalization, that even though it's roughly normalized, uh, for overall gain, I think it it's [disfmarker] it [disfmarker] it may not be normalized enough. [speaker003:] Uh huh. For b for both, for eh the frame energy and eh residular LPC eh energy too. [speaker002:] Uh, for both, yeah. [speaker003:] For both. [speaker002:] Yeah. Yeah. [speaker003:] but eh I test different normalization I [disfmarker] eh What kind of normalization? I [disfmarker] I [disfmarker] [speaker002:] OK, so [disfmarker] so I think I mean, [@ @] this is something to [disfmarker] to experimentally determine, [speaker003:] This is [disfmarker] Yeah. [speaker002:] but I mean, if um u [speaker003:] Yeah. [speaker002:] as I understand it, you have [disfmarker] you have some regions that uh um are marked, as [disfmarker] You have training data that you're trying to find a threshold, right? and so um um if you take um all the things [disfmarker] all the uh speak things marked "SPK", and uh look for the voiced uh energy in them, [speaker003:] Uh huh. Uh huh. 
[speaker002:] and uh well, just in [disfmarker] in each case, take a [disfmarker] take a [disfmarker] take a frame, take the uh [pause] uh if it's [disfmarker] if it's voiced, include it, [disfmarker] Just separate out from the [disfmarker] from the normal things, [speaker001:] Mm hmm, mm hmm. [speaker002:] separate out voiced. [speaker003:] Yeah. [speaker002:] Right? Then you're gonna end up with uh a [disfmarker] a uh [disfmarker] a mean uh standard devia deviation, and so forth. And then do your normalization based on that. [speaker003:] Yeah. Yeah. [speaker002:] And once you do that just for voiced, um um I guess you'd [disfmarker] [speaker001:] You can also look at just the voiced regions later. I mean, that would be the most sensitive measure. because you know, if you're normalizing for voiced, and then you look at regions that are voiced with the same robustness in estimation of what voicing is, um, you should be much more sensitive to the [disfmarker] to the overlaps then if you [disfmarker] [speaker003:] Yeah. [speaker001:] you know if you normalize but you look everywhere at [disfmarker] at [disfmarker] at fricatives or something you won't really know anything. [speaker002:] Yeah. [speaker001:] So I mean assuming if it's true that the overlap regions are sort of long enough that there's some voicing in each of them, by each of the speakers, then you should do very well if you do this. [speaker002:] Right. Right. Right. [speaker001:] But you'll only know that in the re voiced regions [speaker003:] Hmm. [speaker001:] but those are close enough to the [disfmarker] You know, there's enough voicing going on and off uh that if you can capture the voicing and if you can capture it when there's overlap then you should do quite well there. [speaker003:] Uh huh. Mmm. [speaker006:] I'm wondering about um if there [disfmarker] I suppose this is still just that one [disfmarker] that one data sample, right, that one meeting? [speaker003:] Yeah. 
[speaker006:] But it seems like to have [disfmarker] it [disfmarker] it seems like a very hard test to have it be looking at the energy of all the speakers combined rather than taking s you know, maybe two extreme speakers out of the mix and seeing if if it's promising with respect to [pause] that type of analysis. [speaker002:] w [speaker006:] If you take like a target speaker, and [pause] maybe [speaker002:] Yeah. [speaker006:] well, I mean, if there's enough data, maybe two [disfmarker] two target speakers and then compare that to the mixture of all the speakers. [speaker002:] You could do that, yeah. I mean, be [disfmarker] i And if they still overlapped, i i the [disfmarker] [speaker006:] That's hard. [speaker002:] But [disfmarker] but I [disfmarker] I think that [pause] this is a good first thing, to just sort of see. And [disfmarker] and what this says is that [disfmarker] th that without any special normalization at least, this is [disfmarker] there's a lot of overlap here. [speaker003:] Yeah. [speaker006:] OK. [speaker002:] And so [disfmarker] I mean coming back to Jose's question, I mean there's still this issue of [disfmarker] of [disfmarker] of how much time [disfmarker] I mean [disfmarker] What do you look at in order to normalize? I mean, dur you [disfmarker] you get some data in, you don't know which it is, how do you [disfmarker] how do you [disfmarker] W w what period of time do you look over to normalize it by? [speaker003:] Yeah. [speaker002:] And [disfmarker] and uh, [pause] that's [disfmarker] that's something to think about and experiment with. [speaker003:] Yeah. [speaker001:] I think also the harmonics would be very important. So if you can find voiced regions and [disfmarker] [speaker002:] Yes. 
[speaker001:] You know, if you have two speakers and they're both voicing unless they have very similar, you know, pitch, you should be able to see the [pause] difference in harmonics pretty clearly [speaker002:] Right, so that's what we were talking about before with harmo with harmonicity measure. [speaker001:] just by counting them, [speaker004:] Um [disfmarker] And [disfmarker] the length [disfmarker] [speaker001:] I mean [speaker003:] Yeah. [speaker001:] yeah, so even if you're not exactly sure when the overlap starts and ends because there's some [pause] non voiced regions, at least you can [pause] have these islands of reliability where you're pretty sure. [speaker002:] Yeah. So. [speaker003:] Yeah. [speaker004:] It seems like with the overlaps being fairly long, that should make it easier. [speaker003:] Yeah. Yeah. [speaker001:] Right, [speaker004:] That r that was an interesting statistic. [speaker003:] Yeah. [speaker001:] and you could put an estimate [disfmarker] estimate, you know, a fuzzy start and end time. [speaker003:] Yeah. Yeah. [speaker004:] That they're much longer than I expected. [speaker003:] But this is very short. [speaker004:] Ninety percent of them are over two m two hundred milliseconds. [speaker003:] Yeah. Yeah. [speaker002:] Yeah. [speaker003:] Is a problem to detect. [speaker001:] Well, it's sort of good, though, if you're [disfmarker] [vocalsound] [pause] for [disfmarker] for your accuracy. [speaker003:] But [disfmarker] It's a problem to [disfmarker] It's a problem to identify and to detect, [vocalsound] too, with eh the Javier system, for example. [speaker004:] It is good, but it's uh [disfmarker] but it surprised me. [speaker001:] Uh, Gives you a little more time to try to detect them. [speaker004:] Yes. For example. [speaker003:] For example. It's a problem. [speaker002:] Yeah. Uh, well. [speaker003:] Uh Eh, what [disfmarker] [speaker002:] Good. [speaker003:] sorry. [speaker002:] Yeah. 
[speaker003:] What [disfmarker] what happened with the different contexts because eh [pause] eh [pause] you are talking about eh the period or the [disfmarker] the duration of the [disfmarker] the window to eh the [disfmarker] the long of the window to [disfmarker] to consider normalization no? [speaker002:] Yeah, and I [disfmarker] I [disfmarker] [speaker003:] But eh we have different contexts. The [disfmarker] the [disfmarker] the [disfmarker] the period [disfmarker] the [disfmarker] the [disfmarker] the length of the window to [disfmarker] to normalize eh the energy is eh in the context of the overlapping zone? Is eh in the [disfmarker] in the [pause] left context as in th in the right context, considering eh the [disfmarker] these frames? You mean? [speaker002:] I don't know. I [disfmarker] I wanna think about it and you should think about it. [speaker003:] Yeah. [speaker002:] I [disfmarker] I [disfmarker] I didn't want to just pop out an answer because I realized I [disfmarker] I hadn't thought about it enough. [speaker003:] OK. [speaker004:] Two hundred milliseconds, hamming window. [speaker003:] Because [disfmarker] because the [disfmarker] the [disfmarker] the problem is that we have different contexts. [speaker001:] Yeah, something [disfmarker] [speaker003:] Different situation, and [disfmarker] [speaker001:] somethin something like that [speaker006:] That's true. That's complicated. [speaker001:] too. [speaker005:] Just out of curiosity, what was the bug? [speaker002:] I'm not sure. [speaker003:] Yeah. [speaker005:] What was the bug that you found? [speaker003:] What the bu [speaker005:] The bug from last uh meeting? [speaker003:] Ah, the bug. The bug of [disfmarker] was the [disfmarker] the F seek function in C. 
because doesn't work uh when you eh [disfmarker] you try to [disfmarker] to [disfmarker] to ass n n to do an [disfmarker] a dated access to [disfmarker] to a file, a big file, eh eh moving the [disfmarker] the head of the [disfmarker] in the hard disk, eh e forward and backward a lot of time [speaker005:] Mm hmm. Mm hmm. [speaker003:] eh in a moment [vocalsound] F seek function doesn't work. [speaker004:] Hmm. [speaker003:] and I had an problem. [speaker006:] Ah, that's interesting, and it didn't give you good error messages. [speaker003:] Yes, this is [disfmarker] this is [disfmarker] it's incredible. [speaker006:] Huh. [speaker005:] Hmm! [speaker003:] I [disfmarker] I [disfmarker] I do the same in [disfmarker] in another [disfmarker] in another way [speaker005:] So it was positioning randomly? [speaker003:] and I haven't problem. [speaker004:] Did you typecast to long? or size T or whatever it is? I don't remember what it is. One of the problems, if the file is large, um, and you're cast to the wrong type. [speaker003:] I [disfmarker] I [disfmarker] I don't know, but uh I [disfmarker] I think the [disfmarker] the position in the file eh doesn't work eh eh in long file, a very long file. [speaker002:] Well, what he's saying is that the typecasting could be the issue, for F seek. [speaker004:] Right, if you're sending at [disfmarker] an integer instead of a long. [speaker003:] The typecast. [speaker004:] Typecast from integer, "int" INT or [pause] long or unsigned long. [speaker003:] Yeah. eh it's ehm [pause] No, no, it's unsigned. [speaker004:] I think it's [disfmarker] Yeah. Well, It's one possibility. [speaker003:] Signed. [speaker002:] Yeah, cuz I think in [disfmarker] I mean, in general F seek works, [speaker003:] I [disfmarker] I [disfmarker] I don't know [speaker002:] so it's [speaker004:] Yep. [speaker003:] I don't know but eh I [disfmarker] I solved the problem with eh a similar way and [vocalsound] I haven't problem [speaker002:] Yeah, Yeah. 
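[The typecasting explanation of the fseek failure discussed above can be made concrete. fseek() takes a long offset, so any offset arithmetic done first in int silently overflows on large files before the value ever reaches fseek(). A minimal sketch -- the frame sizes here are made up for illustration, and it assumes a platform where long is 64 bits (LP64); for files past 2 GB on 32-bit-long platforms, fseeko() with off_t is the portable fix:]

```c
#include <stdio.h>

/* fseek(FILE *, long, int): if the offset is computed with int
 * arithmetic, it wraps on big files before the implicit conversion
 * to long -- one plausible cause of the bug described above.
 * Casting one operand to long forces the multiply into long. */
long frame_offset(int frame, int frame_bytes) {
    return (long)frame * frame_bytes;   /* promote BEFORE multiplying */
}

/* Buggy version for contrast (do not use): frame * frame_bytes is
 * evaluated entirely in int, overflowing for products past INT_MAX:
 *   long off = frame * frame_bytes;    // already wrapped
 */
```

[Usage would be `fseek(fp, frame_offset(i, bytes_per_frame), SEEK_SET)` instead of computing the product inline.]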
[speaker004:] Mm hmm. [speaker002:] Yeah. [speaker003:] but eh I [disfmarker] I [disfmarker] I don't know because eh eh it doesn't eh happen to me before. [speaker002:] Yeah. [speaker003:] but eh [pause] I [disfmarker] I [disfmarker] I have the same codes in different parts, [speaker002:] Anyway, [speaker003:] and uh I [disfmarker] I had to [disfmarker] to [disfmarker] to change all [disfmarker] all the points in the [disfmarker] in the code, eh to substitute eh for another way, [speaker004:] Yep. [speaker003:] but eh [disfmarker] is [speaker002:] Anyway, we're [disfmarker] we're missing snacks here, so uh [disfmarker] [vocalsound] let's [disfmarker] let's [disfmarker] let's [disfmarker] let's do our [disfmarker] let's do our digits [speaker003:] Oh, OK. [speaker004:] Shall we do digits? [speaker003:] Sorry. [speaker006:] Oh, digits. [speaker002:] and [disfmarker] and uh [speaker006:] OK. [speaker004:] OK. Are we done? Yes, OK. [speaker002:] Yep. [speaker003:] OK. [speaker004:] OK, we are going off. [speaker007:] I was just thinking that [comment] in the meeting you could like [comment] have this secret thing that would play music if you're [disfmarker] [vocalsound] if you got bored. [speaker006:] Yeah, that's right! [speaker007:] Give you stock quotes [@ @]. Nobody would know, right? [speaker006:] Thanks. [speaker007:] No, I mean, just like no one would know if something's actually [comment] coming through if you got bored or something, cuz [disfmarker] [speaker002:] Ch ch ch, ch ch ch, ch ch ch, ch ch ch [speaker003:] Well, it's being recorded anyway, right? [speaker007:] Right. [speaker003:] Why listen to the guy? [speaker007:] Right. [speaker002:] Yeah, right. [speaker007:] You sit at the meeting but you just [disfmarker] [speaker002:] Yeah, that's right. [speaker007:] Yeah, you just say, "Mm hmm. Mm hmm". [speaker002:] You need me, yeah. [speaker003:] If you need it you may record it. 
[comment] Then we can get Liz's X marks later to see if there are any important things. [speaker007:] Right. Right. [speaker008:] Oh, [speaker006:] So. [speaker008:] so, [speaker002:] I really count. [speaker008:] you have to turn those things on if they're not on already. Are they [disfmarker] [speaker003:] Mine is on. Where am I? [speaker008:] There's a switch on top. [speaker003:] Which one? [speaker006:] There's a switch on top. So. "On off". [speaker003:] Which one? [speaker004:] Hello? [speaker001:] This is on. [speaker008:] OK, [speaker001:] Hi, [speaker008:] Looks good. [speaker001:] hi, hi. [speaker003:] Channel [disfmarker] Oh, I'm on two. Good. [speaker001:] Hello? Hello? I'm [disfmarker] I'm on zero! Yes! [speaker003:] You are on zero. [speaker001:] Oh, uh I'm being recorded by the wire. [speaker005:] Hello? [speaker006:] The [disfmarker] the wireless ones are actually labeled. [speaker007:] Hello? [speaker008:] Yeah. [speaker004:] I'm on channel four? Channel four, channel four, [speaker003:] Oh, they're already labeled? [speaker004:] channel four? [speaker006:] They have a little piece of tape. [speaker003:] Yeah, that's what I figured out. No? [speaker004:] Why? [speaker002:] And is that the channel number or the microphone number? [speaker003:] I'm, uh [disfmarker] [speaker004:] That was good. [speaker003:] Well, I mean, you're counting maybe from one instead of zero, [speaker006:] That's r exactly the problem. [speaker004:] Hey. [speaker003:] so. [speaker006:] Subtract one from the mike number to get the channel number. [speaker004:] Hey. Hey. You know, I have a feeling we're on reversed. [speaker002:] Right. [speaker005:] Right. [speaker003:] Right. [speaker008:] Yeah. [speaker007:] And what about these? [speaker001:] and Mike are coming on my channel. [speaker006:] Or maybe add one. [speaker008:] No, no, no, no, no, no. [speaker004:] Does yours say "three"? [speaker007:] Where's the [disfmarker] [speaker005:] No, no. 
You subtract [disfmarker] He was just saying trap subtract one. [speaker004:] No? [speaker008:] But you have to [disfmarker] the channel [disfmarker] uh the number on the mike is the number on the screen plus one, of course. [speaker004:] Oh, thank you. Sss this is the problem. [speaker003:] Right. [speaker004:] Good. OK, good, good, good. [speaker006:] Of course. [speaker004:] Thank you. [speaker008:] Cuz, uh, otherwise we would've [disfmarker] [speaker006:] So I guess I am channel eight. [speaker005:] Hmm. [speaker004:] Whoa! I guess so. [speaker007:] Am I nine? OK. [speaker006:] You look ni You look nine. [speaker004:] Hello? Yeah, I'm number three. That's right. OK. [speaker003:] So, yeah. [speaker006:] Funny, you don't look a day over seven. [speaker007:] Well, thank you. This is kinda fun. [speaker001:] So I guess now we have a lot of censors self censorship going on. [speaker002:] Yeah, yeah. [speaker007:] No, But it won't last long. [speaker003:] No! [speaker008:] That's right. [speaker006:] Yeah, [speaker003:] This is [disfmarker] [speaker006:] so it's r it's remarkable how quickly people forget. [speaker001:] No, we got to relax and let go. [speaker007:] It probably [disfmarker] [speaker005:] Ch ch. [speaker007:] Right. [speaker002:] These are disturbing. Being just in front of your ears, it [disfmarker] you can [disfmarker] you can hear the shadow it creates. [speaker006:] So far, e [speaker002:] There was this like little acoustic shadow [speaker005:] Oh! Really? [speaker008:] Oh, god. [speaker002:] from having this in front, yeah. [speaker001:] Yeah? Uh huh. [speaker007:] You can hear it. [speaker005:] Uh. [speaker002:] Well, I can [disfmarker] I can feel it. [speaker007:] The rest of us [disfmarker] [speaker006:] You h you must have much better ear. And so far every meeting I've recorded someone has at one point or another said, "Oh, I probably shouldn't have said that." [speaker007:] That's amazing. 
[speaker002:] There's too much [disfmarker] [speaker006:] So I think they're [disfmarker] they're [disfmarker] [speaker003:] So, Harry, we should maybe record some of your clicks and, you know. [speaker005:] I've been clicking plenty. [speaker001:] He [disfmarker] he's a [disfmarker] [speaker005:] But they don't show up. [speaker001:] the sound effect man. [speaker004:] Wow! [speaker001:] Yeah. Alright. [speaker004:] That's great! [speaker005:] They don't show up on that thing. [speaker006:] Well, I [disfmarker] I bet they're recorded [speaker004:] Yeah. [speaker008:] Oh, yeah. [speaker004:] They'll be [disfmarker] they'll be deaf deafening the transcriber. [speaker006:] quite loudly. [speaker005:] Yeah, I'm sure [disfmarker] I'm sure they're recorded. [speaker006:] Yeah, so it [disfmarker] [speaker001:] They want to do something serious with this research. [speaker005:] They tend to clip. [speaker002:] Hmm. [speaker006:] That's a nice user interface, Dan. [speaker004:] That's [disfmarker] that's really something. [speaker006:] Very [disfmarker] what does "solo" mean? [speaker008:] Uh, it means I was trying to think of things they put on real mixers that I should have on mine. [speaker006:] Ah! [speaker004:] Oh. [speaker008:] But it it's really good. The solo actually [disfmarker] [speaker002:] So you can feed it to the [disfmarker] [speaker001:] Solo means you just [disfmarker] [speaker005:] Ahh. [speaker002:] feed it through the headphones, for example. [speaker008:] cuz it's actually [disfmarker] you know, it's [disfmarker] it's how to [disfmarker] it's what can I [disfmarker] what application can I have for a [disfmarker] for a, uh, a class method rather [disfmarker] or a class [disfmarker] you know, a class procedure. [speaker001:] can [disfmarker] [speaker008:] So the solo by sa satellite. Then if you click on one of the other ones, it clears that one. [speaker002:] Hmm. [speaker006:] Oh, but it doesn't actually do anything else? [speaker001:] OK. 
[speaker008:] Doesn't do anything, no, no. But I mean, it's [disfmarker] [speaker005:] So you've implemented radio buttons. [speaker004:] It looks pretty slick though. That's great. [speaker008:] Yeah. [speaker004:] Ca can [disfmarker] can we fast forward through this conversation? [speaker001:] So [disfmarker] [speaker006:] Oh, yeah, that's right. [speaker008:] Yeah, the uh, [speaker003:] Actually I think this thing is really cutting the blood flow. [speaker004:] No. [speaker008:] those [disfmarker] [speaker003:] I'm gonna pass out. [speaker001:] Wait, you you're going to be [disfmarker] you're going to be saying, you know, funnier things, [speaker003:] This is a very sensual thing. [speaker006:] Yeah, I think a lot o a lot of us ended up wearing it around your neck and then facing up. [speaker001:] so leave it on. [speaker002:] Hmm. [speaker006:] I kn and, uh, just because there doesn't seem to be a good way to mount it. [speaker003:] Maybe this is a little better. [speaker001:] Yeah, [speaker004:] That's really nice. [speaker001:] I mean [disfmarker] [speaker003:] Boy! [speaker001:] Clearly [disfmarker] I mean, my impression is i if I have to wear this thing, you know, more than five minutes, uh, uh, uh, I mean, I [disfmarker] I will be completely affected by it somehow. [speaker004:] Huh. [speaker001:] So. [speaker006:] This [disfmarker] this thing? [speaker001:] I wouldn't be [disfmarker] Yeah. [speaker006:] Or [disfmarker] or just that one? [speaker001:] Oh. That [disfmarker] that one. [speaker006:] Yeah. [speaker001:] The one I tried. [speaker004:] I mean, on [disfmarker] on the temple's the pressure? [speaker001:] I'd [disfmarker] They really press my temples, [speaker004:] Uh huh. [speaker007:] Oh yeah. [speaker001:] otherwise my ear channels, [speaker007:] That would give you a headache. [speaker005:] What happened? [speaker001:] otherwise my joints here. I mean, depending where I put it, [comment] it press different things, [speaker002:] Mmm. 
[speaker001:] but that's a [@ @] fine. [speaker004:] I think we have to add that to the [disfmarker] to the form they sign. [speaker001:] I mean [disfmarker] [speaker006:] So [disfmarker] so the [disfmarker] [speaker001:] Yeah. So, [speaker002:] Yes, that's right. [speaker001:] I mean, this [disfmarker] this thing of course is probably, you know, acoustically more challenging [speaker002:] The medical. [speaker006:] I think [disfmarker] Yeah. [speaker001:] but I like this. [speaker006:] Oh, yeah. But this is very comfortable. [speaker001:] Th [speaker006:] I mean, I [disfmarker] I could wear this without any difficulty. [speaker001:] Is it? [speaker007:] Can you convert these ear ones to something comfortable? I mean, just by putting little [disfmarker] [speaker005:] They have so much pressure at th it's [disfmarker] [speaker001:] But [disfmarker] but there [disfmarker] there I think, is a t you know, [speaker006:] I [disfmarker] [speaker003:] Yeah, this is just m bad design. This, um, this is [disfmarker] [speaker006:] We c we'd [disfmarker] we'd have to buy new ones. [speaker002:] Mmm. [speaker005:] Yeah. [speaker007:] Or j Yeah, I mean, you can just use the microphone and take the headband off. [speaker006:] The [disfmarker] the other wa the other way I had it, which might work, is you put one over an ear and then you can adjust the other one. [speaker004:] Uh oh. [speaker006:] And that didn't [disfmarker] that seemed to be pretty [disfmarker] [speaker003:] Ouch. [speaker006:] That doesn't work for you? [speaker001:] No. A Anything which is distracting or painful, [vocalsound] is going to impact negatively with this. [speaker004:] Yeah. [speaker002:] Mmm. [speaker007:] But maybe the [disfmarker] [speaker006:] So [disfmarker] so you see how Jane's wearing it? [speaker002:] Mmm. [speaker006:] That seems to work OK too. [speaker001:] Yeah. [speaker008:] Yeah. [speaker004:] It's very comfortable. 
[speaker001:] But I mean, [speaker006:] And you just have to bring it up. [speaker001:] but at that point [disfmarker] at that point it wouldn't be much different to [disfmarker] to wear this or that. [speaker003:] Yeah, go to that. [speaker008:] Right, right. [speaker006:] Well, except you can bring it quite a bit closer to your mouth. So. [speaker001:] Hhh. [comment] OK. [speaker008:] But, uh, yeah, but it's not such a [disfmarker] it's not such a directional microphone probably. [speaker001:] I mean, but probably this is designed in such a way that it should be closer. And this, eh, eh [disfmarker] but this is designed to be worn like this [disfmarker] to be worn like this. [speaker008:] Yeah. [speaker006:] Mm hmm. [speaker008:] Yeah. [speaker006:] Right. So. Yeah, we [disfmarker] maybe we should look into [pause] if we can buy a replacement. [speaker004:] Sure. [speaker006:] If they're eighty bucks, that might be worth doing cuz they're very uncomfortable. [speaker008:] Mm hmm. [speaker001:] I mean, I really like this. Uh I [disfmarker] I mean, if I have to design the thing, uh I [disfmarker] I would rather, you know, use this type of thing than the other ones. [speaker006:] Well, I mean, what you'd rather do is nothing at all, [speaker001:] And also i is [disfmarker] [speaker004:] Wow. [speaker005:] Just trying to give you a hard transcription task. [speaker006:] of course, but to get [disfmarker] to get a good quality [disfmarker] I mean, if [disfmarker] if you were listening to it downstairs, that one's quite a bit lower quality. [speaker001:] Yeah. [speaker005:] No. I'm sorry. [speaker001:] Yeah. [speaker004:] Hey, whi what [disfmarker] where are [@ @]? [speaker001:] Yeah. [speaker003:] What are you doing? [speaker007:] Yeah, yeah. [speaker001:] Yeah, yeah, I noticed that. [speaker004:] Do you s do you know [disfmarker] Have you studied a click language? [speaker006:] So. [speaker001:] I noticed that. 
[speaker004:] I don't [@ @] [disfmarker] [speaker001:] But, you know, that [disfmarker] that might be the challenging part. [speaker005:] No, I wish. No. [speaker004:] Wow. [speaker005:] No. [speaker006:] Yeah. [speaker005:] Just studied phonetics in general. [speaker001:] I mean, because otherwise you are [disfmarker] [speaker007:] If it didn't [disfmarker] if it didn't hurt, you know, if it was comfortable but you knew it was there, would it bother you? [speaker004:] OK. I'll just take [disfmarker] [speaker001:] If it was comfortable [disfmarker] OK. [speaker006:] Yeah. Well [disfmarker] well try this one on. [speaker007:] Because I actually think it sort of makes everybody act a little more professional. [speaker004:] I didn't know that you could whistle that [disfmarker] [speaker001:] OK. [speaker004:] You're a very soft whistler, you're mostly able to do that without much air at all, [speaker003:] Actually, I use something like this on [disfmarker] on the telephone and it's very comfortable. [speaker007:] Yeah. [speaker001:] I, uh, [speaker006:] I [speaker007:] Your telephone, [speaker004:] aren't you? [speaker001:] I mean, no. [speaker007:] and people use it for cell phones [speaker005:] Yeah, it's weird. [speaker008:] Yeah. [speaker005:] It's kind of, uh, [speaker007:] and. [speaker001:] I mean, [speaker005:] apical alveolar kind of thing. [speaker008:] Yeah. [speaker005:] I don't know. [speaker008:] Right, [speaker004:] OK. [speaker008:] cuz you've got [disfmarker] [speaker005:] It's [disfmarker] [comment] You know? [speaker004:] OK. [speaker001:] I [disfmarker] [speaker004:] Wow, that's amazing! [speaker001:] I [disfmarker] I mean, I'm kind of more deaf on this ear, whatever this is on. Uh, I could wear it this way. Still, I [disfmarker] I [disfmarker] I mean, I'm [speaker005:] Get a very high pitch that way too. And you get dogs to d cock their heads and stuff. [speaker008:] Yeah it is, [speaker001:] I [speaker008:] yeah. 
[speaker001:] I mean, it's my impression. I mean, of course there is a lot more work to do with this one [speaker005:] It's really good. [speaker004:] Oh, how fun! [speaker005:] Cuz you can get really high. [speaker004:] Can you project it pretty well? I mean [disfmarker] [speaker001:] but [disfmarker] [speaker005:] No. Not too well. [speaker004:] OK. OK. [speaker005:] Yeah. [speaker004:] But they're just very, very sensitive. [speaker005:] But it's that you get enough of the high frequencies. [speaker001:] I mean if [disfmarker] if you want something really [disfmarker] really nonintrusive, really nonintrusive, really something that [disfmarker] [speaker005:] Yeah, that you really get [disfmarker] you really get animals like [disfmarker] [speaker004:] Fascinating! Wow. [speaker007:] Mm hmm. [speaker005:] Yeah. [speaker006:] Well, I guess my feeling is that this is for collecting a corpus not for an actual end user application. [speaker004:] Huh. [speaker006:] And so I think a little bit of intrusion is not [pause] ridiculous. [speaker001:] OK. OK. [speaker003:] Not for this, for sure. No. I mean, it's just [disfmarker] But talking in general, yeah, that's, uh [disfmarker] [speaker002:] So [disfmarker] [speaker001:] OK. [speaker007:] I think you can't do any research otherwise. [speaker001:] Well [disfmarker] [speaker002:] What exactly is the purpose of the corpus? [speaker006:] Yeah, I agree. [speaker007:] It's just [disfmarker] [speaker001:] I mean, you think [disfmarker] you think if you [disfmarker] [speaker002:] What's the purpose of the [disfmarker] [speaker006:] Training s Training speech recognition systems. [speaker007:] It's too hard. It's too hard without [disfmarker] [speaker001:] Do you think this is going to be too f too [disfmarker] [speaker002:] Right. [speaker007:] I think you should collect both simultaneously. [speaker002:] But it will train them to the microphone characteristics of this c [speaker006:] We wanna do both. 
[speaker007:] And [disfmarker] [speaker006:] So [disfmarker] so the [disfmarker] the point is that if you [disfmarker] if we try, from the get go, to do it off the PZM, we won't get anything. [speaker002:] Right. [speaker001:] Mm hmm. [speaker007:] It's gonna be too hard to do it without a [disfmarker] Yeah, [speaker002:] Yeah. [speaker007:] forget it. [speaker002:] Yeah. [speaker006:] You won't be able to align. [speaker001:] Yeah. [speaker008:] So [disfmarker] no, I mean, the [disfmarker] [speaker002:] Absolutely. [speaker007:] You won't be able to get grants. [speaker002:] I agree. [speaker006:] You won't be able to segment. [speaker002:] Yeah. [speaker006:] You won't be able to do anything. [speaker002:] Yeah. Yeah. [speaker007:] You won't be able to get funding. [speaker001:] So [disfmarker] [speaker006:] Right! [speaker007:] No, I'm serious. [speaker008:] Hang on, hang on! [speaker002:] Yeah. [speaker007:] I mean, you won't be able to show any results. [speaker002:] Mmm. [speaker007:] So. [speaker008:] Oh, right, right, right. [speaker001:] And with this [disfmarker] with this you're saying, you are also in that situation? [speaker008:] I mean th [speaker006:] Well, I mean, we haven't done the [disfmarker] the detail experiment but just from listening to it, it's pretty bad, [speaker007:] It was pretty bad. [speaker001:] Yeah. Yeah. [speaker007:] It takes off quite a lot. [speaker006:] whereas these are m are very good. [speaker003:] Well, yeah, I mean, that's really like having one of these almost, [speaker006:] I mean, that's why [disfmarker] That's why if you [disfmarker] if you use DragonDictate or [disfmarker] or ViaVoice, or any of those others to do dictation you need to use these. [speaker003:] so. [speaker002:] Hmm. [speaker001:] Yeah, yeah, yeah. [speaker002:] Hmm. [speaker001:] Yeah. [speaker006:] There's just no other way around it. [speaker007:] Somebody will design a comfortable close talking mike. 
I mean it's [disfmarker] there are too many applications to [disfmarker] for them not [disfmarker] to not do that. [speaker001:] Mm hmm. [speaker006:] Well, and as I said, I find this one [disfmarker] [speaker007:] I mean, it may not look like this but. [speaker006:] I find this one is no problem at all. The only thing is [nonvocalsound] a little bit of audio [disfmarker] a [disfmarker] a little bit of audio degradation in one ear, [speaker007:] Yeah. [speaker008:] Yeah it's y [speaker007:] Yeah, this one's fine. [speaker001:] Yeah, yeah, yeah, yeah. [speaker007:] Right. [speaker006:] and I get a little bit of peripheral vision, but other than that, it's not bad at all. [speaker008:] Yeah. [speaker002:] Hmm. [speaker006:] Those, on the other hand, are very uncomfortable. [speaker007:] I [speaker004:] We could [disfmarker] [speaker007:] Yeah, those are pretty bad. [speaker001:] Yeah, those are really bad. [speaker006:] So. [speaker004:] Can [disfmarker] can't we buy an expander that you just [disfmarker] [speaker002:] Uh [disfmarker] I'm actually finding this quite comfortable now that I've got it [disfmarker] I've got it just right. [speaker006:] That's right. [speaker005:] I'm surviving. [speaker008:] It depends, yeah I've [disfmarker] [speaker001:] I think you have, you know, he would. [speaker005:] Yeah, I uh [speaker002:] It's just [disfmarker] [speaker003:] I think he's about to pass out. That's why [disfmarker] [speaker002:] U i [speaker005:] You guys gave up too soon. [speaker002:] It must have numbed the areas. [speaker005:] I think you just get used to it after five minutes. [speaker006:] So I [disfmarker] I guess [disfmarker] you know, maybe the problem is we got the kid's version. [speaker007:] Yeah. [speaker003:] Yeah. [speaker008:] No, we got a Japanese version, you know that's [disfmarker] that's it, [speaker007:] Sorry. [speaker001:] And so [disfmarker] and what about this one? [speaker008:] yeah. 
[speaker001:] I mean, this thing wearing sideway that [disfmarker] that can? [speaker008:] Uh the [disfmarker] [speaker006:] It's wired. [speaker008:] It's funny, I li I kind of like this. So it actually gets most of its strength from [disfmarker] [comment] from you know, the fact this thing's stuck in your ear. [speaker007:] Oh, this would drive me crazy. [speaker002:] Mmm. [speaker008:] And, yeah, [speaker001:] Oh. [speaker007:] I absolutely [disfmarker] [speaker001:] D uh does it have a speaker? [speaker008:] uh I [disfmarker] and originally [disfmarker] [speaker007:] people are very picky about this. [speaker001:] Does it have a speaker [speaker005:] Mmm. [speaker001:] or [disfmarker]? [speaker008:] Oh yeah, it does. When I bought those things I originally got this version, cuz Plantronics makes uh you know, an in the ear one. [speaker001:] Uh [disfmarker] Uh huh. Mm hmm. [speaker008:] And I really liked it but, other people didn't. So it just depends. I guess, you know, I've been using in the ear Walkman headphones, for you know, a long time, [speaker001:] Yeah. [speaker002:] Mmm. [speaker008:] so I'm kind of used to that weird sensation having something stuck in my ear. But if, uh, [speaker007:] But that's painful. [speaker006:] Whereas I used speech recognition for about four years using one of these, [speaker008:] If you're not used [disfmarker] [speaker007:] I can't [disfmarker] [speaker006:] so I'm used to this. [speaker001:] You do. [speaker006:] I don't use it anymore. [speaker001:] Y [speaker006:] Uh, my hands got better [speaker001:] Uh huh. [speaker006:] so I stopped using it. [speaker001:] OK. [speaker006:] But, uh. So, for about four years I used DragonDictate and NaturallySpeaking [speaker001:] OK. [speaker003:] Really? So you could actually produce like code [speaker006:] to do all my work. [speaker001:] Oh, you're a [disfmarker] [speaker003:] or [disfmarker]? [speaker006:] Yeah. [speaker003:] Yeah? [speaker006:] It was really a pain. 
[speaker002:] Code! Right. [speaker006:] Yeah, but I didn't have a choice. [speaker003:] Wow. [speaker006:] I mean, I couldn't type. [speaker002:] That's an interesting language model I'm sure. [speaker003:] Uh. [speaker006:] So, [speaker001:] So [disfmarker] [speaker003:] Huh. [speaker006:] uh. [speaker008:] Yeah. [speaker001:] The market is a captive market. [speaker006:] Yeah. [speaker008:] Right. [speaker002:] Mmm. [speaker004:] Mm hmm, yeah. [speaker006:] In fact, when I first came back to grad school that was a project I wanted to work on, "voice environments for programmers", but we couldn't get any funding for it and my hands got better. So that's how I ended up working on this project instead. [speaker008:] Hmm. [speaker007:] Huh. [speaker001:] Good. [speaker006:] Anyway. [speaker008:] So, yeah. [speaker006:] So [disfmarker] so what are you guys doing? [speaker008:] I don't know. Yeah. [speaker006:] What's your project? [speaker001:] Well, basically, the [disfmarker] the [disfmarker] the original project looks a lot like, you know, the [disfmarker] the p your project. I mean, basically we set up with this, uh, idea of the [disfmarker] the [disfmarker] the meeting [disfmarker] the [disfmarker] the mediated spaces idea in general [speaker006:] Mm hmm. [speaker001:] and then incr incarnated in this room that will be mediated somehow by, you know, a computer to access information, to help along the meeting, and will be like two kind of eh, f main functions. One is the interaction with the computer [speaker006:] Mm hmm. [speaker001:] which, as you said, you know, is your interaction with the PDA in your case, where you kind of query it to ask for information, et cetera. [speaker006:] Mm hmm. [speaker001:] At the same time, our process is somehow, you know, getting all this multi channel information, doing some sort of, you know, cross, eh, channel, you know, cancellation and transcribing the individual streams, [speaker006:] Mm hmm. [speaker002:] Mmm. 
[speaker001:] not necessarily in real time. And the idea is y you can eventually after the meeting query, you know, previous meetings [speaker006:] Right. [speaker001:] and [disfmarker] and there is all this work on transcription, similar probably to what we do in Switchboard but with better bandwidth, [speaker006:] Mm hmm. [speaker001:] you know, but worse, [vocalsound] you know, environmental conditions. [speaker006:] Worse acoustics. Yeah. [speaker001:] An and then, you know, there is, all [disfmarker] all these issues about processing all this information, you know, transcription, [speaker006:] Mmm. [speaker001:] but then using, you know, prosody information to better, you know, segment, endpoint. Eh, also using topic segmentation to help better classification of information or extraction of information. [speaker006:] Mm hmm. [speaker001:] I mean, we didn't dare to say it will be just in a PDA you throw in the middle of a meeting. [speaker002:] I'm beginning to feel my heart beat now. [speaker006:] Yeah. So. Yeah. This is [disfmarker] this is [disfmarker] [speaker001:] That [disfmarker] that was daring. [speaker006:] The PDA one is very, very ambitious. [speaker001:] Yeah. [speaker006:] And it's also [disfmarker] it's depending on not only that [disfmarker] [speaker002:] Mmm. [speaker006:] Uh, not only does it depend on IRAM taping out and working as specified, it also depends on writing vectorized speech recognition algorithms [speaker003:] Hmm. Mmm. [speaker001:] Yeah. Yeah. [speaker006:] which is non trivial. [speaker001:] Yeah. Yeah. [speaker006:] So. [speaker001:] Yeah. But, you know, uh let's assume somehow the compu processor power is going to be there. Whatever. I mean, initially you may do a super computer for the transcription, [comment] real time and then [disfmarker] [speaker006:] Right, I mean, so [disfmarker] so one of our [disfmarker] [speaker002:] Mmm. [speaker008:] Right. 
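[Editor's note: speaker001 mentions "cross channel cancellation" of the multi-channel recordings without describing a method. Purely as an illustrative sketch — not the system discussed in the meeting — a normalized LMS adaptive filter is one standard way to estimate the crosstalk that one close-talking channel leaks into another and subtract it; every name and parameter below is hypothetical.]

```python
import numpy as np

def nlms_crosstalk_cancel(target, reference, taps=64, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: learn the (short FIR) leakage path
    from the reference channel into the target channel and subtract the
    predicted leakage, returning the cleaned target signal."""
    w = np.zeros(taps)                 # adaptive filter weights
    buf = np.zeros(taps)               # most recent reference samples
    out = np.zeros(len(target), dtype=float)
    for n in range(len(target)):
        buf = np.roll(buf, 1)
        buf[0] = reference[n]
        y = w @ buf                    # predicted crosstalk sample
        e = target[n] - y              # residual = cleaned sample
        # Normalized step: divide by reference energy in the buffer.
        w += mu * e * buf / (buf @ buf + eps)
        out[n] = e
    return out
```

For a leakage path that really is a short delay-and-scale, the filter converges within a few hundred samples of white-noise input; real crosstalk between headset mics would of course be messier than this toy model.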
[speaker006:] our back off policy on that if [disfmarker] if there's problems with IRAM is it'll be a wireless link, [speaker001:] Yeah. Uh huh. [speaker006:] right? You'll dump the [disfmarker] the [disfmarker] dump the audio wirelessly to a network somewhere [speaker001:] Yeah. Yeah. I [disfmarker] I mean, ou uh we [disfmarker] we kind of set up this concept like a year and a half ago. [speaker006:] and you'll do the recognition. [speaker002:] Mm hmm. [speaker006:] Mm hmm. [speaker001:] And at that time I said, you know, we said, "don't worry about the [disfmarker] the compute power". I mean, it will be somewhere, [comment] you know. [speaker006:] Hmm. [speaker008:] Yeah. [speaker006:] Well, we [disfmarker] we [disfmarker] we've had a lot of discussions about it and I think [speaker001:] Mm hmm. [speaker006:] maybe [disfmarker] I think t to some extent it comes down to a religious issue. My religion is personal computers are good and terminal mainframes are bad. And so if you can get away with it, I don't want to have a terminal in my hand and a mainframe somewhere else. [speaker001:] Yeah. Yeah, yeah. [speaker006:] Um, the [disfmarker] too many things to cont can go wrong and you lack control. [speaker001:] Yeah. Yeah. [speaker006:] I think the PC revolution has shown how good it is if you can have the power yourself. [speaker001:] Yeah, yeah. [speaker006:] And so the IRAM project provided us a method of having a very powerful thing in your hand. [speaker001:] Mm hmm. [speaker006:] And so we wanna see what we can do with it but we'd also don't want the project to actually completely depend on it. So, [speaker001:] Yeah. [speaker006:] the project will go forward regardless of whether IRAM works or not. [speaker001:] I mean, our approach, somehow, I mean, I believe it's probably, you know, complementary in many ways, is to focus on the actual deep technology that we have somehow in speech. And basically, make progress on, you know, the algorithms.
[speaker006:] Right. [speaker001:] And then, you know, it will run somewhere. I mean, as you said, to [disfmarker] I mean, we would like the idea to have it there [speaker006:] So w [speaker001:] but [disfmarker] but our focus is basically to develop the algorithms, to push the state of the art on this transcription task. [speaker006:] Mm hmm. [speaker001:] And, uh, the other task probably is also, you know, challenging the comp the [disfmarker] the querying, uh, the command, control, or, you know, I mean, doing that in natural language. [speaker006:] So a Right. Mm hmm. Right, so the [disfmarker] so for us the Meeting Recorder project has a very similar uh, approach. [speaker001:] Mm hmm. Uh huh. [speaker006:] Me personally, for my P H D thesis, I'm sorta doing the other way and seeing how much speech recognition can I do on this little unit. [speaker001:] Oh. I see. You really are challenged to do that, [speaker006:] Yep. [speaker001:] to put as much power as you can in the PDA. [speaker006:] Yep. [speaker001:] OK, uh, our approach is, you know, we do whatever is necessary to do it and, you know, some of it may go into something, depending on, you know, your scalable power. [speaker006:] That's right. So [disfmarker] so the [disfmarker] As I said, the Meeting Recorder project, so we have people at UW, people here working on robust algorithms that will be running on work stations. [speaker001:] Yeah. [speaker006:] They won't be running on IRAM. [speaker001:] Mm hmm. [speaker006:] And then what I'm trying to do is see how much of it I can put on IRAM. [speaker001:] OK. [speaker002:] Hmm. [speaker008:] Mmm. [speaker001:] Well, you know, is, eventually y you will get all this you know, sc eh, higher and higher, you know, powers in these portable things. [speaker008:] Yeah, it's not [disfmarker] [speaker006:] Mm hmm. [speaker008:] Yeah. [speaker001:] So as long as you have enough algorithms and technology to do something better if you have more processing power. 
[speaker006:] Right. [speaker001:] I mean, the [disfmarker] the situation now in the speech community is that, you know, there is some boundary, some limit in the performance. And no matter how much processing you throw in, you don't get any better. [speaker006:] That's right, we don't even know what to do with it. We don't need to n [speaker002:] Mmm. [speaker001:] And so we are trying to push that. [speaker006:] Right. Yeah. [speaker008:] Yeah. [speaker001:] And then you have room to port whatever you can to, you know, smaller devices. [speaker006:] Yep. [speaker002:] You get the wrong answer quicker basically, [speaker006:] Yeah, we don't even know what to do with their cycles. [speaker002:] don't you? [speaker001:] Yeah. [speaker002:] Yeah. [speaker008:] Right. [speaker001:] Sorry I interrupt you. [speaker006:] We w [speaker008:] I was gonna say that, um, you know, in terms [disfmarker] So, Adam has a [disfmarker] you know, a particular agenda, um, but obviously there are lots of other things. [speaker001:] Mm hmm. [speaker008:] And you know, it's the way that a project like this works so that a group like this where we have students coming in, you know [disfmarker] It's [disfmarker] it's meant to be sort of a long term thing that will get us data [speaker001:] Mm hmm. [speaker008:] and people will be able to come along and look at different aspects of it. So it's not clear, you know, whether s someone at UW's gonna look at, um, beam forming or whatever. But, you know, it's goo it's a good property for us that there are a lot of different things you could do with it, and so it's [disfmarker] We [disfmarker] we haven't thought it through. [speaker002:] Hmm. [speaker001:] Yeah. Always [@ @]. Yeah, yeah. [speaker008:] But that's OK, you know, it's kind of [disfmarker] it's fair enough. [speaker006:] Like [disfmarker] [speaker001:] Yeah. [speaker006:] As an example, we had a visitor who was working on speaker ID, acoustic change detection. [speaker008:] Yeah. 
[speaker001:] Huh. Yeah, yeah, yeah, yeah. [speaker006:] And he has a very good system for that and [disfmarker] and he wants to use this corpus. [speaker001:] Yeah. Yeah. [speaker002:] Mmm. [speaker001:] Exactly. [speaker006:] So. [speaker001:] I mean, it's one of the areas where Kemal is going to work, like, you know, speaker tracking, and speaker ID. [speaker006:] Yeah. [speaker002:] Mmm. [speaker003:] Mm hmm. [speaker006:] Mm hmm. [speaker001:] I mean, you want to know. I mean, if you have the people wired individually it probably is easier. [speaker006:] Oh, it [disfmarker] th what's nice about [disfmarker] [speaker001:] But [disfmarker] but you know, in [disfmarker] in the other situation, you know. [speaker008:] Yeah. [speaker006:] What's nice about getting this corpus is, i with the people wired you have a excellent baseline. [speaker001:] Yeah. Yeah. [speaker006:] Right? [speaker008:] Yeah. [speaker002:] Hmm. [speaker006:] You have a very easy problem and you can get the right answer, to ground truth very easily [speaker008:] You can get the ground truth. [speaker001:] Yeah. [speaker008:] Yeah. [speaker006:] and then you can degrade it by using the P Z [speaker001:] Yeah, yeah. [speaker002:] Mmm. [speaker006:] you can degrade it further by using those things. [speaker001:] Yeah, yeah, yeah. [speaker006:] And [disfmarker] and really see what you can do with a real system. So. [speaker003:] Yeah, for example, when you go to your PDA system it's, uh, really an awful [disfmarker] open problem to determine how many speakers there are and where they change turns, [speaker006:] That's right. [speaker002:] Mmm. [speaker003:] uh. [speaker008:] Yeah. [speaker006:] That's right. [speaker003:] That's actually being ad That's one of the problems that's being addressed at this year's evaluation, [speaker001:] Mm hmm. [speaker003:] uh, [speaker006:] Oh, really? They're doing an unknown number? [speaker003:] which is taking place right now, actually. [speaker004:] Mmm. 
[speaker003:] Right. You [disfmarker] you do need to figure out the number of speakers in the waveform. [speaker006:] That's good. [speaker008:] Which, uh [disfmarker] Wha Evaluation of [disfmarker] [speaker002:] Right. [speaker003:] This is a NIST evaluation. Um. [speaker008:] Of the speaker ID or something? [speaker002:] This i [speaker003:] Eh, right, speaker verification. [speaker008:] And what's the [disfmarker] what [disfmarker] wha So what has the data been? What [disfmarker] what [disfmarker] what data are they using? What is the [disfmarker] [speaker003:] Uh, Switchboard. [speaker008:] Oh, OK. [speaker003:] Switchboard and, uh [disfmarker] I [disfmarker] I [disfmarker] We [disfmarker] Well, I mean [disfmarker] We will also run so some part of that, uh, evaluation too but probably not that segmentation part. But I don't know if they are using a different data for that. But I mean, it's [disfmarker] s While I say "Switchboard", s eh, of course there's [vocalsound] only two speakers. [speaker008:] Yeah. [speaker003:] But, um, I don't know how they, uh, actually went about. Um. Cuz they were saying some [disfmarker] some [disfmarker] some [disfmarker] up to eight speakers. Maybe they [disfmarker] maybe some other data for that part, [speaker008:] Uh huh. [speaker003:] you're right. [speaker006:] I mean, just mixing it wouldn't work because the channel characteristics are too different. [speaker003:] No, no. [speaker002:] Mmm. [speaker003:] Yeah. [speaker006:] So, uh, what we [disfmarker] what we were working on here was parts of Broadcast News. [speaker003:] Right. [speaker006:] And so that [disfmarker] that's a slightly strange task because they tend to be long segments. But they were [disfmarker] they were parts that were interviews that he worked on. [speaker001:] Mm hmm. [speaker006:] And, uh. [speaker003:] No. We've also, uh, s done some of that as, uh, eh, [speaker006:] Right. 
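[Editor's note: the exchange above concerns figuring out how many speakers are in a waveform and where they change. The meeting never names the visitor's method; one classical technique for this kind of acoustic change detection — offered here only as an assumed illustration — is the Bayesian Information Criterion (BIC) test, which asks whether two adjacent feature segments are better modeled by one Gaussian or two.]

```python
import numpy as np

def delta_bic(x, y, penalty_weight=1.0):
    """BIC change test between two feature segments x and y, each an
    (n_frames, n_dims) array modeled as a full-covariance Gaussian.
    A positive delta-BIC suggests a speaker change at the boundary."""
    z = np.vstack([x, y])
    n, d = z.shape

    def logdet_cov(a):
        # Regularize slightly so the log-determinant is always defined.
        cov = np.cov(a, rowvar=False) + 1e-6 * np.eye(a.shape[1])
        _, val = np.linalg.slogdet(cov)
        return val

    # Likelihood gain from modeling the two halves separately...
    gain = 0.5 * (n * logdet_cov(z)
                  - len(x) * logdet_cov(x)
                  - len(y) * logdet_cov(y))
    # ...minus a complexity penalty for the extra Gaussian's parameters.
    n_params = d + d * (d + 1) / 2
    penalty = 0.5 * penalty_weight * n_params * np.log(n)
    return gain - penalty
```

Sliding this test along a waveform's feature frames and merging segments with negative delta-BIC gives a basic diarization pass — which, as the speakers note, degrades badly once turns overlap.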
[speaker003:] um, I mean, I have a so [disfmarker] small, uh, hacked up system which sort of, for [disfmarker] for the Broadcast News determines the approximate number of speakers and actually assigns some names to them. [speaker006:] Mm hmm. Mm hmm. [speaker003:] This is something we did for the MAESTRO project, uh, for [disfmarker] for Rick. [speaker006:] Mm hmm. [speaker003:] But, um, uh, yeah, uh, for [disfmarker] for Broadcast News it works much better actually. It's, uh, [speaker006:] Yeah, he had [disfmarker] he got very good results. [speaker003:] you do get eh, like in an interview, uh, nice long segments [speaker006:] Right. [speaker003:] and [disfmarker] [speaker006:] Right. Yeah, so I [disfmarker] I tried his on [disfmarker] his system on the Meeting Recorder stuff, um, but of course we don't have any trained neural nets or, uh [disfmarker] So [disfmarker] so his system was based on our hybrid neural net system that we use for speech recognition. [speaker002:] Hmm. [speaker006:] And so, uh, I was using the Broadcast News nets on this data. [speaker001:] Hmm. [speaker006:] So of course it didn't do [disfmarker] didn't do t tremendously well. And then the other problem also was that it doesn't handle overlap very well, his algorithm. [speaker007:] Yeah, I mean Broadcast News is for this third party listener, [speaker006:] Hmm? [speaker007:] right? [speaker006:] Right. [speaker008:] Right. [speaker007:] So they, you know if you were listening to an interview that sounded like this, you know, you'd turn off your radio. [speaker002:] Yes. [speaker006:] That's right, with all the i interrupts. [speaker008:] Right. [speaker007:] So it's just got a d it's a lot easier. [speaker006:] Yes. [speaker001:] I think there are some programs in TV. I've been trying to get this from [disfmarker] something like this from, you know, public cast uh w eh eh. [speaker004:] There are [disfmarker] there are some with a lot of overlap. 
[speaker001:] Um uh yeah, and they [disfmarker] but they [disfmarker] they do it somehow. They clean it up. [speaker004:] Yeah. [speaker001:] I notice that they [disfmarker] they s they sa have some help with control of what person to pick up. [speaker006:] Mixing, yeah. [speaker007:] Mm hmm. [speaker001:] I [disfmarker] I mean there is some program at midnight somewhere. It's a bunch of, like, four people discussing [speaker004:] Well [disfmarker] [speaker001:] and they, it's a lot clean. [speaker006:] Like [speaker001:] Yeah. I yeah, [speaker004:] But the [disfmarker] but the worst [disfmarker] the worst case is Politically Incorrect where they don't really make any adjustments at all. [speaker001:] probably. [speaker006:] But yeah, they [disfmarker] they have [disfmarker] [speaker007:] Uh huh. [speaker004:] And [disfmarker] and it's hard to even follow what they're saying. [speaker001:] Uh huh. [speaker004:] It's just [disfmarker] it's truly [pause] totally overlapped. [speaker002:] Right. [speaker001:] Yeah. Uh, yeah. [speaker002:] Yeah. [speaker001:] I've heard that [disfmarker] that [disfmarker] that sometimes. But it's [disfmarker] it's OK, though. [speaker006:] I think in [disfmarker] in [disfmarker] [speaker002:] It's still in the forma it's still in the format which is slightly more formal I think than, you know, completely informal conversational speech. [speaker001:] So you don't think they [disfmarker] they fill it at all with any, you know [disfmarker] [speaker004:] OK. [speaker008:] Mm hmm. [speaker006:] Oh, I'm sure they do. They [disfmarker] I'm sure they have an audio engineer in the back. [speaker001:] They [disfmarker] [speaker002:] They will do, a bit. [speaker001:] I am pretty sure they do. [speaker002:] Yeah. [speaker001:] I mean, it's [disfmarker] it's [disfmarker] it's still, you know, understandable. [speaker004:] OK, you guys are more in business than I am.
[speaker008:] But it [@ @] [disfmarker] bu [speaker002:] The host probably gets, uh, you know [disfmarker] gets a little bit more. [speaker008:] Yeah. [speaker004:] My feeling is that when I [disfmarker] when I [disfmarker] when I've seen that show it's like there [disfmarker] there are sometimes people who totally take over the floor [speaker002:] Gets a little boost. [speaker001:] Yeah, yeah. [speaker004:] and other people are trying to get in, and they can never [disfmarker] never get access. [speaker001:] Yeah. Yeah, yeah, yeah. [speaker004:] It depends very much on the mix. [speaker006:] Well, but that that's different than an audio engineer fiddling with the mixing parameters. [speaker004:] Yeah, it's just, I [disfmarker] I know that there are periods where two people are talking in [disfmarker] t at the same time [speaker006:] Right? [speaker001:] At the same time. Yeah, yeah, yeah, yeah, yeah. [speaker004:] and you're only able to hear um, parts of the weaker voice. And, you know, that's [disfmarker] It's a style. [speaker001:] Uh huh. [speaker004:] It's a style. Uh, but I [disfmarker] but I've seen debates where [disfmarker] [speaker002:] Uh huh. [speaker001:] Yeah, yeah, yeah. [speaker006:] Mm hmm. [speaker004:] And you know it's apparently engaging people. Uh, it's been going on for a while now. [speaker001:] Yeah, yeah. [speaker004:] Yeah. [speaker001:] Yeah, in many cultures it's not bad manners to [disfmarker] to speak on top of another one. [speaker007:] Yeah, I was thinking, if Transcriber were invented in Italy or something, [comment] it would have been handling multiple [disfmarker] [speaker004:] Sure. Absolutely. [speaker001:] I it somehow [disfmarker] [speaker004:] Yeah. Or in New York [disfmarker] or in New York, [speaker007:] in New York. [speaker004:] high involve [disfmarker] high in high involvement style. [speaker001:] Yeah. Yeah. [speaker007:] Right. [speaker002:] Hmm. [speaker001:] I mean, could be reassuring, [speaker004:] Yeah. 
[speaker006:] High involvement. [speaker001:] you know? [speaker006:] That's [disfmarker] that's a [speaker001:] Keep going, keep going. [speaker004:] Yeah. [speaker006:] good way of saying it. [speaker004:] That's what it's called. Yeah. [speaker006:] So what do we do? [speaker001:] Well, I mean, [speaker006:] It seems like there's a lot of overlap between our projects, [speaker001:] I [disfmarker] I belie I believe there is a lot of overlap [speaker002:] There i yeah, indeed. [speaker006:] so it would be nice to work together. [speaker001:] and would be great to com to [disfmarker] to basically take advantage of that to [disfmarker] to make a synergy more than an overlap, and, you know, help each other. [speaker006:] Mm hmm. [speaker002:] Hmm. [speaker001:] I [disfmarker] I believe each group has, you know, depths in [disfmarker] in different, uh, areas, [speaker005:] We [disfmarker] [speaker001:] so. I mean, we clearly c should collaborate for the best of everybody. [speaker002:] Hmm. [speaker006:] So is your intention to start collecting some meeting data? [speaker001:] Uh, I, you know [disfmarker] There are no final [disfmarker] I mean, the official decisions I guess have to be made in terms of resources to that. [speaker006:] Mm hmm. [speaker001:] My personal opinion, if you ask, is that we should because the only way [disfmarker] I mean, if I learn anything on speech in [disfmarker] for the last, you know [@ @] [disfmarker] [speaker006:] More data is better. [speaker001:] No, no, no. Um, you need a task to really, you know, improve your system for that particular environment. [speaker006:] Oh. Mm hmm. [speaker001:] So right now we are focusing our research on doing evaluation type of, you know, in incremental you know, improvements. So, Switchboard, for instance. They have five is kind of [disfmarker] OK, you have two people talking, it's the closest thing. 
But we really should move into the [disfmarker] you know, data that reflects as much as possible, the real scenario. [speaker002:] Hmm. [speaker001:] And that that's a way to improve. [speaker006:] Right. [speaker008:] Mm hmm. [speaker001:] And then you do improvement on that particular database. So even if we don't collect the data to train models, we should collect data at least to evaluate progress. [speaker006:] So what would be nice is if we could, uh, converge on [disfmarker] on formats and conventions, and so on, [speaker001:] Yeah. [speaker007:] Yeah. [speaker002:] Hmm. [speaker001:] Yeah. [speaker007:] And [disfmarker] and software too. [speaker002:] I think that's the op [speaker006:] and software. [speaker007:] I mean, things like rewriting this Transcriber tool [speaker006:] I mean, just [disfmarker] [speaker001:] Yeah. [speaker002:] Yeah. [speaker007:] and thi that [disfmarker] That shouldn't be re redone. [speaker006:] Right. [speaker001:] Yeah. [speaker008:] Right. [speaker001:] Yeah. [speaker002:] Hmm. [speaker007:] And the transcription conventions [disfmarker] [speaker006:] I, uh [disfmarker] uh [disfmarker] It shouldn't be done. [speaker005:] You'd mentioned there was some possibility that somewhere else is going to [disfmarker] you might contract out to do the transcription? [speaker006:] Yes. We're talking about that right now. [speaker005:] Can y [speaker006:] I'm not sure how much I should say about it. [speaker005:] Yeah, that's what I was wondering. [speaker006:] So that the, [speaker005:] OK. [speaker006:] uh [disfmarker] there's some possibility that we're gonna get a third c party company to do the transcription for us in exchange for the corpus. [speaker001:] Uh huh? [speaker002:] Is that in exchange for access to the c [speaker006:] Right, [speaker002:] Right. [speaker001:] Yeah. So, on one question. [speaker006:] and [disfmarker] [speaker001:] I mean, uh, this [disfmarker] this thing is also for you, made on internal funding? 
Or I mean, I know you're applying for some DARPA funding. [speaker006:] We have a little bit of DARPA. [speaker001:] Oh, you already got some. [speaker006:] Uh, under the Communicator project. [speaker001:] OK. OK. [speaker006:] So [disfmarker] so that's part of it. And then, uh, part of it's on the IRAM group. [speaker001:] OK. [speaker006:] IRAM group is paying for part of it. And then part of it's just ICSI general funds, is paying for it as well. [speaker001:] OK. [speaker002:] And you're saying some of the infrastructure is, um, the A T and T video conferencing. [speaker006:] The infrastructure, yeah, they [disfmarker] th they were paid [disfmarker] [speaker002:] Yeah. [speaker006:] but they don't expect anything in return. [speaker002:] Right. So pi [speaker006:] They're just being very nice and good neighbors [speaker002:] A free gift. Yeah. [speaker006:] and [disfmarker] and so they [disfmarker] they basically wanted stuff anyway. [speaker001:] OK. [speaker006:] And then they just sort of allowed us to direct the audio portion of it to what we needed. [speaker002:] Hmm. [speaker001:] Yeah. [speaker002:] Right. [speaker006:] So. [speaker007:] And then, uh, UW [disfmarker] I don't know if you mentioned this at the beginning but, the [disfmarker] Mari's group, Mari and Morgan [disfmarker] I th I think what happened historically, um [disfmarker] Oh, I don't know how much I'm supposed to say if this is being recorded. Well, anyway, uh. [speaker002:] Should we stop recording? [speaker001:] You are on the line. [speaker007:] Mmm, huh. Um. [speaker001:] This would never go away! [speaker007:] There's funding, eh. [speaker008:] I [disfmarker] I can pause if you wanna say something that [disfmarker] that's sensitive. I mean [disfmarker] [speaker007:] It's [disfmarker] No, it's OK. [speaker008:] OK. [speaker007:] Um, I'll tell you later. [speaker008:] Yeah. [speaker007:] Sorry, listeners. [speaker006:] And to our listening audience. 
[speaker002:] That's [vocalsound] to the listeners. [speaker001:] Yeah. [speaker007:] Too bad you weren't here. [speaker001:] You know, maybe thousands of people are listening to us now [speaker007:] Well, it does sort of make you [disfmarker] [speaker001:] because it's in the future, but, you know, sequentially maybe thousands. [speaker008:] Yeah. [speaker002:] That's right. It could be. [speaker007:] It's [disfmarker] it's like you're behind closed doors [speaker002:] This could be the most popular corpus. [speaker007:] but you have to [disfmarker] [speaker002:] Yeah. [speaker008:] That's right. It's so funny, like we close the door and it's [disfmarker] [speaker007:] Normally a note taker would know not to do that. OK. [speaker002:] Ha! [speaker008:] Yeah. [speaker004:] What I can do is say, la la la la la la la la la. [speaker002:] But that's only on one sp [speaker008:] Yeah. [speaker007:] Anyway, UW is getting some funding from DARPA on Communicator, uh, with [disfmarker] [speaker004:] I know, but we can blend it for the public. Uh. [speaker007:] and that's sorta how I got involved. I mean, Morgan wanted me to sort of do something. I guess, I [disfmarker] My interest was sort of in the language and dialogue m modeling, and the turn overlaps, and how to apply you know, a language model, a dialogue act model to actually feed down to do better recognition. So not just [disfmarker] Because I think it will be really difficult to do just speech recognition and then try to get some kind of meaning out of it. I think you have to have some kind of structure from the [disfmarker] [speaker002:] Hmm. [speaker006:] Feedback. [speaker007:] I mean [disfmarker] Yeah. And [disfmarker] So knowing that an utterance is really short and low volume, you should be able to figure out it's a backchannel and things like that. 
Anyway, apparently, and you guys know more than I do, there was some talk that you were gonna build a portable version of this so that Mari's group [disfmarker] [speaker008:] Mm hmm. [speaker007:] I [disfmarker] I mean, there's not a huge amount of funding in the way, you know, Mari probably won't be able to do a ton of collection. But I was thinking if there were sort of prototype collection devices or standards, then we could share data. [speaker008:] Mm hmm. [speaker002:] Hmm. [speaker008:] Yeah. Yeah. [speaker007:] And we would have like a [vocalsound] monopoly on that. Mar I mean, once you get three sites collecting data and each site can demo it, and say it's at the other two sites. [speaker008:] Yeah. Yeah. [speaker002:] Hmm. [speaker007:] Which was [disfmarker] would be really good for funding purposes. [speaker006:] Right. [speaker007:] But that [disfmarker] I don't really know. That's what I heard from Morgan, [speaker008:] Yeah, I know. That [disfmarker] that's [disfmarker] That's [disfmarker] that [disfmarker] that [disfmarker] that portable thing's been mentioned. [speaker007:] so. [speaker006:] We [disfmarker] we talked about that and [disfmarker] It was [disfmarker] [speaker007:] Is that [disfmarker] And according to Morgan it's like happening and [disfmarker] [speaker006:] Well i w when I spoke with him he [disfmarker] he really just said it was a money issue. [speaker008:] But that was [disfmarker] [speaker006:] That it's certainly not any sort of intellectual property issue [vocalsound] where we're perfectly willing to give people those boxes. [speaker007:] OK. [speaker008:] Oh, well. [speaker006:] The boxes are cheap. [speaker007:] Right. [speaker006:] Um, but, you know, can they afford to get a six channel Sony wireless and can they afford the P Z and that sort of stuff. [speaker007:] And he said something about a portable [disfmarker] You know, that they wouldn't have a dedicated room or something like [disfmarker] like you guys have here. 
[speaker006:] Well [disfmarker] [speaker008:] I think [disfmarker] I think somebody's got a unit which is something like the ADAT or something, which, you know, it's a device which will record onto tape, uh, multiple channels. [speaker006:] Oh. [speaker008:] And so I think that was [disfmarker] that was gonna be and we were g Like someone actually has this knocking around. Some company that we were gonna [disfmarker] uh, was gonna lend it to us, something like this. [speaker007:] OK. [speaker006:] This is [disfmarker] this is the same [disfmarker] same company that may be doing the transcripts for us, [speaker008:] The same [disfmarker] it might be the same company or it might be a different company. [speaker006:] may be giving us [disfmarker] [speaker002:] Right. [speaker008:] There's another company which also has a [disfmarker] apparently has purchased a very [disfmarker] a big multi channel thing. [speaker006:] may be giving us, uh huh. [speaker008:] Um, and so if we [disfmarker] if they could lend us that, we could imagine doing these kinds of recordings. I mean that you know, that like you said, the fundamental thing is it doesn't have to be this hardware. It's just the idea that you have multiple channels to synchronize data [speaker007:] Right. [speaker002:] Mmm. [speaker008:] and then the post processing can be uniform and stuff like that, [speaker007:] Right, and it doesn't have [disfmarker] You know, different sides can do different pieces of it I think. [speaker008:] so. Right. [speaker006:] Yeah. And it would also [disfmarker] it would be really nice to be able to do it in other rooms, [speaker007:] And [disfmarker] [speaker006:] cuz otherwise we're gonna learn a lot about this air conditioning. [speaker007:] Exactly. [speaker002:] Yeah. [speaker007:] Right. And you'll get different kinds of conversations and different interactions, [speaker002:] That was, w [speaker006:] Yes. 
[speaker007:] and I think just having three different sites would be great [speaker006:] That'd be great. Yeah, I agree. [speaker007:] just for sort of diversity, [speaker001:] Yeah. [speaker002:] Mmm. [speaker007:] and so. [speaker005:] Well, one thing that Horacio h hadn't mentioned is, um, I mean, y you were talking about uh the same [disfmarker] the same sort of idea of just collecting a meeting and [disfmarker] and looking at it afterwards. But the [disfmarker] the thing that I'm interested in, and I'm probably the one who's gonna start doing the collection, [comment] um, [vocalsound] is e is having this situation where people are talking to each other and talking to a computer in the machine [disfmarker] in [disfmarker] in the room. [speaker008:] Right. [speaker005:] Um, and so that's probably [disfmarker] that's probably what [disfmarker] u [speaker002:] That's gonna be [@ @]. [speaker005:] that's certainly what's gonna come out of, uh, of our group first. [speaker006:] So [disfmarker] so are you gonna m uh, i is there gonna be some sort of mark for that? [speaker005:] Uh um. And it [disfmarker] [speaker006:] I mean, are they gonna say, "Computer, bring up a web page on [disfmarker]" or [disfmarker]? [speaker005:] Well, so, you know, what we're [disfmarker] we're basically doing is [disfmarker] is waiting to look at the pilot and see what people wanna do [speaker006:] Uh huh. [speaker005:] but it's going all be Wizard of Oz at first. So we're gonna let them do what they wanna do first and then see if we should um, you know, subsequent things. [speaker006:] Oh, I see. [speaker007:] Mm hmm. [speaker005:] Try to [disfmarker] try to constrain them or not constrain them. Or try [disfmarker] try to figure that out. [speaker006:] I'm just thinking, from a discourse point of view it'd be very interesting to somehow mark "I'm talking to the computer". [speaker008:] Right. No, it has to be [disfmarker] has to be error complete. [speaker007:] Yeah, but you know what? 
[speaker008:] So the computer has to [disfmarker] [speaker007:] They don't say when they stop. [speaker005:] That's one of them [disfmarker] [speaker006:] Right. [speaker005:] Pardon? [speaker007:] They don't say, you know, "I'm done, Computer". They only mark the beginnings. [speaker008:] Yeah. [speaker002:] Hmm. [speaker005:] Yeah. [speaker008:] Yeah, yeah. [speaker002:] Well, there might be a different style of, um [disfmarker] a different, actually, style of interaction. [speaker005:] e [speaker007:] So, [speaker005:] and it's all of these things [disfmarker] i i is a g [speaker007:] it [disfmarker] uh [disfmarker] there are very [disfmarker] [speaker006:] "Computer, show me a web page on X," return. [speaker002:] Yeah. [speaker005:] It is exactly these things that I mentioned. [speaker002:] Yeah. [speaker005:] This is [disfmarker] these are the main things that I'm interested in. [speaker007:] Very interesting. [speaker005:] Exactly these. [speaker007:] And actually Mari's [disfmarker] [speaker005:] When [disfmarker] when [disfmarker] do you know when you're starting and when you know you're ending. [speaker008:] No, but [disfmarker] No but, y But it's like, you know, if it really was an intelligent assistant sitting there you could say, "Oh, perhaps the computer could show us this." And you'd be talking to someone else but the computer's meant to be listening, and so saying, "Oh, it means me." You know, "I shou I should [disfmarker] I should do something now." [speaker002:] Yes, certainly. [speaker006:] Are you talking to me? Are you talking to me? [speaker007:] But there might be, uh, neat ways you can do this that are not natural but they become very easy, [speaker002:] Mmm. [speaker004:] So like saying, like "Hal?" [speaker007:] and that [disfmarker] So. [speaker005:] Yeah. [speaker002:] Mmm! [speaker005:] So. Right. So we wanna [disfmarker] we wanna look at exactly those sorts of things. [speaker007:] I [disfmarker] I think, yeah. 
[speaker005:] of [disfmarker] of us, at least in the beginning, is gonna have a number of different characteristics. [speaker001:] Erase that part. [speaker005:] It's not going to be natural. It's not gonna be a regular meeting. It's gonna be an artificial situation cuz we're gonna have to, you know, mock up some back end. Um. So the kinds of, you know [disfmarker] the kinds of speech is [disfmarker] the speech is gonna be slightly different. People are gonna be in an unnatural task. We're [disfmarker] we're t gonna be trying to figure out how they're gonna [disfmarker] [speaker007:] But you can still collect the multiple microphones in a similar way and transcribe, [speaker005:] But But we will have the, you know [disfmarker] We'll [disfmarker] we'll have all this [disfmarker] all the m microphones set up all the same. [speaker002:] Mmm. [speaker008:] So the [disfmarker] [speaker007:] and th That'd be cool actually. [speaker008:] Yeah. I mean, that's the question. If [disfmarker] if we are collecting these at different sites, different kinds of tasks, different people, what is there that's gonna be uniform about this collection? [speaker005:] Mm hmm. Right. [speaker008:] And part of it will be, well, this combination of close talking and uh, and uh, environmental mikes. [speaker002:] Mmm. [speaker008:] And [disfmarker] and then part of it will just be the data presentation, you know, maybe if [disfmarker] if we have common transcription standards and, you know, that it's all transcribed or there's [disfmarker] there's some integration of the standards thing. [speaker005:] Right. [speaker008:] But I mean I u u as long as that makes sense, then it's nice to have a variety of tasks, right? To have sort of, you know, different [disfmarker] [speaker007:] Yeah, I mean, I think the formats would be similar. [speaker005:] And [disfmarker] Yeah. 
And if your task is to just sort of, you know, generally do, um, summary and, uh [disfmarker] and large vocabulary speech recognition over the thing, then it doesn't matter that much which, uh, that there's much a difference. [speaker008:] Right. Yeah. [speaker007:] But I think having the [disfmarker] the idea of the computer sort of in there forces you to create a [disfmarker] a log file format that can handle these other devices. [speaker004:] Hmm. [speaker008:] Right. Right. [speaker007:] I mean, you were talking about the Penn and [disfmarker] So I [disfmarker] I think it's [disfmarker] the formatting is really one of the biggest [disfmarker] issues. [speaker006:] XML, that solves everything. [speaker007:] You know, not [disfmarker] not the f transcript formatting but the audio formatting and the sort of figuring out how to synchronize all of these, um, pieces. [speaker006:] Uh huh. That's right. We're gonna make [disfmarker] [speaker007:] And what kind of software you can use for post processing all of them. [speaker008:] Yeah. [speaker006:] Right. Someone will make some assumption that's different than someone else. [speaker002:] Yeah. [speaker007:] Yeah, which i [speaker006:] And then we suddenly won't be able to interoperate. [speaker002:] Well, just [disfmarker] just the fact that people are making different assumptions is quite an interesting problem. [speaker006:] Yep. [speaker005:] Mm hmm. [speaker002:] You know, it's quite an interesting issue. [speaker006:] So that's something we should try to [disfmarker] [speaker002:] That brings up [disfmarker] Mmm. [speaker007:] Yeah. [speaker006:] I I'm not sure what [disfmarker] wh I mean what's a good solution than that other than trying to keep in touch with each other. [speaker008:] But [disfmarker] yeah. Well, i w if we're gonna have [disfmarker] if we're gonna [disfmarker] there's an opportunity to share tools, right? The if we [disfmarker] [speaker006:] Right. 
[speaker001:] We could have like, you know, some periodic meetings, [speaker002:] Mmm. [speaker001:] and like, uh, [speaker006:] Yeah. [speaker007:] Yeah, or phone meetings, or even [disfmarker] [speaker001:] phone meetings or something. [speaker007:] Or meetings like this with [disfmarker] about the meetings. [speaker006:] Uh huh. Yeah, I mean, meetings like this are good because we can actually record them. [speaker007:] You know, with a [disfmarker] with a telephone. [speaker001:] Yeah. At the same time we contribute to the [vocalsound] database collection, [speaker006:] Yep, [speaker002:] Yeah, contribute the data. [speaker006:] absolutely. [speaker007:] No, I'm serious. [speaker001:] and, uh. [speaker007:] Even if we don't make any decisions we'll have a lot of data. [speaker008:] That's right. [speaker001:] But it will be all documented. [speaker007:] Sorry. [speaker002:] Yes. [speaker005:] Yeah, actually we could do that at our site too even though that's not what I'm normally gonna be collecting, [speaker007:] Right. [speaker005:] but there's no reason we can't collect that, and, uh, then you'd have a more [disfmarker] m your normal kind of meeting. [speaker006:] Right. So [disfmarker] When you're talking about your artificial meetings, i they are gonna be goal directed, though. [speaker007:] Right. [speaker006:] So they're gonna s say, you're [disfmarker] you're having a meeting to do this. [speaker005:] They're going to be goal directed. You're supposed to [disfmarker] Yeah, you know. You're supposed to [disfmarker] you're [disfmarker] a budget meeting or something and you have a spreadsheet up there, and everybody has to sort of work out the [disfmarker] you know, some particular goal of where to p allocate money. And they can t ask the computer to switch things around in different columns or something like that. [speaker006:] Mm hmm. 
[speaker005:] But everybody [disfmarker] s and acs actually a o another characteristic is that we're only dealing with three people. [speaker006:] Oh. [speaker005:] Um. [speaker006:] So that will be different. [speaker005:] Yeah. [speaker002:] Hmm. [speaker005:] So. [speaker001:] How is that? Three peop total? [speaker005:] We're gonna start with just three people, yeah. [speaker002:] Is that the channel limitation cutting in? [speaker005:] Um, I think it's mainly [disfmarker] It's partly channel limitation because we wanna [disfmarker] we wanna double mike everybody. [speaker002:] Mmm. [speaker005:] Um, it's also to a large extent, [speaker008:] Oh, right. [speaker006:] Oh. [speaker005:] um [disfmarker] And we have eight channels. Um. It's also I think, uh, e e sort of what came up I think when even Liz was, uh, involved a little while ago with [disfmarker] and Christine I think just in terms of the thinking of the subjects and trying to have enough people. We sort of wanted to, you know, encourage like everybody to talk, try to figure out some way that everybody would be able to participate and [disfmarker] [speaker006:] So if [disfmarker] [speaker007:] Yeah. [speaker006:] You're double miking everyone, so that's a close talking and a lapel, and then two environmental mikes, [speaker005:] A g [speaker002:] Two of [@ @]. [speaker006:] is that right? [speaker005:] Yes, that was [disfmarker] that was the idea. [speaker002:] Yeah. [speaker005:] Uh, but [speaker004:] Mm hmm. [speaker002:] M [speaker005:] I don't know, [speaker007:] Hmm. [speaker005:] that was, you know, w we're not uh planning to do anything with [disfmarker] with all that like right away. So if there were [disfmarker] is there [disfmarker] you know, we [disfmarker] h we're happy to take input on that. [speaker006:] Right. 
[speaker008:] Wha [speaker004:] Is it the three, I see, [speaker006:] Well, if you ha m u eh [disfmarker] [speaker005:] And also, I very much wanna find out about the, you know, the specs on the board that you have in the ma in the [disfmarker] i in the PCI card [speaker004:] mm hmm. [speaker005:] cuz I think we might wanna go with that. [speaker006:] Mm hmm. [speaker002:] Mmm. [speaker004:] I wanna ask one question though. So is [disfmarker] is it the same three people each time or a different three people? [speaker005:] Yeah. No, no, we're ho we're hoping to just bring in different groups of three people apiece. [speaker004:] OK, OK. That's good. [speaker002:] And do you have the same agenda, although there's a goal? Is there one goal or several goals? Because that'll would be more interesting data if there is like [disfmarker] [speaker005:] Uh. In a given [disfmarker] in a given meeting? [speaker002:] I mean, a budget meeting is a classic example, of where everyone comes in pursuit of a different goal. [speaker008:] Yeah, yeah. [speaker005:] What we're try what we're trying to do is not make it too [disfmarker] eh [comment] you know, not make it too, um, ar artificial. You know, not like say, "OK, mmm, let's role play. Your, y" you know, "you do this, you wanna [disfmarker]" [speaker002:] Right. [speaker005:] but rather give them an actual task. [speaker003:] Well, there can be like a problem solution kind of thing where they would actually, you know [disfmarker] they have to come to one solution. [speaker005:] So actually have [disfmarker] Yeah, and they all just have their own [disfmarker] they just happen to have their own opinions. [speaker002:] Right. [speaker005:] And another possible thing that we're thinking of is something like, uh, you know, we have these regular Wednesday lunches [speaker002:] Right. [speaker001:] S [speaker005:] and you gotta plan a menu, right? 
So everybody has things that they like [speaker001:] Well, some peo people speak with their mouth full, you know that's a [@ @]. [speaker005:] an and, uh [disfmarker] [speaker006:] Yeah that [disfmarker] [speaker005:] But a lot of the [disfmarker] [speaker008:] Right. [speaker005:] No. [speaker006:] That is, uh. [speaker008:] We [disfmarker] we [disfmarker] we've thought about doing [disfmarker] [speaker001:] Delete that part. [speaker005:] None of that. N no eating during meetings. [speaker008:] Right, right, right. [speaker006:] Yes, or drinking. [speaker005:] Maybe salivating if you're thinking about the food, but it [disfmarker] [speaker006:] Y getting [disfmarker] getting coffee on these is definitely a bad thing. [speaker003:] Not with these around. [speaker005:] Um. [speaker008:] That's [disfmarker] [speaker007:] Oh, right. [speaker001:] So [disfmarker] so, I [disfmarker] One question. [speaker008:] Yeah. [speaker007:] Yeah. [speaker001:] I [disfmarker] I mean, I haven't been in these, uh, [comment] you know, preliminary de decision meetings, but I don't know what's the [comment] idea for the computer interaction. I mean, it's going to be a real computer, as in Wizard of Oz? [speaker006:] Wizard of Oz. [speaker005:] OK, so the idea is we wanna start off with a somewhat simple back end [speaker001:] Uh, what's [disfmarker] [speaker005:] and we're thinking of, um, agentifying an EXCEL spreadsheet and having s like some kind of spreadsheet [disfmarker] So basically, what [disfmarker] wh people will see a spreadsheet and they can ask it to move numbers around. So we thought that would be a [disfmarker] be a useful kind of task for budget meetings or that kind of thing. [speaker001:] And so it will be a real computer recognition. [speaker005:] So starting off [disfmarker] we would start off definitely with a Wizard of Oz task [speaker001:] Oh, OK. [speaker005:] but it would be something [disfmarker] [speaker006:] So a real computer but a w uh, someone driving it. 
[speaker005:] Uh yes, [speaker001:] OK, OK, OK. [speaker002:] Mmm. [speaker005:] but it would be something [disfmarker] uh but everything would be an agent. [speaker001:] Uh huh. [speaker005:] And so it just be [disfmarker] you know, we wouldn't have the recognition grammar to begin with. [speaker001:] OK. [speaker005:] We wouldn't have the NL grammar to begin with. [speaker001:] Yeah. [speaker005:] But these are all components that are obvious to us how to build. [speaker001:] Yeah, yeah, yeah, yeah, yeah. [speaker005:] Nothing [disfmarker] nothing that's sort of far in the future. [speaker002:] Mmm. [speaker001:] Yeah. [speaker005:] It's just a matter of [disfmarker] of work. [speaker001:] So somebody will be playing the recognizer and then other agents will be doing their task independently. [speaker005:] Yeah. [speaker001:] OK. [speaker005:] So, sort of [disfmarker] Right. Something that can be built up, you know, relatively quickly and makes sense how to do. So a fairly simple back end. [speaker001:] Yeah. [speaker005:] Um. [speaker001:] Good. [speaker006:] Yeah, I [disfmarker] I think that's close enough to what we're interested in that it would t it'd be useful for training uh, the acoustic models, without any question. [speaker008:] Oh, yeah. Yeah, sure. [speaker005:] Uh huh. [speaker007:] Yeah, definitely. [speaker001:] You p you plan to [disfmarker] to train actually models based on this data. [speaker002:] Mmm. [speaker005:] That's good. [speaker006:] Oh, absolutely. [speaker005:] Yeah. Great. [speaker006:] Yeah, cuz if you don't the [disfmarker] the accuracy's gonna be horrendously bad. [speaker002:] In th Mmm. Right. [speaker001:] Yeah, yeah. [speaker002:] And this is just train um, training them using the [disfmarker] um, the head mounted data initially. [speaker006:] So. [speaker002:] Or, I mean is the intention to do some [disfmarker] som [speaker008:] I think [disfmarker] [speaker006:] Well, we'll definitely need to do both. 
[speaker008:] No, I think the idea is to train on the, uh [disfmarker] on these mikes but use these to get the alignments that we're gonna train to, [speaker002:] Right. OK. [speaker006:] Right. [speaker002:] Understood. [speaker001:] Mm hmm. [speaker002:] Right. [speaker008:] right? [speaker002:] You're not intending to do any [disfmarker] You said you weren't gonna [disfmarker] planning to do any beam forming as such, but you're not intending to do any signal processing or, uh [disfmarker] or, uh, uh, the sort of acoustic modeling of the environment or microphones, or anything else? [speaker008:] I don't know. [speaker006:] It [disfmarker] it really [disfmarker] i [speaker002:] Cuz that's actually my interest at the moment, so. [speaker006:] It will really depend on if we have someone who really wants to do that. [speaker002:] Hmm. Right, right. [speaker008:] Yeah. [speaker006:] So. [speaker001:] Yeah. [speaker008:] Well, also, I mean, I [disfmarker] I [disfmarker] I suspect that it may [disfmarker] we may not be able to get anywhere without doing something, [speaker006:] I [speaker002:] Yeah, yeah. [speaker008:] right? So. [speaker001:] Yeah. [speaker008:] We may end up having to do that. [speaker006:] Well it's [disfmarker] just as an example, I I've done some very simple digits recognitions where, uh, I said if I had known we were having this meeting I would have done this before, uh. Having people read digit strings [speaker007:] That was Morgan. [speaker006:] cuz digits are much easier. [speaker002:] Mmm. [speaker006:] Um. [speaker005:] Oh, that's really interesting. [speaker006:] And just combining [disfmarker] just doing multis [speaker005:] We're starting off with digit stuff too [vocalsound] for [disfmarker] for [disfmarker] for our different [disfmarker] for [disfmarker] for this, um, endpointing stuff. [speaker002:] Hmm. [speaker006:] Yeah, of course. Yeah. [speaker008:] Uh huh. [speaker006:] But, uh, just mixing the [disfmarker] [speaker005:] Yeah. 
[speaker006:] just doing multi stream on the different mikes helped enormously. And I was a little surprised. [speaker004:] Hmm, hmm. [speaker002:] Right. [speaker006:] Yeah, so you [disfmarker] so you [disfmarker] [speaker007:] Why [disfmarker] why is that? [speaker002:] So you're having [disfmarker] [speaker006:] Wh why? [speaker007:] Why? [speaker006:] I have no idea. Because they're a little different. [speaker002:] Sorry. When you say multi multi stream, you mean [speaker008:] Hang on. You mean, you ne [speaker007:] The microphones are different? [speaker008:] You mean, uh, bu to [disfmarker] combining the [disfmarker] the posteriors on full band recognition like from [disfmarker] from three different or four different mikes. [speaker006:] Mm hmm. Mm hmm. Right. [speaker002:] Right. [speaker008:] Cool. [speaker002:] Hmm. [speaker007:] Hmm. [speaker006:] And that actually helped. [speaker002:] That's interesting. [speaker006:] That gave several percent improvement. [speaker002:] Right, yeah. [speaker006:] Absolute. So that, uh, I think just [disfmarker] just having stereo will help cuz they're a little different. [speaker001:] Yeah. [speaker006:] You get a little [disfmarker] slightly different estimators [speaker002:] Hmm. [speaker001:] Yeah. [speaker006:] and, uh, uh, you combine them together, and you do a little better. [speaker001:] So basically if you plan on really training [disfmarker] collecting a database uh for [disfmarker] database for training, I mean, you are talking about, you know, twenty thousand or forty thousand sentences. Things like that. [speaker006:] Right, so that the [disfmarker] [speaker008:] It's something like tens of hours [disfmarker] [speaker001:] Mm hmm. [speaker008:] tens of hours of data, I think. [speaker006:] M [speaker001:] Tens of hours. [speaker008:] Yeah. [speaker002:] Mm hmm. [speaker006:] So our initial plan for ICSI was to do forty hours. [speaker005:] Yeah. [speaker001:] OK. 
[speaker006:] And then U W is talking about doing an additional sixty to a hundred hours if they have the money. [speaker001:] Uh huh. [speaker007:] That [disfmarker] [speaker006:] If they can actually do it. [speaker007:] Yeah, [speaker008:] Really? [speaker007:] that's pretty [disfmarker] [speaker002:] Mmm. [speaker007:] Mmm. [speaker006:] I mean, that's what they had said before [speaker007:] That's [disfmarker] [speaker006:] but I guess that was more ambitious. [speaker005:] We were thinking something like, you know, ten hours to begin [vocalsound] with. [speaker007:] They were gonna [disfmarker] There was a difference in the amount of money awarded from the beginning to e [speaker002:] Mmm. [speaker005:] It's about [disfmarker] [speaker006:] Uh huh. [speaker001:] Mm hmm. [speaker004:] Well, the other thing too is, I mean, i if [disfmarker] if you have the equipment then, i i it could be different. [speaker007:] Um. [speaker001:] But I mean, i i [speaker006:] Yeah, recording it's easy. [speaker004:] Then the recording is easy and maybe [disfmarker] maybe they [disfmarker] [speaker007:] Yeah, recording is [disfmarker] Right. [speaker006:] It's transcription that's a pain. [speaker002:] It's transcription and processing, yeah. [speaker001:] So [disfmarker] so [disfmarker] [speaker005:] Oh, the [disfmarker] [speaker007:] Right. [speaker005:] Right. [speaker004:] Yeah. [speaker005:] That's actually, that's the other thing. We weren't [disfmarker] [speaker001:] Yeah. [speaker002:] Hmm? [speaker006:] OK, but you have to put on a [disfmarker] You have to put on a mike. [speaker001:] So [disfmarker] so, clear [speaker008:] I think we've run it [disfmarker] well, uh. [speaker006:] Oh. We're out of mikes. [speaker004:] I'd give you my mike. [speaker008:] Not quite but [disfmarker] [speaker007:] You can have my mike. [speaker008:] Yeah, yeah. It's not [disfmarker] we [disfmarker] we [disfmarker] it's a [disfmarker] [speaker002:] It's not a speaker phone is it? 
[speaker006:] Don't [disfmarker] don't switch mikes. Don't switch mikes. [speaker004:] Oh, I'm not allowed to. [speaker006:] That's way too confusing. [speaker008:] We [disfmarker] [speaker004:] OK. Alrighty. [speaker008:] we need to, uh. [speaker001:] So, I mean, clearly i if everybody's collecting data to train on, m maybe we should do something [disfmarker] [speaker005:] Uh. One thing I should point out, we were n we were not planning on tra doing transcription of the full thing right away, u cuz that's not [disfmarker] I mean, I was initially interested in just the speech that was directed at the computer. [speaker001:] Mm hmm. [speaker006:] Mm hmm. [speaker007:] Mm hmm. [speaker002:] Mmm. [speaker005:] And so we were not planning on doing the [disfmarker] transcribing the entire thing. [speaker008:] Right. OK. [speaker005:] But I mean, that's something to talk about. [speaker006:] But if you had the data and we had [disfmarker] a [disfmarker] a company. [speaker007:] But perhaps the company [disfmarker] a company that might, uh. [speaker005:] If you wanted [disfmarker] if you wanted to transcribe, [vocalsound] you have the yes, [@ @] you'll be fine. [speaker006:] I didn't know whether I should name the company that may be providing us transcription or not. [speaker007:] Yeah. [speaker001:] Yeah. [speaker007:] There [disfmarker] No. [speaker001:] If you doubt, don't do it. [speaker008:] Cuz we're very naturalistic here. [speaker007:] Yeah. But it, um, [vocalsound] and there will be a lot of people to people talk in those meetings I bet with even three people, [speaker001:] So [disfmarker] [speaker005:] So what is the company? [speaker007:] so. [speaker005:] I heard a "sh". [speaker006:] IBM. [speaker007:] Yeah. [speaker006:] So we've been [disfmarker] we've been trying [disfmarker] we've been trying to work with them for a while on these projects [speaker002:] I guessed it from the clue, [speaker001:] Ah! [speaker002:] yeah. [speaker005:] Ah! OK. 
[speaker006:] and they don't [disfmarker] it's hard for them to give us money but they [disfmarker] they seem to be willing to give us time. [speaker002:] Mm hmm. [speaker006:] And so one of the po potentials is for them to do transcription in exchange for access to the corpus. [speaker001:] Yeah. So basically it seems to be like, uh, I mean, if everybody w e u was in the universities collecting data, we are, you are, I mean, sharing the data and making a bigger corpus is going to be, you know, a benefit for everybody. [speaker006:] It would be great. [speaker002:] Mmm, mmm. [speaker007:] Oh, yeah. [speaker006:] Yeah. Yes, absolutely. [speaker008:] Yeah. [speaker006:] I think that would be excellent. [speaker007:] And, uh, it will really help the design too. [speaker006:] Mm hmm. [speaker007:] It'll m keep it from being too specific to somebody's set up or task. [speaker001:] Yeah. [speaker004:] Should we [disfmarker] should we interpret what he says? [speaker001:] Morgan said yes. [speaker007:] Oh, do we [disfmarker] [speaker006:] That's right. [speaker004:] Morgan says [disfmarker] [speaker006:] Morgan says thumbs up. [speaker004:] There we go. [speaker007:] Right, right. [speaker008:] Um, bu but how much [disfmarker] how valuable [disfmarker] Given that you're interested in this computer directed data and we're not gonna have any of that? [speaker005:] Well, that's me personally. Our group in general as Horacio was saying is also interested in some of the same general issues. [speaker008:] Yeah. I see. Yeah. [speaker006:] Well, and I think that [disfmarker] [speaker002:] Mmm. [speaker005:] Um, [speaker007:] Yeah. [speaker005:] and I'm also interested in prosody, um, use [disfmarker] you know, u used over person to person stuff like segment and helping for segmentation and stuff. [speaker008:] Right. [speaker007:] Yeah, so we have some ideas there but not enough data to train. [speaker006:] So al Right. [speaker008:] Mm hmm. 
[speaker007:] So [disfmarker] Harry, and Kemal and I probably [disfmarker] So actually having different kinds of conversations is really helpful too [speaker006:] Mm hmm. [speaker007:] because [disfmarker] to g to make these models robust. You don't wanna train on just a few people's voices cuz then you'll just learn the [disfmarker] their prosodic patterns. [speaker002:] Mmm. [speaker008:] Right. [speaker002:] Mmm. [speaker007:] That's not what we want. [speaker005:] Mm hmm. [speaker006:] So [disfmarker] so how are you gonna get dra ground truth on that? So [disfmarker] so we should [disfmarker] [speaker007:] That is from the transcription and from the speaker s segmentation. That's not hard. [speaker006:] Mm hmm. [speaker007:] I think the hard part is getting enough training data and [speaker006:] Mm hmm. [speaker007:] nor you also need normalization data. So you're sort of iteratively normalizing as you go through the [disfmarker] So. [speaker006:] But anyway, what I was gonna say is that although we're not gonna have speech directed to a computer, my expectation is that the acoustic models will be very similar. The language models probably won't be. [speaker005:] N the language models c Yeah. [speaker007:] Actually the acoustic may not be either. So, it's [disfmarker] [speaker005:] Really? [speaker006:] Yeah, I [disfmarker] if it's Wizard of Oz I think it will be. [speaker005:] Oh, sure. [speaker006:] If they know that [disfmarker] [speaker007:] Yeah, if it's very good, [speaker006:] Yeah. [speaker007:] that's true. [speaker001:] Yeah. [speaker007:] If it makes mistakes it'll be different though [speaker005:] Yeah. On the language model [pause] we're [disfmarker] [speaker007:] I think it [disfmarker] [speaker008:] Hmm, [vocalsound] right. [speaker002:] Mmm. 
[speaker006:] The language model will be [disfmarker] certainly be different [speaker005:] uh we're [disfmarker] we're gonna be using the same thing that we've done in the past [speaker006:] but [disfmarker] [speaker005:] which is, you know, we need a natural language grammar written anyway. It's written in unification grammar framework. [speaker006:] Mm hmm. [speaker005:] um, but with context free grammar, power, and so it's just changed directly into something the recognizer recognizes off of directly with [disfmarker] with no, um, [speaker006:] And you're gonna hand [disfmarker] c hand code that? [speaker005:] It's [disfmarker] it's [disfmarker] it's hand coded. Because we need [disfmarker] we need to write the natural language grammar anyway to understand. So it's [disfmarker] it's a small, you know, ATIS sized type task. [speaker006:] Oh, I [disfmarker] I uh sure, sure, sure, sure. I see. [speaker002:] Hmm. [speaker005:] So, yeah, that's [disfmarker] that's the way we've been approaching it. [speaker006:] Right, if you wanna interpret it you uh need to do [disfmarker] do that anyway. [speaker005:] Eh if we [disfmarker] if we're ge yeah, we [disfmarker] you know, if we're gonna be able to interpret it then [disfmarker] I mean, if we're not gonna be able to interpret that then it's not gonna do us much good anyway to recognize it. So we might as well go directly from the natural language grammar. [speaker004:] Is it [disfmarker] is it gonna be like an HPSG kind of grammar? [speaker005:] It is a unification base, it's not [disfmarker] it's not as [disfmarker] as theoretically bound as HPSG in particular, although the couple of people that we have working on it have HPSG backgrounds. [speaker004:] I see. [speaker006:] So what's your time frame look like for [disfmarker] doing meetings, actually starting to record some? [speaker001:] I guess [speaker005:] It's all up to when I have free time. [speaker001:] Yeah. 
I mean, except [disfmarker] [speaker005:] Anybody else want to work on this by the way, Horacio? [speaker006:] Mm hmm. [speaker001:] Well [disfmarker] well, indeed. I mean, most people, you know, after this evaluation and conference [@ @] [comment] should be working on these type of things. At least, [speaker002:] Mmm. [speaker005:] Uh huh. [speaker007:] Yeah. [speaker001:] you know [speaker005:] Well I know they're all working on mediated spaces but there's a lotta aspects o e to that [disfmarker] to that project. [speaker002:] Yeah. [speaker001:] Yeah, yeah, but [disfmarker] but eventually [disfmarker] I mean, will be people working on uh kind of recognition aspects and, uh, you know, and [disfmarker] and [disfmarker] and in core technology. So I mean, recognition technology and also core, you know, algorithms, et cetera. And for those people to have these data will be, you know, the best way to, you know, develop the systems to an [@ @]. [speaker005:] Right. [speaker006:] Mm hmm. Well the [disfmarker] the reason I'm asking is that it's certainly better to get in early with standards, and data formats, and conventions. [speaker001:] Yeah. Yeah. [speaker007:] Yeah, [speaker005:] Yes. [speaker007:] definitely. [speaker005:] Yes. [speaker001:] So probably, uh, on the [disfmarker] [speaker006:] Um. [speaker005:] Well, so we haven't even [disfmarker] you know, we haven't collected a byte yet [speaker007:] Right. That's actually why I called this meeting, [speaker005:] so. [speaker007:] cuz i I mean, you're both working on it and with, you know, different sampling rates, and these silly things like that. [speaker006:] Well, yeah, it's a [disfmarker] it's ex excellent, excellent. [speaker002:] Mmm, mmm. [speaker005:] Right. Yeah. [speaker006:] Yeah, that's right. [speaker002:] Mmm. [speaker006:] I mean, just [disfmarker] just [disfmarker] just agreeing on a sampling rate and a number of bits, and, uh, header formats and all that. [speaker001:] Yeah. 
[speaker007:] So [disfmarker] [speaker005:] And we have the same sampling rate. [speaker007:] Yeah. [speaker002:] Sampling rate in the end. [speaker007:] What? [speaker005:] One [disfmarker] one thing we do know is that we're [disfmarker] we're all producing NIST Sphere files in [disfmarker] in, uh, eh in the standard format. [speaker002:] Yeah. [speaker007:] Oh, OK. NIST headers. [speaker006:] Sixteen bit, sixteen kilohertz? [speaker005:] Sixteen bit, sixteen kilohertz. [speaker002:] Bits, sixteen K, [speaker007:] OK, s [speaker006:] Eh, good good. [speaker005:] So at the end we're all [disfmarker] [speaker002:] yeah. [speaker005:] And the amazing thing is that even [disfmarker] even, you know, the entire way through we're actually using some very, very similar stuff. [speaker006:] Big endian, little endian? [speaker008:] Yeah. [speaker007:] OK. Well I just made that up [speaker005:] And so tha [speaker007:] but, [speaker005:] yeah. [speaker007:] um. [speaker006:] Well, but that's [disfmarker] that was coincidence, right? [speaker005:] So, [speaker007:] But that [disfmarker] [speaker005:] Yeah, it's coincidence that it's very fortunate cuz [disfmarker] [speaker002:] Yes, [speaker007:] yeah, yeah. [speaker002:] that's right, not, but [disfmarker] [speaker007:] But you have different na you [disfmarker] we're gonna have different, you know, hardware and [disfmarker] and [disfmarker] [speaker005:] Yeah. [speaker008:] Yeah. [speaker005:] We actually have, yeah, quite similar hardware as it turns out. [speaker002:] It's similar but not quite the same. [speaker007:] OK, [speaker008:] But it [disfmarker] but it [disfmarker] [speaker007:] so [disfmarker] [speaker005:] Which is ver it's [disfmarker] it's very good, [speaker002:] Yeah. [speaker005:] yeah. [speaker007:] so. 
[speaker005:] Um, so we're actually [disfmarker] [speaker002:] So the bit we're a bit deficient in as we [disfmarker] [speaker006:] So I [disfmarker] I think probably the biggest thing would [disfmarker] Well, no that [disfmarker] that wouldn't be true. If you're not gonna transcribe it yourself, then you're not gonna care too much about our transcription conventions. [speaker007:] Actually that's [disfmarker] the transcription conventions [disfmarker] I was talking about this with Jane [disfmarker] it's really important to people at least like me who wanna know, um, certain [@ @] [comment] uh, who are looking at aspects that are sort of [disfmarker] they're not words but they're structural things you're trying to recognize. [speaker005:] Well [disfmarker] Uh [disfmarker] [speaker006:] Right of it. Mm hmm. [speaker007:] And also I guess for the speaker. People interested in the speaker separation because what you call a turn and the end of a turn, and that [disfmarker] it [disfmarker] it's horribly confused if you don't sort of [disfmarker] [speaker002:] Mmm. [speaker007:] It's inherently hard. And so you have to make some simplifications and different people make different simplifications, [speaker006:] Mm hmm. [speaker007:] and then you can't compare them at all, which is what the literature is sort of like now. So, um, and then there's the issue of recognizer dictionaries, and non standard forms, and things like that that have to be mapped. [speaker005:] Yeah. [speaker007:] So "gonna" versus "going to" and that kind of thing. [speaker006:] Right. [speaker005:] I mean, all those issues yeah I'm sure we [disfmarker] [speaker008:] Mm hmm. [speaker002:] Mmm. [speaker005:] I mean, [@ @] like I say. 
I'm [disfmarker] I'm personally not interested in those for my particular aspect of the project [speaker007:] They're not hard but [disfmarker] [speaker005:] but SRI in general is [disfmarker] is the [disfmarker] [speaker007:] but we definitely will be interested in some language and dialogue. [speaker005:] Yeah. And s I don't know, maybe we will end up transcribing stu [speaker002:] Mm hmm. [speaker005:] if Hor if Horacio wants, you know, wants that done, maybe that'll be [disfmarker] maybe we'll end up doing that. [speaker007:] So. [speaker006:] Well [disfmarker] [speaker007:] And UW is also probably gonna be interested in that too, the dialogue kinds of things. So, i [speaker006:] Well, so it's a [disfmarker] [speaker007:] you know. [speaker006:] so it seems like, um, if you and Jane [disfmarker] [speaker004:] Is [disfmarker] is Morgan nodding his head? [speaker006:] Yeah. [speaker008:] Yeah. [speaker004:] OK. [speaker006:] So if you and Jane keep [disfmarker] keep in touch about transcription conventions, that would be very helpful. [speaker007:] Yeah. [speaker001:] Yeah. [speaker007:] Yeah, that's originally [disfmarker] We should thank Jane [speaker002:] Right. [speaker007:] because that's how I originally [disfmarker] she showed me some transcripts and I thought, Wow, [speaker006:] Yeah. [speaker005:] I mean, it's kind of amazing too that, you know, the transcription tool we just converged on the same [disfmarker] [speaker002:] Mm hmm. [speaker007:] this is great ". [speaker006:] Yep. [speaker007:] Oh, yeah. [speaker002:] Yeah, yeah, to study using the same thing. [speaker005:] All these things were very, very similar. [speaker008:] Yeah. [speaker001:] You were also thinking about the same tool? [speaker008:] That's interesting. [speaker007:] Yeah. [speaker005:] It's [disfmarker] We [disfmarker] we [disfmarker] we were gonna use the same tool. [speaker001:] Wow! 
[speaker007:] But what if you had like written the s the Tcl kluge or something, [vocalsound] and then both done the same [disfmarker] [speaker005:] We [disfmarker] we set up that tool already, yeah. [speaker001:] Already! [speaker005:] Yeah. [speaker006:] Hmm. That's right. [speaker008:] Well, the [disfmarker] I mean, and in some ways that does change the, uh, the economics of doing that. [speaker001:] I mean, it's [disfmarker] [speaker002:] Can you do that? [speaker006:] Right. [speaker008:] But, um. [speaker002:] Hmm! [speaker006:] It's much more worth while to do if someone else is gonna use it. [speaker002:] Yeah. Yeah. [speaker007:] Yeah. [speaker008:] Yeah. [speaker005:] Yeah, [speaker006:] Although again if this now named company is the one who's gonna do the transcription, I sus I suspect they have in house tools that they'll end up using. [speaker005:] that's right. [speaker008:] Mm hmm, mm hmm. But we still [disfmarker] yeah. [speaker005:] Uh huh. [speaker007:] But you still need something that can display these channels. [speaker008:] That's interesting. Right, we need [disfmarker] we need some kind of browsing. [speaker007:] e e better than what it can do, I mean. [speaker008:] Yeah. It's [disfmarker] [speaker007:] And then I guess who is the contact at UW? Like Morgan? Morgan doesn't have a microphone but is it [disfmarker] is it [disfmarker] [speaker006:] That's alright. [speaker008:] He's allowed to speak, [speaker006:] He can talk anyway. [speaker008:] he ca [speaker007:] I mean Mari and [disfmarker] [speaker004:] Well, he could be reached [disfmarker] he could be reached by the far field, [speaker008:] yeah. [speaker004:] couldn't he? [speaker007:] Mari or [disfmarker] or [disfmarker] or Katrin, or Jeff, or [disfmarker] [speaker004:] Yeah. [speaker007:] Yeah. I mean, both [disfmarker] I think both Jeff and Katrin have some time on this. No, on the p on the DARPA project, who would be sort of contacts for [disfmarker] Mari. 
And if Mari's too busy then [disfmarker] [speaker002:] Mmm. [speaker007:] OK, [speaker002:] Mmm. [speaker007:] yeah. [speaker008:] Hmm, [@ @]. [speaker006:] I was. [speaker001:] Uh huh. [speaker005:] I don't think we have any problem. I mean, we had certainly in our [disfmarker] you know, the proposal that we tried to make to NSF, we were assuming we would distribute this through the LDC. [speaker001:] No, yeah. Yeah, I [disfmarker] I mean, this [disfmarker] this is something that re re requires a bootstrap. [speaker006:] LDC is what we were thinking too. [speaker005:] Yeah, [speaker002:] Mm hmm. [speaker007:] Yeah. [speaker005:] so we have no problem with that. [speaker001:] I mean, there is no LDC database. [speaker007:] Yeah. [speaker008:] Mm hmm. [speaker001:] There is, at least we have to share the data, I mean. [speaker002:] Mmm. [speaker001:] That's the bottom line, [speaker007:] I [disfmarker] I mean, I think eventually LDC could be very interested in it, [speaker001:] uh. [speaker007:] especially if you have a tool that can do this. [speaker001:] Yeah. Yeah. [speaker007:] I mean, there isn't one. [speaker006:] Hmm. [speaker007:] And [disfmarker] and the data. So maybe it'll [disfmarker] we can actually [disfmarker] [speaker001:] Yeah. [speaker007:] You can sell corpora eventually to the LDC and you can get time, student time, to clean up the corpus and things. [speaker008:] Hmm. [speaker007:] So. Yeah. Mm hmm. [speaker001:] Yep. [speaker008:] Right. [speaker004:] I wanted to ask one question about the [disfmarker] the nature of the design. [speaker005:] It's terrific. 
[speaker004:] Th Given that the overlaps are up [disfmarker] up uh, where the difficulty comes, in a bunch of a areas, um, whether it would be useful to have some of the sessions have like an additional rule where if a person, you know, the [disfmarker] that there's someone who recognizes people to speak or something like that to cut down on the [disfmarker] so you [disfmarker] then you have the meeting context, you have multiple channels, you have the microphones n varying the way you want them to vary, but you'd cut down substantially on the overlaps. It wouldn't be f for all meetings but whether maybe just sprinkled through there once in a while, something like that might be useful or not. Just wanted to raise that. I don't know. [speaker008:] I mean, it's difficult because you want it to be real speech, right? And also it's not clear how well people can even adjust to even adjust to that kind of rule. [speaker004:] Mm hmm. [speaker005:] Yeah. I mean, s it [disfmarker] tho those [disfmarker] those kinds of things are sorta the kind of things that we were thinking of [disfmarker] of trying, [speaker008:] But if there [disfmarker] [speaker006:] Robert's rules. [speaker008:] Yeah. [speaker005:] cuz we are having [disfmarker] we do have a more artificial set environment again. [speaker004:] Mm hmm. [speaker005:] We're trying something totally new having people speak to the computer. I don't think we talked about that in particular cuz we didn't [disfmarker] we haven't done any pilots, we haven't seen this, [vocalsound] incredibly difficult data of overlapping. But this is uh, Christine Helverson, who's working on this with me from the s um, aspect of user, you know, usability and stuff, would, y you know [disfmarker] sh she's interested in these sorts of things. What [disfmarker] what can you ask people to do that, um, that they can reliably do and without too much, you know, cognitive load, um, that can help things out, or, you know, those sorts of issues. 
So, I don't know. Maybe it's something that could fit easier in [disfmarker] in our context than in a general meeting context. I don't know. [speaker004:] Well, what I can say is that wha sometimes, you know, if [disfmarker] if [disfmarker] if we were to have this kind of coordination thing, some [disfmarker] some of the overlap is due to someone wanting to claim the floor at a certain point [speaker006:] Well I think that [disfmarker] Mm hmm. [speaker004:] without knowing that other people have the same desire at the same point in time. And if [disfmarker] and if you were to have someone who was like a moderator, then you raise your hand, the moderator would say, you first, [speaker005:] Just have a button. [speaker004:] you know, or yeah, press the button. [speaker005:] And ring the buzzer. [speaker004:] And [disfmarker] and I think, you know, there are always gonna be backchannels, I think that's natural and lovely. [speaker005:] Mm mmm. [speaker008:] Mm hmm. [speaker004:] But these [disfmarker] these overlaps that are due to people n not knowing other people wanna leap in at the same time and [disfmarker] and wanting to coordinate [disfmarker] you know, I think that, uh, it could be lessened. [speaker006:] Hmm. [speaker005:] I don't know. [speaker004:] Not eliminated entirely. [speaker006:] There are certainly f formal meeting settings where that's what's happens, y that you know, Robert's rules of order and all that. [speaker004:] Yeah. [speaker007:] Right. [speaker006:] Uh, so that's not something we had thought about before but I think it [disfmarker] it [disfmarker] it certainly is another type of meeting that could be done. [speaker008:] Yeah. [speaker004:] Mm hmm. [speaker006:] Uh, no, I didn't. I didn't know we were gonna be recording this meeting, [speaker008:] We [disfmarker] [speaker006:] so I didn't. [speaker008:] no [disfmarker] Yeah, in general [speaker006:] Yes, in the other meetings we have. [speaker008:] yeah. Yeah. 
[speaker004:] And they've been collecting digits too. OK. [speaker006:] So that's another thing we should talk about, is making sure that we're doing it in the same way. [speaker002:] Mm hmm. [speaker005:] We [disfmarker] [vocalsound] For a [disfmarker] t for a completely different reason I assume [speaker002:] Mmm. [speaker005:] but [disfmarker] [speaker006:] So I [disfmarker] I [disfmarker] [speaker005:] And [disfmarker] and not within a [disfmarker] not within a big room. Just, uh. Yeah. [speaker002:] Mmm. [speaker004:] That's a good idea. [speaker006:] Yeah, if I had known and here we have all these new speakers, I could have gotten lots of digits in [speaker008:] Yeah. [speaker006:] but [disfmarker] [speaker007:] But sorry, so you connect [disfmarker] you collect digits in conversation or [disfmarker] or individ [speaker006:] No, no, I have people read them. [speaker008:] Yeah, yeah. We [disfmarker] we give people a list of digits and they have to like put them into their discussion, [speaker007:] Uh. [speaker008:] like say, "Five, one, seven, seven, three." [speaker006:] Seven. [speaker003:] You you're encouraged to, you know, just utter digits once in a while. [speaker007:] I was thinking [disfmarker] [speaker002:] Yeah. [speaker007:] Never mind, [speaker001:] You say, hey, [speaker007:] I'm getting attacked. [speaker001:] by the way, eight! " [speaker005:] Uh huh. [speaker002:] Mmm. [speaker007:] So you don't need the meeting for that. You can just sit me in this room and set up all the microphones. [speaker005:] But you need the room and the set up. [speaker007:] Right, OK. Right, right. [speaker002:] Right. [speaker005:] Yeah, we're doing digits [disfmarker] [speaker007:] OK. [speaker008:] Yeah. [speaker005:] yeah, we're not doing digits in the [disfmarker] in the room set up at all. We're doing it for a complete I mean, one of the specific things I'm interested in is knowing [disfmarker] um, is the open mike issue. 
When you're talking to the computer, knowing when you're done speaking. And so I'm loo interested in [disfmarker] in [disfmarker] well, in interested in looking at prosody. Um, [speaker006:] Oh, cool. [speaker005:] and so we're just having [disfmarker] we're just doing lists of digits to see when people pause to know, you know, "there was a big gap in time here but, they" [disfmarker] you know, we can tell prosodically they weren't done, so. Yeah. [speaker007:] I [disfmarker] I think that's a really good idea. [speaker002:] Hmm. [speaker008:] But [disfmarker] but there is [disfmarker] but there is also this fascinating prosodic thing that goes on [speaker007:] To always [disfmarker] yeah. [speaker008:] because you [disfmarker] what we [disfmarker] we have is like these lists of ten digit strings or twenty digit strings and they're like one [disfmarker] one to five or one to seven digits. And people read through them. And you know, we're saying, well we won't to be able to separate them so please pause between but it's very [disfmarker] people don't like doing that. And so there is this prosodic thing which is [disfmarker] which is indicating when [disfmarker] whe the pauses between these digit strings. And then when they finish, the actual [disfmarker] the whole sheet, there's this nice, wonderful [disfmarker] yeah, phrase [@ @]. [speaker002:] Mmm, mmm. [speaker007:] Oh, yeah. [speaker005:] Yeah, [speaker006:] Nice falling, [speaker007:] Oh, yeah, [speaker002:] Declination, yeah. [speaker005:] right. [speaker007:] d [speaker006:] yeah. [speaker007:] If [disfmarker] you definitely shouldn't use your last sample. [speaker005:] Right. [speaker008:] And it [disfmarker] [speaker006:] So. [speaker007:] I mean, people that do prosody work, they add a few at the end [speaker008:] Right. [speaker006:] Oh, I hadn't thought about that. [speaker007:] because otherwise people will [disfmarker] [speaker002:] Mmm. [speaker008:] Oh right. 
[speaker006:] I hadn't even thought about throwing out the last couple. [speaker007:] Yeah. Yeah. No, but that [disfmarker] that is beautiful [speaker006:] Definitely should. [speaker002:] Mm hmm. [speaker007:] because then you can get this F zero floor for each of the speakers. [speaker008:] Mm hmm. [speaker005:] Oh, right, just look at the very last [disfmarker] [vocalsound] look at the very end. [speaker007:] Just from that. [speaker002:] Right, [speaker003:] Yeah. [speaker002:] yeah, yeah, yeah. [speaker008:] Yeah, yeah. [speaker006:] Don't throw it away but don't use it in your training. [speaker007:] Your [disfmarker] your junk is our, uh, [vocalsound] training data. [speaker008:] Treat especially [disfmarker] [speaker002:] Mmm. [speaker008:] Yeah. [speaker006:] Cool. [speaker007:] No, that's a really good idea to have some standard [disfmarker] I mean, especially if a the different sites can do something like that. You'll at least be able to see whether difficulty across site is due to, you know, the lexicon version, [speaker006:] And [disfmarker] [speaker008:] Right, right. [speaker007:] uh. [speaker008:] There'll be a common [disfmarker] That'd be great. [speaker007:] Yeah. [speaker008:] Yeah, that's a good idea. [speaker005:] Funny. [speaker006:] Hmm. [speaker004:] That's a good idea. [speaker007:] Oh. [speaker002:] That's the reason, [speaker007:] You should definitely [disfmarker] [speaker004:] Mm hmm. [speaker002:] uh huh. [speaker007:] it's some kind of calibration, uh, for uh many of these tests. [speaker008:] Yeah. [speaker006:] Right. [speaker002:] Mmm. [speaker006:] The only thing I don't like about it is that it's not phonetically very rich, [speaker008:] Right. [speaker007:] Well, you could have TI [disfmarker] you know, "she put her dirty laundry in the clean [disfmarker] [vocalsound] clean bathwater", whatever. [speaker006:] but. Yeah, that's right, TIMIT. So we actually talked about that. Yeah, it's plenty bad. 
It's plenty bad just with digits, [speaker002:] Mmm. [speaker007:] Yeah. [speaker002:] Mmm. [speaker006:] although I haven't trained any nets on it. I've just been using existing digits or Broadcast News nets since we don't have enough data yet. [speaker005:] And what are these [disfmarker] what are the digit strings [disfmarker] are from TI digits and, [vocalsound] uh [disfmarker] [speaker006:] One, seven, eight, two, three, one, three, seven, eight, four. [speaker005:] Uh [disfmarker] [speaker008:] Yeah, it's TI digit strings. [speaker005:] I don't [disfmarker] [speaker006:] M that sorta thing. [speaker005:] Uh [disfmarker] [speaker004:] That's right, this [disfmarker] we could each make up a string. [speaker006:] Yeah. [speaker008:] It's actually [disfmarker] [speaker005:] How [disfmarker] how long are [disfmarker] how long chunks are they, [speaker006:] But what were they? [speaker005:] a [speaker008:] it's like [disfmarker] but [disfmarker] from one to [disfmarker] one to nine digits. [speaker006:] They vary from [disfmarker] [speaker003:] One two three four. [speaker008:] It is actually the [disfmarker] the actual digit strings from TI digits. [speaker005:] OK, [speaker006:] I [disfmarker] I just extracted exactly the same ones, [speaker005:] yeah, I've never looked over the TI digits corpus, [speaker008:] OK. [speaker007:] Have we all memorized these? [speaker002:] Up a bit. [speaker005:] I don't know. [speaker006:] so. [speaker007:] Maybe we know them. [speaker008:] Yeah. [speaker006:] That's like the TIMIT sentences. [speaker007:] Right. [speaker008:] Yeah. [speaker002:] Yeah. [speaker006:] Greasy wash water, [speaker004:] I know the numbers that were used [speaker006:] oh. [speaker002:] In one ear. [speaker004:] but I don't remember the combinations. [speaker008:] The order, yeah. [speaker002:] Just the order, OK? [speaker006:] But, so I have a bunch of tools for generating these and for helping transcribe them. 
Since you know what [disfmarker] you know what the transcript is you don't have to type it in, you just have to segment it. [speaker005:] Mm hmm. [speaker006:] So I have tools for doing that. Using X X Waves, Tcl TK, Perl, and maybe some C programs too. [speaker004:] We could do some digits. [speaker006:] Well I [disfmarker] I don't have any of the print outs. [speaker004:] OK. [speaker006:] I [disfmarker] I print them out a a as I need them. So what I should do is just print out a whole stack and leave them up here but I haven't done that. [speaker008:] Put them on the Internet, put them on a web page. [speaker004:] That's what I was wondering. [speaker007:] Eh yeah, you could write [disfmarker] write them [disfmarker] write something on the board [speaker006:] Well, but i [speaker004:] Ha wha I mean, we could just make [disfmarker] make numbers up, [speaker007:] and, I don't know. [speaker006:] It's [disfmarker] it's a different task. [speaker004:] can't we? [speaker007:] Yeah, [speaker006:] It's a different task than read. [speaker007:] yeah. [speaker006:] You [disfmarker] you do it differently if it's sitting in front of you and you're reading it than if it's not. [speaker004:] OK. [speaker006:] So, [speaker007:] Right. [speaker006:] I guess we could write them down and then read them, but. [speaker002:] Mm hmm. [speaker004:] OK. [speaker006:] I [disfmarker] I [disfmarker] we can get digits. [speaker008:] I [disfmarker] [speaker006:] I mean, a as you said, we don't have to be in a meeting. And so if we're short on digits I can just, you know, capture people in the hallway, drag them in here, and have them read digits for me. That's right. [speaker004:] Oh, can you bring them up from the [disfmarker] no, OK. Alright. I wanted to ask about also the air conditioning. Are [disfmarker] uh, is there a plan to [disfmarker] to turn the air conditioning off in some of these meetings or is it always gonna be on? [speaker007:] Yeah, it's way too cold in here. 
[speaker008:] We should do that cuz y you can turn it off on that [disfmarker] that thing right there [speaker002:] Right. [speaker006:] Well, [speaker008:] and it makes a huge difference. [speaker002:] You also gain noise from the projector as well, isn't it? [speaker006:] the [disfmarker] the hum we hear right now is actually the projector, [speaker002:] Yeah, yeah. [speaker007:] A all right. [speaker008:] But the projector, [speaker006:] not the air conditioner. [speaker008:] yeah. [speaker005:] Huh. [speaker006:] The projector is much louder than the air conditioning. [speaker008:] This is the first time we've done it with a projector. [speaker004:] Yeah, but I was just thinking, he was showing it [disfmarker] on what he showed us on the web site, you could see the contribution of the air conditioning. [speaker006:] Yep. [speaker004:] And I just wondered if that was gonna be sometimes clear. [speaker005:] But the projector's not part of your meeting or [disfmarker] or [disfmarker] or is it? Do [disfmarker] is this normally gonna be on during collection? [speaker006:] Um, no, not normally, not for our meetings. I [disfmarker] at least very few of the ones we've done have we had to do anything on the computer. [speaker002:] No? It [disfmarker] [speaker005:] For ours it will but we have a rear projector, so it's in a separate room. [speaker002:] Mmm. [speaker006:] Right. [speaker002:] It actually gives you a way of calibrating microphone position in a way. Doesn't it? [speaker006:] That's right, from volume. [speaker002:] Yeah. [speaker006:] That's interesting. [speaker002:] For volume, it's a point source pretty much, [speaker007:] Mmm. [speaker002:] yeah. [speaker004:] Hmm, hmm. [speaker002:] So. [speaker006:] Although unfortunately, you know, they're all pretty close, [speaker002:] Hmm. [speaker006:] right? [speaker002:] t Uh [disfmarker] [speaker006:] Oh, look! Digit strings. [speaker004:] Oh wow, there you go. [speaker001:] Mm hmm! 
[speaker007:] Modern technology. [speaker001:] Oh, I see. [speaker004:] Perfect. [speaker007:] See, you may wanna call up your computer. "Computer, give us digit strings." [speaker002:] Digit strings, [speaker006:] That's right. [speaker002:] yeah. [speaker008:] Wow, but the c but the recorder just crashed. [speaker004:] I think I recognize those. [speaker006:] Oh no! [speaker004:] Mm hmm. [speaker005:] And we're on. So, uh [disfmarker] [speaker003:] Time? Fourteen whatever. [speaker005:] So, topic number one is, we got the NSF ATR [disfmarker] ITR, excuse me, which was originally pretty big, but it's divided among many sites, and they, like, cut the budget by a th by two thirds. So they [disfmarker] they approved it for all the sites, but only gave us like sixty two percent of the money. Or, excuse me, cut it by sixty two percent. [speaker003:] Thirty two. Yeah. Thirty. Whatever. [speaker005:] So, uh, Morgan is, uh [disfmarker] And the other condition is that they have to get a budget out, like, immediately, or the or they'll give the money to someone else. [speaker002:] By next Friday. [speaker005:] So Morgan's working on that. So, I mean, it's good news. It's not as good as it could have been but it was, uh [disfmarker] [speaker002:] It's more money. [speaker005:] The [disfmarker] the I T Rs are, you know, a [disfmarker] a [disfmarker] long shot, and so it's [disfmarker] it's really nice that we got that. [speaker001:] What does that stand for? [speaker005:] Information Technology Research? Something like that? [speaker002:] Something like that. [speaker004:] Hmm. [speaker001:] Alright. [speaker005:] It's, uh [disfmarker] [speaker003:] Doesn't say much. [speaker001:] Yeah, it's pretty vague. [speaker002:] It's [disfmarker] it's um, it's some money that we can use [disfmarker] [speaker001:] It's government run, I guess. [speaker002:] Uh, I mean, we're in conjunction with uh, SRI, Washington, and, um, Columbia. [speaker001:] Mm hmm. OK. 
[speaker002:] And to do research on the meetings. [speaker001:] Cool. [speaker005:] Yeah, it was specifically on the meeting. [speaker003:] Not great. [speaker005:] So, d was that the one that was the meeting maps? [speaker002:] Yeah. [speaker005:] OK, good. I liked that one. [speaker002:] Yeah. [speaker003:] What is meet meeting maps? [speaker002:] Mapping meetings, or [disfmarker]? [speaker005:] Well, the [disfmarker] the sort of general idea was, you can think about meetings at lots of different levels, all the way d from all the way down to acoustics to all the way up to dialogue, discourse, topic. [speaker003:] Yeah. Yeah. Yeah. [speaker005:] And so this [disfmarker] this was, how do you map all that information onto that? a onto the meeting domain. [speaker002:] It's like creating a m a map of a meeting, in a sense. [speaker005:] So that the [disfmarker] the analogy was you have maps of different things at different resolutions. And so, uh, [speaker003:] Uh, OK. [speaker005:] So this had a lot of different stuff in it. [speaker003:] OK. [speaker002:] It actually doesn't have a whole lot of [disfmarker] m speech stuff to it. [speaker003:] Yeah. Sure. Yeah, that's [disfmarker] [speaker005:] It's mostly higher level. [speaker002:] It's [disfmarker] [speaker003:] Yeah. [speaker002:] Higher level, yeah. [speaker005:] Yeah. [speaker003:] Yeah. [speaker005:] But, nonetheless, I think if we [disfmarker] It's a [disfmarker] it's a cool idea. And if we can actually get it [disfmarker] get it going that would be neat. So [disfmarker] And uh, this was the one also that was, uh, Was it with Susan Ervin Tripp? [speaker002:] I don't remember. It might have been. Yeah. We had to get some [disfmarker] [speaker003:] With who? [speaker005:] She's a linguist on campus. [speaker003:] OK. [speaker005:] Anyway. So, other topics I wanted to talk about are, um, the DARPA demo which, I guess, since Morgan isn't here we can't really talk about. 
I just wanted to make sure it was ready to go. [speaker001:] Yeah. [speaker005:] Uh, but I guess [disfmarker] [speaker001:] I sent [disfmarker] [speaker005:] Your stuff is all ready so it's just a question of, did Morgan get around to testing it. [speaker001:] Yeah, and I sent you an email [speaker005:] Yeah, I got it. With the instructions. [speaker001:] with the instructions. I don't know. So, Morgan [disfmarker] I guess Morgan's leaving on Saturday? [speaker005:] Yep. [speaker001:] So until then, I guess we might hear something, [speaker005:] Yeah, so I guess we ge we had better be on call [pause] for a little while. [speaker001:] but [disfmarker] Yeah, exactly. [speaker002:] Did he ever figure out how to d switch between applications like he was wanting to do? [speaker005:] I don't know. I haven't spoken with him. [speaker001:] Yeah. [speaker002:] He did? [speaker001:] Oh Yeah. [speaker003:] Switch between applications? [speaker002:] How is he gonna do it? [speaker005:] Alt Tab. [speaker001:] It's just Alt Tab. Like if you hold down Alt and then tap Tab, you just [disfmarker] the s just [disfmarker] the Task Manager w window comes up. [speaker002:] Yeah. Yeah, he [disfmarker] [speaker003:] Yes, that's [disfmarker] [speaker005:] I [disfmarker] I think [disfmarker] [speaker002:] he [disfmarker] Right. [speaker005:] I think the problem was that, I was t saying "hit Alt Tab" with the assumption that he knew what I meant. [speaker003:] Yeah. [speaker005:] But he didn't. So. [speaker002:] No. He said he tried that, but he didn't like it because it, [speaker001:] We Well [disfmarker] [speaker002:] uh [disfmarker] it took it out of the, um, [speaker005:] But it doesn't. [speaker002:] presentation mode. [speaker005:] It doesn't. [speaker001:] Well, because if you just touch Alt a If you just hit it once? it'll go back to the previous application. And the previous application was the PowerPoint demo, which was the display of like, you know [disfmarker] [speaker002:] Oh! 
[speaker005:] And [disfmarker] and all I said was oh just cycle through with Alt Tab [speaker001:] Not the slide show. [speaker005:] because that's the standard way of doing it in Windows. [speaker001:] Yeah, he didn't [disfmarker] [speaker005:] But of course he doesn't use Windows. [speaker002:] Yeah. [speaker001:] Right. [speaker005:] So, he didn't know what I meant. [speaker002:] Oh, so what do you have to do? [speaker004:] Hmm. [speaker002:] You have olt [comment] Alt Tab and [disfmarker] [speaker005:] Hold down at [disfmarker] [comment] hold down Alt, and then tap Tab several times and you'll cycle through all the open applications. [speaker001:] Yeah. [speaker003:] Yeah. [speaker002:] Ah. [comment] OK. And then it won't [disfmarker] And then you can come back to your full screen version of your presenta [speaker005:] Right. [speaker001:] Right. [speaker003:] Yeah. [speaker002:] Oh! [speaker001:] And I told them, um, if he decides to use multiple examples for, um, the prosody demo, to open up separate Transcribers. Just so you don't have to load up files [disfmarker] e during the meeting. [speaker005:] That's a good idea. [speaker003:] Yeah. [speaker001:] So. [speaker003:] Yeah. [speaker005:] That's not a bad idea at all, so that, uh [disfmarker] [speaker001:] He's got the memory for it. So. [speaker005:] Yeah, and since the files are short, it's just [disfmarker] [speaker001:] Yeah, they're all really short. [speaker005:] I mean the Tcl TK overhead is pretty high, but it shouldn't matter. [speaker002:] How much memory does he have? [speaker001:] No. I didn't have any problems [disfmarker] I'm sorry? [speaker002:] How much memory does he have? [speaker001:] I don't know, but it seemed [disfmarker] it didn't seem to be a problem at all. I would imagine you add sixty four to a hundred and twenty eight. [speaker002:] Hmm. [speaker005:] I think he had one twenty eight, [speaker001:] Yeah, either one will work, I think. 
[speaker005:] eh, but [disfmarker] It was faster than my PC. [speaker001:] I mean, it was [disfmarker] [speaker002:] Hmm. [speaker005:] So. [speaker002:] Wow! [speaker001:] Yeah, it was fine. I think y Yeah, I mean [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] We kinda walked through it a little bit on his p on his laptop. [speaker002:] So, are we the only ones who are giving demo? Or is, uh, Washington gonna have a demo too? [speaker001:] I could [speaker005:] I assume. But they're all doing Communicator stuff. [speaker002:] or [disfmarker]? Oh! [comment] OK. [speaker005:] So. [speaker002:] Hmm. [speaker001:] And it's part of the talk itself, so it's really not a big part. [speaker002:] Yeah, I see. [speaker005:] I mean, the talk's only twenty minutes. [speaker001:] It's just gonna be a couple of [disfmarker] [speaker005:] So. You know, all this time we spent on it's gonna be thirty seconds of screen time. [speaker001:] I know. [speaker002:] Oh, my gosh. [speaker001:] Right. Uh huh. [speaker002:] Oh. [speaker001:] But it's a good tool to have. [speaker005:] Yeah, I [speaker001:] The [disfmarker] th the m the meeting I R stuff is actually [disfmarker] [speaker005:] Actually, both of them. You know, cuz we're gonna wanna be playing with the [disfmarker] [speaker001:] Mm hmm. [speaker005:] uh [disfmarker] help, my brain [disfmarker] with the uh, [speaker001:] Prosody? [speaker005:] prosody stuff anyway. Um, so, actually, at some point w we should talk to users of that technology who are actually gonna be using it and see what they would like to see. [speaker001:] Yeah, have you guys seen the display? Have you guys seen either of the demos? They're pretty cool, [speaker002:] No. [speaker003:] Nope. [speaker001:] like [disfmarker] the prosody demo, we basically w loaded up um, word alignments. So, instead of having um, utterances in the bottom, it's like just word by word. [speaker002:] Mm hmm. [speaker001:] So you can see exactly where the overlaps are. 
[speaker002:] Oh. [speaker001:] And then, um, Adam did something where, um, we converted these feature files into wave files, so you can display them. [speaker002:] Display the features. [speaker003:] Instead of the wave file? [speaker001:] Yeah. Instead of the wave file. [speaker003:] Oh. OK. [speaker001:] And then w using multi wave you can add a file and play that sound. [speaker002:] Mm hmm. [speaker001:] Play the audio corresponding to that, um, um, feature file, [speaker003:] OK. [speaker001:] and it's all like, aligned and stuff. [speaker003:] Ah, great. [speaker002:] So you see, like, the F zero contour? [speaker001:] Yeah, so you see the pitch contours and you [disfmarker] and when you hit play, like there's a line that goes through it, you know, and it's all aligned with like the t the audio and the [disfmarker] feature file. [speaker002:] Mm hmm. [speaker003:] Oh, that's great. [speaker005:] Audio. [speaker003:] Yeah. [speaker005:] And the stylized F naught features are pretty cool. [speaker002:] Hmm. [speaker001:] Yeah. [speaker005:] So, these [disfmarker] this it's does a piece wise linear and so, normally if you look at an F naught track, it jumps all over the place. [speaker002:] Yeah. [speaker003:] Yeah. [speaker005:] This is a nice smooth one [speaker002:] Oh, neat. [speaker005:] and it really does a good job. [speaker003:] And [disfmarker] there's something like a median filter in it? or something? And uh, and it's [disfmarker] it's a linear fit or whatever. [speaker001:] Yeah, there's a [disfmarker] [speaker005:] It's a combination. [speaker001:] Yeah. There's a median filtering and then there's a piece wise linear fit, based on some criteria. [speaker003:] Yeah. Yeah. [speaker001:] I'm not sure. [speaker003:] OK. [speaker005:] Uh, t G Actually could you email me a reference to that paper? If you have it? [speaker001:] Yeah, sure. [speaker005:] Cuz I'm sorta curious how [disfmarker] what criteria it has. 
[speaker001:] I should email Morgan about it too, because he was asking about that. And [disfmarker] Yeah. [speaker005:] I mean, so, there's obviously a trade off between the number of knots y you pick, and the best fit. I mean, so, if you're doing a piece wise linear, you have to figure out how many pieces you want. [speaker003:] Yeah. Yeah. Sure. [speaker005:] And so there's a trade off because the best [disfmarker] the best mean squared fit would be with an infinite number of knots. [speaker001:] Right, well, then it's not a linear f fit. [speaker003:] Yeah. Yeah. Yeah. [speaker001:] You're not fitting anything then. [speaker005:] Well, sure it is. But [speaker001:] Well [disfmarker] [speaker003:] Yeah. [speaker005:] So [disfmarker] so you [disfmarker] There's a trade off [speaker001:] Yeah. It [disfmarker] It's kind of a useless one. [speaker005:] and I'm wondering what criteria they use. [speaker001:] Yeah, I actually don't know off the top of my head. [speaker005:] Mm hmm. [speaker002:] Was this code that you downloaded to do this? Or was it [disfmarker] [speaker001:] This was um, Kemal Sonmez at SRI, who's a co worker with, uh, Liz. [speaker002:] Hmm. [speaker005:] Dot com. [speaker002:] At SRI dot com? [speaker001:] Yeah. Did I just [disfmarker] [speaker005:] No, I [disfmarker] you said "at SRI" [speaker002:] No, i [speaker005:] so I had [comment] to add the "dot com". [speaker002:] It sounded like a URL, the way you were saying it, [speaker001:] Oh. [vocalsound] Oh. [speaker002:] yeah. [speaker001:] Sorry. Um, yeah, but, so Kemal has, uh [disfmarker] He did a paper on this for some v speech verification, uh, project, and we basically e extrapolated what he [disfmarker] what he did for that work and applied it to our files. And that's how we do [disfmarker] That's how we get our feature files and everything for all the prosody. [speaker002:] So what I mean is, like [disfmarker] the implementation that you used was something that you coded up here?
Or you grabbed code from somewhere else? [speaker001:] That [disfmarker] No, we grabbed most of it. Like, we grabbed the linear fitting stuff. [speaker002:] Ah. I see. [speaker001:] We did all the alignments and all the matching and stuff, [speaker002:] Mm hmm. [speaker003:] And [disfmarker] [speaker001:] but the actual coding of the [disfmarker] [speaker003:] And the [speaker001:] the math was done somewhere else. [speaker003:] These are our F zero candidates which I'm supposed to use for the synthesis thing? I suppose? [speaker001:] Uh, so that's what you were working on, right? [speaker004:] Right. That's the same software [pause] I used to get the data. [speaker003:] Yeah. OK. OK. [speaker004:] I [disfmarker] I gave you a c couple examples, [speaker003:] Yeah, I m didn't have time to [disfmarker] to look at that, [speaker004:] and [disfmarker] [speaker005:] Hmm. [speaker003:] so. [speaker004:] Right. Right. [speaker003:] Yeah. [speaker005:] So, other topic is, uh, the IBM [comment] transcription status stuff. So we've sent a few to them. We've gotten a few back. Um, I sorta wish Jane was [disfmarker] were here because I think she gave [disfmarker] Didn't she give one of them to one of the transcribers? [speaker002:] Yeah, it's done now. [speaker005:] And, how was it? [speaker002:] She said it was much faster. [speaker005:] Great. [speaker002:] It really helped a lot to have that. So, um, you know, there were a l [speaker003:] Much faster than doing everything alone. [speaker005:] Hand. [speaker002:] From scratch. [speaker003:] OK. [speaker002:] Yeah. So [disfmarker] [speaker001:] From end to en Like, from the beginning of the process to the end of the process? [speaker002:] No, f no. So from the point where we s take it, it's faster. [speaker005:] Well, but that's what we wanted. [speaker003:] Yeah. [speaker005:] Right. We knew it would be longer to have multiple groups. [speaker001:] Ooo. Yeah. [speaker002:] u So it's sorta like doing things in parallel. 
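The F zero stylization discussed in the exchange above — a median filter to knock out pitch-tracker spikes, followed by a piecewise-linear fit whose closeness to the raw contour trades off against the number of knots — could be sketched roughly as below. This is an illustrative reconstruction, not Kemal Sonmez's actual code: the window width, the evenly spaced knot positions, and the local-mean knot heights are all assumptions standing in for his unstated fitting criterion.

```python
from statistics import median

def median_filter(values, width=5):
    """Smooth an F0 track with a sliding median of the given (odd) width."""
    half = width // 2
    out = []
    for i in range(len(values)):
        window = values[max(0, i - half): i + half + 1]
        out.append(median(window))
    return out

def piecewise_linear(values, n_pieces=4):
    """Stylize a track as n_pieces straight segments.

    Knots are evenly spaced; each knot's height is the mean of its local
    neighborhood, and samples between knots are linearly interpolated.
    More pieces track the raw contour more closely -- the knot-count
    tradeoff raised in the meeting.
    """
    n = len(values)
    # Evenly spaced knot positions, including both endpoints.
    knots_x = [round(i * (n - 1) / n_pieces) for i in range(n_pieces + 1)]
    radius = max(1, n // (2 * n_pieces))
    knots_y = []
    for kx in knots_x:
        window = values[max(0, kx - radius): kx + radius + 1]
        knots_y.append(sum(window) / len(window))
    # Linearly interpolate between consecutive knots.
    out = [0.0] * n
    for (x0, y0), (x1, y1) in zip(zip(knots_x, knots_y),
                                  zip(knots_x[1:], knots_y[1:])):
        for x in range(x0, x1 + 1):
            t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
            out[x] = y0 + t * (y1 - y0)
    return out
```

A raw F0 track jumps all over the place, as noted above; running the median filter first and then the piecewise fit yields the smooth stylized contour the demo displays.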
[speaker001:] Yeah, right. [speaker005:] Right. [speaker002:] You know? So if, you know, if they can be working on that while we're working on the other things, then when we get them in, they don't have to do that one from scratch, basically. [speaker005:] Well, pipelined. [speaker001:] Mm hmm. Right. [speaker005:] The idea is that we're taking less time of the linguists. [speaker001:] Right. [speaker005:] Of the transcribers. So. [speaker001:] S [speaker002:] Yeah, th Right now it is taking a bit of time to turn around the meetings from IBM, part There's [disfmarker] there's several problems. One is, um, we were giving them these large files, [speaker003:] Fff! [speaker002:] which they then had to split up and put onto cassette tapes. So either ninety minute or sixty minute tapes, [speaker001:] Oh. [speaker002:] so they had to try to find an appropriate place to break this large file that we gave them. So one of the things that we did is, Adam made a modification to th his script that generates this so you can tell it that you want, uh, basically, uh, chunks that are no more than either thirty or forty five minutes. [speaker001:] Mm hmm. [speaker002:] And then we can give those to IBM and it'll make it easier for them to put them on tape. Um, then the other thing is that these [disfmarker] pool of transcribers that they're using, um, sort of are not dedicated to [disfmarker] to our project, but are in use for all IBM projects, and IBM has recently giving [disfmarker] given them uh, a whole bunch of stuff. So it's gonna, sort of delay us. [speaker005:] We might wanna revisit hiring our own external transcription company. Because I think that would be cheaper than having our people do it, but do it with the same process with these beep files. [speaker002:] Hmm, that's an interesting idea. [speaker005:] I mean, cuz, originally I had disc discounted that because it was just gonna be too hard. But now we have this procedure with the beep files worked out. 
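The chunking change described above — splitting a large file into pieces of at most thirty or forty-five minutes so they fit on cassette sides, with every break falling at a segment boundary rather than mid-speech — could be sketched as a greedy grouping over the beep boundaries. This is a hypothetical sketch, not the actual script mentioned in the meeting:

```python
def chunk_segments(segment_ends, max_minutes=45.0):
    """Group a recording into tape-sized chunks.

    segment_ends: sorted end times (in minutes) of the pre-segmented
    pieces, e.g. the "beep" boundaries.  Returns a list of (start, end)
    chunk spans, each at most max_minutes long, with every break on a
    segment boundary so speech is never cut mid-utterance.
    """
    chunks = []
    start = 0.0
    prev = 0.0
    for end in segment_ends:
        if end - start > max_minutes:
            if prev == start:
                # A single segment longer than the cap: emit it alone.
                chunks.append((start, end))
                start = prev = end
                continue
            chunks.append((start, prev))
            start = prev
        prev = end
    if prev > start:
        chunks.append((start, prev))
    return chunks
```

With boundaries at 10, 20, 35, 50, and 60 minutes and a 45-minute cap, this yields two chunks, (0, 35) and (35, 60), each of which fits on one side of a tape.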
[speaker002:] Yeah. So the only difference would be, we would be putting the stuff on tape versus IBM. [speaker005:] Then maybe it would be OK. [speaker003:] Yeah. [speaker005:] Yep. [speaker002:] And then we could send [disfmarker] If [disfmarker] if we had that process down, of putting the audio onto tape, we can send IBM the tapes or we can send this transcription place the tape [speaker005:] And we might wanna do that anyway. Because it seems like that's being a bottleneck and that's silly. It shouldn't be a bottleneck. [speaker002:] Yeah. [speaker005:] I mean [disfmarker] It would [disfmarker] Once we got it set up it w it [disfmarker] [speaker002:] Well, u u So, I don't know where the bottleneck was. Was the bottleneck in breaking this large file up into the appropriate size chunks? or was it actually putting it onto the tape? [speaker005:] It may well have been partially that, you know, because they tried it, and then they would have to listen to the tape and find out where it bleeps, [speaker002:] Exactly. [speaker005:] and go forward in the file, et cetera, et cetera. [speaker002:] Yeah. [speaker005:] But, uh, [speaker002:] Yeah. [speaker005:] it just seems like [disfmarker] [vocalsound] it's not hard for us to do, and that would just make things go faster. [speaker002:] Yeah, so how do we do that? How would we put [disfmarker] [speaker005:] Get a tape deck, [speaker003:] Yeah. [speaker002:] Yeah. [speaker005:] plug it in to the computer, [speaker003:] Plug it into the sound card. [speaker005:] hit record, on the tape deck, [speaker003:] Yeah. [speaker005:] and just do it all [disfmarker] all uh, analog. [speaker002:] So plug i So, like, the headphone output in the back of the computer? [speaker005:] Well, line out. [speaker003:] Line out, yeah. [speaker002:] Line out into the line in on a tape player, [speaker005:] Mm hmm. [speaker001:] Hmm. [speaker003:] Yep. [speaker002:] and [disfmarker] Oh! Well, that sounds pretty easy. 
[speaker005:] So [disfmarker] I mean, that's the easy way. They might have a more complex set up at IBM. I've seen digital [disfmarker] digital analog tapes where it's controlled by the computer. [speaker003:] Yeah, but [disfmarker] The easy way is [disfmarker] [speaker002:] Mmm. [speaker005:] But w But we don't need to do that. [speaker003:] Yeah. I know. No. Yeah. [speaker005:] The [disfmarker] the disadvantage of doing it this way is it's real time. So you would have to sit there for an hour while it's recording. [speaker002:] Yeah. But if it was at your desk or something [disfmarker] [speaker005:] But [disfmarker] [speaker003:] Yeah, but you [disfmarker] Yeah. Pfff. [comment] Yeah. [speaker005:] Right. It's not a big deal. [speaker001:] So, what's the turn around period right now from IBM? [speaker002:] Well, [vocalsound] they [disfmarker] There was [disfmarker] [speaker001:] Should we be discussing this, [speaker002:] OK. So th [vocalsound] They, um, they gave [disfmarker] They've given us [disfmarker] Since we started doing this new, uh, beep format with "beep number beep", [speaker001:] Mm hmm. [speaker002:] they've given us, uh, I think, two full, uh, transcripts. They have a third one and Brian just sent me a note saying "oh, that third one somehow slipped through the cracks of their tr of the eh [disfmarker] transcriptionists', uh, s company". [speaker001:] Mmm. [speaker002:] And so there's gonna be delay of a week before we get that third one back. And I've already given him three more to work on, [speaker001:] OK. [speaker002:] but he said those new three won't get done until this big chunk of data has been processed by the company. [speaker001:] OK. And the transcribers here are still, uh [disfmarker] are working, though. [speaker002:] Yeah. Yeah, they're working away. [speaker001:] Th they're just doing their own thing, like, while they're waiting. [speaker002:] They're just doing [disfmarker] Right, right. [speaker001:] OK. 
[speaker002:] So what we do is, uh, when I go to um, select meetings for IBM, I go into the um [disfmarker] well, into the [disfmarker] [nonvocalsound] the, uh, master sheet here with the statuses [speaker005:] Master. [speaker001:] Mm hmm. [speaker002:] and I pick ones that Thilo has, um, pre segmented, and I change the status to "Trans IP IBM", so that, you know, when Jane or whoever goes to select the next meeting they won't choose one of those. And, um [disfmarker] So I'm not picking huge chunks, right now, of meetings for them, [speaker005:] So [disfmarker] We should at least ask, um, Brian if there are any particular requirements for putting things on tape. [speaker001:] Mm hmm. [speaker002:] but [disfmarker] [speaker005:] Is there a particular type of tape that they want to use or anything else like that? [speaker002:] Mm hmm. That's a good idea. [speaker005:] And, uh [disfmarker] And then we should see how much of a pain it is for one of us. I mean the o the other problem is you would wanna r do it on an unloaded machine. Right? Because you wouldn't want it to start stuttering. [speaker003:] Yep. Yeah. [speaker002:] Yeah. [speaker003:] Yeah. [speaker002:] Huh. [speaker005:] And that means [disfmarker] That [disfmarker] that's more of a problem [speaker003:] And all [disfmarker] [speaker005:] because y then you have to be there. [speaker003:] Yeah. And all of the P Cs are connected to the network, too, here. [speaker001:] What i [speaker003:] Or, uh, [speaker005:] Right. [speaker001:] You're just talking about playing it [disfmarker] and [disfmarker] recording it, like, real time? [speaker003:] bad. [speaker005:] Yep. [speaker001:] Oh. [speaker002:] So when that thing [disfmarker] When it stutters like that, is it because there's a delay getting the data off a disk? Or is it because there's a delay in the D to A? [speaker005:] Mm hmm. It could be either. I mean, there are lots of different places. [speaker002:] OK. 
[speaker005:] It can be delay on the disk, or it can be just load gets high on the machine. [speaker002:] Yeah. [speaker005:] Um, either IO load or processor load. [speaker002:] I see. [speaker005:] You know, it's a problem with these multi user systems is that lots of processes are running and you don't t really have control. [speaker002:] Maybe we could do it from a w a PC or something. [speaker005:] Wha [speaker003:] Yeah. But they are also on the networks, [speaker002:] Yeah. [speaker003:] so. [speaker004:] If you used a line out, too, you have to worry about if you click around and you hit a beep or something. It'll beep onto [disfmarker] onto the tape too. [speaker003:] Maybe. [speaker005:] That's true. It'll [disfmarker] That's right. So you really do wanna be using a machine that's not [disfmarker] [speaker001:] And you have to record every channel separately. [speaker005:] No, because we're doing the beep files. [speaker003:] The beep files. [speaker002:] Well, you know, I [disfmarker] I think David might have a bunch of o old P Cs, [speaker001:] Oh, OK. [speaker002:] cuz he's replacing them with these new ones. Maybe we could get one, and just dedicate it to doing this. Not even hook it u Well, [vocalsound] I guess we need to hook it to the network [speaker001:] We [disfmarker] [speaker002:] so we can get the files to it, but [speaker005:] Yep. [speaker002:] Uh, [speaker005:] Or you could just [disfmarker] You could have it [disfmarker] no active connection. Just SCP them. [speaker002:] Yeah. Right. And then just use that machine to hook up to the tape. [speaker005:] Actually, it wouldn't be bad to have that, plus put on that machine a CD burner. [speaker003:] Yep. [speaker002:] Yeah. We don't need a powerful machine to do these things, right? [speaker005:] No. [speaker002:] So, one of these old Pentiums that we have [disfmarker] Hmm. Should talk to him about that. [speaker003:] Or even a DVD burner. [speaker005:] A DVD burner, yeah. 
I guess the real question is, where would we put it and who would do it? I guess whoever records the meeting. I mean, it it's not a big deal, but it's yet another thing you have to do. [speaker002:] Yeah, yeah. That's why I wanted to avoid having to do that if we could, but [disfmarker] It's just another [disfmarker] [speaker001:] They couldn't use C Ds? [speaker005:] No, because the transcription company does what they do. [speaker002:] They have a um [disfmarker] a device that, you put the cassette in and it has a foot pedal that lets you go back and forth and stuff like that. [speaker001:] Oh, I see. I see. [speaker002:] So. I guess they haven't got a p [speaker005:] Actually there are a few [disfmarker] I saw a web based one at one point. [speaker003:] Web based? [speaker005:] Where you could send them audio files over the web and they send you back text files over the web. [speaker002:] Really! [speaker005:] I should look that up again. [speaker002:] Y can you send them huge audio files? [speaker005:] Mmm. Mmm. [speaker003:] You can, but [comment] [vocalsound] they never will be able to read them. [speaker002:] Yeah. [speaker005:] I assume that you can send them anything that they can access with HTTP. [speaker002:] Huh! [speaker001:] Well, they must be dealing with large amounts of data if they're transcribing, anyway. [speaker002:] That's interesting. [speaker003:] Yeah. [speaker001:] I mean [disfmarker] People probably aren't sending them, you know [disfmarker] an utterance or two. [speaker002:] Well, that's true. Are [disfmarker] How big are these files that we send to Brian? [speaker003:] Uh, pfff! [speaker005:] Half a gig. [speaker002:] Really? That big? [speaker005:] Oh, no, wait. [speaker002:] I thought it was, like, a hundred megs. [speaker005:] They're the beep files. Yeah, it's a p hundred meg. They're the beep files. That's right. So they're smaller. [speaker002:] Yeah. Hmm. That's still pretty dang big. [speaker005:] Yes. 
[speaker003:] Send it via email. [speaker002:] Yeah. [speaker005:] And we are sending them the compressed ones, right? [speaker002:] I don't know. Wha whatever you generate. [speaker005:] Yeah. [speaker002:] Yeah, the last ones you did were compressed. [speaker005:] Were "shortened". Yeah. [speaker002:] Yeah. Wonder if the ones before that were shortened. [speaker005:] Yeah. The beep files, actually [disfmarker] the "shorten" doesn't save a whole lot. Right? [speaker002:] Cuz it's all [disfmarker] [speaker005:] Because the whole point is that it's extracting pie pars parts that are loud anyway. [speaker003:] It's all speech. Yeah. Yeah. [speaker002:] Mmm. [speaker005:] So. [speaker002:] Hmm! [speaker005:] I think it only saves, like, half. [speaker003:] Only. [speaker005:] OK. So I think we just gotta let things slide for now, and see [disfmarker] see what's gonna happen. [speaker002:] Yeah. [speaker005:] Nothing else really to do. Um, We have new equipment that I'm gonna try to set up tomorrow. New wireless. [speaker002:] What will that do? [speaker005:] And so, uh [disfmarker] [speaker002:] Is that going to [disfmarker]? [speaker005:] Give us a couple more wireless channels. [speaker002:] Oh, so we can get rid of the uh [disfmarker] [speaker005:] Yep. [speaker002:] Oh! That'll be nice. [speaker005:] Yep. But it involves rewiring. And so, if I could borrow someone tomorrow afternoon to help my brain. [speaker002:] I should be around. [speaker005:] You know, just have someone to bounce the instructions off of and make sure I don't do anything too stupid. [speaker003:] Yeah, sure. [speaker005:] Did I have another topic on the [disfmarker] on the list? [speaker002:] Experiments? [speaker001:] Isn't that next week? [speaker003:] Experiments? [speaker005:] That's [disfmarker] [speaker002:] Yeah, I guess that's probably next. [speaker003:] Yeah. These are next week. [speaker005:] Wha what did I have on the agenda I mailed out? [speaker002:] Yeah. 
[speaker001:] I think it was [disfmarker] Oh yeah, I don't know. [speaker003:] The demo? [speaker002:] Something about [disfmarker] [speaker001:] The demo was one of them. [speaker003:] Demo and IBM, I think. [speaker002:] Demo. [speaker005:] Yeah, that's it. [speaker003:] Yeah. [speaker001:] Yeah. I think that's it. [speaker003:] Yeah. [speaker005:] That was it. [speaker002:] Yeah. [speaker003:] Short meeting? [speaker002:] Yeah. [speaker005:] Short meeting? Does anyone have anything else? [speaker002:] Oh, I made a [disfmarker] Oh, but that's not specific to Meeting Recorder. I was gonna say I took some [disfmarker] a suggestion of Adam's, and I, um, created a m CGI script that lets you see the current status of the speech disks. [speaker001:] Mm hmm. [speaker002:] So if you go to the speech local web page, and then there's a disk status page. You can click on that and it'll query Abbott, and give you a [disfmarker] a list of all the disks that we have and then, uh, what's on each one, and then it shows how big the disk is and what percent full it is. [speaker001:] Cool. [speaker002:] So. And that's, you know, each time you go there, it's updated. So. [speaker001:] So, the new disks are installed now? [speaker003:] So what [disfmarker] what [disfmarker] [speaker001:] Or [disfmarker]? [speaker002:] The new disks are installed. If you need space just let me know. I have to create the p appropriate sub directory. So we're s kinda trying to keep [disfmarker] even the scratch ones we're keeping a little organized, where we put a y a U doctor speech data, [speaker001:] Mm hmm. [speaker002:] and then I'll either create a sub directory for, like if it's Meeting Recorder project, or Hub five or whatever. So we c I just have to create those sub directories, [speaker001:] OK. [speaker005:] Do we have new DD disks also? [speaker002:] and No. [speaker005:] Darn. [speaker002:] No. Nnn. [speaker005:] So, it's all just scratch. [speaker002:] It's all scratch. [speaker005:] OK. 
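The disk-status page described in this turn could be sketched roughly as follows. This is a minimal sketch only: the actual CGI script, the fileserver query against Abbott, and the real mount points are not shown in the transcript, so the mount list and output format here are assumptions for illustration.

```python
#!/usr/bin/env python
# Minimal sketch of a disk-status report like the CGI page described
# above. The mount points are hypothetical; the real script queried
# the fileserver (Abbott) for its exported speech disks.
import shutil

def percent_full(total_bytes, used_bytes):
    """Return how full a disk is, as an integer percentage."""
    if total_bytes == 0:
        return 0
    return round(100 * used_bytes / total_bytes)

def disk_report(mount_points):
    """Yield (mount, total in GB, percent full) for each mount point."""
    for mp in mount_points:
        usage = shutil.disk_usage(mp)
        yield mp, usage.total / 1e9, percent_full(usage.total, usage.used)

if __name__ == "__main__":
    # "/" stands in for the real speech-disk mounts.
    for mount, total_gb, pct in disk_report(["/"]):
        print(f"{mount}: {total_gb:.1f} GB, {pct}% full")
```

Regenerating the listing on every page load, as described, means the figures are always current without any cron job or cached state.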
[speaker002:] Yeah. [speaker005:] How are we doing on [disfmarker] [speaker002:] Uh, DE, which is where we're putting all the meeting stuff [disfmarker] [speaker005:] No, DE is full. We must be doing it on D F. [speaker002:] No, it's D E. [speaker003:] D E, I think, [speaker002:] D E's not full. [speaker003:] yeah? D D is full. [speaker002:] DD's full. That was the first one. Or, that was the last one that w filled up, and [disfmarker] DE still had like, [speaker005:] OK, I guess I'm just one off. [speaker002:] last time I looked it was like seven gigs or something. [speaker005:] OK. [speaker002:] So. And each meeting is roughly half [pause] a gig, [speaker005:] Half. [speaker002:] and so. [speaker005:] So that means we're p getting pretty close. [speaker002:] We're getting close. Yeah. So. [speaker005:] Shoot! [speaker001:] Are we g just gonna keep recording? [speaker002:] Yeah. [speaker001:] Like, at some point [disfmarker] [speaker002:] Morgan said something about stopping at the end of the year. [speaker001:] OK. [speaker002:] Like when we get around a hundred meetings. [speaker001:] OK. [speaker002:] Um, because there's gonna be data coming from UW, hopefully by then. [speaker001:] Cuz u [speaker005:] I just hate not to get data. [speaker001:] How m [speaker002:] Yeah, it feels funny, huh? [speaker001:] You're a data fiend. [speaker002:] If we have the capability. [speaker005:] Yeah. [speaker001:] Yeah, it's like you build this room up for so long, and then it's just like, you stop. [speaker002:] Now that we're used to recording, it's like, eh, well, why not keep going? [speaker005:] I [disfmarker] The other thing we could do is stop doing the regular ones. And try to convince other topics to come in and do some. Cuz I think that would be nice. [speaker002:] Mm hmm. We have right now, like, seventy five or seventy six hours of meetings. 
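The back-of-the-envelope capacity estimate in this exchange (roughly seven gigs free on DE, about half a gig per recorded meeting) works out as below; the figures are taken from the discussion, not from any actual disk query.

```python
# Capacity estimate from the numbers mentioned in the discussion:
# roughly 7 GB free on the DE disk, about 0.5 GB per recorded meeting.
free_gb = 7.0
gb_per_meeting = 0.5

meetings_left = int(free_gb / gb_per_meeting)
print(meetings_left)  # about 14 more meetings before DE fills up
```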
[speaker005:] Yeah, the other thing we were talking about also is once we exp once the disk has been backed up, we don't actually need it on line. So another option is to copy it to tape manually so we have the back up copy and the tape copy, the archive copy and the back up copy. And then just take them off line. [speaker002:] But aren't we still using the, uh, compressed [disfmarker] [speaker005:] Once we have the expanded versions. So if we have a ton of scratch disk maybe the thing to do is leave those up, [speaker002:] Oh. Yeah. [speaker005:] and then use [disfmarker] And then not necessarily have the original data on line. [speaker002:] Yeah, we need to figure something out, cuz the [disfmarker] Abbott is basically [disfmarker] [speaker005:] Full. [speaker002:] e e We can't add anymore disks to it. We can increase the size of the disks that are on it, but that will only take us so far. So, um, David had the suggestion about, you know, new servers and things like that, and so I'm gonna see if we can talk to Morgan about getting some money for a new disk server. And, uh [disfmarker] [speaker005:] Well, that would be pretty tight i in the machine room. [speaker002:] Yeah, it would have to replace Abbott, basically. [speaker005:] Eesh! [speaker002:] Yeah. Well, he [disfmarker] David's planning to get new servers anyways for all of the main servers. [speaker005:] Mm hmm. [speaker002:] I think he wants to sorta go with the same type of machine for all of these. But, um [disfmarker] Yeah, [vocalsound] it's full in there. Have you seen it recently with all of the new machines that we got [disfmarker] [speaker001:] Mm mmm. [speaker002:] uh, the SUN Blade one hundreds? They're like stacked up on benches and things all over. It's really crowded. So. [speaker003:] But they are not yet accessible? The new machines? [speaker002:] Oh, yeah. Yeah, yeah. [speaker003:] Yeah? They are? 
[speaker002:] They're in the [disfmarker] Yeah, if you do a P make [disfmarker] [speaker003:] Oop. [speaker005:] What are they called? What are they named? [speaker003:] Yeah. [speaker002:] Oh! I've forgotten now. Um. [speaker005:] I assume they're in the P make pool. [speaker002:] Yeah. They're in the P make pool. So if you do a "reginfo minus show attribute", and uh [disfmarker] [speaker003:] A what? [speaker002:] A r [speaker005:] P make stuff. P make magic. [speaker002:] Yeah. Y [speaker005:] Or customs magic. [speaker003:] Oh god. [speaker002:] You can get a list of all the machines that are available to P make, by using the command reginfo [speaker003:] OK. "Reginfo", OK. [speaker002:] minus show ATTR ". [speaker003:] Maybe you can send me that. [speaker002:] Yeah. Yeah. And then, um [disfmarker] [speaker005:] If you do "man customs", it has almost all of that. [speaker003:] OK. [speaker002:] Yeah. And [disfmarker] I for I'm not sure what attributes we attach to the SUN Blade one hundreds, but it could be SB one hundred. And so if you query for f all machines that have that attribute, then you should see the names of the new machines. [speaker003:] OK. OK. [speaker002:] But. [speaker005:] Do we have an attribute for non interactive machines? Cuz I was thinking that would be a good one to have. [speaker002:] Non interactive [disfmarker] [speaker005:] Ones that aren't on people's desks. [speaker002:] Well, there's sort of. There's this one called [disfmarker] There's the attribute called "no" [vocalsound] What's that called, uh, "n" "no" uh, "no evict". Which is roughly that. [speaker005:] Y right. [speaker002:] It's [disfmarker] Although it's like it h My machine has it on my desk. So, I do get jobs running on my machine all the time. Um, But, uh, So [disfmarker] so that's [disfmarker] that's sort of what it is. [speaker005:] Yep. [speaker002:] Uh huh. [speaker005:] Anyway. Shall we do digits? [speaker002:] Sure. 
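The attribute query discussed in this turn ("reginfo -show ATTR" over the P-make/customs pool) amounts to filtering machines by the attributes registered for them. The sketch below illustrates that filtering step only; the machine names and attribute strings (SB100, NO_EVICT) are stand-ins taken from the conversation, and the real query goes through the customs registry, not a dictionary.

```python
# Sketch of filtering a compute pool by attribute, as with
# "reginfo -show <ATTR>" in the P-make/customs setup discussed above.
# Machine names and attributes are made up for illustration.
MACHINES = {
    "blade1": {"SB100", "NO_EVICT"},
    "blade2": {"SB100"},
    "desk7":  {"NO_EVICT"},  # on someone's desk, yet still marked no-evict
}

def with_attribute(machines, attr):
    """Return the sorted names of machines carrying the given attribute."""
    return sorted(name for name, attrs in machines.items() if attr in attrs)

print(with_attribute(MACHINES, "SB100"))  # the new SUN Blade 100s
```

As the speakers note, NO_EVICT is only a rough proxy for "not interactive": a desktop machine can carry it and still receive batch jobs.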
[speaker004:] one three, one three O, eight eight six seven. [comment] One three, five four, one two, two three, seven zero. [speaker005:] And off. [speaker006:] That's looks strange. [speaker002:] OK, [speaker005:] Oh there we go. [speaker002:] now we're on and it seems to be working. [speaker003:] One two three four five six [speaker005:] This looks good. [speaker001:] That is weird. It's like when it's been sitting for a long time or something. [speaker002:] So, I mean [disfmarker] I don't know what it is. But all [disfmarker] all I know is that it seems like every time I am up here after a meeting, and I start it, it works fine. And if I'm up here and I start it and we're all sitting here waiting to have a meeting, it gives me that error message and I have not yet sat down with [disfmarker] been able to get that error message in a point where I can sit down and find out where it's occurring in the code. [speaker001:] Next time you get it maybe we should write it down. [speaker002:] Yep, we will. [speaker004:] Yeah. [speaker005:] Was it a pause, or [disfmarker]? [speaker002:] One of these days. [speaker005:] OK. Was it on "pause" or something? [speaker002:] No. [speaker005:] OK. Don't know. [speaker004:] So uh [disfmarker] so the uh, the new procedural change that just got suggested, which I think is a good idea is that um, we do the digit recordings at the end. And that way, if we're recording somebody else's uh meeting, and a number of the participants have to run off to some other meeting and don't have the time, uh, then they can run off. It'll mean we'll get somewhat fewer uh, sets of digits, but um, I think that way we'll cut into people's time, um, if someone's on strict time uh, less. So, I th I think [disfmarker] I think we should start doing that. Um, so, uh, let's see, we were having a discussion the other day, maybe we should bring that up, about uh, the nature of the data that we are collecting. 
uh [@ @] that uh, we should have a fair amount of data that is um, collected for the same meeting, so that we can, uh [disfmarker] I don't know. Wh what [disfmarker] what were some of the points again about that? Is it [disfmarker] [speaker006:] Uh, well, OK, I'll back up. Um, at the previous [disfmarker] at last week's meeting, this meeting I was griping [vocalsound] about wanting to get more data [speaker004:] Yeah. [speaker006:] and I [disfmarker] I talked about this with Jane and Adam, um, and was thinking of this mostly just so that we could do research on this data um, since we'll have a new [disfmarker] this new student di does wanna work with us, th the guy that was at the last meeting. [speaker001:] Well, great. Great. [speaker006:] And he's already funded part time, so we'll only be paying him for sort of for half of the normal part time, [speaker001:] What a deal. [speaker006:] uh [disfmarker] Yeah. [speaker002:] And what's he interested in, specifically? [speaker006:] So he's [disfmarker] comes from a signal processing background, but I liked him a lot cuz he's very interested in higher level things, like language, and disfluencies and all kinds of eb maybe prosody, [speaker002:] Mm hmm. [speaker006:] so he's just getting his feet wet in that. [speaker002:] Great. [speaker006:] Anyway, I thought OK, maybe we should have enough data so that if he starts [disfmarker] he'd be starting in January, next semester that we'd have, you know, enough data to work with. [speaker002:] Right. [speaker006:] But, um, Jane and Adam brought up a lot of good points that just posting a note to Berkeley people to have them come down here has some problems in that you m you need to make sure that the speakers are who you want and that the meeting type is what you want, and so forth. 
So, I thought about that and I think it's still possible, um, but I'd rather try to get more regular meetings of types that we know about, and hear, then sort of a mish mosh of a bunch of one [disfmarker] one time [disfmarker] [speaker002:] One offs? [speaker006:] Yeah, just because it would be very hard to process the data in all senses, both to get the, um [disfmarker] to figure out what type of meeting it is and to do any kind of higher level work on it, like well, I was talking to Morgan about things like summarization, or what's this meeting about. I mean it's very different if you have a group that's just giving a report on what they did that week, versus coming to a decision and so forth. So. Then I was um, talking to Morgan about some [pause] new proposed work in this area, sort of a separate issue from what the student would be working on where I was thinking of doing some kind of summarization of meetings or trying to find cues in both the utterances and in the utterance patterns, like in numbers of overlaps and amount of speech, sort of raw cues from the interaction that can be measured from the signals and from the diff different microphones that point to sort of hot spots in the meeting, or things where stuff is going on that might be important for someone who didn't attend to [pause] listen to. And in that uh, regard, I thought we definitely w will need [disfmarker] it'd b it'd be nice for us to have a bunch of data from a few different domains, or a few different kinds of meetings. So this [disfmarker] this meeting is one of them, although I'm not sure I can participate if I [disfmarker] You know, I would feel very strange being part of a meeting that you were then analysing later for things like summarization. [speaker002:] Mm hmm. [speaker006:] Um, and then there are some others that menti that Morgan mentioned, like the front end meeting [pause] and maybe a networking [pause] group meeting. [speaker002:] Right. Yep. 
Yeah, we're [disfmarker] we're hoping that they'll let us start recording regularly. [speaker006:] So [disfmarker] So if that were the case then I think we'd have enough. [speaker002:] So. Mm hmm. [speaker006:] But basically, for anything where you're trying to get a summarization of some kind of meeting [disfmarker] [comment] [pause] meaning out of the meeting, um, it would be too hard to have fifty different kinds of meetings where we didn't really have a good grasp on what does it mean to summarize, [speaker002:] Yeah. [speaker006:] but [disfmarker] rather we should have different meetings by the same group but hopefully that have different summaries. And then we need a couple that [disfmarker] of [disfmarker] [pause] We don't wanna just have one group because that might be specific to that particular group, but [@ @] three or four different kinds. [speaker004:] S So [disfmarker] [speaker002:] Yeah, we have a lot of overlap between this meeting and the morning meeting. [speaker006:] See, I've never listened to the data for the front end [pause] meeting. [speaker003:] Yeah. [speaker004:] Yeah. [speaker002:] Yeah, we [disfmarker] we've only had three. So. [speaker006:] OK. But maybe that's enough. So, in general, I was thinking more data but also data where we hold some parameters constant or fairly similar, like a meeting about of people doing a certain kind of work where at least half the participants each time are the same. [speaker002:] Mm hmm. Um [disfmarker] [speaker004:] Now, let [disfmarker] l l let me just give you the other side to that cuz I ca because I [disfmarker] I don't disagree with that, but I think there is a complementary piece to it too. Uh, for other kinds of research, particularly the acoustic oriented research, I actually feel the opposite need. I'd like to have lots of different people. [speaker006:] Right. Right.
[speaker004:] As many people here a a and talking about the kind of thing that you were just talking about it would have uh too few people from my point of view. I'd like to have many different speakers. So, um I think I would also very much like us to have a fair amount of really random scattered meetings, of somebody coming down from campus, and [disfmarker] and uh, [speaker003:] Mm hmm. [speaker004:] I mean, sure, if we can get more from them, fine, [speaker005:] Mm hmm. [speaker006:] Right. [speaker004:] but if we only get one or two from each group, that still could be useful acoustically just because we'd have close and distant microphones with different people. [speaker006:] Yeah, I definitely agree with that. [speaker003:] Yeah. [speaker005:] Mm hmm. [speaker006:] Definitely. [speaker003:] Yeah. [speaker005:] Can I [disfmarker] can I say about that [disfmarker] that the [disfmarker] the issues that I think Adam and I raised were more a matter of advertising so that you get more native speakers. Because I think if you just say [disfmarker] an And in particular, my suggestion was to advertise to linguistics grad students because there you'd have so people who'd have proficiency enough in English that [disfmarker] that uh, it would be useful for [disfmarker] for purposes [disfmarker] You know. [speaker004:] Mm hmm. [speaker005:] But you know, I think I've been [disfmarker] I've I [disfmarker] I've gathered data from undergrads at [disfmarker] on campus and if you just post randomly to undergrads I think you'd get such a mixed bag that it would be hard to know how much conversation you'd have at all. 
And [disfmarker] and the English you'd have [disfmarker] The language models would be really hard to build [speaker004:] Well, you want to i [speaker005:] because it would not really be [disfmarker] it would be an interlanguage rather than [pause] than a [disfmarker] [speaker004:] Well, OK, uh, first place, I [disfmarker] I [disfmarker] I don't think we'd just want to have random people come down and talk to one another, [speaker005:] OK. [speaker004:] I think there should be a meeting that has some goal and point cuz I [disfmarker] I think that's what we're investigating, [speaker006:] It has to be a [disfmarker] a pre existing meeting, [pause] like a meeting that would otherwise happen anyway. [speaker004:] so Yeah, yeah. [speaker002:] Right. [speaker005:] OK. [speaker004:] So I was [disfmarker] I was thinking more in terms of talking to professors uh, and [disfmarker] and [disfmarker] and uh, senior uh, uh, d and uh, doctoral students who are leading projects and offering to them that they have their [disfmarker] hold their meeting down here. [speaker002:] Yep. [speaker006:] That's I think what we [disfmarker] and I agree with. [speaker005:] Oh, interesting! [speaker003:] Yeah. [speaker005:] Oh, I see. Oh, interesting! [speaker004:] Uh, that's the first point. The second point is um I think that for some time now, going back through BeRP I think that we have had speakers that we've worked with who had non native accents and I th I think that [disfmarker] [speaker005:] Oh, oh. I'm not saying accents. u The accent's not the problem. [speaker004:] Oh, OK. [speaker005:] No, it's more a matter of uh, proficiency, e e just simply fluency. I mean, I deal with people on [disfmarker] on campus who [disfmarker] I think sometimes people, undergraduates um in computer science uh, have language skills that make, you know [disfmarker] that their [disfmarker] their fluency and writing skills are not so strong. [speaker004:] Yeah. Oh! 
You're not talking about foreign language at all. [speaker002:] Yeah. Yeah, just talking about. [speaker004:] You're just talking about [disfmarker] [speaker005:] Well, e I just think, [speaker002:] We all had the same thought. [speaker005:] but you know, it's like when you get into the graduate level, uh, no problem. I mean, I'm not saying accents. [speaker003:] Uh huh. [speaker004:] Yeah, then we're completely gone. [speaker005:] I'm say I'm saying fluency. [speaker004:] It's [disfmarker] [vocalsound] The [disfmarker] the habits are already burnt in. [speaker002:] Mm hmm. [speaker005:] Well, yeah. I'm just saying fluency. [speaker004:] But [disfmarker] [speaker002:] Well, I think that, um [disfmarker] I think that the only thing we should say in the advertisement is that the meeting should be held in English. [speaker004:] Yeah. [speaker002:] And [disfmarker] and I think if it's a pre existing meeting and it's held in English, [comment] I [disfmarker] I think it's probably OK if a few of the people don't have uh, g particularly good English skills. [speaker005:] OK, now can I [disfmarker] can I say the other aspect of this from my perspective which is that um, there's [disfmarker] there's this [disfmarker] this issue, you have a corpus out there, it should be used for [disfmarker] for multiple things cuz it's so expensive to put together. [speaker002:] Right. [speaker004:] Right. [speaker005:] And if people want to approach [disfmarker] Um, i so I know [pause] e e [pause] You know this [disfmarker] The idea of computational linguistics and probabilistic grammars and all may not be the focus of this group, [speaker004:] Uh huh. [speaker005:] but the idea of language models, which are fund you know generally speaking uh, you know, t t terms of like the amount of benefit per dollar spent or an hour invested in preparing the data, [speaker004:] Mm hmm. Mm hmm. 
[speaker005:] if you have a choice between people who are pr more proficient in [disfmarker] [nonvocalsound] um, i more fluent, more [disfmarker] more close to being academic English, then it would seem to me to be a good thing. [speaker004:] I guess [disfmarker] I maybe [disfmarker] Hmm. [speaker005:] Because otherwise y you don't have the ability to have [disfmarker] [speaker004:] I [speaker005:] Uh, so if [disfmarker] if you have a bunch of idiolects that's the worst possible case. If you have people who are using English as a [disfmarker] as an interlanguage because they [disfmarker] they don't [disfmarker] uh, they can't speak in their native languages and [disfmarker] but their interlanguage isn't really a match to any existing, uh, language model, [speaker004:] Uh huh. [speaker005:] this is the worst case scenario. [speaker003:] Yeah. Yeah. [speaker004:] Well, that's pretty much what you're going to have in the networking group. [speaker005:] And [disfmarker] [speaker002:] Right. [speaker004:] because [disfmarker] because they [disfmarker] most [disfmarker] the network group is almost entirely Germans and Spaniards. [speaker005:] Well Oh. But the thing is, I think that these people are of high enough level in their [disfmarker] in their language proficiency that [disfmarker] [speaker004:] I see. [speaker005:] And I'm not objecting to accents. I [disfmarker] I'm [disfmarker] I'm just thinking that we have to think at a [disfmarker] at a higher level view, could we have a language model, a [disfmarker] a grammar [disfmarker] a grammar, basically, that um, wo would be a [disfmarker] a possibility. [speaker004:] OK. Uh huh. [speaker005:] So y so if you wanted to bring in a model like Dan Jurafsky's model, an and do some top down stuff, it [disfmarker] to help th the bottom up and merge the things or whatever, uh, it seems like um, I don't see that there's an argument [disfmarker] [speaker004:] Mm hmm. 
[speaker005:] I'm [disfmarker] I [disfmarker] what I think is that why not have the corpus, since it's so expensive to put together, uh, useful for the widest range of [disfmarker] of central corp things that people generally use corpora for and which are, you know, used in computational linguistics. [speaker004:] Mm hmm. [speaker005:] That's [disfmarker] that's my point. Which [disfmarker] which includes both top down and bottom up. [speaker004:] OK. [speaker003:] It's difficult. Yeah. [speaker004:] OK, well, i i let's [disfmarker] let's see what we can get. I mean, it [disfmarker] it [disfmarker] I think that if we're aiming at [disfmarker] at uh, groups of graduate students and professors and so forth who are talking about things together, and it's from the Berkeley campus, probably most of it will be OK, [speaker005:] Yes, that's fine. That's fine. Exactly. And my point in m in my note to Liz was I think that undergrads are an iff iffy population. [speaker004:] but [disfmarker] OK. [speaker006:] I definitely agree with that, I mean, for this purpose. [speaker004:] OK. OK. [speaker005:] Grads and professors, fine. [speaker003:] Yeah. [speaker002:] Well, not to mention the fact that I would be hesitant certainly to take anyone under eighteen, probably even an anyone under twenty one. [speaker003:] Yeah. [speaker002:] So. [speaker004:] Oh, you age ist! [speaker002:] What's that? [speaker005:] Age ist. [speaker002:] Well, age ist. [comment] The "eighteen" is because of the consent form. [speaker006:] Right, Yeah. [speaker003:] Yeah. [speaker002:] We'd hafta get [disfmarker] find their parent to sign for them. [speaker003:] "Age ist". Yeah. Yeah. [speaker004:] Yes. [speaker005:] Yeah, that's true. [speaker002:] So. [speaker006:] I have a [disfmarker] uh, um, question. 
Well, Morgan, you were mentioning that Mari may not use the k equipment from IBM if they found something else, cuz there's a [disfmarker] [speaker004:] They're [disfmarker] they're [disfmarker] yeah, they're d they're uh [disfmarker] assessing whether they should do that or y do something else, hopefully over the next few weeks. [speaker006:] Cuz I mean, one remote possibility is that if we st if we inherited that equipment, if she weren't using it, could we set up a room in the linguistics department? And [disfmarker] and I mean, there [disfmarker] there may be a lot more [disfmarker] or [disfmarker] or in psych, or in comp wherever, in another building where we could um, record people there. I think we'd have a better chance [speaker002:] I think we'd need a real motivated partner to do that. [speaker006:] Right, [speaker002:] We'd need to find someone on campus who was interested in this. [speaker006:] but [disfmarker] Right. But if there were such a [disfmarker] I mean it's a remote possibility, then um, you know, one of us could you know, go up there and record the meeting or something rather than bring all of them down here. [speaker002:] Yep. [speaker006:] So it's just a just a thought if they end up not using the [disfmarker] the hardware. [speaker004:] Well, the other thing [disfmarker] Yeah, I mean the other thing that I was hoping to do in the first place was to turn it into some kind of portable thing so you could wheel it around. [speaker002:] Right. [speaker004:] Uh. But. Um, and [disfmarker] [speaker002:] Well, I know that space is really scarce on [disfmarker] at least in CS. [speaker004:] Uh [disfmarker] [speaker002:] You know, to [disfmarker] to actually find a room that we could use regularly might actually be very difficult. [speaker004:] Yeah. [speaker006:] But you may not need a separate room, you know, [speaker002:] That's true. 
[speaker006:] the idea is, if they have a meeting room and they can guarantee that the equipment will be safe and so forth, and if one of us is up there once a week to record the meeting or something [disfmarker] [speaker004:] Yeah. [speaker002:] True. Mm hmm. Yep. [speaker004:] Well, maybe John would let us put it into the phonology lab or something. [speaker006:] Huh. [speaker002:] Yep. [speaker004:] You know. [speaker006:] I [disfmarker] I think it's not out of the question. [speaker004:] Yeah. [speaker002:] Yeah, I think it would be interesting because then we could regularly get another meeting. [speaker006:] Um. So. [speaker003:] Yeah. [speaker002:] another type of meeting. [speaker006:] Right. Right. [speaker003:] But I [disfmarker] I [disfmarker] I think you need, uh, another portable thing a another portable equipment to [disfmarker] to do, eh, more e easier the recording process, eh, out from ICSI. [speaker002:] Hmm. [speaker004:] Yeah. [speaker003:] Eh and probably. I don't know. [speaker004:] Yeah. [speaker002:] Right. [speaker003:] Eh, if you [disfmarker] you want to [disfmarker] to record, eh, a seminar or a class, eh, in the university, you [disfmarker] you need [disfmarker] [vocalsound] It it would be eh eh very difficult to [disfmarker] to put, [vocalsound] eh, a lot of, eh, head phones eh in different people when you have to [disfmarker] to record only with, eh, this kind of, eh, d device. [speaker004:] Yeah. [speaker002:] Yeah, but [disfmarker] I think if we [disfmarker] if we wanna just record with the tabletop microphones, that's easy. [speaker003:] Oh yeah. Ye Yeah, yeah. [speaker002:] Right? That's very easy, but that's not the corpus that we're collecting. [speaker003:] Yeah. [speaker004:] Actually, that's a int that raises an interesting point that came up in our discussion that's maybe worth repeating. We realized that, um, when we were talking about this that, OK, there's these different things that we want to do with it. 
So, um, it's true that we wanna be selective in some ways, uh, the way that you were speaking about with, uh, not having an interlingua and uh, these other issues. But on the other hand, it's not necessarily true that we need all of the corpus to satisfy all of it. So, a a as per the example that we wanna have a fair amount that's done with a small n recorded with a small, uh, typ number of types of meetings But we can also have another part that's, uh, just one or two meetings of each of a [disfmarker] of a range of them and that's OK too. Uh, i We realized in discussion that the other thing is, what about this business of distant and close microphones? I mean, we really wanna have a substantial amount recorded this way, that's why we did it. But [pause] what about [disfmarker] For th for these issues of summarization, a lot of these higher level things you don't really need the distant microphone. [speaker006:] Right, I mean, I c I think there's [disfmarker] [speaker004:] You actually don't. [speaker002:] And you don't really need the close microphone, you mean. [speaker003:] Yeah. [speaker006:] Yea yeah yeah, you actually don't really even need any fancy microphone. [speaker005:] Which one did you mean? [speaker004:] You d You don't ne it doesn't [disfmarker] you just need some microphone, somewhere. [speaker006:] You can use found data. [speaker002:] Ye Yeah. Yep. [speaker003:] Yeah. [speaker002:] Tape recorder. [speaker005:] Oh. [speaker004:] Yeah. [speaker003:] Yeah. [speaker006:] You [disfmarker] you can. [speaker004:] You need some microphone, [speaker006:] You can use [disfmarker] Um, but I think that any [pause] data that we spend a lot of effort [nonvocalsound] to collect, [speaker004:] but I mean [disfmarker] [speaker002:] Mm hmm. [speaker004:] Yeah. 
[speaker006:] you know, each person who's interested in [disfmarker] I mean, we have a cou we have a bunch of different, um, slants and perspectives on what it's useful for, um, they need to be taking charge of making sure they're getting enough of the kind of data that they want. [speaker004:] Right. [speaker006:] And [disfmarker] So in my case, um, I think there w there is enough data for some kinds of projects and not enough for others. [speaker002:] Not enough for others, right. [speaker006:] And so [nonvocalsound] I'm looking and thinking, "Well I'd be glad to walk over and record people and so [nonvocalsound] forth if it's [disfmarker] to help th in my interest." [speaker002:] Mm hmm. [speaker006:] And other people need to do that for themselves, uh, h or at least discuss it so that we can find some optimal [disfmarker] [speaker004:] Right. So that [disfmarker] [speaker003:] Yeah. [speaker004:] But I think that [disfmarker] I'm raising that cuz I think it's relevant exactly for this idea up there that if you think about, "Well, gee, we have this really complicated setup to do," well maybe you don't. Maybe if [disfmarker] if [disfmarker] If really all you want is to have a [disfmarker] a [disfmarker] a recording that's good enough to get a [disfmarker] [vocalsound] uh, a transcription from later, you just need to grab a tape recorder and go up and make a recording. [speaker002:] Yeah. For some of it. [speaker006:] Right. [speaker002:] Yep. [speaker004:] I mean, we [disfmarker] we could have a fairly [disfmarker] We could just get a DAT machine and [disfmarker] [speaker006:] Well, I agree with [nonvocalsound] Jane, though, on the other hand that [disfmarker] [speaker003:] Yeah. [speaker006:] So that might be true, you may say for instance, summarization, or something that sounds very language oriented. You may say well, "Oh yeah, you just do that from transcripts of a radio show." I mean, you don't even need the speech signal. [speaker004:] Right. 
[speaker006:] But what you [disfmarker] what I was thinking is long term what would be neat is to be able to pick up on um [disfmarker] Suppose you just had a distant microphone there and you really wanted to be able to determine this. There's lots of cues you're not gonna have. So I [pause] do think that long term you should always try to satisfy the greatest number of [disfmarker] of interests and have this parallel information, which is really what makes this corpus powerful. [speaker002:] Right. [speaker004:] Yeah. [speaker003:] Yeah. [speaker002:] Special? Yep. [speaker004:] I [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] I agree. [speaker006:] Otherwise, you know, lots of other sites can propose [disfmarker] individual studies, so [disfmarker] [speaker004:] Uh but I [disfmarker] I think that the uh [vocalsound] i We can't really underestimate the difficulty [disfmarker] shouldn't really u underestimate the difficulty of getting a setup like this up. [speaker002:] Yep. [speaker004:] And so, [disfmarker] uh it took quite a while to get that together and to say, "Oh, we'll just do it up there," [disfmarker] [speaker006:] OK. [speaker004:] If you're talking about something simple, where you throw away a lot of these dimensions, then you can do that right away. Talking about something that has all of these different facets that we have here, it won't happen quickly, it won't be easy, and there's all sorts of issues about th you know [vocalsound] keeping the equipment safe, or else hauling it around, and all sorts of o [speaker006:] So then maybe we should [nonvocalsound] [pause] try to bring people here. [speaker004:] I think the first priority should be to pry [comment] to get [disfmarker] try to get people to come here. [speaker002:] Here. [speaker006:] I mean, that's that's [disfmarker] OK, so [speaker004:] We're set up for it. [speaker005:] Mm hmm. [speaker006:] OK. [speaker004:] The room is [disfmarker] is really, uh, underused. 
[speaker006:] Right. [speaker004:] Uh [disfmarker] [speaker005:] I thought the free lunch idea was a great idea. [speaker002:] Yeah, I thought so too. [speaker006:] Yeah, I [disfmarker] And I think we can get people to come here, that [disfmarker] [speaker004:] Free lunch is good. [speaker003:] Yeah. [speaker006:] But the issue is you definitely wanna make sure that the kind of group you're getting is the right group so that you don't waste a lot of your time [nonvocalsound] and the overhead in bringing people down. [speaker005:] Mm hmm. [speaker001:] No crunchy food. [speaker004:] Yeah. [speaker006:] So [disfmarker] [comment] Well, it would be [pause] lunch afterwards. [speaker005:] Yeah. [speaker002:] Well, I was thinking, lunch after. [speaker006:] Right. And they'd have to do their digits or they don't get dessert. [speaker004:] Yeah, they have to do their digits or they don't [comment] get [disfmarker] they don't [comment] get their food. [speaker002:] Yep. [speaker006:] Yeah. [speaker004:] Yeah [speaker002:] Um, I had a [disfmarker] I spoke with some people up at Haas Business School who volunteered. Should I pursue that? [speaker006:] Oh, definitely, yeah. [speaker002:] Yeah. [speaker004:] Yeah. [speaker002:] So. They [disfmarker] they originally [disfmarker] They've decided not to do [disfmarker] go into speech. So I'm not sure whether they'll still be so willing to volunteer, but I'll send an email and ask. [speaker004:] Tell them about the free lunch. [speaker002:] I'll tell them about the free lunch. [speaker006:] Yeah. Yeah. [speaker002:] And they'll say there's no such thing. So. [speaker006:] I'd love to get people that are not linguists or engineers, cuz these are both weird [disfmarker] [speaker002:] Right. [speaker004:] Yeah. [speaker003:] Yeah. [speaker004:] The [disfmarker] the [disfmarker] [vocalsound] The oth the other h [speaker006:] well, I know, I shouldn't say that. [speaker002:] That's alright. No, the they [disfmarker] they're very weird. 
[speaker006:] We need a wider sampling. [speaker001:] "Beep." [speaker004:] Uh, [speaker003:] Yeah. [speaker004:] "beep" Uh, the [disfmarker] the [disfmarker] [speaker002:] The problem with engineers is "beep." [speaker004:] They make funny sounds. The o the o the other [disfmarker] The other thing is, uh, that we [disfmarker] we talked about is give to them [disfmarker] uh, burn an extra CD ROM. [speaker002:] Yep. Let them have their meeting. [speaker004:] and give them [disfmarker] So if they want a [disfmarker] [nonvocalsound] basically an audio record of their [disfmarker] [speaker006:] Well, I thought that was [disfmarker] I thought he meant, "Give them a music CD," like they g [vocalsound] Then he said a CD of the [disfmarker] of their speech [speaker004:] Oh. [speaker006:] and I guess it depends of what kind of audience you're talking to, but [disfmarker] [vocalsound] You know, I personally [nonvocalsound] would not want a [nonvocalsound] CD [comment] of my meeting, [speaker002:] Mmm. Of the meeting? [speaker006:] but [vocalsound] maybe [disfmarker] yeah, [pause] maybe you're [speaker004:] If you're having some planning meeting of some sort and uh you'd like [disfmarker] [speaker006:] right. [comment] Right. Right. [speaker001:] Oh, that's a good idea. [speaker002:] It'd be fun. [speaker004:] Yeah. [speaker002:] I think it would just be fun, you know, if nothing else, you know. [speaker006:] Right. [speaker003:] Yeah. [speaker004:] But it als It [disfmarker] it [disfmarker] it also I think builds up towards the goal. [speaker002:] It's a novelty item. [speaker006:] Right. [speaker004:] We're saying, Look, you know, you're gonna get this. Is is isn't that neat. Then you're gonna go home with it. It's actually p It's probably gonna be pretty useless to you, [speaker002:] Yep. [speaker004:] but you'll ge appreciate, you know, where it's useful and where it's useless, [speaker006:] Right.
[speaker004:] and then, we're gonna move this technology, so it'll become useful. " [speaker006:] No, I think that's a great idea, actually. [speaker004:] So. [speaker003:] Yeah. [speaker006:] But we might need a little more to incentivize them, [comment] that's all. [speaker001:] What if you could tell them that you'll give them the [disfmarker] the transcripts when they come back? [speaker005:] Alth [speaker002:] Oh, yeah. I mean, anyone can have the transcripts. So. I thought we could point that out. [speaker004:] Oh yeah. [speaker005:] Yeah. [speaker006:] Well, that's interesting. [speaker005:] I hav I have to uh raise a little eensy weensy concern about doing th giving them the CD immediately, because of these issues of, you know, this kind of stuff, [comment] where maybe [disfmarker] [vocalsound] You know? [speaker004:] Good point. [speaker005:] So. [speaker004:] That's a very good point. [speaker005:] We could burn it after it's been cleared with the transcript stage. [speaker004:] So we can [disfmarker] so we can [disfmarker] r Right. [speaker005:] And then they [disfmarker] they get a CD, but just not the same day. [speaker006:] Oh, right. If [disfmarker] It should be the same CD ROM that we distribute publicly, [speaker002:] Yeah, that's right. That's a good point. [speaker006:] right? [speaker002:] Right, it can't be the internal one. [speaker006:] Otherwise they're not allowed to play it for anyone. [speaker004:] Although it's [disfmarker] [speaker005:] There we go. Oh, I like that. [speaker002:] That's right. [speaker005:] Well put. Well put. So, after the transcript screening phase. [speaker002:] Yeah, that's true. [speaker005:] Things have been weeded out. [speaker006:] Otherwise we'd need two lawyer stages. [speaker005:] Yeah, that's right, say [comment] "Yeah, well, I got this CD, and, Your Honor, I [disfmarker]" [speaker002:] Yeah. [speaker006:] That's a good point. [speaker004:] Yeah so that's [disfmarker] so let's start with Haas, and Yeah.
[speaker006:] Sorry to have to [disfmarker] [nonvocalsound] Sorry I have to [pause] leave. I will be here full time next week. [speaker004:] Oh, that's fine. [speaker002:] OK, see you. [speaker004:] OK. [speaker002:] No. Bye. [speaker004:] That's alright. [speaker001:] See you. [speaker004:] OK. [speaker003:] See you. [speaker004:] So, uh [disfmarker] Let's see. So that was that topic, and [vocalsound] then um, I guess another topic would be [vocalsound] where are we in the whole disk resources [pause] question for [disfmarker] [speaker002:] We are slowly slowly getting to the point where we have uh enough sp room to record meetings. So I uh did a bunch of archiving, and still doing a bunch of archiving, I [disfmarker] I'm in the midst of doing the P files from uh, [vocalsound] Broadcast News. and it took eleven hours [comment] [vocalsound] to do [disfmarker] to uh copy it. [speaker003:] Eleven? [speaker002:] And it'll take another eleven to do the clone. [speaker001:] Where did you copy it to? [speaker002:] Well, it's Abbott. It's Abbott, so it just [disfmarker] But it's [disfmarker] it's a lot of data. [speaker004:] Sk It's copying from one place on Abbott to another place on Abbott? [speaker002:] Tape. [speaker003:] Tape? [speaker002:] I did an archive. [speaker004:] Oh! I'm sorry. [speaker001:] Oh, on the tape. [speaker003:] Oh. [speaker001:] Ah! [speaker002:] So I'm archiving it, and then I'm gonna delete the files. So that will give us ten gigabytes of free space. [speaker003:] Eleven hours? [speaker001:] Wow! [speaker003:] Oh. [speaker005:] Yeah, the archiving m [pause] program does take a long time. [speaker003:] Yeah. [speaker002:] And [disfmarker] and [disfmarker] Yep. And so one That [disfmarker] that will be done, like, in about two hours. And so uh, [vocalsound] at that point we'll be able to record five more meetings. So. [speaker003:] Yeah. 
[speaker005:] One thing [disfmarker] The good news about that [disfmarker] that is that once [disfmarker] once it's archived, it's pretty quick to get back. [speaker003:] Yeah. [speaker005:] I mean, it [disfmarker] it [disfmarker] it [disfmarker] The other direction is fast, but this direction is really slow. [speaker004:] Is it? [speaker002:] Right. [speaker004:] Hmm. [speaker003:] Yeah. [speaker002:] Well, especially because I'm generating a clone, also. [speaker005:] Yeah, OK. [speaker002:] So. And that takes a while. [speaker003:] Yeah. [speaker001:] Generating a clone? [speaker005:] Yeah, that's a good point. [speaker002:] Two copies. [speaker005:] Yeah. [speaker002:] One offsite, one onsite. [speaker001:] Oh! Oh! Hunh! [speaker005:] Now, what will uh [disfmarker] Is the plan to g [pause] to [disfmarker] So [pause] stuff will be saved, it's just that you're relocating it? [speaker004:] S [speaker005:] I mean, so we're gonna get more disk space? Or did I [disfmarker]? [speaker002:] No, the [disfmarker] the [disfmarker] these are the P files from Broadcast News, which are regeneratable [disfmarker] regeneratable [speaker005:] OK. Oh, good. I see. [speaker002:] um, if we really need to, but we had a lot of them. And [disfmarker] for the full, uh, hundred forty hour sets. [speaker005:] OK. [speaker002:] And so they [disfmarker] they were two gigabytes per file and we had six of them or something. [speaker005:] Wow. [speaker003:] Yeah. [speaker005:] Wow. [speaker004:] W w we are getting more space. We are getting, uh, another disk rack and [disfmarker] and four thirty six gigabyte disks. Uh [pause] so [pause] uh [pause] but that's not gonna happen instantaneously. [speaker005:] Wonderful. [speaker002:] Or maybe six. [speaker004:] Or maybe six? [speaker002:] The SUN, ha uh, takes more disks than the Andatico one did. 
[speaker004:] How many [disfmarker] [speaker002:] The SUN rack takes [disfmarker] [comment] Th One took four and one took six, or maybe it was eight and twelve. Whatever it was, it was, [pause] you know, fifty percent more. [speaker004:] How much [disfmarker] [speaker001:] Is there a difference in price or something? [speaker002:] Well, what happened is that we [disfmarker] we bought all our racks and disks from Andatico for years, according to Dave, and Andatico got bought by another company and doubled their prices. [speaker001:] Oh! [speaker003:] Oh. [speaker002:] And so, uh, we're looking into other vendors. "We" [disfmarker] By "we" of course I mean Dave. [speaker005:] Wow. [speaker002:] So. [speaker001:] Mm hmm. Hmm. I've been looking at the, uh, Aurora data and, um, first [disfmarker] first look at it, there were basically three directories on there that could be moved. One was called Aurora, one was Spanish, which was Carmen's Spanish stuff, and the other one was, um, SPINE. [speaker002:] SPINE. [speaker001:] And so, um, I wrote to Dan and he was very concerned that the SPINE stuff was moving to a non backed up disk. So, um, I realized that well, probably not all of that should be moved, just [pause] the [pause] CD ROM type data, the [disfmarker] [pause] the static data. So I moved that, and then um, I asked him to check out and see if it was OK. before I actually deleted the old stuff, um, but I haven't heard back yet. I told him he could delete it if he wanted to, I haven't checked [pause] today to see if he's deleted it or not. And then Carmen's stuff, I realized that when I had copied all of her stuff to XA, I had copied stuff there that was dynamic data. And so, I had to redo that one and just copy over the static data. And so I need to get with her now and delete the old stuff off the disk. And then I lo haven't done any of the Aurora stuff. I have to meet with, uh, Stephane to do that. So. 
[speaker004:] So, but, uh y you're figuring you can record another five meetings or something with the space that you're clearing up from the Broadcast News, but, we have some other disks, some of which you're using for Aurora, but are we g do we have some other [disfmarker] other space now? [speaker002:] Yep. So, so, uh, we have space on the current disk right now, where Meeting Recorder is, [speaker004:] Yeah. [speaker002:] and that's probably enough for about four meetings. [speaker004:] Yeah. [speaker001:] Is that the one that has [disfmarker] is that DC? [speaker002:] So. Yep. No, no, well, it's wherever the Meeting Recorder currently is. I think it's DI. [speaker001:] OK, I [disfmarker] but the stuff I'm moving from Aurora is on the DC disk that we [disfmarker] [speaker002:] I don't remember. Th I think it's DC It's whatever that one is. [speaker001:] OK, DC. [speaker002:] I just don't remember, it might be DC. [speaker001:] Yeah. [speaker002:] And that has enough for about four more meetings right now. [speaker004:] Mm hmm. [speaker002:] Yeah, I mean we were at a hundred percent and then we dropped down to eighty six for reasons I don't understand. Um, someone deleted something somewhere. And so we have some room again. And then with Broadcast News, that's five or six more meetings, so, you know, we have a couple weeks. Uh, so, yeah, I think [disfmarker] I think we're OK, until we get the new disk. [speaker003:] OK. [speaker001:] So should, um [disfmarker] One question I had for you was, um, we need [disfmarker] [pause] we sh probably should move the Aurora an and all that other stuff off of the Meeting Recorder disk. Is there another backed up [pause] disk that you know of that would [disfmarker]? [speaker002:] We should put it onto the Broadcast News one. That's probably the best thing to do. And that way we consolidate Meeting Recorder onto one disk [pause] rather than spreading them out. [speaker001:] OK. Right. Right. 
Do you know what [disfmarker] happen to know what disk that is off [disfmarker]? [speaker002:] No. [speaker001:] OK. [speaker002:] I mean, I can tell you, I just don't know off the top of my head. [speaker001:] Yeah. OK. Alright, I'll find out from you. [speaker002:] But, so we could just do that at the end of today, once the archive is complete, and I've verified it. [speaker001:] OK. [speaker002:] Cuz that'll give us plenty of disk. [speaker004:] Uh, OK, [@ @] [comment] So, uh, then I guess th the last thing I'd had on my [disfmarker] my agenda was just to hear [disfmarker] hear an update on [vocalsound] what [disfmarker] what Jose has been doing, [speaker003:] Uh huh. [speaker004:] so [speaker003:] OK. I have, eh, [vocalsound] The result of my work during the last days. [speaker004:] OK. [speaker003:] Thank you for your information because I [disfmarker] I read. Eh, and the [disfmarker] the last, eh, days, eh, I work, eh, in my house, eh, in a lot of ways and thinking, reading eh, different things about the [disfmarker] the Meeting Recording project. [speaker002:] Yeah. [speaker004:] Uh huh. [speaker003:] And I have, eh, some ideas. Eh, this information is very [disfmarker] very useful. [speaker005:] I'm glad to hear it. Glad to hear it. [speaker003:] Because [vocalsound] you have the [disfmarker] the [disfmarker] the distribution, now. But for me, eh is interesting because, eh, eh, here's i is the demonstration of the overlap, eh, [pause] problem. [speaker002:] I've seen it already. [speaker003:] It's a real problem, [comment] a frequently problem [comment] uh, because you have overlapping zones eh, eh, eh, all the time. [speaker005:] Yeah. Yeah. [speaker002:] Yep. [speaker003:] Yeah. [speaker002:] Throughout the meeting. [speaker003:] Eh, by a moment I have, eh, nnn, the, eh, [pause] n I [disfmarker] I did a mark of all the overlapped zones in the meeting recording, with eh, a exact [pause] mark. [speaker002:] Mm hmm. Oh, you did that by hand?
[speaker003:] Heh? That's eh, yet b b Yeah, by [disfmarker] b b by hand [disfmarker] by hand because, eh, [vocalsound] eh [disfmarker] "Why." [speaker002:] Can I see that? Can I get a copy? [speaker004:] Oh. [speaker003:] My [disfmarker] my idea is to work [disfmarker] I [disfmarker] I [disfmarker] I do I don I don't [@ @] [disfmarker] I don't know, eh, if, eh, it will be possible because I [disfmarker] I [disfmarker] I haven't a lot [disfmarker] eh, enough time to [disfmarker] to [disfmarker] to work. uh, only just eh, six months, as you know, [speaker001:] Wow! [speaker003:] but, eh, my idea is, eh, is very interesting to [disfmarker] to work [pause] in [disfmarker] in the line of, eh, automatic segmenter. [speaker002:] Mm hmm. [speaker003:] Eh but eh, eh, in my opinion, [pause] we need eh, eh, a reference [pause] eh session to [disfmarker] t to [disfmarker] to evaluate the [disfmarker] the [disfmarker] the tool. [speaker002:] Yes, absolutely. And so are you planning to do that or have you done that already? [speaker003:] And [disfmarker] No, no, with i Sorry? [speaker002:] Have you done that or are you planning to do that? [speaker003:] No, I [disfmarker] I [disfmarker] plan to do that. [speaker002:] OK. Darn! [speaker003:] I plan [disfmarker] I plan, but eh, eh, the idea [vocalsound] is the [disfmarker] is the following. Now, [vocalsound] eh, I need ehm, [vocalsound] to detect eh all the overlapping zones exactly. I [disfmarker] I will [disfmarker] I will eh, talk about eh, [pause] in the [disfmarker] in the blackboard about the [disfmarker] my ideas. [speaker005:] Yeah. Duration. [speaker004:] Mm hmm. [speaker003:] Eh, um, [vocalsound] [vocalsound] eh [disfmarker] This information eh, with eh, exactly time marks eh, for the overlapping zones [vocalsound] eh [disfmarker] overlapping zone, and eh, a speaker [disfmarker] a [disfmarker] a pure speech eh, eh, speaker zone. 
I mean, eh zones eh of eh speech of eh, one speaker without any [disfmarker] any eh, noise eh, any [disfmarker] any acoustic event eh that eh, eh, w eh, is not eh, speech, real speech. And, I need t true eh, silence for that, because my [disfmarker] my idea is to [disfmarker] to study the nnn [disfmarker] the [disfmarker] [vocalsound] the set of parameters eh, what, eh, are more m more discriminant to eh, classify. [speaker002:] Right. [speaker003:] the overlapping zones in cooperation with the speech [pause] eh zones. The idea is [pause] to eh [disfmarker] to use [disfmarker] eh, I'm not sure to [disfmarker] eh yet, but eh my idea is to use a [disfmarker] a cluster [pause] [vocalsound] eh algorithm or, nnn, a person strong in neural net algorithm to eh [disfmarker] to eh study what is the, eh, the property of the different feat eh feature, eh, to classify eh speech and overlapping eh speech. [speaker001:] Mmm. [speaker003:] And my idea is eh, it would be interesting to [disfmarker] to have eh, [vocalsound] a control set. And my control set eh, will be the eh, silence, silence without eh, any [disfmarker] any noise. [speaker004:] Mm hmm. [speaker005:] Which means that we'd still [disfmarker] You'd hear the [disfmarker] Yeah. [comment] That's interesting. [speaker003:] Yeah, acoustic with this. [comment] With [disfmarker] with, yeah, the background. [speaker002:] Yeah, fans. [speaker005:] This is like a ground level, with [disfmarker] It's not it's not total silence. [speaker003:] Eh, I [disfmarker] I mean eh, noise eh, eh claps eh, tape clips, eh, the difference eh, [speaker004:] Mm hmm. [speaker003:] eh, eh, event eh, which, eh, eh, has, eh eh, a hard effect of distorti spectral distortion in the [disfmarker] in the eh [pause] speech. [speaker004:] Mm hmm. [speaker005:] Mm hmm. [speaker002:] So [disfmarker] so you intend to hand mark those and exclude them? 
[speaker003:] Yeah, I have mark in [disfmarker] in [disfmarker] in [disfmarker] in that [disfmarker] Not in all [disfmarker] in all the [disfmarker] the file, [speaker002:] Mm hmm. [speaker003:] only eh, eh, nnn, [pause] mmm, I have eh, ehm [pause] I don't remind [comment] what is the [disfmarker] the [disfmarker] the [disfmarker] the quantity, but eh, I [disfmarker] I have marked enough speech on over and all the overlapping zones. I have, eh, [pause] two hundred and thirty, more or less, overlapping zones, and is similar to [disfmarker] to this information, [speaker002:] Whew! [speaker005:] Great. Great. [speaker002:] Mm hmm. [speaker003:] because with the program, I cross [pause] the information of uh, of Jane [comment] with eh, my my segmentation by hand. And [pause] is eh, mor more similar. [speaker005:] Excellent. Glad to hear it. Good. [speaker003:] But [disfmarker] Sorry, sorry. [speaker004:] Go ahead. [speaker003:] And the [disfmarker] the idea is, eh, [vocalsound] I [disfmarker] I will use, eh, [disfmarker] I want [disfmarker] [pause] My idea is, eh, [vocalsound] [vocalsound] to eh [disfmarker] [comment] [nonvocalsound] to classify. [speaker002:] I should've [pause] got the digital camera. Oh well. [speaker003:] I [disfmarker] I need eh, the exact eh, mark of the different, eh, eh, zones because I [disfmarker] I want to put, eh, for eh, each frame a label [pause] indicating. It's a sup supervised and, eh, hierarchical clustering process. I [disfmarker] I [disfmarker] I put, eh, eh, for each frame [nonvocalsound] a label indicating what is th the type, what is the class, eh, which it belong. [speaker002:] Mm hmm. [speaker003:] Eh, I mean, the class you will [nonvocalsound] overlapping speech "overlapping" is a class, eh, "speech" [nonvocalsound] [@ @] the class [pause] that's [speaker002:] Nonspeech. [speaker001:] These will be assigned by hand? 
[speaker003:] a I [disfmarker] I [disfmarker] I ha I h I [disfmarker] I put the mark by hand, [speaker001:] Based on the [disfmarker] Uh huh. [speaker003:] because, eh, [vocalsound] my idea is, eh, in [disfmarker] in the first session, I need, eh, [pause] I [disfmarker] I need, eh, to be sure that the information eh, that, eh, I [disfmarker] I will cluster, is [disfmarker] is right. Because, eh, eh, if not, eh, I will [disfmarker] I will, eh, return to the speech file to analyze eh, what is the problems, [speaker002:] Well, training, and validation. Sure. Mm hmm. [speaker003:] eh. And [vocalsound] I [disfmarker] I'd prefer [disfmarker] I would prefer, the to [disfmarker] to have, eh, this labeled automatically, but, eh, eh, fro th I need truth. [speaker001:] You need truth. Hmm. [speaker002:] Yeah, but this is what you're starting with. [speaker003:] Yeah. [speaker005:] I've gotta ask you. So, uh, the difference between the top two, i [speaker003:] Yeah. Yeah. Yeah. [speaker005:] So [disfmarker] so [disfmarker] I start at the bottom, so "silence" is clear. By "speech" do you mean speech by one sp by one person only? [speaker003:] Speech [disfmarker] [speaker005:] So this is un OK, and then and then the top includes people speaking at the same time, or [disfmarker] or a speaker and a breath overlapping, someone else's breath, or [disfmarker] or clicking, overlapping with speech [disfmarker] So, that [disfmarker] that's all those possibilities in the top one. [speaker003:] Yeah. Yeah. Yeah. Is [disfmarker] [speaker002:] One or two or more. [speaker003:] One, two, three. but No, by th by the moment n Yeah. Yeah. Yeah. Yeah. Yeah. [speaker005:] OK. 
[speaker003:] Eh, in the first moment, because, eh, eh, I [disfmarker] I have information, eh, of the overlapping zones, eh, information about if the, eh, overlapping zone is, eh, from a speech, clear speech, from a one to a two eh speaker, [pause] or three speaker, or is [disfmarker] is the zone where the breath of a speaker eh, overlaps eh, onto eh, a speech, another, especially speech. [speaker005:] So it's basi it's basically speech wi som with [disfmarker] with something overlapping, which could be speech but doesn't need to be. [speaker003:] No, no, es especially [pause] eh, overlapping speech [pause] from, eh, different eh, eh, speaker. [speaker004:] No, but there's [disfmarker] but, I think she's saying "Where do you [disfmarker] In these three categories, where do you put the instances in which there is one person speaking and other sounds which are not speech?" [speaker003:] Eh [disfmarker] Ah! [speaker004:] Which category do you put that in? [speaker005:] Yeah, that's right. That's my question. [speaker003:] Yeah. Yeah, he here I [disfmarker] I put eh speech from eh, from, eh, one speaker [pause] without, eh, eh, any [disfmarker] any [disfmarker] any events more. [speaker005:] Oh! [speaker004:] Right, so where do you put speech from one speaker that does have a nonspeech event at the same time? Which catege which category? [speaker005:] Like a c [speaker003:] Where? Where [disfmarker] What is the class? No. By the moment, no. [speaker005:] Oh. [speaker002:] Yeah, yeah, that's what he was saying before. [speaker005:] So you don't [disfmarker] i i it's not in that [disfmarker] [speaker004:] Oh, so you [disfmarker] not [disfmarker] not marked. [speaker003:] For [disfmarker] for the [disfmarker] by the [@ @] no, [@ @] because I [disfmarker] I [disfmarker] I [disfmarker] I want to limit the [disfmarker] the [disfmarker] nnn, [vocalsound] the [disfmarker] the study. [speaker004:] OK. Got it. Fine. 
So [disfmarker] so [disfmarker] [speaker002:] Yeah, so that's what he was saying before, is that he excluded those. [speaker001:] So you're not using all of the data. [speaker003:] The [disfmarker] All [disfmarker] I [disfmarker] Exactly. [speaker002:] Yeah. [speaker004:] Yeah. [speaker003:] Yeah, you mean [disfmarker] [speaker005:] So you're ignoring overlapping events unless they're speech with speech. [speaker003:] Yeah, be [speaker004:] Yeah, that's fine. [speaker003:] Yeah. [speaker005:] OK. [speaker003:] "Why? Why? What's the reason?" because [pause] i it's the first study. [speaker004:] Oh, no [disfmarker] no, it's a perfectly sensible way to go. [speaker003:] the first [speaker005:] We're just [speaker004:] We just wondered [disfmarker] trying to understand what [disfmarker] what you were doing. [speaker003:] Yeah. [speaker005:] Yeah. Yeah cuz you've talked about other overlapping events in the past. [speaker004:] OK. [speaker003:] Yeah. [speaker005:] So, this is [disfmarker] this is [disfmarker] a subset. [speaker003:] Yeah. In the [disfmarker] in the future, the [disfmarker] the idea is to [disfmarker] to extend [pause] the class, [speaker001:] Is [disfmarker] is [disfmarker] [speaker003:] to consider all the [disfmarker] all the information, you [disfmarker] you mentioned before [speaker004:] Yeah. Yeah, I [disfmarker] I don't think we were asking for that. [speaker005:] OK. [speaker004:] We were jus just trying to understand [disfmarker] [speaker005:] Yeah. [speaker003:] but eh, the [disfmarker] the first idea [disfmarker] Because eh, I don't know [pause] what hap what will happen [comment] with the study. [speaker005:] Yeah, we just wanted to know what the category was here. [speaker004:] Yeah. [speaker002:] Right. [speaker004:] Sure. [speaker003:] Yeah. [speaker001:] Is your silence category pure silence, or [disfmarker]? What if there was a door slam or something? [speaker003:] i it's pure [disfmarker] No, no, it's pure silence. 
[speaker001:] Pure silence. [speaker003:] It's the control set. [speaker001:] OK. [speaker003:] OK? It's the control set. [speaker004:] What you [disfmarker] [speaker003:] It's pure si pure silence [comment] with the [disfmarker] with the machine on the [disfmarker] on the roof. [speaker004:] Well [disfmarker] w [vocalsound] I [disfmarker] I think what you m I think what you mean [vocalsound] is that it's nonspeech segments that don't have impulsive noises. [speaker002:] With the fan. [speaker003:] Yeah. [speaker004:] Right? Cuz you're calling [disfmarker] what you're calling "event" is somebody coughing [vocalsound] or clicking, or rustling paper, or hitting something, which are impulsive noises. [speaker003:] Yeah. [speaker004:] But steady state noises are part of the background. [speaker003:] Yeah. [speaker004:] Which, are being, included in that. Right? [speaker003:] h here yet, yet I [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] I think [disfmarker] I [disfmarker] I think, eh, there are [disfmarker] that [disfmarker] some kind of noises that, eh, don't [disfmarker] don't wanted to [disfmarker] to be in that, eh, in that control set. [speaker004:] Yeah. [speaker005:] So it's like a signal noise situation. Yeah. [speaker004:] Well [disfmarker] Yeah. [speaker003:] But I prefer, I prefer at [disfmarker] at the first, eh, the [disfmarker] the silence with eh, this eh this kind of the [disfmarker] of eh [disfmarker] of noise. [speaker005:] Well, steady state. [speaker004:] Right, it's [disfmarker] I mean, it's [disfmarker] "Background" might be [disfmarker] might be a better word than "silence". [speaker003:] Yeah. [speaker004:] It's just sort of that [disfmarker] the [disfmarker] the background acoustic [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Yeah. [speaker002:] Right. So [disfmarker] Fine. Go on. [speaker003:] Is [disfmarker] is [disfmarker] is only [disfmarker] [speaker004:] Yeah. [speaker003:] OK. 
[speaker005:] Well, we needed to get the categories, yeah. [speaker003:] And, um, with this information [vocalsound] The idea is eh, eh, nnn, I have a label for [disfmarker] for each, eh, frame and, eh with a cluster eh [disfmarker] algorithm I [disfmarker] and [disfmarker] Sorry. And eh I am going [pause] to prepare a test bed, eh, well, eh, a [disfmarker] a set of [pause] feature structure eh, eh, models. [speaker002:] Right. [speaker003:] And [pause] my idea is [speaker002:] "Tone", whatever. [speaker003:] so [disfmarker] so [disfmarker] on [disfmarker] because I have a pitch extractor yet. [speaker004:] Right. [speaker002:] Mm hmm. [speaker003:] I have to [disfmarker] to test, but eh I [disfmarker] [speaker001:] You have your own? [speaker003:] Yeah, yeah, yeah. [speaker001:] Oh! [speaker003:] I ha I have prepare. Is a modified version of [disfmarker] of [disfmarker] of a pitch tracker, eh, from, eh, Standar eh Stanford University [disfmarker] in Stanford? No. From, eh, em, [vocalsound] Cambridge [pause] University. [speaker001:] Oh! What's it written in? [speaker003:] Eh, em, I [disfmarker] I [disfmarker] I don't remember what is the [disfmarker] the name of the [disfmarker] of the author, because I [disfmarker] I have several [disfmarker] I have eh, eh, em, eh, library tools, from eh, Festival and [disfmarker] of [disfmarker] from Edinburgh eh, from Cambridge, eh, and from our department. [speaker001:] Ah. [speaker004:] Mm hmm. Mm hmm. [speaker003:] And [disfmarker] And I have to [disfmarker] because, [vocalsound] in general the pitch tracker, doesn't work [comment] [vocalsound] very well and [disfmarker] [speaker002:] Bad. Right. But, you know, as a feature, it might be OK. [speaker003:] Yeah. Yeah. [speaker002:] So, we don't know. 
[speaker003:] This [disfmarker] this is [disfmarker] And [pause] th the idea is to [disfmarker] to, eh, to obtain, eh, [pause] for example, eh, [pause] [vocalsound] eh diff eh, eh, different [disfmarker] well, no, a great number of eh FEC for example, eh, [pause] eh, twenty five, eh, thirty [disfmarker] thirty parameters, eh, for [disfmarker] for each one. And in a first eh, nnn, step in the investi in the research in eh, my idea is try to, eh, to prove, what is the performance of the difference parameter, eh [pause] to classify [pause] the different, eh, what is the [disfmarker] the [disfmarker] the [disfmarker] the front end approach to classify eh, the different, eh, frames of each class [pause] eh and what is the [disfmarker] the, nnn, nnn, nnn, eh, what is the, the error [pause] eh, of the data [speaker002:] Supervised clustering. Mm hmm. [speaker003:] This is the [disfmarker] the eh, first idea [speaker005:] Mm hmm. [speaker003:] and the second [pause] is try to [disfmarker] eh, to use [pause] some ideas eh, similar to the linear discriminant analysis. [speaker002:] Mm hmm. [speaker003:] Eh? Eh, similar, because the the idea is to [disfmarker] to study [pause] what is the contribution of eh, each parameter to the process of classify correctly the different [disfmarker] the different parameters. [speaker002:] Mm hmm. What sort of classifier ar? [speaker003:] Eh, the [disfmarker] the [disfmarker] the classifier is [disfmarker] nnn by the moment is eh [disfmarker] is eh, similar, nnn, that the classifier used eh, in a quantifier [disfmarker] vectorial quantifier is eh, used to [disfmarker] to eh, some distance [pause] to [disfmarker] to put eh, a vector eh, in [disfmarker] in a class different. Is [disfmarker] Yeah? W with a model, is [disfmarker] is only to cluster using a eh, [@ @] or a similarity. [speaker002:] Unimodal? [speaker005:] Mm hmm. 
[speaker002:] So is it just one cluster per [disfmarker] [speaker003:] A another possibility it to use eh a netw netw a neural network. [speaker002:] Right. [speaker003:] But eh what's the p [vocalsound] What is my idea? What's the problem I [disfmarker] I [disfmarker] I [disfmarker] I see in [disfmarker] in [disfmarker] in [disfmarker] [vocalsound] if you [disfmarker] you use the [disfmarker] the neural network? If [disfmarker] w when [pause] this kind of eh, mmm, cluster, clustering algorithm to can test, to can eh observe what happened you [disfmarker] you can't [disfmarker] you can't eh, eh put up with your hand [comment] in the different parameter, [speaker002:] Right, you can't analyse it. [speaker003:] but eh [disfmarker] If you use a neural net is [disfmarker] is a good idea, but eh you don't know what happened in the interior of the neural net. [speaker004:] Well, actually, you can do sensitivity analyses which show you what the importance of the different parce pieces of the input are. [speaker003:] Yeah. [speaker004:] It's hard to [disfmarker] w w what you [disfmarker] It's hard to tell on a neural net is what's going on internally. [speaker003:] Yeah. [speaker004:] But it's actually not that hard to analyse it and figure out the effects of different inputs, especially if they're all normalized. [speaker003:] Yeah. Yeah. [speaker004:] Um, but [disfmarker] [speaker002:] Well, using something simpler first I think is probably fine. [speaker004:] Well, this isn't tru if [disfmarker] if [disfmarker] if you really wonder what different if [disfmarker] if [disfmarker] [speaker003:] Yeah. But [disfmarker] [speaker002:] Decision tree. [speaker004:] Yeah, then a decision tree is really good, but the thing is here he's [disfmarker] he's not [disfmarker] he's not like he has one you know, a bunch of very distinct variables, like pitch and this [disfmarker] he's talking about, like, a all these cepstral coefficients, and so forth, [speaker002:] Right. 
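The distance-based classifier being described — assign each frame's feature vector to the class whose centroid is nearest, as in a vector quantizer — can be sketched as follows. The two-dimensional features, class means, and class names here are invented for illustration; they are not the meeting data or the actual system.

```python
import numpy as np

def train_centroids(frames, labels):
    """One centroid (mean feature vector) per class, from labeled frames."""
    labels = np.array(labels)
    return {c: frames[labels == c].mean(axis=0) for c in sorted(set(labels))}

def classify(frame, centroids):
    """Nearest-centroid (VQ-style) decision by Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(frame - centroids[c]))

# Hypothetical two-dimensional features for three of the classes discussed.
rng = np.random.default_rng(0)
frames = np.vstack([
    rng.normal([0, 0], 0.3, (50, 2)),  # background / "silence" frames
    rng.normal([3, 0], 0.3, (50, 2)),  # single-speaker speech frames
    rng.normal([3, 3], 0.3, (50, 2)),  # overlapped-speech frames
])
labels = ["background"] * 50 + ["speech"] * 50 + ["overlap"] * 50
centroids = train_centroids(frames, labels)
print(classify(np.array([2.9, 3.1]), centroids))  # prints "overlap"
```

The appeal mentioned in the discussion is that, unlike a neural net, the per-class centroids and distances can be inspected directly.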
[speaker003:] Yeah. Yeah. [speaker002:] Right. [speaker003:] Yeah. [speaker004:] in which case a a any reasonable classifier is gonna be a mess, and it's gonna be hard to figure out what [disfmarker] what uh [disfmarker] [speaker003:] And [disfmarker] [speaker002:] Right. [speaker003:] I [disfmarker] I [disfmarker] I will include too the [disfmarker] the [disfmarker] the differential de derivates too. [speaker004:] Yeah. [speaker002:] Deltas, yeah. So. [speaker004:] I [disfmarker] I mean, I think the other thing that one [disfmarker] I mean, this is, I think a good thing to do, to sort of look at these things at least [disfmarker] See what I'd [disfmarker] I'd [disfmarker] Let me tell you what I would do. I would take just a few features. Instead of taking all the MFCC's, or all the PLP's or whatever, I would just take a couple. [speaker003:] Yeah. [speaker004:] OK? Like [disfmarker] like C one, C two, something like that, so that you can visualize it. and look at these different examples and look at scatter plots. [speaker003:] Yeah. Yeah. [speaker004:] OK, so before you do [disfmarker] build up any kind of fancy classifiers, just take a look in two dimensions, at how these things are split apart. That I think will give you a lot of insight of what is likely to be a useful feature when you put it into a more complicated classifier. [speaker003:] Yeah. Yeah. [speaker004:] And the second thing is, once you actually get to the point of building these classifiers, [vocalsound] [@ @] what this lacks so far is the temporal properties. So if you're just looking at a frame and a time, you don't know anything about, you know, the structure of it over time, and so you may wanna build [@ @] [disfmarker] build a Markov model of some sort uh, or [disfmarker] or else have features that really are based on um on [disfmarker] on some bigger chunk of time. [speaker003:] Yeah. [speaker002:] Context window? [speaker003:] Yeah. Yeah. 
[speaker004:] But I think this is a good place to start. But don't uh anyway, this is my suggestion, is don't just, you know, throw in twenty features at it, the deltas, and the delta del and all that into some classifier, even [disfmarker] even if it's K nearest neighbors, you still won't know [speaker003:] Yeah. Yeah, yeah. [speaker004:] what it's doing, even [disfmarker] You know it's Uh, I think to know what it's [disfmarker] to have a better feeling for what it's [speaker002:] Yep. [speaker004:] look at [disfmarker] at som some picture that shows you, "Here's [disfmarker] These things uh, uh are [disfmarker] offer some separation." [vocalsound] And, uh, in LPC, uh, the thing to particularly look at is, I think [disfmarker] is something [vocalsound] like, uh, the residual [disfmarker] [speaker003:] Yeah. [speaker004:] Um So. [speaker003:] Yeah. [speaker005:] Can I ask? It strikes me that there's another piece of information um, that might be useful and that's simply the transition. [speaker003:] S [speaker005:] So, w if you go from a transition of silence to overlap versus a transition from silence to speech, there's gonna be a b a big informative area there, it seems to me. [speaker003:] Yeah, because [disfmarker] Yeah yeah. Yeah. Yeah. I [disfmarker] Yeah. But eh I [disfmarker] I [disfmarker] Is my my [disfmarker] my own vision, [vocalsound] of the [disfmarker] of the project. [speaker002:] So, some sort of [disfmarker] That's [disfmarker] [speaker005:] Mm hmm. [speaker003:] I [disfmarker] eh the [disfmarker] the Meeting Recorder project, for me, has eh, two [vocalsound] eh, w has eh several parts, several p [vocalsound] objective [speaker004:] Mm hmm. [speaker003:] eh, because it's a [disfmarker] a great project. But eh, at the first, in the acoustic, eh, eh, parts of the project, eh I think [pause] you eh [disfmarker] we have eh [vocalsound] [pause] two main eh objective. 
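Morgan's advice — take just one or two features at a time, say C1 and C2, and see how the classes separate before building any fancy classifier — can be quantified with a per-feature Fisher ratio instead of an actual scatter plot. The "cepstral" values below are synthetic placeholders, not real coefficients.

```python
import numpy as np

def fisher_ratio(a, b):
    """One-feature class separability: squared between-class mean distance
    over summed within-class variance. Bigger = better separation."""
    return float((a.mean() - b.mean()) ** 2 / (a.var() + b.var()))

rng = np.random.default_rng(1)
# Synthetic stand-ins: C1 separates "speech" from "overlap", C2 barely does.
speech_c1, overlap_c1 = rng.normal(0, 1, 200), rng.normal(2, 1, 200)
speech_c2, overlap_c2 = rng.normal(0, 1, 200), rng.normal(0.2, 1, 200)

print(fisher_ratio(speech_c1, overlap_c1))  # large: keep this feature
print(fisher_ratio(speech_c2, overlap_c2))  # near zero: little use alone
```

A table of such ratios over candidate features gives the "ten likely candidates" Morgan mentions, without putting everything into an opaque classifier first.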
One [disfmarker] one of these is to [disfmarker] eh to detect the change, the acoustic change. And [vocalsound] for that, if you don't use, eh, [vocalsound] eh, a speech recognizer, eh broad class, or not broad class to [disfmarker] to try to [disfmarker] to [pause] [pause] [vocalsound] to label the different frames, I think [pause] the Akaike criterion [pause] or BIC criterion eh will be enough to detect the change. [speaker005:] OK. [speaker003:] And [disfmarker] Probably. [comment] I [disfmarker] I [disfmarker] I [disfmarker] I would like to [disfmarker] to t prove. Uh, probably. When you you have, eh, eh s eh the transition of speech or [disfmarker] or silence eh to overlap zone, this criterion is enough with [disfmarker] [pause] probably with, eh, this kind of, eh, eh the [disfmarker] the [disfmarker] the more eh use eh [disfmarker] use eh [disfmarker] used eh em [pause] normal, regular eh parameter MF MFCC. you [disfmarker] you have to [disfmarker] to [disfmarker] to find [disfmarker] you can find the [disfmarker] the mark. You can find the [disfmarker] nnn, the [disfmarker] the acoustic change. But eh eh I [disfmarker] I understand that you [disfmarker] your objective is [pause] to eh classify, to know that eh that zone [pause] not is only [comment] a new zone in the [disfmarker] in the file, that eh you have eh, but you have to [disfmarker] to [disfmarker] to know that this is overlap zone. because in the future you will eh try to [disfmarker] to process that zone with a non regular eh eh speech recognizer model, I suppose. [speaker004:] Mm hmm. Mm hmm. [speaker003:] you [disfmarker] you will pretend [comment] to [disfmarker] to [disfmarker] to process the overlapping z eh zone with another kind of algorithm because it's very difficult to [disfmarker] to [disfmarker] to obtain the transcription [pause] from eh using eh eh a regular, normal speech recognizer. That, you know, [pause] I [disfmarker] I [disfmarker] I think is the idea.
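The BIC-style change detection the speaker refers to compares modeling a window of frames with one Gaussian against two Gaussians split at a candidate point, penalizing the extra parameters. This is a one-dimensional sketch under that assumption; the real system would use full cepstral vectors and full-covariance models.

```python
import numpy as np

def delta_bic(x, split, lam=1.0):
    """BIC comparison for a candidate change point in a 1-D frame sequence:
    positive values mean two Gaussians (one per side of `split`) beat a
    single Gaussian even after the penalty for the 2 extra parameters."""
    def ll(seg):  # max log-likelihood of a segment under one Gaussian
        var = seg.var() + 1e-10
        return -0.5 * len(seg) * (np.log(2 * np.pi * var) + 1)
    gain = ll(x[:split]) + ll(x[split:]) - ll(x)
    return gain - lam * 0.5 * 2 * np.log(len(x))

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(4, 1, 300)])
print(delta_bic(x, 300) > 0)  # True: a clear acoustic change at frame 300
```

Sliding the split point across the window and picking positive maxima of this score is the usual way the criterion marks change points.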
And so [vocalsound] eh the, nnn [disfmarker] the [disfmarker] [nonvocalsound] the system [pause] eh will have two models. [speaker005:] Clustering. [speaker003:] A model to detect more acc the mor most accurately possible that is p uh, will be possible the, eh [disfmarker] the mark, the change [speaker005:] OK. [speaker003:] and another [disfmarker] another model will [@ @] [pause] or several models, to try s but [disfmarker] eh several model eh robust models, sample models to try to classify the difference class. [speaker002:] I'm [disfmarker] I'm [disfmarker] I'm sorry, I didn't understand you [disfmarker] what you said. What [disfmarker] what model? [speaker003:] Eh, the [disfmarker] the classifiers of the of the n to detect the different class to the different zones before try to [disfmarker] to recognize, eh with eh [disfmarker] to transcribe, with eh a speech recognizer. [speaker002:] Mm hmm. [speaker005:] So p [speaker003:] And my idea is to use eh, for example, a neural net with [pause] the [pause] information we obtain from this eh [disfmarker] this eh study of the parameter with the [pause] selected [pause] parameter to try to eh [disfmarker] to put the class of each frame. Eh [pause] for [pause] the difference [pause] zone [speaker002:] Features. Yeah. [speaker003:] you [disfmarker] you eh, eh [pause] have obtained in the first eh, step [pause] with the [pause] for example, BIC eh, eh [pause] criterion compare model [speaker005:] Mm hmm. [speaker003:] And [disfmarker] [vocalsound] You [speaker004:] OK, but, I [disfmarker] I think [disfmarker] in any event we're agreed that the first step is [disfmarker] [speaker003:] I don't u [speaker005:] Yeah. [speaker003:] i [speaker004:] Because what we had before for [disfmarker] for uh, speaker change detection did not include these overlaps. [speaker003:] Yeah. [speaker004:] So the first thing is for you to [disfmarker] to build up something that will detect the overlaps. [speaker003:] Yeah. [speaker004:] Right? 
So again, I think the first thing to do to detect the overlaps is to look at these uh, in [disfmarker] in [disfmarker] in [disfmarker] in [disfmarker] [speaker003:] Yeah. [speaker002:] Features? [speaker004:] Well, I [disfmarker] again, the things you've written up there I think are way too [disfmarker] way too big. [speaker003:] Yeah. [speaker004:] OK? If you're talking about, say, twelfth [disfmarker] twelfth order uh MFCC's or something like that it's just way too much. You won't be able to look at it. [speaker003:] Yeah. [speaker004:] All you'll be able to do is put it into a classifier and see how well it does. [speaker003:] Yeah. [speaker004:] Whereas I think if you have things [disfmarker] if you pick one or two dimensional things, or three of you have some very fancy display, uh, and look at how the [disfmarker] the different classes separate themselves out, you'll have much more insight about what's going on. [speaker003:] It will be enough. [speaker004:] Well, you'll [disfmarker] you'll get a feeling for what's happening, you know, so if you look at [disfmarker] [vocalsound] Suppose you look at first and second order cepstral coefficients for some one of these kinds of things and you find that the first order is much more effective than the second, [vocalsound] and then you look at the third and there's not [disfmarker] and not too much there, [vocalsound] you may just take first and second order cepstral coefficients, [speaker003:] Yeah. Yeah. [speaker004:] right? [speaker003:] Yeah. [speaker004:] And with LPC, I think LPC per se isn't gonna tell you much more than [disfmarker] than [disfmarker] than the other, maybe. 
Uh, and uh on the other hand, the LPC residual, the energy in the LPC residual, [vocalsound] will say how well, uh [vocalsound] the low order LPC [vocalsound] model's fitting it, which should be [vocalsound] pretty poorly for two two or more [vocalsound] people speaking at the same time, and it should be pretty well, for w for [disfmarker] for one. [speaker003:] Yeah. Yeah. [speaker004:] And so [vocalsound] I [disfmarker] i again, if you take a few of these things that are [disfmarker] are [vocalsound] prob um [comment] [pause] promising features and look at them in pairs, [vocalsound] uh, I think you'll have much more of a sense of OK, I now have [disfmarker] [vocalsound] uh, doing a bunch of these analyses, [speaker003:] Yeah. [speaker004:] I now have ten likely candidates. And then you can do decision trees or whatever to see how they combine. [speaker003:] Yeah. Yeah. Yeah. [speaker001:] I've got a question. [speaker005:] Interesting. [speaker003:] This [speaker005:] Hmm. [speaker003:] Sorry. but eh, eh [vocalsound] eh eh eh I don't know it is the first eh way to [disfmarker] to [disfmarker] do that and I would eh like to [disfmarker] to know what eh, your opinion. Eh [vocalsound] all this study in the f in the first moment, I [disfmarker] I w I [disfmarker] I will pretend to do [comment] with eh eh equalizes speech. The [disfmarker] the equalizes speech, the speech eh, the mixes of speech. [speaker002:] With [disfmarker] [speaker005:] With what? With what? [speaker002:] Right. Mixed. [speaker003:] the [disfmarker] the mix, mixed speech. [speaker005:] "Mixed". Thank you. [speaker003:] Eh, why? Because eh the spectral distortion is [disfmarker] [comment] [pause] more eh [disfmarker] a lot eh clearer, very much clearer if we compare with the PDA. [speaker002:] Right. [speaker003:] PDA speech file is eh [disfmarker] it will be eh difficult. I [disfmarker] [speaker005:] So it's messier. The [disfmarker] the PDA is messier. [speaker003:] Yeah, fff! 
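Morgan's LPC-residual point — a low-order all-pole predictor whitens one voice much better than two simultaneous voices, so residual energy over signal energy is a cheap overlap cue — can be sketched with a least-squares linear predictor. The noise-driven two-pole "voices" below are stand-ins for real speech frames, not actual data.

```python
import numpy as np

def lpc_residual_ratio(x, order=10):
    """Fit an order-`order` linear predictor by least squares; return
    residual energy / signal energy (low for a single all-pole source)."""
    rows = np.array([x[i - order:i][::-1] for i in range(order, len(x))])
    target = x[order:]
    coefs, *_ = np.linalg.lstsq(rows, target, rcond=None)
    resid = target - rows @ coefs
    return float(np.sum(resid ** 2) / np.sum(target ** 2))

def ar2_voice(freq, n, rng, r=0.95):
    """Noise-driven two-pole resonator: a crude single-'voice' stand-in."""
    a1, a2 = 2 * r * np.cos(freq), -r * r
    e, x = rng.normal(0, 1, n), np.zeros(n)
    for i in range(2, n):
        x[i] = a1 * x[i - 1] + a2 * x[i - 2] + e[i]
    return x

rng = np.random.default_rng(3)
v1 = ar2_voice(0.25 * np.pi, 20000, rng)
v2 = ar2_voice(0.60 * np.pi, 20000, rng)
print(lpc_residual_ratio(v1))       # one source: predictor whitens it well
print(lpc_residual_ratio(v1 + v2))  # two sources: noticeably worse fit
```

Only the relative size of the two ratios matters here; a single resonant source is largely predictable, while no one low-order filter fits two sources at once.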
[comment] Because the n the noise eh to sp the signal to noise relation is eh [disfmarker] is [disfmarker] is low. [speaker004:] OK. [speaker002:] Yeah, I think that that's a good way to start. [speaker003:] And, [vocalsound] I don't know [disfmarker] [speaker002:] But. [speaker003:] I don't know eh uh i i that eh the [disfmarker] [vocalsound] the result of the [disfmarker] of the study eh with eh [disfmarker] with eh this eh [disfmarker] this speech, the mix speech eh [pause] will work [pause] exactly [pause] with the [pause] eh PDA files. [speaker002:] It would be interesting in itself to see. [speaker003:] eh What, I [disfmarker] I mean, what what is the effect of the low signal to [disfmarker] to [disfmarker] to noise relation, you know, eh with [disfmarker] [speaker002:] Well, I think that would be an interesting result. [speaker004:] N u We Well, I think [disfmarker] I think [disfmarker] I think it's not a [disfmarker] it's not at all unreasonable. It makes sense to start with the simpler signal because if you have features which don't [disfmarker] aren't even helpful in the high signal to noise ratio, then there's no point in putting them into the low signal ratio, one would think, anyway. [speaker003:] Yeah. [speaker004:] And so, if you can get [disfmarker] [@ @] [comment] Uh again, my prescription would be that you would, with a mixed signal, you would take a collection of possible uh, features [vocalsound] look at them, look at how these different classes that you've marked, separate themselves, [comment] [vocalsound] and then collect, uh in pairs, [vocalsound] and then collect ten of them or something, and then proceed [vocalsound] with a bigger classifier. [speaker003:] Yeah. Yeah. [speaker004:] And then if you can get that to work well, then you go to the other signal. And then, and you and you know, they won't work as well, but how m you know, how much [disfmarker] [speaker003:] Yeah. [speaker002:] Right. [speaker003:] Yeah. Yeah.
[speaker004:] And then you can re optimize, and so on. [speaker002:] Yeah. But it I think it would be interesting to try a couple with both. Because it [disfmarker] I think it would be interesting to see if some features work well with close mixed, and [disfmarker] And don't [disfmarker] [speaker004:] Hmm. [speaker003:] Ah, yeah, yeah yeah yeah. [speaker004:] That's [disfmarker] well, the [disfmarker] It [disfmarker] it's [disfmarker] it's true that it also, it could be [vocalsound] useful to do this exploratory analysis where you're looking at scatter plots and so on in both cases. Sure. [speaker003:] But [disfmarker] [speaker002:] Mm hmm. [speaker003:] I [disfmarker] I [disfmarker] I [disfmarker] I think that the [disfmarker] the eh parameter we found, eh, eh [vocalsound] worked with both eh, speech file, [speaker005:] That's good. [speaker003:] but eh what is the [disfmarker] the [disfmarker] the relation of eh [disfmarker] of the [vocalsound] performance when eh you use eh the, eh eh speech file the PDA speech files. [speaker004:] Hmm. [speaker003:] Yeah, I don't know. [speaker004:] Right. [speaker003:] But it [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] I think it will be important. Because eh people eh eh, different groups eh has eh experience with this eh kind of problem. Is [disfmarker] eh is not easy eh to [disfmarker] to solve, because if you [disfmarker] I [disfmarker] I [disfmarker] [vocalsound] I have seen the [disfmarker] the [disfmarker] the speech file from eh PDA, and s some parts is [comment] very difficult because you [disfmarker] you don't see the spectrum [disfmarker] the spectrogram. [speaker002:] Right. [speaker003:] Is very difficult to apply eh, eh a parameter to detect change when you don't see. [speaker002:] Yeah, they're totally hidden. [speaker004:] Yeah. Yeah. 
Well, that [disfmarker] that [disfmarker] that's another reason why very simple features, things like energy, and things [disfmarker] things like harmonicity, and [vocalsound] residual energy are uh, yeah are [disfmarker] are better to use than very complex ones because they'll be more reliable. [speaker003:] But I suppose [disfmarker] [speaker002:] Are probably better, yep. [speaker003:] Yeah, yeah yeah, I [disfmarker] I [disfmarker] I will put eh the energy here. Yeah. Yeah. Yeah. [speaker004:] Ch Chuck was gonna ask something I guess. [speaker003:] You have a question. [speaker001:] Yeah, I [pause] maybe this is a dumb question, but w I thought it would be [disfmarker] [vocalsound] I thought it would be easier if you used a PDA [speaker004:] Nah. [speaker001:] because can't you, couldn't you like use beam forming or something to detect speaker overlaps? I mean [disfmarker] [speaker002:] Well, if you used the array, rather than the signal from just one. [speaker001:] Uh huh. [speaker004:] Yeah, no, you you're [disfmarker] you're right [speaker002:] But that's [disfmarker] [speaker004:] that [disfmarker] In fact, if we made use of the fact that there are two microphones, you do have some location information. which we don't have with the one and [disfmarker] and so that's [disfmarker] [speaker001:] Is that not allowed with this project? [speaker004:] Uh, well, no, I mean, we we don't have any rules, r really. [speaker001:] But I didn't mean [disfmarker] I w [pause] Given [disfmarker] given the goal. [speaker004:] I think [disfmarker] [vocalsound] I [disfmarker] I think [disfmarker] I think it's [disfmarker] it's [disfmarker] it's a [disfmarker] it's an additional interesting question. [speaker001:] I mean, is [disfmarker] is that violation of the [disfmarker] [speaker003:] Oh. No. Yeah. 
[speaker004:] I mean, I think you wanna know whether you can do it with one, because you know it's not necessarily true that every device that you're trying to do this with will have two. [speaker001:] Mm hmm [speaker003:] Yeah. [speaker004:] Uh, if, on the other hand, we show that there's a huge advantage with two, well then that could be a real point. [speaker003:] Yeah. [speaker004:] But, we don't n even know yet what the effect of detecting [disfmarker] having the ability to detect overlaps is. You know, maybe it doesn't matter too much. [speaker001:] Right. Right. [speaker003:] Yeah. [speaker004:] So, this is all pretty early stages. [speaker001:] OK. [speaker003:] Yeah. [speaker004:] But no, you're absolutely right. [speaker003:] Yeah. [speaker001:] I see. [speaker003:] Yeah, yeah, yeah. [speaker004:] That's [pause] a good thing to consider. [speaker001:] OK. [speaker005:] There [disfmarker] there is a complication though, and that is if a person turns their back to the [disfmarker] to the PDA, then some of the positional information goes away? [speaker004:] Well, it [disfmarker] it [disfmarker] it does, i it d it does, but the [disfmarker] the [disfmarker] the issue is that [disfmarker] that [disfmarker] [speaker003:] Yeah. [speaker001:] No, it's not [disfmarker] it's not that so much as [disfmarker] [speaker005:] And then, And if they're on the access [disfmarker] [comment] on the axis of it, that was the other thing I was thinking. [speaker002:] Mm hmm. [speaker005:] He [disfmarker] You mentioned this last time, that [disfmarker] that if [disfmarker] if you're straight down the midline, then [disfmarker] then [disfmarker] the r the left right's gonna be different, [speaker002:] Yeah, we hav need to put it on a little turntable, [speaker003:] I [disfmarker] I [disfmarker] [vocalsound] I [disfmarker] I [disfmarker] I th [speaker002:] and [disfmarker] [speaker001:] Well, it's [speaker003:] Yeah. 
[speaker005:] and [disfmarker] and [disfmarker] and in his case, I mean, he's closer to it anyway. [speaker003:] Yeah. [speaker005:] It seems to me that [disfmarker] that it's not [disfmarker] a p uh, you know, it's [disfmarker] this [disfmarker] the topograph the topology of it is [disfmarker] is a little bit complicated. [speaker003:] Yeah. [speaker002:] But it's another source of information. [speaker003:] I [disfmarker] I [disfmarker] Yeah. [speaker001:] I don't [disfmarker] I don't know ho [speaker003:] I [disfmarker] I [disfmarker] I think [disfmarker] Sorry. I [disfmarker] I [disfmarker] I think because the the the distance between the two microph eh, microphone, eh, in the PDA is very near. But it's uh [disfmarker] from my opinion, it's an interesting idea to [disfmarker] to try to study the binaural eh problem eh, with information, because I [disfmarker] I found difference between the [disfmarker] the speech from [disfmarker] from each micro eh, in the PDA. [speaker001:] I would guess [disfmarker] [speaker002:] Yep. [speaker004:] Yeah, it's timing difference. [speaker005:] Oh yeah! Oh I agree! And we use it ourselves. [speaker004:] It it's not amplitude, right? S Right. [speaker005:] I mean, I know [disfmarker] I n I know that's a very important cue. [speaker002:] Yep. [speaker003:] Yeah. [speaker005:] But I'm just [disfmarker] I'm just saying that the way we're seated around a table, is not the same with respect to each [disfmarker] to each person with respect to the PDA, [speaker003:] No. No. No, no, no. [speaker005:] so we're gonna have a lot of differences with ref respect to the speaker. [speaker004:] That's [disfmarker] That's fine. [speaker001:] But th I don't think that matters, though. [speaker003:] But [disfmarker] [speaker004:] That's [disfmarker] So [disfmarker] so i [@ @] [comment] I think the issue is, "Is there a clean signal coming from only one direction?" [speaker001:] Right. 
[speaker004:] If it's not coming from just one direction, if it [disfmarker] if th if there's a broader pattern, it means that it's more likely there's multiple people speaking, [speaker003:] Yeah. [speaker004:] wherever they are. [speaker001:] So it's sort of like how [disfmarker] how confused is it about where the beam is. [speaker004:] Is it a [disfmarker] is it [disfmarker] Yeah, is there a narrow [disfmarker] Is there a narrow beam pattern or is it a [disfmarker] a distributed beam pattern? [speaker003:] Yeah. [speaker004:] So if there's a distributed beam pattern, then it looks more like it's [disfmarker] it's uh, multiple people. [speaker005:] OK. [speaker004:] Wherever you are, even if he moves around. [speaker003:] Yeah. [speaker005:] Yeah. OK, it just [disfmarker] it just seemed to me that [disfmarker] uh, that this isn't the ideal type of separation. I mean, I [disfmarker] I think it's [disfmarker] I can see the value o [speaker004:] Oh, ideal would be to have the wall filled with them, but I mean [disfmarker] [vocalsound] But the thing is just having two mikes [disfmarker] If you looked at that thing on [disfmarker] on Dan's page, it was [disfmarker] When [disfmarker] when there were two people speaking, and it looked really really different. [speaker005:] Yeah, OK. [speaker003:] Yeah. Yeah. Yeah. [speaker002:] Yep. [speaker005:] Oh yeah yeah. OK. [speaker003:] Yeah. [speaker001:] What looked different? [speaker005:] Yeah. [speaker004:] Uh, well, basic he was looking at correlation. [speaker002:] Cross co cross correlation. [speaker004:] Just cross correlation between two sides. [speaker003:] Correlation, yeah. [speaker004:] So cross correlation is pretty sensitive. [speaker001:] Did Sorry, b uh I'm not sure what Dan's page is that you mean. He was looking at the two [disfmarker] [speaker005:] Uh, his a web page. [speaker004:] You take the signal from the two microphones and you cros and you cross correlate them with different lags. 
[speaker002:] Subtract them. [speaker001:] OK. [speaker005:] Mm hmm. [speaker003:] Yeah. [speaker001:] Uh huh. [speaker004:] OK. [speaker002:] And you find [disfmarker] They get peaks. [speaker004:] So when one person is speaking, then wherever they happen to be at the point when they're speaking, [vocalsound] then there's a pretty big maximum right around that point in the l in [disfmarker] in the lag. So if [disfmarker] at whatever angle you are, [vocalsound] at some lag corresponding to the time difference between the two there, you get this boost in the [disfmarker] in [disfmarker] in the cross correlation value [disfmarker] function. [speaker001:] OK. OK. So [disfmarker] so if there's two [disfmarker] [speaker002:] And if there are multiple people talking, you'll see two peaks. [speaker004:] It's spread out. [speaker005:] Well, let me ask you, if [disfmarker] if both people were over there, it would be less effective than if one was there and one was across, catty corner? [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Yeah. [speaker004:] The the [disfmarker] Oh, I'm sorry, [speaker005:] No? [speaker004:] if they're right next to one another? [speaker001:] If I was [disfmarker] if I was here and Morgan was there and we were both talking, it wouldn't work. [speaker005:] Next [disfmarker] next one over n over [comment] on this side of the P [disfmarker] PDA. [speaker004:] i i [speaker005:] There we go. [speaker002:] Right. [speaker003:] Yeah. [speaker005:] Good example, the same one I'm asking. [speaker003:] Yeah. [speaker004:] Yeah, e I see. [speaker003:] Yeah. [speaker005:] Versus you [disfmarker] versus [disfmarker] you know, and we're catty corner across the table, and I'm farther away from this one and you're farther away from that one. [speaker001:] Yes. [speaker002:] Or [disfmarker] or even if, like, if people were sitting right across from each other, you couldn't tell the difference either. [speaker003:] Yeah. Yeah. 
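A toy version of the two-channel cross-correlation being described, with white-noise "talkers" and integer sample delays standing in for real speech and real room geometry: one talker gives a single sharp peak at its lag, and two talkers at different angles put energy at two different lags.

```python
import numpy as np

def xcorr_lags(left, right, max_lag):
    """Normalized cross-correlation of two channels at each integer lag."""
    vals = []
    for k in range(-max_lag, max_lag + 1):
        a = left[max_lag + k : len(left) - max_lag + k]
        b = right[max_lag : len(right) - max_lag]
        vals.append(float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))))
    return np.array(vals)

rng = np.random.default_rng(4)
n, max_lag = 5000, 8
s1, s2 = rng.normal(0, 1, n + 20), rng.normal(0, 1, n + 20)

# One talker, 3-sample delay between the channels: one sharp peak.
one = xcorr_lags(s1[10 : 10 + n], s1[13 : 13 + n], max_lag)
# Two simultaneous talkers at different angles (lags +3 and -5): two peaks.
two = xcorr_lags(s1[10 : 10 + n] + s2[10 : 10 + n],
                 s1[13 : 13 + n] + s2[5 : 5 + n], max_lag)
print(int(np.argmax(one)) - max_lag)  # prints 3: the single talker's lag
```

Counting how concentrated the correlation function is across lags is one simple way to turn this picture into an overlap detector.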
[speaker005:] It seems like that would be pretty strong. [speaker004:] Yeah. Oh, yeah. [speaker003:] Yeah. Yeah. [speaker005:] Across [disfmarker] the same axis, you don't have as much to differentiate. And so my point was just that it's [disfmarker] it's gonna be differentially [disfmarker] differentially varia valuable. [speaker003:] Yeah. [speaker004:] Well, we d yeah, we don't have a third dimension there. Yeah, so it's [disfmarker] [speaker002:] Right. [speaker005:] I mean, it's not to say [disfmarker] I mean, I certainly think it's extremely val [comment] And we [disfmarker] we humans [pause] n n depend on [pause] you know, these [disfmarker] these binaural cues. [speaker003:] Yeah, yeah. [speaker004:] But it's almost [disfmarker] but it's almost a [disfmarker] [vocalsound] I think what you're talking about i there's two things. [speaker005:] But. [speaker002:] Must do. [comment] Yeah. [speaker004:] There's a sensitivity issue, and then there's a pathological error uh issue. So th the one where someone is just right directly in line is sort of a pathological error. [speaker005:] Yes. Yeah. [speaker004:] If someone just happens to be sitting right there then we won't get good information from it. [speaker003:] Yeah. [speaker005:] OK. and i and if there [disfmarker] So it [disfmarker] And if it's the two of you guys on the same side [disfmarker] [speaker004:] Uh, if they're [disfmarker] if they're close, it's just a question of the sensitivity. [speaker002:] Yep. [speaker005:] Yeah. OK. [speaker004:] So if the sensitivity is good enough [disfmarker] and we just [disfmarker] we just don't have enough, uh, experience with it to know how [disfmarker] [speaker005:] Yeah yeah, OK. Yeah. [speaker002:] But [disfmarker] [speaker003:] Yeah. [speaker005:] Oh I'm not [disfmarker] I'm not trying to argue against using it, by any means. I just wanted to point out that [disfmarker] that weakness, that it's topo topologically impossible to get it perfect for everybody. 
[speaker004:] Yeah. Mm hmm. [speaker002:] And I think Dan is still working on it. So. He actually [disfmarker] he wrote me about it a little bit, so. [speaker005:] Great. No, I don't mean to discourage that at all. [speaker004:] I mean, the other thing you can do [disfmarker] uh, if [disfmarker] I mean, i We're assuming that it would be a big deal just to get somebody [disfmarker] convince somebody to put two microphones in the PDA. But if you h put a third in, [vocalsound] you could put in the other axis. And then you know [disfmarker] then you're sort of [disfmarker] Yeah, then [disfmarker] then you pretty much could cover [disfmarker] [speaker001:] Once you got two [disfmarker] [speaker005:] Interesting. [speaker003:] Yeah. [speaker001:] Well what about just doing it from these mikes? [speaker005:] Interesting. [speaker001:] You know? [speaker003:] Yeah. [speaker002:] Yep. [speaker003:] It will be more interesting to study the PZM because the [disfmarker] the [disfmarker] the separation [disfmarker] I [disfmarker] I think [disfmarker] [speaker004:] Uh [@ @] [comment] [vocalsound] But but that's [disfmarker] I mean, we can we'll be [disfmarker] all of this is there for us to study. [speaker002:] Then they're much broader. Yeah, we can do whatever we want. [speaker003:] Yeah. [speaker004:] But [disfmarker] [vocalsound] but [disfmarker] but the thing is, uh, one of the [disfmarker] at least one of the things I was hoping to get at with this is what can we do with what we think would be the normal situation if some people get together and one of them has a PDA. [speaker002:] Whatever you're interested in. [speaker003:] Yeah. [speaker001:] That's what I was asking about, what are the constraints? [speaker003:] Yeah. Yeah. [speaker004:] Right. Yeah. [speaker003:] Yeah. Yeah. [speaker004:] Well, that's [disfmarker] that's the constraint of one question that I think both Adam and I were [disfmarker] were [disfmarker] were interested in. 
[speaker002:] Well [disfmarker] [speaker001:] Mm hmm. [speaker002:] Yep. [speaker001:] Mm hmm. [speaker004:] Uh, [speaker003:] Yeah. [speaker004:] but [disfmarker] you know if you can instrument a room, this is really minor league compared with what some people are doing, right? Some people at [disfmarker] at [disfmarker] uh, yeah, at Brown and [disfmarker] and [disfmarker] and [disfmarker] and [disfmarker] at uh [pause] um and at Cape, [speaker002:] Big micro [@ @] arrays. [speaker003:] Yeah. [speaker001:] Didn't they have something at Cape? [speaker004:] they both have these, you know, big arrays on the wall. And you know, if you could do that, you've got microphones all over the place [speaker002:] Very finely. [speaker004:] uh, you know p tens of microphones, and [disfmarker] and uh [disfmarker] [speaker001:] Oh! I saw a demo. [speaker003:] Oh, right, oh, yeah. [speaker004:] And if you do that then you can really get very nice uh kind of selectivity [disfmarker] [speaker001:] Yeah. [speaker002:] Oh, I saw one that was like a hundred microphones, a ten by ten array. [speaker004:] Yeah. Yeah. [speaker003:] Hundred. [speaker001:] And you could [disfmarker] In a noisy room, they could have all kinds of noises and you can zoom right in on somebody. [speaker003:] Yeah. [speaker002:] And they had very precision. [speaker003:] Yeah. [speaker002:] Right. [speaker004:] Ye Pretty much. Yeah. [speaker003:] Very complex, uh [disfmarker] Yeah. [speaker002:] It was all in software and they [disfmarker] and you could pick out an individual beam and listen to it. [speaker004:] Yeah. [speaker001:] That is cool. [speaker003:] Yeah. [speaker002:] It was [disfmarker] yeah, it was interesting. [speaker004:] But, the reason why I haven't focused on that as the fir my first concern is because um, I'm interested in what happens for people, random people out in some random place where they're p having an impromptu discussion. 
And you can't just always go, "well, let's go to this heavily instrumented room that we spent tens of thousands of dollars to se to set up". [speaker003:] Yeah. Yeah. [speaker001:] No, what you need to do is you'd have a little fabric thing that you unroll and hang on a wall. It has all these mikes and it has a plug in jack to the PDA. [speaker005:] Interesting. [speaker004:] The other thing actually, that gets at this a little bit of something else I'd like to do, is what happens if you have two P D [speaker002:] But I think [disfmarker] Yep. [speaker004:] and they communicate with each other? And then [disfmarker] You know, they're in random positions, the likelihood that [disfmarker] I mean, basically there wouldn't be any [disfmarker] l likely to be any kind of nulls, if you even had two. If you had three or four it's [disfmarker] Yeah. [speaker003:] Yeah. [speaker001:] Ooo! [speaker002:] That's on my web pages. Yeah. [speaker001:] Network! [speaker005:] Interesting. [speaker002:] Though [disfmarker] All sorts of interesting things you can do with that, [speaker005:] Interesting. [speaker002:] I mean, not only can you do microphone arrays, but you can do all sorts of um multi band as well. [speaker005:] Hmm. [speaker003:] Yeah. [speaker004:] Yeah. [speaker005:] Ah! [speaker002:] So it's [disfmarker] it would be neat. [speaker001:] I still like my rug on the wall idea, so if anybody patents that, then [disfmarker] [speaker002:] But [disfmarker] I think [disfmarker] [speaker005:] Well, you could have strips that you stick to your clothing. [speaker002:] in terms of [disfmarker] Yeah. [speaker001:] Yeah! Hats? [speaker002:] In terms of the research [pause] th research, it's really [disfmarker] it's whatever the person who is doing the research wants to do. [speaker001:] Shirts. [speaker002:] So if [disfmarker] if Jose is interested in that, that's great. But if [disfmarker] if he's not, that's great too. [speaker004:] Yeah. Yeah. [speaker003:] Yeah, yeah. 
[speaker004:] Um, I [disfmarker] i I [disfmarker] i I would actually kind of like us to wind it down, see if we can still get to the end of the, uh, birthdays thing there. [speaker002:] Catch some tea? Um. [speaker004:] So [speaker002:] Well, I had a couple things that I did wanna bring out. [speaker004:] OK. [speaker002:] One is, do we need to sign new [disfmarker] these again? [speaker005:] Well, it's slightly different. So I [disfmarker] I would say it would be a good idea. Cuz [disfmarker] it [disfmarker] it's slightly different. [speaker001:] Are they new? [speaker002:] Yep. [speaker001:] Oh. [speaker004:] Oh, this morning we didn't sign anything cuz we said that if anybody had signed it already, we didn't have to. [speaker002:] Yeah, I [disfmarker] I should've checked with Jane first, but the ch the form has changed. [speaker005:] It's slightly different. [speaker004:] Ah oh. [speaker002:] So we may wanna have everyone sign the new form. [speaker003:] OK. [speaker005:] I had to make one [disfmarker] [speaker002:] Um, I had some things I wanted to talk about with the thresholding stuff I'm doing. But, if we're in a hurry, we can put that off. Um and then also anonymity, how we want to anonymize the data. [speaker005:] Well, should I [disfmarker] I mean I have some results to present, but I mean I guess we won't have time to do that this time. [speaker002:] Uh. [speaker005:] But it seems like um the anonymization is uh, is also something that we might wanna discuss in greater length. [speaker004:] Um. I mean, wha what [disfmarker] [speaker005:] If [disfmarker] if we're about to wind down, I think [disfmarker] what I would prefer is that we uh, delay the anonymization thing till next week, and I would like to present the results that I have on the overlaps. [speaker001:] We still have to do this, too, right? [speaker004:] Right. [speaker001:] Digits? [speaker004:] Right. [speaker002:] No well, we don't have to do digits. 
[speaker004:] Well, why don't we [disfmarker] Uh, so [@ @] OK. [@ @] [comment] It sounds like u uh, there were [disfmarker] there were a couple technical things people would like to talk about. Why don't we just take a couple minutes to [disfmarker] to briefly [comment] do them, and then [disfmarker] and then [disfmarker] and then [disfmarker] and then [disfmarker] and then we [disfmarker] [speaker002:] OK, go ahead, Jane. [speaker005:] I'd [disfmarker] Oh, I'd prefer to have more time for my results. e Could I do that next week maybe? [speaker004:] OK. Oh, yeah. Sure. [speaker005:] OK, that's what I'm asking. [speaker004:] Oh yeah, yeah. [speaker005:] And I think the anonymization, if y if you want to proceed with that now, I just think that that's [disfmarker] that's a discussion which also n really deserves a lo a [disfmarker] you know, more that just a minute. [speaker004:] We could s [speaker005:] I really do think that, [speaker002:] Mm hmm. [speaker005:] because you raised a couple of possibilities yourself, you and I have discussed it previously, and there are different ways that people approach it, e and I think we should [disfmarker] [speaker002:] Alright. We're [disfmarker] we're just [disfmarker] We're getting enough data now that I'd sort of like to do it now, before I get overwhelmed with [disfmarker] once we decide how to do it [speaker005:] Well, OK. [speaker002:] going and dealing with it. [speaker005:] It's just [disfmarker] Yeah. OK. I [disfmarker] I'll give you the short version, but I do think it's an issue that we can't resolve in five minutes. [speaker002:] Mm hmm. [speaker005:] OK, so [disfmarker] the [disfmarker] the short thing is um, we have uh, tape recording uh, uh, sorry, digitized recor recordings. Those we won't be able to change. If someone says "Hey, Roger so and so". [speaker002:] Right. [speaker005:] So that's gonna stay that person's name. [speaker002:] Yep. 
[speaker005:] Now, in terms of like the transcript, the question becomes what symbol are you gonna put in there for everybody's name, and whether you're gonna put it in the text where he says "Hey Roger" or are we gonna put that person's anonymized name in instead? [speaker002:] No, because then that would give you a mapping, and you don't wanna have a mapping. [speaker005:] OK, so first decision is, we're gonna anonymize the same name for the speaker identifier and also in the text whenever the speaker's name is mentioned. [speaker002:] No. [speaker001:] I don't [disfmarker] [speaker002:] Because that would give you a mapping between the speaker's real name and the tag we're using, and we don't want [disfmarker] [speaker005:] I [disfmarker] I don't think you understood what I [disfmarker] what I said. [speaker002:] OK. [speaker005:] So [disfmarker] uh, so in [disfmarker] within the context of an utterance, someone says "So, Roger, what do you think?" OK. Then, uh, it seems to me that [disfmarker] Well, maybe I [disfmarker] uh it seems to me that if you change the name, the transcript's gonna disagree with the audio, and you won't be able to use that. [speaker002:] We don't [disfmarker] we wanna [disfmarker] we ha we want the transcript to be "Roger". [speaker001:] Right, you don't wanna do that. [speaker002:] Because if we made the [disfmarker] the transcript be the tag that we're using for Roger, someone who had the transcript and the audio would then have a mapping between the anonymized name and the real name, and we wanna avoid that. [speaker001:] Yeah. 
[speaker005:] OK, well, but then there's this issue of if we're gonna use this for a discourse type of thing, then [disfmarker] and, you know, Liz was mentioning stuff in a previous meeting about gaze direction and who's [disfmarker] who's the addressee and all, then to have "Roger" be the thing in the utterance and then actually have the speaker identifier who was "Roger" be "Frank", that's going to be really confusing and make it pretty much useless for discourse analysis. [speaker002:] Oh. Ugh! That's a good point. [speaker005:] Now, if you want to, you know, I mean, in some cases, I [disfmarker] I [disfmarker] I know that Susan Ervin Tripp in some of hers, uh, actually did do uh, um, a filter of the s signal where the person's name was mentioned, except [speaker004:] Yeah Yeah, once you get to the publication you can certainly do that. [speaker005:] And [disfmarker] and I [disfmarker] cer and I [disfmarker] So, I mean, the question then becomes one level back. Um, how important is it for a person to be identified by first name versus full name? Well, on the one hand, uh, it's not a full identity, we're taking all these precautions, um and they'll be taking precautions, which are probably even the more important ones, to [disfmarker] they'll be reviewing the transcripts, to see if there's something they don't like [disfmarker] [comment] OK. So, maybe, uh, maybe that's enough protection. On the other hand, this is a small [disfmarker] this is a small pool, and people who say things about topic X e who are researchers and well known in the field, they'll be identifiable and simply from the [disfmarker] from the first name. However, taking one step further back, they'd be identifiable anyway, even if we changed all the names. [speaker002:] Right. [speaker005:] So, is it really, um [disfmarker] [comment] You know? [speaker003:] Mmm. [speaker002:] Ugh! 
[speaker005:] Now, in terms of like [disfmarker] so I [disfmarker] I did some results, which I'll report on n next time, which do mention individual speakers by name. [speaker002:] Mm hmm. [speaker005:] Now, there, the Human Subjects Committee is very precise. You don't wanna mention subjects by name in published reports. Now, it would be very possible for me to take those data put them in a [disfmarker] in a study, and just change everybody's name for the purpose of the publication. And someone who looked [disfmarker] [speaker004:] You can go, you know, uh, "Z" [vocalsound] uh, for instance. [speaker005:] Yeah, exactly. Doesn't matter if [disfmarker] [speaker004:] Uh. Um, yeah, I mean, t it doesn't [disfmarker] I mean, I'm not knowledgeable about this, [speaker005:] That's the same thing you saw. [speaker004:] but it certainly doesn't bother me to have someone's first name in [disfmarker] in the [disfmarker] in the transcript. [speaker002:] OK. [speaker004:] Uh, I think [disfmarker] you don't wanna have their full name to be uh, listed. [speaker005:] Yeah, and [disfmarker] and in the form that they sign, it does say "your first name may arise in the course of the meetings". [speaker004:] And so [disfmarker] [speaker002:] Yeah. [speaker001:] Well [disfmarker] [speaker004:] Yeah. So again, th the issue is if you're tracking discourse things, you know, if someone says, uh, uh, "Frank said this" and then you wanna connect it to something later, you've gotta have this part where that's "Frank colon". [speaker005:] Or "your name". [speaker002:] Yeah, shoot! [speaker004:] Right? [speaker005:] Yeah, and [disfmarker] and [disfmarker] you know, even more i i uh, immediate than that just being able to, uh [disfmarker] Well, it just seems like to track [disfmarker] track from one utterance to the next utterance who's speaking and who's speaking to whom, cuz that can be important. [speaker002:] Mm hmm. 
[speaker005:] S i You know, "You raised the point, So and so", it's be kind of nice to be able to know who "you" was. [speaker004:] Yeah. [speaker002:] Shoot! I [disfmarker] I'm thinking too much. [speaker005:] And ac [comment] and actually you remember [disfmarker] furthermore, you remember last time we had this discussion of how you know, I was sort of avoiding mentioning people's names, [speaker004:] Yeah, I was too. Yeah. [speaker005:] and [disfmarker] and it was [disfmarker] and we made the decision that was kind of artificial. Well, I mean, if we're going to step in after the fact and change people's names in the transcript, we've basically done something one step worse. [speaker002:] Yep. [speaker004:] Yeah. [speaker003:] Yeah. [speaker002:] Well, I would sug I [disfmarker] I [disfmarker] don't wanna change the names in the transcript, [speaker005:] Misleading. [speaker002:] but that's because I'm focused so much on the acoustics instead of on the discourse, and so I think that's a really good point. [speaker004:] Yeah. [speaker002:] You're right, this is going to require more thought. [speaker004:] Yeah. L let me just back up this to make a [disfmarker] a brief comment about the, uh, what we're covering in the meeting. Uh I realize when you're doing this that uh [disfmarker] I mean, I didn't realize that you had a bunch of things that you wanted to talk about. Uh, and so, uh [disfmarker] and so I was proceeding some somewhat at random, frankly. So I think what would be helpful would be uh, i and I'll [disfmarker] I'll mention this to [disfmarker] to Liz and Andreas too, that um, before the meeting if anybody could send me, any [disfmarker] any, uh, uh, agenda items that they were interested in and I'll [disfmarker] I'll take the role of organizing them uh, into [disfmarker] into the agenda, [speaker005:] OK. Sure. [speaker004:] but I'd be very pleased to have everyone else [vocalsound] completely make up the agenda. 
I've no desire to [disfmarker] [vocalsound] to make it up, but if [disfmarker] if no one's told me things, then I'm just proceeding from my [disfmarker] my guesses, and [disfmarker] and uh, and i ye yeah, I [disfmarker] I'm sorry it ended up with your out your time to [disfmarker] I mean, I'm just always asking Jose what he's doing, you know, and [disfmarker] [vocalsound] and so it's [disfmarker] [pause] There's uh, there's obviously other things going on. [speaker002:] Mm hmm. [speaker005:] Oh, it's not a problem. Not a problem. Yeah. I just [disfmarker] I just couldn't do it in two minutes. [speaker002:] How will we [disfmarker] how would the person who's doing the transcript even know who they're talking about? Do you know what I'm saying? [speaker001:] "The person who's doing the transcript [disfmarker]" [comment] The IBM people? [speaker002:] Yeah. I mean, so so [disfmarker] how is that information gonna get labeled anyway? [speaker005:] How do you mean, who [disfmarker] what they're [disfmarker] who they're talking about? How do you mean? [speaker002:] I mean, so if I'm saying in a meeting, "oh and Bob, by the way, wanted [disfmarker] wanted to do so and so", [speaker001:] They're just gonna write "Bob" on it or do [@ @] [disfmarker] [speaker002:] if you're doing [disfmarker] Yeah, [@ @] they're just gonna write "Bob". And so. If you're [disfmarker] if you're doing discourse analysis, [speaker005:] They won't be able to change it themselves. [speaker004:] What ar how are they gonna do any of this? [speaker005:] Well, I [disfmarker] I'm betting we're gonna have huge chunks that are just totally un untranscribable by them. [speaker002:] Yeah, really. [speaker004:] I mean, they're gonna say speaker one, or speaker two or speaker I mean I [disfmarker] I [disfmarker] [speaker002:] Well, the current one they don't do speaker identity. [speaker003:] Yeah, I think [disfmarker] [speaker001:] They can't do that. 
[speaker002:] because in NaturallySpeaking, or, excuse me, in ViaVoice, it's only one person. and so in their current conventions there are no multiple speaker conventions. [speaker004:] So it may just be one long transcript of a bunch of words. [speaker005:] Oh. [vocalsound] I think that [disfmarker] My understanding from Yen [speaker002:] Yep. [speaker005:] Is it Yen Ching? Is that how you pronounce her name? [speaker004:] Uh [pause] Yu Ching, Yu Ching. Yeah. [speaker005:] Oh, uh Yu Ching? Yu Ching? [speaker002:] y Yu Ching. [speaker005:] was that um, they will [disfmarker] that they will adopt the [disfmarker] part of the conventions that [disfmarker] that we discussed, where they put speaker identifier down. But, you know, h they won't know these people, so I think it's [disfmarker] Well, they'll [disfmarker] they'll adopt some convention but we haven't specified to them [disfmarker] So they'll do something like speaker one, speaker two, is what I bet, but I'm betting there'll be huge variations in the accuracy of [disfmarker] of their labeling the speakers. We'll have to review the transcripts in any case. [speaker004:] And it [disfmarker] and it may very well be [disfmarker] I mean, since they're not going to sit there and [disfmarker] and [disfmarker] and worry ab about, uh, it being the same speaker, they may very well go the [disfmarker] eh the [disfmarker] the first se the first time it changes to another speaker, that'll be speaker two. [speaker005:] Yeah. [speaker004:] And the next time it'll be speaker three even if it's actually speaker one. [speaker005:] You know [disfmarker] Uh huh. You know, that would be a very practical solution on their part. [speaker003:] Yeah. [speaker005:] And [disfmarker] and [disfmarker] but then we would need to label it. [speaker004:] Yeah. [speaker003:] It's a good idea. [speaker005:] And that's OK. [speaker002:] Yeah we [disfmarker] we can probably regenerate it pretty easily from the close talking mikes. 
[speaker003:] Yeah. Yeah, I think [disfmarker] [speaker005:] Yes, I was thinking, the temp the time values of when it changes. [speaker003:] Yeah. [speaker004:] Yeah. [speaker002:] So. But I mean that doesn't [disfmarker] This doesn't answer the [disfmarker] the question. [speaker003:] Yeah. [speaker004:] But that [disfmarker] [speaker005:] That'd be very efficient. [speaker002:] The p It's a good point, "which [disfmarker] what do you do for discourse tracking?" [speaker003:] Because y y you don't know to know, eh [disfmarker] you don't need to know what i what is the iden identification of the [disfmarker] of the speakers. You only eh want to know [disfmarker] [speaker002:] Hmm. For [disfmarker] for acoustics you don't but for discourse you do. [speaker004:] Well, you do. [speaker003:] Ah, for discourse, yeah. Yeah. Yeah. [speaker004:] Yeah. If [disfmarker] if [disfmarker] if [disfmarker] if someone says, uh, "what [disfmarker] what is Jose doing?" and then Jose says something, you need to know that that was Jose responding. [speaker003:] Yeah, yeah. Yeah. Yeah. Yeah, yeah, yeah. Yeah. Yeah, [speaker005:] Mm hmm. [speaker004:] Uh, so. [speaker002:] Ugh, [comment] that's a problem. [speaker003:] Yeah. [speaker005:] Unless we adopt a different set of norms which is to not id to make a point of not identifying people by name, which then leads you to be more contextually ex explicit. [speaker001:] That would be hard. [speaker005:] Well, people are very flexible. You know? I mean, so when we did this las last week, I felt that you know, now, Andreas may, uh, [@ @] [comment] uh, he [disfmarker] he [disfmarker] i sometimes people think of something else at the same time and they miss a sentence or something, and [disfmarker] and because he missed something, then he missed the r the initial introduction of who we were talking about, and was [disfmarker] was unable to do the tracking. [speaker001:] Mm hmm. 
[speaker005:] But I felt like most of us were doing the tracking and knew who we were talking about and we just weren't mentioning the name. So, people are really flexible. [speaker003:] Yeah. [speaker001:] But, you know, like, at the beginning of this meeting [disfmarker] Or, you I think said, [pause] you know, or s Liz, said something about um, uh, "is Mari gonna use the equipment?" [speaker005:] Yeah? [speaker001:] I mean, how would you say that? I mean, you have to really think, you know, about what you're saying bef [speaker002:] if you wanted to anonymize. [speaker003:] Yeah. [vocalsound] Yeah, is [disfmarker] [speaker004:] "Is you know who up in you know where?" [speaker001:] Yeah. Yeah. [speaker002:] Mm hmm. [speaker004:] Right? Use the [disfmarker] [speaker001:] I think it would be really hard if we made a policy where we didn't say names, plus we'd have to tell everybody else. [speaker002:] Yeah, darn! I mean, what I was gonna say is that the other option is that we could bleep out the names. [speaker005:] Well, it [speaker003:] Yeah. [speaker002:] but then, again that kills your discourse analysis. [speaker001:] Right. [speaker005:] Uh huh. [speaker002:] Ugh! [speaker004:] Yeah. [speaker001:] Yeah. [speaker003:] Yeah. [speaker005:] Yeah. That's [disfmarker] that's the issue. [speaker001:] I [disfmarker] I think the [disfmarker] I think [disfmarker] I don't know, my own two cents worth is that you don't do anything about what's in the recordings, you only anonymize to the extent you can, the speakers have signed the forms and all. [speaker002:] Well, but that but that [disfmarker] as I said, that [disfmarker] that [disfmarker] that works great for the acoustics, but it [disfmarker] it hurts you a lot for trying to do discourse. [speaker005:] Well. Mm hmm. [speaker004:] Yeah. [speaker001:] Why? [speaker003:] Yeah. [speaker002:] Because you don't have a map of who's talking versus [pause] their [pause] name that they're being referred to. 
[speaker004:] Th Bec [speaker003:] Yeah. [speaker001:] I thought we were gonna get it labelled speaker one, speaker two [disfmarker] [speaker002:] Sure but, h then you have to know that Jose is speaker one and [disfmarker] [speaker001:] Why do you have to know his name? [speaker004:] OK, so suppose someone says, "well I don't know if I really heard what [disfmarker] uh, what Jose said." [speaker001:] Yeah. [speaker003:] Yeah. [speaker004:] And then, Jose responds. [speaker001:] Yeah. [speaker004:] And part of your learning about the dialogue is Jose responding to it. But it doesn't say "Jose", it says "speaker five". [speaker001:] OK. [speaker003:] Yeah. [speaker004:] So [pause] uh [pause] u [speaker003:] Yeah. [speaker001:] Oh, I see, you wanna associated the word "Jose" in the dialogue with the fact that then he responded. [speaker004:] Right. And so, if we pass out the data to someone else, and it says "speaker five" there, we also have to pass them this little guide that says that speaker five is Jose, [speaker002:] Someone who's doing discourse would wanna do that. [speaker004:] and if we're gonna do that we might as well [comment] give them "Jose" [disfmarker] say it was "Jose". [speaker002:] And that violates our privacy. [speaker003:] Yeah. Yeah. [speaker005:] Mm hmm. [speaker002:] And that violates our privacy issue. [speaker003:] Yeah. [speaker005:] Yeah. [speaker003:] Yeah. [speaker005:] Now, I [disfmarker] I think that we have these two phases in the [disfmarker] in the data, which is the one which is o our use, University of Washington's use, IBM, SRI. [speaker004:] Yeah. [speaker005:] And within that, it may be that it's sufficient to not uh change the [disfmarker] to not incorporate anonymization yet, but always, always in the publications we have to. [speaker002:] Mm hmm. [speaker005:] And I think also, when we take it that next step and distribute it to the world, we have to.
But I [disfmarker] but I don that's [disfmarker] that's a long way from now and [disfmarker] and it's a matter of [disfmarker] between now and then of d of deciding how [disfmarker] [speaker002:] Making some decisions? [speaker005:] i i it [disfmarker] You know, it may be s that we we'll need to do something like actually X out that part of the um [disfmarker] the audio, and just put in brackets "speaker one". [speaker002:] Yeah. For the public one. [speaker003:] the?? [speaker005:] You know. [speaker003:] Yeah. [speaker002:] You know, what we could do also is have more than one version of release. One that's public and one [disfmarker] one that requires licensing. And so the licensed one would [disfmarker] w we could [disfmarker] it would be a sticky limitation. [speaker005:] Uh huh. [speaker002:] You know, like [disfmarker] [speaker005:] I think that's risky. I think that the public should be the same. [speaker002:] Well, we can talk about that later. [speaker005:] I think that when we do that world release, it should be the same. [speaker004:] I [disfmarker] I agree. I [disfmarker] I agree with Jane. [speaker005:] For a bunch of reasons, legal. [speaker004:] I [disfmarker] I think that we [disfmarker] we have a [disfmarker] need to have a consistent licensing policy of some sort, and [disfmarker] [speaker005:] But I also think a consistent licensing policy is important. [speaker003:] Yeah. [speaker001:] Well, one thing to to take into consideration is w are there any um [disfmarker] For example, the people who are funding this work, they want this work to get out and be useful for discourse. If we all of a sudden do this and then release it to the public and it's not longer useful for discourse, you know [disfmarker] [speaker002:] Well, depending on how much editing we do, you might be able to [pause] still have it useful. because for discourse you don't need the audio. Right? So you could bleep out the names in the audio. [speaker001:] Mm hmm. 
[speaker002:] and use the anonymized one through the transcript. [speaker005:] Excuse me. We [disfmarker] we do need audio for discourse. [speaker004:] Uh. [speaker001:] But if you release both [disfmarker] [speaker002:] But, n excuse me, but you could bleep out just the names. [speaker004:] She [disfmarker] No, but she's saying, from the argument before, she wants to be able to say if someone said "Jose" in their [disfmarker] in their thing, and then connect to so to what he said later, then you need it. [speaker002:] Right. But in the transcript, you could say, everywhere they said "Jose" that you could replace it with "speaker seven". [speaker004:] Oh I see. [speaker005:] Yeah. But I [disfmarker] [pause] I also wanna say that people [disfmarker] [speaker004:] I see. [speaker002:] And then it wouldn't meet [disfmarker] match the audio anymore. [speaker005:] Uh huh. [speaker002:] But it would be still useful for the [disfmarker] [speaker001:] But if both of those are publically available [disfmarker] [speaker005:] Yeah. That's good. [speaker002:] But they [disfmarker] Right. [speaker004:] And th and the other thing is if [disfmarker] if [disfmarker] if Liz were here, [vocalsound] what she might say is that she wants to look if things that cut across between the audio and the dialogue, [speaker005:] Well, you see? So, it's complicated. [speaker004:] and so, [vocalsound] uh, [speaker005:] Mm hmm. Yeah. I think we have to think about w [@ @] [comment] how. I think that this can't be decided today. [speaker004:] yeah. Sorry. [speaker002:] Yeah, OK, good point. [speaker005:] But it's g but I think it was good to introduce the thing and we can do it next time. [speaker004:] Yeah. OK. [speaker002:] I didn't think [disfmarker] when I wrote you that email I wasn't thinking it was a big can of worms, but I guess it is. [speaker003:] OK. [speaker004:] Yeah, a lot of these things are. [speaker002:] Discourse. 
[speaker005:] Well it [disfmarker] Discourse, you know [disfmarker] Also I wanted to make the point that [disfmarker] that discourse is gonna be more than just looking at a transcript. [speaker002:] Yeah, ab absolutely. [speaker005:] It's gonna be looking at a t You know, and prosod prosodic stuff is involved, [speaker002:] Oh, yeah, sure. [speaker005:] and that means you're going to be listening to the audio, and then you come directly into this [disfmarker] confronting this problem. [speaker001:] Maybe we should just not allow anybody to do research on discourse, [speaker005:] So. [speaker001:] and then, we wouldn't have to worry about it. [speaker003:] OK. [speaker005:] Yeah, we should just market it to non English speaking countries. [speaker004:] Uh, maybe we should only have meetings between people who don't know one another and who are also amnesiacs who don't know their own name. [speaker003:] OK. [speaker002:] Did you read the paper on Eurospeech? [speaker005:] We could have little labels. I [disfmarker] I [disfmarker] I wanna introduce my Reservoir Dogs solution again, which is everyone has like "Mister White", "Mister Pink", [vocalsound] "Mister Blue". [speaker001:] Mister White. [speaker002:] Yeah. Did you read the paper a few years ago where they were reversing the syllables? They were di they they had the utterances. and they would extract out the syllables and they would play them backwards. [speaker001:] But [disfmarker] so, the syllables were in the same order, with respect to each other, but the acous [speaker002:] Everything was in the same order, but they were [disfmarker] the individual syll [comment] syllables were played backwards. And you could listen to it, [pause] and it would sound the same. [speaker001:] What did it sound like? [speaker002:] People had no difficulty in interpreting it. So what we need is something that's the reverse, that a speech recognizer works exactly the same on it but people can't understand it. 
[speaker004:] Oh, well that's [disfmarker] there's an easy way to do that. Jus jus just play it all backwards. [speaker002:] Oh right. The speech recognizer's totally symmetric, isn't it. [speaker004:] What, what does the speech recognizer care? [speaker002:] Ah, anyway. [speaker004:] Um, [speaker005:] Oh, do we do digits? Or [disfmarker]? What do we do? [speaker004:] Let's do digits. [speaker002:] Uh [disfmarker] OK, we'll quickly do digits. [speaker005:] Or do we just quit? [speaker004:] Yeah, we [disfmarker] we [disfmarker] we already missed the party. So. [speaker002:] OK. [speaker005:] Yeah. [speaker002:] OK, go off here. [speaker001:] I think it would be fun sometime to read them with different intonations. like as if you were talking like, "nine eight six eight seven?" [speaker005:] Well, you know, in the [disfmarker] in the one I transcribed, I did find a couple instances [disfmarker] [pause] I found one instance of contrastive stress, where it was like the string had a [disfmarker] li So it was like "nine eight two four, nine nine two four". [speaker001:] Oh, really. So they were like looking ahead, [speaker005:] And [disfmarker] [speaker001:] huh? [speaker005:] Well, they differed. I mean, at that [disfmarker] that session I did feel like they did it more as sentences. But, um, sometimes people do it as phone numbers. [comment] I mean, I've [disfmarker] I [pause] am sort of interested in [disfmarker] in [disfmarker] And sometimes, you know, I s And I [disfmarker] I never know. When I do it, I [disfmarker] I ask myself what I'm doing each time. [speaker002:] Yep. [speaker001:] Yeah, yeah. Well, I was thinking that it must get kind of boring for the people who are gonna have to transcribe this [speaker005:] and I [disfmarker] [speaker002:] Well, except, [speaker001:] They may as well throw in some interesting intonations. [speaker005:] I like your question intonation. [speaker002:] yeah. [speaker005:] That's very funny. I haven't heard that one. 
[speaker002:] We have the transcript. We have the actual numbers they're reading, so we're not necessarily depending on that. OK, I'm gonna go off. [speaker002:] I think for two years we were two months, uh, away from being done. [speaker001:] And what was that, Morgan? What project? [speaker002:] Uh, the, uh, TORRENT chip. [speaker001:] Oh. [speaker002:] Yeah. We were two [disfmarker] we were [disfmarker] [speaker003:] Yeah. [speaker002:] Uh, uh, we went through it [disfmarker] Jim and I went through old emails at one point and [disfmarker] and for two years there was this thing saying, yeah, we're [disfmarker] we're two months away from being done. It was very [disfmarker] very believable schedules, too. I mean, we went through and [disfmarker] with the schedules [disfmarker] and we [disfmarker] [speaker001:] It was true for two years. [speaker002:] Yeah. Oh, yeah. It was very true. [speaker001:] So, should we just do the same kind of deal where we [pause] go around and do, uh, status report [pause] kind of things? OK. And I guess when Sunil gets here he can do his last or something. [speaker003:] Mm hmm. [speaker001:] So. [speaker002:] Yeah. So we [pause] probably should wait for him to come before we do his. [speaker001:] OK. [speaker006:] OK. [speaker001:] That's a good idea. [speaker002:] Yeah. Yeah. [speaker001:] Any objection? Do y [speaker002:] All in favor [speaker001:] OK, M Do you want to start, Morgan? Do you have anything, or [disfmarker]? [speaker002:] Uh, I don't do anything. I [disfmarker] No, I mean, I [disfmarker] I'm involved in discussions with [disfmarker] with people about what they're doing, but I think they're [disfmarker] since they're here, they can talk about it themselves. [speaker006:] OK. So should I go so that, uh, [speaker001:] Yeah. Why don't you go ahead, Barry? [speaker006:] you're gonna talk about Aurora stuff, per se? [speaker001:] OK. [speaker006:] OK. Um. 
Well, this past week I've just been, uh, getting down and dirty into writing my [disfmarker] my proposal. So, um [disfmarker] Mmm. I just finished a section on, uh [disfmarker] on talking about these intermediate categories that I want to classify, um, as a [disfmarker] as a middle step. And, um, I hope to [disfmarker] hope to get this, um [disfmarker] a full rough draft done by, uh, Monday so I can give it to Morgan. [speaker001:] When is your, uh, meeting? [speaker006:] Um, my meeting [speaker001:] Yeah. [speaker006:] with, uh [disfmarker]? Oh, oh, you mean the [disfmarker] the quals. [speaker001:] The quals. Yeah. [speaker006:] Uh, the quals are happening in July twenty fifth. [speaker001:] Oh. Soon. [speaker006:] Yeah. D Day. [speaker001:] Uh huh. Yeah. [speaker006:] Uh huh. [speaker001:] So, is the idea you're going to do this paper and then you pass it out to everybody ahead of time and [disfmarker]? [speaker006:] Right, right. So, y you write up a proposal, and give it to people ahead of time, and you have a short presentation. And, um, and then, um [disfmarker] then everybody asks you questions. [speaker001:] Hmm. [speaker006:] Yeah. [speaker001:] I remember now. [speaker006:] Yep. So, um. [speaker001:] Have you d? [speaker006:] Y s [speaker001:] I was just gonna ask, do you want to say any [disfmarker] a little bit about it, or [disfmarker]? [speaker006:] Oh. Uh, a little bit about [disfmarker]? [speaker001:] Mmm. Wh what you're [disfmarker] what you're gonna [disfmarker] You said [disfmarker] you were talking about the, uh, particular features that you were looking at, [speaker006:] Oh, the [disfmarker] the [disfmarker] [speaker001:] or [disfmarker] [speaker006:] Right. Well, I was, um, I think one of the perplexing problems is, um, for a while I was thinking that I had to come up with a complete set of intermediate features [disfmarker] in intermediate categories to [disfmarker] to classify right away. 
But what I'm thinking now is, I would start with [disfmarker] with a reasonable set. Something [disfmarker] something like, um, um [disfmarker] like, uh, re regular phonetic features, just to [disfmarker] just to start off that way. And do some phone recognition. Um, build a system that, uh, classifies these, um [disfmarker] these feat uh, these intermediate categories using, uh, multi band techniques. Combine them and do phon phoneme recognition. Look at [disfmarker] then I would look at the errors produced in the phoneme recognition and say, OK, well, I could probably reduce the errors if I included this extra feature or this extra intermediate category. That would [disfmarker] that would reduce certain confusions over other confusions. And then [disfmarker] and then [vocalsound] reiterate. Um, build the intermediate classifiers. Uh, do phoneme recognition. Look at the errors. And then postulate new [disfmarker] or remove, um, intermediate categories. And then do it again. [speaker001:] So you're gonna use TIMIT? [speaker006:] Um, for that [disfmarker] for that part of the [disfmarker] the process, yeah, I would use TIMIT. [speaker001:] Mm hmm. [speaker006:] And, um, then [disfmarker] after [disfmarker] after, uh, um, doing TIMIT. Right? Um, that's [disfmarker] [vocalsound] that's, um [disfmarker] that's just the ph the phone recognition task. [speaker001:] Mm hmm. Yeah. [speaker006:] Uh, I wanted to take a look at, um, things that I could model within word. So, I would mov I would then shift the focus to, um, something like Schw Switchboard, uh, where I'd [disfmarker] I would be able to, um [disfmarker] to model, um, intermediate categories that span across phonemes, not just within the phonemes, themselves, [speaker001:] Mm hmm. [speaker006:] um, and then do the same process there, um, on [disfmarker] on a large vocabulary task like Switchboard. 
Uh, and for that [disfmarker] for that part I would [disfmarker] I'd use the SRI recognizer since it's already set up for [disfmarker] for Switchboard. And I'd run some [disfmarker] some sort of tandem style processing with, uh, my intermediate classifiers. [speaker001:] Oh. So that's why you were interested in getting your own features into the SRI files. [speaker006:] Yeah. That's why I [disfmarker] I was asking about that. [speaker001:] Yeah. [speaker006:] Yeah. [speaker001:] Yeah. [speaker006:] Um, and I guess that's [disfmarker] that's it. Any [disfmarker] any questions? [speaker001:] Sounds good. So you just have a few more weeks, huh? [speaker006:] Um, yeah. A few more. [speaker001:] It's about a month from now? [speaker006:] It's a [disfmarker] it's a month and [disfmarker] and a week. [speaker001:] Yeah. [speaker006:] Yeah. [speaker001:] So, uh, you want to go next, Dave? And we'll do [disfmarker] [speaker005:] Oh. OK, sure. So, um, last week I finally got results from the SRI system about this mean subtraction approach. And, um, we [disfmarker] we got an improvement, uh, in word error rate, training on the TI digits data set and testing on Meeting Recorder digits of, um, [vocalsound] six percent to four point five percent, um, on the n on the far mike data using PZM F, but, um, the near mike performance worsened, um, from one point two percent to two point four percent. And, um, wh why would that be, um, [vocalsound] considering that we actually got an improvement in near mike performance using HTK? And so, uh, with some input from, uh, Andreas, I have a theory in two parts. Um, first of all HTK [disfmarker] sorry, SR the SRI system is doing channel adaptation, and so HTK wasn't. Um, so this, um [disfmarker] This mean subtraction approach will do a kind of channel [pause] normalization and so that might have given the HTK use of it a boost that wouldn't have been applied in the SRI case. 
And also, um, the [disfmarker] Andreas pointed out the SRI system is using more parameters. It's got finer grained acoustic models. So those finer grained acoustic models could be more sensitive to the artifacts [pause] in the re synthesized audio. Um. And me and Barry were listening to the re synthesized audio and sometimes it seems like you get of a bit of an echo of speech in the background. And so that seems like it could be difficult for training, cuz you could have [pause] different phones [pause] lined up with a different foreground phone, [vocalsound] um, [vocalsound] depending on [pause] the timing of the echo. So, um, I'm gonna try training on a larger data set, and then, eh, the system will have seen more examples o of these artifacts and hopefully will be more robust to them. So I'm planning to use the Macrophone set of, um, read speech, and, um [disfmarker] Hmm. [speaker002:] I had another thought just now, which is, uh, remember we were talking before about [disfmarker] we were talking in our meeting about, uh, this stuff that [disfmarker] some of the other stuff that Avendano did, where they were, um, getting rid of low energy [pause] sections? Um, uh, if you [disfmarker] if you did a high pass filtering, as Hirsch did in [pause] late eighties to reduce some of the effects of reverberation, uh, uh, Avendano and Hermansky were arguing that, uh, perhaps one of the reasons for that working was ma may not have even been the filtering so much but the fact that when you filter a [disfmarker] an all positive power spectrum you get some negative values, and you gotta figure out what to do with them if you're gonna continue treating this as a power spectrum. So, what [disfmarker] what Hirsch did was, uh, set them to zero [disfmarker] set the negative values to zero. 
So if you imagine a [disfmarker] a waveform that's all positive, which is the time trajectory of energy, um, and, uh, shifting it downwards, and then getting rid of the negative parts, that's essentially throwing away the low energy things. And it's the low energy parts of the speech where the reverberation is most audible. You know, you have the reverberation from higher energy things showing up in [disfmarker] So in this case you have some artificially imposed [pause] reverberation like thing. I mean, you're getting rid of some of the other effects of reverberation, but because you have these non causal windows, you're getting these funny things coming in, uh, at n And, um, what if you did [disfmarker]? I mean, there's nothing to say that the [disfmarker] the processing for this re synthesis has to be restricted to trying to get it back to the original, according to some equation. [speaker005:] Uh huh. [speaker002:] I mean, you also could, uh, just try to make it nicer. [speaker005:] Mm hmm. [speaker002:] And one of the things you could do is, you could do some sort of VAD like thing and you actually could take very low energy sections and set them to some [disfmarker] some, uh, very low or [disfmarker] or near zero [pause] value. [speaker005:] Uh huh. [speaker002:] I mean, uh, I'm just saying if in fact it turns out that [disfmarker] that these echoes that you're hearing are, uh [disfmarker] or pre echoes, whichever they are [disfmarker] are [disfmarker] are, uh, part of what's causing the problem, you actually could get rid of them. [speaker005:] Uh huh. [speaker002:] Be pretty simple. [speaker005:] OK. [speaker002:] I mean, you do it in a pretty conservative way so that if you made a mistake you were more likely to [pause] keep in an echo than to throw out speech. [speaker005:] Hmm. [speaker007:] Um, what is the reverberation time [pause] like [pause] there? [speaker005:] In thi in this room? 
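The trajectory filtering Morgan describes above — high-pass filtering each frequency bin's power trajectory over time and, as Hirsch did, setting the resulting negative values to zero — can be sketched in a few lines. This is only a minimal illustration: the first-order filter form and its coefficient are assumptions for the sketch, not Hirsch's actual published design, and the function name is hypothetical.

```python
import numpy as np

def highpass_power_trajectories(power_spec, alpha=0.95):
    """High-pass filter each frequency bin's power trajectory over time,
    then clip negative values to zero, since a power spectrum must stay
    non-negative.  Clipping implicitly discards low-energy stretches,
    which is where reverberation is most audible.

    power_spec: (n_frames, n_bins) array of non-negative power values.
    alpha: illustrative pole for y[t] = x[t] - x[t-1] + alpha * y[t-1].
    """
    n_frames, n_bins = power_spec.shape
    out = np.zeros_like(power_spec, dtype=float)
    prev_x = power_spec[0].astype(float)
    prev_y = np.zeros(n_bins)
    for t in range(1, n_frames):
        x = power_spec[t].astype(float)
        y = x - prev_x + alpha * prev_y   # first-order high-pass per bin
        out[t] = y
        prev_x, prev_y = x, y
    # Hirsch's move: a power spectrum can't be negative, so zero it out.
    return np.maximum(out, 0.0)
```

A constant power trajectory comes out as all zeros (no modulation survives the high-pass), and a downward energy step — the kind of low-energy tail where reverberation lives — gets clipped away rather than producing a negative "power".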
[speaker007:] On, uh, the [disfmarker] the one what [disfmarker] the s in the speech that you are [disfmarker] you are using like? [speaker005:] Uh [disfmarker] Y Yeah. I [disfmarker] I [disfmarker] I [disfmarker] I don't know. [speaker002:] So, it's this room. [speaker007:] It's, uh [disfmarker] Oh, this room? [speaker002:] It's [disfmarker] it's this room. [speaker007:] OK. [speaker002:] So [disfmarker] so it's [disfmarker] these are just microphone [disfmarker] this micro close microphone and a distant microphone, he's doing these different tests on. [speaker006:] Oh. [speaker002:] Uh, we should do a measurement in here. I g think we never have. I think it's [disfmarker] I would guess, uh, point seven, point eight seconds f uh, R T [speaker006:] Hmm! [speaker002:] something like that? [speaker007:] Mm hmm. [speaker002:] But it's [disfmarker] you know, it's this room. So. [speaker007:] OK. Mm hmm. [speaker002:] Uh. But the other thing is, he's putting in [disfmarker] w I was using the word "reverberation" in two ways. He's also putting in, uh, a [disfmarker] he's taking out some reverberation, but he's putting in something, because he has [pause] averages over multiple windows stretching out to twelve seconds, which are then being subtracted from the speech. And since, you know, what you subtract, sometimes you'll be [disfmarker] you'll be subtracting from some larger number and sometimes you won't. [speaker007:] Mm hmm. Mm hmm. [speaker002:] And [disfmarker] So you can end up with some components in it that are affected by things that are seconds away. Uh, and if it's a low [pause] energy compo portion, you might actually hear some [pause] funny things. [speaker007:] Yeah. [speaker005:] O o one thing, um, I noticed is that, um, the mean subtraction seems to make the PZM signals louder after they've been re synthesized. So I was wondering, is it possible that one reason it helped with the Aurora baseline system is [pause] just as a kind of gain control? 
Cuz some of the PZM signals sound pretty quiet if you don't amplify them. [speaker003:] Mm hmm. I don't see why [disfmarker] why your signal is louder after processing, because yo [speaker005:] Yeah. I don't know why y, uh, either. [speaker003:] Yeah. [speaker002:] I don't think just multiplying the signal by two would have any effect. [speaker003:] Mm hmm. [speaker005:] Oh, OK. [speaker002:] Yeah. I mean, I think if you really have louder signals, what you mean is that you have [pause] better signal to noise ratio. [speaker003:] Well, well [disfmarker] [speaker002:] So if what you're doing is improving the signal to noise ratio, then it would be better. [speaker003:] Mm hmm. [speaker002:] But just it being bigger if [disfmarker] with the same signal to noise ratio [disfmarker] [speaker005:] It w i i it wouldn't affect things. [speaker003:] Yeah. [speaker005:] OK. [speaker002:] No. [speaker003:] Well, the system is [disfmarker] use [pause] the absolute energy, so it's a little bit dependent on [disfmarker] on the [pause] signal level. But, not so much, I guess. [speaker002:] Well, yeah. But it's trained and tested on the same thing. [speaker003:] Mmm. Mm hmm. [speaker002:] So if the [disfmarker] if the [disfmarker] if you change [vocalsound] in both training and test, the absolute level by a factor of two, it will n have no effect. [speaker003:] Yeah. [speaker001:] Did you add [pause] this data to the training set, for the Aurora? Or you just tested on this? [speaker005:] Uh [disfmarker] Um. Did I w what? Sorry? [speaker001:] Well, Morgan was just saying that, uh, as long as you do it in both training and testing, it shouldn't have any effect. [speaker005:] Yeah. [speaker001:] But I [disfmarker] I was [pause] sort of under the impression that you just tested with this data. [speaker005:] I [disfmarker] I b [speaker001:] You didn't [pause] train it also. [speaker005:] I [disfmarker] Right. I trained on clean TI digits. 
I [disfmarker] I did the mean subtraction on clean TI digits. But I didn't [disfmarker] I'm not sure if it made the clean ti TI digits any louder. [speaker002:] Oh, I see. [speaker005:] I only remember noticing it made the, um, PZM signal louder. [speaker002:] OK. Well, I don't understand then. Yeah. [speaker005:] Huh. I don't know. If it's [disfmarker] if it's [disfmarker] like, if it's trying to find a [disfmarker] a reverberation filter, it could be that this reverberation filter is making things quieter. And then if you take it out [disfmarker] that taking it out makes things louder. I mean. [speaker002:] Uh, no. I mean, [vocalsound] uh, there's [disfmarker] there's nothing inherent about removing [disfmarker] if you're really removing, [speaker005:] Nuh huh. The mean. [speaker002:] uh, r uh, then I don't [pause] see how that would make it louder. [speaker005:] OK. Yeah, I see. Yeah. OK. [speaker002:] So it might be just some [disfmarker] [speaker005:] So I should maybe listen to that stuff again. [speaker002:] Yeah. It might just be some artifact of the processing that [disfmarker] that, uh, if you're [disfmarker] Uh, yeah. [speaker005:] Oh. OK. [speaker002:] I don't know. [speaker003:] Eh [speaker001:] I wonder if there could be something like, uh [disfmarker] for s for the PZM data, uh, you know, if occasionally, uh, somebody hits the table or something, you could get a spike. Uh. I'm just wondering if there's something about the, um [disfmarker] you know, doing the mean normalization where, uh, it [disfmarker] it could cause [pause] you to have better signal to noise ratio. Um. [speaker002:] Well, you know, there is this. Wait a minute. It [disfmarker] it [disfmarker] i maybe [disfmarker] i If, um [disfmarker] Subtracting the [disfmarker] the mean log spectrum is [disfmarker] is [disfmarker] is like dividing by the spectrum. 
So, depending what you divide by, if your [disfmarker] if s your estimate is off and sometimes you're [disfmarker] you're [disfmarker] you're getting a small number, you could make it bigger. [speaker001:] Mm hmm. [speaker005:] Mm hmm. [speaker002:] So, it's [disfmarker] it's just a [disfmarker] a question of [disfmarker] there's [disfmarker] It [disfmarker] it could be that there's some normalization that's missing, or something [speaker005:] Mm hmm. [speaker002:] to make it [disfmarker] Uh, y you'd think it shouldn't be larger, but maybe in practice it is. [speaker005:] Hmm. [speaker002:] That's something to think about. I don't know. [speaker003:] I had a question about the system [disfmarker] the SRI system. So, [vocalsound] you trained it on TI digits? But except this, it's exactly the same system as the one that was tested before and that was trained on [pause] Macrophone. Right? So on TI digits it gives you one point two percent error rate and on Macrophone it's still O point eight. Uh, but is it [pause] exactly the same system? [speaker005:] Uh. I think so. If you're talking about the Macrophone results that Andreas had about, um, a week and a half ago, I think it's the same system. [speaker003:] Hmm. Mm hmm. So you use VTL uh, vocal tract length normalization and, um, like MLLR transformations also, [speaker005:] Mm hmm. [speaker003:] and [disfmarker] [speaker005:] That's [disfmarker] [speaker002:] I'm sorry, [speaker003:] all that stuff. [speaker002:] was his point eight percent, er, a [disfmarker] a result on testing on Macrophone or [disfmarker] or training? [speaker003:] It was [pause] training on Macrophone and testing [disfmarker] yeah, on [disfmarker] on meeting digits. [speaker002:] Oh. So that was done already. So we were [disfmarker] Uh, and it's point eight? OK. [speaker003:] Mm hmm. [speaker002:] OK. [speaker003:] Yeah. 
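Morgan's aside above — that subtracting the mean log spectrum is the same as dividing by a spectrum — can be made concrete in a short sketch. In the linear domain the operation divides every frame by the per-bin geometric-mean spectrum, so if that estimate underestimates the true channel response, the output comes out larger than the input, which is one candidate explanation for the re-synthesized PZM signals sounding louder. The function name and the flooring constant are illustrative assumptions, not the actual system's code.

```python
import numpy as np

def mean_log_spectrum_subtract(mag_spec, eps=1e-10):
    """Subtract each bin's mean log magnitude over time.

    mag_spec: (n_frames, n_bins) array of non-negative magnitudes.
    Equivalent in the linear domain to dividing each frame by the
    per-bin geometric-mean spectrum.
    """
    log_spec = np.log(mag_spec + eps)
    mean_log = log_spec.mean(axis=0, keepdims=True)  # per-bin mean over frames
    return np.exp(log_spec - mean_log)
```

Note that frames below the geometric mean are scaled up by the division, which is exactly the "if your estimate is off you could make it bigger" effect under discussion.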
I [disfmarker] I've just been text [comment] testing the new [pause] Aurora front end with [disfmarker] well, Aurora system actually [disfmarker] so front end and HTK, um, acoustic models on the meeting digits and it's a little bit better than the previous system. We have [disfmarker] I have two point seven percent error rate. And before with the system that was proposed, it's what? It was three point nine. So. [speaker002:] Oh, that's a lot better. [speaker003:] We are getting better. [speaker002:] So, what [disfmarker] w? [speaker007:] With the [disfmarker] with the HTK back end? What we have for Aurora? [speaker003:] And [disfmarker] Yeah. Two point seven. [speaker007:] I know in the meeting, like [disfmarker] [speaker003:] On the meeting we have two point seven. [speaker007:] Right. Oh. [speaker006:] That's with the new IIR filters? [speaker003:] Uh. Yeah, yeah. [speaker006:] OK. [speaker003:] So, yeah, we have [pause] the new LDA filters, and [disfmarker] I think, maybe [disfmarker] I didn't look, but one thing that makes a difference is this DC offset compensation. Uh, eh [disfmarker] Do y did you have a look at [disfmarker] at the meet uh, meeting digits, if they have a DC component, or [disfmarker]? [speaker005:] I [disfmarker] I didn't. No. [speaker003:] Oh. [speaker002:] Hmm. [speaker007:] No. The DC component could be negligible. I mean, if you are [pause] recording it through a mike. I mean, any [disfmarker] all of the mikes have the DC removal [disfmarker] some capacitor sitting right in [pause] that bias it. [speaker002:] Yeah. But this [disfmarker] uh, uh, uh, no. Because, uh, there's a sample and hold in the A toD. And these period these typically do have a DC offset. [speaker007:] Oh, OK. [speaker002:] And [disfmarker] and they can be surprisingly large. It depends on the electronics. [speaker007:] Oh, so it is the digital [disfmarker] OK. It's the A toD that introduces the DC in. [speaker002:] Yeah. The microphone isn't gonna pass any DC. 
[speaker007:] Yeah. Yeah. Yeah. [speaker002:] But [disfmarker] but, [speaker007:] OK. [speaker002:] typi you know, unless [disfmarker] Actually, there are [pause] instrumentation mikes that [disfmarker] that do pass [disfmarker] go down to DC. But [disfmarker] but, [speaker007:] Mm hmm. [speaker002:] uh, no, it's the electronics. And they [disfmarker] and [disfmarker] [speaker007:] Mm hmm. [speaker002:] then there's amplification afterwards. And you can get, I think it was [disfmarker] I think it was in the [pause] Wall Street Journal data that [disfmarker] that [disfmarker] I can't remember, one of the DARPA things. There was this big DC DC offset [speaker001:] Mm hmm. [speaker002:] we didn't [disfmarker] we didn't know about for a while, while we were [pause] messing with it. And we were getting these terrible results. And then we were talking to somebody and they said, "Oh, yeah. Didn't you know? Everybody knows that. There's all this DC offset in th" So, yes. You can have DC offset in the data. [speaker007:] Oh, OK. OK. [speaker002:] Yeah. [speaker001:] So was that [disfmarker] was that everything, Dave? [speaker005:] Oh. And I also, um, did some experiments [pause] about normalizing the phase. Um. So I c I came up with a web page that people can take a look at. And, um, the interesting thing that I tried was, um, Adam and Morgan had this idea, um, since my original attempts to, um, take the mean of the phase spectra over time and normalize using that, by subtracting that off, didn't work. Um, so, well, that we thought that might be due to, um, problems with, um, the arithmetic of phases. They [disfmarker] they add in this modulo two pi way and, um, there's reason to believe that that approach of taking the mean of the phase spectrum wasn't really [pause] mathematically correct. 
So, [vocalsound] what I did instead is I [vocalsound] took the mean of the FFT spectrum without taking the log or anything, and then I took the phase of that, and I subtracted that phase [pause] off to normalize. But that, um, didn't work either. [speaker002:] See, we have a different interpretation of this. He says it doesn't work. I said, I think it works magnificently, but just not for the task we intended. Uh, it gets rid of the speech. [speaker006:] Uh, gets rid of the speech. [speaker001:] What does it leave? [speaker002:] Uh, it leaves [disfmarker] you know, it leaves the junk. I mean, I [disfmarker] I think it's [disfmarker] it's tremendous. [speaker006:] Oh, wow. [speaker002:] You see, all he has to do is go back and reverse what he did before, and he's really got something. [speaker001:] Well, could you take what was left over and then subtract that? [speaker002:] Ex exactly. [speaker007:] Yeah. [speaker002:] Yeah, you got it. [speaker006:] Yeah. [speaker002:] So, it's [disfmarker] it's a general rule. [speaker007:] Oh, it's [disfmarker] [speaker002:] Just listen very carefully to what I say and do the opposite. Including what I just said. [speaker005:] And, yeah, that's everything. [speaker001:] All set? Do you want to go, Stephane? [speaker003:] Um. Yeah. Maybe, concerning these d still, these meeting digits. I'm more interested in trying to figure out what's still the difference between the SRI system and the Aurora system. And [disfmarker] Um. Yeah. So, I think I will maybe train, like, gender dependent models, because [pause] this is also one big difference between [pause] the two systems. Um, the other differences were [pause] the fact that maybe the acoustic models of the SRI are more [disfmarker] SRI system are more complex. But, uh, Chuck, you did some experiments with this and [speaker001:] It didn't seem to help in the HTK system. [speaker003:] it was hard t to [disfmarker] to have some exper some improvement with this. Um. 
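The phase-normalization attempt Dave described just above — averaging the complex FFT values rather than the raw phase angles, taking the phase of that mean, and subtracting it per bin — can be sketched as follows. Averaging in the complex domain sidesteps the modulo-2-pi problem with averaging phases directly. The function name is hypothetical, and STFT analysis parameters are assumed handled elsewhere.

```python
import numpy as np

def normalize_phase(stft_frames):
    """Subtract, per frequency bin, the phase of the mean complex spectrum.

    stft_frames: (n_frames, n_bins) complex STFT matrix.
    Each bin is rotated so that the phase of its complex mean becomes
    zero; magnitudes are left untouched.
    """
    mean_spec = stft_frames.mean(axis=0)   # complex mean per bin
    mean_phase = np.angle(mean_spec)       # its phase (radians)
    # Rotate every frame by the opposite of the mean phase.
    return stft_frames * np.exp(-1j * mean_phase)
```

If every frame shares a common phase per bin, that common phase is removed exactly, while per-frame magnitudes survive — consistent with the observation that the operation normalizes something other than what was intended.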
[speaker002:] Well, it sounds like they also have [disfmarker] he [disfmarker] he's saying they have all these, uh, uh, different kinds of adaptation. [speaker003:] Mm hmm. [speaker002:] You know, they have channel adaptation. [speaker003:] Yeah. [speaker002:] They have speaker adaptation. [speaker003:] Right. Yeah. [speaker006:] Yeah. [speaker002:] Yeah. Yeah. [speaker001:] Well, there's also the normalization. Like they do, um [disfmarker] I'm not sure how they would do it when they're working with the digits, [speaker003:] The vocal tr [speaker001:] but, like, in the Switchboard data, there's, um [disfmarker] conversation side normalization for the [pause] non C zero components, [speaker003:] Yeah. Yeah. This is another difference. Their normalization works like on [disfmarker] on the utterance levels. [speaker001:] Mm hmm. [speaker003:] But we have to do it [disfmarker] We have a system that does it on line. So, it might be [disfmarker] [speaker001:] Right. [speaker003:] it might be better with [disfmarker] it might be worse if the [pause] channel is constant, [speaker001:] Yeah. [speaker003:] or [disfmarker] Nnn. [speaker007:] And the acoustic models are like k triphone models or [disfmarker] or is it the whole word? [speaker003:] SRI [disfmarker] it's [disfmarker] it's tr [speaker006:] SRI. [speaker007:] Yeah. [speaker003:] Yeah. I guess it's triphones. [speaker007:] It's triphone. [speaker002:] I think it's probably more than that. [speaker003:] Huh. [speaker002:] I mean, so they [disfmarker] they have [disfmarker] I [disfmarker] I thin think they use these, uh, uh, genone things. So there's [disfmarker] there's these kind of, uh, uh, pooled models and [disfmarker] and they can go out to all sorts of dependencies. [speaker007:] Oh. It's like the tied state. [speaker002:] So. [speaker001:] Mm hmm. [speaker002:] They have tied states and I think [disfmarker] I [disfmarker] I [disfmarker] I don't real I'm talk I'm just guessing here. [speaker007:] OK. 
[speaker002:] But I think [disfmarker] I think they [disfmarker] they don't just have triphones. I think they have a range of [disfmarker] of, uh, dependencies. [speaker003:] Mm hmm. [speaker007:] Mm hmm. [speaker003:] Mm hmm. [speaker006:] Hmm. [speaker003:] And [disfmarker] Yeah. Well. Um. Well, the first thing I [disfmarker] that I want to do is just maybe these gender things. Uh. And maybe see with [pause] Andreas if [disfmarker] Well, I [disfmarker] I don't know [pause] how much it helps, what's the model. [speaker001:] So [disfmarker] so the n stuff on the numbers you got, the two point seven, is that using the same training data that the SRI system used and got one point two? [speaker003:] That's right. So it's the clean [pause] TI digits training set. [speaker001:] So exact same training data? [speaker003:] Right. Mm hmm. [speaker001:] OK. [speaker003:] I guess you used the clean training set. [speaker005:] Right. For [disfmarker] with the SRI system [disfmarker] [speaker003:] Mm hmm. Well. [speaker005:] You know, the [disfmarker] the Aurora baseline is set up with these, um [disfmarker] [vocalsound] this version of the clean training set that's been filtered with this G seven one two filter, and, um, to train the SRI system on digits S Andreas used the original TI digits, um, under U doctor speech data TI digits, which don't have this filter. But I don't think there's any other difference. [speaker003:] Mm hmm. Mm hmm. Yeah. [speaker002:] So is that [disfmarker]? Uh, are [disfmarker] are these results comparable? So you [disfmarker] you were getting with the, uh, Aurora baseline something like two point four percent [pause] on clean TI digits, when, uh, training the SRI system with clean TR digits [disfmarker] [comment] TI digits. Right? [speaker005:] Um. Uh huh. [speaker002:] And [disfmarker] Yeah. And, so, is your two point seven comparable, where you're, uh, uh, using, uh, the submitted system? [speaker003:] Yeah. I think so. Yeah. [speaker002:] OK. 
[speaker003:] Mm hmm. [speaker002:] So it's [pause] about the same, [speaker005:] W w it was one [disfmarker] one point two [speaker002:] maybe a little worse. [speaker003:] Ye [speaker005:] with the SRI system, I [disfmarker] [speaker002:] I'm sorry. [speaker003:] Yeah. The complete SRI system is one point two. [speaker002:] You [disfmarker] you were HTK. [speaker003:] Yeah. [speaker002:] Right? OK. [speaker003:] Mm hmm. [speaker002:] That's right. So [disfmarker] OK, so [pause] the comparable number then, uh [pause] for what you were talking about then, since it was HTK, would be the [pause] um, two point f [speaker003:] It was four point something. Right? The HTK system with, uh, b [speaker005:] D d [speaker002:] Oh, right, right, right, right. [speaker005:] Do you mean the b? [speaker003:] MFCC features [disfmarker] [speaker005:] The baseline Aurora two system, trained on TI digits, tested on Meeting Recorder near, I think we saw in it today, and it was about six point six percent. [speaker003:] Oh. [speaker002:] Right. Right, right, right. OK. Alright. So [disfmarker] [speaker003:] So [disfmarker] Yeah. The only difference is the features, right now, [speaker002:] He's doing some [pause] different things. [speaker003:] between this and [disfmarker] [speaker002:] Yes. OK, good. So they are helping. [speaker003:] Mm hmm. [speaker002:] That's good to hear. [speaker003:] They are helping. [speaker002:] Yeah. [speaker003:] Yeah. Um. Yeah. And another thing I [disfmarker] I maybe would like to do is to [pause] just test the SRI system that's trained on Macrophone [disfmarker] test it on, uh, the noisy TI digits, [speaker002:] Yeah. [speaker003:] cuz I'm still wondering [pause] where this [pause] improvement comes from. When you train on Macrophone, it seems better on meeting digits. 
But I wonder if it's just because maybe [pause] Macrophone is acoustically closer to the meeting digits than [disfmarker] than TI digit is, which is [disfmarker] TI digits are very [pause] clean recorded digits [speaker002:] Mm hmm. [speaker003:] and [disfmarker] Uh, [speaker001:] You know, it would also be interesting to see, uh [disfmarker] to do the regular Aurora test, [speaker003:] f s [speaker001:] um, but use the SRI system instead of HTK. [speaker003:] That's [disfmarker] Yeah. That's what [pause] I wanted, just, uh [disfmarker] Yeah. So, just using the SRI system, test it on [disfmarker] and test it on [pause] Aurora TI digits. Right. [speaker001:] Why not the full Aurora, uh, test? [speaker003:] Um. Yeah. There is this problem of multilinguality yet. So we don't [disfmarker] [speaker001:] Mm hmm. [speaker002:] You'd have to train the SRI system with [disfmarker] with all the different languages. [speaker003:] i i We would have to train on [disfmarker] [speaker001:] Right. [speaker003:] Yeah. [speaker001:] Yeah. That's what I mean. So, like, comple [speaker002:] It'd be a [pause] lot of work. That's the only thing. [speaker003:] Yeah. It's [disfmarker] [speaker001:] Mmm. Well, I mean, [speaker003:] Mmm. [speaker001:] uh, uh, I guess the work would be into getting the [disfmarker] the files in the right formats, or something. Right? I mean [disfmarker] [speaker003:] Mm hmm. [speaker001:] Because when you train up the Aurora system, you're, uh [disfmarker] you're also training on all the data. [speaker003:] That's right. Yeah. [speaker001:] I mean, it's [disfmarker] [speaker003:] Yeah. I see. Oh, so, OK. Right. I see what you mean. [speaker002:] That's true, but I think that also when we've had these meetings week after week, oftentimes people have not done the full arrange of things because [disfmarker] on [disfmarker] on whatever it is they're trying, because it's a lot of work, even just with the HTK. [speaker001:] Mm hmm. Mm hmm. 
[speaker002:] So, it's [disfmarker] it's a good idea, but it seems like [pause] it makes sense to do some pruning [speaker001:] Mm hmm. [speaker002:] first with a [disfmarker] a test or two that makes sense for you, [speaker001:] Yeah. [speaker002:] and then [pause] take the likely candidates and go further. [speaker001:] Yeah. [speaker003:] Mm hmm. Yeah. But, just testing on TI digits would already give us some information [pause] about what's going on. And [disfmarker] mm hmm. Uh, yeah. OK. Uh, the next thing is this [disfmarker] this VAD problem that, um, um [disfmarker] So, I'm just talking about the [disfmarker] the curves that I [disfmarker] I sent [disfmarker] [vocalsound] I sent you [disfmarker] so, whi that shows that [vocalsound] when the SNR decrease, [vocalsound] uh, the current [pause] VAD approach doesn't drop much frames [pause] for some particular noises, uh, which might be then noises that are closer to speech, uh, acoustically. [speaker002:] I i Just to clarify something for me. I [speaker003:] Mm hmm. [speaker002:] They were supp Supposedly, in the next evaluation, they're going to be supplying us with boundaries. So does any of this matter? I mean, other than our interest in it. [speaker003:] Uh [disfmarker] [speaker002:] Uh [disfmarker] [speaker003:] Well. First of all, the boundaries might be, uh [disfmarker] like we would have t two hundred milliseconds or [disfmarker] before and after speech. Uh. So removing more than that might still make [pause] a difference [pause] in the results. And [disfmarker] [speaker002:] Do we [disfmarker]? I mean, is there some reason that we think that's the case? [speaker003:] No. Because we don't [disfmarker] didn't looked [pause] that much at that. [speaker002:] Yeah. [speaker003:] But, [vocalsound] still, I think it's an interesting problem. And [disfmarker] [speaker002:] Oh, yeah. [speaker003:] Um. Yeah. 
[speaker002:] But maybe we'll get some insight on that when [disfmarker] when, uh, the gang gets back from Crete. Because [pause] there's lots of interesting problems, of course. [speaker003:] Mm hmm. Yeah, [speaker002:] And then the thing is if [disfmarker] if they really are going to have some means of giving us [pause] fairly tight, uh, boundaries, then that won't be so much the issue. [speaker003:] yeah. Mm hmm. Mm hmm. [speaker002:] Um But [vocalsound] I don't know. [speaker007:] Because w we were wondering whether that [pause] VAD is going to be, like, a realistic one or is it going to be some manual segmentation. And then, like, if [disfmarker] if that VAD is going to be a realistic one, then we can actually use their markers to shift the point around, I mean, the way we want to find a [disfmarker] [speaker002:] Mm hmm. [speaker007:] I mean, rather than keeping the twenty frames, we can actually move the marker to a point which we find more [pause] suitable for us. But if that is going to be something like a manual, uh, segmenter, then we can't [pause] use that information anymore, [speaker002:] Right. [speaker003:] Mm hmm. [speaker007:] because that's not going to be the one that is used in the final evaluation. [speaker002:] Right. [speaker007:] So. We don't know what is the type of [pause] [vocalsound] [pause] VAD which they're going to provide. [speaker002:] Yeah. [speaker003:] Yeah. And actually there's [disfmarker] Yeah. There's an [disfmarker] uh, I think it's still for [disfmarker] even for the evaluation, uh, it might still be interesting to [vocalsound] work on this because [pause] the boundaries apparently that they would provide is just, [vocalsound] um, starting of speech and end of speech [pause] uh, at the utterance level. And [disfmarker] Um. [speaker007:] With some [disfmarker] some gap. 
[speaker003:] So [disfmarker] [speaker007:] I mean, with some pauses in the center, provided they meet that [disfmarker] whatever the hang over time which they are talking. [speaker003:] Yeah. But when you have like, uh, five or six frames, both [disfmarker] [speaker007:] Yeah. Then the they will just fill [disfmarker] fill it up. [speaker003:] it [disfmarker] it [disfmarker] with [disfmarker] [speaker007:] I mean, th [disfmarker] Yeah. [speaker003:] Yeah. [speaker002:] So if you could get at some of that, uh [disfmarker] [speaker003:] So [disfmarker] [speaker002:] although that'd be hard. [speaker003:] Yeah. It might be useful for, like, noise estimation, and a lot of other [pause] things that we want to work on. [speaker002:] But [disfmarker] but [disfmarker] [speaker007:] Yeah. [speaker002:] Yeah. Right. OK. [speaker003:] But [disfmarker] Mmm. Yeah. So I did [disfmarker] I just [pause] started to test [pause] putting together two VAD which was [disfmarker] was not much work actually. Um, I im re implemented a VAD that's very close to the, [vocalsound] um, energy based VAD [vocalsound] that, uh, the other Aurora guys use. Um. So, which is just putting a threshold on [pause] the noise energy, [speaker002:] Mm hmm. [speaker003:] and, detect detecting the first [pause] group of four frames [pause] that have a energy that's above this threshold, and, uh, from this point, uh, tagging the frames there as speech. So it removes [vocalsound] the first silent portion [disfmarker] portion of each utterance. And it really removes it, um, still o on the noises where [pause] our MLP VAD doesn't [pause] work a lot. [speaker002:] Mmm. [speaker003:] Uh, and [disfmarker] [speaker002:] Cuz I would have thought that having some kind of spectral [pause] information, uh [disfmarker] uh, you know, in the old days people would use energy and zero crossings, for instance [disfmarker] uh, would give you some [pause] better performance. Right? 
Cuz you might have low energy fricatives or [disfmarker] or, uh [pause] stop consonants, or something like that. [speaker003:] Mm hmm. [speaker002:] Uh. [speaker003:] Yeah. So, your point is [disfmarker] will be to u use whatever [disfmarker] [speaker002:] Oh, that if you d if you use purely energy and don't look at anything spectral, then you don't have a good way of distinguishing between low energy speech components and [pause] nonspeech. [speaker003:] Mm hmm. [speaker002:] And, um, just as a gross generalization, most nonsp many nonspeech noises have a low pass kind of characteristic, some sort of slope. And [disfmarker] and most, um, low energy speech components that are unvoiced have a [disfmarker] a high pass kind of characteristic [disfmarker] [speaker003:] Mm hmm. [speaker002:] an upward slope. [speaker003:] Yeah. [speaker002:] So having some kind of a [disfmarker] uh, you know, at the beginning of a [disfmarker] of a [disfmarker] of an S sound for instance, just starting in, it might be pretty low energy, [speaker003:] Mm hmm. [speaker002:] but it will tend to have this high frequency component. Whereas, [vocalsound] a [disfmarker] a lot of rumble, and background noises, and so forth will be predominantly low frequency. Uh, you know, by itself it's not enough to tell you, but it plus energy is sort of [disfmarker] [speaker003:] Yeah. [speaker002:] it plus energy plus timing information is sort of [disfmarker] [speaker003:] Mm hmm. [speaker002:] I mean, if you look up in Rabiner and Schafer from like twenty five years ago or something, that's sort of [pause] what they were using then. [speaker003:] Mm hmm. [speaker002:] So it's [disfmarker] it's not a [disfmarker] [speaker003:] Mm hmm. [speaker006:] Hmm. [speaker003:] So, yeah. It [disfmarker] it might be that what I did is [disfmarker] so, removes like [vocalsound] low, um, [vocalsound] uh [disfmarker] low energy, uh, speech frames. 
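The energy-plus-zero-crossings idea Morgan describes (Rabiner-and-Schafer style) can be sketched roughly as follows. Everything here — function names, frame length, thresholds — is illustrative, not anything from the systems actually being discussed:

```python
import numpy as np

def frame_features(frame):
    """Per-frame log energy (dB) and zero-crossing rate."""
    energy = 10.0 * np.log10(np.sum(frame ** 2) + 1e-10)
    zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
    return energy, zcr

def energy_zcr_vad(frames, energy_thresh, zcr_thresh):
    """Mark a frame as speech if its energy is above threshold, OR if it
    is low-energy but has the high zero-crossing rate typical of unvoiced
    fricatives (upward spectral slope), which energy alone would miss."""
    decisions = []
    for frame in frames:
        energy, zcr = frame_features(frame)
        decisions.append(energy > energy_thresh or zcr > zcr_thresh)
    return np.array(decisions)
```

A quiet white-noise-like "fricative" frame gets caught by the zero-crossing test even though its energy is far below threshold, while a low-frequency rumble frame (low ZCR) is rejected — which is exactly the distinction being made in the discussion.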
Because [pause] the way I do it is I just [disfmarker] I just combine the two decisions [disfmarker] so, the one from the MLP and the one from the energy based [disfmarker] with the [disfmarker] with the and [pause] operator. So, I only [pause] keep the frames where the two agree [pause] that it's speech. So if the energy based dropped [disfmarker] dropped low energy speech, mmm, they [disfmarker] they are [disfmarker] they are lost. Mmm. [speaker002:] Mm hmm. [speaker003:] But s still, the way it's done right now it [disfmarker] it helps on [disfmarker] on the noises where [disfmarker] it seems to help on the noises where [vocalsound] our VAD was not very [pause] good. [speaker002:] Well, I guess [disfmarker] I mean, one could imagine combining them in different ways. But [disfmarker] but, I guess what you're saying is that the [disfmarker] the MLP based one has the spectral information. [speaker003:] Yeah. [speaker002:] So. [speaker003:] But [disfmarker] Yeah. But the way it's combined wi is maybe done [disfmarker] Well, yeah. [speaker002:] Well, you can imagine [disfmarker] [speaker003:] The way I use a an a "AND" operator is [disfmarker] So, it [disfmarker] I, uh [disfmarker] [speaker002:] Is [disfmarker]? [speaker003:] The frames that are dropped by the energy based system are [disfmarker] are, uh, dropped, even if the, um, MLP decides to keep them. [speaker002:] Right. Right. And that might not be optimal, [speaker003:] But, yeah. [speaker002:] but [disfmarker] [speaker003:] Mm hmm. [speaker002:] but [disfmarker] I mean, I guess in principle what you'd want to do is have a [disfmarker] [vocalsound] uh, a probability estimated by each one [speaker001:] No [speaker002:] and [disfmarker] and put them together. [speaker003:] Yeah. Mmm. M Yeah. [speaker001:] Something that [disfmarker] that I've used in the past is, um [disfmarker] when just looking at the energy, is to look at the derivative. 
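The hard AND combination Stephane describes, and Morgan's suggestion of combining per-frame probabilities from each detector instead, might look like this in outline. The names and the simple averaging rule are assumptions for illustration, not the actual combination used:

```python
import numpy as np

def combine_and(dec_energy, dec_mlp):
    """Hard AND: keep a frame only when both detectors call it speech.
    Any speech frame the energy-based detector drops is lost for good,
    even if the MLP was confident about it."""
    return np.logical_and(dec_energy, dec_mlp)

def combine_probs(p_energy, p_mlp, threshold=0.5):
    """Softer alternative: average the two per-frame speech probabilities
    and threshold the result, so one confident detector can rescue a
    frame the other is unsure about."""
    avg = 0.5 * (np.asarray(p_energy) + np.asarray(p_mlp))
    return avg > threshold
```

In the middle frame below, the energy detector alone says non-speech (0.2), so the AND rule drops it; the probability average keeps it because the MLP is confident.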
And you [pause] make your decision when the derivative is increasing for [pause] so many frames. Then you say that's beginning of speech. [speaker003:] Uh huh. [speaker001:] But, I'm [disfmarker] I'm trying to remember if that requires that you keep some amount of speech in a buffer. I guess it depends on how you do it. But [pause] I mean, that's [disfmarker] that's been a useful thing. [speaker002:] Yeah. [speaker003:] Mm hmm. [speaker006:] Mm hmm. [speaker007:] Yeah. Well, every everywhere has a delay associated with it. I mean, you still have to k always keep a buffer, [speaker001:] Mm hmm. [speaker007:] then only make a decision because [pause] you still need to smooth the [pause] decision further. [speaker001:] Right. Right. [speaker007:] So that's always there. [speaker001:] Yeah. OK. [speaker003:] Well, actually if I don't [disfmarker] maybe don't want to work too much of [disfmarker] on it right now. I just wanted to [disfmarker] to see if it's [disfmarker] [vocalsound] what I observed was the re was caused by this [disfmarker] this VAD problem. And it seems to be the case. [speaker002:] Mm hmm. [speaker003:] Um. Uh, the second thing is the [disfmarker] this spectral subtraction. Um. Um, which I've just started yesterday to launch a bunch of, uh, [nonvocalsound] twenty five experiments, uh, with different, uh, values for the parameters that are used. So, it's the Makhoul type spectral subtraction which use [pause] an over estimation factor. So, we substr I subtract more, [vocalsound] [vocalsound] um, [nonvocalsound] [vocalsound] noise than the noise spectra that [pause] is estimated [pause] on the noise portion of the s uh, the utterances. So I tried several, uh, over estimation factors. And after subtraction, I also add [pause] a constant noise, and I also try different, uh, [vocalsound] noise, uh, values and we'll see what happen. [speaker002:] Hmm. [speaker003:] Mm hmm. [speaker002:] OK. [speaker003:] Mm hmm. 
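A minimal sketch of the kind of over-estimating spectral subtraction being described, with a spectral floor standing in for the added constant noise. The alpha and beta values are placeholders, not the parameters from the twenty-five experiments:

```python
import numpy as np

def spectral_subtract(power_spec, noise_est, alpha=2.0, beta=0.01):
    """Makhoul-style power-spectral subtraction: subtract an over-estimate
    (alpha > 1) of the noise spectrum, then floor the result at a small
    fraction (beta) of the noise so residual peaks do not poke up alone
    as musical noise. Larger alpha/beta trade musical noise for speech
    distortion, as discussed above."""
    cleaned = power_spec - alpha * noise_est
    floor = beta * noise_est
    return np.maximum(cleaned, floor)
```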
But st still when we look at the, um [disfmarker] Well, it depends on the parameters that you use, but for moderate over estimation factors and moderate noise level that you add, you st have a lot of musical noise. Um. On the other hand, when you [pause] subtract more and when you add more noise, you get rid of this musical noise but [pause] maybe you distort a lot of speech. So. Well. Mmm. Well, it [disfmarker] until now, it doesn't seem to help. But We'll see. So the next thing, maybe I [disfmarker] what I will [pause] try to [disfmarker] to do is just [pause] to try to smooth mmm, [vocalsound] the, um [disfmarker] to smooth the d the result of the subtraction, to get rid of the musical noise, using some kind of filter, or [disfmarker] [speaker007:] Can smooth the SNR estimate, also. [speaker003:] Yeah. Right. Mmm. [speaker007:] Your filter is a function of SNR. Hmm? [speaker003:] Yeah. So, to get something that's [disfmarker] would be closer to [pause] what you tried to do with Wiener filtering. [speaker007:] Yeah. [speaker003:] And [disfmarker] Mm hmm. Yeah. [speaker007:] Actually, it's, uh [disfmarker] Uh. I don't know, it's [disfmarker] go ahead. And it's [disfmarker] [speaker003:] It [disfmarker] [speaker007:] go ahead. [speaker003:] Maybe you can [disfmarker] I think it's [disfmarker] That's it for me. [speaker007:] OK. So, uh [disfmarker] u th I've been playing with this Wiener filter, like. And there are [disfmarker] there were some bugs in the program, so I was p initially trying to clear them up. Because one of the bug was [disfmarker] I was assuming that always the VAD [disfmarker] uh, the initial frames were silence. It always started in the silence state, but it wasn't for some utterances. So the [disfmarker] it wasn't estimating the noise initially, and then it never estimated, because I assumed that it was always silence. [speaker003:] Mm hmm. So this is on SpeechDat Car Italian? [speaker007:] Yeah. SpeechDat Car Italian. 
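The bug Sunil describes — assuming every utterance starts in silence, so the noise never gets estimated when it doesn't — suggests a noise estimator that checks the VAD before seeding. This is a hypothetical sketch with an invented fallback (take the quietest frame), not the code from the experiments:

```python
import numpy as np

def estimate_noise(power_frames, vad, n_init=10, lam=0.98):
    """Seed the noise power spectrum from the first n_init frames, but
    only trust frames the VAD marks as non-speech; the bug above came
    from assuming utterances always start in the silence state. After
    seeding, update recursively (time constant lam) on later non-speech
    frames."""
    power_frames = np.asarray(power_frames, dtype=float)
    nonspeech = ~np.asarray(vad, dtype=bool)
    init = nonspeech[:n_init]
    if init.any():
        noise = power_frames[:n_init][init].mean(axis=0)
    else:
        # Utterance starts in speech: fall back to the quietest frame
        # seen, rather than never estimating noise at all.
        noise = power_frames[np.argmin(power_frames.sum(axis=1))]
    for frame, ns in zip(power_frames[n_init:], nonspeech[n_init:]):
        if ns:
            noise = lam * noise + (1.0 - lam) * frame
    return noise
```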
[speaker003:] So, in some cases s there are also [disfmarker] [speaker007:] Yeah. There're a few cases, actually, which I found later, that there are. [speaker003:] o Uh huh. [speaker007:] So that was one of the [pause] bugs that was there in estimating the noise. And, uh, so once it was cleared, uh, I ran a few experiments with [pause] different ways of smoothing the estimated clean speech and how t estimated the noise and, eh, smoothing the SNR also. And so the [disfmarker] the trend seems to be like, [vocalsound] uh, smoothing the [pause] current estimate of the clean speech for deriving the SNR, which is like [pause] deriving the Wiener filter, seems to be helping. Then updating it quite fast using a very small time constant. So we'll have, like, a few results where the [disfmarker] estimating the [disfmarker] the [disfmarker] More smoothing is helping. But still it's like [disfmarker] it's still comparable to the baseline. I haven't got anything beyond the baseline. But that's, like, not using any Wiener filter. And, uh, so I'm [disfmarker] I'm trying a few more experiments with different time constants for smoothing the noise spectrum, and smoothing the clean speech, and smoothing SNR. So there are three time constants that I have. So, I'm just playing around. So, one is fixed in the line, like [pause] Smoothing the clean speech is [disfmarker] is helping, so I'm not going to change it that much. But, the way I'm estimating the noise and the way I'm estimating the SNR, I'm just trying [disfmarker] trying a little bit. So, that h And the other thing is, like, putting a floor on the, uh, SNR, because that [disfmarker] if some [disfmarker] In some cases the clean speech is, like [disfmarker] when it's estimated, it goes to very low values, so the SNR is, like, very low. And so that actually creates a lot of variance in the low energy region of the speech. 
So, I'm thinking of, like, putting a floor also for the SNR so that it doesn't [pause] vary a lot in the low energy regions. And, uh. So. The results are, like [disfmarker] So far I've been testing only with the [pause] baseline, which is [disfmarker] which doesn't have any LDA filtering and on line normalization. I just want to separate the [disfmarker] the contributions out. So it's just VAD, plus the Wiener filter, plus the baseline system, which is, uh, just the spectral [disfmarker] I mean, the mel sp mel, uh, frequency coefficients. Um. And the other thing that I tried was [disfmarker] but I just [vocalsound] took of those, uh, [pause] [vocalsound] Carlos filters, which Hynek had, to see whether it really h helps or not. I mean, it was just a [disfmarker] a run to see whether it really degrades or it helps. And it's [disfmarker] it seems to be like it's not [vocalsound] hurting a lot by just blindly picking up one filter which is nothing but a [pause] four hertz [disfmarker] a band pass m m filter on the cubic root of the power spectrum. So, that was the filter that Hy uh, Carlos had. And so [disfmarker] Yeah. Just [disfmarker] just to see whether it really [disfmarker] it's [disfmarker] it's [disfmarker] is it worth trying or not. So, it doesn't seems to be degrading a lot on that. So there must be something that I can [disfmarker] that can be done with that type of noise compensation also, which [disfmarker] [vocalsound] I guess I would ask Carlos about that. I mean, how [disfmarker] how he derived those filters and [disfmarker] and where d if he has any filters which are derived on OGI stories, added with some type of noise which [disfmarker] what we are using currently, or something like that. So maybe I'll [disfmarker] [speaker002:] This is cubic root of power spectra? [speaker007:] Yeah. Cubic root of power spectrum. [speaker002:] So, if you have this band pass filter, you probably get n you get negative values. Right? [speaker007:] Yeah. 
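The smoothed-SNR Wiener filter with an SNR floor, as discussed, could be outlined like this. The time constant and floor value are illustrative, not the constants being tuned in the experiments:

```python
import numpy as np

def wiener_gains(power_frames, noise, tau_snr=0.7, snr_floor=0.1):
    """Per-frame, per-bin Wiener gain H = SNR / (1 + SNR), with the SNR
    estimate smoothed over time (tau_snr) and floored (snr_floor) so the
    gain does not fluctuate wildly in low-energy regions — the variance
    problem described above."""
    noise = np.asarray(noise, dtype=float)
    snr_smooth = np.zeros_like(noise)
    gains = []
    for frame in np.asarray(power_frames, dtype=float):
        # Instantaneous SNR from the current frame and the noise estimate
        snr_inst = np.maximum(frame / noise - 1.0, 0.0)
        snr_smooth = tau_snr * snr_smooth + (1.0 - tau_snr) * snr_inst
        snr = np.maximum(snr_smooth, snr_floor)
        gains.append(snr / (1.0 + snr))
    return np.array(gains)
```

With the floor in place, a pure-noise frame still gets a small nonzero gain rather than one that bounces between zero and whatever the noisy instantaneous estimate says.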
And I'm, like, floating it to z zeros right now. [speaker002:] OK. [speaker007:] So it has, like [disfmarker] the spectrogram has, like [disfmarker] Uh, it actually, uh, enhances the onset and offset of [disfmarker] I mean, the [disfmarker] the begin and the end of the speech. So it's [disfmarker] there seems to be, like, deep valleys in the begin and the end of, like, high energy regions, because the filter has, like, a sort of Mexican hat type structure. [speaker002:] Mm hmm. [speaker007:] So, those are the regions where there are, like [disfmarker] [speaker002:] Mm hmm. [speaker007:] when I look at the spectrogram, there are those deep valleys on the begin and the end of the speech. But the rest of it seems to be, like, pretty nice. [speaker002:] Mm hmm. [speaker007:] So. That's [pause] something I observe using that filter. And [disfmarker] Yeah. There are a few [disfmarker] very [disfmarker] not a lot of [disfmarker] because the filter doesn't have a [disfmarker] really a deep negative portion, so that it's not really creating a lot of negative values in the cubic root. So, I'll [disfmarker] I'll s may continue with that for some w I'll [disfmarker] I'll [disfmarker] Maybe I'll ask Carlos a little more about how to play with those filters, and [disfmarker] but while [pause] making this Wiener filter better. So. Yeah. That [disfmarker] that's it, Morgan. [speaker002:] Uh, last week you were also talking about building up the subspace [pause] stuff? [speaker007:] Yeah. I [disfmarker] I [disfmarker] I would actually m m didn't get enough time to work on the subspace last week. It was mostly about [pause] finding those bugs and th you know, things, [speaker002:] OK. [speaker007:] and I didn't work much on that. [speaker001:] How about you, Carmen? [speaker004:] Well, I am still working with, eh, VTS. And, one of the things that last week, eh, say here is that maybe the problem was with the diff because the signal have different level of energy. [speaker002:] Hmm? 
[speaker004:] And, maybe, talking with Stephane and with Sunil, we decide that maybe it was interesting to [disfmarker] to apply on line normalization before applying VTS. But then [vocalsound] we decided that that's [disfmarker] it doesn't work absolutely, because we modified also the noise. And [disfmarker] Well, thinking about that, we [disfmarker] we then [disfmarker] we decide that maybe is a good idea. We don't know. I don't hav I don't [disfmarker] this is [disfmarker] I didn't [pause] do the experiment yet [disfmarker] to apply VTS in cepstral domain. [speaker002:] The other thing [pause] is [disfmarker] So [disfmarker] so, in [disfmarker] i i and [disfmarker] Not [disfmarker] and C zero would be a different [disfmarker] So you could do a different normalization for C zero than for other things anyway. I mean, the other thing I was gonna suggest is that you could have [pause] two kinds of normalization with [disfmarker] with, uh, different time constants. So, uh, you could do some normalization [vocalsound] s uh, before the VTS, and then do some other normalization after. I don't know. But [disfmarker] but C zero certainly acts differently than the others do, [speaker004:] Uh. [speaker002:] so that's [disfmarker] [speaker003:] Mm hmm. [speaker004:] Well, we s decide to m to [disfmarker] to obtain the new expression if we work in the cepstral domain. And [disfmarker] Well. I am working in that now, [speaker002:] Uh huh. [speaker004:] but [vocalsound] I'm not sure if that will be usefu useful. I don't know. It's k it's k It's quite a lot [disfmarker] It's a lot of work. Well, it's not too much, [speaker002:] Uh huh. [speaker004:] but this [disfmarker] it's work. And I want to know if [disfmarker] if we have some [pause] feeling that [pause] the result [disfmarker] [speaker002:] Yeah. 
[speaker004:] I [disfmarker] I would like to know if [disfmarker] I don't have any feeling if this will work better than apply VTS aft in cepstral domain will work better than apply in m mel [disfmarker] in filter bank domain. I r I'm not sure. I don't [disfmarker] I don't know absolutely nothing. [speaker003:] Mm hmm. [speaker002:] Yeah. Well, you're [disfmarker] I think you're the first one here to work with VTS, so, uh, maybe we could call someone else up who has, ask them their opinion. Uh, [speaker003:] Mm hmm. [speaker002:] I don't [disfmarker] I don't have a good feeling for it. Um. [speaker007:] Pratibha. [speaker003:] Actually, the VTS that you tested before was in the log domain and so [pause] the codebook is e e kind of dependent on the [pause] level of the speech signal. [speaker004:] Yeah? [speaker003:] And [disfmarker] So I expect it [disfmarker] If [disfmarker] if you have something that's independent of this, I expect it to [disfmarker] it [disfmarker] to, uh, be a better model of speech. [speaker004:] To have better [disfmarker] [speaker003:] And. Well. [speaker002:] You [disfmarker] you wouldn't even need to switch to cepstra. Right? I mean, you can just sort of normalize the [disfmarker] [speaker003:] No. We could normali norm I mean, remove the median. [speaker002:] Yeah. Yeah. [speaker004:] Mm hmm. [speaker002:] And then you have [pause] one number which is very dependent on the level cuz it is the level, and the other which isn't. [speaker003:] Mm hmm. Yeah. But here also we would have to be careful about removing the mean [pause] of speech and [speaker004:] Ye [speaker003:] not of noise. Because it's like [pause] first doing general normalization [speaker004:] Yea [speaker003:] and then noise removal, which is [disfmarker] [speaker004:] Yeah. We [disfmarker] I was thinking to [disfmarker] to [disfmarker] to estimate the noise [pause] with the first frames and then apply the VAD, [speaker002:] Mm hmm. [speaker003:] Mm hmm. 
[speaker004:] before the on line normalization. We [disfmarker] we see [disfmarker] [speaker003:] Mm hmm. [speaker004:] Well, I am thinking [vocalsound] about that and working about that, [speaker002:] Yeah. [speaker004:] but I don't have result this week. [speaker002:] Sure. I mean, one of the things we've talked about [disfmarker] maybe it might be star time to start thinking about pretty soon, is as we look at the pros and cons of these different methods, how do they fit in with one another? Because [pause] we've talked about potentially doing some combination of a couple of them. [speaker004:] Mm hmm. [speaker002:] Maybe [disfmarker] maybe pretty soon we'll have some sense of what their [pause] characteristics are, so we can see what should be combined. [speaker003:] Mm hmm. [speaker001:] Is that it? [speaker002:] OK. [speaker001:] OK? [speaker002:] Why don't we read some digits? [speaker001:] Yep. Want to go ahead, Morgan? [speaker002:] Sure. [speaker001:] Transcript L dash two one five. [speaker002:] O K. [speaker003:] So, uh [disfmarker] [speaker002:] Alright. [speaker006:] Um, so I wanted to discuss digits briefly, but that won't take too long. [speaker003:] Oh good. Right. OK, agenda items, Uh, we have digits, What else we got? [speaker001:] New version of the presegmentation. [speaker003:] New version of presegmentation. [speaker002:] Um, do we wanna say something about the, an update of the, uh, transcript? [speaker007:] Yeah, why don't you summarize the [disfmarker] [speaker003:] Update on transcripts. [speaker007:] And I guess that includes some [disfmarker] the filtering for the, the ASI refs, too. [speaker002:] Mmm. [speaker003:] Filtering for what? [speaker007:] For the references that we need to go from the [disfmarker] the [pause] fancy transcripts to the sort of [nonvocalsound] brain dead. [speaker002:] It'll [disfmarker] it'll be [disfmarker] basically it'll be a re cap of a meeting that we had jointly this morning. [speaker003:] Uh huh. 
[speaker007:] With Don, as well. [speaker002:] Mm hmm. [speaker003:] Got it. Anything else more pressing than those things? So [disfmarker] So, why don't we just do those. You said yours was brief, so [disfmarker] [speaker006:] OK. OK well, the, w uh as you can see from the numbers on the digits we're almost done. The digits goes up to [pause] about four thousand. Um, and so, uh, we probably will be done with the TI digits in, um, another couple weeks. um, depending on how many we read each time. So there were a bunch that we skipped. You know, someone fills out the form and then they're not at the meeting and so it's blank. Um, but those are almost all filled in as well. And so, once we're [disfmarker] it's done it would be very nice to train up a recognizer and actually start working with this data. [speaker004:] So we'll have a corpus that's the size of TI digits? [speaker006:] And so [disfmarker] One particular test set of TI digits. [speaker004:] Test set, OK. [speaker006:] So, I [disfmarker] I extracted, Ther there was a file sitting around which people have used here as a test set. It had been randomized and so on and that's just what I used to generate the order. of these particular ones. [speaker004:] Oh! Great. Great. [speaker006:] Um [disfmarker] [speaker003:] So, I'm impressed by what we could do, Is take the standard training set for TI digits, train up with whatever, you know, great features we think we have, uh for instance, and then test on uh this test set. And presumably uh it should do reasonably well on that, and then, presumably, we should go to the distant mike, and it should do poorly. [speaker004:] Yeah. [speaker006:] Right. [speaker003:] And then we should get really smart over the next year or two, and it [disfmarker] that should get better. [speaker006:] And inc increase it by one or two percent, yeah. [speaker003:] Yeah, [vocalsound] Yeah. [speaker006:] Um, but, in order to do that we need to extract out the actual digits. 
[speaker003:] Right. [speaker006:] Um, so that [disfmarker] the reason it's not just a transcript is that there're false starts, and misreads, and miscues and things like that. And so I have a set of scripts and X Waves where you just select the portion, hit R, um, it tells you what the next one should be, and you just look for that. You know, so it [disfmarker] it'll put on the screen, "The next set is six nine, nine two two". And you find that, and, hit the key and it records it in a file in a particular format. [speaker003:] So is this [disfmarker] [speaker006:] And so the [disfmarker] the question is, should we have the transcribers do that or should we just do it? Well, some of us. I've been do I've done, eight meetings, something like that, just by hand. Just myself, rather. So it will not take long. Um [disfmarker] [speaker003:] Uh, what [disfmarker] what do you think? [speaker002:] My feeling is that we discussed this right before coffee and I think it's a [disfmarker] it's a fine idea partly because, um, it's not un unrelated to their present skill set, but it will add, for them, an extra dimension, it might be an interesting break for them. And also it is contributing to the, uh, c composition of the transcript cuz we can incorporate those numbers directly and it'll be a more complete transcript. So I'm [disfmarker] I think it's fine, that part. [speaker006:] There is [disfmarker] there is [disfmarker] [speaker003:] So you think it's fine to have the transcribers do it? [speaker002:] Mm hmm. [speaker003:] Yeah, OK. [speaker006:] There's one other small bit, which is just entering the information which at s which is at the top of this form, onto the computer, to go along with the [disfmarker] where the digits are recorded automatically. [speaker004:] Good. [speaker003:] Yeah. [speaker006:] And so it's just, you know, typing in name, times [disfmarker] time, date, and so on. 
Um, which again either they can do, but it is, you know, firing up an editor, or, again, I can do. Or someone else can do. [speaker002:] And, that, you know, I'm not, that [disfmarker] that one I'm not so sure if it's into the [disfmarker] the, things that, I, wanted to use the hours for, because the, the time that they'd be spending doing that they wouldn't be able to be putting more words on. [speaker003:] Mmm. [speaker002:] But that's really your choice, it's your [disfmarker] [speaker004:] So are these two separate tasks that can happen? Or do they have to happen at the same time before [disfmarker] [speaker006:] No they don't have [disfmarker] this [disfmarker] you have to enter the data before, you do the second task, but they don't have to happen at the same time. So it's [disfmarker] it's just I have a file whi which has this information on it, and then when you start using my scripts, for extracting the times, it adds the times at the bottom of the file. [speaker004:] OK. [speaker006:] And so, um, I mean, it's easy to create the files and leave them blank, and so actually we could do it in either order. [speaker004:] Oh, OK. [speaker006:] Um, it's [disfmarker] it's sort of nice to have the same person do it just as a double check, to make sure you're entering for the right person. But, either way. [speaker003:] Yeah. Yeah just by way of uh, uh, a uh, order of magnitude, uh, um, we've been working with this Aurora, uh data set. And, uh, the best score, on the, nicest part of the data, that is, where you've got training and test set that are basically the same kinds of noise and so forth, uh, is about, uh [disfmarker] I think the best score was something like five percent, uh, error, per digit. So, that [disfmarker] [speaker006:] Per digit. [speaker001:] Per digit. [speaker003:] You're right. So if you were doing [pause] ten digit, uh, recognition, [vocalsound] you would really be in trouble. [speaker004:] Mm hmm. 
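Morgan's point about five percent per-digit error can be made concrete: if each digit in a string is recognized independently, the chance of getting an n-digit string entirely right is (1 - e)^n, so at e = 0.05 a ten-digit string comes out wrong roughly forty percent of the time.

```python
def string_error_rate(per_digit_error, n_digits):
    """String error rate for an n-digit string, assuming each digit is
    recognized independently with the given per-digit error rate."""
    return 1.0 - (1.0 - per_digit_error) ** n_digits
```

The independence assumption is a simplification (connected-digit errors are correlated in practice), but it shows the order of magnitude behind "you would really be in trouble".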
[speaker003:] So [disfmarker] So the [disfmarker] The point there, and this is uh car noise uh, uh things, but [disfmarker] but real [disfmarker] real situation, well, "real", Um, the [disfmarker] uh there's one microphone that's close, that they have as [disfmarker] as this sort of thing, close versus distant. Uh but in a car, instead of [disfmarker] instead of having a projector noise it's [disfmarker] it's car noise. Uh but it wasn't artificially added to get some [disfmarker] some artificial signal to noise ratio. It was just people driving around in a car. So, that's [disfmarker] that's an indication, uh that was with, many sites competing, and this was the very best score and so forth, so. More typical numbers like [speaker004:] Although the models weren't, that good, right? I mean, the models are pretty crappy? [speaker003:] You're right. I think that we could have done better on the models, but the thing is that we got [disfmarker] this [disfmarker] this is the kind of typical number, for all of the, uh, uh, things in this task, all of the, um, languages. And so I [disfmarker] I think we'd probably [disfmarker] the models would be better in some than in others. [speaker004:] Hmm. [speaker003:] Um, so, uh. Anyway, just an indication once you get into this kind of realm even if you're looking at connected digits it can be pretty hard. [speaker002:] Hmm. It's gonna be fun to see how we, compare at this. [speaker003:] Yeah. [speaker004:] How did we do on the TI digits? [speaker002:] Very exciting. s [@ @]. [speaker006:] Well the prosodics are so much different s it's gonna be, strange. I mean the prosodics are not the same as TI digits, for example. [speaker003:] Yeah. [speaker006:] So I'm [disfmarker] I'm not sure how much of effect that will have. [speaker007:] What do you mean, the prosodics? [speaker004:] H how do [disfmarker] [speaker006:] Um, just what we were talking about with grouping. 
That with these, the grouping, there's no grouping at all, and so it's just [disfmarker] the only sort of discontinuity you have is at the beginning and the end. [speaker007:] So what are they doing in Aurora, are they reading actual phone numbers, [speaker006:] Aurora I don't know. I don't know what they do in Aurora. [speaker007:] or, a [disfmarker] a digit at a time, or [disfmarker]? [speaker003:] Uh, I'm not sure how [disfmarker] [speaker007:] Cuz it's [disfmarker] [speaker003:] no, no I mean it's connected [disfmarker] it's connected, uh, digits, [speaker007:] Connected. [speaker003:] yeah. [speaker007:] So there's also the [disfmarker] not just the prosody but the cross [disfmarker] the cross word modeling is probably quite different. [speaker003:] But. [speaker006:] But [disfmarker] Right. But in TI digits, they're reading things like zip codes and phone numbers and things like that, [speaker004:] H How [speaker007:] Right. [speaker006:] so it's gonna be different. [speaker004:] do we do on TI digits? [speaker006:] I don't remember. I mean, very good, right? [speaker003:] Yeah, I mean we were in the. [speaker006:] One and a half percent, two percent, something like that? [speaker003:] Uh, I th no I think we got under a percent, [speaker006:] Oh really? [speaker003:] but it was [disfmarker] but it's [disfmarker] but I mean. The very best system that I saw in the literature was a point two five percent or something that somebody had at [disfmarker] at Bell Labs, or. Uh, but. But, uh, sort of pulling out all the stops. [speaker006:] OK. [speaker002:] s [@ @]. It s strikes me that there are more [disfmarker] each of them is more informative because it's so, random, [speaker006:] Alright. [speaker004:] Hmm. [speaker003:] But I think a lot of systems sort of get half a percent, or three quarters a percent, [speaker006:] Right. [speaker003:] and we're [disfmarker] we're in there somewhere. 
[speaker006:] But that [disfmarker] I mean it's really [disfmarker] it's [disfmarker] it's close talking mikes, no noise, clean signal, just digits, I mean, every everything is good. [speaker003:] Yeah. [speaker007:] It's the beginning of time in speech recognition. [speaker006:] Yes, exactly. [speaker003:] Yeah. [speaker006:] And we've only recently got it to anywhere near human. [speaker007:] It's like the, single cell, you know, it's the beginning of life, [speaker004:] Pre prehistory. [speaker007:] yeah. [speaker006:] And it's still like an order of magnitude worse than what humans do. [speaker007:] Right. [speaker003:] Yeah. [speaker006:] So. [speaker003:] When [disfmarker] When they're wide awake, yeah. [speaker006:] Yeah. [speaker003:] Um, [speaker006:] After coffee. [speaker003:] after coffee, you're right. Not after lunch. [speaker006:] OK, so, um, what I'll do then is I'll go ahead and enter, this data. And then, hand off to Jane, and the transcribers to do the actual extraction of the digits. [speaker003:] Yeah. Yeah. One question I have that [disfmarker] that I mean, we wouldn't know the answer to now but might, do some guessing, but I was talking before about doing some model modeling of arti uh, uh, marking of articulatory, features, with overlap and so on. [speaker006:] Hmm. [speaker003:] And, and, um, On some subset. One thought might be to do this uh, on [disfmarker] on the digits, or some piece of the digits. Uh, it'd be easier, uh, and so forth. The only thing is I'm a little concerned that maybe the kind of phenomena, in w i i The reason for doing it is because the [disfmarker] the argument is that certainly with conversational speech, the stuff that we've looked at here before, um, just doing the simple mapping, from, um, the phone, to the corresponding features that you could look up in a book, uh, isn't right. It isn't actually right. 
In fact there's these overlapping processes where some voicing some up and then some, you know, some nasality is [disfmarker] comes in here, and so forth. And you do this gross thing saying "Well I guess it's this phone starting there". So, uh, that's the reasoning. But, It could be that when we're reading digits, because it's [disfmarker] it's for such a limited set, that maybe [disfmarker] maybe that phenomenon doesn't occur as much. I don't know. Di an anybody [disfmarker]? [pause] Do you have any [disfmarker]? [pause] Anybody have any opinion about that, [speaker002:] and that people might articulate more, and you that might end up with more [disfmarker] a closer correspondence. [speaker003:] Mm hmm. Yeah. [speaker006:] Yeah [disfmarker] that's [disfmarker] I [disfmarker] I agree. That [disfmarker] it's just [disfmarker] [speaker004:] Sort of less predictability, and [disfmarker] You hafta [disfmarker] [speaker002:] Mm hmm. [speaker003:] Yeah. [speaker007:] Well it's definitely true that, when people are, reading, even if they're re reading what, they had said spontaneously, that they have very different patterns. [speaker006:] It's a [disfmarker] Well [disfmarker] Would, this corpus really be the right one to even try that on? [speaker007:] Mitch showed that, and some, dissertations have shown that. [speaker003:] Right. [speaker007:] So the fact that they're reading, first of all, whether they're reading in a room of, people, or rea you know, just the fact that they're reading will make a difference. [speaker003:] Yeah. [speaker007:] And, depends what you're interested in. [speaker003:] See, I don't know. So, may maybe the thing will be do [disfmarker] to take some very small subset, I mean not have a big, program, but take a small set, uh, subset of the conversational speech and a small subset of the digits, and [pause] look and [disfmarker] and just get a feeling for it. Um, just take a look. Really. 
[speaker002:] H That could [disfmarker] could be an interesting design, too, cuz then you'd have the com the comparison of the, uh, predictable speech versus the less predictable speech [speaker003:] Cuz I don't think anybody is, I at least, I don't know, of anybody, uh, well, I don't know, [vocalsound] the answers. [speaker004:] Hey. [speaker003:] Yeah. [speaker002:] and maybe you'd find that it worked in, in the, case of the pr of the, uh, non predictable. [speaker003:] Yeah. [speaker004:] Hafta think about, the particular acoustic features to mark, too, because, I mean, some things, they wouldn't be able to mark, like, uh, you know, uh, tense lax. [speaker003:] Mm hmm. [speaker004:] Some things are really difficult. You know, just listening. [speaker006:] M I think we can get Ohala in to, give us some advice on that. [speaker002:] Well. [speaker004:] Yeah. [speaker002:] Also I thought you were thinking of a much more restricted set of features, that [disfmarker] [speaker003:] Yeah, but I [disfmarker] I [disfmarker] I [disfmarker] I was, like he said, [vocalsound] I was gonna bring John in and ask John what he thought. [speaker002:] Yeah, sure. Sure. Yeah. [speaker003:] Right. But I mean you want [disfmarker] you want it be restrictive but you also want it to [disfmarker] to [disfmarker] to have coverage. [speaker006:] Right. [speaker003:] You know i you should. It should be such that if you, if you, uh, if you had o um, all of the features, determined that you [disfmarker] that you were uh ch have chosen, that that would tell you, uh, in the steady state case, uh, the phone. [speaker002:] Yeah [speaker003:] So, um. [speaker002:] OK. [speaker006:] Even, I guess with vowels that would be pretty hard, wouldn't it? To identify actually, you know, which one it is? 
[speaker002:] It would seem to me that the points of articulation would be m more, g uh, I mean that's [disfmarker] I think about articulatory features, I think about, points of articulation, which means, uh, rather than vowels. [speaker006:] Yeah. [speaker004:] Points of articulation? What do you mean? [speaker002:] So, is it, uh, bilabial or dental or is it, you know, palatal. [speaker003:] Mm hmm. [speaker002:] Which [disfmarker] which are all like where [disfmarker] where your tongue comes to rest. [speaker006:] Uvular. [speaker004:] Place of ar place of articulation. [speaker003:] Place, place. [speaker002:] Place. [speaker003:] Yeah. [speaker001:] Place. [speaker002:] Thank you, what [disfmarker] whatev whatever I s said, that's [disfmarker] [speaker004:] OK. [speaker003:] Yeah. [speaker002:] I really meant place. [speaker004:] OK, I see. [speaker003:] Yeah. OK we got our jargon then, OK. [speaker002:] Yeah. [speaker003:] Uh. [speaker007:] Well it's also, there's, really a difference between, the pronunciation models in the dictionary, and, the pronunciations that people produce. And, so, You get, some of that information from Steve's work on the [disfmarker] on the labeling [speaker003:] Right. [speaker006:] Right. [speaker007:] and it really, I actually think that data should be used more. That maybe, although I think the meeting context is great, that he has transcriptions that give you the actual phone sequence. And you can go from [disfmarker] not from that to the articulatory features, but that would be a better starting point for marking, the gestural features, then, data where you don't have that, because, we [disfmarker] you wanna know, both about the way that they're producing a certain sound, and what kinds of, you know what kinds of, phonemic, differences you get between these, transcribed, sequences and the dictionary ones. 
[speaker003:] Well you might be right that mi might be the way at getting at, what I was talking about, but the particular reason why I was interested in doing that was because I remember, when that happened, and, John Ohala was over here and he was looking at the spectrograms of the more difficult ones. Uh, he didn't know what to say, about, what is the sequence of phones there. They came up with some compromise. Because that really wasn't what it look like. [speaker007:] Right. [speaker003:] It didn't look like a sequence of phones [speaker006:] Right. [speaker003:] it look like this blending thing happening here and here and here. [speaker007:] Right. [speaker006:] Yeah, so you have this feature here, and, overlap, yeah. [speaker004:] There was no name for that. [speaker003:] Yeah. [speaker007:] But [disfmarker] Right. [speaker003:] Yeah. [speaker007:] But it still is [disfmarker] there's a [disfmarker] there are two steps. One [disfmarker] you know, one is going from a dictionary pronunciation of something, like, "gonna see you tomorrow", [speaker006:] And [disfmarker] Or "gonta". [speaker003:] Right. Yeah. [speaker007:] it could be "going to" or "gonna" or "gonta s" you know. [speaker003:] Right. [speaker007:] And, yeah. "Gonna see you tomorrow", uh, "guh see you tomorrow". And, that it would be nice to have these, intermediate, or these [disfmarker] some [disfmarker] these reduced pronunciations that those transcribers had marked or to have people mark those as well. [speaker003:] Mm hmm. [speaker007:] Because, it's not, um, that easy to go from the, dictionary, word pronuncia the dictionary phone pronunciation, to the gestural one without this intermediate or a syllable level kind of, representation. [speaker006:] Well I don't think Morgan's suggesting that we do that, though. [speaker003:] Do you mean, [speaker001:] Yeah. 
[speaker003:] Yeah, I mean, I I I'm jus at the moment of course we're just talking about what, to provide as a tool for people to do research who have different ideas about how to do it. So for instance, you might have someone who just has a wor has words with states, and has uh [disfmarker] uh, comes from articulatory gestures to that. And someone else, might actually want some phonetic uh intermediate thing. So I think it would be [disfmarker] be best to have all of it if we could. But [pause] um, [speaker006:] But [disfmarker] What I'm imagining is a score like notation, where each line is a particular feature. [speaker003:] Yeah. [speaker006:] Right, so you would say, you know, it's voiced through here, and so you have label here, and you have nas nasal here, [speaker003:] Yeah. [speaker006:] and, they [disfmarker] they could be overlapping in all sorts of bizarre ways that don't correspond to the timing on phones. [speaker003:] I mean this is the kind of reason why [disfmarker] I remember when at one of the Switchboard, workshops, that uh when we talked about doing the transcription project, Dave Talkin said, "can't be done". [speaker006:] Right. [speaker003:] He was [disfmarker] he was, what [disfmarker] what he meant was that this isn't, you know, a sequence of phones, and when you actually look at Switchboard that's, not what you see, and, you know. And. It, [speaker006:] And in [disfmarker] in fact the inter annotator agreement was not that good, right? On the harder ones? [speaker003:] yeah I mean it was [speaker007:] It depends how you look at it, and I [disfmarker] I understand what you're saying about this, kind of transcription exactly, [speaker003:] Yeah. [speaker007:] because I've seen [disfmarker] you know, where does the voicing bar start and so forth. [speaker003:] Yeah. [speaker007:] All I'm saying is that, it is useful to have that [disfmarker] the transcription of what was really said, and which syllables were reduced. 
Uh, if you're gonna add the features it's also useful to have some level of representation which is, is a reduced [disfmarker] it's a pronunciation variant, that currently the dictionaries don't give you [speaker003:] Mm hmm. [speaker007:] because if you add them to the dictionary and you run recognition, you, you add confusion. [speaker003:] Right. [speaker007:] So people purposely don't add them. [speaker003:] Right. [speaker007:] So it's useful to know which variant was [disfmarker] was produced, at least at the phone level. [speaker004:] So it would be [disfmarker] it would be great if we had, either these kind of, labelings on, the same portion of Switchboard that Steve marked, or, Steve's type markings on this data, with these. [speaker007:] Right. That's all, I mean. Exactly. [speaker003:] Yeah. [speaker007:] Exactly. [speaker003:] Yeah, no I [disfmarker] I don't disagree with that. [speaker007:] And Steve's type is fairly [disfmarker] it's not that slow, uh, uh, I dunno exactly what the, timing was, but. [speaker003:] Yeah u I don't disagree with it the on the only thing is that, What you actually will end [disfmarker] en end up with is something, i it's all compromised, right, so, the string that you end up with isn't, actually, what happened. But it's [disfmarker] it's the best compromise that a group of people scratching their heads could come up with to describe what happened. [speaker007:] Mm hmm. [speaker004:] And it's more accurate than, phone labels. [speaker003:] But. And it's more accurate than the [disfmarker] than the dictionary or, if you've got a pronunciation uh lexicon that has three or four, [speaker006:] The word. [speaker004:] Yeah. [speaker003:] this might be have been the fifth one that you tr that you pruned or whatever, [speaker004:] So it's like a continuum. [speaker007:] Right. Right. [speaker004:] It's [disfmarker] you're going all the way down, [speaker003:] so sure. [speaker007:] Right. 
That's what I meant is [disfmarker] [speaker004:] yeah. [speaker003:] Yeah. [speaker007:] an and in some places it would fill in, So [disfmarker] the kinds of gestural features are not everywhere. [speaker003:] Yeah. [speaker006:] Well [disfmarker] [speaker004:] Right. [speaker007:] So there are some things that you don't have access to either from your ear or the spectrogram, [speaker004:] Mm hmm. [speaker007:] but you know what phone it was and that's about all you can [disfmarker] all you can say. [speaker004:] Right. [speaker007:] And then there are other cases where, nasality, voicing [disfmarker] [speaker004:] It's basically just having, multiple levels of [disfmarker] of, information and marking, on the signal. [speaker007:] Right. Right. [speaker003:] Yeah. [speaker007:] Right. [speaker006:] Well the other difference is that the [disfmarker] the features, are not synchronous, right. They overlap each other in weird ways. [speaker004:] Mm hmm. Mm hmm. [speaker006:] So it's not a strictly one dimensional signal. [speaker003:] Right. [speaker006:] So I think that's sorta qualitatively different. [speaker007:] Right. You can add the features in, uh, but it'll be underspecified. [speaker002:] Hmm. [speaker007:] Th there'll be no way for you to actually mark what was said completely by features. [speaker006:] Well not with our current system but you could imagine designing a system, that the states were features, rather than phones. [speaker007:] And i if you're [disfmarker] Well, we [disfmarker] we've probably have a [vocalsound] separate, um, discussion of, uh [disfmarker] of whether you can do that. [speaker002:] That's [disfmarker] Well, [pause] isn't that [disfmarker] I thought that was, well but that [disfmarker] wasn't that kinda the direction? [speaker006:] Yeah. 
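The "score-like notation" being discussed, where each articulatory feature lives on its own tier and tiers overlap asynchronously rather than lining up with phone boundaries, could be represented minimally like this. The feature names and times are hypothetical examples, not from any actual annotation:

```python
from dataclasses import dataclass

@dataclass
class FeatureInterval:
    feature: str   # e.g. "voiced", "nasal"
    start: float   # seconds
    end: float     # seconds

def features_at(tiers, t):
    """Return the set of features active at time t. Because tiers are
    independent intervals, features may overlap in ways that do not
    correspond to any single phone segmentation."""
    return {iv.feature for iv in tiers if iv.start <= t < iv.end}

# Hypothetical tiers: nasality spreading in before voicing ends.
tiers = [FeatureInterval("voiced", 0.10, 0.42),
         FeatureInterval("nasal", 0.30, 0.55)]
print(features_at(tiers, 0.35))  # both tiers active here
```

A representation like this is deliberately underspecified, as noted in the discussion: not every feature is marked everywhere, so the tiers complement rather than replace a phone-level transcript.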
[speaker002:] I thought [speaker003:] Yeah, so I mean, what, what [disfmarker] where this is, I mean, I I want would like to have something that's useful to people other than those who are doing the specific kind of research I have in mind, so it should be something broader. But, The [disfmarker] but uh where I'm coming from is, uh, we're coming off of stuff that Larry Saul did with [disfmarker] with, um, uh, John Dalan and Muzim Rahim in which, uh, they, uh, have, um, a m a multi band system that is, uh, trained through a combination of gradient learning an and EM, to [pause] um, estimate, uh, [vocalsound] the, uh, value for m for [disfmarker] for a particular feature. OK. And this is part of a larger, image that John Dalan has about how the human brain does it in which he's sort of imagining that, individual frequency channels are coming up with their own estimate, of [disfmarker] of these, these kinds of [disfmarker] something like this. Might not be, you know, exact features that, Jakobson thought of or something. But I mean you know some, something like that. Some kind of low level features, which are not, fully, you know, phone classification. And the [disfmarker] the [disfmarker] th this particular image, of how thi how it's done, is that, then given all of these estimates at that level, there's a level above it, then which is [disfmarker] is making, some kind of sound unit classification such as, you know, phone and [disfmarker] and, you know. You could argue what, what a sound unit should be, and [disfmarker] and so forth. But that [disfmarker] that's sort of what I was imagining doing, um, and [disfmarker] but it's still open within that whether you would have an intermediate level in which it was actually phones, or not. You wouldn't necessarily have to. 
Um, but, Again, I wouldn't wanna, wouldn't want what we [disfmarker] we produced to be so, know, local in perspective that it [disfmarker] it was matched, what we were thinking of doing one week, And [disfmarker] and, and, you know, what you're saying is absolutely right. That, that if we, can we should put in, uh, another level of, of description there if we're gonna get into some of this low level stuff. [speaker004:] Well, you know, um [disfmarker] I mean if we're talking about, having the, annotators annotate these kinds of features, it seems like, You know, you [disfmarker] The [disfmarker] the question is, do they do that on, meeting data? Or do they do that on, Switchboard? [speaker006:] That's what I was saying, maybe meeting data isn't the right corpus. [speaker002:] W Well it seems like you could do both. I mean, I was thinking that it would be interesting, to do it with respect to, parts of Switchboard anyway, in terms of, [speaker003:] Mm hmm. [speaker002:] uh [disfmarker] partly to see, if you could, generate first guesses at what the articulatory feature would be, based on the phone representation at that lower level. [speaker003:] Mm hmm. [speaker002:] It might be a time gain. But also in terms of comparability of, um, [speaker003:] Mm hmm. [speaker004:] Well cuz the yeah, and then also, if you did it on Switchboard, you would have, the full continuum of transcriptions. [speaker002:] what you gain Yep. [speaker004:] You'd have it, from the lowest level, the ac acoustic features, then you'd have the, you know, the phonetic level that Steve did, [speaker002:] Mm hmm. [speaker007:] Yeah that [disfmarker] that's all I was thinking about. [speaker004:] and, [speaker007:] it is telephone band, so, the bandwidth might be [disfmarker] [speaker004:] yeah. [speaker002:] And you could tell that [disfmarker] [speaker004:] It'd be a complete, set then. [speaker002:] And you get the relative gain up ahead. [speaker003:] It's so it's a little different. 
[speaker007:] Yeah. [speaker003:] So I mean i we'll see wha how much we can, uh, get the people to do, and how much money we'll have and all this sort of thing, [speaker002:] Mm hmm. Mm hmm. [speaker004:] But it [disfmarker] it might be good to do what Jane was saying uh, you know, seed it, with, guesses about what we think the features are, based on, you know, the phone or Steve's transcriptions or something. to make it quicker. [speaker003:] but, Might be do both. [speaker006:] Alright, so based on the phone transcripts they would all be synchronous, but then you could imagine, nudging them here and there. [speaker004:] Adjusting? Yeah, exactly. Scoot the voicing over a little, because [disfmarker] [speaker002:] Yeah. [speaker006:] Right. [speaker003:] Well I think what [disfmarker] I mean I'm [disfmarker] I'm a l little behind in what they're doing, now, and, uh, the stuff they're doing on Switchboard now. But I think that, Steve and the gang are doing, something with an automatic system first and then doing some adjustment. As I re as I recall. So I mean that's probably the right way to go anyway, is to [disfmarker] is to start off with an automatic system with a pretty rich pronunciation dictionary that, that, um, you know, tries, to label it all. And then, people go through and fix it. [speaker002:] So in [disfmarker] in our case you'd think about us s starting with maybe the regular dictionary entry, and then? [speaker003:] Well, regular dictionary, I mean, this is a pretty rich dictionary. [speaker002:] Or [pause] would we [disfmarker] [speaker003:] It's got, got a fair number of pronunciations in it [speaker004:] Or you could start from the [disfmarker] if we were gonna, do the same set, of sentences that Steve had, done, we could start with those transcriptions. [speaker002:] But [disfmarker] Mm hmm. [speaker003:] Yeah. [speaker007:] That's actually what I was thinking, is tha [disfmarker] [speaker002:] So I was thinking [disfmarker] [speaker006:] Right. 
[speaker004:] Yeah. [speaker007:] the problem is when you run, uh, if you run a regular dictionary, um, even if you have variants, in there, which most people don't, you don't always get, out, the actual pronunciations, [speaker004:] Yeah. [speaker007:] so that's why the human transcriber's giving you the [disfmarker] that pronunciation, [speaker002:] Yeah. [speaker003:] Actually maybe they're using phone recognizers. [speaker002:] Oh. [speaker007:] and so y they [disfmarker] they [disfmarker] I thought that they were [disfmarker] [speaker003:] Is that what they're doing? [speaker006:] They are. [speaker007:] we should catch up on what Steve is, [speaker003:] Oh, OK. [speaker007:] uh [disfmarker] I think that would be a good i good idea. [speaker003:] Yeah. Yeah, so I think that i i we also don't have, I mean, we've got a good start on it, but we don't have a really good, meeting, recorder or recognizer or transcriber or anything yet, so. [speaker007:] Yeah. [speaker003:] So, I mean another way to look at this is to, is to, uh, do some stuff on Switchboard which has all this other, stuff to it. And then, um, As we get, further down the road and we can do more things ahead of time, we can, do some of the same things to the meeting data. [speaker004:] Mm hmm. [speaker002:] OK. [speaker004:] Yeah. [speaker002:] And I'm [disfmarker] and these people might [disfmarker] they [disfmarker] they are, s most of them are trained with IPA. [speaker003:] Yeah [speaker002:] They'd be able to do phonetic level coding, or articulatory. [speaker004:] Are they busy for the next couple years, or [disfmarker]? [speaker002:] Well, you know, I mean they, they [disfmarker] they're interested in continuing working with us, so [disfmarker] I mean [disfmarker] I, and this would be up their alley, so, we could [disfmarker] when the [disfmarker] when you d meet with, with John Ohala and find, you know what taxonomy you want to apply, then, they'd be, good to train onto it. [speaker003:] Yeah. 
Yeah. Anyway, this is, not an urgent thing at all, [speaker002:] Yeah. [speaker003:] just it came up. [speaker004:] It'd be very interesting though, to have [pause] that data. [speaker007:] Yeah. [speaker006:] I wonder, how would you do a forced alignment? [speaker002:] I think so, too. [speaker007:] Might [disfmarker] [speaker002:] Interesting idea. [speaker006:] To [disfmarker] to [disfmarker] I mean, you'd wanna iterate, somehow. Yeah. It's interesting thing to think about. [speaker007:] It might be [disfmarker] [speaker004:] Hmm. [speaker007:] I was thinking it might be n [speaker006:] I mean you'd [disfmarker] you'd want models for spreading. [speaker004:] Of the f acoustic features? [speaker006:] Yeah. [speaker004:] Mm hmm. [speaker003:] Mm hmm. Yeah. [speaker007:] Well it might be neat to do some, phonetic, features on these, nonword words. Are [disfmarker] are these kinds of words that people never [disfmarker] the "huh"s and the "hmm"s and the "huh" [vocalsound] and the uh [disfmarker] These k No, I'm serious. There are all these kinds of [pause] functional, uh, elements. I don't know what you call [pause] them. But not just fill pauses but all kinds of ways of [pause] interrupting [comment] and so forth. [speaker006:] Uh huh. [speaker007:] And some of them are, [vocalsound] yeah, "uh huh"s, and "hmm"s, and, "hmm!" "hmm" [comment] "OK", "uh" [comment] Grunts, uh, that might be interesting. [speaker002:] He's got lip [disfmarker] [pause] lipsmacks. [speaker007:] In the meetings. [speaker003:] We should move on. [speaker002:] Yeah. [speaker003:] Uh, new version of, uh, presegmentation? [speaker001:] Uh, oh yeah, um, [vocalsound] I worked a little bit on the [disfmarker] on the presegmentation to [disfmarker] to get another version which does channel specific, uh, speech nonspeech detection. 
And, what I did is I used some normalized features which, uh, look in into the [disfmarker] which is normalized energy, uh, energy normalized by the mean over the channels and by the, minimum over the, other. within each channel. And to [disfmarker] to, mm, to, yeah, to normalize also loudness and [disfmarker] and modified loudness and things and that those special features actually are in my feature vector. [speaker006:] Oh. [speaker001:] And, and, therefore to be able to, uh, somewhat distinguish between foreground and background speech in [disfmarker] in the different [disfmarker] in [disfmarker] each channel. And, eh, I tested it on [disfmarker] on three or four meetings and it seems to work, well yeah, fairly well, I [disfmarker] I would say. There are some problems with the lapel mike. [speaker006:] Of course. [speaker001:] Yeah. [speaker006:] Wow that's great. [speaker001:] Uh, yeah. And. [speaker006:] So I [disfmarker] I understand that's what you were saying about your problem with, minimum. [speaker001:] Yeah. And. Yeah, and [disfmarker] and I had [disfmarker] I had, uh, specific problems with. [speaker006:] I get it. So new use ninetieth quartile, rather than, minimum. [speaker001:] Yeah. Yeah. [speaker002:] Wow. [speaker001:] Yeah [disfmarker] yeah, then [disfmarker] I [disfmarker] I did some [disfmarker] some [disfmarker] some things like that, [speaker002:] Interesting. [speaker001:] as there [disfmarker] there are some [disfmarker] some problems in, when, in the channel, there [disfmarker] they [disfmarker] the the speaker doesn't [disfmarker] doesn't talk much or doesn't talk at all. Then, the, yeah, there are [disfmarker] there are some problems with [disfmarker] with [disfmarker] with n with normalization, and, then, uh, there the system doesn't work at all. So, I'm [disfmarker] I'm glad that there is the [disfmarker] the digit part, where everybody is forced to say something, [speaker003:] Right. 
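The cross-channel normalization described here (channel energy normalized by the mean over channels, so that foreground speech on a given mike stands out from crosstalk) can be sketched roughly as below. This is a simplified illustration of the idea only, not the actual feature set used; the array shapes and values are made up:

```python
import numpy as np

def normalize_energies(log_energy):
    """Normalize per-frame log energy by the mean across channels.
    log_energy: array of shape (n_channels, n_frames).
    A positive value means that channel is louder than the cross-channel
    average in that frame, suggesting foreground rather than background
    speech on that mike."""
    mean_across = log_energy.mean(axis=0, keepdims=True)
    return log_energy - mean_across

# Toy example: channel 0 has foreground speech, channel 1 picks it up
# faintly as background.
e = np.array([[10.0, 12.0],
              [ 4.0,  5.0]])
norm = normalize_energies(e)
print(norm[0] > 0)  # foreground channel sits above the cross-channel mean
```

This also illustrates the failure mode mentioned: if a speaker never talks, their channel's statistics are dominated by background, and a normalization of this kind has nothing reliable to anchor on, which is why the digit readings (where everyone speaks) are useful.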
[speaker001:] so, that's [disfmarker] that's great for [disfmarker] for my purpose. And, the thing is I [disfmarker] I, then the evaluation of [disfmarker] of the system is a little bit hard, as I don't have any references. [speaker006:] Well we did the hand [disfmarker] the one by hand. [speaker001:] Yeah, that's the one [disfmarker] one wh where I do the training on so I can't do the evaluation on So the thing is, can the transcribers perhaps do some, some [disfmarker] some meetings in [disfmarker] in terms of speech nonspeech in [disfmarker] in the specific channels? [speaker006:] Uh. [speaker004:] Well won't you have that from their transcriptions? [speaker002:] Well, I have [disfmarker] Well, OK, so, now we need [disfmarker] [speaker006:] No, cuz we need is really tight. [speaker001:] Yeah. [speaker002:] so, um, I think I might have done what you're requesting, though I did it in the service of a different thing. [speaker001:] Oh, great. [speaker002:] I have thirty minutes that I've more tightly transcribed with reference to individual channels. [speaker001:] OK. OK, that's great. That's great [speaker002:] And I could [disfmarker] And [disfmarker] [speaker001:] for me. Yeah, so. [speaker006:] Hopefully that's not the same meeting that we did. [speaker002:] And [disfmarker] No, actually it's a different meeting. [speaker006:] Good. [speaker002:] So, um, e so the, you know, we have the, th they transcribe as if it's one channel with these [disfmarker] with the slashes to separate the overlapping parts. [speaker001:] OK. [speaker002:] And then we run it through [disfmarker] then it [disfmarker] then I'm gonna edit it and I'm gonna run it through channelize which takes it into Dave Gelbart's form format. [speaker001:] Yeah. Yeah. 
[speaker002:] And then you have, all these things split across according to channel, and then that means that, if a person contributed more than once in a given, overlap during that time band that [disfmarker] that two parts of the utterance end up together, it's the same channel, and then I took his tool, and last night for the first thirty minutes of one of these transcripts, I, tightened up the, um, boundaries on individual speakers' channels, [speaker001:] OK. OK. [speaker002:] cuz his [disfmarker] his interface allows me to have total flexibility in the time tags across the channels. [speaker001:] Yeah. Yeah. [speaker002:] And [pause] um, so. [speaker004:] So, current [disfmarker] This week. [speaker001:] so, yeah [disfmarker] yeah, that [disfmarker] that [disfmarker] that's great, but what would be nice to have some more meetings, not just one meeting to [disfmarker] to be sure that [disfmarker] that, there is a system, [speaker002:] Yes. Might not be what you need. [speaker006:] Yeah, so if we could get a couple meetings done with that level of precision I think that would be a good idea. [speaker001:] OK. [speaker002:] Oh, OK. [speaker001:] Yeah. [speaker002:] Uh, how [disfmarker] how m much time [disfmarker] so the meetings vary in length, what are we talking about in terms of the number of minutes you'd like to have as your [disfmarker] as your training set? [speaker001:] It seems to me that it would be good to have, a few minutes from [disfmarker] from different meetings, so. But I'm not sure about how much. [speaker002:] OK, now you're saying different meetings because of different speakers or because of different audio quality or both or [disfmarker]? [speaker001:] Both [disfmarker] both. [speaker002:] OK. [speaker001:] Different [disfmarker] different number of speakers, different speakers, different [pause] conditions.
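[Editor's note: the "channelize" step described above — splitting a single-stream transcript into per-speaker channels, merging a speaker's multiple contributions within one overlap time band — might look something like the sketch below. This is only a guess at what the real channelize tool does; the tuple format, speaker labels, and merging rule are all made up for illustration.]

```python
from collections import defaultdict

def channelize(segments):
    """Split a single-stream transcript into per-speaker channels.
    `segments` is a list of (start, end, speaker, text) tuples, where
    overlapping stretches have already been separated out (the role the
    slash notation plays in the original convention). Contributions by
    the same speaker within the same time band are merged onto one unit."""
    channels = defaultdict(list)
    # Stable sort by time band only, so same-band entries keep their order.
    for start, end, spk, text in sorted(segments, key=lambda s: (s[0], s[1])):
        ch = channels[spk]
        if ch and abs(ch[-1][0] - start) < 1e-9 and abs(ch[-1][1] - end) < 1e-9:
            ch[-1] = (start, end, ch[-1][2] + " " + text)  # same band: merge
        else:
            ch.append((start, end, text))
    return dict(channels)
```

The per-channel lists with independent time tags are then what an interface like the one described can tighten up speaker by speaker.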
[speaker003:] Yeah, we don't have that much variety in meetings yet, uh, I mean we have this meeting and the feature meeting and we have a couple others that we have uh, couple examples of. But [disfmarker] but, uh, [speaker001:] Yeah, m Yeah. [speaker002:] Mm hmm. [speaker005:] Even probably with the gains [pause] differently will affect it, you mean [disfmarker] [speaker001:] Uh, not really as [disfmarker] [speaker003:] Poten potentially. [speaker005:] Oh, cuz you use the normalization? [speaker001:] uh, because of the normalization, yeah. [speaker005:] OK. [speaker003:] Oh, OK. [speaker001:] Yeah. [speaker007:] We can try running [disfmarker] we haven't done this yet because, um, uh, Andreas an is [disfmarker] is gonna move over the SRI recognizer. i basically I ran out of machines at SRI, cuz we're running the evals [speaker001:] OK. [speaker007:] and I just don't have machine time there. But, once that's moved over, uh, hopefully in a [disfmarker] a couple days, then, we can take, um, what Jane just told us about as, the presegmented, [vocalsound] [nonvocalsound] the [disfmarker] the segmentations that you did, at level eight or som [comment] at some, threshold that Jane, tha [pause] right, and try doing, forced alignment. um, on the word strings. [speaker006:] Oh, shoot! [speaker001:] Yeah. Yeah. [speaker002:] The pre presegment [speaker001:] Yeah. [speaker002:] yeah. [speaker001:] With the recognizer? Yeah. [speaker007:] And if it's good, then that will [disfmarker] that may give you a good boundary. Of course if it's good, we don't [disfmarker] then we're [disfmarker] we're fine, [speaker001:] Yeah. M [speaker007:] but, I don't know yet whether these, segments that contain a lot of pauses around the words, will work or not. 
[speaker001:] I [disfmarker] I would quite like to have some manually transcribed references for [disfmarker] for the system, as I'm not sure if [disfmarker] if it's really good to compare with [disfmarker] with some other automatic, found boundaries. [speaker007:] Yeah. Right. [speaker002:] Well, no, if we were to start with this and then tweak it h manually, would that [disfmarker] that would be OK? [speaker001:] Yeah. [speaker007:] Right. [speaker001:] Yeah [pause] sure. [speaker007:] They might be OK. [speaker002:] OK. [speaker007:] It [disfmarker] you know it really depends on a lot of things, [speaker001:] Yeah. [speaker007:] but, I would have maybe a transcriber, uh, look at the result of a forced alignment and then adjust those. [speaker001:] Yeah. To a adjust them, or, yeah. Yeah, yeah. [speaker007:] That might save some time. If they're horrible it won't help at all, [speaker001:] Yeah, great. [speaker007:] but they might not be horrible. [speaker001:] Yeah. [speaker007:] So [disfmarker] but I'll let you know when we, uh, have that. [speaker001:] OK, great. [speaker002:] How many minutes would you want from [disfmarker] I mean, we could [pause] easily, get a section, you know, like say a minute or so, from every meeting that we have so f from the newer ones that we're working on, everyone that we have. And then, should provide this. [speaker001:] If it's not the first minute of [disfmarker] of the meeting, that [disfmarker] that's OK with me, but, in [disfmarker] in the first minute, uh, Often there are some [disfmarker] some strange things going on which [disfmarker] which aren't really, well, for, which [disfmarker] which aren't re re really good. So. What [disfmarker] what I'd quite like, perhaps, is, to have, some five minutes of [disfmarker] of [disfmarker] of different meetings, so. [speaker002:] Somewhere not in the very beginning, five minutes, OK. [speaker001:] Yeah.
[speaker002:] And, then I wanted to ask you just for my inter information, then, would you, be trai cuz I don't quite unders so, would you be training then, um, the segmenter so that, it could, on the basis of that, segment the rest of the meeting? So, if I give you like [pause] five minutes is the idea that this would then be applied to, uh, to, providing tighter time [pause] bands? [speaker001:] I [disfmarker] I could do a [disfmarker] a retraining with that, yeah. [speaker002:] Wow, interesting. [speaker001:] That's [disfmarker] but [disfmarker] but I hope that I [disfmarker] I don't need to do it. [speaker002:] OK. Uh huh. [speaker001:] So, uh it c can be do in an unsupervised way. [speaker002:] Excellent. [speaker001:] So. [speaker002:] Excellent, OK. [speaker001:] I'm [disfmarker] I'm not sure, but, for [disfmarker] for [disfmarker] for those three meetings whi which I [disfmarker] which I did, it seems to be, quite well, but, there are some [disfmarker] some [disfmarker] as I said some problems with the lapel mike, but, perhaps we can do something with [disfmarker] with cross correlations to, to get rid of the [disfmarker] of those. And. Yeah. That's [disfmarker] that's what I [disfmarker] that's my [pause] future work. Well [disfmarker] well what I want to do is to [disfmarker] to look into cross correlations for [disfmarker] for removing those, false overlaps. [speaker002:] Wonderful. [speaker007:] Are the, um, wireless, different than the wired, mikes, at all? I mean, have you noticed any difference? [speaker001:] I'm [disfmarker] I'm not sure, um, if [disfmarker] if there are any wired mikes in those meetings, or, uh, I have [disfmarker] have to loo have a look at them but, I'm [disfmarker] I'm [disfmarker] I think there's no difference between, [speaker007:] So it's just the lapel versus everything else? [speaker001:] Yeah. Yeah. 
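[Editor's note: the cross-correlation idea mentioned above for removing false overlaps — crosstalk, where one speaker's voice bleeds into another's mike, especially the lapel — is stated only as future work; a sketch of the basic measure might look like the following. The lag window and thresholds are assumptions, and a real system would likely work on short analysis windows per detected overlap.]

```python
import numpy as np

def max_normalized_xcorr(a, b, max_lag=800):
    """Peak normalized cross-correlation between two same-length channel
    segments over lags up to `max_lag` samples. A value near 1 suggests
    the two segments carry the same signal (crosstalk / a false overlap)
    rather than two genuinely different speakers."""
    a = (a - a.mean()) / (a.std() + 1e-10)
    b = (b - b.mean()) / (b.std() + 1e-10)
    best = 0.0
    n = min(len(a), len(b))
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[lag:n], b[: n - lag]
        else:
            x, y = a[: n + lag], b[-lag:n]
        if len(x) > 1:
            r = float(np.dot(x, y)) / len(x)
            best = max(best, abs(r))
    return best
```

A detected "overlap" whose peak correlation against the louder channel exceeds some threshold would then be discarded as bleed rather than counted as simultaneous speech.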
[speaker002:] OK, so then, if that's five minutes per meeting we've got like twelve minutes, twelve meetings, roughly, that I'm [disfmarker] that I've been working with, then [disfmarker] [speaker003:] Of [disfmarker] of [disfmarker] of the meetings that you're working with, how many of them are different, tha [speaker001:] No. [speaker003:] are there any of them that are different than, these two meetings? [speaker002:] Well [disfmarker] oh wa in terms of the speakers or the conditions or the? [speaker003:] Yeah, speakers. Sorry. So. [speaker002:] Um, we have different combinations of speakers. [speaker001:] Yeah, that [disfmarker] [speaker002:] I mean, just from what I've seen, uh, there are some where, um, you're present or not present, and, then [disfmarker] then you have the difference between the networks group and this group [speaker001:] Yeah, I know, some of the NSA meetings, yeah. [speaker003:] Yeah. So I didn't know in the group you had if you had [disfmarker] [speaker001:] Yeah. [speaker003:] so you have the networks meeting? [speaker002:] Yep, we do. [speaker001:] Yeah. [speaker003:] Do you have any of Jerry's meetings in your, pack, er, [speaker002:] Um, no. [speaker003:] No? [speaker002:] We could, I mean you [disfmarker] you recorded one last week or so. I could get that new one in this week [disfmarker] I get that new one in. [speaker006:] Yep. [speaker007:] We're gonna be recording them every [pause] Monday, [speaker006:] u [speaker003:] Yeah. Cuz I think he really needs variety, [speaker007:] so [disfmarker] [speaker003:] and [disfmarker] and having as much variety for speaker certainly would be a big part of that I think. [speaker002:] Great. [speaker001:] Yeah. Yeah. 
[speaker002:] OK, so if I, OK, included [disfmarker] include, OK, then, uh, if I were to include all together samples from twelve meetings that would only take an hour and I could get the transcribers to do that right [disfmarker] I mean, what I mean is, that would be an hour sampled, and then they'd transcribe those [disfmarker] that hour, right? That's what I should do? [speaker003:] Yeah. And. Right. Ye But you're [disfmarker] y [speaker001:] That's [disfmarker] that's. [speaker002:] I don't mean transcribe I mean [disfmarker] I mean adjust. So they get it into the multi channel format and then adjust the timebands so it's precise. [speaker003:] So that should be faster than the ten times kind of thing, [speaker002:] Absolutely. [speaker003:] yeah. [speaker002:] I did [disfmarker] I did, um, uh, so, last night I did, uh, Oh gosh, well, last night, I did about half an hour in, three hours, which is not, terrific, [speaker003:] Yeah. [speaker002:] but, um, [speaker003:] Yeah. [speaker002:] anyway, it's an hour and a half per [disfmarker] [speaker003:] Well, that's probably. [speaker001:] So. [speaker002:] Well, I can't calculate on my, [vocalsound] on my feet. [speaker001:] Do the transcribers actually start wi with, uh, transcribing new meetings, or [pause] are they? [speaker002:] Well, um they're still working [disfmarker] they still have enough to finish that I haven't assigned a new meeting, [speaker001:] OK. [speaker002:] but the next, m m I was about to need to assign a new meeting and I was going to take it from one of the new ones, [speaker001:] OK. [speaker002:] and I could easily give them Jerry Feldman's meeting, no problem. And, then [disfmarker] [speaker001:] OK. [speaker007:] So they're really running out of, data, prett I mean that's good. [speaker002:] Mm hmm. Uh, that first set. [speaker007:] Um, OK. [speaker003:] They're running out of data unless we s make the decision that we should go over and start, uh, transcribing the other set. 
[speaker007:] So [disfmarker] [speaker003:] There [disfmarker] the first [disfmarker] the first half. [speaker002:] And so I was in the process of like editing them but this is wonderful news. [speaker001:] Yeah. OK. [speaker003:] Alright. [speaker002:] We funded the experiment with, uh [disfmarker] also we were thinking maybe applying that that to getting the, Yeah, that'll be, very useful to getting the overlaps to be more precise all the way through. [speaker003:] So this, blends nicely into the update on transcripts. [speaker002:] Yes, it does. [speaker003:] OK. [speaker002:] So, um, [comment] um, Liz, and [disfmarker] and Don, and I met this morning, in the BARCO room, with the lecture hall, [speaker007:] Yeah, please. Go ahead. And this afternoon. [speaker002:] and this afternoon, it drifted into the afternoon, [comment] [vocalsound] uh, concerning this issue of, um, the, well there's basically the issue of the interplay between the transcript format and the processing that, they need to do for, the SRI recognizer. And, um, well, so, I mentioned the process that I'm going through with the data, so, you know, I get the data back from the transcri Well, s uh, metaphorically, get the data back from the transcriber, and then I, check for simple things like spelling errors and things like that. And, um, I'm going to be doing a more thorough editing, with respect to consistency of the conventions. But they're [disfmarker] they're generally very good. And, then, I run it through, uh, the channelize program to get it into the multi channel format, OK. And [pause] the, what we discussed this morning, I would summarize as saying that, um, these units that result, in a [disfmarker] a particular channel and a particular timeband, at [disfmarker] at that level, um, vary in length. And, um, [nonvocalsound] their recognizer would prefer that the units not be overly long. 
But it's really an empirical question, whether the units we get at this point through, just that process I described might be sufficient for them. So, as a first pass through, a first chance without having to do a lot of hand editing, what we're gonna do, is, I'll run it through channelize, give them those data after I've done the editing process and be sure it's clean. And I can do that, pretty quickly, with just, that minimal editing, without having to hand break things. [speaker003:] Mm hmm. [speaker002:] And then we'll see if the units that we're getting, uh, with the [disfmarker] at that level, are sufficient. And maybe they don't need to be further broken down. And if they do need to be further broken down then maybe it just be piece wise, maybe it won't be the whole thing. So, that's [disfmarker] that's what we were discussing, this morning as far as I [disfmarker] Among [disfmarker] [speaker007:] Right. [speaker002:] also we discussed some adaptational things, [speaker007:] Then lots of [disfmarker] Right. [speaker002:] so it's like, uh [disfmarker] You know I hadn't, uh, incorporated, a convention explicitly to handle acronyms, for example, but if someone says, PZM it would be nice to have that be directly interpretable from, the transcript what they said, [speaker003:] Mm hmm. [speaker002:] or Pi uh Tcl [disfmarker] TCL I mean. It's like y it's [disfmarker] and so, um, I've [disfmarker] I've incorporated also convention, with that but that's easy to handle at the post editing phase, and I'll mention it to, transcribers for the next phase but that's OK. And then, a similar conv uh, convention for numbers. So if they say one eighty three versus one eight three. Um, and also I'll be, um, encoding, as I do my post editing, the, things that are in curly brackets, which are clarificational material. And eh to incorporate, uh, keyword, at the beginning. So, it's gonna be either a gloss or it's gonna be a vocal sound like a, laugh or a cough, or, so forth. 
Or a non vocal sound like a doors door slam, and that can be easily done with a, you know, just a [disfmarker] one little additional thing in the, in the general format. [speaker007:] Yeah we j we just needed a way to, strip, you know, all the comments, all the things th the [disfmarker] that linguist wants but the recognizer can't do anything with. Um, but to keep things that we mapped to like reject models, or, you know, uh, mouth noise, or, cough. And then there's this interesting issue Jane brought up which I hadn't thought about before but I was, realizing as I went through the transcripts, that there are some noises like, um, well the [disfmarker] good example was an inbreath, where a transcriber working from, the mixed, signal, doesn't know whose breath it is, [speaker006:] Right. [speaker007:] and they've been assigning it to someone that may or may not be correct. And what we do is, if it's a breath sound, you know, a sound from the speaker, we map it, to, a noise model, like a mouth noise model in the recognizer, and, yeah, it probably doesn't hurt that much once in a while to have these, but, if they're in the wrong channel, that's, not a good idea. And then there's also, things like door slams that's really in no one's channel, they're like [disfmarker] it's in the room. [speaker006:] Right. [speaker001:] Yeah. [speaker007:] And [pause] uh, Jane had this nice, uh, idea of having, like an extra, uh couple tiers, [speaker006:] An extra channel. [speaker002:] Yeah. [speaker007:] yeah. [speaker002:] I've been [disfmarker] I've been adding that to the ones I've been editing. [speaker007:] And we were thinking, that is useful also when there's uncertainties. So if they hear a breath and they don't know who breath it is it's better to put it in that channel than to put it in the speaker's channel because maybe it was someone else's breath, or [disfmarker] Uh, so I think that's a good [disfmarker] you can always clean that up, post processing. 
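[Editor's note: the filtering just described — strip the curly-bracket comments the linguist wants but the recognizer can't use, while keeping the ones that map to reject or noise models — could be sketched as below. The keyword convention and the model-token names are hypothetical; the real mapping table belongs to the recognizer setup.]

```python
import re

# Hypothetical mapping from comment keywords to recognizer noise tokens.
NOISE_MAP = {"breath": "[mouth-noise]", "cough": "[mouth-noise]",
             "laugh": "[laughter]", "door": "[noise]"}

def to_recognizer_reference(line):
    """Strip {curly-bracket} clarificational comments from a transcript
    line, except those whose leading keyword maps to a noise model."""
    def repl(match):
        parts = match.group(1).split()
        return NOISE_MAP.get(parts[0].lower(), "") if parts else ""
    out = re.sub(r"\{([^}]*)\}", repl, line)
    return " ".join(out.split())  # collapse whitespace left by deletions
```

This is also where the keyword-first convention mentioned above pays off: a gloss like `{gloss unclear}` is dropped, while `{cough}` survives as a token the recognizer can score against a noise model.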
[speaker002:] Yeah. [speaker007:] So a lot of little details, but I think we're, coming to some kinda closure, on that. So the idea is then, uh, Don can take, uh, Jane's post processed channelized version, and, with some scripts, you know, convert that to [disfmarker] to a reference for the recognizer and we can, can run these. So [pause] when that's, ready [disfmarker] you know, as soon as that's ready, and as soon as the recognizer is here we can get, twelve hours of force aligned and recognized data. And, you know, start, working on it, [speaker002:] And [disfmarker] [speaker007:] so we're, I dunno a coup a week or two away I would say from, uh, if [disfmarker] if that process is automatic once we get your post process, transcript. [speaker002:] Mm hmm. And that doesn't [disfmarker] the amount of editing that it would require is not very much either. I'm just hoping that the units that are provided in that way, [nonvocalsound] will be sufficient cuz I would save a lot of, uh, time, dividing things. [speaker007:] Yeah, some of them are quite long. Just from [disfmarker] I dunno how long were [disfmarker] you did one? [speaker005:] I saw a couple, [vocalsound] around twenty seconds, and that was just without looking too hard for it, so, I would imagine that there might be some that are longer. [speaker007:] Right. [speaker002:] Well n One question, e w would that be a single speaker or is that multiple speakers overlapping? [speaker005:] No. No, but if we're gonna segment it, like if there's one speaker in there, that says "OK" or something, right in the middle, it's gonna have a lot of dead time around it, [speaker007:] Right. It's not the [disfmarker] it's not the fact that we can't process a twenty second segment, it's the fact that, there's twenty seconds in which to place one word in the wrong place [speaker005:] so it's not [disfmarker] Yeah. [speaker007:] You know, if [disfmarker] if someone has a very short utterance there, [speaker002:] Yeah. 
[speaker007:] and that's where, we, might wanna have this individual, you know, ha have your pre pre process input. [speaker001:] Yep. Yeah. Sure. [speaker007:] And I just don't know, [speaker001:] I [disfmarker] I [disfmarker] I thought that perhaps the transcribers could start then from the [disfmarker] those mult multi channel, uh, speech nonspeech detections, if they would like to. [speaker002:] That's very important. [speaker007:] I have to run it. Right. [speaker002:] In [disfmarker] in doing the hand marking? [speaker001:] Yeah. [speaker002:] Yeah that's what I was thinking, too. [speaker007:] Right. So that's probably what will happen, [speaker002:] Yeah. [speaker007:] but we'll try it this way and see. [speaker001:] Yeah. [speaker007:] I mean it's probably good enough for force alignment. If it's not then we're really [disfmarker] then we def definitely [speaker001:] Yeah. [speaker007:] uh, but for free recognition I'm [disfmarker] it'll probably not be good enough. We'll probably get lots of errors because of the cross talk, and, noises and things. [speaker001:] Yep. [speaker003:] Good s I think that's probably our agenda, or starting up there. [speaker002:] Oh I wanted to ask one thing, [speaker003:] Yeah? K. [speaker002:] the microphones [disfmarker] the new microphones, when do we get, uh? [speaker006:] Uh, they said it would take about a week. [speaker002:] Oh, exciting. K. K. [speaker003:] K. [speaker004:] You ordered them already? [speaker006:] Mm hmm. [speaker004:] Great. [speaker007:] So what happens to our old microphones? [speaker003:] They go where old microphones go. [speaker006:] Um [disfmarker] [speaker007:] Do we give them to someone, or [disfmarker]? [speaker006:] Well the only thing we're gonna have extra, for now, [speaker007:] We don't have more receivers, we just have [disfmarker] [speaker006:] Right, we don [speaker007:] Right. [speaker006:] so the only thing we'll have extra now is just the lapel. 
Not [disfmarker] not the, bodypack, just the lapel. [speaker007:] Just the lapel itself. [speaker006:] Um, and then one of the [disfmarker] one of those. Since, what I decided to do, on Morgan's suggestion, was just get two, new microphones, um, and try them out. [speaker007:] Mm hmm. [speaker006:] And then, if we like them we'll get more. [speaker007:] OK. [speaker003:] Yeah. [speaker006:] Since they're [disfmarker] they're like two hundred bucks a piece, we won't, uh, at least try them out. [speaker004:] So it's a replacement for this headset mike? [speaker006:] Yep. Yep. [speaker003:] Yeah. [speaker006:] And they're gonna do the wiring for us. [speaker004:] What's the, um, style of the headset? [speaker006:] It's, um, it's by Crown, and it's one of these sort of mount around the ear thingies, and, uh, when I s when I mentioned that we thought it was uncomfortable he said it was a common problem with the Sony. And this is how apparently a lot of people are getting around it. [speaker004:] Hmm. [speaker006:] And I checked on the web, and every site I went to, raved about this particular mike. It's apparently comfortable and stays on the head well, so we'll see if it's any good. But, uh, I think it's promising. [speaker002:] You said it was used by aerobics instructors? [speaker006:] Yep. Yep, so it was [disfmarker] it was advertised for performers [speaker005:] Hmm. [speaker002:] That says a lot. [speaker003:] For the recor for the record Adam is not a paid employee or a consultant of Crown. [speaker006:] and [disfmarker] Excuse me? [speaker002:] Oh. [speaker006:] Excuse me? [speaker007:] Right. [speaker003:] I said "For the record Adam is [disfmarker] is not a paid consultant or employee of Crown". [speaker006:] That's right. [speaker007:] However, he may be solicited after these meetings are distributed. [speaker006:] Well we're using the Crown P Z [speaker007:] Don't worry about finishing your dissertation. [speaker003:] Yeah. 
[speaker006:] These are Crown aren't they? The P Z Ms are Crown, [speaker003:] Right. [speaker006:] aren't they? [speaker003:] Yeah. [speaker006:] Yeah, I thought they were. [speaker003:] You bet. You bet. [speaker006:] And they work very well. [speaker007:] Yes. [speaker003:] So if we go to a workshop about all this [disfmarker] this it's gonna be a meeting about meetings about meetings. OK. So. [speaker006:] And then it [disfmarker] we have to go to the planning session for that workshop. [speaker003:] Oh, yeah, what [disfmarker] Which'll be the meeting about the meeting about the meeting. [speaker004:] Oh, god. [speaker006:] Cuz then it would be a meeting about the meeting about the meeting about meetings. [speaker003:] Yeah? Just start saying "M four". Yeah, OK. [speaker006:] Yeah. M to the fourth. [speaker003:] Should we do the digits? [speaker006:] Yep, go for it. [speaker003:] OK. [speaker001:] S [pause] s [speaker006:] Pause between the lines, remember? [speaker005:] Excuse me. [speaker006:] OK. [speaker003:] OK. [speaker007:] Huh. [speaker006:] OK. Um, w would you mind telling me again, please? Oh, he's gonna write it. OK. Now, if the batteries were to go out during the meeting, there's really [disfmarker] what do we do? Do we have batteries back there? OK. OK. And you'll be in your office during this? Oh, thank you very much. Thanks a lot. [speaker001:] What's [disfmarker] so what do we do with this? [speaker006:] Appreciate it. Could you close the door for me please? [speaker001:] Anything? [speaker002:] Just put your name on it, I think. [speaker001:] Oh, OK. [speaker006:] OK, thanks. [speaker002:] Uh huh. [speaker006:] So, thanks very much for coming. OK. So um [disfmarker] let's see, so, um, as you know, the meeting today is to, um, [@ @] [comment] benefit from your involvement in the data, all the things that [disfmarker] that you've been exposed to in these many months. 
And um, I just [disfmarker] I just put down some ideas [disfmarker] you've seen some of this in the email that I sent out earlier. And, um, [speaker003:] Mmm. [speaker006:] none of these [disfmarker] these are not obligatory toca topics but they're just things that I thought might be useful to discuss, just as a way of organizing the discussion. But if there are other topics you'd like to discuss that'd be great too. Alright, so um, it struck me that, um, you know, we could start out with some sort of general thing about, um, [vocalsound] things that you found easier or more difficult with respect to transcribing or checking or using the interface or, uh, dealing with the time bins or any aspect of that type. And then, maybe talk about specifics [disfmarker] specifics of the data and then maybe talk about, um, aspects of the conventions and then the technical set up regarding the, um, auditory image and the interface. OK, so, maybe we could just, um, start with issues of the [disfmarker] [@ @] [comment] the tar ti up there at the top of [disfmarker] What [disfmarker] what did you find easiest or more s most difficult about transcribing the data? [speaker004:] OK. [speaker006:] Yeah. [speaker004:] One thing that I remember being really difficult at the beginning and it got easier as we went along was, um, acronyms and technical terms. Um, because there are so many [speaker001:] Mm hmm. [speaker004:] and [disfmarker] it's a field that I'm not really l very [disfmarker] I don't know, I don't know very much about this kind of stuff [disfmarker] computers and speech recognition. I've never taken classes on it or anything, so. I found that really hard. And I remember transcribing and having a bunch of [disfmarker] You know, every five words there'd be something I'd put in parentheses cuz I didn't know what it was. So. [speaker006:] Parentheses, meaning uncertainty. [speaker004:] Yeah. Yeah, [speaker003:] Uh huh. 
[speaker004:] but it's definitely gotten easier because the same things keep getting repeated, [speaker006:] Mm hmm. [speaker004:] once you learn what it is, and you can say, "OK I know what they meant by that" or "I know how to spell that now" or something like that. And the Cast of Characters that you put up on the Web was helpful [speaker006:] Is [disfmarker] [speaker003:] Yeah. [speaker004:] also. [speaker005:] Yeah. [speaker006:] OK, good. It's sorta like a foreign language, isn't it? It's basically [disfmarker] you have these [vocalsound] specialized terms for things that I didn't know n needed specialized terms. [speaker004:] Yeah. [speaker006:] Yeah. [speaker004:] Especially when describing like the set up in this room, also, and the different types of mikes. [speaker006:] Yeah. [speaker004:] I don't know. [speaker006:] Mm hmm. [speaker004:] And it's hard to tell when people are talking relatively quickly. They just glide over some of the letters. They don't fully pronounce them. So you can't really tell even if you listen to it ten times [@ @] [comment] what it is. [speaker006:] Mm hmm. [speaker002:] Hmm. [speaker003:] Mm hmm. [speaker005:] Yeah. I usually put most of the acronyms in parentheses cuz I think it's sort of safe, cuz people can [disfmarker] people can say their acronyms really fast [speaker004:] Uh huh. [speaker005:] and I figure that even if I think I know exactly what letters they said, it's probably best to put them in parentheses cuz I probably don't. [speaker006:] That's not a bad strategy. I mean, i it's [disfmarker] it's true that I [disfmarker] I always check the parentheses at the last stage and those are pretty easy to clear, in terms of when they're in context. [speaker005:] Mm hmm. Right. [speaker002:] Hmm. [speaker006:] Not a bad strategy. [speaker003:] Uh huh. [speaker006:] Yeah. [speaker003:] Some of them just occurred over and over the [disfmarker] again, so like [disfmarker] PDA. Like that got [disfmarker] got easy. 
[speaker006:] Yes. Mm hmm. [speaker003:] There [disfmarker] there was a [disfmarker] a bunch of them that, uh [disfmarker] [speaker006:] I suspect in context. [speaker003:] They just kept getting repeated often and often you know, over and over, so. [speaker004:] Mm hmm. [speaker005:] Yeah. [speaker003:] Just got better, [speaker006:] And also when you have the [disfmarker] the juxtaposition of PDA and PZM [disfmarker] [vocalsound] then, you know, you [disfmarker] [@ @] [comment] in context you can tell which one it's [disfmarker] is probably likely to be the one that you had. [speaker003:] but. Mm hmm. [speaker004:] Yeah. Even though I still don't know what they are. I mean, I assume they're microphones of some sort, but that's all. [speaker006:] Well these here are the P Z that right there. [speaker004:] Oh. [speaker005:] Oh. [speaker001:] Is anybody wearing one of those awful quote Crown microphones that people used to be [disfmarker] [speaker003:] No. [speaker006:] Uh you've heard, the "crown of pain". [speaker001:] Yeah. [speaker006:] No. He was telling us, uh, before the meeting that, um, that they're not in circulation right now, that they're [disfmarker] they were unpopular. I liked them, [speaker001:] Huh. [speaker006:] and Adam liked them. [speaker001:] Oh. Oh, OK. [speaker006:] I thought the quality was good. [speaker005:] Hmm. [speaker001:] Just curious. [speaker006:] What [disfmarker] yeah, what did he say? He said some more things about that, but I was n I was working on something else. [speaker004:] About the microphone? [speaker006:] Uh huh. [speaker004:] I can't remember. I mean he's e he [disfmarker] he said that they were designed for a specific sized head? [speaker006:] Ah, yeah. Yeah, that's right. [speaker004:] Right? [speaker005:] Yeah. [speaker004:] And so if [disfmarker] if your head happened to be that size, then you were fine [speaker006:] That's right. [speaker001:] Oh. 
[speaker004:] and if it w was bigger, then it was like torture, [speaker001:] Oh. [speaker003:] Right. [speaker006:] Yeah, [speaker004:] o or [disfmarker] I don't know. [speaker006:] it's [disfmarker] [speaker004:] I don't know. [speaker006:] I liked them. And there's the [disfmarker] actually, there's the picture of the [disfmarker] of the Crown adjustment on the wall right there. [speaker005:] Hmm. [speaker006:] So it sort of hooks over your ears from the back. [speaker003:] Right. [speaker004:] Oh, yeah. [speaker001:] Oh, OK. [speaker006:] Yeah, good c [speaker004:] Oh, it doesn't look so bad, but, I guess it depends really. [speaker006:] Yeah. OK, so now what about accents? Do you have, uh [disfmarker] Did you find any p [speaker001:] They're tough sometimes. [speaker006:] Mm hmm. [speaker005:] Yeah. [speaker001:] Sometimes the German accents can get a little bit daunting. I mean, um, I forget who it was that I was transcribing but, it was tough to figure out what e e he was saying, uh, at any given point in time and I would I would just listen to it over and over and over again, going "Oh my God! My head's gonna explode!" like, not really understanding a [speaker005:] Yeah. [speaker001:] you know? And so I was getting a little frustrated because I would put a lot of things into parentheses and then go back over it and go back over it again and still [disfmarker] you know, I [disfmarker] I like to try to clear as much up as I possibly can and I [disfmarker] I just felt like you know, with the German accents, just for me personally, those were probably the toughest to get around. [speaker006:] Mm hmm. [speaker001:] Um, they weren't impossible but just [disfmarker] like they too the m they took the most time too. I mean I would take maybe twice as long on a [disfmarker] a g given channel w with a speaker who has a German accent than somebody who's you know, a native English speaker, [speaker005:] Mm hmm. [speaker004:] Hm hmm. 
[speaker001:] or even Asian [disfmarker] like, uh, speaks an En Asian language. I found that easier. Uh, you know. Uh, somebody who speaks like Chinese or something, [@ @]. [speaker006:] What's [disfmarker] you've [comment] studied some languages, [speaker003:] Mm hmm. [speaker006:] which [disfmarker] which ones do you [disfmarker] are you studying? It's more of a Romance direction, or [disfmarker]? [speaker001:] Um, yeah [disfmarker] well, I studied [disfmarker] I mean, [vocalsound] I studied Old English for a while. [speaker006:] Mm hmm. [speaker001:] Um, and then I've worked with Larry Hyman on like some of the Bantu languages. Um, [speaker006:] Mm hmm. [speaker001:] and you know, I'm also working with Ian Maddieson and he's been doing work on the, uh, like also some African languages. So I've [nonvocalsound] like learned a little bit. [speaker006:] Mm hmm, so it's not really [disfmarker] you're not really exposed to German very much. [speaker001:] No, I mean I sing in German. [speaker006:] Ah! [comment] Interesting. Interesting. [speaker001:] But [disfmarker] I mean, I can [disfmarker] I can pronounce it [speaker006:] Huh. [speaker001:] but I don't know any. [speaker006:] OK. [speaker004:] That's interesting. [speaker003:] Hmm hmm. [speaker001:] I can do [disfmarker] I can pronounce a lot of languages and not actually speak them [speaker004:] Yeah. [speaker001:] because of singing. [speaker004:] Yeah. [speaker002:] Hmm. [speaker005:] Huh. [speaker004:] For me I found the Spanish speakers to be the hardest. And the German speakers I found a little bit easier, [speaker003:] Yeah. [speaker004:] and I thought maybe that was because I had [disfmarker] I took German for a semester and I've had exposure to German, [speaker001:] Oh. 
[speaker004:] um, because I had a lot of friends who were really into Germany and German language and [disfmarker] But [disfmarker] And I found that I could [disfmarker] if they were pronouncing something in a certain way I could say, "Oh well that's just the German way of pronouncing this" and then I figured out what they were saying. [speaker001:] Mm hmm. [speaker004:] um, but with the Spanish speakers it was [disfmarker] I've never taken Spanish, and even though I've taken French, and I speak French fairly well, um, I found the Spanish to be much harder and [disfmarker] maybe also because the level of fluency for the Spanish speakers was a little bit less than the level of fluency of the German speakers that are here? [speaker001:] Oh. [speaker005:] If you're talking about who I think you're talking about [disfmarker] [speaker004:] I think [speaker005:] are you talking about um [disfmarker] [speaker004:] There was a couple of different Spanish speakers. [speaker003:] Can we give names? [speaker005:] O can [disfmarker] yeah. [speaker003:] We [disfmarker] w we probably shouldn't, right? [speaker005:] Is that [disfmarker] [speaker006:] I it's a good question. Um, And I can't really answer that. [speaker005:] OK, well there's one in particular who I find the hardest to transcribe of all, and it's [disfmarker] d she's Spanish. [speaker006:] Oh. [speaker005:] So. [speaker003:] Oh yeah, OK. [speaker005:] And, um, and I've taken lots [disfmarker] I'm fluent in Spanish, and so, um, when I [disfmarker] when I correct it, I can t I can tay [disfmarker] tell that the person who, um, transcribed it before me probably didn't speak Spanish, cuz I can catch a lot of things like [disfmarker] like "No, that's not what she said", I can tell she's [pause] pronouncing [disfmarker] pronouncing it the way that if you read it from a [disfmarker] you know, a Spanish speaker's mind that you would, you know, pronounce it the way that you would pronounce it in Spanish. 
But the same time, uh I can't get most of it, cuz I [disfmarker] [speaker004:] Huh, that's really interesting. [speaker006:] Oh, that's fascinating. [speaker001:] Huh. [speaker005:] I find it one of [disfmarker] one of the hardest ones to transcribe. [speaker006:] Well, what you can see is that it's [disfmarker] it's her lack of [disfmarker] of English fluency that's being [disfmarker] [speaker005:] Yeah. [speaker001:] Oh OK. [speaker004:] I think so, yeah. [speaker006:] Yeah, that's [disfmarker] that's really interesting. I do find that the German tends to be easier for me than s than the others [speaker003:] Mm hmm. Mm hmm. [speaker006:] because of my c exposure and interest in German. [speaker001:] Mm hmm. [speaker004:] Yeah. [speaker006:] But, um, [speaker005:] Right. [speaker001:] Maybe I haven't transcribed this Spanish woman, I don't know if I have. [speaker006:] I suspect you haven't because this was, uh, earlier on that we had a lot of [disfmarker] [speaker004:] Mm hmm. [speaker006:] actually she [disfmarker] she was in the middle phase I would say. We had another Spanish speaker who was earlier who was also difficult. [speaker005:] Hmm. [speaker004:] Mm hmm. Yeah. [speaker005:] Mm hmm. [speaker004:] Yeah, he was hard. [speaker003:] Yeah. [speaker006:] But that's really interesting. [speaker003:] He got easier though, because he started [disfmarker] like he would say certain words over and over, [speaker004:] Yeah. [speaker003:] and [disfmarker] [speaker004:] And you got to know his pronunciation of them. [speaker003:] Yeah. So when he said "sign yah", and it sounds like "sign", he was saying "signal". [speaker004:] I remember, yeah. [speaker006:] Yes. Yes yes yes. Uh huh. [speaker004:] Yeah, I remember that. [speaker006:] Ex [speaker003:] And so I would transcribe it as "signal". [speaker006:] exactly. [speaker001:] Huh. [speaker005:] Oh, wow. [speaker004:] And [disfmarker] and m eh the way he said "mixed" was really strange, too. [speaker002:] Ah! 
[speaker006:] "Miss ed". [speaker003:] "Meexe", "meese", "mees uh", or something. [speaker004:] "Miss ed" or somethi [speaker005:] "Miss" [speaker004:] yeah. [speaker006:] Uh huh. [speaker005:] Huh. [speaker003:] Right. [speaker004:] Yeah. [speaker006:] Which was [disfmarker] you know, you could predict it, [speaker004:] He [disfmarker] [speaker006:] I mean, if you knew [disfmarker] look looked back at it, [speaker004:] Mm hmm. [speaker001:] That's kinda cool. [speaker004:] Especially because also in the beginning meetings, I mean, he'd say something and no one in the meeting would understand, [speaker006:] they [disfmarker] [speaker004:] and then everyone would say, "What was that again?" [speaker003:] Oh, eh [disfmarker] [speaker004:] and then they'd pronounce it in English [speaker003:] Yeah. [speaker004:] and so I could figure out what it was [speaker003:] Yeah. [speaker004:] that way. [speaker006:] Yeah, interesting. [speaker004:] Yeah. [speaker006:] OK. [speaker003:] Yeah, I mean he was never like [disfmarker] it was never up to like a native speaker, but like that [disfmarker] it got a little easier. Um [disfmarker] [speaker004:] Like his English fluency got better? [speaker003:] I remember in the beginning you had [disfmarker] and you were just sorta like [disfmarker] you were beating your head against the wall, [speaker004:] Oh, I went crazy. [speaker003:] but I [disfmarker] it go but it got better, it got better. [speaker004:] I [disfmarker] I would skip doing that portion for as long as I could [speaker003:] Yeah, I know. Yeah. [comment] I remember. [speaker004:] because i I just couldn't deal with it, [speaker003:] Yeah. [speaker004:] and [disfmarker] [speaker006:] Yeah, I had the same experience. [speaker004:] Yeah. [speaker006:] Sometimes [disfmarker] And actually [disfmarker] Sometimes [nonvocalsound] when people lack a certain level of fluency you can ask them themselves, "What did you say there?" and they can't tell you. 
[speaker004:] And they won't know. [speaker006:] S I think it's [disfmarker] it's sort of a general variability when it's not [disfmarker] when you're learning a language and you haven't gotten to a certain level. [speaker005:] Mmm. [speaker002:] Mm hmm. [speaker001:] Oh. [speaker004:] Hmm. Mm hmm. [speaker005:] Mm hmm. [speaker004:] Mm hmm. Yeah. [speaker005:] Yeah. [speaker006:] I also wonder if [disfmarker] if it's partly that Span Spanish has different pronunciations in different regions? I mean, do you think that, um, that, uh, maybe [disfmarker] I mean, being [disfmarker] I mean, not that I would know that. [speaker005:] Mmm. [speaker006:] But I just wondered if, um [disfmarker] you know, she was from probably continental Spain versus [@ @] [comment] y you know other Spanish [disfmarker] I wonder if that would have a [disfmarker] a contributing factor. [speaker005:] I don't know. [speaker001:] Oh, like the region? [speaker004:] I have no idea about Spain, [speaker006:] Yeah. [speaker004:] but I know that in France for example Parisian French is much easier for me to understand cuz I learned it versus France in like Marseilles or in the south or in like the provin like [vocalsound] the [pause] villages in the south, [speaker001:] Sure. Right. [speaker004:] it's really hard to understand. [speaker006:] Mm hmm. [speaker001:] Menton. [speaker003:] Or in Quebec, [speaker006:] That's right. [speaker003:] right? In Canada it'd be [disfmarker] [speaker004:] Yeah. [speaker001:] Right. [speaker003:] it's different. [speaker004:] Much different. [speaker003:] A lot of the pronunciations are different, [speaker004:] Yeah. Yeah. [speaker003:] yeah. [speaker006:] And in my case with [disfmarker] with, uh, German w I [disfmarker] a certain region that I do well in and then [vocalsound] get farther away from that region [speaker005:] Yeah. [speaker004:] Yeah. [speaker006:] and I notice it's more and more difficult. [speaker004:] Mm hmm. 
[speaker001:] There's another language that's spoken in Spain too, that's um, [speaker006:] Yeah. [speaker001:] what am I thinking of? [speaker003:] Castilian? [speaker001:] Castilian. Yeah, and uh, [speaker004:] And [disfmarker] [speaker001:] um [disfmarker] [speaker004:] isn't it Catalan? [speaker001:] Catalan. [speaker004:] Yeah. [speaker005:] Mm hmm. [speaker001:] So maybe [disfmarker] I mean, depending on where she is, like maybe [speaker006:] But she was also very low volume. [speaker001:] ch [speaker006:] And, um [disfmarker] and didn't say a lot. [speaker004:] Yeah. [speaker002:] Mm hmm. [speaker006:] I mean, I [disfmarker] I [disfmarker] I c I totally agree. This was a [disfmarker] a difficult person to [disfmarker] to understand. [speaker005:] Yeah. Yeah. Well, the last one I did she actually said a lot, [speaker004:] Yeah. [speaker006:] But. [speaker005:] and it was kinda [disfmarker] it was very hard to transcribe. You kno it was [disfmarker] i j I just think that, um, her fluency in English wasn't [disfmarker] wa um, wasn't very [disfmarker] very high up there, so it was really difficult. [speaker004:] Yeah. [speaker005:] But, I mean, I did find that knowing Spanish helped me a lot. [speaker006:] I can imagine that, [speaker005:] I could tell that a lot of things that the other transcriber had put in parentheses, I was like, "I can see how you [disfmarker] how you couldn't understand that", [speaker003:] Hmm. [speaker006:] yeah. [speaker001:] Mm hmm. [speaker005:] but I c I kn I think I know what she's doing. [speaker006:] Yeah, yeah, yeah. [speaker005:] I mean, I think I know the mistake she's making. And [disfmarker] [speaker001:] Mm hmm. [speaker004:] That's really good. [speaker006:] Yeah, yeah. [speaker004:] Yeah, cuz that's definitely something that I wouldn't have be able to do. [speaker005:] Yeah. [speaker006:] Yeah, I couldn't do it in Spanish. I've [disfmarker] I found myself doing that somewhat in [disfmarker] in German.
[speaker004:] Yeah. [speaker001:] German. [speaker005:] In German, yeah. [speaker006:] Mm hmm. [speaker004:] There were problems even with, um, Am English speakers though. I mean, not in their level of fluency, obviously, [speaker003:] Uh huh. [speaker004:] but in [disfmarker] [speaker005:] Mm hmm. [speaker004:] Some people tend to mumble, [speaker006:] Mm hmm. [speaker002:] Mmm. [speaker001:] Yeah. [speaker006:] That's right. [speaker004:] and [disfmarker] and it's hard to catch what it is is, cuz it goes by really quickly and you can't really tell [speaker003:] Uh huh. [speaker004:] and [disfmarker] [speaker006:] Or they're laughing when they're talking [speaker004:] That too. [speaker006:] or they're [disfmarker] [speaker005:] Mm hmm. [speaker002:] Mm hmm. [speaker004:] That too. Yeah. Or just [disfmarker] I don't know. And some people tend to get interrupted more, I know that's not something that the person themselves can control [speaker001:] He yeah. [speaker004:] but [disfmarker] [speaker005:] Hmm. [speaker001:] If [disfmarker] when people talk over one another like constantly, [comment] like there's this whole barrage of voices coming from everywhere, [speaker004:] um [disfmarker] Mm hmm. [speaker001:] it's like "Oh my God! What's going on right now?" [speaker004:] Especially when they're [disfmarker] when you [comment] were using the l the mikes that weren't m headmounted mikes. [speaker001:] It's kind of [disfmarker] [speaker006:] Oh. Yeah. [speaker004:] And there was input from other people on the same channel as the person who was talking. [speaker001:] Yeah. [speaker006:] Yeah, [speaker004:] And that was really hard to deal with. [speaker006:] we had lapels for d l brief And then sometimes people would wear these around their necks, [speaker005:] Hmm. [speaker006:] which would mean really basically u uh, a problem that you're saying of getting input from others, [speaker002:] Mm hmm. [speaker004:] Oh, yeah. [speaker005:] Mmm. [speaker003:] Right. 
[speaker006:] and also, um, it varying when they change their hea w head position. [speaker004:] Mm hmm. Yeah. [speaker006:] Yeah, [speaker003:] Right. [speaker006:] but [disfmarker] [speaker003:] A big im I tha I think the transcriptions improved immensely, like exponentially, um, after it was able to sort of isolate a single person onto a single channel. [speaker004:] Yeah, [speaker006:] Oh, excellent. [speaker004:] absolutely. [speaker003:] Um [disfmarker] [speaker006:] Oh, good to know. [speaker004:] Yeah. [speaker003:] Oh, there's no question. [speaker004:] Yeah. [speaker002:] Oh, yeah. [speaker003:] Um [disfmarker] [speaker006:] S so that was [disfmarker] You're talking about early on, the change fro into Channeltrans? [speaker003:] Yeah. [speaker004:] Yeah. [speaker005:] You guys didn't have o one channel per person before? [speaker006:] Mm hmm. [speaker003:] No. [speaker004:] No, it was one like [disfmarker] one ribbon for everybody [speaker003:] No. [speaker001:] Oh, wow. [speaker002:] Euh. [speaker004:] and you had to kind of [disfmarker] you could switch signals, like you could listen to a different one, [speaker006:] Mm hmm. [speaker004:] but you couldn't really [disfmarker] Well could you? Did [disfmarker] we didn't do that very much. [speaker001:] You couldn't visually isolate it? [speaker005:] What? [speaker004:] You couldn't visually do it, no. [speaker001:] Huh. [speaker002:] Euh. [speaker003:] No. [speaker004:] And f Yeah. [speaker003:] And you had to mark interruptions by a slash, [speaker006:] Mm hmm, that's right. [speaker003:] right? [speaker004:] And it was very hard to be precise about it, [speaker003:] Um [disfmarker] [speaker006:] Overlaps. [speaker004:] and it was very hard to tell [disfmarker] I had so much trouble at the very beginning trying to figure out who was who. [speaker005:] Mmm. [speaker003:] Yeah. [speaker004:] Cuz there's m so many men and so few women. [speaker002:] Mmm. 
[speaker004:] The women were easy to f figure out who was who, and then the men [disfmarker] all their voices sounded the same except I think there was one British English speaker and that was easy but [speaker005:] Wow, that's difficult. [speaker004:] Eh. [comment] [disfmarker] Yeah. [speaker003:] You know what? There might be a gender thing there, though, too, [speaker002:] Yeah. [speaker003:] cuz I think [disfmarker] I think as women it might be easier to differentiate [disfmarker] Well, there were fewer women, too, but I [disfmarker] I wonder if a man transcribing all the guys [pause] would have had [disfmarker] [speaker004:] Yeah. An easier time? [speaker003:] Like if you [comment] had transcribed it you would have had an easier time differentiating them? [speaker005:] Mm hmm. [speaker003:] I don't know why that would be true, [speaker006:] That's interesting. [speaker001:] That's possible. [speaker003:] but [speaker004:] That's possible. [speaker003:] you wonder if there'd be like a gender difference there? I don't know, I'm thinking about it as like a psychologist I suppose. [speaker004:] I don't know. [speaker006:] It's possible. [speaker003:] Anyway. [speaker001:] Well, it's [disfmarker] [speaker006:] Hmm. [speaker001:] oh, yeah, yeah, that's possible. [speaker006:] Hmm. [speaker004:] Yeah. [speaker001:] I was just gonna say something. I was gonna say that, I mean, it's easier for people to mimic like the timbre of somebody's voice if it's the same [disfmarker] if, uh, that person's voice is the same [disfmarker] if that person is the same sex as you. So maybe, like, implicitly people pay attention more closely to uh, speakers of the same sex. [speaker004:] Mm hmm. [speaker003:] Hmm. [speaker001:] I don't know. I don't know if that's actually true but I'm just throwing that out there. [speaker006:] Hmm. [speaker004:] It's interesting. [speaker003:] Huh. [speaker004:] I don't know. [speaker006:] Huh. [speaker001:] Kind of an interesting research project. 
[speaker006:] Yeah, it seems like it's [disfmarker] it's related to this production idea. How you [disfmarker] you internalize what you're externalizing in a way, but. [speaker004:] Yeah. [speaker006:] Yeah. [speaker004:] Yeah, but it definitely got much better when that change happened. [speaker003:] Oh yeah, it was so ea it [disfmarker] it became much easier. [speaker004:] Mm hmm. [speaker003:] Um, you know. [speaker006:] In the prev You've also gone through [disfmarker] They've [comment] been here since the beginning. You've also gone through the [disfmarker] use the int in in introduction of the pre segmentation, and that kind of thing. [speaker004:] Oh, yeah. [speaker003:] Right. [speaker006:] So initially they had to actually mark all the time bins themselves as well. [speaker002:] Wow. [speaker001:] Oh. [speaker005:] Oh, wow. [speaker003:] Right. [speaker004:] That took a l [speaker003:] Which wasn't as hard. [speaker006:] Yeah. [speaker003:] I mean, it took time. [speaker004:] E No. [comment] It just took time. Yeah. [speaker003:] It took time. [speaker005:] Yeah. [speaker006:] Mm hmm. [speaker002:] Hmm. [speaker003:] Um, I find that I have to adjust the time bins a lot anyway. [speaker006:] Mm hmm. [speaker004:] Me too, [speaker001:] Me too. [speaker004:] yeah. Yeah. [speaker005:] Yeah, [speaker003:] Um [disfmarker] [speaker005:] I always adjust the time bins. [speaker002:] Yeah. [speaker003:] Um, not for the end of the utterance but for the beginning. [speaker004:] Yeah. [speaker003:] Sometimes the beginning [disfmarker] the person started to say something, [speaker001:] Yeah. [speaker006:] I see. [speaker002:] Really? [speaker003:] it's a very slight difference, and then the time bin starts. [speaker006:] Mm hmm. [speaker004:] Yeah, that happens a lot. [speaker005:] Yeah right. [speaker002:] Hmm. [speaker001:] Right. [speaker003:] Um, so you have to just move it back a little bit. 
And besi and I think you [comment] mentioned at some point that you wanted a little pad of time anyway. [speaker005:] Mm hmm. [speaker006:] It doesn't hurt to have a pad. It's [disfmarker] it's nicer to have a pad than not, [speaker003:] Yeah. [speaker004:] Then to have it be cut off. [speaker006:] d but s don't have to be really [disfmarker] Yeah, don't [disfmarker] doesn't have to be [disfmarker] [speaker005:] Yeah. [speaker003:] Don't have it just start, yeah. [speaker005:] I always wonder about the, um, time bins. [speaker004:] I thought [disfmarker] [speaker003:] Um, so. [speaker005:] I mean, I usually ended adjusting them a little bit [disfmarker] [speaker006:] Yeah. [speaker005:] I mean, definitely when something's cut off at the beginning or the end but also sometimes there's just like huge pads of, you know, air before and [disfmarker] or after, um, that I don't really feel is necessary for that segment, [speaker006:] Mm hmm. [speaker005:] and I usually, um, you know, shrink it a bit [speaker002:] Hmm. [speaker004:] Shrink in them or whatever, [speaker003:] Mm hmm. [speaker004:] yeah. [speaker006:] Nice. [speaker005:] and make it a little bit [disfmarker] [speaker002:] Seems like I always have to cut off pads at the end. [speaker006:] Nice. [speaker002:] I don't have so much problem at the beginning, but. [speaker004:] Of breath or of uh [disfmarker] nothing? [speaker002:] No, of n just nothing. [speaker005:] Just nothing, [speaker002:] Yeah. [speaker004:] Hmm, yeah. [speaker005:] yeah. [speaker003:] Yeah, there's nothing going on. [speaker001:] At the beginning it seems to happen when, um, voicing occurs later [disfmarker] [speaker004:] Yeah. Yeah. [speaker001:] l you know, like [disfmarker] [speaker004:] With fricatives a lot. [speaker001:] Yeah, with fricatives and with like, uh [disfmarker] [speaker002:] Mmm. 
[speaker001:] I mean, like i it doesn't happen with, uh, like "buh" [comment] so much because you've got the pre voicing before the consonant actually begins. [speaker004:] Although I found one in [disfmarker] actually I was just working right before this meeting, [comment] one where someone said, "great" and the [disfmarker] the G [disfmarker] it was pronounced as a K so it was unvoiced and then part of the R was cut off in the beginning. [speaker001:] Oh. [speaker004:] And then you [disfmarker] It sounded like "great" if you just listened to that time bin it still sounded like "great". [speaker005:] Uh huh. [speaker004:] But if you listened to right before it you could hear a little the [disfmarker] the K or whatever and a little bit of the R. [speaker001:] Mm hmm. [speaker004:] So [disfmarker] I don't know. [speaker001:] That's kind of interesting. [speaker004:] Usually it's not [disfmarker] usually that doesn't happen but this time it did. [speaker003:] So for the non linguists, what's a fricative? [speaker006:] I [disfmarker] Ah. [speaker001:] Oh. Uh, it's like "fff", "sss". [speaker004:] Or "sss". Or [disfmarker] [speaker005:] Yeah. [speaker003:] Oh, OK. [speaker005:] Or "thh". [speaker001:] "Shh". [speaker005:] It's a sound that [disfmarker] [speaker001:] It's like the noisy, uh, um, like airflow ones where you've got like a [disfmarker] a stoppage in your mouth like, uh, like your tongue [disfmarker] [speaker004:] But it's not completely blocking the airflow. [speaker003:] Oh, I see. [speaker005:] Yeah, and you can sort of [disfmarker] you can sort of keep it going a little bit. [speaker001:] Right. [speaker003:] Oh, I see. Oh, OK. [speaker002:] Unlike like a "tuh" or a "kuh" that just stops. [speaker004:] Yeah. Yeah. [speaker003:] Right. Got it. [speaker001:] Sorry. [speaker006:] Have you [disfmarker] [speaker003:] I guess I'm the only non linguist. 
[speaker006:] Well, I'm [disfmarker] I'm sort of a hybrid, so I [disfmarker] I [disfmarker] you know, I [disfmarker] I appreciate having them do this, [speaker005:] Mm hmm. [speaker003:] Oh, right, yeah, oh you [disfmarker] [speaker006:] yeah. [speaker003:] Yeah. [speaker006:] So, um, in [disfmarker] did you [disfmarker] have you run into places [disfmarker] I've run into a couple where actually the word was totally different when you adjusted the time bin. Have you had that happen? [speaker004:] Mmm, yeah. [speaker005:] Oh, yeah. [speaker001:] Yeah. [speaker006:] That's fascinating. [speaker005:] Yeah. I like that. [speaker001:] Yeah. [speaker006:] Oh, it's really a kick. [speaker005:] I think that's fun. [speaker003:] Yeah. [speaker001:] And it sometimes changes the entire utterance, too. [speaker004:] Oh it does. [speaker005:] Yeah. [speaker006:] Yeah. [speaker004:] Completely. Yeah, I [disfmarker] I don't remember the words exactly. It's a shame. There was one that was a whole sentence was broken up in the middle, and there was a little like millisecond of break in the middle. And if you connected it all together, it completely changed what was right before it and what was right after [nonvocalsound] it. It was written as something else, and then it became a word together from what was before and after and then with this little tiny millisecond pause. So I guess [disfmarker] I don't know. I mean, I guess when we talk in between different segments of the word maybe, [comment] there's pauses somehow, I guess [disfmarker] [speaker003:] Hmm. [speaker004:] or the [disfmarker] you can [disfmarker] [speaker001:] We stop voicing sometimes. [speaker004:] Yeah, or something like that. It's probably like inaudible for us but [disfmarker] And maybe the pause was just so short that it made up a pause. I don't [disfmarker] is [disfmarker] it [disfmarker] That doesn't make any sense, but. 
[speaker006:] There are some times when there have been words that they just pause right in the middle of. [speaker004:] Mm hmm. [speaker006:] You're [disfmarker] i it's h [speaker004:] On purpose, though. Or not. [speaker006:] Well, it ge u I'm sure sometimes on purpose, but sometimes it'd just be s saying something and especially people who have really lo elongation in their style, [speaker004:] Mm hmm. [speaker006:] and they say, "That's really fascinating" [speaker002:] Mm hmm. [speaker006:] It's not a great example, but you know? [speaker004:] Mm hmm. [speaker001:] Sh [speaker006:] And [disfmarker] Yeah. Uh there's [disfmarker] it's [disfmarker] it's funny how, um, i it's funny to me that th something like that, you just change one little boundary and suddenly the whole meaning changes and I've run across that also. [speaker003:] Mm hmm. [speaker005:] Mm hmm. [speaker003:] Yeah. [speaker004:] Yeah, that is true. [speaker005:] Hmm. Would you want, um, uh, a transcription of something like that, of a big pause in between [disfmarker] in the middle of a word? Or just leave it as, "fascinating"? [speaker006:] What I've done [disfmarker] I've [disfmarker] It's very rare. I mean, when I've run across it. [speaker005:] Mm hmm. [speaker006:] And, you know, maybe there are times that I haven't detected it and I'm not [@ @] [disfmarker] you know, fin I haven't looked at everything. I've looked at [disfmarker] I've probably looked at per uh s at least eighty percent of the data, you know, all the way through e every meeting [disfmarker] [vocalsound] for about eighty percent of the stuff that I've cleared so far. [speaker005:] Oh! [speaker006:] And, um, i it s You guys are getting so good that it's like I'm [disfmarker] I [disfmarker] I [disfmarker] i I can get through them faster, and I'm getting to the point where I maybe don't have to go through them all the way, you know, [speaker004:] Mm hmm. 
[speaker006:] so it's like [disfmarker] it's just we've gotten to a level, it's just really great, [comment] of, um, of [disfmarker] of [vocalsound] pre precision that um, it takes a lot of a load off of me. [speaker005:] Mmm. [speaker006:] And, um [disfmarker] and so I'm not gonna be checking them all I don't think in the [@ @] [comment] for throughout the whole rest of the project. But, um [disfmarker] Certainly have spot checking. But, um, hhh, [comment] those that I have found that way, [comment] to me, I think f f the most sensible way to handle it from my perspective was to treat it as a pronunciation situation. [speaker004:] Mm hmm. [speaker006:] So just do a [disfmarker] you know, a mark, then "fascinating", and then put "PRN" and uh "interrupted by a pause" or, you know, some kind of other gloss like that, you know. [speaker001:] Mm hmm, mm hmm. PRN. [speaker004:] How it was really pronounced? [speaker005:] Mm hmm. [speaker004:] Mm hmm. [speaker005:] Right. [speaker003:] Mmm, oh right. Uh huh. [speaker006:] You know, "P R dot dot fascinating", whatever it is. [speaker004:] I always get confused with the pronunciation convention. [speaker006:] Sorry, what was this? [speaker004:] I get confused with pronunciation, like when there's a strange pronunciation that I wanna mark. [speaker006:] OK. [speaker004:] Because I think w m may di in the beginning did we do the strange pronunciation and then write PRN and what it really was meant to be? And now we're doing the normal word and then inside the brackets what it sounded like? [speaker003:] Right. [speaker004:] We did a switch, right? Or not. I don't know if it's just me that's confused or if it was swi [speaker006:] Well, I've [disfmarker] [speaker003:] Mm hmm, I [disfmarker] it [disfmarker] it never occurred to me, but now that you're mentioning it, n yeah, yeah, possibly. 
[speaker006:] There was an adjustment, and I [disfmarker] and I've switched them all [disfmarker] all the way at this point so that [speaker004:] OK. OK. So the convention as it stands now is write the word as it is normally conventionally spelled in English with the little [vocalsound] apostrophe, or whatever you call it, [speaker006:] Yeah. Yep. Mm hmm, that's right. [speaker004:] and then "PRN", [speaker006:] And then in the brackets, what it [disfmarker] [speaker004:] how it's pronounced. [speaker006:] Uh huh. So i you know, "OK" is a very e easy example. [speaker004:] Mm hmm. [speaker006:] A person says, "Mm kay" [speaker004:] Mm hmm. [speaker006:] then, [speaker002:] Oh. [speaker006:] it's "OK" with a ha a little hatch and then, "PRN" and then "m mm kay", or whatever it is in quotes. [speaker003:] Oh, right. [speaker004:] OK. [speaker001:] Oh. Right. [speaker003:] I think Nelson's the only one who, uh [disfmarker] Oh, Morgan. You guys call him Morgan. [speaker006:] Yeah, yeah. [speaker003:] He's the only one who ever did that, I think. "Mm kay." [speaker004:] He does it a lot. [speaker005:] "Mm kay" [disfmarker] I [disfmarker] [speaker003:] He does it a [disfmarker] [speaker001:] OK. [speaker003:] yeah. [speaker005:] Yeah, and some people just say, "Kay." [speaker003:] Um [disfmarker] [speaker004:] Yeah. [speaker003:] "Kay", yeah, that's true. [speaker006:] Yeah, that's right. [speaker005:] S [speaker001:] Yeah. [speaker006:] And of course, you know [disfmarker] I mean, there's also this other reduction that w that we've done in terms of like not capturing every time a person says "and" versus "nnn" [comment] versus, you know, "nd", [vocalsound] whatever [disfmarker] [speaker004:] Mm hmm. [speaker006:] all the different ways of reducing "and". [speaker003:] That's true. 
[speaker006:] Because you can assume that the speech recognition, um, uh, pronunciation models will handle a certain amount of reductions of these types, and the same thing for, um, well "the". "The" [comment] versus "the" that there are those known variants, and [disfmarker] [speaker004:] Mm hmm. [speaker005:] Uh huh. [speaker004:] Or "a" [comment] versus "a". [speaker006:] Exactly. [speaker004:] Mm hmm. [speaker003:] Mm hmm. [speaker006:] And that they should be able to handle themselves, and [disfmarker] and that we assume also that for discourse purposes, those distinctions are not really the focus right now, and those distinctions could be added later if needed but it's unlikely that they would be. And it's so time consuming. [speaker001:] Mm hmm. [speaker006:] So we've made these [disfmarker] these simplifying assumptions with that, and so then you begin to think, "Well what about" OK "? I mean, maybe "mm kay" is an appropriate variant for "OK" and it wouldn't be necessary to mark those either, so it's [disfmarker] [nonvocalsound] it's definitely [disfmarker] there's a, uh, choice of how far you go with that, and whether "kay" is [disfmarker] is enough of a variant [disfmarker] [speaker003:] Mm hmm. [speaker002:] Mmm. [speaker006:] you know, it's [disfmarker] th this is a case where, um, I appreciate it when people mark those kinds of things but I [disfmarker] I sort of figure that i [@ @] [comment] that [disfmarker] that's not as far aw away from "OK" as "mm kay" is, so it's not quite so [disfmarker] [speaker005:] Mm hmm. [speaker004:] Mmm. [speaker006:] but I don't change it back if it's there, [speaker003:] Mm hmm. [speaker001:] I think with like some of them [disfmarker] like you know with uh the difference between "the" [comment] and "the", [comment] you can [disfmarker] you can distinguished very, uh, u dif definitively like what context they're gonna be used in. [speaker006:] you know. Yes. Mm hmm. [speaker004:] Mm hmm. 
[speaker001:] Like um, [vocalsound] like you would say "the [comment] apple" because it's "the" plus then the next word starts with a vowel. [speaker006:] Mm hmm. [speaker001:] Or "the cat" because it's next word starts with a [disfmarker] [speaker004:] Yeah. There's a study done on when people say "uh" and when people say "um". [speaker001:] Yeah, I'm sure it's [disfmarker] [speaker004:] Also, there's contec in concet contextual variation, [speaker005:] Oh, really? [speaker004:] and it's predictable apparently, [speaker002:] Hmm. [speaker001:] Mm hmm. [speaker004:] I haven't read the [disfmarker] [speaker005:] I was trying to figure that out myself a couple times, [speaker004:] It's really interesting. [speaker001:] But something like [disfmarker] [speaker005:] and I couldn't get anything. [speaker001:] Yeah. [speaker004:] Yeah. [speaker006:] Mm hmm, interesting. [speaker005:] I kept looking at all the "uh's" and "um's" and I was like, "Oh, this would be hard to do." [speaker004:] Yeah. [speaker006:] Oh, that's interesting. [speaker004:] You know, what's [disfmarker] On that sort of similar topic, I just thought of something that's been hard for me sometimes, is you can have a pronunciation of [disfmarker] of the word "a" [comment] as "a". [speaker006:] Huh. Yeah. [speaker004:] And then you can have "uh" like a pause or whatever as "uh", [speaker005:] Mm hmm. [speaker004:] and sometimes it's hard to tell what the person meant. Are [disfmarker] did they mean "uh" like they're pausing [speaker001:] Mmm, yeah. [speaker004:] or did they mean "a" like "a person"? [speaker001:] Yeah. [speaker006:] Yeah. [speaker001:] I've run into that. [speaker005:] Really? [speaker006:] Yeah. [speaker001:] Yeah. [speaker004:] Yeah. [speaker003:] Right. [speaker002:] Hmm. [speaker006:] Once in a while. [speaker004:] And [speaker005:] I don't usually get con [speaker004:] you can kind of tell from the context but sometimes it's hard, [speaker006:] Yes. [speaker005:] Right. 
[speaker004:] yeah. [speaker003:] Yeah, that came up a couple of times, [speaker006:] Yeah, I [disfmarker] [speaker004:] Yeah. [speaker005:] Huh. [speaker003:] yeah. [speaker006:] I agree. And [disfmarker] and when I [disfmarker] when I'm totally in doubt, if there's a case where it really could be either, and I could flip a coin, [vocalsound] I'll choose the one that would make syntactically the most sense, [speaker004:] Mm hmm. [speaker005:] Uh huh. [speaker004:] Mm hmm. [speaker006:] give them the benefit of the doubt that they're being fluent about it. [speaker004:] Mm hmm. [speaker005:] Uh huh. [speaker001:] Sometimes I'll put in like a bracket like uh, you know, the orthographic A slash UH like if I'm really confused. [speaker004:] Like you can't tell, [speaker006:] Mm hmm. [speaker004:] mm hmm. [speaker001:] Yeah, kinda go "Well there's two, so here." [speaker004:] Mm hmm. [speaker006:] It must be really rare though, I haven't run across that often. Ha ha h Is it rare? [speaker003:] Mmm mm. [speaker001:] A few times. [speaker004:] A few. It's really [disfmarker] yeah, [speaker006:] OK. [speaker003:] It's rare. [speaker004:] relatively rare. [speaker002:] I don't think I ever have. [speaker004:] Yeah. Yeah. I mean I can only think of like two or three occasions where I sat and thought about it for a while. [speaker005:] Hmm. [speaker004:] Other times just you figure [disfmarker] you look at it and what's the context. Yeah. It's OK, I don't know. [speaker001:] Right. [speaker006:] OK. Well, le let's see now, there's another [disfmarker] another methodo what I go I wanna ask you at some point, it just occurred to me, [comment] I'd like to ask you the following question, maybe you can have this in the back of your minds, something like, um, you know, what sorts of hypothesis [disfmarker] your hypothesis about the [disfmarker] you know, ti trying to find the distributional explanations for some of these things. [speaker003:] Huh. [speaker005:] Right. 
[speaker006:] What kind of, uh, hypotheses have [disfmarker] has this caused you [disfmarker] uh, I mean, don't wanna have you give away a paper topic that you wanna explore, but, you know, if there's interesting th uh, hypotheses that the [disfmarker] that y have been suggested to you by looking at the data. That kinda thing, maybe, uh, at some point. So, [speaker003:] Hnh. [speaker006:] and al Oh! And I also wanna say, that, um, I'd like to meet again also next week, [speaker004:] Mm hmm. [speaker006:] um, and I don't know if this is a good time frame, uh, maybe it'd be good to have it an hour later or [disfmarker] I don't know [disfmarker] I mean, could do anything [speaker001:] This is fine. [speaker004:] This time is fine for me. [speaker006:] about [disfmarker] [speaker005:] Yeah, this is a good time for me. [speaker006:] Good time for you? [speaker003:] It should be. [speaker006:] OK. [speaker003:] Um, I know I have something in the morning, which I don't normally have. I don't remember off the top of my head when it ends. Um, I'll let you know if it's [disfmarker] it'll be a problem but it should be OK. [speaker006:] OK. [speaker003:] Two thirty should be late enough. So. [speaker006:] OK. Excellent, cuz we could probably move it one hour later, [speaker001:] In m [speaker006:] I mean [disfmarker] I think, u unless [disfmarker] [speaker001:] I was just gonna say maybe fifteen minutes later would help me a little bit just cuz I have a class up at Dwinelle b just before this. [speaker006:] Oh, I see. [speaker001:] That's why I was a little late today [speaker006:] Wow. You did really well. [speaker001:] because [disfmarker] [speaker006:] Yeah. [speaker001:] Thanks. Well, I took the bus for part of the way. [speaker004:] Mm hmm. [speaker006:] Oh, did you? OK, OK. [speaker001:] But um, that might help me just a little bit because then it [disfmarker] I [disfmarker] you know, I'm gonna [disfmarker] gonna be hobbling a little slowly. 
[speaker004:] Then you don't have to rush. [speaker006:] Would anybody be constrained by having it be a full half hour later? [speaker004:] Hnh nn. [speaker003:] No. [speaker005:] No. [speaker006:] OK. And th the final question i [speaker001:] No. [speaker003:] Except we'll miss treat time. [speaker005:] Oh! [speaker001:] Tea! [speaker006:] Oh, that's right! Oh! [comment] I thought she said "mistreat time". [speaker001:] Oh no. [speaker003:] Maybe we could have like a [disfmarker] like a half hour, then take a little break, [speaker005:] Yeah. [speaker004:] Mistreat? Oh! [speaker005:] Yeah, [speaker003:] then come back and finish the meet [speaker005:] well that's [disfmarker] that's some good motivation to not start a half hour later. [speaker004:] You know what we could do? [speaker006:] Yeah, you know, that's [disfmarker] that's [disfmarker] that's an interesting [disfmarker] [speaker004:] We could just ask the [disfmarker] someone like the front desk person or Lila to like set aside a little bit for us. [speaker001:] Tha that's so funny. [speaker003:] Little bit of treats. [speaker006:] Hey, they would. [speaker001:] Oh that's true. [speaker002:] Mm hmm. [speaker006:] Yeah, they would. [speaker005:] Yeah. [speaker006:] That's [disfmarker] that's not a bad solution. [speaker005:] Mm hmm. [speaker003:] It's not gonna be birthday cake, so. [speaker004:] Yeah, so we'll survive. [speaker003:] Yeah. [speaker006:] Yeah, that's right, [speaker001:] This is very important. [speaker006:] it's already done. It's good to s good to have these things all scheduled. OK, so um [disfmarker] Oh, uh, eh then I was gonna ask you, what about an hour later? I mean, if we h had those things an hour later like, would people [disfmarker] [speaker001:] This is fine with me. [speaker006:] OK? [speaker004:] That's fine too. [speaker003:] Yeah. [speaker006:] OK, good. [speaker002:] Hmm. [speaker005:] Yeah. That's fine. [speaker006:] Alright, so, I'll [disfmarker] I'll so settle on that. 
[speaker004:] Yeah. [speaker006:] OK now, I wanted to say, you've been through a couple of different, uh, [@ @] improvements in the methods, the pre segmentation. There was another impro another improvement in that [disfmarker] Well, s let's ask if it was an improvement from your standpoint. Uh, the Tigerfish, um, approach. So, uh, s moving from having the first transcription done by an external service, and then you correcting that versus doing the initial transcription yourselves. Ho ho how [disfmarker] Have you [disfmarker] [speaker001:] Oh. [speaker006:] And so I think you [disfmarker] Yeah, I think you [disfmarker] Did you [disfmarker] [speaker005:] I don't think I have ever done one from an external [disfmarker] [speaker006:] has there already been t [speaker001:] I have. [speaker005:] I th I feel like they've all been done probably by another transcriber here. [speaker001:] You talking about the IBM ones? [speaker006:] Yeah, IBM or if [disfmarker] if it was dash TF, that's the Tigerfish. [speaker003:] Right. [speaker005:] T [speaker001:] Right, yeah, oh, yeah yeah. [speaker005:] Oh, m yeah, then I guess I have. Maybe the o one I'm working on now. [speaker001:] I could tell you something that I thought [disfmarker] [speaker004:] Mm hmm. [speaker006:] So d [speaker005:] They have different [disfmarker] definitely a different style, right? Do they end all [disfmarker] them all in dots, [speaker001:] Yeah. [speaker005:] like dot dot? [speaker006:] There's a lot of that. [speaker003:] Yes. [speaker004:] Yeah, [speaker002:] They used to. [speaker004:] mm hmm. [speaker005:] And they [disfmarker] and they capitalize like every single first [disfmarker] [speaker001:] I really th [speaker006:] Yeah. [speaker004:] And they put paren um, not parentheses. What's it called? Punctuation after brackets. [speaker006:] Yeah, that's right too. [speaker001:] Yep. [speaker005:] Oh, OK, yeah, then I'm doing that right now. [speaker003:] Right. Oh, right, [speaker006:] Mm hmm. 
[speaker004:] Yeah. [speaker003:] "Laugh, period". [speaker004:] Yeah. [speaker005:] Yeah. [speaker003:] Something like that, right? [speaker001:] Right. [speaker004:] Yeah. [speaker005:] Which makes no sense to me. [speaker004:] No it doesn't to me either [speaker005:] Like "that's not a syntactic part of that sentence". [speaker006:] Yeah. [speaker001:] If [disfmarker] [speaker004:] but. No. [speaker006:] Exactly. Exactly, I'm glad you [disfmarker] [speaker003:] Yeah. [speaker006:] Yeah, exactly. [speaker003:] Well, they're not linguists, [speaker001:] if [disfmarker] [speaker005:] Mm hmm. [speaker003:] so. [speaker004:] No. [speaker001:] if I could be completely frank, I really didn't like the IBM ones. [speaker006:] Yeah. Oh, good to know. [speaker001:] Yeah. [speaker004:] I didn't either. [speaker006:] Oh, that's helpful. [speaker001:] I found them to be just [disfmarker] I just [disfmarker] I [disfmarker] I ended up basically redoing them. [speaker006:] OK. [speaker002:] Hmm. [speaker001:] Um, like from start to finish, [speaker003:] I don't know if I did one. [speaker001:] so when I was checking them like it would take me basically as long as it would have taken me to just to transcribe it by myself. [speaker006:] Wow. [speaker003:] Hmm. [speaker005:] Hmm. [speaker006:] Wow! [speaker004:] I think it was a little bit faster than just transcribing it, [speaker006:] Wow, OK. [speaker005:] Yeah. [speaker004:] but I definitely saw a lot of [disfmarker] [speaker001:] I'm really picky. [speaker004:] there was a lot of errors that are not in the Tigerfish ones. [speaker006:] Mm hmm. [speaker001:] Yeah, Tigerfish are better. [speaker004:] Yeah. And they just have better intuitions, or I don't know what but, about what's said. [speaker005:] OK Which are [disfmarker] What are the two differences? I'm not sure. [speaker006:] I'm not sure if you had an IBM one. So this was earlier [pause] on. [speaker005:] So do I have a Tigerfish one right now? [speaker002:] Hmm. 
[speaker004:] If it says "TF". [speaker001:] If it says "TF". [speaker006:] You probably do. [speaker005:] Uh huh. OK. [speaker001:] Yeah. The ones that say "IBM" are like [disfmarker] They actually say "IBM". [speaker005:] So I don't think I've ever [disfmarker] The IBM ones were the ones that came in huge, huge paragraphs, huh? [speaker001:] Yes! [speaker004:] Yeah, and we had to split them up. [speaker005:] See, I never got one of those. [speaker001:] Yeah, and you have to parse them all down. [speaker005:] Right, [speaker001:] Yeah, yeah, yeah, yeah. [speaker005:] OK, yeah. Mm hmm. Yeah, I didn't g ever get one of those. [speaker006:] So you didn't get one? Oh, interesting. [speaker005:] No. I'm glad I didn't. [speaker001:] OK. [speaker002:] I didn't get one either. [speaker003:] I think those show up in Tigerfish too, cuz I think I've had a couple of Tigerfish where I've had to break them up. [speaker004:] Where they have li [speaker005:] Ye [speaker004:] Yeah, but they're not like [disfmarker] [speaker005:] They're not [disfmarker] Yeah, I can remember, u I used to see [disfmarker] [speaker004:] they're not huge huge, though. [speaker003:] A couple were. [speaker004:] Really? [speaker003:] Yeah. [speaker004:] Oh. [speaker003:] And I had to sorta break them down. [speaker004:] Oh. [speaker005:] Do they get them, um [disfmarker] [speaker004:] I don't know. [speaker005:] Id Is [disfmarker] They don't have like an interface, right? They just [disfmarker] [speaker006:] No. In neither case. And I'm [disfmarker] and I'm thinking [disfmarker] I'm thinking [disfmarker] you're causing me to think back over what the differences were in the [disfmarker] what they got [speaker005:] Yeah. [speaker006:] and IBM didn't [disfmarker] [speaker005:] Yeah. They just have [disfmarker] They just have it segmented. [speaker006:] Yeah. [speaker005:] You can tell because sometimes they have it so wrong and it's like, "Well, if you had just listened to what was before". 
[speaker004:] Yeah. [speaker001:] Yeah. [speaker004:] But they don't have that option. [speaker005:] Yeah, but they don't have that. Yeah, I figured that they didn't. [speaker001:] Right. [speaker004:] Yeah. [speaker003:] Right. [speaker006:] There's also some i iffiness about how d if [disfmarker] [speaker001:] Als [speaker006:] We're not sure really how Tigerfish does it, you know, the methodology. How do they do this. [speaker005:] Mmm. [speaker006:] Because sometimes you'll have the s the same word, and I've seen this, [comment] like, within [disfmarker] well, within a single turn, l it'll be spelled in three different ways. And it's like, you know, is this the same person who's kind of forgetful, [speaker001:] Huh. [speaker005:] Uh huh. [speaker006:] or is it that they have several people doing it together? You know, maybe it's parallel. [speaker003:] Right. [speaker005:] Right. [speaker006:] Maybe they're all doing it in parallel and one falls behind and the other one [disfmarker] [speaker005:] Yeah. [speaker001:] Oh, that's interesting. [speaker006:] But, who knows? [speaker004:] And they don't [disfmarker] One person doesn't listen to all the channels of one meeting, right? So they can't get clues from the other channels? [speaker006:] Well, you know, the [disfmarker] The way they're listening to it is they [disfmarker] they get a [disfmarker] they get [disfmarker] [speaker004:] Or can they? I don't know. Mm hmm. [speaker006:] It's linearized. So if you picture what we have on the screen, um, you know, there's this block and it goes over [disfmarker] I'm doing it from my perspective. You have this block and it goes over to here and stops and this person starts to talk and he goes over to here [speaker004:] Mm hmm. [speaker006:] and then you have an overlap over here. And so what they would get is this block from the first guy, this block from the second guy, and then the top of the overlap and then the second guy in the overlap. [speaker004:] Oh. 
[speaker005:] Oh, wow. [speaker003:] Hmm. [speaker006:] And then it would move on. [speaker002:] Hmm. [speaker006:] So it's sort of like this [disfmarker] you know, you go this way as much as you can, then you cycle through there and then you go up to this next level [speaker005:] So they hear all the channels at once like you guys were saying you first did? [speaker006:] and [disfmarker] [speaker004:] Oh, OK. [speaker006:] Um, let's see, they hear them [disfmarker] Oh! uh, they hear them separately in two chunks, but then sometimes they get confused about who the foreground speaker is, because they're not being told who the foreground speaker is. [speaker004:] Uh huh. [speaker006:] You know, and especially you can see if you have like four speakers, it be reason you can maybe find a way of doing that. But with nine speakers [disfmarker] [speaker001:] Eee! [comment] Yeah. [speaker006:] So what they get is they get a beep that tells them when there's something that they think [disfmarker] that the f pre segmenter thought there should be s transcribed in there. [speaker004:] Mm hmm. [speaker001:] Mm hmm. [speaker002:] Mmm. [speaker006:] And [disfmarker] so they'll get [disfmarker] And it's numbered so that we end up with the same number of [disfmarker] of transcribed slots. [speaker004:] As there are slots in the pre segmentation. [speaker006:] Exactly. [speaker004:] Mm hmm. [speaker006:] Cuz the first time when we didn't have them numbered, we ended up with a [pause] couple extra [pause] segments that were transcribed, and it was hard to patch the whole thing back together [speaker004:] Mm hmm. [speaker005:] Hmm. [speaker006:] because when it comes back to our end, um, it's just this long string of things that they've uh, partitioned according to the tape that they were given. [speaker001:] Mm hmm. [speaker006:] Cuz they don't [disfmarker] they actually don't have digital interface either so it's [disfmarker] it's audiotape. [speaker002:] Oh wow. 
[speaker004:] Oh, wow. [speaker003:] Huh. [speaker001:] Oh. [speaker005:] Hmm, wow. [speaker006:] And they type it on [speaker003:] Makes it really hard. [speaker006:] and then [disfmarker] then it comes back to us [speaker005:] Wow. [speaker006:] and Adam and Thilo have a program wa they put it back together into our interface format and then we can use it. [speaker005:] Mm hmm. [speaker006:] But, um, um, [vocalsound] there are f cases where you'll get a segment and it's someone else's utterance and that's because that was overlapping with someone else and this is the weaker speaker of the two, [speaker004:] Mm hmm. [speaker006:] and you'll have like two totally parallel r renditions of the same thing. [speaker004:] Yeah. [speaker006:] Because i it's just [disfmarker] i n [speaker003:] Right. [speaker006:] So what they get is like [disfmarker] i t is they get something like "beep one". And then [disfmarker] then they get "beep two", "beep three". Chuck Wooters does the numbers. And, um, it cycles through. [speaker003:] Wow. Oh, so they don't [disfmarker] Do they know the names of the speakers? [speaker006:] No. [speaker003:] Oh, OK. [speaker006:] And they don't know which channel is s l supposed to be together. [speaker003:] Cuz [disfmarker] y OK. [speaker001:] Huh. [speaker003:] Cuz sometimes there'd just be interesting things where there'd be a misunderstanding about like [disfmarker] like Morgan would be speaking but it'd be on your [comment] channel, and it'd be like, "This is clearly not a woman speaking." [speaker006:] Yes, exactly. [speaker005:] Hmm. [speaker004:] Yeah. [speaker006:] Yeah, that's quite right. [speaker003:] It's like, "How could they do that?" [speaker006:] And that's why. [speaker003:] It's like [disfmarker] Well now it's [disfmarker] it m seems more reasonable that they'd make a mistake like that if they had no idea. [speaker004:] Mm hmm. [speaker006:] Exactly right. [speaker001:] OK. [speaker004:] Yeah. [speaker002:] Mmm. 
[speaker004:] God, it must take them a really long time to do these. [speaker005:] Hmm. [speaker006:] Yeah. [speaker003:] Um. [speaker006:] They're very fast. [speaker004:] That's amazing. [speaker006:] Is l Yeah. [speaker004:] It sounds so hard. [speaker006:] Yeah. It [disfmarker] it turns out that they're really much faster than the other approaches we've used, and we don't know exactly how they do it [speaker004:] Mm hmm. [speaker006:] and I think that that's by intention, that [disfmarker] [@ @] [disfmarker] not just me who thinks that [disfmarker] it's that proprietarily, it's a very competitive business [speaker004:] Mm hmm. [speaker006:] and they probably don't wanna, uh, say too much about it. [speaker002:] Mmm. [speaker005:] Huh. [speaker001:] Right. [speaker004:] Leak their secrets? [speaker006:] Yeah. Yeah. But they're very fast. It's uh [disfmarker] [speaker004:] Well, I [disfmarker] I definitely prefer not doing the transcript I mean, I liked transcribing [comment] because I felt like I was much closer to the data [speaker006:] Yeah. [speaker004:] and I had a lot more ideas, [comment] like you were saying, hypotheses of what's going on [speaker006:] Huh. [speaker004:] and [disfmarker] [speaker005:] Mm hmm. [speaker002:] Mmm. [speaker004:] but it took a really long time and I'd get frustrated with it. And I think that I would lose my [disfmarker] [speaker001:] Yeah. [speaker004:] I don't know, I would not be as thorough, not be as careful. [speaker005:] I kinda miss it. [speaker006:] Excellent. Uh huh. [speaker005:] I liked [disfmarker] [speaker006:] You miss [disfmarker] [speaker005:] Yeah, [speaker006:] Oh! I see. [speaker005:] I liked transcribing. [speaker002:] I kinda did too. [speaker003:] Yeah, I liked transcribing. [speaker006:] Oh, interesting. [speaker004:] Mm hmm. [speaker005:] I [disfmarker] You know, I feel [disfmarker] [vocalsound] I feel like it was, um, more personal. I was [disfmarker] [speaker004:] It was more personal. 
[speaker005:] You know, the [disfmarker] n This is sort of just, you know, correcting other [disfmarker] [speaker004:] Yeah. [speaker005:] I feel like I'm just sort of like correcting a test or something now, [speaker004:] Mm hmm. [speaker005:] you know, that somebody's already done, [speaker006:] Oh, how interesting. [speaker005:] and [disfmarker] But, I mean, it's still [disfmarker] It's still cool. [speaker006:] Yeah. [speaker004:] Yeah. [speaker005:] But, um, yeah, I did [disfmarker] I did like actually doing the transcribing a little bit more. [speaker004:] You don't tend to notice as much I don't think at checking as you do transcribing, because it's already there so you don't have to pay as much attention. [speaker001:] That's true. [speaker005:] Yeah. [speaker006:] Yeah, OK. [speaker005:] Yeah, and when you're [disfmarker] when you're, um, just checking you're [disfmarker] You're usually just looking for certain things that you know are gonna come up, [speaker002:] Mmm. [speaker005:] and I usually assume that, um, the words are mostly right, although I g you know, I go through and I look and make sure all the words are right, but usually it's like, punctuation and capitalization and other things that I'm [disfmarker] end up changing. [speaker004:] And time bin corrections and things, [speaker005:] And time bins, [speaker004:] yeah. [speaker005:] yeah. [speaker004:] Yeah. [speaker001:] I guess I'm such a perfectionist that I sort of combine the two when I'm checking, like I act as though I'm transcribing it. [speaker004:] Mmm. [speaker005:] Really? [speaker001:] And so [disfmarker] Yeah, I really do. Like I'm [disfmarker] I tend to take a little [vocalsound] longer maybe than other people [comment] doing checking but like I [disfmarker] I'm like ultra careful. [speaker004:] Mm hmm. [speaker001:] I [disfmarker] That's just how I am. But, like I [disfmarker] I think I really, like, think of it in terms of re transcribing it almost [speaker005:] Yeah. 
[speaker001:] and I only [disfmarker] You know, b [speaker002:] Hmm. [speaker001:] So, like, I [disfmarker] I change time bins a lot, and I like will change what people write [disfmarker] like for whatever words they write a lot, [speaker004:] Mm hmm. [speaker005:] Right. Mm hmm. [speaker001:] like [disfmarker] So, I guess I don't see too much of a difference but that's just because it's me. [speaker004:] Mm hmm. [speaker001:] Like I think I like checking more for that reason because then it's sort of [disfmarker] it's like it's a [disfmarker] it's a little helpful. [speaker005:] Mm hmm. [speaker001:] It's like kind of, you know, somebody's helping me a little bit, but I'm actually doing the work anyway, [speaker004:] Mm hmm. [speaker005:] Right. [speaker001:] is kind of how I see it. [speaker004:] Hmm. [speaker002:] Hmm. [speaker005:] Hmm. [speaker006:] Yeah. [speaker004:] One thing that's annoying I think with checking is that a lot of times in the meetings you tend to have one or two people who talk a lot and the rest of the people just sit there until it's their turn to give their little report or whatever. [speaker003:] Mm hmm. [speaker002:] Mmm. [speaker001:] Right. [speaker004:] And yet you have to check the channels for if they did backchannels or if they did other things that you need to encode that weren't on the pre segmenter. But sometimes it's not visually easy. [speaker005:] Yeah. [speaker004:] And so I'm [disfmarker] Sometime I debate with myself whether it's really worth it to sit there and listen to the whole thing [comment] or risk missing a little bit and doing it visually. [speaker006:] Mm hmm. [speaker004:] It's hard to decide sometimes. [speaker006:] And this is when you crank up the visual. [speaker003:] Mm hmm. [speaker006:] I mean, you've still got the maximum [disfmarker] [speaker004:] Yeah. Mm hmm. [speaker005:] Yeah. [speaker006:] Yeah. OK. [speaker004:] Yeah. 
[speaker005:] Well, sometimes when you crank up the visual all it gives you is the background speakers. [speaker001:] Yeah. [speaker004:] And s Yeah, [speaker006:] Yeah, it's true. [speaker002:] Yeah. [speaker004:] that's the thing. [speaker006:] It's true, [speaker004:] i that's the thing that makes it harder sometimes. [speaker001:] Yeah. [speaker006:] it's just a big mess. [speaker002:] Or if they're breathing loudly. [speaker005:] So. [speaker006:] Yeah. [speaker005:] Yeah. [speaker004:] Yeah. [speaker006:] Yeah, that's right. Well, i it's a judgment call. And you can also say [disfmarker] well, you know, if the backchannels were so limit so low on volume that they weren't o visually obvious, then you can say, "Well, it's reasonable not to worry about them." [speaker004:] Yeah. [speaker005:] Yeah. The [disfmarker] I mean [disfmarker] but also if they have it all pre segmented and they can only hear the segments, they often miss what's in between because they didn't even have the option of listening to what's in between. [speaker004:] You mean the transcribers? [speaker001:] Right. [speaker004:] Yeah. [speaker005:] Mm hmm, [speaker002:] Yeah. [speaker005:] so. And you know, the [disfmarker] the segmenter usually misses little things like "mm hmm" and "yeah" [speaker006:] Yeah, that's tr right, That's quite right. [speaker005:] and [disfmarker] [speaker006:] Yeah, that's right. [speaker004:] And even bigger things. [speaker005:] yeah, sometimes the segmenter will [disfmarker] [speaker004:] I mean you've [comment] said things that weren't caught on the pre segmenter, like whispers, kind of. [speaker001:] Yeah. [speaker005:] Mm hmm. [speaker004:] Just like, "Oh, that's interesting", [speaker006:] OK. [speaker004:] or something like that. [speaker001:] Well, like little sighs, or like inbreaths, like as though you're gonna start speaking, [speaker004:] Yeah, you're about to s speak. 
[speaker001:] which I feel is important [speaker003:] Uh [disfmarker] [speaker006:] Yes, I agree. [speaker002:] Hmm. [speaker001:] because it means that somebody's about to interject something but then they decide for whatever reason that they're not going to. [speaker004:] But then they don't. [speaker006:] Mm hmm. [speaker004:] Yeah. [speaker001:] And oftentimes I find that the segmenter will miss those kind of things, those kind of vo vocal cues. [speaker004:] And sometimes even laughs. [speaker001:] Yeah, breath laughs especially. [speaker003:] Oh, yeah. [speaker005:] Yeah. [speaker004:] Uh huh. Uh huh. [speaker002:] Oh yeah. [speaker005:] Hmm. I usually take out [disfmarker] um, I usually don't put in any of the breaths unless they're laughs. I usually don't transcribe the [disfmarker] the inbreaths or things like that. [speaker004:] I don't either. M but, I mean I don't take them out if they're already there. [speaker005:] You know? [speaker004:] I don't [disfmarker] [speaker006:] c Yeah. It's [disfmarker] it's good to [disfmarker] [speaker003:] If it sounds like a yawn. I've transcribed yawns, [speaker005:] Mm hmm. [speaker006:] Yeah. [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker003:] um [disfmarker] You can hear, there's like. You know, it, like [disfmarker] [comment] something like that. [speaker004:] Yeah. [speaker005:] Yeah. [speaker004:] Mm hmm. [speaker006:] It [disfmarker] Well, it's communicative. [speaker003:] Uh [disfmarker] [speaker006:] Yeah. [speaker004:] Mm hmm. [speaker003:] Yeah, or when it's a laugh. [speaker005:] Yeah. [speaker003:] I mean, often in the Tigerfish there'd [disfmarker] it would si say "breath", but it [disfmarker] it w it actually seemed more like a laugh. [speaker006:] Yeah. [speaker002:] It's really a laugh. [speaker005:] It's a laugh, yeah. [speaker004:] But it's a laugh. [speaker001:] It's actually a breath laugh. [speaker004:] Yeah. And you can [disfmarker] [speaker003:] Um, cuz everyone's laughing. 
Someone has just said something funny [speaker005:] Uh huh. [speaker003:] and everyone's like, "Ah ha ha ha ha". [speaker004:] And everyone else is laughing, and then the other person is like "ha ha" [speaker001:] Right. [speaker003:] Yeah. [speaker004:] but it's not a actual laugh, [speaker003:] Right, [speaker004:] but like [disfmarker] [speaker003:] it's [disfmarker] it's not a very committed laugh, [speaker004:] I don't know how you do it, [speaker005:] Right. [speaker001:] Right. [speaker003:] but it's still a laugh. [speaker004:] you know? [speaker001:] It's [disfmarker] No, like, yeah, it's like. [speaker003:] It's like [disfmarker] [speaker001:] Yeah. [speaker003:] yeah, "I guess I'll laugh cuz everyone else is laughing," I [disfmarker] You know, who knows, but [disfmarker] but yeah. [speaker006:] Have you noticed that Tiger I've noticed that Tigerfish uses "cough" a lot where it isn't really a cough. [speaker003:] It should be "clear throat", or something. [speaker006:] You notice that? Yeah. [speaker005:] Yeah. Yeah. [speaker001:] Oh, yeah. [speaker002:] Mmm. [speaker004:] Oh yeah, mm hmm. [speaker003:] Yeah, I [disfmarker] I just fix that. [speaker004:] Mm hmm. [speaker006:] Mm hmm. [speaker004:] Yeah. [speaker005:] I'd fix that too. [speaker006:] Yeah. Do you find [disfmarker] you've [disfmarker] i This has caused me to wonder. So, in Tigerfish ones, do you find that where they've [disfmarker] where they have like "dot dot's" cuz it's segments they didn't hear, [comment] that [disfmarker] that you add [disfmarker] u you probably do add things in those places where it's [disfmarker] where nothing is transcribed really, because they didn't even hear it. [speaker005:] Mm hmm. [speaker004:] You mean in between the segments that they did transcribe? [speaker006:] Yeah. [speaker005:] Oh. Yeah. [speaker004:] Yeah. That's where you f tend to find [disfmarker] [speaker005:] Mm hmm. [speaker006:] Interesting. Oh, interesting. 
[speaker005:] Yeah, you find little lost [disfmarker] lost words in the middle. [speaker004:] Yeah. Yeah. [speaker005:] Huh. [speaker006:] Huh. [speaker004:] It's [disfmarker] it depends on the person, I mean, some people don't tend to do [disfmarker] aren't that involved in the meeting, or something, and they don't tend to say as much, [speaker003:] Mmm. [speaker005:] Mm hmm. [speaker004:] um, m backchannels or whatever. And some people [disfmarker] I mean [disfmarker] I remember [vocalsound] transcribing one person, all they did was clear their throat and breathe. [speaker001:] Yeah. [speaker004:] And then read digits and then it was like the end of the meeting. [speaker001:] Yo yo yo. [speaker003:] I think I know who you're talking about, but [disfmarker] [speaker004:] So. But then some people tend to [disfmarker] [speaker001:] There's one guy in particular who does that. It's really cute actually. [speaker004:] Mm hmm. [speaker003:] Uh [disfmarker] [speaker004:] Mm hmm. But then there are some people who are [disfmarker] Even if they're not actually speaking, you can tell they're more involved or interested or something, [speaker006:] Yeah. [speaker004:] cuz they're always, "Mm hmm, oh yeah, really, mm hmm" [disfmarker] Um, reacting. Yeah. [speaker001:] Reacting. [speaker005:] Uh huh. [speaker001:] Yeah. [speaker006:] It just occurred to me, this is a [disfmarker] a different kind of question, but I w from having listened to all these people, I mean, and it is kind of ab abnormal to be listening to them as if you're [disfmarker] they're talking into your ear, really. [speaker004:] Mm hmm. [speaker006:] You know, it's this really unusual acoustic circumstances. [speaker002:] Mmm. [speaker001:] Right. [speaker005:] Uh huh. [speaker006:] Um, I sort of end up feeling like I know these people better than I really do. And it's like you go out to tea and there they are [speaker001:] Yeah. 
[speaker006:] and, "Oh, you were so funny in that meeting" and it [disfmarker] it has this sort of unusual dynamic because, you know, I [disfmarker] I'm privy to information which really by all rights I shouldn't be, [speaker002:] Mmm. [speaker006:] except of course they [disfmarker] they agreed, [speaker004:] Yeah. [speaker006:] they signed, they were there, they agreed. [speaker005:] Mm hmm. [speaker004:] Right, [speaker002:] Mmm. [speaker004:] yeah. [speaker006:] But [disfmarker] but it [disfmarker] but it's sort of an unusual relationship to say, you know, i i you know, if you're in the meeting yourself, then you [disfmarker] of course you'd have access to all this information and all your inferences about, you know, whether they're funny or not funny or whatever, [speaker004:] I remember from the very beginning, I would go to tea and start recognizing voices. [speaker006:] but it [disfmarker] Oh, that'd be interesting. [speaker004:] And like turn around and be like, "Wait, that person doesn't look anything like I imagined them." [speaker003:] Yeah. [speaker001:] Yep. [speaker005:] Mm hmm. [speaker001:] Yeah, yeah. [speaker006:] Yes. [speaker004:] a eh [disfmarker] [speaker001:] Y you know who I kept wanting to meet? Was Bhaskara. [speaker004:] Nnn. [speaker006:] He's [disfmarker] he's on campus, I found this out. [speaker001:] Is he really? [speaker006:] Yeah yeah yeah yeah. [speaker001:] He has such a nice voice. I really like his voice, [speaker006:] Yeah. Nice guy. Nice guy. [speaker001:] so I kept wanting to meet him. And then John told me that he's not around anymore. [speaker006:] Oh, but he is. [speaker003:] Mmm. [speaker001:] Or something, [speaker006:] I went on [disfmarker] I actually went on a hike with him two weeks [disfmarker] three weeks ago. [speaker001:] so. Neat. [speaker003:] Oh, cool. [speaker001:] Just curious. [speaker006:] Yeah. Yeah yeah. Yeah. Nice guy. [speaker003:] I just wanted to know what Liz looked like. [speaker004:] Mm hmm. 
[speaker003:] I've never gotten to see her. [speaker004:] Nn mm. [speaker006:] Oh, she's here. [speaker005:] I haven't seen most of them. [speaker001:] I haven't either. [speaker003:] She's got a nice voice. I know she's here, I know she's a big part of the Institute, right? So. [speaker006:] Yeah, I'm sure you've seen her, [speaker003:] Um [disfmarker] [speaker006:] uh but [disfmarker] [speaker003:] I must have, I just don't know who [disfmarker] you know, what she looks like. [speaker004:] Yeah. [speaker003:] Um [disfmarker] Yeah, I think she's got a nice voice. [speaker006:] Mm hmm. [speaker003:] But anyway. [speaker006:] Yeah. [speaker004:] She always tends to get interrupted. [speaker005:] Hmm. [speaker003:] It's OK if we say nice things about these people, [speaker001:] Yeah. [speaker003:] right? [speaker006:] Yeah yeah, that's no problem. I also find that some of the [disfmarker] you know, if someone sits there and [disfmarker] and it [disfmarker] th there's this o one who doesn't say much, but she just is always pleasant. You know, and it's kind of a smiling voice and she laughs [speaker001:] Yes. [speaker006:] and [disfmarker] [speaker001:] I th I think I know who you're talking about. [speaker006:] What a difference. Yeah. I mean it's like [disfmarker] Actually it's [disfmarker] um, she's [disfmarker] she's just so pleasant. And [disfmarker] and, you know, adds a lot just by virtue of that. [speaker005:] Yeah. [speaker006:] You know, can [disfmarker] People will talk on all these technical things but she'll [disfmarker] she'll say something technical and then laugh, and [disfmarker] It's [disfmarker] it's very interesting, the ambience that you can get through these [disfmarker] these little things. [speaker001:] Yeah. [speaker004:] It's [disfmarker] really makes it a lot nicer to transcribe because there can be a lot of heated discussions about these very technical issues [speaker001:] Yeah. [speaker006:] Mm hmm. 
[speaker004:] and people will have their opinions and, you know [disfmarker] And then you can [disfmarker] It's really nice to have a little bit of humor, a little bit of break away from that [speaker001:] Mm hmm. [speaker002:] Mmm. [speaker004:] because it's hard to sit there and li see [disfmarker] like, "But, don't you see h you're being rude to this person, you're cutting off this person [speaker001:] I know. [speaker004:] and it's not nice?" [speaker001:] I started like kind of making mental friends. [speaker004:] Uh huh. [speaker001:] I'm just like, [comment] "I really like that person. You know what, I'm gonna be his or her friend." [speaker006:] I agree with you. [speaker001:] And then [disfmarker] [speaker006:] I agree, [speaker004:] Yeah. [speaker006:] it's [disfmarker] it's really interesting, Also, [speaker002:] Mmm. [speaker006:] i it's a persona, you get this [disfmarker] [speaker002:] Mmm. [speaker004:] Yeah. [speaker006:] there's definitely a sense of persona. [speaker004:] You definitely get an image of [disfmarker] uh [disfmarker] Especially if you can relate it to what the person [disfmarker] or y what you assume the person is uh interested in academically. [speaker006:] Yeah. [speaker001:] Yeah. [speaker005:] Yeah. [speaker006:] Yeah. [speaker004:] It's really [disfmarker] [speaker001:] Some of that computer stuff, I just [disfmarker] Oh, my God. I'll just sit there like going, "Oh fff!" It's going right over my head. [speaker004:] Yeah. [speaker001:] Yeah, I just [disfmarker] I start zoning out sometimes. Like, I don't mean to, but I'm just kind of going, "I don't know what any of these acronyms are [speaker004:] Yeah. [speaker001:] and no idea what you're talking about." [speaker004:] And there was a whole set of meetings that are like that. [speaker003:] Mm hmm. [speaker006:] That's right. [speaker004:] It [disfmarker] Wasn't there? [speaker001:] Yeah. [speaker004:] Yeah, those were really hard. [speaker005:] Oh, yeah. 
[speaker001:] And they're writing stuff on boards and stuff that we can't see, [speaker004:] Really hard. Yeah, and it's hard [disfmarker] [speaker001:] and so we don't know what they're referring to, [speaker003:] Right. Yeah, all you hear is the squeaky pen, [speaker001:] and so. [speaker004:] Yeah. [speaker003:] and you're like, "Oh, no, [speaker001:] Right. [speaker003:] something [disfmarker] something's going on that I don't know." [speaker001:] Yeah, they're referring to this or that and the other thing and you're just going, "No, [speaker004:] Yeah. [speaker001:] I don't understand." [speaker004:] U no. [speaker006:] Yeah, it's difficult. [speaker003:] Yeah, you guys talked about doing a videotape. [speaker006:] Yes. um, [vocalsound] it's mainly a matter of [disfmarker] two things. [speaker003:] The sort of [disfmarker] [speaker006:] First is the storage. It'd be so huge that, uh, you'd wanna be sure that it was of interest to someone, that it would be worthwhile doing. [speaker004:] Mm hmm. [speaker006:] And that's the other thing, that we don't have someone currently doing the video thing. It would really be interesting. [speaker003:] Yeah. Cuz I know it was mentioned. [speaker004:] Mm hmm. [speaker003:] Also having a meeting where you all wore blindfolds. [speaker006:] Yeah. [speaker005:] Oh? [speaker001:] Really? [speaker003:] Li it was Liz's idea, wasn't it? [speaker006:] That was Liz's idea. [speaker003:] Yeah. [speaker004:] What would be the point of that? [speaker006:] Oh, well, it's a question of feedback, and being able to give feedback. [speaker004:] Oh. [speaker005:] Oh, wow. [speaker004:] I think that would be really interesting. [speaker001:] Oh I see. [speaker005:] That would be interesting. [speaker001:] So you have to do it vocally as opposed to visually? Like you can't use gesture? [speaker006:] Yeah. Exactly. [speaker002:] Oh. [speaker001:] Ah. [speaker006:] Yeah. [speaker005:] Wow. 
[speaker006:] And, you know [disfmarker] I mean, th that was the meeting where, [vocalsound] um, [@ @] [comment] you know the m the [disfmarker] th y y [vocalsound] Partitions wouldn't work. Another possibility is pa partitions wouldn't work because then it would cut off the P Z Ms from having the indirect [disfmarker] [speaker004:] Right. [speaker005:] Mmm. [speaker006:] so it's like you have these unusual configuration problems that [disfmarker] that you wouldn't normally have. [speaker001:] Right. [speaker006:] Yeah, blindfolds. You know, and someone else said, "Well, maybe you could just turn the light out." [speaker003:] There you go. [speaker004:] That'd be kinda creepy. [speaker006:] But then, someone else said no, [speaker001:] Cute. [speaker006:] cuz there are these m special lights that have to stay on all the time [speaker004:] Oh. [speaker002:] Oh. [speaker001:] Oh. [speaker006:] for the fire code or whatever it is. [speaker005:] Oh, wow. [speaker004:] Oh. [speaker006:] But then [disfmarker] [speaker003:] I would just be so worried that someone would cheat. You know, it's sort of like a prisoners' di prisoners' dilemmas game, where [vocalsound] you know, everyone's wearing their blindfold but someone could just cheat and look, see what's [disfmarker] [vocalsound] see what's going on. [speaker006:] That's right. [speaker001:] Are you raising your hand? [speaker003:] Yeah. [speaker001:] Totally. [speaker003:] You know, someone just leaves, you know, gets coffee, [speaker001:] Right. Right, right. [speaker003:] comes back, and no one knows. [speaker001:] Gets bored. [speaker006:] Yeah, I mean, there are other things we could expand on and [disfmarker] with [disfmarker] i in terms of the methodology. 
It's nice to have a core set of data though and they've d actually done some [disfmarker] they've had some good results in terms of the digits which they do in terms of having a r a nice broad set of speakers doing the same thing, meeting after meeting after meeting. [speaker001:] Mm hmm. [speaker002:] Hmm. [speaker006:] So, um [disfmarker] [speaker004:] That's good. [speaker006:] I'm wondering, um, so we still technically [disfmarker] uh [disfmarker] you know [disfmarker] I mean, what should we do in terms of tea? Technically [disfmarker] [speaker003:] Mm hmm. [speaker006:] if we could hang on for fifteen more minutes and have tea? [speaker001:] Yeah. [speaker004:] That's OK. [speaker006:] And then continue with this [disfmarker] this next [disfmarker] next week? [speaker003:] Yeah, sure. [speaker004:] Mm hmm. [speaker005:] OK. [speaker003:] Oh, yeah. [speaker004:] It's OK. [speaker006:] That'd be great. OK, so, um, let's see. [speaker004:] I had something that I could bring up. [speaker006:] Yes. [speaker004:] Um, as far as ideas about looking at the data and getting ideas about whatever you said [disfmarker] hypotheses about how to figure it out. [speaker006:] Yes. Yeah. [speaker004:] One thing that I [disfmarker] I'm [disfmarker] you probably don't remember this [disfmarker] that I was really interested in is the use of "so" and the use of "or". Because the way people use it in these meetings, I found, was [disfmarker] I don't know if it was [disfmarker] it [disfmarker] if it's different than other people. I don't know if it's a computer speech thing. I mean, computer people speech thing, but um, [speaker005:] Hmm. [speaker004:] people tend to end utterances quite often with "so" or "or" and trailing off. [speaker001:] Oh, yes. [speaker005:] M Mm hmm. [speaker002:] Mmm. [speaker001:] A couple people in particular do that, [speaker004:] Yeah, [speaker006:] Mm hmm, [speaker001:] yeah. [speaker004:] mm hmm. [speaker006:] that's right. 
[speaker004:] And I found that really interesting, and I was curious to [disfmarker] to figure out when that happened, if it happened in certain occasions and whatever, and I never really got around to it, but. [speaker006:] Do you people [disfmarker] [speaker003:] Uh huh. [speaker006:] I mean, does everyone agree with that intu I mean, I f I s share her intuition that's it's [disfmarker] it is sort of unusual to have "so" used as a final, uh, part of a person's turn. [speaker005:] Oh, yeah. [speaker006:] I mean, it's [disfmarker] [speaker004:] It didn't sound stran [speaker005:] Hmm. [speaker003:] Uh huh. [speaker004:] I mean, it doesn't sound like bizarre enough that I don't understand. [speaker006:] Mm hmm. [speaker004:] But I don't think it's something that happens in just everyday speech really. [speaker001:] I don't [disfmarker] [speaker005:] I don't [disfmarker] [speaker001:] no, I don't agree necessarily. [speaker004:] Really? [speaker006:] I was wondering. [speaker001:] I do it. [speaker004:] Really? [speaker001:] Yeah, I trail off with "so". [speaker004:] Huh. [speaker001:] Um, I don't necessarily do it as often as the one speaker I'm thinking of, um, but, I do do that because [disfmarker] I think partly because I want to make sure that people know that my [disfmarker] my thought [disfmarker] my utterance is finished, [speaker004:] Yeah. [speaker005:] Right. [speaker001:] but that it's not necessarily final. [speaker004:] Like that you're flexible? [speaker001:] Like, you can [disfmarker] Right. Or you can build on it [speaker005:] Right. [speaker001:] or j d expound on it. [speaker004:] Mm hmm. [speaker005:] I think it's sort of a like [disfmarker] you guys can take from here, you know, somebody [disfmarker] somebody build on my thought. [speaker001:] Yeah. [speaker002:] Yeah. [speaker005:] It's [disfmarker] are you [disfmarker] people also do it with "but". [speaker001:] It's kinda like a shrug. 
[speaker004:] Yeah, but it tends to happen [disfmarker] [speaker001:] It's like a m it's like a [disfmarker] [speaker005:] You know? [speaker004:] I mean, "so" tends to happen when the person brings up a new idea that kind of goes against what the other people is saying. [speaker001:] so. [speaker004:] Or at least that's what I found. [speaker001:] What do you mean? [speaker004:] Like if someone in a meeting is talking about a certain thing and someone kind of interrupts or doesn't necessarily interrupt and says something [disfmarker] either a new topic or a different take on the same topic will tend to use [disfmarker] will trail off with "so". [speaker002:] Oh. [speaker001:] Oh oh oh, yeah, OK. [speaker002:] Hmm. [speaker001:] Mm hmm. [speaker002:] Uh like [disfmarker] "So, you can see you might be wrong, but I don't want to actually say that." [speaker006:] Yes, that's possible, [speaker004:] Yeah, [speaker002:] Mm hmm. [speaker004:] kind of, yeah, yeah. [speaker006:] yeah. [speaker003:] Huh. [speaker006:] Yeah, interesting. [speaker004:] Yeah. And "or" tends to be used with questions more than [disfmarker] So. [speaker003:] Yeah. [speaker002:] Mmm. [speaker005:] That's true. [speaker004:] But. I thought that was really interesting. [speaker001:] Or y I've heard you say "so", in that sort of sense like uh, um, "So in other words blah blah blah blah." [speaker006:] Mm hmm. [speaker001:] Like a sort of an inad [speaker006:] Of course [speaker001:] uh, eh [disfmarker] [speaker006:] b but that's not really an example of this, because they don't just say "so". [speaker001:] Right, [speaker005:] u Nnn. [speaker001:] but [disfmarker] That's true. [speaker006:] Is that right? [speaker005:] Ending with "so". [speaker006:] I mean, s I just wanna be sure. [speaker001:] Or I [disfmarker] maybe I'm misunderstanding what you were sa [speaker005:] When you end with "so". [speaker004:] When you end the utterance. 
[speaker001:] When [disfmarker] [speaker005:] When you say something [disfmarker] [speaker001:] Oh, just [disfmarker] just ending with "so". [speaker006:] Yeah. [speaker005:] Right, and you say "so". [speaker004:] Yeah. [speaker001:] OK. [speaker006:] Yeah, "I c I could get all those extra signatures, [speaker005:] It [disfmarker] [speaker006:] but I don't know, so." [speaker005:] Yeah. [speaker004:] Right. [speaker001:] Yeah, to me it's like a vocal shrug. [speaker005:] It's [disfmarker] [speaker004:] Yeah. [speaker006:] That's a very nice characterization. [speaker004:] Mm hmm. [speaker006:] Interesting. [speaker001:] S sort of. [speaker004:] I'd definitely like to look more at that if I can, but. [speaker001:] That's cool. [speaker002:] Hm hmm. [speaker005:] I think people end in "but" a lot too. And I feel that that means that they want somebody else to take it up? [speaker001:] Mm hmm. [speaker005:] Or somebody just sort of like [disfmarker] you know, take it from here? [speaker004:] Oh, yeah. Mm hmm. [speaker005:] So, you know, "Blah blah blah blah blah, but." [speaker004:] "But I'm not really sure [speaker006:] Once again a sh [speaker004:] so you can [disfmarker] you could finish my thought." [speaker005:] Yeah, so s [speaker006:] is it a shrug? Would you say [pause] it's another shrug? [speaker005:] Um, [speaker006:] Or [disfmarker] [speaker004:] It's a different kind of shrug though. [speaker005:] I feel like it [disfmarker] I feel like it sort of, um, asks for somebody else to, uh, put input in now. [speaker006:] OK. [speaker001:] Yeah. I could see that. [speaker004:] I think it shows less of confident [disfmarker] like less confidence in what they just said [speaker002:] Mm hmm. [speaker006:] OK. [speaker003:] Or giving permission? [speaker005:] Yeah. [speaker006:] OK. [speaker005:] Yeah. [speaker004:] in a way. [speaker001:] Same with "I don't know". [speaker004:] Yeah. 
[speaker001:] You can also end an utterance with, uh, "blah blah blah but I don't know". [speaker002:] Hmm. [speaker004:] Yeah. [speaker001:] Like that. [speaker003:] Uh huh. [speaker004:] Yeah. [speaker001:] Like, I feel like maybe that's similar to "but". [speaker004:] Yeah. [speaker005:] Mm hmm. [speaker004:] More similar than to "so" I think. [speaker006:] Yeah, that's interesting. [speaker001:] To "so", yeah. I think so too. [speaker006:] Huh. Huh, interesting. What about, um, the way people treat w when they give a joke, I mean, and [disfmarker] and they're like [disfmarker] they laugh and then after the laugh they say, "uh". I mean, if you [disfmarker] so it's like, "ha ha ha uh". I mean [disfmarker] Do you [disfmarker] Have you run across this? [speaker004:] Yeah. [speaker001:] Yeah. [speaker002:] Mmm. [speaker006:] And it's like, to me [disfmarker] I [disfmarker] I think I've heard comedians do that. There's some there's something there about [disfmarker] you know, you [disfmarker] you make a joke and then you have an "uh" in there somehow, which I don't really understand the function of. [speaker001:] Maybe it's like saying like, This is funny [speaker006:] Do you [disfmarker] [speaker001:] but maybe it shouldn't be funny "or" This is funny but maybe it's funny for some sort of perverse reason ". [speaker004:] Or maybe it's just a link to get back to the conversation or something? [speaker003:] Hmm. [speaker004:] Like to connect it with what is going on elsewhere? [speaker002:] Mmm. [speaker001:] Well, eh how [disfmarker] OK, so the one I'm thinking of is like if like "ha ha" uh, you say [disfmarker] you would say like, um, uh like [disfmarker] OK, so he's laughing "ha ha" and then he'd go "i uh". [speaker004:] Oh, like kind of [disfmarker] kind of laughing with it [speaker001:] And [disfmarker] Yeah. Kinda, [speaker004:] in a way? [speaker001:] yeah, [speaker004:] Yeah. [speaker001:] kinda like "Uh." You know. [speaker004:] Yeah. 
[speaker001:] And to me that's more of like the [vocalsound] "Maybe this shouldn't be funny but it is. Maybe this is funny because it's kind of disgusting or perverse in some way." [speaker004:] Mmm. [speaker003:] Huh. Yeah. [speaker006:] It's possible, now what I'm thinking [disfmarker] I didn't have the extra [disfmarker] the extra e um, what do you say, nuances in the "uh", which you just [disfmarker] which you just did. [speaker001:] Wi [disfmarker] Oh, OK. [speaker006:] However, the context fits. I mean, that's an interesting take on it. I think [disfmarker] [speaker004:] Mmm. [speaker006:] It's possible. It's possible. [speaker001:] So you meant just sort of like a [disfmarker] just a normal "uh"? [speaker006:] It's just a normal "uh". Mm hmm. [speaker004:] Yeah. [speaker006:] But it's stuck in [disfmarker] maybe it's a little louder than usual. It's not any of the extra, um, you know, glottal thing. [speaker002:] Hmm. [speaker001:] Meaning, "Let's keep going"? [speaker004:] Maybe it means that. [speaker006:] Well, but it just seems like it's more prominent than [disfmarker] You know it really is like a placeholder of some sort, [speaker004:] Uh. [speaker006:] like [disfmarker] which I think is self reflexive in [disfmarker] in the [disfmarker] sort of brings focus back onto the jokemaker, [speaker001:] That's [disfmarker] yeah. [speaker004:] Maybe if [disfmarker] maybe it's when people don't laugh enough or something. [speaker006:] but [disfmarker] [speaker001:] Yeah. [speaker006:] Yeah, that's possible. [speaker004:] I don't know. [speaker001:] Yeah, like, Let's come back to the original subject [speaker006:] OK. [speaker001:] and I will take it from there ". [speaker006:] Oh. It could be. Yes, po that's possible. [speaker001:] Maybe. [speaker006:] It's possible. [speaker001:] Cuz I've heard, um, Morgan do that. [speaker006:] Exactly who I was thinking of actually. [speaker001:] Yeah, he does that a lot. [speaker006:] Uh huh. Yeah. Exactly. Yeah. 
[speaker001:] For obvious reasons, probably, if that's the way he uses it. [speaker006:] That's true, that would [disfmarker] The [disfmarker] And he does continue after that. But it's just [disfmarker] you know, to say "uh" [disfmarker] you kn to say "uh" while you're laughing i it's just to me it's an interesting category. [speaker005:] Right. I think it may almost have to do with like, "OK, I told a joke, now everyone's laughing, and now it's common courtesy I'll sort of tell everyone to stop laughing, you know, so you don't offend me about how good my joke was." You know, it's almost like, "OK, uh, OK, my joke's over, you know, you can stop laughing now". [speaker004:] Right. [speaker006:] That's possible. [speaker002:] Mmm. [speaker004:] Giving them permission to [disfmarker] [speaker005:] Yeah. You know, because it's like [disfmarker] How do you [disfmarker] you know, how much do you laugh after someone's joke [speaker003:] Hmm! [speaker004:] I don't know about permission. [speaker001:] Yeah. Yeah, yeah, yeah. [speaker005:] to tell them how much you enjoyed it [speaker001:] Be polite, or whatever. [speaker005:] or [disfmarker] Yeah. Or how long do you wait after the joke [speaker004:] To keep going with the meeting. [speaker005:] to [disfmarker] Right. [speaker004:] Yeah. [speaker006:] Very interesting. Interesting. [speaker004:] Cuz sometimes you can get on digressions that just last for like minutes at a time. [speaker001:] Right. [speaker006:] Yes. [speaker004:] And people are just hysterical and they're like, "What were we talking about? I don't know". [speaker006:] Yeah, that's [disfmarker] that's [disfmarker] Huh, OK. [speaker004:] It's hard when people get really really in like into their laughing and start talking at the same time. [speaker005:] Yeah. [speaker004:] It's hard to figure out what they're doi what they're talking about, and also they don't tend to talk very f there's a lot of disfluencies when they're laughing and talking. 
[speaker003:] Yeah. [speaker005:] Definitely. [speaker002:] Mmm. [speaker004:] It's hard to [disfmarker] [speaker005:] Sometimes they make me laugh so hard, because they'll just be talking [comment] about something technical [speaker004:] Yeah. [speaker005:] and I'll be so bored and then suddenly they'll say something that they only just sort of chuckle at and then I laugh at for the next like twenty minutes because I think it's so funny. Like one time [comment] everybody was doing the, um, digits, and they were all doing it at the same time, and they started off kind of shaky, [speaker003:] Oh, right. [speaker005:] they were like, "Oh, do we wanna do this at the same time, you know, that's [disfmarker] [@ @] [comment] [disfmarker] you know, wh let's not all start laughing" and then they all did it at the same time and one person ended up at the very end doing the digits and everybody had finished [speaker003:] Yeah. [speaker005:] and he had like two more lines to go, [speaker001:] Go [disfmarker] [speaker005:] and when he ended somebody turned to him and went, "You lose!" [speaker006:] Oh n [speaker005:] And I just kept laughing and laughing and laughing. [speaker001:] Johno keeps making comments about the transcribers. [speaker006:] Yes. [speaker004:] Yeah. [speaker001:] It's the funniest thing. [speaker006:] Yes. [speaker001:] He [disfmarker] he'll like [speaker003:] What does he say? [speaker006:] He's very funny. [speaker001:] say [disfmarker] he'll say like mean things about the transcribers. He'll be like [disfmarker] [speaker003:] Why? [speaker004:] "Oh, but I don't mean the transcribers we have here of course." [speaker005:] Oh. [speaker001:] Like [disfmarker] Well sometimes he'll do that, yeah. But sometimes he doesn't. And sometimes he'll just be like uh, you know, um, "Here, Transcribers, try transcribing this!" [speaker004:] Yeah. Yeah, like when they do random sounds that you have no idea [disfmarker] [speaker001:] Like, "Thanks!" 
[speaker006:] He's very funny. [speaker001:] Right. [speaker004:] Yeah. [speaker006:] I [disfmarker] I started laughing with something he said the other day, it was [disfmarker] it was the beginning of a meeting and it hadn't gotten far into it and [disfmarker] and he's [disfmarker] he's doing this [disfmarker] one of these mock fights with Nancy, which he does. Have you run across these mock fights? [speaker004:] Mm hmm. [speaker006:] And he s he says, [vocalsound] um, something like, uh, I [disfmarker] i and he's m e they [disfmarker] they have this nice [@ @] [comment] mutual condescending joking thing going on, and he says, "Well, I could explain this to you. I have a better laugh than you do. The transcribers told me so." And it went on a little longer but it's li it's like this really [disfmarker] [speaker001:] Li [speaker006:] Yeah, and then at one point she [disfmarker] you know, at another meeting, she [disfmarker] she was not sure which channel she was on, and he said, "Well, I think you're on channel so and so" and then she said, "Well, no, sh shouldn't it be channel so and so?" and he says "Well, no, it should be channel so and so". And then it turned out it was channel so and so which he said, and [disfmarker] and she said [disfmarker] he s he said, "I bested you again Nancy". And she said, "Yes, that's my recollection of all of our meetings together". [speaker004:] Oh my God. [speaker003:] Oh, no. [speaker006:] They have this really nice [disfmarker] You know, it's really [disfmarker] Those sorts of things, i it's true, you know, it's like, it really does liven things up to have the, uh, [vocalsound] the nice interrelationships. [speaker005:] Yeah. [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker002:] Hmm. [speaker006:] I think [disfmarker] I [disfmarker] I found that's a nice aspect of a lot of these meetings, the [disfmarker] eh the, uh, eh little periods of humor like that. [speaker004:] Yeah. 
[speaker006:] Alright, well, so do we [disfmarker] would [disfmarker] would [disfmarker] we could do digits if we so desired. If w if we wanted to do that. [speaker003:] Woo hoo! [speaker001:] Ah, the digits task. [speaker003:] Let's do simultaneous digits. No, let's [disfmarker] let's not. [speaker006:] Let's do simultaneous digits. [speaker001:] We [disfmarker] can somebody explain to me why you do digits? [speaker003:] No, no, let's not. [speaker006:] No, you don't? [speaker001:] I still don't really know. [speaker006:] OK, the idea is that you have [disfmarker] then, um, you have ground truth, you know exactly what they're supposed to have said, and then of course sometimes we go through and actually, um, people do double check that they read them correctly and all, [speaker002:] Mmm. [speaker005:] Mmm. [speaker003:] Yeah. [speaker006:] so that we're sure, as you've done. [speaker003:] I remember I did that a little bit, [speaker004:] Yeah, I did that once too. [speaker006:] Yeah. [speaker003:] and, yeah, you must have too. [speaker004:] Yeah, yeah. [speaker006:] And you [comment] did also? [speaker002:] We [disfmarker] we are meant to be transcribing if they're just left in the [disfmarker] the curly brackets? [speaker003:] Kinda early on. [speaker006:] No, because it's [disfmarker] it's [disfmarker] it should be able to be taken care of efficiently with this interface. I mean, if the T Tigerfish people do it, then, you know, leave it, [speaker003:] Yeah, they do write the numbers in. [speaker004:] The [disfmarker] Yeah. [speaker006:] that's fine. [speaker004:] Yeah, I've been just leaving it. [speaker002:] And sometimes they don't. [speaker006:] Yeah. [speaker003:] Yeah. [speaker001:] Oh, I've been writing the numbers just cuz I wasn't sure what to do. [speaker006:] Sometimes they don't. Well, if you do that's [disfmarker] that's alright as well. [speaker005:] Oh, really? [speaker001:] You don't have to? [speaker006:] You don't have to. [speaker001:] Oh, OK. 
[speaker002:] Hmm. [speaker006:] Because, uh, we're [disfmarker] we have the assumption that we'll take care of it this [disfmarker] in this other way, but In any case, the idea is that you have ground truth, [speaker002:] Oh. [speaker006:] you know exactly what the person should have said and you check to be sure, and usually there aren't many errors [disfmarker] eh w when they were checked. Um, and then, you have this multiplicity of voices, uh, reading these things, and you can do really good speech recognition that way. So they have some papers out that they've done this. It's really [disfmarker] it's been very fruitful in terms of the analyses. [speaker005:] Huh. [speaker002:] Hmm. [speaker006:] It's nice to have, you know, a standard [disfmarker] Uh, you know, it's like, uh, psychological approach also that [vocalsound] you like to have some standard stimulus here [speaker001:] Mm hmm. [speaker003:] Oh, yeah. [speaker006:] and then you can [disfmarker] [speaker003:] No, I totally agree. [speaker006:] Yeah. [speaker001:] Mm hmm. [speaker006:] I [disfmarker] I agree, I think it's a nice [disfmarker] it's a nice design feature. [speaker001:] Where's the little things that's supposed to say "transcript L dash blah blah blah"? [speaker003:] Oh, it's at the top. [speaker006:] Yes, you run across that. Let's see, I guess it would be [disfmarker] it's up there. [speaker005:] It's right here, [speaker002:] Oh, I see. [speaker005:] yeah. I looked for that too. [speaker006:] Uh huh. Up there at the top. [speaker001:] Ah. Oh. OK. [speaker004:] OK. [speaker003:] So, I have a question. [speaker006:] Yeah. [speaker003:] Cuz now I'm looking at this and I realize I don't know the answer, is, um, sometimes when people are reading "zero", [comment] they say "O", [speaker006:] Yeah. Yes. [speaker002:] Yeah. [speaker003:] and sometimes they say "zero", [speaker001:] Mm hmm. [speaker003:] but [disfmarker] And I thought that there must be an indication here. 
[speaker006:] Used to be. [speaker004:] That's what I thought too. [speaker006:] Used to be. [speaker003:] Oh, [speaker005:] Hmm. [speaker003:] that [disfmarker] "read O or read zero". [speaker001:] Oh. [speaker006:] This [disfmarker] they changed them. Initially it was words all spelled out, [speaker004:] Mm hmm. [speaker006:] so "ONE" [disfmarker] [speaker001:] Oh. [speaker002:] Mmm. [speaker003:] Oh, my! [speaker006:] Yes. [speaker005:] Oh, wow. [speaker003:] Yeah. [speaker006:] And that was like a Stroop test in some ways. It took a lot of conc When I was doing those I had to really concentrate. [speaker003:] Yeah. [speaker006:] It was like s "One, s seven" [disfmarker] you know, it was like [disfmarker] You really had to [disfmarker] "let's see, the words and the numbers" [speaker003:] Yeah, you're not used to seeing the numbers like that. [speaker006:] and [disfmarker] [speaker004:] Yeah. [speaker006:] and then you'd have an "OH" periodically and, um, they were initially spelled out that way, [speaker005:] Hmm. [speaker001:] Huh. [speaker006:] one or the other. [speaker003:] I see, so that's why. [speaker006:] But we converted them to this format, and then the idea [disfmarker] I mean, you could conceivably put an "OH" or a "zero" for the "O's" and still have this benefit, but I think that the idea is that people can choose what they think is most natural for them to say in those [disfmarker] those locations, [speaker005:] Right. [speaker006:] and [disfmarker] you know, maybe there'd be [disfmarker] [speaker001:] Mm hmm. [speaker003:] So it's optional. [speaker002:] Hmm. [speaker003:] You can say one or the other? [speaker006:] Yes, mm hmm. [speaker003:] OK. [speaker002:] Mmm. [speaker006:] So, um [disfmarker] [speaker002:] Oh, can I ask also? [speaker006:] Sure. [speaker002:] Um, it says [disfmarker] it looks like the groupings are important, but some of the transcripts I've seen they don't have any commas or breaks, it's just a string of numbers. 
So we should be adding the breaks if [disfmarker] [speaker006:] Yes. Well, see [disfmarker] er [disfmarker] also, this has been a [disfmarker] a change in the procedure that [disfmarker] used to be that they were like uh one long string [disfmarker] [speaker002:] Oh, OK. [speaker006:] different sized strings. But to have them broken up like this, it's [disfmarker] it's m been about uh maybe half a year that [disfmarker] that this [disfmarker] or maybe [disfmarker] maybe seven months, I don't know, maybe longer, [vocalsound] that they've gotten actually broken up within the line. [speaker002:] Mmm. [speaker006:] So the idea is [@ @] [comment] to pause when you see that. [speaker004:] But in the transcriptions [disfmarker] I mean, we don't have to deal with that because when the other interface deals with it then they break it up or whatever. [speaker006:] No. That's right, break it up more finely. [speaker004:] Yeah. [speaker002:] But if you can clearly hear pauses and there've not been transcribed, we should be adding the commas or not? Though the one I'm working on now is like that. [speaker006:] Well [disfmarker] I would say don't worry about that, because, um, [vocalsound] this is w [speaker002:] OK. [speaker006:] again [disfmarker] you know, it [disfmarker] it'd be nice to get the words right, but, um, the [disfmarker] uh, the interface that these guys used, uh, allows you to set the time boundaries really accurately, [speaker004:] Mm hmm. [speaker006:] so it's not so important. It's more important to focus on the words, because there we don't know the ground truth. [speaker003:] Mm hmm. [speaker006:] So. OK, so [disfmarker] [speaker004:] Are we gonna do them together then? Or no? [speaker006:] Well, I don't know, how do you feel? [speaker003:] Yeah, I don't care. [speaker006:] I mean, what we could do is we could do it together one week [speaker003:] Whatever. [speaker006:] and then separate the next or something like that [speaker003:] Sure. 
[speaker006:] to give the variety. [speaker001:] Sure. [speaker004:] Whatever. [speaker006:] We [disfmarker] [speaker005:] Sure. [speaker006:] why don't we do them together this time just because [disfmarker] in the interest of time, to get you out in time. [speaker004:] OK. [speaker005:] OK. [speaker001:] OK. [speaker006:] OK, so, um, we just start out, we [disfmarker] and you s you say, "transcript L" whatever it is, and then launch into the numbers and then pause between the gr the gaps [speaker003:] Sure. [speaker006:] and then [disfmarker] I guess you're supposed to treat [disfmarker] [speaker005:] Pause after each line? [speaker006:] Yeah, you s well, you're supposed to treat each line as kind of like a little sentence. What do they say, [speaker005:] OK. [speaker006:] " Each line into smaller groups. Please read each string as if you were giving it to someone over the telephone using the groupings indicated. [speaker005:] Hmm. [speaker003:] Uh huh. [speaker004:] Pause briefly between lines. [speaker006:] Yes, [speaker003:] Right. [speaker006:] there you go. [speaker004:] OK. [speaker006:] OK. So, here we go. [speaker001:] OK. [speaker005:] Alright. [speaker006:] Ready, set, go. [speaker005:] Eight three one, seven six, three nine, [speaker001:] Joel. [vocalsound] I almost started laughing cuz you were cracking up. [speaker005:] And I only started laughing when I had one batch of numbers left, and I couldn't get it. [speaker004:] Oh no. [speaker003:] Aw! I'm just happy I didn't make any mistakes. [speaker001:] He [disfmarker] You almost made me mess up. [speaker005:] Um. [speaker002:] It was hard! [speaker003:] Oh yeah? That's why I [disfmarker] that's why I sat back here, I was like, "I didn't want [disfmarker] I [disfmarker] I don't wanna look at you people." [vocalsound] To read it. [speaker005:] I it just c [speaker006:] That's right. It's [disfmarker] it's funny how distracting it is, you know? It's like there's no content. [speaker002:] It is. 
[speaker004:] It is, it's hard. [speaker005:] It is. [speaker001:] Ye he almost made me mess up. [speaker006:] Did y did you finish your last batch? OK. [speaker005:] I finished my last one, it was really hard though. [speaker004:] OK, good. [speaker006:] You're right in the middle of it, yeah. I'm on the edge. [speaker005:] Yeah. I mean, I got all the way down and then the very last one came and I was like, "I haven't laughed." [vocalsound] And then I was [@ @]. And then I just couldn't help it. [speaker004:] Yeah. You did? [speaker003:] Oh, no. Oh, that's the killer. [speaker006:] OK, well, I wanna thank you very much, and um, look f and so we'll meet next week at the same time, u unless [disfmarker] I mean, I g I need to check on the schedule but I think, um, [speaker004:] Good. [speaker003:] Alright then. [speaker005:] OK. [speaker001:] OK. [speaker004:] OK, so unless we hear from you. [speaker006:] The same [disfmarker] same time or within an hour ra of it. [speaker004:] OK. [speaker005:] OK. [speaker002:] OK. [speaker003:] Sounds good. [speaker006:] Good. [speaker005:] Sounds good, yeah. [speaker006:] Great. Thanks very much. [speaker001:] OK. [speaker003:] OK. Signing off. [speaker005:] Alright. [speaker006:] Signing off. [speaker001:] Bye. I love you, man. [speaker007:] There're some words that weren't [disfmarker] [speaker006:] OK. We're on. [speaker007:] See, there're some stretches that weren't even in one or the other, [speaker002:] Right. It would have to be hand [disfmarker] [speaker007:] so [disfmarker] [speaker002:] Right. [speaker008:] Yeah. [speaker007:] What are you gonna do about those, though? [speaker002:] It would have to be [disfmarker] Well, a h a person would [disfmarker] would check it over [speaker001:] Hello? [speaker007:] You're never [disfmarker] [speaker001:] OK. [speaker002:] and then give a final [disfmarker] [speaker001:] Should we close the door? [speaker006:] Oh, we'll wait for Chuck. 
[speaker008:] I mean, if recognition's better on the channelized segmentations, then [disfmarker] [speaker006:] Close the door behind him. [speaker002:] I'm [disfmarker] Well, it's more [speaker003:] We're [disfmarker] we're recording by the way, [speaker002:] you didn't [disfmarker] [speaker003:] just for [disfmarker] [speaker002:] Oh, sorry. [speaker003:] I mean, uh u [speaker006:] Y you can keep talking, [speaker003:] You can keep talking. [speaker006:] but you should be aware. [speaker003:] I just [disfmarker] I just [disfmarker] g guess you were gonna [disfmarker] [speaker002:] You were saying that we were waiting for Chuck, so I just [disfmarker] [speaker006:] Well, and then, uh [disfmarker] then Morgan said "why don't we start?" [speaker003:] Well [disfmarker] [speaker002:] OK. [speaker006:] and I said "O K". [speaker003:] So the compromise is we're [disfmarker] we're starting but we're leaving the door open. [speaker002:] OK. [speaker003:] So we'll have a little more background noise and [disfmarker] [vocalsound] Uh. [speaker006:] Or we can close the door and make Chuck knock [comment] when he comes. [speaker001:] Well, I tend to [disfmarker] th there're a couple meetings where there are a lot of preambles like this, in which case the transcription doesn't start until someone actually says "OK." [speaker003:] Really? [speaker001:] So, we're starting right now. [speaker002:] Well, that's good. [speaker003:] Yeah. Oh, yeah. The meeting's started. This is [disfmarker] this is the real thing. [speaker001:] This is the [disfmarker] [speaker003:] This is the real thing. Oh well, never mind, it wasn't. There goes Chuck. Just kidding, it was. OK, so. [speaker006:] And Chuck gets stuck with the ear plug mike. [speaker003:] OK. Last guy in [vocalsound] gets the [disfmarker] [speaker001:] Yeah. u Didn't get the door. [speaker003:] Oh, you just get the door now. [speaker006:] But he didn't shut the door. [speaker003:] Yeah. 
[speaker006:] I got it since [disfmarker] [speaker003:] Yeah. So I [disfmarker] I [disfmarker] [@ @] Adam sent around this thing about what we were going to talk about and [disfmarker] and th we had had this discussion of, uh, flipping back and forth between an emphasis on [disfmarker] [vocalsound] on the technology, particularly recognition, versus, uh, the recordings and transcriptions and so on. Um, but, um, I was asking, if we could, to [pause] uh, kind of make it just whatever is happening rather than one or the other because, uh, next weekend [disfmarker] next week I'll be at a meeting on campus [speaker007:] Sure. [speaker003:] and after that I'll be Europe for, [vocalsound] uh, several meetings' worth. Uh, you folks'll be gone for a couple of the meetings' worth in [disfmarker] in Europe also, [speaker007:] Mm hmm. [speaker002:] I overlap exactly with the times you're gone. [speaker003:] right? Really? [speaker002:] I mean [disfmarker] [speaker003:] OK. Yeah. [speaker002:] Yeah. [speaker003:] So. So, uh, you didn't [disfmarker] you didn't get an agenda, I guess. We've got the usual [disfmarker] [speaker006:] Well, because no one [disfmarker] no one ever rep responded [pause] with any items. [speaker003:] Yeah. [speaker006:] So I have [pause] a couple of minor ones. Uh, our [pause] normal transcription status, just so that we can make some [disfmarker] some plans. So wha what's up with IBM, Chuck? Do you know? [speaker009:] Um, I think Brian is on vacation now but, uh, before he left we gave him, uh, another set of meetings to give to the transcribers. And [disfmarker] [speaker008:] How many? [speaker009:] Uh, what is it? [speaker006:] Is [disfmarker] is yours working there, Chuck? [speaker009:] Four? [speaker004:] Four. [speaker008:] Oh. [speaker009:] Four. [speaker006:] Can you [disfmarker]? You're b you're B? [speaker009:] Test. What am I? [speaker001:] Is it turned on? [speaker006:] There's no on. [speaker009:] Test. [speaker001:] OK. [speaker009:] Yeah. 
[speaker006:] Oh. It's just [disfmarker] [speaker001:] Channel B. [speaker006:] I'll [disfmarker] Let me raise the gain on that a little bit [speaker009:] OK. [speaker001:] Channel B. [speaker006:] since it's so far from your mouth. Sorry. [speaker003:] OK. [vocalsound] So we sent them [disfmarker] [speaker009:] So we gave them four meetings before he left and, um, that's it. So, um, I don't think we'll hear from him before he gets back. [speaker006:] OK. [speaker003:] And the [speaker001:] Do they know about your pie chart? [speaker009:] I don't know. [speaker001:] OK. [speaker009:] I don't know. [speaker003:] What pie chart? [speaker006:] Pie? Oh. [speaker001:] He has a nice pie chart on the, uh, s the current status page, [speaker002:] Pie. [speaker001:] which shows the amount of transc amount of data that had been transcribed [speaker007:] Mm hmm. [speaker001:] and the amount [disfmarker] out of the entire [disfmarker] and [vocalsound] at this point it's not quite half. Wouldn't [disfmarker] Is that what you'd say, with the pie chart? [speaker009:] Yeah. [speaker001:] And then of that, there's, uh, a little sliver [disfmarker] I could try and draw it but it's probably better to describe. [speaker003:] Half the data that's been recorded has been transcribed? [speaker001:] So [disfmarker] [vocalsound] Roughly. Almost. Not quite. [speaker003:] Wow! That's getting pretty good. [speaker001:] And I haven't actually caught up with, uh [disfmarker] since I got back so recently from my vacation. Um, I know that the transcribers finished, um, uh, several meetings while I was gone. [speaker009:] Oh. [speaker001:] And I haven't told you cuz I haven't [disfmarker] I haven't figured out which ones they are y are yet. [speaker009:] Right. [speaker001:] But when that happens, then that pie will get even closer to the midline. [speaker003:] Mm hmm. [speaker009:] Mm hmm. 
[speaker001:] It's not quite [disfmarker] [vocalsound] not quite fifty percent but [pause] really getting close. I think it will be. [speaker006:] OK. So also, um [disfmarker] [speaker007:] You mean, pi will be close to one half? [speaker001:] Well, [vocalsound] not mathematically. [speaker007:] That's new. [speaker006:] Now, that's a strange [disfmarker] [speaker003:] Well, if we can have three point one four times as much tr transcribed as we recorded, we really would be doing well. [speaker001:] n [speaker006:] Yep. [speaker007:] Right. [speaker001:] But it's a nice [disfmarker] it's [disfmarker] I recommend his [disfmarker] his [disfmarker] his graphic, [speaker006:] Um. So [disfmarker] [speaker007:] You know, this [disfmarker] this raises some [disfmarker] [speaker001:] cuz it's [disfmarker] it's a really nice graphic. [speaker007:] Mm hmm. [speaker006:] How much [disfmarker] [speaker007:] How do you transcribe that? [speaker006:] How much do we need transcribed before it's worth doing some trainings? [speaker007:] Oh. B uh [disfmarker] [speaker002:] Mmm. [speaker006:] I mean, it seems like probably we have enough now. [speaker007:] We can [disfmarker] [speaker006:] Right? Because, like [disfmarker] like Macrophone wasn't that much data. [speaker002:] You mean [disfmarker] [speaker006:] Right? And it helped. [speaker007:] Well, we already use it for digit training. [speaker002:] But [disfmarker] but that was read speech. [speaker006:] You only used that for digits? [speaker007:] I mean [disfmarker] [speaker006:] Right. [speaker007:] Well, we did do some training on digits [disfmarker] [speaker006:] That's right. [speaker007:] uh, rather adaptation on digits [disfmarker] digits. I mean, supervised adaptation. [speaker006:] Mm hmm. [speaker007:] So. Yeah. I mean, uh [disfmarker] [speaker002:] Maybe like twenty hours. 
We could try, but [disfmarker] Maybe not counting the non native speech, cuz if [disfmarker] That [disfmarker] that just doesn't translate very well. [speaker007:] Well, I would [disfmarker] I would love to do some, um [disfmarker] you know, with adaptation you can use fairly small amount of data to improve your performance. [speaker006:] Right. [speaker007:] Um, and n you know, it's just basically, uh [disfmarker] uh You know, I [disfmarker] I don't have to time to [disfmarker] to do too much these days except what I'm already doing. So [disfmarker] [speaker006:] Mm hmm. [speaker007:] um, but if there's some [disfmarker] someone has some free time, I'm [disfmarker] I'll be happy to show them how it works and [pause] you can play with it. [speaker006:] Cool. Um. [speaker003:] Uh, I thought you [disfmarker] you were already using adaptation w on the test set. [speaker007:] No, supervised adaptation. [speaker006:] Supervised. [speaker007:] So, we're doing s unsupervised speaker adaptation. [speaker003:] I see. [speaker007:] But [vocalsound] you could [pause] build a test set which you sort of take your models and ada do supervised adaptation. [speaker003:] Oh, OK. Got it. [speaker007:] And then you start with the speaker adaptation from sort of channel adapted [speaker003:] Right. Right. [speaker007:] or room adapted or whatever. [speaker003:] Yeah, yeah. Yeah, whatever this is. [speaker006:] H how would you imagine doing a test set on this corpus? A test set, training set division? I mean, it it's really weird. Right? Because it's [disfmarker] it would be hard to get speaker disjoint sets. [speaker007:] Right. [speaker006:] Do y do we just give up on that? [speaker002:] Right. And actually the [disfmarker] I was thinking about this the original application doesn't really appl I mean, if you're using this over and over again, you're gonna be testing on the same [disfmarker] the same speakers. 
[speaker006:] Well, but then you have the supervised versus unsupervised question. [speaker002:] I just mean that it's not a terrible thing to assume that you don't [disfmarker] that you don't have disjoint sets, [speaker006:] Right. [speaker003:] Mm hmm. [speaker006:] So. [speaker002:] for this particular kind of [pause] application. I mean [disfmarker] [speaker006:] Yeah. [speaker003:] Right. [vocalsound] Yeah. I mean, you could have some parts of it that are disjoint and some that aren't, [speaker007:] It's [disfmarker] it's sort of like in, um, Broadcast News [speaker003:] and see what the difference is, you know, on the others. [speaker002:] Yeah. [speaker003:] Yeah. [speaker007:] because you have those people who re occur over and over. [speaker003:] Yeah. [speaker007:] I mean, the anchor speakers tend typically. [speaker006:] Right. [speaker007:] So, um [disfmarker] m m m m Yeah. As long as you're aware that you're doing that, that's [disfmarker] I don't see that as a problem. [speaker003:] You can separate it out and see how [disfmarker] if they're [disfmarker] [speaker006:] And then s s similarly with subject matter. [speaker003:] Yeah. [speaker006:] That, you know, if you're training your language model on this meeting only and testing on this meeting only, you're gonna do a lot better than if you start crossing them. [speaker007:] Mm hmm. [speaker002:] Somewhat. Although what's interesting is that the majority of the words in the language model are actually [disfmarker] [speaker003:] Mm hmm. [speaker006:] Are the same. [speaker002:] Yeah. They're actually these, like, function words [speaker006:] Sure. [speaker002:] and "Yeah!" and s stuff like that. [speaker003:] Mm hmm. Mm hmm. [speaker002:] And there there's a lot of speaker dependence there. Even the transcribers will say they know who the speaker is after a while just from their backchannels [speaker003:] Yeah. [speaker006:] Just from word choice? [speaker002:] and [disfmarker] Right. 
So y So, I don't think you're talking the way you write, in [disfmarker] in other words. [speaker006:] Most certainly. So it's just, uh, if we're gonna do that we should [pause] give some thought to how we wanna divide it up. [speaker003:] Yeah. Yeah, so, you know, maybe someone else would do [pause] a lot of the work where probably [pause] your advice [speaker006:] Yeah. And similarly with digits. We now have a lot of digits [speaker007:] Yeah. [speaker003:] would be helpful. [speaker006:] but most of them aren't transcribed. So if we wanted to do a forced alignment on them, we could start collecting ourselves up a fairly large digit corpus. [speaker003:] Yeah. Well, we'd [disfmarker] [speaker006:] And so, same issue. You know we just need to get someone who's interested in that to start looking at it. [speaker003:] We want that for sure. Yeah. [speaker006:] So who's working on digits currently? [speaker003:] Uh, Dave Gelbart. [speaker006:] OK, maybe I'll talk to him and see if he's interested. [speaker003:] Not [pause] right away. [speaker006:] Yeah, certainly. [speaker003:] He's got prelims [speaker006:] He wants to pass the prelims. [speaker003:] and [disfmarker] Yeah. Um, [vocalsound] one thing, uh, uh, just a e uh, uh, an announcement of sorts is that, um, [vocalsound] Chuck is gonna be going to a meeting at NIST, in, uh, about a month and a half, I guess. They're still working out the date. cuz they want to gather together sort of one person from each site who is involved in the meeting stuff since they're going to be doing [vocalsound] a bunch of meeting recordings there. [speaker006:] Mm hmm. [speaker003:] And so, uh. [speaker006:] Is, uh [disfmarker]? Are the CMU folks gonna be there? [speaker003:] Somebody [disfmarker] you know, it's like one person [disfmarker] at the moment it's one person from each site. So there'll be somebody from the CMU, and somebody from here, and somebody from, [vocalsound] uh, a [disfmarker] a number of places. 
Probably somebody from UW. Um. So. Yeah, it's possible they'll open it up to more people. But [disfmarker] but I think, you know, the main thing is just to, uh [disfmarker] uh [disfmarker] uh [disfmarker] uh, it's not a [disfmarker] it's not a conference. It's not a workshop, really. [speaker006:] Right. [speaker003:] It's just connecting with them. So. Um. So, then if they start a bunch of recordings then ultimately they'll [disfmarker] they'll have training sets and test sets, and [disfmarker] [vocalsound] and all that [disfmarker] all that stuff, and [disfmarker] But. [speaker007:] But it would be interesting to hear what they intend to do about the speaker overlap, for instance. [speaker006:] Right. [speaker003:] Yeah. [speaker002:] What's the plan if [disfmarker] were we still going to be like a [pause] center for transcribing data? What happens if th they don't deliver things until [pause] next year or something when we [pause] have less [pause] resources [speaker003:] I [speaker002:] or [disfmarker]? [speaker003:] Well, again the hope is [vocalsound] that, uh [disfmarker] I mean, it's [disfmarker] it's looking [disfmarker] I mean, I have to ask these folks but, uh, my impression is that the, um, IBM path is looking, you know, more promising. [speaker006:] Yep. It [disfmarker] it [disfmarker] pretty good. Yeah. [speaker003:] And if that's the case and if it turns out that, uh, it's really a relatively small amount of time for our transcribers to process things before and after, if they really have taken out enough of the [disfmarker] of the, uh, uh, work that [disfmarker] that it's f feasible, then maybe we can do it with a smaller staff next year. [speaker002:] Mm hmm. [speaker003:] I mean, [vocalsound] next [disfmarker] next year, uh, in which he's [disfmarker] you're referring to is, uh, next year we [disfmarker] we don't have a whole lot of guaranteed funding for this [disfmarker] [vocalsound] this work. Um, other funding may come. 
You know that's [disfmarker] But [disfmarker] but if we wanna just count on, uh, [vocalsound] uh, what we know about, it's [disfmarker] it's more modest than this year. Uh, it's still [disfmarker] there'll still be money, but it's just more modest. So [disfmarker] um, so I would see us as probably not doing more recordings at least for [disfmarker] for [disfmarker] for meetings or [disfmarker] or digits or anything. But, uh, if [disfmarker] if we feel that we need to, probably still doing some recordings for SmartKom or something. [speaker004:] Yeah. [speaker003:] Um, but otherwise just, you know, stop that and just [pause] take whatever comes from the other places, which, uh, I guess we have an u unknown rate. If it [disfmarker] if it starts being [disfmarker] if they're generating this huge amount and we can't handle it then, you know, we'll tell them. But. Uh, although it would have been nice if that [disfmarker] their stuff had come earlier, it it's in some ways a good thing, because we've been [vocalsound] ironing out this [pause] path. So. [speaker006:] Um. Well, I got a email from one of the UW guys. They want to record the meetings at higher, uh, sampling rate than sixteen [pause] kilohertz. Uh, I didn't ask why. But it seems to me [pause] that really doesn't matter. If they have the disk space to do it, it seems that that [pause] is not a big deal. I was just wondering if anyone had [pause] an opinion about mismatched [pause] rates. [speaker003:] Well, it's just if that they're sending it to us, then we have to have the disk space to do it too. [speaker001:] Can't we downsample and, uh, transcribe on that basis? [speaker003:] That would be the hope. Yeah. [speaker006:] Yep. [speaker003:] d Are they [pause] doing some nice integer multiple or [disfmarker]? [speaker006:] Excuse me? [speaker003:] Are they doing some nice integer multiple of th of sixteen? [speaker006:] No. [speaker002:] Are they [disfmarker] they're doing, like, twenty two [speaker006:] No. 
No, no, it was like forty something. [speaker002:] or s? Could [disfmarker] Forty eight? Forty [disfmarker] forty four? [speaker006:] Forty four, [speaker008:] F [speaker006:] forty two. [speaker008:] So, CD. [speaker002:] Yeah. CD ROM. [speaker004:] CD. [speaker006:] Yeah. [speaker002:] Yeah. [speaker003:] Oh, forty four point one. [speaker004:] Yeah. [speaker002:] Forty four. [speaker008:] Forty four one. [speaker004:] Yeah. [speaker002:] S [speaker003:] Yeah. [speaker006:] Something like that. [speaker003:] Oh, so that's [disfmarker] Well, you know, we record here at for at forty eight or something [speaker006:] And then we downsample. Yep. [speaker003:] and then we downsample. [speaker008:] But forty eight's a nice multiple of [pause] eight. [speaker002:] But [disfmarker] but it [disfmarker] but forty four and sixteen are not easy? [speaker003:] Yeah. Well you can do it. [speaker008:] You can do it, I think. [speaker002:] Yeah? [speaker003:] Oh, absolutely you can do it. [speaker002:] It's not lossy? [speaker003:] Yeah. You can do [disfmarker] you can go from anything to anything else. [speaker008:] You c y [speaker003:] But [disfmarker] but it's [disfmarker] it's just [disfmarker] it's [disfmarker] it's just a little more complicated. [speaker006:] You get aliasing. [speaker002:] As long as you're [disfmarker] Yeah. As long as you're not doing the processing on the [pause] downsampled [disfmarker] [speaker008:] There's probably some [disfmarker] [speaker002:] I mean [disfmarker] Just for transcribing it, you mean? [speaker003:] It's just like [disfmarker] Yeah. So [disfmarker] u u u Yeah. I mean, it's just slightly more complicated, but it's [disfmarker] but i you can [disfmarker] you can go from any frequency to anyth any other frequency. So it's [disfmarker] [speaker006:] Well, the [disfmarker] the only thought was if we're gonna do a combined corpus, does it really matter if some's sampled at one rate and some's sampled at the other? 
My feeling is no, it doesn't. Your audio, u these days audio tools tend to take care of all of that. But I just wanted to double check with other people before I told them "yeah, no problem". [speaker001:] Also seems like the corpus could all be standardized on s on this [disfmarker] the, uh [disfmarker] [speaker006:] That's what I was saying. [speaker009:] Why are [disfmarker] why are they asking us? [speaker006:] So [disfmarker] so [disfmarker] [speaker001:] Yeah. [speaker006:] I don't know. Why are they asking us? Because, you know, we're sort of doing the corpus, so they want to be as similar as possible. [speaker003:] A and, so I gue [speaker006:] So. But I think the answer is, if [disfmarker] if we tell him "no, that's not alright", what he'll probably say is "well, we're gonna do it anyway". [speaker007:] It could be interesting. [speaker009:] Well that's [disfmarker] that's why I'm asking. [speaker006:] But, uh, [speaker007:] Well, it will be interesting to find out why they are choosing a higher sampling [disfmarker] [speaker008:] Yeah. [speaker006:] Uh. Yep. [speaker008:] It's kind of a s [speaker003:] Well, I would suspect it's because it's what the hardware does. [speaker002:] For [disfmarker] [speaker006:] Oh, and they just don't want to downsample? [speaker002:] Yeah. [speaker003:] Yeah. [speaker008:] Well, someone's gonna have to downsample. [speaker007:] But then they're just [disfmarker] they're just [disfmarker] there's just deferring the problem to later [speaker008:] Yeah. [speaker007:] and they might save themselves a lot of disk space problems by doing it right away. [speaker002:] They're taking up more disk space. [speaker008:] Yeah. It [disfmarker] [speaker004:] Yeah. [speaker003:] I [disfmarker] I [vocalsound] I think it's the sort of thing that's deserving of discussion with them. [speaker006:] OK. [speaker003:] I d I don't think it's like, i uh, they'll ask us, we'll give a simple answer and come back. 
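The rate-conversion point in the exchange above (that you can go from any sampling rate to any other, but that 44.1 kHz to 16 kHz is "a little more complicated" than an integer ratio like 48 kHz to 16 kHz, and that doing it carelessly gives aliasing) can be sketched as follows. This is an illustrative sketch, not code from the meeting; the function names are invented, and the toy resampler deliberately omits the anti-alias low-pass filter that a real converter would apply.

```python
import math

def rational_resample_factors(src_hz: int, dst_hz: int) -> tuple[int, int]:
    """Return (up, down) with src_hz * up / down == dst_hz exactly.

    Converting between rates that are not integer multiples is done by
    upsampling by `up`, low-pass filtering, then keeping every `down`-th
    sample; skipping the filter gives the aliasing mentioned above.
    """
    g = math.gcd(src_hz, dst_hz)
    return dst_hz // g, src_hz // g

def naive_resample(x, src_hz, dst_hz):
    """Toy linear-interpolation resampler -- illustration only, since it
    omits the anti-alias filter a real converter would apply first."""
    n_out = int(len(x) * dst_hz / src_hz)
    out = []
    for i in range(n_out):
        t = i * src_hz / dst_hz        # fractional position in the input
        j = int(t)
        a = x[j]
        b = x[j + 1] if j + 1 < len(x) else x[-1]
        out.append(a + (t - j) * (b - a))
    return out

print(rational_resample_factors(48_000, 16_000))   # (1, 3): plain decimation
print(rational_resample_factors(44_100, 16_000))   # (160, 441): the awkward case
```

The 160/441 factor is why 44.1 kHz to 16 kHz is "slightly more complicated": there is no common integer multiple nearby, so a polyphase (or equivalent) implementation is needed rather than simply dropping samples.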
It needs some back and forth [speaker007:] I mean that's wha [speaker003:] because it's [disfmarker] [vocalsound] it it's a little gnarly to have parts of the database with different things. [speaker007:] That's like three times [disfmarker] three times as much disk space. [speaker002:] Yeah. [speaker008:] Yeah. It's [pause] four or five times. [speaker003:] And it's, [speaker004:] Yeah. [speaker008:] Depends what you save. [speaker003:] um, i You know, part of the arguments for the higher sampling rate is that maybe there's something, you know, say, in six to ten kilohertz that [vocalsound] you might use for [disfmarker] for some purposes somewhere. If people are doing a wide range of things, doing location, uh, you know, doing a bunch of things, maybe, you know, why [disfmarker] why throw it away if you don't need to? Um, on the other hand, um, I mean, it's [@ @] [disfmarker] [vocalsound] sixteen kilohertz is plenty. [speaker006:] Yep. That's wh that's what I thought. [speaker008:] Yeah. [speaker003:] So [disfmarker] [vocalsound] so [disfmarker] So, uh, I th I [disfmarker] I think it's [disfmarker] it's, uh [disfmarker] And they are gonna be using up a lot of disk, both for th them and for us when we're pr processing their stuff, [speaker006:] Mm hmm. [speaker007:] We should warn them that these disk space issues [pause] are gonna creep up on them very fast. [speaker003:] uh, if they [speaker006:] Yep. Yep. [speaker002:] They'll have to start kicking people out of the meetings, [speaker003:] Well, i w wa if [disfmarker] if they're going at forty four, they won't even creep. [speaker006:] Yeah, but [disfmarker] [speaker002:] like [disfmarker] [speaker008:] Yeah. [speaker004:] Yeah. [speaker002:] As long as they have only, like, three people per meeting. [speaker003:] They'll just pounce. [speaker009:] How many channels? [speaker002:] I mean, they can just limit the number of people.
[speaker006:] Um, I don't remember what their hardware set up was but it was smaller than ours. [speaker003:] Well, that might help a bit. [speaker006:] Yeah. [speaker003:] Yeah, actually I think at NIST they were ta there were [disfmarker] there was some discussion of fairly high sampling rates too, [speaker006:] Well, with NIST I think [speaker003:] so. [speaker006:] they're [disfmarker] they're in a different situation cuz they're doing video. And so compared to the video, the audio is just noise in terms of disk space, literally. [speaker003:] I don't think that's true. [speaker006:] Really? [speaker003:] They're recording fifty channels of mikes. [speaker006:] They're not recording all fifty of the [pause] array. They're [disfmarker] they're cooking it down, [speaker003:] Yeah, they are. [speaker002:] Yeah. They were g they said they were gonna record [@ @] [disfmarker] [speaker006:] aren't they? [speaker003:] They're recording all of them. [speaker002:] I don't know if there were fifty, but there were definitely above, like, twenty. [speaker006:] Whew. [speaker003:] Yeah. [speaker006:] That's funny. I thought that they were cooking down that data [pause] in some way before they were storing it. [speaker002:] It was [disfmarker] it was high. [speaker003:] I don't believe so. No. I don't think so. No. So I think they [disfmarker] they have actually a very large audio data rate. I think it is comparable to a video data rate. [speaker008:] OK. [speaker003:] So. [speaker009:] How in the world are they gonna ever distribute this? [speaker006:] Yeah, distributing [disfmarker] They're not going to. [speaker003:] Uh. [speaker009:] You're gonna have to just get [disfmarker] [speaker003:] Interesting question. [speaker009:] Because our [disfmarker] [speaker006:] You'll have to just work on little subsets of it. There's just no way [pause] you could get it all. 
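The claim above that twenty-plus channels of uncompressed audio is "comparable to a video data rate" can be sanity-checked with back-of-the-envelope arithmetic. The channel count, sample rate, and bit depth below are assumptions for illustration; the meeting only says the NIST setup has somewhere above twenty channels and does not state its exact configuration.

```python
# Back-of-the-envelope raw data rate for a multi-microphone recording setup.
# All three parameters are illustrative assumptions, not NIST's actual config.
channels = 50
rate_hz = 44_100
bytes_per_sample = 2                        # 16-bit linear PCM

bytes_per_sec = channels * rate_hz * bytes_per_sample
gb_per_hour = bytes_per_sec * 3600 / 1e9
print(f"{bytes_per_sec / 1e6:.2f} MB/s, {gb_per_hour:.1f} GB/hour")
```

At these assumed settings that comes to roughly 4.4 MB/s, about 16 GB per hour before any compression, which is indeed in the same ballpark as uncompressed or lightly compressed video.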
[speaker009:] We're [disfmarker] we're estimating that ours [disfmarker] if we collect say a hundred meetings and each meeting is just the audio, compressed audio's half a gig, so there's fifty gigs just for our corpus. [speaker006:] Yep. [speaker009:] For a hundred meetings. And, uh, you know [disfmarker] [speaker008:] That's eight kilohertz? [speaker003:] And that's [disfmarker] and that's [disfmarker] and that's no video. [speaker006:] Sixteen. [speaker004:] Sixteen. [speaker009:] That's sixteen. [speaker002:] Sixteen. [speaker008:] Sixteen. [speaker009:] And so, [speaker003:] Sample rate, [speaker002:] You just get, like, one backchannel at a time or something. [speaker003:] yeah. [speaker008:] Oh, it's shortened. [speaker009:] even [disfmarker] even a distribution of fifty gigs, [speaker008:] Oh, OK. [speaker004:] Yeah. It's shortened. [speaker008:] And then shortened. [speaker009:] you know, it's [disfmarker] if you use D V Ds or something, that's [disfmarker] [speaker004:] Yeah. [speaker006:] A lot. [speaker009:] It's still, yeah, two or three D V but [disfmarker] [speaker003:] Yeah, not if you have to distribute the video also. [speaker006:] Two or three? [speaker009:] If you use both sides and the two layer and all that. [speaker006:] It's a lot more than that. Right? DVD is [pause] eight gig? [speaker003:] Well, I think you'll have some interesting discussions next month when you go out there. [speaker009:] S seventeen. Yeah. Yeah. [speaker003:] I mean, how are they gonna [disfmarker] you know, what's [disfmarker] what's the distribution plan? [speaker009:] Mm hmm. [speaker003:] If they have something really smart in mind, you know, maybe we can use it. I mean [disfmarker] I mean, [vocalsound] w we talked about sending around a disk or a computer. [speaker006:] I think that's definitely the right way to do it. Someone sends us disk. We [disfmarker] we load it with data and send it back. It's gonna be easier than any other method. 
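The fifty-gig estimate and the "two or three DVDs" figure in the exchange above can be checked directly. The DVD capacities used here are the standard nominal figures (single layer, dual layer, double-sided dual layer), not numbers taken from the meeting.

```python
import math

MEETINGS = 100
GB_PER_MEETING = 0.5                        # compressed audio, as estimated above
corpus_gb = MEETINGS * GB_PER_MEETING       # -> 50 GB

# Nominal DVD capacities in decimal GB (standard published figures,
# not stated in the meeting).
for name, cap_gb in [("DVD-5", 4.7), ("DVD-9", 8.5), ("DVD-18", 17.1)]:
    print(name, math.ceil(corpus_gb / cap_gb), "discs")
```

With double-sided dual-layer discs ("seventeen" gig, as mentioned) the 50 GB corpus fits on three discs, consistent with the "two or three DVDs" estimate; single-layer discs would take about eleven.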
[speaker003:] Maybe a compu maybe [disfmarker] [speaker006:] Well, if not [disfmarker] if you want to burn the C Ds, you're welcome to it. [speaker009:] I say we give it to LDC and let them do it. [speaker006:] Well, yeah. Yeah, absolutely. If they have a way of doing it. [speaker002:] And there's never any talk [disfmarker]? [speaker006:] I'm thinking prior to that. [speaker008:] What's LDC? [speaker002:] Was there ever any talk of taking the, um, close talking mikes when people aren't talking and deleting those portions? [speaker007:] Linguistic Data Consortium. [speaker008:] Mmm. [speaker002:] Or is the breathing and things [disfmarker]? [speaker006:] We already do that. Shorten does that automatically. [speaker002:] Yeah? OK. [speaker006:] That's why you [disfmarker] why [disfmarker] that's why Shorten [disfmarker] that's why Shorten reduces the size so much. [speaker002:] And if they're still that big [disfmarker] I see. Yeah, that's true. [speaker003:] That's why it's only fifty gigs. [speaker006:] Right. [speaker002:] OK. So it's still really big even if you're [disfmarker] only one person or two people at most are actually talking all the time. [speaker006:] Yep. [speaker002:] Wow. [speaker007:] We should just talk less. [speaker002:] Have shorter meetings! [speaker006:] Well, if you talk less it does in fact use less me d m data. [speaker004:] Yeah. [speaker006:] So, the channels [disfmarker] [speaker004:] Yeah. [speaker003:] Hey, it's [disfmarker] it's the first time I've ever heard you ask for less data. [speaker006:] Hmm. It's actually one way to tell if a microphone is dead in a meeting is if the shortened file is too short. [speaker002:] Sorry. [vocalsound] [nonvocalsound] Thanks, Morgan. [speaker004:] It's really short, yeah. Yeah. [speaker006:] Then you can be pretty sure that the mike was off. [speaker001:] Oh. [speaker004:] Yeah. [speaker007:] Uh, OK. 
[speaker006:] And, the t unfortunately the tabletop ones [disfmarker] the six tabletop which we record all the time no matter how many people are there, uh, don't compress nearly as well because they're almost always have signal on them. [speaker003:] Yeah, that's probably a lot of it. [speaker004:] Yeah. Yeah. [speaker003:] And they're gonna have mikes like that too. [speaker008:] Yeah. [speaker006:] Yep. [speaker003:] In fact, their array [disfmarker] their array will be [disfmarker] [speaker006:] Most of them. [speaker008:] All of them are like that. [speaker006:] Yep. So they won't shorten nearly as well. [speaker003:] Yeah. [speaker002:] Wow. [speaker006:] And I assume they're gonna do it lossless. [speaker002:] Yeah. So [disfmarker] so they're really going to have a huge [disfmarker] [speaker007:] So f so for dis for distribution purposes it might make sense to split it by channel, uh, rather than by meetings. [speaker006:] Yeah. [speaker007:] So, you could distribute, like, only the near field signals, uh, uh, uh, together for a bunch of meetings [speaker006:] Yep. Absolutely. [speaker007:] and then have a second row. [speaker006:] And then maybe one far field. You know? It's things [disfmarker] or the mixed channel. [speaker007:] Yeah. [speaker006:] Yeah, I think there are a lot of ways to do it, but [disfmarker] Let's not worry about that now. [speaker003:] Yeah. [speaker007:] Right. [speaker003:] Yeah. [speaker006:] How are we on disk space, Chuck? [speaker009:] Um, we're OK. We've got [disfmarker] for the, uh, compressed meetings we've still got about, I think it was four gigs, uh, on the one disk that we're using. [speaker006:] Mm hmm. [speaker009:] And we've got several [disfmarker] we' we've still got, uh, quite a bit of space on the, uh, un backed up. So. [speaker006:] Good. [speaker009:] But I [disfmarker] I think it's probably [disfmarker] we should start thinking about where to go next, especially on the backed up disk space. 
[speaker003:] Now we bought [disfmarker] a little [disfmarker] not all that long ago we brought three [pause] uh, thirty six gigabyte disks. [speaker009:] That's the un backed up. Yeah. Those are the u [speaker006:] Yeah, we're using those for the [disfmarker] for the expanded. [speaker009:] So, for example, the, uh [disfmarker] Guenter's Wall Street Journal data went onto one of those, [speaker003:] Uh huh. [speaker009:] uh, some other experiment things that people are doing are on there, and [disfmarker] So. [vocalsound] If you put it there people will use it. Yeah. So. [speaker003:] So I think after that we need another, uh, rack or something. [speaker009:] Have to change servers, actually. [speaker003:] Right? Change servers? [speaker009:] Well, we could [disfmarker] we could [disfmarker] There are a bunch of disks that we have, uh, that are s s smaller. And they're, like, seventeen. We could go to thirty five. So we could [vocalsound] get some extra space out of that. But the s right now the server's full. We couldn't add any more disks. We could [pause] change a smaller disk for a larger one. [speaker003:] We should probably do that while [pause] we have the space left on the [disfmarker] on the th the existing thirty sixes, [speaker009:] Yeah. [speaker004:] Yeah. [speaker003:] so that we can [speaker009:] Yeah. [speaker006:] Yeah. The problem with that for backed up media is, uh, the sysadmins wanna keep them at eighteen meg [disfmarker] eighteen gig partitions [speaker003:] shuffle it around. [speaker001:] Mm hmm. [speaker006:] because that takes about a day to get off back up tape. [speaker003:] Oh. [speaker009:] Well, then there's the other issue is that to g to g add more disk now, um, David says we really need to go to new servers. [speaker006:] So. [speaker009:] But he wants to go to new servers not just for us but for other groups too. [speaker006:] For u the whole institute. Yeah. 
[speaker009:] So there's this sort of coordination issue, and I guess I need to talk to him more about [disfmarker] [speaker006:] Actually, going to bigger disks we can do even, and just maintain the eighteen gig partitions. You just partition them into multiple. [speaker004:] Yeah. [speaker009:] Yeah. [speaker006:] So that's probably the next step, [speaker003:] That's right. [speaker006:] is to get the eighty or ninety gig drives to replace the thirty gig or an eighteen gig that we have now. [speaker003:] There are eighty gig drives now that you can get for that? [speaker006:] Yeah. Yeah. [speaker003:] See, and that's probably what we should send around for these [disfmarker] [vocalsound] to d for distribution [vocalsound] of this corpus. This s s isn't any [disfmarker] [speaker006:] A whole handful of microdrives. [speaker003:] Yeah. [speaker009:] Well, when they come out with an [speaker006:] Is [disfmarker] is the areal density of the microdrive higher [pause] than of a w normal drive? It must be. [speaker009:] When they come out with the eighty gig microdrive then we'll just send one of those little things around with all the meeting data. [speaker006:] Yep. [speaker004:] Yeah. [speaker003:] Yeah. That's what I meant, [speaker006:] Well, they have the two gig already. [speaker003:] yeah. [speaker006:] And I'm [disfmarker] I'm just thinking forty of them is probably still smaller [pause] in area than, uh, one normal sized drive. [speaker003:] Actually, what [disfmarker] I think what [disfmarker] what, uh [disfmarker] was it [disfmarker] was it Dave who was suggesting [comment] that, uh, you just get, uh, uh, a [disfmarker] a computer [speaker006:] Yep. [speaker003:] that has one of these drives in it and just send the computer around to the different sites. [speaker008:] Oh, God. Y [speaker003:] And you can just [vocalsound] get it off the computer [speaker008:] Put the computer on tour? [speaker003:] and press the [disfmarker] [speaker006:] Yeah. 
[comment] A laptop. [speaker003:] Yeah. Laptop? Yeah, [speaker004:] Yeah. [speaker003:] send the laptop. [speaker006:] Eighty gig laptop. [speaker003:] And, of course you don't want [disfmarker] y we don't want to trust the shippers necessarily, so you send a person too. And they just kind of go from site to site, [speaker006:] I'll [disfmarker] I'll do that. [speaker003:] and download their data kind of [disfmarker] Johnny Appleseed. [speaker006:] It [disfmarker] uh, uh, unfortunately it takes me a week at each site. [speaker003:] Yeah. What? [speaker006:] It does take a week at each site. So. [speaker003:] Does it? [speaker006:] Yeah, a absolutely. [vocalsound] If it's a nice site. [speaker002:] Especially Hawaii. [speaker003:] Yeah. There's a [disfmarker] there's a [disfmarker] they really need it in Novosibirsk. [speaker006:] Yeah, right. Th that only takes a day or so. Anyway. [speaker003:] R r I'm probably pr mispronouncing it. Well. [speaker007:] But the University of Hawaii has issued a request. [speaker006:] Yep. [speaker003:] Yeah. Mm hmm. That [disfmarker] that needs a month, yeah, [speaker007:] Yeah. [speaker003:] to download. Things are slower there. [speaker006:] Definitely true. [speaker003:] Yeah. OK. [speaker006:] Any other items? [speaker003:] So. OK, so that's the meeting stuff. Uh, transcrip So you're [disfmarker] we're [disfmarker] you said it was almost half, so that means that we have, like, eighty or so hours and out of that maybe thirty five or something are [pause] transcribed? [speaker009:] Yeah. The [disfmarker] the pie chart shows actually [pause] m meetings that are, uh [disfmarker] it combines the categories of, you know, "currently in [disfmarker]" Oh! Well, I've got it right there, I think. [speaker006:] I [disfmarker] I saw the pie chart sticking out, [speaker009:] Yeah. There's [disfmarker] there's just a few, uh, [speaker006:] so. [speaker001:] Do you want to write on the l on the board? [speaker009:] Um. 
Yeah, except I [disfmarker] I'm tethered. [speaker006:] D d do we care? [speaker003:] He's tethered, yeah. [speaker009:] Uh, [speaker001:] I could write on the board. [speaker009:] OK. It combines categories. So when we say "transcribed", we mean either in the process of being transcribed, either here or at IBM, [speaker003:] Oh. Yeah. [speaker009:] or completed transcription, and it also includes, uh, checked. So it includes a lot of categories together. [speaker003:] I see. [speaker009:] So. [speaker006:] We need different color pens, definitely. [speaker009:] And that [disfmarker] I think that that status thing is kinda old actually right now. [speaker003:] Oh, that's [disfmarker] Yeah, that's not half. Yeah. [speaker001:] Last updated in July. [speaker009:] That one that you have? [speaker001:] Oh, I'm not sure. The one that I saw on the web when I came back. OK. [speaker009:] Yeah. The [disfmarker] [speaker006:] Rustle, rustle. [speaker009:] the date on this one is July twenty sixth, so, uh, I have to update it. [speaker003:] A couple weeks ago. Yeah. [speaker009:] Yeah. [speaker001:] OK. [speaker003:] But about how much do we have that's sort of, uh [disfmarker]? [speaker009:] So we have total number of meetings is seventy eight [pause] meetings and, uh, uh, the total meeting time is seventy five hours. [speaker003:] Mm hmm. [speaker009:] And, um [disfmarker] [vocalsound] So, [speaker008:] Oh. [speaker009:] the ones that are finished being transcribed, as opposed to in the process of being transcribed [disfmarker] we've got twenty six hours that are finished being transcribed. [speaker006:] Oh! Great. [speaker008:] What does that mean, "finished being transcribed"? [speaker009:] Um. So there's, uh, several processes to the transcription. There's, uh [disfmarker] I it ranges from being assigned to a transcriber, to [pause] them finishing transcription, to then being checked, you know, with a double check. [speaker008:] Mm hmm. 
[speaker009:] And so, when we say the total transcribed, that's the one that's gone through being checked, I believe. I think that's what the [disfmarker] I have here. [speaker008:] OK. [speaker004:] Just not yet approved for [disfmarker] [speaker009:] But not yet approved. [speaker004:] Yeah? OK. [speaker009:] And then there's, uh [disfmarker] the final one is approved for release. [speaker006:] Oh. [speaker009:] That's after the meeting participants have [pause] gone through it. [speaker006:] So I should probably send those to you, shouldn't I? [speaker008:] Oh. [speaker001:] Yes. At present there's [disfmarker] n none of those are on the pie chart. [speaker009:] Yeah, if you know. Yeah, currently that's zero, released [disfmarker] uh, approve [speaker006:] There are five. [speaker009:] Huh? [speaker006:] There are five. [speaker001:] There're five? Good. [speaker006:] We have five where, uh, everyone has replied. [speaker009:] OK. OK. So we have five that are potentially releasable then. [speaker001:] So the c the cross hatching there is, um [disfmarker] is it's been through the double check. Right? I mean, it's [disfmarker] this is the [disfmarker] it's it's been tran [speaker009:] Yeah. That's approval in progress. [speaker003:] Mm hmm. [speaker001:] Everything on the [disfmarker] [speaker009:] Yeah. [speaker001:] So, I mean, it's [disfmarker] When I add a couple more from [disfmarker] that, uh [disfmarker] that were done in my absence, I think that th we'll be down to about there. So it's not quite as close to half as I'd thought, but it's, you know [disfmarker] it's more than a [disfmarker] And so these right here have been through the double check. These haven't. [speaker003:] Mm hmm. [speaker001:] Oh, how nice of you, holding [disfmarker] holding the thing so I'm not [disfmarker] [speaker006:] It was falling off, uh. [speaker001:] Yeah, thank you. [speaker003:] Yeah. OK. Uh. Alright. 
So we still have a long [comment] ways to go but maybe the, uh [disfmarker] the IBM thing will [disfmarker] will help push through the [disfmarker] the rest of it somewhat quicker. OK. [speaker006:] And there's still, uh, two people for whom I have not been able to get in touch [disfmarker] have never gotten a response from [disfmarker] from any of the emails about, uh [pause] permission forms. So. [speaker001:] Are they from one of the foreign language ones? [speaker006:] Yep. [speaker001:] I mean [disfmarker] OK, good. [speaker006:] That's so [disfmarker] that's why the NSA ones have still not been approved. Though. [speaker001:] Those will figure less prominently in things anyway. [speaker006:] Yep. [speaker003:] Yeah. It might [disfmarker] it might partly just be [pause] European summer vacation issues. [speaker006:] Possibly. It's been, like, four months, so. [speaker003:] Oh. [speaker008:] Wow. [speaker007:] Uh, you know those Europeans. They got a lot of vaca [speaker001:] Long vacations. [speaker003:] Right. Hmm. OK. Uh, [comment] www. [comment] What else is going on? [speaker004:] You [disfmarker] [speaker003:] Why don't we just go around, cuz we only have a few minutes. [speaker004:] Didn't you wanna say something about the hardware [pause] set up? [speaker006:] Oh, that's right. Uh. Yeah, we have new hardware. So, I want to set it up at some point, so I just wanted to know what meeting schedules were. [speaker003:] What new hardware? [speaker006:] Uh, we have several more wireless channels so that we can set up and get rid of these, uh [disfmarker] the wired stuff completely. [speaker003:] Oh. [speaker006:] And then also we got replacements for these mikes. [speaker002:] Oh. Really? [speaker007:] Mm hmm. [speaker003:] So, wha what's your question? [speaker006:] Yeah. [speaker008:] Oh. [speaker002:] Are they like these, [speaker006:] Yeah, they're like those. [speaker002:] right? [speaker006:] Those are the only two choices right now, unfortunately. 
[speaker002:] OK. [speaker003:] So what's your question? [speaker006:] Question is, when are we not recording any meetings for a couple days so that I can do it and if it doesn't work, we won't impact people. [speaker003:] Well, there's a point when a bunch of us are off at Eurospeech. [speaker006:] So. [speaker003:] Um. [speaker006:] So, what's SmartKom? [speaker004:] Nothing this week. Maybe n end of next week. [speaker006:] End of next week. And is EDU still recording? [speaker002:] They usually do Thursdays. I don't get a lot of advance notice, but it's al always been either Mondays or Thursdays. So. [speaker009:] Thursdays? [speaker002:] Yeah. They [disfmarker] they often do it after [disfmarker] [speaker006:] So if I did it, like, tomorrow that would be alright [pause] it sounds like? [speaker002:] for a while they were doing it after this meeting. [speaker009:] Oh! [speaker002:] Yeah. Wednesdays, Fridays [disfmarker] [speaker006:] OK. So it sounds like no one n [speaker002:] Definitely Wed Definitely Wednesday I'm not here, so I don't record anything. [speaker006:] No one early next week. [speaker004:] Yeah. Let me check with [pause] Robert again, but I [disfmarker] I'm [pause] pretty sure that there's nothing this week. [speaker006:] OK. [speaker003:] OK. [speaker006:] And then I'll also re number so that, uh, the mikes are in order. [speaker001:] Oh, thank you. Thank you. [speaker003:] OK. [speaker006:] Should help with the transcripts. [speaker003:] Right. [speaker006:] They wer so right now the channel numbers are discontiguous. Right. [speaker001:] Yeah. I mean, you know, it's an extra step. [speaker004:] Yeah. [speaker003:] OK. So, I was just thinking maybe just go around and just brief briefly look what's other things that are going on, maybe, since we don't [disfmarker] don't have much of a set a set agenda and we don't have much time. Uh. [speaker002:] But feel free to overlap. [speaker006:] OK, we'll all say what we're doing at the same time. 
[speaker002:] No, I mean [disfmarker] [vocalsound] I always get scared when you do these portions where you're not gonna overlap because [vocalsound] there's less data points. [speaker003:] Well, why don't we d we [disfmarker] we deliberately pick two people to [disfmarker] [speaker002:] I mean, people could, like, backchannel and ask questions and stuff. [speaker003:] Yeah. I'll [disfmarker] I'll [disfmarker] I'll keep saying "uh huh". Yeah. [speaker002:] OK. [speaker007:] Mm hmm. [speaker003:] This is absolutely natural meetings. We have [vocalsound] absolutely no effect from our preconceptions. [speaker007:] So, we had [disfmarker] [speaker002:] Right. [speaker007:] the [disfmarker] the transcribers [comment] have trouble with your backchannel, because [disfmarker] [speaker003:] With my backchannel? [speaker007:] Yeah, with your backchannels. [speaker003:] Uh huh. [speaker007:] Because sometimes you have this sort of [disfmarker] this [disfmarker] this "uh huh" that you don't n So there's "uh huh" and then there's "aha!". [speaker002:] Uh huh. [speaker007:] And it's not quite clear always what [disfmarker] what it is you mean. [speaker006:] Which "uh huh"? [speaker003:] Which one it is? [speaker007:] Yeah. [speaker002:] But I don't think it's "aha!". [speaker004:] Yeah. [speaker002:] It's more like "uh huh!" rather than "uh huh". [speaker004:] I'm [disfmarker] Yeah. [speaker007:] Yeah. [speaker008:] Yeah. [speaker004:] I don't like the backchannels at all, as that's [pause] really a problem with the speech nonspeech detection. [speaker008:] Yeah. Let's get rid of them. [speaker004:] So [disfmarker] [speaker006:] That's a [disfmarker] [speaker003:] Well, has anybody done anything? Come on. What are we doing? [speaker006:] I mean, I'll start if you want. I've been, uh, most recently rewriting Transcriber in Java for no particular reason. [speaker003:] Yeah. Uh huh. [speaker001:] Well, for speed. [speaker006:] Uh. 
Well, I mean, Trans What happened was that Transcriber's very slow to load. So when you have a big meeting it c uh, especially on a slower machine, it can take five minutes before you can start doing anything. [speaker007:] Mm hmm. [speaker006:] And so I was thinking about why that was and the data structures I would use, and I found myself unable not to sit down and code it. [speaker007:] Mm hmm. [speaker006:] So I did. And so I have a big chunk of Transcriber now written in Java and it comes up, you know, in a couple of seconds. Um. [speaker007:] u Do you take [disfmarker] do you take feature requests? [speaker008:] Cool. [speaker002:] Great. Great. Y y yeah. W we should talk to you off line about something. [speaker003:] Well, wait a minute, [speaker006:] I [disfmarker] It's not really [disfmarker] probably not worth doing. [speaker003:] what [disfmarker]? [speaker007:] That would [disfmarker] th w s Well, actually, I think it is worth doing [speaker006:] But we can talk about it. [speaker002:] Yeah. [speaker007:] because here's the thing. We're sort of [disfmarker] we're starting to move from transcription to annotation. [speaker006:] Mm hmm. [speaker007:] And [pause] the Transcriber interface is fine [pause] if essentially what you're doing is stringing words together to p to make transcripts. [speaker006:] Annotating. Yep. [speaker007:] But, um, in [disfmarker] for instance, what we're doing now w with, uh, Communicator data, but which we would like to do with Meeting data, is to actually label utterances, or words, or whatever units you want [vocalsound] with, uh, you know, basically doing a multiple choice type of labelling. [speaker006:] Mm hmm. [speaker007:] And for those type of tasks, it wou it's much more efficient to present, [vocalsound] uh, the [disfmarker] th p present a bunch of, uh, clickable buttons or whatever [speaker006:] Dialogue. [comment] Yeah. Buttons. [speaker002:] Mm hmm. 
[speaker007:] and, um [disfmarker] [speaker002:] But it has to be multi stream. Th they [disfmarker] In other words, the beginning of something could be one [pause] speaker and the end of that unit, like question answer pair, could be a different speaker. That's why we can't [pause] just annotate the transcripts, uh, one [disfmarker] listening to one at a time. [speaker006:] Mm hmm. [speaker002:] So [disfmarker] so that's why the Transcriber's nice for sort of listening to it, but you can't actually encode, um [disfmarker] encode someth [speaker006:] Yeah. I un I understand the problem. [speaker002:] Yeah, exactly. [speaker006:] So. [speaker002:] So. [speaker007:] And [disfmarker] and th that's actually a good model, which is what this current tool that we're using has, is to atta to associate these [vocalsound] labels with, um, attributes of SGML tags. [speaker006:] Mm hmm. [speaker007:] So, you know, you have, say, a tag for [disfmarker] I don't know [disfmarker] what type of utterance, whether it's a question or a statement or whatever. And then [vocalsound] you have an, uh [disfmarker] you know, an [disfmarker] a tag attribute and the value of that encodes the choice. [speaker006:] Right. Right. So what I don't have with it so far is it doesn't play wavefiles and it display wavefiles. I just haven't done that since that [disfmarker] that part wasn't [disfmarker] [speaker007:] Oh, you mean display as in showing the waveform? [speaker006:] Yep. [speaker007:] OK. [speaker006:] Yep. And, uh [disfmarker] and then also a lot of the fancy user interface stuff that's in Transcriber I haven't done. [speaker007:] Mm hmm. [speaker006:] But it works. [speaker003:] What's, uh, the original Transcriber written in? [speaker006:] Tcl TK. [speaker003:] I see. [speaker006:] So. [speaker003:] So. [speaker009:] Are you using [disfmarker]? [speaker006:] Which is nice and flexible but very, very slow in comparison. [speaker003:] Yeah, yeah. Right. 
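[editor's note: the scheme speaker007 describes — a multiple-choice label such as a dialogue-act type stored as an attribute of an SGML/XML tag rather than in the transcript text — can be illustrated with a tiny sketch. The tag and attribute names below are hypothetical; the actual annotation schema is not specified in the conversation.]

```python
# Hypothetical illustration of annotation via tag attributes: an
# utterance element whose "type" attribute carries the annotated
# label. Element and attribute names are invented for this sketch.

import xml.etree.ElementTree as ET

utt = ET.Element("utterance", {
    "speaker": "speaker002",
    "start": "12.34",
    "end": "15.01",
    "type": "question",   # the multiple-choice label lives in an attribute
})
utt.text = "Was there ever any talk of deleting those portions?"

print(ET.tostring(utt, encoding="unicode"))
```

Keeping the label in an attribute means the transcript text itself is untouched, so later tools that only care about the words can ignore the annotation layer entirely.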
[speaker009:] Are you using the [disfmarker]? Oh, you [disfmarker] oh, you are. We already talked about that. The XML [disfmarker] [vocalsound] Reading [speaker006:] Yep, SAX. [speaker009:] SAX. Yeah. [speaker006:] Cuz it's much faster than DOM and uses less memory. [speaker009:] Right. Right. [speaker006:] So. And then, uh, [speaker007:] Mm hmm. [speaker006:] preparing for quals. Uh, hopefully second week of September if I can ever get Warren Sack to [pause] answer my emails. I probably shouldn't say that on the tape. [speaker003:] Beep. [speaker006:] Yeah. Oh, well. I gotta remember to beep that out. [speaker009:] Who was that? [speaker006:] No So anyway. So I've been preparing for that. [speaker003:] Not answering your calls, huh? [speaker006:] And, uh, actually, I wanted to say off line with you about any literature [pause] reviews I should do beforehand. [speaker003:] "War and Peace." [speaker006:] "War and Peace." That's a good choice. [speaker003:] Yeah. [speaker006:] Keep me busy. [speaker003:] Yeah. [speaker002:] It's actually not a bad idea to say "beep" as a word because you can search the transcripts, y you know, for [disfmarker] for that, to find these places. [speaker006:] For "beep". That's a good idea. [speaker002:] No, I'm s I'm totally serious. To have one word, like "beep", [speaker006:] No, absolutely. I agree. [speaker002:] especially with a really long "eee" [comment] like that. [speaker003:] Yeah. [speaker002:] Works great. [speaker003:] I'll [disfmarker] I'll take that role anytime you want. I like saying it. So. OK. [speaker004:] Yeah. I've been doing a bunch of recognition experiments on [pause] meeting data with different, uh, segmentations [disfmarker] with different [disfmarker] different automatic segmentations, but [pause] it's not yet finished. It's [pause] work in progress. [speaker003:] OK. Any sense about how [disfmarker] how it does on [disfmarker]? [speaker004:] Yeah. 
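[editor's note: the SAX-vs-DOM point made above — that a streaming parser is faster and uses less memory because it fires callbacks per element instead of materializing the whole document tree — can be shown with a minimal sketch. The `<Trans>`/`<Turn>` element names mimic Transcriber's file format but are used here only as an illustration.]

```python
# Minimal SAX example: count <Turn> elements in a transcript without
# ever building a tree in memory, which is why loading a large meeting
# file this way stays fast and memory-flat.

import io
import xml.sax

class TurnCounter(xml.sax.ContentHandler):
    def __init__(self):
        super().__init__()
        self.turns = 0

    def startElement(self, name, attrs):
        # Called once per opening tag as the parser streams through.
        if name == "Turn":
            self.turns += 1

xml_doc = b"""<Trans>
  <Turn speaker="spk1" startTime="0.0" endTime="4.2"/>
  <Turn speaker="spk2" startTime="4.2" endTime="9.8"/>
</Trans>"""

handler = TurnCounter()
xml.sax.parse(io.BytesIO(xml_doc), handler)
print(handler.turns)   # -> 2
```

A DOM parser would instead allocate a node object for every element before any processing could start, which is the five-minute load time being complained about.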
For, uh [disfmarker] for [disfmarker] for the paper I w I want to submit to ASRU to [disfmarker] [vocalsound] to, uh, evaluate the quality of the speech nonspeech detection by [pause] using it for speech recognition. [speaker003:] Right, but I was just asking if you had a sense yet of [disfmarker] of, uh, whether [disfmarker] maybe you already did this experiment, where there's this question of [disfmarker] of how much it hurt you or helped you to use segmented versus [disfmarker] [speaker007:] Yeah, but we didn't have the exact comparison. [speaker004:] It [disfmarker] [speaker007:] But [disfmarker] you know, there were [disfmarker] i i the experiments were based on different segmentations and different transcripts. [speaker004:] Yeah. [speaker007:] So we weren't quite sure which [disfmarker] whether the difference came from different [pause] transcripts, for instance. And so [vocalsound] Thilo's [disfmarker] [speaker004:] So, it hurts compared to the ideal segmentation, when you have the [disfmarker] the manual segmentations, where you have exact boundaries for each s utterances. Sometimes there are some [disfmarker] some words are cut off or [pause] there's some segments i are s uh, assigned to s to speech which there is nothing in. And so there are more insertions but [vocalsound] most of the time there are more [disfmarker] the [disfmarker] the number of deletions is higher when [disfmarker] when you use the automatic segmentation compared to the ideal one. And so I'm just in [disfmarker] in progress of, yeah, trying to feed the recognizer with un segmented data and to see how much worse that is. So that [disfmarker] that I have three things. Un yeah [disfmarker] un segmented, the automatic, and the [disfmarker] the ideal one. [speaker003:] Yeah. [speaker007:] But the recognizer's putting up some resistance because it doesn't like the un segmented data. [speaker004:] Yeah. [speaker003:] Uh huh. [speaker008:] Um. Yeah, I just got back [pause] from Chicago. So. 
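[editor's note: the error types speaker004 mentions — more insertions from over-eager automatic segments, more deletions from cut-off words — fall out of a standard minimum-edit-distance alignment between reference and hypothesis word strings. Real evaluations use NIST's scoring tools; the sketch below only shows the core bookkeeping.]

```python
# Levenshtein alignment that returns (substitutions, deletions,
# insertions) between a reference and a hypothesis word list — the
# quantities compared across segmentations in the discussion above.

def align_counts(ref, hyp):
    """Counts from a minimum-edit-distance alignment of two word lists."""
    # d[i][j] = (cost, subs, dels, inss) for ref[:i] vs hyp[:j]
    d = [[(0, 0, 0, 0)] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(1, len(ref) + 1):
        d[i][0] = (i, 0, i, 0)                 # all deletions
    for j in range(1, len(hyp) + 1):
        d[0][j] = (j, 0, 0, j)                 # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            c, s, dl, ins = d[i - 1][j - 1]
            if ref[i - 1] != hyp[j - 1]:
                c, s = c + 1, s + 1            # substitution
            d[i][j] = min(
                (c, s, dl, ins),                                                        # match/sub
                (d[i - 1][j][0] + 1, d[i - 1][j][1], d[i - 1][j][2] + 1, d[i - 1][j][3]),  # deletion
                (d[i][j - 1][0] + 1, d[i][j - 1][1], d[i][j - 1][2], d[i][j - 1][3] + 1),  # insertion
            )
    _, s, dl, ins = d[len(ref)][len(hyp)]
    return s, dl, ins

ref = "so there are more insertions".split()
hyp = "so there are many more bad insertions".split()
print(align_counts(ref, hyp))   # -> (0, 0, 2)
```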
Um, fishing went pretty well. [speaker006:] Oh, yeah. All of us are banging. [speaker003:] Good. Good. [speaker008:] And, um, I was gonna post some results, you know, with, uh, [speaker002:] Wisconsin fishing. [speaker006:] With fishing. B [speaker008:] Wisconsin fishing. Yeah. [speaker003:] Yeah. [speaker008:] Uh. [speaker003:] So your [disfmarker] so your results are about the same as Bush's, [speaker006:] You should have seen the one that got away. [speaker003:] basically. His [disfmarker] [speaker008:] Y who [disfmarker] Yeah, right. Whose result? [speaker003:] Bush's. Yeah, you're both, you know, sort of [disfmarker] He's [disfmarker] he's coming back to the heartland and [disfmarker] [speaker008:] Yeah, right, exactly. I did my tour of the heartland. [speaker003:] Right. [vocalsound] Yeah. [speaker009:] Did you build any houses? [speaker008:] Yeah. Yeah. I built a few [disfmarker] built a few houses. Um, but right now I guess, uh, I'm working on [vocalsound] this paper [disfmarker] this ISCA paper for this workshop that Liz and Andreas and I are [pause] putting together. So we're just getting, uh, different results for [disfmarker] Um, we fixed up [pause] our different, uh, transcripts with different types of, uh, annotations and we re re ran ali re ran alignments. And we're gonna develop a new feature database based on those alignments and do some trees and analysis. So I'm kind of in the middle of that. And hopefully by the twentieth, the twenty third. [speaker002:] Twentieth. [speaker008:] The twentieth! Yeah. We'll have it all straightened out. [speaker007:] Yeah. [speaker008:] So. It's gonna be a busy week. [speaker007:] Yeah. You know, I'll bet [disfmarker] You know, Bush went on this European trip recently and [vocalsound] they told him there that they get more vacations. [speaker003:] Yeah. [speaker007:] So he came back here and thought "oh, I [disfmarker] I can afford some more vacation too". [speaker002:] Beep. [speaker003:] Yeah. 
I [disfmarker] I think he's b he's been [disfmarker] he will have been off, like, two months out of his first seven [pause] in office. [speaker008:] Yeah. It's amazing. [speaker007:] Yeah. It's hard to keep concentration, you know, [speaker003:] That's a pretty good deal. [speaker007:] and the focus on the [disfmarker] [speaker009:] Don't even really notice, do you? [speaker003:] Anyway. [speaker007:] Um. So while he's vacationing, um, we've been, uh [disfmarker] Well basically this is like a joint effort. Uh, d we were just [disfmarker] [speaker003:] Who are you talking about, Bush or Don? Yeah. [speaker007:] Um, you know, f apart from getting [disfmarker] preparing the [disfmarker] the da the data for the, um [disfmarker] [vocalsound] for these experiments for the, um, [vocalsound] uh [disfmarker] for this prosody workshop, um, we just, um [disfmarker] we just had a phone call with Mari and her students, in fact, and they want to, um, e m They're bringing up their own meeting recognizer, which is based on Bill Byrne's, uh, recognizer from Johns Hopkins. [speaker003:] Mm hmm. Based on Hub five, or like [disfmarker] like your [disfmarker]? [speaker007:] u [comment] Based on their Hub five recognizer. [speaker002:] Mm hmm. [speaker007:] And, uh, [speaker003:] Yeah. [speaker007:] so they've been getting from us the, um [disfmarker] [vocalsound] you know, some support in terms of getting the latest transcripts and the [disfmarker] also the [disfmarker] actually the, [vocalsound] uh, transcripts annotated with events, [speaker003:] Mm hmm. [speaker007:] uh, like sentence boundaries and stuff like that. [speaker003:] Mm hmm. Yeah. [speaker007:] Um, because one of Mari's students was to [disfmarker] wants to do some language modeling for, uh, predicting overlaps and, uh, stuff like that. [speaker003:] Yeah. [speaker007:] Um. [speaker002:] Yeah. 
Actually we had this ide from the last time that they were here, that it'd be nice to have UW do some work on [pause] language modeling that would be trying to get at the same [pause] detection as what we do acoustically [speaker006:] Can I borrow a piece of paper? [speaker002:] since they can ramp up much ques quicker in the language modeling [speaker006:] Thank you. [speaker002:] and we don [vocalsound] you know, we'll do the, uh, prosodic side. And so this is supposed to be a project that we could actually [pause] integrate the posteriors that they get from their language model for every word boundary or every frame boundary, and then train up the classifiers. [speaker003:] Yeah. [speaker007:] Mm hmm. [speaker002:] And it's [disfmarker] I guess it's an undergrad, too, that [disfmarker] that's going to be doing this work. [speaker008:] Cool. [speaker007:] Yeah. [speaker002:] So. [speaker007:] Um, on the recognition side, actually, um, I think the main person doing that is, uh, Harriet from [disfmarker] Harriet Nock from, uh [disfmarker] formerly of Cambridge. [speaker003:] Yeah. [speaker007:] Um, [vocalsound] and apparently they get reasonable results except in some portions they get very high insertion rates [disfmarker] even higher than we get with [disfmarker] with the, um [disfmarker] [vocalsound] like even, you know, on lapel mikes with, uh, [vocalsound] background speech. And so they were trying to track that down, and [disfmarker] It could be that our recognizer's actually doing relatively well because it has a reject model for, you know, mismatched speech, essentially, or for unspecified speech. And, um, so [disfmarker] [speaker003:] Oh! [speaker007:] so, uh, they'll [disfmarker] I sent them our recognition output so they can sort of do a [pause] line by line comparison and see if [disfmarker] if they, um [disfmarker] [vocalsound] you know, their insertions correspond to our rejects and stuff like that. So, um, we'll see what that, uh, leads to. Um. Yeah. 
Other than that, uh, we're [disfmarker] Well, that's pretty m pretty much it as far as meetings are concerned. [speaker003:] Yeah, I mean we're doing, uh [disfmarker] So I should mention, Dave and I have sort of been [pause] pushing through [disfmarker] thinking in terms of an ASRU paper. But I'm still [disfmarker] [speaker007:] Mm hmm. [speaker003:] I think early next week we'll decide whether we'll [disfmarker] [vocalsound] we'll do one or not. Um. But [disfmarker] because, you know, the results are [disfmarker] are still confusing enough that [disfmarker] [vocalsound] that, um, we we gotta be convinced we understand what's going on. But it's starting to look like it [disfmarker] it's almost the opposite of the usual th situation [vocalsound] where, um, [vocalsound] you get some really nice result with, uh [disfmarker] [vocalsound] with digits, say, and then you go to a large corpus and the result goes away. Um, we had what looked like a good result in digits but th but then when we went to more training data that kind of went away. [speaker007:] Mm hmm. [speaker003:] So, uh, the digits result right now is kind of equivocal. [speaker007:] Mm hmm. [speaker003:] Um, but the conversational speech one, while not huge, is in sort of Hub five or Meeting kind of terms, actually it's [disfmarker] it's [disfmarker] it's not bad. You know? So [disfmarker] [vocalsound] So it's [disfmarker] it's [disfmarker] you know, it's like, i i w [speaker007:] Right. [speaker003:] after he did some adjustments, more like a percent or so that [disfmarker] that i th absolute, that it goes [disfmarker] [vocalsound] goes down [speaker006:] Yeah. It's a P H [speaker003:] and [disfmarker] [speaker007:] But is it still true that basically the s speaker adaptation, um, sort of negates much of the advantage? [speaker003:] Sometimes. Yeah. No, I'm talking about with the adaptation. [speaker007:] With it. [speaker003:] Without the adapt without the adaptation is a much bigger effect. 
[speaker007:] OK. [speaker003:] So it so it's [disfmarker] it's very reasonable to think of as an alternative way [pause] of getting this kind o kind of improvement. [speaker007:] Mm hmm. [speaker003:] But I think what w ultimately what we have to face with it is that, um [disfmarker] I mean, he gets incredibly good results [vocalsound] if you artificially reverberate clean [disfmarker] you know, close [disfmarker] close miked data. [speaker006:] Right. [speaker003:] So I think the thing we really have to face is that our model [vocalsound] for what's going on is [disfmarker] is wrong [speaker006:] Is it wrong? [speaker007:] Right. [speaker003:] or incomplete, and that in this situation in particular there's a lot of noise. [speaker007:] Right. [speaker003:] And noise plus reverberation is not at all the same as just reverberation. So, um, uh, there's that one thing that [pause] [@ @] [comment] been thinking of doing, is there's a whole lot of work that's been going on in noise suppression in the [disfmarker] in the Aurora, uh, team. And so, uh, they've got some software and I'm think of [disfmarker] of just having us integrate that in, uh, cuz it [disfmarker] [speaker006:] Just combine it see. [speaker003:] Yeah. Cuz it just [disfmarker] I mean, it just subtracts it out. In fact [disfmarker] in fact you can run it in sort of enhancement mode where you get speech out and [disfmarker] and [disfmarker] and, uh, then just run it through the rest of your recognizer. [speaker007:] Mm hmm. [speaker003:] So [disfmarker] So, uh [pause] um. [speaker007:] Yeah. [speaker006:] Neat. [speaker003:] So we're working on that. [speaker007:] OK. There's some work, um, [vocalsound] that [disfmarker] Do you remember this [pause] paper or poster [comment] by, um, some CMU folks at, uh, HLT? They were talking about [disfmarker] and [disfmarker] and they s they referred to an upcoming ICASSP paper, I think, at the time. 
Some someone in Karlsruhe, I think, um, worked on, [vocalsound] uh, es you know, estimating noise from the silent regions and then, [vocalsound] uh, doing some explicit [disfmarker] I don't know if it's something akin to parallel model combination or something like that to [disfmarker] to, uh, [speaker003:] I mean, people have done that for a long time. Right? [speaker007:] Right. Anyway, uh, so I [disfmarker] I [disfmarker] I couldn't judge whether this was original or not, but [disfmarker] but it seemed like they got pretty good results on their meeting data. So we might want to look into that. [speaker003:] Yeah, I don I don't remember the paper. [speaker007:] Right. [speaker003:] But I mean, that [disfmarker] th that's certainly one of the common techniques is to do that. [speaker007:] Right. [speaker003:] Uh, and in all th the stuff that our team is doing they certainly look at [vocalsound] regions that you think are [vocalsound] nonspeech in order to get noise statistics. [speaker007:] Mm hmm. [speaker003:] But, [vocalsound] uh, [speaker007:] Right. [speaker003:] I mean, there's a host of thing different things you could do. In the Aurora thing, we have [disfmarker] we have a little bit of a handicap in that they [disfmarker] they don't let you, [vocalsound] uh, adjust the tis [comment] the statistical models. So y you have to [disfmarker] you have to do everything at the feature level. [speaker007:] Mm hmm. [speaker003:] Um. But, um, um [disfmarker] but there's, uh, you know, Wiener filtering and spectral subtraction which are, uh, sort of [disfmarker] i can be looked at as m minor modifications of one another. Um. And they they've now built up a nice piece of software that does [disfmarker] u um [disfmarker] has this sort of general framework for a [disfmarker] for it so that you can adjust it to sort of an any [disfmarker] any [disfmarker] any one of a dozen ways that people have done this kind of thing and [speaker007:] Mm hmm. 
[speaker003:] they found a particular set of parameters they liked [pause] this week. And [pause] we're running with that for a while. But. [speaker007:] OK. [speaker003:] Anything else? [speaker002:] So. Well, I'll just really briefly, um [disfmarker] So, I've been working with Don and also [disfmarker] [vocalsound] and [disfmarker] and Andreas on the [disfmarker] this paper on prosody, and what we're trying to do is predict where people [disfmarker] at [disfmarker] given all information up to whatever point in time you're considering, try to predict if that's a good location for someone else to jump in. So, that's the idea of that paper. And that's [pause] the same task that Mari, a um, [vocalsound] a student named Dustin, and also Sarah who was at this last meeting [vocalsound] will be looking into doing from the language modeling side. Maybe some kind of clu you know, clustered class N gram or something. [speaker003:] Uh huh. [speaker002:] Um. And also we're [disfmarker] we're using a couple of the labellers who are doing emotion to help us, um, finalize some transcripts for this. So these undergra undergrad students are really helping us out a lot, [speaker008:] Yeah. [speaker002:] under Don's supervision, actually. So that's good. [speaker008:] Right. [speaker002:] So these offices are being, uh, very busy [disfmarker] Don's and the one across from him. And then also, um, working with Jeremy on Communicator emotion labelling. And [vocalsound] most of that up to a few weeks ago was just getting the [pause] people labelling these utterances to a computer for emotion and that's going pretty well now. [speaker003:] What kind of emotions do you label? [speaker002:] Um. Well, we're mostly looking for places where the person's frustrated with the system. [speaker003:] So, there's just two categories, frustrated and not, or [disfmarker]? No. [speaker002:] There [disfmarker] there are actually [disfmarker] that's how we started it. [speaker003:] Yeah. 
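The spectral subtraction and Wiener filtering mentioned a little earlier can be sketched roughly as follows. This is only an illustration of the two classic techniques, not the Aurora team's actual software; the flooring constant and the noise estimate are arbitrary assumptions.

```python
import numpy as np

def spectral_subtraction(noisy_mag, noise_mag, floor=0.01):
    """Subtract an estimated noise magnitude spectrum from each frame,
    flooring the result so magnitudes never go negative. (Real systems
    add smoothing to suppress "musical noise"; omitted here.)"""
    clean = noisy_mag - noise_mag                # per-bin subtraction
    return np.maximum(clean, floor * noisy_mag)  # spectral floor

def wiener_gain(noisy_power, noise_power, eps=1e-10):
    """Wiener-style per-bin gain: estimated SNR / (1 + SNR).
    noise_power would typically come from frames judged to be
    non-speech, as discussed above."""
    snr = np.maximum(noisy_power - noise_power, 0.0) / (noise_power + eps)
    return snr / (1.0 + snr)
```

Both operate purely at the feature (spectral) level, consistent with the Aurora constraint mentioned above that the statistical models themselves cannot be adjusted.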
[speaker002:] And we had different levels. Now we have things like "amuse". Like, "oh, you finally got it". And [disfmarker] [vocalsound] um, and "disappointed tired". Like, "yeah, no, that's wrong again". Which isn't really mad, but, you know. [speaker003:] Yeah. [speaker002:] So th they gave me a lot of feedback. The labellers, after doing this, [speaker003:] Uh huh. [speaker002:] told me what they wanted as categories. So. [comment] And then there's jokes. And [disfmarker] [speaker003:] Yeah. [speaker002:] Yeah, right. Um. There's not a lot of [disfmarker] enough frustration for us to really go through quickly, cuz we have to label every meeting and also things like repeats for the same information, and so forth. And I've been coordinating a little bit with [vocalsound] Katrin at, uh, UW who was doing [pause] user correction work that she presented at the [disfmarker] at the meeting. So there's some overlap there in what you m there's a correlation between corrections and annoyance or frustration. [speaker003:] Yeah. [speaker002:] Um, but Jeremy's been starting to really work on that project now from the acoustic side. So we have all these waveforms, and he's been [vocalsound] with Morgan's help and, uh, Dan Ellis's, I think, looking into spectral tilt a bit, so he can talk about that. It's a feature we haven't used before. Um, and starting to, uh, r r take information from alignments and create a database that we can use sort of as a [disfmarker] [vocalsound] a large feature vector or table to feed to our decision trees, which will try to give us back an answer from the speech only as [pause] to whether the person's frustrated or not. [speaker003:] That reminds me of something. [speaker002:] So. [speaker003:] We [disfmarker] we talked about something a ways back and I sort of lost track of it, about having a synthesizer driven with [disfmarker] [speaker004:] Yeah. I haven't really [pause] had time to do that. [speaker002:] Yeah. 
[vocalsound] Actually, I wanted [pause] to talk with you about that [speaker004:] Yeah. [speaker002:] cuz there's a similar project at SRI where we're using [disfmarker] we want to do something like that to look at whether or not some of the p stylized pitch stuff [vocalsound] is, um, sort of perceptually similar to the kinds of things that people would mark. [speaker004:] Yeah. [speaker002:] So, I [disfmarker] I actually wanted to talk to you about that. [speaker004:] Yeah. Maybe after Wednesday? [speaker002:] So. OK. Yeah. Cuz this will be, like, after [disfmarker] [speaker003:] Yeah, cuz you have this AS ASRU deadline. [speaker004:] Yeah. Yeah. [speaker002:] OK. Yeah. This will be after September anyway, [speaker007:] Right. [speaker003:] But that seemed like it would be fairly straight forward, that if you take the [disfmarker] take the pitch from the real data, feed it into the synthesizer, and then process the output with a gain box that you also had energy from the [disfmarker] from the real data. [speaker004:] Yeah. [speaker002:] so I [disfmarker] [speaker004:] Yeah. [speaker002:] Right. [speaker004:] Yeah. [speaker002:] Yeah. There was just no [pause] way to use the energy in Festival, that was [disfmarker] [speaker007:] Hey! We could use that in the English [disfmarker] We have to do the English synthesis for What if we make the system really [pause] annoyed? [speaker002:] OK Anyway. [vocalsound] I [disfmarker] I should move on cuz we're running late, [speaker003:] Yeah. [speaker002:] but I wanted to say there's one question in my mind, which maybe Morgan can talk to Jeremy about, is how to sort of normalize spectral tilt. It's just [disfmarker] I'm sort of in this, uh, area I don't know much about and so we should talk off line. [speaker003:] OK. 
[speaker002:] But if you have a certain speaker [vocalsound] and you wanna sort of get a bunch of data from that speaker but compare it, you know, directly on a feature to [disfmarker] to like a decision tree that won't normalize anything for you once you feed it the features, how do you sort of normalize over a particular speaker? [speaker003:] Mm hmm. [speaker002:] What kinds of [disfmarker] what [disfmarker] what makes sense to do with that feature? [speaker003:] Well, I'll have to hear what he's using, f given different suggestions he's gotten. [speaker002:] So. i Yeah. But that's something we [pause] would [pause] be [pause] glad to [pause] have help on. [speaker003:] But [disfmarker] OK. [speaker002:] So. Go ahead, Jeremy. [speaker005:] Right. So I've been looking at spectral tilt and [pause] different ways of [pause] perhaps generating it. And, um, I basically have three ways right now that we're looking at for, uh [disfmarker] we're gonna look at for features. Uh, one is just looking at, um, the first cepstral coefficient and, uh, using that. Um, another is looking at the log ener the difference of log energies between, um, the high [disfmarker] like a high frequency band and a low frequency band. And the third is just, um, taking a slope, um, of the linear fit of the spectrum [pause] and using that. So, those are the three [disfmarker] three ways I have right now for generating these spectral tilt numbers. And, [vocalsound] um, hopefully soon we can look at some of the labelled stuff and look at, uh, these numbers and see [disfmarker] see which ones w seem to be working well. And things like that. Um, I also recently wrote a script, um, [vocalsound] that, uh, generates the kappa statistic, um, which is a [disfmarker] for labeller agreement. Um, so, [vocalsound] [nonvocalsound] been looking at that as well. [speaker004:] Yeah. 
[speaker003:] I mean, wh what [disfmarker] [vocalsound] specifically, what you wanna do is remove the [disfmarker] the [disfmarker] the [disfmarker] the average, uh, speaker [pause] tilt in some sense? [speaker002:] The thing i Yeah. We [disfmarker] we [disfmarker] you wanna use spectral tilt to try to get at the s n the voice quality of these utterances. So you could have something like "no" versus "no!" and these might differ in that way. But the problem is we don't know [disfmarker] [vocalsound] th we don't [disfmarker] we don't know whether the "no" is frustrated or not. [speaker003:] You don't know it. Yeah. Yeah. [speaker002:] So [disfmarker] Right. [vocalsound] Exactly. And then you also have the different speakers. And so, the sort of question is, [vocalsound] do you wanna average all the data together or do you [disfmarker] do you [disfmarker]? [speaker003:] Yeah. [speaker002:] You wanna capture the change in voice quality within a speaker, without having to know ahead of time w which is which, cuz that would be circular. [speaker003:] I mean, [speaker002:] So, [speaker003:] m maybe [disfmarker] maybe this is too obvious to be right, [speaker002:] u this is [disfmarker] [speaker003:] but I mean, the [disfmarker] the [disfmarker] the [disfmarker] the normal thing for C one would be to [disfmarker] to, uh, get rid of the mean and th and the variance. Right? [speaker001:] To get rid of what? [speaker003:] Does that [disfmarker]? [vocalsound] The mean and variance. Have it be zero mean and unity variance. [speaker006:] Mean and variance. [speaker001:] OK. [speaker002:] But do you do this, say, just in the vowel regions, [speaker003:] And then [disfmarker] [speaker002:] and do you [disfmarker]? a There's [disfmarker] there's a whole bunch of, um, sort of unknowns. [speaker003:] Right. [speaker002:] And we can try all of them and just put them into the tree as features, [speaker003:] Well, the answer's yeah. 
I mean, you want [disfmarker] you want it for every place that's voiced. [speaker002:] and [disfmarker] [speaker007:] But [disfmarker] but you can't do it [disfmarker] I mean you can't just normalize it based on the utterance because then you're gonna have zero everywhere. Right? So you'd have to normalize it based on [pause] the [disfmarker] [speaker003:] No, ev everything for that speaker. [speaker002:] Yeah. [speaker007:] Yeah, OK. But that's not really feasible [speaker002:] Everything up to that point. [speaker007:] because you're having a sys you have a system, a dialogue system, where you only have access to the [disfmarker] what the speaker said before. [speaker003:] Yeah? [speaker007:] So it'd have to be sort of a causal, uh, version of [@ @]. [speaker002:] Oh, yeah. That's OK. That [disfmarker] that's the sort of [disfmarker] w we keep [disfmarker] you keep, uh, iterat you keep updating those numbers for future utterances. [speaker006:] Yeah. [speaker002:] Th That [disfmarker] that would work. [speaker003:] Yeah. I mean, but [disfmarker] but the main [disfmarker] I think the main thing is that [disfmarker] [speaker007:] Or you c u [speaker003:] the [disfmarker] You're right. I mean, the main distinction is whether it's voiced or [disfmarker] or [disfmarker] voiced or not. If it's [disfmarker] if it's [disfmarker] if it's silence or [disfmarker] or unvoiced regions, then you [disfmarker] you [disfmarker] you don't really want that in there, uh, if you can help it. [speaker002:] Yeah. [speaker003:] But actually, I mean, just to first order, [comment] suppose you just didn't do anything smart at all and just did [disfmarker] you know, did that, it wouldn't be that bad. Because you'd get some number that would be skewed by [disfmarker] be skewed by that [disfmarker] [speaker002:] So you mean, you're just [pause] using a mean and the varian and a variance? [speaker003:] You know, uh, uh, X X [disfmarker] X [disfmarker] X X [disfmarker] X minus mu over sigma. 
Yeah. [speaker002:] You mean just a [disfmarker] a Z square, like? Right, right. Right. OK. So it is distributed [pause] in a way that you can do this. Yeah. OK. [speaker007:] Mmm. [speaker003:] It [disfmarker] it [disfmarker] it's [disfmarker] it's sort of the first order thing to do. There's, uh, you know, uh, obviously much [disfmarker] much more arcane things to do and complicated things to do. [speaker002:] OK. So, that's [disfmarker] so [disfmarker] that was our sort of [disfmarker] [speaker003:] But that's [disfmarker] th that's the first thing to do. [speaker002:] maybe this will work, I don't know. [speaker003:] And then [disfmarker] you know, then [vocalsound] the next thing is, "well, let's just look at the voiced regions", [speaker002:] OK. [speaker003:] you know, and that would be a reasonable thing to do. And then if it's really [disfmarker] when you look at what y i i i Since you're looking at something that's sort of in log domain, cepstral is like log d linearly transformed of a log domain, [comment] [vocalsound] it's probably not that terrible an assumption to pretend it's Gaussian anyway. And so the other things that get more [disfmarker] more tricky are, [vocalsound] uh, to handle the fact that it's not [disfmarker] not really Gaussian. And [disfmarker] and y you may [disfmarker] you may not really need that for this. [speaker002:] Well, we probably won't get that far. Yeah. [speaker003:] Yeah. So I think that would be [disfmarker] that would be the obvious thing to do. [speaker002:] OK. OK. [speaker007:] Well, we're not gonna build a parametric model of the [disfmarker] o o of the [disfmarker] of the feature. [speaker002:] Yeah. [speaker007:] Right? We're just gonna [disfmarker] we we're gonna use [vocalsound] like thresholding or something. [speaker003:] No, I understand. 
It's just [disfmarker] it [disfmarker] it [disfmarker] No, it's more that [disfmarker] that if you felt something wasn't Gaussian and you were trying to [pause] normalize for some kind of, uh, bias of some sort, you might use a non parametric [vocalsound] measure, [speaker007:] Right, right. OK. [speaker003:] like you might use a median, [speaker002:] m Median, yeah. [speaker007:] Right. [speaker003:] you know, and rank statistics, and all that sort of stuff. [speaker002:] We could just plot [disfmarker] plot these for a speaker, [speaker003:] But I mean, you shouldn't bother with it if it's roughly Gaussian. [speaker002:] and get a bunch of histograms. [speaker007:] Right. [speaker003:] Just [disfmarker] Yeah. [speaker002:] OK. [speaker007:] OK. [speaker002:] Right. [speaker007:] Uh, yeah. [speaker002:] Thanks. [speaker003:] Sure. Uh, and I already [disfmarker] I already said what [pause] I was doing, which is [pause] mostly trying to give advice to Dave when he [disfmarker] he [disfmarker] [vocalsound] he actually does the real [vocalsound] work on this, uh, reverberation thing. But I think it's [disfmarker] it's [disfmarker] it's a real, uh [disfmarker] [vocalsound] it's a real learning experience for both of us. And [disfmarker] and, uh, I think it's [disfmarker] it's a problem [pause] ar [comment] problem area that people really haven't dealt with that much, as we know. So, [vocalsound] uh, w h he kind of did [disfmarker] he started off with what was [vocalsound] in some sense sort of the first obvious thing to do, although [pause] hardly anybody had done it yet. And, uh, it [disfmarker] it does work very well under some conditions and other conditions it doesn't. [speaker007:] Mm hmm. [speaker003:] And now we're understanding [disfmarker] trying to understand what those are. It it's [disfmarker] it's [disfmarker] it's fun. Do you want to add some anything? [disfmarker] to say anything? [speaker006:] She just got back. 
[speaker003:] How was y Did you catch fish, or [disfmarker]? [speaker001:] No. [speaker003:] No. [vocalsound] No fish. OK. [speaker001:] No. [speaker003:] You know, when Bush catches fish they actually put a bunch of fish in the lake for him. [speaker001:] Oh. [speaker002:] Really s really hungry fish. [speaker008:] It's [pause] part of the budget. [speaker003:] I'm [disfmarker] I'm not kidding. Really. They just [disfmarker] they [disfmarker] they pre stocked it with [disfmarker] with a [disfmarker] a [disfmarker] [speaker006:] The Secret Service is out there throwing fish into the ocean [disfmarker] into the lake. [speaker003:] I'm really not kidding. [speaker008:] How they [disfmarker] They got, like, scuba gear on and, like, hooking them. [speaker007:] But they didn't tell her. [speaker006:] "Hey, I caught one [disfmarker] this one [disfmarker] and it's already cooked!" [speaker007:] It [disfmarker] you know, to keep [disfmarker] to keep us, uh [disfmarker] [speaker002:] It's already dead. [speaker003:] Yeah. [vocalsound] So we're [pause] ready to increase our federal funding. [speaker006:] I think we're done. Should we do, uh, simultaneous digits so that we actually have time for tea? [speaker003:] Why [disfmarker] why don't we? [speaker007:] OK. [speaker003:] Yeah, let's do that. [speaker007:] Hmm. [speaker003:] I still don't know if these are gonna be any good but it sure does get us out of here quicker. A one, and a two, and a three. [speaker006:] Ready? [speaker003:] And you were worried we wouldn't have any overlap today. [speaker006:] Got plenty of overlap. [speaker007:] So, did you [disfmarker] Jeremy, did you know that whoever finishes last has to buy [pause] cappuccino for everybody? [speaker006:] And [disfmarker] And we ' [speaker001:] Why? [speaker005:] I'm known. I [disfmarker] [speaker004:] Um. [speaker001:] No, cuz she already told me it, before she told you. [speaker005:] No, she told me a long time ago. 
She told me [disfmarker] she told me like two weeks ago. [speaker001:] Oh, well, it doesn't matter what time. [speaker002:] OK. You know how to toggle the display width [pause] function [disfmarker] [speaker004:] Wow. [speaker001:] Well maybe she hadn't just started transcribing me yet. Anyway. [speaker004:] What is it? [speaker005:] Let me explain something to you. [speaker004:] Um, [speaker005:] My laugh is better than yours. [speaker004:] there. [speaker002:] Yo. [speaker001:] I beg to differ. [speaker004:] Um, OK. [speaker001:] But you have to say something genuinely funny before you'll get an example. [speaker005:] Yeah. [speaker004:] The thing is I don't know how to get to the next page. Here. [speaker005:] No. You should be [disfmarker] at least be self satisfied enough to laugh at your own jokes. [speaker004:] Actually I thought [disfmarker] There. [speaker001:] No, it's a different laugh. [speaker004:] How weird. [speaker001:] Ooh, wow! [speaker005:] Oh! Holy mackerel. [speaker001:] Wow. Whoa! [speaker004:] What?! Oh. OK. I wasn't even doing anything. [vocalsound] OK. [speaker001:] Uh. [speaker005:] Eva's got a laptop, she's trying to show it off. [speaker004:] That was r actually Robert's idea. But anyhow. Um [speaker006:] O K. So, here we are. [speaker005:] Once again. [speaker006:] Once again, right, together. Um, so we haven't had a meeting for a while, and [disfmarker] and probably won't have one next week, I think a number of people are gone. Um, so Robert, why don't you bring us up to date on where we are with EDU? [speaker002:] Um, uh in a [disfmarker] in a smaller group we had uh, talked and decided about continuation of the data collection. So Fey's time with us is almost officially over, and she brought us some thirty subjects and, t collected the data, and ten dialogues have been transcribed and can be looked at. If you're interested in that, talk to me. Um, and we found another uh, cogsci student who's interested in playing wizard for us. 
Here we're gonna make it a little bit more complicated for the subjects, uh this round. She's actually suggested to look um, at the psychology department students, because they have to partake in two experiments in order to fulfill some requirements. So they have to be subjected, [vocalsound] [comment] before they can actually graduate. And um, we want to design it so that they really have to think about having some time, two days, for example, to plan certain things and figure out which can be done at what time, and, um, sort of package the whole thing in a [disfmarker] in a re in a few more complicated um, structure. That's for the data collection. As for SmartKom, I'm [disfmarker] the last SmartKom meeting I mentioned that we have some problems with the synthesis, which as of this morning should be resolved. [speaker006:] Good. [speaker002:] And, so, "should be" means they aren't yet, but [disfmarker] but I think I have the info now that I need. Plus, Johno and I are meeting tomorrow, so maybe uh uh, when tomorrow is over, we're done. And ha n hav we'll never have to look at it again Maybe it'll take some more time, to be realistic, but at least we're [disfmarker] we're seeing the end of the tunnel there. That was that. Um, the uh, uh I don't think we need to discuss the formalism that'll be done officially s once we're done. Um, something happened, in [disfmarker] on Eva's side with the PRM that we're gonna look at today, and um, we have a visitor from Bruchsal from the International University. Andreas, I think you've met everyone except Nancy. [speaker001:] Sorry. [speaker003:] Yeah. [speaker001:] Hi. Hi. [speaker002:] Hi. Hi. And, um, [speaker001:] So when you said "Andreas" I thought you were talking about Stolcke. Now I know that we aren't, OK. [speaker002:] Andy, you actually go by Andy, right? [speaker003:] Yeah. [speaker002:] Oh, OK. [speaker003:] Cuz there is another Andreas around, [speaker002:] Eh [disfmarker] [speaker001:] Hmm. 
[speaker003:] so, to avoid some confusion. [speaker002:] That will be [pause] Reuter? [speaker003:] Yeah. [speaker002:] Oh, OK. So my scientific director of the EML is also the dean of the International University, one of his many occupations that just contributes to the fact that he is very occupied. And, um, the [disfmarker] um, he [@ @] might tell us a little bit about what he's actually doing, and why it is s somewhat related, and [disfmarker] by uh using maybe some of the same technologies that we are using. And um. Was that enough of an update? [speaker006:] I think so. [speaker002:] In what order shall we proceed? [speaker004:] OK. [speaker002:] Maybe you have your on line [disfmarker] [speaker004:] Uh, yeah, sure. Um, so, I've be just been looking at, um, Ack! What are you doing? Yeah. OK. Um, I've been looking at the PRM stuff. Um, so, this is, sort of like the latest thing I have on it, and I sorta constructed a couple of classes. Like, a user class, a site class, and [disfmarker] and you know, a time, a route, and then [disfmarker] and a query class. And I tried to simplify it down a little bit, so that I can actually um, look at it more. It's the same paper that I gave to Jerry last time. Um, so basically I took out a lot of stuff, a lot of the decision nodes, and then tried to [disfmarker] The red lines on the, um, graph are the um, relations between the different um, classes. Like, a user has like, a query, and then, also has, you know um, reference slots to its preferences, um, the special needs and, you know, money, and the user interest. And so this is more or less similar to the flat Bayes net that I have, you know, with the input nodes and all that. 
And [disfmarker] So I tried to construct the dependency models, and a lot of these stuff I got from the flat Bayes net, and what they depend on, and it turns out, you know, the CPT's are really big, if I do that, so I tried to see how I can do, um [disfmarker] put in the computational nodes in between. And what that would look like in a PRM. And so I ended up making several classes [disfmarker] Actually, you know, a class of [disfmarker] with different attributes that are the intermediate nodes, and one of them is like, time affordability money affordability, site availability, and the travel compatibility. And so some of these classes are [disfmarker] s some of these attributes only depend on stuff from, say, the user, or s f just from, I don't know, like the site. S like, um, these here, it's only like, user, but, if you look at travel compatibility for each of these factors, you need to look at a pair of, you know, what the um, preference of the user is versus, you know, what type of an event it is, or you know, which form of transportation the user has and whether, you know, the onsite parking matters to the user, in that case. And that makes the scenario a little different in a PRM, because, um, then you have one user objects and potentially you can have many different sites in [disfmarker] in mind. And so for each of the site you'll come up with this rating, of travel compatibility. And, they all depend on the same users, but different sites, and that makes a [disfmarker] I'm tr I w I wa have been trying to see whether the PRM would make it more efficient if we do inferencing like that. And so, I guess you end up having fewer number of nodes than in a flat Bayes net, cuz otherwise you would [disfmarker] c well, it's probably the same. But um, No, you would definitely have [disfmarker] be able to re use, like, [vocalsound] um, all the user stuff, and not [disfmarker] not having to recompute a lot of the stuff, because it's all from the user side. 
So if you changed sites, you [disfmarker] you can, you know, save some work on that. But, you know, in the case where, it depends on both the user and the site, then I'm still having a hard time trying to see how um, using the PRM will help. Um, so anyhow, using those intermediate nodes then, this [disfmarker] this would be the class that represent the intermediate nodes. And that would [disfmarker] basically it's just another class in the model, with, you know, references to the user and the site and the time. And then, after you group them together this [disfmarker] no the dependencies would [disfmarker] of the queries would be reduced to this. And so, you know, it's easier to specify the CPT and all. Um, so I think that's about as far as I've gone on the PRM stuff. [speaker006:] Well [speaker004:] Right. [speaker006:] No. So y you didn't yet tell us what the output is. [speaker004:] The output. [speaker006:] So what decisions does this make? [speaker004:] OK. So it only makes two decisions, in this model. And one is basically how desirable a site is meaning, um, how good it matches the needs of a user. And the other is the mode of the visit, whether th It's the EVA decision. Um, so, instead of um, [vocalsound] doing a lot of, you know, computation about, you know, which one site it wants of [disfmarker] the user wants to visit, I'll come [disfmarker] well, try to come up with like, sort of a list of sites. And for each site, you know, where [disfmarker] h how [disfmarker] how well it fits, and basically a rating of how well it fits and what to do with it. So. Anything else I missed? [speaker006:] So that was pretty quick. She's ac uh uh Eva's got a little write up on it that uh, probably gives the [disfmarker] the details to anybody who needs them. Um, so the [disfmarker] You [disfmarker] you didn't look at all yet to see if there's anybody has a implementation. [speaker004:] No, not yet, um [disfmarker] [speaker006:] OK. 
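Eva's point that inserting intermediate nodes shrinks the conditional probability tables can be illustrated with a back-of-the-envelope count. The arities and node counts here are hypothetical, chosen only to show the effect.

```python
def cpt_size(parent_arities, node_arity=2):
    """Number of entries in a conditional probability table:
    node_arity times the product of the parents' arities."""
    size = node_arity
    for a in parent_arities:
        size *= a
    return size

# A query node depending directly on, say, eight ternary inputs:
direct = cpt_size([3] * 8)            # 2 * 3^8 = 13122 entries

# Factored through four binary intermediate nodes (time affordability,
# money affordability, site availability, travel compatibility),
# each depending on two of the inputs:
intermediates = 4 * cpt_size([3, 3])  # 4 * (2 * 9) = 72 entries
query = cpt_size([2] * 4)             # 2 * 2^4 = 32 entries
factored = intermediates + query      # 104 entries total
```

The factored version also exposes which intermediate values depend only on the user, so they can be computed once and reused across candidate sites, as discussed above.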
So one [disfmarker] so one of the questions, you know, about these P R Ms is [speaker004:] Mm hmm. [speaker006:] uh, we aren't gonna build our own interpreter, so if [disfmarker] if we can't find one, then we uh, go off and do something else and wait until s one appears. Uh, so one of the things that Eva's gonna do over the next few weeks is see if we can track that down. Uh, the people at Stanford write papers as if they had one, but, um, we'll see. So w Anyway. So that's a [disfmarker] a major open issue. If there is an interpreter, it looks like you know, what Eva's got should run and we should be able to actually um, try to solve, you know, the problems, to actually take the data, and do it. Uh, and we'll see. Uh, I actually think it is cleaner, and the ability to instantiate, you know, instance of people and sites and stuff, um, will help in the expression. Whether the inference gets any faster or not I don't know. Uh, it wouldn't surprise me if it [disfmarker] if it doesn't. [speaker004:] Mm hmm. [speaker006:] You know, it's the same kind of information. I think there are things that you can express this way which you can't express in a normal belief net, uh, without going to some incredible hacking of [disfmarker] sort of rebuilding it on the fly. I mean, the notion of instantiating your el elements from the ontology and stuff fits this very nicely and doesn't fit very well into the extended belief net. So that was one of the main reasons for doing it. Um. I don't know. So, uh, people who have thought about the problem, like Robert i it looked to me like if [comment] Eva were able to come up with a [vocalsound] you know, value for each of a number of uh, sites plus its EVA thing, that a travel planner should be able to take it from there. And [disfmarker] you know, with some other information about how much time the person has and whatever, and then plan a route. 
[speaker002:] Um hmm, um, [vocalsound] well, first of all uh, uh, great looks, mu much cleaner, nnn, nnn, Certain [disfmarker] certain beauty in it, so, um, if beauty is truth, then, uh we're in good shape. But, the um, as, uh, mentioned before we probably should look at t the details. So if you have a write up then uh, I'd love to read it [speaker004:] Mm hmm. [speaker002:] and uh [disfmarker] because, um, i Can you go all the way back to the [disfmarker] the very top? [speaker004:] Yeah. [speaker002:] Um, [vocalsound] uh these [disfmarker] [@ @] [comment] these [disfmarker] w w when these are instantiated they take on the same values? that we had before? [speaker004:] I can't really see the whole thing. [speaker002:] or are they [disfmarker] have they changed, in a sense? [speaker004:] Well I think I basically leave them to similar things. [speaker002:] Uh huh. [speaker004:] Some of the things might [disfmarker] that might be different, maybe like [disfmarker] are that the hours for the site. [speaker002:] Hmm. [speaker004:] And, eventually I meant that to mean whether they're open at this hour or not. And status would be, you know, more or less like, whether they're under construction, [speaker002:] Uh huh. [speaker004:] and [disfmarker] and [disfmarker] or stuff like that. [speaker002:] And the, uh, other question I would have is that presumably, from the way the Stanford people talk about it, you can put the probabilities also on the relations. If [disfmarker] [speaker004:] Which is the structural uncertainty? [speaker006:] Yeah. Yeah, I [disfmarker] that's [disfmarker] That I think was actually in the previous [disfmarker] the Ubenth stuff. I don't remember whether they carried that over to this or not, [speaker001:] Mmm. [speaker006:] uh, structural uncertainty. [speaker002:] It's sort of in the definition or [disfmarker] in the [disfmarker] in Daphne's definition of a PRM is that classes and relations, [speaker006:] OK. 
[speaker002:] and you're gonna have CPT's over the classes and their relations. [speaker006:] Alright. [speaker002:] More uncertainty, or [disfmarker] or [disfmarker] [speaker006:] Uh, [speaker004:] I remember them learning when, you know, you don't know the structure for sure, [speaker002:] I should say. [speaker006:] Yeah. [speaker004:] but I don't remember reading how you specify [speaker006:] Right. [speaker002:] Yeah, that would be exactly my question. [speaker004:] wh to start with. Yeah. [speaker002:] Well [disfmarker] [speaker006:] Yeah. [speaker004:] Yeah. [speaker006:] So, uh, the [disfmarker] the plan is [disfmarker] is when Daphne gets back, we'll get in touch and supposedly, um, we'll actually get s deep [disfmarker] seriously connected to [disfmarker] to their work and [speaker002:] Yep. [speaker006:] somebody'll [disfmarker] Uh, you know [disfmarker] If it's a group meeting once a week probably someone'll go down and, whatever. So, we'll actually figure all this out. [speaker002:] OK. OK. Then I think the w [vocalsound] long term perspective is [disfmarker] is pretty clear. We get rocking and rolling on this again, once we get a package, if, when, and how, then this becomes foregrounded [speaker004:] Mm hmm. [speaker002:] profiled, focused, again. [speaker005:] Designated? [speaker001:] Of course. [speaker002:] And um, until then we'll come up with a something that's [disfmarker] [@ @] [comment] that's way more complicated for you. Right? [speaker004:] OK. [speaker002:] Because this was laughingly easy, right? [speaker004:] Actually I had to take out a lot of the complicated stuff, cuz I [disfmarker] I made it really complicated in the beginning, and Jerry was like, [vocalsound] "this is just too much". [speaker006:] Yeah. 
So, um, you could, from this, go on and say suppose there's a group of people traveling together and you wanted to plan something that somehow, with some Pareto optimal uh, [vocalsound] uh, thing for [disfmarker] [speaker001:] That's good. That's definitely a job for artificial intelligence. [speaker006:] uh, or [disfmarker] [speaker002:] Well that's not [disfmarker] not even something humans [disfmarker] [speaker001:] Except for humans can't really solve it either, so. [speaker002:] yeah. [speaker006:] Right. Right. Well that's the [disfmarker] that would [disfmarker] that would be a [disfmarker] uh, you could sell it, as a [disfmarker] [speaker001:] Yeah. [speaker006:] OK, eh you don't have to fight about this, just give your preferences to the [disfmarker] [speaker001:] And then you can blame the computer. [speaker006:] w Exactly. [speaker002:] Hmm. [speaker001:] So. [speaker002:] But what does it [disfmarker] uh [disfmarker] Would a pote potential result be to [disfmarker] to split up and never talk to each other again? You know. [speaker001:] That should be one of them. [speaker002:] Yeah. [speaker006:] Yeah. Right. [speaker005:] That'd be nice. [speaker001:] Mmm. [speaker006:] Anyway. So. So there i there are some [disfmarker] some u uh, you know, uh, elaborations of this that you could try to put in to this structure, but I don't think it's worth it now. Because we're gonna see what [disfmarker] what else uh [disfmarker] what else we're gonna do. Anyway. But uh, it's good, yeah and [disfmarker] and there were a couple other ideas of [disfmarker] of uh, things for Eva to look at in [disfmarker] in the interim. [speaker002:] Good. Then, we can move on and see what Andreas has got out his sleeve. Or Andy, for that matter? [speaker003:] OK. So uh, uh, well, thanks for having me here, first of all. Um, so maybe just a [disfmarker] a little background on [disfmarker] on my visit. 
So, uh, I'm not really involved in any project, that's uh [disfmarker] that's relevant to you uh, a at the moment, uh, the [disfmarker] the reason is really for me uh, to have an opportunity to talk to some other researchers in the field. And [disfmarker] and so I'll just n sort of give you a real quick introduction to what I'm working on, and um, I just hope that you have some comments or, maybe you're interested in it to find out more, and [disfmarker] and so I'll be uh, happy to talk to you and [disfmarker] and uh, I'd also like to find out some more and [disfmarker] and maybe I'll just walk around the office and and then [disfmarker] and ask some [disfmarker] some questions, uh, in a couple days. So I'll be here for uh, tomorrow and then uh, the remainder of uh, next week. OK, so, um, what I started looking at, uh, to begin with is just uh, content management systems uh, i i in general. So um, uh what's uh [disfmarker] Sort of the state of the art there is to um [disfmarker] uh you have a bunch of [disfmarker] of uh documents or learning units or learning objects, um, and you store meta data uh, associate to them. So there's some international standards like the I triple E, uh [disfmarker] There's an I triple E, LOM standard, and um, these fields are pretty straightforward, you have uh author information, you have uh, size information, format information and so on. Uh, but there're two uh fields that are um, more interesting. One is uh you store keywords associated with the uh [disfmarker] with the document, and one is uh, you have sort of a, um, well, what is the document about? So it's some sort of taxonomic uh, ordering of [disfmarker] of the [disfmarker] of the units. 
Now, if you sort of put on your semantic glasses, uh you say, well that's not all that easy, because there's an implicit um, uh, assumption behind that is that uh, all the users of this system share the same interpretation of the keyword and the same interpretation of uh, whichever taxonomy is used, and uh, I think that's a [disfmarker] that's a very [disfmarker] that's a key point of these systems and they sort of always brush over this real quickly without really elaborating much of that and uh [disfmarker] As a matter of fact, the only thing that m apparently really works out so far are library ordering codes, which are very, very coarse grain, so you have some like, science, biology, and then [disfmarker] But that's really all that we have at the moment. So I think there's a huge, um, uh need for improvement there. Now, what this uh [disfmarker] a standard like this would give us is we could um, sort of uh with a search engine just query uh, different repositories all over the world. But we can't really [disfmarker] Um, so what I'm [disfmarker] what I try to do is um, to have um, uh [disfmarker] So. So the scenario is the following, you you're working on some sort of project and you encounter a certain problem. Now, what [disfmarker] what we have at our university quite a bit is that uh, students um, try to u program a certain assignment, for example, they always run into the same problems, uh, and they always come running to us, and they'll say why's it not [disfmarker] it's not working, and we always give out the same answer, so we thought, well, it'd be nice to have a system that could sort of take care of this, and so, what I want to build is basically a [disfmarker] a smart F A Q system. Now, what you uh need to do here is you need to provide some context information which is more elaborate than "I'm looking for this and this and this keyword." So. And I think that I don't need to tell you this. 
I'm [disfmarker] I'm sure you have the same [disfmarker] when [disfmarker] when somebody utters a sentence in a certain, uh, context it, and [disfmarker] and the same sentence in another context makes a huge difference. So, I want to be able to model information like, um, so in the [disfmarker] in the context of [disfmarker] in the context of developing distributed systems, of a at a computer science school, um, what kind of software is the person using, which homework assignment is he or she working on at the moment, um, maybe what's the background of that student's um, which um, which error message was encountered. So this sort of information I think should be transmitted, uh, when a certain document is retrieved. Now, um, basically giving this um [disfmarker] Uh so we somehow need to have a formalized um, way of writing this down basically, and that's where the shared interpretation of [disfmarker] of certain terms and keywords comes in again. And, using this and some [disfmarker] some uh, knowledge about the domain I think you can do some [disfmarker] some simple inferences. Like you know that when somebody's working about [disfmarker] uh, working on [disfmarker] on servlets for example, he's using Java, cuz servlets are used [disfmarker] are written in Java. So some [disfmarker] some inferences like that, now, um, u using this you can infer more information, and you could then match this to the meta data of um [disfmarker] off the documents you're [disfmarker] you're searching against. So, uh what I wanna do is basically have some sort of um [disfmarker] given these inputs, and then I can compute how many documents match, and use this as a metric in the search. Now, what I plan to do is I want to uh sort of do a uh [disfmarker] uh [pause] try to improve the quality of the search results, and I want to do this by having a depth uh, um, um [disfmarker] steepest descent approach. 
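[Editor's sketch: the inference step mentioned above ("he's working on servlets, so he's using Java") can be sketched as simple forward chaining over hypothetical domain rules, expanding the known context before matching it against document metadata. The rules and attribute names are made up for the example.]

```python
# Hypothetical domain rules: knowing one fact implies further facts.
rules = {
    "servlets": {"lang": "java"},   # servlets are written in Java
    "java": {"runtime": "jvm"},
}

def expand(context):
    # Apply rules until no new facts are added (forward chaining).
    facts = dict(context)
    changed = True
    while changed:
        changed = False
        for value in list(facts.values()):
            for k, v in rules.get(value, {}).items():
                if facts.get(k) != v:
                    facts[k] = v
                    changed = True
    return facts

ctx = expand({"topic": "servlets"})
# ctx now also contains lang=java and runtime=jvm, which can be
# matched against document metadata in the search.
```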
So if I knew which operating system the person was working on, would this improve my search result? And [disfmarker] and having uh, uh a symbolic formalized model of this I could simply compute that, and find out which um [disfmarker] which questions are worth um, asking. And that's what I then propagate back to the user, and [disfmarker] and sort of try to optimize the search in this way. Now, the big problem that I'm facing right now is um, it's fairly easy to hack up a system uh quickly, that [disfmarker] that works in the small domain, but the problem is obviously the scalability. And uh uh, so Robert was mentioning uh, earlier today is that uh, Microsoft for example with their printer set up program has a Bayesian network, which does exactly this, but there you face a problem that these are very hard to extend. And so, uh what I'm [disfmarker] What I try to do is basically try to model this uh, in a way that you could really combine uh, knowledge from very different sources, and [disfmarker] and um, sort of looking into some of the ideas that the semantic web community uh, came up with. Trying to [disfmarker] to have uh, an approach how to integrate s uh certain uh [disfmarker] representation of certain concepts and also some computational rules, um, what you can do with those. Um. What I'm also looking into is a probabilistic approach into this because document retrievals is a very fuzzy procedure, so it's probably not that easy to simply have a symbolic uh, computational model. That [disfmarker] that probably isn't expressive enough. So. So that's another thing, um, which I think you're also uh, uh looking into right now. And then um, uh sort of as an add on to this whole idea, um, uh that would be now, depending on what the search engine or the content repository [disfmarker] depending on which [disfmarker] um, uh, which uh, rules and which ontologies it [disfmarker] it uses, or basically its view of the world, uh you can get very different results. 
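[Editor's sketch: the "which question is worth asking" idea can be illustrated by estimating, for each unknown context attribute, how much knowing its value would narrow the current candidate set, weighting each possible answer by its frequency among the candidates, and asking about the attribute with the smallest expected remaining set. The documents and attributes are invented for the example.]

```python
from collections import Counter

docs = [
    {"topic": "servlets", "os": "linux", "lang": "java"},
    {"topic": "servlets", "os": "windows", "lang": "java"},
    {"topic": "sockets", "os": "linux", "lang": "c"},
    {"topic": "sockets", "os": "windows", "lang": "c"},
]

def matching(docs, context):
    # Candidates consistent with everything we already know.
    return [d for d in docs if all(d.get(k) == v for k, v in context.items())]

def expected_remaining(docs, context, attr):
    # Expected candidate-set size after learning attr's value,
    # weighting each answer by its frequency among current candidates.
    cands = matching(docs, context)
    counts = Counter(d[attr] for d in cands)
    total = sum(counts.values())
    return sum((n / total) * n for n in counts.values())

context = {"topic": "servlets"}     # what we already know
unknown = ["os", "lang"]
best = min(unknown, key=lambda a: expected_remaining(docs, context, a))
```

Here asking about the operating system actually splits the candidates, while asking about the language does not (all servlet documents are Java anyway), so the OS question is the one worth propagating back to the user.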
So it might ma make a lot of sense to actually query a lot of different search engines. And there you could have an idea where you actually have sort of a [disfmarker] a peer to peer approach, where we're all sort of carrying around our individual bookshelves, and um, if you have a question about a homework, it's [disfmarker] probably makes sense to ask somebody who's in your class with you, sort of the guru in the certain area, rather than going to some Yahoo like uh, search engine. So these are some of the [disfmarker] just in a nutshell, some of the ideas. And I think a lot of the [disfmarker] even though it's a [disfmarker] it's a very different domain, but I think a lot of the, um, issues are [disfmarker] are fairly similar. So. OK. [speaker001:] And so some of the [disfmarker] I don't know how much you know about the larger Heidelberg project, I [disfmarker] Are you [disfmarker] [speaker003:] Uh I know, yeah I know abou about it. [speaker001:] So it seems like a lot of [disfmarker] some of the issues are the same. It's like, um, you know, the c context based factors that influence how you interpret, [speaker003:] Mm hmm. Mm hmm. [speaker001:] um, s how to interpret. In [disfmarker] in this case, infer in in knowing [disfmarker] wanting to know what kinds of things to ask. We we've kind of talked about that, but we haven't worried too much about that end of the discourse. [speaker002:] Mm hmm. [speaker001:] But maybe you guys had that in the previous models. [speaker002:] Well, in a [disfmarker] in one [disfmarker] t one s mmm, small difference in a [disfmarker] in a way, is that he doesn't have to come up with an answer, but he wants to point to the places w w [speaker003:] Yeah, so. So I'm [disfmarker] I'm not [disfmarker] I'm not building an expert [disfmarker] [speaker001:] Documents that have the answers. [speaker003:] Uh, I want to build a smart librarian, basically [speaker001:] Mm hmm. Right. [speaker003:] that can point you to the right reference. 
[speaker001:] Right. [speaker003:] I don't wanna compute the answer, so it's a little bit easier for me. [speaker002:] Well. Uh, you have to s still m understand what the content says about itself, and then match it to what you think the informational needs [disfmarker] [speaker001:] Mm hmm. [speaker003:] Mm hmm. [speaker001:] So you also don't have to figure out what the content is. You're just taking the keywords as a topic text, [speaker003:] I [disfmarker] I assume that [disfmarker] that the there will be learning systems that [disfmarker] that tag their [disfmarker] their content. [speaker001:] as [disfmarker] OK. [speaker003:] And um, um, [speaker001:] Right. [speaker003:] m [@ @] and basically what I [disfmarker] what I envision is that you [disfmarker] rather than just supplying a bunch of keywords you could basically [disfmarker] for [disfmarker] for an FAQ for example you could state sort of like a logic condition, when this document applies. So "this document explains how to set up your uh, mail account on Linux" or something like this. [speaker001:] Mm hmm. [speaker003:] So. So something [disfmarker] something very specific that you can then [disfmarker] But the [disfmarker] I think that the key point with these uh, learning systems is that uh, a learning system is only as good as uh the amount of content it [disfmarker] it carries. [speaker001:] Mmm, mm hmm. [speaker003:] You can have the best learning system with the best search interface, if there's no content inside of it, it's not very useful. 
So I think ultimately because um, uh developing these [disfmarker] these rules and these inference uh [disfmarker] inferences I think is very costly, so um, uh I think you must be able to reuse some [disfmarker] some existing um, domain [disfmarker] domain information, or [disfmarker] or [disfmarker] or ontologies that [disfmarker] that uh other people wrote and then try to integrate them, and then also search the entire web basically, rather than just the small uh, content management system. [speaker001:] OK. Mm hmm. [speaker003:] So I think that's [disfmarker] that's crucial for [disfmarker] for the success of [disfmarker] or [@ @] [disfmarker] [speaker001:] So, you're not [disfmarker] I guess I'm trying to figure out how [disfmarker] how it maps to the kinds of things that we've talked about in this group, and, actually associated groups, [speaker003:] Mm hmm. [speaker001:] cuz some of us do pretty detailed linguistic analyses, and I'm guessing that you [disfmarker] you won't be doing that? [speaker003:] No. [speaker001:] OK. Just checking. So, [vocalsound] OK. [speaker002:] Hmm. [speaker003:] No. [speaker001:] So, you take the query, and [disfmarker] and [disfmarker] [speaker006:] On the other hand, uh, FrameNet could well be useful. So do you know the FrameNet story? [speaker003:] Um, yeah. Uh, not [disfmarker] not too much, [speaker006:] OK. [speaker003:] but uh, [speaker006:] Oh. Th that's another thing you might wanna look into while you're here. [speaker003:] I have a rough overview. [speaker006:] Because, um, you know, the standard story is that keyworks [disfmarker] keywords evoke frames, and the frames may well give you additional keywords or uh, if you know that [disfmarker] that [disfmarker] that a [disfmarker] a bunch of keywords uh, indicate a frame, then you can find documents that actually have the whole frame, rather th than just uh, individual [disfmarker] [speaker003:] Mmm. Mmm. 
[speaker006:] So there's a lot of stuff, and people are looking at that. Most of the work here is just trying to get the frames right. There's linguists and stuff and there's a lot of it and they're [disfmarker] they're busily working away. But there are some application efforts trying to exploit it. And this looks t it seems to be that this is a place where you might be able to do that. [speaker003:] Yeah. Yeah. Yeah. I'm sure I could learn a lot about um, yeah, just how to [disfmarker] how to come up with these structures, [speaker001:] Mmm. [speaker003:] cuz it's [disfmarker] it's very easy to whip up something quickly, but it maybe then makes sense to [disfmarker] to me, but not to anybody else, and [disfmarker] and if we want to share and integrate things, they must [disfmarker] well, they must be well designed really. [speaker002:] Remember the uh, Prashant story? [speaker006:] Right. [speaker002:] The absolutely no [disfmarker] no linguistic background person that the IU sent over here. [speaker006:] Right. [speaker002:] And Andreas and I tried to come up wi or we had come up actually with a eh [disfmarker] with him working on an interface for FrameNet, as it was back then, that would p do some of the work for this machine, [speaker006:] Right. Yeah. [speaker002:] which uh, never got done because Prashant found a happy occupation [speaker006:] W yeah, I know, I mean it [disfmarker] it [disfmarker] he [disfmarker] w he did w what [disfmarker] what he did was much more s sensible for him. [speaker002:] which in the [disfmarker] [speaker006:] I think uh, [speaker002:] Absolutely. Yeah. But so [disfmarker] I'm just saying, the uh, we had that idea [speaker006:] you know [disfmarker] Yeah. The idea was there. Yeah, OK. [speaker002:] uh to [disfmarker] to exploit FrameNet there as well. [speaker006:] Yeah. [speaker001:] Hmm. [speaker002:] And um. 
[speaker006:] Yeah, actually you guys never [disfmarker] [speaker002:] And Srini's doing information extraction also, right? [speaker006:] Right. [speaker002:] with that FrameNet base. [speaker003:] Mmm. [speaker006:] Yeah. So you [disfmarker] you guys never sent anybody else from I U. [speaker002:] Mm hmm. [speaker006:] You were y no [disfmarker] [speaker003:] Except [disfmarker] except Prashant? [speaker006:] Yeah. Uh, this was supposedly an exchange program, [speaker003:] Um, [speaker006:] and [disfmarker] I [disfmarker] we [disfmarker] you know, it's fine. We don't care, but it just [disfmarker] I'm a little surprised that uh, Andreas didn't come up with anyone else he wanted to send. Alright. [speaker003:] Uh I don't know, I mean the uh [disfmarker] [speaker006:] I mean I had forgotten a I [disfmarker] To be honest with you, I'd totally forgotten we had a program. [speaker001:] Hmm. [speaker002:] Uh it's in the program? [speaker003:] Uh I [disfmarker] I think it's [disfmarker] it's really the lack of students uh, at IU at the moment. [speaker006:] Yeah. Yeah. No, no. There was a whole co There was a little contract signed. It was [disfmarker] Yeah. [speaker003:] Yeah, yeah. I think it's ju it's more the lack of [disfmarker] of students, really, and w we have all these sponsors that are always sort of eager to get some teams. [speaker006:] Yeah, I know. [speaker001:] Mmm. [speaker006:] Right. [speaker003:] But [disfmarker] [speaker006:] Right. [speaker003:] Well I mean if [disfmarker] if I were a student, I'd love to come here, rather than work for some German [vocalsound] [nonvocalsound] company, or [disfmarker] [speaker006:] Yeah. Right. [speaker002:] You are being recorded right now, so beware. [speaker006:] Oh, right! [speaker003:] Well, I didn't say anybody to [disfmarker] anything to offend [disfmarker] well, except for the sponsors maybe, but [disfmarker] [speaker006:] Right. Anyway. Right. 
So I thi tha that's [disfmarker] that's one of the things that might be worth looking into while you're here. [speaker003:] Mm hmm. [speaker006:] Uh, unfortunately, Srini, who is heavily involved in DAML and all this sort of stuff is himself out of town. [speaker003:] Mm hmm. Well I'll go to the uh, Semantic Web Workshop, uh, in two weeks. [speaker006:] Right, and [disfmarker] Yeah, for [disfmarker] for some reason he's not doing that. [speaker001:] Yeah. [speaker006:] I don't know why he [@ @] [disfmarker] [speaker001:] Well, he had other things to do. [speaker006:] oh, I, who knows? Anyway, s yeah, you'll see [disfmarker] you'll certainly see a lot of the people there. [speaker001:] The uh [disfmarker] The other person I thought of is Dan Gildea? because he did some work on topic spotting [speaker006:] Yeah. St statistical stuff. That would be a very good idea. [speaker001:] w um, which is, I mean, you [disfmarker] I mean. I don't [disfmarker] Depending on how well you wanna integrate with that end, [speaker003:] Mm hmm. [speaker001:] you know, like, taking the data and fig you said the learning systems that figure out [disfmarker] We [disfmarker] There's someone in ICSI who actually has been working on [disfmarker] has worked on that kinda stuff, and he's worked with FrameNet, so you could talk to him about, you know, both of those things at once. [speaker003:] Mm hmm. Mm hmm. [speaker001:] So. And he just finished writing a draft of his thesis. So. [speaker003:] So, uh, who is that again? [speaker001:] I u [vocalsound] Dan Gildea, GILDEA. And, he's in one of the rooms on the fifth floor and stuff, [speaker002:] Who? I can take you to his office. It's just around the corner. [speaker001:] and [disfmarker] [speaker003:] OK, great. [speaker001:] Hmm. Well, if you fal solve the problem, [vocalsound] hope you can do one for us too. [speaker006:] Alright, was there anything else for this? One of these times soon we're gonna hear about construal. [speaker002:] Yeah. 
I'm sure. I have um [disfmarker] I think it was November two thousand three or some [disfmarker] No. Wh I had something in my calendar. [speaker006:] Oh, OK. Right. [speaker002:] Um, [speaker005:] Wait a second. That's a long way away. [speaker006:] Good thinking! [speaker002:] Uh well, maybe I can [disfmarker] I can bribe my way out of this. So. So I did some double checking and it seems like spring break in two thousand [vocalsound] one. [speaker001:] Talk about changing the topic. [speaker002:] No. [speaker006:] Well, no, but he's [disfmarker] he's [disfmarker] he's [disfmarker] he's [disfmarker] as you said, he's, like the state legislature, he's trying to offer us bribes. [speaker001:] At least this is a private meeting. Right, exactly, OK, that's the link. [speaker002:] This uh [disfmarker] Oh, they refused the budget again? Is it [disfmarker] so about CITRIS? Yeah, still nothing. [speaker006:] Uh, this [disfmarker] this [disfmarker] this [disfmarker] t the s we're, uh, involved in a literally three hundred million dollar uh, program. Uh, with the State of California. And, the State of California is now a month and a half behind its legis its legally required date to approve a budget. So the budget has not been approved. And two days ago [disfmarker] There's two l you know, so, two branches of legislature. One branch approved it, [speaker003:] Mm hmm. [speaker006:] and, um, yesterdayday [comment] there was this uh [disfmarker] uh I thought that the other branch would just approve it, but now there's actually a little back sliding to people who [disfmarker] who approved it got flak from there, eh anyway. So, um [disfmarker] Oh! I have to tell you a wonderful story about this, OK? And then we'll go. So, I [disfmarker] it turns out I wound up having lunch today with a guy named Tom Kalil. KILL [disfmarker] KALIL. And, uh, he now works at Berkeley. 
In fact he's hired to run a lot of CITRIS, even though we don't have the money they [disfmarker] So they've been hiring people right and left, so, uh, they think the money's coming. So [disfmarker] and he was, I think, the chief staffer to Clinton on technology matters. He was in the White House, I don't remember what he was saying. A anyway, like that. And, is now doing all the politics for CITRIS, but also, has a uh, a lot of interest in uh, actually doing things for society, so digital divide and stuff like that. So that's s interesting to me but maybe not to you. But the really interesting thing was, he st he s he s said something about, you know I'm interested in things that have high social multiplier, something that is of great social value. He said, "for example", this was his only example, "if you had a adult literacy program that was as good as an individual tutor, and as compelling as a video game, then that would have a huge social impact". I said, "Oh great! That's a good problem to work on." Anyway. So it was nice that uh, he's got this view, of A, that's what you should try to do, and B, uh, language would be a good way to do it. So that's [disfmarker] [speaker001:] Mmm. Definitely. [speaker006:] So anyway, that's the end of the story. [speaker001:] But for adults and not for the children. [speaker006:] This was [disfmarker] Yeah. I didn't push him on the ch on the child thing, [speaker001:] Uh huh. [speaker006:] but, uh, you know, a again, if [disfmarker] if you [disfmarker] if you [speaker001:] Oh. [speaker006:] um, and this was [disfmarker] this was literacy, which actually is somewhat different problem. [speaker001:] Mm hmm. [speaker006:] Maybe easier. I don't know. So this is reading, rather than teaching [disfmarker] Another project we started on, and [disfmarker] and didn't get funded for was, uh, to try to build an automatic tutoring program, for kids whose first language wasn't English. Which is like half the school population in California. 
Something like that, isn't it? [speaker001:] Mm hmm. [speaker006:] Yeah. So, enormous problem in California, and the idea was if we're so smart about language understanding and speech understanding, couldn't we build [vocalsound] uh, programs that would be tutors for the kids. We think we could. Anyway. So [disfmarker] so [disfmarker] But this is a slightly different problem, [speaker001:] Mm hmm. [speaker006:] and um, I know none of us have the spare time to look at it right now, but it i it's [disfmarker] it's interesting and I may um, talk to him some more about is em somebody already doing this, and stuff like that. So anyway, that was [disfmarker] that was today's little story. [speaker005:] Hmm. [speaker002:] OK. So I [disfmarker] I did manage to get [disfmarker] pull my head out of the sling by sidetracking into CITRIS, [speaker006:] No, no. [speaker002:] but uh or [disfmarker] a temporarily putting it out of the sling [speaker006:] Right. [speaker002:] but, I [disfmarker] I'll volunteer to put it right back in by stating that I am n uh among some other things in the process of writing up stuff that we have been discussing at our daily meetings, [speaker006:] Yeah. [speaker002:] and also revising, thanks for all the comments, the c the original construal proposal. And, if I put one and one together, I may end up with a number that's greater than one and that I [disfmarker] I can potentially present once you get back. [speaker001:] Greater than two? [speaker006:] You're good. [speaker002:] Nnn. [comment] s sometimes, you know the sum is not uh less than the [disfmarker] [speaker001:] Uh, right, right. [speaker006:] Right. Right. Anyway. Yeah, so [disfmarker] OK, so that'd be great, but I'd [disfmarker] I think it's [disfmarker] it's time again, right? [speaker002:] Absolutely. Yeah. [speaker006:] Yeah. OK. [speaker002:] But um, and hopefully all sidetracking um, other things will have disappeared, soon. [speaker006:] Good. Yep. Done? 
[speaker001:] Alright, so I'm I should read all of these numbers? [speaker002:] OK. [speaker005:] Piece of paper? I could borrow? [speaker001:] Oh yeah. [speaker002:] OK, so uh i um I don't know whether Ami's coming or not um but I think we oughta just get started. [speaker005:] Nancy is uh currently in Berkeley but not here? [speaker003:] Nancy's still sick? [speaker002:] Don't know. [speaker005:] OK. [speaker002:] Anyway Oh, so there you go. Anyway, so my idea f for today and we can uh decide that that isn't the right thing to do was to at [disfmarker] spend at least part of the time trying to eh build the influence links, you know which sets of things are uh relevant to which decisions and actually I had uh specific s suggestion to start first with the path ones. The database ones being in some sense less interesting to us although probably have to be done and so to do that so there's [disfmarker] and the idea was we were gonna do two things [speaker003:] Is your mike on? [speaker002:] Ah. Oh right, well. Yeah. We were gonna do two things one of which is just lay out the influence structure of what we think influences what [speaker004:] That's funny. [speaker002:] and then as a uh separate but related task uh particularly Bhaskara and I were going to try to decide what kinds of belief nodes are needed in order to um do what we [disfmarker] what we need to do. Once so but du we should sort of have all of the uh basic design of what influences what done before we decide exactly how to compute it. So I didn't [disfmarker] did you get a chance to look at all [disfmarker] yet? [speaker004:] Yeah, I looked at some of that stuff. [speaker002:] Great. OK so let's start with the uh belief nets, the general influence stuff and then we'll [disfmarker] then we'll also at some point break and talk about the techy stuff. [speaker005:] Well I think one could go there's I think we can di discuss everything.
First of all this I added, I knew from sort of basically this has to be there right? Um [speaker002:] Oh are you gonna go there or not? Yeah, so one i [speaker005:] Given [disfmarker] given uh uh not transverse the castle, the decision is does the person want to go there or is it just [speaker002:] Right, true. Does have to be there. [speaker005:] And [speaker002:] And I'm sure we'll find more as we go that [speaker005:] Hmm? So Go there in the first place or not is definitely uh one of the basic ones. We can start with that. Interesting effect. Um Is this basically true or false or maybe we'll get [speaker002:] Well [disfmarker] [speaker004:] Which one? [speaker005:] what? [speaker001:] "Go there". [speaker005:] m right. [speaker002:] so there is this question about [speaker005:] Here we we actually get just probabilities, [speaker002:] Yeah. [speaker005:] right for each down here. [speaker002:] When we're [disfmarker] yeah when we're done. [speaker005:] Hmm. [speaker002:] So [disfmarker] so the [disfmarker] the reason it might not be true or false is that we did have this idea of when so it's, you know uh current [@ @] and so forth and so on or not at all, [speaker005:] Mm hmm. [speaker002:] right? And so that a decision would be do we want that so you could [disfmarker] two different things you could do, you could have all those values for Go there or you could have Go there be binary and given that you're going there when. [speaker005:] When. How. [speaker002:] Yeah [speaker005:] Why, [speaker002:] and so forth. [speaker005:] yeah. [speaker002:] So I'll let we'll see. [speaker005:] Hmm? [speaker001:] I mean it seems that you could um uh it seems that those things would be logically independent like you would wanna have them separate or binary, Go there and then the [disfmarker] the possibilities of how to go there because [disfmarker] [speaker002:] OK, that's [disfmarker] let's start that way. 
[speaker001:] because, you know it might be easy to figure out that this person is going to need more film eventually from their utterance but it's much more complex to query when would be the most appropriate time. [speaker005:] Hmm. Hmm. OK. And so I've tried to come up with some initial things one could observe so who is the user? Everything that has user comes from the user model everything that has situation comes from the situation model A. We should be be clear. But when it comes to sort of writing down when you [disfmarker] when you do these things is it here? You sort of have to a write the values this can take. [speaker002:] Right. [speaker005:] And here I was really uh in some s sometimes I was really sort of standing in front of a wall feeling very stupid because um [disfmarker] this case it's pretty simple, but as we will see the other ones um for example if it's a running budget so what are the discrete values of a running budget? So maybe my understanding there is too impoverished. [speaker001:] Hmm. [speaker005:] How can I write here that this is something, a number that cr keeps on changing? [speaker002:] No uh [speaker005:] But OK. Thus is understandable? [speaker003:] Yes. [speaker001:] Think so. [speaker005:] So here for example. [speaker002:] You've s have you seen this before at all Keith, these belief net things? [speaker001:] Uh, no, but I think I'm following it. So far. [speaker005:] So here is the [disfmarker] the [disfmarker] we had that the user's budget may influence the outcome of decisions. [speaker002:] Yeah. [speaker004:] Hmm. [speaker005:] There we wanted to keep sort of a running total of things. [speaker004:] Is this like a number that represents how much money they have left to spend? OK, h well I mean how is it different from user finance? [speaker005:] Um the finance is sort of here thought of as [disfmarker] as the financial policy a person carries out in his life, he [disfmarker] is he cheap, average, or spendy? 
[speaker004:] Alright. [speaker005:] And um I didn't come uh maybe a user I don't know, I didn't want to write greediness, but [speaker001:] Yeah. Hmm. [speaker005:] Welcome. [speaker002:] Or cheapness. [speaker005:] Welcome. [speaker001:] User thrift. [speaker004:] Yeah. [speaker002:] Thrift, that's good. Great. [speaker005:] There it is. [speaker002:] Yeah. So Keith w what's behind this is actually a program that will once you fill all this in actually s solve your belief nets for you and stuff. [speaker001:] Mm hmm. [speaker002:] So this is not just a display, this is actually a GUI to a simulator that will if we tell it all the right things we'll wind up with a functioning belief net at the other end. [speaker001:] OK. OK. [speaker005:] And it's so simple even I can use it. [speaker001:] Wow, that is simple. [speaker005:] OK, so here was OK, I can think of uh people being cheap, average, or spendy or we can even have a [disfmarker] a finer scale moderately cheap, [speaker002:] Doesn't matter. [speaker005:] doesn't matter. Agree there but here um I wasn't sure what to write in. [speaker002:] Let's [disfmarker] go ahead. [speaker004:] Well, I mean you've written in [disfmarker] you've written in what uh seems to be required like what else is [disfmarker] is do you want? [speaker005:] If that's permissible then I'm happy. [speaker002:] Well yeah. So here's [disfmarker] here's what's permissible is that you can arrange so that the um the value of that is gonna have to be updated and n it's not a belief update, right? It's [disfmarker] you took some actions, you spent money and stuff, so the update of that is gonna have to be essentially external to the belief net. [speaker005:] Yeah. [speaker002:] Right? And then what you're going to need is uh for the things that it influences. Well let's [disfmarker] first of all let's see if it does influence anything. 
And if it does influence anything then you're gonna need something that converts from the [disfmarker] the number here to something that's relevant to the decision there. So it could be ra they create different ranges that are relevant for different decisions or whatever [disfmarker] but for the moment this is just a node that is conditioned externally and might influence various things. [speaker005:] Hmm. Yeah [disfmarker] this is where um OK anyways let's forget it. [speaker002:] Well that's fine. Well anyway, go ahead. [speaker005:] OK, and so this, oh that [speaker004:] The other thing is that um every time that's updated beliefs will have to be propagated but then the question is do you [disfmarker] do we wanna propagate beliefs every single time it's updated or only when we need to? [speaker002:] Yeah, that's a good question. And uh does it have a lazy mode? I don't remember. [speaker004:] Uh Well, I mean, in Srini's thing there was this thing [disfmarker] there was this um option like proper inferences which suggests that uh doesn't happen, automatically. [speaker002:] Oh right. Yeah. S probably does. Yeah someone has to track that down, but I [disfmarker] but uh [speaker005:] I just accidentally [speaker002:] And [disfmarker] and [disfmarker] and I think [disfmarker] actually uh [speaker005:] Oops. [speaker002:] one of the we w items for the uh user home base uh should be uh essentially non local. I they're only there for the day and they don't have a place that they're staying. [speaker004:] Well [speaker005:] Oh just uh accidentally erased this, I [disfmarker] I just had values here such as uh um is he s we had in our list we had "Is he staying in our hotel?", "Is he staying with friends?", and so forth [speaker002:] Yeah. [speaker005:] uh so we're OK. [speaker002:] So it's clear where w w w where we are right now. So my suggestion is we just pick uh [speaker005:] Something down here? 
[speaker002:] one, you know one uh particular one of the uh well let's do the first [disfmarker] first one let's do the one that we sort of already think we did so w that was the [disfmarker] of the endpoint? [speaker005:] Mm hmm. And um Oops. [speaker004:] Is hmm [speaker005:] Ah, [speaker004:] So it's true or false? [speaker005:] OK. No no no, [speaker002:] No, that's [disfmarker] that's a [speaker005:] EVA. Missed that one. [speaker004:] So [speaker003:] What's the difference between mode and endpoint? [speaker004:] I thought mode, yeah. [speaker002:] although that [speaker005:] Um mode was um [speaker002:] Well, that's [speaker004:] Mode of transportation? [speaker005:] Yeah. [speaker004:] OK. Also true or false. [speaker005:] Mm hmm. [speaker002:] No, he has he hasn't filled them in yet, is what's true. [speaker004:] Yeah, OK. [speaker005:] Did I or didn't I? Ah. Probably nothing done yet, oh I just did it on the upper ones, OK. Makes sense. OK, so this was EVA. Maybe we can think of more things, cross [speaker004:] Yeah. [speaker002:] OK. [speaker001:] Climb, rob. [speaker005:] climb, emerge [speaker002:] No no no, [speaker003:] Uh [speaker004:] Well some of those are subsumed by approach. [speaker002:] these are ju that's just a point, this is ju [speaker003:] Would it be an endpoint if you were crossing over it? [speaker001:] The Charles Bridge, you know. [speaker002:] Yeah, would be a f for a given segment. You know, you [disfmarker] y you go [disfmarker] first go the town square [speaker003:] Well I eh [speaker001:] No, I mean, if you go to re you know if you go to Prague or whatever one of your [disfmarker] your key points that you have to do is cross the Charles Bridge and doesn't really matter which way you cross which [disfmarker] where you end up at the end but the part [disfmarker] the good part is walking over it, so. [speaker002:] That's subtle, but true. 
Anyway so let's just leave it three [disfmarker] with three for now [speaker005:] Mm hmm, mmm. Yeah. [speaker002:] and let's see if we can get it linked up just to get ourselves started. [speaker005:] OK, we [speaker002:] You'll see it [disfmarker] you'll see something comes up immediately, that the reason I wanna do this. [speaker005:] w well the uh user was uh definitely more likely to enter if he's a local [speaker002:] Right. Right. [speaker005:] more likely to view if he's a tourist um and then of course we had the fact that given the fact that he's thrifty and there will be admission then we get all these cross um [speaker002:] We did, but the three things w that [disfmarker] that it contributed to this in fact, the other two aren't up there. so one was the ontology [speaker005:] We'll d what type of building is it? Yeah. [speaker002:] Yeah. And the [disfmarker] and the third thing we talked about was something from the discourse. [speaker005:] What he has mentioned before. [speaker002:] OK, so this is w Right, so what w I [disfmarker] what we seem to need here, this is why it starts getting into the technical stuff [speaker001:] mm hmm [speaker002:] the way we had been designing this, there were three intermediate nodes uh which were the endpoint decision as seen from the uh user model as seen from the ontology and as seen from the discourse. So each of those the way we had it designed, now we can change the design, but the design we had was there was a decision with the same three outcomes uh based on the th those three separate considerations [speaker001:] mm hmm [speaker002:] so if we wanted to do that would have to put in uh three intermediate nodes [speaker005:] Uh we can load it up it you know very simple. [speaker001:] So [speaker002:] and then what you and I have to talk about is, OK if we're doing that and they get combined somehow uh how do they get combined? 
But the [disfmarker] they're [disfmarker] they're undoubtedly gonna be more things to worry about. [speaker005:] So this was adjusted for this one mode thing. [speaker004:] Oh yes. [speaker002:] Yeah. [speaker005:] So that's w w in our uh in [disfmarker] in Johno's sort of pictogram everything that could contribute to whether a person wants to enter, view, or approach something. [speaker002:] Oh, it was called mode, so this [disfmarker] this is m mode here means the same as endpoint. [speaker005:] Is now this endpoint. [speaker003:] Right. [speaker002:] OK, why don't we ch can we change that? [speaker005:] We can just rename that, yeah. [speaker002:] Alright. You know, but that was actually, yeah unfortunately that was a um kind of an intermediate versio that's I don't think what we would currently do. [speaker001:] Can I ask about "slurred" and "angry" as inputs to this? [speaker002:] That's a [speaker001:] What [disfmarker] why? [speaker004:] Like they're either true or false [speaker005:] The prosody? [speaker004:] and they uh [speaker001:] OK. [speaker004:] oh I see. [speaker003:] If the [disfmarker] if the person talking is angry or slurs their speech they might be tired or, you know [speaker001:] Mm hmm. OK. Drunk. [speaker004:] Therefore [speaker003:] And, you know, possibly uh [speaker001:] Less likely to enter. [speaker003:] some, [speaker004:] uh I was thinking less likely to view [speaker003:] yeah. [speaker001:] Hmm. [speaker002:] Yeah. But that's that seems to, yeah. So [disfmarker] so my advice to do is [disfmarker] is get this down to what we think is actually likely to [disfmarker] to be a [disfmarker] a strong influence. [speaker001:] OK. [speaker004:] Right. [speaker002:] But yeah, that was what he had in mind. So let's think about this [disfmarker] this question of how do we wanna handle [disfmarker] so there're two separate things. One is [disfmarker] uh at least two. 
One is how do we want to handle the notion of the ontology now what we talked about, and this is another technical thing Bhaskara, is uh can we arrange so that I think we can so that the belief net itself has properties and the properties are filled in uh from on ontology items. So the [disfmarker] let's take the case of the uh this endpoint thing, the notion was that if you had a few key properties like is this a tourist site, you know some kind of landmark is it a place of business uh is it something you physically could enter [speaker001:] Mm hmm. [speaker002:] OK, et cetera. So that there'd be certain properties that would fit into the decision node and then again as part of the ou outer controlling conditioning of this thing those would be set, so that some somehow someone would find this word, look it up in the ontology, pull out these properties, put it into the belief net, and then the decision would flow. [speaker001:] Mm hmm. [speaker002:] Now [speaker005:] Seems to me that we've sort of e em embedded a lot, em embedded a lot of these uh things we had in there previously in [disfmarker] in [disfmarker] in some of the other final decisions done here, for example if we would know that this thing is exhibiting something um [speaker002:] Right. Right. [speaker005:] if it's exhibiting itself it is a landmark, meaning more likely to be viewed [speaker002:] Yeah. Yep. [speaker005:] if it is exhibiting pictures or sculptures and stuff like this, then it's more likely to be entered. [speaker002:] I uh that's [disfmarker] I think that's completely right and um I think that's good, right? So what [disfmarker] what that says is that we might be able to uh take and in particular so [disfmarker] so the ones we talked about were uh exhibiting and selling [speaker005:] Accessibility. [speaker002:] no, accessibility meant [speaker005:] If it's closed one probably won't enter. 
Or if it's not accessible to a tourist ever the likelihood of that person actually wanting to enter it, [speaker002:] OK. [speaker005:] given that he knows it, of course. [speaker002:] Alright. So let me suggest this. Uh w could you move those up about halfway. Uh The ones that you th And selling I guess. [speaker005:] Yeah, all [disfmarker] all of these if it's fixing things selling things, or servicing things [speaker002:] Right. So here [disfmarker] here's what it looks like to me. is that you want an intermediate structure which i uh is essentially the or of uh for this purpose of [disfmarker] of uh selling, f fixing, or servicing. So that it uh that is, for certain purposes, it becomes important but for this kind of purpose uh one of these places is quite like the other. Does that seem right? So we di [speaker003:] Basic you're basically just merging those for just the sake of endpoint decision? [speaker002:] if we Yes. [speaker003:] Yeah. [speaker002:] So if [disfmarker] well it may be more than endpoint decisions, [speaker001:] Mm hmm. [speaker002:] so the idea would be that you might wanna merge those three [speaker005:] These three? [speaker002:] Yeah. Eh ser s uh selling, fixing, and servicing. [speaker005:] Yeah. [speaker004:] What ex um and so either those is true f or false? So [speaker002:] Uh Uh well it [disfmarker] it [disfmarker] i here's where it gets a little tricky. Uh from the belief net point of view it is from another point of view of course it's interest it's [disfmarker] it's important to know what it's selling or servicing and so forth. [speaker001:] Yeah. [speaker004:] OK. [speaker002:] So for this decision it's just uh true or false [speaker004:] Yeah. [speaker002:] and in th this is a case where the or seems just what you want. [speaker004:] OK. [speaker002:] That [disfmarker] that if any of those things is true then it's the kind of place that you uh [speaker005:] Um more likely to enter. [speaker002:] are more likely to enter. 
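The merge just discussed — collapsing selling, fixing, and servicing into one intermediate node that acts as an OR for the endpoint decision — can be sketched as follows. This is a toy illustration, not the actual JavaBayes network; the node names and the probability numbers are made up:

```python
# Sketch of the intermediate "commercial" node: selling / fixing /
# servicing are merged by a deterministic OR, so the endpoint CPT
# conditions on one boolean instead of three separate parents.

def commercial(selling: bool, fixing: bool, servicing: bool) -> bool:
    """Intermediate node: true if the entity does any kind of business."""
    return selling or fixing or servicing

# Toy conditional probabilities P(endpoint | commercial) over the three
# EVA outcomes -- the numbers are invented for illustration only.
ENDPOINT_GIVEN_COMMERCIAL = {
    True:  {"enter": 0.7, "view": 0.2, "approach": 0.1},
    False: {"enter": 0.2, "view": 0.5, "approach": 0.3},
}

def endpoint_distribution(selling, fixing, servicing):
    return ENDPOINT_GIVEN_COMMERCIAL[commercial(selling, fixing, servicing)]
```

The point Jerry makes holds here: for other decisions the system may care *what* is sold or serviced, but for the endpoint decision the OR summary is all that matters.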
[speaker004:] So you just wanna have them all pointing to a summary thing? [speaker002:] You could, yeah. Yeah, so let's do that. No no, no eh to [disfmarker] to an inter no, an intermediate node. [speaker004:] T [speaker005:] Oh, OK. [speaker002:] That's the p part of the idea, is [speaker005:] Um is [disfmarker] is that the object type node? [speaker002:] I d [speaker005:] So are they the [disfmarker] is it the kind of object that sells, fixes, or services things? [speaker002:] Well, o open up object type and let's see what its values are. [speaker005:] Oh I just created it, it has none so far. [speaker002:] Oh, well OK first of all it's not objects, we called them entities, [speaker005:] Yeah. [speaker002:] right? [speaker005:] And then we have sort of the um [speaker002:] Let's say I put commercial. [speaker005:] Yeah, I w I was just gonna commercial action inside where people p [speaker002:] Well couldn't I do [disfmarker] let's do commercial uh landmark and [speaker005:] And where was the accessible, yeah. [speaker002:] Well accessible I think is different [speaker005:] Yeah. [speaker002:] cuz that's tempor that [disfmarker] that varies temporally, whereas this is a [speaker005:] Mm hmm. [speaker003:] What would a hotel fall under? [speaker002:] I would call that a service, but [disfmarker] but I don't know. [speaker003:] Well I mean in terms of entity type? [speaker002:] Say w w well it's co I would s a a again for this purpose I think it's commercial. Someplace you want to go in to do some kind of business. [speaker003:] OK. [speaker004:] Um what does the underscore T at the end of each of those things signify? [speaker005:] Um things. So places that service things sell things or fix things and pe places that e exhibit things. [speaker004:] Uh huh. OK. OK. That also points to entity type I guess. 
[speaker001:] So we're deriving um this [disfmarker] the [disfmarker] this feature of whether the [disfmarker] the main action at this place happens inside or outside or what we're deriving that from what kind of activity is done there? Couldn't you have it as just a primitive feature of the entity? [speaker002:] Well you could, that's a [disfmarker] that's a choice. [speaker001:] OK. [speaker002:] So uh [speaker001:] I mean it seems like that's much more reliable cuz you could have outdoor places that sell things and you know indoor places that do something else [speaker002:] Yeah, the problem with it is that it sort of putting in a feature just for one decision, [speaker001:] and [speaker002:] now w we may wind up having to do that [speaker001:] Hmm. [speaker002:] this i anyway, this i [speaker001:] OK. [speaker002:] at a mental level that's what we we're gonna have to sort out. [speaker001:] OK. [speaker002:] So, you know what does this look like, what are [disfmarker] what are uh intermediate things that are worth computing, what are the features we need in order to make all these decisions [speaker001:] Mm hmm. [speaker002:] and what's the best way to organize this so that um it's clean and [disfmarker] and consistent and all that sort of stuff. [speaker001:] OK. I'm just thinking about how people, human beings who know about places and places to go and so on would store this and it would probably [disfmarker] you wouldn't just sort of remember that they sell stuff and then deduce from that that it must be going on inside or something. [speaker005:] Well I think an entity maybe should be regard as a vector of several possible things, it can either em do s do sell things, fix things, service things, exhibit things, it can be a landmark at the same time as doing these things, [speaker001:] Mm hmm. [speaker005:] it's not either or mmm certainly a place can be a hotel and a famous site. [speaker001:] Mm hmm. [speaker005:] Many come to mind. 
Things can be generally um a landmark and be accessible. IE a [disfmarker] a castle or can be a landmark a or not accessible, some statue [speaker001:] Mm hmm. [speaker005:] you know can go inside. [speaker002:] OK. Anyway so let me suggest you do something else. Uh which is to get rid [disfmarker] get rid of that l long link between who [disfmarker] the user and the endpoint. [speaker005:] Could we just move it like this? [speaker002:] No no, I don't want the link there at all. [speaker005:] Oh, OK. [speaker002:] Because what we're gonna want is an intermediate thing which is uh the endpoint decisi the endpoint decision based o on the user models, so what we [disfmarker] we [disfmarker] what we talked about is three separate endpoint decisions, so let's make a new node [speaker005:] Yeah. Yeah. [speaker003:] Just as a suggestion maybe you could "save as" to keep your old one nice and clean and so you can mess with this one. [speaker005:] Mmm. The old one was not that not that important, I think but [speaker003:] OK, well, not a big deal then. [speaker005:] Let's do it then. [speaker003:] Well the [disfmarker] Isn't there a "save as" inside of JavaBayes? [speaker005:] But I can just take this [speaker003:] OK. [speaker005:] copy it somewhere else. This was user something or [speaker002:] Well this was uh let's p put it this [disfmarker] let's do endpoint underbar U. [speaker005:] end point? [speaker002:] i endpoint, e end poi this is sa [speaker005:] Ah. [speaker002:] it's the endpoint [speaker005:] Gotcha, yeah. [speaker002:] let's say underbar U, so that's the endpoint decision uh as seen through the [speaker003:] As related from the user model. [speaker002:] Right. So let's [disfmarker] let's actually yeah so lin you can link that up to the [speaker005:] Should I rename this [pause] too? [speaker002:] uh yeah, so that, I guess that's endpoint uh [speaker005:] It's underscore E. [speaker002:] underscore E for entity, and we may change all this, but. Right.
And [speaker005:] OK, shouldn't I be able to move them all? No. Or [disfmarker]? Can I? Where? What? [speaker002:] Oh I d eh I don't know. Actually, I guess the easiest thing would move [disfmarker] mo move the endpoint, well, go ahead. Just do whatever. [speaker005:] Wasn't this possible? [speaker002:] Well. [speaker005:] Yeah. [speaker003:] I think you have to be in move mode before [speaker005:] Uh huh. OK. [speaker002:] Good. Right. [speaker005:] So now we're looking for user related things that um [speaker002:] Yeah. And uh maybe th maybe it's just one who is the user, I don't know, maybe [disfmarker] maybe there's more. [speaker001:] Huh. [speaker005:] Well if he's usi if he's in a car right now what was that people with Harry drove the car into the cafe [speaker002:] Never mind. Uh anyway, this is crude. Now but the [disfmarker] now so [disfmarker] so [disfmarker] but then the question is uh so [disfmarker] and [disfmarker] and we assume that some of these properties would come indirectly through an ontology, but then we had this third idea of input from the discourse. [speaker005:] Well let's [disfmarker] should we finish this, I mean but surely the user interests [speaker002:] Sure, OK. [speaker003:] The user thrift, the user budget. [speaker005:] yeah, yeah [speaker002:] Well, maybe, I again, I d well, OK, put em in but what we're gonna wanna do is actually uh [speaker003:] Well is [disfmarker] [speaker005:] Here this was one of my problems we have the user interest is a [disfmarker] is a vector of five hundred values, so um That's from the user model, [speaker004:] Oh you mean level of interest? [speaker005:] mm hmm, no not levels of interest but things you can be interested in. [speaker001:] Well [disfmarker] [speaker002:] somebody else has built this user model. [speaker005:] Gothic churches versus Baroque townhouses versus [speaker004:] Oh I see, right. So why is it oh it, so it's like a vector of five hundred one's or zero's? 
[speaker005:] Yea n is that [speaker004:] Like for each thing are we [disfmarker] are you interested in it or not? [speaker005:] yeah uh I [disfmarker] I think [speaker004:] I see. [speaker001:] Hmm. [speaker002:] OK. So uh you cou and so here let me give you two ways to handle that. Alright? One is um you could ignore it. But the other thing you could do is have an [disfmarker] and this will give you the flavor of the [disfmarker] of what you could have a node that's [disfmarker] that was a measure of the match between the object's feature, you know, the match between the object the entity, I'm sorry and the user. [speaker005:] Mm hmm. Uh. [speaker002:] So you could have a k a "fit" node and again that would have to be computed by someone else [speaker005:] Mm hmm. [speaker002:] but uh so that uh [speaker005:] Just as a mental note uh [speaker002:] Yeah, that's all. [speaker005:] Mm hmm. And [disfmarker] and should we say that this interests eh affects the likelihood of [disfmarker] of entering? [speaker002:] Yeah. I mean, we could. [speaker005:] Yeah. And also if it's an expensive place to enter, this may also [speaker002:] OK. [speaker004:] Budget. [speaker001:] User schedule. [speaker005:] Schedule? [speaker001:] "Do I have time to go in and climb all the way to the top of the Koelner Dome [comment] or do I just have to [disfmarker]" "time to take a picture of the outside?" [speaker002:] Right. [speaker003:] It seems like everything in a user model a affects [disfmarker] [speaker002:] Well that's what we don't wanna do, see that [disfmarker] se [speaker003:] Yeah. [speaker002:] cuz then we get into huge combinatorics and stuff like that [speaker001:] Mm hmm. [speaker002:] an [speaker003:] Cuz if the, I mean, and if the user is tired, the user state, [speaker004:] Well [speaker003:] right, it would affect stuff, but I can't see why e anything w everything in the model wouldn't be [speaker002:] Well, but [speaker004:] Right. 
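The "fit" node proposed here — a measure of the match between the entity's features and the user's ~500-element interest vector, computed externally and handed to the belief net as a small discrete value — might look like this sketch. The interest tags, the overlap measure, and the high/medium/low thresholds are all assumptions for illustration:

```python
# Hypothetical external computation of the "fit" node: the user's
# interest vector is modeled as a set of interest tags; overlap with
# the entity's feature tags is discretized so the belief net sees
# three values instead of a 500-dimensional vector.

def interest_fit(user_interests: set, entity_features: set) -> str:
    """Return a discrete match level: 'high', 'medium', or 'low'."""
    if not entity_features:
        return "low"
    overlap = len(user_interests & entity_features) / len(entity_features)
    if overlap >= 0.5:
        return "high"
    elif overlap > 0.0:
        return "medium"
    return "low"

# e.g. a user interested in Gothic churches meeting a Gothic landmark:
fit = interest_fit({"gothic_churches"}, {"gothic_churches", "landmark"})
```

This keeps the belief net's CPTs small while letting somebody else's user model do the heavy vector matching, exactly the division of labor suggested in the discussion.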
[speaker002:] Well, that [disfmarker] that's [disfmarker] we can't do that, so we we're gonna have to [speaker003:] Yeah. [speaker002:] but this is a good discussion, we're gonna have to somehow figure out uh some way to encapsulate that uh so if there's some general notion of for example the uh relation to the time to do this to the amount of time the guy has or something like that is [disfmarker] is the uh compatibility with his current state, so that's what you'd have to do, you'd have to get it down to something which uh was itself relatively compact, so it could be compatibility with his current state which would include his money and his time and [disfmarker] and his energy [speaker003:] Yeah, just seems like it'd push the problem back a level. [speaker004:] Right. [speaker002:] It does. [speaker003:] Yeah, but [speaker001:] Mm hmm. [speaker004:] No but, it's more than that, like the [disfmarker] the more sort of you break it up like because if you have everything pointing to one node it's like exponential whereas if you like keep breaking it up more and more it's not exponential anymore. [speaker002:] So it yeah, there are two advantages. That's tha there's one technical one [speaker003:] Sh sh yeah, [disfmarker] [speaker002:] and the other is it [disfmarker] it gets used [speaker003:] S so we'd basically be doing subgrouping? Subgrouping, basically [speaker004:] Yeah. [speaker003:] into mo so basically make it more tree like going backwards? [speaker004:] Right. [speaker001:] Yeah. [speaker002:] Right. But it [disfmarker] there's two advantages, one is the technical one that you don't wind up with such big exponential uh CPTs, [speaker005:] Bhaskara? [speaker002:] the other is it can be [disfmarker] it presumably can be used for multiple decisions. [speaker001:] Mm hmm. [speaker002:] So that if you have this idea of the compatibility with the requirements of an action to the state of the user one could well imagine that that was u [speaker004:] Right.
[speaker002:] not only is it sim is it cleaner to compute it separately but it could be that it's used in multiple places. Anyway th so in general this is the design, this is really design problem. [speaker005:] Yeah. [speaker002:] OK, you've got a signal, a d set of decisions um how do we do this? [speaker005:] What do I have under user state anyhow cuz I named that already something. Oh that's tired, fresh, yeah. Maybe should be renamed into physical state. [speaker002:] Or fat user fatigue even. [speaker001:] Hmm. [speaker005:] That's with a "G"? [speaker001:] Mm hmm. [speaker002:] Whatever. [speaker005:] Then we can make a user state. [speaker002:] What's th what we're talking about is compatibility. Uh or something, I don't know, but. [speaker003:] I guess the [disfmarker] the question uh is It's hard for me to imagine how everything wouldn't just contribute to user state again. Or user compatibility. [speaker002:] Oh but the thing is that we uh uh we had some things that uh [speaker005:] That don't. [speaker002:] that don't [speaker005:] The user interests and the user who [disfmarker] who [disfmarker] who the user is are completely apart from the fact whether he is tired broke [speaker003:] Sure, but other [disfmarker] I thought though the node we're creating right now is user compatibility to the current action, right? [speaker002:] the right [speaker003:] Seems like everything in the user model would contribute to whether or not the user was compatible with something. [speaker002:] Uh maybe not. I mean the [disfmarker] that's the [disfmarker] the issue is um would Even if it was true in some abstract general sense it might not be true in terms of the information we actually had and can make use of. And anyway we're gonna have to find some way to cl uh get this sufficiently simple to make it feasible. 
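Bhaskara's point — everything pointing at one node is exponential, while subgrouping through intermediate nodes is not — can be made concrete by counting conditional probability table (CPT) rows. The specific node counts below are invented just to show the arithmetic:

```python
# A node's CPT has one row per combination of its parents' values,
# so the table size grows exponentially in the number of parents.

def cpt_rows(num_parents: int, values_per_parent: int = 2) -> int:
    """Rows in the CPT of a node with the given (binary) parents."""
    return values_per_parent ** num_parents

# Flat design (hypothetical): 12 binary user-model features all feed
# the endpoint node directly.
flat = cpt_rows(12)                      # 2**12 = 4096 rows

# Subgrouped design: the same 12 features feed 3 intermediate summary
# nodes of 4 parents each, and those 3 summaries feed the endpoint.
grouped = 3 * cpt_rows(4) + cpt_rows(3)  # 3*16 + 8 = 56 rows
```

The second advantage Jerry mentions falls out of the same structure: an intermediate node like "compatibility with current state" is a reusable summary that several downstream decisions can condition on, rather than each re-deriving it from the raw features.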
[speaker005:] Maybe um if we look at the [disfmarker] if we split it up again into sort of um if we look at the uh the endpoint again we [disfmarker] we said that for each of these things there are certain preconditions so you can only enter a place if you are not too tired to do so and also eh have the money to do so if it costs something so if you can afford it and perform it is preconditions. Viewing usually is cheap or free. Is that always true? [speaker001:] Mm hmm. [speaker005:] I don't know. [speaker003:] Well, with the way we're defining it I think yeah. [speaker002:] W w but that eh viewing it without ent yeah view w with our definition of view it's free cuz you [speaker005:] And so is approaching. [speaker002:] Yeah. [speaker001:] Well what about the Grand Canyon, right? No, never mind. I mean are there [disfmarker] are there large things that you would have to pay to get up close to like, I mean never mind, not in the current [disfmarker] [speaker002:] No we have to enter the park. [speaker001:] OK. [speaker002:] Eh almost by definition um paying involves entering, [speaker001:] Yeah. [speaker002:] ge going through some [speaker001:] OK. Right, sure. [speaker002:] Right. Uh So let me suggest we switch to another one, I mean clearly there's more work to be done on this [speaker005:] Mm hmm. [speaker002:] but I think it's gonna be more instructive to [disfmarker] to think about uh other decisions that we need to make in path land. And what they're gonna look like. [speaker003:] So you can save this one as and open up the old one, right and [disfmarker] and then everything would be clean. You could do it again. [speaker002:] Why, I think it's worth saving this one but I think I'd [disfmarker] I'd like to keep this one [speaker004:] Yeah. [speaker002:] cuz I wanna see if [disfmarker] if we're gonna reuse any of this stuff. [speaker003:] Mm hmm. [speaker005:] Um so this might be What next? 
[speaker002:] Well you tell me, so in terms of the uh planner what's [disfmarker] what's a good one to do? [speaker005:] Well let's [disfmarker] th this go there or not I think is a good one. [speaker002:] Uh [speaker005:] Is a very basic one. So what makes things more likely that [disfmarker] So [speaker002:] Well the fir see the first thing is, getting back to thing we left out of the other is the actual discourse. So Keith this is gonna get into your world because uh we're gonna want to know you know, which constructions indicate various of these properties [speaker001:] Mm hmm. [speaker002:] s [speaker001:] Mm hmm. [speaker002:] and so I [disfmarker] I don't yet know how to do this, I guess we're gonna wind up pulling out uh discourse properties like we have object properties and we don't know what they are yet. [speaker001:] Mm hmm. [speaker002:] So that [disfmarker] that the Go there decision will have a node from uh discourse, and I guess why don't we just stick a discourse thing up there to be as a placeholder for [speaker005:] We [disfmarker] we also had discourse features of course for the endpoint. [speaker002:] Of [disfmarker] of course. [speaker005:] Identified that [speaker002:] Yeah. [speaker005:] and so again re that's completely correct, we have the user model, the situation model here, we don't have the discourse model here yet. Much the same way as we didn't [disfmarker] we don't have the ontology here. [speaker002:] Well the ontology we sort of said we would pull these various kinds of properties from the ontology like exhibiting, selling, and so forth. [speaker005:] Really. [speaker002:] So in some sense it's [disfmarker] it's there. [speaker005:] Mm hmm. [speaker002:] But the discourse we don't have it represented at all yet. [speaker005:] Yeah. Um This be specific for second year? Um And [disfmarker] and we probably will have uh something like a discourse for endpoint. [speaker002:] But if we do it'll have the three values. [speaker005:] Hmm? 
[speaker002:] It'll have the EVA values [speaker005:] Yeah. [speaker002:] if [disfmarker] if we have it. [speaker005:] Yeah. OK just for starters and here discourse um [speaker002:] For Go there, probably is true and false, let's say. That's what we talked about. [speaker005:] um well, I think um we're looking at the [disfmarker] the little data that we have, so people say how do I get to the castle and this usually means they wanna go there. [speaker001:] Mm hmm. [speaker005:] So this should sort of push it in one direction [speaker002:] Right. [speaker005:] however people also sometimes say how do I get there in order to find out how to get there without wanting to go there. [speaker002:] Mm hmm. [speaker005:] And sometimes um people say where is it [speaker001:] Mm hmm. [speaker005:] because they wanna know where it is but in most cases they probably [speaker002:] Yeah, but that doesn't change the fact that you're [disfmarker] you want these two values. [speaker005:] Oh yeah, true. So this is sort of some external thing that takes all the discourse stuff and then says here it's either [pause] yep, yay, A, or nay. Yeah. OK? [speaker002:] And they'll be a y uh, a user Go there and maybe that's all, I don't know. [speaker004:] Situation Go there, I mean, because it's [disfmarker] whether it's open or not. [speaker005:] Mm hmm. [speaker002:] OK, good. [speaker004:] That definitely interes [speaker002:] Yep. [speaker004:] But that now that kind of um what's the word [speaker001:] Hmm. [speaker004:] um the [disfmarker] that interacts with the uh EVA thing if they just wanna view it then it's fine to go there when it's closed whereas if they want to um [speaker002:] Right. 
[speaker004:] so [speaker002:] Right, so that's [disfmarker] that's where it starts getting to be uh uh essentially more interesting, so what uh Bhaskara says which is completely right is if you know that they're only going to view it then it doesn't matter whether it's closed or not [speaker001:] Mm hmm. [speaker002:] in terms of uh uh you know, whether [disfmarker] whether you wanna go there. [speaker004:] The time of day, right I [disfmarker] well, right. [speaker001:] Mm hmm. [speaker003:] It does matter though if there's like a strike or riot or something. [speaker002:] Absolutely there are other situational things that do matter. [speaker004:] Right. So yeah, that's what I said just having one situational node may not be enough because this [disfmarker] that node by itself wouldn't distinguish [speaker002:] Well i i it can have di various values. Yeah, but we eh you [disfmarker] you're right it might not be enough. [speaker004:] Yeah, I mean, see I'm [disfmarker] I'm thinking that any node that begins with "Go there" is either gonna be true or false. [speaker001:] Well, what [disfmarker] Whoops. [speaker002:] Yeah. [speaker001:] Right. [speaker002:] Ah. I see that could be. [speaker001:] Also, that node, I mean the Go there s S node would just be fed by separate ones for [speaker005:] Mm hmm. [speaker002:] Could be. [speaker001:] you know, there's different things, the strikes and the [speaker002:] Yeah. [speaker004:] Like situation traffic and so on. [speaker002:] N Yeah. [speaker001:] Yeah, the time of day. [speaker002:] Yeah. So [disfmarker] so now the other thing that Bhaskara eh pointed out is what this says is that uh there sh should be a link, and this is where things are gonna get very messy from the endpoint uh decision [speaker004:] I guess the final [speaker002:] maybe the t they're final re and, I guess the very bottom endpoint decision uh to the Go there node. And I [disfmarker] don't worry about layout, [speaker004:] Yeah. 
[speaker002:] I mean then we'll go [disfmarker] we'll go nuts [speaker005:] Mm hmm. [speaker004:] Mmm. [speaker002:] but [speaker004:] Maybe we could um have intermediate node that just the Endpoint and the Go there S node sort of fed into? [speaker002:] Could be, yeah. [speaker004:] Right. Because that's what we, I mean that's why this situation comes up. [speaker002:] Yeah. Well the Go there, actually the Endpoint node could feed [disfmarker] feed into the Go there S [speaker004:] Yeah, right. [speaker002:] That's right, so the Endpoint node, [speaker005:] Mm hmm. [speaker002:] make that up t t to the Go there then [speaker005:] Yeah. [speaker002:] and again we'll have to do layout at some point, but something like that. Now it's gonna be important not to have loops by the way. [speaker005:] I was just gonna [speaker002:] Uh really important in [disfmarker] in the belief worl net world not to have loops uh [speaker004:] Yes. [speaker005:] How long does it take you to [disfmarker] to compute uh [speaker002:] No it's much worse than that. It [disfmarker] if i loo it [disfmarker] it [disfmarker] it [disfmarker] it [disfmarker] it's not def i it's not well defined if you're there are loops, [speaker004:] It [disfmarker] things don't converge, yeah. [speaker005:] uh R recursive action? [speaker002:] you just you have to there are all sorts of ways of breaking it up so that there isn't uh [speaker005:] Uh but this isn't, this is [disfmarker] this line is just coming from over here. [speaker002:] OK. [speaker004:] Yeah. Yeah. [speaker002:] Yeah, no it's not a loop yet, I'm just saying we [disfmarker] we, in no, in [speaker005:] Mmm. [speaker004:] Well, but the good thing is we [disfmarker] we could have loopy belief propagation which [vocalsound] we all love. [speaker002:] Right. OK, so anyway, so that's another decision. Uh what's [disfmarker] what's another decision you like? [speaker005:] OK, these have no parents yet, but I guess that sort of doesn't matter. Right? 
[speaker002:] Well, the idea is that you go there, you go comes from something about the user from something about the situation and the uh the discourse is [disfmarker] is a mystery. [speaker005:] I mean this is sort of This comes from traffic and so forth, yeah. Sh Should we just make some [speaker002:] Sure, if you want. [speaker005:] um if there's parking maybe Mmm Oh who cares. OK. And if he has seen it already or not and so forth, [speaker002:] Right. [speaker005:] OK. Um and discourse is something that sort of should we make a Keith note here? That sort of comes from Keith. [speaker002:] Sure. Mm hmm. [speaker005:] Just sort of so we don't forget. Oops. Have to get used to this. OK, whoops. [speaker001:] Um actually [disfmarker] [speaker002:] And then also the discourse endpoint, I [disfmarker] I guess endpoint sub D is [disfmarker] if you wanna make it consistent. [speaker003:] Wh ah. [speaker005:] Mm hmm. [speaker001:] Um actually is this the [disfmarker] the right way to have it where um go there from the user and go there from the situation just sort of don't know about each other but they both feed the go there decision [speaker002:] I think so. [speaker001:] because isn't the, I mean [speaker002:] S [speaker001:] uh, hmm OK. [speaker002:] Maybe not, [speaker001:] But that still allows for the possibility of the [disfmarker] of the user model affecting our decision about whether a strike is the sort of thing which is going to keep this user away from [disfmarker] [speaker002:] a Right. [speaker001:] That [disfmarker] all that [disfmarker] that kind of decision making happens at the Go there node. [speaker002:] Uh y you [disfmarker] yeah you [disfmarker] i you [disfmarker] if you needed to do that. [speaker001:] Uh. If you needed it to do that. [speaker002:] Yeah. [speaker001:] But uh OK I was just thinking [speaker002:] Yeah. 
[speaker001:] I guess maybe I'm conflating that user node with possible [disfmarker] possible asking of the user you know [speaker002:] Ah. [speaker001:] hey there's a strike on, uh does that affect whether or not you wanna go or something [speaker002:] Good point, I don't [disfmarker] I don't know how we're going to [disfmarker] t uh [speaker001:] or Yeah, so that might not come out of a user model but, you know, directly out of interaction. [speaker002:] Right. Uh I gu yes my curr you know, don't yeah yeah yeah that's enough. [speaker005:] Yeah. [speaker002:] Uh My current idea on that would be that each of these decision nodes has questions associated with it. [speaker001:] Mm hmm. [speaker002:] And the question wouldn't itself be one of these conditional things you know, given that you know there's a strike do you still wanna go? [speaker001:] OK. Yeah. [speaker002:] But uh if you told him a bunch of stuff, then you would ask him do you wanna go? [speaker001:] Mm hmm. [speaker002:] But I think trying to formulate the conditional question, that sounds too much. [speaker001:] OK. Right, right. Yeah. Right, sure, OK. [speaker002:] To me. [speaker005:] Mm hmm. [speaker002:] Alright, but let me [disfmarker] let [disfmarker] let's stay with this a minute [speaker005:] But [speaker002:] because I want to do a little bit of organization. Before we get more into details. The organization is going to be that uh the flavor of what's going on is going to be that uh as we s e sort of going to this detail indeed Keith is going to [disfmarker] to worry about the various constructions that people might use [speaker001:] Mm hmm. [speaker002:] and Johno has committed himself to being the parser wizard, so what's going to happen is that eventually [speaker001:] Alright. [speaker002:] like by the time he graduates, OK uh they'll be some sort of system which is able to take the discourse in context and have outputs that can feed the rest of belief net. 
I j wa I [disfmarker] I assume everybody knows that, I just wanna you know, get closure that that'll be the game then, [speaker001:] Mm hmm. [speaker002:] so the semantics that you'll get out of the discourse will be of values that go into the various discourse based decision nodes. And now some of those will get fancier like mode of transportation and stuff so it isn't by any means uh necessarily a simple thing that you want out. So uh if there is an and there is mode of transportation [speaker005:] And it there's a sort of also a split if you loo if you blow this up and look at it in more detail there's something that comes from the discourse in terms of what was actually just said what's the utterance go giving us [speaker002:] Yeah. [speaker005:] and then what's the discourse history give us. [speaker002:] Yeah, well that, well, we'll have to decide uh how much of th where that goes. [speaker001:] Mm hmm. [speaker005:] That's uh two things then. Mmm. [speaker001:] Mm hmm. [speaker002:] an and it's not clear yet. I mean it could be those are two separate things, it could be that the discourse gadget itself integrates em as [disfmarker] which would be my guess that you'd have to do see in order to do reference and stuff like that um you've gotta have both the current discourse and the context to say I wanna go back there, wow, what does that mean [speaker001:] Mm hmm. [speaker002:] and uh [speaker005:] Mm hmm [speaker001:] Now. Mm hmm. [speaker002:] Alright. So [speaker005:] But is th is this picture that's emerging here just my wish that you have noticed already for symmetry or is it that we get for each [disfmarker] each decision on the very bottom we sort of get the sub E, sub D, sub U and maybe a sub O [disfmarker] "O" for "ontology" um meta node [speaker002:] I don't know. [speaker005:] but it might just [speaker002:] It could be. 
[speaker005:] could be so this [speaker002:] This is [disfmarker] this is getting into the thing I wanna talk about next, which is s if that's true uh how do we wanna combine those? O or when it's true? [speaker005:] but this eh w wou wou would be nice though that, you know, we only have at most four at the moment um arrows going f to each of the uh bottom decisions. [speaker002:] Yeah. [speaker004:] Yeah. [speaker005:] And four you [disfmarker] we can handle. [speaker002:] No. [speaker005:] It's too much? [speaker004:] Yeah. [speaker002:] Well i i it see i if it's fou if it's four things and each of them has four values it turns out to be a big CPT, it's not s completely impossi I mean it's [disfmarker] it's not beyond what the system could solve but it's probably beyond what we could actually uh write down. or learn. [speaker005:] Right, true. [speaker002:] Uh but, you know it's four to the fourth. It's pretty big. Uh. [speaker003:] Two fifty six, is that what that [speaker002:] Yeah. Yeah, I mean it's and I don't think it's gonna g e I don't think it'll get worse than that by the way, so le that's a [disfmarker] that's a good [speaker004:] Mmm yeah. [speaker005:] But [disfmarker] but four [disfmarker] didn't we decide that all of these had true or false? So is [disfmarker] it's four [speaker002:] Uh for go there, but not f but not for [disfmarker] [speaker003:] Yeah. [speaker002:] the other one's three values for endpoint already. [speaker004:] Yeah, I mean you need actually three to the five because uh well I mean if [disfmarker] if it has four inputs and then it itself has three values [speaker003:] Right. [speaker004:] so I mean it can get big fast. [speaker005:] Um for endpoint? No it's [disfmarker] it's sh [speaker002:] EV it's the EVA. [speaker005:] yeah, down here, but this one only has two. [speaker004:] No it still has three, [speaker002:] No. Since ta they will still have three. [speaker004:] EVA. 
[speaker002:] Each [disfmarker] so you're uh uh from each point of view you're making the same decision. [speaker001:] Mm hmm. [speaker002:] So from the point of view of the ob of the entity [speaker001:] Mm hmm. [speaker005:] Want to view that, yeah yeah. C sl yeah [speaker002:] Yeah. [speaker004:] This [disfmarker] and also, I mean, the other places where, like for example consider endpoint view, it has inputs coming from user budget, user thrift [speaker002:] Right. [speaker004:] so even [speaker002:] Those are not necessarily binary. S so we're [disfmarker] we're gonna have to use some t care in the knowledge engineering to not have this explode. And in fact I think it doesn't in the sense that um Read it, you know actually with the underlying semantics and stuff I think it isn't like you have two hundred and fifty six different uh ways of [disfmarker] of thinking about whether this user wants to go to some place. Alright. So we [disfmarker] we just have to figure out what the regularities are and and code them. But um What I was gonna suggest next is maybe we wanna work on this a little longer but I do want to also talk about the thing that we started into now of uh well it's all fine to say all these arrows come into the si same place what rule of combination is used there. So th yes they [disfmarker] so these things all affect it, [speaker001:] Mm hmm. Right. [speaker002:] how do they affect it? And belief nets have their own beliefs about uh what are good ways to do that. So is it [disfmarker] it's [disfmarker] it's clearer n clear enough what the issue is, [speaker004:] Right. [speaker002:] right? So do we wanna switch that now or we wanna do some more of this? [speaker005:] R basically w we just need to sort of in order to get some closure on this figure out how we're gonna get this picture sort of uh completely messy. 
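The table-size arithmetic in this exchange is worth pinning down. A minimal sketch (Python; the node names and cardinalities are illustrative assumptions, not from any actual belief net file):

```python
# Sketch of the CPT-size concern discussed above: a decision node with
# several parents needs one probability entry per combination of parent
# values, so the table grows as the product of the parents' cardinalities.

def cpt_rows(parent_cardinalities):
    """Number of parent-value combinations a full CPT must cover."""
    rows = 1
    for k in parent_cardinalities:
        rows *= k
    return rows

# Four parents (e.g. hypothetical _E, _D, _U, _O inputs), four values each:
full_table = cpt_rows([4, 4, 4, 4])  # 4**4 = 256 combinations

# With three-valued EVA inputs and a three-valued output, the full
# specification is 3**5 = 243 numbers, as mentioned in the discussion:
joint_numbers = 3 ** 5

print(full_table, joint_numbers)  # 256 243
```

The point being made is that a full CPT is exponential in the number of parents, while the factored or weighted combination schemes discussed next need only a handful of parameters.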
[speaker002:] Well, here [disfmarker] he here's one of the things that [disfmarker] that I th you sh you [disfmarker] no, I don't know how easy it is to do this in the interface but you [disfmarker] it would be great if you could actually just display at a given time uh all the things that you pick up, you click on "endpoint", OK [speaker005:] Mm hmm. [speaker002:] and everything else fades and you just see the links that are relevant to that. And I does anybody remember the GUI on this? [speaker003:] Uh d I would almost say the other way to do that would be to open u or make you know N many belief nets and then open them every time you wanted to look at a different one [speaker005:] Mm hmm. [speaker003:] vers [speaker005:] It's probably pretty easy do it [disfmarker] to do it in HTML, [speaker003:] cuz uh [speaker005:] just [disfmarker] [speaker003:] Yeah, but [speaker005:] Uh [speaker004:] HTML? [speaker005:] Yeah I have each of these thing each of the end belief nets be [disfmarker] be a page and then you click on the thing and then li consider that it's respective, [speaker004:] OK. [speaker002:] Yeah the [disfmarker] well the b [speaker005:] but [speaker002:] anyway so uh it clear that even with this if we put in all the arrows nobody is gonna be able to read the diagram. [speaker003:] Yeah. [speaker002:] Alright, so e we have to figure out some eh eh uh basically display hack or something to do this because anyway I [disfmarker] I [disfmarker] let me consi suggest that's a s not a first order consideration, we have two first order considerations which is what are the uh influences A, A, and B how do they get combined mathematically, how do we display them is an issue, but um [speaker003:] I don't, yeah I just don't think this has been designed to support something like that. [speaker004:] Yeah. 
Yeah, I [disfmarker] I mean, it might soon, if this is gonna be used in a serious way like JavaBayes then it might soon be necessary to uh start modifying it for our purposes. [speaker002:] Right. Yeah, and Um I [disfmarker] that seems like a perfectly feasible thing to get into, but um we have to know what we want first. OK, so why don't you tell us a little bit about decision nodes and what [disfmarker] what the choices might be for these? [speaker004:] So Ah, sorry. I guess that's [speaker003:] You can technically wear that as you're talking. [speaker004:] Yeah, it's right, I guess I can do that. [speaker001:] Darn. [speaker002:] Put it in your, yeah. [speaker004:] I guess this board works fine. So um recall the basic problem which is that um you have a belief net and you have like a lot of different nodes all contributing to one node. Right? So as we discussed specifying this kind of thing is a big pain and it's so will take a long time to write down because for example if these S have three possibilities each and this has three possibilities then you know you have two hundred and forty three possibilities which is already a lot of numbers to write down. So what um helps us in our situation is that these all have values in the same set, right? These are all like saying EV or A, right? So it's not just a generalized situation like I mean basically we wanna just take a combination of [disfmarker] we wanna view each of these as experts ea who are each of them is making a decision based on some factors and we wanna sort of combine their decisions and create you know, um sorta weighted combination. [speaker005:] Hmm. ROVER, the ROVER decision. [speaker004:] The what decision? [speaker005:] ROVER. All of their outputs combined to make a decision. [speaker001:] Hmm. [speaker004:] Yeah. Yeah. So the problem is to specify the uh so the conditional property of this given all those, right? That's the way belief nets are defined, like each node given its parents, right? 
So um that's what we want, we want for example P of um let's call this guy Y and let's call these X one, X two XN, right. So we want probability that Y equals, you know, for example um E given that these guys are I'll just refer to this as like X um hat or something, uh the co like all of them? Given that for example the data says you know, A, V, A, E, or something right? [speaker002:] Yep. [speaker004:] So we would like to do this kind of combination. [speaker002:] Alright, so um Is that uh I [disfmarker] yeah, I just wanna make sure everybody is with us before he goes on. [speaker001:] I think so, yeah. [speaker002:] It's [disfmarker] it's cl e is [disfmarker] is it clear what he wants to compute? [speaker004:] Right. [speaker001:] Mm hmm. [speaker004:] So, right. So Basically um [vocalsound] what we don't wanna do is to for every single combination of E and V and A and every single letter E, s give a number because that's obviously not desirable. [speaker001:] Mm hmm. [speaker004:] What we wanna do is find some principled way of um saying what each of these is and we want it to be a valid probability distribution, so we want it to um add up to one, right? So those are the two things that we uh need. [speaker001:] Hmm. [speaker004:] So what uh I guess, what Jerry suggested earlier was basically that we, you know view these guys as voting and we just take the uh we essentially take um averages, right? So for example here two people have voted for A, one has voted for V, and one has voted for E, so we could say that the probabilities are, you know, probability of being E is one over four, because one person voted for E out of four and similarly, probability of so this is probability of E s and then probability of A given all that is um two out of four and probability of V is one out of four. Right? So that's step [disfmarker] that's the uh yeah that's the [disfmarker] that's the basic uh thing. Now [speaker005:] Um Yeah. [speaker004:] Is that all OK? 
[speaker005:] And that one outcome, that's [speaker002:] What? [speaker005:] it's X [disfmarker] X one voted for A X two voted for V [speaker001:] Mm hmm. [speaker005:] and so forth? [speaker002:] Y right. [speaker005:] Yeah. [speaker002:] Yep. [speaker004:] Yeah. [speaker005:] That's the outcome. [speaker002:] S so this assumes symmetry and equal weights and all this sort of things, which may or may not be a good assumption, [speaker001:] Mm hmm. [speaker002:] so that [speaker001:] Right. [speaker004:] Yeah. Yeah. So step two is um right. So we've assumed equal weights whereas it might turn out that you know, some w be that for example, what the um the actual the uh verbal content of what the person said, like what uh what might be uh somehow more uh important than the uh [speaker003:] X one matters more i than X two or [speaker004:] Right. Sure, so we don't wanna like give them all equal weight so currently we've been giving them all weight one fourth so we could replace this by uh W one, W two, W three, and W four right? [speaker001:] Hmm. [speaker004:] And in order for this to be a valid probability distribution for each um X hat, we just need that the W's sum to one. So they can be for example, you know you [disfmarker] you could have point one, point three, point two, and point four, say. [speaker005:] That's one. [speaker004:] And that'd be one. So that um also seems to work fine. And uh [speaker003:] So I jus just to make sure I understand this, so in this case um we would still compute the average? [speaker004:] You'd compute the weighted average, so the probability of E would be uh [speaker003:] OK, so [disfmarker] so it'd be so in this case the probability that Y equals A would be uh [comment] W one times [disfmarker] [speaker001:] Point three. [speaker003:] or A or [disfmarker] let's see, one full quarter times point one [speaker004:] Not one quarter, so these numbers have been replaced with point one, point three, point two, and point four. 
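A minimal sketch of the voting scheme Bhaskara describes, covering both the equal-weight average (step one) and the weighted version (step two). The function name and the use of the EVA value set are assumptions for illustration:

```python
# Sketch of the expert-vote combination described above: each parent
# "expert" outputs one of the EVA values, and the child's distribution
# over those values is the (weighted) fraction of votes each received.

def vote_distribution(votes, weights=None, values=("E", "V", "A")):
    """Combine expert votes into a probability distribution over `values`.

    votes   -- one value per expert, e.g. ["A", "V", "A", "E"]
    weights -- optional per-expert weights summing to 1 (default: equal)
    """
    if weights is None:
        weights = [1.0 / len(votes)] * len(votes)
    dist = {v: 0.0 for v in values}
    for vote, w in zip(votes, weights):
        dist[vote] += w
    return dist

# Equal weights, the example from the board: two A votes, one V, one E,
# so P(A) = 2/4, P(V) = 1/4, P(E) = 1/4.
print(vote_distribution(["A", "V", "A", "E"]))

# Weighted version with the example weights from the discussion
# (0.1, 0.3, 0.2, 0.4); the distribution stays valid because the
# weights sum to one.
print(vote_distribution(["A", "V", "A", "E"], [0.1, 0.3, 0.2, 0.4]))
```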
[speaker001:] No. [speaker004:] So you can view these as gone. [speaker003:] OK. [speaker004:] Probability of [speaker003:] OK. [speaker004:] Yeah. Yeah. OK. So, alright. So this is uh step two. So the next possibility is that um we've given just a single weight to each expert, right, whereas it might be the case that um in certain situations one of the experts is more uh reliable and in certain situations the other expert is more reliable. So the way this is handled is by what's called a mixture of experts, so what you can have is you augment these diagrams like this so you have a new thing called "H", OK? This is a hidden variable. And what this is is it gets its input from X one, X two, X three, and X four, and what it does is it decides which of the experts is to be trusted in this particular situation. Right? And then these guys all come here. OK. So this is slightly uh more complicated. So what's going on is that um this H node looks at these four values of those guys and it decides in given these values which of these isn't likely to be more reliable or most reliable. So H produces some you know, it produces a number, either one, two, three, or four, in our situation, right? Now this guy he looks at the value of H say it's two, and then he just selects the uh thing. That's all there is to say, I guess about it. [speaker005:] Mm hmm. [speaker004:] Right, so you can have a mixture that Right. [speaker001:] So [disfmarker] so the function of the thing that comes out of H is very different from the function of the other inputs. It's driving how the other four are interpreted. OK. [speaker004:] Yeah. Yeah. [speaker003:] So H passes a vector on to the next node? [speaker004:] It could. [speaker003:] It could? A vector of the weights as the se [speaker004:] Yeah, it could [speaker003:] oh. [speaker004:] Sorry? [speaker001:] Well a vector with three zero's and one one, right? 
[speaker003:] Oh it's basically to tell the bottom node which one of the situations that it's in or which one of the weighting systems [speaker004:] Right, so I mean the way you desc [speaker003:] W I was just, if you wanted to pay attention to more than one you could pass a w a weighting s system though too, couldn't you? OK. [speaker001:] Um Does H have to have another input to tell it alpha, beta, whatever, or is the [disfmarker] that's determined by what the experts are saying, like the type of situ OK. Hmm. OK. OK. I mean It [disfmarker] it just seems that like without that [disfmarker] that outside input that you've got a situation where, you know, like if [disfmarker] if uh X one says no, you know, a low value coming out of X on or i if X one says no then ignore X one, you know, I mean that seems like that'd be weird, [speaker004:] Yeah, well could be things like if X two and X three say yes then i ignore X one also. [speaker001:] right? Oh, OK. OK. Alright, right. [speaker003:] Oh The situations that H has, are they built into the net or OK, so they [disfmarker] they could either be hand coded or learned or OK. [speaker004:] Yeah. [speaker003:] Based on training data, OK. [speaker004:] Yeah. Yes. [speaker003:] So you specify one of these things for every one of those possi possible situations. Oh yeah. [speaker004:] Yeah. Um Well, I mean to learn them we need data, where are we gonna get data? Well I mean we need data with people intentions, right? [speaker001:] Right, right. [speaker004:] Which is slightly tricky. Right. [speaker001:] Uh huh. [speaker004:] Mm hmm. But what's the data about like, are we able to get these nodes from the data? [speaker001:] Like how thrifty the user is, or do we have access to that? Mm hmm. Oh right. Oh good. [speaker004:] Yeah. [speaker001:] OK. Mm hmm. Mm hmm. OK. 
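A minimal sketch of the mixture-of-experts idea just described: a gating function plays the role of the hidden node H, looking at all the expert votes and returning per-expert weights. The particular gating rule here is a hand-coded stand-in for what would, as discussed, be learned from data:

```python
# Sketch of the mixture-of-experts combination: H inspects the experts'
# votes and produces situation-dependent weights (a "vector of the
# weights", in the terms used above), which then gate the vote average.

def gate(votes):
    """Hypothetical gating rule standing in for the hidden node H.

    Illustrates the kind of rule mentioned in the discussion, e.g.
    "if X two and X three agree, trust them and down-weight the others".
    """
    if votes[1] == votes[2]:
        return [0.1, 0.4, 0.4, 0.1]
    return [0.25, 0.25, 0.25, 0.25]

def mixture_of_experts(votes, values=("E", "V", "A")):
    """Weighted vote average, with weights chosen per-situation by gate()."""
    weights = gate(votes)
    dist = {v: 0.0 for v in values}
    for vote, w in zip(votes, weights):
        dist[vote] += w
    return dist

# X two and X three agree on V, so V is strongly favored here:
print(mixture_of_experts(["A", "V", "V", "E"]))
```

A "hard" H, which outputs a single expert index, is the special case where the gate returns a weight vector with three zeros and one one, as noted in the exchange above.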
[speaker004:] Yeah, but that's my question, like how do we [disfmarker] I mean, how do we have data about something like um um endpoint sub E, or endpoint sub uh [pause] you know s S? [speaker003:] Well, basically you would say, based on [disfmarker] in this dialogue that we have which one of the things that they said eh whether it was the entity relations or whatever was the thing that determined what mode it was, [speaker004:] Mmm. Mmm. [speaker003:] right? [speaker004:] So this is what we wanna learn. Yep. Right. Hmm. Yeah. I don't think, well you have a [disfmarker] can you bring up the function thing? Um w where is the thing that allows you to sort of [speaker003:] That's on the added variable, isn't it? [speaker004:] Oh function properties, is that it? Hmm, I guess not. Yeah, that's [speaker001:] No. [speaker004:] Right. OK. And um it so e either it'll allow us to do everything which I think is unlikely, I think more likely it'll allow us to do very few of these things and in that case we'll have to um just write up little things that allow you to um create such CPT's on your own in the JavaBayes format. Yeah. Yeah. Yeah, I was assuming that's what we'd always do because yeah I was assuming that's what we'd always do, it's Right. Yeah. [speaker003:] Ah. Well in terms of JavaBayes I think it's basically what you see is what you get in I don't yeah, I would be surprised if it supports anything more than what we have right here. [speaker001:] So Yeah. Yeah. By the way um uh just talking about uh about that general end of things uh is there gonna be data soon from what people say when they're interacting with the system and so on? Like, I mean, what kind of questions are being given [disfmarker] being asked? Cuz [disfmarker] OK. Yeah yeah. OK. OK. Fey, you mean. OK. OK. O OK. OK. 
I'm just wondering, because in terms of, you know, I mean uh w the figure [disfmarker] I was thinking about this figure that we talked about, fifty constructions or whatever that's uh that's a whole lot of constructions and um you know, I mean one might be f fairly pleased with getting a really good analysis of five maybe ten in a summer so, I mean I know we're going for sort of a rough and ready. Mm hmm. Mm hmm. OK. OK. I mean, I [disfmarker] I [disfmarker] I [disfmarker] I was [disfmarker] uh I was talking about the, you know, if you wanted to do it really in detail and we don't really need all the detail for what we're doing right now but anyway in terms of just narrowing that task you know which fifty do I do, I wanna see what people are using, so Well, it will inspire me. Right, sure sure. Right. Yeah, sure. Sure. Yeah. OK. Touche. Good enough. [speaker001:] OK, we're going. [speaker004:] Damn. [speaker003:] And uh Hans uh, Hans Guenter will be here, um, I think by next [disfmarker] next Tuesday or so. [speaker004:] Mm hmm. [speaker002:] Oh, OK. [speaker003:] So he's [disfmarker] he's going to be here for about three weeks, [speaker002:] Oh! That's nice. [speaker001:] Just for a visit? [speaker003:] and, uh [disfmarker] Uh, we'll see. We might [disfmarker] might end up with some longer collaboration or something. [speaker001:] Huh. [speaker003:] So he's gonna look in on everything we're doing [speaker001:] Cool. [speaker004:] Mm hmm. [speaker003:] and give us his [disfmarker] his thoughts. And so it'll be another [disfmarker] another good person looking at things. [speaker002:] Oh. Hmm. [speaker005:] Th that's his spectral subtraction group? Is that right? [speaker003:] Yeah, [speaker005:] Oh, OK. [speaker003:] yeah. [speaker005:] So I guess I should probably talk to him a bit too? [speaker003:] Oh, yeah. Yeah. Yeah. No, he'll be around for three weeks. He's, uh, um, very, very, easygoing, easy to talk to, and, uh, very interested in everything. 
[speaker001:] Really nice guy. [speaker003:] Yeah, yeah. [speaker002:] Yeah, we met him in Amsterdam. [speaker003:] Yeah, yeah, he's been here before. I mean, he's [disfmarker] he's [disfmarker] he's [disfmarker] he's [disfmarker] [speaker002:] Oh, OK. I haven't noticed him. [speaker001:] Wh Back when I was a grad student he was here for a, uh, uh [disfmarker] a year or [comment] n six months. [speaker003:] N nine months. [speaker001:] Something like that. [speaker003:] Something like that. [speaker001:] Yeah. [speaker003:] Yeah. Yeah. He's [disfmarker] he's done a couple stays here. [speaker002:] Hmm. [speaker003:] Yeah. [speaker001:] So, um, [vocalsound] [comment] I guess we got lots to catch up on. And we haven't met for a couple of weeks. We didn't meet last week, Morgan. Um, I went around and talked to everybody, and it seemed like they [disfmarker] they had some new results but rather than them coming up and telling me I figured we should just wait a week and they can tell both [disfmarker] you know, all of us. So, um, why don't we [disfmarker] why don't we start with you, Dave, [speaker005:] Oh, OK. [speaker001:] and then, um, we can go on. So. [speaker005:] So, um, since we're looking at putting this, um [disfmarker] mean log m magnitude spectral subtraction, um, into the SmartKom system, I I did a test seeing if, um, it would work using past only [comment] and plus the present to calculate the mean. So, I did a test, um, [vocalsound] where I used twelve seconds from the past and the present frame to, um, calculate the mean. And [disfmarker] [speaker001:] Twelve seconds [disfmarker] Twelve [disfmarker] twelve seconds back from the current [pause] frame, [speaker005:] Uh [disfmarker] [speaker001:] is that what you mean? [speaker005:] Twelve seconds, um, counting back from the end of the current frame, yeah. [speaker001:] OK, [speaker005:] So it was, um, twen I think it was twenty one frames [speaker001:] OK. 
[speaker005:] and that worked out to about twelve seconds. [speaker001:] Mm hmm. [speaker005:] And compared to, um, do using a twelve second centered window, I think there was a drop in performance but it was just a slight drop. [speaker003:] Mm hmm. [speaker001:] Hmm! [speaker005:] Is [disfmarker] is that right? [speaker003:] Um, yeah, I mean, it was pretty [disfmarker] it was pretty tiny. [speaker005:] Uh huh. [speaker003:] Yeah. [speaker005:] So that was encouraging. And, um, that [disfmarker] that [disfmarker] um, that's encouraging for [disfmarker] for the idea of using it in an interactive system like And, um, another issue I'm [disfmarker] I'm thinking about is in the SmartKom system. So say twe twelve seconds in the earlier test seemed like a good length of time, but what happens if you have less than twelve seconds? And, um [disfmarker] So I w bef before, um [disfmarker] Back in May, I did some experiments using, say, two seconds, or four seconds, or six seconds. In those I trained the models using mean subtraction with the means calculated over two seconds, or four seconds, or six seconds. And, um, here, I was curious, what if I trained the models using twelve seconds but I f I gave it a situation where the test set I was [disfmarker] subtracted using two seconds, or four seconds, or six seconds. And, um [disfmarker] So I did that for about three different conditions. And, um [disfmarker] I mean, I th I think it was, um, four se I think [disfmarker] I think it was, um, something like four seconds and, um, six seconds, and eight seconds. Something like that. And it seems like it [disfmarker] it [disfmarker] it hurts compared to if you actually train the models [comment] using th that same length of time but it [disfmarker] it doesn't hurt that much. Um, u usually less than point five percent, although I think I did see one where it was a point eight percent or so rise in word error rate. 
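The past-only mean subtraction Dave describes — subtracting, from each frame's log magnitude spectrum, the mean over a trailing window ending at the current frame — can be sketched as follows. This simplified version measures the window in frames rather than seconds, and simply uses whatever past is available when fewer frames than the window exist:

```python
import math

def mean_log_subtract(frames, window):
    """frames: list of per-frame magnitude spectra (lists of positive floats).
    Returns log-magnitude frames with a trailing-window mean removed."""
    logs = [[math.log(v) for v in frame] for frame in frames]
    out = []
    for t, frame in enumerate(logs):
        start = max(0, t - window + 1)   # counting back from the current frame
        past = logs[start:t + 1]         # past frames plus the present one
        means = [sum(col) / len(past) for col in zip(*past)]
        out.append([v - m for v, m in zip(frame, means)])
    return out
```

On a stationary spectrum the subtraction removes everything, which is the intended behavior: only changes relative to the recent mean survive.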
But this is, um, w where, um, even if I train on the, uh, model, and mean subtracted it with the same length of time as in the test, it [disfmarker] the word error rate is around, um, ten percent or nine percent. So it doesn't seem like that big a d a difference. [speaker003:] But it [disfmarker] but looking at it the other way, isn't it [disfmarker] what you're saying that it didn't help you to have the longer time for training, if you were going to have a short time for [disfmarker] [speaker005:] That [disfmarker] that's true. Um, [speaker003:] I mean, why would you do it, if you knew that you were going to have short windows in testing. [speaker005:] Wa [speaker001:] Yeah, it seems like for your [disfmarker] I mean, in normal situations you would never get twelve seconds of speech, right? [speaker002:] You need twelve seconds in the past to estimate, right? [speaker001:] I'm not [disfmarker] e u [speaker003:] Yeah. [speaker002:] Or l or you're looking at six sec [disfmarker] seconds in future and six in [disfmarker] [speaker005:] Um, t twelve s N n uh [disfmarker] For the test it's just twelve seconds in the past. [speaker003:] No, total. [speaker002:] No, it's all [disfmarker] Oh, OK. [speaker001:] Is this twelve seconds of [disfmarker] uh, regardless of speech or silence? Or twelve seconds of speech? [speaker005:] Of [disfmarker] of speech. [speaker001:] OK. [speaker002:] Mm hmm. [speaker003:] The other thing, um, which maybe relates a little bit to something else we've talked about in terms of windowing and so on is, that, um, I wonder if you trained with twelve seconds, and then when you were two seconds in you used two seconds, and when you were four seconds in, you used four seconds, and when you were six [disfmarker] and you basically build up to the twelve seconds. So that if you have very long utterances you have the best, [speaker005:] Yeah. [speaker003:] but if you have shorter utterances you use what you can. [speaker005:] Right. 
And that's actually what we're planning to do in [speaker003:] OK. [speaker005:] But [disfmarker] [speaker003:] Yeah. [speaker005:] s so I g So I guess the que the question I was trying to get at with those experiments is, "does it matter what models you use? Does it matter how much time y you use to calculate the mean when you were, um, tra doing the training data?" [speaker003:] Right. But I mean the other thing is that that's [disfmarker] I mean, the other way of looking at this, going back to, uh, mean cepstral subtraction versus RASTA kind of things, is that you could look at mean cepstral subtraction, especially the way you're doing it, uh, as being a kind of filter. And so, the other thing is just to design a filter. You know, basically you're [disfmarker] you're [disfmarker] you're doing a high pass filter or a band pass filter of some sort and [disfmarker] and just design a filter. And then, you know, a filter will have a certain behavior and you loo can look at the start up behavior when you start up with nothing. [speaker005:] Mm hmm. [speaker003:] And [disfmarker] and, you know, it will, uh, if you have an IIR filter for instance, it will, um, uh, not behave in the steady state way that you would like it to behave until you get a long enough period, but, um, uh, by just constraining yourself to have your filter be only a subtraction of the mean, you're kind of, you know, tying your hands behind your back because there's [disfmarker] filters have all sorts of be temporal and spectral behaviors. [speaker005:] Mm hmm. [speaker003:] And the only thing, you know, consistent that we know about is that you want to get rid of the very low frequency component. [speaker005:] Hmm. [speaker002:] But do you really want to calculate the mean? And you neglect all the silence regions [comment] or you just use everything that's twelve seconds, and [disfmarker] [speaker005:] Um, you [disfmarker] do you mean in my tests so far? [speaker002:] Ye yeah. 
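Morgan's filtering view of mean subtraction can be made concrete with a one-pole running-mean remover, one simple high-pass design. The pole value below is an arbitrary illustrative choice; the startup behavior he mentions shows up as the time the internal mean tracker takes to settle after a change in the input level:

```python
def highpass(values, alpha=0.99):
    """Remove a slowly updated running mean from a sequence:
    m[t] = alpha*m[t-1] + (1-alpha)*x[t],  y[t] = x[t] - m[t].
    One such filter would run per log-spectral bin."""
    mean = values[0]                              # crude initialization
    out = []
    for x in values:
        mean = alpha * mean + (1.0 - alpha) * x   # slow mean tracker (IIR)
        out.append(x - mean)
    return out
```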
[speaker005:] Most of the silence has been cut out. [speaker002:] OK. [speaker005:] Just [disfmarker] There's just inter word silences. [speaker002:] Mm hmm. And they are, like, pretty short. [speaker005:] Pretty short. [speaker002:] Shor Yeah, [speaker005:] Yeah. [speaker002:] OK. Yeah. Mm hmm. So you really need a lot of speech to estimate the mean of it. [speaker005:] Well, if I only use six seconds, it still works pretty well. [speaker002:] Yeah. Yeah. Uh huh. [speaker005:] I saw in my test before. I was trying twelve seconds cuz that was the best [pause] in my test before [speaker002:] OK. [speaker005:] and that increasing past twelve seconds didn't seem to help. [speaker002:] Hmm. Huh. [speaker005:] th um, yeah, I guess it's something I need to play with more to decide how to set that up for the SmartKom system. Like, may maybe if I trained on six seconds it would work better when I only had two seconds or four seconds, and [disfmarker] [speaker003:] Yeah. Yeah. And, um [disfmarker] [speaker005:] OK. [speaker003:] Yeah, and again, if you take this filtering perspective and if you essentially have it build up over time. I mean, if you computed means over two and then over four, and over six, essentially what you're getting at is a kind of, uh, ramp up of a filter anyway. And so you may [disfmarker] may just want to think of it as a filter. But, uh, if you do that, then, um, in practice somebody using the SmartKom system, one would think [comment] [disfmarker] if they're using it for a while, it means that their first utterance, instead of, you know, getting, uh, a forty percent error rate reduction, they'll get a [disfmarker] uh, over what, uh, you'd get without this, uh, um, policy, uh, you get thirty percent. And then the second utterance that you give, they get the full [disfmarker] you know, uh, full benefit of it if it's this ongoing thing. [speaker001:] Oh, so you [disfmarker] you cache the utterances? 
That's how you get your, uh [disfmarker] [speaker005:] M [speaker003:] Well, I'm saying in practice, yeah, [speaker001:] Ah. [speaker003:] that's [disfmarker] If somebody's using a system to ask for directions or something, [speaker001:] OK. OK. [speaker003:] you know, they'll say something first. And [disfmarker] and to begin with if it doesn't get them quite right, ma m maybe they'll come back and say, "excuse me?" [speaker001:] Mm hmm. [speaker003:] uh, or some [disfmarker] I mean it should have some policy like that anyway. [speaker001:] Mm hmm. [speaker003:] And [disfmarker] and, uh, uh, in any event they might ask a second question. And it's not like what he's doing doesn't, uh, improve things. It does improve things, just not as much as he would like. And so, uh, there's a higher probability of it making an error, uh, in the first utterance. [speaker001:] What would be really cool is if you could have [disfmarker] uh, this probably [disfmarker] users would never like this [disfmarker] but if you had [disfmarker] could have a system where, [vocalsound] before they began to use it they had to introduce themselves, verbally. [speaker003:] Mm hmm. [speaker001:] You know. [speaker003:] Yeah. [speaker001:] "Hi, my name is so and so, I'm from blah blah blah." And you could use that initial speech to do all these adaptations [speaker005:] Mm hmm. [speaker001:] and [disfmarker] [speaker003:] Right. Oh, the other thing I guess which [disfmarker] which, uh, I don't know much about [disfmarker] as much as I should about the rest of the system but [disfmarker] but, um, couldn't you, uh, if you [disfmarker] if you sort of did a first pass I don't know what kind of, uh, uh, capability we have at the moment for [disfmarker] for doing second passes on [disfmarker] on, uh, uh, some kind of little [disfmarker] small lattice, or a graph, or confusion network, or something. 
But if you did first pass with, um, the [disfmarker] with [disfmarker] either without the mean sub subtraction or with a [disfmarker] a very short time one, and then, um, once you, uh, actually had the whole utterance in, if you did, um, the, uh, uh, longer time version then, based on everything that you had, um, and then at that point only used it to distinguish between, you know, top N, um, possible utterances or something, you [disfmarker] you might [disfmarker] it might not take very much time. I mean, I know in the large vocabulary stu uh, uh, systems, people were evaluating on in the past, some people really pushed everything in to make it in one pass but other people didn't and had multiple passes. And, um, the argument, um, against multiple passes was u u has often been "but we want to this to be r you know [disfmarker] have a nice interactive response". And the counterargument to that which, say, uh, BBN I think had, [comment] was "yeah, but our second responses are [disfmarker] second, uh, passes and third passes are really, really fast". [speaker001:] Mm hmm. [speaker003:] So, um, if [disfmarker] if your second pass takes a millisecond who cares? Um. [speaker005:] S so, um, the [disfmarker] the idea of the second pass would be waiting till you have more recorded speech? Or [disfmarker]? [speaker003:] Yeah, so if it turned out to be a problem, that you didn't have enough speech because you need a longer [disfmarker] longer window to do this processing, then, uh, one tactic is [disfmarker] you know, looking at the larger system and not just at the front end stuff [comment] [disfmarker] is to take in, um, the speech with some simpler mechanism or shorter time mechanism, [speaker005:] Mm hmm. [speaker003:] um, do the best you can, and come up with some al possible alternates of what might have been said. And, uh, either in the form of an N best list or in the form of a lattice, or [disfmarker] or confusion network, or whatever. [speaker005:] Mm hmm. 
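The two-pass scheme being described — a cheap first pass proposes an N-best list, then the more expensive long-window processing rescores only that shortlist — reduces, in skeleton form, to something like this. The hypotheses and scores are invented for illustration:

```python
def rescore_nbest(nbest, second_pass_score):
    """nbest: list of (hypothesis, first_pass_score) pairs.
    Re-rank using the expensive scorer on just the shortlist; since N is
    small, this second pass can be very fast."""
    rescored = [(hyp, second_pass_score(hyp)) for hyp, _ in nbest]
    rescored.sort(key=lambda pair: pair[1], reverse=True)
    return rescored[0][0]

# Made-up example: two close first-pass hypotheses, disambiguated by a
# (hypothetical) scorer that uses the full-utterance mean subtraction.
nbest = [("five oh two", -10.0), ("five oh too", -9.5)]
best = rescore_nbest(nbest, lambda hyp: 0.0 if hyp == "five oh two" else -5.0)
```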
[speaker003:] And then the decoding of that is much, much faster or can be much, much faster if it isn't a big bushy network. And you can decode that now with speech that you've actually processed using this longer time, uh, subtraction. [speaker005:] Mmm. [speaker003:] So I mean, it's [disfmarker] it's common that people do this sort of thing where they do more things that are more complex or require looking over more time, whatever, in some kind of second pass. [speaker005:] Mm hmm. OK. [speaker003:] um, and again, if the second pass is really, really fast [disfmarker] Uh, another one I've heard of is [disfmarker] is in [disfmarker] in connected digit stuff, um, going back and l and through backtrace and finding regions that are considered to be a d a digit, but, uh, which have very low energy. [speaker005:] Mm hmm. OK. [speaker003:] So, uh [disfmarker] I mean, there's lots of things you can do in second passes, at all sorts of levels. Anyway, I'm throwing too many things out. But. [speaker001:] So is that, uh [disfmarker] that it? [speaker005:] I guess that's it. [speaker001:] OK, uh, do you wanna go, Sunil? [speaker002:] Yep. Um, so, the last two weeks was, like [disfmarker] So I've been working on that Wiener filtering. And, uh, found that, uh, s single [disfmarker] like, I just do a s normal Wiener filtering, like the standard method of Wiener filtering. And that doesn't actually give me any improvement over like [disfmarker] I mean, uh, b it actually improves over the baseline but it's not like [disfmarker] it doesn't meet something like fifty percent or something. So, I've been playing with the v [speaker001:] Improves over the base line MFCC system? [speaker002:] Yeah. [speaker001:] Yeah. [speaker002:] Yeah. Yeah. So, um [disfmarker] So that's [disfmarker] The improvement is somewhere around, like, thirty percent over the baseline. [speaker003:] Is that using [disfmarker] in combination with something else? 
[speaker002:] No, [speaker003:] With [disfmarker] with a [disfmarker] [speaker002:] just [disfmarker] just one stage Wiener filter which is a standard Wiener filter. [speaker003:] No, no, but I mean in combination with our on line normalization or with the LDA? [speaker002:] Yeah, yeah, yeah, yeah. So I just plug in the Wiener filtering. [speaker003:] Oh, OK. [speaker002:] I mean, in the s in our system, where [disfmarker] [speaker001:] Oh, OK. [speaker002:] So, I di i di [speaker003:] So, does it g does that mean it gets worse? Or [disfmarker]? [speaker002:] No. It actually improves over the baseline of not having a Wiener filter in the whole system. Like I have an LDA f LDA plus on line normalization, [speaker003:] Yeah? [speaker002:] and then I plug in the Wiener filter in that, so it improves over not having the Wiener filter. So it improves but it [disfmarker] it doesn't take it like be beyond like thirty percent over the baseline. So [disfmarker] [speaker003:] But that's what I'm confused about, cuz I think [disfmarker] I thought that our system was more like forty percent without the Wiener filtering. [speaker002:] No, it's like, uh, [speaker004:] Mmm. [speaker002:] well, these are not [disfmarker] [speaker001:] Is this with the v new VAD? [speaker002:] No, it's the old VAD. So my baseline was, [vocalsound] uh, [vocalsound] nine [disfmarker] This is like [disfmarker] w the baseline is ninety five point six eight, and eighty nine, and [disfmarker] [speaker003:] So I mean, if you can do all these in word errors it's a lot [disfmarker] a lot easier actually. [speaker002:] What was that? Sorry? [speaker003:] If you do all these in word error rates it's a lot easier, right? [speaker002:] Oh, OK, OK, OK. [speaker003:] OK, [speaker002:] Errors, right, I don't have. [speaker003:] cuz then you can figure out the percentages. [speaker002:] It's all accuracies. [speaker003:] Yeah. 
[speaker004:] The baseline is something similar to a w I mean, the t the [disfmarker] the baseline that you are talking about is the MFCC baseline, right? [speaker002:] The t yeah, [speaker004:] Or [disfmarker]? [speaker002:] there are two baselines. OK. So the baseline [disfmarker] One baseline is MFCC baseline [speaker004:] Mm hmm. [speaker002:] that [disfmarker] When I said thirty percent improvement it's like MFCC baseline. [speaker003:] So [disfmarker] so [disfmarker] so what's it start on? The MFCC baseline is [disfmarker] is what? Is at what level? [speaker002:] It's the [disfmarker] it's just the mel frequency and that's it. [speaker003:] No, what's [disfmarker] what's the number? [speaker002:] Uh, so I I don't have that number here. OK, OK, OK, I have it here. Uh, it's the VAD plus the baseline actually. I'm talking about the [disfmarker] the MFCC plus I do a frame dropping on it. So that's like [disfmarker] the word error rate is like four point three. [speaker003:] Four point three. [speaker002:] Like [disfmarker] Ten point seven. [speaker003:] What's ten point seven? [speaker002:] It's a medium misma OK, sorry. There's a well ma well matched, medium mismatched, and a high matched. [speaker003:] Ah. [speaker002:] So I don't have the [disfmarker] like the [disfmarker] [speaker003:] Yeah. OK, four point three, [speaker002:] So [disfmarker] [speaker003:] ten point seven, [speaker002:] And forty forty. [speaker003:] and [disfmarker] [speaker002:] Forty percent is the high mismatch. [speaker003:] OK. [speaker002:] And that becomes like four point three [disfmarker] [speaker003:] Not changed. [speaker002:] Yeah, it's like ten point one. Still the same. And the high mismatch is like eighteen point five. [speaker003:] Eighteen point five. [speaker002:] Five. [speaker003:] And what were you just describing? [speaker002:] Oh, the one is [disfmarker] this one is just the baseline plus the, uh, Wiener filter plugged into it. 
[speaker003:] But where's the, uh, on line normalization and so on? [speaker002:] Oh, OK. So [disfmarker] Sorry. So, with the [disfmarker] with the on line normalization, the performance was, um, ten [disfmarker] OK, so it's like four point three. Uh, and again, that's the ba the ten point, uh, four and twenty point one. That was with on line normalization and LDA. So the h well matched has like literally not changed by adding on line or LDA on it. But the [disfmarker] I mean, even the medium mismatch is pretty much the same. And the high mismatch was improved by twenty percent absolute. [speaker003:] OK, and what kind of number [disfmarker] an and what are we talking about here? Is this TI digits [speaker002:] It's the It it's Italian. [speaker003:] or [disfmarker] [speaker002:] I'm talking about Italian, [speaker003:] Italian? [speaker002:] yeah. [speaker003:] And what did [disfmarker] So, what was the, um, uh, corresponding number, say, for, um, uh, the Alcatel system for instance? [speaker002:] Mmm. [speaker003:] Do you know? [speaker004:] Yeah, so it looks to be, um [disfmarker] [speaker002:] You have it? [speaker004:] Yep, it's three point four, uh, eight point, uh, seven, and, uh, thirteen point seven. [speaker003:] OK. [speaker002:] Yep. [speaker003:] OK. [speaker002:] So [disfmarker] Thanks. [speaker004:] Mm hmm. [speaker003:] OK. [speaker002:] So, uh, this is the single stage Wiener filter, with [disfmarker] The noise estimation was based on first ten frames. [speaker003:] Mm hmm. [speaker002:] Actually I started with [disfmarker] using the VAD to estimate the noise and then I found that it works [disfmarker] it doesn't work for Finnish and Spanish because the VAD endpoints are not good to estimate the noise because it cuts into the speech sometimes, so I end up overestimating the noise and getting a worse result. [speaker003:] Mm hmm. [speaker002:] So it works only for Italian by u for [disfmarker] using a VAD to estimate noise. 
It works for Italian because the VAD was trained on Italian. [speaker003:] Mm hmm. [speaker002:] So, uh [disfmarker] so this was, uh [disfmarker] And so this was giving [disfmarker] um, this [disfmarker] this was like not improving a lot on this baseline of not having the Wiener filter on it. And, so, uh, I ran this stuff with one more stage of Wiener filtering on it but the second time, what I did was I [disfmarker] estimated the new Wiener filter based on the cleaned up speech, and did, uh, smoothing in the frequency to [disfmarker] to reduce the variance [disfmarker] [speaker003:] Mm hmm. [speaker002:] I mean, I have [disfmarker] I've [disfmarker] I've observed there are, like, a lot of bumps in the frequency when I do this Wiener filtering which is more like a musical noise or something. And so by adding another stage of Wiener filtering, the results on the SpeechDat Car was like, um [disfmarker] So, I still don't have the word error rate. I'm sorry about it. But the overall improvement was like fifty six point four six. This was again using ten frames of noise estimate and two stage of Wiener filtering. And the rest is like the LDA plu and the on line normalization all remaining the same. Uh, so this was, like, compared to, uh, uh [disfmarker] Fifty seven is what you got by using the French Telecom system, right? [speaker004:] No, I don't think so. Is it on Italian? [speaker002:] Y i No, this is over the whole SpeechDat Car. So [disfmarker] [speaker004:] Oh, yeah, fifty seven [disfmarker] [speaker002:] point [disfmarker] [speaker004:] Right. [speaker002:] Yeah, so the new [disfmarker] the new Wiener filtering schema is like [disfmarker] some fifty six point four six which is like one percent still less than what you got using the French Telecom system. [speaker004:] Uh huh. Mm hmm. [speaker003:] But it's a pretty similar number in any event. [speaker002:] It's very similar. [speaker003:] Yeah. 
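A hedged sketch of the two-stage scheme just described: a first Wiener filter cleans the noisy power spectrum, then a second filter is estimated from that cleaned-up spectrum, smoothed across frequency to damp the bumps, and applied to the first stage's output. The gain floor, smoothing width, and test spectra below are illustrative choices, not the actual system's values:

```python
def wiener_gain(signal_est, noise_est, floor=1e-3):
    """Per-bin Wiener gain S/(S+N), floored to avoid zeroed-out bins."""
    return [max(s / (s + n), floor) if (s + n) > 0 else floor
            for s, n in zip(signal_est, noise_est)]

def smooth(gains, half_width=1):
    """Simple moving average across frequency bins."""
    out = []
    for i in range(len(gains)):
        lo, hi = max(0, i - half_width), min(len(gains), i + half_width + 1)
        out.append(sum(gains[lo:hi]) / (hi - lo))
    return out

def two_stage(noisy, noise):
    """noisy, noise: power spectra for one frame."""
    est1 = [max(y - n, 0.0) for y, n in zip(noisy, noise)]   # crude S estimate
    stage1 = [g * y for g, y in zip(wiener_gain(est1, noise), noisy)]
    g2 = smooth(wiener_gain(stage1, noise))     # second filter, from cleaned-up
    return [g * s for g, s in zip(g2, stage1)]  # applied to stage-1 output
```

The key difference being drawn is in the last line: the second filter acts on the cleaned-up spectrum, not back on the original noisy input.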
But again, you're [disfmarker] you're more or less doing what they were doing, right? [speaker002:] It's [disfmarker] it's different in a sense like I'm actually cleaning up the cleaned up spectrum which they're not doing. They're d what they're doing is, they have two stage [disfmarker] stages of estimating the Wiener filter, [speaker003:] Yeah. [speaker002:] but [disfmarker] the final filter, what they do is they [disfmarker] they take it to their time domain by doing an inverse Fourier transform. [speaker003:] Uh huh. [speaker002:] And they filter the original signal using that fil filter, which is like final filter is acting on the input noisy speech rather than on the cleaned up. So this is more like I'm doing Wiener filter twice, but the only thing is that the second time I'm actually smoothing the filter and then cleaning up the cleaned up spectrum first level. [speaker003:] OK. [speaker002:] And so that [disfmarker] that's [disfmarker] that's what the difference is. And actually I tried it on s the original clean [disfmarker] I mean, the original spectrum where, like, I [disfmarker] the second time I estimate the filter but actually clean up the noisy speech rather the c s first [disfmarker] output of the first stage and that doesn't [disfmarker] seems to be a [disfmarker] giving, I mean, that much improvement. I [disfmarker] I didn didn't run it for the whole case. And [disfmarker] and what I t what I tried was, by using the same thing but [disfmarker] Uh, so we actually found that the VAD is very, like, crucial. I mean, just by changing the VAD itself gives you the [disfmarker] a lot of improvement [speaker003:] Mm hmm. [speaker002:] by instead of using the current VAD, if you just take up the VAD output from the channel zero, [comment] when [disfmarker] instead of using channel zero and channel one, because that was the p that was the reason why I was not getting a lot of improvement for estimating [comment] the noise. 
So I just used the channel zero VAD to estimate the noise so that it gives me some reliable mar markers for this noise estimation. [speaker003:] What's a channel zero VAD? I'm [disfmarker] I'm confused about that. [speaker002:] Um, so, it's like [disfmarker] [speaker004:] So it's the close talking microphone. [speaker002:] Yeah, the close talking without [disfmarker] [speaker003:] Oh, oh, oh, oh. [speaker002:] So because the channel zero and channel one are like the same speech, but only w I mean, the same endpoints. But the only thing is that the speech is very noisy for channel one, so you can actually use the output of the channel zero for channel one for the VAD. I mean, that's like a cheating method. [speaker003:] Right. I mean, so a are they going to pro What are they doing to do, do we know yet? about [disfmarker] as far as what they're [disfmarker] what the rules are going to be and what we can use? [speaker004:] Yeah, so actually I received a [disfmarker] a new document, describing this. [speaker002:] Yeah, that's [disfmarker] [speaker004:] And what they did finally is to, mmm, uh, not to align the utterances but to perform recognition, um, only on the close talking microphone, and to take the result of the recognition to get the boundaries uh, of speech. [speaker002:] Which is the channel zero. [speaker003:] So it's not like that's being done in one place or one time. [speaker004:] And [disfmarker] [speaker003:] That's [disfmarker] that's just a rule and we'd [disfmarker] you [disfmarker] you were permitted to do that. Is [disfmarker] is that it? [speaker004:] Uh, I think they will send, um, files but we [disfmarker] we don't [disfmarker] Well, apparently [disfmarker] [speaker003:] Oh, so they will send files so everybody will have the same boundaries to work with? [speaker004:] Yeah. Yeah. [speaker002:] But actually their alignment actually is not seems to be improving in like on all cases. [speaker003:] OK. 
[speaker004:] Oh, i Yeah, so what happened here is that, um, the overall improvement that they have with this method [disfmarker] So [disfmarker] Well, to be more precise, what they have is, they have these alignments and then they drop the beginning silence and [disfmarker] and the end silence but they keep, uh, two hundred milliseconds before speech and two hundred after speech. And they keep the speech pauses also. Um, and the overall improvement over the MFCC baseline So, when they just, uh, add this frame dropping in addition it's r uh, forty percent, right? [speaker003:] Mm hmm. [speaker004:] Fourteen percent, I mean. [speaker003:] Mm hmm. [speaker002:] Yeah, [speaker004:] Um, [speaker002:] which is [disfmarker] [speaker004:] which is, um, t which is the overall improvement. But in some cases it doesn't improve at all. Like, uh, y do you remember which case? [speaker003:] Mm hmm. [speaker002:] It gives like negative [disfmarker] Well, in [disfmarker] in like some Italian and TI digits, [speaker004:] Yeah, some [@ @]. [speaker002:] right? [speaker004:] Right. [speaker002:] Yeah. So by using the endpointed speech, actually it's worse than the baseline in some instances, [speaker004:] Mmm. [speaker002:] which could be due to the word pattern. [speaker004:] Yeah. [speaker003:] Yeah, [speaker004:] And [disfmarker] Yeah, the other thing also is that fourteen percent is less than what you obtain using a real VAD. [speaker002:] Yeah, our neural net [disfmarker] [speaker004:] So with without cheating like this. [speaker002:] Yeah, yeah. [speaker004:] So [disfmarker] Uh [disfmarker] So I think this shows that there is still work [disfmarker] Uh, well, working on the VAD is still [disfmarker] still important I think. [speaker003:] Yeah, c [speaker004:] Uh [disfmarker] [speaker001:] Can I ask just a [disfmarker] a high level question? Can you just say like one or two sentences about Wiener filtering and why [disfmarker] why are people doing that? [speaker002:] Hmm. 
[speaker001:] What's [disfmarker] what's the deal with that? [speaker002:] OK, so the Wiener filter, it's [disfmarker] it's like [disfmarker] it's like you try to minimize [disfmarker] I mean, so the basic principle of Wiener filter is like you try to minimize the, uh, d uh, difference between the noisy signal and the clean signal if you have two channels. Like let's say you have a clean t signal and you have an additional channel where you know what is the noisy signal. And then you try to minimize the error between these two. [speaker001:] Mm hmm. Mm hmm. [speaker002:] So that's the basic principle. And you get [disfmarker] you can do that [disfmarker] I mean, if [disfmarker] if you have only a c noisy signal, at a level which you, you w try to estimate the noise from the w assuming that the first few frames are noise or if you have a w voice activity detector, uh, you estimate the noise spectrum. [speaker001:] Mm hmm. [speaker002:] And then you [disfmarker] [speaker001:] Do you assume the noise is the same? [speaker002:] Yeah. in [disfmarker] yeah, after the speech starts. [speaker001:] Uh huh. [speaker002:] So [disfmarker] but that's not the case in, uh, many [disfmarker] many of our cases but it works reasonably well. [speaker001:] I see. [speaker002:] And [disfmarker] and then you What you do is you, uh b fff. So again, I can write down some of these eq Oh, OK. Yeah. And then you do this [disfmarker] uh, this is the transfer function of the Wiener filter, so "SF" is a clean speech spectrum, power spectrum [speaker001:] Mm hmm. [speaker002:] And "N" is the noisy power spectrum. And so this is the transfer function. [speaker003:] Right actually, I guess [disfmarker] [speaker002:] And, Yeah. [speaker003:] Yeah. [speaker002:] And then you multiply your noisy power spectrum with this. You get an estimate of the clean power spectrum. [speaker001:] I see. OK. 
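The single-stage filter just written out can be sketched directly: the noise power spectrum N is estimated from the first few frames (assumed to be noise only), the clean speech power spectrum S is estimated as the noisy spectrum minus N, and each frame is multiplied by the gain S/(S+N). The number of noise frames and the test data are illustrative; the clamp at zero in the subtraction is what produces zeroed-out frequency bins when the noise estimate is off:

```python
def estimate_noise(frames, num_noise_frames=10):
    """Average the first frames' power spectra, assumed to be noise only."""
    head = frames[:num_noise_frames]
    return [sum(col) / len(head) for col in zip(*head)]

def wiener_filter(frames, noise):
    """Apply the gain S/(S+N) per bin, with S estimated as (noisy - N)."""
    cleaned = []
    for frame in frames:
        s_est = [max(y - n, 0.0) for y, n in zip(frame, noise)]  # can hit zero
        gain = [s / (s + n) if (s + n) > 0 else 0.0
                for s, n in zip(s_est, noise)]
        cleaned.append([g * y for g, y in zip(gain, frame)])
    return cleaned
```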
[speaker002:] So [disfmarker] but the thing is that you have to estimate the SF from the noisy spectrum, what you have. So you estimate the NF from the initial noise portions and then you subtract that from the current noisy spectrum to get an estimate of the SF. So sometimes that becomes zero because you do you don't have a true estimate of the noise. So the f filter will have like sometimes zeros in it [speaker001:] Mm hmm. [speaker002:] because some frequency values will be zeroed out because of that. And that creates a lot of discontinuities across the spectrum because [@ @] the filter. So, uh, so [disfmarker] that's what [disfmarker] that was just the first stage of Wiener filtering that I tried. [speaker001:] So is this, um, basically s uh, similar to just regular spectral subtraction? [speaker002:] It [disfmarker] [speaker003:] It's all pretty related, yeah. [speaker002:] Yeah. [speaker003:] It's [disfmarker] it's [disfmarker] there's a di there's a whole class of techniques where you try in some sense to minimize the noise. [speaker001:] Uh huh. [speaker003:] And it's typically a mean square sense, uh [disfmarker] uh [disfmarker] uh, i in [disfmarker] in [disfmarker] in some way. And, uh [disfmarker] uh, spectral subtraction is [disfmarker] is, uh [disfmarker] uh, one approach to it. [speaker001:] Do people use the Wiener filtering in combination with the spectral subtraction typically, or is i are they sort of competing techniques? [speaker002:] Not seen. They are very s similar techniques. [speaker001:] Yeah. [speaker002:] So it's like I haven't seen anybody using s Wiener filter with spectral subtraction. [speaker001:] O oh, OK. [speaker004:] Mm hmm. [speaker001:] I see, I see. [speaker003:] I mean, in the long run you're doing the same thing but y but there you make different approximations, [speaker001:] Mm hmm. [speaker002:] Yeah. 
[speaker003:] and [disfmarker] in spectral subtraction, for instance, there's a [disfmarker] a [disfmarker] an estimation factor. [speaker001:] Mmm. [speaker003:] You sometimes will figure out what the noise is and you'll multiply that noise spectrum times some constant and subtract that rather than [disfmarker] and sometimes people [disfmarker] even though this really should be in the power domain, sometimes people s work in the magnitude domain because it [disfmarker] it [disfmarker] it works better. [speaker001:] Mm hmm. [speaker003:] And, uh, uh, you know. [speaker001:] So why did you choose, uh, Wiener filtering over some other [disfmarker] one of these other techniques? [speaker002:] Uh, the reason was, like, we had this choice of using spectral subtraction, Wiener filtering, and there was one more thing which I which I'm trying, is this sub space approach. So, Stephane is working on spectral subtraction. [speaker001:] Oh, OK. [speaker002:] So I picked up [disfmarker] [speaker001:] So you're sort of trying [@ @] them all. [speaker002:] Y Yeah, we just wanted to have a few noise production [disfmarker] compensation techniques [speaker001:] Ah, I see. [speaker002:] and then pick some from that [disfmarker] [speaker001:] Oh, OK. [speaker003:] I m I mean [disfmarker] [speaker001:] Mm hmm. [speaker002:] pick one. [speaker003:] yeah, I mean, there's Car Carmen's working on another, on the vector Taylor series. [speaker002:] VA Yeah, VAD. w Yeah. [speaker003:] So they were just kind of trying to cover a bunch of different things with this task and see, you know, what are [disfmarker] what are the issues for each of them. [speaker001:] Ah, OK. [speaker002:] Yeah. [speaker001:] That makes sense. Yeah. Mm hmm. Mm hmm. [speaker003:] Um. [speaker001:] Cool, thanks. [speaker002:] So [disfmarker] [speaker003:] Yeah. 
[speaker002:] so one of [disfmarker] one of the things that I tried, like I said, was to remove those zeros in the fri filter by doing some smoothing of the filter. [speaker001:] Mm hmm. [speaker002:] Like, you estimate the edge of square and then you do a f smoothing across the frequency so that those zeros get, like, flattened out. [speaker001:] Mm hmm. [speaker002:] And that doesn't seems to be improving by trying it on the first time. So what I did was like I p did this and then you [disfmarker] I plugged in the [disfmarker] one more [disfmarker] the same thing but with the smoothed filter the second time. And that seems to be working. [speaker001:] Mm hmm. Mm hmm. [speaker002:] So that's where I got like fifty six point five percent improvement on SpeechDat Car with that. And [disfmarker] So the other thing what I tried was I used still the ten frames of noise estimate but I used this channel zero VAD to drop the frames. So I'm not [disfmarker] still not estimating. And that has taken the performance to like sixty seven percent in SpeechDat Car, which is [disfmarker] which [disfmarker] which like sort of shows that by using a proper VAD you can just take it to further, better levels. And [disfmarker] So. [speaker001:] So that's sort of like, you know, best case performance? [speaker002:] Yeah, so far I've seen sixty seven [disfmarker] I mean, no, I haven't seen s like sixty seven percent. And, uh, using the channel zero VAD to estimate the noise also seems to be improving but I don't have the results for all the cases with that. So I used channel zero VAD to estimate noise as a lesser 2 x frame, which is like, [vocalsound] everywhere I use the channel zero VAD. And that seems to be the best combination, uh, rather than using a few frames to estimate and then drop a channel. [speaker003:] So I'm [disfmarker] I'm still a little confused. Is that channel zero information going to be accessible during this test. [speaker002:] Nnn, no. 
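[Annotation: the frequency-axis smoothing described above, which flattens out the zeros left in the filter, might look like this simple moving average over neighbouring bins. The window width is an assumption; the meeting does not give one.]

```python
def smooth_across_frequency(gains, half_width=1):
    # Moving average of the filter gain over neighbouring frequency bins,
    # so isolated zeros from over-subtraction get flattened out instead of
    # punching holes in the spectrum.
    n = len(gains)
    smoothed = []
    for k in range(n):
        lo, hi = max(0, k - half_width), min(n, k + half_width + 1)
        smoothed.append(sum(gains[lo:hi]) / (hi - lo))
    return smoothed
```

In the two-stage variant described, the smoothed filter from a first pass would simply be computed and applied again in a second pass.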
This is just to test whether we can really improve by using a better VAD. [speaker003:] Mm hmm. Mm hmm. [speaker002:] So, I mean [disfmarker] So this is like the noise compensation f is fixed [speaker004:] Mm hmm. [speaker002:] but you make a better decision on the endpoints. That's, like [disfmarker] seems to be [disfmarker] [speaker003:] Mm hmm. [speaker002:] so we c so I mean, which [disfmarker] which means, like, by using this technique what we improve just the VAD [speaker003:] Yes. [speaker002:] we can just take the performance by another ten percent or better. [speaker003:] OK. [speaker002:] So, that [disfmarker] that was just the, uh, reason for doing that experiment. And, w um [disfmarker] Yeah, but this [disfmarker] all these things, I have to still try it on the TI digits, which is like I'm just running. And there seems to be not improving a [disfmarker] a lot on the TI digits, so I'm like investigating that, why it's not. And, um, um [disfmarker] Well after that. So, uh [disfmarker] so the other [disfmarker] the other thing is [disfmarker] like I've been [disfmarker] I'm doing all this stuff on the power spectrum. So [disfmarker] Tried this stuff on the mel as well [disfmarker] mel and the magnitude, and mel magnitude, and all those things. But it seems to be the power spectrum seems to be getting the best result. So, one of [disfmarker] one of reasons I thought like doing the averaging, after the filtering using the mel filter bank, that seems to be maybe helping rather than trying it on the mel filter ba filtered outputs. [speaker003:] Mm hmm. Mm hmm. Ma [speaker002:] So just th [speaker003:] Makes sense. [speaker002:] Yeah, th that's [disfmarker] that's the only thing that I could think of why [disfmarker] why it's giving improvement on the mel. And, yep. So that's it. [speaker003:] Uh, how about the subspace stuff? 
[speaker002:] Subspace, [comment] I'm [disfmarker] I'm like [disfmarker] that's still in [disfmarker] a little bit in the back burner because I've been p putting a lot effort on this to make it work, on tuning things and other stuff. [speaker003:] OK. [speaker002:] So I was like going parallely but not much of improvement. I'm just [disfmarker] have some skeletons ready, need some more time for it. [speaker003:] OK. [speaker002:] Mmm. [speaker001:] Tha that it? [speaker002:] Yep. Yep. [speaker001:] Cool. Do you wanna go, Stephane? [speaker004:] Uh, yeah. So, [vocalsound] I've been, uh, working still on the spectral subtraction. Um, So to r to remind you [vocalsound] [vocalsound] a little bit of [disfmarker] of what I did before, is just [vocalsound] to apply some spectral subtraction with an overestimation factor also to get, um, an estimate of the noise, uh, spectrum, and subtract this estimation of the noise spectrum from the, uh, signal spectrum, [comment] but subtracting more when the SNR is [disfmarker] is, uh, low, which is a technique that it's often used. [speaker001:] "Subtracting more", meaning [disfmarker]? [speaker004:] So you overestimate the noise spectrum. You multiply the noise spectrum by a factor, uh, which depends on the SNR. [speaker001:] Oh, OK. [speaker004:] So, above twenty DB, [speaker001:] I see. [speaker004:] it's one, so you just subtract the noise. [speaker001:] Mm hmm. [speaker004:] And then it's b Generally [disfmarker] Well, I use, actually, a linear, uh, function of the SNR, [speaker001:] Mm hmm. [speaker004:] which is bounded to, like, two or three, [comment] when the SNR is below zero DB. [speaker001:] Mm hmm. Mm hmm. [speaker004:] Um, doing just this, uh, either on the FFT bins or on the mel bands, um, t doesn't yield any improvement [speaker003:] Oh! Um, uh, what are you doing with negative, uh, powers? [speaker004:] o Yeah. 
So there is also a threshold, of course, because after subtraction you can have negative energies, [speaker001:] Mm hmm. [speaker004:] and [disfmarker] So what I [disfmarker] I just do is to put, uh [disfmarker] to [disfmarker] to add [disfmarker] to put the threshold first and then to add a small amount of noise, which right now is speech shaped. Um [disfmarker] [speaker001:] Speech shaped? [speaker004:] Yeah, so it's [disfmarker] a it has the overall [disfmarker] overall energy, uh [disfmarker] pow it has the overall power spectrum of speech. So with a bump around one kilohertz. [speaker001:] So when y when you talk about there being something less than zero after subtracting the noise, is that at a particular frequency bin? [speaker004:] i Uh huh. Yeah. There can be frequency bins with negative values. [speaker001:] OK. And so when you say you're adding something that has the overall shape of speech, is that in a [disfmarker] in a particular frequency bin? Or you're adding something across all the frequencies when you get these negatives? [speaker004:] For each frequencies I a I'm adding some, uh, noise, but the a the amount of [disfmarker] the amount of noise I add is not the same for all the frequency bins. [speaker001:] Ah! OK. I gotcha. Right. [speaker004:] Uh. Right now I don't think if it makes sense to add something that's speech shaped, because then you have silence portion that have some spectra similar to the sp the overall speech spectra. [speaker001:] Mm hmm. [speaker004:] But [disfmarker] Yeah. So this is something I can still work on, [speaker001:] So what does that mean? [speaker004:] but [disfmarker] Hmm. [speaker001:] I'm trying to understand what it means when you do the spectral subtraction and you get a negative. It means that at that particular frequency range you subtracted more energy than there was actually [disfmarker] [speaker004:] That means that [disfmarker] Mm hmm. Yeah. 
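[Annotation: putting the description above together — an SNR-dependent overestimation factor (1 above 20 dB, ramping linearly to a bound of two or three below 0 dB), subtraction, a threshold on negative energies, and a small shaped noise floor. The concrete bound, floor level, and shape vector here are assumptions for illustration; the shape would be speech-shaped, with a bump around one kilohertz, in the version described.]

```python
def overestimation_factor(snr_db, max_factor=2.5):
    # 1.0 above 20 dB SNR, max_factor below 0 dB, linear in between
    if snr_db >= 20.0:
        return 1.0
    if snr_db <= 0.0:
        return max_factor
    return max_factor + (1.0 - max_factor) * snr_db / 20.0

def spectral_subtract(noisy_power, noise_power, snr_db,
                      floor_shape=None, floor_level=0.01):
    # Subtract the over-estimated noise, clamp negatives to zero, then add
    # a small noise floor (speech-shaped in the version described here;
    # flat if floor_shape is None).
    alpha = overestimation_factor(snr_db)
    if floor_shape is None:
        floor_shape = [1.0] * len(noisy_power)
    out = []
    for y, n, f in zip(noisy_power, noise_power, floor_shape):
        s = max(y - alpha * n, 0.0)      # threshold negative energies
        out.append(s + floor_level * f)  # add the shaped floor
    return out
```
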
So [disfmarker] so yeah, you have an [disfmarker] an estimation of the noise spectrum, but sometimes, of course, it's [disfmarker] as the noise is not perfectly stationary, sometimes this estimation can be, uh, too small, so you don't subtract enough. But sometimes it can be too large also. If [disfmarker] if the noise, uh, energy in this particular frequency band drops for some reason. [speaker001:] Mm hmm. Mm hmm. [speaker004:] Mmm. [speaker001:] So in [disfmarker] in an ideal word i world [comment] if the noise were always the same, then, when you subtracted it the worst that i you would get would be a zero. I mean, the lowest you would get would be a zero, [speaker003:] Right. [speaker001:] cuz i if there was no other energy there you're just subtracting exactly the noise. [speaker004:] Mm hmm, yeah. [speaker003:] Yep, there's all [disfmarker] there's all sorts of, uh, deviations from the ideal here. I mean, for instance, you're [disfmarker] you're talking about the signal and noise, um, at a particular point. And even if something is sort of stationary in ster terms of statistics, there's no guarantee that any particular instantiation or piece of it is exactly a particular number or bounded by a particular range. [speaker004:] Mm hmm. [speaker003:] So, you're figuring out from some chunk of [disfmarker] of [disfmarker] of the signal what you think the noise is. Then you're subtracting that from another chunk, [speaker001:] Mm hmm. [speaker003:] and there's absolutely no reason to think that you'd know that it wouldn't, uh, be negative in some places. [speaker004:] Mm hmm. Hmm. [speaker003:] Uh, on the other hand that just means that in some sense you've made a mistake because you certainly have stra subtracted a bigger number than is due to the noise. [speaker001:] Mm hmm. [speaker003:] Um [disfmarker] Also, we speak [disfmarker] the whole [disfmarker] where all this stuff comes from is from an assumption that signal and noise are uncorrelated. 
And that certainly makes sense in s in [disfmarker] in a statistical interpretation, that, you know, over, um, all possible realizations that they're uncorrelated [speaker001:] Mm hmm. [speaker003:] or assuming, uh, ergodicity that i that i um, across time, uh, it's uncorrelated. But if you just look at [disfmarker] a quarter second, uh, and you cross multiply the two things, uh, you could very well, uh, end up with something that sums to something that's not zero. So in fact, the two signals could have some relation to one another. And so there's all sorts of deviations from ideal in this. And [disfmarker] and given all that, you could definitely end up with something that's negative. But if down the road you're making use of something as if it is a power spectrum, um, then it can be bad to have something negative. Now, the other thing I wonder about actually is, what if you left it negative? What happens? I mean, because [disfmarker] [speaker002:] Is that the log? [speaker003:] Um, are you taking the log before you add them up to the mel? [speaker002:] After that. No, after. [speaker003:] Right. So the thing is, I wonder how [disfmarker] if you put your thresholds after that, I wonder how often you would end up with, uh [disfmarker] with negative values. [speaker002:] But you will [disfmarker] But you end up reducing some neighboring frequency bins [disfmarker] [@ @] in the average, right? When you add the negative to the positive value which is the true estimate. [speaker003:] Yeah. But nonetheless, uh, you know, these are [disfmarker] it's another f kind of smoothing, right? that you're doing. [speaker002:] Yeah. [speaker003:] Right. So, you've done your best shot at figuring out what the noise should be, and now i then you've subtracted it off. And then after that, instead of [disfmarker] instead of, uh, uh, leaving it as is and adding things [disfmarker] adding up some neighbors, you artificially push it up. [speaker002:] Hmm. 
[speaker003:] Which is, you know, it's [disfmarker] there's no particular reason that that's the right thing to do either, right? [speaker002:] Yeah, yeah. [speaker003:] So, um, uh, i in fact, what you'd be doing is saying, "well, we're d we're [disfmarker] we're going to definitely diminish the effect of this frequency in this little frequency bin in the [disfmarker] in the overall mel summation". It's just a thought. I d I don't know if it would be [disfmarker] [speaker002:] Yeah. [speaker001:] Sort of the opposite of that would be if [disfmarker] if you find out you're going to get a negative number, you don't do the subtraction for that bin. [speaker002:] Uh huh. That is true. [speaker003:] Nnn, yeah, [speaker004:] Mm hmm. [speaker001:] That would be almost the opposite, [speaker003:] although [disfmarker] [speaker001:] right? Instead of leaving it negative, you don't do it. If your [disfmarker] if your subtraction's going to result in a negative number, you [disfmarker] you don't do subtraction in that. [speaker003:] Yeah, but that means that in a situation where you thought that [disfmarker] that the bin was almost entirely noise, you left it. [speaker001:] Yeah. [speaker002:] We just [disfmarker] [speaker003:] Uh. [speaker001:] Yeah, I'm just saying that's like the opposite. [speaker002:] Yeah. [speaker003:] Yeah. Well, yeah that's [disfmarker] that's the opposite, [speaker001:] Yeah. [speaker004:] Mm hmm. [speaker003:] yeah. [speaker004:] And, yeah, some people also [disfmarker] if it's a negative value they, uh, re compute it using inter interpolation from the edges and bins. [speaker002:] For frames, frequency bins. [speaker003:] Yeah. [speaker004:] Well, there are different things that you can do. [speaker001:] Oh. [speaker003:] People can also, uh, reflect it back up and essentially do a full wave rectification instead of a [disfmarker] instead of half wave. [speaker001:] Oh. 
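[Annotation: the alternatives raised in this exchange for bins that go negative after subtraction — clamp to zero (half-wave), reflect back up (full-wave), leave them negative and threshold later, or skip the subtraction for that bin — can be written in one place. This is a sketch of the options discussed, not any one system's choice; the interpolation-from-neighbouring-bins variant mentioned is omitted.]

```python
def handle_negative_bins(noisy_power, subtracted, mode="halfwave"):
    # `subtracted` = noisy power minus noise estimate, possibly negative
    # in some bins
    out = []
    for y, s in zip(noisy_power, subtracted):
        if s >= 0.0:
            out.append(s)
        elif mode == "halfwave":
            out.append(0.0)   # clamp to zero (the usual threshold)
        elif mode == "fullwave":
            out.append(-s)    # reflect back up: full-wave rectification
        elif mode == "keep":
            out.append(s)     # leave negative; threshold after the mel sum
        elif mode == "skip":
            out.append(y)     # don't subtract in this bin at all
        else:
            raise ValueError(mode)
    return out
```
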
[speaker003:] But it was just a thought that [disfmarker] that it might be something to try. [speaker004:] Mm hmm. Mm hmm. Yep. Well, actually I tried, [vocalsound] something else based on this, um, is to [disfmarker] to put some smoothing, um, because it seems to [disfmarker] to help or it seems to help the Wiener filtering [speaker003:] Mm hmm. [speaker004:] and, mmm [disfmarker] So what I did is, uh, some kind of nonlinear smoothing. Actually I have a recursion that computes [disfmarker] Yeah, let me go back a little bit. Actually, when you do spectral subtraction you can, uh, find this [disfmarker] this equivalent in the s in the spectral domain. You can uh compute, y you can say that d your spectral subtraction is a filter, um, and the gain of this filter is the, um, [vocalsound] signal energy minus what you subtract, divided by the signal energy. And this is a gain that varies over time, and, you know, of course, uh, depending on the s on the noise spectrum and on the speech spectrum. And [disfmarker] what happen actually is that during low SNR values, the gain is close to zero but it varies a lot. Mmm, and this [disfmarker] this is the cause of musical noise and all these [disfmarker] the [disfmarker] [comment] the fact you [disfmarker] we go below zero one frame and then you can have an energy that's above zero. [speaker003:] Mm hmm. [speaker004:] And [disfmarker] Mmm. So the smoothing is [disfmarker] I did a smoothing actually on this gain, uh, trajectory. But it's [disfmarker] the smoothing is nonlinear in the sense that I tried to not smooth if the gain is high, because in this case we know that, uh, the estimate of the gain is correct because we [disfmarker] we are not close to [disfmarker] to [disfmarker] to zero, um, and to do more smoothing if the gain is low. Mmm. Um. Yeah. So, well, basically that's this idea, and it seems to give pretty good results, uh, although I've just [disfmarker] just tested on Italian and Finnish. 
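[Annotation: the nonlinear smoothing of the gain trajectory described above might be sketched as a first-order recursion whose time constant depends on the gain itself: reliable high gains pass through almost untouched, while the noisy low gains that cause musical noise get heavily averaged. The exact alpha mapping is an assumption; the meeting does not specify it.]

```python
def smooth_gain_over_time(gain_track, min_alpha=0.1, max_alpha=0.9):
    # gain_track: per-frame filter gain for one band, values in [0, 1].
    # alpha near max_alpha (heavy smoothing) when the gain is low,
    # near min_alpha (light smoothing) when the gain is high.
    smoothed = []
    state = gain_track[0]
    for g in gain_track:
        alpha = max_alpha - (max_alpha - min_alpha) * g
        state = alpha * state + (1.0 - alpha) * g
        smoothed.append(state)
    return smoothed
```
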
And on Italian it seems [disfmarker] my result seems to be a little bit better than the Wiener filtering, [speaker002:] Mm hmm. [speaker004:] right? [speaker002:] Yeah, the one you showed yesterday. Right? [speaker003:] Yeah. [speaker004:] Uh, I don't know if you have these improvement the detailed improvements for Italian, Finnish, and Spanish there [speaker002:] Fff. [speaker004:] or you have [disfmarker] just have your own. [speaker002:] No, I don't have, for each, I [disfmarker] I just [disfmarker] just have the final number here. [speaker004:] Mm hmm. [speaker003:] So these numbers he was giving before with the four point three, and the ten point one, and so forth, those were Italian, right? [speaker002:] Yeah, yeah, yeah. [speaker003:] Yeah. [speaker004:] Uh [disfmarker] [speaker002:] So [disfmarker] so, no, I actually didn't give you the number which is the final one, [speaker004:] uh, no, we've [disfmarker] [speaker002:] which is, after two stages of Wiener filtering. I mean, that was I just [disfmarker] well, like the overall improvement is like fifty six point five. [speaker003:] Right. [speaker002:] So, [speaker004:] Mm hmm. [speaker002:] I mean, his number is still better than what I got in the two stages of Wiener filtering. [speaker004:] Yeah. [speaker003:] Right. [speaker004:] On Italian. But on Finnish it's a little bit worse, apparently. [speaker002:] Mm hmm. [speaker004:] Um [disfmarker] [speaker003:] But do you have numbers in terms of word error rates on [disfmarker] on Italian? So just so you have some sense of reference? [speaker004:] Yeah. Uh, so, it's, uh, three point, uh, eight. [speaker003:] Uh huh. [speaker004:] Am I right? [speaker002:] Oh, OK. Yeah, right, OK. [speaker004:] And then, uh, d uh, nine point, uh, one. [speaker003:] Mm hmm. [speaker004:] And finally, uh, sixteen point five. [speaker003:] And this is, um, spectral subtraction plus what? [speaker004:] Plus [disfmarker] plus nonlinear smoothing. 
Well, it's [disfmarker] the system [disfmarker] it's exactly the sys the same system as Sunil tried, [speaker003:] On line normalization and LDA? [speaker004:] but [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Yeah. [speaker004:] But instead of double stage Wiener filtering, it's [disfmarker] it's this smoothed spectral subtraction. Um, yeah. [speaker003:] Right. [speaker001:] What is it the, um, France Telecom system uses for [disfmarker] Do they use spectral subtraction, or Wiener filtering, or [disfmarker]? [speaker002:] They use spectral subtraction, right. [speaker004:] For what? [speaker002:] French Telecom. [speaker004:] It [disfmarker] it's Wiener filtering, [speaker002:] Oh, it's [disfmarker] it's Wiener filtering. [speaker004:] am I right? [speaker002:] Sorry. [speaker004:] Well, it's some kind of Wiener filtering [disfmarker] [speaker001:] Oh. [speaker002:] Yeah, filtering. Yeah, it's not exactly Wiener filtering but some variant of Wiener filtering. [speaker004:] Yeah. [speaker001:] I see. [speaker003:] Yeah, [speaker002:] Yeah. [speaker003:] plus, uh, I guess they have some sort of cepstral normalization, as well. [speaker002:] s They have like [disfmarker] [speaker004:] Mm hmm. [speaker002:] yeah, th the [disfmarker] just noise compensation technique is a variant of Wiener filtering, plus they do some [disfmarker] some smoothing techniques on the final filter. The [disfmarker] th they actually do the filtering in the time domain. [speaker001:] Mmm. [speaker004:] Yeah. [speaker002:] So they would take this HF squared back, taking inverse Fourier transform. [speaker001:] Hmm. [speaker002:] And they convolve the time domain signal with that. [speaker001:] Oh, I see. [speaker002:] And they do some smoothing on that final filter, impulse response. [speaker001:] Hmm. [speaker004:] But they also have two [disfmarker] two different smoothing [@ @]. One in the time domain [speaker002:] I mean, I'm [disfmarker] I'm [@ @]. 
[speaker004:] and one in the frequency domain by just taking the first, um, coefficients of the impulse response. [speaker002:] But. [speaker004:] So, basically it's similar. I mean, what you did, it's similar [speaker002:] It's similar in the smoothing [speaker004:] because you have also two [disfmarker] two kind of smoothing. [speaker002:] and [disfmarker] [speaker004:] One in the time domain, [speaker002:] Yeah. [speaker004:] and one in the frequency domain, [speaker002:] Yeah. The frequency domain. [speaker004:] yeah. [speaker001:] Does the smoothing in the time domain help [disfmarker] [speaker004:] Um [disfmarker] [speaker001:] Well, do you get this musical noise stuff with Wiener filtering or is that only with, uh, spectral subtraction? [speaker002:] No, you get it with Wiener filtering also. [speaker004:] Yeah. [speaker001:] Does the smoothing in the time domain help with that? Or some other smoothing? [speaker002:] Oh, no, you still end up with zeros in the s spectrum. [speaker004:] Yeah. [speaker002:] Sometimes. [speaker003:] I mean, it's not clear that these musical noises hurt us in recognition. [speaker001:] Hmm. [speaker003:] We don't know if they do. [speaker001:] Hmm. [speaker002:] Yeah. [speaker003:] I mean, they [disfmarker] they sound bad. But we're not listening to it, usually. [speaker001:] Mm hmm. [speaker002:] Yeah, I know. [speaker004:] Mm hmm. [speaker001:] Hmm. [speaker004:] Uh, actually the [disfmarker] the smoothing that I did [disfmarker] do here reduced the musical noise. Well, it [disfmarker] [speaker002:] Mm hmm. [speaker004:] Mmm. [speaker002:] Yeah, yeah, the [disfmarker] [speaker004:] Well, I cannot [disfmarker] you cannot hear beca well, actually what I d did not say is that this is not in the FFT bins. This is in the mel frequency bands. 
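[Annotation: the frequency-domain smoothing "by just taking the first coefficients of the impulse response" can be sketched as: inverse-DFT the filter gains, zero all but the first few taps, and DFT back — truncating the impulse response smooths the frequency response. A naive O(N^2) DFT, purely for illustration; this is a reconstruction of the idea, not the actual France Telecom code.]

```python
import cmath

def truncate_impulse_response(gains, keep):
    n = len(gains)
    # inverse DFT: filter gains -> impulse response
    h = [sum(gains[k] * cmath.exp(2j * cmath.pi * k * t / n)
             for k in range(n)) / n for t in range(n)]
    # keep only the first `keep` taps, zero the rest
    h = [h[t] if t < keep else 0.0 for t in range(n)]
    # forward DFT back to the smoothed gains
    return [abs(sum(h[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))) for k in range(n)]
```
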
Um [disfmarker] So, it could be seen as a f a [disfmarker] a smoothing in the frequency domain because I used, in ad mel bands in addition and then the other phase of smoothing in the time domain. Mmm. But, when you look at the spectrogram, if you don't have an any smoothing, you clearly see, like [disfmarker] in silence portions, and at the beginning and end of speech, you see spots of high energy randomly distributed over the [disfmarker] the spectrogram. [speaker001:] Mm hmm. Mm hmm. [speaker004:] Um [disfmarker] [speaker001:] That's the musical noise? [speaker004:] Which is musical noise, [speaker001:] Mm hmm. [speaker004:] yeah, if [disfmarker] if it [disfmarker] If you listen to it [disfmarker] uh, if you do this in the FFT bins, then you have spots of energy randomly distributing. And if you f if you re synthesize these spot sounds as, like, sounds, [speaker001:] Mm hmm. [speaker004:] uh [disfmarker] And [disfmarker] [speaker003:] Well, none of these systems, by the way, have [disfmarker] I mean, y you both are [disfmarker] are working with, um, our system that does not have the neural net, right? [speaker004:] Mm hmm. [speaker002:] Yep. Yeah. [speaker003:] OK. [speaker004:] Yeah. [speaker003:] So one would hope, presumably, that the neural net part of it would [disfmarker] would improve things further as [disfmarker] as they did before. [speaker004:] Yeah. Um [disfmarker] Yeah, although if [disfmarker] if we, um, look at the result from the proposals, [comment] one of the reason, uh, the n system with the neural net was, um, more than [disfmarker] well, around five percent better, is that it was much better on highly mismatched condition. I'm thinking, for instance, on the TI digits trained on clean speech and tested on noisy speech. [speaker003:] Mm hmm. [speaker004:] Uh, for this case, the system with the neural net was much better. [speaker003:] Mm hmm. [speaker004:] But not much on the [disfmarker] in the other cases. [speaker003:] Yeah. 
[speaker004:] And if we have no, uh, spectral subtraction or Wiener filtering, um, i the system is [disfmarker] Uh, we thought the neural [disfmarker] neural network is much better than before, even in these cases of high mismatch. So, maybe the neural net will help less but, um [disfmarker] [speaker003:] Maybe. [speaker001:] Could you train a neural net to do spectral subtraction? [speaker003:] Yeah, it could do a nonlinear spectral subtraction [speaker004:] Mm hmm. [speaker003:] but I don't know if it [disfmarker] I mean, you have to figure out what your targets are. [speaker001:] Yeah, I was thinking if you had a clean version of the signal and [disfmarker] and a noisy version, and your targets were the M F uh, you know, whatever, frequency bins [disfmarker] [speaker004:] Mm hmm. [speaker003:] Right. [speaker004:] Mm hmm. [speaker003:] Yeah, well, that's not so much spectral subtraction then, [speaker004:] Mm hmm. [speaker003:] but [disfmarker] but [disfmarker] but it's [disfmarker] but at any rate, yeah, people, uh [disfmarker] [speaker001:] People do that? [speaker003:] y yeah, in fact, we had visitors here who did that I think when you were here ba way back when. [speaker004:] Mm hmm. [speaker001:] Hmm. [speaker003:] Uh, people [disfmarker] d done lots of experimentation over the years with training neural nets. And it's not a bad thing to do. It's another approach. [speaker001:] Hmm. [speaker004:] Mm hmm. [speaker003:] M I mean, it's [disfmarker] it, um [disfmarker] The objection everyone always raises, which has some truth to it is that, um, it's good for mapping from a particular noise to clean but then you get a different noise. [speaker001:] Mm hmm. [speaker003:] And the experiments we saw that visitors did here showed that it [disfmarker] there was at least some, um, [vocalsound] [comment] gentleness to the degradation when you switched to different noises. It did seem to help. 
So that [disfmarker] you're right, that's another [disfmarker] another way to go. [speaker001:] How did it compare on [disfmarker] I mean, for [disfmarker] for good cases where it [disfmarker] it [disfmarker] uh, stuff that it was trained on? Did it do pretty well? [speaker003:] Oh, yeah, it did very well. [speaker001:] Mmm. [speaker003:] Yeah. [speaker001:] Mmm. [speaker004:] Mm hmm. [speaker003:] Um, but to some extent that's kind of what we're doing. I mean, we're not doing exactly that, we're not trying to generate good examples but by trying to do the best classifier you possibly can, for these little phonetic categories, [speaker001:] Mm hmm. You could say it's sort of built in. [speaker003:] It's [disfmarker] Yeah, it's kind of built into that. [speaker001:] Hmm. [speaker003:] And [disfmarker] and that's why we have found that it [disfmarker] it does help. [speaker001:] Mm hmm. [speaker003:] Um [disfmarker] so, um, yeah, I mean, we'll just have to try it. But I [disfmarker] I would [disfmarker] I would [disfmarker] I would imagine that it will help some. I mean, it [disfmarker] we'll just have to see whether it helps more or less the same, but I would imagine it would help some. [speaker004:] Mm hmm. [speaker003:] So in any event, all of this [disfmarker] I was just confirming that all of this was with a simpler system. [speaker004:] Yeah, yeah. [speaker003:] OK? [speaker004:] Um, Yeah, so this is th the, um [disfmarker] Well, actually, this was kind of the first try with this spectral subtraction plus smoothing, [speaker003:] Mm hmm. [speaker004:] and I was kind of excited by the result. [speaker003:] Mm hmm. [speaker004:] Um, then I started to optimize the different parameters. And, uh, the first thing I tried to optimize is the, um, time constant of the smoothing. And it seems that the one that I chose for the first experiment was the optimal one, so [vocalsound] uh, [speaker003:] It's amazing how often that happens. 
[speaker004:] Um, so this is the first thing. Um [disfmarker] Yeah, another thing that I [disfmarker] it's important to mention is, um, that this has a this has some additional latency. Um. Because when I do the smoothing, uh, it's a recursion that estimated the means, so [disfmarker] of the g of the gain curve. And this is a filter that has some latency. And I noticed that it's better if we take into account this latency. So, instead o of using the current estimated mean to, uh, subtract the current frame, it's better to use an estimate that's some somewhere in the future. Um [disfmarker] [speaker001:] And that's what causes the latency? [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker001:] OK. [speaker002:] You mean, the m the mean is computed o based on some frames in the future also? Or [disfmarker] or no? [speaker004:] It's the recursion, so it's [disfmarker] it's the center recursion, right? [speaker002:] Mm hmm. [speaker004:] Um [disfmarker] and the latency of this recursion is around fifty milliseconds. [speaker003:] One five? One five? Five zero? [speaker004:] Five zero, yeah. [speaker003:] Five zero. [speaker004:] Um, [speaker003:] Yeah. [speaker004:] mmm. [speaker002:] I'm sorry, why [disfmarker] why is that delay coming? Like, you estimate the mean? [speaker004:] Yeah, the mean estimation has some delay, right? I mean, the [disfmarker] the filter that [disfmarker] that estimates the mean has a time constant. [speaker002:] Oh, yeah. It isn't [disfmarker] OK, so it's like it looks into the future also. [speaker004:] Yeah. [speaker002:] OK. [speaker003:] What if you just look into the past? [speaker004:] It's, uh, not as good. It's not bad. [speaker003:] How m by how much? [speaker004:] Um, it helps a lot over the ba the baseline but, mmm [disfmarker] [speaker003:] By how much? [speaker004:] it [disfmarker] It's around three percent, um, relative. [speaker003:] Worse. [speaker004:] Yeah. Yeah. Um, [speaker003:] Hmm. 
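[Annotation: the latency described here comes from pairing each frame with a mean estimate computed a few frames into the future, because the recursive mean filter itself has roughly that much delay. A sketch; the one-pole recursion form and the delay handling are assumptions.]

```python
def delayed_recursive_mean(track, alpha=0.9, delay=5):
    # One-pole recursive mean of the per-frame gain; its group delay is on
    # the order of a few tens of milliseconds, so frame t is better matched
    # by the mean available `delay` frames later -- at the cost of that
    # much added look-ahead latency.
    means = []
    m = track[0]
    for x in track:
        m = alpha * m + (1.0 - alpha) * x
        means.append(m)
    last = len(means) - 1
    return [means[min(t + delay, last)] for t in range(len(track))]
```

Setting `delay=0` gives the purely causal (past-only) variant discussed next, which was reported as about three percent relative worse.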
[speaker004:] mmm [disfmarker] So, uh [disfmarker] [speaker003:] It's depending on how all this stuff comes out we may or may not be able to add any latency. [speaker004:] Yeah, but [disfmarker] Yeah. So, yeah, it depends. Uh, y actually, it's [disfmarker] it's l it's three percent. Right. Mmm. Yeah, b but I don't think we have to worry too much on that right now while [disfmarker] you kno. Mm hmm. [speaker003:] Um, [speaker004:] So [disfmarker] [speaker003:] s Yeah, I mean, I think the only thing is that [disfmarker] I would worry about it a little. [speaker004:] Mm hmm. [speaker003:] Because if we completely ignore latency, and then we discover that we really have to do something about it, we're going to be [disfmarker] find ourselves in a bind. [speaker004:] Mm hmm. [speaker003:] So, um, you know, maybe you could make it twenty five. [speaker004:] Yeah. [speaker003:] You know what I mean? [speaker004:] Oh yes. [speaker003:] Yeah, just, you know, just be [disfmarker] be a little conservative because we may end up with this crunch where all of a sudden we have to cut the latency in half or something. [speaker004:] s Mm hmm. Yeah. [speaker003:] OK. [speaker004:] Um. So, yeah, there are other things in the, um, algorithm that I didn't, uh, [@ @] a lot yet, [speaker001:] Oh! [speaker004:] which [disfmarker] [speaker001:] Sorry. A quick question just about the latency thing. If [disfmarker] if there's another part of the system that causes a latency of a hundred milliseconds, is this an additive thing? Or c or is yours hidden in that? [speaker004:] Mm hmm. No, [speaker001:] Uh [disfmarker] [speaker004:] it's [disfmarker] it's added. [speaker001:] It's additive. [speaker004:] Mm hmm. [speaker001:] OK. [speaker002:] We can [disfmarker] OK. We can do something in parallel also, in some like [disfmarker] some cases like, if you wanted to do voice activity detection. [speaker001:] Uh huh. [speaker002:] And we can do that in parallel with some other filtering you can do. 
[speaker004:] Mmm. [speaker002:] So you can make a decision on that voice activity detection and then you decide whether you want to filter or not. [speaker004:] Yeah. [speaker002:] But by then you already have the sufficient samples to do the filtering. [speaker001:] Mm hmm. [speaker002:] So [disfmarker] So, sometimes you can do it anyway. [speaker001:] I mean, couldn't, uh [disfmarker] I [disfmarker] Couldn't you just also [disfmarker] I mean, i if you know that the l the largest latency in the system is two hundred milliseconds, don't you [disfmarker] couldn't you just buffer up that number of frames and then everything uses that buffer? [speaker002:] Yeah. [speaker001:] And that way it's not additive? [speaker003:] Well, in fact, everything is sent over in buffers cuz of [disfmarker] isn't it the TCP buffer some [disfmarker]? [speaker002:] You mean, the [disfmarker] the data, the super frame or something? [speaker004:] Mm hmm. [speaker003:] Yeah, yeah. [speaker004:] Yeah. [speaker002:] Yeah, but that has a variable latency because the last frame doesn't have any latency [speaker004:] Mm hmm. [speaker002:] and first frame has a twenty framed latency. So you can't r rely on that latency all the time. [speaker003:] Yeah. [speaker002:] Because [disfmarker] [speaker004:] Yeah. [speaker002:] I mean the transmission over [disfmarker] over the air interface is like a buffer. Twenty frame [disfmarker] [speaker001:] Yeah. [speaker002:] twenty four frames. [speaker001:] Yeah. [speaker002:] So [disfmarker] But the only thing is that the first frame in that twenty four frame buffer has a twenty four frame latency. And the last frame doesn't have any latency. [speaker001:] Mm hmm. [speaker002:] Because it just goes as [disfmarker] Yeah. 
[speaker001:] Yeah, I wasn't thinking of that one in particular but more of, you know, if [disfmarker] if there is some part of your system that has to buffer twenty frames, uh, can't the other parts of the system draw out of that buffer and therefore not add to the latency? [speaker003:] Yeah. Yeah. And [disfmarker] and that's sort of one of the [disfmarker] all of that sort of stuff is things that they're debating in their standards committee. [speaker001:] Oh! Hmm. [speaker004:] Mm hmm. Yeah. So, um, there is uh, [comment] these parameters that I still have to [disfmarker] to look at. Like, I played a little bit with this overestimation factor, uh, but I still have to [disfmarker] to look more at this, um, at the level of noise I add after. Uh, I know that adding noise helped, um, the system just using spectral subtraction without smoothing, but I don't know right now if it's still important or not, and if the level I choose before is still the right one. Same thing for the shape of the [disfmarker] the noise. Maybe it would be better to add just white noise instead of speech shaped noise. [speaker003:] That'd be more like the JRASTA thing in a sense. [speaker004:] Mm hmm. [speaker003:] Yeah. [speaker004:] Um, yep. Uh, and another thing is to [disfmarker] Yeah, for this I just use as noise estimate the mean, uh, spectrum of the first twenty frames of each utterance. I don't remember for this experiment what did you use for these two stage [disfmarker] [speaker002:] I used ten [disfmarker] just ten frames. [speaker004:] The ten frames? [speaker002:] Yeah, because [disfmarker] I mean, the reason was like in TI digits I don't have a lot. I had twenty frames most of the time. [speaker004:] Mm hmm. Um. But, so what's this result you told me about, the fact that if you use more than ten frames you can [disfmarker] improve by t [speaker002:] Well, that's [disfmarker] that's using the channel zero. If I use a channel zero VAD to estimate the noise. [speaker004:] Oh, OK. 
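The parameters being tuned above (the overestimation factor, and the noise added back after subtraction to mask musical noise) fit the standard spectral-subtraction template; a minimal per-bin sketch, where the parameter names and default values are my assumptions rather than the actual settings in the system:

```python
import numpy as np

def spectral_subtract(power_spec, noise_est, over=1.5, floor_noise=0.1):
    """Power-domain spectral subtraction of one frame: subtract an
    overestimated noise spectrum, rectify negative bins, then add a
    small amount of noise back (speech-shaped here, since it is scaled
    from the noise estimate; a white-noise floor would use a constant
    instead)."""
    clean = power_spec - over * noise_est    # overestimation factor 'over'
    clean = np.maximum(clean, 0.0)           # half-wave rectify negative bins
    clean = clean + floor_noise * noise_est  # noise added back afterwards
    return clean
```

The noise estimate fed in here would be, as in the experiment described, the mean spectrum of the first frames of the utterance, or the minima-based estimate discussed next.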
[speaker002:] Which [disfmarker] [speaker004:] But this is ten frames plus [disfmarker] plus [speaker002:] Channel zero dropping. [speaker004:] channel [disfmarker] Uh, no, these results with two stage Wiener filtering is ten frames [speaker002:] Hmm. t Oh, this [disfmarker] [speaker004:] but possibly more. I mean, if channel one VAD gives you [disfmarker] [speaker002:] f Yeah. Mm hmm. [speaker004:] Yeah. [speaker002:] Yeah. [speaker004:] OK. Yeah, but in this experiment I did [disfmarker] I didn't use any VAD. I just used the twenty first frame to estimate the noise. And [disfmarker] So I expected it to be a little bit better, [vocalsound] if, uh, I use more [disfmarker] more frames. Um. OK, that's it for spectral subtraction. The second thing I was working on is to, um, try to look at noise estimation, [comment] mmm, and using some technique that doesn't need voice activity detection. Um, and for this I u simply used some code that, uh, [vocalsound] I had from [disfmarker] from Belgium, which is technique that, um, takes a bunch of frame, um, and for each frequency bands of this frame, takes a look at the minima of the energy. And then average these minima and take this as an [disfmarker] an energy estimate of the noise for this particular frequency band. And there is something more to this actually. What is done is that, [vocalsound] uh, these minima are computed, um, based on, um, high resolution spectra. So, I compute an FFT based on the long, uh, signal frame which is sixty four millisecond [disfmarker] [speaker001:] So you have one minimum for each frequency? [speaker004:] What [disfmarker] what I [disfmarker] what I d uh, I do actually, is to take a bunch of [disfmarker] to take a tile on the spectrogram and this tile is five hundred milliseconds long and two hundred hertz wide. [speaker001:] Mmm. [speaker004:] And this tile [disfmarker] Uh, in this tile appears, like, the harmonics if you have a voiced sound, because it's [disfmarker] it's the FFT bins.
And when you take the m the minima of [disfmarker] of these [disfmarker] this tile, when you don't have speech, these minima will give you some noise level estimate, If you have voiced speech, these minima will still give you some noise estimate because the minima are between the harmonics. And [disfmarker] If you have other [disfmarker] other kind of speech sounds then it's not the case, but if the time frame is long enough, uh, like s five hundred milliseconds seems to be long enough, [comment] you still have portions which, uh, are very close [disfmarker] whi which minima are very close to the noise energy. [speaker003:] I'm confused. [speaker004:] Mmm? [speaker003:] You said five hundred milliseconds but you said sixty four milliseconds. Which is which? What? [speaker004:] Sixty four milliseconds is to compute the FFT, uh, bins. The [disfmarker] the FFT. [speaker003:] Yeah, yeah. [speaker004:] Um, actually it's better to use sixty four milliseconds because, um, if you use thirty milliseconds, then, uh, because of the [disfmarker] this short windowing and at low pitch, uh, sounds, [vocalsound] the harmonics are not, wha uh, correctly separated. [speaker003:] Mm hmm. [speaker004:] So if you take these minima, it [disfmarker] b [vocalsound] they will overestimate the noise a lot. [speaker003:] So you take sixty four millisecond F F Ts and then you average them [comment] over five hundred? Or [disfmarker]? Uh, what do you do over five hundred? [speaker004:] So I take [disfmarker] to [disfmarker] I take a bunch of these sixty four millisecond frame to cover five hundred milliseconds, [speaker003:] Ah. OK. [speaker004:] and then I look for the minima, [speaker003:] I see. [speaker001:] Mmm. [speaker004:] on the [disfmarker] on [disfmarker] on the bunch of uh fifty frames, right? [speaker003:] I see. [speaker004:] Mmm. 
So the interest of this is that, as y with this technique you can estimate u some reasonable noise spectra with only five hundred milliseconds of [disfmarker] of signal, so if the [disfmarker] the n the noise varies a lot, uh, you can track [disfmarker] better track the noise, [speaker003:] Mm hmm. [speaker004:] which is not the case if you rely on the voice activity detector. So even if there are no no speech pauses, you can track the noise level. The only requirement is that you must have, in these five hundred milliseconds segment, [comment] you must have voiced sound at least. Cuz this [disfmarker] these will help you to [disfmarker] to track the [disfmarker] the noise level. Um. So what I did is just to simply replace the VAD based, uh, noise estimate by this estimate, first on SpeechDat Car [disfmarker] Well, only on SpeechDat Car actually. And it's, uh, slightly worse, like one percent relative compared to the VAD based [pause] estimates. Um, I think the reason why it's not better, is that the SpeechDat Car noises are all stationary. Um. So, u y y there really is no need to have something that's adaptive [speaker003:] Mm hmm. [speaker004:] and [disfmarker] Uh, well, they are mainly stationary. Um. But, I expect s maybe some improvement on TI digits because, nnn, in this case the noises are all sometimes very variable. Uh, so I have to test it. Mmm. [speaker003:] But are you comparing with something [disfmarker] e I'm [disfmarker] I'm [disfmarker] p s a little confused again, i it [disfmarker] Uh, when you compare it with the V A D based, [speaker004:] Mm hmm. It's [disfmarker] [speaker003:] VAD Is this [disfmarker] is this the [disfmarker]? [speaker004:] It's the France Telecom based spectra, s uh, Wiener filtering and VAD. So it's their system but just I replace their noise estimate by this one. [speaker003:] Oh, you're not doing this with our system? [speaker004:] In i I'm not [disfmarker] No, no. 
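The minima-tracking estimate described above (64 ms FFT frames, 500 ms by 200 Hz tiles, per-bin minima averaged within each band) could look roughly like this. The frame and bin counts below stand in for the 500 ms / 200 Hz tile dimensions; the exact mapping depends on the frame rate and FFT length, and the function name is mine:

```python
import numpy as np

def minima_noise_estimate(power_specs, frames_per_tile=50, bins_per_band=4):
    """Noise estimation without a VAD.  'power_specs' is a
    (num_frames x num_bins) array of high-resolution power spectra
    (e.g. from 64 ms windows, so harmonics are resolved).  For each
    tile of frames_per_tile frames and bins_per_band bins, take the
    minimum of each bin over the tile, then average those minima
    across the band: during voiced speech the minima fall between the
    harmonics and still track the noise level."""
    n_frames, n_bins = power_specs.shape
    n_tiles = n_frames // frames_per_tile
    n_bands = n_bins // bins_per_band
    noise = np.empty((n_tiles, n_bands))
    for t in range(n_tiles):
        chunk = power_specs[t * frames_per_tile:(t + 1) * frames_per_tile]
        minima = chunk.min(axis=0)           # per-bin minimum over the tile
        for b in range(n_bands):
            band = minima[b * bins_per_band:(b + 1) * bins_per_band]
            noise[t, b] = band.mean()        # average the minima in the band
    return noise
```

Because each estimate only needs ~500 ms of signal, this can follow noise that varies within an utterance, which a pause-based VAD estimate cannot.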
Yeah, it's our system but with just the Wiener filtering from their system. Right? Mmm. [speaker003:] OK. [speaker004:] Yeah. Actually, th the best system that we still have is, uh, our system but with their noise compensation scheme, right? [speaker003:] Right. [speaker004:] So I'm trying to improve on this, and [disfmarker] by [disfmarker] by replacing their noise estimate by, uh, something that might be better. [speaker003:] But [disfmarker] OK. But the spectral subtraction scheme that you reported on also re requires a [disfmarker] a noise estimate. [speaker004:] Yeah. Yeah. But I di [speaker003:] Couldn't you try this for that? Do you think it might help? [speaker004:] Not yet, because I did this in parallel, and I was working on one and the other. [speaker003:] I see, I see. Yeah. [speaker004:] Um, Yeah, for [disfmarker] for sure I will. [speaker002:] Yeah. [speaker004:] I can try also, mmm, the spectral subtraction. [speaker003:] OK. [speaker002:] So I'm also using that n new noise estimate technique on this Wiener filtering what I'm trying. [speaker004:] Mm hmm. [speaker002:] So I [disfmarker] I have, like, some experiments running, I don't have the results. [speaker003:] Yeah. Yeah. [speaker002:] So. I don't estimate the f noise on the ten frames but use his estimate. [speaker004:] Mm hmm. [speaker003:] Yeah. [speaker004:] Um. Yeah. I, um, also implemented a sp um [disfmarker] spectral whitening idea which is in the, um, Ericsson proposal. Uh, the idea is just to [vocalsound] um, flatten the log, uh, spectrum, um, and to flatten it more if the [disfmarker] the probability of silence is higher. So in this way, you can also reduce [disfmarker] somewhat reduce the musical noise and you reduce the variability if you have different noise shapes, because the [disfmarker] the spectrum becomes more flat in the silence portions. Um. Yeah. 
With this, no improvement, uh, but there are a lot of parameters that we can play with and, um [disfmarker] Actually, this [disfmarker] this could be seen as a soft version of the frame dropping because, um, you could just put the threshold and say that "below the threshold, I will flatten [disfmarker] comp completely flatten the [disfmarker] the spectrum". And above this threshold, uh, keep the same spectrum. So it would be like frame dropping, because during the silence portions which are below the threshold of voice activity probability, [comment] uh, w you would have some kind of dummy frame which is a perfectly flat spectrum. And this, uh, whitening is something that's more soft because, um, you whiten [disfmarker] you just, uh, have a function [disfmarker] the whitening is a function of the speech probability, so it's not a hard decision. [speaker003:] Mm hmm. [speaker004:] Um, so I think maybe it can be used together with frame dropping and when we are not sure about if it's speech or silence, well, maybe it has something do with this. [speaker003:] It's interesting. I mean, um, you know, in [disfmarker] in JRASTA we were essentially adding in, uh, white [disfmarker] uh, white noise dependent on our estimate of the noise. [speaker004:] Mm hmm. [speaker003:] On the overall estimate of the noise. Uh, I think it never occurred to us to use a probability in there. [speaker004:] Mm hmm. [speaker003:] You could imagine one that [disfmarker] that [disfmarker] that made use of where [disfmarker] where the amount that you added in was, uh, a function of the probability of it being s speech or noise. [speaker004:] Mm hmm. Mm hmm. Yeah, w Yeah, right now it's a constant that just depending on the [disfmarker] the noise spectrum. [speaker002:] There's [disfmarker] [speaker003:] Yeah. [speaker004:] Mm hmm. [speaker003:] Cuz that [disfmarker] that brings in sort of powers of classifiers that we don't really have in, uh, this other estimate. [speaker004:] Mm hmm. 
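The "soft frame dropping" reading of the whitening idea above can be made concrete: pull each frame's log spectrum toward flat by an amount that grows as the speech probability falls. The linear interpolation below is one plausible choice of flattening function, not necessarily the one in the Ericsson proposal:

```python
import numpy as np

def soft_whiten(log_spec, p_speech):
    """Flatten a frame's log spectrum as a function of speech
    probability.  With p_speech == 0 the frame becomes perfectly flat
    (the soft analogue of dropping it and inserting a dummy frame);
    with p_speech == 1 the spectrum is left untouched; in between, the
    decision is soft rather than thresholded."""
    log_spec = np.asarray(log_spec, dtype=float)
    flat = np.full_like(log_spec, log_spec.mean())  # perfectly flat spectrum
    return p_speech * log_spec + (1.0 - p_speech) * flat
```

Thresholding `p_speech` to 0/1 before calling this would recover hard frame dropping as a special case.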
[speaker003:] So it could be [disfmarker] it could be interesting. [speaker004:] Mm hmm. Mm hmm. [speaker003:] What [disfmarker] what [disfmarker] what point does the, uh, system stop recording? How much [disfmarker] [speaker001:] It'll keep going till [disfmarker] I guess when they run out of disk space, [speaker003:] It went a little long? I mean, disk [disfmarker] [speaker001:] but [disfmarker] I think we're OK. [speaker004:] So. [speaker003:] OK. [speaker004:] Yeah. Uh [disfmarker] Yeah, so there are [disfmarker] with this technique there are some [disfmarker] I just did something exactly the same as [disfmarker] as the Ericsson proposal but, um, [vocalsound] the probability of speech is not computed the same way. And I think, i for [disfmarker] yeah, for a lot of things, actually a g a good speech probability is important. Like for frame dropping you improve, like [disfmarker] you can improve from ten percent as Sunil showed, if you use the channel zero speech probabilities. [speaker003:] Mm hmm. Mm hmm. [speaker004:] For this it might help, um [disfmarker] [speaker003:] Mm hmm. [speaker004:] S so, yeah. Uh, so yeah, the next thing I started to do is to, [vocalsound] uh, try to develop a better voice activity detector. And, um [disfmarker] I d um [disfmarker] yeah, for this I think we can maybe try to train the neural network for voice activity detection on all the data that we have, including all the SpeechDat Car data. Um [disfmarker] And so I'm starting to obtain alignments on these databases. Um, and the way I mi I do that is that I just use the HTK system but I train it only on the close talking microphone. And then I aligned [disfmarker] I obtained the Viterbi alignment of the training utterances. Um [disfmarker] It seems to be, uh i Actually what I observed is that for Italian it doesn't seem [disfmarker] Th there seems to be a problem. [speaker002:] No. [speaker004:] Well. 
[speaker002:] So, it doesn't seems to help by their use of channel zero or channel one. [speaker004:] Because [disfmarker] What? [speaker002:] Uh, you mean their d the frame dropping, right? [speaker004:] Yeah. [speaker002:] Yeah, it doesn't [disfmarker] [speaker004:] Yeah. So, u but actually the VAD was trained on Italian also, [speaker002:] Italian. [speaker004:] so [disfmarker] Um, the c the current VAD that we have was trained on, uh, t SPINE, right? Italian, and TI digits with noise and [disfmarker] [speaker002:] TI digits. [speaker004:] Uh, yeah. And it seems to work on Italian but not on the Finnish and Spanish data. So, maybe one reason is that s s Finnish and Spanish noise are different. And actually we observed [disfmarker] we listened to some of the utterances and sometimes for Finnish there is music in the recordings and strange things, right? [speaker002:] Yeah. [speaker004:] Um [disfmarker] Yeah, so the idea was to train all the databases and obtain an alignment to train on these databases, and, um, also to, um, try different kind of features, [vocalsound] uh, as input to the VAD network. And we came up with a bunch of features that we want to try like, um, the spectral slope, the, um, the degree o degree of voicing with the features that, uh, we started to develop with Carmen, um, e with, uh, the correlation between bands and different kind of features, [speaker002:] Yeah. Mm hmm. [speaker004:] and [disfmarker] Yeah. [speaker002:] The energy also. [speaker004:] The energy. Yeah. Of course. [speaker002:] Yeah. [speaker003:] Yeah, right. [speaker004:] Yeah. [speaker003:] OK. Well, Hans Guenter will be here next week so I think he'll be interested in all [disfmarker] all of these things. And, so. [speaker004:] Mm hmm. [speaker003:] Mmm. [speaker001:] OK, shall we, uh, do digits? [speaker003:] Yeah. [speaker001:] Want to go ahead, Morgan? [speaker003:] Sure. [speaker001:] OK. [speaker002:] Yeah. [speaker001:] OK, we're on. 
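Two of the candidate VAD input features listed above (frame energy and spectral slope) could be computed per frame along these lines; the slope here is a simple linear fit over the log spectrum, which is my choice of definition, and the voicing feature based on between-band correlation is not sketched:

```python
import numpy as np

def vad_features(power_spec, eps=1e-10):
    """Per-frame features for a VAD network, from one power spectrum:
    log frame energy, and spectral slope (the linear-regression slope
    of the log spectrum across frequency bins).  Illustrative only,
    not the actual network inputs."""
    power_spec = np.asarray(power_spec, dtype=float)
    log_spec = np.log(power_spec + eps)          # eps guards log(0)
    energy = np.log(power_spec.sum() + eps)      # log frame energy
    bins = np.arange(len(power_spec))
    slope = np.polyfit(bins, log_spec, 1)[0]     # slope per frequency bin
    return energy, slope
```

A frame with a steeply falling spectrum (typical of many car noises) gives a strongly negative slope, while flat noise gives a slope near zero, which is the kind of discrimination the feature is meant to add on top of energy alone.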
So, I think this is gonna be a pretty short meeting because I have four agenda items, three of them were requested by Jane who is not gonna be at the meeting today. So. [vocalsound] The uh first was transcription status. Does anyone besides Jane know what the transcription status is? [speaker006:] Um, sort of, I do, peripherally. Um [disfmarker] Well first of all with IBM I got a note from Brian yesterday saying that they finally made the tape for the thing that we sent them a [pause] week or week and a half ago [speaker003:] Is that English? [speaker004:] That's our system. [speaker001:] Ugh! [speaker006:] and that it's gone out to the transcribers and hopefully next week we'll have the transcription back from that. Um [disfmarker] Jane seems to be um moving right along on the transcriptions from the ICSI side. [speaker001:] C can I have a pen? [speaker006:] She's assigned, I think probably five or six m more meetings. [speaker003:] Yeah, I think we're up to MR thirteen or something. [speaker006:] Yeah, so um, I guess she's hired some new transcribers [speaker004:] Mmm. [speaker006:] and [disfmarker] [speaker005:] Which meetings is she transcribing? [speaker004:] Speaking [disfmarker] [speaker006:] Um well we've [disfmarker] we've run out of E D Us because a certain number of them are um, sort of awaiting to go to IBM. [speaker005:] OK. [speaker003:] For IBM, yeah. [speaker005:] OK. [speaker004:] Hmm. [speaker006:] and the rest are in process being transcribed uh here. [speaker005:] So we're doing some in parallel. [speaker004:] So does she have transcribers right now who are basically sitting idle because there's no data back from IBM [speaker001:] Yep. [speaker006:] No. Oh no no. [speaker001:] No, no. [speaker006:] No. [speaker004:] no? [speaker006:] We're not waiting on them. [speaker001:] We haven't done that process. So. They' they're doing the full transcription process. [speaker004:] Oh. Oh, OK. [speaker006:] Yeah. 
[speaker005:] So they're just doing their own thing until [disfmarker] [speaker004:] Because I [disfmarker] I need to ask Jane whether it's [disfmarker] it would be OK for her [disfmarker] um, s some of her people to transcribe uh some of the initial data we got from the SmartKom data collection, which is these short like five or seven minute sessions. [speaker006:] We're doing it in parallel, yeah. [speaker005:] OK. [speaker003:] Yep. [speaker004:] Um and we want it [disfmarker] You know, we need [disfmarker] The [disfmarker] Again, we [disfmarker] we have a similar uh logistic set up where we are supposed to send the data to Munich and get it transcribed and get it back. [speaker001:] Right. [speaker004:] But to get going we would like some of the data transcribed right away so we can get started. [speaker001:] Yep, sounds familiar. [speaker004:] And so um I wanted to ask Jane if [disfmarker] if uh, you know, maybe one of their transcribers could [disfmarker] could do [disfmarker] I mean since these are very short, that should really be uh, [speaker002:] Mm hmm. [speaker004:] um [disfmarker] It's [disfmarker] [speaker003:] There's only two channels. So it's only [disfmarker] [speaker004:] Yeah. It's only two [disfmarker] [speaker003:] Yeah. As the synthesis doesn't have to be transcribed I think. [speaker004:] Right, s [speaker003:] So. [speaker004:] Yeah. So [disfmarker] So it's basically one channel to transcribe. And it's [disfmarker] One session is only uh like seven [disfmarker] [speaker002:] So that should have ma many fewer [disfmarker] And it's also not uh a bunch of interruptions with people and all that, [speaker004:] Right. And some of it is read speech, so we could give them the [disfmarker] the thing that they're reading [speaker002:] right? So. Yeah. [speaker004:] and they just may [disfmarker] [speaker001:] Make sure it's right. [speaker004:] And so um, [speaker003:] Yep. 
[speaker004:] um, I guess since she's [disfmarker] I was gonna ask her but since she's not around I [disfmarker] maybe I'll [disfmarker] [speaker002:] Yeah, well it certainly seems [disfmarker] [speaker004:] Uh if [disfmarker] if that's OK with you to [disfmarker] to, you know, get that stuff uh [disfmarker] to ask her for that, then I'll do that. [speaker002:] Yeah. Yeah, if we're held up on this other stuff a little bit in order to encompass that, that's OK because I I um, I mean I still have high hopes that the that the IBM pipeline'll work out for us, so it's [disfmarker] [speaker004:] Yeah. OK, yeah. [speaker002:] Yeah. [speaker004:] Alrighty. [speaker006:] Oh, yeah, and also related to the transcription stuff, so I've been trying to keep a web page uh up to date f showing what the current status is of the trans of all the things we've collected and what stage each meeting is in, in terms of whether it's [disfmarker] [speaker001:] Can you mail that out to the list? [speaker006:] Mm hmm, yeah I will. I [disfmarker] That's the thing that I sent out just to foo people saying can you update these pages and so that's where I'm putting it but I'll [disfmarker] [speaker001:] Oh, OK, OK. [speaker006:] I'll send it out to the list telling people to look at it. [speaker001:] Yeah, I haven't done that. So. I have lots of stuff to add that's just in my own directory. [speaker006:] Yeah. [speaker001:] I'll try to get to that. OK. So Jane also wanted to talk about participant approval, but I don't really think there's much to talk about. I'm just gonna do it. And uh, if anyone objects too much then they can do it instead. [speaker002:] You are going to [disfmarker] [speaker001:] I'm gonna send out to the participants, uh, with links to web pages which contain the transcripts and allow them to [pause] suggest edits. And then bleep them out. [speaker002:] OK. [speaker001:] For the ones that we have. 
Um [disfmarker] [speaker003:] So but it's just transcripts, not the [disfmarker] not the audio? [speaker001:] Nope, they'll have access to the audio also. [speaker003:] OK, yeah, yep. Ah. [speaker001:] I mean that's my intention. [speaker003:] Yeah. [speaker001:] Because the transcripts might not be right. [speaker006:] So [disfmarker] [speaker003:] Yeah. [speaker001:] So you want people to be able to listen to them. [speaker006:] So, um the audio that they're gonna have access to, will that be the uncompressed version? Or will you have scripts that like uncompress the various pieces and [disfmarker] [speaker001:] Oh, that's a good point. That's a good point. Yeah, it's [disfmarker] it's probably going to have to be the uncompressed versions because, uh, uh, it takes too long to do random access decompression. [speaker006:] Hmm. Yeah, I was just wondering because we're uh running out of the un backed up disk space on [speaker001:] Well, that was the other point. [speaker006:] Oh, was that another one? [speaker001:] Yep, that's another agenda item. [speaker006:] OK. I'll wait. [speaker001:] So, uh [disfmarker] But that is a good point so we'll get to that, too. Um, DARPA demo status, not much to say. The back end stuff is working out fine. It's more or less ready to go. I've added some stuff that uh indes indexes by the meeting type MR, EDU, et cetera and also by the user ID. So that the front end can then do filtering based on that as well. Uh [disfmarker] The back end is uh, going more slowly as I s I think I said before just cuz I'm not much of a Tcl TK programmer. And uh Dave Gelbart says he's a little too busy. So I think Don and I are gonna work on that and [disfmarker] and you and I can just talk about it off line more. [speaker005:] Right. [speaker001:] But uh [pause] the back end was pretty smooth. [speaker002:] Oh [speaker001:] So I think, we'll have something. It may not be as [disfmarker] As pretty as we might like, but we'll have something. 
[speaker002:] I wondered whe when we would reach Dave's saturation point. He's sort of been [disfmarker] been volunteering for everything and [pause] and uh [disfmarker] [speaker001:] Yeah. [speaker004:] Mm hmm. [speaker002:] O K. Finally said he was too busy. I guess we reached it. [speaker001:] Yeah, he [disfmarker] he actually [disfmarker] he volunteered but then he s then he retracted it. So. Oh well. Um [disfmarker] [speaker005:] And, also um, I was just showing Andreas, I got um an X Waves kind of display, and I don't know how much more we can do with it [disfmarker] with like the prosodic stuff where we have like stylized pitches and signals and the transcripts on the bottom [speaker001:] Oh, cool. [speaker005:] so, right now it's just an X Waves and then you have three windows but I don't know, it looked pretty nice and I'm sure it [disfmarker] think it has potential for a little something, [speaker001:] For a demo? [speaker005:] yeah, for a demo. [speaker001:] Yeah, sounds good. [speaker005:] So [disfmarker] [speaker002:] OK, so again, the issue is [disfmarker] For July, the issue's gonna be what can we fit into a Windows machine, uh, and so on, [speaker005:] Oh. [speaker002:] but [disfmarker] [speaker005:] OK. [speaker001:] So it might just be slides. [speaker005:] Yeah, OK. Well, we'll see, um [disfmarker] [speaker003:] Well [disfmarker] [pause] Yeah. I've been putting together uh Transcriber things for Windows so i And I installed it on Dave Gelbart's PC and it worked just fine. So hopefully that will work. [speaker004:] Really? So is that [disfmarker] Because there's some people um [disfmarker] It would be cool if we could uh get that to work uh at [disfmarker] at SRI [speaker003:] Yeah. [speaker004:] because the um [disfmarker] [speaker003:] Yep. [speaker004:] we have m m We have more Windows machines to run the [disfmarker] [speaker001:] Well Transcriber is Tcl TK, very generic with Snack, [speaker003:] Yeah. 
[speaker001:] so basically anything you can get Snack to run on, it will work. [speaker004:] Right. [speaker003:] Yeah. Yeah but [disfmarker] But the problem is the version Transcriber works with, the Snack version, is one point six whatever and that's not anymore supported. It's not on [disfmarker] on the web page anymore. But I just wrote an email to [disfmarker] to the author of [disfmarker] to the Snack author and he sent me to one point six whatever library and so it works. [speaker001:] Well I thought it was packaged with Transcriber? [speaker003:] Yeah, but then you can't add our patches and then the [disfmarker] the new version is [disfmarker] is totally different [speaker001:] Oh. [speaker003:] a and in [disfmarker] yeah, in terms of [disfmarker] of the source code. You [disfmarker] you can't find the Tcl files anymore. [speaker001:] Ah. [speaker003:] It's some whatever wrapped thing [speaker004:] Mmm. [speaker003:] and you can't [disfmarker] you can't access that so you have to install [disfmarker] First install Tcl then install Snack and then install the Transcriber thing and then do the patches. [speaker001:] Patch. [speaker004:] I [disfmarker] I wonder if [disfmarker] if we should contribute our changes back to the authors so that they maintain those changes along [disfmarker] [speaker001:] Ugh! [speaker003:] Yeah. Yeah. [speaker001:] We have [disfmarker] [speaker004:] We have? [speaker001:] Yeah b it's just hasn't made it into the release yet. [speaker004:] Oh. Oh, OK. [speaker006:] So did you um put the uh [disfmarker] [vocalsound] the NT version out on the uh Meeting Recorder page? Or [disfmarker] [speaker003:] No, I haven't done that yet. I'm [disfmarker] oh Nope. But I definitely will do that. [speaker002:] So, can some of the stuff that Don's talking about somehow fit into this Uh, mean you just have a set of numbers that are associated with the [disfmarker] [speaker005:] Yeah. 
Yeah, it's basically ASCII files or binary files, whatever representation. [speaker003:] So [disfmarker] [speaker005:] Just three different [disfmarker] It's a waveform and just a stylized pitch vector basically so it's [disfmarker] I mean we could do it in Matl [comment] I mean you could do it in a number of different places I'm sure. [speaker004:] So [disfmarker] So [disfmarker] Well [disfmarker] But [disfmarker] But it would be cool if the Transcriber interface had like another window for the [disfmarker] you know, maybe above the waveform where it would show some arbitrary valued function that is [disfmarker] that is you know time synchron ti ti time synchronous with the waveform. [speaker003:] Yep. [speaker005:] Yeah. Yeah, that'd be very cool. [speaker002:] Yes. [speaker001:] It'd be easy enough to add that. Again it's [disfmarker] it's [pause] It's more Tcl T [speaker005:] Yeah. [speaker001:] So someone who's familiar with Tcl TK has to do it, [speaker004:] Right. Right. [speaker001:] but uh, it wouldn't be hard to do. [speaker004:] But it would almost be like having another waveform displayed. [speaker005:] Mm hmm. [speaker001:] Yep. [speaker004:] S [speaker003:] Yep. [speaker004:] Right. [speaker003:] Yeah. Yeah, maybe we could l look into that. [speaker005:] Yeah. [speaker001:] But it [disfmarker] it seems to me that I c [speaker003:] And [disfmarker] [speaker001:] It doesn't seem like having that real time is that necessary. So yo It seems to me you could do images. [speaker005:] Um [pause] What do you mean by real time? Do you mean like [disfmarker] [speaker006:] Like being able to scroll through it and stuff for the demo. [speaker005:] OK. [speaker006:] Is that what you mean? [speaker001:] Yeah, jus Yeah. [speaker006:] Yeah. [speaker001:] It just seems to me jus [speaker005:] It would be cool to see it [disfmarker] It would be cool like to see [disfmarker] to hear it and see it, [speaker003:] And to hear it. Yeah.
[speaker005:] and see the pitch contours also. [speaker003:] Yeah. Yeah. [speaker001:] Sure, but I don't think [disfmarker] I [disfmarker] You can do all that just statically in [speaker005:] I think it would lose [disfmarker] Yeah, I mean y [speaker001:] Just record the audio clip and show an image and I think that's [disfmarker] [speaker005:] Right, right. I just thought if you meant slides I thought you meant like just [pause] like [pause] um view graphs or something. [speaker002:] You know, wh Yeah. So. Uh, no, we're talking about on the computer [speaker005:] Right. [speaker002:] and [disfmarker] and um, I think when we were talking about this before we had littl this little demo meeting, we sort of set up a range of different degrees of liveness that you could have and, [vocalsound] the more live, the better, but uh, given the crunch of time, we may have to retreat from it to some extent. So I think [disfmarker] [pause] For a lot of reasons, I think it would be very nice to have this Transcriber interface be able to show some other interesting signal along with it [speaker004:] Mm hmm. [speaker002:] so it'd be a good thing to get in there. But, um [disfmarker] Anyway, jus just looking for ways that we could actually show what you're doing, uh, in [disfmarker] [pause] to people. [speaker005:] Mm hmm. [speaker002:] Cuz a lot of this stuff, particularly for Communicator, uh certainly a significant chunk of the things that we waved our arms about th originally had t had to do with prosodics It'd be nice to show that we can actually get them and see them. [speaker005:] Mm hmm. [speaker004:] Mmm. [speaker001:] And the last i item on the agenda is disk issues yet again. So, we're doing OK on backed up. We're [disfmarker] We're only about thirty percent on the second disk. So, uh, we have a little bit of time before that becomes critical, but we are like ninety five percent, ninety eight percent on the scratch disks for the expanded meetings. [speaker003:] Yeah. 
[speaker001:] And, my original intention was like we would just delete them as we needed more space, but unfortunately we're in the position where we have to deal with all the meeting data [pause] all at once, in a lot of different ways. [speaker003:] Yeah. [speaker006:] Oh there's a lot of transcribers, too. [speaker001:] Yeah, there're a lot of transcribers, [speaker003:] Yeah. [speaker001:] so all of those need to be expanded, [speaker006:] Mm hmm. [speaker001:] and then people are doing chunking and I want to do uh, uh, uh, the permission forms, [speaker003:] An [speaker006:] Right. [speaker001:] so I want those to be live, so there's a lot of data that has to be around. Um [disfmarker] And Jane was gonna talk to, uh, Dave Johnson about it. One of the things I was thinking is we [disfmarker] we just got these hundred [disfmarker] alright, excuse me [disfmarker] ten, uh SPARC Blade SUN Blades. [speaker002:] Did they come in? [speaker006:] SUN Blades. Yeah. [speaker004:] Yeah. [speaker006:] They came in the other day. [speaker001:] They came in but they're not set up yet. [speaker002:] Oh. [speaker001:] And so it seems to me we could hang scratch disk on those [pause] because they'll be in the machine room, they'll be on the fast connection to the rest of the machines. And if we just need un backed up space, we could just hang disks off them. [speaker006:] Well, is there [disfmarker] Why not just hang them off of Abbott, is there a [disfmarker] [speaker002:] Yeah. [speaker001:] Because there's no more room in the disk racks on Abbott. [speaker006:] Ah. Ah, I see. [speaker002:] Weren't we gonna get [disfmarker] Well, maybe it should get another rack. [speaker004:] But you still need to store the disks somehow. [speaker001:] Well, but the SUN Blades have spare drive bays. [speaker004:] So [disfmarker] [speaker006:] You can put two [disfmarker] [speaker001:] Just put them in. 
[speaker004:] Oh you mean you put them inside the pizza boxes for the [disfmarker] [speaker003:] Internal. [speaker001:] Sure. [speaker003:] Yeah. [speaker001:] Yeah. [speaker004:] Oh. [speaker001:] Cuz the SUN [disfmarker] uh, these SUN Blades take commodity hard drives. So you can just go out and buy a PC hard drive and stick it in. [speaker004:] Mmm. [speaker002:] But if Abbott is going to be our disk server it [disfmarker] it [disfmarker] file server [comment] it seems like we would want to get it, uh, a second disk rack or something. [speaker004:] Plus we're talking about buying a second dis uh, file server. [speaker001:] Well, I mean there are lots of long term solutions. What I'm looking for is where do we s expand the next meeting? [speaker003:] Yep. [speaker004:] I see [vocalsound] [pause] Oh, I see. [speaker002:] Well, for the next meeting you might be out of luck with those ten, mightn't you? Uh, you know Dave Johnson is gone for, like, ten days, [speaker001:] Oh, I didn't know he had left already. [speaker002:] Uh, well, tonight. [speaker001:] Oh, oh well. [speaker004:] You mean he won't set up the [disfmarker] mmm. [speaker005:] How much space do you need for these? [speaker002:] I don't know. I don't know what his schedule is. [speaker001:] You [disfmarker] we need about a gig per meeting. [speaker002:] I'm just saying he's gone. [speaker006:] I [disfmarker] I thi [speaker003:] Yep. [speaker005:] I have um [disfmarker] I have an eighteen gig drive hanging off of my computer. [speaker001:] Alright! [speaker005:] So [disfmarker] [speaker001:] What's your computer's name? [speaker005:] Uh, Samosa. [speaker002:] You had an eighteen gigabyte drive. [speaker005:] Yeah, I had. Well it's about [disfmarker] I think there's about twelve gig left. [speaker001:] So it [disfmarker] And you have an X drives installed? [speaker005:] Yeah. [speaker001:] OK. [speaker005:] So, I didn't realize it was so critical. [speaker001:] And you're o you're offering? 
[speaker005:] I mean I'm not doing anything on it right now until I get new meetings to transcri or that are [disfmarker] new transcriptions coming in I really can't do anything. [speaker001:] OK. [speaker006:] I [disfmarker] I jus I just gave Thilo some [disfmarker] about ten gigs, the last ten gigs of space that there was on [disfmarker] on uh Abbott. [speaker005:] Um not that I can't do anything, I jus [speaker006:] Uh [disfmarker] And uh [disfmarker] So but that [disfmarker] But [disfmarker] [speaker001:] Which one was that, X [pause] G? X [pause] G? [speaker003:] XG. [speaker006:] XG. [speaker001:] OK. [speaker006:] Yeah. [speaker004:] XG? That's also where we store the [disfmarker] The uh Hub five training set waveforms, [speaker003:] Oops. [speaker006:] No. [speaker004:] right? [speaker001:] But that won't be getting any bigger, [speaker006:] I don't think that's on XG. [speaker001:] will it? [speaker004:] Right. [speaker006:] On XG is only Carmen and Du and Stephane's disk. [speaker003:] It's [disfmarker] [speaker004:] But I've also been storing [disfmarker] I've been storing the feature files there [speaker003:] Yeah. [speaker004:] and I guess I can s start deleting some because we now know what the best features are [speaker005:] Well [disfmarker] [speaker004:] and we won't be using the old ones anymore. [speaker006:] Yeah, I do I don't think it was on XG. [speaker005:] I have a lot of space, though. [speaker004:] Uh [disfmarker] Oh thats XA [disfmarker] Oh that's X [disfmarker] [speaker003:] Isn't that XH? [speaker006:] I th [speaker005:] I have a lot of space and it's not [disfmarker] it's n [speaker001:] Not [disfmarker] not for long. [speaker005:] There's very little uh [disfmarker] Yeah not for long. But I mean it's not going f [speaker004:] Maybe I'm confu Oh no I'm sorry. [speaker005:] It's not being used often at all. [speaker003:] But I'm using XH [disfmarker] H, too. 
[speaker001:] Yeah, it's probably [disfmarker] Probably only about four gig is on X [disfmarker] on your X drive, [speaker003:] So. [speaker004:] Oh OK. [speaker001:] but we'll definitely take it up if you [disfmarker] [speaker005:] I th [speaker004:] I think you're right. [speaker005:] I think it's about four or five gig [speaker004:] It's XH and D [disfmarker] [speaker005:] cuz I have four meetings on there, [speaker004:] The b I'm also using DG I got that confused. [speaker005:] three or four meetings. [speaker004:] OK. [speaker001:] Great. [speaker005:] So. [speaker002:] We need [disfmarker] [speaker001:] OK, so that will get us through the next couple days. [speaker002:] We need another gigaquad. [speaker001:] Yep. At least. [speaker002:] There should [disfmarker] I d There should just be a b I should have a button. Just press [disfmarker] Press each meeting saying "we need more disk space" [vocalsound] "this week". [speaker001:] The "more disk space" button? Yep. [speaker002:] Skip the rest of the conversation. [speaker006:] Well we've collected so far something like uh sixty five meetings. And it's about a gig uncompressed. [speaker002:] And [disfmarker] And how much does each meeting take? [speaker003:] It's [disfmarker] It's a little bit more as I usually don't [disfmarker] do not uncompress the [disfmarker] all of the PZM and the PDA things. [speaker006:] Is a little more? Right, yeah so if you uncompressed everything it's even more. [speaker003:] So. It's [disfmarker] Yeah. One point five or something. [speaker006:] U Uh compressed how much are they? Like [disfmarker] [speaker001:] Half a gig. [speaker006:] About half? [speaker001:] For all of them. [speaker003:] Yeah. Yeah. Yep. [speaker006:] So we're definitely are storing you know, all of those. So there's what thirty some gig of just meetings so far? [speaker002:] So so So maybe there's a hundred [pause] gig or something. Or [disfmarker] [speaker006:] Mm hmm. [speaker002:] I mean. 
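[Editor's note: the figures being traded here can be sanity-checked quickly. The per-meeting sizes below are the rough numbers from the discussion (about half a gig compressed, about one and a half gig fully uncompressed, sixty-five meetings so far), not measurements; treating every meeting as expanded overstates the total, since only some are kept uncompressed at once.]

```python
# Back-of-the-envelope check of the disk figures mentioned above.
meetings = 65
compressed_gb = 0.5 * meetings    # ~32.5 GB: "thirty some gig of just meetings"
uncompressed_gb = 1.5 * meetings  # ~97.5 GB if every meeting were expanded
total_gb = compressed_gb + uncompressed_gb  # worst case, keeping both around
print(compressed_gb, uncompressed_gb, total_gb)
```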
Cuz we [disfmarker] we have the uncompressed around also. [speaker006:] Right. [speaker003:] Yeah. [speaker002:] So it's like [disfmarker] [speaker006:] Right. Well we [disfmarker] We haven't uncompressed all the meetings, but [disfmarker] [speaker001:] I would like to. [speaker002:] Yeah. Well I mean it's [disfmarker] the they really are cheap. I mean it's just a question of figuring out where they should be and hanging them, [speaker006:] Right. [speaker001:] Yep. [speaker002:] but But uh, we could [disfmarker] You know, if you want to get four disks, get four disks. I mean it's [disfmarker] it's small [disfmarker] I mean these things ar are just a few hundred dollars. [speaker006:] Yeah. Well I sent that message out to, I guess, you and Dave asking for [disfmarker] if we could get some disk. [speaker002:] Yeah. [speaker006:] I s I sent this out a [disfmarker] a day ago [speaker001:] And put it where? [speaker002:] Right. [speaker006:] but [disfmarker] and Dave didn't respond so I don I don't know how the whole process works. I mean does he just go out and get them and [disfmarker] if it's OK, and [disfmarker] [speaker001:] Yep. [speaker006:] So I was assuming he was gonna take over [pause] that. But he's probably too busy given that he's leaving. [speaker002:] Yeah, I think you need a direct conversation with him. And just [vocalsound] say an e just ask him that, you know, wha what should you do. And in my answer back was "are you sure you just want one?" So I mean I think [pause] that what you want to do is plan ahead a little bit and figure "well, here's what we pi figure on doing for the next few months". [speaker006:] Yeah. [speaker001:] Wa a I know what they want. The sysadmins would prefer to have one external drive per machine. [speaker002:] Yeah. [speaker001:] So they don't want to stack up external drives. Um [disfmarker] [pause] And then they want everything else in the machine room. [speaker002:] Right. 
[speaker001:] So the question is where are you gonna hang them? [speaker006:] Mm hmm. I don't know what the space situation is in the machine room. [speaker001:] Right. [speaker006:] So. [speaker002:] Right. So this is a question that's pretty hard to solve without talking to Dave, [speaker004:] Th The [disfmarker] [speaker006:] I think part of the reason why Dave can't get the [disfmarker] the new machines up is because he doesn't have room in the machine room right now. [speaker002:] cuz it [disfmarker] [speaker004:] One [disfmarker] Mmm. [speaker001:] Yep. [speaker004:] One [disfmarker] One On One thing to in to um t to do when you need to conserve space is [speaker006:] So he has to re arrange a bunch of stuff. [speaker004:] I bet there are still some old, uh, like, nine gig disks, uh, around and you can probably consolidate them onto larger disks and [disfmarker] and you know recover the space. [speaker001:] Yep. [speaker002:] Yeah. No. I think Dave [disfmarker] Dave knows all these things, of course. [speaker004:] Right. [speaker002:] An and so, he always has a a lot of plans of things that he's gonna do to make things better in many ways an and runs out of time. [speaker004:] Mm hmm. [speaker001:] But I [disfmarker] I know that [pause] generally their first priority has been for backed up disk. And so I think what he's been concentrating on is uh the back [disfmarker] the [pause] back up system, rather than on new disk. [speaker004:] Mmm. Mmm. [speaker001:] So. [speaker002:] Well. So. [speaker001:] Which [disfmarker] [speaker002:] But this [disfmarker] this is a very specific question for me. Basically, we can easily get [pause] one to four disks, I mean you just go out and get four and we've got the money for it, it's no big deal. Uh, but the question is where they go, and I don't think we can solve that here, [speaker004:] Maybe we can put some disks in the [disfmarker] in that back room there. [speaker002:] you just have to ask him. 
[speaker001:] Yeah really. [speaker002:] Attach to [disfmarker] [speaker001:] Popcorn. [speaker004:] To the machine that collects the data. [speaker002:] Yeah? [speaker004:] So then you could, at least temporarily, store stuff there. The only [disfmarker] [speaker006:] Hmm. [speaker001:] Yeah, it's just [disfmarker] It's not on the net, so it's a little [pause] awkward [speaker004:] What do you mean it's not on the net? [speaker001:] It's not [disfmarker] [speaker003:] It's not bad. [speaker001:] It's behind lots of fire walls that don't allow any services through except S S [speaker004:] Oh because it's [disfmarker] because it's an ACIRI machine? [speaker001:] Yep. [speaker004:] Oh, oh oh. [speaker002:] Yeah. [speaker001:] And also on the list is to get it into the normal ICSI net, but Who knows when that will happen? [speaker006:] That might be a good short term solution, though. [speaker004:] But that can't be that hard. I mean [disfmarker] [speaker001:] No, the [disfmarker] the problem with that apparently is that they don't currently have a wire running to that back room [pause] that goes anywhere near one of the ICSI routers. [speaker004:] Oh, [speaker001:] So, they actually have to run a wire somewhere. [speaker002:] Yeah. Yeah, e again, you know, any one of these things is certainly not a big deal. If there was a person dedicated to doing it they would happen pretty easily but it's [disfmarker] it's [disfmarker] jus every ever everybody [disfmarker] everybody has a [disfmarker] has [disfmarker] [speaker001:] But Dave has to do all of them. [speaker002:] Well all of us have long lists of different things we're doing. 
But at any rate I think that there's a [disfmarker] there's a longer term thing and there's immediate need and I think we need a [disfmarker] a conversation with [disfmarker] Uh, maybe [disfmarker] maybe after [disfmarker] after tea or something you and I can go down and [disfmarker] and talk to him about it Just say "wha you know, what should we do right now?" [speaker006:] How long is David gonna be gone? [speaker002:] Uh, eleven days or something? [speaker001:] Oh my! [speaker002:] Yeah basically tomorrow and all of [pause] the week after. [speaker001:] And that's all I have. [speaker002:] Um [disfmarker] Let's see. The only oth thing [disfmarker] other thing I was gonna add was that um [disfmarker] uh, I talked briefly to Mari and uh we had [vocalsound] both been busy with other things so we haven't really connected that much since the [pause] last meeting we had here but we agreed that we would have a telephone meeting the Friday after next. And I [disfmarker] I [disfmarker] I wanted to make it, um after the next one of these meetings, so something that we wanna do next meeting is [disfmarker] is uh to put together um, a kind of reasonable list for ourselves of what is it, um, that we've done. I mean just sort of bulletize I mean o e do do I can [disfmarker] I can dream up text but [pause] this is basically gonna lead to the annual report. So [pause] Um [disfmarker] If w [speaker004:] Mm hmm. [speaker001:] This is the fifteenth? [speaker002:] Um, that would [speaker001:] So just a week from tomorrow? [speaker002:] Yeah. [speaker001:] OK. [speaker002:] Yeah. So, uh, we can [disfmarker] This [disfmarker] [speaker004:] Is this gotta be in the morning? [speaker002:] So that's an [disfmarker] Um [disfmarker] [speaker004:] Or [disfmarker] Because you know I [disfmarker] Fridays I have to leave uh like around uh two. So if it could be before that would be be [speaker002:] No, no but I [disfmarker] I [disfmarker] I don't need other folks for the meeting. I can do it. 
[speaker004:] Oh, OK, alright. [speaker002:] A A All I'm saying is that on [disfmarker] [speaker004:] Oh I'm sorry, I misunderstood. I thought you are [disfmarker] [speaker002:] Yeah so what I meant was on the me this meeting [pause] if I wa something I [disfmarker] I [disfmarker] I'm making a major thing in the agenda is I wanna help in getting together a list of what it is that we've done so I can tell her. [speaker004:] OK. Alright. Mm hmm. OK. [speaker002:] I think I have a pretty good idea but [disfmarker] but um [disfmarker] Uh, and then the next day uh, late in the day I'll be having that [disfmarker] that discussion with her. [speaker004:] Mmm. [speaker002:] Um. So. [speaker004:] Um [disfmarker] Uh [disfmarker] One thing [disfmarker] I mean [disfmarker] we [disfmarker] [vocalsound] [vocalsound] in past meetings we had um also a you know various [disfmarker] variously talked about the um work that w uh was happening sort of on the [disfmarker] on the recognition side um but isn't necessarily related to meetings uh specifically. [speaker002:] Mm hmm. [speaker004:] So. Um. And I wondered whether we should maybe have um a separate meeting and between you know, whoever's interested in that because I feel that uh there's plenty of stuff to talk about but it would be sort of um maybe the wrong place to do it in this meeting if uh [disfmarker] [speaker002:] Think so? [speaker004:] Well, it's that [disfmarker] It's just gonna be ver very boring for people who are not you know, sort of really interested in the details of the recognition system. [speaker001:] I'm interested. [speaker003:] Me too. [speaker002:] Well, OK, so how many [disfmarker] how many people here would not be interested in uh [disfmarker] in a meeting about recognition? [speaker006:] Jane may not be. [speaker001:] Jane, I think. [speaker004:] Well I know [disfmarker] Well, Jane an [speaker003:] Yep. 
[speaker004:] Well you mean in a separate meeting or ha ha talking about it in this [disfmarker] [speaker001:] No. If we talked about it in this meeting. [speaker006:] He's wondering how much overlap there will be. [speaker004:] OK. [speaker002:] Yeah, so you're su So. [speaker004:] So, uh, uh, Liz and Jane probably. [speaker002:] OK, so we're gonna have a guy's meeting. [speaker004:] Uh. [vocalsound] Uh, if you wanna put it that way. [speaker006:] Good thing Liz isn't here. [speaker002:] Real [disfmarker] [speaker005:] Watch a ball game? [speaker002:] Yeah, real [disfmarker] real [disfmarker] real men [vocalsound] "Real men do decoding" or something like that. [speaker006:] Don't listen to this, Liz. [speaker004:] Right. [speaker002:] Uh. [speaker004:] I mean it it's sort of [disfmarker] I mean when [disfmarker] when the talk is about data collection stuff, sometimes I've [disfmarker] you know, I [disfmarker] I'm bored. [speaker002:] Yeah. [speaker001:] The [disfmarker] Nod off? [speaker004:] So it's I c I can sympathize with them not wanting to [disfmarker] i to [disfmarker] to be uh [disfmarker] you know [disfmarker] If [disfmarker] I cou you know [disfmarker] this could [disfmarker] [speaker002:] It's cuz [pause] y you have a [disfmarker] So you need a better developed feminine side. [speaker004:] I'm not sure I wanna [disfmarker] [speaker002:] There's probably gonna be a lot of "bleeps" in this meeting. [speaker001:] Yeah, I would as [comment] I would guess. [speaker002:] Uh. Um. [speaker004:] Yeah and [disfmarker] [speaker002:] I think it must be [pause] uh nearing the end of the week. Um. [vocalsound] Yeah. I [disfmarker] You know, I [disfmarker] I've heard some comments about like this. That m could be. [speaker004:] Mm hmm. [speaker002:] I mean the [disfmarker] Um. [vocalsound] U [speaker004:] And we don't have to do it every week. [speaker006:] Could we [disfmarker] [speaker004:] We could do it every other week or so. 
[speaker006:] Right, I was [disfmarker] [speaker004:] You know, whatev or whenever we feel like we [disfmarker] [speaker006:] Why don't we alternate this meeting every other week? [speaker001:] Or just alternate the focus. [speaker006:] Tha That's what I mean. [speaker001:] Yeah, so on even weeks have [pause] basic [pause] on data. [speaker002:] Yeah. [speaker004:] We could do that, yeah. [speaker006:] Yeah. [speaker004:] I [disfmarker] I [disfmarker] Personally I'd [disfmarker] I'm not in favor of more meetings. Um. [vocalsound] Because, uh. [speaker002:] Right. [speaker004:] You know. [speaker001:] I am. Oh sor [speaker006:] But I do I don't [disfmarker] I mean a lot of times lately it seems like we don't really have enough for a full meeting on Meeting Recorder. [speaker004:] Right. [speaker006:] So if we did that [disfmarker] [speaker001:] Well, except that we keep going for our full time. [speaker003:] Yep. [speaker006:] Well, cuz we get into these other topics. [speaker005:] Yeah. [speaker004:] We feel [disfmarker] We feel obligated to collect more data. [speaker001:] Yeah. [speaker006:] Yeah. [speaker001:] Ugh. [speaker006:] So if we could alternate the focus of the meeting [disfmarker] [speaker001:] I don't. Let's read digits and go. [speaker002:] Why don't we just start with that. [speaker004:] ummh. [comment] ummh. [comment] OK. [speaker002:] And then if we find, you know we're just not getting enough done, there's all these topics not coming up, then we can expand into another meeting. [speaker004:] Mm hmm. [speaker002:] But I [disfmarker] I think that's a great idea. Uh. So uh. Um. Let's chat about it with Liz and Jane [pause] when we get a chance, see what they think and [disfmarker] [speaker004:] Mm hmm. [speaker006:] Yeah that would be good. I mean Andreas and I have various talks in the halls and there's lots of things, you know, details and stuff that would I think people'd be interested in [speaker004:] Mm hmm. 
[speaker006:] and I'd [disfmarker] you know, where do we go from here kind of things and [disfmarker] So, it would be good. [speaker002:] Yeah, and you're [disfmarker] you're attending [pause] the uh [disfmarker] the front end meeting as well as the others so you have [disfmarker] you have probably one of the best [disfmarker] you and I, I guess are the main ones who sort of see the bridge between the two. [speaker001:] Bridge. [speaker004:] Mm hmm. [speaker002:] We are doing recognition in both of them. So. [speaker006:] Mm hmm. [speaker004:] Right. [speaker002:] Uh. [speaker004:] So um. [speaker001:] OK? [speaker004:] So [disfmarker] so we could talk a little bit about that now if [disfmarker] if there's some time. [speaker001:] No, no that would be for next week. [speaker004:] Um I jus So the latest result was that um um yot I tested the uh [disfmarker] the sort of final version of the PLP configuration um on development test data for [disfmarker] for this year's Hub five test set. [speaker002:] Mm hmm. [speaker004:] And the recognition performance was exactly, and I mean exactly up to the [disfmarker] you know, the first decimal, same as with the uh Mel Cepstra front end. [speaker001:] Mmm. [speaker006:] For both females and males? [speaker004:] Yes. [speaker006:] Oh! [speaker004:] Uh, well i there was a little bit of a [disfmarker] i overall. They [disfmarker] They were [disfmarker] The males I think were slightly better and the females were slightly worse but nothing really. [speaker006:] Mm hmm. [speaker004:] I mean definitely not significant. [speaker002:] Mm hmm. [speaker006:] Mm hmm. [speaker004:] And then the really nice thing was that if [disfmarker] if we combine the two systems we get a one and a half percent improvement. [speaker001:] Wow. [speaker004:] So. [speaker001:] Just with ROVER? [speaker004:] t With N best ROVER, which is like our [vocalsound] new and improved version of ROVER. [speaker001:] Mm hmm. 
[speaker004:] Which u actually uses the whole N best list from both systems [pause] to [pause] mmm, uh [pause] c combine that. [speaker002:] So except [disfmarker] I mean the only key difference between the two really is the kind of smoothing at the end which is the auto regressive versus the cepstral truncation. [speaker004:] Yeah. [speaker002:] OK. [speaker004:] And, the [disfmarker] [speaker006:] But a percent and a half? That's [disfmarker] [speaker001:] Yeah, it's pretty [pause] impressive. [speaker004:] And [disfmarker] And so uh after I told the [disfmarker] my uh colleagues at SRI about that, you know, now they definitely want to, you know, uh, have a [disfmarker] Next time we have an evaluation they want to do uh, you know, basically a at least the system combination. Um, and, you know, why not? [speaker002:] Sure, why not? [speaker004:] Uh. [vocalsound] So. [speaker001:] We clearly gotta add a few more features, though. [speaker004:] Uh w what do you mean? More features in the sense of front end features or in the sense of just bells and whistles? [speaker001:] No, uh front end features. You know we did PLP and Mel Cepstra. Let's, you know, try RASTA and MSG, and [disfmarker] [speaker004:] Oh I mean [disfmarker] Yeah. Well Right. So, we cou Yeah. That's [disfmarker] the [disfmarker] the the [disfmarker] There's one thing uh [disfmarker] I mean you don't want to overdo it because y every front end [disfmarker] You know, if you [disfmarker] you know you basically multiply your effort by N, where N is a number of different systems [speaker006:] Oh. [speaker002:] Mm hmm. [speaker004:] and [disfmarker] Um. So. So one [disfmarker] one compromise would be to [disfmarker] only to have the [disfmarker] everything up to the point where you generate lattices be basically one system and then after that you rescore your lattices with the multiple systems and combine the results and that's a fairly painless um thing. [speaker002:] Mmm. 
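[Editor's note: a greatly simplified sketch of the combination idea. The actual N-best ROVER described here does word-level alignment and weighted voting across the full N-best lists; the version below just sums normalized whole-hypothesis scores from the two systems, which is enough to show why cross-system agreement helps. The hypotheses, scores, and weight are made up.]

```python
# Simplified N-best combination: sum each hypothesis's normalized score
# across both systems' N-best lists and pick the best-supported one.
from collections import defaultdict

def combine_nbest(nbest_a, nbest_b, weight_a=0.5):
    """Each nbest is a list of (hypothesis, posterior) pairs summing to 1."""
    scores = defaultdict(float)
    for hyp, p in nbest_a:
        scores[hyp] += weight_a * p
    for hyp, p in nbest_b:
        scores[hyp] += (1.0 - weight_a) * p
    return max(scores, key=scores.get)

# Hypothetical example: system A's top choice is wrong, but both systems
# give mass to the right hypothesis, so the combination recovers it.
a = [("oh no", 0.4), ("oh snow", 0.35), ("uh no", 0.25)]
b = [("oh snow", 0.5), ("go slow", 0.3), ("oh no", 0.2)]
print(combine_nbest(a, b))  # "oh snow"
```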
An [speaker006:] Do you think we'd still get the one and a half uh [disfmarker] [speaker004:] So. I [disfmarker] I think so. Yeah. Maybe a little less because at that point the error rates are lower and so if [disfmarker] [speaker006:] Mm hmm. [speaker004:] You know, maybe it's only one percent or something but that would still be worthwhile doing. So. Um [pause] Jus You know, just wanted to let you know that that's working out very nicely. [speaker001:] Cool. [speaker004:] And then we had some results on [pause] digits, uh, with um [disfmarker] We [disfmarker] We [disfmarker] So this was uh really [comment] [disfmarker] really sort of just to get Dave going with his um experiments. [speaker002:] Mm hmm. [speaker004:] And so, uh. But as a result, um, you know, we were sort of wondering why is the Hub five system doing so well on the digits. And the reason is basically there's a whole bunch of read speech data in the Hub five training set. [speaker002:] Right. [speaker001:] Right. [speaker002:] Including digits I gather, yeah. [speaker004:] And you c And [disfmarker] Not all of [disfmarker] No it's actually, digits is only a maybe a fifth of it. The rest is [disfmarker] is read [disfmarker] is read TIMIT data and uh ATIS data and Wall Street Journal and stuff like that. [speaker002:] A fifth of it is how much? Right. But a fi a fifth is how much? [speaker004:] A fifth would be maybe uh two hours something. [speaker002:] Yeah, so I mean that's actually not that different from the [pause] amount of training that there was. [speaker004:] Right. But it definitely helps to have the other read data in there [speaker002:] So. [speaker004:] because we're doing [disfmarker] [speaker002:] Oh yeah w [speaker004:] You know the error rate is half of what you do if you train only on ti uh TIMIT [disfmarker] [comment] uh not TIMIT uh TI digits, [speaker002:] Mm hmm. [speaker004:] which is only what two hours something? [speaker002:] Right. [speaker001:] I don't know. 
[speaker004:] So. Uh, more read speech data definitely helps. And you can leave out all the conversational data with no performance penalty. That's e [speaker002:] Yeah that was the interesting thing. [speaker004:] That was e [speaker002:] Because [disfmarker] because uh, it was apparent if you put in a bunch more data it would be better, [speaker004:] Right, right. [speaker002:] but [disfmarker] but uh. [speaker004:] Right. [speaker006:] Well is there even more read speech data around? [speaker004:] Oh, yeah. So we only [disfmarker] for the Hub five training, we're only using uh a fairly small subset of the Macrophone [pause] database. [speaker006:] Mm hmm. [speaker004:] Um, so, you could beef that up and probably do even better. [speaker001:] I could also put in [pause] uh focus condition zero from Hub four from Broadcast News, which is mostly prepared speech. [speaker006:] Mm hmm. [speaker001:] It's not exactly read speech but it's pretty darn close. [speaker004:] Yeah. Yeah. Right. Well, I mean that's plenty of read speech data. I mean, Wall Street Journal, [pause] uh, take one example. [speaker001:] Yeah. That's right. [speaker004:] But um. So, you know that might be useful for the people who train the [disfmarker] the digit recognizers to [disfmarker] to use uh something other than TI digits. [speaker001:] Yeah. [speaker002:] Well they been using TIMIT. [speaker004:] OK. [speaker002:] That [disfmarker] Uh. [pause] They [disfmarker] they uh [disfmarker] they experimented for a while with a bunch of different databases with French and Spanish and so forth cuz they're multilingual tests [speaker004:] Mm hmm. [speaker002:] and [disfmarker] and uh, um, [disfmarker] and actually the best results they got wa were uh using TIMIT. [speaker001:] Hmm. [speaker002:] Uh [disfmarker] But uh [disfmarker] which [disfmarker] So that's what they're [disfmarker] they're using now. [speaker004:] Mmm. 
[speaker002:] But [disfmarker] but yeah certainly if we, um [disfmarker] If we knew what the structure of what we're doing there was. I mean there's still a bunch of messing around with different kinds of uh noise robustness algorithms. [speaker004:] Mm hmm. Mm hmm. [speaker002:] So we don't know exactly which combination we're gonna be going with. Once we know, then [disfmarker] the trainable parts of it [disfmarker] it'd be great to run lots of [disfmarker] lots of stuff through. [speaker004:] Mm hmm. Right. Well, that was that. And then I th guess Chuck and I had some discussions about how to proceed with the tandem uh system and [disfmarker] You wanna [disfmarker] [vocalsound] You wanna see where that stands? [speaker006:] Well, I'm [disfmarker] Yeah, so Andreas uh brought over the uh alignments that the SRI system uses. And so I'm in the process of um converting those alignments into uh label files that we can use to train uh a new net with. And uh so then I'll train the net. And. [speaker004:] An And one side effect of that would be that it's [disfmarker] um that the phone set would change. [speaker006:] Right. [speaker004:] So the MLP would be trained on I think only forty six or forty eight [disfmarker] [speaker006:] Eight. [speaker004:] forty eight phones? [speaker006:] Mm hmm. [speaker004:] Uh which is smaller than the um than the phone set that [disfmarker] that we've been using so far. [speaker002:] Yeah. [speaker004:] And that [disfmarker] that [disfmarker] that will probably help, actually, [speaker006:] So it's a little different? [speaker004:] because um the fewer dimensions uh e the less trouble probably with the [disfmarker] as far as just the um, um [disfmarker] Just [disfmarker] You know we want to try things like deltas on the tandem features. And so you h have to multiply everything by two or three. And so, you know, fewer dimensions in the [pause] phone set would be actually helpful just from a logistics point of view. [speaker002:] Sure. 
Although we [disfmarker] I mean, it's not that many fewer and [disfmarker] and [disfmarker] and we take a KLT anyway so we could [disfmarker] [speaker004:] Right. Exactly. So [disfmarker] so that was the other thing. [speaker002:] Yeah. [speaker004:] And then we wanted to s just limit it to maybe uh something on the same order of dimensions as we use in a standard um front end. So that would mean just doing the top I don't know ten or twelve or something of the KLT dimensions. [speaker002:] Yeah, and I think [disfmarker] and we sh again check [disfmarker] we should check with Stephane. My impression was that when we did that before that had very little [disfmarker] uh he didn't lose very much. [speaker004:] Right. [speaker006:] By just taking the top whatever? [speaker001:] Yep. [speaker002:] Yeah yeah. [speaker004:] But then [disfmarker] And then something [disfmarker] Once we have the new M L P trained up, uh one thing I wanted to try just for the fun of it was to actually run uh like a standard hybrid system that is based on you know, those features uh and uh retrain MLP and also the you know, the dictionary that we use for the Hub five system. [speaker002:] And the b And the base u starting off with the base of the alignments that you got from i from a pretty decent system. [speaker004:] Exactly. [speaker006:] Right. [speaker004:] Yeah. So that would basically give us a, um, more [disfmarker] hopefully a [disfmarker] a better system [speaker002:] Yeah. [speaker004:] um because [disfmarker] you know, compared to what Eric did a while ago, where he trained up, I think, a system based on Broadcast News and then uh tra retraining it on Switchboard or s uh and [disfmarker] [speaker002:] Yeah. [speaker004:] But he [disfmarker] I think he d he didn't [disfmarker] he probably didn't use all the training data that was available. And his dictionary probably wasn't as tuned to um conversational speech as [disfmarker] as the [disfmarker] as ours is. So. 
[speaker002:] That's [disfmarker] That's certainly one thing, yeah. [speaker004:] And the dictionary made a huge difference. [speaker002:] Uh. Yeah. [speaker004:] Uh. We [disfmarker] we made some improvements to the dictionary's uh [disfmarker] to the dictionary about two years ago which resulted in a [disfmarker] uh something like a four percent absolute error rate reduction on Switchboard, which [disfmarker] [speaker002:] Well the other thing is, dipping [pause] deep into history and into uh our resource management days, when we were collaborating with SRI before, [speaker004:] Mm hmm. Mmm. Mm hmm. [speaker002:] uh it was [disfmarker] I think, it is was a really key uh starting point for us that we actually got our alignment. When we were working together we got our initial alignments from Decipher, uh [pause] at the time. [speaker004:] Mm hmm. Yeah. [speaker002:] Uh. [pause] And. Later we got away from it because [disfmarker] because once we had decent systems going then it was [disfmarker] it was typically better to use our own systems [speaker006:] Yeah. [speaker004:] Mm hmm. [speaker002:] cuz they were self consistent but [disfmarker] but certainly to start off when we were trying to recover from our initial hundred and forty percent error uh rate. Uh. [vocalsound] But that was a [disfmarker] that was a good [disfmarker] good [disfmarker] good way to start. [speaker004:] Yeah. OK. [speaker002:] And we're not quite that bad with our [disfmarker] our Switchboard systems but it was [disfmarker] they certainly aren't as good as SRI's, [speaker004:] Yeah. [speaker002:] so [disfmarker] [speaker006:] W What is the performance on s the best Switchboard system that we've done? [speaker004:] Right. [speaker006:] Roughly? [speaker002:] Well, the hybrid system we never got better than about fifty percent [pause] error. And uh it was [disfmarker] I think there's just a whole lot of things that uh no one ever had time for. We never did really fix up the dictionary. 
Uh we always had a list of a half dozen things that we were gonna do and [disfmarker] and a lot of them were pretty simple and we never did. [speaker004:] Yeah. Mmm. But that w [speaker002:] Uh, we never did an never did any adaptation [speaker004:] Even that [disfmarker] that number [disfmarker] Right. [speaker002:] uh, we never did any [disfmarker] [speaker004:] And [disfmarker] And that number I think was on Switchboard one data, right? Where the error rate now is in the twenties. [speaker002:] Yeah. [speaker004:] So, um. [speaker002:] Yeah. So we were [disfmarker] [speaker004:] That's yet s [speaker002:] Yeah. We were probably at least a factor or two off. [speaker004:] Right. [speaker002:] Yeah. [speaker004:] So it would be [disfmarker] So it would be good t to sort of r re uh [disfmarker] [speaker002:] Yeah. [speaker004:] just at least to give us an idea of how well the hybrid system would do. [speaker002:] Yeah. But I think [disfmarker] again it's [disfmarker] Yeah. It's the conver it's the s conversational speech bit. Because our [disfmarker] our Broadcast News system is actually pretty good. [speaker004:] Mm hmm. [speaker002:] He knows. [speaker004:] Right. And the other thing that that would help us to evaluate is to see how well the M L P is trained up. Right? Because it's a pretty good um indicator of that. [speaker002:] Mm hmm. [speaker004:] So it's sort of a sanity check of the M L P outputs [pause] before we go ahead and train up the [disfmarker] uh you know, use them as a basis for the tandem system. [speaker006:] Mm hmm. [speaker002:] Yeah. It'll still probably be worse. I mean, it's [disfmarker] it'd be context independent and so on. [speaker004:] No. Sure. [speaker006:] Should we [disfmarker] Should we bother with um using the net before doing uh embedded training? [speaker004:] Not [disfmarker] But [disfmarker] [speaker002:] But. [speaker006:] I mean should [disfmarker] should we even use that? [speaker004:] Oh [pause] oh that's a good question. 
[speaker006:] Or should I just go straight to [disfmarker] [speaker004:] Yeah, we [disfmarker] we weren't sure whether it's worth to just use the alignments um from the S R I recognizer or whether to actually go through one or more iterations of embedded training where you realign. [speaker001:] Try it. You run it? Keep [disfmarker] keep both versions? See which one's better? [speaker002:] Uh, yeah. I mean. I think I agree with Ad I mean basically you would then [disfmarker] You proceed with the embedded training. [speaker004:] Mm hmm. Mm hmm. [speaker002:] It's gonna take you a while to train at this net anyway. [speaker004:] Right. [speaker002:] And while it's training you may as well test the one you have and see how it did. [speaker004:] OK. [speaker006:] Mmm. [speaker004:] Alright. [speaker001:] I could make arguments either way. [speaker004:] But [disfmarker] But so I [disfmarker] [speaker001:] You know, it's [disfmarker] [speaker004:] Well but i [speaker001:] Sort of given up guessing. [speaker004:] But in your experience I mean uh have you seen big improvements in s on some tasks with embedded training? Or was it sort of small ish uh improvements that you got [speaker002:] Uh well. It depended on the task. I mean I think in this one I would sort of expect it to be important [speaker004:] Right. [speaker002:] because we're coming from uh, alignments that were achieved with an extremely different system. [speaker004:] That are from another [disfmarker] Right. Right. [speaker001:] Although, I mean we've done it with [disfmarker] When we were combining with the Cambridge recurrent neural net, embedded training made it worse. [speaker002:] Uh. [speaker001:] Which I've never figured out. [speaker002:] Right. But I mean i [speaker001:] I think it's a bug. [speaker004:] So you [disfmarker] you started training with outputs from a [disfmarker] with alignments that were generated by the Cambridge uh system? [speaker001:] Yep. [speaker004:] And then [disfmarker] Uh. 
[speaker001:] Yeah. [speaker002:] Yeah. [speaker004:] Hmm. Well, that might probably just [disfmarker] Hmm. That was probably because your initial system [disfmarker] I mean your system was ba worse than Cambridge's. And you [disfmarker] Um. [speaker002:] Was it? I don't think it was. [speaker001:] No they were [disfmarker] they were comparable. [speaker004:] It wasn't? [speaker002:] No. [speaker004:] Really? [speaker001:] They were very close. [speaker002:] Yeah. [speaker004:] That's weird. [speaker002:] Excuse me? [speaker004:] That's [disfmarker] That's weird. No I mean it's weird that it did [disfmarker] [speaker002:] Oh! [speaker001:] That's what I said. [speaker004:] I'm sorry. It's w It's weird that it got worse. [speaker006:] That's ambiguous. [speaker002:] Um. No. Uh. Tha u we we've see I mean [disfmarker] and wi with the numbers [disfmarker] OGI numbers task we've seen a number of times people doing embedded trainings and things not getting better. [speaker004:] Oh actually it's not that weird because we have seen [disfmarker] We have seen cases where acoustic [disfmarker] retraining the acoustic models after some other change made matters worse rather than better. [speaker002:] Yeah. It just [disfmarker] [speaker004:] Yeah. [speaker002:] But I But I would [disfmarker] I would suspect that something that [disfmarker] that had um a very different Um feature set, for instance [disfmarker] I mean they were using pretty diff similar feature sets to us. [speaker001:] Yep. [speaker002:] I [disfmarker] I would expect that something that had a different feature set would [disfmarker] would uh benefit from [disfmarker] [speaker004:] Mm hmm. [speaker006:] What about uh hidden unit size [pause] on this. [speaker002:] Oh, wait a minute, and the other thing uh, [speaker006:] Oh. [speaker002:] sorry, it was [disfmarker] the other thing is that what was in common [comment] to the Cambridge system and our system is they both were training posteriors. [speaker001:] Right. 
[speaker002:] So I mean, uh, that's another pretty big difference [speaker001:] Ah yeah. That's another big difference. [speaker002:] and uh, one bac at least [disfmarker] Back at [disfmarker] [speaker004:] You mean with soft targets? Or [disfmarker]? Sorry, I'm sor I missed [disfmarker] What [disfmarker] What's the key issue here? [speaker002:] Oh, that uh both the Cambridge system and our system were [disfmarker] were training [pause] posteriors. And if we're [disfmarker] we're coming from alignments coming from the SRI system, it's a likelihood based system. So [disfmarker] so that's another difference. [speaker004:] Yeah. [speaker002:] I mean. You know, there's diffe different front end different [disfmarker] different uh, um, training criterion [disfmarker] Uh, I would think that in a that an embedded [pause] uh [pause] embedded uh training would have at least a good shot of improving it some more. [speaker004:] Mm hmm. [speaker002:] But we don't know. [speaker004:] OK. [speaker002:] You gonna say something? [speaker006:] Yeah. I was wondering uh you know what size net I should [disfmarker] Anybody have any intuitions or suggestions? [speaker002:] Uh, how much training data? [speaker006:] Well, I was gonna start off with the small train set. [speaker002:] And how [disfmarker] How many hours is that? [speaker006:] That's why I was [disfmarker] I [disfmarker] I'm not sure how much that is. [speaker004:] Uh, I think that has about [disfmarker] Well i you'd [disfmarker] would be gender dependent training, right? [speaker006:] Gender dependent, yeah. [speaker004:] So [disfmarker] So I think it's [disfmarker] uh that's about mmm, something like thirty hours. [speaker006:] Thirty hours. [speaker004:] Thirty hours per gender. [speaker001:] I'm not sure what this'll mean. [speaker006:] In the small training set? [speaker001:] Hello? [speaker004:] I [disfmarker] I think so. I'll [disfmarker] [speaker001:] Excuse me? 
[speaker004:] It's definitely less than a hundred [disfmarker] You know, it's more like [disfmarker] like thirty forty hours [speaker001:] Alright. [speaker004:] something like that. [speaker001:] Wrong number. [speaker006:] They called to tell us that? [speaker004:] Yeah. [speaker002:] Yeah. Um. So. Uh. after run [speaker006:] I mean, I didn't want to do too big, [speaker002:] Right. [speaker006:] just [disfmarker] [speaker002:] So At least a couple thousand hidden units. I mean. [speaker004:] Mm hmm. [speaker002:] It's [disfmarker] it's th the thing I'll [disfmarker] I'll think about it a little more but it [disfmarker] it'd be toss up between two thousand and four thousand. [speaker006:] Mmm. [speaker002:] You definitely wouldn't want the eight thousand. It's m It's more than [disfmarker] [speaker006:] And a thousand is too small? [speaker002:] Oh let me think about it, but I think that [disfmarker] that uh th at some point there's diminishing returns. [speaker006:] Mm hmm. [speaker002:] I mean it doesn't actually get worse, typically, [speaker001:] Mm hmm. [speaker002:] but it [disfmarker] but [disfmarker] but there is diminishing returns and you're doubling [pause] the amount of time. [speaker004:] Remember you'll have a smaller output layer so there's gonna be fewer parameters there. [speaker001:] But not by a lot. [speaker002:] Not by much. [speaker004:] And then [disfmarker] [speaker002:] Fifty s Fifty four to forty eight? [speaker001:] Vast majority is from the input unit. [speaker004:] OK. Yeah. [speaker002:] Yeah. Yeah. It'll have a very tiny effect. [speaker001:] Right, because you used the context windows and so the input to hidden is much, much larger. [speaker002:] Yeah. [speaker004:] Oh I see, I see, yeah, of course. [speaker002:] Yeah. [speaker004:] Yeah. It's negligible, OK. [speaker002:] Yeah, so it's [disfmarker] it'd be way, way less than ten percent of the difference. [speaker004:] Uh huh. [speaker002:] Uh. 
There's uh [disfmarker] How bi how big Let's see. What am I trying to think of? [speaker006:] The [disfmarker] The net that [disfmarker] that we did use already [pause] uh was eight thousand hidden units and that's the one that Eric trained up. [speaker002:] Right. And that was trained up on uh like a hundred and forty hours of [disfmarker] of speech. [speaker004:] Was that gender dependent or independent? [speaker006:] Gender dependent. [speaker002:] Oh. So that would be like trained on s sixty or seventy hours. [speaker004:] Mm hmm. [speaker002:] So, uh, yeah definitely not the one thousand uh [disfmarker] two thousand fr I mean the four thousand will be better and the two thousand will be almost [disfmarker] will be faster and almost as good. [speaker001:] It'll be faster. [speaker006:] Yeah. [speaker002:] So. [speaker006:] Maybe I'll start off with two thousand just to see. [speaker004:] Mm hmm. [speaker002:] Yeah. [speaker004:] OK. [speaker006:] OK. [speaker002:] Yeah, thirty hours is like a hundred and ten thousand uh seconds. Uh, so that's like eleven [disfmarker] eleven million frames. And a two thousand hidden unit net is uh I guess about seven, eight hundred thousand parameters. So that's probably [disfmarker] That's probably fine. I mean a four thousand is well within the range that you could benefit from but the two thousand'd be faster so [disfmarker] [speaker004:] Right. I actually have to go. So. [speaker002:] Alright. Uncle Bernie's rule is ten to one. Bernie Woodrow's Rule of [disfmarker] yeah [disfmarker] Uncle Bernie [disfmarker] yeah. Yes sir. [speaker001:] We're just waiting for you to leave. Anything else? [speaker002:] Nah. Since we have nothing to talk about we only talked for an hour. [speaker001:] OK. [speaker002:] So. [speaker001:] If [disfmarker] yeah that's right. [speaker003:] Yeah. [speaker001:] Uh, well, we started late. [speaker006:] Transcript [speaker002:] de ba de. de ba de That's all folks! 
[speaker006:] We have littler heads so. [speaker005:] OK. So, everything's [disfmarker] Oh, the microphone adjustments, we should check [disfmarker] Would you check the microphone adjustments, please? Is [disfmarker] is M So, is [disfmarker] is Marisa's close enough? We [disfmarker] could we look [disfmarker] [speaker004:] I'm [disfmarker] [speaker001:] Hmm? [speaker005:] But, I mean, also, to get the [disfmarker] do [disfmarker] you [disfmarker] you [disfmarker] what you said about [disfmarker] [speaker004:] Yes. [speaker005:] is i is it close [disfmarker]? OK. Alright. [speaker003:] Hello. [speaker002:] Hello? [speaker004:] I can change it. [speaker001:] Hmm. [speaker005:] But this w You know, he's, like, the expert, [speaker004:] I was just trying a new head position. [speaker005:] and I don't know exactly, uh [disfmarker] [speaker001:] Mm hmm. [speaker005:] Yeah. [speaker002:] Hmm. [speaker003:] Oh. [speaker001:] OK. I see. [speaker002:] Hello? OK. [speaker003:] Hello. [speaker006:] Hmm. [speaker005:] It's interesting. [speaker003:] Hello. That's good. [speaker005:] Hello? Yeah. Your voice is so [disfmarker] is so [disfmarker] i it's like, s you can see across all the channels when you talk. [speaker001:] Hello. [speaker004:] OK. [speaker001:] OK. Mine's good. [speaker002:] Hello? [speaker005:] It's great. Thanks very much. Thanks very much. Alright. So. Uh, welcome back. Thank you for coming. Um, so, what I thought we'd do today is pick up where we were last time and maybe, uh, go over some [disfmarker] some ideas about, um, well, i the [disfmarker] return to the conventions, but I also wanted to, uh, [vocalsound] share with you a couple of the things that I've picked up in terms of checking. And these are [disfmarker] uh, th these are things that came up with the tra with the Tigerfish transcripts. I'm just doing this cuz I think it's interesting. OK. So, um, I have an example [speaker003:] OK. [@ @] Not sure. Thank you. 
[speaker002:] Oh, there's [disfmarker] Hold on. [speaker005:] which [disfmarker] um, [vocalsound] so there're a couple of examples. I have one in front of you which is the one where, um, um, u Leah [disfmarker] Leah noticed this one where I said something and, um, the transcriber interpreted it as "twosome gruesome, uh, different sizes different this", and, um, actually what I meant to say was um, "so you're saying groups of different sizes, different size groups?" [speaker003:] Whoa. [speaker005:] Yeah. [speaker001:] Wow! [speaker005:] I I [disfmarker] It's really enormously far away. Now this [disfmarker] this could be one of those cases where the auditory was just really bad. [speaker004:] That's what I was just thinking, or maybe it was really quiet or something? I don't know. [speaker005:] Maybe very [disfmarker] very soft. Maybe with the microphone noise, [speaker004:] Yeah. [speaker005:] or something from someone else, or an overlap, [speaker004:] Mm hmm. [speaker005:] or [disfmarker] but they're not usually that bad, I [disfmarker] I don't think. [speaker006:] Mm mmm. [speaker005:] Then, um, I have a couple of others. So, there's one, um [disfmarker] So, the person said, "interesting idea", but Tigerfish thought it was, "uh, just an idea". Now that changes the meaning. [speaker003:] Right. [speaker002:] Hmm. [speaker005:] It's interesting. [speaker001:] Hmm. [speaker006:] Right. [speaker005:] And then there are various ones like, um, [vocalsound] um, "ish decay" instead of "HTK". [speaker003:] Oh. [speaker004:] Ugh! [speaker005:] And now this one [disfmarker] [speaker006:] i "Ish decay"? [speaker005:] "Ish decay". Now "ish" was in parentheses [disfmarker] [speaker003:] "Ish decay." [speaker006:] Yeah, what's an "ish"? [speaker005:] Well they [disfmarker] they [disfmarker] they were suspicious that wasn't right, but that was the closest they could come. [speaker001:] Mm hmm. 
[speaker005:] And, you know, with [disfmarker] when you're dealing with acronyms, [speaker004:] Wow. [speaker005:] that's [disfmarker] that's [disfmarker] that shows one of the problems with acronyms. [speaker006:] Yeah. [speaker005:] Is that it's clear if you know it, [speaker004:] I wonder if that was someone who was [disfmarker] who had an accent, maybe? [speaker005:] but [disfmarker] Always conceivable. I don't remember off hand. I did check the example. And, um, well, and this [disfmarker] this would be consistent. So, uh, it was transcribed as "with H", but it was really "with age". [speaker004:] Mmm. [speaker003:] Mmm. [speaker001:] Uh huh. [speaker005:] Yeah. And, uh, then there are some where actually punctuation changes the meaning. So, this is [disfmarker] the these are things that Barbara show uh, found. "Spurts wouldn't be right" was what it looked like. But, actually when you listen to it, it would be "Spurts wouldn't be. Right?" [speaker003:] Oh. [speaker002:] Hmm. [speaker001:] Hmm. [speaker004:] Oh. That's very subtle. [speaker005:] A It is subtle. And yet, you know, it's like [disfmarker] [comment] And instead of [disfmarker] instead of "and it's" they put "Annette's". [speaker001:] Annette's? [speaker005:] Now this [disfmarker] this is interesting because it's like, you know, we know that there's no Annette on any of the meetings we've ever dealt with, [speaker001:] Right. [speaker004:] Right. [speaker005:] but, um, it's, you know, conceivable. [speaker003:] Right. [speaker001:] Hmm. [speaker005:] And then, um, let's see. There's one other here I thought was interesting. Well this was [disfmarker] OK, so it should have been "the quals w the [disfmarker] the quals slides will be fine". This is Jerry Feldman. And, uh, instead it was transcribed with "the" and then an uncertain syllable "the [disfmarker] the" and then "all the slides will be fine". [speaker003:] Ah. [speaker005:] Which is interesting. [speaker006:] Oh. 
[speaker005:] And [disfmarker] and, you know, this was a case where the timebin was off. [speaker002:] Hmm. [speaker005:] I wanted to bring an example where th [speaker001:] Mm hmm. [speaker004:] Oh. [speaker003:] Yeah. [speaker005:] So "all the slides" it's w started too late. [speaker001:] Right. [speaker005:] It's another case of devoicing at the initial [disfmarker] [speaker003:] Right. [speaker004:] Uh huh. [speaker005:] Yeah. So, I thought that was very interesting. And I think that it shows, um [disfmarker] these sorts of things, I was thinking it would be interesting to do an analysis of these things because I think that, um, if [disfmarker] well first of all, there's this overall point that it shows a degree to which, when you listen to speech, and when you're just, in general, listening, probably, but it shows espec specifically this [disfmarker] to this task, that it's very constructive. You're not going on simply the auditory there's a huge contribution from the top down processing, and the [disfmarker] you know, e different types. You've got the contextual information, you have syntax, you have your mastery of English, and these various things. But I think these kinds of errors, it just shows the degree which it's not strictly an acoustic task. [speaker001:] Mm hmm. Yeah. [speaker004:] Oh, yeah. [speaker005:] And the other thing I think that's interesting is, the types of [disfmarker] the types of confusions that happen, um, do preserve certain aspects of the stimulus. [speaker003:] Mm hmm. [speaker005:] So, the stress tends to be preserved, you know, the content word will be substituted for another content word, and, um, It's [disfmarker] it's not surprising that they would hear this. I'm o I'm often surprised by how [disfmarker] how plausible something can be with [disfmarker] without it being correct. [speaker001:] Mm hmm. [speaker005:] Because it really seems like it's [disfmarker] [speaker003:] Mm hmm. 
[speaker005:] And then often, you know, i it's, um, i there's ex the parentheses are used, so the person knew it wasn't really a perfect match. [speaker001:] Mm hmm. [speaker005:] But, uh. OK. Now having said that I'm [disfmarker] I'm impressed by h how [disfmarker] by how, um [disfmarker] well, how infrequent these [disfmarker] these kinds of extreme cases are. Most of the types of things that I find in checking are much [disfmarker] much more mild. Yeah. So [disfmarker] so. [speaker006:] Mm hmm. [speaker005:] Do you wanna [disfmarker] [speaker006:] Actually, that comes back to a point. We had talked about this last week, where you were interested in "so". [speaker004:] Mm hmm. [speaker006:] Saying "so" at the end of something. [speaker004:] Mm hmm. [speaker006:] And sometimes that would get transcribed as [disfmarker] OK, so, let me just think of an example. Um, OK, let's say I say, "Well, that would be hard to do, so." [speaker005:] Mm hmm. [speaker006:] OK, right? And the way I would transcribe it, I would put that "so that would be hard", comma, "so" at the end. [speaker005:] Mm hmm. [speaker006:] Cuz it's kind of just the end of that [disfmarker] the [disfmarker] [speaker001:] Mm hmm. Would you put a period after "so"? [speaker006:] Right. [speaker001:] You would or you wouldn't? [speaker006:] "So." Yeah. Yeah, I would just sort of end it. [speaker001:] Yes. Mm hmm. [speaker006:] But then sometimes they transcribe it like "So that would be a problem", period, new word, like [disfmarker] capitalized "So". [speaker003:] Oh. [speaker005:] Mm hmm. [speaker001:] Right. [speaker004:] Oh, like it's a whole new utterance. Like it's not connected. [speaker006:] Right. And [disfmarker] and it just seems to sotal totally [disfmarker] that [disfmarker] that's not the meaning. [speaker001:] Mm hmm. [speaker006:] And [disfmarker] and it changes the meaning, I think. Um [disfmarker] [speaker005:] In a way [disfmarker] So, uh, let me be sure I understand. 
So, you're saying [disfmarker] [vocalsound] s boy, it's hard to ig uh, ignoi that [disfmarker] ignore that word. [speaker001:] Uh huh. [speaker005:] Um, So y the example is, "That would be hard" [disfmarker] [speaker006:] Or whatever, yeah, "that would be a problem", [speaker005:] "That'd be a problem," [speaker006:] whatever. [speaker001:] Right. [speaker006:] Yeah. [speaker005:] "so." [speaker006:] "So." [speaker005:] And how does it change the meaning? What's [disfmarker] what are the two different meanings there? [speaker006:] It seems like [disfmarker] because if I was just reading that thing, I would say, "So that would be a problem. [pause] So." [speaker005:] OK. [speaker006:] You know what I mean? Th if [disfmarker] if it was like, "period, capital SO", as opposed to, um, "That would be a problem, so." [speaker005:] Oh, I see what you're saying! [speaker006:] "That would be a problem. [pause] So." [speaker001:] Right. [speaker003:] It [disfmarker] it [disfmarker] [speaker001:] I think the second one where the "so" is all by itself, it's like, "So that would be a problem. [pause] So." You know, the "so" is like disconnected from the [disfmarker] [speaker006:] Some [speaker003:] Right, as though it's leading into something else. [speaker004:] It's like [disfmarker] kind of like introducing [disfmarker] [speaker001:] Yeah, the first one is [disfmarker] like, the "so" is connected to the sentence, [speaker003:] Bu [disfmarker] And [disfmarker] [speaker001:] and [disfmarker] "That'd be a problem, so." [speaker003:] yeah. I [speaker001:] You know, it's [disfmarker] but the next one is like not [disfmarker] not connected to "that would be a problem". It's like, "So." [speaker004:] It's like introducing a new thing that they don't talk about. [speaker003:] A new subject. [speaker001:] Yeah, less [disfmarker] [speaker006:] Right, yeah. [speaker003:] The first one, too, is more like [disfmarker] [speaker006:] So [disfmarker] [speaker005:] Mm hmm. 
[speaker003:] To me, I don't necessarily put, um, like a d a p a p a period after that "so". I might actually either put a dash after it depending on whether or not they got unt interrupted or something, [speaker001:] Mm hmm. [speaker003:] or I would maybe even act as though they were, um, just trailing off. [speaker001:] Right. [speaker003:] So put that [disfmarker] those two dots after it. [speaker001:] I do that too. [speaker003:] Um, uh, which depending on what's going on, like, I'll do that. [speaker001:] I [speaker003:] Cuz to me it sounds more like they're trailing off. [speaker001:] Right. Sometimes, though, it does sound like they ended a sentence [speaker004:] Yeah. [speaker001:] and then I do [disfmarker] [speaker003:] Yeah. Yeah, yeah, yeah. [speaker004:] Yeah. [speaker002:] Mmm. [speaker001:] like sometimes it's just like, "That'd be hard, so". [speaker006:] Right. [speaker001:] You know, and but [disfmarker] and then ofte often it sounds like they were trying to think of something else to say, [speaker003:] Right. Yeah. [speaker001:] but they couldn't really, you know [disfmarker] And then they get interrupted and then I put a dash. [speaker004:] Yeah. Yep. [speaker001:] You know, but I never usually put period and then new word "so" unless it's like, end of the sentence and then, "So." [speaker004:] "What are we gonna do now," but they don't say that. [speaker001:] Mmm. [speaker005:] Mm hmm. [speaker001:] Yeah. [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker005:] It's [disfmarker] it's interesting that there are so many cases. And I agree. I th I think those are very good conventions. I also like w you know, this [disfmarker] this whole point about how [disfmarker] how that y th th the fact that you're raising this, I [disfmarker] li I like this example that, um, we have several things that are being cued by intonation, [speaker006:] Right. 
[speaker005:] because if [disfmarker] if that's a f nice full falling intonational contour on "that's a problem", [speaker001:] Mm hmm. [speaker005:] "eh, that'll be a problem". then I would think that there should be a period there. [speaker003:] Period. Right. [speaker004:] Yeah. [speaker001:] Right. [speaker005:] And then especially if you've got a pause following it, but you don't always [disfmarker] y these things are correlated but they're not perfect. [speaker004:] Yeah. [speaker003:] That's another thing, is i if you're not a native speaker of English you might not pick up, because, uh, you know, every language has different, um, intonation mappings. [speaker006:] Mm hmm. [speaker001:] Yeah. That's always one of the hardest parts about the native speakers, [speaker005:] Mm hmm. Mm hmm. [speaker003:] And [disfmarker] [speaker001:] is you don't know exactly like how to do the punctuation [speaker003:] Right. [speaker001:] because their intonation is totally different. [speaker005:] You mean non native na Non native speakers? [speaker001:] I mean [disfmarker] Non native speakers is what I meant. Yeah. [speaker005:] Yeah, yeah, yeah, yeah, yeah, yeah, yeah, yeah. [speaker004:] Yeah. [speaker005:] Absolutely. Yeah, absolutely. [speaker003:] And also if you're a non native speaker transcribing it, you wouldn't necessarily [vocalsound] kn uh, kn understand the conventions of English t intonation as, you know, intimately as a native speaker might. [speaker001:] Yeah. That's true. Right. Mm hmm. [speaker004:] Well, I would hope they would have native speaker transcribers transcribing. [speaker005:] Ideally, wouldn't that be something. I ideally it would be nice to have like a Spaniard t tran transcribing the [disfmarker] [speaker004:] Oh [disfmarker] y yeah. [speaker005:] Oh, is that what you mean? Or [disfmarker] [speaker004:] No, that's not what I meant, [speaker001:] Oh. [speaker004:] but that would be cool. [speaker005:] That'd be nice. 
[speaker004:] No, what I meant was that I would hope that, for meetings in English you would have native speakers transcribing it because it [disfmarker] you would have more intuitions about what should be there. [speaker005:] I agree. [speaker001:] Yeah. [speaker003:] n Yeah. [speaker004:] Even if you can't fully understand the word, you can make a guess, you can [disfmarker] that isn't completely off like some of these are, [speaker003:] Right. [speaker004:] but they're pretty rare [disfmarker] those things, the f [speaker005:] Mm hmm. [speaker004:] the really bad mistakes that they make are pretty rare I think, so. [speaker005:] I [disfmarker] I [disfmarker] OK So, in terms of like when you're going through the Tigerfish transcripts, you don't find a lot of these kinds of extreme cases? [speaker004:] Not this [disfmarker] t t [speaker003:] Not so much. No. [speaker005:] OK. [speaker004:] I mean [disfmarker] [speaker001:] Mmm, not "twosome gruesome". [speaker004:] No, not "twosome gruesome". Sometimes, but [disfmarker] they tend to be pretty good about parentheses. [speaker003:] Mm hmm. [speaker005:] Mm hmm. Good. Good to hear. [speaker001:] Yeah. Yeah, sometimes they p they usually put them in prethen parentheses, and you think to yourself, You were way off, [speaker004:] Yeah. [speaker001:] but at least you knew you were way off. " [speaker004:] Yeah. Yeah. [speaker005:] Mm hmm. [speaker004:] Exactly. [speaker006:] Yeah. I just wonder, like, what influences them to think of "twosome gruesome", [speaker005:] Good. [speaker006:] like maybe this person just saw a horror movie or something? And, uh, this is [vocalsound] the interpretation that came to mind or something? [speaker005:] It's [disfmarker] Could be. [speaker004:] Yeah. [speaker006:] You know what I mean? Like, where would you get the word "gruesome"? [speaker004:] I don't know. [speaker006:] You know, you just sort of pull it out of the air? 
[speaker004:] Well, I mean, you get the "GR" from "groups", I guess, [speaker006:] Anyway. [speaker004:] but [speaker006:] Yeah. [speaker001:] Yeah. [speaker004:] I don't know where the rest of it came from. [speaker001:] Uh, "groups of", "gruesome". [speaker003:] g Group. [speaker006:] OK. [speaker003:] "Groups of", "gruesome". [speaker001:] You've got the, uh, "S" in the middle. [speaker003:] Yeah. If you [disfmarker] if for some reason they didn't hear the "F", if they didn't hear the "fuh", [comment] it might just sound like [disfmarker] [speaker001:] "Gruesome". [speaker003:] Yeah, "grou" uh, "gruesuh". [speaker004:] Yeah. [speaker001:] Yeah. It's both labial [pause] at the end. [speaker006:] Mmm hmm. [speaker003:] You know. And then you would kind of have to come up with some sort of consonant [speaker004:] "Groups of" [disfmarker] [speaker003:] and maybe it's an "M" to you. [speaker004:] Well, also [disfmarker] [speaker006:] Right. [speaker004:] "Group of, groups of, groups [pause] of, group [disfmarker] s" [speaker003:] "Fff". [comment] Because also, I mean, "F" is "Fff". [comment] is, um, [vocalsound] "Fff". [speaker001:] Gruesome [pause] groups of [pause] gruesome [pause] groups of [speaker004:] I wonder where the "P" went, though? [speaker003:] Right. "Fff". [comment] Because it's [disfmarker] [speaker001:] Maybe the "P" wasn't pronounced very well, [speaker004:] Yeah? [speaker006:] "Pss". [speaker001:] like "groups of" [speaker003:] Yeah, because it's, um, bilabial too. "Fff". [comment] you know, you'll di uh, or interl interdental, "fff". [speaker001:] Mm hmm. [speaker003:] you know, it's close enough, I guess, to "mmm", [comment] you know. [speaker004:] Nnn, I guess. [speaker003:] You're getting the closure of the lips. So maybe that's [vocalsound] what they were do getting. I don't know. [speaker004:] It's possible. [speaker002:] Hmm. [speaker001:] Hmm. 
[speaker005:] Well, you know, there's also this problem that they didn't have context to constrain the interpretation, [speaker004:] Yeah. [speaker001:] Yeah. [speaker005:] cuz [disfmarker] Yeah. [speaker001:] Yeah, that's really hard, [speaker005:] OK. [speaker001:] th to not be able to go and listen to the other channels and figure out the context of what it's occurring in. [speaker003:] Yeah. [speaker005:] Yeah, when I'm clearing the par parentheses, I find that very useful. I [disfmarker] I [disfmarker] sometimes I try to just go through and pick up the parentheses, [speaker001:] Yeah. [speaker005:] see if I can do it just by itself, but it [disfmarker] it's remarkable how sometimes something can be so totally opaque, and then you go back to, you know, the person's previous utterance or you go to the mixed channel, or you, uh, check on the other things and suddenly it becomes perfectly clear. [speaker001:] Mm hmm. [speaker005:] It's very [disfmarker] very surprising. Very surprising. [speaker003:] Yeah. [speaker005:] Good. [speaker006:] But this other business, um, that you have at the bottom with people getting transcribed on another channel [speaker005:] Yes. [speaker006:] is, uh [disfmarker] is much more common. [speaker004:] Is that what it's about? I wasn't [disfmarker] I couldn't u I don't know, for some reason that didn't make sense to me. But th it's about when things are transcribed on other channels than what they're set on? [speaker006:] Right. [speaker004:] OK. [speaker006:] And what's always funny to me is when it's th you know, they m mi miss the gender completely, like, you know, Morgan'll be on Jane's channel, or just something bizarre like that. [speaker004:] Yeah. [speaker006:] Um, so. [speaker003:] Well, maybe it's because they don't realize that you're supposed to sp Do they? I mean, maybe they don't really realize that you're supposed to split it up. 
Or something like, they maybe think, "Well, I hear it just as clearly [disfmarker] I hear this other person's voice just as clearly and I realize it's not Jane, and I realize it's not a female even, but, um, because it's so clear on this channel, I'm supposed to transcribe it." [speaker004:] I'm supposed to do it. [speaker001:] Mm hmm. [speaker004:] h [comment] Yeah. [speaker003:] I [disfmarker] I'd imagine that's p that might be what that is. [speaker004:] They probably don't even know how many people are at the meeting, or w anything. [speaker005:] Yeah, it's really a an odd task in [disfmarker] in a way because they're [disfmarker] they have no clues at all about how dominant the speaker is on the channel they're transcribing. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker004:] Yeah. [speaker006:] Mm hmm. [speaker005:] And the pre segmenter doesn't give them any or the information eith either. I'm [disfmarker] I'm sure that it's because of the [disfmarker] [speaker001:] Yeah. [speaker004:] Wow. [speaker006:] Yeah. [speaker004:] We're [disfmarker] we're really spoiled, then, with our interface here, [speaker002:] Mmm. [speaker006:] Fancy schmancy. [speaker004:] right? [speaker005:] Yeah. [speaker004:] I mean [disfmarker] s it's very useful. [speaker006:] If I was gonna make just a guess, though, from my experience it would seem like the men get mis um, [speaker005:] Yes. [speaker006:] the men get moved onto other channels, maybe more frequently. And it's just probably cuz they have deeper voices and they're getting picked up on more channels, um, than their own channel. [speaker005:] That would make [disfmarker] that would make some sense. [speaker004:] Yeah. [speaker006:] Um, for w you know, it's an acoustic thing. [speaker005:] OK. [speaker004:] Or they just tend to sp peak [comment] louder, maybe? [speaker003:] That's true. They do [disfmarker] they do say that m men's voices c carry better. [speaker006:] Yeah. 
[speaker003:] I mean, that's why they use, like, I think who was telling me? Ian Maddieson was talking about this, that they use male voices in, um, [vocalsound] in, um, airports. [speaker005:] Oh! [speaker001:] Hmm. [speaker003:] Because the acoustics of it [disfmarker] it just fits into the hea a hearer's range [disfmarker] [speaker004:] For announcements and things? [speaker003:] Yeah, that's th It fits into a hearer's range of perception, um, better than a lot of female voices. [speaker002:] Hmm. [speaker006:] Hmm. [speaker004:] Hmm. [speaker003:] And so that's why they use male voices. We were just talking about that. [speaker004:] They should do that at BART, too, because sometimes it's hard to hear. [speaker005:] I see. [speaker002:] And now they have those strange computerized voices that n [speaker005:] So [speaker004:] Yeah. Those are [disfmarker] i a w actually probably just the computer problem [speaker001:] Hmm. [speaker004:] not the gender problem. [speaker005:] Huh. So [disfmarker] so this means that people with hearing problems would hear, uh, uh, one of these announcements better if it were a male voice then. [speaker003:] d Yeah, depending on their auditory, um, capabilities. [speaker005:] Uh huh. [speaker003:] I mean, if their auditory capabilities, uh, have been, you know, um, squished down [vocalsound] like in a certain place where it would be n easier for a person with normal, [comment] um, hearing to hear a male's voice, um, like if [disfmarker] I'm saying if [disfmarker] if they [disfmarker] d if they had a [disfmarker] some sort of, um, you know, damage to part of their brain that processes auditory information, then [disfmarker] and, i uh, e i u if that part of the brain normally codes for, you know, male voices, uh, in a normal hearer, [comment] then they might not be able to. You know. Does that make sense? [speaker005:] Mm hmm. Mm hmm. Mm hmm. [speaker003:] Like if that part is damaged. So, but, um [disfmarker] [speaker005:] Selective. 
[speaker003:] but yeah, if [disfmarker] I guess for most h hearers it's just easier. And it's like a certain, uh, kind of, like, rough male voice too, kind of graspy [disfmarker] raspy, I mean, [speaker006:] Hmm. [speaker002:] Hmm. [speaker001:] Hmm. [speaker003:] apparently. [speaker004:] Hmm. [speaker005:] Is better? [speaker003:] Is better. [speaker005:] Ah, it's a wider band then. Because of [disfmarker] do you think it might be from the [disfmarker] [speaker003:] Yeah, I guess so. [speaker005:] She asked this question last time which was, um, whether our system here, the recording system, may have some of the same channel restr some sort of s channel restrictions, such as they ran across when they were looking at telephone, uh, transmissions at a certain point. Cuz, you know, telephone transmissions, um, there's some filtering going on. You don't hear the full [disfmarker] the full band of frequencies. [speaker001:] Mm hmm. [speaker004:] Oh. [speaker001:] Right. [speaker002:] Hmm. [speaker005:] And, w uh v I asked c a couple of people and they c answer came back that [disfmarker] [speaker003:] It's really interesting. [speaker005:] It was intere an interesting [disfmarker] it's reminded me because of the comment about noise, noise being sp really spreading frequencies across, uh, using frequencies across a broader spectrum. [speaker003:] Uh huh. [speaker004:] Yeah. [speaker005:] So in a way, bandwidth can increase with some of these cases of distortion that we have. But it's n it's not a useful kind of increase [vocalsound] in frequency range. [speaker003:] Right. Yeah. I was just saying that it, like, you know, sometimes, um, if [disfmarker] Remember I was saying, like, if there's a discrepancy between fricatives, you know, like, um, "shuh" [comment] or "suh" [comment] will show up in a certain part of the spectrum, and [disfmarker] and, uh, oftentimes the "fuh" [comment] is like knocked out of the spectrum by [disfmarker] [speaker004:] Yeah. 
By filtering or something? [speaker003:] um, by filtering in some of the older phones. [speaker001:] Mm hmm. [speaker003:] And that's why, like he [speaker002:] Hmm. [speaker004:] So what was the answer here? It [disfmarker] that's not the case? [speaker005:] Answer? [speaker003:] That they don't [disfmarker] [speaker005:] That they [disfmarker] that they don't. We don't have a reduction in that direction. [speaker004:] Oh, OK. [speaker003:] Y right. [speaker001:] Oh. [speaker004:] Good. [speaker005:] That, uh, I mean, you know, one could have if one had cheaper microphones than we have, but we've got good microphones [speaker004:] Mm hmm. [speaker001:] Right. [speaker005:] and another [disfmarker] there is sort of an upper limit set on our bandwidth by the sampling rate but then that becomes [disfmarker] So Adam's ans that [disfmarker] [speaker003:] Mm hmm. [speaker005:] then it becomes a matter of, Is that something that would be relevant to perceptual range. And, I didn't really have the sense that it would. [speaker003:] I didn't get that from him, actually. I felt like, um, he's basically saying that for our [disfmarker] for all intents and purposes with this, it's covering whatever sounds are necessary to be covered, like everything. [speaker005:] Mm hmm. Mm hmm. Good. I [disfmarker] I [disfmarker] that was my impression also. [speaker003:] That's [disfmarker] that's what I th [speaker005:] It wa so we don't have that kind of reduction like you have on telephones. And actually in the work that they were doing with the telephone corpora, they did run into limitations and [disfmarker] uh, certain types that they [disfmarker] when [disfmarker] what their speech recognition approach is because of the [disfmarker] of the filtering. [speaker006:] Hmm. 
[speaker005:] I mean it does seem like when I hear these things, except for the distortion parts, where the volume's too high, or th of course when the volume's too low, but that's a different problem, um, except for those cases, it seems to me that it really is as if the person were speaking in my ear. I don't feel that it's being reduced in any way. [speaker004:] Yeah. [speaker002:] Hmm. [speaker006:] Mm hmm. [speaker004:] Yeah, that's what it seems like. [speaker003:] Yeah. Yeah, I do too. Yeah. [speaker005:] Yeah? And he was saying in the [disfmarker] in the, uh, areas where you have clipping, where it's recorded too high, and [disfmarker] and it [vocalsound] starts to saturate and you have trouble, [speaker003:] Right. [speaker005:] um [disfmarker] I thought that was very interesting. So bandwidth then gets [disfmarker] gets actually wider instead of narrower because you have, um [disfmarker] Uh oh. You have these [disfmarker] y y you still have [disfmarker] The frequencies are represented but the energy distributions are thrown off, and, um, in fact you have also some additional frequencies. So you end up with a wider bandwidth functionally, I mean [disfmarker] [speaker003:] Yeah. So, like, you were saying, like [disfmarker] in the case of something with clipping, [speaker005:] But not helpful for speech. [speaker003:] does it [disfmarker] were you saying that, uh [disfmarker] it [disfmarker] it increases to [disfmarker] or, expands to accommodate that? Or, [speaker005:] No, technically, the bandwidth is wider than [disfmarker] [speaker003:] no? [speaker005:] so it's actually paradoxical in a way cuz you usually think bandwidth [disfmarker] better bandwidth is better. [speaker003:] Oh. [speaker001:] Mm hmm. [speaker005:] But in the cases of [disfmarker] of distortion then you can end up with, [speaker003:] Right. Oh, OK. Yeah. [speaker005:] uh, [speaker003:] I see what you're saying. 
[speaker005:] a wider bandwidth which isn't useful in terms of understanding speech. [speaker003:] Right. [speaker001:] Right. [speaker006:] Hmm. [speaker005:] So. [speaker004:] I have a question. [speaker005:] Yes. [speaker004:] In the [disfmarker] [vocalsound] some of the earlier meetings I guess, there was a lot of spikes, or what I called spikes. [speaker005:] Yes. [speaker004:] I was wondering what that is. I don't even know if it's relevant or useful to know, but I'm just curious. [speaker005:] One source was, there was a problem with a connection where the microphone went int went into a box [disfmarker] [speaker004:] Mm hmm. [speaker005:] I was trying [disfmarker] trying to think of this as prior to [disfmarker] yes, it was prior to the wireless. There was a particular connection which wasn't wireless, which was, um, mechanically bad in some [disfmarker] some sense, [speaker004:] Loose. Yeah. [speaker001:] Mm hmm. [speaker005:] and if it got touched, it would s it would send an impulse, [speaker003:] Oh. [speaker002:] Hmm. [speaker005:] i in al an uh [disfmarker] an irrelevant non wanted [vocalsound] artifact. [speaker004:] Yeah. [speaker001:] Hmm. [speaker004:] Cuz those ones sometimes were extraordinarily bad and pai [speaker003:] Huh. [speaker001:] Painful. Yeah. [speaker004:] Yeah. [speaker005:] I agree. Absolutely painful. [speaker001:] They were painful. [speaker005:] Exa I agree. [speaker003:] Uh [disfmarker] [speaker005:] And that's [disfmarker] that's th you know, I would watch the visual sis signal to avoid that. [speaker001:] Oh yeah. [speaker004:] Yeah, [speaker005:] It's like [disfmarker] [speaker006:] Yeah. [speaker004:] and like not listen to them. [speaker003:] I do remember those, actually. [speaker001:] And then you see it, and off with the headphones! [speaker006:] Yeah, you could see it coming and you'd turn the volume down or something. [speaker004:] Or just uh s [speaker003:] Yeah, yeah, yeah, yeah. I remember I had one. 
One with that and it was just, al all the time. [speaker004:] Yeah. Or, like, skip it entirely. Yeah. [speaker001:] Oh yeah. [speaker003:] Yeah. [speaker004:] Yeah. [speaker005:] Yeah. [speaker003:] Especially if they also got louder [comment] too, [comment] and they hit it, [speaker004:] Y y yeah. [speaker005:] Oh! [speaker003:] it was just like "whoahh!" [speaker005:] Yeah. That's really awful. You've noticed that those have gotten better over the intervening [disfmarker]? [speaker004:] Yes. Absolutely. [speaker001:] Oh yeah. Mm hmm. [speaker003:] Yes. [speaker005:] Good. [speaker006:] But sometimes you'll see like something coming up like, "Uh oh. Someone's gonna cough really loudly", and I could tell because of the way it looks! [speaker004:] Yeah. [speaker006:] Or "someone's about to laugh really loudly", [speaker004:] And so you turn it down? [speaker001:] Mm hmm. [speaker005:] Oh. [speaker006:] um [disfmarker] yeah, [speaker005:] Yeah. Yeah, yeah. [speaker006:] and then you're just sort you've got like [disfmarker] you know [vocalsound] [disfmarker] you're sort of like [speaker001:] Mm hmm. [speaker003:] Take one off, or something. [speaker006:] Yeah. [speaker004:] No, a I think [disfmarker] [speaker002:] Hmm. [speaker001:] Yeah. [speaker006:] So unless there's gonna be some speech in there, I don't think I need to listen that carefully. [speaker004:] I think that's really important. Yeah, I think that's really important [speaker006:] Yeah. [speaker004:] because I generally have the volume turned up to above normal volume just so I can hear more clearly, [speaker006:] So. [speaker001:] Mm hmm. Yeah. [speaker003:] Right. [speaker004:] and if it's like that for someone screeching into the microphone than it would be really painful. [speaker006:] Yeah. [speaker003:] Right. [speaker006:] You could h really hurt your ears. Yeah. [speaker004:] I think you could, yeah. [speaker001:] Mm hmm. [speaker006:] Oh, yeah. [speaker005:] I think so too. [speaker002:] Hmm. 
[speaker006:] So. [speaker005:] Yeah, I [disfmarker] [speaker004:] Yeah, that's [disfmarker] Especially [disfmarker] we have really good headphones, [speaker005:] when I realized [disfmarker] [speaker004:] like [disfmarker] [speaker005:] yeah, when I realized, I [disfmarker] I meant to tell everyone to watch that visual signal cuz that's [disfmarker] that's saved me a bunch of times. [speaker006:] Gives some good clues, yeah. [speaker004:] Yeah, I mean, it's very useful in like all senses cuz [disfmarker] [speaker006:] Oh yeah. Yeah. [speaker001:] Yeah. [speaker003:] Oh, yeah. [speaker005:] Yeah. [speaker004:] It's very good. [speaker006:] Sometimes it doesn't pick up backchannelling, though, and so sometimes you have to, um, try to listen [disfmarker] [speaker004:] No. Or, turn up [speaker006:] there's no visual but [disfmarker] Yeah, you have to [disfmarker] the resolution, right? [speaker004:] Yeah. Or, like [disfmarker] [speaker001:] Right. [speaker005:] It might be useful to [disfmarker] to talk about the interface for [disfmarker] for a bit, [speaker006:] So. [speaker005:] in terms of what it is that you like particularly, and what you don't, [speaker001:] Mm hmm. [speaker005:] and, uh, that kind of thing. [speaker001:] You know what I would think would be a really n um, neat feature for the interface? Some sort of backwards tab where it could go backwards without you having to go up and do it manually, [speaker005:] Ooh, interesting. [speaker004:] Yeah. [speaker003:] T Yeah. [speaker001:] and, you know, press the button and "juh juh juh juh juh juh juh". [speaker002:] Hmm. [speaker001:] Cuz if you miss something [comment] and you wanna go back to it, it's such a [disfmarker] such a hassle. [speaker005:] Mm hmm. [speaker003:] That's true. [speaker001:] It'd be so nice if there was just some sort of like, you know, uh, opposite of tab that you could hit, that it would just sort of move backwards. 
There wouldn't even have to be any, um [disfmarker] [speaker006:] Hmm. [speaker004:] But it would play it backwards too. [speaker001:] I know. It w you wouldn't even have to have the audio. That's not [disfmarker] I wouldn't [disfmarker] I mean [disfmarker] [speaker004:] But just to move back, but not necessarily a whole timebin? [speaker006:] Just to jump it back. [speaker001:] Yeah. [speaker005:] Interesting. [speaker006:] Right, cuz sometimes you'll hear [disfmarker] It'll be in [disfmarker] right in the middle of the timebin, [speaker005:] Hmm. [speaker006:] and you don't have to listen to the whole thing again [speaker003:] Right. [speaker006:] cuz you've already gotten the first part. [speaker001:] Right [speaker004:] Yeah. [speaker002:] Mmm. [speaker001:] Oh yeah. Yeah. [speaker006:] So you just want it to go back, I don't know [disfmarker] [speaker001:] Yeah, some [disfmarker] [speaker003:] And otherwise you have to drag it. [speaker001:] Yeah, some timebins are humongous, [speaker004:] Or otherwise, drag it, or like click on it. [speaker001:] and if you miss it, you know, you've gotta go back and find it, [speaker005:] Mm hmm. [speaker001:] and dragging it and looking for it and you may not be able to see it, even. [speaker003:] Right. [speaker004:] Yeah. [speaker001:] You know? [speaker006:] That's true. [speaker004:] I think that would be useful. [speaker006:] Hmm. Yeah. [speaker001:] Yeah. [speaker005:] Interesting idea. I'll [disfmarker] I'll s I'll suggest that to the charer. [speaker006:] Yeah, the less you have to click on the mouse, I think the faster you could go. [speaker004:] Yeah, because I noticed I started feeling like I was getting some sort of carpal tunnel issues. [speaker001:] Mm hmm. [speaker006:] Yeah, you were having some wrist problems and stuff, [speaker004:] Much better now that I have the keyboard st tray, [speaker006:] right? [speaker004:] but it was [disfmarker] [speaker006:] Yeah. 
[speaker004:] I think cuz I had my hand in one position using it all the time. [speaker001:] Mmm. [speaker004:] And that I would [disfmarker] it would crack [pause] a lot. [speaker006:] Yeah. Ooh! [speaker004:] It was very weird. [speaker002:] Mmm. [speaker004:] But. [speaker003:] Yeah. [speaker005:] That's [disfmarker] it's good to be careful about the mousing. You were gonna say? [speaker003:] I s Oh, yeah. Um, something [disfmarker] Um, maybe I just don't know how to do this, [speaker005:] Yeah. [speaker003:] but when I wanna expand a timebin back [speaker004:] Uh huh. [speaker001:] Mm hmm. [speaker003:] or even forward, I'll use the mouse, the middle [disfmarker] [speaker005:] How do you mean "expand"? What do you mean by "expand"? [speaker003:] Oh, uh, in terms of, um, if, uh [disfmarker] if it's [disfmarker] it's [disfmarker] if it's cut off, like the beginning of an utterance, [speaker005:] OK. [speaker001:] Mm hmm. [speaker004:] Oh. I see what you're saying. [speaker005:] OK. OK. [speaker003:] I'll move it [disfmarker] move the line back. [speaker005:] Mm hmm. [speaker003:] And I usually just use the mouse. How do you do that with the keyboard? [speaker001:] Mmm. Really? Oh, I do that with the keyboard. [speaker004:] Well, I would press "tab" [speaker006:] I do that with the mouse, yeah. [speaker004:] and listen to where I wanted it to stop, [speaker001:] Mm hmm. [speaker004:] and then press "return", create a new timebin, [speaker001:] "Return" and then [disfmarker] [speaker003:] Oh. [pause] y [speaker004:] and then "shift" [disfmarker] [speaker001:] and then "shift" [speaker004:] "shift", what is it? [speaker001:] "backspace". [speaker003:] Oh, OK. So you just do it that way. [speaker004:] "Shift", "backspace" to c collapse it, yeah. [speaker001:] That's [disfmarker] that's what I do, [speaker003:] OK. OK. [speaker001:] yeah. I do it all on the keyboard. [speaker006:] Oh. [speaker003:] OK. To me that's takes longer than to do it with the mouse. 
[speaker005:] Yeah. [speaker004:] Oh. Do you drag [disfmarker] do you drag the line? [speaker001:] Oh really? [speaker003:] Because you can just click [disfmarker] y [speaker005:] Yeah. [speaker003:] I just drag the line. [speaker004:] See, I never [disfmarker] I never do that for some reason. [speaker001:] I don't do that either. [speaker002:] I didn't n even [disfmarker] even know you could do that. [speaker005:] I vary. I [disfmarker] I should [disfmarker] it depends on how much I wanna go back [speaker004:] Yeah. [speaker005:] and how certain the boundary is. If I can see it [disfmarker] if it's [disfmarker] if I can see this thing's coming up and it's just so clear, [speaker003:] Oh yeah, that's true. [speaker004:] Yeah. [speaker001:] Right. [speaker005:] it's that mountain right there, I can tell, then [disfmarker] [speaker004:] Yeah. [speaker003:] Right, then I'll drag it. [speaker001:] Mm hmm. [speaker005:] yeah, dragging is really f efficient. But if it's like a mess in there, then it helps me to be able to locate it with the actual clicking. [speaker001:] Mm hmm. [speaker003:] Right. [speaker004:] Yeah. [speaker006:] Mmm. [speaker001:] Right. [speaker002:] Mmm. [speaker005:] And then do this [disfmarker] what th you were just describing. [speaker004:] Yeah. [speaker005:] But, um [disfmarker] [speaker004:] Um, one thing that's gotten me into problems a little bit is having the "shift backspace" as the collapser. [speaker003:] Ah. Yeah. [speaker004:] Because I'll tend to delete things, [speaker003:] Yeah. [speaker004:] or go too far, or something, [speaker002:] Ni [speaker004:] or I'll be holding down "shift" because I wanna type something as a capital, [speaker001:] Mm hmm. [speaker006:] Oh yeah. [speaker003:] Yeah. Yep. Yep, yep, yep. [speaker004:] and then end up collapsing the timebins. [speaker001:] And [disfmarker] and then collapse it. [speaker003:] Yep. Yep. [speaker001:] I did that so much. 
[speaker004:] So, it might be better to use a different keys or something. [speaker005:] Oh that's annoying! [speaker003:] Yep. [speaker005:] This [disfmarker] if I were to [disfmarker] [speaker006:] Yeah. [speaker005:] if I were to add an option, it might be to have an undo for that. [speaker004:] Yeah. Yeah! [speaker002:] Oh, that'd be good. That would be good. [speaker003:] Yeah. [speaker005:] Because I have had that happen. [speaker001:] Yeah. That would be nice. [speaker003:] That would be great. [speaker004:] Or just, I mean [disfmarker] [speaker005:] Yeah, s Good. [speaker003:] I've erased entire utterances before, accidentally. [speaker004:] but, or just to change the keys used, you know, for [disfmarker] for that option. [speaker001:] Yeah. I'm rather used to it now, though. [speaker004:] I'm used to it too, but, I mean [disfmarker] [speaker001:] Yeah. [speaker003:] But then when you go to type on your keyboard at home, [speaker001:] Mm hmm. [vocalsound] You're so scared, [speaker003:] do you do this too? I'm so scared. Like I [vocalsound] [disfmarker] I won't hold shift down [disfmarker] [speaker001:] To [disfmarker] to hold shift down while you're backspacing? Like [disfmarker] [speaker003:] I won't do it. [speaker001:] Yeah. You're typing capitals, and you're backspacing, and you're like "[vocalsound] Oh, take the shift off! I'll collapse everything!" [speaker003:] Yeah, yeah, yeah, yeah, yeah! I'm just like, "What am I doing?" [speaker001:] I totally do that now. [speaker004:] I don't think I'm that bad, but [disfmarker] [speaker003:] Yeah, yeah. [speaker006:] Oh yeah, I do find that the keyboard is different here than it is even at my s for my school computer [speaker005:] Mmm. [speaker006:] and so I'll al I'll just be [disfmarker] if I was just at school, and I come here, it's just [disfmarker] [speaker001:] Oh yeah. 
[speaker006:] or vice versa it's just, uh [disfmarker] it's like I'll [disfmarker] I'll reach for one key and it'll do something completely different, [speaker001:] Or vice versa, yeah. [speaker006:] and I have to just get used [disfmarker] just get used to it again [disfmarker] [speaker003:] Uh huh. [speaker001:] Th that backspace and the little [disfmarker] [speaker006:] Yeah, the backspace is in the wrong place. [speaker001:] and the [disfmarker] th yeah, that little evil [vocalsound] apostrophe that keeps showing up. [speaker002:] The little accent thingy. Right. [speaker001:] "I don't want you!" [speaker004:] Yeah, I do that too. [speaker006:] Oh yeah, and then I'll keep [disfmarker] I keep caps locking too [speaker005:] Oh. [speaker006:] cuz the caps lock's in a different spot. [speaker001:] Oh, yeah. [speaker004:] Yeah. [speaker006:] So. [speaker002:] Hmm. [speaker003:] Oh. [speaker005:] How do you get per I don't know the evil apostrophe. [speaker004:] It's just th right above the [pause] backspace. [speaker001:] Um, [vocalsound] yeah, it's one of the backspaces on the keyboards that most of us are probably used to. [speaker006:] Th [speaker002:] Yeah. [speaker005:] Oh, oh! [speaker001:] So you wanna backspace and you get apostrophe, apostrophe, apostrophe, apostrophe. [speaker003:] Yeah, yeah, yeah. Yeah. [speaker002:] Mmm. [speaker001:] Yeah, t you're like, "No!" [speaker005:] I see. [speaker006:] Oh! Yeah, I do that all the time. [speaker004:] Yeah. [speaker005:] OK. So this is the difference between the PC keyboard and the SUN keyboard? [speaker001:] Mm hmm. [speaker005:] So, like I only have one [disfmarker] I have one versus two of the others. [speaker004:] I think so. [speaker001:] Yeah. [speaker003:] Yeah. [speaker002:] Yeah. [speaker004:] Did it [disfmarker] Right. [speaker005:] Is that right? [speaker001:] Right. I [disfmarker] I'm used to it now too. 
[speaker005:] So it's [disfmarker] [speaker004:] I'm actually used to [disfmarker] I'm used to the SUN one, I guess, now. [speaker005:] OK. [speaker003:] Yeah. But then, like, I think there's a different keyboard on [speaker005:] Mm hmm. [speaker003:] I always forget th oh, Linguine, than on, um [disfmarker] [speaker005:] There's [disfmarker] Basmati? [speaker003:] No, the other one. [speaker005:] Amaretto? [speaker003:] Amaretto. [speaker005:] OK. [speaker004:] Amaretto has a different one. [speaker003:] I i [speaker004:] Linguine is [disfmarker] Basmati have the same. [speaker003:] Right. And it's [disfmarker] and I always use [disfmarker] I tend to use Linguine. [speaker004:] Right? [speaker003:] But the last couple times, like, you've stolen Linguine before I can get to it. [speaker004:] Ooh! [speaker003:] So I go onto Amaretto, which is fine. But then like, um, it takes me a good, like, ten minutes to get re used to that keyboard. [speaker004:] Yeah. [speaker003:] And so I keep [disfmarker] [speaker006:] I don't know. You seem very upset about it, so. We [disfmarker] maybe we should trade. [speaker003:] No, it's really OK. But, yeah. I [disfmarker] I [disfmarker] I have noticed that I'll screw some stuff up, like, when I'm trying to get re adj [speaker005:] S so which t [speaker004:] But just the beginning. [speaker005:] I [disfmarker] I've forgotten which one is which. So, which one do you prefer? Is it [disfmarker] is it the [disfmarker] So, if we had a show of hands, how many people would vote for the SUN keyboard versus the [disfmarker] [speaker001:] OK. [speaker005:] So let's put it the other way. How many people would not vote for the SUN keyboard? Would vote for the other one? The PC [disfmarker] PC keyboard? [speaker004:] Which is the [disfmarker] [speaker003:] The PC? [speaker001:] The PC one? [speaker003:] I would vote for the PC. [speaker004:] The PC one is the Amaretto one, right? [speaker005:] I don't know. I don I'm not sure. 
It's the [disfmarker] [speaker003:] Yeah. [speaker006:] Huh. [speaker001:] I I'm not sure which one is which. [speaker003:] Or no! [speaker006:] Yeah. I just get used to it. [speaker005:] Uh oh. [speaker006:] So. [speaker001:] Yeah. [speaker003:] I like the Linguine keyboard. [speaker002:] Yeah. [speaker001:] I mean like, when I first had started working I would have definitely wanted the PC keyboard, [speaker005:] The Linguine is [disfmarker] OK. [speaker003:] That's all I know. [speaker001:] but now I'm [disfmarker] I'm just used to it. [speaker005:] You're multi keyb [speaker004:] I prefer th I prefer the one on Linguine or Basmati. [speaker006:] Yeah. [speaker004:] I don't know what it's called. [speaker005:] OK, let's do it that way. [speaker002:] Mm hmm. [speaker005:] So [disfmarker] so if we [disfmarker] if we have a map of the room, the one when you first walk in [disfmarker] [speaker001:] Mm hmm. [speaker006:] Is Basmati. [speaker004:] Oh. [speaker003:] To the left [disfmarker] [speaker005:] the one on the left is that Basmati? [speaker002:] By the light switch, yeah. [speaker003:] Yep. [speaker006:] I believe that's correct, yeah. [speaker004:] Yeah. [speaker005:] OK. [speaker006:] Linguine must be the one in the left hand corner. [speaker005:] And [disfmarker] [speaker001:] It's the one in [disfmarker] in that corner, right? [speaker003:] Linguine is the one in the corner. [speaker004:] In the c [speaker001:] That [disfmarker] that has the PC [disfmarker] [speaker004:] Yeah, the one in the [disfmarker] [speaker005:] And then the f the far right corner as you walk in? [speaker001:] Yeah, r Yeah, that's [disfmarker] that one's got the [disfmarker] the PC keyboard, and then the two on those sides have the, um, SUN keyboard. [speaker003:] Yeah. So, basically, right as you walk in the door, the one straight ahead of you is Amaretto. [speaker001:] Right. [speaker005:] Yep. U u [speaker006:] Hmm. 
[speaker003:] And the one to the left of that against the far corner is Linguine. [speaker005:] Good. OK. [speaker006:] Hmm. [speaker005:] And the one close to you is Basmati. [speaker004:] Yeah. [speaker003:] Right. [speaker005:] OK. So now, ha ha who would've [disfmarker] [speaker001:] Yes. [speaker005:] So, if you have a strong [disfmarker] Now you're bi keyboardal so, [speaker001:] Uh, yeah. [speaker005:] s but [disfmarker] but, um, [speaker003:] Nice. [speaker005:] who [disfmarker] who would prefer the keyboard on Linguine? OK. We got two votes for Linguine. And who would prefer Amaretto? Amaretto? So we got more [disfmarker] more multi keyboardal people? [speaker002:] Ready. [speaker006:] I just have no preference. [speaker005:] Well, that's lovely. [speaker002:] Yeah, it doesn't really matter. [speaker005:] Same here? [speaker001:] Yeah. [speaker006:] Yeah. [speaker005:] Excellent. [speaker002:] Yeah. [speaker005:] OK, so i [speaker006:] It just goofs me up for like maybe a minute and then I sorta get used to it. [speaker001:] Exactly, exactly. [speaker004:] I [disfmarker] I I could get used to [disfmarker] I could get used to it too. [speaker003:] Yeah, I could get used to it too. [speaker005:] OK. [speaker001:] Mm hmm. [speaker004:] I just am very used to [disfmarker] [speaker006:] Huh. [speaker003:] i Right. [speaker004:] I don't know. Cuz I always use it. [speaker005:] I pr I see. I [disfmarker] I have experienced this keyboard conflict, and f with me the things that are confusing are th where the control key is, so I end up [speaker004:] Mmm. [speaker005:] pressing shifts and control keys whenever I want the other one, whiche whichever it is. [speaker001:] Oh, that's true. [speaker004:] Mm hmm. Mm hmm. [speaker001:] I forgot about that one, [speaker003:] Oh. [speaker001:] yeah. [speaker006:] Mmm. [speaker002:] Mmm. [speaker006:] Mm hmm. [speaker004:] Mm hmm. 
[speaker005:] I could easily put a PC keyboard in there if anyone had [disfmarker] I [disfmarker] we already have one, I could add ane another one [disfmarker] [speaker001:] Right. [speaker005:] we [disfmarker] so, w you know, maybe [disfmarker] let me know if you want it [disfmarker] to have it shifted. [speaker001:] I'm [disfmarker] [speaker005:] But, um, OK. That's good. Oh, let me see. So, um, [vocalsound] I [disfmarker] I wanted to, um, m maybe comment a little more on the interface in terms of what aspects of it are helpful in this, unique the shapes of some of these things are. I know we had this interface for a time when people were doing marking of the numbers, [speaker004:] Mm hmm. [speaker005:] and, um, um, I've forgotten [disfmarker] Someone said that they could recognize a seven when they saw it. They could see the [disfmarker] the waveform and [disfmarker] "Oh, it's gonna be a seven." [speaker001:] Oh wow. [speaker003:] Oh yeah. [speaker004:] I [disfmarker] I remember recognizing certain numbers. [speaker005:] Becau [speaker004:] I don mmm, I don't remember which ones, but I think it was maybe [disfmarker] well, I could recognize when there was a fricative at the beginning of a number for some reason. It was like, a little bit of energy, or some whatever you call it, and then it grew really big. So like "four" and "five" and "three" all had this same characteristic, and "six" and "seven" o both had the same characteristics to their [disfmarker] in the beginnings. [speaker001:] Hmm. [speaker004:] But "six" and "eight"? "Eight", [comment] um, I remember tended [disfmarker] that people tended to not pronounce the last "T". [speaker005:] Oh. [speaker004:] And when they did, it was slightly farther away from the rest of the word than you would have es expected. [speaker003:] Mm hmm. [speaker004:] Same with "six". The "X", well the "KS" or whatever, at the end. [speaker006:] Hmm. [speaker001:] Hmm. [speaker004:] But [disfmarker] Yeah. 
That was a [disfmarker] that was really interesting to notice that. Uh, It's a very different interface, though. I mean, it seems very strange. [speaker005:] Mm hmm. It is different. Well, it w it would [disfmarker] it had the benefit of it [disfmarker] So, this [disfmarker] this was to handle the digits, uh, only, [speaker001:] OK. [speaker005:] which is what you figure. And, um, did you [disfmarker] I thought you might have done that, Joel. Did you also? [speaker001:] Nn nnn. [comment] No. [speaker005:] So, two people did that, i um, I thought. [speaker006:] I did it a little. [speaker005:] Did you do it? I thought so. Good, good. [speaker006:] Just a little bit. [speaker004:] Yeah. [speaker006:] Yeah. [speaker005:] OK. [speaker004:] Yeah, I only did it a little bit, too. [speaker006:] Yeah, what happened to that, you just stopped doing it? Or [disfmarker]? [speaker005:] Well, I think that we'll bring it back. [speaker006:] Oh, OK. [speaker005:] The [disfmarker] th [speaker001:] This was a different interface entirely? [speaker004:] Yeah. [speaker005:] Uh, Adam wrote this as [disfmarker] uh, using the same, uh, computer language as Channeltrans is written in, [speaker001:] Mm hmm. [speaker005:] and what it would do is present you with the digit string that the person was supposed to have said, and then you would listen to it, and everyth the j there were two parts to the task, one of them was to see if that's what they actually said, and to indicate, you know, in the [disfmarker] i transcribe errors of it. [speaker003:] Mmm. [speaker001:] Hmm. [speaker005:] And then the other aspect was to tighten the timebins. And those data were actually used in some analyses which showed uh, very [disfmarker] very good results. [speaker006:] Right. Good. [speaker001:] Oh wow. [speaker005:] And so it shows, I mean [disfmarker] it's [disfmarker] it's useful. This is [disfmarker] this is why we do these digits. It's a nice standard task, [speaker001:] Mm hmm. 
[speaker004:] Mm hmm. [speaker005:] and you have lots of different voices, [speaker003:] Mm hmm. [speaker004:] Mm hmm. [speaker005:] and it's very clear. You know what the canani canonical answer is when a person says a number, um, at least most of the time, you know what number it was they said. So its [speaker001:] Right. [speaker005:] "ground truth", as they call it, is very clear. [speaker004:] Mm hmm. [speaker001:] Hmm. [speaker003:] Right. [speaker005:] So. But i [disfmarker] it was, um, really rather minimal. I mean, I [disfmarker] I like that interface. Very efficient, and really is, um [disfmarker] I th think that we'll probably use it for getting the digits refined in terms of their timebins. [speaker001:] Hmm. [speaker004:] If I remember correctly, it seemed to be very efficient in the key strokes that you used to do certain things. [speaker006:] Mm hmm. [speaker004:] I can't remember what they were, now, but I just remember that it was [disfmarker] once you learned them, it was very neat and concise and, th like [disfmarker] [speaker005:] Excellent. [speaker004:] couple keys that you used. [speaker006:] Yeah the hardest thing'd be finding them. [speaker004:] Hmm? [speaker006:] Cuz I would get the whole tr I'd get the whole transcript of the meeting [speaker005:] Oh, that's right! [speaker006:] and you had to find them. You'd have to find where the digits were. [speaker004:] Oh. [speaker006:] And they'd usually be at the end somewhere, [speaker001:] Oh wow. [speaker003:] Oh. [speaker006:] but you had to fig figure out where the heck in the meeting to [disfmarker] to look for them, [speaker004:] Yeah. [speaker005:] That's right. [speaker001:] Mm hmm. [speaker006:] so. [speaker004:] Yeah. [speaker005:] I also think that [disfmarker] wasn't this a stage where some of these meetings hadn't been transcribed, so I don't think you could use a string search, right? [speaker004:] Yeah. [speaker005:] I mean, I was trying to remember how that worked. 
[speaker001:] Mmm. [speaker005:] But it was difficult finding them for some reason. [speaker006:] Yeah! [speaker004:] Yep. [speaker005:] Well, y it was also, like you [disfmarker] [speaker006:] Yeah, the interface was different. I [disfmarker] I [disfmarker] yeah, it was [disfmarker] it was more complicated. [speaker005:] That's a down side. [speaker006:] But anyway, [speaker005:] Mm hmm. [speaker006:] than what it would be now, so. [speaker005:] Actually, you [disfmarker] you also did a different type of transcription i uh, with res respect to, um, our sister project, the SmartKom project, which involved listening to people who were supposed to be wandering through Germany, [speaker001:] Oh, right. [speaker005:] and this is s supposed to be something that would be useful in developing an interface to help them find their way. [speaker001:] Mm hmm. [speaker005:] So, um [disfmarker] someone else did that too, I've forgotten who [disfmarker] who else did that? Did anybody else here do that? OK. [speaker004:] I feel like I did a little checking or something on that. [speaker005:] Actually, I think you might have done this also, as I think about it. [speaker004:] Yeah. I don't know [disfmarker] I don't remember actually transcribing it, but I remember listening to [disfmarker] [speaker005:] Oh. Mm hmm. So actually we started with the words [speaker001:] Yeah. [speaker005:] and it was a matter then of trying to find the most efficient way to match the words up with the s with the actual sounds. Cuz they were reading from a script. [speaker004:] Yeah. Right. [speaker003:] Mmm. [speaker001:] Mm hmm. [speaker005:] And then there was some conversation in addition, that [disfmarker] that, um, needed to be transcribed. [speaker002:] Mmm. [speaker001:] Mm hmm. [speaker005:] But, um, yeah. One thing that you learn with this interface right away is that if you have really, really, really long utterances, it's extremely inefficient to do anything. [speaker002:] Hmm. 
[speaker005:] Because you end up [disfmarker] p p p if you try to n go up in the transcript, you end up with this huge [disfmarker] huge space of time [speaker003:] Mm hmm. [speaker001:] Yep. [speaker005:] and things covering several screens and it's just horribly i inefficient, [speaker004:] Yeah. [speaker001:] Yeah. [speaker005:] so. [speaker001:] Yeah. [speaker004:] One thing that I still can't figure out with the interface, it does this sometimes but other times it doesn't, is when you click on [disfmarker] you know, there's a bar above where this sound wave is, that has, um, n the [disfmarker] it has a little bar of where the meeting is, like in [disfmarker] [speaker005:] Yes. [speaker006:] Mm hmm. [speaker004:] and it goes along as you're listening to the meeting and then it ends up the end. And if you click on that bar to the right of that little marker then it moves forward, [speaker005:] Mm hmm. [speaker004:] if you click on the left it moves back. But sometimes it moves two frames [comment] or two screens, [speaker005:] Mm hmm. [speaker003:] Mmm. [speaker005:] Oh yes. Mm hmm. [speaker002:] Hmm. [speaker005:] I've noticed that. [speaker004:] and sometimes it doesn't. [speaker005:] I think that's totally dependent w on how you have the resolution set. So that little tiny thing. [speaker004:] Really? [speaker005:] And if I increase the resolution so it's spread out a bit more, then I seem to have more control over it. [speaker004:] OK. [speaker005:] But, I don't have [disfmarker] [speaker001:] Mmm. [speaker004:] Cuz I noticed that a few times it would do it [disfmarker] it would move twice [speaker003:] That kind of makes sense [speaker005:] Yes. 
[speaker004:] and then in [disfmarker] n sometimes i it wouldn't, [speaker003:] because [disfmarker] Yeah, i [speaker004:] and maybe i maybe I had resolution set differently [speaker003:] Cuz, yeah, if the s the resolution [disfmarker] If it's moving too slowly [disfmarker] then, like, if you were to click on some place then it makes sense that it would get kind of confused. [speaker004:] or I s [speaker003:] But if it's moving quickly, [comment] seems that it's covering more ground faster, [speaker004:] Mm hmm. [speaker003:] so [disfmarker] I don't know. [speaker005:] I'm gonna report that to him, cuz I [disfmarker] I've [disfmarker] I know [disfmarker] that's one of the reasons why, when I uh [disfmarker] suggested to visual scan the record, that I suggested doing it by clicking on the right of the [disfmarker] of that navigation bar you're mentioning. [speaker004:] Mm hmm. [speaker005:] Or, actually, clicking on the right of the sound wave and then letting it run past the boundary, [speaker004:] Right. [speaker005:] because I had the same problem. [speaker004:] Yeah. That's what I tend to do, is I use a soundwave, [speaker005:] It's [disfmarker] [speaker004:] but if I'm trying to scan through a long period of what appears to be silence sometimes I'll s ry and be sc try to scan it, [speaker005:] cuz you always know [disfmarker] Yes. Yeah, much more [disfmarker] Mm hmm. [speaker004:] but I can't because [speaker005:] Cuz you end up missing entire frames. [speaker004:] it skips. Yeah. So then I end up clicking on the arrow, they have the arrow at the far right, [speaker005:] Ye I was surprised by that. Yeah. Yes, OK. OK. [speaker004:] and so I use that sometimes. [speaker005:] So, on the [disfmarker] on that [disfmarker] on that bar you were describing. Clicking [disfmarker] [speaker004:] Yeah. Mm hmm. At the far right and far left there's a little arrow that allows you to go like second by second or something, [speaker005:] OK. 
[speaker001:] I didn't know about that. [speaker004:] I don't know. [speaker001:] Really? [speaker004:] Not second by second, but little bit by bit, yeah. [speaker005:] So, is this the arrow on the right, [speaker001:] Oh. [speaker005:] it would be underneath that little resolution bar? I mean, it's like in that u cluster up there? [speaker004:] Um, it's not on the resolution part. It's where you have [disfmarker] it's where you have the marker of where you are in the meeting. So you have this bar, uh, and then you have a time marker that moves along. Say, you're halfway through the meeting, it's gonna be halfway along the line. [speaker005:] Mm hmm. I have to do something visually. So, my [disfmarker] I would say you have [comment] the transcript screen, and then you have a bar. I think you're referring to this bar, aren't you? [speaker004:] I think [disfmarker] Yeah. [speaker005:] And then there's like this little resolution changer. [speaker004:] Yeah. It's [disfmarker] Yeah. [speaker005:] And then down here you have the soundwave, and then something down here, [speaker004:] Exactly. [speaker001:] Oh. [speaker005:] and then you got the cha the different channels come out. [speaker004:] So right at the right and left of that [disfmarker] [speaker005:] So r right in here you have this bar that you move and as y as you go along it [disfmarker] [speaker004:] Yeah. [speaker005:] OK, so, is [disfmarker] Are you saying [disfmarker] [speaker001:] Oh that! Yeah. [speaker005:] you saying that there's an arrow right here? [speaker004:] Yeah! [speaker005:] OK. And then there's an arrow right here? [speaker004:] Yeah. Yeah. [speaker005:] OK. So you can [@ @] [disfmarker] [speaker004:] Exactly. [speaker001:] Oh. Does [disfmarker] That doesn't [disfmarker] [speaker003:] I usually just don't use that, I think. [speaker001:] Yeah, that skips [disfmarker] Oh, is that what you're talking about? [speaker003:] Oh. [speaker001:] If you use that [disfmarker] [speaker002:] Mmm. 
[speaker001:] I've never used [disfmarker] been able to use that cuz it [disfmarker] I s feel like it skips such huge portions. [speaker004:] Maybe we have resolution set totally different. [speaker002:] Yeah. [speaker004:] Cuz I noticed when [disfmarker] Jen and I have totally different resolutions, [speaker001:] Maybe we do. [speaker004:] cuz hers moves so much quicker than mine, [speaker003:] I tend to make it move fast. [speaker001:] Oh, I d I [disfmarker] I [disfmarker] Yeah, I wouldn't [disfmarker] [speaker004:] which means it's more stretched out, right? [speaker001:] I wouldn't use that unless I was trying to get to a whole different part of the [disfmarker] [speaker003:] Right. Right. Right. Right. [speaker004:] Yeah. I usually have it set to where it has, um, point five second intervals on the numbers. [speaker006:] Right. That's when I use it too. [speaker001:] Yeah. [speaker003:] Hmm. [speaker005:] Oh. That's interesting. I haven't paid attention to the numbers. I sort of like [disfmarker] visually, i it helps me to have it be separated enough that I could find a new breaking point if I needed it. [speaker004:] Yeah. [speaker003:] Yeah. That's the reason I separate it out. [speaker004:] Yeah. I [disfmarker] [speaker001:] Right. [speaker003:] Yeah. [speaker004:] I do too, but sometimes if it gets too [disfmarker] too [disfmarker] for me too stretched out, um, I'll tend to think that there's a break in between a pause inside of a word or something. [speaker005:] Expanded? Mm hmm. [speaker003:] Oh, yes. [speaker004:] I don't know. [speaker003:] Because the, uh [disfmarker] the waveform becomes too stretched out. [speaker004:] Yeah. [speaker003:] Right. Because [disfmarker] [speaker004:] So I prefer it to be a little bit tighter, [speaker003:] Yeah. It's nice to have [disfmarker] [speaker004:] but I'll stretch [disfmarker] I'll stretch it in a particular place if I need to, or something. [speaker003:] Right.
A sort of, like, happy medium, so you can make sure that the waveform has its, like, th g shape. [speaker001:] Uh, right. Mm hmm. [speaker004:] Yeah. [speaker003:] Yeah. [speaker004:] Yeah. [speaker003:] Now I know what you're talking about. [speaker004:] Yeah. That's kind of interesting, though. [speaker003:] Yeah. [speaker005:] Well, we're coming kind of close to the end of our time here. Um, I did ask about dessert and they told me it was ice cream and it doesn't save well, so. [vocalsound] I mean [disfmarker] [speaker001:] Oh. [speaker004:] Well if there's any left we can grab it. [speaker006:] If there was pie again, [speaker005:] That's right. [speaker006:] if there was those tarts again, I'd be like, "Set us some aside some there." [speaker004:] Ooh, those were good. [speaker001:] Hmm. [speaker004:] Yeah. [speaker006:] But [disfmarker] [speaker005:] Yeah, there's some [disfmarker] Those are [disfmarker] those are nice. [speaker006:] I'm not a big ice cream fan. [speaker003:] Really? Oh. [speaker006:] Nah. [speaker001:] No? [speaker005:] But I did [disfmarker] I was thinking i you know, it is interesting to me to think about how each [disfmarker] each of us have sampled different meetings, and yet, i i it's intriguing to me to think that if I were to describe someone's characteristics in terms of their speech patterns, I bet you, we would all agree on who it [disfmarker] who it was. [speaker003:] Mm hmm. [speaker004:] Mm hmm. [speaker005:] You know. I think that's so interesting. [speaker004:] Yeah. [speaker001:] Yeah. [speaker005:] Yeah, it's [disfmarker] it's a statement of speaker style, of course, [speaker001:] Mm hmm. [speaker005:] but. [speaker004:] And also that we can [disfmarker] I don't know, that we have this shared realm of knowledge, kind of, and can laugh about things that other people would not understand whatsoever, or [disfmarker] You know? [speaker003:] Mm hmm. The little intricacies and idiosyncrasies of people's voices [speaker004:] Yeah.
Yeah. I mean, i it's really cool. [speaker003:] and [disfmarker] [speaker004:] Yeah, or even just about what we were talking about with the technical part, you know, the interface. [speaker001:] Yeah. [speaker003:] Right. [speaker004:] I don't know. [speaker005:] Do you have any favorite parts of these meetings? Things that you [disfmarker] I mean, we di we talked briefly about this last time. [speaker006:] When people get really loopy, and start telling lots of jokes, I think. [speaker003:] Yeah. [speaker006:] I enjoy always transcribing those sections. [speaker005:] Is this [disfmarker] do you think this is usually toward the end of the meeting, or does it vary? Cuz it could [disfmarker] [speaker006:] It's [disfmarker] yeah, it seems to be more at the end. [speaker005:] The [disfmarker] the person [disfmarker] [speaker003:] Eah. [comment] Depends on the person. [speaker002:] Or in the beginning, [speaker006:] Like [disfmarker] [speaker002:] the very beginning. [speaker004:] Yeah, in the very beginning. [speaker006:] Yeah, yeah. [speaker001:] Yeah. [speaker005:] OK. [speaker006:] Yeah, they'll be joking around. [speaker004:] Yeah. [speaker003:] Yeah, that's true. [speaker004:] Yeah. [speaker005:] Do you have a favorite group that you [disfmarker] that you like? I don't [disfmarker] I don't wanna cause any partisanship here, but [disfmarker] [speaker006:] Wait, a favorite what? [speaker005:] A [disfmarker] a fav a favorite t type of meeting that you like, or, um [disfmarker] or [disfmarker] [speaker006:] Oh, you mean like, eh, meeti like [disfmarker] like, um, Meeting Recorder as opposed to EDU as oppo Like, those groups? [speaker005:] It [disfmarker] whichever. I that way or [disfmarker] or a nature of a meeting, [speaker006:] Oh. [speaker005:] or [disfmarker] [speaker004:] I prefer ones that are somewhat linguistically related, [speaker003:] Yeah. [speaker005:] OK. [speaker004:] cuz it's more interesting for me. [speaker001:] Yeah. I like those. 
[speaker003:] I don't like the technical ones cuz I don't understand what they're talking about. [speaker004:] Yeah. Exactly. And it's f really hard t t [speaker003:] And so I'm just [disfmarker] Yeah. And I'm sitting there bored cuz I r I really have no concept of what they're talking about. [speaker004:] Yeah. [speaker001:] Yeah. [speaker003:] So I'm going "OK." "I'm just gonna listen to you." [speaker004:] Yeah. [speaker005:] Mm hmm. Mm hmm. [speaker001:] M yeah, the ones that [disfmarker] where they're talking about linguistic things, though, are great. [speaker003:] Yeah. [speaker004:] I [disfmarker] [speaker001:] I love those. [speaker004:] Yeah. [speaker001:] I like those [disfmarker] [speaker005:] Those are th E would these be the EDU meetings? That's what I think. Is that true? [speaker003:] The ones with Liz, and you. [speaker001:] Mm hmm. [speaker003:] Yeah. [speaker001:] Yeah. [speaker005:] Oh, really? OK. [speaker004:] Yeah. You guys do some in the mee in the Meeting Recorder ones, there's some talk of, D d I don't know, discourse stuff. I think it's really fun too to listen to when you give reports on transcription. [speaker005:] Oh, I I'm [disfmarker] really? [speaker004:] Yeah. Cuz there are always people like, you know, talking bad about the transcribers, [speaker005:] Oh, I'm glad. [speaker004:] and just making fun of u [speaker003:] Yeah. [speaker004:] And so it's really funny, I don't know. [speaker003:] Yeah. I li that's why I like Johno. [speaker004:] It's really funny to listen to. [speaker006:] I've never heard anyone make fun of transcribers. [speaker004:] You haven't? [speaker003:] Really? Oh, it's so funny. [speaker006:] No! [speaker005:] Oh, Johno does a lot of that. [speaker006:] When you guys were saying that last week, I was like "Never heard that." [speaker003:] Yeah. That's why I like listening to him. [speaker004:] It's really funny. [speaker003:] He's always making fun of transcribers. 
[speaker005:] It would be a kick [disfmarker] Yeah, in a [disfmarker] uh, in a good natured way [@ @]. [speaker003:] Yeah, yeah, yeah. [speaker004:] Yeah. [speaker005:] It would be really fun to have a collage of all these things, like, "Well, the transcribers [disfmarker]" so, "Rer rer rer rer" and then s be some horrible thing, "The transcriber's gonna have trouble with this. [@ @]" [speaker003:] Yeah, yeah, yeah. [speaker005:] Did you hear the meeting where the guy was doing, um, clicks like from African languages? [speaker006:] Yes. [speaker001:] Oh my. [speaker003:] Oh my god! [speaker005:] Wasn't that a kick? [speaker004:] No! [speaker006:] I did that. [speaker003:] No. [speaker005:] That was really a kick. [speaker006:] And whistles. He was [disfmarker] it [disfmarker] I [disfmarker] you must have been sittin There was a meeting where there was this guy, he was making these funky sounds [speaker005:] Yeah. [speaker006:] and making funny whistles, [speaker005:] Yep. [speaker006:] and you must have been sitting next to him [speaker005:] I was. Exactly. [speaker006:] cuz you sort of engaged him in a conversation about it. [speaker005:] Exactly so. [speaker006:] And he was like, Yeah, I can make dogs, uh, raise their ears [speaker005:] Exactly. Exactly! [speaker006:] cuz I can whistle so high. " [speaker004:] Wow. [speaker006:] And you were like, w "That's fascinating." I remember that meeting. I thought it was pretty funny. [speaker004:] Yeah. I think it's funny when there's little side conversations like that, that happen at the same time too. [speaker006:] Yeah. [speaker004:] I think those are interesting. [speaker006:] Yeah. [speaker004:] And I think it's really cool when people start getting really excited about something or angry about something [speaker006:] Mm hmm. [speaker004:] because then they tend to interrupt each other a lot, and [disfmarker] I don't know. 
I mean, I remember one where one person, um, was really trying to get something out, really, really desperately, and he kept getting interrupted by like three p three other people, and he was like, "I I I I I I I I I" [speaker003:] Oh! [speaker004:] And it was [disfmarker] I mean it was kind of sad but it was, like, also really interesting to watch, or to listen to. Both. [speaker005:] Have [disfmarker] have you noticed that sometimes you have these a [disfmarker] a g g greater frequency of speech errors, I think, during times of inter of overlap [speaker004:] Yeah. [speaker005:] which [disfmarker] which is [disfmarker] you can just tell a person's m monitoring at two levels [speaker003:] Yeah. [speaker002:] Mmm. [speaker001:] Mm hmm. [speaker005:] and it's [disfmarker] it's difficult. [speaker001:] Mm hmm. [speaker004:] They're like just trying really hard to get it out before they get interrupted or something, [speaker003:] Right. [speaker001:] Yeah. [speaker004:] falls apart. I don't know. [speaker003:] I like transcribing you cuz you're very polite. [speaker005:] Oh, thank you! [speaker003:] No, really. You're always like "Well, [vocalsound] I was just thinking about this, and maybe this might be if [disfmarker]" I [disfmarker] uh, you know. [speaker004:] Yeah. [speaker003:] It's just [disfmarker] it's just kind of cute because everybody's like, "Oh. Well, OK." And they'll just listen to you, you know. [speaker004:] Yeah. [speaker003:] They don't interrupt you, they're just like, "Well, OK." It's kind of neat. [speaker004:] Yep. [speaker005:] Thank you. Yeah. Let's see. Have you noti th one of the things that I found is [disfmarker] is, um, surprisingly difficult these days is to co is to get certain electronic sounds, to try and describe them. [speaker004:] Hmm. [speaker005:] Have [disfmarker] I was thinking about that the other day, [speaker004:] Yeah. 
[speaker005:] it was like [disfmarker] it's like, y y you know, someone in one of these EDU meetings, they have a laptop and they hit the thing and the [disfmarker] and there's an error screen that comes up and it has this "doink" [comment] kind of sound. And it [disfmarker] [speaker002:] Hmm. [speaker003:] Right. Right. [speaker005:] What is that? It's not a beep. [speaker004:] Yeah. [speaker005:] It's not a bell. It's not s it's not a "dong". It's not a [disfmarker] I mean, you [disfmarker] there're all these words that I could imagine and [disfmarker] so, it's like, it's a "tinny kind of electronic sound that comes up when you make an error with your laptop". And that's [disfmarker] [vocalsound] that's not really very helpful. [speaker002:] I wonder if like Microsoft has a word for it. [speaker003:] Now I'm [disfmarker] now I'm just wondering how they're gonna transcribe you saying that. [speaker004:] Yes. [speaker005:] Oh. [vocalsound] You mean the voice quality? The [disfmarker] the shift in [disfmarker] [speaker003:] The "doink". [speaker002:] Bling. [speaker005:] Oh, that thing. [speaker003:] Yeah. [vocalsound] I know. [speaker005:] Oh th [speaker001:] Now you did it again! Stop torturing them! [speaker003:] No! [speaker002:] Let's not make things hard on ourselves later. [speaker004:] I know. [speaker001:] Yeah. [speaker005:] Yeah. Well and then, you know, there are these other things that I think are really interesting, like the vocal changes in character. People shifting into a [disfmarker] kind of a comic voice, [speaker004:] Mm hmm. [speaker005:] or sort of a mock this or mock that, [speaker006:] Mmm. [speaker002:] Mmm. [speaker001:] Mmm. [speaker005:] you know, the sighing and the, "Oh no!" You know, it's [disfmarker] it's, uh, i There are lot's of nuances which we [disfmarker] uh, which we use very often. [speaker004:] Mm hmm. [speaker005:] Comic words and things like that. [speaker004:] You know, I [speaker005:] And [disfmarker] [speaker002:] Huh. 
[speaker001:] Sorry. [speaker005:] Yeah? [speaker004:] sorry, to [disfmarker] [vocalsound] I just remembered one thing that really was [disfmarker] you needed [disfmarker] [speaker005:] Yeah. [speaker004:] I think it was you that pointed out the problem. Like you needed to be at the meeting to be able to transcribe correctly, the noise. I think it was Adam was making this really strange noise, and apparently he was pretending [disfmarker] [speaker005:] "Eech eech", [comment] or something. [speaker004:] he was like pretending to r screw something into his head? [speaker005:] "Whee whee". [speaker002:] What? [speaker005:] s s [speaker004:] Was that what it was? [speaker005:] Yes. It was one of these [disfmarker] it was this kind of deal. [speaker001:] Oh yeah, I had that one. [speaker004:] Yeah! Y yeah! [speaker001:] Oh yeah. [speaker005:] Yes. Oh, yes. And he got to transcribe this meeting. [speaker001:] Yeah, [speaker005:] And there was all this meta stuff about [disfmarker] [speaker001:] that was fun. [speaker005:] Cuz I was giving a report on what the comments were, and it was [disfmarker] one of these was this, s so [disfmarker] [speaker004:] Uh huh. [speaker001:] Yeah. [speaker005:] "Whee whee". [comment] is what he said. And he was going like this at the time. [speaker004:] Mm hmm. [speaker003:] Joel's like, "I am not amused." [speaker001:] I don't even wanna, like, try to repeat what he said because I don't wanna put the transcriber through it again. [speaker005:] Yes, that's right. Let's just [disfmarker] [speaker001:] But they actually had a big list of, like, things that people had transcribed and then they were reading one [disfmarker] them one by one and being, like, "What's this? This?" You know, and then doing the sounds, [speaker003:] Aw! [speaker001:] and then, This sound must be this sound! [speaker004:] Oh. Oh wow. [speaker001:] This sound must be this! 
" [speaker004:] y I had one like that where they were describing the different f qualities of "uh huh" and "uh" and "uah" and like what "EH" is versus "UH" versus "AH". [speaker001:] u u I had [disfmarker] yeah, I had one like that too. [speaker006:] Yeah! [speaker002:] Hmm. [speaker005:] Oh man. [speaker004:] That was really hard. Yeah. That's really interesting. [speaker001:] I had Yeah. I had one where they were just [disfmarker] they were just, like, talking about each word, you know, and they were saying [disfmarker] they were just describing the word over and over again, and just the q the quotes around the word were driving me crazy. When they [disfmarker] kn knowing when to put the quotes and when [disfmarker] And then they were talking about like the difference between "uh" when it's "a" [comment] or when it's "uh", [speaker004:] Yeah. [speaker001:] and at some point they were saying it so much, I didn't know how to transcribe it anymore. [speaker004:] Yeah. [speaker003:] Yeah. [speaker001:] I was like, "I don't know if you're saying" a "or" uh "anymore". [speaker004:] Yeah. [speaker003:] Right. [speaker006:] Oh, yeah. [speaker005:] You did an excellen excellent job. That was really [disfmarker] it was like m transcription, a meta transcription and meta meta meta. [speaker001:] Oh, thank you. [speaker004:] Yeah. [speaker006:] Yeah. Cool! [speaker005:] It's really [@ @]. [speaker001:] It was fun. [speaker004:] Yeah. [speaker005:] You know, I'm really interested in this topic. Would it be a problem if we continued a little bit longer on this? Or do you need to g do people need to leave. I don't wanna keep you longer than [disfmarker] [speaker003:] Mm mmm. [speaker001:] I'm fine. [speaker002:] I can stay. [speaker004:] I don't have to be anywhere until five, [speaker006:] Sure. [speaker005:] Are we OK? [speaker004:] so. [speaker005:] Oh, OK. We won't be that long, I don't think. [speaker006:] No. [speaker004:] Good. [speaker006:] Sure. 
[speaker005:] But [disfmarker] but I really appreciate it. [speaker001:] OK. [speaker005:] This is just really i it's so interesting to me, the [disfmarker] the specific experiences of [disfmarker] of having encountered these data in this way. Does, um [disfmarker] Did anybody else wanna say anything about [disfmarker] [speaker006:] I mean, just weird things like before [disfmarker] like, I had kind of a weird thing happen, and we'd talked about it. It was when I'd first started working on the project I got this discussion where [disfmarker] this was before we had the different channels, and, um, Dan Ellis [disfmarker] it was when Dan Ellis was still here [speaker005:] Yes. [speaker006:] so it was like kind of early on, right? And, um, he [disfmarker] everyone's talking, and then all of the sudden, you hear Dan talking to someone else and it's different people that you'd never heard before. And [disfmarker] and what he had done, is he had left the room to g He'd li and he had brought the microphone with him. [speaker003:] Oh, no. [speaker006:] And this is without the channelized interface. So you [disfmarker] you have no idea. [speaker005:] Oh my gosh. [speaker004:] Oh man. [speaker001:] Oh wow. [speaker006:] You think everyone's just become schizophrenic or something. [speaker004:] Oh n [speaker006:] And they're all like in they're own r world or something because [disfmarker] Then all of the sudden he's talking about a co he needs a copy machine. And I don and I'm just assuming, well he's s s still in the room [speaker003:] Oh g [speaker006:] and what the heck's he talking about? [speaker001:] Wow! [speaker006:] But just sort of just weird things like that, and, um [disfmarker] Yeah, he had just left the room and gone and made a copy and he still had his microphone on, and then you could hear the [disfmarker] the [disfmarker] the r the women behind the desk talking to him, and just sort of stuff like that. It was just really bizarre. So. 
[speaker004:] That's really funny. [speaker006:] Anyway. Just sort of weird things like that. [speaker005:] Well, it is [disfmarker] There's another case where I ran across like o of that type, um, i when I was sitting here and we were doing [disfmarker] This was like o one of the first meetings I ever participated in and [disfmarker] and Dan was adjusting something here and Adam was in the next room, and, um, [vocalsound] Adam says, "So I'll come in here and check the levels," and [disfmarker] and he says something and Dan says, "No". And it sounds like he's saying "no" to Adam, and it was w if [disfmarker] if [disfmarker] if so, this would have been extremely rude. [speaker002:] Mmm. [speaker005:] But but the thing [disfmarker] and [disfmarker] and the reason that it would be rude is because you don't have this information from hearing all these equalized volume channels, [speaker003:] Uh huh. [speaker004:] Right. [speaker005:] you don't know that they're not participating in the same conversation at all. [speaker004:] Yeah. [speaker001:] Yeah. [speaker006:] Yeah. [speaker005:] It really is strange. [speaker006:] Yeah. [speaker005:] I it lead us [disfmarker] it ta if we w [speaker006:] So. [speaker005:] It is true that this is kind of an artificial and i um, discourse c aspect, the mixed channel is artificial because we're boosting it in order to make everything audible. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker002:] Hmm. [speaker003:] It's interesting cuz like, um, when I'm working late at night, [comment] I actually don't feel so lonely [speaker005:] But [disfmarker] [speaker003:] because there's so many voices in my head. [speaker005:] You mean when you leave here? When you're somewhere else? [speaker003:] No, no. While I [disfmarker] while I'm working here, [speaker005:] No, no. OK. [speaker003:] it's like nine o ' and nobody's here except for me, and I'm like, You know, I don't feel so lonely [speaker004:] Oh, yeah. 
[speaker003:] because there are like eight people talking to me all at once ". I'm like, "Wow". [speaker005:] How about these meetings when [disfmarker] where there are like eight speakers i in one meeting. Are those [disfmarker] What are those like? [speaker003:] Takes a long time. [speaker005:] Are [disfmarker] [speaker006:] They take longer to do. [speaker004:] It takes a long time, yeah. [speaker002:] Mmm. [speaker005:] OK. [speaker001:] Yeah. [speaker006:] In a way they're more fun, though. [speaker003:] Takes a long time. [speaker004:] Yeah. [speaker006:] There's just more variety. [speaker004:] I [disfmarker] tends [disfmarker] I [disfmarker] you know what I noticed, is it tends to have a [disfmarker] still, a dominant group of speakers [speaker003:] Sometimes. [speaker004:] and then there are two or three people generally who don't talk at all [speaker002:] Mmm. [speaker003:] Yeah. [speaker006:] Who won't talk. [speaker004:] except to say their digits or something, or who [disfmarker] [speaker006:] And to breathe. [speaker002:] Yeah. [speaker004:] or to breathe and cough and sniff and whatever. [speaker003:] Right. [speaker002:] Oh, god! [speaker006:] There'll be lots of breathing. [vocalsound] Yeah. [speaker004:] Yeah. Yeah. But I [disfmarker] I think that still overall you're gonna have probably three or four or five dominant speakers. [speaker003:] One thing I noticed is that, like, uhhh! [disfmarker] [comment] There was this one meeting I was transcribing, there were nine [disfmarker] nine speakers, and by the end of it, I was so entirely sick of listening to the same conversation all over again [speaker004:] Yeah. [speaker003:] nine times in slow mo! [speaker004:] Exactly. [speaker001:] Definitely. [speaker003:] I was going nuts. I was like, "I cannot take this anymore. I can't listen to this anymore. Oh my god!" [speaker005:] I had that same reaction. [speaker004:] Yeah. [speaker005:] Absolutely. And sometimes these are meetings I participated in. 
It's it's like [disfmarker] [speaker001:] Mm hmm. [speaker004:] So you've [disfmarker] this is like the tenth time for you, then. [speaker003:] I swear it's like [disfmarker] it's like a bad dream. [speaker005:] That's righ that's right. [speaker003:] It's like, "This is [disfmarker] this is keep happ this keeps happening to me". [speaker004:] Yeah. [speaker005:] Never changes. I'm gonna check out outside to make sure that we're OK with the, um [speaker004:] Oh, OK. [speaker003:] Oh. OK. [speaker006:] Did you guys ever, when [disfmarker] This happened to me when I first started transcribing, cuz I had never really transcribed stuff like this before, [speaker005:] Is it OK if we stay for another half hour in this room? [speaker006:] but, um, [speaker004:] Mm hmm. [speaker005:] Good, thanks. [speaker006:] I would leave here, and I would go out and do my daily business, and I would hear people speaking to me, [speaker003:] Yeah. [speaker006:] and I would transcribe them in my head! [speaker003:] Yes! Yeah, yeah, yeah. Oh yeah. [speaker004:] Oh, man. [speaker001:] Oh yeah! [vocalsound] That's fun! [speaker006:] I'd imagine what I would type. [speaker004:] Yeah! Yeah. [speaker002:] Mmm. [speaker006:] It went away, like maybe after a couple of weeks, [speaker003:] Yeah. [speaker001:] Oh really? [speaker006:] but it was just bizarre. [speaker003:] I still do that. I still do that. [speaker001:] I still do that. [speaker002:] No, I don't Yeah. [speaker006:] Really? [speaker004:] Occasionally. [speaker001:] Yeah. [speaker003:] Yeah. [speaker004:] Like on the bus or something when I'm bored. [speaker003:] Yep. Or, like, I'll think [disfmarker] [speaker002:] Right. It's something you're not interacting with, [speaker006:] Oh. No, for me it was unconscious. [speaker002:] you're just listening. [speaker004:] Oh, really? [speaker006:] It was [disfmarker] I didn't mean for [disfmarker] to be doing it. [speaker004:] Oh, OK. [speaker003:] No, yeah. N Yeah. [speaker001:] Oh, oh. 
[speaker006:] It would just intru it was an intrusion. [speaker002:] Mm hmm. [speaker004:] Oh, that's funny. [speaker006:] Um, not a te not a bad intrusion, [speaker001:] Mm hmm. [speaker004:] Yeah. [speaker006:] but not something I was sort of volitionally sort of [disfmarker] [speaker004:] Huh, that's interesting. [speaker003:] I would also be really aw aware of w how I was speaking. [speaker001:] Hmm. [speaker006:] Yes! [speaker003:] Cuz I also tend [disfmarker] Like, we were talking about "so". I do that. [speaker001:] Mm hmm. [speaker003:] I'll end utterances with "so" or "and". [speaker004:] Yeah. [speaker003:] and it'll just be a trailing off, and I'll think, "Man. I would be such a pain in the butt to transcribe." [speaker006:] Yeah! [speaker004:] Mm hmm. [speaker002:] Mmm. [speaker006:] Yeah! [speaker003:] Like I just, uh, I [disfmarker] g yeah. I'll [disfmarker] I'll do the same thing as you do, and I [disfmarker] I'll listen to somebody and his or her speech will be particularly idios idiosyncratic, [speaker006:] Yeah, yeah. [speaker003:] and I'll go, "Wow, I just [disfmarker] I'm glad I'm not transcribing him or her". [speaker004:] Yeah. I noticed a lot of my speech i I don't know, I wouldn't call them errors, I guess disfluencies, a lot more once [disfmarker] when I started transcribing. I just noticed [disfmarker] and in ge people in general, that people break up their speech interestingly. [speaker002:] Mmm. [speaker006:] Yeah. [speaker003:] Right. [speaker006:] Yeah. [speaker001:] Mmm. [speaker003:] I'm amazed sometimes that we manage to communicate. [speaker004:] Yeah! Actually. [speaker003:] You know? [vocalsound] Cuz people will say just, like, you know, monosyllabic little grunts and squeaks and things, [speaker004:] I think that shows a lot of [disfmarker] I think that shows a lot of how, um, it's not all acoustic. [speaker003:] and it means something. [speaker004:] I mean, there's other things involved, [speaker003:] Right. 
[speaker004:] eye contact, body language, [speaker003:] Right. [speaker004:] gesture, whatever. [speaker001:] Right. [speaker005:] That raises a question. Um, do you think that, um, you would have been considerably helped if we had had video tapes of these things? [speaker006:] Oh sure. [speaker003:] Yes. [speaker004:] Yeah. [speaker005:] OK. [speaker004:] I definitely think so. [speaker003:] Yep. Mhh. [speaker004:] And I think there would be a lot more interesting data to analyze later on, [vocalsound] personally. [speaker005:] Mmm, well, mm hmm, mm hmm that's right, [speaker003:] Uh huh. [speaker005:] cuz you're involved in gesture research. [speaker004:] Yeah. [speaker005:] Mm hmm. [speaker004:] Yeah. [speaker003:] Yeah, cuz then you know exactly what's going on, what the context cues are [disfmarker] [speaker004:] And you can really get a feel for the re interpersonal relationships, [speaker003:] Yeah. [speaker004:] which kind of comes into play, maybe not for speech recognition, but for other things, maybe? [speaker003:] Maybe this is what it's like to be blind. [speaker005:] Mmm hmm. Oh, interesting. [speaker001:] Hmm. [speaker004:] Maybe. [speaker006:] Hmm. [speaker005:] Huh. [speaker006:] Yeah. [speaker001:] Hmm. [speaker003:] No [disfmarker] no visual input at all. [speaker004:] Yeah, you just have to rely on your intonation, and other things like that. [speaker003:] Right. And you get so much more aware of all of it too. [speaker004:] Yeah. Mm hmm. Maybe. [speaker005:] Mm hmm. [speaker002:] Hmm. [speaker001:] Yeah. [speaker005:] That's [disfmarker] I know someone who actually recognizes perfumes in the elevator. I mean, there's a [disfmarker] there's a blind man in this [disfmarker] in this building, [speaker004:] Oh. Uh huh. [speaker005:] and [disfmarker] and, um, Lila was in the elevator one day. 
And they hadn't spoken yet cuz he just came on the elevator and he says [disfmarker] uh, and then she [disfmarker] then she said something and he says, "Y you sp you've changed your perfume." [speaker003:] Oh wow. [speaker005:] Yeah, it's pretty interesting. Pretty interesting. [speaker001:] Huh. [speaker004:] That's pretty amazing. [speaker006:] Had she changed her perfume? [speaker003:] That's neat. [speaker005:] Yes, she had. [speaker006:] Oh, wow. [speaker005:] Yeah. [speaker003:] Huh. [speaker001:] Hmm. [speaker005:] Yeah. [speaker006:] Oof! [speaker005:] Yeah. [speaker002:] Hmm. [speaker005:] It's very interesting. [speaker004:] Enhanced other senses, I guess. That's really cool. [speaker003:] Right. [speaker004:] Hmm. [speaker005:] Have you [disfmarker] I [disfmarker] I've learned several things about having been Wi i with the close talking mikes. This is [disfmarker] this is a new experience. Cuz I have of course been involved in [disfmarker] in this kind of thing for quite a while, but th there are several things I didn't realize you could hear so reliably, [speaker001:] Mm hmm. [speaker004:] Mm hmm. [speaker005:] and [disfmarker] some of these are the things like when the mouth opens and the inbreath and stuff and it [disfmarker] and how extremely audible and [disfmarker] and visual, [speaker001:] Mm hmm. [speaker003:] Mm hmm. [speaker002:] Uh huh. [speaker005:] I mean, you can see these [disfmarker] the inbreaths and things. [speaker001:] Mm hmm. [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker005:] It's [disfmarker] that was really surprising to me, that you [disfmarker] it's that tangible. [speaker001:] Yeah. [speaker004:] That's something you don't hear in normal speech too. [speaker006:] Mm hmm. [speaker004:] In normal conversations, [comment] unless it's very dramatic or very loud, [speaker006:] Mm hmm. 
[speaker004:] that people click their [disfmarker] Their lips are always making funny noises [speaker002:] Unless [disfmarker] you're on, uh, talking on the phone, [speaker004:] and [disfmarker] [speaker002:] and the phone is [disfmarker] the levels are funny. [speaker004:] Yeah. [speaker002:] Then you hear it. [speaker004:] Yeah. [speaker002:] So, th it's kind of like that. [speaker003:] Right. [speaker004:] Yeah. [speaker005:] I guess it's also one of these things that u usually is not meaningful, so we sort of screen it out. [speaker004:] Yeah. [speaker005:] u all these different in cuz, you know, I mean, we do that all the time with all other [disfmarker] well, lots of other aspects of the acoustic signal. [speaker001:] Right. [speaker004:] Yeah. [speaker006:] Mm hmm. Mm hmm. [speaker005:] It sounds like it's a nice smooth melody, but [vocalsound] these things are voicing and unvoicing, are tripping in and out, and they don't [disfmarker] The contour is really not nearly that neat. Yeah. [speaker003:] Yeah. [speaker006:] Right. Yeah [disfmarker] well, yeah, most of the time when we're listening to other people's speech, we're just trying to get the gist. Not literally what people are saying [speaker005:] Mm hmm. [speaker002:] Mmm. [speaker006:] and so, you know, all the "uh" [disfmarker] "uh's" and "um's", and all those different things, [speaker003:] That's true. [speaker004:] We yeah, we filter all that out. [speaker006:] we just tune it out, [speaker004:] Yeah. [speaker006:] yeah. [speaker001:] Mm hmm. [speaker006:] And, uh, it's that top down thing again. Just [disfmarker] right? [speaker005:] Mm hmm. Yeah. Yeah, absolutely. Absolutely. [speaker006:] In a way. Just trying to get the, um, you know, what is the p "what is the point of what this person is trying to communicate?" kind of thing. [speaker004:] Mm hmm. [speaker005:] That's right. [speaker006:] So. [speaker002:] Hmm. 
[speaker004:] Which is a very [disfmarker] it's [disfmarker] that's how I tended [disfmarker] before studying linguistics anyways and, uh, definitely before doing transcription, [comment] how I tended to look all conversations, [speaker005:] That's right. [speaker004:] and then coming here and doing this, you have to look at speech in a totally more intense way. Um. Including, I mean, every little aspect. I [disfmarker] and when I tell people what I do here, and tell them, "Yeah, I'm transcribing". And they're [disfmarker] they think I'm like a stenographer or something. [speaker003:] Right. [speaker006:] Yeah. [speaker004:] I'm like, No, I don't correct the grammar. [speaker001:] Mm hmm. [speaker004:] I don't erase the "um"'s and "uh"'s, I record all the sighs, and the laughs, and the breaths, and [disfmarker] And they're [disfmarker] I don't know. This is a very different thing than most people can relate to, I guess. [speaker005:] Mm hmm. The [disfmarker] I agree. [speaker004:] Or know about. [speaker005:] Another thing that I think is interesting in these meetings is, certain speakers have a tendency to embed utterances in other embed uh, in other utterances. And, um, you know, it's very clear when you see it [disfmarker] when you hear it with the intonational markers, but [disfmarker] well, just the contours, [speaker001:] Mm hmm. [speaker005:] but the words on the page, it's not at all clear. [speaker004:] Yeah. [speaker005:] And there are a couple people who do this in particular. [speaker002:] Mm hmm. [speaker005:] I [disfmarker] I would say that it's both [disfmarker] I would say that both Jerry and Morgan do this. [speaker004:] Mm hmm. [speaker005:] And [disfmarker] it's amazing how fluent it is [pause] when you hear it, but how very complicated it is when you see the words. [speaker003:] Indecipherable. [speaker004:] Yeah. [speaker005:] Yeah. 
[speaker004:] And when you think about it, they pick up exactly where they left off, even though maybe it's several seconds later, there's no glitch in the syntax or anything. It's [disfmarker] We're pretty amazing machines. Or creatures, [speaker001:] Mm hmm. [speaker004:] whatever. [speaker005:] Hmm. [speaker001:] Mmm. [speaker005:] Yeah. [speaker004:] That's really cool. [speaker003:] Machines. [speaker005:] Yeah. So, and then of course there are these [disfmarker] these things which I think are also nicely represented in the data which are these funny sounds people make, which to some degree is borrowing from comic talk. So, I notice people going, "Pfft" [comment] you know, if they're trying to say something is done. [speaker001:] Mm hmm. [speaker005:] The Germans do this I think in particular. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker002:] Hmm. [speaker005:] And there are others that we use like that th which are sort of semi conventionalized, but you wouldn't think that you would write them down. [speaker003:] Yeah, the one I kn I kn get all the time is "tsk" [speaker001:] Yeah. [speaker005:] Oh yes! Good example. [speaker002:] Mmm. [speaker003:] It's [disfmarker] it's, uh [disfmarker] like I don't know how [disfmarker] I just transcribe it as [disfmarker] I say, like, "click" [speaker004:] Mmm. [speaker001:] Mm hmm. [speaker004:] It's like "tisk". [speaker003:] and then I say, "It's like a vocal gesture." Like a shrug or a vocal [disfmarker] some [disfmarker] some sort of like, "Well, there it is". [speaker005:] Yes. [speaker001:] Yeah. [speaker004:] Yeah. [speaker003:] Or, "That's it." [speaker001:] Yeah. [speaker003:] Or something like that, you know. That [disfmarker] people do that one a lot. And there was one speaker who used to do it constantly. 
And after a while, [vocalsound] I started getting confused as to whether or not he was, like, just going "tsk tsk tsk tsk tsk" to himself, or whether he was meaning it in conversat Sometimes it was really obvious that he meant it as some sort of vocal cue. [speaker004:] Mm hmm. [speaker003:] But then sometimes it was more ambiguous. Like it was tough to tell, um, because he would just always be clicking. [speaker005:] This cou Is this the one that [disfmarker] I mean, it could be the one that we were thinking about. [speaker004:] It could be. But I noticed it more as it was obviously not [disfmarker] because it was always in a g the one I'm thinking of always came in groups of four or something. [speaker005:] Yes. Uh huh. [speaker004:] And so it wasn't ever like a, um, single l [speaker003:] Mmm. Oh, OK. [speaker005:] It would be [disfmarker] it would [disfmarker] [speaker003:] Yeah, this guy does a lot of single ones. [speaker004:] Yeah. [speaker005:] Oh, how interesting. [speaker003:] Yeah, just kind of, um [disfmarker] And, uh [disfmarker] that's another thing like I [disfmarker] I realize I do that too. [speaker005:] I'll ask you who this is later. I don't wanna [vocalsound] cuz [speaker001:] Hmm. [speaker003:] I can't remember the name actually. [speaker005:] OK. The [disfmarker] the one that I'm thinking of, i it's a matter of you can tell it's in context where he's looking in his agenda to see, you know, "Do I have time to do this?" [speaker004:] Mm hmm. [speaker005:] and he's calculating it in his mind. So it's sort of li almost like a stand in for calculator, kind of thing. [speaker002:] Hmm. [speaker005:] And i uh, but it's signaling. I'm sure that everybody [disfmarker] I'm sure that other people can hear it. So it's sort of like, "Wait a minute, I'm thinking." [speaker004:] Right. Yeah. Yeah. Yeah. [speaker003:] Mm hmm. [speaker005:] But it's not lexicalized. This is not a [disfmarker] [speaker001:] Hmm. 
[speaker004:] That's kind of the impression I got too. [speaker005:] i not in a standard way, [speaker003:] Oh. Yeah, that's a different person. [speaker005:] it's [disfmarker] OK. OK. OK. [speaker003:] Yeah. OK, I know who you're talking about. But this is somebody [disfmarker] somebody differ [speaker005:] He must do this in a lot of meetings because I think [disfmarker] you know, I've run across it several times. [speaker001:] I don't think I've ever heard it. [speaker004:] No. [speaker006:] No. [speaker004:] Hmm. [speaker006:] I've heard [speaker001:] Yeah. [speaker005:] Yeah. [speaker006:] Which I just transcribe as "mouth". [speaker005:] Heard that one. Oh, I see. [speaker006:] I just make a mouth [disfmarker] [speaker001:] Huh. [speaker006:] Cuz I think I had talked to you about this [disfmarker] about what [disfmarker] what did we do [disfmarker] cuz [disfmarker] wha what was it that I [disfmarker] oh, I was [disfmarker] "Lipsmack"! [speaker004:] Yeah. [speaker005:] Oh yes. [speaker003:] Yeah. [speaker006:] I was marking "lipsmack", and [disfmarker] and you said [disfmarker] [speaker004:] That's more like the "[@ @]". [speaker006:] yeah, that's [disfmarker] It's a different noise, [speaker001:] Hmm. [speaker004:] Yeah. [speaker006:] but, um, it's what made me think of it, [speaker002:] Ah. [speaker006:] but, um, [speaker003:] The one you're talking ab [speaker006:] and [disfmarker] and you said, "Don't do" lipsmack ", just do [disfmarker] we don't need [disfmarker] it's too specific. Just do "mouth" for all tha all those noises ". [speaker005:] Yeah, OK. Now what I would say [disfmarker] um, [speaker006:] Um [disfmarker] [speaker005:] it's good to know this. C what I [disfmarker] what I would say is, um, if it's communicative s beyond just the p mouth opening and the w oh, you know, the mechanical aspects that happen when you happen to be about to speak, then I would like to have it [disfmarker] if it's communicative, I'd like to have it preserved. 
[speaker006:] Mm hmm. [speaker001:] Mm hmm. [speaker002:] Mm hmm. [speaker001:] Right. [speaker005:] a But if it's just this mechanical stuff, "mouth" is fine. [speaker006:] Mm hmm. [speaker005:] The [disfmarker] It seems like the computer science people use "lipsmack" for that. I [disfmarker] Personally, my choice [disfmarker] eh, w we've [disfmarker] we're still deciding on how we're gonna encode it in the final version. But I tend to not like "lipsmack" because I think that that's [disfmarker] could be meaningful and it's a little confusing. [speaker004:] Mm hmm. [speaker003:] Mm hmm. [speaker005:] People talk about, you know, "Oh gee, that was really good," [speaker002:] Mmm. [speaker005:] and then they do a [disfmarker] I'm not gonna do it cuz I don't really know how to do lipsmack. But [disfmarker] but it seems like it would be a [disfmarker] a vocal gesture, [speaker006:] Yeah. [speaker004:] It could be. [speaker005:] a definite type of thing. [speaker004:] Yeah. [speaker003:] Right. Yeah, th I was thinking the one that you just did, like "tsk". [comment] makes me think of the [disfmarker] what people would write as "tsk tsk". [speaker005:] Yes. Same here. Yeah. [speaker004:] Yeah. Me too. [speaker003:] You know, like, uh, as though they're like, you know, trying to communicate displeasure. [speaker001:] Yeah. [speaker005:] I would too. [speaker006:] Oh, right. Yeah. [speaker002:] Hmm. [speaker001:] That's true. Yeah. [speaker006:] Right. Mm hmm. [speaker003:] And so that [disfmarker] Yeah. [speaker002:] Mmm. [speaker003:] U i cu there's [disfmarker] there's at least two or three that I can think of that to me are very different. You know, there's like, you know, that, [speaker006:] Right. [speaker003:] and there's like, um, you know, well, the one I did before that's like I don't know, like, um [disfmarker] Hhh. [speaker005:] I would say "tongue click" for that. [speaker003:] Yeah. 
[speaker005:] But I would also probably put in parentheses it [disfmarker] that I thought it was a vocal gesture. [speaker003:] Uh huh. [speaker005:] Because I think it's communicative of a certain thing. [speaker001:] Yeah. [speaker003:] Yeah. [speaker001:] I don't know what [disfmarker] I am never sure whether or not to transcribe, um, a difference with, like, "hmm", [comment] you know, and "hmm!". [speaker004:] Oh yeah. [speaker005:] Oh. Good point. Yes. [speaker001:] Like, how do you transcribe "hmm!"? [speaker004:] It's like "humph". [speaker003:] I'll put like a [disfmarker] [speaker005:] Really good point. Yeah. [speaker003:] I'll usually put a [disfmarker] an exclamation point [pause] somehow. Like I'll go [disfmarker] [speaker005:] I would too. [speaker001:] Yeah? [speaker004:] Oh. [speaker005:] I think it's more emphatic. [speaker001:] Mm hmm. [speaker005:] Mm hmm. [speaker001:] W what about pre transcribing it, um, "HMPH"? [speaker005:] But it [disfmarker] th it's true that th segments are different too. [speaker001:] What about transcribing it that way, [speaker003:] "Hmph." [speaker001:] like "hmph"? [speaker002:] Hmm! [speaker006:] Oh. [speaker005:] Well, now, y This is a good question. I guess I would say th you know, sometimes we've seen "hmph" in literature meaning in [speaker001:] Right. [speaker005:] but it's a s a sigh [disfmarker] a si uh, sorry. A sound of indignation. [speaker001:] Right. [speaker004:] Mm hmm. [speaker005:] So, if it's meant as a sound of indignation then I think that would be how I would capture it, with the HMPH thing. [speaker001:] Mm hmm. [speaker004:] I usually tend to think that it's a [disfmarker] more of a [disfmarker] Hhh. [vocalsound] not indignation. Something more positive. I don't know exactly wha how to characterize it, [speaker002:] Like [disfmarker] [speaker004:] but something more [speaker003:] Oh really? [speaker002:] " Hmm! [speaker005:] a sort of like a sup uh, an [disfmarker] uh, an elevated "hmm"? 
[speaker002:] I hadn't thought of that. " [speaker001:] Right. [speaker002:] Yeah. [speaker005:] It's like a m sort of more surprise, or engagement? [speaker004:] Yeah. "Hmm." Yeah. Or [disfmarker] [speaker002:] Hhh. [speaker004:] I [disfmarker] i I don't know. [speaker001:] I yeah. I guess it could have [pause] two meanings, sort of. [speaker004:] Maybe. Yeah. [speaker006:] If it's surprise then the exclamation point makes sense, too. [speaker004:] Yeah. [speaker005:] Mm hmm. And then there's the other kind which sounds similar, but it's really part of a laugh. So, "Hmm hmm hmm hmm hmm". [speaker006:] Hmm. [speaker004:] Yeah. [speaker005:] Which is [disfmarker] I would just say "laugh" for that. [speaker001:] Yeah. Mm hmm. [speaker003:] Laugh. Right. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker006:] Yeah, once, Adam was laughing, and it was actually written out phonetically, and I changed it. I'm like "it's just laughing". [speaker005:] Good. [speaker004:] Yeah. [speaker005:] Well done. Well done. [speaker006:] Like somebody had written, like "hyuh hyuh" or [disfmarker] or [disfmarker] or something. And [disfmarker] [speaker005:] I've run across that also. And [disfmarker] Right. Change it to "laugh". [speaker001:] That's funny. [speaker006:] Yeah, i I [disfmarker] it was maybe Tigerfish had done something [speaker005:] Well done. [speaker006:] like they had written out a phonetic laugh, [speaker003:] Yeah, yeah. [speaker006:] and they didn't do it at any other place in the transcript. [speaker003:] That's interesting. [speaker004:] That's interesting. [speaker005:] I ran across a couple "ho"'s occ [speaker006:] It was just that one spot. [speaker005:] I've seen "ho"'s [speaker006:] It was just a regular old vanilla laugh, you know. [speaker001:] "Ho ho ho"? [speaker005:] and [disfmarker] [speaker001:] Oh, that is too much fun. [speaker005:] and "heh". [speaker006:] It wasn't anything special. [speaker005:] Yeah. Yeah, that's right. 
[speaker004:] I noticed, i m as far as problems with laughs, is that a lot of them were marked as breaths. [speaker005:] W we [disfmarker] [speaker004:] Because a lot of people, [speaker005:] Yes. [speaker004:] I think we already talked about this last time, [speaker001:] Right. [speaker002:] Mm hmm. [speaker004:] but [disfmarker] [speaker005:] Yeah. [speaker004:] but not so much as, I never saw that written out phonetically, or whatever. [speaker005:] Well i [speaker006:] Well [disfmarker] well, here's a question, and there's been some variance in transcripts that I've looked at. When [disfmarker] OK. So when you laugh, you're exhaling all this air, [speaker005:] Yes. [speaker006:] and so you need to i inhale, [speaker005:] Yes. [speaker006:] and [disfmarker] so how do you transcribe the inhale? Some people are transcribing it as a breath. I I used to, you know, back in the day [disfmarker] back when I was do transcribing, I would do the whole thing as a laugh [speaker005:] Yeah? Yeah, yeah, yeah? [speaker004:] Yeah, that's what I used to do too. [speaker005:] Uh huh. Yeah, yeah. [speaker006:] because to me the breath is part of the laugh. [speaker005:] I agree with you. [speaker001:] Oh yeah. [speaker003:] Yeah. [speaker005:] That's [disfmarker] that's what I prefer. [speaker002:] Mmm. [speaker006:] Um, so [disfmarker] cuz i cuz in the ones I've been che the one I've been checking actually right now, that breath gets marked as a breath, [speaker001:] Yeah. [speaker006:] and I've been taking them out. [speaker004:] So you would have like "lef laugh, breath, laugh" [speaker005:] Good. Well done. [speaker006:] So that's OK? [speaker005:] Well done. [speaker006:] Actually, it'd be a "laugh" [speaker004:] transcribed? [speaker006:] and then two dots, like the next bin will be empty, [speaker001:] Mm hmm. [speaker006:] and then it'll be "breath". [speaker004:] Oh, I've [disfmarker] [speaker005:] Mm hmm. [speaker004:] Yeah. 
[speaker006:] And so it'll be that [disfmarker] [speaker003:] s Yeah. I was gonna say, I put [disfmarker] [speaker004:] OK. [speaker001:] Oh. [speaker004:] I see what you're saying. [speaker001:] Yeah. [speaker006:] it'll be three d sort of separate units as opposed to [disfmarker] I think it's sort of one coherent thing. [speaker004:] Yeah. Yeah. [speaker001:] Mm hmm. [speaker006:] So. [speaker003:] I put "breath laugh" in the brackets when I don't know whether it's a breath or a laugh, [speaker005:] That's good. That's good. [speaker004:] Mm hmm. [speaker003:] which is what [disfmarker] I remember you telling me to do that. [speaker006:] Oh, well, yeah, that's [disfmarker] that's a different thing. [speaker005:] That's right. [speaker004:] Yeah. [speaker002:] Mmm. OK. [speaker006:] Yeah. S cuz sometimes it's not clear. [speaker005:] That's good. [speaker004:] Sometimes it's hard to tell. [speaker006:] Like sometimes "Hhh!" is a laugh. [speaker002:] Yeah. [speaker003:] Right. [speaker004:] Yeah. [speaker001:] Yeah. [speaker005:] That's right. [speaker003:] Right. [speaker006:] Um, and if other people are laughing, then I'll usually give them [disfmarker] and say that they're laughing. [speaker002:] Hmm. [speaker005:] And [disfmarker] I agree with you. [speaker003:] Right. [speaker005:] And [disfmarker] and I think that's important [speaker001:] Yeah. [speaker005:] because we're capturing then the communicative level. [speaker004:] Yeah. Exactly. [speaker006:] Mm hmm. Exactly. [speaker001:] Right. [speaker005:] And it's likely that it has a different contour than most breaths as well. [speaker004:] Yeah. [speaker005:] Because it's sort of expulsive and [disfmarker] and, uh, that kind of thing. [speaker003:] Mm hmm. [speaker006:] Mm hmm. [speaker004:] Yeah. [speaker005:] Ex exactly. Exactly. [speaker004:] Yeah. [speaker003:] A lot of people go, [speaker005:] Well done. [speaker006:] So. [speaker005:] Yes they do. 
Yeah, the shyer ones sometimes will laugh i with the breath laugh. And you can see it. [speaker003:] Right. [speaker005:] You can see the little [disfmarker] little peaks, which is interesting. [speaker003:] Right. [speaker001:] Mm hmm. [speaker006:] Right. [speaker004:] Yeah. [speaker005:] But yes. Yeah, exactly. I wouldn't, uh [disfmarker] i That's a case whe you know, I know generally with the Tigerfish things that they put "breath" and we leave it in. But absolutely there, yeah. [speaker006:] Good. OK. [speaker005:] "Laugh", change it. And, you know, there are other places where it's in the way, you know. I [disfmarker] I mainly said not to change their "breath"'s simply because it would take time to move them if [disfmarker] and if they're not in the way then don't [disfmarker] don't mess with it, [speaker001:] A lot of times, though, the breaths are, um, results of bad segmentation, [speaker005:] but [disfmarker] [speaker001:] and there's a segmentation of nothing, [speaker005:] Oh yes! [speaker001:] and so they just write it as a breath when it's clearly not a breath. [speaker004:] Yeah. I've taken out some of those. [speaker001:] And I usually always take those out. [speaker006:] Yeah. [speaker001:] I mean, usually, um, it seems like the segmenter, um, often just sort of, um, segments [disfmarker] [speaker005:] Great. [speaker001:] when there's really long spaces, [comment] it segments random spaces in between. And th you know, they get that and there's nothing there, so they m s mark it as a breath when it's really actually nothing. So usually I take those out. [speaker005:] That's right. [speaker004:] Yeah. [speaker006:] Well they give the segmenter, though [disfmarker] cuz they don't have the [disfmarker] the rich interface that we do, so the they must be giving the pre segmenter sort of the benefit of the doubt, "Well, there must be something there, um, whether I can hear it or not." [speaker001:] Right. Right. Exactly, exactly. [speaker004:] Oh, yeah. 
I'm sure they are. [speaker006:] So, they just say, "Well, it's a breath then". [speaker001:] Exactly. [speaker005:] One wonders what it might be [disfmarker] [speaker001:] Mm hmm. [speaker004:] Although, they did put a "missing segment" so sometimes in brackets. [speaker005:] Yeah. Yeah, there are some where [disfmarker] [speaker006:] Yes! [speaker001:] Oh really? [speaker006:] That's true. [speaker001:] Oh, I always get [disfmarker] I always get "breath". [speaker005:] Yeah. Well, see, I [disfmarker] I do a previous filtering before I give the Tigerfish ones out, [speaker004:] I've gotten both. [speaker005:] and [disfmarker] and I u I have tried to get the [disfmarker] that particular the [disfmarker] the "missing segment" removed. [speaker004:] Uh huh. [speaker005:] But I don't always do it. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker004:] Yeah. I've seen a few of them, [speaker006:] Hmm. [speaker005:] So yeah, you've s they slip through. [speaker006:] Well, frequently that'll also be a misattribution too, of voice. [speaker004:] but I've seen a [disfmarker] several "breath"'s too. [speaker001:] Hmm! [speaker004:] Yeah! Yeah. [speaker006:] Cuz then you'll hear [vocalsound] [disfmarker] you'll hear part of what another person is saying and th tha and that's all transcribed on that one level, but then you'll hear a little bit of it and then "missing segment". [speaker004:] Yeah. [speaker006:] It's like, OK, well they just [disfmarker] it's in the wrong [disfmarker] [speaker004:] It's just in the wrong place. [speaker006:] That person's not talking at all ". So, you just get rid of it. [speaker004:] Yeah. Yeah. [speaker001:] Hmm. [speaker006:] So. [speaker005:] Mm hmm. Mm hmm. [speaker004:] Yeah. [speaker005:] Interesting. OK, well, let's see, so, um, do you think that this has changed your speech [disfmarker] n in other ways, in terms of [disfmarker] Myself, I do feel that I'm [disfmarker] when I spend a lot of hours doing this I become less fluent. 
I have more false starts and things. [speaker004:] Really? [speaker005:] Yeah. I don't know if it's [disfmarker] It could just be because there're so many more hours I haven't been talking. [speaker006:] Hmm! [speaker003:] I guess I try to be more careful in my speech, more often. [speaker005:] Good. [speaker003:] Unless I'm really not paying any attention like, you know, if I'm at a party or something, and I'm just sort of having a good time, then it doesn't really matter, but in n you know, normal one on one speech I [disfmarker] I think I tend to try to enunciate more. [speaker005:] Mm hmm. Mm hmm. [speaker003:] Cuz I've been told I mumble. [speaker005:] Oh! Huh! [speaker003:] Um, and I guess I used to be worse about it. I was told by my parents that I used to mumble a lot. [speaker004:] Uh huh. [speaker003:] Uh, and also I [disfmarker] I have this sort of East Coast accent that comes in sometimes that makes it hard for people on the West Coast to understand what I'm saying. [speaker005:] Hmm! [speaker003:] A lot of times like, uh, if I'm not paying attention I'll say something that, uh, will not really be heard. I tend to leave off final consonants, for instance, and people, uh, n I have to be conscious of that sometimes. [speaker005:] Hmm! [speaker003:] Yeah, so. [speaker005:] So mainly for clarity, it sounds like, like in phrasing or strategies. [speaker003:] Yeah. Yeah. [speaker004:] Hmm. One thing that I noticed [disfmarker] I don't know if it's a [disfmarker] really a question of me being more c changing my speech necessarily, but I'm much more aware of interruptions in conversation [disfmarker] in normal conversation. [speaker005:] Oh! [speaker006:] Oh, yeah. That's true. [speaker005:] Oh. [speaker003:] Yeah. 
[speaker004:] So, uh, maybe I am more careful [comment] in the sense that I maybe don't interrupt as much, or j I'm more aware of it when I am doing it, or something like that, but, um, I just pay attention more to the fact that every [disfmarker] pretty much all the time in normal conversation there are interruptions. There's no [disfmarker] I mean, unless you have really aw I mean, I think if there were no interruptions it would sound really awkward. [speaker001:] Mm hmm. [speaker004:] There would be a lot of pauses, and that would kind of i imply uncomfortability, or something. [speaker002:] Hmm. [speaker003:] Right. [speaker001:] Yeah. [speaker004:] But, [speaker005:] I guess it depends partly on the type of interruption. [speaker004:] yeah. [speaker005:] I mean, if it's a clarifying type like [disfmarker] "well do you mean [disfmarker]?" when they're partway through it in a utterance. [speaker004:] Yeah. Yeah. Yeah. It's different. [speaker003:] Yeah. I tend to think of a conversation that has no interruptions as being something that's [disfmarker] n unnatural and forced. Or something like a meeting where, you know, I mean everybody says [disfmarker] their little schpiel [speaker004:] Yeah. [speaker003:] and people are meant to be, you know, pretty [disfmarker] pretty quiet and attentive for the most part. And then they let them finish, and then somebody else comes in and says something about it. [speaker004:] Yeah. [speaker003:] Um, and in a n like a friendly conversation I [disfmarker] I notice that that's not really true. Like what you were saying. [speaker001:] Right. 
[speaker004:] But even in cases like that, [comment] even when it is [disfmarker] see now I'm contradicting myself, but even when it is very [disfmarker] turn taking is very w like people follow the turn taking rules, or whatever, and don't really blatantly interrupt people, um, and I [disfmarker] I remember this actually from a meeting because Liz was complaining there wasn't enough o interruptions, or e not enough overlaps, or something. [speaker003:] Huh! [speaker004:] It wasn't interruptions it was overlaps, that I was thinking about. [speaker005:] Uh huh. I think this is really [disfmarker] [speaker004:] Not enough overlaps. [speaker005:] yeah. [speaker004:] And so she wanted to [disfmarker] there to be an argument or something so that there'd d be more. And ch what she didn't realize was that throughout the meeting up until then there had been, they were just very small, but they were there. I think they're there pretty much a all the time. [speaker005:] I agree. I did [disfmarker] I did do one analysis wh where I looked at one meeting, [speaker003:] That's interesting. [speaker004:] Uh huh. [speaker005:] and, um [disfmarker] th cuz the claim had been, actually o one of the people in the meeting had made the claim in the previous meeting that, "Well, there [disfmarker] probably the overlaps only occur during setup and, um, you know, at the end when we're leaving." [speaker004:] Oh! [speaker006:] Oh yeah, that's totally not true. [speaker002:] Hmm! [speaker003:] No, not at all. [speaker005:] Exactly. Exactly. [speaker004:] Yeah! [speaker005:] Well, and, you know, I think that that's sort of the feeling you would get if [disfmarker] unless you looked at the data at this level. [speaker004:] Yeah! Yeah. [speaker001:] Right. [speaker005:] And so I did an analysis and in fact it was really a pretty constant rate [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker005:] all the way through the entire meeting and everyone was involved. [speaker004:] Mm hmm. 
[speaker005:] Wasn't just one, you know, or two speakers. Everyone was [disfmarker] Yeah. It's quite true. [speaker003:] Kind of makes sense because if you're [disfmarker] if you wanna speak next, it's almost like you got y you gotta get your foot in the door. You gotta be the one who [disfmarker] you know, make sure people recognize that you're the one who's gonna speak next. [speaker004:] Mm hmm. [speaker005:] That's true. [speaker001:] Mm hmm. [speaker005:] I should say, I didn't distinguish between overlaps and [disfmarker] and interruptions. So some of these were backchannels and things like that. [speaker004:] Yeah. [speaker003:] Oh, I see. [speaker005:] But it's still [disfmarker] It's still the case. [speaker003:] Uh huh. [speaker005:] You have [disfmarker] [speaker001:] Yeah. [speaker005:] y you [disfmarker] [speaker001:] I think, um, silences are generally like [vocalsound] not good in conversation too. Cuz once you get a sil You know, it [disfmarker] it [disfmarker] it's to your benefit to [disfmarker] to try to get in right when you think they're done because if there's a silence, not only is it awkward, [comment] but then that sort of cues everybody to start thinking of a new topic to stop the silence, [speaker004:] Yeah. [speaker002:] Hmm. [speaker001:] and then you don't get to talk about what you wanted to talk about [comment] because the whole subject has changed. [speaker004:] And then everybody talks at once. [speaker005:] That's true. That's very true. [speaker004:] You know, thinking about it, [speaker001:] You know? S [speaker004:] I [disfmarker] I think I've only noticed a couple times where there's been actual silence on all channels for a period of time. [speaker001:] Yeah. [speaker006:] Mm hmm. [speaker002:] Hmm! [speaker004:] It's only been like twice. [speaker005:] It's true. That's very rare. [speaker006:] It's rare. [speaker003:] Yeah. [speaker005:] I agree. [speaker004:] Yeah. [speaker001:] Mm hmm. 
[speaker005:] Interesting observation. Yeah. Well, if they're i c if [disfmarker] this has raised a couple of things for me that I wanted to mention [speaker004:] Yeah. [speaker005:] because, you know, at this point, we're at the [disfmarker] at the really fine grain level of [disfmarker] of standardization which I hadn't really thought of specifying in advance before, and, um, wanted to maybe just mention a couple of things. These are, um [disfmarker] Yeah, I don't think that these are very common. But with the idea that [disfmarker] what we're trying to do is encode the communicative stuff, um, when we have these grey areas, um, w um, I wanted to draw attention to the fact that there are sometimes these stretches where a person is trying to say something and formulating their thought, and they're coming out with a bunch of segments, which really aren't part of a word. Have you run across that where they're [disfmarker] they're saying [speaker004:] Hmm. [speaker003:] Mm hmm. Yeah, yeah, yeah. [speaker004:] Yeah, yeah. [comment] Yeah. [speaker001:] Yeah. [speaker005:] and then they say a word. [speaker002:] Oh. [speaker003:] Right. [speaker006:] Yeah. [speaker005:] And, my feeling on that is, that this isn't really a phonological transcript, and potentially those things could probably be rendered, although some of them are not even full segments, they're even sub s subsegmental, I would say. I mean, you got a [disfmarker] a single flap of a vocal cord sometimes, [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker005:] and, you know, they're th it's like the [disfmarker] the apparatus is sort of almost kicking in, they're almost gonna say but it hasn't [disfmarker] somehow this [disfmarker] somehow this vibration hit before the rest of it and [disfmarker] and that's not communicative. [speaker001:] Mm hmm. 
[speaker005:] So, um, my [disfmarker] my the way I do those things when I hear those, if [disfmarker] is to [disfmarker] I use in parentheses one "X" [vocalsound] so it's marked. [speaker003:] OK. [speaker001:] Right. [speaker004:] Mm hmm. [speaker005:] Or if it's [@ @] [comment] then I [comment] [disfmarker] then I put six or seven or however many it is [disfmarker] [speaker004:] Then [disfmarker] [speaker001:] That's what I usually do too. [speaker004:] Right. [speaker006:] I was wondering who had been putting those in. [speaker001:] Yeah. [speaker005:] Great. And then [disfmarker] and then what I do is I put in, um, brackets, um, uncodeable [disfmarker] Sorry, "Uncodeable vocal sounds". [speaker004:] Mm hmm. [speaker006:] Yes. [speaker003:] Oh, OK. OK. [speaker005:] And of course, potentially they're codable, but i I just wanted to mark roughly how man how long it is, [speaker001:] Mm hmm. [speaker005:] then indicate what [disfmarker] what the nature of it is, so no one's gonna try and go back and [disfmarker] and decode it as something that's pechen a a word that was not [disfmarker] It has a different status than parenthesized, uh, parts when there are words involved, [speaker001:] Mm hmm. [speaker005:] just to set it aside, have some sort of a length indication, and sometimes I'll even put them in a [disfmarker] in a separate bin, although that's [disfmarker] that's not necessary. It's just sometimes it works out that way that they [@ @], [speaker006:] Mm hmm. [speaker005:] they do this thing. I shouldn't do that so often. I'm sorry about that. They do this sound, and then there's a pause, and then they start speaking, [speaker006:] Mm hmm. [speaker005:] so then it's really obvious, I can [disfmarker] I can split it apart and it won't be any problem. I wanted to mention that one, [speaker006:] Man! [speaker003:] I'm glad you ment brought that up [speaker005:] yeah. 
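The convention just described, one parenthesized "X" per estimated syllable plus a curly-bracket comment, can be sketched in a few lines. This is an editorial illustration only: the function name is made up, and the exact spacing of the X's is an assumption, since the meeting specifies only "one X per syllable, in parentheses".

```python
def uncodeable(syllables: int) -> str:
    # One "X" per estimated syllable inside parentheses, followed by the
    # curly-bracket comment. Whether the X's are run together or spaced
    # is an assumption; the meeting does not pin that down.
    return "(" + "X" * syllables + ") {Uncodeable vocal sounds}"

print(uncodeable(1))  # a single sub-segmental fragment
print(uncodeable(6))  # a longer stretch of undecipherable vocal sound
```

The point of the comment is to set such stretches apart from parenthesized uncertain words, while still recording roughly how long they are.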
[speaker003:] because I actually [disfmarker] I'm one of the culprits that puts in all the little itsy bitsy tiny sounds in the middle of everything, [speaker004:] Mmm. [speaker006:] I do that too! [speaker003:] because I was [disfmarker] [speaker006:] I used to do that! [speaker003:] because [disfmarker] Yeah. Cuz I was thinking, like, you're just supposed to write down everything you hear as closely as possible. [speaker005:] Well [disfmarker] Mm hmm. [speaker003:] And so, I'm glad you said that because, um, that actually takes a good amount of time, and like pretty strict concentration [vocalsound] to d to do that. [speaker006:] A lot of time. [speaker001:] Yeah. Yeah. [comment] I put in the "X"s. [speaker003:] Um, [speaker005:] Excellent. [speaker003:] and so [disfmarker] [speaker001:] I put in [disfmarker] if it's, um, unintelligible I put [disfmarker] I count how many syllables and then I put in "six X" [disfmarker] [speaker003:] I do that if it's unintelligible, but if I can tell something [disfmarker] if somebody goes "r" I'll put an "R". [speaker005:] Oh, I see. I see. [speaker003:] Eh [disfmarker] Um, so if they go "r e m m", I'll go "R" "EH" "mmm" m "M" "M". [speaker001:] Oh. [speaker002:] Hmm. [speaker003:] So like I [disfmarker] I've been doing that because like that's what I can hear. [speaker005:] OK. [speaker003:] And I'm pretty damn sure that [disfmarker] [speaker001:] Right. [speaker003:] sorry. Sorry! Sorry. Hhh! [speaker001:] That will have to be transcribed with little, uh, symbols. [speaker003:] Sorry! I'm sorry. [speaker006:] Yeah. Like a [disfmarker] a skull and crossbones. [speaker003:] I feel really bad now. Oh no! I'm sorry! [speaker001:] Right. [speaker003:] OK [disfmarker] [speaker005:] That's OK. It's alright. [speaker001:] You've ruined the whole meeting! [speaker005:] No problem. [speaker001:] We can't transcribe any of it now! [speaker006:] It's not [disfmarker] i it's [disfmarker] [speaker003:] Oh no, I'm so sorry. 
[speaker006:] Yeah. It's not like you said the F word. [speaker005:] No, no, no. [speaker006:] It's not [disfmarker] it's not an [disfmarker] [speaker005:] This is not [disfmarker] [speaker006:] it's still a PG [disfmarker] [speaker004:] Yeah, seriously. [speaker005:] I wouldn't give it a second thought. [speaker004:] Do you remember what you were gonna say? [speaker003:] Oh yeah [disfmarker] I [disfmarker] I was [disfmarker] [speaker006:] She's pretty darn sure that, uh, [speaker003:] Ah [disfmarker] and then [disfmarker] tha Yeah, that's wha [speaker005:] You sh Yeah [disfmarker] [speaker006:] she could make it out probably. [speaker001:] OK. [comment] Right. [speaker003:] I'm sorry. [speaker006:] Yeah, no one's ever sworn in any of the meetings! [speaker005:] Well, y you've heard [disfmarker] have you heard "shoot"? [speaker006:] That actually [disfmarker] [speaker005:] I've heard "shoot" a couple of times. [speaker003:] I've heard the [disfmarker] [speaker004:] Yeah. [speaker006:] Not the swearword version. [speaker001:] Hmm [disfmarker] [speaker005:] No, no, no, no, no, no, the [speaker006:] Just "shoot". [speaker005:] "shoot" type, yeah. [speaker003:] I've heard the swearword version. [speaker001:] Yeah. [speaker006:] That's OK. [speaker003:] Remember that? [speaker001:] Oh, you have? [speaker005:] Yeah? [speaker003:] Remember that? [speaker001:] I've never heard the swearword versions. [speaker005:] Oh. [speaker003:] It was a while back. I [disfmarker] it was a long time ago. [speaker005:] I do remember that. Yeah. [speaker003:] Yeah, and I was like, "Oah!" I emailed you, I was like, "M You might wanna check this out." [speaker005:] That's right. That's right. [speaker003:] Yeah, I do remember that. [speaker005:] Yeah. [speaker003:] But it was very quiet. So I can see how that would have been sort of gone over by accident. It was just very quiet, like kind of under the breath [comment] kind of thing, [speaker005:] Uh huh, yeah. [speaker004:] Uh huh. 
[speaker005:] Yeah, it was. [speaker003:] so that's why it was missed. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker004:] Yeah. [speaker003:] I can understand. [speaker006:] Hmm! [speaker002:] Mm hmm. [speaker005:] Mm hmm. [speaker003:] Yeah. [speaker005:] Yeah. B but y t finish your point. So, y [speaker003:] Oh! [vocalsound] I was just [comment] [disfmarker] I was just [vocalsound] saying that, um, I [disfmarker] I can [vocalsound] [disfmarker] I'm pretty sure that I can tell what [disfmarker] exactly what sounds are being produced. [speaker005:] Yes. [speaker003:] So if it's "er", I write an "R". [speaker005:] Mm hmm. [speaker001:] Right. [speaker003:] If it's "mmm", I can tell it's [disfmarker] [speaker004:] Yeah. [speaker001:] Right. [speaker005:] Mm hmm. Mm hmm. [speaker001:] Then again, I always think that it's closer to stick with the X because it's, like, how effective is, you know, the Roman alphabet at deciphering all the sounds that a human can make. [speaker004:] Yeah. [speaker001:] You know, it's not the IPA and [disfmarker] it's just, I [disfmarker] I just sort of think, um, it's best to leave it with X [speaker002:] Mmm. [speaker001:] because I [disfmarker] [vocalsound] y English orthography conventions are just so, you know, uncertain anyway in the first place, and, um [disfmarker] [speaker005:] Mm hmm. [speaker001:] you know, I [disfmarker] I usually only use them when I [disfmarker] I know that, um, the person was trying to make a word. I mean, I know that the [disfmarker] the word they were trying to say and that they were stuttering that, then I feel fine just you know, writing out the letters. But otherwise, I sort of don't feel right trying to write them out because I feel like it's, um [disfmarker] you can't really [disfmarker] [speaker003:] Just not useful? [speaker001:] Yeah, you can't really spell those things with the R with the Roman alphabet, [speaker006:] Mm hmm. [speaker003:] Yeah? [speaker001:] you know? [speaker004:] Yeah. 
[speaker003:] Mm hmm. [speaker001:] It's [disfmarker] it was only set up for [disfmarker] [speaker006:] Right. [speaker001:] it was set up for a different purpose, you know. Not necessarily so phonological. [speaker005:] I agree. And I think there's another problem too, [speaker004:] Yeah. [speaker003:] OK. [speaker005:] and that is that we do have some short words which are communicative. And [disfmarker] and sometimes [disfmarker] So, if you have an "ER" as part of one of these fragments, we do have a lexicalized "er" [speaker001:] Right. [speaker005:] which is a [disfmarker] which is a [disfmarker] a filler item. [speaker003:] Right. [speaker005:] Does have a meaning. [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker005:] So I think for both of the reasons it's good to just leave it in this category of not really communicating in a [disfmarker] in a word like way. [speaker003:] So even if it's "er", or [disfmarker] [speaker001:] Right. [speaker005:] I would keep "er". Well, "E" [disfmarker] "er" [disfmarker] "er" if it's a filler. [speaker003:] Oh, you would keep "er". [speaker005:] So, "I mean, er," [speaker003:] Right, right. [speaker005:] th that [disfmarker] that [disfmarker] that stays. [speaker001:] Mmm [disfmarker] [speaker003:] Right. Oh, OK. [speaker004:] Yeah. [speaker001:] Oh yeah. [speaker005:] But if it's p uh, part on these little fragments, no. [speaker001:] I always [disfmarker] I often transcribe that as "or". I always thought that was "or" sort of with a strange pronunciation. [speaker005:] Oh. I'm making a distinction. [speaker006:] Oh. [speaker005:] So you have the, um, comic "er" where the person says something and then they pretend that they didn't wanna say that, [speaker001:] Right. [speaker005:] and then they throw in an "er". [speaker001:] Oh. [speaker005:] m and that would be an "er". [speaker001:] Oh, I don't get that one. [speaker003:] Uh huh. [speaker004:] Mm hmm. [speaker001:] Right. [speaker005:] ER. 
But if they're saying "or" in a reduced way so it comes out as "er", do it as "or". [speaker001:] R Right. OK. [speaker005:] Cuz that's [disfmarker] that falls within the realm of pronunciation variants. [speaker004:] OK. [speaker001:] m OK. [speaker003:] Right. [speaker001:] Right. [speaker005:] It's just like different ways of saying "and". [speaker001:] Mm hmm. [speaker005:] OK. And I have like, uh, uh, two more things I wanted to say. One of them is, um [disfmarker] Oh, yeah. This is an interesting category to me, and I don't, um, y y don't have to mark this systematically. But when you're going through the meeting, and what I'd like people to do if [disfmarker] if you're not [disfmarker] uh, uh [disfmarker] is to go through the meeting, um [disfmarker] So, if you're doing [disfmarker] you're doing separate channels first and then as a final pass through the mixed channel, I don't think I've made that explicit but it w it's really helpful to go through it a [disfmarker] a final time though the mixed channel so that you hear things in context. And at that point what I sometimes pick up is this meta level, where someone is [disfmarker] finishes someone's [disfmarker] someone else's utterance. [speaker004:] Mm hmm. [speaker005:] So you'll suddenly have this stranded word out in the middle of nowhere, and it's because this other speaker on this other channel was trying to formulate this thought, and they paused, and they paused just enough that this person leaped in and gave them a word. [speaker004:] Mmm. [speaker001:] Mm hmm. [speaker003:] Right. [speaker005:] And I think those are interesting, so when I find that I [disfmarker] I note [disfmarker] note it as, you know, finishing [disfmarker] put capital letters. Capital letter, capital letter, colon, apostrophe "S", utterance. [speaker004:] Oh! [speaker005:] Yeah. [speaker006:] Wait, wait. How do you mark it? [speaker001:] Wait. 
[speaker003:] We add the [disfmarker] [speaker005:] So, if [vocalsound] [disfmarker] so in the examp i in the instance in which [disfmarker] Let's say that, um, Morgan's talking, uh, so, [nonvocalsound] he's talking. And then somewhere there's a bit of a pause, and let's say Adam fills in a word for him. Then what I do is, on Adam's channel, I say [disfmarker] I say, "Completing [disfmarker]" And this [disfmarker] this isn't that common of an occurrence. [comment] "Completing speaker initial with a colon apostrophe" S "'s utterance ". [speaker006:] Oh, I see. [speaker004:] Oh! [speaker001:] Oh. [speaker006:] OK. [speaker005:] And what that means is at the time when we [disfmarker] See we're gonna go to a stage of anonymization. So it's important to have the colon here [speaker001:] Oh. [speaker005:] because that'll make it possible to easily know, you know, who it was without having to do the math involved in figuring out which channel it was. The problem with [disfmarker] I originally wanted to use channel numbers as the referent index, but that [disfmarker] that's not functional because we have these two different numbering schemes. You have the numbering in the Channeltrans interface and you have the numbering in the "key" file [speaker004:] Yeah. [speaker005:] and those can be very different. [speaker001:] Right. Right. [speaker005:] So, you know, to be able to, [speaker004:] Yeah. [speaker005:] uh [disfmarker] Th this is redundant in a useful way in that you know that you're not gonna make a mistake. I tried to do it with the numbers and I was making all sorts of inconsistent problems, so I stopped it. [speaker003:] So that's probably why you, like, added in that, um, thing that automatically puts in the people's initials. [speaker001:] Mm hmm. [speaker004:] Initials? [speaker005:] That's [disfmarker] that's [disfmarker] I did that partly to keep track of things. Absolutely. Absolutely. 
[speaker004:] I I also found that it was easier to deal with the meetings when there are speaker initials. [speaker003:] Uh huh. [speaker005:] Somet [speaker003:] Oh yeah. Initials. [speaker005:] The initials? I'm glad to hear that. [speaker004:] Yeah. [speaker005:] I st I didn't do that systematically until, uh, really rather recently, [speaker004:] Yeah. [speaker005:] so. I I'm glad to know that's useful. [speaker001:] Mm hmm. [speaker005:] It s takes an extra step. Glad it's useful. OK, now I just have one more thing to say and that is that, um, um, like once [disfmarker] this is just such a rea I'm getting down to like the really eensy weensy stuff. But, um, once [vocalsound] I found a comment that, uh, in in curly brackets about mumbling. And, um, i g that's [disfmarker] that's [disfmarker] this is just to mention an instance that happened once, it's really grasping for details. [speaker001:] Mm hmm. [speaker005:] But, um, I [disfmarker] I [disfmarker] f I think, and it may be that it was not from anybody here, so. But I think that, um, "mumble" is only useful in the s to me, in the sense of "whispered", if it's like a manner of speaking, in which case it's a "QUAL". But not in terms of replacing, like, this string of undecipherable fragments. So, [speaker001:] Oh, right. [speaker003:] Right. Oh yeah. [speaker004:] OK. [speaker005:] um, this [disfmarker] this was u when it was used [disfmarker] [speaker001:] They used "mumble" for [disfmarker] [speaker005:] exactly. It was used in place of, uh, something that was in unintelligible. [speaker004:] OK. [speaker003:] To fill in. I see. [speaker001:] Mmm. Right. [speaker005:] In which case, it would be better to use parentheses and [disfmarker] and either exclamation, [comment] or the number of syllables, if you know. [speaker001:] The number "X". [speaker006:] And then do "QUAL mumbled" or "QUAL whispered" or something. [speaker005:] Exactly. If it's a manner of speaking, then it's useful. [speaker006:] I see. 
[speaker001:] Right. [speaker005:] And otherwise the parentheses with an estimate of syllables is better. [speaker004:] OK. [speaker006:] OK. [speaker005:] OK. So, I guess, you know, I don't wanna keep you. I kept you so long. [speaker004:] Can I ask one more thing about these comments. [speaker005:] I really appreciate this. Yes? [speaker004:] So we are for [disfmarker] "QUAL" is used for things like "whispered", or "said while laughing", or things of that nature. [speaker005:] Yes. Mm hmm. [speaker004:] Um, I remember at some point we were doing a difference between vocal and non vocal sounds? [speaker005:] Oh. Yes. Oh, but [disfmarker] Good question. And you don't have to add any of those tags. [speaker004:] Oh! [speaker005:] The only tags that you need to add, is [disfmarker] So [disfmarker] th uh, there's like one exception that's just microscopic, and that is, sometimes [disfmarker] this is again really rare, sometimes I've run across the word "noise". And offhand, I don't know if that's a vocal noise or non vocal noise. [speaker003:] Mmm. [speaker005:] And so then I listen to it and I figure it out. But that's the only place you would use a "vocal" [disfmarker] "non vocal" tag, where it's potentially ambiguous between those two categories. Cuz otherwise, what I do when I get these, is [disfmarker] and of course I do my checks, and then I have a, um [disfmarker] a script that I wrote which will separate everything in [disfmarker] out that has curly brackets, do a listing of them, [speaker001:] Mmm. [speaker005:] and then anything [disfmarker] Some of them already have tags from some previous, uh, modification, but anything that doesn't, I just go through this list, and I [disfmarker] I k tag it right then. And so, in one [disfmarker] in one [disfmarker] in one change, I can modify, you know, five hundred instances of this. [speaker004:] Mm hmm. Mm hmm. [speaker005:] So there's no need to add the "vocal" or the "non vocal". [speaker004:] Very efficient. OK. 
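The workflow described here, separating out everything in curly brackets, listing the distinct comments, and tagging each one once so hundreds of instances can be modified in a single substitution, can be sketched as follows. The real script is not shown in the meeting; the function name and the sample string are illustrative.

```python
import re
from collections import Counter

def list_comments(transcript: str) -> Counter:
    # Pull out every {curly-bracket} comment and count the distinct ones,
    # so each distinct comment can be assigned a tag (VOC / NVC / QUAL /
    # pron) once and that tag substituted across all its occurrences.
    return Counter(re.findall(r"\{([^{}]*)\}", transcript))

sample = "so {laugh} yeah {mike noise} and {laugh} then {breath}"
counts = list_comments(sample)
print(counts.most_common())
```

Working from the distinct-comment list rather than the running text is what makes one decision cover, say, five hundred instances at once.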
But for, like, noise that we know is something like mike noise, or something like a door, whatever, then we can mark it as that [speaker006:] Hmm! [speaker003:] Right. [speaker006:] Mm hmm. [speaker005:] Yes. [speaker004:] and no [speaker005:] You can either [disfmarker] if you mark it "mike noise" then I'll know what kind of noise it is when I go through the thing and put VOC or NVC on the [disfmarker] in the substitution file. [speaker004:] Yeah, [comment] right. Or [disfmarker] o or if it's [disfmarker] [speaker001:] Mm hmm. [speaker004:] Right. [speaker005:] But it's [disfmarker] it's uh, very efficient for me to add those tags later. [speaker004:] OK. [speaker005:] It's really not [disfmarker] The same thing with the "QUAL". Just so long as I can tell which of those three categories it is. Or [disfmarker] or "pron", but, you know, those [disfmarker] that's really very obvious, the pronunciation. [speaker001:] Yeah. [speaker005:] So long as I can tell what they are. [speaker004:] So we do need to add that one? [speaker005:] I think it's good for you to add the "pron". [speaker004:] OK. Sure. OK. [speaker005:] I would appreciate the "pron", [comment] come to think of it. [speaker006:] Good. [speaker005:] But the other three [disfmarker] i just so long as it's clear to me, w you know, from the comment which of those three it is, then I'm fine. [speaker006:] Good. [speaker005:] That's great. Alright, so, um, I guess, um, [speaker006:] Do you wanna read digits? Do you wanna do that? [speaker005:] Oh, thank you, do we have time? Would you mind? [speaker004:] Yeah, that's fine. [speaker005:] Can we do digits? [speaker001:] Yeah. That's fine. [speaker006:] No? [speaker003:] Yeah. [speaker005:] OK, so why don't [disfmarker] do you wanna do it, uh, in unison this time? [speaker004:] That's what we did last time. [speaker006:] Well, we did it in unison last time. [speaker003:] We did it [disfmarker] [speaker005:] Oh we did! [speaker004:] Yeah. [speaker005:] Oh! 
[speaker001:] Right. [speaker005:] Do it separately, then. OK. [speaker006:] Sure did. [speaker001:] Yes. [speaker005:] Good deal. [speaker006:] Oh, and do you want us to mark the t roughly the time we start? [speaker005:] That is really what that time field is. I used to think it was the beginning of the meeting, [speaker001:] Oh, OK. [speaker005:] but yes. Quite right. Quite right. [speaker006:] Oh, OK. [speaker005:] That'd be good. [speaker001:] Oh, alright. [speaker003:] The time we start digits? [speaker006:] So we have to [disfmarker] [speaker004:] Hmm. [speaker005:] Yeah. [speaker003:] OK. [speaker005:] OK. So, who wants to start? Well, I guess I'll start. OK. Transc And the time is [disfmarker] OK. Um, O eight. [comment] OK, I wanted to make one comment and that is, some of these are like two digits and just be sure you say both digits separately. So, one eight instead of eighteen. OK. [speaker006:] Sure. [speaker001:] Oh. Right. [speaker003:] I can go. [speaker005:] OK. OK. Wonderful. Thank you very, very much. I really appreciate this. [speaker006:] No problem. [speaker001:] Sure. [speaker005:] And, um, so now we get to turn the microphones off, [speaker003:] Sure. [speaker005:] and, uh [disfmarker] [speaker002:] So I guess this is more or less now just to get you up to date, Johno. This is what, uh, [speaker003:] This is a meeting for me. [speaker002:] um, Eva, Bhaskara, and I did. [speaker004:] Did you add more stuff to it? [pause] later? [speaker002:] Um. Why? [speaker004:] Um. I don't know. There were, like, the [disfmarker] you know, [@ @] and all that stuff. But. I thought you [disfmarker] you said you were adding stuff but [pause] I don't know. [speaker002:] Uh, no. This is [disfmarker] Um, Ha! Very nice. Um, so we thought that, [vocalsound] We can write up uh, an element, and [disfmarker] for each of the situation nodes that we observed in the Bayes net? So. What's the situation like at the entity that is mentioned? if we know anything about it? 
Is it under construction? Or is it on fire or something [pause] happening to it? Or is it stable? and so forth, going all the way um, f through Parking, Location, Hotel, Car, Restroom, [@ @] [comment] Riots, Fairs, Strikes, or Disasters. [speaker003:] So is [disfmarker] This is [disfmarker] A situation are [disfmarker] is all the things which can be happening right now? Or, what is the situation type? [speaker002:] That's basically [pause] just specifying the [disfmarker] the input for the [disfmarker] w what's [speaker003:] Oh, I see y Why are you specifying it in XML? [speaker002:] Um. Just because it forces us to be specific about the values [pause] here? [speaker003:] OK. [speaker002:] And, also, I mean, this is a [disfmarker] what the input is going to be. Right? So, we will, uh [disfmarker] This is a schema. This is [disfmarker] [speaker003:] Well, yeah. I just don't know if this is th l what the [disfmarker] Does [disfmarker] This is what Java Bayes takes? as a Bayes net spec? [speaker002:] No, because I mean if we [disfmarker] I mean we're sure gonna interface to [disfmarker] We're gonna get an XML document from somewhere. Right? And that XML document will say "We are able to [disfmarker] We were able to observe that w the element, um, [@ @] [comment] of the Location that the car is near." So that's gonna be [disfmarker] [vocalsound] [comment] Um. [speaker003:] So this is the situational context, everything in it. Is that what Situation is short for, shi situational context? [speaker002:] Yep. [speaker003:] OK. [speaker002:] So this is just, again, a an XML schemata which defines a set of possible, uh, permissible XML structures, which we view as input into the Bayes net. Right? [speaker003:] And then we can r [pause] uh possibly run one of them uh transformations? That put it into the format that the Bayes n or Java Bayes or whatever wants? [speaker002:] Yea Are you talking [disfmarker] are you talking about the [disfmarker] the structure? 
[speaker003:] Well it [disfmarker] [speaker002:] I mean when you observe a node. [speaker003:] When you [disfmarker] when you say [pause] the input to the [pause] v Java Bayes, [comment] it takes a certain format, [speaker002:] Um hmm. [speaker003:] right? Which I don't think is this. Although I don't know. [speaker002:] No, it's certainly not this. Nuh. [speaker003:] So you could just [disfmarker] Couldn't you just run a [disfmarker] [speaker002:] XSL. [comment] Yeah. [speaker003:] Yeah. To convert it into the Java Bayes for format? [speaker002:] Yep. [speaker003:] OK. [speaker002:] That's [disfmarker] That's no problem, but I even think that, um [disfmarker] I mean, once [disfmarker] Once you have this sort of as [disfmarker] running as a module [disfmarker] Right? What you want is [disfmarker] You wanna say, "OK, give me the posterior probabilities of the Go there [pause] node, when this is happening." Right? When the person said this, the car is there, it's raining, and this is happening. And with this you can specify the [disfmarker] what's happening in the situation, and what's happening with the user. So we get [disfmarker] After we are done, through the Situation we get the User Vector. So, this is a [disfmarker] [speaker003:] So this is just a specification of all the possible inputs? [speaker002:] Yep. And, all the possible outputs, too. [speaker003:] OK. [speaker002:] So, we have, um, for example, the, uh, Go there decision node which has two elements, going there and its posterior probability, and not going there and its posterior probability, because the output is always gonna be all the decision nodes and all the [disfmarker] the [disfmarker] a all the posterior probabilities for all the values. 
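The input side discussed above — an XML situation document, constrained by the schema, that gets flattened into observed values before the Bayes net is queried — might look like the following minimal sketch. The element names and values here are invented for illustration; the real schema's vocabulary (Parking, Location, Hotel, etc.) is only partly visible in the discussion.

```python
import xml.etree.ElementTree as ET

# Hypothetical Situation document, of the kind the schema would license.
SITUATION_XML = """
<Situation>
  <Parking value="scarce"/>
  <Location value="near"/>
  <Weather value="rain"/>
</Situation>
"""

def situation_to_evidence(xml_text):
    """Flatten a Situation document into a {node: observed value} mapping,
    the shape a JavaBayes-style engine would take as evidence."""
    root = ET.fromstring(xml_text)
    return {child.tag: child.get("value") for child in root}

evidence = situation_to_evidence(SITUATION_XML)
print(evidence)  # {'Parking': 'scarce', 'Location': 'near', 'Weather': 'rain'}
```

In the meeting the conversion is imagined as an XSL transformation rather than Python; either way the job is the same, turning the schema-validated document into the engine's evidence format.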
[speaker003:] And then we would just look at the, eh, Struct that we wanna look at in terms of if [disfmarker] if we're only asking about one of the [disfmarker] So like, if I'm just interested in the going there node, I would just pull that information out of the Struct that gets return that would [disfmarker] that Java Bayes would output? [speaker002:] Um, pretty much, yes, but I think it's a little bit more complex. As, if I understand it correctly, it always gives you all the posterior probabilities for all the values of all decision nodes. So, when we input something, we always get the, uh, posterior probabilities for all of these. Right? [speaker003:] OK. [speaker002:] So there is no way of telling it t not to tell us about the EVA [pause] values. [speaker003:] Yeah, wait I agree, that's [disfmarker] yeah, use [disfmarker] oh, uh [pause] Yeah, OK. [speaker002:] So [disfmarker] so we get this whole list of [disfmarker] of, um, things, and the question is what to do with it, what to hand on, how to interpret it, in a sense. So y you said if you [disfmarker] "I'm only interested in whether he wants to go there or not", then I just look at that node, look which one [disfmarker] [speaker003:] Look at that Struct in the output, right? [speaker002:] Yep. Look at that Struct in the [disfmarker] the output, even though I wouldn't call it a "Struct". But. [speaker003:] Well i well, it's an XML Structure that's being res returned, [speaker002:] Oh. Mm hmm. [speaker003:] right? [speaker002:] So every part of a structure is a "Struct". Yeah. [speaker003:] Yeah, I just uh [disfmarker] I just was [disfmarker] abbreviated it to Struct in my head, and started going with that. [speaker002:] That element or object, I would say. [speaker003:] Not a C Struct. That's not what I was trying to k [speaker002:] Yeah. [speaker003:] though yeah. [speaker002:] OK. 
And, um, the reason is [disfmarker] why I think it's a little bit more complex or why [disfmarker] why we can even think about it as an interesting problem in and of itself is [disfmarker] Um. So. The, uh [disfmarker] Let's look at an example. [speaker003:] Well, w wouldn't we just take the structure that's outputted and then run another transformation on it, that would just dump the one that we wanted out? [speaker002:] Yeah. w We'd need to prune. Right? Throw things away. [speaker003:] Well, actually, you don't even need to do that with XML. D Can't you just look at one specific [disfmarker] [speaker002:] No Yeah, exactly. The [disfmarker] [@ @] [comment] Xerces allows you to say, u "Just give me the value of that, and that, and that." But, we don't really know what we're interested in [pause] before we look at the complete [disfmarker] at [disfmarker] at the overall result. So the person said, um, "Where is X?" and so, we want to know, um, is [disfmarker] Does he want info? o on this? or know the location? Or does he want to go there? Let's assume this is our [disfmarker] our question. [speaker003:] Sure. [speaker002:] Nuh? So. Um. Do this in Perl. So we get [disfmarker] OK. Let's assume this is the output. So. We should con be able to conclude from that that [disfmarker] I mean. It's always gonna give us a value of how likely we think i it is that he wants to go there and doesn't want to go there, or how likely it is that he wants to get information. But, maybe w we should just reverse this to make it a little bit more delicate. So, does he wanna know where it is? or does he wanna go there? [speaker003:] He wants to know where it is. [speaker002:] Right. I [disfmarker] I [disfmarker] I tend to agree.
And if it's [disfmarker] If [disfmarker] [speaker003:] Well now, y I mean, you could [disfmarker] [speaker002:] And i if there's sort of a clear winner here, and, um [disfmarker] and this is pretty, uh [disfmarker] indifferent, then we [disfmarker] then we might conclude that he actually wants to just know where, uh t uh, he does want to go there. [speaker003:] Uh, out of curiosity, is there a reason why we wouldn't combine these three nodes? into one smaller subnet? that would just basically be [pause] the question for [disfmarker] We have "where is X?" is the question, right? That would just be Info on or Location? Based upon [disfmarker] [speaker002:] Or Go there. A lot of people ask that, if they actually just wanna go there. People come up to you on campus and say, "Where's the library?" You're gonna say [disfmarker] y you're gonna say, g "Go down that way." You're not gonna say "It's [disfmarker] It's five hundred yards away from you" or "It's north of you", or [disfmarker] "it's located [disfmarker]" [speaker003:] Well, I mean [disfmarker] But the [disfmarker] there's [disfmarker] So you just have three decisions for the final node, that would link thes these three nodes in the net together. [speaker002:] Um. I don't know whether I understand what you mean. But. Again, in this [disfmarker] Given this input, we, also in some situations, may wanna postulate an opinion whether that person wants to go there now the nicest way, use a cab, or so s wants to know it [disfmarker] wants to know where it is because he wants something fixed there, because he wants to visit t it or whatever. So, it [disfmarker] n I mean [disfmarker] a All I'm saying is, whatever our input is, we're always gonna get the full output. And some [disfmarker] some things will always be sort of too [disfmarker] not significant enough. [speaker003:] Wha Or i or i it'll be tight. You won't [disfmarker] it'll be hard to decide. [speaker002:] Yep. 
[speaker003:] But I mean, I guess [disfmarker] I guess the thing is, uh, this is another, smaller, case of reasoning in the case of an uncertainty, which makes me think Bayes net should be the way to solve these things. So if you had [disfmarker] If for every construction, right? [speaker002:] Oh! [speaker003:] you could say, "Well, there [disfmarker] Here's the Where Is construction." And for the Where Is construction, we know we need to l look at this node, that merges these three things together [speaker002:] Mm hmm. [speaker003:] as for th to decide the response. And since we have a finite number of constructions that we can deal with, we could have a finite number of nodes. [speaker002:] OK. [speaker003:] Say, if we had to y deal with arbitrary language, it wouldn't make any sense to do that, [speaker002:] Mm hmm. [speaker003:] because there'd be no way to generate the nodes for every possible sentence. [speaker002:] Mm hmm. [speaker003:] But since we can only deal with a finite amount of stuff [disfmarker] [speaker002:] So, basically, the idea is to f to feed the output of that belief net into another belief net. [speaker003:] Yeah, so basically take these three things and then put them into another belief net. [speaker002:] But, why [disfmarker] why [disfmarker] why only those three? Why not the whol [speaker003:] Well, I mean, d For the Where Is question. So we'd have a node for the Where Is question. [speaker002:] Yeah. But we believe that all the decision nodes are [disfmarker] can be relevant for the Where Is, and the Where [disfmarker] How do I get to or the Tell me something about. [speaker003:] You can come in if you want. [speaker002:] Yes, it is allowed. [speaker003:] As long as y you're not wearing your h your h headphones. Well, I do I [disfmarker] See, I don't know if this is a [pause] good idea or not. I'm just throwing it out. 
But uh, it seems like we could have [disfmarker] I mea or uh we could put all of the all of the r information that could also be relevant [pause] into the Where Is node answer [speaker002:] Mm hmm. Yep. [speaker003:] node thing stuff. And uh [disfmarker] [speaker004:] OK. [speaker002:] I mean [disfmarker] Let's not forget we're gonna get some very strong [pause] input from [pause] these sub dis from these discourse things, right? So. "Tell me the location of X." Nuh? Or "Where is X located at?" [speaker003:] We u [speaker002:] Nuh? [speaker003:] Yeah, I know, but the Bayes net would be able to [disfmarker] The weights on the [disfmarker] on the nodes in the Bayes net would be able to do all that, wouldn't it? [speaker002:] Mm hmm. [speaker003:] Here's a k Oh! Oh, I'll wait until you're [pause] plugged in. Oh, don't sit there. Sit here. You know how you don't like that one. It's OK. That's the weird one. That's the one that's painful. That hurts. It hurts so bad. I'm h I'm happy that they're recording that. That headphone. The headphone [pause] that you have to put on backwards, with the little [disfmarker] little thing [disfmarker] and the little [disfmarker] little foam block on it? It's a painful, painful microphone. [speaker002:] I think it's th called "the Crown". [speaker003:] The crown? [speaker004:] What? [speaker002:] Yeah, versus "the Sony". [speaker001:] The Crown? Is that the actual name? [speaker002:] Mm hmm. [speaker001:] OK. [speaker002:] The manufacturer. [speaker003:] I don't see a manufacturer on it. Oh, wait, [speaker002:] You w [speaker003:] here it is. h This thingy. Yeah, it's "The Crown". The crown of pain! [speaker001:] Yes. [speaker002:] You're on line? [speaker003:] Are you [disfmarker] are your mike o Is your mike on? [speaker001:] Indeed. [speaker003:] OK. So you've been working with these guys? You know what's going on? [speaker001:] Yes, I have. And, I do. Yeah, alright. [speaker003:] Excellent! [speaker001:] s So where are we? 
[speaker002:] We're discussing this. [speaker001:] I don't think it can handle French, but anyway. [speaker002:] So. Assume we have something coming in. A person says, "Where is X?", and we get a certain [disfmarker] We have a Situation vector and a User vector and everything is fine? An an and [disfmarker] and our [disfmarker] and our [disfmarker] [speaker003:] Did you just sti Did you just stick the m the [disfmarker] the [disfmarker] the microphone actually in the tea? [speaker001:] No. [speaker002:] And, um, [speaker001:] I'm not drinking tea. What are you talking about? [speaker003:] Oh, yeah. Sorry. [speaker002:] let's just assume our Bayes net just has three decision nodes for the time being. These three, he wants to know something about it, he wants to know where it is, he wants to go there. [speaker003:] In terms of, these would be wha how we would answer the question Where Is, right? We u This is [disfmarker] i That's what you s it seemed like, explained it to me earlier w We [disfmarker] we're [disfmarker] we wanna know how to answer the question "Where is X?" [speaker002:] Yeah, but, mmm. Yeah. No, I can [disfmarker] I can do the Timing node in here, too, and say "OK." [speaker003:] Well, yeah, but in the s uh, let's just deal with the s the simple case of we're not worrying about timing or anything. We just want to know how we should answer "Where is X?" [speaker002:] OK. And, um, OK, and, Go there has two values, right?, Go there and not Go there. Let's assume those are the posterior probabilities of that. [speaker001:] Mm hmm. [speaker002:] Info on has True or False and Location. So, he wants to know something about it, and he wants to know something [disfmarker] he wants to know Where it is, [speaker001:] Excuse me. [speaker002:] has these values. And, um, [speaker003:] Oh, I see why we can't do that. [speaker002:] And, um, in this case we would probably all agree that he wants to go there. Our belief net thinks he wants to go there, right? 
[speaker001:] Yeah. Mm hmm. [speaker002:] In the, uh, whatever, if we have something like this here, and this like that and maybe here also some [disfmarker] [speaker001:] You should probably [comment] make them out of [disfmarker] Yeah. [speaker003:] Well, it [speaker002:] something like that, then we would guess, "Aha! He, our belief net, [comment] has s stronger beliefs that he wants to know where it is, than actually wants to go [pause] there." Right? [speaker003:] That it [disfmarker] Doesn't this assume, though, that they're evenly weighted? [speaker004:] True. [speaker003:] Like [disfmarker] I guess they are evenly weighted. [speaker001:] The different decision nodes, you mean? [speaker003:] Yeah, the Go there, the Info on, and the Location? [speaker001:] Well, d yeah, this is making the assumption. Yes. [speaker002:] What do you mean by "differently weighted"? [speaker003:] Like [disfmarker] [speaker002:] They don't feed into anything really anymore. [speaker003:] Or I jus [speaker001:] But I mean, why do we [disfmarker] [speaker003:] Le [speaker001:] If we trusted the Go there node more th much more than we trusted the other ones, then we would conclude, even in this situation, that he wanted to go there. So, in that sense, we weight them equally [speaker002:] OK. [speaker001:] right now. [speaker002:] Makes sense. Yeah. [speaker003:] So the But I guess the k the question [disfmarker] that I was as er wondering or maybe Robert was proposing to me is [disfmarker] [speaker002:] But [disfmarker] [speaker003:] How do we d make the decision on [disfmarker] as to [disfmarker] which one to listen to? [speaker001:] Yeah, so, the final d decision is the combination of these three. So again, it's [disfmarker] it's some kind of, uh [disfmarker] [speaker003:] Bayes net. [speaker001:] Yeah, sure. 
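The "equally weighted" reading agreed on above — each decision node reports posteriors over its values, and the node whose best value wins by the largest margin is taken as the user's intent — can be sketched like this. The node names mirror the discussion; the numbers are made up.

```python
# Posteriors as the net might return them: one distribution per decision node.
posteriors = {
    "GoThere":  {"true": 0.70, "false": 0.30},
    "InfoOn":   {"true": 0.55, "false": 0.45},
    "Location": {"true": 0.50, "false": 0.50},
}

def strongest_intent(posteriors):
    """Return (node, value) for the node whose winning value beats its
    runner-up by the biggest margin; all nodes are trusted equally."""
    def margin(node):
        ranked = sorted(posteriors[node].values(), reverse=True)
        return ranked[0] - ranked[1]
    best = max(posteriors, key=margin)
    value = max(posteriors[best], key=posteriors[best].get)
    return best, value

print(strongest_intent(posteriors))  # ('GoThere', 'true')
```

As the discussion notes, unequal trust in the nodes would just mean scaling each margin by a per-node weight before taking the max.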
[speaker003:] OK so, then, the question i So then my question is t to you then, would be [disfmarker] So is the only r reason we can make all these smaller Bayes nets, because we know we can only deal with a finite set of constructions? Cuz oth If we're just taking arbitrary language in, we couldn't have a node for every possible question, you know? [speaker001:] A decision node for every possible question, you mean? [speaker003:] Well, I [disfmarker] like, in the case of [disfmarker] Yeah. In the ca Any piece of language, we wouldn't be able to answer it with this system, b if we just h Cuz we wouldn't have the correct node. Basically, w what you're s proposing is a n Where Is node, right? [speaker001:] Yeah. [speaker003:] And [disfmarker] and if we [disfmarker] And if someone [disfmarker] says, you know, uh, something in Mandarin to the system, we'd wouldn't know which node to look at to answer that question, [speaker001:] So is [disfmarker] Yeah. [speaker003:] right? [speaker001:] Yeah. [speaker002:] Mmm? [speaker003:] So, but [disfmarker] but if we have a finite [disfmarker] What? [speaker002:] I don't see your point. What [disfmarker] what [disfmarker] what I am thinking, or what we're about to propose here is we're always gonna get the whole list of values and their posterior probabilities. And now we need an expert system or belief net or something that interprets that, that looks at all the values and says, "The winner is Timing. Now, go there." "Uh, go there, Timing, now." Or, "The winner is Info on, Function Off." So, he wants to know [pause] something about it, and what it does. Nuh? Uh, regardless of [disfmarker] of [disfmarker] of the input. Wh Regardle [speaker003:] Yeah, but But how does the expert [disfmarker] but how does the expert system know [disfmarker] how who which one to declare the winner, if it doesn't know the question it is, and how that question should be answered? 
[speaker002:] Based on the k what the question was, so what the discourse, the ontology, the situation and the user model gave us, we came up with these values for these decisions. [speaker003:] Yeah I know. But how do we weight what we get out? As, which one i Which ones are important? So my i So, if we were to it with a Bayes net, we'd have to have a node [disfmarker] for every question that we knew how to deal with, that would take all of the inputs and weight them appropriately for that question. [speaker002:] Mm hmm. [speaker003:] Does that make sense? Yay, nay? [speaker001:] Um, I mean, are you saying that, what happens if you try to scale this up to the situation, or are we just dealing with arbitrary language? [speaker003:] We [disfmarker] [speaker001:] Is that your point? [speaker003:] Well, no. I [disfmarker] I guess my question is, Is the reason that we can make a node f or [disfmarker] OK. So, lemme see if I'm confused. Are we going to make a node for every question? Does that make sense? [disfmarker] Or not. [speaker001:] For every question? Like [disfmarker] [speaker003:] Every construction. [speaker001:] Hmm. I don't [disfmarker] Not necessarily, I would think. I mean, it's not based on constructions, it's based on things like, uh, there's gonna be a node for Go there or not, and there's gonna be a node for Enter, View, Approach. [speaker003:] Wel W OK. So, someone asked a question. [speaker001:] Yeah. [speaker003:] How do we decide how to answer it? [speaker002:] Well, look at [disfmarker] look [disfmarker] Face yourself with this pr question. You get this [disfmarker] You'll have [disfmarker] y This is what you get. And now you have to make a decision. What do we think? What does this tell us? And not knowing what was asked, and what happened, and whether the person was a tourist or a local, because all of these factors have presumably already gone into making these posterior probabilities. [speaker003:] Yeah. 
[speaker002:] What [disfmarker] what we need is a [disfmarker] just a mechanism that says, "Aha! There is [disfmarker]" [speaker003:] I just don't think a "winner take all" type of thing is the [disfmarker] [speaker001:] I mean, in general, like, we won't just have those three, right? We'll have, uh, like, many, many nodes. [speaker002:] Yep. [speaker001:] So we have to, like [disfmarker] So that it's no longer possible to just look at the nodes themselves and figure out what the person is trying to say. [speaker002:] Because there are interdependencies, right? The uh [disfmarker] Uh, no. So if [disfmarker] if for example, the Go there posterior possibility is so high, um, uh, w if it's [disfmarker] if it has reached [disfmarker] reached a certain height, then all of this becomes irrelevant. So. If [disfmarker] even if [disfmarker] if the function or the history or something is scoring pretty good on the true node, true value [disfmarker] [speaker003:] Wel I don't know about that, cuz that would suggest that [disfmarker] I mean [disfmarker] [speaker002:] He wants to go there and know something about it? [speaker003:] Do they have to be mutual Yeah. Do they have to be mutually exclusive? [speaker002:] I think to some extent they are. Or maybe they're not. [speaker003:] Cuz I, uh [disfmarker] The way you describe what they meant, they weren't mutu uh, they didn't seem mutually exclusive to me. [speaker002:] Well, if he doesn't want to go there, even if the Enter posterior proba So. [speaker003:] Wel [speaker002:] Go there is No. Enter is High, and Info on is High. [speaker003:] Well, yeah, just out of the other three, though, that you had in the [disfmarker] [speaker002:] Hmm? [speaker003:] those three nodes. The d They didn't seem like they were mutually exclusive. [speaker002:] No, there's [disfmarker] No. 
But [disfmarker] It's through the [disfmarker] [speaker003:] So th s so, yeah, but some [disfmarker] So, some things would drop out, and some things would still be important. [speaker002:] Mm hmm. [speaker003:] But I guess what's confusing me is, if we have a Bayes net to deal w another Bayes net to deal with this stuff, [speaker001:] Mm hmm. [speaker003:] you know, uh, is the only reason [disfmarker] OK, so, I guess, if we have a Ba another Bayes net to deal with this stuff, the only r reason [pause] we can design it is cuz we know what each question is asking? [speaker001:] Yeah. I think that's true. [speaker003:] And then, so, the only reason [disfmarker] way we would know what question he's asking is based upon [disfmarker] Oh, so if [disfmarker] Let's say I had a construction parser, and I plug this in, I would know what each construction [disfmarker] the communicative intent of the construction was [speaker001:] Mm hmm. [speaker003:] and so then I would know how to weight the nodes appropriately, in response. So no matter what they said, if I could map it onto a Where Is construction, I could say, ah! [speaker001:] Ge Mm hmm. [speaker003:] well the the intent, here, was Where Is ", [speaker001:] OK, right. [speaker003:] and I could look at those. [speaker001:] Yeah. Yes, I mean. Sure. You do need to know [disfmarker] I mean, to have that kind of information. [speaker002:] Hmm. Yeah, I'm also agreeing that [pause] a simple pru [comment] Take the ones where we have a clear winner. Forget about the ones where it's all sort of middle ground. Prune those out and just hand over the ones where we have a winner. Yeah, because that would be the easiest way. We just compose as an output an XML mes [vocalsound] message that says. "Go there [pause] now." "Enter historical information." And not care whether that's consistent with anything. Right? But in this case if we say, "definitely he doesn't want to go there. He just wants to know where it is." 
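The pruning idea floated above — hand over only the decision nodes with a clear winner and drop the near-uniform, "middle ground" ones before composing the output message — could be sketched as below. The 0.6 threshold is an arbitrary placeholder, and the node names are illustrative.

```python
def prune_decisions(posteriors, threshold=0.6):
    """Keep only nodes whose winning value's posterior clears the threshold;
    near-uniform distributions (no clear winner) are dropped."""
    winners = {}
    for node, dist in posteriors.items():
        value, p = max(dist.items(), key=lambda kv: kv[1])
        if p >= threshold:
            winners[node] = value
    return winners

decisions = {
    "GoThere":  {"true": 0.80, "false": 0.20},
    "Endpoint": {"enter": 0.34, "view": 0.33, "approach": 0.33},
    "InfoOn":   {"true": 0.10, "false": 0.90},
}
print(prune_decisions(decisions))  # {'GoThere': 'true', 'InfoOn': 'false'}
```

This is exactly the "somebody needs to zap that" step: the 0.33/0.33/0.33 Endpoint node never makes it into the output message.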
or let's call this [disfmarker] this "Look At H" He wants to know something about the history of. So he said, "Tell me something about the history of that." Now, the e But for some reason the Endpoint Approach gets a really high score, [pause] too. We can't expect this to be sort of at O point [comment] three, three, three, O point, three, three, three, O point, three, three, three. Right? Somebody needs to zap that. You know? Or know [disfmarker] There needs to be some knowledge that [disfmarker] [speaker003:] We [disfmarker] Yeah, but, the Bayes net that would merge [disfmarker] I just realized that I had my hand in between my mouth and my micr er, my and my microphone. So then, the Bayes net that would merge there, that would make the decision between Go there, Info on, and Location, would have a node to tell you which one of those three you wanted, and based upon that node, then you would look at the other stuff. [speaker002:] Yep. [speaker003:] I mean, it i [speaker002:] Yep. [speaker003:] Does that make sense? [speaker002:] Yep. It's sort of one of those, that's [disfmarker] It's more like a decision tree, if [disfmarker] if you want. You first look o at the lowball ones, [speaker003:] Yeah, i [speaker002:] and then [disfmarker] [speaker003:] Yeah, I didn't intend to say that every possible [disfmarker] OK. There was a confusion there, k I didn't intend to say every possible thing should go into the Bayes net, because some of the things aren't relevant in the Bayes net for a specific question. Like the Endpoint is not necessarily relevant in the Bayes net for Where Is until after you've decided whether you wanna go there or not. [speaker002:] Mm hmm. [speaker001:] Right. [speaker003:] Show us the way, Bhaskara. [speaker001:] I guess the other thing is that um, yeah. 
I mean, when you're asked a specific question and you don't even [disfmarker] Like, if you're asked a Where Is question, you may not even look [disfmarker] like, ask for the posterior probability of the, uh, EVA node, right? Cuz, that's what [disfmarker] I mean, in the Bayes net you always ask for the posterior probability of a specific node. So, I mean, you may not even bother to compute things you don't need. [speaker002:] Um. Aren't we always computing all? [speaker001:] No. You can compute, uh, the posterior probability of one subset of the nodes, given some other nodes, but totally ignore some other nodes, also. Basically, things you ignore get marginalized over. [speaker002:] Yeah, but that's [disfmarker] that's just shifting the problem. Then you would have to make a decision, "OK, if it's a Where Is question, which decision nodes do I query?" [speaker001:] Yeah. So you have to make [disfmarker] Yeah. Yes. [speaker002:] That's un [speaker001:] But I would think that's what you want to do. Right? [speaker002:] Mmm. [speaker004:] Well, eventually, you still have to pick out which ones you look at. [speaker002:] Yeah. [speaker004:] So it's pretty much the same problem, [speaker002:] Yeah [disfmarker] it's [disfmarker] it's [disfmarker] it's apples and oranges. [speaker004:] isn't it? [speaker002:] Nuh? I mean, maybe it does make a difference in terms of performance, computational time. So either you always have it compute all the posterior possibilities for all the values for all nodes, and then prune the ones you think that are irrelevant, [speaker001:] Mm hmm. Mmm. [speaker002:] or you just make a p [@ @] [comment] a priori estimate of what you think might be relevant and query those. [speaker001:] Yeah. [speaker003:] So basically, you'd have a decision tree [pause] query, [pause] Go there. If k if that's false, query this one. If that's true, query that one. And just basically do a binary search through the [disfmarker]? 
[speaker001:] I don't know if it would necessarily be that, uh, complicated. But, uh [disfmarker] [speaker003:] Well, in the case of Go there, it would be. [speaker001:] I mean, it w [speaker003:] In the case [disfmarker] Cuz if you needed an If y If Go there was true, you'd wanna know what endpoint was. And if it was false, you'd wanna d look at either Lo Income Info on or History. [speaker001:] Yeah. That's true, I guess. Yeah, [vocalsound] so, in a way you would have that. [speaker003:] Also, I'm somewhat boggled by that Hugin software. [speaker001:] OK, why's that? [speaker003:] I can't figure out how to get the probabilities into it. Like, I'd look at [disfmarker] [speaker001:] Mm hmm. [speaker003:] It's somewha It's boggling me. [speaker001:] OK. Alright. Well, hopefully it's [pause] fixable. [speaker003:] Ju Oh yeah, yeah. I d I just think I haven't figured out what [disfmarker] the terms in Hugin mean, versus what Java Bayes terms are. [speaker001:] It's [disfmarker] there's a [disfmarker] OK. [speaker002:] Um, by the way, are [disfmarker] Do we know whether Jerry and Nancy are coming? [speaker001:] So we can figure this out. [speaker002:] Or [disfmarker]? [speaker001:] They should come when they're done their stuff, basically, whenever that is. So. [speaker003:] What d what do they need to do left? [speaker001:] Um, I guess, Jerry needs to enter marks, but I don't know if he's gonna do that now or later. But, uh, if he's gonna enter marks, it's gonna take him awhile, I guess, and he won't be here. [speaker003:] And what's Nancy doing? [speaker001:] Nancy? Um, she was sorta finishing up the, uh, calculation of marks and assigning of grades, but I don't know if she should be here. Well [disfmarker] or, she should be free after that, so [disfmarker] assuming she's coming to this meeting. I don't know if she knows about it. [speaker003:] She's on the email list, right? [speaker001:] Is she? OK. [speaker002:] Mm hmm. OK. 
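The query ordering agreed on just above — ask the net about Go-there first, then query Endpoint only if it's true, or Info-on/History if it's false, leaving everything unqueried marginalized out — could look like this sketch. `query` is a stand-in for a real JavaBayes/Hugin inference call, and the toy numbers are invented.

```python
def best(dist):
    """Most probable value of a posterior distribution."""
    return max(dist, key=dist.get)

def decide(query):
    """Decision-tree querying: query(node) -> {value: posterior}.
    Only the relevant follow-up nodes are ever queried; the rest are
    implicitly marginalized out by never being computed."""
    go = query("GoThere")
    if go["true"] >= go["false"]:
        return {"GoThere": "true", "Endpoint": best(query("Endpoint"))}
    return {"GoThere": "false",
            "InfoOn": best(query("InfoOn")),
            "History": best(query("History"))}

# Toy stand-in for the inference engine:
TOY = {
    "GoThere": {"true": 0.2, "false": 0.8},
    "InfoOn":  {"true": 0.7, "false": 0.3},
    "History": {"true": 0.4, "false": 0.6},
}
print(decide(TOY.get))  # {'GoThere': 'false', 'InfoOn': 'true', 'History': 'false'}
```

This captures the performance argument made in the meeting: rather than computing posteriors for every node and pruning afterwards, the tree decides a priori which nodes are worth querying.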
Because basically, what [disfmarker] where we also have decided, prior to this meeting is that we would have a rerun of the three of us sitting together [speaker004:] OK. [speaker002:] sometime [pause] this week [pause] again [speaker001:] OK. [speaker002:] and finish up the, uh, values of this. So we have, uh [disfmarker] Believe it or not, we have all the bottom ones here. [speaker003:] Well, I [disfmarker] [speaker004:] You added a bunch of [pause] nodes, for [disfmarker]? [speaker002:] Yep. [speaker004:] OK. [speaker002:] We [disfmarker] we [disfmarker] we have [disfmarker] Actually what we have is this line. Right? [speaker003:] Uh, what do the, uh, structures do? [speaker002:] Hmm? [speaker003:] So the [disfmarker] the [disfmarker] the [disfmarker] For instance, this Location node's got two inputs, that one you [disfmarker] [speaker001:] Four inputs. [speaker002:] Hmm. Four. [speaker001:] Those are [disfmarker] The bottom things are inputs, also. [speaker003:] Oh, I see. [speaker001:] Yeah. [speaker003:] OK, that was OK. That makes a lot more sense to me now. [speaker002:] Yep. [speaker003:] Cuz I thought it was like, that one in Stuart's book about, you know, the [disfmarker] [speaker001:] Alarm in the dog? [speaker003:] U Yeah. [speaker001:] Yeah. [speaker003:] Or the earthquake and the alarm. [speaker001:] Sorry. Yeah, I'm confusing two. [speaker003:] Yeah, there's a dog one, too, but that's in Java Bayes, isn't it? [speaker001:] Right. Maybe. [speaker003:] But there's something about bowel problems or something with the dog. [speaker001:] Yeah. [speaker002:] And we have all the top ones, all the ones to which no arrows are pointing. What we're missing are the [disfmarker] these, where arrows are pointing, where we're combining top ones. So, we have to come up with values for this, and this, this, this, and so forth. And maybe just fiddle around with it a little bit more. And, um. And then it's just, uh, edges, many of edges. 
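What remains to be filled in, per the discussion above, are the conditional probability tables for the combining nodes, the ones arrows point into: the root ("top") nodes only need priors, but a node with several parents needs one probability row per combination of parent values. A sketch of that bookkeeping, with hypothetical node and value names:

```python
# Sketch of a conditional probability table (CPT) for a combining node
# with two binary parents.  Names and numbers are hypothetical.
from itertools import product

parents = ["Income", "History"]     # each binary, for simplicity

# One row per combination of parent values; each row must sum to 1.
cpt = {
    (True,  True):  {"Yes": 0.9, "No": 0.1},
    (True,  False): {"Yes": 0.6, "No": 0.4},
    (False, True):  {"Yes": 0.5, "No": 0.5},
    (False, False): {"Yes": 0.2, "No": 0.8},
}

def cpt_is_valid(table):
    """Every parent combination present, every row summing to 1."""
    combos = set(product([True, False], repeat=len(parents)))
    return (set(table) == combos and
            all(abs(sum(row.values()) - 1.0) < 1e-9
                for row in table.values()))
```

The table size grows exponentially in the number of parents, which is why the nodes with four inputs mentioned above are the ones still left to fiddle with.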
And, um, we won't [comment] meet next Monday. So. [speaker003:] Cuz of Memorial Day? [speaker002:] Yep. [speaker001:] We'll meet next Tuesday, I guess. [speaker002:] Yeah. [speaker003:] When's Jerry leaving for [disfmarker] Italia? [speaker002:] On [disfmarker] on Friday. [speaker001:] Which Friday? [speaker002:] This [disfmarker] this Friday. [speaker001:] OK. [speaker004:] Oh. This Friday? [speaker003:] Ugh. [speaker002:] This Friday. [speaker003:] As in, four days? [speaker002:] Yep. [speaker003:] Or, three days? [speaker001:] Is he [disfmarker] How long is he gone for? [speaker002:] Two weeks. [speaker001:] Italy, huh? What's, uh [disfmarker] what's there? [speaker002:] Well, it's a country. Buildings. People. [speaker003:] But it's not a conference or anything. [speaker001:] Pasta. [speaker003:] He's just visiting. [speaker002:] Hmm? [speaker001:] Right. Just visiting. [speaker002:] Vacation. [speaker001:] It's a pretty nice place, in my brief, uh, encounter with it. [speaker002:] Do you guys [disfmarker] Oh, yeah. So. Part of what we actually want to do is sort of schedule out what we want to surprise him with when [disfmarker] when he comes back. Um, so [disfmarker] [speaker003:] Oh, I think we should disappoint him. [speaker002:] Yeah? You [disfmarker] or have a finished construction parser and a working belief net, and uh [disfmarker] [speaker003:] That wouldn't be disappointing. I think w we should do absolutely no work for the two weeks that he's gone. [speaker002:] Well, that's actually what I had planned, personally. I had [disfmarker] I [disfmarker] I had sort of scheduled out in my mind that you guys do a lot of work, and I do nothing. And then, I sort of [disfmarker] [speaker003:] Oh, yeah, that sounds good, too. [speaker002:] sort of bask in [disfmarker] in your glory. But, uh, i do you guys have any vacation plans, because I myself am going to be, um, gone, but this is actually not really important. Just this weekend we're going camping. 
[speaker003:] Yeah, I'm wanna be this [disfmarker] gone this weekend, too. [speaker002:] Ah. But we're all going to be here on Tuesday again? Looks like it? [speaker004:] Yeah. [speaker002:] OK, then. Let's meet [disfmarker] meet again next Tuesday. And, um, finish up this Bayes net. And once we have finished it, I guess we can, um [disfmarker] and that's going to be more just you and me, because Bhaskara is doing probabilistic, recursive, structured, object oriented, uh, [speaker003:] Killing machines! [speaker002:] reasoning machines. [speaker001:] Yes. [speaker002:] And, um [disfmarker] [speaker003:] Killing, reasoning. What's the difference? [speaker004:] Wait. So you're saying, next Tuesday, is it the whole group meeting, or just us three working on it, or [disfmarker] or [disfmarker]? [speaker002:] Uh. The whole group. And we present our results, our final, [speaker004:] OK. [speaker002:] definite [disfmarker] [speaker004:] So, when you were saying we [pause] need to do a re run of, like [disfmarker] [speaker001:] h What? [speaker004:] What [disfmarker] Like, just working out the rest of the [disfmarker] [speaker002:] Yeah. We should do this th the upcoming days. [speaker004:] This week? [speaker002:] So, this week, yeah. [speaker004:] OK. [speaker003:] When you say, "the whole group", you mean [pause] the four of us, and Keith? [speaker002:] And, Ami might. [speaker003:] Ami might be here, and it's possible that Nancy'll be here? [speaker002:] Yep. [speaker003:] So, yeah. [speaker002:] Because, th you know, once we have the belief net done [disfmarker] [speaker003:] You're just gonna have to explain it to me, then, on Tuesday, how it's all gonna work out. You know. [speaker002:] We will. OK. 
Because then, once we have it sort of up and running, then we can start you know, defining the interfaces and then feed stuff into it and get stuff out of it, and then hook it up to some fake construction parser and [disfmarker] [speaker003:] That you will have in about nine months or so. Yeah. [speaker002:] Yeah. And, um, [speaker003:] The first bad version'll be done in nine months. [speaker002:] Yeah, I can worry about the ontology interface and you can [disfmarker] Keith can worry about the discourse. I mean, this is pretty [disfmarker] Um, I mean, I [disfmarker] I [disfmarker] I hope everybody uh knows that these are just going to be uh dummy values, right? [speaker001:] Which [disfmarker] [speaker002:] where the [disfmarker] [speaker001:] Which ones? [speaker002:] S so [disfmarker] so if the endpoint [disfmarker] if the Go there is Yes and No, then Go there discourse will just be fifty fifty. Right? [speaker001:] Um, what do you mean? If the Go there says No, then the Go there is [disfmarker] [speaker004:] I don't get it. [speaker001:] I don't u understand. [speaker002:] Um. [speaker001:] Like, the Go there depends on all those four things. [speaker002:] Yep. [speaker001:] Yeah. [speaker002:] But, what are the values of the Go there discourse? [speaker001:] Well, it depends on the situation. If the discourse is strongly indicating that [disfmarker] [speaker002:] Yeah, but, uh, we have no discourse input. [speaker001:] Oh, I see. The d See, uh, specifically in our situation, D and O are gonna be, uh [disfmarker] Yeah. Sure. So, whatever. [speaker004:] So, so far we have [disfmarker] Is that what the Keith node is? [speaker002:] Yep. [speaker004:] OK. And you're taking it out? [pause] for now? Or [disfmarker]? [speaker002:] Well, this is D [disfmarker] OK, this, I can [disfmarker] I can get it in here. [speaker004:] All the D's are [disfmarker] [speaker002:] I can get it in here, so th We have the, uh, um, sk let's [disfmarker] let's call it "Keith Johno node". 
[speaker001:] Johno? [speaker002:] There is an H [comment] somewhere printed. [speaker003:] There you go. [speaker001:] Yeah. People have the same problem with my name. [speaker002:] Yeah. [speaker001:] Oops. [speaker002:] And, um, [speaker003:] Does th th does the H go b before the A or after the A? [speaker001:] Oh, in my name? [speaker003:] Yeah. [speaker001:] Before the A. [speaker003:] OK, good. Cuz you kn When you said people have the same problem, I thought [disfmarker] Cuz my H goes after the uh e e e the v [speaker001:] People have the inverse problem with my name. [speaker003:] OK. I always have to check, every time y I send you an email, [comment] a past email of yours, [comment] to make sure I'm spelling your name correctly. [speaker001:] Yeah. That's good. [speaker003:] I worry about you. [speaker001:] I appreciate that. [speaker002:] But, when you abbreviate yourself as the "Basman", you don't use any H's. [speaker001:] "Basman"? Yeah, it's because of the chessplayer named Michael Basman, who is my hero. [speaker002:] OK. [speaker003:] You're a geek. It's O K. I How do you pronou [speaker002:] OK. [speaker003:] How do you pronounce your name? [speaker004:] Eva. [speaker003:] Eva? [speaker004:] Yeah. [speaker001:] Not Eva? [speaker003:] What if I were [disfmarker] What if I were to call you Eva? [speaker004:] I'd probably still respond to it. I've had people call me Eva, but I don't know. [speaker003:] No, not just Eva, Eva. Like if I u take the V and s pronounce it like it was a German V? [speaker002:] Which is F. [speaker003:] Yeah. [speaker004:] Um, no idea then. [speaker002:] Voiced. [speaker004:] What? [speaker003:] It sounds like an F. [speaker004:] I [disfmarker] [speaker003:] There's also an F in German, [speaker004:] OK. [speaker003:] which is why I [disfmarker] [speaker002:] Well, it's just the difference between voiced and unvoiced. [speaker003:] Yeah. [speaker004:] OK. [speaker003:] As long as that's O K. [speaker004:] Um. 
[speaker003:] I mean, I might slip out and say it accidentally. That's all I'm saying. [speaker004:] That's fine. [speaker001:] Yeah. It doesn't matter what those nodes are, anyway, because we'll just make the weights "zero" for now. [speaker002:] Yep. We'll make them zero for now, because it [disfmarker] who [disfmarker] who knows what they come up with, what's gonna come in there. OK. And, um, then should we start on Thursday? [speaker001:] OK. [speaker002:] And not meet tomorrow? [speaker001:] Sure. [speaker002:] OK. I'll send an email, make a time suggestion. [speaker003:] Wait, maybe it's OK, so that [disfmarker] that [disfmarker] that we can [disfmarker] that we have one node per construction. Cuz even in people, like, they don't know what you're talking about if you're using some sort of strange construction. [speaker002:] Yeah, they would still c sort of get the closest, best fit. [speaker003:] Well, yeah, but I mean, the [disfmarker] uh, I mean, that's what the construction parser would do. Uh, I mean, if you said something completely arbitrary, it would f find the closest construction, [speaker002:] Mm hmm. OK. [speaker003:] right? But if you said something that was completel er [disfmarker] h theoretically the construction parser would do that [disfmarker] But if you said something for which there was no construction whatsoever, n people wouldn't have any idea what you were talking about. [speaker002:] Mm hmm. [speaker003:] Like "Bus dog fried egg." I mean. You know. [speaker002:] Or, if even something Chinese, for example. [speaker003:] Or, something in Mandarin, yeah. Or Cantonese, as the case may be. What do you think about that, Bhaskara? [speaker001:] I mean [disfmarker] Well [disfmarker] But how many constructions do [disfmarker] could we possibly have [pause] nodes for? [speaker003:] In this system, or in r [speaker001:] No, we. Like, when people do this kind of thing. [speaker003:] Oh, when p How many constructions do people have? 
[speaker001:] Yeah. [speaker003:] I have not [comment] the slightest idea. [speaker001:] Is it considered to be like in [disfmarker] are they considered to be like very, uh, sort of s abstract things? [speaker003:] Every noun is a construction. [speaker001:] OK, so it's like in the [pause] thousands. [speaker003:] The [disfmarker] Yeah. Any [disfmarker] any form meaning pair, to my understanding, is a construction. [speaker001:] OK. [speaker002:] So. [speaker003:] And form u starts at the level of noun [disfmarker] Or actually, maybe even sounds. [speaker002:] Phoneme. Yep. [speaker003:] Yeah. And goes upwards until you get the ditransitive construction. And then, of course, the c I guess, maybe there can be the [disfmarker] [speaker001:] S [speaker003:] Can there be combinations of the dit [speaker001:] Discourse level [pause] constructions. [speaker003:] Yeah. The "giving a speech" construction, [speaker002:] Rhetorical constructions. Yeah. [speaker001:] Yes. [speaker002:] But, I mean, you know, you can probably count [disfmarker] count the ways. I mean. [speaker003:] It's probab Yeah, I would s definitely say it's finite. [speaker002:] Yeah. [speaker003:] And at least in compilers, that's all that really matters, as long as your analysis is finite. [speaker001:] How's that? [nonvocalsound] How it can be finite, again? [speaker003:] Nah, I can't think of a way it would be infinite. [speaker002:] Well, you can come up with new constructions. [speaker003:] Yeah. [comment] If the [disfmarker] if your [disfmarker] if your brain was totally non deterministic, then perhaps there's a way to get, uh, infin an infinite number of constructions that you'd have to worry about. [speaker001:] But, I mean, in the [nonvocalsound] practical sense, it's impossible. [speaker003:] Right. Cuz if we have a fixed number of neurons [disfmarker]? [speaker001:] Yeah. 
[speaker003:] So the best case scenario would be the number of constructions [disfmarker] or, the worst case scenario is the number of constructions equals the number of neurons. [speaker001:] Well, two to the power of the number of neurons. [speaker003:] Right. But still finite. [speaker002:] OK. [speaker003:] No, wait. Not necessarily, is it? We can end the [pause] meeting. I just [disfmarker] Can't you use different var different levels of activation? across, uh [disfmarker] lots of different neurons, to specify different values? [speaker002:] Mm hmm. [speaker001:] Um, yeah, but there's, like, a certain level of [disfmarker] [speaker003:] There's a bandwidth issue, right? [speaker001:] Bandw Yeah, so you can't do better than something. [speaker003:] Yeah. [speaker002:] Turn off the mikes. Otherwise it gets really tough for the tr [speaker004:] OK. So, uh You can fill those out, uh [pause] after, actually, so So, I got, uh [pause] these results from, uh, Stephane. Also, um, I think that, uh [pause] um [pause] we might hear later today, about other results. I think s that, uh, there were some other very good results that we're gonna wanna compare to. But, [vocalsound] r our results from other [disfmarker] other places, yeah. [speaker001:] I I'm sorry? I didn't [speaker004:] Um, I got this from you [speaker001:] Yeah. [speaker004:] and then I sent a note to Sunil about the [disfmarker] cuz he has been running some other systems other than the [disfmarker] the ICSI OGI one. [speaker001:] Mm hmm. Oh yeah. [speaker004:] So [pause] um, I wan wanna [disfmarker] wanna see what that is. But, uh, you know, so we'll see what it is comparatively later. But [pause] it looks like, um [speaker001:] M yeah. 
[speaker004:] You know most of the time, even [disfmarker] I mean even though it's true that the overall number for Danish [disfmarker] we didn't improve it If you look at it individually, what it really says is that there's, um, uh Looks like out of the six cases, between the different kinds of, uh, matching conditions [pause] out of the six cases, there's basically, um, a couple where it stays about the same, uh, three where it gets better, and one where it gets worse. [speaker001:] Yeah. [speaker004:] Uh, go ahead. [speaker001:] Y Actually, uh, um, for the Danish, there's still some kind of mystery because, um, um, when we use the straight features, we are not able to get these nice number with the ICSI OGI one, I mean. We don't have this ninety three seventy eight, we have eight [speaker005:] Eighty nine forty four. [speaker001:] yeah. Uh, so, uh, that's probably something wrong with the features that we get from OGI. Uh, and Sunil is working on [disfmarker] on trying to [disfmarker] to check everything. [speaker004:] Oh, and [disfmarker] and we have a little time on that [disfmarker] and [disfmarker] actually so [speaker001:] Hmm? [speaker004:] We have a little bit of time on that, actually. [speaker001:] Yeah. [speaker004:] We have a day or so, so When [disfmarker] when [disfmarker] when do you folks leave? [speaker001:] Uh, Sunday. [speaker004:] Sunday? So So, uh Yeah, until Saturday midnight, or something, we have W we [disfmarker] we have time, yeah. Well, that would be good. That'd be good. [speaker001:] Yeah. [speaker004:] Yeah. Uh, and, you know, i u when whenever anybody figures it out they should also, for sure, email Hynek because Hynek will be over there [vocalsound] telling people [vocalsound] what we did, so he should know. [speaker001:] Mmm. [speaker004:] Good, [speaker001:] Yeah. [speaker004:] OK. So, um So, we'll [disfmarker] we'll hold off on that a little bit. 
I mean, even with these results as they are, it's [disfmarker] it's [disfmarker] it's really not that bad. But [disfmarker] but, uh, um And it looks like the overall result as they are now, even without, you know, any [disfmarker] any bugs being fixed is that, uh, on the [disfmarker] the other tasks, we had this average of, uh, forty uh [disfmarker] nine percent, or so, improvement. And here we have somewhat better than that than the Danish, and somewhat worse than that on the German, but I mean, it sounds like, uh, one way or another, the methods that we're doing can reduce the error rate from [disfmarker] from mel ceptrum [pause] down by, you know [pause] a fourth of them to, uh, a half of them. Somewhere in there, depending on the [pause] exact case. So So that's good. I mean, I think that, uh, one of the things that Hynek was talking about was understanding what was in the other really good proposals and [disfmarker] and trying to see if what should ultimately be proposed is some, uh, combination of things. Um, if, uh [disfmarker] Cuz there's things that they are doing [pause] there that we certainly are not doing. And there's things that we're doing that [pause] they're not doing. And [disfmarker] and they all seem like good things. [speaker001:] Yeah. [speaker005:] Mmm, yeah. [speaker004:] So So [speaker003:] How much [disfmarker] how much better was the best system than ours? [speaker004:] Well, we don't know yet. [speaker003:] Mmm. [speaker004:] Uh, I mean, first place, there's still this thing to [disfmarker] to work out, and second place [disfmarker] second thing is that the only results that we have so far from before were really development set results. [speaker003:] Oh, OK. [speaker004:] So, I think in this community that's of interest. It's not like everything is being pinned on the evaluation set. But, um, for the development set, our best result was a little bit short of fifty percent. 
And the best result of any system was about fifty four, where these numbers are the, uh, relative, uh, reduction in, uh, word error rate. [speaker003:] Oh, OK. [speaker004:] And, um, the other systems were, uh, somewhat lower than that. There was actually [disfmarker] there was much less of a huge range than there was in Aurora one. In Aurora one there were [disfmarker] there were systems that ba basically didn't improve things. [speaker003:] Hmm. [speaker004:] And here the [disfmarker] the worst system [pause] still reduced the error rate by thirty three percent, or something, in development set. [speaker003:] Oh, wow. [speaker004:] So [disfmarker] so, you know, sort of everybody is doing things between, well, roughly a third of the errors, and half the errors being eliminated, [vocalsound] uh, and varying on different test sets and so forth. [speaker003:] Mm hmm. [speaker004:] So I think Um [pause] It's probably a good time to look at what's really going on and seeing if there's a [disfmarker] there's a way to combine the best ideas while at the same time not blowing up the amount of, uh, resources used, cuz that's [disfmarker] that's critical for this [disfmarker] this test. [speaker003:] Do we know anything about [disfmarker] who [disfmarker] who's was it that had the lowest on the dev set? [speaker004:] Um, uh, the, uh, the there were two systems that were put forth by a combination of [disfmarker] of, uh, French Telecom and Alcatel. And, um they [disfmarker] they differed in some respects, but they e em one was called the French Telecom Alcatel System the other was called the Alcatel French Telecom System, [vocalsound] uh, which is the biggest difference, I think. But [disfmarker] but there're [disfmarker] there're [disfmarker] there're some other differences, too. Uh, and [disfmarker] and, uh, they both did very well, [speaker003:] Uh huh. [speaker004:] you know? 
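The figures quoted above (fifty-four percent, thirty-three percent, "a third of the errors to half the errors") are relative reductions in word error rate against the mel-cepstrum baseline, i.e. the fraction of the baseline's errors that the system eliminates:

```python
# The "relative reduction in word error rate" used throughout the
# discussion above: improvement over the baseline, expressed as a
# fraction of the baseline's error rate.

def relative_wer_reduction(baseline_wer, system_wer):
    """(baseline - system) / baseline; 0.5 means half the errors removed."""
    return (baseline_wer - system_wer) / baseline_wer

# Hypothetical illustration: a 20% baseline WER cut to 10% is a 50%
# relative reduction, even though the absolute drop is only 10 points.
r = relative_wer_reduction(0.20, 0.10)   # 0.5
```

This is why a system can look dramatically better in relative terms while the absolute error rates, especially in the hard mismatched conditions, remain high.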
So, [vocalsound] um, my impression is they also did very well on [disfmarker] on the [disfmarker] the, uh, evaluation set, but, um, I [disfmarker] I we haven't seen [disfmarker] you've you haven't seen any final results for that yeah. [speaker003:] And they used [disfmarker] the main thing that [disfmarker] that they used was spectral subtraction? Or [speaker004:] There is a couple pieces to it. There's a spectral subtraction style piece [disfmarker] it was basically, you know, Wiener filtering. And then [disfmarker] then there was some p some modification of the cepstral parameters, where they [disfmarker] [speaker001:] Yeah, actually, something that's close to cepstral mean subtraction. But, uh, the way the mean is adapted [disfmarker] um, it's signal dependent. I'm [disfmarker] I'm, uh So, basically, the mean is adapted during speech and not during silence. [speaker004:] Yeah. [speaker001:] But it's very close to [disfmarker] to cepstral mean subtraction. [speaker004:] But some people have done [vocalsound] [pause] exactly that sort of thing, of [disfmarker] of [disfmarker] and the [disfmarker] I mean it's not [disfmarker] To [disfmarker] to look in [pause] speech only, to try to m to measure these things during speech, [speaker001:] Yeah, yeah. [speaker004:] that's p that's not that uncommon. But i it it [disfmarker] so it looks like they did some [disfmarker] some, uh, reasonable things, uh, and they're not things that we did, precisely. We did unreasonable things, [vocalsound] which [disfmarker] because we like to try strange things, and [disfmarker] and, uh, and our things worked too. [speaker003:] Hmm. [speaker004:] And so, um, uh, it's possible that some combination of these different things that were done would be the best thing to do. 
But the only caveat to that is that everybody's being real conscious of how much memory and how much CPU they're using because these, [vocalsound] [vocalsound] [vocalsound] uh, standards are supposed to go on cell phones with m moderate resources in both respects. [speaker003:] Mm hmm. Did anybody, uh, do anything with the models as a [disfmarker] an experiment? Or [speaker004:] Uh, they didn't report it, if they did. [speaker003:] N nobody reported it? [speaker004:] Yeah. I think everybody was focused elsewhere. Um, now, one of the things that's nice about what we did is, we do have a [disfmarker] a, uh [disfmarker] a filtering, which leads to a [disfmarker] a, uh [disfmarker] a reduction in the bandwidth in the modulation spectrum, which allows us to downsample. So, uh, as a result of that we have a reduced, um, transmission rate for the bits. [speaker003:] Mm hmm. [speaker004:] That was misreported the first time out. It [disfmarker] it said the same amount because for convenience sake in the particular way that this is being tested, uh, they were repeating the packets. So it was [disfmarker] they were s they [disfmarker] they had twenty four hundred bits per second, but they were literally creating forty eight hundred bits per second, [vocalsound] um, even though y it was just repeated. [speaker003:] Oh. Mm hmm. Right. [speaker004:] So, uh, in practice [speaker003:] So you could've had a repeat count in there or something. [speaker004:] Well, n I mean, this was just a ph phoney thing just to [disfmarker] to fit into the [disfmarker] the software that was testing the errors [disfmarker] channel errors and so on. [speaker003:] Oh. [speaker004:] So [disfmarker] [speaker003:] Oh. [speaker004:] so in reality, if you put this [disfmarker] this system in into, uh, the field, it would be twenty four hundred bits per second, not forty eight hundred. So, um, so that's a nice feature of what [disfmarker] what we did. 
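The bit-rate arithmetic above is worth making explicit: filtering narrows the modulation spectrum, which permits downsampling the feature frames and halves the transmitted rate; duplicating each packet (as the channel-error test software did, purely for convenience) doubles it right back. A sketch, with hypothetical frame sizes:

```python
# Sketch of the transmission-rate arithmetic discussed above.
# Frame sizes and rates are hypothetical round numbers.

def bit_rate(bits_per_frame, frames_per_second):
    return bits_per_frame * frames_per_second

full = bit_rate(48, 100)         # e.g. 100 frames/s  -> 4800 bits/s
downsampled = bit_rate(48, 50)   # 2x downsampling    -> 2400 bits/s
repeated = downsampled * 2       # packet repetition  -> back to 4800 bits/s
```

So in a fielded system the rate really is the downsampled 2400 bits per second; the 4800 figure was an artifact of the repeated packets in the test harness.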
Um, but, um, well, we still have to see how it all comes out. [speaker003:] Hmm. [speaker004:] Um, and then there's the whole standards process, which is another thing altogether. [speaker003:] When is the development set [disfmarker] I mean, the, uh, uh, test set results due? Like the day before you leave or something? [speaker004:] Uh, probably the day after they leave, but we'll have to [disfmarker] [vocalsound] we'll have to stop it the day before [comment] we leave. [speaker001:] Yeah, yeah. So [speaker003:] Huh. [speaker004:] I think tha I think the [disfmarker] the meeting is on the thirteenth or something. [speaker001:] Yeah, this Tuesday, yeah. [speaker004:] And, uh, they, uh Right. And the [disfmarker] the, uh, results are due like the day before the meeting or something. So [speaker001:] Yeah, probably, well [speaker004:] I th I think [disfmarker] I I think they are, [speaker001:] Yeah, well [speaker004:] yeah. So [pause] [vocalsound] um, since we have a bit farther to travel than [vocalsound] some of the others, [vocalsound] uh, we'll have to get done a little quicker. But, um, I mean, it's just tracing down these bugs. I mean, just exactly this sort of thing of, you know, why [disfmarker] why these features seem to be behaving differently, uh, in California than in Oregon. [speaker003:] Hmm. [speaker004:] Might have something to do with electricity shortage. Uh, we didn't [disfmarker] we didn't have enough electrons here and Uh, but, um Uh, I think, you know, the main reason for having [disfmarker] I mean, it only takes w to run the [disfmarker] the two test sets in [disfmarker] just in computer time is just a day or so, right? So [speaker001:] Yeah, it's very short interval. [speaker004:] yeah. So, I think the who the whole reason for having as long as we have, which was [pause] like a week and a half, is [disfmarker] is because of bugs like that. 
So Huh So, we're gonna end up with these same kind of sheets that have the [pause] the percentages and so on just for the [disfmarker] [speaker001:] Yeah, so there are two more columns in the sheets, [speaker004:] Oh, I guess it's the same sheets, yeah, [speaker001:] two. Yeah, it's the same sheets, [speaker004:] yeah [disfmarker] just with the missing columns filled in. [speaker001:] yeah. Yeah. [speaker004:] Yeah. Well, that'll be good. So, I'll dis I'll disregard these numbers. That's [disfmarker] that's [disfmarker] that's good. [speaker001:] So, Hynek will try to push for trying to combine, uh, different things? Or [speaker004:] Uh, well that's [pause] um [speaker001:] Hmm? [speaker004:] yeah I mean, I think the question is "Is there [disfmarker] is there some advantage?" I mean, you could just take the best system and say that's the standard. But the thing is that if different systems are getting at good things, um, a again within the constraint of the resources, if there's something simple that you can do Now for instance, uh, it's, I think, very reasonable to have a standard for the terminal's side and then for the server's side say, "Here's a number of things that could be done." So, um, everything that we did could probably just be added on to what Alcatel did, and i it'd probably work pretty well with them, too. So, um, uh, that's one [disfmarker] one aspect of it. And then on the terminal's side, I don't know how much, um, memory and [disfmarker] and CPU it takes, but it seems like the filtering [pause] Uh, I mean, the VAD stuff they both had, right? And, um, so [disfmarker] and they both had some kind of on line normalization, right? [speaker001:] Uh, yeah. [speaker004:] Of sorts, yeah? So [disfmarker] so, it seems like the main different there is the [disfmarker] is the, uh, filtering. 
And the filtering [disfmarker] I think if you can [disfmarker] shouldn't take a lot of memory to do that Uh, and I also wouldn't think the CPU, uh, would be much either for that part. So, if you can [disfmarker] if you can add those in [pause] um [pause] then, uh, you can cut the data rate in half. [speaker001:] Yeah. [speaker004:] So it seems like the right thing to do is to [disfmarker] on the [disfmarker] on the terminal's side, take what they did, if it [disfmarker] if it does seem to generalize well to German and Danish, uh, take what they did add in a filter, and add in some stuff on the server's side and [disfmarker] and [disfmarker] and that's probably a reasonable standard. Um [pause] Uh [speaker001:] They are working on this already? Because [disfmarker] yeah, Su Sunil told me that he was trying already to put some kind of, uh, filtering in the [vocalsound] [pause] France Telecom. [speaker004:] Yeah, so that's [disfmarker] that's [disfmarker] that's what That would be ideal [disfmarker] would be is that they could, you know, they could actually show that, in fact, a combination of some sort, [vocalsound] uh, would work even better than what [disfmarker] what any of the systems had. And, um, then it would [disfmarker] it would, uh [pause] be something to [disfmarker] to discuss in the meeting. But, uh, not clear what will go on. Um, I mean, on the one hand, um, sometimes people are just anxious to get a standard out there. I mean, you can always have another standard after that, but [vocalsound] this process has gone on for a while on [disfmarker] already and [disfmarker] and people might just wanna pick something and say, "OK, this is it." And then, that's a standard. Uh, standards are always optional. It's just that, uh, if you disobey them, then you risk not being able to sell your product, or [pause] [vocalsound] Uh [pause] um And people often work on new standards while an old standard is in place and so on. 
So it's not final even if they declared a standard. The other hand, they might just say they just don't know enough yet to [disfmarker] to declare a standard. So you [disfmarker] you [disfmarker] you will be [disfmarker] you will become experts on this and know more [disfmarker] far more than me about the tha this particular standards process once you [disfmarker] you go to this meeting. So, be interested in hearing. So, uh, I'd be, uh, interested in hearing, uh, your thoughts now I mean you're almost done. I mean, you're done in the sense that, um, you may be able to get some new features from Sunil, and we'll re run it. Uh, but other than that, you're [disfmarker] you're basically done, right? So, uh, I'm interested in hearing [disfmarker] hearing your thoughts about [pause] where you think we should go from this. [speaker001:] Yeah. [speaker004:] I mean, we tried a lot of things in a hurry, and, uh, if we can back off from this now and sort of take our time with something, and not have doing things quickly be quite so much the constraint, what [disfmarker] what you think would be the best thing to do. [speaker001:] Uh, well Hmm Well, first, uh, to really have a look at [disfmarker] at the speech [pause] [vocalsound] from these databases because, well, we tried several thing, but we did not really look [vocalsound] at what what's happening, and [vocalsound] where is the noise, and [speaker004:] OK. [speaker001:] Eh [speaker004:] It's a novel idea. Look at the data. OK. [speaker001:] Yeah. [speaker004:] Or more generally, I guess, what [disfmarker] what is causing the degradation. [speaker001:] Yeah, yeah. 
Actually, there is one thing that [disfmarker] well [pause] Um, generally we [disfmarker] we think that [vocalsound] most of the errors are within phoneme classes, and so I think it could be interesting to [disfmarker] to see if it [disfmarker] I don't think it's still true when we add noise, and [vocalsound] so we have [disfmarker] I [disfmarker] I guess the confusion ma the confusion matrices are very different when [disfmarker] when we have noise, and when it's clean speech. And probably, there is much more [pause] between classes errors for noisy speech. [speaker004:] Mm hmm. [speaker001:] And [vocalsound] so, um Yeah, so perhaps we could have a [disfmarker] a large gain, eh, just by looking at improving the, uh, recognition, not of phonemes, but of phoneme classes, simply. [speaker004:] Mm hmm. [speaker001:] And [vocalsound] which is a s a s a simpler problem, perhaps, but [disfmarker] which is perhaps important for noisy speech. [speaker004:] The other thing that strikes me, just looking at these numbers is, just taking the best cases, I mean, some of these, of course, even with all of our [disfmarker] our wonderful processing, still are horrible kinds of numbers. But just take the best case, the well matched [pause] uh, German case after [disfmarker] er well matched Danish after we [disfmarker] [speaker001:] Mm hmm. [speaker004:] the kind of numbers we're getting are about eight or nine [pause] uh [pause] p percent [pause] error [pause] per digit. [speaker001:] Mm hmm. Yeah. [speaker004:] This is obviously not usable, right? [speaker001:] No. [speaker004:] I mean, if you have ten digits for a phone number [comment] I mean, every now and then you'll get it right. [speaker001:] Sure. 
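[Editor's aside: the within-class versus between-class error split proposed above — checking whether noisy-speech confusions stay inside phoneme classes or cross them — amounts to partitioning the off-diagonal mass of a confusion matrix. All counts and the two-class grouping below are hypothetical.]

```python
import numpy as np

# Hypothetical 4-phone confusion matrix: rows = reference, cols = hypothesis.
confusions = np.array([[90,  5,  3,  2],
                       [ 6, 88,  4,  2],
                       [ 4,  3, 85,  8],
                       [ 2,  3,  9, 86]])
# Class labels group the phones, e.g. class 0 = vowels, class 1 = stops.
phone_class = np.array([0, 0, 1, 1])

total = confusions.sum()
correct = np.trace(confusions)
# Off-diagonal entries where reference and hypothesis share a class.
same_class = sum(confusions[i, j]
                 for i in range(4) for j in range(4)
                 if i != j and phone_class[i] == phone_class[j])
cross_class = total - correct - same_class

print("within-class error rate: ", same_class / total)   # 28/400 = 0.07
print("between-class error rate:", cross_class / total)  # 23/400 = 0.0575
```

Comparing these two rates on clean versus noisy confusion matrices is exactly the check suggested above: if noise inflates the between-class share, recognizing phoneme classes (rather than phonemes) becomes the interesting subproblem.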
[speaker004:] I mean, it's [disfmarker] it's, uh, [vocalsound] um So, I mean, the other thing is that, uh [disfmarker] And [disfmarker] and [disfmarker] a and [disfmarker] and also, um [pause] part of what's nice about this is that this is, uh, [vocalsound] um [pause] a realistic [disfmarker] almost realistic database. I mean, it's still not people who are really trying to accomplish something, but [disfmarker] but, uh, within the artificial setup, it isn't noise artificially added, you know, simulated, uh, additive noise. [speaker001:] Mm hmm. [speaker004:] It's real noise condition. And, um, [vocalsound] the [disfmarker] the training [disfmarker] the training, I guess, is always done on the close talking [speaker001:] No, actually [disfmarker] actually the well matched condition [pause] is [pause] still quite di still quite difficult. [speaker004:] No? [speaker001:] I mean, it's [disfmarker] they have all these data from the close mike and from the distant mike, [vocalsound] from different driving condition, open window, closed window, [speaker004:] Yeah. [speaker001:] and they take all of this and they take seventy percent, I think, for training and thirty percent for testing. [speaker005:] Mm hmm. [speaker001:] So, training is done [vocalsound] on different conditions and different microphones, and testing also is done [pause] on different microphone and conditions. So, probably if we only take the close microphones, [vocalsound] I guess the results should be much much better than this. [speaker004:] I see. [speaker001:] Mmm. [speaker004:] Oh, OK, that explains it partially. [speaker001:] Uh [speaker004:] Wha what about i in [disfmarker] so the [disfmarker] the [disfmarker] [speaker001:] Yeah, so [disfmarker] there is this, the mismatched is, um [pause] the same kind of thing, [speaker004:] go ahead. [speaker001:] but [pause] the driving conditions, I mean the speed and the kind of road, is different for training and testing, is that right? [speaker005:] Yeah. 
[speaker001:] And the last condition is close microphone for training and distant for testing. Yeah. [speaker004:] Uh, OK, so [speaker001:] So [disfmarker] [vocalsound] s so [disfmarker] [speaker004:] I see. So, yeah, so the high [disfmarker] so the [disfmarker] right [disfmarker] so the highly mismatched [vocalsound] case [pause] is in some sense a good model for what we've been, you know, typically talking about when we talk about additive noise in [disfmarker] And so [disfmarker] and i i k it does correspond to a realistic situation in the sense that, [vocalsound] um, people might really be trying to, uh, call out telephone numbers or some or something like that, in [disfmarker] in their cars [speaker001:] Yeah. [speaker004:] and they're trying to connect to something. [speaker001:] Mmm. [speaker004:] Um [speaker001:] Actually, yeah, it's very close to clean speech training because, well, because the close microphone [vocalsound] and noisy speech testing, [speaker004:] Yeah. Yeah. Yeah. [speaker001:] yeah. Mmm. [speaker004:] And the well matched condition [pause] is what you might imagine that you might be able to approach, if you know that this is the application. You're gonna record a bunch on people in cars and so forth, and do these training. And then, uh, when y you sell it to somebody, they will be a different person with a different car, and so on. So it's [disfmarker] this is a an optim somewhat optimistic view on it, uh, so, you know, the real thing is somewhere in between the two. Uh, [speaker001:] Yeah. [speaker004:] uh, but [speaker001:] But the [disfmarker] I mean, the [pause] th th [speaker004:] Even the optimistic one is [speaker001:] it doesn't work. [speaker004:] Yeah, right. [speaker001:] It [disfmarker] [speaker004:] Right, it doesn't work. 
So, in a way, that's, you know, that's sort of the dominant thing is that even, say on the development set stuff that we saw, the, uh, the numbers that, uh, that Alcatel was getting when choosing out the best single numbers, [vocalsound] it was just [disfmarker] you know, it wasn't good enough for [disfmarker] for [pause] a [disfmarker] a [disfmarker] for a real system. [speaker001:] Mmm. Mm hmm. [speaker004:] You [disfmarker] you [disfmarker] you, [vocalsound] um So, uh, we still have stuff to do. [speaker001:] Yeah. [speaker004:] Uh, and, uh I don't know So, looking at the data, where, you know [disfmarker] what's the [disfmarker] what's [disfmarker] what's th what's characteristic i e yeah, I think that's [disfmarker] that's a good thing. Does a any you have any thoughts about what else [vocalsound] y you're thinking that you didn't get to that you would like to do if you had more time? Uh [speaker005:] Oh, f a lot of thing. Because we trying a lot of s [pause] thing, and we doesn't work, [vocalsound] we remove these. Maybe [vocalsound] we trying again with the articulatory feature. I don't know exactly because we tried [disfmarker] we [disfmarker] some [disfmarker] one experiment that doesn't work. Um, forgot it, something [pause] I don't know exactly [speaker004:] Mm hmm. [speaker005:] because, tsk [comment] [vocalsound] maybe do better some step the general, [vocalsound] eh, diagram. [speaker004:] Mm hmm. [speaker005:] I don't know exactly s to think what we can improve. [speaker004:] Yeah, cuz a lot of time it's true, there were a lot of times when we've tried something and it didn't work right away, even though we had an intuition that there should be something there. And so then we would just stop it. Um And, uh, one of the things [disfmarker] I don't remember the details on, but I remember at some point, when you were working with a second stream, and you tried a low pass filtering to cepstrum, in some case you got [disfmarker] [speaker005:] MSG Yeah. 
[speaker004:] Well, but it was [comment] an MSG like thing, but it wasn't MSG, right? Uh, you [disfmarker] y I think in some case you got some little improvement, but it was, you know, sort of a small improvement, and it was a [disfmarker] a big added complication, so you dropped it. But, um, that was just sort of one try, right? You just took one filter, threw it there, [speaker001:] Yeah, [speaker004:] right? And it seems to me that, um, if that is an important idea, which, you know, might be, that one could work at it for a while, as you're saying. [speaker001:] Hmm. [speaker004:] And, uh Uh, and you had, you know, you had the multi band things also, and, you know, there was issue of that. [speaker001:] Yeah, [speaker005:] Mm hmm. [speaker004:] Um, Barry's going to be, uh, continuing working on multi band things as well. We were just talking about, um, [vocalsound] some, uh, some work that we're interested in. Kind of inspired by the stuff by Larry Saul with the, uh [pause] uh, learning articulatory feature in [disfmarker] I think, in the case of his paper [disfmarker] with sonorance based on, uh, multi band information where you have a [disfmarker] a combination of gradient learning an and, uh, EM. [speaker001:] Mm hmm. [speaker004:] Um, and [pause] [vocalsound] [vocalsound] Um, so, I think that, you know, this is a, uh [disfmarker] this is a neat data set. Um, and then, uh, as we mentioned before, we also have the [disfmarker] the new, uh, digit set coming up from recordings in this room. So, there's a lot of things to work with. Um and, uh what I like about it, in a way, is that, uh, the results are still so terrible. Uh [pause] [vocalsound] Uh [pause] [vocalsound] I mean, they're much better than they were, you know. We're talking about thirty to sixty percent, uh, error rate reduction. That's [disfmarker] that's really great stuff to [disfmarker] to do that in relatively short time. 
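[Editor's aside: two bits of back-of-the-envelope arithmetic behind the numbers quoted in this discussion — the eight-or-nine-percent per-digit error mentioned earlier, and the "thirty to sixty percent error rate reduction", which is a relative figure. The independence assumption and the 20% baseline are illustrative.]

```python
# ~9% error per digit: probability a 10-digit phone number comes out
# entirely correct, assuming digit errors are independent.
per_digit_error = 0.09
string_correct = (1 - per_digit_error) ** 10
print(f"10-digit string correct: {string_correct:.2f}")   # 0.39

# A relative error-rate reduction scales the baseline error:
# e.g. a hypothetical 20% WER cut by 50% relative lands at 10% WER.
baseline_wer, relative_reduction = 0.20, 0.50
print(f"new WER: {baseline_wer * (1 - relative_reduction):.2f}")  # 0.10
```

The first number is why 9% per digit is "obviously not usable": only about two in five full phone numbers would be recognized correctly.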
But even after that it's still, you know, so poor that [disfmarker] that, uh, no one could really use it. So, um I think that's great that [disfmarker] because [disfmarker] and y also because again, it's not something [disfmarker] sometimes we've gotten terrible results by taking some data, and artificially, you know, convolving it with some room response, or something [disfmarker] we take a very [disfmarker] Uh, at one point, uh, Brian and I went downstairs into the [disfmarker] the basement where it was [disfmarker] it was in a hallway where it was very reverberant and we [disfmarker] we made some recordings there. And then we [vocalsound] [disfmarker] we, uh [disfmarker] uh, made a simulation of the [disfmarker] of the room acoustics there and [disfmarker] and applied it to other things, [speaker001:] Mm hmm. [speaker004:] and uh But it was all pretty artificial, and [disfmarker] and, you know, how often would you really try to have your most crucial conversations in this very reverberant hallway? Um [pause] So, uh [pause] This is what's nice about the Aurora data and the data here, is that [disfmarker] is that it's sort of a realistic room situation [pause] uh, acoustics [disfmarker] acoustic situation, both terms in noise and reflections, and so on and n n And, uh, uh, with something that's still relatively realistic, it's still very very hard to do very well. So Yeah. [speaker001:] Yeah, so d well Actually, this is [disfmarker] tha that's why we [disfmarker] well, it's a different kind of data. We're not [disfmarker] we're not used to work with this kind of data. [speaker004:] Yeah. [speaker001:] That's why we should have a loo more closer look at what's going on. [speaker005:] Mm hmm. [speaker001:] Um Yeah. So this would be the first thing, and then, of course, try to [disfmarker] well, [vocalsound] kind of debug what was wrong, eh, when we do Aurora test on the MSG [pause] particularly, and on the multi band. [speaker004:] Yeah. Yeah. Yeah. 
[speaker001:] Uh [speaker004:] Yeah. Yeah. No, I [disfmarker] I think there's lots of [disfmarker] lots of good things to do with this. So Um So let's [disfmarker] I guess [pause] You were gonna say something else? Oh, OK. What do you think? [speaker003:] About [speaker004:] Anything [speaker003:] About other experiments? Uh, now, I'm interested in, um, uh [pause] looking at the experiments where you use, um [pause] uh, data from multiple languages to train the neural net. And I don't know how far, or if you guys even had a chance to try that, but [pause] that would be some it'd be interesting to me. [speaker001:] Yeah, but [speaker004:] S b [speaker001:] Again, it's the kind of [disfmarker] of thing that, uh, we were thin thinking [disfmarker] thinking that it would work, but it didn't work. And, eh, so there is kind of [disfmarker] of [pause] not a bug, but something wrong in what we are doing, perhaps. [speaker004:] Yeah. [speaker003:] Right. Right. Right. [speaker001:] Uh, something wrong, perhaps in the [disfmarker] just in the [disfmarker] the fact that the labels are [disfmarker] well [speaker003:] Mm hmm. [speaker001:] What worked best is the hand labeled data. [speaker003:] Mm hmm. [speaker001:] Um Uh, so, yeah. I don't know if we can get some hand labeled data from other languages. [speaker003:] Yeah. [speaker001:] It's not so easy to find. [speaker003:] Right. [speaker001:] But [pause] that would be something interesting t to [disfmarker] to see. [speaker003:] Yeah, yeah. [speaker004:] Yeah. Also, uh, [vocalsound] I mean, there was just the whole notion of having multiple nets that were trained on different data. So one form of different data was [disfmarker] is from different languages, but the other Well, i in fact, uh, m in those experiments it wasn't so much combining multiple nets, it was a single net that had different [speaker001:] Yeah. [speaker004:] So, first thing is would it be better if they were multiple nets, for some reason? 
Second thing is, never mind the different languages, just having acoustic conditions rather than training them all up in one, would it be helpful to have different ones? So, um That was a question that was kind of raised by Mike Shire's thesis, and on [disfmarker] in that case in terms of reverberation. Right? That [disfmarker] that sometimes it might be better to do that. But, um, [vocalsound] I don't think we know for sure. So, um Right. So, next week, we, uh, won't meet because you'll be in Europe. Whe when are you two getting back? [speaker005:] Um, I'm [speaker001:] You on Friday or S on Saturday [speaker005:] Sunday [speaker001:] or [pause]? [speaker005:] because it's [disfmarker] it's less expensive, the price [disfmarker] the price the ticket. [speaker001:] S oh yeah, Sunday, yeah. [speaker004:] Yeah, that's right. You've gotta S have a Saturday overnight, right? [speaker001:] I'll be back on Tuesday. [speaker004:] Tuesday. [speaker003:] Where [disfmarker] where's the meeting? [speaker004:] Uh, Amsterdam, I think, [speaker001:] Yeah, Amsterdam. [speaker004:] yeah? [speaker003:] Uh huh. [speaker004:] Yeah. Yeah, yeah. Yep. Um [pause] So, we'll skip next week, and we'll meet two weeks from now. And, uh, I guess the main topic will be, uh, you telling us what happened. [speaker001:] Yeah. [speaker005:] Yeah. [speaker004:] Uh, so Yeah, well, if we don't have an anything else to discuss, we should, uh, turn off the machine and then say the real nasty things. [speaker003:] Should we do digits first? [speaker002:] Oh, yeah, digits. [speaker004:] Oh yeah, digits! [speaker001:] Yeah. [speaker004:] Yeah. Good point. Yeah, good thinking. Why don't you go ahead. [speaker003:] OK. OK. [speaker005:] Channel. [speaker003:] What channel am I on? Oh, channel two. [speaker007:] Make sure to turn your microphone on. There's a battery. [speaker005:] Channel. [speaker002:] There we go. [speaker007:] OK. Your channel number's already on this blank sheet. [speaker002:] Yeah. 
[speaker007:] So you just [disfmarker] If you can [disfmarker] [speaker006:] Channel five? Channel five. [speaker005:] Channel whatever. [speaker006:] I'm on channel five. [speaker002:] Camera one, camera two. [speaker005:] What am I? [speaker006:] Little low? [speaker005:] Channel four? [speaker006:] Channel five. [speaker005:] This number four? [speaker006:] Channel five. [speaker005:] OK. [speaker006:] OK. [speaker007:] The gai the gain's up at it [disfmarker] what it usually is, but if you think it's [disfmarker] [speaker006:] Is it? [speaker007:] Yeah. It's sort of a default. But I can set it higher if you like. [speaker006:] Oh. Maybe it should be a little higher. [speaker007:] Yeah? [speaker006:] It's not showing much. Test, test, test, test, test, test, test, test, test, test. OK, that [disfmarker] that seems better? Yeah? OK, good. Ah, that's good, that's good. That's Ahh. Mmm. So I [disfmarker] I had a question for Adam. Have we started already? [speaker007:] Well, we started recording, but [disfmarker] Yeah. [speaker006:] Yeah. Is Jane around or [disfmarker]? [speaker004:] I saw her earlier. [speaker006:] Uh. [speaker007:] She can just walk in, I guess, or [disfmarker] [speaker004:] I think [disfmarker] Yeah. [speaker006:] Right. [speaker004:] She'll probably come up. [speaker007:] Since we're starting late I figured we'd better just start. [speaker006:] Yeah. Great idea. I was gonna ask Adam to, uh, say if he thought anymore about the demo stuff because [vocalsound] it occurred to me that this is late May and the DARPA meeting is in [pause] mid July. Uh, but [vocalsound] I don't remember w what we [disfmarker] I know that we were gonna do something with the transcriber interface is one thing, but I thought there was a second thing. Anybody remember? [speaker007:] Well, we were gonna do a mock up, like, question answering or something, I thought, that was totally separate from the interface. Do you remember? 
Remember, like, asking questions and retrieving, [vocalsound] but in a pre stored fashion. [speaker006:] Mm hmm. Right. [speaker007:] That was the thing we talked about, I think, before the transcriber [disfmarker] Come on in. [speaker006:] Yeah. Alright. So anyway, you have to sort out that out and get somebody going on it cuz we're [disfmarker] got a [disfmarker] got a month left basically. So. [speaker007:] You like these. Right? OK, good. [speaker006:] OK. Um OK. So, what are we g else we got? You got [disfmarker] you just wrote a bunch of stuff. [speaker007:] No. That was all, um, previously here. [speaker006:] Oh. [speaker007:] I was writing [pause] the digits and then I realized I could xerox them, [speaker006:] Oh, oh. [speaker007:] because I didn't want people to turn their heads from these microphones. So. We all, by the way, have the same digit form, for the record. So. [speaker006:] That's cool. [speaker007:] Yeah. [speaker006:] So, the choice is, uh, which [disfmarker] which do we want more, the [disfmarker] the [disfmarker] the comparison, uh, [vocalsound] of everybody saying them at the same time or the comparison of people saying the same digits at different times that [disfmarker]? [speaker007:] It's just cuz I didn't have any more digit sheets. [speaker006:] I know that. [speaker007:] So. [speaker006:] But, you know, which opportunity should we [speaker007:] Yeah. [speaker003:] Unison. [speaker007:] I mean, it [disfmarker] Actually it might be good to have them separately and have the same exact strings. [speaker006:] exploit? [vocalsound] Unison. [speaker007:] I mean, we could use them for normalizing or something, but it of course goes more quickly doing them in unison. [speaker006:] I guess we'll see [speaker007:] I don't know. [speaker006:] i I guess it's dependent on [speaker007:] See how long we go. [speaker006:] how long we go and how good the snack is out there. Yeah. Hmm. Get some advance intelligence. 
[speaker005:] But anyway, they won't be identical as somebody is saying zero in some [disfmarker] sometimes, you know, saying O, and so, it's not i not identical. [speaker007:] Right. Right. [speaker006:] Yeah. We'd have to train. [speaker007:] We'd be like a chorus. [speaker005:] OK. [speaker006:] Yeah. We'd have to get s get some experience. [speaker003:] Greek chorus. [speaker002:] Yeah. [speaker007:] Yes. [speaker006:] Yeah. Really [pause] boring chorus. Um. Do we [pause] have an agenda? Adam usually tries to put those together, but he's ill. [speaker004:] I've got a couple of things to talk about. [speaker006:] So. Yeah. Uh ju what [disfmarker] what might those be? [speaker004:] Uh, IBM stuff and, um, just getting [vocalsound] uh, meeting information organized. [speaker006:] Meeting info organized. OK. Um. [speaker003:] Are you implying that it's currently disorganized? [speaker004:] In my mind. [speaker006:] Is there stuff that's happened about, um, uh, the [pause] SRI recognizer et cetera, tho those things that were happening before with [disfmarker]? Y y you guys were doing a bunch of experiments with different front ends and then with [disfmarker] [speaker003:] Well. [speaker006:] Is [disfmarker] is that still sort of where it was, uh, the other day? [speaker003:] We're improving. [speaker006:] We're improving. [speaker003:] Yeah. [speaker004:] Now the [disfmarker] the [disfmarker] You saw the note that the PLP now is getting basically the same as the MFCC. [speaker006:] Right. [speaker004:] Right? [speaker006:] Right. [speaker003:] Yeah. Actually it looks like it's getting better. [speaker006:] Oh. [speaker003:] So. But [disfmarker] but it's not [disfmarker] [speaker006:] Just with [disfmarker] with age, kind of. [speaker003:] With age. Yeah. [speaker006:] Yeah. Yeah. [speaker003:] But, uh, that's not d directly related to me. Doesn't mean we can't talk about it. 
Um, it seems [disfmarker] It looks l I haven't [disfmarker] The [disfmarker] It's [disfmarker] The experiment is still not complete, but, um, it looks like the vocal tract length normalization is working beautifully, actually, [speaker006:] Mm hmm. [speaker003:] w using the warp factors that we computed for the SRI system and just applying them to the [vocalsound] ICSI front end. [speaker006:] That's pretty funny. [speaker003:] Yeah. [speaker007:] So you just need to [pause] copy over to this one. [speaker006:] OK. [speaker003:] Just had to take the reciprocal of the number because they have different meanings in the two systems. [speaker001:] OK. [speaker006:] Ah! Yeah. Well, that's always good to do. [speaker003:] Yeah. [speaker006:] OK. OK. Uh [disfmarker] [speaker003:] But one issue actually that just came up in discussion with Liz and [disfmarker] and Don was, um, as far as meeting recognition is concerned, um, we would really like to, uh, move, uh, to, uh, doing the recognition on automatic segmentations. [speaker006:] Yeah. [speaker003:] Because in all our previous experiments, we had the [disfmarker] uh, you know, we were essentially cheating by having the, um, you know, the h the hand segmentations as the basis of the recognition. [speaker006:] Mm hmm. [speaker003:] And so now with Thilo's segmenter working so well, I think we should [pause] consider doing a [disfmarker] [speaker005:] Mmm. So. [speaker002:] Come on. [speaker005:] Yeah. We [disfmarker] But [disfmarker] [speaker006:] Y think [disfmarker] you think we should increase the error rate. [speaker003:] uh, doing [disfmarker] [speaker005:] Anyway. Yeah. [speaker006:] Good. [speaker005:] Yeah. [speaker003:] Yeah. [speaker006:] Yeah. [speaker003:] Yeah. [speaker005:] That that's what I wanted to do anyway, [speaker007:] Yeah. [speaker003:] Yeah. [speaker005:] so we should just get together and [disfmarker] [speaker003:] Yeah. 
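[Editor's aside: the remark above about taking the reciprocal of the VTLN warp factors can be sketched as follows. The two conventions assumed here — one system scaling the frequency axis by alpha, the other by 1/alpha — are a plausible reading of "they have different meanings in the two systems", not a documented fact about either front end.]

```python
def convert_warp(alpha):
    """Convert a VTLN warp factor between two hypothetical conventions.

    If system A warps the frequency axis as f' = alpha * f while
    system B defines its factor as the inverse scaling, then reusing
    system A's per-speaker factors in system B means taking the
    reciprocal."""
    return 1.0 / alpha

# A speaker warped by 0.95 in one convention becomes ~1.053 in the other.
print(round(convert_warp(0.95), 3))   # 1.053
```

Applying the conversion twice recovers the original factor, which is a quick sanity check that the two conventions really are reciprocals.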
[speaker007:] And even [disfmarker] The good thing is that since you, um, have high recall, [comment] even if you have low precision cuz you're over generating, that's good because we could train noise models in the recognizer for these kinds of, uh, transients and things that come from the microphones, [speaker005:] Yeah. [speaker003:] Right. [speaker005:] Yeah. [speaker007:] but [vocalsound] I know that if we run recognition unconstrained on a whole waveform, we do very poorly because we're [disfmarker] we're getting insertions in places what [disfmarker] that you may well be cutting out. [speaker003:] Well [disfmarker] [speaker005:] Yeah. [speaker006:] Mm hmm. [speaker007:] So we do need some kind of pre segmentation. [speaker003:] We should [disfmarker] we should consider doing some extra things, like, um, you know, retraining or adapting the [disfmarker] [vocalsound] the models for background noise to the [disfmarker] to this environment, for instance. [speaker007:] Mmm. Yeah. [speaker005:] Yeah. [speaker007:] And, yeah, using Thilo's, you know, posteriors or some kind of [disfmarker] or [disfmarker] [speaker003:] So. [speaker007:] right now they're [disfmarker] they're discrete, yes or no for a speaker, to consider those particular speaker background models. [speaker005:] Yeah. [speaker007:] So. [speaker003:] Right. [speaker007:] There's lots of ins interesting things that could be done. [speaker005:] Yeah. Yeah. We should do that. [speaker007:] So. [speaker006:] Good. So, uh, why don't we, uh, do the IBM stuff? You had some [pause] thing about that? [speaker004:] Yeah. So, um, talked with Brian and gave him the alternatives to the single beep at the end of each utterance that we had generated before. [speaker006:] Right. [speaker004:] And so [disfmarker] [speaker006:] The, uh, Chuck chunks. [speaker004:] Yeah. [speaker005:] Hmm. [speaker004:] The Chuck chunks. Right. 
And so he talked it over with the transcriber and the transcriber thought that the easiest thing for them would be if there was a beep and then the nu a number, a digit, and then a beep, uh, at the beginning of each one [speaker006:] Yeah. Yeah. [speaker004:] and that would help keep them from getting lost. And, um, [vocalsound] so Adam wrote a little script to generate those style, uh, beeps and so we're [disfmarker] [speaker003:] Where'd you get the digits from? [speaker004:] I came up here and just recorded the numbers one through ten. [speaker001:] They sound really good. [speaker004:] So. [speaker007:] That's a great idea. [speaker004:] Does it sound OK? [speaker001:] Yeah. [speaker004:] So, um [disfmarker] Yeah. We just used those. [speaker003:] And do you splice them into the [pause] waveform? Or [disfmarker]? [speaker004:] Yeah. He [disfmarker] then he d I recorded [disfmarker] Actually, I recorded one through ten three times at three different speeds and then he picked. [speaker003:] Right. Mm hmm. [speaker004:] He liked the fastest one, so he just cut those out [vocalsound] and spliced them in between, uh, two beeps. [speaker005:] It will be funny [speaker001:] It sounds like a radio announcer's voice. Really. [speaker005:] uh [disfmarker] [speaker004:] Does it? [speaker001:] Yeah, yeah. [speaker005:] It will be funny when you're really reading digits, and then there are the chunks with [disfmarker] with your [pause] digits in? [speaker004:] Yeah. With my [disfmarker] [speaker007:] Oh, right. [speaker001:] Oh that's right. [speaker005:] Yeah. [speaker004:] That'll throw them, [speaker001:] Now actually, [speaker004:] huh? [speaker001:] we're [disfmarker] Are we handling [disfmarker]? [speaker006:] Uh, maybe we should have you record A, B, C for those or something. [speaker004:] Yeah. [comment] Huh! Maybe. 
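[Editor's aside: the beep-number-beep chunking scheme described above — splicing a recorded digit between two beeps at the start of each utterance chunk — reduces to simple waveform concatenation. All names and the zero-sample stand-ins below are hypothetical; a real version would load actual beep and digit recordings at a matching sample rate.]

```python
import numpy as np

def make_marker(beep, digit_recordings, index):
    """Build the beep + spoken-number + beep marker for chunk `index`.

    `beep` is a 1-D sample array; `digit_recordings` maps chunk numbers
    to recorded number announcements."""
    return np.concatenate([beep, digit_recordings[index], beep])

def chunk_with_markers(chunks, beep, digit_recordings):
    """Prefix each audio chunk with its marker and join everything
    into one waveform for the transcribers."""
    pieces = []
    for i, chunk in enumerate(chunks, start=1):
        pieces.append(make_marker(beep, digit_recordings, i))
        pieces.append(chunk)
    return np.concatenate(pieces)

beep = np.zeros(100)                          # stand-in for a beep tone
digits = {i: np.zeros(200) for i in range(1, 11)}  # stand-in recordings
chunks = [np.zeros(1000), np.zeros(1500)]
out = chunk_with_markers(chunks, beep, digits)
print(len(out))   # 2 markers of 400 samples + 2500 chunk samples = 3300
```

Because each marker follows a fixed beep-number-beep template, the transcribers can match it mechanically and never lose their place between chunks.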
And she said it wasn't gonna [disfmarker] the transcriber said it wouldn't be a problem cuz they can actually make a template, uh, that has beep, number, beep. [speaker005:] OK. [speaker004:] So for them it'll be very quick [speaker006:] Yeah. [speaker004:] to [disfmarker] to put those in there [vocalsound] when they're transcribing. So, um, we [disfmarker] We're gonna send them one more sample meeting, uh, and Thilo has run his segmentation. Adam's gonna generate the chunked file. And then, um, I'll give it to Brian and they can try that out. And when we get that back we'll see if that sort of fixes the problem we had with, uh, too many beeps in the last transcription. [speaker006:] OK. Do w do [disfmarker] what [disfmarker] Do you have any idea of the turn around on [disfmarker] on those steps you just said? [speaker007:] Great. [speaker006:] Uh. [speaker004:] Uh. Our s our [disfmarker] On our side? or including IBM's? [speaker006:] Including IBM's. [speaker004:] Well, I don't know. The last one seemed like it took a couple of weeks. [speaker006:] OK. [speaker004:] Um, maybe even three. Uh, that's just the I B M side. Our side is quick. I mean, I [disfmarker] I don't know. How long does your [disfmarker]? [speaker005:] It should [@ @] be finished today or something. Yeah. [speaker006:] Well, I meant the overall thing. e e u u [comment] The reason I'm asking is because, uh, Jane and I have just been talking, and she's just been doing. [vocalsound] Uh, e a, you know, further hiring of transcribers. [speaker004:] Yeah. Mm hmm. Mm hmm. [speaker006:] And so we don't sort of really know exactly what they'll be doing, how long they'll be doing it, and so forth, because right now she has no choice but to operate in the mode that we already have working. [speaker004:] Right. [speaker006:] And, uh, so it'd be [disfmarker] It'd be good to sort of get that resolved, uh, soon as we could, [speaker004:] Yeah. 
[speaker006:] and then [disfmarker] [speaker004:] I [disfmarker] Yeah, I [disfmarker] I hope [@ @] [comment] we can get a better estimate from this [pause] one that we send them. [speaker006:] Mm hmm. [speaker004:] So. Um. I [disfmarker] I don't know yet how long that'll take. [speaker006:] Yeah. Um [disfmarker] I mean in particular I would [disfmarker] I would really hope that when we do this DARPA meeting in July that we sort of have [disfmarker] we're [disfmarker] we're into production mode, somehow [disfmarker] [speaker004:] Mm hmm. [speaker006:] You know, that we [disfmarker] we actually [vocalsound] have a stream going and we know how [disfmarker] how well it does and how [disfmarker] and how it operates. [speaker004:] Yeah. [speaker006:] I think that would [disfmarker] that would certainly be a [disfmarker] a very good thing to know. [speaker004:] Right. Right. [speaker006:] OK. Uh. Maybe before we do the meeting info organize thing, maybe you could say relevant stuff about where we are in transcriptions. [speaker001:] OK. So, um, we [disfmarker] [vocalsound] Uh, the transcribers have continued to work past what I'm calling "set one", which was the s the set that I've been, uh [disfmarker] OK, talking about up to this point, but, uh, they've gotten five meetings done in that set. Right now they're in the process of being edited. Um, the, um [disfmarker] Let's see, I hired two transcribers today. I'm thinking of hiring another one, which will [disfmarker] because we've had a lot of attrition. And that will bring our total to [disfmarker] [speaker006:] They die off after they do this for a while. [speaker001:] Yeah. [speaker004:] Burn out. [speaker006:] Yeah. [speaker001:] Well, you know, it's [disfmarker] it's various things. So, one of them had a baby. 
Um, you know, one of them really w wasn't planning [disfmarker] [speaker003:] Oh, that was an unfor unforeseen side effect of [disfmarker] [speaker001:] Eh, one of them, um, had never planned to work past January. I mean, it's th all these various things, cuz we, you know, we presented it as possibly a month project back in January and [disfmarker] and [disfmarker] and [disfmarker] and [disfmarker] Um, so it makes sense. Uh, through attrition we [disfmarker] we've [disfmarker] we're down to [disfmarker] to two, but they're really solid. We're really lucky the two that we kept. And, um [disfmarker] Well, I don't mean [disfmarker] I don't mean anything against the others. [comment] What I mean is we've got a good cause [disfmarker] a good core. No. We had a good core [disfmarker] [speaker007:] Well, they won't hear this since they're going. They won't be transcribing this meeting. [speaker001:] Yeah, but still. I mean, I d it's just a matter of we [disfmarker] w we're [disfmarker] we've got, uh, [speaker006:] No backs. [speaker001:] two of the ones who [disfmarker] who, um, ha had been putting in a lot of hours up to this point and they're continuing to put in a [disfmarker] a lot of hours, which is wonderful, and excellent work. And so, then, in addition, um, I hired two more today and I'm planning to h hire a third one with this [disfmarker] within this coming week, but [disfmarker] [vocalsound] but the plan is [disfmarker] just as, uh, Morgan was saying we discussed this, and the plan right now is to keep the staff on the [disfmarker] on the leaner side, you know, rather than hiring, like, eight to ten right now, [speaker006:] Mm hmm. [speaker001:] because if the IBM thing comes through really quickly, then, um, we wouldn't wanna have to, uh, you know, lay people off and stuff. So. And this way it'll [disfmarker] I mean, I got really a lot of response for [disfmarker] for my notice and I think I could hire additional people if I [pause] wish to. 
[speaker006:] Yeah. An and the other thing is, I mean, in the unlikely event [disfmarker] and since we're so far from this, it's a little hard to plan this way [disfmarker] in the unlikely event that we actually find [vocalsound] that we have, uh, transcribers on staff who are twiddling their thumbs because, you know, there's, you know, [vocalsound] all the stuff that [disfmarker] that was sitting there has been transcribed and they're [disfmarker] and they're faster [disfmarker] the [disfmarker] the pipeline is faster than [disfmarker] [vocalsound] uh, than the generation, um, eh, i in [disfmarker] in the day [disfmarker] e event that that day actually dawns, uh, I [disfmarker] I bet we could find some other stuff for them to do. So I [disfmarker] I think [vocalsound] that, eh, eh, a as we were talking, if we [disfmarker] if we hire twelve, then we could, you know, run into a problem later. [speaker001:] Oh, yes. [speaker006:] I mean, we also just couldn't sustain that forever. But [disfmarker] but, um [disfmarker] for all sorts of reasons [disfmarker] but if we hire f you know, f we have five on staff [disfmarker] five or six on staff at any given time, then [vocalsound] it's a small enough number so we can be flexible either way. [speaker001:] Good. OK. [speaker006:] Good. [speaker007:] It'd be great, too, if, um, we can [disfmarker] we might need some help again getting the tighter boundaries or some hand [disfmarker] to experiment with, um [disfmarker] [vocalsound] you know, to have a ground truth for this segmentation work, which [disfmarker] I guess you have some already that was really helpful, and we could probably use more. [speaker005:] Mmm, yeah. That was a thing I [disfmarker] I planned working on, is, uh, to use the [disfmarker] the transcriptions which are done by now, and to [disfmarker] to use them as, uh [disfmarker] [speaker007:] Oh. Oh, the new ones [speaker005:] Yeah. [speaker007:] with the tighter boundaries. Yeah. [speaker005:] Yeah. 
And to use them for [disfmarker] for training a [disfmarker] or for [disfmarker] fo whatever. Yeah. To [disfmarker] to create some speech nonspeech labels out of them, and [disfmarker] Yeah, but that [disfmarker] that's a thing w was [disfmarker] w what I'm just looking into. [speaker007:] OK. [speaker001:] The [disfmarker] the [disfmarker] the pre segmentations are so much [disfmarker] are s so extremely helpful. Now there was, uh, I g guess [disfmarker] So, a couple weeks ago I needed some new ones and it happened to be during the time that he was on vacation [disfmarker] f for just very few days you were away. [speaker005:] Yeah. [speaker001:] But it happened to be during that time I needed one, so I [disfmarker] [vocalsound] so I started them on the non pre segmented and then switched them over to yours and, um, they, um [disfmarker] you know, they always appreciate that when they have that available. And he's, uh, usually, eh, uh, um [disfmarker] Um. So they really appreciate it. But I was gonna say that they do adjust it once in a while. You know, once in a while there's something like, [speaker005:] Yeah, sure. [speaker001:] um, and e Actually you talked to them. Didn't you? Did you? Have you [disfmarker]? [speaker005:] Yeah. I talked to Helen. [speaker001:] And [disfmarker] and [disfmarker] and she was [disfmarker] And so, I asked her [disfmarker] I mean, They're very perceptive. I really want to have this meeting of the transcribers. I haven't done it yet, but I wanna do that and she's out of town, um, for a couple of weeks, but I wanna do that when she returns. Um, cuz she was saying, you know, in a [disfmarker] in a span of very short period [disfmarker] we asked [disfmarker] It seems like the ones that need to be adjusted are these [disfmarker] these [disfmarker] these things, and she was saying the short utterances, uh, the, um [disfmarker] [speaker007:] Hmm. [speaker005:] Mmm. Yeah. 
[speaker001:] you know, I mean, you're [disfmarker] You're aware of this. [speaker005:] Yeah. [speaker001:] But [disfmarker] but actually i it's so correct for so much of the time, that it's an enormous time saver and it just gets tweaked a little around the boundaries. [speaker007:] That's great. [speaker001:] So. Um. Yeah. I think it'd be interesting to combine these. [speaker005:] Yeah. [speaker007:] Is there actually a record of where they change? I mean, you can compare, do a diff on the [disfmarker] just so that we [vocalsound] knew [disfmarker] [speaker001:] You could do it. It's [disfmarker] it's complicated in that [vocalsound] um, [vocalsound] hhh, i hhh, i [speaker005:] Yeah. Actually, when [disfmarker] when they create new [disfmarker] yeah, new segments or something, it will be, uh, not that easy but [disfmarker] hmm. I think [pause] one could do that. [speaker007:] I mean, if we keep a old copy of the old time marks [speaker005:] Yeah. [speaker007:] just so that if we run it we know whether we're [disfmarker] which ones were cheating [speaker005:] Yeah. Yeah. [speaker007:] and [speaker005:] That would be great, yeah, to know that. [speaker001:] There is a [disfmarker] there is one problem with that and that is when they start part way through then what I do is I merge what they've done with the pre segmented version. [speaker007:] which one would be good. [speaker005:] Yeah. [speaker001:] So it's not a pure [disfmarker] it's not a pure condition. Wha what you'd really like is that they started with pre segmented and were pre segmented all the way through. [speaker007:] Mm hmm. [speaker001:] And, um [disfmarker] [@ @] [comment] [disfmarker] I, uh [disfmarker] the [disfmarker] it wasn't possible for about four of the recent ones. But, it will be possible in the future [speaker005:] Yeah. [speaker001:] because we [disfmarker] we're, um. [speaker007:] Mmm, that's great. [speaker005:] It would. Yeah. [speaker007:] Yeah. 
As long as we have a record, I guess, of the original [pause] automatic one, we can always find out how well [pause] we would do fr from the recognition side by using those boundaries. [speaker005:] Yeah. Yeah. [speaker007:] Um. You know, a completely non cheating version. [speaker005:] Yeah. Yeah. [speaker007:] Also if you need someone to record this meeting, I mean, I'm happy to [disfmarker] for the transcribers [disfmarker] I could do it, or Chuck or Adam. [speaker001:] Thank you. [speaker006:] OK. So, uh, u you were saying something about organizing the meeting info? [speaker004:] Yeah. So, um, uh, Jane and Adam and I had a meeting where we talked about the reorganization of the [pause] directory structure for all of the meeting [disfmarker] [speaker006:] Did you record it? [speaker004:] No. For all the Meeting Recorder data. We should have. Um. And so we've got a plan for what we're gonna do there. And then, Jane also s prepared a [disfmarker] um, started getting all of the [disfmarker] the meetings organized, so she prepared a [disfmarker] [vocalsound] a spreadsheet, which I spent the last couple of days adding to. So I went through all of the data that we have collected so far, and have been putting it into, uh, a spreadsheet [vocalsound] with start time, the date, the old meeting name, the new meeting name, the number of speakers, the duration of the meeting, comments, you know, what its transcription status is, all that kind of stuff. And so, the idea is that we can take this and then export it as HTML and put it on the Meeting Recorder web page [speaker007:] Oh, great. [speaker004:] so we can keep people updated about what's going on. Um, I've gotta get some more information from Jane cuz I have some [disfmarker] some gaps here that I need to get her to fill in, but [vocalsound] so far, um, [vocalsound] as of Monday, the fourteenth, um, we've had a total number of meeting sixty two hours of meetings that we have collected. 
And, um [disfmarker] Uh, some other interesting things, average number of speakers per meeting is six. Um, and I'm gonna have on here the total amount that's been transcribed so far, but I've got a bunch of [disfmarker] uh, that's what I have to talk to Jane about, figuring out exactly which ones have [disfmarker] have been completed and so forth. But, um, [vocalsound] this'll be a nice thing that we can put up on the [disfmarker] the web site and people can [vocalsound] be informed of the status of various different ones. And [vocalsound] it'll also list, uh, like under the status, if it's at IBM or if it's at ICSI, uh, or if it's completed or which ones we're excluding and [disfmarker] and there's a place for comments, so we can, [vocalsound] um, say why we're excluding things and so forth. So. [speaker006:] Now would the ones that, um, are already transcribed [disfmarker] we h we have enough there that c you know, we've already done some studies and so forth and [disfmarker] um, shouldn't we go through and do the business es u of [disfmarker] of having the, um, uh, participants approve it, uh, for [disfmarker] approve the transcriptions for distribution and so forth? [speaker001:] Um, interesting idea. In principle, I [disfmarker] I would say yes, although I still am doing some [disfmarker] the final pass editing, trying to convert it over to the master file as the [disfmarker] being the channelized version and it's [disfmarker] Yeah, it seems like I get into that a certain way and then something else intervenes [comment] and I have to stop. Cleaning up the things like the, uh, uh, places where the transcriber was uncertain, and [disfmarker] and doing spot checking here and there. So, um, uh, I guess it would make sense to wait until th that's done, um, but [disfmarker] but [disfmarker] [speaker006:] Well, le let me put in another sort of a milestone kind of [disfmarker] as [disfmarker] as I did with the, uh, uh [disfmarker] the [disfmarker] the pipeline. 
[speaker001:] Yeah. [speaker006:] Um, we are gonna have this DARPA [pause] meeting in the middle of July, and I think it w it'd be [disfmarker] [speaker001:] Yes. [speaker006:] given that we've been [disfmarker] we've given a couple public talks about it already, spaced by months and months, I think it'd be pretty bad if we continued to say none of this is available. Um. [speaker001:] It'll certainly be done by then. Yeah. [speaker006:] Right. So we can s we [disfmarker] we wanna be able to say "here is a subset that is available right now" [speaker001:] Mm hmm. That's right. [speaker006:] and that's has been through the legal issues and so forth. [speaker001:] That's right. [speaker006:] So. [speaker001:] Yeah. That's right. So that [disfmarker] [speaker006:] OK? [speaker001:] OK. [speaker006:] So, by [disfmarker] before July. [speaker003:] And they don't have to approve, you know, th an edited version, they can just give their approval to whatever version [speaker001:] Well, maybe [disfmarker] [speaker006:] Well, in principle, yes. But, I mean, i if [disfmarker] if [disfmarker] if somebody actually did get into some legal issue with it then we [speaker003:] Bu Yeah. But th I mean, the editing will continue. Presumably if [disfmarker] if s errors are found, they will be fixed, but they won't change the [disfmarker] the content of the meetings. [speaker004:] Content, really. [speaker001:] Well, see, this is the [disfmarker] this is the issue. Subtleties. [speaker003:] So. [speaker007:] Well, i if Jane is clarifying question question, then, you know, how can they agree to it before they know her final version? [speaker001:] The other thing, too, is there can be subtleties where a person uses this word instead of that word, which [@ @] [comment] could've been transcribed in the other way. [speaker006:] Yeah. [speaker007:] Thing [disfmarker] [speaker001:] And no and they wouldn't have [vocalsound] been slanderous if it had been this other word. You know? 
[speaker006:] I it [disfmarker] you know, there there is a point at which I agree it becomes ridiculous because, you know, you could do this final thing and then a year from now somebody could say, you know, that should be a period and not a question mark. Right? And you don't [disfmarker] you [disfmarker] there's no way that we're gonna go back and ask everybody "do you approve this, uh, you know [disfmarker] this document now?" So [disfmarker] So I think what it is is that the [disfmarker] the [disfmarker] the [disfmarker] the thing that they sign [disfmarker] I [disfmarker] I haven't looked at it in a while, but it has to be open enough that it sort of says "OK, from now on [disfmarker] you know, now that I've read this, you can use [disfmarker] do anything you want with these data." [speaker001:] Mm hmm. [speaker006:] And, uh [disfmarker] But, i I think we wanna [disfmarker] So, assuming that it's in that kind of wording, which I don't remember, [vocalsound] um, I think i we just wanna have enough confidence ourselves that it's so close to the final form it's gonna be in, a year from now that they're [disfmarker] [speaker001:] Mm hmm. I agree. Mmm. I totally agree. [speaker006:] Uh. [speaker001:] It's just, uh, a question of, [vocalsound] uh, if [disfmarker] if the person is using the transcript as the way of them judging what they said and whether it was slanderous, [vocalsound] then it seems like it's [disfmarker] it's [disfmarker] i it needs to be more correct than if we could count on them re listening to the meeting. Because it becomes, eh, in a way a [disfmarker] a f uh, a legal document i if they've agreed to that. [speaker006:] Well, I forget how we end Right. 
I forget how we ended up on this, but I remember my taking the position [vocalsound] of not making it so [disfmarker] so easy for everybody to observe everything and Adam was taking the position of [disfmarker] of having it be really straightforward for people to check every aspect of it including the audio. And I don't remember who won, Adam or me, but [disfmarker] [speaker001:] Well, [vocalsound] if it's only the transcript, though [disfmarker] I mean, th this [disfmarker] this is my point, that [disfmarker] that [speaker006:] uh, the [disfmarker] Uh, that that's why I'm bringing this up again, because I can't remember how we ended up. [speaker001:] then it becomes [disfmarker] [speaker006:] That it was the transcrip He wanted to do a web interface that would make it [disfmarker] [speaker001:] Well, if it's just the audio [disfmarker] [speaker006:] that would give you access to the transcript and the audio. [speaker001:] Well. [speaker006:] That's what Adam wanted. [speaker001:] Mm hmm. [speaker006:] And I don't remember how we ended up. [speaker007:] I mean, with the web interface it's interesting, because you could allow the person who signs to be informed when their transcript changes, or something like that. And, I mean, I would say "no". Like, I don't wanna know, but some people might be really [vocalsound] interested and then y In other words, they would be informed if there was some significant change other than typos and things like that. [speaker006:] You decided you were whispering Satanic incantations under your breath when you were [disfmarker] [speaker007:] Well, [vocalsound] I don't know what happened to the small heads thing, but I j [vocalsound] Um, I'm just saying that, like, you know, you can sort of say that any things that are deemed [disfmarker] [speaker006:] They disappeared from view. [speaker007:] Anyway. 
I mean, I agree that at some point people [vocalsound] probably won't care about typos but they would care about significant meaning changes and then they could be asked for their consent, I guess, if [disfmarker] if those change. Cuz assumi [vocalsound] assuming we [disfmarker] we don't really distribute things that have any significant changes from what they sign anyway. [speaker003:] Tha That's [disfmarker] How about having them approve the audio and not the transcripts? [speaker007:] Oh, my God. [speaker006:] Uh. [speaker001:] That would be simpler, if we could count on them listening. [speaker007:] But no one will listen to the hours and hours of [disfmarker] [speaker004:] Talk. [speaker003:] Well, that's O K. We just have to give them a chance to listen to it, and if they don't, that's their problem. [speaker002:] That's [disfmarker] hmm, hmm. [speaker007:] You [disfmarker] you d That's like [disfmarker] [speaker001:] Unfortunately, uh, in [disfmarker] in the sign thing that they signed, it says "transcripts". [speaker003:] No, I'm serious. [speaker001:] "You'll be [disfmarker] you'll be provided the transcripts when they're available." [speaker003:] Really? [speaker006:] Yeah. [speaker007:] I I [disfmarker] I think [speaker002:] Mmm. [speaker003:] Mmm. [speaker005:] Yeah. [speaker007:] that's a lot to ask for people that have been in a lot of meetings. [speaker001:] Yeah. Yeah. [speaker006:] W anyway, haven't we [disfmarker] we've gone down this path a number of times. I know this can lead to extended conversations and [disfmarker] and not really get anywhere, so let [disfmarker] let me just suggest that [disfmarker] [vocalsound] uh, off line that, uh, the people involved figure it out and take care of it before it's July. [speaker001:] Yes. [speaker006:] OK. So [disfmarker] so that in July we can tell people [vocalsound] "yes, we have this and you can use it". [speaker001:] Yes. It's done, ready, available. Good. [speaker006:] Uh. So, let's see. 
What else we got? Uh. Don did [disfmarker] did a report about his project in class and, uh [disfmarker] an oral and written [disfmarker] written version. [speaker007:] Well. [speaker006:] So that was stuff he was doing with you. [speaker007:] I mean, it's [disfmarker] I guess one thing we're learning is that the amount [disfmarker] [speaker006:] Yeah. [speaker007:] We have eight meetings there because we couldn't use the non native [disfmarker] all non native meetings and [vocalsound] it's, well, probably below threshold on enough data for us for the things we're looking at because the [vocalsound] prosodic features are [pause] very noisy and so you [disfmarker] you need a lot of data in order to model them. Um, so we're starting to see some patterns and we're hoping that maybe with, [vocalsound] I don't know, double or triple the data [disfmarker] with twenty meetings or so, that we would start to get better results. But we did find that some of the features that, I gue Jane would know about, that are expressing sort of the [vocalsound] distance of, um, [vocalsound] boundaries from peaks in the utterance and [vocalsound] some [pause] local, um, range [disfmarker] pitch range effects, like how close people are to their floor, are showing up in these classifiers, which are also being given some word features that are cheating, cuz they're true words. Um, so these are based on forced alignment. Word features like, um, word frequency and whether or not something's a backchannel and so forth. So, we're starting to see, I think, some interesting patterns. [speaker006:] So the dominant features, including everything, were those [disfmarker] those quasi cheating things. Right? Where these are [disfmarker] [speaker002:] Sometimes not. [speaker007:] I think it depends what you're looking at, a actually. [speaker002:] Yeah. Sometimes [pause] positions in sentences obviously, or in spurts, was helpful. [speaker007:] Right. [speaker002:] I don't know if that's cheating, too. 
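[Editorial aside: the spurt-position features being discussed here, i.e. where a word falls within a pause-delimited run of speech, counted either in words or in seconds, can be derived from a time alignment alone, without trusting the recognized word identities. A minimal sketch; the 0.5-second pause threshold and the function name are illustrative choices, not something stated in the meeting.]

```python
# Sketch: derive "position in spurt" features from time-aligned words.
# A spurt here is a pause-delimited run of speech; using time offsets as
# well as word indices avoids depending entirely on the exact recognized
# words. The 0.5 s pause threshold is an illustrative choice.

def spurt_position_features(words, pause_thresh=0.5):
    """words: list of (word, start_time, end_time), sorted by start_time.
    Returns one feature dict per word."""
    feats = []
    spurt_start_t = None
    idx_in_spurt = 0
    prev_end = None
    for w, start, end in words:
        # A long enough silence before this word opens a new spurt.
        if prev_end is None or start - prev_end > pause_thresh:
            spurt_start_t = start
            idx_in_spurt = 0
        feats.append({
            "word": w,
            "word_index_in_spurt": idx_in_spurt,
            "time_in_spurt": start - spurt_start_t,  # seconds from spurt start
            "pause_before": 0.0 if prev_end is None
                            else max(0.0, start - prev_end),
        })
        idx_in_spurt += 1
        prev_end = end
    return feats
```

On an alignment like [("so", 0.0, 0.2), ("we", 0.25, 0.4), ("yeah", 1.5, 1.8)], the long gap before the third word opens a new spurt, so that word gets index 0 and time-in-spurt 0 again.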
[speaker007:] Um, [speaker003:] Spurts wouldn't be. Right? [speaker007:] spurts is not cheating except that of course you know the real words, but roughly speaking, the recognized words are gonna give you a similar type of position. [speaker002:] Right. Right. Would they give you the same number of words, though? [speaker007:] It's either early or late. [speaker006:] Right. [speaker007:] Not exactly, but i [speaker003:] No [speaker002:] But ra somewhat? [speaker007:] Y yeah it should be. [speaker006:] On the average. [speaker007:] Well, we don't know and actually that's one of the things we're interested in doing, is a sort of [disfmarker] [speaker003:] Have you tried using just time, as opposed to number of words? [speaker006:] Uh huh. [speaker007:] So. [speaker002:] I think ti uh [disfmarker] Just p time position, like when the word starts? [speaker003:] Yeah. Well, no, I mean t time [disfmarker] time position relative to the beginning of the spurt. [speaker007:] Eh [disfmarker] [speaker002:] I don't know if that was in the [disfmarker] [speaker007:] You know, uh [speaker002:] Start. [speaker007:] Yeah, uh, we didn't try it, [speaker002:] Yeah. There's all these things to do. [speaker007:] but it's s [speaker002:] Like, there's a lot of different features you could just pull out. [speaker003:] Yeah. I mean that wouldn't be cheating because you can detect pause [pause] pretty well within the time. [speaker002:] Right. [speaker007:] Right. [speaker006:] How about time position normalized by speak [speaker007:] And it depends on speaking rate [disfmarker] [speaker006:] Yeah. [vocalsound] Yeah. [speaker007:] speaking rate. Yeah. Yeah. That's actually why I didn't use it at first. [speaker002:] Yeah. [speaker006:] Yeah. [speaker007:] But we [disfmarker] one of the interesting things was [speaker003:] Mm hmm. [speaker007:] I guess you reported on some te punctuation type [disfmarker] finding sentence boundaries, finding disfluency boundaries, [speaker002:] Yeah. 
[speaker007:] and then I had done some work on finding from the foreground speech whether or not someone was likely to interrupt, so where [disfmarker] you know, if I'm talking now and someone [disfmarker] and [disfmarker] and Andreas is about to interrupt me, is he gonna choose a certain place in my speech, either prosodically or word based. And there the prosodic features actually showed up and a neat thing [disfmarker] even though the word features were available. And a neat thing there too is I tried some [disfmarker] [vocalsound] putting the speaker [disfmarker] So, I gave everybody [vocalsound] a short version of their name. So the real names are in there, which we couldn't use. Uh, we should use I Ds or something. And those don't show up. So that means that overall, um, it wasn't just modeling Morgan, or it wasn't just modeling a single person, [speaker006:] Mm hmm. [speaker007:] um, but was sort of trying to, [vocalsound] uh, get a general idea [disfmarker] the model [disfmarker] the tree classifier was trying to find general locations that were applicable to different speakers, even though there are huge speaker effects. So. The [disfmarker] but the main limitation now is I [disfmarker] because we're only looking at things that happen every [vocalsound] ten words or every twenty words, we need more [disfmarker] more data and more data per speaker. So. It'd also be interesting to look at the EDU meetings because we did include meeting type as a feature, so whether you were in a r Meeting Recorder meeting or a Robustness meeting did matter [vocalsound] to [pause] interrupts because there are just fewer interrupts in the Robustness meetings. [speaker002:] Mm hmm. [speaker007:] And so the classifier learns more about Morgan than it does about sort of the average person, [speaker006:] Mm hmm. [speaker007:] which is [vocalsound] not bad. It'd probably do better than [disfmarker] [vocalsound] Um, but it wasn't generalizing. 
So it's [disfmarker] [speaker006:] Yeah. [speaker007:] And I think Don, um [disfmarker] Well, we have a long list of things he's starting to look at now over the summer, where we can [disfmarker] And he'll be able to report on more things [pause] in the future. But it was great that we could at least go from the [disfmarker] [vocalsound] you know, Jane's transcripts and the, [vocalsound] uh, recognizer output and get it [pause] to this point. And I think it's something Mari can probably use in her preliminary report [disfmarker] like, "yeah, we're at the point where we're training these classifiers and we're just [vocalsound] reporting very preliminary but suggestive results that [vocalsound] some features, both word and pro prosodic, work." The other thing that was interesting to me is that the pitch features are better than in Switchboard. And I think that really is from the close talking mikes, cuz the pitch processing that was done has much cleaner behavior than [disfmarker] than the Switchboard telephone bandwidth. [speaker003:] W wh wh wh Better in what sense? [speaker007:] Um. Well, first of all, the pitch tracks are m have less, um, halvings and doublings than [disfmarker] than Switchboard and there's a lot less dropout, so if you ask how many regions where you would normally expect some vowels to be occurring [vocalsound] are completely devoid of pitch information, [speaker006:] Mm hmm. [speaker007:] in other words the pitch tracker just didn't get a high enough probability of voicing for words [disfmarker] for [disfmarker] for, you know, five word [speaker006:] Hmm. [speaker007:] there are much fewer than in Switchboard. So the missing [disfmarker] [vocalsound] We had a big missing data problem in Switchboard and, so the features weren't as reliable cuz they were often just not available. So that's actually good. [speaker004:] Could it have to do with the [disfmarker] the lower frequency cut off on the Switchboard? [speaker007:] Ma maybe. 
I mean, the tele we had telephone bandwidth for Switchboard and we had the an annoying sort of telephone handset movement problem that I think may also affect it. [speaker004:] Hmm. [speaker007:] So we're just getting better signals in [disfmarker] in this data. Which is nice. So. [speaker006:] Yeah. [speaker007:] Anyway, Don's been doing a great job and we hope to continue with, um, Andreas's help and also some of Thilo's help on this, [speaker006:] Great. [speaker007:] to [disfmarker] to try to get a non cheating version of how all this would work. [speaker005:] Y Yeah. Sure. Yeah. [speaker006:] Has [disfmarker] has, uh [disfmarker]? We just [disfmarker] I think, just talked about this the other day, but h has [disfmarker] has anybody had a chance to try changing, uh, insertion penalty sort of things with the [disfmarker] with the, uh [disfmarker] [vocalsound] uh, using the tandem system input for the [disfmarker]? [speaker003:] Oh, yeah. I tried that. It didn't, um, help dramatically. [speaker004:] Were they out of balance? [speaker003:] The [disfmarker] [speaker004:] I didn't [disfmarker] I didn't notice. [speaker003:] There were a little [disfmarker] the relative number of [disfmarker] I think there were a higher number of deletions, actually. [speaker006:] Oh. [speaker007:] Deletions? [speaker003:] So, you, uh [disfmarker] So, actually it [disfmarker] it preferred to have a positive [disfmarker] er, negative insertion penalty, which means [vocalsound] that, um [disfmarker] [speaker006:] Uh huh. [speaker003:] But, you know, it didn't change [vocalsound] th the [disfmarker] by adjusting that [disfmarker] the, um [disfmarker] [speaker006:] OK. [speaker003:] Yeah. The error changed by probably one percent or so. But, you know, given that that word error rate is so high, that's not a [disfmarker] [speaker006:] OK. So that [disfmarker] So that's [disfmarker] So that's not the problem. [speaker003:] That's not the problem. [speaker006:] Yeah. [speaker003:] No. 
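[Editorial aside: the insertion-penalty experiment described above amounts to adding a per-word cost to each hypothesis score. Sign conventions differ between decoders, but the trade-off is the same in all of them: penalizing words suppresses insertions at the cost of more deletions, and rewarding words does the reverse, which is why a system with too many deletions ends up preferring a negative penalty. A hedged sketch over an N-best list; the names and the higher-is-better convention are illustrative, not the actual decoder's.]

```python
# Sketch: how a word insertion penalty shifts the N-best ranking.
# Each hypothesis is scored as acoustic + lm_weight * lm + penalty * n_words.
# Here higher score = better, so a negative per-word penalty discourages
# long hypotheses (fewer insertions, more deletions) and a positive one
# favors them. Real decoders use varying sign conventions.

def rescore(nbest, lm_weight, word_penalty):
    """nbest: list of (words, acoustic_logprob, lm_logprob).
    Returns the hypotheses sorted best-first under the given weights."""
    def score(hyp):
        words, ac, lm = hyp
        return ac + lm_weight * lm + word_penalty * len(words)
    return sorted(nbest, key=score, reverse=True)
```

With two hypotheses that differ mainly in length, sweeping word_penalty from zero upward flips the winner from the short hypothesis to the long one, i.e. it trades deletions for insertions.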
But, uh, we s just, um, uh [disfmarker] you know, Chuck and I talked and the [pause] [@ @] [comment] next thing to do is probably to tune the [disfmarker] um, the size of the Gaussian system, um, [@ @] [comment] to [disfmarker] to this [disfmarker] to this feature vector, which we haven't done at all. We just used the same [vocalsound] configuration as we used for the [disfmarker] [vocalsound] for the standard system. [speaker006:] Hmm. [speaker003:] And, [vocalsound] for instance, uh, Dan [disfmarker] [@ @] [comment] Dan just sent me a message saying that CMU used, um, [vocalsound] something like ten Gaussians per cluster [disfmarker] You know, each [disfmarker] each mixture has ten [pause] Gaussians [speaker004:] Mm hmm. Hmm. We're using sixty four, [speaker003:] and [disfmarker] and we're using sixty four, [speaker004:] right? [speaker003:] so that's [vocalsound] obviously a big difference [speaker004:] Yeah. [speaker003:] and it might be way off and give very poorly trained, uh, you know, Gaussians that way, [speaker006:] Hmm. [speaker003:] uh, an and poorly trained mixture weights. So [disfmarker] so, we have [disfmarker] The turn around time on the training when we train only the [disfmarker] a male system with, uh, you know, our small training set, is [vocalsound] less than twenty four hours, so we can run lots of [disfmarker] [vocalsound] uh, basically just brute force, try a whole bunch of different um, settings. [speaker006:] OK. [speaker003:] And, uh, with the new machines it'll be even better. So. [speaker006:] Yeah. We get twelve of those, huh? [speaker003:] Yeah. But the PLP features work [disfmarker] um, uh, you know, continue to improve the, [speaker006:] OK. [speaker003:] um [disfmarker] As I said before, the [disfmarker] uh using Dan's, uh, uh, vocal tract normalization option works very well. [speaker006:] Mm hmm. 
[speaker003:] So, um, [@ @] [comment] I ran one experiment where we're just [vocalsound] did the vocal tract le normalization only in the test data, so I didn't bother to retrain [pause] the models at all, and it improved by one percent, which is about what we get with [disfmarker] [vocalsound] uh, with, you know, just [@ @] [comment] actually doing both training and test normalization, um, with, um, [vocalsound] the, uh [disfmarker] [vocalsound] uh, with the standard system. So, in a few hours we'll have the numbers for the [disfmarker] for retraining everything with vocal tract length normalization and [disfmarker] So, that might even improve it further. [speaker006:] Great. [speaker003:] So, it looks like the P L fea P features [comment] do very well now with [disfmarker] after having figured out all these little tricks to [disfmarker] to get it to work. [speaker006:] Yeah. [speaker003:] So. [speaker007:] Wait. So you mean you improve one percent over a system that doesn't have any V T L in it already? [speaker006:] Good. [speaker003:] Exactly. Yeah. [speaker007:] OK. [speaker006:] Yeah. OK. So then [disfmarker] then we'll have our baseline to [disfmarker] to compare the currently hideous, uh, uh, new thing with. [speaker003:] Right. a [speaker006:] But [disfmarker] [speaker003:] Right. And [disfmarker] and what that suggests also is of course that the current Switchboard [pause] MLP isn't trained on very good features. [speaker006:] Yeah. [speaker003:] Uh, because it was trained on whatever, you know, was used, uh, last time you did Hub five stuff, which didn't have any of the [disfmarker] [speaker006:] Right. But all of these effects were j like a couple percent. [speaker003:] Uh. [speaker006:] Right? I mean, y the [disfmarker] [speaker003:] Well, but if you add them all up you have, uh, almost five percent difference now. [speaker006:] Add [pause] all of them. I thought one was one point five percent and one was point eight. [speaker003:] Yeah. 
And now we have another percent with the V T [speaker006:] That's three point three. [speaker003:] Um, actually, and it's, um, What's actually qu interesting is that with [disfmarker] um, well, you m prob maybe another half percent if you do the VTL in training, and then interestingly, if you optimize you get more of a win out of rescoring the, um, [vocalsound] uh, the N best lists, uh, and optimizing the weights, um, uh than [disfmarker] [speaker004:] Than you do with the standard? [speaker003:] Yeah. [speaker006:] Yeah. But the part that's actually adjustment of the front end per se as opposed to doing [disfmarker] putting VTLN in or something is [disfmarker] it was a couple percent. [speaker003:] So [disfmarker] Right. [speaker006:] Right? It was [disfmarker] it was [disfmarker] there was [disfmarker] [vocalsound] there was one thing that was one and a half percent and one that was point eight. So [disfmarker] and [disfmarker] and [disfmarker] let me see if I remember what they were. One of them [vocalsound] was, uh, the change to, uh [disfmarker] because it did it all at once, [comment] to [disfmarker] uh, from bark scale to mel scale, [speaker003:] Mm hmm. [speaker006:] which I really feel like saying in quotes, because [@ @] [comment] they're essentially the same scale [speaker007:] Yeah. Why did that cha? [speaker006:] but the [disfmarker] but [disfmarker] but [disfmarker] but any i individual particular implementation of those things puts things in a particular place. [speaker003:] Mm hmm. [speaker006:] So that's why I wanted to look [disfmarker] I still haven't looked at it yet. I [disfmarker] I wanna look at exactly where the filters were in the two, [speaker003:] Mm hmm. [speaker006:] and it [disfmarker] [vocalsound] it's probably something like there's one fewer or one more filter in the sub [vocalsound] one kilohertz band and for whatever reason with this particular experiment it was better [speaker003:] Mm hmm. [speaker006:] one way or the other. 
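[Editorial aside: Morgan's point that mel and bark are "essentially the same scale" yet a particular implementation can place a filter or so differently below 1 kHz is easy to check numerically. The two closed forms below, O'Shaughnessy's mel formula and Zwicker's Bark approximation, are common choices from the literature, not necessarily the exact ones in either front end under discussion, and the even-spacing placement scheme is likewise a generic illustration.]

```python
import math

# Sketch: compare mel and Bark warpings numerically. These are two common
# closed forms (O'Shaughnessy's mel; Zwicker's Bark approximation); real
# front ends may use slightly different variants, which is exactly why
# filter placement below 1 kHz can differ between implementations.

def mel(f_hz):
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def bark(f_hz):
    return 13.0 * math.atan(0.00076 * f_hz) + 3.5 * math.atan((f_hz / 7500.0) ** 2)

def centers(warp, n_filters, f_max):
    """Evenly spaced filter centers on the warped axis, mapped back to Hz
    by bisection. A generic placement scheme, not any particular system's."""
    hi = warp(f_max)
    targets = [hi * (i + 1) / (n_filters + 1) for i in range(n_filters)]
    out = []
    for t in targets:
        a, b = 0.0, f_max
        for _ in range(60):          # bisection to invert the monotonic warp
            m = 0.5 * (a + b)
            if warp(m) < t:
                a = m
            else:
                b = m
        out.append(0.5 * (a + b))
    return out
```

With these particular formulas, 20 evenly warped centers up to 4 kHz put 9 (mel) versus 10 (Bark) centers below 1 kHz: one filter apart in the sub-one-kilohertz band, much as speculated above.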
[speaker007:] Hmm. [speaker006:] Um, it could be there's something more fundamental but it [disfmarker] you know, I [disfmarker] I don't know it yet. And the other [disfmarker] and the other [disfmarker] that was like one and a half or something, and then there was point eight percent, which was [disfmarker] what was the other thing? [speaker004:] Well, that was combined with the triangular. Right? [speaker006:] Yeah. Those [disfmarker] those two were together. [speaker004:] Yeah. Right. [speaker006:] We d weren't able to separate them out cuz it was just done in one thing. But then there was a point eight percent which was something else. Do you remember the [disfmarker]? [speaker004:] The low frequency cut off. [speaker006:] Oh, yeah. So that was [disfmarker] that was, uh [disfmarker] that one I can claim credit for, uh, i in terms of screwing it up in the first place. So that someone e until someone else fixed it, which is that, um, I never put [disfmarker] when I u We had some problems before with offsets. This inf this went back to, uh, I think Wall Street Journal. So we [disfmarker] we had, uh [disfmarker] [speaker003:] Hmm. [speaker006:] ea everybody else who was doing Wall Street Journal knew that there were big DC offsets in th in these data [disfmarker] in those data and [disfmarker] and [disfmarker] and nobody happened to mention it to us, [speaker003:] Hmm. [speaker006:] and we were getting these, like, really terrible results, like two, three times the error everybody else was getting. And then in casual conversation someone ment mentioned "uh, well, I guess, you know, of course you're taking care of the offsets." I said "what offsets?" [speaker003:] Mm hmm. [speaker006:] And at that point, you know, we were pretty new to the data and we'd never really, like, looked at it on a screen and then when we just put it on the screen [comment] and wroop! there's this big DC offset. [speaker003:] Mm hmm. 
[speaker006:] So, um, in PLP [speaker007:] There was a [disfmarker] like a hum or some or [disfmarker] when they recorded it? [speaker006:] No. It's just, it [disfmarker] it's [disfmarker] it's not at all uncommon for [disfmarker] for recorded electronics to have different, um, DC offsets. [speaker007:] Or just [disfmarker]? Huh. [speaker006:] It's [disfmarker] it's, you know, no big deal. It's [disfmarker] you know, you could have ten, twenty, maybe thirty millivolts, whatever, and it's consistently in there. The thing is, most people's front ends have pre emphasis with it, with zero at zero frequency, so that it's irrelevant. Uh, but with P L P, we didn't actually have that. We had [disfmarker] we had the equivalent of pre emphasis in a [disfmarker] a, uh, Fletcher Munson style weighting that occurs in the middle of P L but it doesn't actually have a zero at zero frequency, [speaker007:] Hmm. [speaker006:] like, eh, uh, typical simple fr pre emphasis does. We had something more fancy. It was later on it didn't have that. So at that point I reali "oh sh we better have a [disfmarker] have a high pass filter" just, you know [disfmarker] [vocalsound] just take care of the problem. So I put in a high pass filter at, uh, I think ninety [disfmarker] ninety hertz or so [vocalsound] uh, for a sixteen kilohertz sampling rate. And I never put anything in to adjust it for different [disfmarker] different sampling rates. And so [disfmarker] well, so, you know, the code doesn't know anything about that and so this is all at eight kilohertz and so it was at forty five hertz instead of at [disfmarker] [vocalsound] instead of at ninety. [speaker003:] Hmm. [speaker006:] So, um, I don't know if Dan fixed it or [disfmarker] or, uh, what he [disfmarker] [speaker003:] Well, he made it a parameter. [speaker006:] He made it a parameter. So. Yeah, I guess if he did it right, he did fix it and then [disfmarker] and then it's taking care of sampling rate, which is great. 
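The sampling-rate bug described in this exchange is easy to reproduce: a digital high-pass filter's coefficients bake in the ratio cutoff/fs, so coefficients designed for a 90 Hz cutoff at 16 kHz behave as a 45 Hz filter when reused on 8 kHz audio. A minimal sketch with a hypothetical one-pole high-pass (illustrative only, not the actual PLP/RASTA code):

```python
import math

def onepole_highpass_alpha(cutoff_hz, fs):
    """Coefficient for the one-pole high-pass
    y[n] = alpha * (y[n-1] + x[n] - x[n-1]).

    The coefficient encodes cutoff/fs, so it must be recomputed whenever
    the sampling rate changes; reusing a 16 kHz design on 8 kHz data is
    exactly the bug discussed above.
    """
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)   # analog RC time constant
    dt = 1.0 / fs                            # sample period
    return rc / (rc + dt)

def effective_cutoff_hz(alpha, fs):
    """Invert the design formula: the cutoff implied by alpha at rate fs."""
    return fs * (1.0 - alpha) / (2.0 * math.pi * alpha)

alpha = onepole_highpass_alpha(90.0, 16000)   # designed for 16 kHz
print(effective_cutoff_hz(alpha, 16000))      # ~90 Hz, as intended
print(effective_cutoff_hz(alpha, 8000))       # ~45 Hz: same alpha, half the rate
```

Making the cutoff an explicit parameter, as Dan's HPF option does, recomputes alpha per sampling rate and avoids the halved cutoff.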
[speaker004:] What [disfmarker] what is the parameter? [speaker006:] He had a [disfmarker] [speaker004:] Is it, uh, just the f lower cut off that you want? [speaker003:] It's called, uh, [vocalsound] H HPF. [speaker006:] H [disfmarker] Yeah. Does HPF on [disfmarker] on his feat feature. [speaker003:] u And [disfmarker] but HPF, you know, when you put a number after it, uses that as the hertz value of the cut off. [speaker004:] Mm hmm. [speaker006:] Yeah. [speaker004:] Oh, OK. [speaker003:] So. [speaker006:] I mean, frankly, we never did that with the RASTA filter either, so the RASTA filter is actually doing a different thing in the modulation spectral domain depending on what sampling rate you're doing, [speaker003:] Mm hmm. [speaker006:] which is [vocalsound] another old [disfmarker] old bug of mine. [speaker003:] Mm hmm. [speaker006:] But, um [disfmarker] Um. So that [disfmarker] that was the problem there was th we [disfmarker] we [disfmarker] we had always intended to cut off below a hundred hertz [speaker003:] Mm hmm. [speaker006:] and it just wasn't doing it, so now it is. So, [vocalsound] that hep that helped us by, like, eight tenths of a percent. It [pause] still wasn't a big deal. [speaker003:] OK. Well, but, um [disfmarker] Well, uh, again, after completing the [pause] current experiments, we'll [disfmarker] [vocalsound] we can add up all the uh differences [speaker006:] Oh, yeah. But [disfmarker] but, I guess my [disfmarker] my point was that [disfmarker] that, um, the hybrid system thing that we did was, uh, primitive in many ways. [speaker003:] and [disfmarker] [vocalsound] and [disfmarker] an Y Right. [speaker006:] And I think I agree with you that if we fixed lots of different things and they would all add up, we would probably have a [disfmarker] a [disfmarker] a competitive system. But I think not that much of it is due to the front end per se. I think maybe a couple percent of it is, as far as I can see from this. [speaker003:] Mm hmm. 
[speaker006:] Uh, unless you call [disfmarker] well, if you call VTL the front en front end, that's, uh, a little more. But that's sort of more both, kind of. [speaker004:] One experiment we should [disfmarker] we'll probably need to do though when [disfmarker] um, at some point, is, since we're using that same [disfmarker] the net that was trained on PLP without all these things in it, for the tandem system, we may wanna go back and retrain, [speaker006:] Right? But. [speaker003:] Well, that's what I meant, in fact. Yeah. [speaker004:] yeah, yeah, for the tandem. You know, so we can see if it [disfmarker] what effect it has on the tandem processing. [speaker003:] So [disfmarker] so, the thing is [disfmarker] is do we expect [disfmarker]? [speaker006:] Mm hmm. [speaker003:] eh At this point I'm as I mean, you know [disfmarker] e I'm wondering is it [disfmarker] Can we expect, uh, a tandem system to do better than a properly trained [disfmarker] you know, a Gaussian system trained directly on the features with, you know, the right ch choice of [pause] parameters? [speaker006:] Well, that's what we're seeing in other areas. Yes. Right? So, it's [disfmarker] so, um, um [disfmarker] [speaker004:] So, we [disfmarker] But [disfmarker] but we may not. I mean, if it doesn't perform as well, we may not know why. Right? Cuz we need to do the exact experiment. [speaker003:] Right. [speaker006:] I mean, the reason to think it should is because you're putting in the same information and you're transforming it to be more discriminative. So. Um. Now the thing is, in some databases I wouldn't expect it to necessarily give you much and [disfmarker] and part of what I view as the real power of it is that it [pause] gives you a transformational capability [vocalsound] for taking all sorts of different wild things that we do, not just th the standard front end, but other things, like with multiple streams and so forth, [speaker003:] Mm hmm. 
[speaker006:] and allows you to feed them to the other system with this [disfmarker] through this funnel. Um, so I think [disfmarker] I think that's the real power of it. I wouldn't expect huge in huge improvements. Um, but it should at least be roughly the same and maybe a little better. [speaker003:] Mm hmm. [speaker006:] If it's, you know, like way way worse then, you know [disfmarker] [speaker003:] Right. [speaker004:] So, Morgan, an another thing that Andreas and I were talking about was, so [@ @] [comment] in the first experiment that he did [vocalsound] we just took the whole fifty six, uh, outputs and that's, um, basically compared to a thirty nine input feature vector from either MFCC or PLP. [speaker006:] Mm hmm. Mm hmm. [speaker004:] But one thing we could do is [disfmarker] [speaker006:] Let [disfmarker] let me [disfmarker] let me just ask you something. When you say take the fifty six outputs, these are the pre final nonlinearity [pause] outputs [speaker004:] Yeah. Through the regular tandem outputs. [speaker006:] and they're [disfmarker] and [disfmarker] through the KLT. [speaker004:] Through the KLT. [speaker006:] OK. [speaker004:] All that kinda stuff. [speaker006:] And so [disfmarker] so then you u Do you use all fifty six of the KLT or [disfmarker]? [speaker004:] That's what we did. Right? [speaker006:] OK. [speaker004:] So one thing we were wondering is, if we did principal components and, say, took out just thirteen, and then did deltas and double deltas on that [disfmarker] [speaker006:] Yes. Yes. [speaker004:] so we treated the th first thirteen as though they were [vocalsound] standard features. [speaker006:] Yeah. [speaker004:] I mean, did Dan do experiments like that to [disfmarker]? [speaker006:] Uh. Talk with Stephane. He did some things like that. It was either him or Carmen. I forget. [speaker004:] Mmm. [speaker003:] Mm hmm. 
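The idea floated here, running the 56 MLP outputs through a KLT, keeping only the first 13 components, and then treating those like cepstra by appending deltas and double-deltas to get a standard 39-dimensional vector, might look roughly like this (illustrative numpy with random data standing in for the net outputs; the window size and HTK-style regression formula are assumptions, not the actual tandem pipeline):

```python
import numpy as np

def klt(frames, keep):
    """PCA/KLT: project mean-centered frames onto the top `keep` eigenvectors."""
    centered = frames - frames.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1][:keep]   # largest-variance axes first
    return centered @ eigvec[:, order]

def deltas(feats, win=2):
    """Regression deltas over +/- win frames (HTK-style formula)."""
    padded = np.pad(feats, ((win, win), (0, 0)), mode="edge")
    num = sum(t * (padded[win + t : len(feats) + win + t] -
                   padded[win - t : len(feats) + win - t])
              for t in range(1, win + 1))
    return num / (2 * sum(t * t for t in range(1, win + 1)))

rng = np.random.default_rng(0)
net_out = rng.standard_normal((100, 56))   # stand-in for the 56 net outputs
static = klt(net_out, keep=13)             # 13 "cepstra-like" features
d = deltas(static)
dd = deltas(d)
feats = np.hstack([static, d, dd])         # (100, 39), like MFCC + d + dd
print(feats.shape)
```

As noted in the discussion, the net outputs already carry acoustic context, so whether the extra deltas help is an open empirical question.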
[speaker006:] I mean these were all different databases and different [disfmarker] you know, in HTK and all that, [speaker004:] Yeah. [speaker006:] so i it [disfmarker] it may not apply. But my recollection of it was that it didn't make it better but it didn't make it worse. [speaker004:] Hmm. [speaker006:] But, again, given all these differences, maybe it's more important in your case that you not take a lot of these low variance, uh, components. [speaker004:] Cuz in a sense, the net's already got quite a bit of context in those features, [speaker006:] Yeah. [speaker004:] so if we did deltas and double deltas on top of those, we're getting sort of even more. [speaker006:] Which could be good or not. [speaker004:] Yeah. [speaker006:] Yeah. Yeah. Worth trying. [speaker003:] But there the main point is that, um, you know, it took us a while but we have the procedure for coupling the two systems [vocalsound] debugged now and [disfmarker] I mean, there's still conceivably some bug somewhere in the way we're feeding the tandem features [disfmarker] uh, either generating them or feeding them to this [disfmarker] to the [vocalsound] SRI system, [speaker004:] Mm hmm. Yeah. [speaker003:] but [speaker006:] There might be, [speaker003:] it's [disfmarker] [speaker006:] cuz that's a pretty big difference. [speaker003:] Yeah. [speaker006:] But [speaker003:] And I'm wondering how we can [disfmarker] how we can debug that. [speaker004:] Yeah. [speaker003:] I mean how [disfmarker] Um. [speaker006:] Hmm. [speaker003:] I'm actually f quite sure that the [disfmarker] feeding the [pause] features into the system and training it up, [speaker006:] What if [disfmarker]? [speaker003:] that [disfmarker] that [disfmarker] I think that's [disfmarker] this [disfmarker] that's essentially the same as we use with the ce with the P L P fe features. And that's obviously working great. So. I um. [speaker004:] Yeah. There could be a bug in [disfmarker] in the [disfmarker] somewhere before that. 
[speaker003:] There [disfmarker] we could [disfmarker] the [disfmarker] another degree of freedom is how do you generate the K L T transform? [speaker004:] Mm hmm. [speaker006:] That's [disfmarker] [speaker003:] Right? We to [speaker004:] Right. [speaker006:] well, and another one is the normalization of the inputs to the net. [speaker003:] Yeah. [speaker006:] These nets are trained with particular normalization and when that gets screwed up it [disfmarker] it can really hurt it. [speaker004:] I'm doing what Eric [disfmarker] E Eric coached me through then [disfmarker] that part of it, so I'm pretty confident in that. [speaker006:] OK. [speaker004:] I mean, the only slight difference is that I use normalization values that, um, Andreas calculated from the original [comment] PLP, [speaker003:] Right. [speaker004:] which is right. N Yeah. [speaker003:] Right. [speaker004:] So, I u I do [disfmarker] Oh, we actually don't do that normalization for the PLP, do we? For the st just the straight PLP features? [speaker003:] No. The [disfmarker] the SRI system does it. [speaker004:] S R I system does that. Right. [speaker003:] Yeah. [speaker006:] Right. Well, you might e e [speaker004:] So that's [disfmarker] that's another [disfmarker] [speaker003:] So, there's [disfmarker] there is [disfmarker] there is room for bugs that we might not have discovered, [speaker004:] Yeah. [speaker006:] Yeah. [speaker004:] Mm hmm. [speaker006:] Yeah. [vocalsound] I [disfmarker] I would actually double check with Stephane at this point, [speaker003:] but [disfmarker] [speaker006:] cuz he's probably the one here [disfmarker] I mean, he and Dan are the ones who are at this point most experienced with the tandem [speaker004:] Mm hmm. [speaker006:] thing and there may [disfmarker] there may be some little bit here and there that is not [disfmarker] not being handled right. [speaker004:] Yeah. It's hard with features, cuz you don't know what they should look like. 
I mean, you can't just, like, print the [disfmarker] the values out in ASCII and, you know, look at them, see if they're [disfmarker] [speaker006:] Not unless you had a lot of time [speaker007:] Well [disfmarker] [speaker006:] and [disfmarker] [speaker007:] eh, and also they're not [disfmarker] I mean, as I understand it, you [disfmarker] you don't have a way to optimize the features for the final word error. Right? [speaker003:] Right. [speaker007:] I mean, these are just discriminative, but they're not, um, optimized for the final [disfmarker] [speaker003:] They're optimized for phone discrimination, [speaker007:] Right. So it [disfmarker] there's always this question of whether you might do better with those features if there was a way to train it for the word error metric that you're actually [disfmarker] that you're actually [disfmarker] [speaker003:] not for [disfmarker] [speaker006:] That's right. Well, the other [disfmarker] Yeah, th the [disfmarker] [speaker003:] Mm Mmm. [speaker006:] Well, you actually are. But [disfmarker] but it [disfmarker] but in an indirect way. [speaker007:] Well, right. It's indirect, so you don't know [disfmarker] [speaker006:] So wha w what [disfmarker] an and you may not be in this case, come to think of it, because, uh, you're just taking something that's trained up elsewhere. So, what [disfmarker] what you [disfmarker] what you do in the full procedure [vocalsound] is you, um, uh, have an embedded training. So in fact you [disfmarker] the [disfmarker] the net is trained on, uh, uh, a, uh, Viterbi alignment of the training data that comes from your full system. And so that's where the feedback comes all around, so that it is actually discriminant. 
You can prove that it's [disfmarker] it's a, uh [disfmarker] If you believe in the Viterbi assumption that, uh, getting the best path, uh, is almost equivalent to getting the best, uh, total probability, um, then you actually do improve that by, uh [disfmarker] by training up on local [disfmarker] local, uh [disfmarker] local frames. But, um, we aren't actually doing that here, because we did [disfmarker] we did that for a hybrid system, and now we're plugging it into another system and so it isn't [disfmarker] [vocalsound] i i i it wouldn't quite apply here. [speaker004:] So another huge experiment we could do would be to take the tandem features, uh, do SRI forced alignments using those features, and then re do the net with those. [speaker003:] Do y [speaker006:] Mm hmm. [speaker007:] Mmm, uh [disfmarker] Exactly. Exactly. [speaker006:] Yeah. [speaker007:] So that you can optimize it for the word error. [speaker006:] Yeah. Another thing is since you're not using the net for recognition per se but just for this transformation, it's probably bigger than it needs to be. [speaker003:] But [disfmarker] [speaker007:] Yeah. [speaker006:] So that would save a lot of time. [speaker003:] And there's a mismatch in the phone sets. [speaker004:] Mmm. [speaker003:] So, you're using a l a long a larger phone set than what [disfmarker] [speaker006:] Yeah. Actually all those things could [disfmarker] could [disfmarker] could [disfmarker] could, uh [disfmarker] could affect it as well. [speaker004:] Yeah. Yeah. [speaker006:] The other thing, uh, just to mention that Stephane [disfmarker] this was an innovation of Stephane's, which was a pretty neat one, uh, and might particularly apply [vocalsound] here, given all these things we're mentioning. Um, Stephane's idea was that, um, discriminant, uh, approaches are great. Even the local ones, given, you know, these potential outer loops which, you know, you can convince yourself turn into the global ones. 
Um, however, there's times when it [pause] is not good. Uh, when [pause] something about the test set is different enough from the training set that [disfmarker] that, uh, the discrimination that you're learning is [disfmarker] is [disfmarker] is not a good one. [speaker003:] Mm hmm. [speaker006:] So, uh, his idea was to take as the input feature vector to the, uh, Gaussian mixture system, [vocalsound] uh, a concatenation of the neural net outputs and the regular features. [speaker003:] Oh, we already talked about that. [speaker007:] Yeah. That [disfmarker] [speaker006:] Yeah. [speaker004:] Mm hmm. [speaker007:] Didn't you [disfmarker] did you [pause] do that already [speaker003:] El Yeah. [speaker007:] or [disfmarker]? [speaker003:] No, but we [disfmarker] we [disfmarker] when [disfmarker] when we [disfmarker] when I first started corresponding with Dan about how to go about this, I think that was one of the things that we definitely went there. [speaker007:] Oh. That makes a lot of sense. Huh. [speaker006:] Yeah. Yeah. I mean, I'm sure that Stephane wasn't the first to think of it, [speaker003:] Yeah. [speaker006:] but actually Stephane did it and [disfmarker] [vocalsound] and [disfmarker] and it helped a lot. [speaker003:] Uh huh. And i does it help? [speaker006:] Yeah. [speaker003:] Oh, OK. [speaker006:] So that's [disfmarker] that [disfmarker] that's our current best [disfmarker] best system in the, uh [disfmarker] [vocalsound] uh, in the Aurora thing. [speaker003:] Oh. OK. [speaker007:] Yeah. That makes sense. [speaker006:] Yeah. [speaker003:] And do you do a KLT transform on the con on the combined feature vector? [speaker007:] As [disfmarker] you should never do worse. [speaker006:] I [disfmarker] I, uh, missed what you said. [speaker003:] Do you [disfmarker] d you do a KLT transform on the combined feature vector? [speaker006:] Yeah. [speaker003:] OK. 
[speaker006:] Well, actually, I, uh [disfmarker] you should check with him, because he tried several different combinations. [speaker003:] Because you end up with this huge feature vector, so that might be a problem, a unless you do some form of dimensionality reduction. [speaker006:] Yeah. I, uh, th what I don't remember is which came out best. So he did one where he put o put e the whole thing into one KLT, and another one, since the [disfmarker] the PLP things are already orthogonalized, he left them alone and [disfmarker] and just did a KLT on the [disfmarker] on the [disfmarker] on the net outputs [speaker003:] Mm hmm. [speaker006:] and then concatenated that. [speaker003:] Mmm. [speaker006:] And I don't remember which was better. [speaker004:] Did he [disfmarker] did he try to [disfmarker]? So he always ended up with a feature vector that was [pause] twice as long as either one of the [disfmarker]? [speaker006:] No. I don't know, i I [disfmarker] I don't know. You have to check with him. [speaker004:] Yeah. [speaker003:] OK. Actually, I have to run. [speaker006:] I'm into big ideas these days. [speaker007:] Yeah. [speaker003:] Uh. [speaker007:] We need to close up cuz I need to save the data and, um, get a call. [speaker006:] Not to mention the fact that we're missing snacks. Yeah. [speaker007:] Right. Did people wanna do the digits [speaker006:] Uh [speaker007:] or, um, do them together? [speaker006:] Um. I [disfmarker] I g I think, given that we're in a hurry for snacks, maybe we should do them together. [speaker007:] I don't know. Should we just [disfmarker]? OK. I mean, are we trying to do them [nonvocalsound] in synchrony? That might be fun. Adam's not here, so he's not here to tell me no. [speaker006:] Well, it's [disfmarker] [vocalsound] it's [disfmarker] it's not [disfmarker] You know, it's not gonna work out but we could [disfmarker] we could just, uh, uh, see if we find a rhythm, you know, [speaker007:] Sure. 
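For reference, the two ways of combining the net outputs with the regular features that come up in this exchange, a single KLT over the whole concatenated vector versus leaving the already-decorrelated PLP features alone and applying the KLT (possibly truncated) to the net outputs only, can be sketched as follows (illustrative numpy; the dimensions and the keep=25 truncation are assumptions, not the actual Aurora configuration):

```python
import numpy as np

def klt(x, keep=None):
    """Project mean-centered rows onto the top-`keep` principal axes (via SVD)."""
    xc = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(xc, full_matrices=False)
    keep = xc.shape[1] if keep is None else keep
    return xc @ vt[:keep].T

rng = np.random.default_rng(1)
plp = rng.standard_normal((100, 39))   # stand-in for PLP + deltas
net = rng.standard_normal((100, 56))   # stand-in for the MLP outputs

# Variant (a): one KLT over the full concatenated vector.
combined_a = klt(np.hstack([plp, net]))            # (100, 95)

# Variant (b): PLP left as-is (already orthogonalized), KLT on the net
# outputs only, truncated to keep the final vector manageable.
combined_b = np.hstack([plp, klt(net, keep=25)])   # (100, 64)

print(combined_a.shape, combined_b.shape)
```

Variant (a) decorrelates everything jointly but yields a long vector; variant (b) trades that for a smaller dimensionality, which matters for the Gaussian-mixture back end the discussion mentions.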
[speaker006:] what [disfmarker] Uh, O's or zeroes, we wanna agree on that? [speaker007:] Maybe just whatever people would naturally do? I don't know. [speaker006:] Oh, but if we were a singing group, we would wanna decide. Right? [speaker007:] Be harmony. Yeah. [comment] Yeah. [speaker001:] Mine's identical to yours. [speaker006:] We might wa [speaker001:] Is that correct? [speaker007:] Sorry. So I set up and we didn't have [pause] enough digit forms [speaker001:] Oh. I see. [speaker007:] so I xeroxed the same one seven times. [speaker006:] So these are excellent. [speaker001:] Oh. I see. [speaker006:] Why don't we do zer i Anyone have a problem with saying zero? [speaker007:] No. [speaker006:] Is zero OK? [speaker001:] Yeah. [speaker006:] OK. One and a two and three. [speaker007:] e [speaker006:] Once more with feeling. [speaker007:] And th [speaker006:] No, just k just kidding. Oh, yeah. It was. [speaker001:] OK, this is one channel. Can you uh, say your name and talk into your mike one at a time? [speaker003:] This is Eric on channel three, I believe. [speaker001:] OK. Uh, I don't think it's on there, Jane. [speaker004:] Tasting one two three, tasting. [speaker005:] OK, this is Jane on channel five. [speaker001:] Uh, I still don't see you Jane. [speaker005:] Oh, darn, what am I doing wrong? [speaker004:] Can you see me on channel four? Really? My lucky day. [speaker005:] Uh, screen no, [disfmarker] [speaker001:] Yeah, I s [speaker005:] it is, oh, maybe it just warmed up? [speaker001:] No. [speaker005:] Oh, darn, can you can't see channel five yet? [speaker001:] Uh, well, the mike isn't close enough to your mouth, [speaker005:] Oh, this would be k [speaker001:] so. [speaker005:] OK, is that better? [speaker004:] I like the high quality labelling. [speaker001:] S uh, try speaking loudly, so, [speaker005:] Hello, hello. [speaker004:] David, can we borrow your labelling machine to improve the quality of the labelling a little bit here? 
[speaker001:] OK, [speaker005:] Alright. [speaker001:] good. Thank you. [speaker002:] One t [speaker004:] How [disfmarker] how many are there, one to five? [speaker002:] One five, yeah. [speaker004:] Yeah, please. [speaker005:] Would you like to join the meeting? I bet [disfmarker] [speaker001:] Well, we don't wanna renumber them, cuz we've already have like, forms filled out with the numbers on them. So, let's keep the same numbers on them. [speaker002:] Yeah, OK, that's a good idea. [speaker001:] OK, Dan, are you on? [speaker002:] I'm on [disfmarker] I'm on two and I should be on. [speaker001:] Good. [speaker002:] Yeah. [speaker004:] Want to join the meeting, Dave? Do we [disfmarker] do [disfmarker] do we have a spare, uh [disfmarker] [speaker001:] And I'm getting lots of responses on different ones, so I assume [pause] the various and assorted P Z Ms are on. [speaker004:] We' we're [disfmarker] we' This is [disfmarker] this [disfmarker] this is a meeting meeting. [speaker005:] This is abou we're [disfmarker] we're mainly being taped but we're gonna talk about, uh, transcription for the m future meeting meetings. [speaker001:] Stuff. Yeah, this is not something you need to attend. So. [speaker005:] Yeah. e OK. [speaker003:] You're always having one of those days, Dave. [speaker005:] Y you'd be welcome. [speaker001:] Besides, I don't want anyone who has a weird accent. [speaker005:] You'd be welcome. [speaker001:] Right, Dan? [speaker004:] So, I don't understand if it's neck mounted you don't get very good performance. [speaker003:] It's not neck mounted. It's supposed to be h head mounted. [speaker004:] Yeah. It [disfmarker] it should be head mounted. Right? [speaker003:] Right. [speaker001:] Well, then put it on your head. [speaker002:] I don't know. [speaker001:] What are you doing? [speaker004:] Cuz when you do this, you can [disfmarker] Rouww Rouww. 
[speaker005:] Why didn't I [disfmarker] you were saying that but I could hear you really well on the [disfmarker] on the transcription [disfmarker] on the, uh, tape. [speaker002:] I [disfmarker] I don't know. [speaker001:] Well, I m I would prefer that people wore it on their head [speaker003:] i [speaker001:] but they were complaining about it. [speaker004:] Why? [speaker001:] Because it's not [disfmarker] it doesn't go over the ears. [speaker005:] It's badly designed. [speaker001:] It's very badly designed so it's [disfmarker] [speaker002:] It's very badly designed? [speaker004:] What do you mean it doesn't go over the ears? [speaker002:] Why? It's not s It's not supposed to cover up your ears. I mean, it's only badly [disfmarker] [speaker001:] Yeah but, there's nowhere to put the pad so it's comfortable. [speaker005:] So that's what you're d He's got it on his temples so it cuts off his circulation. [speaker003:] Yeah, that's [disfmarker] that's what I have. [speaker002:] Oh, that's strange. [speaker001:] And it feels so good that way. [speaker003:] It feels so good when I stop. [speaker005:] Try it. [speaker004:] Somebody wanna [disfmarker] [speaker001:] So I [disfmarker] I again would like to do some digits. [speaker004:] Somebody wanna close the door? [speaker001:] Um. [speaker002:] OK. [speaker001:] Sure. [speaker005:] We could do it with noise. [speaker001:] So let me [disfmarker] [speaker003:] You're always doing digits. [speaker001:] Well, you know, I'm just that sort of [disfmarker] digit y g sorta guy. OK. So this is Adam. [speaker005:] Uh, this is the same one I had before. [speaker001:] I doubt it. [speaker002:] It's still the same words. [speaker001:] I think we're session four by the way. Or m it might be five. [speaker004:] Psss! [speaker005:] No [speaker004:] Oh, that's good. [speaker001:] I didn't bring my previous thing. [speaker002:] We didn't [disfmarker] [speaker005:] Now, just to be sure, the numbers on the back, this is the channel? 
[speaker002:] That's the microphone number. [speaker005:] That's the microphone number. [speaker001:] Yeah, d leave the channel blank. [speaker005:] Uh oh. OK, good. [speaker004:] But number has to be [disfmarker]? [speaker005:] Five [disfmarker] [speaker004:] So we have to look up the number. [speaker001:] Right. [speaker004:] OK, good. [speaker005:] Good. OK. Well, this is Jane, on mike number five. Um. I just start? Do I need to say anything more? [speaker001:] Uh, transcript number. [speaker002:] Transcript number [disfmarker] [speaker003:] OK, this is Eric on microphone number three, [speaker004:] This is Beck on mike four. [speaker001:] Thanks. Should I turn off the VU meter Dan? Do you think that makes any difference? [speaker002:] Oh, God. No, let me do it. [speaker001:] Why? Are you gonna do something other than hit "quit"? [speaker002:] No, but I'm gonna look at the uh, logs as well. [speaker001:] Oh. [speaker005:] Uh, you said turn off the what? [speaker001:] Should have done it before. The VU meter which tells you what the levels on the various mikes are and there was one hypothesis that perhaps that [disfmarker] [vocalsound] the act of recording the VU meter was one of the things that contributed to the errors. [speaker005:] Oh. Oh, I see. [speaker004:] Yeah, but Eric, uh, you didn't think that was a reasonable hypothesis, right? [speaker005:] I See. [speaker001:] That was me, [speaker004:] Oh, I'm sorry y [speaker001:] I thought that was [disfmarker] [speaker004:] That was malarkey. [speaker001:] Well, the only reason that could be is if the driver has a bug. Right? Because the machine just isn't very heavily loaded. [speaker004:] No chance of that. [speaker001:] No chance of that. Just because it's beta. [speaker002:] Yeah, there [disfmarker] there [disfmarker] there was [disfmarker] there was a [disfmarker] there was a bug. [speaker001:] Look OK? [speaker002:] There was a glitch last time we ran. 
[speaker004:] Are are yo are you recording where the table mikes are by the way? Do you know which channels [disfmarker] [speaker002:] No. [speaker001:] Yeah, we usually do that. [speaker002:] No, we don't. [speaker001:] Yeah. [speaker004:] Why not? [speaker002:] But we [disfmarker] we ought to st we ought to standardize. I think, [vocalsound] uh, I s I spoke to somebody, Morgan, [comment] about that. [speaker004:] Why don't you just do this? [speaker002:] I think [disfmarker] I think we should put mar Well, no, w we can do that. [speaker001:] I mean, that's what we've done before. [speaker002:] I know what they [disfmarker] they're [disfmarker] they're four, three, two, one. In order now. [speaker004:] Four. [speaker002:] Three, [speaker004:] Three. [speaker002:] two, [vocalsound] and one. But I think [disfmarker] I think we should put them in standard positions. I think we should make little marks on the table top. [speaker005:] Oh, OK. [speaker001:] Which means we need to move this thing, and sorta decide how we're actually going to do things. [speaker002:] So that we can put them [disfmarker] I guess that's the point. [speaker001:] So. [speaker002:] It'll be a lot easier if we have a [disfmarker] if we have them permanently in place or something like that. [speaker001:] Right. [speaker005:] I do wish there were big booms coming down from the ceiling. [speaker002:] You do? [speaker005:] Yeah. [speaker003:] Would it make you feel more important? [speaker005:] Yeah, yeah, yeah. [speaker001:] Mmm. [speaker003:] I see. [speaker005:] You know. [speaker004:] Wait till the projector gets installed. [speaker001:] That'll work. [speaker005:] Oh, that'll be good. [speaker001:] That'll work. [speaker005:] OK. [speaker002:] Oh, gosh. [speaker004:] Cuz it's gonna hang down, make noise. [speaker002:] When's it gonna be installed? [speaker005:] OK. [speaker004:] Well, [vocalsound] it depends on [speaker002:] I see. [speaker004:] Is this b is this being recorded? 
[speaker001:] That's right. [speaker004:] Uh, I think Lila actually is almost getting r pretty close to even getting ready to put out the purchase order. [speaker002:] OK. Cool. [speaker004:] I handed it off to her about a month ago. [speaker002:] I see. [speaker001:] OK, so, topic of this meeting is I wanna talk a little bit about transcription. Um, I've looked a little bit into commercial transcription services and Jane has been working on doing transcription. Uh, and so we wan wanna decide what we're gonna do with that and then get an update on the electronics, and then, uh, maybe also talk a little bit about some infrastructure and tools, and so on. Um, you know, eventually we're probably gonna wanna distribute this thing and we should decide how we're gonna [disfmarker] how we're gonna handle some of these factors. So. [speaker002:] Distribute what? [speaker001:] Hmm? [speaker002:] The data? [speaker001:] Right. Right. I mean, so we're [disfmarker] we're collecting a corpus and I think it's gonna be generally useful. I mean, it seems like it's not a corpus which is [disfmarker] uh, has been done before. [speaker002:] Oh. [speaker001:] And so I think people will be interested in having [disfmarker] having it, [speaker004:] u Using, like, audio D V Ds or something like that? [speaker001:] and so we will [disfmarker] [speaker002:] Yes. [speaker001:] Excuse me? [speaker004:] Audio D V [speaker001:] Well, or something. Yeah, audio D V C Ds, [speaker004:] Or t [speaker001:] you know. [speaker004:] Yeah. tapes. [speaker001:] And [disfmarker] and so how we do we distribute the transcripts, how do we distribute the audio files, how do we [disfmarker] how do we just do all that infrastructure? [speaker003:] Well, I think [disfmarker] I mean, for that particular issue ther there are known sources where people go to [disfmarker] to find these kind of things like the LDC for instance. [speaker005:] Yeah, that's right. 
[speaker001:] Right, but [disfmarker] but so should we do it in the same format as LDC and what does that mean to what we've done already? [speaker002:] Right. The [disfmarker] It's not so much the actu The logistics of distribution are secondary to [pause] preparing the data in a suitable form for distribution. [speaker003:] Right. [speaker001:] Right. So, uh, as it is, it's sort of a [pause] ad hoc combination of stuff Dan set and stuff I set up, which we may wanna make a little more formal. So. [speaker002:] And the other thing is that, um, University of Washington may want to start recording meetings as well, [speaker001:] Right. [speaker002:] in which case w w we'll have to decide what we've actually got so that we can give them a copy. [speaker004:] A field trip. [speaker001:] That's right. Yeah. I was actually thinking I wouldn't mind spending the summer up there. That would be kind of fun. [speaker002:] Oh, really? [speaker001:] Yeah. [speaker002:] Different for you. [speaker001:] Visit my friends and spend some time [disfmarker] [speaker002:] Yes. [speaker001:] Well, and then also I have a bunch of stuff for doing this digits. So I have a bunch of scripts with X Waves, and some Perl scripts, and other things that make it really easy to extract out [vocalsound] and align where the digits are. And if U d UW's going to do the same thing I think it's worth while for them to do these digits tasks as well. [speaker004:] Mm hmm. [speaker001:] And what I've done is pretty ad hoc, um, so we might wanna change it over to something a little more standard. [speaker003:] Hmm. [speaker001:] You know, STM files, or XML, or something. [speaker004:] An and there's interest up there? [speaker001:] What's that? [speaker004:] There's interest up there? [speaker001:] Well they [disfmarker] they certainly wanna collect more data. And so they're applying, I think I B Is that right? [speaker002:] I don't know. [speaker001:] Something like that. 
Um, for some more money to do more data. So we were planning to do like thirty or forty hours worth of meetings. They wanna do an additional hundred or so hours. So, they want a very large data set. Um, but of course we're not gonna do that if we don't get money. So. [speaker002:] I see. [speaker001:] And I would like that just to get a disjoint speaker set and a disjoint room. [speaker003:] Mm hmm. [speaker001:] I mean, one of the things Morgan and I were talking about is we're gonna get to know this room really well, the [disfmarker] the acoustics of this room. [speaker004:] Including the fan. [speaker002:] All about that. [speaker004:] Did you notice the fan difference? [speaker001:] Including the fan. [speaker002:] Oh, now you've touched the fan control, now all our data's gonna be [disfmarker] [speaker004:] Hear the difference? [speaker005:] Oh, that's better. [speaker002:] Yeah, it's great. [speaker001:] Oh, it's enormous. [speaker005:] That's better. [speaker004:] Do you wanna leave it off or not? [speaker001:] All the others have been on. [speaker004:] Yeah, the [disfmarker] You sure? [speaker002:] That's [disfmarker] Oh, yeah. [speaker004:] You [disfmarker] [vocalsound] You think that [disfmarker] [speaker002:] Absolutely. [speaker001:] y Absolut [speaker004:] things after the f then [speaker001:] Yeah. [speaker004:] This fan's wired backwards by the way. Uh, I think this is high speed here. [speaker005:] Yeah, it's noticeable. [speaker004:] Well, not clear. Maybe it [disfmarker] Maybe it isn't. [speaker002:] Well it's [disfmarker] well like [vocalsound] "low" is mid [disfmarker] mid scale. [speaker004:] That's right. [speaker002:] So it could be [vocalsound] that it's not actually wired backwards it's just that ambiguous. [speaker004:] I was wondering also, Get ready. [comment] whether the lights made any noise. [speaker005:] Uh huh. [speaker001:] There's definitely [disfmarker] [speaker003:] Yeah, a little bit. [speaker002:] Oh, they do. 
[speaker001:] Yep. [speaker002:] Yeah. [speaker001:] High pitch hum. Wow. [speaker004:] So, [vocalsound] do our meetings in the dark with no air conditioning in the future. [speaker001:] Yeah, just get a variety. [speaker005:] I think candles would be nice if they don't make noise. [speaker002:] Oh, yeah. [speaker001:] They're very good. [speaker003:] It would [disfmarker] you know, it would real really mean that we should do short meetings when you [vocalsound] turn off the [disfmarker] [comment] turn off the air conditioning, [speaker001:] Carbon monoxide poisoning? [speaker004:] Short meetings, that's right. Or [disfmarker] Yeah, sort of [comment] r r [speaker003:] got to finish this meeting. [speaker004:] Tear t [pause] Tear your clothing off to stay cool. [speaker003:] That's right. [speaker004:] Actually, the a th air [disfmarker] the air conditioning's still working, that's just an auxiliary fan. [speaker003:] Right, I see. So, [speaker001:] So [speaker003:] um, in addition to this issue about the UW stuff there was announced today, uh, via the LDC, um, a corpus from I believe Santa Barbara. [speaker005:] Yeah, I saw it. I've been watching for that corpus. [speaker003:] Um, of general spoken English. [speaker005:] Yeah. Yep. [speaker003:] And I don't know exactly how they recorded it but apparently there's a lot of different [vocalsound] styles of speech and what not. [speaker001:] Mm hmm. [speaker003:] And [disfmarker] [speaker005:] They had people come in to a certain degree and they [disfmarker] and they have DAT recorders. [speaker003:] I see. So it is sort of far field stuff. Right? [speaker005:] I [disfmarker] I assume so, actually, I hadn't thought about that. Unless they added close field later on but, um, I've listened to some of those data and I, um, I've been [disfmarker] I [disfmarker] I was actually on the advisory board for when they set the project up. [speaker001:] Mm hmm. [speaker003:] Oh, OK. 
[speaker005:] I'm glad to see that it got released. [speaker002:] What's it sound like? [speaker005:] So it it's a very nice thing. [speaker001:] Yeah, I [disfmarker] I wish [disfmarker] I wish we had someone here working on adaptation [speaker003:] S [speaker001:] because it would nice to be able to take that stuff and adapt it to a meeting setting. [speaker005:] How do you mean [disfmarker] do you mean mechanical adaptation or [disfmarker] [speaker003:] But it may be [disfmarker] it may be useful in [disfmarker] [speaker001:] You know [disfmarker] No, software, [speaker005:] OK. [speaker001:] to adapt the speech recognition. [speaker003:] Well, what I was thinking is it may be useful in transcribing, if it's far field stuff, [speaker001:] Mm hmm. [speaker003:] right? In doing, um, some of our first automatic speech recognition models, it may be useful to have that kind of data [speaker005:] Great idea. [speaker003:] because that's very different than any kind of data that we have so far. [speaker001:] That's true. [speaker005:] And [disfmarker] and their recording conditions are really clean. I mean, I've [disfmarker] I've heard [disfmarker] I've listened to the data. [speaker001:] Well that's not good, right? [speaker005:] It sounds [disfmarker] [speaker003:] That's [disfmarker] that's not great. [speaker004:] Tr [speaker005:] well but what I mean is that, um [disfmarker] [speaker004:] But far field means great distance? I mean [disfmarker] [speaker001:] Just these. [speaker004:] Not head mounted? [speaker002:] Yeah. [speaker004:] And so that's why they're getting away with just two channels or something, or are they using multiple DATs? [speaker005:] Um, oh, good question and I can't ans answer it. I don't know. [speaker001:] Well we can look into it. [speaker003:] No, and their web [disfmarker] their web page didn't answer it either. [speaker005:] OK. [speaker003:] So I'm, I uh, was thinking that we should contact them. 
So it's [disfmarker] that's sort of a beside the point point. But. [speaker001:] So we can get that just with, uh, media costs, [speaker004:] Still a point. [speaker003:] Right. [speaker001:] is that right? [speaker003:] Uh, in fact we get it for free cuz they're distributing it through the LDC. [speaker001:] Oh. Great. [speaker005:] Yep. [speaker001:] So that would be [disfmarker] yeah, that would be something to look into. [speaker003:] So, I can [disfmarker] I can actually arrange for it to arrive in short order if we're [disfmarker] [speaker001:] So. [speaker005:] The other thing too is from [disfmarker] from a [disfmarker] [speaker001:] Well, it's silly to do unless we're gonna have someone to work on it, so maybe we need to think about it a little bit. [speaker005:] The other thing too is that their their jus their transcription format is really nice and simple in [disfmarker] in the discourse domain. [speaker003:] Huh. [speaker005:] But they also mentioned that they have it time aligned. I mean, I s I [disfmarker] I saw that write up. [speaker003:] Yeah. Maybe we should [disfmarker] maybe we should get a copy of it just to see what they did [speaker002:] Yeah, absolutely. [speaker003:] so [disfmarker] so that we can [disfmarker] we can compare. [speaker005:] It's very nice. [speaker001:] Yeah. [speaker002:] Absolutely. [speaker001:] OK, why don't you go ahead and do that then Eric? [speaker003:] Alright, I'll do that. I can't remember the name of the corpus. It's Corps S [disfmarker] [speaker005:] CSAE. Corpus of Spoken American English. [speaker003:] S [disfmarker] Right, OK. [speaker005:] Yeah, sp I've been [disfmarker] I was really pleased to see that. I knew that they [disfmarker] [vocalsound] they had had some funding problems in completing it but, um, [speaker003:] Yeah. [speaker002:] Uh huh. [speaker003:] Well they're [disfmarker] [speaker005:] this is clever. [speaker003:] Apparently this was like phase one [speaker005:] Got it through the LDC. 
[speaker003:] and the there's still more that they're gonna do apparently or something like that [speaker005:] Great. [speaker003:] unless of course they have funding issues [speaker005:] Great. [speaker003:] and then then it ma they may not do phase two but [vocalsound] from all the web documentation it looked like, "oh, this is phase one", [speaker005:] Super. [speaker003:] whatever that means. [speaker005:] Super. Great. Yeah, that [disfmarker] I mean, they're really well respected in the linguistics d side too and the discourse area, and [disfmarker] [speaker003:] OK. [speaker005:] So this is a very good corpus. [speaker003:] But, it uh it would also maybe help be helpful for Liz, if she wanted to start working on some discourse issues, you know, looking at some of this data and then, you know [disfmarker] [speaker001:] Right. [speaker003:] So when she gets here maybe that might be a good thing for her. [speaker001:] Actually, that's another thing I was thinking about is that maybe Jane should talk to Liz, to see if there are any transcription issues related to discourse that she needs to get marked. [speaker005:] OK. [speaker003:] Maybe we should have a big meeting meeting. [speaker002:] Sure, of course. [speaker004:] That would be a meeting meeting meeting? [speaker001:] A meeting meeting meeting. [speaker003:] Yeah. [speaker001:] Well this is the meeting about the meeting meeting meeting. So. [speaker003:] Oh. Right. [speaker001:] Um. [speaker003:] But maybe we should, uh find some day that Liz [disfmarker] uh, Liz and Andreas seem to be around more often. [speaker001:] Mm hmm. [speaker003:] So maybe we should find a day when they're gonna be here and [disfmarker] and Morgan's gonna be here, and we can meet, at least this subgroup. I mean, not necessarily have the U dub people down. 
[speaker001:] Well, I was even thinking that maybe we need to at least ping the U dub [disfmarker] to see [disfmarker] [speaker003:] We need [disfmarker] we need to talk to them some more. [speaker001:] you know, say "this is what we're thinking about for our transcription", if nothing else. [speaker002:] Mm hmm. [speaker001:] So, well w shall we move on and talk a little bit about transcription then? [speaker003:] Yeah. [speaker002:] Let's. [speaker001:] OK, so [comment] [vocalsound] since that's what we're talking about. What we're using right now is a tool, um, from this French group, called "Transcriber" that seems to work very well. Um, so it has a, uh, nice useful Tcl TK user interface and, uh, [speaker004:] Thi this is the process of converting audio to text? [speaker001:] Right. [speaker004:] And this requires humans just like the [disfmarker] the STP stuff. [speaker001:] Yes, yeah. Right, right. So we're [disfmarker] we're at this point only looking for word level. So all [disfmarker] all [disfmarker] so what you have to do is just identify a segment of speech in time, and then write down what was said within it, and identify the speaker. And so the things we [disfmarker] that we know [disfmarker] that I know I want are [vocalsound] the text, the start and end, and the speaker. But other people are interested in for example stress marking. And so Jane is doing primary stress, [vocalsound] um, stress marks as well. Um, and then things like repairs, and false starts, and, [vocalsound] filled pauses, and all that other sort of stuff, [vocalsound] we have to decide how much of that we wanna do. [speaker005:] I did include a glo [comment] uh, a certain first pass. My [disfmarker] my view on it was when you have a repair [vocalsound] then, uh [disfmarker] it seems [disfmarker] I mean, we saw, there was this presentation in the [disfmarker] one of the speech group meetings about how [disfmarker] [speaker001:] Mm hmm. 
[speaker005:] and I think Liz has done some stuff too on that, that it, uh [disfmarker] [vocalsound] that you get it bracketed in terms of like [disfmarker] well, if it's parenthetical, which I know that Liz has worked on, then [vocalsound] uh [disfmarker] y y you'll have different prosodic aspects. [speaker001:] Mm hmm. [speaker002:] Hmm. [speaker005:] And then also [vocalsound] if it's a r if it's a repair where they're [disfmarker] like what I just did, [vocalsound] then it's nice to have sort of a sense of the continuity of the utterance, the start to be to the finish. And, uh, it's a little bit deceptive if you include the repai the pre repair part [disfmarker] and sometimes or of it's in the middle. Anyway, [vocalsound] so what I was doing was bracketing them to indicate that they were repairs which isn't uh, very time consuming. [speaker001:] Mm hmm. [speaker004:] I is there already some sort of plan in place for how this gonna be staffed or done? Or is it real [disfmarker] is that what we're talking about here? [speaker001:] Well, that's part of the thing we're talking about. So what we wanted to do was have Jane do basically one meeting's worth, [speaker005:] Mm hmm. [speaker001:] you know, forty minutes to an hour, [speaker005:] As a pilot study. [speaker001:] and [disfmarker] [speaker004:] Yourself? [speaker001:] Yeah. [speaker005:] Yeah, as a pilot study. [speaker004:] It [disfmarker] this is [disfmarker] this is like five times real time or ten times real time [disfmarker] [speaker001:] Ten times about, is [disfmarker] and so one of the things was to get an estimate of how long it would take, and then also what tools we would use. And so the next decision which has to be made actually pretty soon is how are we gonna do it? [speaker004:] And so you make Jane do the first one [speaker001:] So. [speaker004:] so then she can decide, oh, we don't need all this stuff, just the words are fine. [speaker005:] That's right, that's right. 
[speaker002:] That's right. [speaker005:] I wanna hear about these [disfmarker] uh, we have a g you were s continuing with the transcription conventions for s [speaker001:] R right, so [disfmarker] so one [disfmarker] one option is to get linguistics grad students and undergrads to do it. And apparently that's happened in the past. And I think that's probably the right way to do it. Um, it will require a post pass, I mean people will have to look at it more than once to make sure that it's been done correctly, but I just can't imagine that we're gonna get anything that much better from a commercial one. And the commercial ones I'm sure will be much more expensive. [speaker004:] Can't we get Joy to do it all? [speaker005:] No, [vocalsound] that's [disfmarker] [speaker001:] Yeah right. [speaker004:] Is tha wasn't that what she was doing before? [speaker001:] We will just get Joy and Jane to do everything. [speaker004:] Yeah, that's right. [speaker001:] But, you know, that's what we're talking about is getting some slaves who [disfmarker] who need money [speaker004:] Right. [speaker001:] and, uh, duh, again o [speaker005:] I object to that characterization! [speaker002:] Oh, really. [speaker001:] I meant Joy. And so again, I have to say "are we recording" [speaker005:] Oh, thank you. OK. [speaker001:] and then say, uh, Morgan has [disfmarker] has consistently resisted telling me how much money we have. [speaker004:] Right. Well, the answer is zero. [speaker001:] So. [speaker004:] There's a reason why he's resisted. But. [speaker001:] Well, if it's zero then we can't do any transcription. [speaker004:] Right. [speaker001:] I mean, cuz we're [disfmarker] we [disfmarker] [speaker002:] I have such a hard name. [speaker001:] I mean, I [disfmarker] I can't imagine us doing it ourselves. [speaker004:] Well, we already [disfmarker] we already [disfmarker] [vocalsound] We already have a plan in place for the first meeting. [speaker001:] Right? [speaker004:] Right? 
[speaker001:] N right. [speaker004:] That's [disfmarker] [speaker005:] Well th there is als Yeah, really. There is also the o other possibility which is if you can provide not money but instructional experience or some other perks, [vocalsound] you can [disfmarker] you could get people to [disfmarker] to um, to do it in exchange. [speaker004:] Well, i b but seriously, I [disfmarker] I mean, Morgan's obviously in a bind over this [speaker001:] Right. [speaker004:] and thing to do is just the field of dreams theory, which is we we go ahead as though there will be money at the time that we need the money. [speaker001:] Mm hmm. [speaker004:] And that's [disfmarker] that's the best we can do. [speaker002:] Yeah. [speaker004:] i b To not do anything until we get money is [disfmarker] is ridiculous. [speaker001:] Right. [speaker002:] Right. [speaker001:] Right. [speaker004:] We're not gonna do any [disfmarker] get anything done if we do that. [speaker005:] Mm hmm. [speaker002:] Yeah. [speaker001:] So at any rate, Jane was looking into the possibility of getting students, at [disfmarker] is that right? Talking to people about that? [speaker005:] I'm afraid I haven't made any progress in that front yet. [speaker001:] OK. [speaker005:] I should've sent email and I haven't yet. [speaker001:] Yeah, right. So, uh [disfmarker] [speaker004:] I d do [disfmarker] So until you actually [vocalsound] have a little experience with what this [disfmarker] this French thing does we don't even have [disfmarker] [speaker005:] And I do have [disfmarker] [speaker001:] She's already done quite a bit. [speaker005:] I have [disfmarker] a bunch of hours, [speaker004:] Oh, we have. I'm sorry. [speaker001:] Yeah. [speaker005:] yeah. [speaker004:] So that's where you came up with the f the ten X number? Or is that really just a guess? [speaker005:] Actually that's the [disfmarker] the one people usually use, ten X. And I haven't really calculated [disfmarker] [speaker003:] How fast are you? 
[speaker005:] How fast am I? I haven't done a [speaker004:] Yeah i [speaker005:] s see, I've been at the same time doing kind of a boot strapping in deciding on the transcription conventions that [disfmarker] that are [disfmarker] [vocalsound] you know, and [disfmarker] and stuff like, you know, how much [disfmarker] [speaker002:] Mmm. [speaker003:] Right. [speaker005:] There's some interesting human factors problems like, [vocalsound] yeah, what span of [disfmarker] of time is it useful to segment the thing into in order to uh, transcribe it the most quickly. Cuz then, you know, you get like [disfmarker] [vocalsound] if you get a span of five words, that's easy. [speaker002:] Yeah. [speaker005:] But then you have to take the time to mark it. [speaker002:] Yeah. [speaker005:] And then there's the issue of [vocalsound] it's easier to [disfmarker] [vocalsound] hear it th right the first time if you've marked it at a boundary instead of [vocalsound] somewhere in the middle, cuz then the word's bisected or whatever [speaker002:] Mm hmm. [speaker005:] and [disfmarker] And so I mean, I've been sort of playing with, uh, different ways of mar cuz I'm thinking, you know, I mean, if you could get optimal instructions you could cut back on the number of hours it would take. [speaker004:] D does uh [disfmarker] this tool you're using is strictly [disfmarker] [speaker002:] Yeah. [speaker004:] it doesn't do any speech recognition does it? [speaker001:] No. [speaker005:] No, it doesn't but what a super tool. It's a great environment. [speaker004:] But [disfmarker] but is there anyway to [disfmarker] to wire a speech recognizer up to it and actually run it through [disfmarker] [speaker005:] That's an interesting idea. Hey! [speaker001:] We've [disfmarker] we've thought about doing that but the recognition quality is gonna be horrendous. [speaker004:] Well, a couple things. First of all the time marking you'd get [disfmarker] you could get by a tool. [speaker002:] Wow. That's true. 
[speaker004:] And so if the [disfmarker] if [disfmarker] if the issue really [speaker005:] That's interesting. [speaker004:] uh, I'm think about the close caption that you see running by on [disfmarker] on live news casts. You know, yo [speaker001:] Most of those are done by a person. [speaker005:] Yeah, [speaker004:] I know [disfmarker] I know that. [speaker005:] I [speaker004:] No, I understand. And [disfmarker] in a lot of them you see typos and things like that, [speaker001:] Mm hmm. [speaker004:] but it [disfmarker] but it occurs to me that [vocalsound] it may be a lot easier to correct things than it is to do things from scratch, no matter how wonderful the tool is. [speaker001:] Yeah. Yeah, we [disfmarker] [speaker004:] But if [disfmarker] if there was a way to merge the two [disfmarker] [speaker003:] Well, I mean, but sometimes it's easier to type out something instead of going through and figuring out which is the right [disfmarker] [speaker005:] That'd be fun. [speaker001:] I mean, we've talked about it but [disfmarker] [speaker003:] I mean, it depends on the error rate, right? [speaker004:] Well s but [disfmarker] but again the timing is for fr should be for free. The timing should be [disfmarker] [speaker003:] But we don't care about the timing of the words. [speaker004:] Well I thought you just [disfmarker] that's [disfmarker] said that was a critical issue. [speaker005:] No, [speaker001:] We don't care about the timing of the words, just of the utterances. [speaker005:] uh the [disfmarker] the boundary [disfmarker] [speaker003:] We cut it s s [speaker002:] We don't [disfmarker] we don't know, actually. [speaker005:] boundary. 
[speaker002:] We haven't decided which [disfmarker] which time we care about, and that's kind of one of the things that you're saying, is like [disfmarker] [vocalsound] you have the option to put in more or less timing data [disfmarker] and, uh, be in the absence of more specific instructions, [vocalsound] we're trying to figure out what the most convenient thing to do is. [speaker001:] Yeah, so [disfmarker] so what [disfmarker] what she's done so far, is sort of [disfmarker] more or less breath g not breath groups, [comment] sort of phrases, continuous phrases. [speaker002:] Yeah. [speaker001:] And so, um, that's nice because you [disfmarker] you separate when you do an extract, you get a little silence on either end. So that seems to work really well. [speaker005:] That's ideal. [speaker001:] Um. [speaker005:] Although I was [disfmarker] I [disfmarker] you know, the alternative, which I was sort of experimenting with before I ran out of time, [vocalsound] recently was, [vocalsound] um [disfmarker] [vocalsound] that, you know, ev if it were like an arbitrary segment of time [disfmarker] i t pre marked [speaker002:] Yeah. [speaker005:] cuz it does take time to put those markings in. [speaker002:] Yeah. [speaker005:] It's really the i the interface is wonderful because, you know, the time it takes is you listen to it, [vocalsound] and then you press the return key. But then, you know, it's like, [vocalsound] uh, you press the tab key to stop the flow and [disfmarker] [vocalsound] and, uh, the return key to p to put in a marking of the boundary. But, you know, obviously there's a lag between when you hear it and when you can press the return key [speaker002:] Yeah. [speaker005:] so it's slightly delayed, so then you [disfmarker] you listen to it a second time and move it over to here. So that takes time. 
[speaker004:] a i a [speaker005:] Now if it could all be pre marked at some, [vocalsound] l you know, good [disfmarker] [speaker004:] ar but Are [disfmarker] are those d delays adjustable? [speaker001:] Hmm. [speaker004:] Those delays adjustable? See a lot of people who actually build stuff with human computer interfaces [vocalsound] understand that delay, and [disfmarker] and so when you [disfmarker] by the time you click it it'll be right on because it'll go back in time to put the [disfmarker] [speaker002:] Yeah. [speaker005:] Yeah. Yeah, uh, not in this case. [speaker002:] It could do that couldn't it. [speaker001:] We could program that pretty easily, [speaker005:] It has other [disfmarker] [speaker001:] couldn't we Dan? Yeah, mis Mister TCL? [speaker002:] Yeah. [speaker005:] Oh, interesting point. [speaker002:] I would have thought so, yeah. [speaker005:] Ah! [comment] Interesting point. OK, that would make a difference. [speaker002:] Mmm. [speaker001:] But, um [disfmarker] [speaker005:] I mean, it's not bad [speaker001:] But, if we tried to do automatic speaker ID. [speaker005:] but it does [disfmarker] take twice. [speaker001:] I mean, cuz primarily the markings are at speaker change. [speaker002:] Yeah, yeah, but [disfmarker] But we've got [disfmarker] we've got the most channel data. [speaker001:] But that would be [disfmarker] [speaker002:] We'd have to do it from your signal. Right. [speaker005:] Oh, good point! [speaker002:] I mean, we've [disfmarker] we've got [disfmarker] we've got a lot of data. [speaker005:] Ah! We've got volume. [speaker001:] Yeah, I guess the question is how much time will it really save us versus the time to write all the tools to do it. [speaker002:] Right. 
but the chances are if we if we're talking about collecting [vocalsound] ten or a hundred hours, which is going to take a hundred or a thousand hours to transcribe [disfmarker] [speaker004:] If [disfmarker] [speaker001:] But [disfmarker] [speaker004:] if we can go from ten X to five X we're doing a big [disfmarker] [speaker001:] We're gonna need [disfmarker] we're gonna need ten to a hundred hours to train the tools, and validate the tools the do the d to [disfmarker] to do all this anyway. [speaker003:] Right. So maybe [disfmarker] [speaker005:] Wow. But [disfmarker] but it op [speaker002:] If we're just doing silence detection [disfmarker] [speaker001:] I knew you were gonna do that. Just saw it coming. [speaker005:] I'm sorry. I wish you had told me [disfmarker] wish you'd told me. [speaker004:] Put [disfmarker] put it on your sweater. [speaker005:] At what part? OK, I'm alright. [speaker002:] Um, i it seems like [disfmarker] Well, uh, I don't know. Yeah. I mean, it [disfmarker] it's [disfmarker] it's maybe like [disfmarker] [vocalsound] a week's work to get to do something like this. So forty or fifty hours. [speaker005:] Could you get it so that with [disfmarker] so it would [disfmarker] it would detect volume on a channel and insert a marker? [speaker003:] Right. [speaker005:] And the [disfmarker] the format's really transparent. [speaker002:] Sure. [speaker003:] Yeah. [speaker005:] It's just a matter of [vocalsound] a very c clear [disfmarker] it's XML, isn't it? [speaker001:] Mm hmm. [speaker005:] It's very [disfmarker] I mean, I looked at the [disfmarker] the file format and it's just [disfmarker] it has a t a time [disfmarker] [vocalsound] a time indication and then something or other, and then an end time or something or other. [speaker003:] So maybe [disfmarker] maybe we could try the following experiment. Take the data that you've already transcribed [speaker005:] Mm hmm. 
[speaker003:] and [disfmarker] [speaker004:] Is this already in the past or already in the future? [speaker003:] Already in the past. [speaker004:] You've already [disfmarker] you've already done some? [speaker003:] She [disfmarker] she's done one [disfmarker] she's one [disfmarker] [speaker005:] Yes I have. [speaker001:] She's [disfmarker] she's done about half a meeting. [speaker004:] Oh Oh, I see. [speaker003:] Right. [speaker004:] OK, [speaker003:] Right. [speaker004:] good. [speaker001:] Right? About half? [speaker003:] I'm go [speaker005:] S I'm not sure if it's that's much but anyway, enough to work with. [speaker003:] Right. [speaker002:] Several minutes. [speaker003:] Um, and [disfmarker] and throw out the words, but keep the time markings. And then go through [disfmarker] I mean, and go through and [disfmarker] and try and re transcribe it, given that we had perfect boundary detection. [speaker005:] OK. Good idea. [speaker003:] And see if it [disfmarker] see if it [disfmarker] see if it feels easier to you. [speaker005:] OK. [speaker004:] And forgetting all the words because you've been thr [speaker005:] Yeah, that's what I was thinking. I'd [disfmarker] I'd be cheating a little bit g with familiarity effect. [speaker003:] Yeah, I mean uh, that's part of the problem is, is that what we really need is somebody else to come along. [speaker002:] Well, no, you should do it [disfmarker] you should do it [disfmarker] [vocalsound] Do it again from scratch and then do it again at the boundaries. So you do the whole thing three times and then we get [disfmarker] [speaker003:] Yeah. [speaker005:] No. Now, there's a plan. [speaker004:] And then [disfmarker] then w since we need some statistics do it three more. [speaker005:] OK. [speaker004:] And so you'll get [disfmarker] you'll get down to one point two X by the time you get done. [speaker005:] Oh, yeah. I'll do that tomorrow. I should have it finished by the end of the day. 
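[Editorial sketch of the automatic pre-marking idea floated a moment earlier, i.e. detecting volume on a single close-talking channel and inserting a boundary marker at each speech/silence transition. The frame length and energy threshold are assumed values, and this is a naive energy detector, not a description of any tool the group actually had.]

```python
def energy_boundaries(samples, rate, frame_s=0.05, threshold=0.01):
    """Return times (seconds) where one channel's frame energy crosses
    a silence threshold -- candidate segment boundaries for a transcriber
    to correct rather than create from scratch. Frame length and
    threshold are illustrative, not tuned values."""
    frame_len = int(rate * frame_s)
    boundaries = []
    prev_active = False
    for i in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(x * x for x in frame) / frame_len
        active = energy > threshold
        if active != prev_active:  # speech/silence transition
            boundaries.append(i / rate)
        prev_active = active
    return boundaries
```

Run per head-mounted channel, the crossing times could be written straight into the transcript file's time fields, which is exactly the "pre-marked" starting point being discussed.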
[speaker004:] No, but the thing is the fact that she's [disfmarker] she's did it before just might give a lower bound. That's all. [speaker005:] Exactly. [speaker004:] Uh, which is fine. [speaker002:] Yeah. [speaker003:] Right. [speaker004:] It's [disfmarker] [speaker005:] Yeah. [speaker004:] And if the lower bound is nine X then w it's a waste of time. [speaker003:] Right. [speaker005:] Well, uh but there's an extra problem which is that I didn't really keep accurate [disfmarker] uh, it wasn't a pure task the first time, [speaker002:] Oh! [speaker005:] so [disfmarker] [vocalsound] uh, it's gonna be an upper bound in [disfmarker] in that case. [speaker002:] Yeah. [speaker005:] And it's not really strictly comparable. So I think though it's a good proposal to be used on a new [disfmarker] a new batch of text that I haven't yet done yet in the same meeting. Could use it on the next segment of the text. [speaker002:] The point we [disfmarker] where do we get the [disfmarker] the [disfmarker] the oracle boundaries from? [speaker003:] Right. [speaker002:] Or the boundaries. [speaker001:] Yeah, one person would have to assign the boundaries and the [disfmarker] and the other person would have to [disfmarker] [speaker005:] Well, but couldn't I do it for the next [disfmarker] [speaker002:] We [disfmarker] we [disfmarker] we could get fake [disfmarker] [speaker005:] Oh, I see what you mean. [speaker001:] I mean that's easy enough. I could do that. [speaker005:] Well, but the oracle boundaries would come from volume on a partic specific channel wouldn't they? [speaker002:] That would be the automatic boundaries. [speaker003:] No, no, no, no. [speaker001:] No, no. [speaker003:] You wanna know [disfmarker] given [disfmarker] Given a perfect human segmentation, I mean, you wanna know how well [disfmarker] [speaker005:] Yeah. [speaker003:] I mean, the [disfmarker] the question is, is it worth giving you the segmentation? [speaker005:] Oh, I see what you mean. Yeah. 
[speaker001:] I mean, that [disfmarker] that's easy enough. [speaker003:] Right. [speaker001:] I could generate the segmentation and [disfmarker] and you could do the words, and time yourself on it. [speaker004:] A little double blind ear kind of thing. [speaker001:] So. Yep. [speaker005:] I see. OK. [speaker001:] So it [disfmarker] that might be worth doing. [speaker005:] That's good. I like that. [speaker001:] That would at least tell us whether it's worth spending a week or two trying to get a tool, that will compute the segmentations. [speaker004:] And the thing to keep in mind too about this tool, guys is that [vocalsound] sure, you can do the computation for what we're gonna do in the future [speaker003:] Right. [speaker004:] but if [disfmarker] if UW's talking about doing two, or three, or five times as much stuff and they can use the same tool, then obviously there's a real multiplier there. [speaker001:] Right. [speaker005:] And the other thing too is with [disfmarker] with speaker identification, if [disfmarker] if that could handle speaker identification that's a big deal. [speaker002:] Well it w [speaker004:] Well, use it. [speaker003:] Right. [speaker005:] OK. [speaker004:] Yeah, that's why we s bought the expensive microphones. [speaker005:] Yeah, I mean, that's a nice feature. [speaker001:] Yep. [speaker002:] Yeah, yeah. [speaker005:] That's a major [disfmarker] that's like, one of the two things that [disfmarker] [speaker003:] I mean, there's gonna [disfmarker] there's gonna be [disfmarker] in the meeting, like the reading group meeting that we had the other day, that's [disfmarker] it's gonna be a bit of a problem [speaker002:] OK. [speaker003:] because, like, I wasn't wearing a microphone f and there were other people that weren't wearing microphones. [speaker002:] Yes. [speaker004:] But you didn't say anything worth while anyway, right? [speaker001:] That [disfmarker] [speaker002:] Right. [speaker001:] That'll s [speaker005:] Yeah. 
[speaker003:] That's pretty much true but [disfmarker] [speaker001:] it might save ninety percent of the work though. [speaker003:] but, yes. [speaker001:] So. [speaker002:] So I [disfmarker] I need to [disfmarker] we need to look at what [disfmarker] what the final output is but it seems like [vocalsound] we [disfmarker] it doesn't [disfmarker] it seems like it's not really not that hard to have an automatic tool to generate [vocalsound] the phrase marks, and the speaker, and speaker identity without putting in the words. [speaker001:] Yeah. I've already become pretty familiar with the format, [speaker005:] That'd be so great. [speaker001:] so [speaker005:] Yeah. [speaker002:] Mm hmm. [speaker001:] it would be easy. [speaker005:] Yeah. We didn't finish the [disfmarker] the part of work already completed on this, [speaker001:] If you'd tell me where it is, huh? [speaker005:] did we? I mean, you [disfmarker] [vocalsound] you talked a little bit about the transcription conventions, [speaker001:] Mm hmm. [speaker005:] and, I guess you've mentioned in your progress report, or status report, that you had written a script to convert it into [disfmarker] So, [vocalsound] I [disfmarker] when I [disfmarker] i the [disfmarker] it's quickest for me in terms of [vocalsound] the transcription part [vocalsound] to say [vocalsound] something like, you know, if [disfmarker] [vocalsound] if Adam spoke to, um [disfmarker] to just say, "A colon", Like who could be, you know, I mean at the beginning of the line. [speaker002:] Mmm. [speaker005:] and E colon [vocalsound] instead of entering the interface for speaker identification and clicking on the thing, uh, indicating the speaker ID. So, and then he has a script that will convert it into the [disfmarker] the thing that, uh, would indicate speaker ID. If that's clear. [speaker001:] It's pretty cute. [speaker003:] OK. [speaker001:] But at any rate. [speaker005:] It's Perl script. [speaker001:] So, um, Right. 
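The speaker-shorthand conversion described above was a Perl script; the following is a hypothetical Python re-creation of the idea, not the original tool. The initials-to-speaker mapping is invented for illustration.

```python
import re

# Hypothetical mapping from a transcriber's shorthand initial to the full
# speaker label; the actual initials and labels used are assumptions here.
SPEAKER_MAP = {"A": "Adam", "E": "Eric", "J": "Jane"}

def expand_speaker_shorthand(line, speaker_map=SPEAKER_MAP):
    """Expand a leading shorthand like 'A:' into the full speaker label,
    leaving lines without a known initial untouched."""
    m = re.match(r"^([A-Z]):\s*(.*)$", line)
    if m and m.group(1) in speaker_map:
        return "%s: %s" % (speaker_map[m.group(1)], m.group(2))
    return line
```

The point of the convention is speed at transcription time: typing "A colon" is faster than driving the GUI's speaker-ID dialog, and the cleanup is mechanical.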
So [disfmarker] so I think the guess at ten X seems to be pretty standard. Everyone [disfmarker] more or less everyone you talk to says about ten times for hard technical transcription. [speaker005:] Mm hmm. Yeah. [speaker004:] Using wh using stone age using stone age tools. [speaker005:] That's right. [speaker001:] Using [disfmarker] using stone age tools. [speaker005:] Yeah, well that's true, [speaker001:] I mean, I looked at Cyber Transcriber [speaker005:] but [disfmarker] [speaker001:] which is a service that you send an audio file, they do a first pass speech recognition. And then they [disfmarker] they do a clean up. But it's gonna be horrible. They're never gonna be able to do a meeting like this. [speaker005:] What [disfmarker] i just approximately, what did you find out in terms of price or [disfmarker] or whatever? [speaker004:] Right. [speaker002:] No. [speaker001:] Well, for Cyber Transcriber they don't quote a price. They want you to call and [disfmarker] and talk. So for other services, um, they were about thirty dollars an hour. [speaker005:] Of [disfmarker] of tape? Or of action? [speaker001:] Thirty [disfmarker] So, yeah. For thirty dollars an hour for [disfmarker] of their work. [speaker005:] OK. OK. Oh, of their [disfmarker] Oh! [speaker001:] So [disfmarker] so if it's ten times it's three hundred dollars an hour. [speaker003:] So that's three [disfmarker] that's three hours. [speaker004:] D did you talk to anybody that does closed captioning for [disfmarker] for uh, TV? [speaker005:] OK. [speaker003:] Right. [speaker001:] No. [speaker004:] Cuz they a usually at the end of the show they'll tell what the name of the company is, the captioning company that's doing it. [speaker001:] Mm hmm. [speaker005:] Interesting. [speaker001:] Yeah, so [disfmarker] so my [disfmarker] my search was pretty cursory. It was just a net search. And, uh, so it was only people who have web pages and are doing stuff through that. 
[speaker004:] Well, you know, the [disfmarker] the thing [disfmarker] the thing about this is thinking kind of, maybe a little more globally than I should here but [comment] [vocalsound] that really this could be a big contribution we could make. Uh, I mean, we've been through the STP thing, we know what it [disfmarker] what it's like to [disfmarker] to manage the [disfmarker] manage the process, and admittedly they might have been looking for more detail than what we're looking for here but [vocalsound] it was a [disfmarker] it was a big hassle, right? I mean, uh, you know, they [disfmarker] they constantly could've reminding people and going over it. [speaker002:] Yeah. [speaker004:] And clearly some new stuff needs to be done here. And it's [disfmarker] it's only [vocalsound] our time, where "our" of course includes Dan, [vocalsound] Dan and you guys. It doesn't include me at all. Uh. j Just seems like [disfmarker] [speaker002:] Yeah, I mean I don't know if we'd be able to do any thing f to help STP type problems. But certainly for this problem we can do a lot better than [disfmarker] [speaker004:] Bec Why? Because they wanted a lot more detail? [speaker002:] No. [speaker001:] Right. [speaker002:] Because they had [disfmarker] because they only had two speakers, right? I mean, the [disfmarker] the segmentation problem is [disfmarker] [speaker004:] Only had two. [speaker001:] Trivial. They had two speakers over the telephone. [speaker004:] Oh, I see. So what took them so long? [speaker001:] Um, mostly because they were doing much lower level time. [speaker002:] Yeah. [speaker001:] So they were doing phone and syllable transcription, as well as, uh, word transcription. [speaker004:] Right. Right. [speaker003:] Right. [speaker005:] Mm hmm. [speaker001:] And so we're [disfmarker] w we decided early on that we were not gonna do that. [speaker004:] I see. 
But there's still the same issue of managing the process, of [disfmarker] [vocalsound] of reviewing and keeping the files straight, and all this stuff, that [disfmarker] which is clearly a hassle. [speaker001:] Yep. [speaker002:] Yeah. [speaker001:] Right. And so [disfmarker] so what I'm saying is that if we hire an external service I think we can expect three hundred dollars an hour. [speaker002:] Yeah. [speaker001:] I think that's the ball park. There were several different companies that [disfmarker] [vocalsound] and the [disfmarker] the range was very tight for technical documents. Twenty eight to thirty two dollars an hour. [speaker003:] And who who knows if they're gonna be able to m manage multal multiple channel data? [speaker002:] Yeah, they won't. [speaker001:] They won't. [speaker002:] They w they'll refuse to do it. [speaker005:] Mm hmm. [speaker001:] We'll have to mix them. [speaker003:] Right. [speaker002:] Yeah. [speaker005:] And then there's the problem also that [disfmarker] [speaker002:] No, but I mean, they [disfmarker] they [disfmarker] they won't [disfmarker] they won't [disfmarker] they will refuse to transcribe this kind of material. That's not what they're d quoting for, right? [speaker004:] Well, they might [disfmarker] [vocalsound] they might quote it [disfmarker] [speaker001:] Yes, it is. [speaker002:] For quoting meetings? [speaker001:] Sev several of them say that they'll do meetings, and conferences, and s and so on. [speaker002:] Wow. [speaker001:] None of them specifically said that they would do speaker ID, or speaker change mark. [speaker002:] Yeah. [speaker004:] Th th the th there may be just multiplier for five people costs twice as much and for ten people co [comment] Something like that. [speaker001:] They all just said transcription. [speaker002:] Yeah, yeah, yeah. [speaker001:] Well, the [disfmarker] the way it worked is it [disfmarker] it was scaled. 
So what they had is, [vocalsound] if it's an easy task it costs twenty four dollars an hour and it will take maybe five or six times real time. And what they said is for the hardest tasks, bad acoustics, meeting settings, it's thirty two dollars an hour and it takes about ten times real time. [speaker002:] I see. [speaker001:] So I think that we can count on that being about what they would do. [speaker002:] Yeah. [speaker005:] Yeah. [speaker001:] It would probably be a little more [speaker002:] Right. [speaker001:] because we're gonna want them to do speaker marking. [speaker004:] A lot of companies I've worked for y the, uh [disfmarker] the person leading the meeting, the executive or whatever, would sort of go around the room and [disfmarker] and mentally calculate h how many dollars per hour this meeting was costing, [speaker001:] So. [speaker004:] right? In university [vocalsound] atmosphere you get a little different thing. But you know, it's a lot like, "he's worth fifty an hour, he's worth [disfmarker]" And so he so here we're thinking, "well let's see, if the meeting goes another hour it's going to be another thousand dollars." You know? It's [disfmarker] [speaker001:] Yep, [speaker004:] So ch [vocalsound] So every everybody ta Talk really fast. [speaker001:] we have to have a short meeting. [speaker005:] That's very interesting. [speaker001:] Stop talking! [speaker002:] Yeah. [speaker004:] Let's get it over with. [speaker005:] Talk [vocalsound] slowly but with few words. [speaker001:] And clearly. [speaker004:] And [speaker002:] That's right. [speaker004:] only talk when you're pointed to. [speaker005:] There you go. [speaker001:] Content words only. [speaker005:] We could have some telegraphic meetings. That might be interesting. [speaker002:] Yeah, it'd be cheap. Cheap to transcribe. [speaker001:] So. But at any rate, so we [disfmarker] we have a ballpark on how much it would cost if we send it out. 
[speaker004:] And we're talking about do doing how many hours worth of meetings? [speaker001:] Thirty or forty. [speaker004:] So thirty or forty thousand dollars. [speaker005:] Yeah. [speaker002:] Well, for ten thousand dollars. [speaker003:] So, meanwhile [disfmarker] [speaker004:] Oh. What [disfmarker] Well, it was thirty times [disfmarker] [speaker002:] Three hundred. [speaker001:] Three hundred dollars an hour. [speaker004:] Oh, I'm sorry, three hundred. Right, I w got an extra factor of three there. [speaker001:] Right. [speaker003:] So it's thirty dollars an hour, essentially, right? [speaker004:] Yeah. [speaker003:] But we can pay a graduate student seven dollars an hour. And the question is what's the difference [disfmarker] [speaker002:] How [disfmarker] how much lower are they? [speaker003:] or ei eight dollars. What [disfmarker] do you know what the going rate is? It's [disfmarker] it's [vocalsound] on the order of eight to ten. [speaker005:] I think uh that would give us a [disfmarker] a good [disfmarker] good estimate. [speaker003:] I think. But I'm not sure. [speaker005:] I'd [disfmarker] I'd say [disfmarker] [speaker002:] Ten. [speaker005:] yeah, I was gonna say eight [disfmarker] you'd say ten? [speaker003:] Let's say ten. [speaker002:] Yeah, give them a break. [speaker004:] The these are not for engineering graduate students, right? [speaker003:] Cuz it's easier. [speaker001:] Right, these are linguistics grad students. [speaker004:] That's right. [speaker003:] Yeah, I [disfmarker] I [disfmarker] I don't [disfmarker] I don't know what the [disfmarker] I don't know what the standard [disfmarker] [speaker001:] Six. [speaker003:] but there is a standard pay scale I just don't know what it is. [speaker005:] Yeah, that's right. [speaker001:] Mm hmm. [speaker005:] That's right. [speaker003:] Um, so that means that even if it takes them thirty times real time it's cheaper to [disfmarker] to do graduate students. 
[speaker005:] And there's another aspect too. [speaker001:] I mean, that's why I said originally, that I couldn't imagine sending it out's gonna be cheaper. [speaker002:] No, it isn't. [speaker005:] The other thing too is that, uh, if they were linguistics they'd be [disfmarker] you know, in terms of like the post editing, i uh [disfmarker] tu uh content wise they might be easier to handle [speaker002:] So. [speaker005:] cuz they might get it more right the first time. [speaker001:] And also we would have control of [disfmarker] I mean, we could give them feedback. Whereas if we do a service it's gonna be limited amount. [speaker005:] Mmm. [speaker002:] Yep, yep. [speaker005:] Good point. [speaker001:] I mean, we can't tell them, you know, for this meeting we really wanna mark stress [speaker002:] Yep. [speaker001:] and for this meeting we want [disfmarker] " [speaker002:] No. [speaker005:] Good point. [speaker001:] And [disfmarker] and they're not gonna provide [disfmarker] they're not gonna provide stress, they're not gonna re provide repairs, they're not gonna [vocalsound] provide [disfmarker] they [disfmarker] they may or may not provide speaker ID. So that we would have to do our own tools to do that. [speaker005:] Mm hmm. [speaker001:] So [disfmarker] [speaker002:] Yeah. [speaker004:] Just hypoth hypothetically assuming that [disfmarker] that we go ahead and ended up using graduate students. [speaker001:] I just [disfmarker] [speaker004:] I who [disfmarker] who's the person in charge? Who's gonna be the Steve here? You? [speaker001:] I hope it's Jane. [speaker005:] Oh, interesting. [speaker001:] Is that alright? [speaker005:] Um, now would this involve some manner of uh, monetary compensation or would I be the voluntary, uh, coordinator of multiple transcribers for checking? [speaker001:] Um, I would imagine there would be some monetary involved but we'd have to talk to Morgan about it. [speaker002:] Yeah, out of [disfmarker] out of Adam's pocket. 
[speaker003:] Yeah. [speaker005:] OK. [speaker004:] You know, it just means you have to stop working for Dave. [speaker005:] Oh, [speaker004:] See? [speaker005:] I don't wanna stop working for Dave. [speaker004:] That's why Dave should have been here. To pr protect his people. [speaker001:] Well, I would like you to do it [speaker005:] Oh, cool. [speaker001:] because you have a lot more experience than I do, [speaker005:] Yeah. Uh huh. [speaker001:] but if [disfmarker] if that's not feasible, I will do it with you as an advisor. [speaker004:] W we'd like you to do it [speaker005:] We'll see. [speaker004:] and we'd like to pay you. Not being Morgan though, it's [disfmarker] [speaker005:] OK. [speaker002:] Yeah. [speaker005:] Oh, I see. [speaker003:] Right. [speaker002:] We'd like to. [speaker005:] Well [disfmarker] [speaker002:] Unfortunately [disfmarker] [speaker004:] Yeah. [speaker001:] Yeah, six dollars an hour. [speaker005:] Yeah. Yeah, I see. [speaker003:] That's a [disfmarker] [speaker005:] OK. [speaker004:] And [disfmarker] and then [disfmarker] [speaker005:] Boy, if I wanted to increase my income I could start doing the transcribing again. [speaker002:] Yeah, that's right. [speaker004:] an an an and be and be sure and say, would you like fries with that when you're thinking about your pay scale. [speaker002:] Yeah. [speaker005:] I see. Good. Yeah, no, that [disfmarker] I [disfmarker] I would be interested in that [disfmarker] in becoming involved in the project in some aspect like that [disfmarker] [speaker001:] OK. More. [speaker005:] more. Yeah. Uh huh. Yeah. [speaker001:] Um, any more on transcript we wanna talk about? [speaker002:] What s so what are you [disfmarker] so you've done some portion of the first meeting. [speaker005:] Yes. [speaker002:] And what's your plan? [speaker005:] Mm hmm. [speaker002:] To carry on doing it? 
[speaker005:] What [disfmarker] Well, you know what I thought was right now we have p So I gave him the proposal for the transcription conventions. He made his, uh, suggestion of improvement. [speaker002:] OK. [speaker005:] The [disfmarker] the [disfmarker] It's a good suggestion. So as far as I'm concerned those transcription conventions are fixed right now. And so my next plan would be [disfmarker] [speaker002:] What [disfmarker] what do they [disfmarker] what do they cover? [speaker005:] They're very minimal. So, [vocalsound] it would be good to [disfmarker] just to summarize that. So, um, [vocalsound] one of them is the idea of how to indicate speaker change, [speaker002:] Yeah. [speaker005:] and this is a way which meshes well with [disfmarker] with, uh, making it [vocalsound] so that, uh, you know, on the [disfmarker] At the [disfmarker] [speaker002:] Yeah. [speaker005:] Boy, it's such a nice interface. When you [disfmarker] when you get the, um [disfmarker] you [disfmarker] you get the speech signal you also get [vocalsound] down beneath it, [vocalsound] an indication of, [vocalsound] uh, if you have two speakers overlapping in a s in a single segment, you see them [vocalsound] one [disfmarker] displayed one above each other. And then at the same time [vocalsound] the top s part of the screen is the actual verbatim thing. You can clip [disfmarker] click on individual utterances and it'll take you immediately to that part of the speech signal, and play it for you. And you can, eh you can work pretty well between those two [disfmarker] these two things. [speaker004:] Is there a limit to the number of speakers? [speaker001:] Um, the user interface only allows two. [speaker002:] Hmm. [speaker001:] And so if [disfmarker] if you're using their interface to specify overlapping speakers you can only do two. But my script can handle any. And their save format can handle any. 
And so, um, using this [disfmarker] the convention that Jane and I have discussed, you can have as many overlapping speakers as you want. [speaker004:] Do y is this a, uh, university project? Th this is the French software, right? [speaker002:] Yeah. [speaker001:] Yeah, [speaker002:] Yeah, yeah, [speaker001:] French. Yeah. [speaker002:] they're academic. [speaker001:] And they're [disfmarker] they've been quite responsive. [speaker004:] eh [speaker001:] I've been exchanging emails on various issues. [speaker002:] Oh, really? [speaker005:] Oh. [speaker004:] Uh, did you ask them to change the interface for more speakers? [speaker001:] Yes, and they said that's on [disfmarker] in [disfmarker] in the works for the next version. [speaker004:] Good. Good. [speaker003:] Oh, so multi multichannels. [speaker001:] Multichannels was also [disfmarker] Well, they said they wanted to do it but that the code is really very organized around single channels. [speaker003:] I see. [speaker001:] So I think that's n unlikely to ha happen. [speaker003:] OK. [speaker004:] Do do you know what they're using it for? Why'd they develop it? [speaker001:] For this exact task? [speaker004:] Are they linguists? [speaker003:] For transcription. It's [disfmarker] [speaker004:] But I mean, are they [disfmarker] are they linguists or are they speech recognition people? [speaker001:] I think they're linguists. [speaker005:] Ho Hmm. [speaker002:] Linguists. [speaker003:] They're [disfmarker] they have some connection to the LDC [speaker002:] Yeah. [speaker003:] cuz the LDC has been advising them on this process, the Linguistic Data Consortium. [speaker004:] Mm hmm. [speaker003:] Um, so [disfmarker] but a apart from that. Yeah. [speaker001:] It's also [disfmarker] All the source is available. So. [speaker003:] Right. [speaker004:] Great. [speaker001:] If you [disfmarker] if you speak TCLTK. [speaker004:] Mm hmm.
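The point about the save format handling more overlapping speakers than the two-speaker GUI can be illustrated with a fragment shaped like Transcriber's .trs XML. Element and attribute names here are from memory of that format and simplified, so treat the fragment as an approximation: a turn lists several speakers in one attribute, and markers inside the turn switch between them.

```python
import xml.etree.ElementTree as ET

# A simplified fragment in the style of Transcriber's .trs save format
# (names approximated from memory of the DTD, not authoritative):
TRS_FRAGMENT = """
<Turn startTime="12.3" endTime="15.1" speaker="spk1 spk2 spk3">
  <Who nb="1"/> so the segmentation problem is
  <Who nb="2"/> right
  <Who nb="3"/> mm hmm
</Turn>
"""

def overlapping_speakers(turn_xml):
    """Return the speaker IDs sharing one turn -- any number of them,
    even though the interface only lets you enter two."""
    turn = ET.fromstring(turn_xml)
    return turn.get("speaker", "").split()
```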
[speaker001:] And they have [disfmarker] they've actually asked if we are willing to do any development and I said, well, maybe. [speaker004:] Good. [speaker003:] Right. [speaker005:] Mm hmm. [speaker001:] So if we want [disfmarker] if we did [disfmarker] if we did something like programmed in a delay, which actually I think is a great idea, um, I'm sure they would want that incorporated back in. [speaker005:] Yeah, I do too. [speaker002:] Mm hmm. [speaker004:] Their pre pre lay. [speaker002:] Pre lay. [speaker005:] Pre lay. [speaker001:] Way. [speaker005:] Well, and they've thought about things. You know, I mean, they [disfmarker] they do have [disfmarker] So you have [disfmarker] when you [disfmarker] when you play it back, um, it's [disfmarker] it is useful to have, uh, a [disfmarker] a break mark to [disfmarker] se segment it. But it wouldn't be strictly necessary cuz you can use the [disfmarker] uh, the tabbed key to toggle [vocalsound] the sound on and off. I mean, it'll stop the s speech you know if you if you press a tab. And, um. And so, uh, that's a nice feature. And then also once you've put a break in then you have the option of [vocalsound] cycling through the unit. You could do it like multiply until you get [comment] crazy and decide to stop cycling through that unit. [speaker004:] Loop it? [speaker005:] Or [disfmarker] or [disfmarker] [speaker004:] Yo you n you know, there's al also the [disfmarker] the user interface that's missing. [speaker005:] or [disfmarker] [speaker004:] It's missing from all of our offices, and that is some sort of analog input for something like this. It's what audio people actually use of course. It's something that wh [vocalsound] when you move your hand further, the sound goes faster past it, like fast forward. You know, like a joy stick or a [disfmarker] uh, you could wire a mouse or trackball to do something like that. [speaker005:] Why, that's [disfmarker] That's not something I wanted to have happen. 
[speaker004:] No, but I'm saying if this is what professionals who actually do this kind of thing for [disfmarker] for [disfmarker] for [disfmarker] m for video or for audio [disfmarker] [vocalsound] where you [disfmarker] you need to do this, [speaker005:] I see. Uh huh. [speaker004:] and so you get very good at sort of jostling back and forth, rather than hitting tab, and backspace, and carriage return, and enter, and things like that. [speaker002:] Mmm. Mmm. [speaker005:] Uh huh. [speaker003:] Yeah. [speaker001:] Yeah, we talked about things like foot pedals and other analog [disfmarker] [speaker004:] Mm hmm. [speaker001:] So [vocalsound] I mean, tho those are things we could do but I [disfmarker] I just don't know how much it's worth doing. [speaker004:] Ye Yeah. [speaker001:] I mean we're just gonna have [disfmarker] [speaker005:] Yeah, [speaker004:] Right. [speaker005:] I [disfmarker] I agree. [speaker003:] Yeah. [speaker005:] They [disfmarker] they have several options. So, uh, you know, I mentioned the looping option. Another option is it'll pause when it reaches the end of the boundary. And then to get to the next boundary you just press tab and it goes on to the next unit. [speaker001:] Hmm. [speaker005:] I mean, it's very nicely thought out. [speaker004:] Cool. [speaker005:] They thought about [disfmarker] [speaker003:] Hmm. [speaker005:] and also it'll [vocalsound] go around the c the, uh, I wanna say cursor but I'm not sure if that's the right thing. [speaker001:] Point, whatever. [speaker005:] Anyway, you can [disfmarker] so they thought about different ways of having windows that you c uh work within, [speaker002:] Mm hmm. [speaker005:] and [disfmarker] But so in terms of the con the conventions, then, [vocalsound] uh, basically, [vocalsound] uh, it's strictly orthographic which means with some w provisions for, uh, w uh, [vocalsound] colloquial forms. 
So if a person said, "cuz" instead of "because" then I put a [disfmarker] [vocalsound] an apostrophe at the beginning of the word and then in [disfmarker] in double ang angle brackets what the full lexical item would be. [speaker002:] Mm hmm. [speaker005:] And this could be [vocalsound] something that was handled by a table or something but I think [vocalsound] to have a convention marking it as a non standard or wha I don't mean standard [disfmarker] but a [disfmarker] a [disfmarker] a non [vocalsound] uh, ortho orthographic, uh, whatever. [speaker002:] Mm hmm. [speaker003:] Mm hmm. [speaker001:] Non canonical. [speaker005:] "Gonna" or "wanna", you know, the same thing. And [disfmarker] and there would be limits to how much refinement you want in indicating something as non standard pres pronunciation. [speaker003:] How are you handling backchannels? [speaker005:] Backchannels? Um, [speaker001:] Comments. [speaker005:] you know [disfmarker] oh, yes, there was some [disfmarker] in my view, when i when you've got it densely overlapping, um, I didn't worry about [disfmarker] I didn't worry about s specific start times. [speaker003:] What do you mean by du [speaker005:] I sort of thought that this is not gonna be [comment] easily processed anyway and [vocalsound] maybe I shouldn't spend too much time getting exactly when the person said [vocalsound] "no", or, you know, uh, i "immediate". [speaker002:] Yeah. [speaker005:] And instead just sort of rendered within this time slot, [vocalsound] there were two people speaking during part of it [speaker002:] Yeah. [speaker005:] and [vocalsound] if you want more detail, figure it out for yourself ", [speaker002:] Mm hmm. [speaker003:] I see. [speaker001:] Well, I think what [disfmarker] w what Eric was talking about was channels other than the direct speech, [speaker005:] was sort of the way I felt [@ @] [disfmarker] [speaker001:] right? 
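The colloquial-form convention just described (apostrophe before the surface form, canonical form in double angle brackets, as in 'cuz <<because>>) lends itself to exactly the table-or-script treatment mentioned. A minimal sketch of such a normalizer, assuming those delimiters:

```python
import re

# Matches the convention described above: 'cuz <<because>>, 'gonna <<going to>>.
MARKED = re.compile(r"'(\S+)\s*<<([^>]+)>>")

def canonicalize(text):
    """Replace each marked colloquial form with its canonical spelling."""
    return MARKED.sub(lambda m: m.group(2), text)

def surface_form(text):
    """Keep what was actually said, dropping the canonical annotation."""
    return MARKED.sub(lambda m: m.group(1), text)
```

Keeping both readings recoverable is the point of the convention: the surface form for acoustic work, the canonical form for anything that needs standard orthography.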
[speaker003:] Well, yeah, what I mean is wh I mean, when somebody says "uh huh" in the middle of, uh, a [@ @] [disfmarker] [speaker001:] Yep. [speaker005:] Uh huh. That happened very seldom. [speaker003:] Oh, cuz I was [disfmarker] I was listening to [disfmarker] Dan was agreeing a lot to things that you were saying as you were talking. [speaker004:] Uh huh. Uh huh. [speaker005:] Oh, well, thank you Dan. [speaker003:] So. [speaker005:] Appreciate it. Well, if it [disfmarker] if there was a word like "right", you know, then I wou I would indicate [vocalsound] that it happened within the same tem time frame [speaker001:] Yeah, there's an overlapping mark. [speaker003:] And [disfmarker] [speaker002:] Yeah. [speaker005:] but wouldn't say exactly when it happened. [speaker004:] I'll be right back. [speaker002:] I transcribed a minute of this stuff [speaker003:] I see. [speaker002:] and there was a lot of overlapping. It was [disfmarker] [speaker005:] A lot of overlapping, yeah. [speaker001:] Well there there's a lot of overlapping at the beginning and end. [speaker002:] Yeah. Yeah. [speaker001:] Huge amounts. [speaker002:] It was at the beginning. [speaker001:] Um, when [disfmarker] when no one i when we're not actually in the meeting, [vocalsound] and we're all sort of separated, and [disfmarker] and doing things. But even during the meeting there's a lot of overlap but it [disfmarker] it's marked pretty clearly. Um, some of the backchannel stuff Jane had some comments [disfmarker] and [disfmarker] but I think a lot of them were because you were at the meeting. And so I think that [disfmarker] that often [disfmarker] [vocalsound] often you can't tell. [speaker005:] Yeah, well that's true. That's another issue. [speaker001:] I mean, Jane had [disfmarker] had comments like uh, to [disfmarker] who [disfmarker] who the person was speaking to. [speaker003:] Yeah. [speaker005:] Yeah. [speaker002:] Mm hmm. 
[speaker005:] Only when it was otherwise gonna be puzzling because he was in the other room talking. [speaker001:] Yeah. Yeah, but someone who, uh, [vocalsound] was just the transcriber wouldn't have known that. [speaker005:] Yeah. That's true. [speaker003:] Right. [speaker001:] Or when Dan said, "I wa I wasn't talking to you". [speaker005:] I know. [speaker004:] So you take a bathroom break in the middle and [disfmarker] and keep your head mount [speaker001:] You have to turn off your mike. [speaker002:] Yeah. [speaker004:] Oh, you do? [speaker005:] Well he was [speaker002:] You don't have to. [speaker005:] so [disfmarker] so he was checking the meter levels and [disfmarker] and we were handling things while he was labeling the [disfmarker] the whatever it was, the PDA? [speaker001:] Mm hmm. [speaker002:] Uh huh. [speaker005:] And [disfmarker] and so he was [disfmarker] in [disfmarker] sort of [disfmarker] you were sort of talking [disfmarker] you know, so I was saying, like [disfmarker] [vocalsound] "and I could label this one left. Right?" And he [disfmarker] and he said, "I don't see anything". And he said [disfmarker] [vocalsound] he said, [vocalsound] "I wasn't talking to you". Or [disfmarker] it wasn't [disfmarker] it didn't sound quite that rude. [speaker001:] But [disfmarker] [speaker005:] But really, no, uh [disfmarker] w you know in the context if you know he can't hear what he's saying [disfmarker] [speaker001:] but when you w when you listen to it [disfmarker] [speaker004:] he he It was a lot funnier if you were there though. [speaker005:] Uh, yeah, I know. Well, you'll see. [speaker001:] Well what [disfmarker] what it [disfmarker] what happens is if you're a transcriber listening to it it sounds like Dan is just being a total [disfmarker] [vocalsound] totally impolite. [speaker005:] You can listen to it. Oh, I thought it was you who was. No, well, but you were [disfmarker] you were asking off the wall questions. 
[speaker001:] Um [disfmarker] But [disfmarker] but if you knew that [disfmarker] that I wasn't actually in the room, and that Dan wasn't talking to me, it [disfmarker] it became OK. [speaker002:] I see. [speaker004:] So th [speaker001:] So. [speaker005:] And that's w that's where I added comments. [speaker003:] Hmm. [speaker005:] The rest of the time I didn't bother with who was talking to who but [disfmarker] but this was unusual circum circumstance. [speaker004:] So this is [disfmarker] this is gonna go on the meeting meeting transcriber bloopers tape, right? [speaker001:] Yes. Right. [speaker005:] Well and part of it was funny, uh [disfmarker] reason was because it was a mixed signal so you couldn't get any clues from volume [vocalsound] that, you know, he was really far away from this conversation. [speaker001:] Stereo. Yeah. [speaker005:] You couldn't do that symmetrically in any case. [speaker002:] No. [speaker001:] Oh. I should rewrite the mix tool to put half the people in one channel and half in the other. I have a [pause] auto gain mixer tool that mixes all the head mounted microphones into one signal [speaker005:] That's a good idea. [speaker004:] Mm hmm. [speaker001:] and that seems to work really well for the uh [pause] transcribers. [speaker004:] Great. [speaker005:] But I thought it would be [disfmarker] you know, I [disfmarker] I didn't wanna add more contextual comments than were needed but that, it seemed to me, clarified that the con what was going on. And, uh [disfmarker] OK, so normalization [disfmarker] [speaker003:] So, s I was just gonna ask, uh, so I just wanted to c sort of finish off the question I had about backchannels, [speaker005:] Yeah. [speaker003:] if that's OK, [speaker002:] Mmm. [speaker005:] OK. [speaker003:] which [disfmarker] which was, so say somebody's talking for a while [speaker005:] Yeah. 
[speaker003:] and somebody goes "mm hmm" in the middle of it, and [disfmarker] and [disfmarker] and what not, does the conversation come out from the [disfmarker] or the person who's speaking for the long time as one segment and then there's this little tiny segment [vocalsound] of this other speaker or does it [disfmarker] does the fact that there's a backchannel split the [disfmarker] the [disfmarker] the [disfmarker] it in two. [speaker005:] OK, my [disfmarker] my focus was to try and maintain conten con content continuity and, uh, to keep it within what he was saying. Like [vocalsound] I wouldn't say breath groups but prosodic or intonational groups as much as possible. So [vocalsound] if someone said "mm hmm" in the middle of a [disfmarker] of someone's, [vocalsound] uh, uh, intonational contour, [vocalsound] I [disfmarker] I indicated it as, like what you just did. [speaker003:] OK. [speaker005:] then I indicated it as a segment which contained [vocalsound] [@ @] [comment] this utterance plus an overlap. [speaker003:] OK. [speaker002:] But that's [disfmarker] but there's only one [disfmarker] there's only one time boundary for both speakers, [speaker005:] Yeah, that's right. [speaker002:] right? [speaker005:] And you know, it could be made more precise than that but I just thought [disfmarker] [speaker003:] I see, I see, OK. [speaker005:] Yeah. [speaker003:] Right. [speaker004:] I think whenever we use these speech words we should always [vocalsound] do the thing like you're talking about, accent, [speaker005:] Oh, I see what you mean. And then [pause] "hesitation". Yeah. OK, and so then, uh, in terms of like words like "uh" and "um" I just wrote them because I figured there's a limited number, and I keep them to a [disfmarker] uh, limited set because it didn't matter if it was "mmm" or "um", [comment] [vocalsound] you know, versus "um". So I just always wrote it as U M. And "uh huh", you know, "UHUH." I mean, like a s set of like five. [speaker002:] OK. 
[speaker005:] But in any case [disfmarker] [vocalsound] I didn't mark those. [speaker003:] Mm hmm. [speaker002:] No. "Uh huh" is "U H H U H." H U H. " [speaker005:] I'd be happy with that. That'd be fine. It'd be good to have that in the [disfmarker] in the conventions, what's to be used. [speaker003:] Huh uh. [speaker001:] I [disfmarker] I did notice that there were some segments that had pauses on the beginning and end. We should probably mark areas that have no speakers as no speaker. [speaker005:] Yeah, that's a fine idea. [speaker001:] Then, so question mark colon is fine for that. [speaker005:] That's a fine idea. Yeah, OK. [speaker004:] Well, what's that mean? [speaker001:] Just say silence. [speaker005:] Yeah. [speaker004:] You mean re [speaker001:] No one's talking. [speaker004:] ye s Oh. Silence all around. [speaker001:] Yep. [speaker004:] Yep. [speaker005:] So I had [disfmarker] [speaker002:] We have to mark those? [speaker005:] I d [speaker002:] Don't they [disfmarker] d can't we just leave them unmarked? [speaker005:] Well, you see, that's possible too. [speaker001:] Well, I wanna leave the marked [disfmarker] I don't want them to be part of another utterance. [speaker002:] OK. [speaker001:] So you just [disfmarker] you need to have the boundary at the start and the end. [speaker002:] Sure. [speaker005:] Mm hmm. Now that's refinement that, uh, maybe it could be handled by part of the [disfmarker] part of the script or something more [disfmarker] [speaker002:] Uh, yeah, it seems like [disfmarker] it seems like the, uh, tran the transcription problem would be very different if we had these [vocalsound] automatic speaker detection turn placing things. Because suddenly [disfmarker] I mean, I don't know, actually it sounds like there might be a problem [vocalsound] putting it into the software if the software only handles two parallel channels. But assuming we can get around that somehow. [speaker005:] Mm hmm. 
Well you were saying, I think it can read [disfmarker] [speaker001:] It can read and write as many as you want, it's just that it [speaker005:] Uh huh. [speaker002:] But what if you wanna edit it? Right? I mean, the point is we're gonna generate [vocalsound] this transcript with five [disfmarker] five tracks in it, but with no words. Someone's gonna have to go in and type in the words. Um, and if there are five [disfmarker] five people speaking at once, [speaker001:] Right, i it's [disfmarker] I didn't explain it well. If we use the [disfmarker] the little [disfmarker] the conventions that Jane has established, I have a script that will convert from that convention to their saved convention. [speaker002:] Oh, yeah. Yes. [speaker005:] Which allows five. [speaker001:] Right. [speaker005:] And it can be m edited after the fact, can't it also? [speaker001:] Yes. [speaker005:] But their [disfmarker] but their format, if you wanted to in indicate the speakers right there instead of doing it through this indirect route, [vocalsound] then i they [disfmarker] a c window comes up and it only allows you to enter two speakers. [speaker002:] Yeah. Right. [speaker004:] But you're saying that by the time you call it back in to [disfmarker] from their saved format it opens up a window with window with five speakers? [speaker005:] So. But. [speaker001:] Right. [speaker004:] Oh! That is sort of [pause] f They didn't quite go the whole [disfmarker] [speaker001:] It's just user interface. So i it's [disfmarker] [speaker004:] Yeah, they didn't go the whole route, did they? [speaker001:] the [disfmarker] the [disfmarker] the whole saved form the saved format and the internal format, all that stuff, handles multiple speakers. [speaker004:] They just [disfmarker] [speaker001:] It's just there's no user interface for specifying multiple [disfmarker] any more than two. [speaker004:] Right. 
So your [disfmarker] your script solves [disfmarker] Doesn't it solve all our problems, [speaker005:] And that [disfmarker] [speaker001:] Yep. [speaker004:] cuz we're always gonna wanna go through this preprocessing [disfmarker] [speaker001:] Yep. [speaker004:] uh, assuming it works. [speaker001:] Yep. [speaker005:] And that works nicely cuz this so quick to enter. So I wouldn't wanna do it through the interface anyway adding which [disfmarker] worry who the speaker was. [speaker001:] Yep. [speaker004:] I see. Right. Good. [speaker005:] And then, uh, let's see what else. Oh, yes, I [disfmarker] I wanted to have [disfmarker] So sometimes a pers I [disfmarker] uh in terms of like the continuity of thought [vocalsound] for transcriptions, it's [disfmarker] i it isn't just words coming out, it's like there's some purpose for an utterance. And [vocalsound] sometimes someone will [vocalsound] do a backchannel in the middle of it but you wanna show that it's continued at a later point. So I have [disfmarker] I have a convention of putting like a dash [vocalsound] arrow just to indicate that this person's utterance continues. And then when it uh, catches back up again then there's an arrow dash, and then you have the opposite direction [vocalsound] to indicate continuation of ones own utterance versus, um, sometimes [vocalsound] we had the situation which is [disfmarker] you know, which you [disfmarker] which you get in conversations, [comment] of someone continuing someone else's utterance, [speaker002:] Mmm. [speaker005:] and in that case I did a tilde arrow versus a arrow tilde, [vocalsound] to indicate that it was continuation but it wasn't [disfmarker] Oh, I guess I did [vocalsound] equal arrow for the [disfmarker] for the own [disfmarker] for yourself things [speaker002:] Mm hmm. [speaker005:] cuz it's [disfmarker] the speakers the same. And then tilde arrow if it was a different [disfmarker] if a different speaker, uh, con continuation. [speaker002:] Mmm. 
[speaker001:] Oh. [speaker005:] But just, you know, the arrows showing continuation of a thought. [speaker002:] Mm hmm. [speaker005:] And then you could track whether it was the same speaker or not by knowing [disfmarker] you know, at the end of this unit you'd know what happened later. And that was like this person continued [speaker002:] Mm hmm. [speaker005:] and you'd be able to [vocalsound] look for the continuation. [speaker002:] But the only time that becomes ambiguous is if you have two speakers. [speaker001:] So [speaker002:] Like, if you [disfmarker] [vocalsound] If you only have one person, if you only have one thought that's continuing across a particular time boundary, you just need [vocalsound] one arrow at each end, and if it's picked up by a different speaker, it's picked up by a different speaker. The time it becomes ambiguous if you have more than one speaker and that [disfmarker] and they sort of swap. I guess if you have more than one thread going, then you [disfmarker] [vocalsound] then you need to know whether they were swapped or not. [speaker005:] Mm hmm. Mm hmm. Mm hmm. [speaker003:] How often does that happen do you think? [speaker002:] Hopefully not very much. [speaker005:] Yeah, I didn't use it very often. [speaker001:] Especially for meetings. [speaker004:] It l ou [speaker001:] I mean, if i if you were just recording someone's day, it would be impossible. You know, if you were trying to do a remembrance agent. But I think for meetings it's probably alright. [speaker003:] Hmm. [speaker001:] But, a lot of these issues, I think that for [disfmarker] uh, from my point of view, where I just wanna do speech recognition and information retrieval, it doesn't really matter. [speaker002:] Sure. I know. [speaker001:] But other people have other interests. [speaker002:] But it [disfmarker] it does feel [disfmarker] it does feel like it's really in there. [speaker001:] So. 
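The continuation conventions Jane describes (one marker pair for a speaker resuming their own utterance, another for a different speaker picking up the thought) could be paired up mechanically along these lines. The `=>` and `~>` glyphs below are hypothetical stand-ins for the equal-arrow and tilde-arrow marks in the transcript, and the function is a sketch, not part of any existing tool.

```python
def link_continuations(segments):
    """Pair continuation markers across transcript segments.

    segments: list of (speaker, text) tuples in time order.  A segment
    ending in "=>" or "~>" opens a continuation; the next segment that
    begins with one of those markers closes it.  Returns a list of
    (open_index, close_index, kind) tuples, where kind is "same" if the
    same speaker resumed and "other" if a different speaker picked up.
    """
    links = []
    open_idx = None
    for j, (spk, text) in enumerate(segments):
        if text.startswith("=>") or text.startswith("~>"):
            if open_idx is not None:
                kind = "same" if segments[open_idx][0] == spk else "other"
                links.append((open_idx, j, kind))
                open_idx = None
        if text.endswith("=>") or text.endswith("~>"):
            open_idx = j
    return links
```

As the discussion notes, this simple one-open-thread scheme only becomes ambiguous when two continuations are in flight at once and swap, which is exactly the case the distinct same-speaker/different-speaker marks are meant to disambiguate.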
[speaker002:] I [disfmarker] you know I did this [disfmarker] I did this transcription and I marked that, I marked it with ellipsis because it seemed like there was a difference. It's something you wanted to indicate that it [disfmarker] that I [disfmarker] this was the end of the phrase, this was the end of [vocalsound] that particular transcript, but it was continued later. And I picked up with an ellipsis. [speaker001:] Right. [speaker005:] Excellent. Yeah. Yeah. [speaker002:] I didn't have the equal, not equal thing. [speaker005:] Well that's [disfmarker] you know, I mean [disfmarker] I [disfmarker] that's why I didn't [comment] I didn't do it n [vocalsound] I mean, that's why I thought about it, and [disfmarker] and re ev [speaker002:] Yeah, yeah. [speaker005:] and it didn't do [disfmarker] I didn't do it in ten times the [disfmarker] [vocalsound] the time. Yeah. [speaker001:] Well, so anyway, are we interested then in writing tools to try to generate any of this stuff automatically? Is that something you want to do, Dan? [speaker002:] No. But it's something [@ @] that I feel we definitely ought to do. [speaker001:] No. [speaker005:] I also wanted to ask you if you have a time estimate on the part that you transcribed. Do you have a sense of how long [disfmarker] [speaker002:] Yeah, it took me half an hour to transcribe a minute, but I didn't have any [disfmarker] I didn't even have a [disfmarker] [speaker005:] OK. [speaker002:] I was trying to get Transcriber to run but I couldn't. So I was doing it by typ typing into a text file and trying to fit [disfmarker] [speaker005:] OK. [speaker002:] It was horrible. [speaker005:] OK. [speaker004:] So thirty to one's what you got? [speaker002:] Mm hmm. [speaker004:] So that's a new upper limit? [speaker002:] Yeah. 
[speaker005:] Well, I mean, that's [disfmarker] that's because you didn't have the segmentation help and all the other [disfmarker] [speaker002:] Is it [disfmarker] [speaker001:] But I think for a first try that's about right. [speaker003:] So [disfmarker] so if we hired a who if we hired a whole bunch of Dan's [disfmarker] [speaker004:] That's right. [speaker005:] Yeah. [speaker004:] a [speaker002:] It was actually [disfmarker] it was quite [disfmarker] it was a t [speaker001:] If we [pause] hire an infinite number of Dan's [disfmarker] [speaker005:] And there's always a warm up thing of [disfmarker] [speaker004:] It'd b a a [speaker002:] it w [speaker001:] Are we gonna run out of disk space by the way? [speaker002:] Yeah. No. [speaker001:] OK, good. [speaker005:] OK. [speaker004:] d Doesn't it beep in the other room when you're out of disk space? [speaker003:] So [disfmarker] Is there [disfmarker] [speaker001:] No. [speaker003:] Maybe we should s consider also, um, starting to build up a web site around all of these things. [speaker002:] Web site! That's great! [speaker003:] I know. [speaker001:] Dan's sort of already started. [speaker002:] We could have like business to business E commerce as well! [speaker003:] That's right. No, but I'm it would be interesting [disfmarker] it would be interesting to see [disfmarker] [speaker001:] Can we sell banner ads? [speaker004:] Get [disfmarker] get paid for click throughs? [speaker002:] Yeah. [speaker001:] What a good idea, that's how we could pay for the transcription. [speaker003:] I want to introduce [disfmarker] I [disfmarker] I want to introduce the word "snot head" into the conversation at this point. [speaker002:] We can have [disfmarker] [speaker004:] You wanna word that won't be recognized? [speaker003:] You see, cuz [disfmarker] uh, cuz [disfmarker] Exactly. [speaker005:] Oh, I don't think so. [speaker003:] Um. No. The r [speaker001:] Hey, what about me? [speaker003:] w What [disfmarker] OK. 
[speaker005:] You're the one who raised the issue. [speaker003:] No. Alright, see here's [disfmarker] here's [disfmarker] here's my thought behind it which is that, uh, the [disfmarker] the stuff that you've been describing, Jane, I gu one has to, [vocalsound] of course [vocalsound] indicate, [comment] [vocalsound] um, i is very interesting, [speaker005:] Alright. [speaker003:] and I I'd like to be able to [disfmarker] to pore through, you know, the [disfmarker] the types of tr conventions that you've come up with and stuff like that. [speaker002:] Yeah, yeah, yeah. [speaker003:] So I would like to see that kind of stuff on the web. [speaker005:] OK, now, w the alternative to a web site would be to put it in Doctor speech. [speaker002:] Yes. Yes. [speaker005:] Cuz [disfmarker] cuz what I have is a soft link to my transcription [vocalsound] that I have on my account [speaker003:] Either's fine. [speaker002:] We c [speaker005:] but it doesn't matter. [speaker001:] We can do it all. [speaker002:] we can do it all! [speaker005:] OK. [speaker002:] We can write [disfmarker] [speaker005:] Web site's nice. [speaker002:] Oh. Yeah. [speaker005:] Then you have to t you have to do an HT access. [speaker004:] Web site's what? [speaker002:] We could actually [disfmarker] maybe we could use the TCL plug in. Oh, man. [speaker005:] Ooo! He's committed himself to something. [speaker003:] Ow. See he said the word TCL and [disfmarker] and that's [disfmarker] [speaker004:] But he does such a good job of it. He should be allowed to [disfmarker] to, you know, w do it. [speaker005:] I know, I know. [speaker002:] I know, but that [disfmarker] but, I [disfmarker] Right. But I should be allowed to but [disfmarker] [speaker004:] If you just did a crappy job, [vocalsound] no nobody would want you to do it. [speaker002:] I sh I shouldn't be allowed to by m by my own [disfmarker] [vocalsound] by my [disfmarker] according to my own priorities. Alright. Let's look at it anyway. 
So definitely we should [disfmarker] we should have some kind of access to the data. [speaker003:] Yeah. [speaker001:] And we have [disfmarker] we have quite a disparate number of web and other sorts of documents on this project sort of spread around. I have several and Dan has a few, [speaker002:] Yes. [speaker005:] Ah! [speaker003:] Right, so we can add in links and stuff like that to other things. [speaker001:] and [disfmarker] [speaker005:] Nice. [speaker001:] Yep. [speaker002:] Well, yeah. [speaker003:] The [disfmarker] [speaker002:] Well so then th [speaker001:] Try [disfmarker] try to s consolidate. I mean, who wants to do that though? [speaker002:] the other side is, yeah. [speaker003:] Uh, right. [speaker001:] No one wants to do that. [speaker005:] Yeah. [speaker001:] So. [speaker002:] Right, that's the problem. [speaker004:] Why? What [disfmarker] what's [disfmarker] what's the issue? [speaker003:] Well, we could put [disfmarker] we could put sort of a disorganized sort of group gestalt [disfmarker] [speaker002:] No one owns the project. [speaker004:] No one what? [speaker002:] No one owns the project. [speaker001:] Yeah, I own the project but I don't wanna do it. [speaker002:] No one wants to own the project. [speaker003:] Right. [speaker004:] W well [pause] Do [disfmarker] But [disfmarker] [speaker001:] It's mine! All mine! [speaker002:] Well then you have to do the web site. [speaker004:] But [disfmarker] [speaker002:] You know, it's like, it's that simple. [speaker001:] "Wah hah hah hah hah hah." [speaker004:] b but [disfmarker] but [disfmarker] but what are you [disfmarker] what are you talking about for web site hacking? [speaker002:] No [disfmarker] [speaker004:] You're talking about writing HTML, right? [speaker002:] Yeah. [speaker001:] Yeah, I [disfmarker] I'm talking about putting together all the data in a form that [disfmarker] that is legible, and pleasant to read, and up to date, and et cetera, et cetera, et cetera. 
[speaker004:] But, is it against the law to actually use a tool to help your job go easier? [speaker001:] Absolutely. It's [disfmarker] it's absolutely against the law to use a tool. [speaker004:] You y [speaker001:] I haven't found any tools that I like. It's just as easy to use [disfmarker] to edit the raw HTML as anything else. [speaker004:] No kidding? [speaker002:] That's obviously not true, but you have [disfmarker] [speaker001:] It's obviously not true. [speaker004:] No, it it it's obviously true that he hasn't found any he likes. [speaker002:] Right. [speaker004:] The question is what is [disfmarker] what's he looked at. [speaker002:] That's true. [speaker005:] Which one do you use Jim? [speaker004:] I use something called Trellix. [speaker005:] Oh, that's right. I remember. Yeah. [speaker004:] And it [disfmarker] [speaker005:] Which produces also site maps. [speaker004:] it's it it's very powerful. [speaker001:] Now, I guess if I were [disfmarker] if I were doing more powerful [disfmarker] excuse me [disfmarker] more complex web sites I might want to. But most of the web sites I do aren't that complex. [speaker005:] Well, would this be to document it also for outside people or mainly for in house use? [speaker001:] But. [speaker003:] No, I think in [speaker001:] I think both. [speaker003:] I think mostly internal. [speaker001:] Mostly in house. [speaker002:] That's right. [speaker005:] OK. [speaker004:] Well, yeah, but what does internal mean? [speaker002:] No, both. [speaker004:] I mean, you're leaving. People at UW wanna look at it. I mean, it's [disfmarker] it's internal [comment] until [disfmarker] [speaker003:] Right. Internal to the project. [speaker004:] I see. [speaker005:] We could do an HT access which would accommodate those things. [speaker002:] I [disfmarker] I [disfmarker] I [disfmarker] I [disfmarker] [speaker001:] OK, well, send me links and I wi send me pointers, rather, and I'll put it together. 
[speaker002:] I'm not [disfmarker] o [speaker005:] Wonderful. [speaker002:] OK. I'm not sure how [disfmarker] how [vocalsound] important that distinction is. I don't think [vocalsound] we should say, [vocalsound] "oh, it's internal therefore we don't have to make it very good". I mean, you can say [vocalsound] "oh [disfmarker] oh, it's internal [speaker003:] No. No. [speaker002:] therefore we can put data in it that we don't [disfmarker] we don't have to worry about releasing". But I think the point is to try and [vocalsound] be coherent and make it a nice presentation. [speaker004:] Right. I agree. [speaker005:] Yeah, it is true, that is [disfmarker] it benefits to [disfmarker] [speaker004:] Cuz you're gonna have to wor do the work sooner or later. [speaker005:] That's right. [speaker002:] Yeah. [speaker005:] I mean, it's the early on. [speaker004:] Even if it's just writing things up. [speaker001:] Yep. [speaker004:] You know? [speaker005:] It's a great idea. [speaker001:] OK, um, let's move on to electronics. [speaker002:] Ah. [speaker004:] d we [disfmarker] we out of tape [disfmarker] out of disk? [speaker002:] Great. No, we're doing [disfmarker] we're doing great. [speaker004:] I [disfmarker] I was looking for the actual box I plan to use, uh, but I [disfmarker] c all I could [disfmarker] I couldn't find it at the local store. But this is [vocalsound] the [disfmarker] the technology. It's actually a little bit thinner than this. And it's two by two, by one, and it would fit right under the [disfmarker] right under th the the [disfmarker] the [disfmarker] the lip, [speaker001:] Yeah, does everyone know about the lip on the table? [speaker004:] yeah. [speaker001:] It's great. [speaker004:] There's a lip in these tables. [speaker005:] Nice. [speaker004:] And, it oc I p especially brought the bottom along to try and generate some frequencies that you may not already have recorded. [speaker001:] Clink!
[speaker004:] Let's see [disfmarker] [vocalsound] see what it does to the [disfmarker] [speaker001:] Clink! [speaker004:] But this was the uh [disfmarker] just [disfmarker] just to review, and I also brought this [comment] along rather than the projector so we can put these on the table, and sort of w push them around. [speaker001:] And [disfmarker] and crinkle them and [disfmarker] [speaker005:] And th "that" being a diagram. [speaker002:] What? What? [speaker004:] That [disfmarker] that's the six tables that we're looking at. These six tables here, [vocalsound] with [disfmarker] with little boxes sort of, uh, in the middle here. [speaker005:] OK. [speaker002:] I see. [speaker004:] Which es would [disfmarker] I mean, the [disfmarker] the boxes are pretty much out of the way anyway. I'll I'll show you the [disfmarker] [vocalsound] the cro this is the table cross section. I don't know if people realize what they're looking at. [speaker002:] You trying to [pause] screw up the m the microphones? [speaker001:] Yes. [speaker002:] I mean th [speaker001:] He is. Absolutely. [speaker004:] Well why not? I mean, cuz this is what's gonna happen. You got plenty of data. I won't come to your next meeting. And [disfmarker] and [disfmarker] and you So this is the box's [disfmarker] [speaker001:] Get your paper off my PDA! [speaker005:] Yeah, [speaker002:] Yeah. [speaker005:] let [disfmarker] let the record show that this is exhibit two B. [speaker004:] That's right. "Or not to be". Yeah, yeah. [speaker001:] Yeah. [speaker004:] Uh, the box, uh [disfmarker] there's a half inch lip here. The box is an inch thick so it hangs down a half an inch. And so the [disfmarker] the two [vocalsound] head set jacks would be in the front and then the little LED to indicate that that box is live. The [disfmarker] the important issue about the LED is the fact that we're talking about eight of these total, which would be sixteen channels. 
And, uh, even though we have sixteen channels back at the capture, they're not all gonna be used for this. [speaker002:] Hmm. [speaker004:] So there'd be a subset of them used for [disfmarker] obviously j just use the ones at this end for [disfmarker] [vocalsound] for this many. So [disfmarker] Excuse me. you'd like a [disfmarker] a way to tell whether your box is live, so the LED wouldn't be on. [speaker002:] Right. All the lights. [speaker004:] So if you're plugged in it doesn't work and the LED is off that's [disfmarker] that's a tip off. [speaker005:] That's good. [speaker004:] And then the, uh [disfmarker] would wire the [disfmarker] all of the cables in a [disfmarker] in a bundle come through here and o obviously collect these cables at the same time. Uh, so this [disfmarker] this notion of putting down the P Z Ms [vocalsound] and taking them away would somehow have to be turned into leaving them on the table or [disfmarker] or [disfmarker] [speaker001:] Right. Well, we wanna do that definitely. [speaker004:] Right. Right. [speaker001:] So. [speaker004:] And so the [disfmarker] you [disfmarker] we just epoxy them down or something. Big screw into the table. [speaker002:] Velcro. [speaker004:] Uh, and even though there's eight cables they're not really very big around so my model is to get a [disfmarker] a [disfmarker] a p piece of [disfmarker] [speaker001:] Sleeve. [speaker004:] yeah, that [disfmarker] that stuff that people put with the little [disfmarker] you slip the wires into that's [vocalsound] sort of shaped like that cross section. [speaker001:] Oh. [speaker004:] Yeah. [speaker001:] OK, not just sleeve them all? [speaker004:] I'm [disfmarker] I'm r a I'm going up and then I'm going down. [speaker002:] No. [speaker001:] And leave them loose? [speaker005:] That looks like a semi circle. [speaker002:] Yeah. It's like a [disfmarker] it's a sleeping policeman. [speaker005:] Sleeping pol [speaker002:] Speed bump! [speaker001:] Whoo! 
[speaker002:] Speed bump. [speaker001:] Speed [speaker004:] Yeah, it's like a speed bum [vocalsound] An [speaker005:] Speed bump. [speaker001:] A "sleeping policeman"! [speaker005:] That's good. There we go s [speaker004:] And they're ac they're actually ext extruded from plastic. [speaker001:] Cool. [speaker003:] What is [disfmarker] [speaker004:] They sorta look like this. [speaker001:] Oh. [speaker003:] What does that mean? [speaker002:] That's the s that's British for speed bump, [speaker003:] Is it a speed bump? [speaker004:] So that the wires go through here. [speaker002:] yeah. [speaker005:] Oh, is that right? [speaker003:] Wow. [speaker005:] I never heard that. [speaker002:] Yeah. [speaker005:] Ah! [speaker004:] So. [speaker001:] That's really cruel. OK, [speaker004:] s [speaker001:] so that [disfmarker] [speaker004:] So it would c basically go on the diagonal here. [speaker003:] It could go either way. [speaker001:] So why do we have sixteen channels instead of like some fewer number? [speaker002:] Yeah. [speaker003:] I guess. [speaker004:] Uh, because the [disfmarker] [speaker002:] How else are you gonna distribute them around the tables? [speaker004:] Because they're there. [speaker001:] Well, OK, let me rephrase that. Why two each? [speaker002:] Oh, because then you don't have to just have one each. So that if t if you have two people sitting next to each other they can actually go into the same box. [speaker004:] Yeah. And to [disfmarker] See, thi this is really the way people sit on this table. [speaker001:] OK. [speaker004:] Th [speaker005:] Mm hmm. [speaker001:] OK. [speaker004:] Uh. Dot, dot, dot. [speaker005:] Which means two at each station. [speaker004:] Well that [disfmarker] that's the way people sit. That's how many chairs are in the room. [speaker005:] Yeah. Yeah, I'm just saying that for the recording. [speaker001:] Alright. [speaker004:] Yeah. Right. [speaker001:] Right. OK. 
[speaker004:] And certainly you could do a thing where all sixteen were plugged in. Uh if [disfmarker] if you ha if you had nothing else. [speaker001:] But then none of these. Right. N none of these and no P Z Ms then. [speaker004:] Yeah. Right. Right. I agree. [speaker002:] Only if you had [disfmarker] Well it depends on this box, right? [speaker004:] Oh, true enough. And actually, at the m my plan is to only bring eight wires out of this box. [speaker002:] Exactly. [speaker004:] This [disfmarker] this box [disfmarker] [speaker001:] Oh, I didn't understand [disfmarker] [speaker005:] That being the wiring box. [speaker004:] Thi thi thi this box is a one off deal. [speaker001:] Oh, I see, I see. [speaker004:] Uh. And, uh, it's function is to s to, uh, essentially a wire converter to go from these little blue wires to these black wires, plus supply power to the microphones cuz the [disfmarker] the [disfmarker] he the, uh, cheap head mounteds all require low voltage. [speaker001:] So [disfmarker] so you'd imagine some sort of [disfmarker] in some sort of patch panel on top to figure out what the mapping was between each d of these two and each of those one or what? [speaker004:] Well I w I I [speaker002:] Hmm! [speaker004:] the simplest thing I could imagine, i which is really, really simple is to [disfmarker] quite literally that these things plug in. And there's a [disfmarker] there's a plug on the end of each of these [disfmarker] these, uh, ei eight cables. [speaker005:] What [disfmarker] OK. [speaker002:] Yeah. [speaker005:] Each of the blue wires? [speaker002:] But there are only four. [speaker005:] Mm hmm. [speaker004:] An and there's only [disfmarker] there's only four slots that are [disfmarker] you know, in [disfmarker] in the first version or the version we're planning to [disfmarker] to build. [speaker002:] Yeah. [speaker001:] Mm hmm. [speaker004:] So [vocalsound] that [disfmarker] that was the whole issue with the LED, [speaker002:] Yeah. 
[speaker004:] that you plug it in, the LED comes on, and [disfmarker] and [disfmarker] and you're live. [speaker001:] Oh, then it comes on. I see, I see. OK, good. [speaker004:] Now the [disfmarker] the [disfmarker] [vocalsound] the subtle issue here is that [disfmarker] tha I [disfmarker] I haven't really figured out a solution for this. So, we it'll have to be convention. What happens if somebody unplugs this because they plug in more of something else? [speaker001:] Mm hmm. [speaker004:] Well the [disfmarker] there's no clever way to let the up stream guys know that you're really not being powered. So [disfmarker] th there will be a certain amount of looking at cables has to be done if people, uh, rewire things. [speaker001:] Right. [speaker004:] But. [speaker002:] Yeah, I mean, we [disfmarker] I had that last time. But uh there are actually [disfmarker] that you know, there's an extra [disfmarker] there's a mix out on the radio receiver? [speaker004:] Mm hmm. [speaker002:] So there are actually six [vocalsound] XLR outs on the back of the radio receiver and only five cables going in, I had the wrong five, so I ended up [vocalsound] not recording one of the channels and recording the mix. [speaker004:] How interesting. D did you do any recognition on the mix [disfmarker] mix out? [speaker005:] Hmm. [speaker002:] No. [speaker004:] Wonder whether it works any [disfmarker] [speaker002:] But I subtracted the four that I did have from the mix and got a pretty good approximation of the [@ @]. [speaker004:] Got the fifth? [speaker001:] You g [speaker005:] Oh, how great. [speaker004:] Cool. [speaker002:] Yeah. [speaker001:] And did it work? [speaker004:] Is it [disfmarker] is [disfmarker] [speaker001:] Did it sound good? [speaker002:] It's not bad. It's not bad, yeah. [speaker004:] Ain't science wonderful? [speaker001:] Wow. [speaker005:] That's amazing. [speaker002:] Yeah. [speaker004:] So [disfmarker] [speaker001:] So what's the schedule on these things? 
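The recovery trick speaker002 describes just above (subtracting the four recorded channels from the mix-out to approximate the missing fifth) follows directly from the mix being, roughly, the sum of all channels. A minimal sketch, assuming unity gains and sample-aligned tracks, which the real auto-gain mixer only approximates:

```python
def recover_missing(mix, known):
    """Approximate a missing channel from a summed mix.

    mix: list of samples from the mix-out (assumed to be the sum of
    all channels at unity gain).
    known: list of sample lists for the channels that were recorded.
    Returns the residual, i.e. the estimate of the unrecorded channel.
    """
    return [m - sum(k[i] for k in known) for i, m in enumerate(mix)]
```

In practice the result is only "not bad", as noted in the discussion: per-channel gain differences and any misalignment between the mix and the individual recordings leave residual cross-talk in the recovered signal.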
[speaker005:] Wow. [speaker002:] But, you always [disfmarker] [speaker004:] Uh, well I was wrestling with th with literally the w number of connectors in the cable and the [disfmarker] [vocalsound] the, uh, [vocalsound] powering system. And I [disfmarker] I was gonna do this very clever phantom power and I decided a couple days ago not to do it. [speaker002:] Hmm! [speaker004:] So I'm ready to build it. Which is to say, uh, the neighborhood of a week to get the circuit board done. [speaker001:] Mm hmm. So I think the other thing I'd like to do is, do something about the set up [speaker002:] See [speaker001:] so that it's a little more presentable and organized. [speaker004:] I agree. [speaker001:] And I'm [disfmarker] I'm just not sure what that is. I mean, some sort of cabinet. [speaker004:] Well I can build a cabinet. The [disfmarker] the difficulty for this kind of project is the intellectual capital to design the cabinet. [speaker001:] Mm hmm. [speaker004:] In other words, to figure out ex exactly what the right thing is. That cabinet can [disfmarker] can go away. We can use that for [disfmarker] for uh kindling or something. But if you can imagine what the right form factor is. Dan Dan and I have sort of gone around on this, and we were thinking about something that opened up in the top [vocalsound] to allow access to the mixer for example. [speaker001:] Mm hmm. [speaker004:] But there's these things sticking out of the mixer which are kind of a pain, so you end up with this thing that [disfmarker] if if you stuck the mixer up here and the top opened, it'd be [disfmarker] it'd be fine. You wouldn't necessarily [disfmarker] Well, you s understand what I'm [disfmarker] the [disfmarker] the [disfmarker] you can [disfmarker] you can start [disfmarker] start s sketching it out, [speaker002:] Yeah. [speaker001:] Yeah, I understand. So. 
[speaker004:] and I can certainly build it out of oak no problem, would it [disfmarker] you know, arb you know, arbitrarily amount of [disfmarker] [speaker001:] I need a desk at home too, alright? Is that gonna be a better solution than just going out and buy one? [speaker004:] Well, the [disfmarker] as we found out with the [disfmarker] the thing that, uh, Jeff bought a long time ago to hold our stereo system [vocalsound] the stuff you buy is total crap. And I mean this is something you buy. [speaker001:] Mm hmm. [speaker004:] And [disfmarker] and [disfmarker] [speaker001:] And it's total crap. [speaker004:] It's total crap. Well, it's useless for this function. Works fine for holding a Kleenex, but it [disfmarker] [speaker001:] Right, Kleenex and telephones. [speaker004:] Right. [speaker001:] Um, so yeah, I g I guess it's just a question, is that something you wanna spend your time on? [speaker004:] Oh, I [disfmarker] I'm paid for. [speaker001:] OK, great. [speaker004:] I have no problem. No, but w certainly one of the issues is [disfmarker] is the, uh [disfmarker] is security. [speaker001:] Hmm? [speaker004:] I mean, we've been [disfmarker] been [disfmarker] been lax and lucky. [speaker001:] Mm hmm. [speaker002:] Yeah. [speaker001:] Lax. [speaker002:] Yep. [speaker004:] Really lucky with these things. But they're not ours, so [disfmarker] the, uh [disfmarker] the flat panels. [speaker002:] Yeah. Oh, yeah! [speaker001:] I'm telling you, I'm just gonna cart one of them away if they stay there much longer. [speaker005:] Wow. [speaker004:] Uh, let the record show at [disfmarker] uh [vocalsound] at f four thirty five [vocalsound] Adam Janin says [disfmarker] [speaker002:] Well w yeah, exactly. [speaker005:] Tempting. Tempting. [speaker002:] We'll know [disfmarker] we'll know to come after. [speaker005:] Yeah. [speaker001:] So, um, j uh, then the other question is do we wanna try to do a user interface that's available out here? [speaker002:] Sorry? 
[speaker005:] Use user interface [disfmarker] [speaker004:] Slipped [disfmarker] almost slipped it by Dan. [speaker001:] A user interface. I mean, do we wanna try to get a monitor? [speaker002:] Oh! [speaker001:] Or just something. [speaker005:] Oh. [speaker002:] Sure. Well of course we do. [speaker001:] And how do we want to do that? [speaker005:] You mean like see [disfmarker] see meter readings, from [disfmarker] while sitting here. [speaker001:] J just so we see something. [speaker005:] Wow. [speaker004:] How about use the thing that um ACIRI's doing. [speaker002:] Yeah. [speaker004:] Which is to say just laptop with a wireless. [speaker002:] Yeah, yeah, [speaker005:] Oh. [speaker002:] yeah. Sure. Which we'll borrow from them, [vocalsound] when we need it. [speaker004:] What's wrong with yours? If we bought you a [disfmarker] a [disfmarker] [speaker002:] Oh, a Applecard. Sure. Right. Yeah, you could use my machine. [speaker003:] Well [disfmarker] [speaker004:] What? [speaker001:] I have an IRAM machine I've borrowed and we can use it. [speaker002:] I [disfmarker] or the [disfmarker] [speaker004:] N no, I'm [disfmarker] I'm [disfmarker] I'm serious. Does [disfmarker] does the wireless thing work on your [disfmarker] [speaker001:] Wait, isn't that an ethernet connection or is that a phone? [speaker002:] Uh, that's an ethernet connection. [speaker001:] Well [disfmarker] [speaker004:] Yeah [disfmarker] no [disfmarker] no I'm a I [disfmarker] I [disfmarker] I ain't joking here. [speaker002:] It's going next door. [speaker001:] We jus [speaker004:] I'm serious, that [disfmarker] that [disfmarker] [vocalsound] it [disfmarker] it [disfmarker] [speaker002:] Yeah. No, no, absolutely, that's the right way to do it. T to have it uh, just [disfmarker] [speaker004:] It's very convenient especially if Dan happens to be sitting at that end of the table to not have to run down here and [disfmarker] and look in the thing every so often, [speaker002:] Yeah. 
[speaker004:] but just have the [disfmarker] [speaker002:] And given [disfmarker] given that we've got a wireless [disfmarker] that we've got a [disfmarker] we got the field. [speaker004:] It's right there. [speaker002:] Right. [speaker004:] Right? The antenna's right there, [speaker002:] Yeah. [speaker001:] Right. [speaker004:] right outside the [disfmarker] [speaker002:] Yeah. I don't know. [speaker004:] Y I mean, we need [disfmarker] obviously need to clear this with ACIRI but, uh, how tough can that be? There [disfmarker] it you'd [disfmarker] all you need's web access, isn't it? [speaker002:] W we don't need X access [speaker004:] In [disfmarker] in theory. [speaker002:] but I mean that's fine. That's [disfmarker] that's what it does, [speaker004:] OK, great, [speaker002:] yeah. [speaker004:] great. [speaker002:] So [disfmarker] [speaker001:] Um, right, so it's just a question of getting a laptop and a wireless modem. [speaker002:] With a [disfmarker] with a [disfmarker] with a [disfmarker] w [speaker004:] No, and he [disfmarker] he had, reque [@ @] [disfmarker] my [disfmarker] my proposal is you have a laptop. [speaker002:] No. [speaker004:] You don't? [speaker002:] Yeah. I do! Yeah. Yeah, yeah. [speaker004:] If [disfmarker] if we bought you the thing would you mind using it with i the [disfmarker] the [disfmarker] [speaker002:] No, I would love to but I'm not sure if my laptop is compatible with the wave LAN thing they're using. [speaker004:] Really? [speaker001:] To [pause] Mac. [speaker004:] Your new one? [speaker002:] Well Apple has their own thing, right? [speaker003:] He's [disfmarker] [speaker004:] I'm sorry? [speaker001:] Airport. [speaker002:] Apple has their own thing. And [disfmarker] [speaker004:] I thought it just came through a serial p or an Ethernet port. [speaker002:] Yeah, I think what [disfmarker] I think you [disfmarker] I think it just plug plugs in a PC card, so you could probably make it run with that, but. 
[speaker001:] The question is, is there an Apple driver? [speaker004:] I [disfmarker] e [speaker002:] Yeah, I'm sure. I imagine there is. But [disfmarker] uh anyway [speaker004:] But the two t [speaker002:] there are [disfmarker] there are abs there are a bunch of machines at ICSI that have those cards and so I think if w if it doesn't [disfmarker] we should be able to find a machine that does that. I [disfmarker] I mean I know that doesn't [disfmarker] don't [disfmarker] don't the important people have those little blue VAIOs that [disfmarker] [speaker004:] Well, uh, b that [disfmarker] to me that's a whole nother. That's a whole nother issue. [speaker005:] Hmm. Hmm. [speaker002:] Yeah. [speaker004:] The [disfmarker] the idea of con convincing them that we should use their network i is fairly straight forward. [speaker002:] Yeah. Yeah. [speaker004:] The idea of being able to walk into their office and say, "oh, can I borrow your machine for a while", is [disfmarker] is [disfmarker] is a non starter. [speaker002:] Yeah. [speaker004:] That [disfmarker] I [disfmarker] I don't think that's gonna work. [speaker002:] I see. [speaker004:] So, I mean, either [disfmarker] either we figure out how to use a machine somebody already [disfmarker] in the group already owns, [vocalsound] a a and the idea is that if it's it perk, you know, it's an advantage not [disfmarker] not a disadvan [comment] [vocalsound] or else we [disfmarker] we literally buy a machine e exactly for that purpose. [speaker002:] Yeah. Yeah, yeah. Absolutely. Yeah. [speaker004:] Certainly it solves a lot of the problems with leaving a monitor out here all the time. I [disfmarker] I [disfmarker] I [disfmarker] I'm [disfmarker] I'm not a big fan of doing things to the room that make the room less attractive for other people, [speaker002:] Yeah. [speaker001:] Right. [speaker004:] right? Which is part of the reason for getting all this stuff out of the way [speaker001:] Yeah. 
[speaker004:] and [disfmarker] [vocalsound] and, so a monitor sitting here all the time you know people are gonna walk up to it and go, "how come I can't get, you know, Pong on this" or, whatev [speaker001:] Mm hmm. Right. [speaker003:] Well [disfmarker] [speaker001:] I've [disfmarker] I've borrowed the IRAM VAIO Sony thingy, [speaker004:] Right. [speaker002:] Yeah. [speaker001:] and I don't think they're ever gonna want it back. [speaker002:] You're kidding! [speaker004:] Well, the next conference they will. [speaker001:] So. Sure. [speaker004:] Yeah. [speaker001:] But that does mean [disfmarker] so we can use that as well. [speaker004:] Well, uh, the [disfmarker] certainly, u you should give it a shot first See whether you you can get compatible stuff. [speaker002:] Mm hmm. [speaker004:] Uh, ask them what it costs. Ask them if they have an extra one. Who knows, they might have an extra [vocalsound] hardware s [speaker002:] I'd trade them a flat panel display for it. Yeah. [speaker004:] Good. [speaker003:] What is the, um, projector supposed to be hooked up to? [speaker004:] Uh, the, uh [disfmarker] Tsk. It's gonna be hooked up to all sorts of junk. There's gonna be actually a [disfmarker] a plug at the front that'll connect to people's laptops so you can walk in and plug it in. And it's gonna be con connected to the machine at the back. So we certainly could use that as [disfmarker] as a constant reminder of what the VU meters are doing. [speaker002:] Huge VU meters. [speaker004:] So people sitting here [comment] are going [vocalsound] "testing, one, two, three"! It [disfmarker] a [speaker003:] But I mean, that's another [disfmarker] that's another possibility that, you know, solves [disfmarker] [speaker002:] Yeah. [speaker004:] Yeah. [speaker002:] That's an end [speaker004:] But [disfmarker] but [disfmarker] but I think the idea of having a control panel it's [disfmarker] that's there in front of you is really cool. [speaker002:] Yeah. Yeah, yeah. 
[speaker005:] Mm hmm. [speaker002:] I think and uh, having [disfmarker] having it on wireless is [disfmarker] is the neatest way [disfmarker] neatest way to do it. [speaker004:] R As long as you d as l as long as you're not tempted to sit there and f keep fiddling with the volume controls going, "can you talk a bit louder?" [speaker001:] I had [disfmarker] [speaker002:] Yeah. [speaker001:] I had actually earlier asked if I could borrow one of the cards to do wireless stuff [speaker002:] Yeah. [speaker001:] and they said, "sure, whenever you want". So I think it won't be a problem. [speaker002:] Oh, cool. [speaker004:] And [disfmarker] and it's a [disfmarker] a PCMCIA card, right? [speaker002:] OK. [speaker001:] Yep. [speaker004:] PC card, so you can have a slot, [speaker001:] PC card. [speaker004:] right? In your new machine? [speaker002:] Yeah, yeah. [speaker004:] Is it with s [speaker003:] It's [disfmarker] it really come down to the driver. I mean [disfmarker] [speaker002:] Yeah. [speaker004:] Right, i it'll [disfmarker] it'll work [disfmarker] [speaker001:] Right, I mean, and if [disfmarker] and if his doesn't work, as I said, we can use the PC. [speaker004:] It'll work the first time. I [disfmarker] I trust Steve Jobs. [speaker001:] Good. [speaker002:] Um, well, that sounds like a d good solution one way or the other. [speaker001:] So [disfmarker] So Jim is gonna be doing wiring and you're gonna give some thought to cabinets? [speaker004:] Uh, y yeah. We [disfmarker] we need to [vocalsound] figure out what we want. [speaker001:] Great. [speaker004:] Uh [disfmarker] [speaker002:] We'd [disfmarker] I think [disfmarker] [speaker004:] Hey, what are those green lights doing? [speaker001:] They're flashing! [speaker002:] Uh oh! Uh oh! Does that [disfmarker] it means [disfmarker] it means it's gonna explode. No. [speaker004:] Cut the red wire, the red wire! [speaker002:] Um [disfmarker] [speaker001:] When people talk, it [disfmarker] they go on and off. 
[speaker002:] This [disfmarker] So again, Washington wants to equip a system. Our system, we spent ten thousand dollars on equipment not including the PC. However, seven and a half thousand of that was the wireless mikes. [speaker004:] Mm hmm. [speaker002:] Uh, using [disfmarker] using these [disfmarker] [speaker004:] And it [disfmarker] and the f the five thousand for the wires, so if I'm gonna do [disfmarker] No. It's a joke. [speaker002:] Yeah, [speaker004:] I have to do [disfmarker] [speaker002:] that's true but we haven't spent that, right? But once we [disfmarker] once we've done the intellectual part of these, uh, we can just knock them out, right? [speaker001:] Cheap. [speaker002:] We can start [disfmarker] [vocalsound] we [disfmarker] you can make a hundred of them or something. [speaker004:] Oh, of the [disfmarker] of the boards? Yeah, yeah, sure, right. [speaker002:] And then we could [disfmarker] Washington could have a system that didn't have any wireless but would had [disfmarker] what's based on these [speaker004:] Mm hmm. [speaker002:] and it would cost [disfmarker] [speaker004:] Peanuts. [speaker001:] A PC and a peanuts. [speaker002:] PC and two thousand dollars for the A to D stuff. [speaker001:] Yeah. [speaker002:] And that's about [disfmarker] cuz you wouldn't even need the mixer if you didn't have the [disfmarker] [speaker004:] Right. [speaker002:] Oh th [vocalsound] the P Z P Z Ms cost a lot. But anyway you'd save, on the seven [disfmarker] seven or eight thousand for the [disfmarker] for the wireless system. So actually that might be attractive. [speaker004:] Right. [speaker002:] OK, [speaker001:] Good. [speaker002:] I can move my thumb now. [speaker005:] That's a great idea. [speaker004:] What? [speaker005:] It's nice [disfmarker] it's nice to be thinking toward that. [speaker004:] Oh, I thought like if we talked softer the disk lasts longer. [speaker001:] Well, actually shorten [disfmarker] [speaker002:] Yeah. 
[speaker001:] There's a speech compression program that works great on things like this, cuz if the dynamic range is low it encodes it with fewer bits. And so most of the time no one's talking so it shortens it dramatically. But [vocalsound] if you talk quieter, the dynamic range is lower and it will compress better. [speaker002:] Yeah. [speaker001:] So. [speaker005:] Oh. Hmm. [speaker004:] It also helps if you talk in a monotone. [speaker001:] Probably. [speaker004:] Constant volume all the time. [speaker005:] Oh, interesting. And shorter words. [speaker001:] Shorter words. [speaker003:] Now, shorter words wouldn't [disfmarker] would induce more dynamics, right? You want to have [disfmarker] [speaker002:] Yeah, but if the words are more predictable. [speaker001:] How about if you just go "uh"? [speaker003:] Huh. [speaker004:] Uh. [speaker005:] That's a long word! [speaker001:] How do you spell that? [speaker005:] I don't know. [speaker001:] OK, can you do one more round of digits? Are we done talking? [speaker004:] Well it's a choice [disfmarker] if we get a choice, let's keep talking. Sure. [speaker001:] Do we have more to talk about? [speaker004:] No, I'm done. [speaker003:] I'm done. [speaker001:] Are you done? [speaker005:] I'm done, [speaker001:] I'm done. [speaker005:] yeah. [speaker003:] Dan isn't but he's not gonna say anything. [speaker004:] But [disfmarker] you [disfmarker] you [disfmarker] you [disfmarker] there's a problem [disfmarker] a structural problem with this though. You really need an incentive at the end if you're gonna do digits again. Like, you know, candy bars or something, or [disfmarker] or [disfmarker] or [disfmarker] or or a little, uh [disfmarker] you know, toothbrushes like they give you at the d dentist. [speaker001:] I'll [disfmarker] I'll remember to bring M and M's next time. [speaker002:] Mmm! [speaker005:] Or both. [speaker004:] Or both. [speaker002:] Sorry. [speaker004:] Eric, you and I win. We didn't make any mistakes. 
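[The compression behavior described here (low dynamic range encodes with fewer bits, so long silences shrink the file dramatically) can be illustrated with a toy per-block coder. This is only a sketch of the principle, not the actual compression program mentioned; lossless audio coders of this family roughly allocate per-block bit widths after prediction.]

```python
import math

def bits_per_block(samples, block=256):
    """Toy illustration: encode each block with just enough bits to
    span its dynamic range. Quiet blocks (silence, soft speech) have a
    small range and therefore cost far fewer bits than loud blocks."""
    total = 0
    for i in range(0, len(samples), block):
        blk = samples[i:i + block]
        rng = max(blk) - min(blk)
        # at least 1 bit per sample even for a perfectly flat block
        total += len(blk) * max(1, math.ceil(math.log2(rng + 1)))
    return total
```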
[speaker001:] It's harder at the end than at the beginning. [speaker005:] We don't know that for sure, do we? [speaker001:] I should have mentioned that s uh, to pause between lines but [disfmarker] [speaker004:] No, I know. I'm just giving you a hard time. [speaker001:] It's [disfmarker] it's only a hard time for the transcriber not for the speech recognizer. [speaker002:] Tha tha [speaker005:] But [disfmarker] I also think you said channel four [speaker001:] Me. [speaker005:] and I think you meant microphone four. And I think that's a mistake. [speaker004:] Very good. So Eric, you win. But the other thing is that there's a [disfmarker] there's a colon for transcripts. And there shouldn't be a colon. Because see, everything else is stuff you fill in. [speaker002:] Yeah, that's been filled in for you. [speaker004:] Right? Automatically. [speaker002:] But they're in order! [speaker004:] But real [speaker002:] They start, six, seven, eight, nine, zero, one, two, three, four, five, six, eight, nine. [speaker004:] Where'd they come from? [speaker005:] Oh. [speaker002:] And they're in order because they're sorted lexically by the file names, which are [disfmarker] have the numbers in digits. And so they're actually [disfmarker] this is like all the [disfmarker] all utterances that were generated by speaker MPJ or something. [speaker005:] Oh. [speaker002:] And then within MPJ they're sorted by what he actually said. [speaker001:] Ugh! I didn't know that. I should have randomized it. [speaker005:] Wow. [speaker002:] It doesn't matter! It's like [disfmarker] Cuz you said "six, seven, eight". [speaker004:] Well, we think it doesn't matter. [speaker002:] We think it doesn't matter. If I [disfmarker] if not I [disfmarker] [speaker005:] Oh, interesting. [speaker004:] But the real question I have is that, why bother with these? Why don't you just ask people to repeat numbers they already know? Like phone numbers, you know, social security numbers. 
[speaker002:] Cuz we have these writt written down, right? [speaker001:] Because [disfmarker] [speaker002:] That's why [disfmarker] [speaker001:] Right. If we have it, uh [disfmarker] [speaker004:] I know. [speaker005:] Social security numbers. [speaker004:] I kn [speaker002:] You can [disfmarker] you can generate [disfmarker] [speaker001:] we don't have to transcribe. [speaker005:] Bank account numbers. [speaker004:] Credit card numbers, yeah. [speaker001:] We don't have to tran Yeah, please. [speaker002:] Yeah. That's a great idea. [speaker005:] Passport numbers. [speaker004:] Yeah, so you just say [disfmarker] [vocalsound] say your credit card numbers, say your phone numbers, say your mother's maiden name. [speaker001:] Bet we could do it. [speaker004:] You know pe [speaker005:] Password to your account. Go on. [speaker004:] people off the street. This [disfmarker] [speaker002:] Alright. [speaker001:] Actually, this [disfmarker] I got this directly from another training set, from Aurora. So. We can compare directly. [speaker002:] Looks good. [speaker005:] I was [disfmarker] I [disfmarker] the reason I made my mistake was [disfmarker] [speaker002:] Looks like there were no errors. [speaker005:] Wa was this [disfmarker]? [speaker001:] What? [speaker002:] There were no [disfmarker] there were no direct driver errors, by the look of it, which is good. [speaker001:] Great. [speaker005:] Good news. [speaker001:] OK, the mike's off. [speaker002:] So I'm gonna stop it. Yeah, OK. [speaker005:] OK. [speaker004:] Mony on the mike. [speaker001:] Thank you all. [speaker002:] Uh oh. [speaker001:] OK, [speaker003:] Eight, eight? [speaker001:] we're going. [speaker004:] This is three. Yep. [speaker003:] Three. [speaker004:] Yep. [speaker002:] Test. Hmm. Let's see. Move it bit. Test? Test? OK, I guess it's alright. So, let's see. Yeah, Barry's not here and Dave's not here. 
Um, I can say about [disfmarker] just q just quickly to get through it, that Dave and I submitted this ASRU. [speaker001:] This is for ASRU. [speaker002:] Yeah. So. Um. Yeah, it's [disfmarker] it's interesting. I mean, basically we're dealing with rever reverberation, and, um, when we deal with pure reverberation, the technique he's using works really, really well. Uh, and when they had the reverberation here, uh, we'll measure the signal to noise ratio and it's, uh, about nine DB. [speaker004:] Hmm. [speaker002:] So, um, a fair amount of [disfmarker] [speaker001:] You mean, from the actual, uh, recordings? [speaker004:] k [speaker002:] Yeah. [speaker001:] It's nine DB? [speaker002:] Yeah. Um [disfmarker] And actually it brought up a question which may be relevant to the Aurora stuff too. Um, I know that when you figured out the filters that we're using for the Mel scale, there was some experimentation that went on at [disfmarker] at, uh [disfmarker] at OGI. Um, but one of the differences that we found between the two systems that we were using, [comment] the [disfmarker] the Aurora HTK system baseline system [comment] and the system that we were [disfmarker] the [disfmarker] the uh, other system we were using, the uh, the SRI system, was that the SRI system had maybe a, um, hundred hertz high pass. [speaker004:] Yep. [speaker002:] And the, uh, Aurora HTK, it was like twenty. [speaker004:] S sixty four. S sixty four. [speaker002:] Uh. Sixty four? [speaker004:] Yeah, [speaker002:] Uh. [speaker004:] if you're using the baseline. [speaker002:] Is that the ba band center? [speaker004:] No, the edge. [speaker002:] The edge is really, uh, sixty four? [speaker004:] Yeah. [speaker002:] For some reason, uh, Dave thought it was twenty, [speaker004:] So the, uh, center would be somewhere around like hundred [speaker002:] but. 
[speaker004:] and [disfmarker] hundred and [disfmarker] hundred [disfmarker] hundred and [disfmarker] maybe [disfmarker] it's like [disfmarker] fi hundred hertz. [speaker002:] But do you know, for instance, h how far down it would be at twenty hertz? What the [disfmarker] how much rejection would there be at twenty hertz, let's say? [speaker004:] At twenty hertz. [speaker002:] Yeah, any idea what the curve looks like? [speaker004:] Twenty hertz frequency [disfmarker] Oh, it's [disfmarker] it's zero at twenty hertz, right? The filter? [speaker003:] Yea actually, the left edge of the first filter is at sixty four. [speaker004:] Sixt s sixty four. [speaker003:] So [disfmarker] [speaker004:] So anything less than sixty four is zero. [speaker003:] Mmm. [speaker002:] It's actually set to zero? What kind of filter is that? [speaker004:] Yeah. [speaker003:] Yeah. [speaker002:] Is this [disfmarker] oh, from the [disfmarker] from [disfmarker] [speaker003:] It [disfmarker] This is the filter bank in the frequency domain that starts at sixty four. [speaker002:] Oh, so you, uh [disfmarker] so you really set it to zero, the FFT? [speaker004:] Yeah, yeah. [speaker003:] Yeah. [speaker004:] So it's [disfmarker] it's a weight on the ball spectrum. Triangular weighting. [speaker002:] Right. OK. Um [disfmarker] OK. So that's [disfmarker] that's a little different than Dave thought, I think. But [disfmarker] but, um, still, it's possible that we're getting in some more noise. So I wonder, is it [disfmarker] [@ @] Was there [disfmarker] their experimentation with, uh, say, throwing away that filter or something? And, uh [disfmarker] [speaker004:] Uh, throwing away the first? [speaker002:] Yeah. [speaker004:] Um, yeah, we [disfmarker] we've tried including the full [disfmarker] full bank. Right? From zero to four K. [speaker003:] Mm hmm. [speaker004:] And that's always worse than using sixty four hertz. 
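[The filterbank being discussed (triangular weights applied to the power spectrum, with the left edge of the first filter at 64 Hz and everything below set to zero) can be sketched as follows. This is a generic Mel-filterbank construction under assumed parameters (23 filters, 512-point FFT, 8 kHz sampling, 64 Hz to 4 kHz), not the exact Aurora HTK configuration; the function names are illustrative.]

```python
import numpy as np

def mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def triangular_filterbank(n_filters=23, n_fft=512, sr=8000,
                          f_lo=64.0, f_hi=4000.0):
    """Triangular Mel filterbank as weights on the FFT power spectrum.
    Bins below the left edge (64 Hz here) get zero weight, so any
    energy there is simply discarded, as described in the meeting."""
    edges = mel_inv(np.linspace(mel(f_lo), mel(f_hi), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / sr).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):            # rising slope
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):            # falling slope
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb
```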
[speaker002:] Right, but the question is, whether sixty four hertz is [disfmarker] is, uh, too, uh, low. [speaker004:] Yeah, I mean, make it a hundred or so? [speaker002:] Yeah. [speaker004:] I t I think I've tried a hundred and it was more or less the same, or slightly worse. [speaker002:] On what test set? [speaker004:] On the same, uh, SpeechDat Car, Aurora. [speaker002:] Um, it was on the SpeechDat Car. [speaker004:] Yeah. So I tried a hundred to four K. Yeah. [speaker002:] Um, [speaker004:] So it was [disfmarker] [speaker002:] and on [disfmarker] and on the, um, um, [vocalsound] TI digits also? [speaker004:] No, no, no. I think I just tried it on SpeechDat Car. [speaker002:] Mmm. That'd be something to look at sometime because what, um, eh, he was looking at was performance in this room. [speaker004:] Mm hmm. [speaker002:] Would that be more like [disfmarker] Well, you'd think that'd be more like SpeechDat Car, I guess, in terms of the noise. The SpeechDat Car is more, uh, sort of roughly stationary, a lot of it. [speaker004:] Yeah. [speaker002:] And [disfmarker] and TI digits maybe is not so much as [disfmarker] [speaker003:] Mm hmm. [speaker004:] Yeah. [speaker002:] Yeah. Mm hmm. OK. Well, maybe it's not a big deal. But, um [disfmarker] Anyway, that was just something we wondered about. But, um, uh, certainly a lot of the noise, uh, is, uh, below a hundred hertz. Uh, the signal to noise ratio, you know, looks a fair amount better if you [disfmarker] if you high pass filter it from this room. [speaker004:] Yeah. [speaker002:] But, um [disfmarker] but it's still pretty noisy. Even [disfmarker] even for a hundred hertz up, it's [disfmarker] it's still fairly noisy. [speaker003:] Mm hmm. [speaker002:] The signal to noise ratio is [disfmarker] is [disfmarker] is actually still pretty bad. [speaker001:] Hmm. 
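[The observation here, that the far-field signal-to-noise ratio looks considerably better once the low-frequency noise (projector, air conditioning) is high-pass filtered out, can be demonstrated with a crude frequency-domain filter. This is a sketch with assumed signals (a tone standing in for speech, a low-frequency hum for the room noise); it is not the actual measurement procedure used.]

```python
import numpy as np

def snr_db(signal, noise):
    """Signal-to-noise ratio in dB from separate signal/noise tracks."""
    return 10.0 * np.log10(np.mean(np.square(signal)) /
                           np.mean(np.square(noise)))

def highpass(x, sr, cutoff=100.0):
    """Crude frequency-domain high-pass: zero all FFT bins below the
    cutoff. Removes low-frequency room noise (hum, HVAC rumble)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sr)
    X[freqs < cutoff] = 0.0
    return np.fft.irfft(X, n=len(x))
```

[A brick-wall FFT filter like this is fine for an SNR estimate; for actual processing one would use a proper IIR/FIR high-pass to avoid edge artifacts.]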
[speaker002:] So, um, I mean, the main [disfmarker] the [disfmarker] the [disfmarker] [speaker001:] So that's on th that's on the f the far field ones though, right? [speaker002:] Yeah, that's on the far field. [speaker001:] Yeah. [speaker002:] Yeah, the near field's pretty good. [speaker001:] So wha what is, uh [disfmarker] what's causing that? [speaker002:] Well, we got a [disfmarker] a video projector in here, uh, and, uh [disfmarker] which we keep on during every [disfmarker] every session we record, [speaker001:] Yeah. [speaker002:] which, you know, I [disfmarker] I [disfmarker] w we were aware of but [disfmarker] but we thought it wasn't a bad thing. [speaker001:] Uh huh. [speaker002:] I mean, that's a nice noise source. [speaker001:] Yeah. [speaker002:] Uh, and there's also the, uh [disfmarker] uh, air conditioning. [speaker001:] Hmm. [speaker002:] Which, uh, you know, is a pretty low frequency kind of thing. [speaker001:] Mm hmm. [speaker002:] But [disfmarker] but, uh [disfmarker] So, those are [disfmarker] those are major components, I think, [speaker001:] I see. [speaker002:] uh, for the stationary kind of stuff. [speaker001:] Mmm. [speaker002:] Um, but, um, it, uh [disfmarker] I guess, I [disfmarker] maybe I said this last week too but it [disfmarker] it [disfmarker] it really became apparent to us that we need to [disfmarker] to take account of noise. And, uh, so I think when [disfmarker] when he gets done with his prelim study I think [vocalsound] one of the next things we'd want to do is to take this, uh [disfmarker] uh, noise, uh, processing stuff and [disfmarker] and, uh [disfmarker] uh, synthesize some speech from it. And then [disfmarker] [speaker001:] When are his prelims? [speaker002:] Um, I think in about, um, a little less than two weeks. [speaker001:] Oh. Wow. [speaker002:] Yeah. Yeah. So. Uh, it might even be sooner. Uh, let's see, this is the sixteenth, seventeenth? Yeah, I don't know if he's before [disfmarker] It might even be in a week. 
[speaker001:] So, I [speaker002:] A week, [speaker001:] Huh. [speaker002:] week and a half. [speaker001:] I [disfmarker] I guessed that they were gonna do it some time during the semester but they'll do it any time, huh? [speaker002:] They seem to be [disfmarker] Well, the semester actually is starting up. [speaker001:] Is it already? [speaker002:] Yeah, the semester's late [disfmarker] late August they start here. [speaker001:] Yikes. [speaker002:] So they do it right at the beginning of the semester. [speaker001:] Yeah. [speaker002:] Yeah. So, uh [disfmarker] Yep. I mean, that [disfmarker] that was sort of one [disfmarker] I mean, the overall results seemed to be first place in [disfmarker] in [disfmarker] in the case of either, um, artificial reverberation or a modest sized training set. Uh, either way, uh, i uh, it helped a lot. And [disfmarker] But if you had a [disfmarker] a really big training set, a recognizer, uh, system that was capable of taking advantage of a really large training set [disfmarker] I thought that [disfmarker] One thing with the HTK is that is has the [disfmarker] as we're using [disfmarker] the configuration we're using is w s is [disfmarker] being bound by the terms of Aurora, we have all those parameters just set as they are. So even if we had a hundred times as much data, we wouldn't go out to, you know, ten or t or a hundred times as many Gaussians or anything. So, um, it's kind of hard to take advantage of [disfmarker] of [disfmarker] of big chunks of data. [speaker003:] Mm hmm. [speaker004:] Mmm, yeah. [speaker002:] Uh, whereas the other one does sort of expand as you have more training data. It does it automatically, actually. And so, um, uh, that one really benefited from the larger set. And it was also a diverse set with different noises and so forth. 
Uh, so, um, that, uh [disfmarker] that seemed to be [disfmarker] So, if you have that [disfmarker] that better recognizer that can [disfmarker] that can build up more parameters, and if you, um, have the natural room, which in this case has a p a pretty bad signal to noise ratio, then in that case, um, the right thing to do is just do [disfmarker] u use speaker adaptation. And [disfmarker] and not bother with [disfmarker] with this acoustic, uh, processing. But I think that that would not be true if we did some explicit noise processing as well as, uh, the convolutional kind of things we were doing. [speaker003:] Mm hmm. [speaker002:] So. That's sort of what we found. [speaker004:] Hmm. [speaker001:] I, um [disfmarker] [vocalsound] uh, started working on the uh [disfmarker] Mississippi State recognizer. [speaker004:] Oh, [speaker001:] So, I got in touch with Joe and [disfmarker] and, uh, from your email and things like that. [speaker004:] OK. [speaker001:] And, uh, they added me to the list [disfmarker] uh, the mailing list. [speaker004:] OK, great. [speaker001:] And he gave me all of the pointers and everything that I needed. And so I downloaded the, um [disfmarker] There were two things, uh, that they had to download. One was the, uh, I guess the software. And another wad [disfmarker] was a, um, sort of like a sample [disfmarker] a sample run. So I downloaded the software and compiled all of that. And it compiled fine. [speaker004:] Eight. Oh, eh, great. [speaker001:] No problems. And, um, I grabbed the sample stuff but I haven't, uh, compiled it. [speaker004:] That sample was released only yesterday or the day before, right? [speaker001:] No [disfmarker] Well, I haven't grabbed that one yet. So there's two. [speaker004:] Oh, there is another short sample set [disfmarker] [speaker001:] There was another short one, [speaker004:] o o sample. [speaker001:] yeah. [speaker004:] OK. [speaker001:] And so I haven't grabbed the latest one that he just, uh, put out yet. 
[speaker004:] Oh, OK. F Yeah, OK. [speaker001:] So. Um, but, the software seemed to compile fine and everything, so. And, um, So. [speaker002:] Is there any word yet about the issues about, um, adjustments for different feature sets or anything? [speaker001:] No, I [disfmarker] I d You asked me to write to him and I think I forgot to ask him about that. [speaker002:] Yeah. [speaker001:] Or if I did ask him, he didn't reply. I [disfmarker] I don't remember yet. Uh, I'll [disfmarker] I'll d I'll double check that and ask him again. [speaker002:] Yeah. [speaker004:] Hmm. [speaker002:] Yeah, it's like that [disfmarker] that could r turn out to be an important issue for us. [speaker004:] Mmm. [speaker001:] Yeah. [speaker002:] Yeah. [speaker001:] Yeah. [speaker004:] Cuz they have it [disfmarker] [speaker001:] Maybe I'll send it to the list. Yeah. [speaker004:] Cuz they have, uh, already frozen those in i insertion penalties and all those stuff is what [disfmarker] I feel. Because they have this document explaining the recognizer. [speaker001:] Uh huh. [speaker004:] And they have these tables with, uh, various language model weights, insertion penalties. u [speaker001:] OK, I haven't seen that one yet. [speaker004:] Uh, it's th it's there on that web. [speaker001:] So. OK. [speaker004:] And, uh, on that, I mean, they have run some experiments using various insertion penalties and all those [disfmarker] [speaker001:] And so they've picked [disfmarker] the values. [speaker004:] Yeah, I think they pi p yeah, they picked the values from [disfmarker] [speaker001:] Oh, OK. OK. [speaker002:] For r w what test set? [speaker004:] Uh, p the one that they have reported is a NIST evaluation, Wall Street Journal. [speaker002:] But that has nothing to do with what we're testing on, right? [speaker004:] You know. [speaker003:] Mm hmm. [speaker004:] No. 
So they're, like [disfmarker] um [disfmarker] So they are actually trying to, uh, fix that [disfmarker] those values using the clean, uh, training part of the Wall Street Journal. Which is [disfmarker] I mean, the Aurora. Aurora has a clean subset. I mean, they want to train it [speaker002:] Right. [speaker004:] and then this [disfmarker] they're going to run some evaluations. [speaker002:] So they're set they're setting it based on that? [speaker004:] Yeah. [speaker002:] OK. So now, we may come back to the situation where we may be looking for a modification of the features to account for the fact that we can't modify these parameters. [speaker001:] Yeah. [speaker002:] But, um, [speaker004:] Yeah. [speaker002:] uh [disfmarker] but it's still worth, I think, just [disfmarker] since [disfmarker] you know, just chatting with Joe about the issue. [speaker001:] Yeah, OK. Do you think that's something I should just send to him [speaker002:] Um [disfmarker] [speaker001:] or do you think I should send it to this [disfmarker] there's an [disfmarker] a m a mailing list. [speaker002:] Well, it's not a secret. I mean, we're, you know, certainly willing to talk about it with everybody, but I think [disfmarker] I think that, um [disfmarker] um, it's probably best to start talking with him just to [disfmarker] [speaker001:] OK. [speaker002:] Uh [@ @] [comment] you know, it's a dialogue between two of you about what [disfmarker] you know, what does he think about this and what [disfmarker] what [disfmarker] you know [disfmarker] what could be done about it. [speaker001:] Yeah. [speaker002:] Um, [speaker001:] OK. [speaker002:] if you get ten people in [disfmarker] involved in it there'll be a lot of perspectives based on, you know, how [disfmarker] [speaker001:] Yeah. [speaker002:] you know. [speaker001:] Right. 
[speaker002:] Uh [disfmarker] But, I mean, I think it all should come up eventually, but if [disfmarker] if [disfmarker] if there is any, uh, uh, way to move in [disfmarker] a way that would [disfmarker] that would, you know, be more open to different kinds of features. [speaker001:] OK. [speaker002:] But if [disfmarker] if, uh [disfmarker] if there isn't, and it's just kind of shut down and [disfmarker] and then also there's probably not worthwhile bringing it into a larger forum where [disfmarker] where political issues will come in. [speaker001:] Yeah. OK. [speaker004:] Oh. So this is now [disfmarker] it's [disfmarker] it's compiled under Solaris? [speaker001:] Yeah. [speaker004:] Yeah, OK. Because he [disfmarker] there was some mail r saying that it's [disfmarker] may not be stable for Linux and all those. [speaker001:] Yep. Yeah. Yeah, i that was a particular version. [speaker004:] SUSI [speaker001:] Yeah, [speaker004:] yeah. [speaker001:] SUSI or whatever it was [speaker004:] Yeah, yeah. [speaker001:] but we don't have that. [speaker004:] Yeah, OK. [speaker001:] So. [speaker004:] OK, that's fine. [speaker001:] Should be OK. Yeah, it compiled fine actually. [speaker004:] Yeah. [speaker001:] No [disfmarker] no errors. Nothing. So. [speaker004:] That's good. [speaker002:] Uh, this is slightly off topic but, uh, I noticed, just glancing at the, uh, Hopkins workshop, uh, web site that, uh, um [disfmarker] one of the thing I don't know [disfmarker] Well, we'll see how much they accomplish, but one of the things that they were trying to do in the graphical models thing was to put together a [disfmarker] a, uh, tool kit for doing, uh r um, arbitrary graphical models for, uh, speech recognition. [speaker001:] Hmm. [speaker002:] So [disfmarker] And Jeff, uh [disfmarker] the two Jeffs were [speaker001:] Who's the second Jeff? [speaker002:] Uh [disfmarker] Oh, uh, do you know Geoff Zweig? [speaker001:] No. [speaker002:] Oh. 
Uh, he [disfmarker] he, uh [disfmarker] he was here for a couple years and he, uh [disfmarker] got his PHD. [speaker001:] Oh, OK. [speaker002:] He [disfmarker] And he's, uh, been at IBM for the last couple years. [speaker001:] Oh, OK. [speaker002:] So. [speaker001:] Wow. [speaker002:] Uh, so he did [disfmarker] he did his PHD on dynamic Bayes nets, [speaker001:] That would be neat. [speaker002:] uh, for [disfmarker] for speech recognition. He had some continuity built into the model, presumably to handle some, um, inertia in the [disfmarker] in the production system, and, um [disfmarker] [speaker001:] Hmm. [speaker002:] So. [speaker004:] Hmm. [speaker003:] Um, I've been playing with, first, the, um, VAD. Um, [vocalsound] so it's exactly the same approach, but the features that the VAD neural network use are, uh, MFCC after noise compensation. Oh, I think I have the results. [speaker002:] What was it using before? [speaker003:] Before it was just P L So. [speaker004:] Yeah, it was actually [disfmarker] No. Not [disfmarker] I mean, it was just the noisy features I guess. Yeah, yeah, yeah, [speaker003:] Yeah, noisy [disfmarker] noisy features. [speaker004:] not compensated. [speaker003:] Um [disfmarker] This is what we get after [disfmarker] This [disfmarker] So, actually, we, yeah, here the features are noise compensated and there is also the LDA filter. Um, and then it's a pretty small neural network which use, um, [vocalsound] nine frames of [disfmarker] of six features from C zero to C fives, plus the first derivatives. And it has one hundred hidden units. [speaker001:] Is that nine frames u s uh, centered around the current frame? Or [disfmarker] [speaker003:] Yeah. Mm hmm. [speaker002:] S so, I'm [disfmarker] I'm sorry, there's [disfmarker] there's [disfmarker] there's how many [disfmarker] how many inputs? [speaker003:] So it's twelve times nine. [speaker002:] Twelve times nine inputs, and a hundred, uh, hidden. 
[speaker003:] Hidden and [speaker004:] Two outputs. [speaker003:] two outputs. [speaker002:] Two outputs. OK. So I guess about eleven thousand parameters, [speaker003:] Mm hmm. [speaker002:] which [disfmarker] actually shouldn't be a problem, even in [disfmarker] in small phones. Yeah. [speaker001:] So, I'm [disfmarker] I'm [disfmarker] s [speaker003:] It should be OK. [speaker001:] so what is different between this and [disfmarker] and what you [disfmarker] [speaker003:] So the previous syst It's based on the system that has a fifty three point sixty six percent improvement. It's the same system. The only thing that changed is the n a p eh [disfmarker] a es the estimation of the silence probabilities. [speaker001:] Ah. OK. [speaker003:] Which now is based on, uh, cleaned features. [speaker002:] And, it's a l it's a lot better. [speaker001:] Wow. [speaker003:] Yeah. Um [disfmarker] [speaker002:] That's great. [speaker003:] So it's [disfmarker] it's not bad, but the problem is still that the latency is too large. [speaker002:] What's the latency? [speaker003:] Because [disfmarker] um [disfmarker] the [disfmarker] the latency of the VAD is two hundred and twenty milliseconds. And, uh, the VAD is used uh, i for on line normalization, and it's used before the delta computation. So if you add these components it goes t to a hundred and seventy, right? [speaker002:] I [disfmarker] I'm confused. You started off with two twenty and you ended up with one seventy? [speaker003:] With two an two hundred and seventy. [speaker002:] Two seventy. [speaker003:] If [disfmarker] Yeah, if you add the c delta comp delta computation [speaker002:] Oh. [speaker003:] which is done afterwards. Um [disfmarker] [speaker002:] So it's two twenty. I the is this [disfmarker] are these twenty millisecond frames? Is that why? Is it after downsampling? 
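As a check on the "eleven thousand parameters" estimate above: 9 frames of 12 features (C0 through C5 plus first derivatives) give 108 inputs, into 100 hidden units and 2 outputs. A minimal sketch (the helper function is hypothetical; the layer sizes come from the discussion):

```python
# Parameter count for the VAD MLP described above:
# 9 frames x 12 features (C0..C5 + first derivatives) -> 100 hidden -> 2 outputs.
def mlp_param_count(n_in, n_hidden, n_out):
    # weights + biases for each fully connected layer
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

print(mlp_param_count(9 * 12, 100, 2))  # 11102, i.e. "about eleven thousand"
```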
[speaker003:] The two twenty is one hundred milliseconds for the um [disfmarker] [speaker002:] or [disfmarker] [speaker003:] No, it's forty milliseconds for t for the, uh, uh, cleaning of the speech. Um [disfmarker] then there is, um, the neural network which use nine frames. So it adds forty milliseconds. [speaker002:] a OK. [speaker003:] Um, after that, um, you have the um, filtering of the silence probabilities. Which is a median filter, and it creates a one hundred milliseconds delay. So, um [disfmarker] [speaker004:] Plus there is a delta at the input. [speaker003:] Yeah, and there is the delta at the input which is, [speaker002:] One hundred milliseconds for smoothing. [speaker003:] um [disfmarker] So it's [disfmarker] [@ @] [disfmarker] [speaker002:] Uh, [speaker004:] It's like forty plus [disfmarker] forty [disfmarker] plus [disfmarker] [speaker002:] median. [speaker003:] Mmm. Forty [disfmarker] [speaker002:] And then forty [disfmarker] [speaker003:] This forty plus twenty, plus one hundred. [speaker002:] forty p [speaker003:] Uh [disfmarker] [speaker004:] So it's two hundred actually. [speaker003:] Yeah, there are twenty that comes from [disfmarker] There is ten that comes from the LDA filters also. [speaker004:] Oh, OK. [speaker003:] Right? Uh, so it's two hundred and ten, yeah. [speaker004:] If you are using [disfmarker] [speaker002:] Uh [disfmarker] [speaker003:] Plus the frame, [speaker004:] t If you are using three frames [disfmarker] [speaker003:] so it's two twenty. [speaker004:] If you are phrasing f [comment] using three frames, it is thirty here for delta. [speaker003:] Yeah, I think it's [disfmarker] it's five frames, [speaker004:] So five frames, that's twenty. [speaker003:] but. [speaker004:] OK, so it's who un [comment] two hundred and ten. [speaker002:] Uh, p Wait a minute. It's forty [disfmarker] [vocalsound] forty for the [disfmarker] for the cleaning of the speech, [speaker003:] So. Forty cleaning.
[speaker002:] forty for the I N [disfmarker] ANN, a hundred for the smoothing. [speaker003:] Yeah. [speaker002:] Well, but at ten [disfmarker], [speaker003:] Twenty for the delta. [speaker004:] At th [nonvocalsound] At the input. [speaker002:] Twenty for delta. [speaker004:] I mean, that's at the input to the net. [speaker003:] Yeah. [speaker002:] Delta at input to net? [speaker004:] And there i [speaker003:] Yeah. [speaker004:] Yeah. So it's like s five, six cepstrum plus delta at nine [disfmarker] nine frames of [disfmarker] [speaker002:] And then ten milliseconds for [disfmarker] [speaker004:] Fi There's an LDA filter. [speaker002:] ten milliseconds for LDA filter, and t and ten [disfmarker] another ten milliseconds you said for the frame? [speaker003:] For the frame I guess. I computed two twenty [disfmarker] Yeah, well, it's [disfmarker] I guess it's for the fr [disfmarker] the [disfmarker] [speaker002:] OK. And then there's delta besides that? [speaker003:] So this is the features that are used by our network and then afterwards, you have to compute the delta on the, uh, main feature stream, which is um, delta and double deltas, [speaker002:] OK. [speaker003:] which is fifty milliseconds. [speaker002:] Yeah. No, I mean, the [disfmarker] after the noise part, the forty [disfmarker] the [disfmarker] the other hundred and eighty [disfmarker] Well, I mean, Wait a minute. Some of this is, uh [disfmarker] is, uh [disfmarker] is in parallel, isn't it? I mean, the LDA [disfmarker] Oh, you have the LDA as part of the V D uh, VAD? Or [disfmarker] [speaker003:] The VAD use, uh, LDA filtered features also. [speaker002:] Oh, it does? [speaker003:] Mm hmm. [speaker002:] Ah. So in that case there isn't too much in parallel. Uh [disfmarker] [speaker003:] No. There is, um, just downsampling, upsampling, and the LDA. [speaker002:] Um, so the delta at the end is how much? [speaker003:] It's fifty. [speaker004:] It's [disfmarker] [speaker002:] Fifty. Alright. 
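The latency bookkeeping in this exchange can be tallied in one place (the component names and millisecond values are the ones quoted above; the dictionary itself is just an illustrative sketch):

```python
# Latency budget for the VAD path, in milliseconds, as itemized in the discussion.
vad_path_ms = {
    "noise cleaning": 40,
    "delta at net input": 20,
    "LDA filter": 10,
    "nine-frame net context": 40,
    "median smoothing of silence probabilities": 100,
    "frame": 10,
}
vad_latency = sum(vad_path_ms.values())
print(vad_latency)       # 220

# Deltas/double-deltas on the main feature stream add another 50 ms
# when computed after on-line normalization.
print(vad_latency + 50)  # 270
```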
So [disfmarker] [speaker003:] But well, we could probably put the delta, um, [vocalsound] before on line normalization. It should not that make a big difference, [speaker001:] What if you used a smaller window for the delta? [speaker003:] because [disfmarker] [speaker001:] Could that help a little bit? I mean, I guess there's a lot of things you could do to [disfmarker] [speaker003:] Yeah. [speaker002:] Yeah. [speaker003:] Yeah, but, nnn [disfmarker] [speaker002:] So Yeah. So if you [disfmarker] if you put the delta before the, uh, ana on line [disfmarker] If [disfmarker] [speaker003:] Mm hmm. [speaker002:] Yeah [disfmarker] uh [disfmarker] then [disfmarker] then it could go in parallel. [speaker003:] Cuz i [speaker002:] And then y then you don't have that additive [disfmarker] [speaker004:] Yep. [speaker003:] Yeah, cuz the time constant of the on line normalization is pretty long compared to the delta window, [speaker002:] OK. [speaker003:] so. It should not make [disfmarker] [speaker002:] OK. And you ought to be able to shove tw, uh [disfmarker] sh uh [disfmarker] pull off twenty milliseconds from somewhere else to get it under two hundred, right? I mean [disfmarker] [speaker003:] Mm hmm. [speaker001:] Is two hundred the d [speaker002:] The hundred milla mill a hundred milliseconds for smoothing is sort of an arbitrary amount. It could be eighty and [disfmarker] and probably do [@ @] [disfmarker] [speaker003:] Yeah, yeah. [speaker001:] i a hun uh [disfmarker] Wh what's the baseline you need to be under? Two hundred? [speaker002:] Well, we don't know. They're still arguing about it. [speaker001:] Oh. [speaker002:] I mean, if it's two [disfmarker] if [disfmarker] if it's, uh [disfmarker] if it's two fifty, then we could keep the delta where it is if we shaved off twenty. If it's two hundred, if we shaved off twenty, we could [disfmarker] we could, uh, meet it by moving the delta back. 
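The delta lookahead being traded off here comes from the regression window: at a 10 ms frame shift, a ±2-frame window costs 20 ms of lookahead and a ±5-frame window costs 50 ms. A minimal sketch of a standard regression-based delta (this is generic textbook delta computation, not necessarily the exact filter used in the system):

```python
import numpy as np

def delta(cepstra, w=2):
    """Standard regression deltas over a +/-w frame window.
    Lookahead cost: w frames, i.e. w * 10 ms at a 10 ms frame shift."""
    norm = 2 * sum(n * n for n in range(1, w + 1))
    padded = np.pad(cepstra, ((w, w), (0, 0)), mode="edge")  # repeat edge frames
    T = cepstra.shape[0]
    return np.stack([
        sum(n * (padded[t + w + n] - padded[t + w - n]) for n in range(1, w + 1)) / norm
        for t in range(T)
    ])

c = np.random.randn(50, 13)       # 50 frames of 13 cepstra (dummy data)
print(delta(c, w=2).shape)        # (50, 13), with 20 ms of lookahead
```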
[speaker001:] So, how do you know that what you have is too much if they're still deciding? [speaker002:] Uh, we don't, but it's just [disfmarker] I mean, the main thing is that since that we got burned last time, and [disfmarker] you know, by not worrying about it very much, we're just staying conscious of it. [speaker001:] Uh huh. Oh, OK, I see. [speaker002:] And so, th I mean, if [disfmarker] if [disfmarker] if a week before we have to be done someone says, "Well, you have to have fifty milliseconds less than you have now", it would be pretty frantic around here. So [disfmarker] [speaker001:] Ah, OK. [speaker002:] Uh [disfmarker] [speaker001:] But still, that's [disfmarker] that's a pretty big, uh, win. And it doesn't seem like you're [disfmarker] in terms of your delay, you're, uh, that [disfmarker] [speaker002:] He added a bit on, [speaker003:] Hmm. [speaker002:] I guess, because before we were [disfmarker] we were [disfmarker] had [disfmarker] were able to have the noise, uh, stuff, uh, and the LDA be in parallel. And now he's [disfmarker] he's requiring it to be done first. [speaker003:] Well, but I think the main thing, maybe, is the cleaning of the speech, which takes forty milliseconds or so. [speaker002:] Right. Well, so you say [disfmarker] [speaker003:] And [disfmarker] [speaker002:] let's say ten milliseconds [disfmarker] seconds for the LDA. [speaker003:] and [disfmarker] but [disfmarker] the LDA is, well, pretty short right now. [speaker002:] Well, ten. [speaker003:] Yeah. [speaker002:] And then forty for the other. [speaker004:] Yeah, the LDA [disfmarker] LDA [disfmarker] we don't know, is, like [disfmarker] is it very crucial for the features, right? [speaker003:] No. I just [disfmarker] [speaker004:] Yeah. [speaker003:] This is the first try. I mean, I [disfmarker] maybe the LDA's not very useful then.
[speaker002:] Right, [speaker004:] S s h [speaker002:] so you could start pulling back, but [disfmarker] [speaker004:] Yeah, l [speaker002:] But I think you have [disfmarker] I mean, you have twenty for delta computation which y now you're sort of doing twice, right? But yo w were you doing that before? [speaker003:] Mmm. [speaker004:] On the [disfmarker] in the [disfmarker] [speaker003:] Well, in the proposal, um, the input of the VAD network were just three frames, I think. [speaker004:] Mm hmm. Just [disfmarker] Yeah, just the static, no delta. [speaker003:] Uh, static features. [speaker002:] Right. So, what you have now is fort uh, forty for the [disfmarker] the noise, twenty for the delta, and ten for the LDA. That's seventy milliseconds of stuff which was formerly in parallel, right? So I think, [speaker003:] Mm hmm. [speaker002:] you know, that's [disfmarker] that's the difference as far as the timing, [speaker003:] Yeah. [speaker002:] right? Um, and you could experiment with cutting various pieces of these back a bit, but [disfmarker] I mean, we're s we're not [disfmarker] we're not in terrible shape. [speaker001:] Yeah, that's what it seems like to me. [speaker003:] Mm hmm. [speaker002:] Yeah. [speaker001:] It's pretty good. [speaker002:] It's [disfmarker] it's not like it's adding up to four hundred milliseconds or something. [speaker001:] Where [disfmarker] where is this [disfmarker] where is this fifty seven point O two in [disfmarker] in comparison to the last evaluation? [speaker002:] Well, it's [disfmarker] I think it's better than anything, uh, anybody got. [speaker003:] Yeah. [speaker001:] Oh, is that right? [speaker003:] The best was fifty four point five. [speaker002:] Yeah. [speaker004:] Point s [speaker001:] Oh. [speaker002:] Yeah. [speaker003:] And our system was forty nine, [speaker002:] Uh [speaker003:] but with the neural network. [speaker001:] Wow. So this is almost ten percent. [speaker002:] With the f with the neural net. 
Yeah, [speaker004:] Yeah, [speaker002:] and r and [disfmarker] [speaker003:] It would [speaker004:] so this is [disfmarker] this is like the first proposal. The proposal one. It was forty four, actually. [speaker002:] Yeah. Yeah. And we still don't have the neural net in. So [disfmarker] so it's [disfmarker] [speaker001:] Wow. [speaker002:] You know. So it's [disfmarker] We're [disfmarker] we're doing better. I mean, we're getting better recognition. [speaker001:] This is [disfmarker] this is really good. [speaker002:] I mean, I'm sure other people working on this are not sitting still either, but [disfmarker] but [disfmarker] [speaker001:] Yeah. [speaker002:] but, uh [disfmarker] Uh, I mean, the important thing is that we learn how to do this better, and, you know. So. Um, Yeah. So, our, um [disfmarker] Yeah, you can see the kind of [disfmarker] kind of numbers that we're having, say, on SpeechDat Car which is a hard task, cuz it's really, um [disfmarker] I think it's just sort of [disfmarker] sort of reasonable numbers, starting to be. [speaker003:] Mm hmm. [speaker002:] I mean, it's still terri [speaker003:] Yeah, even for a well matched case it's sixty percent error rate reduction, which is [disfmarker] [speaker002:] Yeah. Yeah. Probably half. Good! [speaker003:] Um, Yeah. So actually, this is in between [vocalsound] what we had with the previous VAD and what Sunil did with an ideal VAD. Which gave sixty two percent improvement, right? [speaker004:] Yeah, it's almost that. [speaker003:] So [disfmarker] [speaker004:] It's almost an average somewhere around [disfmarker] [speaker003:] Yeah. [speaker004:] Yeah. [speaker001:] What was that? Say that last part again? [speaker003:] So, if you use, like, an ideal VAD, uh, for dropping the frames, [speaker004:] o o Or the best we can get. [speaker003:] the best that we can get [disfmarker] i That means that we estimate the silence probability on the clean version of the utterances.
Then you can go up to sixty two percent error rate reduction, globally. [speaker001:] Mmm. [speaker003:] Mmm [disfmarker] Yeah. [speaker001:] So that would be even [disfmarker] That wouldn't change this number down here to sixty two? [speaker003:] Yeah. [speaker002:] Yeah. So you [disfmarker] you were get [speaker003:] If you add a g good v very good VAD, that works as well as a VAD working on clean speech, [speaker001:] Yeah. Yeah. [speaker003:] then you wou you would go [disfmarker] [speaker001:] So that's sort of the best you could hope for. [speaker003:] Mm hmm. [speaker002:] Probably. Yeah. [speaker001:] I see. [speaker002:] So fi si fifty three is what you were getting with the old VAD. [speaker003:] Yeah. [speaker002:] And, uh [disfmarker] and sixty two with the [disfmarker] the, you know, quote, unquote, cheating VAD. And fifty seven is what you got with the real VAD. [speaker003:] Mm hmm. [speaker002:] That's great. [speaker003:] Uh, yeah, the next thing is, I started to play [disfmarker] Well, I don't want to worry too much about the delay, no. Maybe it's better to wait [speaker002:] OK. [speaker003:] for the decision [speaker002:] Yeah. [speaker003:] from the committee. Uh, but I started to play with the, um, [vocalsound] [vocalsound] uh, tandem neural network. Mmm I just did the configuration that's very similar to what we did for the February proposal. And [disfmarker] Um. So. There is a f a first feature stream that use uh straight MFCC features. [speaker002:] Mm hmm. [speaker003:] Well, these features actually. And the other stream is the output of a neural network, using as input, also, these, um, cleaned MFCC. Um [disfmarker] I don't have the comp [speaker001:] Those are th those are th what is going into the tandem net? [speaker003:] Mmm? [speaker001:] Those two? [speaker003:] So there is just this feature stream, [comment] the fifteen MFCC plus delta and double delta. [speaker002:] No. [speaker001:] Yeah? 
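The 53.66 / 57.02 / 62 figures in the VAD comparison above are relative word-error-rate reductions against a reference front-end. A small sketch of how such reductions map to absolute error rates (the 30% reference WER is hypothetical; only the reduction percentages come from the discussion):

```python
def relative_reduction(ref_wer, sys_wer):
    """Relative error-rate reduction, in percent, against a reference WER."""
    return 100.0 * (ref_wer - sys_wer) / ref_wer

# With a hypothetical 30% reference WER, the quoted reductions imply:
ref = 30.0
for name, red in [("old VAD", 53.66), ("new VAD", 57.02), ("ideal VAD", 62.0)]:
    # absolute WER implied by the quoted relative reduction
    print(name, round(ref * (1 - red / 100.0), 2))
```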
[speaker003:] Um, so it's [disfmarker] makes forty five features [comment] that are used as input to the HTK. And then, there is [disfmarker] there are more inputs that comes from the tandem MLP. [speaker001:] Oh, oh. OK. [speaker002:] Yeah, [speaker001:] I see. [speaker002:] h he likes to use them both, cuz then it has one part that's discriminative, [speaker001:] Uh huh. [speaker003:] Yeah. Um [disfmarker] [speaker002:] one part that's not. [speaker001:] Right. OK. [speaker003:] So, um, uh, yeah. Right now it seems that [disfmarker] i I just tested on SpeechDat Car while the experiment are running on your [disfmarker] on TI digits. Well, it improves on the well matched and the mismatched conditions, but it get worse on the highly mismatched. Um, [speaker001:] Compared to these numbers? [speaker003:] Compared to these numbers, yeah. Um, like, on the well match and medium mismatch, the gain is around five percent relative, [speaker002:] y [speaker003:] but it goes down a lot more, like fifteen percent on the HM case. [speaker002:] You're just using the full ninety features? [speaker003:] The [disfmarker] [speaker002:] Y you have ninety features? [speaker003:] i I have, um [disfmarker] From the networks, it's twenty eight. So [disfmarker] [speaker002:] And from the other side it's forty five. [speaker003:] So, d i It's forty five. [speaker002:] So it's [disfmarker] you have seventy three features, [speaker003:] Yeah. [speaker002:] and you're just feeding them like that. [speaker003:] Yeah. Mm hmm. [speaker002:] There isn't any KLT or anything? [speaker003:] There's a KLT after the neural network, as [disfmarker] as before. [speaker001:] That's how you get down to twenty eight? [speaker003:] Yeah. [speaker001:] Why twenty eight? [speaker003:] I don't know. Uh. [speaker001:] Oh. [speaker003:] It's [disfmarker] i i i It's because it's what we did for the first proposal. We tested, uh, trying to go down [speaker001:] Ah. [speaker002:] It's a multiple of seven. 
[speaker003:] and [speaker004:] Yeah. [speaker003:] Yeah. So [disfmarker] Um. [speaker004:] Yeah. [speaker003:] I wanted to do something very similar to the proposal as a first [disfmarker] first try. [speaker004:] Yeah. [speaker001:] I see. [speaker002:] Yeah. [speaker001:] Yeah. That makes sense. [speaker003:] But we have to [disfmarker] for sure, we have to go down, because the limit is now sixty features. So, [speaker002:] Yeah. [speaker003:] uh, we have to find a way to decrease the number of features. Um [disfmarker] [speaker001:] So, it seems funny that [disfmarker] I don't know, maybe I don't u quite understand everything, [comment] but that adding features [disfmarker] I guess [disfmarker] I guess if you're keeping the back end fixed. Maybe that's it. Because it seems like just adding information shouldn't give worse results. But I guess if you're keeping the number of Gaussians fixed in the recognizer, then [disfmarker] [speaker002:] Well, yeah. [speaker003:] Mmm. [speaker002:] But, I mean, just in general, adding information [disfmarker] Suppose the information you added, well, was a really terrible feature and all it brought in was noise. [speaker001:] Yeah. [speaker002:] Right? So [disfmarker] so, um [disfmarker] Or [disfmarker] or suppose it wasn't completely terrible, but it was completely equivalent to another one feature that you had, except it was noisier. [speaker001:] Uh huh. [speaker002:] Right? In that case you wouldn't necessarily expect it to be better at all. [speaker001:] Oh, yeah, I wasn't necessarily saying it should be better. I'm just surprised that you're getting fifteen percent relative worse on the wel [speaker002:] Uh huh. [speaker003:] But it's worse. [speaker002:] On the highly mismatched condition. [speaker001:] On the highly mismatch. [speaker003:] Yeah, I [disfmarker] [speaker001:] Yeah. [speaker002:] So, "highly mismatched condition" means that in fact your training is a bad estimate of your test. [speaker003:] Uh huh. 
[speaker002:] So having [disfmarker] having, uh, a g a l a greater number of features, if they aren't maybe the right features that you use, certainly can e can easily, uh, make things worse. I mean, you're right. If you have [disfmarker] if you have, uh, lots and lots of data, and you have [disfmarker] and your [disfmarker] your [disfmarker] your training is representative of your test, then getting more sources of information should just help. But [disfmarker] but it's [disfmarker] It doesn't necessarily work that way. [speaker001:] Huh. [speaker003:] Mm hmm. [speaker002:] So I wonder, um, Well, what's your [disfmarker] what's your thought about what to do next with it? [speaker003:] Um, I don't know. I'm surprised, because I expected the neural net to help more when there is more mismatch, as it was the case for the [disfmarker] [speaker002:] Mm hmm. [speaker004:] So, was the training set same as the p the February proposal? OK. [speaker003:] Yeah, it's the same training set, so it's TIMIT with the TI digits', uh, noises, uh, added. [speaker002:] Mm hmm. [speaker003:] Um [disfmarker] [speaker002:] Well, we might [disfmarker] uh, we might have to experiment with, uh better training sets. Again. [speaker003:] Mm hmm. [speaker002:] But, I [disfmarker] The other thing is, I mean, before you found that was the best configuration, but you might have to retest those things now that we have different [disfmarker] The rest of it is different, right? So, um, uh, For instance, what's the effect of just putting the neural net on without the o other [disfmarker] other path? [speaker003:] Mm hmm. [speaker002:] I mean, you know what the straight features do. [speaker003:] Yeah. [speaker002:] That gives you this. [speaker003:] Mm hmm. [speaker002:] You know what it does in combination. You don't necessarily know what [disfmarker] [speaker001:] What if you did the [disfmarker] Would it make sense to do the KLT on the full set of combined features? 
Instead of just on the [disfmarker] [speaker003:] Yeah. I g I guess. Um. The reason I did it this ways is that in February, it [disfmarker] we [disfmarker] we tested different things like that, so, having two KLT, having just a KLT for a network, or having a global KLT. [speaker001:] Oh, I see. [speaker003:] And [disfmarker] [speaker001:] So you tried the global KLT before [speaker003:] Well [disfmarker] Yeah. [speaker001:] and it didn't really [disfmarker] [speaker003:] And, uh, th Yeah. The differences between these configurations were not huge, [speaker001:] I see. [speaker003:] but [disfmarker] it was marginally better with this configuration. [speaker001:] Uh huh. Uh huh. [speaker002:] But, yeah, that's obviously another thing to try, [speaker003:] Um. [speaker002:] since things are [disfmarker] things are different. [speaker003:] Mm hmm. Mm hmm. [speaker002:] And I guess if the [disfmarker] These are all [disfmarker] so all of these seventy three features are going into, um, the, uh [disfmarker] the HMM. [speaker003:] Yeah. [speaker002:] And is [disfmarker] are [disfmarker] i i are [disfmarker] are any deltas being computed of tha of them? [speaker003:] Of the straight features, yeah. So. [speaker002:] n Not of the [disfmarker] [speaker003:] But n th the, um, tandem features are u used as they are. [speaker002:] Are not. [speaker003:] So, yeah, maybe we can add some context from these features also as [disfmarker] Dan did in [disfmarker] in his last work. [speaker002:] Could. i Yeah, but the other thing I was thinking was, um [disfmarker] Uh, now I lost track of what I was thinking. But. [speaker001:] What is the [disfmarker] You said there was a limit of sixty features or something? [speaker003:] Mm hmm. [speaker001:] What's the relation between that limit and the, um, forty eight [disfmarker] uh, forty eight hundred bits per second? [speaker002:] Oh, I know what I was gonna say. [speaker003:] Um, not [disfmarker] no relation. [speaker002:] No relation. 
[speaker001:] So I [disfmarker] I [disfmarker] I don't understand, [speaker003:] The f the forty eight hundred bits is for transmission of some features. [speaker001:] because i I mean, if you're only using h [speaker003:] And generally, i it [disfmarker] s allows you to transmit like, fifteen, uh, cepstrum. [speaker002:] The issue was that, um, this is supposed to be a standard that's then gonna be fed to somebody's recognizer somewhere which might be, you know, it [disfmarker] it might be a concern how many parameters are use [disfmarker] u used and so forth. And so, uh, they felt they wanted to set a limit. So they chose sixty. Some people wanted to use hundreds of parameters and [disfmarker] and that bothered some other people. u And so [speaker001:] Uh huh. [speaker002:] they just chose that. I [disfmarker] I [disfmarker] I think it's kind of r arbitrary too. But [disfmarker] but that's [disfmarker] that's kind of what was chosen. I [disfmarker] I remembered what I was going to say. What I was going to say is that, um, maybe [disfmarker] [vocalsound] maybe with the noise removal, uh, these things are now more correlated. So you have two sets of things that are kind of uncorrelated, uh, within themselves, but they're pretty correlated with one another. [speaker003:] Mm hmm. [speaker002:] And, um, they're being fed into these, uh, variance-only Gaussians and so forth, and [disfmarker] and, uh, [speaker003:] Mm hmm. [speaker002:] so maybe it would be a better idea now than it was before to, uh, have, uh, one KLT over everything, [speaker003:] Mm hmm. [speaker002:] to de correlate it. [speaker003:] Yeah, I see. [speaker002:] Maybe. You know. [speaker004:] What are the S N Rs in the training set, TIMIT? [speaker003:] It's, uh, ranging from zero to clean? Yeah. [speaker004:] Mm hmm. [speaker003:] From zero to clean. [speaker002:] Yeah.
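The KLT being proposed here is a principal-component rotation estimated on training data; a single global KLT over the concatenated 73 features (45 MFCC-based plus 28 tandem), truncated to fit the 60-feature limit, might look like this (the data and function names are hypothetical; only the dimensions come from the discussion):

```python
import numpy as np

def fit_klt(features, keep):
    """Estimate a KLT (PCA rotation) and keep the top `keep` directions."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    order = np.argsort(eigvals)[::-1]          # largest variance first
    return mu, eigvecs[:, order[:keep]]

def apply_klt(features, mu, basis):
    return (features - mu) @ basis             # decorrelated, truncated features

# 73 concatenated features (45 MFCC + deltas, 28 tandem) -> 60-feature limit
rng = np.random.default_rng(0)
train = rng.standard_normal((1000, 73))        # dummy training data
mu, basis = fit_klt(train, keep=60)
out = apply_klt(train, mu, basis)
print(out.shape)                               # (1000, 60)
```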
So we found this [disfmarker] this, uh [disfmarker] this Macrophone data, and so forth, that we were using for these other experiments, to be pretty good. [speaker003:] Mm hmm. [speaker002:] So that's [disfmarker] i after you explore these other alternatives, that might be another way to start looking, is [disfmarker] is just improving the training set. [speaker003:] Mm hmm. [speaker002:] I mean, we were getting, uh, lots better recognition using that, than [disfmarker] Of course, you do have the problem that, um, u i [comment] we are not able to increase the number of Gaussians, uh, or anything to, uh, uh, to match anything. So we're only improving the training of our feature set, but that's still probably something. [speaker001:] So you're saying, add the Macrophone data to the training of the neural net? The tandem net? [speaker002:] Yeah, that's the only place that we can train. We can't train the other stuff with anything other than the standard amount, [speaker001:] Yeah. Right. [speaker002:] so. Um, um [disfmarker] [speaker001:] What [disfmarker] what was it trained on again? The one that you used? [speaker003:] It's TIMIT with noise. [speaker001:] Uh huh. [speaker003:] So, yeah, it's rather a small [disfmarker] [speaker002:] Yeah. [speaker003:] Um, [speaker002:] How big is the net, by the way? [speaker003:] Uh, it's, uh, five hundred hidden units. And [disfmarker] [speaker002:] And again, you did experiments back then where you made it bigger and it [disfmarker] and that was [disfmarker] that was sort of the threshold point. Much less than that, it was worse, [speaker003:] Yeah. [speaker002:] and [speaker003:] Yeah. [speaker002:] much more than that, it wasn't much better. Hmm. [speaker004:] So is it [disfmarker] is it though the performance degradation in the high ma high mismatch has something to do with the, uh, cleaning up that you [disfmarker] that is done on the TIMIT after adding noise? [speaker003:] Yeah. [@ @]?
[speaker004:] So [disfmarker] it's [disfmarker] i All the noises are from the TI digits, right? [speaker003:] Yeah. [speaker004:] So you [disfmarker] i [speaker003:] Um [disfmarker] [speaker004:] Well, it it's like the high mismatch of the SpeechDat Car [speaker003:] They [disfmarker] k uh [disfmarker] [speaker004:] after cleaning up, maybe having more noise than the [disfmarker] the training set of TIMIT after clean [disfmarker] s after you do the noise clean up. [speaker003:] Mmm. [speaker004:] I mean, earlier you never had any compensation, you just trained it straight away. [speaker003:] Mm hmm. [speaker004:] So it had like all these different conditions of S N Rs, actually in their training set of neural net. [speaker003:] Mm hmm. Mm hmm. [speaker004:] But after cleaning up you have now a different set of S N Rs, right? [speaker003:] Yeah. [speaker004:] For the training of the neural net. [speaker003:] Mm hmm. [speaker004:] And [disfmarker] is it something to do with the mismatch that [disfmarker] that's created after the cleaning up, like the high mismatch [disfmarker] [speaker003:] You mean the [disfmarker] the most noisy occurrences on SpeechDat Car might be a lot more noisy than [disfmarker] [speaker004:] Mm hmm. Of [disfmarker] that [disfmarker] I mean, the SNR after the noise compensation of the SpeechDat Car. [speaker002:] Oh, so [disfmarker] Right. So the training [disfmarker] the [disfmarker] the neural net is being trained with noise compensated stuff. [speaker003:] Maybe. [speaker004:] Yeah. [speaker003:] Yeah, yeah. [speaker004:] Yeah. [speaker002:] Which makes sense, but, uh, you're saying [disfmarker] Yeah, the noisier ones are still going to be, even after our noise compensation, are still gonna be pretty noisy. [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker004:] Yeah, so now the after noise compensation the neural net is seeing a different set of S N Rs than that was originally there in the training set. Of TIMIT. 
Because in the TIMIT it was zero to some clean. [speaker002:] Right. Yes. [speaker004:] So the net saw all the SNR [@ @] conditions. [speaker002:] Right. [speaker004:] Now after cleaning up it's a different set of SNR. [speaker002:] Right. [speaker004:] And that SNR may not be, like, com covering the whole set of S N Rs that you're getting in the SpeechDat Car. [speaker002:] Right, but the SpeechDat Car data that you're seeing is also reduced in noise [speaker004:] Yeah, yeah, yeah, [speaker003:] Yeah. [speaker002:] by the noise compensation. [speaker004:] yeah, it is. But, I'm saying, there could be some [disfmarker] some issues of [disfmarker] [speaker002:] So. [speaker003:] Mm hmm. [speaker002:] Yeah. [speaker003:] Well, if the initial range of SNR is different, we [disfmarker] the problem was already there before. And [disfmarker] [speaker002:] Yeah. [speaker003:] Because [disfmarker] Mmm [disfmarker] [speaker002:] Yeah, I mean, it depends on whether you believe that the noise compensation is equally reducing the noise on the test set and the training set. [speaker003:] Hmm. [speaker004:] On the test set, yeah. [speaker002:] Uh [disfmarker] Right? I mean, you're saying there's a mismatch in noise that wasn't there before, [speaker004:] Hmm. Mm hmm. Mm hmm. [speaker002:] but if they were both the same before, then if they were both reduic reduced equally, then, there would not be a mismatch. So, I mean, this may be [disfmarker] Heaven forbid, this noise compensation process may be imperfect, but. Uh, [speaker003:] Yeah, uh [disfmarker] [speaker004:] Well, I [disfmarker] [speaker002:] so maybe it's treating some things differently. [speaker004:] I don't know. I [disfmarker] I just [disfmarker] that could be seen from the TI digits, uh, testing condition because, um, the noises are from the TI digits, right? Noise [disfmarker] [speaker003:] Yeah. 
So [disfmarker] [speaker004:] So cleaning up the TI digits and if the performance goes down in the TI digits mismatch [disfmarker] high mismatch like this [disfmarker] [speaker003:] Clean training, yeah. [speaker004:] on a clean training, or zero DB testing. [speaker003:] Yeah, we'll [disfmarker] so we'll see. [speaker004:] Yeah. [speaker003:] Uh. [speaker004:] Then it's something to do. [speaker003:] Maybe. Mm hmm. Yeah. [speaker002:] I mean, one of the things about [disfmarker] I mean, the Macrophone data, um, I think, you know, it was recorded over many different telephones. [speaker003:] Mm hmm. [speaker002:] And, um, so, there's lots of different kinds of acoustic conditions. I mean, it's not artificially added noise or anything. So it's not the same. I don't think there's anybody recording over a car from a car, but [disfmarker] I think it's [disfmarker] it's varied enough that if [disfmarker] if doing this adjustments, uh, and playing around with it doesn't, uh, make it better, the most [disfmarker] uh, it seems like the most obvious thing to do is to improve the training set. Um [disfmarker] I mean, what we were [disfmarker] uh [disfmarker] the condition [disfmarker] It [disfmarker] it gave us an enormous amount of improvement in what we were doing with Meeting Recorder digits, even though there, again, these m Macrophone digits were very, very different from, uh, what we were going on here. I mean, we weren't talking over a telephone here. But it was just [disfmarker] I think just having a [disfmarker] a nice variation in acoustic conditions was just a good thing. [speaker003:] Mm hmm. Yep. [speaker004:] Mmm. [speaker003:] Yeah, actually [vocalsound] to s eh, what I observed in the HM case is that the number of deletion dramatically increases. It [disfmarker] it doubles. [speaker002:] Number of deletions. [speaker003:] When I added the num the neural network it doubles the number of deletions. 
Yeah, so I don't you know [vocalsound] how to interpret that, but, mmm [disfmarker] [speaker002:] Yeah. Me either. [speaker003:] t [speaker001:] And [disfmarker] and did [disfmarker] an other numbers stay the same? Insertion substitutions stay the same? [speaker003:] They p stayed the same, they [disfmarker] maybe they are a little bit uh, lower. [speaker001:] Roughly? Uh huh. [speaker003:] They are a little bit better. Yeah. But [disfmarker] Mm hmm. [speaker002:] Did they increase the number of deletions even for the cases that got better? Say, for the [disfmarker] I mean, it [disfmarker] [speaker003:] No, it doesn't. No. [speaker002:] So it's only the highly mismatched? And it [disfmarker] Remind me again, the "highly mismatched" means that the [disfmarker] [speaker003:] Clean training and [disfmarker] [speaker002:] Uh, sorry? [speaker003:] It's clean training [disfmarker] Well, close microphone training and distant microphone, um, high speed, I think. [speaker002:] Close mike training [disfmarker] [speaker003:] Well [disfmarker] The most noisy cases are the distant microphone for testing. [speaker002:] Right. So [disfmarker] Well, maybe the noise subtraction is subtracting off speech. [speaker003:] Separating. Yeah. But [disfmarker] [speaker002:] Wh [speaker003:] Yeah. I mean, but without the neural network it's [disfmarker] well, it's better. It's just when we add the neural networks. The feature are the same except that [disfmarker] [speaker002:] Yeah, right. Uh, that's right, that's right. Um [disfmarker] [speaker001:] Well that [disfmarker] that says that, you know, the, um [disfmarker] the models in [disfmarker] in, uh, the recognizer are really paying attention to the neural net features. [speaker003:] Yeah. Mm hmm. [speaker001:] Uh. [speaker002:] But, yeah, actually [disfmarker] [nonvocalsound] the TIMIT noises [pause] are sort of a range of noises and they're not so much the stationary driving kind of noises, right? 
It's [disfmarker] it's pretty different. Isn't it? [speaker003:] Uh, there is a car noise. So there are f just four noises. Um, uh, "Car", I think, [speaker004:] "Babble." [speaker003:] "Babble", "Subway", right? and [disfmarker] [speaker004:] "Street" or "Airport" or something. [speaker003:] and [disfmarker] "Street" isn't [disfmarker] [speaker004:] Or "Train station". [speaker003:] "Train station", yeah. [speaker004:] Yeah. [speaker003:] So [disfmarker] it's mostly [disfmarker] Well, "Car" is stationary, [speaker002:] Mm hmm. [speaker003:] "Babble", it's a stationary background plus some voices, [speaker002:] Mm hmm. [speaker003:] some speech over it. And the other two are rather stationary also. [speaker002:] Well, I [disfmarker] I think that if you run it [disfmarker] Actually, you [disfmarker] maybe you remember this. When you [disfmarker] in [disfmarker] in the old experiments when you ran with the neural net only, and didn't have this side path, um, uh, with the [disfmarker] the pure features as well, did it make things better to have the neural net? [speaker003:] Mm hmm. [speaker002:] Was it about the same? Uh, w i [speaker003:] It was [disfmarker] b a little bit worse. [speaker002:] Than [disfmarker]? [speaker003:] Than just the features, yeah. [speaker002:] So, until you put the second path in with the pure features, the neural net wasn't helping at all. [speaker003:] Mm hmm. [speaker002:] Well, that's interesting. [speaker003:] It was helping, uh, if the features are b were bad, I mean. [speaker002:] Yeah. [speaker003:] Just plain P L Ps or M F [speaker002:] Yeah. [speaker003:] C Cs. as soon as we added LDA on line normalization, and [vocalsound] all these things, then [disfmarker] [speaker002:] They were doing similar enough things. Well, I still think it would be k sort of interesting to see what would happen if you just had the neural net without the side thing. [speaker003:] Yeah, mm hmm. 
[speaker002:] And [disfmarker] and the thing I [disfmarker] I have in mind is, uh, maybe you'll see that the results are not just a little bit worse. Maybe that they're a lot worse. You know? And, um [disfmarker] But if on the ha other hand, uh, it's, say, somewhere in between what you're seeing now and [disfmarker] and [disfmarker] and, uh, what you'd have with just the pure features, then maybe there is some problem of a [disfmarker] of a, uh, combination of these things, or correlation between them somehow. [speaker003:] Mm hmm. [speaker002:] If it really is that the net is hurting you at the moment, then I think the issue is to focus on [disfmarker] on, uh, improving the [disfmarker] the net. [speaker003:] Yeah, mm hmm. [speaker002:] Um. So what's the overall effe I mean, you haven't done all the experiments but you said it was i somewhat better, say, five percent better, for the first two conditions, and fifteen percent worse for the other one? But it's [disfmarker] but of course that one's weighted lower, [speaker003:] Y yeah, oh. Yeah. [speaker002:] so I wonder what the net effect is. [speaker003:] I d I [disfmarker] I think it's [disfmarker] it was one or two percent. That's not that bad, but it was l like two percent relative worse on SpeechDat Car. I have to [disfmarker] to check that. Well, I have [disfmarker] I will. [speaker004:] Well, it will [disfmarker] overall it will be still better even if it is fifteen percent worse, because the fifteen percent worse is given like f w twenty five [disfmarker] point two five eight. [speaker002:] Right. [speaker003:] Mm hmm. Hmm. [speaker002:] Right. So the [disfmarker] so the worst it could be, if the others were exactly the same, is four, [speaker004:] Is it like [disfmarker] Yeah, so it's four. [speaker002:] and [disfmarker] and, uh, in fact since the others are somewhat better [disfmarker] [speaker004:] Is i So either it'll get cancelled out, or you'll get, like, almost the same. 
[speaker003:] Yeah, it was [disfmarker] it was slightly worse. [speaker002:] Uh. [speaker004:] Slightly bad. Yeah. [speaker003:] Um, [speaker002:] Yeah, it should be pretty close to cancelled out. [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker001:] You know, I've been wondering about something. In the, um [disfmarker] a lot of the, um [disfmarker] the Hub five systems, um, recently have been using LDA. and [disfmarker] and they, um [disfmarker] They run LDA on the features right before they train the models. So there's the [disfmarker] the LDA is [disfmarker] is right there before the H M [speaker004:] Yeah. [speaker001:] So, you guys are using LDA but it seems like it's pretty far back in the process. [speaker004:] Uh, this LDA is different from the LDA that you are talking about. The LDA that you [disfmarker] saying is, like, you take a block of features, like nine frames or something, [comment] and then do an LDA on it, [speaker001:] Yeah. Uh huh. [speaker004:] and then reduce the dimensionality to something like twenty four or something like that. [speaker001:] Yeah, you c you c you can. [speaker004:] And then feed it to HMM. [speaker001:] I mean, it's [disfmarker] you know, you're just basically i [speaker004:] Yeah, so this is like a two d two dimensional tile. [speaker001:] You're shifting the feature space. Yeah. [speaker004:] So this is a two dimensional tile. And the LDA that we are f applying is only in time, not in frequency [disfmarker] high cost frequency. So it's like [disfmarker] more like a filtering in time, rather than doing a r [speaker001:] Ah. OK. So what i what about, um [disfmarker] i u what i w I mean, I don't know if this is a good idea or not, but what if you put [disfmarker] ran the other kind of LDA, uh, on your features right before they go into the HMM? [speaker004:] Uh, it [disfmarker] [speaker003:] Mm hmm. No, actually, I think [disfmarker] i [speaker004:] m [speaker003:] Well. 
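[The distinction drawn here between the two kinds of "LDA tile" - a 2-D block of stacked frames projected down, versus a filter applied only along time - can be illustrated with a small sketch. This is a hypothetical numpy illustration of the two data layouts only (no LDA matrices are trained); the context size and filter are placeholders.]

```python
import numpy as np

def stack_frames(feats, context=9):
    """Tile used by the Hub-5-style LDA: each output row is `context`
    consecutive frames concatenated, which a trained LDA matrix would
    later project down to ~24 dimensions (mixing time and frequency)."""
    n, d = feats.shape
    half = context // 2
    padded = np.pad(feats, ((half, half), (0, 0)), mode="edge")
    return np.stack([padded[i:i + context].ravel() for i in range(n)])

def temporal_filter(feats, fir):
    """Tile used by the temporal LDA discussed here: one FIR filter applied
    along time to each feature dimension independently - no mixing across
    cepstral/frequency dimensions, so it acts like a filter, not a rotation."""
    return np.stack([np.convolve(feats[:, j], fir, mode="same")
                     for j in range(feats.shape[1])], axis=1)

rng = np.random.default_rng(1)
f = rng.normal(size=(100, 13))                     # 100 frames, 13 cepstra
tiled = stack_frames(f)                            # ready for a k x 117 LDA
smoothed = temporal_filter(f, np.ones(5) / 5)      # 5-tap average as stand-in
```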
What do we do with the ANN is [disfmarker] is something like that except that it's not linear. But it's [disfmarker] it's like a nonlinear discriminant analysis. [speaker001:] Yeah. Right, it's the [disfmarker] It's [disfmarker] Right. The [disfmarker] So [disfmarker] [speaker003:] But. [speaker001:] Yeah, so it's sort of like [disfmarker] The tandem stuff is kind of like i nonlinear LDA. [speaker003:] Yeah. It's [disfmarker] [speaker001:] I g [speaker003:] Yeah. [speaker002:] Yeah. [speaker001:] Yeah. [speaker003:] Uh. [speaker001:] But I mean, w but the other features that you have, um, th the non tandem ones, [speaker003:] Mm hmm. Yeah, I know. That [disfmarker] that [disfmarker] Yeah. Well, in the proposal, they were transformed u using PCA, but [disfmarker] [speaker001:] Uh huh. [speaker003:] Yeah, it might be that LDA could be better. [speaker002:] The a the argument i is kind of i in [disfmarker] and it's not like we really know, but the argument anyway is that, um, uh, we always have the prob I mean, discriminative things are good. LDA, neural nets, they're good. [speaker001:] Yeah. [speaker002:] Uh, they're good because you [disfmarker] you [disfmarker] you learn to distinguish between these categories that you want to be good at distinguishing between. And PCA doesn't do that. It [disfmarker] PAC PCA [disfmarker] low order PCA throws away pieces that are uh, maybe not [disfmarker] not gonna be helpful just because they're small, basically. [speaker001:] Right. [speaker002:] But, uh, the problem is, training sets aren't perfect and testing sets are different. So you f you [disfmarker] you face the potential problem with discriminative stuff, be it LDA or neural nets, that you are training to discriminate between categories in one space but what you're really gonna be g getting is [disfmarker] is something else. [speaker001:] Uh huh. 
[speaker002:] And so, uh, Stephane's idea was, uh, let's feed, uh, both this discriminatively trained thing and something that's not. So you have a good set of features that everybody's worked really hard to make, [speaker001:] Yeah. [speaker002:] and then, uh, you [disfmarker] you discriminately train it, but you also take the path that [disfmarker] that doesn't have that, [speaker001:] Uh huh. [speaker002:] and putting those in together. And that [disfmarker] that seem So it's kind of like a combination of the [disfmarker] uh, what, uh, Dan has been calling, you know, a feature [disfmarker] uh, you know, a feature combination versus posterior combination or something. It's [disfmarker] it's, you know, you have the posterior combination but then you get the features from that and use them as a feature combination with these [disfmarker] these other things. And that seemed, at least in the last one, as he was just saying, he [disfmarker] he [disfmarker] when he only did discriminative stuff, i it actually was [disfmarker] was [disfmarker] it didn't help at all in this particular case. [speaker001:] Yeah. [speaker002:] There was enough of a difference, I guess, between the testing and training. But by having them both there [disfmarker] The fact is some of the time, the discriminative stuff is gonna help you. [speaker001:] Mm hmm. [speaker002:] And some of the time it's going to hurt you, and by combining two information sources if, you know [disfmarker] if [disfmarker] if [disfmarker] [speaker001:] Right. So you wouldn't necessarily then want to do LDA on the non tandem features because now you're doing something to them that [disfmarker] [speaker002:] That i i I think that's counter to that idea. [speaker001:] Yeah, [speaker002:] Now, again, it's [disfmarker] we're just trying these different things. [speaker001:] right. [speaker002:] We don't really know what's gonna work best. 
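[The feature-combination idea described here - the discriminatively trained tandem outputs fed in side by side with a non-discriminative (PCA) version of the plain features - can be sketched roughly as below. This is a minimal illustrative version assuming numpy; the dimensionalities and the SVD-based PCA are placeholders, not the system's actual configuration.]

```python
import numpy as np

def pca_reduce(x, k):
    """Project centered features onto their top-k principal components
    (non-discriminative: keeps directions of large variance only)."""
    c = x - x.mean(axis=0)
    _, _, vt = np.linalg.svd(c, full_matrices=False)
    return c @ vt[:k].T

def combine_streams(tandem_out, plain_feats, k=8):
    """Feature combination: discriminatively trained tandem outputs next to
    PCA'd plain features, so the back end can lean on either stream when
    train/test mismatch hurts the discriminative one."""
    return np.hstack([tandem_out, pca_reduce(plain_feats, k)])

rng = np.random.default_rng(2)
tandem = rng.random((50, 28))        # e.g. per-frame phone posteriors
plain = rng.normal(size=(50, 39))    # e.g. cepstra plus deltas
combined = combine_streams(tandem, plain)
```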
But if that's the hypothesis, at least it would be counter to that hypothesis to do that. [speaker001:] Right. [speaker002:] Um, and in principle you would think that the neural net would do better at the discriminant part than LDA. [speaker001:] Right. Yeah. [speaker002:] Though, maybe not. [speaker001:] Well [disfmarker] y Yeah. Exactly. I mean, we, uh [disfmarker] we were getting ready to do the tandem, uh, stuff for the Hub five system, and, um, Andreas and I talked about it, and the idea w the thought was, Well, uh, yeah, that i you know [disfmarker] th the neural net should be better, but we should at least have uh, a number, you know, to show that we did try the LDA in place of the neural net, so that we can you know, show a clear path. [speaker002:] Right. [speaker001:] You know, that you have it without it, then you have the LDA, then you have the neural net, and you can see, theoretically. So. I was just wondering [disfmarker] I [disfmarker] I [disfmarker] [speaker002:] Well, I think that's a good idea. [speaker001:] Yeah. [speaker002:] Did [disfmarker] did you do that or [disfmarker] tha that's a [disfmarker] [speaker001:] Um. No. That's what [disfmarker] that's what we're gonna do next as soon as I finish this other thing. [speaker002:] Yeah. [speaker001:] So. [speaker002:] Yeah. No, well, that's a good idea. I [disfmarker] I [disfmarker] i Yeah. [speaker001:] We just want to show. I mean, it [disfmarker] everybody believes it, [speaker002:] Oh, no it's a g [speaker001:] but you know, we just [disfmarker] [speaker002:] No, no, but it might not [disfmarker] not even be true. I mean, it's [disfmarker] it's [disfmarker] it's [disfmarker] it's [disfmarker] it's a great idea. [speaker001:] Yeah. 
[speaker002:] I mean, one of the things that always disturbed me, uh, in the [disfmarker] the resurgence of neural nets that happened in the eighties was that, um, a lot of people [disfmarker] Because neural nets were pretty easy to [disfmarker] to use [disfmarker] a lot of people were just using them for all sorts of things without, uh, looking at all into the linear, uh [disfmarker] uh, versions of them. [speaker001:] Yeah. Mm hmm. [speaker002:] And, uh, people were doing recurrent nets but not looking at IIR filters, [speaker001:] Yeah. [speaker002:] and [disfmarker] You know, I mean, uh, so I think, yeah, it's definitely a good idea to try it. [speaker001:] Yeah, and everybody's putting that on their [vocalsound] systems now, and so, I that's what made me wonder about this, [speaker002:] Well, they've been putting them in their systems off and on for ten years, [speaker001:] but. [speaker002:] but [disfmarker] but [disfmarker] but, uh, [speaker001:] Yeah, what I mean is it's [disfmarker] it's like in the Hub five evaluations, you know, and you read the system descriptions and everybody's got, [vocalsound] you know, LDA on their features. [speaker002:] And now they all have that. I see. [speaker001:] And so. [speaker002:] Yeah. [speaker001:] Uh. [speaker003:] It's the transformation they're estimating on [disfmarker] Well, they are trained on the same data as the final HMM are. [speaker001:] Yeah, so it's different. Yeah, exactly. Cuz they don't have these, you know, mismatches that [disfmarker] that you guys have. [speaker003:] Mm hmm. [speaker001:] So that's why I was wondering if maybe it's not even a good idea. [speaker003:] Mm hmm. [speaker001:] I don't know. I [disfmarker] I don't know enough about it, [speaker003:] Mm hmm. [speaker001:] but [disfmarker] Um. 
[speaker002:] I mean, part of why [disfmarker] I [disfmarker] I think part of why you were getting into the KLT [disfmarker] Y you were describing to me at one point that you wanted to see if, uh, you know, getting good orthogonal features was [disfmarker] and combining the [disfmarker] the different temporal ranges [disfmarker] was the key thing that was happening or whether it was this discriminant thing, right? So you were just trying [disfmarker] I think you r I mean, this is [disfmarker] it doesn't have the LDA aspect but th as far as the orthogonalizing transformation, you were trying that at one point, right? [speaker003:] Mm hmm. Mm hmm. [speaker002:] I think you were. [speaker003:] Yeah. [speaker002:] Does something. It doesn't work as well. Yeah. Yeah. [speaker004:] So, yeah, I've been exploring a parallel VAD without neural network with, like, less latency using SNR and energy, um, after the cleaning up. So what I'd been trying was, um, uh [disfmarker] After the b after the noise compensation, n I was trying t to f find a f feature based on the ratio of the energies, that is, cl after clean and before clean. So that if [disfmarker] if they are, like, pretty c close to one, which means it's speech. And if it is n if it is close to zero, which is [disfmarker] So it's like a scale [@ @] probability value. So I was trying, uh, with full band and multiple bands, m ps uh [disfmarker] separating them to different frequency bands and deriving separate decisions on each bands, and trying to combine them. Uh, the advantage being like it doesn't have the latency of the neural net if it [disfmarker] if it can [speaker002:] Mm hmm. [speaker004:] g And [pause] it gave me like, uh, one point [disfmarker] One [disfmarker] more than one percent relative improvement. So, from fifty three point six it went to fifty f four point eight. So it's, like, only slightly more than a percent improvement, [speaker002:] Mm hmm. 
[speaker004:] just like [disfmarker] Which means that it's [disfmarker] it's doing a slightly better job than the previous VAD, [speaker002:] Mm hmm. [speaker004:] uh, at a l lower delay. [speaker002:] Mm hmm. [speaker004:] Um, so, um [disfmarker] so [disfmarker] u [speaker002:] But [disfmarker] i d I'm sorry, does it still have the median [pause] filter stuff? [speaker004:] It still has the median filter. [speaker002:] So it still has most of the delay, [speaker004:] So [disfmarker] [speaker002:] it just doesn't [disfmarker] [speaker004:] Yeah, so d with the delay, that's gone is the input, which is the sixty millisecond. The forty plus [pause] twenty. At the input of the neural net you have this, uh, f nine frames of context plus the delta. [speaker002:] Well, w i [speaker003:] Mm hmm. [speaker002:] Oh, plus the delta, right. [speaker004:] Yeah. [speaker002:] OK. [speaker004:] So that delay, plus the LDA. [speaker002:] Mm hmm. [speaker004:] Uh, so the delay is only the forty millisecond of the noise cleaning, plus the hundred millisecond smoothing at the output. [speaker002:] Mm hmm. Mm hmm. [speaker004:] Um. So. Yeah. So the [disfmarker] the [disfmarker] di the biggest [disfmarker] The problem f for me was to find a consistent threshold that works [pause] well across the different databases, because I t I try to make it work on tr SpeechDat Car [speaker002:] Mm hmm. [speaker004:] and it fails on TI digits, or if I try to make it work on that it's just the Italian or something, it doesn't work on the Finnish. [speaker002:] Mm hmm. [speaker004:] So, um. So there are [disfmarker] there was, like, some problem in balancing the deletions and insertions when I try different thresholds. [speaker002:] Mm hmm. 
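[The energy-ratio VAD feature described above - ratio of sub-band energies after vs. before noise cleaning, near 1 for speech and near 0 for noise, with per-band decisions combined - might look roughly like this. A hypothetical numpy sketch; the band count, threshold, and majority-vote combination are stand-ins for whatever was actually tried.]

```python
import numpy as np

def energy_ratio_vad(noisy, cleaned, n_bands=4, threshold=0.5):
    """Per-frame speech/nonspeech flag from sub-band energy ratios.

    noisy, cleaned: (n_frames, n_bins) magnitude-squared spectra before
    and after noise compensation.  Ratio near 1 means cleaning removed
    little energy (likely speech); near 0 means mostly noise was removed.
    """
    bands = np.array_split(np.arange(noisy.shape[1]), n_bands)
    votes = []
    for b in bands:
        e_noisy = noisy[:, b].sum(axis=1) + 1e-10
        e_clean = cleaned[:, b].sum(axis=1)
        ratio = np.clip(e_clean / e_noisy, 0.0, 1.0)
        votes.append(ratio > threshold)
    # combine the per-band decisions by majority vote
    return np.mean(votes, axis=0) >= 0.5

# toy example: frames 0-4 are "noise only" (cleaning removes 90% of the
# energy), frames 5-9 are "speech" (cleaning removes almost nothing)
noisy = np.ones((10, 8))
cleaned = np.ones((10, 8))
cleaned[:5] *= 0.1
flags = energy_ratio_vad(noisy, cleaned)
```

The threshold-balancing problem mentioned above shows up here directly: a single `threshold` has to trade deletions against insertions across all the databases at once.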
[speaker004:] So [disfmarker] The [disfmarker] I'm still trying to make it better by using some other features from the [disfmarker] after the p clean up [disfmarker] maybe, some, uh, correlation [disfmarker] auto correlation or some s additional features of [disfmarker] to mainly the improvement of the VAD. I've been trying. [speaker002:] Now this [disfmarker] this [disfmarker] this, uh, "before and after clean", it sounds like you think that's a good feature. That [disfmarker] that, it [disfmarker] you th think that the, uh [disfmarker] the [disfmarker] i it appears to be a good feature, right? [speaker004:] Mm hmm. Yeah. [speaker002:] What about using it in the neural net? [speaker003:] Yeah, eventually we could [disfmarker] could just [speaker004:] Yeah, so [disfmarker] Yeah, so that's the [disfmarker] Yeah. So we've been thinking about putting it into the neural net also. [speaker002:] Yeah. [speaker004:] Because they did [disfmarker] that itself [disfmarker] [speaker003:] Then you don't have to worry about the thresholds and [disfmarker] [speaker004:] There's a threshold and [disfmarker] Yeah. Yeah. [speaker002:] Yeah. [speaker004:] So that [disfmarker] that's, uh [disfmarker] [speaker003:] but just [disfmarker] [speaker002:] Yeah. So if we [disfmarker] if we can live with the latency or cut the latencies elsewhere, then [disfmarker] then that would be a, uh, good thing. [speaker004:] Yeah. Yeah. [speaker002:] Um, anybody [disfmarker] has anybody [disfmarker] you guys or [disfmarker] or Naren, uh, somebody, tried the, uh, um, second th second stream thing? [speaker004:] Oh, I just [disfmarker] I just h put the second stream in place and, uh ran one experiment, [speaker002:] Uh. [speaker004:] but just like [disfmarker] just to know that everything is fine. [speaker002:] Uh huh. [speaker004:] So it was like, uh, forty five cepstrum plus twenty three mel [disfmarker] log mel. [speaker002:] Yeah. 
[speaker004:] And [disfmarker] and, just, like, it gave me the baseline performance of the Aurora, which is like zero improvement. [speaker002:] Yeah. Yeah. [speaker004:] So I just tried it on Italian just to know that everything is [disfmarker] But I [disfmarker] I didn't export anything out of it because it was, like, a weird feature set. [speaker002:] Yeah. [speaker004:] So. [speaker002:] Yeah. Well, what I think, you know, would be more what you'd want to do is [disfmarker] is [disfmarker] is, uh, put it into another neural net. [speaker004:] Yeah, yeah, yeah, yeah. [speaker003:] Mm hmm. [speaker002:] Right? And then [disfmarker] But, yeah, we're [disfmarker] we're not quite there yet. So we have to [vocalsound] figure out the neural nets, I guess. [speaker003:] Yeah. [speaker004:] The uh, other thing I was wondering was, um, if the neural net, um, has any [disfmarker] because of the different noise con unseen noise conditions for the neural net, where, like, you train it on those four noise conditions, while you are feeding it with, like, a additional [disfmarker] some four plus some [disfmarker] f few more conditions which it hasn't seen, actually, [speaker003:] Mm hmm. [speaker004:] from the [disfmarker] f f while testing. [speaker003:] Yeah, yeah. Right. [speaker004:] Um [disfmarker] instead of just h having c uh, those cleaned up t cepstrum, sh should we feed some additional information, like [disfmarker] The [disfmarker] the [disfmarker] We have the VAD flag. I mean, should we f feed the VAD flag, also, at the input so that it [disfmarker] it has some additional discriminating information at the input? [speaker003:] Hmm hmm! Um [disfmarker] [speaker002:] Wh uh, the [disfmarker] the VAD what? [speaker004:] We have the VAD information also available at the back end. [speaker002:] Uh huh. [speaker004:] So if it is something the neural net is not able to discriminate the classes [disfmarker] [speaker002:] Yeah. 
[speaker004:] I mean [disfmarker] Because most of it is sil I mean, we have dropped some silence f We have dropped so silence frames? [speaker002:] Mm hmm. [speaker004:] No, we haven't dropped silence frames still. [speaker003:] Uh, still not. Yeah. [speaker004:] Yeah. So [disfmarker] the b b biggest classification would be the speech and silence. [speaker003:] Th [speaker004:] So, by having an additional, uh, feature which says "this is speech and this is nonspeech", I mean, it certainly helps in some unseen noise conditions for the neural net. [speaker001:] What [disfmarker] Do y do you have that feature available for the test data? [speaker004:] Well, I mean, we have [disfmarker] we are transferring the VAD to the back end [disfmarker] feature to the back end. Because we are dropping it at the back end after everything [disfmarker] all the features are computed. [speaker001:] Oh, oh, I see. [speaker004:] So [disfmarker] [speaker001:] I see. [speaker004:] so the neural [disfmarker] so that is coming from a separate neural net or some VAD. [speaker001:] OK. OK. [speaker004:] Which is [disfmarker] which is certainly giving a [speaker001:] So you're saying, feed that, also, into [pause] the neural net. [speaker004:] to [disfmarker] Yeah. So it it's an [disfmarker] additional discriminating information. [speaker001:] Yeah. Yeah. Right. [speaker004:] So that [disfmarker] [speaker002:] You could feed it into the neural net. The other thing [comment] you could do is just, um, p modify the, uh, output probabilities of the [disfmarker] of the, uh, uh, um, neural net, tandem neural net, [comment] based on the fact that you have a silence probability. [speaker004:] Mm hmm. [speaker002:] Right? [speaker003:] Mm hmm. [speaker002:] So you have an independent estimator of what the silence probability is, and you could multiply the two things, and renormalize. Uh, I mean, you'd have to do the nonlinearity part and deal with that. [speaker003:] Yeah. 
[speaker002:] Uh, I mean, go backwards from what the nonlinearity would, you know [disfmarker] would be. [speaker004:] Through [disfmarker] t to the soft max. [speaker002:] But [disfmarker] but, uh [disfmarker] [speaker003:] Yeah, so [disfmarker] maybe, yeah, when [disfmarker] [speaker001:] But in principle wouldn't it be better to feed it in? And let the net do that? [speaker002:] Well, u Not sure. I mean, [speaker001:] Hmm. [speaker002:] let's put it this way. I mean, y you [disfmarker] you have this complicated system with thousands and thousand parameters [speaker001:] Yeah. [speaker002:] and you can tell it, uh, "Learn this thing." Or you can say, "It's silence! Go away!" I mean, I mean, i Doesn't [disfmarker]? I think [disfmarker] I think the second one sounds a lot more direct. [speaker001:] What [disfmarker] what if you [disfmarker] [speaker002:] Uh. [speaker001:] Right. So, what if you then, uh [disfmarker] since you know this, what if you only use the neural net on the speech portions? [speaker002:] Well, uh, [speaker003:] That's what [disfmarker] [speaker001:] Well, I guess that's the same. Uh, that's similar. [speaker002:] Yeah, I mean, y you'd have to actually run it continuously, but it's [disfmarker] [@ @] [disfmarker] [speaker001:] But I mean [disfmarker] I mean, train the net only on [disfmarker] [speaker002:] Well, no, you want to train on [disfmarker] on the nonspeech also, because that's part of what you're learning in it, to [disfmarker] to [disfmarker] to generate, that it's [disfmarker] it has to distinguish between. [speaker004:] Speech. [speaker001:] But I mean, if you're gonna [disfmarker] if you're going to multiply the output of the net by this other decision, uh, would [disfmarker] then you don't care about whether the net makes that distinction, right? [speaker002:] Well, yeah. But this other thing isn't perfect. [speaker001:] Ah. [speaker002:] So that you bring in some information from the net itself. [speaker001:] Right, OK. 
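[The "multiply the two things and renormalize" suggestion - combining the tandem net's posteriors with an independent silence probability from the VAD - can be sketched as below. This is an illustrative numpy version that works on the posteriors directly; the extra step discussed above of going backwards through the net's output nonlinearity is left out, and the class layout is made up.]

```python
import numpy as np

def rescale_silence(posteriors, p_sil_ext, sil_index=0):
    """Scale the silence class by the external silence estimate and the
    speech classes by its complement, then renormalize each frame.

    posteriors: (n_frames, n_classes), rows sum to 1 (tandem net output).
    p_sil_ext:  (n_frames,) silence probability from the separate VAD.
    """
    scaled = posteriors.copy()
    scaled[:, sil_index] *= p_sil_ext
    speech = np.ones(posteriors.shape[1], dtype=bool)
    speech[sil_index] = False
    scaled[:, speech] *= (1.0 - p_sil_ext)[:, None]
    return scaled / scaled.sum(axis=1, keepdims=True)

post = np.array([[0.6, 0.3, 0.1],    # net fairly sure this is silence
                 [0.2, 0.5, 0.3]])   # net leans toward speech
p_sil = np.array([0.1, 0.9])         # independent VAD disagrees both times
combined = rescale_silence(post, p_sil)
```

Because neither estimator is perfect, both still contribute: the VAD pulls the silence probability up or down, but the net's own posterior is never overridden outright.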
That's a good point. [speaker002:] Yeah. Now the only thing that [disfmarker] that bothers me about all this is that I [disfmarker] I [disfmarker] I [disfmarker] The [disfmarker] the fact [disfmarker] i i It's sort of bothersome that you're getting more deletions. [speaker003:] Yeah. But [disfmarker] So I might maybe look at, is it due to the fact that um, the probability of the silence at the output of the network, is, uh, [speaker002:] Is too high. [speaker003:] too [disfmarker] too high or [disfmarker] [speaker002:] Yeah. So maybe [disfmarker] [speaker003:] If it's the case, then multiplying it again by [disfmarker] i by something? [speaker002:] So [disfmarker] [speaker004:] It may not be [disfmarker] it [disfmarker] [speaker002:] Yeah. [speaker004:] Yeah, [speaker003:] Mm hmm. [speaker004:] it [disfmarker] it may be too [disfmarker] it's too high in a sense, like, everything is more like a, um, flat probability. [speaker002:] Yeah. [speaker003:] Oh eee hhh. [speaker004:] So, like, it's not really doing any distinction between speech and nonspeech [disfmarker] [speaker003:] Uh, yeah. [speaker004:] or, I mean, different [disfmarker] among classes. [speaker002:] Yeah. [speaker003:] Mm hmm. [speaker001:] Be interesting to look at the [disfmarker] Yeah, for the [disfmarker] I wonder if you could do this. But if you look at the, um, highly mism high mismat the output of the net on the high mismatch case and just look at, you know, the distribution versus the [disfmarker] the other ones, do you [disfmarker] do you see more peaks or something? [speaker003:] Yeah. Yeah, like the entropy of the [disfmarker] the output, [speaker001:] Yeah. [speaker003:] or [disfmarker] [speaker002:] Yeah, for instance. But I [disfmarker] bu [speaker003:] It [disfmarker] it seems that the VAD network doesn't [disfmarker] Well, it doesn't drop, uh, too many frames because the dele the number of deletion is reasonable. 
But it's just when we add the tandem, the final MLP, and then [disfmarker] [speaker002:] Yeah. Now the only problem is you don't want to ta I guess wait for the output of the VAD before you can put something into the other system, [speaker003:] u [speaker002:] cuz that'll shoot up the latency a lot, right? Am I missing something here? [speaker003:] But [disfmarker] [speaker004:] Mm hmm. [speaker003:] Yeah. Right. [speaker002:] Yeah. So that's maybe a problem with what I was just saying. But [disfmarker] but [disfmarker] I I guess [disfmarker] [speaker001:] But if you were gonna put it in as a feature it means you already have it by the time you get to the tandem net, right? [speaker004:] Um, well. We [disfmarker] w we don't have it, actually, because it's [disfmarker] it has a high rate energy [disfmarker] [speaker002:] No. [speaker004:] the VAD has a [disfmarker] [speaker001:] Ah. [speaker002:] Yeah. [speaker001:] OK. [speaker002:] It's kind of done in [disfmarker] I mean, some of the things are, not in parallel, but certainly, it would be in parallel with the [disfmarker] with a tandem net. [speaker001:] Right. [speaker002:] In time. So maybe, if that doesn't work, um [disfmarker] But it would be interesting to see if that was the problem, anyway. And [disfmarker] and [disfmarker] and then I guess another alternative would be to take the feature that you're feeding into the VAD, and feeding it into the other one as well. [speaker003:] Mm hmm. Mm hmm. [speaker002:] And then maybe it would just learn [disfmarker] learn it better. Um [disfmarker] But that's [disfmarker] Yeah, that's an interesting thing to try to see, if what's going on is that in the highly mismatched condition, it's, um, causing deletions by having this silence probability up [disfmarker] up too high, [speaker003:] Mm hmm. [speaker002:] at some point where the VAD is saying it's actually speech. [speaker003:] Yeah. So, m [speaker002:] Which is probably true. 
Cuz [disfmarker] Well, the V A if the VAD said [disfmarker] since the VAD is [disfmarker] is [disfmarker] is right a lot, uh [disfmarker] [speaker003:] Yeah. [speaker002:] Hmm. Anyway. Might be. [speaker003:] Mm hmm. [speaker002:] Yeah. Well, we just started working with it. But these are [disfmarker] these are some good ideas I think. [speaker003:] Mm hmm. Yeah, and the other thing [disfmarker] Well, there are other issues maybe for the tandem, like, uh, well, do we want to, w uh n Do we want to work on the targets? Or, like, instead of using phonemes, using more context dependent units? [speaker001:] For the tandem net you mean? [speaker003:] Well, I'm [disfmarker] Yeah. [speaker001:] Hmm. [speaker003:] I'm thinking, also, a w about Dan's work where he [disfmarker] he trained [vocalsound] a network, not on phoneme targets but on the HMM state targets. And [disfmarker] it was giving s slightly better results. [speaker002:] Problem is, if you are going to run this on different m test sets, including large vocabulary, [speaker003:] Yeah. Yeah. [speaker002:] um, [speaker003:] Uh [disfmarker] Mmm. [speaker002:] I think [disfmarker] [speaker003:] I was just thinking maybe about, like, generalized diphones, and [disfmarker] come up with a [disfmarker] a reasonable, not too large, set of context dependent units, and [disfmarker] and [disfmarker] Yeah. And then anyway we would have to reduce this with the KLT. [speaker002:] Yeah. [speaker003:] So. But [disfmarker] I don't know. [speaker002:] Yeah. Well, maybe. [speaker003:] Mm hmm. [speaker002:] But I d I d it [disfmarker] it [disfmarker] i it's all worth looking at, but it sounds to me like, uh, looking at the relationship between this and the [disfmarker] speech noise stuff is [disfmarker] is [disfmarker] is probably a key thing. [speaker003:] Mm hmm. [speaker002:] That and the correlation between stuff. 
[speaker001:] So if, uh [disfmarker] if the, uh, high mismatch case had been more like the, uh, the other two cases [comment] in terms of giving you just a better performance, [comment] how would this number have changed? [speaker003:] Mm hmm. Oh, it would be [disfmarker] Yeah. Around five percent better, I guess. If [disfmarker] if [disfmarker] i [speaker001:] y Like sixty? [speaker002:] Well, we don't know what's it's gonna be the TI digits yet. He hasn't got the results back yet. [speaker003:] Yeah. If you extrapolate the SpeechDat Car well matched and medium mismatch, it's around, yeah, maybe five. [speaker001:] Uh huh. Yeah. So this would be sixty two? [speaker002:] Sixty two. [speaker003:] Sixty two, [speaker002:] Yeah. [speaker004:] Somewhere around sixty, must be. [speaker001:] Which is [disfmarker] [speaker003:] yeah. [speaker004:] Right? Yeah. [speaker003:] Well, it's around five percent, because it's [disfmarker] s [speaker004:] Yeah. [speaker003:] Right? If everything is five percent. [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker001:] All the other ones were five percent, the [disfmarker] [speaker003:] I d I d I just have the SpeechDat Car right now, [speaker002:] Yeah. [speaker003:] so [disfmarker] [speaker001:] Yeah. [speaker003:] It's running [disfmarker] it shou we should have the results today during the afternoon, [speaker001:] Hmm. [speaker003:] but [disfmarker] Well. [speaker002:] Hmm. Well [disfmarker] Um [disfmarker] So I won't be here for [disfmarker] [speaker001:] When [disfmarker] When do you leave? [speaker002:] Uh, I'm leaving next Wednesday. May or may not be in in the morning. I leave in the afternoon. Um, so I [disfmarker] [speaker001:] But you're [disfmarker] are you [disfmarker] you're not gonna be around this afternoon? [speaker002:] Yeah. [speaker001:] Oh. [speaker002:] Oh, well. I'm talking about next week. I'm leaving [disfmarker] leaving next Wednesday. [speaker001:] Uh huh. 
[speaker002:] This afternoon [disfmarker] uh [disfmarker] Oh, right, for the Meeting meeting? Yeah, that's just cuz of something on campus. [speaker001:] Ah, OK, OK. [speaker002:] Yeah. But, um, yeah, so next week I won't, and the week after I won't, cuz I'll be in Finland. And the week after that I won't. By that time you'll be [disfmarker] [comment] Uh, you'll both be gone [pause] from here. So there'll be no [disfmarker] definitely no meeting on [disfmarker] on September sixth. Uh, and [disfmarker] [speaker001:] What's September sixth? [speaker002:] Uh, that's during Eurospeech. [speaker001:] Oh, oh, right. OK. [speaker002:] So, uh, Sunil will be in Oregon. Uh, Stephane and I will be in Denmark. Uh [disfmarker] Right? So it'll be a few weeks, really, before we have a meeting of the same cast of characters. Um, but, uh [disfmarker] I guess, just [disfmarker] I mean, you guys should probably meet. And maybe Barry [disfmarker] Barry will be around. And [disfmarker] and then uh, uh, we'll start up again with Dave and [disfmarker] Dave and Barry and Stephane and us on the, uh, twentieth. No. Thirteenth? About a month? [speaker001:] So, uh, you're gonna be gone for the next three weeks or something? [speaker002:] I'm gone for two and a half weeks starting [disfmarker] starting next Wed late next Wednesday. [speaker001:] So that's [disfmarker] you won't be at the next three of these meetings. Is that right? [speaker002:] Uh, I won't [disfmarker] it's probably four because of [disfmarker] is it three? Let's see, twenty third, thirtieth, sixth. That's right, next three. And the [disfmarker] the third one won't [disfmarker] probably won't be a meeting, cuz [disfmarker] cuz, uh, Su Sunil, Stephane, and I will all not be here. [speaker001:] Oh, right. Right. [speaker002:] Um [disfmarker] Mmm. [comment] So it's just, uh, the next two where there will be [disfmarker] there, you know, may as well be meetings, but I just won't be at them. [speaker001:] OK. 
[speaker002:] And then starting up on the thirteenth, [nonvocalsound] uh, we'll have meetings again but we'll have to do without Sunil here somehow. [speaker001:] When do you go back? [speaker002:] So. [speaker004:] Thirty first, August. [speaker002:] Yeah. Yeah. So. Cool. [speaker001:] When is the evaluation? November, or something? [speaker002:] Yeah, it was supposed to be November fifteenth. Has anybody heard anything different? [speaker003:] I don't know. The meeting in [disfmarker] is the five and six of December. [speaker004:] p s It's like [disfmarker] Yeah, it's tentatively all full. [speaker003:] So [disfmarker] [speaker004:] Yeah. [speaker003:] Mm hmm. [speaker004:] Uh, that's a proposed date, I guess. [speaker003:] Yeah, um [disfmarker] so the evaluation should be on a week before or [disfmarker] [speaker001:] Yeah. [speaker002:] Yep. But, no, this is good progress. So. Uh [disfmarker] OK. Guess we're done. [speaker001:] Should we do digits? [speaker002:] Digits? Yep. [speaker001:] OK. [speaker002:] It's a wrap. [speaker002:] Are we on? We're on. [speaker005:] Is it on? [speaker002:] OK. [speaker004:] Yeah. [speaker002:] Yeah. [speaker004:] One, two [disfmarker] [speaker002:] OK, [speaker004:] u OK. [speaker001:] Why is it so cold in here? [speaker002:] so, uh, we haven't sent around the agenda. So, i uh, any agenda items anybody has, wants to talk about, what's going on? [speaker007:] I c I could talk about the meeting. [speaker008:] Does everyone [disfmarker] has everyone met Don? [speaker007:] Yeah. [speaker008:] Yeah? [speaker004:] Yeah. [speaker008:] OK. [speaker003:] Now, yeah. [speaker002:] It's on? [speaker006:] Hello. [speaker004:] Yeah. [speaker006:] Yeah. [speaker002:] OK, agenda item one, [speaker004:] We went [disfmarker] [speaker002:] introduce Don. OK, we did that. 
Uh [disfmarker] [speaker001:] Well, I had a [disfmarker] just a quick question but I know there was discussion of it at a previous meeting that I missed, but just about the [disfmarker] the wish list item of getting good quality close talking mikes on every speaker. [speaker002:] OK, so let's [disfmarker] let's [disfmarker] So let's just do agenda [pause] building right now. OK, so let's talk about that a bit. [speaker001:] I mean, that was [disfmarker] [speaker002:] Uh, [@ @] tuss close talking mikes, better quality. OK, [vocalsound] uh, we can talk about that. You were gonna [disfmarker] starting to say something? [speaker007:] Well, you [disfmarker] you, um, already know about the meeting [comment] that's coming up and I don't know if [disfmarker] if this is appropriate for this. I don't know. I mean, maybe [disfmarker] maybe it's something we should handle outside of the meeting. [speaker005:] What meeting? [speaker002:] No, no, that's OK. We can [disfmarker] so [disfmarker] we can ta so n NIST is [disfmarker] NIST folks are coming by next week [speaker007:] OK. Yeah. [speaker002:] and so we can talk about that. [speaker005:] Who's coming? [speaker002:] I think Uh, uh, John Fiscus [speaker007:] Mm hmm. [speaker002:] and, uh, I think George Doddington will be around as well. Uh, OK, so we can talk about that. Uh, I guess just hear about how things are going with, uh, uh, the transcriptions. [speaker007:] Sure. Mm hmm. [speaker002:] That's right. That would sorta be an obvious thing to discuss. Um, An anything else, uh, strike anybody? [speaker001:] Uh, we started [pause] running recognition on [pause] one conversation but it's the r [pause] isn't working yet. So, [speaker002:] OK. [speaker005:] Wha [speaker001:] But if anyone has [disfmarker] uh, the main thing would be if anyone has, um, knowledge about ways to, uh, post process the wave forms that would give us better recognition, that would be helpful to know about. 
[speaker002:] Um, [speaker008:] Dome yeah, it sounds like a topic of conversation. [speaker002:] Yeah, so, uh [disfmarker] [speaker005:] What about, uh, is there anything new with the speech, nonspeech stuff? [speaker003:] Yeah, we're working more on it but, [vocalsound] it's not finished. [speaker002:] OK. Alright, that seems like a [disfmarker] a good collection of things. And we'll undoubtedly think of [pause] other things. [speaker007:] I had thought under my topic that I would mention the, uh, four items that I [disfmarker] I, uh, put out for being on the agenda f on that meeting, which includes like the pre segmentation and the [disfmarker] and the developments in multitrans. [speaker002:] Oh, under the NIST meeting. [speaker007:] Yeah, under the NIST thing. Yeah. [speaker002:] OK. Alright, why don't we start off with this, u u I guess the order we brought them up seems fine. [speaker007:] Yeah. [speaker002:] Um, so, better quality close talking mikes. So the one issue was that the [disfmarker] the, uh, lapel mike, uh, isn't as good as you would like. And so, uh, it [disfmarker] it'd be better if we had close talking mikes for everybody. Right? Is that [disfmarker] is that basically the point? [speaker001:] Ri um, yeah, the [disfmarker] And actually in addition to that, that the [disfmarker] the close talking mikes are worn in such a way as to best capture the signal. And the reason here is just that for the people doing work not on microphones but on sort of like dialogue and so forth, uh [disfmarker] or and even on prosody, which Don is gonna be working on soon, it adds this extra, you know, vari variable for each speaker to [disfmarker] to deal with when the microphones aren't similar. [speaker008:] Right. [speaker002:] Mm hmm. [speaker001:] So [disfmarker] And I also talked to Mari this morning and she also had a strong preference for doing that. And in fact she said that that's useful for them to know in starting to collect their data too. 
[speaker002:] Mm hmm. Right, so one th [speaker008:] Well, so [disfmarker] [speaker002:] uh, well one thing I was gonna say was that, um, i we could get more, uh, of the head mounted microphones even beyond the number of radio channels we have because I think whether it's radio or wire is probably second order. And the main thing is having the microphone close to you, u although, not too close. [speaker001:] Mm hmm. [speaker008:] Right, so, uh, actually the way Jose is wearing his is [disfmarker] is c [pause] correct. [speaker004:] Yeah. Is [disfmarker] [speaker008:] The good way. So you want to [disfmarker] [speaker002:] Yeah. [speaker004:] I it's not cor it's correct? [speaker002:] Is. [speaker008:] Yeah, th that's good. [speaker004:] Yeah. [speaker008:] So it's towards the corner of your mouth so that breath sounds don't get on it. [speaker002:] Yes. [speaker004:] Yeah. Yeah. [speaker008:] And then just sort of about, uh, a thumb or [disfmarker] a thumb and a half away from your [disfmarker] from your mouth. [speaker004:] Yeah. Yeah. Uh, yeah. [speaker002:] Right. How am I d [speaker001:] But we have more than one type of [disfmarker] I mean, for instance, you're [disfmarker] [speaker003:] Yeah. [speaker008:] And this one isn't very adjustable, so this about as good as I can get [speaker004:] Yeah. [speaker008:] cuz it's a fixed boom. [speaker001:] Right. [speaker004:] Yeah. Is fixed. Yeah. [speaker002:] Yeah. [speaker001:] But if we could actually standardize, you know, the [disfmarker] the microphones, uh, as much as possible that would be really helpful. [speaker005:] Mm hmm. [speaker004:] Yeah. [speaker008:] Mm hmm. [speaker002:] Well, I mean it doesn't hurt to have a few extra microphones around, [speaker004:] Yeah. [speaker002:] so why don't we just go out and [disfmarker] and get an order of [disfmarker] of if this microphone seems OK to people, uh, I'd just get a half dozen of these things. 
[speaker008:] Well the onl the only problem with that is right now, um, some of the Jimlets aren't working. The little [disfmarker] the boxes under the table. [speaker002:] Yeah. [speaker008:] And so, w Uh, I've only been able to find three jacks that are working. [speaker002:] Yeah. [speaker005:] Can we get these, wireless? [speaker008:] So [disfmarker] [speaker002:] No, but my point is [disfmarker] [speaker001:] But y we could just record these signals separately and time align them with the start of the meeting. [speaker002:] R r right [disfmarker] [speaker008:] I [disfmarker] I'm not sure I'm follow. Say that again? [speaker002:] Right now, we've got, uh, two microphones in the room, that are not quote unquote standard. So why don't we replace those [disfmarker] [speaker008:] OK, just two. [speaker002:] Well, however many we can plug in. [speaker008:] OK. [speaker002:] You know, if we can plug in three, let's plug in three. [speaker004:] Mm yeah. [speaker002:] Also what we've talked before about getting another, uh, radio, [speaker008:] Right. [speaker002:] and so then that would be, you know, three [pause] more. [speaker008:] Right. OK. [speaker004:] Mm hmm. [speaker002:] So, uh [disfmarker] so we should go out to our full complement of whatever we can do, but have them all be the same mike. I think the original reason that it was done the other way was because, it w it was sort of an experimental thing and I don't think anybody knew whether people would rather have more variety or [disfmarker] [vocalsound] or, uh, more uniformity, [speaker001:] Right. [speaker002:] but [disfmarker] [@ @] [comment] but uh, sounds [disfmarker] sounds fine. [speaker008:] Sounds like uniformity wins. [speaker004:] Right. [speaker002:] Yeah. [speaker001:] Well, for short term research it's just [disfmarker] there's just so much effort that would have to be done up front n uh, [speaker008:] Well [disfmarker] [speaker001:] so [disfmarker] yeah, uniformity would be great. 
[speaker008:] Yeah. [speaker005:] Is it because [disfmarker] You [disfmarker] you're saying the [disfmarker] for dialogue purposes, so that means that the transcribers are having trouble with those mikes? Is that what you mean? Or [disfmarker]? [speaker001:] Well Jane would know more about the transcribers. [speaker007:] And that's true. I mean, I [disfmarker] we did discuss this. Uh, and [disfmarker] and [disfmarker] [speaker008:] Yep. Couple times. [speaker007:] a couple times, so, um, yeah, the transcribers notice [disfmarker] And in fact there're some where, um [disfmarker] ugh well, I mean there's [disfmarker] it's the double thing. It's the equipment and also how it's worn. And he's always [disfmarker] they always [disfmarker] they just rave about how wonderful Adam's [disfmarker] Adam's channel is. [speaker001:] Right. [speaker008:] What can I say. [speaker007:] And then, [speaker001:] So does the recognizer. [speaker008:] Oh, really? [speaker002:] Yeah. [speaker007:] Yeah. [speaker008:] Yeah, I'm not surprised. I mean, "Baaah!" [speaker007:] Yeah, but I mean it's not just that, it's also you know you [speaker001:] Even if [disfmarker] if you're talking on someone else's mike it's still [pause] you w [speaker002:] Yeah. [speaker007:] It's also like n no breathing, [speaker003:] Yeah. [speaker007:] no [disfmarker] You know, it's like it's [disfmarker] it's um, [speaker002:] Yeah. [speaker007:] it's really [disfmarker] [nonvocalsound] it makes a big difference from the transcribers' point of view [speaker008:] Yeah, it's an advantage when you don't breath. [speaker007:] and also from the research s point of view. [speaker002:] When we're doing [disfmarker] [speaker008:] Yeah, I think that the point of doing the close talking mike is to get a good quality signal. We're not doing research on close talking mikes. [speaker001:] Right. [speaker007:] Yeah. [speaker008:] So we might as well get it as uniform as we can. [speaker002:] Yeah. [speaker001:] Right. 
[speaker002:] Now, this is locking the barn door after the horse was stolen. We do have thirty hours, of [disfmarker] of speech, which is done this way. [speaker008:] Yeah. [speaker001:] That's OK. [speaker002:] But [disfmarker] but, uh, yeah, for future ones we can get it a bit more uniform. [speaker001:] Great, great. [speaker008:] So I think just do a field trip at some point. [speaker002:] Yeah, probably [disfmarker] yeah, to the store we talked about [speaker008:] Yep. [speaker002:] and that [disfmarker] [speaker007:] And there was some talk about, uh, maybe the h headphones that are uncomfortable for people, to [disfmarker] [speaker008:] Yep. So, as [disfmarker] as I said, we'll do a field trip and see if we can get all of the same mike that's more comfortable than [disfmarker] than these things, which I think are horrible. [speaker007:] OK. [speaker008:] So. [speaker007:] Good. [speaker001:] Great, thank you very much. [speaker005:] Especially for people with big heads. [speaker002:] OK. [speaker001:] It's makes our job a lot easier. [speaker008:] And, you know, we're researchers, [speaker002:] OK. [speaker008:] so we all have big heads. [speaker005:] Yeah. [speaker002:] OK. Yeah. Uh, OK, second item was the, uh, NIST visit, and what's going on there. [speaker007:] Yeah. OK, so, um, uh, Jonathan Fiscus is coming on the second of February and I've spoken with, uh, [pause] u u a lot of people here, not everyone. Um, and, um, he expressed an interest in seeing the room and in, um, seeing a demonstration of the modified multitrans, which I'll mention in a second, and also, um, he was interested in the pre segmentation and then he's also interested in the transcription conventions. [speaker008:] Mm hmm. [speaker007:] And, um [disfmarker] So, um, it seems to me in terms of like, um, i i it wou You know, OK. 
So the room, it's things like the audio and c and audi audio and acoustic [disfmarker] acoustic properties of the room and how it [disfmarker] how the recordings are done, and that kind of thing. And, um. OK, in terms of the multi trans, well that [disfmarker] that's being modified by Dave Gelbart to, uh, handle multi channel recording. [speaker008:] Oh, I should've [disfmarker] I was just thinking I should have invited him to this meeting. I forgot to do it. So. [speaker007:] Yeah, OK. [speaker003:] Yeah. [speaker008:] Sorry. [speaker007:] Yeah. Well that's OK, I mean we'll [disfmarker] Yeah, and it's t and it looks really great. He [disfmarker] he has a prototype. I [disfmarker] I, uh, [@ @] [comment] didn't [disfmarker] didn't see it, uh, yesterday but I'm going to see it today. And, uh, that's [disfmarker] that will enable us to do [pause] nice um, tight time marking of the beginning and ending of overlapping segments. At present it's not possible with limitations of [disfmarker] of the, uh, original [pause] design of the software. And um. So, I don't know. In terms of, like, pre segmentation, that [disfmarker] that continues to be, um, a terrific asset to the [disfmarker] to the transcribers. Do you [disfmarker] I know that you're al also supplementing it further. Do you want to mention something about that c Thilo, or [disfmarker]? [speaker003:] Um, yeah. What [disfmarker] what I'm doing right now is I'm trying to include some information about which channel, uh, there's some speech in. But that's not working at the moment. [speaker007:] OK. [speaker003:] I'm just trying to do this by comparing energies, uh [disfmarker] normalizing energies and comparing energies of the different channels. And so to [disfmarker] to give the transcribers some information in which channel there's [disfmarker] there's speech in addition to [disfmarker] to the thing we [disfmarker] we did now which is just, uh, speech nonspeech detection on the mixed file. 
[speaker007:] This is good. Mm hmm. [speaker003:] So I'm [disfmarker] I'm relying on [disfmarker] on the segmentation of the mixed file but I'm [disfmarker] I'm trying to subdivide the speech portions into different portions if there is some activity in [disfmarker] in different channels. [speaker007:] Excellent, so this'd be like w e providing also speaker ID [pause] potentially. [speaker003:] But [disfmarker] Yeah. Yeah. [speaker007:] Wonderful. Wonderful. [speaker002:] Um, something I guess I didn't put in the list but, uh, on that, uh, same day later on in [disfmarker] or maybe it's [disfmarker] No, actually [pause] it's this week, uh, Dave Gelbart and I will be, uh, visiting with John Canny who i you know, is a CS professor, [speaker007:] Oh. [speaker008:] HCC. [speaker002:] who's interested in ar in array microphones. [speaker008:] Oh, he's doing array mikes. [speaker002:] Yeah. And so we wanna see what commonality there is here. You know, maybe they'd wanna stick an array mike here when we're doing things [speaker005:] That would be cool. [speaker008:] Yeah, that would be neat. [speaker002:] or [disfmarker] or maybe it's [disfmarker] it's not a specific array microphone they want [speaker004:] Yeah. [speaker005:] That would be really neat. [speaker002:] but they might wanna just, [disfmarker] uh, you know, you could imagine them taking the four signals from these [disfmarker] these table mikes and trying to do something with them [disfmarker] Um, I also had a discussion [disfmarker] So, w uh, we'll be over [disfmarker] over there talking with him, um, after class on Friday. Um, we'll let you know what [disfmarker] what goes with that. Also had a completely unrelated thing. I had a, uh, discussion today with, uh, Birger Kollmeier who's a, uh, a German, uh, scientist who's got a fair sized group [vocalsound] doing a range of things. It's sort of auditory related, largely for hearing aids and so on. 
But [disfmarker] but, uh, he does stuff with auditory models and he's very interested in directionality, and location, and [disfmarker] and, uh, head models and [pause] microphone things. And so, uh, he's [disfmarker] he and possibly a student, there w there's, uh, a student of his who gave a talk here last year, uh, may come here, uh, in the fall for, uh, sort of a five month, uh, sabbatical. So he might be around. Get him to give some talks and so on. But anyway, he might be interested in [pause] this stuff. [speaker004:] Mm hmm. [speaker005:] That [disfmarker] that reminds me, I had a [disfmarker] a thought of an interesting project that somebody could try to do with [pause] the data from here, either using, you know, the [disfmarker] the mikes on the table or using signal energies from the head worn mikes, [speaker004:] Mm hmm. [speaker005:] and that is to try to construct a map of where people were sitting, [speaker002:] Right. [speaker004:] Uh huh. [speaker008:] Well Dan [disfmarker] Dan had worked on that. Dan Ellis, [speaker005:] uh, based on [disfmarker] [speaker004:] Uh huh. [speaker005:] Oh, did he? [speaker008:] yeah. [speaker005:] Oh, that's interesting. [speaker008:] So that [disfmarker] that's the cross correlation stuff, was [disfmarker] was doing b beam forming. [speaker004:] Yeah. [speaker005:] And so you could plot out who was sitting next to who and [disfmarker] [speaker002:] A little bit, I mean, he didn't do a very extreme thing but just [disfmarker] it was just sort of [speaker004:] Yeah, yeah. [speaker008:] No, he did start on it. [speaker002:] e e given that, the [disfmarker] the [disfmarker] the block of wood with the [disfmarker] the [disfmarker] the two mikes [comment] on either side, [speaker008:] Mm hmm. [speaker002:] if I'm speaking, or if you're speaking, or someone over there is speaking, it [disfmarker] if you look at cross correlation functions, you end up with a [disfmarker] [speaker004:] Yeah. 
[speaker002:] if [disfmarker] if someone who was on the axis between the two is talking, then you [disfmarker] you get a big peak there. And if [disfmarker] if someone's talking on [disfmarker] on [disfmarker] on, uh, one side or the other, it goes the other way. [speaker005:] Mm hmm. [speaker002:] And then, uh, it [disfmarker] it [disfmarker] it even looks different if th t if the two [disfmarker] two people on either side are talking than if one in the middle. It [disfmarker] it actually looks somewhat different, [speaker005:] Hmm. [speaker002:] so. [speaker005:] Well I was just thinking, you know, as I was sitting here next to Thilo that um, when he's talking, my mike probably picks it up better than [pause] your guys's mikes. [speaker003:] Yeah. [speaker005:] So if you just looked at [disfmarker] [speaker004:] Yeah. [speaker008:] Oh, that's another cl cue, that's true. [speaker005:] yeah, [comment] looked at [comment] the energy on my mike and you could get an idea about who's closest to who. [speaker004:] Yeah. Yeah. [speaker002:] Mm hmm. [speaker004:] Yeah. [speaker002:] Right. [speaker004:] Yeah. [speaker005:] And [disfmarker] [speaker008:] Or who talks the loudest. [speaker003:] Yeah. [speaker004:] Yeah. [speaker002:] Yeah, well you have to [disfmarker] the appropriate normalizations are tricky, and [disfmarker] and [disfmarker] and are probably the key. [speaker008:] Yeah. [speaker003:] Yeah. [speaker001:] You just search for Adam's voice on each individual microphone, you pretty much know where everybody's sitting. [speaker003:] Yeah. [speaker002:] Yeah. We've switched positions recently so you can't [disfmarker] Anyway. OK. So those are just a little couple of news items. [speaker007:] Can I ask one thing? Uh, so, um, Jonathan Fiscus expressed an interest in, uh, microphone arrays. [speaker002:] Yes. [speaker007:] Um, is there [disfmarker] I mean [disfmarker] b And I also want to say, his [disfmarker] he can't stay all day. 
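[A hedged sketch of the cross-correlation idea attributed to Dan Ellis above: correlate the signals from the two mics on either side of the block and look at where the peak lands. A talker on the axis between the mics gives a peak near zero lag; off-axis talkers shift it to one side. The FFT-based correlation and function name are illustrative, not the original code.]

```python
import numpy as np

def tdoa_lag(sig_a, sig_b):
    """Return the lag (in samples) of the cross-correlation peak between
    two microphone signals. Zero for an on-axis source; the sign and
    magnitude indicate which mic the sound reached first, and by how much."""
    n = len(sig_a) + len(sig_b) - 1
    # FFT-based linear cross-correlation (zero-padded to avoid wrap-around).
    fa = np.fft.rfft(sig_a, n)
    fb = np.fft.rfft(sig_b, n)
    xcorr = np.fft.irfft(fa * np.conj(fb), n)
    # Rotate so the lag axis runs from -(len(sig_b)-1) to +(len(sig_a)-1).
    xcorr = np.roll(xcorr, len(sig_b) - 1)
    lags = np.arange(-(len(sig_b) - 1), len(sig_a))
    return lags[np.argmax(xcorr)]
```

[With two mics a fixed distance apart and a known sample rate, the lag converts directly to an angle estimate; comparing lag patterns over time is what lets the different seating positions "look different", as described above.]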
He needs to uh, leave for [disfmarker] uh, from here to make a two forty five flight [speaker008:] Oh, so just morning. [speaker007:] from [disfmarker] from Oakland. [speaker002:] Right. [speaker007:] So it makes the scheduling a little bit tight but do you think that, um [disfmarker] that, uh, i John Canny should be involved in this somehow or not. I have no idea. [speaker002:] Probably not but I [disfmarker] I'll [disfmarker] I'll [disfmarker] I'll know better after I see him this Friday what [disfmarker] what kind of level he wants to get involved. [speaker007:] It's premature. Fine. Good. [speaker002:] Uh, he might be excited to and it might be very appropriate for him to, uh, or he might have no interest whatsoever. I [disfmarker] I just really don't know. [speaker007:] OK. [speaker008:] Is he involved in [disfmarker] Ach! [comment] I'm blanking on the name of the project. NIST has [disfmarker] has done a big meeting room [disfmarker] instrumented meeting room with video and microphone arrays, and very elaborate software. Is [disfmarker] is he the one working on that? [speaker002:] Well that's what they're starting up. [speaker008:] OK. [speaker002:] Yeah. No, I mean, that's what all this is about. They [disfmarker] they haven't done it yet. [speaker008:] OK. [speaker002:] They wanted to do it [disfmarker] [speaker008:] I had read some papers that looked like they had already done some work. [speaker002:] Uh, well I think they've instrumented a room but I don't [pause] think they [disfmarker] they haven't started recordings yet. They don't have the t the transcription standards. [speaker005:] Are they going to do video as well? [speaker002:] They don't have the [disfmarker] [speaker008:] Hmm. [speaker002:] Yeah. [speaker005:] Hmm. [speaker002:] I think. I think they are. 
[speaker008:] Oh, cuz what [disfmarker] what I had read was, uh, they had a uh very large amount of software infrastructure for coordinating all this, both in terms of recording and also live room where you're interacting [disfmarker] the participants are interacting with the computer, and with the video, and lots of other stuff. [speaker002:] Well, I'm [disfmarker] I'm [disfmarker] I'm not sure. [speaker008:] So. [speaker002:] All [disfmarker] all I know is that they've been talking to me about a project that they're going to start up recording people meet in meetings. [speaker008:] OK. Well [disfmarker] [speaker002:] And, uh, it is related to ours. They were interested in ours. They wanted to get some uniformity with us, uh, about the transcriptions and so on. [speaker008:] Alright. [speaker002:] And one [disfmarker] one notable difference [disfmarker] u u actually I can't remember whether they were going to routinely collect video or not, but one [disfmarker] one, uh, difference from the audio side was that they are interested in using array mikes. So, um, I mean, I'll just tell you the party line on that. The reason I didn't go for that here was because, uh, the focus, uh, both of my interest and of Adam's interest was uh, in impromptu situations. And we're not recording a bunch of impromptu situations but that's because it's different to get data for research than to actually apply it. [speaker004:] Hmm. [speaker002:] And so, uh, for scientific reasons we thought it was good to instrument this room as we wanted it. But the thing we ultimately wanted to aim at was a situation where you were talking with, uh, one or more other people i uh, in [disfmarker] in an p impromptu way, where you didn't [disfmarker] didn't actually know what the situation was going to be. And therefore it would not [disfmarker] it'd be highly unlikely that room would be outfitted with [disfmarker] with some very carefully designed array of microphones. Um, so it was only for that reason. 
It was just, you know, yet another piece of research and it seemed like we had enough troubles just [disfmarker] [speaker005:] So there's no like portable array of mikes? [speaker002:] No. So there's [disfmarker] there's [disfmarker] uh, there's a whole range of things [disfmarker] there's a whole array of things, [vocalsound] that people do on this. [speaker004:] Hmm. [speaker002:] So, um, the, uh [disfmarker] the big arrays, uh, places, uh, like uh, Rutgers, and Brown, and other [disfmarker] other places, uh, they have, uh, big arrays with, I don't know, a hundred [disfmarker] hundred mikes or something. [speaker008:] Xerox. [speaker002:] And so there's a wall of mikes. And you get really, really good beam forming [comment] with that sort of thing. [speaker005:] Wow. [speaker002:] And it's [disfmarker] and, um, in fact at one point we had a [disfmarker] a proposal in with Rutgers where we were gonna do some of the sort of per channel signal processing and they were gonna do the multi channel stuff, but [pause] it d it d we ended up not doing it. But [disfmarker] [speaker005:] I've seen demonstrations of the microphone arrays. It's amazing how [disfmarker] how they can cut out noise. [speaker002:] Yeah, it's r It's really neat stuff. [speaker008:] And then they have little ones too [speaker002:] And then they had the little ones, yeah. [speaker008:] but I mean [disfmarker] but they don't have our block of wood, right? [speaker002:] Yeah, our block of wood is unique. [speaker004:] Yeah. [speaker002:] But the [vocalsound] But the No, there are these commercial things now you can buy that have four mikes or something [speaker001:] Mm hmm. [speaker002:] and [disfmarker] and, uh, um [disfmarker] So, yeah, there's [disfmarker] there's [disfmarker] there's a range of things that people do. [speaker005:] Huh. [speaker002:] Um, so if we connected up with somebody who was interested in doing that sort of thing that's [disfmarker] that's a good thing to do. 
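The delay and sum beam forming mentioned here (steering an array of microphones toward one talker so that signals from that direction add up and other directions cancel) can be sketched roughly as follows. This is a generic illustration, not the processing done at ICSI, Rutgers, or Brown; the `delay_and_sum` name and the use of integer sample delays are simplifying assumptions (real array processing uses fractional delays and adaptive weights):

```python
import numpy as np

def delay_and_sum(channels, delays_samples):
    """Simplest beam former: advance each mike channel by the extra
    travel time (in samples) from the target talker to that mike,
    then average, so the target adds coherently.

    channels: list of equal-length 1-D numpy arrays, one per mike.
    delays_samples: per-channel integer delays for the look direction.
    """
    out = np.zeros(len(channels[0]))
    for sig, delay in zip(channels, delays_samples):
        out += np.roll(sig, -delay)  # time-advance this channel
    return out / len(channels)
```

With the delays set for one talker, that talker's signal is reinforced while sounds arriving from other directions tend to average out, which is why a wall of a hundred mikes can "cut out noise" so effectively.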
I mean, whenever I've described this to other people who are interested on the [disfmarker] with the acoustic side that's invariably the question they ask. Just like someone who is interested in the general dialogue thing will always ask [vocalsound] "um, are you recording video?" Um, right? [speaker001:] Right, [speaker002:] And [disfmarker] and the acoustic people will always say, "well are you doing, uh, uh, array microphones?" [speaker001:] right. [speaker002:] So it's [disfmarker] it's a good thing to do, but it doesn't solve the problem of how do you solve things when there's one mike or at best two mikes in [disfmarker] in this imagined PDA that we have. So maybe [disfmarker] maybe we'll do some more of it. [speaker007:] Well one thing I [disfmarker] I mean, I don't know. I mean, I know that having an array of [disfmarker] I mean, I would imagine it would be more expensive to have a [disfmarker] an array of microphones. But couldn't you kind of approximate the natural sis situation by just shutting off uh, channels when you're [disfmarker] later on? I mean, it seems like if the microphones don't effect each other then couldn't you just, you know, record them with an array and then just not use all the data? [speaker008:] It's [disfmarker] it's just a lot of infrastructure that for our particular purpose we felt we didn't need to set up. [speaker007:] I see. Fine. [speaker001:] Yeah. [speaker007:] OK. [speaker002:] Yeah, if ninety nine percent of what you're doing is c is shutting off most of the mikes, then going through the [disfmarker] But if you get somebody who's [disfmarker] who [disfmarker] who has that as a primary interest then that put [disfmarker] then that drives it in that direction. [speaker008:] That's right, I mean if someone [disfmarker] if someone came in and said we really want to do it, [speaker001:] Right. [speaker008:] I mean, we don't care. 
That would be fine, [speaker005:] So to save that data you [disfmarker] You have to have one channel recording per mike in the array? [speaker008:] Buy more disk space. [speaker005:] Is that [disfmarker] [speaker002:] Well, uh, at some level [disfmarker] at some level. [speaker008:] I usually do a mix. [speaker002:] But then, you know, there's [disfmarker] it [disfmarker] there's [disfmarker] [speaker005:] What you save, I mean, if you're going to do research with it. yeah [speaker002:] There's [disfmarker] I [disfmarker] I don't know what they're going to do and I don't know how big their array is. Obviously if you were gonna save all of those channels for later research you'd use up a lot of space. [speaker005:] Yeah. [speaker008:] Well their software infrastructure had a very elaborate design for plugging in filters, and mixers, and all sorts of processing. [speaker005:] Hmm. [speaker002:] And, th [speaker008:] So that they can do stuff in real time and not save out each channel individually. [speaker002:] Yeah. [speaker005:] Mmm. [speaker002:] Yeah. [speaker008:] So it was, uh [disfmarker] [speaker004:] Yeah. [speaker002:] But I mean, uh, for optimum flexibility later you'd want to save each channel. But I think in practical situations you would have some engine of some sort doing some processing to reduce this to some [disfmarker] to the equivalent of a single microphone that was very directional. [speaker005:] Uh, oh, OK, I see. [speaker002:] Right? So [disfmarker] [speaker001:] I mean, it seems [disfmarker] [speaker005:] Sort of saving the result of the beam forming. [speaker004:] Yeah. 
[speaker001:] it seems to me that there's [disfmarker] you know, there are good political reasons for [disfmarker] for doing this, just getting the data, because there's a number of sites [disfmarker] like right now SRI is probably gonna invest a lot of internal funding into recording meetings also, which is good, um, but they'll be recording with video and they'll be [disfmarker] You know, it'd be nice if we can have at least, uh, make use of the data that we're recording as we go since it's sort of [disfmarker] this is the first site that has really collected these really impromptu meetings, um, and just have this other information available. So, if we can get the investment in just for the infra infrastructure and then, I don't know, save it out or have whoever's interested save that data out, transfer it there, it'd be g it'd be good to have [disfmarker] have the recording. I think. [speaker008:] You mean to [disfmarker] to actually get a microphone array and do that? [speaker001:] Well, if [disfmarker] [speaker008:] And video and [disfmarker] [speaker001:] Even if we're not [disfmarker] I'm not sure about video. That's sort of an [disfmarker] video has a little different nature since right n right now we're all being recorded but we're not being taped. Um, but it [disfmarker] definitely in the case of microphone arrays, since if there was a community interested in this, then [disfmarker] [speaker008:] Well, but I think we need a researcher here who's interested in it. To push it along. [speaker002:] See the problem is it [disfmarker] it took, uh, uh, it took at least six months for Dan to get together the hardware and the software, and debug stuff in [disfmarker] in the microphones, and in the boxes. And it was a really big deal. And so I think we could get a microphone array in here pretty easily and, uh, have it mixed to [disfmarker] to one channel of some sort. [speaker001:] Mm hmm. 
[speaker002:] But, e I think for I mean, how we're gonna decide [disfmarker] For [disfmarker] for maximum flexibility later you really don't want to end up with just one channel that's pointed in the direction of the [disfmarker] the [disfmarker] the p the person with the maximum energy or something like that. I mean, you [disfmarker] you want actually to [disfmarker] you want actually to have multiple channels being recorded so that you can [disfmarker] And to do that, it [disfmarker] we're going to end up greatly increasing the disk space that we use up, we also only have boards that will take up to sixteen channels and in [pause] this meeting, we've got eight people and [disfmarker] and six mikes. And there we're already using fourteen. [speaker008:] And we actually only have fifteen. One of them's [disfmarker] [speaker002:] E [speaker004:] Yeah. [speaker008:] Details. But fifteen, not sixteen. [speaker001:] Mm hmm. [speaker002:] Yeah. Yeah. [speaker001:] Well if there's a way to say time [disfmarker] to sort of solve each of these f those [disfmarker] So suppose you can get an array in because there's some person at Berkeley who's interested and has some [pause] equipment, uh, and suppose we can [disfmarker] as we save it we can, you know, transfer it off to some other place that [disfmarker] that holds this [disfmarker] this data, who's interested, and even if ICSI it itself isn't. Um, and it [disfmarker] it seems like as long as we can time align the beginning, do we need to mix it with the rest? I don't know. You know? The [speaker002:] Yeah. So I think you'd need a separate [disfmarker] a separate set up [speaker001:] So [disfmarker] [speaker002:] and the assumption that you could time align the two. [speaker001:] Yeah. [speaker008:] And y it'd certainly gets skew. 
[speaker001:] I mean it's just [disfmarker] it's worth considering as sort of once you make the up front investment [comment] and can sort of save it out each time, and [disfmarker] and not have to worry about the disk space factor, then it mi it might be worth having the data. [speaker002:] I'm not so much worried about disk space actually. I mentioned that, b as a practical matter, [speaker008:] Just [disfmarker] [speaker002:] but the real issue is that, uh, there is no way to do a recording extended to what we have now with low skew. So [pause] you would have a t completely separate set up, which would mean that the sampling times and so forth would be all over the place compared to this. [speaker001:] Right. [speaker002:] So it would depend on the level of pr processing you were doing later, but if you're d i the kind of person who's doing array processing you actually care about funny little times. And [disfmarker] and so you actually wou would want to have a completely different set up than we have, [speaker001:] I see. [speaker002:] one that would go up to thirty two channels or something. [speaker001:] Mmm. [speaker002:] So basically [disfmarker] [speaker008:] Or a hundred thirty two. [speaker002:] or a hun Yeah. So, I'm kinda skeptical, but um I think that [disfmarker] [speaker001:] Mmm. [speaker002:] So, uh, I don't think we can share the resource in that way. But what we could do is if there was someone else who's interested they could have a separate set up which they wouldn't be trying to synch with ours which might be useful for [disfmarker] for them. [speaker001:] Right, I mean at least they'd have the data and the transcripts, [speaker002:] And then we can offer up the room, [speaker001:] and [disfmarker] [speaker002:] Yeah, we can o offer the meetings, and the physical space, [speaker001:] Right. [speaker002:] and [disfmarker] and [disfmarker] yeah, the transcripts, and so on. [speaker001:] OK. 
Right, I mean, just [disfmarker] it'd be nice if we have more information on the same data. You know, [speaker002:] Yeah. [speaker001:] and [disfmarker] But it's [disfmarker] if it's impossible or if it's a lot of effort then you have to just balance the two, [speaker002:] Well I thi yeah, the thing will be, u u in [disfmarker] in [disfmarker] again, in talking to these other people to see what [disfmarker] you know, what [disfmarker] what we can do. [speaker001:] so [disfmarker] Right. [speaker002:] Uh, we'll see. [speaker005:] Is there an interest in getting video recordings for these meetings? I mean [speaker002:] Right, so we have [disfmarker] we [disfmarker] [speaker008:] Yes, absolutely. But it's exactly the same problem, that you have an infrastructure problem, you have a problem with people not wanting to be video taped, and you have the problem that no one who's currently involved in the project is really hot to do it. [speaker005:] Hmm. So there's not enough interest to overcome all of [disfmarker] [speaker008:] Mm hmm. [speaker001:] Right. Internally, but I know there is interest from other places that are interested in looking at meeting data and having the video. So it's just [disfmarker] [speaker007:] Yeah, w although I [disfmarker] m [vocalsound] I [disfmarker] I have to u u mention the human subjects problems, [pause] that i increase with video. [speaker001:] Right, that's true. [speaker002:] Yeah, so it's, uh, people [disfmarker] people getting shy about it. [speaker007:] Yeah. [speaker002:] There's this human subjects problem. There's the fact that then um, if [disfmarker] i I I've heard comments about this before, "why don't you just put on a video camera?" But you know, it's sort of like saying, "uh, well we're primarily interested in [disfmarker] in some dialogue things, uh, but, uh, why don't we just throw a microphone out there." 
I mean, the thing is, once you actually have serious interest in any of these things then you actually have to put a lot of effort in. [speaker005:] Mmm. [speaker002:] And, uh, you really want to do it right. [speaker008:] I know. Yep. [speaker002:] So I think NIST or LDC, or somebody like that I think is much better shape to do all that. We [disfmarker] there will be other meeting recordings. We won't be the only place doing meeting recordings. [speaker007:] Mm hmm. [speaker002:] We are doing what we're doing. And, uh, hopefully it'll be useful. [speaker007:] I [disfmarker] it [disfmarker] it occurred to me, has Don signed a human subject's form? [speaker008:] Oh! Probably not. [speaker007:] A permission form? [speaker008:] Has Don [disfmarker] have you s did you si I thought you did actually. Didn't you read a digit string? [speaker006:] I was [disfmarker] [vocalsound] Yeah, I was [disfmarker] I was here [disfmarker] I was here before once. [speaker005:] You were here at a meeting before. [speaker007:] You were here at a meeting before. [speaker008:] Yeah, and you [disfmarker] and you signed a form. [speaker005:] Yeah. [speaker006:] So. [speaker007:] Did you sign a form? [speaker006:] Oh, I think so. Did I? [speaker008:] I'm pretty sure. [speaker006:] I don't know. [speaker008:] Well I'll [disfmarker] I'll get another one before the end of the meeting. [speaker007:] OK. [speaker008:] Thank you. [speaker002:] Yeah. [speaker007:] OK. [speaker006:] Yeah. [speaker002:] Yeah. [speaker007:] You don't [disfmarker] you don't have to leave for it. But I just [disfmarker] [speaker002:] Yeah, we [disfmarker] we [disfmarker] [speaker007:] you know. [speaker008:] Well I can't, [speaker006:] Can I verbally consent? [speaker008:] I'm wired in. [speaker002:] We [disfmarker] we [disfmarker] we [disfmarker] we don't, uh [disfmarker] [speaker001:] Yeah. [speaker007:] o [speaker001:] You're on recor you're being recorded [speaker006:] Yeah. 
[speaker001:] and [disfmarker] [speaker006:] I don't care. [speaker002:] we don't [disfmarker] we don't perform electro shock during these meetings, [speaker006:] You can do whatever you want with it. That's fine. [speaker002:] and [disfmarker] [speaker005:] Usually. [speaker002:] Yeah. OK. Uh, transcriptions. [speaker007:] Transcriptions, OK. Um, I thought about [disfmarker] there are maybe three aspects of this. So first of all, um, I've got eight transcribers. Uh, seven of them are linguists. One of them is a graduate student in psychology. Um, Each [disfmarker] I gave each of them, uh, their own data set. Two of them have already finished the data sets. And [pause] the meetings run, you know, let's say an hour. Sometimes as man much as an hour and a half. [speaker005:] How big is the data set? [speaker007:] Oh, it's [disfmarker] what I mean is one meeting. Each [disfmarker] each person got their own meeting. [speaker005:] Ah, OK. [speaker007:] I didn't want to have any conflicts of, you know, of [disfmarker] of when to stop transcribing this one or [disfmarker] So I wanted to keep it clear whose data were whose, and [disfmarker] and [disfmarker] and so [disfmarker] [speaker005:] Uh huh. [speaker007:] And, uh, meetings, you know, I think that they're [disfmarker] they go as long as a [disfmarker] almost two hours in some [disfmarker] in some cases. So, you know, that means [disfmarker] you know, if we've got two already finished and they're working on [disfmarker] Uh, right now all eight of them have differe uh, uh, additional data sets. That means potentially as many as ten might be finished by the end of the month. Hope so. [speaker005:] Wow. [speaker007:] But the pre segmentation really helps a huge amount. [speaker003:] OK. [speaker007:] And, uh, also Dan Ellis's innovation of the, uh [disfmarker] the multi channel to here really helped a r a lot in terms of clearing [disfmarker] clearing up h hearings that involve overlaps. 
But, um, just out of curiosity I asked one of them how long [pause] it was taking her, one of these two who has already finished her data set. She said it takes about, uh, sixty minutes transcription for every five minutes of real time. So it's about twelve to one, which is what we were thinking. [speaker008:] or Yep. [speaker007:] It's well in the range. [speaker008:] It's pretty good. [speaker007:] OK. Uh, these still, when they're finished, um, that means that they're finished with their pass through. They still need to be edited and all but [disfmarker] But it's word level, speaker change, the things that were mentioned. OK, now I wanted to mention the, um, teleconference I had with, uh, Jonathan Fiscus. We spoke for an hour and a half [speaker008:] Hmm. [speaker007:] and, um, had an awful lot of things in common. He, um, um, he in indicated to me that they've [disfmarker] that he's been, uh, looking, uh, uh, spending a lot of time with [disfmarker] I'm not quite sure the connection, but spending a lot of time with the ATLAS system. And I guess that [disfmarker] I mean, I [disfmarker] I need to read up on that. And there's a web site that has lots of papers. But it looks to me like that's the name that has developed for the system that Bird and Liberman developed [comment] for the annotated [pause] graphs approach. [speaker008:] Mm hmm. [speaker007:] So what he wants me to do and what we [disfmarker] what we will do and [disfmarker] uh, is to provide them with the u already transcribed meeting for him to be able to experiment with in this ATLAS System. And they do have some sort of software, at least that's my impression, related to ATLAS and that he wants to experiment with taking our data and putting them in that format, and see how that works out. I [disfmarker] I [disfmarker] I explained to him in [disfmarker] in detail the, uh, conventions that we're using here in this [disfmarker] in this word level transcript. 
And, um, you know, I [disfmarker] I explained, you know, the reasons that [disfmarker] that we were not coding more elaborately and [disfmarker] and the focus on reliability. He expressed a lot of interest in reliability. It's like he's [disfmarker] he's really up on these things. He's [disfmarker] he's very [disfmarker] Um, independently he asked, "well what about reliability?" So, [vocalsound] he's interested in the consistency of the encoding and that sort of thing. OK, um [disfmarker] [speaker001:] Sorry, can you explain what the ATLAS [disfmarker] I'm not familiar with this ATLAS system. [speaker007:] Well, you know, at this point I think [disfmarker] Uh, well Adam's read more [disfmarker] in more detail than I have on this. I need to acquaint myself more with it. But, um, there [disfmarker] there is a way of viewing [disfmarker] Uh, whenever you have coding categories, um, and you're dealing with uh, a taxonomy, then you can have branches that [disfmarker] that have alternative, uh, choices that you could use for each [disfmarker] each of them. And it just ends up looking like a graphical representation. [speaker008:] Is [disfmarker] is [disfmarker] Is ATLAS the [disfmarker] his annotated transcription graph stuff? I don't remember the acronym. The [disfmarker] the one [disfmarker] the [disfmarker] what I think you're referring to, they [disfmarker] they have this concept of an an annotated transcription graph representation. And that's basically what I based the format that I did [disfmarker] [speaker007:] Yeah. [speaker001:] Oh. Oh. [speaker008:] I based it on their work almost directly, in combination with the TEI stuff. And so it's very, very similar. And so it's [disfmarker] it's a data representation and a set of tools for manipulating transcription graphs of various types. [speaker005:] Is this the project that's sort of, uh, between, uh, NIST and [disfmarker] and, uh, a couple of other places? [speaker007:] Mm hmm. Including LDC. 
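The annotated-graph representation being discussed (the Bird and Liberman approach that ATLAS builds on) boils down to anchor nodes, which may carry timestamps, connected by arcs that carry typed labels such as words or speaker turns. A minimal sketch of that idea, with hypothetical class and method names not taken from the actual ATLAS software:

```python
from dataclasses import dataclass, field

@dataclass
class AnnotationGraph:
    """Toy annotation graph: anchors are (optionally timestamped)
    nodes; arcs span two anchors and carry a (type, label) pair."""
    anchors: dict = field(default_factory=dict)  # anchor id -> time or None
    arcs: list = field(default_factory=list)     # (src, dst, kind, label)

    def add_anchor(self, aid, time=None):
        self.anchors[aid] = time

    def add_arc(self, src, dst, kind, label):
        self.arcs.append((src, dst, kind, label))

    def words(self):
        """Read back just the word-level tier, in insertion order."""
        return [label for (_, _, kind, label) in self.arcs if kind == "word"]
```

Because arcs of different kinds can span the same anchors, word-level transcripts, speaker-change marks, and coarser codings all live in one structure, which is why converting an existing word-level transcript into this format is a natural experiment.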
[speaker005:] The [disfmarker] the [disfmarker] [speaker007:] I think so. [speaker008:] Yep. [speaker005:] Yeah, y right, [speaker007:] Mm hmm. Then there's their web site that has lots of papers. [speaker005:] OK. [speaker007:] And I looked through them and they mainly had to do with this, um, this, uh, tree structure, uh, annotated tree diagram thing. [speaker001:] Mmm. [speaker007:] So, um, um [disfmarker] and, you know, in terms of like the conventions that I'm a that I've adopted, it [disfmarker] there [disfmarker] there's no conflict at all. [speaker008:] Right. [speaker007:] And he was, you know, very interested. And, "oh, and how'd you handle this?" And I said, "well, you know, this way" and [disfmarker] And [disfmarker] and we had a really nice conversation. Um, OK, now I also wanted to say in a different [disfmarker] a different direction is, Brian Kingsbury. So, um, I corresponded briefly with him. I, uh, c I [disfmarker] He still has an account here. I told him he could SSH on and use multi trans, and have a look at the already done, uh, transcription. And he [disfmarker] and he did. And what he said was that, um, what they'll be providing is [disfmarker] will not be as fine grained in terms of the time information. And, um, that's, uh [disfmarker] You know, I need to get back to him and [disfmarker] and, uh, you know, explore that a little bit more and see what they'll be giving us in specific, [speaker001:] Hmm. [speaker005:] The p the people [disfmarker] [speaker007:] but I just haven't had time yet. Sorry, what? [speaker005:] The [disfmarker] the folks that they're, uh, subcontracting out the transcription to, are they like court reporters [speaker007:] Yes. [speaker005:] or [disfmarker] [speaker007:] Apparently [disfmarker] Well, I get the sense they're kind of like that. Like it's like a pool of [disfmarker] of somewhat uh, secretarial [disfmarker] I don't think that they're court reporters.
I don't think they have the special keyboards and that [disfmarker] and that type of training. I [disfmarker] I get the sense they're more secretarial. [speaker005:] Mm hmm. [speaker007:] And that, um, uh, what they're doing is giving them [disfmarker] [speaker005:] Hmm. Like medical transcriptionist type people [disfmarker] [speaker008:] Nu it's mostly [disfmarker] it's for their speech recognition products, [speaker007:] Yep. [speaker008:] that they've hired these people to do. [speaker005:] But aren't [disfmarker] they're [disfmarker] Oh, so they're hiring them, they're coming. It's not a service they send the tapes out to. [speaker008:] Well they [disfmarker] they do send it out but my understanding is that that's all this company does is transcriptions for IBM for their speech product. [speaker005:] Ah! Oh. OK. I gotcha. [speaker008:] So most of it's ViaVoice, people reading their training material for that. [speaker005:] I see. [speaker007:] Mm hmm. [speaker005:] I see. [speaker007:] Up to now it's been monologues, uh, as far as I understood. [speaker008:] Yep, exactly. [speaker007:] And [disfmarker] and what they're doing is [speaker008:] Yep. [speaker005:] Mm hmm. [speaker007:] Brian himself downloaded [disfmarker] So [disfmarker] So, um, Adam sent them a CD and Brian himself downloaded [disfmarker] uh, cuz, you know, I mean, we wanted to have it so that they were in familiar f terms with what they wanted to do. He downloaded [pause] from the CD onto audio tapes. And apparently he did it one channel per audio tape. So each of these people is [pause] transcribing from one channel. [speaker008:] Right. [speaker007:] And then what he's going to do is check it, [speaker005:] Oh. [speaker007:] a before they go be beyond the first one. Check it and, you know, adjust it, and all that. [speaker005:] So each person gets one of these channels [disfmarker] [speaker008:] Right. [speaker005:] OK.
[speaker002:] So if they hear something off in the distance they don't [disfmarker] they just go [disfmarker] [speaker008:] Well, but that's OK, because, you know, you'll do all them and then combine them. [speaker007:] I [disfmarker] I don't know. I have t I, you know I [disfmarker] [speaker005:] But there could be problems, right? with that. [speaker003:] Yep. [speaker007:] I think it would be difficult to do it that way. I really [speaker001:] Yeah. [speaker005:] Well if you're tran if you got that channel right there [disfmarker] [speaker007:] d uh, in my case [disfmarker] [speaker008:] No, no. We're talking about close talking, not the [disfmarker] not the desktop. [speaker003:] Yeah. [speaker004:] No, close talk. [speaker002:] Are you? [speaker007:] Yes. [speaker008:] I sure hope so. [speaker007:] Well I th I think so. [speaker008:] It'd be really foolish to do otherwise. [speaker007:] Yeah, I [disfmarker] I would think that it would be kind of hard to come out with [disfmarker] Yeah. [speaker001:] I [disfmarker] I think it's sort of hard just playing the [disfmarker] you know, just having played the individual files. And I [disfmarker] I mean, I know you. I know what your voice sounds like. I'm sort of familiar with [disfmarker] [speaker003:] Yeah. [speaker001:] Uh, it's pretty hard to follow, especially [speaker008:] One side. [speaker007:] I agree. [speaker001:] there are a lot of words that are so reduced phonetically that make sense when you know what the person was saying before. [speaker005:] Yeah, that's [disfmarker] [speaker003:] Yeah. [speaker007:] And especially since a lot of these [disfmarker] [speaker001:] Uh, it sort of depends where you are in [disfmarker] [speaker004:] Yeah. [speaker008:] But I mean we had this [disfmarker] we've had this discussion many times. And the answer is we don't actually know the answer because we haven't tried both ways. [speaker007:] Yeah, we have. 
Well, except I can say that my transcribers use the mixed signal mostly [speaker008:] So. Mm hmm. [speaker001:] Mm hmm. [speaker007:] unless there's a huge disparity in terms of the volume on [disfmarker] on the mix. [speaker005:] Right. [speaker007:] In which case, you know, they [disfmarker] they wouldn't be able to catch anything except the prominent [comment] channel, [speaker008:] Right. [speaker007:] then they'll switch between. [speaker008:] Well I think that [disfmarker] that might change if you wanted really fine time markings. [speaker003:] Yeah. [speaker007:] But [disfmarker] but really [disfmarker] Well, OK. [speaker008:] So. [speaker007:] Yeah, well [disfmarker] [speaker002:] But they're not giving f really fine time markings. [speaker008:] Right. [speaker001:] Actually, are th so [vocalsound] are they giving any time markings? In other words, if [disfmarker] [speaker007:] Well, I have to ask him. And that's [disfmarker] that's my email to him. That needs to be forthcoming. [speaker001:] Yeah. Cuz [disfmarker] OK. [speaker007:] But [disfmarker] but the, uh [disfmarker] I did want to say that it's hard to follow one channel of a conversation even if you know the people, and if you're dealing furthermore with highly abstract network concepts you've never heard of [disfmarker] So, you know, one of these people was [disfmarker] was transcribing the, uh, networks group talk and she said, "I don't really know what a lot of these abbreviations are," "but I just put them in parentheses cuz that's the [disfmarker] that's the convention and I just" [disfmarker] Cuz you know, if you don't know [disfmarker] [speaker008:] Oh, I'd be curious to [disfmarker] to look at that. [speaker005:] Just out of curiosity, I mean [disfmarker] [speaker008:] They also all have h heavy accents. The networks group meetings are all [disfmarker] [speaker003:] Yeah. [speaker007:] Yeah. [speaker004:] Yeah. 
[speaker005:] Given all of the effort that is going on here in transcribing why do we have I B M doing it? Why not just do it all ourselves? [speaker002:] Um, it's historical. I mean, uh, some point ago we thought that uh, it [disfmarker] "boy, we'd really have to ramp up to do that", [speaker004:] No, just [disfmarker] [speaker003:] Uh huh. [speaker002:] you know, like we just did, [speaker005:] Mm hmm. [speaker002:] and, um, here's, uh, a [disfmarker] a, uh, collaborating institution that's volunteered to do it. [speaker005:] Mm hmm. [speaker002:] So, that was a contribution they could make. Uh in terms of time, money, you know? [speaker005:] Mm hmm. [speaker002:] And it still might be a good thing [speaker005:] I'm just wondering now [disfmarker] [speaker001:] Actu [speaker002:] but [disfmarker] [speaker005:] Well, I'm [disfmarker] I'm wondering now if it's [disfmarker] [speaker001:] yeah, Mar Mari asked me the same question as sort of [disfmarker] [speaker008:] Well we can talk about more details later. [speaker001:] um, you know, yeah, [speaker002:] Yeah. [speaker001:] whether to [disfmarker] [speaker002:] Yeah. Yeah, so. [speaker005:] Hmm. [speaker002:] We'll see. I mean, I think, th you know, they [disfmarker] they [disfmarker] they've proceeded along a bit. Let's see what comes out of it, and [disfmarker] and, uh, you know, have some more discussions with them. [speaker007:] Mm hmm. It's very [disfmarker] a real benefit having Brian involved because of his knowledge of what the [disfmarker] how the data need to be used and so what's useful to have in the format. [speaker008:] Yeah. [speaker002:] Mm hmm. Yeah. [speaker008:] So, um, Liz, with [disfmarker] with the SRI recognizer, [comment] can it make use of some time marks? [speaker001:] OK, so this is a, um, [speaker008:] I [disfmarker] I guess I don't know what that means. 
[speaker001:] and actually I should say this is what Don has b uh, he's already been really helpful in, uh, chopping up these [disfmarker] So [disfmarker] so first of all you [disfmarker] um, I mean, for the SRI front end, we really need to chop things up into pieces that are f not too huge. Um, but second of all, uh [disfmarker] in general because some of these channels, I'd say, like, I don't know, at least half of them probably [comment] on average are g are ha are [disfmarker] have a lot of cross ta sorry, some of the segments have a lot of cross talk. Um, it's good to get sort of short segments if you're gonna do recognition, especially forced alignment. So, uh, Don has been taking a first stab actually using Jane's first [disfmarker] the fir the meeting that Jane transcribed which we did have some problems with, and Thilo, uh, I think told me why this was, but that people were switching microphones around [comment] in the very beginning, so [disfmarker] the SRI re [speaker003:] No, th Yeah. No. They [disfmarker] they were not switching them but what they were [disfmarker] they were adjusting them, [speaker001:] and they [disfmarker] They were not [disfmarker] [speaker006:] Mmm. [speaker008:] Adjusting. Oh. [speaker003:] so. [speaker001:] Yeah. [speaker003:] And aft after a minute or so it's [disfmarker] it's way better. So [disfmarker] [speaker001:] So we have to sort of normalize [comment] the front end and so forth, and have these small segments. [speaker003:] Yep. [speaker001:] So we've taken that and chopped it into pieces based always on your [disfmarker] your, um, cuts that you made on the mixed signal. And so that every [disfmarker] every speaker has the same cuts. And if they have speech in it we run it through. And if they don't have speech in it we don't run it through. And we base that knowledge on the transcription. [speaker008:] On [disfmarker] Just on the marks. Right? 
[speaker001:] Um, the problem is if we have no time marks, then for forced alignment we actually don't know where [disfmarker] you know, in the signal the transcriber heard that word. And so [disfmarker] [speaker008:] Oh, I see, it's for the length. [speaker001:] I mean, if [disfmarker] if it's a whole conversation and we get a long, uh, you know, par paragraph of [disfmarker] of talk, [speaker008:] I see. [speaker001:] uh, I don't know how they do this. Um, we actually don't know which piece goes where. [speaker008:] I understand. [speaker001:] And, um, I think with [disfmarker] [speaker005:] Well you would need to [disfmarker] like a forced alignment before you did the chopping, right? [speaker001:] No, we used the fact that [disfmarker] So when Jane transcribes them the way she has transcribers doing this, [speaker008:] It's already chunked. [speaker001:] whether it's with the pre segmentation or not, they have a chunk and then they transcribes [comment] the words in the chunk. And maybe they choose the chunk or now they use a pre segmentation and then correct it if necessary. [speaker007:] Mm hmm. [speaker001:] But there's first a chunk and then a transcription. Then a chunk, then a transcription. That's great, cuz the recognizer can [disfmarker] [speaker008:] Uh, it's all pretty good sized for the recognizer also. [speaker001:] Right, and it [disfmarker] it helps that it's made based on sort of heuristics and human ear I think. [speaker007:] Good. Oh good. [speaker001:] Th but there's going to be a real problem, uh, even if we chop up based on speech silence these, uh, the transcripts from I B M, we don't actually know where the words were, [speaker008:] Right. [speaker001:] which segment they belonged to. So that's sort of what I'm [pause] worried about right now. [speaker005:] Why not do a [disfmarker] a [disfmarker] a forced alignment? [speaker008:] That's what she's saying, is that you can't. 
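[Editor's note: the chunking scheme Liz describes above, cutting every speaker channel at the same mixed-signal segment boundaries and running a chunk through recognition only if its transcript contains words, could be sketched roughly as follows. The data structures and function names here are illustrative assumptions, not the actual SRI/ICSI tooling.]

```python
# Sketch of the chunking scheme described above: every speaker channel is
# cut at the same time boundaries (taken from the mixed-signal segmentation),
# and a chunk is sent to the recognizer only if its transcript has speech.
# These structures are hypothetical, for illustration only.

def chop_channels(cuts, transcripts):
    """cuts: list of (start, end) times from the mixed-signal segmentation.
    transcripts: dict mapping (speaker, (start, end)) -> transcribed text.
    Returns the per-speaker chunks worth running through recognition."""
    jobs = []
    speakers = {spk for (spk, _seg) in transcripts}
    for seg in cuts:
        for spk in sorted(speakers):
            words = transcripts.get((spk, seg), "").strip()
            if words:                      # skip silent chunks entirely
                jobs.append((spk, seg, words))
    return jobs

cuts = [(0.0, 4.2), (4.2, 9.7)]
transcripts = {
    ("A", (0.0, 4.2)): "so we chopped it into pieces",
    ("B", (0.0, 4.2)): "",                 # B silent here: not recognized
    ("B", (4.2, 9.7)): "mm hmm",
}
jobs = chop_channels(cuts, transcripts)
# Each job pairs a speaker channel with a time slice and its reference
# words, ready for forced alignment on that short chunk alone.
```

Because every channel shares the same cuts, the reference words for each chunk are known, which is exactly what forced alignment needs and what an untimed whole-meeting transcript would not provide.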
[speaker001:] If you do a forced alignment on something really [disfmarker] [speaker008:] Got uh six sixty minutes of [disfmarker] [speaker001:] well even if you do it on something really long you need to know [disfmarker] you can always chop it up but you need to have a reference of which words went with which, uh, chop. [speaker007:] Now wasn't [disfmarker] I thought that one of the proposals was that IBM was going to do an initial forced alignment, [speaker001:] So [disfmarker] [speaker007:] after they [disfmarker] [speaker008:] Yeah, but [disfmarker] [speaker002:] I [disfmarker] I think that they are, [speaker008:] We'll have to talk to Brian. [speaker002:] um, yeah, I'm sure they will and so we [disfmarker] we have to have a dialogue with them about it. [speaker001:] Yeah. [speaker002:] I mean, it sounds like Liz has some concerns [speaker001:] Maybe they have some [disfmarker] you know, maybe actually there is some, even if they're not fine grained, maybe the transcribers [disfmarker] [speaker002:] and [disfmarker] [speaker001:] uh, I don't know, maybe it's saved out in pieces or [disfmarker] or something. That would help. [speaker007:] Yeah. [speaker001:] But, uh, it's just an unknown right now. [speaker007:] Yeah. I [disfmarker] I need to [disfmarker] to write to him. [speaker001:] So. [speaker007:] I just [disfmarker] you know, it's like I got over taxed with the timing. [speaker001:] Right. But the [disfmarker] it is true that the segments [disfmarker] I haven't tried the segments that Thilo gave you but the segments that in your first meeting are great. [speaker007:] Mm hmm. [speaker001:] I mean, that's [disfmarker] that's a good length. [speaker007:] A good size. Good. [speaker001:] Right, cuz [disfmarker] [speaker007:] Well, I [disfmarker] I was thinking it would be fun to [disfmarker] to [disfmarker] uh, uh, if [disfmarker] if you [disfmarker] wouldn't mind, [comment] [vocalsound] to give us a pre segmentation. [speaker001:] y yeah. 
[speaker003:] Yeah. [speaker007:] Uh, maybe you have one already of that first m of the meeting that uh, the first transcribed meeting, the one that I transcribed. Do you have a [disfmarker] could you generate a pre segmentation? [speaker003:] Um, I'm sure I have some [speaker008:] February sixteenth I think. [speaker003:] but [disfmarker] but that's the one where we're, um, trai training on, so that's a little bit [disfmarker] [speaker008:] Oh. [speaker007:] Oh, I see. Oh, darn. [speaker003:] It's a little bit at odd to [disfmarker] [speaker007:] Of course, of course, of course. Yeah, OK. [speaker003:] Yeah. [speaker001:] And actually as you get transcripts just, um, for new meetings, [comment] um, we can try [disfmarker] [speaker007:] Uh huh. [speaker001:] I mean, the [disfmarker] the more data we have to try the [disfmarker] the alignments on, um, the better. So it'd be good for [disfmarker] just to know as transcriptions are coming through the pipeline from the transcribers, just to sort of [disfmarker] we're playing around with sort of uh, parameters f on the recognizer, [speaker007:] Mm hmm. [speaker001:] cuz that would be helpful. [speaker007:] Excellent, good. [speaker001:] Especially as you get, en more voices. The first meeting had I think just four people, [speaker003:] Four speakers, [speaker007:] Mm hmm. [speaker003:] yeah. [speaker007:] Yeah, Liz and I spoke d w at some length on Tuesday [speaker001:] yeah. [speaker007:] and [disfmarker] and I [disfmarker] and I was planning to do just a [disfmarker] a preliminary look over of the two that are finished and then give them to you. [speaker001:] Oh, great, great. [speaker007:] Yeah. [speaker001:] So. [speaker002:] That's great. 
I guess the other thing, I [disfmarker] I can't remember if we discussed this in the meeting but, uh, I know you and I talked about this a little bit, there was an issue of, uh, suppose we get in the, uh, I guess it's enviable position although maybe it's just saying where the weak link is in the chain, uh, where we [disfmarker] we, uh [disfmarker] uh, we have all the data transcribed and we have these transcribers and we were [disfmarker] we're [disfmarker] the [disfmarker] we're still a bit slow on feeding [disfmarker] at that point we've caught up and the [disfmarker] the [disfmarker] the, uh, the weak link is [disfmarker] is recording meetings. OK, um, two questions come, is you know what [disfmarker] how [disfmarker] how do we [disfmarker] uh, it's not really a problem at the moment cuz we haven't reached that point but how do we step out the recorded meetings? And the other one is, um, uh, is there some good use that we can make of the transcribers to do other things? So, um, I [disfmarker] I can't remember how much we talked about this in this meeting [speaker008:] We had spoken with them about it. [speaker002:] but there was [disfmarker] [speaker007:] And there is one use that [disfmarker] that also we discussed which was when, uh, Dave finishes the [disfmarker] and maybe it's already finished [disfmarker] the [disfmarker] the modification to multi trans which will allow fine grained encoding of overlaps. Uh, then it would be very [disfmarker] these people would be very good to shift over to finer grain encoding of overlaps. 
It's just a matter of, you know, providing [disfmarker] So if right now you have two overlapping segments in the same time bin, well with [disfmarker] with the improvement in the database [disfmarker] in [disfmarker] in the, uh, sorry, in the interface, it'd be possible to, um, you know, just do a click and drag thing, and get the [disfmarker] uh, the specific place of each of those, the time tag associated with the beginning and end of [disfmarker] of each segment. [speaker002:] Right, so I think we talking about three level [disfmarker] three things. [speaker001:] Mm hmm. [speaker007:] Yeah. [speaker002:] One [disfmarker] one was uh, we had s had some discussion in the past about some very high level labelings, [speaker007:] The types of overlaps [disfmarker] [speaker002:] types of overlaps, and so forth that [disfmarker] that someone could do. [speaker007:] Mm hmm. [speaker002:] Second was, uh, somewhat lower level just doing these more precise timings. And the third one is [disfmarker] is, uh, just a completely wild hair brained idea that I have which is that, um, if, uh [disfmarker] if we have time and people are able to do it, to take some subset of the data and do some very fine grained analysis of the speech. For instance, uh, marking in some overlapping [disfmarker] potentially overlapping fashion, uh, the value of, uh, ar articulatory features. [speaker007:] Yeah. [speaker002:] You know, just sort of say, OK, it's voiced from here to here, there's [disfmarker] it's nasal from here to here, and so forth. Um, as opposed to doing phonetic [disfmarker] uh, you know, phonemic and the phonetic analysis, and, uh, assuming, uh, articulatory feature values for those [disfmarker] those things. Um, obviously that's extremely time consuming. Uh [disfmarker] [speaker005:] That would be really valuable I think. [speaker002:] but, uh, we could do it on some small subset. [speaker007:] Also if you're dealing with consonants that would be easier than vowels, wouldn't it? 
I mean, I would think that [disfmarker] that, uh, being able to code that there's a [disfmarker] a fricative extending from here to here would be a lot easier than classifying precisely which vowel that was. [speaker008:] Which one. [speaker001:] Mmm. [speaker007:] I think vowels [disfmarker] vowels are I think harder. [speaker002:] Mm hmm. [speaker003:] Yeah. [speaker002:] Well, yeah, but I think also it's just the issue that [disfmarker] that when you look at the [disfmarker] u w u u when you look at Switchboard for instance very close up there are places where whether it's a consonant or a vowel you still have trouble calling it a particular phone [speaker007:] Mm hmm. Mm hmm, OK. [speaker008:] Yeah, but [disfmarker] but just saying what the [disfmarker] [speaker002:] at that point because it's [disfmarker] you know, there's this movement from here to here [speaker007:] Yeah, I'm sure. Uh, yeah, I [disfmarker] I know. [speaker001:] Right. [speaker002:] and [disfmarker] and [disfmarker] and it's [disfmarker] [speaker005:] You're saying r sort of remove the high level constraints [speaker002:] so I [speaker005:] and go bottom up. [speaker008:] Yep, just features. [speaker005:] Then just say [disfmarker] [speaker002:] Yeah, describe [disfmarker] describe it. [speaker001:] Mmm. [speaker002:] Now I'm suggesting articulatory features. Maybe there's [disfmarker] there's even a better way to do it but it [disfmarker] but [disfmarker] but that's, you know, sort of a traditional way of describing these things, [speaker005:] Mm hmm. [speaker002:] um, and [disfmarker] uh, [speaker007:] That's nice. [speaker002:] I mean, actually this might be a g neat thing to talk to [disfmarker] [speaker005:] Acoustic features versus psychological categories. [speaker002:] Sort of. [speaker007:] Yeah. [speaker002:] I mean, it's still [disfmarker] [speaker005:] Yeah. 
[speaker002:] some sort of categories but [disfmarker] but something that allows for overlapping change of these things and then this would give some more ground work for people who were building statistical models that allowed for overlapping changes, different timing changes as opposed to just "click, you're now in this state, which corresponds to this speech sound" and so on. [speaker005:] Mm hmm. Mm hmm. [speaker001:] So this is like gestural [disfmarker] uh, these g [speaker002:] Yeah, something like that. [speaker001:] Right. [speaker002:] I mean, actually if we get into that it might be good to, uh, uh, haul John Ohala into this [speaker001:] OK. [speaker002:] and ask his [disfmarker] his views on it I think. [speaker008:] Yeah. [speaker001:] Right. But is [disfmarker] is the goal there to have this on meeting data, [speaker007:] Excellent. [speaker001:] like so that you can do far field studies [comment] of those gestures or [disfmarker] um, or is it because you think there's a different kind of actual production in meetings [comment] that people use? [speaker002:] No, I think [disfmarker] I think it's [disfmarker] for [disfmarker] for [disfmarker] for that purpose I'm just viewing meetings as being a [disfmarker] a neat way to get people talking naturally. [speaker001:] Or [disfmarker]? [speaker002:] And then you have i and then [disfmarker] and then it's natural in all senses, [speaker005:] Just a source of data? [speaker001:] I see. [speaker002:] in the sense that you have microphones that are at a distance that you know, one might have, and you have the close mikes, and you have people talking naturally. And the overlap is just indicative of the fact that people are talking naturally, [speaker001:] Uh huh. [speaker004:] Yeah. [speaker002:] right? So [disfmarker] so I think that given that it's that kind of corpus, [speaker001:] Right. [speaker004:] Yeah. 
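[Editor's note: one way to picture the overlapping articulatory-feature markup sketched above, voicing from here to here, nasality from here to here, with no single phone-state sequence imposed, is as independent labeled interval tiers. This representation is only an illustration of the idea, not a proposed annotation format.]

```python
# Illustrative only: overlapping articulatory-feature annotation as
# independent interval tiers rather than one phone-state sequence.
# Tiers may overlap freely, so voicing can start before nasality ends.

from collections import defaultdict

class FeatureTiers:
    def __init__(self):
        # feature name -> list of (start, end, value) intervals
        self.tiers = defaultdict(list)

    def mark(self, feature, start, end, value):
        self.tiers[feature].append((start, end, value))

    def active_at(self, t):
        """All feature values whose interval covers time t."""
        return {feat: val
                for feat, spans in self.tiers.items()
                for (s, e, val) in spans if s <= t < e}

ann = FeatureTiers()
ann.mark("voicing", 0.10, 0.48, "voiced")
ann.mark("nasality", 0.05, 0.30, "nasal")    # overlaps voicing on purpose
ann.mark("manner", 0.30, 0.48, "fricative")

# At 0.20 the segment is simultaneously voiced and nasal; a statistical
# model over such tiers can let features change at different times.
print(ann.active_at(0.20))
```

A model trained on this kind of markup could allow each feature to change asynchronously, instead of the "click, you're now in this state" behavior mentioned above.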
[speaker002:] if it's gonna be a very useful corpus um, if you say w OK, we've limited the use by some of our, uh, uh, censored choices, we don't have the video, we don't [disfmarker] and so forth, but there's a lot of use that we could make of it by expanding the annotation choices. [speaker001:] Mm hmm. [speaker002:] And, uh, most of the things we've talked about have been fairly high level, and being kind of a bottom up person I thought maybe we'd, [vocalsound] do some of the others. [speaker008:] Hmm. [speaker007:] It's a nice balance. [speaker001:] Right. Yeah, that would be good. [speaker007:] That would be really nice to offer those things with that wide range. [speaker002:] Yeah. [speaker001:] Right. [speaker002:] Yeah and hopefully someone would make use of it. [speaker007:] Really nice. [speaker002:] I mean, people didn't [disfmarker] [speaker007:] Yeah. [speaker002:] uh, I mean, people have made a lot of use of [disfmarker] of TIMIT and, uh w due to its markings, and then [pause] the Switchboard transcription thing, well I think has been very useful for a lot of people. [speaker008:] Right. [speaker001:] That's true. [speaker002:] So [disfmarker] [speaker001:] I guess I wanted to, um, sort of make a pitch for trying to collect more meetings. [speaker007:] Cool. [speaker001:] Um, [speaker002:] Yeah. [speaker001:] I actually I talked to Chuck Fillmore and I think they've what, vehemently said no before but this time he wasn't vehement and he said you know, "well, Liz, come to the meeting tomorrow [speaker002:] Yeah. [speaker001:] and try to convince people". So I'm gonna [pause] try. Go to their meeting tomorrow [speaker007:] Mm hmm. [speaker001:] and see if we can try, uh, to convince them [speaker007:] Good.

[speaker002:] Cuz they have something like three or four different meetings, [speaker001:] because they have [disfmarker] And they have very interesting meetings from the point of view of a very different type of [disfmarker] of talk than we have here [speaker002:] right? [speaker007:] Mm hmm. [speaker004:] Talk [disfmarker] [speaker001:] and definitely than the front end meeting, probably. Um [disfmarker] [speaker005:] You mean in terms of the topic [disfmarker] topics? [speaker001:] Well, yes and in terms of the [disfmarker] the fact that they're describing abstract things and, uh, just dialogue wise, [speaker002:] Mm hmm. [speaker001:] right. [speaker005:] Mm hmm. [speaker001:] Um, so I'll try. And then the other thing is, I don't know if this is at all useful, but I asked Lila if I can maybe go around and talk to the different departments in this building [speaker002:] Yes. [speaker001:] to see if there's any groups that, for a free lunch, if we can still offer that, [speaker008:] You mean non ICSI? [speaker002:] Great. [speaker001:] might be willing [disfmarker] non ICSI, non academic, [speaker008:] Yeah, I guess you [disfmarker] you can try [speaker001:] you know, like government people, [speaker008:] but [disfmarker] [speaker001:] I don't know. [speaker008:] The problem is so much of their stuff is confidential. [speaker001:] So. [speaker008:] It would be very hard for them. [speaker007:] Yeah. [speaker004:] Yeah. [speaker007:] Also it does seem like it takes us way out of the demographic. [speaker004:] Yeah. [speaker001:] Is [disfmarker] is it in these departments? [speaker007:] I mean, it seems like we [disfmarker] we had this idea before of having like linguistics students brought down for free lunches [speaker008:] Well, tha I think that's her point. [speaker001:] Right, and then we could also [disfmarker] we might try advertising again [speaker007:] and that's a nice idea. 
[speaker001:] because I think it'd be good if [disfmarker] if we can get a few different sort of non internal types of meetings [speaker007:] Yeah. [speaker002:] Yeah. [speaker001:] and just also more data. [speaker007:] Mm hmm. [speaker008:] And I think, uh, if we could get [disfmarker] [speaker005:] Does [disfmarker] does John Ohala have weekly phonetics lab meetings? [speaker001:] So. So I actually wrote to him and he answered, "great, that sounds really interesting". But I never heard back because we didn't actually advertise openly. We a I mean w I told [disfmarker] I d asked him privately. Um, and it is a little bit of a trek for campus [pause] folks. [speaker008:] Yeah. [speaker007:] Mm hmm. [speaker008:] You might give them a free lunch. But, um, [speaker001:] Um, so it's still worthwhile. [speaker008:] it would be nice if we got someone other than me who knew how to set it up and could do the recording [speaker001:] So [disfmarker] [speaker008:] so u I didn't have to do it each time. [speaker001:] Exactly, [speaker007:] Yeah. That's right. [speaker001:] and [disfmarker] and [disfmarker] and I was thinking [disfmarker] [speaker002:] He he's supposed [disfmarker] he's supposed to be trained [vocalsound] to do it. [speaker001:] Yeah. Plus we could also get you know, a s a student. [speaker008:] OK, next week [pause] you're going to do it all. [speaker004:] Yeah. [speaker001:] And I'm willing to try to learn. I mean, I'm [disfmarker] I would do my best. Um, the other thing is that [disfmarker] there was a number of things at the transcription side that, um, transcribers can do, like dialogue act tagging, [speaker008:] It's not that hard. [speaker001:] disfluency tagging, um, things that are in the speech that are actually something we're y [comment] working on for language modeling. And Mari's also interested in it, Andreas as well. 
So if you wanna process a utterance and the first thing they say is, "well", and that "well" is coded as some kind of interrupt u tag. Uh, and things like that, um, [speaker007:] Of course some of that can be li done lexically. [speaker001:] th [speaker007:] And I also [disfmarker] they are doing disfluency tagging to some degree already. [speaker001:] A lot of it can be done [disfmarker] Great. So a [disfmarker] a lot of this kind of [disfmarker] [speaker007:] Yeah. [speaker001:] I think there's a second pass and I don't really know what would exist in it. But there's definitely a second pass worth doing to maybe encode some kinds of, you know, is it a question or not, [speaker007:] Mm hmm. [speaker001:] or [disfmarker] um, that maybe these transcribers could do. So [disfmarker] [speaker007:] They'd be really good. [speaker001:] Yeah. [speaker007:] They're [disfmarker] they're very [disfmarker] they're very consistent. [speaker002:] Mm hmm. [speaker007:] Uh, I wanted to [disfmarker] whi while we're [disfmarker] Uh, so, to return just briefly to this question of more meeting data, [speaker001:] That'd be great. [speaker007:] um [disfmarker] I have two questions. One of them is, um, Jerry Feldman's group, they [disfmarker] they, uh, are they [disfmarker] I know that they recorded one meeting. Are they willing? [speaker002:] I think they're open to it. I think, you know, all these things are [disfmarker] [speaker001:] Oh, yeah. [speaker002:] I think there's [disfmarker] we should go beyond, uh, ICSI but, I mean, there's a lot of stuff happening at ICSI that we're not getting now that we could. [speaker007:] Mm hmm. [speaker002:] So it's just [disfmarker] [speaker001:] Oh, that we could. OK. [speaker002:] Yeah. So the [disfmarker] [speaker001:] I thought that all these people had sort of said "no" twice already. [speaker002:] No, no. No. 
[speaker001:] If that's not the case then [disfmarker] [speaker002:] So th there was the thing in Fillmore's group but even there he hadn't [disfmarker] What he'd said "no" to was for the main meeting. But they have several smaller meetings a week, [speaker008:] So. [speaker002:] and, uh, the notion was raised before that that could happen. And it just, you know [disfmarker] it just didn't come together [speaker001:] Just [disfmarker] [speaker005:] Well, and [disfmarker] and the other thing too is when they originally said "no" they didn't know about this post editing capability thing. [speaker001:] OK. [speaker002:] but [disfmarker] [speaker007:] Oh. [speaker001:] Right. [speaker002:] Yeah. [vocalsound] Yeah. [speaker001:] Right. That was a big fear. [speaker007:] That's important. [speaker005:] So. [speaker002:] Yeah, so I mean there's possibilities there. I think Jerry's group, yes. Uh, there's [disfmarker] there's, uh, the networks group, [speaker001:] OK. [speaker002:] uh, I don't [disfmarker] Do they still meeting regularly or [disfmarker]? [speaker008:] Well, I don't know if they meet regularly or not but they are no longer recording. [speaker002:] But I mean, ha ha have they said they don't want to anymore or [disfmarker]? [speaker008:] Um, ugh, what was his name? [speaker002:] Uh, i i [speaker007:] Joe Sokol? [speaker008:] Yeah. [speaker002:] Yeah. [speaker008:] When [disfmarker] with him gone, it sorta trickled off. They [disfmarker] and they stopped [disfmarker] [speaker002:] OK, so they're down to three or four people [speaker008:] Yeah. [speaker002:] but the thing is three or four people is OK. [speaker001:] Mm hmm. [speaker008:] Yep. [speaker007:] We might be able to get the administration [disfmarker] [speaker008:] Well he was sort of my contact, so I just need to find out who's running it now. So. [speaker002:] OK. [speaker007:] I see that Lila has a luncheon meeting in here periodically. 
I don't know [disfmarker] [speaker001:] Yeah, I mean, it [disfmarker] One thing that would be nice and this [disfmarker] it sounds bizarre but, I'd really like to look at [disfmarker] to get some meetings where there's a little bit of heated discussion, like ar arguments and [disfmarker] or emotion, and things like that. And so I was thinking if there's any like Berkeley political groups or something. I mean, that'd be perfect. Some group, "yes, we must [disfmarker]" [speaker008:] Who's willing to get recorded and distributed? [speaker001:] Well, you know, something [disfmarker] [speaker006:] Yeah, I don't think the more political argumentative ones would be willing to [disfmarker] [speaker003:] Yeah. [speaker001:] Um [disfmarker] [speaker002:] Yeah, [speaker004:] Yeah. [speaker002:] with [disfmarker] with [disfmarker] with potential use from the defense department. [speaker001:] Well, OK. [speaker008:] Yeah. [speaker006:] Yeah. [vocalsound] Exactly. [speaker002:] Yeah. [speaker007:] Yeah. [speaker003:] Yeah. [speaker001:] No, but maybe stu student, uh, groups or, um, film makers, or som Something a little bit colorful. [speaker004:] Yeah. [speaker007:] Yeah, of course there is this problem though, that if we give them the chance to excise later we e [vocalsound] might end up with like five minutes out of a f [comment] [pause] of m one hour [speaker002:] Yeah. [vocalsound] [vocalsound] [vocalsound] [vocalsound] Yeah, th there's a problem there in terms of, uh, the um commercial value of [disfmarker] of st uh, [speaker004:] Film maker. [speaker008:] Of beeps, [speaker003:] Yeah. [speaker008:] yeah. [speaker003:] Yeah. [speaker004:] Is [disfmarker] [speaker007:] of [disfmarker] [comment] Yes. Really. [speaker001:] And I don't mean that they're angry but just something with some more variation in prosodic contours and so forth would be neat. 
So if anyone has ideas, I'm willing to do the leg work to go try to talk to people but I don't really know which groups are worth pursuing. [speaker007:] Well there was this K P F A [speaker008:] No that's [disfmarker] [speaker007:] but [disfmarker] OK. [speaker008:] Legal. [speaker007:] OK, OK. [speaker002:] it [disfmarker] it [disfmarker] it [disfmarker] it turned out to be a bit of a problem. [speaker001:] Or [disfmarker] [speaker007:] And I had one other [disfmarker] one other aspect of this which is, um, uh, uh, Jonathan Fiscus expressed primar uh y a major interest in having meetings which were all English speakers. Now he wasn't trying to shape us in terms of what we gather [speaker002:] Mm hmm. [speaker007:] but that's what he wanted me to show him. So I'm giving him our, um [disfmarker] our initial meeting because he asked for all English. And I think we don't have a lot of all English meetings right now. [speaker005:] Did he mean, uh [disfmarker] did he mean and non British? [speaker002:] Of all [disfmarker] all nat all native speakers. [speaker008:] Well [disfmarker] [speaker003:] The all native. [speaker007:] That's what I mean, yeah. [speaker008:] Well if he meant and non British I think we have zero. [speaker007:] He doesn't care. No. Eh, well, British is OK. But [disfmarker] but [disfmarker] [speaker005:] He said British was OK? [speaker007:] Sure, sure, sure. [speaker008:] British is English? [speaker002:] Why? [speaker007:] Yeah. Different varieties of English. [speaker003:] Ooo, ooo. [speaker002:] Well, I don't [disfmarker] I don't [disfmarker] I don't think [disfmarker] if he didn't say that [disfmarker] [speaker007:] Native speaking. Native speaking English. [speaker008:] I bet he meant native speaking American. [speaker007:] Yes. [speaker002:] I bet he did. [speaker003:] American English? [speaker007:] Oh, really. [speaker008:] So, why would he care? 
[speaker005:] Knowing the application [disfmarker] [speaker002:] I remember wh I I remember a study [disfmarker] [speaker001:] That's [disfmarker] I was thinking, knowing the, uh, n National Institute of Standards, it is all [disfmarker] [speaker002:] I remember a study that BBN did where they trained on [disfmarker] this was in Wall Street Journal days or something, they trained on American English and then they tested on, uh, different native speakers from different areas. And, uh, uh, the worst match was people whose native tongue was Mandarin Chinese. The second worst was British English. [speaker007:] That's funny. [speaker002:] So h it's, you know, t [speaker007:] Alright. And so that would make sense. [speaker002:] the [disfmarker] the [disfmarker] the [disfmarker] German was much better, [speaker007:] I didn't have the context of that. [speaker003:] Ooo, ooo. [speaker002:] it was Swiss w Yeah, so it's [disfmarker] so I think, you know, if he's [disfmarker] if he's thinking in terms of recognition kind of technology I [disfmarker] I [disfmarker] I think he would probably want, uh [vocalsound] American English, [speaker008:] I wonder if we have any. [speaker007:] All America, OK. [speaker002:] yeah. It [disfmarker] it [disfmarker] yeah, unless we're gonna train with a whole bunch of [disfmarker] [speaker007:] I think that the [disfmarker] Feldman's meetings tend to be more that way, aren't they? I mean, I sort of feel like they have [disfmarker] [speaker008:] Maybe. [speaker002:] I think so, [speaker008:] Maybe. [speaker002:] yeah. [speaker001:] Yeah, mm hmm. [speaker002:] Yeah. [speaker004:] Mmm. [speaker008:] And maybe there are a few of [disfmarker] with us where it was [disfmarker] you know, Dan wasn't there [speaker002:] Yeah. [speaker008:] and before Jose started coming, [speaker002:] Yeah. [speaker004:] Yeah. [speaker008:] and [disfmarker] [speaker002:] It's pretty tough, uh, this group. Yeah. [speaker008:] Yeah. [speaker003:] Mmm. [speaker004:] Yeah. 
[speaker002:] So, uh, what about [disfmarker] what about people who involved in some artistic endeavor? I mean, film making or something like that. [speaker001:] Exactly, that's what I was [disfmarker] [speaker002:] You'd think like they would be [disfmarker] [speaker004:] A film maker. [speaker001:] something where there [disfmarker] there is actually discussion where there's no right or wrong answer but [disfmarker] but it's a matter of opinion kind of thing. [speaker007:] It's be fun. [speaker001:] Uh, anyway, if you [disfmarker] if you have ideas [disfmarker] [speaker008:] RASTA. PLP. RASTA. PLP. [speaker006:] We can just discu we can just have a political discussion one day. [speaker004:] Yes. [speaker005:] A any department that calls itself science [speaker001:] Yeah, we could [disfmarker] [speaker004:] Department. [speaker006:] Uh, I could make that pretty [disfmarker] [speaker008:] Well, like computer science. [speaker004:] Yeah. [speaker008:] That [disfmarker] [speaker004:] Computer sci [speaker007:] We could get Julia Child. I know. [speaker008:] That's [disfmarker] [speaker001:] I'm [disfmarker] I'm actually serious [speaker008:] Got a ticket. [speaker001:] because, uh, you know, we have the set up here [speaker002:] Yeah, I know you are. [speaker001:] and [disfmarker] and that [disfmarker] that has a chance to give us some very interesting fun data. So if anyone has ideas, [speaker004:] Yeah. [speaker002:] Yeah. [speaker001:] if you know any groups that are m you know, [speaker008:] Well I had asked some [disfmarker] some of the students at the business school. [speaker004:] Yeah. [speaker006:] I know [disfmarker] [speaker001:] student groups c like clubs, things like that. [speaker008:] I could [disfmarker] [speaker002:] Put a little ad up saying, "come here and argue". [speaker001:] Not [disfmarker] not [disfmarker] Yeah. [speaker008:] The Business school. [speaker001:] "If you're really angry at someone use our conference room." 
[speaker008:] Uh, the business school might be good. I actually spoke with some students up there [speaker001:] Oh, OK. [speaker008:] and they [disfmarker] they [disfmarker] they expressed willingness back when they thought they would be doing more stuff with speech. [speaker001:] Really. [speaker008:] But when they lost interest in speech they also [pause] stopped answering my email about other stuff, so. [speaker004:] Hmm. [speaker006:] I [disfmarker] [speaker002:] They could have a discussion about te [speaker001:] Or people who are really h [speaker008:] We should probably bleep that out. [speaker002:] about [disfmarker] about tax cuts or something. [speaker006:] I heard that at Cal Tech they have a special room [disfmarker] someone said that they had a special room to get all your frustrations out that you can go to and like throw things and break things. So we can like post a [disfmarker] [speaker002:] Yeah, now that is not actually what we [disfmarker] [speaker008:] Th that's not what we want. [speaker006:] No, not to that extent but, um. [speaker001:] Well, far field mikes can pick up where they threw stuff on the wall. [speaker006:] Yeah. [speaker002:] Yeah, but we don't want them to throw the far field mikes is the thing. [speaker008:] That's right. [speaker001:] Oh. [vocalsound] Yeah, right. [speaker006:] Yeah. [speaker004:] The fa [speaker008:] "Please throw everything in that direction." [speaker002:] Yeah. Anyway. [speaker008:] Padded cell. [speaker007:] It'd be fun to get like a [disfmarker] a p visit from the [disfmarker] [speaker008:] There was a dorm room at Tech that, uh, someone had coated the walls and the ceiling, and, uh, the floor with mattresses. [speaker006:] Mmm. [speaker008:] The entire room. [speaker002:] I had as my fourth thing here processing of wave forms. [speaker007:] Yeah. [speaker002:] What did we mean by that? [speaker008:] Uh, Liz wanted to talk about methods of improving accuracy by doing pre processing. 
[speaker002:] Remember [@ @]? [speaker007:] Pre processing. [speaker001:] Well I think that [disfmarker] that was just sort of [disfmarker] I I already asked Thilo [speaker002:] Oh, you already did that. [speaker001:] but that, um, it would be helpful if I can stay in the loop somehow with, um, people who are doing any kind of post processing, whether it's to separate speakers or to improve the signal to noise ratio, or both, um, that we can sort of try out as we're running recognition. Um, so, i is that [disfmarker] Who else is work I guess Dan Ellis and you [speaker003:] Dan, yeah. [speaker008:] Yep. [speaker002:] Yeah, and Dave uh [pause] Gel Gelbart again, [speaker001:] and Dave. [speaker003:] Yep. [speaker004:] Yeah. [speaker001:] OK. [speaker002:] he's [disfmarker] he's interested in [disfmarker] in fact we're look starting to look at some echo cancellation kind of things. [speaker001:] OK. [speaker002:] Which uh [disfmarker] [speaker008:] I am not sure how much that's an issue with the close talking mikes, [speaker002:] Hmm? [speaker008:] but who knows? [speaker002:] Well, let's [disfmarker] w i isn't that what [disfmarker] what you want [disfmarker] [speaker001:] I don't know. I'm bad [disfmarker] [speaker002:] t No, so [disfmarker] No, i w wha what you [disfmarker] what you want [disfmarker] when you're saying improving the wave form you want the close talking microphone to be better. [speaker001:] It's like [disfmarker] [comment] [vocalsound] like [disfmarker] [speaker008:] Right. [speaker002:] Right? And the question is to w to what extent is it getting hurt by, uh [disfmarker] by any room acoustics or is it just [disfmarker] uh, given that it's close it's not a problem? Uh [disfmarker] [speaker001:] It doesn't seem like big room acoustics problems to my ear but I'm not an expert. [speaker002:] OK, so it's [disfmarker] [speaker001:] It seems like a problem with cross talk. 
[speaker008:] e I bet with the lapel mike there's plenty, uh, room acoustic [speaker003:] Yeah. [speaker008:] but I I think the rest is cross talk. [speaker001:] That [disfmarker] that may be true. But I don't know how good it can get either by those [disfmarker] the [disfmarker] those methods [disfmarker] [speaker008:] Yeah. [speaker001:] That's true. [speaker008:] So I [disfmarker] I think it's just, [speaker002:] OK. [speaker001:] Oh, I don't know. [speaker008:] yeah, what you said, cross talk. [speaker001:] All I meant is just that as sort of [disfmarker] as this pipeline of research is going on we're also experimenting with different ASR, uh, techniques. [speaker008:] Mm hmm. [speaker001:] And so it'd be w good to know about it. [speaker005:] So the problem is like, uh, on the microphone of somebody who's not talking they're picking up signals from other people [comment] and that's [vocalsound] causing problems? [speaker001:] R right, although if they're not talking, using the [disfmarker] the inhouse transcriptions, were sort of O K because the t no one transcribed any words there and we throw it out. [speaker005:] Mm hmm. [speaker001:] But if they're talking at all and they're not talking the whole time, so you get some speech and then a "mm hmm", and some more speech, so that whole thing is one chunk. And the person in the middle who said only a little bit is picking up the speech around it, that's where it's a big problem. [speaker007:] You know, this does like seem like it would relate to some of what Jose's been working on as well, the encoding of the [disfmarker] [speaker004:] Yeah. [speaker007:] And [disfmarker] and he also, he was [disfmarker] [speaker001:] The energy, [speaker004:] Yeah, [speaker001:] right. Exactly. [speaker004:] energy. 
[speaker007:] I was t I was trying to remember, you have this interface where you [disfmarker] i you ha you showed us one time on your laptop that you [disfmarker] you had different visual displays as speech and nonspeech events. [speaker004:] Yeah, c Yeah. May [disfmarker] I [disfmarker] I only display the different colors for the different situation. But, eh, for me and for my problems, is uh [disfmarker] is enough. Because, eh, it's possible, eh, eh, in a simp sample view, uh, to, nnn, to compare with c with the segment, the [disfmarker] the kind of assessment what happened with the [disfmarker] the different parameters. And only with a different bands of color for the, uh, few situation, eh, I consider for acoustic event is enough to [@ @]. [speaker007:] Mm hmm. [speaker004:] I [disfmarker] I [disfmarker] [vocalsound] I see that, eh, you are considering now, eh, a very sophisticated, eh, ehm, eh, [@ @] [comment] set of, eh, graphic s eh, eh, ehm, si symbols to [disfmarker] to transcribe. No? [speaker007:] Oh, I w Uh huh. [speaker004:] Because, uh, before, you [disfmarker] you are talking about the [disfmarker] the possibility to include in the Transcriber program eh, um, a set of symbols, of graphic symbol to [disfmarker] t to mark the different situations during the transcription during the transcription. No? [speaker007:] Well, you're saying [disfmarker] So, uh, symbols for differences between laugh, and sigh, and [disfmarker] and [disfmarker] and slam the door and stuff? [speaker004:] Yeah. [speaker007:] Or some other kind of thing? [speaker004:] Yeah, yeah. The s the symbols, you [disfmarker] you talk of before. No? To [disfmarker] to mark [disfmarker] [speaker007:] Well, I wouldn't say [vocalsound] symbols so much. The [disfmarker] the main change that I [disfmarker] that I see in the interface is [disfmarker] is just that we'll be able to more finely c uh, time things. [speaker004:] Yeah. Yeah. 
[speaker007:] But I [disfmarker] I also st there was another aspect of your work that I was thinking about when I was talking to you which is that it sounded to me, Liz, as though you [disfmarker] [speaker001:] Hmm. [speaker007:] and, uh, maybe I didn't q understand this, but it sounded to me as though part of the analysis that you're doing involves taking segments which are of a particular type and putting them together. And th so if you have like a p a s you know, speech from one speaker, [pause] then you cut out the part that's not that speaker, [speaker004:] Yeah. [speaker001:] Mm hmm. [speaker007:] and you combine segments from [pause] that same speaker to [disfmarker] [comment] and run them through the recognizer. Is that [pause] right? [speaker001:] Well we try to find as close of start and end time of [disfmarker] as we can to the speech from an individual speaker, [speaker007:] Mm hmm. [speaker001:] because then we [disfmarker] we're more guaranteed that the recognizer will [disfmarker] for the forced alignment which is just to give us the time boundaries, because from those time boundaries then the plan is to compute prosodic features. [speaker007:] Mm hmm. [speaker001:] And the sort of more space you have that isn't the thing you're trying to align the more errors we have. [speaker007:] Mm hmm. [speaker001:] Um, so, you know, that [disfmarker] that [disfmarker] it would help to have either pre processing of a signal that creates very good signal to noise ratio, [speaker007:] Cuz i OK. [speaker001:] which I don't know how possible this is for the lapel, um, or to have very [disfmarker] to have closer, [vocalsound] um, time [disfmarker] you know, synch times, basically, around the speech that gets transcribed in it, or both. And it's just sort of a open world right now of exploring that. So I just wanted to [pause] see, you know, on the transcribing end from here things look good. Uh, the IBM one is more [disfmarker] it's an open question right now. 
And then the issue of like global processing of some signal and then, you know, before we chop it up is [disfmarker] is yet another way we can improve things in that. [speaker005:] What about increasing the flexibility of the alignment? [speaker007:] OK. [speaker005:] Do you remember that thing that Michael Finka did? that experiment he did a while back? [speaker001:] Mm hmm. Right. You can, um [disfmarker] The problem is just that the acoustic [disfmarker] when the signal to noise ratio is too low, um, you [disfmarker] you'll get, a uh [disfmarker] an alignment with the wrong duration pattern [speaker005:] Oh, so that's the problem, is the [disfmarker] the signal to noise ratio. [speaker001:] or it [disfmarker] Yeah. It's not the fact that you have like [disfmarker] I mean, what he did is allow you to have, uh, words that were in another segment move over to the [disfmarker] at the edges of [disfmarker] of segmentations. [speaker005:] Mm hmm. Or even words inserted that weren't [disfmarker] weren't there. [speaker001:] Right, things [disfmarker] things near the boundaries where if you got your alignment wrong [disfmarker] [speaker005:] Mm hmm. [speaker001:] cuz what they had done there is align and then chop. [speaker005:] Mm hmm. [speaker001:] Um, and this problem is a little bit j more global. It's that there are problems even in inside the alignments, uh, because of the fact that there's enough acoustic signal there t for the recognizer to [disfmarker] to eat, [vocalsound] as part of a word. And it tends to do that. S [speaker004:] Yeah. [speaker001:] So, uh, but we probably will have to do something like that in addition. Anyway. So, yeah, bottom [disfmarker] bottom line is just I wanted to make sure I can be aware of whoever's working on these signal processing techniques for, uh, detecting energies, [speaker004:] Yeah. [speaker001:] because that [disfmarker] that'll really help us. 
[speaker002:] O K, uh tea has started out there I suggest we c run through our digits and, [speaker007:] OK. [speaker002:] Uh, So, OK, we're done. [speaker001:] OK [speaker004:] Uh. [vocalsound] Alright. So. I [vocalsound] uh [vocalsound] uh wanted to mention a couple little bits of news. Um Hynek is supposed to come for a couple days next week. Don't know which days yet. So hopefully he'll have some intensive time to [disfmarker] to work with people. Um. [vocalsound] The second thing is that um [vocalsound] we're starting to talk about maybe having Sunil come here uh for some chunk of the summer. So that may or may not happen but [vocalsound] if he does since actually Pratibha is going to be off at IBM [vocalsound] um [vocalsound] the [vocalsound] uh effort will pretty much shift here, cuz Sunil will be here and [vocalsound] other people are working on other things I think. So. [vocalsound] [vocalsound] So. Uh. [vocalsound] That's my news. [speaker003:] Alright. [speaker006:] Cool. I need a new office mate. [speaker004:] Oh, OK. [speaker006:] Yeah. That's cool. [speaker004:] There you go. [speaker006:] Right. [speaker003:] It will be some sometime in summer. July or August, or. [speaker004:] Yeah, I mean it may not happen. I mean he wants [disfmarker] uh I think he wants some experience someplace else and he's [disfmarker] he's uh [vocalsound] thinking of going someplace, [speaker003:] Mm hmm. [speaker004:] but it's [disfmarker] I mean Pratibha is getting her experience going to IBM and he may come here. So. But y I don't know maybe June. Maybe earlier. But uh. Anyway, Hynek will be here next week and maybe he'll know more about it. [speaker003:] Oh yeah. Well, the news [vocalsound] [vocalsound] more specifically t for Aurora um So I guess there was again a [vocalsound] conference call but uh they are not decide on [vocalsound] everything yet. 
For the VAD they [disfmarker] probably they will do something like [vocalsound] having some kind of idealized VAD that they could [vocalsound] run on the channel zero [vocalsound] of the SpeechDat Car and then take the same um end points and apply them on channel one [speaker004:] Mm hmm. [speaker003:] so it's [disfmarker] [speaker004:] Oh so take a real VAD but apply it to [disfmarker] to [disfmarker] to the clean [disfmarker] clean data. [speaker003:] Uh g Yeah, to the clean and take the end points for the noisy. Mmm. [speaker004:] OK. [speaker003:] But I am not sure this is decided yet. Um. [speaker004:] And [disfmarker] and i when they're talking about it, do you have the impression that they're talking about [vocalsound] a particular VAD which everybody would use? or [disfmarker] or just that everybody would use the same procedure? [speaker003:] Well, probably uh s the same VAD [vocalsound] for everybody. [speaker004:] Mm hmm. [speaker003:] Um. [speaker001:] So they would just provide that as part of the data. [speaker003:] Maybe. But this is still [disfmarker] [speaker004:] Yeah. Sounds like they're still arguing. Yeah. [speaker003:] Yeah, they're still [vocalsound] not decided. [speaker004:] Yeah. Yeah. [speaker003:] Um I don't know what [disfmarker] Yeah. Nothing much. Yeah. Probably the um [disfmarker] [vocalsound] these weight things, they will [vocalsound] apply the same kind of weighting scheme for [vocalsound] TI digits than for the SpeechDat Car. So it would be weighting of um [disfmarker] It would be an average of improvements rather than an average of [vocalsound] word error rates which would make [vocalsound] the more clean parts of TI digits more important than they are right now. [speaker004:] Mm hmm. OK. Uh but it sounds like none of this is really decided, that this is how things are leaning [speaker003:] This [speaker004:] and [disfmarker] [speaker003:] They [disfmarker] they t I think they will tend to go this way, but hmm. Uh. 
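[editor's note: the weighting change Stephane describes just above, averaging relative improvements per condition instead of raw word error rates, can be illustrated with a small Python sketch. All error-rate figures below are made up purely for illustration and are not Aurora results:]

```python
# Two ways of scoring a system across test conditions.  The numbers
# here are invented for illustration only.

def avg_wer(system):
    """Average the absolute word error rates across conditions."""
    return sum(system) / len(system)

def avg_improvement(baseline, system):
    """Average the relative improvement over the baseline per condition."""
    rels = [(b - s) / b for b, s in zip(baseline, system)]
    return sum(rels) / len(rels)

# Two conditions: a clean one (low WER) and a noisy one (high WER).
baseline = [2.0, 40.0]   # percent WER
system   = [1.0, 36.0]   # halves the clean error, trims the noisy one

# Averaging WERs, the clean condition barely moves the score; averaging
# improvements, halving the clean error dominates -- which is why this
# scheme makes the cleaner parts of TI digits count for more.
print(avg_wer(system))                    # 18.5
print(avg_improvement(baseline, system))  # 0.3
```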
Yeah, if we have the result for the tandem with um MSG also So [@ @] [vocalsound] it was t Yeah there is no surpri there is no surprise. Well, so there is no significant improvement. Um. [vocalsound] [vocalsound] [vocalsound] [vocalsound] Yeah, basically that's all. We've started to work on some kind of report for the work [vocalsound] so we've not much more results. [speaker004:] Mm hmm. Yeah you have a uh [disfmarker] you have a lunch talk sometime in [disfmarker] [speaker003:] Hmm. Mm hmm. It's next week, yeah. [speaker004:] When is it? Next week? Oh, OK. [speaker001:] How about the um [disfmarker] the thing that you guys were working on before the uh Uh [vocalsound] voiced unvoiced uh [disfmarker] [speaker005:] Oh yes. But, [vocalsound] [vocalsound] no. This week I'm [disfmarker] [vocalsound] I am begin to [disfmarker] to write the report on [disfmarker] I stopped it. [speaker001:] Ah. You stopped working on it [speaker005:] Interfered. [speaker001:] I see [speaker005:] Yeah. [speaker004:] OK. Uh. So it sounds like basically things are slowing down a bit? And [disfmarker] and [disfmarker] and trying to collate results? and [vocalsound] get something from it? [speaker003:] Mm hmm. Yeah. [speaker004:] Yeah. [speaker003:] Um. Yeah. Right now, Sunil seems i is in India actually. [speaker004:] Right. [speaker003:] Uh so [vocalsound] there is no progress apparently from their side neither. [speaker004:] Right. [speaker003:] Um. And [disfmarker] [speaker004:] But that sort of puts it on us to do it. But. [vocalsound] Kind of puts it on us to put more in, then, really. [speaker003:] I'm sorry? Yeah. [speaker004:] I mean. Uh. Who's [disfmarker] who's [disfmarker] was uh looking at the combination of uh spectral s subtraction approaches with what we had for instance. [speaker003:] Sunil was [vocalsound] working on this. [speaker004:] OK. [speaker003:] Um. [vocalsound] He sent a bunch of [disfmarker] a bunch of results uh but I don't know what's the status of this.
I guess it was just the result that he sent, so. [vocalsound] Um. [speaker004:] What was that again? [speaker003:] He just used the spectral subtraction and the LDA filters I guess [vocalsound] [vocalsound] Mmm. [speaker004:] Mm hmm. [speaker003:] And it was not better than what we had before. [speaker004:] And [disfmarker] [speaker003:] Then he tried to put the on line normalization and [vocalsound] it did not improve further, putting the on line normalization, so. Um. Yeah. [speaker004:] Hmm. Huh. OK. Well [disfmarker] [speaker003:] Hmm. [speaker004:] Yeah. I [disfmarker] uh, I g so I guess nothing has ever happened about uh making a standardized uh place where the software sits so that people can work with it, and [vocalsound] upgrade it? [speaker003:] No. Well, I [disfmarker] I sent mails to start the [disfmarker] the process and I guess we should have to [disfmarker] Well, we know most what we have to [disfmarker] to put in [disfmarker] in this. And. Yeah. So, I guess. Yeah, we m [speaker004:] OK. Well, uh. Let's [disfmarker] let's come back to this uh later. Um. [vocalsound] OK. Um. [vocalsound] Chuck, did you get a chance to do anymore stuff with the uh HTK, [speaker001:] No. I didn't. Um, are you talking about [disfmarker] [speaker004:] or [disfmarker]? [speaker001:] Uh, no. This week I haven't. I've been [disfmarker] my whole time's been taken up with uh [vocalsound] Meeting Recorder stuff. [speaker004:] OK. [speaker001:] Uh, disk crash and uh covering things. [speaker004:] u OK. This may be a very short meeting. Uh. [vocalsound] [vocalsound] Um. Anything from [vocalsound] your side? [speaker006:] Uh, continuing readings we come [vocalsound] Uh. We're doing some background reading on phonetics. [speaker004:] Uh huh. OK. [speaker006:] Yeah. [speaker004:] Dave? [speaker002:] Well um. 
[vocalsound] One thing in [disfmarker] in the paper Avendano and his collaborators wrote [vocalsound] is that [disfmarker] is that um [vocalsound] they tried doing channel normalization on the reverberant in speech by um [vocalsound] subtracting the mean of the log spectral magnitude [vocalsound] over a two second window. But they didn't do anything with the phase, and they said perhaps that limited their approach that it did not try to normalize the phase in any way. So I'm [disfmarker] I'm going to start thinking about [vocalsound] um ways that the ph that the phase could be used. Um. Si if [disfmarker] if the um channel [vocalsound] and the reverberation is multiplicative in the frequency domain with the speech spectrum then the [vocalsound] phases should be additive. So um perhaps just subtracting the mean of the phase would [disfmarker] would help. Um but I don't know I th I get the impression that they tried to do some things with the phase and they weren't successful. [speaker004:] OK. [speaker001:] Oh. You know, there is one other thing. Um Stephane showed me a paper yesterday [vocalsound] um where th uh for the Italian system they had done a series of experiments playing around with the um [disfmarker] [vocalsound] the number of iterations, that I talked about last week? And, um, so, there was a whole series of those that they had done. [speaker004:] Mm hmm. [speaker001:] They didn't, I don't think, reported timing [vocalsound] for those, but um [vocalsound] it turns out that the one that they're actually using was not the best, uh in [disfmarker] in their series of experiments, which I uh [disfmarker] Maybe it's because they [vocalsound] uh wanted to use the same number of iterations across all different languages and this was only done on Italian or something. [speaker004:] Sure. Wh who's they [disfmarker] they? [speaker001:] Uh, I don't know who did it do you know, Stephane? Who did that? [speaker003:] Um. 
I don't know but it's the company who prepared the Italian database, so it was [disfmarker] Is it Alcatel, or [disfmarker]? [speaker005:] I think so. [speaker003:] I [disfmarker] [speaker004:] Mmm. [speaker003:] I don't know. [speaker001:] So um [vocalsound] you know there wasn't a huge difference in terms of the performance across all the different experiments that they did. Um. But it varied a little bit. And uh [disfmarker] So I guess the main thing that I take out of that is that I think that for our purposes here we could definitely [vocalsound] you know decrease the number of iterations that we do and um, at least while we're, you know, working on uh [vocalsound] all different of kinds of variations, and that would let us, you know, pump through a lot more experiments. So um I still have to [disfmarker] to send everybody a pointer about uh [vocalsound] how to run the um [vocalsound] HTK system [vocalsound] on the Linux boxes and about changing these number of iterations so that [disfmarker] so that you can do that. Um. [vocalsound] So the [disfmarker] the other thing that that I was thinking about maybe trying out next is um [disfmarker] Oh! The other th the other experiment that they had in that paper was uh uh playing with the number of Gaussians per state. And so. [speaker003:] Uh. [speaker004:] Mm hmm. [speaker003:] I think it's [disfmarker] was playing with the number of states per word. They had sixteen, seventeen, eighteen. Yeah, because it was around [disfmarker] between sixteen and twenty or [disfmarker] [speaker001:] OK. I should look at that more carefully cuz I wonder what they did about [disfmarker] [speaker003:] Yeah. [speaker001:] Oh, OK. I was thinking it was Gaussians but it's states. [speaker003:] So, and [disfmarker] [speaker001:] OK. [speaker003:] Yeah, apparently, um [vocalsound] they had better results when they increased the number of states. From [disfmarker] from eighteen to twenty, or [disfmarker] [speaker001:] Mm hmm. Yeah. So. 
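[editor's note: a few turns back, Dave suggests that if the channel multiplies the speech spectrum in the frequency domain, its log magnitude and phase add, so subtracting per-bin means over an analysis window (he cites a two-second window) should cancel it. The toy sketch below illustrates that idea only; it assumes small, unwrapped phases and ignores the practical phase-unwrapping problem that likely limited the approach he read about:]

```python
import math
import cmath

def mean_normalize(frames):
    """Per-bin mean log-magnitude and mean-phase subtraction.

    `frames` is a list of STFT frames, each a list of complex bins,
    covering the analysis window.  A fixed convolutional channel
    multiplies every frame by the same complex gain per bin, so its log
    magnitude and phase are additive constants and the per-bin means
    remove them.  Toy sketch: real phase averaging needs unwrapping,
    which is ignored here.
    """
    n = len(frames)
    n_bins = len(frames[0])
    mean_lm = [sum(math.log(abs(f[k])) for f in frames) / n
               for k in range(n_bins)]
    mean_ph = [sum(cmath.phase(f[k]) for f in frames) / n
               for k in range(n_bins)]
    return [[cmath.exp(complex(math.log(abs(f[k])) - mean_lm[k],
                               cmath.phase(f[k]) - mean_ph[k]))
             for k in range(n_bins)]
            for f in frames]
```

Under those assumptions, normalizing the channel-corrupted frames gives the same result as normalizing the clean frames, which is the point of the technique.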
[vocalsound] The thing I was thinking about was uh, you know, the number of insertions really goes up when you [vocalsound] start adding all the noise in, and maybe the thing to do next would be to try to um [vocalsound] uh make the silence model more powerful, you know, increasing the number of Gaussians to the number of states, different [disfmarker] different things like that, so. [vocalsound] That's probably the next little thing I'll play with when I get a chance [speaker004:] Um. Let's see. Um. Why don't [disfmarker] why don't we uh [disfmarker] if there aren't any other major things, why don't we do the digits, and then [disfmarker] then uh turn the mikes off. So I'll start. Uh. [speaker003:] D'ou put two? [speaker005:] My name is Maria. [speaker003:] Oh, yeah. [speaker005:] This. [speaker003:] Maria Carmen? [speaker005:] Yeah. [speaker004:] OK. [speaker003:] Uh, is it the twenty fourth? [speaker006:] now we're on. [speaker003:] Yeah. [speaker001:] Uh Chuck, is the mike type wireless [disfmarker] [speaker006:] Yes. [speaker001:] wireless headset? [speaker006:] Yes. [speaker001:] OK. [speaker003:] Yeah. [speaker006:] For you it is. [speaker003:] Yeah. We uh [disfmarker] we abandoned the lapel because they sort of were not too [disfmarker] not too hot, not too cold, they were [disfmarker] you know, they were [vocalsound] uh, far enough away that you got more background noise, uh, and uh [disfmarker] and so forth [speaker001:] Uh huh. [speaker003:] but they weren't so close that they got quite the [disfmarker] you know, the really good [disfmarker] No, th [speaker001:] OK. [speaker003:] they [disfmarker] I mean they didn't [disfmarker] Wait a minute. I'm saying that wrong. They were not so far away that they were really good representative distant mikes, but on the other hand they were not so close that they got rid of all the interference. [speaker001:] Uh huh. [speaker003:] So it was no [disfmarker] didn't seem to be a good point to them. 
On the other hand if you only had to have one mike in some ways you could argue the lapel was a good choice, precisely because it's in the middle. [speaker001:] Yeah, yeah. [speaker003:] There's uh, some kinds of junk that you get with these things that you don't get with the lapel uh, little mouth clicks and breaths and so forth are worse with these than with the lapel, but given the choice we [disfmarker] there seemed to be very strong opinions for uh, getting rid of lapels. So, [speaker001:] The mike number is [disfmarker] [speaker006:] Uh, your mike number's written on the back of that unit there. [speaker001:] Oh yeah. One. [speaker006:] And then the channel number's usually one less than that. [speaker001:] Oh, OK. [speaker006:] It it's one less than what's written on the back of your [disfmarker] [speaker001:] OK. OK. OK. [speaker006:] yeah. So you should be zero, actually. [speaker001:] Hello? Yeah. [speaker006:] For your uh, channel number. [speaker001:] Yep, yep. [speaker003:] And you should do a lot of talking so we get a lot more of your pronunciations. no, they don't [disfmarker] don't have a [disfmarker] have any Indian pronunciations. [speaker006:] So what we usually do is um, we typically will have our meetings [speaker003:] Yeah. [speaker006:] and then at the end of the meetings we'll read the digits. Everybody goes around and reads the digits on the [disfmarker] the bottom of their forms. [speaker003:] Session R [speaker004:] R nineteen? [speaker001:] OK. [speaker003:] R nineteen. [speaker006:] Yeah. We're [disfmarker] This is session R nineteen. [speaker003:] If you say so. O K. Do we have anything like an agenda? What's going on? Um. I guess um. So. [speaker006:] Sunil's here for the summer? [speaker003:] One thing [disfmarker] Sunil's here for the summer, right. 
Um, so, one thing is to talk about a kick off meeting maybe uh, and then just uh, I guess uh, progress reports individually, and then uh, plans for where we go between now and then, pretty much. Um. [speaker006:] I could say a few words about um, some of the uh, compute stuff that's happening around here, so that people in the group know. [speaker003:] Mm hmm. OK. Why don't you start with that? That's sort of [disfmarker] [speaker006:] OK. We um [disfmarker] [speaker003:] Yeah? [speaker006:] So we just put in an order for about twelve new machines, uh, to use as sort of a compute farm. And um, uh, we ordered uh, SUN Blade one hundreds, and um, I'm not sure exactly how long it'll take for those to come in, but, uh, in addition, we're running [disfmarker] So the plan for using these is, uh, we're running P make and Customs here and Andreas has sort of gotten that all uh, fixed up and up to speed. And he's got a number of little utilities that make it very easy to um, [vocalsound] run things using P make and Customs. You don't actually have to write P make scripts and things like that. The simplest thing [disfmarker] And I can send an email around or, maybe I should do an FAQ on the web site about it or something. Um, there's a c [speaker003:] How about an email that points to the FAQ, [speaker006:] Yeah, yeah. [speaker003:] you know what I'm saying? so that you can [disfmarker] Yeah. [speaker006:] Uh, there's a command, uh, that you can use called "run command". "Run dash command", "run hyphen command". And, if you say that and then some job that you want to execute, uh, it will find the fastest currently available machine, and export your job to that machine, and uh [disfmarker] and run it there and it'll duplicate your environment. So you can try this as a simple test with uh, the L S command. 
So you can say "run dash command L S", and, um, it'll actually export that [vocalsound] LS command to some machine in the institute, and um, do an LS on your current directory. So, substitute LS for whatever command you want to run, and um [disfmarker] And that's a simple way to get started using [disfmarker] using this. And, so, soon, when we get all the new machines up, [vocalsound] um, e then we'll have lots more compute to use. Now th one of the nice things is that uh, each machine that's part of the P make and Customs network has attributes associated with it. Uh, attributes like how much memory the machine has, what its speed is, what its operating system, and when you use something like "run command", you can specify those attributes for your program. For example if you only want your thing to run under Linux, you can give it the Linux attribute, and then it will find the fastest available Linux machine and run it on that. So. You can control where your jobs go, to a certain extent, all the way down to an individual machine. Each machine has an attribute which is the name of itself. So you can give that as an attribute and it'll only run on that. If there's already a job running, on some machine that you're trying to select, your job will get queued up, and then when that resource, that machine becomes available, your job will get exported there. So, there's a lot of nice features to it and it kinda helps to balance the load of the machines and uh, right now Andreas and I have been the main ones using it and we're [disfmarker] Uh. The SRI recognizer has all this P make customs stuff built into it. So. [speaker003:] So as I understand, you know, he's using all the machines and you're using all the machines, [speaker006:] Yeah. [speaker003:] is the rough division of [disfmarker] [speaker006:] Exactly. 
Yeah, you know, I [disfmarker] I sort of got started [comment] using the recognizer just recently and uh, uh I fired off a training job, and then I fired off a recognition job and I get this email about midnight from Andreas saying, "uh, are you running two [vocalsound] trainings simultaneously s my m my jobs are not getting run." So I had to back off a little bit. But, soon as we get some more machines then uh [disfmarker] then we'll have more compute available. So, um, that's just a quick update about what we've got. [speaker007:] Um, I have [disfmarker] I have a question about the uh, parallelization? [speaker006:] So. Mm hmm. [speaker007:] So, um, let's say I have like, a thousand little [disfmarker] little jobs to do? [speaker006:] Mm hmm. [speaker007:] Um, how do I do it with "run command"? I mean do [disfmarker] [speaker006:] You could write a script uh, which called run command on each sub job [speaker007:] Uh huh. A thousand times? [speaker006:] right? [speaker007:] OK. [speaker006:] But you probably wanna be careful with that because um, you don't wanna saturate the network. Uh, so, um, you know, you should [disfmarker] you should probably not run more than, say ten jobs yourself at any one time, uh, just because then it would keep other people [disfmarker] [speaker007:] Oh, too much file transfer and stuff. [speaker006:] Well it's not that so much as that, you know, e with [disfmarker] if everybody ran fifty jobs at once then it would just bring everything to a halt and, you know, people's jobs would get delayed, so it's sort of a sharing thing. [speaker007:] OK. [speaker006:] Um, so you should try to limit it to somet sometim some number around ten jobs at a time. Um. So if you had a script for example that had a thousand things it needed to run, um, you'd somehow need to put some logic in there if you were gonna use "run command", uh, to only have ten of those going at a time. And uh, then, when one of those finished you'd fire off another one. 
Um, [speaker003:] I remember I [disfmarker] I forget whether it was when the Rutgers or [disfmarker] or Hopkins workshop, I remember one of the workshops I was at there were [disfmarker] everybody was real excited cuz they got twenty five machines and there was some kind of P make like thing that sit sent things out. [speaker006:] Mm hmm. Mm hmm. Mm hmm. [speaker003:] So all twenty five people were sending things to all twenty five machines [speaker006:] Yeah. Yeah. [speaker003:] and [vocalsound] and things were a lot less efficient than if you'd just use your own machine. [speaker006:] Yep. Yeah, exactly. Yeah, you have to be a little bit careful. [speaker003:] as I recall, but. [speaker004:] Hmm. [speaker003:] Yeah. [speaker006:] Um, but uh, you can also [disfmarker] If you have that level of parallelization um, and you don't wanna have to worry about writing the logic in [disfmarker] in a Perl script to take care of that, you can use um, P make [speaker007:] Just do P make. [speaker006:] and [disfmarker] and you basically write a Make file that uh, you know your final job depends on these one thousand things, [speaker007:] s Mm hmm. [speaker006:] and when you run P make, uh, on your Make file, you can give it the dash capital J and [disfmarker] and then a number, [speaker007:] Mm hmm. [speaker006:] and that number represents how many uh, machines to use at once. [speaker007:] Right. [speaker006:] And then it'll make sure that it never goes above that. [speaker007:] Right. OK. [speaker006:] So, I can get some documentation. [speaker004:] So it [disfmarker] it's [disfmarker] it's not systematically queued. I mean all the jobs are running. If you launch twenty jobs, they are all running. [speaker006:] It depends. [speaker004:] Alright. [speaker006:] If you [disfmarker] "Run command", that I mentioned before, is [disfmarker] doesn't know about other things that you might be running. [speaker004:] Uh huh. 
[speaker006:] So, it would be possible to run a hundred run jobs at once, [speaker004:] Right. [speaker006:] and they wouldn't know about each other. But if you use P make, then, it knows about all the jobs that it has to run [speaker004:] Mm hmm. [speaker006:] and it can control, uh, how many it runs simultaneously. [speaker003:] So "run command" doesn't use P make, or [disfmarker]? [speaker006:] It uses "export" underlyingly. But, if you [disfmarker] i It's meant to be run one job at a time? So you could fire off a thousand of those, and it doesn't know [disfmarker] any one of those doesn't know about the other ones that are running. [speaker003:] So why would one use that rather than P make? [speaker006:] Well, if you have, um [disfmarker] Like, for example, uh if you didn't wanna write a P make script and you just had a, uh [disfmarker] an HTK training job that you know is gonna take uh, six hours to run, and somebody's using, uh, the machine you typically use, you can say "run command" and your HTK thing and it'll find another machine, the fastest currently available machine and [disfmarker] and run your job there. [speaker003:] Now, does it have the same sort of behavior as P make, which is that, you know, if you run something on somebody's machine and they come in and hit a key then it [disfmarker] [speaker006:] Yes. Yeah, there are um [disfmarker] Right. So some of the machines at the institute, um, have this attribute called "no evict". And if you specify that, in [disfmarker] in one of your attribute lines, then it'll go to a machine which your job won't be evicted from. [speaker003:] Mm hmm. [speaker006:] But, the machines that don't have that attribute, if a job gets fired up on that, which could be somebody's desktop machine, and [disfmarker] and they were at lunch, [speaker003:] Mm hmm. 
[speaker006:] they come back from lunch and they start typing on the console, then your machine will get evicted [disfmarker] your job [comment] will get evicted from their machine and be restarted on another machine. Automatically. So [disfmarker] which can cause you to lose time, right? If you had a two hour job, and it got halfway through and then somebody came back to their machine and it got evicted. So. If you don't want your job to run on a machine where it could be evicted, then you give it the minus [disfmarker] the attribute, you know, "no evict", and it'll pick a machine that it can't be evicted from. So. [speaker003:] Um, what [disfmarker] what about [disfmarker] I remember always used to be an issue, maybe it's not anymore, that if you [disfmarker] if something required [disfmarker] if your machine required somebody hitting a key in order to evict things that are on it so you could work, [speaker006:] Mm hmm. [speaker003:] but if you were logged into it from home? and you weren't hitting any keys? cuz you were, home? [speaker006:] Yeah, I [disfmarker] I'm not sure how that works. Uh, it seems like Andreas did something for that. [speaker003:] Yeah. Hmm. [speaker006:] Um. [speaker003:] OK. We can ask him sometime. [speaker006:] But [disfmarker] Yeah. I don't know whether it monitors the keyboard or actually looks at the console TTY, so maybe if you echoed something to the you know, dev [disfmarker] dev console or something. [speaker003:] You probably wouldn't ordinarily, though. Yeah. Right? [speaker006:] Hmm? [speaker003:] You probably wouldn't ordinarily. I mean you sort of [disfmarker] you're at home and you're trying to log in, and it takes forever to even log you in, [speaker006:] Yeah, yeah. [speaker003:] and you probably go, "screw this", and [disfmarker] [vocalsound] You know. [speaker006:] Yeah. Yeah, so, um, [speaker003:] Yeah. [speaker006:] yeah. I [disfmarker] I can [disfmarker] I'm not sure about that one. [speaker003:] yeah. 
[speaker006:] But uh. [speaker003:] OK. [speaker001:] Uh, I need a little orientation about this environment and uh scr s how to run some jobs here because I never d did anything so far with this X emissions [speaker006:] OK. [speaker001:] So, I think maybe I'll ask you after the meeting. [speaker006:] Um. Yeah. Yeah, and [disfmarker] and also uh, Stephane's a [disfmarker] a really good resource for that if you can't find me. [speaker001:] Yeah, yeah, yeah. Yep. OK, sure [speaker004:] Mmm. [speaker006:] Especially with regard to the Aurora stuff. [speaker001:] OK. [speaker006:] He [disfmarker] he knows that stuff better than I do. [speaker003:] OK. Well, why don't we uh, uh, Sunil since you're [vocalsound] haven't [disfmarker] haven't been at one of these yet, why don't yo you tell us what's [disfmarker] what's up with you? Wh what you've been up to, hopefully. [speaker001:] Um. Yeah. So, uh, shall I start from [disfmarker] Well I don't know how may I [disfmarker] how [disfmarker] OK. Uh, I think I'll start from the post uh Aurora submission maybe. [speaker003:] Yeah. [speaker001:] Uh, yeah, after the submission the [disfmarker] what I've been working on mainly was to take [disfmarker] take other s submissions and then over their system, what they submitted, because we didn't have any speech enhancement system in [disfmarker] in ours. So [disfmarker] So I tried uh, And u First I tried just LDA. And then I found that uh, I mean, if [disfmarker] if I combine it with LDA, it gives [@ @] improvement over theirs. Uh [disfmarker] [speaker006:] Are y are you saying LDA? [speaker001:] Yeah. [speaker006:] LDA. OK. [speaker001:] Yeah. So, just [disfmarker] just the LDA filters. I just plug in [disfmarker] I just take the cepstral coefficients coming from their system and then plug in LDA on top of that. [speaker006:] Mm hmm. [speaker001:] But the LDA filter that I used was different from what we submitted in the proposal. 
What I did was [vocalsound] I took the LDA filter's design using clean speech, uh, mainly because the speech is already cleaned up after the enhancement so, instead of using this, uh, narrow [disfmarker] narrow band LDA filter that we submitted uh, I got new filters. So that seems to be giving [disfmarker] uh, improving over their uh, system. Slightly. But, not very significantly. And uh, that was uh, showing any improvement over [disfmarker] final [disfmarker] by plugging in an LDA. And uh, so then after [disfmarker] after that I [disfmarker] I added uh, on line normalization also on top of that. And that [disfmarker] there [disfmarker] there also I n I found that I have to make some changes to their time constant that I used because th it has a [disfmarker] a mean and variance update time constant and [disfmarker] which is not suitable for the enhanced speech, and whatever we try it on with proposal one. But um, I didn't [disfmarker] I didn't play with that time constant a lot, I just t g I just found that I have to reduce the value [disfmarker] I mean, I have to increase the time constant, or reduce the value of the update value. That's all I found So I have to. Uh, Yeah. And uh, uh, the other [disfmarker] other thing what I tried was, I just um, uh, took the baseline and then ran it with the endpoint inf uh th information, just the Aurora baseline, to see that how much the baseline itself improves by just supplying the information of the [disfmarker] I mean the w speech and nonspeech. And uh, I found that the baseline itself improves by twenty two percent by just giving the wuh. [speaker003:] Uh, can you back up a second, I [disfmarker] I [disfmarker] I missed something, uh, I guess my mind wandered. Ad ad When you added the on line normalization and so forth, uh, uh things got better again? [speaker001:] Yeah. [speaker003:] or is it? [speaker001:] No. No. [speaker003:] Did it not? 
[speaker001:] No, things didn't get better with the same time constant that we used. [speaker003:] No, no. With a different time constant. [speaker001:] With the different time constant I found that [disfmarker] I mean, I didn't get an improvement over not using on line normalization, [speaker003:] Oh. No you didn't, [speaker001:] because I [disfmarker] I found that I would have change the value of the update factor. [speaker003:] OK. [speaker001:] But I didn't play it with play [disfmarker] play quite a bit to make it better than. [speaker003:] Yeah. OK. [speaker001:] So, it's still not [disfmarker] I mean, the on line normalization didn't give me any improvement. [speaker003:] OK. [speaker001:] And uh, [speaker003:] OK. [speaker001:] so, oh yeah So I just stopped there with the uh, speech enhancement. The [disfmarker] the other thing what I tried was the [disfmarker] adding the uh, endpoint information to the baseline and that itself gives like twenty two percent because the [disfmarker] the second [disfmarker] the new phase is going to be with the endpointed speech. And just to get a feel of how much the baseline itself is going to change by adding this endpoint information, I just, uh, use [disfmarker] [speaker003:] Hmm. [speaker006:] So people won't even have to worry about, uh, doing speech nonspeech then. [speaker001:] Yeah that's, that's what the feeling is like. [speaker006:] Mmm. [speaker001:] They're going to give the endpoint information. [speaker006:] I see. [speaker003:] G I guess the issue is that people do that anyway, everybody does that, and they wanted to see, given that you're doing that, what [disfmarker] what are the best features that you should use. [speaker001:] Yeah. [speaker006:] Yeah, I see. [speaker001:] So, [speaker003:] I mean clearly they're interact. So I don't know that I entirely agree with it. [speaker006:] Yeah. 
[speaker003:] But [disfmarker] but it might be uh [disfmarker] In some ways it might be better t to [disfmarker] rather than giving the endpoints, to have a standard that everybody uses and then interacts with. [speaker006:] Mm hmm. [speaker003:] But, you know. It's [disfmarker] it's still someth reasonable. [speaker006:] So, are people supposed to assume that there is uh [disfmarker] Are [disfmarker] are people not supposed to use any speech outside of those endpoints? [speaker001:] Uh [disfmarker] [speaker006:] Or can you then use speech outside of it for estimating background noise and things? [speaker001:] No. No. That i I [disfmarker] Yeah. Yeah, yeah, exactly. I guess that is [disfmarker] that is where the consensus is. Like y you will [disfmarker] you will [disfmarker] You'll be given the information about the beginning and the end of speech but the whole speech is available to you. [speaker006:] OK. [speaker001:] So. [speaker003:] So it should make the spectral subtraction style things work even better, because you don't have the mistakes in it. [speaker001:] Yeah. [speaker003:] Yeah? OK. [speaker001:] Yeah. So [disfmarker] So that [disfmarker] that [disfmarker] The baseline itself [disfmarker] I mean, it improves by twenty two percent. I found that in s one of the SpeechDat Car cases, that like, the Spanish one improves by just fifty percent by just putting the endpoint. [speaker006:] Wow. [speaker001:] w I mean you don't need any further speech enhancement with fifty. So, uh, [speaker006:] So the baseline itself improves by fifty percent. [speaker001:] Yeah, by fifty percent. [speaker003:] Yeah. So it's g it's gonna be harder to [vocalsound] beat that actually. [speaker006:] Wow. Yeah. [speaker001:] Yeah, [speaker003:] But [disfmarker] but [disfmarker] [speaker001:] so [disfmarker] so that is when uh, the [disfmarker] the qualification criteria was reduced from fifty percent to something like twenty five percent for well matched. 
And I think they have [disfmarker] they have actually changed their qualification c criteria now. And uh, Yeah, I guess after that, I just went home f I just had a vacation fo for four weeks. [speaker003:] OK. [speaker001:] Uh. [speaker003:] No, that's [disfmarker] that's [disfmarker] that's a good [disfmarker] good update. [speaker001:] Ye Yeah, and I [disfmarker] I came back and I started working on uh, some other speech enhancement algorithm. I mean, so [disfmarker] I [disfmarker] from the submission what I found that people have tried spectral subtraction and Wiener filtering. These are the main uh, approaches where people have tried, [speaker003:] Yeah. [speaker001:] so just to [disfmarker] just to fill the space with some f few more speech enhancement algorithms to see whether it improves a lot, I [disfmarker] I've been working on this uh, signal subspace approach for speech enhancement where you take the noisy signal and then decomposing the signal s and the noise subspace and then try to estimate the clean speech from the signal plus noise subspace. [speaker003:] Mm hmm. [speaker001:] And [disfmarker] So, I've been actually running some s So far I've been trying it only on Matlab. I have to [disfmarker] to [disfmarker] to test whether it works first or not [speaker003:] Yeah. [speaker001:] and then I'll p port it to C and I'll update it with the repository once I find it it giving any some positive result. So, yeah. [speaker003:] S So you s you So you said one thing I want to jump on for a second. So [disfmarker] so now you're [disfmarker] you're getting tuned into the repository thing that he has here [speaker001:] Yeah. [speaker003:] and [disfmarker] so we we'll have a [vocalsound] single place where the stuff is. [speaker001:] Yep. Yeah. [speaker003:] Cool. Um, so maybe uh, just briefly, you could remind us about the related experiments. Cuz you did some stuff that you talked about last week, I guess? [speaker004:] Mm hmm. 
[speaker003:] Um, where you were also combining something [disfmarker] both of you I guess were both combining something from the uh, French Telecom system with [vocalsound] the u uh [disfmarker] [speaker004:] Right. [speaker003:] I [disfmarker] I don't know whether it was system one or system two, or [disfmarker]? [speaker004:] Mm hmm. It was system one. So [speaker003:] OK. [speaker004:] we [disfmarker] The main thing that we did is just to take the spectral subtraction from the France Telecom, which provide us some speech samples that are uh, with noise removed. [speaker003:] So I let me [disfmarker] let me just stop you there. So then, one distinction is that uh, you were taking the actual France Telecom features and then applying something to [disfmarker] [speaker001:] Uh, no there is a slight different. Uh I mean, which are extracted at the handset [speaker003:] Yeah. [speaker001:] because they had another back end blind equalization [disfmarker] [speaker003:] Yeah. But that's what I mean. [speaker001:] Yeah. Yeah. [speaker003:] But u u Sorry, yeah, I'm not being [disfmarker] I'm not being clear. [speaker001:] Yeah. Yeah. [speaker003:] What I meant was you had something like cepstra or something, right? [speaker001:] Yeah, yeah, yeah, yeah. [speaker003:] And so one difference is that, I guess you were taking spectra. [speaker001:] The speech. [speaker002:] Yeah. [speaker004:] Yeah. But I guess it's the s exactly the same thing because on the heads uh, handset they just applied this Wiener filter and then compute cepstral features, right? [speaker001:] Yeah, [speaker004:] or [disfmarker]? [speaker001:] the cepstral f The difference is like [disfmarker] There may be a slight difference in the way [disfmarker] because they use exactly the baseline system for converting the cepstrum once you have the speech. [speaker004:] Right. [speaker001:] I mean, if we are using our own code for th I mean that [disfmarker] that could be the only difference. 
I mean, there is no other difference. [speaker004:] Mm hmm. [speaker001:] Yeah. [speaker003:] But you got some sort of different result. So I'm trying to understand it. But uh, [speaker004:] Yeah, well I think we should uh, have a table with all the result [speaker003:] I th [speaker004:] because I don't know I uh, I don't exactly know what are your results? But, [speaker001:] OK. OK. [speaker004:] Mmm. Yeah, but so we did this, and another difference I guess is that we just applied uh, proposal one system after this without [disfmarker] well, with our modification to reduce the delay of the [disfmarker] the LDA filters, and [speaker001:] Uh huh. [speaker004:] Well there are slight modifications, [speaker002:] And the filter [disfmarker] [speaker004:] but it was the full proposal one. In your case, if you tried just putting LDA, then maybe on line normalization [disfmarker]? [speaker001:] Only LDA. Yeah. Af I [disfmarker] after that I added on line normalization, yeah. [speaker004:] Mm hmm. So we just tried directly to [disfmarker] to just, keep the system as it was and, um, when we plug the spectral subtraction it improves uh, signif significantly. Um, but, what seems clear also is that we have to retune the time constants of the on line normalization. [speaker001:] Yeah, yeah. Yeah. [speaker004:] Because if we keep the value that was submitted uh, it doesn't help at all. You can remove on line normalization, or put it, it doesn't change anything. Uh, uh, as long as you have the spectral subtraction. But, you can still find some kind of optimum somewhere, and we don't know where exactly but, [speaker001:] Yeah. [speaker004:] uh. [speaker001:] Yeah, I assume. [speaker003:] So it sounds like you should look at some tables of results or something [speaker004:] Right. [speaker003:] and see where i where the [disfmarker] [vocalsound] where they were different and what we can learn from it. [speaker004:] Yeah. [speaker001:] Yeah. [speaker004:] Mm hmm. Mm hmm. 
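The on-line normalization whose time constant both speakers say needs retuning is, at its core, a recursive per-feature mean and variance estimate. A generic sketch follows; the update factor `alpha` (a smaller value means a longer time constant) and the initial estimates are illustrative, not the actual values from the Aurora proposal:

```python
import math

def online_normalize(frames, alpha=0.01, m0=0.0, v0=1.0, eps=1e-6):
    """On-line mean/variance normalization of one feature stream.

    alpha is the update factor discussed above: it controls how fast
    the running mean m and variance v adapt to the incoming frames.
    m0 and v0 are the initial estimates; eps guards the square root.
    """
    m, v = m0, v0
    out = []
    for x in frames:
        m = (1.0 - alpha) * m + alpha * x              # recursive mean
        v = (1.0 - alpha) * v + alpha * (x - m) ** 2   # recursive variance
        out.append((x - m) / math.sqrt(v + eps))
    return out
```

With a larger `alpha` the statistics track the enhanced speech quickly; the discussion suggests that after spectral subtraction a slower update (smaller `alpha`, longer time constant) is needed.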
[speaker001:] without any change. [speaker004:] Yeah. [speaker002:] But it's [disfmarker] [speaker001:] OK. [speaker004:] Well, [speaker002:] It's the new. [speaker004:] with [disfmarker] with [disfmarker] with changes, [speaker001:] with [speaker002:] The new. [speaker004:] because we change it the system to have [disfmarker] [speaker001:] Oh yeah, I mean the [disfmarker] the new LDA filters. [speaker002:] The new. [speaker001:] I mean [disfmarker] [speaker004:] Yeah. [speaker001:] OK. [speaker004:] LDA filters. There are other things that we finally were shown to improve also like, the sixty four hertz cut off. [speaker002:] Mm hmm. [speaker001:] Mm hmm. [speaker004:] w Uh, it doesn't seem to hurt on TI digits, finally. [speaker001:] OK. [speaker004:] Maybe because of other changes. [speaker001:] OK. [speaker004:] Um, well there are some [vocalsound] minor changes, yeah. [speaker001:] Mm hmm. [speaker004:] And, right now if we look at the results, it's, um, always better than [disfmarker] it seems always better than France Telecom for mismatch and high mismatch. And it's still slightly worse for well matched. Um, [speaker002:] But [speaker004:] but this is not significant. But, the problem is that it's not significant, but if you put this in the, mmm, uh, spreadsheet, it's still worse. Even with very minor [disfmarker] uh, even if it's only slightly worse for well matched. [speaker003:] Mm hmm. [speaker004:] And significantly better for HM. Uh, but, well. I don't think it's importa important because when they will change their metric, uh, uh, mainly because of uh, when you p you plug the um, frame dropping in the baseline system, it will improve a lot HM, and MM, [speaker001:] Yeah. [speaker004:] so, um, I guess what will happen [disfmarker] I don't know what will happen. But, the different contribution, I think, for the different test set will be more even. 
[speaker001:] Because the [disfmarker] your improvement on HM and MM will also go down significantly in the spreadsheet so. But the [pause] the well matched may still [disfmarker] [speaker004:] Mm hmm. [speaker001:] I mean the well matched may be the one which is least affected by adding the endpoint information. [speaker003:] Right. [speaker001:] Yeah. So the [disfmarker] the MM [disfmarker] [speaker004:] Mm hmm. [speaker001:] MM and HM are going to be v hugely affected by it. [speaker004:] Yeah, so um, [speaker001:] Yeah. [speaker004:] yeah. [speaker001:] Yeah. But they d the [disfmarker] everything I mean is like, but there that's how they reduce [disfmarker] why they reduce the qualification to twenty five percent or some [disfmarker] something on. [speaker004:] Mm hmm. [speaker003:] But are they changing the weighting? [speaker001:] Uh, no, I guess they are going ahead with the same weighting. [speaker004:] Yeah. [speaker001:] Yeah. So there's nothing on [disfmarker] [speaker003:] I don't understand that. I guess I [disfmarker] I haven't been part of the discussion, [speaker001:] Yeah. [speaker003:] so, um, it seems to me that the well matched condition is gonna be unusual, in this case. [speaker001:] Usual. [speaker003:] Unusual. [speaker001:] Uh huh. [speaker003:] Because, um, you don't actually have good matches ordinarily for what any [@ @] [disfmarker] particular person's car is like, or [speaker001:] Mmm. [speaker003:] uh, [speaker001:] Mmm. [speaker003:] It seems like something like the middle one is [disfmarker] is more natural. [speaker001:] Hmm. [speaker003:] So I don't know why the [pause] well matched is [speaker001:] Right. [speaker003:] uh [disfmarker] [speaker004:] Mm hmm. 
[speaker001:] Yeah, but actually the well [disfmarker] well the well matched um, uh, I mean the [disfmarker] the well matched condition is not like, uh, the one in TI digits where uh, you have all the training, uh, conditions exactly like replicated in the testing condition also. It's like, this is not calibrated by SNR or something. The well matched has also some [disfmarker] some mismatch in that which is other than the [disfmarker] [speaker003:] The well wa matched has mismatch? [speaker001:] has [disfmarker] has also some slight mismatches, unlike the TI digits where it's like perfectly matched [speaker006:] Perfect to match. [speaker003:] Yeah. [speaker001:] because it's artificially added noise. [speaker003:] Yeah. [speaker001:] But this is natural recording. [speaker003:] So remind me of what well matched meant? You've told me many times. [speaker001:] The [disfmarker] the well matched is like [disfmarker] the [disfmarker] the well matched is defined like it's seventy percent of the whole database is used for training and thirty percent for testing. [speaker004:] Yeah. Well, so it means that if the database is large enough, it's matched. [speaker001:] It's [disfmarker] it's [disfmarker] [speaker004:] Because it [speaker001:] OK, it's [disfmarker] [speaker003:] Yeah. [speaker004:] in each set you have a range of conditions [disfmarker] Well [disfmarker] [speaker003:] Right. So, I mean, yeah, unless they deliberately chose it to be different, which they didn't because they want it to be well matched, it is pretty much [disfmarker] You know, so it's [disfmarker] so it's sort of saying if you [disfmarker] [speaker006:] It's [disfmarker] it's not guaranteed though. [speaker001:] Yeah. [speaker003:] Uh, it's not guaranteed. [speaker001:] Yeah. [speaker003:] Right. [speaker004:] Mm hmm. [speaker003:] Right.
[speaker001:] Yeah because the m the main [disfmarker] major reason for the m the main mismatch is coming from the amount of noise and the silence frames and all those present in the database actually. [speaker003:] Again, if you have enough [disfmarker] if you have enough [disfmarker] [speaker001:] No yeah, yeah. Yeah. [speaker003:] So it's sort of i i it's sort of saying OK, so you [disfmarker] much as you train your dictation machine for talking into your computer, um, you [disfmarker] you have a car, and so you drive it around a bunch and [disfmarker] and record noise conditions, or something, and then [disfmarker] I don't think that's very realistic, I mean I th [speaker001:] Mm hmm. [speaker003:] I [disfmarker] I you know, so I [disfmarker] I [disfmarker] I [disfmarker] you know, I guess they're saying that if you were a company that was selling the stuff commercially, that you would have a bunch of people driving around in a bunch of cars, and [disfmarker] and you would have something that was roughly similar and maybe that's the argument, but I'm not sure I buy it, so. [speaker001:] Yeah, yeah, yeah. [speaker003:] Uh, So What else is going on? [speaker004:] Mmm. You Yeah. We are playing [disfmarker] we are also playing, trying to put other spectral subtraction mmm, in the code. Um, it would be a very simple spectral subtraction, on the um, mel energies which I already tested but without the um frame dropping actually, and I think it's important to have frame dropping if you use spectral subtraction. [speaker006:] Is it [disfmarker] is spectral subtraction typically done on the [disfmarker] after the mel, uh, scaling or is it done on the FFT bins? [speaker004:] Um, [speaker006:] Does it matter, or [disfmarker]? [speaker004:] I d I don't know. Well, it's both [disfmarker] both uh, cases can i [speaker006:] Oh. [speaker004:] Yeah. So some of the proposal, uh, we're doing this on the bin [disfmarker] on the FFT bins, [speaker006:] Hmm. 
[speaker004:] others on the um, mel energies. You can do both, but I cannot tell you what's [disfmarker] which one might be better or [disfmarker] [speaker006:] Hmm. [speaker004:] I [disfmarker] [speaker001:] I guess if you want to reconstruct the speech, it may be a good idea to do it on FFT bins. [speaker004:] I don't know. [speaker006:] Mmm. [speaker004:] Yeah, but [speaker001:] But for speech recognition, it may not. I mean it may not be very different if you do it on mel warped or whether you do it on FFT. [speaker006:] I see. [speaker001:] So you're going to do a linear weighting anyway after that. Well [disfmarker] Yeah? [speaker006:] Hmm. [speaker001:] So, it may not be really a big different. [speaker004:] Well, it gives something different, but I don't know what are the, pros and cons of both. [speaker001:] It I Uh huh. [speaker003:] Hmm. [speaker001:] So [speaker003:] OK. [speaker001:] The other thing is like when you're putting in a speech enhancement technique, uh, is it like one stage speech enhancement? Because everybody seems to have a mod two stages of speech enhancement in all the proposals, which is really giving them some improvement. [speaker004:] Yeah. [speaker002:] Mm hmm. [speaker004:] Mm hmm. [speaker001:] I mean they just do the same thing again once more. [speaker003:] Mm hmm. [speaker001:] And [disfmarker] So, there's something that is good about doing it [disfmarker] I mean, to cleaning it up once more. [speaker004:] Yeah, it might be. Yeah. [speaker001:] Yeah, so we can [disfmarker] [speaker004:] So maybe in my implementation I should also try to inspire me from this kind of thing [speaker001:] Yeah. That's what [speaker004:] and [disfmarker] Yeah. [speaker003:] Well, the other thing would be to combine what you're doing. I mean maybe one or [disfmarker] one or the other of the things that you're doing would benefit from the other happening first. [speaker001:] That's wh Yeah. 
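A minimal version of the simple spectral subtraction being discussed can be applied per frame either to FFT magnitude-squared bins or to mel filterbank energies, as noted above. The oversubtraction factor and spectral floor here are common textbook additions with illustrative values, not parameters from any of the submissions:

```python
def spectral_subtraction(noisy, noise_est, alpha=2.0, beta=0.01):
    """Simple spectral subtraction on one frame of band energies.

    noisy     : energies per band (FFT bins or mel filterbank outputs)
    noise_est : noise energy estimate per band, e.g. averaged over
                frames known to be non-speech
    alpha     : oversubtraction factor (illustrative value)
    beta      : spectral floor, as a fraction of the noise estimate,
                to avoid negative energies and limit musical noise
    """
    return [max(y - alpha * n, beta * n)
            for y, n in zip(noisy, noise_est)]
```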
So, [speaker003:] Right, so he's doing a signal subspace thing, maybe it would work better if you'd already done some simple spectral subtraction, or maybe vi maybe the other way around, [speaker004:] Yeah, mm hmm. [speaker001:] Yeah. [speaker003:] you know? [speaker004:] Mm hmm. [speaker001:] So I've been thinking about combining the Wiener filtering with signal subspace, I mean just to see all [disfmarker] some [disfmarker] some such permutation combination to see whether it really helps or not. [speaker004:] Mm hmm. Mm hmm. Mm hmm. Yeah. Yeah. [speaker003:] How is it [disfmarker] I [disfmarker] I guess I'm ignorant about this, how does [disfmarker] I mean, since Wiener filter also assumes that you're [disfmarker] that you're adding together the two signals, how is [disfmarker] how is that differ from signal subspace? [speaker001:] The signal subspace? [speaker003:] Yeah. [speaker001:] The [disfmarker] The signal subspace approach has actually an in built Wiener filtering in it. [speaker003:] Oh, OK. [speaker001:] Yeah. It is like a KL transform followed by a Wiener filter. Is the signal is [disfmarker] is a signal substrate. [speaker003:] Oh, oh, OK so the difference is the KL. [speaker001:] So, the [disfmarker] the different [disfmarker] the c the [disfmarker] the advantage of combining two things is mainly coming from the signal subspace approach doesn't work very well if the SNR is very bad. It's [disfmarker] [speaker003:] I see. [speaker001:] it works very poorly with the poor SNR conditions, and in colored noise. [speaker003:] So essentially you could do simple spectral subtraction, followed by a KL transform, followed by a [speaker001:] Wiener filtering. [speaker003:] Wiener filter. [speaker001:] It's a [disfmarker] it's a cascade of two s [speaker003:] Yeah, in general, you don't [disfmarker] that's right you don't wanna othorg orthogonalize if the things are noisy. Actually. 
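The "KL transform followed by a Wiener filter" description can be made concrete: after eigendecomposing the noisy-signal covariance (pre-whitening first if the noise is colored), a Wiener-type gain is applied per eigen direction, and directions buried in the noise are dropped, which is the "signal subspace" selection. The sketch below shows only the gain computation, assuming white noise of known variance; it is a generic textbook rendering, not any particular submitted system:

```python
def subspace_gains(noisy_eigenvalues, noise_var, floor=0.0):
    """Wiener-type gains in the KL (eigen) domain.

    Given eigenvalues of the noisy-signal covariance and the (white)
    noise variance, estimate the clean-signal energy per eigen
    direction and form the corresponding Wiener gain.  Directions
    whose energy does not rise above the noise get gain zero.
    """
    gains = []
    for lam in noisy_eigenvalues:
        clean = max(lam - noise_var, 0.0)        # estimated clean energy
        gains.append(clean / lam if clean > floor else 0.0)
    return gains
```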
Um, that was something that uh, Herve and I were talking about with um, the multi band stuff, that if you're converting things to from uh, bands, groups of bands into cepstral coef you know, local sort of local cepstral coefficients that it's not that great to do it if it's noisy. [speaker001:] Mm hmm. OK. Yeah. [speaker003:] Uh, so. [speaker001:] So. So that [disfmarker] that's one reason maybe we could combine s some [disfmarker] something to improve SNR a little bit, first stage, [speaker003:] Yeah. [speaker001:] and then do a something in the second stage which could take it further. [speaker004:] What was your point about [disfmarker] about colored noise there? [speaker001:] Oh, the colored noise [speaker004:] Yeah. [speaker001:] uh [disfmarker] the colored noise [disfmarker] the [disfmarker] the v the signal subspace approach has [disfmarker] I mean, it [disfmarker] it actually depends on inverting the matrices. So it [disfmarker] it [disfmarker] ac the covariance matrix of the noise. [speaker004:] Mm hmm. [speaker001:] So if [disfmarker] if it is not positive definite, I mean it has a [disfmarker] it's [disfmarker] It doesn't behave very well if it is not positive definite ak It works very well with white noise because we know for sure that it has a positive definite. [speaker003:] So you should do spectral subtraction and then add noise. [speaker001:] So the way they get around is like they do an inverse filtering, first of the colo colored noise [speaker003:] Yeah. [speaker001:] and then make the noise white, [speaker003:] Yeah. [speaker001:] and then finally when you reconstruct the speech back, you do this filtering again. [speaker004:] Yeah, right. [speaker003:] I was only half kidding. I mean if you [disfmarker] sort of [vocalsound] you do the s spectral subtraction, that also gets rid [disfmarker] [speaker004:] Yeah. [speaker001:] Yeah. Yeah. 
[speaker003:] and then you [disfmarker] then [disfmarker] then add a little bit l noise [disfmarker] noise addition [disfmarker] I mean, that sort of what J [disfmarker] JRASTA does, in a way. If you look at what JRASTA doing essentially i i it's equivalent to sort of adding a little [disfmarker] adding a little noise, [speaker001:] Yeah. Huh? Uh huh. [speaker004:] Uh huh. [speaker003:] in order to get rid of the effects of noise. [speaker001:] So. [speaker004:] Yeah. [speaker003:] OK. [speaker004:] Uh, yeah. So there is this. And maybe we [disfmarker] well we find some people so that [vocalsound] uh, agree to maybe work with us, and they have implementation of VTS techniques so it's um, Vector Taylor Series that are used to mmm, [vocalsound] uh f to model the transformation between clean cepstra and noisy cepstra. So. Well, if you take the standard model of channel plus noise, uh, it's [disfmarker] it's a nonlinear eh uh, transformation in the cepstral domain. [speaker003:] Mm hmm. Yes. [speaker004:] And uh, there is a way to approximate this using uh, first order or second order Taylor Series and it can be used for [vocalsound] uh, getting rid of the noise and the channel effect. [speaker003:] Who is doing this? [speaker004:] Uh w working in the cepstral domain? So there is one guy in Granada, [speaker002:] Yeah, in Granada [speaker004:] and another in [pause] uh, Lucent that I met at ICASSP. [speaker002:] one of my friend. [speaker004:] uh, [speaker003:] Who's the guy in Granada? [speaker002:] Uh, Jose Carlos Segura. [speaker003:] I don't know him. [speaker001:] This VTS has been proposed by CMU? [speaker004:] Mm hmm. [speaker001:] Is it [disfmarker] is it the CMU? [speaker002:] Yeah, yeah, yeah. [speaker001:] Yeah, yeah, OK. [speaker002:] Originally the idea was from CMU. [speaker004:] Mm hmm. [speaker001:] From C. [speaker004:] Yeah. [speaker003:] Uh huh. [speaker004:] Well, it's again a different thing [vocalsound] [vocalsound] that could be tried.
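The nonlinear clean-to-noisy transformation and its first-order Taylor approximation can be sketched for a single band in the log-spectral domain. The actual cepstral-domain VTS, as in the CMU and Granada work, additionally involves the DCT matrix and typically an iterative estimation loop; none of that is shown here, and the channel term is omitted for simplicity:

```python
import math

def mismatch(x, n):
    """Noisy log-energy y as a function of clean log-energy x and
    noise log-energy n:
        exp(y) = exp(x) + exp(n)  =>  y = x + log(1 + exp(n - x))
    """
    return x + math.log1p(math.exp(n - x))

def mismatch_vts1(x, n, x0, n0):
    """First-order (Vector) Taylor Series approximation of the same
    nonlinearity, expanded around the point (x0, n0)."""
    g0 = mismatch(x0, n0)
    # a = dy/dn at the expansion point; dy/dx is then (1 - a)
    a = math.exp(n0 - x0) / (1.0 + math.exp(n0 - x0))
    return g0 + (1.0 - a) * (x - x0) + a * (n - n0)
```

The linearized form is what makes it tractable to propagate Gaussian statistics of the clean speech through the noise model and undo the noise and channel effect.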
Um, [speaker003:] Uh huh. [speaker004:] Mmm, yeah. [speaker003:] Yeah, so at any rate, you're looking general, uh, standing back from it, looking at ways to combine one form or another of uh, noise removal, uh, with [disfmarker] with these other things we have, [speaker004:] Mm hmm. [speaker003:] uh, looks like a worthy thing to [disfmarker] to do here. [speaker004:] Uh, yeah. But, yeah. But for sure there's required to [disfmarker] that requires to re check everything else, and re optimize the other things [speaker003:] Oh yeah. [speaker004:] and, for sure the on line normalization may be the LDA filter. Um, [speaker003:] Well one of the [disfmarker] seems like one of the things to go through next week when Hari's here, [speaker004:] I [disfmarker] [speaker003:] cuz Hari'll have his own ideas too [disfmarker] [speaker004:] Uh huh. [speaker003:] or [pause] I guess not next week, week and a half, uh, will be sort of go through these alternatives, what we've seen so far, and come up with some game plans. Um. You know. So, I mean one way would [disfmarker] he Here are some alternate visions. I mean one would be, you look at a few things very quickly, you pick on something that looks like it's promising and then everybody works really hard on the same [disfmarker] different aspects of the same thing. Another thing would be to have t to [disfmarker] to pick two pol two plausible things, and [disfmarker] and you know, have t sort of two working things for a while until we figure out what's better, [speaker004:] Mm hmm. [speaker003:] and then, you know, uh, but, w um, uh, he'll have some ideas on that too. [speaker001:] The other thing is to, uh [disfmarker] Most of the speech enhancement techniques have reported results on small vocabulary tasks. But we [disfmarker] we going to address this Wall Street Journal in our next stage, which is also going to be a noisy task so s very few people have reported something on using some continuous speech at all. 
So, there are some [disfmarker] I mean, I was looking at some literature on speech enhancement applied to large vocabulary tasks and spectral subtraction doesn't seems to be the thing to do for large vocabulary tasks. And it's [disfmarker] Always people have shown improvement with Wiener filtering and maybe subspace approach over spectral subtraction everywhere. But if we [disfmarker] if we have to use simple spectral subtraction, we may have to do some optimization [pause] to make it work [@ @]. [speaker003:] So they're making [disfmarker] there [disfmarker] Somebody's generating Wall Street Journal with additive [disfmarker] artificially added noise or something? [speaker001:] Yeah, yeah. [speaker003:] Sort of a [disfmarker] sort of like what they did with TI digits, and? [speaker001:] Yeah. [speaker003:] Yeah, OK. [speaker001:] Yeah. I m I guess Guenter Hirsch is in charge of that. Guenter Hirsch and TI. [speaker003:] OK. [speaker001:] Maybe Roger [disfmarker] r Roger, maybe in charge of. [speaker003:] And then they're [disfmarker] they're uh, uh, generating HTK scripts to [disfmarker] [speaker001:] Yeah. Yeah, I don't know. There are [disfmarker] they have [disfmarker] there is no [disfmarker] I don't know if they are converging on HTK or are using some Mississippi State, [speaker003:] Mis Mississippi State maybe, [speaker001:] yeah. [speaker003:] yeah. [speaker001:] I'm not sure about that. [speaker003:] Yeah, so that'll be a little [disfmarker] little task in itself. [speaker001:] Yeah. [speaker003:] Um, well we've [disfmarker] Yeah, it's true for the additive noise, y artificially added noise we've always used small vocabulary too. But for n there's been noisy speech this larv large vocabulary that we've worked with in Broadcast News. So we we did the Broadcast News evaluation and some of the focus conditions were noisy [speaker001:] Mm hmm. 
[speaker003:] and [disfmarker] and [disfmarker] [speaker001:] It had additive n [speaker003:] But we [disfmarker] but we didn't do spectral subtraction. We were doing our funny stuff, right? We were doing multi multi uh, multi stream and [disfmarker] and so forth. [speaker001:] Yeah. [speaker003:] But it, you know, we di stuff we did helped. I mean it, did something. So. [speaker001:] OK. [speaker003:] Um, now we have this um, meeting data. You know, like the stuff we're [comment] recording right now, [speaker001:] Yeah. Yeah. [speaker003:] and [disfmarker] and uh, that we have uh, for the [disfmarker] uh, the quote unquote noisy data there is just [disfmarker] noisy and reverberant actually. It's the far field mike. And uh, we have uh, the digits that we do at the end of these things. And that's what most o again, most of our work has been done with that, with [disfmarker] with uh, connected digits. [speaker001:] Uh huh. [speaker003:] Um, but uh, we have recognition now with some of the continuous speech, large vocabulary continuous speech, using Switchboard [disfmarker] uh, Switchboard recognizer, [speaker001:] Yeah. OK. [speaker003:] uh, no training, [vocalsound] from this, just [disfmarker] just plain using the Switchboard. [speaker001:] Oh. You just take the Switchboard trained [disfmarker]? [speaker003:] That's [disfmarker] that's what we're doing, [speaker001:] Yeah, [speaker003:] yeah. [speaker001:] yeah. [speaker003:] Now there are some adaptation though, [speaker001:] OK. [speaker003:] that [disfmarker] that uh, Andreas has been playing with, [speaker001:] Yeah. That's cool. OK. [speaker003:] but we're hop uh, actually uh, Dave and I were just talking earlier today about maybe at some point not that distant future, trying some of the techniques that we've talked about on, uh, some of the large vocabulary data. 
Um, I mean, I guess no one had done [disfmarker] yet done test one on the distant mike using uh, the SRI recognizer and, uh, [speaker006:] I don't [disfmarker] not that I know of. [speaker003:] Yeah, cuz everybody's scared. [speaker001:] Yeah. [speaker003:] You'll see a little smoke coming up from the [disfmarker] the CPU or something [vocalsound] trying to [disfmarker] trying to do it, [speaker006:] That's right [speaker003:] but uh, yeah. But, you're right that [disfmarker] that [disfmarker] that's a real good point, that uh, we [disfmarker] we don't know yeah, uh, I mean, what if any of these ta I guess that's why they're pushing that in the uh [disfmarker] in the evaluation. [speaker001:] Yeah. [speaker003:] Uh, But um, Good. OK. Anything else going on? at you guys' end, or [disfmarker]? [speaker002:] I don't have good result, with the [disfmarker] inc including the new parameters, I don't have good result. Are [pause] similar or a little bit worse. [speaker001:] With what [disfmarker] what other new p new parameter? [speaker007:] You're talking about your voicing? [speaker003:] Yeah. So maybe [disfmarker] You probably need to back up a bit [speaker002:] Yeah. Mm hmm. [speaker001:] Yeah. [speaker003:] seeing as how Sunil, [speaker002:] I tried to include another new parameter to the traditional parameter, [speaker003:] yeah. [speaker002:] the coe the cepstrum coefficient, [speaker001:] Uh huh. [speaker002:] that, like, the auto correlation, the R zero and R one over R zero [speaker001:] Mm hmm. Mm hmm. [speaker002:] and another estimation of the var the variance of the difference for [disfmarker] of the spec si uh, spectrum of the signal and [disfmarker] and the spectrum of time after filt mel filter bank. [speaker001:] I'm so sorry. I didn't get it. [speaker002:] Nuh. Well. Anyway. The [disfmarker] First you have the sp the spectrum of the signal, [speaker001:] Mm hmm. 
[speaker002:] and you have the [disfmarker] on the other side you have the output of the mel filter bank. [speaker001:] Mm hmm. [speaker002:] You can extend the coefficient of the mel filter bank and obtain an approximation of the spectrum of the signal. [speaker001:] Mmm. OK. [speaker002:] I do the difference [disfmarker] [speaker001:] OK. [speaker002:] I found a difference at the variance of this different [speaker001:] Uh huh. [speaker002:] because, suppose we [disfmarker] we think that if the variance is high, maybe you have n uh, noise. [speaker001:] Yeah. [speaker002:] And if the variance is small, maybe you have uh, speech. [speaker001:] Uh huh. [speaker002:] To [disfmarker] to To [disfmarker] The idea is to found another feature for discriminate between voice sound and unvoice sound. [speaker001:] OK. [speaker002:] And we try to use this new feature [disfmarker] feature. And I did experiment [disfmarker] I need to change [disfmarker] to obtain this new feature I need to change the size [disfmarker] the window size [disfmarker] size. of the a of the [disfmarker] analysis window size, to have more information. [speaker001:] Yeah. Make it longer. [speaker002:] Uh, sixty two point five milliseconds I think. [speaker001:] OK. [speaker002:] And I do [disfmarker] I did two type of experiment to include this feature directly with the [disfmarker] with the other feature and to train a neural network to select it voice unvoice silence [disfmarker] silence [speaker001:] Unvoiced. Well. [speaker002:] and to [disfmarker] to concat this new feature. But the result are n with the neural network I have more or less the same result. [speaker001:] As using just the cepstrum, [speaker002:] Result. [speaker001:] or [disfmarker]? [speaker002:] Yeah. Yeah. [speaker001:] OK. [speaker002:] It's neve e e sometime it's worse, sometime it's a little bit better, but not significantly. And [disfmarker] [speaker001:] Uh, is it with TI digits, or with [disfmarker]? 
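The feature being described, how far the raw spectrum deviates from its filterbank-smoothed version, might be sketched like this; the rectangular demo filterbank and the function name are assumptions for illustration, not the actual mel filterbank or window setup used in the experiments:

```python
import numpy as np

def smoothness_feature(frame, fbank):
    """Variance of (log FFT spectrum minus log of its filterbank smoothing).
    The filterbank energies are spread back over the FFT bins as band means;
    the variance of the residual measures how far the raw spectrum deviates
    from the smooth envelope, the rough voicing/noise cue described above."""
    spec = np.abs(np.fft.rfft(frame)) ** 2 + 1e-10
    band_means = (fbank @ spec) / (fbank.sum(axis=1) + 1e-10)
    recon = fbank.T @ band_means          # piecewise approximation per bin
    return np.var(np.log(spec) - np.log(recon + 1e-10))

# crude demo filterbank: 4 rectangular bands covering 33 rfft bins
fbank = np.zeros((4, 33))
edges = [0, 8, 16, 24, 33]
for i in range(4):
    fbank[i, edges[i]:edges[i + 1]] = 1.0

rng = np.random.default_rng(1)
v = smoothness_feature(rng.standard_normal(64), fbank)   # 64 samples -> 33 bins
print(v)
```

The longer analysis window mentioned (62.5 ms) would simply mean passing in a longer `frame`, giving a finer FFT grid for the residual.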
[speaker002:] No, I work with eh, Italian and Spanish basically. [speaker001:] OK. OK. [speaker002:] And if I don't y use the neural network, and use directly the feature the results are worse. [speaker001:] Uh huh. [speaker002:] But Doesn't help. [speaker004:] Mm hmm. [speaker003:] I [disfmarker] I [disfmarker] I really wonder though. I mean we've had these discussions before, and [disfmarker] and one of the things that struck me was that [disfmarker] uh, about this line of thought that was particularly interesting to me was that we um [disfmarker] whenever you condense things, uh, in an irreversible way, um, you throw away some information. And, that's mostly viewed on as a good thing, in the way we use it, because we wanna suppress things that will cause variability for uh particular, uh, phonetic units. Um, but, you'll do throw something away. And so the question is, uh, can we figure out if there's something we've thrown away that we shouldn't have. And um. So, when they were looking at the difference between the filter bank and the FFT that was going into the filter bank, I was thinking "oh, OK, so they're picking on something they're looking on it to figure out noise, or voice [disfmarker] voiced property whatever." So that [disfmarker] that's interesting. Maybe that helps to drive the [disfmarker] the thought process of coming up with the features. But for me sort of the interesting thing was, "well, but is there just something in that difference which is useful?" So another way of doing it, maybe, would be just to take the FFT uh, power spectrum, and feed it into a neural network, and then use it, you know, in combination, or alone, or [disfmarker] or whatever [speaker002:] To know [disfmarker] [speaker006:] Wi with what targets? [speaker001:] Voiced, unvoiced is like [disfmarker] [speaker003:] Uh, no. [speaker001:] Oh. Or anything. 
[speaker003:] No the [disfmarker] just the same [disfmarker] same way we're using [disfmarker] I mean, the same way that we're using the filter bank. [speaker006:] Phones. [speaker001:] Oh, OK. [speaker004:] Mm hmm. [speaker003:] Exact way [disfmarker] the same way we're using the filter bank. I mean, the filter bank is good for all the reasons that we say it's good. But it's different. And, you know, maybe if it's used in combination, it will get at something that we're missing. [speaker004:] Mm hmm. [speaker003:] And maybe, you know, using, orth you know, KLT, or uh, um, adding probabilities, I mean, all th all the different ways that we've been playing with, that we would let the [disfmarker] essentially let the neural network determine what is it that's useful, that we're missing here. [speaker004:] Mm hmm. Yeah, but there is so much variability in the power spectrum. [speaker001:] Mm hmm. [speaker003:] Well, that's probably why y i it would be unlikely to work as well by itself, [speaker004:] Mm hmm. [speaker003:] but it might help in combination. [speaker004:] Mmm. [speaker003:] But I [disfmarker] I [disfmarker] I have to tell you, I can't remember the conference, but, uh, I think it's about ten years ago, I remember going to one of the speech conferences and [disfmarker] and uh, I saw within very short distance of one another a couple different posters that showed about the wonders of some auditory inspired front end or something, and a couple posters away it was somebody who compared one to uh, just putting in the FFT and the FFT did slightly better. [speaker004:] Mm hmm. [speaker003:] So I mean the [disfmarker] i i It's true there's lots of variability, but again we have these wonderful statistical mechanisms for quantifying that a that variability, and you know, doing something reasonable with it. [speaker004:] Mm hmm. 
[speaker003:] So, um, uh, It it's same, you know, argument that's gone both ways about uh, you know, we have these data driven filters, in LDA, and on the other hand, if it's data driven it means it's driven by things that have lots of variability, and that are necessarily [disfmarker] not necessarily gonna be the same in training and test, so, in some ways it's good to have data driven things, and in some ways it's bad to have data driven things. So, [speaker001:] Yeah, d [speaker003:] part of what we're discovering, is ways to combine things that are data driven and things that are not. [speaker001:] Yeah. [speaker003:] Uh, so anyway, it's just a thought, that [disfmarker] that if we [disfmarker] if we had that [disfmarker] maybe it's just a baseline uh, which would show us "well, what are we really getting out of the filters", or maybe i i probably not by itself, but in combination, [speaker004:] Mm hmm. [speaker003:] uh, you know, maybe there's something to be gained from it, and let the [disfmarker] But, you know, y you've only worked with us for a short time, maybe in a year or two you w you will actually come up with the right set of things to extract from this information. But, maybe the neural net and the H M Ms could figure it out quicker than you. [speaker002:] Maybe. [speaker003:] So. It's just a thought. [speaker002:] Yeah, I can [disfmarker] I will try to do that. [speaker003:] Yeah. [speaker001:] What [disfmarker] one [disfmarker] one um p one thing is like what [disfmarker] before we started using this VAD in this Aurora, the [disfmarker] th what we did was like, I [disfmarker] I guess most of you know about this, adding this additional speech silence bit to the cepstrum and training the HMM on that. [speaker003:] Mm hmm. [speaker001:] That is just a binary feature and that seems to be [vocalsound] improving a lot on the SpeechDat Car where there is a lot of noise but not much on the TI digits.
So, a adding an additional feature to distin to discriminate between speech and nonspeech was helping. That's it. [speaker004:] Wait [disfmarker] I [disfmarker] I'm sorry? [speaker001:] Yeah, we actually added an additional binary feature to the cepstrum, just the baseline. [speaker004:] Yeah? [speaker002:] You did some experiment. [speaker001:] Yeah, yeah. Well, in [disfmarker] in the case of TI digits it didn't actually give us anything, because there wasn't any f anything to discriminate between speech, [speaker004:] Yeah. [speaker001:] and it was very short. [speaker004:] Hmm. [speaker001:] But Italian was like very [disfmarker] it was a huge improvement on Italian. [speaker004:] Well [disfmarker] Mm hmm. But anyway the question is even more, is within speech, can we get some features? Are we drop dropping information that can might be useful within speech, I mean. To [disfmarker] maybe to distinguish between voice sound and unvoiced sounds? [speaker001:] OK. Mm hmm. Yeah, yeah. Yeah. [speaker003:] And it's particularly more relevant now since we're gonna be given the endpoints. [speaker004:] Yeah. Mm hmm. [speaker003:] So. [speaker001:] Yeah, yeah. [speaker003:] Uh. [speaker004:] Mmm. [speaker003:] So. Um. [speaker001:] Mmm. There was a paper in ICASSP [disfmarker] this ICASSP [disfmarker] over the uh extracting some higher order uh, information from the cepstral coefficients and I forgot the name. Some is some harmonics I don't know, I can [disfmarker] I can pull that paper out from ICASSP. [speaker004:] Yeah. [speaker003:] Talking cumulants or something? [speaker001:] It [disfmarker] Huh? [speaker003:] Cumulants or something. [speaker001:] Uh, I don't know. [speaker003:] But [disfmarker] No. [speaker001:] I don't remember. It wa it was taking the, um [disfmarker] It was about finding the higher order moments of [disfmarker] [speaker003:] Yeah, [speaker001:] Yeah. 
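Appending the binary speech/silence bit to the cepstral vector is just an extra feature column; in this sketch a plain energy threshold stands in for whatever detector actually produced the bit in those experiments, and the function name is made up for illustration:

```python
import numpy as np

def append_speech_bit(cepstra, log_energy, threshold):
    """Append a binary speech/nonspeech flag as one extra feature column.
    A simple energy threshold stands in here for the real detector;
    the point is just the concatenation onto the cepstral features."""
    flag = (log_energy > threshold).astype(cepstra.dtype)
    return np.hstack([cepstra, flag[:, None]])

feats = append_speech_bit(np.zeros((3, 13)), np.array([-5.0, 2.0, 3.0]), 0.0)
print(feats.shape)    # (3, 14)
print(feats[:, -1])   # [0. 1. 1.]
```

The HMM is then trained on the augmented vectors, so the models can learn to key on the flag where it helps, which fits the reported pattern: a large gain on noisy SpeechDat Car, little change on clean TI digits.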
[speaker003:] cumulants, [speaker001:] And I'm not sure about whether it is the higher order moments, or [disfmarker] [speaker003:] yeah. [speaker001:] maybe higher order cumulants [speaker003:] Oh. [speaker001:] and [disfmarker] Yeah. [speaker003:] Or m e [speaker001:] It was [disfmarker] it was [disfmarker] Yeah. I mean, he was showing up uh some [disfmarker] something on noisy speech, [speaker003:] Yeah. [speaker001:] some improvement on the noisy speech. [speaker004:] Mm hmm. [speaker001:] Some small vocabulary tasks. [speaker003:] Uh. [speaker001:] So it was on PLP derived cepstral coefficients. [speaker003:] Yeah, but again [disfmarker] You could argue that th that's exactly what the neural network does. [speaker001:] Mmm. [speaker003:] So n neural network uh, is in some sense equivalent to computing, you know, higher order moments of what you [disfmarker] [speaker001:] trying to f to Moments, yeah. [speaker003:] yeah. [speaker001:] Yeah. [speaker003:] So. I mean, it doesn't do it very specifically, [speaker004:] Mm hmm. [speaker003:] and pretty [disfmarker] you know. But. [speaker001:] Yep. [speaker003:] Uh, anything on your end you want to talk about? Uh. [speaker007:] Um, nothing I wanna really talk about. I can [disfmarker] I can just uh, um, share a little bit [disfmarker] Sunil hasn't [disfmarker] hasn't heard about uh, what I've been doing. [speaker003:] Yeah. [speaker007:] Um, so, um, I told you I was [disfmarker] I was [disfmarker] I was getting prepared to take this qualifier exam. So basically that's just, um, trying to propose um, uh, your next your [disfmarker] your following years of [disfmarker] of your PHD work, trying [disfmarker] trying to find a project to [disfmarker] to define and [disfmarker] and to work on. So, I've been, uh, looking into, um, doing something about r uh, speech recognition using acoustic events. 
So, um, the idea is you have all these [disfmarker] these different events, for example voicing, nasality, R coloring, you know burst or noise, uh, frication, that kinda stuff, um, building robust um, primary detectors for these acoustic events, and using the outputs of these robust detectors to do speech recognition. Um, and, um, these [disfmarker] these primary detectors, um, will be, uh, inspired by, you know, multi band techniques, um, doing things, um, similar to Larry Saul's work on, uh, graphical models to [disfmarker] to detect these [disfmarker] these, uh, acoustic events. And, um, so I [disfmarker] I been [disfmarker] I been thinking about that and some of the issues that I've been running into are, um, exactly what [disfmarker] what kind of acoustic events I need, what [disfmarker] um, what acoustic events will provide a [disfmarker] a good enough coverage to [disfmarker] in order to do the later recognition steps. And, also, um, once I decide a set of acoustic events, um, h how do I [disfmarker] how do I get labels? Training data for [disfmarker] for these acoustic events. And, then later on down the line, I can start playing with the [disfmarker] the models themselves, the [disfmarker] the primary detectors. Um, so, um, I kinda see [disfmarker] like, after [disfmarker] after building the primary detectors I see um, myself taking the outputs and feeding them in, sorta tandem style into [disfmarker] into a um, Gaussian mixtures HMM back end, um, and doing recognition. Um. So, that's [disfmarker] that's just generally what I've been looking at. [speaker001:] Yeah. [speaker007:] Um, [speaker003:] By [disfmarker] by the way, uh, the voiced unvoiced version of that for instance could tie right in to what Carmen was looking at. [speaker007:] Yeah. [speaker003:] So, [speaker004:] Mm hmm. 
[speaker003:] you know, um, if you [disfmarker] if a multi band approach was helpful as [disfmarker] as I think it is, it seems to be helpful for determining voiced unvoiced, [speaker007:] Mm hmm. [speaker003:] that one might be another thing. [speaker007:] Yeah. [speaker002:] Mm hmm. [speaker007:] Yeah. Um, were [disfmarker] were you gonna say something? [speaker006:] Mmm. [speaker007:] Oh. It looked [disfmarker] OK, never mind. Um, yeah. And so, this [disfmarker] this past week um, I've been uh, looking a little bit into uh, TRAPS um, and doing [disfmarker] doing TRAPS on [disfmarker] on these e events too, just, um, seeing [disfmarker] seeing if that's possible. Uh, and um, other than that, uh, I was kicked out of I house for living there for four years. [speaker003:] Oh no. So you live in a cardboard box in the street now [speaker007:] Yeah. [speaker003:] or, no? [speaker007:] Uh, well, s s som something like that. [speaker003:] Yeah. [speaker007:] In Albany, yeah. Yeah. And uh. Yep. That's it. [speaker003:] Sunil, d'you uh [disfmarker] did you find a place? Is that out of the way? [speaker001:] Uh, no not yet. Uh, yesterday I called up a lady who ha who will have a vacant room from May thirtieth and she said she's interviewing two more people. So. And she would get back to me on Monday. So that's [disfmarker] that's only thing I have and Diane has a few more houses. She's going to take some pictures and send me after I go back. [speaker003:] OK. [speaker001:] So it's [disfmarker] that's [disfmarker] [speaker006:] Oh. So you're not down here permanently yet? [speaker001:] No. I'm going back to OGI today. [speaker006:] Ah! Oh, OK. [speaker007:] Oh. [speaker003:] OK. And then, you're coming back uh [disfmarker] [speaker001:] Uh, i I mean, I [disfmarker] I p I plan to be here on thirty first. [speaker003:] Thirty first, OK. [speaker007:] Thirty first.
[speaker001:] Yeah, well if there's a house available or place to [disfmarker] [speaker003:] Well, I mean i i if [disfmarker] if [disfmarker] [speaker001:] Yeah, I hope. [speaker003:] They're available, and they'll be able to get you something, so worst comes to worst we'll put you up in a hotel for [disfmarker] for [disfmarker] for a while [speaker001:] Yeah. So, in that case, I'm going to be here on thirty first definitely. [speaker003:] until you [disfmarker] OK. [speaker005:] You know, if you're in a desperate situation and you need a place to stay, you could stay with me for a while. I've got a spare bedroom right now. [speaker001:] Oh. OK. Thanks. That sure is nice of you. So, it may be he needs more than me. [speaker007:] Oh r oh. Oh no, no. My [disfmarker] my cardboard box is actually a nice spacious two bedroom apartment. [speaker003:] So a two bedroom cardboard box. Th that's great. [speaker007:] yeah [speaker001:] Yeah. Yeah. [vocalsound] Yeah. [speaker003:] Thanks Dave. [speaker001:] Yeah. [speaker003:] Um, [speaker001:] Yeah. [speaker003:] Do y wanna say anything about [disfmarker] You [disfmarker] you actually been [disfmarker] Uh, last week you were doing this stuff with Pierre, you were [disfmarker] you were mentioning. Is that [disfmarker] that something worth talking about, or [disfmarker]? [speaker005:] Um, it's [disfmarker] Well, um, it [disfmarker] I don't think it directly relates. Um, well, so, I was helping a speech researcher named Pierre Divenyi and he's int He wanted to um, look at um, how people respond to formant changes, I think. Um. So he [disfmarker] he created a lot of synthetic audio files of vowel to vowel transitions, and then he wanted a psycho acoustic um, spectrum. And he wanted to look at um, how the energy is moving [pause] over time in that spectrum and compare that to the [disfmarker] to the listener tests. And, um. So, I gave him a PLP spectrum. 
And [disfmarker] to um [disfmarker] he [disfmarker] he t wanted to track the peaks so he could look at how they're moving. So I took the um, PLP LPC coefficients and um, I found the roots. This was something that Stephane suggested. I found the roots of the um, LPC polynomial to, um, track the peaks in the, um, PLP LPC spectra. [speaker001:] well there is line spectral pairs, is like the [disfmarker] the [disfmarker] Is that the line s [speaker003:] It's a r root LPC, uh, of some sort. [speaker001:] Oh, no. [speaker004:] Mm hmm. [speaker001:] So you just [disfmarker] [speaker003:] Yeah. [speaker001:] instead of the log you took the root square, I mean cubic root or something. What di w I didn't get that. [speaker003:] No, no. It's [disfmarker] it's [disfmarker] it's taking the [disfmarker] finding the roots of the LPC polynomial. [speaker001:] Polynomial. Yeah. Is that the line spectral [disfmarker] [speaker003:] So it's like line spectral pairs. [speaker001:] Oh, it's like line sp [speaker003:] Except I think what they call line spectral pairs they push it towards the unit circle, don't they, to sort of? [speaker001:] Yeah, yeah, yeah, yeah. [speaker003:] But it [disfmarker] But uh, you know. But what we'd used to do w when I did synthesis at National Semiconductor twenty years ago, the technique we were playing with initially was [disfmarker] was taking the LPC polynomial and [disfmarker] and uh, finding the roots. It wasn't PLP cuz Hynek hadn't invented it yet, but it was just LPC, and uh, we found the roots of the polynomial, And th When you do that, sometimes they're f they're what most people call formants, sometimes they're not. [speaker001:] Mmm. [speaker003:] So it's [disfmarker] it's [disfmarker] it's a little, [speaker004:] Hmm.
[speaker003:] uh [disfmarker] Formant tracking with it can be a little tricky cuz you get these funny [vocalsound] values in [disfmarker] in real speech, [speaker006:] So you just [disfmarker] You typically just get a few roots? [speaker003:] but. [speaker006:] You know, two or three, something like that? [speaker003:] Well you get these complex pairs. And it depends on the order that you're doing, [speaker004:] Mm hmm. [speaker005:] Right. [speaker003:] but. [speaker005:] So, um, if [disfmarker] [@ @] [comment] Every root that's [disfmarker] Since it's a real signal, the LPC polynomial's gonna have real coefficients. So I think that means that every root that is not a real root [comment] is gonna be a c complex pair, [speaker006:] Mm hmm. [speaker005:] um, of a complex value and its conjugate. Um. So for each [disfmarker] And if you look at that on the unit circle, um, one of these [disfmarker] one of the members of the pair will be a positive frequency, one will be a negative frequency, I think. So I just [disfmarker] So, um, f for the [disfmarker] I'm using an eighth order polynomial and I'll get three or four of these pairs [speaker003:] Yeah. [speaker005:] which give me s which gives me three or four peak positions. [speaker001:] Hmm. [speaker003:] This is from synthetic speech, or [disfmarker]? [speaker005:] It's [disfmarker] Right. Yeah. [speaker003:] Yeah. So if it's from synthetic speech then maybe it'll be cleaner. I mean for real speech in real [disfmarker] then what you end up having is, like I say, funny little things that are [disfmarker] don't exactly fit your notion of formants all that well. [speaker006:] How did [disfmarker] [speaker004:] But [speaker003:] But [disfmarker] but mostly they are. [speaker004:] Yeah. [speaker003:] Mostly they do. 
[speaker005:] Mmm, [speaker003:] And [disfmarker] and what [disfmarker] I mean in [disfmarker] in what we were doing, which was not so much looking at things, it was OK [speaker004:] I [disfmarker] [speaker003:] because it was just a question of quantization. Uh, we were just you know, storing [disfmarker] It was [disfmarker] We were doing, uh, stored speech, uh, quantization. [speaker004:] Mm hmm. [speaker003:] But [disfmarker] but uh, in your case um, you know [disfmarker] [speaker004:] Actually you have peaks that are not at the formant's positions, but they are lower in energy [speaker005:] But [disfmarker] there's some of that, yes. [speaker004:] and [disfmarker] Well they are much lower. [speaker006:] If this is synthetic speech can't you just get the formants directly? I mean h how is the speech created? [speaker005:] It was created from a synthesizer, and um [disfmarker] [speaker006:] Wasn't a formant synthesizer was it? [speaker003:] I bet it [disfmarker] it might have [disfmarker] may have been [speaker005:] I [disfmarker] d d this [disfmarker] [speaker003:] but maybe he didn't have control over it or something? [speaker005:] In [disfmarker] in fact w we [disfmarker] we could get, um, formant frequencies out of the synthesizer, as well. And, um, w one thing that the, um, LPC approach will hopefully give me in addition, um, is that I [disfmarker] I might be able to find the b the bandwidths of these humps as well. Um, Stephane suggested looking at each complex pair as a [disfmarker] like a se second order IIR filter. [speaker003:] Yeah. [speaker005:] Um, but I don't think there's a g a really good reason not to um, get the formant frequencies from the synthesizer instead. Except that you don't have the psycho acoustic modeling in that. [speaker003:] Yeah, so the actual [disfmarker] So you're not getting the actual formants per se. You're getting the [disfmarker] Again, you're getting sort of the, uh [disfmarker] [speaker004:] Mm hmm. 
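The root-finding recipe discussed here (the angle of each complex-conjugate LPC root gives a center frequency, and the radius gives a bandwidth via the second-order-resonator view that Stephane suggested) can be sketched as follows; `lpc_peaks` is an illustrative name, not code from the project:

```python
import numpy as np

def lpc_peaks(a, fs):
    """Peak frequencies and bandwidths from LPC coefficients [1, a1, ..., ap].
    Each complex-conjugate root pair acts like a second-order resonator:
    the root angle gives the center frequency, and the radius gives the
    bandwidth via bw = -(fs / pi) * ln|z|.  Keep one root per pair."""
    z = np.roots(a)
    z = z[np.imag(z) > 0]          # positive-frequency member of each pair
    freqs = np.angle(z) * fs / (2.0 * np.pi)
    bws = -(fs / np.pi) * np.log(np.abs(z))
    order = np.argsort(freqs)
    return freqs[order], bws[order]

# toy check: one resonator placed at 1000 Hz with a 100 Hz bandwidth, fs = 8 kHz
fs, f0, bw = 8000.0, 1000.0, 100.0
r = np.exp(-np.pi * bw / fs)
theta = 2.0 * np.pi * f0 / fs
a = np.array([1.0, -2.0 * r * np.cos(theta), r * r])
f_hat, b_hat = lpc_peaks(a, fs)
print(f_hat, b_hat)   # recovers about 1000 Hz and 100 Hz
```

With an eighth-order polynomial, as in the experiment described, this yields the three or four positive-frequency pairs mentioned; real roots (which carry no peak) are dropped by the `imag > 0` filter.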
[speaker003:] You're getting something that is [disfmarker] is uh, af strongly affected by the PLP model. And so it's more psycho acoustic. So it's a little [disfmarker] It's [disfmarker] It's [disfmarker] It's sort of [disfmarker] sort of a different thing. [speaker006:] Oh, I see. That's sort of the point. [speaker003:] But [disfmarker] Yeah. i Ordinarily, in a formant synthesizer, the bandwidths as well as the ban uh, formant centers are [disfmarker] [speaker006:] Yeah. [speaker003:] I mean, that's [disfmarker] Somewhere in the synthesizer that was put in, as [disfmarker] as what you [disfmarker] [speaker005:] Mm hmm. [speaker003:] But [disfmarker] but yeah, you view each complex pair as essentially a second order section, which has, uh, band center and band width, and um, um [disfmarker] But. Yeah. O K. So, uh, yeah, you're going back today and then back in a week I guess, and. [speaker001:] Yeah. [speaker003:] Yeah. Great! Well, welcome. [speaker001:] Thanks. [speaker006:] I guess we should do digits quickly. [speaker004:] Mmm. [speaker003:] Oh yeah, digits. I almost forgot that. [speaker002:] Digits. [speaker003:] I almost forgot our daily digits. [speaker006:] You wanna go ahead? [speaker003:] Sure. [speaker006:] OK.