This is hurriedly written and unedited; gotta take Besame Mucho to the vet in a few. Apologies for typos/inclarities in advance. And what terrible things did Ms. Wallis do to invite this kind of vitriol? Oh, just stuff like this: Just be herself: talented, happy, pretty, and proud of her achievement. She didn’t misbehave, she didn’t snark at anyone the way winner Jennifer Lawrence did (and Lawrence was awesome for doing so, but it’s interesting how white girls can get away with being confident more easily than black girls. Isn’t it?). Ms. Wallis committed the crime of being confident while black and female. Hey, it happens to all of us, often starting around puberty; I guess Hollywood just decided to start the shaming and systematic tearing-down early. ::sigh:: So here’s the thing: I’ve seen a lot of outrage over this from folks on my Twitter feed, which includes a lot of people in the genre community. It’s heartening to see that. And I’d like to see it because I was this girl, once. Oh, not famous, but just that cheerfully focused on a goal — in my case, becoming a published novelist. And I’ve had my share of people trying to tear me apart for daring to want such a thing. Like I said, it happens to a lot of us. But a little support goes a long way. ETA: Closed some open tags, linked to the article about the anon Oscar voter who said he wasn’t voting for her b/c of her name. Daughter of ETA: The Onion has apologized. Spoiled Niece of ETA: Apparently people are playing silly buggers, reporting me for spamming my own website. Apologies for the brief downtime, and hopefully it won’t happen again. Note: I hotlinked the “fistpump” gif because I can’t seem to get it to upload on my site. I got it from here, tho’.

89 thoughts on “Fantasy Fans: Where’s Your Outrage?”

Not funny, Onion. Not even close. Absolutely horrifying. What the fuck? In addition to contacting The Onion, there’s also a petition you can sign at Change.org.
I’ve been wanting to see it, and I had no idea it WAS a fantasy film. Agree, agree and agree. I didn’t watch the Oscars. I don’t care for the whole thing. I don’t pay attention to most things from Hollywood, to be honest. But it’s sad and strange that I didn’t even know there was a Fantasy film among the nominees. I’d have thought that the media-savvy geeks that I follow would have mentioned that. I’m watching my Twitter feed this morning (where I spotted this RT) and I am seeing the critiques. I’m seeing outrage at the Onion. I’m seeing critiques against MacFarlane and feminists speaking against the “boobs” song. What I hadn’t seen before your blog post was someone pointing out that half of Hollywood has decided to dislike this child for no reason. Did they do the same thing to Anna Paquin? Nope. You’re right to call for outrage. I saw people ACTUALLY defending the Onion. What the actual F***. Are you not decent human beings? What if it had been Abigail Breslin (Nommed for Little Miss Sunshine) or Hailee Steinfeld (Nommed for True Grit) that these things were said about? Someone would have been fired. But since it’s a black child, who gives an actual F***? Yeah, I went there. People should be OUTRAGED, because it could have been their MOTHER/SISTER/AUNT/COUSIN/GRANDMOTHER that this word could have described. I weep for humanity. That kind of behavior was gross on the part of everyone who participated. Thanks for putting this out there and bringing it to people’s attention. The more pushback the Onion gets about this, the better! At the very least, the Onion should apologize and discipline/fire the writer of the tweet. So not cool. I do not understand this backlash against the star of BoSW. I had an acquaintance comment the other day how they didn’t like her because she seems too full of herself. Seriously. A child. A child being thrown into the craziness of publicity and fame. And why shouldn’t she be full of herself? She did an awesome thing.
Add to this racial stereotyping and my mind explodes. I just do not understand this mindset. I guess it is on us to not accept this kind of behavior, eh? Great post!! Thanks. While I agree that The Onion wasn’t funny, I don’t think the vitriol over the girl is a rebuke of feminism. It’s the Oscars. People watch to be catty, snide and self-indulgent. Hosts make fun of the stand-out films/actors of the time. A cigar is sometimes just a cigar. I definitely agree that the tweet was way over the line. From what I’ve read of last night, it sounds like there was a large number of sexist and truly humorless remarks coming from MacFarlane and others. The Onion is fond of deriding things by taking them to the utmost extreme, and in that regard they succeeded in crafting a truly unfunny, sexist, and reprehensible comment. In terms of why people haven’t been *more* outraged, is it worth considering that the sort of people who are liable to call Hollywood on its poor behaviour are a highly overlapping set with the sort of people who don’t watch the Oscars? I’m not saying it’s 1:1, but I know that I didn’t watch the Oscars, and thus hadn’t heard about any of this (or anything else about the Oscars) thus far. Disclaimer: I didn’t watch the Oscars. Having read your link, I didn’t really make the connection between her color and the George Clooney joke (though that stat is consistent with what I knew of girls when I taught in Wichita, Kansas). The rest of it is atrocious. Onion unfollowed, blog post retweeted, etc. Time for geek rage indeed. Plus, a movie to watch! Yay!! Thanks for writing on this. I watched the Oscars and I did not hear most of what you are claiming, but anything close to a sexual comment is disgusting when involving a child PERIOD, but I have to say I thought the same thing. I thought even before the Oscars this little girl was rude. I don’t know what that has to do with her race….
for me any child actor that comes off the way she does (and I don’t think that has to do with confidence; I think it’s a kid that is allowed to be a brat) has always got on my nerves. Sorry! Since I don’t watch or follow any awards program, I missed all this last night and only found out this morning. Disgusting. Just…nauseatingly disgusting. WTF is wrong with a lively, intelligent, VERY talented 9-year-old girl? Nothing. What is wrong with the people who think being a lively, intelligent, VERY talented 9-year-old girl is cause for smackdowns, crass comments, and snide remarks about her name? LOTS. Voting against someone because you don’t like their name? Oh, right, so that’s why movie stars are told to change their names, so the Academy voters will pick them…never mind whether their acting ability exists or not. Shows such amazing intelligence among Academy voters (yes, that’s the sarcasm glyph, flaming over all.) If any feminist is defending the Onion calling her a c*unt, that person should lose her fem-card for at least a year. (That may not be what’s being said. I haven’t looked, and won’t. I’m supposed to face an elementary school class this afternoon with a smile on my face, not the look of someone ready to swing heavy clue-bats in all directions.) What’s not to like about this kid? I saw an interview with her a few weeks ago and she was delightful. All the things I’d want a 9-year-old daughter to be. As a note, I’m more outraged at MacFarlane than the Onion since in his ending song he referred to Ms. Wallis in a song line that would have had to have rhymed with Helen Hunt. It gives a little context to what The Onion tried and failed horribly to do. I would like some of this to come back to Seth MacFarlane since he made two jokes at this little girl’s expense, both of them completely and inappropriately sexual. As a note, The Onion has already apologized: I recommend, however, for sanity that you not read the comments.
I heard MacFarlane’s joke that it will take 16 years for her to be too old for Clooney and did not see it at all as attacking Ms. Wallis. He did not say that she’s Clooney-bait now, but that in 14 years she will be, and two years after that, she’ll be too old. MacFarlane was a pig, but that is one sin he did not commit. There was huge outrage from the political conservatives, geeks or not. Twitchy.com did a story on this, absolutely blasting the Onion for it. As a group, SoCons find it egregiously wrong to label little girls with vile names. Interestingly, of the tweets I read, none of them pointed out the color of her skin. She is a little girl, and the fury was all about *that* — nothing more or less. Really tasteless and mean, whether intended or not. It’s sad to watch people rise to the defense of this because they want to preserve some consequence-free space for saying nasty things. Not surprising though. I wrote to The Onion. Thank you for sharing the contact information. I’m sharing this post on my social networks, too. Having been the mother of a very bright nine-year-old…she seems exactly right. Ebullient is the word I would use. I think the adults around her have made the very wise choice of not over-schooling or quelling her. She’s expressing her real feelings her own way, and confidently. And anyone who looks at that and feels the desire to take her down a notch has a moldy conscience and a rotten peach pit where their heart should be. Seth was a jackass, but the Onion? And everyone who came down like the Wrath of an Old Testament GOD on……a NINE YEAR OLD CHILD??? …for being talented and confident and HAPPY and UNAPOLOGETIC????? Just, fuckyoudie. No, really. What the hell is WRONG WITH YOU people? Obnoxious? A brat?? On what planet was that, please? Because here on EARTH all Miss Wallis was, was talented, confident, happy and unapologetic.
All of which she was right to be, all of which any normal decent person would have expected her to be, would have wanted their own child to be in the same circumstances. Lastly, what EXACTLY do you worthless unacknowledged racist pieces of shit feel she needs to be apologetic ABOUT?? Just disgusted. OK, so, folks, apparently someone subscribed to the comments on this post, and then reported my site to WordPress for spam when they… got sent comments for this post. I have no idea if this person was just stupid or whether it was malicious — I’m leaning malicious, but who knows — but for now I’m disabling the subscribe to comments feature. Not that that will really stop someone who’s determined to be an asshole, but might as well not make it easy. While I’m pleased that the Onion’s CEO has issued an actual apology and not a “we’re sorry if we offended anyone” pseudo-apology, I’m still beyond pissed that anybody anywhere thought that tweet was funny, clever, or appropriate. Thanks for this, Nora. I have to admit this completely slipped by me. The movie (I’d no idea it existed until last night. Yes, I’ve been in a bit of a bubble these past many months.), Miss Wallis, and the kerfuffle. But now I’m going to make a point of seeing this and crow about its coolness. Whenever you mention someone’s race or ethnicity, the first letter of the word is capitalized. Black girl is appropriate, even if you made the mistake when saying White girl. Agreed, agreed, agreed, but I don’t think Beasts of the Southern Wild is a fantasy film. Magical realist, if anything. Not that they’re the same in any other way, but would you consider Fear and Loathing in Las Vegas to be a fantasy film because it features depictions of creatures that (debatably) don’t exist outside of the main character’s mind?
I was too busy watching “The Walking Dead” to care about anything that the Oscars were doing, but even I know that children should be completely “off the table” as a source of humor unless you are talking about your experiences with your own kids. Jerks. Just when it looked like Hollywood and the general entertainment blogosphere couldn’t get any lower… gah. It’s cruel to do this to a kid. It’s SO not funny. [spoilers ahead] As for the film, I have very mixed feelings about it, primarily because Hushpuppy is living in a community of *really* messed-up adults, and I can’t see anything good coming from that. She may be a powerful person in her daydreams, but in reality, she’s a little girl whose father is abusive at times. The rest of the adults in the Bathtub seem to be alcoholics. I can’t imagine that her life is going to improve. Wow… You’re offended because Hollywood acts like the spoilt white-male cabal that it is? Oh noes! Let us beat our breasts and smear ashes in our hair… Where have you been for the last 100 years? You think this is new? Or special? The girl has a great talent, but so did the people who won on the night. It’s such a subjective circle-jerk of an awards ceremony it means absolutely nothing in the real world. The Hobbit: An Unexpected Journey was also nominated for 3 awards and didn’t win any of them. Clearly that is something to be more outraged by (seriously – that film was incredible). Try turning your focus onto a real issue. Good point about Jennifer Lawrence. It was a long time ago, but I also don’t remember 11-year-old (and white) Anna Paquin being slagged off as “affected” and “insufferable” when she… well, was not only an 11-year-old on the red carpet and during the ceremony, but basically grinned and hiccuped her way through her acceptance speech. Incidentally, at my last book launch I thanked my family, especially my two kids (11 and 14).
And right where they were sitting amid the crowd, when I said their names, they pumped their arms just like Quvenzhané did. I’m glad the Onion wasn’t there… I have to ask the people thinking “it’s not so serious” or trying to explain away the “jokes”, just what exactly are you trying to uphold? That feminists should be allowed to deride little girls? That supposedly grown men should make sexualized jokes about little girls? That a media institution should be able to pick on a little girl because of her colour and name? In that case, what the hell is wrong with you? Tweeted and unfollowed Onion last night, wrote an editorial which is already up on the Al Día (Philadelphia’s largest Latino newspaper) web site and will appear in print Thurs. There were so many things wrong with the Oscars (host, outsourcing the orchestra to a different building?!??, 3/4 of the show being about Broadway musicals which actually have their own award show thank you, and the list goes on) I would not know where to begin. The host made just as many nasty, snarky, cutting comments about individuals as Ricky Gervais did at the Golden Globes. I think the audience missed all that because Seth MacFarlane kind of sort of looks like he might possibly be classy…but then he opens his mouth and you find out what is going on in his twisted mind. I think most of the remarks flew over the heads of the entire audience (the majority being Hollywood professionals of all types)…so what does that say about their comprehension skills in general? They were visibly uncomfortable and upset at Ricky Gervais at the Globes, to the point of angrily responding to him on-stage…Robert Downey Jr. for one. And, again, Robert Downey Jr. seemed like the only one willing to take a stand as he did not seem too happy with his group’s presentation speech and I don’t think that was part of the bit.
The viewing audience should make it abundantly clear to the Oscars that if they want to be taken as a serious award show worthy of the ad time they are selling, they should cut the snide/crass/classless “seen your boobs” level of what they apparently think is humor. Otherwise, they would do well to go back to presenting their awards in a private banquet that is not preempting the better shows that we all could have been watching (Once Upon a Time). Who are you or anyone else to say who’s right and who’s wrong? You think the Onion is wrong; good for you! What about those people out there who think you’re wrong? They’re just “assholes” and “malicious” for having an opinion? Are you saying they should be dismissed? For shame… you mentioned acting like 12-year-olds… well, that course of action seems very immature to me. Why can’t other people have an opinion and say what they want? And, incidentally, if that makes them horrible human beings in your mind, then I very much feel sorry for you or anyone else who thinks in such a way. Again, who is anyone to say what a good human is? Everyone has a different opinion. Who’s right? If you, for an instant, think you are the only one who is right, then, please, open your mind to the fact that there are others in the world. OK, I have to respond to the whole “but she is a brat” defense. I live in a bubble. I avoid TV like the plague. I cannot, from first-hand knowledge, say a thing about Ms. Wallis’s behavior. I seriously doubt she is a “brat,” because if she truly was an undisciplined child, the media would be all over it. I find the lack of actual examples telling. However, it doesn’t matter. Here’s the point I want to make. *IT DOESN’T MATTER IF SHE’S A “BRAT” OR NOT* Is this how you train a child to display good behavior? By calling them an absolutely vile name? By publicly criticizing and humiliating them? There is absolutely no defense for this treatment of a nine-year-old. None. @Rebecca Godina: Co-signed!
If we’re going to talk about who was a “brat” in that room, I’m looking straight at that guy who started the telecast with an enormously creepy (and racist) alleged joke about partner abuse, worked in a painfully infantile production number about BOOBS! and then made a nine-year-old the punchline of an enormously creepy (and racist) alleged joke about George Clooney’s supposed sexual proclivities. If anyone needs a long time out (with suspension of their TV privileges) to think about what they did yesterday, it’s Seth MacFarlane not Quvenzhané Wallis. Those “she is a brat” comments remind me of how the media likes to call Sasha Obama sassy. Attacks on a black girl who doesn’t yet know her place. Pingback: Team Quvenzhané | crazy dumbsaint of the mind Just wanted to say that I saw Quvenzhané in an interview in which she was…. a 9-year old girl…. Nothing more, nothing less… No brat, no disturbing miniature-actress type (you know the kind), nothing like that… just a nice 9-year old, intelligent, talented and giggling about the fact she had to sneeze…. So… don’t really see the point of all the crazy comments about her behaviour… The jokes… well… I don’t want to minimize it by saying… it’s Hollywood… but it is. What I do find shocking is that I (even in my media vacuum) didn’t hear about it… seems it’s not as “important” as certain other pieces of show-news… That saddens me, because this little girl (and I hope her parents are shielding her from media attention), does not deserve such ridiculous comments, and she deserves much more outrage…. Also, now I really want to see the film…. Was already on my list, but your recommendation makes it jump several places! Just a thought: I don’t think it’s even reasonable to call her cocky or arrogant. The stuff she does and her body language are no different from what is deemed perfectly acceptable and admirable from a white child.
She just happens to come off as smart, which is apparently a problem when you’re a Black girl (or even woman). Some of the “she’s a brat” comments sound perilously close to “she’s being uppity”. I’m sorry, but they do. What – exactly – has she done that has been bratty? She’s confident and EXCITED that she’s the youngest person EVER to be nominated for an Academy Award and she’s “bratty”? When? Seriously – give an example. And the “arm pumping” doesn’t count – because that came from the MOVIE! I have seen her in countless interviews and she comes off as highly intelligent, extremely self-aware and very happy, excited and well-adjusted. I think the “she’s bratty” people need to go and check their motives and think about where that is coming from. I’m fully Southern and I’ve only ever seen her be polite and charming in interviews. Bratty? Never. Re: Quvenzhané Wallis as a brat or not: my husband saw an extended interview with her on one of the talk/interview shows (I want to say Charlie Rose but I’m likely wrong) some weeks ago. He came away praising her to the skies as self-aware beyond her years and with a real sense of what she wants to do as an actor. Which wasn’t easy to tell, based on one performance, which we loved but which might have been a fluke. But it’s really hard for me to imagine, based on the report of the interview, that she’s suddenly turned bratty — even Drew Barrymore took longer than that to fall apart. Re: lack of outrage among fantasy fans: doesn’t this have something to do with the limited ways we think about fantasy in film? Look at all the people on this thread who say they hadn’t been aware that the movie was a fantasy, or hadn’t even been much aware of it at all. It seems to me that we’ve been trained in the past few years to define movie-fantasy as epic series stuff. And more recently still as YA series stuff — and Beasts of the Southern Wild isn’t a piece of an epic, and it’s not aimed at the YA market.
Agree wholeheartedly that Onion went over the line of “satire” into vicious, unfunny insult to a little girl. I emailed them last night. Just commenting because I noticed some commenters saying they hadn’t seen the movie, and were interested now. Yes! It’s a wonderful, original movie. I can’t imagine it coming out of the Hollywood machine (of course, it didn’t). The scene of the little girls charging on a mission dances in my memory. Yeah, this “she’s a brat” mess ain’t nothing more than “she’s an uppity, little nigger bitch who needs to be shown her place.” Oh and for you folks who don’t know what freedom of speech really consists of, from Tumblr: If you put yourself out there that’s a link to the whole article. It’s on Jezebel and while I hate that site, even a stopped clock is right twice a day: Pingback: Knews Feed » Maureen Ryan: What's The Times Got Wrong About The Onion Controversy I haven’t seen the movie. I didn’t watch the Oscars. I didn’t know how to pronounce this young lady’s name, either, though I had seen it written. Thanks to the radio, I knew she was the youngest Oscar nominee. Thing is, I didn’t know until just now by a weird trick of Twitter emailing me some random stuff and then me following it around (again, through someone’s Twitter feed on their blog) that she was black or that apparently the world had just gone insane because of that. Just as a disclaimer, I’m a middle-aged white guy, but seriously, what the hell is wrong with these people? Miss Wallis is apparently every inch a talented, well-mannered professional. 99% of the people I know don’t manage that description, and she does it as a 9 year old girl. Regardless of anything else, these old men with their old men mental problems need to get one particular thing straight in this situation. Miss Wallis is some other guy’s daughter. His baby girl.
So, regardless of your own particular inbred wrongheadedness, just think about that man, and exactly how he might react if he were to find out that you had publicly done, said or written what you just did. Probably the exact same thing as if you’d done that to my daughter or if anyone else on the planet had done that to your daughter. So, going forward, there are your guidelines to work through before you put your thoughts out in public as words or actions. If it wouldn’t please you to have me calling your daughter a cunt on the internet simply to spite her for her brilliance, don’t you dare do the same to anyone else. Hey “Monday”? You do realize that being called a racist, misogynist asshole is NOWHERE NEAR as bad as BEING a racist, misogynist asshole, right? One is hurt fee-fees, the other is systemic oppression. If you can’t figure out which is which, get the hell out of Dodge. Pingback: “I was just *joking*” » Ann Somerville's Blog I didn’t watch the Oscars, but when I saw my Twitter stream this morning, I was beyond incoherent with rage that anyone would attack this amazing, talented child. (And MacFarlane can go die in a fire, as far as I’m concerned.) Okay, wait, who is calling Wallis a brat? I’m sincerely curious- I’ve heard nothing but praise for her, but I also don’t watch television or venture into the seedier corners of the internet. That said, even in my relative isolation, I heard no end of criticism of Anne Hathaway, and Renée Zellweger, and any number of other women over the course of the awards. Note to all: I’m going to reply only sparingly to the comments here. I’m busy. But feel free to comment amongst yourselves, so long as you keep it civil. Also note: I’ve been letting the dissenters through on this if they’ve been civil, even if the mansplaining/whitesplaining and/or derailment is strong with those ones. But I’m eyeballing them, and if I see signs of derailment occurring, I’ll cut them. Out of the conversation. That’s all I meant. I swear.
ETA: For those wondering who called Ms. Wallis a brat, and whatever else got said, Racialicious has a good summary. Angela Beegle, Please note that link in the OP to a conservative site on which the President’s daughters got called everything but their names. Social conservatives are in no way immune to bigotry. That said, bigotry isn’t partisan, and I’ve seen some heinous shit coming out of the mouths of noted liberal or non-partisan pundits and sites (like the Onion) to prove it. Los, Thanks so much! That’s exactly what I needed, a total stranger “correcting” me on how to express my identity. Whatever would I do without people like you? Monday: Right, because fraudulently reporting someone for something they didn’t do with the intent of shutting them down entirely is EXACTLY the same as honestly calling someone out for something they actually did do with the intent of getting them to apologize and hopefully change their ways. “The Hobbit” not winning is worth more outrage than racism and sexism from grown@$$ people toward a 9-year-old. I have now heard everything. I am stunned and just… going to go lie down. Hmm, I think I’ll object to all the infantile behavior by paying to go see that movie! We almost did once when it first came out, but ran out of time. I hope we’ll see more of that actress and a future Oscar will motivate them to learn her name properly. Quvenzhané Wallis just got cast in the title role of a high profile remake of Annie directed by Will Gluck (Easy A). Sometimes success is the best revenge. Pingback: I weep for my species | Congratulations, it's a grain of rice! Pingback: Monday Night Links | Gerry Canavan All the online news about this year’s Oscars makes me want to buy fewer movie tickets. This, the plagiarizing host, the silenced VFX winners, the misogyny: more money for books and podcasts instead.
Pingback: Maureen Ryan: What’s The Times Got Wrong About The Onion Controversy | A-selah Regarding Nico’s comment (copied at the very end of this message) – you hit the nail on the head. If the general public truly understood satire (and many, unfortunately, do not), then this tweet wouldn’t have become nearly as scandalous as it unfortunately has. That said, I don’t condone The Onion’s tweet, but mostly because it was poorly conceived satire; too easy to be misconstrued by the public at large precisely because such an emotionally charged word (for a mostly American audience) was used in tandem with a 9-year-old child. I think the writer of the tweet should have been more mindful of the general public’s extreme dislike of the word in conjunction with its all-too-common inability to identify and understand irony and satire. To quote another comment that I think succinctly explains the tweet’s intent, “The girl wasn’t the target for satire, the joke was the absurdity of being mean to the cutest, sweetest little girl to ever be nominated for the Oscar. The target was our celeb-bashing culture.” Frankly, I think The Onion is being slammed far too harshly. Nico posted on February 25, 2013 • 10:08 am. Pingback: The Oscars Fall Out » The Hysterical Hamster Pingback: Maureen Ryan: What's The Times Got Wrong About The Onion Controversy - Freshwadda Brooks | Coming Soon! Pingback: [links] Link salad’s back and you’re gonna be sorry | jlake.com For Chris, Nico, and everyone else who’s like “but it’s satire! you just didn’t get it!”… no. “It’s satire” is not a get-out-of-fuckup-free card. A lot of people have been commenting on this (including Baratunde Thurston, a trained comedian and former Director of Digital at the Onion). And what they’re all saying is that the tweet was bad satire. Most people get that it was meant as satire. I also get that the Onion has, historically, been good at satire — I’ve been a fan of theirs for years because of that.
Thing is, good satire aims upward. It works best when it skewers people with power, and balances the scales a little between the powerful and the less-so. When satire comes from powerful people making fun of the people who get shat on all the time, it just looks mean-spirited and — if there’s a history of discrimination against that group — bigoted. This is why so many of Seth MacFarlane’s jokes went over like lead balloons. Top-down satire just reinforces the same old ugliness that’s not funny in everyday life… at least not to those of us who are on the receiving end of that shit on the regular. So yeah, I know it was intended to be funny. I disagree with Mr. Thurston on the idea that intention matters, though; as with any instance of bigotry, ultimately what we have to look at is the result. The result is that one of the most powerful media organizations in the country decided to call a little brown girl, who can’t even make people learn to pronounce her damn name right — hell, who can’t even dictate what time she goes to bed — a cunt. And this is in the context of a whole week of powerful people stereotyping her, maligning her confidence, and sexualizing her — just as has been done to millions of little girls like her, for literally centuries. So, yeah, it’s satire. It failed. ^^ Agreed. Satire or not, it wasn’t funny. It was hurtful and utterly distasteful. That said, I’m not going to boycott the Onion for the actions of one individual who, the apology has made abundantly clear, does not speak for the entire organization. Honestly, given the usual nature of the Onion, I think the fact that they apologized at all speaks volumes. (If they habitually pulled stuff like this, though, it’d be very different.) MacFarlane, on the other hand, is someone I’ve never had any respect for, and my only question there is why on EARTH the Academy thought it would be a good idea for him to host the Oscars. Sigh.
Grace, Timing matters; at the time I wrote this post, the Onion had not yet apologized or distanced itself from the tweet. There’s no need to boycott now that a desirable outcome has been achieved. I assume the Academy got what it wanted from having MacFarlane host. You don’t hire a crass bigot to represent your organization unless you’re OK with crass bigotry, after all. I’d like to see an apology from the Academy too, but to be honest I expected nothing better from them in the first place. @Chris: What nkjemisin said, with this added observation from Comedy/Satire 101 (and applicable to all writing, BTW): If you haven’t clearly communicated your “intent” to your audience, you’ve failed. A really good way to prevent people thinking you’re racist, misogynistic and generally assholy is… well, don’t be racist, misogynistic and generally assholy. Another one in the category wherein the only reason I wasn’t outraged was that, until reading through various posts TODAY (yes, Tuesday), I hadn’t a clue it had happened. I’m rather glad the Onion apologized, and to all appearances without the “Sorry you think you’re hurt” fauxpology vibe, but it does mean I will be watching their writing more closely, with a bit less trust. And it has sounded like a good movie to me for a long while, but we don’t watch a lot first run. I’ll have to hunt it up, though. I have a pretty good guess just from reading it how one pronounces Quvenzhané — the biggest question is probably where the stress is. (My instinct says first strongly and third weak). It requires, however, actually reading what’s there. I remember a coworker complaining that she had no idea how to pronounce Srivastava, too (the visiting printer technician’s surname), and I had to break it down syllable by syllable for her. (And she still complained). Sigh – MacFarlane should have been a mistake, but like you, I think it was too aware a choice.
(Thing is, I can see a way to make the George Clooney joke, and make it pointedly about him, and without sexualizing a nine-year-old. It just takes half a minute of thought. Too much for MacFarlane to spare, clearly.) I’m always a little torn about how to respond to organizations that do something terrible and then issue a sincere apology for it, like the Onion has done. On the one hand, even the best apology can’t take words back or fully reverse the damage, so it seems unfair to just go back to reading them as if nothing happened. On the other, continuing to boycott an organization that apologizes gives organizations that make mistakes in the future zero incentive to apologize and every incentive to double down. Maybe they deserve a temporary boycott, like a two-week time-out for bad behaviour or something. I don’t know. As an aside, Andy, your post made my day. It takes a lot of chutzpah to go on a noted fantasy author’s site and correct her identification of a fantasy work. Don’t stop there, though! The SFWA is under a similar misapprehension; they nominated Beasts of the Southern Wild for a Nebula Award. You should contact them, as they clearly lack anyone with your expertise in categorizing fiction. “(Thing is, I can see a way to make the George Clooney joke, and make it pointedly about him, and without sexualizing a nine-year-old. It just takes half a minute of thought. Too much for MacFarlane to spare, clearly.)” Up to a point. I’m twenty-seven years younger than my partner (and there was a similar age-gap between my parents), so I don’t find it intrinsically laughable – or somehow ‘creepy’ – that George Clooney is a consenting adult who has sex/relationships/is seen in public with other consenting adults who happen to be significantly younger than him. To me, that alleged joke would have been just as foul on that level if the butt had been 22-year-old Jennifer Lawrence.
*argh* I missed any information about the film (odds that it played near where we live in rural Texas VERY low), or anything beyond the general commentary on MacFarlane’s dipshittery–I never watch award shows, they make me all twitchy and grumpy! Am reposting with some other links–thank you! Chris Lites said, “It’s the Oscars. People watch to be catty, snide and self-indulgent. Hosts make fun of the stand-out films/actors of the time. A cigar is sometimes just a cigar.” ,,,,,,,,,,,,,,,,,,,,,,,,,, How presumptuous. I have never watched the Oscars to make fun of anyone or snark at the audience and films. I watch it to see what passes for our royalty. I love to see the gowns and diamonds. The glitterier, the better! I rarely even see someone I think made a terrible fashion faux pas, though there have been a few. I was tired after nearly 4 hours of extravaganza but thoroughly entertained. Loved Streisand’s tribute to Marvin Hamlisch. I am not pleased at the hatred shown to Quvenzhané. She is a very talented little girl and I hope she doesn’t get squashed by a cynical public who hasn’t yet grown out of their bigotry. @Lenora Rose: since you were wondering how Quvenzhané Wallis pronounces her name, I present Quvenzhané Explains It All. Come to think of it, that would be an awesome show. I don’t really follow movies, or the Oscars, at all, so I’d never heard of the kid until I read this. But… she looks adorable, and in those gifs, I don’t see insufferable, I see cute and proud. She’s not being a brat about it, she’s not jumping around or yelling, she just looks happy and proud. I can see what the Onion was trying for, but… clumsily executed at best, and that’s a risky as hell joke line, so that kind of language was a bad, bad choice. Seth MacFarlane, who I’d also never heard of, I’m pretty sure, just… ew. Pingback: Reflections on the 2013 Oscars | Cora Buhlert I didn’t watch the Oscars, but by all accounts and clips, it was a disaster.
My teenage daughter was not pleased. MacFarlane actually does a lot of solid, subversive, upward satire in his animated shows and projects, and got a black-character-led animated show on the air, but it’s mixed with crude frat-boy humor, and it’s not the urbane comedy, or even stand-up, that’s natural for hosting. The Academy wants to grab the younger ad-desired demo, which they’ve been losing, and clearly thought having MacFarlane would bring them lots of teenage boys and young men (who indeed don’t know a movie is fantasy unless it’s on soda cans, a lot of the time). And it did apparently work — ratings were up and younger viewers were up. But it would have worked just as well if MacFarlane had not panicked and gone for the lowest common denominator, given that this year had a lot of exciting, younger nominees in it. There’s long been a “joke” in Hollywood about waiting for female teen and child stars to turn 18, when they’re legal and it’s supposedly no longer creepy to lust after them (the Olsen twins, Hilary Duff, etc.), a way to excuse misogyny throughout Hollywood, and MacFarlane applied that logic to a nine-year-old black girl. Extremity doesn’t automatically mean successful humor. What it does is keep old bigotries in place. Which goes for the Onion too. When I heard what they did, I was just furious. It was a betrayal. I’m glad they apologized and are cleaning up after the mess, but that anyone over there thought that the context of the joke would work for satire in the first place indicates age-old and ugly social viewpoints from an organization that is supposed to be shattering them. She was there in her dress and her puppy purse, knowing she wasn’t going to win, just there to have the night of her life after a job well done. And now she has to get asked about this again and again by the media. To try and score a Twitter point for the Onion. But I think one bright aspect to all this, maybe, is that it’s a reflection of another sort of progress.
Women made large inroads on big-budget action movies in 2011 and 2012, in getting media attention for their acting roles in dramatic films as well as for their outfits, in outselling the men in music, in being the leads of TV shows, etc. And this is the backlash — we saw your boobs. Don’t get too big for your britches, little ladies, because we can still make you strip naked on film, we can still call you sexist slurs and claim it’s funny, and make sex jokes about little girls and how young actresses are just our playthings, because we still run everything. It had a distinct smack of fear to it, and MacFarlane went for that because it was easy, he was expected to be “edgy,” and he was nervous about the hosting role. He gave us what he thought we wanted when he has a lot more to offer, and that was a cultural threat reaction to women making big strides, even at the young age of 9, or 22, and at the older age of 86. And then Jemisin went and had to tell me about the comments at Jezebel and squash that bright side a bit. Sigh. If there’s one thing women, especially young women, need to be right now to get anywhere in a world desperately trying to return to 1850, it’s too big for their britches. Especially when it comes to Hollywood, which reluctantly toasted Kathryn Bigelow for one film and this year viciously went after her for attempting something even more ambitious and centered on a female character. Or, more realistically, in retaliation for her being the first female director to win. But it would be nice if there were a moratorium, so that an actress could get to be at least 12 or 14 before becoming the punchline about why women doing anything awesome is offensive and must be undercut. Pingback: Galactic Suburbia 76 | Randomly Yours, Alex She should have won the Oscar; her acting was phenomenal. The scene in which all the little abandoned girls are innertubing across the water to find their mothers? Unforgettable. Ragging on a 9-year-old and sexualizing her?
I’m throwing up in my mouth. This society’s problems aren’t all gun-related. They called a nine-year-old girl a *what*? Ew. Just ew. And no, that’s not satire. That’s just disgusting. The whole night, I kept wondering why MacFarlane thought Creepy Uncle would work well for an Oscar host. And what was up with the boobs song? Every time I watch the Oscars, I kind of regret it. Too much second-hand embarrassment, and the actresses always get trashed. Then any actresses of color who are fortunate(?) enough to get nominated get trashed even worse. Remember the year Halle Berry won? Yeesh. I didn’t know people were actually saying all that crap about this little girl until I read this, but I wish I were shocked by it. Class definitely gets left behind on the red carpet on Oscar Night. Pingback: Beasts of the Southern Wild a review by Nalini Haynes Pingback: The Book Smugglers | Smugglers’ Stash and News I didn’t even hear about this, and since I don’t follow the Academy Awards… (Side note: A fantasy movie got made? I thought that only happened in fantasy novels!) Anyways: fucking ridiculous. This child should be lauded, not shot down on account of ignorance, and certainly NEVER sexualized or referred to with demeaning language. Who the hell gets off calling this little girl *anything* other than extraordinary?! Which is why I don’t follow the Oscars. And which is also why I no longer follow The Onion. So my outrage starts now. PS My own name consists of two (2) syllables and most people still say it wrong–most just ignoring my corrections, but some going so far as to correct my pronunciation of my own name. So fuck ’em. I mispronounce their names on purpose, and that usually does the trick. I am just now reading this as I just finished The Hundred Thousand Kingdoms. I watched the Oscars and thought Ms. Wallis was one of the best parts. Yes, I am White, and I do think the negative comments are coming from a racist perspective.
I didn’t read anything after the show, so I had NO IDEA people responded this way. She was happy to be there and hoping she would win. What is wrong with that? I am so sad that people tried to steal her joy and BULLY her. With all the attention being given to bullying, how can people knowingly cause such pain to a 9-year-old? It’s horrific and cruel. Pingback: Dear Science Fiction and Fantasy Writers: Thank You for Making Me a Better Feminist | Christina Vasilevski
\begin{document} \begin{abstract} Koornwinder polynomials are a $6$-parameter $BC_{n}$-symmetric family of Laurent polynomials indexed by partitions, from which Macdonald polynomials can be recovered in suitable limits of the parameters. As in the Macdonald polynomial case, standard constructions via difference operators do not allow one to directly control these polynomials at $q=0$. In the first part of this paper, we provide an explicit construction for these polynomials in this limit, using the defining properties of Koornwinder polynomials. Our formula is a first step in developing the analogy between Hall-Littlewood polynomials and Koornwinder polynomials at $q=0$. In the second part of the paper, we provide an analogous construction for the nonsymmetric Koornwinder polynomials in the same limiting case. The method employed in this paper is a $BC$-type adaptation of techniques used in an earlier work of the author, which gave a combinatorial method for proving vanishing results of Rains and Vazirani at the Hall-Littlewood level. As a consequence of this work, we obtain direct arguments for the constant term evaluations and norms in both the symmetric and nonsymmetric cases. \end{abstract} \maketitle \section{Introduction} In \cite{MacO}, Macdonald introduced a very important family of multivariate $q$-orthogonal polynomials associated to a root system. These polynomials, and their connections to representation theory, combinatorics and algebra, have been well-studied and are an active area of research. For the type $A$ root system, Macdonald polynomials $P_{\lambda}(x_{1}, \dots, x_{n};q,t)$ contain many well-known families of symmetric functions as special cases: for example, the Schur, Hall-Littlewood, and Jack polynomials occur at $q=t$, $q=0$, and $t = q^{\alpha}, q \rightarrow 1$, respectively. The existence of the top level Macdonald polynomials was proved by exhibiting a suitable operator which has these polynomials as its eigenfunctions. 
A particularly important degeneration of the Macdonald polynomials is obtained in the $q=0$ limit where one obtains zonal spherical functions on semisimple $p$-adic groups. In fact, Macdonald provides an explicit formula for the spherical functions of the Chevalley group $G(\mathbb{Q}_{p})$ in terms of the root data for the group $G$ \cite{MacP}. In particular, this generalizes the formula for the Hall-Littlewood polynomials \cite[Ch. III]{Mac}, which arise as zonal spherical functions for $Gl_{n}(\mathbb{Q}_{p})$. In \cite{K}, Koornwinder introduced a remarkable class of multivariate $q$-orthogonal polynomials associated to the non-reduced root system $BC_{n}$. These polynomials are a $6$-parameter family of Laurent polynomials which are invariant under permuting variables and taking inverses of variables. Moreover, these polynomials reduce to the Askey-Wilson polynomials at $n=1$, and one recovers the Macdonald polynomials by taking suitable limits of the parameters \cite{vD}. As in the Macdonald polynomial case, the existence of these polynomials was proved by using $q$-difference operators; these behave badly as $q \rightarrow 0$. The explicit construction for the Macdonald polynomials at $q=0$ is due to Littlewood; in fact, Hall also provided a construction of these polynomials indirectly via the Hall algebra. Given the relationship between Macdonald and Koornwinder polynomials, a natural question one can ask is whether there exists an explicit construction for the latter polynomials at $q=0$, thereby providing an analog of the construction of Hall-Littlewood polynomials for this family. In this work, we use the defining properties of Koornwinder polynomials to provide a closed formula at the $q=0$ limit. We then extend this technique to study the nonsymmetric Koornwinder polynomials in the same limit. 
We provide an explicit formula when these polynomials are indexed by \textit{partitions}; we then use elements of the affine Hecke algebra of type $BC$ to recursively obtain \textit{all} nonsymmetric Koornwinder polynomials in this limit. A nice feature of this work is the self-contained proofs of the constant term evaluations and norm evaluations in both the symmetric and nonsymmetric cases (Theorems \ref{symmeval}, \ref{nonsymmeval} and Theorems \ref{symmorthognorm}, \ref{nonsymmnorm}). We mention that, in the symmetric case, the constant term evaluation at the $q$-level is a famous result of Gustafson \cite{G}; this in turn is a multivariate generalization of a result of Askey and Wilson \cite{AW} and a $q$-generalization of Selberg's beta integral \cite{S}. We note that Gustafson's approach requires $q \neq 0$, so one cannot directly apply that argument in this limiting case. The motivation for this problem arose when the author was investigating direct proofs at the Hall-Littlewood level for the vanishing results of Rains and Vazirani \cite{RV} (note that many of those results were first conjectured in \cite{R}). These identities are $(q,t)$-generalizations of restriction rules for Schur functions. More precisely, one integrates a suitably specialized Macdonald polynomial indexed by $\lambda$ against a particular density; the result vanishes unless $\lambda$ satisfies an explicit condition and at $q=t$ one recovers an identity about Schur functions. In \cite{VV}, we provided a combinatorial technique for proving these results at the Hall-Littlewood level; this also led to various generalizations. This method makes use of the structure of the Hall-Littlewood polynomials as a sum over the Weyl group. Many of the results in \cite{RV} involve Koornwinder polynomials so, in order to extend the method developed in that paper, it is necessary to first find a closed formula at the $q=0$ level. 
In fact, the technique used in this paper to prove that the specified polynomials are orthogonal with respect to the Koornwinder density is a natural adaptation of the ideas in \cite{VV} to the type $BC_{n}$ case. This paper has two main components: the first part deals with the symmetric theory at $q=0$, while the second deals with the nonsymmetric theory in the same limit. The first section of each part sets up the relevant notation, reviews some background material, and defines the polynomials in question. The second section of each part consists of the main theorems and proofs; in particular, we prove that these are indeed the symmetric and nonsymmetric Koornwinder polynomials at $q=0$, respectively. As mentioned above, \cite{RV} provided vanishing results involving suitably specialized (symmetric) Koornwinder polynomials. Thus, the third section of the first part contains an application of our formula to this work: we use the construction of the Koornwinder polynomials at $q=0$ to provide a new combinatorial proof of a result from \cite{RV}. \medskip \noindent\textbf{Acknowledgements.} The author would like to thank E. Rains for suggesting the topics in this paper and for numerous helpful conversations throughout this work. \section{Symmetric Hall-Littlewood polynomials of type BC} \subsection{Background and Notation} \label{notation} We will first review some relevant notation before introducing the polynomials that are the subject of this paper; a good reference is \cite[Ch. 1]{Mac}. Recall that a partition $\lambda$ is a non-increasing string of non-negative integers $(\lambda_{1}, \lambda_{2}, \dots, \lambda_{n})$, in which some of the $\lambda_{i}$ may be zero. We call the $\lambda_{i}$ the ``parts'' of $\lambda$. We write $l(\lambda) = \max\{k \geq 0 \mid \lambda_{k} \neq 0 \}$ (the ``length'') and $|\lambda| = \sum_{i=1}^{n} \lambda_{i}$ (the ``weight'').
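For concreteness, these partition statistics, together with the conjugate partition and the parity facts used shortly, can be sketched in a few lines of Python (the function names are ours, not from the text):

```python
from collections import Counter

def length(lam):
    # l(lambda): number of nonzero parts
    return sum(1 for x in lam if x != 0)

def weight(lam):
    # |lambda|: sum of the parts
    return sum(lam)

def conjugate(lam):
    # lambda'_j = #{i : lambda_i >= j}
    n = max(lam, default=0)
    return tuple(sum(1 for x in lam if x >= j) for j in range(1, n + 1))

lam = (3, 3, 1, 1)
assert length(lam) == 4 and weight(lam) == 8
assert conjugate(lam) == (4, 2, 2)

# "all parts occur with even multiplicity" iff the conjugate is an even partition
mults_even = all(m % 2 == 0 for m in Counter(x for x in lam if x).values())
conj_even = all(p % 2 == 0 for p in conjugate(lam))
assert mults_even and conj_even
```

Here $\lambda = (3,3,1,1)$ has every part with even multiplicity, and its conjugate $(4,2,2)$ is indeed an even partition.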
A string $\mu = (\mu_{1}, \dots, \mu_{n})$ of integers (not necessarily non-increasing or positive) is called a composition of $|\mu| = \sum_{i=1}^{n} \mu_{i}$. We will say $\lambda$ is an ``even partition" if all parts of $\lambda$ are even; in this case we use the notation $\lambda = 2\mu$ where $\mu_{i} = \lambda_{i}/2$ for all $i$. We will also say $\lambda$ has ``all parts occurring with even multiplicity" if the conjugate partition $\lambda'$ is an even partition. A composition $\lambda$ is an element of $\mathbb{Z}^{n}$ for some $n \geq 1$; we will denote this set by $\Lambda$. We briefly recall some orderings on compositions. \begin{definition} Let $\leq$ denote the dominance partial ordering on compositions, i.e., $\mu \leq \lambda$ if and only if $$\sum_{1 \leq i \leq k} \mu_{i} \leq \sum_{1 \leq i \leq k} \lambda_{i}$$ for all $k \geq 1$ (and $\mu< \lambda$ if $\mu \leq \lambda$ and $\mu \neq \lambda$). Let $\stackrel{\text{lex}}{\leq}$ denote the reverse lexicographic ordering: $\mu \stackrel{\text{lex}}{\leq} \lambda$ if and only if $\lambda = \mu$ or the first non-vanishing difference $\lambda_{i} - \mu_{i}$ is positive. \end{definition} Note that $\stackrel{\text{lex}}{\leq}$ is a total ordering. \begin{lemma} \label{order} Let $\mu, \lambda \in \mathbb{Z}^{n}$ such that $\mu \leq \lambda$. Then $\mu \stackrel{\text{lex}}{\leq} \lambda$. \end{lemma} \begin{proof} The claim is clearly true if $\mu = \lambda$, so suppose $\mu < \lambda$. If $\mu_{1} < \lambda_{1}$, we're done; otherwise $\mu_{1} = \lambda_{1}$ and $\mu_{2} \leq \lambda_{2}$ since $\mu_{1} + \mu_{2} \leq \lambda_{1} + \lambda_{2}$. Iterating this argument produces an integer $i$ in $\{1, \dots, n\}$ such that $\mu_{1} = \lambda_{1}, \dots, \mu_{i-1} = \lambda_{i-1}$ and $\mu_{i} < \lambda_{i}$. Thus, $\mu \stackrel{\text{lex}}{<} \lambda$ as desired. \end{proof} \begin{definition} \label{nonsymmorder} Let $\mu$ and $\lambda$ be two elements of $\mathbb{Z}^{n}$. 
We will write $\mu^{+}$ for the unique dominant weight in the $BC_{n}$ orbit of $\mu$ (that is, the partition obtained by rearranging the absolute values of the parts of $\mu$ in non-increasing order). Then we write $\mu \prec \lambda$ if and only if either 1) $\mu^{+} < \lambda^{+}$ or if 2) $\mu^{+} = \lambda^{+}$ and $\mu \leq \lambda$, and in either case $\mu \neq \lambda$. \end{definition} \begin{remarks} This part will mostly deal with partitions and the dominance and reverse lexicographic orderings. Compositions, and the extended dominance ordering appearing in Definition \ref{nonsymmorder}, will become relevant in the following part that deals with the nonsymmetric theory. \end{remarks} Let $m_{i}(\lambda)$ be the number of $\lambda_{j}$ equal to $i$ for each $i \geq 0$. Then we define: \begin{equation}\label{v} v_{\lambda}(t;a,b;t_{0}, \dots, t_{3}) = \Bigg( \prod_{i \geq 0} \prod_{j=1}^{m_{i}(\lambda)} \frac{1- t^{j}}{1-t} \Bigg) \prod_{i=1}^{m_{1}(\lambda)} (1- t_{0}t_{1}t_{2}t_{3} t^{i-1+2m_{0}(\lambda)}) \prod_{i=1}^{m_{0}(\lambda)} (1-ab t^{i-1}), \end{equation} and \begin{equation} \label{vplus} v_{\lambda+}(t;t_{0}, \dots, t_{3}) = \Bigg( \prod_{i \geq 1} \prod_{j=1}^{m_{i}(\lambda)} \frac{1- t^{j}}{1-t} \Bigg) \prod_{i=1}^{m_{1}(\lambda)} (1- t_{0}t_{1}t_{2}t_{3} t^{i-1+2m_{0}(\lambda)}). \end{equation} Note the comparison with the factors making the Hall-Littlewood polynomials monic in \cite[Ch. III]{Mac}. Also note that \begin{equation*} v_{\lambda}(t;a,b;t_{0}, \dots, t_{3}) = v_{\lambda+}(t;t_{0}, \dots, t_{3}) v_{0^{m_{0}(\lambda)}}(t;a,b;t_{0}, \dots, t_{3}). \end{equation*} Throughout this paper, we will use \begin{align*} T =T_{n} &= \{(z_{1}, \dots, z_{n}) : |z_{1}| = \dots = |z_{n}| = 1 \}, \\ dT &= \prod_{1 \leq j \leq n} \frac{dz_{j}}{2\pi \sqrt{-1}z_{j}} \end{align*} to denote the $n$-torus and Haar measure, respectively. 
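The factorization $v_{\lambda} = v_{\lambda+} \cdot v_{0^{m_{0}(\lambda)}}$ noted above is easy to check numerically; the following Python sketch evaluates (\ref{v}) and (\ref{vplus}) over exact rationals (the function names are ours, not from the text):

```python
from fractions import Fraction
from collections import Counter

def v_lambda(lam, t, a, b, ts):
    # v_lambda(t; a, b; t0, ..., t3), transcribed from the display above
    m = Counter(lam)
    val = Fraction(1)
    for i in m:                       # product over distinct parts i >= 0
        for j in range(1, m[i] + 1):
            val *= (1 - t**j) / (1 - t)
    t0123 = ts[0] * ts[1] * ts[2] * ts[3]
    for i in range(1, m[1] + 1):      # parts equal to 1
        val *= 1 - t0123 * t**(i - 1 + 2 * m[0])
    for i in range(1, m[0] + 1):      # parts equal to 0
        val *= 1 - a * b * t**(i - 1)
    return val

def v_lambda_plus(lam, t, ts):
    # v_{lambda+}(t; t0, ..., t3): drop the i = 0 and (1 - ab t^{i-1}) factors
    m = Counter(lam)
    val = Fraction(1)
    for i in m:
        if i == 0:
            continue
        for j in range(1, m[i] + 1):
            val *= (1 - t**j) / (1 - t)
    t0123 = ts[0] * ts[1] * ts[2] * ts[3]
    for i in range(1, m[1] + 1):
        val *= 1 - t0123 * t**(i - 1 + 2 * m[0])
    return val

# check v_lambda = v_{lambda+} * v_{0^{m_0(lambda)}} at rational parameter values
lam = (3, 2, 2, 1, 1, 0, 0)           # m_0 = 2, m_1 = 2
t, a, b = Fraction(1, 2), Fraction(1, 3), Fraction(1, 5)
ts = (Fraction(1, 7), Fraction(2, 7), Fraction(3, 7), Fraction(-1, 4))
assert v_lambda(lam, t, a, b, ts) == v_lambda_plus(lam, t, ts) * v_lambda((0, 0), t, a, b, ts)
```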
Since many of the objects we will be dealing with are functions of $n$ variables, we will often use the superscript ${(n)}$ with $z$ in the argument, instead of $(z_{1}, \dots, z_{n})$. We define the $q$-symbol \begin{equation*} (a;q) = \prod_{k \geq 0} (1-aq^{k}) \end{equation*} and let $(a_{1},a_{2}, \dots, a_{l};q)$ denote $(a_{1};q)(a_{2};q) \cdots (a_{l};q)$. We recall the symmetric Koornwinder density \cite{K}: \begin{equation*} \tilde \Delta_{K}^{(n)}(z;q,t;t_{0}, \dots, t_{3}) = \frac{(q;q)^{n}}{2^{n}n!} \prod_{1 \leq i \leq n} \frac{(z_{i}^{\pm 2};q)}{(t_{0}z_{i}^{\pm 1}, t_{1}z_{i}^{\pm 1}, t_{2}z_{i}^{\pm 1}, t_{3}z_{i}^{\pm 1};q)} \prod_{1 \leq i<j \leq n} \frac{(z_{i}^{\pm 1}z_{j}^{\pm 1};q)}{(tz_{i}^{\pm 1}z_{j}^{\pm 1};q)}. \end{equation*} Since we are concerned with $q=0$ degenerations of Koornwinder polynomials, we will be interested in the symmetric Koornwinder density in the same limiting case: \begin{multline} \label{koorndensity} \tilde \Delta_{K}^{(n)}(z;0,t;t_{0},t_{1},t_{2},t_{3}) \\ = \frac{1}{2^{n}n!} \prod_{1 \leq i \leq n} \frac{(1-z_{i}^{\pm 2})}{(1-t_{0}z_{i}^{\pm 1})(1-t_{1}z_{i}^{\pm 1})(1-t_{2}z_{i}^{\pm 1})(1-t_{3}z_{i}^{\pm 1})} \prod_{1 \leq i<j \leq n} \frac{(1-z_{i}^{\pm 1}z_{j}^{\pm 1})}{(1-tz_{i}^{\pm 1}z_{j}^{\pm 1})}, \end{multline} where we write $(1-z_{i}^{\pm 2})$ for the product $(1-z_{i}^{2})(1-z_{i}^{-2})$ and $(1-z_{i}^{\pm 1}z_{j}^{\pm 1})$ for $(1-z_{i}z_{j})(1-z_{i}^{-1}z_{j}^{-1})(1-z_{i}^{-1}z_{j})(1-z_{i}z_{j}^{-1})$, etc. We will write $\tilde \Delta_{K}^{(n)}(z;t;t_{0}, \dots, t_{3})$ to denote this density. Using this density, we let \begin{equation} \label{norm} N_{\lambda}(t;t_{0}, \dots, t_{3}) = \frac{1}{v_{\lambda+}(t)} \int_{T} \tilde \Delta_{K}^{(m_{0}(\lambda))}(z;t;t_{0}, \dots, t_{3}) dT. \end{equation} We note that, at the $q$-level, the explicit evaluation of the integral above is a famous result of Gustafson \cite{G}. However, the arguments do not directly apply at $q=0$. 
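For $n=1$, the $q=0$ density above is the corresponding $q=0$ Askey-Wilson weight, and the constant term can be sanity-checked numerically against the closed form of Theorem \ref{symmeval} below, which at $n=1$ reads $(1-abcd)/\prod_{s<s'}(1-t_{s}t_{s'})$. A minimal Python sketch (names ours; trapezoidal rule on the unit circle):

```python
import cmath, math

def koornwinder_density_n1(z, a, b, c, d):
    # the q = 0 symmetric Koornwinder density above, specialized to n = 1
    num = (1 - z**2) * (1 - z**-2)
    den = 1
    for s in (a, b, c, d):
        den *= (1 - s * z) * (1 - s / z)
    return 0.5 * num / den            # 1/(2^n n!) = 1/2 for n = 1

def constant_term_n1(a, b, c, d, N=4096):
    # average over the unit circle; for a smooth periodic integrand the
    # trapezoidal rule converges geometrically, so N = 4096 is far more than enough
    zs = (cmath.exp(2j * math.pi * (k + 0.5) / N) for k in range(N))
    return sum(koornwinder_density_n1(z, a, b, c, d) for z in zs) / N

a, b, c, d = 0.31, -0.42, 0.27, 0.15
closed_form = (1 - a*b*c*d) / ((1-a*b)*(1-a*c)*(1-a*d)*(1-b*c)*(1-b*d)*(1-c*d))
val = constant_term_n1(a, b, c, d)
assert abs(val.real - closed_form) < 1e-10 and abs(val.imag) < 1e-10
```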
In keeping with the theme of this work, we will provide a self-contained proof of the evaluation of this integral in Theorem \ref{symmeval}. This will provide an explicit formula for the quantity $N_{\lambda}(t;t_{0}, \dots, t_{3})$. For simplicity of notation, we will write $v_{\lambda}, v_{\lambda+}, N_{\lambda}, \tilde \Delta_{K}^{(n)}$, etc., when the parameters are clear from the context. Finally, we explain some notation involving elements of the hyperoctahedral group, $B_{n}$. An element in $B_{n}$ is determined by specifying a permutation $\rho \in S_{n}$ as well as a sign choice $\epsilon_{\rho}(i)$, for each $1 \leq i \leq n$. Thus, $\rho$ acts on the subscripts of the variables, for example by \begin{equation*} \rho(z_{1} \cdots z_{n}) = z_{\rho(1)}^{\epsilon_{\rho}(1)} \cdots z_{\rho(n)}^{\epsilon_{\rho}(n)}. \end{equation*} If $\rho(i) = 1$, we will say that $z_{1}$ occurs in position $i$ of $\rho$. We also write \begin{equation*} ``z_{i} \prec_{\rho} z_{j}" \end{equation*} if $i = \rho(i')$ and $j = \rho(j')$ for some $i'<j'$, i.e., $z_{i}$ appears to the left of $z_{j}$ in the permutation $z_{\rho(1)}^{\epsilon_{\rho}(1)} \cdots z_{\rho(n)}^{\epsilon_{\rho}(n)}$. We also define $\epsilon_{\rho}(z_{i})$ to be $\epsilon_{\rho}(i')$ if $i = \rho(i')$, i.e., it is the exponent $(\pm 1)$ on $z_{i}$ in $z_{\rho(1)}^{\epsilon_{\rho}(1)} \cdots z_{\rho(n)}^{\epsilon_{\rho}(n)}$. We finally define the main objects of this section. \begin{definition} Let $\lambda$ be a partition with $l(\lambda) \leq n$ and $|a|, |b|, |t|, |t_{0}|, \dots, |t_{3}| < 1$. 
Then $K_{\lambda}(z_{1}, \dots, z_{n};t;a,b;$ $t_{0}, \dots, t_{3})$, indexed by $\lambda$, is defined by \begin{align} \label{K-pol} \frac{1}{v_{\lambda}(t;a,b;t_{0}, \dots, t_{3})} \sum_{w \in B_{n}} w \Bigg( \prod_{1 \leq i \leq n} u_{\lambda}(z_{i}) \prod_{1 \leq i<j \leq n} \frac{1-tz_{i}^{-1}z_{j}}{1-z_{i}^{-1}z_{j}} \frac{1-tz_{i}^{-1}z_{j}^{-1}}{1-z_{i}^{-1}z_{j}^{-1}} \Bigg), \end{align} where \begin{align*} u_{\lambda}(z_{i}) &= \begin{cases} \frac{(1-az_{i}^{-1})(1-bz_{i}^{-1})}{1-z_{i}^{-2}} & \text{if $\lambda_{i} = 0$,} \\ z_{i}^{\lambda_{i}} \frac{(1-t_{0}z_{i}^{-1})(1-t_{1}z_{i}^{-1})(1-t_{2}z_{i}^{-1})(1-t_{3}z_{i}^{-1})}{1-z_{i}^{-2}} &\text{if $\lambda_{i} > 0$.} \end{cases} \end{align*} \end{definition} \begin{remarks} We note that the $K_{\lambda}$ are actually independent of $a,b$ - this is a scaling factor accounted for in $v_{\lambda}$. In particular, the arguments below that prove that this is indeed the Koornwinder polynomial at $q=0$ work for any choice of $a,b$. However, we leave in arbitrary $a,b$ (as opposed to the choice $\pm 1$) because the resulting form is useful in applications; an example that illustrates this appears in the last section. \end{remarks} We will also let \begin{equation} \label{R-pol} R_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) = v_{\lambda}(t;a,b;t_{0}, \dots, t_{3}) K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}), \end{equation} and for $w \in B_{n}$, we let \begin{equation} \label{Kterm} R_{\lambda, w}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) = w\Bigg( \prod_{1 \leq i \leq n} u_{\lambda}(z_{i}) \prod_{1 \leq i<j \leq n} \frac{1-tz_{i}^{-1}z_{j}}{1-z_{i}^{-1}z_{j}}\frac{1-tz_{i}^{-1}z_{j}^{-1}}{1-z_{i}^{-1}z_{j}^{-1}} \Bigg) \end{equation} be the associated term in the summand. As usual, we will write $K_{\lambda}^{(n)}, R_{\lambda}^{(n)}$ and $R_{\lambda, w}^{(n)}$ when the parameters are clear from context. 
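For $n=1$ and $\lambda = (k)$ with $k \geq 2$, we have $v_{\lambda}(t) = 1$, the product over $i<j$ in (\ref{K-pol}) is empty, and the $B_{1}$ sum is just $u_{\lambda}(z) + u_{\lambda}(z^{-1})$. One can then check numerically, by extracting Fourier coefficients on the unit circle, that this is a monic $BC_{1}$-symmetric Laurent polynomial of degree $k$ (cf. Theorem \ref{symmtriang} below). A short Python sketch (names ours):

```python
import cmath, math

def u_part(z, k, ts):
    # u_lambda(z_i) from the definition above, for a part lambda_i = k > 0
    num = 1.0
    for s in ts:
        num *= 1 - s / z
    return z**k * num / (1 - z**-2)

def K_n1(z, k, ts):
    # n = 1, lambda = (k), k >= 2: v_lambda(t) = 1 and the B_1 sum is
    # u(z) + u(1/z); there are no i < j cross factors
    return u_part(z, k, ts) + u_part(1 / z, k, ts)

def coeff(f, m, N=256):
    # Laurent coefficient of z^m via a discrete Fourier transform on an
    # offset grid (which avoids the removable singularities at z = +/-1);
    # exact up to roundoff once N exceeds twice the Laurent degree
    zs = [cmath.exp(2j * math.pi * (j + 0.5) / N) for j in range(N)]
    return sum(f(z) * z**(-m) for z in zs) / N

ts = (0.3, -0.2, 0.41, 0.17)
k = 5
f = lambda z: K_n1(z, k, ts)
assert abs(coeff(f, k) - 1) < 1e-9        # monic: the z^k coefficient is 1
assert abs(coeff(f, k + 1)) < 1e-9        # no terms of degree above k
assert abs(coeff(f, -k) - 1) < 1e-9       # invariance under z -> 1/z
```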
\begin{remarks} When $(t_{0}, t_{1}, t_{2}, t_{3}) = (a,b,0,0)$, we obtain \begin{multline*} K_{\lambda}(z_{1}, \dots, z_{n};t;a,b;a,b,0,0) \\= \frac{1}{v_{\lambda}(t)} \sum_{w \in B_{n}} w \Bigg( \prod_{1 \leq i \leq n} z_{i}^{\lambda_{i}} \frac{(1-az_{i}^{-1})(1-bz_{i}^{-1})}{1-z_{i}^{-2}} \prod_{1 \leq i<j \leq n} \frac{1-tz_{i}^{-1}z_{j}}{1-z_{i}^{-1}z_{j}} \frac{1-tz_{i}^{-1}z_{j}^{-1}}{1-z_{i}^{-1}z_{j}^{-1}} \Bigg). \end{multline*} In particular, this is Macdonald's $2$-parameter family of $(BC_{n}, B_{n}) = (BC_{n}, C_{n})$ polynomials at $q=0$. We will write $K_{\lambda}^{(n)}(z;t;a,b,0,0)$ in this case. \end{remarks} \subsection{Main Results} In this section, we will show that the $K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3})$ are indeed the Koornwinder polynomials at $q=0$. \begin{theorem} \label{BCpol} The function $K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3})$ is a $BC_{n}$-symmetric Laurent polynomial (i.e., invariant under permuting variables $z_{1}, \dots, z_{n}$ and inverting variables $z_{i} \rightarrow z_{i}^{-1}$). \end{theorem} \begin{proof} Recall the fully $BC_{n}$-antisymmetric Laurent polynomial: \begin{equation} \label{BCanti} \Delta_{BC} = \prod_{1 \leq i \leq n} (z_{i} - z_{i}^{-1}) \prod_{1 \leq i<j \leq n} (z_{i} + z_{i}^{-1} - z_{j} - z_{j}^{-1}) = \prod_{1 \leq i \leq n} \frac{z_{i}^{2}-1}{z_{i}} \prod_{1 \leq i<j \leq n} \frac{1-z_{i}z_{j}}{z_{i}z_{j}}(z_{j}-z_{i}).
\end{equation} Then we have \begin{equation} \label{Kdelta} K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \cdot \Delta_{BC} = \frac{1}{v_{\lambda}(t)} \sum_{w \in B_{n}} \epsilon(w) w \Big( \prod_{1 \leq i \leq n} u_{\lambda}'(z_{i}) \prod_{1 \leq i<j \leq n} (1-tz_{i}^{-1}z_{j}^{-1})(z_{i}-tz_{j}) \Big), \end{equation} where \begin{align*} u_{\lambda}'(z_{i}) &= \begin{cases} z_{i}(1-az_{i}^{-1})(1-bz_{i}^{-1}) & \text{if $\lambda_{i} = 0$,} \\ z_{i}^{\lambda_{i}+1} (1-t_{0}z_{i}^{-1}) \cdots (1-t_{3}z_{i}^{-1}) &\text{if $\lambda_{i} > 0$.} \end{cases} \end{align*} Notice that $K_{\lambda}^{(n)} \cdot \Delta_{BC}$ is a $BC_{n}$-antisymmetric Laurent polynomial, so in particular $\Delta_{BC}$ divides $K_{\lambda}^{(n)} \cdot \Delta_{BC}$ as polynomials. Consequently, $K_{\lambda}^{(n)}$ is a $BC_{n}$-symmetric Laurent polynomial, as desired. \end{proof} \begin{theorem} \label{symmtriang} The functions $K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3})$ are triangular with respect to dominance ordering: \begin{align*} K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) &= m_{\lambda} + \sum_{\mu < \lambda} c^{\lambda}_{\mu} m_{\mu}. \end{align*} \end{theorem} \begin{remarks} Here $\{m_{\lambda}\}_{\lambda}$ is the monomial basis with respect to the Weyl group of type $BC$: \begin{align*} m_{\lambda} &= \sum_{w \in B_{n}} w( z_{1}^{\lambda_{1}} \cdots z_{n}^{\lambda_{n}}). \end{align*} \end{remarks} \begin{proof} We show that when $K_{\lambda}^{(n)}$ is expressed in the monomial basis, the top degree term is $m_{\lambda}$; moreover, it is monic. First note that from (\ref{BCanti}) in the previous proof, we have \begin{align*} \Delta_{BC} &= z^{\rho} + (\text{dominated terms}), \end{align*} where $\rho = (n, n-1, \dots, 2, 1)$. We compute the dominating monomial in $K_{\lambda}^{(n)} \cdot \Delta_{BC}$; see (\ref{Kdelta}) in the previous proof for the formula. Note that if $\lambda_{i} = 0$, we have highest degree $\lambda_{i} + 1$ in $u_{\lambda}'(z_{i})$.
Similarly, if $\lambda_{i} > 0$, we note that $\lambda_{i} + 1 \geq -\lambda_{i} + 3$ (with equality if and only if $\lambda_{i} = 1$), so we have highest degree $\lambda_{i} + 1$ in $u_{\lambda}'(z_{i})$. Moreover, \begin{align*} \prod_{1 \leq i<j \leq n} (1-tz_{i}^{-1}z_{j}^{-1})(z_{i} - tz_{j}) = \prod_{1 \leq i<j \leq n}(z_{i} - tz_{j}^{-1} - tz_{j} + t^{2}z_{i}^{-1}) \end{align*} has highest degree term $z^{\rho - 1}$. Thus, the dominating monomial in $K_{\lambda}^{(n)} \cdot \Delta_{BC}$ is $z^{\lambda + \rho}$, so that the dominating monomial in $K_{\lambda}^{(n)}$ is $z^{\lambda}$. We now show that the coefficient on $z^{\lambda + \rho}$ in $R_{\lambda}^{(n)} \cdot \Delta_{BC}$ (see (\ref{R-pol}) for the definition of $R_{\lambda}^{(n)}$) is $v_{\lambda}(t)$, so that $K_{\lambda}^{(n)}$ is indeed monic. Note first that by the above argument the only contributing $w$ are those such that (1) $z_{1}^{\lambda_{1}} \cdots z_{n}^{\lambda_{n}} = z_{w(1)}^{\lambda_{1}} \cdots z_{w(n)}^{\lambda_{n}}$ and (2) $\epsilon_{w}(z_{i}) = 1$ for all $1 \leq i \leq n - m_{0}(\lambda) - m_{1}(\lambda)$; let the set of these special permutations be denoted by $P_{\lambda, n}$. Now fix $w \in P_{\lambda, n}$; we compute the coefficient on $z_{1}^{\lambda_{1} + n}$.
Using (\ref{Kdelta}) and the arguments of the previous paragraph, one can check that the coefficient is: \begin{enumerate} \item If $\lambda_{1} >1$: \begin{align*} t^{\# \{z_{i} \prec_{w} z_{1}\}} \end{align*} \item If $\lambda_{1} = 1$: \begin{align*} \begin{cases} t^{\# \{z_{i} \prec_{w} z_{1}\}}, & \text{if } \epsilon_{w}(z_{1}) = 1 \\ - t_{0} \cdots t_{3} (t^{2})^{\# \{z_{1} \prec_{w} z_{i}\}} t^{\#\{ z_{i} \prec_{w} z_{1}\}}, & \text{if } \epsilon_{w}(z_{1}) = -1 \end{cases} \end{align*} \item If $\lambda_{1} = 0$: \begin{align*} \begin{cases} t^{\# \{z_{i} \prec_{w} z_{1}\}}, & \text{if } \epsilon_{w}(z_{1}) = 1\\ -ab(t^{2})^{\# \{z_{1} \prec_{w} z_{i}\}}t^{\# \{z_{i} \prec_{w} z_{1}\}}, & \text{if } \epsilon_{w}(z_{1}) = -1 \end{cases} \end{align*} \end{enumerate} (note that we have used the contribution of $(-1)$ factors from $\epsilon(w)$ in $K_{\lambda}^{(n)} \cdot \Delta_{BC}$). Now define the following subsets of the variables $z_{1}, \dots, z_{n}$: \begin{align*} N_{w, \lambda}^{1} &= \{z_{i} : n-m_{0}(\lambda) - m_{1}(\lambda) < i \leq n-m_{0}(\lambda) \text{ and } \epsilon_{w}(z_{i}) = -1\}\\ N_{w, \lambda}^{0} &= \{ z_{i} : n-m_{0}(\lambda) < i \leq n \text{ and } \epsilon_{w}(z_{i}) = -1 \} \\ N_{w, \lambda} &= N_{w, \lambda}^{1} \cup N_{w, \lambda}^{0}. \end{align*} Finally, define the following statistics of $w$: \begin{align*} n(w) &= |\{ (i,j) : 1 \leq i<j \leq n \text{ and } z_{j} \prec_{w} z_{i} \}| \\ c_{\lambda}(w) &= |\{ (i,j) : 1 \leq i<j \leq n \text{ and } z_{i} \prec_{w} z_{j} \text{ and } z_{i} \in N_{w, \lambda}\}|. \end{align*} Then by iterating the coefficient argument above, we get that the coefficient on $z^{\lambda + \rho}$ is given by \begin{equation*} \sum_{w \in P_{\lambda, n}} t^{n(w)} t^{2c_{\lambda}(w)} (-t_{0} \cdots t_{3})^{|N^{1}_{w, \lambda}|} (-ab)^{|N^{0}_{w, \lambda}|}.
\end{equation*} Since $P_{\lambda, n} = B_{m_{0}(\lambda)}B_{m_{1}(\lambda)} \prod_{i \geq 2} S_{m_{i}(\lambda)}$, it is enough to show the following three cases: \begin{equation} \label{stat1} \sum_{w \in S_{m}} t^{n(w)} = \prod_{j=1}^{m} \frac{1-t^{j}}{1-t} \end{equation} \begin{equation} \label{stat2} \sum_{w \in B_{m}} t^{n(w)} t^{2c_{1^{m}}(w) + 2m_{0}(\lambda)} (-t_{0} \dots t_{3})^{\big|N^{1}_{w, 1^{m}} \big|} = \prod_{j=1}^{m} \frac{1-t^{j}}{1-t} (1-t_{0}\cdots t_{3} t^{j-1 + 2m_{0}(\lambda)}) \end{equation} \begin{equation} \label{stat3} \sum_{w \in B_{m}} t^{n(w)} t^{2c_{0^{m}}(w)} (-ab)^{\big|N^{0}_{w, 0^{m}}\big|} = \prod_{j=1}^{m} \frac{1-t^{j}}{1-t} (1-abt^{j-1}). \end{equation} To show (\ref{stat1}), we note that the LHS is exactly enumerated by the terms of \begin{equation*} (1+t+t^{2} + \cdots + t^{m-1})(1+t+t^{2} + \cdots + t^{m-2}) \cdots (1+t)(1), \end{equation*} which is equal to the RHS. Also refer to \cite[Ch. III, proof of (1.2) and (1.3)]{Mac}. We now show (\ref{stat2}); (\ref{stat3}) is analogous. One can verify that the LHS of (\ref{stat2}) is exactly enumerated by the terms of \begin{equation} \label{stat2eqn} \prod_{k=1}^{m}\Big[ \sum_{i=1}^{k} \big( t^{i-1} + t^{i-1}(t^{2})^{m_{0}(\lambda)+k-i}(-t_{0} \cdots t_{3}) \big) \Big]. \end{equation} But we also have \begin{multline*} \sum_{i=1}^{k} \Big( t^{i-1} + t^{i-1} (t^{2})^{m_{0}(\lambda) + k - i}(-t_{0} \cdots t_{3}) \Big) = \sum_{i=1}^{k} \big(t^{i-1} - t_{0}\cdots t_{3} t^{k+2m_{0}(\lambda)-1} t^{k-i}\big) \\ = (1-t_{0} \cdots t_{3} t^{k+2m_{0}(\lambda) -1})(1 + t + \cdots + t^{k-1}) = (1-t_{0} \cdots t_{3} t^{k+2m_{0}(\lambda) -1}) \frac{1-t^{k}}{1-t}; \end{multline*} substituting this into (\ref{stat2eqn}) gives the RHS of (\ref{stat2}) as desired.
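As a quick computational sanity check of (\ref{stat1}), outside the proof, one can compare the inversion generating function of $S_{m}$ with the $t$-factorial by brute force; the helper names below are ours:

```python
from itertools import permutations

def inv_gen_poly(m):
    # coefficient list of sum_{w in S_m} t^{n(w)}, where n(w) counts inversions
    coeffs = [0] * (m * (m - 1) // 2 + 1)
    for w in permutations(range(m)):
        n_w = sum(1 for i in range(m) for j in range(i + 1, m) if w[i] > w[j])
        coeffs[n_w] += 1
    return coeffs

def t_factorial_poly(m):
    # coefficient list of prod_{j=1}^{m} (1 + t + ... + t^{j-1})
    poly = [1]
    for j in range(1, m + 1):
        out = [0] * (len(poly) + j - 1)
        for a, ca in enumerate(poly):
            for b in range(j):
                out[a + b] += ca
        poly = out
    return poly

for m in range(1, 7):
    assert inv_gen_poly(m) == t_factorial_poly(m)
```

The same brute-force strategy extends to (\ref{stat2}) and (\ref{stat3}) over the hyperoctahedral group, at the cost of implementing the signed statistics.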
Multiplying these functions together for each distinct part $i$ of $\lambda$ (put $m = m_{i}(\lambda)$ in (\ref{stat1}), (\ref{stat2}), and (\ref{stat3}), depending on whether $i \geq 2, i=1, \text{ or } i=0$, respectively), and using (\ref{v}) shows that the coefficient on $z^{\lambda + \rho}$ in $R_{\lambda}^{(n)} \cdot \Delta_{BC}$ is indeed $v_{\lambda}(t)$, as desired. \end{proof} We will now provide a direct proof of Gustafson's formula \cite{G} in the limit $q=0$. \begin{theorem} \label{symmeval} We have the following constant term evaluation in the symmetric case \begin{multline*} \int_{T} \tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) dT = \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)(1-t^{i}ab)}\\ \times \prod_{j=0}^{n-1} (1-t^{2n-2-j}abcd) \prod_{j=1}^{n} \frac{1-t}{1-t^{j}}. \end{multline*} \end{theorem} \begin{proof} Note first that by Theorem \ref{symmtriang}, $K_{0^{n}}^{(n)}(z;t;a,b,0,0) = 1$. So in particular, we have \begin{multline*} \int_{T} \tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) dT = \int_{T} K_{0^{n}}^{(n)}(z;t;a,b,0,0) \tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) dT \\ = \frac{1}{v_{0^{n}}(t;a,b,0,0)} \sum_{w \in B_{n}} \int_{T} R_{0^{n},w}^{(n)}(z;t;a,b,0,0)\tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) dT \\ = \frac{2^{n}n!}{v_{0^{n}}(t;a,b,0,0)} \int_{T}R_{0^{n},\text{id}}^{(n)}(z;t;a,b,0,0)\tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) dT, \end{multline*} where the last equality follows by symmetry of the integrand. But now using (\ref{Kterm}), one notes that \begin{multline*} 2^{n}n! R_{0^{n},\text{id}}^{(n)}(z;t;a,b,0,0)\tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) \\= \prod_{1 \leq i \leq n} \frac{(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-cz_{i})(1-dz_{i})(1-cz_{i}^{-1})(1-dz_{i}^{-1})} \prod_{1 \leq i<j \leq n} \frac{(1-z_{i}z_{j}^{\pm 1})}{(1-tz_{i}z_{j}^{\pm 1})}. \end{multline*} We will denote the right-hand side of the above equation by $\Delta_{K}^{(n)}(z;t;a,b,c,d)$. 
We will now prove that \begin{equation} \label{nonsymmeqn} \int_{T} \Delta_{K}^{(n)}(z;t;a,b,c,d) dT = \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=n-1}^{2n-2} (1-t^{j}abcd). \end{equation} For ease of notation, we will put $I_{n}(z;t;a,b;c,d) = \int_{T} \Delta_{K}^{(n)}(z;t;a,b,c,d) dT$. We will prove (\ref{nonsymmeqn}) through the following two claims. \textbf{Claim 1:} We have \begin{multline} \label{recurrence} I_{n}(z;t;a,b;c,d) = \frac{c}{(1-ac)(1-bc)(1-dc)(c-d)}I_{n-1}(z;t;a,b;tc,d) \\+ \frac{d}{(1-ad)(1-bd)(1-cd)(d-c)} I_{n-1}(z;t;a,b;c,td), \end{multline} with initial conditions $I_{0}(z;t;a,b;c,d) = 1$ and \begin{equation*} I_{1}(z;t;a,b;c,d) = \frac{1-abcd}{(1-ac)(1-bc)(1-cd)(1-ad)(1-bd)}. \end{equation*} To prove the first claim, we note that \begin{multline*} I_{n}(z;t;a,b;c,d) \\= \int \prod_{1 \leq i \leq n}\frac{z_{i}(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-cz_{i})(1-dz_{i})(z_{i}-c)(z_{i}-d)} \prod_{1 \leq i<j \leq n} \frac{(z_{j}-z_{i})(1-z_{i}z_{j})}{(z_{j}-tz_{i})(1-tz_{i}z_{j})} \prod_{j=1}^{n} \frac{dz_{j}}{2\pi \sqrt{-1}}. \end{multline*} We may now hold the variables $z_{2}, \dots, z_{n}$ fixed and integrate with respect to $z_{1}$. There are simple poles at $z_{1} = c$ and $z_{1} = d$, so by the Residue Theorem, the integral equals the sum of the residues at these poles.
Consider the residue at $z_{1} = c$: \begin{multline*} \int \prod_{2 \leq i \leq n}\frac{z_{i}(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-cz_{i})(1-dz_{i})(z_{i}-c)(z_{i}-d)} \prod_{2 \leq i<j \leq n} \frac{(z_{j}-z_{i})(1-z_{i}z_{j})}{(z_{j}-tz_{i})(1-tz_{i}z_{j})} \\ \times \frac{c}{(1-ac)(1-bc)(1-cd)(c-d)} \prod_{1<j \leq n} \frac{(z_{j}-c)(1-cz_{j})}{(z_{j}-tc)(1-tcz_{j})} \prod_{j=2}^{n} \frac{dz_{j}}{2\pi \sqrt{-1}} \\ = C_{1} \int_{T_{n-1}} \prod_{2 \leq i \leq n}\frac{z_{i}(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-tcz_{i})(1-dz_{i})(z_{i}-tc)(z_{i}-d)} \prod_{2 \leq i<j \leq n} \frac{(z_{j}-z_{i})(1-z_{i}z_{j})}{(z_{j}-tz_{i})(1-tz_{i}z_{j})} dT \end{multline*} where $C_{1} = \frac{c}{(1-ac)(1-bc)(1-cd)(c-d)}$. By renumbering the variables $(z_{2}, \dots, z_{n})$ by $(z_{1}, \dots, z_{n-1})$, one sees that this is exactly $C_{1} I_{n-1}(z;t;a,b;tc,d)$. An analogous argument applies for the residue at $z_{1} = d$; this produces the second term $C_{2}I_{n-1}(z;t;a,b;c,td)$, where $C_{2} = \frac{d}{(1-ad)(1-bd)(1-cd)(d-c)}$. To obtain the result at $n=1$, one uses the above argument in this special case along with some algebraic manipulation. In particular, the computation of the sum of residues is as follows \begin{multline*} \frac{(1-c^{2})c}{(1-ac)(1-bc)(1-c^{2})(1-dc)(c-d)} + \frac{(1-d^{2})d}{(1-ad)(1-bd)(1-cd)(1-d^{2})(d-c)} \\ = \frac{1}{(1-ac)(1-bc)(1-cd)(1-ad)(1-bd)} \Big[ \frac{c(1-ad)(1-bd)}{c-d} + \frac{d(1-ac)(1-bc)}{d-c} \Big] \\ = \frac{1-abcd}{(1-ac)(1-bc)(1-cd)(1-ad)(1-bd)}, \end{multline*} as desired. This proves the first claim. \textbf{Claim 2:} We have the following solution to (\ref{recurrence}) \begin{equation*} I_{n}(z;t;a,b;c,d) = \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=n-1}^{2n-2} (1-t^{j}abcd). \end{equation*} We prove the second claim. One can first check that the cases $n=0,1$ satisfy the initial conditions of (\ref{recurrence}).
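The partial-fraction identity used in the $n=1$ computation above is easy to confirm with exact rational arithmetic; this is a sanity check outside the proof, and the function name is ours:

```python
from fractions import Fraction as F

def residue_sum(a, b, c, d):
    # the bracketed sum c(1-ad)(1-bd)/(c-d) + d(1-ac)(1-bc)/(d-c)
    return (c * (1 - a * d) * (1 - b * d) / (c - d)
            + d * (1 - a * c) * (1 - b * c) / (d - c))

# the identity asserts this equals 1 - abcd
for a, b, c, d in [(F(1, 2), F(1, 3), F(1, 5), F(1, 7)),
                   (F(2, 3), F(3, 5), F(1, 4), F(5, 9)),
                   (F(-1, 2), F(4, 7), F(2, 9), F(1, 6))]:
    assert residue_sum(a, b, c, d) == 1 - a * b * c * d
```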
Then for $n \geq 2$, we have \begin{multline*} \frac{c}{(1-ac)(1-bc)(1-dc)(c-d)}I_{n-1}(z;t;a,b;tc,d) + \frac{d}{(1-ad)(1-bd)(1-cd)(d-c)} I_{n-1}(z;t;a,b;c,td) \\ = \frac{c\displaystyle \prod_{j=n-2}^{2n-4} (1-t^{j+1}abcd)}{(1-ac)(1-bc)(1-dc)(c-d)}\prod_{i=0}^{n-2} \frac{1}{(1-t^{i+1}ac)(1-t^{i+1}bc)(1-t^{i+1}cd)(1-t^{i}ad)(1-t^{i}bd)} \\ + \frac{d \displaystyle \prod_{j=n-2}^{2n-4} (1-t^{j+1}abcd) }{(1-ad)(1-bd)(1-cd)(d-c)}\prod_{i=0}^{n-2} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i+1}cd)(1-t^{i+1}ad)(1-t^{i+1}bd)} \\ = \Bigg[ \frac{c(1-t^{n-1}ad)(1-t^{n-1}bd)}{c-d} + \frac{d(1-t^{n-1}ac)(1-t^{n-1}bc)}{d-c} \Bigg] \\ \times \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=n-2}^{2n-4} (1-t^{j+1}abcd). \end{multline*} But now note the following identity for the sum inside the parentheses: \begin{multline*} \frac{c(1-t^{n-1}ad)(1-t^{n-1}bd)}{c-d} + \frac{d(1-t^{n-1}ac)(1-t^{n-1}bc)}{d-c} \\= \frac{c(1-t^{n-1}ad)(1-t^{n-1}bd) -d(1-t^{n-1}ac)(1-t^{n-1}bc) }{c-d} \\ = \frac{c-d+t^{2(n-1)}abcd^{2} - t^{2(n-1)}abc^{2}d}{c-d} = 1-t^{2(n-1)}abcd, \end{multline*} so the above finally becomes \begin{equation*} \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=n-1}^{2(n-1)} (1-t^{j}abcd) = I_{n}(z;t;a,b;c,d), \end{equation*} which proves (\ref{nonsymmeqn}). Thus, putting this together we have \begin{multline*} \int_{T} \tilde \Delta_{K}^{(n)}(z;t;a,b,c,d) dT = \frac{1}{v_{0^{n}}(t;a,b,0,0)} \int_{T} \Delta_{K}^{(n)}(z;t;a,b,c,d) dT \\ = \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=n-1}^{2n-2} (1-t^{j}abcd) \times \prod_{j=1}^{n} \frac{1-t}{1-t^{j}} \prod_{i=1}^{n} \frac{1}{1-abt^{i-1}} \\ =\prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)(1-t^{i}ab)} \prod_{j=0}^{n-1} (1-t^{2n-2-j}abcd) \prod_{j=1}^{n} \frac{1-t}{1-t^{j}}, \end{multline*} where we have used Theorem \ref{nonsymmeval} and (\ref{v}).
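As an independent check, outside the proof, one can verify with exact rational arithmetic that the closed form of Claim 2 satisfies the recurrence (\ref{recurrence}) for small $n$; the function names below are ours:

```python
from fractions import Fraction as F

def I_closed(n, t, a, b, c, d):
    # closed form of Claim 2 for I_n (empty products for n = 0 give 1)
    val = F(1)
    for i in range(n):
        ti = t**i
        val /= ((1 - ti * a * c) * (1 - ti * b * c) * (1 - ti * c * d)
                * (1 - ti * a * d) * (1 - ti * b * d))
    for j in range(n - 1, 2 * n - 1):  # j = n-1, ..., 2n-2
        val *= 1 - t**j * a * b * c * d
    return val

def recurrence_rhs(n, t, a, b, c, d):
    # right-hand side of the recurrence, with I_{n-1} replaced by the closed form
    C1 = c / ((1 - a * c) * (1 - b * c) * (1 - d * c) * (c - d))
    C2 = d / ((1 - a * d) * (1 - b * d) * (1 - c * d) * (d - c))
    return (C1 * I_closed(n - 1, t, a, b, t * c, d)
            + C2 * I_closed(n - 1, t, a, b, c, t * d))

t, a, b, c, d = F(1, 2), F(1, 3), F(2, 5), F(1, 7), F(3, 11)
for n in range(1, 5):
    assert I_closed(n, t, a, b, c, d) == recurrence_rhs(n, t, a, b, c, d)
```

Since the recurrence and initial conditions determine $I_{n}$ uniquely, this mirrors exactly the inductive structure of the proof.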
\end{proof} We note that the quantity $\Delta_{K}^{(n)}(z;t;a,b,c,d)$ which appears in the proof of Theorem \ref{symmeval} is actually the $q=0$ limit of the nonsymmetric Koornwinder density (see \cite{RV} for example); the nonsymmetric theory is investigated in the next section. \begin{theorem} \label{symmorthognorm} The family of polynomials $\{K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \}_{\lambda}$ satisfies the following orthogonality result: \begin{align*} \int_{T} K_{\lambda}^{(n)}(z;t;a,b; t_{0}, \dots, t_{3}) K_{\mu}^{(n)}(z;t;a,b; t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)}(z;t;t_{0}, \dots, t_{3}) dT &= N_{\lambda}(t;t_{0}, \dots, t_{3}) \delta_{\lambda \mu} \end{align*} (refer to (\ref{koorndensity}) and (\ref{norm}) for the definitions of $ \tilde \Delta_{K}^{(n)}$ and $N_{\lambda}$, respectively; also see Theorem \ref{symmeval}). \end{theorem} \begin{proof} By symmetry of $\lambda, \mu$, we may restrict to the case where $\lambda \stackrel{\text{lex}}{\geq} \mu$. We assume $\lambda_{1} > 0$, so we need not consider the case $\lambda = \mu = 0^{n}$; these assumptions hold throughout the proof. By definition of $K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3})$ as a sum over $B_{n}$, the above integral is equal to \begin{equation*} \sum_{w,\rho \in B_{n}} \int_{T} K_{\lambda, w}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) K_{\mu, \rho}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)}(z;t;t_{0}, \dots, t_{3}) dT. \end{equation*} Consider an arbitrary term in this sum over $B_{n} \times B_{n}$ indexed by $(w, \rho)$. Note that using a change of variables in the integral and inverting variables (which preserves the integral), we may assume $w$ is the identity permutation, and all sign choices are $1$ (and $\rho$ is arbitrary). That is, we have: \begin{multline*} \int_{T} K_{\lambda}(z_{1}, \dots, z_{n};t;a,b; t_{0}, \dots, t_{3}) K_{\mu}(z_{1}, \dots, z_{n};t;a,b; t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)}(z;t;t_{0}, \dots, t_{3}) dT \\ = 2^{n}n!
\sum_{\rho \in B_{n}} \int_{T} K_{\lambda, \text{id}}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) K_{\mu, \rho}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)}(z;t;t_{0}, \dots, t_{3}) dT \\ = 2^{n}n! \frac{1}{v_{\lambda}(t)v_{\mu}(t)}\sum_{\rho \in B_{n}} \int_{T} R_{\lambda, \text{id}}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) R_{\mu, \rho}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)}(z;t;t_{0}, \dots, t_{3}) dT, \end{multline*} where $R_{\lambda}^{(n)}$ is as defined in (\ref{R-pol}). We study an arbitrary term in this sum. In particular, we give an iterative formula that shows that each of these terms vanishes unless $\lambda = \mu$. \begin{claim} Fix an arbitrary $\rho \in B_{n}$ and let $\rho(i) = 1$ for some $1 \leq i \leq n$. Then we have the following formula: \begin{multline*} 2^{n}n!\int_{T} R_{\lambda, \text{id}}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) R_{\mu, \rho}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)} dT \\ =\begin{cases} t^{i-1} 2^{n-1}(n-1)! \int R_{\widehat{\lambda}, \widehat{\text{id}}}^{(n-1)} R_{\widehat{\mu}, \widehat{\rho}}^{(n-1)} \tilde \Delta_{K}^{(n-1)} dT & \text{if $\mu_{i} = \lambda_{1}$ and $\epsilon_{\rho}(z_{1}) = -1$,}\\ t^{i-1}(t^{2})^{m_{0}(\mu)+m_{1}(\mu)-i}(-t_{0} \cdots t_{3}) 2^{n-1}(n-1)!\int R_{\widehat{\lambda}, \widehat{\text{id}}}^{(n-1)} R_{\widehat{\mu}, \widehat{\rho}}^{(n-1)} \tilde \Delta_{K}^{(n-1)} dT & \textit{if $\mu_{i} = \lambda_{1}=1$} \\ &\text{ and $\epsilon_{\rho}(z_{1}) = 1$}, \\ 0 & \textit{otherwise.}\\ \end{cases} \end{multline*} where $\widehat{\lambda}$ and $\widehat{\mu}$ are the partitions $\lambda$ and $\mu$ with parts $\lambda_{1}$ and $\mu_{i}$ deleted (respectively), and $\widehat{\text{id}}$ and $\widehat{\rho}$ are the permutations $\text{id}$ and $\rho$ with $z_{1}$ deleted (respectively) and signs preserved. 
\end{claim} To prove the claim, we integrate with respect to $z_{1}$ in the iterated integral, using the definition of $R^{(n)}_{\lambda, \text{id}}, R^{(n)}_{\mu, \rho}$ and $\tilde \Delta_{K}^{(n)}$. First suppose $\mu_{i} > 0$. The univariate terms in $z_{1}$ are: \begin{align*} & z_{1}^{\lambda_{1}} \frac{(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-z_{1}^{-2})} z_{1}^{\mu_{i}} \frac{(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-z_{1}^{-2})} \frac{(1-z_{1}^{\pm 2})}{(1-t_{0}z_{1}^{\pm 1}) \cdots (1-t_{3}z_{1}^{\pm 1})} \\ &= z_{1}^{\lambda_{1} + \mu_{i}} \frac{(-z_{1}^{2})(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})} \end{align*} if $\epsilon_{\rho}(z_{1}) = 1$, and \begin{align*} & z_{1}^{\lambda_{1}} \frac{(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-z_{1}^{-2})} z_{1}^{-\mu_{i}} \frac{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})}{(1-z_{1}^{2})} \frac{(1-z_{1}^{\pm 2})}{(1-t_{0}z_{1}^{\pm 1}) \cdots (1-t_{3}z_{1}^{\pm 1})} \\ &= z_{1}^{\lambda_{1} - \mu_{i}} \end{align*} if $\epsilon_{\rho}(z_{1}) = -1$. Now suppose $\mu_{i} = 0$. The univariate terms in $z_{1}$ are: \begin{align*} & z_{1}^{\lambda_{1}} \frac{(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-z_{1}^{-2})} \frac{(1-az_{1}^{-1})(1-bz_{1}^{-1})}{(1-z_{1}^{-2})} \frac{(1-z_{1}^{\pm 2})}{(1-t_{0}z_{1}^{\pm 1}) \cdots (1-t_{3}z_{1}^{\pm 1})} \\ &= z_{1}^{\lambda_{1}} \frac{(-z_{1}^{2})(1-az_{1}^{-1})(1-bz_{1}^{-1})}{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})} \end{align*} if $\epsilon_{\rho}(z_{1}) = 1$, and \begin{align*} & z_{1}^{\lambda_{1}} \frac{(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-z_{1}^{-2})} \frac{(1-az_{1})(1-bz_{1})}{(1-z_{1}^{2})} \frac{(1-z_{1}^{\pm 2})}{(1-t_{0}z_{1}^{\pm 1}) \cdots (1-t_{3}z_{1}^{\pm 1})} \\ &= z_{1}^{\lambda_{1}} \frac{(1-az_{1})(1-bz_{1})}{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})} \end{align*} if $\epsilon_{\rho}(z_{1}) = -1$. 
Notice that for the cross terms in $z_{1}$ (those involving $z_{j}$ for $j \neq 1$), we have \begin{align*} \prod_{j>1} \frac{1-tz_{1}^{-1}z_{j}^{-1}}{1-z_{1}^{-1}z_{j}^{-1}} \frac{1-tz_{1}^{-1}z_{j}}{1-z_{1}^{-1}z_{j}} \times \prod_{j>1} \frac{1-z_{1}^{\pm 1}z_{j}^{\pm 1}}{1-tz_{1}^{\pm 1}z_{j}^{\pm 1}} \end{align*} from the corresponding terms in $z_{1}$ of $R_{\lambda, \text{id}}$ and the density. Combining this with the cross terms of $R_{\mu, \rho}$ in $z_{1}$ (and taking into account the various sign possibilities for $\rho$), we obtain \begin{align*} \prod_{\substack{z_{i} \prec_{\rho} z_{1} \\ \text{ sign } 1 \text{ for } z_{i}}} \frac{t-z_{1}z_{i}}{1-tz_{1}z_{i}} \prod_{\substack{z_{i} \prec_{\rho} z_{1} \\ \text{ sign } -1 \text{ for } z_{i}}} \frac{t-z_{1}z_{i}^{-1}}{1-tz_{1}z_{i}^{-1}} \prod_{z_{1} \prec_{\rho} z_{j}} \frac{(t-z_{1}z_{j}^{-1})(t-z_{1}z_{j})}{(1-tz_{1}z_{j}^{-1})(1-tz_{1}z_{j})} \end{align*} if $\epsilon_{\rho}(z_{1}) = 1$, and \begin{align*} \prod_{\substack{z_{i} \prec_{\rho} z_{1} \\ \text{ sign } 1 \text{ for } z_{i}}} \frac{t-z_{1}z_{i}}{1-tz_{1}z_{i}} \prod_{\substack{z_{i} \prec_{\rho} z_{1} \\ \text{ sign } -1 \text{ for } z_{i}}} \frac{t-z_{1}z_{i}^{-1}}{1-tz_{1}z_{i}^{-1}} \end{align*} if $\epsilon_{\rho}(z_{1}) = -1$. 
Thus, combining these computations, the integral in $z_{1}$ is: \begin{multline*} \begin{cases} \int_{T_{1}} z_{1}^{\lambda_{1} + \mu_{i}} \frac{(-z_{1}^{2})(1-t_{0}z_{1}^{-1}) \cdots (1-t_{3}z_{1}^{-1})}{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})} \cdot \\ \displaystyle \prod_{\substack{ z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k}) = 1}} \frac{t-z_{1}z_{k}}{1-tz_{1}z_{k}} \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k})=-1}} \frac{t-z_{1}z_{k}^{-1}}{1-tz_{1}z_{k}^{-1}} \prod_{z_{1} \prec_{\rho} z_{j}} \frac{(t-z_{1}z_{j}^{-1})(t-z_{1}z_{j})}{(1-tz_{1}z_{j}^{-1})(1-tz_{1}z_{j})} dT & \text{if $\mu_{i} > 0$ and $\epsilon_{\rho}(z_{1}) = 1$,} \\ \int_{T_{1}} z_{1}^{\lambda_{1}} \frac{(-z_{1}^{2})(1-az_{1}^{-1})(1-bz_{1}^{-1})}{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})} \cdot \\ \displaystyle \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k}) = 1}} \frac{t-z_{1}z_{k}}{1-tz_{1}z_{k}} \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k}) = -1}} \frac{t-z_{1}z_{k}^{-1}}{1-tz_{1}z_{k}^{-1}} \prod_{z_{1} \prec_{\rho} z_{j}} \frac{(t-z_{1}z_{j}^{-1})(t-z_{1}z_{j})}{(1-tz_{1}z_{j}^{-1})(1-tz_{1}z_{j})} dT & \text{if $\mu_{i} = 0$ and $\epsilon_{\rho}(z_{1}) = 1$,} \\ \int_{T_{1}} z_{1}^{\lambda_{1} - \mu_{i}} \displaystyle \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k}) = 1 }} \frac{t-z_{1}z_{k}}{1-tz_{1}z_{k}} \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k})=-1}} \frac{t-z_{1}z_{k}^{-1}}{1-tz_{1}z_{k}^{-1}} dT & \text{if $\mu_{i} > 0$ and $\epsilon_{\rho}(z_{1}) = -1$,} \\ \int_{T_{1}} z_{1}^{\lambda_{1}} \frac{(1-az_{1})(1-bz_{1})}{(1-t_{0}z_{1}) \cdots (1-t_{3}z_{1})}\displaystyle \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k}) = 1}} \frac{t-z_{1}z_{k}}{1-tz_{1}z_{k}} \prod_{\substack{z_{k} \prec_{\rho} z_{1} \\ \epsilon_{\rho}(z_{k}) = -1}} \frac{t-z_{1}z_{k}^{-1}}{1-tz_{1}z_{k}^{-1}} dT&\text{if $\mu_{i} = 0$ and $\epsilon_{\rho}(z_{1}) = -1$.} \end{cases} 
\end{multline*} In particular, the first integral vanishes unless $\lambda_{1} = \mu_{i} = 1$; the second integral always vanishes; the third integral vanishes unless $\lambda_{1} = \mu_{i}$; the fourth integral always vanishes. Thus, we obtain the vanishing conditions of the claim. To obtain the nonzero values, one can use the residue theorem and evaluate at the simple pole $z_{1} = 0$ in the cases $\lambda_{1} = \mu_{i} = 1$ and $\lambda_{1} = \mu_{i}$. Finally, we combine with the original integrand involving terms in $z_{2}, \dots, z_{n}$ to obtain the result of the claim. Note that in particular the claim implies that if $\lambda \neq \mu$, each term vanishes and consequently the total integral is zero. This proves the vanishing part of the orthogonality statement. Next, we compute the norm when $\lambda = \mu$. The claim shows that only certain $\rho \in B_{n}$ give nonvanishing term integrals. Such permutations must satisfy \begin{align*} z_{1}^{\lambda_{1}} \cdots z_{n}^{\lambda_{n}} z_{\rho(1)}^{-\lambda_{1}} \cdots z_{\rho(n)}^{-\lambda_{n}} = 1 \end{align*} and $\epsilon_{\rho}(z_{i}) = -1$ for all $1 \leq i \leq n - m_{0}(\lambda) - m_{1}(\lambda)$. For simplicity of notation, define $B_{\lambda,n}$ to be the set of such permutations $\rho \in B_{n}$. Then we have: \begin{multline*} \int_{T} K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) K_{\lambda}^{(n)}(z;t;a,b;t_{0}, \dots, t_{3}) \tilde \Delta_{K}^{(n)} dT = \frac{2^{n}n!}{v_{\lambda}(t)^{2}} \sum_{\rho \in B_{n}} \int_{T} R_{\lambda, \text{id}}^{(n)} R_{\lambda, \rho}^{(n)} \tilde \Delta_{K}^{(n)} dT \\ =\frac{2^{n}n!}{v_{\lambda}(t)^{2}} \sum_{\rho \in B_{\lambda,n}} \int_{T} R_{\lambda, \text{id}}^{(n)} R_{\lambda, \rho}^{(n)} \tilde \Delta_{K}^{(n)} dT, \end{multline*} since only these permutations give nonvanishing terms. Then, using the formula of the Claim, we have \begin{multline*} 2^{n}n! 
\sum_{\rho \in B_{\lambda,n}} \int_{T} R_{\lambda, \text{id}}^{(n)} R_{\lambda, \rho}^{(n)} \tilde \Delta_{K}^{(n)} dT \\ = \begin{cases} C_{1} \times 2^{n-m_{\lambda_{1}}(\lambda)} (n-m_{\lambda_{1}}(\lambda))! \displaystyle \sum_{\rho \in B_{\tilde \lambda, n-m_{\lambda_{1}}(\lambda)}} \int_{T} R_{\tilde \lambda, \text{id}}^{(n-m_{\lambda_{1}}(\lambda))} R_{\tilde \lambda, \rho}^{(n-m_{\lambda_{1}}(\lambda))} \tilde \Delta_{K}^{(n-m_{\lambda_{1}}(\lambda))}dT &\text{ if $\lambda_{1}>1$, }\\ C_{2} \times 2^{n-m_{\lambda_{1}}(\lambda)} (n-m_{\lambda_{1}}(\lambda))! \displaystyle \sum_{\rho \in B_{\tilde \lambda, n-m_{\lambda_{1}}(\lambda)}} \int_{T} R_{\tilde \lambda, \text{id}}^{(n-m_{\lambda_{1}}(\lambda))} R_{\tilde \lambda, \rho}^{(n-m_{\lambda_{1}}(\lambda))} \tilde \Delta_{K}^{(n-m_{\lambda_{1}}(\lambda))}dT &\text{ if $\lambda_{1} = 1$,} \end{cases} \end{multline*} where \begin{align*} C_{1} &= \prod_{k=1}^{m_{\lambda_{1}}(\lambda)} \Big( \sum_{i=1}^{k} t^{i-1}\Big) \\ C_{2} &= \prod_{k=1}^{m_{1}(\lambda)} \Big[ \sum_{i=1}^{k} \big(t^{i-1} + t^{i-1}(t^{2})^{m_{0}(\lambda)+k-i}(-t_{0}t_{1}t_{2}t_{3})\big)\Big] \end{align*} and $\tilde \lambda$ is the partition $\lambda$ with all $m_{\lambda_{1}}(\lambda)$ occurrences of $\lambda_{1}$ deleted. Iterating this argument gives that \begin{multline*} 2^{n}n! \sum_{\rho \in B_{\lambda,n}} \int_{T} R_{\lambda, \text{id}}^{(n)} R_{\lambda, \rho}^{(n)} \tilde \Delta_{K}^{(n)} dT \\ = \Bigg( \prod_{j>1} \prod_{k=1}^{m_{j}(\lambda)}\Big( \sum_{i=1}^{k} t^{i-1} \Big) \Bigg) \Bigg( \prod_{k=1}^{m_{1}(\lambda)} \sum_{i=1}^{k} \Big( t^{i-1} + t^{i-1} (t^{2})^{m_{0}(\lambda) + k - i}(-t_{0} \cdots t_{3}) \Big) \Bigg)\\ \times 2^{m_{0}(\lambda)} m_{0}(\lambda)! 
\sum_{\rho \in B_{m_{0}(\lambda)}} \int_{T}R_{0^{m_{0}(\lambda)},\text{id}}^{(m_{0}(\lambda))} R_{0^{m_{0}(\lambda)}, \rho}^{(m_{0}(\lambda))} \tilde \Delta_{K}^{(m_{0}(\lambda))} dT; \end{multline*} note that the expression on the final line is exactly $\int_{T} {R_{0^{m_{0}(\lambda)}}^{(m_{0}(\lambda))}}^{2} \tilde \Delta_{K}^{(m_{0}(\lambda))}dT$. Thus, \begin{multline*} \frac{2^{n}n!}{v_{\lambda}(t)^{2}} \sum_{\rho \in B_{\lambda,n}} \int_{T} R_{\lambda, \text{id}}^{(n)} R_{\lambda, \rho}^{(n)} \tilde \Delta_{K}^{(n)} dT\\ = \frac{1}{v_{\lambda+}(t)^{2}}\Bigg( \prod_{j>1} \prod_{k=1}^{m_{j}(\lambda)}\Big( \sum_{i=1}^{k} t^{i-1} \Big) \Bigg) \Bigg( \prod_{k=1}^{m_{1}(\lambda)} \sum_{i=1}^{k} \Big( t^{i-1} + t^{i-1} (t^{2})^{m_{0}(\lambda) + k - i}(-t_{0} \cdots t_{3}) \Big) \Bigg) \\ \times \frac{1}{v_{0^{m_{0}(\lambda)}}(t)^{2}} \int_{T} {R_{0^{m_{0}(\lambda)}}^{(m_{0}(\lambda))}}^{2} \tilde \Delta_{K}^{(m_{0}(\lambda))} dT, \end{multline*} since by (\ref{v}) and (\ref{vplus}) we have $v_{\lambda+}(t) \cdot v_{0^{m_{0}(\lambda)}}(t) = v_{\lambda}(t)$. 
Now using \begin{align*} \prod_{k=1}^{m_{j}(\lambda)} \Big( \sum_{i=1}^{k} t^{i-1} \Big) &= \prod_{k=1}^{m_{j}(\lambda)} \frac{1-t^{k}}{1-t} \end{align*} and \begin{align*} \sum_{i=1}^{k} \Big( t^{i-1} + t^{i-1} (t^{2})^{m_{0}(\lambda) + k - i}(-t_{0} \cdots t_{3}) \Big) &= \sum_{i=1}^{k} \big(t^{i-1} - t_{0}\cdots t_{3} t^{k+2m_{0}(\lambda)-1} t^{k-i}\big) \\ &= (1-t_{0} \cdots t_{3} t^{k+2m_{0}(\lambda) -1})(1 + t + \cdots + t^{k-1}) \\ &= (1-t_{0} \cdots t_{3} t^{k+2m_{0}(\lambda) -1}) \frac{1-t^{k}}{1-t}, \end{align*} the above expression can be simplified to \begin{multline*} \frac{1}{v_{\lambda+}(t)^{2}}\Big(\prod_{j \geq 1} \prod_{k=1}^{m_{j}(\lambda)} \frac{1-t^{k}}{1-t} \Big) \prod_{k=1}^{m_{1}(\lambda)} (1-t_{0} \cdots t_{3} t^{k+2m_{0}(\lambda)-1}) \int_{T} {K_{0^{m_{0}(\lambda)}}^{(m_{0}(\lambda))}}^{2} \tilde \Delta_{K}^{(m_{0}(\lambda))} dT = \frac{1}{v_{\lambda+}(t)} \int_{T} \tilde \Delta_{K}^{(m_{0}(\lambda))} dT\\ = N_{\lambda}(t;t_{0}, \dots, t_{3}) \end{multline*} since $K_{0^{m_{0}(\lambda)}}^{(m_{0}(\lambda))} = 1$, by Theorem \ref{symmtriang}. Note that, by Theorem \ref{symmeval}, there is an explicit evaluation for this norm. \end{proof} \subsection{Application} In this section, we use the closed formula (\ref{K-pol}) for the Koornwinder polynomials at $q=0$ to prove a result from \cite{RV} in this special case. The idea is a type $BC$ adaptation of that used in \cite{VV}: we use the structure of $K_{\lambda}^{(n)}$ as a sum over the Weyl group and the symmetry of the integral to restrict to one particular term. We obtain an explicit formula for the integral of this particular term by integrating with respect to one variable (holding the others fixed) and then proceed by induction. \begin{theorem} \cite[Theorem 4.10]{RV} For partitions $\lambda$ with $l(\lambda) \leq n$, the integral \begin{equation*} \int_{T} K_{\lambda}(z_{1}, \dots, z_{n} ; t^{2}; a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n)}(z;t; \pm \sqrt{t},a,b) dT
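The geometric-sum simplification used just above is easy to confirm by machine with exact rational arithmetic; this is a sanity check only, and the helper names are ours:

```python
from fractions import Fraction as F

def lhs(t, P, k, m0):
    # sum_{i=1}^{k} ( t^{i-1} + t^{i-1} (t^2)^{m0 + k - i} (-P) ), with P = t0 t1 t2 t3
    return sum(t**(i - 1) + t**(i - 1) * (t**2)**(m0 + k - i) * (-P)
               for i in range(1, k + 1))

def rhs(t, P, k, m0):
    # claimed closed form (1 - P t^{k + 2 m0 - 1}) (1 - t^k)/(1 - t)
    return (1 - P * t**(k + 2 * m0 - 1)) * (1 - t**k) / (1 - t)

for k in range(1, 6):
    for m0 in range(3):
        assert lhs(F(2, 7), F(3, 5), k, m0) == rhs(F(2, 7), F(3, 5), k, m0)
```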
\end{equation*} vanishes if $\lambda$ is not an even partition. If $\lambda$ is an even partition, the integral is equal to \begin{equation*} \frac{(\sqrt{t})^{|\lambda|}}{(1+t)^{l(\lambda)}} \frac{N_{\lambda}(t; \pm \sqrt{t},a,b) v_{\lambda+}(t; \pm \sqrt{t},a,b)}{v_{\lambda+}(t^{2};a,b,ta,tb)}.\end{equation*} \end{theorem} \begin{proof} We have \begin{multline*} \int_{T} K_{\lambda}(z_{1}, \dots, z_{n} ; t^{2}; a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n)}(z;t; \pm \sqrt{t},a,b) dT \\ = \frac{1}{v_{\lambda}(t^{2};a,b;a,b,ta,tb)} \sum_{w \in B_{n}} \int_{T} R_{\lambda,w}^{(n)}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n)}(z;t; \pm \sqrt{t},a,b) dT \\ = \frac{2^{n}n!}{v_{\lambda}(t^{2};a,b;a,b,ta,tb)} \int_{T} R_{\lambda,\text{id}}^{(n)}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n)}(z;t; \pm \sqrt{t},a,b) dT, \end{multline*} where in the last equality we have used the symmetry of the integral. We assume $\lambda_{1} > 0$ so that $\lambda \neq 0^{n}$. Next, we restrict to terms involving $z_{1}$ in the integrand, and integrate with respect to $z_{1}$.
Doing this computation gives the following: \begin{multline*} \int_{T_{1}} z_{1}^{\lambda_{1}} \frac{(1-az_{1}^{-1})(1-bz_{1}^{-1})(1-taz_{1}^{-1})(1-tbz_{1}^{-1})}{(1-z_{1}^{-2})} \frac{(1-z_{1}^{\pm 2})}{(1+\sqrt{t}z_{1}^{\pm 1})(1-\sqrt{t}z_{1}^{\pm 1})(1-az_{1}^{\pm 1})(1-bz_{1}^{\pm 1})} \\ \times \prod_{j>1} \frac{(1-t^{2}z_{1}^{-1}z_{j})(1-t^{2}z_{1}^{-1}z_{j}^{-1})}{(1-z_{1}^{-1}z_{j})(1-z_{1}^{-1}z_{j}^{-1})} \prod_{j>1} \frac{(1-z_{1}^{\pm 1}z_{j}^{\pm 1})}{(1-tz_{1}^{\pm 1}z_{j}^{\pm 1})} dT \\ = \frac{1}{2\pi i} \int_{C} z_{1}^{\lambda_{1} - 1} \frac{(z_{1}-ta)(z_{1}-tb)(1-z_{1}^{2})}{(1-tz_{1}^{2})(z_{1}+\sqrt{t})(z_{1}-\sqrt{t})(1-az_{1})(1-bz_{1})} \\ \times \prod_{j>1} \frac{(z_{1}-t^{2}z_{j})(z_{1}-t^{2}z_{j}^{-1})(1-z_{1}z_{j})(1-z_{1}z_{j}^{-1})}{(z_{1}-tz_{j})(z_{1}-tz_{j}^{-1})(1-tz_{1}z_{j})(1-tz_{1}z_{j}^{-1})} dz_{1} \end{multline*} Note that this integral has poles at $z_{1} = \pm \sqrt{t}$ and $z_{1} = tz_{j}, tz_{j}^{-1}$ for each $j>1$. We first compute the residue at $z_{1} = \sqrt{t}$: \begin{multline*} (\sqrt{t})^{\lambda_{1} - 1} \frac{(\sqrt{t}-ta)(\sqrt{t}-tb)(1-t)}{(1-t^{2})2\sqrt{t}(1-a\sqrt{t})(1-b\sqrt{t})} \prod_{j>1} \frac{(\sqrt{t}-t^{2}z_{j})(\sqrt{t}-t^{2}z_{j}^{-1})(1-\sqrt{t}z_{j})(1-\sqrt{t}z_{j}^{-1})}{(\sqrt{t}-tz_{j})(\sqrt{t}-tz_{j}^{-1})(1-t\sqrt{t}z_{j})(1-t\sqrt{t}z_{j}^{-1})} \\ =(\sqrt{t})^{\lambda_{1}} \frac{1}{2(1+t)} \prod_{j>1} \frac{(1-t\sqrt{t}z_{j})(1-t\sqrt{t}z_{j}^{-1})(1-\sqrt{t}z_{j})(1-\sqrt{t}z_{j}^{-1})}{(1-\sqrt{t}z_{j})(1-\sqrt{t}z_{j}^{-1})(1-t\sqrt{t}z_{j})(1-t\sqrt{t}z_{j}^{-1})} = \frac{(\sqrt{t})^{\lambda_{1}}}{2(1+t)} \end{multline*} Similarly, we can compute the residue at $z_{1} = -\sqrt{t}$: \begin{multline*} (-\sqrt{t})^{\lambda_{1}-1} \frac{(-\sqrt{t}-ta)(-\sqrt{t}-tb)(1-t)}{(1-t^{2})(-2\sqrt{t})(1+a\sqrt{t})(1+b\sqrt{t})} \prod_{j>1}
\frac{(-\sqrt{t}-t^{2}z_{j})(-\sqrt{t}-t^{2}z_{j}^{-1})(1+\sqrt{t}z_{j})(1+\sqrt{t}z_{j}^{-1})}{(-\sqrt{t}-tz_{j})(-\sqrt{t}-tz_{j}^{-1})(1+t\sqrt{t}z_{j})(1+t\sqrt{t}z_{j}^{-1})} \\ = (-\sqrt{t})^{\lambda_{1}} \frac{1}{2(1+t)} \prod_{j>1} \frac{(1+t\sqrt{t}z_{j})(1+t\sqrt{t}z_{j}^{-1})(1+\sqrt{t}z_{j})(1+\sqrt{t}z_{j}^{-1})}{(1+\sqrt{t}z_{j})(1+\sqrt{t}z_{j}^{-1})(1+t\sqrt{t}z_{j})(1+t\sqrt{t}z_{j}^{-1})} = \frac{(-\sqrt{t})^{\lambda_{1}}}{2(1+t)} \end{multline*} The residues at $tz_{j}, tz_{j}^{-1}$ can be computed in a similar manner. One can then combine these residues (at $tz_{j}, tz_{j}^{-1}$) with the terms from the original integrand and integrate with respect to $z_{j}$. Some computations show the resulting integral is zero; the argument is similar to that used in \cite[Theorem 23]{VV}. Finally, we add the residues at $z_{1} = \pm \sqrt{t}$ to get \begin{equation*} \frac{(\sqrt{t})^{\lambda_{1}}}{2(1+t)} + \frac{(-\sqrt{t})^{\lambda_{1}}}{2(1+t)} = \begin{cases} \frac{(\sqrt{t})^{\lambda_{1}}}{(1+t)} ,& \text{if $\lambda_{1}$ is even} \\ 0, & \text{if $\lambda_{1}$ is odd.} \end{cases} \end{equation*} Thus, \begin{multline*} 2^{n}n! \int_{T} R_{\lambda,\text{id}}^{(n)}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n)}(z;t; \pm \sqrt{t},a,b) dT \\ = \begin{cases} \frac{(\sqrt{t})^{\lambda_{1}}}{(1+t)} 2^{n-1}(n-1)! \int_{T} R_{\widehat{\lambda},\widehat{\text{id}}}^{(n-1)}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n-1)}(z;t; \pm \sqrt{t},a,b) dT, & \text{ if $\lambda_{1}$ is even} \\ 0, & \text{otherwise,} \end{cases} \end{multline*} where $\widehat{\lambda}$ is the partition $\lambda$ with the part $\lambda_{1}$ deleted, and $\widehat{\text{id}}$ is the permutation $\text{id}$ with $z_{1}$ deleted and signs preserved.
Consequently, the entire integral vanishes if any part of $\lambda$ is odd; if $\lambda$ is even, it is equal to \begin{multline*} \frac{2^{n}n!}{v_{\lambda}(t^{2};a,b;a,b,ta,tb)} \int_{T} R_{\lambda,\text{id}}^{(n)}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n)}(z;t; \pm \sqrt{t},a,b) dT \\ = \frac{2^{n-l(\lambda)}(n-l(\lambda))!}{v_{\lambda+}(t^{2};a,b,ta,tb)v_{0^{n-l(\lambda)}}(t^{2};a,b;a,b,ta,tb)} \frac{(\sqrt{t})^{|\lambda|}}{(1+t)^{l(\lambda)}}\\ \times \int_{T} R_{0^{n-l(\lambda)},\text{id}}^{(n-l(\lambda))}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n-l(\lambda))}(z;t; \pm \sqrt{t},a,b) dT, \end{multline*} where, by abuse of notation in the last line, we use $\text{id}$ to denote the identity element in $B_{n-l(\lambda)}$. By (\ref{R-pol}), the last line is equal to \begin{multline*} \frac{2^{n-l(\lambda)}(n-l(\lambda))!}{v_{\lambda+}(t^{2};a,b,ta,tb)} \frac{(\sqrt{t})^{|\lambda|}}{(1+t)^{l(\lambda)}} \int_{T} K_{0^{n-l(\lambda)},\text{id}}^{(n-l(\lambda))}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n-l(\lambda))}(z;t; \pm \sqrt{t},a,b) dT \\ = \frac{1}{v_{\lambda+}(t^{2};a,b,ta,tb)} \frac{(\sqrt{t})^{|\lambda|}}{(1+t)^{l(\lambda)}} \int_{T} K_{0^{n-l(\lambda)}}^{(n-l(\lambda))}(z;t^{2};a,b;a,b,ta,tb) \tilde \Delta_{K}^{(n-l(\lambda))}(z;t; \pm \sqrt{t},a,b) dT \\ = \frac{1}{v_{\lambda+}(t^{2};a,b,ta,tb)} \frac{(\sqrt{t})^{|\lambda|}}{(1+t)^{l(\lambda)}} \int_{T} \tilde \Delta_{K}^{(n-l(\lambda))}(z;t; \pm \sqrt{t},a,b) dT \\ = \frac{(\sqrt{t})^{|\lambda|}}{(1+t)^{l(\lambda)}} \frac{N_{\lambda}(t; \pm \sqrt{t},a,b) v_{\lambda+}(t; \pm \sqrt{t},a,b)}{v_{\lambda+}(t^{2};a,b,ta,tb)}, \end{multline*} since $K_{0^{l}}^{(l)}(z;t;a,b;t_{0}, \dots, t_{3}) = 1$ by Theorem \ref{symmtriang} and $n-l(\lambda) = m_{0}(\lambda)$. \end{proof} \section{Nonsymmetric Hall-Littlewood polynomials of type BC} \subsection{Background and Notation} We first introduce the affine Hecke algebra of type $BC$, a crucial object in the study of nonsymmetric Koornwinder polynomials.
We retain the notation on partitions, compositions and orderings of Section \ref{notation}. \begin{definition} (see \cite{RV}) The \textit{affine Hecke algebra $H$ of type $BC$} is defined to be the $\mathbb{C}(q,t,a,b,c,d)$ algebra with generators $T_{0}, T_{1}, \dots, T_{n}$ ($n>1$), subject to the following braid relations \begin{align*} T_{i}T_{j} &= T_{j}T_{i}, \; \; \; |i-j| \geq 2, \\ T_{i}T_{j}T_{i} &= T_{j}T_{i}T_{j}, \; \; \; |i-j| = 1, i,j \neq 0,n, \\ T_{i}T_{i+1}T_{i}T_{i+1} &= T_{i+1}T_{i}T_{i+1}T_{i} \; \; \; \; (i=0, i = n-1) \end{align*} and the quadratic relations \begin{align*} (T_{0} + 1)(T_{0} + cd/q) &= 0, \\ (T_{i} + 1)(T_{i}-t) &= 0, \; \; \; i \neq 0,n, \\ (T_{n} + 1)(T_{n} + ab) &= 0. \end{align*} \end{definition} Recall that, by the Noumi representation (see \cite{Sahi, Stok1}), there is an action of $H$ on the vector space of Laurent polynomials $\mathbb{C}(q^{1/2},t,a,b,c,d)[x_{1}^{\pm 1}, \dots, x_{n}^{\pm 1}]$ (here $x_{1}, \dots, x_{n}$ are $n$ independent indeterminates) as follows: \begin{align*} T_{0}f &= -(cd/q)f + \frac{(1-c/x_{1})(1-d/x_{1})}{1-q/x_{1}^{2}}(f^{s_{0}}-f) \\ T_{i}f &= tf + \frac{x_{i+1}-tx_{i}}{x_{i+1}-x_{i}}(f^{s_{i}}-f) \; \; \; \; \; (\text{for } 0<i<n)\\ T_{n}f &= -abf + \frac{(1-ax_{n})(1-bx_{n})}{1-x_{n}^{2}}(f^{s_{n}}-f), \end{align*} where $f^{s_{0}}(x_{1}, \dots, x_{n}) = f(q/x_{1},x_{2}, \dots, x_{n})$, $f^{s_{i}}(x_{1}, \dots, x_{n}) = f(x_{1}, \dots, x_{i-1},x_{i+1},x_{i},x_{i+2}, \dots, x_{n})$ for $0<i<n$, and $f^{s_{n}}(x_{1}, \dots, x_{n}) = f(x_{1}, \dots, x_{n-1},1/x_{n})$. Note that, for $0<i \leq n$, the action of $T_{i}$ on polynomials is independent of $q$; this will be crucial for the rest of the paper. We will denote the nonsymmetric Koornwinder polynomials in $n$ variables by $U_{\lambda}^{(n)}(x;q,t;a,b,c,d)$ \cite{NK, Sahi, Stok1, Stok2}. Recall that these polynomials are indexed by compositions $\lambda$. 
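As a sanity check on the Noumi action, outside the development, one can verify the quadratic relation $(T_{i}+1)(T_{i}-t) = 0$ for a middle generator pointwise: since the relation is an identity of rational functions, it suffices to evaluate it at exact rational points. The setup below, with two variables standing for $x_{i}, x_{i+1}$, is ours:

```python
from fractions import Fraction as F

t = F(1, 3)  # sample value of the Hecke parameter

def T(f):
    # Noumi action of a middle generator T_i on a function of the two
    # adjacent variables x = x_i, y = x_{i+1}:  T f = t f + (y - t x)/(y - x) (f^{s_i} - f)
    def Tf(x, y):
        return t * f(x, y) + (y - t * x) / (y - x) * (f(y, x) - f(x, y))
    return Tf

def quadratic_relation_holds(f, x0, y0):
    # (T_i + 1)(T_i - t) = 0, i.e. T^2 f + (1 - t) T f - t f = 0
    return T(T(f))(x0, y0) + (1 - t) * T(f)(x0, y0) - t * f(x0, y0) == 0

samples = [lambda x, y: x, lambda x, y: x**2 * y, lambda x, y: x / y + y**3]
for f in samples:
    for x0, y0 in [(F(2, 5), F(7, 3)), (F(-1, 2), F(4, 9))]:
        assert quadratic_relation_holds(f, x0, y0)
```

Checks of the relations involving $T_{0}$ and $T_{n}$ would be analogous, using $f^{s_{0}}$ and $f^{s_{n}}$ as defined above.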
We will remind the reader how these polynomials are defined, but we must first set up some relevant notation. Let $^{\iota}$ be the involution defined by \begin{equation*} ^{\iota}: q \rightarrow q^{-1}, t \rightarrow t^{-1}, a \rightarrow a^{-1}, b \rightarrow b^{-1}, c \rightarrow c^{-1}, d \rightarrow d^{-1}, z^{\mu} \rightarrow z^{\mu} \end{equation*} and let $\bar{}$ be the involution defined by \begin{equation*} \bar{}: q \rightarrow q, t \rightarrow t, a \rightarrow a, b \rightarrow b, c \rightarrow c, d \rightarrow d, z^{\mu} \rightarrow z^{-\mu}. \end{equation*} Define the weight \begin{multline} \label{nonsymmdensity} \Delta_{K}^{(n)}(z;q,t;a,b,c,d) = \prod_{1 \leq i \leq n} \frac{(z_{i}^{2}, qz_{i}^{-2};q)}{(az_{i}, bz_{i}, cz_{i}, dz_{i}, aqz_{i}^{-1}, bqz_{i}^{-1}, cz_{i}^{-1},dz_{i}^{-1};q)} \\ \times \prod_{1 \leq i<j \leq n} \frac{(z_{i}z_{j}^{\pm 1},qz_{i}^{-1}z_{j}^{\pm 1};q)}{(tz_{i}z_{j}^{\pm 1}, qtz_{i}^{-1}z_{j}^{\pm 1};q)}, \end{multline} i.e., the full nonsymmetric density, see \cite{RV}. Note that $\Delta_{K}^{(n)}(z;q,1;1,-1,0,0) = 1$; this specialization is independent of $q$. As in the symmetric case, when the parameters are clear from context, we will suppress them to make the notation easier. Note the following formula for the nonsymmetric density at the specialization $q=0$: \begin{equation} \label{nonsymmzero} \prod_{1 \leq i \leq n} \frac{(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-cz_{i})(1-dz_{i})(1-cz_{i}^{-1})(1-dz_{i}^{-1})} \prod_{1 \leq i<j \leq n} \frac{1-z_{i}z_{j}^{\pm 1}}{1-tz_{i}z_{j}^{\pm 1}}. \end{equation} We will write $\Delta_{K}^{(n)}(z;t;a,b,c,d)$ to indicate this particular limiting case. With this terminology, consider the following inner product on functions of $n$ variables with parameters $q,t,a,b,c,d$: \begin{equation*} \langle f,g \rangle_{q} = \int_{T} f \bar{g}^{\iota} \Delta_{K}^{(n)}(z;q,t;a,b,c,d) dT. \end{equation*} Note that it is the constant term of $f \bar{g}^{\iota}\Delta_{K}^{(n)}$. 
Also, denote by $\langle \cdot, \cdot \rangle_{0}$ the following inner product: \begin{equation} \label{ipzero} \langle f,g \rangle_{0} = \int_{T} f \bar{g}^{\iota} \Delta_{K}^{(n)}(z;t;a,b,c,d) dT, \end{equation} involving the $q = 0$ degeneration of the full nonsymmetric Koornwinder weight as in (\ref{nonsymmzero}). Recall that the polynomials $\{U_{\mu}^{(n)}(x;q,t;a,b,c,d) \}_{\mu \in \mathbb{Z}^{n}}$ are uniquely defined by the following conditions: \begin{align*} &(i) \; \; U_{\mu} = x^{\mu} + \sum_{\nu \prec \mu} w_{\mu \nu} x^{\nu} \\ & (ii) \; \; \langle U_{\mu}, x^{\nu} \rangle_{q} = 0 \text{ if } \nu \prec \mu, \end{align*} where as usual we write $x^{\mu}$ for the monomial $x_{1}^{\mu_{1}}x_{2}^{\mu_{2}} \cdots x_{n}^{\mu_{n}}$. \begin{definition}\label{nonsymmpart} For a partition $\lambda$ with $l(\lambda) \leq n$, define \begin{equation*} E_{\lambda}^{(n)}(z;c,d) = \prod_{\lambda_{i} > 0} z_{i}^{\lambda_{i}} (1-cz_{i}^{-1})(1-dz_{i}^{-1}). \end{equation*} \end{definition} \subsection{Main Results} We will first show that, under the assumption that $\lambda$ is a partition with $l(\lambda) \leq n$, $E_{\lambda}^{(n)}(z;c,d)$ is the $q=0$ limiting case of the nonsymmetric Koornwinder polynomial $U_{\lambda}^{(n)}(x;q,t;a,b,c,d)$. \begin{theorem} \label{triang} (Triangularity) The polynomials $E_{\lambda}^{(n)}(z;c,d)$ are triangular with respect to dominance ordering, i.e., \begin{equation*} E_{\lambda}^{(n)}(z;c,d) = z^{\lambda} + \sum_{\mu \prec \lambda} c_{\mu}z^{\mu} \end{equation*} for all partitions $\lambda$. \end{theorem} \begin{proof} It is clear that $E_{\lambda}^{(n)}(z;c,d) = z^{\lambda} + (\text{dominated terms})$, since the term inside the product definition of $E_{\lambda}^{(n)}(z;c,d)$ is just $z_{i}^{\lambda_{i}} - (c + d)z_{i}^{\lambda_{i}-1} + cd z_{i}^{\lambda_{i}-2}$. For instance, for $n = 1$ and $\lambda = (2)$, one has $E_{(2)}^{(1)}(z;c,d) = z^{2}(1-cz^{-1})(1-dz^{-1}) = z^{2} - (c+d)z + cd$.
\end{proof} \begin{theorem} \label{nonsymmeval} We have the following constant term evaluation in the nonsymmetric case (with respect to the $q=0$ limit of the nonsymmetric density as in (\ref{nonsymmzero})) \begin{equation*} \int_{T} \Delta_{K}^{(n)}(z;t;a,b,c,d) dT = \prod_{i=0}^{n-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=n-1}^{2n-2} (1-t^{j}abcd). \end{equation*} \end{theorem} \begin{proof} This follows from the proof of Theorem \ref{symmeval}, in particular recall (\ref{nonsymmeqn}). \end{proof} \begin{theorem} \label{orthog} (Orthogonality) Let $\lambda$ be a partition with $l(\lambda) \leq n$ and $\mu \in \mathbb{Z}^{n}$ a composition such that $\mu \prec \lambda$. Then we have $\langle E_{\lambda}^{(n)}(z;c,d), z^{\mu} \rangle_{0} = 0$. \end{theorem} \begin{proof} Fix a partition $\lambda$. First note that, by definition of the inner product $\langle \cdot, \cdot \rangle_{0}$ in (\ref{ipzero}), we have \begin{multline*} \langle E_{\lambda}^{(n)}(z;c,d), z^{\mu} \rangle_{0} \\= \int_{T} E_{\lambda}^{(n)}(z;c,d) z^{-\mu}\prod_{1 \leq i \leq n} \frac{(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-cz_{i})(1-dz_{i})(1-cz_{i}^{-1})(1-dz_{i}^{-1})} \prod_{1 \leq i<j \leq n} \frac{1-z_{i}z_{j}^{\pm 1}}{1-tz_{i}z_{j}^{\pm 1}} \, dT. \end{multline*} We will first show $\langle E_{\lambda}^{(n)}(z;c,d), z^{\mu} \rangle_{0} = 0$ for all compositions $\mu$ satisfying the following two properties: \noindent \textbf{Condition (i)} $\mu \stackrel{\text{lex}}{<} \lambda$, so in particular there exists $1 \leq i \leq n$ such that $\mu_{1} = \lambda_{1}, \dots, \mu_{i-1} = \lambda_{i-1}$ and $\mu_{i}< \lambda_{i}$. \noindent \textbf{Condition (ii)} $\lambda_{i} \neq 0$ (where $i$ is as in (i)).
We mention that condition (ii) is necessary because of the difference between nonzero and zero parts of $\lambda$ in Definition \ref{nonsymmpart}; in particular if $\lambda_{i} = 0$ then one does not have the term $z_{i}^{\lambda_{i}}(1-cz_{i}^{-1})(1-dz_{i}^{-1})$ in $E_{\lambda}^{(n)}(z;c,d)$ (so that one still has the terms $1/(1-cz_{i}^{-1})(1-dz_{i}^{-1})$ in the product $E_{\lambda}^{(n)}(z;c,d)\Delta_{K}^{(n)}$). We give a proof by induction on $n$, the number of variables. Note first that condition (ii) implies that $\lambda_{1}, \dots, \lambda_{i-1} \neq 0$. Consider the case $n=1$. Then in particular $i=1$ and conditions (i) and (ii) give $\mu_{1} < \lambda_{1} \neq 0$. One can then compute \begin{equation*} \langle E_{\lambda}^{(n)}, z^{\mu} \rangle_{0} = \int_{T} z_{1}^{\lambda_{1} - \mu_{1}} \frac{(1-z_{1}^{2})}{(1-az_{1})(1-bz_{1})(1-cz_{1}) (1-dz_{1})} dT, \end{equation*} and since $\lambda_{1} - \mu_{1} > 0$ this is necessarily zero. Now suppose the claim holds for $n-1$; we show it holds for $n$. We may restrict the $n$-dimensional integral $\langle E_{\lambda}(z), z^{\mu} \rangle_{0}$ to the contribution involving $z_{1}$, which one computes to be \begin{equation*} \int_{T} z_{1}^{\lambda_{1} - \mu_{1}} \frac{(1-z_{1}^{2})}{(1-az_{1})(1-bz_{1})(1-cz_{1}) (1-dz_{1})} \prod_{j>1} \frac{1-z_{1}z_{j}^{\pm 1}}{1-tz_{1}z_{j}^{\pm 1}} dT. \end{equation*} If $i=1$, then $\lambda_{1} > \mu_{1}$ and this integral (and consequently the $n$-dimensional integral) is zero. If $i>1$, then $\lambda_{1} = \mu_{1}$ and this integral is $1$. In this case, one notes that the resulting $(n-1)$-dimensional integral is exactly: \begin{equation*} \int_{T} E_{\widehat{\lambda}}^{(n-1)}(z_{2}, \dots, z_{n}) z^{-\widehat{\mu}} \Delta_{K}^{(n-1)} dT, \end{equation*} where $\widehat{\lambda} = (\lambda_{2}, \dots, \lambda_{n})$ and $\widehat{\mu} = (\mu_{2}, \dots, \mu_{n})$.
Note that conditions (i) and (ii) hold for $\widehat{\mu}$ and $\widehat{\lambda}$, and since this is the $(n-1)$-variable case we may appeal to the induction hypothesis. Thus, the above integral is zero; consequently $\langle E_{\lambda}^{(n)}, z^{\mu} \rangle_{0} = 0$ as desired. Finally, it remains to show that $\mu \prec \lambda$ implies conditions (i) and (ii). Recall that there are two cases for $\mu \prec \lambda$. In case 1), note that we have $\mu \leq \mu^{+} < \lambda$ with respect to the dominance ordering, so $\mu < \lambda$. This implies $\mu \stackrel{\text{lex}}{<} \lambda$ by Lemma \ref{order}. In case 2), it is clear. Now we show condition (ii). Suppose for contradiction that $\lambda_{i} = 0$, so that $\mu_{1} = \lambda_{1} \geq 0, \dots, \mu_{i-1} = \lambda_{i-1} \geq 0$ and $\mu_{i} < \lambda_{i} = 0$ and $\lambda_{k} = 0$ for all $i<k \leq n$. Then note that $\sum_{k=1}^{i} (\mu^{+})_{k} > \sum_{k=1}^{i} \lambda_{k}$, which contradicts $\mu^{+} \leq \lambda$. Thus, we must have $\lambda_{i} \neq 0$ as desired. \end{proof} \begin{theorem} \label{nonsymmnorm} Let $\lambda$ be a partition with $l(\lambda) \leq n$; then \begin{multline*} \langle E_{\lambda}^{(n)}(z;c,d), E_{\lambda}^{(n)}(z;c,d) \rangle_{0} = \langle E_{\lambda}^{(n)}(z;c,d), z^{\lambda} \rangle_{0} \\ =\prod_{i=0}^{m_{0}(\lambda)-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=m_{0}(\lambda)-1}^{2m_{0}(\lambda)-2} (1-t^{j}abcd). \end{multline*} \end{theorem} \begin{proof} The first equality follows from Theorems \ref{triang} and \ref{orthog}. For the second equality, we use arguments similar to those used in the proof of Theorem \ref{orthog}.
We first note that \begin{multline*} \langle E_{\lambda}^{(n)}(z;c,d), z^{\lambda} \rangle_{0} \\= \int_{T} E_{\lambda}^{(n)}(z;c,d) z^{-\lambda}\prod_{1 \leq i \leq n} \frac{(1-z_{i}^{2})}{(1-az_{i})(1-bz_{i})(1-cz_{i})(1-dz_{i})(1-cz_{i}^{-1})(1-dz_{i}^{-1})} \prod_{1 \leq i<j \leq n} \frac{1-z_{i}z_{j}^{\pm 1}}{1-tz_{i}z_{j}^{\pm 1}} \, dT. \end{multline*} One can integrate with respect to $z_{1}$, holding the remaining variables fixed; the reader can verify that the result is $\langle E_{\widehat \lambda}^{(n-1)}(z;c,d), z^{\widehat \lambda}\rangle_{0}$, where $\widehat \lambda = (\lambda_{2}, \dots, \lambda_{n})$. Iterating this argument shows that \begin{equation*} \langle E_{\lambda}^{(n)}(z;c,d), z^{\lambda} \rangle_{0} = \int_{T} \Delta_{K}^{(m_{0}(\lambda))} dT. \end{equation*} By Theorem \ref{nonsymmeval}, this is equal to \begin{equation*} \prod_{i=0}^{m_{0}(\lambda)-1} \frac{1}{(1-t^{i}ac)(1-t^{i}bc)(1-t^{i}cd)(1-t^{i}ad)(1-t^{i}bd)} \prod_{j=m_{0}(\lambda)-1}^{2m_{0}(\lambda)-2} (1-t^{j}abcd), \end{equation*} as desired. \end{proof} In the next theorem, we will prove that these polynomials $E_{\lambda}^{(n)}(z;c,d)$ are indeed the nonsymmetric Koornwinder polynomials indexed by a partition in the limit $q \rightarrow 0$. In order to do this, we will show that the limit is actually well-defined, and that polynomials satisfying the above triangularity and orthogonality conditions are uniquely determined. \begin{theorem} \label{qlim} The $q \rightarrow 0$ limits of the nonsymmetric Koornwinder polynomials are well-defined. Moreover, for a partition $\lambda$, $\displaystyle \lim_{q \rightarrow 0} U_{\lambda}^{(n)}(z; q,t; a,b,c,d) = E_{\lambda}^{(n)}(z;c,d)$. \end{theorem} \begin{proof} By virtue of the triangularity condition for nonsymmetric Koornwinder polynomials, we can write $U_{\lambda}^{(n)}(x;q,t;a,b,c,d) = x^{\lambda} + \sum_{\mu \prec \lambda} c_{\mu} x^{\mu}$.
The orthogonality condition for these polynomials then gives \begin{equation*} \langle U_{\lambda}^{(n)}(x;q,t;a,b,c,d), x^{\nu} \rangle_{q} = \Big\langle x^{\lambda} + \sum_{\mu \prec \lambda} c_{\mu} x^{\mu}, x^{\nu}\Big\rangle_{q} = 0 \end{equation*} for all $\nu \prec \lambda$. By linearity of the inner product, one can rewrite this as \begin{equation*} \Big\langle \sum_{\mu \prec \lambda} c_{\mu} x^{\mu}, x^{\nu} \Big\rangle_{q} = -\langle x^{\lambda}, x^{\nu} \rangle_{q} \end{equation*} for all $\nu \prec \lambda$. So the coefficient vector $\vec{c}$ is uniquely determined by the equation \begin{equation} \label{GSeqn} \vec{c} A_{[\prec \lambda]} = - \vec A_{\lambda}, \end{equation} where $A_{[\prec \lambda]}$ is the inner product matrix of the monomials $\{x^{\nu}\}_{\nu \prec \lambda}$, and $\vec A_{\lambda}$ is the column vector of inner products of $x^{\lambda}$ with $\{x^{\nu} \}_{\nu \prec \lambda}$. We need to check that the limit as $q \rightarrow 0$ of the entries of $\vec{c}$ is well-defined. It suffices to show that $\lim_{q \to 0} \det A_{[\prec \lambda]}$ is not equal to zero. We will exhibit a specialization of the parameters $(t,t_{0}, \dots, t_{3})$ for which this limit is nonzero. To this end, consider the following specialization of parameters: $(t = 1, t_{0} = 1, t_{1} = -1, t_{2} = t_{3} = 0)$; note that it is independent of $q$. Under this specialization, the inner product weight becomes $\Delta_{K} = 1$, and the inner product for monomials is $\langle x^{\lambda}, x^{\mu} \rangle_{q} = \delta_{\lambda, \mu}$. In particular, the matrix $A_{[\prec \lambda]}$ is the identity matrix, so it has determinant one. We write $\lim_{q \rightarrow 0} \vec{c}$ for the $q \rightarrow 0$ limit of the entries of the vector $\vec{c}$. Now, write $E_{\lambda}^{(n)}(x;c,d) = x^{\lambda} + \sum_{\mu \prec \lambda} d_{\mu}x^{\mu}$, using Theorem \ref{triang}.
Then, using Theorem \ref{orthog}, exactly as in the previous paragraph we find $\vec{d}$ satisfies $\vec{d} A_{[\prec \lambda]}^{0} = -\vec A_{\lambda}^{0}$, where $\cdot^{0}$ denotes the specialization of the inner product for monomials at $q \rightarrow 0$. But since the vectors $\vec{c}$ and $\vec{d}$ are uniquely determined via the method discussed above, and the matrices here are the $q \rightarrow 0$ specializations of the ones in (\ref{GSeqn}), we must have $\lim_{q \rightarrow 0} \vec{c} = \vec{d}$, as desired. \end{proof} We now use Definition \ref{nonsymmpart} to extend to the case where $\lambda \in \Lambda$ via a particular recursion. We will first need to define some relevant rational functions in $t,a,b,c,d$. \begin{definition} For a fixed index $i$ (as in the recursions below), define \begin{equation*} n_{\lambda} = -|\{ l<i: \lambda_{l} = -1 \text{ or } 0 \}| - 2|\{l>i+1: \lambda_{l} = 0 \}| - 1 \end{equation*} and \begin{equation*} r_{\lambda} = m_{-1}(\lambda) + m_{0}(\lambda) - 1. \end{equation*} \end{definition} These are statistics of the composition $\lambda$.
We use this to define rational functions $\{p_{i}(\lambda)\}_{1 \leq i \leq n}$ and $\{q_{i}(\lambda)\}_{1 \leq i \leq n}$ as follows \begin{equation*} p_{n}(\lambda) = \begin{cases} -ab-1, & \text{if } \lambda_{n} < -1 \\ -ab - 1 + abcdt^{2r_{\lambda}}, & \text{if } \lambda_{n} = -1 \\ 0, & \text{if } \lambda_{n} > 1 \\ -abcdt^{2r_{\lambda}}, & \text{if } \lambda_{n} =1 \\ -ab,& \text{if } \lambda_{n} = 0 \end{cases} \end{equation*} and for $1 \leq i \leq n-1$ \begin{equation*} p_{i}(\lambda) = \begin{cases} t-1, & \text{if } \lambda_{i} < \lambda_{i+1} \text{ and } (\lambda_{i}, \lambda_{i+1}) \neq (-1,0) \\ \frac{(1-t)t^{n_{\lambda}}}{abcd - t^{n_{\lambda}}}, & \text{if } (\lambda_{i}, \lambda_{i+1}) = (-1,0) \\ 0, & \text{if } \lambda_{i} > \lambda_{i+1} \text{ and } (\lambda_{i}, \lambda_{i+1}) \neq (0,-1) \\ \frac{(t-1)abcd}{abcd - t^{n_{\lambda}}}, & \text{if } (\lambda_{i}, \lambda_{i+1}) = (0,-1) \\ t, & \text{if } \lambda_{i} = \lambda_{i+1}. \end{cases} \end{equation*} Similarly, define \begin{equation*} q_{n}(\lambda) = \begin{cases} -ab, & \text{if } \lambda_{n} < 0 \\ 0, & \text{if } \lambda_{n} = 0 \\ 1, & \text{if } \lambda_{n} > 1 \\ 1+cdt^{2r_{\lambda}}(-ab-1+abcdt^{2r_{\lambda}}), & \text{if } \lambda_{n} = 1 \end{cases} \end{equation*} and for $1 \leq i \leq n-1$ \begin{equation*} q_{i}(\lambda) = \begin{cases} t, & \text{if }\lambda_{i}<\lambda_{i+1}\\ 0,& \text{if } \lambda_{i} = \lambda_{i+1} \\ 1,& \text{if } \lambda_{i} > \lambda_{i+1} \text{ and } (\lambda_{i}, \lambda_{i+1}) \neq (0,-1) \\ 1-\frac{(1-t)^{2}abcdt^{n_{\lambda}-1}}{(abcd-t^{n_{\lambda}})^{2}},& \text{if } (\lambda_{i}, \lambda_{i+1}) = (0,-1). 
\end{cases} \end{equation*} \begin{definition} For $\lambda \in \Lambda$ with $l(\lambda) \leq n$, define $E_{s_{i}\lambda}^{(n)}(z;t;a,b,c,d)$ (for $1 \leq i \leq n$) by the following recursion: \begin{equation} \label{recurqlim} T_{i}E_{\lambda} = p_{i}(\lambda)E_{\lambda} + q_{i}(\lambda)E_{s_{i}\lambda} \end{equation} where $p_{i}(\lambda), q_{i}(\lambda)$ are the rational functions of the previous definition, and for $\lambda$ a partition $E_{\lambda}^{(n)}$ is given by Definition \ref{nonsymmpart}. \end{definition} \begin{theorem} \label{fullqlim} For any $\lambda \in \Lambda$, $E_{\lambda}^{(n)}(z;t;a,b,c,d)$ is well-defined and we have \begin{equation*} \lim_{q \rightarrow 0} U_{\lambda}^{(n)}(z;q,t;a,b,c,d) = E_{\lambda}^{(n)}(z;t;a,b,c,d). \end{equation*} \end{theorem} \begin{proof} The case when $\lambda$ is a partition has been established by Theorem \ref{qlim}. The rest of the result is obtained by showing that \cite{Stok2} Proposition 6.1 admits the limit $q \rightarrow 0$ and that the recursion in fact becomes (\ref{recurqlim}) in this limit. As mentioned above, it is crucial that the operators $T_{i}$ for $1 \leq i \leq n-1$ are independent of $q$; these are the operators that appear here. We note that the parameters must be translated according to the following reparametrization: \begin{equation*} \{a,b,c,d,t \} \leftrightarrow \{ t_{n}\check{t_{n}}, -t_{n}\check{t_{n}}^{-1}, t_{0}\check{t_{0}}q^{1/2}, -t_{0}\check{t_{0}}^{-1}q^{1/2}, t^{2} \}; \end{equation*} in particular, this reparametrization yields $T_{0} = t_{0}Y_{0}, T_{n} = t_{n}Y_{n}, T_{i} = tY_{i}$ (for $1 \leq i \leq n-1$), where $\{Y_{i}\}_{0 \leq i \leq n}$ are the Hecke operators of \cite{Stok2}. A direct computation verifies that the limits exist in the cases $\lambda_{n} \leq 0$ and $\lambda_{i} \leq \lambda_{i+1}$ $(1 \leq i \leq n-1)$.
We then apply $T_{n}$ and $T_{i}(1 \leq i \leq n-1)$ to the resulting recursions and use the quadratic relations \begin{equation*} T_{n}^{2} = -ab - T_{n} -abT_{n} \end{equation*} and \begin{equation*} T_{i}^{2} = t- T_{i} +tT_{i} \end{equation*} and simplify to obtain the recursion in the remaining cases $\lambda_{n} > 0$ and $\lambda_{i} > \lambda_{i+1}$. \end{proof} As a byproduct of orthogonality for the $\{U_{\lambda}^{(n)}(z;q,t;a,b,c,d)\}_{\lambda \in \Lambda}$ we obtain the complete orthogonality for the $q=0$ limiting case. \begin{corollary} Let $\lambda, \mu \in \mathbb{Z}^{n}$ be compositions. If $\lambda \neq \mu$, we have $\langle E_{\lambda}^{(n)}, E_{\mu}^{(n)}\rangle_{0} = 0$. If $\mu \prec \lambda$, then we have $\langle E_{\lambda}^{(n)}, z^{\mu} \rangle_{0} = 0$. \end{corollary} \begin{proof} This follows from the orthogonality for the $q$-nonsymmetric Koornwinder polynomials and Theorem \ref{fullqlim}. \end{proof}
Fairytale Inspired Home Decor. Here are a number of the highest-rated Fairytale Inspired Home Decor pictures on the internet. We identified them from reputable sources and are sharing them here because this collection may be a useful reference for anyone considering Fairytale Inspired Home Decor options. We hope you find it helpful, and thank you for taking the time to browse our page. Please share these images with your friends and family via social media such as Facebook, Twitter, or Pinterest.
Norfolk councils set to spend more than £1m on housing rough sleepers

Two Norfolk councils are set to spend over a million pounds to buy homes for rough sleepers. Photo: GETTY - Credit: Getty Images/iStockphoto

Broadland District Council and South Norfolk Council are set to partner with a social housing provider to buy ten one-bedroom properties to serve as the homes. Clarion Housing Group, the UK’s biggest social landlord, will join forces with the councils to invest upfront costs of £150,000. The partnership will also be backed by £737,857 via central government’s next steps housing programme to tackle homelessness. Clarion owns and manages more than 4,500 homes across Broadland and South Norfolk.

The ten properties will provide long-term, secure homes for rough sleepers currently in temporary council accommodation. Broadland and South Norfolk will also receive £266,095 via the scheme to support those living in the homes for the next three years; this will include two dedicated support workers, with the aim of moving residents into permanent housing within two years to allow other rough sleepers to benefit. Clarion says it hopes the homes will be bought and ready to let out before the end of March 2021.

Michelle Reynolds, chief operating officer at Clarion, said: “Our history goes back over 100 years and core to our mission is providing homes to those in greatest need who would otherwise be at risk of homelessness.

“As winter approaches and we find ourselves in the midst of a second national lockdown, moving people off the streets and into secure homes has never been more important.
“We’re proud to be working with Broadland District Council and South Norfolk Council to provide homes for former rough sleepers.”

South Norfolk cabinet member for better lives, Yvonne Bendle, said: “We are continuing to get people off the streets and are working hard to stop them ending up there in the first place. This money ensures that the most vulnerable in our district will get the accommodation, as well as the help and support they need to get back on their feet.”

And Broadland cabinet member for housing and wellbeing, Fran Whymark, said: “This funding will ensure we are giving people the best possible chance and support to rebuild their lives.

“We do everything we can to prevent rough sleeping and support rough sleepers; this funding boost will help make sure no one slips through the net and will help put an end to homelessness for good.”
#1 Posted 23 July 2007 - 11:50 PM
Location: Likely Maui
#2 Posted 23 July 2007 - 11:52 PM
[image]
#3 Posted 24 July 2007 - 12:05 AM
Our Caletas Wedding Slideshow (by Leigh Miller)
Our Caletas Wedding Video
#4 Posted 24 July 2007 - 12:43 PM
We haven't decided on a location yet, but are considering Molokai, Hawaii!
#5 Posted 24 July 2007 - 12:57 PM
Proud Mama to Evelyn Eileen since June 8, 2010
#6 Posted 24 July 2007 - 01:01 PM
#7 Posted 24 July 2007 - 01:10 PM
#8 Posted 24 July 2007 - 01:12 PM
#9 Posted 24 July 2007 - 01:39 PM
#10 Posted 24 July 2007 -
TITLE: Peano arithmetic vs. fast-growing hierarchy with pathological fundamental sequences QUESTION [13 upvotes]: A fundamental sequence for a countable limit ordinal $\alpha$ is an increasing sequence $\{\alpha[i]\}$ of ordinals of length $\omega$ such that $\lim_{i\rightarrow\omega}\alpha[i]=\alpha$. There are many (continuum many, in fact) possible choices of fundamental sequence for any ordinal; some are quite natural, like $\omega^2[n]=\omega n$, and some are quite odd, like $\omega[n]=\Sigma(n)$. The most common definition of fundamental sequences below $\varepsilon_0$ is via the Wainer hierarchy. Using these, it is known that in the fast-growing hierarchy, $F_{\varepsilon_0}(n)$ is a total recursive function which outgrows all recursive functions that the Peano axioms can prove total. A friend of mine asked whether this necessarily holds under different choices of fundamental sequences. To me it seems the answer should be no, because we can choose very slow fundamental sequences for all ordinals, possibly making $F_{\varepsilon_0}$ slower than $F_\alpha(n)$ for some $\alpha<\varepsilon_0$ in the Wainer hierarchy, but my friend believes the answer to be yes. To put it into a single question: Is it true that for any choice of fundamental sequences for ordinals below $\varepsilon_0$ we have that, in the fast-growing hierarchy, $F_{\varepsilon_0}(n)$ outgrows all functions provably total recursive in PA? Does it make any difference if we replace it with the Hardy hierarchy? Thanks for your feedback. REPLY [13 votes]: The answer is no. Choose a fundamental sequence for $\epsilon_0$ itself in the usual way, which I think is $\epsilon_0[n]=\omega^{\omega^{{\vdots}^\omega}}$ (a tower of $n$ $\omega$'s), and then modify the earlier fundamental sequences for $\alpha=\epsilon_0[n]$ by making each start with $0,1,2,\ldots,n$, before resuming with the usual values. In particular, we have thereby ensured $\alpha[n]=n$ for $\alpha=\epsilon_0[n]$.
It now follows, according to the rules of the fast-growing hierarchy, that $F_{\epsilon_0}(n)$, which by definition is $F_{\epsilon_0[n]}(n)$, is the same as $F_\alpha(n)$ where $\alpha=\epsilon_0[n]$, but since this is a limit ordinal, it is equal to $F_{\alpha[n]}(n)$, which is the same as $F_n(n)$, by construction. So with these modified fundamental sequences, the top function $F_{\epsilon_0}$ is basically the same as $F_\omega$. This seems completely to confirm your intuition that slowing the fundamental sequences down could make the diagonal function at the top very small. If we dropped the requirement that the fundamental sequences must be strictly increasing, we could pad with $n$ many $0$'s instead, ensuring that $\alpha[n]=0$ for limit $\alpha=\epsilon_0[n]$, and get a more extreme situation $F_{\epsilon_0}(n)=F_{\epsilon_0[n]}(n)=F_{(\epsilon_0[n])[n]}(n)=F_0(n)$.
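At the bottom of the hierarchy the collapsing effect is easy to see by direct computation. The sketch below (our own illustration, not from the answer) implements $F_\alpha$ for finite $\alpha$ and for $\omega$ with the standard fundamental sequence $\omega[n]=n$; with the modified sequences above, $F_{\epsilon_0}(n)$ reduces to exactly this $F_\omega$:

```python
def F(alpha, n):
    """Fast-growing hierarchy up to omega.

    alpha is a natural number k (standing for the ordinal k), or the
    string 'omega' with the standard fundamental sequence omega[n] = n.
    """
    if alpha == 'omega':       # limit ordinal: diagonalize, F_omega(n) = F_{omega[n]}(n)
        return F(n, n)
    if alpha == 0:             # base case: F_0(n) = n + 1
        return n + 1
    result = n                 # successor: F_k(n) = F_{k-1}^n(n), n-fold iteration
    for _ in range(n):
        result = F(alpha - 1, result)
    return result

assert F(1, 5) == 10                    # F_1(n) = 2n
assert F(2, 3) == 24                    # F_2(n) = n * 2^n
assert F('omega', 2) == F(2, 2) == 8    # the limit step just diagonalizes
```

Already F('omega', 4) is astronomically large, so only tiny arguments are feasible, but the structure of the limit step is exactly what the padding argument above exploits.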
The Centers for Medicare and Medicaid Services (CMS) has issued a ruling that will impact the payment of proceeds in Workers' Compensation Medicare Set-Aside Arrangements (WCMSA). CMS has ruled out the use of TENS (Transcutaneous Electrical Nerve Stimulation) units for the treatment of chronic low back pain. "For those WC [workers' compensation] cases that were not settled prior to June 8, 2012, and where the WCMSA proposal includes funding for TENS for CLBP [chronic low back pain].)" Case law throughout the country has been divided on whether TENS units should be authorized to cure and relieve low back pain. Click here to read: Decision Memo for Transcutaneous Electrical Nerve Stimulation for Chronic Low Back Pain (CAG-00429N) Click here to read: Impact of the Removal of Coverage of Transcutaneous Electrical Nerve Stimulation (TENS) Units for Chronic Low Back Pain (CLBP) on Workers' Compensation Medicare Set-Aside Arrangement (WCMSA) Proposals – INFORMATION For over 3 decades the Law Offices of Jon L. Gelman 1.973.696.7900 jon@gelmans.com have been representing injured workers and their families who have suffered occupational accidents and illnesses.
Kevin Nguyen and Nick Martens turn the spotlight towards some niche demographics struggling to find a voice this election season. Hot Topic: Kerning Though many have touted the Obama camp for its graceful, unified design work, conservative typographers often cite the candidate’s font selections as pretentious. Gotham and Requiem, the two typefaces used on all of Obama’s signage and logos, were created by type foundry Hoefler & Frere-Jones, and are only available at a steep price. Graphic Designers for McCain (GDM) urges candidates to adopt typefaces that come standard on desktop computers, like Arial and Comic Sans. GDM also approves of McCain’s use of Optima for his campaign-related signage. Optima is the font used on the Vietnam Memorial, which is completely tasteful. Hot Topic: Not lipstick Hockey, which is still considered a sport, is one of our country’s most popular pastimes. Still, America’s political strength and its ability to ward off terrorist activity is hindered by the fact that Canada and Russia are better at hockey. Though it may seem logical to simply start watching NHL games again, it’s the means—not the ends—that creates a great nation of one-timing players. We must nurture our growing skaters, and who drives them to practice? Hockey moms across the country are enraged by Tina Fey Sarah Palin’s insinuation that they look like bulldogs wearing lipstick. In response, they proudly adorn their minivans with the official Obama car magnet that comes with any $14 donation. And remember Obama’s successful appearance on The Ellen DeGeneres Show? It’s a well-known fact that 89% of hockey moms watch Ellen, and of them, 76% trust its host, despite her being a lesbian.
Hot Topic: Electroconvulsive therapy Ron Paul, a little-known libertarian congressman from Texas, rose to pseudo-prominence on the Internet earlier this year when he ran for the Republican nomination on such sensible platforms as “reestablishing the gold standard,” and “abolishing the Federal Reserve.” His base consists of the socially maladapted thirteen-year-old boys that frequent the “user-driven news” websites, reddit and Digg. Sane People for Ron Paul (SPRP) is a group of voters seeking to distance themselves from the public perception that all Ron Paul supporters are frothing, awkward, hormone-filled bags of acne who will spam your inbox with misspelled vitriol if you don’t give equal coverage to their completely viable alternative candidate. They are also likely to, in casual conversation, refer to the mainstream media as “The MSM.” Some have questioned SPRP’s credibility on the basis that, even though Ron Paul is not actually running for president, his signs are still all over my goddamn neighborhood. Hot Topic: Jason Kidd The Dallas Mavericks are synonymous with disappointment within the National Basketball Association. They’ve never won an NBA championship. Even last season, as the number one seed going into the playoffs, they were knocked out in the first round by the Golden State Warriors, who were ranked eighth. Many attribute last season’s failure to the team’s squandering of its budget on point guard Jason Kidd. Not coincidentally, they like McCain’s pick for running mate. The Mavericks have a history of letting people down, which is why they will continue to relate to the Original Maverick even after November. Hot Topic: Spring Break! Wooo! Sarah Palin does not believe that a fifteen-year-old girl who gets pregnant after being raped by her father should be allowed to have an abortion. Pregnant Teens for Palin (PTP) thinks that position is way hot. Like, what’s the deal with abortion, anyway?
I mean, do you want some pervy doctor poking around under your skirt? Hello! Besides, after the party at Chad’s house, I promised my boyfriend I’d only let him see me naked. Omigod! I’m going to be totally fat in, like, a month! He’s not going to dump me, is he? Currently, Bristol Palin is PTP’s only registered member. Hot Topic: Drinking away the pain
You can adjust the back of this lounger 6 different ways for whatever comfort you desire. Sit upright and enjoy your favorite book, or lie flat and soak up the sun for an afternoon siesta. Blending mesh and a-grade teak gives this sun lounger a contemporary and classic look that will stand the test of time. The back height measures 30.5” and the flat bed is 52” long to accommodate many heights. The sides of the sunbed have 2.75” of a-grade teak that borders the mesh for a polished finish. For easy mobility the lounger has 2 back wheels with stainless steel fixtures, wrapped in rubber. This sunbed is also available in black and taupe mesh. This elegant sun lounger is the perfect addition to your backyard garden or pool oasis. No cushion is needed for this most comfortable piece.
The Family Guy Character content type that was created in the previous lesson will be used for this lesson. Complete the previous lessons in the ladder before starting this one.

1. Navigate to Admin > Content and select "Add content".
2. Select "Family Guy Character".
3. Fill in the form using the information below as a reference.

Content Type: Family Guy Character
First Name: Peter
Last Name: Griffin
Birth Date: Jan 23, 1972
Birth Place: Mexico
Related To: Lois Griffin, Chris Griffin, Meg Griffin, Stewie Griffin
Profile Pic: (Use any picture of him)
Bio: Peter Griffin is the protagonist of the show and title character. He's a man of Irish descent currently residing in Quahog, Rhode Island with his wife Lois Griffin. He was, however, born in Mexico, where his mother had tried unsuccessfully to abort him. They have three children, Chris, Meg, and baby Stewie.

4. Click save.

You have just added content from a custom content type to your Drupal site!
TITLE: Horizontal, Vertical, & Oblique Asymptote? QUESTION [0 upvotes]: So my precal assignment is asking me to find the vertical, horizontal, and oblique asymptotes. I got to the following problem. $\displaystyle R(x) = \frac{3x+5}{x-6}$ Would the horizontal asymptote be $y=3$ & the vertical asymptote be $x=1$? REPLY [1 votes]: For a rational function of polynomials, which you have an example of (using just linear polynomials), the vertical asymptotes occur where the denominator is zero, provided the numerator is not also zero at the same value of $x$. So look again at your answer for that asymptote. A rational function will have either a horizontal or an oblique asymptote, but not both. When the degrees of the numerator and denominator polynomials are the same, the asymptote of the function is "horizontal"; if the numerator's degree is larger, the asymptote is "oblique".* You can find the horizontal or oblique asymptote by polynomial division or by considering the ratio as $|x|$ becomes very large (I am avoiding "limits at infinity" in this discussion since you said this is for a pre-calculus course). Your horizontal asymptote is correct. *You have two linear functions, so the degrees are equal. If the numerator polynomial is higher in degree by $1$, the asymptote is a non-horizontal line and referred to as "oblique". If the numerator is higher in degree by more than $1$, the asymptote is not a line, but a polynomial function. The non-horizontal asymptote functions can be found by polynomial division.
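Worked out explicitly, the division step the answer alludes to looks like this: $$\frac{3x+5}{x-6} = \frac{3(x-6)+23}{x-6} = 3 + \frac{23}{x-6}.$$ As $|x|$ grows, the remainder term $\frac{23}{x-6}$ shrinks toward $0$, confirming the horizontal asymptote $y=3$. The denominator vanishes at $x=6$ while the numerator there is $3\cdot 6+5=23\neq 0$, so the vertical asymptote is $x=6$; at $x=1$ neither the numerator nor the denominator is zero, so nothing special happens there.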
Peter Herpst—aka the Outlaw Gardener—and I have a late May tradition of meeting up at my place and then heading to the Rare Plant Research open house. We make a day of it, hitting other nurseries in the area. Somewhere along the way he started bringing me a bougainvillea. I wrote about this generosity last year (here) and included photos of the beautiful plants he's brought. This year there was no meeting up at my house. Due to social distancing and the whole Covid-19 nightmare, we arrived at Rare Plant Research in our own cars, with no plans beyond just shopping there. But guess what... Yes, that masked man is Peter—my bougainvillea fairy-godfather—and he brought a bougainvillea! Can you even? You guys, I nearly cried. Seriously. So much has changed, and yet this. I wanted to hug him. I didn't, but I wanted to. Sure, some might think this is just a silly tradition, but the friendship behind it, and the fact that all summer long I'll see those ridiculously bright papery bracts and think of my friend Peter, well, it's pretty damn special if you ask me... The plant will be living by our back door; it's a sunny heat sink, the tomatoes love it there, so I figure the bougainvillea will too. Thank you Peter! (and yes, I am getting teary-eyed again...) Weather Diary, May 19: Hi 64, Low 52/ Precip trace All material © 2009-2020 by Loree Bohl for danger garden. Unauthorized reproduction prohibited and just plain rude. That is such a very sweet gesture, and a nice adventure tradition - especially so as you kept it up in the midst of a pandemic! And, that hot pink looks fantastic against your dark wall. Love it! Glad also that you got to see each other, even if from a distance. I miss Peter! It was a pretty fabulous day, pandemic be damned. What a sweet man. I am so glad that you two got together even though it wasn't the regular way. I probably would have cried as soon as I saw those pink blooms. At least you got together. I miss Pete's posts so much.
You are not alone, his blog had a lot of fans. Never mind the bougainvillea, I'm just happy to get a glimpse of Peter looking svelte. I miss his blog and humor and hold on to (very little) hope that he may start posting again. I love your annual get-together tradition. You never know, maybe one day he'll get the urge... So good to see Peter looking well! We've been wondering how he was doing. I'm learning Peter has an even bigger fan club than I realized. So happy for you--and for Peter. And a little envious. Cheers There are two more open Saturdays at RPR...maybe you and Bill should make it an outing? PETER!!! I'm so glad you got to visit with him, even without a hug. He looks great!! It was too short, but wonderful. Thank you for sharing this. I will look at creating more specific horticultural traditions. My son knows to give me yellow freesias or no cut flowers at all. He also started the tradition of watching "Paul Blart: Mall Cop 2" every Thanksgiving. I'd enjoy a friend with a bougainvillea, too. I still don't know who so many bloggers are. I know their blogs, but not their names. Hi, Peter, great blog. And so many bloggers don't include photos of themselves, so you can pass right by them and not even know! The unexpected gift is the most cherished for both the giver and the recipient. How perfect for both of you. I know you had the most wonderful day and are still smiling in your thoughts. Right you are! I got a little teary myself! Traditions may be even more important at times like this, and I can't think of anything better than an annual plant delivery. It's good to "see" Peter again too. I miss his blog posts. (Alison's too, if you're reading this, Alison!) Yes, the blog-o-sphere is less fun without the both of them. Small gestures like this mean so much when we hear so many negatives. A lovely gesture by a great friend. The negatives are overwhelming at times, aren't they? I miss Peter's blog so much. Me too.
Bougainvillea remind me of the Mediterranean and summer holidays. Not a bad thing! Dear man! So thoughtful and kind. How is he doing? He looks well in this image. I sure miss his posts. He is doing well and definitely has retained his sense of humor. What everyone else said. Big smile! What a sweet post. I got a little teary-eyed too, thinking of our little RPR tradition & how it had to change a bit this year. Sorry I didn't see this post until just now. So many posts to catch up on!
By Marcie Bower, Lic.Ac. This post is copied from our older, original blog. Original post date 5/5/2017. Most herbs that are considered to be adaptogens come from forms of traditional medicine that have been practiced for thousands of years. These herbs have been used medicinally since long before we knew about their myriad biochemical effects in the body. Many of the most common adaptogenic herbs come from Traditional Chinese Medicine or Ayurveda (traditional Indian medicine). Adaptogens are unique in that they exert numerous, at times contradictory, effects on a biochemical level in the body. For instance, some adaptogens have been shown to both increase blood pressure in cases of low blood pressure, and reduce blood pressure in cases of hypertension (high blood pressure). Thus they are said to have a homeostatic effect on the body, meaning they bring the body back into balance. Through myriad biological effects – including modulating formation of stress hormones (corticosteroids and ACTH), regulating secretion of numerous stress hormones (including catecholamines), regulating central nervous system function, reducing oxidative stress, and increasing protein synthesis – adaptogens are known to improve mood, increase energy, improve sleep quality, regulate immunity, and increase focus. From a modern understanding of the body, we know that adaptogens are particularly helpful in cases of over-exhaustion, adrenal fatigue, and prolonged periods of stress. In their native traditions, adaptogenic herbs traditionally appear as "tonic" herbs, known to have strongly nourishing effects on various body systems. This makes sense given our modern understanding – they offer the body additional support where it is lacking. From a Chinese Medicine perspective, we use some of these adaptogenic herbs in traditional herbal formulas alongside other herbs to target a particular symptom or condition.
Common Chinese Herbs that have adaptogenic properties include Reishi Mushroom (Ganoderma lucidum), Cordyceps (Dong Chong Xia Cao), Licorice (Gan Cao/Zhi Gan Cao), Eleuthero (Ci Wu Jia), Panax Ginseng (Ren Shen), and Astragalus (Huang Qi). Many traditional Chinese Medicine formulas contain these herbs naturally – in other cases, we can add these herbs into a traditional formula as appropriate for a particular condition. That way, you have the best of both worlds – the traditional use of these herbs, and the modern understanding of all the ways they can help you once in your body. Stressed? It doesn’t have to feel as overwhelming or exhausting as it does.
The Great Lakes Restoration Initiative, which has received more than $2 billion in federal funding since it was established in 2010, has almost unanimous backing from members of both parties across the region's eight states, from New York to Minnesota. Read More » Trump administration sending Congress $4.1-trillion budget The budget also increases spending by $2.6 billion for Customs and Border Protection to stop illegal immigration. The Trump budget seeks to cut back from these programs to put them, the administration says, more in line with current needs. Read More » Senate intel issues 2 new subpoenas related to Flynn. Read More » Trump budget calls for deep cuts to social safety net Presidential budgets are mere suggestions, and the White House has discretion to assume higher economic growth rates of up to 3 percent or so under Trump's agenda of tax changes, loosened regulations and infrastructure spending. President Donald Trump's $4.1 trillion budget plan is drawing rebukes, even from some Republican allies, for its politically unrealistic cuts to the social safety net and a broad swath of other domestic programs. Read More » Pop stars pay tribute to Ariana Grande concert attack victims According to Sky News, Runshaw College confirmed that Callander matriculated at its school. Meanwhile the search goes on for those still missing, including Charlotte Campbell's 15-year-old daughter Olivia, who had gone to the concert with her friend Adam to celebrate his birthday. Read More » Manchester football clubs pay tribute to Arena terror attack victims United Kingdom counter-terrorism authorities announced early they would treat the incident as terror related, according to multiple news reports. She said security services were working to see if a wider group was involved in the attack, which fell less than three weeks before a national election.
Read More » Trump Budget Calls For 2.1% Military Pay Raise In 2018 Numerous voters who propelled President Trump into the presidency last November would see significantly less from the federal government. At the same time, the blueprint boosts spending for the military by tens of billions and calls for $1.6 billion for a border wall with Mexico that President Trump repeatedly promised voters the USA neighbor would finance. Read More » Young girls confirmed dead in Manchester concert attack The Independent reported the group left a statement through ISIS channels on the messaging app Telegram, which said "With Allah's grace and support, a soldier of the Khilafah managed to place explosive devices in a gathering of crusaders in the middle of the British city of Manchester ". Read More » Trump envisages 'fast' replacement for Federal Bureau of Investigation director post GOP Sen. John Cornyn , the No. 2 Senate leader and a former Texas attorney general, also interviewed. The FBI is investigating meddling by the Russians in the USA election and whether anyone on the Trump team colluded with them. Bush. "Doing away with the briefings would reduce accountability, transparency and the opportunity for Americans to see that, in the US system, no political figure is above being questioned". Read More » Ariana Grande opening act tweets her 'heart is heavy' Manchester police officers and counter-terrorism agents are now reviewing surveillance camera footage from the night of the terror attack to determine if Salman Abedi conducted a "recce" or surveillance of the large arena before launching the Manchester bombing . Read More » Dozen of Children, Teenagers in Hospital After Manchester Terror Attack Driver AJ Singh said he tried to help wherever he could. Wembley Arena had a similar message, stating it already had "enhanced security checks" in place and asking members of the public to allow extra time for entry. 
Manchester police issued a statement confirming fatalities and injuries. The British government is planning an emergency Cabinet meeting for later Monday morning. Read More » Islamic State has claimed responsibility for Ariana Grande concert attack It should have been a night they grew up to wistfully remember. Armed police searched two residential areas in south Manchester , less than a mile from the supermarket where authorities arrested earlier a 23-year-old man in connection with the attack. Read More » Man arrested in connection with Manchester terror attack Manchester terror attack: the world pays tribute Tue, May 23, 2017 Mourning for Manchester. The attack has been described as the deadliest on Britain since four men killed 52 people in suicide bombings on London's transport system in July 2005. Read More » Colorado governor responds to President's budget proposal At the same time, the British media outlet The Guardian compared Trump's budget plan to the neo-liberal policy of the 1980s conducted by Ronald Reagan and Margaret Thatcher. "The first step is to bring federal spending under control and return the federal budget to balance within 10 years". "The Trump administration has taken so many important pieces of the budget off the table", MacGuineas said. Read More » Trump's budget cuts to slash trillions from anti-poverty funding The sheer ambition of the president's plan, which would cut domestic agencies by 10% in 2018 and by 40% in 2027, make the budget even less likely to gain traction on Capitol Hill , where lawmakers regularly flout the annual blueprint offered by the executive branch. Read More » Poor and disabled big losers in Trump budget; military wins Some of the proposed cuts to programs like Social Security's disability benefits are created to push more people into the workforce. "It's wrong. 
Unlike many parliamentary democracies like India and the United Kingdom wherein the finance minister personally delivers speech on the floor of the Parliament, in the USA, the White House sends hard copies of the president's budget proposals . Read More » Who wins and who loses in Trump's budget President Donald Trump's 2018 budget plan released Tuesday faced near-universal pushback in Congress , as even members of his own party expressed skepticism over its provisions. "I'm not a fan of surprises, and you have to set realistic expectations, because there are real trade-offs and choices". The budget also does not include the tax-reform plan the administration has touted, which most estimates figure would add more than $5 trillion to the budget. Read More » Londonderry musician in Manchester speaks of shock across city This savage attack on young people will require a response, but we will not hand victory to the attacker by allowing ourselves to become divided. Grande, who was due to give a concert in London later on Tuesday, said she was "broken" in a tweet . Read More » Teen victim of bombing at Ariana Grande concert was super-fan Authorities identified the suicide bomber in the deadly blast at a Manchester, England, concert , but it's not yet clear whether the Islamic State's claim of responsibility for the blast is valid. Hotels near the venue opened their doors for children who had not yet been reunited with their parents, and #RoomInManchester began trending on Twitter as residents offered a place to stay for anyone stranded after the blast. Read More » Texas House Votes to Restrict Transgender Kids' Bathroom Use in Public Schools Thwarting Lieutenant Governor Dan Patrick's threat to force a special session over the bathroom bill, the Texas House voted Sunday night to pass bathroom restrictions for transgender students in all public schools. Business groups and sports leagues both voiced concern about the Senate's version of the bill when it passed in March. 
"I would think it's unprecedented that this many actions by the Legislature will be contested in court", said state Rep. Read More » 'Pls help me...': Frantic parents hunt for missing kids after concert blast The attack was carried out by a lone male suicide bomber who detonated an improvised explosive device. "Some people were screaming they'd seen blood but other people were saying it was balloons busting or a speaker had been popped", said another eyewitness. Read More » Trump's budget proposal includes huge cuts to food stamps At this point, it isn't clear how much of a priority the administration's new budget is to the president himself. The budget doesn't propose cutting funding for Medicare or "core" Social Security benefits. He also seeks cuts in food stamps and disability insurance. The short answer is welfare spending and regulations. All domestic spending except for the military and homeland security would be cut by $57 billion, which is around 10 per cent, over the next decade. Read More » A Determined Texas Legislature is Intent on Punishing Trans People The measure now goes back to the Senate , which previously approved a broader version mandating that standard for everyone using public restrooms. "Members of the House wanted to act on this issue, and my philosophy as speaker has never been to force my will on the body". Read More » Netanyahu vows Israel 'will continue to build in our capital' Jerusalem A short video promoting President Donald Trump's first foreign trip to the Middle East and Europe posted to the White House YouTube channel showed a map of Israel that excluded disputed territories the Gaza Strip, West Bank and Golan Heights. Read More » Texas advances school transgender bathroom law A stricter measure, more in line with North Carolina's House Bill 2, passed the more conservative state Senate earlier this year. Texas is looking to pass a transgender bathroom bill similar to North Carolina's controversial law. 
Read More » Manchester concert goer: there was 'almost no security check' From the bottom of my heart, I am so so sorry. "As we were leaving a bomb or explosion went off meters in front of me", she wrote. At least 10 police cars and five ambulances were seen rushing to the scene, the Mirror reported. There was no immediate claim of responsibility. Australia's prime minister has told the Australian Parliament that the deadly explosion at Manchester Arena appeared to be a "brutal attack on young people everywhere". Read More » British police: Fatalities after incident at Manchester Arena In a statement the management team said: "Words can not describe our sorrow for the victims and families harmed in this senseless attack". "The Scottish Government is working with Police Scotland and the UK Government to ensure that we have a full understanding of the developing situation". Read More » Turkey slams US over 'aggressive' acts against bodyguards Members of Turkish President Recep Tayyip Erdogan's security detail were caught on video violently clashing with protesters outside the ambassador's Sheridan Circle residence, leading to bipartisan outrage and even calls to oust the ambassador from the United States. Read More » Iran accuses USA of 'Iranophobia', arming 'dangerous terrorists' The Republican said he was "honoured to be received by such gracious hosts", as he was given the highest civilian honour in Saudi Arabia - despite having previously criticised the country . Israel and Saudi Arabia have said they've met behind closed doors to talk about Iran , a nation neither of them gets along with. "This is not a battle between different faiths, different sects, or different civilisations", he said. Read More » 'Bathroom bill' closer to becoming Texas law The amendment was added to an unrelated education bill in the Texas House and would only apply to children while attending public and charter schools. Rep. 
Chris Paddie (a Republican from the East Texas town of Marshall) authored the amendment, saying it has "absolutely no intent" to discriminate to which I say "bullshit". Read More » Trump budget promises balance in decade, relies on deep cuts Donald Trump's budget includes plans to cut the vital Medicaid program by $800 billion, as he once again targets the most vulnerable to absorb the brunt of the pain during his tenure. In addition, eleven Senate Democrats wrote Trump a letter last week warning him that his previously announced FY 2018 budget proposal could derail progress under The Cures Act - last year's legislation to promote biomedical innovation - that passed with broad bipartisan support. Read More » Firing 'nut job' James Comey took off pressure, says Donald Trump Comey from the equation, he could effectively end the investigation. A current White House official is a significant person of interest in the law enforcement investigation of possible ties between Donald Trump's presidential campaign and Russian Federation, the Washington Post has reported , citing people familiar with the matter. Read More » In House Briefing, Rod Rosenstein Is Mum on Comey Firing Memo But since Attorney General Jeff Sessions has recused himself from the Russian Federation investigation, Mueller will answer to Sessions' deputy, Rod Rosenstein. Trump is destroying President Trump one tweet at a time , with his loose lips and determination to run a vaudevillian presidency reminiscent of Colonel Mustard in a board game of Clue. Read More » Renegotiating NAFTA won't bring back many jobs, say some economists The trade agreement remains " critical " for feed crop and grain farmers in the U.S. and offers access to countries that are among the top market for feed crops including corn, sorghum and barley, and that are increasing destinations for distiller's dried grains with solubles (DDGS), said Councell. Read More »
SECOND QUARTO

...was to Lyndon Johnson. It can probably never be proven, but it was the impression of countless supporters of Humphrey for the vice-presidency, voiced to MFDP personnel over and over again, that Johnson had made the seating of the regular delegation a condition for Humphrey's selection as nominee for the vice-presidency. In short, Humphrey had to win for the regulars in order to win for himself.

bankrupt liberals

And it was here that American liberalism displayed its ultimate bankruptcy. This writer talked personally to more than a dozen Humphrey supporters who expressed great sympathy for the MFDP cause, readily admitted the moral justice of MFDP's cause, accepted the explanation that the extra-legality of the MFDP delegation was irrelevant because the rival delegation was itself illegal, but then refused to support MFDP. When pushed to the wall in this manner, these delegates would frankly admit they stood to reap such a significant political and material gain from Humphrey's vice-presidency that they could not take the chance of supporting MFDP, having it win, and thus denying to Humphrey the vice-presidential nomination. (As noted above, these Humphrey supporters were convinced that suppression of the MFDP was the price of the vice-presidency for Humphrey.)

The rest of the story of Atlantic City is too familiar to recount in detail here: the delay in the report of the credentials committee from Saturday until Tuesday; the frenzied conferences with MFDP; the arm-twisting (one delegate threatened with the loss of a judgeship, another with bankruptcy; others we heard as rumors); the eventual "compromise" which involved seating the regulars and giving two members of the MFDP delegation (and even these not to be selected by the delegation) hurriedly devised seats as "delegates-at-large of the convention." The "compromise," meaningless as it was, served its purpose. It provided the liberals with an "out." It gave them absolution in deserting a cause which everything they said they stood for required them to support to the bitter end. There was one noble exception: Congresswoman Edith Green from Oregon. She had proposed a compromise which would have administered the loyalty oath (to the convention nominees) to each member of both delegations, and those of each who took the oath would constitute the delegation to be seated, with the votes of Mississippi to be distributed evenly among them. She stuck to that proposal, refusing to support the credentials committee in the "com-
NextGen Supply Chains: Let’s get Smart & ‘Phygital’! Date: 24 Oct 2019 Location: BE - Brussels Organised by: bluecrux Learn from Vlerick, Cinionic (Barco), VCST & many other top-notch presenters how your company can converge both the digital and the human factor as the gateway to growth. In short: how to get your supply chain smart, ‘phygital’ and ready for the next generation! More information will follow soon, but one thing is certain: you will want to save the date! Register now
\begin{document} \maketitle \section{Introduction} For a bounded smooth domain $\Omega\subset M^n$ of a Riemannian manifold, the eigenvalue problem for the Laplacian on $\Omega$ with Dirichlet boundary condition is \[\Delta u = -\lambda u\text{ where }u|_{\partial\Omega} = 0.\] The eigenvalues form a nondecreasing, unbounded infinite sequence, i.e.~ \[0<\lambda_1(\Omega)<\lambda_2(\Omega)\le\lambda_3(\Omega)\le\cdots\] For $n=2$, the domain $\Omega$ can be interpreted as a membrane, whose boundary is fixed as if it were a drum. Since there are no vibrations on the rim of a drum, it holds that $u|_{\partial\Omega} = 0$. The eigenvalues $\lambda_j(\Omega)$ depend on $\Omega$. Their exact values are unknown in general, except in a few cases. For triangles in $\mathbb R^2$, the eigenvalues of only three triangles have been computed explicitly: those of the equilateral triangle and two special right triangles. For triangles on $S^2$, Seto, Wei, and Zhu \cite{2020arXiv200900229S} have explicitly calculated the eigenvalues and eigenfunctions of spherical triangles that are halves of spherical lunes, including the equilateral Schwarz triangle $(2\text{ }2\text{ }2)$. B\'erard \cite{CM_1983__48_1_35_0} has explicitly calculated the eigenvalues of certain Schwarz triangles $(p\text{ }q\text{ }r)$ where $p,q,r$ are positive integers. In the combinatorics and probability literature, the eigenvalues of Schwarz triangles are used to enumerate lattice walks such as excursions and to estimate exit times from cones of Brownian motion \cite{MR4050731,MR2439411}. In this paper we study a corresponding question on the sphere. We compute the first two eigenvalues and corresponding eigenfunctions of the equilateral Schwarz triangle $(\frac32\text{ }\frac32\text{ }\frac32)$ on the sphere.
\section{Eigenvalues of the Schwarz triangle $(\frac32\text{ }\frac32\text{ }\frac32)$ on the sphere} We follow Wormer's calculation of tetrahedrally symmetric harmonics in \cite{doi:10.1080/00268970110086318}. \subsection{Schwarz triangle $(\frac32\text{ }\frac32\text{ }\frac32)$} A {\em Schwarz triangle} $(p\text{ }q\text{ }r)$ is a spherical triangle with vertex angles $\pi/p$, $\pi/q$, and $\pi/r$ that can be used to tile a sphere a finite number of times through repeated reflections in its edges \cite{MR0370327}. In this paper, we consider the Schwarz triangle $(\frac32\text{ }\frac32\text{ }\frac32)$, an equilateral spherical triangle with all three vertex angles $2\pi/3$. Consider the spherical triangle $S^2/T$ where $S^2$ is the unit sphere in $\mathbb R^3$ and the deck transformation group $T\cong A_4$ consists of orientation-preserving symmetries of the tetrahedron with vertices \begin{align*} \vec{h}_1 &= (-1/\sqrt{3},-1/\sqrt{3},1/\sqrt{3}), \\ \vec{h}_2 &= (1/\sqrt{3},1/\sqrt{3},1/\sqrt{3}), \\ \vec{h}_3 &= (1/\sqrt{3},-1/\sqrt{3},-1/\sqrt{3}), \text{ and} \\ \vec{h}_4 &= (-1/\sqrt{3},1/\sqrt{3},-1/\sqrt{3}) \end{align*} in Cartesian coordinates. Take $(\theta,\varphi)$ to be the geodesic polar coordinates centered at the north pole, with the spherical metric given by \[g=d\theta^2+\sin^2\theta\,\mathrm d\varphi^2\] where $0\le\theta\le\pi$ and $0\le\varphi\le 2\pi$. The associated Laplacian is \[\Delta u(\theta,\varphi) = \partial_{\theta}^2u+\frac{\cos\theta}{\sin\theta}\partial_{\theta} u + \frac{1}{\sin^2\theta}\partial_{\varphi}^2u,\] so the Dirichlet eigenvalue problem $\Delta u+\lambda u=0$ becomes \[\partial_{\theta}^2u+\frac{\cos\theta}{\sin\theta}\partial_{\theta} u + \frac{1}{\sin^2\theta}\partial_{\varphi}^2u + \lambda u = 0\] where $u=0$ on the edges of the tetrahedron. 
Then the eigenfunctions of $S^2/T$ are the eigenfunctions of the Laplacian on $S^2$ that satisfy $u(x)=u(g\cdot x)$ for all $g\in T$ and $x\in S^2$, subject to the Dirichlet boundary condition. Let $m$ be an even integer. The following functions span the irreps of $T$: \begin{align*} |\ell,m,T,A_1\rangle &= [E+(1\text{ }2\text{ }3)+(1\text{ }3\text{ }2)]|\ell,m,V_4,A_1\rangle \\ |\ell,m,T,A_2\rangle &= [E+w_3(1\text{ }2\text{ }3)+w_3^2(1\text{ }3\text{ }2)]|\ell,m,V_4,A_1\rangle \\ |\ell,m,T,A_3\rangle &= [E+w_3^2(1\text{ }2\text{ }3)+w_3(1\text{ }3\text{ }2)]|\ell,m,V_4,A_1\rangle \\ |\ell,m,T,F,1\rangle &= |\ell,m,V_4,B_2\rangle \\ |\ell,m,T,F,2\rangle &= (1\text{ }2\text{ }3)|\ell,m,V_4,B_2\rangle \\ |\ell,m,T,F,3\rangle &= (1\text{ }3\text{ }2)|\ell,m,V_4,B_2\rangle, \end{align*} where $w_3=\exp(2\pi i/3)$ and \begin{align*} |\ell,m,V_4,A_1\rangle &= A_{\ell m}(1+(-1)^m), \\ (1\text{ }2\text{ }3)|\ell,m,V_4,A_1\rangle &= (-1)^{m/2}(2-\delta_{m0})^{1/2}\sum_{\begin{matrix}m'\ge 0\\m'\text{ even}\end{matrix}}^{\ell}(2-\delta_{m'0})^{1/2}A_{\ell m'}\xi_{m'm}^{\ell}, \\ (1\text{ }3\text{ }2)|\ell,m,V_4,A_1\rangle &= (2-\delta_{m0})^{1/2}\sum_{\begin{matrix}m'\ge 0\\m'\text{ even}\end{matrix}}^{\ell}(2-\delta_{m'0})^{1/2}(-1)^{m'/2}A_{\ell m'}\xi_{m'm}^{\ell}, \\ |\ell,m,V_4,B_2\rangle &= B_{\ell m}(1+(-1)^m), \\ (1\text{ }2\text{ }3)|\ell,m,V_4,B_2\rangle &= (-1)^{m/2}\sqrt{\frac{2-\delta_{m0}}{2}}\sum_{\begin{matrix}m'> 0\\m'\text{ odd}\end{matrix}}^{\ell}\xi_{m'm}^{\ell}B_{\ell m'}, \\ (1\text{ }3\text{ }2)|\ell,m,V_4,B_2\rangle &= \sqrt{\frac{2-\delta_{m0}}{2}}\sum_{\begin{matrix}m'> 0\\m'\text{ odd}\end{matrix}}^{\ell}(-1)^{\lfloor m'/2\rfloor}\xi_{m'm}^{\ell}A_{\ell m'}, \end{align*} \begin{align*} \xi_{m'm}^{\ell} &= \frac{(-1)^{m'-m}}{2^{\ell}}\sqrt{\frac{(\ell+m')!(\ell-m')!}{(\ell+m)!(\ell-m)!}} \sum_{k=0}^{\ell-m'}(-1)^k\binom{\ell+m}{k}\binom{\ell-m}{k+m'-m}, \\ A_{\ell m} &= \frac{1}{2}\sqrt{2-\delta_{m0}}(Y_{\ell m}+Y_{\ell,-m}^*), \\ B_{\ell m} &= 
\frac{1}{2i}\sqrt{2-\delta_{m0}}(Y_{\ell m}-Y_{\ell,-m}^*), \\ Y_{\ell m} &= \left\{\begin{matrix} \frac{i^{\ell+1}}{\sqrt 2}(Y_{\ell}^m-(-1)^mY_{\ell}^{-m})& \text{ if }m<0 \\ i^{\ell}Y_{\ell}^0 & \text{ if }m=0 \\ \frac{i^{\ell}}{\sqrt 2}(Y_{\ell}^{-m}+(-1)^mY_{\ell}^m) & \text{ if }m>0, \end{matrix}\right. \\ \text{and }Y_{\ell}^m &=\sqrt{\frac{2\ell+1}{4\pi}\frac{(\ell-m)!}{(\ell+m)!}}P_{\ell}^m(\cos\theta)e^{im\varphi}. \end{align*} The Dirichlet boundary condition implies that $u=0$ on great circle arcs corresponding to edges of the tetrahedron. These great circle arcs lie on the following planes: $x=\pm y$, $y=\pm z$, $z=\pm x$. Making this more explicit in spherical coordinates $(r,\theta,\varphi)$ with $\theta=\cos^{-1}z$ and $\varphi=\tan^{-1}\frac{y}{x}$, the vertices of the tetrahedron are \begin{align*} \vec{h}_1 &= (1,\cos^{-1}(1/\sqrt3),-3\pi/4), \\ \vec{h}_2 &= (1,\cos^{-1}(1/\sqrt3),\pi/4), \\ \vec{h}_3 &= (1,\cos^{-1}(-1/\sqrt3),-\pi/4), \text{ and} \\ \vec{h}_4 &= (1,\cos^{-1}(-1/\sqrt3),3\pi/4), \end{align*} and the Dirichlet boundary conditions are \begin{align*} u(1,\theta,-3\pi/4)=u(1,\theta,\pi/4)=0&\text{ for }\theta\in[0,\cos^{-1}(1/\sqrt 3)], \\ u(1,\theta,-\pi/4)=u(1,\theta,3\pi/4)=0&\text{ for }\theta\in[\cos^{-1}(-1/\sqrt 3),\pi], \\ u=0\text{ when } \cos\varphi = -\cot\theta&\text{ for }\varphi\in[-3\pi/4,-\pi/4],\\ u=0\text{ when } \sin\varphi = \cot\theta&\text{ for }\varphi\in[-\pi/4,\pi/4], \\ u=0\text{ when } \cos\varphi = \cot\theta&\text{ for }\varphi\in[\pi/4,3\pi/4],\\ \text{and }u=0\text{ when } \sin\varphi = -\cot\theta&\text{ for }\varphi\in[-\pi,-3\pi/4]\cup[3\pi/4,\pi].
\end{align*} To solve the above boundary value problem, we first solve an easier version $(*)$: \begin{align*} u(1,\theta,-3\pi/4)=u(1,\theta,\pi/4)=0&\text{ for }\theta\in[0,\cos^{-1}(1/\sqrt 3)], \\ u(1,\theta,-\pi/4)=u(1,\theta,3\pi/4)=0&\text{ for }\theta\in[\cos^{-1}(-1/\sqrt 3),\pi], \\ u\text{ is odd in }\cos\theta \text{ when }\cos\varphi = -\cot\theta&\text{ for }\varphi\in[-3\pi/4,-\pi/4],\\ u\text{ is odd in }\cos\theta \text{ when }\sin\varphi = \cot\theta&\text{ for }\varphi\in[-\pi/4,\pi/4], \\ u\text{ is odd in }\cos\theta \text{ when }\cos\varphi = \cot\theta&\text{ for }\varphi\in[\pi/4,3\pi/4],\\ \text{and }u\text{ is odd in }\cos\theta \text{ when }\sin\varphi = -\cot\theta&\text{ for }\varphi\in[-\pi,-3\pi/4]\cup[3\pi/4,\pi], \end{align*} Since the irreps of $T$ are linear combinations of $Y_{\ell}^m$, and $e^{im\varphi}=\cos(m\varphi)+i\sin(m\varphi)$, irreps of $T$ that satisfy the $(*)$ boundary condition must be sums of multiples of $\cos(m\varphi)$ that vanish when $\varphi$ is an odd multiple of $\pi/4$, i.e. where $m\equiv 2\pmod{4}$, and multiples of $\sin(m\varphi)$ that vanish when $\varphi$ is an odd multiple of $\pi/4$, i.e. where $m$ is divisible by $4$. However, for multiples of $\sin(m\varphi)$, the trigonometric identity $\sin(2k\varphi)=2\sin(k\varphi)\cos(k\varphi)$ applied enough times gives us factors of $\cos(j\varphi)$ where $j\equiv 2\pmod{4}$ since $m$ is divisible by $4$. Thus it suffices to consider just the former case, i.e. (sums of) multiples of $\cos(m\varphi)$ where $m\equiv 2\pmod{4}$. Also, the irreps must be odd in $\cos\theta$ on the edges where $\theta\in[\cos^{-1}(1/\sqrt 3),\cos^{-1}(-1/\sqrt 3)]$. 
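The parity argument above is easy to sanity-check by enumeration. The following sketch (our own addition, not part of the original derivation; it assumes sympy is available) lists which $\cos(m\varphi)$ and $\sin(m\varphi)$ actually vanish at all odd multiples of $\pi/4$:

```python
import sympy as sp

# Test cos(m*phi) and sin(m*phi) at the odd multiples k*pi/4, k = 1, 3, 5, 7,
# i.e. at the azimuths of the vertical edge planes x = +-y.
def cos_vanishes(m):
    return all(sp.cos(m * k * sp.pi / 4) == 0 for k in (1, 3, 5, 7))

def sin_vanishes(m):
    return all(sp.sin(m * k * sp.pi / 4) == 0 for k in (1, 3, 5, 7))

cos_ms = [m for m in range(1, 17) if cos_vanishes(m)]
sin_ms = [m for m in range(1, 17) if sin_vanishes(m)]
print(cos_ms)  # m = 2 (mod 4): [2, 6, 10, 14]
print(sin_ms)  # m divisible by 4: [4, 8, 12, 16]
```

This matches the claim: the surviving $\cos(m\varphi)$ modes have $m\equiv 2\pmod 4$, and the $\sin(m\varphi)$ modes with $4\mid m$ factor through such cosines.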
After some messy calculations, we find that the first eigenvalue is $\lambda_1=\ell(\ell+1)=3\cdot 4=12$, and its corresponding eigenspace is one-dimensional and spanned by the following eigenfunction: \[u_1^*(\theta,\varphi)=-\sqrt{\frac{105}{2\pi}}\sin^2(\theta)\cos(\theta)\cos(2\varphi),\] which is odd on the edges where $\theta\in[\cos^{-1}(1/\sqrt 3),\cos^{-1}(-1/\sqrt 3)]$ because on those edges $\cos(2\varphi)=\pm(3\cos^2\theta-1)/\sin^2\theta$. We find that the second eigenvalue is $\lambda_2=\ell(\ell+1)=5\cdot 6=30$, and its corresponding eigenspace is one-dimensional and spanned by the following eigenfunction: \begin{align*} u_2^*(\theta,\varphi) &= \sqrt{\frac{1155}{128\pi}}\sin^2(\theta)(5\cos(\theta)+3\cos(3\theta))\cos(2\varphi) \\ &= \sqrt{\frac{1155}{8\pi}}\sin^2(\theta)(3\cos^3(\theta)-\cos(\theta))\cos(2\varphi), \end{align*} which is also odd on the edges where $\theta\in[\cos^{-1}(1/\sqrt 3),\cos^{-1}(-1/\sqrt 3)]$ because on those edges $\cos(2\varphi)=\pm(3\cos^2\theta-1)/\sin^2\theta$. To get a Dirichlet eigenfunction from a $(*)$ eigenfunction, it suffices to sum over rotational (orientation-preserving) swapping of the $x$, $y$, and $z$ axes, i.e.~ \[u_j^D(p)=u_j^*(\theta_{xyz}(p),\varphi_{xyz}(p))+u_j^*(\theta_{yzx}(p),\varphi_{yzx}(p))+u_j^*(\theta_{zxy}(p),\varphi_{zxy}(p)),\] where $(\theta_{ijk}(p),\varphi_{ijk}(p))$ denotes spherical coordinates of a point $p\in S^2$ where the $x$, $y$, and $z$ axes are replaced by the $i$, $j$, and $k$ axes, respectively.
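The stated eigenvalues $12=3\cdot 4$ and $30=5\cdot 6$ can be verified symbolically. The sketch below (our own check, assuming sympy; normalization constants are dropped since they do not affect the eigenvalue) applies the Laplace--Beltrami operator on $S^2$ in the coordinates used above:

```python
import sympy as sp

theta, phi = sp.symbols('theta phi')

def sphere_laplacian(u):
    # Laplace-Beltrami operator on the unit sphere in coordinates (theta, phi)
    return (sp.diff(sp.sin(theta) * sp.diff(u, theta), theta) / sp.sin(theta)
            + sp.diff(u, phi, 2) / sp.sin(theta)**2)

# u_1^* and u_2^* up to their normalization constants
u1 = sp.sin(theta)**2 * sp.cos(theta) * sp.cos(2*phi)
u2 = sp.sin(theta)**2 * (3*sp.cos(theta)**3 - sp.cos(theta)) * sp.cos(2*phi)

# -Laplacian u = l(l+1) u with l = 3 and l = 5, so the residuals should vanish
print(sp.simplify(sphere_laplacian(u1) + 12*u1))
print(sp.simplify(sphere_laplacian(u2) + 30*u2))
```

Both residuals simplify to $0$, confirming that $u_1^*$ and $u_2^*$ are eigenfunctions with $\ell=3$ and $\ell=5$.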
Therefore, for a point $p$ in $S^2$, with Cartesian coordinates $(x,y,z)$ and spherical coordinates $(\theta,\varphi)$, we have \begin{align*} u_1^D(p) = -\sqrt{\frac{105}{2\pi}} \Big[&\sin^2(\cos^{-1}(z))\cos(\cos^{-1}(z))\cos(2\tan^{-1}(y/x)) \\ +&\sin^2(\cos^{-1}(x))\cos(\cos^{-1}(x))\cos(2\tan^{-1}(z/y)) \\ +&\sin^2(\cos^{-1}(y))\cos(\cos^{-1}(y))\cos(2\tan^{-1}(x/z))\Big]\\ = -\sqrt{\frac{105}{2\pi}} \Big[&z(1-z^2)\frac{x^2-y^2}{x^2+y^2} +x(1-x^2)\frac{y^2-z^2}{y^2+z^2} +y(1-y^2)\frac{z^2-x^2}{z^2+x^2}\Big] \\ = -\sqrt{\frac{105}{2\pi}} \Big[&z(x^2-y^2)+x(y^2-z^2)+y(z^2-x^2)\Big] \\ = -\sqrt{\frac{105}{2\pi}} \Big[&\sin^2(\theta)\cos(\theta)\cos(2\varphi)+\sin(\theta)\cos(\varphi)(\sin^2(\theta)\sin^2(\varphi)-\cos^2(\theta)) \\ +&\sin(\theta)\sin(\varphi)(\cos^2(\theta)-\sin^2(\theta)\cos^2(\varphi))\Big] \end{align*} and \begin{align*} u_2^D(p) = \sqrt{\frac{1155}{8\pi}} \Big[&\sin^2(\cos^{-1}(z))(3\cos^3(\cos^{-1}(z))-\cos(\cos^{-1}(z)))\cos(2\tan^{-1}(y/x)) \\ +&\sin^2(\cos^{-1}(x))(3\cos^3(\cos^{-1}(x))-\cos(\cos^{-1}(x)))\cos(2\tan^{-1}(z/y)) \\ +&\sin^2(\cos^{-1}(y))(3\cos^3(\cos^{-1}(y))-\cos(\cos^{-1}(y)))\cos(2\tan^{-1}(x/z))\Big] \\ = \sqrt{\frac{1155}{8\pi}} \Big[&(3z^3-z)(1-z^2)\frac{x^2-y^2}{x^2+y^2} +(3x^3-x)(1-x^2)\frac{y^2-z^2}{y^2+z^2} \\ +&(3y^3-y)(1-y^2)\frac{z^2-x^2}{z^2+x^2}\Big] \end{align*} \begin{align*} = \sqrt{\frac{1155}{8\pi}} \Big[&(3z^3-z)(x^2-y^2)+(3x^3-x)(y^2-z^2)+(3y^3-y)(z^2-x^2)\Big] \\ = \sqrt{\frac{1155}{8\pi}} \Big[&\sin^2(\theta)(3\cos^3(\theta)-\cos(\theta))\cos(2\varphi) \\ +&(3\sin^3(\theta)\cos^3(\varphi)-\sin(\theta)\cos(\varphi))(\sin^2(\theta)\sin^2(\varphi)-\cos^2(\theta)) \\ +&(3\sin^3(\theta)\sin^3(\varphi)-\sin(\theta)\sin(\varphi))(\cos^2(\theta)-\sin^2(\theta)\cos^2(\varphi))\Big] \end{align*} \bibliographystyle{unsrt} \bibliography{schwarz} \end{document}
\begin{document} \textwidth 6.5 truein \oddsidemargin 0 truein \evensidemargin -0.50 truein \topmargin -.5 truein \textheight 9 in \maketitle \begin{abstract} We give necessary and sufficient conditions for an integral quadratic form over a dyadic local field to be universal. \end{abstract} \section{Introduction} Let $F$ be a local non-archimedean field of characteristic zero. Let $\oo$ be the ring of integers and let $\p$ be the prime ideal of $F$. The group of units of $\oo$ is $\ooo =\oo\setminus\p$. We have $\p =\pi\oo$, where $\pi$ is a prime element of $\oo$. If $\mathfrak a$ is an ideal of $F$ then we define its order $\ord\mathfrak a\in\ZZ\cup\{\infty\}$ by $\ord\mathfrak a=R$ if $\mathfrak a=\p^R$ for some $R\in\ZZ$ and $\ord\mathfrak a=\infty$ if $\mathfrak a=0$. If $a\in F$ we write $\ord a=\ord (a\oo )$, i.e. $\ord a$ is the valuation of $a$. We have $\ord a=R$ if $a=\pi^R\varepsilon$ with $\varepsilon\in\ooo$ and $\ord a=\infty$ if $a=0$. We denote by $(\cdot,\cdot )_\p :\fff/\fff^2\times\fff/\fff^2\to\{\pm 1\}$ the Hilbert symbol. All quadratic spaces and lattices in this paper will be assumed to be non-degenerate. If $V$ is a quadratic space and $x_1,\ldots,x_n$ is an orthogonal basis with $Q(x_i)=a_i$, then we say that $V\cong [a_1,\ldots,a_n]$ relative to the orthogonal basis $x_1,\ldots,x_n$. For the quadratic lattice $L=\oo x_1\perp\cdots\perp\oo x_n$ we write $L\cong\langle a_1,\ldots,a_n\rangle$. Recall that if $b\in\fff$ then $b$ is represented by $[a_1,a_2]$ iff $(a_1b,-a_1a_2)_\p =1$ and it is represented by $[a_1,a_2,a_3]$ iff $b\notin -a_1a_2a_3\fff^2$ or $[a_1,a_2,a_3]$ is isotropic. We also have that $[a_1,a_2,a_3]$ is isotropic iff $-a_1$ is represented by $[a_2,a_3]$, which is equivalent to $(-a_1a_2,-a_2a_3)_\p =1$. If $V,W$ are two quadratic spaces, we denote by $W\rep V$ the fact that $V$ represents $W$. Similarly for lattices.
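The binary and ternary representation criteria above are effectively computable once the Hilbert symbol is. As a concrete illustration in the simplest dyadic case $F=\QQ_2$ (our own sketch, not part of the paper; the closed formula for $(\cdot,\cdot)_2$ is the classical one and the function names are ours):

```python
def hilbert2(a, b):
    """Hilbert symbol (a, b)_2 over Q_2 for nonzero integers, via the
    classical formula in terms of eps(u) = (u-1)/2 and omega(u) = (u^2-1)/8."""
    def split(n):                      # n = 2**alpha * u with u odd
        alpha = 0
        while n % 2 == 0:
            n //= 2
            alpha += 1
        return alpha, n
    alpha, u = split(a)
    beta, v = split(b)
    eps = lambda w: ((w - 1) // 2) % 2
    omega = lambda w: ((w * w - 1) // 8) % 2
    exponent = eps(u) * eps(v) + alpha * omega(v) + beta * omega(u)
    return -1 if exponent % 2 else 1

def binary_represents(a1, a2, b):
    """b is represented by [a1, a2] over Q_2 iff (a1*b, -a1*a2)_2 = 1."""
    return hilbert2(a1 * b, -a1 * a2) == 1

print(binary_represents(1, 1, 2), binary_represents(1, 1, 3))  # True False
# [a1,a2,a3] is isotropic iff (-a1*a2, -a2*a3)_2 = 1; for [1,1,1] this is
# (-1,-1)_2 = -1, so x^2 + y^2 + z^2 is anisotropic over Q_2.
print(hilbert2(-1, -1))  # -1
```

For example, $2=1^2+1^2$ is represented by $[1,1]$ while $3$ is not, matching the computed symbols.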
If $L$ is a quadratic lattice, with $FL=V$, and $Q:V\to F$ is the corresponding quadratic form, then we say that $L$ is integral if $Q(L)\subseteq\oo$ and we say that it is universal if $Q(L)=\oo$. In [XZ] the authors gave necessary and sufficient conditions for a quadratic lattice to be universal in the case when $F$ is non-dyadic. In the more complicated dyadic case they solved the same problem, but only for binary and ternary lattices. In this paper we completely solve this problem for dyadic quadratic lattices in arbitrary dimensions. Unlike in [XZ], where the quadratic lattices are described in terms of Jordan compositions, here we use BONGs (bases of norm generators), which we introduced in [B1]. Since the BONGs are not widely known and used, we now give a brief review. A summary of the results from [B1] we use here can be found in [B3, \S1]. From now on $F$ is a dyadic field, i.e. a finite extension of $\QQ_2$. We denote by $e$ the ramification index of the extension $F/\QQ_2$, i.e. $e=\ord 2$. \subsection{The map $d:\fff/\fff^2\to\{0,1,3,5,\ldots,2e-1,2e,\infty\}$} The quadratic defect, introduced in [OM, \S63A], of an element $a\in F$ is the ideal $\mathfrak d(a)=\cap_{x\in F}(a-x^2)\oo$. We denote by $\Delta =1-4\rho$ a fixed element with $\mathfrak d(\Delta )=4\oo$. In [B1, \S1] we introduced the order of the relative quadratic defect $d:\fff/\fff^2\to\ZZ\cup\{\infty\}$, $d(a)=\ord a^{-1}\mathfrak d(a)$. Let $a=\pi^R\varepsilon$, with $\varepsilon\in\ooo$. If $R$ is even then $d(a)=d(\varepsilon )=\ord\mathfrak d(\varepsilon )\in\{ 1,3,5,\ldots,2e-1,2e,\infty\}$. If $R$ is odd then $d(a)=0$. We have $d(a)=0$ iff $\ord a$ is odd, $d(a)\geq 1$ iff $\ord a$ is even, $d(a)=2e$ iff $a\in\Delta\fff^2$ and $d(a)=\infty$ iff $a\in\fff^2$. The map $d$ has the following properties: \noindent (1) $d(ab)\geq\min\{ d(a),d(b)\}$ $\forall a,b\in\fff$. \noindent (2) If $d(a)+d(b)>2e$ then $(a,b)_\p =1$.
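The order of the quadratic defect can be computed by brute force in the simplest case $F=\QQ_2$, where $e=1$ and $\Delta$ is the class of $5$. The following sketch is our own illustration (not part of the paper), with a precision cap standing in for $d(a)=\infty$:

```python
def v2(n):
    """2-adic valuation of a nonzero integer."""
    return (n & -n).bit_length() - 1

def defect_order(a, prec=12):
    """Order of the quadratic defect d(a) of a 2-adic unit a (a odd),
    found by brute force modulo 2**prec: d(a) = max over x of v2(a - x^2).
    Returns prec as a stand-in for infinity (i.e. a is a square in Z_2)."""
    M = 1 << prec
    best = 0
    for x in range(M):
        diff = (a - x * x) % M
        v = prec if diff == 0 else v2(diff)
        if v > best:
            best = v
            if best >= prec:
                break
    return best

# With e = 1 the possible values of d on units are {1, 2e = 2, infinity}:
print(defect_order(3), defect_order(5), defect_order(7), defect_order(17))
# 1 2 1 12   (17 = 1 mod 8 is a square, so d(17) = "infinity")
```

One can also observe property (1) on examples, e.g. $d(3\cdot 7)=d(21)=2\geq\min\{d(3),d(7)\}$.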
\noindent (3) If $a\in\fff\setminus\fff^2$ then there is $b\in\fff$ with $d(b)=2e-d(a)$ such that $(a,b)_\p =-1$. Moreover, if $d(a)<2e$ then we can choose $b\in\ooo$. (For the last statement note that if $d(a)<2e$ then $d(b)=2e-d(a)>0$ so $b\in\ooo\fff^2$. Since both $d(b)$ and $(a,b)_\p$ depend only on $b$ modulo $\fff^2$, we may assume that $b\in\ooo$.) \subsection{BONGs and good BONGs} Let $V$ be a quadratic space over $F$, with the quadratic form $Q:V\to F$ and the corresponding bilinear symmetric form $B:V\times V\to F$, $B(x,y)=\frac 12(Q(x+y)-Q(x)-Q(y))$. Let now $L$ be a lattice over $V$. The norm $\nnn L$ of $L$ is the fractional ideal generated by $Q(L)$ and the scale $\sss L$ of $L$ is the fractional ideal $B(L,L)$. {\bf Bases of norm generators (BONGs)} The bases of norm generators (BONGs), introduced in [B1, \S2], were defined recursively as follows. A norm generator of $L$ is an element $x\in L$ such that $Q(x)\oo =\nnn L$. A basis of norm generators (BONG) of $L$ is an orthogonal basis $x_1,\ldots,x_n$ of $V=FL$ such that $x_1$ is a norm generator for $L$ and $x_2,\ldots,x_n$ are a BONG for $pr_{x_1^\perp}L$. (Here $pr_{x_1^\perp}:V\to x_1^\perp$ is the projection on the orthogonal complement $x_1^\perp$ of $x_1$.) A BONG uniquely determines a lattice. We write $L=\prec x_1,\ldots,x_n\succ$ to denote the fact that $x_1,\ldots,x_n$ is a BONG for $L$ and we say that $L\cong\prec a_1,\ldots,a_n\succ$ relative to the BONG $x_1,\ldots,x_n$ if $Q(x_i)=a_i$. {\bf The binary case} If $n=2$ then an orthogonal basis $x_1,x_2$ of $V$ with $Q(x_i)=a_i$ is the BONG of a lattice iff $a_2/a_1\in\mathcal A$, where $\mathcal A\subset\fff/\oos$, $\mathcal A=\{ a\in\fff\,\mid\, a\in\frac 14\oo,\,\mathfrak d(-a)\subseteq\oo\}$. If $a\in\fff$ with $\ord a=R$, then $a\in\mathcal A$ iff $R+d(-a)\geq 0$ and $R\geq -2e$. Hence if $\ord a_i=R_i$ then $a_2/a_1\in\mathcal A$ iff $R_2-R_1\geq -2e$ and $R_2-R_1+d(-a_1a_2)\geq 0$. (See [B1, Lemmas 3.5 and 3.6].)
If $R_2>R_1$ then we have the Jordan splitting $L=\oo x_1\perp\oo x_2$ and the scales of the Jordan components are $\sss\oo x_1=\p^{R_1}$ and $\sss\oo x_2=\p^{R_2}$. If $R_2\leq R_1$ then $L$ is $\p^{(R_1+R_2)/2}$-modular with $\nnn L=\p^{R_1}$. In particular, if $R_2-R_1=-2e$, then $\sss L=\p^{(R_1+R_2)/2}=\p^{R_1-e}=\frac 12\p^{R_1}=\frac 12\nnn L$. Hence $L\cong\frac 12\p^{R_1}A(0,0)$ or $\frac 12\p^{R_1}A(2,2\rho )$. If $a\in{\mathcal A}$, with $\ord a=R$, then $g(a)\leq\ooo/\oos$ is defined by $g(a)=\ooo$ if $R=-2e$, $g(a)=\oos$ if $R>2e$ and $$g(a)=\begin{cases}(1+\p^{R/2+e})\oos & d(-a)>e-R/2\\ (1+\p^{R+d(-a)})\oos\cap{\rm N}(-a) & d(-a)\leq e-R/2\end{cases}$$ if $-2e<R\leq 2e$. Then if $L\cong\prec a_1,a_2\succ$ and $\eta\in\ooo$, we have $L\cong L^\eta$, i.e. $\prec a_1,a_2\succ\cong\prec\eta a_1,\eta a_2\succ$, iff $\eta\in g(a_2/a_1)$. (See [B1, Lemma 3.11].) {\bf Good BONGs} A BONG $x_1,\ldots,x_n$ of $L$ is called good if $\ord Q(x_i)\leq\ord Q(x_{i+2})$ for $1\leq i\leq n-2$. If $x_1,\ldots,x_n$ is an orthogonal basis of $V$, with $Q(x_i)=a_i$ and $\ord a_i=R_i$ then $x_1,\ldots,x_n$ is the good BONG of a lattice $L$ with $FL=V$ iff $R_i\leq R_{i+2}$ for $1\leq i\leq n-2$ and $a_{i+1}/a_i\in\mathcal A$ for $1\leq i\leq n-1$. The second condition writes as $R_{i+1}-R_i\geq -2e$ and $R_{i+1}-R_i+d(-a_ia_{i+1})\geq 0$. (See [B1, Lemma 4.3(ii)].) In particular, if $R_{i+1}-R_i$ is odd then $\ord a_ia_{i+1}=R_i+R_{i+1}$ is odd so $R_{i+1}-R_i=R_{i+1}-R_i+d(-a_ia_{i+1})\geq 0$. Thus $R_{i+1}-R_i$ cannot be odd and negative. If $R_{i+1}-R_i=-2e$ then $R_{i+1}-R_i+d(-a_ia_{i+1})\geq 0$ implies $d(-a_ia_{i+1})\geq 2e$ so $-a_ia_{i+1}\in\fff^2$ or $\Delta\fff^2$, corresponding to $d(-a_ia_{i+1})=\infty$ or $2e$, respectively. Every quadratic lattice has a good BONG. Good BONGs can be obtained with the help of the so-called maximal norm splittings. (See [B1, Lemmas 4.3(iii) and 4.6] and [B3, \S7].)
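The order conditions for a good BONG are easy to check mechanically. The following sketch (our own, operating on hypothetical order/defect data rather than on actual lattices) encodes them, with $d(-a_ia_{i+1})$ supplied as input:

```python
def is_good_bong_data(R, dpair, e):
    """Check the numerical conditions for a good BONG:
    R[i] = ord a_{i+1} (0-based), dpair[i] = d(-a_{i+1} a_{i+2})
    (use a large sentinel for d = infinity). Conditions:
    R_i <= R_{i+2}, R_{i+1}-R_i >= -2e, R_{i+1}-R_i + d(-a_i a_{i+1}) >= 0."""
    n = len(R)
    ok = all(R[i] <= R[i + 2] for i in range(n - 2))
    ok &= all(R[i + 1] - R[i] >= -2 * e for i in range(n - 1))
    ok &= all(R[i + 1] - R[i] + dpair[i] >= 0 for i in range(n - 1))
    return ok

e = 1  # F = Q_2
print(is_good_bong_data([0, -2, 0], [2, 2], e))  # True: R_2-R_1 = -2e forces d >= 2e
print(is_good_bong_data([0, -1, 0], [0, 0], e))  # False: R_2-R_1 odd and negative
```

The second example illustrates the remark above: an odd negative jump $R_{i+1}-R_i$ is impossible, since then $d(-a_ia_{i+1})=0$.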
{\bf Similarities with orthogonal bases} Unlike in the non-dyadic case, in the dyadic case lattices usually don't have orthogonal bases. The BONGs, especially the good BONGs, are a good substitute, as they preserve many of the properties of the orthogonal bases. Suppose that $x_1,\ldots,x_n$ is an orthogonal basis of a quadratic space, with $Q(x_i)=a_i$ and $\ord a_i=R_i$. If $x_1,\ldots,x_n$ is a good BONG for $L$ then $L=\prec x_1,\ldots,x_k\succ\perp\prec x_{k+1},\ldots,x_n\succ$ holds iff $R_k\leq R_{k+1}$. Equivalently, $\prec a_1,\ldots,a_n\succ\cong\prec a_1,\ldots,a_k\succ\perp\prec a_{k+1},\ldots,a_n\succ$ iff $R_k\leq R_{k+1}$. Conversely, if $x_1,\ldots,x_k$ and $x_{k+1},\ldots,x_n$ are good BONGs for the lattices $L'$ and $L''$ then $x_1,\ldots,x_n$ is a good BONG for $L'\perp L''$ iff $R_k\leq R_{k+1}$, $R_{k-1}\leq R_{k+1}$ and $R_k\leq R_{k+2}$. (If $k=1$ we ignore $R_{k-1}\leq R_{k+1}$; if $k=n-1$ we ignore $R_k\leq R_{k+2}$.) If $x_1,\ldots,x_n$ is a good BONG and for some $1\leq k\leq l\leq n$ $y_k,\ldots,y_l$ is another good BONG for $\prec x_k,\ldots,x_l\succ$, then $x_1,\ldots,x_{k-1},y_k,\ldots,y_l,x_{l+1},\ldots,x_n$ is a good BONG for $L$. Consequently, if $L\cong\prec a_1,\ldots,a_n\succ$ and $\prec a_k,\ldots,a_l\succ\cong\prec b_k,\ldots,b_l\succ$ relative to good BONGs, then ${L\cong\prec a_1,\ldots,a_{k-1},b_k,\ldots,b_l,a_{l+1},\ldots,a_n\succ}$ relative to a good BONG. In particular, if $\eta\in g(a_{k+1}/a_k)$ then $\prec a_k,a_{k+1}\succ\cong\prec\eta a_k,\eta a_{k+1}\succ$ so\\ ${\prec a_1,\ldots,a_n\succ\cong\prec a_1,\ldots,a_{k-1},\eta a_k,\eta a_{k+1},a_{k+2},\ldots,a_n\succ}$. If $L\cong\prec a_1,\ldots,a_n\succ$ relative to the good BONG $x_1,\ldots,x_n$ then $L^\sharp\cong\prec a_n^{-1},\ldots,a_1^{-1}\succ$ relative to the good BONG $x_n^\sharp,\ldots,x_1^\sharp$, where $x^\sharp :=Q(x)^{-1}x$ for every $x\in V$ with $Q(x)\neq 0$. (See [B1, Lemmas 4.8 and 4.9, Corollary 4.4].)
\subsection{The invariants $R_i(L)$ and $\alpha_i(L)$ and the classification theorem} Suppose now that $L\cong\prec a_1,\ldots,a_n\succ$ relative to some good BONG and let $R_i=\ord a_i$. In [B3, \S2] we defined, for every $1\leq i\leq n-1$, the number $\alpha_i$ as the minimum of the set \begin{multline*} \{ (R_{i+1}-R_i)/2+e\}\cup\{ R_{i+1}-R_j+d(-a_ja_{j+1})\,\mid\, 1\leq j\leq i\}\\ \cup\{ R_{j+1}-R_i+d(-a_ja_{j+1})\,\mid\, i\leq j\leq n-1\}. \end{multline*} The numbers $R_i$, with $1\leq i\leq n$, and $\alpha_i$, with $1\leq i\leq n-1$, are invariants of the lattice $L$, so we denote them by $R_i(L)$ and $\alpha_i(L)$. If $L$ has a Jordan decomposition $L=L_1\perp\cdots\perp L_t$, then the numbers $R_i=R_i(L)$ are in one-to-one correspondence with $t$, $\rank L_i$, $\sss L_i$ and $\nnn L^{\sss L_i}$ with $1\leq i\leq t$. In particular, $\nnn L=\p^{R_1}$ and $\sss L=\p^{\min\{ R_1,(R_1+R_2)/2\}}$. The numbers $\alpha_i=\alpha_i(L)$ are in one-to-one correspondence with the invariants $\www_i=\www L^{\sss L_i}$, with $1\leq i\leq t$, and $\mathfrak f_i$, with $1\leq i\leq t-1$, of $L$. (See [B1, Lemma 4.7] and [B3, Lemmas 2.13(i), 2.15 and 2.16].) \medskip Here are some properties of the invariants $\alpha_i$, which appear in [B3, Lemmas 2.2 and 2.7, Corollaries 2.8 and 2.9, Remark 2.6]. (1) The sequence $R_i+\alpha_i$ is increasing and the sequence $-R_{i+1}+\alpha_i$ is decreasing. (2) $\alpha_i\geq 0$, with equality iff $R_{i+1}-R_i=-2e$. (3) If $R_{i+1}-R_i\leq 2e$ then $\alpha_i\geq R_{i+1}-R_i$ with equality iff $R_{i+1}-R_i=2e$ or it is odd. (4) If $R_{i+1}-R_i\in\{ -2e,2-2e,2e-2\}$ or $R_{i+1}-R_i\geq 2e$ then $\alpha_i=(R_{i+1}-R_i)/2+e$. (5) $\alpha_i$ is $<2e$, $=2e$ or $>2e$ iff $R_{i+1}-R_i$ is so. (6) $\alpha_i$ is an odd integer unless $\alpha_i=(R_{i+1}-R_i)/2+e$. (7) $\alpha_i$ is an integer unless $R_{i+1}-R_i$ is odd and $>2e$. (8) $\alpha_i\in ([0,2e]\cap\ZZ )\cup ((2e,\infty )\cap\frac 12\ZZ )$.
(9) $\alpha_i=\min\{ (R_{i+1}-R_i)/2+e,\, R_{i+1}-R_i+d(-a_ia_{i+1}),\, R_{i+1}-R_i+\alpha_{i-1},\, R_{i+1}-R_i+\alpha_{i+1}\}$. (If $i=1$ we ignore $R_{i+1}-R_i+\alpha_{i-1}$, as $\alpha_0$ is not defined. If $i=n-1$ then we ignore $R_{i+1}-R_i+\alpha_{i+1}$, as $\alpha_n$ is not defined.) (10) The $\alpha_i$ are invariant under scaling. (11) $R_i(L^\sharp )=-R_{n+1-i}(L)$ and $\alpha_i(L^\sharp )=\alpha_{n-i}(L)$. \medskip We now state O'Meara's classification theorem [OM, Theorem 93:28] in terms of BONGs. This result is [B3, Theorem 3.1]. \btm Let $L,K$ be two quadratic lattices with $FL\cong FK$ and let $L\cong\prec a_1,\ldots,a_n\succ$ and $K\cong\prec b_1,\ldots,b_n\succ$ relative to good BONGs. Let $R_i=R_i(L)$, $S_i=R_i(K)$, $\alpha_i=\alpha_i(L)$ and $\beta_i=\alpha_i(K)$. Then $L\cong K$ iff: (i) $R_i=S_i$ for $1\leq i\leq n$. (ii) $\alpha_i=\beta_i$ for $1\leq i\leq n-1$. (iii) $d(a_1\cdots a_i\, b_1\cdots b_i)\geq\alpha_i$ for $1\leq i\leq n-1$. (iv) $[b_1,\ldots,b_{i-1}]{\rep}[a_1,\ldots,a_i]$ for every $1<i<n$ such that $\alpha_{i-1}+\alpha_i>2e$. \etm \subsection{The invariants $d[\varepsilon a_{i,j}]$ and $d[\varepsilon a_{1,i}b_{1,j}]$} For convenience, if $a_1,a_2,\ldots\in\fff$ and $1\leq i\leq j+1$ then we denote by $a_{i,j}=a_i\cdots a_j$. By convention, $a_{i,i-1}=1$. If $L\cong\prec a_1,\ldots,a_n\succ$ relative to a good BONG and $\alpha_i=\alpha_i(L)$, then for every $0\leq i-1\leq j\leq n$ and $\varepsilon\in\fff$ we define $$d[\varepsilon a_{i,j}]=\min\{ d(\varepsilon a_{i,j}),\alpha_{i-1},\alpha_j\}.$$ (If $i-1\in\{ 0,n\}$ then $\alpha_{i-1}$ is not defined, so it is ignored. Similarly, $\alpha_j$ is ignored if $j\in\{ 0,n\}$.)
In particular, since $d[-a_{i,i+1}]=\min\{ d(-a_{i,i+1}),\alpha_{i-1},\alpha_{i+1}\}$, the property (9) of \S1.3 can be written as $$\alpha_i=\min\{ (R_{i+1}-R_i)/2+e,R_{i+1}-R_i+d[-a_{i,i+1}]\}.$$ If $M\cong\prec a_1,\ldots,a_m\succ$ and $N\cong\prec b_1,\ldots,b_n\succ$ relative to good BONGs, $\alpha_i=\alpha_i(M)$ and $\beta_i=\alpha_i(N)$, then for every $0\leq i\leq m$, $0\leq j\leq n$, we define $$d[\varepsilon a_{1,i}b_{1,j}]=\min\{ d(\varepsilon a_{1,i}b_{1,j}),\alpha_i,\beta_j\}.$$ (If $i\in\{ 0,m\}$ then we ignore $\alpha_i$. If $j\in\{ 0,n\}$ then we ignore $\beta_j$.) As a consequence of condition (iii) of Theorem ..., $d[\varepsilon a_{i,j}]$ and $d[\varepsilon a_{1,i}b_{1,j}]$ are independent of the choice of the good BONGs. Also $d[\varepsilon a_{i,j}]$ is a particular case of the expression $d[\varepsilon a_{1,i}b_{1,j}]$. Indeed, if we take $M=N=L$, so that $b_i=a_i$ and $\beta_i=\alpha_i$ then in $\fff/\fff^2$ we have $a_{1,j}b_{1,i-1}=a_{1,j}a_{1,i-1}=a_{i,j}$ so $d(\varepsilon a_{1,j}b_{1,i-1})=d(\varepsilon a_{i,j})$. Therefore $$d[\varepsilon a_{1,j}b_{1,i-1}]=\min\{ d(\varepsilon a_{1,j}b_{1,i-1}),\alpha_j,\beta_{i-1}\}=\min\{d(\varepsilon a_{i,j}),\alpha_j,\alpha_{i-1}\}=d[\varepsilon a_{i,j}].$$ The invariants $d[\cdot ]$ satisfy a domination principle similar to that of $d(\cdot )$. Namely, if we have a third lattice $K\cong\prec c_1,\ldots,c_k\succ$ and $\varepsilon,\varepsilon'\in\fff$ then in $\fff/\fff^2$ we have $(\varepsilon a_{1,i}b_{1,j})(\varepsilon'b_{1,j}c_{1,k}) =\varepsilon\varepsilon'a_{1,i}c_{1,k}$ and so $d(\varepsilon\varepsilon'a_{1,i}c_{1,k})\geq\min\{ d(\varepsilon a_{1,i}b_{1,j}),d(\varepsilon'b_{1,j}c_{1,k})\}$. Similarly, we have $$d[\varepsilon\varepsilon'a_{1,i}c_{1,k}]\geq\min\{ d[\varepsilon a_{1,i}b_{1,j}],d[\varepsilon'b_{1,j}c_{1,k}]\}.$$ Note that both $d$ and $\alpha_i$ take nonnegative values so $d[\varepsilon a_{1,i}b_{1,j}]$ is always nonnegative.
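To make the definition of $\alpha_i$ concrete, the following sketch (our own, for a hypothetical good BONG over $\QQ_2$, with the values $d(-a_ia_{i+1})$ supplied as data) computes $\alpha_i$ directly from the defining minimum of \S1.3 and lets one check properties (1) and (2):

```python
from fractions import Fraction

def alphas(R, dpair, e):
    """alpha_i for 1 <= i <= n-1, from the defining minimum in Section 1.3.
    R[i] = R_{i+1} (0-based), dpair[i] = d(-a_{i+1} a_{i+2}) (0-based);
    use a large sentinel for d = infinity."""
    n = len(R)
    out = []
    for i in range(n - 1):  # 0-based i corresponds to alpha_{i+1}
        cand = [Fraction(R[i + 1] - R[i], 2) + e]
        cand += [R[i + 1] - R[j] + dpair[j] for j in range(i + 1)]
        cand += [R[j + 1] - R[i] + dpair[j] for j in range(i, n - 1)]
        out.append(min(cand))
    return out

e = 1           # F = Q_2
R = [0, 0, 2]   # orders R_1, R_2, R_3 of a hypothetical good BONG
d = [1, 1]      # d(-a_1 a_2), d(-a_2 a_3)
a = alphas(R, d, e)
print(a)        # alpha_1 = 1, alpha_2 = 2
```

On this example, $R_i+\alpha_i=(1,2)$ is increasing and $-R_{i+1}+\alpha_i=(1,0)$ is decreasing, as property (1) requires, and both values are nonnegative as in property (2).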
If $\ord\varepsilon a_{1,i}b_{1,j}$ is odd then $d(\varepsilon a_{1,i}b_{1,j})=0$ and so $d[\varepsilon a_{1,i}b_{1,j}]=0$. \subsection{The representation theorem} We now state the representation theorem, which was announced in [B2, Theorem 4.5]. Let $M,N$ be quadratic lattices, with $M\cong\prec a_1,\ldots,a_m\succ$ and $N\cong\prec b_1,\ldots,b_n\succ$ relative to good BONGs and $m\geq n$. Let $R_i=R_i(M)$, $S_i=R_i(N)$, $\alpha_i=\alpha_i(M)$ and $\beta_i=\alpha_i(N)$. If $1\leq i\leq\min\{ m-1,n\}$, then we define $A_i=A_i(M,N)$ as $$A_i=\min\{ (R_{i+1}-S_i)/2+e,\, R_{i+1}-S_i+d[-a_{1,i+1}b_{1,i-1}], R_{i+1}+R_{i+2}-S_{i-1}-S_i+d[a_{1,i+2}b_{1,i-2}]\}.$$ (If $i=1$ or $m-1$, then the term $R_{i+1}+R_{i+2}-S_{i-1}-S_i+d[a_{1,i+2}b_{1,i-2}]$ is not defined so it is ignored.) If $n\leq m-2$ then we assume that $S_{n+1}\gg 0$. Then, formally, we have $$S_{n+1}+A_{n+1}=\min\{ (R_{n+2}+S_{n+1})/2+e,\, R_{n+2}+d[-a_{1,n+2}b_{1,n}],\, R_{n+2}+R_{n+3}-S_n+d[a_{1,n+3}b_{1,n-1}]\}.$$ Since $(R_{n+2}+S_{n+1})/2+e\to\infty$ as $S_{n+1}\to\infty$, we can ignore it in the above formula and we define $$S_{n+1}+A_{n+1}=\min\{ R_{n+2}+d[-a_{1,n+2}b_{1,n}],\, R_{n+2}+R_{n+3}-S_n+d[a_{1,n+3}b_{1,n-1}]\}.$$ (If $n=m-2$ then $R_{n+2}+R_{n+3}-S_n+d[a_{1,n+3}b_{1,n-1}]$ is not defined, so we ignore it.) \btm Assume that $FN\rep FM$. Then $N\rep M$ iff: (i) For $1\leq i\leq n$ we have $R_i\leq S_i$ or ($1<i<m$ and $R_i+R_{i+1}\leq S_{i-1}+S_i$). (ii) For $1\leq i\leq\min\{ m-1,n\}$ we have $d[a_{1,i}b_{1,i}]\geq A_i$. (iii) For any $1<i\leq\min\{ m-1,n+1\}$ such that $R_{i+1}>S_{i-1}$ and $A_{i-1}+A_i>2e+R_i-S_i$ we have $[b_1,\ldots,b_{i-1}]\rep [a_1,\ldots,a_i]$. (iv) For any $1<i\leq\min\{ m-2,n+1\}$ such that $R_{i+1}\leq S_{i-1}$, $R_{i+2}\leq S_i$ and $R_{i+2}-S_{i-1}>2e$ we have $[b_1,\ldots,b_{i-1}]\rep [a_1,\ldots,a_{i+1}]$. (If $i=n+1$ we ignore the condition $R_{i+2}\leq S_i$.)
\medskip Note that if $n\leq m-2$ and $i=n+1$ then $S_{n+1}$ and $A_{n+1}$ are not defined, but $S_{n+1}+A_{n+1}$ is. Thus the condition $A_n+A_{n+1}>2e+R_{n+1}-S_{n+1}$ from (iii) should be read as $A_n+(S_{n+1}+A_{n+1})>2e+R_{n+1}$. \etm {\bf Remarks} {\bf 1.} We have $R_i-S_{i-1}+d[-a_{1,i}b_{1,i-2}]\geq A_{i-1}$ and $R_{i+1}-S_i+d[-a_{1,i+1}b_{1,i-1}]\geq A_i$, by the definition of $A_i$. So if $A_{i-1}+A_i>2e+R_i-S_i$ then $R_i-S_{i-1}+d[-a_{1,i}b_{1,i-2}]+R_{i+1}-S_i+d[-a_{1,i+1}b_{1,i-1}] >2e+R_i-S_i$ so $d[-a_{1,i}b_{1,i-2}]+d[-a_{1,i+1}b_{1,i-1}]>2e+S_{i-1}-R_{i+1}$. But one can prove that if $M$ and $N$ satisfy the conditions (i) and (ii) of Theorem 1.2 and $R_{i+1}>S_{i-1}$, then $A_{i-1}+A_i>2e+R_i-S_i$ is equivalent to $d[-a_{1,i}b_{1,i-2}]+d[-a_{1,i+1}b_{1,i-1}]>2e+S_{i-1}-R_{i+1}$. Hence the condition (iii) of Theorem 1.2 can be replaced by: \medskip {\em (iii') For any $1<i\leq\min\{ m-1,n+1\}$ such that $R_{i+1}>S_{i-1}$ and $d[-a_{1,i}b_{1,i-2}]+d[-a_{1,i+1}b_{1,i-1}]>2e+S_{i-1}-R_{i+1}$ we have $[b_1,\ldots,b_{i-1}]\rep [a_1,\ldots,a_i]$.} \bigskip {\bf 2.} In some cases conditions (ii) and (iii) (or its equivalent (iii')) of Theorem 1.2 need not be verified. Namely, an index $1\leq i\leq\min\{ m,n+1\}$ is called {\em essential} if $R_{i+1}>S_{i-1}$ and $R_{i+1}+R_{i+2}>S_{i-2}+S_{i-1}$. (The inequalities that do not make sense because $R_{i+1}$, $R_{i+2}$, $S_{i-2}$ or $S_{i-1}$ is not defined are ignored.) Then condition (ii) is vacuous at an index $i$ if both $i$ and $i+1$ are not essential and condition (iii) is vacuous at an index $i$ if $i$ is not essential. \bigskip {\bf 3.} Condition (iv) of Theorem 1.2 can be replaced by a stronger condition, where the inequalities $R_{i+1}\leq S_{i-1}$ and $R_{i+2}\leq S_i$ are ignored. We have an even more general result. If $N\rep M$ and $R_l-S_j>2e$ for some $1\leq l\leq m$, $1\leq j\leq n$, then $[b_1,\ldots,b_j]\rep [a_1,\ldots,a_{l-1}]$.
In all cases but the one described in condition (iv), this follows from conditions (i), (ii) and (iii). In particular, if $R_{i+1}-S_i>2e$ then $[b_1,\ldots,b_i]\cong [a_1,\ldots,a_i]$. By taking determinants we get that in $\fff/\fff^2$ we have $a_{1,i}=b_{1,i}$ or, equivalently $d(a_{1,i}b_{1,i})=\infty$. In fact, we have an even stronger result. If $N\rep M$ and $R_l-S_j>2e$ then\\ $\prec b_1,\ldots,b_j\succ\rep\prec a_1,\ldots,a_{l-1}\succ$. \section{The main result} \btm Let $M$ be an integral quadratic lattice with $M\cong\prec a_1,\ldots,a_m\succ$ relative to a good BONG and let $R_i=R_i(M)$ for $1\leq i\leq m$, $\alpha_i=\alpha_i(M)$ for $1\leq i\leq m-1$. Then $M$ is universal if and only if $m\geq 2$, $R_1=0$ and we have one of the cases below: \medskip I (a) $\alpha_1=0$ or, equivalently, $R_2=-2e$. ~~(b) If $m=2$ or $R_3>1$, then $[a_1,a_2]$ is isotropic. ~~(c) If $m\geq 3$, $R_3=1$ and either $m=3$ or $R_4>2e+1$, then $[a_1,a_2]$ is isotropic. \medskip II (a) $m\geq 3$ and $\alpha_1=1$. ~~(b) If $R_2=1$ or $R_3>1$, then $m\geq 4$ and $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1$. ~~(c) If $R_2\leq 0$, $R_3\leq 1$ and either $m=3$ or $R_4-R_3>2e$, then $[a_1,a_2,a_3]$ is isotropic. \etm \medskip {\bf Remark} By \S1.3, property (2), we have $\alpha_1=0$ iff $R_2-R_1=-2e$. So if $R_1=0$ then $\alpha_1=0$ is equivalent to $R_2=-2e$. \bigskip The condition that $M$ is integral, i.e. that $Q(M)\subseteq\oo$, is equivalent to $\nnn M\subseteq\oo$. But $\nnn M=\p^{R_1}$ so we have: \blm $M$ is integral iff $R_1\geq 0$. \elm As noted in [XZ, Lemma 2.2], an integral lattice $M$ is universal iff it represents all elements of $\ooo\cup\pi\ooo$, i.e. the elements $b\in\fff$ with $\ord b=0$ or $1$. In terms of Theorem 1.2, this means that $M$ represents every unary lattice $N\cong\prec b_1\succ$ with $S_1=\ord b_1\in\{ 0,1\}$.
Then Theorem 1.2 in the case $n=1$ implies that: \blm The integral lattice $M$ is universal iff for every $N=\prec b_1\succ$ with $S_1\in\{ 0,1\}$ we have $FN\rep FM$ and: (i) $R_1\leq S_1$. (ii) $d[a_1b_1]\geq A_1$. (iii') If $m\geq 3$, $R_3>S_1$ and $d[-a_{1,2}]+d[-a_{1,3}b_1]>2e+S_1-R_3$ then $[b_1]\rep [a_1,a_2]$. (iv) If $m\geq 4$, $R_3\leq S_1$ and $R_4-S_1>2e$ then $[b_1]\rep [a_1,a_2,a_3]$. \elm Note that for (iii') we used the fact that $d[-a_{1,2}b_{1,0}]=d[-a_{1,2}]$. As a consequence of this, we also have $A_1=\min\{ (R_2-S_1)/2+e,\, R_2-S_1+d[-a_{1,2}]\}$. \blm The condition that $FN\rep FM$ holds for all $N$ iff $FM$ is universal. \elm \pf The condition that $FM$ represents every $FN\cong [b_1]$ with $\ord b_1\in\{ 0,1\}$ is equivalent to $b_1\rep FM$ for every $b_1\in\fff$, i.e. to $FM$ being universal. (Indeed, every $b\in\fff$ can be written as $b=\pi^{2k}b_1$ with $\ord b_1\in\{ 0,1\}$, and representation by a quadratic space is preserved under multiplication by squares.) \qed \blm If $M$ is integral, the condition (i) of Lemma 2.3 holds for all $N$ iff $R_1=0$. \elm \pf Since $M$ is integral, we have $R_1\geq 0$. Condition (i) holds for every $N$ iff $R_1\leq S_1$ for $S_1\in\{ 0,1\}$. Hence the conclusion. \qed \blm If $R_1\leq S_1$ then $\alpha_1\geq A_1$, with equality when $R_1=S_1$. Consequently, if $R_1\leq S_1$ then $d[a_1b_1]\geq A_1$ is equivalent to $d(a_1b_1)\geq A_1$. \elm \pf We have $A_1=\min\{ (R_2-S_1)/2+e,\, R_2-S_1+d[-a_{1,2}]\}$ and, by \S1.4, we also have $\alpha_1=\min\{ (R_2-R_1)/2+e,\, R_2-R_1+d[-a_{1,2}]\}$. So if $R_1\leq S_1$ then $\alpha_1\geq A_1$ and if $R_1=S_1$ then $\alpha_1=A_1$. Assume now that $R_1\leq S_1$ so $\alpha_1\geq A_1$. Then, since $d[a_1b_1]=\min\{ d(a_1b_1),\alpha_1\}$, we have $d[a_1b_1]\geq A_1$ iff $d(a_1b_1)\geq A_1$. \qed \blm If $R_1=0$ then condition (ii) of Lemma 2.3 holds for all $N$ with $S_1=0$ iff $\alpha_1\leq 1$. \elm \pf By Lemma 2.6, if $S_1=0=R_1$ then $A_1=\alpha_1$ and the condition (ii) of Lemma 2.3, $d[a_1b_1]\geq A_1$, writes as $d(a_1b_1)\geq A_1=\alpha_1$.
Since $\ord a_1b_1=R_1+S_1=0$ is even, we have $d(a_1b_1)\geq 1$, so the condition that $1\geq\alpha_1$ is sufficient. For the necessity, let $\varepsilon\in\ooo$ with $d(\varepsilon )=1$ and let $b_1=\varepsilon a_1$. Then $S_1=\ord b_1=\ord a_1=0$ and in $\fff/\fff^2$ we have $a_1b_1=\varepsilon$ so $d(a_1b_1)=d(\varepsilon )=1$. Hence we must have $1=d(a_1b_1)\geq\alpha_1$. \qed \blm (i) If $\alpha_i=0$ then $d[-a_{i,i+1}]\geq 2e$. (ii) If $\alpha_i=1$ then $-2e<R_{i+1}-R_i\leq 1$ or, equivalently, either $R_{i+1}-R_i=1$ or $R_{i+1}-R_i$ is even and $2-2e\leq R_{i+1}-R_i\leq 0$. Moreover, $d[-a_{i,i+1}]\geq R_i-R_{i+1}+1$, with equality if $R_{i+1}-R_i\neq 2-2e$. (iii) If $R_{i+1}-R_i\in\{ 2-2e,1\}$ then $\alpha_i=1$ unconditionally. If $2-2e<R_{i+1}-R_i\leq 0$ then $\alpha_i=1$ iff $d[-a_{i,i+1}]=R_i-R_{i+1}+1$. \elm \pf By \S1.3, property (2), we have $\alpha_i=0$ iff $R_{i+1}-R_i=-2e$. Assume that $\alpha_i=1$. By \S1.2, we have $R_{i+1}-R_i\geq -2e$. But we cannot have $R_{i+1}-R_i=-2e$, since this would imply $\alpha_i=0$. So $R_{i+1}-R_i>-2e$. We cannot have $R_{i+1}-R_i>2e$, since by \S1.3, property (5), this would imply $\alpha_i>2e$. So $R_{i+1}-R_i\leq 2e$, which, by property (3), implies $1=\alpha_i\geq R_{i+1}-R_i$. So $-2e<R_{i+1}-R_i\leq 1$. By \S1.2, $R_{i+1}-R_i$ cannot be odd and negative. So we have either $R_{i+1}-R_i=1$ or $R_{i+1}-R_i$ is even and $2-2e\leq R_{i+1}-R_i\leq 0$. We now use the relation $\alpha_i=\min\{ (R_{i+1}-R_i)/2+e,\, R_{i+1}-R_i+d[-a_{i,i+1}]\}$. This implies that $\alpha_i\leq R_{i+1}-R_i+d[-a_{i,i+1}]$, so $d[-a_{i,i+1}]\geq R_i-R_{i+1}+\alpha_i$, with equality if $\alpha_i<(R_{i+1}-R_i)/2+e$. If $\alpha_i=0$, so $R_{i+1}-R_i=-2e$, we get $d[-a_{i,i+1}]\geq R_i-R_{i+1}=2e$, which concludes the proof of (i). If $\alpha_i=1$ we get $d[-a_{i,i+1}]\geq R_i-R_{i+1}+1$, with equality if $1<(R_{i+1}-R_i)/2+e$, i.e. if $R_{i+1}-R_i>2-2e$. This concludes the proof of (ii).
(iii) By \S1.3, properties (3) and (4), if $R_{i+1}-R_i=1$ then $\alpha_i=R_{i+1}-R_i=1$ and if $R_{i+1}-R_i=2-2e$ then $\alpha_i=(R_{i+1}-R_i)/2+e=1$. If $2-2e<R_{i+1}-R_i\leq 0$, then the necessity of $d[-a_{i,i+1}]=R_i-R_{i+1}+1$ follows from (ii). Conversely, assume that $d[-a_{i,i+1}]=R_i-R_{i+1}+1$. Then $R_{i+1}-R_i+d[-a_{i,i+1}]=1$ and, since $R_{i+1}-R_i>2-2e$, we also have $(R_{i+1}-R_i)/2+e>1$. It follows that $\alpha_i=\min\{ (R_{i+1}-R_i)/2+e,\, R_{i+1}-R_i+d[-a_{i,i+1}]\} =1$. \qed We are interested in the case when $R_1=0$ and $i=1$. We get: \bco Assume that $m\geq 2$ and $R_1=0$. Then we have: (i) If $\alpha_1=0$ then $d[-a_{1,2}]\geq 2e$. (ii) If $\alpha_1=1$ then $-2e<R_2\leq 1$ or, equivalently, either $R_2=1$ or $R_2$ is even and $2-2e\leq R_2\leq 0$. Moreover, $d[-a_{1,2}]\geq 1-R_2$, with equality if $R_2\neq 2-2e$. (iii) If $R_2\in\{ 2-2e,1\}$ then $\alpha_1=1$ unconditionally. If $2-2e<R_2\leq 0$ then $\alpha_1=1$ iff $d[-a_{1,2}]=1-R_2$. \eco We define the following statement, which is slightly stronger than II (a): \medskip {\em II (a') $m\geq 3$, $\alpha_1=1$ and $d[-a_{1,2}]=1-R_2$.} \medskip (By Corollary 2.9(ii), if $R_1=0$ and $R_2\neq 2-2e$, then the extra condition that $d[-a_{1,2}]=1-R_2$ is superfluous, as it follows from $R_1=0$, $\alpha_1=1$.) \blm Assume that $FM$ is universal and $R_1=0$. Then condition (ii) of Lemma 2.3 holds for every $N$ iff we have I (a) or II (a'). \elm \pf By Lemma 2.7, the condition (ii) of Lemma 2.3 in the case $S_1=0$ is equivalent to $\alpha_1\leq 1$. We must prove that, assuming that $R_1=0$ and $\alpha_1\leq 1$, the condition (ii) of Lemma 2.3 holds for every $N$ with $S_1=1$ iff the additional conditions from II (a'), $m\geq 3$ and $d[-a_{1,2}]=1-R_2$, hold. Since $\ord a_1b_1=R_1+S_1=1$ is odd, we have $d[a_1b_1]=0$, so condition (ii) of Lemma 2.3 writes as $0\geq A_1$. If $\alpha_1=0$, so $R_2=-2e$, then $A_1\leq (R_2-S_1)/2+e=-1/2<0$ so we are done.
If $\alpha_1=1$ and $d[-a_{1,2}]=1-R_2$ then $A_1\leq R_2-S_1+d[-a_{1,2}]=R_2-1+(1-R_2)=0$ so again we are done. So we have the sufficiency of the condition $d[-a_{1,2}]=1-R_2$ from II (a'). For the necessity, assume that $\alpha_1=1$ and $d[-a_{1,2}]\neq 1-R_2$. By Corollary 2.9(ii), this means that $R_2=2-2e$ and $d[-a_{1,2}]>1-R_2$. Then $R_2-S_1+d[-a_{1,2}]>R_2-1+(1-R_2)=0$ and $(R_2-S_1)/2+e=((2-2e)-1)/2+e=1/2>0$. It follows that $A_1=\min\{ (R_2-S_1)/2+e,\, R_2-S_1+d[-a_{1,2}]\} >0$, so the condition (ii) of Lemma 2.3 doesn't hold. To complete the proof, we show that the remaining condition, $m\geq 3$, from II (a'), follows from the fact that $FM$ is universal. If $m=2$, then $d(-a_{1,2})=d[-a_{1,2}]=1-R_2<\infty$ so $-a_{1,2}\notin\fff^2$. It follows that $FM\cong [a_1,a_2]$ is not isotropic and so it is not universal. \qed \blm Assume that $R_1=0$ and $R_2=-2e$. (i) We have $a_{1,2}\in -\fff^2$ or $-\Delta\fff^2$. In the first case $[a_1,a_2]$ is isotropic. In the second case $[a_1,a_2]$ represents precisely the elements of $\fff$ with even orders. In particular, in both cases $[a_1,a_2]$ represents the elements of $\fff$ with even orders. (ii) Assume that $m\geq 3$. If $R_3=0$ then $[a_1,a_2,a_3]$ is isotropic. If $R_3=1$ then $[a_1,a_2,a_3]$ is isotropic iff $[a_1,a_2]$ is isotropic. \elm \pf (i) By \S1.2, as a consequence of $R_2-R_1=-2e$, we have $-a_{1,2}\in\fff^2$ or $\Delta\fff^2$. In the first case $[a_1,a_2]$ is binary of determinant $-1$ so it is isotropic. In the second case, for every $b\in\fff$ we have $b\rep [a_1,a_2]$ iff $(a_1b,-a_{1,2})_\p =(a_1b,\Delta )_\p =1$. But this happens iff $\ord a_1b=\ord b$ is even. (Recall that $\ord a_1=R_1=0$.) (ii) We have that $[a_1,a_2,a_3]$ is isotropic iff $-a_3\rep [a_1,a_2]$. If $R_3=0$ then $\ord a_3=R_3$ is even so $-a_3$ is represented by $[a_1,a_2]$ in both cases from (i). Hence $[a_1,a_2,a_3]$ is isotropic. Suppose now that $[a_1,a_2,a_3]$ is isotropic and $R_3=1$.
Then $[a_1,a_2]$ represents $-a_3$ and, since $\ord a_3=R_3$ is odd, this is possible only if $[a_1,a_2]$ is isotropic. The reverse implication is trivial. \qed \blm If $m\geq 3$ and $R_1=0$ then $R_3\geq 0$. If moreover $R_2=1$, then $R_3\geq 1$. \elm \pf By the properties of the good BONGs, $R_3\geq R_1=0$. If $R_2=1$ then we cannot have $R_3=0$, since $R_3-R_2$ cannot be odd and negative. So in this case $R_3\geq 1$. \qed \blm If $FM$ is universal, $R_1=0$ and $M$ satisfies I (a) or II (a') then the condition (iii') of Lemma 2.3 is satisfied for every $N$ iff $M$ satisfies I (b) or II (b), accordingly. \elm \pf Suppose that we have I (a). Then $\alpha_1=0$, $R_2=-2e$ and, by Corollary 2.9(i), $d[-a_{1,2}]\geq 2e$. By Lemma 2.11(i), $[a_1,a_2]$ represents all units so if $S_1=\ord b_1=0$ then $b_1\rep [a_1,a_2]$ so (iii') holds trivially. Suppose now that $S_1=1$. If $m=2$ then $FM\cong [a_1,a_2]$ must be isotropic because it is universal. If $R_3\leq S_1=1$ then (iii') holds trivially. Suppose now that $R_3>1$. Hence $R_3>S_1$ and we also have $d[-a_{1,2}]+d[-a_{1,3}b_1]\geq d[-a_{1,2}]\geq 2e>2e+S_1-R_3$. Hence condition (iii') holds iff $b_1\rep [a_1,a_2]$. Since $\ord b_1=S_1$ is odd, by Lemma 2.11(i), this can only happen if $[a_1,a_2]$ is isotropic. Conversely, if $[a_1,a_2]$ is isotropic then it is universal so (iii') holds trivially. In conclusion, the condition (iii') holds iff $M$ satisfies I (b). Suppose now that we have II (a'). By Lemma 2.12, $R_3\geq 0$ and if $R_2=1$ then $R_3\geq 1$. Suppose first that $R_2\leq 0$ and $R_3\leq 1$. We prove that the condition (iii') of Lemma 2.3 holds unconditionally. By Corollary 2.9(ii), $R_2$ is even and $2-2e\leq R_2\leq 0$. If $R_3=0$ then $R_3\leq S_1$ so (iii') holds trivially. Suppose now that $R_3=1$. If $S_1=1$ then $R_3\leq S_1$ so (iii') holds trivially. If $S_1=0$ then $\ord a_{1,3}b_1=R_1+R_2+R_3+S_1$ is odd so $d[-a_{1,3}b_1]=0$. (We have $R_1=S_1=0$, $R_3=1$ and $R_2$ is even.)
Hence $d[-a_{1,2}]+d[-a_{1,3}b_1]=1-R_2+0\leq 2e-1=2e+S_1-R_3$ so again (iii') holds trivially. So we are left with the case when $R_2=1$ or $R_3>1$. We claim that $d(-a_{1,2})=d[-a_{1,2}]=1-R_2$. If $R_2=1$ then $\ord a_{1,2}=R_1+R_2=1$ so $d(-a_{1,2})=0=1-R_2$. Suppose now that $R_2\leq 0$ and $R_3>1$. Since $R_2\geq 2-2e$, we have $1-R_2\leq 2e-1$. If $R_3-R_2>2e$ then, by \S1.3, property (5), we have $\alpha_2>2e>1-R_2$. If $R_3-R_2\leq 2e$ then, by the property (3), we have $\alpha_2\geq R_3-R_2>1-R_2$. So we have $\min\{ d(-a_{1,2}),\alpha_2\} =d[-a_{1,2}]=1-R_2$ and $\alpha_2>1-R_2$. It follows that $d(-a_{1,2})=1-R_2$. Suppose first that $S_1\equiv R_2+R_3\pmod 2$. Recall that if $R_2=1$ then $R_3\geq 1$ and if $R_2\leq 0$ then $R_3>1$. So, with the exception of the case $R_2=R_3=1$, we have $R_3>1\geq S_1$. If $R_2=R_3=1$ then $S_1\equiv R_2+R_3\equiv 0\pmod 2$ and $S_1\in\{ 0,1\}$ so $S_1=0<R_3$. Hence in this case the inequality $R_3>S_1$ from Lemma 2.3(iii') is satisfied. Condition (iii') states that if moreover $d[-a_{1,2}]+d[-a_{1,3}b_1]>2e+S_1-R_3$, then $b_1\rep [a_1,a_2]$. Note that $d[-a_{1,3}b_1]=\min\{ d(-a_{1,3}b_1),\alpha_3\}$, with $\alpha_3$ ignored if $m=3$. If $m\geq 4$ then $d[-a_{1,3}b_1]\leq\alpha_3$ so if $d[-a_{1,2}]+\alpha_3\leq 2e+S_1-R_3$ then also $d[-a_{1,2}]+d[-a_{1,3}b_1]\leq 2e+S_1-R_3$ so condition (iii') holds trivially. Suppose now that $m=3$ or $m\geq 4$ and $d[-a_{1,2}]+\alpha_3>2e+S_1-R_3$. Then, by the formula $d[-a_{1,3}b_1]=\min\{ d(-a_{1,3}b_1),\alpha_3\}$, $d[-a_{1,2}]+d[-a_{1,3}b_1]>2e+S_1-R_3$ is equivalent to $d[-a_{1,2}]+d(-a_{1,3}b_1)>2e+S_1-R_3$. So, for (iii') to hold, $[a_1,a_2]$ must represent every $b_1\in\fff$ with $\ord b_1=S_1$ and $d[-a_{1,2}]+d(-a_{1,3}b_1)>2e+S_1-R_3$. We have $R_2\geq 2-2e$ so $d(-a_{1,2})=1-R_2\leq 2e-1$. Then, by \S1.1, there is $\varepsilon\in\ooo$ such that $d(\varepsilon )=2e-d(-a_{1,2})=2e-d[-a_{1,2}]$ and $(\varepsilon,-a_{1,2})_p=-1$.
Since $\ord a_{1,3}=R_1+R_2+R_3=R_2+R_3\equiv S_1\pmod 2$, there is $b\in\fff$, say, $b=-\pi^{S_1-R_2-R_3}a_{1,3}$, such that $\ord b=S_1$ and $b\in -a_{1,3}\fff^2$. It follows that $-a_{1,3}b\in\fff^2$ and $-a_{1,3}b\varepsilon\in\varepsilon\fff^2$ so $d(-a_{1,3}b)=\infty$ and $d(-a_{1,3}b\varepsilon )=d(\varepsilon )=2e-d[-a_{1,2}]$. In both cases when $b_1=b$ or $b\varepsilon$, we have $\ord b_1=\ord b=S_1$ and $d(-a_{1,3}b_1)\geq 2e-d[-a_{1,2}]$, which implies $d[-a_{1,2}]+d(-a_{1,3}b_1)\geq 2e>2e+S_1-R_3$. So in both cases we have ${b_1\rep [a_1,a_2]}$, which is equivalent to $(a_1b_1,-a_{1,2})_p=1$. We get $(a_1b,-a_{1,2})_p=(a_1b\varepsilon,-a_{1,2})_p=1$, which implies $(\varepsilon,-a_{1,2})_p=1$. But this contradicts the choice of $\varepsilon$. So the condition that $m\geq 4$ and $d[-a_{1,2}]+\alpha_3\leq 2e+S_1-R_3$ is not only sufficient, but also necessary. Since $d[-a_{1,2}]=1-R_2$, this inequality writes as $\alpha_3\leq 2e-1+S_1+R_2-R_3$. We have $S_1\equiv R_3-R_2\pmod 2$ and $S_1\in\{ 0,1\}$, so $S_1=R_3-R_2-2[\frac{R_3-R_2}2]$. It follows that $2e-1+S_1+R_2-R_3=2(e-[\frac{R_3-R_2}2])-1$. So we have proved that if $R_2=1$ or $R_3>1$ then the condition that $m\geq 4$ and $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1$ is necessary and sufficient for condition (iii') of Lemma 2.3 to hold in the case when $S_1\equiv R_2+R_3\pmod 2$. To conclude the proof, we show that this condition is also sufficient for (iii') to hold in the case when $S_1\equiv R_2+R_3+1\pmod 2$. So assume that $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1$ and $S_1\equiv R_2+R_3+1\pmod 2$. Suppose that $d[-a_{1,2}]+d[-a_{1,3}b_1]>2e+S_1-R_3$. We have $d[-a_{1,2}]=1-R_2$ and $\ord a_{1,3}b_1=R_1+R_2+R_3+S_1=R_2+R_3+S_1\equiv 1\pmod 2$ so $d[-a_{1,3}b_1]=0$. Hence $(1-R_2)+0>2e+S_1-R_3\geq 2e-R_3$, which implies $R_3-R_2>2e-1$, so $R_3-R_2\geq 2e$. It follows that $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1\leq 2(e-[\frac{2e}2])-1=-1$. But $\alpha_3\geq 0$, by \S1.3, property (2). Contradiction.
Hence $d[-a_{1,2}]+d[-a_{1,3}b_1]\leq 2e+S_1-R_3$ and so (iii') holds trivially. \qed \blm If $FM$ is universal, $R_1=0$ and $M$ satisfies I (a) or II (a') then the condition (iv) of Lemma 2.3 is satisfied for every $N$ iff $M$ satisfies I (c) or II (c), accordingly. \elm \pf Recall that $b_1\rep [a_1,a_2,a_3]$ iff $b_1\notin -a_{1,3}\fff^2$ or $[a_1,a_2,a_3]$ is isotropic. Let $S\in\{ 0,1\}$ be such that $S\equiv R_2+R_3\pmod 2$. We claim that condition (iv) of Lemma 2.3 holds for every $N$ iff the following statement holds. (*) If $m\geq 3$, $R_3\leq S$ and either $m=3$ or $R_4-S>2e$ then $[a_1,a_2,a_3]$ is isotropic. Assume first that $m\geq 4$. If $S_1\equiv R_2+R_3+1\pmod 2$ then $\ord b_1=S_1$ and $\ord a_{1,3}=R_1+R_2+R_3=R_2+R_3$ have opposite parities so we cannot have $b_1\in -a_{1,3}\fff^2$. Therefore $b_1\rep [a_1,a_2,a_3]$ so in this case condition (iv) of Lemma 2.3 holds unconditionally. Suppose now that $S_1\equiv R_2+R_3\pmod 2$, i.e. that $S_1=S$. If $R_3>S_1=S$ or $R_4-S=R_4-S_1\leq 2e$, then (iv) holds trivially. So we assume that $R_3\leq S=S_1$ and $R_4-S=R_4-S_1>2e$. Then condition (iv) of Lemma 2.3 states that $b_1\rep [a_1,a_2,a_3]$. So $[a_1,a_2,a_3]$ must represent all elements of $\fff$ of order $S_1=S$. Since $\ord a_{1,3}=R_1+R_2+R_3=R_2+R_3\equiv S\pmod 2$ there is $b_1\in -a_{1,3}\fff^2$ with $\ord b_1=S$, say, $b_1=-\pi^{S-R_2-R_3}a_{1,3}$. Then $b_1\rep [a_1,a_2,a_3]$ and $b_1\in -a_{1,3}\fff^2$, so $[a_1,a_2,a_3]$ is isotropic. Conversely, if $[a_1,a_2,a_3]$ is isotropic, then it is universal, so (iv) holds. If $m\leq 3$ then (iv) is vacuous, but the case when $m=3$ and $R_3\leq S$ can still be included here because when $m=3$ we have that $FM\cong [a_1,a_2,a_3]$ is universal, so isotropic. Suppose first that we have I (a). If $R_3>1$ then $R_3>S$ so (*) holds trivially. If $R_3=0$ then, by Lemma 2.11(ii), $[a_1,a_2,a_3]$ is isotropic, so (*) holds unconditionally. We are left with the case $R_3=1$.
Then $R_2+R_3=1-2e$ is odd so $S=1$ and $R_3\leq S$ holds. The inequality $R_4-S>2e$ writes as $R_4>2e+1$. Therefore (*) is equivalent to: I (c') If $m\geq 3$, $R_3=1$ and either $m=3$ or $R_4>2e+1$, then $[a_1,a_2,a_3]$ is isotropic. But when $R_3=1$, by Lemma 2.11(ii), $[a_1,a_2,a_3]$ is isotropic iff $[a_1,a_2]$ is isotropic. Hence I (c') is equivalent to I (c). Assume now that $M$ satisfies II (a'). Suppose first that $R_3>1$. Since $S\leq 1$ we get $R_3>S$ so (*) holds trivially. So we may assume that $R_3\leq 1$. Next suppose that $R_2=1$. By Lemma 2.12, $R_3\geq 1$, so $R_3=1$. Since $R_2+R_3=2$ is even, we get $S=0$ and again $R_3>S$, so (*) holds trivially. Since for $R_2=1$ or $R_3>1$ (*) holds unconditionally, we are left with the case when $R_2\leq 0$ and $R_3\leq 1$. Since $R_2\leq 0$ we have that $R_2$ is even and so $S\equiv R_2+R_3\equiv R_3\pmod 2$. Since $R_3,S\in\{ 0,1\}$ and $R_3\equiv S\pmod 2$, we have $S=R_3$. In particular, $R_3\leq S$ holds. So, in order that (*) holds we need that $[a_1,a_2,a_3]$ is isotropic if $m=3$ or $m\geq 4$ and $R_4-R_3=R_4-S>2e$. So (*) writes as follows. If $m\geq 3$, $R_2\leq 0$ and $R_3\leq 1$ and either $m=3$ or $m\geq 4$ and $R_4-R_3>2e$, then $[a_1,a_2,a_3]$ is isotropic. But the condition that $m\geq 3$ is part of II (a') so it can be dismissed. So we get II (c). \qed {\bf Proof of Theorem 2.1.} By Lemmas 2.2, 2.5, 2.10, 2.13 and 2.14, $M$ is universal iff $FM$ is universal, $R_1=0$ and we have either I (a), (b) and (c) or II (a'), (b) and (c). Since $FM$ is universal, we have $m\geq 2$. Then, to conclude the proof, we must show that II (a') can be replaced by II (a) and that the condition that $FM$ is universal is superfluous, as if $m\geq 2$ and $R_1=0$ then it follows both from I and from II. First we prove that the extra condition from II (a'), that $d[-a_{1,2}]=1-R_2$, is superfluous, as it follows from II (a) and (b).
Since $R_1=0$ and $\alpha_1=1$, by Corollary 2.9(ii), $d[-a_{1,2}]\geq 1-R_2$, with equality if $R_2\neq 2-2e$. So we only have to consider the case $R_2=2-2e$, when we only have $d[-a_{1,2}]\geq 1-R_2=2e-1$. Assume that $R_3>1$. Then, by II (b), we have that $m\geq 4$ and $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1$. But $R_3\geq 2$ so $R_3-R_2=R_3-(2-2e)\geq 2e$. Then $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1\leq 2(e-[\frac{2e}2])-1=-1$. But, by \S1.3, property (2), we have $\alpha_3\geq 0$. Contradiction. Hence $R_3\leq 1$. If $R_3=0$ then $R_3-R_2=2e-2$ so, by \S1.3, property (4), $\alpha_2=(R_3-R_2)/2+e=2e-1$. If $R_3=1$ then $R_3-R_2=2e-1$ is odd and $<2e$ so, by the property (3), $\alpha_2=R_3-R_2=2e-1$. Then $2e-1\leq d[-a_{1,2}]=\min\{ d(-a_{1,2}),\alpha_2\}\leq\alpha_2=2e-1$ so $d[-a_{1,2}]=2e-1=1-R_2$. Next we prove that if $m\geq 2$, $R_1=0$ and we have I or II, then $FM$ is universal. If $m=2$ then we are in case I and we have by I (b) that $FM=[a_1,a_2]$ is isotropic and so it is universal. Suppose now that $m=3$. We consider first case I. If $R_3>1$ then $[a_1,a_2,a_3]$ is isotropic by I (b), if $R_3=1$ then it is isotropic by I (c) and if $R_3=0$ it is isotropic by Lemma 2.11(ii). If we are in the case II then we cannot have $R_2=1$ or $R_3>1$, since by II (b) this implies that $m\geq 4$. In the remaining case, $R_2\leq 0$ and $R_3\leq 1$, since $m=3$ we have by II (c) that $[a_1,a_2,a_3]$ is isotropic. So, in all cases, $FM=[a_1,a_2,a_3]$ is isotropic and so universal. If $m\geq 4$ then $FM$ is universal unconditionally. \qed {\bf Remark.} If we make the convention that $R_i\gg 0$ for $i>m$ then conditions I and II can be written in a more compact way, without reference to the value of $m$. Namely one can write them as: {\em I (a) $\alpha_1=0$ or, equivalently, $R_2=-2e$. ~~(b) If $R_3>1$, then $[a_1,a_2]$ is isotropic. ~~(c) If $R_3=1$ and $R_4>2e+1$, then $[a_1,a_2]$ is isotropic.} \medskip {\em II (a) $\alpha_1=1$.
~~(b) If $R_2=1$ or $R_3>1$, then $m\geq 4$ and $\alpha_3\leq 2(e-[\frac{R_3-R_2}2])-1$. ~~(c) If $R_2\leq 0$, $R_3\leq 1$ and $R_4-R_3>2e$, then $[a_1,a_2,a_3]$ is isotropic.} \section{Main result in terms of Jordan decompositions} We now give, without a proof, a translation of Theorem 2.1, in terms of Jordan decompositions. \btm Let $M=M_1\perp\cdots\perp M_t$ be a Jordan decomposition, with $\sss M_k=\p^{r_k}$, $\nnn M^{\sss M_k}=u_k$ and $\www M^{\sss M_k}=\www_k$ for $1\leq k\leq t$. For $1\leq k\leq t-1$ we consider the ideal $\FFF_k$ defined in [OM, \S93E.]. Then $M$ is universal iff $\rank M\geq 2$, $\nnn M=\oo$, or, equivalently, $u_1=0$, and one of the following happens: (1) $\rank M_1\geq 4$ and $\www_1\supseteq\p$. (2) $\rank M_1=3$, $\www_1=\p$ and one of the following happens: (2.1) $t\geq 2$ and $u_2\leq 2e$. (2.2) $M_1$ is isotropic. (3) $\rank M_1=2$ and one of the following happens: (3.1) $\sss M_1=2\oo$ and one of the following happens: (3.1.1) $t\geq 2$ and $u_2=0$. (3.1.2) $t\geq 2$, $u_2=1$ and either $\rank M_2\geq 2$ or $\rank M_2=1$, $t\geq 3$ and $u_3\leq 2e+1$. (3.1.3) $M_1\cong\frac 12 A(0,0)$. (3.2) $\www M_1=\p$, $m\geq 3$ and one of the following happens: (3.2.1) $u_2>1$, $\rank M_2\geq 2$ and $\www_2\supset 4\p^{r_1+u_2-2[u_2/2]}$. (3.2.2) $u_2>1$, $\rank M_2=1$, $t\geq 3$ and $\FFF_2\supset 4\p^{r_1-2[u_2/2]}$. (3.2.3) $u_2\leq 1$ and $\rank M_2\geq 2$. (3.2.4) $u_2\leq 1$, $\rank M_2=1$, $t\geq 3$ and $u_3\leq u_2+2e$. (3.2.5) $u_2\leq 1$, $\rank M_2=1$ and $M_1\perp M_2$ is isotropic. (4) $\rank M_1=1$, $u_2=1$, $m\geq 4$ and one of the following happens: (4.1) $\rank M_2\geq 3$. (4.2) $\rank M_2=2$ and $u_3\leq 2e$. (4.3) $\rank M_2=1$ and one of the following happens: (4.3.1) $\rank M_3\geq 2$ and $\www_3\supset 4\p^{u_3-2[(u_3-1)/2]}$. (4.3.2) $\rank M_3=1$ and $\FFF_3\supset 4\p^{-2[(u_3-1)/2]}$. \etm \section*{References} [B1] C.N. Beli, Integral spinor norms over dyadic local fields, J. Number Theory 102 (2003) 125-182.
\medskip \noindent [B2] C.N. Beli, Representations of integral quadratic forms over dyadic local fields, Electronic Research Announcements of the American Mathematical Society 12, 100-112, electronic only (2006). \medskip \noindent [B3] C.N. Beli, A new approach to classification of integral quadratic forms over dyadic local fields, Transactions of the American Mathematical Society 362 (2010), 1599-1617. \medskip \noindent [OM] O. T. O’Meara, Introduction to Quadratic Forms, Springer-Verlag, Berlin (1963). \medskip \noindent [XZ] Xu Fei and Zhang Yang, On indefinite and potentially universal quadratic forms over number fields, preprint. (arXiv:2004.02090) \end{document}
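As an illustrative aside to the paper above (not part of the paper itself), the compact form of conditions I and II from the final Remark can be transcribed into a small predicate. Everything in this sketch is an assumption of the transcription: the isotropy of $[a_1,a_2]$ and of $[a_1,a_2,a_3]$ is supplied as a boolean input rather than computed, $\alpha_1$ and $\alpha_3$ are supplied directly rather than derived from the BONG, a large sentinel integer encodes the convention $R_i\gg 0$ for $i>m$, and the bracket $[\,\cdot\,]$ is read as the floor function (Python's `//`, which floors toward $-\infty$ even for negative arguments).

```python
BIG = 10**9  # sentinel standing in for the convention R_i >> 0 when i > m

def is_universal(R, m, e, alpha1, alpha3, binary_iso, ternary_iso):
    """Illustrative transcription of the compact conditions I and II.

    R          -- list of the orders R_1, ..., R_m
    m          -- rank of M
    e          -- ord 2 of the dyadic field
    alpha1/3   -- the invariants alpha_1 and alpha_3 (alpha3 may be None
                  whenever the m >= 4 branch of II (b) is never reached)
    binary_iso -- whether [a_1, a_2] is isotropic (assumed given)
    ternary_iso-- whether [a_1, a_2, a_3] is isotropic (assumed given)
    """
    R1, R2, R3, R4 = (list(R) + [BIG] * 4)[:4]
    if m < 2 or R1 != 0:
        return False
    # Condition I: (a) alpha_1 = 0 (equivalently R_2 = -2e), plus (b), (c).
    cond_I = (alpha1 == 0
              and (not R3 > 1 or binary_iso)                          # I (b)
              and (not (R3 == 1 and R4 > 2 * e + 1) or binary_iso))   # I (c)
    # Condition II: (a) alpha_1 = 1, plus (b), (c).
    II_b = (not (R2 == 1 or R3 > 1)
            or (m >= 4 and alpha3 <= 2 * (e - (R3 - R2) // 2) - 1))
    II_c = (not (R2 <= 0 and R3 <= 1 and R4 - R3 > 2 * e)
            or ternary_iso)
    cond_II = alpha1 == 1 and II_b and II_c
    return cond_I or cond_II
```

For instance, a binary lattice with $R_1=0$, $R_2=-2e$ falls under condition I and is universal precisely when $[a_1,a_2]$ is isotropic, since the convention makes $R_3\gg 1$ trigger I (b).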
The governor (Wali) of Kassala State, Maj. Gen. Mahmoud Babiker Hamad, has affirmed the long-standing and firm relations between Sudan and Eritrea, stressing the two countries’ keenness to establish continuous brotherly relations that serve the interests of both peoples. Addressing the Eritrean community’s celebration in Kassala State of their country’s 28th independence anniversary, the Wali said that Kassala is especially close to Eritrea socially and economically, and that most of Sudan’s border crossings with Eritrea pass through Kassala State. He said that the participation in the celebration is a step to renew the resolve that the relationship between the two countries shall be stronger than in the past, and that what has happened [recently between the two countries] was like a summer cloud that has already passed, indicating that the two countries now stand on the threshold of establishing a firm and distinguished relationship. Maj. Gen. Hamad said, “We are looking forward to opening the joint border and doing business as it was [before the closing],” stating that the two countries will work to achieve the common interests of the two peoples and the region. He noted that the revitalization of trade between the two countries reinforces the keenness of citizens in the border areas to maintain security and stability there. For his part, the Chargé d’Affaires of the Eritrean Embassy in Khartoum, Mr. Ibrahim Idris, said, “We celebrate our independence that was achieved through a long struggle and great sacrifices during the past years,” referring to the achievements in infrastructure and in social and economic services accomplished through purely Eritrean effort, despite external pressures and challenges.
He referred to the achievements of Eritreans in the past year in the political and diplomatic field, led by the Ministry of Foreign Affairs, regionally and internationally, the most important of which were the signing of a peace and friendship agreement with Ethiopia and the lifting of economic sanctions by the UN Security Council. Mr. Idris asserted the deeply rooted, historic ties and special relations between the Sudanese and Eritrean peoples in all fields. He affirmed the Eritrean people’s support for the Sudanese people’s choice, praising all the steps taken by the Transitional Military Council and calling on all parties to further strengthen the relations between the two countries and to activate people’s diplomacy so as to promote those relations at both the official and the popular level. * Software translation from Arabic
\begin{document} \title{Is PLA large?} \author{Gady Kozma} \address{GK: Institute for Advanced Study, 1 Einstein drive, Princeton NJ 08540, USA.} \email{gady@ias.edu} \thanks{This material is based upon work supported by the National Science Foundation under agreement DMS-0111298. Any opinions, findings and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation} \author{Alexander Olevski\u\i} \address{AO: Tel Aviv University, Tel Aviv 69978, Israel.} \email{olevskii@post.tau.ac.il} \thanks{Research supported in part by the Israel Science Foundation} \begin{abstract} We examine the class of functions representable by an analytic sum\begin{equation} f(t)=\sum_{n\geq0}c(n)e^{int}\label{eq:fsumcn}\end{equation} converging almost everywhere. We show that it is dense but that it is first category and has zero Wiener measure. \end{abstract} \maketitle \section{Introduction} We say that a function $f$ on the circle $\mathbb{T}$ belongs to $\PLA$ (pointwise limits of analytic sums) if it can be decomposed to a trigonometric series with positive frequencies (\ref{eq:fsumcn}) converging almost everywhere (a.e.). An important observation is that the representation (\ref{eq:fsumcn}) is unique. It follows from the Abel summation and Privalov uniqueness theorems. An analogy with the Riemannian theory suggests that the coefficients could be recovered by Fourier formulas, provided that $f$ is integrable (in other words, $\PLA\cap L^{1}\subset H^{1}$, the Hardy space). This was disproved in our note \cite{KO03}, where we constructed a series (\ref{eq:fsumcn}) converging outside some compact $K$ of zero measure to a bounded function $f$, but which is not $f$'s Fourier series.
Later we proved \cite{KO04,KO} that a function $f$ which admits such a non-classic representation can be smooth, even $C^{\infty}$, and characterized precisely the maximal possible smoothness in terms of the rate of decrease of the Fourier coefficients. The following density theorem is a simple consequence of these results: \begin{thm} \label{thm:dense}$\PLA$ is dense in the space $C(\mathbb{T})$. Moreover, it is dense in the spaces of smooth functions $C^{k}(\mathbb{T})$ for every $k=1,2,\dotsc$ in their respective norms. \end{thm} The approach taken in \cite{KO03,KO04,KO} is complex-analytic, and information is derived by examining the related function $F(z)=\sum c(n)z^{n}$ in the disk $\{|z|<1\}$. In this paper we present a purely real-analytic construction, and use it to prove the following denseness result: any measurable function can be carried into $\PLA$ by a uniformly small perturbation. \begin{thm} \label{thm:PLA+C}Let $f\in L^{0}(\mathbb{T})$, $\epsilon>0$. Then there is a decomposition \begin{equation} f=g+h,\quad g\in\PLA,\,||h||_{C(\mathbb{T})}<\epsilon.\label{eq:PLA+C}\end{equation} \end{thm} For a stronger version, see theorem \refp{thm:PLA+C}{thm:2p} below. This seems like a good place to compare $\PLA$ with its {}``classic'' part, the Hardy spaces. Theorems \ref{thm:dense} and \ref{thm:PLA+C} should be contrasted against the fact that the Hardy space $H^{2}$ is closed in $L^{2}$ and has infinite co-dimension. Another interesting fact is that $\PLA$ functions may exhibit jump discontinuities (again, in $H^{1}$ this is impossible) --- this is a corollary from theorem \ref{thm:PLA+C}. Finally it is worth noting that $\PLA$ contains non-constant real functions. This comes from our approach in \cite{KO03} where a singular inner function $I(z)$ in the unit disc was constructed such that \[ f(t):=\frac{1}{I(e^{it})}\in\PLA.\] Since $|I|=1$ a.e.~on the boundary we have $\overline{f}=I=1/f$ there, so $f+\overline{f}=f+1/f$ gives the required example.
On the other hand we prove that $\PLA$ is rather thin in the sense of Wiener measure and Baire category. \begin{thm} \label{thm:category}$\PLA\cap C(\mathbb{T})$ is a set of first category in $C(\mathbb{T})$. \end{thm} \begin{thm} \label{thm:measure}The Wiener measure of $\PLA\cap C(\mathbb{T})$ is zero. \end{thm} \section{Small perturbations and PLA} \subsection{Density of PLA} First we deduce theorem \ref{thm:dense}. According to \cite{KO} there exists a $C^{\infty}$ function $f\in\PLA$ such that\[ \widehat{f}(l)\neq0\textrm{ for some }l<0.\] In fact the last inequality holds for infinitely many negative $l$-s. Otherwise if $L$ is the smallest such number then \[ \sum_{n=L}^{\infty}c(n-L)e^{int}-\sum_{n=0}^{\infty}\widehat{f}(n-L)e^{int}\] ($c(n)$ from (\ref{eq:fsumcn})) is a non-trivial analytic sum converging to zero a.e., contradicting Privalov's uniqueness theorem. Hence for any $s<0$, by multiplying with an appropriate exponential we can get an $f_{s}\in\PLA\cap C^{\infty}$ with $\widehat{f_{s}}(s)=1$. Next, for an arbitrary $N$ consider the discrete convolution with $e^{ist}$,\[ F_{N,s}(t):=\frac{1}{N}\sum_{j=0}^{N-1}f_{s}\Big(t-2\pi\frac{j}{N}\Big)e^{is(2\pi j/N)}.\] Since $\PLA$ is a translation invariant linear space, $F_{N,s}\in\PLA$ and\[ F_{N,s}\to f_{s}*e^{ist}=e^{ist}\quad\textrm{as }N\to\infty\] in the $C^{k}$ norm for any $k=1,2,\dotsc$ Therefore any exponential $e^{ist}$ admits an approximation by $\PLA$ functions, and theorem \ref{thm:dense} follows. \subsection{Lemmas} We will use the technique from \cite{KO01} (where one may find additional references and historical comments). By $L^{0}(\mathbb{T})$ we denote the space of measurable functions $f:\mathbb{T}\to\mathbb{C}$ endowed with the distance function\[ \rho(f,g):=\inf\{\epsilon:\mathbf{m}\{|f-g|>\epsilon\}<\epsilon\}\] where $\mathbf{m}$ is the normalized Lebesgue measure on $\mathbb{T}$.
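Returning briefly to the density argument above: for a trigonometric polynomial $f_s$ with $\widehat{f_s}(s)=1$, the discrete convolution $F_{N,s}$ can be checked numerically (an editorial sketch, not part of the paper; we use the $1/N$ Riemann-sum normalization, and the particular polynomial $f_s$ below is an arbitrary illustration). Only the frequencies $n\equiv s\pmod N$ survive the averaging, so $F_{N,s}$ coincides with $e^{ist}$ as soon as $N$ exceeds the spread of the spectrum of $f_s$.

```python
import numpy as np

# Sample trigonometric polynomial f_s with hat{f_s}(s) = 1 for s = -2.
s = -2
f_s = lambda t: np.exp(1j * s * t) + 0.5 * np.exp(3j * t)

N = 16                                   # number of nodes in the discrete convolution
t = np.linspace(0.0, 2.0 * np.pi, 101)   # sample points on the circle
j = np.arange(N)

# F_{N,s}(t) = (1/N) * sum_j f_s(t - 2*pi*j/N) * exp(i*s*2*pi*j/N)
F = np.mean(f_s(t[:, None] - 2.0 * np.pi * j / N)
            * np.exp(1j * s * 2.0 * np.pi * j / N), axis=1)

# Only the frequencies n = s (mod N) survive, so F equals e^{ist}
# up to floating-point rounding.
err = np.max(np.abs(F - np.exp(1j * s * t)))
print(err)
```

Here the error is at the level of machine precision because the averaging kills the stray frequency $n=3$ exactly; for a general $C^\infty$ function $f_s$ the convergence is only asymptotic in $N$.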
For a trigonometric polynomial \[ P(t)=\sum c(n)e^{int}\] we use the following notations \begin{itemize} \item $\spec P=\{ n\in\mathbb{Z}:c(n)\neq0\}$ \item $P^{*}(t)=\sup_{l<m}\left|\sum_{n=l}^{m}c(n)e^{int}\right|$. \item $P_{[r]}(t)=P(rt)$, $r\in\mathbb{N}$. \end{itemize} For two trigonometric polynomials $g$ and $h$ consider the following {}``special product'',\[ P=gh_{[r]}\textrm{ where }r>3\deg g.\] Then the following is true (see (10) in \cite{KO01}; compare also with \cite[sec.\ 1.1]{O1} and \cite[lemma 15]{K96}):\begin{equation} P^{*}(t)\leq|g(t)|\cdot||h^{*}||_{\infty}+2g^{*}(t)||\widehat{h}||_{\infty},\label{eq:Pgh}\end{equation} where $\widehat{h}$ is the Fourier transform and $||\cdot||_{p}$ is the $l^{p}$ or $L^{p}$ norm, according to context. \begin{lem} \label{lem:KO41}For any $\epsilon>0$ there exists a polynomial $h$ with $\spec h\subset[1,\infty[$, $\rho(h,\mathbf{1})<\epsilon$ and $||\widehat{h}||_{\infty}<\epsilon$. \end{lem} The proof can be found in \cite[lemma 4.1]{KO01}. \begin{lem} Given a segment $I\subset\mathbb{T}$ and $\delta>0$ there is a trigonometric polynomial $P$ such that \begin{enumerate} \item \label{enu:specPN}$\spec P\subset[0,\infty)$, \item $\rho(P,\mathbf{1}_{I})<\delta$, $\mathbf{1}_{I}$ being the indicator function, and \item \label{enu:P*Ieps}$P^{*}(t)<\delta$ outside of $I_{\delta}$, a $\delta$-neighborhood of $I$. \end{enumerate} \end{lem} \begin{proof} Using lemma \ref{lem:KO41} find a polynomial $h$ such that $\rho(h,\mathbf{1})<\frac{1}{3}\delta$ and $||\widehat{h}||_{\infty}<\frac{1}{24}\delta^{2}$. Next approximate $\mathbf{1}_{I}$ by a trigonometric polynomial $g$ such that\begin{align*} |g(t)| & <\frac{\delta}{2||h^{*}||_{\infty}}\quad t\not\in I_{\delta} & \rho(g,\mathbf{1}_{I}) & <\frac{1}{3}\delta & ||g^{*}||_{\infty} & <\frac{6}{\delta}.\end{align*} Finding $g$ can be done by interpolating $\mathbf{1}_{I}$ by a trapezoid function and then taking a sufficiently large partial sum of the Fourier expansion. 
Estimating $g^{*}$ can be done, for example, by noting that a trapezoid function is a difference of two triangular functions $T$, and for each of them $||T^{*}||_{\infty}\leq||\widehat{T}||_{1}=||T||_{\infty}\leq3/\delta$. Set $P:=gh_{[r]}$. Then (\ref{eq:Pgh}) implies \ref{enu:P*Ieps} and $\spec h\subset[1,\infty[$ implies \ref{enu:specPN}, provided $r$ is large enough. \end{proof} A direct consequence: \begin{lem} \label{lem:Pphi}Given $\delta$ and a step function $\varphi$ which is $0$ on a set $U$, there is a polynomial $P$ with \emph{\ref{enu:specPN}} above such that\begin{enumerate}\setcounter{enumi}{3}\item $\rho(P,\varphi)<\delta$,\label{enu:rhoPphi}\item \label{enu:P*Uc}$\rho(P^{*},0)<\delta$ on $U$ (by which we mean $\rho(P^{*}\cdot\mathbf{1}_{U},0)<\delta$).\end{enumerate} \end{lem} \begin{lem} \label{lem:PQpsi}Given $\delta$, $a$ and a step function $\psi$, $|\psi|<a$ on $U$ there are polynomials $P$ and $Q$ such that \emph{\ref{enu:specPN}}, \emph{\ref{enu:P*Uc}}, \begin{enumerate}\setcounter{enumi}{5}\item \label{enu:PQpsi}$\rho(P+Q,\psi)<\delta$ and \item \label{enu:Qinfa}$||Q||_{\infty}<a$\end{enumerate} are satisfied. \end{lem} \begin{proof} Fix $Q$ with \ref{enu:Qinfa} and $\rho(Q,\psi)<\frac{1}{3}\delta$ on $U$. Define a step function $\varphi$ which is $0$ on $U$ and $\rho(\varphi,\psi-Q)<\frac{2}{3}\delta$. Now apply lemma \ref{lem:Pphi} and get the result. \end{proof} \subsection{Proof of theorem \ref{thm:PLA+C}} Let $f$ and $\epsilon>0$ be given. We may assume $\rho(f,0)<\frac{1}{4}$. Define inductively sequences of trigonometric polynomials $\{ P_{k}\}$, $\{ Q_{k}\}$ satisfying the conditions \begin{enumerate} \item \label{enu:indcta}$\rho\Big(f,\sum_{k\leq n}(P_{k}+Q_{k})\Big)<4^{-n}$ \item $\spec P_{n}\subset[0,\infty[$ \item \label{enu:indctd}$||Q_{n}||_{\infty}<\epsilon2^{-n}$. \end{enumerate} Start with $P_{0}=Q_{0}=0$. Suppose $P_{k}$ and $Q_{k}$ are defined with the requirements above for $k<n$. Set $f_{n}:=f-\sum_{k<n}(P_{k}+Q_{k})$.
Approximate $f_{n}$ by a step-function $S_{n}$ such that \[ \rho(S_{n},f_{n})<4^{-n}.\] We get\begin{equation} \rho(S_{n},0)<4^{-n+2}.\label{eq:Sn0}\end{equation} Denote $U_{n}=\{ t:|S_{n}(t)|<\epsilon2^{-n}\}$. Apply lemma \ref{lem:PQpsi} for $\psi=S_{n}$, $U=U_{n}$, $a=\epsilon2^{-n}$ and $\delta=4^{-n}$. We get polynomials $P_{n}$ and $Q_{n}$ such that \ref{enu:indcta}-\ref{enu:indctd} are fulfilled for $k=n$. Notice that if $\epsilon2^{-n}>4^{-n+2}$ then (\ref{eq:Sn0}) implies that $\mathbf{m}U_{n}^{c}<4^{-n+2}$ and then condition \ref{enu:P*Uc} of lemma \ref{lem:PQpsi} gives $\rho(P_{n}^{*},0)<4^{-n}$ on $U_{n}$. These two together give \[ \rho(P_{n}^{*},0)<2^{-n}\quad\textrm{for }n\textrm{ sufficiently large.}\] This means that the series $\sum P_{n}$ converges a.e.~and it defines a function $g\in\PLA$. Hence denoting $h:=\sum Q_{n}$ we finish the proof.\qed \begin{rem} Since theorem \ref{thm:PLA+C} is stronger than the first part of theorem \ref{thm:dense}, one might wonder whether the second part admits an equivalent improvement. In fact it does not, namely, there exists a function $f\in C(\mathbb{T})$ which does not admit any decomposition $f=g+h$ with $g\in\PLA$ and $h\in C^1$. We plan to exhibit this example in a subsequent paper. \end{rem} \subsection{Representations by {}``almost analytic'' series.} D. E. Menshov proved that any $f\in L^{0}(\mathbb{T})$ can be decomposed to an a.e.~converging trigonometric series \begin{equation} f(t)=\sum_{n\in\mathbb{Z}}c(n)e^{int}\label{eq:Menshov}\end{equation} (see \cite{B64,K96,KO01}). The above technique gives the following, {}``almost-analytic'' version: \theoremstyle{plain} \newtheorem*{thm2p}{Theorem \refpp{thm:PLA+C}} \begin{thm2p}\hypertarget{thm:2p}Any $f\in L^{0}(\mathbb{T})$ can be decomposed in an almost everywhere convergent series (\ref{eq:Menshov}) with the {}``negative'' part $f_{-}$ converging uniformly on $\mathbb{T}$.
Further, the negative part can be taken to have arbitrarily small $U(\mathbb{T})$ norm. \end{thm2p}We recall that the $U$-norm of a function $F$ is defined by \[ ||F||_{U(\mathbb{T})}:=\sup_{N\geq0}\Big\Vert\sum_{n=-N}^{N}\widehat{F}(n)e^{int}\Big\Vert_{\infty}.\] We shall only sketch the proof of theorem \refp{thm:PLA+C}{thm:2p}. It requires the technique of separating the measure error and the uniform error. First we will need \begin{lem} \label{lem:KO21}For any $\gamma>0$ and $\delta>0$ there exists a trigonometric polynomial $h$ satisfying \begin{itemize} \item $\widehat{h}(0)=0$, $||\widehat{h}||_{\infty}<\delta$ \item $\mathbf{m}\{ t:|h(t)-1|>\delta\}<C\gamma$ \item $||h^{*}||_{\infty}\leq1/\gamma$. \end{itemize} \end{lem} This is lemma 2.1 from \cite{KO01}, which is the {}``non-analytic counterpart'' of lemma \ref{lem:KO41}. Next we state a replacement for lemma \ref{lem:PQpsi}:\theoremstyle{plain} \newtheorem*{lem3p}{Lemma \refpp{lem:PQpsi}} \begin{lem3p}\hypertarget{lem:4p} Given $\gamma$, $\delta$, $a$ and a step function $\psi$, $|\psi|<a$ on $U$ there are polynomials $P$ and $Q$ such that \emph{\ref{enu:specPN}}, \emph{\ref{enu:P*Uc},} \begin{enumerate}\item[\refpp{enu:PQpsi}] $\mathbf{m}\{ t:|P+Q-\psi|>\delta\}<C\gamma$ and \item[\refpp{enu:Qinfa}]$||Q||_{U}<a/\gamma$\end{enumerate} \end{lem3p} The construction of $P$ and $Q$ is generally similar, but one has to take $Q_{\textrm{lemma \refp{lem:PQpsi}{lem:4p}}}= Q_{\textrm{lemma \ref{lem:PQpsi}}}h_{[r]}$ where $h$ comes from lemma \ref{lem:KO21} with the same $\gamma$ and a sufficiently small $\delta$. Notice that there is no price to pay in lemma \ref{lem:KO21} for decreasing $\delta$. The proof of theorem \ref{thm:PLA+C} then applies mutatis mutandis. \begin{rem} Notice that if one replaces convergence almost everywhere by convergence in measure then any function $f\in L^{0}$ admits an analytic representation (\ref{eq:fsumcn}). This was proved in \cite{KO01}.
However, here the representation is not unique. \end{rem} \section{Category and measure} Here we prove theorems \ref{thm:category} and \ref{thm:measure}. \subsection{Relatives.} We will use \begin{defn*} (see \cite{O2}) For two functions $f$ and $g$ we say that $g$ is a relative of $f$ if there is a compact $K$ of positive measure on the circle, and an absolutely continuous homeomorphism $h:\mathbb{T}\to\mathbb{T}$ such that\[ g(t)=f(h(t))\quad\forall t\in K.\] \end{defn*} In this paper the notion of relatives will be used through the following lemma. Denote $C_{A}:=H^{\infty}\cap C$ i.e.~the space of continuous boundary values of analytic functions on the disk. \begin{lem} If $f\in\PLA$ then it has a relative $g\in C_{A}$. \end{lem} Indeed, let $f=\sum_{n\geq0}c(n)e^{int}$. Consider the analytic extension\[ F(z)=\sum_{n\geq0}c(n)z^{n}\quad z\in\mathbb{D}.\] According to Abel's summation theorem $F$ has non-tangential boundary values equal to $f(t)$ at the point $z=e^{it}$ for almost every $t$. Fix a compact $K$, $\mathbf{m}K>0$, where this limit is uniform. Consider the so-called Privalov domain $P=P_{K}$ i.e.~the subset of $\mathbb{D}$ created by removing, for every arc $I$ from the complement of $K$, a disk $D_{I}$ orthogonal to $\partial\mathbb{D}$ at the end points of $I$. If $I$ is larger than a half circle, remove $\mathbb{D}\setminus D_{I}$ instead of $D_{I}$ so in both cases we remove the component containing $I$. Let $H$ be the Riemann mapping of the closed unit disc onto $P$. It is well known that $H$ belongs to $C_{A}$ and its boundary values define an absolutely continuous homeomorphism of $\mathbb{T}$ onto $\partial P$. It is easy to see that $g(t):=F(H(e^{it}))$ is a relative of $f$.\qed So, theorem \ref{thm:category} will follow from \begin{lem} \label{lem:relatives}The set of functions with relatives in $C_{A}$ is first category in $C(\mathbb{T})$.
\end{lem} For the proof of this lemma we need \begin{lem} \label{lem:epsdelM}Given numbers $\delta$, $M>0$ one can define a positive $\epsilon(\delta,M)$ such that for any function $g$ in $L^{2}(\mathbb{T})$, satisfying \begin{align*} ||g||_{2} & <M & |g(t)-c_{1}| & <\epsilon\textrm{ on }E_{1}\\ |c_{1}-c_{2}| & >\delta & |g(t)-c_{2}| & <\epsilon\textrm{ on }E_{2}\end{align*} where $E_{j}$ are two disjoint sets of measure $>\delta$ and $c_{j}$ are two constants, one gets that $g$ is not in $H^{2}$. \end{lem} \begin{proof} This is basically a consequence of Jensen's inequality. Assume $g$ admits an extension inside the disc as an $H^{2}$ function. Consider the subharmonic function $G(z):=\log|g(z)-c_{1}|$. Then\begin{align*} G(0) & \leq\int_{\partial\mathbb{D}}G=\int_{E_{1}}+\int_{\partial\mathbb{D}\setminus E_{1}}\leq\delta\log\epsilon+\int_{\partial\mathbb{D}}|g(z)-c_{1}|\leq\\ & \leq\delta\log\epsilon+M+\frac{M}{\sqrt{\delta}}+\epsilon,\end{align*} so taking $\epsilon$ sufficiently small we would get $|g(0)-c_{1}|<\frac{1}{2}\delta$. Repeating the same argument for $|g(0)-c_{2}|$ leads to a contradiction. \end{proof} \begin{proof} [Proof of lemma \ref{lem:relatives}]Denote (for small positive $s$ and $r$, and a large $M$) by $\mathcal{F}(s,r,M)$ the class of all functions $f\in C(\mathbb{T})$ for which there exist \begin{itemize} \item a homeomorphism $h\in\hom(\mathbb{T})$ such that for any measurable $A\subset\mathbb{T}$, $\mathbf{m}A<s$ implies $\mathbf{m}h(A)<r/2$, \item a compact $K\subset\mathbb{T}$, $\mathbf{m}K>r$, \item a function $g\in H^{\infty}$, $||g||_{2}<M$ \end{itemize} such that $f\circ h^{-1}=g$ on $K$. \begin{claim*} $\mathcal{F}=\mathcal{F}(s,r,M)$ is nowhere dense in $C(\mathbb{T})$. \end{claim*} Take $f\in C(\mathbb{T})$ and a number $a>0$. Approximate $f$ uniformly (with error $<a$) by a step-function $q$. Denote by $n$ the number of intervals $I$ of constancy of $q$. We may assume that $1/n<s$ and that all the numbers $\{ q(I)\}$ are different.
Denote\[ \delta:=\min\Big\{\frac{r}{2n},\min_{I\neq I'}|q(I)-q(I')|\Big\}\] Find $\epsilon(\delta,M)$ from lemma \ref{lem:epsdelM}. The claim will be proved once we show that any $u$ with $||u-q||_{2}<\epsilon$ is not in $\mathcal{F}$. Indeed, suppose there are $h$, $K$ and $g$ as in the definition of $\mathcal{F}$. One can easily see that for at least two intervals $I=I_{1},I_{2}$\[ \mathbf{m}(h(I)\cap K)>\delta.\] So, the function $g=u\circ h^{-1}$ satisfies all conditions of lemma \ref{lem:epsdelM}, therefore it cannot belong to $H^{2}$, and in particular not to $H^{\infty}$. This contradiction proves the claim; lemma \ref{lem:relatives} and theorem \ref{thm:category} follow. \end{proof} \begin{rem} The following conjecture (which would emphasize the sharpness of theorems \ref{thm:PLA+C} and \refp{thm:PLA+C}{thm:2p}) looks likely: one can construct a function $f\in C(\mathbb{T})$ such that all its relatives $g$ satisfy the condition that $\{\widehat{g}(n)\}\not\in l^{1}(\mathbb{Z}_{-})$ or, even stronger, $\not\in l^{p}(\mathbb{Z}_{-})$, $p<2$. Notice that for $l^{p}(\mathbb{Z})$ this is true, see \cite{O2}. \end{rem} \subsection{Proof of theorem \ref{thm:measure}} This follows quite easily from the Fourier representation of Brownian motion (see e.g.~\cite[\S 16.3]{K85}). For simplicity, we will prove it for the complex-valued Brownian bridge, i.e.~complex Brownian motion $W$ on $[0,2\pi]$ conditioned to satisfy $W(0)=W(2\pi)$. As is well known, the $n\widehat{W}(n)$ are independent standard Gaussians (one may take this as the definition of the Brownian bridge). We will only use the fact that $\widehat{W}(-1)$ is independent from the other variables. In other words we can write (as measure spaces) $C(\mathbb{T})=\Omega\times\mathbb{C}$ where $\Omega$ is the space of functions satisfying $\widehat{f}(-1)=0$, equipped with the measure of the Brownian bridge conditioned to satisfy this, and $\mathbb{C}$ is equipped with the Gaussian measure.
Since $e^{-it}$ is not in $\PLA$, for any $f\in\Omega$ there can be no more than one value $x(f)\in\mathbb{C}$ for which $f+xe^{-it}\in\PLA$. Since the measure of a single point is always zero, we get by Fubini's theorem that the measure of $\PLA$ is zero. Fubini's theorem requires that $\PLA\cap C(\mathbb{T})$ be measurable. In fact it is Borel. To show this, endow $\PLA$ with the distance function $d(f,g)=\rho((f-g)^{*},0)$ where $f^{*}$ is defined as for polynomials, i.e.\[ f^{*}(t)=\sup_{l<m}\left|\sum_{n=l}^{m}c(n)e^{int}\right|\quad f(t)=\sum_{n=0}^{\infty}c(n)e^{int}.\] It is easy to see that $d$ makes $\PLA$ into a separable complete metric space. Souslin's theorem \cite[\S 39, IV]{K66} states that the image of a (Borel subset of a) Polish space under a one-to-one continuous map is Borel. Using this for the identity map from $\PLA$ to $L^{0}(\mathbb{T})$ shows that $\PLA$ is a Borel subset of $L^{0}(\mathbb{T})$. The restriction of a Borel set in $L^{0}$ to $C$ is Borel, so we get that $\PLA\cap C$ is Borel in $C$, and theorem \ref{thm:measure} is proved. We remark that the same argument can be used to prove theorem \ref{thm:category}; one only needs to use a version of Fubini's theorem for categories, see \cite[theorem 15.4]{O80}.
Praise God, the Lord is making progress in the sanctification of even my sinful unbelieving heart. Last week I was accused of having out-of-control optimism. This is probably a first for me. In general I tend to be on the side of melancholy and pessimism. After growing up in a family strained by financial hardship, bankruptcy, the death of my brother in an automobile accident, and the death of my mother from cancer, it has been too easy to get out of bed each day wondering what tragedy was on the menu next. But praise God, at least once in my life I was accused of being too optimistic! Forgive me while I savor the moment, because I am sure that I will battle melancholy again. And, Christian friends, we should be optimistic, not because we have any righteousness in ourselves, but because mankind has been freely granted the complete righteousness of Christ. The grace of Christ has been and will be utterly super-abundantly victorious in heaven and on earth! While searching for a verse to provide further reason for out-of-control optimism I found Matthew 16:15-21 (WEB). When reading this passage in the past I have been challenged by Peter's courage, Peter's confession of Christ, the church's authority to bind and release, and the sufferings of Christ. Recently, however, God has called to mind another forgotten but important teaching in this passage: "the gates of Hades will not prevail against" the church. Now here is something optimistic. The church wins through Christ. But here is something also curious. Jesus says, "the gates of Hades will not prevail." Typically we imagine that Christians wage war against the spiritual forces of evil only on the plains of this world. We think the battle ends with a line drawn in the sand in future history, with Christ purchasing some for Heaven, Satan dragging the rest to Hell, and each enjoying their spoils. The curious thing, however, is that the gates of Hades are not on the plains of this world. Nor do gates fight.
Gates are used for defense. The verses above explain that the grace of Christ through the church will press the enemy off the plains of this earth to the very gates of Hades. Wow! The gates of Hades themselves will be smashed by the church, completely robbing Satan of all his spoils. Whence comes the church's power to shatter the very gates of Hell? The army of souls receiving a salvation by grace alone must not, will not, and cannot rest until everyone for whom Christ died is found safely in the arms of our Savior. Folks, here is some news that could send your optimism out of control as well! The teaching of nearly every Christian sect rejoices that the grace of Christ alone has redeemed a people for himself. But what does the Catholic, Baptist, Presbyterian, Anabaptist, Methodist, Anglican, Pentecostal, or any other denomination teach about grace that will smash down the very gates of Hades looking for more sinners to save? Given time I may study their answer, but I will not let it spoil my optimism over the teaching of Christ. Finally, Jesus speaks a solemn word that he himself was to be hated and crucified for loving all mankind, both Pharisee and pagan. Repentance for each of us, whether Pharisee or pagan, is the only way to join the redeemed on the level ground at the foot of his cross. Rock on, Jesus our rock.
By Ryan Homler The Maryland Terrapins Women’s Basketball team defeated the Purdue Boilermakers by a score of 74-64 on Sunday night in Indianapolis to clinch their third consecutive Big Ten Tournament title. Maryland was led by senior Brionna Jones, who dropped 27 points and 12 rebounds, and freshman guard Destiny Slocum, who finished with 14 points and seven assists. Jones, who recorded a double-double in two of the three games in the tournament and scored 32 in the other, earned the Most Outstanding Player honor for her play during the tournament. “She missed like eight shots in three games,” Walker-Kimbrough joked when asked about how Jones played in the tournament. “It’s unheard of, she was unbelievable. She is the best post in the country.” Shatori Walker-Kimbrough, who was last year’s Most Outstanding Player, had a relatively quiet night as she scored only seven points. However, Walker-Kimbrough was able to contribute on the defensive end with two steals, and was aided by the play of Slocum and junior Kristen Confroy, who added nine points. Freshman Blair Watson also played some valuable minutes for Maryland on defense. “You saw a lot of contributions we were able to get tonight,” coach Brenda Frese said. “You see our bench depth, the spark that they gave us tonight. Just a lot of great contributions tonight.” Purdue came in as the top defensive team in the Big Ten, but had trouble stopping the Big Ten’s best offensive team, and specifically Brionna Jones, even when implementing a press. “She makes every shot,” freshman guard Destiny Slocum said when asked about what it’s like to share the court with a player like Jones. For Maryland, Head Coach Brenda Frese and the two seniors, who earned a share of the regular season title with Ohio State, this is the third year in a row they left the tournament as champions. The Terps are the first team to accomplish this feat since the Buckeyes did it from 2009-2011.
“To share this moment with my teammates is really special,” Walker-Kimbrough said. “And I say this confidently, I don’t think Maryland is done now.” “They’re all indescribable,” Coach Frese said when asked about how this championship compared to others. “I credit the players, they make it look easy.” Purdue was led by senior guard Ashley Morrissette, who scored 18 points and added six assists, all with a banged-up left ankle. Maryland will now wait to hear from the selection committee on what seed they will receive and where they will play the first rounds of the NCAA Tournament. The selection show will take place on Monday, March 13 at 7 p.m. on ESPN.
KIRKUS REVIEW A Milanese pimp with a heart of gold is drawn ever-deeper into the city’s seedy underworld. The third bluntly titled thriller by Faletti (I Kill, 2002; I Am God, 2009) is narrated by Bravo, who lets the reader know a few important things up front. First, his penis was cut off years ago after he fell afoul of the wrong people; second, he’s a tough but compassionate boss to the women who serve his high-priced clients; and third, he works hard to keep the more sordid aspects of Milan’s druggy, violent underbelly at arm’s length. The novel's plot turns on him bungling that last part badly: not long after taking a new prostitute under his wing, he discovers that he’s been framed as part of a complicated scheme that’s left some of Italy’s prominent movers and shakers dead. Though the novel is set in 1978—the kidnapping of politician Aldo Moro plays a small role in the plot—its spirit and tone are closer to that of the ’30s and ’40s noirs of Cain, Hammett, Chandler and Goodis. Bravo is a black-humored, streetwise narrator with an appealingly flinty demeanor even when he’s in over his head, and he has an excellent femme fatale in Carla, an initially pliable woman who turns out to be much more manipulative than he expected. Faletti is particularly adept at showing how the scales slowly fall from Bravo’s eyes: first his moral certainty about his profession erodes, then his sense of personal security, then his faith in his country’s social structure. The nobody-can-be-trusted plot is familiar, and some closing revelations about Bravo’s past feel shoehorned in, but the book thrives on its fast pace—translator Antony Shugaar has taken care to keep the style pulpy yet elevated, in keeping with a hero who’s seen society at its worst but somehow finds time to enjoy the occasional word puzzle. A savvy lowbrow-highbrow thriller.
\begin{document} \maketitle \begin{abstract} In this paper, we consider a deformation of Plancherel measure linked to Jack polynomials. Our main result is the description of the first and second-order asymptotics of the bulk of a random Young diagram under this distribution, which extends celebrated results of Vershik-Kerov and Logan-Shepp (for the first order asymptotics) and Kerov (for the second order asymptotics). This gives more evidence of the connection with the Gaussian $\beta$-ensemble, already suggested by some work of Matsumoto. Our main tool is a polynomiality result for the structure constants of some quantities that we call {\em Jack characters}, recently introduced by Lassalle. We believe that this result is also interesting in itself and we give several other applications of it. \end{abstract} \section{Introduction} \subsection{Jack deformation of Plancherel measure and random matrices} \label{subsect:JackPlancherel} Random partitions occur in mathematics and physics in a wide variety of contexts, in particular in the Gromov-Witten and Seiberg-Witten theories; see \cite{OkounkovRandomPartitions} for an introduction to the subject. Another aspect which makes them attractive is the link with random matrices. Namely, some classical models of random matrices have random partition counterparts, which display the same kind of asymptotic behaviour.
In this paper, we consider the following probability measure on the set of partitions (or equivalently, Young diagrams) of size $n$: \begin{equation} \label{eq:JackMeasure} \PP_n^{(\alpha)} ( \lambda ) = \frac{\alpha^n n!}{j_\lambda^{(\alpha)}}, \end{equation} where $j_\lambda^{(\alpha)}$ is a deformation of the squared hook product: \[j_\lambda^{(\alpha)} = \prod_{\Box \in \lambda} \bigg(\alpha a(\Box) + \ell(\Box) +1\bigg) \bigg(\alpha a(\Box) + \ell(\Box)+\alpha\bigg).\] Here, $a(\Box) := \lambda_i - j$ and $\ell(\Box) := \lambda_j' - i$ are respectively the arm and leg length of the box $\Box = (i,j)$ (the same definitions as in \cite[Chapter I]{Macdonald1995}). When $\alpha=1$, the measure $\PP_n^{(\alpha)}$ specializes to the well-known \emph{Plancherel measure} for the symmetric groups. In general, it is called the {\em Jack deformation of Plancherel measure} (or Jack measure for short), because of its connection with the celebrated Jack polynomials that we shall explain later. It has appeared recently in several research papers \cite{BorodinOlshanskiJackZMeasure,FulmanFluctuationChA2, MatsumotoJackPlancherelFewRows,OlshanskiJackPlancherel,MatsumotoOddJM} and it is presented as an important area of research in Okounkov's survey on random partitions \cite[Section 3.3]{OkounkovRandomPartitions}. Recall that Plancherel measure has a combinatorial interpretation: it is the push-forward of the uniform measure on permutations by the Robinson-Schensted algorithm (we keep only the common shape of the tableaux in the output of the RS algorithm). A similar description holds for Jack measure for $\alpha=2$ (and $\alpha=1/2$ by symmetry): it is the push-forward of the uniform measure on fixed-point-free involutions by the RS algorithm (in this case, the resulting diagram always has even parts and we divide each part by 2) -- see \cite[Section 3]{MatsumotoJackPlancherelFewRows}.
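Because \eqref{eq:JackMeasure} involves only arm and leg lengths, its normalization can be checked numerically for small $n$. The sketch below (Python with exact rational arithmetic; the helper names are ours, not from any established library) computes $j_\lambda^{(\alpha)}$ and verifies both that $\PP_n^{(\alpha)}$ sums to $1$ and that $\alpha=1$ recovers the Plancherel weights $\dim(\lambda)^2/n!$:

```python
from fractions import Fraction
from math import factorial

def partitions(n, max_part=None):
    """Yield all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def j_alpha(lam, alpha):
    """Deformed squared hook product j_lambda^(alpha)."""
    prod = Fraction(1)
    for i, row in enumerate(lam):                         # 0-based row index i
        for j in range(row):                              # 0-based column index j
            arm = row - (j + 1)                           # lambda_i - j (1-based)
            leg = sum(1 for r in lam if r > j) - (i + 1)  # lambda'_j - i (1-based)
            prod *= (alpha * arm + leg + 1) * (alpha * arm + leg + alpha)
    return prod

def jack_measure(n, alpha):
    """Jack deformation of Plancherel measure on partitions of n."""
    return {lam: (alpha**n * factorial(n)) / j_alpha(lam, alpha)
            for lam in partitions(n)}
```

For instance, `jack_measure(4, 1)[(3, 1)]` equals $9/24 = \dim(3,1)^2/4!$, and for every $\alpha$ we tried the weights sum to $1$.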
Thus Jack measure can be considered as an interpolation between these two combinatorially relevant models of random partitions. \subsubsection{$\alpha = 1$ case: Plancherel measure and GUE ensemble} There is a strong connection between Plancherel measure and the \emph{Gaussian unitary ensemble} (called \emph{GUE}) in random matrix theory. The Gaussian unitary ensemble is a random Hermitian matrix with independent normal entries. The probability density function for the eigenvalues of that matrix (of size $d \times d$) is proportional to the weight \begin{equation} \label{eq:BetaDensity} e^{-\beta/2 \sum x_i^2} \prod_{i < j \leq d}(x_i-x_j)^\beta \end{equation} with $\beta = 2$ (see \cite{AndersonGuionnetZeitouni2010}). Consider the scaled spectral measure of the GUE ensemble \[ \mu^{(2)}_d := \frac{1}{d}\left(\delta_{x_1} + \cdots + \delta_{x_d}\right),\] where $x_1 \geq \dots \geq x_d$ are the eigenvalues of our random matrix and $\delta_x$ denotes the Dirac measure at $x$. Then the famous \emph{Wigner law} states that, as $d \to \infty$, the spectral measure tends almost surely to the \emph{semicircular law}, i.e. to the probability measure $\mu_{S-C} := \frac{\sqrt{4-x^2}}{2\pi}1_{[-2,2]}(x)dx$ supported on the interval $[-2,2]$ (see \cite{AndersonGuionnetZeitouni2010}). The second order asymptotics is also known and one can observe Gaussian fluctuations around the limiting distribution (see \cite{Johansson1998}). Informally speaking, looking at the scaled spectral measure of the GUE as a generalized function \[ \mu^{(2)}_d(x) = \frac{1}{d}\left(\delta_{x-x_1} + \cdots + \delta_{x-x_d}\right),\] we have that \[ \mu^{(2)}_d(x) \sim \mu_{S-C}(x) + \frac{1}{d}\widetilde{\Delta}^{(2)}(x),\] as $d \to \infty$, where $\widetilde{\Delta}^{(2)}(x)$ is an explicit Gaussian process on the interval $[-2,2]$ with values in the space of generalized functions $(\C^\infty(\RR))'$.
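The Wigner law just quoted is easy to illustrate numerically. The following sketch (an illustration only, not part of the argument; the normalization is one standard choice and the variable names are ours) samples a GUE matrix and compares the even moments of its scaled spectral measure with those of the semicircular law, which are the Catalan numbers $1, 2, 5, \dots$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 600

# GUE sample: A has iid complex Gaussian entries; H = (A + A*)/2 is Hermitian
# with E|H_ij|^2 = 1 off the diagonal, so the spectrum of H / sqrt(d)
# concentrates on [-2, 2] as d grows.
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = (A + A.conj().T) / 2
eigs = np.linalg.eigvalsh(H) / np.sqrt(d)

# Even moments of the empirical spectral measure vs. semicircle moments 1 and 2.
m2 = float(np.mean(eigs**2))
m4 = float(np.mean(eigs**4))
```

Already at $d = 600$ the empirical moments `m2` and `m4` are within a few percent of the semicircle values $1$ and $2$.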
\begin{figure}[tb] \begin{tikzpicture} \begin{scope}[scale=0.5/sqrt(2),rotate=-45,draw=gray] \begin{scope} \clip[rotate=45] (-2,-2) rectangle (7.5,6.5); \draw[thin, dotted, draw=gray] (-10,0) grid (10,10); \begin{scope}[rotate=45,draw=black,scale=sqrt(2)] \draw[thin, dotted] (0,0) grid (15,15); \end{scope} \end{scope} \draw[->,thick] (-4.5,0) -- (4.5,0) node[anchor=west,rotate=-45]{\textcolor{gray}{$z$}}; \foreach \z in { -3, -2, -1, 1, 2, 3} { \draw (\z, -2pt) node[anchor=north,rotate=-45] {\textcolor{gray}{\tiny{$\z$}}} -- (\z, 2pt); } \draw[->,thick] (0,-0.4) -- (0,9.5) node[anchor=south,rotate=-45]{\textcolor{gray}{$t$}}; \foreach \t in {1, 2, 3, 4, 5, 6, 7, 8, 9} { \draw (-2pt,\t) node[anchor=east,rotate=-45] {\textcolor{gray}{\tiny{$\t$}}} -- (2pt,\t); } \begin{scope}[draw=black,rotate=45,scale=sqrt(2)] \draw[->,thick] (0,0) -- (6,0) node[anchor=west]{{{$x$}}}; \foreach \x in {1, 2, 3, 4, 5} { \draw (\x, -2pt) node[anchor=north] {{\tiny{$\x$}}} -- (\x, 2pt); } \draw[->,thick] (0,0) -- (0,5) node[anchor=south] {{{$y$}}}; \foreach \y in {1, 2, 3, 4} { \draw (-2pt,\y) node[anchor=east] {{\tiny{$\y$}}} -- (2pt,\y); } \draw[ultra thick,draw=black] (5.5,0) -- (4,0) -- (4,1) -- (3,1) -- (3,2) -- (1,2) -- (1,3) -- (0,3) -- (0,4.5) ; \fill[fill=gray,opacity=0.1] (4,0) -- (4,1) -- (3,1) -- (3,2) -- (1,2) -- (1,3) -- (0,3) -- (0,0) -- cycle ; \end{scope} \end{scope} \begin{scope}[xshift=7cm, yshift=-0.5cm, scale=0.5] \begin{scope} \clip (-4.5,0) rectangle (5.5,5.5); \draw[thin, dotted] (-6,0) grid (6,6); \begin{scope}[rotate=45,draw=gray,scale=sqrt(2)] \clip (0,0) rectangle (4.5,5.5); \draw[thin, dotted] (0,0) grid (6,6); \end{scope} \end{scope} \draw[->,thick] (-6,0) -- (6,0) node[anchor=west]{$z$}; \foreach \z in {-5, -4, -3, -2, -1, 1, 2, 3, 4, 5} { \draw (\z, -2pt) node[anchor=north] {\tiny{$\z$}} -- (\z, 2pt); } \draw[->,thick] (0,-0.4) -- (0,6) node[anchor=south]{$t$}; \foreach \t in {1, 2, 3, 4, 5} { \draw (-2pt,\t) node[anchor=east] {\tiny{$\t$}} -- (2pt,\t); } 
\begin{scope}[draw=gray,rotate=45,scale=sqrt(2)] \draw[->,thick] (0,0) -- (6,0) node[anchor=west,rotate=45] {\textcolor{gray}{{$x$}}}; \foreach \x in {1, 2, 3, 4, 5} { \draw (\x, -2pt) node[anchor=north,rotate=45] {\textcolor{gray}{\tiny{$\x$}}} -- (\x, 2pt); } \draw[->,thick] (0,0) -- (0,5) node[anchor=south,rotate=45] {\textcolor{gray}{{$y$}}}; \foreach \y in {1, 2, 3, 4} { \draw (-2pt,\y) node[anchor=east,rotate=45] {\textcolor{gray}{\tiny{$\y$}}} -- (2pt,\y); } \draw[ultra thick,draw=black] (5.5,0) -- (4,0) -- (4,1) -- (3,1) -- (3,2) -- (1,2) -- (1,3) -- (0,3) -- (0,4.5) ; \fill[fill=gray,opacity=0.1] (4,0) -- (4,1) -- (3,1) -- (3,2) -- (1,2) -- (1,3) -- (0,3) -- (0,0) -- cycle ; \end{scope} \end{scope} \end{tikzpicture} \caption{Young diagram $\lambda=(4,3,1)$ and the graph of the associated function $\omega(\lambda)$.} \label{FigOmega} \end{figure} It was also discovered that a similar phenomenon holds for the Plancherel measure. The first order asymptotics for the Plancherel measure was found by Vershik and Kerov \cite{KerovVershikLimitYD} and, independently, by Logan and Shepp \cite{LoganShepp1977}. They noticed that a Young diagram $\lambda$ can be encoded by a continuous piecewise-affine function $\omega(\lambda)$ from $\RR$ to $\RR$: this encoding is represented in Figure \ref{FigOmega} and formally defined in Section \ref{SubsectContinuousYD}. 
Then they proved that for appropriately scaled Young diagrams $\overline{\lambda_{(n)}}$ of size $n$ distributed according to the Plancherel measure (the bar encodes the scaling), one has the convergence in probability \[ \sup\limits_{x \in \RR} \left|\omega({\overline{\lambda_{(n)}}})(x) - \Omega(x) \right| \stackrel{(P)}{\longrightarrow} 0\] where the limit shape $\Omega$ is given by \begin{equation}\label{eq:DefOmega} \Omega(x) = \begin{cases} |x| & \text{if }|x| \geq 2;\\ \frac{2}{\pi} \left( x \cdot \arcsin \frac{x}{2} + \sqrt{4-x^2} \right) & \text{otherwise.} \end{cases} \end{equation} There is a strong connection between this limit shape and the semicircular law $\mu_{S-C}$, namely the so-called {\em transition measure} of $\Omega$, seen as a {\em continuous Young diagram}, is the semicircular law $\mu_{S-C}$ -- see \cite[Section 1.2]{Biane1998}. The problem of the second order asymptotics was stated as an open question in the late seventies and was solved by Kerov \cite{Kerov1993}, who proved that, exactly as in the GUE case, the fluctuations around the limit shape are Gaussian. Informally, Kerov's result can be presented as follows: \[\omega\big({\overline{\lambda_{(n)}}}\big)(x) \sim \Omega(x) + \frac{2}{\sqrt{n}} \Delta^{(1)}_\infty(x)\] where $\Delta^{(1)}_\infty$ is the Gaussian process on $[-2,2]$ with values in $(\mathcal{C}^\infty(\RR))'$ defined by: \[\Delta^{(1)}_\infty(2\cos(\theta)) =\frac{1}{\pi} \sum_{k=2}^\infty \frac{\xi_k}{\sqrt{k}} \sin(k \theta).\] The detailed proof of this remarkable theorem can be found in \cite{IvanovOlshanski2002}. Although they are not equal, the Gaussian processes $\Delta^{(1)}_\infty$ (which describes bulk fluctuations of Young diagrams under Plancherel measure) and $\widetilde{\Delta}^{(2)}$ (which describes bulk fluctuations of the GUE) have quite similar definitions.
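Both the encoding $\omega(\lambda)$ and the limit shape \eqref{eq:DefOmega} are concrete enough to simulate. The sketch below (all helper functions are ours) uses the combinatorial description of Plancherel measure recalled earlier: it samples a uniform permutation, takes the shape of its RS insertion tableau, and measures the sup-distance between the rescaled profile and $\Omega$:

```python
import bisect
import numpy as np

def rsk_shape(perm):
    """Row lengths of the RSK insertion tableau of a permutation."""
    rows = []
    for x in perm:
        for row in rows:
            i = bisect.bisect_left(row, x)
            if i == len(row):          # x exceeds the whole row: append it
                row.append(x)
                x = None
                break
            row[i], x = x, row[i]      # bump the displaced entry to the next row
        if x is not None:
            rows.append([x])
    return [len(r) for r in rows]

def scaled_profile(lam, xs, n):
    """omega of the diagram lam, rescaled by 1/sqrt(n), at the points xs."""
    rad = int(np.ceil(np.max(np.abs(xs)) * np.sqrt(n))) + 2
    lo, hi = -max(len(lam) + 2, rad), max((lam[0] if lam else 0) + 2, rad)
    descents = {lam[i] - (i + 1) for i in range(len(lam))}  # slope -1 intervals
    u = np.arange(lo, hi + 1)
    om = np.empty(len(u))
    om[0] = -lo                        # far to the left, omega(u) = |u|
    for k, m in enumerate(range(lo, hi)):
        om[k + 1] = om[k] + (-1 if (m in descents or m < -len(lam)) else 1)
    return np.interp(xs * np.sqrt(n), u, om) / np.sqrt(n)

def Omega(xs):
    """Vershik-Kerov / Logan-Shepp limit shape."""
    inner = (2 / np.pi) * (xs * np.arcsin(np.clip(xs / 2, -1, 1))
                           + np.sqrt(np.maximum(4 - xs**2, 0)))
    return np.where(np.abs(xs) >= 2, np.abs(xs), inner)

n = 2000
lam = rsk_shape(list(np.random.default_rng(1).permutation(n)))
xs = np.linspace(-2.5, 2.5, 1001)
dist = float(np.max(np.abs(scaled_profile(lam, xs, n) - Omega(xs))))
```

On this sample the sup-distance `dist` is already well below $0.3$; the convergence in probability stated above says it tends to $0$ as $n$ grows.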
Remarkably, the similarity between the Plancherel measure and the GUE ensemble takes place not only at the level of bulk fluctuations but also ``at the edge''. To be more precise, it was proved that the distribution of finitely many (properly scaled) first rows of random Young diagrams (with respect to the Plancherel measure) is the same, as $n \to \infty$, as the distribution of the same number of (properly scaled) largest eigenvalues of the GUE ensemble (see \cite{BaikDeiftJohansson1999, Okounkov2000, BorodinOkounkovOlshanski2000, Johansson2001a, Johansson2001, BaikDeiftRains2001} for details). \subsubsection{General $\alpha$-case and Gaussian $\beta$-ensembles} There are two famous analogues of the GUE ensembles in random matrix theory: \emph{Gaussian orthogonal ensembles (GOE)} and \emph{Gaussian symplectic ensembles (GSE)} (see \cite{Mehta2004}). The GOE ensemble (GSE ensemble, respectively) is a random real symmetric matrix (complex self-adjoint quaternion matrix, respectively) with independent normal entries (with mean $0$ and well-chosen variance). The density function for the distribution of eigenvalues of the GOE (GSE, respectively) is, up to normalization, the function given by equation~\eqref{eq:BetaDensity} with parameter $\beta = 1$ ($\beta = 4$, respectively). Therefore, it is tempting to introduce the \emph{Gaussian $\beta$-ensemble} (G$\beta$E, also called $\beta$-Hermite ensemble), which has a density function proportional to \eqref{eq:BetaDensity} for any positive real value of $\beta$. The G$\beta$E ensembles are well-studied objects.
For the first-order asymptotic behaviour of the spectral measure \[ \mu^{(\beta)}_d := \frac{1}{d}\left(\delta_{\frac{\beta}{2}x_1} + \cdots + \delta_{\frac{\beta}{2}x_d}\right),\] where $x_1 \geq \cdots \geq x_d$ are the eigenvalues of the G$\beta$E of size $d\times d$, Johansson \cite{Johansson1998} showed that the Wigner law holds, i.e., as $d\to \infty$, one has the almost-sure convergence \begin{equation}\label{eq:first-order-GBE} \mu^{(\beta)}_d \to \mu_{S-C}. \end{equation} A central limit theorem for the G$\beta$E was proved by Dumitriu and Edelman \cite{DumitriuEdelman2006}. Here, we can observe Gaussian fluctuations around the limit shape, similarly to the GUE case. Additionally, a surprising phenomenon takes place: the Gaussian process that describes the second order asymptotics is translated by a deterministic shift, which disappears for $\beta = 2$ (see \cite{DumitriuEdelman2006} for details). A natural question is to find a discrete counterpart for the G$\beta$E. Some results of Matsumoto \cite{MatsumotoJackPlancherelFewRows} suggest that a good candidate for such a probability measure on the set of Young diagrams is the \emph{Jack measure} given by \eqref{eq:JackMeasure}, where the relation between the parameters $\alpha$ and $\beta$ is $\beta = \frac{2}{\alpha}$. Matsumoto studied the restriction of Jack measure to the set of Young diagrams of size $n$ with at most $d$ rows. His main result states that the joint distribution of those $d$ rows, suitably normalized, is the same as the joint distribution of the eigenvalues of the $d$-dimensional traceless G$\beta$E with $\beta = \frac{2}{\alpha}$, for fixed $d$, as $n \to \infty$. \subsection{Main result} The main result of this paper is the description of the first and second order asymptotics of the bulk of Young diagrams under Jack measure. First, we prove a law of large numbers.
If $\lambda$ is a Young diagram of size $n$, we denote by $\overline{A_\alpha(\lambda)}$ the (generalized) Young diagram obtained from $\lambda$ by a horizontal stretching of ratio $\sqrt{\alpha/n}$ and a vertical stretching of ratio $1/\sqrt{n\alpha}$ (a formal definition of generalized Young diagrams is given in Section \ref{SubsectGeneralizedYD}). We will prove in Section \ref{sect:FirstOrder} the following result. \begin{theorem} \label{theo:RYD-limit} For each $n$, let $\lambda_{(n)}$ be a random Young diagram of size $n$ distributed according to Jack measure. Then, in probability, \[ \lim_{n \to \infty} \sup\limits_{x\in \RR} \left| \omega\left(\overline{A_\alpha(\lambda_{(n)})}\right)(x) - \Omega(x) \right| = 0.\] \end{theorem} Note that the limiting curve is exactly the same as in the case $\alpha=1$. Moreover, we establish a central limit theorem. Informally, it can be presented as follows: \begin{equation}\label{eq:informal} \omega\left(\overline{A_\alpha(\lambda_{(n)})}\right)(x) \sim \Omega(x) +\frac{2}{\sqrt{n}} \Delta^{(\alpha)}_{\infty}(x), \end{equation} where $\Delta^{(\alpha)}_{\infty}$ is the Gaussian process on $[-2,2]$ with values in $(\mathcal{C}^\infty(\RR))'$ defined by: \[ \Delta_\infty^{(\alpha)} (2 \cos(\theta)) = \frac{1}{\pi} \sum_{k=2}^\infty \frac{\Xi_k}{\sqrt{k}} \sin(k \theta) - \gamma/4 + \gamma \theta/(2\pi). \] Here, and throughout the paper, $\gamma$ is the difference $\sqrt{\alpha}^{-1}-\sqrt{\alpha}$. The formal version of this result is stated and proved in Section~\ref{sect:CLT2}, while the explanation for the informal reformulation is given in Section~\ref{subsect:informal}. Note that the random part of the second order asymptotics is independent of $\alpha$, while a deterministic term proportional to $\gamma$ (and, hence, vanishing for $\alpha=1$) appears. Here again, the similarity with the G$\beta$E ensemble is striking.
Indeed, for the bulk of the spectral measure of the G$\beta$E ensemble, we also have the following phenomena: \begin{itemize} \item the first order asymptotics is independent of $\beta$ -- see equation~\eqref{eq:first-order-GBE}; \item the second order asymptotics is the sum of two terms: a random one and a deterministic one. Moreover, the quotient of the deterministic one over the random one is proportional to $\gamma$ (see \cite[Theorem 1.2]{DumitriuEdelman2006}). \end{itemize} Therefore our result is a new hint towards the deep connection between Jack measure and the G$\beta$E ensemble. \subsection{Jack polynomials and Jack measure} To explain our intermediate results and the main steps of the proof, we first need to review the connection between Jack measure and Jack polynomials. \subsubsection{Jack polynomials} In a seminal paper \cite{Jack1970/1971}, Jack introduced a family of symmetric functions $J^{(\alpha)}_\lambda$ depending on an additional parameter $\alpha$. These functions are now called \emph{Jack polynomials}. For some special values of $\alpha$, they coincide with some established families of symmetric functions. Namely, up to multiplicative constants: for $\alpha=1$, Jack polynomials coincide with Schur polynomials; for $\alpha=2$, they coincide with zonal polynomials; for $\alpha=1/2$, they coincide with symplectic zonal polynomials; for $\alpha = 0$, we recover the elementary symmetric functions; and finally their highest degree components in $\alpha$ are the monomial symmetric functions. Moreover, Jack polynomials for $\alpha=-(k+ 1)/(r+ 1)$ satisfy some interesting annihilation conditions \cite{Feigin2002} and the general $\alpha$ case (with some additional, technical assumptions) is relevant in Kadell's work in relation to generalizations of Selberg's integral \cite{Kadell1997}.
Over time it has been shown that several results concerning Schur and zonal polynomials can be generalized in a rather natural way to Jack polynomials (Section (VI.10) of Macdonald's book \cite{Macdonald1995} gives a few results of this kind); therefore, Jack polynomials can be viewed as a natural interpolation between several interesting families of symmetric functions. \subsubsection{A characterization of Jack measure} Expanding Jack polynomials $J_\lambda^{(\alpha)}$ in the power-sum symmetric function basis, we define the coefficients $\theta_{\rho}^{(\alpha)}(\lambda)$ by: \begin{equation} \label{eq:jack-characters} J_\lambda^{(\alpha)}=\sum_{\substack{\rho: \\ |\rho|=|\lambda|}} \theta_{\rho}^{(\alpha)}(\lambda)\ p_{\rho}. \end{equation} In the case $\alpha=1$, Jack polynomials specialize to \[J_\lambda^{(1)} = \frac{n!}{\dim(\lambda)} s_\lambda,\] where $s_\lambda$ is the Schur function and $\dim(\lambda)$ is the dimension of the symmetric group representation associated to $\lambda$. Hence, using the Frobenius formula \cite[page 114]{Macdonald1995}, we can express $\theta_{\rho}^{(1)}(\lambda)$ in terms of irreducible character values of the symmetric group. Namely, \[ \theta_\rho^{(1)}(\lambda)=\frac{n!}{z_\rho} \, \frac{\chi^{\lambda}_\rho}{\dim(\lambda)},\] where $\chi^{\lambda}_\rho$ is the character of the irreducible representation indexed by $\lambda$ evaluated on a permutation of cycle-type $\rho$ and $z_\rho$ is the standard numerical factor $\prod_i i^{m_i(\rho)} m_i(\rho)!$, where $m_i(\rho)$ is the number of parts of $\rho$ equal to $i$. By analogy, we use in the general case the terminology {\em Jack characters} for $\theta_{\rho}^{(\alpha)}(\lambda)$ (while they do not have any representation-theoretic interpretation, they share a lot of properties with characters of the symmetric groups).
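As a toy illustration of the $\alpha=1$ formula above, one can hard-code the character table of $S_3$ (this one small case only; the data below is standard) and tabulate $\theta_\rho^{(1)}(\lambda)$. By column orthogonality of characters, averaging against the Plancherel weights $\dim(\lambda)^2/n!$ gives $\delta_{\rho,(1^n)}$, which is exactly the $\alpha=1$ case of the characterization \eqref{eq:expect_Jack_character} recalled below:

```python
from fractions import Fraction

# Character table of S_3: rows indexed by partitions lambda (irreducibles),
# columns by cycle types rho.
dims = {(3,): 1, (2, 1): 2, (1, 1, 1): 1}
chi = {
    (3,):      {(1, 1, 1): 1, (2, 1): 1,  (3,): 1},
    (2, 1):    {(1, 1, 1): 2, (2, 1): 0,  (3,): -1},
    (1, 1, 1): {(1, 1, 1): 1, (2, 1): -1, (3,): 1},
}
z = {(1, 1, 1): 6, (2, 1): 2, (3,): 3}   # z_rho for n = 3
n_fact = 6                               # 3!

def theta(rho, lam):
    """Jack character at alpha = 1 via the Frobenius formula."""
    return Fraction(n_fact, z[rho]) * Fraction(chi[lam][rho], dims[lam])

plancherel = {lam: Fraction(dims[lam] ** 2, n_fact) for lam in dims}

def expectation(rho):
    """E[theta_rho^(1)] under Plancherel measure on partitions of 3."""
    return sum(plancherel[lam] * theta(rho, lam) for lam in dims)
```

Here `expectation` returns $1$ for $\rho=(1,1,1)$ and $0$ for $\rho=(2,1)$ and $\rho=(3)$, as the characterization predicts.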
The following property, which corresponds to the case $\pi=(1^n)$ of \cite[Equation (8.4)]{MatsumotoOddJM}, characterizes Jack measure: \begin{equation} \esper_{\PP_n^{(\alpha)}}( \theta^{(\alpha)}_\rho(\lambda) ) = \delta_{\rho,(1^n)}, \label{eq:expect_Jack_character} \end{equation} where $\lambda$ is a random Young diagram with $n$ boxes distributed according to $\PP_n^{(\alpha)}$. \subsubsection{A central limit theorem for Jack characters} As in the case $\alpha=1$, an important intermediate result, which may be of independent interest, is an {\em algebraic} central limit theorem. Namely, we prove that Jack characters indexed by hooks ({\em i.e.} when $\rho$ is a hook) have joint Gaussian fluctuations. \begin{theorem} \label{theo:FluctuationsJackCharacters} Choose a sequence $\left(\Xi_k \right)_{k=2,3,\dots}$ of independent standard Gaussian random variables. Let $(\lambda_{(n)})_{n \ge 1}$ be a sequence of random Young diagrams of size $n$ distributed according to Jack measure. Define the random variable \[W_k(\lambda_{(n)}) = \frac{\sqrt{k} \, \theta^{(\alpha)}_{(k,1^{n-k})}(\lambda_{(n)})}{ n^{k/2}}.\] Then, as $n \to \infty$, we have: \[ \left( W_k \right)_{k=2,3,\dots} \xrightarrow{d} \left( \Xi_k \right)_{k=2,3,\dots}, \] where $\xrightarrow{d}$ means convergence in distribution of the finite-dimensional laws. \end{theorem} In the case $\alpha=1$, this theorem was proved independently by Kerov \cite{Kerov1993gaussian,IvanovOlshanski2002} and Hora \cite{HoraFluctuationsCharacters}. With the method developed here, one can even give an upper bound on the speed of convergence of the distribution function: \begin{theorem} We use the same notation as in Theorem~\ref{theo:FluctuationsJackCharacters}.
Then, for any integer $d \ge2$ and real numbers $x_2,\cdots,x_d$, \[\big|\PP ( W_2 \le x_2, \cdots, W_d \le x_d) - \PP (\Xi_2 \le x_2, \cdots, \Xi_d \le x_d) \big| = O(n^{-1/4}),\] where the constant hidden in the Landau symbol $O$ is {\em uniform} in $x_2,\cdots,x_d$, but depends on $d$. \label{theo:SpeedConvergence} \end{theorem} In the case $\alpha=1$ and $d=2$, the study of the speed of convergence of $W_2$ towards a Gaussian variable was initiated by Fulman in~\cite{FulmanFluctuationChA2}, who proved a bound of order $n^{-1/4}$. This bound was then improved to $n^{-1/2}$ in two different works \cite{SpeedCvCharacters1,SpeedCvCharacters2}. Fulman then generalized the $n^{-1/4}$ bound to any $W_i$, in the cases $\alpha=1$ and $\alpha=2$, using the representation-theoretical background for these particular values of $\alpha$, see~\cite{FulmanSpeedCvRepresentations}. Some ideas from these papers are fundamental here, as explained below. The main novelty in Theorem~\ref{theo:SpeedConvergence} is of course the fact that our result holds for any value of the parameter $\alpha$. But, even in the cases $\alpha=1$ and $\alpha=2$, a bound on the speed of convergence for a vector of random variables, and not only for a single real-valued random variable, seems to be new. \begin{remark}\label{rem:Fluct_NonHook} We are not able to describe the fluctuations of the Jack characters $\theta^{(\alpha)}_\rho$ when $\rho$ is not a hook. This is discussed in Subsection~\ref{subsect:Fluct_NonHook}. \end{remark} \subsection{Ingredients of the proof} We shall now say a word about the proof of our main results. The proof of Theorem \ref{theo:RYD-limit} is easier than the proof of the fluctuation result, so we will focus here on the latter and compare it to the work of Ivanov, Kerov and Olshanski.
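For $\alpha=1$, Jack measure is the Plancherel measure, which can be sampled exactly by applying the Robinson--Schensted correspondence to a uniform random permutation. Combined with the classical identity $\theta^{(1)}_{(2,1^{n-2})}(\lambda)=\sum_{b \in \lambda} c(b)$ (the sum of the contents of the boxes of $\lambda$), this yields a quick numerical illustration of Theorem~\ref{theo:FluctuationsJackCharacters} for $k=2$. The sketch below is our own illustration (helper names ours) and has nothing to do with the method of proof:

```python
import random
from bisect import bisect_right

def rsk_shape(perm):
    """Shape of the RSK insertion tableau of a permutation (row insertion)."""
    rows = []
    for x in perm:
        for row in rows:
            j = bisect_right(row, x)
            if j == len(row):          # x goes at the end of this row
                row.append(x)
                x = None
                break
            row[j], x = x, row[j]      # bump the displaced entry to the next row
        if x is not None:
            rows.append([x])
    return [len(r) for r in rows]

def content_sum(shape):
    """Sum of contents j - i over the boxes of the diagram (1-indexed rows)."""
    return sum(l * (l + 1) // 2 - i * l for i, l in enumerate(shape, start=1))

def simulate_W2(n=100, trials=2000, seed=0):
    """Empirical mean and variance of W_2 = sqrt(2) * (sum of contents) / n
    under the Plancherel (alpha = 1) measure."""
    rng = random.Random(seed)
    samples = []
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        samples.append(2 ** 0.5 * content_sum(rsk_shape(perm)) / n)
    mean = sum(samples) / trials
    var = sum((s - mean) ** 2 for s in samples) / trials
    return mean, var
```

With $n=100$ and a few thousand samples, the empirical mean and variance of $W_2$ come out close to $0$ and $1$, as predicted (the exact variance is $(n-1)/n$).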
\begin{remark} While beautiful and elementary, Hora's proof of the central limit theorem for characters in the case $\alpha=1$ seems very hard to generalize to a generic value of $\alpha$, as it relies from the beginning on the representation-theoretical background. \end{remark} \subsubsection{Polynomial functions} A central idea in the paper \cite{IvanovOlshanski2002} is to consider characters as functions on all Young diagrams by defining: \[\Ch_{\mu}^{(1)}(\lambda)= \begin{cases} |\lambda|(|\lambda|-1)\cdots (|\lambda|-|\mu|+1) \frac{\chi^\lambda_{\mu\, 1^{|\lambda|-|\mu|}}} {\dim(\lambda)} &\text{ if }|\lambda| \ge |\mu|\\ 0 &\text{ if }|\lambda| < |\mu|. \end{cases}\] In Sections 1-4 of the paper \cite{IvanovOlshanski2002}, the authors prove that the functions $\Ch_{\mu}^{(1)}$ span linearly a subalgebra of the algebra of functions on {\em all} Young diagrams, give several equivalent descriptions of this subalgebra and describe combinatorially the product $\Ch_{\mu}^{(1)} \Ch_{\nu}^{(1)}$ ($\Ch_{\mu}^{(1)}$ is denoted by $\bm{p^\#_\mu}$ in~\cite{IvanovOlshanski2002}). This subalgebra is called the \emph{algebra of polynomial functions} on the set of Young diagrams (see also \cite{KerovOlshanskiPolFunc}) and denoted here by $\Polun$. In the general $\alpha$-case, one can define a deformation of the functions above as follows: for an integer partition $\mu$, denote by $|\mu|$ its size, by $\ell(\mu)$ its length, by $m_i(\mu)$ the number of parts of $\mu$ equal to $i$ and by $z_\mu$ the standard numerical factor $\prod_i i^{m_i(\mu)} m_i(\mu)!$. We define \begin{displaymath} \label{eq:definition-Jack} \Ch_{\mu}^{(\alpha)}(\lambda)= \begin{cases} \alpha^{-\frac{|\mu|-\ell(\mu)}{2}} \binom{|\lambda|-|\mu|+m_1(\mu)}{m_1(\mu)} \ z_\mu \ \theta^{(\alpha)}_{\mu,1^{|\lambda|-|\mu|}}(\lambda) &\text{if }|\lambda| \ge |\mu| ;\\ 0 & \text{if }|\lambda| < |\mu|.
\end{cases} \end{displaymath} While Jack characters have been studied for a long time, the idea, due to Lassalle, of looking at them as functions of $\lambda$ as above is quite recent \cite{Lassalle2008a,Lassalle2009}. Among other things, he proved that, as in the case $\alpha=1$, the functions $\Ch_{\mu}^{(\alpha)}$ span linearly a subalgebra of functions on all Young diagrams, which has a nice characterization: we present these results in Section \ref{SectDefKerov}, see in particular Proposition~\ref{prop:Jack-characters-basis}. This subalgebra is called the \emph{algebra of $\alpha$-polynomial functions} on the set of Young diagrams (see also \cite{KerovOlshanskiPolFunc}) and denoted here by $\Pola$. As a function on {\em all} Young diagrams, $\Ch^{(\alpha)}_\mu$ can be restricted to diagrams of size $n$ and hence considered as a random variable in our problem. It follows directly from equation~\eqref{eq:expect_Jack_character} that \begin{equation}\label{eq:exp_Ch} \esper_{\PP_n^{(\alpha)}}(\Ch^{(\alpha)}_\mu)=\begin{cases} n(n-1)\cdots(n-k+1) & \text{if }\mu=1^k \text{ for some }k \leq n,\\ 0 & \text{otherwise.} \end{cases} \end{equation} Throughout the paper, we shall use the standard notation $(n)_k$ for the falling factorial $n(n-1)\cdots(n-k+1)$. \subsubsection{Moment method and structure constants} Another idea in the paper~\cite{IvanovOlshanski2002} is to use the method of moments and thus to compute asymptotically (for $h \ge 1$) \[\esper_{\PP_n^{(1)}} \big( \Ch^{(1)}_{(k)}(\lambda_{(n)})^h \big). \] Recall that the algebra $\Polun$ has a linear basis given by the family of normalized characters $\left(\Ch^{(1)}_\mu\right)_\mu$. As the expectation of $\Ch_{\mu}^{(1)}(\lambda_{(n)})$ is particularly simple -- see equation~\eqref{eq:exp_Ch} -- one can compute the expectation of $\big(\Ch^{(1)}_{(k)}\big)^h$ by expanding it on the basis $\Ch_{\mu}^{(1)}$ of $\Polun$.
To do this, the authors of~\cite{IvanovOlshanski2002} need to understand how a product $\Ch_{\nu}^{(1)} \Ch_{\rho}^{(1)}$ expands on the $\Ch_{\mu}^{(1)}$ basis, that is, they need to study the {\em structure constants} of this basis. They provide a combinatorial description of these structure constants \cite[Proposition 4.5]{IvanovOlshanski2002}. Unfortunately, this combinatorial description relies on the representation-theoretical interpretation of $\theta_{\rho}^{(1)}(\lambda)$ and has {\em a priori} no extension to a general value of $\alpha$. To overcome this difficulty, we prove that the structure constants of the $\Ch_{\mu}^{(\alpha)}$ basis depend polynomially on the auxiliary parameter $\gamma={\sqrt{\alpha}}^{-1}-\sqrt{\alpha}$. This is a non-trivial result and has interesting applications beyond the study of large Young diagrams under Jack measure. Therefore, we think that it may be of independent interest and present it in detail in Section~\ref{subsect:Second_Main_Result} as our second main result. Our polynomiality result for structure constants (Theorem \ref{theo:struct-const}) allows us to show that some properties proved combinatorially in the case $\alpha=1$ still hold in the general $\alpha$-case (we will also rely on the case $\alpha=2$, which also has some representation-theoretical background, and use polynomial interpolation). Our result gives a good estimate of moments of $\Ch_{(k)}^{(\alpha)}$ of order at most 4 (but not of arbitrary order). Therefore, we cannot conclude with the moment method. To overcome that difficulty, we use another ingredient in our proof: multivariate Stein's method. \subsubsection{Multivariate Stein's method and Fulman's construction} Stein's method is a classical method in probability to prove convergence in distribution towards Gaussian or Poisson distributions, together with bounds on the speed of convergence; see the monograph of Stein \cite{Stein}.
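At the heart of Stein's method lies the characterization that a random variable $Z$ is standard Gaussian if and only if $\esper[f'(Z)]=\esper[Zf(Z)]$ for all sufficiently smooth $f$; quantifying the defect of this identity for a given variable $W$ is, roughly speaking, what produces explicit convergence bounds. The following minimal Python check of the identity on polynomial test functions (using the closed form $\esper[Z^k]=(k-1)!!$ for even $k$) is an illustration only and is unrelated to the exchangeable-pair construction discussed next:

```python
from math import prod

def gaussian_moment(k):
    """E[Z^k] for Z ~ N(0,1): (k-1)!! for even k, 0 for odd k."""
    if k % 2:
        return 0
    return prod(range(k - 1, 0, -2))  # empty product = 1 when k = 0

def expect(poly):
    """E[p(Z)] for a polynomial with coefficient list poly[i] = coeff of x^i."""
    return sum(c * gaussian_moment(i) for i, c in enumerate(poly))

def deriv(poly):
    """Coefficient list of p'(x)."""
    return [i * c for i, c in enumerate(poly)][1:] or [0]

def times_x(poly):
    """Coefficient list of x * p(x)."""
    return [0] + list(poly)

# Stein identity E[f'(Z)] = E[Z f(Z)] on polynomial test functions:
for f in ([0, 1], [0, 0, 1], [0, 2, 0, 1], [0, 0, 0, 0, 1]):
    assert expect(deriv(f)) == expect(times_x(f))
```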
To use it, one needs to construct an {\em exchangeable pair} for the relevant random variable. Once this pair is constructed, one can prove Gaussian fluctuations using only bounds on (mixed conditional) moments of order at most $4$ (while the moment method requires control on moments of all orders). In the framework of Jack characters, an exchangeable pair has already been built by Fulman to prove a fluctuation result for $\Ch^{(\alpha)}_{(2)}$. The same construction extends to $\Ch^{(\alpha)}_{(k)}$, but the analysis of the first moments becomes trickier, requires new ideas and relies heavily on our polynomiality result for structure constants. Let us note that, unlike Fulman's result, ours is a result of convergence in distribution for {\em vectors} of random variables. Therefore we need to use a multivariate analog of Stein's classical theorem. The one recently established by Reinert and R{\"o}llin~\cite{ReinertRollin2009} turns out to be suitable for our purpose.\medskip \subsection{Second main result: polynomiality of structure constants of Jack characters} \label{subsect:Second_Main_Result} It follows from the work of Lassalle -- see Proposition~\ref{prop:Jack-characters-basis} -- that the functions $\Ch_{\mu}^{(\alpha)}$ span linearly the algebra of $\alpha$-polynomial functions denoted by $\Pola$ (when $\mu$ runs over integer partitions of all sizes). Hence, there exist some rational numbers $g^{(\alpha)}_{\mu,\nu;\pi}$, depending on $\alpha$, such that \begin{equation}\label{eq:definition-structure-constants} \Ch^{(\alpha)}_\mu \cdot \Ch^{(\alpha)}_\nu =\sum_{\substack{\pi \text{ partition} \\ \text{of any size}}} g^{(\alpha)}_{\mu,\nu;\pi} \Ch^{(\alpha)}_\pi. \end{equation} These numbers are often called {\em structure constants} of the basis $(\Ch_{\mu}^{(\alpha)})$. Understanding them is a worthy goal, because they describe the multiplicative structure of the algebra.
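The smallest instance of equation~\eqref{eq:definition-structure-constants} can be made explicit. From the definition of $\Ch^{(\alpha)}_\mu$ and the fact that $\theta^{(\alpha)}_{(1^n)}(\lambda)=1$, one gets $\Ch^{(\alpha)}_{(1)}(\lambda)=|\lambda|$ and $\Ch^{(\alpha)}_{(1,1)}(\lambda)=|\lambda|(|\lambda|-1)$, whence $\Ch^{(\alpha)}_{(1)}\cdot\Ch^{(\alpha)}_{(1)}=\Ch^{(\alpha)}_{(1,1)}+\Ch^{(\alpha)}_{(1)}$, that is $g^{(\alpha)}_{(1),(1);(1,1)}=g^{(\alpha)}_{(1),(1);(1)}=1$ independently of $\alpha$. A small Python sketch (helper names ours) checking this identity and the corresponding degree bound from the theorem stated below:

```python
def n1(mu): return sum(mu) + len(mu)
def n2(mu): return sum(mu) - len(mu)
def n3(mu): return sum(mu) - len(mu) + mu.count(1)

def degree_bound(mu, nu, pi):
    """min_i n_i(mu) + n_i(nu) - n_i(pi): bound on deg_gamma of g_{mu,nu;pi}."""
    return min(f(mu) + f(nu) - f(pi) for f in (n1, n2, n3))

# Ch_(1)(lam) = n and Ch_(1,1)(lam) = n(n-1), so Ch_(1)^2 = Ch_(1,1) + Ch_(1);
# both sides have degree <= 2 in n, so checking three points proves the identity:
for n in range(5):
    assert n * n == n * (n - 1) + n

# the two structure constants equal 1: a constant (degree 0, even) polynomial
# in gamma, consistent with the degree bound:
assert degree_bound((1,), (1,), (1, 1)) == 0
assert degree_bound((1,), (1,), (1,)) == 0
```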
Our second main result is a polynomiality result for these structure constants with precise bounds on the degree: let \begin{align*} n_1(\mu) &= |\mu|+\ell(\mu), \\ n_2(\mu) &= |\mu|-\ell(\mu), \\ n_3(\mu) &= |\mu|-\ell(\mu)+m_1(\mu). \end{align*} Then we have: \begin{theorem} \label{theo:struct-const} Fix three partitions $\mu$, $\nu$ and $\pi$. The structure constant $g^{(\alpha)}_{\mu,\nu;\pi}$ is a polynomial in $\gamma={\sqrt{\alpha}}^{-1}-\sqrt{\alpha}$ with rational coefficients and of degree at most \[ \min_{i=1,2,3} n_i(\mu) + n_i(\nu) - n_i(\pi). \] Moreover, if $n_1(\mu) + n_1(\nu) - n_1(\pi)$ is even (respectively, odd), it is an even (respectively, odd) polynomial. \end{theorem} \subsubsection{Other applications of the second main result} \label{subsect:other_applications} In addition to the main purpose of this paper, Theorem \ref{theo:struct-const} can be applied to several different problems from the literature. \begin{itemize} \item It contains a fifty-year-old result of Farahat and Higman, stating that the structure constants of the center of the symmetric group algebra behave polynomially in $n$ \cite{FarahatHigmanCentreQSn}. \item A natural analog of the center of the symmetric group algebra is the Hecke algebra of the pair $(S_{2n},H_n)$, see {\em e.g.} \cite[Section 7.2]{Macdonald1995} for an introduction to it. Theorem \ref{theo:struct-const} also implies an analog of Farahat and Higman's result in this context: up to some explicit normalization factor, structure constants also behave polynomially in $n$\footnote{Let us also mention the work of Aker and Can \cite{AkerCanFarahatHecke} in this direction. Unfortunately, a factor $2^n n!$ is missing in the main result and the authors are not able to correct this \cite{CanMistake}.}. The same result has been proved independently by Tout \cite{Tout-Structure-constant-S2n-Hn}.
\item Goulden and Jackson \cite{GouldenJacksonMatchingJack} have defined, using Jack polynomials, an interpolation between the structure constants of both algebras. By construction, these quantities are rational functions in $\alpha$ but they were conjectured to be in fact polynomials in $\alpha-1$ with non-negative integer coefficients having some combinatorial interpretation \cite[Section 4]{GouldenJacksonMatchingJack}. Here, we prove that they are {\em polynomials} in $\alpha$ (or equivalently in $\alpha-1$) with rational coefficients. Unfortunately, we are not able to prove either the integrality or the positivity of the coefficients. \item We are also able to prove two conjectures of Matsumoto~\cite[Section 9]{MatsumotoOddJM}, arising in the context of matrix integrals (Section \ref{SubsectMatsumoto}). \item We can give a short proof of a recent result of Vassilieva \cite{VassilievaJack} which generalizes a famous result of Dénes \cite{Denes1959} on the number of {\em minimal} factorizations of a cycle in the symmetric group. \end{itemize} The link between our main result and the first two items is presented in Section \ref{SectSpecialValues}, while the connection with the last three items is explained in Appendix \ref{app:Matching-Jack_And_Matsumoto}. \subsubsection{Tool: Kerov's polynomial for Jack characters} Let us now say a word about the proof of our second main result. The algebra $\Pola$, linearly spanned by the functions $\Ch_\mu^{(\alpha)}$, also admits some interesting algebraic bases: for example, the basis of {\em free cumulants} $(R_k^{(\alpha)})_{k \ge 2}$ -- see Section~\ref{SectDefKerov}. Thus $\Ch_\mu^{(\alpha)}$ can be written uniquely as a polynomial in free cumulants. As this expansion was first considered by Kerov in the case $\alpha=1$, it is usually termed {\em Kerov's polynomial} (or {\em Kerov's expansion} to avoid repetition of the word polynomial).
In \cite{Lassalle2009}, Lassalle described an inductive algorithm to compute the coefficients of this expansion. In this paper, by a careful analysis of Lassalle's algorithm, we obtain some polynomiality results (with several bounds on the degree) for these coefficients: see Propositions~\ref{PropBound1}, \ref{PropBound2} and \ref{PropBound3}. Clearly, writing some functions in a multiplicative basis may help in understanding how to multiply them, and we can deduce Theorem \ref{theo:struct-const} from these results on Kerov's polynomials. While not suited to obtaining closed formulas, this way of studying structure constants is, as far as we know, original. Usually, results on structure constants are obtained using their combinatorial description or {\em via} representation-theoretic tools \cite{FarahatHigmanCentreQSn,GouldenJacksonMapsZonal,IvanovKerovPartialPermutations,GoupilSchaefferStructureCoef,Tout-Structure-constant-S2n-Hn}. To finish this paragraph, let us mention that there is an appealing positivity conjecture on Kerov's polynomials for Jack characters \cite[Conjecture 1.2]{Lassalle2009}. While we cannot solve this conjecture, our analysis of Lassalle's algorithm gives some partial results: we prove in general the polynomiality of the coefficients and we compute a few specific values that were conjectured by Lassalle (see Appendix \ref{app:Kerov_polynomials}). Another interesting application of our result on Kerov's polynomials is a new proof of the polynomial dependence of Jack polynomials on the Jack parameter $\alpha$, which was an important open problem in the early nineties. This is presented in Section \ref{SubsectCoefJackPoly}. \subsection{An open problem: edge fluctuations of Jack measure} Another natural question on asymptotics of Jack measure is the behavior of the first few rows of the Young diagrams. This kind of result, orthogonal to the ones in this paper, is called {\em edge fluctuations}.
Our law of large numbers for the bulk of Young diagrams implies that, for any fixed positive integer $k$ and real number $C<1$, \[\PP \left[ \frac{(\lambda_{(n)})_k}{\sqrt{n}} \leq \frac{2\, C}{\sqrt{\alpha}} \right] \to 0,\] while Lemma~\ref{lem:FiniteSupport} tells us that $(\lambda_{(n)})_k/\sqrt{n}$ exceeds $(2e)/\sqrt{\alpha}$ with exponentially small probability. A natural conjecture, considering the case $\alpha=1$ where the edge fluctuations are well described (see \cite{Okounkov2000,BorodinOkounkovOlshanski2000} and references therein) and the link with $\beta$-ensembles, would be the following: for each integer $k \ge 1$, the quantity $(\lambda_{(n)})_k/\sqrt{n}$ converges in probability towards $2/\sqrt{\alpha}$ and the joint vector \[ \left[ n^{1/3} \left( \frac{(\lambda_{(n)})_j}{\sqrt{n}} - \frac{2}{\sqrt{\alpha}} \right) \right]_{1 \le j \le k}\] converges in law towards the $\beta$-Tracy-Widom distribution, which was introduced and studied in \cite{BetaEdgeFluctuations} to describe edge fluctuations of $\beta$-ensembles. Naturally, a similar conjecture can be formulated for the lengths of the first columns of the Young diagram $\lambda_{(n)}$. These conjectures hold true for the first row/column in the cases $\alpha=1/2$ and $\alpha=2$. The proof uses the combinatorial interpretation of Jack measure at these particular values of $\alpha$, applying the Robinson--Schensted correspondence to random fixed-point-free involutions: see \cite{BaikRainsMonotomeSubsequenceInvolutions}. We have not made computer experiments to confirm this conjecture and leave this problem wide open for future research. \subsection{Outline of the paper} The paper is organized as follows. Section \ref{SectDefKerov} gives all definitions and background on Jack characters, free cumulants and Kerov polynomials. In Section \ref{SectPolynomial} we prove the polynomiality of the coefficients of Kerov's polynomials, with bounds on the degrees.
Then our second main result (that is, the polynomiality of structure constants, with precise bounds on the degree) is proved in Section \ref{SectStructureConstants}. Section \ref{SectSpecialValues} presents technical statements on structure constants, which will be used in the analysis of large Young diagrams. The last three sections deal with convergence results for large Young diagrams: Section \ref{sect:FirstOrder} presents the first order asymptotics, Section~\ref{sect:CLT} gives the central limit theorem for Jack characters and Section \ref{sect:CLT2} establishes the Gaussian fluctuations of large random Young diagrams around the limit shape. The appendices are devoted to partial answers and solutions to questions from the literature. \section{Jack characters and Kerov polynomials} \label{SectDefKerov} \subsection{Polynomial functions on the set of Young diagrams} \label{subsec:polfunct} The ring $\Polun$ of \emph{polynomial functions on the set of Young diagrams} (briefly: the ring of \emph{polynomial functions}) was introduced by Kerov and Olshanski in order to study irreducible character values of the symmetric groups \cite{KerovOlshanskiPolFunc}. The first characterization of $\Polun$, which we shall use as a definition, is the following. \begin{definition}\label{def:polynomial} A function $F$ on the set of all Young diagrams belongs to $\Polun$ if there exists a collection of polynomials $\left(F_h \in \QQ[\lambda_1,\dots,\lambda_h]\right)_{h > 0}$ such that \begin{itemize} \item for a diagram $\lambda=(\lambda_1,\dots,\lambda_h)$ of length $h$, one has $F(\lambda)=F_h(\lambda_1,\ldots,\lambda_h)$; \item each $F_h$ is symmetric in the variables $\lambda_1 - 1, \lambda_2 -2, \ldots ,\lambda_h-h$; \item the compatibility relation \[F_{h+1}(\lambda_1,\dots,\lambda_h,0) = F_h(\lambda_1,\dots,\lambda_h)\] holds true for all values of $h$.
\end{itemize} \end{definition} The ring $\Polun$, as defined above, is sometimes called the ring of \emph{shifted symmetric functions} in $\lambda_1, \lambda_2,\ldots$. It was first considered by Knop and Sahi \cite{KnopSahiShiftedSym} in a more general context. While this is not obvious, one can prove that $\Ch^{(1)}_\mu$ belongs to $\Polun$. In fact, one has more \cite[Section 3]{KerovOlshanskiPolFunc}. \begin{proposition} When $\mu$ runs over all partitions, the family $(\Ch^{(1)}_\mu)_\mu$ forms a linear basis of $\Polun$. \end{proposition} An equivalent description of $\Polun$ can be given using Kerov's interlacing coordinates of a Young diagram. Recall that the \emph{content} of a box of a Young diagram is $j-i$, where $j$ is its column index and $i$ its row index; more generally, the content of a point of the plane is the difference between its $x$-coordinate and its $y$-coordinate. We denote by $\II_\lambda$ the set of contents of the \emph{inner corners} of $\lambda$, that is, corners at which a box can be added to $\lambda$ to obtain a new diagram of size $|\lambda|+1$. Similarly, the set $\OO_\lambda$ is defined as the set of contents of the \emph{outer corners}, that is, corners at which a box can be removed from $\lambda$ to obtain a new diagram of size $|\lambda|-1$. An example is given in Figure \ref{FigCorners} (we use the French convention to draw Young diagrams).
\begin{figure}[tb] \begin{minipage}{.6 \linewidth} \begin{flushright} \begin{tikzpicture} \foreach \i in {0,...,2} \draw (0,\i) -- (4,\i); \draw (0,3) -- (2,3); \foreach \j in {0,...,2} \draw (\j,0) -- (\j,3); \foreach \j in {3,4} \draw (\j,0) -- (\j,2); \fill (0,3) node[anchor=south west] {$i_1$} circle(2pt); \fill (2,2) node[anchor=south west] {$i_2$} circle(2pt); \fill (4,0) node[anchor=south west] {$i_3$} circle(2pt); \fill (2,3) node[anchor=north east] {$o_1$} circle(2pt); \fill (4,2) node[anchor=north east] {$o_2$} circle(2pt); \end{tikzpicture} \end{flushright} \end{minipage} \hfill \begin{minipage}{.35\linewidth} \begin{align*} \OO_\lambda&=\{-1,2\} \\ \II_\lambda&=\{-3,0,4\} \end{align*} \end{minipage} \caption{A Young diagram with its inner and outer corners (marked respectively with $i$ and $o$).} \label{FigCorners} \end{figure} If $k$ is a positive integer, one can consider the power-sum symmetric function $p_k$, evaluated on the difference of alphabets $\II_\lambda - \OO_\lambda$. By definition, it is a function on Young diagrams given by: \[\lambda \mapsto p_k(\II_\lambda - \OO_\lambda):= \sum_{i \in \II_\lambda} i^k - \sum_{o \in \OO_\lambda} o^k.\] It can easily be seen that, for any Young diagram, $p_1(\II_\lambda - \OO_\lambda)=0$. As any symmetric function can be written (uniquely) in terms of $p_k$, we can define $f(\II_\lambda - \OO_\lambda)$ for any symmetric function $f$ as follows: if $a_\rho$ ($\rho$ partition) are the coefficients of the $p$-expansion of $f$, that is $f=\sum_\rho a_\rho p_{\rho_1} \cdots p_{\rho_\ell}$, then by definition, \[f(\II_\lambda - \OO_\lambda)=\sum_\rho a_\rho p_{\rho_1}(\II_\lambda - \OO_\lambda) \cdots p_{\rho_\ell} (\II_\lambda - \OO_\lambda).\] With this notion of symmetric functions evaluated on difference of alphabets, the ring $\Polun$ admits the following equivalent description \cite[Corollary 2.8]{IvanovOlshanski2002}. 
\begin{proposition}\label{prop:pk_basis} The functions $(\lambda \mapsto p_k(\II_\lambda - \OO_\lambda))_{k \geq 2}$ form an algebraic basis of $\Polun$. \end{proposition} In other terms, any function $F$ in $\Polun$ is equal to $\lambda \mapsto f(\II_\lambda - \OO_\lambda)$, for some symmetric function $f$. This symmetric function $f$ is unique up to addition of a multiple of $p_1$. \subsection{Transition measure and free cumulants} \label{subsect:TransMeasure} Kerov \cite{KerovTransitionMeasure} introduced the notion of \emph{transition measure} of a Young diagram. This is a probability measure $\mu_\lambda$ on the real line $\RR$ associated with $\lambda$ and defined by its Cauchy transform: \[G_{\mu_\lambda}(z) = \int_\RR \frac{d\mu_\lambda(x)}{z-x} = \frac{\prod_{o \in \OO_\lambda} (z-o)}{\prod_{i \in \II_\lambda} (z-i)}.\] In particular, the transition measure is supported on $\II_\lambda$. Besides, its \emph{moment generating series} is given by \[ \sum_{k \ge 0} M^{(1)}_k(\lambda) \, t^k := \frac{1}{t} \, G_{\mu_\lambda}(1/t) =\frac{\prod_{o \in \OO_\lambda} (1-o\, t)}{\prod_{i \in \II_\lambda} (1-i\, t)}, \] where $M^{(1)}_k(\lambda):= \int_\RR x^k d\mu_\lambda(x)$ is the \emph{$k$-th moment} of $\mu_\lambda$. It is easily seen that, for any diagram, $M^{(1)}_0(\lambda)=1$ and $M^{(1)}_1(\lambda)=0$. This generating series can be rewritten as \begin{multline*} \sum_{k \ge 0} M^{(1)}_k(\lambda) \, t^k = \exp\left( \sum_{i \in \II_\lambda}\sum_{k \ge 1} \frac{i^k}{k}\, t^k - \sum_{o \in \OO_\lambda}\sum_{k \ge 1} \frac{o^k}{k}\, t^k \right) \\ = \exp \left( \sum_{k \ge 1} \frac{1}{k} p_k(\II_\lambda -\OO_\lambda)\, t^k \right). \end{multline*} This implies that $M^{(1)}_k(\lambda)=h_k(\II_\lambda -\OO_\lambda)$, where $h_k$ is the complete homogeneous symmetric function of degree $k$; see \cite[page 25]{Macdonald1995}. \begin{corollary} The family $(M^{(1)}_k)_{k \geq 2}$ forms an algebraic basis of $\Polun$.
\end{corollary} We will also be interested in the free cumulants $R^{(1)}_k(\lambda)$ of the transition measure $\mu_\lambda$. They are defined by their generating series \[K_\lambda(t) =t^{-1} + \sum_{k \ge 1} R^{(1)}_k(\lambda) \, t^{k-1},\] where $K_\lambda$ is the (formal) compositional inverse of $G_{\mu_\lambda}$. The fact that $M^{(1)}_1(\lambda)=0$ implies that $R^{(1)}_1(\lambda)=0$ for all diagrams $\lambda$. As explained by Lassalle \cite[Section 5]{Lassalle2009}, they can be expressed as \begin{equation} \label{eq:FreeCumulantsLassalle} R^{(1)}_k(\lambda) = e_k^\star(\II_\lambda -\OO_\lambda) \end{equation} for some homogeneous symmetric function $e_k^\star$ of degree $k$. The functions $e_k^\star$ form an algebraic basis of the ring of symmetric functions, hence we have the following corollary of Proposition \ref{prop:pk_basis}. \begin{corollary} $(R^{(1)}_k)_{k \geq 2}$ is an algebraic basis of the ring of polynomial functions on the set of Young diagrams. \end{corollary} \begin{remark} Free cumulants are classical objects in free probability theory \cite{Voiculescu1986, SpeicherFreeCumulants}, but considering them outside this context may seem strange at first sight. The relevance of free cumulants of the transition measure of Young diagrams first appeared in the work of Biane \cite{Biane1998} and they have played an important role in asymptotic representation theory since then. \end{remark} \subsection{Generalized Young diagrams}\label{SubsectGeneralizedYD} The second description of $\Polun$ is interesting because it shows that polynomial functions can be evaluated on more general objects than just Young diagrams. \begin{definition} A {\em generalized Young diagram} is a broken line going from a point $(0,y)$ on the $y$-axis to a point $(x,0)$ on the $x$-axis such that every piece is either a horizontal segment from left to right or a vertical segment from top to bottom.
\end{definition} Any Young diagram can be seen as such a broken line: just consider its border. The notions of inner and outer corners can be easily adapted to generalized Young diagrams, as well as the sets $\II_L$ and $\OO_L$ of their contents. It is illustrated in Figure~\ref{FigGeneralizedYD}. Note also that the relation $p_1(\II_L-\OO_L)=0$ holds for generalized Young diagrams as well. \begin{figure}[tb] \begin{center} \begin{tikzpicture} \draw[dashed,gray] grid (4.5,3.5); \draw[->,thick] (-.2,0) -- (4.7,0); \draw[->,thick] (0,-.2) -- (0,3.7); \draw (0,2) node[above left,fill=white] {\tiny $i_1=-2$}; \draw (.5,2) node[above right, fill=white] {\tiny $o_1=-1.5$}; \draw (.5,.5) node[above right, fill=white] {\tiny $i_2=0$}; \draw (3,.5) node[above right, fill=white] {\tiny $o_2=2.5$}; \draw (3,0) node[below right, fill=white] {\tiny $i_3=3$}; \draw[ultra thick] (0,2) circle (1.5pt) -- (.5,2) circle (1.5pt) -- (.5,.5) circle (1.5pt) -- (3,.5) circle (1.5pt) -- (3,0) circle (1.5pt); \end{tikzpicture} \end{center} \caption{A generalized Young diagram $L$ with the corresponding sets $\OO_L = \{o_1,o_2\}$ and $\II_L = \{i_1, i_2, i_3\}$.} \label{FigGeneralizedYD} \end{figure} Any polynomial function $F$ on the set of Young diagrams corresponds to the function \[\lambda \mapsto f(\II_\lambda - \OO_\lambda) \] for some symmetric function $f$. Recall that $f$ is uniquely determined up to addition of a multiple of $p_1$. Thus, $F$ can be canonically extended to generalized Young diagrams by setting \[F(L) = f(\II_L - \OO_L).\] As the relation $p_1(\II_L-\OO_L)=0$ holds, $F(L)$ is well-defined, {\em i.e.} it does not depend on the choice of $f$. 
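The objects of this and the previous subsections are easy to experiment with numerically. The Python sketch below (helper names ours) extracts $\II_\lambda$ and $\OO_\lambda$ from a partition, evaluates $p_k(\II_\lambda-\OO_\lambda)$, computes the moments $M^{(1)}_k$ by expanding the rational generating series, and recovers the free cumulants from the moments via the standard free moment--cumulant recursion $M_n=\sum_{s\ge 1} R_s \sum_{i_1+\dots+i_s=n-s} M_{i_1}\cdots M_{i_s}$. As sanity checks we use $p_1(\II_\lambda-\OO_\lambda)=0$ (stated above) together with the classical facts $p_2(\II_\lambda-\OO_\lambda)=2|\lambda|$ and $R^{(1)}_2(\lambda)=|\lambda|$:

```python
def corners(lam):
    """Contents of inner (addable) and outer (removable) corners of a partition."""
    rows = list(lam) + [0]                 # allow adding a new bottom row
    inner = [rows[i] + 1 - (i + 1) for i in range(len(rows))
             if i == 0 or rows[i - 1] > rows[i]]
    outer = [lam[i] - (i + 1) for i in range(len(lam))
             if lam[i] > (lam[i + 1] if i + 1 < len(lam) else 0)]
    return inner, outer

def p(k, lam):
    """Power sum p_k(I_lam - O_lam) on the difference of alphabets."""
    inner, outer = corners(lam)
    return sum(c ** k for c in inner) - sum(c ** k for c in outer)

def moments(lam, order):
    """Moments M_0..M_order of the transition measure of lam."""
    inner, outer = corners(lam)
    series = [1] + [0] * order
    for o in outer:                        # multiply the series by (1 - o*t)
        series = [series[k] - (o * series[k - 1] if k else 0)
                  for k in range(order + 1)]
    for i in inner:                        # multiply by 1/(1 - i*t) = sum_k i^k t^k
        series = [sum(series[j] * i ** (k - j) for j in range(k + 1))
                  for k in range(order + 1)]
    return series

def free_cumulants(lam, order):
    """R_1..R_order from the moments, via the free moment-cumulant recursion."""
    M = moments(lam, order)
    R = [0] * (order + 1)
    def comp_sum(s, m):                    # sum over i_1+...+i_s = m of prod M_i
        if s == 0:
            return 1 if m == 0 else 0
        return sum(M[i] * comp_sum(s - 1, m - i) for i in range(m + 1))
    for n in range(1, order + 1):
        R[n] = M[n] - sum(R[s] * comp_sum(s, n - s) for s in range(1, n))
    return R
```

For the diagram $\lambda=(4,4,2)$ of Figure~\ref{FigCorners} one recovers $\II_\lambda=\{4,0,-3\}$, $\OO_\lambda=\{2,-1\}$, $p_1=0$, $p_2=20=2|\lambda|$ and $R_2=10=|\lambda|$; for the one-box diagram, whose transition measure is $\tfrac12(\delta_{-1}+\delta_{1})$, the moments are $1,0,1,0,1,\dots$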
\begin{figure}[tb] \[ \begin{array}{c} \begin{tikzpicture}[scale=0.8] \draw[dashed,gray] grid (4.3,3.4); \draw[->,thick] (-0.2,0) -- (4.7,0); \draw[->,thick] (0,-0.2) -- (0,3.7); \draw[ultra thick] (0,2) -- (.5,2) -- (.5,.5) -- (3,.5) -- (3,0) ; \end{tikzpicture}\\ \lambda \end{array} \mapsto \begin{array}{c} \begin{tikzpicture}[scale=0.8] \draw[dashed,gray] grid (7.2,3.5); \draw[->,thick] (-0.2,0) -- (7.7,0); \draw[->,thick] (0,-0.2) -- (0,3.7); \begin{scope}[xscale=2,yscale=.5] \draw[ultra thick] (0,2) -- (.5,2) -- (.5,.5) -- (3,.5) -- (3,0) ; \end{scope} \end{tikzpicture}\\ T_{2,\frac{1}{2}} (\lambda) \end{array}\] \caption{Example of a Young diagram $\lambda$ on the left and a stretched Young diagram $T_{2,\frac{1}{2}} (\lambda)$ on the right.} \label{FigStretching} \end{figure} We will be in particular interested in the following generalized Young diagrams. Let $\lambda$ be a (generalized) Young diagram and $s$ and $t$ two positive real numbers. $T_{s,t}(\lambda)$ denotes the generalized Young diagram obtained from $\lambda$ by stretching it horizontally by a factor $s$ and vertically by a factor $t$ in French convention (see Figure~\ref{FigStretching}). These \emph{anisotropic} Young diagrams have been first considered by Kerov in \cite{KerovAnisotropicYD}, also in the context of Jack polynomials. In the case $s=t$, we denote by $D_s(\lambda) := T_{s,s}(\lambda)$ the diagram obtained from $\lambda$ by applying a homothetic transformation of ratio $s$ and we will call it \emph{dilated Young diagram}. In the case $s = t^{-1} = \sqrt{\alpha}$ for some $\alpha \in \RR_+$ we denote by $A_\alpha(\lambda) := T_{\sqrt{\alpha},\sqrt{\alpha}^{-1}} (\lambda)$ the diagram obtained from $\lambda$ by stretching it horizontally by a factor $\sqrt{\alpha}$ and vertically by a factor $\sqrt{\alpha}^{-1}$. We call it \emph{$\alpha$-anisotropic Young diagram}. 
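The effect of dilation on corner contents is easy to verify numerically: for an integer $s$, $D_s(\lambda)$ is the Young diagram in which each part of $\lambda$ is repeated $s$ times and multiplied by $s$, its corner contents are those of $\lambda$ multiplied by $s$, and consequently $p_k$ scales by $s^k$. A quick self-contained Python check (helper names ours):

```python
def corners(lam):
    """Contents of addable (inner) / removable (outer) corners of a partition."""
    rows = list(lam) + [0]
    inner = [rows[i] - i for i in range(len(rows))
             if i == 0 or rows[i - 1] > rows[i]]
    outer = [lam[i] - (i + 1) for i in range(len(lam))
             if lam[i] > (lam[i + 1] if i + 1 < len(lam) else 0)]
    return inner, outer

def p(k, lam):
    """Power sum p_k(I_lam - O_lam)."""
    inner, outer = corners(lam)
    return sum(c ** k for c in inner) - sum(c ** k for c in outer)

def dilate(lam, s):
    """D_s(lam) for an integer s: every part repeated s times, multiplied by s."""
    return [s * part for part in lam for _ in range(s)]

lam, s = [4, 4, 2], 3
i0, o0 = corners(lam)
i1, o1 = corners(dilate(lam, s))
assert sorted(i1) == sorted(s * c for c in i0)   # contents scale by s
assert sorted(o1) == sorted(s * c for c in o0)
for k in range(1, 6):
    assert p(k, dilate(lam, s)) == s ** k * p(k, lam)
```

This is a special case ($F = p_k$, a homogeneous symmetric function of degree $k$) of the general homogeneity property of polynomial functions under dilation.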
It is easy to check that the sets $\II_{D_s(\lambda)}$ and $\OO_{D_s(\lambda)}$ are obtained from $\II_\lambda$ and $\OO_\lambda$ by multiplying all values by $s$. In particular, if $F$ is a polynomial function such that the corresponding symmetric function $f$ is homogeneous of degree $d$, then \[ \lambda \mapsto F(D_s(\lambda)) = f(\II_{D_s(\lambda)} - \OO_{D_s(\lambda)}) = s^d f(\II_\lambda - \OO_\lambda) = s^d F(\lambda) \] is also a polynomial function. Finally, for any fixed $s>0$, $F$ is a polynomial function if and only if $\lambda \mapsto F(D_s(\lambda))$ is a polynomial function. \subsection{Continuous Young diagrams} \label{SubsectContinuousYD} A generalized Young diagram can also be seen as a function on the real line. Indeed, if one rotates the zigzag line counterclockwise by $45\degree$ and scales it by a factor $\sqrt{2}$ (so that the new $x$-coordinate corresponds to contents), then it can be seen as the graph of a piecewise affine continuous function with slope $\pm 1$. We denote this function by $\omega(\lambda)$. This definition is illustrated in Figure \ref{FigOmega}. It is very useful for stating convergence results for Young diagrams. Note that the limiting function $\Omega$ corresponds neither to a real Young diagram nor to a generalized Young diagram. Therefore, it is natural to work with even more general objects than generalized Young diagrams, i.~e.~\emph{continuous Young diagrams}. \begin{definition} We say that a function $\omega: \RR \rightarrow \RR$ is a \emph{continuous Young diagram} if: \begin{itemize} \item $\omega$ is a Lipschitz continuous function with constant $1$, i.~e.~for any $x_1, x_2 \in \RR$, $|\omega(x_1) - \omega(x_2)| \leq |x_1 - x_2|$; \item $\omega(x)-|x|$ is compactly supported. \end{itemize} \end{definition} There is a natural extension of the definitions of transition measure and evaluation of polynomial functions to continuous Young diagrams, see \cite[Section 1.2]{Biane1998}.
However, the general setting will not be relevant in this paper: we will only need to know that the free cumulants of the transition measure of $\Omega$ are \[R_k(\Omega) = \begin{cases} 1 &\text{ if }k=2 ;\\ 0 &\text{ if }k>2. \end{cases}\] This was established by Biane \cite[Section 3.1]{Biane2001}.

\subsection{$\alpha$-polynomial functions}

\begin{definition} We say that $F$ is an \emph{$\alpha$-polynomial function} on the set of (continuous) Young diagrams if \[\lambda \mapsto F ( T_{\alpha^{-1},1}(\lambda) ) \] is a polynomial function. The set of $\alpha$-polynomial functions is an algebra which will be denoted by $\Pola$. \end{definition}

Using Definition \ref{def:polynomial}, this means that the polynomial $F(\alpha^{-1} \lambda_1, \cdots, \alpha^{-1} \lambda_h)$ is symmetric in $\lambda_1 -1$, \ldots, $\lambda_h - h$. Equivalently (by a change of variables), $F$ is symmetric in $\alpha \lambda_1 -1$, \ldots, $\alpha \lambda_h - h$ or in \[\lambda_1 -\frac{1}{\alpha}, \dots, \lambda_h - \frac{h}{\alpha}.\] The last characterization is the definition of what is usually called an $\alpha$-shifted symmetric function \cite{OkounkovOlshanskiShiftedJack,Lassalle2008a}. It would be equivalent to require in the definition of $\alpha$-polynomial functions that \[\lambda \mapsto F (A_{\alpha^{-1}}(\lambda) ) \] is a polynomial function, where $A_{\alpha^{-1}}(\lambda) = T_{\sqrt{\alpha}^{-1},\sqrt{\alpha}}(\lambda)$ is an $\alpha$-anisotropic Young diagram. Indeed, $T_{\sqrt{\alpha}^{-1},\sqrt{\alpha}}(\lambda)$ is a dilation of $T_{\alpha^{-1},1}(\lambda)$ and the property of {\em being polynomial} is invariant under dilation of the argument. Therefore, the $\alpha$-anisotropic moments and free cumulants defined by \begin{align*} M_k^{(\alpha)}(\lambda)&:= M^{(1)}_k \left( A_{\alpha}(\lambda) \right),\\ R_k^{(\alpha)}(\lambda) &:= R^{(1)}_k \left( A_{\alpha}(\lambda) \right) \end{align*} are $\alpha$-polynomial.
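To make the shifted-symmetry characterization concrete, here is a toy example of an $\alpha$-shifted symmetric function (hypothetical, chosen only for this illustration): $f(\lambda)=\sum_{i\ge 1}\big[(\lambda_i - i/\alpha)^2 - (i/\alpha)^2\big]$. Subtracting the constant $(i/\alpha)^2$ makes the sum stable under appending zero parts, and the value depends only on the multiset of shifted variables $\lambda_i - i/\alpha$:

```python
from fractions import Fraction

def f(parts, alpha, pad=0):
    """Toy alpha-shifted symmetric function:
    sum_i [ (lambda_i - i/alpha)^2 - (i/alpha)^2 ],  i = 1, 2, ...
    `pad` appends extra zero parts; it must not change the value."""
    lam = [Fraction(p) for p in parts] + [Fraction(0)] * pad
    return sum((p - Fraction(i) / alpha) ** 2 - (Fraction(i) / alpha) ** 2
               for i, p in enumerate(lam, start=1))
```

For $\lambda=(3,1)$ and $\alpha=2$ the shifted variables are $(5/2, 0)$; feeding in the (non-partition) sequence $(1/2, 7/2)$, whose shifted variables are the permuted multiset $(0, 5/2)$, gives the same value, illustrating shifted symmetry.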
Moreover, the families $(M_k^{(\alpha)})_{k\geq 2}$ and $(R_k^{(\alpha)})_{k\geq 2}$ are algebraic bases of the algebra $\Pola$ of $\alpha$-polynomial functions. The following property is due to Lassalle (under a slightly different form). \begin{proposition}\label{prop:Jack-characters-basis} When $\mu$ runs over all partitions, Jack characters $\left(\Ch^{(\alpha)}_\mu\right)_\mu$ form a linear basis of the algebra of $\alpha$-polynomial functions. \end{proposition} \begin{proof} In \cite[Section 3]{Lassalle2008a}, Lassalle builds a linear isomorphism $\lambda \mapsto \lambda^\#$ between symmetric functions and $\alpha$-shifted symmetric functions. Then he shows \cite[Proposition 2]{Lassalle2008a} that, for any two partitions $\lambda$ and $\mu$ with $|\lambda| \ge |\mu|$, \[\alpha^{(|\mu|-\ell(\mu))/2} p_\mu^\#(\lambda) = \Ch^{(\alpha)}_\mu(\lambda).\] It is straightforward to check that, for $|\lambda| < |\mu|$, both sides of the equality above are equal to $0$. Hence, as functions on all Young diagrams, $\Ch^{(\alpha)}_\mu$ is equal, up to a scalar multiple, to $p_\mu^\#$. Since $(p_\mu)$ is a basis of the ring of symmetric functions and $f\mapsto f^\#$ is a linear isomorphism, this concludes the proof. \end{proof} In particular, Jack characters are $\alpha$-polynomial functions and can be expressed in terms of the algebraic bases above. \begin{proposition} Let $\mu$ be a partition and $\alpha >0$ a fixed real number. There exist unique polynomials $L_\mu^{(\alpha)}$ and $K_\mu^{(\alpha)}$ such that, for every $\lambda$, \begin{align*} \Ch^{(\alpha)}_\mu(\lambda) &= L_\mu^{(\alpha)}\left(M_2^{(\alpha)}(\lambda), M_3^{(\alpha)}(\lambda),\cdots \right); \\ \Ch^{(\alpha)}_\mu(\lambda) &= K_\mu^{(\alpha)}\left(R_2^{(\alpha)}(\lambda), R_3^{(\alpha)}(\lambda),\cdots \right).
\end{align*} \end{proposition} The polynomials $K_\mu^{(\alpha)}$ have been introduced by Kerov in the case $\alpha=1$ \cite{Kerov2000talk,Biane2003} and by Lassalle in the general case \cite{Lassalle2009} and they are called \emph{Kerov polynomials}. Once again, our normalizations differ from Lassalle's. We will explain this choice later. From now on, when it does not create any confusion, we suppress the superscript $(\alpha)$. We present a few examples of the polynomials $L_{\mu}$ and $K_{\mu}$ (in particular the case of one-part partitions $\mu=(r)$ with $r \leq 5$). This data has been computed using the data given in \cite[page 2230]{Lassalle2009}. Recall that we set $\gamma:=\frac{1-\alpha}{\sqrt{\alpha}}$. \begin{align*} L_{(1)} &= M_2, \\ L_{(2)} &= M_3 + \gamma M_2, \\ L_{(3)} &= M_4 - 2M_2^2 + 3\gamma M_3 + (1 + 2\gamma^2)M_2, \\ L_{(4)} &= M_5 - 5M_3M_2 + 6\gamma M_4 - 11\gamma M_2^2 + (5 + 11\gamma^2)M_3 + (7\gamma + 6\gamma^3)M_2, \\ L_{(5)} &= M_6 - 6M_4M_2 -3M_3^2 + 7M_2^3 + 10\gamma M_5 - 45\gamma M_3M_2 \\ &\qquad + (15+35\gamma^2)M_4 - (25+60\gamma^2)M_2^2 \\ &\qquad+ (55\gamma + 50\gamma^3)M_3 + (8 + 46\gamma^2 + 24\gamma^4)M_2,\\ L_{(2,2)} &= M_3^2 + 2\gamma M_3 M_2 - 4 M_4 + (\gamma^2+6) M_2^2 - 10 \gamma M_3 -(6\gamma^2+2) M_2.\\ \intertext{Similarly,} K_{(1)} &= R_2, \\ K_{(2)} &= R_3 + \gamma R_2, \\ K_{(3)} &= R_4 + 3\gamma R_3 + (1 + 2\gamma^2)R_2, \\ K_{(4)} &= R_5 + 6\gamma R_4 + \gamma R_2^2 + (5 + 11\gamma^2)R_3 + (7\gamma + 6\gamma^3)R_2, \\ K_{(5)} &= R_6 + 10\gamma R_5 + 5\gamma R_3R_2 + (15 + 35 \gamma^2) R_4 + (5+10\gamma^2) R_2^2 \\ &\qquad+ (55\gamma + 50\gamma^3)R_3 + (8 + 46\gamma^2 + 24\gamma^4)R_2,\\ K_{(2,2)} &= R_3^2 + 2\gamma R_3 R_2 - 4 R_4 + (\gamma^2-2) R_2^2 - 10 \gamma R_3 -(6\gamma^2+2) R_2. \end{align*} A few striking facts appear in these examples. First, all coefficients are polynomials in the auxiliary parameter $\gamma$: we prove this fact in the next section with explicit bounds on the degrees.
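The two tables are mutually consistent: since the transition measure is centered ($M_1=0$), the free cumulants can be computed from the moments by the standard non-crossing (first-block) recursion, and substituting turns $K_\mu$ into $L_\mu$. The following sketch (Python; an independent numerical check, not part of the text's argument) verifies this for $\mu=(4)$. The sample weights below happen to be the transition measure of $\lambda=(3,1)$ at $\alpha=1$, computed separately; any data with $\sum_i c_i=1$ and $\sum_i c_iz_i=0$ would do:

```python
from fractions import Fraction as F

# Sample centered probability measure: weights c_i at points z_i.
c = [F(2, 5), F(1, 3), F(4, 15)]
z = [F(-2), F(0), F(3)]

M = [sum(ci * zi ** k for ci, zi in zip(c, z)) for k in range(7)]

def conv(k, m):
    """Sum of M_{i_1} * ... * M_{i_k} over i_1 + ... + i_k = m, i_j >= 0."""
    if k == 0:
        return F(int(m == 0))
    return sum(M[i] * conv(k - 1, m - i) for i in range(m + 1))

# Free cumulants via the non-crossing first-block recursion:
#   M_n = sum_{k=1}^{n} R_k * conv(k, n - k)
R = [F(0)] * 7
for n in range(1, 7):
    R[n] = M[n] - sum(R[k] * conv(k, n - k) for k in range(1, n))

def L4(g):   # L_{(4)} evaluated on the moments, with gamma = g
    return (M[5] - 5 * M[3] * M[2] + 6 * g * M[4] - 11 * g * M[2] ** 2
            + (5 + 11 * g ** 2) * M[3] + (7 * g + 6 * g ** 3) * M[2])

def K4(g):   # K_{(4)} evaluated on the free cumulants
    return (R[5] + 6 * g * R[4] + g * R[2] ** 2
            + (5 + 11 * g ** 2) * R[3] + (7 * g + 6 * g ** 3) * R[2])
```

For $\mu=(3)$ one checks in the same way that the relation $R_4 = M_4 - 2M_2^2$ (valid when $M_1=0$) turns $K_{(3)}$ into $L_{(3)}$.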
Besides, for one-part partitions, the polynomials $K_{(r)}$ have non-negative coefficients. We are unfortunately unable to prove this statement, which is a slightly more precise version of \cite[Conjecture 1.2]{Lassalle2009}. A similar conjecture exists for partitions with several parts, see also \cite[Conjecture 1.2]{Lassalle2009}.

\section{Polynomiality in Kerov's expansion} \label{SectPolynomial}

\subsection{Notations}

As in the previous sections, most of our objects are indexed by integer partitions. Therefore it will be useful to have short notations for small modifications (adding or removing a box or a part) of partitions. We denote by $\mu \cup (r)$ ($\mu \setminus (r)$, respectively) the partition obtained from $\mu$ by adding (deleting, respectively) one part equal to $r$. We denote by $\mu_{\downarrow r}=\mu \setminus (r) \cup (r-1)$ the partition obtained from $\mu$ by removing one box in a row of size $r$. The reader might wonder what $\mu \setminus (r)$ and $\mu_{\downarrow r}$ mean if $\mu$ does not have a part equal to $r$: we will not use these notations in this context. Finally, if $i$ is an inner corner of $\lambda$, we denote by $\lambda^{(i)}$ the diagram obtained from $\lambda$ by adding a box at place $i$.

\subsection{How to compute Jack character polynomials?} \label{SubsectAlgo}

Unfortunately, the argument given above to prove the existence of $L_\mu$ and $K_\mu$ is not constructive. M.~Lassalle \cite{Lassalle2009} gave an algorithm for computing $K_\mu$ by induction over $\mu$. In this section we present a slightly simpler version of this algorithm which allows one to compute $L_\mu$ instead of $K_\mu$. One of the basic ingredients is the following formula, which corresponds to \cite[Proposition 8.3]{Lassalle2009}. \begin{proposition}\label{PropMkLo} Let $k \geq 2$, $\lambda$ be a Young diagram and $i=(x,y)$ an inner corner of $\lambda$.
\begin{multline*} M_k(\lambda^{(i)})-M_k(\lambda) = \sum_{\substack{r\geq 1, s,t \geq 0, \\ 2r+s+t \leq k}} z_i^{k-2r-s-t} \\ \binom{k-t-1}{2r+s-1}\binom{r+s-1}{s} \left( -\gamma \right)^s M_t(\lambda), \end{multline*} where $z_i=\sqrt{\alpha} x - \sqrt{\alpha}^{-1} y$ is the content of the corner corresponding to $i$ in the $\alpha$-anisotropic diagram $A_\alpha(\lambda)$. \end{proposition} \begin{proof} As mentioned above, this is exactly \cite[Proposition 8.3]{Lassalle2009}. To help the reader, we compare our notations with Lassalle's (we use boldface to refer to his notations): \begin{align*} M_k(\lambda^{(i)}) &= \alpha^{k/2} \bm{M_k(\lambda^{(i)})}; \\ M_t(\lambda) &= \alpha^{t/2} \bm{M_t(\lambda)}; \\ z_i &= \sqrt{\alpha} \cdot \bm{x_i};\\ \gamma &= \sqrt{\alpha}^{-1} - \sqrt{\alpha}.\qedhere \end{align*} \end{proof} For any partition $\rho$ we define $M_\rho(\lambda):=\prod_i M_{\rho_i}(\lambda)$ by multiplicativity. The above proposition immediately implies the following corollary: \begin{corollary}\label{CorolMlo} For any partition $\rho$, any diagram $\lambda$ and any inner corner $i$ of $\lambda$, \[M_{\rho}(\lambda^{(i)}) = M_{\rho}(\lambda) + \sum_{\substack{ g,h \geq 0, \\ \pi \vdash h}} b_{g,\pi}^\rho(\gamma) \ z_i^g\ M_\pi(\lambda),\] where $b_{g,\pi}^\rho(\gamma)$ is a polynomial in $\gamma$. \end{corollary} \begin{proof} The case when $\rho$ consists of only one part is a direct consequence of Proposition \ref{PropMkLo} (one even has an explicit expression for $b_{g,\pi}^\rho(\gamma)$ in this case). The general case follows by multiplication. \end{proof} This corollary is an analogue of Equation (8.1) in \cite{Lassalle2009}. Let $\mu$ be a partition. By definition of $L_\mu$, there exist some numbers $a^\mu_\rho$ (depending on $\alpha$) such that, for any Young diagram $\lambda$, \[ \Ch_\mu(\lambda) = \sum_\rho a^\mu_\rho\ M_\rho(\lambda).
\] Using Corollary~\ref{CorolMlo} we can compute \begin{multline}\label{EqIng1Lassalle} \Ch_\mu(\lambda^{(i)}) = \sum_\rho a^\mu_\rho\ M_\rho(\lambda^{(i)}) \\ = \Ch_\mu(\lambda) + \sum_\rho a^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} b_{g,\pi}^\rho(\gamma)\ z_i^g\ M_\pi(\lambda) \right). \end{multline} The second ingredient of Lassalle's algorithm is a linear identity between the values of the Jack character evaluated on different diagrams. We denote by $c_i(\lambda)$ the probability of the corner $i$ in the transition measure $\mu_{A_\alpha(\lambda)}$, so that \begin{equation}\label{eq:Mk_co} M_k(\lambda) = \sum_i c_i(\lambda) z_i^k. \end{equation} In particular, \begin{align} \sum_i c_i(\lambda)&=1, \label{eq:Sum_co}\\ \sum_i c_i(\lambda) z_i&=0.\label{Sum_cozo} \end{align} Then we have the following proposition, which corresponds to \cite[Equation (3.6)]{Lassalle2009}. \begin{proposition}\label{PropLinearRelation} For any (continuous) Young diagram $\lambda$ and any partition $\mu$ \begin{align*} \sum_{i \in \II_\lambda} c_i(\lambda) \Ch_\mu(\lambda^{(i)}) &= m_1(\mu) \Ch_{\mu \backslash 1}(\lambda) + \Ch_\mu(\lambda), \\ \sum_{i \in \II_\lambda} c_i(\lambda) z_i \Ch_\mu(\lambda^{(i)}) &= \sum_{r \geq 2} r m_r(\mu) \Ch_{\mu_{\downarrow r}} (\lambda). \end{align*} \end{proposition} \begin{proof} It is an exercise to adapt Equations (3.6) of \cite{Lassalle2009} to our notations.
\end{proof} Using Equations~\eqref{EqIng1Lassalle}, \eqref{eq:Mk_co}, \eqref{eq:Sum_co} and \eqref{Sum_cozo} together with Proposition~\ref{PropLinearRelation}, we obtain the following equalities between functions on the set of (continuous) Young diagrams: for any partition $\mu$, \begin{align} \sum_\rho a^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} b_{g,\pi}^\rho(\gamma) \, M_\pi \, M_g \right) &= m_1(\mu) \Ch_{\mu \backslash 1}, \tag{A}\label{EqRec1}\\ \sum_\rho a^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} b_{g,\pi}^\rho(\gamma) \, M_{\pi} \, M_{g+1} \right) &= \sum_{r \geq 2} r \cdot m_r(\mu) \Ch_{\mu_{\downarrow r}}. \tag{B}\label{EqRec2} \end{align} Fix some partition $\tau$. We can identify the coefficient of a given monomial $M_\tau$ in the above equations. This gives us two linear equations which will be denoted by $(A_\tau)$ and $(B_\tau)$: \begin{align} \sum_\rho a^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \ \pi \vdash h \\ \pi \cup (g)=\tau}} b_{g,\pi}^\rho(\gamma) \right) &= m_1(\mu) \, a^{\mu \backslash 1}_\tau, \tag{$A_\tau$}\label{EqRecTau1}\\ \sum_\rho a^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \ \pi \vdash h \\ \pi \cup (g+1)=\tau}} b_{g,\pi}^\rho(\gamma) \right) &= \sum_{r \geq 2} r \cdot m_r(\mu) \, a^{\mu_{\downarrow r}}_\tau. \tag{$B_\tau$}\label{EqRecTau2} \end{align} Now, assume that, for some partition $\mu$, we can compute $L_\nu$ for all partitions $\nu$ of size smaller than $|\mu|$. Then the equations \eqref{EqRecTau1} and \eqref{EqRecTau2} can be interpreted as a linear system, where the variables are the coefficients $a^\mu_\rho$. This is a {\em finite} system of linear equations (indeed, $a^\mu_\rho=0$ as soon as $|\rho| > |\mu|+\ell(\mu)$ \cite[Proposition 9.2 (ii)]{Lassalle2009}).
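The weights $c_i(\lambda)$ can be computed explicitly from the interlacing contents of the anisotropic diagram via Kerov's classical product formula, which is assumed in the sketch below without proof (it is not restated in the text). A short computational check (Python), parametrized by $\sqrt{\alpha}$ so that all contents stay rational:

```python
from fractions import Fraction as F

def aniso_corners(lam, sqrt_a):
    """Contents of the addable corners (x's) and removable corners (y's)
    of the anisotropic diagram A_alpha(lambda), with alpha = sqrt_a**2:
    a corner at the lattice point (x, y) has content sqrt_a*x - y/sqrt_a."""
    lam = list(lam)
    ext = lam + [0]
    xs = [sqrt_a * row - F(i) / sqrt_a
          for i, row in enumerate(ext) if i == 0 or ext[i - 1] > row]
    ys = [sqrt_a * row - F(i + 1) / sqrt_a
          for i, row in enumerate(lam)
          if row > (lam[i + 1] if i + 1 < len(lam) else 0)]
    return xs, ys

def transition_measure(lam, sqrt_a):
    """Weights c_i at the points z_i = x_i, by Kerov's product formula
    (assumed):  c_i = prod_j (x_i - y_j) / prod_{j != i} (x_i - x_j)."""
    xs, ys = aniso_corners(lam, sqrt_a)
    cs = []
    for i, x in enumerate(xs):
        num, den = F(1), F(1)
        for y in ys:
            num *= x - y
        for j, x2 in enumerate(xs):
            if j != i:
                den *= x - x2
        cs.append(num / den)
    return xs, cs
```

On examples, one recovers the normalizations $\sum_i c_i = 1$ and $\sum_i c_i z_i = 0$ quoted above, and numerically $M_2(\lambda) = |\lambda|$, consistently with $L_{(1)} = M_2$.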
As explained by M.~Lassalle, the system obtained that way has a unique solution (we shall see another explanation of that in the next subsection) and thus one can compute the coefficients $a^\mu_\rho$ by induction over $|\mu|$.

\subsection{A triangular subsystem}

In the previous subsection we explained how to determine the coefficients $a^\mu_\rho$ (where $\rho$ runs over partitions without parts equal to $1$) of $L_\mu$ as the solution of an overdetermined linear system of equations. In this section, we extract from this system a triangular subsystem. We will need an order on all partitions: let us define $<_1$ as follows: \[ \rho <_1 \rho' \iff \begin{cases} & |\rho| <|\rho'|; \\ \text{or}&|\rho| = |\rho'| \text{ and } \ell(\rho) > \ell(\rho'); \\ \text{or}&|\rho|= |\rho'|,\ \ell(\rho) = \ell(\rho') \text{ and } \min(\rho) > \min(\rho'). \end{cases}\] We say that an equation involves a variable if its coefficient is non-zero. \begin{lemma} Let $\rho$ be a partition and $q=\min(\rho)$ its smallest part. \begin{itemize} \item If $q=2$, set $\tau=\rho\setminus (2)$. Then Equation $(A_\tau)$ involves the variable $a^\mu_\rho$ and involves some of the variables $a^\mu_{\rho'}$ for $\rho' >_1 \rho$ (and no other variables $a^\mu_{\rho'}$). \item If $q>2$, set $\tau= \rho_{\downarrow q}$. Then Equation $(B_\tau)$ involves the variable $a^\mu_\rho$ and some of the variables $a^\mu_{\rho'}$ for $\rho' >_1 \rho$ (and no other variables $a^\mu_{\rho'}$). \end{itemize} \label{LemTriangle} \end{lemma} \begin{proof} We can refine Corollary~\ref{CorolMlo} as follows: for any partition $\rho$, any diagram $\lambda$ and any inner corner $i$ of $\lambda$, \begin{multline} M_\rho(\lambda^{(i)}) = M_\rho(\lambda) + \sum_{j \leq \ell(\rho)} M_{\rho \setminus \rho_j} \left( \sum_{\substack{g,t \geq 0, \\ g=\rho_j-2-t}} (\rho_j-t-1) M_t(\lambda) z_i^g \right)\\ + \sum_{\substack{\pi,g \\ |\pi|+g < |\rho| - 2}} b_{g,\pi}^\rho(\gamma) M_\pi(\lambda) z_i^g.
\label{eq:DsLemTriang} \end{multline} Indeed, it is true for $\rho=(k)$ and follows directly for any $\rho$ by multiplication. The right-hand side is a linear combination of $M_\pi z_i^g$ with \[ |\pi|+g \leq |\rho| - 2. \] Moreover, equality occurs only if $\pi \cup (g)$ is obtained from $\rho$ by choosing a part, removing $2$ from this part and splitting it into two (possibly empty) parts. Let us consider the first statement of the lemma. Fix a partition $\rho$ with smallest part equal to $2$, that is $\rho=\tau \cup (2)$ for some partition $\tau$. Let us determine which variables $a^\mu_{\rho'}$ appear in the left-hand side of Equation $(A_\tau)$. In other words, we want to determine for which $\rho'$ the difference $M_{\rho'}(\lambda^{(i)})-M_{\rho'}(\lambda)$ contains some term $M_\pi z_i^g$ for which $\tau=\pi \cup (g)$. As explained above, a necessary condition is the inequality $|\tau|=|\pi|+g \leq |\rho'| - 2$, \emph{i.e.} $|\rho'| \geq |\tau|+2$. Moreover, if $|\rho'| = |\tau| +2$, then $\tau$ must be obtained from $\rho'$ by removing $2$ from some part and splitting it into two. In particular, $\rho'$ cannot be longer than $\tau$, unless both new parts are empty. This happens only if the split part of $\rho'$ was $2$, that is if $\rho'= \tau \cup (2)$. We have proved that $(A_\tau)$ can involve $a^\mu_{\rho'}$ only if $\rho'=\rho$ or $\rho' >_1 \rho$ (either $\rho'$ has a bigger size than $\rho$, or it has a smaller length). It is easy to check that the coefficient of $a^\mu_\rho$ is equal to $m_2(\rho)$ (and thus, non-zero), which finishes the proof of the first point. The proof of the second point is quite similar. Fix a partition $\rho$, denote by $q$ its smallest part (assume $q>2$) and set $\tau=\rho_{\downarrow q}$. The same argument as above tells us that the variable $a^\mu_{\rho'}$ can appear in the equation $(B_\tau)$ only if $|\tau|=|\pi|+g+1 \leq |\rho'| - 1$, \emph{i.e.} $|\rho'| \geq |\tau|+1$.
Moreover, if there is equality, $\tau$ must be obtained from $\rho'$ by removing $1$ from some part and splitting it into two. One of the two new parts is always non-empty (as $\rho'$ has no parts equal to $1$), thus $\rho'$ is at most as long as $\tau$. If they have the same length, it means that $\tau$ is obtained from $\rho'$ by shortening a part. If this part is equal to $q$, then $\rho'=\rho$. Otherwise $\rho'$ contains a part $q-1$ and thus $\rho' >_1 \rho$ (they have the same size and the same length, and $\min(\rho') \leq q-1 < \min(\rho)$). Finally, we have proved that $(B_\tau)$ can involve $a^\mu_{\rho'}$ only if $\rho'=\rho$ or $\rho' >_1 \rho$. Once again, the coefficient of $a^\mu_\rho$ in $(B_\tau)$ is easy to compute: it is equal to $(q-1) m_q(\rho)$ and, hence, non-zero. \end{proof} The first interesting consequence is the following. \begin{corollary} The coefficient $a_\rho^\mu$ is a polynomial in $\gamma$ with rational coefficients. The same is true for the coefficients of Kerov's polynomials $K_\mu$. \end{corollary} \begin{proof} We proceed by induction over $|\mu|$. The quantities $a_\rho^\mu$ are the solution of a triangular linear system, whose right-hand side is a vector of coefficients $a_\tau^{\mu'}$ with $|\mu'|<|\mu|$. By the induction hypothesis, the right-hand side belongs to $\QQ[\gamma]$. The coefficients $b_{g,\pi}^\rho(\gamma)$ of the system also belong to $\QQ[\gamma]$. Moreover, the diagonal coefficients of the system (given in the proof above) are invertible in $\QQ[\gamma]$, hence the solution is also in $\QQ[\gamma]$. For the second statement, it is enough to recall that each $M_k$ is a polynomial in the $R_k$'s with integer coefficients. \end{proof} \begin{remark} Lemma \ref{LemTriangle} does not hold for Lassalle's system of equations \cite[Equations (9.1) and (9.2)]{Lassalle2009}, which computes $K_\mu$ recursively.
\end{remark}

\subsection{A first bound on the degree}\label{SubsectBound1}

Recall that $(M_k)_{k \geq 2}$ is an algebraic basis of the ring $\Pola$ of $\alpha$-polynomial functions on Young diagrams. Hence, we can define a gradation on $\Pola$ by choosing arbitrarily the degree of each of the generators $M_k$. In this section, we make the following natural choice: \[ \deg_1(M_k)=k \qquad \text{for } k\geq 2. \] Our goal is to obtain a bound on the degree of the polynomial $a^\mu_\rho\in\QQ[\gamma]$. We begin with the following lemma concerning the polynomials $b_{g,\pi}^\rho(\gamma)$. {\em Notational convention.} To emphasize the difference with gradations on $\Pola$, we denote throughout the paper degrees of polynomials in $\gamma$ by $\deg_\gamma$. \begin{lemma} \label{lem:firstbound} Let $\rho$ and $\pi$ be two partitions and $g\geq 0$ be an integer. One has \[\deg_\gamma(b_{g,\pi}^\rho(\gamma)) \leq \deg_1(M_\rho) - \deg_1(M_{\pi \cup (g)}) - 2. \] Moreover, if the right-hand side is an even (odd, respectively) number, then $b_{g,\pi}^\rho(\gamma)$ is an even (odd, respectively) polynomial. \label{LemDeg1B} \end{lemma} \begin{proof} By Proposition~\ref{PropMkLo}, $M_k(\lambda^{(i)})$ can be written as a linear combination of terms of the form $b(\gamma)\ M_\pi\ z_i^g$, where $b$ is some polynomial. We define the pre-degree (with respect to $\deg_1$) of such a term to be the quantity $\deg_\gamma(b) + |\pi| + g$. This pre-degree is multiplicative. Then, \[M_k(\lo) = M_k(\lambda) + \text{ terms of pre-degree at most }k-2.\] By multiplying expressions of this kind we obtain that \[M_\rho(\lo) = M_\rho(\lambda) + \text{ terms of pre-degree at most }|\rho|-2,\] which corresponds to our bound on the degree. The parity also follows immediately from the one-part case by multiplication. \end{proof} This yields the following result.
\begin{proposition}\label{PropBound1} The coefficient $a^\mu_\rho$ of $M_\rho$ in the Jack character polynomial $L_\mu$ is a polynomial in $\gamma$ of degree at most $|\mu|+\ell(\mu) - |\rho|$. Moreover, it has the same parity as the integer $|\mu|+\ell(\mu) - |\rho|$. The same is true for $K_\mu$. \end{proposition} \begin{proof} We proceed by induction over $(\mu,\rho)$. The base case $\mu=(1)$ is trivial as $L_{(1)}=M_2$. Fix two partitions $\mu$ and $\rho$. We assume that our result holds for any pair $(\mu',\rho')$ with $|\mu'| < |\mu|$, or with $|\mu'|=|\mu|$ and $\rho' >_1 \rho$. It may seem strange to assume that the result holds for $\rho' >_1 \rho$. We are indeed doing some kind of {\em descending induction}. This is possible because, for a given $\mu$, the number of partitions $\rho$ we shall consider is finite: indeed, $a^\mu_\rho=0$ as soon as $|\rho| > |\mu|+\ell(\mu)$ \cite[Proposition 9.2 (ii)]{Lassalle2009}. The same remark holds for most proofs in this section. Let us first consider the case when $\rho=\tau \cup (2)$ contains a part equal to 2. By Lemma~\ref{LemTriangle}, Equation $(A_\tau)$ can be written as: \[ m_2(\rho) \cdot a^\mu_\rho = m_1(\mu) a^{\mu \backslash 1}_{\tau} - \sum_{\substack{\pi,g, \\ \pi \cup (g) = \tau}} \sum_{\rho' >_1 \rho} b_{g,\pi}^{\rho'}(\gamma) a^\mu_{\rho'}.\] The first term on the right-hand side is by convention equal to 0 if $\mu$ does not contain any part equal to 1. If $\mu$ contains a part equal to $1$, as $|\mu \setminus 1|$ is smaller than $|\mu|$, by the induction hypothesis $a^{\mu \backslash 1}_{\tau}$ is a polynomial of degree at most \[|\mu \setminus 1| + \ell(\mu \setminus 1) - |\tau| = |\mu|-1 + \ell(\mu)-1-(|\rho|-2)=|\mu|+\ell(\mu) - |\rho|.\] As $\rho' >_1 \rho$, we can also apply the induction hypothesis to each summand of the second term: $a^\mu_{\rho'}$ is a polynomial of degree at most $|\mu|+\ell(\mu)-|\rho'|$.
But using Lemma~\ref{LemDeg1B}, $b_{g,\pi}^{\rho'}(\gamma)$ has degree at most $|\rho'|-|\pi \cup (g)|-2$. Hence the degree of the product is bounded by \[|\mu|+\ell(\mu) - (|\pi \cup (g)|+2) = |\mu|+\ell(\mu) - |\rho|.\] The last equality comes from the fact that $\pi \cup (g)=\tau=\rho \setminus (2)$. The proof in the case when the smallest part $q$ of $\rho$ is greater than $2$ is similar. We use Equation $(B_\tau)$ for $\tau=\rho_{\downarrow q}$, which takes the form: \[ (q-1) m_q(\rho) \cdot a^\mu_\rho = \sum_{r\geq 2} r \cdot m_r(\mu) a^{\mu_{\downarrow r}}_{\tau} - \sum_{\substack{\pi,g, \\ \pi \cup (g+1) = \tau}} \sum_{\rho' >_1 \rho} b_{g,\pi}^{\rho'}(\gamma) a^\mu_{\rho'}.\] Note that $|\mu_{\downarrow r}|<|\mu|$, therefore by the induction hypothesis $a^{\mu_{\downarrow r}}_{\tau}$ is a polynomial in $\gamma$ of degree at most \[|\mu_{\downarrow r}| + \ell(\mu_{\downarrow r}) -|\tau|=|\mu|-1+\ell(\mu)-(|\rho|-1) = |\mu|+\ell(\mu) - |\rho|.\] For the second summand, the argument is the same as before, except that here the equality $|\pi \cup (g)|+2=|\rho|$ comes from the fact that $|\tau|=|\rho|-1$ and $|\tau|=|\pi \cup (g+1)|= |\pi \cup (g)|+1$. The parity is obtained in the same way. \end{proof} \begin{corollary} \label{corol:dominant_deg1} For any partition $\mu$ one has: \[ \deg_1(\Ch_\mu) = |\mu| + \ell(\mu).\] Moreover \[ \Ch_\mu = \prod_{i}R_{\mu_i+1} + \text{lower degree terms with respect to $\deg_1$}. \] \end{corollary} \begin{proof} By Proposition~\ref{PropBound1}, $\Ch_\mu$ has degree at most $|\mu|+\ell(\mu)$ (this has also been proved by Lassalle \cite[Proposition 9.2 (ii)]{Lassalle2009}) and its component of degree $|\mu|+\ell(\mu)$ does not depend on $\alpha$. Hence the result follows, as this dominant term is known in the case $\alpha=1$ (see for example \cite[Theorem 4.9]{SniadyGenusExpansion}).
\end{proof}

\subsection{A second bound on degrees}\label{SubsectBound2}

For some purposes the bound on the degree of $a_\rho^\mu$ given by Proposition \ref{PropBound1} is not strong enough. In this section we give a second bound, related to a different gradation of $\Pola$ defined by: \[ \deg_2(M_k)=k-2 \qquad \text{for } k\geq 2.\] One has the following analogue of Lemma~\ref{LemDeg1B}: \begin{lemma} Let $\rho$ and $\pi$ be two partitions and $g\geq 0$ an integer. Then \begin{align*} \deg_\gamma(b_{g,\pi}^\rho(\gamma)) &\leq \deg_2(M_\rho) - \deg_2(M_{\pi \cup (g)}), \\ \deg_\gamma(b_{g,\pi}^\rho(\gamma)) &\leq \deg_2(M_\rho) - \deg_2(M_{\pi \cup (g+1)}) - 1. \end{align*} \label{LemDeg2B} \end{lemma} \begin{proof} The proof is similar to that of Lemma \ref{LemDeg1B}. We define the {\em pre-degree} (with respect to $\deg_2$) of an expression of the form $b(\gamma)\ M_\pi\ z_i^g$ to be $\deg_\gamma(b)+\deg_2(M_\pi)+g$. By Proposition \ref{PropMkLo}, the pre-degree of $M_k(\lo)$ is equal to $k-2$. Note that this pre-degree is multiplicative. Then $M_\rho(\lo)$ has pre-degree $|\rho|-2 \ell(\rho)=\deg_2(M_\rho)$. The lemma follows from the following inequalities: for $g\geq 0$, \begin{align*} \deg_2(M_{\pi \cup (g)}) &\leq \deg_2(M_{\pi})+g; \\ \deg_2(M_{\pi \cup (g+1)}) &\leq \deg_2(M_{\pi})+g-1. \end{align*} Note that in the first inequality the difference between the right-hand side and the left-hand side is equal to $2$, unless $g=0$, in which case we have equality. In the second inequality, the case $g=0$ is obvious, as $M_{\pi \cup (1)}=0$ and hence its degree is $-\infty$ by convention. In all other cases, we have equality. \end{proof} We deduce from this lemma a new bound on the degree of $a_\rho^\mu$. \begin{proposition} \label{PropBound2} The coefficient $a^\mu_\rho$ of $M_\rho$ in the Jack character polynomial $L_\mu$ is a polynomial in $\gamma$ of degree at most $|\mu|-\ell(\mu) - (|\rho|-2\ell(\rho))$. The same is true for $K_\mu$.
\end{proposition} \begin{proof} It is a straightforward exercise to adapt the proof of Proposition \ref{PropBound1}. We use Lemma~\ref{LemDeg2B} instead of Lemma~\ref{LemDeg1B} and $|\rho|$ has to be replaced by $|\rho| - 2\ell(\rho)$. \end{proof} As an immediate consequence we have the following. \begin{corollary} For any partition $\mu$ one has: \[ \deg_2(\Ch_\mu) = |\mu| - \ell(\mu),\] and the top degree part does not depend on $\alpha$. \end{corollary} Note that Proposition \ref{PropBound2} is neither weaker nor stronger than Proposition \ref{PropBound1}. But it is sometimes more appropriate, as we shall see in the next section. \begin{remark} The top degree part of $\Ch_\mu$ for $\deg_2$ does not admit an explicit expression, unlike the one for $\deg_1$. One can however compute its linear terms in free cumulants, see \cite[Section 3]{FerayGouldenHook}. \end{remark}

\subsection{Polynomiality of $\theta_\mu(\lambda)$}\label{SubsectCoefJackPoly}

In this section we prove that $\theta_\mu(\lambda)$ is a polynomial in $\alpha$ with rational coefficients. This simple statement does not follow directly from the definition of Jack polynomials and remained open for twenty years. It was then proved by Lapointe and Vinet~\cite{LapointeVinetJack}, who also proved the integrality of the coefficients in the monomial basis. Shortly afterwards, this result was complemented by a positivity result of Knop and Sahi~\cite{KnopSahiCombinatoricsJack}. Using the material of this section, we can give a new proof of the polynomiality in $\alpha$. Integrality and positivity unfortunately seem impossible to obtain {\em via} this method. First, we consider the dependence of $M_k(\lambda)$ on $\alpha$. \begin{lemma} Let $k \geq 2$ be an integer and $\lambda$ a partition. Then $\sqrt{\alpha}^{k-2} M_k(\lambda)$ is a polynomial in $\alpha$ with integer coefficients. \label{LemMkPol} \end{lemma} \begin{proof} We use induction over $|\lambda|$ and $k$.
Proposition~\ref{PropMkLo} can be rewritten as \begin{multline*} \sqrt{\alpha}^{k-2} M_k(\lambda^{(i)}) -\sqrt{\alpha}^{k-2} M_k(\lambda) = \sum_{\substack{r\geq 1, s,t \geq 0, \\ 2r+s+t \leq k}} \alpha^r (\sqrt{\alpha} z_i)^{k-2r-s-t} \\ \binom{k-t-1}{2r+s-1}\binom{r+s-1}{s} \left( \alpha-1 \right)^s \sqrt{\alpha}^{t-2} M_t(\lambda). \end{multline*} Note that $\sqrt{\alpha} z_i = \alpha x - y$ is a polynomial in $\alpha$ with integer coefficients. Hence the induction is immediate. \end{proof} Now we write, for $\mu,\lambda \vdash n$, \begin{multline*} z_\mu \theta_\mu(\lambda)=\alpha^{\frac{|\mu|-\ell(\mu)}{2}} \Ch_\mu(\lambda) = \alpha^{\frac{|\mu|-\ell(\mu)}{2}} \sum_\rho a^\mu_\rho M_\rho(\lambda) \\ = \sum_\rho \alpha^{\frac{|\mu|-\ell(\mu)-(|\rho|-2\ell(\rho))}{2}} a^\mu_\rho \left( \prod_{i \leq \ell(\rho)} \sqrt{\alpha}^{\rho_i-2} M_{\rho_i}(\lambda) \right). \end{multline*} The quantities $\alpha^{\frac{|\mu|-\ell(\mu)-(|\rho|-2\ell(\rho))}{2}} a^\mu_\rho$ and $\sqrt{\alpha}^{\rho_i-2} M_{\rho_i}(\lambda)$ are polynomials in $\alpha$ (by Proposition~\ref{PropBound2} and Lemma~\ref{LemMkPol}), hence $z_\mu \theta_\mu(\lambda)$, and thus $\theta_\mu(\lambda)$, is a polynomial in $\alpha$ with rational coefficients. \qed

\subsection{Yet another gradation and bound on degrees}

The gradation introduced in Section \ref{SubsectBound2} is suitable for some purposes (as we have seen in the previous section), but it has the unpleasant feature that all homogeneous components are infinite-dimensional. In particular, Proposition~\ref{PropBound2} does not give any information on the maximal power of $M_2$ which can appear in $L_\mu$. In this section we propose a way to avoid this difficulty. It is technical but will be useful in the next section. We define a new algebraic basis of $\Pola$ by: \begin{align*} M'_2 & =M_2,\\ M'_k & = M_k - (-\gamma)^{k-2} M_2 \qquad \text{for } k\geq 3.
\end{align*} We also consider the gradation defined by: \[\deg_3(M'_2) = 1, \quad \deg_3(M'_k) =k-2 \text{ for }k\ge 3,\] so that $\deg_3(M'_\rho)=|\rho|-2\ell(\rho) +m_2(\rho)$. Obviously, there exists a polynomial $L'_\mu$ such that \[ \Ch_\mu = L'_\mu(M'_2,M'_3,\dots).\] For example, one has: \[L'_{(2,2)} = (M'_3)^2 +6(M'_2)^2 - 4 M'_4 -10 \gamma M'_3 -2M'_2.\] We denote by $(a')_\rho^\mu$ the coefficient of $M'_\rho$ in $L'_\mu$. Then, one has the following result. \begin{proposition}\label{PropBound3} The coefficient $(a')_\rho^\mu$ is a polynomial in $\gamma$ of degree at most $|\mu|-\ell(\mu)+m_1(\mu) - (|\rho|-2\ell(\rho) +m_2(\rho))$. \end{proposition} \begin{remark} The analogous result is not true for $a_\rho^\mu$, as can be seen in the case $\mu=(2,2)$. \end{remark} The algorithm to compute the coefficients $(a')_\rho^\mu$ is the same as for $a_\rho^\mu$ and the proof of the bound on degrees is similar to the proofs of Propositions~\ref{PropBound1} and \ref{PropBound2}. Let us give some details. First, one can rewrite Proposition~\ref{PropMkLo} in terms of the quantities $M'_k$: \begin{multline} M'_k(\lambda^{(i)}) - M'_k(\lambda) =M_k(\lambda^{(i)}) - M_k(\lambda) - (-\gamma)^{k-2} \\ = \sum_{\substack{ r\geq 1, s,t \geq 0, \\ 2r+s+t \leq k \\ \bm{(r,s,t) \neq (1,k-2,0) }}} \Bigg[ z_i^{k-2r-s-t} \binom{k-t-1}{2r+s-1}\binom{r+s-1}{s} \\ \cdot (-\gamma)^s (M'_t(\lambda)+ (-\gamma)^{t-2} M'_2(\lambda)) \Bigg]. \label{EqM'klo} \end{multline} Note that the term $(-\gamma)^{k-2}$ corresponding to $(r,s,t)=(1,k-2,0)$ no longer appears in the sum.
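As a sanity check of this change of basis, substituting $M'_2=M_2$, $M'_3=M_3+\gamma M_2$ and $M'_4=M_4-\gamma^2 M_2$ into the example $L'_{(2,2)}$ above must recover the expression of $L_{(2,2)}$ in the $M$-basis given earlier. This can be verified numerically (Python sketch, not part of the mathematical development; since both sides are the same polynomial, any rational sample values work):

```python
from fractions import Fraction as F

def L22(M2, M3, M4, g):
    """L_{(2,2)} in the M-basis (expression from the text; g = gamma)."""
    return (M3 ** 2 + 2 * g * M3 * M2 - 4 * M4 + (g ** 2 + 6) * M2 ** 2
            - 10 * g * M3 - (6 * g ** 2 + 2) * M2)

def L22_via_prime(M2, M3, M4, g):
    """L'_{(2,2)} in the M'-basis, after the change of basis
    M'_2 = M_2 and M'_k = M_k - (-g)**(k-2) * M_2 for k >= 3."""
    Mp2 = M2
    Mp3 = M3 - (-g) * M2           # = M3 + g*M2
    Mp4 = M4 - (-g) ** 2 * M2      # = M4 - g**2*M2
    return (Mp3 ** 2 + 6 * Mp2 ** 2 - 4 * Mp4
            - 10 * g * Mp3 - 2 * Mp2)
```

The agreement of the two evaluations on sample data reflects the fact that $L_{(2,2)}$ and $L'_{(2,2)}$ are two expansions of the same function $\Ch_{(2,2)}$.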
By multiplication, there exist some polynomials $(b')_{g,\pi}^\rho(\gamma)$ such that \[M'_\rho(\lo) = M'_\rho(\lambda) + \sum_{g,\pi} (b')_{g,\pi}^\rho(\gamma)\ z_i^g\ M'_\pi(\lambda).\] Using Equation~\eqref{EqIng1Lassalle} and Proposition~\ref{PropLinearRelation}, we obtain the following equalities: \begin{align} \sum_\rho (a')^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} (b')_{g,\pi}^\rho(\gamma) M'_\pi M_g \right) &= m_1(\mu) \Ch_{\mu \backslash 1}, \tag{$A'$}\label{EqRec1'}\\ \sum_\rho (a')^\mu_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} (b')_{g,\pi}^\rho(\gamma) M'_\pi M_{g+1} \right) &= \sum_{r \geq 2} r \cdot m_r(\mu) \Ch_{\mu_{\downarrow r}}. \tag{$B'$}\label{EqRec2'} \end{align} Plugging $M_g=M'_g + (-\gamma)^{g-2} M'_2$ into these equations and identifying the coefficient of $M'_\tau$ on both sides, we obtain the following system: \begin{multline*} \sum_\rho (a')^\mu_\rho \left( \sum_{\substack{g, \pi, \\ \pi \cup (g)=\tau}} (b')_{g,\pi}^\rho(\gamma) + \sum_{\substack{g>2, \pi, \\ \pi \cup (2)=\tau}} (-\gamma)^{g-2} (b')_{g,\pi}^\rho(\gamma) \right) \\ = m_1(\mu) (a')^{\mu \backslash 1}_\tau, \tag{$A'_\tau$}\label{EqRec1'tau} \end{multline*} \begin{multline*} \sum_\rho (a')^\mu_\rho \left( \sum_{\substack{g, \pi, \\ \pi \cup (g+1)=\tau}} (b')_{g,\pi}^\rho(\gamma) + \sum_{\substack{g \geq 2, \pi ,\\ \pi \cup (2)=\tau}} (-\gamma)^{g-1} (b')_{g,\pi}^\rho(\gamma) \right) \\= \sum_{r \geq 2} r \cdot m_r(\mu) (a')^{\mu_{\downarrow r}}_\tau. \tag{$B'_\tau$}\label{EqRec2'tau} \end{multline*} It is easy to check that Lemma~\ref{LemTriangle} still holds for this system. The next step is to give a bound on the degree of $(b')_{g,\pi}^\rho(\gamma)$.
\begin{lemma} \[ \deg_\gamma\big((b')_{g,\pi}^\rho(\gamma)\big) \leq \deg_3(M'_\rho) - \deg_3(M'_\pi) - \max(g,1).\] \label{LemDeg3B'} \end{lemma} \begin{proof} Let us call \emph{pre-degree} (with respect to $\deg_3$) of an expression of the form $b(\gamma)\ M'_\pi\ z_i^g$ the quantity $\deg_\gamma(b)+\deg_3(M'_\pi)+g$. It is multiplicative. Clearly, $M'_k(\lo)$ has pre-degree $\max(k-2,1)$ (see Equation~\eqref{EqM'klo}), thus $M'_\rho(\lo)$ has pre-degree $\deg_3(M'_\rho)$, which finishes the proof of the case $g \geq 1$. For $g=0$, one has to look at the term which does not involve $z_i$. It is easy to check on Equation~\eqref{EqM'klo} (here, it is crucial to use $M'$ and not $M$) that \[M'_k(\lo)|_{z_i=0}=M'_k(\lambda)|_{z_i=0} + \left(\text{terms of pre-degree $k-3$}\right).\] Hence by multiplication, \[M'_\rho(\lo)|_{z_i=0}=M'_\rho(\lambda)|_{z_i=0} + \left(\text{terms of pre-degree }\deg_3(M'_\rho) - 1\right),\] which finishes the proof of the lemma. \end{proof} We now have all the tools to prove Proposition~\ref{PropBound3} by induction. As usual, we first consider the case where $\rho=\tau \cup (2)$ has the smallest part equal to $2$. Then Equation~\eqref{EqRec1'tau} can be written as: \begin{multline*} m_2(\rho) \cdot (a')^\mu_\rho = m_1(\mu) (a')^{\mu \backslash 1}_{\tau} - \sum_{\substack{\pi,g, \\ \pi \cup (g) = \tau}} \sum_{\rho' >_1 \rho} (b')_{g,\pi}^{\rho'}(\gamma) (a')^\mu_{\rho'} \\ -\sum_{\substack{g>2, \pi, \\ \pi \cup (2)=\tau}} \sum_{\rho' >_1 \rho} (-\gamma)^{g-2} (b')_{g,\pi}^{\rho'}(\gamma)(a')^\mu_{\rho'}. \end{multline*} With arguments similar to the ones used previously, the first two terms are polynomials in $\gamma$ of degree at most $|\mu|-\ell(\mu)+m_1(\mu) - \deg_3(M'_{\rho})$. Let us focus on the last summand. By the induction hypothesis, $(a')^\mu_{\rho'}$ is a polynomial of degree at most $|\mu|-\ell(\mu)+m_1(\mu) - \deg_3(M'_{\rho'})$. By Lemma \ref{LemDeg3B'}, $(b')_{g,\pi}^{\rho'}(\gamma)$ has degree at most $\deg_3(M'_{\rho'}) - \deg_3(M'_\pi) - g$.
Hence the product of these two terms with $(-\gamma)^{g-2}$ has degree at most \[|\mu|-\ell(\mu)+m_1(\mu) - (\deg_3(M'_\pi) + 2) = |\mu|-\ell(\mu)+m_1(\mu) - \deg_3(M'_\rho).\] The equality comes from the fact that $\rho=\tau \cup (2)=\pi \cup (2,2)$. Finally, one obtains that $(a')^\mu_\rho$ has degree at most $|\mu|-\ell(\mu)+m_1(\mu) - \deg_3(M'_{\rho})$. The case when $\rho$ has no parts equal to $2$ is similar. \qed \begin{corollary} For any partition $\mu$ one has: \[ \deg_3(\Ch_\mu) = |\mu| - \ell(\mu)+m_1(\mu),\] and the top degree part does not depend on $\alpha$. \label{corol:deg3_Ch} \end{corollary} \begin{remark} The top degree part of $\Ch_\mu$ for $\deg_3$ has, as far as we know, no closed expression. \end{remark} \subsection{Gradations and characters}\label{SubsectDegCh} In the previous sections we have defined three different gradations. The elements of our favorite basis $(\Ch_\mu)$ are not homogeneous, but have the following nice property: if we define \[ V_i^d := \{x \in \Pola : \deg_i(x) \leq d\} \] then, for $i=1$ and $i=3$, each $V_i^d$ is spanned linearly by the functions $\Ch_\mu$ that it contains (this comes from a direct dimension argument). This simple observation will be useful later. The same argument cannot be used for $i=2$, as the spaces $V_2^d$ are all infinite-dimensional. \begin{remark} The functions $\deg_i$, for $i=1,3$, define some gradations and hence some filtrations on $\Pola$. These filtrations were known in the cases $\alpha=1,2$; see \cite{IvanovOlshanski2002,FerayInductionJM,Tout-Structure-constant-S2n-Hn}. In fact, Ivanov and Olshanski \cite[Proposition 4.9]{IvanovOlshanski2002} give many more filtrations for $\alpha=1$, but we have not been able to prove that they hold for general $\alpha$. In particular, the filtration \begin{equation}\label{eq:Kerov_filtration} \deg(\Ch_\mu)=|\mu|+m_1(\mu) \end{equation} is central in their analysis of fluctuations of random Young diagrams.
Unfortunately, we are unable to prove that \eqref{eq:Kerov_filtration} still defines a filtration in the general $\alpha$-case. We leave this as an open question. If we were able to answer it positively, we could use a moment method to obtain our fluctuation results (without the bound on the speed of convergence). \end{remark} \section{Polynomiality of structure constants of Jack characters} \label{SectStructureConstants} \subsection{Structure constants are polynomials in $\gamma$} \label{SubsectStructPol} In this section we are going to prove our main result for the structure constants of the algebra $\Pola$ of $\alpha$-polynomial functions, which was stated as Theorem \ref{theo:struct-const}. \begin{proof}[Proof of Theorem \ref{theo:struct-const}] First observe that for each $i \in \{1,2,3\}$ one has: \[ n_i(\mu) = \deg_i(\Ch_\mu),\] hence our bound on the degree of structure constants can be equivalently formulated using the three gradations introduced in Section \ref{SectPolynomial}. Let us consider the bound involving $\deg_1$ (the case of $\deg_3$ is similar). We know by Proposition~\ref{PropBound1} that \[ \Ch_\mu = \sum_\rho a_\rho^\mu M_\rho,\] where each $a_\rho^\mu$ is a polynomial in $\gamma$ of degree at most $\deg_1(\Ch_\mu)-\deg_1(M_\rho)$. Hence, we have \[ \Ch_\mu \cdot \Ch_\nu = \sum_\rho b_\rho^{\mu,\nu} M_\rho,\] where each $b_\rho^{\mu,\nu}$ is a polynomial in $\gamma$ of degree at most $\deg_1(\Ch_\mu)+\deg_1(\Ch_\nu)-\deg_1(M_\rho)$. In particular $\Ch_\mu \cdot \Ch_\nu$ has degree at most $\deg_1(\Ch_\mu)+\deg_1(\Ch_\nu)$ and hence, thanks to the remark of Section~\ref{SubsectDegCh}, $g_{\mu,\nu;\pi}=0$ whenever \[ n_1(\mu) + n_1(\nu) < n_1(\pi).\] The structure constants are obtained by solving the linear system: \begin{equation} \sum_\tau a_\rho^\tau g_{\mu,\nu;\tau} = b_\rho^{\mu,\nu}. \tag{S}\label{EqSystStructure} \end{equation} In this system, $\mu$ and $\nu$ are fixed, and there is one equation for each partition $\rho$ without parts equal to $1$.
In each equation, the sum runs over partitions $\tau$ such that $n_1(\tau) \le n_1(\mu) + n_1(\nu)$. Finally, the unknowns are $g_{\mu,\nu;\tau}$, for $\tau$ as above. We will prove our statement by induction on \[ \deg_1(\Ch_\mu) + \deg_1(\Ch_\nu) - \deg_1(\Ch_\pi).\] Fix some partitions $\mu$, $\nu$ and $\pi$. If the quantity above is negative, the coefficient $g_{\mu,\nu;\pi}$ is equal to $0$ and the statement is true. Otherwise, we suppose that for all partitions $\tau$ bigger than $\pi$ (in the sense that $\deg_1(\Ch_{\tau}) > \deg_1(\Ch_{\pi})$), the degree of $g_{\mu,\nu;\tau}$ is bounded from above by $\deg_1(\Ch_\mu) + \deg_1(\Ch_\nu) - \deg_1(\Ch_\tau)$. Note that $a_\rho^\tau$ vanishes as soon as $\deg_1(\Ch_\tau) < \deg_1(M_\rho)$. Then from \eqref{EqSystStructure} we extract a subsystem \begin{equation} \sum_{\substack{\tau, \\ \deg_1(\Ch_\tau)=\deg_1(\Ch_{\pi})}} a_\rho^\tau g_{\mu,\nu;\tau} = b_\rho^{\mu,\nu} - \sum_{\substack{\tau, \\ \deg_1(\Ch_\tau)>\deg_1(\Ch_{\pi})}} a_\rho^\tau g_{\mu,\nu;\tau}, \tag{$S'$} \label{syst:Sp} \end{equation} where $\rho$ runs over partitions such that $\deg_1(M_\rho)=\deg_1(\Ch_\pi)$. The variables are $g_{\mu,\nu;\tau}$ for $\tau$ with $\deg_1(\Ch_\tau)=\deg_1(\Ch_{\pi})$. This system is invertible (because $(\Ch_\pi)$ is a basis of $\Pola$) and the coefficients on the left-hand side of \eqref{syst:Sp} are rational numbers (by Proposition~\ref{PropBound1}). Besides, all terms on the right-hand side are polynomials in $\gamma$ of degree at most $\deg_1(\Ch_\mu) + \deg_1(\Ch_\nu) - \deg_1(\Ch_\pi)$, which finishes the proof. The parity statement is proved in the same way.\medskip The bound involving $\deg_2$ is obtained in a slightly different way. First note that if $\mu$ and $\nu$ do not have any parts equal to $1$, this bound is weaker than the one with $\deg_3$. Hence, it holds in this case. Then the general case follows, using the fact that $\Ch_{\mu \cup (1)}=(|\lambda| - |\mu|)\Ch_\mu$.
\end{proof} \subsection{Projection on functions on Young diagrams of size $n$}\label{SubsectProjN} Recall that $\Pola$ is a subalgebra of the algebra of functions on all Young diagrams. The latter has a natural projection map $\varphi_n$ onto $\F(\Young_n,\QQ)$, the algebra of functions on Young diagrams of size $n$. As Jack symmetric functions $J_\lambda$ form a basis of the symmetric function ring, the functions $(\theta_\mu)_{\mu \vdash n}$ form a basis of $\F(\Young_n,\QQ)$ (see \cite[Proposition 4.1]{FerayInductionJM}). We consider the structure constants $c_{\mu,\nu;\pi}$ of $\F(\Young_n,\QQ)$ with basis $(\theta_\mu)_{\mu \vdash n}$, that is the numbers uniquely defined by: \begin{equation}\label{EqDefC} \theta_\mu(\lambda) \cdot \theta_\nu(\lambda) = \sum_{\pi \vdash n} c_{\mu,\nu;\pi} \, \theta_\pi(\lambda) \quad \text{for all $\lambda \vdash n$}. \end{equation} Note that $c_{\mu,\nu;\pi}$ depends on $\alpha$, but, according to our convention, we omit the superscript when there is no risk of confusion. It is important to keep in mind that the $c$'s are indexed by triples of partitions of \emph{the same size}, while the $g$'s are indexed by arbitrary triples of partitions. It turns out that the quantities $c_{\mu,\nu;\pi}$ can be expressed in terms of the quantities $g_{\mu,\nu;\tau}$. To explain this, for any partition $\mu$, let $\tilde{\mu}$ denote the partition obtained from $\mu$ by erasing all parts equal to $1$. Fix two partitions $\mu$ and $\nu$ of the same integer $n$; then \[\Ch_{\tilde{\mu}} \cdot \Ch_{\tilde{\nu}} = \sum_\tau g_{\tilde{\mu},\tilde{\nu};\tau} \Ch_{\tau}.
\] But using the definition of $\Ch$ from Section \ref{eq:definition-Jack}, this implies that, for all $\lambda \vdash n$, one has: \begin{multline*} \alpha^{-\frac{|\mu|-\ell(\mu)}{2}}\ z_{\tilde{\mu}}\ \theta_\mu(\lambda) \cdot \alpha^{-\frac{|\nu|-\ell(\nu)}{2}}\ z_{\tilde{\nu}}\ \theta_\nu(\lambda) \\ = \sum_{\substack{\tau, \\|\tau| \leq n}} g_{\tilde{\mu},\tilde{\nu};\tau}\ \alpha^{\frac{|\tau|-\ell(\tau)}{2}}\ z_\tau\ \binom{n-|\tau|+m_1(\tau)}{m_1(\tau)}\ \theta_{\tau 1^{n-|\tau|}}(\lambda). \end{multline*} Every partition $\tau$ with $|\tau| \leq n$ can be written uniquely as $\tilde{\pi} 1^i$ where $\pi$ is a partition of $n$ and $i \leq m_1(\pi)$. Denoting $$d(\mu,\nu;\pi) := |\mu|-\ell(\mu) + |\nu|-\ell(\nu) - (|\pi| - \ell(\pi)),$$ one has \[ \theta_\mu(\lambda) \cdot \theta_\nu(\lambda) = \sum_{\pi \vdash n} \frac{\alpha^{d(\mu,\nu;\pi)/2}}{z_{\tilde{\mu}} z_{\tilde{\nu}}} \left( \sum_{0 \leq i \leq m_1(\pi)} g_{\tilde{\mu},\tilde{\nu};\tilde{\pi} 1^i} \cdot z_{\tilde{\pi}} \cdot i! \cdot \binom{n-|\tilde{\pi}|}{i} \right) \theta_\pi(\lambda).\] As this holds for all partitions $\lambda$ of $n$, by definition of the structure constants, we have: \begin{equation}\label{EqCG} c_{\mu,\nu;\pi} = \frac{\alpha^{d(\mu,\nu;\pi)/2}}{z_{\tilde{\mu}} z_{\tilde{\nu}}} \sum_{0 \leq i \leq m_1(\pi)} g_{\tilde{\mu},\tilde{\nu};\tilde{\pi} 1^i} \cdot z_{\tilde{\pi}} \cdot i! \cdot \binom{n-|\tilde{\pi}|}{i}. \end{equation} Using Theorem \ref{theo:struct-const} with $\deg_3$, we know that $g_{\tilde{\mu},\tilde{\nu};\tilde{\pi} 1^i}$ is a polynomial in $\gamma$ of degree at most $d(\mu,\nu;\pi)-i$. We have thus proved the following result: \begin{proposition}\label{PropStructureTheta} Let $\mu$, $\nu$ and $\pi$ be three partitions without parts equal to $1$. Then $\alpha^{-d(\mu,\nu;\pi)/2} c_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}$ is a polynomial in $n$ and $\gamma$ with rational coefficients, of total degree at most $d(\mu,\nu;\pi)$.
Moreover, seen as a polynomial in $\gamma$, it has the same parity as $d(\mu,\nu;\pi)$. \end{proposition} \begin{corollary}\label{CorolConnectionSeries} The quantity $c_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}$ is a polynomial in $n$ and $\alpha$. Moreover, it has degree at most $d(\mu,\nu;\pi)$ in $n$ and at most $d(\mu,\nu;\pi)$ in $\alpha$ (the total degree may be bigger). \end{corollary} Applications of these statements are given in the next section as well as in Appendix~\ref{app:Matching-Jack_And_Matsumoto}. \section{Special values of $\alpha$ and polynomial interpolation} \label{SectSpecialValues} \subsection{Case $\alpha=1$: symmetric group algebra} \label{SubsectStructA1} In the case $\alpha=1$, the structure constants considered in the previous section are linked with the symmetric group algebra. Let $\Sym{n}$ denote the symmetric group of size $n$, \emph{i.e.} the group of permutations of the set $[n]:=\{1,\dots,n\}$. Recall that the cycle-type of a permutation $\sigma \in \Sym{n}$ is the integer partition $\mu \vdash n$ obtained by sorting the lengths of the cycles of $\sigma$ in weakly decreasing order. We consider the group algebra $\QQ[\Sym{n}]$ of $\Sym{n}$ over the rational field $\QQ$. Its center $Z(\QQ[\Sym{n}])$ is spanned linearly by the conjugacy classes, that is, by the elements \[ \cl_\mu = \sum_{\substack{\sigma \in \Sym{n}, \\ \text{cycle-type}(\sigma)=\mu}} \sigma.\] By a classical result of Frobenius (see \cite{Frobenius1900} or \cite[(I,7.8)]{Macdonald1995}), for any $\lambda \vdash n$, \[ \frac{\Tr \rho^\lambda(\cl_\mu)}{\text{dimension of $\rho^\lambda$}} = \theta_\mu^{(1)}(\lambda), \] where $\rho^\lambda$ is the irreducible representation of the symmetric group associated to the Young diagram $\lambda$. In other words: $\theta_\mu^{(1)}$ is the image of $\cl_\mu$ by the abstract Fourier transform, which is an algebra morphism.
Hence, the structure constants of the algebra $Z(\QQ[\Sym{n}])$ with the basis $(\cl_\mu)_{\mu \vdash n}$ coincide with $c^{(1)}_{\mu,\nu;\pi}$. These structure constants have been widely studied in the last fifty years in algebra and combinatorics (they count some families of graphs embedded in orientable surfaces). A famous result in this topic is due to Farahat and Higman \cite[Theorem 2.2]{FarahatHigmanCentreQSn}: the quantity $c^{(1)}_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}$ is a polynomial in $n$. Note that this is a consequence of Proposition \ref{PropStructureTheta}.\bigskip Besides, the structure constants of the center $Z(\QQ[\Sym{n}])$ have a well-known and obvious combinatorial interpretation. \begin{lemma} Let $\mu$, $\nu$ and $\pi$ be three partitions of the same integer $n$. Fix a permutation $\sigma$ of cycle-type $\pi$. Then $c^{(1)}_{\mu,\nu;\pi}$ is the number of pairs of permutations $(\sigma_1,\sigma_2)$, such that $\sigma_1$ has cycle-type $\mu$, $\sigma_2$ has cycle-type $\nu$ and $\sigma_1 \cdot \sigma_2=\sigma$. \label{lem:Interpretation_StructureConstants_A1} \end{lemma} This can be used to compute $c^{(1)}_{\mu,\nu;\pi}$ in some particular cases, which will be useful later. \begin{lemma} \label{lem:SC1} We have the following identities: \begin{enumerate} \item \label{eq:SC1-1} $c^{(1)}_{\mu, \nu; 1^n} = 0$ for any $\mu \neq \nu$; \item \label{eq:SC1-2} $c^{(1)}_{(k 1^n),(k 1^n);(1^{k+n})} = \binom{k+n}{k} (k-1)!$. \end{enumerate} \end{lemma} \begin{proof} Consider the first item. The only permutation of cycle-type $(1^n)$ is the identity, so $\sigma=\id$. Hence, the condition $\sigma_1 \cdot \sigma_2=\sigma$ corresponds to $\sigma_2=\sigma_1^{-1}$ and in particular $\sigma_1$ and $\sigma_2$ must have the same cycle-type. Consider now the second item. As before, we must have $\sigma=\id$.
As $\sigma_2=\sigma_1^{-1}$ always has the same cycle-type as $\sigma_1$, the coefficient $c^{(1)}_{(k 1^n),(k 1^n);(1^{k+n})}$ is simply the number of permutations of $[k+n]$ of cycle-type $(k 1^n)$. This is well-known \cite[equation (1.2)]{SaganSymmetric} to be \[\frac{(k+n)!}{z_{(k 1^n)}}=\binom{k+n}{k} (k-1)!. \qedhere\] \end{proof} \begin{remark} The quantities $g^{(1)}_{\mu,\nu;\pi}$ also have a direct combinatorial interpretation in terms of partial permutations, see \cite{IvanovKerovPartialPermutations}. \end{remark} \subsection{Case $\alpha=2$: Hecke algebra of $(\Sym{2n},H_n)$} \label{SubsectHecke} An analogous interpretation of the structure constants exists in the case $\alpha=2$. We explain it here, following the development given in \cite{GouldenJacksonMapsZonal}. We can view the elements of the symmetric group $\Sym{2n}$ as permutations of the set $\{1,\bar{1},\dots,n,\bar{n}\}$. The subgroup formed by the permutations $\sigma$ such that \[\overline{\sigma(i)} = \sigma(\overline{i})\qquad \text{for } i\in\{1,\dots,n\},\] where by convention $\overline{\bar{j}}=j$, is called the \emph{hyperoctahedral group} and is denoted by $H_n$. We consider the subalgebra $\QQ[H_n \backslash \Sym{2n} / H_n] < \QQ[\Sym{2n}]$ of the elements invariant under multiplication on the left and on the right by any element of $H_n$; in other words \[x \in \QQ[H_n \backslash \Sym{2n} / H_n] \stackrel{\text{\tiny def}}{\iff} h x h'=x \quad \text{for all }h,h' \in H_n.\] A non-trivial result is that this algebra is commutative. The equivalence classes for the relation $x \sim hxh'$ (for $x \in \Sym{2n}$ and $h,h' \in H_n$) are called \emph{double-cosets}. They are naturally indexed by partitions of $n$, see \cite[(VII,2)]{Macdonald1995}. We denote by $\cl^{(2)}_\mu \in \QQ[H_n \backslash \Sym{2n} / H_n]$ the sum of all elements in the double coset corresponding to $\mu$. The family $(\cl^{(2)}_\mu)_{\mu \vdash n}$ is a basis of $\QQ[H_n \backslash \Sym{2n} / H_n]$.
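Parenthetically, the $\alpha=1$ identities of Lemma~\ref{lem:SC1} lend themselves to brute-force verification via the interpretation of Lemma~\ref{lem:Interpretation_StructureConstants_A1}. The following illustrative Python sketch (all helper names are ours; permutations are encoded as tuples of images) confirms both items for small parameters:

```python
from itertools import permutations
from math import comb, factorial

def cycle_type(p):
    # p is a tuple: p[i] is the image of i; returns cycle lengths sorted decreasingly
    n, seen, lengths = len(p), set(), []
    for i in range(n):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j); j = p[j]; length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

def compose(p, q):
    # (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

def c1(mu, nu, pi):
    # c^{(1)}_{mu,nu;pi}: number of pairs (s1, s2) with cycle-types mu, nu
    # and s1 . s2 = s, for one fixed permutation s of cycle-type pi
    n = sum(pi)
    perms = list(permutations(range(n)))
    s = next(p for p in perms if cycle_type(p) == pi)
    by_type = {}
    for p in perms:
        by_type.setdefault(cycle_type(p), []).append(p)
    return sum(1 for s1 in by_type.get(mu, []) for s2 in by_type.get(nu, [])
               if compose(s1, s2) == s)

k, n = 3, 1
# item 2 of Lemma lem:SC1 for (k, n) = (3, 1)
assert c1((k,) + (1,)*n, (k,) + (1,)*n, (1,)*(k+n)) == comb(k+n, k) * factorial(k-1)
# item 1 of Lemma lem:SC1: mu != nu forces the count to vanish
assert c1((2, 1, 1), (3, 1), (1, 1, 1, 1)) == 0
```

Of course, the exhaustive search is only feasible for very small $n$; it is meant as a check of the statements, not as a computational tool.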
One can show (see \cite[Equations~(3) and (5)]{GouldenJacksonMapsZonal}) that there exist some orthogonal idempotents $E_\lambda$ such that: \[ \cl^{(2)}_\mu = 2^n n! \sum_{\lambda \vdash n} \theta_\mu^{(2)}(\lambda)\ E_\lambda,\] and one has \begin{multline*} \cl^{(2)}_\mu \cdot \cl^{(2)}_\nu = (2^n n!)^2 \sum_{\lambda \vdash n} \theta_\mu^{(2)}(\lambda)\ \theta_\nu^{(2)}(\lambda)\ E_\lambda\\ = (2^n n!)^2 \sum_{\pi \vdash n} \sum_{\lambda \vdash n} c_{\mu,\nu;\pi}^{(2)}\ \theta_\pi^{(2)}(\lambda)\ E_\lambda = (2^n n!) \sum_{\pi \vdash n} c_{\mu,\nu;\pi}^{(2)}\ \cl^{(2)}_\pi. \end{multline*} Hence, the structure constants $h_{\mu,\nu;\pi}$ of the algebra $\QQ[H_n \backslash \Sym{2n} / H_n]$ for the basis $(\cl^{(2)}_\mu)_{\mu \vdash n}$ are, up to a factor $2^n n!$, the same as the ones of the algebra $\F(\Young_n,\QQ)$ with the basis $(\theta_\mu^{(2)})_{\mu \vdash n}$. In particular, Proposition \ref{PropStructureTheta} implies the following result, which is an analogue of Farahat and Higman's result \cite{FarahatHigmanCentreQSn} (a combinatorial proof of the polynomiality has recently been given by O. Tout in \cite{Tout-Structure-constant-S2n-Hn}). \begin{proposition} Let $\mu$, $\nu$ and $\pi$ be partitions without parts equal to $1$. The renormalized structure constant of the algebra $\QQ[H_n \backslash \Sym{2n} / H_n]$ \[ \frac{h_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}}{n!\ 2^n\ \sqrt{2}^{d(\mu,\nu;\pi)}}\] is a polynomial in $n$ of degree at most $d(\mu,\nu;\pi)$.
Moreover, its coefficient of $n^{d(\mu,\nu;\pi)}$ is the same as in $c^{(1)}_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}.$ In particular, \begin{itemize} \item when $|\mu|-\ell(\mu) + |\nu|-\ell(\nu)=|\pi| - \ell(\pi)$, one has \[\frac{h_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}}{n!\ 2^n}= c^{(1)}_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}\] and this quantity is independent of $n$; \item when $|\mu|-\ell(\mu) + |\nu|-\ell(\nu)=|\pi| - \ell(\pi) + 1$, \[\frac{h_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}}{n!\ 2^n}\] is independent of $n$. \end{itemize} \end{proposition} \begin{proof} The claim that the renormalized structure constant mentioned above is a polynomial and the bound on its degree follow from Proposition \ref{PropStructureTheta} specialized to $\alpha=2$. The dominant coefficient is a polynomial in $\gamma$ of degree $0$, so it is the same for $\gamma \in \{0, -1/\sqrt{2}\}$, that is $\alpha \in \{1,2\}$. The first item follows immediately. In the second item, we consider the case $d(\mu,\nu;\pi)= 1$. So, \[\frac{h_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}}{n!\ 2^n\ \sqrt{2}}\] is an affine function of $n$ with the same linear coefficient as $c^{(1)}_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}$. But the latter is identically equal to $0$. Indeed, the fact that the sign of permutations is a group morphism implies that \[c^{(1)}_{\mu 1^{n-|\mu|},\nu 1^{n-|\nu|};\pi 1^{n-|\pi|}}=0\] whenever $d(\mu,\nu;\pi)$ is odd. \end{proof} Besides, Goulden and Jackson \cite{GouldenJacksonMapsZonal} described the coefficients $h_{\mu,\nu; \pi}$ combinatorially. Let $\mathcal{F}_\mathcal{S}$ be the set of all (perfect) matchings on a set $\mathcal{S}$, that is, partitions of $\mathcal{S}$ into pairs. When $\mathcal{S}$ is the set $\{1,2,\dots,2n\}$, we also denote $\mathcal{F}_\mathcal{S}$ by $\mathcal{F}_n$.
For $F_1,F_2 \in \mathcal{F}_\mathcal{S}$, let $G(F_1,F_2)$ be the multigraph with vertex-set $\mathcal{S}$ whose edges are the pairs of $F_1$ and $F_2$. Because of the natural bicoloration of the edges, the connected components of $G(F_1, F_2)$ are cycles of even length. Let the list of their lengths in weakly decreasing order be $(2\theta_1, 2 \theta_2, \dots) = 2\theta$, and define $\Lambda$ by $\Lambda(F_1, F_2) = \theta$. \begin{lemma}[{\cite[Lemma 2.2.]{GouldenJacksonMapsZonal}}] \label{lem:ZonalSC} Let $F_1, F_2$ be two fixed matchings in $\mathcal{F}_n$ such that $\Lambda(F_1,F_2) = \pi$, where $\pi \vdash n$. Then, for any $\mu, \nu \vdash n$ we have \[ h_{\mu,\nu;\pi} = 2^n n! |\{F_3 \in \mathcal{F}_n: \Lambda(F_1, F_3) = \mu, \Lambda(F_2, F_3) = \nu\}|.\] In particular \[ c^{(2)}_{\mu,\nu;\pi} = |\{F_3 \in \mathcal{F}_n: \Lambda(F_1, F_3) = \mu, \Lambda(F_2, F_3) = \nu\}|.\] \end{lemma} From this lemma one can evaluate some special cases of structure constants which will be helpful in the next subsection. \begin{lemma} \label{lem:SC2} We have the following identities: \begin{enumerate} \item \label{eq:SC2-1} $c^{(2)}_{\mu, \nu; 1^n} = 0$ for any $\mu \neq \nu$; \item \label{eq:SC2-2} $c^{(2)}_{(k 1^n),(k 1^n);(1^{k+n})} = \binom{k+n}{k}2^{k-1}(k-1)!$. \end{enumerate} \end{lemma} \begin{proof} The first item is immediate from Lemma \ref{lem:ZonalSC}, since if $F_1$ and $F_2$ are matchings such that $\Lambda(F_1,F_2) = (1^n)$, then clearly $F_1 = F_2$. Let $F_1, F_2$ be two fixed matchings such that $\Lambda(F_1,F_2) = (1^{k+n})$ (hence $F_1 = F_2$). We are looking for the number of matchings $F_3$ such that $\Lambda(F_1,F_3)=\Lambda(F_2,F_3)=(k 1^n)$. Of course, as $F_1=F_2$, this is the number of matchings $F_3$ with $\Lambda(F_1,F_3)=(k 1^n)$. This number does not depend on $F_1$.
Using \cite[Lemma 2.4]{NousZonal}, we know that the number of pairs $(F,F_3)$ with $\Lambda(F,F_3)=(k 1^n)$ is given by \[\frac{(2n+2k)!}{z_{(k 1^n)} 2^{n+1}}.\] As there are $\frac{(2n+2k)!}{2^{n+k} (n+k)!}$ matchings in $\mathcal{F}_{n+k}$ (that is, possible values for $F$), for a fixed $F_1$ the number of matchings $F_3$ with $\Lambda(F_1,F_3)=(k 1^n)$ is \[\frac{2^{n+k} (n+k)!}{z_{(k 1^n)} 2^{n+1}}=\binom{k+n}{k}2^{k-1}(k-1)!.\qedhere\] \end{proof} \subsection{A technical lemma obtained by polynomial interpolation} In order to study asymptotics of large Young diagrams, we need to understand some special cases of structure constants. Here we present a technical but useful lemma about them: \begin{lemma} \label{lem:SC} We have the following identities: \begin{enumerate} \item \label{eq:SC-1} $g_{\mu,\nu;\rho} = \delta_{\rho, \mu \cup \nu}$ for $|\rho| + \ell(\rho) \geq |\mu| + \ell(\mu) + |\nu| + \ell(\nu)$; \item \label{eq:SC-2} $g_{\mu,\nu;1^k} = 0$ for $\tilde{\mu} \neq \tilde{\nu}$ and $2k \geq |\mu| + \ell(\mu) + |\nu| + \ell(\nu) - 2$; \item \label{eq:SC-3} $g_{(k),(k);1^k} = k$; \item \label{eq:SC-4} $g_{(k),(l);\rho} = 0$ for $|\rho| + \ell(\rho) = k+l+1$. \end{enumerate} \end{lemma} \begin{proof} Let $x(\mu,\nu;\rho) := |\mu| + \ell(\mu) + |\nu| + \ell(\nu) - (|\rho| + \ell(\rho))$. By Theorem \ref{theo:struct-const} we know that $g_{\mu,\nu;\rho}$ is a polynomial in $\gamma$ of degree at most $x(\mu,\nu;\rho)$ and of the same parity as $x(\mu,\nu;\rho)$. Hence, if one wants to prove that, for some particular partitions, $g_{\mu,\nu;\rho}$ is identically equal to some constant $c$, it is enough to prove that: \begin{itemize} \item $g^{(1)}_{\mu,\nu;\rho} = c$ in the case $x(\mu,\nu;\rho) = 0$; \item $g^{(2)}_{\mu,\nu;\rho} = c$ in the case $x(\mu,\nu;\rho) = 1$ (necessarily, $c=0$ in this case); \item $g^{(1)}_{\mu,\nu;\rho} = g^{(2)}_{\mu,\nu;\rho} = c$ in the case $x(\mu,\nu;\rho) = 2$.
\end{itemize} Applying this idea, we see that the first item holds true, since this is true for $\alpha = 1$ \cite[Proposition 4.9]{IvanovOlshanski2002}.\bigskip Consider now the second item. In this case, $x(\mu,\nu;\rho) \le 2$, hence we need to prove that $g^{(1)}_{\mu,\nu;1^k} = g^{(2)}_{\mu,\nu;1^k} = 0$. We know from Lemma \ref{lem:SC1} \eqref{eq:SC1-1} that $c^{(1)}_{\pi,\rho;1^n} = 0$ for any pair of different partitions $\pi, \rho \vdash n$. This means that, for $n$ large enough, thanks to \eqref{EqCG}, we have the following equation: \[0 = \sum_{0 \leq i \leq k+1} g^{(1)}_{\tilde{\pi},\tilde{\rho}; (1^i)} \cdot i! \cdot \binom{n}{i},\] hence \[ g^{(1)}_{\tilde{\pi},\tilde{\rho}; (1^i)} = 0 \quad \text{for each $0\leq i\leq k+1$.}\] The same holds for $g^{(2)}$ using Lemma \ref{lem:SC2}. This finishes the case where $\mu$ and $\nu$ have no parts equal to $1$. The general case follows, since for any $\lambda \vdash n$ one has \[ \Ch_\mu(\lambda) = (n-|\tilde{\mu}|)_{m_1(\mu)}\Ch_{\tilde{\mu}}(\lambda).\]\bigskip In order to prove the third item, we need to prove the equalities \[g^{(1)}_{(k),(k);1^k} = k = g^{(2)}_{(k),(k);1^k} \quad \text{(as $x((k),(k);(1^k))=2$)}.\] Here, we use Equation \eqref{EqCG} again. We have that \[ c^{(1)}_{(k 1^n),(k 1^n);(1^{k+n})} = \frac{1}{k^2} \sum_{0 \leq i \leq k+n} g^{(1)}_{(k),(k);(1^i)} i! \binom{k+n}{i}. \] Moreover, the summation index can be restricted to $i \le k$ as $g^{(1)}_{(k),(k);(1^i)}=0$ for $i>k$. By Lemma \ref{lem:SC1} \eqref{eq:SC1-2} one has that \[c^{(1)}_{(k 1^n),(k 1^n);(1^{k+n})} = \binom{k+n}{k} (k-1)!. \] This gives \[ \binom{k+n}{k} (k-1)! = \frac{1}{k^2} \sum_{0 \leq i \leq k} g^{(1)}_{(k),(k);(1^i)} i! \binom{k+n}{i}\] and, since both sides of the equation are polynomials in $n$, the equality $g^{(1)}_{(k),(k);(1^k)} = k$ follows (all other coefficients $g^{(1)}_{(k),(k);(1^i)}$ vanish). The same proof works for $\alpha=2$, using Lemma \ref{lem:SC2} \eqref{eq:SC2-2}. Finally, let us prove the last item.
As $x((k),(l);\rho)=1$ in this case, it is enough to prove that $g^{(2)}_{(k),(l);\rho} = 0$. Here, we shall use a different approach. It is proved in \cite[Theorem 5.3]{NousZonal} that the coefficient of \[ \left(R_2^{(2)}\right)^{s_2} \left(R_3^{(2)}\right)^{s_3} \cdots \] in Kerov's expansion of $\Ch^{(2)}_{(k)}\Ch^{(2)}_{(l)} - \Ch^{(2)}_{(k,l)}$ is, up to some constant factor, the number of maps with $2$ faces, $k+l$ edges, $2s_2+ 3s_3 + \cdots$ vertices and some additional properties (the details of which are irrelevant here). But, by an Euler characteristic argument, such maps may exist only if \[2-(k+l)+2s_2+ 3s_3 + \cdots \le 2,\] that is \[2s_2+ 3s_3 + \cdots \le k+l.\] This implies that \[\deg_1\left(\Ch^{(2)}_{(k)}\Ch^{(2)}_{(l)} - \Ch^{(2)}_{(k,l)}\right) \le k+l. \] Expanding this difference in the basis of Jack characters, one has \[ \Ch^{(2)}_{(k)}\Ch^{(2)}_{(l)} = \Ch^{(2)}_{(k,l)} + \sum_{|\rho|+\ell(\rho) \leq k+l} g^{(2)}_{(k),(l);\rho} \Ch^{(2)}_\rho,\] which finishes the proof. \end{proof} \begin{remark} The equality $g^{(1)}_{(k),(k);(1^k)} = k$ was also established by Ivanov and Olshanski \cite[Proposition 4.12]{IvanovOlshanski2002}, using their combinatorial interpretation for $g^{(1)}_{\mu,\nu;\rho}$. \end{remark} \section{Jack measure: law of large numbers} \label{sect:FirstOrder} The purpose of this section is to prove Theorem~\ref{theo:RYD-limit}. As in \cite{IvanovOlshanski2002}, the key point is to prove the convergence of polynomial functions.
\subsection{Convergence of polynomial functions} Let us recall equation \eqref{eq:exp_Ch}, which gives the expectation of Jack characters with respect to Jack measure: \begin{equation*} \esper_{\PP_n^{(\alpha)}}(\Ch^{(\alpha)}_\mu)=\begin{cases} n(n-1)\cdots(n-k+1) & \text{if }\mu=1^k \text{ for some }k \leq n,\\ 0 & \text{otherwise.} \end{cases} \end{equation*} As $(\Ch_\mu)$ is a linear basis of $\Pola$, this implies the following lemma (which is an analogue of \cite[Theorem 5.5]{OlshanskiJackPlancherel} with another gradation). \begin{lemma} Let $F$ be an $\alpha$-polynomial function. Then $\esper_{\PP_n^{(\alpha)}}(F)$ is a polynomial in $n$ of degree at most $\deg_1(F)/2$. \label{LemDegEsper} \end{lemma} \begin{proof} It is enough to verify this lemma on the basis $(\Ch_\mu)$ because of the remark in Section \ref{SubsectDegCh}. But in this case $\esper_{\PP_n^{(\alpha)}}(F)$ is explicit (see formula \eqref{eq:exp_Ch}) and the lemma is obvious (recall that $\deg_1(\Ch_\mu)=|\mu|+\ell(\mu)$, see Section \ref{SubsectBound1}). \end{proof} Informally, terms which are smaller with respect to $\deg_1$ are asymptotically negligible. We can now prove the following weak convergence result: \begin{proposition} \label{PropAPlanchWeakConv} Let $(\lambda_{(n)})_{n \geq 1}$ be a sequence of random partitions distributed with Jack measure. For any 1-polynomial function $F \in \Pol^{(1)}$, when $n \to \infty$, one has \[ F\bigg(D_{1/\sqrt{n}} \big(A_\alpha(\lambda_{(n)})\big)\bigg) \xrightarrow{\PP_n^{(\alpha)}} F( \Omega) , \] where $\xrightarrow{\PP_n^{(\alpha)}}$ means convergence in probability and $\Omega$ is given by \eqref{eq:DefOmega}. \end{proposition} \begin{proof} As $(R^{(1)}_k)_{k \geq 2}$ is an algebraic basis of $\Pol^{(1)}$, it is enough to prove the proposition for any $R^{(1)}_k$. Let $\mu$ be a partition.
By Corollary \ref{corol:dominant_deg1}, \begin{multline}\label{EqTopDeg1} \prod_{i \leq \ell(\mu)} R_{\mu_i+1} = \Ch_\mu + \text{ terms of degree at most }\\ |\mu|+\ell(\mu)-1\text{ with respect to }\deg_1. \end{multline} Together with Lemma~\ref{LemDegEsper} and the formula \eqref{eq:exp_Ch} for $\esper_{\PP_n^{(\alpha)}}(\Ch_\mu)$, this implies: \[ \esper_{\PP_n^{(\alpha)}} \left(\prod_{i \leq \ell(\mu)}R_{\mu_i+1}\right) = \begin{cases} n(n-1)\cdots(n-k+1) + O(n^{k-1}) & \text{if }\mu=1^k \text{ for some }k; \\ o(n^{\frac{|\mu|+\ell(\mu)}{2}}) & \text{otherwise.} \end{cases}\] In particular \begin{align*} \esper_{\PP_n^{(\alpha)}} (R_k(D_{1/\sqrt{n}}(\lambda_{(n)}))) &= \frac{1}{n^{k/2}} \esper_{\PP_n^{(\alpha)}} (R_k) = \delta_{k,2} + O\left(\frac{1}{\sqrt{n}}\right), \\ \Var_{\PP_n^{(\alpha)}} (R_k(D_{1/\sqrt{n}}(\lambda_{(n)}))) &= \frac{1}{n^k} \left( \esper_{\PP_n^{(\alpha)}} \big((R_k)^2\big) - \esper_{\PP_n^{(\alpha)}} (R_k)^2 \right) = O\left(\frac{1}{n}\right). \end{align*} Thus, for each $k$, $R_k(D_{1/\sqrt{n}}(\lambda_{(n)}))$ converges in probability towards $\delta_{k,2}$. But, by definition, \[R_k(D_{1/\sqrt{n}}(\lambda_{(n)}))=R^{(1)}_k \bigg(D_{1/\sqrt{n}}\big(A_\alpha(\lambda_{(n)})\big)\bigg)\] and $(\delta_{k,2})_{k \geq 2}$ is the sequence of free cumulants of the continuous diagram $\Omega$ (see \cite[Section 3.1]{Biane2001}), \emph{i.e.} \[ \delta_{k,2} = R_k^{(1)}(\Omega).\qedhere\] \end{proof} \subsection{Shape convergence} In the previous subsection, we proved that evaluations of polynomial functions at $D_{1/\sqrt{n}} \big(A_\alpha(\lambda_{(n)})\big)$ converge towards the corresponding evaluations at $\Omega$. Ivanov and Olshanski have established that, if one can prove that the support of these renormalized Young diagrams lies in some compact set, then this implies the uniform convergence, that is Theorem~\ref{theo:RYD-limit}.
The following technical lemma, proved by Fulman \cite[Lemma 6.6]{FulmanFluctuationChA2}, will allow us to conclude: \begin{lemma} \label{lem:FiniteSupport} Suppose that $\alpha > 0$. Then \begin{enumerate} \item \[ \PP_n^{(\alpha)}\left(\lambda_1\geq 2 e\sqrt{\frac{n}{\alpha}}\right) \leq \alpha n^2 4^{-e\sqrt{\frac{n}{\alpha}}}, \] \item \[ \PP_n^{(\alpha)}(\lambda_1' \geq 2 e\sqrt{n\alpha}) \leq \frac{n^2}{\alpha} 4^{-e\sqrt{n\alpha}}. \] \end{enumerate} In particular \[\lim_{n \to \infty} \PP_n^{(\alpha)} \left( \left[-\frac{\lambda'_1}{\sqrt{n}};\frac{\lambda_1}{\sqrt{n}}\right] \subseteq \left[-2e\sqrt{\alpha}, \frac{2e}{\sqrt{\alpha}}\right] \right) =1. \] \end{lemma}\medskip {\em End of proof of Theorem~\ref{theo:RYD-limit}.} It follows from Proposition \ref{PropAPlanchWeakConv} and Lemma \ref{lem:FiniteSupport} by the same argument as the one given in \cite[Theorem 5.5]{IvanovOlshanski2002}. \qed \section{Jack measure: central limit theorem for Jack characters} \label{sect:CLT} In this section we prove the central limit theorem for Jack characters (Theorem \ref{theo:FluctuationsJackCharacters}) and the bound on the speed of convergence in this theorem (Theorem \ref{theo:SpeedConvergence}). \subsection{Multivariate Stein's method} As explained in the introduction, our main tool will be a multivariate analogue of {\em Stein's method}, due to Reinert and R{\"o}llin \cite{ReinertRollin2009}. For any discrete random variables $W, W^*$ with values in $\RR^d$, we say that the pair $(W,W^*)$ is \emph{exchangeable} if for any $w_1,w_2 \in \RR^d$ one has $\PP(W = w_1, W^* = w_2) = \PP(W = w_2, W^* = w_1)$. Let $\esper^W(\cdot)$ denote the conditional expectation given $W$.
The theorem of Reinert and R{\"o}llin is the following \cite[Theorem 2.1]{ReinertRollin2009}: \begin{theorem}[multivariate Stein's theorem] \label{theo:MultivariateStein} Let $(W,W^*)$ be an exchangeable pair of $\RR^d$-valued random variables such that $\esper(W) = 0$ and $\esper(WW^t) = \varSigma$, where $\varSigma \in M_{d \times d}(\RR)$ is a symmetric and positive definite matrix. Suppose that $\esper^W(W^* - W) = - \Lambda W$, where $\Lambda \in M_{d \times d}(\RR)$ is invertible. Then, if $Z$ is a $d$-dimensional standard Gaussian random vector, we have for every three times differentiable function $h:\RR^d \to \RR$, \begin{equation} \left| \esper h(W) - \esper h(\varSigma^{1/2}Z) \right| \leq \frac{|h|_2}{4}A + \frac{|h|_3}{12}B, \end{equation} where, using the notation $\lambda^{(i)} := \sum_{1 \leq m \leq d}|(\Lambda^{-1})_{m,i}|$, \begin{align*} |h|_n &= \sup_{i_1,\dots,i_n} \left\lVert \frac{\partial^n}{\partial x_{i_1} \cdots \partial x_{i_n}}h\right\rVert, \\ A &= \sum_{1 \leq i,j \leq d} \lambda^{(i)} \sqrt{\Var \esper^W(W_i^* - W_i)(W_j^* - W_j)}, \\ B &= \sum_{1 \leq i,j,k \leq d} \lambda^{(i)} \esper |(W_i^* - W_i)(W_j^* - W_j)(W_k^* - W_k)|. \end{align*} \end{theorem} Let $d$ be a positive integer. For $k \ge 2$, as in the statement of Theorem~\ref{theo:FluctuationsJackCharacters}, we define the following function of Young diagrams of size $n$: \[ W_k = n^{-k/2} \sqrt{k} \, \theta^{(\alpha)}_{(k,1^{n-k})} = n^{-k/2} \sqrt{k}^{-1} \Ch_{(k)}.\] It can be seen as a random variable on the probability space of Young diagrams of size $n$ endowed with Jack measure. We also consider the corresponding random vector \[ \tilde{W}_d = (W_2,\dots,W_{d+1}).\] Theorem~\ref{theo:FluctuationsJackCharacters} states that $\tilde{W}_d$ converges in distribution towards a vector of independent Gaussian random variables. Therefore we would like to apply the theorem above to this $d$-tuple of random variables.
In the next sections, we shall construct an exchangeable pair and check the hypotheses of Theorem~\ref{theo:MultivariateStein}. \subsection{An exchangeable pair} The first step consists in building a $d$-tuple $\tilde{W}_d^*$, such that $(\tilde{W}_d,\tilde{W}_d^*)$ is an exchangeable pair. The construction that we will describe here is due to Fulman \cite{FulmanFluctuationChA2}. His construction uses Markov chains, so let us begin by fixing some terminology. Let $X$ be a finite set. A Markov chain $M$ on $X$ is the data of transition probabilities $M(x,y)$ indexed by pairs of elements of $X$ with \[M(x,y) \ge 0 \text{ and } \sum_{y \in X} M(x,y) =1.\] If $x$ is a random element of $X$ distributed with probability $\PP$, then, applying once the Markov chain $M$, we obtain by definition a random element $y$ of $X$, defined on the same probability space as $x$, whose conditional distribution is given by: \[\PP ( y= y_0 \mid x=x_0 ) = M(x_0,y_0).\] Using the notation above, the Markov chain $M$ is termed {\em reversible} with respect to $\PP$ if the distribution of $(x,y)$ is the same as the distribution of $(y,x)$, or equivalently, for any $x_0,y_0$ in $X$, \[\PP(\{x_0\}) M(x_0,y_0) = \PP(\{y_0\}) M(y_0,x_0).\] Reversible Markov chains can be used to construct exchangeable pairs as follows. Let $M$ be a reversible Markov chain on a finite set $X$ with respect to a probability measure $\PP$. Consider also an $\RR^d$-valued function $W$ on $X$. We consider a random element $x$ distributed with respect to $\PP$ and $y$ obtained by applying the Markov chain $M$ to $x$. Then, directly from the definition, one sees that $(W(x),W(y))$ is exchangeable. So, to construct an exchangeable pair for $\tilde{W}_d$, it is enough to construct a reversible Markov chain with respect to Jack measure. We present now Fulman's construction of such a Markov chain. Let $\tau \vdash n-1$ and $\lambda \vdash n$.
If $\tau$ is not contained in $\lambda$ (as Young diagrams), then define $\phi^{(\alpha)}(\lambda/\tau) =0$. Otherwise, denote by $\lambda / \tau$ the box which is in $\lambda$ and not in $\tau$. Let $C_{\lambda / \tau}$ ($R_{\lambda / \tau}$, respectively) be the column (row, respectively) of $\lambda$ that contains $\lambda / \tau$. We define \[ \phi^{(\alpha)}(\lambda/\tau) = \prod_{\Box \in C_{\lambda / \tau} \setminus R_{\lambda / \tau}} \frac{(\alpha a_\lambda(\Box) + \ell_\lambda(\Box) +1) (\alpha a_\tau(\Box) + \ell_\tau(\Box)+\alpha)}{(\alpha a_\lambda(\Box) + \ell_\lambda(\Box) +\alpha) (\alpha a_\tau(\Box) + \ell_\tau(\Box)+1)}.\] Let \[ c_\lambda^{(\alpha)} = \prod_{\Box \in \lambda} (\alpha a(\Box) + \ell(\Box) +1)\] and \[ (c'_\lambda)^{(\alpha)} = \prod_{\Box \in \lambda} (\alpha a(\Box) + \ell(\Box) +\alpha).\] We recall that $j_\lambda^{(\alpha)}=c_\lambda^{(\alpha)}(c'_\lambda)^{(\alpha)}.$ For $\lambda, \rho \vdash n$ we define two functions: \begin{equation} \label{eq:TransitionM} M^{(\alpha)}(\lambda,\rho) = \frac{(c'_\lambda)^{(\alpha)}}{n \alpha c_\rho^{(\alpha)}} \sum_{\tau \vdash n-1} \frac{\phi^{(\alpha)}(\lambda/\tau)\phi^{(\alpha)}(\rho/\tau) c_\tau^{(\alpha)}}{(c'_\tau)^{(\alpha)}} \end{equation} and \begin{equation} \label{eq:TransitionL} L^{(\alpha)}(\lambda,\rho) = \frac{1}{\alpha^n n! j_\rho^{(\alpha)}}\sum_{\mu \vdash n}(z_\mu)^2 \alpha^{2\ell(\mu)}\theta_\mu(\lambda)\theta_\mu(\rho)\theta_\mu((n-1,1)). \end{equation} As explained by Fulman \cite{FulmanFluctuationChA2}, both $M^{(\alpha)}$ and $L^{(\alpha)}$ are defined as deformations of a certain Markov chain which is reversible with respect to the Plancherel measure. Roughly speaking, this Markov chain removes one box from a given Young diagram with some probability and adds another box with some probability, so as to obtain a new Young diagram of the same size as the one from which we started.
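Let us note that the reversibility of $L^{(\alpha)}$ with respect to Jack measure (item (3) of the proposition below) is particularly transparent from formula \eqref{eq:TransitionL}. Indeed, recalling the formula $\PP_n^{(\alpha)}(\lambda) = \frac{\alpha^n n!}{j_\lambda^{(\alpha)}}$ for Jack measure, one gets
\[ \PP_n^{(\alpha)}(\lambda)\, L^{(\alpha)}(\lambda,\rho) = \frac{1}{j_\lambda^{(\alpha)} j_\rho^{(\alpha)}} \sum_{\mu \vdash n} (z_\mu)^2 \alpha^{2\ell(\mu)} \theta_\mu(\lambda)\theta_\mu(\rho)\theta_\mu((n-1,1)), \]
which is manifestly symmetric in $\lambda$ and $\rho$.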
Fulman \cite{FulmanFluctuationChA2} proved the following: \begin{proposition}{\cite[Section 4]{FulmanFluctuationChA2}} \label{prop:FulmanComputations} \begin{enumerate} \item If $\rho \neq \lambda$ then \[ L^{(\alpha)}(\lambda,\rho) = \frac{\alpha(n-1) + 1}{\alpha(n-1)} M^{(\alpha)}(\lambda,\rho);\] \item Let $\lambda \vdash n$. Then \[ \sum_{\rho \vdash n} L^{(\alpha)}(\lambda, \rho) = \sum_{\rho \vdash n} M^{(\alpha)}(\lambda, \rho) = 1; \] \item $L^{(\alpha)}$ (hence $M^{(\alpha)}$ as well) is reversible with respect to Jack measure. \end{enumerate} \end{proposition} For more details about this construction (and in particular, the intuition behind it), we refer to Fulman \cite{FulmanFluctuationChA2}. \subsection{Checking hypotheses} \label{subsec:CheckingHypotheses} Recall that we have defined \[ W_k = n^{-k/2} \sqrt{k}^{-1} \Ch_{(k)} \] and the random vector \[ \tilde{W}_d = (W_2,\dots,W_{d+1})\] on the probability space of Young diagrams of size $n$ endowed with Jack measure. Let $\lambda$ be a random partition distributed according to Jack measure, and $\lambda^*$ ($\lambda'$, respectively) be obtained from $\lambda$ by applying the Markov chain $M^{(\alpha)}$ ($L^{(\alpha)}$, respectively). By a small abuse of notation, set $\tilde{W}_d:=\tilde{W}_d(\lambda)$, $\tilde{W}_d^*:=\tilde{W}_d(\lambda^*)$ and $\tilde{W}'_d:=\tilde{W}_d(\lambda')$. We shall now prove that, for any $d \in \N$, the pair $(\tilde{W}_d, \tilde{W}_d^*)$ of random vectors satisfies the conditions of Theorem \ref{theo:MultivariateStein}.\medskip First, note that $\esper_n^{(\alpha)}(\tilde{W}_d)=0$ by equation~\eqref{eq:exp_Ch}. We now verify the hypothesis involving $(\esper_n^{(\alpha)})^{\tilde{W}_d}(W_k^*)$. Let us begin with two known technical statements about Jack polynomials.
\begin{lemma} \label{lem:identities} \begin{enumerate} \item{\cite[Page 382]{Macdonald1995}} \label{eq:orthogonality} \[ \sum_{\rho \vdash n} \frac{\theta_\mu(\rho)\theta_\nu(\rho)}{j^{(\alpha)}_\rho} = \frac{\delta_{\mu,\nu}}{z_\mu \alpha^{\ell(\mu)}};\] \item{\cite[Page 107]{Stanley1989}} \label{eq:(n-1,1)} \[ \theta_\mu((n-1,1)) = \frac{\alpha^{n-\ell(\mu)}n!}{z_\mu}\frac{(\alpha(n-1)+1)m_1(\mu) - n}{\alpha n(n-1)}.\] \end{enumerate} \end{lemma} Recall that $(\esper_n^{(\alpha)})^{\tilde{W}_d}$ denotes the conditional expectation given $\tilde{W}_d$. Similarly, we denote by $(\esper_n^{(\alpha)})^{\lambda}$ the conditional expectation given $\lambda$. \begin{proposition} \label{prop:SteinCondition1} Let $d \in \N$ and let $2 \leq k \leq d+1$. Then, one has \begin{align*} (\esper_n^{(\alpha)})^{\tilde{W}_d}(W_k^*) &= (\esper_n^{(\alpha)})^{\lambda}(W_k^*) = \left(1-\frac{k}{n}\right)W_k ; \\ (\esper_n^{(\alpha)})^{\tilde{W}_d}(W_k') &= (\esper_n^{(\alpha)})^{\lambda}(W_k') = \left(1-\frac{k(\alpha(n-1)+1)}{\alpha n(n-1)}\right)W_k. \end{align*} \end{proposition} \begin{proof} By definition, \[(\esper_n^{(\alpha)})^{\lambda}(W_k^*)= \sum_{\lambda^*} M(\lambda,\lambda^*) W_k^*(\lambda^*).\] From \cite[Proposition 6.2.]{FulmanFluctuationChA2}, we have that $(\theta_\mu(\lambda))_{\lambda \vdash n}$ is an eigenvector of $M^{(\alpha)}$ with eigenvalue \[d_\mu := 1 + \frac{\alpha(n-1)}{\alpha(n-1)+1}\left( \frac{z_\mu}{\alpha^{n-\ell(\mu)}n!}\theta_\mu((n-1,1)) - 1\right). \] Using Lemma~\ref{lem:identities}-(2), this eigenvalue can be simplified to (surprisingly, this was not noticed by Fulman) \[d_\mu= \frac{(\alpha(n-1)+1)m_1(\mu)}{n(\alpha (n-1) + 1)} = \frac{m_1(\mu)}{n}.\] As $(W_k^*(\lambda))_{\lambda \vdash n}$ is a multiple of $(\theta_\mu(\lambda))_{\lambda \vdash n}$, it is also an eigenvector of $M^{(\alpha)}$, with the same eigenvalue. 
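For the reader's convenience, let us detail this simplification of the eigenvalue. Substituting the expression from Lemma~\ref{lem:identities}-(2) into the definition of $d_\mu$ gives
\[ d_\mu = 1 + \frac{\alpha(n-1)}{\alpha(n-1)+1} \left( \frac{(\alpha(n-1)+1)m_1(\mu)-n}{\alpha n(n-1)} - 1 \right) = 1 + \frac{(\alpha(n-1)+1)m_1(\mu) - n - \alpha n(n-1)}{n(\alpha(n-1)+1)}, \]
and since $n + \alpha n(n-1) = n(\alpha(n-1)+1)$, the numerator of the last fraction equals $(\alpha(n-1)+1)(m_1(\mu)-n)$, whence $d_\mu = m_1(\mu)/n$.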
Hence, \[(\esper_n^{(\alpha)})^{\lambda}(W_k^*) (\lambda)= d_{(k,1^{n-k})} W_k^*(\lambda).\] For $\mu=(k,1^{n-k})$, we have $d_{(k,1^{n-k})}=1-\frac{k}{n}$, which finishes the proof of the second equality of the first statement. In particular, we see that $(\esper_n^{(\alpha)})^{\lambda}(W_k^*)$ depends only on $W_k$ and hence on $\tilde{W}_d$. As, conversely, $\tilde{W}_d$ is determined by $\lambda$, one has \[(\esper_n^{(\alpha)})^{\tilde{W}_d}(W_k^*) = (\esper_n^{(\alpha)})^{\lambda}(W_k^*). \] The statement for $W_k'$ follows easily from the one for $W_k^*$, using Proposition~\ref{prop:FulmanComputations}. \end{proof} \begin{corollary} \label{cor:SteinCondition1} Let $d \in \N$. Then \[ (\esper_n^{(\alpha)})^{\tilde{W}_d}(\tilde{W}_d^* - \tilde{W}_d) = -\Lambda \tilde{W}_d, \] with $\Lambda_{i,j} = \delta_{i,j}\frac{i+1}{n}$. In particular, with the notation of Theorem~\ref{theo:MultivariateStein}, \[ \lambda^{(i)} = \frac{n}{i+1}.\] \end{corollary} \begin{proof} It is a straightforward consequence of Proposition \ref{prop:SteinCondition1}. \end{proof} Consider now $\varSigma = \esper_n^{(\alpha)}(\tilde{W}_d\tilde{W}_d^t)$ as in the statement of Theorem~\ref{theo:MultivariateStein}. This matrix is symmetric by definition, but one has to check that it is positive definite. For a matrix $A$, let $\lVert A \rVert:=\max_{i,j} |A_{i,j}|$ denote the supremum norm on matrices. \begin{proposition} \label{prop:SteinCondition2} There exists a constant $A_{d,\alpha}$ which depends only on $d$ and $\alpha$ such that for any $n \geq A_{d,\alpha}$ the matrix $\varSigma$ is positive definite.
Moreover, \[ \lVert \varSigma^{1/2} - \Id \rVert = O(n^{-1/2}).\] \end{proposition} \begin{proof} Strictly from the definition we have that \begin{multline} \varSigma_{i,j} = \esper_n^{(\alpha)} \left( \frac{1}{\sqrt{i+1}\sqrt{j+1} n^{(i+j+2)/2}} \Ch_{(i+1)} \Ch_{(j+1)} \right) \\ = \frac{1}{\sqrt{i+1}\sqrt{j+1} n^{(i+j+2)/2}} \sum_\mu g_{(i+1),(j+1);\mu} \esper_n^{(\alpha)}(\Ch_\mu). \end{multline} Since \[\esper_{\PP_n^{(\alpha)}}(\Ch_\mu)=\begin{cases} (n)_k & \text{if }\mu=1^k \text{ for some }k \leq n,\\ 0 & \text{otherwise,} \end{cases}\] we have that \[ \varSigma_{i,j} = \frac{1}{\sqrt{i+1}\sqrt{j+1} n^{(i+j+2)/2}} \sum_l g_{(i+1),(j+1);(1^l)} (n)_l, \] and using items \eqref{eq:SC-1}, \eqref{eq:SC-2} and \eqref{eq:SC-3} of Lemma \ref{lem:SC}, we have that \[ \varSigma_{i,j} = \delta_{i,j} + O(n^{-1/2}).\] In other terms, \[ \lVert \varSigma - \Id \rVert = O(n^{-1/2}).\] As the set of positive definite matrices is an open subset of the space of symmetric matrices, this implies that $\varSigma$ is positive definite for $n$ big enough. Besides, the map $A \mapsto \sqrt{A}$ is differentiable on this open set (its differential at $\Id$ is $B \mapsto B/2$), which implies the bound $ \lVert \varSigma^{1/2} - \Id \rVert = O(n^{-1/2})$. \end{proof} \subsection{Error term} \label{subsec:ErrorTerm} In the previous subsection, we have checked that the pair $(\tilde{W}_d, \tilde{W}_d^*)$ of random vectors satisfies the assumptions of Theorem \ref{theo:MultivariateStein}. In order to prove that the random vector $\tilde{W}_d$ is asymptotically Gaussian, we need to show that the quantities $A$ and $B$ from Theorem \ref{theo:MultivariateStein} vanish as $n \to \infty$. This section is devoted to making these calculations.
\begin{lemma} \label{lem:fourth_moment} The following inequality holds: \begin{multline} \Var_n^{(\alpha)}\left((\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i - W_i^*)(W_j - W_j^*)\right) \leq \frac{1}{ij n^{i+j+2}}\\ \times \left(\sum_{\mu_1, \mu_2, l \atop {\tiny |\mu_1|+\ell(\mu_1) \le i+j \atop |\mu_2|+\ell(\mu_2) \le i+j} } H^{(i,j)}_{\mu_1, \mu_2; (1^l)} (n)_l - (i+j)^2\left(\sum_{l \ge 0} g_{(i), (j); (1^l)} (n)_l \right)^2 \right), \end{multline} where \[ H^{(i,j)}_{\mu_1, \mu_2; \mu} := (i+j-|\mu_1|+m_1(\mu_1))(i+j-|\mu_2|+m_1(\mu_2))g_{(i),(j); \mu_1}g_{(i),(j); \mu_2}g_{\mu_1,\mu_2; \mu}. \] \end{lemma} Note that the sums in the lemma are clearly finite. \begin{proof} Following Fulman \cite[Proof of Proposition 6.4]{FulmanFluctuationChA2}, Jensen's inequality for conditional expectations, together with the fact that $\tilde{W}_d$ is determined by $\lambda$ (so that the $\sigma$-field generated by $\tilde{W}_d$ is contained in the one generated by $\lambda$), implies that \[ \esper_n^{(\alpha)}\left((\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i - W_i^*)(W_j - W_j^*)\right)^2 \leq \esper_n^{(\alpha)}\left((\esper_n^{(\alpha)})^\lambda(W_i - W_i^*)(W_j - W_j^*)\right)^2. \] Fix a partition $\lambda$ of $n$. We have, by Proposition~\ref{prop:FulmanComputations}, that \begin{multline} \label{eq:blablabla} (\esper_n^{(\alpha)})^\lambda(W_i^* - W_i)(W_j^* - W_j) = \frac{\alpha(n-1)}{\alpha(n-1)+1}(\esper_n^{(\alpha)})^\lambda(W_i' - W_i)(W_j' - W_j)\\ = \frac{\alpha(n-1)}{\alpha(n-1)+1}\left( (\esper_n^{(\alpha)})^\lambda(W_i'W_j') - (\esper_n^{(\alpha)})^\lambda(W_i')W_j - (\esper_n^{(\alpha)})^\lambda(W_j')W_i + W_iW_j \right)\\ = \frac{\alpha(n-1)}{\alpha(n-1)+1}\left( (\esper_n^{(\alpha)})^\lambda(W_i'W_j') + \left( \frac{(i+j)(\alpha(n-1)+1)}{\alpha n(n-1)}-1\right) W_iW_j \right), \end{multline} where the last equality follows from Proposition \ref{prop:SteinCondition1}. But the product $W_i W_j$ expands as \begin{equation} W_i W_j = \frac{1}{\sqrt{ij} n^{(i+j)/2}} \sum_{|\mu| \leq i+j+2} g_{(i),(j); \mu} \Ch_\mu.
\label{eq:prod_W} \end{equation} Thus, strictly from the definition of $L^{(\alpha)}$, one has \begin{multline} (\esper_n^{(\alpha)})^\lambda(W_i'W_j') = \sum_{\rho \vdash n} L^{(\alpha)}(\lambda,\rho) W_i(\rho) W_j(\rho) \\= \sum_{\tau \vdash n}\theta_\tau(\lambda)\theta_\tau((n-1,1)) \frac{(z_\tau)^2\alpha^{2\ell(\tau)}}{\alpha^n n!\sqrt{ij} n^{(i+j)/2}} \sum_{|\mu| \leq i+j+2} g_{(i),(j); \mu} \sum_{\rho \vdash n} \frac{\Ch_\mu(\rho)\theta_\tau(\rho)}{j_\rho^{(\alpha)}}. \end{multline} We may assume $n \ge i+j$. Recall that $\Ch_\mu(\rho)$ is a multiple of $\theta_{\mu\cup 1^{n-|\mu|}}(\rho)$ and, hence, using Lemma~\ref{lem:identities} \eqref{eq:orthogonality}, only the terms corresponding to $\tau=\mu\cup 1^{n-|\mu|}$ survive and we get: \begin{multline} (\esper_n^{(\alpha)})^\lambda(W_i'W_j') = \frac{1}{\sqrt{ij} n^{(i+j)/2}} \sum_{|\mu| \leq i+j+2} g_{(i),(j); \mu} \Ch_\mu(\lambda)\\ \theta_{\mu\cup 1^{n-|\mu|}}((n-1,1)) \frac{z_{\mu\cup 1^{n-|\mu|}}\alpha^{\ell({\mu\cup 1^{n-|\mu|}})}}{\alpha^n n!}. \end{multline} We now apply Lemma~\ref{lem:identities} \eqref{eq:(n-1,1)}: \begin{multline} (\esper_n^{(\alpha)})^\lambda(W_i'W_j') = \frac{1}{\sqrt{ij} n^{(i+j)/2}} \sum_{|\mu| \le i+j+2} g_{(i),(j); \mu} \Ch_\mu(\lambda) \\ \times \frac{(\alpha(n-1)+1)(n-|\mu|+m_1(\mu))-n}{\alpha n(n-1)}. \end{multline} Substituting the above equation and equation \eqref{eq:prod_W} into equation \eqref{eq:blablabla} and simplifying, we obtain \begin{multline} \label{eq:2nd_moment_wi*-wi} (\esper_n^{(\alpha)})^\lambda(W_i^* - W_i)(W_j^* - W_j) \\ = \frac{1}{\sqrt{ij} n^{(i+j)/2}} \sum_{|\mu| \le i+j+2} \frac{i+j-|\mu|+m_1(\mu)}{n} g_{(i),(j); \mu} \Ch_\mu(\lambda). \end{multline} Notice that $g_{(i),(j); \mu}=0$ for $|\mu|+\ell(\mu) > i+j$, unless $\mu=(i,j)$ (Lemma \ref{lem:SC} \eqref{eq:SC-4}). In the latter case ($\mu=(i,j)$), the numerical factor $i+j-|\mu|+m_1(\mu)$ vanishes.
This shows that the summation in equation~\eqref{eq:2nd_moment_wi*-wi} can be restricted to partitions $\mu$ with $|\mu|+\ell(\mu) \le i+j$. Squaring and taking the expectation, we get \begin{multline} \esper_n^{(\alpha)}\left((\esper_n^{(\alpha)})^\lambda(W_i^* - W_i)(W_j^* - W_j) \right)^2 \\ = \frac{1}{ij n^{i+j+2}}\esper_n^{(\alpha)}\left( \sum_{|\mu|+\ell(\mu) \le i+j} (i+j-|\mu|+m_1(\mu)) g_{(i),(j); \mu} \Ch_\mu(\lambda)\right)^2 \\ = \frac{1}{ij n^{i+j+2}} \sum_{\mu_1, \mu_2, \mu \atop {\tiny |\mu_1|+\ell(\mu_1) \le i+j \atop |\mu_2|+\ell(\mu_2) \le i+j} } H^{(i,j)}_{\mu_1, \mu_2; \mu}\esper_n^{(\alpha)}(\Ch_\mu(\lambda)) \\ = \frac{1}{ij n^{i+j+2}} \sum_{\mu_1, \mu_2, l\atop {\tiny |\mu_1|+\ell(\mu_1) \le i+j \atop |\mu_2|+\ell(\mu_2) \le i+j} } H^{(i,j)}_{\mu_1, \mu_2; (1^l)} (n)_l, \end{multline} where $H^{(i,j)}_{\mu_1, \mu_2; \mu}$ is defined as in the statement of the lemma. The last equality comes from the easy formula for the expectation of $\Ch_\mu$, see equation~\eqref{eq:exp_Ch}. Let us now analyse $\esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i^* - W_i)(W_j^* - W_j)\right)$.
After expanding the product, each term can be dealt with as follows: \begin{align*} \esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}( W_i^* W_j^*) \right)&= \esper_n^{(\alpha)}(W_i^* W_j^*)= \esper_n^{(\alpha)}(W_i W_j), \\ \esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}( W_i^* W_j) \right)&= \esper_n^{(\alpha)}\left(W_j (\esper_n^{(\alpha)})^{\tilde{W}_d}( W_i^*)\right)=\left( 1-\frac{i}{n} \right) \esper_n^{(\alpha)} (W_i W_j),\\ \esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}( W_i W_j^*) \right)&= \left( 1-\frac{j}{n} \right) \esper_n^{(\alpha)} (W_i W_j),\\ \esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}( W_i W_j) \right)&= \esper_n^{(\alpha)}(W_i W_j). \end{align*} The first equation comes from the fact that $\tilde{W}_d^*$ and $\tilde{W}_d$ have the same distribution, while the second one is a consequence of Proposition~\ref{prop:SteinCondition1}. The third one is similar to the second one, and the last one is simply the tower property of conditional expectation. Therefore \begin{multline} \left(\esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i^* - W_i)(W_j^* - W_j)\right) \right)^2\\ = \left( \frac{i+j}{n} \esper_n^{(\alpha)}(W_iW_j) \right)^2 = \frac{(i+j)^2}{ij n^{i+j+2}}\left(\sum_\mu g_{(i), (j); \mu} \esper_n^{(\alpha)}(\Ch_\mu)\right)^2\\ =\frac{(i+j)^2}{ij n^{i+j+2}} \left(\sum_l g_{(i), (j);(1^l)} (n)_l\right)^2, \end{multline} where we used equations~\eqref{eq:prod_W} and \eqref{eq:exp_Ch} in the second and third equalities.
We finish the proof with the following inequality: \begin{multline} \Var_n^{(\alpha)}\left((\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i - W_i^*)(W_j - W_j^*)\right) \\ = \esper_n^{(\alpha)}\left((\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i^* - W_i)(W_j^* - W_j) \right)^2 - \left(\esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i^* - W_i)(W_j^* - W_j)\right) \right)^2 \\ \leq \esper_n^{(\alpha)}\left((\esper_n^{(\alpha)})^\lambda(W_i^* - W_i)(W_j^* - W_j) \right)^2 - \left(\esper_n^{(\alpha)}\left( (\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i^* - W_i)(W_j^* - W_j)\right) \right)^2. \end{multline} \end{proof} \begin{proposition} \label{prop:ErrorTermA} Let $d \in \N$ and let \[ A = \sum_{2 \leq i,j \leq d+1} \frac{n}{i} \sqrt{\Var_n^{(\alpha)}\left((\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i - W_i^*)(W_j - W_j^*)\right)}.\] Then $A = O(n^{-1/2})$. \end{proposition} \begin{proof} By Lemma \ref{lem:fourth_moment} we need to estimate the following sum: \begin{equation}\label{eq:to_bound} \left( \sum_{\mu_1, \mu_2, l \ge 1 \atop {\tiny |\mu_1|+\ell(\mu_1) \le i+j \atop |\mu_2|+\ell(\mu_2) \le i+j} } H^{(i,j)}_{\mu_1, \mu_2; (1^l)}(n)_l - (i+j)^2\sum_{l,k} g_{(i), (j); (1^l)}g_{(i), (j); (1^k)} (n)_l(n)_k \right)^{1/2}. \end{equation} Let us first consider the second sum, which is simpler. Suppose that $i \neq j$. Then, by Lemma \ref{lem:SC} \eqref{eq:SC-2}, the summands corresponding to $l \ge (i+j)/2$ or $k \ge (i+j)/2$ vanish and the sum is $O(n^{i+j-1})$. Consider now the case $i=j$ (recall that $i,j \ge 2$). This time we use Lemma \ref{lem:SC} \eqref{eq:SC-1}: the summands corresponding to $l \ge i+1$ or $k \ge i+1$ vanish. Therefore there are no summands of order $n^{i+j+1}$. The unique summand of order $n^{i+j}$ (corresponding to $l=k=i$) is $i^2 \, (n)_i^2$ by Lemma \ref{lem:SC} \eqref{eq:SC-3}. Finally, we have that \begin{equation}\label{eq:estimate_sum_g2} (i+j)^2\sum_{l,k} g_{(i), (j); (1^l)}g_{(i), (j); (1^k)} (n)_l(n)_k = \delta_{i,j} 4\, i^4 n^{i+j} + O(n^{i+j-1}).
\end{equation} Let us describe the terms corresponding to $l \ge i+j$ in the first sum of \eqref{eq:to_bound}. By Lemma \ref{lem:SC} \eqref{eq:SC-1}, $g_{\mu_1,\mu_2;(1^l)}$, and hence $H^{(i,j)}_{\mu_1,\mu_2;(1^l)}$, vanishes unless \[ |\mu_1|+\ell(\mu_1) + |\mu_2|+\ell(\mu_2) \ge 2l \ge 2(i+j).\] Comparing this with the conditions under the first summation symbol of \eqref{eq:to_bound}, we have, in fact, equalities instead of inequalities above, \emph{i.e.} \[ l = i+j = |\mu_1|+\ell(\mu_1) = |\mu_2|+\ell(\mu_2).\] Moreover, in this case, by Lemma \ref{lem:SC} \eqref{eq:SC-1}, we have that $g_{\mu_1, \mu_2; (1^{i+j})} = 0$ unless $\mu_1 \cup \mu_2 = 1^{i+j}$, which means that $\mu_1 = \mu_2 = (1^{(i+j)/2})$ (in particular, $i+j$ must be even). Then, by Lemma \ref{lem:SC} \eqref{eq:SC-2}, one has that $g_{(i),(j);(1^{(i+j)/2})} = 0$ unless $i=j$. Therefore, $H^{(i,j)}_{\mu_1, \mu_2; (1^l)}=0$ if $i \neq j$ and $l \ge i+j$. Additionally, using the definition of $H^{(i,j)}_{\mu_1,\mu_2;\mu}$, the observations above and Lemma \ref{lem:SC} \eqref{eq:SC-3}, one has that $H^{(i,i)}_{(1^i), (1^i); (1^{2i})} = 4 \, i^4$. Summing up, \begin{equation}\label{eq:estimate_sum_H} \sum_{\mu_1, \mu_2, l}H^{(i,j)}_{\mu_1, \mu_2; (1^l)}(n)_l = \delta_{i,j} 4\, i^4 n^{i+j} + O(n^{i+j-1}). \end{equation} Comparing equations~\eqref{eq:estimate_sum_g2} and \eqref{eq:estimate_sum_H}, we get \begin{multline} \left(\sum_{\mu_1, \mu_2, l}H^{(i,j)}_{\mu_1, \mu_2; (1^l)}(n)_l - (i+j)^2\sum_{l,k} g_{(i), (j); (1^l)}g_{(i), (j); (1^k)} (n)_l(n)_k \right)^{1/2}\\ = O(n^{(i+j-1)/2}), \label{eq:TechnicalBound} \end{multline} which implies that \[ A = \sum_{2 \leq i,j \leq d+1} \frac{1}{i\sqrt{ij} n^{(i+j)/2}}O(n^{(i+j-1)/2}) = O(n^{-1/2}),\] which finishes the proof.
\end{proof} \begin{lemma} \label{lem:momentsDegree} For any $k \geq 2$ and $\lambda \vdash n$, there exists $B_{\alpha,k} \in \RR$, depending only on $k$ and $\alpha$, such that \[ |M_k(\lambda)| \leq B_{\alpha,k} \max(\lambda_1, \lambda_1')^k .\] \end{lemma} \begin{proof} As explained in Section \ref{SectDefKerov}, $M^{(\alpha)}_k(\lambda)=M^{(1)}_k(A_\alpha(\lambda))$ is the $k$-th moment of the transition measure of the anisotropic diagram $A_\alpha(\lambda)$. But this measure is supported on the contents of the inner corners of $A_\alpha(\lambda)$. All these contents are clearly bounded in absolute value by $\max(\sqrt{\alpha} \, \lambda_1, \sqrt{\alpha}^{-1} \lambda'_1)$. Hence the $k$-th moment of the measure is bounded by the $k$-th power of this number, which proves the lemma. \end{proof} \begin{proposition} \label{prop:ErrorTermB} Let $d \in \N$ and let \[ \beta := \sum_{2 \leq i,j,k \leq d+1} \frac{n}{i} |(W_i^* - W_i)(W_j^* - W_j)(W_k^* - W_k)|.\] Then $B := \esper_n^{(\alpha)}(\beta) = O(n^{-1/2})$. \end{proposition} \begin{proof} Fix some integer $k \ge 2$. From the definition of $M^{(\alpha)}$, we know that $\lambda^*$ is obtained from $\lambda$ by removing a box from the diagram of $\lambda$ and reattaching it somewhere. This means that $\lambda = \tau^{(i_1)}$ and $\lambda^* = \tau^{(i_2)}$ for some $\tau \vdash n-1$. This implies that \[ \left|W_k^* - W_k\right| \leq \sqrt{k}^{-1} n^{-k/2} \left| \Ch_{(k)}(\tau^{(i_1)}) - \Ch_{(k)}(\tau^{(i_2)}) \right| . \] By equation \eqref{EqIng1Lassalle}, the right hand side of the above inequality is equal to \[ \sqrt{k}^{-1} n^{-k/2}\left|\sum_\rho a^{(k)}_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} b_{g,\pi}^\rho(\gamma)\ M_\pi(\tau)\ \left(z_{i_1}^g - z_{i_2}^g\right) \right)\right|,\] where $|\pi| \leq |\rho| - g - 2$. By Proposition \ref{PropBound1} we know that $a^{(k)}_\rho = 0$ for $|\rho| > k+1$, hence $a^{(k)}_\rho b_{g,\pi}^\rho(\gamma) = 0$ for $|\pi| > k-g-1$.
But $z_{i_1}^g$ and $z_{i_2}^g$ are bounded by $O_{\alpha,g}(\max(\lambda_1, \lambda_1')^g)$, being $\alpha$-contents of boxes or corners of $\lambda$. Thanks to Lemma \ref{lem:momentsDegree}, this implies that there exists some $C_{\alpha,k} \in \RR$, depending only on $k$ and $\alpha$, such that \[ \left|W_k^* - W_k\right| \leq n^{-k/2} C_{\alpha,k} \max(\lambda_1, \lambda_1')^{k-1}.\] If $\lambda_1 \leq 2 e\sqrt{\frac{n}{\alpha}}$ and $\lambda_1' \leq 2 e\sqrt{n\alpha}$, then, for any integers $i,j,k \ge 2$, one has \begin{multline}\label{eq:BoundTripleProductGoodCase} \left|(W_i^* - W_i)(W_j^* - W_j)(W_k^* - W_k)\right| \leq |(W_i^* - W_i)| |(W_j^* - W_j)| |(W_k^* - W_k)|\\ = O(n^{-3/2}) . \end{multline} Summing over $i$, $j$ and $k$, we get $\beta = O(n^{-1/2})$. Otherwise, we use the obvious bounds $\lambda_1,\lambda'_1 \le n$, and we get \begin{multline} \left|(W_i^* - W_i)(W_j^* - W_j)(W_k^* - W_k)\right| \leq |(W_i^* - W_i)| |(W_j^* - W_j)| |(W_k^* - W_k)|\\ = O(n^{(i+j+k)/2-3}), \end{multline} so that $\beta = O(n^{3(d+1)/2-2})$. By Lemma \ref{lem:FiniteSupport}, the probability that the second case occurs is exponentially small. Hence the bound from the first case holds in expectation. This finishes the proof. \end{proof} \subsection{Proof of the central limit theorem} We are now ready to prove the main result of this section: \begin{proof}[Proof of Theorem \ref{theo:FluctuationsJackCharacters}] Let $\tilde{\Xi}_d = (\Xi_2,\dots,\Xi_{d+1})$. In order to show that \[ \left( \frac{\Ch_{(k)}}{ \sqrt{k} n^{k/2}} \right)_{k=2,3,\dots} \xrightarrow{d} \left( \Xi_k \right)_{k=2,3,\dots} \] as $n \to \infty$, it is enough to show that, for all $d \in \N$ and for any smooth function $h$ on $\RR^d$ with all derivatives bounded, one has, as $n \to \infty$: \[ \left| \esper_n^{(\alpha)}h(\tilde{W}_d) - \esper h(\tilde{\Xi}_d) \right| \to 0.\] Fix a positive integer $d$ and a function $h:\RR^d \to \RR$ as above.
Let $\varSigma = \esper_n^{(\alpha)}(\tilde{W}_d\tilde{W}_d^t)$. As $h$ has its first derivative bounded, one has \[\left| \esper \left( h(\tilde{\Xi}_d) - h(\varSigma^{1/2}\tilde{\Xi}_d) \right) \right| \le |h|_1 \cdot d\, \lVert \Id - \varSigma^{1/2} \rVert \cdot \esper (\lVert \tilde{\Xi}_d \rVert).\] But $|h|_1$ and $\esper (\lVert \tilde{\Xi}_d \rVert)$ are fixed finite numbers, while $\lVert \Id - \varSigma^{1/2} \rVert$ is $O(n^{-1/2})$ by Proposition \ref{prop:SteinCondition2}. Hence, \begin{equation} \left| \esper \left( h(\tilde{\Xi}_d) - h(\varSigma^{1/2}\tilde{\Xi}_d) \right) \right| = O(n^{-1/2}). \label{eq:forget_Sigma} \end{equation} By Corollary \ref{cor:SteinCondition1} and Proposition \ref{prop:SteinCondition2} we know that the pair $(\tilde{W}_d, \tilde{W}_d^*)$ satisfies all hypotheses of Theorem \ref{theo:MultivariateStein}. Using this theorem, we get \[ \left| \esper_n^{(\alpha)}h(\tilde{W}_d) - \esper h(\varSigma^{1/2} \tilde{\Xi}_d)\right| \leq |h|_2\frac{A}{4} + |h|_3\frac{B}{12},\] where \[ A = \sum_{2 \leq i,j \leq d+1} \frac{n}{i} \sqrt{\Var_n^{(\alpha)}\left((\esper_n^{(\alpha)})^{\tilde{W}_d}(W_i - W_i^*)(W_j - W_j^*)\right)}\] and \[ B = \sum_{2 \leq i,j,k \leq d+1} \frac{n}{i} \esper_n^{(\alpha)} |(W_i^* - W_i)(W_j^* - W_j)(W_k^* - W_k)|.\] Propositions \ref{prop:ErrorTermA} and \ref{prop:ErrorTermB} imply that \[ \left| \esper_n^{(\alpha)}h(\tilde{W}_d) - \esper h(\varSigma^{1/2} \tilde{\Xi}_d)\right| = O(n^{-1/2}).\] Together with equation \eqref{eq:forget_Sigma}, this finishes the proof. \end{proof} \subsection{Speed of convergence} We now use \cite[Corollary 3.1]{ReinertRollin2009}, which gives an estimate for \[ \left| \esper_n^{(\alpha)}h(\tilde{W}_d) - \esper h(\tilde{\Xi}_d) \right|\] for non-smooth test functions $h$. In particular, we shall consider functions $h$ in the set $\HHH$ of indicator functions of convex sets. We have the following result, which is stronger than Theorem~\ref{theo:SpeedConvergence}.
\begin{theorem} For any integer $d \ge 2$, we have \[\sup_{h \in \HHH} |\esper_n^{(\alpha)}\, h(\tilde{W}_d) - \esper\, h(\tilde{\Xi}_d)| =O\big(n^{-1/4} \big).\] \label{theo:SpeedConvergenceConvex} \end{theorem} \begin{proof} Fix an integer $d \ge 2$ and consider the exchangeable pair $(\tilde{W}_d,\tilde{W}^*_d)$ defined as in the previous sections. As shown in Section \ref{subsec:CheckingHypotheses}, this exchangeable pair fulfills the conditions of \cite[Theorem 2.1]{ReinertRollin2009}. Besides, as mentioned in \cite[Section 3]{ReinertRollin2009}, the set $\HHH$ of functions fulfills conditions $(C1)$, $(C2)$ and $(C3)$ (with $a=2\sqrt{d}$) from this paper. Therefore, one can apply \cite[Corollary 3.1]{ReinertRollin2009} and we find that there exists a constant $\zeta=\zeta(d)$ such that \begin{equation}\label{eq:ReinertNonSmooth} \sup_{h \in \HHH} |\esper_n^{(\alpha)}\, h(\tilde{W}_d) - \esper\, h(\tilde{\Xi}_d)| \le \zeta^2 \left( \frac{A' \log(1/T')}{2} +\frac{B'}{2\sqrt{T'}} +2\sqrt{dT'} \right), \end{equation} where we define \begin{align*} \hat{\lambda}^{(i)} &:= \sum_{m=1}^d |(\varSigma^{-1/2} \Lambda^{-1} \varSigma^{1/2})_{m,i}|,\\ A'&:= \sum_{i,j} \hat{\lambda}^{(i)} \sqrt{\sum_{k,\ell} \varSigma^{-1/2}_{i,k} \varSigma^{-1/2}_{j,\ell} \Var (\esper_n^{(\alpha)})^{\tilde{W}_d} (W'_k -W_k)(W'_\ell-W_\ell)},\\ B'&:= \sum_{i,j,k} \hat{\lambda}^{(i)} \esper_n^{(\alpha)} \left| \sum_{r,s,t} \varSigma^{-1/2}_{i,r} \varSigma^{-1/2}_{j,s} \varSigma^{-1/2}_{k,t} (W'_r -W_r)(W'_s -W_s)(W'_t -W_t)\right|,\\ T'&:= \frac{1}{4d}\left( \frac{A'}{2} + \sqrt{B'\sqrt{d} + \frac{(A')^2}{4}} \right)^2. \end{align*} Comparing this with the statement in \cite{ReinertRollin2009}, note that in our case there is no remaining matrix $R$, and hence $C'=0$. Besides, for the set $\HHH$ considered here (indicator functions of convex subsets of $\RR^d$), one can choose $a=2\sqrt{d}$. We shall now describe the asymptotic behaviour of the quantities above.
Note that all sums appearing above have a fixed number of summands, since the summation index sets always consist of integers less than or equal to $d$. Recall that, in our setting, $\Lambda^{-1}$ is the diagonal matrix $(n/(i+1) \cdot \delta_{i,j})$. Besides, $\varSigma^{-1/2}$ (well-defined for $n$ big enough) is a bounded matrix (Proposition~\ref{prop:SteinCondition2}). Hence for any $i \le d$, \[\hat{\lambda}^{(i)} = O(n).\] Consider now $A'$. It was proven in Section \ref{subsec:ErrorTerm} (Lemma~\ref{lem:fourth_moment} and equation~\eqref{eq:TechnicalBound}) that, for any $k$ and $\ell$, \[|\Var (\esper_n^{(\alpha)})^{\tilde{W}_d} (W'_k -W_k)(W'_\ell-W_\ell) | = O(n^{-3}).\] Together with the bound on $\hat{\lambda}^{(i)}$ and the fact that $\varSigma^{-1/2}$ is bounded, this implies \[A'=O(n^{-1/2}).\] Consider now $B'$. We have proved that the bound \eqref{eq:BoundTripleProductGoodCase} holds in expectation, that is \[\esper_n^{(\alpha)} \left|(W'_r -W_r)(W'_s -W_s)(W'_t -W_t)\right| = O(n^{-3/2}). \] As before, the bound above on $\hat{\lambda}^{(i)}$ and the fact that $\varSigma^{-1/2}$ is bounded imply \[B'=O(n^{-1/2}).\] Combining the bounds for $A'$ and $B'$, we get $T' = O(n^{-1/2})$. Strictly from the definition of $T'$, we also get $T' \ge \frac{B'}{4\sqrt{d}}$, which implies that $\frac{B'}{2\sqrt{T'}} \le d^{1/4}\sqrt{B'}=O(n^{-1/4})$. Plugging all these estimates into equation~\eqref{eq:ReinertNonSmooth}, we get the desired result. \end{proof} \subsection{Fluctuations of other polynomial functions}\label{subsect:Fluct_NonHook} As $(\Ch_{(k)})$ forms an algebraic basis of polynomial functions, Theorem~\ref{theo:FluctuationsJackCharacters} implies that any polynomial function $F$ converges, after proper normalization, towards a multivariate polynomial evaluated in independent Gaussian variables. However, the proper order of normalization and the actual polynomial are not easy to express explicitly, as this relies on the expansion of $F$ in the $\Ch_{(k)}$ basis.
In particular, we are not able to do it for $\Ch_{\mu}$ when $\mu$ is not a hook, and hence we cannot describe the fluctuations of these random variables as was done in the case $\alpha=1$; see \cite{HoraFluctuationsCharacters} and \cite[Theorem 6.5]{IvanovOlshanski2002}. \section{Jack measure: central limit theorems for Young diagrams and transition measures} \label{sect:CLT2} In this section, we state formally and prove our fluctuation results for Young diagrams under Jack measure. We will also present a fluctuation result for the transition measures of these diagrams. Before we state our results, we need some preparations. We follow here notations from \cite[Sections 7 and 8]{IvanovOlshanski2002}. First, we define \begin{align*} u_k(x) &= U_k(x/2) = \sum_{0 \leq j \leq \lfloor k/2 \rfloor}(-1)^j\binom{k-j}{j}x^{k-2j},\\ t_k(x) &= 2T_k(x/2) = \sum_{0 \leq j \leq \lfloor k/2 \rfloor}(-1)^j\frac{k}{k-j}\binom{k-j}{j}x^{k-2j}, \end{align*} where $T_k$ and $U_k$ are respectively Chebyshev polynomials of the first and second kind. They can alternatively be defined by the following equations: \begin{align*} u_k(2 \cos(\theta)) &= \frac{\sin((k+1)\theta)}{\sin(\theta)};\\ t_k(2 \cos(\theta)) &= 2 \cos(k\theta). \end{align*} It is known that $\left(u_k(x)\right)_k$ and $\left(t_k(x)\right)_k$ form families of orthonormal polynomials with respect to the measures $\frac{\sqrt{4-x^2}}{2\pi}dx$ and $\frac{1}{2\pi \sqrt{4-x^2}}dx$, respectively, {\it i.~e.}: \begin{align*} \int^2_{-2} u_k(x)u_l(x)\frac{\sqrt{4-x^2}}{2\pi }dx &= \delta_{k,l};\\ \int^2_{-2} t_k(x)t_l(x)\frac{1}{2\pi \sqrt{4-x^2}}dx &= \delta_{k,l}. \end{align*} The measure $\frac{\sqrt{4-x^2}}{2\pi}dx$ supported on the interval $[-2,2]$ is called the \emph{semi-circular distribution} and is denoted by $\mu_{S-C}$ (see Subsection \ref{subsect:JackPlancherel}).
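For concreteness, the first few of these polynomials, as one computes directly from the explicit sums above, are \begin{align*} u_1(x) &= x, & u_2(x) &= x^2-1, & u_3(x) &= x^3-2x,\\ t_1(x) &= x, & t_2(x) &= x^2-2, & t_3(x) &= x^3-3x. \end{align*} For instance, $u_2(2\cos(\theta)) = 4\cos^2(\theta)-1 = \frac{\sin(3\theta)}{\sin(\theta)}$, in agreement with the trigonometric characterization.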
Recall from Theorem \ref{theo:RYD-limit} that the limit shape of scaled Young diagrams $$\omega\bigg(D_{1/\sqrt{n}}\big(A_\alpha(\lambda_{(n)})\big)\bigg)$$ is given by $\Omega$, where $\lambda_{(n)}$ is a random Young diagram with $n$ boxes distributed according to Jack measure. Hence, in order to study fluctuations of the random Young diagrams around the limit shape, we introduce the following map from the set of Young diagrams to the space of functions from $\RR$ to $\RR$: \[ \Delta^{(\alpha)}(\lambda)(x) := \sqrt{n}\frac{\omega\bigg(D_{1/\sqrt{n}}\big(A_\alpha(\lambda)\big)\bigg)(x) - \Omega(x)}{2}, \] where $n$ is the number of boxes of $\lambda$. Besides, it was shown by Kerov \cite{Kerov1993} that the transition measure (see Subsection \ref{subsect:TransMeasure}) of the continuous Young diagram $\Omega$ is the semi-circular distribution (see Subsection \ref{subsect:JackPlancherel}). Thus we define the following function on the set of Young diagrams with $n$ boxes with values in the space of real signed measures: \[ \widehat{\Delta}^{(\alpha)}(\lambda) := \sqrt{n}\left( \mu_{\bigg(D_{1/\sqrt{n}}\big(A_\alpha(\lambda)\big)\bigg)} - \mu_{S-C} \right) .\] As above, $n$ is the number of boxes of $\lambda$. This function describes the (scaled) difference between the transition measure of the scaled Young diagram and the limiting semi-circular measure. Now, we are ready to formulate the central limit theorem for the Jack measure. Here, we use the usual notation $[\text{condition}]$ for the indicator function of the corresponding condition. \begin{theorem} \label{theo:Fluctuations} Choose a sequence $\left(\Xi_k \right)_{k=2,3,\dots}$ of independent standard Gaussian random variables and let $\lambda_{(n)}$ be a random Young diagram of size $n$ distributed according to Jack measure.
As $n \to \infty$, we have \begin{enumerate} \item \label{item:fluctuationsOfDiagrams} a central limit theorem for Young diagrams: \[ \left( u^{(\alpha)}_{k}(\lambda_{(n)}) \right)_{k=1,2,\dots} \xrightarrow{d} \left( \frac{\Xi_{k+1}}{\sqrt{k+1}} - \frac{\gamma}{k+1} \, [k\text{ is odd}]\right)_{k=1,2,\dots}, \] where $u^{(\alpha)}_{k} (\lambda) = \int_\RR u_k(x) \Delta^{(\alpha)}(\lambda)(x)\ dx$;\medskip \item \label{item:fluctuationsOfTransition} and a central limit theorem for transition measures: \[ \left( t^{(\alpha)}_{k}(\lambda_{(n)}) \right)_{k=3,4,\dots} \xrightarrow{d} \left( \sqrt{k-1}\, \Xi_{k-1} - \gamma \, [k\text{ is odd}] \right)_{k=3,4,\dots}, \] where $t^{(\alpha)}_{k} (\lambda) = \int_\RR t_k(x) \widehat{\Delta}^{(\alpha)}(\lambda)(dx)$. \end{enumerate} \end{theorem} \begin{remark} Notice that, for $\gamma=0$ ({\it i.e.} $\alpha=1$), this theorem specializes to Kerov's central limit theorems for Plancherel measure \cite{Kerov1993, IvanovOlshanski2002}. \end{remark} \subsection{Extended algebra of polynomial functions and gradations} The proof combines our fluctuation results for Jack characters and arguments from the proof of Kerov, Ivanov and Olshanski for the case $\alpha=1$. In particular, we shall compare some $\alpha$-polynomial functions with their counterparts for $\alpha=1$. Therefore, throughout this section, we will make the dependence on $\alpha$ explicit and we use the notations $\Ch_\mu^{(\alpha)}$, $M_k^{(\alpha)}$, and so on. The only exception to this is $\Ch_{(1)}$ as, for any $\alpha$, the function $\Ch_{(1)}$ assigns to a Young diagram its number of boxes. To prove Theorem \ref{theo:Fluctuations}, it is convenient to extend the algebra $\Pola$ in the same way as Ivanov and Olshanski \cite{IvanovOlshanski2002}: we adjoin to it the square root of the element $\Ch_{(1)}$ and then localize over the multiplicative family generated by $\sqrt{\Ch_{(1)}}$. Let $\Polaext$ denote the resulting algebra.
We also define, for a partition $\mu$ of length $\ell$, \[\widetilde{\Ch}^{(\alpha)}_\mu := \prod_{i=1}^\ell \Ch^{(\alpha)}_{(\mu_i)}.\] Then $\widetilde{\Ch}^{(\alpha)}_\mu$ is a multiplicative basis of $\Pola$, while a multiplicative basis in $\Polaext$ is given by \[ \widetilde{\Ch}^{(\alpha)}_\mu \left(\Ch_{(1)}\right)^{m/2},\text{ with } m_1(\mu)=0, \quad m \in \Z.\] We equip $\Polaext$ with a gradation defined by \[ \deg_4\bigg( \widetilde{\Ch}^{(\alpha)}_\mu \left(\Ch_{(1)}\right)^{m/2}\bigg) = |\mu| + m.\] Note that some elements have negative degree. Besides, for a general partition $\mu$, one has: \[ \deg_4\big( \widetilde{\Ch}^{(\alpha)}_\mu \big) = |\mu| + m_1(\mu).\] In \cite{IvanovOlshanski2002}, for $\alpha=1$, the authors consider a slightly different filtration on $\Poluext$, namely they define $\deg_{\{1\}}$ as follows\footnote{In \cite{IvanovOlshanski2002}, $\deg_{\{1\}}$ is abbreviated as $\deg_{1}$ but we shall not do that to avoid a conflict of notation with Section~\ref{SubsectBound1}.}: for $\mu$ without part equal to $1$ and $m \in \Z$, \begin{equation} \label{eq:def_grad_IO} \deg_{\{1\}}\bigg(\Ch^{(1)}_\mu \left(\Ch_{(1)}\right)^{m/2}\bigg) = |\mu|+m. \end{equation} Note that, for a general partition $\mu$, one has: \[ \deg_{\{1\}}\big( \Ch^{(1)}_\mu \big) = |\mu| + m_1(\mu).\] Let us compare $\deg_4$ and $\deg_{\{1\}}$. For any integer $d$ (positive or not), let $V^{\le d}$ (respectively $V^{\le d}_{\{1\}}$) denote the subspace of $\Poluext$ containing elements $x$ with $\deg_4(x)\le d$ (respectively $\deg_{\{1\}}(x)\le d$). \begin{lemma} For any integer $d$, one has \[V^{\le d}=V^{\le d}_{\{1\}}.\] \label{lem:ComparisonDegree} \end{lemma} \begin{proof} Let us first show that \begin{equation} \label{eq:SameDegree_in_Polun} V^{\le d} \cap \Polun =V^{\le d}_{\{1\}} \cap \Polun.
\end{equation} By definition, the left-hand side has basis $(\widetilde{\Ch}^{(1)}_\mu)_{|\mu|+m_1(\mu) \le d}$, while the right-hand side has basis $(\Ch^{(1)}_\mu)_{|\mu|+m_1(\mu) \le d}$. But, if $|\mu|+m_1(\mu) \le d$, \[\deg_{\{1\}} \big(\widetilde{\Ch}^{(1)}_\mu \big) \leq \sum_{i=1}^\ell \deg_{\{1\}} \big(\widetilde{\Ch}^{(1)}_{(\mu_i)} \big) = |\mu|+m_1(\mu) \le d,\] which shows one inclusion between the two spaces. As they have the same dimension, \eqref{eq:SameDegree_in_Polun} holds. Observe now that, for both gradations, an element $F \in \Poluext$ has degree at most $d$ if and only if it can be written as \[F = \Ch_{(1)}^m \, F_1 + \Ch_{(1)}^{m+1/2} \, F_2 \] for some integer $m$ and elements $F_1$, $F_2$ from $\Polun$ of degrees at most $d_1$ and $d_2$ with $2m+d_1 \le d$ and $2m+1+d_2 \le d$. Hence, the lemma follows from \eqref{eq:SameDegree_in_Polun}. \end{proof} \begin{remark} We are not able to show that \[\deg_{\{1\}}\bigg(\Ch^{(\alpha)}_\mu \left(\Ch_{(1)}\right)^{m/2}\bigg) = |\mu|+m\] defines a filtration of $\Polaext$, which would make a natural extension of \eqref{eq:def_grad_IO}. However, thanks to Lemma \ref{lem:ComparisonDegree}, we can use the multiplicative family $\widetilde{\Ch}^{(\alpha)}_\mu$ instead. \end{remark} \subsection{Proof of Theorem \ref{theo:Fluctuations}} The main part of the proof of Kerov, Ivanov and Olshanski is to prove that $u_k^{(1)}$ and $t_k^{(1)}$ are in $\Poluext$ and fulfill \begin{align} u_{k}^{(1)} &= \frac{\Ch^{(1)}_{(k+1)}}{(k+1) \Ch_{(1)}^{(k+1)/2}} +\text{ terms of negative degree for $\deg_4$;} \label{eq:FromIO_u}\\ t_{k}^{(1)} &= \frac{\Ch^{(1)}_{(k-1)}}{\Ch_{(1)}^{(k-1)/2}} +\text{ terms of negative degree for $\deg_4$.} \label{eq:FromIO_t} \end{align} These equations are respectively the last equations of Sections 7 and 8 in the paper \cite{IvanovOlshanski2002}. As the notations are a bit different here, let us give a few clarifications.
\begin{itemize} \item The quantity $\eta_{k+1}$ in \cite{IvanovOlshanski2002} is defined by equation (6.5) and Definition 3.1. \item In \cite{IvanovOlshanski2002}, it is shown that the remainder has negative degree in the filtration $\deg_{\{1\}}$, while here we use the gradation $\deg_4$. But we have proven in Lemma \ref{lem:ComparisonDegree} that both notions coincide on $\Polun$. \item The identities in \cite{IvanovOlshanski2002} are equalities of random variables, that is of functions on the set of Young diagrams of size $n$ (which is here the probability space); as they are valid for any $n$, we have in fact identities of functions on the set of Young diagrams, as claimed above. \end{itemize} Note that, as equalities of functions on the set of Young diagrams, one can evaluate them on continuous diagrams, in particular on $A_\alpha(\lambda)$. Our goal is to establish similar formulas in the general $\alpha$-case. From the definition, it is straightforward that \begin{align} u_{k}^{(\alpha)}(\lambda) &= u_{k}^{(1)}\big(A_\alpha(\lambda) \big); \\ t_{k}^{(\alpha)}(\lambda) &= t_{k}^{(1)}\big(A_\alpha(\lambda) \big).
\end{align} Therefore, applying equations~\eqref{eq:FromIO_u} and \eqref{eq:FromIO_t} to $A_\alpha(\lambda)$, we get: \begin{align} u_{k}^{(\alpha)}(\lambda) &= \frac{\Ch^{(1)}_{(k+1)}\big(A_\alpha(\lambda) \big)}{(k+1) \left(\Ch_{(1)}\right)^{(k+1)/2}} +\text{ terms of negative degree for $\deg_4$;} \label{eq:FromIO_u_la^al}\\ t_{k}^{(\alpha)}(\lambda) &= \frac{\Ch^{(1)}_{(k-1)}\big(A_\alpha(\lambda) \big)}{\left(\Ch_{(1)}\right)^{(k-1)/2}} +\text{ terms of negative degree for $\deg_4$.} \label{eq:FromIO_t_la^al} \end{align} Of course, in general, although both quantities lie in $\Pola$, \[\Ch^{(1)}_{(k)}\big(A_\alpha(\lambda) \big) \neq \Ch^{(\alpha)}_{(k)}(\lambda).\] The following lemma compares the highest degree terms of the quantities above: \begin{lemma} \label{lem:1toAlpha} For any integer $k \geq 1$ we have that \begin{multline} \Ch^{(1)}_{(k)}\big(A_\alpha(\lambda)\big) = \Ch^{(\alpha)}_{(k)}(\lambda) - \gamma\left(\Ch_{(1)}\right)^{k/2} \, [k\text{ is even}] \\ + \text{ terms of degree less than } k\text{ with respect to }\deg_4. \end{multline} \end{lemma} \begin{proof} We know, by Corollary~\ref{corol:dominant_deg1}, that for any $k \geq 2$, \[ \deg_1\left(\Ch^{(\alpha)}_{(k)}(\lambda) - \Ch^{(1)}_{(k)}\big(A_\alpha(\lambda)\big)\right) = k.\] Let us consider its $\widetilde{\Ch}_\mu$ expansion: \[ \Ch^{(\alpha)}_{(k)}(\lambda) - \Ch^{(1)}_{(k)}\big(A_\alpha(\lambda)\big) = \sum_\mu a^k_\mu \widetilde{\Ch}_\mu.\] Since $\deg_4(\widetilde{\Ch}_\mu) \leq \deg_1(\widetilde{\Ch}_\mu)$ with equality only for $\mu = (1^m)$ for some non-negative integer $m$, one has that \begin{align*} \Ch^{(1)}_{(2m+1)}\big(A_\alpha(\lambda)\big) &= \Ch^{(\alpha)}_{(2m+1)}(\lambda) \\&\qquad+ \text{ terms of degree less than } 2m+1\text{ with respect to }\deg_4; \\ \Ch^{(1)}_{(2m)}\big(A_\alpha(\lambda)\big)& = \Ch^{(\alpha)}_{(2m)}(\lambda) - a^{2m}_{(1^m)}\left(\Ch_{(1)}\right)^m \\&\qquad + \text{ terms of degree less than } 2m\text{ with respect to }\deg_4.
\end{align*} But \[ a^{2m}_{(1^m)} = [\left(R^{(\alpha)}_2(\lambda)\right)^m] \left(\Ch^{(\alpha)}_{(2m)}(\lambda) - \Ch^{(1)}_{(2m)}\big(A_\alpha(\lambda)\big)\right) = [R_2^m]\Ch^{(\alpha)}_{(2m)}(\lambda) = \gamma,\] by \cite[Theorem 10.3]{Lassalle2009} or Proposition \ref{prop:TopDegrees} here (the coefficient of $R_2^m$ in $\Ch^{(1)}_{(2m)}$ is zero for parity reasons). This finishes our proof. \end{proof} \begin{corollary} \label{corol:uktk_chka} One has the following equalities in $\Polaext$: \begin{align} u_{k}^{(\alpha)} &= \frac{\Ch^{(\alpha)}_{(k+1)}}{(k+1) \left(\Ch_{(1)}\right)^{(k+1)/2}} - \frac{\gamma}{k+1} [k\text{ is odd}] +\text{ terms of negative degree;} \label{eq:uka_chka}\\ t_{k}^{(\alpha)} &= \frac{\Ch^{(\alpha)}_{(k-1)} }{\left(\Ch_{(1)}\right)^{(k-1)/2}} - \gamma \, [k\text{ is odd}] +\text{ terms of negative degree.} \label{eq:tka_chka} \end{align} \end{corollary} \begin{proof} For any Young diagram $\lambda$, equation \eqref{eq:uka_chka} evaluated at $\lambda$ is obtained from equation~\eqref{eq:FromIO_u_la^al} and Lemma~\ref{lem:1toAlpha}. Similarly, equation \eqref{eq:tka_chka} is a consequence of equation~\eqref{eq:FromIO_t_la^al} and Lemma~\ref{lem:1toAlpha}. \end{proof} Now, we will prove that elements of negative degree are asymptotically negligible. \begin{lemma} \label{lem:NegDeg=>Zero} Let $f \in \Polaext$ be a function of degree less than $0$. Then, as $n \to \infty$, \[ f(\lambda_{(n)}) \xrightarrow{d} 0,\] where the distribution of $\lambda_{(n)}$ is Jack measure of size $n$. \end{lemma} \begin{proof} It is enough to show that, as $n \to \infty$, \begin{equation}\label{eq:NegDeg=>Zero__OnBasis} \widetilde{\Ch}_\mu(\lambda_{(n)}) \left(\Ch_{(1)}(\lambda_{(n)})\right)^{-m/2} \xrightarrow{d} 0 \end{equation} for $|\mu| < m$, where the distribution of $\lambda_{(n)}$ is Jack measure of size $n$. But this is a consequence of Theorem~\ref{theo:FluctuationsJackCharacters}.
Indeed, let $\left(\Xi_k \right)_{k=2,3,\dots}$ be a family of independent standard Gaussian random variables. Then Theorem~\ref{theo:FluctuationsJackCharacters} states that \[ \widetilde{\Ch}_\mu(\lambda_{(n)}) \, n^{-|\mu|/2} \xrightarrow{d} \prod_{i}\sqrt{\mu_i} \, \Xi_{\mu_i}.\] As $\Ch_{(1)}(\lambda_{(n)}) \equiv n$, this implies \eqref{eq:NegDeg=>Zero__OnBasis} and finishes the proof. \end{proof} Finally, Theorem \ref{theo:Fluctuations} follows from Corollary~\ref{corol:uktk_chka}, Theorem~\ref{theo:FluctuationsJackCharacters} and Lemma~\ref{lem:NegDeg=>Zero}. \subsection{Informal reformulation of Theorem \ref{theo:Fluctuations}} \label{subsect:informal} Choose, as above, a sequence of independent standard Gaussian random variables $\left(\Xi_k \right)_{k=2,3,\dots}$ and consider the random series \begin{multline*} \Delta_\infty^{(\alpha)} (2 \cos(\theta) ):= \frac{1}{\pi} \sum_{k=2}^\infty \left(\frac{\Xi_k}{\sqrt{k}}-\frac{\gamma}{k} [k\text{ is even}]\right) \sin(k \theta) \\ = \frac{1}{\pi} \sum_{k=2}^\infty \frac{\Xi_k}{\sqrt{k}} \sin(k \theta) - \gamma/4 +\gamma\theta/(2\pi). \end{multline*} This series is nowhere convergent (almost surely), but it makes sense as a generalized Gaussian process with values in the space of generalized functions $(\mathcal{C}^\infty(\RR))'$, that is the dual of the space of infinitely differentiable functions; see \cite[Section 9]{IvanovOlshanski2002} for details. For the polynomials $u_\ell(x)$, one has \begin{multline*} \langle u_\ell,\Delta_\infty^{(\alpha)} \rangle = \int_{-2}^2 u_\ell(x) \Delta_\infty^{(\alpha)}(x) dx = 2 \int_0^\pi u_\ell(2 \cos(\theta)) \Delta_\infty^{(\alpha)}(2 \cos(\theta)) \sin(\theta) d\theta\\ =\frac{2}{\pi} \sum_{k=2}^\infty \left(\frac{\Xi_k}{\sqrt{k}}-\frac{\gamma}{k} [k\text{ is even}]\right) \int_0^\pi \sin\big( (\ell+1)\theta \big) \sin(k \theta) d\theta. \end{multline*} Only the integral for $k=\ell+1$ is non-zero (it is equal to $\pi/2$).
Thus we get \[\langle u_\ell,\Delta_\infty^{(\alpha)} \rangle = \frac{\Xi_{\ell+1}}{\sqrt{\ell+1}}-\frac{\gamma}{\ell+1} [\ell\text{ is odd}],\] which, from Theorem~\ref{theo:Fluctuations}, is exactly the limit in distribution of \[ \langle u_\ell,\Delta^{(\alpha)}(\lambda_{(n)}) \rangle.\] As this limit in distribution holds jointly for different values of $\ell$, by linearity, one can replace $u_\ell$ by any polynomial $P$. Hence, $\Delta_\infty^{(\alpha)}$ can be informally seen as the limit of the random functions $\Delta^{(\alpha)}(\lambda_{(n)})$, which justifies equation~\eqref{eq:informal}. \appendix \section{Kerov polynomials} \label{app:Kerov_polynomials} In this section, we answer some questions of Lassalle concerning Kerov polynomials \cite{Lassalle2009}. The results are consequences of the methods or results from Section \ref{SectPolynomial} and thus fit in the scope of this paper. However, as Kerov polynomials are used in this paper only as a tool, we decided to present them in the Appendix. \subsection{Comparison with Lassalle's normalizations} \label{SubsectPolAvecLassalleConvention} Recall that our normalization is different from the one used by Lassalle. As in Section \ref{SubsectAlgo}, we use boldface font for quantities defined in Lassalle's paper \cite{Lassalle2009}. Our second bound on the degree of coefficients of $K_\mu$ implies the following result. \begin{proposition} The coefficient $\bm{c^\mu_\rho}$ of $\bm{R_\rho}$ in $\bm{K_\mu}$ with Lassalle's normalization is a polynomial in $\alpha$ divisible by $\alpha^{|\rho|-\ell(\rho)}$. \end{proposition} \begin{proof} Let us start with a comparison of Lassalle's conventions with ours.
If $\mu$ does not contain a part equal to $1$ then \begin{align*} \bm{\vartheta_\mu^\lambda(\alpha)} &= z_\mu \theta^{(\alpha)}_{\mu,1^{|\lambda|-|\mu|}}(\lambda),\\ \intertext{so that } \Ch_\mu(\lambda)&=\alpha^{-\frac{|\mu|-\ell(\mu)}{2}} \bm{\vartheta_\mu^\lambda(\alpha)}.\\ \intertext{Besides, } \bm{R_k(\lambda)} &= \alpha^{-k/2} R_k(\lambda) \\ \intertext{and } \bm{\vartheta_\mu^\lambda(\alpha)} &= \bm{K_\mu} (\bm{R_2}, \bm{R_3},\cdots). \end{align*} Finally, the coefficient $\bm{c^\mu_\rho}$ of $\bm{R_\rho}$ in $\bm{K_\mu}$ with Lassalle's normalization is related to the coefficient $d^\mu_\rho$ of $R_\rho$ in $K_\mu$ with our conventions by: \[ \bm{c^\mu_\rho} = \alpha^{\frac{|\mu|-\ell(\mu)}{2}+\frac{|\rho|}{2}} d^\mu_\rho.\] But we have shown that $d^\mu_\rho$ is a polynomial in $\gamma=\frac{1-\alpha}{\sqrt{\alpha}}$ of degree less than $|\mu|-\ell(\mu)-(|\rho|-2\ell(\rho))$. Thus $\bm{c^\mu_\rho}$ is a polynomial in $\sqrt{\alpha}$ divisible by $\alpha^{|\rho|-\ell(\rho)}$. Our parity result for $d^\mu_\rho$ (second part of Proposition~\ref{PropBound1}) implies that $\bm{c^\mu_\rho}$ is in fact a polynomial in $\alpha$. \end{proof} Lassalle had only proved in his article that these quantities were rational functions in $\alpha$. He conjectured that they are polynomials with integer coefficients \cite[Conjecture 1.1]{Lassalle2009}. Our result is weaker than this conjecture as we are not able to prove the integrality of the coefficients. However, we also proved that the polynomials are divisible by $\alpha^{|\rho|-\ell(\rho)}$, which fits with Lassalle's data \cite[Section 1]{Lassalle2009}, but was not mentioned by him. \subsection{Linear terms in Kerov polynomials} \label{SectLinearTerms} In this short section, we compute the top degree part of the coefficients of linear terms in Kerov polynomials. This proves a conjecture of Lassalle \cite[page 31]{Lassalle2009}.
\begin{proposition} For any integers $k > 0$ and $0 \leq i \leq k-1$, we have $$[R_{k+1-i}]K_{(k)} = \left[ \begin{matrix} k\\ k-i\\ \end{matrix} \right] \gamma^i + \text{lower degree terms},$$ where $\left[ \begin{matrix} k\\ k-i\\ \end{matrix} \right]$ denotes the positive Stirling number of the first kind. \end{proposition} \begin{proof} We recall that \[ \Ch_\mu(\lambda) = \sum_\rho a_\rho^\mu M_\rho(\lambda).\] Thanks to the relation between moments and free cumulants -- see Section \ref{subsect:TransMeasure} -- it is enough to prove that $$a^{(k)}_{(k+1-i)} = \left[ \begin{matrix} k\\ k-i\\ \end{matrix} \right] \gamma^i + \text{lower degree terms}$$ for any integers $k > 0$ and $0 \leq i \leq k-1$. We will prove it by induction on $k$. For $k=1$ we have that $K_{(1)} = M_2 = R_2$ and the inductive assertion holds in this case. Putting $\mu = (k)$ in Equation \eqref{EqRec2} we have that $$\sum_\rho a^{(k)}_\rho \left( \sum_{\substack{g,h \geq 0, \\ \pi \vdash h}} b_{g,\pi}^\rho(\gamma) M_{\pi \cup (g+1)} \right) = k L_{k-1},$$ hence $$\sum_\rho a^{(k)}_\rho b_{k-1-i,0}^\rho(\gamma) = k a^{(k-1)}_{(k-i)}$$ for any integer $0 \leq i \leq k-1$. From Lemma \ref{LemDeg1B}, $b_{k-1-i,0}^\rho$ vanishes for $|\rho| < k+1-i$. Moreover, by Proposition \ref{PropBound1} and by Lemma \ref{LemDeg2B}, we have that $$\deg_\gamma(a^{(k)}_\rho b_{k-1-i,0}^\rho(\gamma)) \leq k + 1 - |\rho| + |\rho| - 2\ell(\rho) - (k-1-i)= i - 2(\ell(\rho)-1).$$ In particular, only one-part partitions $\rho$ can contribute to the coefficient of $\gamma^i$. By the inductive hypothesis one has \begin{equation} \label{eq:ForLinearTerms} \sum_{k+1 \geq r \geq k+1-i} a^{(k)}_{(r)} b_{k-1-i,0}^{(r)}(\gamma) = k \left[ \begin{matrix} k-1\\ k-1-i\\ \end{matrix} \right] \gamma^i + \text{lower degree terms} \end{equation} for any integer $0 \leq i \leq k-1$.
From Proposition \ref{PropMkLo} we know that $$b_{k-1-i,0}^{(r)}(\gamma) = \binom{r-1}{r-(k-i)}(-\gamma)^{r-(k-i+1)}.$$ Putting it into Equation \eqref{eq:ForLinearTerms} we obtain that in order to finish the proof it is enough to prove the following identity (set $r=k+1-i+j$ in the summation index) $$\sum_{0 \leq j \leq i}\binom{k-i+j}{j+1}(-1)^j \left[ \begin{matrix} k\\ k-i+j\\ \end{matrix} \right] = k\left[ \begin{matrix} k-1\\ k-1-i\\ \end{matrix} \right]$$ for any integer $0 \leq i \leq k-1$. The following proof has been communicated to us by Goulden. It uses the fact (see, e.g.,~\cite{GJbook}, Ex. 3.3.17) that Stirling numbers of the first kind are defined by $$\sum_{j \ge 0} \left[ \begin{matrix} k \\ j\\ \end{matrix} \right] x^j = (x)^{(k)},\qquad k\ge 0,$$ using the notation for rising factorials $(a)^{(m)} =a(a+1)\cdots (a+m-1)$ for positive integer $m$, and $(a)^{(0)}=1$. Thus we have \begin{align*} &\sum_{0 \leq j \leq i} \binom{k-i+j}{j+1}(-1)^j \left[ \begin{matrix} k\\ k-i+j\\ \end{matrix} \right] \\ &= - \sum_{-1 \leq j \leq i} \binom{k-i+j}{k-i-1}(-1)^{j+1} \left[ \begin{matrix} k\\ k-i+j\\ \end{matrix} \right] + \left[ \begin{matrix} k\\ k-i-1\\ \end{matrix} \right] \\ &= - \sum_{-1 \leq j \leq i} [x^{k-i-1}] (x-1)^{k-i+j} \left[ \begin{matrix} k\\ k-i+j\\ \end{matrix} \right] + [x^{k-i-1}] (x)^{(k)} \\ &= - [x^{k-i-1}]\left( \sum_{j \geq 0} \left[ \begin{matrix} k\\ j\\ \end{matrix} \right](x-1)^{j} - \sum_{0 \leq j \leq k-i-2}\left[ \begin{matrix} k\\ j\\ \end{matrix} \right](x-1)^{j} \right) + [x^{k-i-1}] (x)^{(k)} \\ &= -[x^{k-i-1}](x-1)^{(k)} + [x^{k-i-1}] (x)^{(k)} \\ &=[x^{k-i-1}] (x)^{(k-1)} \Bigl\{ -(x-1)+(x+k-1) \Bigr\} \\ &= k \left[ \begin{matrix} k-1 \\ k-i-1 \\ \end{matrix} \right], \end{align*} for all $0 \le i \le k-1$, establishing the required identity.
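As a sanity check, take $k=3$ and $i=1$: the left-hand side of the identity is \[ \binom{2}{1} \left[ \begin{matrix} 3\\ 2\\ \end{matrix} \right] - \binom{3}{2} \left[ \begin{matrix} 3\\ 3\\ \end{matrix} \right] = 2 \cdot 3 - 3 \cdot 1 = 3, \] which agrees with the right-hand side $k \left[ \begin{matrix} k-1\\ k-1-i\\ \end{matrix} \right] = 3 \left[ \begin{matrix} 2\\ 1\\ \end{matrix} \right] = 3$.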
\end{proof} \subsection{High degree terms of Kerov polynomials for $\deg_1$} \label{SectRandomYD} In Corollary \ref{corol:dominant_deg1}, we have given the highest degree term of $K_\mu$ for $\deg_1$. We shall now describe the next two terms for a one-part partition $\mu=(k)$. Let $\m_\pi(\mu)$ denote the monomial symmetric function indexed by $\pi$ evaluated at the variables $\mu_1,\mu_2, \dots$. For example, \[\m_{1^2}(\mu)= \sum_{i<j}\mu_i \mu_j.\] We also introduce the notation $\tilde{R}_i = (i-1)R_i$ and $\tilde{R}_{\mu} = \prod_i\frac{\tilde{R}_i^{m_i(\mu)}}{m_i(\mu)!}$. \begin{proposition} \label{prop:TopDegrees} For $k \geq 1$, one has \begin{multline} K_{(k)} = R_{k+1} + \gamma \frac{k}{2}\sum_{|\mu| = k}(\ell(\mu)-1)!\tilde{R}_\mu + \\ \sum_{|\mu| = k-1} \left(\frac{1}{4}\binom{k+1}{3} + \gamma^2 k \frac{3\m_2(\mu) + 4\m_{1^2}(\mu) + 2\m_1(\mu)}{24}\right)\ell(\mu)!\tilde{R}_\mu + \\ \text{terms of degree less than $k-1$ with respect to $\deg_1$.} \end{multline} \end{proposition} \begin{proof} Let us write: $$ K_{(k)} = \sum_{\mu}c_\mu R_\mu.$$ By Proposition \ref{PropBound1}, $c_\mu$ is a polynomial in $\gamma$ of degree at most $k+1-|\mu|$, hence $c_\mu$ is a polynomial in $\gamma$ of degree at most $2$ for $|\mu| \geq k-1$. Moreover, we know explicitly how to express $K_{(k)}$ in terms of free cumulants for $\alpha \in \{ \frac{1}{2}, 1, 2\}$ (which correspond to $\gamma = \frac{1}{\sqrt{2}}, 0, -\frac{1}{\sqrt{2}}$, respectively). The case $\alpha=1$ has been solved separately in the papers \cite{GouldenRattan2007,SniadyGenusExpansion}, while the cases $\alpha=1/2$ and $2$ follow from the combinatorial interpretation given in \cite{NousZonal} and the explicit computation done in \cite{AgnieskaZonalGenus1}. Since a polynomial of degree at most $2$ in $\gamma$ is determined by its values at three distinct points, these three special cases determine the coefficients $c_\mu$ with $|\mu| \geq k-1$, which yields the formula above. \end{proof} \begin{remark} One can notice that the explicit formulas for $c_\mu$ with $|\mu| \geq k$ were also proved by Lassalle \cite[Theorems 10.2 and 10.3]{Lassalle2009}.
Moreover, our calculations for $c_\mu$ with $|\mu| = k-1$ are consistent with Lassalle's computer experiments \cite[p. 2257]{Lassalle2009}, which provides new evidence for Conjecture 11.2 of Lassalle \cite{Lassalle2009}. \end{remark} \section{Other consequences of the second main result} \label{app:Matching-Jack_And_Matsumoto} We present here three consequences of our polynomiality result for structure constants for Jack characters (see Theorem \ref{theo:struct-const}). These results were mentioned in the introduction (Section \ref{subsect:other_applications}), but, as they are quite independent of the rest of the paper, we present them in the Appendix. \subsection{Recovering a recent result of Vassilieva} \label{sect:Recovering_Katya} Corollary \ref{CorolConnectionSeries}, which gives a bound on the degree in $\alpha$ of $c_{\mu,\nu;\pi}$, can be used to give a short proof of a recent result of Vassilieva. In the paper \cite{VassilievaJack}, she considered the following quantity: for $\mu$ a partition of $n$, let $r=|\mu|-\ell(\mu)$ and \begin{equation} \label{eq:def_Katya} \bm{a^r_\mu(\alpha)} := \sum_{\lambda \vdash n} \frac{1}{j_\lambda^{(\alpha)}} \theta_\mu(\lambda) \left(\theta_{(2,1^{n-2})}(\lambda)\right)^r.
\end{equation} Using structure constants, we can write: for any partition $\lambda$ of $n$, \[\left(\theta_{(2,1^{n-2})}(\lambda)\right)^r =\sum_{\mu^1,\mu^2,\ldots,\mu^r \vdash n \atop \mu^1=(2,1^{n-2})} \left(\prod_{i=1}^{r-1} c_{\mu^i,(2,1^{n-2});\mu^{i+1}}\right) \theta_{\mu^r}(\lambda).\] Plugging this into Equation \eqref{eq:def_Katya} and using the orthogonality relation presented in Lemma \ref{lem:identities} \eqref{eq:orthogonality}: \[\sum_{\lambda \vdash n} \frac{1}{j_\lambda^{(\alpha)}} \theta_\mu(\lambda) \theta_\nu(\lambda) =\frac{\delta_{\mu,\nu}}{z_\mu \alpha^{\ell(\mu)}}\] (see \cite[Section 3.3]{VassilievaJack}), we get \begin{equation} \label{eq:Katya_translated} \bm{a^r_\mu(\alpha)} = \frac{1}{z_\mu \alpha^{\ell(\mu)}} \sum_{\mu^1,\mu^2,\ldots,\mu^r \vdash n \atop \mu^1=(2,1^{n-2}), \ \mu^r=\mu} \prod_{i=1}^{r-1} c_{\mu^i,(2,1^{n-2});\mu^{i+1}}. \end{equation} From Corollary \ref{CorolConnectionSeries}, the coefficient $c_{\mu^i,(2,1^{n-2});\mu^{i+1}}$ vanishes unless \begin{equation}\label{eq:l_mu_increase_by_1} |\mu^{i+1}| -\ell(\mu^{i+1}) \le |\mu^i| -\ell(\mu^i) + 1. \end{equation} As $|\mu^1|-\ell(\mu^1)=1$ and $|\mu^r|-\ell(\mu^r)=|\mu| - \ell(\mu) = r$, for any non-zero summand in~\eqref{eq:Katya_translated}, one has equality in~\eqref{eq:l_mu_increase_by_1} for all integers $i$. But, again from Corollary \ref{CorolConnectionSeries}, equality in~\eqref{eq:l_mu_increase_by_1} implies that the coefficient $c_{\mu^i,(2,1^{n-2});\mu^{i+1}}$ is independent of $\alpha$. Hence, the quantity $\alpha^{\ell(\mu)} z_\mu \bm{a^r_\mu(\alpha)}$ is independent of $\alpha$.
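To illustrate the argument, take $n=3$ and $\mu=(3)$, so that $r=2$. The only admissible chain is $\mu^1=(2,1)$, $\mu^2=(3)$ (indeed $|\mu^2|-\ell(\mu^2)=2$ forces $\mu^2$ to have one part), and \eqref{eq:Katya_translated} reduces to the single structure constant \[ z_{(3)}\, \alpha\, \bm{a^2_{(3)}(\alpha)} = c_{(2,1),(2,1);(3)}, \] which is therefore independent of $\alpha$; at $\alpha=1$ it counts the factorizations of a fixed $3$-cycle into two transpositions, of which there are $3$.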
In the case $\alpha=1$, it counts certain {\em minimal} factorizations in the symmetric group (see \cite[Lemma 1]{VassilievaJack} or \cite[Proposition 3.1]{GouldenJacksonMatchingJack}), which has been computed by Dénes in \cite{Denes1959}: \[z_\mu \bm{a^r_\mu(1)} = \binom{r}{\mu_1-1,\cdots,\mu_{\ell(\mu)}-1} \prod_{i=1}^{\ell(\mu)} \mu_i^{\mu_i-2}.\] Dénes in fact considered only the case $\mu=(n)$, that is minimal factorizations of a cycle, but it can easily be proved that minimal factorizations of a product of disjoint cycles are obtained by shuffling factors of minimal factorizations of its cycles. From the case $\alpha=1$ and the independence of $\alpha$, we conclude immediately that \[\bm{a^r_\mu(\alpha)} = \frac{1}{\alpha^{\ell(\mu)} z_\mu} \binom{r}{\mu_1-1,\cdots,\mu_{\ell(\mu)}-1} \prod_{i=1}^{\ell(\mu)} \mu_i^{\mu_i-2},\] which is the main result in \cite{VassilievaJack}. \subsection{Goulden's and Jackson's $b$-conjecture} \label{sect:b-conj} In this section, we explain that our quantities $c_{\mu,\nu;\pi}$ (for a general value of the parameter $\alpha$) are the same as the quantities $\bm{c_{\mu,\nu}^\pi(b)}$ considered by Goulden and Jackson in \cite{GouldenJacksonMatchingJack}. As a consequence, we give a partial answer to a question raised by these authors. We use the convention that the boldface quantities refer to the notations of Goulden and Jackson \cite{GouldenJacksonMatchingJack}. To establish this connection we will need to use the $\alpha$-scalar product on the symmetric functions, for which Jack polynomials and power-sum symmetric functions are orthogonal bases \cite[(VI,10)]{Macdonald1995}. The following formula is a natural extension of the Frobenius counting formula, see {\em e.g.} \cite[Appendix, Theorem 2]{LandoZvonkin2004}. \begin{proposition}\label{PropTripleProduit} Let $\mu$, $\nu$ and $\pi$ be three partitions of the same integer $n$.
Then \[c_{\mu,\nu;\pi} = z_\pi \alpha^{\ell(\pi)} \sum_{\lambda \vdash n} \frac{\theta_{\pi}(\lambda)\ \theta_{\mu}(\lambda)\ \theta_{\nu}(\lambda)}{ \langle J_\lambda, J_\lambda \rangle}. \] \end{proposition} \begin{proof} Let partitions $\mu\vdash n$ and $\nu\vdash n$ be fixed. We consider the following symmetric function: \[F:=\sum_{\lambda \vdash n} \frac{\theta_\mu(\lambda)\ \theta_\nu(\lambda)}{ \langle J_\lambda, J_\lambda \rangle} J_\lambda.\] By definition of $c_{\mu,\nu;\pi}$, one has: \begin{equation}\label{EqTec1} F=\sum_{\lambda \vdash n} \sum_{\pi \vdash n} c_{\mu,\nu;\pi} \left( \frac{\theta_\pi(\lambda)}{\langle J_\lambda, J_\lambda \rangle} J_\lambda \right). \end{equation} But $\theta_\pi(\lambda)$ is defined by \[J_\lambda = \sum_{\pi \vdash n} \theta_\pi(\lambda)\ p_\pi.\] As $p_\pi$ is an orthogonal basis, this implies \[ \theta_\pi(\lambda) = \frac{\langle J_\lambda, p_\pi \rangle}{\langle p_\pi, p_\pi \rangle}.\] But $J_\lambda$ is also an orthogonal basis, hence: \begin{equation}\label{EqJinP} p_\pi = \sum_\lambda \frac{\langle J_\lambda, p_\pi \rangle}{ \langle J_\lambda, J_\lambda \rangle} J_\lambda = \langle p_\pi, p_\pi \rangle \sum_\lambda \frac{\theta_\pi(\lambda)}{\langle J_\lambda, J_\lambda \rangle} J_\lambda. \end{equation} Plugging this into \eqref{EqTec1}, one has: \[F=\sum_{\pi \vdash n} c_{\mu,\nu;\pi} \frac{p_\pi}{\langle p_\pi, p_\pi \rangle}\] and thus, \begin{multline*} c_{\mu,\nu;\pi} = \langle F,p_\pi \rangle = \sum_{\lambda \vdash n} \frac{\theta_\mu(\lambda)\ \theta_\nu(\lambda)}{ \langle J_\lambda, J_\lambda \rangle} \langle J_\lambda,p_\pi \rangle \\ = \sum_{\lambda \vdash n} \frac{\theta_\mu(\lambda)\ \theta_\nu(\lambda)}{ \langle J_\lambda, J_\lambda \rangle} \langle p_\pi, p_\pi \rangle \theta_\pi(\lambda). \end{multline*} As $\langle p_\pi, p_\pi \rangle= z_\pi \cdot \alpha^{\ell(\pi)}$, we obtain the claimed formula. 
\end{proof} Comparing the proposition with the definition of the connection series $\bm{c_{\mu,\nu}^\pi(b)}$ \cite[equations (1) and (5)]{GouldenJacksonMatchingJack}, we get that \begin{equation} c_{\mu,\nu;\pi} = \bm{c_{\mu,\nu}^\pi(b)}. \label{EqIdentificationWithGJ} \end{equation} Goulden and Jackson conjectured that these series are polynomials in $b=\alpha-1$ with non-negative integer coefficients (which conjecturally have a combinatorial meaning in terms of matchings; see \cite[Section 4]{GouldenJacksonMatchingJack}). Corollary~\ref{CorolConnectionSeries} implies the following weaker statement, which was not previously known. \begin{proposition} The connection series $\bm{c_{\mu,\nu}^\pi(b)}$ introduced in \cite{GouldenJacksonMatchingJack} is a polynomial in $b$ with rational coefficients of degree at most $d(\mu,\nu;\pi)$. \end{proposition} \subsection{Symmetric functions of contents}\label{SubsectMatsumoto} In this section we consider a closely related problem studied by Matsumoto in \cite[Section 8]{MatsumotoOddJM} in connection with matrix integrals. Our results allow us to prove two conjectures stated in his paper. For a box $\Box =(i,j)$ of a Young diagram $\lambda$ ($i$ is the row index, $j$ is the column index and $j\le \lambda_i$), we define its \emph{($\alpha$-)content} as $c(\Box) = \sqrt{\alpha} (j-1) - \sqrt{\alpha}^{-1} (i-1)$. The \emph{alphabet of the contents} of $\lambda$ is the multiset $\CCC_\lambda=\{c(\Box) : \Box \in \lambda\}$. Matsumoto \cite[Equation (8.9)]{MatsumotoOddJM} (beware that the normalization in his paper is different from ours) showed the following remarkable result: for any partition $\lambda$ \begin{equation} e_k(\CCC_\lambda) = \sum_{\substack{\mu: \\ |\mu|-\ell(\mu)=k,\\ m_1(\mu)=0}} \frac{\Ch_\mu(\lambda)}{z_\mu}. \label{EqElemJM} \end{equation} In particular, $\lambda \mapsto e_k(\CCC_\lambda)$ is a shifted symmetric function.
Therefore, for any symmetric function $F$, the map $\lambda \mapsto F(\CCC_\lambda)$ is also a shifted symmetric function and one may wonder how it can be expressed in the $\Ch$ basis. Explicitly, we are interested in the coefficients $a_\mu(F)$ defined by: \begin{equation}\label{DefAMuF} F(\CCC_\lambda) = \sum_{\mu \text{ partition}} a_\mu(F) \Ch_\mu(\lambda). \end{equation} Using the results of Section \ref{SubsectStructPol}, one has the following result: \begin{proposition}\label{PropDegJM} Let $F$ be a symmetric function of degree $d$ and let $\mu$ be a partition. The coefficient $a_\mu(F)$ is a polynomial in $\gamma$ of degree at most $$d - (|\mu|-\ell(\mu)+m_1(\mu)).$$ \end{proposition} \begin{proof} From~\eqref{EqElemJM}, the proposition is true for $F=e_k$ for any $k\geq 1$. Besides, if it is true for two symmetric functions $F_1$ and $F_2$, it is clearly true for any linear combination of them. Using Theorem \ref{theo:struct-const}, it is also true for $F_1 \cdot F_2$. Since the elementary symmetric functions form a basis of symmetric functions, it follows that the proposition is true for any symmetric function $F$. \end{proof} From now on, we use the convention that the boldface quantities refer to the notation of Matsumoto. The coefficients $a_\mu(F)$ are closely related to the quantities $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ introduced by S.~Matsumoto \cite{MatsumotoOddJM}. Namely, one has the following lemma (which extends \cite[Lemma 8.5]{MatsumotoOddJM}): \begin{lemma} Let $\mu$ be a partition. For $n \geq |\mu|+\ell(\mu)$, let $\pi := \mu + (1^{n-|\mu|})$ be the partition obtained from $\mu$ by adding $1$ to every part and adding new parts equal to $1$. Then, for any homogeneous symmetric function $F$ of degree $d$, one has: \[\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)} = \alpha^{\frac{d-(|\pi|-\ell(\pi))}{2}} \left[ \sum_{i \leq m_1(\pi)}\, a_{\tilde{\pi} 1^i}(F)\, z_{\tilde{\pi}} \, i! 
\, \binom{n-|\tilde{\pi}|}{i} \right],\] \label{LemLinkMatsumoto} where $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ is the quantity defined in \cite[Section 8.3]{MatsumotoOddJM}. \end{lemma} \begin{proof} If we fix the integer $n$, one may rewrite Equation~\eqref{DefAMuF} using the definition of $\Ch$: \begin{multline*} F(\CCC_\lambda) = \sum_{\substack{\nu,\\ |\nu| \leq n}} a_\nu(F) \alpha^{\frac{|\nu|-\ell(\nu)}{2}} z_\nu \binom{n-|\nu|+m_1(\nu)}{m_1(\nu)} \theta_{\nu 1^{n-|\nu|}}(\lambda) \\ = \sum_{\pi \vdash n} \theta_\pi(\lambda) \left[\alpha^{\frac{|\pi|-\ell(\pi)}{2}} \sum_{i \leq m_1(\pi)} a_{\tilde{\pi} 1^i}(F) z_{\tilde{\pi}} i! \binom{n-|\tilde{\pi}|}{i} \right]. \end{multline*} The notations are the same as in Section~\ref{SubsectProjN}. The second equality comes from the fact that each partition $\nu$ of size at most $n$ can be written uniquely as $\tilde{\pi} 1^i$, where $\pi$ is a partition of $n$ and $i$ a non-negative integer at most $m_1(\pi)$. Let $A_\pi$ denote the expression in the bracket in the equation above. As in the proof of Proposition~\ref{PropTripleProduit} we shall use the Jack deformation of the Hall scalar product on the space of symmetric functions. \[ \sum_{\lambda \vdash n} F(\CCC_\lambda) \frac{J_\lambda}{\langle J_\lambda, J_\lambda \rangle} = \sum_{\lambda,\pi \vdash n} A_\pi \theta_\pi(\lambda) \frac{J_\lambda}{\langle J_\lambda, J_\lambda \rangle} = \sum_{\pi \vdash n} A_\pi \frac{p_\pi}{\langle p_\pi,p_\pi \rangle}.\] The last equality corresponds to \eqref{EqJinP}.
We deduce that \[A_\pi = \left\langle \sum_{\lambda \vdash n} F(\CCC_\lambda) \frac{J_\lambda}{\langle J_\lambda, J_\lambda \rangle}, p_\pi \right\rangle = \sum_{\lambda \vdash n} F(\CCC_\lambda) \frac{\theta_\pi(\lambda) \cdot \langle p_\pi,p_\pi \rangle }{\langle J_\lambda, J_\lambda \rangle}.\] This formula coincides with the definition of $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ in \cite[paragraph 8.3]{MatsumotoOddJM} up to a scalar multiplication, namely, \[ \bm{\mathcal{A}_\mu^{(\alpha)}(F,n)} = \alpha^{d/2} A_\pi.\] The only difficulty is the difference in notation. To help the reader, we provide the following dictionary. First recall that $\langle p_\pi,p_\pi \rangle = z_\pi\ \alpha^{\ell(\pi)}$. Then our partition $\pi$ corresponds to $\bm{\mu + (1^{n-|\mu|})}$. In particular, one has $\bm{|\mu|}=|\pi|-\ell(\pi)$ and $\bm{z_{\mu + (1^{n-|\mu|})}}=z_\pi$. Besides, $F(\CCC_\lambda)$ in our paper corresponds to $\bm{\alpha^{d/2}\ F(A_\lambda^{\alpha})}$ in \cite{MatsumotoOddJM}. Finally, the probability $\bm{\PP_n^{(\alpha)}(\lambda)}$ is simply given by $\frac{n! \alpha^n}{\langle J_\lambda, J_\lambda \rangle}$. \end{proof} Proposition~\ref{PropDegJM}, when translated into Matsumoto's notation by Lemma~\ref{LemLinkMatsumoto}, has several interesting consequences. As above, we consider a homogeneous symmetric function $F$ of degree $d$. \begin{itemize} \item If $d=|\mu|$, the only term of the sum which can be non-zero corresponds to $i=0$ (by Proposition~\ref{PropDegJM}). Moreover, it does not depend on $\alpha$. Besides, the exponent of $\alpha$ in the formula is equal to zero. Finally, $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ depends neither on $\alpha$ nor on $n$, which proves \cite[Conjecture 9.2]{MatsumotoOddJM}. \item If $d=|\mu|+1$, there are only two terms (corresponding to $i=0,1$) which can be non-zero in the sum. Besides, the coefficient $a_{\tilde{\pi} 1}$ does not depend on $\alpha$ because of Proposition~\ref{PropDegJM}.
But it is easy to prove that it is equal to $0$ in the case $\alpha=1$ (it comes from the combinatorial interpretation of $\bm{\mathcal{A}_\mu^{(1)}(F,n)}$, see \cite[Example 9.2]{MatsumotoOddJM}). Hence, $a_{\tilde{\pi} 1}=0$ and only the term corresponding to $i=0$ is non-zero. In particular, one can see that $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ does not depend on $n$, which proves \cite[Conjecture 9.3]{MatsumotoOddJM}. \item In the general case, non-zero terms of the sum are indexed by values of $i$ at most $d - |\mu|$ (by Proposition~\ref{PropDegJM}). Hence $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ is a polynomial in $n$ of degree at most $d - |\mu|$. This result is not stronger than the bound of Matsumoto on the degree of $\bm{\mathcal{A}_\mu^{(\alpha)}(F,n)}$ \cite[Theorem 8.8]{MatsumotoOddJM}. Nevertheless, it is better in some cases and we also have some control on the dependence on $\alpha$ (as illustrated by the proofs of the conjectures above). \end{itemize} \section*{Acknowledgments} We thank I.~Goulden and P.~\'Sniady for enlightening discussions on the subject and helpful advice on the presentation of the paper. \bibliographystyle{alpha} \bibliography{biblio2011} \end{document}
African News

Charity Boss, Simon Harris, Found Guilty of Sexually Abusing Kenyan Street Children (17 December 2014)
British charity boss Simon Harris was convicted of child sex abuse in east Africa after exposure by a Channel 4 investigative documentary. The former public-school teacher was on Tuesday found guilty of sexually abusing young Kenyan street children after luring them to his lavish home with offerings of food, money and the promise of education.

HORROR! Video shows Mozambican man tied to South African police van and dragged (28 February 2013)
MIDO MACIA made the mistake of arguing with the South African police about parking his taxi. In clear video evidence in the hands of Daily Sun, the Mozambican is tied to the back of a police van and dragged down the street. But still not satisfied, they took him to the cop shop where, say other inmates, they beat him to death!

Ghana must manage oil find transparently - Nigeria's Finance Minister (24 February 2013)
Dr Ngozi Okonjo-Iweala, Finance Minister of oil-rich Nigeria, has cautioned Ghana to demonstrate greater transparency and accountability in managing the fledgling oil industry to avoid the challenges associated with the harnessing of the natural resource.

South African Blade Runner, Pistorius charged with murder of girlfriend (14 February 2013)
Oscar Pistorius has been charged with the murder of his girlfriend after model Reeva Steenkamp was shot inside the Olympic athlete's home in South Africa.

South Africa's First Black Billionaire Pledges To Donate Half His Wealth (31 January 2013)
MINING billionaire Patrice Motsepe has announced he will give half his family's fortune to charity, becoming the first African to match a pledge made by Bill Gates and Warren Buffett.

More Articles ...
- South African man arrests drunk police officer
- South African star to dedicate love song to the Obamas
- Charles Taylor writes Liberia's parliament, demands presidential pension
- New Year Eve Horror: 60 crushed to death in Ivory Coast stampede
- Ghana election: NPP challenges John Mahama's victory
- Kenya under fire over Zimbabwean woman's death in immigration custody
- Swaziland outlaws miniskirts, says they provoke rape
- New inquiry in 1988 S-Africa disappearances linked to Winnie Mandela
- Ghana will respect International Tribunal order on Argentine ship - Govt
- Mahama declared winner of Ghana election (video)
- South African icon Nelson Mandela hospitalized
- Egyptian judges announce strike in protest at Mursi decree (video)
- Ghana Election: Presidential Candidates Engage in Final Debate Wednesday (video)
- Ghana inducts second female Vice Chancellor
- Chinese in Ghana Unaware of Their Bad Reputation - Report
Oxygen Advisory / Training Service

If you or a relative require oxygen at home we are able to demonstrate its safe use to you in your home surroundings. If your doctor or the hospital decides that you need to have oxygen at home they will normally contact a pharmacist and ask them to visit you when the oxygen is delivered by the local agent (Island Express). We will arrange a mutually convenient time with you to ensure that you fully understand how to use the oxygen safely and how to obtain the maximum benefit from having oxygen at home.

Our Pharmacist will explain:
- How to set up or change the oxygen cylinder
- How to select the correct amount of oxygen to use
- How to turn the cylinder on/off
- Where to store the cylinder safely when in use
- Where to store spare cylinders
- How to use the facemask / nasal cannula
- The importance of keeping oxygen away from naked flames / cigarettes
- Notification to household insurance companies
- How to order further supplies of oxygen
- How to order repeat prescriptions for oxygen from your surgery
- What to do when you no longer require your home oxygen

In an emergency you can contact our Pharmacist out of hours using the contact number left with you during the visit. If you have run out of oxygen you can contact Island Express 24 hours a day on the number given to you by them when they delivered the cylinder. If you are still breathless after using your oxygen you should contact a doctor and seek medical advice.

We also provide training on the use of long term oxygen concentrators.
Man....what a mess. As Tau, I could barely get through the map, gritting my teeth with every mouse click, hoping they'd survive. If the Dino died, I was toast. This was one map I just could not excel as the Tau. But as Space Marines, the mission ended in 15 mins. 2 Land Raiders, 2 Tanks and the "oh crap!" Grey Knights + Terminators + Assault Terminators made the Orcs their personal pansies. In fact, a single Land Raider took out one of the settlements itself. Ridiculous.
That's simple to remedy: never trade more than you can afford; operators, for their part, have been sanctioned once again for violation of the relevant legislation. Therefore, if the futures close above 2,060, the buyer receives the full payout. The approach often mentioned is the use of the Martingale strategy. If you think the index will be above $3,784 at 11 a.m., you buy the binary option, with a $.45 charge if you're at the money. This is because the consequence if the option expires out of the money (approximately a 100% loss) significantly outweighs the payout if the option expires in the money (approximately a 50% gain). If the option expires worthless, you have lost the money you invested. This is perhaps one of the biggest topics I see in binary options. Regulators have cracked down on unregulated binary options, and have forced a major operator, Banc de Binary, to cease operations in the US and pay back all customer losses. He may do this by buying 10,000 binary contracts. If your trades are analysed, and the platform can even guide you, then risk management will be better taken care of. Once the option holder acquires a binary option, there is no further decision for the holder to make; all that was paid is a small premium. Either way, your price to buy or sell is between $0 and $100. Even if you have heard of this, there are indeed some risks involved when trading on-line with binary options. With Apple stock currently at around $145, such contracts are promoted by websites, broker affiliates and managed service providers related to binary option products. Because people pull out when they're nervous, trade only money you can spare in the first place and fully expect to lose your capital. (If you were selling, you would be able to limit the initial exposure and just make on-line trades with money that you know you can afford to lose; no matter what, no one ever wants to lose!) You can only trade with capital you can afford to lose; the investor must deposit a sum of money to purchase the option. Once the trade is complete, the outcome is fixed; some argue that such on-line gambling is a bad thing. In August 2016, Belgium's Financial Services and Markets Authority banned binary options.
The case involves a Singapore woman who claims she was defrauded. Each option has a set expiry date and time as well as a predetermined potential return. Here's how short-term volatility, along with the inherent disadvantage, will make consistent winning incredibly hard. Within most platforms the two choices are to buy or to sell. If the bid and ask are near $50, the market is undecided; this discussion leans heavily to Binary.com and Nadex. Some binary options trading platforms may also be operating outside regulation when trading on Internet-based platforms in the U.S. I put up a dollar to win a dollar. Your maximum risk is $44.50 if the option settles at $0, and they don't even mention that this can happen to you if at any time you search or do some research on making easy money on the Internet or work-from-home opportunities. This required providers to obtain a category 3 Investment Services license and conform to MiFID's minimum capital requirements; there are, however, some remedies available for their unregistered offerings. If you buy the binary option right then, you will pay $44.50; it would be virtually impossible for a novice or even an intermediate trader to achieve the goal. Traders place wagers as to whether the market will rise or fall; one operator reached multi-million settlements with U.S. authorities. If a binary options trading platform is offering to buy or sell securities, effecting transactions in securities, and/or receiving transaction-based compensation (such as commissions), it likely should be registered, whether it presents itself as an investment banker, investment advisor, analyst or underwriter. Traders use this perspective to make predictions about future trends. The bid and offer fluctuate, and that is when strategies come into play.
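The asymmetric payout described above can be made concrete with a short calculation. This is my own illustrative sketch, not taken from any broker's materials; the payout percentages used below are assumptions for the example.

```python
# Break-even win rate for a fixed-payout binary option.
# If a winning trade returns `payout` (as a fraction of the stake)
# and a losing trade forfeits the whole stake, the expected value is
#   EV = p * payout - (1 - p) * 1.
# Setting EV = 0 gives the minimum win rate p needed just to break even.

def breakeven_win_rate(payout: float) -> float:
    """Win probability at which expected profit is zero."""
    return 1.0 / (1.0 + payout)

# With a hypothetical 80% payout, you must win more than about 55.6%
# of trades just to break even -- worse than a coin flip.
for payout in (1.0, 0.8, 0.7):
    print(f"payout {payout:.0%}: break-even win rate {breakeven_win_rate(payout):.1%}")
```

This is the arithmetic behind the warning that an out-of-the-money loss (roughly 100%) outweighs an in-the-money gain (roughly 50–80%): any payout below 100% pushes the break-even win rate above 50%.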
TITLE: Is there such a thing as an Even Matrix? QUESTION [4 upvotes]: An even function is one in which $f(x)=f(-x)$. For two variables I believe this is $f(x,y)=f(-x,-y)$ If I wish to make a 2D even matrix how would I do this? $$ \begin{matrix} (0,0) & (0,1) \\ (1,0) & (1,1) \end{matrix}$$ Looking at the indices I can't see any pattern that would allow a even matrix. REPLY [5 votes]: We are looking for a linear transformation $T : V \to W$ such that $T(u) = T(-u)$, or equivalently that $T(u) - T(-u) = 0$. Since $T$ is linear, we have \begin{align} T(u) - T(-u) &= T(u) + T(u) \\ &= T(u + u) \\ &= 2T(u), \end{align} so we are looking for $T$ such that for all $u \in V$, $2T(u) = 0$, i.e., $$ T(u) = 0. $$ Only one such transformation exists, namely: $T = 0$.
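The conclusion of the reply can be checked numerically. The sketch below is my own illustration (using NumPy, over the reals): it tests the evenness condition $Av = A(-v)$ on random vectors and confirms that only the zero matrix passes.

```python
import numpy as np

# A linear map T is "even" iff T(v) = T(-v) for all v.
# Since T(-v) = -T(v) by linearity, this forces 2*T(v) = 0, i.e. T = 0.

def is_even_map(A: np.ndarray, trials: int = 100, tol: float = 1e-12) -> bool:
    """Empirically test T(v) = T(-v) on random vectors."""
    rng = np.random.default_rng(0)
    for _ in range(trials):
        v = rng.standard_normal(A.shape[1])
        if not np.allclose(A @ v, A @ (-v), atol=tol):
            return False
    return True

zero = np.zeros((2, 2))
generic = np.array([[0.0, 1.0], [1.0, 1.0]])

print(is_even_map(zero))     # the zero map is (trivially) even
print(is_even_map(generic))  # any nonzero matrix fails
```

Note the argument divides by 2, so it holds over the reals (or any field of characteristic other than 2); an entrywise condition like $a_{ij}=a_{-i,-j}$ would instead require index sets symmetric about zero, which ordinary matrix indexing does not provide.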
We have seen remarkable progress with our puppy Tanya, who started training with A Pack Nation at 9 weeks of age. We never knew you could start at such an early age until a friend told us about her experience working with Peter. She is going to be a great dog! Nikki, Austin, Texas
John 6:53 Q: Can you explain John 6:53, “Except ye eat the flesh of the son of man, and drink his blood ye have no life in you?” Is this a reference to the Lord’s Supper? A: Treasure speaks of what is extremely precious and valuable. When we open the scriptures, we find different treasures that occupy the heart (Mt. 6:21). Let’s briefly mention three of them and then discuss treasures in heaven more fully. - Treasures of Egypt speaks of the world (Heb. 11:26). - Treasure in a field speaks of the saints of God (Mt. 13: 44). - Treasure in earthen vessels speaks of the light of the Glory of God (2 Cor. 4:7). If we read the context of this quotation (vv. 50-58), we find that the Lord speaks seven times of eating Himself or of His flesh as the living bread, and three times of drinking His blood. The thought is to “take into oneself,” to appropriate, to make it my own; and this is done by faith. As I take in the Lord (eat) I make Him mine, He becomes food and drink for my body. The body is strengthened - it has life, and this life is eternal (v. 54). It is good to notice that the word “eat” in verses 51 and 53 is better translated “shall have eaten” (J. N. Darby translation); that is a once-for-ever eating, and life is the result. But then in the following verses it is “eat,” a continual and regular appropriating. So, “I have eaten the flesh,” that is, I have accepted the Lord Jesus as my Savior and taken Him as Lord into my being, for the obtaining of life; and now in the habitual “eating Him” I sustain that life, here in this world: “The just shall live by faith” (Gal. 3:11). In order to keep my Christian life healthy I must keep it fed, and this is done by “eating the Lord Jesus,” that is reading His word and learning of Him, meditating upon Him and so taking Him into my being. Notice that this leads to the thought of resurrection (v. 54) and the world to come (v. 58). 
Is it not wonderful to know that because the Lord Jesus is living now at God’s right hand, I have life; and that life is the same life He had when here in this world, which He has now in the heights of glory, and which I will display in the day of coming glory when He will reign as King of Kings and Lord of Lords! By comparison, eating the Lord’s Supper has rather the thought of remembering Him in His absence, until He come, and introduces the thought of public testimony. “This is my body, given for you,” brings in the assembly; it is a collective act, whilst eating of His flesh is individual. J.A.Pickering
"The Nobunaga's Ambition series defined the simulation genre when it first debuted in 1983. The engrossing gameplay in many ways set the standard by which all other console simulation games would be measured," said Amos Ip, Senior Vice President at Koei Corporation. The game is produced by Kou Shibusawa, the mastermind behind the legendary Romance of the Three Kingdoms and P.T.O. historical simulation series. The game is rated "T" (Teen - Alcohol Reference, Mild Language, Violence) by the ESRB.
TITLE: All the routes from $(0,0)$ to $(6,6)$ such that no route passes $y = x+2$ - Catalan numbers QUESTION [2 upvotes]: A "legal" move is either one step up or one step right on the $X,Y$ lattice. How can I somehow "turn" this question into the regular "a route cannot touch $y = x$" setting? I'd love to get some insight.
152,849
Cold and Flu Season for Your Dog Too! Did you know that dogs can come down with the flu virus just like humans? Dogs cannot catch the human version of the virus, but there are currently two identified canine influenza strains known as H3N8 and H3N2, and both strains have been identified in dogs in our area. The H3N8 strain has been around for several years but H3N2, an Asian strain of the virus, is brand new in the United States which means dogs here have not been exposed to it before and have no immunity. When infected with the flu, dogs experience fever, coughing, discharge from the nose or eyes, loss of appetite, and lethargy or lack of energy. In sick, old and debilitated dogs, influenza can lead to secondary infections and even death. A dog may have the H3N2 canine influenza virus for up to 24 days, which means the dog is contagious and spreading the disease that entire time. As a result, the virus can spread quickly among social dogs in inner cities, doggie daycares, boarding facilities, dog parks, sporting and show events, and any other location where dogs commingle. Both strains spread easily by direct contact with infected dogs (sniffing, licking, nuzzling), through the air (coughing, barking, or sneezing), and by contact with contaminated objects such as dog bowls and clothing. To help prevent the spread of canine influenza, please consider vaccinating your dog against the virus immediately. At Las Tablas Animal Hospital, we carry the Bivalent Canine Flu Vaccine, which will protect your pet against both strains. If your dog receives the Bordetella “kennel cough” vaccine, we recommend vaccinating your dog against the canine influenza virus as well, since both diseases are commonly found in the same environments. Schedule an appointment online or call us at (805) 357-9411 to get your dog vaccinated today.
\begin{document} \author[Lai]{Ru-Yu Lai} \address{School of Mathematics, University of Minnesota, Minneapolis, MN 55455, USA} \curraddr{} \email{rylai@umn.edu} \author[Zhou]{Hanming Zhou} \address{Department of Mathematics, University of California Santa Barbara, Santa Barbara, CA 93106-3080, USA} \curraddr{} \email{hzhou@math.ucsb.edu} \thanks{\textbf{Key words}: Inverse problems, boundary determination, global uniqueness, dipole} \title[Global determination for an inverse problem from vortex dynamics]{Global determination for an inverse problem from the vortex dynamics} \date{\today} \begin{abstract} We consider the problem of reconstructing a background potential from the dynamical behavior of a vortex dipole. We prove that under suitable conditions, one can uniquely reconstruct a real-analytic potential by measuring the entrance and exit positions as well as travel times between boundary points. In particular, the work removes the flatness assumption on the potential from the earlier result. A key step of our method is a constructive procedure for recovering the boundary jet of the potential. \end{abstract} \maketitle \section{Introduction} We study the reconstruction of a background potential from the dynamics of vortices. We consider a pair of vortices $\{a_+, a_-\}$ of opposite charge (vortex dipole) governed by the following ODE system in $\R^2$: \begin{align}\label{dipole ode} \left\{ \begin{array}{rclrcl} \dot{a}_+(s) = {1\over \pi} { (a_+ - a_-)^\perp \over |a_+ - a_-|^2 } + \nabla^\perp Q(a_+),\\ [.5em] \dot{a}_-(s) = {1\over \pi} { (a_+ - a_-)^\perp \over |a_+ - a_-|^2 } - \nabla^\perp Q(a_-), \end{array}\right. \end{align} where $\dot{a}_\pm(s):={d\over ds}a_\pm(s)$ and $Q:\mathbb R^2\to \mathbb R$ is the background potential, see \cite{LSSU} for more details about the system.
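For orientation, it may help to record the simplest special case (a standard observation about point-vortex dipoles, not a result from \cite{LSSU}): when $Q \equiv 0$ the two equations in \eqref{dipole ode} coincide, so the dipole translates rigidly.

```latex
% Trivial potential: with Q = 0, both vortices share the same velocity,
\dot a_+(s) = \dot a_-(s)
  = \frac{1}{\pi}\,\frac{(a_+ - a_-)^\perp}{|a_+ - a_-|^2},
\qquad
\frac{d}{ds}\bigl(a_+(s) - a_-(s)\bigr) = 0 .
% Hence the separation d := a_+ - a_- is constant, and the pair travels
% at constant speed 1/(pi |d|) in the fixed direction d^perp / |d|,
% perpendicular to the dipole axis: both trajectories are parallel lines.
```

Heuristically, for the trivial potential the exit positions and travel times only determine the separation $|a_+ - a_-|$; it is the bending of the trajectories by the terms $\nabla^\perp Q(a_\pm)$ that carries information about the potential.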
Here for a function $w:\mathbb{R}^2\rightarrow \mathbb{R}$ and for a vector $u=(u_1,u_2)$ in $\mathbb{R}^2$, we denote $$\nabla^\perp w = (\p_2w,\,-\p_1w),\qquad u^\perp=(u_2,\,-u_1),$$ respectively, where $\p_j={\p\over\p x_j}$ for $j=1,2$. This ODE system \eqref{dipole ode} is linked to an inhomogeneous Gross-Pitaevskii equation in $\R^2$ in a critical asymptotic regime where vortices interact with both the background potential and each other \cite{MTKFCSFH, SKS, SMKS, Torres, torres2011dynamics}. This connection was rigorously proved in \cite{KMS}. Note that these vortex-vortex and vortex-potential interactions have been studied experimentally \cite{freilich2010real, neely2010observation} and numerically \cite{MKFCS}. The main objective of our work is to study the inverse problem for the reconstruction of the background potential $Q(x)$ from the travel information of the dipole. The uniqueness result for this inverse problem was first proved in \cite{LSSU} when the potential is sufficiently smooth and flat, indicating the path of the dipole is close to straight lines. Specifically, in \cite{LSSU}, the potential is uniquely identified by using the trajectory of the center of mass of the dipole. A reconstruction formula for the potential and numerical examples are also investigated in \cite{LSSU}. This paper aims to release the smallness assumption on the gradient of the potential in \cite{LSSU} by considering a more general potential. Instead of using the information of the center of mass as in \cite{LSSU}, we will take measurements of travel trajectories of both vortices ($a_\pm$), including their positions of entrance and exit as well as the travel time. The detailed problem setting of this inverse problem is as follows. \subsection{Problem setup and main results} Let $\Omega\subset \mathbb R^2$ be a bounded convex open domain with smooth boundary $\p \Omega$, and $Q\in C^\infty(\mathbb R^2)$ be a background potential in $\mathbb R^2$. 
Let $U$ be an open neighborhood of $\Omega$, i.e. $\Omega\subset\subset U$, so that $U\setminus \Omega$ is a tubular neighborhood around $\Omega$. For the study of the inverse problem, we propose a measurement map $\mathcal{S}$ on $(\p\Omega\times (U\setminus \Omega))\setminus \Delta$ with respect to (w.r.t.) a background potential $Q$, where $\Delta:=\{(x,x):\, x\in\p\Omega\}$ is a subset of $\p\Omega \times (U\setminus \Omega)$. The definition of $\mathcal{S}$ is discussed as follows. For any $(x,y)\in(\p\Omega\times (U\setminus \Omega))\setminus \Delta$, we denote $a_\pm(s,x,y)$ to be the solutions of \eqref{dipole ode} with initial values $$a_+(0,x,y)=x,\qquad a_-(0,x,y)=y,$$ and, moreover, we define the function \begin{equation}\label{eqn:def_time} \begin{aligned} \tau_+:\quad & (\p\Omega\times (U\setminus \Omega))\setminus \Delta & \to\quad& [0,\infty)\cup \{\infty\} \end{aligned} \end{equation} to be the first nonnegative time when the vortex $a_+(\cdot, x,y)$ exits $\Omega$. In particular, if $a_+(\cdot, x,y)$ is trapped in $\Omega$, i.e. $a_+(s,x,y)\in \overline\Omega$ for all $s\geq 0$, then we define $\tau_+(x,y)=\infty$. Notice that the governing ODEs \eqref{dipole ode} become singular when the vortices $a_\pm$ collide. For the sake of simplicity, we assume that $$\tau_+(x,y)<\infty, \quad \forall (x,y)\in (\p\Omega\times (U\setminus \Omega)) \setminus \Delta,$$ and the dipole never collides in $\Omega$. Because of the local nature of our approach discussed in later sections, this assumption indeed does not impose any restriction to the main results of the current paper. 
Now we are ready to define the measurement map $\mathcal S$ as below: \begin{align*} \mathcal S:\quad (\p\Omega\times (U\setminus \Omega)) \setminus \Delta \quad \to\quad \big([0,\infty) \times \p\Omega\times (\mathbb R^2\setminus\Omega)\big)\cup \big([0,\infty)\times \p\Omega\big) \end{align*} and \begin{align}\label{eqn:def_S} &\mathcal S(x,y) \notag\\ &=\left\{ \begin{array}{ll} \big(\tau_+(x,y),\, a_+(\tau_+(x,y),x,y),\, a_-(\tau_+(x,y),x,y) \big), \quad \mbox{if}\; a_-(\tau_+(x,y),x,y)\notin \Omega,\\ [.5em] \big(\tau_+(x,y),\, a_+(\tau_+(x,y),x,y) \big), \quad \mbox{if}\; a_-(\tau_+(x,y),x,y)\in \Omega. \end{array}\right. \end{align} In particular, the definition of $\mathcal S$ says that we don't know the behavior of the dipole inside $\Omega$. As a matter of fact, we will only make use of those points $(x,y)$ such that $a_-(\tau_+(x,y),x,y)\notin\Omega$ in this paper. More specifically, the restriction of $\mathcal S$ on a subset of $(\p\Omega\times (U\setminus \Omega)) \setminus\Delta$ is sufficient for our approach to the reconstruction of the potential. \begin{remark} We make the following comment regarding the definition \eqref{eqn:def_S}: If we define the measurement operator $\mathcal S$ on $(\p\Omega\times \p\Omega )\setminus \Delta$, instead of the domain in \eqref{eqn:def_S}, then the initial velocities $\dot a_\pm(0,\cdot,\cdot)$ might not cover all the directions. For example, let $\Omega$ be a disk and $x\in \p \Omega$ be a fixed point and also let $Q\equiv 0$. It is clear to see that the velocities $\dot a_\pm(0,x,y)$ are never normal to $\p \Omega$ for any $(x,y)\in (\p\Omega\times \p\Omega)\setminus \Delta$. Moreover, in this paper we rely on those (local) trajectories which are almost tangent to the boundary to identify the behavior of $Q$ near $\p\Omega$. These local trajectories are available only when $x$ and $y$ are far away from each other on $\p\Omega$, since in this case they can make $\dot a_\pm(0,x,y)$ almost tangent to $\p\Omega$. 
However, in practice this could be difficult to arrange when the domain $\Omega$ is large. \end{remark} The inverse problem under study is the determination of the potential $Q$ in $\Omega$ from the measurements $\mathcal S$. As mentioned above, the unique determination of a background potential from the trajectories of a vortex dipole was studied in \cite{LSSU}, under the assumption that the potential is almost flat in a suitable space. The measurements taken in \cite{LSSU} concern the Hamiltonian flow of the dipole center, i.e. $(a_++a_-)/2$, while in the current paper, we measure the trajectories of $a_+$ and $a_-$ separately without knowing the phase (velocity vector) information. In other words, the inverse problem under consideration only uses phaseless data, which is more applicable in practice. The main strategy consists of two steps: local determination and global determination. We first show that all the derivatives of the potential $Q$ at a boundary point on $\p\Omega$, i.e. the boundary jet of $Q$, can be reconstructed from $\mathcal{S}$, see Theorem~\ref{boundary determination}. Then this local result yields the global reconstruction of $Q$ in Theorem~\ref{real analytic case}, due to the analyticity assumption on $Q$. To achieve this goal, we need an assumption on the convexity of the boundary w.r.t. the potential, which is stated in the following definition. \begin{definition}\label{def:intro} Let $p\in \p \Omega$. We say that the boundary $\p\Omega$ is {\it strictly convex at $p$ w.r.t. the background potential $Q$} if there exists a small open neighborhood $V_p$ of $p$, such that $a_+(s,p,q)\notin \overline\Omega$, $s\in (-\delta, \delta)\setminus\{0\}$ for some $\delta>0$, whenever $q\in V_p\setminus\Omega$, $q\neq p$, is such that $\dot a_+(0,p,q)$ is tangent to $\p\Omega$. \end{definition} For example, given a bounded domain $\Omega$ with strictly convex boundary (w.r.t.
the Euclidean metric) and $p\in\p\Omega$, we assume that $|\nabla Q(p)|$ is sufficiently small, then $\p\Omega$ is strictly convex at $p$ w.r.t. $Q$ (viewed as a small perturbation of trivial potential). Note that even though we define the convexity by looking at the trajectory $a_+$, Definition~\ref{def:intro} could also be defined in terms of $a_-$ due to the symmetry of $a_+$ and $a_-$ in the ODEs \eqref{dipole ode}. We have the following first theorem which states that the derivatives of the potential on the boundary $\p\Omega$ can be determined from $\mathcal S$ in a local manner. \begin{theorem}[Boundary determination]\label{boundary determination} Let $Q\in C^\infty(\mathbb R^2)$ be a background potential and $\Omega\subset \mathbb R^2$ be a bounded convex open domain with smooth boundary. Suppose that $\p \Omega$ is strictly convex w.r.t. $Q$ at $p\in \p\Omega$. There exists an open neighborhood $\widetilde\Omega$ of $\overline\Omega$. Suppose that $Q$ is known in $\mathbb R^2\setminus \widetilde \Omega$, then there exists an open neighborhood $W$ of $p$, $W\setminus \overline{\widetilde \Omega}\neq \emptyset$, so that the measurement $\mathcal S$ restricted in $W\setminus \Omega$ determines $$\p^\alpha Q(p) \qquad \hbox{ for all multi-indices $\alpha$ with }|\alpha|\geq 1.$$ \end{theorem} \begin{figure}[ht]\label{dipole figure} \begin{tikzpicture} [scale=.65] \draw (2,0) circle [radius=4.5]; \draw (0,0) to [out=90, in=180] (3,2); \draw (3,2) to [out=0, in=20] (2,-2); \draw (2,-2) to [out=190, in=-5] (1,-2); \draw (1,-2) to [out=170, in=270] (0,0); \draw (-.9,0) to [out=90, in=180] (3.3,2.9 ); \draw (3.3,2.9 ) to [out=0, in=20] (2.8,-2.6); \draw (2.8,-2.6) to [out=205, in=-8] (1,-2.8); \draw (1,-2.8) to [out=170, in=270] (-.9,0); \draw [->,>=latex, dotted, thick] (.41,-1.7) to [out=10, in=170] (2,-2); \draw [->,>=latex, dotted, thick] (.25,-3) to [out= 20, in=180] (2.2,-3.3); \filldraw (.41,-1.7) circle (1.5pt) node[below,font=\tiny] {$a_+$}; \filldraw 
(.25,-3) circle (1.5pt) node[below,font=\tiny] {$a_-$}; \filldraw (2,-2) circle (1.5pt) node[below,font=\tiny] {$a_+(\tau_+)$}; \filldraw (2.2,-3.3) circle (1.5pt) node[below,font=\tiny] {$a_-(\tau_+)$}; \draw (2,0) node[above,font=\large] {$ {\Large \Omega}$}; \draw (2,1.8) node[above] {$\widetilde\Omega$}; \draw (-.5,2.3) node[above,font=\large] {$U$}; \end{tikzpicture} \caption{ The figure illustrates the travel path of a dipole ($a_+$ with positive charge and $a_-$ with negative charge) with initial positions in $U\setminus\Omega$. The vortex $a_+$ reaches the boundary of the domain $\Omega$ at the exit time $\tau_+$. } \end{figure} The existence of $\widetilde \Omega$ is analyzed in Section~\ref{sec:tildeOmega}. Roughly speaking, the thickness of the tubular neighborhood $\widetilde \Omega\setminus \Omega$ has an upper bound depending on $\sup_{x\in\p \Omega} |\nabla Q(x)|$ and the size of the small open neighborhood $V_p$ in Definition~\ref{def:intro}. In particular, we can choose the open neighborhood $W$ from Theorem \ref{boundary determination} to be the same as $V_p$. Notice that $Q$ is not assumed to be known in $\widetilde{\Omega}\setminus\overline\Omega$; if $Q$ were given in $\widetilde{\Omega}\setminus\overline\Omega$, then one could determine all the derivatives of $Q$ on $\p \Omega$ by simply taking limits from outside of $\Omega$. It is worth emphasizing that the determination of the boundary jet of $Q$ from $\mathcal S$ is purely local. More specifically, to determine $\p^\alpha Q(p)$ for all $|\alpha|\geq 1$, one only needs the measurements $$\{\mathcal S(x,y): (x,y)\in (W\cap \p\Omega)\times (W\setminus \Omega), x\neq y\}\cap ([0,\delta) \times \p\Omega\times (W\setminus\Omega))$$ for some small constant $0<\delta\ll 1$ and a small open neighborhood $W$ of $p\in\p\Omega$ so that $W\setminus \overline{\widetilde \Omega}\neq \emptyset$.
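To make the measurement map $\mathcal S$ concrete, the following minimal numerical sketch integrates the dipole system and reports the exit data $(\tau_+, a_+(\tau_+), a_-(\tau_+))$. All specifics here are illustrative assumptions, not taken from the paper: $\Omega$ is the unit disk centered at $(0,1)$, $Q$ is a small Gaussian bump, and a fourth-order Runge-Kutta scheme with naive step-by-step event detection stands in for an exact solver.

```python
import math

# Hypothetical smooth potential: a small Gaussian bump (illustrative only).
EPS, CX, CY = 0.05, 0.2, 0.8

def grad_Q(x, y):
    e = math.exp(-((x - CX) ** 2 + (y - CY) ** 2))
    return (-2.0 * EPS * (x - CX) * e, -2.0 * EPS * (y - CY) * e)

def perp(v):
    # (v1, v2)^perp = (v2, -v1), the sign convention consistent with the
    # coordinate formulas for the initial velocity of a_+ in this paper
    return (v[1], -v[0])

def rhs(state):
    # state = (a_+, a_-) in R^4; right-hand side of the dipole ODE system
    ax, ay, bx, by = state
    dx, dy = ax - bx, ay - by
    r2 = dx * dx + dy * dy
    core = (dy / (math.pi * r2), -dx / (math.pi * r2))  # (1/pi) d^perp / |d|^2
    gp, gm = grad_Q(ax, ay), grad_Q(bx, by)
    return (core[0] + gp[1], core[1] - gp[0],   # a_+' = core + grad^perp Q(a_+)
            core[0] - gm[1], core[1] + gm[0])   # a_-' = core - grad^perp Q(a_-)

def rk4_step(state, h):
    k1 = rhs(state)
    k2 = rhs(tuple(x + 0.5 * h * k for x, k in zip(state, k1)))
    k3 = rhs(tuple(x + 0.5 * h * k for x, k in zip(state, k2)))
    k4 = rhs(tuple(x + h * k for x, k in zip(state, k3)))
    return tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for x, a, b, c, d in zip(state, k1, k2, k3, k4))

def measure(a_plus0, a_minus0, h=1e-3, s_max=10.0):
    # Integrate until a_+ leaves Omega = B((0,1),1) after entering it;
    # returns (tau_+, a_+(tau_+), a_-(tau_+)), mimicking the map S.
    inside = lambda p: p[0] ** 2 + (p[1] - 1.0) ** 2 < 1.0
    state, entered, s = a_plus0 + a_minus0, False, 0.0
    while s < s_max:
        state = rk4_step(state, h)
        s += h
        now_in = inside(state[:2])
        if now_in:
            entered = True
        if entered and not now_in:
            return s, state[:2], state[2:]
    raise RuntimeError("no exit detected")

# usage: measure((0.0, 0.0), (0.3, -0.4)) starts a_+ at the boundary point
# p = (0,0) with a_- below it, so that a_+ enters and later exits the disk.
```

The step-by-step exit detection only locates $\tau_+$ up to one time step; a bisection refinement would sharpen it, but is omitted to keep the sketch short.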
Theorem \ref{boundary determination} immediately implies the global determination of real-analytic background potentials. Notice that a convex domain is path-connected. \begin{theorem}[Global determination]\label{real analytic case} Let $Q\in C^\infty(\mathbb R^2)$ be a background potential and let $\Omega\subset \mathbb R^2$ be a bounded convex open domain with smooth boundary. Suppose that $\p \Omega$ is strictly convex w.r.t. $Q$ at some point on $\p\Omega$. There exists an open neighborhood $\widetilde\Omega$ of $\overline\Omega$. Suppose, in addition, that $Q$ is known in $\mathbb R^2\setminus \widetilde \Omega$ and real-analytic in $\widetilde \Omega$. Then for any open neighborhood $U$ of $\overline{\widetilde\Omega}$, the measurement $\mathcal S$ in $U\setminus\Omega$ determines $Q$ in $\widetilde\Omega$. \end{theorem} \subsection{Methodology} In this paper, we are motivated by the techniques developed in the study of a classical geometric inverse problem, namely the {\it boundary rigidity problem}, which consists of determining a Riemannian metric on a compact smooth manifold with boundary from the collection of distances (w.r.t. the metric) between pairs of boundary points. This problem arises naturally from the geophysical question of recovering the inner structure of the Earth from the travel time of seismic waves. We refer interested readers to the survey paper \cite{SUVZ19} and the references therein for recent developments on the boundary rigidity problem and related problems. A key ingredient in the proof of Theorem \ref{boundary determination} is an integral identity first derived by Stefanov and Uhlmann \cite{SU3} in the study of the boundary rigidity problem for almost flat metrics. The integral identity works for general flows given by the solutions of a governing ODE system.
In our case, it connects the difference of the measurement operators of two potentials $Q_1$ and $Q_2$ with a weighted integral of the difference of the Hamiltonian vector fields of $Q_1$ and $Q_2$, see \eqref{F2int2}. In particular, we take $Q_2=0$, the trivial potential, and repeatedly differentiate the integral to recover all the derivatives of $Q_1$ at a boundary point. Notice that our approach to the boundary determination of the potential is constructive. The integral identity has also been used in the study of boundary and global determination questions in various inverse problems, e.g. the boundary rigidity problem \cite{CQUZ, SUV, SUV17,UW03, UYZ20, W}, the non-abelian Radon transform \cite{PSUZ, Zh17} and the lens rigidity problem for Yang-Mills potentials \cite{PUZ19,Zh18}. The paper is structured as follows. In Section~\ref{sec:preliminary}, we discuss the construction of the domain $\widetilde{\Omega}$ and the preliminary setting for the problem under study. Section~\ref{sec:local reconstruction} is devoted to recovering the derivatives of the potential at a given boundary point by applying the Stefanov-Uhlmann identity. Then in Section~\ref{sec:global reconstruction}, the global determination of the potential follows by combining the local result of Section~\ref{sec:local reconstruction} with the analyticity of the potential. Finally, additional useful lemmas are collected in Appendix~\ref{sec:appendix}. \section{Some preliminary analysis}\label{sec:preliminary} In this section, we first show that there exists a domain $\widetilde{\Omega}\supset\Omega$ such that one can choose the initial position of $a_-$ outside of $\widetilde{\Omega}$ to make the initial velocity of $a_+$ tangent to the boundary $\p\Omega$. Then, based on the assumption that $Q$ is known in $\R^2\setminus\widetilde{\Omega}$, the quantity $\nabla^\perp Q(a_-(0))$ is known.
Next, with the help of the boundary normal coordinates, we show that the exit time of $a_+$ in $\Omega$ satisfies a useful property stated in Lemma~\ref{derivative of exit time}. \subsection{Construction of $\widetilde{\Omega}$}\label{sec:tildeOmega} Fix a boundary point $p\in\p\Omega$. Since $\Omega$ is convex, after a translation and rotation one can assume that the $x_1$-axis is tangent to $\p\Omega$ at $p$ and the $x_2$-axis is normal to $\p\Omega$ at $p$. Moreover, we require $\p_{2}:={\p\over \p x_2}$ to be inward pointing so that $\Omega\subset \{x_2>0\}$ (note that $\Omega$ is convex). From now on, we will use the notation $\0$ to denote the zero in coordinates and use $0$ to denote the scalar zero. Without loss of generality, we now suppose $p=\0$. We consider the following evolution equations of a dipole $\{a_+, a_-\}$ in these coordinates, with initial positions $\0$ and $\xi$, respectively: \begin{align}\label{dipolepair 2} \left\{\begin{array}{l} \dot{a}_+(s) = {1\over \pi} { (a_+ - a_-)^\perp \over |a_+ - a_-|^2 } + \nabla^\perp Q(a_+),\\ [.5em] \dot{a}_-(s) = {1\over \pi} { (a_+ - a_-)^\perp \over |a_+ - a_-|^2 } - \nabla^\perp Q(a_-),\\ [.5em] a_+(0) = \0,\\ [.5em] a_-(0) = \xi,\\ \end{array}\right. \end{align} where $\xi=(\xi_1,\xi_2)\notin \Omega$ and $\xi\neq \0$. We are interested in the dependence of the initial velocity $\dot a_+(0)$ on the initial value $a_-(0)$, and we also require $\dot a_+(0)$ to be almost tangential to $\p\Omega$ in order to address the boundary determination question in a later section. To this end, we first write $a_\pm=(a^1_\pm, a^2_\pm)$; then the first equation of \eqref{dipolepair 2} at $s=0$ reads, in coordinates, \begin{align}\label{evolution local} \dot a^1_+(0) =\frac{-\xi_2}{\pi |\xi|^2}+\p_2 Q(\0), \quad \dot a^2_+(0) =\frac{\xi_1}{\pi |\xi|^2}-\p_1 Q(\0). \end{align} Here $|\xi|$ is the Euclidean distance between $p=\0$ and $\xi$. Next we assume that $\dot a_+^2(0)=0$, i.e.
$\dot a_+(0)\neq \0$ is tangent to $\p\Omega$, then \eqref{evolution local} gives $$\frac{\xi_1}{\pi |\xi|^2}= \p_1 Q(\0).$$ We split the discussion into the following three cases: \begin{enumerate} \item When $\p_1 Q(\0)=0$, we have ${\xi_1\over \pi|\xi|^2}=0$, which implies $\xi_1=0$. Then we can choose almost any $\xi_2<0$ such that $$\dot a_+^2 (0)=0, \quad \dot a_+^1(0)\neq 0 \hbox{ and $\xi=(0,\xi_2)\notin \Omega$}.$$ \item When $\p_1 Q(\0)> 0$, we have $\xi^2_2=\xi_1(\frac{1}{\pi \p_1 Q(\0)}-\xi_1)$. Then we can choose almost any $0<\xi_1<\frac{1}{\pi \p_1 Q(\0)}$ such that $$\dot a_+^2 (0)=0, \quad \hbox{$\dot a_+^1(0)\neq 0$} \hbox{ and }\hbox{$\xi=(\xi_1,\xi_2)\notin \Omega$ (with $\xi_2<0$)}. $$ \item The case $\p_1 Q(\0)<0$ leads to a similar conclusion as case (2). \end{enumerate} In particular, in cases $(2)$-$(3)$ (that is, $\p_1 Q(\0)\neq 0$), if $0<|\xi_1|<\frac{1}{\pi |\p_1 Q(\0)|}$, then $\frac{-1}{2\pi |\p_1 Q(\0)|}\leq \xi_2<0$. Moreover, if we have the upper bound $\sup_{x\in\p\Omega}|\nabla Q(x)|\leq M$ for some $M>0$, then one can always make $\frac{-1}{2\pi M}<\xi_2<\frac{-1}{4\pi M}<0$, independent of $p\in \p\Omega$. We note that for case (1), this bound for $\xi_2$ is straightforward. This implies the existence of an open neighborhood $\widetilde \Omega$ of $\Omega$ which is defined as follows: $$\widetilde \Omega:=\{x\in\mathbb R^2\; |\; \mbox{dist}\,(x, \Omega)< \min\; \{ (4\pi M)^{-1}, \sigma\}\}$$ with $0<\sigma\ll 1$ a fixed number so that $V_p\setminus \overline{\widetilde\Omega}\neq \emptyset$. Here $V_p$ is the open neighborhood appearing in Definition \ref{def:intro} of the convexity at $p=\0\in \p\Omega$ w.r.t. $Q$. In other words, $\widetilde \Omega\setminus \Omega$ is a tubular neighborhood around $\Omega$. Therefore, in all three cases, we can always find a vector $\xi\notin \overline{\widetilde \Omega}$ so that $\dot a_+(0)\neq \0$ is tangent to $\p\Omega$.
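The case analysis above can be turned into a small routine that, given $g=\p_1 Q(\0)$ and the bound $M$, produces an admissible $\xi$ and checks the tangency condition $\dot a_+^2(0)=0$ via the coordinate formulas for the initial velocity. The particular selection $\xi_1=\frac{1}{2\pi g}$ (the midpoint of the admissible range, which forces $\xi_2=-|\xi_1|$) is one convenient choice among many; it is an assumption of this sketch, not prescribed by the text.

```python
import math

def tangential_xi(g, M):
    """Pick xi = (xi1, xi2) with xi2 < 0 (hence outside Omega, which lies in
    {x2 > 0} near p = 0) so that the tangency condition
        xi1 / (pi * |xi|^2) = g,   g := d_1 Q(0),
    holds.  Follows cases (1)-(3) above; assumes |g| <= M with M > 0."""
    if g == 0.0:
        # case (1): xi1 = 0, any xi2 < 0 works; pick one inside the band
        return (0.0, -1.0 / (4.0 * math.pi * M))
    xi1 = 1.0 / (2.0 * math.pi * g)     # cases (2)-(3): midpoint choice
    # then xi2^2 = xi1 * (1/(pi*g) - xi1) = xi1^2, and we take xi2 < 0
    return (xi1, -abs(xi1))

def a_plus_dot0(xi, g1, g2):
    # initial velocity of a_+ from the coordinate formulas above,
    # with (g1, g2) = grad Q(0)
    xi1, xi2 = xi
    r2 = xi1 * xi1 + xi2 * xi2
    return (-xi2 / (math.pi * r2) + g2, xi1 / (math.pi * r2) - g1)
```

A quick check: for each sign of $g$, the second velocity component vanishes (tangency) while the first stays nonzero, exactly as required in the three cases.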
By continuity, one can also find $\xi\notin \overline{\widetilde \Omega}$ such that $\dot a_+(0)$ is almost tangent to $\p\Omega$. \subsection{Behavior near a convex boundary point}\label{sec:boundary jet} Suppose that the dipole $\{a_+, a_-\}$ with initial positions $\0$ and $\xi(t)$, respectively, satisfies the following problem: \begin{align}\label{dipolepair} \left\{\begin{array}{l} \dot{a}_+(s;t) = {1\over \pi} { (a_+ - a_-)^\perp \over |a_+ - a_-|^2 } + \nabla^\perp Q(a_+),\\ [1em] \dot{a}_-(s;t) = {1\over \pi} { (a_+ - a_-)^\perp \over |a_+ - a_-|^2 } - \nabla^\perp Q(a_-),\\ [1em] a_+(0;t) = \0,\\ [1em] a_-(0;t) = \xi(t).\\ \end{array}\right. \end{align} Section~\ref{sec:tildeOmega} shows that there exists an open bounded domain $\widetilde{\Omega}$ containing $\overline\Omega$ so that the following is valid: there exists nontrivial initial data $\xi_0\notin \overline{\widetilde \Omega}$ of $a_-$ such that $\dot a_+(0;0)$ is tangent to $\p\Omega$. Moreover, by continuity, there exists a smooth curve $$ \xi (\cdot):[0,\delta)\to \R^2\setminus \overline{\widetilde\Omega} $$ for small $0<\delta\ll1$ so that $\dot a_+(0;t)$ is almost tangent to $\p\Omega$ and $\dot a_+^2(0;t)>0$ (i.e. inward pointing) for $t\in (0,\delta)$. The convexity of $\Omega$ w.r.t. $Q$ at $p=\0$ yields that for each $t\in (0,\delta)$, $a_+(\cdot;t)$ exits $\Omega$ at some point $c(t)\in\p\Omega$ which is close to $p$ and satisfies $$c(t)\to p \qquad \hbox{ as }t\to 0.$$ Moreover, the exit time $\ell(t)$, i.e. $a_+(\ell(t);t)=c(t),$ is smooth and satisfies $$\ell(t)\to 0 \qquad\hbox{ as } t\to 0,$$ while $a_-(s;t)\in \R^2\setminus \widetilde\Omega$ for $s\in [0,\ell(t)]$. Note that for simplicity of notation, here we denote $\ell(t):=\tau_+(\0,\xi(t))$, where $\tau_+(\0,\xi)$ is defined in \eqref{eqn:def_time}. It is worth pointing out that $c(t)$ and $\ell(t)$ can be determined from the measurements $\mathcal S(\0,\xi(t))$.
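As a sanity check in the simplest possible situation (a hypothetical special case, not used in the proofs): for the trivial potential $Q\equiv 0$ the vortex $a_+$ travels along a straight line, and for $\Omega$ the unit disk centered at $(0,1)$ with $p=\0$, the chord computation gives the exit time in closed form. With $\dot a_+(0;t)=\varepsilon(\alpha(t),\beta t)$ of modulus $\varepsilon$, one finds $\ell(t)=2\beta t/\varepsilon$, hence $\ell'(0)=2\beta/\varepsilon>0$.

```python
import math

def exit_time(v):
    # straight-line travel a_+(s) = s*v from p = (0,0); the exit time from
    # the unit disk centered at (0,1) solves |s*v - (0,1)|^2 = 1, i.e.
    # s^2 |v|^2 - 2 s v2 = 0, giving s = 2 v2 / |v|^2
    v1, v2 = v
    return 2.0 * v2 / (v1 * v1 + v2 * v2)

def ell(t, eps=0.5, beta=1.0):
    # initial velocity of the form eps*(alpha(t), beta*t) with modulus eps
    alpha = math.sqrt(1.0 - (beta * t) ** 2)
    return exit_time((eps * alpha, eps * beta * t))
```

Since $|v|=\varepsilon$ exactly, `ell(t)` agrees with $2\beta t/\varepsilon$ to machine precision, and the difference quotient at $t=0$ is positive, illustrating the behavior established in Lemma~\ref{derivative of exit time}.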
\subsubsection{Boundary normal coordinates.} To connect the behavior of the dipole with the measurement $\mathcal S$ near the boundary, it is helpful to introduce the {\it boundary normal coordinates}. Consider a neighborhood $W$ of a boundary point $p\in\p\Omega$, equipped with the boundary normal coordinates $\{z^1, z^2\}$, so that $$\p\Omega\cap W=\{z^2=0\},\qquad W\cap \Omega\subset\{z^2>0\},$$ and the curves $z^1=c$ ($c$ a constant) are straight lines normal to the boundary $\p \Omega$. Then the boundary near $p$ is straightened in such coordinates. Moreover, in the boundary normal coordinates, the Euclidean metric takes the form $$e=f(z^1,z^2) (dz^1)^2+(dz^2)^2.$$ Here $f$ is a positive (local) function with $f(\0)=1$. We also take $p=\0$ in these coordinates. Since $\Omega$ is a convex domain, one can always make the neighborhood $W$ large enough outside $\Omega$ so that there exists $\xi\in W\setminus \overline{\widetilde \Omega}\neq \emptyset$ with $\dot a_+(0)$ tangent to $\p\Omega$ at $\0$. Suppose that $\p\Omega$ is strictly convex w.r.t. $Q$ at $p=\0$. In the boundary normal coordinates, the definition of the strict convexity of $\p\Omega$ w.r.t. $Q$ at $\0$ can be interpreted as follows: if $\{a_+,a_-\}$ are solutions of \eqref{dipolepair 2} with $\dot a_+(0)$ tangent to $\p\Omega$, then in the boundary normal coordinates, one has $$\dot a^2_+(0)=0,\quad \ddot a^2_+(0)<0.$$ In the lemma below, we will show that the exit time $\ell(t)$ is increasing locally at $t=0$ with the help of the boundary normal coordinates. \begin{lemma}\label{derivative of exit time} There exists $\xi (\cdot):[0,\delta)\to W\setminus \overline{\widetilde\Omega}$ with $\xi(0)=\xi_0$, such that $\ell'(0)>0$. \end{lemma} \begin{proof} We now denote $a_+(s;t)=:(a_+^1(s;t), a_+^2(s;t))$ in the boundary normal coordinates.
By continuity, there exists $\xi(\cdot):[0,\delta)\to W\setminus \overline{\widetilde\Omega}$ with $\xi(0)=\xi_0$ and a constant $\varepsilon>0$, such that \begin{equation}\label{choice of xi(t)} \dot{a}_+(0;t)=(\dot{a}_+^1(0;t),\,\dot{a}_+^2(0;t))=\varepsilon (\alpha(t),\, \beta t), \quad \beta>0, \end{equation} where $\alpha(t)^2+\beta^2t^2 =1$. In particular, if $t=0$, then $ |\alpha(0)|=1. $ By the strict convexity w.r.t. $Q$ at $p$, it is clear that $\ell(0)=0$. To show $\ell'(0)>0$, we first show $\ell'(0)\neq 0$ by a contradiction argument. Suppose that $\ell'(0)= 0$. We can then write $\ell(t)$ asymptotically for small $t$ as \begin{align}\label{ell expansion} \ell(t) =\ell(0)+\ell'(0)t+O(t^2)= O(t^2). \end{align} When $s$ is sufficiently small, we have the asymptotic expansion of $a_+$ as follows: $$ a_+(s;t)= a_+(0;t) +\dot{a}_+(0;t) s +O(s^2). $$ We note that the second component $a_+^2(\cdot;t)$ always vanishes at $s=\ell(t)$ in the boundary normal coordinates (the boundary is straightened near $p$), namely, $$ 0\equiv a_+^2(\ell(t);t)=\dot{a}_+^2(0;t) \ell(t) +O(\ell(t)^2), $$ and, moreover, we then have \begin{align}\label{ell expansion 2} 0= \varepsilon\beta t\,\ell(t) +O(\ell(t)^2) \end{align} for $t\in[0,\delta)$ for some small $\delta>0$. Since $\beta>0$, the expansion \eqref{ell expansion} implies that \eqref{ell expansion 2} cannot hold for all $t$ in $[0,\delta)$: indeed, as $\ell(t)>0$ for $t>0$ and $\ell(t)=O(t^2)$, we have $\ell(t)^2=o(t\,\ell(t))$, so the leading term $\varepsilon\beta t\,\ell(t)>0$ cannot be cancelled. This leads to a contradiction and thus $\ell'(0)\neq 0$. Finally, since $\ell(t)>0$ for $t>0$, we obtain that $\ell'(0)>0$. \end{proof} \begin{remark} Lemma \ref{derivative of exit time} shows the existence of $\xi(t)$ with $\ell'(0)\neq 0$. As one will see in Section \ref{sec:1 derivative}, this is sufficient for determining $\nabla Q(\0)$ from $\mathcal S$. Then using the ODEs \eqref{dipolepair}, we can determine the relation between $\xi(t)$ and $\dot a_+(0;t)$.
In particular, we are able to choose $\xi(t)$ so that $\ell'(0)\neq 0$ and $\dot a_+(0;t)$ has exactly the form in \eqref{choice of xi(t)}. \end{remark} \section{Determination of the boundary jet}\label{sec:local reconstruction} In this section, we show the local reconstruction of $Q$ from the measurement $\mathcal S$. This section consists of two main parts. In Section~\ref{sec:SU identity}, we first introduce the Stefanov-Uhlmann identity. This identity connects the data $\mathcal{S}$ to the discrepancy between the vortex dynamics in a given potential (we will take it to be the trivial potential $Q_0$) and in the unknown potential $Q$. Then we apply this identity to recover the potential $Q$ in Section~\ref{sec:jet}. \subsection{Stefanov-Uhlmann identity}\label{sec:SU identity} Following the setting at the beginning of Section~\ref{sec:boundary jet}, we now denote the initial boundary data by $$ \phi(t):= (\0,\, \xi_t),\qquad \xi_t:=\xi(t) \quad \hbox{for small }t. $$ We denote the trajectory of the dipole in the background $Q$ by $$ X(s,\phi(t)) = (a_+(s;t),\,a_-(s;t)). $$ Then $X(0,\phi(t))=\phi(t)$. When $Q\equiv 0$ (we denote the trivial potential by $Q_0$), we use the notation $$X_0(s,\phi(t))=(a^0_+(s;t),\,a^0_-(s;t))$$ to denote the trajectory of the dipole in this trivial background. Specifically, from the ODEs \eqref{dipolepair}, one can derive the exact expression of $a^0_\pm(s;t)$ as follows: $$ a^0_+(s;t)= {-\xi_t^\perp \over \pi|\xi_t|^2} s,\qquad a^0_-(s;t)= a^0_+(s;t)+\xi_t. $$ Note that since $X_0(s,\phi(t))$ is the path of the dipole in the trivial background, we actually know its trajectory at any time $s$. Then $X_0(\ell(t),\phi(t))$ is indeed known data.
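The explicit free-dipole trajectory can also be checked numerically: since $a^0_+-a^0_-\equiv -\xi_t$ is constant, both vortices translate with the constant velocity $-\xi_t^\perp/(\pi|\xi_t|^2)$. The sketch below (with the sign convention $v^\perp=(v_2,-v_1)$, consistent with the coordinate formulas earlier in the paper) compares the closed-form expression with a Runge-Kutta integration of the $Q\equiv 0$ system.

```python
import math

def free_rhs(state):
    # dipole ODE with Q = 0: both vortices share the core velocity
    ax, ay, bx, by = state
    dx, dy = ax - bx, ay - by
    r2 = dx * dx + dy * dy
    vx, vy = dy / (math.pi * r2), -dx / (math.pi * r2)
    return (vx, vy, vx, vy)

def rk4(state, h, n, f):
    # n steps of classical fourth-order Runge-Kutta
    for _ in range(n):
        k1 = f(state)
        k2 = f(tuple(x + 0.5 * h * k for x, k in zip(state, k1)))
        k3 = f(tuple(x + 0.5 * h * k for x, k in zip(state, k2)))
        k4 = f(tuple(x + h * k for x, k in zip(state, k3)))
        state = tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for x, a, b, c, d in zip(state, k1, k2, k3, k4))
    return state

def free_dipole_exact(s, xi):
    # a^0_+(s) = -xi^perp * s / (pi |xi|^2),  a^0_-(s) = a^0_+(s) + xi
    r2 = xi[0] ** 2 + xi[1] ** 2
    apx = -xi[1] * s / (math.pi * r2)
    apy = xi[0] * s / (math.pi * r2)
    return (apx, apy, apx + xi[0], apy + xi[1])
```

Because the right-hand side is constant along the exact solution, the numerical and closed-form trajectories agree to roundoff.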
The Stefanov-Uhlmann integral identity, derived in \cite{SU3} and adapted to our setting, reads as follows: \begin{proposition}[Stefanov-Uhlmann Identity]\label{Prop:SUid} We have \begin{align} \label{F2int2} &X(\ell(t),\phi(t))-X_0(\ell(t),\phi(t)) \notag \\ & =\int^{\ell(t)}_0 {\partial X_0 \over \partial \phi(t)} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\,ds, \end{align} where the velocity matrix in the background potential $Q$ is defined by \begin{equation*}\label{definition V} \begin{split} V(X(s,\phi(t))) &:= (\dot a_+(s;t), \; \dot a_-(s;t))^T, \end{split} \end{equation*} and when $Q\equiv 0$, we denote \begin{equation*}\label{definition V_0} \begin{split} V_0(X(s,\phi(t))) &:= \bigg(\frac{(a_+(s;t)-a_-(s;t))^\perp}{\pi |a_+(s;t)-a_-(s;t)|^2},\; \frac{(a_+(s;t)-a_-(s;t))^\perp}{\pi |a_+(s;t)-a_-(s;t)|^2}\bigg)^T. \end{split} \end{equation*} Here the superscript $T$ denotes the transpose of a vector. In particular, $$(V-V_0)(X(s,\phi(t)))=(\nabla^\perp Q(a_+(s;t)),\; -\nabla^\perp Q(a_-(s;t)))^T.$$ \end{proposition} Since $X(\ell(t),\phi(t))$ for small $t$ is given by the measurement $\mathcal S$, the difference $X(\ell(t),\phi(t))-X_0(\ell(t),\phi(t))$ is known, which implies that the right-hand side of \eqref{F2int2} is also known for sufficiently small $t$. From now on, we denote the integral in \eqref{F2int2} by \begin{align} \label{F2int1} R(t):=\int^{\ell(t)}_0 {\partial X_0 \over \partial \phi(t)} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\,ds. \end{align} In the remainder of this section, we discuss how to extract information about $Q$ near $p=\0$ from this known data $R(t)$. The key strategy is to differentiate $R(t)$ w.r.t. $t$ multiple times in order to extract useful information about the derivatives of $Q$.
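Although not needed for the proofs, the identity \eqref{F2int2} can be verified numerically: the free flow $X_0$ is explicit, its Jacobian $\p X_0/\p\phi$ can be approximated by central differences, and the two sides of the identity can be compared along a numerically integrated trajectory. Every numerical choice below (a hypothetical small Gaussian potential, an RK4 flow, trapezoid quadrature, the step sizes) is an illustrative assumption.

```python
import math

EPS, CX, CY = 0.05, 0.2, 0.3  # hypothetical small Gaussian potential

def grad_Q(x, y):
    e = math.exp(-((x - CX) ** 2 + (y - CY) ** 2))
    return (-2 * EPS * (x - CX) * e, -2 * EPS * (y - CY) * e)

def core(z):
    # shared core velocity (1/pi) (a_+ - a_-)^perp / |a_+ - a_-|^2
    dx, dy = z[0] - z[2], z[1] - z[3]
    r2 = dx * dx + dy * dy
    return (dy / (math.pi * r2), -dx / (math.pi * r2))

def V(z):   # full vector field (potential Q)
    c = core(z)
    gp, gm = grad_Q(z[0], z[1]), grad_Q(z[2], z[3])
    return (c[0] + gp[1], c[1] - gp[0], c[0] - gm[1], c[1] + gm[0])

def V0(z):  # free vector field (Q = 0)
    c = core(z)
    return (c[0], c[1], c[0], c[1])

def X0(s, z):  # explicit free flow: both vortices translate by s*core(z)
    w = core(z)
    return (z[0] + s * w[0], z[1] + s * w[1], z[2] + s * w[0], z[3] + s * w[1])

def dX0_dphi(s, z, h=1e-6):
    # 4x4 Jacobian of X0 in the initial data, by central differences
    cols = []
    for j in range(4):
        zp, zm = list(z), list(z)
        zp[j] += h; zm[j] -= h
        fp, fm = X0(s, tuple(zp)), X0(s, tuple(zm))
        cols.append([(a - b) / (2 * h) for a, b in zip(fp, fm)])
    return [[cols[j][i] for j in range(4)] for i in range(4)]

def rk4_path(phi, L, n):
    # flow of the full system, storing the path at uniform times
    h, path, state = L / n, [phi], phi
    for _ in range(n):
        k1 = V(state)
        k2 = V(tuple(x + 0.5 * h * k for x, k in zip(state, k1)))
        k3 = V(tuple(x + 0.5 * h * k for x, k in zip(state, k2)))
        k4 = V(tuple(x + h * k for x, k in zip(state, k3)))
        state = tuple(x + h / 6.0 * (a + 2 * b + 2 * c + d)
                      for x, a, b, c, d in zip(state, k1, k2, k3, k4))
        path.append(state)
    return path

def su_residual(phi, L=1.0, n=400):
    # max-norm discrepancy between the two sides of the SU identity
    path, h = rk4_path(phi, L, n), L / n
    lhs = [a - b for a, b in zip(path[-1], X0(L, phi))]
    rhs = [0.0] * 4
    for i, z in enumerate(path):            # trapezoid rule in s
        J = dX0_dphi(L - i * h, z)
        f = [a - b for a, b in zip(V(z), V0(z))]
        g = [sum(J[r][c] * f[c] for c in range(4)) for r in range(4)]
        w = 0.5 * h if i in (0, n) else h
        rhs = [r + w * gi for r, gi in zip(rhs, g)]
    return max(abs(a - b) for a, b in zip(lhs, rhs))
```

For a dipole with unit separation and this small potential, the residual is dominated by the quadrature and finite-difference errors, far below the size of either side of the identity.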
Let's denote the $k$-th derivative of $R(t)$ by $R^{(k)}(t)$, that is, $$R^{(k)}(t):={d^k\over dt^k} R(t).$$ \subsection{Reconstruction of boundary jet}\label{sec:jet} We will recover all the derivatives of the potential $Q$ at the boundary point $\0$ by an induction argument. In particular, we will discuss the derivatives of $R(t)$ up to fifth order in detail, which will provide crucial observations regarding the reconstruction of the higher order derivatives of $Q$. \subsubsection{\bf $1^{st}$ derivative w.r.t. $t$}\label{sec:1 derivative} We differentiate \eqref{F2int1} w.r.t. $t$ at $t=0$. Then we obtain \begin{align*} \lim\limits_{t\rightarrow 0}R^{(1)}(t)&=\lim\limits_{t\rightarrow 0}\Big[ \ell'(t) {\partial X_0 \over \partial \phi } (0, X(\ell(t), \phi(t)))(V - V_0)(X(\ell(t),\phi(t))) \\ &\quad + \int^{\ell(t)}_0 {\p\over \p t}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\,ds\Big]\\ &= \ell'(0){\partial X_0 \over \partial \phi} (0,\phi(0)) (V - V_0)(\phi(0)) \\ &= \ell'(0)\left[ \begin{array}{c} \nabla^\perp Q(a_+(\ell(0);0))\\ -\nabla^\perp Q(a_-(\ell(0);0))\\ \end{array}\right]_{4\times 1}, \end{align*} where we used that $$ \ell(0)=0,\quad {\partial X_0 \over \partial \phi} (0,\phi(0)) = Id_{4\times 4}. $$ Here $Id_{4\times 4}$ is the $4\times 4$ identity matrix. Note that $a_+(\ell(0);0)=a_+(0;0)=\0$, and from the calculation of $\lim\limits_{t\rightarrow 0}R^{(1)}(t)$ above, we can determine the value $$ \ell'(0)\left[ \begin{array}{c} \nabla^\perp Q(\0)\\ -\nabla^\perp Q(\xi_0)\\ \end{array}\right]_{4\times 1}. $$ Since $\ell'(0)>0$ is known from the measurement $\mathcal S$, this gives the recovery of $\nabla Q(\0)$ by considering only the first component in the above matrix. Note that all the derivatives of $Q$ at $\xi_0$ are known due to the assumption that $Q$ is known in $U\setminus \widetilde\Omega$. \subsubsection{\bf $2^{nd}$ derivative w.r.t.
$t$}\label{sec:2 derivative} In order to determine the second order derivatives of $Q$ at $\0$, we further calculate the derivative of $R^{(1)}(t)$ w.r.t. $t$ as below: \begin{align*} R^{(2)}(t)& ={d\over dt}\LC\ell'(t) {\partial X_0 \over \partial \phi } (0, X(\ell(t), \phi(t)))(V - V_0)(X(\ell(t),\phi(t)))\RC\\ &\quad+\ell'(t){\p \over \p t }\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\Big|_{s=\ell(t)} \\ & \quad+ \int^{\ell(t)}_0 {\p^2\over \p t^2}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\,ds \\ &=:R^{(2)}_1(t)+R^{(2)}_2(t)+R^{(2)}_{\text{int}}(t). \end{align*} It is clear that $R^{(2)}_{\text{int}}(0)=\0$. Notice that if $\p\Omega$ is strictly convex w.r.t. $Q$ at $\0$, then it is strictly convex w.r.t. $Q$ at boundary points sufficiently close to $\0$. By applying the argument in Section \ref{sec:1 derivative} to the nearby boundary points, we can determine $\nabla Q(x)$ for $x\in\p\Omega$ sufficiently close to $p=\0$. Since the position information $X(\ell(t),\phi(t))$ is known from $\mathcal S$ and, in particular, $a_+(\ell(t);t)\in\p\Omega$ is close to $\0$ for small $t$, the value of $(V - V_0)(X(\ell(t),\phi(t)))$ is known for small $t$. This gives that the term $R^{(2)}_1(t)$ is also known for small $t$, by noting that ${\p X_0\over \p\phi}$ is known. Now for $R^{(2)}_2$, we have \begin{align*} R^{(2)}_2(t) &= \ell'(t) {\p \over \p t}\LC{\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))\RC\Big|_{s=\ell(t)} (V - V_0)(X(\ell(t),\phi(t)))\\ &\quad+ \ell'(t) {\partial X_0 \over \partial \phi} (0, X(\ell(t), \phi(t)))[\p_t(V - V_0)(X(s,\phi(t)))]\Big|_{s=\ell(t)}.
\end{align*} Similarly, ${\p \over \p t}{\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))\big|_{s=\ell(t)}$ and $(V - V_0)(X(\ell(t),\phi(t)))$ are known for $t$ small. As a result, we only need to focus on the second term in $R^{(2)}_2(t)$, that is, \begin{align}\label{to be recovered term in 2 derivative} \mathcal{K}_2(t)&:=( \mathcal{K}_{2,1}(t),\, \mathcal{K}_{2,2}(t))^T \notag\\ &:=\ell'(t) {\partial X_0 \over \partial \phi} (0, X(\ell(t), \phi(t)))[\p_t(V - V_0)(X(s,\phi(t)))]\Big|_{s=\ell(t)}, \end{align} whose first component $\mathcal{K}_{2,1}(t)$ turns out to vanish as $t\rightarrow 0$ by Lemma~\ref{lemma limit V} in the Appendix. Its second component $\mathcal{K}_{2,2}(t)$ contains the known data $\p_t(\nabla^\perp Q)(a_-(s;t))|_{s=\ell(t)}$ as $t\rightarrow 0$. In conclusion, we have seen that taking the second derivative of $R(t)$ does not lead to additional information about $Q$, which motivates us to consider the next derivative, that is, $R^{(3)}(t)$. \subsubsection{\bf $3^{rd}$ derivative w.r.t. $t$} We recall that in $R^{(2)}(t)$, every term is now known for small $t$, except $R^{(2)}_{\text{int}}(t)$ and $\mathcal{K}_2(t)$ defined in \eqref{to be recovered term in 2 derivative}, which are only known as $t\rightarrow 0$. Then after straightforward computations, we have \begin{align*} R^{(3)}(t)&=\hbox{known terms}+ {d\over d t}\LC\ell'(t) {\partial X_0 \over \partial \phi } (0, X(\ell(t), \phi(t))) \p_t (V - V_0)(X(s,\phi(t))) \Big|_{s=\ell(t)} \RC\\ &\quad+\ell'(t){\p^2 \over \p t^2}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t))) \RC\Big|_{s=\ell(t)} \\ & \quad+ \int^{\ell(t)}_0 {\p^3\over \p t^3}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\,ds \\ &=:\hbox{known terms}+R^{(3)}_1(t)+R^{(3)}_2(t)+R^{(3)}_{\text{int}}(t).
\end{align*} Note that these terms are known for all small $t$, and the remaining terms are derived from $R^{(2)}(t)$ through $$R^{(3)}_1(t):={d\over dt}\mathcal{K}_2(t)\qquad \hbox{ and } R^{(3)}_2(t)+R^{(3)}_{\text{int}}(t) := {d\over dt} R^{(2)}_{\text{int}}(t).$$ Again, it is clear that $R^{(3)}_{\text{int}}(0)=\0$. Moreover, we can also derive that $\lim\limits_{t\rightarrow 0}R^{(3)}_2(t)$ is indeed known by Lemma~\ref{lemma limit V}. For the term $R^{(3)}_1(t)$, Lemma~\ref{lemma limit V} is applied to get \begin{align*} \lim\limits_{t\rightarrow 0} R^{(3)}_1(t) &=\hbox{known terms}\\ &\quad+ \lim\limits_{t\rightarrow 0} \ell'(t) {\partial X_0 \over \partial \phi } (0, X(\ell(t), \phi(t))) {d\over dt} \left[ \p_t (V - V_0)(X(s,\phi(t))) |_{s=\ell(t)}\right]. \end{align*} Then due to $(2)$ in Lemma~\ref{lemma V derivative}, one can reconstruct the second derivative of $Q$ in the normal direction from $$ (\ell'(0))^2\; \varepsilon \beta \;\p_2 \nabla^\perp Q(\0). $$ Specifically, since $\varepsilon, \beta>0,\ell'(0)>0$ are known, one then recovers $\p_2\nabla^\perp Q(\0)$. On the other hand, recall that $\nabla Q(x)$ has been determined for any boundary point $x\in\p\Omega$ sufficiently close to $p=\0$. Then we can also recover its tangential derivative, that is, $ \p_1 \nabla Q(\0). $ Combining the normal and tangential derivatives, at this stage we have recovered $$ \nabla^2 Q(\0). $$ Similarly, we can also recover $\nabla^2 Q(x)$ for $x\in\p\Omega$ sufficiently close to $p=\0$, by the same argument as above. \subsubsection{\bf Higher derivatives w.r.t.
$t$} Another direct computation gives that \begin{align*} R^{(4)}(t) &=\hbox{known terms}+\ell'(t) {\partial X_0 \over \partial \phi} (0, X(\ell(t), \phi(t))) {d\over dt} \left[ \p^2_t (V - V_0)(X(s,\phi(t))) |_{s=\ell(t)}\right] \\ &\quad+ \ell'(t){\p^3 \over \p t^3}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t))) \RC\Big|_{s=\ell(t)} \\ & \quad+ \int^{\ell(t)}_0 {\p^4\over \p t^4}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\,ds\\ &=:\hbox{known terms}+R^{(4)}_1(t)+R^{(4)}_2(t)+R^{(4)}_{\text{int}}(t), \end{align*} where $R^{(4)}_{\text{int}}(0)=0$ and these terms $R^{(4)}_1(t), R^{(4)}_2(t)$ are either tangential terms or terms depending only on lower order derivatives of $Q$ ($\nabla^\gamma Q$, $\gamma=1,2$) near $\0$ due to Lemma~\ref{lemma V derivative}. Therefore, $R^{(4)}$ does not contribute any new information about $Q$, and thus we have to move on to compute $R^{(5)}$. For $R^{(5)}(t)$, we obtain \begin{align*} \lim\limits_{t\rightarrow 0} R^{(5)}(t) &=\hbox{known terms}\\ &\quad + \lim\limits_{t\rightarrow 0}\ell'(t) {\partial X_0 \over \partial \phi } (\0, X(\ell(t), \phi(t))) {d^2\over dt^2 } \left[\p^2_t (V - V_0)(X(s,\phi(t))) |_{s=\ell(t)}\right] \\ &\quad+ \lim\limits_{t\rightarrow 0}\ell'(t){\p^4 \over \p t^4}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t))) \RC\Big|_{s=\ell(t)} \\ & \quad+ \lim\limits_{t\rightarrow 0} \int^{\ell(t)}_0 {\p^5\over \p t^5}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\,ds\\ &=:\hbox{known terms}+ \lim\limits_{t\rightarrow 0}R^{(5)}_1(t)+ \lim\limits_{t\rightarrow 0}R^{(5)}_2(t)+ \lim\limits_{t\rightarrow 0}R^{(5)}_{\text{int}}(t). 
\end{align*} Again, following a similar argument, we only need to focus on the term $ \lim\limits_{t\rightarrow 0}R^{(5)}_1(t)$ since it contains \begin{align*} {d^2\over dt^2 } \left[\p^2_t (V - V_0)(X(s,\phi(t))) |_{s=\ell(t)}\right]. \end{align*} In particular, by (2) in Lemma~\ref{lemma V derivative}, we can recover the first component of $\lim\limits_{t\rightarrow 0}R^{(5)}_1(t)$, that is, $$ 2(\ell'(0))^3\; \varepsilon^2 \beta^2\; \p_2^2\nabla^\perp Q(\0). $$ Note that the tangential derivative $\p_1\nabla\nabla^\perp Q(\0)$ and the lower order derivatives $\nabla^\gamma Q(\0)$, $\gamma=1,2$, are already known. Then $\nabla^3 Q(\0)$ is reconstructed. Based on the above detailed analysis of the derivatives of $R(t)$ up to fifth order, we are ready to show the determination of the boundary jet of $Q$ at the point $p=\0$ by an induction argument. \begin{theorem}\label{boundary determination in local coordinates} The measurement $\mathcal S$ in an open neighborhood of $\0$ determines $\p^j_1 \p_2^k Q(\0)$ for any integers $j, k\geq 0$ with $j+k\geq 1$. \end{theorem} \begin{proof} We have seen that $\p_1^j\p_2^k Q(\0)$ for $j+k\leq 3$ can be determined from $R^{(1)}(t)$, $R^{(3)}(t)$ and $R^{(5)}(t)$ as $t$ goes to $0$. Given an arbitrary integer $K\geq 3$, by induction, suppose that $\p_1^j\p_2^k Q(\0)$, $j+k\leq K$, has been recovered. Thus $\p_1^j\p_2^k Q(x)$, $j+k\leq K$, is known for $x\in\p\Omega$ sufficiently close to $\0$. This information is enough for determining $\p_1^j\p_2^k Q(\0)$ with $j+k=K+1$, $j\geq 1$. The only unknown $(K+1)$-th derivative is the term $\p_2^{K+1} Q(\0)$. We differentiate $R(t)$, $2K+1$ times, w.r.t.
$t$ and obtain \begin{align*} R^{(2K+1)}(t) &=\hbox{known terms}\\ &\quad+\ell'(t) {\partial X_0 \over \partial \phi } (\0, X(\ell(t), \phi(t))) {d^K\over dt^K} \left[\p^K_t (V - V_0)(X(s,\phi(t))) |_{s=\ell(t)}\right] \\ &\quad+ \ell'(t){\p^{2K} \over \p t^{2K}}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t))) \RC\Big|_{s=\ell(t)} \\ & \quad+ \int^{\ell(t)}_0 {\p^{2K+1}\over \p t^{2K+1}}\LC {\partial X_0 \over \partial \phi} (\ell(t)-s, X(s, \phi(t)))(V - V_0)(X(s,\phi(t)))\RC\,ds. \end{align*} Here those known terms involve the derivatives of $Q$ with orders $\leq K$ and the tangential derivatives of $Q$. Therefore based on the analysis above and Lemma~\ref{lemma V derivative}, we finally recover the term $$ K!(\ell'(0))^{K+1} \varepsilon^{K} \beta^K\; \p_2^K\nabla^\perp Q(\0), $$ which involves $\p_2^{K+1} Q(\0)$. This completes the proof. \end{proof} \subsubsection{\bf Proof of Theorem~\ref{boundary determination}} We are ready to show the local result. \begin{proof}[Proof of Theorem~\ref{boundary determination}] The proof follows immediately from the discussion above and Theorem~\ref{boundary determination in local coordinates}. \end{proof} \section{Global reconstruction of real-analytic potentials}\label{sec:global reconstruction} Based on the local result in Section~\ref{sec:local reconstruction}, we can now determine $Q$ globally in $\widetilde{\Omega}$. \begin{proof}[Proof of Theorem \ref{real analytic case}] The hypothesis of the theorem gives that $\p\Omega$ is strictly convex at some point $p\in\p\Omega$ w.r.t. the potential $Q$. By Theorem \ref{boundary determination} (also Theorem \ref{boundary determination in local coordinates}), $\p^\alpha Q(p)$ is recovered for any multi-index $\alpha$, $|\alpha|\geq 1$ from the data $\mathcal{S}$. Notice that $\widetilde\Omega$ is an open neighborhood of $\overline\Omega$, then it implies that $p$ is actually an interior point of $\widetilde\Omega$. 
Since $Q$ is real-analytic in $\widetilde \Omega$, so is $\nabla Q$. Thus, we can uniquely recover $\nabla Q$ in some neighborhood of $p$. Moreover, since $\widetilde\Omega$ is path-connected and $\nabla Q$ is analytic, we can then determine $\nabla Q$ uniquely in the whole domain $\widetilde{\Omega}$. Now for any fixed point $x\in\widetilde\Omega$, let $$\gamma: [0,T]\to \overline{\widetilde\Omega},\quad \gamma(0)=z\in \p \widetilde\Omega,\quad \gamma(T)=x$$ be a smooth curve connecting $x\in\widetilde\Omega$ with the boundary of $\widetilde \Omega$ for $T>0$. Notice that $Q$ is given outside $\widetilde\Omega$, thus $Q(z)$ is known for $z\in\p\widetilde{\Omega}$. The Fundamental Theorem of Calculus yields that $$Q(x)=Q(z)+\int_0^T \left<\nabla Q(\gamma(t)), \dot\gamma(t)\right>\, dt,$$ which then determines $Q(x)$. Here $\left<\cdot,\cdot\right>$ is the Euclidean inner product. Since $x$ is an arbitrary point in $\widetilde\Omega$, this leads to the reconstruction of $Q$ in the whole domain $\widetilde\Omega$. Therefore, the proof of Theorem~\ref{real analytic case} is complete. \end{proof} \appendix \section{Some useful lemmas}\label{sec:appendix} The following lemmas play an important role in the calculation of $R^{(k)}(t)$ in Section~\ref{sec:local reconstruction}. Recall that the definition of $V$ and $V_0$ in Proposition~\ref{Prop:SUid} leads to $$ (V-V_0)(X(s,\phi(t))) = \LC\nabla^\perp Q(a_+(s;t)),\,-\nabla^\perp Q(a_-(s;t))\RC^T, $$ and it satisfies the following limit. \begin{lemma}\label{lemma limit V} For any positive integer $k$, one has $$ \lim\limits_{t\rightarrow 0} \p^k_t(V - V_0)(X(s,\phi(t))) \Big|_{s=\ell(t)}= \left[\begin{array}{c} \0\\ known\\ \end{array}\right]_{4\times 1}, $$ where this $\0$ is the zero vector in $\R^2$. \end{lemma} \begin{proof} We first recall that $a_+(0;t)=\0$ and $a_-(0;t)=\xi_t$. 
For small $s$ and $t$, the asymptotic expansion of the trajectory $a_+(s;t)$ is as follows: \begin{align}\label{a+ asymptotic} a_+(s;t)=\dot{a}_+(0;t)s+O(s^2). \end{align} By taking the $k$-th derivative with respect to $t$, we have $$ \p_t^ka_+(s;t)={d^k\over d t^k} \dot{a}_+(0;t)s+O(s^2), $$ which gives that \begin{align}\label{a+ asymptotic diff} \lim\limits_{t\rightarrow 0}\p_t^ka_+(s;t)|_{s=\ell(t)} =\lim\limits_{t\rightarrow 0} {d^k\over d t^k}\dot{a}_+(0;t)\ell(t)+O(\ell(t)^2) =\0 \end{align} due to $\ell(0)=0$. Similarly, for $a_-(s;t)$, we have $$ \lim\limits_{t\rightarrow 0}\p_t^ka_-(s;t)|_{s=\ell(t)}=\lim\limits_{t\rightarrow 0}{d^k\over dt^k}\xi_t, $$ which is known. Now we turn back to the matrix and obtain \begin{align*} \lim\limits_{t\rightarrow 0}\p_t(V - V_0)(X(s,\phi(t)))\Big|_{s=\ell(t)}&= \lim\limits_{t\rightarrow 0} \left[ \begin{array}{c} \nabla \p_2 Q(a_+(\ell(t);t))\cdot \p_ta_+(s;t)|_{s=\ell(t)}\\ -\nabla \p_1 Q(a_+(\ell(t);t))\cdot \p_ta_+(s;t)|_{s=\ell(t)}\\ -\nabla \p_2 Q(a_-(\ell(t);t))\cdot \p_ta_-(s;t)|_{s=\ell(t)}\\ \nabla \p_1 Q(a_-(\ell(t);t))\cdot \p_ta_-(s;t)|_{s=\ell(t)}\\ \end{array}\right]\\ &= \left[\begin{array}{c} \0\\ known\\ \end{array}\right]_{4\times 1}, \end{align*} where we used the fact that $\lim\limits_{t\rightarrow 0}\nabla \nabla^\perp Q(a_-(\ell(t);t))=\nabla \nabla^\perp Q(\xi_0)$ is known since $\xi_0\in \R^2\setminus\widetilde\Omega$. For $k>1$, due to \eqref{a+ asymptotic diff}, we also have $$\lim\limits_{t\rightarrow 0}\p^k_t(\nabla^\perp Q)(a_+(s;t)) |_{s=\ell(t)}=\0 $$ and the term $$ \lim\limits_{t\rightarrow 0}\p^k_t(\nabla^\perp Q)(a_-(s;t)) |_{s=\ell(t)}$$ is known as well, which completes the proof. \end{proof} Recall that $\dot{a}_+(0;t) = \varepsilon(\alpha(t),\beta t)$ with $\alpha(t)^2+(\beta t)^2=1$ in \eqref{choice of xi(t)}. Then $\alpha(t)=\pm\sqrt{1-(\beta t)^2}$ implies $\alpha'(0)=0$. 
Building upon this, \eqref{a+ asymptotic}, and $\ell(0)=0$, we have \begin{align}\label{dt pt a_+} \lim\limits_{t\rightarrow 0}{d\over dt}\p_ta_+(s;t)|_{s=\ell(t)}=\lim\limits_{t\rightarrow 0}{d\over dt}\dot{a}_+(0;t)\ell'(t)+\0= \varepsilon (0,\beta) \ell'(0). \end{align} In the following lemma, we will only focus on the first component of $(V - V_0)(X(s,\phi(t)))$ since from the above discussion we have seen that its second component already contains the known data $ (\nabla^\perp Q)(a_-(s;t))|_{s=\ell(t)}$ for sufficiently small $t$. \begin{lemma}\label{lemma V derivative} Let $k,\,\eta$ be integers satisfying $k\geq\eta\geq1$. Then the first component of $$ \lim\limits_{t\rightarrow 0}{d^\eta\over dt^\eta} \left[ \p_t^k (V - V_0)(X(s,\phi(t))) |_{s=\ell(t)}\right], $$ that is, $$ \mathcal{F}:=\lim\limits_{t\rightarrow 0}{d^\eta\over dt^\eta} \left[ \p_t^k (\nabla^\perp Q)(a_+(s;t)) |_{s=\ell(t)}\right], $$ satisfies the following statements: \begin{enumerate} \item If $k\neq \eta$, then $\mathcal{F}$ only depends on the derivatives $\p^{\gamma_1}_1\p^{\gamma_2}_2 Q(\0)$ for $1\leq \gamma_1+\gamma_2\leq \eta+1$. \item If $k=\eta$, then $\mathcal{F}$ satisfies \begin{align}\label{case k} \mathcal{F} = k! \varepsilon^k (\ell'(0))^{k} \beta^k \p_2^k\nabla^\perp Q(\0) + \Phi, \end{align} where the remaining function $\Phi$ only depends on the tangential derivative $$\p_1\nabla^{k-1}\nabla^\perp Q(\0)$$ and lower order derivatives $\p^{\gamma_1}_1\p^{\gamma_2}_2 Q(\0)$ for $1\leq \gamma_1+\gamma_2\leq k$. \end{enumerate} \end{lemma} \begin{proof} We first consider the case $k=\eta=1$ and compute $$ \p_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)} = \nabla\nabla^\perp Q(a_+(s;t)) \p_ta_+(s;t)|_{s=\ell(t)}.
$$ Then from \eqref{a+ asymptotic diff} and \eqref{dt pt a_+}, we have \begin{align*} \lim\limits_{t\rightarrow 0}{d\over dt} \left[ \p_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)}\right] &= \lim\limits_{t\rightarrow 0}\nabla^2\nabla^\perp Q(a_+(\ell(t);t)) {d\over dt}a_+(\ell(t);t) \p_ta_+(s;t)|_{s=\ell(t)} \\ &\quad +\lim\limits_{t\rightarrow 0} \nabla\nabla^\perp Q(a_+(\ell(t);t)) {d \over dt }\p_ta_+(s;t)|_{s=\ell(t)}\\ &=\varepsilon \ell'(0) \beta \p_2 \nabla^\perp Q(\0), \end{align*} which shows \eqref{case k} for $k=1$. Next we consider the case $k=2$ and then take $1\leq \eta \leq 2$. Note that \begin{align*} \p^2_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)} &= \nabla^2\nabla^\perp Q(a_+(s;t)) (\p_ta_+(s;t))^2|_{s=\ell(t)}\\ &\quad +\nabla\nabla^\perp Q(a_+(s;t))\p_t^2a_+(s;t)|_{s=\ell(t)}. \end{align*} For $k=2$ and $\eta=1$, a direct computation yields \begin{align*} \lim\limits_{t\rightarrow 0}{d \over dt } \left[ \p^2_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)}\right] = \nabla\nabla^\perp Q(\0) \lim\limits_{t\rightarrow 0}{d\over dt} \p_t^2a_+(s;t)|_{s=\ell(t)}, \end{align*} which only contains $\p_1^{\gamma_1}\p_2^{\gamma_2}Q(\0)$ for $1\leq \gamma_1+\gamma_2\leq 2$. Similarly, for $k=2$ and $\eta=2$, we have \begin{align*} &\lim\limits_{t\rightarrow 0}{d^2 \over dt^2 } \left[ \p^2_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)}\right] \\ &= 2\lim\limits_{t\rightarrow 0} \nabla^2\nabla^\perp Q(a_+(\ell(t);t)) \LC{d \over dt }\p_ta_+(s;t)|_{s=\ell(t)}\RC^2\\ &\quad + \hbox{tangential derivative } \p_1 \nabla\nabla^\perp Q(\0)+\hbox{lower order derivatives}\\ &= 2 \varepsilon^2 (\ell'(0))^2 \beta^2\p_2^2 \nabla^\perp Q(\0)\\ &\quad + \hbox{tangential derivative } \p_1 \nabla\nabla^\perp Q(\0)+\hbox{lower order derivatives}, \end{align*} where we used \eqref{dt pt a_+} ($\lim\limits_{t\rightarrow 0} {d \over dt }\p_ta^1_+(s;t)|_{s=\ell(t)}=0$) in the last identity. So far we have shown (1) and (2) for $k=2$.
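As a quick numerical sanity check of the $k=\eta=1$ computation above (an aside of ours, not part of the argument), one can take a toy straight-line trajectory $a_+(s;t)=s\,\varepsilon(\alpha(t),\beta t)$ with $\alpha(t)=\sqrt{1-(\beta t)^2}$, $\ell(t)=ct$, and an arbitrary smooth test function $g$ standing in for $\nabla^\perp Q$; the limit of ${d\over dt}\left[\p_t g(a_+(s;t))|_{s=\ell(t)}\right]$ should then be $\varepsilon\,\ell'(0)\,\beta\,\p_2 g(\0)$:

```python
# Finite-difference check of the k = eta = 1 case on a toy trajectory
# a_+(s;t) = s*eps*(alpha(t), beta*t), alpha(t) = sqrt(1-(beta t)^2), ell(t) = c*t.
import math

eps, beta, c = 0.5, 0.8, 1.3     # arbitrary toy parameters (assumptions)

def g(x, y):                     # arbitrary smooth test function in place of grad^perp Q
    return x**3 + 2*x*y + 5*y**2 + 7*y + x

def a(s, t):                     # toy trajectory with a(0;t) = (0,0)
    return (s*eps*math.sqrt(1 - (beta*t)**2), s*eps*beta*t)

def inner(s, t, h=1e-5):         # partial_t g(a(s;t)) by central difference
    return (g(*a(s, t + h)) - g(*a(s, t - h))) / (2*h)

def restricted(t):               # restrict to s = ell(t) = c*t
    return inner(c*t, t)

h, t0 = 1e-4, 1e-3               # small t0 approximates the limit t -> 0
val = (restricted(t0 + h) - restricted(t0 - h)) / (2*h)

expected = eps*c*beta*7.0        # eps * ell'(0) * beta * (dg/dy)(0,0), with (dg/dy)(0,0) = 7
assert abs(val - expected) < 1e-2
```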
Now for $k>2$, we have \begin{align*} &\p^k_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)} \\ &= \nabla^k \nabla^\perp Q(a_+(s;t)) (\p_t a_+(s;t))^k|_{s=\ell(t)} +\ldots +\nabla\nabla^\perp Q(a_+(s;t))\p^k_t a_+(s;t)|_{s=\ell(t)}. \end{align*} Along the lines of similar arguments for the cases $k=1,2$ above, we see that \begin{align*} \lim\limits_{t\rightarrow 0}{d^\eta \over dt^\eta } \left[ \p^k_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)}\right] \qquad \hbox{for }1\leq\eta< k \end{align*} depends only on the lower order derivatives $\nabla^{\gamma}\nabla^\perp Q(\0)$ for $1\leq \gamma\leq \eta$. Finally, for $\eta=k>2$, we get \begin{align*} &\lim\limits_{t\rightarrow 0}{d^k\over dt^k } \left[ \p^k_t (\nabla^\perp Q(a_+(s;t))) |_{s=\ell(t)}\right]\\ &=k! \lim\limits_{t\rightarrow 0} \nabla^k\nabla^\perp Q(a_+(\ell(t);t)) \LC{d \over dt }\p_ta_+(s;t)|_{s=\ell(t)}\RC^k\\ &\quad + \hbox{tangential derivative } \p_1 \nabla^{k-1}\nabla^\perp Q(\0)+\hbox{lower order derivatives}, \end{align*} and then we apply \eqref{a+ asymptotic diff} again. This completes the proof of the lemma. \end{proof} \vskip1cm \noindent \textbf{Acknowledgment.} R.-Y. Lai is partially supported by the NSF grant DMS-1714490. \vskip1cm \bibliographystyle{plain} \bibliography{dipolebib} \end{document}
Do you ever want to nominate something as the worst deal ever? I'm looking at Groupon and there is this place named the Chelsea Inn. Regardless of how the hotel really is, it is still in Atlantic City, and that town is dodgy at best. I can't imagine staying in a $40-a-night hotel that advertises "private bathrooms" as an amenity and is in the "heart of Atlantic City." Hey, that actually beats Wootbot's average for worst deal. Best Worst Deal Ever (so far): Still funny, thanks @mtm2!
\begin{document} \title{Further study on the maximum number of bent components of vectorial functions} \author{ Sihem Mesnager$^{1}$, Fengrong Zhang$^2$, Chunming Tang$^3$, Yong Zhou$^2$} \institute{ 1. LAGA, Department of Mathematics, University of Paris VIII \\(and Paris XIII and CNRS), Saint--Denis cedex 02, France.\\ E-mail: \email{smesnager@univ-paris8.fr}\\ 2. School of Computer Science and Technology, China University\\ of Mining and Technology, Xuzhou, Jiangsu 221116, China.\\ E-mail: \email{\{zhfl203,yzhou\}@cumt.edu.cn}\\ 3. School of Mathematics and Information, China West Normal University, Nanchong, Sichuan 637002, China.\\ E-mail: \email{tangchunmingmath@163.com} } \date{\today} \maketitle \begin{abstract} In 2018, Pott et al. studied in [IEEE Transactions on Information Theory. Volume: 64, Issue: 1, 2018] the maximum number of bent components of vectorial functions. They presented several nice results and suggested several open problems in this context. This paper is a continuation of their study, in which we solve two open problems raised by Pott et al. and partially solve another open problem raised by the same authors. Firstly, we prove that for a vectorial function, the property of having the maximum number of bent components is invariant under the so-called CCZ equivalence. Secondly, we prove the non-existence of APN plateaued functions having the maximum number of bent components. In particular, quadratic APN functions cannot have the maximum number of bent components. Finally, we present some sufficient conditions under which the vectorial function defined from $\mathbb{F}_{2^{2k}}$ to $\mathbb{F}_{2^{2k}}$ by its univariate representation: $$ \alpha x^{2^i}\left(x+x^{2^k}+\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j}} +\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j+k}}\right)$$ has the maximum number of bent components, where $\rho\leq k$.
Further, we show that the differential spectrum of the function $ x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}})$ (where $i,t_1,t_2$ satisfy some conditions) is different from that of the binomial function $F^i(x)= x^{2^i}(x+x^{2^k})$ presented in the article of Pott et al. Finally, we provide necessary and sufficient conditions for the functions $$Tr_1^{2k}\left(\alpha x^{2^i}\left(Tr^{2k}_{e}(x)+\sum\limits_{j=1}^{\rho}\gamma^{(j)}(Tr^{2k}_{e}(x))^{2^j} \right)\right) $$ to be bent. \end{abstract} {\bf Keywords:} Vectorial functions, Boolean functions, Bent functions, Nonlinearity, APN functions, Plateaued functions, CCZ equivalence. \bigskip \section{Introduction} Vectorial (multi-output) Boolean functions are functions from the vector space $\mathbb{F}^{n}_{2}$ (of all binary vectors of length $n$) to the vector space $\mathbb{F}^{m}_{2}$, for given positive integers $n$ and $m$. These functions are called $(n,m)$-functions and include the (single-output) Boolean functions (which correspond to the case $m =1$). In symmetric cryptography, multi-output functions are called \emph{S-boxes}. They are fundamental parts of block ciphers. Being the only source of nonlinearity in these ciphers, S-boxes play a central role in their robustness, by providing confusion (a requirement already mentioned by C. Shannon), which is necessary to withstand known (and hopefully future) attacks. When they are used as S-boxes in block ciphers, their number $m$ of output bits equals or approximately equals the number $n$ of input bits. They can also be used in stream ciphers, with $m$ significantly smaller than $n$, in the place of Boolean functions to speed up the ciphers. We shall identify $\mathbb{F}_{2}^n$ with the Galois field $\mathbb{F}_{2^n}$ of order $2^n$ but we shall always use $\mathbb{F}_{2}^n$ when the field structure will not really be used.
The {\em component functions} of $F$ are the Boolean functions $v\cdot F$, that is, $x\in\mathbb{F}_{2^n}\mapsto \tr{m}(v F(x))$, where ``$\cdot$'' stands for an inner product in $\mathbb{F}_{2^m}$ (for instance: $u\cdot v:=\tr m(uv), \forall u\in \mathbb{F}_{2^m}, v\in \mathbb{F}_{2^m}$, where ``$\tr m$'' denotes the absolute trace over $ \mathbb{F}_{2^m}$). In order to classify vectorial Boolean functions that satisfy desirable nonlinearity conditions, or to determine whether, once found, they are essentially new (that is, inequivalent in some sense to any of the functions already found), we use some concepts of equivalence. For vectorial Boolean functions, there exist essentially two concepts of equivalence: extended affine (EA) equivalence and CCZ-equivalence (Carlet-Charpin-Zinoviev equivalence). Two $(n,r)$-functions $F$ and $F^{\prime}$ are said to be EA-equivalent if there exist affine automorphisms $L$ from $\mathbb{F}_{2^n}$ to $\mathbb{F}_{2^n}$ and $L^{\prime}$ from $\mathbb{F}_{2^r}$ to $\mathbb{F}_{2^r}$ and an affine function $L''$ from $\mathbb{F}_{2^n}$ to $\mathbb{F}_{2^r}$ such that $F'=L^{\prime} \circ F \circ L + L''$. EA-equivalence is a particular case of CCZ-equivalence \cite{CCZ98}. Two $(n,r)$-functions $F$ and $F^{\prime}$ are said to be CCZ-equivalent if their graphs $G_F:=\{(x,F(x)),~ x\in \mathbb{F}_{2^n}\}$ and $G_{F^{\prime}}:=\{(x,F^{\prime}(x)),~ x\in \mathbb{F}_{2^n}\}$ are affine equivalent, that is, if there exists an affine permutation $\mathcal{L}$ of $\mathbb{F}_{2^n} \times \mathbb{F}_{2^r}$ such that $\mathcal{L}(G_F)=G_{F^{\prime}}$. A standard notion of \emph{nonlinearity} of an $(n,m)$-function $F$ is defined as \begin{equation}\label{N_1} \mathcal{N}(F)=\underset{v\in \F_{2^m}^\star}{\min}nl(v\cdot F), \end{equation} where $v\cdot F$ denotes the usual inner product on $\F_{2^m}$ and $nl(\cdot)$ denotes the nonlinearity of Boolean functions (see the definition in Section \ref{Preliminaries}).
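Since nonlinearity is invariant under EA (and hence CCZ) equivalence, this invariance is easy to exercise by brute force on a small example. The sketch below (a toy example of ours, not taken from the paper) composes a function on $\mathbb F_2^4$ with an invertible linear map plus an affine part and compares the absolute Walsh value distributions:

```python
# Toy brute-force check that the absolute Walsh spectrum -- hence the
# nonlinearity -- is invariant under EA-equivalence on F_2^4:
# g(x) = f(L(x)) + <c, x> + 1 with L linear and invertible.
n = 4

def dot(u, x):                       # inner product on F_2^4 (parity of u & x)
    return bin(u & x).count('1') & 1

def f(x):                            # f(x1,...,x4) = x1 x2 + x3 x4
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

ROWS = [0b1000, 0b1100, 0b0010, 0b0011]   # images of the basis vectors (invertible map)

def L(x):                            # linear map defined by ROWS
    y = 0
    for i in range(n):
        if (x >> (n - 1 - i)) & 1:
            y ^= ROWS[i]
    return y

def g(x):                            # an EA-equivalent companion of f
    return f(L(x)) ^ dot(0b0101, x) ^ 1

def abs_walsh(h):                    # sorted multiset of |W_h(u)|
    return sorted(abs(sum((-1)**(h(x) ^ dot(u, x)) for x in range(2**n)))
                  for u in range(2**n))

assert abs_walsh(f) == abs_walsh(g)  # identical spectra, identical nonlinearity
```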
From the covering radius bound, it is known that $\mathcal{N}(F)\leqslant 2^{n-1}-2^{n/2-1}$. The functions achieving this bound are called $(n,m)$-\emph{bent} functions. Equivalently, a vectorial Boolean function $F: \mathbb{F}_{2^n}\rightarrow \mathbb{F}_{2^m}$ is said to be a vectorial bent function if all nonzero component functions of $F$ are \emph{bent} (Boolean) functions. Bent Boolean functions have maximum Hamming distance to the set of affine Boolean functions. The notion of bent function was introduced by Rothaus \cite{RO76} and has attracted a lot of research for more than four decades. Such functions are extremal combinatorial objects with several areas of application, such as coding theory, maximum length sequences, and cryptography. A survey on bent functions can be found in \cite{CarletMesnagerDCC2016} as well as in the book \cite{MesnagerBook}. In~\cite{CC-Nyberg}, it is shown that $(n,m)$-bent functions exist only if $n$ is even and $m\leqslant n/2$. The notion of nonlinearity in (\ref{N_1}) (denoted by $\mathcal{N}$) was first introduced by Nyberg in~\cite{CC-Nyberg}; it is closely related to Matsui's linear attack~\cite{Matsui} on block ciphers. It has been further studied by Chabaud and Vaudenay \cite{ChabaudVaudenay95}. The nonlinearity is invariant under CCZ equivalence (and hence under extended affine equivalence). Budaghyan and Carlet have proved in \cite{BC09} that for bent vectorial Boolean functions, CCZ-equivalence coincides with EA-equivalence. The problem of constructing vectorial bent functions has been considered in the literature. Nyberg \cite{CC-Nyberg} investigated the constructions of vectorial bent functions; she presented two constructions based on Maiorana-McFarland bent functions and $\mathcal{PS}$ bent functions, respectively. In \cite{SatohIwataKurosawa99}, Satoh, Iwata, and Kurosawa improved the first construction given in \cite{CC-Nyberg} so that the resulting functions achieve the largest degree.
Further, several constructions of bent vectorial functions have been investigated in the literature \cite{CarletMesnagerBentVectorial,Pasaliczh2012,Feng2011,MesnagerDCC2015,Wu2005}. A complete state of the art can be found in \cite{MesnagerBook} (Chapter 12). Very recently, Pott \emph{et al}.\ \cite{Pott2017} considered functions $ \mathbb{F}_{2^n}\rightarrow \mathbb{F}_{2^n}$ of the form $F^i(x) = x^{2^i}(x + x^{2^{k}})$, where $n=2k, i=0,1,\cdots,n-1$. They showed that the number of bent component functions of a vectorial function $F: \mathbb{F}_{2^n}\rightarrow \mathbb{F}_{2^n}$ is at most $2^n-2^{n/2}$ ($n$ even). In addition, they showed that the binomials $F^i(x)= x^{2^i}(x + x^{2^k})$ have such a large number of bent components, and that these binomials are inequivalent to the monomials $x^{2^{k}+1}$ if $0 < i < k$. Further, the properties (such as the differential properties and the complete Walsh spectrum) of the functions $F^i$ were investigated. In this paper, we consider three open problems raised by Pott et al.\ \cite{Pott2017}. In the first part, we prove that the property of having the maximum number of bent components is preserved under CCZ equivalence. Next, we consider APN plateaued functions and investigate whether they can have the maximum number of bent components. We shall give a negative answer to this question. Finally, we consider the bentness property of functions $\mathbb{F}_{2^{2k}}\rightarrow \mathbb{F}_{2^{2k}}$ of the form \begin{equation}\label{equa main} G(x)=\alpha x^{2^i}\left(x+x^{2^k}+\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j}} +\sum\limits_{j=1}^{\rho}\gamma^{(j)}x^{2^{t_j+k}}\right), \end{equation} where $\rho\leq k$, $\gamma^{(j)}\in {\Bbb F}_{2^{k}}$, and the $t_j$ are integers with $0\leq t_j\leq k$. In particular, we show that the functions $ x^{2^{t_2}}\left(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}}\right)$ are inequivalent to $x^{2^{t_2}}(x + x^{2^k}) $, where $t_1=1$ and $\gcd(t_2,k) \neq 1$.
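The smallest instance of the Pott et al.\ binomials can be checked by brute force. The sketch below (our own verification, with $n=2k=4$, $i=0$, and GF$(2^4)$ realized via the primitive polynomial $x^4+x+1$ as an assumed concrete choice) counts the bent components $Tr_1^4(vF(x))$ of $F(x)=x(x+x^{2^k})$ and finds the maximum number $2^4-2^2=12$:

```python
# Count the bent components of F(x) = x*(x + x^{2^k}) over GF(2^4), k = 2.
MOD = 0b10011  # GF(2^4) = F_2[x]/(x^4 + x + 1)

def gf_mul(a, b):                 # carry-less multiplication mod x^4 + x + 1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def trace(a):                     # absolute trace a + a^2 + a^4 + a^8
    t = 0
    for _ in range(4):
        t ^= a
        a = gf_mul(a, a)
    return t

def F(x):
    x4 = gf_mul(gf_mul(x, x), gf_mul(x, x))   # x^{2^k} = x^4
    return gf_mul(x, x ^ x4)                  # field addition is XOR

def is_bent(v):                   # component Tr(v F(x)) is bent iff |W| = 4 everywhere
    vals = [trace(gf_mul(v, F(x))) for x in range(16)]
    return all(abs(sum((-1)**(vals[x] ^ trace(gf_mul(u, x))) for x in range(16))) == 4
               for u in range(16))

count = sum(is_bent(v) for v in range(1, 16))
assert count == 16 - 4            # 2^n - 2^{n/2} = 12 bent components
```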
Here we use the concept of CCZ-equivalence when we speak about the equivalence of functions. The rest of the paper is organized as follows. Some preliminaries are given in Section \ref{Preliminaries}. In Section \ref{stability}, we prove our result on the stability under CCZ equivalence of the property of having the maximum number of bent components, which solves Problem 4 in \cite{Pott2017}. Next, in Section \ref{APNplateaued}, we prove that APN plateaued functions cannot have the maximum number of bent components, which partially solves Problem 8 in \cite{Pott2017}. Finally, in Section \ref{newbent} we investigate Problem 2 in \cite{Pott2017}. To this end, we provide several functions defined as $G(x) = x L(x)$ on ${\Bbb F}_{2^{2k}}$ (where $L(x)$ is a linear function on ${\Bbb F}_{2^{2k}}$) such that the number of bent components $Tr^{2k}_1(\alpha G(x))$ is maximal. \section{Preliminaries and notation}\label{Preliminaries} Throughout this article, $\| E\|$ denotes the cardinality of a finite set $E$, the binary field is denoted by $\mathbb{F}_2$, and the finite field of order $2^n$ is denoted by ${\Bbb F}_{2^n}$. The multiplicative group ${\Bbb F}^*_{2^n}$ is a cyclic group consisting of $2^n-1$ elements. The set of all Boolean functions mapping from ${\Bbb F}_{2^n}$ (or $ {\Bbb F}_2^n$) to ${\Bbb F}_{2}$ is denoted by $B_n$. Recall that for any positive integer $k$ and any positive integer $r$ dividing $k$, the trace function from $\GF{k}$ to $\GF{r}$, denoted by $Tr_{r}^{k}$, is the mapping defined as: \begin{displaymath} Tr_{r}^{k}(x):=\sum_{i=0}^{\frac kr-1} x^{2^{ir}}=x+x^{2^r}+x^{2^{2r}}+\cdots+x^{2^{k-r}}. \end{displaymath} In particular, the {\em absolute trace} over $\mathbb{F}_2$ of an element $x \in \mathbb{F}_{2^n}$ equals $Tr_1^{n}(x)=\sum_{i=0}^{n-1} x^{2^i}$.
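As a small concrete illustration of the absolute trace (an aside of ours, with GF$(2^4)$ realized via the primitive polynomial $x^4+x+1$ as an assumed choice): $Tr_1^4(a)=a+a^2+a^4+a^8$ lands in $\mathbb F_2$ and, being a nonzero $\mathbb F_2$-linear form, is balanced, taking each value exactly $2^{4-1}=8$ times.

```python
# Absolute trace Tr_1^4 on GF(2^4) = F_2[x]/(x^4 + x + 1).
MOD = 0b10011  # x^4 + x + 1

def gf_mul(a, b):                 # carry-less multiplication mod the polynomial
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def trace(a):                     # Tr(a) = a + a^2 + a^4 + a^8 via repeated squaring
    t = 0
    for _ in range(4):
        t ^= a
        a = gf_mul(a, a)
    return t

vals = [trace(a) for a in range(16)]
assert set(vals) <= {0, 1}                          # the trace lands in the prime field
assert vals.count(0) == 8 and vals.count(1) == 8    # and is balanced
```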
There exist several kinds of possible {\em univariate representations} (also called trace, or polynomial, representations) of Boolean functions, which are not all unique and use the identification between the vector space ${\Bbb F}_2^n$ and the field $\GF{n}$. Any Boolean function over ${\Bbb F}_2^n$ can be represented in a unique way as a polynomial in one variable $x\in \mathbb{F}_{2^n}$ of the form $f(x) =\sum_{j=0}^{2^n-1} a_j x^j$, where $a_0,a_{2^n-1}\in{\Bbb F}_2$ and the $a_j$'s are elements of $\GF{n}$ for $1\leq j <2^n-1$ such that ${a_j}^2=a_{2j\mod (2^n-1)}$. The binary expansion of $j$ is $j=j_0+j_12+\cdots+j_{n-1}2^{n-1}$ and we denote $\bar{j}=(j_0,j_1, \cdots,j_{n-1})$. The algebraic degree of $f$ equals $\max \{wt(\bar j)\mid a_j\not=0, 0\leq j < 2^n\}$, where $wt(\bar j)=j_0+j_1+\cdots+j_{n-1}$. Affine functions (whose set is denoted by $A_n$) are those of algebraic degree at most $1$. The Walsh transform of $f\in{ B}_{n}$ at $ \ll \in \mathbb{F}_{2^n}$ is defined as $$ \begin{array}{l}W_f(\ll) = \sum\limits_{x \in \mathbb{F}_{2^n}}(-1)^{f(x) + Tr^n_1(\ll x)}.\end{array}$$ \iffalse A Boolean function $f(x_1, \ldots, x_n)$ defined on ${\Bbb F}_2^n$ is commonly represented as a multivariate polynomial over $\F_2$ called the {\em algebraic normal form} (ANF) of $f$. More precisely, $f(x_1, \ldots, x_n)$ can be written as \begin{equation} \label{eq:anf} f(x_1,\dots,x_n)=\sum_{u\in \F_2^n}\lambda_u\left(\prod_{i=1}^n x_i^{u_i}\right)~, \end{equation} for $\lambda_u\in \F_2~, u=(u_1,\ldots,u_n)$. The algebraic degree of $f$, denoted by $\deg(f)$, is equal to the maximum Hamming weight of $u \in \F_2^n$ for which $\lambda_u \neq 0$. An $n$-variable Boolean function of the form $a \cdot x=a_{1}x_{1}\oplus\cdots\oplus a_{n}x_{n}$, where $a_i \in {\Bbb F}_{2}$ and whose degree is at most one, is called linear function.
The function $f(x)=a \cdot x \oplus b$, where $b \in {\Bbb F}_{2}$ is a fixed constant, is called affine and the set of all $n$-variable affine functions is denoted by $A_{n}$. \fi The \emph{nonlinearity} of $f\in B_{n}$ is defined as the minimum Hamming distance to the set of all $n$-variable affine functions, i.e., \begin{displaymath} \begin{array}{c} nl(f)=\min_{g\in A_{n}}d(f,g), \end{array} \end{displaymath} where $d(f,g)$ is the Hamming distance between $f$ and $g$. The relationship between the nonlinearity and the Walsh spectrum of $f \in {B}_n$ is $$ nl(f) = 2^{n-1} - \frac{1}{2}\max_{\ll \in \mathbb{F}_{2^n}}|W_f(\ll)|.$$ By Parseval's identity $\sum_{\ll \in \mathbb{F}_{2^n}} W_f(\ll)^2 = 2^{2n}$, it can be shown that \noindent $ \max\{ |W_f(\ll) | : \ll \in \mathbb{F}_{2^n} \} \ge 2^{\frac{n}{2}}$, which implies that $nl(f) \le 2^{n-1} - 2^{\frac{n}{2} - 1}$. If $n$ is an even integer, a function $f \in B_n$ is said to be \emph{bent} if $W_f(\ll) \in \{2^{\frac{n}{2}}, -2^{\frac{n}{2}}\}$ for all $\ll \in \mathbb{F}_{2^n}$. Moreover, a function $f \in B_n$ is said to be $t$-\emph{plateaued} if $W_f(\ll) \in\{ 0, \pm 2^\frac{n+t}{2}\}$ for all $\ll \in \mathbb{F}_{2^n}$. The integer $t$ ($0\leq t\leq n$) is called the \emph{amplitude} of $f$. Note that a bent function is a $0$-plateaued function. In the following, $``<,>"$ denotes the standard inner (dot) product of two vectors, that is, $<\ll , x>=\lambda_1x_1+ \ldots + \lambda_nx_n$, where $\ll,x\in \mathbb{F}_2^n$. If we identify the vector space $\mathbb{F}_2^n$ with the finite field $\mathbb{F}_{2^n}$, we use the trace bilinear form $Tr^n_1(\ll x)$ instead of the dot product, that is, $<\ll , x>=Tr^n_1(\ll x)$, where $\ll,x\in \mathbb{F}_{2^n}$.
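These definitions are easy to exercise by brute force on a small example. The sketch below (a toy example of ours) computes the Walsh transform of the quadratic function $f(x_1,x_2,x_3,x_4)=x_1x_2+x_3x_4$ on $\mathbb F_2^4$, checks Parseval's identity and the bent criterion $|W_f(\ll)|=2^{n/2}$, and recovers the covering radius bound $nl(f)=2^{n-1}-2^{n/2-1}=6$:

```python
# Walsh transform, Parseval's identity, bentness, and nonlinearity on F_2^4.
n = 4

def dot(u, x):                       # inner product on F_2^4 (parity of u & x)
    return bin(u & x).count('1') & 1

def f(x):                            # f(x1,...,x4) = x1 x2 + x3 x4
    x1, x2, x3, x4 = (x >> 3) & 1, (x >> 2) & 1, (x >> 1) & 1, x & 1
    return (x1 & x2) ^ (x3 & x4)

W = [sum((-1)**(f(x) ^ dot(u, x)) for x in range(2**n)) for u in range(2**n)]

assert sum(w*w for w in W) == 2**(2*n)          # Parseval's identity
assert set(map(abs, W)) == {2**(n//2)}          # bent: |W_f| = 2^{n/2} everywhere

nl = 2**(n-1) - max(map(abs, W)) // 2
assert nl == 2**(n-1) - 2**(n//2 - 1)           # covering radius bound attained: nl = 6
```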
For vectorial functions $F: \mathbb{F}_2^n\rightarrow \mathbb{F}_2^m$, the extended Walsh-Hadamard transform is defined as $$ \begin{array}{l} W_F(u,v) = \sum\limits_{x \in \mathbb{F}_2^n}(-1)^{<v,F(x)> + <u, x>},\end{array}$$ where $F(x)=(f_1(x),f_2(x),\cdots,f_m(x)),u\in \mathbb{F}_2^n, v\in \mathbb{F}_2^m$. Let $F$ be a vectorial function from $\mathbb{F}_{2^n}$ into $\mathbb{F}_{2^m}$. The linear combinations of the coordinates of $F$ are the Boolean functions $f_\lambda: x\mapsto Tr_1^{m} (\lambda F(x))$, $\lambda \in \mathbb{F}_{2^m}$, where $f_{0}$ is the null function. The functions $f_\lambda$ ($\lambda\not=0$) are called the \emph{components} of $F$. A vectorial function is said to be \emph{bent} (resp. $t$-\emph{plateaued}) if all its components are bent (resp. $t$-plateaued). A vectorial function $F$ is called \textit{vectorial plateaued} if all its components are plateaued with possibly different amplitudes. Let $F: \mathbb{F}_2^n\rightarrow \mathbb{F}_2^n $ be an $(n,n)$-function. For any $a\in {\Bbb F}_2^n, b\in {\Bbb F}_2^n$, we denote \begin{displaymath} \begin{array}{c} \Delta_F(a,b)=\{x|x\in {\Bbb F}_2^n,F(x\oplus a)\oplus F(x)=b\},\\ \delta_F(a,b)=\|\Delta_F(a,b)\|. \end{array} \end{displaymath} Then, we have $\delta(F):=\max_{a\not=0, b\in {\Bbb F}_2^n } \delta_F(a,b)\geq 2$, and the functions for which equality holds are said to be \emph{almost perfect nonlinear} (APN). A nice survey on Boolean and vectorial Boolean functions for cryptography can be found in \cite{Cbook1} and \cite{Cbook}, respectively. \section{The stability of a function having the maximum number of bent components under CCZ equivalence}\label{stability} In \cite{Pott2017}, Pott et al. have shown that the maximum number of bent components of a vectorial $(n,n)$-function $F$ is $2^n -2^k$, where $k:=\frac{n}2$ ($n$ even). They left open the problem of whether the property of a function having the maximum number of bent components is invariant under CCZ equivalence.
In this section we solve this problem by giving a positive answer in the following theorem. \begin{theorem}\label{theo:CCZ} Let $n=2k$ and $F, F'': \mathbb F_2^n \rightarrow \mathbb F_2^n$ be CCZ-equivalent functions. Then $F$ has $2^n -2^k$ bent components if and only if $F''$ has $2^n -2^k$ bent components. \end{theorem} \begin{proof} Let $F$ be a function with $2^n -2^k$ bent components. Define \begin{displaymath} S=\{v\in \mathbb F_2^n: x \rightarrow <v, F(x)> \text{ is not bent} \}. \end{displaymath} By Theorem 3 of \cite{Pott2017}, $S$ is a linear subspace of dimension $k$. Then, let $U$ be any $k$-dimensional subspace of $\mathbb F_2^n$ such that $U \cap S =\{0\}$. Let $v_1, \cdots, v_k$ be a basis of $S$ and $u_1, \cdots, u_k$ be a basis of $U$. Define a new function $F': \mathbb F_2^n \rightarrow \mathbb F_2^n$ as \begin{displaymath} F'(x)=(H(x), I(x)) \end{displaymath} where $H(x)=(<v_1, F(x)>, \cdots, <v_k, F(x)>)$ and $I(x)=(<u_1, F(x)>, \cdots, <u_k, F(x)>)$. Then, $F'$ is EA-equivalent to $F$. Recall that the property of a function having the maximum number of bent components is invariant under EA equivalence. Thus, $F'$ has $2^n -2^k$ bent components. Since $F$ and $F''$ are CCZ-equivalent functions, $F''$ is CCZ-equivalent to $F'$, which has $2^n -2^k$ bent components. Let $\mathcal L(x, y,z) = (L_1(x, y,z), L_2(x, y,z), L_3(x, y,z))$, (with $L_1 :\mathbb F^n_2 \times F^k_2 \times F^k_2 \rightarrow \mathbb F^n_2$, $L_2 :\mathbb F^n_2 \times F^k_2 \times F^k_2 \rightarrow \mathbb F^k_2$ and $L_3 :\mathbb F^n_2 \times F^k_2 \times F^k_2 \rightarrow \mathbb F^k_2$) be an affine permutation of $\mathbb F^n_2 \times F^k_2 \times F^k_2$ which maps the graph of $F'$ to the graph of $F''$. Then, the graph $\mathcal G_{F''}=\{\mathcal L (x, H(x), I(x)): x \in \mathbb F_2^n \}$. 
Thus $L_1(x,H(x), I(x))$ is a permutation and for some affine function $L_1': \mathbb F^n_2 \times \mathbb F^k_2 \rightarrow \mathbb F^n_2$ and linear function $L_1'': \mathbb F^k_2 \rightarrow \mathbb F^n_2$ we can write $L_1(x, y,z)=L_1'(x, y)+L_1''(z)$. For any element $v$ of $\mathbb F^n_2$ we have \begin{align}\label{eq:v-comp} <v , L_1(x, H (x), I(x))> =& <v , L_1'(x,H(x))> + <v , L_1''(I (x))> \nonumber\\ =& <v , L_1'(x,H(x))> + <L_1''^*(v) , I (x)> \nonumber\\ =& <L_1''^*(v) , I (x)>+<v',H(x)>+<v'',x>+a, \end{align} where $a\in \mathbb F_2$, $v'\in \mathbb F_2^k$, $v''\in \mathbb F_2^n$, and $L_1''^*$ is the adjoint operator of $L_1''$; in fact, $L_1''^*$ is the linear map whose matrix is the transpose of that of $L_1''$. Since $L_1(x,H(x), I(x))$ is a permutation, every function $<v , L_1(x, H (x), I(x))>$ with $v\neq 0$ is balanced (recall that a function is a permutation if and only if all its nonzero components are balanced) and, hence, cannot be bent. From the construction of $F'$, $<L_1''^*(v) , I (x)>+<v',H(x)>+<v'',x>+a$ is not bent if and only if $L_1''^*(v)=0$. Therefore, $L_1''^*(v)=0$ for any $v\in \mathbb F_2^n$. This means that $L_1''$ is null, that is, $L_1(x, H (x), I(x))=L_1'(x, H (x)) $. We can also write $L_i(x, y,z)=L_i'(x, y)+L_i''(z)$ for $i\in \{2, 3\}$, where $L_i': \mathbb F^n_2 \times \mathbb F^k_2 \rightarrow \mathbb F^k_2$ are affine functions and $L_i'': \mathbb F^k_2 \rightarrow \mathbb F^k_2$ are linear functions. Set $F_1''(x)=L_1(x, H (x), I(x))=L_1'(x, H (x))$ and $F_2''(x)=(L_2'(x, H(x))+L_2''(I(x)),L_3'(x, H(x))+L_3''(I(x)))$. Then, $F''(x)=F_2''\circ F_1''^{-1}(x)$.
For any $v\in \mathbb F_2^n$ and $u=(u',u'')\in \mathbb F_2^k \times \mathbb F_2^k $, \begin{align*} W_{F''}(v,u)=& \sum_{x \in \mathbb F_2^n} (-1)^{<u, F''(x)>+<v, x>}\\ =& \sum_{x \in \mathbb F_2^n} (-1)^{<u, F''\circ F_1''(x)>+<v, F_1''(x)>}\\ =&\sum_{x \in \mathbb F_2^n} (-1)^{<u, F''_2(x)>+<v, F_1''(x)>}\\ =& \sum_{x \in \mathbb F_2^n} (-1)^{<u, (L_2''(I(x)),L_3''(I(x)))>+ <u, (L_2'(x, H(x)),L_3'(x, H(x)))> +<v, L_1'(x, H(x))>}\\ =& \sum_{x \in \mathbb F_2^n} (-1)^{< L_2''^*(u')+L_3''^*(u''), I(x)>+ <v',H(x)>+<v'',x>+a}. \end{align*} By the construction of $I(x)$, if $L_2''^*(u')+L_3''^*(u'')\neq 0$, then $< L_2''^*(u')+L_3''^*(u''), I(x)>+ <v',H(x)>$ is bent. Thus, $<u, F''(x)>$ is bent when $L_2''^*(u')+L_3''^*(u'')\neq 0$, where $u=(u',u'')\in \mathbb F_2^k \times \mathbb F_2^k $. For $i=2, 3$, let $A_i$ be the $k \times k$ matrix defined by \begin{align*} L_i''(z)=z A_i, \end{align*} where $z=(z_1, \cdots, z_k) \in \mathbb F_2^k$. Then, \begin{align} L_2''^*(u')+L_3''^*(u'')=&u'A_2^T+u'' A_3^T \nonumber \\ =& (u',u'') \left [\begin{matrix} A_2^T \\ A_3^T \end{matrix} \right ]. \end{align} Recall that $\mathcal L$ is an affine permutation. Hence, the rank of the linear function $(L_1''(z), L_2''(z), L_3''(z))$ $=(0,L_2''(z), L_3''(z))$ from $\mathbb F_2^k$ to $\mathbb F^n_2 \times \mathbb F^k_2 \times \mathbb F^k_2$ is $k$. By $(L_1''(z), L_2''(z), L_3''(z))=z \left [ \begin{matrix} 0| A_2 |A_3 \end{matrix} \right]$, the rank of the matrix $\left [ \begin{matrix} A_2 |A_3 \end{matrix} \right]$ is $k$. Thus, the rank of the matrix $\left [\begin{matrix} A_2^T \\ A_3^T \end{matrix} \right ]=\left [ \begin{matrix} A_2 |A_3 \end{matrix} \right]^T$ is also $k$. Set \begin{align*} S'=& \{(u',u'')\in \mathbb F_2^k \times \mathbb F_2^k: L_2''^*(u')+L_3''^*(u'')=0\}\\ =&\{(u',u'')\in \mathbb F_2^k \times \mathbb F_2^k: (u',u'') \left [\begin{matrix} A_2^T \\ A_3^T \end{matrix} \right ]=0\}. \end{align*} Then, $S'$ is a linear subspace of dimension $k$.
By the previous discussion, if $u=(u',u'')\in \mathbb F_2^n \setminus S'$, the component function $<u, F''(x)>$ is bent. Thus, $F''(x)$ has at least $2^n -2^k$ bent components. From Theorem 3 in \cite{Pott2017}, $F''(x)$ has exactly $2^n -2^k$ bent components, which completes the proof. \qed \end{proof} \section{The non-existence of APN plateaued functions having the maximum number of bent components}\label{APNplateaued} In \cite{Pott2017}, the authors asked if APN functions could have the maximum number of bent components or not. In this section we investigate the case of all APN plateaued functions. The result is given in the following theorem. \begin{theorem} Let $F$ be a plateaued APN function defined on $\mathbb{F}_2^n$ (where $n\geq 4$ is an even positive integer). Then $F$ cannot have the maximum number of bent components. \end{theorem} \begin{proof} Let $F$ be a plateaued APN function on $\mathbb{F}_2^n$. Since $F$ is plateaued, for every $v\in \mathbb{F}_2^n$ there is an integer $t_v$ with $0\leq t_v \leq n$ such that $W_F(u,v)\in \{0,\pm 2^{\frac{n+t_v}{2}}\}$ for all $u\in \mathbb{F}_2^n$, where $W_F(u,v)=\sum_{x\in \mathbb{F}_2^n} (-1)^{v\cdot F(x)+u\cdot x}$. Denote by $N_t$ the number of $v\in \mathbb{F}_2^n$ with $t_v=t$; since $n$ is even and $2^{\frac{n+t_v}{2}}$ is an integer, only even values of $t$ occur. We have \begin{align}\label{equ1} \sum_{u,v\in \mathbb{F}_2^n} W_F^4(u,v) =&\sum_{v\in \mathbb{F}_2^n}(2^{\frac{n+t_v}{2}})^2 \sum_{u\in \mathbb{F}_2^n}W_F^2(u,v)\nonumber\\ =&2^n\sum_{v\in \mathbb{F}_2^n}2^{t_v} \sum_{u\in \mathbb{F}_2^n}W_F^2(u,v)\nonumber\\ =& 2^{3n}\sum_{v\in \mathbb{F}_2^n}2^{t_v}\nonumber\\ =&2^{3n}(N_0+N_22^2+\cdots +N_n2^n). \end{align} Since $F$ is APN, we have \begin{equation}\label{equ2} \sum_{u,v\in \mathbb{F}_2^n} W_F^4(u,v) =2^{3n}(3\cdot 2^n-2). \end{equation} From Equations (\ref{equ1}) and (\ref{equ2}), we have $$ N_0+N_22^2+\cdots+N_n 2^n=3\cdot 2^n-2. $$ Every term $N_t2^t$ with $t\geq 2$ is divisible by $4$, and $3\cdot 2^n-2\equiv 2\mod 4$ for $n\geq 2$; therefore, $N_0\equiv 2\mod 4$. Since $n\geq 4$, $2^n-2^{\frac{n}{2}} \equiv 0 \mod 4$. Hence, $$ N_0\neq 2^n-2^{\frac{n}{2}}. $$ Thus, $F$ does not have the maximum number of bent components. In particular, quadratic APN functions cannot have the maximum number of bent components.
\qed \end{proof} \section{New constructions of bent component functions of vectorial functions }\label{newbent} In this section we provide several functions defined as $G(x) = x L(x)$ on ${\Bbb F}_{2^{2k}}$ such that the number of bent components $Tr^{2k}_1(\alpha G(x))$ equals $2^{2k}-2^k$, where $L(x)$ is a linear function on ${\Bbb F}_{2^{2k}}$. We first recall two lemmas which will be useful in our context. \begin{lemma}\cite{Pott2017}\label{lemma adjoint} Let $V = {\Bbb F}_{2^{2k}}$ and let $ <,>$ be a nondegenerate symmetric bilinear form on $V$. If $\mathcal{L}:V\rightarrow V$ is linear, we denote the adjoint operator by $\mathcal{L}^*$, i.e., $<x,\mathcal{L}(y)>=<\mathcal{L}^*(x),y>$ for all $x,y\in V$. The function $f : V \rightarrow {\Bbb F}_2$, defined by $x\mapsto <x,\mathcal{L}(x)>$, is bent if and only if $\mathcal{L}+\mathcal{L}^*$ is invertible. \end{lemma} \begin{lemma}\label{lemma L}\cite{Pott2017} Let $V = {\Bbb F}_{2^{2k}}$ and $<x , y>=Tr^n_1(xy)$ be the trace bilinear form. If $\mathcal{L}:V\rightarrow V$ is defined by $\mathcal{L}(x)=\alpha x^{2^i} $, where $\alpha\in V$ and $i\in\{0,1,\cdots,n-1\}$, then $\mathcal{L}^*(x)= \alpha^{2^{n-i}} x^{2^{n-i}}$. \end{lemma} In \cite{Pott2017}, the authors presented a construction of bent functions through adjoint operators. We start by providing a simplified proof of \cite[Theorem 4]{Pott2017} (which is the main result of their article). \begin{theorem}\label{theo in POtt}\cite[Theorem 4]{Pott2017} Let $V = {\Bbb F}_{2^{2k}}$ and $i$ be a nonnegative integer. Then, the mapping $\mathrm{F}_\alpha^{i} $ defined by \begin{displaymath} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha x^{2^i}(x+x^{2^k})\right) \end{displaymath} is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $.
\end{theorem} \begin{proof} From the proof of \cite[Theorem 4]{Pott2017}, we know $$ \mathcal{L}(x)+\mathcal{L}^*(x)=Tr^{2k}_k(\alpha x^{2^i})+\alpha^{2^{2k-i}}\left(Tr^{2k}_k( x)\right)^{2^{k-i}}.$$ From Lemma \ref{lemma adjoint}, we need to show that $\mathcal{L}(x)+\mathcal{L}^*(x)=0$ if and only if $x=0$. Let $\nabla_{a}=\{x\in {\Bbb F}_{2^{2k}} : Tr^{2k}_k(x)=a \} $ for $a\in {\Bbb F}_{2^{k}}$. It is well known that $ Tr^{2k}_k( x) $ is a surjection from ${\Bbb F}_{2^{2k}} $ onto ${\Bbb F}_{2^{k}}$ and that $|\nabla_{a}|=2^{k} $ for any $a\in {\Bbb F}_{2^{k}}$. We also know $\nabla_{0}= {\Bbb F}_{2^{k}}$. If $\alpha\notin {\Bbb F}_{2^{k}} $, then $\mathcal{L}(x)+\mathcal{L}^*(x)=0$ if and only if \begin{equation}\label{equ 1bent2} \left\{\begin{array}{c} Tr^{2k}_k(\alpha x^{2^i})=0,\\ Tr^{2k}_k( x)=0, \end{array} \right. \end{equation} i.e., $x\in \nabla_{0}={\Bbb F}_{2^{k}}$ and $(\alpha+\alpha^{2^k})x^{2^i}=0$, i.e., $x=0$. Conversely, if $\mathcal{L}(x)+\mathcal{L}^*(x)\neq 0$ for every $ x\neq 0$, then $\alpha\notin {\Bbb F}_{2^{k}} $. Indeed, if $\alpha\in {\Bbb F}_{2^{k}} $, then $$\mathcal{L}(x)+\mathcal{L}^*(x)=\alpha \left(Tr^{2k}_k( x)\right)^{2^i}+\left(\alpha Tr^{2k}_k( x)\right)^{2^{k-i}},$$ and since $Tr^{2k}_k(x)=x+x^{2^k}=0$ for $x\in {\Bbb F}_{2^{k}}$, we get $\mathcal{L}(x)+\mathcal{L}^*(x)= 0$ for any $x\in {\Bbb F}_{2^{k}}$. Thus, we have $\mathrm{F}_\alpha^{i}(x)$ is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}}$. \qed \end{proof} Now, we are going to present a first new family of bent functions through adjoint operators. \begin{theorem}\label{theo-new_bent} Let $V = {\Bbb F}_{2^{2k}}$ and $i$ be a nonnegative integer. Let $t_1,t_2$ be two integers such that $0\leq t_1,t_2\leq k$ and both $ z^{2^{k-t_1}-1}+z^{2^{k-t_2}-1}+1=0$ and $ z^{2^{t_1}-1}+ z^{2^{t_2}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$.
Then, the function $\mathrm{F}_\alpha^{i} $ defined on $V$ by \begin{equation}\label{equa bent0} \mathrm{F}_\alpha^{i}(x)=Tr_1^{2k}\left(\alpha x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}})\right) \end{equation} is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $. \end{theorem} \begin{proof} We have \begin{equation}\label{equa bent1} \begin{array}{rl} \mathrm{F}_\alpha^{i}(x)=&Tr_{1}^{2k}\left(\alpha x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}})\right)\\ =&Tr_{1}^{2k}(x\alpha x^{2^i})+Tr_{1}^{2k}(x\alpha^{2^k} x^{2^{i+k}})+Tr_{1}^{2k}(x\alpha^{2^{2k-t_1}} x^{2^{i+2k-t_1}})\\ &+Tr_{1}^{2k}(x\alpha^{2^{k-t_1}} x^{2^{i+k-t_1}})+Tr_{1}^{2k}(x\alpha^{2^{2k-t_2}} x^{2^{i+2k-t_2}})+Tr_{1}^{2k}(x\alpha^{2^{k-t_2}} x^{2^{i+k-t_2}})\\ =&Tr_{1}^{2k}(x\mathcal{L}(x)), \end{array} \end{equation} where \begin{displaymath}\begin{array}{rl} \mathcal{L}(x)=&\alpha x^{2^i}+\alpha^{2^k} x^{2^{i+k}}+\alpha^{2^{2k-t_1}} x^{2^{i+2k-t_1}}+\alpha^{2^{k-t_1}} x^{2^{i+k-t_1}}\\ &+\alpha^{2^{2k-t_2}} x^{2^{i+2k-t_2}}+\alpha^{2^{k-t_2}} x^{2^{i+k-t_2}}\\ =&\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}+\alpha^{2^{k-t_1}} x^{2^{i+k-t_1}}+(\alpha^{2^{k-t_1}} x^{2^{i+k-t_1}})^{2^k}\\ &+\alpha^{2^{k-t_2}} x^{2^{i+k-t_2}}+(\alpha^{2^{k-t_2}} x^{2^{i+k-t_2}})^{2^k}\\ =&\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)+\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_1}}+\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_2}} \end{array} \end{displaymath} According to Lemma \ref{lemma L}, the adjoint operator $ \mathcal{L}^*(x)$ is \begin{equation}\label{equa adjoint1} \begin{array}{rl} \mathcal{L}^*(x)=&\alpha^{2^{2k-i}} x^{2^{2k-i}}+\alpha^{2^{2k-i}} x^{2^{k-i}}+\alpha^{2^{2k-i}} x^{2^{t_1-i}}\\ &+\alpha^{2^{2k-i}} x^{2^{k+t_1-i}}+\alpha^{2^{2k-i}} x^{2^{t_2-i}}+\alpha^{2^{2k-i}} x^{2^{k+t_2-i}}\\ = &\alpha^{2^{2k-i}} \left(x^{2^{2k-i}}+ x^{2^{k-i}}+ x^{2^{t_1-i}} + x^{2^{k+t_1-i}}+ x^{2^{t_2-i}}+ x^{2^{k+t_2-i}}\right)\\ 
=&\alpha^{2^{2k-i}} \left( x^{2^{k-i}}+(x^{2^{k-i}})^{2^k}+ x^{2^{t_1-i}} + (x^{2^{t_1-i}})^{2^k}+ x^{2^{t_2-i}}+ (x^{2^{t_2-i}})^{2^k}\right)\\ =&\alpha^{2^{2k-i}} \left( (x+x^{2^k})^{2^{k-i}}+ (x+x^{2^k})^{2^{t_1-i}}+(x+x^{2^k})^{2^{t_2-i}} \right)\\ =&\alpha^{2^{2k-i}} \left( (x+x^{2^k})^{2^{k-i}}+ (x+x^{2^k})^{2^{t_1+k-i}}+(x+x^{2^k})^{2^{t_2+k-i}} \right) \end{array} \end{equation} Thus, we have \begin{displaymath} \begin{array}{rl} \mathcal{L}(x)+\mathcal{L}^*(x) =&\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)+\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_1}}+\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_2}} \\ &+\alpha^{2^{2k-i}} \left( (x+x^{2^k})^{2^{k-i}}+ (x+x^{2^k})^{2^{t_1+k-i}}+(x+x^{2^k})^{2^{t_2+k-i}} \right). \end{array} \end{displaymath} Note that we have $\mathcal{L}(x)\in {\Bbb F}_{2^{k}} $ and $ (x+x^{2^k})^{2^{k-i}}+ (x+x^{2^k})^{2^{t_1+k-i}}+(x+x^{2^k})^{2^{t_2+k-i}}\in {\Bbb F}_{2^{k}}$ for any $x\in {\Bbb F}_{2^{2k}}$. From Lemma \ref{lemma adjoint}, it is sufficient to show that $\mathcal{L}(x)+\mathcal{L}^*(x)$ is invertible. That is, we need to show that $\mathcal{L}(x)+\mathcal{L}^*(x)=0$ if and only if $x=0$. Since both $ z^{2^{k-t_1}-1}+z^{2^{k-t_2}-1}+1=0$ and $ z^{2^{t_1}-1}+ z^{2^{t_2}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$, we have both $\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)+\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_1}}+\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_2}}=0$ and $ (x+x^{2^k})^{2^{k-i}}+ (x+x^{2^k})^{2^{t_1+k-i}}+(x+x^{2^k})^{2^{t_2+k-i}}=0$ if and only if \begin{equation}\label{equ bent2} \left\{\begin{array}{c} \alpha x^{2^i}+(\alpha x^{2^i})^{2^k}=0,\\ x+x^{2^k}=0. \end{array} \right. 
\end{equation} As shown in the proof of Theorem \ref{theo in POtt}, system (\ref{equ bent2}) holds if and only if $x=0$ whenever $\alpha\notin {\Bbb F}_{2^{k}}$, while for $\alpha\in {\Bbb F}_{2^{k}}$ the map $\mathcal{L}(x)+\mathcal{L}^*(x)$ vanishes on ${\Bbb F}_{2^{k}}$ and is therefore not invertible. Hence $\mathrm{F}_\alpha^{i}(x) $ is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $. \qed \end{proof} We immediately have the following statement by setting $t_2= k-t_1$ in the previous theorem. \begin{corollary}\label{cor thr} Let $V = {\Bbb F}_{2^{2k}}$ and $i$ be a nonnegative integer. Let $t_1,t_2$ be two positive integers such that $t_1+t_2= k$ and $ z^{2^{t_1}-1}+z^{2^{t_2}-1}+1=0$ has no solution in ${\Bbb F}_{2^{k}}$. Then, the mapping $\mathrm{F}_\alpha^{i} $ defined on $V$ by \begin{displaymath} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha G(x)\right) \end{displaymath} is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $, where $G(x)=x^{2^i}\left(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}}\right)$. \end{corollary} The previous construction given by Theorem \ref{theo-new_bent} can be generalized as follows. \begin{theorem}\label{theorem 3} Let $V = {\Bbb F}_{2^{2k}}$ and $i$ be a nonnegative integer. Let $t_1,t_2$ be two positive integers such that $0\leq t_1,t_2\leq k$ and both $ (\gamma^{(1)})^{2^{k-t_1}} z^{2^{k-t_1}-1}+(\gamma^{(2)})^{2^{k-t_2}}z^{2^{k-t_2}-1}+1=0$ and $ (\gamma^{(1)})^{2^{k-i}} z^{2^{t_1}-1}+(\gamma^{(2)})^{2^{k-i}} z^{2^{t_2}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$, where $\gamma^{(1)}, \gamma^{(2)}\in {\Bbb F}_{2^{k}}$. Then, the mapping $\mathrm{F}_\alpha^{i} $ defined by \begin{equation}\label{equa bent 30} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha x^{2^i}\left(x+x^{2^k}+\gamma^{(1)}(x^{2^{t_1}}+x^{2^{t_1+k}}) +\gamma^{(2)}(x^{2^{t_2}}+x^{2^{t_2+k}})\right)\right) \end{equation} is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $.
\end{theorem} \begin{proof} We have \begin{equation}\label{equa bent31} \begin{array}{rl} \mathrm{F}_\alpha^{i}(x) =&Tr_{1}^{2k}(x\mathcal{L}(x)), \end{array} \end{equation} where \begin{displaymath}\begin{array}{rl} \mathcal{L}(x)=&\alpha x^{2^i}+\alpha^{2^k} x^{2^{i+k}}+(\gamma^{(1)})^{2^{k-t_1}} \left(\alpha^{2^{2k-t_1}} x^{2^{i+2k-t_1}}+\alpha^{2^{k-t_1}} x^{2^{i+k-t_1}}\right)\\ +&(\gamma^{(2)})^{2^{k-t_2}} \left(\alpha^{2^{2k-t_2}} x^{2^{i+2k-t_2}}+\alpha^{2^{k-t_2}} x^{2^{i+k-t_2}}\right)\\ =&\left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)+(\gamma^{(1)})^{2^{k-t_1}} \left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_1}}\\ +&(\gamma^{(2)})^{2^{k-t_2}} \left(\alpha x^{2^i}+(\alpha x^{2^i})^{2^k}\right)^{2^{k-t_2}} \end{array} \end{displaymath} The adjoint operator $ \mathcal{L}^*(x)$ is \begin{equation}\label{equa adjoint31} \begin{array}{rl} \mathcal{L}^*(x)=&\alpha^{2^{2k-i}} x^{2^{2k-i}}+\alpha^{2^{2k-i}} x^{2^{k-i}}+(\gamma^{(1)})^{2^{k-i}}\left(\alpha^{2^{2k-i}} x^{2^{t_1-i}}\right.\\ &\left.+\alpha^{2^{2k-i}} x^{2^{k+t_1-i}}\right)+(\gamma^{(2)})^{2^{k-i}}\left(\alpha^{2^{2k-i}} x^{2^{t_2-i}}+\alpha^{2^{2k-i}} x^{2^{k+t_2-i}}\right)\\ =&\alpha^{2^{2k-i}} \left( (x+x^{2^k})^{2^{k-i}}+ (\gamma^{(1)})^{2^{k-i}}(x+x^{2^k})^{2^{t_1-i}}+(\gamma^{(2)})^{2^{k-i}}(x+x^{2^k})^{2^{t_2-i}} \right)\\ =&\alpha^{2^{2k-i}} \left( (x+x^{2^k})^{2^{k-i}}+ (\gamma^{(1)})^{2^{k-i}}(x+x^{2^k})^{2^{t_1+k-i}} +(\gamma^{(2)})^{2^{k-i}}(x+x^{2^k})^{2^{t_2+k-i}} \right) \end{array} \end{equation} Note that we have $\mathcal{L}(x)\in {\Bbb F}_{2^{k}} $ and $ (x+x^{2^k})^{2^{k-i}}+ (\gamma^{(1)})^{2^{k-i}}(x+x^{2^k})^{2^{t_1+k-i}} +(\gamma^{(2)})^{2^{k-i}}(x+x^{2^k})^{2^{t_2+k-i}}\in {\Bbb F}_{2^{k}}$ for any $x\in {\Bbb F}_{2^{2k}}$. In order to show that $\mathcal{L}(x)+\mathcal{L}^*(x)$ is invertible, we need to show that $\mathcal{L}(x)+\mathcal{L}^*(x)=0$ if and only if $x=0$. 
Since both $ (\gamma^{(1)})^{2^{k-t_1}} z^{2^{k-t_1}-1}+(\gamma^{(2)})^{2^{k-t_2}}z^{2^{k-t_2}-1}+1=0$ and $ (\gamma^{(1)})^{2^{k-i}} z^{2^{t_1}-1}+(\gamma^{(2)})^{2^{k-i}} z^{2^{t_2}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$, we have both $\mathcal{L}(x)=0$ and $ (x+x^{2^k})^{2^{k-i}}+ (\gamma^{(1)})^{2^{k-i}}(x+x^{2^k})^{2^{t_1+k-i}} +(\gamma^{(2)})^{2^{k-i}}(x+x^{2^k})^{2^{t_2+k-i}}=0$ if and only if \begin{equation}\label{equ bent32} \left\{\begin{array}{c} \alpha x^{2^i}+(\alpha x^{2^i})^{2^k}=0,\\ x+x^{2^k}=0. \end{array} \right. \end{equation} By the proof of Theorem \ref{theo in POtt}, $\mathrm{F}_\alpha^{i}(x) $ is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $. \qed \end{proof} By the same process used to prove Theorem \ref{theorem 3}, one can get the following result. \begin{theorem}\label{theo bentkla} Let $V = {\Bbb F}_{2^{2k}}$ and $i,\rho$ be two nonnegative integers such that $\rho\leq k$. Let $\gamma^{(j)}\in {\Bbb F}_{2^{k}}$ and let $t_j$ be a nonnegative integer with $0\leq t_j\leq k$, where $j=1,2,\cdots,\rho$. Assume that both equations $ \sum\limits_{j=1}^{\rho}(\gamma^{(j)})^{2^{k-t_j}} z^{2^{k-t_j}-1}+1=0$ and $ \sum\limits_{j=1}^{\rho}(\gamma^{(j)})^{2^{k-i}} z^{2^{t_j}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$. Then, the mapping $\mathrm{F}_\alpha^{i} $ defined on $V$ by \begin{equation}\label{equa bent 41} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha G(x)\right) \end{equation} is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $, where $G(x)= x^{2^i}\left(Tr^{2k}_k(x)+\sum\limits_{j=1}^{\rho}\gamma^{(j)}(Tr^{2k}_k(x))^{2^{t_j}}\right)$. \end{theorem} \begin{lemma}\cite{Pott2017}\label{lemma vectorial bent} Let $F_\alpha(x) = Tr^{2k}_1(\alpha G(x))$ be a Boolean bent function for any $\alpha \in {\Bbb F}_2^{2k}\setminus {\Bbb F}_2^{k}$, where $G : {\Bbb F}_2^{2k}\rightarrow {\Bbb F}_2^{2k}$.
Then, $F : {\Bbb F}_2^{2k}\rightarrow {\Bbb F}_2^{k} $, defined as $F(x)=Tr^{2k}_k(\alpha G(x))$, is a vectorial bent function for any $\alpha \in {\Bbb F}_2^{2k}\setminus {\Bbb F}_2^{k}$. \end{lemma} According to Theorem \ref{theo bentkla} and Lemma \ref{lemma vectorial bent}, we immediately get the following theorem. \begin{theorem}\label{theo bentvector} Let $G(x)$ be defined as in Theorem \ref{theo bentkla}. Then, the mapping $\mathrm{F}_\alpha$ defined by \begin{equation}\label{equa bent 41b} \mathrm{F}_\alpha(x)=Tr^{2k}_k\left(\alpha G(x)\right) \end{equation} is a vectorial bent function for any $\alpha \in {\Bbb F}_{2^{2k}}\setminus {\Bbb F}_{2^k}$. \end{theorem} In \cite{Pott2017}, the authors presented the differential spectrum of the functions $ G: {\Bbb F}_{2^{2k}}\rightarrow {\Bbb F}_{2^{2k}}$ defined by $G(x)=x^{2^i}(x+x^{2^k})$. Their result is given below. \begin{lemma}\cite{Pott2017}\label{lemma diff} Let $i$ be a nonnegative integer such that $i<k$. The differential spectrum of the functions $G(x)=x^{2^i}(x+x^{2^k}), G: {\Bbb F}_{2^{2k}}\rightarrow {\Bbb F}_{2^{2k}} $, is given by \begin{equation}\label{equ lem pott} \delta_{G}(a,b)\in \left\{ \begin{array}{cl} \{0,2^k\} & ~~\text{if}~~a\in {\Bbb F}^*_{2^k},\\ \{0,2^{\gcd(i,k)}\}& ~~\text{if}~~a\in {\Bbb F}_{2^{2k}}\setminus {\Bbb F}_{2^k}. \end{array}\right. \end{equation} In particular, $\delta_{G}(a,b)=2^k$ only for $a\in {\Bbb F}^*_{2^k}$ and $b\in {\Bbb F}_{2^k}$. \end{lemma} Now we are going to show that the differential spectrum of the functions $x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}}) $ is different from that of the functions $x\mapsto x^{2^i}(x+x^{2^k}) $. { \begin{theorem} Let $ \mathrm{F}_\alpha^{i}(x)=Tr^{2k}_1\left(\alpha G(x)\right)$ be defined as in Theorem \ref{theo-new_bent}, where $G(x)= x^{2^i}(x+x^{2^k}+x^{2^{t_1}}+x^{2^{t_1+k}}+x^{2^{t_2}}+x^{2^{t_2+k}})$.
If $t_1=1$ and $t_2$ with $\gcd(t_2,k)\neq 1$ are such that both $ z^{2^{k-t_1}-1}+z^{2^{k-t_2}-1}+1=0$ and $ z^{2^{t_1}-1}+ z^{2^{t_2}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$, then for $i=t_2$ there exist elements $a\in {\Bbb F}_{2^{2k}}\setminus {\Bbb F}_{2^k}$ such that $\delta_G(a,b)\in\{0,2\}$ for any $b\in {\Bbb F}_{2^{2k}}$; in particular, the value $2$ is neither $2^{\gcd(i,k)}$ nor $ 2^k$. \end{theorem} \begin{proof} Let $a\in {\Bbb F}_{2^{2k}}\setminus {\Bbb F}_{2^k}$ such that $\tau^{2^{t_1}}=\tau$, where $\tau=a+a^{2^k}\neq 0$ (since $t_1|k$). We have \begin{equation}\label{equ bent51} \begin{array}{rl} G(x)+G(x+a)=&a^{2^i}\left(x+x^{2^k}+(x+x^{2^k})^{2^{t_1}}+(x+x^{2^k})^{2^{t_2}}\right)\\ &+(x+a)^{2^i}\left(a+a^{2^k}+(a+a^{2^k})^{2^{t_1}}+(a+a^{2^k})^{2^{t_2}}\right). \end{array} \end{equation} Thus, for any $x'\in {\Bbb F}_{2^{2k}}$ there is a unique element $b\in {\Bbb F}_{2^{2k}}$ such that $ G(x')+G(x'+a)=b$. Let $x',x''$ be two solutions of $G(x)+G(x+a)=b $. Hence \begin{equation}\label{equ bent52} \begin{array}{rl} &G(x')+G(x'+a)+G(x'')+G(x''+a)\\ =&a^{2^i}\left(x'+x'^{2^k}+(x'+x'^{2^k})^{2^{t_1}}+(x'+x'^{2^k})^{2^{t_2}} +x''+x''^{2^k}\right.\\ &\left.+(x''+x''^{2^k})^{2^{t_1}}+(x''+x''^{2^k})^{2^{t_2}}\right)\\ &+(x'+x'')^{2^i}\left(a+a^{2^k}+(a+a^{2^k})^{2^{t_1}}+(a+a^{2^k})^{2^{t_2}}\right)=0. \end{array} \end{equation} Since $x+x^{2^k}\in {\Bbb F}_{2^k}$ for any $x\in {\Bbb F}_{2^{2k}}$, (\ref{equ bent52}) implies that $(x'+x'')^{2^i}\left(a+a^{2^k}+(a+a^{2^k})^{2^{t_1}}\right.$ $\left.+(a+a^{2^k})^{2^{t_2}}\right)$ belongs to the multiplicative coset $a^{2^i}{\Bbb F}^*_{2^k}$. Thus, we necessarily have $x'+x''=a\nu$, where $\nu\in {\Bbb F}^*_{2^k}$. Further, $x'+x''+(x'+x'')^{2^k}=a\nu+a^{2^k}\nu$.
Since $t_1|t_2$, from (\ref{equ bent52}), we have \begin{equation}\label{equ bent53} \begin{array}{rl} &\left(\tau+\tau^{2^{t_1}}+\tau^{2^{t_2}}\right) \nu^{2^i} +\tau \nu+\tau^{2^{t_1}}\nu^{2^{t_1}}+\tau^{2^{t_2}}\nu^{2^{t_2}}\\ =&\tau \left(\nu^{2^{i}} + \nu+ \nu^{2^{t_1}}+ \nu^{2^{t_2}}\right)=0. \end{array} \end{equation} If we set $i=t_2$, then (\ref{equ bent53}) reduces to $\tau(\nu+\nu^{2^{t_1}})=0$, i.e., $\nu^{2}=\nu$ since $t_1=1$, so $\nu=1$ and $x''=x'+a$. Hence $\delta_G(a,b)=2\neq 2^{\gcd(i,k)}$ since $\gcd(i,k)=\gcd(t_2,k)\neq 1$. For a given $b\in {\Bbb F}_{2^{2k}}$, if $ G(x')+G(x'+a)\neq b$ for every $x'\in {\Bbb F}_{2^{2k}}$, then $\delta_G(a,b)=0$. \qed \end{proof} } \begin{theorem} Let $i,\rho$ be two nonnegative integers such that $\rho\leq k$. Let $t_j$ be a nonnegative integer with $0\leq t_j\leq k$, where $j=1,2,\cdots,\rho$. Assume that both $ \sum\limits_{j=1}^{\rho}z^{2^{k-t_j}-1}+1=0$ and $ \sum\limits_{j=1}^{\rho} z^{2^{t_j}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$. Then, the mapping $\mathrm{F}_\alpha^{i} $ defined by \begin{equation} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha G(x)\right) \end{equation} where $G(x)= x^{2^i}\left(Tr^{2k}_k(x)+\sum\limits_{j=1}^{\rho}(Tr^{2k}_k(x))^{2^{t_j}}\right)$ is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $. Further, if the number of the solutions of $ \sum\limits_{j=1}^{\rho} z^{2^{t_j}}+z+z^{2^i}=0$ in ${\Bbb F}_{2^{k}}$ is not equal to $2^{\gcd(i,k)}$, then there exist elements $a\in {\Bbb F}_{2^{2k}}\setminus {\Bbb F}_{2^k}$ such that the number $\delta_G(a,b)$ does not equal $2^{\gcd(i,k)}$ for any $b\in {\Bbb F}_{2^{2k}}$. \end{theorem} \begin{proof} From Theorem \ref{theo bentkla}, we know $ \mathrm{F}_\alpha^{i}(x) $ is bent if and only if $\alpha\notin {\Bbb F}_{2^{k}} $. We have \begin{equation}\label{equ bent61} \begin{array}{rl} G(x)+G(x+a)=&a^{2^i}\left(Tr^{2k}_k(x)+\sum\limits_{j=1}^{\rho}(Tr^{2k}_k(x))^{2^{t_j}}\right)\\ &+(x+a)^{2^i}\left(Tr^{2k}_k(a)+\sum\limits_{j=1}^{\rho}(Tr^{2k}_k(a))^{2^{t_j}}\right)=b.
\end{array} \end{equation} Let $a\in {\Bbb F}_{2^{2k}}\setminus {\Bbb F}_{2^k}$ such that $a+a^{2^k}=1 $. We need to show the number of solutions of $ G(x)+G(x+a)=b$ is not equal to $ 2^{\gcd(i,k)}$ for any $b\in {\Bbb F}_{2^{2k}}$. Suppose $\delta_G(a,b)=2^{\gcd(i,k)}$ for some $b$, and let $x',x''$ be two solutions of (\ref{equ bent61}). Hence \begin{equation}\label{equ bent62} \begin{array}{rl} &G(x')+G(x'+a)+G(x'')+G(x''+a)\\ =&a^{2^i}\left(Tr^{2k}_k(x'+x'')+\sum\limits_{j=1}^{\rho}(Tr^{2k}_k(x'+x''))^{2^{t_j}}\right)+(x'+x'')^{2^i}=0 \end{array} \end{equation} since $ \sum\limits_{j=1}^{\rho} z^{2^{t_j}-1}+1=0$ has no solution in ${\Bbb F}_{2^{k}} $ (in particular, $z=1$ is not a solution, so $\rho$ is even), which gives $ \left(Tr^{2k}_k(a)+\sum\limits_{j=1}^{\rho}(Tr^{2k}_k(a))^{2^{t_j}}\right)=1$. Since $Tr^{2k}_k(x'+x'')\in {\Bbb F}_{2^k}$, (\ref{equ bent62}) implies that $(x'+x'')^{2^i}$ belongs to the multiplicative coset $a^{2^i}{\Bbb F}^*_{2^k}$. Thus, we necessarily have $x'+x''=a\nu$, where $\nu\in {\Bbb F}^*_{2^k}$. Further, $Tr^{2k}_k(x'+x'')=x'+x''+(x'+x'')^{2^k}=a\nu+a^{2^k}\nu$. From (\ref{equ bent62}), we have \begin{equation}\label{equ bent63} \nu^{2^i} +\nu+\sum\limits_{j=1}^{\rho}\nu^{2^{t_j}}=0. \end{equation} We also know that the number of the solutions of $ \sum\limits_{j=1}^{\rho} z^{2^{t_j}}+z+z^{2^i}=0$ in ${\Bbb F}_{2^{k}}$ is not equal to $2^{\gcd(i,k)}$; thus, if $a\in \{ x\in {\Bbb F}_{2^{2k}} : Tr^{2k}_k(x)=1\}$, the number $\delta_G(a,b)$ does not equal $2^{\gcd(i,k)}$ for any $b\in {\Bbb F}_{2^{2k}}$. \qed \end{proof} \begin{theorem}\label{theo genealized} Let $n=2k$ and $e$ be two positive integers. Let $V = {\Bbb F}_{2^{2k}}$ and $i$ be a nonnegative integer. Let $E=\{x\in {\Bbb F}_{2^{2k}} : Tr_k^{2k}(x)\in {\Bbb F}_{2^{e}} \} $ and $O=\{x\in {\Bbb F}_{2^{2k}} : Tr_k^{2k}(x)\in M \} $, where $M=\{y+Tr^k_e(y)| y\in{\Bbb F}_{2^{k}}\}$.
Let $\mathrm{F}_\alpha^{i} $ be the function defined on $V$ by \begin{equation}\label{equa bent70} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha x^{2^i}Tr^{2k}_e(x)\right). \end{equation} If $\frac{k}{e}$ is even, then $\mathrm{F}_\alpha^{i} $ is bent if and only if $\alpha\notin E$. If $\frac{k}{e}$ is odd, then $\mathrm{F}_\alpha^{i} $ is bent if and only if $\alpha\notin O$. Further, if $k$ is odd and $e=2$, then $\mathrm{F}_\alpha^{i} $ is bent if and only if $\alpha\notin O$. \end{theorem} \begin{proof} We have \begin{equation}\label{equa bent71} \begin{array}{rl} \mathrm{F}_\alpha^{i}(x)=&Tr_{1}^{2k}\left(\alpha x^{2^i}(x+x^{2^e}+x^{2^{2e}}+\cdots+x^{2^{2k-e}})\right)\\ =&Tr_{1}^{2k}(x\alpha x^{2^i})+Tr_{1}^{2k}(x\alpha^{2^{2k-e}} x^{2^{i+2k-e}})+Tr_{1}^{2k}(x\alpha^{2^{2k-2e}} x^{2^{i+2k-2e}})\\ &+\cdots +Tr_{1}^{2k}(x\alpha^{2^{e}} x^{2^{i+e}})\\ =&Tr_{1}^{2k}(x\mathcal{L}(x)), \end{array} \end{equation} where \begin{displaymath}\begin{array}{rl} \mathcal{L}(x)=&\alpha x^{2^i}+\alpha^{2^{2k-e}} x^{2^{i+2k-e}}+\alpha^{2^{2k-2e}} x^{2^{i+2k-2e}} +\cdots +\alpha^{2^{e}} x^{2^{i+e}}\\ =&\alpha x^{2^i}+(\alpha x^{2^i})^{2^{2k-e}}+(\alpha x^{2^i})^{2^{2k-2e}}+\cdots+(\alpha x^{2^i})^{2^{e}}\\ =&Tr^{2k}_e(\alpha x^{2^i}). \end{array} \end{displaymath} According to Lemma \ref{lemma L}, the adjoint operator $ \mathcal{L}^*(x)$ is \begin{equation}\label{equa adjoint71} \begin{array}{rl} \mathcal{L}^*(x)=&\alpha^{2^{2k-i}} x^{2^{2k-i}}+\alpha^{2^{2k-i}} x^{2^{e-i}}+\alpha^{2^{2k-i}} x^{2^{2e-i}} +\cdots+ \alpha^{2^{2k-i}} x^{2^{2k-e-i}}\\ = &\alpha^{2^{2k-i}} \left(x^{2^{2k-i}}+ x^{2^{e-i}}+ x^{2^{2e-i}} +\cdots+ x^{2^{2k-e-i}}\right)\\ = &\alpha^{2^{2k-i}} \left(x+ x^{2^{e}}+ x^{2^{2e}} +\cdots+ x^{2^{2k-e}}\right)^{2^{2k-i}}\\ =&\alpha^{2^{2k-i}}\left(Tr^{2k}_e( x)\right)^{2^{2k-i}}.
\end{array} \end{equation} Thus, we have \begin{displaymath} \begin{array}{rl} \mathcal{L}(x)+\mathcal{L}^*(x) =&Tr^{2k}_e(\alpha x^{2^i})+\alpha^{2^{2k-i}}\left(Tr^{2k}_e( x)\right)^{2^{2k-i}}. \end{array} \end{displaymath} From Lemma \ref{lemma adjoint}, it is sufficient to show that $\mathcal{L}(x)+\mathcal{L}^*(x)$ is invertible. That is, we need to show that $\mathcal{L}(x)+\mathcal{L}^*(x)=0$ if and only if $x=0$. When $\frac{k}{e}$ is even, we have $ Tr^{2k}_e( x)=0$ if and only if $x\in E$. If $\alpha\notin E$, then $\mathcal{L}(x)+\mathcal{L}^*(x)=0$ if and only if \begin{equation}\label{equ bent72} \left\{\begin{array}{c} Tr^{2k}_e(\alpha x^{2^i})=Tr^{k}_e\left(Tr^{2k}_k(\alpha x^{2^i})\right)=0,\\ Tr^{2k}_e( x)=0, \end{array} \right. \end{equation} i.e., $x=0$. Conversely, if $\mathcal{L}(x)+\mathcal{L}^*(x)\neq0 $ for every $x\neq 0$, then $\alpha\notin E$. Indeed, suppose $\alpha\in E $; then \begin{displaymath} \begin{array}{rl} \mathcal{L}(x)+\mathcal{L}^*(x) =&Tr^{2k}_e(\alpha x^{2^i})+\alpha^{2^{2k-i}}\left(Tr^{2k}_e( x)\right)^{2^{2k-i}}\\ =& Tr^{2k}_k(\alpha )Tr^{k}_e\left(Tr^{2k}_k( x^{2^i})\right)+\alpha^{2^{2k-i}}\left(Tr^{2k}_e( x)\right)^{2^{2k-i}}\\ =&0 \end{array} \end{displaymath} for any $x\in E$. Hence, $ \mathrm{F}_\alpha^{i}(x)$ is bent if and only if $\alpha\notin E$. Similarly, for $\frac{k}{e}$ odd, we have $ Tr^{2k}_e( x)=0$ if and only if $x\in O$. We can prove $ \mathrm{F}_\alpha^{i}(x)$ is bent if and only if $\alpha\notin O$. Similarly, for $k$ odd and $e=2$, we have $ Tr^{2k}_2( x)=0$ if and only if $x\in O$. We can prove $ \mathrm{F}_\alpha^{i}(x)$ is bent if and only if $\alpha\notin O$. \qed \end{proof} \begin{remark} Note that Theorem \ref{theo in POtt} is a special case of Theorem \ref{theo genealized}. It corresponds to the case where $e=k$. \end{remark} Similarly to Theorem \ref{theo bentkla}, we have the following statement.
\begin{theorem}\label{theo bentkla generalized} Let $i,\rho$ be two nonnegative integers such that $\rho\leq k$. Let $\gamma^{(j)}\in {\Bbb F}_{2^{k}}$ and let $t_j$ be a nonnegative integer with $0\leq t_j\leq k$, where $j=1,2,\cdots,\rho$. Assume that both $ \sum\limits_{j=1}^{\rho}(\gamma^{(j)})^{2^{k-t_j}} z^{2^{k-t_j}-1}+1=0$ and $ \sum\limits_{j=1}^{\rho}(\gamma^{(j)})^{2^{k-i}} z^{2^{t_j}-1}+1=0$ have no solution in ${\Bbb F}_{2^{k}}$. Let $E$ and $O$ be defined as in Theorem \ref{theo genealized}. Let the function $\mathrm{F}_\alpha^{i} $ be defined by \begin{equation}\label{equa bent 80} \mathrm{F}_\alpha^{i}(x)=Tr_{1}^{2k}\left(\alpha x^{2^i}\left(Tr^{2k}_e(x) +\sum\limits_{j=1}^{\rho}\gamma^{(j)}(Tr^{2k}_e(x))^{2^{t_j}}\right)\right). \end{equation} If $\frac{k}{e}$ is even, then $\mathrm{F}_\alpha^{i} $ is bent if and only if $\alpha\notin E$. If $\frac{k}{e}$ is odd, then $\mathrm{F}_\alpha^{i} $ is bent if and only if $\alpha\notin O$. Further, if $k$ is odd and $e=2$, then $\mathrm{F}_\alpha^{i} $ is bent if and only if $\alpha\notin O$. \end{theorem} \section{Conclusions} This paper is in the line of a very recent paper published in the IEEE Transactions on Information Theory by Pott et al.\ \cite{Pott2017} in which several open problems have been raised. In the present paper, we have established that the property of a function having the maximal number of bent components is invariant under CCZ-equivalence, which gives an answer to an open problem in \cite{Pott2017}. Next, we have proved the non-existence of APN plateaued functions having the maximal number of bent components, which gives a partial answer to an open problem in \cite{Pott2017}. Furthermore, we have exhibited several bent functions $F_{\alpha}^i$ for any $\alpha\in\mathbb{F}_{2^{2k}}\setminus\mathbb{F}_{2^k}$ provided that some conditions hold. In other words, the set of those $\alpha$ for which $F_{\alpha}^i$ is bent is of maximal cardinality $2^{2k}-2^{k}$.
This provides an answer to another open problem in \cite{Pott2017}. In addition, we have studied the differential spectrum of certain functions and shown that it differs from those studied in \cite{Pott2017}.
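The constructions and counting arguments above lend themselves to brute-force verification in small fields. The following sketch (not part of the paper) checks two instances over ${\Bbb F}_{2^4}$, i.e. $k=2$: the bent criterion of Theorem \ref{theo in POtt} with $i=1$, and the fourth-moment count of Section \ref{APNplateaued} for the quadratic APN (Gold) function $x^3$. The field representation (modulus $x^4+x+1$) is our own choice.

```python
# Brute-force checks over GF(16) = GF(2^4), represented modulo x^4 + x + 1.
MOD = 0b10011

def gmul(a, b):
    """Multiply two elements of GF(16) (carry-less product reduced mod MOD)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def gpow(a, e):
    r = 1
    for _ in range(e):
        r = gmul(r, a)
    return r

def tr(x):
    """Absolute trace GF(16) -> GF(2) = {0, 1}: x + x^2 + x^4 + x^8."""
    t, y = x, x
    for _ in range(3):
        y = gmul(y, y)
        t ^= y
    return t

# Check 1: Tr(alpha x^2 (x + x^4)) is bent iff alpha lies outside F_4 (k=2, i=1).
F4 = [x for x in range(16) if gpow(x, 4) == x]          # the subfield F_{2^k}

def is_bent(f):
    # A Boolean function on 16 points is bent iff all Walsh values are +-4.
    return all(abs(sum((-1) ** (f[x] ^ tr(gmul(u, x))) for x in range(16))) == 4
               for u in range(16))

for alpha in range(16):
    f = [tr(gmul(alpha, gmul(gpow(x, 2), x ^ gpow(x, 4)))) for x in range(16)]
    assert is_bent(f) == (alpha not in F4)

# Check 2: fourth-moment identity and bent-component count for F(x) = x^3.
def walsh(u, v):
    return sum((-1) ** (tr(gmul(v, gpow(x, 3))) ^ tr(gmul(u, x)))
               for x in range(16))

total = sum(walsh(u, v) ** 4 for u in range(16) for v in range(16))
assert total == 2 ** 12 * (3 * 2 ** 4 - 2)              # the APN identity

N0 = sum(1 for v in range(1, 16)
         if all(abs(walsh(u, v)) == 4 for u in range(16)))
print(N0 % 4)  # 2: so N0 cannot reach the maximum 2^4 - 2^2 = 12
```

The same loops scale to $k=3$ (i.e. ${\Bbb F}_{2^6}$) by swapping the modulus and field size.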
At Mavada we develop customer-centric E-Commerce solutions. These bespoke solutions offer a real alternative to conventional shopping, helping you to deliver customer value, whilst lowering your costs and accelerating your return on investment. Our customer focused approach enables you to offer a holistic E-Commerce solution, incorporating the entire shopping experience from product selection and purchasing, to customer service, delivery and return mechanisms.
Free Shipping on all game tables Brand: Viking Log Furniture Item Number: VikBarnPT-HP Regular price $2,390.00 $1,695. These tables can be used for poker, for eating dinner as a family, for sitting around casually chatting or even to play other board games and the like. With so much variety and choice, you can make use of all that extra space that this convertible poker table can provide. The rustic nature of the design comes to life in the most delicate manner, creating a very impressive looking poker surface that adds a touch of class to the whole experience. Now, you can sit in at home with friends and enjoy a much cheaper and equally exciting night together when playing as a team. This makes it so easy for you to get more games of poker on the go without having to plan them extensively around casinos. The combination shown above is the dark finish with burgundy felt color. A beautiful combination, well picked! Now, everyone can just come to you for a games night, a chill-out night or a bite to eat. With this poker table, you can host your guests in comfortable, classy style like never before! This comes with a felt addition, too, to help you get the right atmosphere around the table. Merely spin it around via flipping and you are left with the other design shining through. This makes it so easy for you to control and understand how to make the most of the space in your home to allow for gaming, fun, and excitement all at once. Don’t lose out any longer and have to pay for the privilege to have fun. Use this poker table today for the most exciting experience possible when it comes to enjoying a round with friends, family and loved ones. The Poker Table Top is stocked with Charcoal Felt to match the 1 in clavos nails but you can select your choice of Felt to match any other game table in your house as well. When you are not playing cards just flip this table top over and you will have a smooth dining table. 
Features: NOTE: Many different felt colors are available for you to choose from for this table. Please submit your choice to us via the cart page's Note to AMERICANA POKER TABLES. We are looking forward to receiving your order together with your choice of felt color.
Explore Wailea E Komo Mai The Wailea Resort Association proudly welcomes you to share the beauty of this community that many also call home. Stroll the scenic, sun-drenched shoreline past world class hotels, oceanfront condominiums, restaurants, shops, and golden sand beaches. Come let the warm sun caress you as the waves lap at your toes. From the majestic sunrise over Haleakala, through the breathtaking sunset, each day in Wailea is a realization of tropical Hawaii dreams. Enjoy your time on Maui at the chic and fashionable Wailea Resort. Come experience the elegance of Wailea, Maui Wailea offers the world’s finest in lodging, dining, shopping & spas, golf, tennis, beach activities, and events. Above it All in Wailea Locally loved and internationally recognized photographer, Randy Jay Braun, offers you this aerial view of Wailea.
TITLE: About completeness of the Fourier series. QUESTION [2 upvotes]: The Fourier series of a function is given by $$ \frac{a_0}{2} + \sum_{n=1}^\infty a_n \cos n \theta + \sum_{n=1}^\infty b_n \sin n \theta . $$ Here, what does the statement "$\sum_{n=1}^\infty b_n \sin n \theta $ is complete" mean? And could you tell me how I can prove this statement? REPLY [2 votes]: Suppose you omit the $n=5$ term from the cosine series, so that $a_5\cos(5\theta)$ just isn't included. Why would the set of functions you're adding up be incomplete? The point is that there would be some functions that cannot be approached by series that omit that term. One such function is $\cos(5\theta)$ itself. "Approached" means convergence in the $L^2$ sense, i.e. $f_n\to f$ in $L^2$ means that $\displaystyle\int_0^{2\pi} |f_n(\theta) - f(\theta)|^2\,d\theta \to 0$ as $n\to\infty$. And "some functions" means some $L^2$ functions, i.e. functions $f$ such that $\displaystyle\int_0^{2\pi} |f(\theta)|^2\,d\theta<\infty$.
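To see the failure numerically: every retained basis function is orthogonal to $\cos(5\theta)$, so the best $L^2$ approximation of $\cos(5\theta)$ from their span is $0$, and the squared error stays at $\|\cos 5\theta\|^2=\pi$. A quick Python/NumPy sketch (the grid size and the truncation level $n<50$ are arbitrary choices):

```python
import numpy as np

# Uniform grid on [0, 2*pi); for trigonometric polynomials of modest degree,
# the Riemann sum below computes integrals over a full period exactly.
M = 4096
theta = np.arange(M) * 2 * np.pi / M

def inner(f, g):
    return float(np.sum(f * g) * 2 * np.pi / M)

target = np.cos(5 * theta)

# The trigonometric system with the n = 5 cosine term removed.
basis = [np.ones(M)]
for n in range(1, 50):
    if n != 5:
        basis.append(np.cos(n * theta))
    basis.append(np.sin(n * theta))

# All projections of cos(5*theta) onto the remaining functions vanish ...
assert max(abs(inner(target, b)) for b in basis) < 1e-9

# ... so the squared L2 distance from the span never drops below pi.
err = inner(target, target)
print(err)  # ~3.14159...
```

Adding more terms to the truncated series changes nothing: each new term is still orthogonal to $\cos(5\theta)$, which is exactly why completeness fails.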
TITLE: Proving a series of functions does not converge uniformly on $\mathbb{R}$ QUESTION [1 upvotes]: I'm working through a question (it's a question with parts that build on each other) that overall will show: a series of functions does not converge uniformly on $\mathbb{R}$, by showing the sequence of its partial sums is not uniformly Cauchy on $\mathbb{R}$. The series of functions is: $$\sum_{n=1}^{\infty}{\dfrac{1}{\sqrt{n}}\sin\left(\frac{x}n\right)} \quad $$ Firstly, I'm asked to state the negation of uniformly Cauchy: For some $\epsilon > 0$, for all $N \in \mathbb{N}$, there exist $m, n \geq N$ and an $x \in I$ such that $|f_m(x) - f_n(x)| \geq \epsilon$. Secondly: If $k, N \in \mathbb{N}$ and $N + 1 \leq k \leq 4N + 3$, show that: $1 \geq \sin\left(\frac{N+1}k\right) \geq \frac{1}5$ I have done this on paper, but it would take a while to write out, so I will omit my answer here. Third (where I'm beginning to get stuck): Show that for all $N \in \mathbb{N}$ $$\sum_{k=N+1}^{k=4N+3}{\dfrac{1}{\sqrt{k}}} \geq 2\sqrt{2}$$ My idea for part 3 is to compare it to an integral, specifically the integral of $\frac1{\sqrt{x}}$, but I'm not sure how to show this relation for all $N$ (what the limits of integration should be) - any tips? Lastly: Show that when $x = N + 1, m = 4N + 3$, and $n = N$, $$|S_m(x) - S_n(x)| \geq \frac{2\sqrt{2}}5$$ And then ultimately show the original series doesn't converge uniformly on $\mathbb{R}$. For part 3 I think I'm on the right track and some tips should suffice, but for part 4 I'm relatively lost. Any help's greatly appreciated - thank you REPLY [2 votes]: Part 2 Let $N + 1 \le k \le 4N + 3$. Then $\frac{N+1}{k}$ satisfies the inequalities $\frac{N+1}{4N+3}\le\frac{N+1}{k}\le1$.
Now, for $x$ in the interval $\left[\frac{N+1}{4N+3},1\right]$, $\sin x$ is monotonically increasing and we have $$1\ge \sin(1)\ge \sin\left(\frac{N+1}{k}\right)\ge\sin\left(\frac{N+1}{4N+3}\right)$$ and thus, since $\frac{N+1}{4N+3}\ge\frac14$, $$\begin{align} \sin\left(\frac{N+1}{k}\right)&\ge\sin\left(\frac{N+1}{4N+3}\right)\\\\ &\ge\sin(1/4)\\\\ &\ge \frac14-\frac{1}{3!}\left(\frac14\right)^3\\\\ &=\frac14\left(1-\frac{1}{96}\right)\\\\ &>\frac15 \end{align}$$ which was to be shown. Part 3 For this part, we will use the following integral: $$\begin{align} \int_{N+1}^{4N+4}\frac{dx}{\sqrt{x}}&=2\left(\sqrt{4N+4}-\sqrt{N+1}\right)\\ &=2\sqrt{N+1}(2-1)\\ &\ge 2\sqrt{2} \end{align}$$ for $N\ge1$. Now, we bound this integral by the following summation: $$\begin{align} \int_{N+1}^{4N+4}\frac{dx}{\sqrt{x}}&=\sum_{k=N+1}^{4N+3}\int_{k}^{k+1} \frac{dx}{\sqrt{x}}\\ &\le\sum_{k=N+1}^{4N+3} \frac{1}{\sqrt{k}} \end{align}$$ and therefore $$\sum_{k=N+1}^{4N+3} \frac{1}{\sqrt{k}} \ge \int_{N+1}^{4N+4}\frac{dx}{\sqrt{x}} \ge 2\sqrt{2},$$ which is the desired inequality! Part 4 From Part 2, we showed that if $N + 1 \le k \le 4N + 3$, then $$1 \ge \sin\left(\frac{N+1}{k}\right) \ge \frac15$$ From Part 3, we showed that for all $N$ $$\sum_{k=N+1}^{4N+3}{\frac{1}{\sqrt{k}}} \ge 2\sqrt{2}$$ Putting these together, every term of the sum below is positive, so $$\begin{align} |S_m(x)-S_n(x)|&=\left|\sum_{k=N+1}^{4N+3} \frac{\sin((N+1)/k)}{\sqrt{k}}\right|\\\\ &\ge\frac15\sum_{k=N+1}^{4N+3} \frac{1}{\sqrt{k}}\\\\ &\ge\frac{2\sqrt{2}}{5} \end{align}$$
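As a quick numerical companion to the answer (our sketch, not part of it), the two key bounds and the resulting uniform-Cauchy violation can be spot-checked directly:

```python
import math

# Spot-check: sin((N+1)/k) >= 1/5 for N+1 <= k <= 4N+3,
# sum_{k=N+1}^{4N+3} 1/sqrt(k) >= 2*sqrt(2), and hence
# |S_m(x) - S_n(x)| >= 2*sqrt(2)/5 with x = N+1, n = N, m = 4N+3.
for N in [1, 2, 5, 10, 100, 1000]:
    ks = range(N + 1, 4 * N + 4)               # k = N+1, ..., 4N+3 inclusive
    assert all(math.sin((N + 1) / k) >= 1 / 5 for k in ks)
    tail = sum(1 / math.sqrt(k) for k in ks)
    assert tail >= 2 * math.sqrt(2)
    gap = sum(math.sin((N + 1) / k) / math.sqrt(k) for k in ks)
    assert gap >= 2 * math.sqrt(2) / 5         # the fixed epsilon that survives
```

The gap never shrinks as $N$ grows, which is exactly why the partial sums fail to be uniformly Cauchy.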
DNA microarray
\begin{document} \title{Factorisation of the complete bipartite graph into spanning semiregular factors} \author{ Mahdieh Hasheminezhad\\ \small Department of Mathematical Sciences\\[-0.8ex] \small Yazd University\\[-0.8ex] \small Yazd, Iran\\ \small\tt hasheminezhad@yazd.ac.ir \and Brendan D. McKay\\ \small School of Computing\\[-0.8ex] \small Australian National University\\[-0.8ex] \small Canberra, ACT 2601, Australia\\ \small\tt brendan.mckay@anu.edu.au } \maketitle \begin{abstract} We enumerate factorisations of the complete bipartite graph into spanning semi\-regular graphs in several cases, including when the degrees of all the factors except one or two are small. The resulting asymptotic behaviour is seen to generalise the number of semiregular graphs in an elegant way. This leads us to conjecture a general formula when the number of factors is vanishing compared to the number of vertices. As a corollary, we find the average number of ways to partition the edges of a random semiregular bipartite graph into spanning semiregular subgraphs in several cases. Our proof of one case uses a switching argument to find the probability that a set of sufficiently sparse semiregular bipartite graphs are edge-disjoint when randomly labelled. \end{abstract} \nicebreak \section{Introduction} A classical problem in enumerative graph theory is the asymptotic number of 0-1 matrices with uniform row and column sums; equivalently, semiregular bipartite graphs. We will consider all bipartite graphs to have bipartition $(V_1,V_2)$ where $\abs{V_1} = m$ and $\abs{V_2}=n$. An \textit{$(m,n,\lambda)$-semiregular} bipartite graph has every degree in $V_1$ equal to~$\lambda n$ and every degree in $V_2$ equal to~$\lambda m$. Of course, for such a graph to exist we must have $0\le\lambda\le 1$ and $\lambda m, \lambda n$ must be integers. We will tacitly assume that these elementary conditions hold throughout the paper for every mentioned semiregular graph. 
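To make the definition concrete, the following sketch (ours, not part of the paper) builds one $(m,n,\lambda)$-semiregular bipartite graph explicitly by placing its $\lambda mn$ edges cyclically, after checking the elementary divisibility conditions just mentioned.

```python
from fractions import Fraction

def semiregular(m, n, lam):
    """Biadjacency matrix of one (m, n, lam)-semiregular bipartite graph:
    every V1-degree equals lam*n and every V2-degree equals lam*m."""
    r = Fraction(lam) * n                     # common degree in V1
    s = Fraction(lam) * m                     # common degree in V2
    assert 0 < lam <= 1 and r.denominator == 1 and s.denominator == 1, \
        "need 0 < lam <= 1 with lam*n and lam*m integers"
    r, s = int(r), int(s)
    A = [[0] * n for _ in range(m)]
    # Place the t-th edge (t = 0, ..., m*r - 1) in row t // r, column t % n.
    # Each row gets r consecutive column residues (distinct since r <= n),
    # and each column occurs exactly m*r / n = s times overall.
    for t in range(m * r):
        A[t // r][t % n] = 1
    return A

A = semiregular(4, 6, Fraction(1, 2))
assert all(sum(row) == 3 for row in A)        # V1 degrees = lam*n = 3
assert all(sum(col) == 2 for col in zip(*A))  # V2 degrees = lam*m = 2
```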
The parameter $\lambda$ will be called the \textit{density}. Define $R_\lambda(m,n)$ to be the number of $(m,n,\lambda)$-semiregular bipartite graphs. The asymptotic determination of $R_\lambda(m,n)$ as $n\to\infty$ with $m=m(n)$ and $\lambda=\lambda(n)$ is not yet complete, but the known values fit a simple formula. \begin{thm}\label{thm:reg} Let $n\to\infty$ with $m\le n$. Then \[ R_\lambda(m,n) \sim \frac{\displaystyle\binom{n}{\lambda n}^{\!m} \binom{m}{\lambda m}^{\!n}} {\displaystyle\binom{mn}{\lambda mn}} \, (1-1/m)^{(m-1)/2} \] in the following cases. \begin{itemize}\itemsep=0pt \item[1.] $\lambda = o\((mn)^{-1/4}\)$. \item[2.] For sufficiently small $\eps>0$, $(1-2\lambda)^2\(1+\dfrac{5m}{6n}+\dfrac{5n}{6m}\) \le(4-\eps)\lambda(1-\lambda)\log n$ and $n=o\(\lambda(1-\lambda) m^{1+\eps}\)$. \item[3.] For some $\eps>0$, $2\le m = O\( (\lambda(1-\lambda)n)^{1/2-\eps}\)$. \item[4.] $\lambda\le c$ for a sufficiently small constant~$c$, $n=O(\lambda^{1/2-\eps}m^{3/2-\eps})$ for some $\eps >0$, and $\lambda m\ge \log^K n$ for every $K$. \end{itemize} \end{thm} \begin{proof} Note that $(1-1/m)^{(m-1)/2}\to e^{-1/2}$ if $m\to\infty$. Case~1 covers the sparse case and was proved by McKay and Wang~\cite{MW}. Case~2 applies when $m$ and $n$ are not very much different and the graph is very dense, while Case~3 covers all densities when $n$ is much larger than~$m$. Case~2 was proved by Greenhill and McKay~\cite{GMX}, and Case~3 by Canfield and McKay~\cite{CM}. Case~4, proved by Liebenau and Wormald~\cite{LW}, covers a wide range of densities and moderately large~$n/m$. The results so far do not cover all $(m,n,\lambda)$. For example, $\lambda=\frac12$ and $m=n^{2/3}$ is missing, and so is $\lambda=\frac 1m$ and $m=n^{1/2}$. However, based on numerical evidence a strong form of the theorem was conjectured in~\cite{CM} to hold for all $(m,n,\lambda)$.
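The formula can also be sanity-checked by brute force on a tiny instance (our sketch, far outside any asymptotic regime, so only rough agreement should be expected): for $m=n=4$ and $\lambda=\frac12$ the exact count of $4\times4$ 0--1 matrices with all row and column sums equal to 2 is 90, while the right-hand side gives roughly 85.

```python
from itertools import combinations, product
from math import comb

# Brute force R_{1/2}(4,4): 0-1 matrices with all row and column sums 2.
# A toy comparison against the displayed formula, not part of the proof.
m = n = 4
r = 2                                      # lambda*n = lambda*m = 2
row_supports = list(combinations(range(n), r))
exact = 0
for rows in product(row_supports, repeat=m):
    colsum = [0] * n
    for row in rows:
        for j in row:
            colsum[j] += 1
    exact += all(c == r for c in colsum)

formula = (comb(n, r) ** m * comb(m, r) ** n / comb(m * n, r * m)
           * (1 - 1 / m) ** ((m - 1) / 2))
# exact == 90; formula is within about 6% of it even at this tiny size
```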
\end{proof} We can consider $R_\lambda(m,n)$ to count the number of partitions of the edges of~$K_{m,n}$ into two spanning semiregular subgraphs, one of density $\lambda$ and one of density $1-\lambda$. This suggests a generalisation: how many ways are there to partition the edges of~$K_{m,n}$ into more than two spanning semiregular subgraphs of specified densities? For positive numbers $\lambda_0,\ldots,\lambda_k$ with sum~1, define $R(m,n; \lambda_0,\ldots,\lambda_k)$ to be the number of ways to partition the edges of $K_{m,n}$ into spanning semiregular subgraphs of density $\lambda_0,\ldots,\lambda_k$. We conjecture that for $k=o(m)$, the asymptotic answer is a simple generalisation of Theorem~\ref{thm:reg}. \begin{conj}\label{conj:main} Let $\lambda_0,\ldots,\lambda_k$ be positive numbers such that $\sum_{i=0}^k\lambda_i=1$. Then, if $n\to\infty$ with $2\le m\le n$, and $1\le k=o(m)$, $R(m,n; \lambda_0,\ldots,\lambda_k)\sim R'(m,n; \lambda_0,\ldots,\lambda_k)$, where (using multinomial coefficients) \[ R'(m,n; \lambda_0,\ldots,\lambda_k) = \frac{\displaystyle\binom{n}{\lambda_0 n,\ldots,\lambda_k n}^{\!m} \binom{m}{\lambda_0 m,\ldots,\lambda_k m}^{\!n}} {\displaystyle\binom{mn}{\lambda_0 mn,\ldots,\lambda_k mn}} \, (1-1/m)^{k(m-1)/2}. \] \end{conj} We will prove the conjecture in six cases. \begin{thm}\label{thm:main} Conjecture~\ref{conj:main} holds in the following cases. \begin{enumerate}\itemsep=0pt \item[(a)] $k=1$ and one of the conditions of Theorem~\ref{thm:reg} holds. \item[(b)] $k\ge 2$, $m\le n$ and $m^{-1}n^3 \(\sum_{i=1}^k\lambda_i\)^3 = o(1)$. \item[(c)] $k\ge 2$, $m \le n$ and $m^{1/2}n\sum_{1\le i<j\le k} \lambda_i\lambda_j=o(1)$. \item[(d)] $m=n$, $k=o(n^{6/7})$ and $\lambda_1=\cdots=\lambda_k=\frac 1n$ (the case of Latin rectangles). \item[(e)] $(m,n,\lambda_1)$ satisfies condition 2 of Theorem~\ref{thm:reg}, and $\lambda_2+\cdots+\lambda_k=O(n^{-1+\eps})$ for sufficiently small $\eps>0$. \item[(f)] $m=O(1)$ and $1\le k\le m-1$.
In this case we do not need the condition $k=o(m)$. \end{enumerate} \end{thm} Part~(a) is just a restatement of Theorem~\ref{thm:reg}. Part~(b) will be proved using~\cite{silver}. Part~(c) will follow from a switching argument applied to the probability of two randomly labelled semiregular graphs being edge-disjoint. Part~(d) is a consequence of~\cite{GM}. Part~(e) will follow from a combination of~\cite{GMX} and Part~(b). Part~(f) will be proved using the Central Limit Theorem. The following companion conjecture concerns partitions of a random semiregular graph rather than of~$K_{m,n}$. \begin{conj}\label{conj:ransplit} Let $\lambda_1,\ldots,\lambda_k,\lambda$ be positive numbers such that $\sum_{i=1}^k\lambda_i=\lambda$. Then, if $n\to\infty$ with $2\le m\le n$, and $1\le k=o(m)$, the average number of ways to partition the edges of a uniform random $(m,n,\lambda)$-semiregular bipartite graph into spanning semiregular subgraphs of density $\lambda_1,\ldots,\lambda_k$ is asymptotically \begin{align*} &\frac{R'(m,n;1-\lambda,\lambda_1,\ldots,\lambda_k)} {R'(m,n;1-\lambda,\lambda)} \\ &{\kern 4em} = \frac{\displaystyle\binom{\lambda n}{\lambda_1 n,\ldots,\lambda_k n}^{\!m} \binom{\lambda m}{\lambda_1 m,\ldots,\lambda_k m}^{\!n}} {\displaystyle\binom{\lambda mn}{\lambda_1 mn,\ldots,\lambda_k mn}} \, (1-1/m)^{(k-1)(m-1)/2}. \end{align*} \end{conj} Note that Conjecture~\ref{conj:ransplit} holds if the formula in Theorem~\ref{thm:reg} holds for $(m,n,\lambda)$ and Conjecture~\ref{conj:main} holds for $(m,n,1-\lambda,\lambda_1,\ldots,\lambda_k)$. Thus, many cases of Conjecture~\ref{conj:ransplit} follow from Theorem~\ref{thm:reg} and Theorem~\ref{thm:main}. \medskip For integer $x\ge 0$, $(N)_x$ denotes the falling factorial $N(N-1)\cdots(N-x+1)$. We will use the fact that if $N\to\infty$ and $x=O(N^{3/4})$, \begin{equation}\label{eq:ff} (N)_x = N^x\exp\Bigl( -\frac{x(x-1)}{2N} -\frac{x(x-1)(2x-1)}{12N^2} + O(x^4/N^3) \Bigr). \end{equation} \section{A first case of a single dense factor} In~\cite{silver}, the second author proved a generalised version of the following lemma.
\begin{lemma}\label{lem:silver} Assume $m\le n$, $\lambda_h>0$, $\lambda_d\ge 0$ and $\lambda_h(\lambda_d^2+\lambda_h^2)m^{-1}n^3=o(1)$. Let $D$ be any $(m,n,\lambda_d)$-semiregular graph. Then the number of $(m,n,\lambda_h)$-semiregular graphs edge-disjoint from~$D$ is \[ \frac{(\lambda_h mn)!}{(\lambda_h m)!^n(\lambda_h n)!^m} \exp\( -\dfrac12 (\lambda_h m-1)(\lambda_h n-1) -\lambda_h\lambda_dmn + O(\lambda_h(\lambda_d+\lambda_h)^2m^{-1}n^3)\). \] \end{lemma} Now we can prove Theorem~\ref{thm:main}(b). \begin{thm}\label{thm:silver} Assume $m\le n$ and that $\lambda_1,\ldots,\lambda_k$ are positive numbers with $\sum_{i=1}^k \lambda_i = o(m^{1/3}/n)$. Define $\lambda =\sum_{i=1}^k \lambda_i$ and $\lambda_0=1-\lambda$. Then \[ R(m,n;\lambda_0,\ldots,\lambda_k) = R'(m,n;\lambda_0,\ldots,\lambda_k) \( 1 + O(\lambda^3 m^{-1}n^3)\). \] \end{thm} \begin{proof} For notational convenience, write the argument of the exponential in Lemma~\ref{lem:silver} as $A(\lambda_d,\lambda_h)+O(\delta(\lambda_d,\lambda_h))$. We can partition $K_{m,n}$ into spanning semiregular subgraphs of density $\lambda_0,\ldots,\lambda_k$ by first choosing an $(m,n,\lambda_1)$-semireg\-ular graph~$D_1$, then choosing an $(m,n,\lambda_2)$-semiregular graph~$D_2$ disjoint from~$D_1$, then an $(m,n,\lambda_3)$-semiregular graph~$D_3$ disjoint from~$D_1\cup D_2$, and so on up to~$D_k$. This gives \begin{align*} R(m,n;&\lambda_0,\ldots,\lambda_k) = \exp\( \hat A + O(\hat\delta)\) \prod_{i=1}^k \frac{ (\lambda_i mn)!}{(\lambda_i m)!^n\,(\lambda_i n)!^m}, \qquad\text{where} \\[-1ex] \hat A&= A(0,\lambda_1)+A(\lambda_1,\lambda_2)+A(\lambda_1+\lambda_2,\lambda_3) + \cdots + A\Bigl({\textstyle\sum_{i=1}^{k-1}}\lambda_i,\lambda_k\Bigr), \\ \hat \delta&= \delta(0,\lambda_1)+\delta(\lambda_1,\lambda_2)+ \delta(\lambda_1+\lambda_2,\lambda_3) + \cdots + \delta\Bigl({\textstyle\sum_{i=1}^{k-1}}\lambda_i,\lambda_k\Bigr) . 
\end{align*} By routine induction, we find that \[ \hat A = -\dfrac12 k + \dfrac12 \lambda(m+n) - \dfrac12 \lambda^2mn \] and $\hat\delta = O(\lambda^3 m^{-1}n^3)$. Under the conditions of this theorem, a consequence of~\eqref{eq:ff} is \[ \binom{m}{\lambda_0 m,\ldots,\lambda_k m} = \frac{m^{\lambda_0 m}} {\prod_{i=1}^k (\lambda_i m)!} \exp\(\dfrac12\lambda -\dfrac12\lambda^2 m + O(\lambda^3 m^{-1}n^2)\), \] and similarly for the other multinomials in the definition of $R'(m,n;\lambda_0,\ldots,\lambda_k)$. Also, since $\lambda\ge k/m$, $(1-1/m)^{k(m-1)/2}=\exp\(-\frac12 k+O(\lambda^3 m^{-1}n^3)\)$. Putting these parts together completes the proof. \end{proof} \section{A second case of one dense factor}\label{s:stepone} The enumeration of sparse semiregular bipartite graphs was extended to higher degrees by McKay and Wang~\cite{MW} and generalised by Greenhill, McKay and Wang~\cite{GMW}. However, unlike Lemma~\ref{lem:silver}, there was no allowance for a set of forbidden edges. In order to use the more accurate enumeration for our purposes, we consider the problem of forbidden edges in a more general form that may be of independent interest. Instead of considering a random semiregular graph, we consider an arbitrary semiregular graph that is labelled at random. When we write of a semiregular bipartite graph on $(V_1,V_2)$ being randomly (re)labelled, we mean that $V_1$ and $V_2$ are independently permuted ($m!\,n!$ possibilities altogether of equal probability). In this section we consider two semiregular bipartite graphs on $(V_1,V_2)$: an $(m,n,\lambda_d)$-semiregular graph~$D$ and an $(m,n,\lambda_h)$-semiregular graph~$H$. If $H$ is randomly labelled, what is the probability that it is edge-disjoint from~$D$? Define $M = \lceil \lambda_d\lambda_h mn\log n\rceil$. We will work under the following assumptions. \begin{equation}\label{eq:assume} n\to\infty, \quad m\le n, \quad \lambda_d,\lambda_h>0, \quad \lambda_d^2\lambda_h^2 mn^2=o(1). 
\end{equation} \begin{lemma}\label{lem:assume} Under assumptions~\eqref{eq:assume}, we have $m=\omega(n^{2/3})\to\infty$ and $m^{-1}\le \lambda_d,\lambda_h = o(m^{1/2}n^{-1})$. In addition, $\lambda_d \lambda_h=o(m^{-1/2}n^{-1})$ and $\log n\le M=o(m^{1/2}\log n)$. \end{lemma} \begin{proof} Since $D,H$ are not empty, $\lambda_d,\lambda_h\ge\frac1m$, corresponding to degree~1 in~$V_2$. The rest is elementary. \end{proof} \begin{lemma}\label{lem:basic} Let $D$ be an $(m, n, \lambda_d)$-semiregular bipartite graph and $H$ be an $(m, n, \lambda_h)$-semiregular bipartite graph satisfying Assumptions~\eqref{eq:assume}. Then, with probability $1- O(\lambda_d^2 \lambda_h^2 mn^2)$, a random labelling of $H$ does not have any path of length~2 in common with~$D$ and the number of edges in common with $D$ is less than~$M$. \end{lemma} \begin{proof} Graphs $D$ and $H$ have, respectively, $O(mn^2\lambda_d^2)$ and $O(mn^2\lambda_h^2)$ paths of length~2 with the central vertex in~$V_1$. The probability that such a path in $H$ matches a path in $D$ when randomly labelled is $O(m^{-1}n^{-2})$. So the expected number of such coincidences is $O(\lambda_d^2 \lambda_h^2 mn^2)$. Similarly, the expected number of coincidences between paths of length~2 with central vertex in $V_2$ is $O(\lambda_d^2 \lambda_h^2 m^2n)$, which is no larger on account of the assumption $m\le n$. Thus, with probability $1-O(\lambda_d^2 \lambda_h^2 mn^2)$, a random labelling of $H$ has no paths of length~2 in common with~$D$. Now consider a set $S$ of $M$ edges of $H$. In light of the preceding, we can assume they are independent edges. There are at most $\binom{\lambda_h mn}{M}$ choices of $S$, and at most $\binom{\lambda_d mn}{M}$ choices of a set of $M$ edges of $D$ that $S$ might map onto. The probability that $S$ maps onto a particular set of $M$ independent edges of~$D$ is \[ \frac{ M!\, (m-M)!\,(n-M)!}{m!\,n!} \le \frac{M!}{(m-M)^M (n-M)^M}.
\] Since $M=o(m)$ and $M=o(n)$, $mn\le 2(m-M)(n-M)$ for sufficiently large~$n$. Using $M!\ge (M/e)^M$, we find that the expected number of sets of $M$ independent edges of $H$ that map onto edges of $D$ is at most \[ \biggl( \frac {\lambda_h\lambda_d\, m^2n^2 e}{M(m-M)(n-M)}\biggr)^{\!M} \le \biggl( \frac{2e}{\log n} \biggr)^{\!\log n} = O(n^{-t}) \quad\text{for any $t>0$}. \] This probability is smaller than the probability of common paths of length~2, which completes the proof. \end{proof} Let $\calL(t)$ be the set of all labelings of the vertices of $H$ with no common paths of length 2 with $D$ and exactly $t$ edges in common with $D$. Define $L(t)=|\calL(t)|$, so in particular the number of labelings of the vertices of $H$ with no common edges with $D$ is $L(0)$. Let \[ T = \sum_{t =0}^{M-1} L(t). \] In the next step we will estimate the value of $T/L(0)$ by the switching method. \bigskip A \textit{forward switching} is a permutation $(a \, e)(b \, f)$ of the vertices of $H$ such that \begin{itemize}\itemsep=0pt \item $a,e\in V_1$ are distinct, and $b,f\in V_2$ are distinct. \item $ab$ is a common edge of $D$ and $H$, \item $ef$ is a non-edge of $D$, and \item after the permutation, the edges common to $D$ and $H$ are the same except that $ab$ is no longer a common edge. \end{itemize} \noindent A \textit{reverse switching} is a permutation $(a \, e)(b \, f)$ of the vertices of $H$ such that \begin{itemize}\itemsep=0pt \item $a,e\in V_1$ are distinct, and $b,f\in V_2$ are distinct. \item $ab$ is an edge of $D$ that is not an edge of $H$, \item $ef$ is an edge of $H$ that is not an edge of $D$, and \item after the permutation, the edges common to $D$ and $H$ are the same except that $ab$ is a common edge of $D$ and $H$. 
\end{itemize} \begin{lemma}\label{lem:Lrat} Assume Conditions~\eqref{eq:assume}. Then, uniformly for $1\le t\le M$, \[ \frac{L(t)}{L(t-1)} = \frac{(\lambda_d\, mn-t+1)(\lambda_h\, mn-t+1)}{tmn} \Bigl(1 + O\Bigl(\frac tm + \frac{1}{m^{1/2}}\Bigr)\Bigr). \] \end{lemma} \begin{proof} By using a forward switching, we will convert a labelling $R \in \calL(t)$ to a labelling $R'\in \calL(t-1)$. Without loss of generality, we suppose that $R$ is the identity, since our bounds will be independent of the structure of $H$ other than its density. There are $t$ choices for edge $ab$. We will bound the choices of $e,f$ for a fixed choice of $ab$. Graph $D$ has $(1-\lambda_d)mn$ non-edges $ef$, including at most $m+n=O(n)$ for which $e=a$ or $f=b$. In addition, we must ensure that no common edges are created, and none other than $ab$ are destroyed. Since $D$ and $H$ have no paths of length 2 in common, there are no other common edges of $H$ and $D$ incident to $a$ or $b$. Therefore the only way to destroy a common edge other than $ab$ is if $e$ or $f$ are incident to a common edge. This eliminates at most $t(n+m)=O(tn)$ pairs $e,f$. Creation of a new common edge can only occur if there is a path from~$a$ to~$e$, or from~$b$ to~$f$, consisting of one edge from $H$ and one from~$D$. This eliminates at most $4\lambda_d\lambda_h mn$ choices of $ef$. Using Lemma~\ref{lem:assume} we find that the number of forward switchings is \begin{align*} W_F &= t\( (1-\lambda_d)mn - O(n+tn+\lambda_d\lambda_h mn)\) \\ &= t \,mn \Bigl(1 + O\Bigl(\frac tm + \frac{m^{1/2}}n\Bigr)\Bigr). \end{align*} A reverse switching converts a labelling $R' \in \calL(t-1)$ to a labelling $R$ in $\calL(t)$. Again, without loss of generality, we suppose that $R'$ is the identity. There are $\lambda_d\, mn-t+1$ choices for an edge $ab$ in~$D$ and $\lambda_h mn-t+1$ choices for an edge~$ef$ in~$H$, in each case avoiding the $t-1$ common edges.
From these we must subtract any choices that create or destroy a common edge except the new common edge $ab$ we intend to create. There are at most $\lambda_d\lambda_h mn^2$ choices of $a,b,e,f$ such that $a=e$ and at most that number (since $m\le n$) such that $b=f$. To destroy a common edge, at least one of $a,b,e,f$ must be the endpoint of a common edge. The number of such choices is bounded by $4(t-1)\lambda_d\lambda_hmn^2$. To create a new common edge other than $ab$, there must be a path from~$a$ to~$e$, or from~$b$ to~$f$, consisting of one edge from $H$ and one from~$D$. This eliminates at most $\lambda_d^2\lambda_h^2m^2n^3$ cases. Using Lemma~\ref{lem:assume} we find that the number of reverse switchings is \begin{align*} W_R &= (\lambda_d mn-t+1)(\lambda_h mn-t+1) + O\( t\lambda_d\lambda_h mn^2 + \lambda_d^2\lambda_h^2 m^2n^3\) \\ &= (\lambda_d mn-t+1)(\lambda_h mn-t+1) \Bigl( 1 + O\Bigl(\frac tm + \frac{1}{m^{1/2}} \Bigr)\Bigr). \end{align*} The lemma now follows from the ratio $W_R/W_F$, using $m\le n$. \end{proof} We will need the following summation lemma from \cite[Cor.~4.5]{GMW}. \begin{lemma}\label{sumlemma} Let $Z \ge 2$ be an integer and, for $1\le i \le Z$, let real numbers $A(i)$, $B(i)$ be given such that $A(i) \ge 0$ and $1 - (i - 1)B(i) \ge 0$. Define $A_1 = \min _{i=1}^{Z} A(i)$, $A_2 =\max_{i=1}^{Z} A(i)$, $C_1 = \min_{i=1}^{Z} A(i)B(i)$ and $C_2 = \max_{i=1}^{Z} A(i)B(i)$. Suppose that there exists $\hat{c}$ with $0 < \hat{c} < \frac13$ such that $\max\{A/Z, |C|\} \le \hat{c}$ for all $A \in [A_1,A_2]$, $C \in [C_1,C_2]$. Define $n_0, \ldots , n_Z$ by $n_0 = 1$ and \[ n_i/ n_{i-1}=\dfrac1i A(i) (1 - (i -1)B(i)) \] for $1 \le i \le Z$, with the following interpretation: if $A(i) = 0$ or $1 - (i - 1)B(i) = 0$, then $n_j = 0$ for $ i \le j \le Z$.
Then \[ \Sigma_{1} \le \sum_{i=0}^{Z}n_i \le \Sigma_2, \] where \[ \Sigma_1 = \exp\(A_1 - \dfrac12 A_1C_2\)-(2e\hat{c})^Z \] and \[ \Sigma_2=\exp\(A_2-\dfrac12A_2C_1+\dfrac12 A_2C_1^2\)+ (2e\hat{c})^Z. \hbox to0pt{\qquad\qquad\qed\hss} \] \end{lemma} \begin{lemma}\label{lem:tsum} Under Assumptions~\eqref{eq:assume} we have \[ \frac{T}{L(0)} = \exp\( \lambda_d\lambda_h mn + O(\lambda_d\lambda_h m^{1/2}n) \). \] \end{lemma} \begin{proof} Lemma~\ref{lem:assume} and the definition of $M$ allow us to write Lemma~\ref{lem:Lrat} as \[ \frac{L(t)}{L(t-1)} = \dfrac 1t A(t) \(1-(t-1)B(t)\), \] where $A(t)=\lambda_d\lambda_h mn(1+O(m^{-1/2}))$ and $B(t)=O(m^{-1})$. In each case, the $O(\,)$ expression is a function of $t$ but uniform over $1\le t\le M$. Clearly $A(t)>0$, and using Lemma~\ref{lem:assume}, we can check that $1-(t-1)B(t)>0$ for $1\le t\le M$. Define $A_1$, $A_2$, $C_1$ and $C_2$ as in Lemma~\ref{sumlemma}. This gives \begin{align*} A_1,A_2&= \lambda_d\lambda_h mn + O(\lambda_d\lambda_hm^{1/2}n), \\ C_1,C_2&= O(\lambda_d\lambda_h n) = o(m^{-1/2}). \end{align*} The condition $\max\{A/M, |C|\} \le \hat{c}$ of Lemma~\ref{sumlemma} is satisfied with $\hat{c}$ any sufficiently small constant. Using Lemma~\ref{lem:assume}, we also have $A_1C_2, A_2C_1= O(\lambda_d^2 \lambda_h^2 mn^2)$, $A_2C_1^2=O(\lambda_d^2 \lambda_h^2 m^{1/2}n^2)$ and $(2e\hat{c})^M=O(n^{-1})$ if $\hat{c}$ is small enough. Since $\lambda_d^2 \lambda_h^2 mn^2=o(1)$ by assumption, $\lambda_d^2 \lambda_h^2 mn^2=o(\lambda_d \lambda_h m^{1/2}n)$. The lemma now follows from Lemma~\ref{sumlemma}. \end{proof} \begin{lemma}\label{lem:prob} Let $D$ be an $(m,n, \lambda_d)$-semiregular bipartite graph and $H$ be an $(m,n, \lambda_h)$-semiregular bipartite graph such that Assumptions~\eqref{eq:assume} hold. Then the probability that a random labelling of $H$ is edge-disjoint from $D$ is \[ \exp\( -\lambda_d\lambda_h mn + O(\lambda_d\lambda_h m^{1/2}n) \).
\] \end{lemma} \begin{proof} The probability that there are no common paths of length two and less than $M$ edges in common is $1-O(\lambda_d^2\lambda_h^2 mn^2)$ by Lemma~\ref{lem:basic}. Subject to those conditions, the probability of no common edge is \[ \exp\( -\lambda_d\lambda_h mn + O(\lambda_d\lambda_h m^{1/2}n) \) \] by Lemma~\ref{lem:tsum}. Multiplying these probabilities together gives the lemma, since $\lambda_d\lambda_h m^{1/2}n\to 0$ by~\eqref{eq:assume}. \end{proof} \begin{thm}\label{thm:disj} For $i=1, \ldots, k$, let $D_i$ be an $(m,n,\lambda_i)$-semiregular bipartite graph. Assume $m \le n$ and $m^{1/2}n\sum_{1\le i<j\le k} \lambda_i\lambda_j=o(1)$. Then the probability that $D_1,D_2,\ldots,D_k$ are edge-disjoint after labelling at random is \[ \exp\biggl(-mn \sum_{1 \le i < j \le k} \lambda_i \lambda_j +O\Bigl(m^{1/2}n \sum_{1 \le i < j \le k} \lambda_i \lambda_j\Bigr)\biggr). \] \end{thm} \begin{proof} This is an immediate consequence of Lemma~\ref{lem:prob} applied iteratively. \end{proof} Now we are ready to prove Theorem~\ref{thm:main}(c). First we state the enumeration theorem from~\cite{MW}, extended with the help of Lemma~\ref{lem:prob}. Note that, despite this theorem and Lemma~\ref{lem:silver} having considerable overlap, neither of them implies the other. \begin{thm}\label{thm:MW} Assume $m\le n$, $\lambda_h>0$, $\lambda_d\ge 0$ and $\lambda_h\lambda_d\, m^{1/2}n=o(1)$. Let $D$ be any $(m,n,\lambda_d)$-semiregular graph. Then the number of $(m,n,\lambda_h)$-semiregular graphs edge-disjoint from~$D$ is \begin{align*} \frac{(\lambda_h mn)!}{(\lambda_h m)!^n(\lambda_h n)!^m} &\exp\( -\dfrac12 (\lambda_h m-1)(\lambda_h n-1) -\dfrac16\lambda_h^3mn \\[-1ex] &{\qquad\quad} -\lambda_h\lambda_d\,mn + O(\lambda_h\lambda_d\, m^{1/2}n)\). \end{align*} \end{thm} \begin{thm}\label{thm:coloured} Let $\lambda_0,\lambda_1,\ldots,\lambda_k>0$ be such that $k\ge 2$ and $\sum_{i=0}^k \lambda_i=1$.
Assume that $m \le n$ and $m^{1/2}n\sum_{1\le i<j\le k} \lambda_i\lambda_j=o(1)$. Then \[ R(m,n;\lambda_0,\ldots,\lambda_k) = R'(m,n;\lambda_0,\ldots,\lambda_k) \Bigl(1 + O\Bigl(m^{1/2}n\sum_{1\le i<j\le k} \lambda_i\lambda_j \Bigr)\Bigr), \] where $R'(m,n;\lambda_0,\ldots,\lambda_k)$ is defined in Conjecture~\ref{conj:main}. \end{thm} \begin{proof} The proof follows the same line as Theorem~\ref{thm:silver}. Define $\lambda=\sum_{i=1}^k\lambda_i$ and write the argument of the exponential in Theorem~\ref{thm:MW} as $A'(\lambda_d,\lambda_h)+O(\delta'(\lambda_d,\lambda_h))$. Then, as before, \begin{align*} R(m,n;&\lambda_0,\ldots,\lambda_k) = \exp\( \check A + O(\check\delta)\) \prod_{i=1}^k \frac{ (\lambda_i mn)!}{(\lambda_i m)!^n\,(\lambda_i n)!^m}, \qquad\text{where} \\[-1ex] \check A&= A'(0,\lambda_1)+A'(\lambda_1,\lambda_2)+A'(\lambda_1+\lambda_2,\lambda_3) + \cdots + A'\Bigl({\textstyle\sum_{i=1}^{k-1}}\lambda_i,\lambda_k\Bigr), \\ \check \delta&= \delta'(0,\lambda_1)+\delta'(\lambda_1,\lambda_2)+ \delta'(\lambda_1+\lambda_2,\lambda_3) + \cdots + \delta'\Bigl({\textstyle\sum_{i=1}^{k-1}}\lambda_i,\lambda_k\Bigr) . \end{align*} By routine induction, we find that \[ \check A = -\dfrac12 k + \dfrac12 \lambda(m+n) -\dfrac16 mn\sum_{i=1}^k\lambda_i^3- \dfrac12 \lambda^2mn \] and $\check\delta = O\(m^{1/2}n\sum_{1\le i<j\le k} \lambda_i\lambda_j\)$. Under the conditions of this theorem, a consequence of~\eqref{eq:ff} is \[ \binom{m}{\lambda_0 m,\ldots,\lambda_k m} = \frac{m^{\lambda_0 m}} {\prod_{i=1}^k (\lambda_i m)!} \exp\(\dfrac12\lambda -\dfrac12\lambda^2 m-\dfrac16\lambda^3 m + O(\check\delta/n)\), \] and similarly for the other multinomials in the definition of $R'(m,n;\lambda_0,\ldots,\lambda_k)$. Also, $\lambda_i\ge\frac1m$ for each $i$ so the assumptions imply that $\lambda\ge \frac km$ and $k^2m^{-3/2}n=o(1)$. This is enough to show that $(1-1/m)^{k(m-1)/2}=\exp\(-\frac12 k+O(\check\delta)\)$. 
Putting these parts together completes the proof except for one issue. From $\check A$ we have $-\frac16 mn\sum_{i=1}^k\lambda_i^3$, whereas from the multinomials in $R'(m,n;\lambda_0,\ldots,\lambda_k)$ we have $-\frac16 mn\lambda^3$. The difference between these two terms is $O\(mn\sum_\ell\sum_{i< j}\lambda_i\lambda_j\lambda_\ell\)\allowbreak =O\(mn\lambda \sum_{i< j}\lambda_i\lambda_j\)$. Note that the assumptions $m^{1/2}n\sum_{i<j}\lambda_i\lambda_j=o(1)$ and $\lambda_j\ge\frac1m$ imply $m^{-1/2}n\lambda=o(1)$. Thus, \[ O\Bigl(mn\sum_{\ell=1}^k\sum_{1\le i< j\le k}\lambda_i\lambda_j\lambda_\ell\Bigr) = O\Bigl(m^{3/2}\sum_{1\le i< j\le k}\lambda_i\lambda_j\Bigr) = O(\check\delta), \] since $m\le n$. This completes the proof. \end{proof} \begin{figure}[ht!] \[ \includegraphics[scale=0.4]{LR} \] \vspace*{-6ex} \caption{Latin rectangles. $F(n,k)/R'(n,n;k\times\frac1n)$ for $n=10$ (diamonds) and $n=11$ (circles). The horizontal scale is $x=k/(n-1)$ and the line is $1+x/12$.\label{fig}} \end{figure} \section{The case of Latin rectangles} The case $m=n$, $\lambda_1=\cdots=\lambda_k=\frac{1}{n}$ corresponds to choosing an ordered sequence of $k$ perfect matchings in $K_{n,n}$, which is a $k\times n$ Latin rectangle. Let $F(n,k)$ be the number of $k\times n$ Latin rectangles. Note that $F(n,n-1)=F(n,n)$; we will use only $F(n,n-1)$. In~\cite{GM}, Godsil and McKay found the asymptotic value of $F(n,k)$ for $k=o(n^{6/7})$ and conjectured that the same formula holds for any $k=O(n^{1-\delta})$ for $\delta>0$, namely that \[ F(n,k) \sim (n!)^k \Bigl( \frac{n!}{(n-k)!\,n^k}\Bigr)^{\!n} \Bigl(1 - \frac{k}{n}\Bigr)^{\!-n/2} e^{-k/2}. \] The reader can check that this expression is equal to $R'(n,n;k\times\frac1n)$ asymptotically when $k=o(n)$. Here we have used $k\times\frac1n$ to represent a sequence of $k$ terms with each term equal to~$\frac1n$.
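The $k=2$ row of this formula can be compared with the classical exact count $F(n,2)=n!\,D_n$, where $D_n$ is the number of derangements of $n$ symbols (first row an arbitrary permutation, second row a derangement of it). The following sketch (ours) shows the ratio exact/asymptotic drifting towards 1 as $n$ grows:

```python
from math import exp, factorial

# Compare F(n,2) = n! * D_n (exact) with the displayed asymptotic at k = 2.
def derangements(n):
    d = [1, 0]                              # D_0 = 1, D_1 = 0
    for i in range(2, n + 1):
        d.append((i - 1) * (d[-1] + d[-2]))  # D_i = (i-1)(D_{i-1} + D_{i-2})
    return d[n]

def F_exact(n):
    return factorial(n) * derangements(n)

def F_asym(n, k=2):                         # the displayed formula
    return (factorial(n) ** k
            * (factorial(n) / (factorial(n - k) * n ** k)) ** n
            * (1 - k / n) ** (-n / 2) * exp(-k / 2))

ratios = [F_exact(n) / F_asym(n) for n in (6, 10, 14)]
# the ratios increase monotonically towards 1 as n grows
```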
To illustrate what happens when $k$ is even larger, Figure~\ref{fig} shows the ratio $F(n,k)/\allowbreak R'(n,n;k\times\frac1n)$ for $n=10,11$, as a function of $k/(n-1)$~\cite{LS11}. Experiment suggests that, with $k=x(n-1)$, the ratio $F(n,k)/R'(n,n;k\times\frac1n)$ converges to a continuous function $f(x)$ as $n\to\infty$ with $x$ fixed. Conjecture~\ref{conj:main} in this case corresponds to $f(0)=1$. \section{Two factors of high degree} In this section, we consider the case where $\lambda_0$ and $\lambda_1$ are approximately constant. We will use a constant $\eps>0$ that must be sufficiently small. Suppose the following conditions hold. \begin{equation}\label{eq:dense} \begin{aligned} (1-2\lambda_1)^2\Bigl(1+\frac{5m}{6n}+\frac{5n}{6m}\Bigr) & \le(4-\eps)\lambda_1(1-\lambda_1)\log n \\ m\le n &= o\(\lambda_1(1-\lambda_1) m^{1+\eps}\). \end{aligned} \end{equation} Let $D$ be an $(m,n,\hat\lambda)$-semiregular graph where $0<\hat\lambda\le n^{-1+\eps}$. Greenhill and McKay~\cite{GMX} proved that the number of $(m,n,\lambda_1)$-semiregular graphs is given by Theorem~\ref{thm:reg}, and, moreover, the probability that a uniformly random $(m,n,\lambda_1)$-semiregular graph is edge-disjoint from~$D$ is asymptotically \begin{equation}\label{eq:l01} P(m,n,\lambda_1,\hat \lambda) = (1-\lambda_1)^{\hat\lambda mn} \exp\Bigl( -\frac{\lambda_1\hat\lambda(\hat\lambda mn-m-n)} {2(1-\lambda_1)} \Bigr). \end{equation} \begin{thm}\label{thm:ranx} Suppose conditions~\eqref{eq:dense} hold and suppose $\hat\lambda=\lambda_2+\cdots+\lambda_k=O(n^{-1+\eps})$. Then, as $n\to\infty$, \[ R(m,n; \lambda_0,\ldots,\lambda_k) \sim R'(m,n; \lambda_0,\ldots,\lambda_k). \] \end{thm} \begin{proof} By Theorem~\ref{thm:silver} or Theorem~\ref{thm:coloured}, $R(m,n; \lambda_0+\lambda_1,\lambda_2,\ldots,\lambda_k)\sim R'(m,n; \lambda_0+\lambda_1,\lambda_2,\ldots,\lambda_k)$.
Therefore, \[ R(m,n; \lambda_0,\ldots,\lambda_k)\sim P(m,n,\lambda_1,\hat\lambda) R'(m,n;1-\lambda_1,\lambda_1) R'(m,n;\lambda_0+\lambda_1,\lambda_2,\ldots,\lambda_k). \] Then we have \[ \frac{R(m,n; \lambda_0,\ldots,\lambda_k)}{R'(m,n; \lambda_0,\ldots,\lambda_k)} \sim P(m,n,\lambda_1,\hat\lambda) \frac{ \Bigl( \frac{m!\, ((1-\lambda_1-\hat\lambda)m)!} {((1-\lambda_1)m)!\, ((1-\hat\lambda)m)!} \Bigr)^{\!n} \Bigl( \frac{n!\, ((1-\lambda_1-\hat\lambda)n)!} {((1-\lambda_1)n)!\, ((1-\hat\lambda)n)!} \Bigr)^{\!m} } { \Bigl( \frac{(mn)!\, ((1-\lambda_1-\hat\lambda)mn)!} {((1-\lambda_1)mn)!\, ((1-\hat\lambda)mn)!} \Bigr) }. \] Define $g(N)$ by $N! =\sqrt{2\pi}\, N^{N+1/2} e^{-N+g(N)}$ and \[ \bar g(N)= g(N)+g((1{-}\hat\lambda{-}\lambda_1)N)-g((1{-}\hat\lambda)N)-g((1{-}\lambda_1)N). \] Then \begin{align*} &\frac{R(m,n; \lambda_0,\ldots,\lambda_k)}{R'(m,n; \lambda_0,\ldots,\lambda_k)} \\ &{\qquad}\sim \( 1 - \hat\lambda\)^{-(1-\hat\lambda)mn-(m+n-1)/2} \Bigl(1 - \frac{\hat\lambda}{1-\lambda_1}\Bigr)^{\!(1-\hat\lambda-\lambda_1)mn+(m+n-1)/2} \\ &{\qquad\qquad}\times \exp\Bigl( -\frac{\lambda_1\hat\lambda(\hat\lambda mn-m-n)} {2(1-\lambda_1)} +n\bar g(m)+m\bar g(n)-\bar g(mn) \Bigr). \end{align*} Now we can apply the estimates $1-x=e^{-x-x^2/2+O(x^3)}$ and $g(N)=\frac1{12N} + O(N^{-3})$ to show that the above quantity is $e^{o(1)}$. This completes the proof. \end{proof} \section{The highly oblong case} Suppose $2\le m=O(1)$ and $1\le k\le m-1$. In that case we can prove Conjecture~\ref{conj:main} by application of the Central Limit Theorem, without requiring the condition $k=o(m)$. We will consider the partition of $K_{m,n}$ into $k+1$ semiregular graphs as an edge-colouring with colours $0,1,\ldots,k$, where colour $i$ gives a semiregular subgraph of density $\lambda_i$ for $0\le i\le k$. Label $V_1$ as $u_1,\ldots,u_m$ and consider one vertex $v\in V_2$.
For $0\le i\le k$, $v$ must be adjacent to $\lambda_i m$ vertices of~$V_1$ by an edge of colour~$i$; let us make the choice uniformly at random from the \[ \binom{m}{\lambda_0 m,\ldots,\lambda_k m} \] possibilities. Define the random variable $X_{u,c}$ to be the indicator of the event ``$v$~is joined to vertex~$u$ by colour~$c$''. Let $\boldsymbol{X}$ be the $(m-1)k$-dimensional random vector $(X_{u,c})_{1\le u\le m-1, 1\le c\le k}$. Note that we are omitting $u=m$ and $c=0$ since those indicators can be determined from the others (this avoids degeneracy in the following). Now we can calculate these covariances: \[ \mathrm{Cov}(X_{u,c}, X_{u',c'}) = \begin{cases} \lambda_c(1-\lambda_c), & \text{if $u=u', c=c'$}; \\ -\lambda_c\lambda_{c'}, & \text{if $u=u', c\ne c'$}; \\ -\frac{\lambda_c(1-\lambda_c)}{m-1}, & \text{if $u\ne u', c=c'$}; \\ \frac{\lambda_c\lambda_{c'}}{m-1}, & \text{if $u\ne u', c\ne c'$}. \end{cases} \] Let $\varSigma$ be the covariance matrix of $\boldsymbol{X}$, labelled in the order $(1,1),(1,2),\ldots,(m-1,k)$. Let $\boldsymbol{X}^{(n)}$ denote the sum of $n$ independent copies of $\boldsymbol{X}$, corresponding to a copy of $\boldsymbol{X}$ for each vertex in~$V_2$. For each vertex in $V_2$ to have the correct number of incident edges of each colour, $\boldsymbol{X}^{(n)}$ must equal its mean. By~\cite[Thm.~1]{CLT}, $\boldsymbol{X}^{(n)}$ satisfies a local Central Limit Theorem as $n\to\infty$. In particular \[ \mathrm{Prob}( \boldsymbol{X}^{(n)}=n\,\mathbb{E}\boldsymbol{X} ) \sim \frac{1}{(2\pi)^{k(m-1)/2} n^{k(m-1)/2} \abs{\varSigma}^{1/2}}. \] To find the determinant $\abs\varSigma$, it helps to notice that $\varSigma$ is a tensor product $B\otimes C$. Here $B$ is a $k\times k$ matrix with $B_{ii}=\lambda_i(1-\lambda_i)$ for all~$i$ and $B_{ij}=-\lambda_i\lambda_j$ for $i\ne j$; while $C$ is an $(m-1)\times(m-1)$ matrix with 1 on the diagonal and $-\frac1{m-1}$ off the diagonal.
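Both the covariance values and the tensor-product structure can be verified exactly on a toy instance (a sketch for illustration only; the choice $m=4$ with colour counts $(2,1,1)$, so $k=2$, is arbitrary):

```python
from itertools import permutations
from fractions import Fraction
import numpy as np

# Toy instance (arbitrary): m = 4 vertices in V_1, colour counts (2, 1, 1),
# i.e. lambda = (1/2, 1/4, 1/4); colours are 0, 1, 2 and k = 2.
m, counts = 4, (2, 1, 1)
lam = [Fraction(c, m) for c in counts]
k = len(counts) - 1
samples = set(permutations([c for c, cnt in enumerate(counts) for _ in range(cnt)]))

def cov(u, c, up, cp):
    # exact covariance of the indicators X_{u,c}, X_{u',c'} under the uniform
    # choice among the multinomial(m; lambda_0 m, ..., lambda_k m) colourings
    e1 = Fraction(sum(s[u] == c for s in samples), len(samples))
    e2 = Fraction(sum(s[up] == cp for s in samples), len(samples))
    e12 = Fraction(sum(s[u] == c and s[up] == cp for s in samples), len(samples))
    return e12 - e1 * e2

# Sigma in the order (u,c) = (1,1),(1,2),...,(m-1,k): colour index fastest
Sigma = np.array([[float(cov(u, c, up, cp))
                   for up in range(m - 1) for cp in range(1, k + 1)]
                  for u in range(m - 1) for c in range(1, k + 1)])
lv = np.array([float(x) for x in lam[1:]])
B = np.diag(lv) - np.outer(lv, lv)
C = np.full((m - 1, m - 1), -1.0 / (m - 1))
np.fill_diagonal(C, 1.0)
print(np.allclose(Sigma, np.kron(C, B)))   # → True
```

With the $(u,c)$ ordering above, $\varSigma$ equals the Kronecker product of $C$ (vertex block) with $B$ (colour block), so its determinant factorises as $\abs{B}^{m-1}\abs{C}^{k}$.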
Both $B$ and $C$ are rank-1 modifications of diagonal matrices, and the matrix determinant lemma gives $\abs{B}=\prod_{i=0}^k\lambda_i$ and $\abs{C}=m^{m-2}/(m-1)^{m-1}$. Therefore, \[ \abs{\varSigma} = \abs{B}^{m-1}\abs{C}^k = \biggl( \frac{m^{m-2}}{(m-1)^{m-1}}\biggr)^{\!k} \biggl( \,\prod_{i=0}^k \lambda_i\biggr)^{\!m-1}. \] To complete the proof of Theorem~\ref{thm:main}(e), we only need to apply Stirling's formula to give \[ \frac{\displaystyle\binom{n}{\lambda_0 n,\ldots,\lambda_k n}^{\!m}} {\displaystyle\binom{mn}{\lambda_0 mn,\ldots,\lambda_k mn}} \sim (2\pi n)^{-k(m-1)/2} m^{k/2} \biggl(\,\prod_{i=0}^k \lambda_i\biggr)^{\!-(m-1)/2}. \] \section{Concluding remarks}\label{s:conclusion} We have proposed an asymptotic formula for the number of ways to partition a complete bipartite graph into spanning semiregular subgraphs and proved it in several cases. The analytic method described in~\cite{mother} will be sufficient to test the conjecture when there are several factors of high density. This will be the topic of a future paper. \nicebreak
TITLE: Limit of Matrices QUESTION [1 upvotes]: Let $A$ be a symmetric $n\times n$ matrix. Show that $\lim_{k \rightarrow \infty} (x^tA^{2k}x)^{1/k}$ exists for all $x \in \mathbb{R}^n$ and that the possible limit values are the squares of the eigenvalues of $A$. Since $A$ is symmetric you can find an orthonormal basis of eigenvectors, so $A$ is similar to a diagonal matrix $B$. You then get $\lim_{k \rightarrow \infty} (\sum_{i=1}^n x_i^2 \lambda_i^{2k})^{1/k}$. I can show that this last expression is bounded above and below, but it isn't monotone in $k$, so how should I show that it is convergent, and how do I find what it could possibly converge to? REPLY [1 votes]: Hint: Of the $\lambda_i$ whose corresponding $x_i$ are nonzero, suppose $\lambda_l$ is the largest eigenvalue in absolute value. Then $\sum_{i=1}^n x_i^2 \lambda_i^{2k}$ is dominated by the $i=l$ term, so the limit is $\lambda_l^2$.
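A quick numerical illustration of the hint (not part of the original exchange; the matrix and vector are arbitrary):

```python
import numpy as np

# For a random symmetric A, (x^T A^{2k} x)^(1/k) approaches lambda_l^2, where
# lambda_l is the largest |eigenvalue| among those whose eigen-component of x
# is nonzero (here, almost surely, all of them).
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2
x = rng.standard_normal(5)

evals = np.linalg.eigvalsh(A)
lam2 = max(abs(evals))**2

vals = {}
for k in (5, 20, 80):
    vals[k] = (x @ np.linalg.matrix_power(A, 2*k) @ x)**(1/k)
    print(k, vals[k], lam2)
```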
\begin{document} \maketitle \abstract{ By using our novel Grassmann formulation we study the phase transition of the spanning-hyperforest model of the $k$-uniform complete hypergraph for any $k\geq 2$. The case $k=2$ reduces to the spanning-forest model on the complete graph. Different $k$ are studied at once by using a microcanonical ensemble in which the number of hypertrees is fixed. The low-temperature phase is characterized by the appearance of a giant hypertree. The phase transition occurs when the number of hypertrees is a fraction $(k-1)/k$ of the total number of vertices. The behaviour at criticality is also studied by means of the coalescence of two saddle points. As the Grassmann formulation exhibits a global supersymmetry, we show that the phase transition is second order and is associated with supersymmetry breaking, and we explore the pure thermodynamical phase at low temperature by introducing an explicit breaking field. } \clearpage \section{Introduction} The phase transition in a model of spanning forests is particularly interesting because only the geometric properties of connection of different parts are involved, and this extremely reduced structure is probably at the root of many critical phenomena, within, and even outside, the natural sciences. A possible way to attack this problem on a generic graph by the tools of statistical mechanics goes back to the formulation as a Potts model~\cite{Potts, Wu, Wu2} in the limit of vanishing number $q$ of states. The Potts model on any finite graph $G=(V,E)$, with vertex set $V$ and edge set $E$, is characterized by the coupling $v_e$, for each edge $e\in E$, which is related to the inverse temperature $\beta$ and exchange coupling $J_e$ by the relation $v_e = e^{\beta J_e} - 1$. By definition $q$ is a positive integer and the set of couplings ${\bf v} = \{v_e\}_{e\in E}$ consists of real numbers in the interval $[-1,\infty)$.
The Fortuin-Kasteleyn representation~\cite{Kasteleyn_69, Fortuin_72} expresses the partition function $Z_G(q; {\bf v})$ of the Potts model as a sum on all subgraphs $H\subseteq G$ of monomials in both $q$ and $v_e$'s \begin{equation} Z_G(q; {\bf v}) := \sum_{H\subseteq G} q^{K(H)}\,\prod_{e\in E(H)} v_e \end{equation} where $K(H)$ is the number of connected components of the subgraph $H$. Therefore the model is easily extended to more general values of its parameters. In this form it takes the name of {\em random cluster model}~\cite{Grimmet}. More generally, it is convenient to introduce the redundant description in terms of two global parameters $\lambda$ and $\rho$ \begin{equation} Z_G(\lambda,\rho; {\bf w}) := \sum_{H\subseteq G} \lambda^{K(H)-K(G)}\,\rho^{L(H)}\, \prod_{e\in E(H)} w_e \end{equation} where $L(H)$ is the {\em cyclomatic number} of the subgraph $H$. The redundancy is easily shown by using the Euler relation \begin{equation} V - K = E - L \end{equation} and the relations \begin{align} q \,=\, & \lambda \, \rho \\ {\bf v} \,=\, & {\bf w} \, \rho \, . \end{align} Indeed, this form is suitable for taking two different limits when $q \to 0$. In the former limit $\lambda\to 0$ at $\rho$ fixed only maximally-connected subgraphs will survive and will be weighted by a factor $\rho^{L(H)}$. In the latter one $\rho\to 0$ at $\lambda$ and $\bf w$ fixed only spanning forests will survive, weighted by a factor $\lambda^{K(H)}$ as $\lambda^{-K(G)}$ is an overall constant. Remark that, when $G$ is {\em planar}, maximally-connected subgraphs are in one-to-one correspondence with spanning forests by graph duality, and that when both $\lambda\to 0$ and $\rho\to 0$ only spanning trees will survive in each connected component of $G$. 
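As a concrete sanity check of this redundancy (a sketch, not from the paper; the triangle $K_3$ and the parameter values are arbitrary), the substitutions $q=\lambda\rho$, ${\bf v}={\bf w}\rho$ together with the Euler relation give $Z_G(q;{\bf v}) = \rho^{|V|}\,\lambda^{K(G)}\, Z_G(\lambda,\rho;{\bf w})$, which can be verified by brute-force enumeration of subgraphs:

```python
from itertools import combinations, chain
from fractions import Fraction

# Triangle graph K_3; check that q = lambda*rho, v = w*rho implies
# Z(q; v) = rho^|V| * lambda^K(G) * Z(lambda, rho; w).
V = [0, 1, 2]
E = [(0, 1), (0, 2), (1, 2)]

def components(edges):
    parent = {v: v for v in V}
    def find(a):
        while parent[a] != a:
            a = parent[a]
        return a
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in V})

def subgraphs():
    return chain.from_iterable(combinations(E, r) for r in range(len(E) + 1))

lam, rho, w = Fraction(2), Fraction(3), Fraction(5)
q, v = lam * rho, w * rho
KG = components(E)

Z_qv = sum(q**components(H) * v**len(H) for H in subgraphs())
# cyclomatic number L(H) = E(H) - V + K(H), by the Euler relation
Z_lr = sum(lam**(components(H) - KG)
           * rho**(len(H) - len(V) + components(H))
           * w**len(H) for H in subgraphs())
print(Z_qv == rho**len(V) * lam**KG * Z_lr)   # → True
```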
As shown in \cite{CarJacSal04, CarSokSpo07}, the model in the limit $\rho\to 0$, that is the spanning forest model, admits a representation in terms of fermionic fields, which means that the partition function can be written as a multiple Berezin integral over anti-commuting variables which belong to a Grassmann algebra. Moreover, this representation~\cite{CarSokSpo07, BedCarSpo08} is powerful enough to describe the model with many-body interactions, which gives rise to hyperforests defined on a hypergraph~\cite{Grimmett_94}, a natural generalization of the concept of graph in which the edges can connect more than two vertices at once. In two dimensions, the critical behaviour of the ferromagnetic Potts/random-cluster model is quite well understood, thanks to a combination of exact solutions~\cite{Bax07}, Coulomb gas methods~\cite{Nienhuis}, and conformal field theory~\cite{FraMatSen97}. Information can also be deduced from the study of the model on random planar lattices~\cite{Kazakov,PZJ,eynbon}. Also in the $q \to 0$ limit detailed results are available, both for the tree model, in particular in connection with the abelian sandpile model~\cite{MD}, and for spanning forests on a regular lattice~\cite{CarJacSal04,claudia} and directly in the continuum~\cite{arboreal}. The model on random planar lattices has also been considered~\cite{BP}. But in more than two dimensions the only quantitative information we have about the spanning-forest model comes from numerical investigations~\cite{DenGarSok07}. Monte Carlo simulations performed at increasing dimensionality ($d = 3,4,5$) show a second-order phase transition. Far fewer results are available for the case of hyperforests, even in two dimensions or in the limit of hypertrees.
Even the problem of determining whether there exists a spanning hypertree in a given $k$-uniform hypergraph is hard, technically NP-complete, for $k\ge 4$, whereas for $k=3$ there exists a polynomial-time algorithm based on Lov\'asz' theory of polymatroid matching~\cite{LovPlu}. See~\cite{CMSS} for a randomized polynomial-time algorithm in the case $k=3$, whose main ingredient is a Pfaffian formula for a polynomial that enumerates spanning hypertrees with some signs~\cite{MV}, which is quite similar to our Grassmann representation~\cite{Abdessalam}. In~\cite{SPS} a phase transition is detected in the random $k$-uniform hypergraph when a number of hyperedges $|E|=n/(k(k-1))$, where $n=|V|$ is the total number of vertices, is chosen uniformly at random. In the case of random graphs, that is for $k=2$, Erd\H{o}s and R\'enyi showed in their classical paper~\cite{ER} that at the transition an abrupt change occurs in the structure of the graph: for low edge density it consists of many small components, while in the high-density regime a {\em giant} component occupies a finite fraction of the vertices. Remark that their ensemble of subgraphs is the one occurring in the microcanonical formulation, at fixed number of edges, of the Potts model at number of states $q=1$. The connected-component structure of the random $k$-uniform hypergraph has been analyzed in~\cite{SPS}, where it has been shown that if $|E| < n/(k(k-1))$ the largest component occupies order $\log n$ vertices, for $|E| = n/(k(k-1))$ it has order $n^{2/3}$ vertices, and for $|E| > n/(k(k-1))$ there is a unique component with order $n$ vertices. More detailed information on the behaviour near the phase transition when $|E| \to n/(k(k-1))$ has been obtained in~\cite{Bollo1, Bollo2} for the case of the random graph, but see also~\cite{JKLP, JLR}, and in~\cite{Karonski_02} for the general case of hypergraphs.
By using the new Grassmann representation, we present here a study of the phase transition for the hyperforest model on the $k$-uniform complete hypergraph, for general $k$, where the case $k=2$ corresponds to spanning forests on the complete graph. The random-cluster model~\cite{BolGriJan05} on the complete graph has already been worked out, but that analysis cannot be extended to the $q \to 0$ case, exactly like the mean-field solution for the Potts model~\cite{Mittag, Baracca}. The fermionic representation, instead, describes the Potts model directly at $q=0$, as it provides an exact representation of the partition function of the spanning-hyperforest model. As usual with models on the complete graph, the statistical weight reduces to a function of only one extensive observable, which here is quadratic in the Grassmann variables. Under such a condition the partition function can be expressed as an integration over a single complex variable along a closed contour around the origin~\cite{BedCarSpo08}. Counting the spanning forests of a complete (hyper-)graph is indeed a typical problem of {\em analytic combinatorics}. Exactly as in the case of ordinary graphs, when the number of connected components in the spanning forests is macroscopic, that is a finite fraction of the number of vertices, there are two different regimes, which can be well understood by means of two different saddle points of a closed contour integration over a single complex variable, as presented in \cite{FlaSed08} (but see also the probabilistic analysis in~\cite{Kolchin}). Even the behaviour at the critical point can be studied, as the coalescence of these two saddle points.
In this paper we shall first review, for the reader's convenience, the Grassmann formulation of the spanning-forest model in Sec.~\ref{sec:the_spanning_forests_model}, and in Sec.~\ref{sec:the_mean_field_theory} how it is possible to recover, in the case of the $k$-uniform complete hypergraphs, a representation of the partition function suitable for the asymptotic analysis at a large number of vertices $n$. In this same Section we shall also present a full discussion of the saddle points in the micro-canonical ensemble, that is at fixed number of connected components, and of the associated different phases. We shall see that the universality class of the transition is independent of $k$. We will also exhibit the relation with the canonical ensemble. In Sec.~\ref{sec:t} we will provide an interpretation of the transition as the appearance of a {\em giant} component by introducing a suitable observable which is sensitive to the size of the different hypertrees in the hyperforest. More interestingly, our Grassmann formulation exhibits a global continuous supersymmetry, non-linearly realized. We shall show that the phase transition is associated with the spontaneous breaking of this supersymmetry. By the introduction of an explicit breaking we shall be able to investigate the expectation values in the broken pure thermodynamical states. We shall therefore be able to see in Sec.~\ref{sec:the_symmetry_breaking} that the phase transition is of second order. This seems at variance with the supersymmetric formulation of polymers given by Parisi and Sourlas~\cite{Parisi-Sourlas}, where it appeared to be of zeroth order.
\section{The spanning-forest model} \label{sec:the_spanning_forests_model} Given the complete hypergraph $\overline{\cal K}_n=(V,E)$ with vertex set $V = [n]$, and complete in the $k$-hyperedges for all $2\leq k\leq n$, so that the hyperedge set $E$ is the collection of all $A \subseteq V$ with cardinality at least 2, let us introduce on each vertex $i \in V$ a pair of anti-commuting variables $\psi_i$, $\bar\psi_i$: \begin{equation} \{\psi_i, \psi_j\} = \{\bar\psi_i, \bar\psi_j\} = \{\bar\psi_i, \psi_j\} = 0 \qquad \forall i, j \in V(G) \end{equation} which generate the Grassmann algebra $\Lambda[\psi_1, \dots, \psi_n,\bar\psi_1, \dots, \bar\psi_n]$ of dimension $2^{2n}$. Then, for each hyperedge $A\subseteq E$, we define the monomial \begin{equation} \tau_A := \prod_{i \in A} \bar\psi_i \psi_i , \end{equation} and, for each indeterminate $t$, the Grassmann element \begin{equation} f_A^{(t)} := t (1 - |A|) \tau_A + \sum_{i \in A} \tau_{A \smallsetminus i} - \sum_{\substack{i,j \in A \\ i \neq j}} \bar\psi_i \psi_j \tau_{A \smallsetminus \{i,j\}}\, .
\end{equation} In \cite{CarSokSpo07} it has been shown that the generating function of unrooted spanning forests on a generic hypergraph admits the following representations: given a set of edge weights $\mathbf w = \{w_{A}\}_{A \in E}$ we have \begin{align} \calZ_G(\mathbf w, t) & := \sum_{F\in \calF} t^{K(F)} \prod\limits_{A \in F} w_A \\ & \label{eq:z} = \int \calD (\bar\psi, \psi)\ \exp \left\{ t \sum_{i \in V} \bar\psi_i \psi_i + \sum_{A \in E} w_A f_A^{(t)} \right\} \\ & = \int \calD (\bar\psi, \psi)\ \exp \left( - {\cal H} \right) \end{align} where the indeterminate $t$ plays the role of the parameter $\lambda$ we had in the random cluster model formulation, $\cal F$ is the set of hyperforests, $K(F)$ the number of connected components in the hyperforest $F$, that is the number of hypertrees, \begin{align} \int \calD (\bar\psi, \psi) := \int \prod_{i \in V} d\bar\psi_i d\psi_i \end{align} is the Berezin integration and we denoted by $- {\cal H}$, as usual in statistical mechanics, the exponential weight. The fermionic model we introduced above presents a non-linearly realized $\osp(1|2)$ supersymmetry~\cite{CarJacSal04, CarSokSpo07}. Firstly, we have the elements of the $\ssp(2)$ subalgebra, with \begin{align} \delta \psi_i & = -\, \alpha\, \psi_i + \gamma\, \bar\psi_i \\ \delta \bar\psi_i & = +\, \alpha\, \bar\psi_i + \beta\, \psi_i \label{def.sp2} \end{align} where $\alpha,\beta,\gamma$ are bosonic (Grassmann-even) global parameters.
Secondly, we have the transformations parametrized by fermionic (Grass\-mann-odd) global parameters $\epsilon,\bar\epsilon$: \begin{align} \delta \psi_i & = t^{-1/2}\,\epsilon \,(1 - t\,\bar\psi_i \psi_i) \label{eq:osp-fermionica} \\ \delta \bar\psi_i & = t^{-1/2}\,\bar\epsilon \,(1 - t\,\bar\psi_i \psi_i) \label{eq:osp-fermionicb} \end{align} In terms of the differential operators $\partial_i = \partial/\partial \psi_i$ and $\bar{\partial}_i = \partial/\partial \bar\psi_i$, the transformations~(\ref{def.sp2}) can be represented by the generators \begin{align} X_0 & = \sum_{i\in V} (\bar\psi_i \bar{\partial}_i - \psi_i \partial_i) \\[1mm] X_+ & = \sum_{i\in V} \bar\psi_i \partial_i \\[1mm] X_- & = \sum_{i\in V} \psi_i \bar{\partial}_i \label{eq:repsp2} \end{align} corresponding to the parameters $\alpha,\beta,\gamma$, respectively, while the transformations~(\ref{eq:osp-fermionica}) and~(\ref{eq:osp-fermionicb}) can be represented by the generators \begin{align} Q_+ & = t^{-1/2} \sum_{i\in V} (1 - t \,\bar\psi_i \psi_i) \partial_i \\[1mm] Q_- & = t^{-1/2} \sum_{i\in V} (1 - t\, \bar\psi_i \psi_i) \bar{\partial}_i \label{eq:def_Q_bis} \end{align} corresponding to the parameters $\epsilon,\bar\epsilon$, respectively. These transformations satisfy the commutation/anticommutation relations \begin{align} [X_0, X_\pm] \,=\, \pm 2 X_\pm \quad & \quad [X_+, X_-] \,=\, X_0 \label{eq:sp2} \\[1mm] \{ Q_\pm, Q_\pm \} \,=\, \pm 2 X_\pm \quad & \quad \{ Q_+, Q_- \} \,=\, X_0 \label{eq:osp12a} \\[1mm] [X_0, Q_\pm] \,=\, \pm Q_\pm \quad\qquad [X_\pm, Q_\pm] &\!=\! 0 \quad\qquad [X_\pm, Q_\mp] \,=\, -Q_\pm \label{eq:osp12c} \end{align} Note in particular that $X_\pm = Q_\pm^2$ and $X_0 = Q_+ Q_- + Q_- Q_+$. 
\section{Uniform complete hypergraphs} \label{sec:the_mean_field_theory} The $k$-uniform complete hypergraph is the hypergraph whose vertices are connected in groups of $k$ in all possible ways or, alternatively, whose edge set $E$ is the set of all $k$-sets over the vertex set $V$. In our general formulas for the complete hypergraph we must set the weights $w_A=0$ for all the hyperedges $A$ with cardinality different from $|A|=k$. In the following we will set all the nonzero weights to one, so that we shall restrict ourselves to a simple, one-parameter, counting problem. In this case the expansion of the partition function as a series in $t$ \begin{equation} \calZ(t) = \sum_p \calZ_p \,t^p \end{equation} provides $\calZ_p$, the number of spanning hyperforests, with all hyperedges of cardinality $k$, composed of $p$ hypertrees. Please remark that, in the $k$-uniform complete hypergraph, the number $p$ of hypertrees must be such that \begin{equation} s \, = \, \frac{n-p}{k-1} \label{s} \end{equation} is an integer. Indeed it is the total number of hyperedges in the hyperforest. By definition \begin{align} \calZ_p = & \, \frac{1}{p!}\,\langle {\cal U}^p \rangle_{t=0} \\ = & \, \frac{1}{p!}\, \int \calD (\bar\psi, \psi)\, {\cal U}^p\, \exp \left\{ \sum_{A: |A|=k} f_A^{(0)} \right\} \end{align} where \begin{equation} {\cal U} = \sum_{i \in V} \bar\psi_i \psi_i + (1 - k) \sum_{A: |A|=k} \tau_A \end{equation} and $\langle \cdot \rangle_{t=0}$ is the un-normalized expectation value in the ensemble of hypertrees. The interested reader can find a full comparison of our approach with respect to the standard tools of combinatorics in our previous paper~\cite{BedCarSpo08}.
We introduce a mean-field variable \begin{equation} \bar\psi\psi := \sum_{i \in V} \bar\psi_i \psi_i = (\bar\psi, \psi) \end{equation} and we observe that \begin{align} {\cal U} & = \bar\psi\psi + (1 - k) \frac {(\bar\psi\psi)^k} {k!} \\ \sum_{A: |A|=k} f_A^{(0)} & = n \frac{(\bar\psi\psi)^{k-1}}{(k-1)!} - (\bar\psi, \J \psi) \frac{(\bar\psi\psi)^{k-2}}{(k-2)!} , \end{align} where $\J$ is the matrix with 1 on all entries, so that $(\bar\psi, \J \psi) = \sum_{i,j} \bar\psi_i \psi_j$. The following lemma then applies: \begin{lemma}[\cite{BedCarSpo08}] Let $n$ be the number of vertices, $g$ and $h$ generic functions on the Grassmann algebra, then \begin{multline} \int \calD (\bar\psi, \psi)\ (\bar\psi\psi)^{r} e^{h(\bar\psi\psi) + (\bar\psi, J \psi) g(\bar\psi\psi)} \\ = \int \calD (\bar\psi, \psi)\ (\bar\psi\psi)^{r} e^{h(\bar\psi\psi)} \left[1 + \bar\psi\psi \, g(\bar\psi\psi)\right]\, . \end{multline} \end{lemma} By this lemma, $\calZ(t)$ can be written in terms of the sole mean-field variable $\bar\psi\psi$ \begin{equation} \label{eq:z-complete} \calZ(t) = \int \calD (\bar\psi, \psi)\, \exp \left\{ t \,{\cal U} + n \frac {(\bar\psi\psi)^{k-1}} {(k-1)!} \right\} \left[ 1 - \frac {(\bar\psi\psi)^{k-1}}{(k-2)!} \right] . \end{equation} In order to perform an estimate for the asymptotic value of the integral for large $n$ we recall that for an analytic function $f$ \begin{equation} \int \calD (\bar\psi, \psi)\ f(\bar\psi\psi) \equiv n! \oint \frac {d\xi} {2 \pi i}\ \frac{f(\xi)}{\xi^{n+1}} \end{equation} where the integration contour in the complex plane is around the origin. We have the following complex integral representation form for the partition function $\calZ(t)$ \begin{multline} \label{eq:z-mf} \calZ(t) = n! \oint \frac {d\xi} {2 \pi i} \, \frac1{\xi^{n+1}} \exp \left\{ t \left[ \xi + (1 - k) \frac {\xi^k} {k!} \right] \right\} \\ \exp \left\{ n \frac {\xi^{k-1}} {(k-1)!} \right\} \left[ 1 - \frac {\xi^{k-1}}{(k-2)!} \right] . 
\end{multline} Let us first work at fixed number of hypertrees, in a micro-canonical ensemble in the physics terminology. Expanding \eqref{eq:z-mf} in powers of $t$ we obtain the number $\calZ_{p}$ of spanning hyperforests on the complete $k$-uniform hypergraph, which is the number of states in the micro-canonical ensemble \begin{multline} \label{eq:fp} {\cal Z}_p = \frac {n!} {p!} \oint \frac {d\xi} {2 \pi i} \, \frac1{\xi^{n+1}} \left[ \xi + (1 - k) \frac {\xi^k} {k!} \right]^p \\ \exp\left\{ n \frac {\xi^{k-1}} {(k-1)!} \right\} \left[ 1 - \frac {\xi^{k-1}}{(k-2)!} \right] . \end{multline} Since we are interested in obtaining $ {\cal Z}_p$ in the thermodynamical limit $n \to \infty$ also for large values of $p$, we define $p = \alpha n$ with fixed $\alpha$ as $n, p \to \infty$. Changing the variable of integration to $\eta = (k-1) \frac{\xi^{k-1}}{k!}$, we obtain the following integral expression: \begin{equation} \label{eq:fp-AB} {\cal Z}_{\alpha n} = \frac {n!} {\Gamma(\alpha n+1)} \left[ \frac{k -1}{k!} \right]^{n \frac {1-\alpha } {k - 1}}\, I(\alpha) \end{equation} where \begin{equation} I(\alpha) := \oint \frac{d\eta}{2 \pi i} \, A(\eta) \, e^{nB(\eta)} \label{intI} \end{equation} with \begin{align} A(\eta) & = \frac{1 - k \eta}{\eta} \label{Axi} \\ B(\eta) & = \frac k {k - 1} \eta + \alpha \log(1 - \eta) + \frac {\alpha - 1} {k - 1} \log \eta . \end{align} Please note that the factor $k - 1$ coming from the change of variable in the integral is exactly compensated by the fact that a full turn around the origin in the $\eta$ plane is equivalent to $k - 1$ turns of the $\xi$ variable. Precise estimates of integrals of this kind for $n \to \infty$ can be obtained by the saddle point method (see \cite{FlaSed08} for a very complete discussion of this method).
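The coefficient extraction in \eqref{eq:fp} can be checked directly for small sizes (an illustrative sketch, not part of the derivation): for $k=2$ the numbers ${\cal Z}_p$ count the spanning forests of the complete graph $K_n$ with $p$ trees.

```python
from sympy import symbols, exp, factorial

xi = symbols('xi')

def Z_p(n, p, k=2):
    # Coefficient extraction from the contour-integral formula (eq:fp):
    # Z_p = (n!/p!) [xi^n] (xi + (1-k) xi^k / k!)^p
    #                      * exp(n xi^(k-1)/(k-1)!) * (1 - xi^(k-1)/(k-2)!)
    f = ((xi + (1 - k) * xi**k / factorial(k))**p
         * exp(n * xi**(k - 1) / factorial(k - 1))
         * (1 - xi**(k - 1) / factorial(k - 2)))
    coeff = f.series(xi, 0, n + 1).removeO().coeff(xi, n)
    return factorial(n) / factorial(p) * coeff

# For k = 2 and n = 4 these are the spanning-forest counts of K_4:
# 16 spanning trees (Cayley: 4^2), then 15, 6, 1 forests with 2, 3, 4 trees.
print([Z_p(4, p) for p in (1, 2, 3, 4)])   # → [16, 15, 6, 1]
```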
\subsection{The saddle point method} \label{sub:the_saddle_point_method} A \emph{saddle point} of a function $B(\eta)$ is a point $\eta_0$ where $B'(\eta_0) = 0$; it is said to be a \emph{simple saddle point} if furthermore $B''(\eta_0) \neq 0$. In this case it is easy to see that the equilevel lines divide a neighborhood of $\eta_0$ in four regions where $\Re B(\eta)$ is alternately higher and lower than the saddle point value $\Re B(\eta_0)$. We will refer to the two lower regions as the \emph{valleys}. Analogously, a \emph{multiple saddle point} has multiplicity $p$ if all derivatives up to $B^{(p)}(\eta_0)$ are equal to zero while $B^{(p+1)}(\eta_0) \neq 0$. In this case there are $p + 1$ higher and lower regions. When evaluating Cauchy contour integrals of the form~(\ref{intI}), saddle points of $B(\eta)$ play a central role in the asymptotic estimate for large $n$. The method essentially consists of two basic ingredients: an accurate choice of the contour and Laplace's method for the evaluation of integrals depending on a large parameter. The contour has to be chosen so that it passes through a point which is a global maximum of the integrand along the contour, and so that a neighborhood of that point (the \emph{central region}) dominates the rest of the contour (the \emph{tails}) as $n$ grows. Since an analytic function cannot have an isolated maximum, this implies that the contour should pass through a saddle point. The existence of a contour surrounding the origin and crossing a saddle point along its direction of steepest descent requires that two of its valleys are topologically connected and that the region connecting them surrounds the origin. Once we have a contour, we proceed neglecting the tails and approximating the functions $A(\eta)$ and $B(\eta)$ with their Taylor series about the chosen saddle point $\eta^*$.
Then, after having absorbed the factor $n$ into a rescaled variable $x = (\eta - \eta^*)\, n^{1/(p+1)}$ (where $p$ is the multiplicity of the saddle point), we can easily obtain an asymptotic expansion of the integral in inverse powers of $n$. We collect here the first few terms of the asymptotic expansion for the case of a simple saddle \begin{equation} \label{eq:single-saddle} I \simeq \frac{e^{n B(\eta^{*})}}{\sqrt{2 \pi n B''(\eta^{*})}} \left[ A(\eta^{*}) + \frac{1}{n} C(\eta^{*}) + \frac{1}{n^2} D(\eta^{*}) + O\left(\frac{1}{n^3}\right) \right] \end{equation} where the terms in the square brackets with half-integer inverse-power of $n$ vanish, and of a double saddle \begin{equation} \label{eq:double-saddle} I \simeq \frac{e^{n B(\eta^{*})}}{n^{\frac{1}{3}} B^{(3)}(\eta^{*})^{\frac{1}{3}}} \left[ \gamma_{0} \, A(\eta^{*}) + \frac{1}{n^{\frac{1}{3}}} \tilde C(\eta^{*}) + \frac{1}{n} \tilde D(\eta^{*}) + O\left(\frac{1}{n^{\frac{4}{3}}}\right) \right] \end{equation} where the terms in the square brackets with powers $n^{-(l+\frac{2}{3})}$, with integer $l$, vanish. In these formulae $C$, $\tilde C$, $D$, and $\tilde D$ are rational functions of $A(\eta^*)$, $B(\eta^*)$ and their derivatives, whose expression is reported in the Appendix~\ref{sec:appendix}, together with the value of the constant~$\gamma_0$. For our integral~(\ref{intI}) in the large $n$ limit the relevant saddle-point equation $B'(\eta)= 0$ has two solutions, $\eta_a$ and $\eta_b$: \begin{align} \label{eq:saddles} \eta_a = \frac{1}{k} \qquad \eta_b = 1 - \alpha . \end{align} If $\alpha \neq \alpha_c \equiv (k - 1)/k$ the two solutions are distinct and correspond to simple saddle points. To understand which one is relevant to our discussion we need to study the landscape of the function $B(\eta)$ beyond the neighborhood of the saddles.
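The saddle-point equation and the coalescence at $\alpha_c$ are easy to confirm symbolically (a sketch for illustration only):

```python
from sympy import symbols, log, diff, simplify

eta, alpha, k = symbols('eta alpha k', positive=True)

# B(eta) from the integral representation of I(alpha)
B = k/(k - 1)*eta + alpha*log(1 - eta) + (alpha - 1)/(k - 1)*log(eta)

Bp = diff(B, eta)
print(simplify(Bp.subs(eta, 1/k)))                          # → 0
print(simplify(Bp.subs(eta, 1 - alpha)))                    # → 0

# at alpha = alpha_c = (k-1)/k the two saddles merge: B'' vanishes there
Bpp = diff(B, eta, 2)
print(simplify(Bpp.subs({eta: 1/k, alpha: (k - 1)/k})))     # → 0
```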
In our specific case, as illustrated in figures from~\ref{fig:SaddlesLow} to \ref{fig:SaddlesCritical}, when $\alpha < \alpha_c$ among the two saddles only $\eta_a$ is accessible, while, if $\alpha > \alpha_c$, only $\eta_b$ is so. When $\alpha = \alpha_c$ the two saddle points coalesce into a double saddle point, thus with three valleys, having steepest-descent directions $e^{\frac{2 \pi i k}{3}}$, with $k=0$, 1, 2. Of these valleys, the ones with the appropriate global topology are those with indices $k=1$ and $2$. \begin{figure}[tbp ] \centering \setlength{\unitlength}{50pt} \begin{picture}(6.2,4.6) \put(0,0){\includegraphics[scale=1]{fig_low.pdf}} \put(2.89,2){$\eta_a$} \put(4.03,2){$\eta_b$} \put(0.9,2){$0$} \put(4.95,2){$1$} \end{picture} \caption{\small Contour levels for $\Re B(\eta)$ when $\alpha < \alpha_c$. More precisely, the figure shows the case $k=2$ and $\alpha = \frac{1}{2} \alpha_c$. The two bold contour lines describe the level lines of $\Re B(\eta)$ for the values at the two saddle points (located at the bullets). Darker tones denote higher values of $\Re B(\eta)$. The crosses and the dotted lines describe the cut discontinuities due to logarithms in $B(\eta)$. The dashed path surrounding the origin going through one of the saddle points is an example of valid integration contour, and the solid straight portion of the path describes an interval in which the perturbative approach is valid. \label{fig:SaddlesLow} } \end{figure} \begin{figure}[htbp] \centering \setlength{\unitlength}{50pt} \begin{picture}(6.2,4.6) \put(0,0){\includegraphics[scale=1]{fig_high.pdf}} \put(3,2){$\eta_a$} \put(2.13,1.98){$\eta_b$} \put(0.9,2){$0$} \put(4.95,2){$1$} \end{picture} \caption{\small Contour levels for $\Re B(\eta)$ when $\alpha > \alpha_c$. More precisely, the figure shows the case $k=2$ and $\alpha = \frac{3}{2} \alpha_c$. Description of notations is as in figure~\ref{fig:SaddlesLow}. 
\label{fig:SaddlesHigh} } \end{figure} \begin{figure}[htbp] \centering \setlength{\unitlength}{50pt} \begin{picture}(6.2,4.6) \put(0,0){\includegraphics[scale=1]{fig_crit.pdf}} \put(3.165,2){$\eta_a \equiv \eta_b$} \put(0.9,2){$0$} \put(4.95,2){$1$} \end{picture} \caption{\small Contour levels for $\Re B(\eta)$ when $\alpha = \alpha_c$. More precisely, the figure shows the case $k=2$. Description of notations is as in figure~\ref{fig:SaddlesLow}. \label{fig:SaddlesCritical} } \end{figure} As a first result of this discussion, in order to study the asymptotic behaviour of $\calZ_{\alpha n}$ we will need to distinguish two different phases, and a critical point, according to whether the value of $\alpha$ is below, above or equal to $\alpha_c$. We will name the phases with a smaller and a larger number of hypertrees, respectively, the {\em low temperature} and {\em high temperature} phase, the reason being that, as we shall see, in the low temperature phase there is a spontaneous symmetry breaking and the appearance of a non-zero residual {\em magnetization}. \subsubsection{Low temperature phase} \label{sec:case-a} In the case $\alpha < \alpha_c$ the relevant saddle point is $\eta_a = 1/k$. See Fig.~\ref{fig:SaddlesLow}. Since $A(\eta_a) = 0$, we are in the case in which the leading order of \eqref{eq:single-saddle} vanishes and the next order has to be considered. The expansion of $A(\eta)$ and $B(\eta)$ in a neighborhood of the saddle $\eta_{a}$ is as follows: \begin{align} \label{eq:2} A(\eta_a + u) & \simeq - k^2 u + k^3 u^2 + O(u^3) \\ \label{eq:4} \begin{split} B(\eta_a + u) & \simeq \frac{1+(1-\alpha) \log k}{k \, \alpha_c} + \alpha \log \alpha_c + k\, \frac{\alpha_c - \alpha}{\alpha_{c}^2} \frac{u^2}{2} \\ & \quad - \left[ k^2 \frac {1-\alpha} {\alpha_c} + \frac{\alpha}{\alpha_c^3}\right] \frac {u^{3}}{3} + O(u^{4})\, .
\end{split} \end{align} Using formula \eqref{eq:single-saddle} we obtain for \eqref{eq:fp} the following asymptotic expression: \begin{align} \label{eq:18} {\cal Z}_{\alpha n} \simeq & \frac {n!} {\Gamma(\alpha n+1)} \frac {\alpha \sqrt{k-1}} {\sqrt{2 \pi n^3 }} \frac {e^{\frac{n}{ k - 1}} \, \left(\frac{k-1}{k}\right)^{ n \alpha -1}} { [(k-2)!]^{n \frac {1 - \alpha}{k - 1}}} \left(1 - \frac{k \alpha}{k-1}\right)^{-5/2} \\ \label{eq:27} \simeq & \frac {n^{n-2}} {(\alpha n)^{\alpha n - \frac{1}{2}}} \sqrt{\frac {k-1} {2 \pi }} \frac {e^{\left( \alpha - \frac{k-2}{k-1}\right) n} \, \left(\frac{k-1}{k}\right)^{ n \alpha -1}} { [(k-2)!]^{n \frac {1 - \alpha}{k - 1}}} \left(1 - \frac{k \alpha}{k-1}\right)^{-5/2} \end{align} where in the second line we used the Stirling formula to approximate the large factorial $n!$. In a previous work~\cite{BedCarSpo08} we already gave an asymptotic formula for the number of forests with a given number $p$ of connected components. That formula was obtained by keeping $p$ fixed while taking the limit $n \to \infty$; in the notation of this paper, this means taking $\alpha$ infinitesimal. By setting $ \alpha n \to p$ in \eqref{eq:18} and using \begin{equation} \frac{\alpha}{ \Gamma(\alpha \, n +1)} = \frac{\alpha }{\alpha n \,\Gamma(\alpha n)} = \frac{n^{-1}}{(p-1)!} \end{equation} and then taking the limit $\alpha \to 0$, we re-obtain the result of~\cite{BedCarSpo08}, using once more the Stirling formula for the large factorial $n!$: \begin{equation} \label{eq:28} {\cal Z}_p \simeq \frac{n^{n - 2}}{e^{n \frac {k - 2} {k - 1}}} \frac{ \sqrt {k - 1}} {\left[ (k-2)! \right]^{\frac{n-p}{k-1}}} \frac1{(p-1)!} \left( \frac{k-1}k \right)^{p-1}\, .
\end{equation} \subsubsection{High temperature phase} \label{sec:case-b} When $\alpha_c< \alpha < 1$ the relevant saddle point changes into $\eta_b = 1 - \alpha$ (see Fig.~\ref{fig:SaddlesHigh}), where the functions $A(\eta_b+u)$ and $B(\eta_b+u)$ can be approximated, up to $O(u^4)$ corrections, by \begin{align} A(\eta_b + u) & \simeq k\,\frac{\alpha - \alpha_{c}}{1 - \alpha} - \frac{u}{(\alpha - 1)^2} - \frac{u^2}{(\alpha - 1)^3} - \frac{u^3}{(\alpha - 1)^4} \\ \begin{split} B(\eta_b + u) & \simeq \frac{1 - \alpha}{\alpha_c} \left[ 1 - \frac{1}{k}\log(1 - \alpha)\right] + \alpha \log{\alpha} \\ & \; + \frac{1}{\alpha \, \alpha_{c}} \frac{\alpha - \alpha_c}{1 - \alpha} \frac{u^2}{2} - \left[ \frac{1}{\alpha^2} + \frac{1}{k\, \alpha_c\,(1-\alpha)^2}\right] \frac{u^3}{3} \end{split} \end{align} The situation is quite analogous to the previous one, with the exception that $A(\eta_b) \neq 0$, and using formula \eqref{eq:single-saddle} we obtain for \eqref{eq:fp} the following asymptotic expression: \begin{align} \label{eq:7} {\cal Z}_{\alpha n} \simeq & \frac {n!\, \alpha^{\alpha n}\,(k-1)} {\Gamma(\alpha n+1)} \left[\frac {e^{k} }{ k\,(1-\alpha)\, (k-2)! }\right]^{n \frac {1 - \alpha} {k - 1}} \sqrt{\frac {\alpha}{2 \pi n (1 - \alpha)}}\nonumber\\ & \qquad \left( \frac{\alpha k}{k-1} -1\right)^{1/2} \\ \simeq & \frac {n^{(1-\alpha)n}} {\sqrt{2 \pi n \frac{1-\alpha}{k-1}}} \left[\frac {e}{ k\,(1-\alpha)\, (k-2)! }\right]^{n \frac {1 - \alpha} {k - 1}} \, \left( \alpha k - k + 1\right)^{1/2}\, . \end{align} Remark that the saddle point method cannot be applied when $\alpha=1$, but if we replace \begin{equation} \frac{e^{n \frac {1 - \alpha} {k - 1}}}{\sqrt{2 \pi n \frac{1-\alpha}{k-1}}} \simeq \frac{n^{n \frac {1 - \alpha} {k - 1}}}{\Gamma\left( n \frac {1 - \alpha} {k - 1} + 1 \right)} \end{equation} for $\alpha \simeq 1$ we get \begin{equation} {\cal Z}_{ n} \simeq 1 \, , \end{equation} as we should.
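Before moving on, a quick numerical cross-check (not part of the derivation) may be useful: with the explicit form $B(\eta) = \frac{k}{k-1}\,\eta + \alpha \log(1-\eta) + \frac{\alpha-1}{k-1}\,\log \eta$, obtained by setting $h=0$ in~(\ref{eq:54}) below, one can verify that $\eta_a = 1/k$ and $\eta_b = 1-\alpha$ are stationary points in both phases, and that they coalesce at $\alpha_c = (k-1)/k$. A minimal sketch (the sample values of $k$ and $\alpha$ are arbitrary):

```python
def B_prime(eta, k, alpha):
    # d/d eta of B(eta) = k/(k-1) eta + alpha log(1-eta) + (alpha-1)/(k-1) log(eta)
    return k/(k-1) - alpha/(1-eta) + (alpha-1)/((k-1)*eta)

for k in (2, 3, 5):
    alpha_c = (k-1)/k
    for alpha in (0.5*alpha_c, 0.5*(alpha_c + 1)):   # one value in each phase
        for eta in (1.0/k, 1.0-alpha):               # eta_a and eta_b
            assert abs(B_prime(eta, k, alpha)) < 1e-12
    # at alpha = alpha_c the two saddles coalesce: eta_b = 1 - alpha_c = 1/k
    assert abs((1.0 - alpha_c) - 1.0/k) < 1e-15
```
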
\subsubsection{The critical phase} \label{sec:critical-phase} When $\alpha$ is exactly $\alpha_c = (k - 1)/k$, the saddle points $\eta_b$ and $\eta_a$ coalesce into a double saddle point $\eta_c \equiv \eta_a = 1/k$, at which the second derivative vanishes along with the first one. The expansions of $A(\eta)$ and $B(\eta)$ are as in (\ref{eq:2})-(\ref{eq:4}) with $\alpha = \alpha_{c}$: \begin{align} A(\eta_c + u) & \simeq - k^2 u + k^3 u^2 + O(u^3) \\ \begin{split} B(\eta_c + u) & \simeq \frac{1}{k-1} + \frac{1}{k(k-1)} \log k + \frac{k-1}{k} \log \frac{k-1}{k} \\ & \quad - \frac{k^3}{(k - 1)^2} \frac{u^{3}}{3} + \frac{k^4 (k - 2)}{(k - 1)^3} \frac{u^{4}}{4} + O(u^{5}) \end{split} \end{align} Using \eqref{eq:double-saddle} we obtain the following result: \begin{align} \label{eq:10} {\cal Z}_{\alpha_c n} = & \frac {n!} {\Gamma\left( \frac{k-1}{k} n +1 \right)} \frac{e^{\frac {n} {k - 1} }\left(\frac{k-1}{k}\right)^{\frac{k-1}{k} n}} { \left[ (k - 2)! \right]^{\frac n {k(k - 1)}}} \, \frac {3^{1/6} \Gamma (2/3) (k - 1)^{4/3}} {2 \pi \, n^{2/3}}\\ \simeq & \,n^{\frac{n}{k}}\, \left[\frac{e}{(k-2)!}\right]^{\frac{n}{k(k-1)}}\, \frac {3^{1/6} \Gamma (2/3) (k - 1)^{4/3}} {2 \pi \, n^{2/3}}\, . \end{align} This formula, for $k=2$, can in principle be compared with the result presented in~\cite[Proposition VIII.11]{FlaSed08}, but unfortunately there is a discrepancy in the numerical pre-factor. \subsection{The canonical ensemble} According to the definition~(\ref{eq:fp-AB}) for the number of forests with $p = \alpha n$ trees $\mathcal Z_{\alpha n}$, by evaluating the integral $I$ defined in~(\ref{intI}) by the saddle-point method, when the saddle point $\eta^*(\alpha)$ is simple and thus away from the critical point $\alpha_c$, we get the following asymptotic expansion for a large number of vertices $n$: \begin{equation} \label{eq:1} \mathcal Z_{\alpha n} \simeq \frac{n!
}{\Gamma(\alpha n+1)} \, \left[ \frac{k-1}{k!} \right]^{n \frac{1-\alpha }{k - 1}} \frac{e^{n B(\eta^{*})}}{\sqrt{2 \pi n B''(\eta^{*})}} \left[ A(\eta^{*}) + \frac{1}{n} C(\eta^{*}) \right]\, . \end{equation} We define the entropy density $s(\alpha)$ as \begin{equation} s(\alpha) = \frac{1}{n} \log \frac{\mathcal Z_{\alpha n}}{n!} \end{equation} so that we can recover the partition function $\mathcal Z(t)$ by a Legendre transformation \begin{align} \mathcal Z(t) & = \sum_p \mathcal Z_p\, t^p \\ & \simeq \int_{0}^{1} d\alpha\, \mathcal Z_{\alpha n}\, t^{n\alpha} \\ & = n! \int_{0}^{1} d\alpha\, \exp \left\{ n [s(\alpha) + \alpha \log t] \right\} \end{align} that can be evaluated for large $n$ once more by the saddle-point method. Calling $\bar\alpha(t)$ the mean density of trees at given $t$, we have: \begin{equation} s'(\bar\alpha(t)) + \log t = 0\, . \end{equation} From \eqref{eq:1} we see that $s(\alpha)$ still contains an $\alpha$-dependent term of order $\log n$, \begin{equation} s(\alpha) \simeq - \alpha \log n - \alpha \log \alpha + \alpha + \frac{\alpha - 1}{k - 1} \log \left[ \frac{k!}{k-1} \right] + B(\eta^*(\alpha)) \end{equation} that would shift the solution $\bar\alpha$ down to $0$. By the rescaling \begin{equation} t = n\, \tilde t \end{equation} which is usual on the complete graph in order to obtain a correct thermodynamic scaling, we can reabsorb this factor. The saddle-point equation now reads \begin{equation} s'(\bar\alpha) + \log n + \log \tilde t = 0 \end{equation} whose solution is \begin{equation} \bar\alpha = \begin{cases} \frac{k-1}{k}\, \left[(k-2)!\right]^\frac{1}{k-1} \, \tilde t & \hbox{for\quad}\tilde t < \tilde t_c \\ 1 - \frac{k-1}{k!} \frac{1}{\tilde t^{k-1}} & \hbox{for\quad} \tilde t > \tilde t_c \end{cases} \end{equation} where $\tilde t_c = [(k-2)!]^{-1/(k-1)}$.
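As a sanity check of the two branches (a small numerical sketch, not part of the derivation; the sample values of $k$ and $\tilde t$ are arbitrary), one can verify that $\bar\alpha(\tilde t)$ reaches $\alpha_c = (k-1)/k$ continuously from both sides at $\tilde t_c$, and that it is correctly undone by the inverse map given next:

```python
import math

def alpha_of_t(tt, k):
    # mean density of trees as a function of the rescaled weight (both branches)
    tc = math.factorial(k-2) ** (-1.0/(k-1))
    if tt < tc:
        return (k-1)/k * math.factorial(k-2)**(1.0/(k-1)) * tt
    return 1.0 - (k-1)/math.factorial(k) * tt**(-(k-1))

def t_of_alpha(a, k):
    # inverse map, again with one branch per phase
    if a < (k-1)/k:
        return k/(k-1) * math.factorial(k-2)**(-1.0/(k-1)) * a
    return ((k-1)/(math.factorial(k)*(1.0-a)))**(1.0/(k-1))

for k in (2, 3, 4):
    tc = math.factorial(k-2) ** (-1.0/(k-1))
    # the two branches join continuously at tc, where alpha = alpha_c = (k-1)/k
    low  = (k-1)/k * math.factorial(k-2)**(1.0/(k-1)) * tc
    high = 1.0 - (k-1)/math.factorial(k) * tc**(-(k-1))
    assert abs(low - (k-1)/k) < 1e-12 and abs(high - (k-1)/k) < 1e-12
    # the maps invert each other on both sides of the transition
    for tt in (0.3*tc, 2.5*tc):
        assert abs(t_of_alpha(alpha_of_t(tt, k), k) - tt) < 1e-9
```
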
And by inversion \begin{equation} \tilde t = \begin{cases} \frac{k}{k-1}\, \left[(k-2)!\right]^{-\frac{1}{k-1}} \, \bar\alpha & \hbox{for\quad} \bar\alpha < \bar\alpha_c \\ \left( \frac{k - 1}{k!}\frac{1}{ 1 - \bar\alpha}\right)^\frac{1}{k-1} & \hbox{for\quad} \bar\alpha > \bar\alpha_c\, . \end{cases} \end{equation} In the ordinary graph case, this means \begin{equation} \bar\alpha = \begin{cases} \frac{\tilde t}{2} & \tilde t < 1 \\ 1 - \frac{1}{2\tilde t} & \tilde t > 1 \end{cases} \qquad \tilde t = \begin{cases} 2 \bar\alpha & \bar\alpha < \frac{1}{2} \\ \frac{1}{2(1 - \bar\alpha)} & \bar\alpha > \frac{1}{2}\, . \end{cases} \end{equation} \section{Size of the hypertrees} \label{sec:t} We have shown in the previous sections that the system admits two different phases. We now want to characterize these two regimes. Our field-theoretical approach provides us with a full algebra of observables, as polynomials in the Grassmann fields, which we could study systematically. However, it is interesting to note that some of these observables have a rephrasing in terms of combinatorial properties of the forests (cf.~\cite{CarSokSpo07}). Furthermore, we are led by the results of~\cite{DenGarSok07} to investigate the possibility of a transition of percolative nature, with the emergence of a giant component in the {\em typical} forest for a given ensemble. A possibility of this sort is captured by the mean square size of the trees in the forest, as the following argument shows, at least at a heuristic level. If all trees have size of order 1 in the large $n$ limit (say, with mean $a$ and variance $\sigma^2$, both of order 1), then the sum of the squares of the sizes of the trees in a forest scales as $(a+\sigma^2/a)\, n$. If, conversely, in the large $n$ limit one tree occupies a finite fraction $p$ of the whole graph, the same sum as above would scale as $p^2 n^2 + \mathcal{O}(n)$.
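The two scalings in this heuristic argument can be illustrated by a toy computation (purely illustrative: the size distributions below are hand-picked, not drawn from the actual forest measure):

```python
# all trees of size O(1): half of size 1, half of size 3, so mean a = 2 and
# variance sigma^2 = 1; the sum of squared sizes is (a + sigma^2/a) n exactly
m = 10_000
sizes = [1, 3] * (m // 2)
n = sum(sizes)
a, var = 2.0, 1.0
assert sum(s*s for s in sizes) == (a + var/a) * n

# one giant tree covering a fraction p of the n vertices, the rest singletons:
# the same sum is p^2 n^2 + O(n)
p, n = 0.3, 100_000
giant = int(p*n)
sizes = [giant] + [1]*(n - giant)
assert abs(sum(s*s for s in sizes) - (p*n)**2) <= n
```
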
Furthermore, it turns out that the combinatorial observable above has a very simple formulation in the field theory, corresponding to the natural susceptibility for the fermionic fields, as we will see in a moment. Let's start our analysis with the un-normalized expectation \begin{align} t\,\langle \bar\psi_i \psi_i \rangle = & \,t\,\int \calD (\bar\psi, \psi)\,\bar\psi_i \psi_i\, \exp \left( - {\cal H} \right) \\ = & \, {\cal Z}(t) \end{align} because the insertion of the operator $ \bar\psi_i \psi_i$ simply marks the vertex $i$ as a root of a hypertree, and in a spanning forest every vertex can be chosen as the root of a hypertree. If we now sum over the index $i$ we gain a factor $|T|$ for each hypertree. Therefore we have: \begin{equation} \label{eq:t-lin} t \, \<\bar\psi\psi \> = t \,\sum_{i \in V} \<\bar\psi_i\psi_i \> = \sum_{F \in \cal{F}} t^{K(F)} \sum_{T \in F} |T| \prod_{A \in T} w_A = n\, {\cal Z}(t) \end{equation} as in each spanning hyperforest the total size of the hypertrees is the number of vertices in the graph, that is $n$. By expanding in the parameter $t$ and by taking the $p$-th coefficient we get the relation \begin{equation} \frac{1}{{\cal Z}_p} \frac{\langle \bar\psi \psi \, {\cal U}^{p-1}\rangle_{t=0}}{(p-1)!} = n\, . \label{W1} \end{equation} For the un-normalized two-point function \begin{equation} \langle \bar\psi_i \psi_j \rangle = \int \calD (\bar\psi, \psi)\,\bar\psi_i \psi_j\, \exp \left( - {\cal H} \right)\, \end{equation} we know (see~\cite{CarSokSpo07}) that \begin{equation} t\, \< \bar\psi_i \psi_j \> = \sum_{\substack{F\in \calF \\ i,j \text{ connected}}} t^{K(F)} \sum_{T \in F} \prod_{A \in T} w_A\, . 
\end{equation} As $i$ and $j$ are connected if and only if they belong to the same hypertree, by summing over both indices $i$ and $j$ we gain a factor $|T|^2$ for each hypertree: \begin{equation} \label{eq:t-square} t\,\<(\bar\psi, \J \psi) \> = t\,\sum_{i,j \in V} \<\bar\psi_i\psi_j \> = \sum_{F\in \calF} t^{K(F)} \sum_{T \in F} |T|^2 \prod_{A \in T} w_A \, . \end{equation} The effect of this observable is to introduce, for each hypertree in the spanning forest, an extra weight given by the square of its size. The average of the square-size of hypertrees in the microcanonical ensemble of hyperforests with fixed number $p$ of hypertrees is easily obtained from the previous relation by expanding in the parameter $t$ and by taking the $p$-th coefficient, so that \begin{equation} \<\, |T|^2\>_p := \frac{1}{\calZ_{p}} \frac{\< (\bar\psi, \J \psi)\, \calU^{p-1} \>_{t=0}} {(p - 1)!}\, . \end{equation} The very same method of the preceding Section can be used to evaluate this quantity. Still in a mean-field description, we have: \begin{align} \label{eq:47} & \< (\bar\psi, \J \psi) \, \calU^{p-1} \>_{t=0} = \\ & = \int \calD (\bar\psi, \psi)\ (\bar\psi, \J \psi) \ \calU^{p-1} \exp\left\{ n \frac {(\bar\psi\psi)^{k-1}} {(k-1)!} \right\} \left[ 1 - (\bar\psi, \J \psi) \frac {(\bar\psi\psi)^{k-2}}{(k-2)!} \right] \\ & = \int \calD (\bar\psi, \psi)\ \bar\psi\psi \ \calU^{p-1} \exp\left\{n \frac {(\bar\psi\psi)^{k-1}} {(k-1)!} \right\} \\ & = n! \oint \frac{d\xi}{2\pi i} \, \frac{1}{\xi^{n + 1}} \ \xi \left[ \xi + (1 - k) \frac {\xi^k} {k!} \right]^{p-1} \exp \left\{n \frac {\xi^{k-1}} {(k-1)!} \right\} \\ & = n! \left[ \frac{k-1}{ k!}\right]^{n \frac {1-\alpha } {k - 1}} \oint \frac{d\eta}{2 \pi i}\ \tilde{A}(\eta) \ e^{n B(\eta)} , \end{align} where now \begin{equation} \tilde{A}(\eta) = \frac{1}{\eta (1 - \eta)} , \end{equation} and $B(\eta)$ is the same as before. To evaluate this integral we again use the saddle point method.
Note that, since the function $B(\eta)$ is unchanged, so are the saddle points. Using the general expansion \eqref{eq:single-saddle} for $p= \alpha n$ we have \begin{align} \label{eq:t-square-saddle} \<\, |T|^2\>_{\alpha n} = \frac{1}{\calZ_{\alpha n} }\,\frac{\< (\bar\psi, \J \psi)\, \calU^{\alpha n-1}\>_{t=0}}{\Gamma(\alpha n)} = \alpha n\, \frac{ \tilde{A}(\eta^*) + \frac1n \tilde{C}(\eta^*) + O\left(\frac{1}{n^2}\right) }{ A(\eta^*) + \frac1n C(\eta^*) + O\left(\frac{1}{n^2}\right)} . \end{align} In the low temperature phase we have $A(\eta_{a}) = 0$, so in order to get the leading term we also need $C(\eta^*)$; since \begin{equation} \tilde{A}(\eta_{a}) = \frac{k^2}{k-1} \quad \text{and} \quad C(\eta_{a}) = \frac{\alpha (k - 1)}{(\alpha - \alpha_c)^2}, \end{equation} \eqref{eq:t-square-saddle} at leading order gives \begin{equation} \<\, |T|^2\>_{\alpha n} \, \simeq \, \alpha n^2 \,\frac{\tilde{A}(\eta_{a})}{C(\eta_{a})} = n^2 \left( \frac{\alpha_c - \alpha}{\alpha_c} \right)^2 \label{giant} \end{equation} so that, as soon as $\alpha < \alpha_c$, a giant hypertree appears in the typical forest, which occupies on average a fraction $1-\alpha/ \alpha_c$ of the whole graph. In the high temperature phase we have instead \begin{equation} \tilde{A}(\eta_b) = \frac{1}{\alpha(1 - \alpha)} \quad \text{and} \quad A(\eta_{b}) = k \frac{\alpha - \alpha_{c}}{1 - \alpha} , \end{equation} giving (always at leading order) \begin{equation} \<\, |T|^2\>_{\alpha n}\, \simeq \, \alpha n \, \frac{\tilde{A}(\eta_{b})}{A(\eta_{b})} = \frac{n}{k } \frac{1}{\alpha - \alpha_{c}}\, . \end{equation} Therefore \begin{equation} \lim_{n\to \infty} \frac{1}{n^2}\, \<\, |T|^2\>_{\alpha n} = \begin{cases} \left( \frac{\alpha_c - \alpha}{\alpha_c} \right)^2 & \hbox{for\, } \alpha \leq \alpha_c \\ 0 & \hbox{for\, } \alpha \geq \alpha_c \end{cases} \end{equation} is an order parameter, but it is represented as the expectation value of a non-local operator.
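For the ordinary-graph case $k=2$ this picture can be checked independently of the saddle-point machinery: since unrooted labeled trees on $m$ vertices are counted by $t_m = m^{m-2}$, one has $\<\,|T|^2\>_p = p\, [x^n]\, W U^{p-1} / [x^n]\, U^p$ with $U(x) = \sum_m m^{m-2} x^m/m!$ and $W(x) = \sum_m m^m x^m/m!$. A minimal exact-arithmetic sketch (the values of $n$ and $p$ are illustrative, and the order-of-magnitude thresholds deliberately loose):

```python
from fractions import Fraction
from math import factorial

def series(N, weight):
    # EGF coefficients weight(m)/m! for m = 1..N (no constant term)
    return [Fraction(0)] + [Fraction(weight(m), factorial(m)) for m in range(1, N+1)]

def mul(a, b, N):
    # product of two truncated power series, up to degree N
    c = [Fraction(0)] * (N+1)
    for i, ai in enumerate(a):
        if ai:
            for j in range(N+1-i):
                if b[j]:
                    c[i+j] += ai * b[j]
    return c

def mean_square_size(n, p):
    U = series(n, lambda m: m**(m-2) if m >= 2 else 1)  # unrooted trees
    W = series(n, lambda m: m**m)                       # trees weighted by |T|^2
    Up = [Fraction(1)] + [Fraction(0)]*n                # U^0
    for _ in range(p-1):
        Up = mul(Up, U, n)
    return p * mul(W, Up, n)[n] / mul(U, Up, n)[n]

# exact small cases: one tree on 3 vertices has |T|^2 = 9; on 4 vertices with
# two trees the average is 144/15 = 48/5
assert mean_square_size(3, 1) == 9
assert mean_square_size(4, 2) == Fraction(48, 5)

# low phase (alpha = 1/4): the summed squared sizes are of order n^2;
# high phase (alpha = 3/4): they are only of order n
n = 48
assert mean_square_size(n, n//4) > Fraction(n*n, 10)
assert mean_square_size(n, 3*n//4) < Fraction(n*n, 10)
```
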
We shall see in the next Section how to construct a local order parameter. \section{The symmetry breaking} \label{sec:the_symmetry_breaking} In this section we will describe the phase transition in terms of the breaking of the global $\osp(1|2)$ supersymmetry. According to the general strategy (see for example~\cite{Zinn-Justin}), let's add an exponential weight with an external source $h$ coupled to the variation of the fields \eqref{eq:osp-fermionica} and \eqref{eq:osp-fermionicb}: \begin{equation} h\, \sum_{i \in V} (1 - t \,\bar\psi_{i} \psi_{i}) \, = \, h\, (n - t\, \bar\psi\psi) \, . \end{equation} The partition function now becomes: \begin{equation} \label{eq:45} \calZ (t, h) = \int \calD (\bar\psi, \psi)\ e^{- \calH[\psi,\bar\psi] - h(n - t \bar\psi\psi)} . \end{equation} We have chosen to add the exponential weight with a minus sign because, in this way, when $t$ is sent to zero with the product $h\,t$ kept fixed, we get, aside from a vanishing trivial factor, the generating function of rooted hyperforests. More generally, for finite $t$ and $h$, we have that $\calZ(t,h)$ can be expressed as a sum over spanning hyperforests with a modified weight \begin{equation} \calZ(t,h) = \sum_{F \in {\cal F}} \prod_{T\in F} t\, e^{- h |T|}\, (1+ h\, |T|) \end{equation} which is positive, at any $n$, only for $h\ge 0$.
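The positivity statement is elementary but easy to illustrate numerically (a trivial sketch, with arbitrary sample values): for $h \ge 0$ each per-hypertree factor $t\, e^{-h|T|}(1+h\,|T|)$ is positive whatever the sizes, while for $h<0$ it changes sign as soon as $|T| > 1/|h|$.

```python
import math

def tree_factor(t, h, size):
    # per-hypertree weight t exp(-h|T|)(1 + h|T|) of the modified partition function
    return t * math.exp(-h*size) * (1 + h*size)

# positive for all sizes when h >= 0 ...
assert all(tree_factor(1.0, h, s) > 0 for h in (0.0, 0.5, 2.0) for s in range(1, 200))
# ... but negative for h < 0 once |T| exceeds 1/|h|
assert tree_factor(1.0, -0.1, 20) < 0
```
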
On the $k$-uniform complete hypergraph the partition function~(\ref{eq:45}) is expressed as \begin{multline} \calZ (t, h) = \int \calD (\bar\psi, \psi)\ \exp \left\{ t \, {\cal U} + h t \, \bar\psi\psi \right\} \\ \exp \left\{ - n h + n \frac{(\bar\psi\psi)^{k-1}}{(k-1)!} \right\} \left[ 1 - \frac{(\bar\psi\psi)^{k-1}}{(k-2)!} \right] \end{multline} To work in the micro-canonical ensemble we again expand in powers of $t$ \begin{equation} \calZ(t, h) = \sum_{p = 0}^{n} \calZ_p(h) \, t^p, \end{equation} where each term of the above expansion gives the partition function at fixed number of components: \begin{align} \calZ_p(h) = & \frac{1}{p!}\,\int \calD (\bar\psi, \psi)\ \left(\,{\cal U} + h \bar\psi\psi \right)^{p} \nonumber \\ & \quad \exp\left\{- n h + n \frac{(\bar\psi\psi)^{k-1}}{(k-1)!}\right\} \left[ 1 - \frac{(\bar\psi\psi)^{k-1}}{(k-2)!} \right] \\ = & \frac{e^{-nh}}{p!}\, \langle \left(\,{\cal U} + h \bar\psi\psi \right)^{p} \rangle_{t=0}\, . \end{align} Following the very same steps as in the previous section, we can write this expression in terms of a complex integral: \begin{equation} \label{eq:zp-with-h} \calZ_{p}(h) = \frac {n!} {\Gamma(\alpha n+1)} \left[\frac{ k-1}{ k!}\right]^{n \frac {1-\alpha} {k - 1}}\, I(\alpha,h) \end{equation} with \begin{equation} I(\alpha,h) := \oint \frac {d\eta} {2 \pi i} \, A(\eta) \, e^{n B(\eta, h)} \end{equation} where $A(\eta)$ is the same as in~(\ref{Axi}) while \begin{equation} \label{eq:54} B(\eta, h) := - h + \frac k {k - 1} \eta + \alpha \log(1 + h - \eta) + \frac {\alpha - 1} {k - 1} \log \eta . \end{equation} $I(\alpha, h)$ can again be evaluated with the same technique as above. Let's call $\eta^{*}(h)$ the position of the relevant saddle point, which is the accessible solution of the saddle point equation \begin{equation} \left. \frac{\de }{ \de \eta} B(\eta, h)\,\right|_{\eta =\eta^*(h)} = 0 \, .
\end{equation} If $h > 0$ the two solutions are real and distinct for every value of $\alpha$, and the accessible saddle is simple and always turns out to be the one closer to the origin. In the following we are going to consider all the functions $A$, $B$, $C$ and $D$ as evaluated on $\eta^{*}(h)$, and therefore as functions of the single parameter $h$: \begin{align} & A(h) \equiv A(\eta^{*}(h)) \quad & B(h) \equiv B(\eta^{*}(h), h) \\ & C(h) \equiv C(\eta^{*}(h), h) \quad & D(h) \equiv D(\eta^{*}(h), h) \end{align} The asymptotic behaviour of \eqref{eq:zp-with-h} is given by the general formula \eqref{eq:single-saddle}: \begin{equation} \calZ_{\alpha n}(h) \propto \frac{e^{n B(h)}}{\sqrt{2 \pi n B''(h)}} \left[ A(h) + \frac{C(h)}{n} + O\left(\frac{1}{n^2}\right) \right]\, . \end{equation} The entropy density is obtained by taking the logarithm of the partition function $\calZ_{\alpha n}(h)$ \begin{equation} s(\alpha, h) := \frac{1}{n} \log \frac{\calZ_{\alpha n}(h)}{n!}\, . \end{equation} The magnetization is then minus the first derivative of the entropy \begin{align} m(\alpha,h) = - \frac{\de s }{ \de h} = & 1 - \alpha \, \frac{ \langle \bar\psi \psi \,\left( {\cal U} + h \bar\psi \psi\right)^{\alpha n -1}\rangle_{t=0}}{\langle \left( {\cal U} + h \bar\psi \psi\right)^{\alpha n}\rangle_{t=0}} \\ = & 1 - \alpha \,n\, \frac{ \langle \bar\psi_i \psi_i \,\left( {\cal U} + h \bar\psi \psi\right)^{\alpha n -1}\rangle_{t=0}}{\langle \left( {\cal U} + h \bar\psi \psi\right)^{\alpha n}\rangle_{t=0}} \end{align} which is written as the expectation of a local operator, and if we set $h=0$ in this formula we get \begin{equation} m(\alpha,0) = 1 - \frac{1}{{\cal Z}_{\alpha n}(0)} \,\frac{ \langle \bar\psi_i\psi_i \, {\cal U}^{\alpha n-1}\rangle_{t=0}}{\Gamma(\alpha n)} = 0 \end{equation} because of~(\ref{W1}).
In order to take first the limit of a large number of vertices we use the asymptotic expression for $\calZ_{\alpha n}(h)$ to get \begin{multline} m(\alpha,h) = -\frac{\de B(h) }{ \de h} +\frac{1}{2n} \frac{1}{B''(h)} \frac{\de B''(h) }{ \de h} \\ - \frac{1}{n} \frac{\frac{\de A(h) }{\de h} + \frac1n \frac{\de C(h) }{ \de h} + O(\frac1{n^2})} { A(h) + \frac1n C(h) + O\left(\frac{1}{n^2}\right)} . \end{multline} The vanishing of $A(0)$ in the low temperature phase ($\alpha < \alpha_{c}$) has the consequence that the two limits $n \to \infty$ and $h \to 0$ do not commute, indeed: \begin{align} & \lim_{n \to \infty} \lim_{h \to 0} m(\alpha,h) = \left. - \frac{\de B(h) }{ \de h}\,\right|_{h=0} - \frac{1}{C(0)} \left. \frac{\de A(h) }{ \de h}\,\right|_{h=0} = 0 \\ & \lim_{h \to 0} \lim_{n \to \infty} m(\alpha,h) = \left. - \frac{\de B(h) }{ \de h}\,\right|_{h=0} = \frac{\alpha_{c} - \alpha}{\alpha_{c}} \geq 0 . \end{align} Remark that the magnetization $m$ vanishes at the critical point linearly, and not with critical exponent $1/2$ as is common in mean-field theory, the reason being that here the order parameter is not linear but quadratic in the fundamental fields. In the high temperature phase $A(0) \neq 0$ and the two limits above commute: \begin{equation} \lim_{n \to \infty} \lim_{h \to 0} m(\alpha,h) = \lim_{h \to 0} \lim_{n \to \infty} m(\alpha,h) = \left. - \frac{\de B(h) }{ \de h}\,\right|_{h=0} = 0 . \end{equation} In the study of phase transitions the thermodynamic limit $n \to \infty$ has to be taken first. Indeed, ergodicity is broken in the thermodynamic limit, and only then does a residual spontaneous magnetization appear even when the external field vanishes. Remark that both the free energy and the magnetization vary continuously passing from one phase to the other.
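The non-commutativity of the two limits can also be checked numerically. For the $B(\eta,h)$ of~(\ref{eq:54}), the stationarity condition $\partial_\eta B = 0$ reduces to the quadratic $k\eta^2 - \left[k(1+h)+1-\alpha k\right]\eta + (1-\alpha)(1+h) = 0$ (a short computation not displayed in the text), and along the saddle $m = -\,\mathrm{d}B/\mathrm{d}h = 1 - \alpha/(1+h-\eta^*(h))$, since by the envelope theorem only the explicit $h$-dependence contributes. A minimal sketch of the $n\to\infty$-first order of limits:

```python
import math

def eta_star(k, alpha, h):
    # accessible saddle of B(eta, h): the root of the quadratic closer to the origin
    b = -(k*(1+h) + 1 - alpha*k)
    return (-b - math.sqrt(b*b - 4*k*(1-alpha)*(1+h))) / (2*k)

def magnetization(k, alpha, h):
    # m = -dB/dh along the saddle; only the explicit h-dependence contributes
    return 1 - alpha/(1 + h - eta_star(k, alpha, h))

k = 3
alpha_c = (k-1)/k
# low phase: a spontaneous magnetization (alpha_c - alpha)/alpha_c survives as h -> 0+
assert abs(magnetization(k, 0.4, 1e-7) - (alpha_c - 0.4)/alpha_c) < 1e-4
# high phase: the magnetization vanishes with the field
assert abs(magnetization(k, 0.8, 1e-7)) < 1e-4
```
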
The longitudinal susceptibility $\chi_L$ \begin{align} \chi_L(\alpha,h) = \, & \frac{\de^2 s (\alpha,h) }{ \de h^2} \\ = \, & \alpha \, (\alpha n -1) \, \frac{ \langle (\bar\psi \psi)^2 \,\left( {\cal U} + h \bar\psi \psi\right)^{\alpha n -2}\rangle_{t=0}}{\langle \left( {\cal U} + h \bar\psi \psi\right)^{\alpha n }\rangle_{t=0}} - n \,\left[1 - m(\alpha,h)\right]^2 \end{align} can be obtained from the magnetization: \begin{multline} \chi_L(\alpha,h) = - \frac{\de m (\alpha,h) }{ \de h} = \frac{\de^{2} B(h) }{ \de h^{2}} + \frac{1}{2n} \frac{1}{B''(h)^{2}} \left(\frac{\de B''(h) }{ \de h}\right)^{2} \\ - \frac{1}{2n} \frac{1}{B''(h)} \frac{\de^{2} B''(h) }{ \de h^{2}} + \frac{1}{n} \frac {\frac{\de^{2} A(h) }{ \de h^{2}} + \frac1n \frac{\de^{2} C(h) }{ \de h^{2}} + O(\frac1{n^2})} { A(h) + \frac1n C(h) + O(\frac1{n^2})} \\ - \frac{1}{n} \left[ \frac {\frac{\de A(h) }{ \de h} + \frac1n \frac{\de C(h) }{ \de h} + O(\frac1{n^2})} { A(h) + \frac1n C(h) + O(\frac1{n^2})} \right]^{2} + O\left(\frac1{n^2}\right) , \end{multline} and taking the two limits in the appropriate order we get \begin{align} \lim_{h \to 0} \lim_{n \to \infty} \chi_L(\alpha,h) = \left. \frac{\de^{2} B(h) }{ \de h^{2}} \,\right|_{h=0} = \begin{cases} - \frac{\alpha(1 - \alpha)}{\alpha_{c}^{2}} \left( \frac{\alpha_{c}- \alpha}{\alpha_{c}} \right)^{-1} & \alpha < \alpha_{c} \\ - \frac{1- \alpha_{c}}{\alpha_{c}} \left( \frac{\alpha - \alpha_{c}}{\alpha_{c}} \right)^{-1} & \alpha > \alpha_{c} \\ \end{cases} \end{align} which shows that the susceptibility diverges at the transition, with a singularity $\chi(\alpha) \sim |\alpha-\alpha_c|^{-1}$, so that the transition is second order. Remark that the longitudinal susceptibility appears to be negative. This means that in our model of spanning hyperforests some events are negatively correlated.
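The limiting values, including the overall negative sign, can be reproduced numerically by differentiating $B(h) \equiv B(\eta^{*}(h),h)$ twice with respect to $h$ (a numerical sketch; the quadratic solved below for the saddle follows from $\partial_\eta B(\eta,h)=0$, a short computation not displayed in the text, and the step sizes and tolerances are illustrative):

```python
import math

def eta_star(k, alpha, h):
    # accessible saddle of B(eta, h): root of
    # k eta^2 - [k(1+h)+1-alpha k] eta + (1-alpha)(1+h) = 0 closer to the origin
    b = -(k*(1+h) + 1 - alpha*k)
    return (-b - math.sqrt(b*b - 4*k*(1-alpha)*(1+h))) / (2*k)

def B(k, alpha, h):
    e = eta_star(k, alpha, h)
    return -h + k/(k-1)*e + alpha*math.log(1+h-e) + (alpha-1)/(k-1)*math.log(e)

def chi_L(k, alpha, h=1e-4, d=5e-5):
    # second h-derivative of B along the saddle, by central differences
    return (B(k, alpha, h+d) - 2*B(k, alpha, h) + B(k, alpha, h-d)) / d**2

k = 3
alpha_c = (k-1)/k
pred_low  = -(0.4*(1-0.4)/alpha_c**2) * alpha_c/(alpha_c - 0.4)   # = -1.35
pred_high = -((1-alpha_c)/alpha_c) * alpha_c/(0.8 - alpha_c)      # = -2.5
assert abs(chi_L(k, 0.4) - pred_low)  < 0.05
assert abs(chi_L(k, 0.8) - pred_high) < 0.05
# the longitudinal susceptibility is negative in both phases
assert chi_L(k, 0.4) < 0 and chi_L(k, 0.8) < 0
```
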
It is well known that in the model of spanning trees on a finite connected graph the indicator functions for the events in which an edge belongs to the tree are negatively correlated. This was proven by Feder and Mihail~\cite{FM} in the wider context of balanced matroids (and uniform weights). See also~\cite{40new} for a purely combinatorial proof of the stronger Rayleigh condition, in the weighted case. The random cluster model for $q>1$ is known to be positively associated. When $q<1$ negative association is conjectured to hold. For an excellent description of the situation about negative association see~\cite{Pemantle}. Still following the analogy with magnetic systems, let us introduce the transverse susceptibility \begin{equation} \chi_T(\alpha, h) := \frac{2}{\calZ_{\alpha n}(h) }\,\frac{e^{-nh}\,\< (\bar\psi, \J \psi)\, \left({\cal U} + h \bar\psi\psi\right)^{\alpha n-1}\>_{t=0}}{n\, \Gamma(\alpha n)} \end{equation} which, by comparison with~(\ref{eq:t-square-saddle}), provides, at $h=0$, \begin{equation} \chi_T(\alpha, 0) = \frac{2}{n} \, \<\, |T|^2\>_{\alpha n} \, . \end{equation} In Appendix~\ref{sec:b} we prove the identity \begin{equation} m(\alpha, h) = \frac{h}{2}\, \chi_T(\alpha, h)\, . \end{equation} This relation is the bridge between the average square-size of hypertrees and the local order parameter. At finite $n$, when the symmetry-breaking field $h$ is set to zero, we get \begin{equation} m(\alpha, 0) = 0 \end{equation} in agreement with formula~(\ref{W1}) and \begin{equation} \chi_T(\alpha, 0) = \lim_{h\to 0} 2\,\frac{m(\alpha, h) }{h} = \left. 2\,\frac{\partial m(\alpha, h) }{\partial h}\right|_{h=0} = - 2\,\chi_L(\alpha, 0) \end{equation} which should be compared with the analogous formula for the $O(N)$-model, where it is \begin{equation} \chi_T(\alpha, 0) = (N-1)\, \chi_L(\alpha, 0) \end{equation} and, in our case, as the symmetry is $\osp(1|2)$, $N$ should be set to $-1$, as we have one bosonic direction and two fermionic ones, which give a negative contribution.
The leading $n$ contribution is \begin{equation} \frac{1}{2}\,\chi_T(\alpha, 0) = - \, \chi_L(\alpha, 0) = \begin{cases} n\,\left( \frac{\alpha_c - \alpha}{\alpha_c}\right)^2 & \hbox{for\, } \alpha \leq \alpha_c \\ \frac{1}{k}\, \frac{1}{\alpha- \alpha_c} & \hbox{for\, } \alpha \geq \alpha_c \, . \end{cases} \end{equation} But, for $ \alpha \leq \alpha_c$, if we first compute the large $n$ limit and afterwards send $h \to 0$, we know that we get a non-zero magnetization, and therefore the transverse susceptibility diverges as \begin{equation} \chi_T(\alpha, h) \sim 2\, \frac{m(\alpha, 0)}{h} = \frac{2}{h}\, \frac{\alpha_c - \alpha}{\alpha_c} \end{equation} which corresponds to the idea that there are massless excitations, the Goldstone modes associated with the symmetry breaking. Remark that, at finite $h$, the transverse susceptibility does not increase with $n$, which shows that the average square-size of hypertrees stays finite. The longitudinal susceptibility instead \begin{equation} \chi_L(\alpha, 0) \sim \, - \frac{1}{m(\alpha,0)}\,\frac{\alpha\,(1-\alpha)}{\alpha_c^2} \end{equation} diverges only at $\alpha = \alpha_c$, where the magnetization vanishes. \section{A symmetric average} \label{sec:c} At the breaking of an ordinary symmetry the equilibrium states can be written as a convex superposition of pure, clustering states, which can be obtained from one another by applying the broken symmetry transformations. The pure state we have defined in this Section uses a breaking field in the only direction at our disposal in which the Grassmann components are null. A more general breaking field would involve a direction in the superspace to which we are unable to give a combinatorial meaning. However, by taking the average of these fields in the invariant Berezin integral we obtain a different, non-pure but symmetric, low-temperature state. In this Section we shall set $t=1$.
The most general breaking field, with total strength $h$, but arbitrary direction in the super-space, would give a weight \begin{equation} h\, \sum_{i=1}^n \left[ \lambda\,( 1 - \bar\psi_i \psi_i) +\, \bar \epsilon \psi_i + \, \bar \psi_i \epsilon\,\right] \end{equation} where $(\lambda; \bar{\epsilon}, \epsilon)$ is a unit vector in the $1|2$ supersphere, i.e.~$\epsilon$ and $\bar{\epsilon}$ are Grassmann coordinates and $\lambda$ is a formal variable satisfying the constraint $$ \lambda^2 + 2 \, \bar \epsilon \, \epsilon = 1 .$$ Let us introduce the normalized generalized measure \begin{equation} d\Omega := d\lambda\, d\epsilon \, d\bar \epsilon \, \delta\left( \lambda^2 + 2 \, \bar \epsilon \, \epsilon - 1\right)\, . \end{equation} A symmetric equilibrium measure can be constructed by considering the factor \begin{align} & F[h; \bar\psi, \psi] := \\ & = \int d\Omega\, \exp \left\{ \, -\, h\,\sum_{i=1}^n \left[ \lambda\,( 1 - \bar\psi_i \psi_i) +\, \bar \epsilon \psi_i + \, \bar \psi_i \epsilon\,\right]\right\} \\ & = \int d\epsilon \, d\bar \epsilon \, \exp \left\{ \bar \epsilon\, \epsilon - h\, \sum_{i=1}^n \left[ (1 - \bar \epsilon\, \epsilon) ( 1 - \bar\psi_i \psi_i) + \bar \epsilon \psi_i + \bar \psi_i \epsilon\right]\right\}\, \\ & = \left[ 1 - h^2\, ( \bar\psi, \J \psi) + h\,(n - \bar \psi \psi) \right]\, \exp \left[ -\,h\,(n - \bar \psi \psi) \right]\, , \end{align} where only the last expression is specific to our model, while the previous ones are the appropriate expressions for the model of unrooted spanning hyperforests on an arbitrary weighted hypergraph. This function is symmetric, for every strength $h$, as one can easily check that \begin{equation} Q_\pm\, F= 0 \, . \end{equation} If we send $h\to 0$ this factor is simply $1$, but if we first take the $n\to \infty$ limit and then $h\to 0$ the expectation value of non-symmetric observables can be different. The partition function is not changed, because of the identity~(\ref{aw1}).
Indeed \begin{equation} \< F \> = \< \exp \left[ -\,h\,(n - \bar \psi \psi) \right] \> = \< 1 \>_h \end{equation} for un-normalized expectation values, because of the relation between the transverse susceptibility and the magnetization, equation~(\ref{aw1}), which is \begin{align} 0 = &\, \< \left[- h^2\, ( \bar\psi, \J \psi) + h\,(n - \bar \psi \psi) \right]\, \exp \left[ -\,h\,(n - \bar \psi \psi) \right] \> \\ = &\, h\, \< \left[- h\, ( \bar\psi, \J \psi) + \,(n - \bar \psi \psi) \right] \>_h \, \end{align} for every $h$, and therefore also for the derivatives with respect to $h$. But consider for example the magnetization. The insertion of the given factor $F$ in the un-normalized expectation provides the relation \begin{align} & \< (n - \bar \psi \psi)\>^{\mathrm{sym}}_h := \< (n - \bar \psi \psi)\, F\> = \< (n - \bar \psi \psi)\>_h + \\ & \; + \<\left[ - h^2\, ( \bar\psi, \J \psi) + h\,(n - \bar \psi \psi) \right]\,\left( - \frac{\partial}{\partial h} \right) \exp \left[ -\,h\,(n - \bar \psi \psi) \right] \> \\ & = \, 2\, \< (n - \bar \psi \psi)\>_h - 2\, h\, \<( \bar\psi, \J \psi)\>_h = 0 \, . \end{align} Similarly \begin{align} \< ( \bar\psi, \J \psi)\>^{\mathrm{sym}}_h & = \<( \bar\psi, \J \psi)\, F \> \\ & = \<( \bar\psi, \J \psi) \>_h - h \frac{\partial}{\partial h} \<( \bar\psi, \J \psi) \>_h \\ & = \<( \bar\psi, \J \psi) \>_h - h \frac{\partial}{\partial h} \frac{\< (n - \bar \psi \psi)\>_h }{h} \\ & = \<( \bar\psi, \J \psi) \>_h + \left( \frac{1 }{h} - \frac{\partial}{\partial h} \right) \,\< (n - \bar \psi \psi)\>_h \\ & = 2\, \<( \bar\psi, \J \psi) \>_h + \< (n - \bar \psi \psi)^2\>_h \\ & = \sum_{i,j} \,\< \bar\psi_i \psi_j + \bar\psi_j \psi_i + (1 - \bar \psi_i \psi_i)(1 - \bar \psi_j \psi_j)\>_h \\ & = \sum_{i,j} \,\< 1 - f_{\{i,j\}} \>_h \end{align} is the {\em total}, disconnected, susceptibility, that is, the sum of the longitudinal and transverse disconnected ones.
And also \begin{equation} \< (n - \bar \psi \psi)^2 \>^{\mathrm{sym}}_h = - \,2\, \<( \bar\psi, \J \psi) \>_h - \< (n - \bar \psi \psi)^2\>_h \end{equation} so that \begin{equation} 2\, \<( \bar\psi, \J \psi) \>^{\mathrm{sym}}_h + \< (n - \bar \psi \psi)^2\>^{\mathrm{sym}}_h = 2\, \<( \bar\psi, \J \psi) \>_h + \< (n - \bar \psi \psi)^2\>_h \end{equation} as must be the case for a symmetric observable. \section{Conclusions} \label{sec:conclusions} We have found that in the $k$-uniform complete hypergraph with $n$ vertices, in the limit of large $n$, the structure of the hyperforests with $p$ hypertrees has an abrupt change when $p = \alpha_c n$ with $\alpha_c = (k-1)/k$. This change of behaviour is related to the appearance of a {\em giant} hypertree which covers a finite fraction of all the vertices. As the number of hyperedges in the hyperforests with $p$ hypertrees is $(n-p)/(k-1)$, this means that this change occurs when the number of hyperedges becomes $n/[k(k-1)]$, which is exactly the critical number of hyperedges in the phase transition of random hypergraphs at fixed number of hyperedges~\cite{SPS}. If ${\cal Z}(t)$ is the generating partition function of hyperforests, where the coefficient of $t^p$ is the total number of those with $p$ hypertrees, in the limit of large $n$ there is a corresponding singularity at $t_c = n\, [(k-2)!]^{-1/(k-1)}$. In our Grassmann formulation this singularity can be described as a second-order phase transition associated with the breaking of a global $\osp(1|2)$ supersymmetry which is non-linearly realised. The equilibrium state occurring in the broken phase can be studied by the introduction of an explicit breaking of the supersymmetry. \appendix \section{Saddle-point constants } \label{sec:appendix} Let us use the notation $$ X_{(n)} := \left. \frac{\partial^n}{\partial \eta^n} X(\eta) \right |_{\eta = \eta^*}\, .
$$ For the simple saddle point we report the combinations $C$ and $D$ in terms of functions $A$ and $B$ \begin{align} \label{eq:single-saddle-c-and-d} C & = \frac{1}{24 B_{(2)}^3} \left[ 12 A_{(1)} B_{(2)} B_{(3)} - 12 A_{(2)} B_{(2)}^2 + A \left( 3 B_{(2)} B_{(4)} - 5 B_{(3)} ^2 \right) \right] \\ D & = \frac{1}{1152 B_{(2)}^6} \left[ 385\, A \,B_{(3)}^4 + 144\, B_{(2)}^4 A_{(4)} \right. \nonumber \\ & \quad \left. - 210\, B_{(2)} B_{(3)}^2 \left( 4 A_{(1)} B_{(3)} + 3 A B_{(4)} \right) \right. \nonumber \\ & \quad \left. + 21\, B_{(2)}^2 \left( 40 A_{(2)} B_{(3)}^2 + 40 A_{(1)} B_{(3)} B_{(4)} + 5 A B_{(4)}^2 + 8 A B_{(3)} B_{(5)} \right) \right. \nonumber \\ & \quad \left. - 24\, B_{(2)}^3 \left( 20 A_{(3)} B_{(3)} + 15 A_{(2)} B_{(4)} + 6 A_{(1)} B_{(5)} + A B_{(6)} \right) \right] \, . \end{align} For the double saddle points the necessary combinations are instead \begin{align} \label{eq:double-saddle-c-and-d} \tilde C &= \frac {A \, B_{(4)}} {B_{(3)}^{4/3}} \frac{\gamma_{4}}{4!} - \frac{A_{(1)}}{B_{(3)}^{1/3}} \gamma_{1} \\ \tilde D &= - \frac{A_{(3)}}{B_{(3)}} \frac{\gamma_{3}}{3!} + \frac{A_{(2)} B_{(4)}} {B_{(3)}^{2}} \frac{\gamma_{6}}{2 \cdot 4!} + \frac{A_{(1)} B_{(5)}}{B_{(3)}^{2}} \frac{\gamma_{6}}{5!} - \frac{A_{(1)} B_{(4)}^{2}}{B_{(3)}^{3}} \frac{\gamma_{9}}{2 (4!)^{2}} \nonumber \\ & \quad - \frac{A \, B_{(4)} B_{(5)}}{B_{(3)}^{3}} \frac{\gamma_{9}}{4!5!} + \frac{A \, B_{(4)}^{3}}{B_{(3)}^{4}} \frac{\gamma_{12}}{3! (4!)^{3}} + \frac{A \, B_{(6)}}{B_{(3)}^{2}} \frac{\gamma_{6}}{6!} \, . \end{align} The constants $\gamma_{k}$ are given by \begin{align} \gamma_{k} := & -\frac{1}{\pi} \sin \left( 2 \pi \, \frac{1+k}{3} \right) \int_0^{\infty} du\, u^k \, e^{-\frac{u^3}{3!}} \\ = & - \frac{(3!)^{\frac{1 + k}{3}}}{3\pi} \sin\left[2 \pi \left(\frac{1 + k}{3}\right)\right] \Gamma\left(\frac{1+k}{3}\right)\, . 
\end{align} \section{Ward identities} \label{sec:b} As a result of the underlying symmetry, there are relations among the correlation functions, called {\em Ward identities}~\cite{Zinn-Justin}. In this Appendix we give a more direct derivation of one of them which simply uses integration by parts. By definition \begin{equation} {\cal U}(\xi) \, = \, \xi + (1 - k) \frac {\xi^k} {k!} \end{equation} so that \begin{equation} \frac{\partial {\cal U}} {\partial \xi} = 1 - \frac {\xi^{k-1}}{(k-2)!} \end{equation} and therefore the un-normalized expectation value of $\bar\psi \psi$ in the presence of the symmetry breaking is \begin{align} & t\, \< \bar\psi \psi \>_h \, = \nonumber \\ & = \, n! \oint \frac {d\xi} {2 \pi i} \, \frac1{\xi^{n+1}}\, \exp \left\{t\, {\cal U} + n \frac {\xi^{k-1}} {(k-1)!} - h\, t \, (n - \xi) \right\} \, t\,\xi\, \frac{\partial {\cal U}} {\partial \xi} \\ & = \, n! \oint \frac {d\xi} {2 \pi i} \, \frac1{\xi^{n+1}}\, \exp \left\{ n \frac {\xi^{k-1}} {(k-1)!} - h\, t \, (n - \xi) \right\} \,\xi\, \frac{\partial} {\partial \xi} \,\exp \left\{t\, {\cal U} \right\}\, . \end{align} Now perform an integration by parts: \begin{align} & t\, \< \bar\psi \psi \>_h \, = \nonumber \\ & = \, {n!} \oint \frac {d\xi} {2 \pi i} \, \frac1{\xi^{n+1}}\, \left[ {n} \left( 1 - \frac{\xi^{k-1}}{(k-2)!} \right) - h\, t\,\xi \right] \, \nonumber \\ & \qquad \qquad \qquad \qquad \qquad \exp \left\{t\, {\cal U} + n \frac {\xi^{k-1}} {(k-1)!} - h\, t \, (n - \xi) \right\} \\ & = \,n\, {\cal Z}(t, h)\, - \, h\, t\, \< (\bar\psi, \J \psi) \>_h\, .
\end{align} Therefore \begin{equation} \,n\, {\cal Z}(t, h)\, - \, t\, \< \bar\psi \psi \>_h \, = \, h\, t\, \< (\bar\psi, \J \psi) \>_h\, \label{aw1} \end{equation} which, expanded in powers of $t$, implies that \begin{multline} \,\frac{n}{p}\, \< ({\cal U} + h \bar\psi \psi)^{p}\>_{t=0} \ - \, \< \bar\psi \psi\, ({\cal U} + h \bar\psi \psi)^{p-1}\>_{t=0} \, = \, \\ h\, \< (\bar\psi, \J \psi) ({\cal U} + h \bar\psi \psi)^{p-1} \>_{t=0}\, \end{multline} or, for $p=\alpha n$, \begin{multline} 1 \ - \, \alpha\, \frac{ \< \bar\psi \psi\, ({\cal U} + h \bar\psi \psi)^{\alpha n-1}\>_{t=0} }{ \< ({\cal U} + h \bar\psi \psi)^{\alpha n}\>_{t=0}}\, = \, \\ \alpha\, h\,\frac{ \< (\bar\psi, \J \psi) ({\cal U} + h \bar\psi \psi)^{\alpha n-1} \>_{t=0} }{ \< ({\cal U} + h \bar\psi \psi)^{\alpha n}\>_{t=0}}\, \label{aw1m} \end{multline} which means that, in the microcanonical ensemble, for every $h$, we have \begin{equation} m(\alpha, h) \, = \, \frac{h}{2}\, \chi_T(\alpha, h)\, . \label{ward} \end{equation}
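Both appendix computations lend themselves to quick numerical checks. The Python sketch below is illustrative only and not part of the paper: it first compares the integral representation of the constants $\gamma_k$ with their closed form, and then verifies the integration-by-parts step behind the Ward identity by extracting the coefficient of $\xi^n$ from truncated power series; the values of $n$, $k$, $t$, $h$ are arbitrary.

```python
import math

# --- check 1: the closed form of the constants gamma_k ---------------------
def gamma_k_integral(k, umax=30.0, steps=200000):
    # -(1/pi) sin(2*pi*(1+k)/3) * integral_0^inf u^k exp(-u^3/3!) du,
    # evaluated by the trapezoidal rule (the integrand decays very fast)
    s = math.sin(2.0 * math.pi * (1.0 + k) / 3.0)
    h = umax / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        f = u ** k * math.exp(-u ** 3 / 6.0)
        total += f if 0 < i < steps else 0.5 * f
    return -s / math.pi * total * h

def gamma_k_closed(k):
    # -(3!)^((1+k)/3)/(3 pi) * sin(2 pi (1+k)/3) * Gamma((1+k)/3)
    a = (1.0 + k) / 3.0
    return -(6.0 ** a) / (3.0 * math.pi) * math.sin(2.0 * math.pi * a) * math.gamma(a)

for kk in (0, 1, 4):
    assert abs(gamma_k_integral(kk) - gamma_k_closed(kk)) < 1e-5

# --- check 2: the integration by parts behind the Ward identity ------------
def series_mul(a, b, N):
    # truncated product of two power series given as coefficient lists
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a[: N + 1]):
        for j, bj in enumerate(b[: N + 1 - i]):
            c[i + j] += ai * bj
    return c

def series_exp(f, N):
    # g = exp(f) as a truncated series, from the relation g' = f' g
    g = [0.0] * (N + 1)
    g[0] = math.exp(f[0])
    for m in range(1, N + 1):
        g[m] = sum(j * f[j] * g[m - j] for j in range(1, m + 1) if j < len(f)) / m
    return g

n, k, t, h = 8, 3, 0.7, 0.3
# F(xi) = t U(xi) + n xi^(k-1)/(k-1)! - h t (n - xi),  U(xi) = xi + (1-k) xi^k/k!
F = [0.0] * (n + 1)
F[0] += -h * t * n
F[1] += t + h * t
F[k - 1] += n / math.factorial(k - 1)
F[k] += t * (1 - k) / math.factorial(k)
eF = series_exp(F, n)

lhs = [0.0] * (n + 1)          # t xi U'(xi) = t xi - t xi^k/(k-2)!
lhs[1] += t
lhs[k] += -t / math.factorial(k - 2)

rhs = [0.0] * (n + 1)          # n (1 - xi^(k-1)/(k-2)!) - h t xi
rhs[0] += n
rhs[k - 1] += -n / math.factorial(k - 2)
rhs[1] += -h * t

cl = series_mul(lhs, eF, n)[n]   # coefficient of xi^n before integrating by parts
cr = series_mul(rhs, eF, n)[n]   # coefficient of xi^n after integrating by parts
assert abs(cl - cr) < 1e-9 * max(1.0, abs(cl))
```

The coefficients of $F$ are accumulated with `+=` so that the same check also runs for $k=2$, where the linear term of $t\,{\cal U}$ and the $n\,\xi^{k-1}/(k-1)!$ term land on the same power of $\xi$.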
Any number of conditions can contribute to smoking problems in a wood-burning fireplace. In some cases, cleaning or a few relatively simple measures may improve conditions; in other cases, further evaluation and in-depth repairs may be necessary. The first step in most situations is to arrange for an inspection or cleaning of the chimney flue. A certified chimney sweep is usually the right professional to contact for chimney cleaning or for an investigation of fireplace or chimney issues. Many sweeps now have equipment to record video of the chimney flue, so that you can see exactly what problems lie in the flue, as well as confirm that it was cleaned properly. Here are some practical approaches to persistent smoke problems: Raise the hearth. A fireplace opening that is too large relative to the opening of the chimney flue can lead to poor drafting (the movement of the gases produced by the burning wood up the chimney). To experiment with this approach, a sheet-metal hearth can be supported on bricks placed on the existing hearth; building up the base of the fireplace reduces the effective opening. Add a hood. A temporary metal hood held across the top of the fireplace opening reduces its size in the same way. Try various designs and sizes; if the hood works well, a permanent metal hood can be installed. Extend the chimney. The taller the chimney, the better the draft. A good draft is generally provided by a chimney that is twenty feet or more higher than the hearth. Several metal chimney sections can be temporarily installed on top of an existing chimney to check whether the draft improves before a more permanent (and costly) fix is attempted. If the current chimney is short, a good draft may never develop. Trim surrounding trees. Wavering smoke patterns above the chimney may indicate that tall trees are causing a downdraft (air forced down the chimney by the wind). The surrounding trees should be trimmed and/or the chimney flue height extended to stop this condition.
Add a chimney cap or flueguard. If a downdraft seems to affect the exhaust gases, adding a chimney cap or flueguard of metal or stone may help deflect the air before it enters the chimney. These recommendations for correcting a smoking fireplace may be only the first step in some situations. If there are major fireplace inadequacies or the chimney is deteriorated, more drastic measures will be needed. A less costly option is to retrofit a masonry fireplace or chimney with a gas-powered fireplace coupled with a new metal flue inside the defective chimney, or to use an electric hearth and seal off the old chimney. The only practical option in grim cases may be to rebuild the fireplace and/or chimney. Simply keeping fires small may also help.
\begin{document} \newtheorem{theorem}{Theorem} \newtheorem{definition}{Definition} \newtheorem{corollary}{Corollary} \newtheorem{example}{Example} \newtheorem{lemma}{Lemma} \newtheorem{proposition}{Proposition} \maketitle \setlength{\parindent}{0pt} \begin{abstract} In this paper we study the inversion in an ellipse, which generalizes the classical inversion with respect to a circle, and some of its properties. We also study the inversion in an ellipse of lines, ellipses and other curves. Finally, we generalize the Pappus Chain and the Pappus Chain Theorem with respect to ellipses. \\ \textbf{Keywords:} Inversion, elliptic inversion, elliptic inversion of curves, Elliptic Pappus Chain. \end{abstract} \section{Introduction} In this paper we study the elliptic inversion, which was introduced in \cite{CHI}, and some properties related to the distance between elliptic inverse points, the cross ratio, harmonic conjugates and the elliptic inversion of different curves. Elliptic inversion generalizes the classical inversion, which has many properties and applications, see \cite{BLA, OGI, PED}.\\ The outline of this paper is as follows. In Section 2 we define the inversion with respect to an ellipse. In Section 3 we study some basic properties of the inversion in an ellipse and its relations with the cross ratio and harmonic conjugates. We also study the cartesian coordinates of elliptic inverse points. In Section 4 we describe the inversion in an ellipse of lines and conics. Finally, in Section 5 we introduce the Elliptic Pappus Chain and apply the inversion in an ellipse to prove the generalized Pappus Chain Theorem. \section{Elliptic Inversion} \begin{definition}\label{dinve} Let $E$ be an ellipse centered at a point $O$ with foci $F_1$ and $F_2$ in $\mathbb{R}^2$.
The inversion in the ellipse $E$, or Elliptic Inversion with respect to $E$, is the mapping $\psi:\mathbb{R}^2\setminus \{O\} \longmapsto \mathbb{R}^2\setminus \{O\}$ defined by $\psi(P)=P'$, where $P'$ lies on the ray $\stackrel{\longrightarrow}{OP}$ and $OP\cdot OP'=(OQ)^2$, where $Q$ is the point of intersection of the ray $\stackrel{\longrightarrow}{OP}$ and the ellipse $E$. \end{definition} The point $P'$ is said to be the \emph{elliptic inverse} of $P$ in the ellipse $E$, or with respect to the ellipse $E$; $E$ is called the \emph{ellipse of inversion}, $O$ is called the \emph{center of inversion}, and the number $OQ=w$ is called the \emph{radius of inversion}, see Figure \ref{fig:elipsepartes}. The inversion with respect to the ellipse $E$, center of inversion $O$ and radius of inversion $w>0$ is denoted by $\mathcal{E}(O,w)$. Unlike the classical case, here the radius of inversion is not constant. \begin{figure}[h] \begin{center} \psset{xunit=1.0cm,yunit=1.0cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-2.8,-1.6)(4.2,2.16) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-2.6,2.1){$E$} \psline(0,0)(3.72,1.6) \psline[linestyle=dashed,dash=2pt 2pt](-2.5,0)(2.5,0) \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](2,0) \rput[bl](2,0.12){\blue{$F_2$}} \psdots[dotstyle=*,linecolor=blue](-2,0) \rput[bl](-2,0.12){\blue{$F_1$}} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.3,0.12){\blue{$O$}} \psdots[dotstyle=*,linecolor=blue](3.72,1.6) \rput[bl](3.8,1.72){\blue{$P$}} \psdots[dotstyle=*,linecolor=blue](1.11,0.48) \rput[bl](0.7,0.58){\blue{$P'$}} \psdots[dotstyle=*,linecolor=blue](2.03,0.87) \rput[bl](1.9,1){\blue{$Q$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse.} \label{fig:elipsepartes} \end{figure} The elliptic inversion is an involutive mapping, i.e., $\psi\left(\psi\left(P\right)\right)=P$. The fixed points are the points on the ellipse $E$.
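The definition translates directly into coordinates: with $O$ at the origin and $E$ given by $x^2/a^2+y^2/b^2=1$, the squared radius of inversion in the direction of $P=(u,v)$ is $w^2 = a^2b^2(u^2+v^2)/(b^2u^2+a^2v^2)$, and $P' = (w^2/OP^2)\,P$. The following Python sketch (illustrative only, not part of the paper; the sample points are arbitrary) checks numerically that $\psi$ is an involution, that points of $E$ are fixed, and that exterior points map to interior points.

```python
import math

def elliptic_inverse(p, a, b):
    # P' lies on the ray OP with OP * OP' = OQ^2, where Q is the
    # intersection of the ray OP with the ellipse x^2/a^2 + y^2/b^2 = 1
    u, v = p
    r2 = u * u + v * v
    if r2 == 0.0:
        raise ValueError("inversion is not defined at the center O")
    w2 = (a * a * b * b) * r2 / (b * b * u * u + a * a * v * v)  # w^2 = OQ^2
    return (w2 * u / r2, w2 * v / r2)

a, b = 2.5, 1.5              # the ellipse used in the figures
P = (3.2, 1.1)               # an arbitrary exterior point
Pp = elliptic_inverse(P, a, b)

# psi is an involution: psi(psi(P)) = P
Q = elliptic_inverse(Pp, a, b)
assert math.isclose(Q[0], P[0]) and math.isclose(Q[1], P[1])

# points of E are fixed
e = (a * math.cos(0.7), b * math.sin(0.7))
fe = elliptic_inverse(e, a, b)
assert math.isclose(fe[0], e[0]) and math.isclose(fe[1], e[1])

# an exterior point maps to an interior point: OP' < w
w = math.sqrt((a * a * b * b) * (P[0] ** 2 + P[1] ** 2)
              / (b * b * P[0] ** 2 + a * a * P[1] ** 2))
assert w < math.hypot(*P)        # P is exterior
assert math.hypot(*Pp) < w       # P' is interior
```

Note that $w^2u/(u^2+v^2) = a^2b^2u/(b^2u^2+a^2v^2)$, so this implementation agrees with the cartesian formula derived later in the paper.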
Indeed, if $F$ is a fixed point, $\psi(F)=F$, then $OF\cdot OF=(OF)^2=(OQ)^2$, so $OF=OQ$, and since $Q$ lies on the ray $\stackrel{\longrightarrow}{OF}$, we conclude that $F=Q$. \begin{proposition}\label{intexteriorelipse} If $P$ is in the exterior of $E$ then $P'$ is interior to $E$, and conversely. \end{proposition} \begin{proof} Let $P$ be an exterior point of $\mathcal{E}(O,w)$; then $w<OP$. If $P'$ is the elliptic inverse of $P$, then $OP\cdot OP'=w^2$. Hence $w^2=OP\cdot OP'>w \cdot OP'$ and $OP'<w$. \end{proof} As in the classical case, the inversion in an ellipse is not defined at the center of inversion $O$. However, we can add to the Euclidean plane a single point at infinity $O_{\infty}$, which is the inverse of the center of any elliptic inversion. This extended plane is denoted by $\mathbb{R}^2_{\infty}$. We now have a one-to-one map of our extended plane. \begin{definition}\label{dinve2} Let $E$ be an ellipse centered at a point $O$ in $\mathbb{R}^2_{\infty}$. The elliptic inversion in this ellipse is the mapping $\psi:\mathbb{R}^2_{\infty}\longmapsto \mathbb{R}^2_{\infty}$ defined by $\psi(P)=P'$, where $P'$ lies on the ray $\stackrel{\longrightarrow}{OP}$ and $(OP)(OP')=(OQ)^2$, where $Q$ is the point of intersection of the ray $\stackrel{\longrightarrow}{OP}$ and the ellipse $E$, $\psi(O_\infty)=O$ and $\psi(O)=O_\infty$. \end{definition} \section{Basic Properties} \begin{theorem}\label{diseliptica} Let $P$ and $T$ be different points. Let $P'$ and $T'$ be their respective elliptic inverse points with respect to $\mathcal{E}(O,w)$ and $\mathcal{E}(O,u)$. Then \begin{enumerate}[i.] \item If $P$, $T$ and $O$ are not collinear, then \[ P'T'=\frac{\sqrt{\left(w^2-u^2\right)\left(w^2(OT)^2-u^2(OP)^2\right)+w^2u^2(PT)^2}}{OP\cdot OT}. \] \item If $P$, $T$ and $O$ are collinear, then \[P'T'=\frac{w^2PT}{OP\cdot OT}.\] \end{enumerate} \end{theorem} \begin{proof} \emph{i.} Suppose that $P$, $T$ and $O$ are not collinear.
Then $P', T'$ and $O$ are not also collinear, see Figure \ref{fig:disinveliptica}. \begin{figure}[h] \begin{center} \newrgbcolor{qqwuqq}{0 0.39 0} \psset{xunit=0.85cm,yunit=0.85cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-3.86,-1.6)(2.86,2.46) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](2.24,2.12){$E$} \psline[linestyle=dashed,dash=3pt 3pt](-2.5,0)(2.5,0) \psline[linecolor=qqwuqq](0,0)(1.46,2.02) \psline[linecolor=qqwuqq](0,0)(-3.73,2.03) \psline[linecolor=qqwuqq](-3.73,2.03)(0.68,0.94) \psline[linecolor=qqwuqq](-0.92,0.5)(1.46,2.02) \rput[tl](-0.16,0.4){$\alpha$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](2,0) \rput[bl](1.92,-0.36){\blue{$F_2$}} \psdots[dotstyle=*,linecolor=blue](-2,0) \rput[bl](-2.08,-0.34){\blue{$F_1$}} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](0.04,-0.24){\blue{$O$}} \psdots[dotstyle=*,linecolor=blue](-1.85,1) \rput[bl](-1.96,0.65){\blue{$Q$}} \psdots[dotstyle=*,linecolor=blue](1,1.35) \rput[bl](0.95,1.05){\blue{$S$}} \psdots[dotstyle=*,linecolor=blue](-0.92,0.5) \rput[bl](-0.98,0.16){\blue{$P$}} \psdots[dotstyle=*,linecolor=red](-3.73,2.03) \rput[bl](-3.66,2.14){\red{$P'$}} \psdots[dotstyle=*,linecolor=blue](1.46,2.02) \rput[bl](1.54,2.14){\blue{$T$}} \psdots[dotstyle=*,linecolor=red](0.68,0.94) \rput[bl](0.8,0.66){\red{$T'$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Distance and Inverse Points.} \label{fig:disinveliptica} \end{figure} Let $\alpha$ be the measure of the angle $\angle P'OT'$, then by law of cosines \begin{align}\label{ecuaa15} (P'T')^2=(OP')^2+(OT')^2-2 \cdot OP' \cdot OT' \cdot\cos \alpha \end{align} From $OP\cdot OP'=(OQ)^2=w^2$ and $OT\cdot OT'=(OS)^2=u^2$, we have $OP'=\frac{w^2}{OP}$ and $OT'=\frac{u^2}{OT}$, where $Q$ and $S$ are respectively the points of intersection of rays $\stackrel{\longrightarrow}{OP}$ and $\stackrel{\longrightarrow}{OT}$ with $E$, see Figure \ref{fig:disinveliptica}. 
Replacing these values in (\ref{ecuaa15}): \begin{align}\label{ecuaa16} (P'T')^2=\frac{w^4}{(OP)^2}+\frac{u^4}{(OT)^2}-2\frac{w^2u^2}{OP\cdot OT}\cos \alpha \end{align} As $\alpha$ is also the measure of the angle $\angle POT$, by the law of cosines \begin{align*} (PT)^2&=(OP)^2+(OT)^2-2\cdot OP\cdot OT\cdot \cos \alpha\\ 2\cos \alpha&=\frac{(OP)^2+(OT)^2-(PT)^2}{OP\cdot OT} \end{align*} Replacing in (\ref{ecuaa16}): \begin{align*} (P'T')^2&=\frac{w^4}{(OP)^2}+\frac{u^4}{(OT)^2}-\frac{w^2u^2}{OP\cdot OT}\left(\frac{(OP)^2+(OT)^2-(PT)^2}{OP\cdot OT}\right)\\ &=\frac{w^2(OT)^2\left(w^2-u^2\right)-u^2(OP)^2\left(w^2-u^2\right)+w^2u^2(PT)^2}{(OP)^2(OT)^2}\\ &=\frac{\left(w^2-u^2\right)\left(w^2(OT)^2-u^2(OP)^2\right)+w^2u^2(PT)^2}{(OP)^2(OT)^2} \end{align*} Hence \[P'T'=\frac{\sqrt{\left(w^2-u^2\right)\left(w^2(OT)^2-u^2(OP)^2\right)+w^2u^2(PT)^2}}{OP\cdot OT} \] \emph{ii.} When $P$, $T$ and $O$ are collinear, then $OQ=w=u=OS$. Therefore \[P'T'=\frac{w^2\cdot PT}{OP\cdot OT} \qedhere \] \end{proof} Note that if $E$ is a circle, then $OQ=w=u=OS$. Hence \begin{align*} P'T'&=\frac{\sqrt{\left(w^2-w^2\right)\left(w^2(OT)^2-w^2(OP)^2\right)+w^2w^2(PT)^2}}{(OP)(OT)}\\ &=\frac{\sqrt{w^4(PT)^2}}{OP\cdot OT}\\ &=\frac{w^2\cdot PT}{OP\cdot OT} \end{align*} where $w$ is the radius of the circle. \subsection{Inversion in an Ellipse and Cross Ratio} Suppose that $A, B, C$ and $D$ are four distinct points on a line $l$; we define their \emph{cross ratio} $\left\{AB, CD\right\}$ by \begin{align*} \left\{AB, CD\right\}=\frac{\overrightarrow{AC}\cdot\overrightarrow{BD}}{\overrightarrow{AD}\cdot\overrightarrow{BC}} \end{align*} where $\overrightarrow{AB}$ denotes the signed distance from $A$ to $B$. The cross ratio is invariant under inversion in a circle whose center is not any of the four points $A, B, C$ or $D$, see \cite{BLA}. However, the inversion in an ellipse does not preserve the cross ratio; see for example Figure \ref{fig:cocientedobleinveliptica}.
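The distance formula above can also be checked numerically. The Python sketch below (illustrative only, not part of the paper; the sample points are arbitrary) compares $P'T'$ computed directly from the definition of $\psi$ with the value given by the formula in case i, and with the simpler formula in the collinear case ii.

```python
import math

def elliptic_inverse(p, a, b):
    # P' on the ray OP with OP * OP' = OQ^2 (Q = ray-ellipse intersection)
    u, v = p
    r2 = u * u + v * v
    w2 = (a * a * b * b) * r2 / (b * b * u * u + a * a * v * v)
    return (w2 * u / r2, w2 * v / r2)

def radius_of_inversion(p, a, b):
    # w = OQ in the direction of p
    u, v = p
    r2 = u * u + v * v
    return math.sqrt((a * a * b * b) * r2 / (b * b * u * u + a * a * v * v))

a, b = 2.5, 1.5
P, T = (-0.92, 0.5), (1.46, 2.02)        # not collinear with O
Pp, Tp = elliptic_inverse(P, a, b), elliptic_inverse(T, a, b)
# as in the theorem: w is the radius of inversion for P, u the one for T
w, u = radius_of_inversion(P, a, b), radius_of_inversion(T, a, b)
OP, OT, PT = math.hypot(*P), math.hypot(*T), math.dist(P, T)

# case i: P, T and O not collinear
rhs = math.sqrt((w * w - u * u) * (w * w * OT * OT - u * u * OP * OP)
                + w * w * u * u * PT * PT) / (OP * OT)
assert math.isclose(math.dist(Pp, Tp), rhs, rel_tol=1e-9)

# case ii: P, T and O collinear (same ray, so the two radii coincide)
T2 = (2.0 * P[0], 2.0 * P[1])
d = math.dist(elliptic_inverse(P, a, b), elliptic_inverse(T2, a, b))
OT2 = math.hypot(*T2)
assert math.isclose(d, w * w * math.dist(P, T2) / (OP * OT2), rel_tol=1e-9)
```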
\begin{figure}[h] \begin{center} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-4.58,-1.76)(10,2.42) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-3.68,2.4){$E$} \psline[linestyle=dashed,dash=2pt 2pt](2.3,1.78)(-1.34,-0.26) \psline[linestyle=dashed,dash=2pt 2pt](-1.64,2)(3.7,-1.5) \psline[linestyle=dashed,dash=2pt 2pt](2.3,1.78)(3.7,-1.5) \psline[linestyle=dashed,dash=2pt 2pt](-1.64,2)(-1.34,-0.26) \psline[linestyle=dashed,dash=2pt 2pt](1.02,0.79)(-4.22,-0.82) \psline[linestyle=dashed,dash=2pt 2pt](-0.74,0.91)(1.16,-0.47) \psline[linestyle=dashed,dash=2pt 2pt](1.02,0.79)(1.16,-0.47) \psline[linestyle=dashed,dash=2pt 2pt](-0.74,0.91)(-4.22,-0.82) \rput[tl](7,1.1){$BD=6.38$} \rput[tl](4,1.7){$AC=4.17$} \rput[tl](7,1.7){$AD=3.57$} \rput[tl](4,1.1){$BC=2.28$} \rput[tl](4,0){$A'C'=5.48$} \rput[tl](7,0){$A'D'=1.27$} \rput[tl](4,-0.6){$B'C'=3.88$} \rput[tl](7,-0.6){$B'D'=2.35$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.2,-0.3){\blue{$O$}} \psdots[dotstyle=*,linecolor=blue](2.3,1.78) \rput[bl](2.38,1.9){\blue{$A$}} \psdots[dotstyle=*,linecolor=red](1.02,0.79) \rput[bl](1.16,0.56){\red{$A'$}} \psdots[dotstyle=*,linecolor=blue](-1.64,2) \rput[bl](-1.56,2.12){\blue{$B$}} \psdots[dotstyle=*,linecolor=blue](-1.34,-0.26) \rput[bl](-1.46,-0.6){\blue{$C$}} \psdots[dotstyle=*,linecolor=blue](3.7,-1.5) \rput[bl](3.78,-1.38){\blue{$D$}} \psdots[dotstyle=*,linecolor=red](-0.74,0.91) \rput[bl](-0.76,0.98){\red{$B'$}} \psdots[dotstyle=*,linecolor=red](-4.22,-0.82) \rput[bl](-4.32,-1.2){\red{$C'$}} \psdots[dotstyle=*,linecolor=red](1.16,-0.47) \rput[bl](0.88,-0.9){\red{$D'$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Elliptic Inversion and Cross Ratio.} \label{fig:cocientedobleinveliptica} \end{figure} \begin{align*} \left\{AB,CD\right\}&=\frac{AC\cdot BD}{AD\cdot BC}=\frac{4.17\cdot 6.38}{3.57\cdot 2.28}\approx 3.268, \\
\left\{A'B',C'D'\right\}&=\frac{A'C'\cdot B'D'}{A'D'\cdot B'C'}=\frac{5.48\cdot 2.35}{1.27\cdot 3.88}\approx 2.613. \end{align*} \subsection{Inversion in an Ellipse and Harmonic Conjugates} If $A$ and $B$ are two points on a line $l$, any pair of points $P$ and $Q$ on $l$ for which \begin{align*} \frac{AP}{PB}=\frac{AQ}{QB}, \end{align*} are said to divide $\overline{AB}$ \emph{harmonically}. The points $P$ and $Q$ are called \emph{harmonic conjugates with respect to $A$ and $B$}. It is clear that two distinct points $P$ and $Q$ are harmonic conjugates with respect to $A$ and $B$ if and only if $\left\{AB,PQ\right\}=1$. \begin{theorem}\label{armoinveliptica} Let $E$ be an ellipse with center $O$, and $\overline{Q_{1}Q}_{2}$ a diameter of $E$. Let $P$ and $P'$ be distinct points of the ray $\stackrel{\longrightarrow}{OQ}_{1}$, which divide the segment $\overline{Q_{1}Q}_{2}$ internally and externally, respectively. Then $P$ and $P'$ are harmonic conjugates with respect to $Q_{1}$ and $Q_{2}$ if and only if $P$ and $P'$ are elliptic inverse points with respect to $E$. \end{theorem} \begin{proof} Suppose that $P$ and $P'$ are harmonic conjugates with respect to $Q_{1}$ and $Q_{2}$. Then \begin{align*} \left\{Q_{1}Q_{2},PP'\right\}&=1,\\ \frac{Q_{1}P\cdot Q_{2}P'}{Q_{1}P'\cdot Q_{2}P}&=1. \end{align*} Since $P$ divides the segment $\overline{Q_{1}Q}_{2}$ internally and $P\in\stackrel{\longrightarrow}{OQ_{1}}$, we have $Q_{1}P=OQ_{1}-OP=w-OP$ and $Q_{2}P=OQ_{2}+OP=w+OP$. Moreover, $P'$ divides the segment $\overline{Q_{1}Q}_{2}$ externally and $P'\in \stackrel{\longrightarrow}{OQ_{1}}$, so $Q_{1}P'=OP'-OQ_{1}=OP'-w$ and $Q_{2}P'=OQ_{2}+OP'=w+OP'$. Hence \begin{align*} \frac{(w-OP)(w+OP')}{(OP'-w)(w+OP)}&=1,\\ (w-OP)(w+OP')&=(OP'-w)(w+OP). \end{align*} Simplifying this equation, we have $OP\cdot OP'=w^2$. Therefore $P$ and $P'$ are elliptic inverse points with respect to $E$.
Conversely, if $P$ and $P'$ are elliptic inverse points with respect to $\mathcal{E}(O,w)$, the proof is similar. \end{proof} \subsection{Inversion in an Ellipse and Cartesian Coordinates} \begin{theorem}\label{coordenadaselipse} Let $E$ be an ellipse with center $O$ and equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, where $a$ and $b$ are respectively the semi-major and semi-minor axes. Let $P=(u,v)$ and $P'=(x, y)$ be a pair of elliptic inverse points with respect to $E$. Then \begin{align}\label{ecuainvelipse} x&=\frac{a^2b^2u}{b^2u^2+a^2v^2},\\ y&=\frac{a^2b^2v}{b^2u^2+a^2v^2}. \end{align} \end{theorem} \begin{proof} Let $E$ be an ellipse with equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$. Suppose that $P=(u,v)$ is an exterior point of $E$. Let $T=(x_{1},y_{1})$ and $M=(x_{2},y_{2})$ be the points of contact of the tangent lines to $E$ from $P$, see Figure \ref{fig:coordenadasinvelipse}. Then the tangent lines $\stackrel{\longleftrightarrow}{PT}$ and $\stackrel{\longleftrightarrow}{PM}$ have the following equations \cite[p. 186]{LEM}: \begin{align}\label{coor1} b^2x_{1}x+a^2y_{1}y&=a^2b^2,\\ b^2x_{2}x+a^2y_{2}y&=a^2b^2.
\end{align} \begin{figure}[h] \begin{center} \newrgbcolor{qqwuqq}{0 0.39 0} \psset{xunit=0.9cm,yunit=0.9cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-6.12,-1.92)(5.46,2.78) \psaxes[labelFontSize=\scriptstyle,xAxis=true,yAxis=true,labels=none,Dx=1,Dy=1,ticksize=-2pt 0,subticks=2]{->}(0,0)(-6.12,-1.92)(5.46,2.78) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](2.44,2.52){$E$} \psline[linestyle=dashed,dash=3pt 3pt](-2.5,0)(2.5,0) \psline[linecolor=qqwuqq](0,0)(-5.1,2.3) \psplot{-6.12}{5.46}{(-8.8-3.05*x)/2.93} \psplot{-6.12}{5.46}{(--8.8-0.84*x)/5.7} \rput[tl](0.42,2.05){$T=(x_1,y_1)$} \rput[tl](-4.5,-0.6){$M=(x_2,y_2)$} \rput[tl](2.46,-0.04){$(a,0)$} \rput[tl](-2.32,-0.04){$(-a,0)$} \rput[tl](-4.98,2.6){$P=(u,v)$} \rput[tl](-1.52,1){$P'=(x',y')$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](0.12,0.12){\blue{$O$}} \psdots[dotstyle=*,linecolor=darkgray](-2.5,0) \psdots[dotstyle=*,linecolor=darkgray](2.5,0) \psdots[dotstyle=*,linecolor=blue](-5.1,2.3) \psdots[dotstyle=*,linecolor=red](-0.78,0.35) \psdots[dotstyle=*,linecolor=red](0.6,1.46) \psdots[dotstyle=*,linecolor=red](-2.17,-0.75) \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse and Cartesian Coordinates.} \label{fig:coordenadasinvelipse} \end{figure} Particularly $P=(u,v)$ satisfies these equations. Hence \begin{align}\label{coor2} b^2x_{1}u+a^2y_{1}v&=a^2b^2,\\ b^2x_{2}u+a^2y_{2}v&=a^2b^2.\label{coor3} \end{align} Equating equations (\ref{coor2}) and (\ref{coor3}) \begin{align*} b^2x_{1}u+a^2y_{1}v&=b^2x_{2}u+a^2y_{2}v,\\ -\frac{b^2u}{a^2v}&=\frac{y_{1}-y_{2}}{x_{1}-x_{2}}. \end{align*} Then the line $\stackrel{\longleftrightarrow}{TM}$ has slope $-\frac{b^2u}{a^2v}$. 
Therefore, $\stackrel{\longleftrightarrow}{TM}$ has the following equation \begin{align} y-y_{1}&=-\frac{b^2u}{a^2v}\left(x-x_{1}\right),\\ a^2vy-a^2vy_{1}&=-b^2ux+b^2ux_{1},\\ a^2vy+b^2ux&=b^2ux_{1}+a^2vy_{1}.\label{coor5} \end{align} Replacing (\ref{coor2}) in (\ref{coor5}), we have \begin{align}\label{coor6} a^2vy+b^2ux&=a^2b^2, \end{align} i.e., (\ref{coor6}) is the equation of the line $\stackrel{\longleftrightarrow}{TM}$. On the other hand, the line $\stackrel{\longleftrightarrow}{OP}$ has slope $\frac{v}{u}$, so its equation is $y=\frac{v}{u}x$. As $P'$ is the meeting point of the lines $\stackrel{\longleftrightarrow}{TM}$ and $\stackrel{\longleftrightarrow}{OP}$, then \begin{align*} a^2v\left(\frac{v}{u}x\right)+b^2ux&=a^2b^2\\ \left(a^2v^2+b^2u^2\right)x&=ua^2b^2\\ x&=\frac{ua^2b^2}{a^2v^2+b^2u^2} \end{align*} and \begin{align*} y&=\frac{va^2b^2}{a^2v^2+b^2u^2} \end{align*} When $P$ is an interior point of $E$, the proof is analogous. \end{proof} When $a=b=1$, i.e., when $E$ is a circle of radius $1$, we obtain \begin{align*} \psi:(u,v)&\longmapsto \left(\frac{u}{v^2+u^2},\frac{v}{v^2+u^2} \right) \end{align*} \section{Elliptic Inversion of Curves} In this section we study the inversion in an ellipse of lines, ellipses and other curves. If a point $P$ moves on a curve $\mathcal{C}$, and $P'$, the elliptic inverse of $P$ with respect to $E$, moves on a curve $\mathcal{C}'$, the curve $\mathcal{C}'$ is called the elliptic inverse of $\mathcal{C}$. It is evident that $\mathcal{C}$ is the elliptic inverse of $\mathcal{C}'$ in $E$. \begin{theorem}\label{invrelipse} \begin{enumerate}[i.] \item The elliptic inverse of a line $l$ which passes through the center of the elliptic inversion is the line itself. \item The elliptic inverse of a line $l$ which does not pass through the center of the elliptic inversion is an ellipse which passes through the center of inversion, see Figure \ref{fig:rectasinversionelipse2}.
\end{enumerate} \end{theorem} \begin{proof} \textit{i.} Let $E$ be an ellipse of inversion with equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ and $l$ a line with equation $Mx + Ny = 0$. Applying $\psi$ to $Mx + Ny = 0$ gives $Mx + Ny = 0$. Indeed \begin{align*} Mx + Ny &= 0\\ M\left(\frac{a^2b^2x}{b^2x^2+a^2y^2}\right)+N\left(\frac{a^2b^2y}{b^2x^2+a^2y^2}\right)&=0\\ Ma^2b^2x+Na^2b^2y&=0\\ Mx + Ny &= 0 \end{align*} \textit{ii.} Let $E$ be an ellipse of inversion with equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ and $l$ a line with equation $Mx + Ny + P = 0$ ($P\neq 0$). Applying $\psi$ to $Mx + Ny + P = 0$ gives $\frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{M}{P}x+\frac{N}{P}y=0$. Indeed \begin{align*} Mx + Ny + P &= 0\\ M\left(\frac{a^2b^2x}{b^2x^2+a^2y^2}\right)+N\left(\frac{a^2b^2y}{b^2x^2+a^2y^2}\right)+ P&=0\\ M\left(a^2b^2x\right)+N\left(a^2b^2y\right)+ \left(b^2x^2+a^2y^2\right)P&=0\\ \frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{M}{P}x+\frac{N}{P}y&=0 \end{align*} \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-4.02,-1.68)(4.54,3.32) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-3.36,2){$E$} \psplot[linecolor=qqttzz]{-4.02}{4.54}{(--7.75-3.44*x)/4.92} \pscustom[linecolor=ccqqqq]{\moveto(-0.36,1.04) \lineto(-0.33,1.09) \lineto(-0.29,1.15) \lineto(-0.24,1.22) \lineto(-0.17,1.29) \lineto(-0.07,1.38) \lineto(-0.01,1.42) \lineto(0.05,1.46) \lineto(0.09,1.49) \lineto(0.1,1.49) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \moveto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.16) 
\lineto(2.48,-0.16) \lineto(2.48,-0.17) \lineto(2.47,-0.17) \lineto(2.46,-0.17) \lineto(2.43,-0.19) \lineto(2.38,-0.21) \lineto(2.32,-0.23) \lineto(2.27,-0.25) \lineto(2.21,-0.26) \lineto(2.16,-0.28) \lineto(2.11,-0.29) \lineto(2.06,-0.31) \lineto(1.96,-0.33) \lineto(1.87,-0.34) \lineto(1.78,-0.36) \lineto(1.69,-0.37) \lineto(1.61,-0.37) \lineto(1.54,-0.38) \lineto(1.46,-0.38) \lineto(1.39,-0.38) \lineto(1.33,-0.38) \lineto(1.27,-0.38) \lineto(1.21,-0.38) \lineto(1.16,-0.37) \lineto(1.1,-0.37) \lineto(1.01,-0.36) \lineto(0.95,-0.35) \lineto(0.89,-0.34) \lineto(0.83,-0.33) \lineto(0.78,-0.32) \lineto(0.69,-0.3) \lineto(0.61,-0.28) \lineto(0.54,-0.26) \lineto(0.49,-0.24) \lineto(0.43,-0.22) \lineto(0.39,-0.2) \lineto(0.35,-0.19) \lineto(0.31,-0.17) \lineto(0.28,-0.16) \lineto(0.25,-0.14) \lineto(0.22,-0.13) \lineto(0.2,-0.12) \lineto(0.17,-0.11) \lineto(0.15,-0.09) \lineto(0.13,-0.08) \lineto(0.12,-0.07) \lineto(0.1,-0.07) \lineto(0.09,-0.06) \lineto(0.07,-0.05) \lineto(0.06,-0.04) \lineto(0.05,-0.03) \lineto(0.04,-0.03) \lineto(0.03,-0.02) \lineto(0.02,-0.01) \lineto(0.01,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.05,0.03) \lineto(-0.05,0.04) \lineto(-0.06,0.05) \lineto(-0.07,0.05) \lineto(-0.08,0.06) \lineto(-0.09,0.07) \lineto(-0.1,0.07) \lineto(-0.11,0.08) \lineto(-0.11,0.09) \lineto(-0.12,0.1) \lineto(-0.13,0.1) \lineto(-0.14,0.11) \lineto(-0.15,0.12) \lineto(-0.16,0.13) \lineto(-0.17,0.14) \lineto(-0.17,0.14) \lineto(-0.18,0.15) 
\lineto(-0.19,0.16) \lineto(-0.2,0.17) \lineto(-0.21,0.18) \lineto(-0.22,0.19) \lineto(-0.23,0.2) \lineto(-0.23,0.21) \lineto(-0.24,0.22) \lineto(-0.25,0.23) \lineto(-0.26,0.24) \lineto(-0.27,0.25) \lineto(-0.28,0.26) \lineto(-0.28,0.27) \lineto(-0.29,0.28) \lineto(-0.3,0.29) \lineto(-0.31,0.3) \lineto(-0.31,0.31) \lineto(-0.32,0.32) \lineto(-0.33,0.34) \lineto(-0.34,0.35) \lineto(-0.34,0.36) \lineto(-0.35,0.37) \lineto(-0.36,0.39) \lineto(-0.36,0.4) \lineto(-0.37,0.41) \lineto(-0.38,0.42) \lineto(-0.38,0.44) \lineto(-0.39,0.45) \lineto(-0.39,0.47) \lineto(-0.4,0.48) \lineto(-0.4,0.49) \lineto(-0.41,0.51) \lineto(-0.41,0.52) \lineto(-0.42,0.54) \lineto(-0.42,0.55) \lineto(-0.42,0.57) \lineto(-0.43,0.59) \lineto(-0.43,0.6) \lineto(-0.43,0.62) \lineto(-0.44,0.64) \lineto(-0.44,0.65) \lineto(-0.44,0.67) \lineto(-0.44,0.69) \lineto(-0.44,0.7) \lineto(-0.44,0.72) \lineto(-0.44,0.74) \lineto(-0.44,0.76) \lineto(-0.44,0.79) \lineto(-0.43,0.81) \lineto(-0.43,0.84) \lineto(-0.42,0.87) \lineto(-0.41,0.91) \lineto(-0.4,0.95) \lineto(-0.38,0.99) \lineto(-0.36,1.04) \lineto(-0.36,1.04) \lineto(-0.36,1.04) } \pscustom[linecolor=ccqqqq]{\moveto(0.22,1.56) \lineto(0.22,1.56) \lineto(0.27,1.58) \lineto(0.33,1.61) \lineto(0.39,1.63) \lineto(0.45,1.66) \lineto(0.52,1.68) \lineto(0.59,1.7) \lineto(0.67,1.72) \lineto(0.75,1.74) \lineto(0.84,1.76) \lineto(0.94,1.78) \lineto(0.98,1.78) \lineto(1.04,1.79) \lineto(1.09,1.8) \lineto(1.14,1.8) \lineto(1.2,1.8) \lineto(1.25,1.81) \lineto(1.31,1.81) \lineto(1.37,1.81) \lineto(1.43,1.81) \lineto(1.49,1.81) \lineto(1.56,1.81) \lineto(1.62,1.8) \lineto(1.68,1.8) \lineto(1.75,1.79) \lineto(1.82,1.78) \lineto(1.88,1.77) \lineto(1.95,1.76) \lineto(2.02,1.74) \lineto(2.09,1.73) \lineto(2.16,1.71) \lineto(2.22,1.69) \lineto(2.29,1.67) \lineto(2.36,1.64) \lineto(2.42,1.62) \lineto(2.49,1.59) \lineto(2.55,1.56) \lineto(2.61,1.53) \lineto(2.67,1.49) \lineto(2.73,1.46) \lineto(2.78,1.42) \lineto(2.84,1.38) \lineto(2.89,1.34) \lineto(2.98,1.26) 
\lineto(3.05,1.16) \lineto(3.12,1.07) \lineto(3.16,0.97) \lineto(3.19,0.87) \lineto(3.21,0.78) \lineto(3.21,0.68) \lineto(3.2,0.58) \lineto(3.18,0.49) \lineto(3.14,0.4) \lineto(3.09,0.32) \lineto(3.04,0.24) \lineto(2.97,0.17) \lineto(2.9,0.1) \lineto(2.83,0.04) \lineto(2.75,-0.02) \lineto(2.67,-0.07) \lineto(2.59,-0.11) \lineto(2.51,-0.15) \lineto(2.49,-0.16) \moveto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.12,1.5) \lineto(0.12,1.51) \lineto(0.14,1.51) \lineto(0.17,1.53) \lineto(0.22,1.56) } \rput[tl](-0.96,2.86){$l:Mx+Ny+P=0$} \rput[tl](3.22,1.52){$\psi(l)$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.28,-0.35){\blue{$O$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Elliptic Inversion of the line $l$.} \label{fig:rectasinversionelipse2} \end{figure} Moreover, it is clear that the ellipse passes through the center of inversion. \end{proof} \begin{corollary} Let $l_1$ and $l_2$ be perpendicular lines intersecting at point $P$. Then \begin{enumerate}[i.] \item If $P\neq O$, then $\psi(l_1)$ and $\psi(l_2)$ are orthogonal ellipses (their tangents at the points of intersection are perpendicular), which pass through $P'$ and $O$. \item If $P=O$, then $\psi(l_1)$ and $\psi(l_2)$ are perpendicular lines. \item If $l_1$ passes through $O$ but $l_2$ does not, then $\psi(l_1)$ is a line through $O$ and $\psi(l_2)$ is an ellipse which passes through $O$ and is orthogonal to $\psi(l_1)$ at $O$.
\end{enumerate} \end{corollary} \begin{proof} \textit{i.} Let $E$ be the ellipse of inversion with equation $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$, and let $l_1$ and $l_2$ be two perpendicular lines intersecting at $P$ ($P\neq O$), with equations $Mx+Ny+P=0$ ($P\neq0$) and $My-Nx+D=0$ ($D\neq 0$), respectively; see Figure \ref{fig:RectaPerpenInvElip}. \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \newrgbcolor{xdxdff}{0.49 0.49 1} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-8,-1.7)(5.48,3.44) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-1.84,1.74){$E$} \psplot[linecolor=qqttzz]{-6.06}{5.48}{(--7.75-3.44*x)/4.92} \pscustom[linecolor=ccqqqq]{\moveto(-0.36,1.04) \lineto(-0.33,1.09) \lineto(-0.29,1.15) \lineto(-0.24,1.22) \lineto(-0.17,1.29) \lineto(-0.07,1.38) \lineto(-0.01,1.42) \lineto(0.05,1.46) \lineto(0.09,1.49) \lineto(0.1,1.49) \lineto(0.11,1.5) \moveto(2.49,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.17) \lineto(2.47,-0.17) \lineto(2.46,-0.17) \lineto(2.43,-0.19) \lineto(2.38,-0.21) \lineto(2.32,-0.23) \lineto(2.27,-0.25) \lineto(2.21,-0.26) \lineto(2.16,-0.28) \lineto(2.11,-0.29) \lineto(2.06,-0.31) \lineto(1.96,-0.33) \lineto(1.87,-0.34) \lineto(1.78,-0.36) \lineto(1.69,-0.37) \lineto(1.61,-0.37) \lineto(1.54,-0.38) \lineto(1.46,-0.38) \lineto(1.39,-0.38) \lineto(1.33,-0.38) \lineto(1.27,-0.38) \lineto(1.21,-0.38) \lineto(1.16,-0.37) \lineto(1.1,-0.37) \lineto(1.01,-0.36) \lineto(0.95,-0.35) \lineto(0.89,-0.34) \lineto(0.83,-0.33) \lineto(0.78,-0.32)
\lineto(0.69,-0.3) \lineto(0.61,-0.28) \lineto(0.54,-0.26) \lineto(0.49,-0.24) \lineto(0.43,-0.22) \lineto(0.39,-0.2) \lineto(0.35,-0.19) \lineto(0.31,-0.17) \lineto(0.28,-0.16) \lineto(0.25,-0.14) \lineto(0.22,-0.13) \lineto(0.2,-0.12) \lineto(0.17,-0.11) \lineto(0.15,-0.09) \lineto(0.13,-0.08) \lineto(0.12,-0.07) \lineto(0.1,-0.07) \lineto(0.09,-0.06) \lineto(0.07,-0.05) \lineto(0.06,-0.04) \lineto(0.05,-0.03) \lineto(0.04,-0.03) \lineto(0.03,-0.02) \lineto(0.02,-0.01) \lineto(0.01,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.05,0.03) \lineto(-0.05,0.04) \lineto(-0.06,0.05) \lineto(-0.07,0.05) \lineto(-0.08,0.06) \lineto(-0.09,0.07) \lineto(-0.1,0.07) \lineto(-0.11,0.08) \lineto(-0.11,0.09) \lineto(-0.12,0.1) \lineto(-0.13,0.1) \lineto(-0.14,0.11) \lineto(-0.15,0.12) \lineto(-0.16,0.13) \lineto(-0.17,0.14) \lineto(-0.17,0.14) \lineto(-0.18,0.15) \lineto(-0.19,0.16) \lineto(-0.2,0.17) \lineto(-0.21,0.18) \lineto(-0.22,0.19) \lineto(-0.23,0.2) \lineto(-0.23,0.21) \lineto(-0.24,0.22) \lineto(-0.25,0.23) \lineto(-0.26,0.24) \lineto(-0.27,0.25) \lineto(-0.28,0.26) \lineto(-0.28,0.27) \lineto(-0.29,0.28) \lineto(-0.3,0.29) \lineto(-0.31,0.3) \lineto(-0.31,0.31) \lineto(-0.32,0.32) \lineto(-0.33,0.34) \lineto(-0.34,0.35) \lineto(-0.34,0.36) \lineto(-0.35,0.37) \lineto(-0.36,0.39) \lineto(-0.36,0.4) \lineto(-0.37,0.41) \lineto(-0.38,0.42) \lineto(-0.38,0.44) \lineto(-0.39,0.45) \lineto(-0.39,0.47) \lineto(-0.4,0.48) \lineto(-0.4,0.49) \lineto(-0.41,0.51) 
\lineto(-0.41,0.52) \lineto(-0.42,0.54) \lineto(-0.42,0.55) \lineto(-0.42,0.57) \lineto(-0.43,0.59) \lineto(-0.43,0.6) \lineto(-0.43,0.62) \lineto(-0.44,0.64) \lineto(-0.44,0.65) \lineto(-0.44,0.67) \lineto(-0.44,0.69) \lineto(-0.44,0.7) \lineto(-0.44,0.72) \lineto(-0.44,0.74) \lineto(-0.44,0.76) \lineto(-0.44,0.79) \lineto(-0.43,0.81) \lineto(-0.43,0.84) \lineto(-0.42,0.87) \lineto(-0.41,0.91) \lineto(-0.4,0.95) \lineto(-0.38,0.99) \lineto(-0.36,1.04) \lineto(-0.36,1.04) \lineto(-0.36,1.04) } \pscustom[linecolor=ccqqqq]{\moveto(0.22,1.56) \lineto(0.22,1.56) \lineto(0.27,1.58) \lineto(0.33,1.61) \lineto(0.39,1.63) \lineto(0.45,1.66) \lineto(0.52,1.68) \lineto(0.59,1.7) \lineto(0.67,1.72) \lineto(0.75,1.74) \lineto(0.84,1.76) \lineto(0.94,1.78) \lineto(0.98,1.78) \lineto(1.04,1.79) \lineto(1.09,1.8) \lineto(1.14,1.8) \lineto(1.2,1.8) \lineto(1.25,1.81) \lineto(1.31,1.81) \lineto(1.37,1.81) \lineto(1.43,1.81) \lineto(1.49,1.81) \lineto(1.56,1.81) \lineto(1.62,1.8) \lineto(1.68,1.8) \lineto(1.75,1.79) \lineto(1.82,1.78) \lineto(1.88,1.77) \lineto(1.95,1.76) \lineto(2.02,1.74) \lineto(2.09,1.73) \lineto(2.16,1.71) \lineto(2.22,1.69) \lineto(2.29,1.67) \lineto(2.36,1.64) \lineto(2.42,1.62) \lineto(2.49,1.59) \lineto(2.55,1.56) \lineto(2.61,1.53) \lineto(2.67,1.49) \lineto(2.73,1.46) \lineto(2.78,1.42) \lineto(2.84,1.38) \lineto(2.89,1.34) \lineto(2.98,1.26) \lineto(3.05,1.16) \lineto(3.12,1.07) \lineto(3.16,0.97) \lineto(3.19,0.87) \lineto(3.21,0.78) \lineto(3.21,0.68) \lineto(3.2,0.58) \lineto(3.18,0.49) \lineto(3.14,0.4) \lineto(3.09,0.32) \lineto(3.04,0.24) \lineto(2.97,0.17) \lineto(2.9,0.1) \lineto(2.83,0.04) \lineto(2.75,-0.02) \lineto(2.67,-0.07) \lineto(2.59,-0.11) \lineto(2.51,-0.15) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \moveto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) 
\lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.12,1.5) \lineto(0.12,1.51) \lineto(0.14,1.51) \lineto(0.17,1.53) \lineto(0.22,1.56) \lineto(0.22,1.56) } \rput[tl](-0.96,2.86){$l_1:Mx+Ny+P=0$} \rput[tl](3.22,1.52){$\psi(l_1)$} \psplot[linecolor=qqttzz]{-6.06}{5.48}{(--16.4--4.92*x)/3.44} \pscustom[linecolor=ccqqqq]{\moveto(-1.94,0.14) \lineto(-1.95,0.19) \lineto(-1.95,0.25) \lineto(-1.95,0.3) \lineto(-1.93,0.36) \lineto(-1.91,0.41) \lineto(-1.88,0.46) \lineto(-1.8,0.56) \lineto(-1.71,0.64) \lineto(-1.65,0.67) \lineto(-1.6,0.7) \lineto(-1.54,0.73) \lineto(-1.48,0.75) \lineto(-1.42,0.77) \lineto(-1.37,0.79) \lineto(-1.31,0.8) \lineto(-1.26,0.82) \lineto(-1.2,0.83) \lineto(-1.15,0.83) \lineto(-1.05,0.84) \lineto(-0.96,0.85) \lineto(-0.87,0.85) \lineto(-0.8,0.84) \lineto(-0.73,0.83) \lineto(-0.67,0.82) \lineto(-0.61,0.81) \lineto(-0.56,0.8) \lineto(-0.52,0.79) \lineto(-0.48,0.78) \lineto(-0.44,0.77) \lineto(-0.4,0.76) \lineto(-0.37,0.74) \lineto(-0.34,0.73) \lineto(-0.31,0.71) \lineto(-0.28,0.7) \lineto(-0.25,0.69) \lineto(-0.22,0.67) \lineto(-0.2,0.65) \lineto(-0.17,0.64) \lineto(-0.15,0.62) \lineto(-0.13,0.61) \lineto(-0.11,0.59) \lineto(-0.09,0.57) \lineto(-0.07,0.56) \lineto(-0.06,0.54) \lineto(-0.04,0.53) \lineto(-0.03,0.51) \lineto(-0.02,0.49) \lineto(0,0.48) \lineto(0.01,0.46) \lineto(0.02,0.45) \lineto(0.03,0.43) \lineto(0.03,0.42) \lineto(0.04,0.4) \lineto(0.05,0.39) \lineto(0.05,0.37) \lineto(0.06,0.36) \lineto(0.06,0.34) \lineto(0.07,0.33) \lineto(0.07,0.32) \lineto(0.07,0.3) \lineto(0.08,0.29) \lineto(0.08,0.28) \lineto(0.08,0.27) \lineto(0.08,0.25) \lineto(0.08,0.24) \lineto(0.08,0.23) \lineto(0.08,0.22) \lineto(0.08,0.21) \lineto(0.08,0.2) \lineto(0.08,0.19) \lineto(0.07,0.18) \lineto(0.07,0.17) \lineto(0.07,0.16) \lineto(0.07,0.15) \lineto(0.07,0.14) \lineto(0.06,0.13) \lineto(0.06,0.12) 
\lineto(0.06,0.11) \lineto(0.05,0.1) \lineto(0.05,0.09) \lineto(0.05,0.08) \lineto(0.04,0.08) \lineto(0.04,0.07) \lineto(0.04,0.06) \lineto(0.03,0.05) \lineto(0.03,0.05) \lineto(0.02,0.04) \lineto(0.02,0.03) \lineto(0.02,0.03) \lineto(0.01,0.02) \lineto(0.01,0.01) \lineto(0,0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,-0.01) \lineto(-0.01,-0.01) \lineto(-0.02,-0.02) \lineto(-0.02,-0.03) \lineto(-0.03,-0.04) \lineto(-0.03,-0.04) \lineto(-0.04,-0.05) \lineto(-0.05,-0.06) \lineto(-0.06,-0.07) \lineto(-0.07,-0.08) \lineto(-0.08,-0.09) \lineto(-0.09,-0.1) \lineto(-0.1,-0.11) \lineto(-0.11,-0.12) \lineto(-0.12,-0.13) \lineto(-0.14,-0.14) \lineto(-0.16,-0.15) \lineto(-0.18,-0.17) \lineto(-0.2,-0.18) \lineto(-0.22,-0.2) \lineto(-0.24,-0.21) \lineto(-0.27,-0.23) \lineto(-0.3,-0.24) \lineto(-0.34,-0.26) \lineto(-0.38,-0.27) \lineto(-0.42,-0.29) \lineto(-0.47,-0.31) \lineto(-0.53,-0.32) \lineto(-0.59,-0.34) \lineto(-0.66,-0.35) \lineto(-0.73,-0.36) \lineto(-0.82,-0.37) \lineto(-0.92,-0.37) \lineto(-0.97,-0.37) \lineto(-1.02,-0.37) \lineto(-1.08,-0.37) \lineto(-1.14,-0.36) \lineto(-1.2,-0.35) \lineto(-1.26,-0.34) \lineto(-1.32,-0.33) \lineto(-1.39,-0.31) \lineto(-1.45,-0.29) \lineto(-1.51,-0.27) \lineto(-1.58,-0.24) \lineto(-1.64,-0.21) \lineto(-1.7,-0.17) \lineto(-1.75,-0.13) \lineto(-1.84,-0.04) \lineto(-1.88,0.01) \lineto(-1.91,0.06) \lineto(-1.94,0.14) \lineto(-1.94,0.14) } \rput[tl](-7.5,1.68){$l_2:My-Nx+D=0$} \rput[tl](-1.84,-0.4){$\psi(l_2)$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.28,-0.4){\blue{$O$}} \psdots[dotstyle=*,linecolor=qqttzz](-1.5,2.62) 
\rput[bl](-1.6,2.9){\qqttzz{$P$}} \psdots[dotstyle=*,linecolor=xdxdff](-0.44,0.77) \rput[bl](-0.36,0.88){\xdxdff{$P'$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse of Perpendicular Lines.} \label{fig:RectaPerpenInvElip} \end{figure} By Theorem \ref{invrelipse}, $\psi(l_1)=l'_1$ and $\psi(l_2)=l'_2$ are ellipses passing through $O$, with equations \begin{align*} \frac{x^2}{a^2}+\frac{y^2}{b^2}+\frac{M}{P}x+\frac{N}{P}y&=0,\\ \frac{x^2}{a^2}+\frac{y^2}{b^2}-\frac{N}{D}x+\frac{M}{D}y&=0. \end{align*} The tangent lines to these ellipses at $O$ have equations \begin{align*} \frac{Ma^2b^2}{2}x+\frac{Na^2b^2}{2}y&=0,\\ -\frac{Na^2b^2}{2}x+\frac{Ma^2b^2}{2}y&=0. \end{align*} Simplifying, we obtain \begin{align*} Mx + Ny &=0,\\ -Nx + My &=0. \end{align*} Therefore the tangent lines are perpendicular, and hence the ellipses are orthogonal.\\ \textit{ii.} This is clear from Theorem \ref{invrelipse}.\\ \textit{iii.} The proof is similar to that of part i. \end{proof} \begin{corollary} The inversion in an ellipse of a system of lines concurrent at a point $H$ distinct from the center of inversion is a coaxal system of ellipses with two common points, $H'$ and the center of inversion; see Figure \ref{fig:RectasConcuInvElip}.
\end{corollary} \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-5.76,-2.1)(4.46,3.68) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](2.3,1.2){$E$} \psplot[linecolor=qqttzz]{-5.76}{4.46}{(-7.38-1*x)/-2.04} \psplot[linecolor=qqttzz]{-5.76}{4.46}{(--11.7--0.4*x)/5.06} \psline[linecolor=qqttzz](-3.18,-2.1)(-3.18,3.68) \psplot[linecolor=qqttzz]{-5.76}{4.46}{(--4.93--3.08*x)/-2.36} \psplot[linecolor=qqttzz]{-5.76}{4.46}{(-8.28--0.48*x)/-4.76} \pscustom[linecolor=red]{\moveto(0.2,-0.53) \lineto(0.19,-0.6) \lineto(0.18,-0.67) \lineto(0.17,-0.76) \lineto(0.13,-0.86) \lineto(0.1,-0.91) \lineto(0.07,-0.97) \lineto(0.03,-1.03) \lineto(-0.02,-1.1) \lineto(-0.08,-1.17) \lineto(-0.16,-1.25) \lineto(-0.26,-1.33) \lineto(-0.31,-1.37) \lineto(-0.37,-1.41) \lineto(-0.44,-1.45) \lineto(-0.46,-1.46) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \moveto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.16,0.74) \lineto(-2.16,0.74) \lineto(-2.14,0.75) \lineto(-2.11,0.75) \lineto(-2.06,0.75) \lineto(-2,0.75) \lineto(-1.95,0.75) \lineto(-1.86,0.75) \lineto(-1.8,0.75) \lineto(-1.74,0.74) \lineto(-1.68,0.74) \lineto(-1.63,0.74) \lineto(-1.54,0.73) \lineto(-1.46,0.72) \lineto(-1.38,0.7) \lineto(-1.32,0.69) \lineto(-1.26,0.68) \lineto(-1.21,0.67) \lineto(-1.16,0.66) \lineto(-1.12,0.65) \lineto(-1.08,0.64) \lineto(-1.05,0.63) \lineto(-1.02,0.62) \lineto(-0.99,0.61) \lineto(-0.96,0.6) \lineto(-0.93,0.6) \lineto(-0.91,0.59) \lineto(-0.89,0.58) \lineto(-0.87,0.57) 
\lineto(-0.85,0.57) \lineto(-0.82,0.56) \lineto(-0.8,0.55) \lineto(-0.78,0.54) \lineto(-0.76,0.53) \lineto(-0.74,0.53) \lineto(-0.72,0.52) \lineto(-0.7,0.51) \lineto(-0.68,0.5) \lineto(-0.66,0.49) \lineto(-0.65,0.48) \lineto(-0.63,0.48) \lineto(-0.61,0.47) \lineto(-0.59,0.46) \lineto(-0.57,0.45) \lineto(-0.55,0.44) \lineto(-0.54,0.43) \lineto(-0.52,0.42) \lineto(-0.5,0.41) \lineto(-0.49,0.4) \lineto(-0.47,0.4) \lineto(-0.45,0.39) \lineto(-0.44,0.38) \lineto(-0.42,0.37) \lineto(-0.41,0.36) \lineto(-0.39,0.35) \lineto(-0.38,0.34) \lineto(-0.36,0.33) \lineto(-0.35,0.32) \lineto(-0.33,0.31) \lineto(-0.32,0.3) \lineto(-0.31,0.29) \lineto(-0.29,0.28) \lineto(-0.28,0.27) \lineto(-0.27,0.26) \lineto(-0.26,0.25) \lineto(-0.24,0.24) \lineto(-0.23,0.23) \lineto(-0.22,0.22) \lineto(-0.21,0.21) \lineto(-0.2,0.2) \lineto(-0.18,0.19) \lineto(-0.17,0.18) \lineto(-0.16,0.17) \lineto(-0.15,0.16) \lineto(-0.14,0.16) \lineto(-0.13,0.15) \lineto(-0.12,0.14) \lineto(-0.11,0.13) \lineto(-0.1,0.12) \lineto(-0.09,0.11) \lineto(-0.08,0.1) \lineto(-0.07,0.09) \lineto(-0.06,0.08) \lineto(-0.06,0.07) \lineto(-0.05,0.06) \lineto(-0.04,0.05) \lineto(-0.03,0.04) \lineto(-0.02,0.03) \lineto(-0.02,0.02) \lineto(-0.01,0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,-0.01) \lineto(0.01,-0.01) \lineto(0.02,-0.02) \lineto(0.03,-0.03) \lineto(0.03,-0.05) \lineto(0.04,-0.06) \lineto(0.05,-0.07) \lineto(0.06,-0.09) \lineto(0.07,-0.1) \lineto(0.08,-0.12) \lineto(0.09,-0.14) \lineto(0.1,-0.16) \lineto(0.11,-0.18) \lineto(0.12,-0.21) \lineto(0.14,-0.23) \lineto(0.15,-0.26) \lineto(0.16,-0.29) \lineto(0.17,-0.33) \lineto(0.18,-0.37) \lineto(0.19,-0.42) \lineto(0.19,-0.47) 
\lineto(0.2,-0.52) \lineto(0.2,-0.53) } \pscustom[linecolor=red]{\moveto(-0.69,-1.58) \lineto(-0.69,-1.58) \lineto(-0.78,-1.62) \lineto(-0.84,-1.64) \lineto(-0.89,-1.66) \lineto(-0.95,-1.68) \lineto(-1.01,-1.69) \lineto(-1.07,-1.71) \lineto(-1.13,-1.73) \lineto(-1.2,-1.74) \lineto(-1.26,-1.76) \lineto(-1.33,-1.77) \lineto(-1.41,-1.78) \lineto(-1.48,-1.8) \lineto(-1.56,-1.81) \lineto(-1.64,-1.81) \lineto(-1.72,-1.82) \lineto(-1.81,-1.82) \lineto(-1.89,-1.83) \lineto(-1.98,-1.83) \lineto(-2.07,-1.82) \lineto(-2.16,-1.82) \lineto(-2.25,-1.81) \lineto(-2.35,-1.8) \lineto(-2.44,-1.79) \lineto(-2.53,-1.78) \lineto(-2.63,-1.76) \lineto(-2.72,-1.74) \lineto(-2.81,-1.72) \lineto(-2.9,-1.69) \lineto(-3,-1.66) \lineto(-3.08,-1.63) \lineto(-3.17,-1.6) \lineto(-3.26,-1.56) \lineto(-3.34,-1.52) \lineto(-3.42,-1.48) \lineto(-3.49,-1.43) \lineto(-3.57,-1.39) \lineto(-3.63,-1.34) \lineto(-3.7,-1.29) \lineto(-3.76,-1.24) \lineto(-3.81,-1.18) \lineto(-3.86,-1.13) \lineto(-3.91,-1.07) \lineto(-3.95,-1.01) \lineto(-3.98,-0.96) \lineto(-4.01,-0.9) \lineto(-4.04,-0.84) \lineto(-4.06,-0.78) \lineto(-4.08,-0.72) \lineto(-4.09,-0.66) \lineto(-4.1,-0.61) \lineto(-4.1,-0.55) \lineto(-4.1,-0.49) \lineto(-4.09,-0.44) \lineto(-4.08,-0.39) \lineto(-4.07,-0.33) \lineto(-4.06,-0.28) \lineto(-4.04,-0.23) \lineto(-3.99,-0.14) \lineto(-3.94,-0.05) \lineto(-3.88,0.03) \lineto(-3.81,0.11) \lineto(-3.74,0.18) \lineto(-3.66,0.24) \lineto(-3.58,0.3) \lineto(-3.5,0.35) \lineto(-3.42,0.4) \lineto(-3.34,0.44) \lineto(-3.26,0.48) \lineto(-3.18,0.52) \lineto(-3.11,0.55) \lineto(-3.03,0.58) \lineto(-2.96,0.6) \lineto(-2.89,0.62) \lineto(-2.82,0.64) \lineto(-2.75,0.66) \lineto(-2.69,0.67) \lineto(-2.62,0.69) \lineto(-2.56,0.7) \lineto(-2.5,0.71) \lineto(-2.45,0.72) \lineto(-2.39,0.72) \lineto(-2.34,0.73) \lineto(-2.24,0.74) \lineto(-2.2,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \lineto(-2.17,0.74) \moveto(-0.47,-1.47) 
\lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.47,-1.47) \lineto(-0.48,-1.47) \lineto(-0.48,-1.48) \lineto(-0.49,-1.48) \lineto(-0.5,-1.49) \lineto(-0.52,-1.5) \lineto(-0.58,-1.53) \lineto(-0.64,-1.56) \lineto(-0.69,-1.58) \lineto(-0.69,-1.58) } \pscustom[linecolor=red]{\moveto(-1.51,0.5) \lineto(-1.57,0.47) \lineto(-1.63,0.44) \lineto(-1.69,0.41) \lineto(-1.74,0.37) \lineto(-1.8,0.33) \lineto(-1.85,0.27) \lineto(-1.9,0.21) \lineto(-1.93,0.15) \lineto(-1.96,0.07) \lineto(-1.97,-0.01) \lineto(-1.95,-0.09) \lineto(-1.92,-0.18) \lineto(-1.86,-0.26) \lineto(-1.78,-0.34) \lineto(-1.74,-0.38) \lineto(-1.68,-0.41) \lineto(-1.63,-0.45) \lineto(-1.56,-0.48) \lineto(-1.5,-0.5) \lineto(-1.43,-0.52) \lineto(-1.37,-0.54) \lineto(-1.3,-0.56) \lineto(-1.23,-0.57) \lineto(-1.16,-0.58) \lineto(-1.09,-0.59) \lineto(-1.02,-0.59) \lineto(-0.96,-0.59) \lineto(-0.89,-0.59) \lineto(-0.83,-0.58) \lineto(-0.77,-0.58) \lineto(-0.72,-0.57) \lineto(-0.67,-0.56) \lineto(-0.57,-0.53) \lineto(-0.48,-0.51) \lineto(-0.41,-0.48) \lineto(-0.34,-0.45) \lineto(-0.29,-0.42) \lineto(-0.24,-0.38) \lineto(-0.2,-0.35) \lineto(-0.16,-0.32) \lineto(-0.13,-0.3) \lineto(-0.11,-0.27) \lineto(-0.09,-0.24) \lineto(-0.07,-0.22) \lineto(-0.05,-0.19) \lineto(-0.04,-0.17) \lineto(-0.03,-0.15) \lineto(-0.02,-0.13) \lineto(-0.02,-0.11) \lineto(-0.01,-0.09) \lineto(-0.01,-0.08) \lineto(-0.01,-0.06) \lineto(0,-0.05) \lineto(0,-0.03) \lineto(0,-0.02) \lineto(0,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) 
\lineto(0,0) \lineto(0,0) \lineto(0,0.01) \lineto(0,0.02) \lineto(0,0.03) \lineto(0,0.04) \lineto(0,0.05) \lineto(-0.01,0.07) \lineto(-0.01,0.08) \lineto(-0.01,0.09) \lineto(-0.02,0.1) \lineto(-0.02,0.12) \lineto(-0.02,0.13) \lineto(-0.03,0.14) \lineto(-0.03,0.15) \lineto(-0.04,0.17) \lineto(-0.05,0.18) \lineto(-0.05,0.19) \lineto(-0.06,0.2) \lineto(-0.07,0.22) \lineto(-0.08,0.23) \lineto(-0.09,0.24) \lineto(-0.1,0.26) \lineto(-0.11,0.27) \lineto(-0.12,0.28) \lineto(-0.13,0.29) \lineto(-0.14,0.3) \lineto(-0.15,0.32) \lineto(-0.17,0.33) \lineto(-0.18,0.34) \lineto(-0.19,0.35) \lineto(-0.21,0.36) \lineto(-0.22,0.37) \lineto(-0.24,0.39) \lineto(-0.26,0.4) \lineto(-0.27,0.41) \lineto(-0.29,0.42) \lineto(-0.31,0.43) \lineto(-0.32,0.44) \lineto(-0.34,0.45) \lineto(-0.36,0.46) \lineto(-0.38,0.47) \lineto(-0.4,0.47) \lineto(-0.42,0.48) \lineto(-0.44,0.49) \lineto(-0.46,0.5) \lineto(-0.48,0.51) \lineto(-0.5,0.51) \lineto(-0.52,0.52) \lineto(-0.54,0.53) \lineto(-0.56,0.53) \lineto(-0.58,0.54) \lineto(-0.61,0.54) \lineto(-0.63,0.55) \lineto(-0.65,0.55) \lineto(-0.67,0.56) \lineto(-0.69,0.56) \lineto(-0.72,0.57) \lineto(-0.74,0.57) \lineto(-0.76,0.57) \lineto(-0.78,0.58) \lineto(-0.81,0.58) \lineto(-0.83,0.58) \lineto(-0.85,0.58) \lineto(-0.87,0.59) \lineto(-0.89,0.59) \lineto(-0.92,0.59) \lineto(-0.94,0.59) \lineto(-0.96,0.59) \lineto(-0.99,0.59) \lineto(-1.02,0.59) \lineto(-1.04,0.59) \lineto(-1.07,0.59) \lineto(-1.11,0.58) \lineto(-1.14,0.58) \lineto(-1.18,0.58) \lineto(-1.22,0.57) \lineto(-1.26,0.57) \lineto(-1.3,0.56) \lineto(-1.35,0.55) \lineto(-1.4,0.53) \lineto(-1.45,0.52) \lineto(-1.51,0.5) \lineto(-1.51,0.5) } \pscustom[linecolor=red]{\moveto(-1,0.11) \lineto(-0.98,0.09) \lineto(-0.95,0.07) \lineto(-0.93,0.05) \lineto(-0.9,0.03) \lineto(-0.87,0.01) \lineto(-0.83,-0.01) \lineto(-0.8,-0.02) \lineto(-0.76,-0.04) \lineto(-0.72,-0.05) \lineto(-0.68,-0.06) \lineto(-0.63,-0.07) \lineto(-0.59,-0.08) \lineto(-0.54,-0.08) \lineto(-0.5,-0.09) \lineto(-0.45,-0.09) 
\lineto(-0.41,-0.09) \lineto(-0.36,-0.09) \lineto(-0.32,-0.09) \lineto(-0.27,-0.08) \lineto(-0.23,-0.07) \lineto(-0.19,-0.07) \lineto(-0.15,-0.06) \lineto(-0.11,-0.05) \lineto(-0.08,-0.03) \lineto(-0.05,-0.02) \lineto(-0.02,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0.01,0) \lineto(0.02,0.01) \lineto(0.04,0.02) \lineto(0.07,0.04) \lineto(0.09,0.06) \lineto(0.12,0.08) \lineto(0.14,0.1) \lineto(0.17,0.12) \lineto(0.19,0.14) \lineto(0.2,0.17) \lineto(0.22,0.2) \lineto(0.23,0.23) \lineto(0.24,0.25) \lineto(0.24,0.28) \lineto(0.25,0.32) \lineto(0.24,0.35) \lineto(0.24,0.38) \lineto(0.23,0.41) \lineto(0.21,0.44) \lineto(0.19,0.47) \lineto(0.17,0.49) \lineto(0.15,0.52) \lineto(0.12,0.54) \lineto(0.09,0.57) \lineto(0.06,0.59) \lineto(0.02,0.61) \lineto(-0.01,0.63) \lineto(-0.05,0.64) \lineto(-0.09,0.66) \lineto(-0.12,0.67) \lineto(-0.16,0.68) \lineto(-0.2,0.69) \lineto(-0.24,0.7) \lineto(-0.27,0.7) \lineto(-0.31,0.71) \lineto(-0.34,0.71) \lineto(-0.38,0.71) \lineto(-0.41,0.71) \lineto(-0.44,0.71) \lineto(-0.47,0.71) \lineto(-0.5,0.71) \lineto(-0.53,0.71) \lineto(-0.56,0.7) \lineto(-0.58,0.7) \lineto(-0.61,0.7) \lineto(-0.63,0.69) \lineto(-0.65,0.69) \lineto(-0.67,0.68) \lineto(-0.69,0.68) \lineto(-0.71,0.67) \lineto(-0.73,0.67) \lineto(-0.75,0.66) \lineto(-0.76,0.66) \lineto(-0.78,0.65) \lineto(-0.79,0.65) \lineto(-0.81,0.64) \lineto(-0.82,0.63) \lineto(-0.83,0.63) \lineto(-0.84,0.62) \lineto(-0.85,0.62) \lineto(-0.86,0.61) \lineto(-0.87,0.61) \lineto(-0.88,0.6) \lineto(-0.89,0.6) \lineto(-0.9,0.59) \lineto(-0.91,0.59) \lineto(-0.92,0.58) \lineto(-0.93,0.58) \lineto(-0.94,0.57) \lineto(-0.94,0.56) \lineto(-0.95,0.56) \lineto(-0.96,0.55) 
\lineto(-0.97,0.54) \lineto(-0.98,0.54) \lineto(-0.99,0.53) \lineto(-0.99,0.52) \lineto(-1,0.51) \lineto(-1.01,0.5) \lineto(-1.02,0.49) \lineto(-1.03,0.48) \lineto(-1.04,0.47) \lineto(-1.04,0.46) \lineto(-1.05,0.45) \lineto(-1.06,0.44) \lineto(-1.06,0.43) \lineto(-1.07,0.41) \lineto(-1.08,0.4) \lineto(-1.08,0.39) \lineto(-1.08,0.37) \lineto(-1.09,0.36) \lineto(-1.09,0.34) \lineto(-1.09,0.32) \lineto(-1.09,0.31) \lineto(-1.09,0.29) \lineto(-1.09,0.27) \lineto(-1.08,0.25) \lineto(-1.08,0.23) \lineto(-1.07,0.21) \lineto(-1.06,0.19) \lineto(-1.05,0.17) \lineto(-1.04,0.15) \lineto(-1.02,0.13) \lineto(-1,0.11) \lineto(-1,0.11) } \pscustom[linecolor=red]{\moveto(-0.28,0.97) \lineto(-0.2,0.97) \lineto(-0.12,0.98) \lineto(-0.04,0.98) \lineto(0.05,0.97) \lineto(0.14,0.95) \lineto(0.23,0.93) \lineto(0.32,0.91) \lineto(0.4,0.87) \lineto(0.47,0.83) \lineto(0.54,0.79) \lineto(0.59,0.74) \lineto(0.64,0.69) \lineto(0.67,0.64) \lineto(0.7,0.58) \lineto(0.71,0.53) \lineto(0.71,0.48) \lineto(0.71,0.43) \lineto(0.69,0.38) \lineto(0.67,0.34) \lineto(0.65,0.3) \lineto(0.62,0.27) \lineto(0.59,0.23) \lineto(0.56,0.2) \lineto(0.53,0.18) \lineto(0.49,0.15) \lineto(0.46,0.13) \lineto(0.42,0.11) \lineto(0.39,0.1) \lineto(0.35,0.08) \lineto(0.32,0.07) \lineto(0.29,0.06) \lineto(0.26,0.05) \lineto(0.23,0.04) \lineto(0.2,0.03) \lineto(0.17,0.03) \lineto(0.15,0.02) \lineto(0.12,0.02) \lineto(0.1,0.01) \lineto(0.08,0.01) \lineto(0.05,0.01) \lineto(0.03,0) \lineto(0.01,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0) \lineto(-0.03,0) \lineto(-0.05,0) \lineto(-0.07,0) \lineto(-0.09,0) \lineto(-0.11,0) \lineto(-0.13,0) 
\lineto(-0.15,0) \lineto(-0.17,0) \lineto(-0.19,0) \lineto(-0.21,0) \lineto(-0.23,0) \lineto(-0.26,0) \lineto(-0.28,0.01) \lineto(-0.3,0.01) \lineto(-0.32,0.01) \lineto(-0.35,0.02) \lineto(-0.37,0.02) \lineto(-0.39,0.03) \lineto(-0.41,0.03) \lineto(-0.44,0.04) \lineto(-0.46,0.04) \lineto(-0.48,0.05) \lineto(-0.5,0.06) \lineto(-0.52,0.06) \lineto(-0.55,0.07) \lineto(-0.57,0.08) \lineto(-0.59,0.09) \lineto(-0.61,0.1) \lineto(-0.63,0.11) \lineto(-0.65,0.12) \lineto(-0.67,0.13) \lineto(-0.69,0.14) \lineto(-0.71,0.15) \lineto(-0.72,0.16) \lineto(-0.74,0.18) \lineto(-0.76,0.19) \lineto(-0.77,0.2) \lineto(-0.79,0.22) \lineto(-0.8,0.23) \lineto(-0.82,0.24) \lineto(-0.83,0.26) \lineto(-0.84,0.27) \lineto(-0.85,0.29) \lineto(-0.86,0.3) \lineto(-0.87,0.31) \lineto(-0.88,0.33) \lineto(-0.89,0.34) \lineto(-0.9,0.36) \lineto(-0.9,0.37) \lineto(-0.91,0.39) \lineto(-0.91,0.4) \lineto(-0.92,0.42) \lineto(-0.92,0.43) \lineto(-0.92,0.45) \lineto(-0.92,0.46) \lineto(-0.93,0.48) \lineto(-0.92,0.49) \lineto(-0.92,0.51) \lineto(-0.92,0.52) \lineto(-0.92,0.54) \lineto(-0.92,0.55) \lineto(-0.91,0.57) \lineto(-0.91,0.58) \lineto(-0.91,0.59) \lineto(-0.9,0.61) \lineto(-0.89,0.62) \lineto(-0.89,0.64) \lineto(-0.88,0.65) \lineto(-0.87,0.67) \lineto(-0.85,0.69) \lineto(-0.84,0.7) \lineto(-0.82,0.72) \lineto(-0.81,0.74) \lineto(-0.79,0.76) \lineto(-0.76,0.78) \lineto(-0.73,0.8) \lineto(-0.7,0.82) \lineto(-0.67,0.84) \lineto(-0.63,0.87) \lineto(-0.58,0.89) \lineto(-0.53,0.91) \lineto(-0.48,0.92) \lineto(-0.42,0.94) \lineto(-0.35,0.96) \lineto(-0.28,0.97) } \pscustom[linecolor=red]{\moveto(-0.67,0.24) \lineto(-0.66,0.23) \lineto(-0.64,0.22) \lineto(-0.63,0.2) \lineto(-0.61,0.19) \lineto(-0.59,0.18) \lineto(-0.58,0.17) \lineto(-0.56,0.17) \lineto(-0.54,0.16) \lineto(-0.53,0.15) \lineto(-0.51,0.14) \lineto(-0.49,0.13) \lineto(-0.47,0.12) \lineto(-0.45,0.11) \lineto(-0.43,0.1) \lineto(-0.42,0.1) \lineto(-0.4,0.09) \lineto(-0.38,0.08) \lineto(-0.36,0.08) \lineto(-0.34,0.07) \lineto(-0.32,0.06) 
\lineto(-0.3,0.06) \lineto(-0.28,0.05) \lineto(-0.26,0.05) \lineto(-0.24,0.04) \lineto(-0.22,0.04) \lineto(-0.2,0.03) \lineto(-0.18,0.03) \lineto(-0.16,0.02) \lineto(-0.14,0.02) \lineto(-0.12,0.02) \lineto(-0.1,0.01) \lineto(-0.08,0.01) \lineto(-0.06,0.01) \lineto(-0.04,0) \lineto(-0.02,0) \lineto(-0.01,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0.01,0) \lineto(0.01,0) \lineto(0.03,0) \lineto(0.05,0) \lineto(0.07,-0.01) \lineto(0.1,-0.01) \lineto(0.13,-0.01) \lineto(0.15,-0.01) \lineto(0.19,-0.01) \lineto(0.22,-0.01) \lineto(0.25,-0.01) \lineto(0.29,-0.01) \lineto(0.33,0) \lineto(0.37,0) \lineto(0.42,0.01) \lineto(0.47,0.01) \lineto(0.52,0.02) \lineto(0.58,0.04) \lineto(0.64,0.05) \lineto(0.7,0.07) \lineto(0.77,0.09) \lineto(0.84,0.12) \lineto(0.91,0.16) \lineto(0.98,0.2) \lineto(1.05,0.25) \lineto(1.12,0.31) \lineto(1.18,0.39) \lineto(1.23,0.47) \lineto(1.26,0.56) \lineto(1.27,0.61) \lineto(1.27,0.66) \lineto(1.27,0.71) \lineto(1.26,0.77) \lineto(1.23,0.82) \lineto(1.21,0.88) \lineto(1.17,0.93) \lineto(1.12,0.98) \lineto(1.07,1.03) \lineto(1,1.08) \lineto(0.93,1.12) \lineto(0.86,1.16) \lineto(0.78,1.2) \lineto(0.69,1.23) \lineto(0.61,1.25) \lineto(0.52,1.27) \lineto(0.43,1.29) \lineto(0.34,1.3) \lineto(0.25,1.3) \lineto(0.17,1.3) \lineto(0.08,1.3) \lineto(0,1.29) \lineto(-0.07,1.28) \lineto(-0.14,1.27) \lineto(-0.21,1.26) \lineto(-0.27,1.24) \lineto(-0.33,1.23) \lineto(-0.38,1.21) \lineto(-0.48,1.17) \lineto(-0.56,1.13) \lineto(-0.63,1.09) \lineto(-0.69,1.05) \lineto(-0.73,1.01) \lineto(-0.77,0.97) \lineto(-0.8,0.93) \lineto(-0.83,0.9) \lineto(-0.85,0.87) \lineto(-0.86,0.84) \lineto(-0.88,0.81) \lineto(-0.89,0.78) 
\lineto(-0.9,0.76) \lineto(-0.9,0.74) \lineto(-0.91,0.71) \lineto(-0.91,0.69) \lineto(-0.91,0.67) \lineto(-0.91,0.66) \lineto(-0.91,0.64) \lineto(-0.91,0.62) \lineto(-0.91,0.61) \lineto(-0.91,0.6) \lineto(-0.91,0.58) \lineto(-0.9,0.57) \lineto(-0.9,0.56) \lineto(-0.9,0.54) \lineto(-0.89,0.53) \lineto(-0.89,0.52) \lineto(-0.89,0.51) \lineto(-0.88,0.49) \lineto(-0.88,0.48) \lineto(-0.87,0.47) \lineto(-0.86,0.45) \lineto(-0.86,0.44) \lineto(-0.85,0.43) \lineto(-0.84,0.41) \lineto(-0.83,0.4) \lineto(-0.82,0.39) \lineto(-0.81,0.38) \lineto(-0.81,0.36) \lineto(-0.8,0.35) \lineto(-0.78,0.34) \lineto(-0.77,0.33) \lineto(-0.76,0.32) \lineto(-0.75,0.3) \lineto(-0.74,0.29) \lineto(-0.72,0.28) \lineto(-0.71,0.27) \lineto(-0.7,0.26) \lineto(-0.68,0.25) \lineto(-0.67,0.24) } \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](0.5,-0.44){\blue{$O$}} \psdots[dotstyle=*,linecolor=blue](-3.18,2.06) \rput[bl](-3.12,2.26){\blue{$H$}} \psdots[dotstyle=*,linecolor=darkgray](-0.91,0.59) \rput[bl](-1.24,0.76){\darkgray{$H'$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse of a System of Concurrent Lines.} \label{fig:RectasConcuInvElip} \end{figure} \begin{corollary} The inversion in an ellipse of a system of parallel lines, none of which passes through the center of inversion, is a family of ellipses tangent to one another at the center of inversion; see Figure \ref{fig:RectasParaleInvElipse}.
\end{corollary} \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-7.02,-2.02)(6.76,2.64) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-2.44,1.48){$E$} \psplot[linecolor=qqttzz]{-7.02}{6.76}{(--7.75-3.44*x)/4.92} \pscustom[linecolor=ccqqqq]{\moveto(-0.36,1.04) \lineto(-0.33,1.09) \lineto(-0.29,1.15) \lineto(-0.24,1.22) \lineto(-0.17,1.29) \lineto(-0.07,1.38) \lineto(-0.01,1.42) \lineto(0.05,1.46) \lineto(0.09,1.49) \lineto(0.1,1.49) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \moveto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.16) \lineto(2.48,-0.17) \lineto(2.47,-0.17) \lineto(2.46,-0.17) \lineto(2.43,-0.19) \lineto(2.38,-0.21) \lineto(2.32,-0.23) \lineto(2.27,-0.25) \lineto(2.21,-0.26) \lineto(2.16,-0.28) \lineto(2.11,-0.29) \lineto(2.06,-0.31) \lineto(1.96,-0.33) \lineto(1.87,-0.34) \lineto(1.78,-0.36) \lineto(1.69,-0.37) \lineto(1.61,-0.37) \lineto(1.54,-0.38) \lineto(1.46,-0.38) \lineto(1.39,-0.38) \lineto(1.33,-0.38) \lineto(1.27,-0.38) \lineto(1.21,-0.38) \lineto(1.16,-0.37) \lineto(1.1,-0.37) \lineto(1.01,-0.36) \lineto(0.95,-0.35) \lineto(0.89,-0.34) \lineto(0.83,-0.33) \lineto(0.78,-0.32) \lineto(0.69,-0.3) \lineto(0.61,-0.28) \lineto(0.54,-0.26) \lineto(0.49,-0.24) \lineto(0.43,-0.22) \lineto(0.39,-0.2) \lineto(0.35,-0.19) \lineto(0.31,-0.17) \lineto(0.28,-0.16) \lineto(0.25,-0.14) \lineto(0.22,-0.13) \lineto(0.2,-0.12) \lineto(0.17,-0.11) \lineto(0.15,-0.09) \lineto(0.13,-0.08) \lineto(0.12,-0.07) \lineto(0.1,-0.07) \lineto(0.09,-0.06) 
\lineto(0.07,-0.05) \lineto(0.06,-0.04) \lineto(0.05,-0.03) \lineto(0.04,-0.03) \lineto(0.03,-0.02) \lineto(0.02,-0.01) \lineto(0.01,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.05,0.03) \lineto(-0.05,0.04) \lineto(-0.06,0.05) \lineto(-0.07,0.05) \lineto(-0.08,0.06) \lineto(-0.09,0.07) \lineto(-0.1,0.07) \lineto(-0.11,0.08) \lineto(-0.11,0.09) \lineto(-0.12,0.1) \lineto(-0.13,0.1) \lineto(-0.14,0.11) \lineto(-0.15,0.12) \lineto(-0.16,0.13) \lineto(-0.17,0.14) \lineto(-0.17,0.14) \lineto(-0.18,0.15) \lineto(-0.19,0.16) \lineto(-0.2,0.17) \lineto(-0.21,0.18) \lineto(-0.22,0.19) \lineto(-0.23,0.2) \lineto(-0.23,0.21) \lineto(-0.24,0.22) \lineto(-0.25,0.23) \lineto(-0.26,0.24) \lineto(-0.27,0.25) \lineto(-0.28,0.26) \lineto(-0.28,0.27) \lineto(-0.29,0.28) \lineto(-0.3,0.29) \lineto(-0.31,0.3) \lineto(-0.31,0.31) \lineto(-0.32,0.32) \lineto(-0.33,0.34) \lineto(-0.34,0.35) \lineto(-0.34,0.36) \lineto(-0.35,0.37) \lineto(-0.36,0.39) \lineto(-0.36,0.4) \lineto(-0.37,0.41) \lineto(-0.38,0.42) \lineto(-0.38,0.44) \lineto(-0.39,0.45) \lineto(-0.39,0.47) \lineto(-0.4,0.48) \lineto(-0.4,0.49) \lineto(-0.41,0.51) \lineto(-0.41,0.52) \lineto(-0.42,0.54) \lineto(-0.42,0.55) \lineto(-0.42,0.57) \lineto(-0.43,0.59) \lineto(-0.43,0.6) \lineto(-0.43,0.62) \lineto(-0.44,0.64) \lineto(-0.44,0.65) \lineto(-0.44,0.67) \lineto(-0.44,0.69) \lineto(-0.44,0.7) \lineto(-0.44,0.72) \lineto(-0.44,0.74) \lineto(-0.44,0.76) \lineto(-0.44,0.79) \lineto(-0.43,0.81) \lineto(-0.43,0.84) 
\lineto(-0.42,0.87) \lineto(-0.41,0.91) \lineto(-0.4,0.95) \lineto(-0.38,0.99) \lineto(-0.36,1.04) \lineto(-0.36,1.04) \lineto(-0.36,1.04) } \pscustom[linecolor=ccqqqq]{\moveto(0.22,1.56) \lineto(0.22,1.56) \lineto(0.27,1.58) \lineto(0.33,1.61) \lineto(0.39,1.63) \lineto(0.45,1.66) \lineto(0.52,1.68) \lineto(0.59,1.7) \lineto(0.67,1.72) \lineto(0.75,1.74) \lineto(0.84,1.76) \lineto(0.94,1.78) \lineto(0.98,1.78) \lineto(1.04,1.79) \lineto(1.09,1.8) \lineto(1.14,1.8) \lineto(1.2,1.8) \lineto(1.25,1.81) \lineto(1.31,1.81) \lineto(1.37,1.81) \lineto(1.43,1.81) \lineto(1.49,1.81) \lineto(1.56,1.81) \lineto(1.62,1.8) \lineto(1.68,1.8) \lineto(1.75,1.79) \lineto(1.82,1.78) \lineto(1.88,1.77) \lineto(1.95,1.76) \lineto(2.02,1.74) \lineto(2.09,1.73) \lineto(2.16,1.71) \lineto(2.22,1.69) \lineto(2.29,1.67) \lineto(2.36,1.64) \lineto(2.42,1.62) \lineto(2.49,1.59) \lineto(2.55,1.56) \lineto(2.61,1.53) \lineto(2.67,1.49) \lineto(2.73,1.46) \lineto(2.78,1.42) \lineto(2.84,1.38) \lineto(2.89,1.34) \lineto(2.98,1.26) \lineto(3.05,1.16) \lineto(3.12,1.07) \lineto(3.16,0.97) \lineto(3.19,0.87) \lineto(3.21,0.78) \lineto(3.21,0.68) \lineto(3.2,0.58) \lineto(3.18,0.49) \lineto(3.14,0.4) \lineto(3.09,0.32) \lineto(3.04,0.24) \lineto(2.97,0.17) \lineto(2.9,0.1) \lineto(2.83,0.04) \lineto(2.75,-0.02) \lineto(2.67,-0.07) \lineto(2.59,-0.11) \lineto(2.51,-0.15) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \lineto(2.49,-0.16) \moveto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.11,1.5) \lineto(0.12,1.5) \lineto(0.12,1.51) \lineto(0.14,1.51) \lineto(0.17,1.53) \lineto(0.22,1.56) \lineto(0.22,1.56) } 
\psplot[linecolor=qqttzz]{-7.02}{6.76}{(-10.16-3.44*x)/4.92} \psplot[linecolor=qqttzz]{-7.02}{6.76}{(-19.8-3.44*x)/4.92} \psplot[linecolor=qqttzz]{-7.02}{6.76}{(--23.42-3.44*x)/4.92} \pscustom[linecolor=ccqqqq]{\moveto(-0.08,0.4) \lineto(-0.08,0.41) \lineto(-0.07,0.42) \lineto(-0.06,0.42) \lineto(-0.05,0.43) \lineto(-0.04,0.44) \lineto(-0.03,0.45) \lineto(-0.02,0.46) \lineto(-0.01,0.46) \lineto(0,0.47) \lineto(0.01,0.48) \lineto(0.02,0.48) \lineto(0.03,0.49) \lineto(0.04,0.5) \lineto(0.05,0.5) \lineto(0.06,0.51) \lineto(0.07,0.52) \lineto(0.09,0.52) \lineto(0.1,0.53) \lineto(0.11,0.53) \lineto(0.12,0.54) \lineto(0.14,0.54) \lineto(0.15,0.55) \lineto(0.17,0.55) \lineto(0.19,0.56) \lineto(0.21,0.57) \lineto(0.23,0.57) \lineto(0.25,0.58) \lineto(0.27,0.58) \lineto(0.3,0.59) \lineto(0.33,0.59) \lineto(0.36,0.59) \lineto(0.39,0.6) \lineto(0.43,0.6) \lineto(0.47,0.6) \lineto(0.51,0.6) \lineto(0.56,0.59) \lineto(0.6,0.59) \lineto(0.65,0.58) \lineto(0.71,0.57) \lineto(0.76,0.55) \lineto(0.81,0.53) \lineto(0.86,0.51) \lineto(0.92,0.47) \lineto(0.96,0.44) \lineto(1,0.4) \lineto(1.03,0.35) \lineto(1.05,0.3) \lineto(1.06,0.25) \lineto(1.06,0.2) \lineto(1.05,0.15) \lineto(1.02,0.1) \lineto(0.98,0.06) \lineto(0.94,0.02) \lineto(0.89,-0.02) \lineto(0.83,-0.05) \lineto(0.77,-0.07) \lineto(0.71,-0.09) \lineto(0.66,-0.11) \lineto(0.6,-0.12) \lineto(0.55,-0.12) \lineto(0.49,-0.13) \lineto(0.45,-0.13) \lineto(0.4,-0.12) \lineto(0.36,-0.12) \lineto(0.32,-0.12) \lineto(0.29,-0.11) \lineto(0.26,-0.11) \lineto(0.23,-0.1) \lineto(0.2,-0.09) \lineto(0.18,-0.08) \lineto(0.15,-0.08) \lineto(0.13,-0.07) \lineto(0.11,-0.06) \lineto(0.1,-0.05) \lineto(0.08,-0.05) \lineto(0.07,-0.04) \lineto(0.05,-0.03) \lineto(0.04,-0.03) \lineto(0.03,-0.02) \lineto(0.02,-0.01) \lineto(0.01,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) 
\lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.04,0.03) \lineto(-0.05,0.04) \lineto(-0.06,0.05) \lineto(-0.07,0.06) \lineto(-0.07,0.06) \lineto(-0.08,0.07) \lineto(-0.09,0.08) \lineto(-0.09,0.09) \lineto(-0.1,0.09) \lineto(-0.1,0.1) \lineto(-0.11,0.11) \lineto(-0.11,0.12) \lineto(-0.12,0.13) \lineto(-0.12,0.14) \lineto(-0.13,0.15) \lineto(-0.13,0.15) \lineto(-0.13,0.16) \lineto(-0.14,0.17) \lineto(-0.14,0.18) \lineto(-0.14,0.19) \lineto(-0.14,0.2) \lineto(-0.14,0.21) \lineto(-0.15,0.22) \lineto(-0.15,0.23) \lineto(-0.15,0.24) \lineto(-0.15,0.25) \lineto(-0.14,0.26) \lineto(-0.14,0.27) \lineto(-0.14,0.28) \lineto(-0.14,0.29) \lineto(-0.14,0.3) \lineto(-0.13,0.31) \lineto(-0.13,0.32) \lineto(-0.13,0.33) \lineto(-0.12,0.34) \lineto(-0.12,0.35) \lineto(-0.11,0.36) \lineto(-0.11,0.37) \lineto(-0.1,0.38) \lineto(-0.09,0.38) \lineto(-0.09,0.39) \lineto(-0.08,0.4) \lineto(-0.08,0.4) } \pscustom[linecolor=ccqqqq]{\moveto(-0.95,-1.38) \lineto(-0.95,-1.38) \lineto(-0.89,-1.37) \lineto(-0.82,-1.37) \lineto(-0.77,-1.36) \lineto(-0.71,-1.35) \lineto(-0.65,-1.34) \lineto(-0.6,-1.33) \lineto(-0.54,-1.32) \lineto(-0.49,-1.31) \lineto(-0.39,-1.28) \lineto(-0.3,-1.25) \lineto(-0.22,-1.21) \lineto(-0.14,-1.18) \lineto(-0.08,-1.14) \lineto(-0.02,-1.1) \lineto(0.04,-1.06) \lineto(0.13,-0.98) \lineto(0.2,-0.91) \lineto(0.25,-0.84) \lineto(0.29,-0.77) \lineto(0.31,-0.7) \lineto(0.33,-0.65) \lineto(0.33,-0.59) \lineto(0.34,-0.54) \lineto(0.33,-0.5) \lineto(0.33,-0.46) \lineto(0.32,-0.42) \lineto(0.31,-0.38) \lineto(0.3,-0.35) \lineto(0.29,-0.32) \lineto(0.27,-0.29) \lineto(0.26,-0.27) \lineto(0.24,-0.25) \lineto(0.23,-0.23) \lineto(0.22,-0.21) \lineto(0.2,-0.19) \lineto(0.19,-0.17) \lineto(0.18,-0.15) \lineto(0.16,-0.14) 
\lineto(0.15,-0.13) \lineto(0.14,-0.11) \lineto(0.12,-0.1) \lineto(0.11,-0.09) \lineto(0.1,-0.08) \lineto(0.09,-0.07) \lineto(0.08,-0.06) \lineto(0.07,-0.05) \lineto(0.06,-0.04) \lineto(0.05,-0.04) \lineto(0.04,-0.03) \lineto(0.03,-0.02) \lineto(0.02,-0.01) \lineto(0.01,-0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.05,0.03) \lineto(-0.06,0.04) \lineto(-0.07,0.05) \lineto(-0.08,0.05) \lineto(-0.09,0.06) \lineto(-0.1,0.07) \lineto(-0.12,0.07) \lineto(-0.13,0.08) \lineto(-0.14,0.09) \lineto(-0.15,0.09) \lineto(-0.17,0.1) \lineto(-0.18,0.11) \lineto(-0.2,0.11) \lineto(-0.21,0.12) \lineto(-0.23,0.13) \lineto(-0.25,0.14) \lineto(-0.26,0.14) \lineto(-0.28,0.15) \lineto(-0.3,0.16) \lineto(-0.32,0.16) \lineto(-0.34,0.17) \lineto(-0.36,0.18) \lineto(-0.38,0.19) \lineto(-0.4,0.19) \lineto(-0.43,0.2) \lineto(-0.45,0.21) \lineto(-0.48,0.22) \lineto(-0.5,0.22) \lineto(-0.53,0.23) \lineto(-0.56,0.24) \lineto(-0.59,0.24) \lineto(-0.62,0.25) \lineto(-0.65,0.25) \lineto(-0.68,0.26) \lineto(-0.71,0.27) \lineto(-0.75,0.27) \lineto(-0.78,0.28) \lineto(-0.82,0.28) \lineto(-0.86,0.28) \lineto(-0.9,0.29) \lineto(-0.94,0.29) \lineto(-0.98,0.29) \lineto(-1.03,0.29) \lineto(-1.07,0.29) \lineto(-1.12,0.29) \lineto(-1.17,0.29) \lineto(-1.22,0.29) \lineto(-1.27,0.28) \lineto(-1.32,0.28) \lineto(-1.37,0.27) \lineto(-1.42,0.26) \lineto(-1.48,0.25) \lineto(-1.53,0.24) \lineto(-1.59,0.23) \lineto(-1.65,0.21) \lineto(-1.7,0.2) \lineto(-1.76,0.18) \lineto(-1.82,0.16) \lineto(-1.87,0.13) \lineto(-1.93,0.11) \lineto(-1.98,0.08) \lineto(-2.04,0.05) \lineto(-2.11,0.01) \lineto(-2.17,-0.04) 
\lineto(-2.23,-0.09) \lineto(-2.29,-0.15) \lineto(-2.34,-0.22) \lineto(-2.39,-0.3) \lineto(-2.41,-0.35) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \moveto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.97,-1.38) \lineto(-0.96,-1.38) \lineto(-0.95,-1.38) } \pscustom[linecolor=ccqqqq]{\moveto(-0.73,0.13) \lineto(-0.75,0.13) \lineto(-0.77,0.13) \lineto(-0.79,0.12) \lineto(-0.81,0.12) \lineto(-0.83,0.11) \lineto(-0.85,0.11) \lineto(-0.87,0.1) \lineto(-0.89,0.1) \lineto(-0.91,0.09) \lineto(-0.93,0.08) \lineto(-0.95,0.07) \lineto(-0.96,0.07) \lineto(-0.98,0.06) \lineto(-1,0.05) \lineto(-1.01,0.04) \lineto(-1.03,0.03) \lineto(-1.05,0.03) \lineto(-1.06,0.02) \lineto(-1.08,0.01) \lineto(-1.09,0) \lineto(-1.11,-0.02) \lineto(-1.12,-0.03) \lineto(-1.14,-0.04) \lineto(-1.15,-0.06) \lineto(-1.17,-0.07) \lineto(-1.19,-0.09) \lineto(-1.2,-0.11) \lineto(-1.22,-0.13) \lineto(-1.23,-0.16) \lineto(-1.24,-0.19) \lineto(-1.25,-0.21) \lineto(-1.26,-0.25) \lineto(-1.26,-0.28) \lineto(-1.26,-0.32) \lineto(-1.25,-0.35) \lineto(-1.23,-0.39) \lineto(-1.21,-0.44) \lineto(-1.18,-0.48) \lineto(-1.13,-0.52) \lineto(-1.08,-0.56) \lineto(-1.01,-0.6) \lineto(-0.94,-0.64) \lineto(-0.85,-0.67) \lineto(-0.75,-0.69) \lineto(-0.66,-0.7) \lineto(-0.6,-0.71) \lineto(-0.55,-0.71) \lineto(-0.5,-0.71) \lineto(-0.45,-0.71) \lineto(-0.35,-0.69) \lineto(-0.26,-0.67) \lineto(-0.18,-0.65) \lineto(-0.1,-0.62) \lineto(-0.04,-0.58) \lineto(0.01,-0.55) \lineto(0.06,-0.51) \lineto(0.09,-0.48) \lineto(0.12,-0.44) \lineto(0.14,-0.4) \lineto(0.16,-0.37) \lineto(0.17,-0.34) \lineto(0.17,-0.31) \lineto(0.17,-0.28) \lineto(0.17,-0.26) 
\lineto(0.17,-0.23) \lineto(0.16,-0.21) \lineto(0.16,-0.19) \lineto(0.15,-0.17) \lineto(0.14,-0.15) \lineto(0.13,-0.14) \lineto(0.12,-0.12) \lineto(0.11,-0.11) \lineto(0.1,-0.09) \lineto(0.09,-0.08) \lineto(0.08,-0.07) \lineto(0.07,-0.06) \lineto(0.06,-0.05) \lineto(0.05,-0.04) \lineto(0.04,-0.03) \lineto(0.03,-0.03) \lineto(0.02,-0.02) \lineto(0.02,-0.01) \lineto(0.01,-0.01) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.05,0.03) \lineto(-0.06,0.04) \lineto(-0.07,0.04) \lineto(-0.08,0.05) \lineto(-0.09,0.05) \lineto(-0.1,0.06) \lineto(-0.12,0.07) \lineto(-0.13,0.07) \lineto(-0.14,0.08) \lineto(-0.15,0.08) \lineto(-0.17,0.09) \lineto(-0.18,0.09) \lineto(-0.2,0.1) \lineto(-0.21,0.1) \lineto(-0.23,0.11) \lineto(-0.24,0.11) \lineto(-0.26,0.11) \lineto(-0.27,0.12) \lineto(-0.29,0.12) \lineto(-0.31,0.13) \lineto(-0.32,0.13) \lineto(-0.34,0.13) \lineto(-0.36,0.14) \lineto(-0.38,0.14) \lineto(-0.4,0.14) \lineto(-0.41,0.14) \lineto(-0.43,0.14) \lineto(-0.45,0.15) \lineto(-0.47,0.15) \lineto(-0.49,0.15) \lineto(-0.51,0.15) \lineto(-0.53,0.15) \lineto(-0.55,0.15) \lineto(-0.57,0.15) \lineto(-0.59,0.15) \lineto(-0.61,0.15) \lineto(-0.63,0.15) \lineto(-0.65,0.14) \lineto(-0.67,0.14) \lineto(-0.7,0.14) \lineto(-0.72,0.14) \lineto(-0.73,0.13) } \pscustom[linecolor=ccqqqq]{\moveto(-2.36,-0.85) \lineto(-2.31,-0.91) \lineto(-2.26,-0.97) \lineto(-2.19,-1.03) \lineto(-2.11,-1.09) \lineto(-2.02,-1.15) \lineto(-1.93,-1.2) \lineto(-1.87,-1.22) \lineto(-1.82,-1.25) \lineto(-1.76,-1.27) \lineto(-1.7,-1.29) \lineto(-1.64,-1.3) \lineto(-1.58,-1.32) \lineto(-1.52,-1.33) \lineto(-1.46,-1.35) \lineto(-1.39,-1.36) \lineto(-1.33,-1.36) \lineto(-1.27,-1.37) \lineto(-1.2,-1.38) \lineto(-1.14,-1.38) \lineto(-1.07,-1.38) \lineto(-1.01,-1.38) \lineto(-0.99,-1.38) \lineto(-0.99,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \lineto(-0.98,-1.38) \moveto(-2.42,-0.37) 
\lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.37) \lineto(-2.42,-0.38) \lineto(-2.42,-0.38) \lineto(-2.43,-0.39) \lineto(-2.43,-0.4) \lineto(-2.44,-0.44) \lineto(-2.45,-0.51) \lineto(-2.45,-0.56) \lineto(-2.45,-0.62) \lineto(-2.43,-0.68) \lineto(-2.41,-0.74) \lineto(-2.38,-0.81) \lineto(-2.36,-0.85) } \psplot[linecolor=qqttzz]{-7.02}{6.76}{(--13.67-3.44*x)/4.92} \pscustom[linecolor=ccqqqq]{\moveto(-0.03,0.79) \lineto(0.02,0.82) \lineto(0.08,0.86) \lineto(0.14,0.89) \lineto(0.22,0.92) \lineto(0.3,0.95) \lineto(0.38,0.98) \lineto(0.48,1) \lineto(0.57,1.01) \lineto(0.62,1.02) \lineto(0.68,1.02) \lineto(0.73,1.03) \lineto(0.78,1.03) \lineto(0.83,1.03) \lineto(0.88,1.02) \lineto(0.93,1.02) \lineto(0.99,1.02) \lineto(1.04,1.01) \lineto(1.09,1) \lineto(1.19,0.98) \lineto(1.28,0.95) \lineto(1.36,0.92) \lineto(1.44,0.89) \lineto(1.51,0.85) \lineto(1.58,0.81) \lineto(1.63,0.76) \lineto(1.68,0.72) \lineto(1.72,0.68) \lineto(1.75,0.63) \lineto(1.78,0.59) \lineto(1.8,0.54) \lineto(1.81,0.49) \lineto(1.82,0.44) \lineto(1.82,0.39) \lineto(1.82,0.34) \lineto(1.81,0.29) \lineto(1.79,0.24) \lineto(1.76,0.19) \lineto(1.73,0.15) \lineto(1.69,0.1) \lineto(1.65,0.06) \lineto(1.61,0.03) \lineto(1.56,-0.01) \lineto(1.51,-0.04) \lineto(1.45,-0.07) \lineto(1.4,-0.1) \lineto(1.34,-0.12) \lineto(1.28,-0.14) \lineto(1.22,-0.16) \lineto(1.17,-0.17) \lineto(1.11,-0.19) \lineto(1.05,-0.2) \lineto(1,-0.2) \lineto(0.95,-0.21) \lineto(0.9,-0.21) \lineto(0.85,-0.22) \lineto(0.8,-0.22) \lineto(0.75,-0.22) \lineto(0.71,-0.22) \lineto(0.67,-0.21) \lineto(0.63,-0.21) \lineto(0.59,-0.21) \lineto(0.55,-0.2) \lineto(0.52,-0.2) \lineto(0.49,-0.19) \lineto(0.45,-0.18) \lineto(0.42,-0.18) \lineto(0.4,-0.17) \lineto(0.37,-0.16) \lineto(0.34,-0.16) \lineto(0.32,-0.15) 
\lineto(0.29,-0.14) \lineto(0.27,-0.14) \lineto(0.25,-0.13) \lineto(0.23,-0.12) \lineto(0.21,-0.11) \lineto(0.19,-0.11) \lineto(0.18,-0.1) \lineto(0.16,-0.09) \lineto(0.15,-0.08) \lineto(0.13,-0.08) \lineto(0.12,-0.07) \lineto(0.1,-0.06) \lineto(0.09,-0.06) \lineto(0.08,-0.05) \lineto(0.07,-0.04) \lineto(0.06,-0.04) \lineto(0.04,-0.03) \lineto(0.03,-0.02) \lineto(0.02,-0.02) \lineto(0.02,-0.01) \lineto(0.01,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,0) \lineto(-0.01,0.01) \lineto(-0.02,0.01) \lineto(-0.03,0.02) \lineto(-0.04,0.03) \lineto(-0.05,0.04) \lineto(-0.06,0.04) \lineto(-0.07,0.05) \lineto(-0.08,0.06) \lineto(-0.09,0.07) \lineto(-0.1,0.08) \lineto(-0.1,0.09) \lineto(-0.11,0.1) \lineto(-0.12,0.11) \lineto(-0.13,0.12) \lineto(-0.15,0.13) \lineto(-0.16,0.15) \lineto(-0.16,0.16) \lineto(-0.17,0.17) \lineto(-0.18,0.19) \lineto(-0.19,0.2) \lineto(-0.2,0.22) \lineto(-0.21,0.24) \lineto(-0.22,0.26) \lineto(-0.23,0.28) \lineto(-0.23,0.3) \lineto(-0.24,0.32) \lineto(-0.24,0.34) \lineto(-0.25,0.36) \lineto(-0.25,0.39) \lineto(-0.25,0.42) \lineto(-0.25,0.45) \lineto(-0.24,0.48) \lineto(-0.24,0.51) \lineto(-0.23,0.54) \lineto(-0.21,0.57) \lineto(-0.19,0.61) \lineto(-0.17,0.64) \lineto(-0.14,0.68) \lineto(-0.11,0.72) \lineto(-0.07,0.75) \lineto(-0.03,0.79) \lineto(-0.03,0.79) } \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.22,-0.32){\blue{$O$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse of a system of parallel lines.} \label{fig:RectasParaleInvElipse} \end{figure} \subsection{Elliptic Inversion of Ellipses} \begin{definition} 
If two ellipses $E_1$ and $E_2$ have equal eccentricities, then they are said to be of the same semi-form. If, in addition, their principal axes are parallel, then they are called homothetic, which is denoted by $E_1 \sim E_2$. \end{definition} \begin{theorem}\label{elipsem} Let $\chi$ be an ellipse and let $\chi'$ be its elliptic inverse curve with respect to the ellipse $E$. Suppose that $\chi$ and $E$ are homothetic ($\chi \sim E$). Then \begin{enumerate}[i.] \item If $\chi$ does not pass through the center of inversion, then $\chi'$ is an ellipse that does not pass through the center of inversion and $\chi' \sim E$, see Figure \ref{fig:elipsesinversioncaso3}. \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-3.38,-1.7)(5.68,3.42) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-2.44,1.48){$E$} \psplot{-3.38}{5.68}{(-0-0*x)/4} \psplot{-3.38}{5.68}{(--7.28-0*x)/4} \rput{0}(3.5,1.82){\psellipse[linecolor=qqttzz](0,0)(1.87,1.12)} \pscustom[linecolor=ccqqqq]{\moveto(0.57,0.69) \lineto(0.58,0.7) \lineto(0.58,0.72) \lineto(0.59,0.73) \lineto(0.59,0.75) \lineto(0.6,0.76) \lineto(0.61,0.78) \lineto(0.63,0.8) \lineto(0.64,0.81) \lineto(0.66,0.83) \lineto(0.67,0.85) \lineto(0.7,0.87) \lineto(0.72,0.89) \lineto(0.75,0.9) \lineto(0.78,0.92) \lineto(0.81,0.94) \lineto(0.85,0.96) \lineto(0.85,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96)
\lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.86,0.96) \lineto(0.87,0.96) \lineto(0.88,0.97) \lineto(0.89,0.97) \lineto(0.93,0.99) \lineto(0.98,1) \lineto(1.04,1.01) \lineto(1.09,1.02) \lineto(1.15,1.02) \lineto(1.22,1.03) \lineto(1.28,1.02) \lineto(1.35,1.02) \lineto(1.42,1.01) \lineto(1.48,0.99) \lineto(1.55,0.97) \lineto(1.61,0.95) \lineto(1.67,0.92) \lineto(1.72,0.89) \lineto(1.77,0.85) \lineto(1.8,0.81) \lineto(1.83,0.77) \lineto(1.85,0.73) \lineto(1.87,0.69) \lineto(1.87,0.64) \lineto(1.87,0.6) \lineto(1.86,0.56) \lineto(1.85,0.53) \lineto(1.83,0.49) \lineto(1.81,0.46) \lineto(1.78,0.43) \lineto(1.75,0.41) \lineto(1.72,0.38) \lineto(1.69,0.36) \lineto(1.65,0.34) \lineto(1.62,0.32) \lineto(1.58,0.31) \lineto(1.55,0.3) \lineto(1.52,0.29) \lineto(1.49,0.28) \lineto(1.45,0.27) \lineto(1.42,0.26) \lineto(1.39,0.26) \lineto(1.36,0.25) \lineto(1.33,0.25) \lineto(1.3,0.25) \lineto(1.28,0.24) \lineto(1.25,0.24) \lineto(1.23,0.24) \lineto(1.2,0.24) \lineto(1.18,0.24) \lineto(1.16,0.24) \lineto(1.14,0.25) \lineto(1.11,0.25) \lineto(1.09,0.25) \lineto(1.08,0.25) \lineto(1.06,0.25) \lineto(1.04,0.26) \lineto(1.02,0.26) \lineto(1,0.26) \lineto(0.99,0.27) \lineto(0.97,0.27) \lineto(0.96,0.28) \lineto(0.94,0.28) \lineto(0.93,0.28) \lineto(0.91,0.29) \lineto(0.9,0.29) \lineto(0.89,0.3) \lineto(0.87,0.3) \lineto(0.86,0.31) \lineto(0.85,0.31) \lineto(0.84,0.32) \lineto(0.82,0.32) \lineto(0.81,0.33) \lineto(0.8,0.33) \lineto(0.79,0.34) \lineto(0.78,0.34) \lineto(0.77,0.35) \lineto(0.76,0.35) \lineto(0.75,0.36) \lineto(0.74,0.37) \lineto(0.73,0.37) \lineto(0.72,0.38) \lineto(0.72,0.38) \lineto(0.71,0.39) \lineto(0.7,0.4) \lineto(0.69,0.4) \lineto(0.68,0.41) \lineto(0.68,0.42) \lineto(0.67,0.42) \lineto(0.66,0.43) \lineto(0.65,0.44) \lineto(0.65,0.45) \lineto(0.64,0.45) \lineto(0.63,0.46) \lineto(0.63,0.47) \lineto(0.62,0.48) \lineto(0.61,0.49) \lineto(0.61,0.5) \lineto(0.6,0.5) \lineto(0.6,0.51) \lineto(0.59,0.52) 
\lineto(0.59,0.53) \lineto(0.59,0.54) \lineto(0.58,0.55) \lineto(0.58,0.56) \lineto(0.57,0.57) \lineto(0.57,0.58) \lineto(0.57,0.59) \lineto(0.57,0.61) \lineto(0.57,0.62) \lineto(0.57,0.63) \lineto(0.57,0.64) \lineto(0.57,0.66) \lineto(0.57,0.67) \lineto(0.57,0.68) \lineto(0.57,0.69) } \rput[tl](1.72,3.12){$\chi$} \rput[tl](0.24,1.32){$\chi'$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.22,-0.28){\blue{$O$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Theorem \ref{elipsem}, Case $i$.} \label{fig:elipsesinversioncaso3} \end{figure} \item If $\chi$ passes through the center of inversion, then $\chi'$ is a line, see Figure \ref{fig:elipsesinversioncaso4y5}. \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-4.68,-1.7)(4.74,2.44) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](-2.28,1.62){$E$} \psplot[linestyle=dashed,dash=1pt 1pt]{-4.68}{4.74}{(-0-0*x)/4} \psplot[linestyle=dashed,dash=1pt 1pt]{-4.68}{4.74}{(--2.72-0*x)/4} \rput[tl](-3.86,1.86){$\chi$} \rput[tl](0.82,2.46){$\chi'$} \rput{0}(-1.69,0.68){\psellipse[linecolor=qqttzz](0,0)(2.04,1.22)} \pscustom[linecolor=ccqqqq]{\moveto(-0.95,0.8) \lineto(-0.98,0.78) \lineto(-1,0.76) \lineto(-1.02,0.74) \lineto(-1.05,0.72) \lineto(-1.07,0.7) \lineto(-1.09,0.68) \lineto(-1.11,0.66) \lineto(-1.13,0.64) \lineto(-1.16,0.62) \lineto(-1.18,0.6) \lineto(-1.2,0.58) \lineto(-1.22,0.56) \lineto(-1.24,0.55) \lineto(-1.26,0.53) \lineto(-1.28,0.51) \lineto(-1.3,0.49) \lineto(-1.32,0.47) \lineto(-1.34,0.45) \lineto(-1.37,0.43) \lineto(-1.39,0.41) \lineto(-1.41,0.39) \lineto(-1.43,0.37) \lineto(-1.45,0.36) \lineto(-1.47,0.34) \lineto(-1.5,0.32) \lineto(-1.52,0.3) \lineto(-1.53,0.29) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28)
\lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.54,0.28) \lineto(-1.55,0.27) \lineto(-1.56,0.26) \lineto(-1.58,0.24) \lineto(-1.6,0.22) \lineto(-1.62,0.2) \lineto(-1.65,0.18) \lineto(-1.67,0.16) \lineto(-1.7,0.13) \lineto(-1.73,0.11) \lineto(-1.75,0.09) \lineto(-1.78,0.06) \lineto(-1.81,0.04) \lineto(-1.84,0.01) \lineto(-1.87,-0.02) \lineto(-1.9,-0.05) \lineto(-1.93,-0.08) \lineto(-1.97,-0.11) \lineto(-2,-0.14) \lineto(-2.04,-0.17) \lineto(-2.08,-0.21) \lineto(-2.12,-0.24) \lineto(-2.16,-0.28) \lineto(-2.21,-0.32) \lineto(-2.26,-0.36) \lineto(-2.31,-0.41) \lineto(-2.36,-0.45) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \moveto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.49) \lineto(-0.18,1.49) \lineto(-0.18,1.49) \lineto(-0.19,1.49) \lineto(-0.2,1.48) \lineto(-0.22,1.45) \lineto(-0.27,1.42) \lineto(-0.32,1.37) \lineto(-0.36,1.33) \lineto(-0.41,1.29) \lineto(-0.45,1.25) \lineto(-0.49,1.22) \lineto(-0.53,1.18) \lineto(-0.56,1.15) \lineto(-0.6,1.12) \lineto(-0.63,1.09) \lineto(-0.66,1.06) \lineto(-0.69,1.03) \lineto(-0.73,1.01) \lineto(-0.75,0.98) \lineto(-0.78,0.96) \lineto(-0.81,0.93) 
\lineto(-0.84,0.91) \lineto(-0.86,0.88) \lineto(-0.89,0.86) \lineto(-0.91,0.84) \lineto(-0.94,0.82) \lineto(-0.95,0.8) } \pscustom[linecolor=ccqqqq]{\moveto(-2.41,-0.5) \lineto(-2.41,-0.5) \lineto(-2.48,-0.56) \lineto(-2.54,-0.62) \lineto(-2.61,-0.68) \lineto(-2.68,-0.74) \lineto(-2.77,-0.82) \lineto(-2.85,-0.9) \lineto(-2.95,-0.98) \lineto(-3,-1.03) \lineto(-3.06,-1.08) \lineto(-3.12,-1.13) \lineto(-3.18,-1.18) \lineto(-3.24,-1.24) \lineto(-3.31,-1.3) \lineto(-3.38,-1.37) \lineto(-3.46,-1.43) \lineto(-3.54,-1.51) \lineto(-3.63,-1.58) \lineto(-3.72,-1.67) \lineto(-3.82,-1.75) \moveto(0.83,2.39) \lineto(0.76,2.33) \lineto(0.69,2.27) \lineto(0.62,2.21) \lineto(0.56,2.16) \lineto(0.51,2.11) \lineto(0.45,2.06) \lineto(0.4,2.01) \lineto(0.35,1.97) \lineto(0.26,1.88) \lineto(0.17,1.81) \lineto(0.09,1.74) \lineto(0.02,1.67) \lineto(-0.04,1.61) \lineto(-0.11,1.56) \lineto(-0.17,1.51) \lineto(-0.17,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \lineto(-0.18,1.5) \moveto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.37,-0.47) \lineto(-2.38,-0.47) \lineto(-2.38,-0.47) \lineto(-2.38,-0.47) \lineto(-2.38,-0.47) \lineto(-2.38,-0.47) \lineto(-2.38,-0.47) \lineto(-2.38,-0.47) \lineto(-2.39,-0.48) \lineto(-2.4,-0.49) \lineto(-2.41,-0.5) \lineto(-2.41,-0.5) } \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.22,-0.28){\blue{$O$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Theorem \ref{elipsem}, Case $ii$. } \label{fig:elipsesinversioncaso4y5} \end{figure} \item If $\chi$ is orthogonal to $E$, then $\chi'$ is the ellipse itself. 
\end{enumerate} \end{theorem} \begin{proof} \emph{i.} Let $\chi$ be the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} + Dx + Ey + F=0 \ (F\neq 0)$. Applying $\psi$ to this equation gives $\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{D}{F}x + \frac{E}{F}y + \frac{1}{F}=0$. Indeed, \begin{align*} \frac{x^2}{a^2} + \frac{y^2}{b^2} + Dx + Ey + F=&0\\ \frac{\left(\frac{a^2b^2x}{b^2x^2+a^2y^2}\right)^2}{a^2} + \frac{\left(\frac{a^2b^2y}{b^2x^2+a^2y^2}\right)^2}{b^2} + D\left(\frac{a^2b^2x}{b^2x^2+a^2y^2}\right) + E\left(\frac{a^2b^2y}{b^2x^2+a^2y^2}\right) + F=&0\\ a^2b^4x^2 + a^4b^2y^2 + Da^2b^2 x (b^2x^2 + a^2y^2) + Ea^2b^2 y (b^2x^2 + a^2y^2) + F(b^2x^2 + a^2y^2)^2=&0\\ \frac{x^2}{a^2} + \frac{y^2}{b^2} + Dx\left(\frac{x^2}{a^2} + \frac{y^2}{b^2}\right) + Ey\left(\frac{x^2}{a^2} + \frac{y^2}{b^2}\right) + F\left(\frac{x^2}{a^2} + \frac{y^2}{b^2}\right)^2 =&0\\ \left(\frac{x^2}{a^2} + \frac{y^2}{b^2}\right)\left(1 + Dx + Ey + F\left(\frac{x^2}{a^2} + \frac{y^2}{b^2}\right)\right) =&0 \end{align*} Since $\frac{x^2}{a^2} + \frac{y^2}{b^2} \neq 0$ at every point other than the center of inversion, the second factor must vanish, and dividing it by $F$ yields \begin{align*} \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{D}{F}x + \frac{E}{F}y + \frac{1}{F}=&0. \end{align*} The proofs of \emph{ii} and \emph{iii} run as in \emph{i}. \end{proof} \subsection{Elliptic Inversion of Other Curves} \begin{theorem}\label{curves} The inverse of any conic not of the same semi-form as the central conic of inversion and passing through the center of inversion is a cubic curve. \end{theorem} \begin{proof} Let $\chi$ be the conic $Ax^2 + Bxy + Cy^2 + Dx + Ey =0$ (where $A=1/a^2$, $B=0$ and $C=1/b^2$ cannot hold simultaneously). Applying $\psi$ to this equation and multiplying by $(b^2x^2 + a^2y^2)^2/(a^2b^2)$, we have \begin{align*} Aa^2b^2x^2 + Ba^2b^2xy + Ca^2b^2y^2 + Db^2x^3 + Da^2xy^2 + Eb^2x^2y + Ea^2y^3&=0, \end{align*} which is the equation of a cubic curve. \end{proof} \begin{theorem} The inverse of any conic not of the same semi-form as the central conic of inversion and not passing through the center of inversion is a curve of the fourth degree. \end{theorem} \begin{proof} Similar to the proof of Theorem \ref{curves}.
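For concreteness, the omitted computation can be sketched as follows (this repeats the substitution $\psi$ used above, now with a constant term $F \neq 0$): if $\chi$ is the conic $Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0$, applying $\psi$ and multiplying by $(b^2x^2 + a^2y^2)^2$ gives
\begin{align*}
a^4b^4\left(Ax^2 + Bxy + Cy^2\right) + a^2b^2\left(Dx + Ey\right)\left(b^2x^2 + a^2y^2\right) + F\left(b^2x^2 + a^2y^2\right)^2 &= 0.
\end{align*}
Since $F \neq 0$, the term $F(b^2x^2 + a^2y^2)^2$ has degree four, so the inverse curve is of the fourth degree.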
\end{proof} \begin{example} In Figure \ref{fig:CircunInvElipse}, we show the elliptic inverse of a circumference $\chi$. \end{example} \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-4.92,-1.68)(3.02,3.92) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](2.14,1.5){$E$} \pscircle[linecolor=qqttzz](-2.8,1.96){1.2} \pscustom[linecolor=ccqqqq]{\moveto(-1.19,1.3) \lineto(-1.19,1.3) \lineto(-1.13,1.29) \lineto(-1.07,1.27) \lineto(-1.02,1.26) \lineto(-0.97,1.24) \lineto(-0.87,1.21) \lineto(-0.79,1.17) \lineto(-0.72,1.13) \lineto(-0.66,1.09) \lineto(-0.61,1.06) \lineto(-0.56,1.02) \lineto(-0.52,0.99) \lineto(-0.49,0.95) \lineto(-0.46,0.92) \lineto(-0.44,0.89) \lineto(-0.42,0.87) \lineto(-0.4,0.84) \lineto(-0.38,0.81) \lineto(-0.37,0.79) \lineto(-0.36,0.77) \lineto(-0.35,0.75) \lineto(-0.34,0.73) \lineto(-0.34,0.71) \lineto(-0.33,0.69) \lineto(-0.33,0.68) \lineto(-0.33,0.66) \lineto(-0.33,0.65) \lineto(-0.33,0.63) \lineto(-0.33,0.62) \lineto(-0.33,0.61) \lineto(-0.33,0.6) \lineto(-0.33,0.59) \lineto(-0.33,0.58) \lineto(-0.34,0.57) \lineto(-0.34,0.56) \lineto(-0.35,0.55) \lineto(-0.35,0.54) \lineto(-0.36,0.54) \lineto(-0.36,0.53) \lineto(-0.37,0.52) \lineto(-0.37,0.52) \lineto(-0.38,0.51) \lineto(-0.39,0.51) \lineto(-0.4,0.5) \lineto(-0.41,0.5) \lineto(-0.41,0.49) \lineto(-0.42,0.49) \lineto(-0.43,0.48) \lineto(-0.44,0.48) \lineto(-0.45,0.48) \lineto(-0.47,0.47) \lineto(-0.48,0.47) \lineto(-0.49,0.47) \lineto(-0.5,0.46) \lineto(-0.52,0.46) \lineto(-0.53,0.46) \lineto(-0.54,0.46) \lineto(-0.56,0.45) \lineto(-0.58,0.45) \lineto(-0.59,0.45) \lineto(-0.61,0.44) \lineto(-0.63,0.44) \lineto(-0.64,0.44) \lineto(-0.66,0.44) \lineto(-0.68,0.43) \lineto(-0.7,0.43) \lineto(-0.72,0.43) \lineto(-0.75,0.42) \lineto(-0.77,0.42) \lineto(-0.79,0.42) \lineto(-0.81,0.41) \lineto(-0.84,0.41)
\lineto(-0.86,0.4) \lineto(-0.89,0.4) \lineto(-0.9,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.4) \lineto(-0.91,0.39) \lineto(-0.91,0.39) \lineto(-0.92,0.39) \lineto(-0.93,0.39) \lineto(-0.96,0.39) \lineto(-0.98,0.38) \lineto(-1.01,0.37) \lineto(-1.04,0.37) \lineto(-1.07,0.36) \lineto(-1.1,0.35) \lineto(-1.14,0.34) \lineto(-1.17,0.34) \lineto(-1.2,0.33) \lineto(-1.23,0.32) \lineto(-1.27,0.31) \lineto(-1.3,0.3) \lineto(-1.34,0.29) \lineto(-1.37,0.28) \lineto(-1.41,0.27) \lineto(-1.45,0.26) \lineto(-1.49,0.25) \lineto(-1.52,0.24) \lineto(-1.56,0.23) \lineto(-1.61,0.22) \lineto(-1.65,0.21) \lineto(-1.69,0.2) \lineto(-1.74,0.2) \lineto(-1.78,0.19) \lineto(-1.83,0.18) \lineto(-1.88,0.18) \lineto(-1.93,0.18) \lineto(-1.99,0.18) \lineto(-2.05,0.18) \lineto(-2.1,0.19) \lineto(-2.16,0.19) \lineto(-2.23,0.21) \lineto(-2.29,0.22) \lineto(-2.36,0.25) \lineto(-2.42,0.27) \lineto(-2.44,0.28) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \moveto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.21,1.31) \lineto(-1.21,1.31) \lineto(-1.19,1.3) \lineto(-1.19,1.3) 
} \pscustom[linecolor=ccqqqq]{\moveto(-2.6,1.01) \lineto(-2.5,1.09) \lineto(-2.45,1.13) \lineto(-2.38,1.17) \lineto(-2.32,1.2) \lineto(-2.24,1.23) \lineto(-2.17,1.26) \lineto(-2.09,1.28) \lineto(-2.01,1.3) \lineto(-1.93,1.32) \lineto(-1.85,1.33) \lineto(-1.77,1.34) \lineto(-1.69,1.34) \lineto(-1.61,1.34) \lineto(-1.53,1.34) \lineto(-1.46,1.34) \lineto(-1.39,1.33) \lineto(-1.32,1.32) \lineto(-1.25,1.31) \lineto(-1.23,1.31) \lineto(-1.23,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \lineto(-1.22,1.31) \moveto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.45,0.29) \lineto(-2.46,0.29) \lineto(-2.46,0.29) \lineto(-2.46,0.29) \lineto(-2.47,0.3) \lineto(-2.48,0.3) \lineto(-2.51,0.32) \lineto(-2.56,0.36) \lineto(-2.62,0.41) \lineto(-2.67,0.47) \lineto(-2.71,0.54) \lineto(-2.74,0.61) \lineto(-2.75,0.7) \lineto(-2.73,0.79) \lineto(-2.7,0.88) \lineto(-2.64,0.97) \lineto(-2.6,1.01) } \rput[tl](-4.78,3.26){$\chi$} \rput[tl](-2.94,1.58){$\chi'$} \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.22,-0.35){\blue{$O$}} \psdots[dotstyle=*,linecolor=blue](-2.8,1.96) \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse of a Circumference.} \label{fig:CircunInvElipse} \end{figure} \begin{example} In Figure \ref{fig:ParabolasInvEliptica}, we show the elliptic inverse of a parabola $\chi$. 
\end{example} \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-4.42,-3.52)(4.6,2.24) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](2.24,1.82){$E$} \rput[tl](-4.12,1.46){$\chi$} \rput[tl](-1.46,0.5){$\chi'$} \psplot{-4.42}{4.6}{(-4-0*x)/1} \rput{0}(0,-3.25){\psplot[linecolor=qqttzz]{-6}{6}{x^2/2/1.5}} \pscustom[linecolor=ccqqqq]{\moveto(-0.12,-0.71) \lineto(-0.13,-0.71) \lineto(-0.14,-0.71) \lineto(-0.15,-0.71) \lineto(-0.17,-0.72) \lineto(-0.18,-0.72) \lineto(-0.2,-0.72) \lineto(-0.21,-0.73) \lineto(-0.23,-0.73) \lineto(-0.26,-0.74) \lineto(-0.28,-0.75) \lineto(-0.31,-0.75) \lineto(-0.34,-0.76) \lineto(-0.37,-0.77) \lineto(-0.41,-0.78) \lineto(-0.46,-0.79) \lineto(-0.52,-0.8) \lineto(-0.58,-0.82) \lineto(-0.67,-0.83) \lineto(-0.71,-0.83) \lineto(-0.77,-0.84) \lineto(-0.83,-0.84) \lineto(-0.89,-0.84) \lineto(-0.97,-0.84) \lineto(-1.05,-0.84) \lineto(-1.14,-0.83) \lineto(-1.19,-0.82) \lineto(-1.24,-0.82) \lineto(-1.3,-0.81) \lineto(-1.35,-0.79) \lineto(-1.41,-0.78) \lineto(-1.47,-0.76) \lineto(-1.53,-0.74) \lineto(-1.6,-0.71) \lineto(-1.66,-0.68) \lineto(-1.73,-0.64) \lineto(-1.79,-0.6) \lineto(-1.85,-0.56) \lineto(-1.9,-0.51) \lineto(-1.95,-0.45) \lineto(-1.99,-0.39) \lineto(-2.02,-0.32) \lineto(-2.04,-0.25) \lineto(-2.04,-0.17) \lineto(-2.04,-0.1) \lineto(-2.01,-0.02) \lineto(-1.97,0.05) \lineto(-1.92,0.12) \lineto(-1.85,0.19) \lineto(-1.78,0.25) \lineto(-1.69,0.3) \lineto(-1.6,0.34) \lineto(-1.5,0.38) \lineto(-1.4,0.41) \lineto(-1.3,0.43) \lineto(-1.21,0.45) \lineto(-1.11,0.46) \lineto(-1.02,0.47) \lineto(-0.94,0.47) \lineto(-0.86,0.46) \lineto(-0.78,0.46) \lineto(-0.71,0.45) \lineto(-0.65,0.43) \lineto(-0.59,0.42) \lineto(-0.54,0.41) \lineto(-0.49,0.39) \lineto(-0.4,0.36) \lineto(-0.33,0.33) \lineto(-0.27,0.3) \lineto(-0.19,0.25) \lineto(-0.13,0.2) \lineto(-0.09,0.16) 
\lineto(-0.06,0.12) \lineto(-0.04,0.1) \lineto(-0.02,0.07) \lineto(-0.01,0.05) \lineto(-0.01,0.04) \lineto(0,0.03) \lineto(0,0.02) \lineto(0,0.01) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0.01) \lineto(0,0.02) \lineto(0,0.03) \lineto(0.01,0.04) \lineto(0.01,0.05) \lineto(0.02,0.07) \lineto(0.04,0.1) \lineto(0.06,0.12) \lineto(0.09,0.16) \lineto(0.13,0.2) \lineto(0.19,0.25) \lineto(0.27,0.3) \lineto(0.33,0.33) \lineto(0.4,0.36) \lineto(0.49,0.39) \lineto(0.54,0.41) \lineto(0.59,0.42) \lineto(0.65,0.43) \lineto(0.71,0.45) \lineto(0.78,0.46) \lineto(0.86,0.46) \lineto(0.94,0.47) \lineto(1.02,0.47) \lineto(1.11,0.46) \lineto(1.21,0.45) \lineto(1.3,0.43) \lineto(1.4,0.41) \lineto(1.5,0.38) \lineto(1.6,0.34) \lineto(1.69,0.3) \lineto(1.78,0.25) \lineto(1.85,0.19) \lineto(1.92,0.12) \lineto(1.97,0.05) \lineto(2.01,-0.02) \lineto(2.04,-0.1) \lineto(2.04,-0.17) \lineto(2.04,-0.25) \lineto(2.02,-0.32) \lineto(1.99,-0.39) \lineto(1.95,-0.45) \lineto(1.9,-0.51) \lineto(1.85,-0.56) \lineto(1.79,-0.6) \lineto(1.73,-0.64) \lineto(1.66,-0.68) \lineto(1.6,-0.71) \lineto(1.53,-0.74) \lineto(1.47,-0.76) \lineto(1.41,-0.78) \lineto(1.35,-0.79) \lineto(1.3,-0.81) \lineto(1.24,-0.82) \lineto(1.19,-0.82) \lineto(1.14,-0.83) \lineto(1.05,-0.84) \lineto(0.97,-0.84) \lineto(0.89,-0.84) \lineto(0.83,-0.84) \lineto(0.77,-0.84) \lineto(0.71,-0.83) \lineto(0.67,-0.83) \lineto(0.58,-0.82) \lineto(0.52,-0.8) \lineto(0.46,-0.79) \lineto(0.41,-0.78) \lineto(0.37,-0.77) \lineto(0.34,-0.76) \lineto(0.31,-0.75) \lineto(0.28,-0.75) \lineto(0.26,-0.74) 
\lineto(0.23,-0.73) \lineto(0.21,-0.73) \lineto(0.2,-0.72) \lineto(0.18,-0.72) \lineto(0.17,-0.72) \lineto(0.15,-0.71) \lineto(0.14,-0.71) \lineto(0.13,-0.71) \lineto(0.12,-0.71) \lineto(0.11,-0.7) \lineto(0.1,-0.7) \lineto(0.09,-0.7) \lineto(0.08,-0.7) \lineto(0.07,-0.7) \lineto(0.06,-0.7) \lineto(0.06,-0.7) \lineto(0.05,-0.69) \lineto(0.04,-0.69) \lineto(0.04,-0.69) \lineto(0.03,-0.69) \lineto(0.03,-0.69) \lineto(0.02,-0.69) \lineto(0.01,-0.69) \lineto(0.01,-0.69) \lineto(0,-0.69) \lineto(0,-0.69) \lineto(-0.01,-0.69) \lineto(-0.01,-0.69) \lineto(-0.02,-0.69) \lineto(-0.02,-0.69) \lineto(-0.03,-0.69) \lineto(-0.04,-0.69) \lineto(-0.04,-0.69) \lineto(-0.05,-0.69) \lineto(-0.06,-0.7) \lineto(-0.06,-0.7) \lineto(-0.07,-0.7) \lineto(-0.08,-0.7) \lineto(-0.09,-0.7) \lineto(-0.1,-0.7) \lineto(-0.1,-0.7) \lineto(-0.11,-0.7) \lineto(-0.12,-0.71) } \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.22,-0.28){\blue{$O$}} \psdots[dotstyle=*,linecolor=blue](0,-4) \rput[bl](0.08,-3.88){\blue{$B$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse of a Parabola.} \label{fig:ParabolasInvEliptica} \end{figure} \begin{example} In Figure \ref{fig:hiperbolainversioncaso1}, we show the elliptic inverse of an hyperbola $\chi$. 
\end{example} \begin{figure}[h] \begin{center} \newrgbcolor{qqttzz}{0 0.2 0.6} \newrgbcolor{ccqqqq}{0.8 0 0} \psset{xunit=0.7cm,yunit=0.7cm,algebraic=true,dotstyle=o,dotsize=3pt 0,linewidth=0.8pt,arrowsize=3pt 2,arrowinset=0.25} \begin{pspicture*}(-6.48,-2.62)(6.86,2.9) \rput{0}(0,0){\psellipse(0,0)(2.5,1.5)} \rput[tl](2.24,1.82){$E$} \rput[tl](-3.36,2.06){$\chi$} \rput[tl](-1.54,1.3){$\chi'$} \rput{0}(0,0){\parametricplot[linecolor=qqttzz]{-0.99}{0.99}{2.85*(1+t^2)/(1-t^2)|2.03*2*t/(1-t^2)}} \rput{0}(0,0){\parametricplot[linecolor=qqttzz]{-0.99}{0.99}{2.85*(-1-t^2)/(1-t^2)|2.03*(-2)*t/(1-t^2)}} \pscustom[linecolor=ccqqqq]{\moveto(-1.27,0.5) \lineto(-1.2,0.5) \lineto(-1.13,0.49) \lineto(-1.07,0.49) \lineto(-1,0.48) \lineto(-0.93,0.46) \lineto(-0.87,0.45) \lineto(-0.8,0.43) \lineto(-0.74,0.41) \lineto(-0.68,0.39) \lineto(-0.63,0.37) \lineto(-0.57,0.35) \lineto(-0.52,0.32) \lineto(-0.42,0.27) \lineto(-0.33,0.22) \lineto(-0.25,0.17) \lineto(-0.18,0.13) \lineto(-0.13,0.09) \lineto(-0.08,0.05) \lineto(-0.04,0.03) \lineto(-0.02,0.01) \lineto(-0.01,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0.01,-0.01) \lineto(0.03,-0.02) \lineto(0.06,-0.05) \lineto(0.11,-0.08) \lineto(0.16,-0.11) \lineto(0.23,-0.16) \lineto(0.3,-0.21) \lineto(0.39,-0.26) \lineto(0.49,-0.31) \lineto(0.54,-0.33) \lineto(0.59,-0.35) \lineto(0.64,-0.38) \lineto(0.7,-0.4) \lineto(0.76,-0.42) \lineto(0.82,-0.44) \lineto(0.89,-0.45) \lineto(0.95,-0.47) \lineto(1.02,-0.48) \lineto(1.09,-0.49) \lineto(1.16,-0.5) \lineto(1.22,-0.5) \lineto(1.29,-0.5) \lineto(1.36,-0.5) \lineto(1.43,-0.5) \lineto(1.5,-0.49) \lineto(1.56,-0.48) \lineto(1.62,-0.47) \lineto(1.68,-0.46) \lineto(1.74,-0.44) \lineto(1.8,-0.42) \lineto(1.85,-0.4) \lineto(1.94,-0.35) \lineto(2.02,-0.3) \lineto(2.08,-0.25) \lineto(2.13,-0.19) 
\lineto(2.16,-0.14) \lineto(2.18,-0.08) \lineto(2.19,-0.03) \lineto(2.19,0.02) \lineto(2.18,0.07) \lineto(2.17,0.12) \lineto(2.14,0.18) \lineto(2.09,0.23) \lineto(2.04,0.29) \lineto(1.96,0.34) \lineto(1.88,0.39) \lineto(1.83,0.41) \lineto(1.78,0.43) \lineto(1.72,0.45) \lineto(1.66,0.46) \lineto(1.6,0.48) \lineto(1.54,0.49) \lineto(1.47,0.49) \lineto(1.4,0.5) \lineto(1.34,0.5) \lineto(1.27,0.5) \lineto(1.2,0.5) \lineto(1.13,0.49) \lineto(1.06,0.49) \lineto(0.99,0.48) \lineto(0.93,0.46) \lineto(0.86,0.45) \lineto(0.8,0.43) \lineto(0.74,0.41) \lineto(0.68,0.39) \lineto(0.62,0.37) \lineto(0.57,0.35) \lineto(0.52,0.32) \lineto(0.42,0.27) \lineto(0.33,0.22) \lineto(0.25,0.17) \lineto(0.18,0.13) \lineto(0.12,0.09) \lineto(0.08,0.05) \lineto(0.04,0.03) \lineto(0.02,0.01) \lineto(0.01,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(0,0) \lineto(-0.01,-0.01) \lineto(-0.03,-0.02) \lineto(-0.06,-0.05) \lineto(-0.11,-0.08) \lineto(-0.16,-0.11) \lineto(-0.23,-0.16) \lineto(-0.3,-0.21) \lineto(-0.39,-0.26) \lineto(-0.49,-0.31) \lineto(-0.54,-0.33) \lineto(-0.59,-0.35) \lineto(-0.64,-0.38) \lineto(-0.7,-0.4) \lineto(-0.76,-0.42) \lineto(-0.82,-0.44) \lineto(-0.89,-0.45) \lineto(-0.95,-0.47) \lineto(-1.02,-0.48) \lineto(-1.09,-0.49) \lineto(-1.16,-0.5) \lineto(-1.22,-0.5) \lineto(-1.29,-0.5) \lineto(-1.36,-0.5) \lineto(-1.43,-0.5) \lineto(-1.5,-0.49) \lineto(-1.56,-0.48) \lineto(-1.62,-0.47) \lineto(-1.68,-0.46) \lineto(-1.74,-0.44) \lineto(-1.8,-0.42) \lineto(-1.85,-0.4) \lineto(-1.94,-0.35) \lineto(-2.02,-0.3) \lineto(-2.08,-0.25) \lineto(-2.13,-0.19) \lineto(-2.16,-0.14) \lineto(-2.18,-0.08) \lineto(-2.19,-0.03) \lineto(-2.19,0.02) \lineto(-2.18,0.07) \lineto(-2.17,0.12) \lineto(-2.14,0.18) \lineto(-2.09,0.23) \lineto(-2.04,0.29) \lineto(-1.96,0.34) \lineto(-1.88,0.39) \lineto(-1.83,0.41) \lineto(-1.78,0.43) \lineto(-1.72,0.45) \lineto(-1.66,0.46) \lineto(-1.6,0.48) \lineto(-1.54,0.49) \lineto(-1.47,0.49) 
\lineto(-1.4,0.5) \lineto(-1.34,0.5) \lineto(-1.27,0.5) } \begin{scriptsize} \psdots[dotstyle=*,linecolor=blue](0,0) \rput[bl](-0.04,-0.36){\blue{$O$}} \end{scriptsize} \end{pspicture*} \end{center} \caption{Inversion in an Ellipse of a Hyperbola.} \label{fig:hiperbolainversioncaso1} \end{figure} Note that the inversion in an ellipse is not conformal. \section{Pappus Elliptic Chain} The classical inversion has many applications, including the Pappus Chain Theorem, Feuerbach's Theorem, the Steiner Porism, and the problem of Apollonius \cite{BLA, OGI, PED}. In this section, we generalize the Pappus Chain Theorem with respect to ellipses. \begin{theorem} Let $E$ be a semiellipse with principal diameter $\overline{AB}$, and let $E'$ and $E_0$ be semiellipses on the same side of $\overline{AB}$ with principal diameters $\overline{AC}$ and $\overline{CD}$ respectively, with $E\sim E_0$ and $E_0 \sim E'$; see Figure \ref{cadenaeli}. Let $E_1, E_2, \dots$ be a sequence of ellipses tangent to $E$ and $E'$, such that $E_n$ is tangent to $E_{n-1}$ and $E_n \sim E_{n-1}$ for all $n\geq 1$. Let $r_n$ be the semi-minor axis of $E_n$ and $h_n$ the distance of the center of $E_n$ from $\overline{AB}$. Then $h_{n}=2nr_{n}$. \end{theorem} \begin{proof} Let $\psi_i$ be the elliptic inversion such that $\psi_i(E_{i})=E_{i}$ (in Figure \ref{cadenaeli} we select $i=2$), i.e., $\psi_i=\mathcal{E}(B,t_i)$, where $t_i$ is the length of the tangent segment to the ellipse $E$ from the point $B$. \begin{figure}[h] \begin{center} \includegraphics[scale=1]{Pappus1.eps} \end{center} \caption{Elliptic Pappus Chain.} \label{cadenaeli} \end{figure} By Theorem \ref{elipsem}, $\psi_i(E)$ and $\psi_i(E_0)$ are lines perpendicular to the line $\stackrel{\longleftrightarrow}{AB}$ and tangent to the ellipse $E_{i}$. Hence the ellipses $E_{1}, E_{2}, \dots$ invert to ellipses $\psi_i(E_{1}), \psi_i(E_{2}), \dots$ tangent to the parallel lines $\psi_i(E)$ and $\psi_i(E_0)$. Whence $h_{i}=2ir_{i}$.
\end{proof} \section{Concluding remarks} The study of elliptic inversion suggests interesting and challenging problems. For example, one could generalize the Steiner Porism or the problem of Apollonius with respect to ellipses.
TITLE: How to find all matrices simultaneously fulfilling two different polynomial equations? QUESTION [1 upvotes]: Say we have two different polynomial equations for a matrix $\bf P$: $$\cases{\displaystyle \sum_{k=0}^{N_1} c_{k1} {\bf P}^k = {\bf 0}\\\displaystyle\sum_{k=0}^{N_2} c_{k2} {\bf P}^k = {\bf 0}}$$ How can we characterize / describe all possible solutions $\bf P$? Own work Inspired by the previous question regarding a single polynomial equation, let us denote by $\mathcal S_1$ and $\mathcal S_2$ the sets of eigenvalues available to matrices satisfying each equation alone: $$\mathcal S_1 = \{\lambda_{11},\cdots, \lambda_{N_11}\}$$ $$\mathcal S_2 = \{\lambda_{12},\cdots, \lambda_{N_22}\}$$ so that each $\lambda_{i1}$ and $\lambda_{i2}$ satisfies $$\cases{\displaystyle \sum_{k=0}^{N_1} c_{k1} {\lambda_{i1}}^k = 0\\\displaystyle\sum_{k=0}^{N_2} c_{k2} {\lambda_{i2}}^k = 0}$$ Now let us build a new set $\mathcal S = \mathcal S_1\cap \mathcal S_2$, and then draw eigenvalues from $\mathcal S$ so that every element is present at least once. When we are done drawing, we do the same as in the previous question: put the drawn $\lambda$s into a diagonal matrix $\bf D$ and form $${\bf P = SDS}^{-1}$$ Will this suffice to be able to build all possible matrices $\bf P$ which solve both of the above equations? REPLY [2 votes]: Satisfying multiple polynomials is the same as satisfying a single polynomial (the GCD of the original polynomials) - even if you had an infinite set of polynomials to satisfy! In particular, suppose we're working over a field $F$; if $P$ is an $n\times n$ matrix, we can always take any polynomial $a_0+a_1x+\ldots+a_kx^k$ in $F[x]$ and evaluate it at $P$ - just by evaluating $a_0I+a_1P+\ldots+a_kP^k$. The important fact is that this forms a map $\operatorname{eval}_P:F[x] \rightarrow \operatorname{Mat}_{n\times n}(F)$. This is a ring homomorphism, since it respects addition and multiplication. Note that a matrix $P$ "satisfies" a polynomial $f$ if and only if $\operatorname{eval}_P(f) = 0$.
This lets us tap into ring-theoretic reasoning: we are really asking that the kernel of $\operatorname{eval}_P$ contain two different polynomials. However, the kernel is an ideal, and we know that $F[x]$ is a principal ideal domain: every ideal is generated by a single polynomial - which can be computed as the GCD of the original polynomials. Thus, "satisfying a set of polynomials" (even an infinite set) always reduces to "satisfying some single polynomial" - although, a word of warning: this GCD can sometimes be a (non-zero) constant polynomial, which no matrix satisfies, in which case the set of polynomials is not satisfied by any matrix. Another way to state this is that the kernel of $\operatorname{eval}_P$ is always just the set of multiples of some polynomial $m$. This polynomial (often assumed to be monic, to ensure uniqueness) is known as the minimal polynomial of $P$. When you say that a matrix satisfies some polynomial, you're really saying that its minimal polynomial divides that polynomial - and there are a number of good tools for describing which matrices have which minimal polynomial. It's worth noting that, in the general case of a single polynomial, the reasoning of the question to which you link does not extend; for instance, a matrix can satisfy the equation $P^2=0$ without being the zero matrix. In general, for an algebraically closed field such as $\mathbb C$, every matrix can be written in Jordan canonical form, and its minimal polynomial will be the product of the factors $(x-\lambda)^k$, where $\lambda$ runs over the eigenvalues and $k$ is the size of the largest block with that eigenvalue - for fields that are not algebraically closed, one may use the rational canonical form to argue similarly.
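As a concrete sanity check, here is a small sketch in Python with `sympy`, using a made-up pair of polynomials (the specific $f$, $g$, and matrices below are illustrative, not from the question): the two constraints collapse to their GCD, and a matrix satisfies both exactly when its minimal polynomial divides that GCD.

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Poly(x**3 - 2*x**2 - x + 2, x)   # (x - 1)(x + 1)(x - 2)
g = sp.Poly(x**2 - 3*x + 2, x)          # (x - 1)(x - 2)

# Satisfying both constraints is equivalent to satisfying gcd(f, g) = 0.
h = sp.gcd(f, g)                        # here: x**2 - 3*x + 2

def eval_poly_at_matrix(p, M):
    """Evaluate a sympy Poly at a square matrix M via Horner's scheme."""
    n = M.shape[0]
    result = sp.zeros(n, n)
    for c in p.all_coeffs():            # coefficients, highest degree first
        result = result * M + c * sp.eye(n)
    return result

# A diagonalizable matrix whose eigenvalues are all roots of the GCD
# satisfies both original equations.
P = sp.diag(1, 2, 1)
assert eval_poly_at_matrix(f, P) == sp.zeros(3, 3)
assert eval_poly_at_matrix(g, P) == sp.zeros(3, 3)

# A matrix with an eigenvalue outside the roots of the GCD fails at
# least one of the equations, even if it satisfies the other.
Q = sp.diag(-1, 1, 2)                   # -1 is a root of f but not of g
assert eval_poly_at_matrix(f, Q) == sp.zeros(3, 3)
assert eval_poly_at_matrix(g, Q) != sp.zeros(3, 3)
```

This also shows where the question's construction needs the minimal-polynomial caveat: picking eigenvalues from $\mathcal S_1 \cap \mathcal S_2$ and diagonalizing produces valid solutions, but it misses non-diagonalizable solutions whose Jordan blocks are still small enough for the minimal polynomial to divide the GCD.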
\begin{document} \title[Calculations about the WRT invariants of closed three-manifolds] {Optimistic calculations about \\ the Witten--Reshetikhin--Turaev invariants of \\ closed three-manifolds obtained from the figure-eight knot \\ by integral Dehn surgeries} \author{Hitoshi Murakami} \address{ Department of Mathematics, Tokyo Institute of Technology, Oh-okayama, Meguro, Tokyo 152-8551, Japan } \email{starshea@tky3.3web.ne.jp} \begin{abstract} I calculate, {\em optimistically}, the asymptotic behaviors of the WRT $SU(2)$ invariants for the three-manifolds obtained from the figure-eight knot by $p$-surgeries with $p=0,1,2,\dots,10$, from which one can extract the volumes and the Chern--Simons invariants of these closed manifolds. I conjecture that this also holds for general closed three-manifolds. \end{abstract} \keywords{} \subjclass{Primary 57M27; Secondary 57M25, 57M50, 17B37, 81R50} \maketitle \section{Introduction} In \cite{Kashaev:LETMP97} R.~Kashaev defined a link invariant by using the quantum dilogarithm and confirmed that his invariants grow exponentially, with growth rates equal to the hyperbolic volumes (times a constant), for three hyperbolic knots with small numbers of crossings. He also conjectured that this holds for every hyperbolic knot. \par J.~Murakami and I proved that Kashaev's link invariant is essentially (up to normalization) the same as the Jones polynomial colored with the $N$-dimensional representation evaluated at the $N$-th root of unity. Moreover we generalized Kashaev's conjecture to the following conjecture. \begin{conj}[Volume Conjecture, \cite{Murakami/Murakami:volume}] Let $J_N(K)$ be the $N$-colored Jones polynomial of a knot $K$ evaluated at $\exp\left(\dfrac{2\pi\sqrt{-1}}{N}\right)$.
Then \begin{equation*} \lim_{N\to\infty}\frac{\log|J_N(K)|}{N}=\frac{v_3}{2\pi}\Vert{K}\Vert, \end{equation*} where $\Vert{K}\Vert$ is the Gromov norm (or the simplicial volume) of the complement of $K$ and $v_3$ is the hyperbolic volume of the regular ideal tetrahedron. \end{conj} Recent developments toward the Volume Conjecture can be found in \cite{Kashaev/Tirkkonen:1999, Yokota:Murasugi70, Yokota:volume, Murakami:4_1, Murakami/Murakami/Okamoto/Takata/Yokota:CS}. \par It is natural to ask whether a similar formula holds for closed three-manifolds, replacing the colored Jones polynomial with the Witten--Reshetikhin--Turaev $SU(2)$ invariant associated with the $N$-th root of unity. But an argument using Heegaard splitting and Topological Quantum Field Theory tells us that the growth of the WRT invariant is polynomial, so that the corresponding limit in the Volume Conjecture vanishes. (After the first attempt of this work I learned this argument from D.~Thurston and J.~Roberts; it was also pointed out by S.~Garoufalidis, V.~Turaev and K.~Walker independently.) \par The aim of this article is to calculate, optimistically, (fake) limits of the logarithms of the WRT invariants of closed three-manifolds obtained from the figure-eight knot by Dehn surgeries with integral coefficients. I will follow Kashaev's calculation in \cite{Kashaev:LETMP97} formally and optimistically, and deduce an analytic function with an integer parameter corresponding to the surgery coefficient. The function turns out to describe not only the (simplicial) volume but also the Chern--Simons invariant of the manifold. (J.~Murakami told me to look at the Chern--Simons invariants after my earlier calculations. See \cite{Murakami/Murakami/Okamoto/Takata/Yokota:CS} for a similar relation between the Chern--Simons invariants and the colored Jones polynomials for knots and links.) \par I do not know what these optimistic calculations mean.
But this is not a coincidence and there should be something behind it! \begin{ack} This article is for the proceedings of the workshop `Recent Progress toward the Volume Conjecture' held at the International Institute for Advanced Study from 14th to 17th March, 2000 (http://www.iias.or.jp/research/suuken/\linebreak[1]20000314eng.html), supported by the Research Institute of Mathematical Sciences, Kyoto University. I thank the IIAS for its hospitality and the RIMS for its financial support. I am grateful to K.~Saito, who encouraged me to organize the workshop. \par Thanks are also due to the participants of the workshop, and to K.~Ichihara, K.~Motegi, J.~Murakami, J.~Roberts, M.~Teragaito, D.~Thurston and Y.~Yokota for their helpful comments. \end{ack} \section{The Witten--Reshetikhin--Turaev invariant} Let $J_n(K;t)$ be the colored Jones polynomial of a knot $K$ associated with the $n$-dimensional representation of the Lie algebra $sl_2(\C)$. We normalize $J_n(K;t)$ so that $J_2(K;t)$ is the Jones polynomial and $J_n(O;t)=\dfrac{t^{n/2}-t^{-n/2}}{t^{1/2}-t^{-1/2}}$ with $O$ the unknot. Due to T.~Le, the colored Jones polynomial of the figure-eight knot $4_1$ is \begin{equation*} J_n(4_1;t)= \sum_{m=0}^{n-1} \prod_{l=1}^{m} \left(t^{(n+l)/2}-t^{-(n+l)/2}\right) \left(t^{(n-l)/2}-t^{-(n-l)/2}\right). \end{equation*} (K.~Habiro obtained the same formula by a different technique.) Let $M_p$ be the closed three-manifold obtained from $S^3$ by Dehn surgery along the figure-eight knot with coefficient $p\in\Z$. We denote by $\tau_N(M_p)$ the Witten--Reshetikhin--Turaev invariant of $M_p$ associated with the Lie algebra $sl_2(\C)$ and with level $N-2$. Then from \cite[(1.9)]{Kirby/Melvin:INVEM91}, \begin{equation} \tau_{N}(M_p) = \sqrt{\dfrac{2}{N}}\sin\dfrac{\pi}{N} \exp\left(\dfrac{-3\pi\sqrt{-1}}{4}\right)q^{(3-p)/4} \sum_{n=1}^{N-1}[n]^2q^{pn^2/4}J_{n}(4_1;q) \end{equation} if $p>0$.
Here $q=\exp\left(\dfrac{2\pi\sqrt{-1}}{N}\right)$ and $[n]=\dfrac{q^{n/2}-q^{-n/2}}{q^{1/2}-q^{-1/2}}$. Note that our $J_n(K;q)$ is $[n]J_{K_0,n}$ with the notation in \cite{Kirby/Melvin:INVEM91}. Since \begin{align*} [n]& \prod_{l=1}^{m} \left(q^{(n+l)/2}-q^{-(n+l)/2}\right) \left(q^{(n-l)/2}-q^{-(n-l)/2}\right) \\ &= \dfrac{\prod_{l=-m}^{m}(q^{(n+l)/2}-q^{-(n+l)/2})}{q^{1/2}-q^{-1/2}} \\ &= \dfrac{(1-q^{n-m})(1-q^{n-m+1})\cdots(1-q^{n+m})\times(-1)q^{-n(2m+1)/2}} {q^{1/2}-q^{-1/2}} \\ &= \dfrac{-q^{-n(2m+1)/2}}{q^{1/2}-q^{-1/2}}\times\dfrac{(q)_{n+m}}{(q)_{n-m-1}} \end{align*} we have \begin{align*} \tau_{N}(M_p)= P(N) \sum_{n=1}^{N-1}\sum_{m=0}^{n-1} \frac{(q)_n(q)_{n+m}}{(q)_{n-1}(q)_{n-m-1}}q^{n(pn/4-m)-n}, \end{align*} where $(q)_k=(1-q)(1-q^2)\cdots(1-q^k)$ and $P(N)$ is a function of $N$ with polynomial growth. Note that this also holds for a non-positive $p$. \section{An optimistic limit} Suppose that we are given a function $S(N)$ of $N$ by the following summation. \begin{equation}\label{eq:sum} S(N) = P(N) \sum_{n_1,n_2,\dots,n_k} q^{Q+L} \prod_{a=1}^{\alpha}(q)^{\varepsilon_a}_{l_a}, \end{equation} where $P(N)$ is a function of $N$ with polynomial growth, $\varepsilon_a=\pm1$, $l_a$ and $L$ are linear functions of $n_1,n_2,\dots,n_k$ (they do not depend on $N$ but may have constant terms), $Q=\displaystyle\sum_{1\le i\le j\le k}r_{ij}n_in_j$, and the summation runs over some range in $\{(n_1,n_2,\dots,n_k) \mid 0\le n_i \le N-1\,(i=1,2,\dots,k)\}$. Note that $q=\exp\left(\dfrac{2\pi\sqrt{-1}}{N}\right)$ and we regard $S$ as a function of $N$ rather than $q$. \par Then {\em an optimistic limit} of $\dfrac{2\pi\sqrt{-1}\log{S(N)}}{N}$, denoted by $\displaystyle\olim_{N\to\infty}\dfrac{2\pi\sqrt{-1}\log{S(N)}}{N}$ is defined as follows. \par First we replace $S(N)/P(N)$ with the following iterated integral $I(N)$ along some contours. 
\begin{equation}\label{eq:integral} I(N):= \idotsint \exp \left[ \dfrac{N}{2\pi\sqrt{-1}} V(z_1,z_2,\dots,z_k) \right] dz_1\,dz_2 \cdots dz_k. \end{equation} Here $V(z_1,z_2,\dots,z_k)$ is defined as follows. Put \begin{equation}\label{eq:V} \tilde{V}(z_1,z_2,\dots,z_k) := -\sum_{a=1}^{\alpha}\varepsilon_a \left\{\Li_2\left(x_a\right)-\dfrac{\pi^2}{6}\right\} +\sum_{1\le i\le j\le k}r_{ij}\log z_i \log z_j, \end{equation} where $z_i=q^{n_i}$ and $x_a=q^{{l_a}'}$ with ${l_a}'$ the degree-one term in $l_a$, and $\Li_2(z)$ is Euler's dilogarithm defined by \begin{equation*} \Li_2(z):=-\int_{0}^{z}\dfrac{\log(1-u)}{u}\,du. \end{equation*} \par Next we consider the following system of partial differential equations: \begin{equation}\label{eq:differential} \dfrac{\partial\,\tilde{V}(z_1,z_2,\dots,z_k)}{\partial\,z_i}=0 \quad(i=1,2,\dots,k). \end{equation} Since \begin{equation*} \dfrac{\partial\,\tilde{V}(z_1,z_2,\dots,z_k)}{\partial\,z_i} = \sum_{a=1}^{\alpha}\varepsilon_al_{ai}\dfrac{\log(1-x_a)}{z_i} + \sum_{j=1}^{k}r_{ij}\dfrac{\log z_j}{z_i}+r_{ii}\dfrac{\log z_i}{z_i} \end{equation*} with ${l_a}'=\sum_{i=1}^{k}l_{ai}n_i$, \eqref{eq:differential} implies the following algebraic equations. \begin{equation}\label{eq:algebraic} {z_i}^{r_{ii}}\prod_{j=1}^{k}{z_j}^{r_{ij}} \prod_{a=1}^{\alpha}(1-x_a)^{\varepsilon_a l_{ai}} =1 \quad(i=1,2,\dots,k). \end{equation} Let $(\zeta_1,\zeta_2,\dots,\zeta_k)$ be a solution to \eqref{eq:algebraic}. \begin{defn}[optimistic limit] We put \begin{equation}\label{eq:definition} V(\zeta_1,\zeta_2,\dots,\zeta_k):= \tilde{V}(\zeta_1,\zeta_2,\dots,\zeta_k) +2\pi\sqrt{-1}\left(\sum_{j=1}^{k}c_j\log\zeta_j\right) \end{equation} and call it {\em an optimistic limit} of $\dfrac{2\pi\sqrt{-1}\log{S(N)}}{N}$ as $N$ goes to infinity. It is denoted by $\displaystyle\olim_{N\to\infty}\dfrac{2\pi\sqrt{-1}\log{S(N)}}{N}$.
Here $c_i$ is chosen so that $\dfrac{\partial\tilde{V}(\zeta_1,\zeta_2,\dots,\zeta_k)}{\partial z_i} +2\pi\sqrt{-1}\sum_{j=1}^{k}\dfrac{c_j}{\zeta_j}=0$ for every $i$. \end{defn} \begin{rem} The term $-\dfrac{\pi^2}{6}$ in \eqref{eq:V} appears so that $V(1,1,\dots,1)=0$ since $\Li_2(1)=\dfrac{\pi^2}{6}$ (see for example \cite[(1.5)]{Kirillov:dilog}). (I learned this from \cite[\S5]{Jun:MSJ2000}.) \end{rem} \begin{rem} Note that $V$ and $\tilde{V}$ satisfy the same algebraic equations \eqref{eq:algebraic} but different partial differential equations \eqref{eq:differential} and that the extra terms in \eqref{eq:definition} are necessary to choose an appropriate branch since $\Li_2$ and $\log$ are multivalued functions. (I learned this from T.~Takata.) \end{rem} \begin{rem} An optimistic limit is {\em not} well defined. There are many ambiguities both in choosing $I(N)$ (I did not say anything about the range of the summation in $S(N)$ and the contours in $I(N)$) and in choosing $(\zeta_1,\zeta_2,\dots,\zeta_k)$. \end{rem} \begin{rem} Following Kashaev \cite{Kashaev:LETMP97}, the behavior of $S(N)$ for large $N$ may be approximated by $P(N)I(N)$ with suitably chosen contours. Moreover by using the saddle point method (see for example \cite[\S7.2]{Marsden/Hoffman:Complex_Analysis}) we see that $I(N)$ (and $S(N)$) behaves like $\displaystyle\exp\left(\dfrac{N}{2\pi\sqrt{-1}}\olim_{N\to\infty} \dfrac{2\pi\sqrt{-1}\log{S(N)}}{N}\right)$ for large $N$ if we choose a solution $(\zeta_1,\zeta_2,\dots,\zeta_k)$ suitably. Even if this is not true I expect that there is a relation between an optimistic limit and the asymptotic behavior of $S(N)$. \end{rem} \par \section{Dehn surgery along the figure-eight knot} Put $k:=2$, $n_1:=n$, $n_2:=m$, $z:=z_1$, $w:=z_2$, $Q:=pn^2/4-mn$, $L:=-n/2$, $\alpha:=4$, $l_1:=n$, $\varepsilon_1:=1$, $l_2:=n-1$, $\varepsilon_2:=-1$, $l_3:=n+m$, $\varepsilon_3:=1$, $l_4:=n-m-1$, and $\varepsilon_4:=-1$ in \eqref{eq:sum}. 
Note that $x_1=z$, $x_2=z$, $x_3=zw$, $x_4=zw^{-1}$, $r_{11}=p/4$, $r_{12}=-1$, $r_{22}=0$, $l_{11}=1$, $l_{12}=0$, $l_{21}=1$, $l_{22}=0$, $l_{31}=1$, $l_{32}=1$, $l_{41}=1$, $l_{42}=-1$. Then \begin{equation*} \tilde{V}(z,w):= -\Li_2(zw)+\Li_2\left(\dfrac{z}{w}\right)+ \dfrac{p}{4}(\log z)^2- \log z\log w \end{equation*} and \begin{equation*} \begin{cases} \dfrac{\partial\,\tilde{V}}{\partial\,z} &= \dfrac{1}{z} \left\{ \log z^{p/2}+\log\left(\dfrac{1-zw}{w-z}\right) \right\}, \\[5mm] \dfrac{\partial\,\tilde{V}}{\partial\,w} &= \dfrac{1}{w} \log\dfrac{(1-zw)(w-z)}{zw}. \end{cases} \end{equation*} Therefore \eqref{eq:algebraic} turns out to be \begin{align} & \begin{cases} z^{p/2}(1-zw)=w-z, \\[5mm] (1-zw)(w-z)=zw, \end{cases} \label{eq:algebraic_fig8_original} \\ \notag \intertext{from which we have} & \begin{cases} w=\dfrac{z+z^{p/2}}{z^{p/2}z+1}, \\[5mm] z^2-\left(\dfrac{z+z^{p/2}}{z^{p/2}z+1}+1 +\dfrac{z^{p/2}z+1}{z+z^{p/2}}\right)z+1=0. \end{cases} \label{eq:algebraic_fig8} \end{align} \par \begin{rem} Since the second equation of \eqref{eq:algebraic_fig8} is symmetric with respect to $z$ and $z^{-1}$, if $\zeta$ is a solution to it then so is $\zeta^{-1}$. (This may be caused by the amphicheirality of the figure-eight knot.) Clearly $\overline{\zeta}$, the complex conjugate of $\zeta$, also satisfies it. Therefore if $(\zeta,\omega)$ is a solution to \eqref{eq:algebraic_fig8} then so are $(\overline{\zeta},\overline{\omega})$, $(\zeta^{-1},\omega)$, and $(\overline{\zeta}^{-1},\overline{\omega})$. \end{rem} \par I will show calculations for $p=0,1,\dots,10$. \subsection{$6$-surgery along the figure-eight knot} I will describe the case where $p=6$ in detail. Note that $M_6$ is hyperbolic. 
In this case there are the following six solutions to \eqref{eq:algebraic_fig8} due to MAPLE V: \begin{equation*} (\zeta_1,\omega_1),\, (\zeta_2,\omega_2),\, ({\zeta_1}^{-1},\omega_1),\, (\overline{\zeta_2},\overline{\omega_2}),\, ({\zeta_2}^{-1},\omega_2),\, ({\overline{\zeta_2}}^{-1},\overline{\omega_2}), \end{equation*} where \begin{align*} (\zeta_1,\omega_1) &= (-0.8294835410-0.5585311587\sqrt{-1}, -2.205569430\phantom{2}-0.3703811357\times10^{-9}\sqrt{-1}) \\ \intertext{and} (\zeta_2,\omega_2) &= (\phantom{-}0.3679390314-0.4972675889\sqrt{-1}, \phantom{-}0.1027847152-0.6654569513\sqrt{-1}). \end{align*} Note that $\left|\zeta_1\right|=1$ and so $\overline{\zeta_1}={\zeta_1}^{-1}$ and ${\overline{\zeta_1}}^{-1}=\zeta_1$. \par The partial derivatives $\dfrac{\partial\,\tilde{V}(\zeta,\omega)}{\partial\,z}$ and $\dfrac{\partial\,\tilde{V}(\zeta,\omega)}{\partial\,w}$ for $(\zeta_1,\omega_1)$ and $(\zeta_2,\omega_2)$ are as follows. \begin{align*} &\begin{cases} \dfrac{\partial\,\tilde{V}}{\partial\,z}(\zeta_1,\omega_1) &= 0.424142903\times10^{-10} -6.283185309\sqrt{-1}, \\[5mm] \dfrac{\partial\,\tilde{V}}{\partial\,w}(\zeta_1,\omega_1) &= 0.2205569430\times10^{-9} -0.2205569430\times10^{-9}\sqrt{-1}, \end{cases} \\ \intertext{and} &\begin{cases} \dfrac{\partial\,\tilde{V}}{\partial\,z}(\zeta_2,\omega_2) &= 0.3868858795\times10^{-9} +0.5171193081\times10^{-9}\sqrt{-1}, \\[5mm] \dfrac{\partial\,\tilde{V}}{\partial\,w}(\zeta_2,\omega_2) &= 0.3623902007\times10^{-10} -0.6757354228\times10^{-9}\sqrt{-1}. 
\end{cases} \end{align*} Therefore we have \begin{align*} V(\zeta_1,\omega_1) &= \tilde{V}(\zeta_1,\omega_1)+2\pi\sqrt{-1}\log\zeta_1, \\ V(\zeta_2,\omega_2) &= \tilde{V}(\zeta_2,\omega_2), \\ V({\zeta_1}^{-1},\omega_1) &= V(\zeta_1,\omega_1), \\ V(\overline{\zeta_2},\overline{\omega_2}) &= \overline{V(\zeta_2,\omega_2)}, \\ V({\zeta_2}^{-1},\omega_2) &= V(\zeta_2,\omega_2), \\ V({\overline{\zeta_2}}^{-1},\overline{\omega_2}) &= \overline{V(\zeta_2,\omega_2)}, \end{align*} with \begin{align*} V(\zeta_1,\omega_1)&=13.76750570 +0.1\times10^{-8}\sqrt{-1} \\ \intertext{and} V(\zeta_2,\omega_2)&=1.340917487 +1.284485301\sqrt{-1} \end{align*} by MAPLE V. \par So there are three optimistic limits (up to 10 digits); $V_1:=13.76750570$, $V_2:=1.340917487+1.284485301\sqrt{-1}$, and $\overline{V_2}$. By using SnapPea \cite{Weeks:SnapPea}, we calculate $\Vol(M_6)=1.2844853$ and $\cs(M_6)=0.0679316734799$ and so we can write \begin{equation*} V_2=\CS(M_6)+\Vol(M_6)\sqrt{-1}. \end{equation*} since $0.0679316734799\times2\pi^2=1.34091748750\dots$. Here $\Vol(M)$ and $\cs(M)$ are the volume and the Chern--Simons invariant \cite{Chern/Simons:ANNMA274} of a closed hyperbolic three-manifold $M$ respectively, and $\CS(M):=2\pi^2\cs(M)$. \subsection{$5,7,8,9,10$-surgeries along the figure-eight knot} Similar calculations using MAPLE V for $p=5,7,8,9,10$ give the following. Note that $M_p$ is also hyperbolic in this case. \begin{obs}\label{obs} For $p=5,6,7,8,9,10$ there is an optimistic limit $V(\zeta_p,\omega_p)$ of $\dfrac{2\pi\sqrt{-1}\log\tau_N(M_p)}{N}$ such that \begin{equation*} V(\zeta_p,\omega_p)=\CS(M_p)+\Vol(M_p)\sqrt{-1} \end{equation*} up to several digits. 
Here \begin{equation*} V(z,w)=-\Li_2(zw)+\Li_2(z/w)+\dfrac{p}{4}\left(\log{z}\right)^2 -\log{z}\log{w} \end{equation*} and \begin{align*} (\zeta_5,\omega_5)&=(0.1979823656-0.4438341209\sqrt{-1},\, 0.007552359501-0.5131157955\sqrt{-1}), \\ (\zeta_6,\omega_6)&=(0.3679390314-0.4972675889\sqrt{-1},\, 0.1027847152\phantom{00}-0.6654569513\sqrt{-1}), \\ (\zeta_7,\omega_7)&=(0.4855046904-0.5042960525\sqrt{-1},\, 0.1761405059\phantom{00}-0.7455559248\sqrt{-1}), \\ (\zeta_8,\omega_8)&=(0.5730134132-0.4940983127\sqrt{-1},\, 0.2327856161\phantom{00}-0.7925519927\sqrt{-1}), \\ (\zeta_9,\omega_9)&=(0.6404276706-0.4765868179\sqrt{-1},\, 0.2769632324\phantom{00}-0.8216401587\sqrt{-1}), \\ (\zeta_{10},\omega_{10})&=(0.6935298015-0.4561607978\sqrt{-1},\, 0.3118108269\phantom{00}-0.8402398912\sqrt{-1}). \end{align*} \end{obs} \begin{rem} For $p=4n+2$ with $n=1,2,\dots,100$, we can observe the same result. See Figures~1 and 2 for $(\zeta_p,\omega_p)$. \end{rem} \input z \input w \subsection{$1,2,3$-surgeries along the figure-eight knot} Next we consider the case where $p=1,2$ and $3$. Note that the manifold $M_p$ for $p=1,2,3$ is a Seifert fibered space. See \cite[p.~95]{Kirby:problems} for details; I learned this from K.~Ichihara. \par In this case, the (simplicial) volume of $M_p$ is zero but SnapPea tells us that it has a non-trivial Chern--Simons invariant. MAPLE V shows that the same observation as Observation~\ref{obs} holds with \begin{align*} (\zeta_1,\omega_1)&=(0.3738178762,0.8019377355), \\ (\zeta_2,\omega_2)&=(0.346014339\phantom{0},0.6180339884), \\ (\zeta_3,\omega_3)&=(0.2819716801,0.4142135623). \end{align*} \subsection{$0$-surgery of the figure-eight knot} In this case $M_0$ is a torus bundle over a circle. For details see \cite[p.~95]{Kirby:problems} again. Both the volume and the Chern--Simons invariant vanish in this case and Observation~\ref{obs} holds, putting $(\zeta_0,\omega_0)=(0.381966011,1)$.
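These critical points are easy to check independently of MAPLE V. Writing out $\partial V/\partial z=\partial V/\partial w=0$ with $\Li_2'(x)=-\log(1-x)/x$ and multiplying by $z$ and $w$ respectively gives the residuals $E_1$ and $E_2$ below; for the tabulated values the principal branch of the logarithm happens to suffice. A minimal numerical sketch (the solutions for $p=0$ and $p=6$ are taken from the text; the reduction to $E_1,E_2$ is mine):

```python
# Verify that the tabulated (zeta_p, omega_p) are critical points of
# V(z,w) = -Li2(z w) + Li2(z/w) + (p/4) log(z)^2 - log(z) log(w).
# Using Li2'(x) = -log(1-x)/x, the equations dV/dz = 0 and dV/dw = 0,
# multiplied by z and w respectively, become E1 = 0 and E2 = 0.
import cmath

def residuals(z, w, p):
    e1 = (cmath.log(1 - z * w) - cmath.log(1 - z / w)
          + (p / 2) * cmath.log(z) - cmath.log(w))
    e2 = cmath.log(1 - z * w) + cmath.log(1 - z / w) - cmath.log(z)
    return e1, e2

solutions = {
    0: (0.381966011 + 0j, 1.0 + 0j),                                # 0-surgery
    6: (0.3679390314 - 0.4972675889j, 0.1027847152 - 0.6654569513j),
}

for p, (z, w) in solutions.items():
    e1, e2 = residuals(z, w, p)
    print(p, abs(e1), abs(e2))   # residuals at the level of the quoted digits
```

For $p=0$ and $\omega_0=1$ the first residual vanishes identically, and the second reduces to $2\log(1-z)-\log z$, whose root is $z=(3-\sqrt{5})/2=0.381966\dots$, matching the value above.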
\subsection{$4$-surgery of the figure-eight knot} It is known that $M_4$ is toroidal and can be obtained by gluing the twisted $I$-bundle over the Klein bottle and the complement of the trefoil, which I learned from K.~Motegi and M.~Teragaito. The gluing map can be found in \cite[p.~95]{Kirby:problems} again. Therefore $\Vol(M_4)=0$. (Here I use $\Vol$ for $v_3$ times the simplicial volume.) Computation using MAPLE V tells us that if we put $(\zeta_4,\omega_4)=(-1,-0.381966011)$ then $V(\zeta_4,\omega_4)=1.973920880+0.1\times10^{-8}\sqrt{-1}$. Note that $1.973920880=2\pi^2\times0.09999999995$. I guess this is the sum of the Chern--Simons invariants of the two pieces (with suitably chosen metrics), which one might regard as the Chern--Simons invariant of the toroidal manifold $M_4$. \subsection{$\infty$-surgery of the figure-eight knot} Put $\zeta_{\infty}:=\exp\left(-\dfrac{2\pi\sqrt{-1}}{p}\right)$ and $\omega_{\infty}:=\exp\left(-\dfrac{\pi\sqrt{-1}}{3}\right)$. Then the left hand side minus the right hand side of the first equation in \eqref{eq:algebraic_fig8_original} is $(\zeta_{\infty}-1)(\omega_{\infty}+1)$ since ${\zeta_{\infty}}^{p/2}=-1$. That of the second equation is $\omega_{\infty}(\zeta_{\infty}-1)^2$ since ${\omega_{\infty}}^2=\omega_{\infty}-1$. Noting that $\zeta_{\infty}\to1$ if $p\to\infty$, $(\zeta_{\infty},\omega_{\infty})$ can be regarded as a solution to \eqref{eq:algebraic_fig8_original} for large $p$. \par Now we calculate $V(\zeta_{\infty},\omega_{\infty})$.
Since $\dfrac{\partial\,\tilde{V}}{\partial\,z}(\zeta_{\infty},\omega_{\infty}) \to0$ and $\dfrac{\partial\,\tilde{V}}{\partial\,w}(\zeta_{\infty},\omega_{\infty}) \to0$ if $p\to\infty$, \begin{align*} V(\zeta_{\infty},\omega_{\infty}) &= -\Li_2(\zeta_{\infty}\omega_{\infty}) +\Li_2\left(\dfrac{\zeta_{\infty}}{\omega_{\infty}}\right) +\dfrac{p}{4}\left(\log\zeta_{\infty}\right)^2 -\log{\zeta_{\infty}}\log{\omega_{\infty}} \\ &\xrightarrow[p\to\infty]{} -\Li_2\left(\exp\left(-\dfrac{\pi\sqrt{-1}}{3}\right)\right) +\Li_2\left(\exp\left(\dfrac{\pi\sqrt{-1}}{3}\right)\right) \\ &= \pi^2 \left\{ \overline{B}_2\left(\dfrac{1}{6}\right) -\overline{B}_2\left(-\dfrac{1}{6}\right) \right\} +\sqrt{-1} \left\{ \Lob\left(\dfrac{\pi}{3}\right) -\Lob\left(-\dfrac{\pi}{3}\right) \right\} \\ &= \sqrt{-1}\times2\Lob\left(\dfrac{\pi}{3}\right). \end{align*} Here I used the fact that \begin{equation*} \Li_2(\exp(\theta\sqrt{-1}))= \pi^2\overline{B}_2\left(\dfrac{\theta}{2\pi}\right) +\sqrt{-1}\Lob(\theta) \end{equation*} with $\Lob$ the Lobachevskij function and $\overline{B}_2$ the second modified Bernoulli polynomial \cite[Proposition B]{Kirillov:dilog}: \begin{equation*} \overline{B}_2(x):= -\dfrac{1}{\pi^2}\sum_{n=1}^{\infty}\dfrac{\cos(2n\pi x)}{n^2}. \end{equation*} Note that $\Lob$ is an odd function and $\overline{B}_2$ is an even function. \par Thus we can write \begin{equation*} V(\zeta_{\infty},\omega_{\infty}) = \CS(4_1)+\sqrt{-1}\Vol(4_1). \end{equation*} This suggests that the series of optimistic limits $\displaystyle \left\{ \olim_{N\to\infty}\dfrac{2\pi\sqrt{-1}\log\tau_{N}(M_p)}{N} \right\}_{p=0,1,\dots,\infty}$ goes to $\displaystyle\lim_{N\to\infty}\dfrac{2\pi\sqrt{-1}\log{J_{N}(4_1)}}{N}$, agreeing with the facts that $\displaystyle\lim_{p\to\infty}\Vol(M_p)=\Vol(4_1)$ and that $\displaystyle\lim_{p\to\infty}\CS(M_p)=\CS(4_1)$. Note that $\CS(4_1)=0$ since $4_1$ is amphicheiral. \begin{rem} It was suggested by A.~Kricker to observe the $p\to\infty$ limit.
\end{rem} \section{Volume Conjecture for closed three-manifolds} Now I propose a very ambiguous conjecture. \begin{conj}[Volume conjecture for closed three-manifolds] For any closed three-manifold $M$ \begin{equation*} \olim_{N\to\infty}\dfrac{2\pi\sqrt{-1}\log\tau_N(M)}{N} =\CS(M)+\sqrt{-1}\Vol(M), \end{equation*} where $\Vol(M):=v_3\Vert{M}\Vert$ with $\Vert{M}\Vert$ the simplicial volume of $M$. \end{conj} A weaker but more precise conjecture is \begin{conj}\label{conj:fig8} For an integer $p$ put \begin{equation*} V_p(z,w):= -\Li_2(zw)+\Li_2\left(\dfrac{z}{w}\right)+ \dfrac{p}{4}(\log z)^2-\log z\log w. \end{equation*} Then there exists $(\zeta_p,\omega_p)$ such that \begin{enumerate} \item $\dfrac{\partial\,V_p}{\partial\,z}(\zeta_p,\omega_p) =\dfrac{\partial\,V_p}{\partial\,w}(\zeta_p,\omega_p)=0$, and \item $V_p(\zeta_p,\omega_p)=\CS(M_p)+\sqrt{-1}\Vol(M_p)$. \end{enumerate} Here $M_p$ is the closed three-manifold obtained from the three-sphere by $p$-surgery along the figure-eight knot. \end{conj} \begin{rem} Conjecture~\ref{conj:fig8} is {\em numerically} true (up to 8 digits or so) for $p=0,\pm1,\pm2,\pm3,\pm5,\pm6,\dots,\pm100$. \end{rem} \begin{rem} Note that $V_p(1,w)=-\Li_2(w)+\Li_2\left(\dfrac{1}{w}\right)$ and this appears in calculations about the figure-eight knot \cite[3.15]{Kashaev:LETMP97}. More precisely $\omega:=\exp\left(-\dfrac{\pi\sqrt{-1}}{3}\right)$ satisfies \begin{enumerate} \item $\dfrac{\partial\,V_p}{\partial\,w}(1,\omega)=0$, and \item $V_p(1,\omega)=\CS(4_1)+\sqrt{-1}\Vol(4_1)$. \end{enumerate} \end{rem} Finally I raise some natural problems. \begin{prob}\label{prob:fig8} Can one generalize Conjecture~\ref{conj:fig8} to rational surgery? Compare with \cite{Yoshida:INVEM85}. \end{prob} \begin{prob} For any knot $K$ (or more generally link) and any {\em rational} number $p$, does there exist a function $V_{p}$ as above? 
\end{prob} \begin{rem} Y.~Yokota told me that Conjecture~\ref{conj:fig8} and Problem~\ref{prob:fig8} can be solved by considering deformations of the tetrahedron decomposition described in \cite{Yokota:volume}. \end{rem} \bibliography{mrabbrev,hitoshi} \bibliographystyle{amsplain} \end{document}
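A quick numerical cross-check of the $\infty$-surgery computation above: by the quoted formula from \cite{Kirillov:dilog}, $\Li_2(e^{\theta\sqrt{-1}})-\Li_2(e^{-\theta\sqrt{-1}})=2\sqrt{-1}\,\Lob(\theta)$ with $\Lob(\theta)=\sum_{n\geq1}\sin(n\theta)/n^2$ in this normalization, so the limit value $2\Lob(\pi/3)$ should equal $\Vol(4_1)\approx2.0298832$. A minimal sketch summing the defining series (the volume is quoted only to the digits shown):

```python
# Sum the series Lob(theta) = sum_{n>=1} sin(n*theta)/n^2 = Im Li2(e^{i*theta})
# and compare 2*Lob(pi/3) with the hyperbolic volume of the figure-eight
# knot complement, Vol(4_1) ~ 2.0298832.
import math

def lob(theta, terms=10**6):
    return sum(math.sin(n * theta) / n ** 2 for n in range(1, terms + 1))

vol_fig8 = 2 * lob(math.pi / 3)
print(vol_fig8)   # ~2.0298832
```

The series converges absolutely (it is dominated by $\sum 1/n^2$), so direct summation is enough here; a faster check would use a dedicated Clausen-function routine.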
Today, Harvard Business School announced a multi-million dollar donation that will benefit MBA students who are the first in their families to go to college. The $12.5 million donation comes from Jeannie and Jonathan Lavine, both Harvard MBA alumni. The donation from the Lavines is the largest Harvard University scholarships donation ever given to the business school. And the donation isn’t just generous. It’s personal. Here’s what you need to know about the Lavines’ donation if you have your eyes on an MBA from Harvard. Harvard scholarships donation helps first-generation students Jeannie Lavine’s father, Herbert Bachelor, was the first in his family to attend college. Bachelor worked his way through Harvard College and Harvard Business School. He opted to major in finance instead of science, his first love, to ensure he’d be able to earn enough to repay his student loans. Jonathan Lavine said that though his father-in-law was successful, thoughts of “what if” never left his mind. “He had a successful career in investment banking. But he talks about what could have been, if he had the choice. Going to college should increase your opportunities, not narrow them.” Because of this, the Lavines’ scholarship donation will prioritize first-generation college students as a way to honor Bachelor’s legacy. Jeannie Lavine said, “We know that intellect is not distributed based on income, and neither should a top education.” According to the Boston Globe, $2 million of the donation will go to two specific Harvard University scholarships. The remaining funds will match gifts from other donors. The goal is to ensure the business school has more scholarships available to students. Nitin Nohria, the dean of Harvard Business School, hopes the scholarship funds can both increase access to the school and encourage graduates to take lower-paying work in the nonprofit sector.
“For many students, being admitted to Harvard Business School becomes a reality only when they know there is financial support available.” Harvard Business School relies more on tuition payments than federal aid or grants. However, it does offer help through $35 million in annual financial aid to its students — about $37,000 for each student recipient per year. Scholarship announcement comes on heels of recent criticism This donation couldn’t come at a better time for Harvard. Last week, CNBC reported on data from the Harvard Crimson highlighting the disproportionately large makeup of legacy students — students whose family members also attended Harvard. According to the report, one-third of Harvard’s student body is considered to be legacy, making them three times more likely to be admitted. This decreases the odds not just for first-generation students, but any student who doesn’t have family ties to the university. Harvard’s very own newspaper criticized this trend in 2015. “The preference Harvard awards the children of alumni offers a leg up to those who, in most cases, already started ahead of the pack.” And the Crimson didn’t stop there: Harvard’s legacy preference is, in the simplest terms, wrong. It takes opportunities from those with less and turns them over to those who have more. It lends legitimacy to the entrenched and insidious campus tendency to give affluence an unmistakable social cachet. The Lavines’ Harvard University scholarships donation won’t solve the unfair advantage legacy applicants receive during the admissions process. But it can help make the dream of attendance a reality for students who might never have applied because of the cost. How aspiring Harvard students can ease the burden of the costs If you’re reading this with dreams of attending Harvard, here’s how you could receive a scholarship if you’re accepted. Start with Harvard’s resource page for scholarships and outside awards.
Don’t forget that you can also get Harvard scholarships and grants for its extension school. Finally, if you are the first in your family to go to college, check out Scholarship.com’s first in family scholarship listings. And if you’re a first-generation student looking at schools other than Harvard, there are colleges that offer scholarships just for you. Ball State University just announced the Mearns/Family Proud Scholarship for students of Muncie Central whose parents didn’t attend college. Companies are also getting in on this. The Coca-Cola Foundation recently announced funding for first-generation students at the College of Coastal Georgia. The moral of the story: Don’t forgo applications to your dream colleges because you’re worried you can’t afford them. Focus on acceptance and then use your research skills to find scholarships and grants to make your dream a reality.
Dr Tiffany Denny will lead a 15-hour weekend workshop for self-empowered women who feel inspired to provide Embody Love Movement® workshops in their communities. Dates and Times: - Fri Jan 20 4-7p - Sat Jan 21 12-7p - Sun Jan 22 9a-4p The weekend immersion will include: ♥ Participation in an Embody Love Workshop led by a Certified Trained Faculty Member A Guided Process to Certification: While exhilarating and empowering, facilitating can sometimes be tough. We get it. Holding space for girls and women can bump up against some of our own insecurities. Because of this, our certification process includes a mentoring component. After your training, you will participate in two mentoring sessions with your CF-T so that she can answer any questions that you may have about the process of facilitation, setting up workshops, building community, etc. Our mentoring process gives you the opportunity to grow and learn from the guidance and support of our experienced facilitators. * Please note that these mentoring sessions will be an additional cost to the training and fees will be paid directly to your CF-T. you to draw in what feels right to you and, at the same time, give back to the organization so that we can continue to offer and deliver to the greater whole. Facilitators are also more than welcome to deliver workshops at no cost and volunteer their time. We hold no expectations regarding which path you choose for any given workshop.
Event Information Date and time Location Online event About this event For our first NABC meeting, we will discuss John Holloway's essay, "Stop Making Capitalism" online via Zoom. It’s a critical, hopeful and short piece that will get us thinking about the ways that human agency is central to tackling as daunting a problem as the climate emergency. Bring your ideas, questions, and creative spirit with you. Here’s the link to the first ‘text’ we’ll discuss at the first event: ------------------------------------------------------------------------------ The Not Always A Book Club is a community within the Climate Justice Working Group that meets regularly to consume and discuss media related to climate change. It is run primarily by Tanner Layton (one of our Core Team Members) and seeks to foster a space for members to critically discuss the various ways in which climate change is lived—from the individual to the local, and from the social to the global. We hope to cultivate an encouraging and open context to learn and to think through the pressing problems of our time. The club is collectively guided by its members and primarily adopts a critical and structural approach to understanding climate change. The texts, films, and other cultural products that we discuss aim to illuminate the ways in which climate justice work intersects with many other forms of social justice work—both historically and contemporarily. By focusing on various kinds of texts (from Pixar movies to academic books), it is our hope that the theoretical and intellectual focus of the club will help to critically inform our activism, our everyday practices, and create a shared sense of community. Organizer of Not Always a Book Club (NABC) First Meeting: CJWG
Link to Pubmed [PMID] – 34504012 Link to DOI – 10.1073/pnas.2024893118 Proc Natl Acad Sci U S A 2021 Sep; 118(37): The interleukin-2 receptor (IL-2R) is a cytokine receptor essential for immunity that transduces proliferative signals regulated by its uptake and degradation. IL-2R is a well-known marker of clathrin-independent endocytosis (CIE), a process devoid of any coat protein, raising the question of how the CIE vesicle is generated. Here, we investigated the impact of IL-2Rγ clustering in its endocytosis. Combining total internal reflection fluorescence (TIRF) live imaging of a CRISPR-edited T cell line endogenously expressing IL-2Rγ tagged with green fluorescent protein (GFP), with multichannel imaging, single-molecule tracking, and quantitative analysis, we were able to decipher IL-2Rγ stoichiometry at the plasma membrane in real time. We identified three distinct IL-2Rγ cluster populations. IL-2Rγ is secreted to the cell surface as a preassembled small cluster of three molecules maximum, rapidly diffusing at the plasma membrane. A medium-sized cluster composed of four to six molecules is key for IL-2R internalization and is promoted by interleukin 2 (IL-2) binding, while larger clusters (more than six molecules) are static and inefficiently internalized. Moreover, we identified membrane cholesterol and the branched actin cytoskeleton as key regulators of IL-2Rγ clustering and IL-2-induced signaling. Both cholesterol depletion and Arp2/3 inhibition lead to the assembly of large IL-2Rγ clusters, arising from the stochastic interaction of receptor molecules in close correlation with their enhanced lateral diffusion at the membrane, thus resulting in a defect in IL-2R endocytosis. Despite similar clustering outcomes, while cholesterol depletion leads to a sustained IL-2-dependent signaling, Arp2/3 inhibition prevents signal initiation. Taken together, our results reveal the importance of cytokine receptor clustering for CIE initiation and signal transduction.
TITLE: A book is $90\%$ likely to be illustrated; an illustration is $90\%$ likely to be in color. How many of $10000$ books must have a color illustration? QUESTION [2 upvotes]: This problem is in Art and Craft of Problem Solving by Paul Zeitz. It's an easy-looking puzzle which is not "so obvious" according to the problem itself. It says - Of all the books at a certain library, if you select one at random, then there is a $90\%$ chance that it has illustrations. Of all the illustrations in the book, if you select one at random, then there is a $90\%$ chance that it is in color. If the library has $10000$ books, then what is the minimum number of books that must contain colored illustrations? I immediately got an answer, but the warning in the problem makes me thoughtful. Isn't it obvious, you have $\frac{90}{100}\cdot 10000=9000$ books with illustrations and then $\frac{90}{100}\cdot 9000=8100$ books with colored illustrations? Or am I missing something obvious? REPLY [4 votes]: I am the author of this book. The problem as stated is misquoted. The correct language for the second sentence begins, "Of all the illustrations in all the books..." (cf. p. 25 of the second edition). The problem is tricky, but the unambiguous answer is "one," since it is possible that one of the books in the library has the title, "the big book of illustrations, including all colored images in the library."
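The answer's minimal library is easy to exhibit concretely. A sketch (the particular counts are one choice of mine that makes both percentages exact, not forced by the problem):

```python
# One explicit library realizing the minimum: 10000 books, 90% of them
# illustrated, 90% of all illustrations in color, yet every colored
# illustration sits in a single book.
books_total = 10000
illustrated_books = 9000        # 90% of the books have illustrations

bw_illustrations = 8999         # 8999 illustrated books hold 1 b/w picture each
color_illustrations = 80991     # the one remaining book holds all colored ones
total_illustrations = bw_illustrations + color_illustrations   # = 89990

print(illustrated_books / books_total)               # 0.9
print(color_illustrations / total_illustrations)     # 0.9 (80991 = 9 * 8999)
```

So both stated percentages hold while only one book contains any colored illustration, which is exactly the "big book of illustrations" scenario in the reply.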
\begin{document} \title{Quantum weighted entropy and its properties} \author{Y.~Suhov$^1$ and S.~Zohren$^2$} \address{ $^1$ Department of Mathematics, Penn State University, PA 16802, USA; \\ DPMMS, University of Cambridge, CB30WB, UK; \\ Institute for Information Transmission Problems, RAS, 127994 Moscow, Russia \\ $^2$ Department of Physics, Pontifica Universidade Cat\'olica, Rio de Janeiro, Brazil } \pacs{03.67.-a, 89.70.Cf \\ MSC numbers: 46N50, 94A15, 47L90 \\ Keywords: Quantum information theory, weighted entropy, trace inequalities } \begin{abstract} We introduce quantum weighted entropy in analogy to an earlier notion of (classical) weighted entropy and derive many of its properties. These include the subadditivity, concavity and strong subadditivity property of quantum weighted entropy, as well as an analog of the Araki-Lieb inequality. Interesting byproducts of the proofs are a weighted analog of Klein's inequality and non-negativity of quantum weighted relative entropy. A main difficulty is the fact that the weights in general do not commute with the density matrices. \end{abstract} \maketitle \section{Introduction} Shannon entropy and its quantum analog, von Neumann entropy, play essential roles in classical and quantum information theory \cite{CT,Information,Nielsen}. There are many generalisations of Shannon entropy, such as R\'enyi entropy, which have been proposed both in the classical as well as quantum case. Another interesting generalisation is (classical) weighted entropy \cite{BG,G}. The idea behind weighted entropy is the incorporation of further characteristics of each event through a weight assigned to it in addition to its probability. Weighted entropy has been used at several places in the information theory and computer science literature (see for instance \cite{ShMM,SiB,K,SrV,S} and references therein) including in machine learning applications \cite{MMN}. 
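For orientation, the classical weighted entropy of \cite{BG,G} attaches to each outcome $i$ a nonnegative weight $w_i$ alongside its probability $p_i$ and forms $-\sum_i w_i p_i \log p_i$. A minimal numerical sketch (the distribution and weights below are made-up illustrations, not taken from the cited works):

```python
# Classical weighted entropy: S_w(p) = -sum_i w_i * p_i * log(p_i).
# With all weights equal to 1 this is the ordinary Shannon entropy.
import math

def weighted_entropy(p, w):
    return -sum(wi * pi * math.log(pi) for pi, wi in zip(p, w) if pi > 0)

p = [0.5, 0.25, 0.25]
uniform_w = [1.0, 1.0, 1.0]      # recovers Shannon entropy H(p)
skewed_w = [2.0, 1.0, 0.5]       # up-weights the outcomes deemed important

print(weighted_entropy(p, uniform_w))   # = H(p) = 1.5 * ln 2 ~ 1.0397 nats
print(weighted_entropy(p, skewed_w))    # = 1.75 * ln 2
```

The quantum definition below replaces the probability vector by a density matrix and the weight vector by a weight matrix, which in general no longer commute.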
However, the natural quantum analog of weighted entropy has apparently not yet been considered in the quantum information theory literature. The aim of this letter is to introduce \emph{quantum weighted entropy} and to prove several basic properties, including subadditivity, concavity and strong subadditivity of quantum weighted entropy and an analog of the Araki-Lieb inequality. Many of the corresponding trace inequalities for von Neumann entropy have important applications in quantum information theory. As an example let us mention strong subadditivity of standard von Neumann entropy which was conjectured in \cite{LR} and then proven by Lieb and Ruskai \cite{LR1,LR2} (see also \cite{NP} for a modern simplified proof). Amongst many of its applications is the thermodynamic limit of entropy per volume which was already considered in the classical case \cite{RR}. We refer the reader to \cite{Ruskai2002} for a review. Some of the above trace inequalities are quantum analogs of a series of inequalities for classical weighted entropy recently considered in \cite{SY}. Let us now give a formal definition of quantum weighted entropy. Consider a quantum mechanical system with Hilbert space $\mathcal{H}$ and a density matrix $\rho$ on $\mathcal{H}$. For an Hermitian, positive definite matrix $\phi$ on $\mathcal{H}$, which from here on we simply refer to as \emph{weight}, we define the \emph{quantum weighted entropy} as follows \beq \label{def1} S_\phi(\rho)=- \tr (\phi\rho\log\rho). \eeq One sees that for $\phi=1_{\mathcal{H}}$, where $1_{\mathcal{H}}$ is the identity matrix on $\mathcal{H}$, the quantum weighted entropy reduces to the standard von Neumann entropy. Before moving to the discussion of different properties and their proofs in the next sections, let us first comment on potential difficulties. Consider for the moment the well-known Gibbs inequality which yields positivity of von Neumann entropy whose weighted analog we will discuss in the next section.
The main difficulty in the proof, as compared to the corresponding classical result for Shannon entropy, is the fact that the different (reduced) density matrices in general do not commute. Similarly, when extending many trace inequalities for von Neumann entropy to quantum weighted entropy the difficulty lies in the fact that now also the weight $\phi$ is not commuting with the density matrices. \section{Quantum weighted relative entropy and Gibbs inequality} Extending standard classical notions, we can also introduce the \emph{quantum weighted relative entropy} or weighted Kullback-Leibler divergence as \beq \label{def2} D_\phi(\rho \|\sigma )= \tr (\phi\rho\log\rho)-\tr (\phi\rho\log\sigma). \eeq Here and below $\rho$ is a density matrix and $\sigma$ is positive definite in $\mathcal{H}$. An important property of quantum weighted relative entropy is given by the weighted Klein's inequality \begin{lemma}[Weighted Klein's inequality] \label{Klein} Assume that $X, Y, W$ are Hermitian positive definite matrices on a Hilbert space $\mathcal{H}$. Then if $f$ is a convex function one has \beq \tr \left(W(f(Y) - f(X))\right) \geq \tr \left( W(Y - X)f'(X)\right) . \eeq In particular for $f(x)=x\log x$ one has \beq\label{Kleineq2} \tr \left(W Y (\log Y - \log X) \right) \geq \tr \left( W(Y - X) \right) . \eeq \end{lemma} The proof is given in the appendix. At this point we only note that a slight difficulty in comparison to the standard Klein's inequality is caused by the fact that $W$ in general does not commute with $X$ and $Y$. An important result which can be easily derived from the weighted Klein's inequality is the weighted Gibbs inequality. \begin{theorem}[Weighted Gibbs inequality] \label{Gibbs} Under the condition $\tr \,\phi\,\rho\geq \tr \,\phi\,\sigma$ one has \beq D_\phi(\rho \|\sigma )\geq 0, \eeq with equality if and only if $\rho =\sigma$. 
\end{theorem} \begin{proof} The proof of the weighted Gibbs inequality follows directly from the weighted Klein's inequality, Lemma \ref{Klein}. In particular for $X=\sigma$, $Y=\rho$, $W=\phi$ and $f(x)=x\log x$, the result immediately follows from \eqref{Kleineq2} under the condition $\tr \,\phi\,\rho\geq \tr \,\phi\,\sigma$. \end{proof} \begin{remark} Note that the condition $\tr \,\phi\,\rho\geq \tr \,\phi\,\sigma$ is physical and a minimum necessary requirement for the weighted Gibbs inequality. As expected, it is automatically satisfied if $\phi=1_{\mathcal{H}}$, i.e.\ in the case of standard von Neumann entropy. \end{remark} \section{Basic properties of quantum weighted entropy} We now discuss three propositions with useful basic properties of the quantum weighted entropy. \begin{proposition} Denote by $\ket{e_1},\ldots ,\ket{e_d}$ the normalised eigenvectors of $\rho$ and by $\lam_1,\ldots ,\lam_d$ the corresponding eigenvalues. \begin{enumerate} \item The quantum weighted entropy $S_\phi (\rho )$ is non-negative and zero if and only if either (i) $\rho$ is pure or (ii) $\langle e_i|\phi| e_i\rangle=0$ whenever $0<\lam_i<1$. \item $S_\phi (\rho )=S_{\phi'} (\rho )$ if $\langle e_i|\phi| e_i\rangle =\langle e_i|\phi'|e_i\rangle$ whenever $0<\lam_i<1$. In this case we say that $\phi$ and $\phi'$ are $\rho$-conjugate. \end{enumerate} \end{proposition} \begin{proof} The results of this proposition are a consequence of the eigenvalue decomposition of $\rho$ which yields \beq\label{Srhoeigen} S_\phi (\rho )=-\sum_i\langle e_i|\phi|e_i\rangle\lam_i\log\,\lam_i. \eeq We directly see from this that $S_\phi (\rho )$ is non-negative. If $\rho$ is pure, $\lam_i\log\,\lam_i=0, \forall i$, which implies $S_\phi (\rho )=0$. Otherwise, if $\rho$ is not pure one has $\lam_i<1, \forall i$ and it is clear from the above that in this case $S_\phi (\rho )=0$ if and only if $\langle e_i|\phi| e_i\rangle=0$ whenever $0<\lam_i<1$. This proves the first part of the proposition.
The second part of the proposition follows directly from \eqref{Srhoeigen}. \end{proof} Above we have already shown that $S_\phi (\rho )\geq0$. The following proposition gives an upper bound on the quantum weighted entropy. \begin{proposition} Suppose the rank of $\phi$ equals $m\leq d$ and let $P= P_\phi$ be the orthoprojection to the range of $\phi$. If $\tr \phi\rho\geq{\rm{tr}}\,\phi/m$ then \beq S_{\phi}(\rho )\leq S_{\phi}(P/m )=\dfrac{\log\,m}{m}\,{\rm{tr}}\,\phi \eeq with equality if and only if $\rho = P/m$. \end{proposition} \begin{proof} The proposition is a direct consequence of the quantum weighted Gibbs inequality, i.e.\ Theorem \ref{Gibbs}, with $\sigma=P/m$. \end{proof} Let us now discuss properties of the weighted entropy with respect to the operations of purification and partial traces (see \cite{Nielsen} for a standard textbook account). Purifications are based on the Schmidt decomposition of pure states which itself is a consequence of the singular value decomposition of complex matrices. Let us quickly recall the concept of purifications. If $\rho_A$ is a density matrix on $A$, then there exists a reference system $R$ and a pure state $\ket{\chi}$ on $AR$ such that $\rho_A=\tr_R \ket{\chi}\bra{\chi}$. Denote also $\rho_R=\tr_A \ket{\chi}\bra{\chi}$. Standard arguments show that $\rho_A$ and $\rho_R$ have the same collection $\{\lam_i\}$ of non-negative eigenvalues. Furthermore, if we denote by $\{\ket{e^{A}_i}\}$ and $\{\ket{e^{R}_i}\}$ the corresponding eigenvectors of $\rho_A$ and $\rho_R$, then one finds, for weights $\phi_A$ on $A$ and $\phi_R$ on $R$, \bea S_{\phi_A}(\rho_A)&=&-\sum\limits_i\langle e^{A}_i|\phi_A| e^{A}_i\rangle\lam_i\log\,\lam_i , \\ S_{\phi_R}(\rho_R)&=&-\sum\limits_i\langle e^{R}_i|\phi_R| e^{R}_i\rangle\lam_i\log\,\lam_i .
\eea This proves the following proposition: \begin{proposition} Let $\ket{\chi}$ be a pure state on $AR$ with $\rho_A=\tr_R \ket{\chi}\bra{\chi}$ and $\rho_R=\tr_A\ket{\chi}\bra{\chi}$, then for any pair of weight matrices $\phi_A$ on $A$ and $\phi_R$ on $R$ such that \beq \langle e^{A}_i|\phi_A| e^{A}_i\rangle =\langle e^{R}_i|\phi_R| e^{R}_i\rangle \quad \hbox{$\forall\;\;i$ with }\;0<\lam_i<1 \nn \eeq one has \beq S_{\phi_A}(\rho_A)= S_{\phi_R}(\rho_R).\nn \eeq In this case we say that $\phi_A$ is $(\rho_A,\rho_R)$-conjugated to $\phi_R$ and $\phi_R$ is $(\rho_R,\rho_A)$-conjugated to $\phi_A$. \end{proposition} From this one has the following corollary: \begin{corollary}\label{cor1} Let $\rho$ be a density matrix in $AB$, as well as $\rho_A=\tr_B \rho$ and $\rho_B=\tr_A\rho$. Take a reference system $R$ with Hilbert space isomorphic to the Hilbert space of $AB$, and a pure state $\ket{\chi}$ on $ABR$ such that $\rho =\tr_R\ket{\chi}\bra{\chi}$. Set $\rho_R= \tr_{AB} \ket{\chi}\bra{\chi}$ and $\rho_{BR}=\tr_{A} \ket{\chi}\bra{\chi} $, then \beq S_\phi(\rho )=S_{\phi_R}(\rho_R)\;\hbox{ and }\;S_{\phi_A}(\rho_A)=S_{\phi_{BR}}(\rho_{BR}) \eeq if $\phi$ is $(\rho,\rho_R)$-conjugate to $\phi_R$ and $\phi_A$ is $(\rho_A,\rho_{BR})$-conjugate to $\phi_{BR}$. \end{corollary} \section{A diagonalisation bound} Another simple trace inequality, which is also based on the weighted Gibbs inequality, deals with the projection of the density matrix on its diagonal in a given basis, as occurs for example when projective measurements are performed. Let $\rho$ be a density matrix in $\mathcal{H}$. Let $\ket{f_1},\ldots ,\ket{f_d}$ be a basis in $\mathcal{H}$ and $\rho^{\rm d}$ denote the diagonal part of $\rho$ in this basis, i.e. $\langle f_j| \rho^{\rm d}| f_k\rangle =\delta_{jk}\langle f_j|\rho | f_j\rangle$ for $1\leq j,k\leq d$.
Then we have the following bound for the weighted entropy of $\rho^{\rm d}$ \begin{theorem} \label{thm:diag} Under the condition $\tr\phi\rho \geq \tr \phi\rho^{\rm d}$ \beq S_\psi (\rho^{\rm d})\geq S_{\phi} (\rho ), \eeq with equality if and only if $\rho=\rho^{\rm d}$, where $\psi$ fulfils $\langle f_j |\psi \rho^{\rm d} | f_j\rangle=\langle f_j |\phi \rho | f_j\rangle$. \end{theorem} \begin{proof} The proof is again an application of the weighted Gibbs inequality, Theorem \ref{Gibbs}. Choosing $\sigma=\rho^{\rm d}$, by the condition of the theorem the Gibbs inequality can be used, yielding \bea 0&\leq& D_\phi(\rho \| \rho^{\rm d} )= -S_{\phi} (\rho ) - \tr \phi \rho \log \rho^{\rm d} \nonumber\\ &=& -S_{\phi} (\rho ) - \sum_j \langle f_j |\phi \rho | f_j\rangle \log \langle f_j |\rho^{\rm d} | f_j\rangle=-S_{\phi} (\rho ) +S_\psi (\rho^{\rm d}) \eea with equality for $\rho=\rho^{\rm d}$. This completes the proof. \end{proof} Note that in the special case of von Neumann entropy, one has $\phi=\psi=1_{\mathcal{H}}$ and all conditions of the theorem are automatically satisfied. \section{Subadditivity of quantum weighted entropy} Let us first focus on a composite system $AB$ of two components $A$ and $B$ with density matrix $\rho_{AB}$ and weight $\phi_{AB}=\phi_A\otimes\phi_B$. Recall the standard reduced density matrices defined by taking the partial trace, i.e. $\rho_{A} =\tr_B (\rho_{AB})$ and so on. We can now prove the following subadditivity property of quantum weighted entropy: \begin{theorem}[Subadditivity] \label{thm:sub} Under the condition $\tr_{AB}(\phi_{AB} \rho_{AB})\geq \tr_{A}(\phi_{A} \rho_{A}) \tr_{B}(\phi_{B} \rho_{B})$ one has \beq S_{\phi_{AB}}(\rho_{AB})\leq S_{\psi_{A}}(\rho_{A})+S_{\psi_{B}}(\rho_{B}) \eeq with equality for $\rho_{AB}=\rho_A\otimes\rho_B$, where the reduced weights are defined implicitly through $\psi_{A} \rho_{A}=\tr_B(\phi_{AB} \rho_{AB})$ and similarly for $B$.
\end{theorem} \begin{proof} The condition stated in Theorem \ref{thm:sub}, i.e. $\tr_{AB}(\phi_{AB} \rho_{AB})\geq \tr_{A}(\phi_{A} \rho_{A}) \tr_{B}(\phi_{B} \rho_{B})$, ensures that we can use the weighted Gibbs inequality with $\sigma_{AB} =\rho_A\otimes\rho_B$ and $\phi =\phi_{AB}=\phi_A \otimes\phi_B$. Further abbreviating $\rho=\rho_{AB}$, one gets \beq D_\phi(\rho \| \rho_A\otimes\rho_B )\geq 0 \eeq with equality if and only if $\rho =\rho_A\otimes\rho_B $. Simplifying the above gives \bea 0&\leq& D_\phi(\rho \| \rho_A\otimes\rho_B ) \nonumber\\ &=&\tr_{AB} \left\{ \phi \rho \left( \log\,\rho - \log\,(\rho_A\otimes\rho_B) \right)\right\}. \eea Hence, \bea S_\phi (\rho ) \! &\leq& \!\!\! - \tr_{AB} \left\{ \phi\rho \left[ \log\,(\rho_A\otimes 1_B)+ \log\,(1_A\otimes\rho_B)\right]\right\} \nonumber \\ &=& \!\!\! - \tr_{A}\{ \tr_B( \phi\rho) \log\rho_A \} \!-\! \tr_{B}\{ \tr_A( \phi\rho ) \log\rho_B \} \nonumber \\ &=& S_{\psi_{A}}(\rho_{A})+S_{\psi_{B}}(\rho_{B}) \eea under the above definition of reduced weights. This completes the proof. \end{proof} \section{Concavity of quantum weighted entropy} \def\cK{\mathcal K} We can use the subadditivity property of quantum weighted entropy proved in the previous section to show that quantum weighted entropy is concave, in analogy to the case of standard von Neumann entropy. \begin{theorem} \label{ThmConcavity} Suppose that $\rho^{(1)},\ldots ,\rho^{(r)}$ are density matrices of a system $A$ with Hilbert space $\mathcal{H}_A$ and $\bfb =(b_1,\ldots, b_r)$ is a probability vector, with non-negative entries and $\sum_{1\leq l\leq r} b_l =1$. Then \beq S_\phi\left(\sum_lb_l\rho^{(l)} \right) \geq\sum_l b_l S_\phi(\rho^{(l)}) \label{concave-thm-eq} \eeq with equality if and only if $b_l=1$ for some $l$ or $\rho^{(l)}=\rho^{(1)}$ $\forall$ $l$. \end{theorem} \begin{proof} Set \beq \sigma=\sum\limits_lb_l\rho^{(l)},\quad\!\! {\rB}_l=\tr\phi\rho^{(l)},\quad\!\! {\rA}=\tr\phi\sigma =\sum_lb_l\rB_l.
\eeq Recall the expressions for the standard Shannon entropy of $\bfb$ and the weighted Shannon entropy of $\bfb$ with weight $\rB$, \beq h(\bfb )=-\sum_lb_l\log\,b_l, \quad h_{\rB} (\bfb ) =-\sum_l{\rB}_l b_l\log\,b_l. \eeq Take an auxiliary system $R$ with Hilbert space $\mathcal{H}_R$ of dimension $r$ and fix a basis $\ket{e_1} ,\ldots,\ket{e_r} $ in $\mathcal{H}_R$. Consider a density matrix $\rho$ on $AR$ defined by the condition that for all $\ket{v},\ket{v'}\in \mathcal{H}_A$ and $1\leq l,l^\prime\leq r$: \beq \big\langle v\otimes e_l|\rho | v'\otimes e_{l'}\big\rangle = b_l\langle v |\rho^{(l)}|v'\rangle\delta_{l,l'}. \label{conc-rhodef} \eeq It is easily verified that $\rho$ is indeed a density matrix, i.e.\ a positive semi-definite operator of trace 1. Then \beq \rho_A =\tr_R \rho =\sum_l b_l\rho^{(l)}=\sigma \eeq and $\rho_R=\tr_A \rho$ is diagonal in the basis $\ket{e_1} ,\ldots,\ket{e_r}$, with diagonal entries $b_1,\ldots, b_r$. Also, if $\rho^{(l)}$ has eigenvectors $\ket{e^{(l)}_j}$ with eigenvalues $\lambda^{(l)}_j$ then $\rho$ has the eigenvectors $\ket{e^{(l)}_j}\otimes \ket{e_l} $ with the eigenvalues $\lambda^{(l)}_jb_l$. Hence, with $1_{R}$ denoting the unit operator on $R$ one has \bea S_{\phi\otimes 1_R}(\rho)&=&-\sum_{j,l} \big\langle e^{(l)}_j\big|\phi\big| e^{(l)}_j\big\rangle(\lambda^{(l)}_jb_l)\log (\lambda^{(l)}_jb_l) \nn\\ &=&-\sum_{l}b_l\sum_j\big\langle e^{(l)}_j\big|\phi\big| e^{(l)}_j\big\rangle\lambda^{(l)}_j \log\lambda^{(l)}_j -\sum_l{\rB}_lb_l\log b_l \nn\\ &=&\sum_l b_l S_\phi(\rho^{(l)})+ h_{\rB} (\bfb ). \label{Concave-eq-S} \eea Note that $\tr(\phi\otimes 1_R)\rho=\tr(\phi\rho_A)$, so the bound $\tr(\phi\otimes 1_R)\rho\geq \tr(\phi\otimes 1_R)(\rho_A\otimes\rho_R)$ is fulfilled. Finally, by \eqref{conc-rhodef} the partial trace $T=\tr_A(\phi\otimes 1_R)\rho$ is a diagonal matrix in the basis $\ket{e_1} ,\ldots,\ket{e_r} $ of $R$, with entries \beq \langle e_l| T | e_{l'}\rangle =\delta_{l,l'}b_l\,\rB_l.
\eeq To complete the proof we use the subadditivity property proven in the previous section in Theorem \ref{thm:sub} for the joint system $AR$ with density matrix $\rho$ and weight $\phi\otimes 1_R$. Therefore, we introduce reduced weights defined implicitly through \bea \psi_{A} \rho_{A}&=&\tr_R (\phi\otimes 1_R) \rho = \phi \sigma \nn \\ \psi_{R} \rho_{R}&=& \tr_A (\phi\otimes 1_R) \rho =T= \sum\limits_l b_l \, {\rB}_l | e_l\rangle\langle e_l|. \nn \eea Then one has \beq S_{\psi_A}(\rho_A)=S_\phi \left(\sigma\right),\;\hbox{ and }\; S_{\psi_R}(\rho_R)=h_{\rB}(\bfb ) . \eeq Therefore, subadditivity (Theorem \ref{thm:sub}) yields \beq S_{\phi\otimes 1_R}(\rho)\leq S_{\psi_A}(\rho_A)+S_{\psi_R}(\rho_R) =S_\phi \left(\sigma\right)+h_{\rB}(\bfb) \eeq with equality if and only if $\rho =\rho_A\otimes\rho_R,$ i.e. $h(\bfb )=0$ or $\rho^{(l)}=\rho^{(1)}$ $\forall$ $l$. This together with \eqref{Concave-eq-S} gives \eqref{concave-thm-eq}. \end{proof} \section{Araki-Lieb inequality for quantum weighted entropy} Consider a composite system $AB$ with density matrix $\rho$, weight $\phi$ and partial density matrices $\rho_A=\tr_B \rho$ and $\rho_B=\tr_A\rho$. Construct a purification by introducing a reference system $R$ with Hilbert space isomorphic to the Hilbert space of $AB$, and a pure state $\ket{\chi}$ on $ABR$ such that $\rho =\tr_R\ket{\chi}\bra{\chi}$. Furthermore, set $\rho_R= \tr_{AB} \ket{\chi}\bra{\chi}$ and $\rho_{BR}=\tr_{A} \ket{\chi}\bra{\chi} $, then by Corollary \ref{cor1} we have \beq \label{AL1} S_\phi(\rho )=S_{\phi_R}(\rho_R)\;\hbox{ and }\;S_{\phi_A}(\rho_A)=S_{\phi_{BR}}(\rho_{BR}) \eeq if $\phi$ is $(\rho,\rho_R)$-conjugate to $\phi_R$ and $\phi_A$ is $(\rho_A,\rho_{BR})$-conjugate to $\phi_{BR}$. Combining this with subadditivity gives rise to the following result. 
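As a quick numerical sanity check of the subadditivity statement, the Python sketch below (an illustration only, not part of the formal development; NumPy and SciPy are assumed, and the state and weights are made-up diagonal toy values) takes a product state, for which the condition of Theorem \ref{thm:sub} holds with equality, and verifies $S_{\phi_{AB}}(\rho_{AB})=S_{\psi_A}(\rho_A)+S_{\psi_B}(\rho_B)$, with the reduced-weight entropies computed directly from $\psi_A\rho_A=\tr_B(\phi_{AB}\rho_{AB})$:

```python
import numpy as np
from scipy.linalg import logm

def ptrace(M, dims, keep):
    # Partial trace of a (dA*dB) x (dA*dB) matrix; keep=0 keeps A, keep=1 keeps B.
    dA, dB = dims
    T = M.reshape(dA, dB, dA, dB)
    return np.trace(T, axis1=1, axis2=3) if keep == 0 else np.trace(T, axis1=0, axis2=2)

def S(weighted_rho, rho):
    # Entropy with an implicitly defined reduced weight: -tr( (psi rho) log rho ),
    # where the first argument is the operator psi*rho itself.
    return -np.trace(weighted_rho @ logm(rho)).real

# Product state, so tr(phi rho) = tr(phi_A rho_A) tr(phi_B rho_B) and the
# condition of the subadditivity theorem holds (with equality in the bound).
rho_A, rho_B = np.diag([0.6, 0.4]), np.diag([0.8, 0.2])
phi_A, phi_B = np.diag([1.0, 2.0]), np.diag([1.5, 0.5])
rho, phi = np.kron(rho_A, rho_B), np.kron(phi_A, phi_B)

lhs = S(phi @ rho, rho)                                   # S_phi(rho_AB)
rhs = S(ptrace(phi @ rho, (2, 2), 0), rho_A) \
    + S(ptrace(phi @ rho, (2, 2), 1), rho_B)              # S_psi_A + S_psi_B
assert np.isclose(lhs, rhs)                               # equality for a product state
```

For a correlated $\rho_{AB}$ the same script gives a strict inequality whenever the trace condition of the theorem is satisfied.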
\begin{theorem}[Weighted Araki-Lieb inequality] \label{thm:AL} One has \beq S_\phi(\rho)\geq \Big(\sup_{\Psi\in\mathcal{D}} \big[S_{\psi_A}(\rho_A)-S_{\psi_B}(\rho_B)\big]\Big)\vee \Big(\sup_{\bar{\Psi}\in\bar{\mathcal{D}}} \big[S_{\opsi_B}(\rho_B)-S_{\opsi_A}(\rho_A)\big]\Big), \eeq where the set $\mathcal{D}(\phi)$ consists of all pairs $\Psi=(\psi_A,\psi_B)$ for which there exists a $\phi_{BR}$ and a $\psi_{R}^*$, implicitly defined through $\psi_{R}^*\rho_R=\tr_B \phi_{BR}\rho_{BR}$, satisfying $\tr_{BR} \phi_{BR}\rho_{BR}\geq \tr_{BR} \phi_{BR}\rho_{B}\otimes\rho_{R}$ and such that $\psi_{R}^*$ is $(\rho_R,\rho)$-conjugate to $\phi$, $\psi_A$ is $(\rho_A,\rho_{BR})$-conjugate to $\phi_{BR}$, and $\psi_B$ is $\rho_B$-conjugate to $\psi_{B}^*$ defined through $\psi_{B}^*\rho_B=\tr_R \phi_{BR}\rho_{BR}$; and similarly for $\bar{\mathcal{D}}(\phi)$. \end{theorem} \begin{proof} To prove the theorem it suffices to show that \beq\label{AL2} S_\phi(\rho)\geq S_{\psi_A}(\rho_A)-S_{\psi_B}(\rho_B), \quad \mathrm{and}\quad S_\phi(\rho)\geq S_{\opsi_B}(\rho_B)-S_{\opsi_A}(\rho_A) \eeq for all $(\psi_A,\psi_B)\in\mathcal{D}(\phi)$ and $(\opsi_A,\opsi_B)\in\bar{\mathcal{D}}(\phi)$. We start by establishing the first inequality in \eqref{AL2}. For any $(\psi_A,\psi_B)\in\mathcal{D}(\phi)$ there exist $\phi_{BR}$, $\psi_{B}^*$ and $\psi_{R}^*$ with $\psi_{B}^*\rho_B=\tr_R \phi_{BR}\rho_{BR}$ and $\psi_{R}^*\rho_R=\tr_B \phi_{BR}\rho_{BR}$, satisfying $\tr_{BR} \phi_{BR}\rho_{BR}\geq \tr_{BR} \phi_{BR}\rho_{B}\otimes\rho_{R}$. We can thus apply the subadditivity inequality, Theorem \ref{thm:sub}, to obtain \beq S_{\phi_{BR}} (\rho_{BR})\leq S_{\psi_{B}^*} (\rho_{B})+S_{\psi_{R}^*} (\rho_{R}). \eeq Since $\psi_A$ and $\phi_{BR}$ are $(\rho_A,\rho_{BR})$-conjugate, one has $S_{\phi_{BR}} (\rho_{BR})=S_{\psi_A}(\rho_A)$. Further, since $\psi_{B}^*$ and $\psi_B$ are $\rho_B$-conjugate, one has $S_{\psi_{B}^*} (\rho_{B})=S_{\psi_B}(\rho_B)$.
Moreover, the condition that $\psi_{R}^*$ is $(\rho_R,\rho)$-conjugate to $\phi$ implies $S_{\psi_{R}^*} (\rho_{R}) =S_\phi(\rho)$. This proves the first inequality in \eqref{AL2}. The second inequality in \eqref{AL2} is established in a similar manner. This completes the proof. \end{proof} \begin{remark} Note that in the case of von Neumann entropy with $\phi=1_{AB}$, one has $\psi_A=\opsi_A=1_A$ and $\psi_B=\opsi_B=1_B$, and thus \eqref{AL2} simply reads \beq S(\rho)\geq |S(\rho_A)-S(\rho_B)| \eeq which is the Araki-Lieb inequality for von Neumann entropy. \end{remark} \section{Strong subadditivity of quantum weighted entropy} Another very interesting trace inequality concerns the strong subadditivity property for a composite system $ABC$ with density matrix $\rho_{ABC}$ and weight $\phi_{ABC}=\phi_A\otimes\phi_B\otimes\phi_C$. \begin{theorem}[Strong subadditivity] \label{thm:strong} Under the conditions $(i)$ $\tr_{ABC}(\phi_{ABC} \rho_{ABC})\geq \tr_B\left\{\phi_B \tr_{A}(\phi_{A} \rho_{AB}) \tr_{C}(\phi_{C} \rho_{BC}) \rho_B^{-1} \right\}$, as well as $(ii)$ $[\rho_{AB},\phi_A\otimes \phi_B]=0$ and $[\tr_C(\phi_C \rho_{BC}),\rho_B]=0$, one has \beq S_{\phi_{ABC}}(\rho_{ABC})\!+\!S_{\psi_{B}}(\rho_{B}) \! \leq \! S_{\psi_{AB}}(\rho_{AB}) \! +\! S_{\psi_{BC}}(\rho_{BC}) \eeq where the reduced weights are defined as above, i.e. $\psi_{AB} \rho_{AB}=\tr_C(\phi_{ABC} \rho_{ABC})$ and so on. \end{theorem} \begin{remark} Let us make two remarks regarding the conditions of the theorem. Firstly, as expected, both conditions are automatically satisfied in the case $\phi=1_{ABC}$, i.e.\ the case of standard von Neumann entropy. Secondly, condition $(i)$ is the natural analog of the condition of the subadditivity property and, as in the latter case, is a physical condition which one expects not to be able to improve on. However, condition $(ii)$ is a technical condition which one might hope to improve.
Further, note that an analogous condition with $A$ and $C$ interchanged is also a valid condition $(ii)$. \end{remark} \begin{proof}[Proof of Theorem \ref{thm:strong}] To prove Theorem \ref{thm:strong} we have to show that $\mathcal{A}:=S_{\phi_{ABC}}(\rho_{ABC})\!+\!S_{\psi_{B}}(\rho_{B}) - S_{\psi_{AB}}(\rho_{AB}) \! - \! S_{\psi_{BC}}(\rho_{BC}) \leq 0$. Since we have already proven the weighted Klein's inequality, we can follow a similar strategy as in the original proof by Lieb and Ruskai \cite{LR1} of strong subadditivity of standard von Neumann entropy, which uses the standard Klein's inequality as a first ingredient. To do so we first make the following observation, which follows from the specific definition of the reduced weights as given in Theorem \ref{thm:strong}; abbreviating $\rho\equiv \rho_{ABC}$ and $\phi\equiv \phi_{ABC}$, one gets \bea \mathcal{A}=\! \tr_{ABC} \! \left\{ \phi\rho \left( \log\rho_{AB} +\log\rho_{BC}-\log\rho_B -\log\rho \right) \right\}, \eea where all matrices are to be understood as extended to the Hilbert space of the full system $ABC$, i.e. $\rho_{AB}$ is short for $\rho_{AB}\otimes 1_C$ and similarly for the others. Now we can apply the weighted Klein's inequality, Lemma \ref{Klein}, with $W=\phi$, $Y=\rho$, $X=\exp(\log\rho_{AB}-\log\rho_B+\log\rho_{BC})$ and $f(x)=x\log x$ as in \eqref{Kleineq2}, yielding \bea \label{eqinter} \mathcal{A}\leq \tr_{ABC} \! \left\{ \phi \exp(\log\rho_{AB}\! -\! \log\rho_B\!+\!\log\rho_{BC})\! -\!
\phi \rho \right\}. \eea This relation very much resembles the Golden-Thompson inequality \cite{GT1,GT2} \beq \tr \left( e^{X+Y}\right)\leq \tr \left(e^{X} e^{Y}\right) \eeq and in the proof of strong subadditivity for standard von Neumann entropy Lieb and Ruskai \cite{LR1} used a generalisation of the Golden-Thompson inequality derived in an earlier work by Lieb \cite{Lieb}, \beq \label{GTLieb} \tr \left( e^{X+Y+Z} \right )\leq \tr \left( e^Z T_{\exp(-X)}(e^Y) \right) \eeq where \beq T_{\exp(-X)}(e^Y) = \int_0^\infty (e^{-X} + \omega 1)^{-1} e^Y (e^{-X} + \omega 1)^{-1} d\omega. \eeq This relation can be extended to the weighted case, i.e. for $W$ being a weight one has \beq \label{newineq} \tr \left(W e^{X+Y+Z} \right )\leq \tr \left(K_W(Z)\, T_{\exp(-X)}(e^Y) \right) \eeq with \beq K_W (Z) = \sum_{n=0}^\infty \frac{1}{(n+1)!} \sum_{l=0}^n Z^{n-l} W Z^l \eeq and $T_{\exp(-X)}(e^Y)$ as defined above. The proof is a generalisation of the proof of Theorem 7 of \cite{Lieb}. Set $\xi=e^{-X}$, $\eta=e^Y$ and $R=X+Z$. Note that $\xi$ and $\eta$ are strictly positive operators. We define a function $F$ from the cone of strictly positive operators to the real numbers through $F: \xi\to-\tr[ W \exp(R+\log\xi)]$. Since $F$ is convex and homogeneous of order one, Lemma 5 of \cite{Lieb} can be applied, yielding \beq -\tr \left(W e^{X+Y+Z} \right ) = F(\eta)\geq \frac{d}{d\omega} \left[F(\xi+\omega \eta)\right]_{\omega=0}. \eeq Taylor expanding and taking the derivative gives \eqref{newineq}. We can now apply \eqref{newineq} to the first term on the right-hand side of \eqref{eqinter}. Choosing $X \!=\!-\! \log\rho_B$, $Y\!=\!
\log\rho_{BC}$, $Z=\!\log\rho_{AB}$ and $W=\phi$, and furthermore assuming under condition $(ii)$ the commutation relations $[\rho_{AB},\phi_A\otimes \phi_B]=0$ and $[\tr_C(\phi_C \rho_{BC}),\rho_B]=0$, the first term on the right-hand side of \eqref{eqinter} is bounded from above by \bea && \tr_{B} \Big\{ \phi_B\, \tr_A(\phi_A \rho_{AB}) \tr_C(\phi_C \rho_{BC}) \int_0^\infty (\rho_B + \omega 1)^{-1} (\rho_B + \omega 1)^{-1} d\omega \Big\} \nn\\ &&=\tr_{B} \Big\{ \phi_B\, \tr_A(\phi_A \rho_{AB}) \tr_C(\phi_C \rho_{BC}) \rho_B^{-1} \Big\} \eea Thus one arrives at \beq \mathcal{A} \leq \tr_{B} \Big\{ \phi_B\, \tr_A(\phi_A \rho_{AB}) \tr_C(\phi_C \rho_{BC}) \rho_B^{-1} \Big\} - \tr_{ABC} \phi \rho \eeq which by condition $(i)$ of the theorem yields \beq \mathcal{A} \leq 0. \eeq This completes the proof. \end{proof} \section{Discussion} We introduced quantum weighted entropy and derived several useful properties in terms of various trace inequalities. Each of those inequalities contains the corresponding result for von Neumann entropy as a special case when the weight is chosen to be the identity matrix. In particular, besides basic properties, we derived a diagonalisation bound (Theorem \ref{thm:diag}), subadditivity and concavity of quantum weighted entropy (Theorems \ref{thm:sub} and \ref{ThmConcavity}), an analog of the Araki-Lieb inequality (Theorem \ref{thm:AL}) and strong subadditivity of quantum weighted entropy (Theorem \ref{thm:strong}). An essential ingredient in the previous results is an analog of the Gibbs inequality for quantum weighted relative entropy (Theorem \ref{Gibbs}), which in turn is obtained from a weighted Klein's inequality (Lemma \ref{Klein}). A difficulty in proving the above trace inequalities, in comparison to the analogous results for von Neumann entropy, is the fact that in general the weights do not commute with the (reduced) density matrices.
In the case of the weighted Klein's inequality we circumvent this problem by utilising the unique decomposition $W=L L^\dag$ of the weight. Since the weighted Gibbs inequality and in turn most of the other inequalities are derived from the weighted Klein's inequality, they inherit this property and can be proven without any further assumptions on commutation relations of the weight. The only result where commutativity of parts of the weight with some of the reduced density matrices is assumed is strong subadditivity. The proof is thus not optimal, and one would hope to be able to improve it by relaxing those conditions. In this context we note that our proof of strong subadditivity of quantum weighted entropy closely follows the original proof by Lieb and Ruskai for strong subadditivity of von Neumann entropy \cite{LR1}, which enables one to use the weighted Gibbs inequality in an essential manner. In the case of alternative, more modern strategies for proving strong subadditivity, as in \cite{NP}, the situation is more involved. The discussion of quantum weighted entropy presented here is a first account of many of its properties and thus forms a basis for further interesting applications in quantum information theory. \\ {\emph{Acknowledgements --}} YS and SZ thank Salimeh Yasaei Sekeh for useful discussions. YS thanks the University of Sao Paulo (at Sao Paulo and at Sao Carlos) for the hospitality during the academic year 2013-4. SZ acknowledges support by CNPq (Grant 307700/2012-7) and PUC-Rio, as well as the kind hospitality of USP. \appendix \section{Proof of weighted Klein's inequality} In case the matrices $X$ and $W$ commute, one can simultaneously diagonalise them, which enables one to follow the same steps as in the proof of the standard Klein's inequality.
The general case, where $X$ and $W$ do \emph{not} commute, is slightly more involved and relies on the following decomposition. Since $W$ is positive definite, one has $\langle v| W | v \rangle>0$ for any $|v\rangle\neq 0$, and there exists a unique positive $L$ such that $W=L L^\dag$. Let now $|e_1\rangle,|e_2\rangle,\ldots$ be the normalised eigenvectors of $X$ and $\la_1,\la_2,\ldots$ the corresponding eigenvalues. Furthermore, we define the \emph{normalised} vectors $| \tilde{e}_j \rangle= L | e_j \rangle / \sqrt{\langle e_j| W | e_j \rangle}$. Then \bea \!\!\!\!\! \!\!\!\!\! \!\!\!\!\! && \tr \left( W(f(Y) - f(X))\right) \nonumber \\ &&=\sum_j \langle e_j| W | e_j \rangle \left\{ \langle \tilde{e}_j| f(Y) | \tilde{e}_j \rangle -f(\la_j) \right\}. \eea Now we use that for any unit vector $|v \rangle \in{\cal H}$, by convexity of $f$, \beq \langle v |f( Y)| v\rangle \geq {f\big(\langle v | Y| v \rangle\big)}. \eeq Also, $f(y)-f(x)\geq (y-x)f'(x)$ for $x,y\in{\mathbb R}$. Thus \bea \!\!\!\!\! \!\!\!\!\! \!\!\!\!\! \!\!\!\!\! && \tr \left(W(f(Y) - f(X))\right) \nonumber\\ && \geq \sum_j \langle e_j| W | e_j \rangle \left\{ f\left( \langle \tilde{e}_j | Y | \tilde{e}_j \rangle\right) -f(\la_j) \right\} \nonumber\\ && \geq \sum_j \langle e_j| W | e_j \rangle \left( \langle \tilde{e}_j | Y | \tilde{e}_j \rangle - \la_j \right) f'(\la_j) \nonumber\\ &&= \tr \left( W(Y -X) f'(X) \right), \eea which completes the proof. \section*{References}
TITLE: What formulas are required to calculate a 3d transformation? QUESTION [1 upvotes]: Considering the point of view of square A and B, what math transformations must be applied (either to the 3d camera or world) to transition from A to B? I can tell that for the B viewpoint I had to move right and up, but I lack the math background to know what formulas can give me accurate values. Thanks REPLY [0 votes]: I would recommend an approach where you first determine the initial basis unit vectors $\hat{u}$, $\hat{v}$, $\hat{w}$ and a fixed point $\vec{t}$, and corresponding final basis unit vectors $\hat{U}$, $\hat{V}$, $\hat{W}$ and a fixed point $\vec{T}$, such that the basis vectors are orthonormal, $$\begin{array}{rcr} \hat{u} \cdot \hat{u} = 1 & ~ & \hat{U} \cdot \hat{U} = 1 \\ \hat{u} \cdot \hat{v} = 0 & ~ & \hat{U} \cdot \hat{V} = 0 \\ \hat{u} \cdot \hat{w} = 0 & ~ & \hat{U} \cdot \hat{W} = 0 \\ \hat{v} \cdot \hat{u} = 0 & ~ & \hat{V} \cdot \hat{U} = 0 \\ \hat{v} \cdot \hat{v} = 1 & ~ & \hat{V} \cdot \hat{V} = 1 \\ \hat{v} \cdot \hat{w} = 0 & ~ & \hat{V} \cdot \hat{W} = 0 \\ \hat{w} \cdot \hat{u} = 0 & ~ & \hat{W} \cdot \hat{U} = 0 \\ \hat{w} \cdot \hat{v} = 0 & ~ & \hat{W} \cdot \hat{V} = 0 \\ \hat{w} \cdot \hat{w} = 1 & ~ & \hat{W} \cdot \hat{W} = 1 \\ \hat{u} \times \hat{v} = \hat{w} & ~ & \hat{U} \times \hat{V} = \hat{W} \\ \hat{w} \times \hat{u} = \hat{v} & ~ & \hat{W} \times \hat{U} = \hat{V} \\ \hat{v} \times \hat{w} = \hat{u} & ~ & \hat{V} \times \hat{W} = \hat{U} \\ \end{array}$$ Next, form two $4 \times 4$ transformation matrices: $$\mathbf{M}_\text{before} = \left[ \begin{matrix} u_x & v_x & w_x & t_x \\ u_y & v_y & w_y & t_y \\ u_z & v_z & w_z & t_z \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right ], \quad \mathbf{M}_\text{after} = \left[ \begin{matrix} U_x & V_x & W_x & T_x \\ U_y & V_y & W_y & T_y \\ U_z & V_z & W_z & T_z \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right ]$$ Then, you can calculate the transformation matrix $\mathbf{M}$ using $$\mathbf{M} =
\mathbf{M}_\text{after} \mathbf{M}_\text{before}^{-1}$$ noting that because of orthonormality, $$\mathbf{M}_\text{before}^{-1} = \left[ \begin{matrix} v_y w_z - v_z w_y & v_z w_x - v_x w_z & v_x w_y - v_y w_x & t_x (v_z w_y - v_y w_z) + t_y (v_x w_z - v_z w_x) + t_z (v_y w_x - v_x w_y) \\ u_z w_y - u_y w_z & u_x w_z - u_z w_x & u_y w_x - u_x w_y & t_x (u_y w_z - u_z w_y) + t_y (u_z w_x - u_x w_z) + t_z (u_x w_y - u_y w_x) \\ u_y v_z - u_z v_y & u_z v_x - u_x v_z & u_x v_y - u_y v_x & t_x (u_z v_y - u_y v_z) + t_y (u_x v_z - u_z v_x) + t_z (u_y v_x - u_x v_y) \\ 0 & 0 & 0 & 1 \\ \end{matrix} \right]$$ but this also applies to non-orthonormal basis vector sets, as long as the initial vector $\vec{u}$ corresponds to final vector $\vec{U}$, initial vector $\vec{v}$ to final $\vec{V}$, initial $\vec{w}$ to final $\vec{W}$, they are all nonzero, and initial vector $\vec{t}$ corresponds to final vector $\vec{T}$. The resulting transformation matrix $\mathbf{M}$ is applied to point $\vec{p} = ( x, y, z )$, transforming it to $\vec{q} = ( \chi , \gamma, \zeta )$, via $$\left[\begin{matrix} \chi \\ \gamma \\ \zeta \\ 1 \end{matrix} \right] = \mathbf{M} \left[ \begin{matrix} x \\ y \\ z \\ 1 \end{matrix} \right]$$ and represents both rotation and translation: the upper left $3\times 3$ matrix of $\mathbf{M}$ is a rotation matrix, and upper right $3 \times 1$ vector is the translation after rotation. These $4\times 4$ matrices are extremely common in 3D graphics, in e.g. OpenGL.
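If it helps, here is the recipe above as a short NumPy sketch. The frames and the test point are made-up values (a 90° rotation about $z$ plus a translation), so treat it as an illustration rather than your exact setup:

```python
import numpy as np

def frame(b1, b2, b3, origin):
    # Build the 4x4 matrix whose columns are the basis vectors and the fixed point.
    M = np.eye(4)
    M[:3, 0], M[:3, 1], M[:3, 2], M[:3, 3] = b1, b2, b3, origin
    return M

# Initial frame: standard axes at the origin.
u, v, w = np.eye(3)
t = np.array([0.0, 0.0, 0.0])

# Final frame (hypothetical): rotated 90 degrees about z, translated by (2, 1, 0).
U = np.array([0.0, 1.0, 0.0])
V = np.array([-1.0, 0.0, 0.0])
W = np.array([0.0, 0.0, 1.0])
T = np.array([2.0, 1.0, 0.0])

# M = M_after @ inv(M_before); here inv is computed numerically rather than
# via the closed-form orthonormal inverse shown above.
M = frame(U, V, W, T) @ np.linalg.inv(frame(u, v, w, t))

p = np.array([1.0, 0.0, 0.0, 1.0])   # point in homogeneous coordinates
q = M @ p                            # rotated, then translated
print(q[:3])                         # -> [2. 2. 0.]
```

The upper-left $3\times 3$ block of `M` is the rotation and the last column is the translation, exactly as described above.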
"Fantastic company, great sales team, efficient, fair and not pushy. We purchased our first Motorhome from Choose Leisure and have since had a couple of weekends away, trying and testing all the van's features. Everything seems great and we are delighted with the van. Big thank you to all the staff."
See the cost of attendance for undergraduate, domestic students at Indiana University Bloomington. Your new Tuition Guarantee: keep the same tuition until you graduate from your degree program, even if it takes longer than you planned. That's our promise. We understand that financing your education is an important concern. We're here to help you learn about the costs of school and the types of financial assistance available.
Renal Pathophysiology Laboratory, Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, Minnesota 55905 Submitted 10 March 2004; accepted in final form 18 July 2004 protein kinase A; mitogen-activated protein kinase; cyclin E; p21; Raf In some cell types, the antiproliferative effects of cAMP have been associated with protein kinase A-mediated phosphorylation of the Raf isoform Raf-1 on Ser43 or other serine residues, including S233 and S259. Phosphorylation of Raf-1 is associated with reduced Raf-1 kinase activity, decreased interaction of Raf-1 with its upstream effector Ras, and decreased ERK activity. Although previous studies demonstrated that cAMP inhibits MC mitogenesis, the site(s) on Raf-1 phosphorylated by PKA in MC has not been previously identified. Recent studies have indicated that phosphorylation of Raf-1 by PKA or suppression of ERK activity may not be necessary for cAMP-mediated inhibition of proliferation, at least in some cell types. In NIH 3T3 cells expressing Raf-1 with S43, S233, and S259 substituted with alanine residues to prevent their phosphorylation, cAMP still inhibited mitogenesis without suppressing Raf-1 or ERK activity (30). In CCL39 cells, cAMP inhibits mitogenesis without suppressing ERK activation. These studies indicate that pathways other than Raf-1 that regulate cell growth are targeted by PKA. Depending on cell type, potential targets of cAMP-PKA signaling have included cyclin D, cyclin E, cyclin A, and the cell cycle inhibitors p21 and p27 (25, 37, 38, 44, 47, 50, 51, 56). It is not known whether suppression of ERK activity is necessary for cAMP agonists to inhibit proliferation or whether cAMP agonists alter expression of cell cycle-regulatory proteins in MC. However, there is a great deal of evidence indicating that the cAMP-PKA pathway interacts with the ERK pathway in a cell type-specific manner.
For example, in some cell types that express B-Raf as the predominant Raf isoform, cAMP promotes ERK activation and stimulates mitogenesis (31, 35, 79). In some (31, 35, 79) but not all cell types (11), cAMP promotes mitogenesis through PKA-mediated phosphorylation of Rap-1, which increases B-Raf kinase activity. Whereas Raf-1 has a ubiquitous tissue distribution, B-Raf is highly expressed in cell types that proliferate in response to cAMP, including neuronal tissues and testis (92). Based on these considerations, some investigators postulated that B-Raf is a primary determinant of the cellular proliferative response to cAMP (36). No previous studies have determined whether B-Raf is expressed in MC. Although cAMP agonists are potent inhibitors of MC proliferation (69), therapeutic efficacy of these agents is limited by a number of untoward systemic side effects. Recent studies have indicated that phosphodiesterase (PDE) inhibitors, which prevent the catabolism of cAMP and thereby lead to PKA activation, may have therapeutic efficacy as selective and cell type-specific agonists of the cAMP-PKA pathway. The PDE superfamily is large and complex. At present, at least 12 families of PDEs have been described, which catalyze the hydrolysis of cAMP, cGMP, or both (7, 20, 48, 65, 89, 90). PDE family members are distinguished by primary structure, modes of regulation, and capacity for inhibition by specific PDE inhibitors (7, 8, 28). In most cells, the capacity to hydrolyze cAMP by PDE far exceeds the capacity for cAMP synthesis (24). The activity of PDE isozymes is tightly regulated (24, 65, 81). A small change in PDE isozyme activity can have a profound effect on cAMP signaling without large changes in total intracellular cAMP levels (7, 51, 94). Recent evidence suggests that PDE isozymes are capable of compartmentalizing intracellular cAMP-mediated responses within a cell (15, 27, 32, 33, 64, 65, 83).
We previously showed that cAMP hydrolysis in MC is almost exclusively directed by members of the PDE3 and PDE4 families (69). PDE3 family members have a high affinity and specificity for cAMP hydrolysis. A number of agents are potent and specific inhibitors of PDE3 activity, including lixazinone, cilostamide, cilostazol, and others (3, 8, 28, 74). There are two subfamilies of PDE3: PDE3A, which is expressed in cardiac myocytes, VSMC, oocytes, and other tissues, and PDE3B, which is expressed in adipocytes, hepatocytes, and pancreatic cells (18, 55, 80, 87). Recent studies have demonstrated the presence of additional isoforms generated through alternative transcriptional start sites and posttranscriptional processing (18, 97). There are at least 18 isoforms of PDE4 encoded by 4 distinct genes (PDE4A, PDE4B, PDE4C, and PDE4D) (12, 93). These isoforms are expressed in a cell-specific fashion and show distinct activities, distribution, and regulation. PDE4 activity is specifically inhibited by a number of pharmacological agents, including rolipram, RO-1724, and denbufylline (28, 69, 74). These agents have been employed as anti-inflammatory agents in a number of experimental and human studies (9, 40, 84). We previously demonstrated that PDE4 inhibitors suppress reactive oxygen species (ROS) generation by MC (15). However, expression of specific PDE3 or PDE4 isoforms in MC has not previously been determined. These studies are necessary to provide the basis for future studies to define potential mechanisms whereby PDE inhibitors regulate MC mitogenesis and response to inflammatory stimuli. Although we previously demonstrated that PDE3 but not PDE4 inhibitors suppress MC mitogenesis and PDE4 but not PDE3 inhibitors suppress MC ROS generation, potential mechanisms underlying the differential effect of PDE3 and PDE4 inhibitors in regulation of mitogenesis have not been established in MC or any other cell type.
The primary objectives of this study were 1) to identify specific PDE3 and PDE4 isoforms present in MC; 2) to determine whether MC express both Raf-1 and B-Raf isoforms; 3) to determine whether PDE3 and PDE4 inhibitors differentially regulate Raf-1 or B-Raf kinase activity; 4) to determine whether ERK inhibition is mechanistically related to the antiproliferative effects of PDE3 inhibitors; and 5) to define cell cycle targets of PDE3 vs. PDE4 inhibitors in MC. These studies provide further support for the notion that PDE inhibitors may be employed as therapeutic agents to regulate distinct functions in MC. MC preparation and culture. Handling of rats conformed to the institutional animal care guidelines established by the National Institutes of Health. MC cultures were obtained from 200-g male Sprague-Dawley rats by differential sieving, as described previously (13, 41, 69). Briefly, rats were anesthetized by intraperitoneal injection of a 50:50 mixture containing 20 mg/ml xylazine and 100 mg/ml ketamine. The kidneys were excised, the renal capsule was removed, and the cortical tissue was minced and passed through a stainless steel sieve (200-µm pore size). The homogenate was sequentially sieved through nylon meshes of 390-, 250-, and 211-µm pore openings. The cortical suspension was then passed over a 60-µm sieve to collect glomeruli. The purity of glomerular preparations was evaluated by light microscopy. Preparations typically contained >90% glomeruli. Glomeruli were seeded on plastic tissue culture dishes and grown in complete Waymouth's medium [Waymouth's medium supplemented with 20% heat-inactivated fetal bovine serum, 15 mM HEPES, 1 mM sodium pyruvate, 0.1 mM nonessential amino acids, 2 mM L-glutamine, 100 IU/ml penicillin, 100 µg/ml streptomycin, and 1% ITS+ (insulin, transferrin, selenium, and bovine serum albumin)]. Fresh medium was added every 3 days.
Cell outgrowths were characterized as MC by positive immunohistochemical staining for vimentin, smooth muscle-specific actin, and negative stains for cytokeratin, factor VIII-related antigen, and leukocyte-common antigen (antibodies from Dako, Carpinteria, CA). MC were passed once a week following treatment with trypsin-EDTA (0.02%; Sigma, St. Louis, MO). Cells used in experiments were from passages 5-15. PCR analysis for PDE profiles. Total RNA was isolated from rat MC using the RNeasy Total RNA Isolation Kit (Qiagen, Valencia, CA) according to the manufacturer's protocol. Reverse-transcription and PCR amplification were performed using the Gene-Amp System (PerkinElmer, Branchburg, NJ). PCR analysis of PDE3A, PDE3B, PDE4A, PDE4B, PDE4C, and PDE4D was performed essentially as previously described (57a), with the addition of a DNase I treatment (RNase-free DNase I; Roche Molecular Chemicals, Indianapolis, IN) to eliminate residual genomic DNA. All PCR products were sequenced in both orientations. Transfection studies. PKA and ERK activation was measured using the PathDetect In Vivo Signal Transduction Pathway trans-Reporting System (Stratagene, La Jolla, CA). Constitutively active Ras and Raf vector set, Dominant Negative Ras and Raf Vector Set (Clontech Laboratories, Palo Alto, CA) were used to investigate the effect of Ras and Raf overexpression on ERK activation in MC. MC were plated into 24-well culture dishes at 8 x 104 cells/well in complete Waymouth's medium. Twenty-four hours after being plated, cells were cotransfected with a firefly luciferase reporter vector (pTRE-Luc), a control Renilla luciferase reporter vector, and transactivator plasmids, e.g., pFA2-CREB for the PKA pathway and pFA2-Elk1 for the ERK pathway. Transfections were performed using FuGENE 6 Transfection Reagent (Roche Molecular Biochemical), according to the manufacturer's instructions. 
Eighteen hours after transfection, lixazinone or rolipram (10 µM each) and epidermal growth factor (EGF; 20 ng/ml) were added. Control cells received vehicle only. Cells were rinsed and lysed at various time points as described in RESULTS. Luciferase activity was assessed using the Dual-Luciferase Reporter Assay System (Promega, Madison, WI). Assay for cAMP. MC were incubated with PDE inhibitors (10 µM) in 24-well culture dishes for 60 min. Supernatants were removed and the reactions were terminated with 5% TCA (final concentration). The TCA was extracted with water-saturated ether, and the cAMP content was measured using RIA as previously described (16, 99). Measurement of [3H]thymidine incorporation. MC were plated into 24-well culture dishes at 5 x 104 cells/well and grown for 24-48 h in complete Waymouth's medium. Cells were rendered quiescent for 24 h in Waymouth's medium containing 0.5% fetal calf serum and then treated with PDE inhibitors (10 µM each). Control cells were treated with an equal volume of vehicle. In some experiments, EGF (20 ng/ml) was added 30 min after treatment with PDE inhibitors. After 20 h, cells were treated with methyl-[3H]thymidine (1 µCi/ml) and incubated for an additional 4 h. Cells were washed three times with 10% TCA and once with water and lysed by the addition of 0.2 N NaOH. Radioactivity was determined by liquid scintillation counting. Incorporation of [3H]thymidine was used as a measure of the rate of mitogenic DNA synthesis. Western blot analysis. MC were treated with PDE inhibitors, as described above. After incubation, MC were rinsed, harvested, and subjected to sonication (3 cycles of 10 s each, 8-µm amplitude) in 1x lysis buffer (Cell Signaling Technology). The homogenates were centrifuged at 10,000 g for 10 min at 4°C. Protein concentration of the supernatant was determined by the method of Lowry (62).
Equal amounts of lysate protein (~100 µg) were subjected to SDS-PAGE in the PROTEAN II minigel system or the Criterion System (Bio-Rad Laboratories, Hercules, CA). Lysates were denatured for 5 min at 100°C in SDS-loading buffer according to Laemmli (60). Electrophoresis was performed at a constant current (200 mA/gel), followed by transfer to polyvinylidene difluoride membranes (Bio-Rad Laboratories). The membranes were blocked with 5% nonfat dry milk in TBS containing 0.5% Tween 20 and incubated with primary antibodies followed by HRP-conjugated secondary antibodies. The blots were then visualized by exposure to X-ray film using the Phototype-HRP Western Detection System (Cell Signaling Technology). In vitro kinase assays for ERK activity. A p44/42 MAP Kinase Assay Kit (Cell Signaling Technology) was used to measure ERK activity, according to the manufacturer's instructions. Briefly, after treatment with PDE inhibitors (10 µM), MC were rinsed, harvested, and sonicated four times for 5 s each in 1x lysis buffer plus 1 mM PMSF. Samples were microcentrifuged for 10 min at 4°C, and protein concentration of the supernatants was determined, as described above. Two hundred microliters of cell lysate containing 200 µg of total protein were immunoprecipitated with an immobilized phospho-p44/42 MAPK antibody overnight at 4°C. After microcentrifugation, the pellets were washed twice with 1x lysis buffer and twice with 1x kinase buffer. The washed pellets were suspended in 50 µl of 1x kinase buffer supplemented with 200 µM ATP and 2 µg Elk-1 fusion protein and incubated for 30 min at 30°C. Reactions were terminated with 25 µl of 3x SDS sample buffer. Samples were boiled for 5 min, vortexed, microcentrifuged for 2 min, and then loaded (30 µl) on SDS-PAGE gels (12%). Samples were analyzed by Western blotting, as described above. Kinase assays for cyclin D and cyclin E activity. After treatment with PDE inhibitors (10 µM), cells were lysed in 1x lysis buffer, and protein concentration was determined, as described above.
Equal amounts of lysate protein (200 µg) were immunoprecipitated with antibodies specific for cyclin D and cyclin E. The immune complexes were collected with protein G plus agarose and washed twice with 1x lysis buffer. Complexes were resuspended and washed twice with kinase buffer (50 mM Tris·HCl, pH 7.4, 10 mM MgCl2, 1 mM DTT). Complexes were then resuspended in 50 µl of kinase buffer containing 2 µg of histone H1, 200 µM ATP, and 10 µCi [γ-32P]ATP (3,000 Ci/mmol), and incubated at 30°C for 30 min. After incubation, 25 µl of 3x SDS loading buffer were added, and the samples were boiled and electrophoresed on an SDS-PAGE gel. The gels were dried, and incorporation of 32P was visualized by autoradiography and quantitated with a Kodak image analysis system. Raf-1 and B-Raf kinase assay. Cells were treated with PDE inhibitors for 15 min, lysed, and immunoprecipitated with 5 µg of anti-Raf-1 and anti-B-Raf antibodies (Santa Cruz Biotechnology) overnight. The Raf-1 and B-Raf kinase activity in immunoprecipitates was measured in vitro using Raf-1 and B-Raf Kinase Cascade Assay Kits (Upstate Biotechnology, Lake Placid, NY), according to the manufacturer's instructions. Briefly, after a 30-min incubation at 30°C in the presence of Mg2+, MEK-1, and MAP kinase 2/Erk2, the MAP kinase 2/Erk2 substrate myelin basic protein (MBP) and [γ-32P]ATP were added and incubated at 30°C for 10 min. Twenty-five microliters of each reaction were spotted onto P81 phosphocellulose squares, and the squares were washed three times with 0.75% phosphoric acid and once with acetone. The level of 32P incorporation into MBP was determined by liquid scintillation counting. Caspase 3 assay. Caspase 3 activity of control and PDE inhibitor (10 µM)-treated MC was determined fluorometrically using the CaspACE Assay System (Promega), according to the manufacturer's instructions. Statistical analysis.
Data presented are representative of at least three independent experiments performed in duplicate or triplicate, as indicated in the figure legends. Groups or pairwise comparisons were evaluated by Student's t-test; P values of <0.05 were considered statistically significant. In complementary studies, MC were transfected with either a constitutively active MEK-1 construct or pBluescript as control and were treated with PDE inhibitors and EGF. [3H]thymidine uptake was assessed. Lixazinone significantly inhibited EGF-stimulated DNA synthesis in control plasmid-transfected cells, whereas the constitutively active MEK-1 plasmid blocked the inhibitory effect of lixazinone on MC mitogenesis (Fig. 7B). These data suggest that inhibition of the Ras-Raf-MEK-ERK signaling pathway is mechanistically linked to the inhibitory effect of lixazinone on MC mitogenesis. PDE3 and PDE4 inhibitors differentially modulate cell cycle-regulatory proteins. MC were treated with lixazinone or rolipram. Cyclin and cdk levels were determined by Western blot analysis; activity of cyclin D and cyclin E was assessed by histone H1 kinase assay. Lixazinone significantly suppressed cyclin D expression and activity at 8 h (Fig. 8, A and B). Although lixazinone did not suppress cyclin E expression (data not shown), cyclin E kinase activity was inhibited after 1 h of lixazinone treatment and the inhibitory effect persisted for up to 18 h. Rolipram alone reduced cyclin E kinase activity by 23% at 8 h; however, lixazinone inhibited cyclin E kinase activity to a greater extent (P < 0.01; Fig. 9). After 8 h of treatment, lixazinone inhibited cyclin A expression (by 27%); rolipram was without effect (Fig. 10). Neither lixazinone nor rolipram significantly altered expression of cdk2 and cdk4 (data not shown). The distribution of PDE3 and PDE4 isoforms in MC has not been previously characterized in detail.
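The pairwise comparisons described under Statistical analysis can be illustrated with a minimal pure-Python sketch of the two-sample Student's t statistic with pooled variance; the triplicate values below are invented for illustration and are not data from the study:

```python
import math

def students_t(a, b):
    """Two-sample Student's t statistic (equal-variance, pooled form)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    # Unbiased sample variances
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(pooled * (1 / na + 1 / nb))
    return t, na + nb - 2  # t statistic and degrees of freedom

# Hypothetical triplicate measurements for a control and a treated group
control = [10.0, 12.0, 14.0]
treated = [20.0, 22.0, 24.0]
t, df = students_t(control, treated)
```

For df = 4, comparing |t| against the two-tailed critical value 2.776 corresponds to the P < 0.05 threshold used in the study.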
We found that MC express both PDE3 isoforms (PDE3A and PDE3B) and express all four gene products comprising the PDE4 family (PDE4A, PDE4B, PDE4C, and PDE4D) (93). PDE3A is widely expressed in arterial tissues, platelets, and cardiac tissue (40, 97). Recent studies showed that alternative transcriptional and posttranscriptional processing of the PDE3A gene is responsible for the generation of at least three isoforms that differ in their number of membrane-association domains and protein kinase A/protein kinase B phosphorylation sites (18, 97). PDE3B is primarily expressed in adipocytes, hepatocytes, and pancreatic cells (80). Both PDE3A and PDE3B are expressed in cultured arterial smooth muscle cells (76). The PDE4 family is the largest PDE family characterized to date, with at least 18 isoforms (10, 21, 48, 49, 93). Recent studies have defined important functions for specific PDE4 isoforms. For example, PDE4D knockout mice have impaired growth and fertility (54). The airways of mice deficient in the PDE4D gene are refractory to muscarinic cholinergic stimulation (46, 71). These animals still exhibit pulmonary inflammation, indicating that other PDE4 isoforms such as PDE4A or PDE4B regulate this process (39, 66). Along these lines, PDE4B has been shown to be essential for TNF synthesis in response to LPS stimulation (53). The PDE4D5 isoform, but not other PDE4D isoforms, interacts with the RACK1 signaling scaffold protein (100), whereas the PDE4D3 isoform contains a unique NH2-terminal domain that allows interactions with specific SH3 proteins (6). Using a functional assay, we previously demonstrated that most cAMP PDE activity in MC resides within the cytosol (69).
However, additional studies are needed to determine which PDE3 isoform is responsible for inhibition of mitogenesis and which PDE4 isoform is responsible for inhibition of ROS generation in MC and to determine whether these activities reside within the cytosol or are localized to the plasma membrane or other cellular compartments. It has been postulated that the relative expression of B-Raf vs. Raf-1 may dictate whether cAMP stimulates or inhibits proliferation (91). Raf-1 has a ubiquitous tissue distribution, whereas B-Raf is expressed primarily in neuronal cells and testis (92). We found that MC expressed both Raf-1 and B-Raf. The PDE3 inhibitor lixazinone inhibited Raf-1 kinase activity, whereas the PDE4 inhibitor rolipram was without significant effect. However, both PDE3 and PDE4 inhibitors suppressed B-Raf kinase activity. It is likely that the effect of cAMP on B-Raf kinase activity depends on cell culture conditions and/or cell type. For example, cAMP stimulates B-Raf and ERK activity in melanocytes (11) and PC12 pheochromocytoma cells cultured in the presence of serum (34). However, cAMP inhibits Raf-1 and B-Raf activity in serum-starved PC12 cells (34, 77, 96) or PC12 cells treated with EGF, nerve growth factor, or platelet-derived growth factor (96). In differentiating promyelocytic HL-60 leukemia cells, cAMP inhibits B-Raf activity but activates ERK (17). In Rat-1 fibroblasts (which lack B-Raf), cAMP inhibits Raf-1 activity, leading to suppression of ERK activity and inhibition of mitogenesis. However, expression of B-Raf renders the cells resistant to the inhibitory effects of cAMP on both growth factor-induced activation of ERK and mitogenesis, suggesting that the relative levels of B-Raf and Raf-1 may determine whether cells proliferate or are growth inhibited by cAMP (34). The ability of B-Raf to activate MAPK may be related to its ability to complex with 14-3-3 proteins (79).
In cell types inhibited by cAMP, approximately fivefold less 14-3-3 protein was associated with B-Raf than was observed in cell types in which cAMP stimulated MAPK (79). Further studies are needed to address this issue in MC. In cell types that are growth inhibited by cAMP, it is thought that inhibition of mitogenesis is associated with PKA-mediated phosphorylation of Raf-1 on serine 43 and other serine residues, including serine 259 and serine 621 (22, 30, 45, 85, 98). We therefore sought to determine whether the differential effect of PDE3 and PDE4 inhibitors on MC mitogenesis was associated with a differential effect on Raf-1 phosphorylation. We found that the PDE3 inhibitor lixazinone promoted rapid phosphorylation of Raf-1 on serine 43 and serine 259 and decreased phosphorylation on serine 338. The PDE4 inhibitor rolipram had no significant effect on phosphorylation of Raf-1. In at least some cell types, phosphorylation of Raf-1 on serine 43 by PKA inhibits binding of Raf-1 to Ras-GTP (19, 30, 98) and suppresses Raf-1 kinase activity. PKA phosphorylates Raf-1 at other serine residues, including serine 259 and serine 621 (30, 73). However, elevated cAMP does not stimulate phosphorylation of serine 621 or suppress activity of the catalytic domain of Raf-1 in vivo, indicating that this may be an in vitro artifact (88). Recent studies showed that growth factor-stimulated mitogenesis is associated with phosphorylation of Raf-1 at serine 338 (26). This phosphorylation apparently promotes membrane localization of Raf-1 and subsequent activation of ERK signaling (26, 68). We found that lixazinone, but not rolipram, decreased phosphorylation of Raf-1 at serine 338 and suppressed Raf-1 kinase activity. Further studies are needed to determine whether lixazinone-mediated decreases in Raf-1 kinase activity are due to decreased phosphorylation of Raf-1 at serine 338.
We found that the inhibitory effect of lixazinone on ERK activity was abrogated when MC were transfected with a constitutively active Ras or Raf plasmid. Furthermore, the antiproliferative effect of lixazinone was reversed when MC were transfected with a constitutively active MEK-1 construct. These studies underscore the importance of the ERK pathway in regulation of MC mitogenesis and indicate that strong activation of the Ras-Raf-MEK-ERK pathway can stimulate cAMP-independent proliferation. Although these studies demonstrate a critical role of cAMP in regulation of the Ras-Raf-MEK-ERK pathway, recent studies have suggested that cAMP may regulate mitogenesis through other pathways. In some cell systems, inhibition of ERK by cAMP does not appear to explain the growth-inhibitory effects of cAMP (23, 43, 70). In rat smooth muscle cells, cAMP at concentrations that strongly inhibit DNA synthesis does not inhibit ERK activation (42). In MC, addition of cAMP at a time when transient growth factor activation of ERK has declined to basal levels still results in potent inhibition of DNA synthesis. The efficiency whereby cAMP elevation inhibits smooth muscle cell DNA synthesis is nearly identical when cAMP is added before growth factor stimulation or 6-12 h later (58). The same phenomenon has been observed in other cell types (70). Furthermore, recent studies showed that ERK is activated normally in cells derived from Raf-1 knockout animals (50, 72). These observations indicate the presence of Raf-1-independent pathways for ERK activation, at least in some circumstances. Based on these considerations, we sought to define other potential cell cycle targets that may be subject to differential regulation by PDE3 and PDE4 inhibitors. We found that the antiproliferative effects of PDE3 inhibitors were associated with reduced cyclin D levels, cyclin E kinase activity, and decreased cyclin A levels.
In mammalian cells, the G1 to S transition is governed by cyclin-cyclin-dependent kinase (cdk) complexes, in which the cdk acts as the catalytic subunit and the cyclin acts as the regulatory subunit. Expression of cyclin D increases in the early-to-mid part of G1 to form complexes with cdk4 and cdk6 (57, 63, 86). In mid-G1, cyclin E associates with cdk2 to form an active complex. In contrast to cyclin D, cyclin E expression is not generally induced by growth factors. However, activity of the cyclin E-cdk2 complex is markedly increased in mid-G1 following growth factor stimulation, and inhibition of cdk2 activity prevents cells from entering the S phase of the cell cycle (86). In the S phase, cdk2 complexes with cyclin A (86). Activity of cyclin-cdk complexes is also regulated by cdk inhibitor proteins that bind to and inhibit cdks. In smooth muscle cells (38, 59, 95) and other cell types (4, 56), cAMP suppresses induction of cyclin D and cyclin A and upregulates the cdk inhibitor protein p27. In smooth muscle cells, p27 levels are induced and levels of p27 associated with cdk2 are also increased (58), resulting in an abrogation of both cdk2 and cdk4 activities (38). We found that PDE3 inhibitors increase expression of the cdk inhibitor protein p21, whereas PDE4 inhibitors had no effect on p21 expression. These studies indicate that PDE3 inhibitors suppress MC mitogenesis, at least in part, through upregulation of the cdk inhibitor p21. We previously showed that MC possess functionally compartmentalized pools of cAMP regulated by distinct PDE isoforms. A pool of cAMP directed by PDE3 inhibits mitogenesis, whereas a pool of cAMP directed by PDE4 suppresses ROS generation by MC (15). In our current studies, we demonstrate that PDE3 inhibitors promote phosphorylation of Raf-1 on serine 43 and serine 259 and inhibit phosphorylation of Raf-1 on serine 338, and PDE4 inhibitors were without effect.
PDE3, but not PDE4, inhibitors suppress Raf-1 kinase activity and ERK activation, indicating that MAPK signaling is regulated by functionally compartmentalized pools of cAMP. The compartmentalization can be overcome by increasing intracellular cAMP levels with the adenylate cyclase activator forskolin (FSK). The inhibitory effect of PDE3 inhibitors on ERK activity is abolished when constitutively active Ras or Raf constructs are expressed, and the suppressive effects of PDE3 inhibitors on MC mitogenesis are reversed when constitutively active MEK-1 is expressed in MC. These studies provide evidence that, at least in MC, PDE3 inhibitors act by suppressing ERK activation. Other targets of cAMP signaling directed by PDE3 include cyclins D, E, and A and the cell cycle inhibitor p21. Additional studies are needed to determine whether the observed effects of PDE3 inhibitors on cell cycle protein expression and activity are mechanistically related to suppression of ERK activity. Although nonselective PDE inhibitors such as IBMX have antiproliferative effects in MC and other cell types, a variety of systemic side effects limits their clinical use. In contrast, isoform-specific PDE inhibitors have been used to treat a variety of inflammatory and autoimmune conditions. PDE inhibitors may provide useful therapeutic targets for modulating ERK activation in response to acute or chronic renal injury and may thereby arrest progression of a variety of chronic:
Life Was Better in Black and White

Raise your hand if you grew up watching black-and-white TV. Remember when you only got 3 channels on your television? And the day started, and ended, with that familiar TV test pattern. It meant the idea of watching television 24 hours a day had not yet been invented, developed, or become commonplace.

Television in Black and White '63. It felt so good, it felt so right. Life looked better in black and white. I'd trade all the channels on the satellite, If I could just turn back the clock tonight To when everybody knew wrong from right. Life was better in black and white!

Remembering Those Black and White Words

"Barney, you promised me when I gave you that bullet you'd keep it in your shirt pocket. Now, why'd you take it out?" ~ Andy Taylor

"Lucy, you got some 'splainin' to do!" ~ Ricky Ricardo

Rob Petrie: "How's your white satin evening gown?" Laura Petrie: "Fine. How's your red flannel bathrobe?"

"Gee, Mr. Wilson - you're the best friend a kid could ever have." "Gee, Mr. Wilson, for a grownup, you're swell." ~ Dennis the Menace

The Lone Ranger: "Don't worry about this mask, it's on the side of the law."

Alice Kramden: "I've got to admit it, Ralph. For once in your life, you were right: we never should've gotten a television set." ~ The Honeymooners

In the Words of Festus Haggin from Gunsmoke

"He ain't got the gumption to pound sand down a rat hole."
"The onliest thing you get from stradlin' the fence is a sore backside."
"You couldn't burst a bird's egg with a ball-peen hammer."
"Tighter than the feathers on a prairie chicken's rump."
"No more chance than a grasshopper in a hen house."
"Ain't you startin' to itch before you git bit?"
"When you learn a thing a day you store up smart."
R.J. Ogren has been an artist, actor and teacher for over 35 years. Born in St. Charles, Illinois in the spring of 1944, and making his first mark as an artist when, at the age of 6, he designed the zoo animals for the first grade play, R.J. has now created hundreds of paintings, portraits, murals, cartoons, graphic designs, scenic designs and architectural renderings across the United States, England and Australia.
TITLE: Rocket speed (is the equation I chose correct, and are my assumption and calculation correct with regard to Hawking's statement?) QUESTION [1 upvotes]: Hawking wrote: "Exhaust speed of chemical rockets is 3 kilometers/second. By dropping 30 percent of their mass, they can achieve a speed of about 0.5 kilometer per second and then slow down again." Does "0.5" here mean delta-v (change of velocity)? I think this is about the Tsiolkovsky equation: $$\Delta v = v_E \ln{ \left(\frac{m_i}{m_f} \right) } $$ Since he says 30% of the mass is jettisoned, I take $m_i = 100$ and $m_f = 70$, which gives $3 \times \ln(100/70) = 1.07$ km/s. That is not equal to what Hawking says (0.5 km/s). But if we use log base 10 instead of the natural logarithm, we get 0.46 km/s, which is nearly equal to Hawking's figure. The real equation, however, uses the natural logarithm. So I would like to know what's wrong. REPLY [2 votes]: The original calculation, giving $\Delta v=1.07$ km/s, is correct. The crucial part is that the rocket is assumed to speed up and then slow down again. The rocket uses about 0.5 km/s of $\Delta v$ to reach a speed of about 0.5 km/s, and then uses another 0.5 km/s of $\Delta v$ to slow back down to rest. REPLY [1 votes]: You've used the equation correctly, including the natural logarithm. Have another look at the quote: "Exhaust speed of chemical rockets is 3 km/s. By jettisoning 30 % of their mass, they can achieve a speed of about 0.5 km per second and then slow down again." So that's using 30% of the mass to accelerate to about 0.5 km/s, rotate and point the engine in the other direction, and then decelerate back down to 0. If you didn't turn around, you'd be going the full 1.07 km/s you calculated. The logarithmic term $$v_e \ln{ \left(\frac{m_i}{m_f} \right) } = v_e \ln{ \left(\frac{1.0}{0.7} \right) } \approx 0.36 \ v_e $$ applies to the whole trip.
You can take the square root of the mass ratio to estimate the propellant used for the first half of the trip, i.e., to reach about 0.5 km/s: $$\left(\frac{1}{0.7} \right)^{1/2} \approx \left(\frac{1}{0.84} \right)$$ So you would use 16% of your original mass to speed up, and then another 14% of your original mass (a mass ratio of $0.84/0.70$) to slow down again. That makes sense, because the rocket is now lighter.
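The split into two equal legs can be double-checked with a short Python sketch (the 100/70 mass figures follow the question; the function and variable names are just illustrative):

```python
import math

def delta_v(v_exhaust, m_initial, m_final):
    """Tsiolkovsky rocket equation: delta-v = v_e * ln(m_i / m_f)."""
    return v_exhaust * math.log(m_initial / m_final)

v_e = 3.0  # km/s, exhaust speed of a chemical rocket (from the quote)

# One continuous burn of 30% of the mass: the asker's 1.07 km/s figure.
total = delta_v(v_e, 100.0, 70.0)

# Splitting the trip into two equal legs: each leg gets the square root
# of the overall mass ratio, so each contributes half the total delta-v.
per_leg_ratio = math.sqrt(100.0 / 70.0)     # ~1/0.84 per leg
accelerate = v_e * math.log(per_leg_ratio)  # ~0.54 km/s to speed up
decelerate = v_e * math.log(per_leg_ratio)  # ~0.54 km/s to slow down
```

Each leg contributes half of the 1.07 km/s total, so the peak speed comes out near Hawking's "about 0.5" km/s.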
\begin{document} \title[Universal $sl_2$ invariant and Milnor invariants]{The universal $sl_2$ invariant and Milnor invariants} \author[J.B. Meilhan]{Jean-Baptiste Meilhan} \address{Univ. Grenoble Alpes, IF, F-38000 Grenoble, France} \email{jean-baptiste.meilhan@ujf-grenoble.fr} \author[S. Suzuki]{Sakie Suzuki} \address{The Hakubi Center for Advanced Research/Research Institute for Mathematical Sciences, Kyoto University, Kyoto, 606-8502, Japan. } \email{sakie@kurims.kyoto-u.ac.jp} \begin{abstract} The universal $sl_2$ invariant of string links has a universality property for the colored Jones polynomial of links, and takes values in the $\hbar$-adic completed tensor powers of the quantized enveloping algebra of $sl_2$. In this paper, we exhibit explicit relationships between the universal $sl_2$ invariant and Milnor invariants, which are classical invariants generalizing the linking number, providing some new topological insight into quantum invariants. More precisely, we define a reduction of the universal $sl_2$ invariant, and show how it is captured by Milnor concordance invariants. We also show how a stronger reduction corresponds to Milnor link-homotopy invariants. As a byproduct, we give explicit criteria for invariance under concordance and link-homotopy of the universal $sl_2$ invariant, and in particular for sliceness. Our results also provide partial constructions for the still-unknown weight system of the universal $sl_2$ invariant. \end{abstract} \maketitle \section{Introduction} The theory of quantum invariants of knots and links emerged in the middle of the eighties, after the fundamental work of Jones. Instead of the classical tools of topology, such as algebraic topology, used until then, this new class of invariants was derived from interactions of knot theory with other fields of mathematics, such as operator algebras and representation theory of quantum groups, and revealed close relationships with theoretical physics.
Although this gave rise to a whole new class of powerful tools in knot theory, we still lack a proper understanding of the topological information carried by quantum invariants. One way to attack this fundamental question is to exhibit explicit relationships with classical link invariants. The purpose of this paper is to give such a relation, by showing how a certain reduction of the universal $sl_2$ invariant is captured by Milnor invariants. Recall that Milnor's $\overline{\mu}$-invariants were defined by Milnor in the fifties \cite{Milnor,Milnor2}. Given an $l$-component oriented, ordered link $L$ in $S^3$, Milnor invariants $\overline{\mu}_I(L)$ of $L$ are defined for each multi-index $I=i_1 i_2 \cdots i_m$ (i.e., any sequence of possibly repeating indices) among $\{1,\ldots, l\}$. Unfortunately, these $\overline{\mu}_I$ are in general not well-defined integers, as their definition contains a rather intricate self-recurrent indeterminacy. In \cite{HL1}, Habegger and Lin showed that the indeterminacy in Milnor invariants of a link is equivalent to the indeterminacy in representing it as the closure of a string link, which is a pure tangle without closed components, and that Milnor invariants are actually well-defined integer-valued invariants of string links. They also showed how the first non-vanishing Milnor string link invariants of length $k+1$ can be assembled into a single Milnor map $\mu_k$. See Section \ref{2} for a review of string links and Milnor invariants. Milnor invariants constitute an important family of classical (string) link invariants, and their connection with quantum invariants has already been the subject of several works. The first attempt seems to be due to Rozansky, who conjectured a formula relating Milnor invariants to the Jones polynomial \cite{rozansky}.
An important step was taken by Habegger and Masbaum, who showed explicitly in \cite{HMa} how Milnor invariants are related to the Kontsevich integral, which is universal among quantum invariants; roughly speaking, they showed that, for an $l$-component string link $L$ with vanishing Milnor invariants of length $\le m$, we have \begin{align} \label{eq:HM} Z^t(L) = 1 + \mu_{m}(L) + (\textrm{terms of degree $\ge m+1$}) \in \mathcal{B}^t(l), \end{align} where $Z^t$ is the projection of the Kontsevich integral onto the quotient space $\mathcal{B}^t(l)$ of $\{1,\ldots, l\}$-labeled Jacobi diagrams modulo non-simply connected Jacobi diagrams, and where $\mu_{m}(L)$ is the Milnor map of $L$ regarded as an element of $\mathcal{B}^t(l)$. See Section \ref{sec:HM} for details. More recently, Yasuhara and the first author gave explicit formulas expressing Milnor invariants in terms of the HOMFLYPT polynomial of knots \cite{MY}. The present paper exhibits another type of such relationships, involving the universal $sl_2$ invariant. Given a ribbon Hopf algebra $H$, Reshetikhin and Turaev defined an invariant of framed links colored by finite dimensional representations of $H$ \cite{RT}. The universal invariant associated to $H$ has a universality property for the Reshetikhin-Turaev invariant, in the sense that it is defined for non-colored objects, and is such that taking the trace in the representations attached to the link components recovers the Reshetikhin-Turaev invariant \cite{Law1,Law2,O}. In this paper, we consider the universal $sl_2$ invariant, which is universal for the colored Jones polynomial in the above sense. For an $l$-component framed string link $L$, the universal $sl_2$ invariant $J(L)$ takes values in the $l$-fold completed tensor powers $U_{\hbar}(sl_2)^{\hat \otimes l}$ of the quantized enveloping algebra $U_{\hbar}(sl_2)$.
The second author \cite{sakie1, sakie2, sakie3} studied the universal $sl_2$ invariant of several classes of \textit{bottom tangles} (ribbon, boundary, and Brunnian) which admit vanishing properties for Milnor invariants. Here, we can identify bottom tangles with string links via a fixed one-to-one correspondence, see \cite{H1}. See Section \ref{3} for the definitions of $U_{\hbar}(sl_2)$ and the universal $sl_2$ invariant. Our first main result, Theorem \ref{sth2}, can be roughly stated as follows. Let $S(sl_2)$ be the symmetric algebra of $sl_2$. Using the PBW basis, we will define an isomorphism of $\mathbb{Q}$-modules (see Section \ref{5.1}) \begin{align}\label{qm} U_{\hbar}(sl_2)^{\hat\otimes l} &\to S(sl_2)^{\otimes l}[[\hbar]], \end{align} and consider the projection $$ \pi^t\co U_{\hbar}(sl_2)^{\hat \otimes l} \to \prod_{m\geq 1}(S(sl_2)^{\otimes l})_{m+1} \hbar^m, $$ where $(S(sl_2)^{\otimes l})_{m+1}$ is the degree $m+1$ part of the graded algebra $S(sl_2)^{\otimes l}$ with respect to the length of the words in $sl_2$. On the other hand, there is a graded algebra homomorphism $$ W\co \mathcal{B}(l) \to S(sl_2)^{\otimes l}[[\hbar]], $$ called \textit{the $sl_2$ weight system}, which is defined on the space $\mathcal{B}(l)$ of $\{1,\ldots, l\}$-labeled Jacobi diagrams (see Section \ref{4} for the definition). Actually $\mu_{m}(L)$ is contained in the space of tree Jacobi diagrams, i.e. connected and simply connected Jacobi diagrams, and the restriction of $W$ to the space of tree Jacobi diagrams takes values in $\prod_{m\geq 1}(S(sl_2)^{\otimes l})_{m+1} \hbar^m$; see Lemma \ref{wcc}. Thus we can compare $(W \circ \mu_{m})(L)$ and $J^t(L) := (\pi^t\circ J) (L)$, and obtain the following result. \begin{theorem*}[Theorem \ref{sth2}] Let $m\geq 1$. If $L$ is a string link with vanishing Milnor invariants of length $\le m$, then we have $$ J^t(L)\equiv(W \circ \mu_{m}) (L) \oh{m+1}. 
$$ \end{theorem*} \noindent Here, and throughout the paper, we simply set $(\mathrm{mod} \ \hbar^{k})=( \mathrm{mod} \ \hbar^{k}U_{\hbar}^{\hat\otimes l})$ for $k\geq 1$ and an appropriate $l\geq 1$. Note that the case $m=1$ of this statement is essentially well-known, see Proposition \ref{sc}. This result implies a certain `concordance invariance'-type result for $J^t$, stated in Corollary \ref{cor:main}, which implies in particular that $J^t$ is trivial for slice string links. There is also a variant of the above theorem, using another projection map $\tilde \pi^t$ onto a larger quotient of $U_{\hbar}^{\hat \otimes l}$; see Remark \ref{rem:tbw}. This provides another criterion for the universal $sl_2$ invariant, which applies in particular to slice, boundary or ribbon string links. \begin{theorem*}[Corollary \ref{cor:final}] Let $L$ be an $l$-component string link with vanishing Milnor invariants. Then we have $$J(L)\in 1+ \prod_{ 1\leq i\leq j}(S(sl_2)^{\otimes l})_i\hbar^j. $$ \end{theorem*} \noindent This result strongly supports \cite[Conjecture 1.5]{sakie2}, where the second author suggests that the universal $sl_2$ invariant of a bottom tangle with vanishing Milnor invariants is contained in a certain subalgebra of $U_{\hbar}(sl_2)^{\hat \otimes l}$. As is often the case with Milnor invariants, the proof of the first main theorem consists mainly of two steps: we first prove the result for Milnor link-homotopy invariants, then deduce the general case using cabling operations. Recall that link-homotopy is the equivalence relation generated by self-crossing changes. Habegger and Lin showed that Milnor invariants indexed by sequences with no repetition form a complete set of link-homotopy invariants for string links \cite{HL1}. We can thus consider the link-homotopy reduction $\mu^h_{m}$ of the Milnor map $\mu_{m}$ (see Section \ref{subseq}).
On the other hand, we consider the projection of $\mathbb{Q}$-modules $$ \pi^h\co U_{\hbar}^{\hat \otimes l} \to \bigoplus_{m=1}^{l-1}\langle sl_2 \rangle _{m+1}^{(l)}\hbar^m, $$ where $\langle sl_2 \rangle_{m+1}^{(l)}\subset (S(sl_2)^{\otimes l})_{m+1}$ denotes the subspace spanned by tensor products such that each tensorand is of degree $\leq 1$, that is, roughly speaking, tensor products of $1$'s and elements of $sl_2$. It turns out that the restriction of the $sl_2$ weight system $W$ to the space of tree Jacobi diagrams with non-repeated labels takes values in this space $\bigoplus_{m=1}^{l-1}\langle sl_2 \rangle _{m+1}^{(l)}\hbar^m$. Thus, similarly as before, we can compare $(W \circ \mu^h_{m})(L)$ and $J^h(L) := (\pi^h\circ J) (L)$, and obtain the following second main result. \begin{theorem*}[Theorem \ref{sth2h}] Let $m\geq 1$. If $L$ is a string link with vanishing Milnor link-homotopy invariants of length $\le m$, then we have $$ J^h(L)\equiv (W \circ \mu^h_{m})(L) \oh{m+1}. $$ \end{theorem*} In order to prove the latter, one of the key results is a `link-homotopy invariance'-type result for the map $J^h$ (Proposition \ref{s241}). This reduces the proof to an explicit computation for a link-homotopy representative, given in terms of the lower central series of the pure braid group. In the process of proving Proposition \ref{s241}, we obtain, as above, a variant of Theorem \ref{sth2h} using another projection map, giving an algebraic criterion detecting link-homotopically trivial string links; see Remark \ref{rem:tbw2} and Corollary \ref{cor:final2}. It may be worth noting here that Theorem \ref{sth2h} cannot in general be simply deduced from Theorem \ref{sth2} by a mere `link-homotopy reduction' process. (This is simply because a string link may in general have nonzero Milnor invariants of length $m$, yet vanishing Milnor link-homotopy invariants of length $m$.) We also emphasize that our results are not mere consequences of Habegger-Masbaum's work. 
Indeed, since the Kontsevich integral $Z$ is universal among quantum invariants, we know that $J$ can be recovered from $Z$ via a weight system map, so that Theorems \ref{sth2} and \ref{sth2h} could in principle be obtained from (\ref{eq:HM}) simply by applying this weight system. But no explicit formula is known for this weight system map, since we do not know an explicit algebra homomorphism between $U_{\hbar}(sl_2)$ and $U(sl_2)[[\hbar]]$; see \cite{Kassel}. This is why we had to fix a $\mathbb{Q}$-module homomorphism in (\ref{qm}). Actually, we expect that our result could allow us to study and compute the universal $sl_2$ invariant weight system, or at least its restriction to $\mathcal{B}^t(l)$. It is also worth mentioning here that the $sl_2$ weight system $W$ is not injective, and thus we do not expect that the universal $sl_2$ invariant detects Milnor invariants. This follows from the fact that $W$ takes values in the invariant part of $S(sl_2)^{\otimes l}[[\hbar]]$ and a simple argument comparing the dimensions of the domain and image. We will further study properties of the universal $sl_2$ weight system in a forthcoming paper. The rest of the paper is organized as follows. In Section 2, we review in detail the definition of Milnor numbers and of the Milnor maps $\mu_k$, and recall some of their properties. In Section 3, we recall the definitions of the quantized enveloping algebra $U_{\hbar}(sl_2)$ and the universal $sl_2$ invariant, and recall how the framing and linking numbers are simply contained in the latter. Section 4 provides the diagrammatic setting for our paper; we review the definition of Jacobi diagrams, and their close relationships with the material from the previous sections. This allows us to give the precise statements of our main results in Section 5. Sections 6, 7 and 8 are dedicated to the proofs. Specifically, the link-homotopy version of our main result is shown in Section 6, while Section 7 contains the proof of the general case.
Some of the key ingredients of these proofs require the theory of claspers, which we postpone to Section 8. \begin{acknowledgments} The first author is supported by the French ANR research projects ``VasKho'' ANR-11-JS01-00201 and ``ModGroup'' ANR-11-BS01-02001. The second author is supported by JSPS Research Fellowships for Young Scientists. The authors wish to thank Kazuo Habiro for helpful comments and conversations. \end{acknowledgments} \section{Milnor invariants}\label{2} Throughout the paper, let $l\ge 1$ be some fixed integer. Let $D^2$ denote the standard $2$-disk equipped with $l$ marked points $p_1,\ldots,p_l$ in its interior as shown in Figure \ref{fig:basis}. Fix also a point $e$ on the boundary of the disk $D^2$, and for each $i=1,\ldots,l$, pick an oriented loop $\alpha_i$ in $D^2$ based at $e$ and winding around $p_i$ in the trigonometric direction (i.e., counterclockwise). See Figure \ref{fig:basis}. \begin{figure}[!h] \includegraphics[scale=0.6]{disk.eps} \caption{The disk $D^2$ with $l$ marked points $p_i$, and the loops $\alpha_i$; $i=1,\ldots,l$. }\label{fig:basis} \end{figure} An $l$-component string link is a proper embedding of $l$ disjoint copies of the unit interval $[0,1]$ in $D^2\times [0,1]$, such that for each $i$, the image $L_i$ of the $i$th copy of $[0,1]$ runs from $(p_i,1)$ to $(p_i,0)$. The arc $L_i$ is called the $i$th component of $L$. An $l$-component string link is equipped with the \emph{downwards} orientation induced by the natural orientation of $[0,1]$. In this paper, by a string link we will implicitly mean a \emph{framed} string link, that is, equipped with a trivialization of its normal bundle. (Here, it is required that this trivialization agrees with the positive real direction at the boundary points.) In the various figures of this paper, we make use of the blackboard framing convention.
The ($0$-framed) $l$-component string link $\{p_1,\ldots,p_l\}\times[0,1]$ in $D^2\times [0,1]$ is called the {\em trivial $l$-component string link} and is denoted by $\mathbf{1}_l$, or sometimes simply $\mathbf{1}$ when the number of components is implicit. Let $SL(l)$ denote the set of isotopy classes of $l$-component string links fixing the endpoints. The stacking product endows $SL(l)$ with a monoid structure, with the trivial $l$-component string link $\mathbf{1}_l$ as unit element. In this paper, we use the notation $\cdot$ for the stacking product, with the convention that the rightmost factor is \emph{above}. Note that the group of units of $SL(l)$ is precisely the pure braid group on $l$ strands $P(l)$ \cite{HL2}. \subsection{Artin representation and the Milnor map $\mu_k$ for string links} \label{sec:milnormap} In this subsection we review Milnor invariants for string links, following \cite{HL1,HL2}. For an $l$-component string link $L=L_1\cup\cdots\cup L_l$ in $D^2\times [0,1]$, denote by $Y=(D^2\times [0,1])\setminus N(L)$ the exterior of $L$, that is, the complement of an open tubular neighborhood $N(L)$ of $L$, and set $Y_0=(D^2\times \{0\})\setminus N(L)$ and $Y_1=(D^2\times \{1\})\setminus N(L)$. For $i=0,1$, the fundamental group of $Y_i$ based at $(e,i)$ is identified with the free group $\F$ on generators $\alpha_1,\ldots,\alpha_l$. Recall that the lower central series of a group $G$ is defined inductively by $\Gamma_1G=G$ and $\Gamma_{k+1}G=[G,\Gamma_k G]$. By a theorem of Stallings \cite{stallings}, the inclusions $\iota_i:Y_i\longrightarrow Y$ induce isomorphisms $(\iota_i)_k:\pi_1(Y_i)/\Gamma_{k+1}\pi_1(Y_i) \longrightarrow \pi_1(Y)/\Gamma_{k+1}\pi_1(Y)$ for any positive integer $k$. Hence for each $k$, the string link $L$ induces an automorphism $(\iota_0)_k^{-1}\circ(\iota_1)_k$ of $\F/\Gamma_{k+1}\F$.
Actually, this assignment defines a monoid homomorphism $$ A_k: SL(l)\rightarrow \textrm{Aut}_0\left( \F/\Gamma_{k+1}\F \right), $$ called the \emph{$k$th Artin representation}, where $\textrm{Aut}_0\left( \F/\Gamma_{k+1}\F \right)$ denotes the group of automorphisms of $\F/\Gamma_{k+1}\F$ sending each generator $\alpha_j$ to a conjugate of itself and preserving the product $\prod_j \alpha_j$. More precisely, for each component $j$, consider the \emph{preferred $j$th longitude} of $L$, which is an $f_j$-framed parallel copy of $L_j$, where $f_j$ denotes the framing of component $j$. For any positive integer $k$, this defines an element $l_j$ in $\pi_1(Y)/\Gamma_{k+1}\pi_1(Y)$, and we set $l^k_j:=(\iota_0)_k^{-1}(l_j)\in \F/\Gamma_{k+1}\F$. Then we have that $A_k(L)$ maps each generator $\alpha_j$ to its conjugate $$ A_k(L):\alpha_j\mapsto l_j^k \alpha_j (l_j^k)^{-1}.$$ \noindent (Here, we denote the image of $\alpha_j$ in the lower central series quotient $\F/\Gamma_{k+1}\F$ again by $\alpha_j$.) Denote by $SL_k(l)$ the set of $l$-component string links whose longitudes are all trivial in $\F/\Gamma_{k}\F$. We have a descending filtration of monoids $$ SL(l)=SL_1(l)\supset SL_2(l)\supset \cdots \supset SL_k(l)\supset \cdots$$ called the \emph{Milnor filtration}, and we can consider the map $$ \mu_{k}: SL_k(l) \rightarrow \dfrac{\F}{\Gamma_2 \F}\otimes \dfrac{\Gamma_k \F}{\Gamma_{k+1} \F} $$ for each $k\ge 1$, which maps $L$ to the sum $$ \mu_{k}(L) := \sum_{j=1}^l \alpha_j\otimes l^{k}_j, $$ called the \emph{degree $k$ Milnor map}. \subsection{Milnor numbers for string links} \label{milnornb} As mentioned in the introduction, Milnor invariants were originally defined as numerical invariants. Let us briefly review their definition and connection to the Milnor map. Let $\mathbb{Z}\langle \langle X_1,\ldots,X_l\rangle \rangle$ denote the ring of formal power series in the non-commutative variables $X_1,\ldots,X_l$.
The \emph{Magnus expansion} $E: \F\rightarrow \mathbb{Z}\langle \langle X_1,\ldots,X_l\rangle \rangle$ is the injective group homomorphism which maps each generator $\alpha_j$ of $\F$ to $1+X_j$ (and thus maps each $\alpha_j^{-1}$ to $1-X_j+X_j^2-X_j^3+\cdots$). Since the Magnus expansion $E$ maps any element of $\Gamma_{k+1}\F$ to $1$ plus terms of degree $\geq k+1$, the coefficient $\mu_{i_1i_2\ldots i_{m}j}(L)$ of $X_{i_1}\cdots X_{i_{m}}$ in the Magnus expansion $E(l^k_j)$ is a well-defined invariant of $L$ for any $m\leq k$,\footnote{Note that the integer $k$ can be chosen arbitrarily large, so this condition is not restrictive. } and it is called a \emph{Milnor $\mu$-invariant}, or Milnor number, of length $m+1$. Milnor invariants are sometimes referred to as higher order linking numbers, since $\mu_{ij}(L)$ is merely the linking number of components $i$ and $j$, while $\mu_{ii}(L)$ is just the framing of the $i$th component. For each $k\ge 1$, the $k$th term $SL_k(l)$ of the Milnor filtration coincides with the submonoid of $SL(l)$ of string links with vanishing Milnor $\mu$-invariants of length $\le k$, and the Milnor map $\mu_{k}$ is equivalent to the collection of all Milnor $\mu$-invariants of length $k+1$. Recall that two $l$-component string links $L$ and $L'$ are concordant if there is an embedding $$ f: \left(\sqcup_{i=1}^l [0,1]_i\right)\times I \longrightarrow \left(D^2\times I\right)\times I, $$ where $\sqcup_{i=1}^l [0,1]_i$ is the disjoint union of $l$ copies of the unit interval $[0,1]$, such that $f\left( (\sqcup_{i=1}^l [0,1]_i)\times \{ 0 \} \right)=L\times \{ 0 \}$ and $f\left( (\sqcup_{i=1}^l [0,1]_i)\times \{ 1 \} \right)=L'\times \{ 1 \}$, and such that $f\left(\partial(\sqcup_{i=1}^l [0,1]_i)\times I\right)=(\partial L) \times I$.
It is well known that Milnor numbers, hence Milnor maps, are not only isotopy invariants, but also concordance invariants: this is for example shown by Casson in \cite{casson}, although it is already implicit in Stallings' paper \cite{stallings}. \subsection{Link-homotopy and the lower central series of the pure braid group}\label{sec:pure} Recall that link-homotopy is the equivalence relation on knotted objects generated by isotopies and self-crossing changes. Using the properties of the Magnus expansion, Milnor proved that, if $I$ is a sequence \emph{with no repeated index}, then the corresponding invariant $\mu_I$ is a link-homotopy invariant; see Theorem 8 of \cite{Milnor2}. Habegger and Lin subsequently proved that string links are classified up to link-homotopy by Milnor invariants with no repeated indices \cite{HL1}. More precisely, they showed that the set $ \bigcup_{m=2}^l \left\{ \mu_{I}\textrm{ $|$ }I\in \mathcal{I}_m \right\} $ forms a complete set of link-homotopy invariants for string links \cite{HL1,HL2}, where for each $m\in \{2,\ldots,l\}$, $$ \mathcal{I}_m := \left\{\quad j_{\tau(1)}...j_{\tau(m-2)}j_{m-1}j_m \quad \left| \quad \begin{array}{c} 1\le j_1<\cdots<j_{m-2}<j_{m-1}<j_m\le l \\ \tau\in S_{m-2} \end{array}\right. \right\}. $$ In other words, $\mathcal{I}_m$ is the set of all sequences $j_1\ldots j_m$ of $m$ non-repeating integers from $\{1,\ldots,l\}$ such that $j_i<j_{m-1}<j_m$ for all $i\le m-2$. In this subsection, we use this result to give an explicit representative for the link-homotopy class of any string link in terms of the lower central series of the pure braid group. Recall that the pure braid group on $l$ strands $P(l)$ is generated by elements $$ A_{i,j}=\sigma_{j-1}\cdots\sigma_{i+1}\cdot \sigma_{i}^2\cdot \sigma_{i+1}^{-1}\cdots\sigma_{j-1}^{-1}\textrm{, for }1 \le i < j \le l, $$ which may be represented geometrically as the pure braid where the $i$th string overpasses the strings ($i+1$), . . .
, ($j-1$) and $j$, underpasses the $j$th string, then goes back to the $i$th position by overpassing all strings. For convenience, we also define $A_{i,j}$ for $i>j$, by the convention $A_{i,j}:=A_{j,i}$. Given a sequence $J=j_1\ldots j_m$ in $\mathcal{I}_m$, we define the pure braid \begin{equation}\label{braidBJ} B_J^{(l)} = [[\cdots[[A_{j_1,j_2},A_{j_2,j_3}],A_{j_3,j_4}],\cdots] , A_{j_{m-1},j_m}], \end{equation} which lies in the $(m-1)$th term $\Gamma_{m-1} P(l)$ of the lower central series. We simply write $B_J=B_J^{(l)}$ when there is no risk of confusion. The pure braids $B_{J}$ ($J\in \mathcal{I}_m$) can be used to construct an explicit representative of the link-homotopy class of any string link as follows. \begin{lemma}\label{lem:braidlh} Any $l$-component string link $L$ is link-homotopic to $b^L_1\cdots b^L_{l-1}$, where \begin{equation}\label{eq:braid} b^L_i= \prod_{J\in \mathcal{I}_{i+1}} (B_{J})^{\mu_{J}(b^L_i) }\textrm{, where } \mu_{J}(b^L_i)=\left\{\begin{array}{ll} \mu_{J}(L)&\textrm{if $i=1$},\\ & \\ \mu_{J}(L)-\mu_{J}(b^L_1\cdots b^L_{i-1})& \textrm{if $i\geq 2$}. \end{array}\right. \end{equation} \end{lemma} \begin{remark} This lemma is to be compared with \cite[Thm. 4.3]{yasuhara} and \cite[Thm. 4.1]{MY}, where similar results are given in terms of tree claspers -- see Section \ref{8}. \end{remark} \begin{proof} In view of the link-homotopy classification result of Habegger and Lin recalled above, the lemma simply follows from a computation of Milnor invariants of the pure braids $B_{J}$ ($J\in \mathcal{I}_m$). Specifically, using the additivity property of Milnor string link invariants (see e.g. \cite[Lem. 3.3]{MYpjm}), it suffices to show that, for any $m$ and any two sequences $J$ and $J'$ in $\mathcal{I}_m$, we have \begin{equation}\label{eq:muVJ} \mu_{J'}(B_{J})= \left\{\begin{array}{ll} 1&\text{ if $J=J'$,}\\ 0&\text{ otherwise. } \end{array} \right. \end{equation} (See \cite[Rem. 4.2]{MY}.)
Fixing a sequence $J=j_1\ldots j_m$ in $\mathcal{I}_m$, set $$ B_k = [[\cdots[[A_{j_1,j_2},A_{j_2,j_3}],A_{j_3,j_4}],\cdots] , A_{j_{k-1},j_k}]\in \Gamma_{k-1} P(l), $$ for all $k=2,\ldots, m$. (In particular, $B_2=A_{j_1,j_2}$, while $B_{m}=B_J$.) Using the skein formula for Milnor invariants due to Polyak \cite{Polyak}, one can easily check that, for any $k=3,\ldots, m$, we have $$ \mu_{j_1\ldots j_{k-1} j_k}(B_k)=\mu_{j_1\ldots j_{k-1}}(B_{k-1}).$$ It follows that $\mu_J(B_J)=\mu_{j_1 j_2}(A_{j_1,j_2})=1$, as desired. The fact that $\mu_{J'}(B_J)=0$ for any $J'\neq J$ in $\mathcal{I}_m$ follows easily from similar arguments. \end{proof} The following notation will be useful in the next sections. Let $SL_m^h(l)$ be the set of $l$-component string links with vanishing Milnor link-homotopy invariants of length $\leq m$, that is, $L\in SL_m^h(l)$ if and only if $L$ is link-homotopic to $b^L_mb^L_{m+1}\cdots b^L_{l-1}$ as in Lemma \ref{lem:braidlh}. Note that we have a descending filtration $$ SL(l)\supset SL^h_1(l)\supset SL^h_2(l)\supset \cdots \supset SL^h_k(l)\supset \cdots \supset SL^h_l(l).$$ \section{The universal $sl_2$ invariant}\label{3} In the rest of this paper, we use the following $q$-integer notation. \begin{align*} &\{i\}_q = q^i-1,\quad \{i\}_{q,n} = \{i\}_q\{i-1\}_q\cdots \{i-n+1\}_q,\quad \{n\}_q! = \{n\}_{q,n}, \\ &[i]_q = \{i\}_q/\{1\}_q,\quad [n]_q! = [n]_q[n-1]_q\cdots [1]_q, \quad \begin{bmatrix} i \\ n \end{bmatrix} _q = \{i\}_{q,n}/\{n\}_q!, \end{align*} for $i\in \mathbb{Z}$ and $n\geq 0$. \subsection{Quantized enveloping algebra $U_{\hbar}(sl_2)$} We first recall the definition of the quantized enveloping algebra $U_{\hbar}(sl_2)$, following the notation in \cite{H2, sakie2}.
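Before recalling the algebra itself, here is a small worked instance of the $q$-integer notation above; this illustration is ours and uses only the definitions just given.

```latex
% Worked instance of the q-integer notation, for i = 2 and n = 2:
%   {2}_q = q^2 - 1,   {2}_{q,2} = {2}_q {1}_q,   {2}_q! = {2}_{q,2}.
\{2\}_q = q^2-1, \qquad
[2]_q = \frac{\{2\}_q}{\{1\}_q} = \frac{q^2-1}{q-1} = q+1, \qquad
[2]_q! = [2]_q[1]_q = q+1, \qquad
\begin{bmatrix} 2 \\ 2 \end{bmatrix}_q = \frac{\{2\}_{q,2}}{\{2\}_q!} = 1.
```

In particular, $[n]_q=(q^n-1)/(q-1)$ specializes to the ordinary integer $n$ as $q\to 1$.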
We denote by $U_{\hbar}=U_{\hbar}(sl_2)$ the $\hbar$-adically complete $\mathbb{Q}[[\hbar]]$-algebra, topologically generated by $H, E,$ and $F$, defined by the relations \begin{align*} HE-EH=2E, \quad HF-FH=-2F, \quad EF-FE=\frac{K-K^{-1}}{q^{1/2}-q^{-1/2}}, \end{align*} where we set \begin{align*} q=\exp {\hbar},\quad K=q^{H/2}=\exp\frac{{\hbar}H}{2}. \end{align*} \noindent We equip $U_{\hbar}$ with a topological $\mathbb{Z}$-graded algebra structure with $\deg F=-1$, $\deg E=1$, and $\deg H=0$. There is a unique complete ribbon Hopf algebra structure on $U_{\hbar}$ such that \begin{align*} \Delta_{\hbar} (H)&=H\otimes 1+1\otimes H, \quad \varepsilon_{\hbar} (H)=0, \quad S_{\hbar} (H)=-H, \\ \Delta_{\hbar} (E)&=E\otimes 1+K\otimes E, \quad \varepsilon_{\hbar} (E)=0, \quad S_{\hbar} (E)=-K^{-1}E, \\ \Delta_{\hbar} (F)&=F\otimes K^{-1}+1\otimes F, \quad \varepsilon_{\hbar} (F)=0, \quad S_{\hbar} (F)=-FK. \end{align*} The \emph{universal $R$-matrix} and its inverse are given by \begin{align}\ R&=D\bigg(\sum_{n\geq 0}q^{\frac{1}{2}n(n-1)}\frac{(q-1)^n}{[n]_q!}F^n\otimes E^n\bigg),\label{rm1} \\ R^{-1}&=\bigg(\sum_{n\geq 0}(-1)^nq^{-\frac{n}{2}}\frac{(q-1)^n}{[n]_q!}F^n\otimes E^n\bigg)D^{-1},\label{rm2} \end{align} where $D=q^{\frac{1}{4}H\otimes H} =\exp \big(\frac{\hbar}{4}H\otimes H\big)\in U_{\hbar}^{\hat {\otimes }2}$. For simplicity, we set $R^{\pm 1}= \sum_{n\geq 0} \alpha^{\pm}_n \otimes \beta^{\pm}_n$ with \begin{align*} \alpha _n \otimes \beta _n(&=\alpha^+ _n \otimes \beta^+ _n)=D\Big(q^{\frac{1}{2}n(n-1)}\frac{(q-1)^n}{[n]_q!}F^n\otimes E^n\Big), \\ \alpha _n^- \otimes \beta _n^-&=D^{-1}\Big((-1)^nq^{-\frac{n}{2}}\frac{(q-1)^n}{[n]_q!}F^nK^n\otimes K^{-n}E^n\Big). \end{align*} Note that the right-hand sides above are sums of infinitely many tensors of the form $x\otimes y$ with $x,y\in U_{\hbar}$, which we denote by $\alpha ^{\pm}_n \otimes \beta^{\pm} _n$ formally. 
\subsection{Universal $sl_2$ invariant for string links}\label{univinv} In this section, we recall the definition of the universal $sl_2$ invariant of string links. For an $n$-component string link $L=L_1\cup \cdots \cup L_n$, we define the universal $sl_2 $ invariant $J(L)\in U_{\hbar}^{\hat {\otimes }n}$ in three steps as follows. We follow the notation in \cite{sakie2}. \textbf{Step 1. Choose a diagram.} We choose a diagram $\tilde L$ of $L$ which is obtained by pasting, horizontally and vertically, copies of the fundamental tangles depicted in Figure \ref{fig:fundamental}. We call such a diagram \textit{decomposable}. \begin{figure}[!h] \centering \includegraphics[width=9cm,clip]{fundarmantal.eps} \caption{Fundamental tangles, where the orientations of the strands are arbitrary. }\label{fig:fundamental} \end{figure} \textbf{Step 2. Attach labels.} We attach labels on the copies of the fundamental tangles in the diagram, following the rule described in Figure \ref{fig:cross}, where $S_{\hbar}'$ should be replaced with $S_{\hbar}$ if the string is oriented upward, and with the identity otherwise. We do not attach any label to the other copies of fundamental tangles, i.e., to a straight strand and to a local maximum or minimum oriented from right to left. See Figure \ref{fig:T_h} for an (elementary) example. \begin{figure}[!h] \centering \begin{picture}(300,70) \put(50,25){\includegraphics[width=7.5cm,clip]{cross2.eps}} \put(40,70){$(S_{\hbar}'\otimes S_{\hbar}')(R)$} \put(100,10){$(S_{\hbar}'\otimes S_{\hbar}')(R^{-1})$} \put(188,16){$K$} \put(243,60){$K^{-1}$} \end{picture} \caption{How to place labels on the fundamental tangles.}\label{fig:cross} \end{figure} \textbf{Step 3. Read the labels.} We define the $i$th tensorand of $J({L})$ as the product of the labels on the $i$th component of $\tilde L$, where the labels are read off along $L_i$ reversing the orientation, and written from left to right. 
Here, the labels on the crossings are read as in Figure \ref{fig:cross2}. \begin{figure}[!h] \centering \begin{picture}(300,120) \put(50,20){\includegraphics[width=5.8cm,clip]{cross.eps}} \put(42,120){$(S_{\hbar}'\otimes S_{\hbar}')(R)$} \put(100,90){$=$} \put(120,90){$\sum_{n\geq 0}$} \put(150,95){$S_{\hbar}'(\alpha_n)$} \put(208,95){$S_{\hbar}'(\beta_n)$} \put(42,10){$(S_{\hbar}'\otimes S_{\hbar}')(R^{-1})$} \put(100,40){$=$} \put(120,30){$\sum_{m\geq 0}$} \put(150,35){$S_{\hbar}'(\alpha^{-}_m)$} \put(208,35){$S_{\hbar}'(\beta^{-}_m)$} \end{picture} \caption{How to read the labels on crossings.}\label{fig:cross2} \end{figure} As is well known \cite{O}, $J(L)$ does not depend on the choice of the diagram, and thus defines an isotopy invariant of string links. \begin{figure}[!h] \centering \begin{picture}(300,100) \put(70,20){\includegraphics[width=4.5cm,clip]{T_h.eps}} \put(35,40){$\tilde A \ =$} \put(200,60){$R$} \put(200,30){$R$} \put(60,0){(a)} \put(180,0){(b)} \end{picture} \caption{(a) A diagram $\tilde{A}$ of the string link $A$. (b) The labels put on $\tilde{A}$. }\label{fig:T_h} \end{figure} For example, for the string link $A$ shown in Figure \ref{fig:T_h}, we have \begin{align} \begin{split}\label{univc} J({A})&=\sum_{m,n\geq 0} \beta_m\alpha_n\otimes \alpha_m\beta_n \\ &=D\bigg(\sum_{m\geq 0}q^{\frac{1}{2}m(m-1)}\frac{(q-1)^m}{[m]_q!}E^m\otimes F^m\bigg) D\bigg(\sum_{n\geq 0}q^{\frac{1}{2}n(n-1)}\frac{(q-1)^n}{[n]_q!}F^n\otimes E^n\bigg) \\&=D^2\bigg(\sum_{m,n\geq 0}q^{\frac{1}{2}m(m-1)+{\frac{1}{2}n(n-1)+m^2}}\frac{(q-1)^{m+n}}{[m]_q![n]_q!} E^mK^mF^n\otimes F^mK^{-m}E^n\bigg), \end{split} \end{align} where the last identity follows from \begin{align*} &D(1\otimes x)=(K^{|x|}\otimes x)D, \end{align*} for $x\in U_{\hbar}$ a homogeneous element of degree $|x|$. Note that \begin{align*} J({A})\equiv1+c\hbar \oh{2}, \end{align*} where $c$ denotes the symmetric element \begin{equation}\label{cs} c=\frac{1}{2}H\otimes H+F\otimes E +E\otimes F.
\end{equation} \subsection{Universal $sl_2$ invariant and linking number}\label{sec:Jlk} We now recall how the linking number and framing can be simply derived from the ``coefficient'' of $\hbar$ in the universal $sl_2$ invariant. Before giving a precise statement (Proposition \ref{sc}), we need to introduce some extra notation, which will be used throughout the paper. For $1\leq i\leq l$, and for $x\in U_{\hbar}$, we define $x^{(l)}_i\in U_{\hbar}^{\hat \otimes l }$ by $$x^{(l)}_i=1\otimes \cdots \otimes x \otimes \cdots \otimes 1, $$ where $x$ is at the $i$th position. More generally, for $1\leq j_1,\ldots, j_m \leq l$ and $y=\sum y_1\otimes \cdots \otimes y_m\in U_{\hbar}^{\hat \otimes m}$, we define $y^{(l)}_{j_1\ldots j_m} \in U_{\hbar}^{\hat \otimes l }$ by \begin{align*} y^{(l)}_{j_1\ldots j_m}=\sum (y_1)^{(l)}_{j_1}\cdots (y_m)^{(l)}_{j_m}. \end{align*} For $x\in U_{\hbar}^{\hat \otimes l}$ such that $x\equiv1\oh{}$, set \begin{align*} \coef(x)=\frac{x-1}{\hbar} \in U_{\hbar}^{\hat \otimes l} /\hbar U_{\hbar}^{\hat \otimes l}, \end{align*} i.e., we have $x\equiv 1+\coef(x)\hbar \oh{2}$. Note that $J(L)\equiv1 \oh{}$ for any string link $L$, by definition. \begin{proposition}\label{sc} For $L\in SL(l)$ with linking matrix $(m_{ij})_{1\leq i,j\leq l}$, we have \begin{align*} \coef(J(L))&=\frac{1}{2} \sum_{1\leq i,j \leq l} m_{ij}c^{(l)}_{ij} \\ &=\sum_{1\leq i<j \leq l} m_{ij}c^{(l)}_{ij}+\frac{1}{2} \sum_{1\leq i \leq l} m_{ii}c^{(l)}_{ii}. \end{align*} \end{proposition} Our main result in this paper generalizes Proposition \ref{sc} with respect to Milnor invariants. In the rest of this section, we prove Proposition \ref{sc} in an elementary way. \begin{proof}[Proof of Proposition \ref{sc}] Let $L\in SL(l)$, and choose a decomposable diagram $\tilde L=\tilde L_1\cup \cdots \cup \tilde L_l$ such that each crossing has both strands oriented downwards.
Denote by $C(\tilde L)$ the set of crossings, and by $M(\tilde L)$ the set of local maxima and minima oriented from left to right. For $a\in C(\tilde L) \cup M(\tilde L)$, let $J(a)\in U_{\hbar}^{\hat \otimes l}$ be the element obtained by reading only the labels on $a$, as indicated in Step 2 of the definition of $J(L)$. Note that $J(a)\equiv1 \oh{}$ for each $a\in C(\tilde L) \cup M(\tilde L)$, and \begin{equation}\label{scl} \coef(J(L))=\sum _{a\in C(\tilde L)\cup M(\tilde L)}\coef(J(a)). \end{equation} Now, for $1\leq i< j\leq l$, let $C_{i,j}(\tilde L)\subset C(\tilde L)$ be the subset of crossings between $\tilde L_i$ and $\tilde L_j$. Set $R_{21}=R_{21}^{(2)}$. Since we have \begin{align*} \coef(R^{\varepsilon})+\coef(R^{\varepsilon}_{21})=\varepsilon c, \end{align*} for $\varepsilon=\pm 1$, it follows that \begin{align}\label{cij} \sum _{a\in C_{i, j}(\tilde L)}\coef(J(a)) =m_{ij}c^{(l)}_{ij}. \end{align} Similarly, for $1\leq i\leq l$, let $C_{i}(\tilde L)\subset C(\tilde L)$ be the subset of self-crossings of $\tilde L_i$, and $M_{i}(\tilde L)\subset M(\tilde L)$ the subset of local maxima and minima oriented from left to right in $\tilde L_i$. Let us consider $J(a)$ for $a\in C_i(\tilde L)\cup M_i(\tilde L)$, assuming for simplicity that $l=i=1$. Notice that each crossing in $C_{1}(\tilde L)$ is either left-connected or right-connected, where a downward oriented crossing is called left (resp. right)-connected if its left (resp. right) outgoing strand is connected to the left (resp. right) ingoing strand in $\tilde L$. For a left (resp. right)-connected positive crossing $a\in C_1(\tilde L)$, we have $J(a)=R^{(1)}_{11}$ (resp. $J(a)=(R_{21})^{(1)}_{11}$), while for a left (resp. right)-connected negative crossing $b\in C_1(\tilde L)$, we have $J(b)=(R_{21}^{-1})^{(1)}_{11}$ (resp. $J(b)=(R^{-1})^{(1)}_{11}$). Recall that we put $K$ (resp. $K^{-1}$) on a local maximum (resp. minimum) oriented left to right.
For these labels we have \begin{align*} \coef(R^{(1)}_{11})&=\frac{c^{(1)}_{11} - H}{2}, \quad \coef((R^{-1})^{(1)}_{11})=\frac{-c_{11}^{(1)} + H}{2}, \\ \coef((R_{21})^{(1)}_{11})&=\frac{c_{11}^{(1)} + H}{2}, \quad \coef((R_{21}^{-1})^{(1)}_{11})=\frac{-c_{11}^{(1)} - H}{2}, \\ \coef(K)&=\frac{ H}{2}, \quad \coef(K^{-1})=\frac{-H}{2}. \end{align*} We consider the sum of these coefficients over all labels on $C_1(\tilde L)\cup M_1(\tilde L)$. Actually, if $l$ (resp. $r$) denotes the number of left-connected (resp. right-connected) crossings in $C_{1}(\tilde L)$, and if $M$ (resp. $m$) denotes the number of local maxima (resp. minima) in $M_{1}(\tilde L)$, then it is not difficult to check that \begin{equation}\label{wutang} l - r - M + m = 0. \end{equation} \noindent (By \cite[Thm. XII.2.2]{Kassel}, and since $l- r - M + m$ is clearly invariant under a crossing change, it suffices to prove that this quantity is invariant under each of the moves of \cite[Fig. 2.2--2.9]{Kassel}: this is easily checked by a case-by-case study of all possible types of crossings involved in the moves). This implies that \begin{align*} \sum _{a\in C_{1}(\tilde L)}\coef(J(a))+\sum_{b\in M_1(\tilde L)} \coef(J(b))= \frac{1}{2}m_{11}c^{(1)}_{11}. \end{align*} This, together with Equation (\ref{cij}) and Equation (\ref{scl}), implies the desired formula. \end{proof} \section{Diagrammatic approach}\label{4} \subsection{Jacobi diagrams} \label{sec:jacobi} We mostly follow the notation in \cite{HMa}. A \emph{Jacobi diagram} is a finite uni-trivalent graph, such that each trivalent vertex is equipped with a cyclic ordering of its three incident half-edges. In this paper we require that each connected component of a Jacobi diagram has at least one univalent vertex. The \emph{degree} of a Jacobi diagram is half its number of vertices. Let $X$ be a compact oriented $1$-manifold. A \emph{Jacobi diagram on $X$} is a Jacobi diagram whose univalent vertices are disjointly embedded in $X$.
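For instance, with the above convention (our own illustration):

```latex
% Degree = half the total number of vertices:
% a single chord has two univalent vertices and no trivalent vertex;
% a Y-shaped diagram has three univalent vertices and one trivalent vertex.
\deg(\text{chord}) = \tfrac{2}{2} = 1,
\qquad
\deg(Y) = \tfrac{3+1}{2} = 2.
% More generally, a tree with u univalent vertices has u-2 trivalent
% vertices, hence degree (u + (u-2))/2 = u - 1.
```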
Let $\mathcal{A}(X)$ denote the $\mathbb{Q}$-vector space spanned by Jacobi diagrams on $X$, subject to the AS, IHX and STU relations depicted in Figure \ref{relations}. Here, as usual \cite{BN}, we use bold lines to depict the $1$-manifold $X$ and dashed ones to depict the Jacobi diagram, and the cyclic ordering at a vertex is given by the counter-clockwise orientation in the plane of the figure. \begin{figure}[!h] \includegraphics{relations.eps} \caption{The relations AS, IHX and STU. } \label{relations} \end{figure} We denote by $\mathcal{A}_k(X)$ the subspace spanned by Jacobi diagrams of degree $k$. Abusing notation, we still denote by $\mathcal{A}(X)$ its completion with respect to the degree, i.e., $\mathcal{A}(X)=\prod_{k\geq 0} \mathcal{A}_k(X)$. In this paper we shall restrict our attention to the case $X=\coprod_{j=1}^l I_j$, where each $I_j$ is a copy of the interval $I=[0,1]$. For simplicity, set $\mathcal{A}(l)=\mathcal{A}(\coprod_{j=1}^l I_j)$. Note that $\mathcal{A}(l)$ has an algebra structure with multiplication defined by stacking. We denote by $\mathcal{B}(l)$ the completed $\mathbb{Q}$-vector space spanned by Jacobi diagrams whose univalent vertices are labelled by elements of the set $\{1,\ldots,l\}$, subject to the AS and IHX relations. Here completion is given by the degree as before. Note that $\mathcal{B}(l)$ has an algebra structure with multiplication defined by disjoint union. There is a natural graded $\mathbb{Q}$-linear isomorphism \cite{BN} \begin{equation*} \chi : \mathcal{B}(l)\to \mathcal{A}(l), \end{equation*} which maps a diagram to the average of all possible combinatorially distinct ways of attaching its $i$-colored vertices to the $i$th interval, for $i=1,\ldots , l$. Note that $\chi$ is not an algebra homomorphism. In what follows, we focus only on the space $\mathcal{A}^t(l)$, which is the graded quotient of $\mathcal{A}(l)$ by the subspace spanned by Jacobi diagrams having a non-simply connected component.
It follows that $\mathcal{B}^t(l)=\chi^{-1}(\mathcal{A}^t(l))$ is the commutative polynomial algebra on the subspace $\mathcal{C}^t(l)$ spanned by \emph{trees}, that is, by connected and simply connected Jacobi diagrams. Let us also denote by $\mathcal{A}^h(l)$ the graded quotient of $\mathcal{A}^t(l)$ by the space spanned by Jacobi diagrams containing a chord with both endpoints on the same component of $\coprod_{j=1}^l I_j$. Similarly, denote $\mathcal{B}^h(l):=\chi^{-1}(\mathcal{A}^h(l))$. Then $\mathcal{B}^h(l)$ is the commutative polynomial algebra on the subspace $\mathcal{C}^h(l)$ spanned by trees with distinct labels \cite{BN}. As above, we denote by $\mathcal{C}^t_k(l)$ and $\mathcal{C}^h_k(l)$ the respective subspaces of $\mathcal{C}^t(l)$ and $\mathcal{C}^h(l)$ spanned by Jacobi diagrams of degree $k$. For any sequence $I=(i_1,\ldots, i_{m})$ of integers in $\{1,\ldots, l\}$, let $T^{(l)}_I$ be the tree Jacobi diagram of degree $(m-1)$ labeled by $I$ as shown in Figure \ref{fig:TI}. \begin{figure}[!h] \input{TI.pstex_t} \caption{The tree Jacobi diagram $T^{(l)}_I$ for $I=(i_1,\ldots,i_{m})$. }\label{fig:TI} \end{figure} It is not difficult to see, by the AS and IHX relations, that $\mathcal{C}_{m-1}^t(l)$ is spanned by diagrams $T^{(l)}_I$ indexed by sequences $I=(i_1,\ldots, i_{m})$ of integers in $\{1,\ldots, l\}$, while $\mathcal{C}_{m-1}^h(l)$ is spanned by those with distinct integers. \subsection{Kontsevich integral and Milnor map}\label{sec:HM} Given a tangle $T$, defined as the image of a proper embedding in $D^2\times I$ of a compact, oriented $1$-manifold $X$, the \emph{Kontsevich integral} $Z(T)$ of $T$ lives in the space $\mathcal{A}(X)$ of Jacobi diagrams on $X$ \cite{Ko}. \footnote{More precisely, the boundary of $T$ consists of two linearly ordered sets of boundary points, and each must be endowed with a q-structure, i.e., a consistent collection of parentheses.
} We shall not review the definition of the Kontsevich integral $Z$ here, but refer the reader to \cite{BN,CD,ohtsuki} for surveys. A fundamental property of the Kontsevich integral is its universality, over $\mathbb{Q}$, among finite type (or Vassiliev) invariants and among quantum invariants, in the sense that any such invariant can be recovered from the Kontsevich integral by post-composition with an appropriate map, called \emph{weight system}. Bar-Natan \cite{BN2} and Lin \cite{Lin} proved that Milnor invariants for string links are finite type invariants, and thus can be recovered from the Kontsevich integral. This connection was made completely explicit by Habegger and Masbaum, who showed that Milnor invariants determine and are determined by the so-called tree-part of the Kontsevich integral \cite{HMa}. In order to state this result, we first need the following diagrammatic formulation for the image of the Milnor map defined in Section \ref{sec:milnormap}. Denote by $H$ the abelianization $\F / \Gamma_2 \F$ of the free group $\F$, and denote by $L(H)=\bigoplus_k L_k(H)$ the free $\mathbb{Q}$-Lie algebra on $H$. Note that $L_k(H)$ is isomorphic to $(\Gamma_k \F / \Gamma_{k+1} \F)\otimes \mathbb{Q}$, so that $\mu_{k}$ can be regarded as taking values in $H\otimes L_k(H)$. It turns out that the Milnor map $\mu_{k}$ actually takes values in the kernel $D_k(H)$ of the Lie bracket map $H\otimes L_k(H)\rightarrow L_{k+1}(H)$, and that $D_k(H)$ identifies with the space $\mathcal{C}^t_{k}(l)$, as we now explain. Let $T$ be a tree Jacobi diagram in $\mathcal{C}^t_{k}(l)$. To each univalent vertex $v_0$ of $T$, we associate an element $T_{v_0}$ of $L_{k}(H)$ as follows. For any univalent vertex $v\neq v_0$, label the incident edge of $T$ by $c_v=\alpha_j\in \F$, where $j$ is the label of $v$. 
Next, label all edges of $T$ by recursively assigning the label $[a,b]$ to any edge which meets an $a$-labelled and a $b$-labelled edge at a trivalent vertex (following the cyclic ordering). The last step of this process assigns a label to the edge incident to $v_0$: this final label is the desired element $T_{v_0}$ of $L_{k}(H)$. Using this, we can define a $\mathbb{Q}$-linear isomorphism \begin{equation}\label{isomu} \mathcal{C}^t_{k}(l)\rightarrow D_k(H) \end{equation} by sending a tree $T$ to the sum $\sum_v c_v\otimes T_v$, where the sum ranges over the set of all univalent vertices of $T$. \begin{example}\label{ex:isom} A single chord with vertices labeled $i$ and $j$ is mapped to $\alpha_i\otimes \alpha_j + \alpha_j\otimes \alpha_i$, which is an element of $D_1(H)$ by antisymmetry. (Note in particular that for $i=j$, the corresponding diagram is mapped to $2\,\alpha_i\otimes \alpha_i$.) \\ Similarly, a $Y$-shaped diagram with univalent vertices labeled $i$, $j$ and $k$ (following the cyclic ordering) is mapped to the sum $\alpha_i\otimes [\alpha_j,\alpha_k] + \alpha_j\otimes [\alpha_k,\alpha_i] + \alpha_k\otimes [\alpha_i,\alpha_j]$: clearly, this is an element of $D_2(H)$ by the Jacobi identity. \end{example} In the rest of this paper, we implicitly identify the image of the Milnor map $\mu_{k}$ with $\mathcal{C}^t_k(l)$ via the isomorphism (\ref{isomu}). Now, Habegger-Masbaum's result can be formulated simply as follows. Let $L\in SL_m(l)$ be an $l$-component string link with vanishing Milnor invariants of length up to $m$. The tree-part of the Kontsevich integral of $L$, which is defined as $Z^t=p^t\circ \chi^{-1}\circ Z$, where $p^t:\mathcal{B}(l)\rightarrow \mathcal{B}^t(l)$ is the natural projection, is then given by \begin{align*} Z^t(L) = 1 + \mu_{m}(L) + \textrm{terms of degree $\ge m+1$}, \end{align*} where $1$ denotes the empty Jacobi diagram.
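The two membership claims in Example \ref{ex:isom} rest on antisymmetry and the Jacobi identity. As an informal sanity check, independent of the paper's argument, one can verify numerically that the cyclic sum produced by the bracket map from a $Y$-shaped diagram vanishes in any matrix Lie algebra:

```python
import numpy as np

def bracket(x, y):
    # Matrix commutator [x, y] = xy - yx.
    return x @ y - y @ x

# The bracket map H (x) L_2(H) -> L_3(H) sends a (x) [b, c] to [a, [b, c]];
# on the image of the Y-shaped diagram this gives the cyclic sum below,
# which vanishes by the Jacobi identity (checked here on random matrices).
rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((3, 3)) for _ in range(3))
jacobi = (bracket(a, bracket(b, c)) + bracket(b, bracket(c, a))
          + bracket(c, bracket(a, b)))
assert np.allclose(jacobi, 0)
```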
In particular, the leading term of $Z^t-1$ does not depend on the choice of q-structure, and lives in the space $\mathcal{C}^t_m(l)$ of degree $m$ tree Jacobi diagrams. In \cite{HMa}, it is also proved that $Z^t$ is the universal finite type concordance invariant over $\mathbb{Q}$, which implies in particular that it determines Milnor invariants. Furthermore, Habegger and Masbaum showed that, for $L\in SL^h_m(l)$, we have \begin{align}\label{equ:HMh} Z^h(L) = 1 + \mu^h_{m}(L) + \textrm{terms of degree $\ge m+1$}, \end{align} where $Z^h$ is the Kontsevich integral $Z$ composed with the projection $\mathcal{B}(l)\rightarrow \mathcal{B}^h(l)$ \cite{HMa}, and where $\mu^h_{m}$ is the link-homotopy reduction of the Milnor map $\mu_{m}$, which is defined as $\mu^h_{m} = p^h\circ \mu_{m}$, with $p^h:\mathcal{C}^t(l)\rightarrow \mathcal{C}^h(l)$ the natural projection. (Note in particular that the leading term of $Z^h-1$ lives in $\mathcal{C}^h_m(l)$.) \subsection{Weight system associated to $sl_2$}\label{sec:weight} Recall that the Lie algebra $sl_2$ is the 3-dimensional Lie algebra over $\mathbb{Q}$ generated by $h, e,$ and $f$ with Lie bracket \begin{align*} [h,e]=2e, \quad [h,f]=-2f, \quad [e,f]=h. \end{align*} Let $U=U(sl_2)$ denote the universal enveloping algebra of $sl_2$, and $S=S(sl_2)$ the symmetric algebra of $sl_2$. There is a well-known commutative diagram \cite{wheels} \begin{align*} \begin{CD} \mathcal{A}(l) @>{W}>> U^{\otimes l} [[\hbar]] \\ @AA{ \chi}A @AA {\beta}A \\ \mathcal{B}(l) @>{W}>> S^{\otimes l}[[\hbar]], \end{CD} \end{align*} where $\chi$ is the isomorphism defined in Section \ref{sec:jacobi}, $\beta$ is the $\mathbb{Q}$-linear isomorphism induced by the Poincar\'e-Birkhoff-Witt isomorphism $S \cong U$, sending a monomial $v_1\cdots v_m\in S$ to $\sum_{\sigma \in S(m)} \frac{1}{m!} v_{\sigma(1)}\cdots v_{\sigma(m)}\in U$, and where $W$ is the weight system associated to $sl_2$. 
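The defining relations of $sl_2$ recalled above can be sanity-checked in the standard $2$-dimensional representation; a minimal sketch (the matrices below are the usual ones, not taken from the paper):

```python
import numpy as np

# Standard 2x2 matrices representing the basis h, e, f of sl_2.
h = np.array([[1., 0.], [0., -1.]])
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])

def bracket(x, y):
    # Lie bracket realized as the matrix commutator.
    return x @ y - y @ x

# The defining relations [h,e] = 2e, [h,f] = -2f, [e,f] = h.
assert np.array_equal(bracket(h, e), 2 * e)
assert np.array_equal(bracket(h, f), -2 * f)
assert np.array_equal(bracket(e, f), h)
```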
In this paper we will make use of the map $W$ defined on the space $\mathcal{B}(l)$ of labeled Jacobi diagrams, and more precisely of its restriction to $\mathcal{B}^t(l)$, and thus recall its definition below. Specifically, we first define a map $w_m\co \mathcal{C}^t_m(l) \to S^{\otimes l}$, and then define $W\co \mathcal{B}^t(l) \to S^{\otimes l}[[\hbar]]$ as the $\mathbb{Q}$-algebra homomorphism such that $W(D)=w_m(D)\hbar^m$ for $D\in \mathcal{C}^t_m(l)$. For $m=1$, we simply define $w_1$ by \begin{align} w_1(D_{ij}) = c_{ij}^{(l)}\in S_2^{ \otimes l}, \end{align} where $D_{ij}$ is a single chord with vertices labeled by $i$ and $j$ (possibly $i=j$), and where $c_{ij}^{(l)}$ was defined in Section \ref{sec:Jlk}. Now let $m\geq 2$, and let $D\in \mathcal{C}^t_m(l)$. Set \begin{align}\label{csb} \begin{split} b &= \sum_{\sigma\in \mathfrak{S}(3)} (-1)^{|\sigma |} \sigma (h\otimes e\otimes f) \\ &=h\otimes e \otimes f + e\otimes f\otimes h + f\otimes h\otimes e -h\otimes f\otimes e -f\otimes e\otimes h -e\otimes h \otimes f \in sl_2^{\otimes 3}, \end{split} \end{align} where $\sigma$ acts by permutation of the tensorands. Consider a copy of $b$ for each trivalent vertex of $D$, where each tensorand of $b$ is associated to one of the half-edges incident to the trivalent vertex, following the cyclic ordering. Each internal edge (i.e., each edge between two trivalent vertices) comprises a pair of half-edges, and we contract the two corresponding copies of $sl_2$ using the symmetric bilinear form $$ \langle - , - \rangle : sl_2\otimes sl_2 \rightarrow \mathbb{Q} $$ defined by $\langle a , b \rangle = Tr(ab)$, which is given by $$ \langle h , h \rangle = 2\quad , \quad \langle e , f \rangle =1\quad , \quad \langle h , e \rangle = \langle h , f \rangle = \langle e , e \rangle = \langle f , f \rangle = 0. $$ Fix an arbitrary total order on the set of univalent vertices of $D$; we get in this way an element $\sum x_1\otimes \cdots
\otimes x_{m+1}$ of $sl_2^{\otimes m+1}$, the $i$th tensorand corresponding to the $i$th univalent vertex of $D$. We then define $w_m(D) \in S_{m+1}^{\otimes l}$ by \begin{align}\label{formula} w_m(D)= \sum y_1\otimes \cdots \otimes y_l, \end{align} where $y_j$ is the product of all $x_{i}\in sl_2$ such that the $i$th vertex is labelled by $j$. It is known that $w_m$ is well-defined, i.e., that it is invariant under the AS and IHX relations. \subsection{Computing $w_m$ on trees}\label{sec:compute} There is another formulation of $w_m$ for tree Jacobi diagrams, which we will use later. Recall that $ \mathcal{C}^t_m(l)$ is spanned by the trees $T^{(l)}_I$ indexed by sequences $I$ of (possibly repeating) integers in $\{1,\ldots, l\}$, introduced in Section \ref{sec:jacobi}. For convenience, we only give this alternative definition for the trees $T^{(l)}_I$. Recall the elements $c\in sl_2^{\otimes 2}$ and $b\in sl_2^{\otimes 3}$ from (\ref{cs}) and (\ref{csb}), respectively. Let $$s\co sl_2 \to sl_2 ^{\otimes 2}$$ be the $\mathbb{Q}$-linear map defined by \begin{align*} s(a)=(\mathrm{ad}\otimes 1)(a\otimes c)= \frac{1}{2} [a,h]\otimes h+[a,f]\otimes e+[a,e]\otimes f \end{align*} for $a\in sl_2$, where $\mathrm{ad}(x\otimes y)=[x,y]$ for $x, y\in sl_2$. On the basis elements, we have \begin{align*} s(e)&= h\otimes e - e\otimes h, \\ s(h)&=2(e\otimes f - f\otimes e), \\ s(f) & = f\otimes h - h\otimes f. \end{align*} Set $\varsigma_2=c\in sl_2^{\otimes 2}$. For $m\geq 3$, set \begin{align}\label{varsigma} \varsigma_m=(1^{\otimes m-2}\otimes s)(1^{\otimes m-3} \otimes s)\cdots (1\otimes s)(c) \in sl_2^{\otimes m}. \end{align} \noindent For example, one can easily check that $\varsigma_3=b$. \begin{proposition}\label{lem:Xm} For $m\geq 1$, we have \begin{align}\label{sch} w_m(T^{(m+1)}_{(1,\ldots, m+1)})= \varsigma_{m+1}. \end{align}\end{proposition} \begin{proof} This is easily shown by induction on $m\geq 1$. For $m=1$, we have $w_1 (T^{(2)}_{(1,2)})=c=\varsigma_{2}$.
Now let $m\geq 2$, and let $X_h, X_e,X_f\in sl_2 ^{\otimes m-1}$ be such that \begin{align*} \varsigma_{m} &= X_h\otimes h +X_e\otimes e+X_f\otimes f \\ &=w_{m-1}(T^{(m)}_{(1,\ldots, m)}). \end{align*} Then we have \begin{align*} \varsigma_{m+1}&=(1^{\otimes m-1}\otimes s)(\varsigma_{m} ) \\ &= (1^{\otimes m-1}\otimes s)(X_h\otimes h +X_e\otimes e+X_f\otimes f) \\ &=X_h\otimes 2 (e\otimes f-f\otimes e ) +X_e\otimes (h\otimes e -e\otimes h)+X_f\otimes (f\otimes h-h\otimes f) \\ &=X_h\otimes (\sum \langle h, b_1\rangle b_2\otimes b_3) +X_e\otimes (\sum \langle e, b'_1\rangle b'_2\otimes b'_3)+X_f\otimes (\sum \langle f, b''_1\rangle b''_2\otimes b''_3) \\ &=w_m(T^{(m+1)}_{(1,\ldots, m+1)}), \end{align*} where $\sum b_1\otimes b_2\otimes b_3=\sum b'_1\otimes b'_2\otimes b'_3=\sum b''_1\otimes b''_2\otimes b''_3=b$. Hence we have the assertion. \end{proof} For an arbitrary sequence $I=(i_1,\ldots, i_{m+1})$ of indices in $\{1,\ldots, l\}$, set \begin{align*} \varsigma^{(l)}_{I}=(\varsigma_{m+1})^{(l)}_{i_1,\ldots, i_{m+1}}, \end{align*} that is, if we write formally $\varsigma_{m+1}=\sum x_1\otimes \cdots \otimes x_{m+1}$, the $j$th tensorand of $\varsigma^{(l)}_{I}$ is the product of all $x_{p}\in sl_2$ such that $i_p=j$. Then by Proposition \ref{lem:Xm} and the definition of $w_m$, it immediately follows that \begin{align}\label{formula2} w_m(T^{(l)}_I)= \varsigma^{(l)}_{I}\in S_{m+1}^{\otimes l}. \end{align} \section{Milnor map and the universal $sl_2$ invariant}\label{5} In this section we give the main results of this paper, which relate Milnor invariants to the universal $sl_2$ invariant via the $sl_2$ weight system $W$. \subsection{The quantized enveloping algebra $U_{\hbar}$ and formal power series $S[[\hbar]]$ over the symmetric algebra}\label{5.1} The symmetric algebra $S$ of $sl_2$ has a graded structure $S=\oplus_{m\geq 0} S_m$, where $S_m$ is the $\mathbb{Q}$-subspace spanned by words of length $m$.
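The superscript operation $x \mapsto x^{(l)}_{i_1,\ldots,i_{m+1}}$ used above just sorts the tensorands into the components prescribed by $I$, multiplying those that land in the same component. The following sketch (with hypothetical helper names, and tensors encoded as dictionaries over words in the basis $\{h,e,f\}$) illustrates this on a two-tensor; the explicit form $c=\frac{1}{2}h\otimes h+e\otimes f+f\otimes e$ is an assumption here, since (\ref{cs}) lies outside this section, but it is consistent with the basis formulas for $s$ above:

```python
from collections import defaultdict

def distribute(t, I, l):
    # Given t in sl_2^{(x) m} (dict: words in {'h','e','f'} -> coefficients)
    # and a sequence I = (i_1, ..., i_m) of indices in {1, ..., l}, form
    # t^{(l)}_I in S^{(x) l}: the j-th tensorand is the product of all
    # tensorands x_p of t with i_p = j.
    out = defaultdict(float)
    for word, coef in t.items():
        slots = [[] for _ in range(l)]
        for p, x in enumerate(word):
            slots[I[p] - 1].append(x)
        # Sorting each slot encodes commutativity of the product in S.
        out[tuple(tuple(sorted(s)) for s in slots)] += coef
    return dict(out)

# c = h(x)h/2 + e(x)f + f(x)e, placed on components 1 and 3 of three strands.
c = {('h', 'h'): 0.5, ('e', 'f'): 1.0, ('f', 'e'): 1.0}
print(distribute(c, (1, 3), 3))
# With a repeated index, both tensorands land in the same component and the
# two mixed terms merge into a single monomial.
print(distribute(c, (1, 1), 3))
```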
Likewise, its $l$-fold tensor product $S^{\otimes l}=\oplus_{m\geq 0} (S^{\otimes l})_m$ is graded, with \begin{align*} (S^{\otimes l})_m=\bigoplus_{\substack{m_1+\cdots+m_l=m \\ m_1,\ldots,m_l\geq 0}} S_{m_1}\otimes \cdots \otimes S_{m_l}. \end{align*} Consider the $\mathbb{Q}$-subspace $\langle sl_2\rangle ^{(l)}_{m}$ of $(S^{\otimes l})_m$ defined by \begin{align*} \langle sl_2\rangle ^{(l)}_{m}=\bigoplus_{\substack{m_1+\cdots+m_l=m \\ 0\leq m_1,\ldots,m_l\leq 1}} S_{m_1}\otimes \cdots \otimes S_{m_l}. \end{align*} Roughly speaking, $\langle sl_2\rangle ^{(l)}_{m}$ is spanned by tensors in $S^{\otimes l}$ with exactly $m$ nontrivial tensorands, each of which is of degree one. For example, the tensor $\varsigma_m$ defined in (\ref{varsigma}) is an element of $\langle sl_2\rangle ^{(m)}_{m}$. By the definition of $W$ given in Sections \ref{sec:weight} and \ref{sec:compute}, we immediately have the following. \begin{lemma}\label{wcc} For $m\geq 1,$ we have $$W(\mathcal{C}^t_m(l))\subset (S^{\otimes l})_{m+1}\hbar^m \quad \textrm{and} \quad W(\mathcal{C}^h_m(l))\subset \langle sl_2\rangle ^{(l)}_{m+1}\hbar^m. $$ \end{lemma} Now, recall that the enveloping algebra $U=U(sl_2)$ of $sl_2$ has a filtered algebra structure \begin{align*} \mathbb{Q}=U_0\subset U_1\subset \cdots \subset U_m \subset \cdots \subset U, \end{align*} where $U_m$ is the $\mathbb{Q}$-subspace spanned by words of length at most $m$, and that the associated graded algebra \begin{align*} \mathrm{gr} U=\bigoplus_{m\geq 0}(U_m/U_{m-1}) \end{align*} is canonically isomorphic to $S$ as a graded algebra. On the other hand, $U$ is also canonically isomorphic to $U_{\hbar}/\hbar U_{\hbar}$ as an algebra. In summary we have the sequence of isomorphisms \begin{align*} U_{\hbar}/\hbar U_{\hbar} \simeq U \simeq \mathrm{gr} U \simeq S, \end{align*} where the $\mathbb{Q}$-linear isomorphism $U \simeq \mathrm{gr} U$ maps the PBW basis element $f^sh^ne^r $ to $\overline{f^s h^ne^r}$.
(Note that this is not an algebra isomorphism.) This induces the sequence of $\mathbb{Q}$-linear isomorphisms \begin{align*} U_{\hbar} \simeq U[[\hbar]] \simeq S[[\hbar]], \end{align*} where $U[[\hbar]]$ and $S[[\hbar]]$ are the $\hbar$-adic completions of $U\otimes_{\mathbb{Q}} \mathbb{Q}[[\hbar]]$ and $S\otimes_{\mathbb{Q}} \mathbb{Q}[[\hbar]]$, respectively. We can extend these isomorphisms to the completed tensor powers \begin{align*} U_{\hbar}^{\hat \otimes l} \simeq U^{\otimes l}[[\hbar]] \simeq S^{\otimes l}[[\hbar]]. \end{align*} In what follows, we denote by $\rho\co U_{\hbar}^{\hat \otimes l}\to S^{\otimes l}[[\hbar]]$ the composition of the above isomorphisms, and identify $U_{\hbar}^{\hat \otimes l}$ and $S^{\otimes l}[[\hbar]]$ as $\mathbb{Q}$-modules via $\rho$. Note that $\rho$ is simply given by $\rho(F^sH^nE^r\hbar^m) = f^s h^ne^r \hbar ^m$. \subsection{Main results}\label{subseq} We can now give the precise statements of our main results. Set \begin{align*} J^t&:=\pi^t\circ J\co SL(l) \to \prod_{m\geq 1}(S^{\otimes l})_{m+1} \hbar^m, \end{align*} where \begin{align*} \pi^t&\co U_{\hbar}^{\hat \otimes l} \to \prod_{m\geq 1}(S^{\otimes l})_{m+1} \hbar^m \end{align*} denotes the projection as $\mathbb{Q}$-modules. The first main result in this paper is as follows. \begin{theorem}\label{sth2} Let $m\geq 1$. If $L\in SL_m(l)$, then we have $$ J^t(L)\equiv (W\circ \mu_{m}) (L) \oh{m+1}. $$ \end{theorem} \begin{example} Let $L$ be a string link with nonzero linking matrix $(m_{ij})_{1\le i,j \le l}$. 
Then the $i$th longitude $l^1_i$ in $\F/\Gamma_{2}\F$, defined in Section \ref{sec:milnormap}, reads $l^1_i = \sum_{j=1}^l m_{ij} \alpha_j$ ($1\le i\le l$), so that the degree $1$ Milnor map of $L$ is given by \begin{align*} \mu_1(L) & = \sum_{i=1}^l \alpha_i\otimes l^1_i\\ & = \sum_{1\le i,j \le l} m_{ij} \alpha_i\otimes \alpha_j\\ & = \sum_{1\le i<j \le l} m_{ij} (\alpha_i\otimes \alpha_j + \alpha_j\otimes \alpha_i)+ \sum_{1\leq i \leq l} m_{ii} (\alpha_i\otimes \alpha_i)\\ & = \sum_{1\le i<j \le l} m_{ij}\, D_{\alpha_i,\alpha_j} + \sum_{1\leq i \leq l} \frac{m_{ii}}{2}\, D_{\alpha_i,\alpha_i}, \end{align*} where $D_{\alpha_i,\alpha_j}$ denotes a single chord with vertices labelled $\alpha_i$ and $\alpha_j$, and where the last equality uses the isomorphism (\ref{isomu}), see Example \ref{ex:isom}. Applying the $sl_2$ weight system $W$ then yields $$ (W\circ \mu_1)(L) = \sum_{1\leq i<j \leq l} m_{ij}c^{(l)}_{ij}+\frac{1}{2} \sum_{1\leq i \leq l} m_{ii}c^{(l)}_{ii}, $$ as predicted by Proposition \ref{sc}. \end{example} Since Milnor maps are concordance invariants, we obtain the following topological property of $J^t$ as an immediate consequence of Theorem \ref{sth2}. \begin{corollary}\label{cor:main} Let $L, L' \in SL_m(l)$ be two concordant string links. Then we have \begin{align*} J^t(L') \equiv J^t(L) \oh{m+1}. \end{align*} In particular, if $L$ is concordant to the trivial string link, then $J^t(L)$ is trivial. \end{corollary} \noindent Theorem \ref{sth2} is proved in Section \ref{7}. The proof relies on the fact that Milnor concordance invariants are related to Milnor link-homotopy invariants via a cabling operation, so that Theorem \ref{sth2h} below is actually used as a tool for proving Theorem \ref{sth2}.
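The tensors $\varsigma^{(l)}_I$ entering the statements below are obtained by iterating the map $s$ of Section \ref{sec:compute}, and such iterations can be checked mechanically. The following sketch (not the paper's machinery: brackets are evaluated in the defining $2\times 2$ representation, and the explicit form $c=\frac{1}{2}h\otimes h+e\otimes f+f\otimes e$ is assumed, consistently with the displayed basis formulas for $s$) verifies the identity $\varsigma_3=b$:

```python
import numpy as np
from collections import defaultdict

# Basis of sl_2 in the defining 2x2 representation.
mats = {'h': np.array([[1., 0.], [0., -1.]]),
        'e': np.array([[0., 1.], [0., 0.]]),
        'f': np.array([[0., 0.], [1., 0.]])}

def bracket(x, y):
    # [x, y] expanded in the basis: a traceless [[a, b], [c, -a]] is a*h + b*e + c*f.
    m = mats[x] @ mats[y] - mats[y] @ mats[x]
    return {'h': m[0, 0], 'e': m[0, 1], 'f': m[1, 0]}

# Tensors are dicts: words in {'h','e','f'} -> coefficients.
c = {('h', 'h'): 0.5, ('e', 'f'): 1.0, ('f', 'e'): 1.0}  # assumed form of c

def apply_s(t):
    # Apply 1^{(m-1)} (x) s to t in sl_2^{(x) m}: the last tensorand a is
    # replaced by (ad (x) 1)(a (x) c) = sum [a, c1] (x) c2.
    out = defaultdict(float)
    for word, coef in t.items():
        for (c1, c2), cc in c.items():
            for x, v in bracket(word[-1], c1).items():
                if v:
                    out[word[:-1] + (x, c2)] += coef * cc * v
    return {k: v for k, v in out.items() if v}

b = {('h', 'e', 'f'): 1.0, ('e', 'f', 'h'): 1.0, ('f', 'h', 'e'): 1.0,
     ('h', 'f', 'e'): -1.0, ('f', 'e', 'h'): -1.0, ('e', 'h', 'f'): -1.0}
assert apply_s(c) == b  # varsigma_3 = (1 (x) s)(c) = b
```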
In order to state the second main result in this paper, set \begin{align*} J^h&:=\pi^h\circ J\co SL(l) \to \bigoplus_{m=1}^{l-1}\langle sl_2 \rangle^{(l)}_{m+1}{\hbar}^m, \end{align*} where $\pi^h\co U_{\hbar}^{\hat \otimes l} \to \bigoplus_{m=1}^{l-1}\langle sl_2 \rangle _{m+1}^{(l)}\hbar^m$ is the projection as $\mathbb{Q}$-modules. We have the following. \begin{theorem}\label{sth2h} Let $m\geq 1$. If $L\in SL^h_m(l)$, then we have $$ J^h(L)\equiv (W\circ \mu^h_{m}) (L) \oh{m+1}, $$ where $\mu^h_{m}$ is the link-homotopy reduction of the Milnor map $\mu_{m}$, defined in Section \ref{sec:HM}. \end{theorem} Theorem \ref{sth2h} is equivalent to the following theorem, formulated in terms of Milnor numbers and the tensors $\varsigma^{(l)}_I$ defined in Section \ref{sec:compute}. \begin{theorem}\label{sth1} For $m\geq 1$, if $L\in SL_m^h(l)$, then we have $$ J^h(L)\equiv \left( \sum_{I\in \mathcal{I}_{m+1}} \mu_I(L)\varsigma^{(l)}_I\right)\hbar^m \oh{m+1}, $$ where the sum runs over the set $\mathcal{I}_{m+1}$ defined in Section 2. \end{theorem} \begin{proof}[Proof of the equivalence of Theorems \ref{sth2h} and \ref{sth1}] We need to prove that \begin{align} \label{equivalence} (w_m \circ \mu^h_{m})(L)= \sum_{I\in \mathcal{I}_{m+1}} \mu_I(L)\,\varsigma^{(l)}_I. \end{align} Recall from Lemma \ref{lem:braidlh} that if $L\in SL_m^h(l)$, then $L$ is link-homotopic to $b_m^Lb_{m+1}^L\cdots b_{l-1}^L$, where the pure braids $b^L_i$ are defined in (\ref{eq:braid}). Actually, it follows directly from Equation (\ref{eq:braid}) that if $L\in SL_m^h(l)$, then we have $\mu^h_{m} (L)=\mu^h_{m} (b_m^L)$.
We thus have that \begin{align*} \mu^h_{m} (L) &=\mu^h_{m} (b_m^L) \\ &=\mu^h_{m} \left(\prod_{I\in \mathcal{I}_{m+1}} (B_{I})^{\mu_{I}(b^L_m)}\right) \\ &=\mu^h_{m} \left(\prod_{I\in \mathcal{I}_{m+1}} (B_{I})^{\mu_{I}(L)}\right) \\ &=\sum_{I\in \mathcal{I}_{m+1}} \mu_{I}(L)\,\mu^h_{m} (B_{I}), \end{align*} where the last equality uses the additivity of the first non-vanishing Milnor string link invariants. The result then follows from (\ref{formula2}) and the fact that $\mu^h_m(B_I)=T_I$ for $I\in \mathcal{I}_{m+1}$, which can be easily checked either by a direct computation or using (\ref{equ:HMh}). \end{proof} We prove Theorem \ref{sth1} in the next section. \section{Proof of Theorem \ref{sth1}: the link-homotopy case}\label{6} We reduce Theorem \ref{sth1} to the following two propositions. The first one shows that the invariant $J^h$ is well-behaved with respect to link-homotopy. \begin{proposition}\label{s241} Let $L, L' \in SL^h_m(l)$ be two link-homotopic string links. Then we have \begin{align*} J^h(L') \equiv J^h(L) \oh{m+1}. \end{align*} In particular, if $L$ is link-homotopic to the trivial string link, then $J^h(L)$ is trivial. \end{proposition} For the second proposition, recall from Section \ref{sec:pure} that for each sequence $J\in \mathcal{I}_{m+1}$ we defined a pure braid $B^{(l)}_J$ which lies in the $m$th term of the lower central series of $P(l)$. \begin{proposition}\label{sp} For any $J\in \mathcal{I}_{m+1}$, we have \begin{align*} J(B^{(l)}_{J})& \equiv 1+\varsigma^{(l)}_{J} {\hbar}^m \oh{m+1}, \end{align*} where $\varsigma^{(l)}_{J}$ was defined in Section \ref{sec:compute}. \end{proposition} \begin{proof}[Proof of Theorem \ref{sth1} assuming Propositions \ref{s241} and \ref{sp}] We first note that, as an immediate consequence of Proposition \ref{sp}, for any $J\in \mathcal{I}_{m+1}$ we have \begin{align}\label{eq:sp} J^h(B^{(l)}_{J})& \equiv \varsigma^{(l)}_{J} {\hbar}^m \oh{m+1}.
\end{align} Now, let $L\in SL_m^h(l)$, for some $m\geq 1$. We have \begin{align*} J^h(L)& \equiv J^h(b^L_m\cdot b^L_{m+1}\cdots b^L_{l-1}) \\ &\equiv J^h(b^L_m) \\ &\equiv J^h \left( \prod_{I\in \mathcal{I}_{m+1}}B_{I}^{\mu_{I}(L)} \right) \\ & \equiv \sum_{I\in \mathcal{I}_{m+1}}\mu_{I}(L)\varsigma^{(l)}_{I} {\hbar}^m \oh{m+1}, \end{align*} where the first equivalence uses Lemma \ref{lem:braidlh} and Proposition \ref{s241}, while the last three follow from the definition of the pure braids $b^L_i$ and from (\ref{eq:sp}). Thus we have the assertion. \end{proof} The proof of Proposition \ref{s241}, which makes use of the theory of claspers, is postponed to Section \ref{8}. Proposition \ref{sp} is proved by a direct computation, as shown below. \begin{proof}[Proof of Proposition \ref{sp}] Set $J=j_1j_2\ldots j_{m+1}$. The result is shown by induction on $m$. For $m=1$, $B^{(l)}_{j_1 j_2}$ is the pure braid $A^{(l)}_{j_1,j_2}$, so by (\ref{univc}) we have \begin{align*} J(B^{(l)}_{j_1 j_2}) &\equiv 1 + c^{(l)}_{j_1, j_2}\hbar \\ &\equiv 1 + \varsigma^{(l)}_{j_1 j_2} \hbar \oh{2}, \end{align*} as desired.
For $m>1$, by the induction hypothesis we have \begin{align*} J(B^{(l)}_{J})&=J([B^{(l)}_{j_1\ldots j_m},A^{(l)}_{j_m,j_{m+1}}]) \\ & = [J(B^{(l)}_{j_1\ldots j_m}), J(A^{(l)}_{j_m,j_{m+1}})] \\ &\in [1+\varsigma_{j_1\ldots j_m}^{(l)}{\hbar}^{m-1} + \hbar^{m}U_{\hbar}^{\hat\otimes l}, 1+c_{j_m, j_{m+1}}^{(l)}\hbar +\hbar^{2}U_{\hbar}^{\hat\otimes l}] \\ &\subset 1+[\varsigma_{j_1\ldots j_m}^{(l)},c_{j_m, j_{m+1}}^{(l)}]{\hbar}^{m} + \hbar^{m+1}U_{\hbar}^{\hat \otimes l}, \end{align*} and on the other hand we have \begin{align*} [\varsigma_{j_1\ldots j_m}^{(l)},c_{j_m ,j_{m+1}}^{(l)}]&=\left([\varsigma_m\otimes 1,c^{(m+1)}_{m, {m+1}}]\right)^{(l)}_{j_1\ldots j_{m+1}} \\ &=\left((1^{\otimes m-1}\otimes \mathrm{ad} \otimes 1)(\varsigma_{m}\otimes c)\right)^{(l)}_{j_1\ldots j_{m+1}} \\ &=\left((1^{\otimes m-1}\otimes s)(\varsigma_{m})\right)^{(l)}_{j_1\ldots j_{m+1}} \\ &=(\varsigma_{m+1})^{(l)}_{j_1\ldots j_{m+1}} \\ &=\varsigma_{J}^{(l)}. \end{align*} This completes the proof. \end{proof} \section{Proof of Theorem \ref{sth2}: the general case} \label{7} In this section, we show how to deduce Theorem \ref{sth2} from Theorem \ref{sth2h}. First, let us set some notation for the various projection maps that will be used throughout this section. For $i,j\ge 1$, let $$ \pi^t_{i,j} \co S^{ \otimes l}[[\hbar]]\to (S^{ \otimes l})_i \hbar^j $$ denote the projection, where $(S^{ \otimes l})_i$ was defined in Section \ref{5.1}, and set also $ \pi^t_{i}:=\pi^t_{i+1,i}$, so that $\pi^t=\prod_{i\ge 1} \pi^t_{i}$. Likewise, let $$ \pi^h_{i,j} \co S^{ \otimes l}[[\hbar]]\to \langle sl_2 \rangle^{(l)}_{i} \hbar^j, $$ and set $ \pi^h_{i}:=\pi^h_{i+1,i}$. Next, recall that there are two (completed) coalgebra structures on $S[[\hbar]]$, as follows. The first one is defined by setting $\bar \Delta (x)=x\otimes 1+1\otimes x$ and $\varepsilon (x)=0$ for $x\in sl_2$, and extending $\bar\Delta$ and $\varepsilon$ as algebra morphisms.
The second one is induced by the coalgebra structure of $U_{\hbar}$ defined in Section \ref{3}, via the $\mathbb{Q}$-module isomorphism $\rho \co U_{\hbar} \to S[[\hbar]]$ seen in Section \ref{5.1}. For $\Delta=\Delta _{\hbar}, \bar \Delta$ and $p\geq 0$, define $$\Delta^{[p]} \co S[[\hbar]]\to S^{\otimes p}[[\hbar]]$$ by $\Delta^{[0]}=\varepsilon$, $\Delta^{[1]}=\id$, $\Delta^{[2]}=\Delta,$ and $\Delta^{[p]}= (\Delta \otimes 1^{\otimes p-2})\circ \Delta^{[p-1]}$ for $p\geq 3$. Abusing notation, for $l\geq 0,$ we write $\Delta^{(p)}:=(\Delta^{[p]})^{\otimes l}\co S^{ \otimes l}[[\hbar]]\to S^{\otimes pl}[[\hbar]]$. Since we have $\Delta _{\hbar}(y)\equiv\bar \Delta(y) \oh{}$ for any $y\in S$, the restriction of $\pi_{i,j}^h\circ \bar \Delta^{(p)}$ to $(S^{\otimes l})_{i}\hbar^j$ is equal to that of $\pi_{i,j}^h\circ \Delta_{\hbar}^{(p)}$ for any $1\le i,j < p$, that is, we have \begin{align}\label{eq:delta} \pi_{i,j}^h\circ \bar \Delta^{(p)}|_{(S^{\otimes l})_{i}\hbar^j} =\pi_{i,j}^h\circ \Delta_{\hbar}^{(p)}|_{(S^{\otimes l})_{i}\hbar^j}. \end{align} Actually, the injectivity of these maps is one of the key points in this section. \begin{lemma}\label{sl3} For any $1\le i,j < p$, the restriction to $(S^{\otimes l})_{i}\hbar^j$ of the $\mathbb{Q}$-linear map $\pi_{i,j}^h \circ \bar \Delta^{(p)}$ is injective. \end{lemma} \begin{proof} This simply follows from the fact that the map $\nabla^{(p)}\circ \pi_{i,j}^h \circ \bar\Delta^{(p)}$ is a scalar map on $(S^{\otimes l})_{i}\hbar^j$, where $\nabla^{(p)}\co U_{\hbar}^{\hat \otimes pl} \to U_{\hbar}^{\hat \otimes l}$ denotes the tensor power of $p$-fold multiplications, i.e., the map sending $x_1\otimes \cdots \otimes x_{pl}\in U_{\hbar}^{\hat \otimes pl} $ to $x_1\cdots x_p\otimes \cdots \otimes x_{p(l-1)+1}\cdots x_{pl}\in U_{\hbar}^{\hat \otimes l}$.
\end{proof} Now, for $p\geq 1$, let $D^{(p)}: SL(l)\rightarrow SL(pl)$ be the cabling map, which sends a string link $L\in SL(l)$ to the string link $D^{(p)}(L)\in SL(pl)$ obtained by replacing each component with $p$ parallel copies. Recall from \cite{HMa} that, for $m\ge 1$ and $p>m$, we have that $L\in SL_m(l)$ if and only if $D^{(p)}(L)\in SL^h_m(pl)$. We have the following. \begin{lemma}\label{scom} For $1\le m < p$, the following diagram commutes \begin{align*} \xymatrixcolsep{5pc} \xymatrix{ SL_m(l)\ar[d]_{D^{(p)}} \ar@{->}[r]^{ W\circ \mu_{m}} & (S^{\otimes l})_{m+1} \hbar^m\ar[d]^-{\pi^h_m \circ \bar \Delta^{(p)}} \\ SL^h_m(pl) \ar@{->}[r]_{W \circ \mu^h_{m}} & \langle sl_2 \rangle^{(pl)}_{m+1}\hbar^m. } \end{align*} \end{lemma} \begin{proof} Denote by $D ^{(p)}: \mathcal{C}^t_m(l)\rightarrow \mathcal{C}^t_m(pl)$ the map defined by sending a tree Jacobi diagram $\xi \in \mathcal{C}^t_m(l)$ to the sum of all diagrams obtained from $\xi$ by replacing each label $i\in \{1,\ldots, l\}$ by one of $\{(i-1)p+1, (i-1)p+2, \ldots, ip\}$. Then the lemma follows from the following two commutative diagrams $$ \xymatrix{ SL_m(l)\ar[d]_{D^{(p)}} \ar@{->}[r]^{\mu_{m}} & \mathcal{C}^t_m(l)\ar[d]^{p^h\circ D^{(p)}} \\ SL^h_m(pl) \ar@{->}[r]_{\mu^h_{m}} & \mathcal{C}^h_m(pl), } \textrm{ and } \xymatrix{ \mathcal{C}^t_m(l)\ar[d]_{p^h\circ D^{(p)}} \ar@{->}[r]^{W\ \ \ } & (S^{\otimes l})_{m+1}\hbar^m \ar[d]^-{\pi^h_m\circ \bar \Delta^{(p)}} \\ \mathcal{C}^h_m(pl)\ar@{->}[r]_{W\ \ \ } & \langle sl_2 \rangle^{(pl)}_{m+1}\hbar^m. } $$ The fact that the left-hand side diagram commutes is due to Habegger and Masbaum \cite{HMa}, while the commutativity of the right-hand side diagram is a direct consequence of the definitions. \end{proof} The next technical lemma will be shown in Section \ref{8}. \begin{lemma}\label{sl4} Let $L\in SL^h_m(l)$, and $1\leq j\leq i-2\leq m$. We have $\pi^h_{i,j}(J(L))=0$. \end{lemma} We use Lemma \ref{sl4} to establish the following.
\begin{lemma}\label{spro1} For $1\le m < p$, the following diagram commutes \begin{align*} \xymatrix{ SL_m(l)\ar[d]_{D^{(p)}} \ar@{->}[r]^{\pi^t_{m}\circ J\textrm{ }\ \ } & (S^{\otimes l})_{m+1}\hbar^m\ar[d]^{\pi^h_m\circ \Delta_{\hbar} ^{(p)}} \\ SL^h_m(pl) \ar@{->}[r]_{\pi^h_{m}\circ J\textrm{ }\ \ } & \langle sl_2\rangle^{(pl)}_{m+1}\hbar^m. } \end{align*} \end{lemma} \begin{proof} The diagram in the statement decomposes as \begin{align*} \xymatrix{ SL_m(l)\ar[d]_{D^{(p)}} \ar@{->}[r]^{J\ \ } & J(SL_m(l))\ar[d]_{\Delta_{\hbar} ^{(p)}} \ar@{->}[r]^{\pi^t_{m}} & (S^{\otimes l})_{m+1}\hbar^m\ar[d]^{\pi^h_m\circ \Delta_{\hbar} ^{(p)}} \\ SL^h_m(pl) \ar@{->}[r]_{J\ \ } & J(SL^h_m(pl)) \ar@{->}[r]_{\pi^h_{m}} & \langle sl_2\rangle^{(pl)}_{m+1}\hbar^m, } \end{align*} where the left-hand side square commutes as a general property of the universal $sl_{2}$ invariant. In order to prove that the right-hand square commutes as well, we first show that, given a string link $L\in SL_m(l)$, we have \begin{align}\label{eq:JT} J(L)\in 1+ \bigoplus_{ 1\leq i\leq j\leq m}(S^{\otimes l})_i\hbar^j+(S^{\otimes l})_{m+1}\hbar^m + \hbar^{m+1}U_{\hbar}^{\hat \otimes l}. \end{align} In other words, we show that \begin{itemize} \item[\rm{(a)}] $\pi^t_j(J(L))=0$ for $1\leq j< m$, \item[\rm{(b)}] $\pi^t_{i,j} (J(L))=0$ for $1\leq j\leq i-2\leq m$. \end{itemize} By Lemma \ref{sl3}, if $\pi_{i,j}^t (J(L))\not =0$ for $i,j\geq 1$, then we have \begin{align}\label{ieq} \pi^h_{i,j} (J({D^{(p)}(L)}))=(\pi^h_{i,j}\circ \Delta_{\hbar}^{(p)} )(J(L))\not =0. \end{align} However, as already recalled above, the fact that $L\in SL_m(l)$ implies that $D^{(p)}(L)$ is in $SL_m^h(pl)$. So (\ref{ieq}) above cannot hold in Case (a) by Theorem \ref{sth1}, nor in Case (b) by Lemma \ref{sl4}. Thus we have shown (\ref{eq:JT}). Let us now proceed with the proof that the right-hand square of the diagram above is commutative.
In view of (\ref{eq:JT}), we only need to show the following two claims: \begin{itemize} \item[\rm{(i)}] $\pi_m^h\circ \Delta_{\hbar} ^{(p)} \big((S^{\otimes l})_i\hbar^j\big)=0$ for any $1\leq i\leq j\leq m$, and \item[\rm{(ii)}] $\pi_m^h\circ \Delta_{\hbar} ^{(p)} (\hbar^{m+1}U_{\hbar}^{\hat \otimes l})=0$. \end{itemize} Claim (ii) is obvious, from the fact that $\Delta_{\hbar} ^{(p)}(\hbar^{i}U_{\hbar}^{\hat \otimes l})\subset \hbar^{i}U_{\hbar}^{\hat \otimes pl}$ for $i \geq0$. In order to prove Claim (i), it is enough to show for $0\leq i\leq j$ that \begin{align*} \Delta_{\hbar} ^{(p)} \big((S^{\otimes l})_{i}\hbar^j\big)\subset \prod_{0\leq u\leq v} (S^{\otimes pl})_{u}\hbar^{v}. \end{align*} Recall from \cite{J} that for $s,n,r\ge 0$, $\Delta_{\hbar} (F^sH^nE^r)$ is equal to \begin{align*} \sum_{0\leq j_1\leq s, 0\leq j_2\leq n, 0\leq j_3\leq r} \begin{bmatrix} s \\ j_1 \end{bmatrix}_q\begin{bmatrix} n \\ j_2 \end{bmatrix}_q \begin{pmatrix} r\\ j_3 \end{pmatrix} F^{s-j_1}H^{n-j_2}K^{j_3}E^{r-j_3}\otimes F^{j_1} K^{s-j_1}H^{j_2} E^{j_3}. \end{align*} Since $K=\exp\frac{\hbar H}{2}\in \prod_{t\geq 0}\mathbb{Q} H^t \hbar^t $, the above formula implies \begin{align*} \Delta_{\hbar} (F^sH^nE^r) \in \prod_{0\leq t\leq k } (S^{\otimes 2})_{s+n+r+t}\hbar^k. \end{align*} Thus we have \begin{align*} \Delta_{\hbar} ^{(p)} \big((S^{\otimes l})_{i}\hbar^j\big)\subset \prod_{0\leq t\leq k } (S^{\otimes pl})_{i+t}\hbar^{j+k}\subset \prod_{0\leq u\leq v} (S^{\otimes pl})_{u}\hbar^{v}. \end{align*} This concludes the proof of Lemma \ref{spro1}. \end{proof} We can finally proceed with the proof of Theorem \ref{sth2}. \begin{proof}[Proof of Theorem \ref{sth2}] Let $L\in SL_m(l)$ for some $m\ge 1$. By (\ref{eq:JT}), we have $$ J^t(L)\in (S^{\otimes l})_{m+1}\hbar^m+ \hbar^{m+1}U_{\hbar}^{\hat \otimes l}, $$ and thus we only need to prove that $\pi_{m}^t \circ J^t=W\circ \mu_{m}$.
By Lemma \ref{sl3}, it suffices to show that this equality holds after post-composing with $\pi_m^h \circ \bar \Delta^{(p)}$, that is, it suffices to prove that $$ \pi_m^h \circ \bar \Delta^{(p)} \circ \pi_{m}^t \circ J^t = \pi_m^h \circ \bar \Delta^{(p)} \circ W\circ \mu_{m}. $$ But according to the commutative diagrams of Lemmas \ref{scom} and \ref{spro1}, this is equivalent to proving that $$ (\pi_m^h \circ J^h) (D^{(p)}(L) )=(W \circ \mu^h_{m})(D^{(p)}(L)),$$ which follows immediately from Theorem \ref{sth2h}. Thus we have the assertion. \end{proof} \begin{remark}\label{rem:tbw} We have in particular shown that the universal $sl_2$ invariant of a string link $L\in SL_m(l)$ satisfies (\ref{eq:JT}), and the proof of Theorem \ref{sth2} given above relies on the fact that, applying the projection map $\pi^{t}$ to that equation, we obtain $J^t(L)\in (S^{\otimes l})_{m+1}\hbar^m+ \hbar^{m+1}U_{\hbar}^{\hat \otimes l}$. So we could consider an alternative version of the reduction $J^t$ of the universal $sl_2$ invariant in the statement of our main result, by setting $$ \tilde{J^t} := \tilde{\pi}^{t}\circ J, $$ where $\tilde{\pi}^{t}$ denotes the quotient map as $\mathbb{Q}$-modules $$ \tilde{\pi}^{t}\co (S^{\otimes l})[[\hbar]]\to \frac{(S^{\otimes l})[[\hbar]]}{\prod_{ 1\leq i\leq j}(S^{\otimes l})_i\hbar^j}. $$ It is clear from the above proof that Theorem \ref{sth2} still holds when replacing $J^t$ with this alternative version $\tilde{J^t}$. \end{remark} The above observation gives the following, which in particular applies to slice and boundary string links. \begin{corollary}\label{cor:final} Let $L$ be an $l$-component string link with vanishing Milnor invariants. Then we have $$J(L)\in 1+ \prod_{ 1\leq i\leq j}(S^{\otimes l})_i\hbar^j. $$ \end{corollary} \section{Universal $sl_2$ invariant and clasper surgery} \label{8} This section contains the proof of Lemma \ref{sl4} and Proposition \ref{s241}.
In order to prove these results, we will make use of the theory of claspers; more precisely, we will study the behavior of the universal $sl_2$ invariant under clasper surgery. \subsection{A quick review of clasper theory} We recall here only the definition and a few properties of claspers for string links, and refer the reader to \cite{H} for more details. Let $L$ be a string link. A \emph{clasper} for $L$ is an embedded surface in $D^2\times [0,1]$, which decomposes into disks and bands, called \emph{edges}, each of which connects two distinct disks. The disks have either $1$ or $3$ incident edges, and are called {\em leaves} or {\em nodes}, respectively, and the clasper intersects $L$ transversely at a finite number of points, which are all contained in the interiors of the leaves. A clasper is called a \textit{tree clasper} if it is connected and simply connected. In this paper, we make use of the drawing convention of \cite[Fig. 7]{H} for representing claspers. The \emph{degree} of a tree clasper is defined to be the number of nodes plus 1, i.e., the number of leaves minus 1. Given a clasper $G$ for a string link $L$, we can modify $L$ using the local moves $1$ and $2$ of Figure \ref{fig:clasper} as follows. If $G$ contains one or several nodes, pick any leaf of $G$ that is connected to a node by an edge, and apply the local move $1$. Keep applying this first move at each node, until none remains: this produces a disjoint union of degree $1$ claspers for the string link $L$ (note indeed that erasing these degree $1$ claspers gives back the string link $L$). Now apply the local move $2$ at each degree $1$ clasper. We say that the resulting string link $L_G$ in $D^2\times [0,1]$ is obtained from $L$ by \emph{surgery along $G$}. Note that the isotopy class of $L_G$ does not depend on the order in which the moves were performed. \begin{figure}[h!] \includegraphics{clasp.eps} \caption{Constructing the image of a string link under clasper surgery.
Here, bold lines represent a bunch of parallel strands from the string link. }\label{fig:clasper} \end{figure} The \emph{$C_k$-equivalence} is the equivalence relation on string links generated by surgeries along tree claspers of degree $k$ and isotopies. A clasper for a string link $L$ is called \emph{simple} if each of its leaves intersects $L$ at one point. Habiro showed that two string links are $C_k$-equivalent if and only if they are related by surgery along a disjoint union of simple degree $k$ tree claspers. In the following, \emph{we will implicitly assume that all tree claspers are simple.} A tree clasper $G$ for a string link $L$ is called \emph{repeated} if more than one leaf of $G$ intersects the same component of $L$. An important property of repeated tree claspers is the following, see for example \cite{FY}. \begin{lemma} \label{lem:repeated} Surgery along a repeated tree clasper preserves the link-homotopy class of (string) links. \end{lemma} We conclude this subsection with a couple of standard lemmas in clasper theory. Proofs are omitted, since they involve the same techniques as in \cite[\S 4]{H}, where similar statements appear. \begin{lemma} \label{lem:calculus} Let $C$ be a union of tree claspers for a string link $L$, and let $t$ be a component of $C$ which is a tree clasper of degree $k$. Let $C'$ be obtained from $C$ by passing an edge of $t$ across $L$ or across another edge of $C$. Then we have \begin{align}\label{cal1} L_C \stackrel{C_{k+1}}{\sim} L_{C'}. \end{align} Moreover, if $t$ is repeated, then the $C_{k+1}$-equivalence in (\ref{cal1}) is realized by surgery along repeated tree claspers. \end{lemma} \begin{lemma} \label{lem:slide} Let $t_1\cup t_2$ be a disjoint union of a degree $k_1$ and a degree $k_2$ clasper for a string link $L$. Let $t'_1\cup t'_2$ be obtained from $t_1\cup t_2$ by sliding a leaf of $t_1$ across a leaf of $t_2$, as shown below. \begin{figure}[h!] 
\includegraphics[scale=0.8]{slide.eps} \end{figure} Then we have \begin{align}\label{cal2} L_{t_1\cup t_2} \stackrel{C_{k_1+k_2}}{\sim} L_{t'_1\cup t'_2}. \end{align} Moreover, if one of $t_1$ and $t_2$ is repeated, then the $C_{k_1+k_2}$-equivalence in (\ref{cal2}) is realized by surgery along repeated tree claspers. \end{lemma} \subsection{Proofs of Lemma \ref{sl4} and Proposition \ref{s241}. } In this section we prove Lemma \ref{sl4} and Proposition \ref{s241}. The proofs rely on two results (Corollary \ref{cor:slide} and Lemma \ref{lem:repeat}) which describe the behavior of the universal $sl_2$ invariant with respect to clasper surgery. We need an additional technical notion to state these results. Recall from Section \ref{2} that the trivial string link is defined as $\mathbf{1}(=\mathbf{1}_l)=\{p_1,\ldots, p_l\}\times [0,1]$. We assume that the points $p_i$ are on the line $\{(x, y)\in D^2 \ |\ y=0\}$. A tree clasper $T$ for the trivial string link $\mathbf{1}$ is called \emph{overpassing} if all edges and nodes of $T$ are contained in $\{(x,y)\in D^2 | \ y\leq 0\} \times [0,1]\subset D^2\times [0,1]$. In other words, $T$ is overpassing if there is a diagram of $\mathbf{1}\cup T$ which restricts to the standard diagram of $\mathbf{1}$, where the strands do not cross, and where the edges of $T$ overpass $\mathbf{1}$ at all crossings. \begin{lemma}\label{slide} Let $L$ and $ L_0$ be two link-homotopic $l$-component string links. Then for any $m\geq 1$, there exist $n\geq 0$ overpassing repeated tree claspers $R_1,\ldots, R_n$ of degree $\leq m$ for $\mathbf{1}$ such that \begin{align*} L \stackrel{C_{m+1}}{\sim} L_0 \cdot \prod_{j=1}^n \mathbf{1}_{R_j}. \end{align*} \end{lemma} \begin{proof} By the definition of link-homotopy, $L$ can be obtained from $L_0$ by a finite sequence of self-crossing changes, i.e., by surgery along a disjoint union $R$ of $n_1$ repeated degree $1$ tree claspers. Pick a connected component $R_1$ of $R$.
By a sequence of crossing changes and leaf slides, we can ``pull down'' $R_1$ in $D^2\times [0,1]$ so that it lies in a small neighborhood of $D^2\times \{0\}$, which is disjoint from $R\setminus R_1$ and intersects $L_0$ at trivial arcs. Apply further crossing changes to ensure that the image $\tilde R_1$ of $R_1$ under this deformation is overpassing. By Lemmas \ref{lem:calculus} and \ref{lem:slide}, we have $$ L \stackrel{C_{2}}{\sim} ( L_0)_{R\setminus R_1}\cdot \mathbf{1}_{\tilde R_1} , $$ and the $C_{2}$-equivalence is realized by surgery along repeated tree claspers of degree $2$. Applying this procedure to each of the $n_1$ connected components of $R$ successively, we eventually obtain that $$ L = \left( L_0\cdot \prod_{1\leq i\leq n_1} \mathbf{1}_{\tilde R_i} \right)_ {R^{(2)}}, $$ where each $\tilde R_i$ is an overpassing tree clasper of degree $1$, and $R^{(2)}$ is a disjoint union of repeated tree claspers of degree $2$. Next, we apply the same ``pull down'' procedure to each connected component of $R^{(2)}$ successively. Using the same lemmas, we then have that $$ L = \left(L_0\cdot \prod_{1\leq i_2\leq n_2} \mathbf{1}_{\tilde R^{(2)}_{i_2}} \right)_{R^{(3)}}, $$ where each $\tilde R^{(2)}_{i_2}$ is an overpassing repeated tree clasper of degree at most $2$, and $R^{(3)}$ is a disjoint union of repeated tree claspers of degree $\geq 3$. Iterating this procedure, we obtain that, for any integer $m\ge 1$, we have $$ L = \left( L_0\cdot \prod_{1\leq i_m\leq n_m} \mathbf{1}_{\tilde R^{(m)}_{i_m}} \right)_{R^{(m+1)}}, $$ where each $\tilde R^{(m)}_{i_m}$ is an overpassing repeated tree clasper of degree at most $m$, and $R^{(m+1)}$ is a disjoint union of repeated tree claspers of degree $\geq m+1$. This completes the proof.
\end{proof} Since the universal $sl_2$ invariant modulo the ideal $\hbar^{k}U_{\hbar}^{\hat \otimes l}$ is a finite type invariant of degree $<k$, hence is an invariant of $C_{k}$-equivalence \cite{H}, the multiplicativity of the universal $sl_2$ invariant implies the following. \begin{corollary}\label{cor:slide} Let $L$ and $ L_0$ be two link-homotopic $l$-component string links. Then for any $m\geq 1$, there exist $n\geq 0$ overpassing repeated tree claspers $R_1,\ldots, R_n$ of degree $\leq m$ for $\mathbf{1}$ such that \begin{align*} J(L)\equiv J(L_0) \cdot \prod_{j=1}^n J(\mathbf{1}_{R_j}) \oh{m+1}. \end{align*} \end{corollary} We will apply this result to the case where $L_0$ is the explicit representative for the link-homotopy class of $L$ given in Lemma \ref{lem:braidlh}, whose universal $sl_{2}$ invariant was studied in detail in Section \ref{6}. By Corollary \ref{cor:slide}, we are thus led to studying the universal $sl_{2}$ invariant of string links obtained from $\mathbf{1}$ by surgery along an overpassing repeated tree clasper: this is the subject of Lemma \ref{lem:repeat} below. For $x=F^{s_1}H^{n_1}E^{r_1}\otimes F^{s_2}H^{n_2}E^{r_2}\otimes \cdots \otimes F^{s_l}H^{n_l}E^{r_l}\in S^{\otimes l}$, set \begin{align*} \mathrm{supp}(x)= \sharp \{ 1\leq i\leq l\ | \ s_i+n_i+r_i\not =0\}, \end{align*} that is, roughly speaking, the number of nontrivial tensorands. We denote by $(S^{\otimes l})_{\mathrm{supp}\le n}$ the $\mathbb{Q}$-submodule of $S^{\otimes l}$ spanned by all monomials $x$ such that $\mathrm{supp}(x)\leq n$. \begin{lemma}\label{lem:repeat} Let $C$ be an overpassing repeated tree clasper for $\mathbf{1}\in SL(l)$. We have \begin{align*} J(\mathbf{1}_C)\in 1+\prod_{j\geq 1} (S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j.
\end{align*} \end{lemma} \begin{proof} Since $C$ is an overpassing repeated tree clasper for $\mathbf{1}_l$, there exists an $l$-component braid $B$ such that \begin{align}\label{over} (\mathbf{1}_l)_C=B \cdot ((\mathbf{1}_k)_{C'} \otimes \mathbf{1}_{l-k})\cdot B^{-1}, \end{align} where $k$ denotes the number of strands of $\mathbf{1}_l$ intersecting $C$, and where $C'$ denotes the image of $C$ under this isotopy. (Recall that $\otimes$ denotes the horizontal juxtaposition of string links.) Let $m$ denote the degree of $C$ (and $C'$). Since $J((\mathbf{1}_k)_{C'})\equiv 1 \oh{m}$ and $k\leq$ $\sharp \{$leaves of $C\}-1 = m$, we have \begin{align*} J((\mathbf{1}_k)_{C'}) &\in 1 + \prod_{j\geq m}S^{\otimes k}\hbar^j \\ &\subset 1+ \prod_{j\geq k}S^{\otimes k}\hbar^j, \end{align*} which implies that \begin{align*} J((\mathbf{1}_k)_{C'} \otimes \mathbf{1}_{l-k})& \in 1 + \prod_{j\geq k}(S^{\otimes l})_{\mathrm{supp}\le k}\hbar^j. \end{align*} So, by Equation (\ref{over}), in order to obtain the desired result it only remains to show that $\prod_{j\geq 1} (S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j$ is invariant under the braid group action. Here, the braid group acts on $U_{\hbar}^{\hat \otimes l}$ by quantized permutation: the action of the Artin generator $\sigma_n$ on an element $x$ is given by $R^{(l)}_{n+1,n} (\bar \sigma_n (x)) (R^{-1})^{(l)}_{n+1,n}$, where $\bar \sigma_n(x)$ denotes the permutation of the $n$th and $(n+1)$th tensorands of $x$. Hence it suffices to prove that, for any monomial $x=(x_1\otimes \cdots \otimes x_k\otimes 1^{\otimes l-k})\hbar^j$ with $x_1,\ldots,x_k\in S$, $j\geq k$, and any $n \in \{1, \ldots, l-1\}$, we have \begin{align*} R^{(l)}_{n+1,n}.x.(R^{-1})^{(l)}_{n+1,n}\in \prod_{j\geq 1}(S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j, \intertext{and} (R^{-1})^{(l)}_{n,n+1}.x.R^{(l)}_{n,n+1}\in \prod_{j\geq 1}(S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j. \end{align*} We prove the first inclusion. The second one is similar.
This is clear when $1\leq n\leq k-1$ or $k+1 \leq n$. When $n=k$, since $R^{\pm1}\equiv 1 \oh{}$, we have \begin{align*} R^{(l)}_{k+1,k}.x.(R^{-1})^{(l)}_{k+1,k}&= \Big(x_1\otimes \cdots \otimes R_{21}(x_k\otimes 1)(R^{-1})_{21}\otimes 1^{\otimes l-k-1} \Big)\hbar^j \\ &\in ( x_1\otimes \cdots \otimes x_k \otimes 1^{\otimes l-k})\hbar^j + \prod_{i\geq j+1}(S^{\otimes l})_{\mathrm{supp}\le k+1}\hbar^i \\ &\subset ( x_1\otimes \cdots \otimes x_k \otimes 1^{\otimes l-k})\hbar^j + \prod_{i\geq j+1}(S^{\otimes l})_{\mathrm{supp}\le j+1}\hbar^i. \end{align*} This completes the proof. \end{proof} \begin{corollary}\label{coslide} If $L\in SL^h_m(l)$, then we have \begin{align}\label{eq:JT2} J(L) & \in 1 + (W\circ \mu^h_m)(L) + \bigoplus_{j=1}^m (S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j + \hbar^{m+1}U_{\hbar}^{\hat \otimes l}. \end{align} \end{corollary} \begin{proof} Recall from Lemma \ref{lem:braidlh} that $L\in SL_m^h(l)$ is link-homotopic to $b=b_m^Lb_{m+1}^L\cdots b_{l-1}^L$. By Proposition \ref{sp} we have \begin{align}\label{eq:Jbraid} \begin{split} J(b) &\equiv 1 + \left( \sum_{J\in \mathcal{I}_{m+1}} \mu_J(L).\varsigma^{(l)}_J\right) \hbar^m \\ &\equiv 1 + (W\circ \mu^h_m) (L) \oh{m+1}, \end{split} \end{align} where the second equality follows from Equation (\ref{equivalence}). Since $\prod_{j} (S^{\otimes l})_{\mathrm{supp}\le j} \hbar^j$ is closed under multiplication, Corollary \ref{cor:slide} and Lemma \ref{lem:repeat} imply that \begin{align*} J(L) &\in J(b) \cdot \left( 1+\prod_{j} (S^{\otimes l})_{\mathrm{supp}\le j} \hbar^j \right) + \hbar^{m+1}U_{\hbar}^{\hat \otimes l} \\ &\subset \left(1 + (W\circ \mu^h_m) (L)\right)\cdot \left( 1+\prod_{j} (S^{\otimes l})_{\mathrm{supp}\le j} \hbar^j \right) + \hbar^{m+1}U_{\hbar}^{\hat \otimes l} \\ &\subset 1 + (W\circ \mu^h_m)(L) + \bigoplus_{j=1}^m (S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j + \hbar^{m+1}U_{\hbar}^{\hat \otimes l}. \end{align*} This completes the proof.
\end{proof} We can now prove Lemma \ref{sl4} and Proposition \ref{s241}. \begin{proof}[Proof of Lemma \ref{sl4}] Let $L\in SL^h_m(l)$ and $1\leq j\leq i-2\leq m$. By Corollary \ref{coslide}, we have \begin{align*} \pi^h_{i,j }(J(L))&\in \pi^h_{i,j }\left(1 + (W\circ \mu^h_m)(L) + \bigoplus_{k=1}^m (S^{\otimes l})_{\mathrm{supp}\le k}\hbar^k + \hbar^{m+1}U_{\hbar}^{\hat \otimes l}\right) \\ &=\pi^h_{i,j }\left((W\circ \mu^h_m)(L)\right) + \pi^h_{i,j }\left(\bigoplus_{k=1}^m (S^{\otimes l})_{\mathrm{supp}\le k}\hbar^k\right). \end{align*} But the right-hand side is equal to $0$, since we have \begin{align*} (W\circ \mu^h_m)(L)&\in \langle sl_2 \rangle ^{(l)}_{ m+1}\hbar^m, \intertext{and} \bigoplus_{k=1}^m (S^{\otimes l})_{\mathrm{supp}\le k}\hbar^k &\cap \langle sl_2 \rangle ^{(l)}_{i}\hbar ^j=0, \end{align*} because for any monomial $x\in \langle sl_2 \rangle ^{(l)}_{ i}$ we have $\mathrm{supp}(x)=i\geq j+2$. This completes the proof. \end{proof} \begin{proof}[Proof of Proposition \ref{s241}] Let $L, L' \in SL^h_m(l)$ be two link-homotopic string links. Then, Corollary \ref{cor:slide} and Lemma \ref{lem:repeat}, together with the fact that $\prod_{j} (S^{\otimes l})_{\mathrm{supp}\le j} \hbar^j$ is closed under multiplication, imply that \begin{align*} J(L) &\in J({ L'}) \cdot \left( 1+\prod_{j} (S^{\otimes l})_{\mathrm{supp}\le j} \hbar^j \right) +\hbar^{m+1}U_{\hbar}^{\hat \otimes l}. \end{align*} It follows that $J^h(L) \equiv J^h(L') \oh{m+1}$, as desired. \end{proof} In the same spirit as Remark \ref{rem:tbw}, we have the following. \begin{remark}\label{rem:tbw2} By Corollary \ref{coslide}, the universal $sl_2$ invariant for a string link $L$ in $SL^h_m(l)$ satisfies (\ref{eq:JT2}).
So we have a variant of Theorem \ref{sth2h}, using an alternative version of the reduction $J^h$ of the universal $sl_2$ invariant $J$, by setting $$ \tilde{J^h} := \tilde{\pi}^{h}\circ J, $$ where $\tilde{\pi}^{h}$ denotes the quotient map of $\mathbb{Q}$-modules $$ \tilde{\pi}^{h}\co (S^{\otimes l})[[\hbar]]\to \frac{(S^{\otimes l})[[\hbar]]}{\prod_j(S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j }. $$ Indeed, it follows immediately from Corollary \ref{coslide} that if $L\in SL^h_m(l)$, then $$\tilde{J^h}(L)\equiv (W\circ \mu^h_m)(L) \oh{m+1}.$$ \end{remark} In particular, we obtain the following. \begin{corollary}\label{cor:final2} Let $L$ be a link-homotopically trivial $l$-component string link. Then we have $$J(L)\in 1+ \bigoplus_{j=1}^{l-1} (S^{\otimes l})_{\mathrm{supp}\le j}\hbar^j+ \hbar^lU_{\hbar}^{\hat \otimes l}. $$ \end{corollary}
TITLE: How can you prove Ceva's Theorem using vectors? QUESTION [2 upvotes]: Given the following layout, where $ \overrightarrow{CY} = \mathbf{a} $ and $ \overrightarrow{CX} = \mathbf{b} $ (figure not shown): Prove that $$ \frac{\overrightarrow{BX}}{\overrightarrow{XC}} \times \frac{\overrightarrow{CY}}{\overrightarrow{YA}} \times \frac{\overrightarrow{AZ}}{\overrightarrow{ZB}} = 1$$ REPLY [2 votes]: Our proof is in three parts. Define $ \overrightarrow{CB} = \alpha\overrightarrow{CX}$, $ \overrightarrow{CA} = \beta\overrightarrow{CY}$ and $ \overrightarrow{AB} = \gamma\overrightarrow{AZ}$. Also, let $ \overrightarrow{AX} = \lambda\overrightarrow{AP}$, $ \overrightarrow{BY} = \mu\overrightarrow{BP}$ and $ \overrightarrow{CZ} = \nu\overrightarrow{CP}$.

1. $ \alpha, \beta, \overrightarrow{CP} $

$\overrightarrow{CP} = \overrightarrow{CX} + \overrightarrow{XP} = \mathbf{b} - \overrightarrow{PX} = \mathbf{b} - (\overrightarrow{AX} - \overrightarrow{AP}) = \mathbf{b} - (\overrightarrow{AX} - \frac{1}{\lambda}\overrightarrow{AX}) = \mathbf{b} - \frac{\lambda - 1}{\lambda}(\overrightarrow{AC} + \overrightarrow{CX}) = \mathbf{b} - \frac{\lambda - 1}{\lambda}\mathbf{b} + \frac{(\lambda - 1)\beta}{\lambda}\mathbf{a}$

By a similar argument, $ \overrightarrow{CP} = \overrightarrow{CY} + \overrightarrow{YP} = \mathbf{a} - \frac{\mu - 1}{\mu}\mathbf{a} + \frac{(\mu - 1)\alpha}{\mu}\mathbf{b}$

So, $ \mathbf{b} - \frac{\lambda - 1}{\lambda}\mathbf{b} + \frac{(\lambda - 1)\beta}{\lambda}\mathbf{a} = \mathbf{a} - \frac{\mu - 1}{\mu}\mathbf{a} + \frac{(\mu - 1)\alpha}{\mu}\mathbf{b}$

$ (1 - \frac{\lambda - 1}{\lambda} - \frac{(\mu - 1)\alpha}{\mu})\mathbf{b} = (1 - \frac{\mu - 1}{\mu} - \frac{(\lambda - 1)\beta}{\lambda})\mathbf{a}$

Since $\mathbf{a}$ and $\mathbf{b}$ are linearly independent, both coefficients must vanish, so $$ 1 - \frac{\lambda - 1}{\lambda} - \frac{(\mu - 1)\alpha}{\mu} = 0 \Rightarrow \alpha = \frac{\mu}{\mu\lambda - \lambda} $$ and $$ 1 - \frac{\mu - 1}{\mu} - \frac{(\lambda - 1)\beta}{\lambda} = 0 \Rightarrow \beta = \frac{\lambda}{\lambda\mu - \mu} $$ We can substitute either one of these into either expression for $ \overrightarrow{CP}$ to get $$ \overrightarrow{CP} = \frac{1}{\mu}\mathbf{a} + \frac{1}{\lambda}\mathbf{b}$$

2. $ \gamma $

$ \overrightarrow{CZ} = \overrightarrow{CA} + \overrightarrow{AZ} = \beta \mathbf{a} + \frac{1}{\gamma}(\overrightarrow{AC} + \overrightarrow{CB}) = \beta \mathbf{a} - \frac{\beta}{\gamma}\mathbf{a} + \frac{\alpha}{\gamma}\mathbf{b}$

We substitute this result into $ \overrightarrow{CZ} = \nu\overrightarrow{CP} $ to get $$ \beta \mathbf{a} - \frac{\beta}{\gamma}\mathbf{a} + \frac{\alpha}{\gamma}\mathbf{b} = \frac{\nu}{\mu}\mathbf{a} + \frac{\nu}{\lambda}\mathbf{b}$$ $$ (\beta - \frac{\beta}{\gamma} - \frac{\nu}{\mu})\mathbf{a} = (\frac{\nu}{\lambda} - \frac{\alpha}{\gamma})\mathbf{b}$$ Again by linear independence, $$ \beta - \frac{\beta}{\gamma} - \frac{\nu}{\mu} = 0 \Rightarrow \gamma = \frac{\nu\gamma}{\beta\mu} + 1 $$ and $$ \frac{\nu}{\lambda} - \frac{\alpha}{\gamma} = 0 \Rightarrow \nu\gamma = \alpha\lambda $$ So, once we substitute our values for $ \alpha $ and $ \beta $, $$ \gamma = \frac{\alpha\lambda}{\beta\mu} + 1 = \frac{\mu(\lambda - 1)}{\lambda(\mu - 1)} + 1 $$

3. Tie it all together

By substituting in our values for $\alpha,\beta $ and $ \gamma $ we obtain $$ \begin{align} \frac{\overrightarrow{BX}}{\overrightarrow{XC}} \times \frac{\overrightarrow{CY}}{\overrightarrow{YA}} \times \frac{\overrightarrow{AZ}}{\overrightarrow{ZB}} & = \frac{\alpha - 1}{(\beta - 1)(\gamma - 1)} \\ & = 1 \end{align} $$ The above proof uses the most fundamental approach: equating two expressions for the same vector and solving a pair of simultaneous equations.
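As a quick sanity check, the closed-form expressions for $\alpha$, $\beta$ and $\gamma$ can be verified with exact rational arithmetic. The following Python snippet (the function name is mine, not part of the proof) recomputes them from $\lambda$ and $\mu$ and confirms that the product equals $1$:

```python
from fractions import Fraction

def ceva_product(lam, mu):
    """Compute (alpha - 1) / ((beta - 1) * (gamma - 1)) exactly.

    Uses the expressions derived above:
      alpha = mu / (mu*lam - lam),  beta = lam / (lam*mu - mu),
      gamma = alpha*lam / (beta*mu) + 1.
    Requires lam != 1 and mu != 1 so that nothing degenerates.
    """
    lam, mu = Fraction(lam), Fraction(mu)
    alpha = mu / (mu * lam - lam)
    beta = lam / (lam * mu - mu)
    gamma = alpha * lam / (beta * mu) + 1
    return (alpha - 1) / ((beta - 1) * (gamma - 1))

# Exact arithmetic, no rounding: the product is 1 for any admissible lambda, mu.
print(ceva_product(3, 5))   # 1
```

Because `Fraction` arithmetic is exact, equality with $1$ holds identically rather than up to floating-point error.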
\section{Numerical experiments}\label{sec:exp} Here, we evaluate and compare the performance of the five alternating projection methods introduced in Section~\ref{sec:methods}. The numerical examples are random matrices, an image, and a solution to the Smoluchowski equation. \subsection{Random uniform matrices} In the first example, we consider random $256 \times 256$ matrices with independent, identically distributed entries, distributed uniformly on $[0, 1]$, and try to approximate them with nonnegative rank-64 matrices. The best rank-64 approximation given by the truncated singular value decomposition contains many negative elements (see Fig.~\ref{num:fig:uniform_data}), and we attempt to correct it using alternating projections. The results are presented in Tab.~\ref{num:tab:uniform} and Fig.~\ref{num:fig:uniform_ap_comparison}: the former contains the per-iteration computational complexities of each approach measured in flops, and the approximation errors in the Frobenius and Chebyshev (maximum) norms after 100 iterations; the latter shows the decay rate of the negative elements (we consider a value negative if it is below $-10^{-15}$). We see that the randomized approaches perform fewer operations per iteration than SVD and Tangent and have similar convergence properties. The only exception is GN: it retains larger negative elements during the iterations and then eliminates them abruptly.
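To make the setup concrete, here is a minimal NumPy sketch of the deterministic SVD-based alternating projection scheme used as the baseline above. This is an illustrative reimplementation under our reading of the method, not the code behind the reported tables; the randomized variants (HMT, Tropp, GN) replace the exact truncated SVD with sketched low-rank approximations, and the function names and small problem size are ours:

```python
import numpy as np

def truncate_svd(A, r):
    """Best rank-r approximation of A in the Frobenius norm (Eckart-Young)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def alternating_projections(M, r, n_iter=100):
    """Alternate between the nonnegative orthant (elementwise max with 0)
    and the set of rank-<=r matrices (truncated SVD), starting from the
    best rank-r approximation of M."""
    X = truncate_svd(M, r)
    for _ in range(n_iter):
        X = truncate_svd(np.maximum(X, 0.0), r)
    return X

rng = np.random.default_rng(0)
M = rng.random((64, 64))                 # iid entries, uniform on [0, 1]
X = alternating_projections(M, 16, n_iter=100)
```

Clipping to $[0, 1]$, as in the image experiment below, amounts to replacing np.maximum(X, 0.0) with np.clip(X, 0.0, 1.0).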
\begin{figure}[th] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_svs.png} \caption{} \end{subfigure}\hfill \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_negels.png} \caption{} \end{subfigure} \caption{Properties of a random $256 \times 256$ matrix with iid elements distributed uniformly on $[0, 1]$: normalized singular values~(a) and the negative elements of its best rank-$64$ approximation~(b).} \label{num:fig:uniform_data} \end{figure} \begin{table}[h]\centering \begin{tabular}{@{}llccc@{}}\toprule Method & Sketch & Flops per iter & Frobenius & Chebyshev \\ \midrule Initial $\mathrm{SVD}_r$ & N/A & $3.5 \cdot 10^8$ & $3.07 \cdot 10^{-1}$ & $7.18 \cdot 10^{-1}$ \\ \midrule SVD & N/A & $3.6 \cdot 10^8$ & $3.08 \cdot 10^{-1}$ & $7.17 \cdot 10^{-1}$ \\ Tangent & N/A & $9.2 \cdot 10^7$ & $3.08 \cdot 10^{-1}$ & $7.19 \cdot 10^{-1}$ \\ HMT(1, 70) & $\mathcal{N}(0,1)$ & $7.7 \cdot 10^7$ & $3.08 \cdot 10^{-1}$ & $7.14 \cdot 10^{-1}$ \\ HMT(0, 70) & $\mathcal{N}(0,1)$ & $5.0 \cdot 10^7$ & $3.11 \cdot 10^{-1}$ & $7.14 \cdot 10^{-1}$ \\ HMT(0, 70) & $\mathrm{Rad}$ & $4.4 \cdot 10^7$ & $3.10 \cdot 10^{-1}$ & $7.24 \cdot 10^{-1}$ \\ HMT(0, 70) & $\mathrm{Rad}(0.2)$ & $4.0 \cdot 10^7$ & $3.10 \cdot 10^{-1}$ & $7.18 \cdot 10^{-1}$ \\ Tropp(70, 100) & $\mathrm{Rad}(0.2)$ & $3.8 \cdot 10^7$ & $3.17 \cdot 10^{-1}$ & $7.47 \cdot 10^{-1}$ \\ Tropp(70, 85) & $\mathrm{Rad}(0.2)$ & $3.6 \cdot 10^7$ & $3.30 \cdot 10^{-1}$ & $7.97 \cdot 10^{-1}$ \\ GN(150) & $\mathrm{Rad}(0.2)$ & $2.0 \cdot 10^7$ & $3.40 \cdot 10^{-1}$ & $8.25 \cdot 10^{-1}$ \\ GN(120) & $\mathrm{Rad}(0.2)$ & $1.8 \cdot 10^7$ & $3.60 \cdot 10^{-1}$ & $8.33 \cdot 10^{-1}$ \\ \bottomrule\\ \end{tabular} \caption{Comparison of alternating projection methods for rank-$64$ nonnegative approximation of random $256 \times 256$ matrices with iid elements distributed uniformly on $[0, 1]$: their 
computational complexities and relative errors in the Frobenius and Chebyshev norms after 100 iterations.} \label{num:tab:uniform} \end{table} \begin{figure}[th] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_svd_distance.png} \caption{SVD} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_tangent_distance.png} \caption{Tangent} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_rsvd_gauss_k70_p1_distance.png} \caption{HMT(1, 70), $\mathcal{N}(0,1)$} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_rsvd_rad02_k70_distance.png} \caption{HMT(0, 70), $\mathrm{Rad}(0.2)$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_rsvd_2sketch_rad02_k70_l100_distance.png} \caption{Tropp(70, 100), $\mathrm{Rad}(0.2)$} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_rsvd_2sketch_rad02_k70_l85_distance.png} \caption{Tropp(70, 85), $\mathrm{Rad}(0.2)$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_gen_nystrom_rad02_l150_distance.png} \caption{GN(150), $\mathrm{Rad}(0.2)$} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/random/uniform_256_64_gen_nystrom_rad02_l120_distance.png} \caption{GN(120), $\mathrm{Rad}(0.2)$} \end{subfigure} \caption{Comparison of alternating projection methods for rank-$64$ nonnegative approximation of random $256 \times 256$ matrices with iid elements distributed uniformly 
on $[0, 1]$: the Frobenius and Chebyshev norms of the negative part and the density of negative elements over 100 iterations. The results are averaged over 10 trials.} \label{num:fig:uniform_ap_comparison} \end{figure} \clearpage \subsection{Images} The second example aims to show that alternating projections can be used to clip the values of a low-rank matrix to a prescribed range. We pick a $512 \times 512$ grayscale image and look for its rank-50 approximation whose values lie in $[0, 1]$: this requires a simple modification of the algorithms. The best rank-50 approximation, which we refine with alternating projections, contains outliers both below $0$ and above $1$, as Fig.~\ref{num:fig:astro_data} shows. We see from Tab.~\ref{num:tab:astro} and Fig.~\ref{num:fig:astro_ap_comparison} that all methods converge and that randomized approaches are faster. By visually comparing the resulting approximations in Fig.~\ref{num:fig:astro_ap_comparison_visual}, we note that Tangent introduces vertical artifacts and GN leads to more disturbances than HMT and Tropp.
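The quantities reported in the tables and figures can be computed with a few lines of NumPy. In this sketch the function names and the exact normalization of the relative Chebyshev error are our assumptions, while the $10^{-15}$ tolerance matches the negativity threshold stated above:

```python
import numpy as np

def relative_errors(M, X):
    """Relative errors of the approximation X of M in the Frobenius
    and Chebyshev (maximum) norms."""
    frob = np.linalg.norm(X - M) / np.linalg.norm(M)
    cheb = np.abs(X - M).max() / np.abs(M).max()
    return frob, cheb

def out_of_range_density(X, lo=0.0, hi=1.0, tol=1e-15):
    """Fraction of entries of X lying below lo - tol or above hi + tol."""
    return float(np.mean((X < lo - tol) | (X > hi + tol)))
```

For the nonnegative-approximation experiments, the density of negative elements is out_of_range_density with hi set to infinity.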
\begin{figure}[th] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.6\textwidth]{pics/astro/astro.png} \caption{} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/astro/astro_svs.png} \caption{} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.75\textwidth]{pics/astro/astro_50_lowels.png} \caption{} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.75\textwidth]{pics/astro/astro_50_highels.png} \caption{} \end{subfigure} \caption{Properties of the $512 \times 512$ Astronaut image: the image itself~(a), the normalized singular values~(b), the negative elements of its best rank-$50$ approximation~(c), and the elements greater than 1 of its best rank-$50$ approximation~(d).} \label{num:fig:astro_data} \end{figure} \begin{table}[h]\centering \begin{tabular}{@{}llccc@{}} \toprule Method & Sketch & Flops per iter & Frobenius & Chebyshev \\ \midrule Initial $\mathrm{SVD}_r$ & N/A & $2.8 \cdot 10^9$ & $8.07 \cdot 10^{-2}$ & $4.94 \cdot 10^{-1}$ \\ \midrule SVD & N/A & $2.8 \cdot 10^9$ & $8.30 \cdot 10^{-2}$ & $5.24 \cdot 10^{-1}$ \\ Tangent & N/A & $1.3 \cdot 10^8$ & $1.04 \cdot 10^{-1}$ & $5.28 \cdot 10^{-1}$ \\ HMT(0, 60) & $\mathrm{Rad}(0.2)$ & $8.7 \cdot 10^7$ & $8.50 \cdot 10^{-2}$ & $5.22 \cdot 10^{-1}$ \\ HMT(0, 55) & $\mathrm{Rad}(0.2)$ & $8.0 \cdot 10^7$ & $8.82 \cdot 10^{-2}$ & $5.22 \cdot 10^{-1}$ \\ Tropp(65, 110) & $\mathrm{Rad}(0.2)$ & $7.6 \cdot 10^7$ & $8.77 \cdot 10^{-2}$ & $5.49 \cdot 10^{-1}$ \\ Tropp(60, 120) & $\mathrm{Rad}(0.2)$ & $7.1 \cdot 10^7$ & $8.92 \cdot 10^{-2}$ & $5.34 \cdot 10^{-1}$ \\ GN(340) & $\mathrm{Rad}(0.2)$ & $7.1 \cdot 10^7$ & $1.16 \cdot 10^{-1}$ & $6.93 \cdot 10^{-1}$ \\ GN(150) & $\mathrm{Rad}(0.2)$ & $4.8 \cdot 10^7$ & $1.31 \cdot 10^{-1}$ & $6.94 \cdot 10^{-1}$ \\ \bottomrule\\ \end{tabular} \caption{Comparison of
alternating projection methods for rank-$50$ nonnegative approximation of the $512 \times 512$ Astronaut image: their computational complexities and relative errors in the Frobenius and Chebyshev norms after 300 iterations.} \label{num:tab:astro} \end{table} \begin{figure}[th] \centering \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_svd_distance_low.png}\hspace{0.05\textwidth} \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_svd_distance_high.png} \caption{SVD} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_tangent_distance_low.png}\hspace{0.05\textwidth} \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_tangent_distance_high.png} \caption{Tangent} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_rsvd_60_sparse_rademacher_distance_low.png}\hspace{0.05\textwidth} \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_rsvd_60_sparse_rademacher_distance_high.png} \caption{HMT(0, 60), $\mathrm{Rad}(0.2)$} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_rsvd_2sketch_60_120_sparse_rademacher_distance_low.png}\hspace{0.05\textwidth} \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_rsvd_2sketch_60_120_sparse_rademacher_distance_high.png} \caption{Tropp(60, 120), $\mathrm{Rad}(0.2)$} \end{subfigure} \begin{subfigure}[b]{0.9\textwidth} \centering \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_gen_nystrom_340_sparse_rademacher_distance_low.png}\hspace{0.05\textwidth} \includegraphics[width=0.4\textwidth]{pics/astro/astronaut_50_gen_nystrom_340_sparse_rademacher_distance_high.png} \caption{GN(340), $\mathrm{Rad}(0.2)$} \end{subfigure} \caption{Comparison of alternating projection methods for rank-$50$ nonnegative approximation of 
the Astronaut image: the Frobenius and Chebyshev norms of the negative part and the density of negative elements over 300 iterations~(left), same for the elements greater than 1~(right).} \label{num:fig:astro_ap_comparison} \end{figure} \begin{figure}[th] \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{pics/astro/astro_50.png} \caption{Initial SVD} \end{subfigure}\hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{pics/astro/astro_50_svd.png} \caption{SVD} \end{subfigure}\hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{pics/astro/astro_50_tangent.png} \caption{Tangent} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{pics/astro/astro_50_rsvd_60_sparse_rademacher.png} \caption{HMT(0, 60), $\mathrm{Rad}(0.2)$} \end{subfigure}\hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{pics/astro/astro_50_rsvd_2sketch_60_120_sparse_rademacher.png} \caption{Tropp(60, 120), $\mathrm{Rad}(0.2)$} \end{subfigure}\hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=0.9\textwidth]{pics/astro/astro_50_gen_nystrom_350_sparse_rademacher.png} \caption{GN(340), $\mathrm{Rad}(0.2)$} \end{subfigure} \caption{Comparison of rank-$50$ nonnegative approximations of the Astronaut image.} \label{num:fig:astro_ap_comparison_visual} \end{figure} \clearpage \subsection{Solution to Smoluchowski equation} Our third example comes from the two-component Smoluchowski coagulation equation \begin{equation}\label{num:eq:smol} \begin{split} &\frac{\partial n(v_1, v_2, t)}{\partial t} = -n(v_1, v_2,t) \int_0^{\infty} \int_0^{\infty} K(u_1, u_2; v_1, v_2) n(u_1, u_2,t) du_1 du_2 \\ & +\frac{1}{2}\int_0^{v_1} \int_0^{v_2} K(v_1 - u_1, v_2 - u_2; u_1, u_2) n(v_1 - u_1, v_2 - u_2, t) n(u_1, u_2, t) du_1 du_2, \end{split} \end{equation} which describes the evolution of the 
concentration function $n(v_1, v_2, t)$ of the two-component particles of size $(v_1, v_2)$ per unit volume. In previous works \cite{smirnov2016fast, matveev2016tensor}, we showed that the corresponding initial-value problem can be solved by explicit time-integration in low-rank format for a wide range of coagulation kernels $K(u_1, u_2; v_1, v_2)$ and nonnegative initial conditions. This means that at every time instant $t$ the solution $n(v_1, v_2, t)$ is represented as a low-rank matrix, which accelerates computation. In some cases, analytical solutions are known \cite{fernandez2007exact}: the solution to Eq.~\eqref{num:eq:smol} with constant kernel \begin{equation*} K(u_1, u_2; v_1, v_2) \equiv K \end{equation*} and the initial conditions \begin{equation*} n(v_1, v_2, t = 0) = \sqrt{K} ab e^{-a v_1 - b v_2} \end{equation*} is given by \begin{equation}\label{num:eq:Analytical} n(v_1, v_2, t) = \sqrt{K} \frac{ab e^{-a v_1 - b v_2}} {(1 + \sqrt{K} t /2)^2} I_0 \left(2 \sqrt{\frac{ab v_1 v_2 \sqrt{K} t}{ \sqrt{K} t + 2}}\right), \end{equation} where $a, b > 0$ are arbitrary positive numbers and $I_0$ is the modified Bessel function of the first kind of order zero. It was proved in \cite{matveev2016tensor} that \eqref{num:eq:Analytical}, discretized on any equidistant rectangular grid, can be approximated with accuracy $\varepsilon$ by a matrix of rank $O(\log 1 / \varepsilon)$ that is independent of the grid. For our numerical experiments, we set $K = 100$, $a = b = 1$, choose an equidistant rectangular grid with step $0.1$, and study rank-10 approximations of the $1024 \times 1024$ discretized mass-concentration function \begin{equation*} m(v_1, v_2, t) \equiv(v_1 + v_2) \cdot n(v_1, v_2, t) \end{equation*} corresponding to the solution \eqref{num:eq:Analytical} at $t = 6$. In Fig.~\ref{num:fig:smolukh_data}, we demonstrate the heatmap of $m(v_1, v_2, t)$ and the plot of its normalized singular values, which decay rapidly in agreement with \cite{matveev2016tensor}.
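For illustration, the discretized mass-concentration function is straightforward to evaluate directly with NumPy (np.i0 is NumPy's modified Bessel function $I_0$). This is a sketch with the parameters above but a smaller $256 \times 256$ grid, not the paper's code:

```python
import numpy as np

def mass_concentration(t, K=100.0, a=1.0, b=1.0, n=256, h=0.1):
    """Discretize m(v1, v2, t) = (v1 + v2) * n(v1, v2, t) for the
    constant-kernel analytical solution on an n x n grid with step h."""
    v = h * np.arange(1, n + 1)
    v1, v2 = np.meshgrid(v, v, indexing="ij")
    sk = np.sqrt(K)
    n_sol = (sk * a * b * np.exp(-a * v1 - b * v2) / (1.0 + sk * t / 2.0) ** 2
             * np.i0(2.0 * np.sqrt(a * b * v1 * v2 * sk * t / (sk * t + 2.0))))
    return (v1 + v2) * n_sol

M = mass_concentration(t=6.0)
s = np.linalg.svd(M, compute_uv=False)
# the normalized singular values s / s[0] decay rapidly, so M admits an
# accurate low-rank approximation
```

The rapid singular-value decay visible in s reflects the $O(\log 1/\varepsilon)$ rank bound quoted above.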
Unlike the two previous examples, where we always started with the best low-rank approximation, here the initial low-rank approximation differs between the alternating projection methods. In Tab.~\ref{num:tab:smolukh} and Fig.~\ref{num:fig:smolukh_ap_distance} we compare the performance of the discussed approaches: once again, the randomized approaches are faster than the deterministic ones and show similar convergence. The GN method eliminates the negative elements much sooner than the others, but its relative error in the Frobenius norm is 5 times higher. In Fig.~\ref{num:fig:smolukh_negels}, we show how the negative elements disappear after 1000 alternating projection iterations: HMT and Tropp leave the matrix with fewer negative elements than SVD and Tangent, and GN removes them completely. Also note how the initial low-rank approximation in GN has a distinct negative pattern. \begin{figure}[th] \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.75\textwidth]{pics/smolukh/smolukh.png} \caption{} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/smolukh/smolukh_svs.png} \caption{} \end{subfigure} \caption{Properties of the $1024 \times 1024$ solution of the Smoluchowski equation at $t = 6$: (a)~the solution itself and (b)~the normalized singular values.} \label{num:fig:smolukh_data} \end{figure} \begin{table}[h]\centering \begin{tabular}{@{}llccc@{}} \toprule Method & Sketch & Flops (init/per iter) & Frobenius (init/res) & Chebyshev (init/res)\\ \midrule SVD & N/A & $2.3 \cdot 10^{10} / 2.3 \cdot 10^{10}$ & $2.39 \cdot 10^{-2} / 2.70 \cdot 10^{-2}$ & $1.19 \cdot 10^{-1} / 1.48 \cdot 10^{-1}$ \\ Tangent & N/A & $2.3 \cdot 10^{10} / 6.5 \cdot 10^{7}$ & $2.39 \cdot 10^{-2} / 2.72 \cdot 10^{-2}$ & $1.19 \cdot 10^{-1} / 1.49 \cdot 10^{-1}$ \\ HMT(0, 15) & $\mathrm{Rad}(0.2)$ & $3.7 \cdot 10^{7} / 5.8 \cdot 10^{7}$ & $2.43 \cdot 10^{-2} / 2.75 \cdot 10^{-2}$ &
$1.19 \cdot 10^{-1} / 1.57 \cdot 10^{-1}$ \\ Tropp(15, 25) & $\mathrm{Rad}(0.2)$ & $1.2 \cdot 10^{7} / 3.3 \cdot 10^{7}$ & $2.43 \cdot 10^{-2} / 2.87 \cdot 10^{-2}$ & $1.33 \cdot 10^{-1} / 1.60 \cdot 10^{-1}$ \\ GN(40) & $\mathrm{Rad}(0.2)$ & $1.2 \cdot 10^{7} / 3.3 \cdot 10^{7}$ & $8.47 \cdot 10^{-2} / 1.83 \cdot 10^{-1}$ & $1.54 \cdot 10^{-1} / 3.72 \cdot 10^{-1}$ \\ \bottomrule\\ \end{tabular} \caption{Comparison of alternating projection methods for rank-$10$ nonnegative approximation of the $1024 \times 1024$ solution of the Smoluchowski equation: their computational complexities and relative errors in the Frobenius and Chebyshev norms after 1000 iterations.} \label{num:tab:smolukh} \end{table} \begin{figure}[th] \centering \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/smolukh/smolukh_svd_distance_low.png} \caption{SVD} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/smolukh/smolukh_tangent_distance_low.png} \caption{Tangent} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/smolukh/smolukh_rsvd_15_distance_low.png} \caption{HMT(0, 15), $\mathrm{Rad}(0.2)$} \end{subfigure}\hspace{0.05\textwidth} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/smolukh/smolukh_rsvd_2sketch_15_25_distance_low.png} \caption{Tropp(15, 25), $\mathrm{Rad}(0.2)$} \end{subfigure} \begin{subfigure}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{pics/smolukh/smolukh_gen_nystrom_40_distance_low.png} \caption{GN(40), $\mathrm{Rad}(0.2)$} \end{subfigure} \caption{Comparison of alternating projection methods for rank-$10$ nonnegative approximation of the $1024 \times 1024$ solution of the Smoluchowski equation: the Frobenius and Chebyshev norms of the negative part and the density of negative elements over 1000 iterations.} 
\label{num:fig:smolukh_ap_distance} \end{figure} \begin{figure}[th] \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_svd_negels_beg.png}\hspace{0.05\textwidth} \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_svd_negels_end.png} \caption{SVD: iterations 0 and 1000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_svd_negels_beg.png}\hspace{0.05\textwidth} \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_tangent_negels_end.png} \caption{Tangent: iterations 0 and 1000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_rsvd_15_negels_beg.png}\hspace{0.05\textwidth} \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_rsvd_15_negels_end.png} \caption{HMT(0, 15), $\mathrm{Rad}(0.2)$: iterations 0 and 1000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_rsvd_2sketch_15_25_negels_beg.png}\hspace{0.05\textwidth} \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_rsvd_2sketch_15_25_negels_end.png} \caption{Tropp(15, 25), $\mathrm{Rad}(0.2)$: iterations 0 and 1000} \end{subfigure} \begin{subfigure}[b]{\textwidth} \centering \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_gen_nystrom_40_negels_beg.png}\hspace{0.05\textwidth} \includegraphics[width=0.3\textwidth]{pics/smolukh/smolukh_gen_nystrom_40_negels_end.png} \caption{GN(40), $\mathrm{Rad}(0.2)$: iterations 0 and 1000} \end{subfigure} \caption{Comparison of alternating projection methods for rank-$10$ nonnegative approximation of the $1024 \times 1024$ solution of the Smoluchowski equation: the negative elements of the rank-$10$ approximation initially~(left) and after 1000 iterations~(right).} \label{num:fig:smolukh_negels} \end{figure}
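The alternating projection scheme compared throughout this section can be summarized in a few lines: alternate between the nonnegative orthant (projected onto by entrywise clipping) and the set of rank-$r$ matrices (projected onto by a truncated SVD). The following NumPy sketch implements the plain deterministic `SVD' variant on a toy matrix; the randomized HMT/Tropp/GN variants replace the full SVD with a sketched low-rank factorization. It is an illustrative simplification, not the code used for the experiments above:

```python
import numpy as np

def truncate_svd(X, r):
    """Project X onto the set of matrices of rank at most r."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def alternating_projections(M, r, iters):
    """Alternate between the nonnegative orthant and the rank-r set,
    starting from the best rank-r approximation of M."""
    X = truncate_svd(M, r)
    for _ in range(iters):
        # clip to the nonnegative orthant, then re-truncate to rank r;
        # the distance from the iterate to the orthant is nonincreasing
        X = truncate_svd(np.maximum(X, 0.0), r)
    return X

# toy example: a nonnegative matrix whose best rank-3 approximation
# has small negative entries
rng = np.random.default_rng(0)
M = rng.random((60, 40)) ** 4
X0 = truncate_svd(M, 3)
X = alternating_projections(M, 3, 300)
neg0 = np.linalg.norm(np.minimum(X0, 0.0))
neg = np.linalg.norm(np.minimum(X, 0.0))
print(neg0, neg)  # the negative part shrinks over the iterations
```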
TITLE: Random Variable Worded Problem QUESTION [0 upvotes]: I can figure out the basics of the question, that is, the mean and variance of Y: E(Y) = 1-2p Var(Y) = 4p(1-p) I don't understand parts (i) and (ii). I don't understand the question itself, that is, what X = Y1 + Y2 + Y3 is, and how to use it for the latter part of the question. Why is E(X) = nE(Y) and Var(X) = nVar(Y)? Can someone please explain the logic and working-out process for (i) and (ii)? How does the textbook conclude this? REPLY [0 votes]: Just wanted to show another way of looking at part (ii) (even though heropup's answer is correct). To find $Var(X)=Var(\sum_{i=1}^{n}Y_{i})$ note the following properties of random variables (I will leave you to prove these individual properties): (1) $Var(W)=Cov(W,W)$ (2) $Cov(W+V,N+M)=Cov(W,N)+Cov(W,M)+Cov(V,N)+Cov(V,M)$ (sort of like a distributive law) (3) $Cov(W,V)=Cov(V,W)$ (4) if W and V are independent then $Cov(W,V)=0$. With this we see that $$Var(X)=Var\left(\sum_{i=1}^{n}Y_{i}\right)=Cov\left(\sum_{i=1}^{n}Y_{i},\sum_{i=1}^{n}Y_{i}\right)=\sum_{i=1}^{n}Cov\left(Y_{i},Y_{i}\right)+2\sum_{i<j}Cov(Y_{i},Y_{j}),$$ which is best understood by writing out the covariance matrix (http://en.wikipedia.org/wiki/Covariance_matrix): the diagonal entries are the variances $Cov(Y_{i},Y_{i})$, and since by (3) the matrix is symmetric, each off-diagonal entry appears twice, which gives the factor of $2$. Now (4) implies that $Cov(Y_{i},Y_{j})=0$ for $j\neq i$, since we assumed the $Y_{i}$ are independent. So we are left with $$\sum_{i=1}^{n}Cov\left(Y_{i},Y_{i}\right)=\sum_{i=1}^{n}Var(Y_{i}),$$ and if the $Y_{i}$ are identically distributed, the variance is the same for all of them; letting $Var(Y)=\sigma^{2}$, this equals $$n\sigma^{2}.$$
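You can also sanity-check $Var(X)=n\,Var(Y)$ numerically. Here is a small NumPy simulation, assuming (as in the usual version of this problem) that $Y$ takes the value $-1$ with probability $p$ and $+1$ with probability $1-p$, so that $E(Y)=1-2p$ and $Var(Y)=4p(1-p)$:

```python
import numpy as np

p, n, trials = 0.3, 5, 200_000
rng = np.random.default_rng(1)

# each row is one realization of (Y_1, ..., Y_n), iid copies of Y
Y = np.where(rng.random((trials, n)) < p, -1.0, 1.0)
X = Y.sum(axis=1)  # X = Y_1 + ... + Y_n

print(X.mean())  # close to n * E(Y)   = 5 * 0.4  = 2.0
print(X.var())   # close to n * Var(Y) = 5 * 0.84 = 4.2
```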
Stray dog becomes petrol station employee Tuesday February 14th 2017 A dog who was abandoned at a petrol station in Brazil is now the company's cutest member of staff. Negão was found roaming around the Shell station in Mogi das Cruzes, Brazil, when his two new owners - Sabrina Plannerer and her partner - went to look at the site. Speaking to The Dodo, Ms Plannerer said: "We adopted him immediately and got him all the care animals need. "We took him to the vet to get vaccinated and de-wormed. We bought him food, a dog house, and a leash to take him on walks." When the petrol station opened after construction work was complete, the shiny black pooch became the official greeter. The owners even got him his own name badge, complete with picture. "Negão waits for people to arrive, and then goes up to say hello, winning them over with his charms," Plannerer added. The happy hound has been a hit with customers, some of whom even bring him toys when they come to fill up their tanks. A Brazilian animal charity, Grupo FERA, is now using Negão as its "poster-pup" to pair abandoned dogs with local businesses rather than traditional homes. A spokesperson from Grupo FERA said: "It's been sensational... And workers enjoy having the companionship of a 4-legged colleague."
Blockchain explained in 2 minutes Following her popular blog post Blockchain for Beginners, we gave Joanne, the Trisent Office Manager (a non-techy), the challenge of explaining the blockchain in a video lasting no longer than two minutes, and we think she has done a great job. Blockchain in 2 minutes from Trisent on Vimeo. We also found this short video on "Blockchain and the Middleman" pretty good at explaining how blockchain delivers benefits.
Benefits of getting Biloxi Cajun products Everyone wants tasty food, and the right spices make the difference, so it pays to buy them from a company that offers genuinely good products. Biloxi Cajun has products that can make your food as delicious and tasty as you want it to be. Because spices are something people eat, put health first: check what each spice contains before you buy. Choose a manufacturer with real experience, since the length of time a company has been in this field tells you what it is capable of, and its track record over previous years will show in the product. You can also ask people who have used spices from that specific company; they can tell you what to expect and help with the decision you are about to make. Finally, consider the company's reputation: get accurate information about it, and you will be able to tell whether the company is right for you.
Quality Assurance Rp 5 million - Rp 8 million - Candidate must possess at least a Diploma in Computer Science/Information Technology or equivalent. - At least 1 year of working experience in the related field is required for this position. - Required skill(s): Robot Framework PT. Barrans Global Mandiri Bandung 23 days ago
OSU hiring PR firm in dispute over site of Bend campus BEND, Ore. (AP) — A prominent Northwest public relations company is being brought into the dispute over a new college campus in Bend. The plan to turn OSU-Cascades into a four-year school on Bend's west side has met resistance. The opening has been delayed a year, until 2016. Opponents say the new campus would be better on the east or north side, where it wouldn't have so great an impact on traffic. The Bend Bulletin reports the school is negotiating over a $100,000 contract with Gallatin Public Affairs. A school official says the goal is not to persuade opponents but to reach people who could know more about the benefits for the region. A hearings officer and the Bend City Council have approved the site. It's being evaluated by state regulators.
Dream11 Referral Code 2022: Refer & Earn 250 Rs Per Friend Hello friends! After sharing lots of Dream11 tricks, today we come to a very famous fantasy gaming app: Dream11. The app is currently running many referral codes and offers, which you can grab easily by playing the game. If you are a new user, then download the Dream11 app in … Read more
\begin{document} \title{Limits of Quotients of Polynomial Functions in Three Variables} \author{Juan D. V\'{e}lez,~~ Juan P. Hern\'{a}ndez,~~ Carlos A. Cadavid.} \date{} \maketitle \begin{abstract} A method for computing limits of quotients of real analytic functions in two variables was developed in \cite{CSV}. In this article we generalize the results obtained in that paper to the case of quotients $q=f(x,y,z)/g(x,y,z)$ of polynomial functions in three variables with rational coefficients. The main idea consists in examining the behavior of the function $q$ along a certain real variety $X(q)$ (the \emph{discriminant variety} associated to $q$). The original problem is then solved by reducing to the case of functions of two variables. The inductive step is provided by the key fact that any algebraic curve is birationally equivalent to a plane curve. Our main result is summarized in Theorem \ref{tl2}. In Section 4 we describe an effective method for computing such limits. We provide a high-level description of an algorithm that generalizes the one developed in \cite{CSV}, now available in \textit{Maple} as the \texttt{limit/multi} command. \end{abstract} \section{Introduction} Algorithms for computing limits of functions in one variable are studied in \cite{G3}. Similar algorithms have been developed in \cite{G1} and \cite{G2}. Computational methods dealing with classical objects, like power series rings and algebraic curves, have been developed by several authors during the last two decades, \cite{AMR} and \cite{SSH}. A symbolic computation algorithm for computing local parametrizations of analytic branches and real analytic branches of a curve in $n$-dimensional space is presented in \cite{ANMR}.
In \cite{CSV} V\'{e}lez, Cadavid and Molina developed a method for analyzing the existence of limits $\lim_{(x,y)\rightarrow (a,b)}q(x,y),$ where $q(x,y)$ is a quotient of two real analytic functions $f$ and $g,$ under the hypothesis that $(a,b)$ is an isolated zero of $g$. In the case where $f$ and $g$ are polynomial functions with rational coefficients, the techniques developed in that article provide an algorithm for the computation of such limits, now available in \textit{Maple} as the \texttt{limit/multi} command \cite{link}. An alternative method for computing limits of quotients of functions in several variables has been recently developed in \cite{Xiao}. Their approach is completely different from ours, relying on Wu's algorithm as its main tool. In this article we generalize the methods presented in \cite{CSV} to the case of quotients of polynomials in three variables, under the same assumption that $g$ is a function with an isolated zero at the point $(a,b,c)$. The main idea consists in reducing the problem of determining the existence of limits of the form \begin{equation} \lim_{(x,y,z)\rightarrow (a,b,c)}f(x,y,z)/g(x,y,z) \label{xxx} \end{equation} to the problem of determining the limit along some real variety $X(q)$ associated to $q$ (the \emph{discriminant variety of }$q$). In order to achieve this one needs to study the topology of the irreducible components of the singular locus of $X(q)$. The original problem is then solved by reducing to the case of functions of two variables. The inductive step is provided by the key fact that any algebraic curve is birationally equivalent to a plane curve. Our main result is summarized in Theorem \ref{tl2}. In Section 4 we provide a high-level description of a potential algorithm capable of determining the existence of (\ref{xxx}) and, when the limit exists, of computing its value. Any of the available Groebner basis packages may serve as a computational engine to implement such an algorithm.
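As a small illustration of the kind of computation such an engine performs, the following SymPy sketch (with a made-up toy ideal, not one arising from the paper) computes a lexicographic Groebner basis and reads off generators of an elimination ideal:

```python
from sympy import symbols, groebner

x, y, z = symbols('x y z')

# toy ideal I = (x^2 + y^2 + z^2 - 1, z - x*y); a lexicographic
# order with z > x > y pushes z to the front so it gets eliminated
G = groebner([x**2 + y**2 + z**2 - 1, z - x*y], z, x, y, order='lex')

# basis elements free of z generate the elimination ideal I ∩ Q[x, y]
elim = [g for g in G.exprs if z not in g.free_symbols]
print(elim)
```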
In Section 5 we present two examples that illustrate some of the computations needed in a typical problem of determining and computing a limit of this sort. \section{Preliminaries} \subsection{Dimension of algebraic sets and their singular loci} \label{crzv} In this article we consider complex affine varieties defined by polynomials with real coefficients. If $I$ is an ideal in the polynomial ring $S=\mathbb{R}[x_{1},\dots ,x_{n}]$, by $X=V(I)$ we will denote the \emph{complex affine variety} defined by $I,$ i.e., the common zeros of $I$ in $\mathbb{C}^{n}$. The \emph{dimension} of $X$ is the Krull dimension of the ring $\mathbb{C\otimes }_{\mathbb{R}}S/I$. Since $S/I\subset \mathbb{C\otimes }_{\mathbb{R}}S/I$ is a faithfully flat extension of rings, the dimension of $X$ coincides with the dimension of $\mathbb{R}[x_{1},\dots ,x_{n}]/I$, the \emph{real affine ring} of $X.$ It is well known that if $X$ is irreducible, defined by some prime ideal $P\subset S$, then the dimension of the domain $R=S/P$ coincides with the transcendence degree of the field extension $\mathbb{R}\subset L$ (denoted by $\mathrm{trdeg}_{\mathbb{R}}L$), where $L$ denotes the fraction field of $R$. We recall the definition of the \emph{singular locus} of an equidimensional affine variety. \begin{definition} \label{sing}Let $Y\subset \mathbb{C}^{n}$ be an affine variety, and let $R=\mathbb{C}[x_{1},\ldots ,x_{n}]/I(Y)$ be its ring of coordinates. Suppose that $R$ is equidimensional of dimension $r$ (i.e., $ht(P)=n-r,$ for all the minimal primes $P$ containing $I(Y)$). Choose arbitrary generators $f_{1},\dots ,f_{k}$ for $I(Y).$ The singular locus of $Y$, denoted by \textrm{Sing}($Y$), is the closed subvariety of $Y$ defined by the ideal $J=I(Y)+$ the ideal of all $(n-r)\times (n-r)$ minors of the Jacobian matrix.
\end{definition} \begin{remark} \label{rsp} \begin{enumerate} \item The above criterion to determine \textrm{Sing}($Y$) does not depend on the generators one chooses for $I(Y)$. \item The singular locus \textrm{Sing}($Y$) is a proper closed subvariety of $Y$, consisting of those points $p\in Y$ for which the rank of the Jacobian matrix $[(\partial f_{i}/\partial x_{j})(p)]$ is less than $n-r$. \item $\dim(\mathrm{Sing}(Y))<\dim (Y)$. \end{enumerate} \end{remark} (See \cite{Eisenbud}, Section 16.5 and \cite{Hartshorne}, Chapter I, Section 5). \bigskip We will mainly focus on the following simple case: Suppose that $X\subset \mathbb{C}^{3}$ is an affine variety of dimension $2$ defined by a prime ideal $P\subset \mathbb{R}[x,y,z].$ In this case $P\subset \mathbb{R}[x,y,z]$ must be a prime ideal of height $1,$ and so it has to be principal, i.e., $P=(h),$ where $h\in \mathbb{R}[x,y,z]$ is some real irreducible polynomial. Therefore, $X=V(h)$. In this case \textrm{Sing}($X$) is the complex affine variety defined by the ideal $I_{S}=(h,\partial h/\partial x,\partial h/\partial y,\partial h/\partial z)\subset \mathbb{R}[x,y,z]$. \subsection{The discriminant variety\label{discrimina}} The existence of $\lim_{(x,y,z)\rightarrow (a,b,c)}f(x,y,z)/g(x,y,z)$ does not depend on the particular choice of local coordinates. Hence, after an appropriate translation we may always assume that $p=(a,b,c)$ is the origin, here denoted by $O$. Our objective is to compute \begin{equation} \lim_{(x,y,z)\rightarrow (0,0,0)}f(x,y,z)/g(x,y,z), \label{eq1} \end{equation} where $f(x,y,z)$ and $g(x,y,z)$ are polynomial functions with rational coefficients, and where $g$ has an isolated zero at $O$.
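Restricting $q=f/g$ to lines through the origin gives only a necessary condition for the existence of \eqref{eq1}, which is precisely why the discriminant variety $X(q)$ is needed. A short SymPy sketch of such a directional check (the quotient is an illustrative example, not taken from the paper):

```python
from sympy import symbols, limit, simplify

x, y, z, t, a, b = symbols('x y z t a b')

# example quotient; the denominator has an isolated zero at the origin
q = (x * y * z) / (x**2 + y**2 + z**2)

# restrict q to the line (x, y, z) = (t, a*t, b*t) and let t -> 0;
# agreement of these directional limits is necessary but not sufficient
q_line = simplify(q.subs({x: t, y: a * t, z: b * t}))
print(limit(q_line, t, 0))  # 0 along every line of this form
```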
If $q(x,y,z)=f(x,y,z)/g(x,y,z),$~we define the\emph{\ discriminant variety} $X(q)$ associated to $q$ as the variety defined by the $2\times 2$ minors of the matrix \begin{equation} A=\left[ \begin{array}{ccc} x & y & z \\ \partial q/\partial x & \partial q/\partial y & \partial q/\partial z \end{array} \right] . \label{matriz} \end{equation} Strictly speaking, the $2\times 2$ minors of $A$, $x_{i}\partial q/\partial x_{j}-x_{j}\partial q/\partial x_{i}$, are not necessarily polynomial functions. However, these minors can be written as \begin{equation*} x_{i}\partial q/\partial x_{j}-x_{j}\partial q/\partial x_{i}=\frac{ x_{i}(g\partial f/\partial x_{j}-f\partial g/\partial x_{j})}{g^{2}}-\frac{ x_{j}(g\partial f/\partial x_{i}-f\partial g/\partial x_{i})}{g^{2}}, \end{equation*} and therefore, if we let \begin{equation*} f_{x_{i},x_{j}}=x_{i}(g\partial f/\partial x_{j}-f\partial g/\partial x_{j})-x_{j}(g\partial f/\partial x_{i}-f\partial g/\partial x_{i}), \end{equation*} then the variety $X(q)$ can be defined as the zeros of the ideal $ J=(f_{x,y},f_{x,z},f_{y,z}).$ The following proposition states that in order to determine the existence of the limit (\ref{eq1}) it suffices to analyze the behavior of the function $ q(x,y,z)$ along the discriminant variety $X(q)$. \begin{proposition} \label{pl1}The limit $\lim_{(x,y,z)\rightarrow 0}q(x,y,z)$ exists, and equals $L\in \mathbb{R}$, if and only if for every $\epsilon >0$ there is $ \delta >0$ such that for every $(x,y,z)\in X(q)$ with $0<|(x,y,z)|<\delta $ the inequality $|q(x,y,z)-L|<\epsilon $ holds. 
\end{proposition} \begin{proof} The method of Lagrange multipliers applied to the function $q(x,y,z)$ with the constraint $x^{2}+y^{2}+z^{2}=r^{2}$, $r>0$ guarantees that if $C_{r}(0)=\{(x,y,z)\in \mathbb{R}^{3}~:~x^{2}+y^{2}+z^{2}=r^{2}\}$ then the extreme values of $q(x,y,z)$ on $C_{r}(0)$ are taken at those points $p=(a,b,c)\in C_{r}(0)$ for which $(\partial q/\partial x(p),\partial q/\partial y(p),\partial q/\partial z(p))=\lambda (a,b,c)$, i.e., at those points in $X(q)$.\newline Suppose that given $\epsilon >0$ there is $\delta >0$ such that for every $(x,y,z)\in X(q)\cap D_{\delta }^{\ast }$ the inequality $|q(x,y,z)-L|<\epsilon $ holds, where $D_{\delta }^{\ast }=\{(x,y,z)\in \mathbb{R}^{3}~:~0<\sqrt{x^{2}+y^{2}+z^{2}}<\delta \}$. Let $(x,y,z)\in D_{\delta }^{\ast }$ and $r=\sqrt{x^{2}+y^{2}+z^{2}}$. If $t(r),s(r)\in C_{r}(0)$ are points at which $q(x,y,z)$ attains, respectively, its maximum and minimum on $C_{r}(0)$, then \begin{equation*} q(s(r))-L\leq q(x,y,z)-L\leq q(t(r))-L. \end{equation*} As $t(r),s(r)\in X(q)\cap C_{r}(0)\subset X(q)\cap D_{\delta }^{\ast }$, one sees that $-\epsilon <q(s(r))-L$ and $q(t(r))-L<\epsilon$. Thus, $|q(x,y,z)-L|<\epsilon $.\newline The converse is obvious. \end{proof} \subsection{Birational equivalence of curves} We intend to reduce the problem of determining the existence of the limit (\ref{xxx}) to a problem in fewer variables. In order to achieve this we will use the fact that any algebraic curve is birationally equivalent to a plane curve. This follows from the following standard result: \begin{proposition}[existence of primitive elements] \label{t1}Let $K$ be a field of characteristic zero and let $L$ be a finite algebraic extension of $K$. Then there is $z\in L$ such that $L=K(z)$ (\cite{Fulton}, Page 75).
\end{proposition} This immediately implies the following corollary: \begin{corollary} \label{c1} Let $X$ be an irreducible algebraic curve over a field $k$ of characteristic zero, and let $K$ be the quotient field of the ring of coordinates of $X$. Then for any $x\in K-k$ which is not algebraic over $k$, $K$ is algebraic over $k(x)$, and there is an element $y\in K$ such that $K=k(x,y)$. \end{corollary} The next theorem is a well-known fact; nevertheless, we give a proof, since we will need the explicit construction of the map denoted by $\mu$ in the theorem. \begin{theorem} \label{t2}\label{rnew copy(1)} Let $X$ be an irreducible space curve in $\mathbb{C}^{3}$ defined by polynomials with real coefficients, and such that the origin of $\mathbb{C}^{3}$ is a point of $X$. Then there exists an irreducible affine plane curve $Y\subset \mathbb{C}^{2}$ and a field isomorphism $\varphi :K(Y)\rightarrow K(X)$ so that $X$ is birationally equivalent to $Y$. After removing a finite set of points $Z\subset Y$, if $Y_{0}=Y\setminus Z$, then there is a morphism $\mu :X\rightarrow Y_{0}$ such that $\mu $ restricted to $X_{0}=\mu ^{-1}(Y_{0})$ is an isomorphism onto $Y_{0}.$ Both $\mu $ and its inverse can be explicitly constructed. \end{theorem} \begin{proof} Suppose that $X=V(P),$ where $P\subset \mathbb{R}[X,Y,Z]$ is a prime ideal. Since $X$ is an irreducible algebraic curve, $\text{dim}(X)=\text{dim}(\mathbb{R}[X,Y,Z]/P)=1$. Denote by $\mathbb{R}(x,y,z)$ the fraction field of $\mathbb{R}[X,Y,Z]/P$. Recall that $\text{dim}(X)=\text{trdeg}_{\mathbb{R}} \mathbb{R}(x,y,z)$.\newline For dimensional reasons, at least one of the variables $x$, $y$, $z$ has to be transcendental over $\mathbb{R}$. Suppose without loss of generality that $x$ is transcendental over $\mathbb{R}$. Corollary \ref{c1} implies that $\mathbb{R}(x)\subset \mathbb{R}(x,y,z)$ is an algebraic extension.
By Proposition \ref{t1}, one can always find $u=y+\lambda z,$ for some $\lambda \in \mathbb{R}(x),$ such that $\mathbb{R}(x,y,z)=\mathbb{R}(x,u)$. Moreover, since this is true for almost all $\lambda $, this element can be taken to be any real constant, except for finitely many choices. Define $\varphi :~ \mathbb{R}[S,T]\rightarrow \mathbb{R}[x,u]\subset \mathbb{R}(x,y,z)$ as the $ \mathbb{R}$-algebra homomorphism that sends $S\rightarrow x$ and $ T\rightarrow u$. Clearly $\varphi $ is surjective, and therefore, if $J=\text{ ker}(\varphi ),$ there is an isomorphism of $\mathbb{R}$-algebras $\varphi : \mathbb{R}[S,T]/J\overset{\sim }{\rightarrow }\mathbb{R}[x,u].$ Consequently, $J\in \mathbb{R}[S,T]$ is a prime ideal. Denote $V(J)$ by $Y$. The last isomorphism induces a field isomorphism $\varphi :\mathbb{R} (Y)\cong \mathbb{R}(x,u)\rightarrow \mathbb{R}(x,y,z)$ defined as $\varphi (x)=x,$ $\varphi (u)=y+\lambda z$. Therefore, $\text{dim}(Y)=\text{dim} (X)=1$. Hence, $Y=V(J)$ is an irreducible algebraic plane curve which is birationally equivalent to $X$.\newline The morphism $\varphi :\mathbb{R}(Y)\rightarrow \mathbb{R}(X)$ induces a morphism of varieties $\mu :X\rightarrow Y$ given by $\mu (a,b,c)=(a,b+\lambda c)$. Notice that since $(0,0,0)\in X,$ then, obviously, $(0,0)\in Y$. Since $Y$ is an irreducible plane curve, $J$ must be a height one prime ideal. Thus, $J=(h),$ for some $h(X,U)\in \mathbb{R}[X,U]$. \newline We can assume that the polynomial $h(a,U)$ obtained by replacing the variable $X$ by $a\in \mathbb{C}$ is not identically zero: If $h(a,U)=0$ we would have $h(X,U)=(X-a)^{m}t(X,U),$ with $t(a,U)\neq 0$. But $(0,0)\in Y$ implies $a=0$, and henceforth $h(X,U)=X^{m}t(X,U)$. Thus, $h(X,U)=X$ or $ h(X,U)=t(X,U),$ since $Y$ is irreducible. Finally, we note that $h(X,U)=X$ contradicts the fact that $x$ is transcendental over $\mathbb{R}$. 
On the other hand, since $x$ is transcendental over $\mathbb{R}$, by Corollary \ref{c1}, the extension $\mathbb{R}(x)\subset \mathbb{R}(x)(u)$ is algebraic. Therefore, since $y\in \mathbb{R}(x,u)$ one can write $y$ as: \begin{equation*} y=\frac{a_{0}(x)}{b_{0}(x)}+\frac{a_{1}(x)}{b_{1}(x)}u+\cdots +\frac{a_{r}(x) }{b_{r}(x)}u^{r}, \end{equation*} where $r$ is smaller than the degree of the field extension $[\mathbb{R} (x)(u):\mathbb{R}(x)]$. Taking $b(x)=b_{0}(x)\cdots b_{r}(x)$ we can rewrite the last equation as \begin{equation} y=\frac{c_{0}(x)+c_{1}(x)u+\cdots +c_{r}(x)u^{r}}{b(x)}, \label{zzz} \end{equation} for certain $c_{i}(x)$. Therefore, we have $y=f_{1}(x,u)/g_{1}(x)$ and $z=f_{2}(x,u)/g_{2}(x)$. Consider $Z=\{(a,b)\in Y~:~g_{1}(a)=0~~\text{or}~~g_{2}(a)=0\}$, which is a Zariski closed subset of $Y$.\newline Let us see that $Z$ is a finite set. Indeed, the polynomials $g_{1}$ and $ g_{2}$ have finitely many roots. Therefore, if $a_{1},\dots ,a_{k}\in \mathbb{C}$ are these roots, for each $a_{i}$, $(a_{i},b)\in Y$ if and only if $h(a_{i},b)=0$, where $Y=V(J)$ with $J=(h)$. Notice that the polynomial $ t(U)=h(a_{i},U)\in \mathbb{C}[U]$ has finitely many roots. Hence, there are only finitely many elements $(a_{i},b)$ with $g_{1}(a_{i})=0$ or $ g_{2}(a_{i})=0$, and such that $h(a_{i},b)=0$. Thus, we conclude that $Z$ is finite.\newline Consider the open subset $Y_{0}=Y\setminus Z$ of $Y$. Let $X_{0}=\mu ^{-1}(Y_{0})$. Define $\tau :Y_{0}\rightarrow X_{0}$ as $\tau (d,e)=(d,\frac{f_{1}(d,e)}{ g_{1}(d)},\frac{f_{2}(d,e)}{g_{2}(d)})$. This last morphism induces an $\mathbb{R}$-algebra homomorphism $\psi : \mathbb{R}(X)\rightarrow O_{Y}(Y_{0})$ given by $\psi (x)=s,~\psi (y)=f_{1}(s,t)/g_{1}(s)$, and $\psi (z)=f_{2}(s,t)/g_{2}(s)$. Clearly, \begin{equation} \varphi \circ \psi (x)=x,~~\varphi \circ \psi (y)=\frac{f_{1}(x,u)}{g_{1}(x)} =y,~~\varphi \circ \psi (z)=\frac{f_{2}(x,u)}{g_{2}(x)}=z. 
\label{eq2} \end{equation} Therefore, $\varphi \circ \psi =Id_{\mathbb{R}(X)},$ and consequently $ \varphi \circ \psi |_{X_{0}}:O_{X}|_{X_{0}}\rightarrow O_{Y}|_{Y_{0}}$ is the identity. On the other hand, $\psi \circ \varphi (s)=\psi (x)=s$ and $ \psi \circ \varphi (t)=\psi (u)$. By (\ref{eq2}) we have $\varphi \circ \psi (u)=u$ and $\varphi (t)=u$, which implies that $t=\psi (u),$ since $\varphi $ is injective. Hence, $\psi \circ \varphi (t)=t$ and therefore $\psi \circ \varphi |_{Y_{0}}:O_{Y}|_{Y_{0}}\rightarrow O_{X}|_{X_{0}}$ is the identity. Hence, $\psi :O_{X}|_{X_{0}}\rightarrow O_{Y}|_{Y_{0}}$ is the inverse of the morphism $\varphi :O_{Y}|_{Y_{0}}\rightarrow O_{X}|_{X_{0}}.$ Thus, the homomorphism $\tau :Y_{0}\rightarrow X_{0}$ induced by $\psi $ is the inverse of $\mu :X_{0}\rightarrow Y_{0}$.\newline Finally, it is clear that the morphism $\mu :X_{0}\rightarrow Y_{0}$ sends the real part of $X_{0}$ into the real part of $Y_{0}$, and since $\mu ^{-1}=\tau :Y_{0}\rightarrow X_{0}$ is determined by the polynomials $ f_{1},f_{2},g_{1}$ and $g_{2}$, which are all real polynomials, then $\mu ^{-1}=\tau $ also sends the real part of $Y_{0}$ into the real part of $X_{0} $. \end{proof} \begin{remark} \label{rnew2} $X_{0}$ is obtained from $X$ by removing finitely many points. \begin{proof} In fact, a point $(a,b,c)\in X$ does not belong to $X_{0}$ iff $\mu (a,b,c)=(a,b+\lambda c)\notin Y_{0}$, i.e., iff $(a,b+\lambda c)\in Z$. But $ Z$ is finite, and therefore there are only finitely many choices for $a$ and $(b+\lambda c)$ such that $(a,b+\lambda c)\notin Y_{0}.$ Fix any values for $a$ and for $\eta $=$b+\lambda c$. If $f_{1}(x,y,z),\dots ,f_{k}(x,y,z)$ are generators for $P$ then, clearly, $f_{i}(a,\eta -\lambda c,c)=0$. But each polynomial $g_{i}(z)=f_{i}(a,\eta -\lambda z,z)$ can only have finitely many roots. This proves the claim. 
\end{proof} \end{remark} This Remark tells us that the problem of determining (and computing) the limit of a function along the varieties $X$ and $Y$ is equivalent to the same problem when one approaches the origin along $X_{0}$ and $Y_{0}$. \subsection{Groebner bases\label{Groebner}} In this section we collect some basic properties and results on Groebner bases and Elimination Theory that will be needed later for the development of an algorithm that computes (\ref{xxx}). The main reference for this section is \cite{Eisenbud}, Chapter 15. By $S=K[x_{1},\dots ,x_{n}]$ we denote the polynomial ring in $n$ variables with coefficients in a field $K$. We denote the set of monomials of $S$ by $M$. By a \emph{term} in $S$ is meant a polynomial of the form $cm$, where $0\neq c\in K$ and $m\in M.$ \begin{definition} A monomial order in $S$ is a total order on $M$ satisfying $nm_{1}>nm_{2}>m_{2},$ for every monomial $n\neq 1,$ and for any pair of monomials $m_{1}$ and $m_{2}$ satisfying $m_{1}>m_{2}$. \end{definition} Every monomial order is Artinian, which means that every subset of $M$ has a least element. For a fixed monomial order $>$ in $S$, the \emph{initial term} of $p\in S$ is the term of $p$ whose monomial is the greatest with respect to $>$. It is usually denoted by $\text{in}(p)$. Given an ideal $I\subset S,$ its ideal of initial terms, $\text{in}(I),$ is defined as the ideal generated by the set $\{\text{in}(p):p\in I\}$. \begin{definition} Let $I\subset S$ be any ideal, and fix a monomial order in $S$. We say that a set of elements $\{f_{1},\dots ,f_{k}\}$ of $I$ is a \textit{Groebner basis} for $I$ iff $\text{in}(I)=(\text{in}(f_{1}),\dots ,\text{in}(f_{k}))$. \end{definition} We list some basic facts about Groebner bases. \begin{remark} \label{rl1} \begin{itemize} \item[1.] The set of monomials not in the ideal $\text{in}(I)$ forms a basis for the $K$-vector space $S/I$. \item[2.] There always exists a Groebner basis for an ideal $I\subset S$.
As $S$ is a Noetherian ring, the ideal $I$ is finitely generated, let's say, $I=(f_{1},\dots ,f_{k})$. Consider the ideal $J=(\text{in}(f_{1}),\dots ,\text{in}(f_{k}))$. If $J=\text{in}(I)$ then $\{f_{1},\dots ,f_{k}\}$ is a Groebner basis for $I$. \item[3.] If $\{f_{1},\dots ,f_{k}\}$ is a Groebner basis for $I$ then $I=(f_{1},\dots ,f_{k})$. \item[4.] There is a criterion that allows one to compute algorithmically a Groebner basis for an ideal $I\subset S$. This criterion is known as Buchberger's algorithm (\cite{Eisenbud}, Page 332). \item[5.] Let $I,J$ be ideals of $S$ such that $I\subset J$. If $\text{in}(I)=\text{in}(J)$ then $I=J$. \end{itemize} \end{remark} An example of a monomial order is the \emph{lexicographic order}, defined in the following way: Fix any total order for the variables, for instance $x_{1}>x_{2}>\cdots >x_{n}$, and define $x_{1}^{a_{1}}x_{2}^{a_{2}}\cdots x_{n}^{a_{n}}>x_{1}^{b_{1}}x_{2}^{b_{2}}\cdots x_{n}^{b_{n}}$ if for the first $j$ with $a_{j}\neq b_{j}$ one has $a_{j}>b_{j}$. (The lexicographic order will be the monomial order that we will use in this article.) Now we discuss a basic result that will be needed in Sections 4 and 5. Let $I$ be an ideal of the polynomial ring $K[x_{1},\dots ,x_{n},y_{1},\dots ,y_{s}]$. Given a Groebner basis for $I$ we want to compute a Groebner basis for $I\cap K[x_{1},\dots ,x_{n}]$. For this, we have to introduce the notion of an \textit{elimination order}: \begin{definition} A monomial order in $K[x_{1}, \dots ,x_{n}, y_{1}, \dots, y_{s}]$ is called an elimination order if the following condition holds: $f \in K[x_{1}, \dots ,x_{n}, y_{1}, \dots, y_{s}]$ with $\text{in}(f) \in K[x_{1}, \dots, x_{n}]$ implies $f \in K[x_{1}, \dots, x_{n}]$. \end{definition} \begin{lemma} \label{ll1} Let $I\subset K[x_{1},\dots ,x_{n},y_{1},\dots ,y_{s}]$ be an ideal, and let $\mathcal{B}=\{f_{1},\dots ,f_{k}\}$ be a Groebner basis for $I$ with respect to an elimination order.
Assume that $f_{1},\dots ,f_{t}$ with $t\leq k$ are all elements of $\mathcal{B}$ such that $f_{1},\dots ,f_{t}\in K[x_{1},\dots ,x_{n}]$. Then $\{f_{1},\dots ,f_{t}\}$ is a Groebner basis for $I\cap K[x_{1},\dots ,x_{n}]$. \end{lemma} \begin{proof} (See \cite{Eisenbud}, Page 380). \end{proof} \begin{remark} \label{rl2} Suppose that $\varphi :K[x_{1},\dots ,x_{n}]\rightarrow K[y_{1},\dots ,y_{s}]/J$ is a ring homomorphism defined as $\varphi (x_{i})=f_{i} $. Consider $F_{i}\in K[y_{1},\dots ,y_{s}]$ such that $ \overline{F_{i}}=f_{i}$ in $K[y_{1},\dots ,y_{s}]/J,$ and define the ideal $ I=JT+(F_{1}-x_{1},\dots ,F_{n}-x_{n})\subset T$, where $T=K[x_{1},\dots ,x_{n},y_{1},\dots ,y_{s}]$. Then $\ker {\varphi }=I\cap K[x_{1},\dots ,x_{n}]$. Therefore the above lemma implies that $\ker {\varphi }$ can be computed algorithmically. \end{remark} \begin{proof} (See \cite{Eisenbud}, Page 358). \end{proof} \section{Reduction to the case of functions of two variables} Let $q(x,y,z)=f(x,y,z)/g(x,y,z)$ be the quotient of two polynomials. We recall (Section \ref{discrimina}) that the \emph{discriminant variety associated to }$q,$ $X(q)\subset \mathbb{C}^{3}$, is the affine variety defined by the $2\times 2$ minors of the matrix we denoted by $A$. As a variety, $X(q)$ may be decomposed into its irreducible components in $ \mathbb{C}^{3},$ let's say $X(q)=X_{1}\cup X_{2}\cup \cdots \cup X_{n}$. We are only interested in those components that contain the origin. These will be called the \emph{relevant components}. Suppose these are $ X_{1},X_{2},\dots ,X_{k},$ $k\leq n.$ We consider three possible cases: \begin{itemize} \item[1.] $\text{dim }X_{i}=0$: In this case, if $X_{i}=V(P_{i}),$ then $ \mathbb{R}[x,y,z]/P_{i}$ is a field and $X_{i}$ is just the origin $\{O\}$. Hence, $X_{i}$ does not contribute to any trajectory in $\mathbb{R}^{3}$ that approaches $O$, and can be discarded. \item[2.] $\text{dim }X_{i}=1$: In this case $X_{i}$ is an irreducible algebraic curve. \item[3.] 
$\text{dim }X_{i}=2$: In this case $X_{i}$ is a hypersurface, i.e., $X_{i}=V(P_{i}),$ where $P_{i}$ is a principal ideal. \end{itemize} We only have to study Cases $2$ and $3$. We deal first with the case of an irreducible space curve in $\mathbb{C}^{3}$. Let us see that the problem of determining the limit of $q(x,y,z)$ along $X$, as well as its computation, can be reduced to the case of a real plane curve, a question already addressed in \cite{CSV}.\newline By Theorem \ref{t2},~there is a plane curve $Y$ which is birationally equivalent to $X$, and therefore there is a local isomorphism $\mu :X_{0}\rightarrow Y_{0}$, where $X_{0}$ and $Y_{0}$ are as in Theorem \ref{t2}. There we observed that the existence of the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow (0,0,0)$ along $X_{0}$ is equivalent to the existence of the limit of $q\circ \mu ^{-1}$ as $(u,v)\rightarrow (0,0)$ along $Y_{0}$. Thus, \begin{equation} \lim_{ \begin{array}{c} (x,y,z)\rightarrow O \\ (x,y,z)\in X_{0} \end{array} }q(x,y,z)~~=~~\lim_{ \begin{array}{c} (u,v)\rightarrow O \\ (u,v)\in Y_{0} \end{array} }q\circ \mu ^{-1}(u,v). \label{cambio} \end{equation} Summarizing: \begin{proposition} \label{pl3} Let $X\subset \mathbb{C}^{3}$ be an irreducible component of $X(q)$ of dimension $1$ containing $O$. Let $\mu :X_{0}\rightarrow Y_{0}$ be the local isomorphism defined in Theorem \ref{t2}. Then, the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along $X$ exists if and only if it exists along the irreducible plane curve $Y$ as $(u,v)\rightarrow (0,0).$ The corresponding limits are related by (\ref{cambio}). \end{proposition} In the sequel we will denote by $X_{i}$ and $Y_{i}$ the two open subsets $X_{0}\subset X$ and $Y_{0}\subset Y$ defined in Theorem \ref{t2}.
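The elimination step that produces the plane curve $Y$ of Theorem \ref{t2} can be reproduced in any computer algebra system with Groebner bases. The following is a minimal sketch, assuming SymPy (in place of the Maple commands used later in the paper) and taking the illustrative curve $V(X+Y,Z+X)$ with $\lambda =1$: one computes a lex Groebner basis of $I(X)T+(U-(Y+\lambda Z))$ and keeps the generators free of $Y$ and $Z$.

```python
# Sketch (SymPy, not the paper's Maple session): the elimination step
# behind Theorem t2, for the illustrative curve V(X + Y, Z + X), lambda = 1.
from sympy import symbols, groebner

X, Y, Z, U = symbols('X Y Z U')

# Generators of I(X)*T + (U - (Y + Z)) in T = C[X, U, Y, Z].
# The lex order with Z > Y > U > X is an elimination order for Y and Z.
G = groebner([X + Y, Z + X, U - (Y + Z)], Z, Y, U, X, order='lex')

# By Lemma ll1, the basis elements free of Y and Z form a Groebner basis
# of ker(phi) = I cap C[X, U], i.e. of the plane curve birationally
# equivalent to X.  Here the elimination ideal is generated by U + 2*X,
# so the plane curve is V(2X + U).
plane_curve = [g for g in G.exprs if not g.has(Y, Z)]
print(plane_curve)
```

The remaining basis elements ($Y+X$ and $Z+X$ in this sketch) express $y$ and $z$ as elements of $\mathbb{C}(x,u)$, which is exactly the data that determines the local isomorphism $\mu$.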
For the purpose of analyzing the limit along the space curve $X$ it is only necessary to consider those cases where the real trace of the birationally isomorphic curve $Y$ turns out to be a \emph{plane curve containing the origin}. By $\mu _{X_{i}}$ we denote the corresponding isomorphism between $X_{i}$ and $Y_{i}$ already constructed. Now we analyze Case $3$. This is a lot more subtle, and requires a careful analysis of the topology of the corresponding two dimensional component. A key ingredient is a celebrated theorem of Whitney \cite{Whitney} about the number of connected components of an affine algebraic variety. In the following discussion we will show how one can reduce the analysis of the 2-dimensional irreducible components to Case 2. Suppose that we have a rational function $q(x,y,z)=f(x,y,z)/g(x,y,z)$ defined on an irreducible hypersurface $X=V(h),$ where $h$ is a real polynomial function of three variables and $q$ has an isolated zero at $0$. Let $\mathcal{S}=\text{Sing}(X)$ be the singular locus of $X$. By Remark \ref{rsp}, $\mathcal{S}$ must be a variety of dimension strictly less than two. Hence, if $\mathcal{S}$ contains the origin, the limit of $q$ as $(x,y,z)\rightarrow O$ along $\mathcal{S}$ can be computed as in Case 2.\newline Now, we restrict our analysis to the nonsingular locus of $X$, which we denote by $\mathcal{N}=X\setminus \mathcal{S}$. Without loss of generality we may assume that $\mathcal{N}$ contains the origin; otherwise all of its components would be irrelevant.\newline Assume $O\in \mathcal{N}$, and define a family of real ellipsoids $E_{r}=\{(x,y,z)\in \mathbb{R}^{3}~:~Ax^{2}+By^{2}+Cz^{2}-r^{2}=0\}$, $A,B,C>0$, $r\neq 0$. By $p_{r}(x,y,z)$ we will denote the quadratic polynomial $Ax^{2}+By^{2}+Cz^{2}-r^{2}$. \begin{definition} Let $X=V(h) \subset \mathbb{C}^{3}$ and $E_{r}=\{(x,y,z) \in \mathbb{R}^{3}~:~Ax^{2}+By^{2}+Cz^{2}-r^{2}=0\}$, $r\neq 0$, be as above.
The critical set $C_{r}(q)$ will be the set of all real points in $E_{r}\cap X$ where $q(x,y,z)$ attains its maxima and minima. The union $\cup _{r>0}C_{r}(q)$ of all critical sets will be denoted by $\mathrm{Crit}_{X}(q)$. \end{definition} Since each $E_{r}\cap X$ is a compact set, and by hypothesis $O$ is an isolated zero of $q$, the set $\mathrm{Crit}_{X}(q)$ is a well-defined subset of $X$. We need the following analogue of Proposition \ref{pl1}. \begin{proposition} The limit $\lim_{(x,y,z)\rightarrow O}q(x,y,z)$ along $X$ exists and equals $L$ if and only if for every $\epsilon >0$ there is $\delta >0$ such that for every $0<r<\delta $ the inequality $|q(x,y,z)-L|<\epsilon $ holds for all $(x,y,z)\in C_{r}(q)$. \end{proposition} \begin{proof} The proof follows the same lines as that of Proposition \ref{pl1}. One just has to notice that each point $p=(a,b,c)$ of the critical set must lie in some $E_{r}$; indeed, $p$ is obviously contained in $E_{r}$ with $r=\sqrt{Aa^{2}+Bb^{2}+Cc^{2}}$. \end{proof} Our objective is to determine $\text{Crit}_{X}(q)$. We can decompose this set as the union of $\text{Crit}_{\mathcal{N}}(q)=\text{Crit}_{X}(q)\cap \mathcal{N}$ and $\text{Crit}_{X}(q)\cap \mathcal{S}$. Since $\text{Crit}_{X}(q)\cap \mathcal{S}\subset \mathcal{S},$ and the limit along $\mathcal{S}$ can be determined as in Case 2, we just have to focus on $\text{Crit}_{\mathcal{N}}(q)$.\newline First, we want to determine the nonsingular part of $\text{Crit}_{\mathcal{N}}(q)$ by using the method of Lagrange Multipliers, as in \cite{CSV}.
For this we define $\mathfrak{X}=V(\mathfrak{J})\subset X$ to be the zero set of the ideal $\mathfrak{J}$ generated by $h$ and the determinant: \begin{equation*} d(x,y,z)=\left\vert \begin{array}{ccc} \partial p_{r}/\partial x & \partial p_{r}/\partial y & \partial p_{r}/\partial z \\ \partial h/\partial x & \partial h/\partial y & \partial h/\partial z \\ \partial q/\partial x & \partial q/\partial y & \partial q/\partial z \end{array} \right\vert . \end{equation*} As the points of $X$ already satisfy $\nabla q(x,y,z)=\lambda \cdot (x,y,z)$ for some scalar $\lambda $ (where $\nabla q$ denotes the gradient of $q$), and since $\nabla p_{r}(x,y,z)=(2Ax,2By,2Cz)$, the affine variety $\mathfrak{X}$ must be defined by the ideal generated by $h$ and by the determinant: \begin{equation*} D(x,y,z)=\left\vert \begin{array}{ccc} Ax & By & Cz \\ x & y & z \\ \partial h/\partial x & \partial h/\partial y & \partial h/\partial z \end{array} \right\vert . \end{equation*} That is, $\mathfrak{X}=V(D,h)$. This variety is precisely the set of regular points of $X$ that are critical points of $q$. \begin{proposition} \label{nueva}(Notation as above) Let us assume $O\in \mathcal{N}$. Then it is possible to choose (in a generic way) suitable positive constants $A,B$ and $C$ such that the height of the ideal $\mathfrak{J}=(D,h)$ in the polynomial ring $\mathbb{C}[x,y,z]$ is greater than one, and consequently $\dim \mathfrak{X}<2$. \end{proposition} \begin{proof} It suffices to show that for a suitable choice of positive constants $A,B,C$ there is at least one point $p\neq O$ in $\mathcal{N}$ such that $D(p)\neq 0$. First, let us see that there is at least one point $p\in \mathcal{N}$ different from the origin such that the gradient of $h$ does not point in the direction of $p$, i.e., such that $\nabla h(p)\neq \lambda p,$ for all $\lambda \in \mathbb{R}$. Indeed, suppose on the contrary that for every $p\in \mathcal{N}$ there existed $\lambda (p)\neq 0$ such that $\nabla h(p)=\lambda (p)p$.
Since each $p$ is a regular point of $X$, one must have $\nabla h(p)\neq 0.$ Hence, after making an appropriate change of coordinates that fixes $O$ (a rotation, and then a homothety) we may assume without loss of generality that $\partial h/\partial z(0,0,1)\neq 0$, and that $p=(0,0,1)$. By the implicit function theorem there would exist $U_{0}\subset \mathbb{R}^{2},$ a neighborhood of $(0,0),$ and a smooth function $u(x,y)$ in $U_{0}$ such that $u(0,0)=1,$ and $h(x,y,u(x,y))=0,$ for all $(x,y)\in U_{0}$. Since $\nabla h(p)=\lambda (p)p$, one must have $\partial h/\partial x(0,0,1)=\partial h/\partial y(0,0,1)=0,$ and consequently $\partial u/\partial x(0,0)=0=\partial u/\partial y(0,0)$. Let $W_{p}$ be the graph $W_{p}=\{(x,y,u(x,y))~:~(x,y)\in U_{0}\}$. For any $\mathfrak{t}\in W_{p}$, the normal vector at $\mathfrak{t}$ is given by \begin{equation*} n(\mathfrak{t})=\frac{(-u_{x},-u_{y},1)}{\sqrt{u_{x}^{2}+u_{y}^{2}+1}}. \end{equation*} Hence, if $\mu (\mathfrak{t})=\lambda (\mathfrak{t})/\Vert \nabla h(\mathfrak{t})\Vert $ one has that $\nabla h(\mathfrak{t})=\mu (\mathfrak{t})\Vert \nabla h(\mathfrak{t})\Vert \mathfrak{t}$, and consequently $n(\mathfrak{t})$ can be written as \begin{equation*} n(\mathfrak{t})=\frac{(x,y,u(x,y))}{\sqrt{x^{2}+y^{2}+u^{2}(x,y)}}. \end{equation*} From this, we deduce \begin{eqnarray*} \frac{1}{\sqrt{u_{x}^{2}+u_{y}^{2}+1}} &=&\frac{u(x,y)}{\sqrt{x^{2}+y^{2}+u^{2}(x,y)}}, \\ \frac{-u_{x}}{\sqrt{u_{x}^{2}+u_{y}^{2}+1}} &=&\frac{x}{\sqrt{x^{2}+y^{2}+u^{2}(x,y)}}, \end{eqnarray*} and \begin{equation*} \frac{-u_{y}}{\sqrt{u_{x}^{2}+u_{y}^{2}+1}}=\frac{y}{\sqrt{x^{2}+y^{2}+u^{2}(x,y)}}. \end{equation*} This implies $u_{x}=-x/u(x,y),$ and $u_{y}=-y/u(x,y)$. Hence, $u(x,y)=\sqrt{1-x^{2}-y^{2}}$, since $u(0,0)=1$. \emph{We conclude that $W_{p}$ would be a neighborhood of $p$ in $\mathcal{N}$ which is part of a sphere centered at the origin.} But on the other hand, a theorem of Whitney asserts that $\mathcal{N}$ can only have finitely many connected components (see \cite{Whitney}). This would imply that $\mathcal{N}$ could not contain the origin, a contradiction with our assumption. Therefore, we may assume there exists a point $p\neq O$ in $\mathcal{N}$ such that $\nabla h(p)\neq \lambda p,$ for all $\lambda \neq 0$. After applying a rotation (if necessary) we may also assume that the coordinates $a,b,c$ of $p=(a,b,c)$ are all nonzero. After those preliminaries it becomes clear how to choose positive constants $A,B$ and $C$ such that the determinant \begin{equation*} \left\vert \begin{array}{ccc} Aa & Bb & Cc \\ a & b & c \\ \partial h/\partial x(a,b,c) & \partial h/\partial y(a,b,c) & \partial h/\partial z(a,b,c) \end{array} \right\vert \end{equation*} does not vanish: The vectors $\nabla h(p)$ and $p=(a,b,c)$ generate a plane $H,$ since they are not parallel. Therefore, it suffices to choose any point $(\alpha ,\beta ,\gamma )$ outside $H$ and such that $A=\alpha /a$, $B=\beta /b$, and $C=\gamma /c$ are positive. \end{proof} As before, for the limit $\lim_{(x,y,z)\rightarrow O}q(x,y,z)$ to exist along $X$ it is necessary that it exists along any real curve that contains $O$. In particular, the limit along each component of $\mathfrak{X}$ must exist, and all these limits must be equal. By Proposition \ref{nueva}, $\text{dim}(\mathfrak{X})<2$, and hence we can reduce this last question to cases 1 and 2.\newline Let $\mathfrak{Z}$ be the affine variety defined by the ideal generated by $h$ and by the $2\times 2$ minors of the matrix \begin{equation*} \left[ \begin{array}{ccc} Ax & By & Cz \\ \partial h/\partial x & \partial h/\partial y & \partial h/\partial z \end{array} \right] . \end{equation*} The set $\mathfrak{Z}\cap E_{r}\cap \mathcal{N}$ defines the locus of those real points where $E_{r}$ and $\mathcal{N}$ do not intersect transversely.
Outside this set, $E_{r}\cap \mathcal{N}$ is a $1$-dimensional manifold (see \cite{GP}, Page 30) that we shall denote by $\Sigma $. Clearly, the vanishing of these $2\times 2$ minors forces the vanishing of the determinant $D(x,y,z)$. Hence, $\mathfrak{Z}\subset \mathfrak{X}$, and consequently $\text{dim}(\mathfrak{Z})<2$, by Proposition \ref{nueva}.\newline Again, the existence of the limit $\lim_{(x,y,z)\rightarrow O}q(x,y,z)$ requires, in particular, its existence along any relevant component of $\mathfrak{Z}$, and consequently the problem reduces again to cases 1 and 2. This takes care of the subset of $\text{Crit}_{\mathcal{N}}(q)$ inside $\mathfrak{Z}$.\newline As for those points in $\text{Crit}_{\mathcal{N}}(q)$ that lie outside $\mathfrak{Z}$, we notice that they are contained in the $1$-dimensional manifold $\Sigma $. They must then belong to $\mathfrak{X}$, since this variety consists precisely of the regular points of $X$ where $q$ attains an extreme value. Thus, the points in $\text{Crit}_{\mathcal{N}}(q)$ that lie outside $\mathfrak{Z}$ must be contained in $\mathfrak{X}$. Once again, we have reduced the problem to cases 1 and 2.\newline The following proposition summarizes this discussion: \begin{proposition} \label{pl4} Let $X$ be a relevant irreducible component of dimension $2$ of the discriminant variety $X(q)$. Consider $\mathcal{S}$, $\mathfrak{X}$, and $\mathfrak{Z}$ as defined above. Then, the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along $X$ exists, and equals $L$, if and only if the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ exists and equals $L$ along each one of the components of the curves $\mathcal{S}$, $\mathfrak{X}$, and $\mathfrak{Z}$. \end{proposition} We are now ready to state our main result. \begin{theorem} \label{tl2} Let $q(x,y,z)=f(x,y,z)/g(x,y,z),$ where $f$ and $g$ are polynomials with rational coefficients, and where $g$ has an isolated zero at the origin. Let $X(q)$ be the discriminant variety associated to $q$.
Denote by $\{X_{1},\dots ,X_{k}\}$ the relevant irreducible components of dimension one of $X(q)$, and by $\{X_{k+1},\dots ,X_{n}\}$ the relevant irreducible components of dimension two of $X(q)$. Then, the limit of $q$ as $(x,y,z)\rightarrow O$ exists, and equals $L,$ if and only if the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along $X_{i}$ exists, and equals $L$, for all $i=1,2,\dots ,n$. Moreover: \begin{enumerate} \item For the components $X_{i}$, $i=1,2,\dots ,k,$ the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow (0,0,0)$ along $X_{i}$ is determined as in Proposition \ref{pl3}. \item For the components $X_{j},$ $j=k+1,\dots ,n$, the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow (0,0,0)$ along $X_{j}$ is determined as in Proposition \ref{pl4}. \end{enumerate} \end{theorem} \section{A high level description of an algorithm for computing the limit \label{HLA}} Let $q(x,y,z)=f(x,y,z)/g(x,y,z),$ where $f$ and $g$ are polynomial functions of three variables with rational coefficients, and $g$ has an isolated zero at the origin. Consider $X(q),$ the discriminant variety associated to $q$. We have to decompose $X(q)$ into irreducible components, and then choose only those irreducible components $\{X_{1},\dots ,X_{n}\}$ that are relevant.\newline The algorithm has to deal with two different cases: \begin{itemize} \item \textbf{D1}: The component $X_{i}$ has dimension $1$. Then, as observed before, $X_{i}$ is birationally equivalent to an irreducible plane curve $Y_{i}$. Let us denote by $\mathbb{C}(x,y,z)$ the fraction field of the coordinate ring $\mathbb{C}[X,Y,Z]/I(X_{i})$ of $X_{i}$. As we already noticed, we may always assume that $x,y,z$ are transcendental elements over $\mathbb{C}$: If, for instance, $x$ were algebraic over $\mathbb{C}$, then there would exist a polynomial $P(X)\in \mathbb{C}[X]$ such that $P(x)=0.$ This is equivalent to saying that $P(X)\in I(X_{i})$.
Suppose we write $P(X)=(X-\alpha _{1})(X-\alpha _{2})\cdots (X-\alpha _{n})$ in $\mathbb{C}[X]$. Since $I(X_{i})$ is a prime ideal, some linear factor $X-\alpha _{j}$ must belong to $I(X_{i}).$ But as $X_{i}$ contains the origin, we must have $\alpha _{j}=0$. Hence, we could write $I(X_{i})=(X,h_{1}(Y,Z),\dots ,h_{m}(Y,Z)),$ where $h_{k}(Y,Z)\in \mathbb{C}[Y,Z],$ for $k=1,2,\dots ,m$. If we denote by $I$ the ideal $(h_{1}(Y,Z),\dots ,h_{m}(Y,Z)),$ and by $X_{i}^{\prime }=V(I)\subset \mathbb{C}^{2}$ the affine variety defined by $I$, then the limit of $q(X,Y,Z)$ as $(X,Y,Z)\rightarrow O$ along $X_{i}$ is the same as the limit of $q(0,Y,Z)$ as $(Y,Z)\rightarrow (0,0)$ along $X_{i}^{\prime }$. But the existence of this limit, as well as its value, can be computed using the algorithmic method developed in \cite{CSV}. By Proposition \ref{t1} and Corollary \ref{c1} we know that if $x$ is transcendental over $\mathbb{C}$ there exists $\lambda \in \mathbb{C}(x)$ such that $\mathbb{C}(x,y,z)=\mathbb{C}(x,u),$ where $u=y+\lambda z$. Also, by Theorem \ref{t2}, if we consider $\varphi :\mathbb{C}[X,U]\rightarrow \mathbb{C}[X,Y,Z]/I(X)$ defined by $\varphi (X)=x$, $\varphi (U)=y+\lambda z$, then $\ker {\varphi }$ defines the irreducible plane curve $Y$ that is birationally equivalent to $X$. As we observed in Section \ref{Groebner}, $\ker {\varphi }=(I(X)T+(U-(Y+\lambda Z)))\cap \mathbb{C}[X,U],$ where $T=\mathbb{C}[X,U,Y,Z]$; hence $\ker {\varphi }$ is computable. On the other hand, the ring homomorphism $\mathbb{C}[X,U]/\ker {\varphi }\rightarrow \mathbb{C}[X,Y,Z]/I(X)$ induces an isomorphism of fields $\varphi :K(Y)\rightarrow K(X)$. As we showed in the proof of Theorem \ref{t2}, since $y,z\in \mathbb{C}(x,u)$, one must have $y=f_{1}(x,u)/g_{1}(x),$ and $z=f_{2}(x,u)/g_{2}(x),$ for some $f_{1},f_{2},g_{1},$ and $g_{2}$ with real coefficients. In that same proof we noticed that the local isomorphism $\mu :X_{0}\rightarrow Y_{0}$ is determined by those polynomials.
By Proposition \ref{pl3}, computing the limit of $q(X,Y,Z)$ as $(X,Y,Z)\rightarrow O$ along $X_{i}$ is equivalent to computing the limit of $q\circ \mu _{X_{i}}^{-1}(X,U),$ as $(X,U)\rightarrow (0,0)$ along $Y_{i}$, and this last limit can be dealt with using the algorithm developed in \cite{CSV}. \item \textbf{D2:} Suppose that $\text{dim}X_{i}=2$. Then $X_{i}$ is an affine variety defined by a principal ideal $I(X_{i})=(h)$. For random positive values $A$, $B$ and $C$ the algorithm computes the height of the ideal $\mathfrak{J}=(D,h),$ where \begin{equation*} D=\left\vert \begin{array}{ccc} Ax & By & Cz \\ x & y & z \\ \partial h/\partial x & \partial h/\partial y & \partial h/\partial z \end{array} \right\vert . \end{equation*} As we saw in the reduction to plane curves, there always exist positive constants $A,B$ and $C$ such that $\text{ht}(\mathfrak{J})\geq 2$, and in that case $\mathfrak{X}=V(\mathfrak{J})$ satisfies $\text{dim}(\mathfrak{X})\leq 1$. Hence, one can compute the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along $\mathfrak{X}$ using the prescription in \textbf{D1}.\newline Since $\mathcal{S}=\text{Sing}(X)$ is the affine variety defined by the ideal $(h,\frac{\partial h}{\partial x},\frac{\partial h}{\partial y},\frac{\partial h}{\partial z})$, it must be a proper subset of $X$. Then $\mathcal{S}$ is also an algebraic curve, and once again we can compute the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along $\mathcal{S}$ using \textbf{D1}.\newline Now, the affine variety $\mathfrak{Z}$ defined by the ideal generated by the $2\times 2$ minors of the matrix \begin{equation*} \left[ \begin{array}{ccc} Ax & By & Cz \\ \frac{\partial h}{\partial x} & \frac{\partial h}{\partial y} & \frac{\partial h}{\partial z} \end{array} \right] \end{equation*} and the polynomial $h$, also has dimension less than $2$. Hence, the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along $\mathfrak{Z}$ is also computed using \textbf{D1}.
\item Finally, if the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ along each relevant irreducible component of $X(q)$ of dimension one exists, and equals $L$, one says that the limit of $q(x,y,z)$ as $(x,y,z)\rightarrow O$ is $L$. Otherwise, one says that this limit does not exist. \end{itemize} \section{Examples} \subsection{Example $1$} Suppose that we want to compute the limit: \begin{equation*} \lim_{(X,Y,Z)\rightarrow (0,0,0)}\frac{YX-ZY+ZX}{X^{2}+Y^{2}+Z^{2}}. \end{equation*} Let $q(X,Y,Z)=(YX-ZY+XZ)/(X^{2}+Y^{2}+Z^{2})$. We illustrate the necessary computations, carried out in the program \textit{Maple}. \begin{enumerate} \item Using the command \texttt{PrimeDecomposition(X(q))} one gets the irreducible components of $X(q)$: \begin{eqnarray*} &&V((Y-X+Z)),~~V((X^{2}+Y^{2}+Z^{2})),~~V((X+Y,Z-2X)),~~ \\ &&V((X+Y,Z+X))~~\text{and}~~V((X+Y,Z^{2}+2X^{2})). \end{eqnarray*} By using the command \texttt{HilbertDimension(Q)} one can see that the irreducible component $V(X+Y,Z+X)$ has dimension $1$. Let us see that for $\lambda =1$, $\mathbb{C}(x,u)=\mathbb{C}(x,y,z),$ where $u=y+z$. Here, $\mathbb{C}(x,y,z)$ denotes the fraction field of the coordinate ring of the variety $V(X+Y,Z+X)$. Consider the ideal $I=(X+Y,Z+X)T+(U-(Y+Z)),$ where $T=\mathbb{C}[X,Y,Z,U]$. The command \texttt{EliminationIdeal(I,\{U,X\})} generates the ideal $J=(2X+U)$. On the other hand, the command \texttt{Basis(I,plex(Z,Y,U))} gives us a Groebner basis for $I$ with respect to the lexicographic monomial order with $Z>Y>U$. In this particular case we obtain the following basis: $\{2X+U,Y+X,Z+X\}$. From this basis we deduce that $y=-x$ and $z=-x$ are elements of $\mathbb{C}(x,u)$. Therefore, $\mathbb{C}(x,u)=\mathbb{C}(x,y,z),$ and consequently the ideal $J=(2X+U)$ defines an irreducible plane curve which is birationally equivalent to $V(X+Y,Z+X)$. Also, $y=-x$ and $z=-x$ determine the isomorphism $\rho :V(2X+U)\rightarrow V(X+Y,Z+X)$.
Therefore, the limit of $q(X,Y,Z),$ as $(X,Y,Z)\rightarrow (0,0,0)$ along $V(X+Y,Z+X),$ is equivalent to the limit of $q\circ \rho (X,U)$ as $(X,U)\rightarrow (0,0)$ along $V(2X+U)$. This latter limit can be computed using the algorithm developed in \cite{CSV}. However, in this case it is easy to see directly that the value of the limit is $-1,$ since $q\circ \rho (X,U)=q(X,-X,-X)=-1$. Therefore, the limit of $q(X,Y,Z)$ as $(X,Y,Z)\rightarrow (0,0,0)$ along $V(X+Y,Z+X)$ is $-1$.\newline Let $h(X,Y,Z)=Y-X+Z.$ One may choose $A=1$, $B=2,$ and $C=1$. Using the command \texttt{HilbertDimension(P)}, with $P=(f,h)$, where \begin{equation*} f=\left\vert \begin{array}{ccc} X & 2Y & Z \\ X & Y & Z \\ \frac{\partial h}{\partial X} & \frac{\partial h}{\partial Y} & \frac{\partial h}{\partial Z} \end{array} \right\vert , \end{equation*} one obtains that the variety defined by the ideal $P=(f,h)$ has dimension $1$. In this case $V(P)=V(-XY-YZ,Y-X+Z)$. Using again the command \texttt{PrimeDecomposition(P)} one obtains the irreducible components of the variety $V(P)$: $V(P)=V(Y,-X+Z)\cup V(2X-Y,Y+2Z),$ where each of these components has dimension $1$. Therefore, one just needs to compute a limit along irreducible algebraic curves (again, using the main algorithm of \cite{CSV}). For the variety $V(2X-Y,Y+2Z)$ we may follow an analogous procedure. It is not difficult to see that the limit of $q(X,Y,Z),$ as $(X,Y,Z)\rightarrow (0,0,0)$ along the variety $V(Y,-X+Z)$ is equal to $1/2$.\newline Hence, we conclude that \begin{equation*} \lim_{(X,Y,Z)\rightarrow (0,0,0)}\frac{YX-ZY+ZX}{X^{2}+Y^{2}+Z^{2}} \end{equation*} does not exist. \end{enumerate} \subsection{Example $2$} \begin{enumerate} \item We want to compute the limit: \begin{equation*} \lim_{(X,Y,Z)\rightarrow (0,0,0)}\frac{X^{2}YZ}{X^{2}+Y^{2}+Z^{2}} \end{equation*} Let $q(X,Y,Z)=X^{2}YZ/(X^{2}+Y^{2}+Z^{2})$.
Using the command \texttt{PrimeDecomposition(X(q))} one obtains the irreducible components of $X(q)$: \item The irreducible components of dimension $1$ are: $X_{1}=V(X,Y)$,~ $X_{2}=V(X,Z)$,~ $X_{3}=V(Y,Z)$,~ $X_{4}=V(X,Z-Y)$,~ $X_{5}=V(X,Z+Y)$,~ $X_{6}=V(X,Y^{2}+Z^{2})$,~ $X_{7}=V(X,3Z^{2}-Y^{2})$,~ $X_{8}=V(X,3Z^{2}+Y^{2})$, ~$X_{9}=V(Z,X^{2}+Y^{2})$,~ $X_{10}=V(Z+Y,-2Z^{2}+X^{2})$,~ $X_{11}=V(-Z+Y,-2Z^{2}+X^{2})$,~ and~ $X_{12}=V(-2Z^{2}+X^{2},3Z^{2}+Y^{2})$. \item The irreducible components of dimension $2$ are: $X_{13}=V(X),$~ and~ $X_{14}=V(X^{2}+Y^{2}+Z^{2})$. \item There is an irreducible component of dimension $0$: $X_{15}=V(X,Y,Z)$. We know that each irreducible component of dimension $1$ is birationally equivalent to an irreducible plane curve. Now, if any of the variables $X,Y$ or $Z$ appears in the ideal that defines the corresponding irreducible component, then one can easily see that such a component in $\mathbb{C}^{3}$ is actually contained in $\mathbb{C}^{2}$. Hence, it is already a plane curve, and one can use the main algorithm of \cite{CSV} to compute these (two variable) limits. Hence, one sees that the limits along the varieties $X_{1}=V(X,Y)$,~ $X_{2}=V(X,Z)$,~ $X_{3}=V(Y,Z)$,~ $X_{4}=V(X,Z-Y)$,~ $X_{5}=V(X,Z+Y)$,~ $X_{6}=V(X,Y^{2}+Z^{2})$,~ $X_{7}=V(X,3Z^{2}-Y^{2})$,~ $X_{8}=V(X,3Z^{2}+Y^{2})$, and ~$X_{9}=V(Z,X^{2}+Y^{2})$ are all equal to zero. Now we discuss the limit along the other irreducible components of dimension $1$. \item For $X_{10}=V(Z+Y,-2Z^{2}+X^{2})$: We noticed in the proof of the Primitive Element Theorem that for almost all $\lambda \in \mathbb{R}$, $\mathbb{R}(x,y,z)=\mathbb{R}(x,u)$, where $u=y+\lambda z$. In this case, one could take $\lambda =2$. Let $I=(Z+Y,-2Z^{2}+X^{2},U-(Y+2Z))\subset \mathbb{R}[X,Y,Z,U]$. With the command \texttt{EliminationIdeal}$(I,\{X,U\})$ one gets the plane curve $V(2U^{2}-X^{2}),$ which is birationally equivalent to $X_{10}$.
On the other hand, by using the command \texttt{Basis}$(I,plex(Z,Y,U))$ one computes the basis $\{2U^{2}-X^{2},Y+U,-U+Z\}$. From this basis we deduce that $y=-u$ and $z=u$ as elements of $\mathbb{R}(x,u)$. Thus, the limit along the component $X_{10}$ is the same as the limit of $q(X,-U,U)$ along the irreducible plane curve $V(2U^{2}-X^{2}).$ This latter limit can be calculated using the main algorithm of \cite{CSV}. In this case we obtain the value zero. \item For $X_{11}=V(-Z+Y,-2Z^{2}+X^{2})$, with $\lambda =1$ and following the same procedure, i.e., defining the ideal $I=(-Z+Y,-2Z^{2}+X^{2},U-(Y+Z))$ and then computing \texttt{EliminationIdeal}$(I,\{X,U\})$ and \texttt{Basis}$(I,plex(Z,Y,U))$, one obtains the irreducible plane curve $V(U^{2}-2X^{2}),$ which is birationally equivalent to $X_{11},$ as well as the basis $\{U^{2}-2X^{2},-U+2Y,-U+2Z\}$. From this basis one deduces that $y=u/2$ and $z=u/2$, and therefore the limit along the component $X_{11}$ is the same as the limit of $q(X,U/2,U/2)$ along the plane curve $V(U^{2}-2X^{2})$, which is again a limit in two variables, and can be computed using the methods of \cite{CSV}. In this case the limit is also zero. \item For $X_{12}=V(-2Z^{2}+X^{2},3Z^{2}+Y^{2})$, with $\lambda =1$, by defining the ideal \begin{equation*} I=(-2Z^{2}+X^{2},3Z^{2}+Y^{2},U-(Y+Z)), \end{equation*} and then computing \texttt{EliminationIdeal}$(I,\{X,U\})$ and \texttt{Basis}$(I,plex(Z,Y,U))$, one obtains the irreducible plane curve $V(4X^{4}+2X^{2}U^{2}+U^{4}),$ which is birationally equivalent to $X_{12},$ as well as the basis \begin{equation*} \{4X^{4}+2X^{2}U^{2}+U^{4},-U^{3}-4UX^{2}+4X^{2}Y,4ZX^{2}+U^{3}\}.
\end{equation*} From this we deduce $y=\frac{u^{3}+4ux^{2}}{4x^{2}}$ and $z=\frac{-u^{3}}{4x^{2}}$, and therefore the limit along the component $X_{12}$ is the same as the limit of $q(X,\frac{U^{3}+4UX^{2}}{4X^{2}},\frac{-U^{3}}{4X^{2}})$ along the plane curve $V(4X^{4}+2X^{2}U^{2}+U^{4})$, which is again a limit in two variables. This limit is also zero. \item Now, the components of dimension $2$ are $V(X^{2}+Y^{2}+Z^{2})$ and $V(X)$. The first component is precisely the set of points where the rational function $q$ is not defined, and consequently it can be discarded. Since the variable $X$ appears in the ideal defining the second variety, the limit is clearly zero.\newline We conclude that \begin{equation*} \lim_{(X,Y,Z)\rightarrow (0,0,0)}\frac{X^{2}YZ}{X^{2}+Y^{2}+Z^{2}}=0, \end{equation*} since it is zero along each of the irreducible components of the discriminant variety. \end{enumerate}
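As a quick independent check of the computation along $X_{10}$, one can parametrize a branch of $V(Z+Y,-2Z^{2}+X^{2})$ by $z=t$, $y=-t$, $x=\sqrt{2}\,t$ and take a one-variable limit. The SymPy sketch below is purely illustrative (SymPy is not used in the paper, and the parametrization covers only one branch of the curve):

```python
import sympy as sp

t = sp.symbols('t', real=True)

# One branch of X_10 = V(Z + Y, X^2 - 2Z^2): z = t, y = -t, x = sqrt(2)*t
x, y, z = sp.sqrt(2)*t, -t, t

# q = X^2 Y Z / (X^2 + Y^2 + Z^2) restricted to this branch
q = (x**2 * y * z) / (x**2 + y**2 + z**2)

# q collapses to -t**2/2, so the limit at the origin along this branch is 0
print(sp.simplify(q))     # -t**2/2
print(sp.limit(q, t, 0))  # 0
```

The same one-line substitution works for the other parametrizable components, which is exactly why the reduction to plane curves in \cite{CSV} is effective here.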
We’ve made it to episode 3, baby! I just got back from personal travel to Washington DC and San Diego. In between airplanes and during flights I managed to plan this podcast so it could be recorded the night I returned to San Jose! The show now has segments: News, a Quick Tip, and the Featured Segment.

In This Episode
- iTunes
- Ekahau 8.1 Update
- Painting RF
- Visualizing RF
- EHS
- Gathering Wireless Requirements

Links and Resources Mentioned
- Ekahau 8.1 Update
- Painting RF
- Spectrum Painter
- Coagula
- Visualizing Wifi Signals
- Architecture Of Radio
- Wifi Makes Kids Sick?
- Cisco Wifi Forecast
- Mobidia research
- Gathering Wireless Requirements

Thanks For Listening!

Wow, episode 3 is done! Thank you for listening. Subscribe to the show on iTunes, and remember to leave a rating and a review. I'd greatly appreciate it, and if you leave a review I will give you a shoutout on a future episode!

Join Clear To Send

Come join the Clear To Send community.
After two and a half months of poking, prodding, generating reams of test PDF bills and reports, writing, rewriting, link-checking, image editing, checking topics for consistency, rewriting again, deciding that my organizational scheme was unworkable and shuffling things around, and rewriting JUST ONE MORE TIME, the first stage of my help file is finished. I’ve been busy at work, but this is the first tangible thing I can point at and say, “See? This is what I’ve been doing.” I’m a little ahead of the schedule I had set for myself; work today is relaxed, and in between helping a salesman with some testing I’m in a place of re-evaluating, setting priorities, and deciding what to do next. It felt goooood to check that one off the to-do list. 🙂 Work progress GOOD! Huzzah!
Tuesday, May 13, 2003 San Francisco ad campaign puts squeeze on panhandlers SAN FRANCISCO -- The advertisements that showed up on taxis and buses in San Francisco this week are meant to be as provocative as they are purposeful. "Today we rode a cable car, visited Alcatraz and supported a drug habit," reads one featuring a tourist couple at Fisherman's Wharf. Another ad depicting a girl in pigtails reads: "Today I adopted a cat, gave some change and shut down my corner grocer." The attention-grabbing ads sponsored by the Hotel Council of San Francisco are the latest effort to put the squeeze on panhandling, a problem in the city since the Barbary Coast days. Hoteliers say San Francisco has such a reputation for being soft on the homeless that they are coming from all over to beg, spoiling the city's image as a world-class travel destination. The $65,000 campaign is meant to discourage residents, workers and visitors from putting money into all the outstretched cups and open hands. Some homeless people resent the way they are portrayed. "I think that's really cold. I know a whole bunch of people who aren't on drugs or alcohol," said Carol Oyama, 60, from her customary perch on Market Street. Mayor Willie Brown, who criticized the hotel council last year when it sponsored billboards questioning the city's efforts to deal with homelessness, believes the campaign is misguided. "Negative publicity is just a poor way to drum up business, and this seems to be falling on one's own sword," said his spokesman, P.J. Johnston. But the hotel executives say tourists already know the city's streets are crowded with beggars. "It's been a major complaint of groups coming in, saying, 'Why don't you do something about this?' " said Robert Begley, the trade group's executive director. For 10 years in a row, San Francisco has been named the best U.S. city to visit by the well-heeled readers of Conde Nast Traveler. 
It consistently ranks among the top two destinations in Travel & Leisure magazine's annual reader poll. But the number of people attending conferences in San Francisco dropped to its lowest level in more than eight years in 2001, the last year for which statistics are available. There is little doubt the city's hospitality industry is hurting. Some wonder whether it's fair to blame panhandlers for hotel vacancies when the Sept. 11 attacks, the nation's weak economy and other factors have led to similar declines in other U.S. cities. Panhandlers have "always been a feature of San Francisco," said Teddy Witherington, who runs one of the city's biggest tourist draws, the annual Lesbian Gay Bisexual and Transgender Pride Parade. "It existed when times were good in 2000 and people were so busy making money they never thought to do anything about it." According to Witherington, the city risks losing something more than panhandlers if people take the hoteliers' message to heart. He said one of the reasons people visit San Francisco in such large numbers is because of its culture of acceptance. ©1996-2009 Hearst Seattle Media, LLC
TITLE: Question about exercise 5.2(b) of Do Carmo's Riemannian Geometry QUESTION [0 upvotes]: Let $X$ be a Killing field on a Riemannian manifold $M$, and let $\nabla$ denote the Levi-Civita connection. Define $f: M \rightarrow \mathbb{R}$ by $f(q) = \langle X, X \rangle_q$, and let $p$ be a critical point of $f$, so that $df_p = 0$. Do Carmo's hint for 5.2(b) implies that for all $Z$ in $\mathcal{X}(M)$ (the set of smooth vector fields on $M$), $\langle \nabla_X \nabla_Z X, Z \rangle = -\langle \nabla_Z X, \nabla_X Z \rangle$. He seems to derive this via an application of the "Killing equation" $\langle \nabla_W X, V \rangle + \langle \nabla_V X, W \rangle = 0$, which holds for all $W, V \in \mathcal{X}(M)$ whenever $X$ is Killing. However, the Killing equation would only seem to apply to $\langle \nabla_X \nabla_Z X, Z \rangle$ if $\nabla_Z X$ was Killing. Is the equation $\langle \nabla_X \nabla_Z X, Z \rangle = -\langle \nabla_Z X, \nabla_X Z \rangle$ correct, and if so, why? For the claim of the exercise it suffices to show the equation holds at $p$. REPLY [1 votes]: Yes since $$\langle \nabla_X \nabla_Z X, Z \rangle = X \langle \nabla_Z X, Z \rangle -\langle \nabla_Z X, \nabla_X Z \rangle$$ Now, apply the Killing equation when $W=V=Z$ to get $\langle \nabla_Z X, Z \rangle=0.$
It was extremely cold during our recent trip to Normandy and we saw many birds of prey lurking by roadsides and near buildings presumably hoping for carrion or rubbish to pick over. We saw the buzzard near the house in Ouville several times, moodily skimming over the hedgerow in contrast to his summer soaring high above the river valley and then when I was at La Rupallerie I saw our barn owl. The picture is borrowed because I wasn't quick enough to catch her this time. There were signs of her feeding in the loft of the main house which had not been there before so I hope we haven't scared her away by being there this time. We also saw various small birds (owl food?) and heard the woodpecker in the forest. There were a lot of deer droppings too. We're going to need a roc to keep those under control. Saturday, 4 February 2006 3 comments: I saw a tawny owl in Wales once in the middle of the night, by the time the kids got there all the animals buggered off. But I saw it. That was me, Xtal, sorry such a hurry to post.. We'll have to make a special animal survey, once the weather gets better!
\begin{document} \title[De Bruijn's identity for fBm]{Entropy flow and De Bruijn's identity for a class of stochastic differential equations driven by fractional Brownian motion} \author{Michael C.H. Choi, Chihoon Lee, and Jian Song} \address{Institute for Data and Decision Analytics, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, P.R. China and Shenzhen Institute of Artificial Intelligence and Robotics for Society} \email{michaelchoi@cuhk.edu.cn} \address{School of Business, Stevens Institute of Technology, Hoboken, NJ, 07030, USA and Institute for Data and Decision Analytics, The Chinese University of Hong Kong, Shenzhen, Guangdong, 518172, P.R. China} \email{clee4@stevens.edu} \address{School of Mathematics, Shandong University, Jinan, Shandong, 250100, P.R. China} \email{txjsong@hotmail.com} \date{\today} \maketitle \begin{abstract} Motivated by the classical De Bruijn's identity for the additive Gaussian noise channel, in this paper we consider a generalized setting where the channel is modelled via stochastic differential equations driven by fractional Brownian motion with Hurst parameter $H\in(0,1)$. We derive a generalized De Bruijn's identity for Shannon entropy and Kullback-Leibler divergence by means of It\^o's formula, and present two applications. In the first application we demonstrate its equivalence with Stein's identity for Gaussian distributions, while in the second application, we show that when the initial distribution is Gaussian, the entropy power is concave in time for $H \in (0,1/2]$ and convex in time for $H \in (1/2,1)$. Compared with the classical case of $H = 1/2$, the time parameter plays an interesting and significant role in the analysis of these quantities.
\smallskip \noindent \textbf{AMS 2010 subject classifications}: 60G22, 60G15 \noindent \textbf{Keywords}: fractional Brownian motion; De Bruijn's identity; Fokker-Planck equation; entropy power \end{abstract} \section{Introduction} Consider an additive Gaussian noise channel modelled by $$X_t = X_0 + \sqrt{t}Z,$$ where $t \geq 0$, $Z$ is a standard normal random variable and the initial value $X_0$ is independent of $Z$. In information theory, the classical De Bruijn's identity, first studied by \cite{Stam59}, relates the time derivative of the entropy of $X_t$ to the Fisher information of $X_t$. While such a Gaussian channel is very popular in the literature (see e.g. \cite{GSV05,PV06}), in recent years researchers have been investigating various generalizations of the noise channel. These include the Fokker-Planck channel \cite{WJL17}, which is modelled via a stochastic differential equation driven by Brownian motion with general drift and diffusion, and the dependent case \cite{KA16}, where $X_0$ and $Z$ are jointly distributed as Archimedean or Gaussian copulas. In reality, however, the channel may exhibit features that are not adequately modelled by the classical setting. For example, in the area of Ethernet traffic \cite{WTLW95}, it has been reported that the traffic exhibits self-similarity and long-range dependence. A similar phenomenon is observed in video conference traffic \cite{BSTW95}. Since these non-standard characteristics, in particular long-range dependence, cannot be effectively captured by the traditional additive Gaussian noise model, we are naturally led to consider a channel driven by fractional Brownian motion as a possible generalization. In particular, long-range dependence is a significant feature possessed by fractional Brownian motion with Hurst parameter greater than $1/2$.
In this paper we derive a generalized De Bruijn's identity for such a channel and discuss its relationship with Stein's identity as well as the entropy power. Interestingly, the time parameter $t$ and the Hurst parameter $H$ of the fractional Brownian motion both play an important role in these results. Our mathematical contributions in this paper are as follows. \begin{itemize} \item We derive a generalized version of the celebrated De Bruijn's identity for a channel driven by fractional Brownian motion. The proof combines techniques such as It\^o's formula and the Fokker-Planck equation. \item We build the connection and prove the equivalence between Stein's identity for Gaussian distributions and the generalized De Bruijn's identity, when the initial noise is normally distributed. \item As another application of the generalized De Bruijn's identity, we demonstrate that the convexity/concavity of the entropy power depends on the Hurst parameter. This phenomenon is not observed in the classical Brownian motion case. \end{itemize} The rest of the paper is organized as follows. In Section \ref{sec:gendbi}, we first introduce the channel driven by fractional Brownian motion, followed by the statements and proofs of the generalized De Bruijn's identities. In Section \ref{sec:applications}, we present two applications. In Section \ref{subsec:Stein}, we prove the equivalence between the generalized De Bruijn's identity and Stein's identity for the Gaussian distribution, while in Section \ref{subsec:entropypower}, we prove that the entropy power is convex or concave depending on the value of $H$. Finally, in Section \ref{sec:conclusion}, we conclude the paper and discuss future research directions. Before we proceed to the main results of the paper, we first review a few important concepts that will be frequently used in subsequent sections.
A \textbf{fractional Brownian motion (fBm)} $B^{H} = (B^{H}_t)_{t \geq 0}$ with Hurst parameter $H \in (0,1)$ is a centered Gaussian process with stationary increments and covariance function given by $$\E B^H_s B^H_t = \dfrac{1}{2} (t^{2H} + s^{2H} - |t-s|^{2H}).$$ For further background on fBm, we refer readers to \cite{MV68}. The \textbf{Shannon entropy} of a random variable $X$ with probability density function $f_X$, denoted by $h(X)$, is given by \begin{align}\label{eq:entropy} h(X) := -\E(\ln f_X(X)) = - \int_{\mathbb R} f_X(x) \ln f_X(x)\, dx\,. \end{align} Let $b : \mathbb{R} \to (0,\infty)$ be a positive function. The \textbf{generalized Fisher information} with respect to $b$, first introduced by \cite{WJL17}, is given by \begin{align}\label{eq:gfi} J_b(X) := \E \left[b(X) \left(\dfrac{\partial}{\partial x} \ln f_X(X) \right)^2\right]. \end{align} Note that when $b = 1$, $J_1(X)$ is simply the classical Fisher information. When $X$ follows a parametric distribution, say with location parameter $\theta$, the Cram\'{e}r-Rao lower bound states that the variance of any unbiased estimator of $\theta$ is bounded below by the reciprocal of $J_1(X)$. The \textbf{Kullback–Leibler (KL) divergence}, or relative entropy, between $X$ and a random variable $Y$ with density $f_Y$, written $K(X||Y)$, is given by \begin{align}\label{eq:kldiv} K(X||Y) := \E \left[ \ln \dfrac{f_X(X)}{f_Y(X)} \right] = \int_{\mathbb R} f_X(x) \ln \dfrac{f_X(x)}{f_Y(x)}\, dx. \end{align} For two random variables $X$ and $Y$, the \textbf{relative Fisher information} with respect to $b$ is \begin{align}\label{eq:relFisher} J_b(X||Y) := \E \left[b(X) \left(\dfrac{\partial}{\partial x} \ln \dfrac{f_X(X)}{f_Y(X)} \right)^2\right]. \end{align} \section{Generalized De Bruijn's identity}\label{sec:gendbi} In this section, we derive the generalized De Bruijn's identity for a channel modelled via a stochastic differential equation driven by fractional Brownian motion (fBm).
More precisely, consider a channel governed by \begin{equation}\label{sde} dX_t =\sigma(X_t)\circ dB_t^H, \end{equation} with initial value $X_0=x_0,$ where $(B_t^H)_{t \geq 0}$ is a fBm with Hurst parameter $H \in (0,1)$. The stochastic integral $\sigma(X_t)\circ dB_t^H$ is in the ``pathwise'' sense, i.e., if $H\in (1/2,1)$, the integral is understood as Young's integration (\cite{Y36}); if $H\in(1/4,1)$, it is understood in the rough paths sense of Lyons (\cite{CQ02}); if $H\in(1/6,1)$, it is understood in the sense of the symmetric integral (\cite{RV93}); and for general $H\in(0,1)$, it is understood as the $m$-order Newton-Cotes functional (\cite{GNRV05}). By Theorem 2.10 in \cite{N08}, assuming that the diffusion coefficient $\sigma(x)$ is sufficiently regular (say, infinitely differentiable with bounded derivatives of all orders), the one-dimensional SDE \eqref{sde} with $H\in (0,1)$ has a unique solution $X_t= \varphi(B_t^H)$, where $\varphi'(x)=\sigma(\varphi(x))$ with $\varphi(0)=x_0$, which can be obtained by using the Doss-Sussmann transformation as in \cite{Suss78}. Note that the solution $X_t$ is a function of $B_t^H$, rather than a functional of $(B_s^H)_{0\le s\le t}$. This particular form allows functions of $X_t$ to obey a simple It\^o formula (Lemma \ref{lem:Ito}) without involving Malliavin derivatives. As a consequence, the Fokker-Planck equation (Lemma \ref{lem:FPe}) can be derived, and furthermore, a Feynman-Kac type formula can also be obtained for a class of partial differential equations (see Corollary 26 and Example 28 in \cite{BD07}). Note that by Remark 27 in \cite{BD07}, this type of formula holds for SDEs driven by fractional Brownian motion only in the commutative case, which in dimension one is exactly the form \eqref{sde}.
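As a concrete illustration of the Doss-Sussmann representation above (the choice $\sigma(x)=x$ is our own example, not one from the paper): for $\sigma(x)=x$, the flow equation $\varphi'(x)=\sigma(\varphi(x))$ with $\varphi(0)=x_0$ is solved by $\varphi(b)=x_0e^{b}$, so the solution of \eqref{sde} is $X_t=x_0\exp(B_t^H)$, a function of $B_t^H$ alone. A short SymPy sketch verifying the flow equation:

```python
import sympy as sp

b = sp.symbols('b', real=True)
x0 = sp.symbols('x0', positive=True)

# Example diffusion coefficient sigma(x) = x (our assumption, for illustration only)
sigma = lambda u: u

# Candidate Doss-Sussmann flow: X_t = phi(B_t^H) with phi(b) = x0 * exp(b)
phi = x0 * sp.exp(b)

# Check phi'(b) = sigma(phi(b)) and the initial condition phi(0) = x0
assert sp.simplify(sp.diff(phi, b) - sigma(phi)) == 0
assert phi.subs(b, 0) == x0
```

For a general smooth $\sigma$, the same check applies with $\varphi$ obtained by solving the ODE $\varphi'=\sigma(\varphi)$.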
\begin{theorem}[Generalized De Bruijn's identity for Shannon entropy of fBm]\label{thm:dbi} Consider the channel $X = (X_t)_{t \geq 0}$ modelled by equation \eqref{sde} with Hurst parameter $H \in (0,1)$ and initial value $X_0 = x_0$. Assume that the diffusion coefficient $\sigma(x)\in C^\infty(\R)$ has bounded derivatives of all orders. The entropy flow of $X$ is given by \begin{align}\label{formula} \dfrac{d}{dt} h(X_t) = Ht^{2H-1} \Bigg\{& J_{\sigma^2}(X_t) -\E\left[\frac{\partial^2}{\partial x^2}\sigma^2(X_t)\right] +\E\left[\sigma''(X_t)\sigma(X_t)+(\sigma'(X_t))^2\right]\Bigg\}, \end{align} where we recall that the generalized Fisher information $J_{\sigma^2}(X_t)$ is defined in \eqref{eq:gfi}. \end{theorem} \begin{rk} Note that when $H=1/2$, the fBm $W=B^H$ is a Brownian motion, and the Stratonovich equation \eqref{sde} becomes \[dX_t = \dfrac{\sigma(X_t)\sigma'(X_t)}{2} dt+\sigma(X_t) \diamond dW_t\] where the stochastic integral is in the It\^o sense. Then formula \eqref{formula} coincides with the classical De Bruijn's identity \cite{WJL17,KA16}. That is, when $H = 1/2$, \eqref{formula} becomes \begin{align*} \dfrac{d}{dt} h(X_t) = \dfrac{1}{2} \Bigg\{& J_{\sigma^2}(X_t) -\E\left[\frac{\partial^2}{\partial x^2}\sigma^2(X_t)\right] +\E\left[\sigma''(X_t)\sigma(X_t)+(\sigma'(X_t))^2\right]\Bigg\}, \end{align*} which is the result in \cite[Theorem $5$]{WJL17} with drift $\dfrac{\sigma(x)\sigma'(x)}{2}$ and diffusion coefficient $\sigma(x)$. In particular, when $\sigma = 1$, we have $$\dfrac{d}{dt} h(X_t) = \dfrac{1}{2} J_1(X_t).$$ \end{rk} \begin{theorem}[Generalized De Bruijn's identity for KL divergence of fBm]\label{thm:relativedbi} Consider the channel $X = (X_t)_{t \geq 0}$ (resp.~$Y = (Y_t)_{t \geq 0}$) modelled by equation \eqref{sde} with Hurst parameter $H \in (0,1)$ and initial value $X_0 = x_0$ (resp.~$Y_0 = y_0$). Assume that the diffusion coefficient $\sigma(x)\in C^\infty(\R)$ has bounded derivatives of all orders. 
The time derivative of the KL divergence between $X_t$ and $Y_t$ is given by \begin{align}\label{eq:KLdiv} \dfrac{d}{dt} K(X_t || Y_t) = - Ht^{2H-1} J_{\sigma^2} (X_t || Y_t), \end{align} where we recall that the relative Fisher information $J_{\sigma^2} (X_t || Y_t)$ is defined in \eqref{eq:relFisher}. In particular, $K(X_t || Y_t)$ is non-increasing in $t$. \end{theorem} \begin{rk} Note that again when $H = 1/2$, \eqref{eq:KLdiv} becomes $$\dfrac{d}{dt} K(X_t || Y_t) = - \dfrac{1}{2} J_{\sigma^2} (X_t || Y_t),$$ which is \cite[Theorem $6$]{WJL17}. \end{rk} In the first two main results above, we assume that the initial value is $X_0 = x_0$. In the following result, we assume that the channel is of the form \begin{align}\label{eq:channel} X_t = X_0 + B^H_t, \end{align} where the initial value $X_0$ is independent of the fBm. We shall derive the generalized De Bruijn's identity via the classical version: \begin{theorem}[Deriving the generalized De Bruijn's identity via the classical De Bruijn's identity]\label{thm:derivegeneralized} Consider the channel $X = (X_t)_{t \geq 0}$ modelled by equation \eqref{eq:channel} with Hurst parameter $H \in (0,1)$ and initial value $X_0$ independent of the fBm and having a finite second moment. The entropy flow of $X$ is given by \begin{align}\label{formula2} \dfrac{d}{dt} h(X_t) = Ht^{2H-1} J_{1}(X_t). \end{align} In particular, when $X_0$ is a Gaussian distribution with mean $0$ and variance $\sigma_0^2$, we then have $$\dfrac{d}{dt} h(X_t) = \dfrac{Ht^{2H-1} }{\sigma_0^2 + t^{2H}}.$$ \end{theorem} \subsection{Proof of Theorem \ref{thm:dbi}, Theorem \ref{thm:relativedbi} and Theorem \ref{thm:derivegeneralized}} We first present two lemmas that will be used in our proofs of Theorem \ref{thm:dbi} and Theorem \ref{thm:relativedbi}.
\begin{lemma}[It\^o's formula]\label{lem:Ito} Consider the channel $X = (X_t)_{t \geq 0}$ modelled by equation \eqref{sde} with Hurst parameter $H \in (0,1)$, initial value $X_0 = x_0$ and twice differentiable diffusion coefficient $\sigma(x)$. Suppose that $f(t,x)$ is a twice differentiable function of two variables. Assume that the functions $\sigma(x)$ and $f(t,x)$ and their (partial) derivatives have at most polynomial growth. Then \begin{align*} f(t,X_t) =& f(0, x_0) +\int_0^t f_s(s,X_s)ds+\int_0^t f_x(s,X_s)\sigma(X_s)\diamond dB_s^H\\ &+H\int_0^t s^{2H-1} \Big(f_{xx}(s,X_s)\sigma(X_s)+ f_x(s, X_s)\sigma'(X_s)\Big)\sigma(X_s) ds. \end{align*} \end{lemma} \begin{proof} Note that $X_t=\varphi(B_t^H)$ with $\varphi'(x)=\sigma(\varphi(x))$ and $\varphi(0)=x_0.$ By It\^o's formula in Section 8 of \cite{AMN01} for $H\in (1/4, 1)$ and in Corollary 4.8 of \cite{CN05} for $H\in(0, 1/2)$, noting that $\varphi''(x)=\sigma'(\varphi(x))\varphi'(x)$, we have \begin{align*} &f(t,X_t)=f(t, \varphi(B_t^H))\\ =&f(0, x_0)+\int_0^t f_s(s,\varphi(B_s^H)) ds+\int_0^t f_x(s,\varphi(B_s^H)) \varphi'(B_s^H) \diamond dB_s^H\\ &+H\int_0^t s^{2H-1} \Big(f_{xx}(s,\varphi(B_s^H))(\varphi'(B_s^H))^2+f_x(s,\varphi(B_s^H))\varphi''(B_s^H) \Big)ds\\ =&f(0, x_0)+\int_0^t f_s(s,X_s) ds+\int_0^t f_x(s,X_s) \sigma(X_s) \diamond dB_s^H\\ &+H\int_0^t s^{2H-1} \Big(f_{xx}(s,X_s)\sigma(X_s)+f_x(s,X_s)\sigma'(X_s)\Big)\sigma(X_s)ds. \end{align*} \end{proof} \begin{rk}[The condition imposed on $\sigma(x)$ in Theorems \ref{thm:dbi} and \ref{thm:relativedbi}] The proofs of Theorems \ref{thm:dbi} and \ref{thm:relativedbi} rely on the It\^o formula given in Lemma \ref{lem:Ito} and on the existence and uniqueness of the solution to \eqref{sde}. On one hand, in order to apply Lemma \ref{lem:Ito}, it is natural to assume that $\sigma(x)$ is twice differentiable with derivatives growing at most polynomially fast.
On the other hand, additional regularity conditions on $\sigma(x)$ were imposed to establish the existence and uniqueness of the solution to \eqref{sde} in Theorem 2.10 of \cite{N08} for small Hurst parameter $H$. More precisely, for $H\in(\frac1{4m+2},1)$ with $m\in \mathbb N$, it is assumed that $\sigma(x)$ belongs to $C^{4m+1}(\mathbb R)$ and is Lipschitz. In Theorems \ref{thm:dbi} and \ref{thm:relativedbi} for all $H\in (0,1)$, we simply assume that $\sigma(x)$ is smooth and all derivatives are bounded, which clearly satisfies the conditions in Lemma \ref{lem:Ito} above and Theorem 2.10 of \cite{N08}. \end{rk} \begin{lemma}[Fokker-Planck equation]\label{lem:FPe} Consider the channel $X = (X_t)_{t \geq 0}$ modelled by equation \eqref{sde} with Hurst parameter $H \in (0,1)$ and initial value $X_0 = x_0$. Assume that the diffusion coefficient $\sigma(x)\in C^\infty(\R)$ has bounded derivatives of all orders. Let $P_t(x)$ be the probability density function of $X_t$; then \begin{align*} \dfrac{\partial }{\partial t} P_t(x) = H t^{2H-1} \left(- \dfrac{\partial}{\partial x} \sigma'(x) \sigma(x) P_t(x) + \dfrac{\partial^2}{\partial x^2} \sigma^2(x) P_t(x)\right). \end{align*} \end{lemma} \begin{proof} Let $g(x)$ be a twice differentiable function, and substitute $f(t,x) = g(x)$ in Lemma \ref{lem:Ito}. We arrive at \begin{align}\label{eq:FPe} \dfrac{d}{dt} \E[g(X_t)] &= Ht^{2H-1} \left(\E\left[g_{xx}(X_t) \sigma^2(X_t)\right] + \E \left[g_x(X_t) \sigma'(X_t) \sigma(X_t)\right]\right).
\end{align} Note that the left-hand side of \eqref{eq:FPe} is $$\dfrac{d}{dt} \E[g(X_t)] = \int_{\mathbb R} g(x) \dfrac{\partial}{\partial t} P_t(x) \,dx.$$ Using integration by parts, the right-hand side of \eqref{eq:FPe} can be written as $$Ht^{2H-1} \left( \int_{\mathbb R} g(x) \left(\dfrac{\partial^2}{\partial x^2} \sigma^2(x) P_t(x)\right)\, dx - \int_{\mathbb R} g(x) \left(\dfrac{\partial}{\partial x} \sigma'(x) \sigma(x) P_t(x)\right)\,dx \right).$$ The desired result follows since $g$ is arbitrary. \end{proof} \subsubsection{Proof of Theorem \ref{thm:dbi}} Denote by $P_t(x)$ the probability density of $X_t$, and let $f(s,x)=-\ln P_s(x)$ in Lemma \ref{lem:Ito}. Then we have \begin{align*} &f_x(s,x)=-(P_s(x))^{-1}\frac{\partial}{\partial x}P_s(x), \end{align*} and \begin{align*} &f_{xx}(s,x)=(P_s(x))^{-2}\left(\frac{\partial}{\partial x}P_s(x)\right)^2-(P_s(x))^{-1}\frac{\partial^2}{\partial x^2}P_s(x). \end{align*} Thus, \begin{align*} &\E[f_x(s,X_s) \sigma'(X_s)\sigma(X_s)]=-\int_{\R}\frac{\partial}{\partial x}P_s(x) \sigma'(x)\sigma(x)dx\\ &=\int_{\R} P_s(x) (\sigma'(x)\sigma(x))'dx=\E\left[\sigma''(X_s)\sigma(X_s)+(\sigma'(X_s))^2\right], \end{align*} and \begin{align*} &\E[f_{xx}(s,X_s) \sigma^2(X_s)]=\int_\R \left[ (P_s(x))^{-1}\left(\frac{\partial}{\partial x}P_s(x)\right)^2 -\frac{\partial^2}{\partial x^2}P_s(x) \right]\sigma^2(x)dx\\ &=\E\left[\sigma^2(X_s) \left(\frac\partial{\partial x}\ln P_s(X_s)\right)^2 \right]-\E\left[\frac{\partial^2}{\partial x^2}\sigma^2(X_s)\right]. \end{align*} Therefore, we have the following formula.
\begin{align} -\frac d{dt} \E[\ln P_t(X_t)]= Ht^{2H-1} \Bigg\{& \E\left[\sigma^2(X_t) \left(\frac\partial{\partial x}\ln P_t(X_t)\right)^2 \right]-\E\left[\frac{\partial^2}{\partial x^2}\sigma^2(X_t)\right]\\ &+\E\left[\sigma''(X_t)\sigma(X_t)+(\sigma'(X_t))^2\right] \Bigg\}.\notag \end{align} \subsubsection{Proof of Theorem \ref{thm:relativedbi}} In this proof, we write $P_t(x)$ for the probability density of $X_t$ and $Q_t(y)$ for the probability density of $Y_t$, and let $f(s,x)=\ln \dfrac{P_s(x)}{Q_s(x)}$ in Lemma \ref{lem:Ito}. Then we have \begin{align*} f_x(s,x) &= \left(\dfrac{P_s(x)}{Q_s(x)}\right)^{-1} \dfrac{\partial}{\partial x} \dfrac{P_s(x)}{Q_s(x)}, \\ f_{xx}(s,x) &= \left(\dfrac{P_s(x)}{Q_s(x)}\right)^{-1} \left(\dfrac{\partial^2}{\partial x^2} \dfrac{P_s(x)}{Q_s(x)}\right) - \left(\dfrac{P_s(x)}{Q_s(x)}\right)^{-2} \left(\dfrac{\partial}{\partial x} \dfrac{P_s(x)}{Q_s(x)}\right)^2 \\ &= \left(\dfrac{P_s(x)}{Q_s(x)}\right)^{-1} \left(\dfrac{\partial^2}{\partial x^2} \dfrac{P_s(x)}{Q_s(x)} \right) - \left(\dfrac{\partial}{\partial x} \ln \dfrac{P_s(x)}{Q_s(x)}\right)^2, \\ f_s(s,x) &= \dfrac{1}{P_s(x)} \dfrac{\partial}{\partial s} P_s(x) - \dfrac{1}{Q_s(x)} \dfrac{\partial}{\partial s} Q_s(x). \end{align*} As a result, using integration by parts we arrive at \begin{align*} \E [f_s(s,X_s)] &= - \int \left(\dfrac{\partial}{\partial s} Q_s(x) \right) \dfrac{P_s(x)}{Q_s(x)}\,dx, \\ \E [f_x(s,X_s)\sigma'(X_s)\sigma(X_s)] &= - \int \left(\dfrac{\partial}{\partial x} \sigma'(x) \sigma(x) Q_s(x) \right) \dfrac{P_s(x)}{Q_s(x)}\,dx, \\ \E [f_{xx}(s,X_s)\sigma^2(X_s)] &= \int \left(\dfrac{\partial^2}{\partial x^2} \sigma^2(x) Q_s(x) \right) \dfrac{P_s(x)}{Q_s(x)}\,dx - J_{\sigma^2}(X_s || Y_s).
\end{align*} Now, by Lemma \ref{lem:Ito} we note that \begin{align*} \dfrac{d}{dt} K(X_t || Y_t) &= \dfrac{d}{dt} \E \left[\ln \dfrac{P_t(X_t)}{Q_t(X_t)} \right] \\ &= \E [f_t(t,X_t)] + Ht^{2H-1} \left(\E [f_x(t,X_t)\sigma'(X_t)\sigma(X_t)] + \E [f_{xx}(t,X_t)\sigma^2(X_t)]\right) \\ &= - Ht^{2H-1}J_{\sigma^2} (X_t || Y_t), \end{align*} where the last equality follows from Lemma \ref{lem:FPe}. \subsubsection{Proof of Theorem \ref{thm:derivegeneralized}} Let $s \geq 0$ and set $t = s^{2H}$, so that $s^{H} = \sqrt{t}$, and let $Z$ denote a standard normal random variable. Using the chain rule we have \begin{align*} \dfrac{d}{ds} h(X_s) &= \dfrac{d}{ds} h(X_0 + s^H Z) \\ &= \dfrac{d}{ds} h(X_0 + \sqrt{t} Z) \\ &= \dfrac{d}{dt} h(X_0 + \sqrt{t} Z) \dfrac{dt}{ds} \\ &= \dfrac{1}{2} J_1(X_0 + \sqrt{t} Z) 2H s^{2H-1} \\ &= Hs^{2H-1} J_{1}(X_s), \end{align*} where the fourth equality follows from the classical De Bruijn's identity (see e.g. \cite{CJ06}). In particular, when $X_0$ is Gaussian, $X_s$ is also Gaussian with mean $0$ and variance $\sigma_0^2 + s^{2H}$. Since for a normal distribution the Fisher information is the reciprocal of the variance, we have \begin{align*} \dfrac{d}{ds} h(X_s) = Hs^{2H-1} J_{1}(X_s) = \dfrac{Hs^{2H-1}}{\mathrm{Var}(X_s)} = \dfrac{Hs^{2H-1}}{\sigma_0^2 + s^{2H}}. \end{align*} \section{Applications}\label{sec:applications} In this section, we present two applications of the generalized De Bruijn's identity. In the first application, in Section \ref{subsec:Stein}, we demonstrate its equivalence with Stein's identity for the Gaussian distribution, while in Section \ref{subsec:entropypower}, we prove the convexity or concavity of the entropy power, which depends on the Hurst parameter $H$. Throughout this section, we assume that the channel is of the form \begin{align*} X_t = X_0 + B^H_t, \end{align*} where the initial value $X_0$ is independent of the fBm and the Hurst parameter $H \in (0,1)$.
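The Gaussian formula in Theorem \ref{thm:derivegeneralized} can also be verified symbolically: for Gaussian $X_0$ with variance $\sigma_0^2$, the entropy is $h(X_t)=\tfrac12\ln\big(2\pi e(\sigma_0^2+t^{2H})\big)$, and differentiating in $t$ recovers $Ht^{2H-1}/(\sigma_0^2+t^{2H})$. A minimal SymPy sketch (SymPy is an illustration aid, not part of the paper):

```python
import sympy as sp

t, H, s0 = sp.symbols('t H sigma0', positive=True)

var = s0**2 + t**(2*H)                            # Var(X_t) = sigma0^2 + t^(2H)
h = sp.Rational(1, 2) * sp.log(2*sp.pi*sp.E*var)  # entropy of a Gaussian with that variance

entropy_flow = sp.diff(h, t)
claimed = H * t**(2*H - 1) / var                  # right-hand side of the theorem
assert sp.simplify(entropy_flow - claimed) == 0
```

Since $J_1(X_t)=1/\mathrm{Var}(X_t)$ for a Gaussian, the same computation confirms \eqref{formula2} in this special case.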
\subsection{Equivalence of the generalized De Bruijn's identity and Stein's identity for the normal distribution}\label{subsec:Stein} It is known that the classical De Bruijn's identity is equivalent to Stein's identity for the normal distribution as well as to the heat equation identity, provided that the initial noise $X_0$ is Gaussian; see e.g. \cite{BDHS06,PSQ12}. These identities are equivalent in the sense that any one of them can be used to derive the others. It is therefore natural to expect that the same equivalence also holds for the proposed generalized De Bruijn's identity. To this end, let us recall the classical Stein's identity for the normal distribution. Writing $Y$ for a normal random variable with mean $\mu$ and variance $\sigma^2$, \textbf{Stein's identity} is given by \begin{align}\label{eq:Stein} \E[r(Y)(Y - \mu)] = \sigma^2 \E \left[\dfrac{d}{dy}r(Y)\right], \end{align} where $r$ is a differentiable function such that the above expectations exist. In the following result, we prove that the generalized De Bruijn's identity presented in Theorem \ref{thm:derivegeneralized} is equivalent to Stein's identity. \begin{theorem}[Equivalence of the generalized De Bruijn's identity \eqref{formula2} and Stein's identity]\label{thm:eqdbistein} Consider the channel $X = (X_t)_{t \geq 0}$ modelled by equation \eqref{eq:channel} with Hurst parameter $H \in (0,1)$ and initial Gaussian $X_0$ independent of the fBm. Then the generalized De Bruijn's identity \eqref{formula2} is equivalent to Stein's identity \eqref{eq:Stein}. \end{theorem} \begin{rk}[Equivalence of the generalized De Bruijn's identity \eqref{formula} and Stein's identity] While Theorem \ref{thm:eqdbistein} is established for channel \eqref{eq:channel}, it is natural to ask whether the same equivalence holds between \eqref{formula} and Stein's identity for channel \eqref{sde}.
One direction is straightforward: from \eqref{formula} we recover the classical De Bruijn's identity by taking the diffusion coefficient to be $\sigma(x) = 1$, and so we can derive Stein's identity. However, for the opposite direction, we did not manage to prove \eqref{formula} via the classical De Bruijn's identity in the presence of a general diffusion coefficient $\sigma(x)$. The trick employed in proving Theorem \ref{thm:derivegeneralized}, where the diffusion coefficient is simply $1$, does not seem to carry over to this setting. If this can be proved by other means, then the equivalence could be established. \end{rk} \begin{proof} If we have Stein's identity, then we can derive the classical De Bruijn's identity \cite{PSQ12}, and so we have the generalized De Bruijn's identity by Theorem \ref{thm:derivegeneralized}. For the other direction, if we have the generalized De Bruijn's identity, then we can derive the classical De Bruijn's identity by taking $H = 1/2$, and from it we can derive Stein's identity by \cite{PSQ12}. \end{proof} \subsection{Convexity/Concavity of the entropy power}\label{subsec:entropypower} Recall that the \textbf{entropy power} of a random variable $X$ is defined to be \begin{align}\label{def:entropypower} N(X) := \dfrac{1}{2\pi e} e^{2h(X)}. \end{align} In the classical setting, when the channel $X_t$ is of the form \eqref{eq:channel} with $X_0$ an arbitrary initial noise, \cite{Costa85,Dembo89} prove that the entropy power $N(X_t)$ is concave in time $t$. Recently, in \cite{KA16} the authors extended the concavity of the entropy power to the dependent case, where the dependency structure between the initial value $X_0$ and the channel $X_t$ is specified by Archimedean and Gaussian copulas.
In our case, interestingly, the convexity/concavity of the entropy power depends on the Hurst parameter $H$: \begin{theorem}[Convexity/Concavity of the entropy power]\label{thm:convexconcave} Consider the channel $X = (X_t)_{t \geq 0}$ modelled by equation \eqref{eq:channel} with Hurst parameter $H \in (0,1)$ and initial value $X_0$ independent of the fBm with a finite second moment. We have \begin{align*} \dfrac{d^2}{d t^2} N(X_t) &= 2 N(X_t) \left(2H^2 t^{4H-2} J_1(X_t)^2 + H(2H-1)t^{2H-2}J_1(X_t) + Ht^{2H-1} \dfrac{d}{d t} J_1(X_t)\right) \\ &= 2 N(X_t) g(t,H,X_t), \end{align*} where $g(t,H,X_t) := 2H^2 t^{4H-2} J_1(X_t)^2 + H(2H-1)t^{2H-2}J_1(X_t) + Ht^{2H-1} \dfrac{d}{d t} J_1(X_t)$. Consequently, \begin{align*} N(X_t) &=\begin{cases} \text{convex in } t &\mbox{if } g(t,H,X_t) > 0, \\ \text{concave in } t & \mbox{if } g(t,H,X_t) \leq 0. \end{cases} \end{align*} In particular, when $X_0$ is a Gaussian distribution with mean $0$ and variance $\sigma_0^2$, we then have $g(t,H,X_t) = H(2H-1)t^{2H-2}J_1(X_t)$ and \begin{align*} N(X_t) &=\begin{cases} \text{convex in } t &\mbox{if } H \in (1/2,1), \\ \text{concave in } t & \mbox{if } H \in (0,1/2]. \end{cases} \end{align*} \end{theorem} \begin{rk} In the special case when $H = 1/2$ and $X_0$ is Gaussian, we retrieve the classical result that $N(X_t)$ is linear, and hence both concave and convex, in $t$. \end{rk} \begin{rk}[The role of the time parameter $t$] In Theorem \ref{thm:convexconcave}, we see that factors such as $t^{2H-1}$ and $t^{4H-2}$ appear in the function $g(t,H,X_t)$. While these terms equal $1$ in the classical $H = 1/2$ case, as we shall see in the proof they play an important and interesting role in determining the second-order behaviour of the entropy power in the general fBm case. Note that these terms all come from the factor $t^{2H-1}$ in front of the Fisher information $J_1$ in \eqref{formula2}.
\end{rk} \begin{rk}[On establishing the convexity/concavity of the entropy power in model \eqref{sde}] While Theorem \ref{thm:convexconcave} is stated for model \eqref{eq:channel}, we can in fact state a similar result for model \eqref{sde} using Theorem \ref{thm:dbi}. However, there will not be a clear-cut distinction between the two cases $H \leq 1/2$ and $H > 1/2$ as in Theorem \ref{thm:convexconcave}; it will also depend on the derivatives of the diffusion coefficient $\sigma(x)$, which may not be tractable in general. \end{rk} \begin{proof} Using the definition of the entropy power \eqref{def:entropypower}, we have \begin{align*} \dfrac{d^2}{d t^2} N(X_t) &= \dfrac{d}{dt}\left(2 N(X_t) \dfrac{d}{dt} h(X_t)\right) \\ &= 2N(X_t) \left( 2 \left(\dfrac{d}{dt} h(X_t)\right)^2 + \dfrac{d^2}{dt^2} h(X_t) \right) \\ &= 2 N(X_t) \left(2H^2 t^{4H-2} J_1(X_t)^2 + H(2H-1)t^{2H-2}J_1(X_t) + Ht^{2H-1} \dfrac{d}{d t} J_1(X_t)\right) \\ &= 2 N(X_t) g(t,H,X_t), \end{align*} where we make use of the generalized De Bruijn's identity \eqref{formula2} in the third equality. Since $N(X_t) \geq 0$, the convexity/concavity of $N(X_t)$ thus depends on the sign of the function $g(t,H,X_t)$. In particular, when $X_0$ is Gaussian with mean $0$ and variance $\sigma_0^2$, we have \begin{align*} J_1(X_t) &= \dfrac{1}{\sigma_0^2 + t^{2H}}, \\ \dfrac{d}{dt} J_1(X_t) &= \dfrac{- 2H t^{2H-1}}{(\sigma_0^2 + t^{2H})^2} = - 2H t^{2H-1} J_1(X_t)^2, \\ g(t,H,X_t) &= H(2H-1) t^{2H-2} J_1(X_t) \\ &= \begin{cases} > 0 &\mbox{if } H \in (1/2,1), \\ \leq 0 & \mbox{if } H \in (0,1/2]. \end{cases} \end{align*} \end{proof} Theorem \ref{thm:convexconcave} implies that, for a channel of the form \eqref{eq:channel} with Gaussian $X_0$ and for $t \in [0,1]$, we have \begin{align*} N(X_t) &\begin{cases} \leq t N(X_0) + (1-t) N(X_1) &\mbox{if } H \in (1/2,1), \\ \geq t N(X_0) + (1-t) N(X_1) & \mbox{if } H \in (0,1/2].
\end{cases} \end{align*} One application of the above inequality lies in determining the so-called capacity region in a Gaussian interference channel; see the paper \cite{Costa85b}. It turns out that the concavity of the entropy power is a crucial step in the proof of Theorem 2 in \cite{Costa85b}. With our Theorem \ref{thm:convexconcave}, it seems possible to study the capacity region in an interference channel with fBm noise and generalize the result to $H \in (0,1/2]$. We leave this as one of our future research directions. \section{Conclusion}\label{sec:conclusion} In this paper, we present the generalized De Bruijn's identity for the channel driven by fractional Brownian motion with Hurst parameter $H\in(0,1)$. Compared with the classical Brownian motion case $H=\frac12$, in our setting the term $t^{2H-1}$ does not degenerate, and thus plays an essential role in the derivation of the identity. Consequently, we also investigate its equivalence with the Stein's identity and study the second-order behaviour of the entropy power. We hope that this paper can open new doors in analyzing the De Bruijn's identity in a more general context, to model phenomena such as self-similarity and long-range dependence. There are at least two future research directions. First, as mentioned in Section \ref{subsec:entropypower}, we can study the capacity region in an interference channel with fBm noise, where our convexity/concavity result should be an important step in the analysis. Second, we can consider more general channels driven by stochastic processes such as the Gaussian Volterra process \cite{V17}. Members of this broad family include fBm as well as the Riemann-Liouville process. This should provide a unified framework for studying the De Bruijn's identity and its applications for channels driven by more general Gaussian processes.\\ \noindent \textbf{Acknowledgements}.
The authors would like to thank the associate editor and the referee for constructive comments that improved the quality of the manuscript. Michael Choi acknowledges the support from the Chinese University of Hong Kong, Shenzhen grant PF01001143 and AIRS - Shenzhen Institute of Artificial Intelligence and Robotics for Society Project 2019-INT002. \bibliographystyle{abbrvnat} \bibliography{thesis} \end{document}
You must appreciate an honest friend, one who will tell you "exactly like it is." Suppose you have gone through a split with your partner. If one were to poll your friends, there is little real doubt that several of them would look at you and tell you truthfully that you are much better off without that person. It might be that this individual didn't treat you properly. Maybe he actually cheated on you. Perhaps he's just a loser overall who, regardless of his very good looks, was willing to sit around for hours on end playing online games while letting you carry out virtually all of the food preparation, cleaning, and money making. Nonetheless, reason seldom penetrates love. Matters of the heart tend to have their own music, and thus it is not uncommon for a girl to wake up and find herself in the position of having broken up with the man she really considers the love of her life. Once she knows for certain she wants this guy back, the next step is, first, to create a plan to get him back, and second, to carry it out. Fortunately, there is a wide range of knowledgeable support on the net in the form of products like the ones provided by ExBackExpertise.com, such as Jessica Simien's tips on love, which can be found on the site. Much depends on the conditions that surrounded the break-up. For example, are you able to ascertain just where things went wrong? If so, there may be evidence to be found in looking back that will be instrumental in leading to the relationship's renewal. The articles on JessicaSimien.com are helpful in pointing out what to consider. It's possible that all a person might really need to do is take a long, hard look at themselves.
Quite often, true self-examination will disclose personality defects which, once fixed, result in you becoming a person who acts differently within a relationship. Occasionally, that makes all the difference in the world. There is one genuine truth when it comes to relationships: you may not be able to change the other person, but you can always change yourself.
By Tony Best | Sun, December 16, 2012 Addressing Dr Frank Alleyne, one of Barbados’ newest knights, as “Sir” wouldn’t be a difficult thing for Professor Cecil Foster. Foster, who teaches graduate and undergraduate courses at the University of Guelph in Ontario and at the State University of New York in Buffalo, has written a dozen books, some of them novels and others insightful analyses of life in Canada and the Caribbean. “I have been calling him ‘sir’ from the time I was five years old, when I first attended classes at Christ Church Boys’ School and he was a young teacher there,” said Foster, a professor in the department of anthropology and sociology at the University of Guelph.
TITLE: If $(κ_t)$ is a semigroup with invariant measure $\mu$ and $ν$ is singular to $\mu$, then $νκ_t$ might not converge to $\mu$ in total variation norm QUESTION [1 upvotes]: Let $E$ be a Polish space, $(\kappa_t)_{t\ge0}$ be a Markov semigroup on $(E,\mathcal B(E))$, $\mu$ be a probability measure on $(E,\mathcal B(E))$ invariant with respect to $(\kappa_t)_{t\ge0}$ and $\nu$ be a probability measure on $(E,\mathcal B(E))$ singular to $\mu$. I've read that if $(\kappa_t)_{t\ge0}$ is not strongly Feller$^1$, then we cannot expect that the composition $\nu\kappa_t$ tends to $\mu$ in total variation distance. How do we see this? And does "cannot expect" mean that there are instances of the setting such that the convergence does not hold, or can we even show that it is impossible to hold in general? $^1$ $(\kappa_t)_{t\ge0}$ is called strongly Feller at time $t\ge0$ if $\kappa_t f$ is continuous for all bounded $\mathcal B(E)$-measurable $f:E\to\mathbb R$. REPLY [2 votes]: You are asking for a condition that, in particular, guarantees that your semigroup has a unique invariant (stationary) measure, which is quite strong. There are lots of examples without this property. For a very simple (if somewhat degenerate) example, notice that if you do not impose any continuity assumptions on the transition probabilities at all, then there is no way the behaviours of $\mu\kappa_t$ and $\nu\kappa_t$ can be related if $\mu$ and $\nu$ are singular. Take a partition of $E$ into two measurable sets $A$ and $B$, take $a\in A, b\in B$, and let $\kappa_t$ send any point from $A$ to $a$, and any point from $B$ to $b$. Then the semigroup $\kappa_t$ has two invariant measures $\delta_a,\delta_b$.
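The answer's construction can be made concrete on a finite state space. The following sketch (my own illustration; the states, kernel, and measures are chosen purely for demonstration) checks that $\nu\kappa_t$ stays at total-variation distance $1/2$ from an invariant mixture $\mu$ for all times:

```python
import numpy as np

# Finite-state version of the construction: E = {0,1,2,3}, A = {0,1},
# B = {2,3}, a = 0, b = 2.  The kernel sends every point of A to a and
# every point of B to b.
K = np.zeros((4, 4))
K[[0, 1], 0] = 1.0   # A -> delta_a
K[[2, 3], 2] = 1.0   # B -> delta_b

mu = np.array([0.5, 0.0, 0.5, 0.0])   # invariant mixture of delta_a, delta_b
assert np.allclose(mu @ K, mu)

nu = np.array([0.0, 1.0, 0.0, 0.0])   # delta_1, singular to mu (mu({1}) = 0)
tv = lambda p, q: 0.5 * np.abs(p - q).sum()

# nu K^n = delta_a for every n >= 1, so the total-variation distance to mu
# is stuck at 1/2 and never decays.
dist = nu.copy()
for _ in range(50):
    dist = dist @ K
    assert np.isclose(tv(dist, mu), 0.5)
```

So here "cannot expect" is witnessed by an explicit instance where the convergence fails outright.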
TITLE: Nontrivial solution to a system of equations over a general ring QUESTION [4 upvotes]: Given a matrix $A\in D^{n\times (n+1)}$ where $D$ is an integral domain, the system $Ax=0$, where $x\in D^{n+1}$, has a nontrivial solution (which can be proved by looking at the field of fractions of $D$). Would this result still be true if instead of $D$ we had a general ring $R$ with identity? Does $R$ need to be commutative? (Obviously the above proof fails in this case; however, I don't seem to see a reason why the result wouldn't be true in general.) Thanks. REPLY [8 votes]: It may fail for noncommutative rings. There are noncommutative rings (with identity) $R$ such that $R^2\cong R$. For example, let $k$ be a field, let $\mathbf{V}$ be an infinite dimensional vector space over $k$, and let $R$ be the ring of endomorphisms of $\mathbf{V}$. The fact that $\mathbf{V}\cong\mathbf{V}\oplus\mathbf{V}$ can be translated into a proof that $R^2\cong R$. Since $R^2\cong R$, there is a $1\times 2$ matrix that corresponds to this isomorphism. For such a matrix $A$, $A\mathbf{x} = 0$ if and only if $\mathbf{x}=0$, since $A$ is an isomorphism. For commutative rings, though, it works. Every commutative ring satisfies the strong rank condition, which is that for any $n\lt \infty$, any set of linearly independent elements in the right module $R^n$ has cardinality at most $n$. This is equivalent to saying that any homogeneous system of $n$ linear equations with $m$ unknowns and $m\gt n$ has a nontrivial solution. The fact that commutative rings satisfy the strong rank condition is proven in T-Y Lam's Lectures on Rings and Modules, pp 14-15. Lemma. Let $A$ and $B$ be right modules over $R$, where $B\neq 0$. If $A\oplus B$ can be embedded in $A$, then $A$ is not a noetherian module. Proof. $A$ has a submodule $A_1\oplus B_1$ with $A_1\cong A$ and $B_1\cong B$. So $A\oplus B$ can be embedded in $A_1$, which therefore contains a submodule $A_2\oplus B_2$ with $A_2\cong A$ and $B_2\cong B$.
Continuing this way we get an infinite direct sum $B_1\oplus B_2\oplus\cdots$ in $A$, each $B_i\neq 0$, which shows that $A$ is not a noetherian module. QED Theorem. Any right noetherian ring $R\neq 0$ satisfies the strong rank condition. Proof. Let $R\neq 0$ be right noetherian. Then for any $n$, the free right module $A=R^n$ is noetherian, so if $B\neq 0$, then $A\oplus B$ cannot be embedded in $A$; in particular, for any $m\gt n$, $R^m=A\oplus R^{m-n}$ cannot be embedded in $A=R^n$. QED Corollary. Any commutative ring $R\neq 0$ satisfies the strong rank condition. Proof. Consider a system of $n$ linear equations in $m\gt n$ unknowns, with coefficients in $R$. Let $R_0$ be the subring of $R$ generated over $\mathbb{Z}\cdot 1$ by the coefficients $a_{ij}$. This is a nonzero noetherian ring by the Hilbert Basis Theorem, so the system of equations has a nontrivial solution over $R_0$, hence over $R$. Thus, $R$ has the strong rank condition. QED Another nontrivial instance of rings with the strong rank condition: if $R=A\times B$, then $R$ has the strong rank condition if and only if either $A$ or $B$ has the strong rank condition.
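For the commutative case, the field-of-fractions argument from the question is easy to illustrate computationally. The sketch below (my own; the matrix is an arbitrary example) finds a rational nullspace vector of a $2\times 3$ integer matrix and clears denominators to obtain a nontrivial solution over $\mathbb{Z}$:

```python
import sympy as sp
from functools import reduce

# Over an integral domain such as Z, any n x (n+1) system Ax = 0 has a
# nontrivial solution: solve over the field of fractions Q, then clear
# denominators to land back in Z.
A = sp.Matrix([[2, 4, 6],
               [1, 3, 5]])          # n = 2 equations, n + 1 = 3 unknowns

x = A.nullspace()[0]                # basis vector of the rational nullspace
den = reduce(sp.ilcm, [e.q for e in x], 1)   # lcm of the denominators
x = x * den                         # nontrivial solution with entries in Z

assert x != sp.zeros(3, 1)          # nontrivial ...
assert A * x == sp.zeros(2, 1)      # ... and indeed a solution
```

This is exactly where the noncommutative counterexample breaks down: with $R^2\cong R$ there is no field of fractions to pass through, and the $1\times 2$ matrix realizing the isomorphism has trivial kernel.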
In its just-published 2012 security threat forecast, McAfee highlights that proofs-of-concept (POCs) abusing embedded mechanisms will grow in efficacy during 2012 and beyond. Malware will therefore need to be designed to target the hardware itself, enabling attacks that acquire greater control, as well as sustained access to both the computer and the data stored on it, over an extended length of time. Such a situation would ultimately let advanced hackers gain total hold over the hardware. Currently, embedded systems each cater to one particular control operation as part of a bigger mechanism, and are utilized in medical instruments, printers, digital cameras, GPS systems, automotives and routers. Normally, individuals and organizations place faith in digitally signed certificates, but as per the report by McAfee, Stuxnet and its variant Duqu lately utilized fake certificates for bypassing identification. Expectedly, during 2012, certificate authorities will be targeted on a wide scale and there will be greater generation and use of fake digital certificates. Consequently, key infrastructures will be affected, as will safe browsing for online transactions, in addition to application control and white-listing types of host-based techniques, McAfee predicts. Moreover, McAfee also predicts that the hacker gang Anonymous must change or else terminate so far as hacktivism in 2012 is concerned, meaning that the influence of Anonymous must get organized via issuing operation and responsibility declarations; otherwise any group that labels itself as Anonymous may ultimately get marginalized. Tgdaily.com published this on December 28, 2011. Nevertheless, the security firm thinks that hackers will increasingly combine with protestors such as the Occupy pressure group to expose still further details of individuals.
Actually, seeking to fulfill ideological or political motives, there will be increased revelations of public figures' private lives, including those of politicians, security and law-enforcement officials, judges, and industry magnates. Indeed, demonstrators will do everything to acquire data from Web servers or social-networking websites to back their causes. Finally, McAfee states that as operating systems become equipped with better security mechanisms, hackers will be compelled to hunt for vulnerabilities elsewhere, such as within hard drives, network cards, or the Basic Input Output System. Related article: PC-Virus of 2005 Threatening Japanese Bank Accountholders, Warns Symantec » SPAMfighter News - 07-01-2012
TITLE: assigning p tasks to k people of n QUESTION [1 upvotes]: Had a question on an exam of enumerative combinatorics to prove the following: $$\sum_{k=0}^n k^3 {n \choose k} = (n+3)n^22^{n-3}$$ Sol'n: let us use the binomial formula: $$\sum_{k=0}^n x^k {n \choose k} = (1+x)^n$$ now take the derivative: $$\sum_{k=0}^n k x^{k-1} {n \choose k} = n(x+1)^{n-1}$$ setting x=1, let this be identity [1] $$\sum_{k=0}^n k {n \choose k} = n2^{n-1}$$ now take another derivative $$\sum_{k=0}^n k(k-1) x^{k-2} {n \choose k} = n(n-1)(x+1)^{n-2}$$ set x = 1 $$\sum_{k=0}^n (k^2-k) {n \choose k} = n(n-1)(2)^{n-2}$$ now kill the extra $-k$ using identity [1] by adding it $$\sum_{k=0}^n (k^2-k) {n \choose k} + \sum_{k=0}^n k {n \choose k}= n(n-1)(2)^{n-2} + n2^{n-1}$$ $$\sum_{k=0}^n (k^2) {n \choose k} = n(n+1)(2)^{n-2}$$ repeating the above procedure with the derivative of order 3, and killing the extra terms, will give the identity that was to be proved. so for each $k^{1,2,3}$ we have $$\sum_{k=0}^n k {n \choose k} = n2^{n-1}$$ $$\sum_{k=0}^n k^2 {n \choose k} = n(n+1)(2)^{n-2}$$ $$\sum_{k=0}^n k^3 {n \choose k} = n^2(n+3)2^{n-3}$$ The question: is it possible to somehow solve this combinatorially, say more generally, for $q$ fixed, $$\sum_{k=0}^n k^q {n \choose k} = something$$ I used the interpretation of assigning $p$ tasks to $p$ different people in a group of $k$ chosen from $n$ (some people may not have a task assigned). the formula was the following: $$\sum_{k=0}^n {k \choose p} {n \choose k} = {n \choose p}2^{n-p}$$ where on the LHS we first choose a group of $k$ from $n$, then choose the $p$ tasked people in that group of $k$; summing over all $k$, we count all such configurations. whereas on the RHS we first choose the people to do the tasks, then add the (n-p) $\require{cancel} \cancel{slackers}$ remaining people to that group of $tasked$ people Now back to the question: It is to be assigned $q$ possible duties for the people from the group ${n \choose k}$, where the same person may have more than 1 duty.
LHS is somehow clear, now it is left to think of the RHS. P.S. taking further derivatives will lead to a more general formula. applying it for order 4 gives irrational roots; the polynomial on the RHS turns out to be $n(n^3+6n^2+3n-2)2^{n-4}$ REPLY [1 votes]: Let $\def\fall(#1,#2){#1^{\underline{#2}}}\fall(k,i)$ denote $k(k-1)(k-2)\cdots (k-i+1)=k!/(k-i)!$. These are referred to as the falling factorials. Importantly, we have the following identity: $$ k^q=\sum_{i=0}^q{q\brace i}\fall(k,i), $$ where ${q\brace i}$ are the Stirling numbers of the second kind. Furthermore, $$ \fall(k,i)\binom{n}k=\fall(n,i)\binom{n-i}{k-i} $$ Therefore, \begin{align} \sum_{k=0}^n k^q\binom{n}k &=\sum_{k=0}^n\sum_{i=0}^q{q\brace i}\binom{n}k\fall(k,i) \\&=\sum_{k=0}^n\sum_{i=0}^q{q\brace i}\binom{n-i}{k-i}\fall(n,i) \\&=\sum_{i=0}^q{q\brace i}\fall(n,i)\sum_{k=0}^n\binom{n-i}{k-i} \\&=\sum_{i=0}^q{q\brace i}\fall(n,i)2^{n-i} \end{align} Using the combinatorial interpretation of the Stirling numbers, you can no doubt find a combinatorial proof of the above identity as well. Namely, ${q\brace i}$ counts the number of ways to partition a set of size $q$ into $i$ nonempty, non-distinct parts.
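The identities above are easy to verify numerically. The following sketch (my own check, not part of the answer) confirms the $k^3$ identity and the general Stirling-number formula $\sum_k k^q\binom nk = \sum_i {q\brace i}\, n^{\underline i}\, 2^{n-i}$ for small $n$ and $q$:

```python
from math import comb

def stirling2(q, i):
    # Stirling number of the second kind via S(q,i) = i*S(q-1,i) + S(q-1,i-1)
    if q == i == 0:
        return 1
    if q == 0 or i == 0:
        return 0
    return i * stirling2(q - 1, i) + stirling2(q - 1, i - 1)

def falling(n, i):
    # falling factorial n^(i) = n (n-1) ... (n-i+1)
    r = 1
    for j in range(i):
        r *= n - j
    return r

for n in range(3, 12):
    # the k^3 identity from the question
    lhs3 = sum(k**3 * comb(n, k) for k in range(n + 1))
    assert lhs3 == n**2 * (n + 3) * 2**(n - 3)
    # the general formula from the answer, for q = 0..4
    for q in range(5):
        lhs = sum(k**q * comb(n, k) for k in range(n + 1))
        rhs = sum(stirling2(q, i) * falling(n, i) * 2**(n - i)
                  for i in range(q + 1))
        assert lhs == rhs
```

(For $q=3$: ${3\brace 1}=1$, ${3\brace 2}=3$, ${3\brace 3}=1$, and collecting $n\,2^{n-1} + 3n(n-1)2^{n-2} + n(n-1)(n-2)2^{n-3}$ recovers $n^2(n+3)2^{n-3}$.)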
\section{Introduction} In non-relativistic quantum theory a \emph{vacuum state} can simply be identified with a lowest-energy state. In a relativistic context the absence of a unique notion of time and consequently energy, makes this less straightforward. Minkowski space has a rich isometry group (the Poincaré group) that helps to fix a notion of vacuum by demanding its invariance. However, generic curved spacetimes do not admit isometries. This makes the question of how to choose a vacuum state rather important, as well as the understanding of what such a choice means physically. A further important question about the vacuum concerns its ``localizability'' properties. Usually, a vacuum is seen as encoding global information about spacetime. This is reinforced by the Reeh-Schlieder theorem \cite{ReSc:unitequiv}. However, one can ask to which extent a vacuum might encode information just about a spacetime region or (as we shall see) a hypersurface neighborhood. This question is particularly important from the point of view of Segal's axiomatic approach to quantum field theory, which posits that quantum amplitudes in composite spacetime regions may be decomposed into amplitudes in component regions \cite{Seg:cftproc,Seg:cftdef,Oe:gbqft}. More recently, this approach has been generalized to include observables \cite{Oe:feynobs} and general processes \cite{Oe:posfound}. A third question we want to raise here concerns the generalization of the notion of vacuum to a context where no background metric is fixed from the outset. This is relevant in particular for quantum gravity. With the present work we aim to make some contribution to addressing each of these questions. To be able to make some headway we restrict in this work purely to linear (i.e., free) field theory. 
We recall (Section~\ref{sec:revquant}) that a standard quantization method in curved spacetime \cite{BiDa:qftcurved} starts with selecting a set of modes (i.e., solutions of the equations of motion) that satisfy certain completeness and orthogonality properties (\ref{eq:propmodes}) with respect to an inner product (\ref{eq:iplc}) that derives from the symplectic form on the solution space. A choice of such modes amounts to selecting a vacuum. Equivalently, we may encode this choice in terms of a complex structure on solution space with certain properties. As we emphasize in this work, a third way of encoding this information is in terms of a particular type (that we call \emph{definite}) of \emph{Lagrangian subspace} of the solution space. In Minkowski space with the standard vacuum, this Lagrangian subspace is precisely the space of ``positive energy solutions'' and its conjugate that of ``negative energy solutions''. In order to move towards a more local picture and away from a restriction to Minkowski space we recall that there is a natural symplectic form on the space of germs of solutions on any hypersurface in spacetime (Appendix~\ref{sec:lagingreds}). A vacuum can then be encoded as a definite Lagrangian subspace on any hypersurface. If the hypersurface is spacelike and spacetime globally hyperbolic this can be brought into correspondence with the more traditional global perspective. There is another, apparently completely distinct setting where Lagrangian subspaces occur in (purely classical) field theory (Section~\ref{sec:classlag}). This is the symplectic framework of Kijowski and Tulczyjew \cite{KiTu:symplectic}, axiomatized in the linear case in \cite{Oe:holomorphic}. The key insight is that the solutions of a sufficiently simple field theory in a spacetime region form a Lagrangian subspace of the space of germs of solutions on the boundary.\footnote{``Sufficiently simple'' means here for example that there are no gauge symmetries. 
In the presence of gauge symmetries a refined scheme can be applied that involves symplectic reduction, see e.g.\ \cite{DiOe:qabym}.} The Lagrangian subspaces in question are \emph{real} subspaces, in contrast to the definite ones for vacua, which are necessarily complex (and defined on the \emph{complexified} space of germs). Our core proposal (Section~\ref{sec:vaclag}) is that, nevertheless, both occurrences of Lagrangian subspaces are really special cases of a common unified structure, which, for simplicity, we continue to call a vacuum. To this end, we show on the classical level that the definite Lagrangian subspaces are naturally associated to ``sufficiently'' non-compact regions of spacetime, complementing the real Lagrangian subspaces for compact and ``mildly'' non-compact regions. Crucially, Lagrangian subspaces that are neither definite nor real (but are a mixture of both) also occur naturally, as we show. The unification becomes really compelling at the quantum level, where we show that the wave function for a standard vacuum state takes exactly the same form as the wave function encoding the state dual to the amplitude for a region. This is most easily seen by using the Schrödinger representation and the Feynman path integral. Expectation values of observables (defined as functions on spacetime field configurations) on all the generalized vacua can be evaluated by reducing to Weyl observables and then applying a simple path integral formula (\ref{eq:veweyl}). A second component of the present work consists of the proposal of methods for vacuum selection (Section~\ref{sec:vchoice}). These are inspired by \emph{Euclidean methods} and incorporate notions of \emph{Wick rotation}. We observe that real Lagrangian subspaces occur naturally in association with decaying asymptotic boundary conditions. This suggests viewing the definite Lagrangian subspaces of traditional vacua as arising through a Wick rotation of boundary conditions.
Concretely, we propose an infinitesimal and an asymptotic method for fixing a vacuum. While this works straightforwardly when solutions show a decaying behavior, it requires a Wick rotation when solutions show oscillatory behavior. The latter case recovers traditional methods of vacuum selection using timelike vector fields. In order to motivate our proposal we showcase the natural occurrence of generalized vacua in simple examples and demonstrate the application of our vacuum selection methods. This is partly in the spirit of the reverse engineering approach to quantum field theory, where we use known tools and methods to extract underlying structure \cite{Oe:reveng}. For simplicity, all examples are based on (massive or massless) Klein-Gordon theory. The examples involve different regions and hypersurfaces (including timelike ones) in Minkowski space (Sections~\ref{sec:hypcyl} and \ref{sec:tlhp}), Rindler space (Section~\ref{sec:Rindler}), a Euclidean space (Section~\ref{sec:2deucl}), and de~Sitter space (Section~\ref{sec:deSitter}). An intriguing repeated pattern is the occurrence of \emph{evanescent waves} with a decaying behavior and corresponding real Lagrangian subspaces along with the \emph{oscillating waves} with corresponding definite Lagrangian subspaces. It is only the latter that occur in the traditional approach to the vacuum. While aspects of the example applications are novel, their purpose is limited to providing an initial proof of concept for the proposed generalized notion of vacuum and selection methods. The real interest of these new concepts and methods lies in their applicability to situations which lie outside the scope of standard methods or where such methods lack conceptual clarity or present technical difficulty. Of particular interest are spacetimes that are not globally hyperbolic, such as anti-de~Sitter space, black hole spacetimes or certain cosmological spacetimes. 
On the other hand, although this is not emphasized explicitly in this work, a wide range of boundary conditions may be understood in terms of our generalized notion of vacuum. This might lead to a completely different class of applications such as to the Casimir effect and related problems. We notice, in accordance with previous remarks, another potential area of application in terms of quantum theory (such as quantum gravity) on spacetimes without background metric. While we focus the discussion in this work on standard quantum field theories and the methods of vacuum selection proposed in Section~\ref{sec:vchoice} rely to some extent on a metric, the framework of Section~\ref{sec:vaclag} is in principle applicable also in the absence of a metric. For further discussion of results and a more detailed outlook, see Section~\ref{sec:outlook}. We emphasize that the present work is focused on certain aspects of the notion of vacuum only. Other important aspects such as whether a Hadamard condition \cite{KaWa:qfstatesbifurcate} is satisfied, relevant for obtaining a renormalized energy-momentum tensor, are not touched upon. This does not mean that they are not interesting, but that their relation to the presented concepts and methods is outside of the scope of this work and should be the subject of future investigation. Some mathematical details on Lagrangian subspaces are collected in Appendix~\ref{sec:mathlag}. This includes Proposition~\ref{prop:rdlagcompl}, which is instrumental in ensuring well-definedness and uniqueness in the application of formula (\ref{eq:veweyl}) for vacuum expectation values. In Appendix~\ref{sec:caxioms} an axiomatization of our notion of generalized vacuum is presented, generalizing the axiomatic framework \cite{Oe:holomorphic} that formalizes the mentioned Lagrangian approach of Kijowski and Tulczyjew \cite{KiTu:symplectic} in the linear case.
OG Kush Autoflowering CBD: Fast, Potent CBD Goodness OG Kush Autoflowering CBD was brought into this world through the combination of OG Kush Autoflowering and Auto CBD. Although not the most original name, OG Kush Autoflowering CBD is a potent strain that brings a lot of highly regarded traits together. She features the rich, pungent flavor of OG Kush, potency, a decent CBD content, and a fast turnaround. Expect lemon and fuel to dominate her flavor while inducing a functional yet potent effect. OG Kush Autoflowering CBD won't lock you down, but still provides a nice body buzz and mood elevation. The complete life cycle of OG Kush Autoflowering CBD is 70-80 days, with yields reaching 450g/m² indoors. Outdoors, 60-170g per plant is achievable. These plants are branchy, with long runs of thick lime-green flower clusters, true to the OG Kush structure.
CTC's suppliers and prospective suppliers are being invited to take part in the open day for employers planned for 11 May 2018. Each supplier will be requested to refer 10 prospective new employers that may be in need of CTC's expert training services. Training is an essential part of MSHA's mission to keep miners safe and healthy. Our goal is to help the mining industry develop high-quality training programs, and to strengthen and modernize training through collaboration with industry stakeholders.
\section{Confidence Interval Bounds from Prior Work} \subsection{Confidence Bounds from \texorpdfstring{\cite{DFHPRR15nips}}{}} \label{sec:dfh_res} We start by deriving confidence bounds using results from \cite{DFHPRR15nips}, which uses the following transfer theorem (see Theorem 10 in \cite{DFHPRR15nips}). \begin{theorem} If $\alg$ is $(\epsilon,\delta)$-DP where $\phi \gets \alg(X)$ and $\tol \geq \sqrt{\frac{48}{n} \ln(4/\beta)}$, $\epsilon \leq \frac{\tol}{4}$, and $\delta = \exp\left(\frac{-4 \ln(8/\beta)}{\tol} \right)$, then $\prob{|\phi(\cD) - \phi(X) |\geq \tol} \leq \beta$. \label{thm:DFHPRR} \end{theorem} We pair this with the accuracy guarantees of either the Laplace or the Gaussian mechanism, along with \Cref{cor:pop_acc}, to get the following result. \begin{theorem}\label{thm:LapGauss} Given confidence level $1-\beta$ and using the Laplace or Gaussian mechanism for each algorithm $\alg_i$, then $(\alg_1,\cdots, \alg_k)$ is $(\tol^{(1)}, \beta)$-accurate, where: \begin{itemize} \item{\bf Laplace Mechanism}: We define $\tol^{(1)}$ to be the solution to the following program {\begin{align*} \min & \qquad \tol \\ \text{ s.t. } & \qquad \tol \geq \sqrt{\frac{48}{n} \ln\left(\frac{8}{\beta}\right)} + \tau' \\ & \qquad \left(\tol - \tau' - 4 \epsilon' k \cdot \left(\frac{e^{\epsilon'}-1}{e^{\epsilon'}+1} \right) \right)^2 \cdot \left(\tol - \tau' \right) \geq 256 \eps'^2 k\ln\frac{16}{\beta} \\ \text{for } & \qquad \epsilon' > 0 \text{ and } \tau' = \frac{\ln\left(2k/\beta \right)}{n \epsilon' } \end{align*}} \item{\bf Gaussian Mechanism}: We define $\tol^{(1)}$ to be the solution to the following program {\begin{align*} \min & \qquad \tol \\ \text{ s.t. 
} & \qquad \tol \geq \sqrt{\frac{48}{n} \ln\left(\frac{8}{\beta}\right)} + \tol' \\ & \qquad \left( \left(\tol - \tol' - 4 \rho k\right)^2 - 64\rho k \ln\sqrt{\pi \rho k} \right) \cdot \left(\tol - \tol' \right) \geq 64\rho k \ln\frac{16}{\beta} \\ \text{for } & \qquad \rho > 0 \text{ and } \tol' = \frac{1}{2n} \sqrt{\frac{1}{\rho}\ln(4k/\beta)} \end{align*}} \end{itemize} \end{theorem} To bound the sample accuracy, we will use the following lemma that gives the accuracy guarantees of Laplace mechanism. \begin{lemma} If $\{Y_i : i \in [k]\} \stackrel{i.i.d.}{\sim} \Lap(b)$, then for $\beta \in (0,1]$ we have: \begin{align*} & \prob{|Y_i| \geq \ln(1/\beta) b} \leq \beta \implies \prob{\exists i \in [k] \text{ s.t. } |Y_i| \geq \ln(k/\beta) b} \leq \beta. \label{eq:acc_lap} \numberthis \end{align*} \label{lem:accLap} \end{lemma} \begin{proof}[Proof of Theorem \ref{thm:LapGauss}] We will focus on the Laplace mechanism part first, so that we add $\Lap\left( \frac{1}{n \epsilon'}\right)$ noise to each answer. After $k$ adaptively selected queries, the entire sequence of noisy answers is $(\epsilon,\delta)$-DP where \begin{align} \epsilon = k \epsilon' \cdot \frac{e^{\epsilon'}-1}{e^{\epsilon'}+1} + \epsilon' \cdot \sqrt{2k \ln(1/\delta)} \label{eqn:eps_del} \end{align} Now, we want to bound the two terms in \eqref{eq:errors} by $\frac{\beta}{2}$ each. We can bound sample accuracy as: \begin{align*} \tau' = \frac{1}{n \epsilon' }\ln\left(\frac{2k}{\beta}\right) \end{align*} which follows from \Cref{lem:accLap}, and setting the error width to $\tau'$ and the probability bound to $\frac{\beta}{2}$. 
For the population accuracy, we apply \Cref{thm:DFHPRR} to take a union bound over all selected statistical queries, and set the error width to $\tau - \tau'$ and the probability bound to $\frac{\beta}{2}$ to get: { \small \begin{align*} & \delta = \exp\left( \frac{-8\ln(16/\beta)}{\tau - \tau'} \right), \qquad \tau - \tau' \geq \sqrt{\frac{48}{n} \ln\frac{8}{\beta}}, \qquad \text{ and } \qquad \tau - \tau' \geq 4 \epsilon \numberthis \label{eqn:tau's} \end{align*} } We then use \eqref{eqn:eps_del} and write $\epsilon$ in terms of $\delta$ to get: $$ \epsilon = \epsilon' k \cdot \frac{e^{\epsilon'}-1}{e^{\epsilon'}+1} + 4 \epsilon' \cdot \sqrt{ \frac{k \ln(16/\beta)}{\tau - \tau'} }. $$ Substituting the value of $\epsilon$ in \Cref{eqn:tau's}, we get: $$ \tau - \tau' \geq 4 \left(\epsilon' k \cdot \left(\frac{e^{\epsilon'}-1}{e^{\epsilon'}+1} \right) + 4\epsilon'\cdot \sqrt{\frac{k\ln\frac{16}{\beta}}{\tau - \tau'}} \right) $$ By rearranging terms, we get $$\left(\tol - \tol' - 4 \epsilon' k \cdot \left(\frac{e^{\epsilon'}-1}{e^{\epsilon'}+1} \right) \right)^2 \cdot \left(\tol - \tol' \right) \geq 256 \eps'^2 k\ln\frac{16}{\beta}$$ We are then left to pick $\epsilon'>0$ to obtain the smallest value of $\tol$. We can follow a similar argument when we add Gaussian noise with variance $\frac{1}{2n^2 \rho}$. The only modification we make is using \Cref{thm:zCDP} to get a composed DP algorithm with parameters in terms of $\rho$, and the accuracy guarantee in \Cref{thm:acc}. \end{proof} \subsection{Confidence Bounds from \texorpdfstring{\cite{BNSSSU16}}{}} \label{sec:bns_res} We now go through the argument of \cite{BNSSSU16}, improving the constants as much as their analysis allows, to get a confidence bound on $k$ adaptively chosen statistical queries. This requires presenting their \emph{monitor}, which is similar to the monitor presented in \Cref{alg:monitor} but takes as input several independent datasets. We first present the result.
\begin{theorem} Given confidence level $1-\beta$ and using the Laplace or Gaussian mechanism for each algorithm $\alg_i$, $(\alg_1,\cdots, \alg_k)$ is $(\tol^{(2)}, \beta)$-accurate. \begin{itemize} \item{\bf Laplace Mechanism}: We define $\tol^{(2)}$ to be the following quantity: {\small \begin{align*} & \frac{1}{1-(1-\beta)^{\left\lfloor \frac{1}{\beta}\right\rfloor}} \inf_{\substack{\epsilon'>0,\\ \delta \in (0,1)}} \left\{ e^\psi - 1 + 6\delta \left\lfloor \frac{1}{\beta}\right\rfloor + \frac{\ln\frac{k}{2\delta}}{\epsilon' n} \right\}, \text{where } \psi = \left(\frac{e^{\epsilon'}-1}{e^{\epsilon'}+1} \right) \cdot \epsilon' k + \epsilon'\sqrt{2k \ln\frac{1}{\delta}} \end{align*}} \item{\bf Gaussian Mechanism}: We define $\tol^{(2)}$ to be the following quantity: {\small \begin{align*} & \frac{1}{1-(1-\beta)^{\left\lfloor \frac{1}{\beta}\right\rfloor}} \inf_{\substack{\rho>0, \\\delta \in (0,1)}} \left\{ e^\xi - 1 + 6\delta \left\lfloor \frac{1}{\beta}\right\rfloor + \sqrt{\frac{\ln\frac{k}{\delta}}{n^2\rho} } \right\}, \text{where } \xi = k \rho + 2 \sqrt{k \rho \ln\left(\frac{\sqrt{\pi \rho}}{\delta}\right)} \end{align*}} \end{itemize} \label{thm:BNSSSU} \end{theorem} In order to prove this result, we begin with a technical lemma which considers an algorithm $\cW$ that takes as input a collection of $s$ samples and outputs both an index in $[s]$ and a statistical query, where we denote by $\cQ_{SQ}$ the set of all statistical queries $\phi: \cX \to [0,1]$ and their negations. \begin{lemma}[\citep{BNSSSU16}] Let $\cW: \left(\cX^n\right)^s \to \cQ_{SQ} \times [s]$ be $(\epsilon,\delta)$-DP. If $\vec{X} = (X^{(1)}, \cdots, X^{(s)}) \sim \left(\cD^{n}\right)^s$ then $$ \left| \Ex{\vec{X},(\phi ,t) = \cW(\vec{X})}{\phi(\cD) - \phi(X^{(t)})} \right| \leq e^\epsilon -1 + s \delta $$ \label{lem:technical} \end{lemma} We then define what we will call the \emph{extended monitor} in \Cref{alg:ex_monitor}.
\begin{algorithm} \caption{Extended Monitor $\cW_\cD[\alg,\adv](\vX)$} \label{alg:ex_monitor} \begin{algorithmic} \REQUIRE $\vX = (X^{(1)}, \cdots, X^{(s)}) \in (\cX^n)^s$ \FOR{$t \in [s]$} \STATE We simulate $\alg(X^{(t)})$ and $\adv$ interacting. We write $\phi_{t,1}, \cdots, \phi_{t,k} \in \cQ_{SQ}$ as the queries chosen by $\adv$ and write $a_{t,1},\cdots, a_{t,k} \in \R$ as the corresponding answers of $\alg$. \ENDFOR \STATE Let $ (j^*,t^*) = \myargmax_{j \in [k], t \in [s]} \left| \phi_{t,j}(\cD) - a_{t,j} \right|. $ \STATE \textbf{if} $a_{t^*,j^*} - \phi_{t^*,j^*}(\cD) \geq 0$ \textbf{then} $\phi^* \gets \phi_{t^*,j^*}$ \STATE \textbf{else} $\phi^* \gets -\phi_{t^*,j^*}$ \ENSURE $(\phi^*,t^*)$ \end{algorithmic} \end{algorithm} We then present a series of lemmas that lead to an accuracy bound from \cite{BNSSSU16}. \begin{lemma}[\citep{BNSSSU16}] For each $\epsilon,\delta\geq 0$, if $\alg$ is $(\epsilon,\delta)$-DP for $k$ adaptively chosen queries from $\cQ_{SQ}$, then for every data distribution $\cD$ and analyst $\adv$, the monitor $\cW_\cD[\alg,\adv]$ is $(\epsilon,\delta)$-DP. \end{lemma} \begin{lemma}[\citep{BNSSSU16}] If $\alg$ fails to be $(\tol,\beta)$-accurate, then $\phi^*(\cD) - a^* \geq 0$, where $a^*$ is the answer to $\phi^*$ during the simulation ($\adv$ can determine $a^*$ from output $(\phi^*,t^*)$) and \begin{align*} & \Prob{\substack{\vec{X} \sim (\cD^n)^s, \\(\phi^*,t^*) = \cW_\cD [\alg,\adv](\vec{X})}}{\left|\phi^*(\cD) - a^*\right| > \tol } > 1 - (1-\beta)^s. \end{align*} \label{lem:BNSSSU1} \end{lemma} The following result is not stated exactly as in \cite{BNSSSU16}, but it follows from the same analysis; we just do not simplify the expressions in the inequalities.
\begin{lemma} If $\alg$ is $(\tol',\beta')$-accurate on the sample but not $(\tol,\beta)$-accurate for the population, then \begin{align*} & \left|\Ex{\substack{\vec{X} \sim (\cD^n)^{s},\\(\phi^*,t) = \cW_\cD[\alg,\adv](\vec{X})}}{\phi^*(\cD) - \phi^*\left(X^{(t)}\right)} \right| \geq \tol\left(1 - (1-\beta)^s\right) - \left(\tol' + 2 s \beta' \right). \end{align*} \label{lem:BNSSSUlb} \end{lemma} We now put everything together to get our result. \begin{proof}[Proof of \Cref{thm:BNSSSU}] We ultimately want a contradiction between the result given in \Cref{lem:technical} and \Cref{lem:BNSSSUlb}. Thus, we want to find the parameter values that minimize $\tol$ while satisfying the following inequality \begin{equation} \tol\left(1 - (1-\beta)^s\right) - \left(\tol' + 2 s \beta' \right) > e^\epsilon -1 + s \delta. \label{eq:contradict} \end{equation} We first analyze the case when we add noise $\Lap\left( \frac{1}{n\epsilon'}\right)$ to each query answer on the sample to preserve $\epsilon'$-DP of each query and then use advanced composition \Cref{thm:drv} to get a bound on $\epsilon$. $$ \epsilon = \left(\frac{e^{\epsilon'}-1}{e^{\epsilon'} + 1}\right) \epsilon' k + \epsilon'\sqrt{2k \ln(1/\delta)} = \psi. $$ Further, we obtain $(\tol',\beta')$-accuracy on the sample, where for $\beta' >0$ we have $ \tol' = \frac{\ln(k/\beta') }{\epsilon' n}. $ We then plug these values into \eqref{eq:contradict} to get the following bound on $\tol$ \begin{align*} \tol & \geq \left(\frac{1}{1 - (1-\beta)^s} \right) \left( \frac{\ln\left(\frac{k}{\beta'}\right) }{\epsilon' n} +2s \beta'+ e^\psi-1 + s \delta \right) \end{align*} We then choose some of the parameters to be the same as in \cite{BNSSSU16}, namely $s = \lfloor 1/\beta \rfloor$ and $\beta' = 2\delta$. We then want to find the best parameters $\epsilon',\delta$ that make the right-hand side as small as possible.
Thus, the best confidence width $\tol$ that we can get with this approach is the following \begin{align*} & \frac{1}{1-(1-\beta)^{\left\lfloor \frac{1}{\beta}\right\rfloor}} \cdot \inf_{\substack{\epsilon'>0,\\ \delta \in (0,1)}} \left\{ e^\psi - 1 + 6\delta \left\lfloor \frac{1}{\beta}\right\rfloor + \frac{\ln\frac{k}{2\delta}}{\epsilon' n} \right\} \end{align*} Using the same analysis but with Gaussian noise added to each statistical query answer with variance $\frac{1}{2\rho n^2}$ (so that $\alg$ is $\rho k$-zCDP), we get the following confidence width $\tol$, \begin{align*} & \frac{1}{1-(1-\beta)^{\left\lfloor \frac{1}{\beta}\right\rfloor}} \inf_{\substack{\rho>0, \\\delta \in (0,1)}} \left\{ e^\xi - 1 + 6\delta \left\lfloor \frac{1}{\beta}\right\rfloor + \sqrt{\frac{\ln\frac{k}{\delta}}{n^2\rho} } \right\} \end{align*} \end{proof} \subsection{Confidence Bounds for Thresholdout (\texorpdfstring{\cite{DFHPRR15nips}}{})} \label{app:thresh_cw} \begin{theorem} If the Thresholdout mechanism $\cM$ with noise scale $\sigma$, and threshold $T$ is used for answering queries $\phi_i$, $i \in [k]$, with reported answers $a_1,\cdots, a_k$ such that $\cM$ uses the holdout set of size $h$ to answer at most $B$ queries, then given confidence parameter $\beta$, Thresholdout is $(\tol, \beta$)-accurate, where {\small \begin{align*} \tol = \sqrt{\frac{1}{\beta} \cdot \left( T^2 + \psi + \frac{\xi}{4h} + \sqrt{\frac{\xi}{h} \cdot \left( T^2 + \psi\right)}\right) } \end{align*} } for $\psi = \Ex{}{(\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j)^2} + 2 T \cdot \Ex{}{\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j}$, and $\xi = \min\limits_{\lambda \in [0,1)} \left(\frac{ \frac{2B}{\sigma^2h} - \ln \left( 1-\lambda \right)}{\lambda}\right)$, where $W_i \sim Lap(4\sigma), i \in [k]$ and $Y_j \sim Lap(2\sigma), j \in [B]$. 
\label{thm:thresh_cw} \end{theorem} \begin{proof} Similarly to the proof of Theorem~\ref{thm:RZ_CW}, we first derive bounds on the mean squared error (MSE) for answers to statistical queries produced by Thresholdout. We want to bound the maximum MSE over all of the statistical queries, where the expectation is over the noise added by the mechanism and the randomness of the adversary. \begin{theorem} If the Thresholdout mechanism $\cM$ with noise scale $\sigma$, and threshold $T$ is used for answering queries $\phi_i$, $i \in [k]$, with reported answers $a_1,\cdots, a_k$ such that $\cM$ uses the holdout set of size $h$ to answer at most $B$ queries, then we have \begin{align*} & \Ex{\substack{X \sim \cD^n, \\ \phi_{j^*} \sim \cW_\cD[\alg,\adv](X)}}{(a_{j^*} - \phi_{j^*}(\cD))^2} \leq T^2 + \psi + \frac{\xi}{4h} + \sqrt{\frac{\xi}{h} \cdot \left( T^2 + \psi\right)}, \end{align*} for $\psi = \Ex{}{(\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j)^2} + 2 T \cdot \Ex{}{\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j}$ and $\xi = \min\limits_{\lambda \in [0,1)} \left(\frac{ \frac{2B}{\sigma^2h} - \ln \left( 1-\lambda \right)}{\lambda}\right)$, where $W_i \sim Lap(4\sigma), i \in [k]$ and $Y_j \sim Lap(2\sigma), j \in [B]$. \label{thm:thresh_mse} \end{theorem} \begin{proof} Let us denote the holdout set in $\cM$ by $X_h$ and the remaining set by $X_t$. Let $\cO$ denote the distribution $\cW_\cD[\alg,\adv](X)$, where $X \sim \cD^n$.
We have: \begin{align*} \Ex{\phi_{j^*} \sim \cO}{(a_{j^*} - \phi_{j^*}(\cD))^2} & = \Ex{\phi_{j^*} \sim \cO}{(a_{j^*} - \phi_{j^*}(X_h) + \phi_{j^*}(X_h) - \phi_{j^*}(\cD))^2} \\ & \leq \Ex{\phi_{j^*} \sim \cO}{(a_{j^*} - \phi_{j^*}(X_h))^2} + \Ex{\phi_{j^*} \sim \cO}{(\phi_{j^*}(X_h) - \phi_{j^*}(\cD))^2} \\ & \qquad + \Bigg( 2\sqrt{\Ex{\phi_{j^*} \sim \cO}{(a_{j^*} - \phi_{j^*}(X_h))^2}} \cdot \sqrt{\Ex{\phi_{j^*} \sim \cO}{(\phi_{j^*}(X_h) - \phi_{j^*}(\cD))^2}}\Bigg) \label{eqn:all_diff_cw} \numberthis \end{align*} where the last inequality follows from the Cauchy-Schwarz inequality. Now, define a set $S_h$ which contains the indices of the queries answered via $X_h$. We know that for at most $B$ queries $\phi_j$, $j \in S_h$, the output of $\cM$ was $a_j = \phi_j\left(X_h\right) + Z_j$ where $Z_j \sim Lap(\sigma)$, whereas it was $a_i = \phi_i\left(X_t\right)$ for at least $k - B$ queries, $i \in [k] \setminus S_h$. Also, define $W_i \sim Lap(4\sigma), i \in [k]$ and $Y_j \sim Lap(2\sigma), j \in S_h$. Thus, for any $j^* \in [k]$, we have: \begin{align*} a_{j^*} - \phi_{j^*}(X_h) & \leq \max\left\{\max\limits_{i \in [k]\setminus S_h}|\phi_i(X_h) - \phi_i(X_t)|, \max\limits_{j \in S_h}Z_j\right\} \\ & \leq \max\left\{\max\limits_{\substack{i \in [k]\setminus S_h, \\j(i)\in S_h}}T + Y_{j(i)} + W_i, \max\limits_{j \in S_h}Z_j\right\} \\ & \leq \max\left\{\max\limits_{\substack{i \in [k]\setminus S_h, \\j\in S_h}}T + Y_{j} + W_i, \max\limits_{j \in S_h}Z_j\right\} \\ & \leq T + \max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j \end{align*} Thus, { \begin{align*} \Ex{\phi_{j^*} \sim \cO}{(a_{j^*} - \phi_{j^*}(X_h))^2} & \leq \Ex{}{(T + \max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j)^2} \\ & = T^2 + \Ex{}{(\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j)^2} + 2 T \cdot \Ex{}{\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j} \label{eqn:f_term} \numberthis \end{align*} } We bound the 2nd term in \eqref{eqn:all_diff_cw} as follows.
For every $i \in S_h$, there are two costs induced due to privacy: the Sparse Vector component, and the noise addition to $\phi_i(X_h)$. By the proof of Lemma 23 in \cite{DFHPRR15nips}, each individually provides a guarantee of $\left(\frac{1}{\sigma h},0\right)$-DP. Using Theorem~\ref{thm:zCDP}, this translates to each providing a $\left(\frac{1}{2\sigma^2h^2}\right)$-zCDP guarantee. Since there are at most $B$ such instances of each, by Theorem~\ref{thm:zCDP} we get that $\cM$ is $\left(\frac{B}{\sigma^2h^2}\right)$-zCDP. Thus, by Lemma~\ref{lem:mutualCDP} we have $$I\left(\cM(X_h);X_h\right) \leq \frac{B}{\sigma^2h}.$$ Proceeding similarly to the proof of \Cref{thm:RZ_MSE}, we use the sub-Gaussian parameter for statistical queries in \Cref{lem:SQgauss} to obtain the following bound from \Cref{thm:RZds}: \begin{align*} \Ex{\phi_{j^*} \sim \cO}{\left(\phi_{j^*}(X_h) - \phi_{j^*}(\cD) \right)^2} & = \Ex{\substack{X \sim \cD^n,\\ \alg,\adv}}{\max_{i \in S_h} \left\{ (\phi_i(X_h) - \phi_i(\cD))^2 \right\} } \\ & \leq \frac{\xi}{4h} \numberthis \label{eqn:thr_exp} \end{align*} Defining $\psi = \Ex{}{(\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j)^2} + 2 T \cdot \Ex{}{\max\limits_{i \in [k]}W_{i} + \max\limits_{j\in [B]} Y_j}$, and combining Equations \eqref{eqn:all_diff_cw}, \eqref{eqn:f_term}, and \eqref{eqn:thr_exp}, we get: {\small \begin{align*} \Ex{\phi_{j^*} \sim \cO}{(a_{j^*} - \phi_{j^*}(\cD))^2} &\leq T^2 + \psi + \frac{\xi}{4h} + \sqrt{\frac{\xi}{h} \cdot \left( T^2 + \psi\right)} \end{align*} } \end{proof} We can use the MSE bound from Theorem~\ref{thm:thresh_mse}, together with Chebyshev's inequality, to get the statement of the theorem. \end{proof}
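The width in Theorem \ref{thm:thresh_cw} has no closed form, but it is straightforward to evaluate numerically: estimate $\psi$ by Monte Carlo over the Laplace maxima and minimize the expression for $\xi$ over a grid of $\lambda$ values. The Python sketch below is purely illustrative (the function name, grid resolution, and default parameters are ours, not from the paper):

```python
import math
import random

def thresholdout_width(k, B, h, sigma, T, beta, trials=2000, seed=0):
    """Numerically evaluate the Thresholdout confidence width (illustrative).

    psi is a Monte Carlo estimate of
    E[(max_i W_i + max_j Y_j)^2] + 2*T*E[max_i W_i + max_j Y_j]
    with W_i ~ Lap(4*sigma) (k draws) and Y_j ~ Lap(2*sigma) (B draws);
    xi = min over lambda in (0,1) of (2B/(sigma^2 h) - ln(1-lambda))/lambda.
    """
    rng = random.Random(seed)
    # A Laplace(b) draw is the difference of two Exponential(mean b) draws.
    lap = lambda b: rng.expovariate(1 / b) - rng.expovariate(1 / b)

    m1 = m2 = 0.0
    for _ in range(trials):
        z = max(lap(4 * sigma) for _ in range(k)) + \
            max(lap(2 * sigma) for _ in range(B))
        m1 += z
        m2 += z * z
    m1 /= trials
    m2 /= trials
    psi = m2 + 2 * T * m1

    # One-dimensional grid minimization for xi over lambda in (0, 1)
    c = 2 * B / (sigma ** 2 * h)
    xi = min((c - math.log(1 - lam)) / lam
             for lam in (i / 1000 for i in range(1, 1000)))

    inner = T * T + psi + xi / (4 * h) + math.sqrt(xi / h * (T * T + psi))
    return math.sqrt(inner / beta)
```

Shrinking $\beta$ widens the interval, as the $1/\beta$ factor under the square root dictates; the Monte Carlo error in $\psi$ is the only stochastic part of the evaluation.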
TITLE: Why are angular velocities of the double pendulum small in the small-angle approximation? QUESTION [2 upvotes]: In the Lagrangian of the double pendulum for small angles, the term $\dot{\theta}_1\dot{\theta}_2 \left [ 1-\frac{(\theta_1-\theta_2)^2}{2} \right ]$ is replaced with $\dot{\theta}_1\dot{\theta}_2$, because $\dot{\theta}_1\dot{\theta}_2 \frac{(\theta_1-\theta_2)^2}{2}$ is neglected. The product $\dot{\theta}_1\dot{\theta}_2$ has the second order of smallness, but why? This comment explains it for the simple pendulum, but says that it is more complicated for the double pendulum. What is the explanation for the double pendulum? Edit: Correct me if I am wrong, but I think I found the answer. If the double pendulum starts oscillating from rest, the potential energy at that moment is $E_{pm}=m_1gh_{1i}+m_2gh_{2i}$, where $h_{1i}$ and $h_{2i}$ are the heights from the reference line to the centres of mass of the two pendulums. In the small-angle approximation these heights are $h_{1i}=l_1+l_2-l_{cm1}+\frac{\theta_{1i}^2}{2}l_{cm1}$ and $h_{2i}=l_2-l_{cm2}+l_1\frac{\theta_{1i}^2}{2}+l_{cm2}\frac{\theta_{2i}^2}{2}$. The kinetic energy of the first pendulum is $E_{k1}=E_{pm}-E_{k2}-E_{p1}-E_{p2}=\frac{m_1v_1^2}{2}+\frac{I_1\dot{\theta_1}^2}{2}=\frac{m_1l_{cm1}^2+I_1}{2}\dot{\theta_1}^2$. The kinetic energy of the second is $E_{k2}=\frac{m_2v_2^2}{2}+\frac{I_2\dot{\theta_2}^2}{2}=\frac{m_2l_{cm2}^2+I_2}{2}\dot{\theta_2}^2$. 
$E_{k1}=\frac{g}{2}((\theta_{1i}^2-\theta_{1}^2)(m_1l_{cm1}+m_2l_1)+m_2l_{cm2}(\theta_{2i}^2-\theta_{2}^2))-E_{k2}$ $\dot{\theta_1}=\sqrt{\frac{g((\theta_{1i}^2-\theta_{1}^2)(m_1l_{cm1}+m_2l_1)+m_2l_{cm2}(\theta_{2i}^2-\theta_{2}^2))-2E_{k2}}{m_1l_{cm1}^2+I_1}}=\sqrt{\frac{2E_{k1}}{m_1l_{cm1}^2+I_1}}$ $\dot{\theta_2}=\sqrt{\frac{g((\theta_{1i}^2-\theta_{1}^2)(m_1l_{cm1}+m_2l_1)+m_2l_{cm2}(\theta_{2i}^2-\theta_{2}^2))-2E_{k1}}{m_2l_{cm2}^2+I_2}}=\sqrt{\frac{2E_{k2}}{m_2l_{cm2}^2+I_2}}$ The masses and lengths have an influence, but $\dot{\theta_1}$ and $\dot{\theta_2}$ should be small because the terms $(\theta_{1i}^2-\theta_{1}^2)$ and $(\theta_{2i}^2-\theta_{2}^2)$ are small. Also, when $\dot{\theta_1}$ is maximal, $E_{k1}$ is maximal, so $\dot{\theta_2}$ will be smaller; the same holds with the roles of the two angles exchanged. $\dot{\theta_1}$ and $\dot{\theta_2}$ will not be maximal at the same time, so their product will be small. REPLY [1 votes]: I think that the $\dot{\theta}$'s are not necessarily small; what is actually small is the term $\frac{(\theta_1-\theta_2)^2}{2}$. You are using small angles, so you have the subtraction of two small things, which is assumed to be also small. The most it can be is "twice small", but it is squared (something small squared is even smaller) and then divided by 2. So the bracket is $\simeq [1+\ \sim 0 ]\approx 1$, and consequently you only have the product of the two angular velocities. REPLY [1 votes]: The simplest way to keep track of "smallness" is to introduce a dummy parameter $\epsilon$ and replace $\theta\to \epsilon\theta$ everywhere. It then follows that $$ \dot\theta_i\to \epsilon\dot\theta_i\, . $$ Using this substitution in the Lagrangian, and expanding to quadratic order in $\epsilon$, will produce the correct linearized equations of motion. (The assumption is that the equilibrium position is at $\theta_i=0$.) 
With this, for instance, the term $\dot\theta_1\dot\theta_2\left(1-\frac{1}{2}(\theta_1-\theta_2)^2\right)$ in the Lagrangian becomes \begin{align} \dot\theta_1\dot\theta_2\left(1-\frac{1}{2}(\theta_1-\theta_2)^2\right) &\to \epsilon^2\dot\theta_1\dot\theta_2\left(1-\frac{1}{2}\epsilon^2(\theta_1-\theta_2)^2\right)\, ,\\ &= \epsilon^2\dot\theta_1\dot\theta_2 + O(\epsilon^4)\, , \end{align} so only the bare product survives at quadratic order.
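This bookkeeping is easy to check symbolically. The sympy sketch below (our own illustration, treating the angles and angular velocities as independent symbols) confirms that the coefficient of $\epsilon^2$ is the bare product $\dot\theta_1\dot\theta_2$, and that the neglected piece enters only at order $\epsilon^4$:

```python
import sympy as sp

# Treat angles and angular velocities as independent symbols for bookkeeping
th1, th2, dth1, dth2, eps = sp.symbols('theta1 theta2 thetadot1 thetadot2 epsilon')

# The coupling term of the small-angle double-pendulum Lagrangian
term = dth1 * dth2 * (1 - (th1 - th2)**2 / 2)

# Scale theta_i -> eps*theta_i, which forces thetadot_i -> eps*thetadot_i
scaled = sp.expand(term.subs({th1: eps * th1, th2: eps * th2,
                              dth1: eps * dth1, dth2: eps * dth2}))

# Quadratic order: just the bare product of the angular velocities
assert scaled.coeff(eps, 2) == dth1 * dth2

# Everything that was dropped sits entirely at order eps**4
assert sp.simplify(scaled.coeff(eps, 4) + dth1 * dth2 * (th1 - th2)**2 / 2) == 0
```

Truncating at quadratic order in $\epsilon$ therefore keeps $\dot\theta_1\dot\theta_2$ and discards exactly the quartic correction.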
RESTON, Va., March 16, 2020 /PRNewswire/ -- Bechtel Corporation has: Bechtel built on its history of helping customers change the world for the better. Among the highlights: . Media contact: Corey Dade (O) +1 (571) 262-7067 (M) +1 (571) 283-9363 View original content: SOURCE Bechtel
We had to hustle in order to make it across town to the Austin Music Hall for The Whigs/Yo La Tengo/My Morning Jacket show. The show was great, but we made it an early night as our Hot Freaks shindig began the next morning. You can stream the performances by The Whigs/Yo La Tengo/My Morning Jacket here. We had two venues (Mohawk/Club DeVille), located next to one another, for Hot Freaks with three stages going simultaneously. That equals a lot of bands, and a lot of running around. We only had the camera with us for about an hour (below: The Dodos and White Denim), but you should be able to find more than enough SXSW photography with a google search, or more likely, a trip down the AD blogroll. Friday artists at Hot Freaks included: The Dodos, White Denim, Blair, Bowerbirds, Nicole Atkins, Evangelicals, Blood On The Wall, Cadence Weapon, Ola Podrida, Blitzen Trapper, Black Joe Lewis, Jens Lekman, Peter Moren, Jason Collett, and British Sea Power. White Denim – Friday – Hot Freaks (The Mohawk, Outside Stage) Download: MP3: The Dodos :: Fools MP3: White Denim :: Shake Shake Shake + Download SXSW artists via eMusic’s 25 free MP3 no risk trial offer ———————————————————————————————————————————
\begin{document} \begin{abstract} Given a Lie group $G$, one constructs a principal $G$-bundle on a manifold $X$ by taking a cover $U\rightarrow X$, specifying a transition cocycle on the cover, and descending the trivialized bundle $U\times G$ along the cover. We demonstrate the existence of an analogous construction for local $n$-bundles for general $n$. We establish analogues for simplicial Lie groups of Moore's results on simplicial groups; these imply that bundles for strict Lie $n$-groups arise from local $n$-bundles. Our construction leads to simple finite dimensional models of Lie 2-groups such as $\strn$. \end{abstract} \maketitle \section{Introduction} The nerve of a group is a simplicial set satisfying Kan's horn-filling conditions. Grothendieck observed that the nerve provides an equivalence between the category of groups and the category of reduced Kan simplicial sets whose horns have unique fillers above dimension one. More generally, he showed that the nerve extends to an equivalence between the category of groupoids and the category of Kan simplicial sets whose horns have unique fillers above dimension one. Inspired by this, Duskin \cite{Dus:79} defined an $n$-groupoid to be a Kan simplicial set whose horns have unique fillers above dimension $n$. In the last decade, Henriques \cite{Hen:08}, Pridham \cite{Pri:13} and others have begun the study of Lie $n$-groupoids: simplicial manifolds whose horn-filling maps are surjective submersions in all dimensions, and isomorphisms above dimension $n$. Examples are common. (See also Getzler \cite{Get:09}.) Lie $0$-groupoids are precisely smooth manifolds. Lie 1-groups are nerves of Lie groups. Abelian Lie $n$-groups are equivalent to chain complexes of abelian Lie groups supported between degrees $0$ and $n-1$. Simplicial Lie groups whose underlying simplicial set is an $(n-1)$-groupoid give rise to Lie $n$-groups, by the $\LW$-construction (Section 6). 
We call this special class of Lie $n$-groups \emph{strict Lie $n$-groups}. Much of the theory of principal bundles for Lie groups generalizes naturally to principal bundles for strict Lie $n$-groups. In Theorems \ref{thm:desc} and \ref{thm:wbar}, we show that the construction of a $G$-bundle from a cocycle on a cover has a close analogue for cocycles for strict Lie $n$-groups. As an application, we show how this allows for the construction of finite dimensional Lie 2-groups, such as $\strn$, from cohomological data. The method works equally well for $n>2$. \subsection*{Outline} We develop our results within a category $\C$ which has a terminal object $\ast$ and a subcategory of covers. We require the subcategory of covers to be stable under pullback, to contain the maps $X\to\ast$ for every object $X$, to satisfy an axiom of right-cancelation, and to be contained within the class of effective epimorphisms. Our motivating example is the category of finite dimensional smooth manifolds, with surjective submersions as covers. Other examples include Banach manifolds, analytic manifolds over a complete normed field, and of course sets. In Section \ref{sec:stacks}, we recall the definitions and basic properties of higher stacks. This section parallels the discussion in Behrend and Getzler \cite{BeG:13}, the main difference being that in that paper, the category $\C$ is assumed to possess finite limits. Here, we work with categories of manifolds, so this assumption does not hold. (Conversely, they do not impose the assumption that the maps $X\to\ast$ are covers, which fails in the setting of not-necessarily smooth analytic spaces.) In Section \ref{sec:highhom}, we show that the collection of $k$-morphisms in a Lie $n$-groupoid forms a Lie $(n-k)$-groupoid (Theorem \ref{thm:kmor}). In Section \ref{sec:stric}, we apply this to study Duskin's $n$-strictification functor $\tn$. 
(For a discussion in the absolute case, see \cite[Section~3.1, Example~5]{Gle:82}, \cite[Definition~3.5]{Hen:08} or \cite[Section~2]{Get:09}.) In the Lie setting, the functor $\tn$ does not always exist. However, when it does, it provides a partial left adjoint to the inclusion of $n$-stacks into the category of $\infty$-stacks. We recall the relevant properties and give a necessary and sufficient criterion for existence. In Section \ref{sec:descent}, we impose the additional axiom that quotients of regular equivalence relations in $\C$ exist. (This was first established by Godement for smooth manifolds, and for analytic manifolds over a complete normed field.) Under this assumption, we introduce local $n$-bundles and prove our main result on descent (Theorem \ref{thm:desc}). In Section \ref{sec:slie}, we extend results of Moore on simplicial groups in $\st$ to simplicial Lie groups. These results provide a ready supply of examples satisfying the hypotheses of Theorem \ref{thm:desc}. We conclude in Section \ref{sec:string} by applying our results to construct finite dimensional Lie 2-groups. We describe the resulting model of $\strn$ and compare it to the model constructed by Schommer-Pries \cite{Sch:11}. \subsection*{Acknowledgments} The results of Section \ref{sec:slie} represent joint work with E. Getzler. The author thanks him and J. Batson for helpful comments on several drafts. \section{Higher Stacks}\label{sec:stacks} We work in a category $\C$ with a subcategory of ``covers''. \begin{axiom}\label{axiom:fib} The category $\C$ has a terminal object $\ast$, and the map $X\to\ast$ is a cover for every object $X\in\C$. \end{axiom} \begin{axiom}\label{axiom:top} Pullbacks of covers along arbitrary maps exist and are covers. \end{axiom} \begin{axiom}\label{axiom:fg+g} If $g$ and $f$ are composable maps such that $fg$ and $g$ are covers, then $f$ is also a cover. 
\end{axiom} If it exists, the \emph{kernel pair} of a map $f:X\to Y$ in a category $\C$ is the pair of parallel arrows \begin{equation*} \begin{xy} \morphism/{@{>}@<3pt>}/<600,0>[X\times_YX`X;] \morphism/{@{>}@<-3pt>}/<600,0>[X\times_YX`X;] \end{xy} \end{equation*} The map $f:X\to Y$ is an \emph{effective epimorphism} if $f$ is the coequalizer of this pair: \begin{equation*} \begin{xy} \morphism/{@{>}@<3pt>}/<600,0>[X\times_YX`X;] \morphism/{@{>}@<-3pt>}/<600,0>[X\times_YX`X;] \morphism(600,0)<500,0>[X`Y;f] . \end{xy} \end{equation*} \begin{axiom}\label{axiom:subcan} Covers are effective epimorphisms. \end{axiom} Axioms \ref{axiom:fib} and \ref{axiom:top} ensure that isomorphisms are covers, that $\C$ has finite products, that projections along factors of products are covers, and that covers form a pre-topology on $\C$. Axiom \ref{axiom:fg+g}, which we borrow from Behrend and Getzler \cite{BeG:13}, ensures that being a cover is a local property: it is preserved \emph{and} reflected under pullback along covers. Likewise, Axiom \ref{axiom:subcan} ensures that being an isomorphism is a local property. We will be interested in the following examples of categories with covers satisfying these assumptions: \begin{enumerate} \item $\st$, the category of sets, with surjections as covers; \item $\sm$, the category of finite-dimensional smooth manifolds, with surjective submersions as covers; \item the category of Banach manifolds, with surjective submersions as covers (see \cite{Hen:08}); \item the category of analytic manifolds over a complete normed field, with surjective submersions as covers (see \cite[Chapter III]{Ser:64}). \end{enumerate} Denote by $\sC$ the category of simplicial objects in $\C$. In particular, we have the category of simplicial sets $\sst$. The category $\C$ embeds fully faithfully in $\sC$ as the category of constant simplicial diagrams. We do not distinguish between the category $\C$ and its essential image under this embedding. 
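In the motivating example $\C=\st$ with surjections as covers, Axiom \ref{axiom:subcan} can be checked directly: a surjection is the coequalizer of the two projections out of its kernel pair. A small discrete sanity check (the names below are ours, purely illustrative):

```python
from itertools import product

X = list(range(6))
f = lambda x: x % 3            # a surjection (cover) X -> Y in Set
Y = sorted({f(x) for x in X})

# Kernel pair: X x_Y X = {(x1, x2) : f(x1) = f(x2)}, with its two projections
K = [(a, b) for a, b in product(X, X) if f(a) == f(b)]

# The coequalizer of the projections identifies a ~ b whenever (a, b) is in K;
# its underlying set is the set of equivalence classes of that relation.
classes = {frozenset(b for b in X if (a, b) in K) for a in X}

# f being an effective epimorphism means the classes biject with Y via f
assert len(classes) == len(Y)
assert {f(min(c)) for c in classes} == set(Y)
```

The same computation fails for a non-surjective map, since elements of $Y$ outside the image acquire no equivalence class; this is one way to see why the cover condition matters.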
Let $\Delta^k$ denote the standard $k$-simplex, that is, the simplicial set $\Delta^k=\Delta(-,[k])$. A simplicial set $S$ is the colimit of its simplices \begin{equation*} \colim_{\Delta^k\to S} \Delta^k\cong S \end{equation*} \begin{definition} Let $X_\bullet$ be a simplicial object in $\C$. Let $S$ be a simplicial set. Denote by $\hom(S,X)$ the limit \begin{equation*} \hom(S,X):=\lim_{\Delta^k\to S} X_k \end{equation*} \end{definition} Note that such limits do not exist in general. By a Lie group, we will mean a group internal to the category $\C$, that is, an object $G$ with product $m:G\times G\to G$, inverse $i:G\to G$, and identity $e:\ast\to G$, satisfying the usual axioms. We may associate to a Lie group its \emph{nerve} $N_\bullet G\in\sC$, which is the simplicial object \begin{equation*} N_kG = G^k . \end{equation*} The face maps $\p_i:G^k\to G^{k-1}$ are defined for $i=0$ and $i=k$ by projection along the first and last factor respectively, and for $0<i<k$ by \begin{equation*} \p_i = G^{i-1} \times m \times G^{k-i-1} . \end{equation*} The degeneracy maps $\s_i:G^k\to G^{k+1}$ are defined by \begin{equation*} \s_i = G^i \times e \times G^{k-i} . \end{equation*} In fact, the above construction does not use the existence of an inverse, and works if $G$ is only a monoid in $\C$. We will use the following simplicial subsets of $\Delta^k$: \begin{enumerate} \item the boundary $\partial\Delta^k$ of $\Delta^k$; \item the $i^{\textit{th}}$ horn $\Lambda^k_i\subset\partial\Delta^k$, obtained from $\partial\Delta^k$ by omitting its $i^{\textit{th}}$ face. \end{enumerate} \begin{definition} Let $f:X\to Y$ be a map in $\sC$. The \emph{matching object} $M_k(f)$ is the limit \begin{equation*} \hom(\partial\Delta^k,X) \times_{\hom(\partial\Delta^k,Y)} Y_k . \end{equation*} Denote by $\mu_k(f)$ the induced map from $X_k$ to $M_k(f)$. 
The object of \emph{relative $\Lambda^k_i$-horns} $\Lambda^k_i(f)$ is the limit \begin{equation*} \hom(\Lambda^k_i,X) \times_{\hom(\Lambda^k_i,Y)} Y_k . \end{equation*} Denote by $\lambda^k_i(f)$ the induced map from $X_k$ to $\Lambda^k_i(f)$. \end{definition} A section of $M_k(f)$ is a $k$-simplex of $Y$ together with a lift of its boundary to $X$, and $\mu_k(f)$ measures the extent to which these relative spheres are filled by $k$-simplices of $X$. Similarly, a section of $\Lambda^k_i(f)$ is a $k$-simplex of $Y$ together with a lift of the $\Lambda^k_i$-horn to $X$, and $\lambda^k_i(f)$ measures the extent to which these relative horns are filled by $k$-simplices of $X$. In the absolute case, where the target $Y$ of the simplicial map $f$ is the terminal object $\ast$, we write $M_k(X)$ and $\Lambda^k_i(X)$ instead of $M_k(f)$ and $\Lambda^k_i(f)$, and similarly for the induced maps $\mu_k(X)$ and $\lambda^k_i(X)$. As an example, we have \begin{equation*} \Lambda^k_i(N_\bullet G) \cong \begin{cases} \ast , & k=0,1 , \\ G^k , & k>1 . \end{cases} \end{equation*} This is easily seen if $0<i<k$; for $i=0$ or $i=k$, the proof requires the existence of the inverse for $G$. In fact, the isomorphisms $\Lambda^k_i(N_\bullet G)\cong N_kG$, $k>1$, together with the condition $N_0G\cong\ast$, characterize the nerves of groups, and indeed give an alternative axiomatization of the theory of groups. Grothendieck extended this observation, omitting the condition $N_0G\cong\ast$. \begin{definition} A Lie groupoid $\G$ in $\C$ is an internal groupoid in $\C$, with morphisms $\G_1$ and objects $\G_0$, source and target maps $s,t:\G_1\to\G_0$, multiplication \begin{equation*} m\colon\G_1\times_{\G_0}^{t,s}\G_1\to \G_1, \end{equation*} unit $e:\G_0\to\G_1$, and inverse $i:\G_1\to\G_1$, such that $s$ and $t$ are covers. 
\end{definition} The \emph{nerve} $N_\bullet\G$ of a groupoid is the simplicial object $N_\bullet\G\in\sC$, \begin{equation*} N_k\G = \begin{cases} \G_0 , & k=0 , \\ \G_1 , & k=1 , \\ \underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_k , & k>1 . \end{cases} \end{equation*} On 1-simplices, the face maps $\p_0$ and $\p_1$ correspond to the target $t$ and source $s$. The degeneracy $\s_0\colon\G_0\to\G_1$ corresponds to the unit. On $k$-simplices for $k>1$, the face maps \begin{equation*} \p_i:\underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_k\to\underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_{k-1} \end{equation*} are defined for $i=0$ and $i=k$ by projection along the first and last factor respectively, and for $0<i<k$ by \begin{equation*} \p_i = \underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_{i-1}\times m \times \underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_{k-i-1} . \end{equation*} The degeneracy maps \begin{equation*} \s_i:\underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_k\to\underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_{k+1} \end{equation*} are defined by \begin{equation*} \s_i = \underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_{i} \times e \times \underbrace{\G_1\times_{\G_0}^{t,s}\cdots\times_{\G_0}^{t,s}\G_1}_{k-i} . \end{equation*} Grothendieck's observation, generalized from his setting of discrete groupoids to Lie groupoids, is as follows. \begin{proposition}[Grothendieck] A simplicial object $X_\bullet\in\sC$ is isomorphic to the nerve of a Lie groupoid if and only if the horn-filler maps \begin{equation*} \lambda^k_i(X) : X_k \to \Lambda^k_i(X) \end{equation*} are covers for $k=1$, and isomorphisms for $k>1$. In particular, a simplicial object $X_\bullet\in\sC$ is isomorphic to the nerve of a Lie group if and only if the above conditions are fulfilled and $X_0\cong\ast$.
\end{proposition} Maps between Lie groupoids are in bijection with simplicial maps between their nerves. As a result, the full subcategory of $\sC$ consisting of those simplicial objects satisfying the above conditions is equivalent to the category of Lie groupoids. Motivated by Grothendieck's observation, Duskin \cite{Dus:79} introduced a notion of $n$-groupoid valid in any topos. Duskin's notion was adapted by Henriques \cite{Hen:08} (see also \cite{Get:09}) to cover higher Lie groupoids. \begin{definition} \label{defi:ngpd} Let $n\in\mathbb{N}\cup\{\infty\}$. A \emph{Lie $n$-groupoid} is a simplicial object $X_\bullet\in\sC$ such that for all $k>0$ and $0\le i\le k$, the limit $\Lambda^k_i(X)$ exists in $\C$, the map \begin{equation*} \lambda^k_i(X)\colon X_k \to \Lambda^k_i(X) \end{equation*} is a cover, and it is an isomorphism for $k>n$. A \emph{Lie $n$-group} $X_\bullet$ is a Lie $n$-groupoid such that $X_0=\ast$. \end{definition} As an example, a Lie 0-groupoid is the same as an object of $\C$ (viewed as a constant simplicial diagram). \begin{definition}[Verdier] \label{def:hyp} Let $n\in\mathbb{N}\cup\{\infty\}$. A map $f:X_\bullet\to Y_\bullet$ of Lie $\infty$-groupoids is an \emph{$n$-hypercover} if, for all $k\ge0$, the limit $M_k(f)$ exists in $\C$, the map \begin{equation*} \mu_k(f)\colon X_k \to M_k(f) \end{equation*} is a cover for all $k$, and it is an isomorphism for $k\geq n$. \end{definition} Hypercovers in $\sst$ are the same as trivial fibrations, that is, Kan fibrations which are also weak homotopy equivalences. Hypercovers play much the same role in the theory of Lie $n$-groupoids. We refer to an $\infty$-hypercover simply as a ``hypercover.'' A $0$-hypercover is an isomorphism, while a $1$-hypercover of a Lie $0$-groupoid is isomorphic to the nerve of the cover $f_0:X_0\to Y_0$. 
In other words, \begin{equation*} X_k \cong \underbrace{X_0\times_{Y_0}\cdots\times_{Y_0}X_0}_{k+1} \end{equation*} \begin{definition} An \emph{augmentation} of a simplicial object $X_\bullet\in\sC$ is a simplicial map to an object $Y\in\C\subset\sC$. \end{definition} This amounts to the same thing as a map $\varepsilon:X_0\too Y$ that renders the diagram \begin{equation*} \begin{xy} \morphism|a|/@{>}@<3pt>/<400,0>[X_1`X_0;\p_0] \morphism|b|/@{>}@<-3pt>/<400,0>[X_1`X_0;\p_1] \morphism(400,0)<400,0>[X_0`Y;\varepsilon] \end{xy} \end{equation*} commutative. \begin{definition} The \emph{orbit space} $\pi_0(X)$ of a Lie $\infty$-groupoid $X_\bullet$ is a cover \begin{equation*} X_0\too\pi_0(X) \end{equation*} which coequalizes the pair of face maps \begin{equation*} \begin{xy} \morphism|a|/@{>}@<3pt>/<400,0>[X_1`X_0;\p_0] \morphism|b|/@{>}@<-3pt>/<400,0>[X_1`X_0;\p_1] \morphism(400,0)<400,0>[X_0`\pi_0(X);] \end{xy} \end{equation*} and is universal with this property. \end{definition} In other words, for any augmentation $\varepsilon:X_0\too Y$ of $X_\bullet$, there is a unique induced map \begin{equation*} \begin{xy} \morphism|a|/@{>}@<3pt>/<400,0>[X_1`X_0;\p_0] \morphism|b|/@{>}@<-3pt>/<400,0>[X_1`X_0;\p_1] \morphism(400,0)<400,0>[X_0`\pi_0(X);] \morphism(800,0)/{.>}/<0,-400>[\pi_0(X)`Y;] \morphism(400,0)<400,-400>[X_0`Y;] \end{xy} \end{equation*} Furthermore, if $\varepsilon$ is a cover, then so is the induced map from $\pi_0(X)$ to $Y$, by Axiom~\ref{axiom:fg+g}. It is characteristic of the theory of Lie $\infty$-groupoids that the orbit space $\pi_0(X)$ need not exist. One case in which the orbit space of $X_\bullet$ exists, however, is when $X_\bullet$ admits an augmentation $\varepsilon:X_\bullet\to Y$ which is a hypercover.
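The basic example of all of these notions is the Čech nerve of a cover. The following sketch is standard; the notation $\check{C}(U)_\bullet$ is ours, introduced only for this illustration.

```latex
% Given a cover U -> M in C, with M viewed as a Lie 0-groupoid, the
% Cech nerve is the simplicial object
\begin{equation*}
  \check{C}(U)_k \;=\; \underbrace{U\times_M\cdots\times_M U}_{k+1} ,
\end{equation*}
% augmented over M by the cover U -> M. Here mu_0 is the cover U -> M
% itself, and mu_k is an isomorphism for k >= 1 (for k = 1 it is the
% identity of U x_M U), so the augmentation is a 1-hypercover. By
% Proposition \ref{prop:hyppi0}, pi_0(\check{C}(U)) exists and equals M.
```

In particular, the orbit space of a Čech nerve always exists, even though orbit spaces do not exist in general.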
\begin{proposition}\label{prop:hyppi0} \mbox{} \begin{enumerate} \item An augmentation $\varepsilon:X_\bullet\to Y$ is an $n$-hypercover if and only if the maps $\varepsilon:X_0\to Y$ and $\mu_1(\varepsilon):X_1\to X_0\times_YX_0$ are covers, and the maps \begin{equation*} \mu_k(X)\colon X_k\to M_k(X) \end{equation*} are covers for $k>1$ and isomorphisms for $k\ge n$. \item If the augmentation $\varepsilon:X_\bullet\to Y$ is a hypercover, then $\pi_0(X)\cong Y$. \end{enumerate} \end{proposition} \begin{proof} If $\varepsilon:X_\bullet\to Y$ is an augmentation, we have \begin{equation*} M_k(\varepsilon) = \begin{cases} Y , & k=0 , \\ X_0\times_YX_0 , & k=1 , \\ M_k(X) , & k>1 . \end{cases} \end{equation*} The first part follows by inspection. The second part is a restatement of Axiom~\ref{axiom:subcan}. \end{proof} Let $\Delta_{\le n}\subseteq\Delta$ be the full subcategory with objects $\{[m] \mid m\le n\}$. An \emph{$n$-truncated} simplicial object is a functor \begin{equation*} X_{\le n} : \Delta_{\leq n}^\circ \to \C . \end{equation*} Denote by $\snC$ the category of $n$-truncated simplicial objects in $\C$. Restriction along $\Delta_{\leq n}\hookrightarrow\Delta$ induces the functor of \emph{$n$-truncation}: \begin{equation*} \tr_n : \sC \to \snC . \end{equation*} When $S$ is a simplicial set of dimension less than or equal to $n$, we abuse notation and write $S$ for $\tr_nS$. When $\C$ has finite limits, $n$-truncation $\tr_n:\sC\to\snC$ admits a right-adjoint $\csk_n:\snC\to\sC$, called the $n$-coskeleton. The composition \begin{equation*} \csk_n\circ\tr_n : \sC \to \sC \end{equation*} is denoted $\Csk_n$. When $\C$ does not possess finite limits, the functor $\Csk_n$ is only partially defined. In the category of simplicial sets, there is also a left-adjoint $\sk_n:\snst\to\sst$ to $n$-truncation, called the $n$-skeleton. The composition \begin{equation*} \sk_n\circ\tr_n : \sst \to \sst \end{equation*} is denoted $\Sk_n$. 
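For orientation, the coskeleton admits the following well-known levelwise description in $\sst$; we record it as a comment, since only the adjunction itself is used below.

```latex
% The n-coskeleton of a simplicial set X is computed levelwise by
\begin{equation*}
  (\Csk_n X)_k \;\cong\; \hom(\Sk_n\Delta^k,\,X) ,
\end{equation*}
% so a k-simplex of Csk_n X is a compatible family of simplices of X
% of dimension at most n, indexed by the faces of Delta^k. In
% particular, the unit X -> Csk_n X is an isomorphism in degrees <= n.
```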
Special cases of the next two lemmas first appeared in \cite{DHI:04}. Let $S\hookrightarrow T$ be a monomorphism of finite simplicial sets. \begin{lemma} \label{lemma:phrep} Suppose that $S$ is $n$-dimensional. Let $Y_\bullet\in\sC$ be a simplicial object such that the limit $\hom(T,Y)$ exists. Let $f:\tr_nX_\bullet\to\tr_nY_\bullet$ be a map in $\snC$ such that the matching object $M_k(f)$ exists for all $k\leq n$, and the map \begin{equation*} \mu_k(f) : X_k \to M_k(f) \end{equation*} is a cover for all $k\le n$. Then the limit \begin{equation*} \hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} exists. \end{lemma} \begin{proof} Filter the simplicial set $S$ \begin{equation*} \emptyset = S_0 \hookrightarrow \ldots \hookrightarrow S_N = S \end{equation*} where \begin{equation*} S_\ell \cong S_{\ell-1} \cup_{\partial\Delta^{n_\ell}} \Delta^{n_\ell} . \end{equation*} Here, $n_\ell\leq n$ for all $\ell$. Suppose that the limit \begin{equation*} Z_j = \hom(S_j,X)\times_{\hom(S_j,Y)}\hom(T,Y) \end{equation*} exists for $j<\ell$. This is true for $\ell=1$, since $Z_0\cong\hom(T,Y)$. The limit $Z_\ell$ is the pullback \begin{equation*} \begin{xy} \Square[Z_\ell`X_{n_\ell}`Z_{\ell-1}`M_{n_\ell}(f);``\mu_{n_\ell}(f)`] \end{xy} \end{equation*} This pullback exists because $\mu_{n_\ell}(f)$ is a cover. \end{proof} \begin{lemma} \label{lemma:hrep} Let $f:X_\bullet\to Y_\bullet$ be a hypercover such that the limit \begin{equation*} \hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} exists. Then the limit $\hom(T,X)$ exists and the map \begin{equation*} \hom(T,X) \to \hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} is a cover. \end{lemma} \begin{proof} Filter the simplicial set $T$ \begin{equation*} S = S_0 \hookrightarrow \ldots \hookrightarrow S_N = T \end{equation*} where \begin{equation*} S_\ell \cong S_{\ell-1} \cup_{\partial\Delta^{n_\ell}} \Delta^{n_\ell} . \end{equation*} Suppose that the limit \begin{equation*} Z_j = \hom(S_j,X)\times_{\hom(S_j,Y)}\hom(T,Y)
\end{equation*} exists for $j<\ell$, and that the map $Z_j\to Z_0$ is a cover. The limit \begin{equation*} Z_0=\hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} exists by hypothesis. The limit $Z_\ell$ is the pullback \begin{equation*} \begin{xy} \Square[Z_\ell`X_{n_\ell}`Z_{\ell-1}`M_{n_\ell}(f);``\mu_{n_\ell}(f)`] \end{xy} \end{equation*} This pullback exists because $\mu_{n_\ell}(f)$ is a cover. We conclude that the limit $Z_N=\hom(T,X)$ exists, and that the morphism $Z_N\to Z_0$ is a cover. \end{proof} \begin{theorem} \label{prop:hk2}\mbox{} \begin{enumerate} \item The composition of two $n$-hypercovers is an $n$-hypercover. \item The pullback of an $n$-hypercover along a map of Lie $\infty$-groupoids exists in $\sC$, and is an $n$-hypercover. \end{enumerate} \end{theorem} \begin{proof} Consider a composable pair of $n$-hypercovers \begin{equation*} \begin{xy} \morphism<400,0>[X_\bullet`Y_\bullet;g] \morphism(400,0)<400,0>[Y_\bullet`Z_\bullet;f] \end{xy} \end{equation*} Suppose that the matching object $M_j(fg)$ exists and that the map $\mu_j(fg)$ is a cover for $j<k$. This is certainly the case for $k=1$, since $M_0(fg)\cong Z_0$, and $\mu_0(fg)=f_0g_0$ is the composition of the two covers $f_0$ and $g_0$. Lemma \ref{lemma:phrep} now shows that the matching object $M_k(fg)$ exists. The square in the commuting diagram \begin{equation*} \begin{xy} \qtriangle<600,500>[X_k`M_k(g)`M_k(fg);\mu_k(g)`\mu_k(fg)`] \Square(600,0)[M_k(g)`Y_k`M_k(fg)`M_k(f);``\mu_k(f)`] \end{xy} \end{equation*} is a pullback. Since $f$ and $g$ are $n$-hypercovers, we see that $\mu_k(fg)$ is a composition of covers, hence a cover, for all $k$, and an isomorphism if $k\ge n$. We turn to the second statement. Consider an $n$-hypercover $f:X_\bullet\to Z_\bullet$ and a map $g:Y_\bullet\to Z_\bullet$ of Lie $\infty$-groupoids. Suppose that the limits $g^*X_j$ and $M_j(g^*f)$ exist and that the maps $\mu_j(g^*f)$ are covers for $j<k$.
Lemma \ref{lemma:phrep} shows that the matching object $M_k(g^*f)$ exists. The limit $g^\ast X_k$ is the pullback \begin{equation*} \begin{xy} \Square[g^*X_k`X_k`M_k(g^*f)`M_k(f);`\mu_k(g^*f)`\mu_k(f)`] \end{xy} \end{equation*} The map $\mu_k(f)$ is a cover for all $k$ because $f$ is an $n$-hypercover. This shows that the pullback $g^*X_k$ exists, that the map $\mu_k(g^*f)$ is a cover for all $k$, and that it is an isomorphism for $k\ge n$. \end{proof} There is also a relative version of the notion of a Lie $n$-groupoid, modeled on the definition of a Kan fibration in the theory of simplicial sets. \begin{definition}\label{defi:nst} Let $n\in\mathbb{N}\cup\{\infty\}$. A map $f:X_\bullet\to Y_\bullet$ of Lie $\infty$-groupoids is an \emph{$n$-stack} if for all $k>0$ and $0\le i\le k$, the limit $\Lambda^k_i(f)$ exists, the map \begin{equation*} \lambda^k_i(f) : X_k \to \Lambda^k_i(f) \end{equation*} is a cover, and it is an isomorphism if $k>n$. \end{definition} \begin{remark} If we wanted to emphasize the origin in simplicial homotopy theory, we might well have called $\infty$-stacks ``Kan fibrations'', as in \cite{Hen:08}. The present terminology emphasizes their relation with geometry. \end{remark} There are analogues of Lemmas \ref{lemma:phrep} and \ref{lemma:hrep} for $n$-stacks, due to Henriques \cite{Hen:08}, but only under certain additional conditions on the simplicial sets $S$ and $T$. \begin{definition} An inclusion of finite simplicial sets $S\hookrightarrow T$ is an \emph{expansion} if it can be written as a composition \begin{equation*} S = S_0 \hookrightarrow \cdots \hookrightarrow S_N = T \end{equation*} where \begin{equation*} S_\ell \cong S_{\ell-1} \cup_{\Lambda^{n_\ell}_{i_\ell}} \Delta^{n_\ell} . \end{equation*} A finite simplicial set $S$ is \emph{collapsible} if the inclusion of some, and hence any, vertex is an expansion. \end{definition} Let $S\hookrightarrow T$ be a monomorphism of finite simplicial sets. 
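Before stating the lemmas, two basic examples of these notions may be helpful; both are standard exercises in simplicial combinatorics.

```latex
% (i) Every horn inclusion Lambda^k_i -> Delta^k is a one-step
%     expansion, since Delta^k = Lambda^k_i cup_{Lambda^k_i} Delta^k.
% (ii) The simplex Delta^2 is collapsible: the inclusion of the vertex
%      {0} factors as the expansion
\begin{equation*}
  \{0\}
  \;\hookrightarrow\; \Delta^{\{0,1\}}
  \;\hookrightarrow\; \Delta^{\{0,1\}}\cup\Delta^{\{0,2\}} = \Lambda^2_0
  \;\hookrightarrow\; \Delta^2 ,
\end{equation*}
% attaching Delta^1 along the horn Lambda^1_0 = {0} twice, and then
% Delta^2 along Lambda^2_0. Iterating this cone argument shows that
% every simplex Delta^k is collapsible.
```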
\begin{lemma} \label{lemma:psrep} Suppose that $S$ is $n$-dimensional and collapsible. Let $Y_\bullet\in\sC$ be a simplicial object such that the limit $\hom(T,Y)$ exists and the restriction to any vertex \begin{equation*} \hom(T,Y) \to Y_0 \end{equation*} is a cover. Let $f\colon\tr_nX_\bullet\to\tr_nY_\bullet$ be a map in $\snC$ such that the limit $\Lambda^k_i(f)$ exists for all $0<k\leq n$ and $0\le i\le k$, and the map \begin{equation*} \lambda^k_i(f) : X_k \to \Lambda^k_i(f) \end{equation*} is a cover for all $0<k\le n$ and $0\le i\le k$. Then the limit \begin{equation*} \hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} exists. \end{lemma} \begin{proof} Filter the simplicial set $S$ \begin{equation*} \Delta^0 = S_0 \hookrightarrow \ldots \hookrightarrow S_N = S \end{equation*} where \begin{equation*} S_\ell \cong S_{\ell-1} \cup_{\Lambda^{n_\ell}_{i_\ell}} \Delta^{n_\ell} . \end{equation*} Here, $n_\ell\leq n$ for all $\ell$. Suppose that the limit \begin{equation*} Z_j = \hom(S_j,X)\times_{\hom(S_j,Y)}\hom(T,Y) \end{equation*} exists for $j<\ell$. This is true for $\ell=1$, by the hypotheses on $\hom(T,Y)$. We have the pullback diagram \begin{equation*} \begin{xy} \Square[Z_\ell`X_{n_\ell}`Z_{\ell-1}`\Lambda^{n_\ell}_{i_\ell}(f); ``\lambda^{n_\ell}_{i_\ell}(f)`] \end{xy} \end{equation*} This pullback exists because $\lambda^{n_\ell}_{i_\ell}(f)$ is a cover. \end{proof} \begin{lemma} \label{lemma:srep} Let $f:X_\bullet\to Y_\bullet$ be an $\infty$-stack such that the limit \begin{equation*} \hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} exists. Suppose that the inclusion $S\hookrightarrow T$ is an expansion. Then the limit $\hom(T,X)$ exists, and the map \begin{equation*} \hom(T,X) \to \hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} is a cover.
\end{lemma} \begin{proof} Filter the simplicial set $T$ \begin{equation*} S = S_0 \hookrightarrow \ldots \hookrightarrow S_N = T \end{equation*} where \begin{equation*} S_\ell \cong S_{\ell-1} \cup_{\Lambda^{n_\ell}_{i_\ell}} \Delta^{n_\ell} . \end{equation*} Suppose that the limit \begin{equation*} Z_j = \hom(S_j,X)\times_{\hom(S_j,Y)}\hom(T,Y) \end{equation*} exists for $j<\ell$, and the map $Z_j\to Z_0$ is a cover. The limit \begin{equation*} Z_0=\hom(S,X)\times_{\hom(S,Y)}\hom(T,Y) \end{equation*} exists by hypothesis. We have the pullback diagram \begin{equation*} \begin{xy} \Square[Z_\ell`X_{n_\ell}`Z_{\ell-1}`\Lambda^{n_\ell}_{i_\ell}(f); ``\lambda^{n_\ell}_{i_\ell}(f)`] \end{xy} \end{equation*} This pullback exists because $\lambda^{n_\ell}_{i_\ell}(f)$ is a cover. We conclude that the limit $Z_N=\hom(T,X)$ exists, and that the morphism $Z_N\to Z_0$ is a cover. \end{proof} \begin{theorem} \label{thm:hk1} \mbox{} \begin{enumerate} \item An $n$-hypercover is an $n$-stack. \item A hypercover which is an $n$-stack is an $n$-hypercover. \item The composition of two $n$-stacks is an $n$-stack. \item Let $f\colon X_\bullet\to Y_\bullet$ be an $n$-stack and let $g\colon Z_\bullet\to Y_\bullet$ be a map of Lie $\infty$-groupoids. If the pullback of $f_0$ along $g_0$ exists in $\C$, then the pullback of $f$ along $g$ exists in $\sC$ and this pullback is an $n$-stack. \end{enumerate} \end{theorem} \begin{proof} Let $f\colon X_\bullet\to Y_\bullet$ be an $\infty$-hypercover. We see that $f$ is an $\infty$-stack by considering the finite inclusions $\Lambda^k_i\hookrightarrow\Delta^k$ and applying Lemma \ref{lemma:hrep}. It remains to show that if $f$ is an $n$-hypercover, then $\lambda^k_i(f)$ is an isomorphism when $k>n$ (and $0\le i\le k$).
The square in the commuting diagram \begin{equation} \label{mulambda} \begin{xy} \qtriangle<600,500>[X_k`M_k(f)`\Lambda^k_i(f);\mu_k(f)`\lambda^k_i(f)`] \Square(600,0)[M_k(f)`X_{k-1}`\Lambda^k_i(f)`M_{k-1}(f);``\mu_{k-1}(f)`] \end{xy} \end{equation} is a pullback. If $f$ is an $n$-hypercover and $k>n$, then the maps $\mu_k(f)$ and $\mu_{k-1}(f)$ are isomorphisms, and we see that $\lambda^k_i(f)$ is an isomorphism. To prove the second part, we consider \eqref{mulambda} in the case where $f$ is a hypercover and an $n$-stack. If $k>n$, then $\lambda^k_i(f)$ is an isomorphism. The map $\mu_k(f)$ is a cover, and, by Axiom \ref{axiom:subcan}, an epimorphism. The diagram \eqref{mulambda} now implies that $\mu_k(f)$ is an isomorphism. Similarly, the map $M_k(f)\to\Lambda^k_i(f)$ is an isomorphism. The map $\Lambda^k_i(f)\to M_{k-1}(f)$ is induced by the inclusion $\partial\Delta^{k-1}\hookrightarrow\Lambda^k_i$. Lemma \ref{lemma:phrep} shows that it is a cover. The pullback of $\mu_{k-1}(f)$ along this cover is the map $M_k(f)\to\Lambda^k_i(f)$. This map is an isomorphism for $k>n$. Axiom \ref{axiom:subcan} therefore implies that $\mu_{k-1}(f)$ is an isomorphism for $k>n$. Hence $f$ is an $n$-hypercover. Turning to the third part of the theorem, consider a composable pair of $n$-stacks \begin{equation*} \begin{xy} \morphism<400,0>[X_\bullet`Y_\bullet;g] \morphism(400,0)<400,0>[Y_\bullet`Z_\bullet;f] \end{xy} \end{equation*} Suppose that the limit $\Lambda^j_i(fg)$ exists and that the map $\lambda^j_i(fg)$ is a cover, for all $0<j<k$ (and $0\le i\le j$). Lemma \ref{lemma:psrep} now shows that the limit $\Lambda^k_i(fg)$ exists. The square in the commuting diagram \begin{equation*} \begin{xy} \qtriangle<600,500>[X_k`\Lambda^k_i(g)`\Lambda^k_i(fg); \lambda^k_i(g)`\lambda^k_i(fg)`] \Square(600,0)[\Lambda^k_i(g)`Y_k`\Lambda^k_i(fg)`\Lambda^k_i(f); ``\lambda^k_i(f)`] \end{xy} \end{equation*} is a pullback.
Since $f$ and $g$ are $n$-stacks, we see that $\lambda^k_i(fg)$ is a composition of covers, hence a cover, for all $k>0$ and $0\le i\le k$, and an isomorphism if $k>n$. We turn to the fourth statement. Consider an $n$-stack $f:X_\bullet\to Z_\bullet$ and a map $g:Y_\bullet\to Z_\bullet$ of Lie $\infty$-groupoids. The pullback $g^*X_0\cong X_0\times_{Z_0}Y_0$ exists by hypothesis. Suppose that, for $j<k$ (and $0\le i\le j$), the pullback $g^*X_j$ exists, the limit $\Lambda^j_i(g^*f)$ exists, and the map $\lambda^j_i(g^*f)$ is a cover. Lemma \ref{lemma:psrep} shows that the limit $\Lambda^k_i(g^*f)$ exists for $0\le i\le k$. The limit $g^\ast X_k$ is the pullback \begin{equation*} \begin{xy} \Square[g^*X_k`X_k`\Lambda^k_i(g^*f)`\Lambda^k_i(f); `\lambda^k_i(g^*f)`\lambda^k_i(f)`] \end{xy} \end{equation*} The map $\lambda^k_i(f)$ is a cover for all $k>0$ because $f$ is an $n$-stack. This shows that the pullback $g^*X_k$ exists, that the map $\lambda^k_i(g^*f)$ is a cover for all $k>0$ and $0\le i\le k$, and that this map is an isomorphism for $k>n$. \end{proof} \section{Higher Morphism Spaces in Higher Stacks} \label{sec:highhom} Order-preserving maps of all finite ordinals, including the empty ordinal, form a category $\Delta_+$ extending $\Delta$. An augmented simplicial set is a functor \begin{equation*} \begin{xy} \morphism[\Delta_+^{\circ}`\st;] \end{xy} \end{equation*} Such a functor consists of a simplicial set $S$ equipped with a map to a constant simplicial set $S_{-1}$. The ordinal sum \begin{equation*} [n]+[m]:=\{0\leq\ldots\leq n\leq 0'\leq\ldots\leq m'\}=[n+m+1] \end{equation*} endows the category $\Delta_+$ with a monoidal structure. This structure extends along the Yoneda embedding \begin{equation*} \begin{xy} \morphism[\Delta_+`\sst_+;] \end{xy} \end{equation*} to give a closed monoidal structure on $\sst_+$ called the \emph{join}, and denoted $\star$.
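The join admits an explicit levelwise description, which we recall for orientation; this is the standard formula, included here as a comment.

```latex
% Levelwise, the join of augmented simplicial sets is the Day
% convolution of the ordinal sum:
\begin{equation*}
  (S\star T)_k \;=\; \coprod_{\substack{i+j=k-1 \\ i,j\ge-1}} S_i\times T_j ,
\end{equation*}
% where S_{-1} and T_{-1} denote the augmentations. In particular,
\begin{equation*}
  \Delta^n\star\Delta^m \;\cong\; \Delta^{n+m+1} ,
\end{equation*}
% and S * Delta^0 is the cone on S.
```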
Given an augmented simplicial set $K$, denote the right adjoint to $K\star(-)$ by \begin{equation*} \begin{xy} \morphism<750,0>[\sst_+`\sst_+;(-)^{K\star}] \end{xy} \end{equation*} \begin{example} The functor $(-)^{\Delta^{n-1}\star}$ is Illusie's $\Dec_n(-)$ (cf.\ \cite[Chapter VI]{Ill:72}). \end{example} The inclusion $i:\Delta\hookrightarrow\Delta_+$ provides a forgetful functor \begin{equation*} \begin{xy} \morphism[\sst_+`\sst;i^*] \end{xy} \end{equation*} Its right adjoint \begin{equation*} \begin{xy} \morphism[\sst`\sst_+;i_*] \end{xy} \end{equation*} augments a simplicial set by a point. \begin{definition}\mbox{} \begin{enumerate} \item Let $S$ and $T$ be simplicial sets. Denote by $S\star T$ the simplicial set \begin{equation*} S\star T = i^*((i_*S)\star(i_*T)) . \end{equation*} \item Let $X_{\bullet}$ be in $\sC$ and let $S$ be a finite simplicial set. Denote by $X^{S\star}_{\bullet}$ the putative simplicial object with $k$-simplices \begin{align*} X^{S\star}_k:=\hom(S\star\Delta^k,X) \end{align*} Face and degeneracy maps are given by $1\star d_i$ and $1\star s_i$. \end{enumerate} \end{definition} In \cite{Dus:02}, Duskin gave a construction of the collection of morphisms in a higher category.\footnote{Duskin called this the ``path-homotopy complex''. His construction is hinted at in the earlier treatment of nerves in \cite{Ill:72}, and of weak Kan complexes in \cite{BoV:73}.} \begin{definition} Let $X_{\bullet}$ be an $\infty$-groupoid in $\st$. Define $P^{\geq1}X_{\bullet}$ to be the pullback \begin{equation*} \begin{xy} \Square[P^{\geq1}X_{\bullet}`X^{\Delta^0\star}_\bullet`X_0`X_{\bullet};```] \end{xy} \end{equation*} \end{definition} The simplicial set $P^{\geq 1}X$ is an $\infty$-groupoid which models the ``space'' of $1$-morphisms in $X_{\bullet}$.\footnote{Lurie \cite{Lur:09} denotes this construction `$\hom_X^L$'.
Given $x,y\in X_0$, Lurie's $\hom_X^L(x,y)$ is the fiber at $(x,y)$ of a canonical map $P^{\geq 1}X\rightarrow\hom(\partial\Delta^1,X)$.} We will generalize $P^{\ge1}X_\bullet$ to higher morphism spaces $P^{\ge k}X_\bullet$, for $k>0$, and at the same time, we will define a relative version of the construction. The discussion follows the lines of Joyal's proof of \cite[Theorem~3.8]{Joy:02}, except that we restrict attention to the case \begin{equation*} (S\hookrightarrow T) = (\partial\Delta^{k-1}\hookrightarrow\Delta^{k-1}) , \end{equation*} and work in the Lie setting. Given a proper, non-empty subset $J$ of $[n]$, let \begin{equation*} \Lambda^n_J = \bigcup_{i\in J} \partial_i\Delta^n \subset \partial\Delta^n . \end{equation*} An induction shows that $\Lambda^n_J$ is collapsible. Let $f:X_\bullet\to Y_\bullet$ be an $\infty$-stack. The $\ell$-simplices of the simplicial object \begin{equation} \label{star} X_\bullet^{\partial\Delta^{k-1}\star}\times_{Y_\bullet^{\partial\Delta^{k-1}\star}} Y_\bullet^{\Delta^{k-1}\star} \end{equation} are given by the limit \begin{multline*} \hom(\partial\Delta^{k-1}\star\Delta^\ell,X) \times_{\hom(\partial\Delta^{k-1}\star\Delta^\ell,Y)}Y_{k+\ell} \\ \cong \hom(\Lambda^{k+\ell}_{\{0,\ldots,k-1\}},X) \times_{\hom(\Lambda^{k+\ell}_{\{0,\ldots,k-1\}},Y)} Y_{k+\ell} . \end{multline*} Lemma \ref{lemma:psrep} implies that this limit exists. In the special case of vertices $\ell=0$, we obtain a natural identification with the space of relative horns $\Lambda^k_k(f)$. Denote by $f^{\partial\Delta^{k-1}\star}$ the induced map \begin{equation*} f^{\partial\Delta^{k-1}\star}\colon X^{\Delta^{k-1}\star} \to X_\bullet^{\partial\Delta^{k-1}\star}\times_{Y_\bullet^{\partial\Delta^{k-1}\star}} Y_\bullet^{\Delta^{k-1}\star} . \end{equation*} \begin{lemma} \label{star-l} Let $f:X_\bullet\to Y_\bullet$ be an $\infty$-stack. 
For $\ell>0$, the map $\Lambda^\ell_i(f^{\partial\Delta^{k-1}\star})$ is canonically isomorphic to $\Lambda^{k+\ell}_{k+i}(f)$. \end{lemma} \begin{proof} An exercise in the combinatorics of joins shows that \begin{align*} ( \Delta^{k-1}\star\Lambda^\ell_i \hooklongrightarrow \Delta^{k-1}\star\Delta^\ell ) & \cong ( \Lambda^{k+\ell}_{\{k,\ldots,\widehat{k+i},\ldots,k+\ell\}} \hooklongrightarrow \Delta^{k+\ell} ) , \text{ and} \\ ( \partial\Delta^{k-1}\star\Delta^\ell \hooklongrightarrow \Delta^{k-1}\star\Delta^\ell ) & \cong ( \Lambda^{k+\ell}_{\{0,\ldots,k-1\}} \hooklongrightarrow \Delta^{k+\ell} ) . \end{align*} In this way, we obtain a pushout square \begin{equation*} \begin{xy} \square<1000,500>[\partial\Delta^{k-1}\star\Lambda^\ell_i` \partial\Delta^{k-1}\star\Delta^\ell` \Delta^{k-1}\star\Lambda^\ell_i`\Lambda^{k+\ell}_{k+i};```] \end{xy} \end{equation*} which gives rise to the pullback square \begin{equation*} \begin{xy} \square<1550,500>[\Lambda^{k+\ell}_{k+i}(f)` (X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star})_{\ell}` \Lambda^\ell_i(X^{\Delta^{k-1}\star})` \Lambda^{\ell}_i(X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star});```] \end{xy} \end{equation*} But this is also the pullback square defining $\Lambda^\ell_i(f^{\partial\Delta^{k-1}\star})$. \end{proof} It follows that if $f$ is an $\infty$-stack, then so is $f^{\partial\Delta^{k-1}\star}$, and if $f$ is an $n$-stack, $f^{\partial\Delta^{k-1}\star}$ is an $(n-k)$-stack. 
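As a plausibility check, consider the absolute case $X_\bullet=N_\bullet G$ for a Lie group $G$, with $f$ the map to the terminal object; the following computation is standard for discrete groups and carries over to the present setting.

```latex
% Since Delta^0 * Delta^k = Delta^{k+1}, the decalage of the nerve is
\begin{equation*}
  (N_\bullet G)^{\Delta^0\star}_k \;=\; N_{k+1}G \;=\; G^{k+1} ,
\end{equation*}
% the familiar simplicial object E_bullet G. Pulling back over
% N_0 G = * retains only those simplices whose restriction to the free
% end Delta^k is totally degenerate, and one finds
\begin{equation*}
  P^{\ge1}(N_\bullet G)_\bullet \;\cong\; G ,
\end{equation*}
% the constant object of 1-morphisms: a Lie 0-groupoid, consistent
% with Theorem \ref{thm:kmor} below, since N_bullet G -> * is a 1-stack.
```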
\begin{definition}\label{defi:relkmor} For $k>0$, the \emph{relative higher morphism space} $P^{\geq k}(f)_{\bullet}$ is the pullback \begin{equation*} \begin{xy} \Square[P^{\geq k}(f)_{\bullet}`X^{\Delta^{k-1}\star}_{\bullet}`\Lambda^k_k(f)` X_\bullet^{\partial\Delta^{k-1}\star}\times_{Y_\bullet^{\partial\Delta^{k-1}\star}} Y_\bullet^{\Delta^{k-1}\star};```] \end{xy} \end{equation*} \end{definition} The following result is an immediate consequence of Lemma~\ref{star-l} and Theorem~\ref{thm:hk1}. \begin{theorem}\label{thm:kmor} If $f:X_{\bullet}\to Y_{\bullet}$ is an $n$-stack, the relative higher morphism space $P^{\geq k}(f)_{\bullet}$ is a Lie $(n-k)$-groupoid. \end{theorem} The vertices of $P^{\geq k}(f)_{\bullet}$ are the $k$-simplices of $X_{\bullet}$. Its $1$-simplices correspond to $(k+1)$-simplices $x\in X_{k+1}$ such that \begin{align*} f(x) &= s_kd_kf(x) \intertext{and, for $i<k$,} d_ix &= s_{k-1}d_{k-1}d_ix . \end{align*} We interpret $x$ as a path rel boundary in the fiber of $f$, which begins at $d_{k+1}x$ and ends at $d_kx$. When the matching object $M_k(f)$ exists, it provides a natural augmentation \begin{equation} \label{augmentation} \pi\colon P^{\geq k}(f)_{\bullet} \to M_k(f) . \end{equation} The map underlying $\pi\colon P^{\geq k}(f)_0\cong X_k \to M_k(f)$ is $\mu_k(f)$. This determines an augmentation because the diagram \begin{equation*} \begin{xy} \morphism|a|/{@{>}@<3pt>}/<700,0>[P^{\geq k}(f)_1`P^{\geq k}(f)_0;\p_0] \morphism|b|/{@{>}@<-3pt>}/<700,0>[P^{\geq k}(f)_1`P^{\geq k}(f)_0;\p_1] \morphism(700,0)<600,0>[P^{\geq k}(f)_0`M_k(f);\pi] \end{xy} \end{equation*} commutes. The augmentation \eqref{augmentation} encodes the idea that vertices in the relative higher morphism space are lifts of a $k$-morphism in $Y_{\bullet}$, that edges are paths rel boundary between these lifts such that the paths live in the fiber of $f$, etc. 
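Unwinding the conditions above in the lowest case $k=1$ may be helpful; the following is a restatement only, written element-wise as though $\C$ had enough points.

```latex
% For k = 1, a vertex of P^{>=1}(f) is an edge of X, and an edge of
% P^{>=1}(f) is a 2-simplex x in X_2 satisfying
\begin{equation*}
  f(x) \;=\; s_1 d_1 f(x) ,
  \qquad
  d_0 x \;=\; s_0 d_0 d_0 x ,
\end{equation*}
% i.e. its image in Y is a degenerate 2-simplex, and its 0th face is a
% degenerate edge. Such an x is a path rel boundary in the fiber of f,
% from the edge d_2 x to the edge d_1 x.
```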
\begin{theorem} \label{thm:kmorhyp} If $f$ is an $n$-hypercover, then the augmentation \eqref{augmentation} exists and is an $(n-k)$-hypercover. \end{theorem} \begin{proof} We prove that $\pi$ is an $(n-k)$-hypercover by showing that the square \begin{equation*} \begin{xy} \square<1000,500>[P^{\geq k}(f)_\ell`X_{k+\ell}`M_\ell(\pi)`M_{k+\ell}(f); `\mu_\ell(\pi)`\mu_{k+\ell}(f)`] \end{xy} \end{equation*} is a pullback. For $\ell=0$, this is evident: both horizontal maps are isomorphisms in this case. For $\ell>0$, the above square is the top half of a commutative diagram \begin{equation*} \begin{xy} \square<1200,500>[P^{\geq k}(f)_\ell`X_{k+\ell}`M_\ell(\pi)`M_{k+\ell}(f); `\mu_\ell(\pi)`\mu_{k+\ell}(f)`] \square(0,-500)<1200,500>[M_\ell(\pi)`M_{k+\ell}(f)` \Lambda^k_k(f)`(X^{\partial\Delta^{k-1}\star} \times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star})_\ell;```] \end{xy} \end{equation*} The outer rectangle is a pullback by definition. Thus, it suffices to prove that the bottom square is a pullback. For $\ell=1$, this may be checked directly. 
For $\ell>1$, an exercise in the combinatorics of joins shows that we have a pushout square \begin{equation*} \begin{xy} \square<1000,500>[\partial\Delta^{k-1}\star\partial\Delta^\ell` \partial\Delta^{k-1}\star\Delta^\ell` \Delta^{k-1}\star\partial\Delta^\ell`\partial\Delta^{k+\ell};```] \end{xy} \end{equation*} which gives rise to a pullback square \begin{equation*} \begin{xy} \square<1550,500>[M_{k+\ell}(f)` M_\ell(X^{\Delta^{k-1}\star})` (X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star})_\ell` M_{\ell}(X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star});```] \end{xy} \end{equation*} This square embeds in a commutative diagram \begin{equation*} \begin{xy} \square<1000,500>[M_\ell(P^{\geq k}(f))`M_{k+\ell}(f)`\Lambda^k_k(f)`(X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star})_\ell;```] \square(1000,0)<1550,500>[M_{k+\ell}(f)` M_\ell(X^{\Delta^{k-1}\star})` (X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star})_\ell` M_{\ell}(X^{\partial\Delta^{k-1}\star}\times_{Y^{\partial\Delta^{k-1}\star}}Y^{\Delta^{k-1}\star});```] \end{xy} \end{equation*} The outer rectangle is a pullback because $M_\ell(-)$ commutes with limits. We observed above that the right square is a pullback. We conclude that the left square is a pullback, completing the proof that $\mu_\ell(\pi)$ is a cover. Moreover, $\mu_\ell(\pi)$ is a pullback of $\mu_{k+\ell}(f)$, which is an isomorphism for $k+\ell\ge n$; hence $\mu_\ell(\pi)$ is an isomorphism for $\ell\ge n-k$, and $\pi$ is an $(n-k)$-hypercover. \end{proof} \section{Strictification}\label{sec:stric} In this section we recall Duskin's ``$n$-strictification'' functor $\tn$ for $n\ge 0$. This is a partially defined left-adjoint to the inclusion of the category of $n$-stacks into the category of $\infty$-stacks. We establish its main properties in Propositions \ref{prop:tn} and \ref{prop:tnhyp}. Let $f\colon X_{\bullet}\to Y_{\bullet}$ be an $\infty$-stack such that the orbit space $\pi_0(P^{\geq n}(f))$ exists.
We define a map \begin{equation}\label{eq:ntn} \tr_n\tn(f)\colon\tr_n\tn(X,f)_\bullet\to\tr_n Y_\bullet \end{equation} On $k$-simplices, for $k<n$, $\tr_n\tn(f)$ is the map \begin{align*} f_k\colon X_k\to Y_k \end{align*} On $n$-simplices, $\tr_n\tn(f)$ is the canonical map \begin{align*} \pi_0(P^{\geq n}(f))\to Y_n \end{align*} \begin{lemma}\label{lemma:di1} Let $f\colon X_{\bullet}\to Y_{\bullet}$ be an $\infty$-stack such that the orbit space $\pi_0(P^{\geq n}(f))$ exists. The maps $\lambda^k_i(\tn(f))$ are covers for $k\le n$. For all $i$, the limit $\Lambda^{n+1}_i(\tn(f))$ exists and the map $\Lambda^{n+1}_i(f)\to\Lambda^{n+1}_i(\tn(f))$ is a cover. \end{lemma} \begin{proof} For $k<n$, the natural map $\tr_n X_\bullet\to\tr_n\tn(X,f)_\bullet$ induces an isomorphism between the maps $\lambda^k_i(f)$ and $\lambda^k_i(\tr_n\tn(f))$. For $k=n$, we have a commuting square \begin{equation*} \begin{xy} \square<750,500>[X_n`\tn(X,f)_n`\Lambda^n_i(f)`\Lambda^n_i(\tn(f)); `\lambda^n_i(f)`\lambda^n_i(\tn(f))`\cong] \end{xy} \end{equation*} This square guarantees that the map $\lambda^n_i(\tn(f))$ is a cover for all $i$. Indeed, the top horizontal map is the cover $X_n\to\pi_0(P^{\geq n}(f))$, the map $\lambda^n_i(f)$ is a cover by assumption, and the bottom horizontal map is an isomorphism. Axiom~\ref{axiom:fg+g} implies that $\lambda^n_i(\tn(f))$ is a cover. Lemma \ref{lemma:psrep} guarantees that, for all $i$, the limit $\Lambda^{n+1}_i(\tn(f))$ exists. It remains to show that the map $\Lambda^{n+1}_i(f)\to\Lambda^{n+1}_i(\tn(f))$ is a cover for all $i$. Recall that given a proper, non-empty subset $J$ of $[n+1]$, \begin{equation*} \Lambda^{n+1}_J=\bigcup_{i\in J} \partial_i\Delta^{n+1} \subset \partial\Delta^{n+1} . \end{equation*} For each $J$, there is a map \begin{multline*} \Lambda^{n+1}_J(f) = \hom(\Lambda^{n+1}_J,X)\times_{\hom(\Lambda^{n+1}_J,Y)}Y_{n+1} \\ \too \Lambda^{n+1}_J(\tn(f)) = \hom(\Lambda^{n+1}_J,\tn(X,f))\times_{\hom(\Lambda^{n+1}_J,Y)}Y_{n+1} . 
\end{multline*} We will show that it is a cover by induction on $|J|$. Let $J_+=J\cup\{j\}$, where $j\notin J$, and let \begin{equation}\label{eq:Jintj} \Lambda^{n+1}_{J\cap j}(f) = \hom(\Lambda^{n+1}_J\cap\partial_j\Delta^{n+1},X) \times_{\hom(\Lambda^{n+1}_J\cap\partial_j\Delta^{n+1},Y)} Y_{n+1} . \end{equation} We have a pair of pullback diagrams in which the vertical maps are covers: \begin{equation*} \begin{xy} \Square[\Lambda^{n+1}_J(f)\times_{\Lambda^{n+1}_{J\cap j}(f)}\pi_0(P^{\ge n}(f))` \Lambda^{n+1}_J(f)`\Lambda^{n+1}_{J_+}(\tn(f))` \Lambda^{n+1}_J(\tn(f));```] \end{xy} \end{equation*} and \begin{equation*} \begin{xy} \Square[\Lambda^{n+1}_{J_+}(f)`X_n` \Lambda^{n+1}_J(f)\times_{\Lambda^{n+1}_{J\cap j}(f)}\pi_0(P^{\ge n}(f))` \pi_0(P^{\ge n}(f));```] \end{xy} \end{equation*} Composing the left vertical arrows, we see that the map \begin{equation*} \Lambda^{n+1}_{J_+}(f) \too \Lambda^{n+1}_{J_+}(\tn(f)) \end{equation*} is a cover; this completes the induction step. \end{proof} Our goal is now to construct, for any $i$, a ``missing face map'' \begin{equation*} d_i\colon\Lambda^{n+1}_i(\tn(f))\to\pi_0(P^{\ge n}(f))=\tn(X,f)_n . \end{equation*} Compose the covers $\Lambda^{n+1}_i(f)\to\Lambda^{n+1}_i(\tn(f))$ and $\lambda^{n+1}_i(f)$ to obtain a cover $X_{n+1}\to\Lambda^{n+1}_i(\tn(f))$. Denote by $q d_i$ the composite \begin{equation*} \begin{xy} \morphism[X_{n+1}`X_n;d_i] \morphism(500,0)[X_n`\pi_0(P^{\ge n}(f));] \end{xy} \end{equation*} \begin{lemma}\label{lemma:di2} The diagram \begin{equation}\label{eq:di} \begin{xy} \morphism/{@{>}@<3pt>}/<1000,0>[X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1}`X_{n+1};] \morphism/{@{>}@<-3pt>}/<1000,0>[X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1}`X_{n+1};] \morphism(1000,0)<750,0>[X_{n+1}`\pi_0(P^{\geq n}(f));qd_i] \end{xy} \end{equation} commutes.
\end{lemma} \begin{remark} Recall that a \emph{point} of $\C$ is a functor $p\colon\C\to\st$ which preserves finite limits, which preserves arbitrary colimits, and which takes covers to surjections. We say $\C$ has \emph{enough points} if for every pair of maps $f\neq g$ in $\C$, there exists a point $p$ such that $p(f)\neq p(g)$. We prove the lemma under the assumption that $\C$ has enough points. This assumption is satisfied in many examples of interest, and has the benefit of allowing for an elementary proof. One could proceed without this assumption by using Ehresmann's theory of sketches in combination with Barr's theorem on the existence of a Boolean cover of the topos of sheaves on $\C$. The latter approach is discussed in \cite[Section 2 -- ``For Logical Reasons'']{Bek:04} or in more depth in \cite[Chapter 7, especially 7.5]{Joh:77}. \end{remark} \begin{proof} To show that \ref{eq:di} commutes, we extend it to a diagram \begin{equation*} \begin{xy} \morphism(-1000,0)<1000,0>[\mathbb{K}`X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1};g] \morphism/{@{>}@<3pt>}/<1000,0>[X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1}`X_{n+1};] \morphism/{@{>}@<-3pt>}/<1000,0>[X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1}`X_{n+1};] \morphism(1000,0)<750,0>[X_{n+1}`\pi_0(P^{\geq n}(f));qd_i] \end{xy} \end{equation*} where the fork \begin{equation*} \begin{xy} \morphism/{@<3pt>}/[\mathbb{K}`X_{n+1};] \morphism/{@<-3pt>}/[\mathbb{K}`X_{n+1};] \end{xy} \end{equation*} is a ``fattened version'' of the kernel pair of the cover $X_{n+1}\to\Lambda^{n+1}_i(\tn(f))$. We show that the diagram \begin{equation}\label{eq:difat} \begin{xy} \morphism/{@{>}@<3pt>}/[\mathbb{K}`X_{n+1};] \morphism/{@{>}@<-3pt>}/[\mathbb{K}`X_{n+1};] \morphism(500,0)<750,0>[X_{n+1}`\pi_0(P^{\geq n}(f));qd_i] \end{xy} \end{equation} commutes and that the map $g$ is an epi. This implies that \ref{eq:di} commutes. We begin by defining the map $g\colon\mathbb{K}\to X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1}$.
Define $\mathbb{H}^{n+1}_i(f)$ to be the pullback \begin{equation*} \begin{xy} \morphism(0,0)<1750,0>[\mathbb{H}^{n+1}_i(f)`(P^{\ge n}(f)_1)^{\times(n+1)};] \morphism(0,0)<0,-1000>[\mathbb{H}^{n+1}_i(f)`\Lambda^{n+1}_i(f);(d_0)^{\times(n+1)}] \morphism(1750,0)|r|<0,-500>[(P^{\ge n}(f)_1)^{\times(n+1)}`(X_n)^{\times(n+1)};(d_0)^{\times(n+1)}] \morphism(1750,-500)<0,-500>[(X_n)^{\times(n+1)}`(\pi_0(P^{\ge n}(f)))^{\times(n+1)};] \morphism(0,-1000)<750,0>[\Lambda^{n+1}_i(f)`(X_n)^{\times(n+1)};] \morphism(750,-1000)<1000,0>[(X_n)^{\times(n+1)}`(\pi_0(P^{\ge n}(f)))^{\times(n+1)};] \end{xy} \end{equation*} Observe that the pullback exists because the right vertical maps are both covers. In addition to the map $(d_0)^{\times(n+1)}\colon\mathbb{H}^{n+1}_i(f)\to\Lambda^{n+1}_i(f)$, there is another map $(d_1)^{\times(n+1)}\colon\mathbb{H}^{n+1}_i(f)\to\Lambda^{n+1}_i(f)$. Define $\mathbb{K}$ to be the iterated pullback \begin{equation*} \begin{xy} \morphism(0,1000)<0,-500>[\mathbb{K}`X_{n+1}\times_{\Lambda^{n+1}_i(f)}\mathbb{H}^{n+1}_i(f);] \morphism(0,1000)<1850,0>[\mathbb{K}`X_{n+1};] \morphism(1850,1000)|r|<0,-500>[X_{n+1}`\Lambda^{n+1}_i(f);\lambda^{n+1}_i(f)] \morphism(1000,500)<850,0>[\mathbb{H}^{n+1}_i(f)`\Lambda^{n+1}_i(f);(d_1)^{\times(n+1)}] \square<1000,500>[X_{n+1}\times_{\Lambda^{n+1}_i(f)}\mathbb{H}^{n+1}_i(f)`\mathbb{H}^{n+1}_i(f)`X_{n+1}`\Lambda^{n+1}_i(f);``(d_0)^{\times(n+1)}`\lambda^{n+1}_i(f)] \end{xy} \end{equation*} The limit $\mathbb{K}$ exists because $\lambda^{n+1}_i(f)$ is a cover ($f$ is an $\infty$-stack). 
The projections along the left and right $X_{n+1}$ factors induce a map \begin{equation*} \begin{xy} \morphism<1000,0>[\mathbb{K}`X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1};g] \end{xy} \end{equation*} We show that \ref{eq:difat} commutes by constructing a sequence of covers \begin{equation}\label{eq:diseq} K_{n+3}\to\cdots\to K_{n+3-i}\to K_{n+1-i}\to\cdots\to K_0=\mathbb{K} \end{equation} fitting into a commuting square \begin{equation}\label{eq:diK} \begin{xy} \square<750,500>[K_{n+3}`P^{\ge n}(f)_1`\mathbb{K}`X_n\times X_n;h`c`(d_0,d_1)`(d_i,d_i)g] \end{xy} \end{equation} Recall that $q$ denotes the map $X_n\too\pi_0(P^{\ge n}(f))$. By definition, \begin{equation*} qd_0=qd_1\colon P^{\ge n}(f)_1\too\pi_0(P^{\ge n}(f)) \end{equation*} The square \ref{eq:diK} implies that \begin{align*} qd_i\pr_1gc&=qd_0h\\ &=qd_1h\\ &=qd_i\pr_2gc \end{align*} The map $c\colon K_{n+3}\too\mathbb{K}$ is an epimorphism, because it is a cover (Axiom \ref{axiom:subcan}). We conclude that \begin{equation*} qd_i\pr_1g=qd_i\pr_2g, \end{equation*} or equivalently, that \ref{eq:difat} commutes. The construction we have just described arises from the observation that $\mathbb{K}$ encodes the data of pairs $(x_0,x_1)$ of $(n+1)$-simplices and explicit homotopies rel $(n-1)$-skeleta $(p_j)_{0\le j\ne i}^{n+1}$ between all but their $i^{th}$ faces. For $\ell<n+3$, the sequence of covers $K_\ell\too K_{\ell-1}$ amounts to an explicit sequence of combinatorial moves, by which we use the homotopies $(p_j)_{0\le j\ne i}^{n+1}$ to replace $x_1$ by a simplex whose $i^{th}$ horn equals that of $x_0$. The final stage $K_{n+3}$ amounts to an explicit homotopy rel boundary between the $i^{th}$ faces of two simplices all of whose other faces agree. We now construct \ref{eq:diseq}.
In detail, a section of $\mathbb{K}$ is a tuple $(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1})$ where \begin{align*} x_j&\in X_{n+1},\text{ and}\\ p_j&\in P^{\ge n}(f)_1,\intertext{such that} f(x_0)&=f(x_1),\intertext{and, for all $j$, $k=0,1$,} d_{n+k}p_j&=d_j x_k. \end{align*} For concreteness, we describe the induction under the assumption $i<n-1$. The inductions when $i=n-1,~n,\text{ and }n+1$ are simpler, because we can omit one of the first three steps below. In all cases, we continue the induction until we have constructed $K_\ell$ for $n+2-i\ne\ell\le n+2$. We then construct $K_{n+3}$ as a final step. The assignment \begin{equation*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1})\mapsto((d_0s_nx_1,\ldots,d_{n-1}s_nx_1,-,x_1,p_{n+1}),s_nf(x_1)) \end{equation*} defines a map $\mathbb{K}\to\Lambda^{n+2}_n(f)$. Denote by $K_1$ the pullback \begin{equation*} K_1:=\mathbb{K}\times_{\Lambda^{n+2}_n(f)}X_{n+2} \end{equation*} The projection $K_1\to\mathbb{K}$ is a cover, because it is the pullback of the cover $\lambda^{n+2}_n(f)$. Denote a section of $K_1$ by a tuple $(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},y_{n+1})$ where \begin{align*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1})&\in\mathbb{K},\text{ and}\\ y_{n+1}&\in X_{n+2}. \end{align*} The assignment \begin{multline*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},y_{n+1})\mapsto\\ ((d_0s_{n+1}d_ny_{n+1},\ldots,d_{n-1}s_{n+1}d_ny_{n+1},p_n,-,d_ny_{n+1}),s_{n+1}d_nf(y_{n+1})) \end{multline*} defines a map $K_1\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_2$ the pullback \begin{equation*} K_2:=K_1\times_{\Lambda^{n+2}_{n+1}(f)}X_{n+2} \end{equation*} The projection $K_2\to K_1$ is a cover, because it is the pullback of the cover $\lambda^{n+2}_{n+1}(f)$. Denote a section of $K_2$ by a tuple $(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},y_{n+1},y_n)$ where \begin{align*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},y_{n+1})&\in K_1,\text{ and}\\ y_n&\in X_{n+2}. 
\end{align*} The assignment \begin{multline*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},y_{n+1},y_n)\mapsto\\ ((d_0s_{n+1}d_{n+1}y_n,\ldots,p_{n-1},d_ns_{n+1}d_{n+1}y_n,-,d_{n+1}y_n),s_{n+1}d_{n+1}f(y_n)) \end{multline*} defines a map $K_2\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_3$ the pullback \begin{equation*} K_3:=K_2\times_{\Lambda^{n+2}_{n+1}(f)}X_{n+2} \end{equation*} If $i=n-2$, we have constructed $K_3=K_{n+1-i}$. If $i<n-2$, suppose that, for $3\le \ell< n+1-i$, we have constructed a cover $K_\ell\to K_{\ell-1}$, such that sections of $K_\ell$ are tuples $(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{n+2-\ell\le j}^{n+1})$ with \begin{align*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{j=n+3-\ell}^{n+1})&\in K_{\ell-1},\text{ and}\\ y_{n+2-\ell}&\in X_{n+2},\intertext{such that} f(y_{n+2-\ell})&=s_{n+1}d_{n+2}f(y_{n+2-\ell})\\ d_jy_{n+2-\ell}&=\left\{ \begin{array}{rl} d_{n+1}y_{n+3-\ell} & j=n+2\\ d_js_{n+1}d_{n+1}y_{n+3-\ell} & n+1-\ell<j<n+1\\ p_{n+1-\ell} & j=n+1-\ell\\ d_js_{n+1}d_{n+1}y_{n+3-\ell} & j<n+1-\ell \end{array} \right. \end{align*} The assignment \begin{equation*} \begin{xy} \morphism/|->/<1000,0>[(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{j=n+2-\ell}^{n+1})`;] \morphism(500,-250)/{}/[((d_0s_{n+1}d_{n+1}y_{n+2-\ell},\ldots,d_{n-1-\ell}s_{n+1}d_{n+1}y_{n+2-\ell},p_{n-\ell},d_{n+1-\ell}s_{n+1}d_{n+1}y_{n+2-\ell},`;] \morphism(500,-500)/{}/[`\ldots,d_ns_{n+1}d_{n+1}y_{n+2-\ell},-,d_{n+1}y_{n+2-\ell}),s_{n+1}d_{n+1}f(y_{n+2-\ell}));] \end{xy} \end{equation*} defines a map $K_\ell\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_{\ell+1}$ the pullback \begin{equation*} K_{\ell+1}:=K_\ell\times_{\Lambda^{n+2}_{n+1}(f)}X_{n+2} \end{equation*} This completes the induction step for $\ell<n+1-i$. Let $\ell=n+1-i$. If $i=0$, we have completed the induction. 
If $i>0$, the assignment \begin{equation*} \begin{xy} \morphism/|->/<1000,0>[(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{j=i+1}^{n+1})`;] \morphism(500,-250)/{}/[((d_0s_{n+1}d_{n+1}y_{i+1},\ldots,d_{i-2}s_{n+1}d_{n+1}y_{i+1},p_{i-1},d_is_{n+1}d_{n+1}y_{i+1},\ldots`;] \morphism(500,-500)/{}/[`\ldots,d_ns_{n+1}d_{n+1}y_{i+1},-,d_{n+1}y_{i+1}),s_{n+1}d_{n+1}f(y_{i+1}));] \end{xy} \end{equation*} defines a map $K_{n+1-i}\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_{n+3-i}$ the pullback \begin{equation*} K_{n+1-i}\times_{\Lambda^{n+2}_{n+1}(f)}X_{n+2} \end{equation*} If $i=1$, we have completed the induction. If $i>1$ the assignment \begin{equation*} \begin{xy} \morphism/|->/<1000,0>[(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{i-1\le j\ne i}^{n+1})`;] \morphism(500,-250)/{}/[((d_0s_{n+1}d_{n+1}y_{i-1},\ldots,d_{i-3}s_{n+1}d_{n+1}y_{i-1},p_{i-2},d_{i-1}s_{n+1}d_{n+1}y_{i-1},\ldots`;] \morphism(500,-500)/{}/[`\ldots,d_ns_{n+1}d_{n+1}y_{i-1},-,d_{n+1}y_{i-1}),s_{n+1}d_{n+1}f(y_{i-1}));] \end{xy} \end{equation*} defines a map $K_{n+3-i}\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_{n+4-i}$ the pullback \begin{equation*} K_{n+4-i}:=K_{n+3-i}\times_{\Lambda^{n+2}_{n+1}(f)}X_{n+2} \end{equation*} If $i=2$, we have completed the induction. If $i>2$, suppose that, for $\ell$ at least $n+4-i$ but less than $n+2$, we have constructed a cover $K_\ell\to K_{\ell-1}$, where sections of $K_\ell$ are tuples $(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{n+2-\ell\le j\ne i}^{n+1})$ with \begin{align*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{n+3-\ell\le j\ne i}^{n+1})&\in K_{\ell-1},\text{ and}\\ y_{n+2-\ell}&\in X_{n+2},\intertext{such that} f(y_{n+2-\ell})&=s_{n+1}d_{n+2}f(y_{n+2-\ell})\\ d_jy_{n+2-\ell}&=\left\{ \begin{array}{rl} d_{n+1}y_{n+3-\ell} & j=n+2\\ d_js_{n+1}d_{n+1}y_{n+3-\ell} & n+1-\ell<j<n+1\\ p_{n+1-\ell} & j=n+1-\ell\\ d_js_{n+1}d_{n+1}y_{n+3-\ell} & j<n+1-\ell \end{array} \right. 
\end{align*} The assignment \begin{equation*} \begin{xy} \morphism/|->/<1000,0>[(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{n+2-\ell\le j\ne i}^{n+1})`;] \morphism(500,-250)/{}/[((d_0s_{n+1}d_{n+1}y_{n+2-\ell},\ldots,d_{n-1-\ell}s_{n+1}d_{n+1}y_{n+2-\ell},p_{n-\ell},d_{n+1-\ell}s_{n+1}d_{n+1}y_{n+2-\ell},`;] \morphism(500,-500)/{}/[`\ldots,d_ns_{n+1}d_{n+1}y_{n+2-\ell},-,d_{n+1}y_{n+2-\ell}),s_{n+1}d_{n+1}f(y_{n+2-\ell}));] \end{xy} \end{equation*} defines a map $K_\ell\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_{\ell+1}$ the pullback \begin{equation*} K_{\ell+1}:=K_\ell\times_{\Lambda^{n+2}_{n+1}(f)}X_{n+2} \end{equation*} This completes the induction step, in all cases. For $i=0$, we conclude the existence of the sequence of covers \begin{equation*} K_{n+1}\to\cdots \to K_0=\mathbb{K} \end{equation*} The construction allows us to denote a section of $K_{n+1}$ by \begin{align*} &(x_0,x_1,(p_j)_{j=1}^{n+1},(y_j)_{j=1}^{n+1})\intertext{where} &s_{n+1}f(x_0)=f(y_1)\intertext{and, for $j>0$,} &d_jx_0=d_jd_{n+1}y_1. \end{align*} The assignment \begin{multline*} (x_0,x_1,(p_j)_{j=1}^{n+1},(y_j)_{j=1}^{n+1})\mapsto\\ ((-,d_1s_{n+1}x_0,\ldots,d_ns_{n+1}x_0,x_0,d_{n+1}y_1),s_{n+1}f(x_0)) \end{multline*} defines a map $K_{n+1}\to\Lambda^{n+2}_0(f)$. Denote by $K_{n+3}$ the pullback \begin{equation*} K_{n+3}:=K_{n+1}\times_{\Lambda^{n+2}_0(f)}X_{n+2} \end{equation*} Note that the map $K_{n+3}\to K_{n+1}$ is a cover, because it is a pullback of the cover $\lambda^{n+2}_0(f)$. Denote a section of $K_{n+3}$ by $(\mathbf{x},z)$ where $\mathbf{x}\in K_{n+1}$ and $z\in X_{n+2}$.
For $i>0$, the induction above demonstrates the existence of the sequence of covers \begin{equation*} K_{n+2}\to\cdots\to K_{n+3-i}\to K_{n+1-i}\to\cdots\to K_0=\mathbb{K} \end{equation*} The construction allows us to denote a section of $K_{n+2}$ by \begin{align*} &(x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{0\le j\ne i}^{n+1})\intertext{where} &s_{n+1}f(x_0)=f(y_0),\intertext{and, for $j\ne i$,} &d_jx_0=d_jd_{n+1}y_0. \end{align*} For $i\ne n+1$, the assignment \begin{multline*} (x_0,x_1,(p_j)_{0\le j\ne i}^{n+1},(y_j)_{0\le j\ne i}^{n+1})\mapsto\\ ((d_0s_{n+1}x_0,\ldots,d_{i-1}s_{n+1}x_0,-,d_{i+1}s_{n+1}x_0,\ldots,d_ns_{n+1}x_0,x_0,d_{n+1}y_0),s_{n+1}f(x_0)) \end{multline*} defines a map $K_{n+2}\to\Lambda^{n+2}_i(f)$. For $i=n+1$, the assignment \begin{equation*} (x_0,x_1,(p_j)_{j=0}^n,(y_j)_{j=0}^n)\mapsto((d_0s_nx_0,\ldots,d_{n-1}s_nx_0,x_0,d_{n+1}y_0,-),s_nf(x_0)) \end{equation*} defines a map $K_{n+2}\to\Lambda^{n+2}_{n+1}(f)$. Denote by $K_{n+3}$ the pullback \begin{equation*} K_{n+3}:=K_{n+2}\times_{\Lambda^{n+2}_i(f)}X_{n+2} \end{equation*} Note that the map $K_{n+3}\to K_{n+2}$ is a cover, because it is a pullback of the cover $\lambda^{n+2}_i(f)$. Denote a section of $K_{n+3}$ by $(\mathbf{x},z)$ where $\mathbf{x}\in K_{n+2}$ and $z\in X_{n+2}$. The construction, in all cases, guarantees that the assignment \begin{equation*} (\mathbf{x},z)\mapsto d_iz \end{equation*} defines a map $h\colon K_{n+3}\too P^{\ge n}(f)_1$ which, along with $c\colon K_{n+3}\too\mathbb{K}$, gives rise to \ref{eq:diK}. It remains to show that the map $g$ is an epi. Because we assume that $\C$ has enough points, it suffices to check that the map $p(g)$ is a surjection for any point $p\colon\C\to\st$. A point $p$ preserves finite limits and arbitrary colimits, and takes covers to surjections. As a result, $p$ takes each construction we have been considering to its analogue in the category $\st$.
It suffices to show that if $f\colon X_{\bullet}\to Y_{\bullet}$ is a Kan fibration of simplicial sets, then the map \begin{equation*} \begin{xy} \morphism<1850,0>[\mathbb{K}= X_{n+1}\times_{\Lambda^{n+1}_i(f)}\mathbb{H}^{n+1}_i(f)\times_{\Lambda^{n+1}_i(f)}X_{n+1}`X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1};g] \end{xy} \end{equation*} is surjective. This map fits into a pullback square \begin{equation*} \begin{xy} \square<1500,500>[\mathbb{K}`(P^{\ge n}(f)_1)^{\times(n+1)}`X_{n+1}\times_{\Lambda^{n+1}_i(\tn(f))}X_{n+1}`(X_n\times_{\pi_0 (P^{\ge n}(f))} X_n)^{\times(n+1)};`g``] \end{xy} \end{equation*} The map \begin{equation*} P^{\ge n}(f)_1\to X_n\times_{\pi_0 (P^{\ge n}(f))} X_n \end{equation*} is surjective because the simplicial set $P^{\ge n}(f)_\bullet$ is Kan. As a result, the map $g$ is surjective, because surjections of sets are preserved under products and pullbacks. \end{proof} By Axiom \ref{axiom:subcan}, \ref{eq:di} determines a map \begin{equation*} \Lambda^{n+1}_i(\tn(f)) \to^{d_i} \tn(X,f)_n . \end{equation*} We now extend \ref{eq:ntn} to a map \begin{equation*} \tr_{n+1}\tn(f)\colon\tr_{n+1}\tn(X,f)_\bullet\to\tr_{n+1}Y_\bullet \end{equation*} On $k$-simplices, for $k\le n$, the map $\tr_{n+1}\tn(f)$ equals the map $\tr_n\tn(f)$. On $(n+1)$-simplices, $\tr_{n+1}\tn(f)$ is the canonical map \begin{equation*} \Lambda^{n+1}_1(\tn(f))\to Y_{n+1} \end{equation*} The missing face map \begin{equation*} \tr_{n+1}\tn(X,f)_{n+1}=\Lambda^{n+1}_1(\tn(f))\to^{d_1}\pi_0(P^{\ge n}(f))=\tr_{n+1}\tn(X,f)_n \end{equation*} makes $\tr_{n+1}\tn(X,f)_\bullet$ into an $(n+1)$-truncated simplicial object. \begin{definition} Let $f\colon X_\bullet\to Y_\bullet$ be an $\infty$-stack such that $\pi_0(P^{\ge n}(f))$ exists.
Define $\tn(X,f)_\bullet$ to be the limit \begin{equation*} \tn(X,f)_\bullet:=\csk_{n+1}\tr_{n+1}\tn(X,f)_\bullet\times_{\Csk_{n+1}Y_\bullet}Y_\bullet \end{equation*} The \emph{$n$-strictification of $f$} is the map \begin{equation*} \tn(f)\colon\tn(X,f)_\bullet\to Y_\bullet \end{equation*} \end{definition} \begin{proposition}\label{prop:tn} Let $f\colon X_\bullet\to Y_\bullet$ be an $\infty$-stack such that $\pi_0(P^{\ge n}(f))$ exists. The $n$-strictification $\tn(f)\colon \tn(X,f)_\bullet\to Y_\bullet$ is an $n$-stack. The maps $f$ and $\tn(f)$ are isomorphic if and only if $f$ is an $n$-stack. \end{proposition} \begin{proof} We show that $\tn(f)$ is an $n$-stack. Lemma \ref{lemma:di1} established that $\lambda^k_i(\tn(f))$ is a cover for $k\le n$ and all $i$. We now show that the map $\lambda^{n+1}_i(\tn(f))$ is an isomorphism for all $i$. The inclusion $\Lambda^{n+1}_i\hookrightarrow\partial\Delta^{n+1}$ induces a map \begin{equation*} d_{\hat{\imath}}\colon M_{n+1}(\tn(f))\to\Lambda^{n+1}_i(\tn(f)) \end{equation*} Observe that, in the notation of \ref{eq:Jintj}, if $J=[n+1]\setminus\{i\}$, then \begin{equation*} M_{n+1}(\tn(f))\cong \Lambda^{n+1}_i(\tn(f))\times_{\Lambda^{n+1}_{J\cap i}(f)}\pi_0(P^{\ge n}(f)) \end{equation*} The missing face map $d_i\colon\Lambda^{n+1}_i(\tn(f))\to\pi_0(P^{\ge n}(f))$ induces a map \begin{equation*} (1,d_i)\colon\Lambda^{n+1}_i(\tn(f))\to M_{n+1}(\tn(f)) \end{equation*} Note that the map $(1,d_i)$ is a right inverse for the map $d_{\hat{\imath}}$. 
For all $i$, the map $\lambda^{n+1}_i(\tn(f))$ factors as the composite \begin{equation*} \begin{xy} \morphism|a|<1250,0>[\tn(X,f)_{n+1}=\Lambda^{n+1}_1(\tn(f))`M_{n+1}(\tn(f));(1,d_1)] \morphism(1250,0)|a|<1000,0>[M_{n+1}(\tn(f))`\Lambda^{n+1}_i(\tn(f));d_{\hat{\imath}}] \end{xy} \end{equation*} These maps fit into a commuting diagram \begin{equation*} \begin{xy} \Atrianglepair|lmraa|/>`>`>`{@<3pt>}`{@<3pt>}/<1000,500>[X_{n+1}`\Lambda^{n+1}_1(\tn(f))`M_{n+1}(\tn(f))`\Lambda^{n+1}_i(\tn(f));```(1,d_1)`d_{\hat{\imath}}] \morphism(1000,0)|b|/{@<3pt>}/<-1000,0>[M_{n+1}(\tn(f))`\Lambda^{n+1}_1(\tn(f));d_{\hat{1}}] \morphism(2000,0)|b|/{@<3pt>}/<-1000,0>[\Lambda^{n+1}_i(\tn(f))`M_{n+1}(\tn(f));(1,d_i)] \end{xy} \end{equation*} The proof of Lemma \ref{lemma:di2} shows that \begin{align*} (1,d_1)\circ d_{\hat{1}}\circ(1,d_i)&=(1,d_i)\\ (1,d_i)\circ d_{\hat{\imath}}\circ(1,d_1)&=(1,d_1) \end{align*} We conclude that \begin{align*} (d_{\hat{1}}(1,d_i))\circ\lambda^{n+1}_i(\tn(f))&=(d_{\hat{1}}(1,d_i))\circ(d_{\hat{\imath}}(1,d_1))\\ &=d_{\hat{1}}\circ(1,d_1)\\ &=1_{\Lambda^{n+1}_1(\tn(f))}. \end{align*} Similarly, \begin{align*} \lambda^{n+1}_i(\tn(f))\circ(d_{\hat{1}}(1,d_i))&=(d_{\hat{\imath}}(1,d_1))\circ(d_{\hat{1}}(1,d_i))\\ &=d_{\hat{\imath}}\circ(1,d_i)\\ &=1_{\Lambda^{n+1}_i(\tn(f))}. \end{align*} We have shown that $\lambda^{n+1}_i(\tn(f))$ is an isomorphism for all $i$. Lemma \ref{lemma:psrep} guarantees that the limit $\Lambda^{n+2}_i(\tn(f))$ exists. Because $\lambda^{n+1}_i(\tn(f))$ is an isomorphism for all $i$, the map \begin{equation*} \Lambda^{n+2}_i(\tn(f))\hookrightarrow M_{n+2}(\tn(f))=\tn(X,f)_{n+2} \end{equation*} is an isomorphism, with inverse given by $\lambda^{n+2}_i(\tn(f))$. For $k>n+2$, the map $\lambda^k_i(\tn(f))$ is an isomorphism because the map $\tn(f)$ is $(n+1)$-coskeletal, and the inclusion $\Lambda^k_i\hookrightarrow\Delta^k$ is the identity on $(n+1)$-skeleta.
By inductively applying Lemma \ref{lemma:psrep}, we conclude that $\tn(X,f)_k$ is an object of $\C$ for all $k$ and that $\tn(f)$ is an $n$-stack. We have shown that if $f$ is isomorphic to $\tn(f)$, then $f$ is an $n$-stack. Conversely, suppose that $f$ is an $n$-stack. For any $k$ and any map, the $(k-1)$-skeleton of the map determines its $\Lambda^k_i$-horns. The horn-filling maps for $\tn(f)$ are isomorphisms above dimension $n$. If $f$ is an $n$-stack, then the horn-filling maps for $f$ are also isomorphisms above dimension $n$. We conclude that the canonical map from $f$ to $\tn(f)$ is an isomorphism if $f$ is an $n$-stack and if the map from $f$ to $\tn(f)$ induces an isomorphism on $n$-skeleta. When $f$ is an $n$-stack, the map from $f$ to $\tn(f)$ is automatically an isomorphism on $n$-skeleta. Indeed, if $f$ is an $n$-stack, then $P^{\geq n}(f)_\bullet$ is a Lie 0-groupoid (Theorem \ref{thm:kmor}). As a result, the map from $X_n$ to $\pi_0(P^{\geq n}(f))$ is an isomorphism. \end{proof} If $f\colon X_\bullet\to Y_\bullet$ is a hypercover, then Proposition \ref{prop:hyppi0} and Theorem \ref{thm:kmorhyp} imply that \begin{equation*} \pi_0(P^{\ge n}(f))\cong M_n(f) \end{equation*} In particular, the $n$-strictification $\tn(f)$ is an $n$-stack. \begin{proposition}\label{prop:tnhyp} If $f:X_{\bullet}\rightarrow Y_{\bullet}$ is a hypercover, then \begin{enumerate} \item the map $\tn(f)$ is an $n$-hypercover, and \item the map \begin{xy} \morphism[X_{\bullet}`\tn(X,f)_{\bullet};q] \end{xy} is a hypercover. \end{enumerate} \end{proposition} \begin{proof} We show that $\tn(X,f)_{n+1}=\Lambda^{n+1}_1(\tn(f))\cong M_{n+1}(\tn(f))$. 
Consider the commuting triangle \begin{equation*} \begin{xy} \Atriangle|lra|/>`>`{@<3pt>}/[X_{n+1}`\Lambda^{n+1}_1(\tn(f))`M_{n+1}(\tn(f));``(1,d_1)] \morphism(1000,0)|b|/{@<3pt>}/<-1000,0>[M_{n+1}(\tn(f))`\Lambda^{n+1}_1(\tn(f));d_{\hat{1}}] \end{xy} \end{equation*} We showed in Lemma \ref{lemma:di1} that the map $X_{n+1}\to\Lambda^{n+1}_1(\tn(f))$ is a cover. A similar argument shows that, because $f$ is a hypercover, the map $X_{n+1}\to M_{n+1}(\tn(f))$ is a cover. Axiom \ref{axiom:fg+g} implies that both $d_{\hat{1}}$ and $(1,d_1)$ are covers. Axiom \ref{axiom:subcan} implies that they are both epimorphisms. By construction \begin{equation*} d_{\hat{1}}(1,d_1)=1_{\Lambda^{n+1}_1(\tn(f))} \end{equation*} As a result, \begin{align*} (1,d_1)d_{\hat{1}}(1,d_1)&=(1,d_1)\\ &=1_{M_{n+1}(\tn(f))}(1,d_1) \end{align*} Because $(1,d_1)$ is an epimorphism, we conclude that $(1,d_1)d_{\hat{1}}=1_{M_{n+1}(\tn(f))}$. This isomorphism combines with the isomorphism \begin{align*} \tn(X,f)_n&\cong M_n(f)\\ &=M_n(\tn(f)) \end{align*} to show that \begin{equation*} \tn(X,f)_\bullet\cong\Csk_n(X)_\bullet\times_{\Csk_n(Y)_\bullet}Y_\bullet \end{equation*} We have shown that $\tn(f)$ is an $n$-hypercover. For $k<n$, $M_k(q)\cong X_k$ by inspection. A similar check shows that the map from $M_n(q)$ to $\tn(X,f)_n$ is an isomorphism. We observed above that $\tn(X,f)_n\cong M_n(f)$ when $f$ is a hypercover. This implies that the map $X_n\to M_n(q)$ is isomorphic to the cover $X_n\to M_n(f)$. For $k>n$, an exercise in combinatorics shows that the map $X_k\to M_k(q)$ is isomorphic to the cover $X_k\to M_k(f)$. We conclude the proof. \end{proof} \section{n-Bundles and Descent}\label{sec:descent} A classical construction produces a principal bundle for a Lie group from local data on the base. This local data is frequently presented in the form of a cocycle $\varphi$ on a cover $U\rightarrow X$.
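To fix ideas, we recall the standard \v{C}ech form of this local data (classical material, recorded here only for orientation): for a Lie group $G$ and an open cover $\{U_i\}$ of $X$, a cocycle consists of transition functions $\varphi_{ij}\colon U_i\cap U_j\to G$ satisfying \begin{equation*} \varphi_{ij}\varphi_{jk}=\varphi_{ik}\quad\text{on }U_i\cap U_j\cap U_k, \end{equation*} and the associated principal $G$-bundle is the quotient of $\coprod_i U_i\times G$ obtained by identifying $(x,g)\in U_j\times G$ with $(x,\varphi_{ij}(x)g)\in U_i\times G$ for $x\in U_i\cap U_j$.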
If we pass to the nerve of the cover $f:U_{\bullet}\rightarrow X$, we can encode the cocycle as a 0-stack \begin{equation}\label{eq:0bun} \begin{xy} \morphism[E^{\varphi}_{\bullet}`U_{\bullet};p] \end{xy} \end{equation} The principal bundle corresponding to the cocycle is the 0-strictification \begin{equation*} \begin{xy} \morphism<750,0>[\tau_0(E^{\varphi},fp)`X;\tau_0(fp)] \end{xy} \end{equation*} From this perspective, the principal bundle is representable because the 0-stack $p$ has the structure of a \emph{twisted Cartesian product}. Twisted Cartesian products were studied by Barratt, Gugenheim and Moore in their work on principal and associated bundles for simplicial groups \cite{BGM:59}. \begin{definition}\label{def:locn} A map $p:E_{\bullet}\rightarrow X_{\bullet}$ is a \emph{twisted Cartesian product} if there exist $Y_\bullet\in\sC$ and isomorphisms \begin{equation*} \begin{xy} \Vtriangle<600,300>[E_k`X_k\times Y_k`X_k;\varphi_k`p`\pi_{X_k}] \morphism(0,300)|b|<1200,0>[E_k`X_k\times Y_k;\cong] \end{xy} \end{equation*} for each $k\in\mathbb{N}$, such that, for $i<k$, \begin{equation*} \varphi_{k-1} d_i^E= (d_i^X\times d_i^Y)\varphi_k, \end{equation*} and, for all $i$, \begin{equation*} \varphi_{k+1} s_i^E=(s_i^X\times s_i^Y)\varphi_k. \end{equation*} \end{definition} \begin{definition} A \emph{local $n$-bundle} is a twisted Cartesian product which is also an $n$-stack. \end{definition} In analogy with \ref{eq:0bun}, we consider local $n$-bundles on the total spaces of $(n+1)$-hypercovers \begin{equation*} \begin{xy} \morphism[E_{\bullet}`U_{\bullet};p] \morphism(500,0)[U_{\bullet}`X_{\bullet};f] \end{xy} \end{equation*} Our goal in this section is to show that the $n$-strictification \begin{equation*} \begin{xy} \morphism<750,0>[\tn(E,fp)_{\bullet}`X_{\bullet};\tn(fp)] \end{xy} \end{equation*} exists. From the definition, it suffices to show that the orbit space $\pi_0 P^{\geq n}(fp)$ exists.
Because the map $fp$ is an $(n+1)$-stack, the relative higher morphism space $P^{\geq n}(fp)$ is a Lie 1-groupoid (Theorem \ref{thm:kmor}). We see that the existence of the $n$-strictification is equivalent to the existence of the orbit space of a Lie groupoid. \begin{theorem}[Godement] Let $\G$ be a Lie groupoid in the category of analytic manifolds over a complete normed field. If the map $(s,t):\G_1\rightarrow \G_0\times\G_0$ is a closed embedding, then $\pi_0(\G)$ is an analytic manifold and the map $\G_0\rightarrow\pi_0(\G)$ is a surjective submersion. \end{theorem} Serre gives a proof in \cite[Theorem III.12.2]{Ser:64} which applies mutatis mutandis to the category of smooth manifolds. Inspired by this, we formulate an analogue of Godement's Theorem for categories with covers. We begin by defining an analogue of closed embeddings. \begin{definition} Let $f\colon X\to Y$ be a morphism in $\C$. The \emph{graph} $\Gamma_f$ of $f$ is the inclusion \begin{equation*} \begin{xy} \morphism/^{ (}->/<750,0>[X\times_Y Y`X\times Y;\Gamma_f] \end{xy} \end{equation*} The subcategory of \emph{regular embeddings} is the smallest subcategory of $\C$ which is closed under pullback along covers and which contains all graphs. \end{definition} An isomorphism is a regular embedding, because it is a pullback of the graph \begin{equation*} \begin{xy} \morphism[*`*\times *;\Delta] \end{xy} \end{equation*} along a cover $X\rightarrow *$. A regular embedding in the category of smooth manifolds is a certain type of closed embedding. \begin{definition} A \emph{regular} Lie $n$-groupoid is a Lie $n$-groupoid $X_\bullet$ such that the map $\mu_1(X):X_1\to M_1(X)\cong X_0\times X_0$ is a regular embedding. \end{definition} \begin{axiom}\label{axiom:gode}(Godement's Theorem) Let $X_\bullet$ be a regular Lie $n$-groupoid. The orbit space $\pi_0(X_\bullet)$ exists. \end{axiom} \begin{theorem}[Descent for $n$-bundles] \label{thm:desc} Suppose that Godement's Theorem holds in $\C$.
If $p:E_{\bullet}\rightarrow U_{\bullet}$ is a local $n$-bundle and $f:U_{\bullet}\rightarrow X_{\bullet}$ is an $(n+1)$-hypercover, then the $n$-strictification \begin{equation*} \begin{xy} \morphism<750,0>[\tn(E,fp)_{\bullet}`X_{\bullet};\tn(fp)] \end{xy} \end{equation*} exists. \end{theorem} \begin{remark} To study bundles for simplicial Lie groupoids, one should use twisted \emph{fiber} products rather than twisted Cartesian products in the definition of local $n$-bundles. The theorem also holds for this more general notion. \end{remark} The following lemma is the crux of the proof. \begin{lemma} There exists an isomorphism \begin{equation*} \begin{xy} \morphism(0,0)|a|<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times Y_n;\psi] \morphism(0,0)|b|<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times Y_n;\cong] \end{xy} \end{equation*} such that the maps $\psi d_n$ and $(d_n\times 1_{Y_n})\psi$ are equal. \end{lemma} \begin{proof} The definition of $P^{\geq n}(fp)_1$ allows us to view it as a sub-object of $E_{n+1}$. Since $p$ is a twisted Cartesian product, there exist $Y_\bullet\in\sC$ and isomorphisms \begin{equation*} \begin{xy} \morphism(0,0)|a|<750,0>[E_k`U_k\times Y_k;\varphi_k] \morphism(0,0)|b|<750,0>[E_k`U_k\times Y_k;\cong] \end{xy} \end{equation*} such that, for $i<k$, \begin{equation*} \varphi_{k-1} d_i^E= (d_i^U\times d_i^Y)\varphi_k, \end{equation*} and, for all $i$, \begin{equation*} \varphi_{k+1} s_i^E=(s_i^U\times s_i^Y)\varphi_k. \end{equation*} Using this, we see that sections of $P^{\geq n}(fp)_1$ consist of pairs \begin{align*} (u,y)&\in U_{n+1}\times Y_{n+1}\intertext{such that, for $i<n$,} d_iy&=s_{n-1}d_{n-1}d_iy,\\ d_iu&=s_{n-1}d_{n-1}d_iu,\intertext{and} f(u)&=s_nd_nf(u). \end{align*} Similarly, sections of $P^{\geq n}(p)_1$ consist of pairs \begin{align*} (u,y)&\in U_{n+1}\times Y_{n+1}\intertext{such that, for $i<n$,} d_iy&=s_{n-1}d_{n-1}d_iy,\intertext{and} u&=s_nd_nu.
\end{align*} These equations say that if $(u,y)$ is a section of $P^{\geq n}(fp)_1$, then $u$ is a section of $P^{\geq n}(f)_1$ and the natural map \begin{equation}\label{eq:descpfmp1} \begin{xy} \morphism(0,0)<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times_{U_n}P^{\geq n}(p)_1;] \morphism(0,-200)/|->/<1000,0>[(u,y)`(u,(s_nd_nu,y));] \end{xy} \end{equation} is an isomorphism. Because $p$ is an $n$-stack, $P^{\geq n}(p)_{\bullet}$ is a Lie $0$-groupoid (Theorem \ref{thm:kmor}). The target map \begin{equation*} \begin{xy} \morphism<750,0>[P^{\geq n}(p)_1`P^{\geq n}(p)_0;] \end{xy} \end{equation*} is therefore an isomorphism. Using this isomorphism and the map \ref{eq:descpfmp1}, we obtain the desired isomorphism \begin{equation*} \begin{xy} \morphism(0,0)|a|<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times Y_n;\psi] \morphism(0,0)|b|<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times Y_n;\cong] \end{xy} \end{equation*} \end{proof} \begin{proof}[Proof of Theorem \ref{thm:desc}] Axiom \ref{axiom:gode} reduces the proof to showing that \begin{equation*} \begin{xy} \morphism<1250,0>[P^{\geq n}(fp)_1`P^{\geq n}(fp)_0\times P^{\geq n}(fp)_0;] \end{xy} \end{equation*} is a regular embedding. The map \begin{equation*} \begin{xy} \morphism<750,0>[P^{\geq n}(f)_{\bullet}`M_n(f);\pi] \end{xy} \end{equation*} is a 1-hypercover, because $f$ is an $(n+1)$-hypercover (Theorem \ref{thm:kmorhyp}). In particular, the map $\partial\Delta^1(\pi)$ gives an isomorphism \begin{equation*} \begin{xy} \morphism(0,0)|a|<1000,0>[P^{\geq n}(f)_1`U_n\times_{M_n(f)}U_n;\partial\Delta^1(\pi)] \morphism(0,0)|b|<1000,0>[P^{\geq n}(f)_1`U_n\times_{M_n(f)}U_n;\cong] \end{xy} \end{equation*} The canonical map \begin{equation*} \begin{xy} \morphism(0,0)/^{ (}->/<1000,0>[U_n\times_{M_n(f)}U_n `U_n\times U_n;\imath] \end{xy} \end{equation*} is a regular embedding. 
Composing this with the map above, we obtain a regular embedding \begin{equation*} \begin{xy} \morphism<1000,0>[P^{\geq n}(f)_1`U_n\times U_n;\imath\partial\Delta^1(\pi)] \end{xy} \end{equation*} The isomorphisms \begin{equation*} \begin{xy} \morphism<750,0>[P^{\geq n}(fp)_0`E_n;\cong] \morphism(750,0)<750,0>[E_n`U_n\times Y_n;\cong] \end{xy} \end{equation*} and \begin{equation*} \begin{xy} \morphism(0,0)|a|<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times Y_n;\psi] \morphism(0,0)|b|<1000,0>[P^{\geq n}(fp)_1`P^{\geq n}(f)_1\times Y_n;\cong] \end{xy} \end{equation*} allow us to factor the map \begin{equation*} \begin{xy} \morphism<1200,0>[P^{\geq n}(fp)_1`P^{\geq n}(fp)_0\times P^{\geq n}(fp)_0;] \end{xy} \end{equation*} as the composite \begin{multline*} \begin{xy} \morphism<900,0>[P^{\geq n}(fp)_1`P^{\geq n}(fp)_1\times Y_n; \Gamma_{d_{n+1}^Y}] \morphism(900,0)<1200,0>[P^{\geq n}(fp)_1\times Y_n` Y_n\times P^{\geq n}(f)_1\times Y_n; \psi\times 1_Y] \end{xy} \\ \begin{xy} \morphism(2100,0)<1400,0>[`Y_n\times U_n\times U_n\times Y_n; 1_Y\times(\imath\partial\Delta^1(\pi))\times 1_Y] \end{xy} \end{multline*} Each map in this sequence is a regular embedding, so the composite is a regular embedding as well. \end{proof} \section{Strict Lie n-Groups and Their Actions}\label{sec:slie} Principal and associated bundles for discrete simplicial groups provide examples of local $n$-bundles in $\sst$. This theory was developed by Barratt, Gugenheim and Moore \cite{BGM:59}. In this section, we develop analogous results for simplicial Lie groups. While we restrict to simplicial groups for ease of exposition, the results and proofs carry over to simplicial Lie groupoids. \begin{definition} A \emph{simplicial Lie group} $G_{\bullet}$ in $\C$ is a simplicial diagram in the category of group objects in $\C$. Denote by $\sgp(\C)$ the category of simplicial Lie groups.
\end{definition} Eilenberg and Mac Lane \cite{EiM:53} introduced a pair of functors $\W$ and $\LW$ from simplicial groups to simplicial sets which generalize the universal bundle and nerve of a group. \begin{definition} Let $G_{\bullet}$ be a simplicial group. \begin{enumerate} \item The \emph{total space $\W_{\bullet}G$ of the universal $G_{\bullet}$-bundle} is the simplicial set with \begin{align*} \W_nG&:= G_0\times\cdots\times G_n\\ d_i(g_0,\ldots,g_n)&:=(g_0,\ldots,g_{i-2},g_{i-1}d_ig_i,d_ig_{i+1},\ldots,d_ig_n)\\ s_i(g_0,\ldots,g_n)&:=(g_0,\ldots,g_{i-1},e,s_ig_i,\ldots,s_ig_n) \end{align*} \item The \emph{nerve} $\LW_{\bullet}G$ of $G_{\bullet}$ is the simplicial set with \begin{align*} \LW_0G&:=\ast,\intertext{and, for $n>0$,} \LW_nG &:= G_0\times\cdots\times G_{n-1} \\ d_i(g_0,\ldots,g_{n-1}) &:= (g_0,\ldots,g_{i-2},g_{i-1}d_ig_i,d_ig_{i+1},\ldots,d_ig_{n-1}) \\ s_i(g_0,\ldots,g_{n-1}) &:= (g_0,\ldots,g_{i-1},e,s_ig_i,\ldots,s_ig_{n-1}) \end{align*} \item The assignment which sends $(g_0,\ldots,g_n)\in\W_nG$ to $(g_0,\ldots,g_{n-1})\in\LW_nG$ defines a twisted Cartesian product $\W_{\bullet}G\rightarrow\LW_{\bullet}G$. This is the \emph{universal $G_{\bullet}$-bundle}. \end{enumerate} \end{definition} Because $\C$ has finite products, the same formulas give functors $\W$ and $\LW$ from the category of simplicial Lie groups to the category $\sC$. \begin{definition} If $G_{\bullet}$ is a simplicial Lie group and $X_\bullet$ is a simplicial object in $\C$, then a \emph{left action $G_\bullet\circlearrowleft X_\bullet$} consists of maps \begin{equation*} \begin{xy} \morphism(0,0)<750,0>[G_k\times X_k`X_k;] \morphism(0,-200)/|->/<750,0>[(g,x)`gx;] \end{xy} \end{equation*} for each $k$, such that, in all dimensions and for all $i$: \begin{align*} g_1(g_2x)&=(g_1g_2)x,\\ ex&=x,\\ d_i(gx)&=(d_ig)(d_ix),\text{ and}\\ s_i(gx)&=(s_ig)(s_ix). \end{align*} Right actions are defined analogously.
\end{definition} When $G_{\bullet}$ and $X_{\bullet}$ are constant simplicial diagrams, this is the usual notion of a left action of a Lie group. The right action of $G_{\bullet}$ on itself induces a right $G_{\bullet}$-action on $\W_\bullet G$. \begin{definition} Suppose we have a left action $G_{\bullet}\circlearrowleft X_{\bullet}$. The \emph{homotopy quotient} $(\W G\times_G X)_{\bullet}$ is defined by \begin{align*} (\W G\times_G X)_n &:= \LW_n G\times X_n\\ d_i(g_0,\ldots,g_{n-1},x)&:=(g_0,\ldots,g_{i-1}d_ig_i,\ldots,d_ig_{n-1},d_ix)\\ s_i(g_0,\ldots,g_{n-1},x)&:=(g_0,\ldots,g_{i-1},e,s_ig_i,\ldots,s_ig_{n-1},s_ix) \end{align*} \end{definition} If $G_{\bullet}$ acts on $X_{\bullet}$ and $Y_{\bullet}$, and $f:X_{\bullet}\rightarrow Y_{\bullet}$ is $G_{\bullet}$-equivariant, then $f$ induces a map of homotopy quotients \begin{equation*} \begin{xy} \morphism<1000,0>[(\W G\times_G X)_{\bullet}`(\W G\times_G Y)_{\bullet};1\times_G f] \end{xy} \end{equation*} given on $n$-simplices by $1_{\LW G}\times f_n$. \begin{definition}\label{def:slien} A \emph{strict Lie $n$-group} is a simplicial Lie group $G_{\bullet}$ such that the horn-filling maps $\lambda^k_i(G)$ are isomorphisms for $k\geq n$. \end{definition} We might have defined a strict Lie $n$-group as a simplicial Lie group such that the maps $\lambda^k_i(G)$ were also covers for all $k<n$. An argument due to Moore shows that this follows from our definition. \begin{proposition} Let $G_{\bullet}$ be a strict Lie $n$-group. The simplicial object underlying $G_{\bullet}$ is a Lie $(n-1)$-groupoid. \end{proposition} \begin{proof} We perform an induction on the dimension of the horns to show that the maps $\lambda^k_i(G)$ are covers for $k<n$. Suppose that for $l<k$ and all $i$, the limit $\Lambda^l_i(G)$ exists and the map $\lambda^l_i(G)$ is a cover. Lemma \ref{lemma:psrep} shows that the limit $\Lambda^k_i(G)$ exists for all $i$.
Sections of $\Lambda^k_i(G)\times G_k$ are tuples \begin{equation*} ((g_0,\ldots,-,\ldots,g_k),g)\in\Lambda^k_i(G)\times G_k \end{equation*} such that, for $j<m$ with $j,m\neq i$, \begin{align*} d_{m-1}g_j&=d_jg_m. \end{align*} We perform an induction on $0\le\ell\le k+1$ to construct sections $g^\ell\in G_k$ such that, for $j<\ell$ with $j\neq i$, we have \begin{equation*} d_jg^\ell=g_j. \end{equation*} Fix $((g_0,\ldots,-,\ldots,g_k),g)\in\Lambda^k_i(G)\times G_k$, and set \begin{equation*} g^0:=g. \end{equation*} Now suppose that, for some $0\le\ell\le k$, we have $g^\ell\in G_k$ such that, for $j<\ell$ with $j\neq i$, \begin{equation*} d_jg^\ell=g_j. \end{equation*} For $j\neq i$, we define \begin{equation*} a^\ell_j:=g_j(d_jg^\ell)^{-1}\in G_{k-1}. \end{equation*} The horn relations on the $g_j$ ensure that $(a^\ell_0,\ldots,-,\ldots,a^\ell_k)$ defines a section of $\Lambda^k_i(G)$, so we have \begin{equation*} ((a^\ell_0,\ldots,-,\ldots,a^\ell_k),g^\ell)\in\Lambda^k_i(G)\times G_k. \end{equation*} We define \begin{equation*} g^{\ell+1}:=(s_\ell a^\ell_\ell)g^\ell, \end{equation*} setting $g^{\ell+1}:=g^\ell$ when $\ell=i$. A short exercise shows that, for $j<\ell+1$ with $j\neq i$, \begin{align*} d_jg^{\ell+1}=g_j. \end{align*} This completes the induction step. We now define \begin{equation*} \begin{xy} \morphism(0,0)<1800,0>[\Lambda^k_i(G)\times G_k`\Lambda^k_i(G)\times G_k;\varphi] \morphism(0,-250)/|->/<1800,0>[((g_0,\ldots,-,\ldots,g_k),g)`((g_0(d_0g^{k+1})^{-1},\ldots,-,\ldots,g_k(d_kg^{k+1})^{-1}),g^{k+1});] \end{xy} \end{equation*} By construction, $\varphi$ is an isomorphism. It factors the projection \begin{equation*} \begin{xy} \morphism<750,0>[\Lambda^k_i(G)\times G_k`\Lambda^k_i(G);] \end{xy} \end{equation*} as \begin{equation*} \begin{xy} \square/>`>`>`<-/<1150,500>[\Lambda^k_i(G)\times G_k`\Lambda^k_i(G)\times G_k`\Lambda^k_i(G)`G_k;\varphi``\pi_{G_k}`\lambda^k_i(G)] \morphism(0,500)|b|<1150,0>[\Lambda^k_i(G)\times G_k`\Lambda^k_i(G)\times G_k;\cong] \end{xy} \end{equation*} Axiom \ref{axiom:fg+g} guarantees that $\lambda^k_i(G)$ is a cover.
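For concreteness, the first step of the induction above can be verified directly (assuming $i\neq 0$, so that $g_0$ is present): here $a^0_0=g_0(d_0g^0)^{-1}$ and $g^1=(s_0a^0_0)g^0$, so the simplicial identity $d_0s_0=1$ and the fact that $d_0$ is a group homomorphism give \begin{equation*} d_0g^1=(d_0s_0a^0_0)(d_0g^0)=g_0(d_0g^0)^{-1}(d_0g^0)=g_0. \end{equation*}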
Lemma \ref{lemma:psrep} shows that the limit $\Lambda^{k+1}_i(G)$ exists for all $i$. This concludes the induction step. \end{proof} Observe that a strict Lie 1-group is a Lie group viewed as a constant simplicial diagram. A strict Lie 2-group is a simplicial Lie group $G_{\bullet}$ such that the simplicial object underlying $G_{\bullet}$ is the nerve of a Lie groupoid. Therefore, a strict Lie 2-group could be equivalently described as a Lie group in the category of Lie groupoids. Strict Lie 2-groups are relatively abundant. For example, the Lie 2-group associated to a finite-dimensional nilpotent differential graded Lie algebra concentrated in degrees $[-1,0]$ is a strict Lie 2-group (see \cite{Get:02} for details). \begin{theorem}\label{thm:wbar}\mbox{} \begin{enumerate} \item The nerve of a strict Lie $n$-group is a Lie $n$-group. \item The homotopy quotient of a $G_{\bullet}$-equivariant $n$-stack is an $n$-stack. \end{enumerate} \end{theorem} \begin{proof} Let $G_{\bullet}$ be a strict $n$-group, or let $\varphi:X_{\bullet}\rightarrow Y_{\bullet}$ be a $G_{\bullet}$-equivariant $n$-stack. We show that, in the first case, $\LW_{\bullet}G$ is a Lie $n$-group, and, in the second, that $1\times_{G}\varphi$ is an $n$-stack. Denote sections of $\LW_kG$ by \begin{align*} \g^j&=(g^j_0,\ldots,g^j_{k-1})\in G_0\times\cdots\times G_{k-1} = \LW_kG. \end{align*} Denote sections of $\Lambda^k_i(\LW G)$ by \begin{equation*} (\g^0,\ldots,-,\ldots,\g^k)\in \Lambda^k_i(\LW G). \end{equation*} We now give a series of isomorphisms which relate horns in the nerve or homotopy quotient to horns in the strict Lie $n$-group or equivariant $n$-stack. The formulas proceed from the observation that the highest face in these horns determines all but the last coordinates of the lower ones; these last coordinates themselves determine a horn in the original simplicial Lie group or $G_{\bullet}$-map. 
For $k>0$, the maps $\Lambda^k_i(\LW G)\to \LW_{k-1}G\times\Lambda^{k-1}_i(G)$ given by \begin{equation*} (\g^0,\ldots,\widehat{\g^i},\ldots,\g^k) \mapsto \begin{cases} (\g^k,(g^0_{k-2},\ldots,-,\ldots,g^{k-2}_{k-2},(g^k_{k-2})^{-1}g^{k-1}_{k-2})) & i<k-1 , \\[5pt] (\g^k,(g^0_{k-2},\ldots,g^{k-2}_{k-2},-)) & i=k-1 , \\[5pt] (\g^{k-1},(g^0_{k-2},\ldots,g^{k-2}_{k-2},-)) & i=k , \end{cases} \end{equation*} are isomorphisms. For $k>0$, the maps $\Lambda^k_i(\W G\times_G\varphi)\to \LW_k G\times\Lambda^k_i(\varphi)$ given by \begin{multline*} (((\g^0,x_0),\ldots,\widehat{(\g^i,x_i)},\ldots,(\g^k,x_k)),(\h,y)) \\ \mapsto \begin{cases} (\h,((x_0,\ldots,\widehat{x_i},\ldots,x_{k-1},(h_{k-1})^{-1}x_k),y)) & i<k , \\ (\h,((x_0,\ldots,x_{k-1},-),y)) & i=k , \end{cases} \end{multline*} are isomorphisms. For $k>1$, the isomorphisms for horns in the nerve fit into the following commuting squares: \begin{align*} & \begin{xy} \Square(0,0)/>`=`>`>/[\LW_k G`\Lambda^k_i(\LW G)` \LW_{k-1}G\times G_{k-1}` \LW_{k-1}G\times\Lambda^{k-1}_i(G); \lambda^k_i(\LW G)``\cong`1\times\lambda^{k-1}_i(G)] \end{xy} \\ & \begin{xy} \Square(0,0)/>`=`>`>/[\LW_k G`\Lambda^k_k(\LW G)` \LW_{k-1}G\times G_{k-1}` \LW_{k-1}G\times\Lambda^{k-1}_{k-1}(G); \lambda^k_k(\LW G)``\cong`1\times\lambda^{k-1}_{k-1}(G)] \end{xy} \end{align*} For $k=1$, we have \begin{equation*} \begin{xy} \Square/>`=`>`>/[\LW_1 G`\Lambda^1_i(\LW G)`G_0`\ast;\lambda^1_i(\LW G)``\cong`] \end{xy} \end{equation*} Similarly, for $k\geq 1$, the isomorphisms for horns in the homotopy quotients fit into the commuting squares \begin{equation*} \begin{xy} \Square/>`=`>`>/[(\W G\times_G X)_k` \Lambda^k_i(\W G\times_G\varphi)` \LW_k G\times X_k` \LW_k G\times\Lambda^k_i(\varphi); \lambda^k_i(1\times_G\varphi)``\cong`1\times\lambda^k_i(\varphi)] \end{xy} \end{equation*} These squares show that the relevant horn-filling maps for $\LW_{\bullet}G$ and $1\times_G\varphi$ are covers for all $k$ and isomorphisms for $k>n$.
\end{proof} \begin{remark} While we do not need it for this paper, one could define a strict $n$-stack as a homomorphism of simplicial Lie groups such that the relative horn-filling maps are isomorphisms in dimensions at least $n$. The analogues of the results above hold in the relative case, with minimal changes to the proofs. One could also make analogous definitions for simplicial Lie groupoids, for which the results above again carry over with minimal changes. \end{remark} \section{A Finite Dimensional String 2-Group} \label{sec:string} In this section, we specialize to the category of smooth manifolds and apply our results to construct finite dimensional Lie 2-groups. Let $A$ be an abelian group. For each natural number $n$, Eilenberg and Mac Lane introduced a simplicial abelian group $K(A,n)_{\bullet}$ whose geometric realization represents the cohomology functor $H^n(-;A)$. They further observed that $\LW_{\bullet}K(A,n)$ is isomorphic to $K(A,n+1)_{\bullet}$. This construction and identification also exist for abelian Lie groups. \begin{definition} Let $G$ be a Lie group and let $A$ be an abelian Lie group. \begin{enumerate} \item An \emph{$A$-valued $n$-cocycle} on $\LW_{\bullet}G$ is a span \begin{equation*} \begin{xy} \Atriangle/>`>`{}/<600,0>[U_{\bullet}`\LW_{\bullet}G`K(A,n)_{\bullet};``] \end{xy} \end{equation*} such that $U_{\bullet}\rightarrow\LW_{\bullet}G$ is a hypercover. \item An \emph{equivalence of cocycles} is a commuting diagram of cocycles \begin{equation*} \begin{xy} \Atrianglepair(0,0)/>`<-`>`<-`>/<600,300>[U^0_{\bullet}`\LW_{\bullet}G`V_{\bullet}`K(A,n)_{\bullet};````] \Vtrianglepair(0,-300)/<-`>`<-`>`<-/<600,300>[\LW_{\bullet}G`V_{\bullet}`K(A,n)_{\bullet}`U^1_{\bullet};````] \end{xy} \end{equation*} such that the maps $V_{\bullet}\rightarrow U^i_{\bullet}$ are hypercovers.
\end{enumerate} \end{definition} The connected $2$-types $\mathcal{G}_{\bullet}\in\ssm$ which have arisen in the literature are determined by \begin{enumerate} \item a Lie group $G$, \item an abelian Lie group $A$, and \item an equivalence class of $3$-cocycles \begin{equation*} \begin{xy} \Atriangle/>`>`{}/<600,0>[U_{\bullet}`\LW_{\bullet}G`K(A,3)_{\bullet};``] \end{xy} \end{equation*} \end{enumerate} Much work has gone into finding geometric models for smooth $2$-types. By pulling back the universal $K(A,2)$-bundle along a cocycle, we obtain a local $2$-bundle \begin{equation*} \begin{xy} \morphism[E_{\bullet}`U_{\bullet};]. \end{xy} \end{equation*} The composite \begin{equation*} \begin{xy} \morphism[E_{\bullet}`U_{\bullet};] \morphism(500,0)[U_{\bullet}`\LW_{\bullet}G;] \end{xy} \end{equation*} is an $\infty$-stack. This shows that connected smooth $2$-types can be realized as finite dimensional Lie $\infty$-groups. Over the last decade there have been many attempts to do better. The most relevant of these is provided by Schommer-Pries \cite{Sch:11}, who showed that connected smooth $2$-types can be realized as weak group objects in the bicategory of finite dimensional Lie groupoids. Zhu \cite{Zhu:09}, drawing on ideas of Duskin, constructed a nerve for such weak group objects and showed that the nerve is a Lie $2$-group. The tools in this article allow us to construct a Lie $2$-group $X_{\bullet}$ directly from the data above. The object produced is equivalent to the one obtained by Zhu from Schommer-Pries. Our methods extend to $n>2$.
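For orientation: this data records the Postnikov decomposition of the $2$-type. The group $G$ plays the role of the fundamental group, $A$ that of the second homotopy group, and the class of the $3$-cocycle plays the role of the $k$-invariant, presenting $\mathcal{G}_{\bullet}$ as an extension \begin{equation*} \begin{xy} \morphism<750,0>[K(A,2)_{\bullet}`\mathcal{G}_{\bullet};] \morphism(750,0)<750,0>[\mathcal{G}_{\bullet}`\LW_{\bullet}G;] \end{xy} \end{equation*}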
\begin{proposition} \label{prop:coc} Any $A$-valued $n$-cocycle \begin{equation*} \begin{xy} \Atriangle/>`>`{}/<600,0>[U_{\bullet}`\LW_{\bullet}G`K(A,n)_{\bullet};f``] \end{xy} \end{equation*} factors uniquely through $\tn(f)$ as in the diagram \begin{equation*} \begin{xy} \Vtrianglepair/<-`>`<-`>`<-/<600,300>[\LW_{\bullet}G`U_{\bullet}`K(A,n)_{\bullet}`\tn(U,f)_\bullet;f``\tn(f)``] \end{xy} \end{equation*} This factorization is an equivalence of cocycles. \end{proposition} \begin{proof} The data of a cocycle is equivalent to a commuting triangle \begin{equation*} \begin{xy} \Vtriangle<600,300>[U_{\bullet}`\LW_{\bullet}G\times K(A,n)_{\bullet}`\LW_{\bullet}G;`f`\pi_{\LW G}] \end{xy} \end{equation*} Theorem \ref{thm:hk1} shows that both diagonal maps are $\infty$-stacks over $\LW_{\bullet}G$. We apply $\tn$ to obtain the commuting diagram \begin{equation*} \begin{xy} \morphism(0,0)<1200,0>[\tn(U,f)_{\bullet}`\tn(\LW G\times K(A,n),\pi_{\LW G})_{\bullet};] \morphism(0,0)/@{>}|!{(0,-300);(1200,-300)}\hole/<600,-600>[\tn(U,f)_{\bullet}`\LW_{\bullet}G;] \morphism(1200,0)/@{>}|!{(0,-300);(1200,-300)}\hole/<-600,-600>[\tn(\LW G\times K(A,n),\pi_{\LW G})_{\bullet}`\LW_{\bullet}G;] \morphism(0,-300)<0,300>[U_{\bullet}`\tn(U,f)_{\bullet};] \morphism(1200,-300)<0,300>[K(A,n)_{\bullet}`\tn(\LW G\times K(A,n),\pi_{\LW G})_{\bullet};] \morphism(0,-300)<1200,0>[U_{\bullet}`K(A,n)_{\bullet};] \Vtriangle(0,-600)<600,300>[U_{\bullet}`K(A,n)_{\bullet}`\LW_{\bullet}G;``] \end{xy} \end{equation*} Proposition \ref{prop:tn} and Theorem \ref{thm:wbar} together show that the map \begin{equation*} \begin{xy} \morphism<1250,0>[\LW_{\bullet}G\times K(A,n)_{\bullet}`\tn(\LW G\times K(A,n),\pi_{\LW G})_{\bullet};] \end{xy} \end{equation*} is an isomorphism. Proposition \ref{prop:tnhyp} shows that $\tn(f)$ is an $n$-hypercover and that the map \begin{equation*} \begin{xy} \morphism[U_{\bullet}`\tn(U,f)_{\bullet};] \end{xy} \end{equation*} is a hypercover. 
\end{proof} We can now use Theorem \ref{thm:desc} to produce a Lie $2$-group from an $A$-valued $3$-cocycle on $\LW_{\bullet}G$. We can assume, without loss of generality, that the $3$-cocycle \begin{equation}\label{eq:coc} \begin{xy} \Atriangle|aab|/>`>`{}/<600,0>[U_{\bullet}`\LW_{\bullet}G`K(A,3)_{\bullet};f`\varphi`] \end{xy} \end{equation} has $U_0=\ast$ and $f$ a $3$-hypercover. We pull back the universal $K(A,2)$-bundle along $\varphi$ to obtain a local $2$-bundle \begin{equation*} \begin{xy} \morphism<750,0>[\varphi^{\ast}\W K(A,2)`U_{\bullet};p] \end{xy} \end{equation*} We descend this local $2$-bundle along the $3$-hypercover $f$, as in Theorem \ref{thm:desc}. We obtain a $2$-stack \begin{equation*} \begin{xy} \morphism<1240,0>[X_\bullet:=\tau_2(\varphi^{\ast}\W K(A,2),fp)_{\bullet}`\LW_{\bullet}G;] \end{xy} \end{equation*} The object $X_\bullet$ is the desired Lie $2$-group. We now examine $X_{\bullet}$ in more detail. For simplicity, we ignore degeneracies. The hypercover in the cocycle \ref{eq:coc} is determined by its $2$-skeleton (Proposition \ref{prop:tnhyp}). The $2$-skeleton consists of \begin{enumerate} \item a cover $f_1:U\rightarrow G$, which we view as a 1-truncated hypercover, and \item a cover $f_2:V\rightarrow(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2$. \end{enumerate} The Lie $2$-group $X_{\bullet}$ has the same $1$-skeleton as this hypercover. The $2$-simplices of $X_{\bullet}$ are determined by the Lie $1$-groupoid $P^{\geq 2}(fp)_{\bullet}$. Its vertex manifold, $P^{\geq 2}(fp)_0$, is isomorphic to $V\times A$. We showed that \begin{align*} P^{\geq 2}(fp)_1&\cong (V\times_{(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2}V)\times K(A,2)_2\\ &\cong (V\times_{(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2}V)\times A \end{align*} at the end of the proof of Theorem \ref{thm:desc}. 
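Unwinding the last isomorphism: a section of $P^{\geq 2}(fp)_1$ is a pair of sections $v_2,v_3$ of $V$ lying over the same section of $(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2$, together with an element $a\in A$, where we use the identification \begin{equation*} K(A,2)_2\cong A. \end{equation*} We write such a section as $(v_2,v_3,a)$.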
The target map of $P^{\geq 2}(fp)_{\bullet}$ is given by \begin{equation*} (v_2,v_3,a)\mapsto(v_2,a) \end{equation*} We abuse notation and denote by $\varphi$ the restriction of the map \begin{equation*} \begin{xy} \morphism[U_3`K(A,3)_3;\varphi] \end{xy} \end{equation*} to $P^{\geq 2}(f)_1\subset U_3$. The source map in the Lie groupoid $P^{\geq 2}(fp)_{\bullet}$ is given by \begin{equation*} (v_2,v_3,a) \mapsto (v_3,\varphi(v_2,v_3)+a) \end{equation*} The Lie $1$-groupoid structure on $P^{\geq 2}(fp)_{\bullet}$ ensures that $\varphi$ is an $A$-valued 1-cocycle on the cover \begin{equation*} \begin{xy} \morphism<1000,0>[V`(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2;] \end{xy} \end{equation*} The orbit space \begin{equation*} \begin{xy} \morphism<1000,0>[X_2`(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2;] \end{xy} \end{equation*} is the principal $A$-bundle determined by this data. We now describe the higher simplices. The map $\Lambda^k_i(\tau_2(fp))$ is an isomorphism for $k>2$ and all $i$ (Proposition \ref{prop:tn}). For $k=3$, the data of these isomorphisms can be reduced to a trivialization $\zeta$ of the bundle \begin{equation*} \begin{xy} \morphism(0,0)<1500,0>[d_3^*P^{\vee}\otimes d_2^*P\otimes d_1^*P^{\vee}\otimes d_0^*P`(\csk_1 U\times_{\Csk_1\LW G}\LW G)_3;] \end{xy} \end{equation*} Here $P$ denotes the principal $A$-bundle $X_2\rightarrow(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2$ just constructed, $P^{\vee}$ denotes its dual bundle, and $d_i$ denotes the face maps from $3$-simplices to $2$-simplices in the simplicial object $(\csk_1 U\times_{\Csk_1\LW G}\LW G)_{\bullet}$. Proposition \ref{prop:tn} implies that $X_{\bullet}$ is determined by its $3$-skeleton. In the present context, this is equivalent to requiring that $\zeta$ satisfy a pentagonal coherence condition coming from the $1$-skeleton of the $4$-simplex.
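To spell out this condition (a sketch, in additive notation): over a $4$-simplex, each of the five $3$-simplex faces contributes a pullback of $\zeta$, and these combine into an automorphism of the trivial $A$-bundle over $(\csk_1 U\times_{\Csk_1\LW G}\LW G)_4$, that is, an $A$-valued function. The coherence condition states that this alternating composite vanishes: \begin{equation*} \sum_{j=0}^{4}(-1)^j\,d_j^{\ast}\zeta=0, \end{equation*} where $d_j$ now denotes the face maps from $4$-simplices to $3$-simplices.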
Summing up, we see that the Lie $2$-group $X_{\bullet}$ is reducible to the following data: \begin{enumerate} \item a cover $f\colon U\too G$, \item a principal $A$-bundle $P\too(\csk_1 U\times_{\Csk_1\LW G}\LW G)_2$, and \item a trivialization $\zeta$ of \begin{equation*} \begin{xy} \morphism(0,0)<1500,0>[d_3^*P^{\vee}\otimes d_2^*P\otimes d_1^*P^{\vee}\otimes d_0^*P`(\csk_1 U\times_{\Csk_1\LW G}\LW G)_3;] \end{xy} \end{equation*} which satisfies a pentagonal coherence condition coming from the 1-skeleton of the 4-simplex. \end{enumerate} If we take $\spn$ for $G$, $U(1)$ for $A$, and a cocycle representing the fractional first Pontrjagin class $\frac{p_1}{2}$, then the resulting Lie $2$-group is the nerve, \textit{\`{a} la} Duskin--Zhu, of Schommer-Pries's model for $\strn$. The discussion above gives a construction of higher central extensions of Lie groups. More generally, we might consider higher \emph{abelian} extensions. We consider an Abelian Lie group $A$ on which the Lie group $G$ acts by automorphisms. The data which specifies a higher abelian extension of a Lie group, and the construction which produces a Lie $2$-group from this data, are analogous to the case of higher central extensions above. We sketch the necessary changes. For each $n>0$, the $G$-action on $A$ induces $G$-actions on $\W_{\bullet}K(A,n-1)$ and $K(A,n)_{\bullet}$ such that the universal $K(A,n-1)$-bundle \begin{equation*} \begin{xy} \morphism<925,0>[\W_{\bullet}K(A,n-1)`K(A,n)_{\bullet};] \end{xy} \end{equation*} is $G$-equivariant with respect to these actions. We define the \emph{twisted universal bundle} \begin{equation*} \begin{xy} \morphism<1000,0>[\W^G_{\bullet}K(A,n-1)`K^G(A,n)_{\bullet};] \end{xy} \end{equation*} by taking the homotopy quotient of the universal $K(A,n-1)$-bundle with respect to the $G$-action. 
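As a consistency check: when the action of $G$ on $A$ is trivial, the formulas defining the homotopy quotient reduce to those of the product, so that \begin{equation*} \begin{xy} \morphism<1000,0>[K^G(A,n)_{\bullet}`\LW_{\bullet}G\times K(A,n)_{\bullet};\cong] \end{xy} \end{equation*} and cocycles valued in $K^G(A,n)_{\bullet}$ reduce to the $A$-valued cocycles of the central case considered above.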
An $A$-valued $n$-cocycle on $\LW_{\bullet}G$ is now a commuting triangle \begin{equation*} \begin{xy} \Vtriangle<600,300>[U_{\bullet}`K^G(A,n)_{\bullet}`\LW_{\bullet}G;``] \end{xy} \end{equation*} such that $U_{\bullet}\rightarrow\LW_{\bullet}G$ is a hypercover. Equivalences of cocycles are defined analogously. The analogue of Proposition \ref{prop:coc} holds, and the construction proceeds just as above. The only difference is that, when we unpack the construction of the Lie $2$-group $X_{\bullet}$ produced from this more general notion of cocycle, the $2$-simplices $X_2$ now form the total space of a $G$-twisted principal $A$-bundle rather than of a principal $A$-bundle. We leave the remaining details to the interested reader. The methods above additionally allow for the construction of Lie $n$-groups for $n>2$. In particular, there exists a finite dimensional model of $\fivebrn$ as a Lie $7$-group extending $\strn$ by $K(\mathbb{Z},7)$. We expect that a model of $\fivebrn$ also exists as a Lie $6$-group extending $\strn$ by $K(U(1),6)$. \bibliographystyle{amsplain} \bibliography{master} \end{document}
TITLE: What are the (philosophical) implications (if any) of this experiment in relation to the Bohr-Einstein debates and hidden variable theories? QUESTION [3 upvotes]: What are the (philosophical) implications (if any) of this experimental result in relation to the Bohr-Einstein debates and hidden variable theories of quantum mechanics? A related question can be found here REPLY [2 votes]: There seems to be a confusion in this experiment, mixing frameworks and models. In the publication, at the beginning of the description of the experiment, they state: "First, we developed a superconducting artificial atom with the necessary V-shape level structure." Superconductivity is a meta level, emergent from the underlying quantum mechanics and modeled with quantum mechanics. That it can be modeled quantum mechanically does not subtract from the fact that it is emergent from zillions of particles. My opinion is supported by the statement in this tutorial on quantum trajectories that everything is within the scope of standard quantum mechanics and its postulates: "In these systems, the stochastic equations arise as effective evolution equations, and are in no sense anything other than standard quantum mechanics (except, perhaps, in the trivial sense of approaching the limit of continuous measurement)." One cannot argue about the basic postulates of quantum mechanics from data emergent from underlying quantum mechanics, in my opinion. In a sense it is like taking the mechanics in a video game and using it to deduce the mechanics of real three dimensional space. Let me give an example from classical physics: thermodynamics emerges from classical statistical mechanics; can one deduce the underlying particle interactions from thermodynamic states, assert their continuity, and change the postulates of classical mechanics?
It so happens that the emergent behavior of electron pairs in metals can be modeled quantum mechanically; that means the theory fits the data with postulates appropriate to the data. If, for this superconducting construct ("atom"), there is a discrepancy with the basic quantum mechanical postulates, too bad: it means that the model's postulates have to change for superconductors, with whatever that implies. In my opinion, to really question the underlying quantum mechanical framework of elementary particles, one needs experiments with elementary particles, not with collective phenomena emergent from zillions of such particles. Hidden variable theories for the basic quantum mechanics framework cannot be based on emergent states.
TITLE: Dummit and Foote as a First Text in Abstract Algebra QUESTION [8 upvotes]: I'm wondering how Dummit and Foote (3rd ed.) would fare as a first text in Abstract Algebra. I've researched this question on this site, and found a few opinions, which conflicted. Some people said it is better as a reference text, or something to read after one has a fair deal of exposure to the main ideas of abstract algebra, while others have said it is fine for a beginner. Is a text such as Herstein's Topics in Algebra, Artin's Algebra, or Fraleigh's A First Course in Abstract Algebra a better choice? Here's a summary of the parts of my mathematical background that I presume are relevant. I've covered most of Spivak's famed Calculus text (in particular the section on fields, constructing $\mathbf{R}$ from $\mathbf{Q}$, and showing the uniqueness of $\mathbf{R}$, which is probably the most relevant to abstract algebra), so I am totally comfortable with rigorous proofs. I also have a solid knowledge of elementary number theory; the parts that I guess are most relevant to abstract algebra are that I have done some work with modular arithmetic (up to proving fundamental results like Euler's Theorem and the Law of Quadratic Reciprocity), the multiplicative group $(\mathbf{Z}/n\mathbf{Z})^{\times}$ (e.g. which ones are cyclic), polynomial rings such as $\mathbf{Z}[x]$, and studying the division algorithm and unique factorization in $\mathbf{Z}[\sqrt{d}]$ (for $d \in \mathbf{Z}$). I have only a little bit of experience with linear algebra (about the first 30 pages or so of Halmos' Finite Dimensional Vector Spaces and a little bit of computational knowledge with matrices), though. With this said, I don't have much exposure to actual abstract algebra. I know what a group, ring, field, and vector space are, but I haven't worked much with these structures (i.e. I can give a definition, but I have little intuition and only small lists of examples).
I have no doubt that Dummit and Foote is comprehensive enough for my purposes (I hope to use it mostly for the sections on group theory, ring theory, and Galois Theory), but is it a good text for building intuition and lists of examples in abstract algebra for someone who has basically none of this? Will I, more than just learning theorems and basic techniques, develop a more abstract and intuitive understanding of the fundamental structures (groups, rings, modules, etc.)? It is a very large and supposedly dense text, so will the grand "picture" of group theory, for example, be lost? I've heard it is a book for people who have some basic intuition in group and ring theory, and I hesitate to put myself in this category given my description of my relevant knowledge in the paragraph above. Do you think the text is right for me, or would I be more successful with one of the three texts I mentioned in the first paragraph? Thanks for reading this (lengthy) question. I look forward to your advice! REPLY [2 votes]: I would absolutely recommend Artin's Algebra in your situation. Apart from the book being excellently written, one of its major advantages is that it develops linear algebra and abstract algebra in parallel. Many of the most interesting applications of groups are to geometric problems, and Artin's book is great for that. Many other books on abstract algebra, including Dummit and Foote, have much less discussion of linear algebra. The reason for this is probably that this fits the division of university courses in the U.S. between linear algebra, taken by a wide range of students, and abstract algebra, of interest to a more restricted audience. This isn't sensible from a mathematical viewpoint.
157,732