On 2011-01-06 01:10, Eric Blake wrote:
> On 01/05/2011 07:03 AM, Osier Yang wrote:
>> If an invalid cellno is specified, command "freecell" will still print the
>> amount of available memory of the node. As a fix, print an error instead.
>>
>> * tools/virsh.c: "vshCommandOptInt": return -1 when a value for the
>>   parameter is specified but invalid (i.e. strtol failed); this doesn't
>>   affect other functions that use "vshCommandOptInt".
>> ---
>>  tools/virsh.c |   14 ++++++++++++--
>>  1 files changed, 12 insertions(+), 2 deletions(-)
>>
>> diff --git a/tools/virsh.c b/tools/virsh.c
>> index 55e2a68..31f2a54 100644
>> --- a/tools/virsh.c
>> +++ b/tools/virsh.c
>> @@ -2275,6 +2275,12 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
>>          return FALSE;
>>
>>      cell = vshCommandOptInt(cmd, "cellno", &cell_given);
>> +
>> +    if (cell == -1) {
>> +        vshError(ctl, "%s", _("Invalid value for 'cellno', expecting an int"));
>
> -1 is a valid int, but not a valid cellno, so --cellno=-1 would now give a
> misleading message.

urgh, yes.

> I also don't like the fact that you are changing the semantics of
> vshCommandOptInt, but not the counterparts such as vshCommandOptUL. I'd
> rather see all the vshCommandOpt* functions have the same semantics
> regarding their found parameter. And, since some of the vshCommandOpt*
> functions return unsigned values, you can't rely on -1 as a sentinel return
> value to indicate "argument present but invalid". You'd have to go with
> something different, such as altering the semantics of the found argument:
> instead of being a binary TRUE/FALSE return (using TRUE/FALSE and int* is
> nasty anyway, because we already have <stdbool.h> and could be using
> true/false and bool* instead), let's instead have it be a ternary value:
>
>   found < 0  => argument was present, but invalid (not an integer); return value is 0
>   found == 0 => argument was missing; return value is 0
>   found > 0  => argument was present and valid; return value is its value
>
> But this means that all existing callers that pass NULL instead of &found
> as the third argument are silently ignoring errors, at which point it seems
> like we should require a non-NULL found argument. But if we do that, we
> might as well go one step further and swap the order of the API entirely
> (to force ALL callers to check for errors):
>
>   int vshCommandOptUL(const vshCmd *cmd, const char *name,
>                       unsigned long *value) ATTRIBUTE_NONNULL(3);
>
> Return value is < 0 on failure (*value untouched), 0 when the option is
> absent (*value untouched), and > 0 on success (*value updated). Swapping
> the API like that also has the benefit that a client can specify a non-zero
> default:

agree, I thought like so when making the patch, but was not sure if it would
make easy things complicated.

>   unsigned long value = 1024;
>   if (vshCommandOptUL(cmd, "arg", &value) < 0) {
>       error;
>       return FALSE;
>   }
>   use value
>
> rather than the current code usage of checking whether a value was supplied
> and, if not, supplying the default. Yes, an API swap like this would be a
> much bigger change, but it seems like it is cleaner in the long run (since
> invalid cellno can't be the only case where passing a non-integer string
> gets silently ignored back to the default integer value).

indeed, we do have more than one bug caused by this problem.

>> +        if ((arg->data == end_p) || (*end_p != 0)) {
>>              num_found = FALSE;
>> -        else
>> +            res = -1;
>> +        } else
>>              num_found = TRUE;
>
> Style nit: you used:
>
>   if (cond) {
>       abc;
>       def;
>   } else
>       xyz;
>
> But we prefer either:
>
>   if (!cond)
>       xyz;
>   else {
>       abc;
>       def;
>   }
>
> or:
>
>   if (cond) {
>       abc;
>       def;
>   } else {
>       xyz;
>   }
>
> since HACKING documents that an else clause should only ever omit braces
> when the if clause also omitted braces, but an if clause can omit braces
> even when the else clause requires them.

good to know, thanks.

> NACK as-is; but I agree that this is (at least one) real bug to be fixed.
> Does my idea for a broader rewrite of the vshCommandOpt* function semantics
> make sense?

+1, as a vote.

Regards
Osier
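(For readers following the thread: the reworked helper being discussed would look roughly like the sketch below. This is only an illustration of the proposed ternary-return semantics, not the patch that was eventually committed; the helper and struct names follow the existing virsh code referenced in the diff above, and the snippet assumes <errno.h> and <limits.h> are available.)

    /* Returns < 0 if the option is present but not a valid int (*value untouched),
     * 0 if the option is absent (*value untouched), and > 0 on success (*value set). */
    static int
    vshCommandOptInt(const vshCmd *cmd, const char *name, int *value)
    {
        vshCmdOpt *arg = vshCommandOpt(cmd, name);
        char *end_p = NULL;
        long tmp;

        if (!arg || !arg->data)
            return 0;                      /* option missing */

        errno = 0;
        tmp = strtol(arg->data, &end_p, 10);
        if (errno || end_p == arg->data || *end_p != '\0' ||
            tmp < INT_MIN || tmp > INT_MAX)
            return -1;                     /* present, but not a valid int */

        *value = tmp;
        return 1;                          /* present and valid */
    }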
https://www.redhat.com/archives/libvir-list/2011-January/msg00168.html
My Java is pretty rough. So far I've completed one class of Java all the way up to basic array functions. I've asked my Professor to explain it, but he talks with so much jargon. Could someone explain to me why this program spits out "24" when I run it? O.o I learned that when changing any of the numbers at the bottom (8, 4, 2) to '0', I get a "dividing by zero" error, but I couldn't figure out how that worked either.

//#2 from Spring 2011 Test #2
public class T1b {
    protected int a;

    public T1b(int aIn) {
        a = aIn;
    }//Constructor

    public int f(int b, int c) {
        return a * b / c;
    }//f()
}//class T1b

class D extends T1b {
    protected int third;

    public D(int a) {
        super(a + 1);
        third = a / 3;
    }//Constructor

    public int g(int x) {
        return f(x, x / 2);
    }//g()
}//class D

class E extends D {
    public E(int z) {
        super(z);
    }//Constructor

    public int f(int b, int c, int d) {
        return super.f(b, c) + g(d);
    }//f()

    public static void main(String[] args) {
        E e = new E(5);
        System.out.println(e.f(8, 4, 2));
    }//main()
}//class E
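A quick hand trace of the calls (assuming the code compiles and runs exactly as posted) shows where the 24 comes from:

    // new E(5)     -> E(5) calls super(5) -> D(5) calls super(5 + 1), so a = 6 and third = 5/3 = 1
    // e.f(8, 4, 2) -> E's three-argument f overloads the inherited two-argument f (it does not replace it)
    //              =  super.f(8, 4) + g(2)
    //              =  (6 * 8) / 4   + f(2, 2/2)   // g calls the inherited two-argument f
    //              =  12            + (6 * 2) / 1
    //              =  24

Passing 0 as the second or third argument makes one of those divisors zero, which is where the divide-by-zero error comes from.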
https://www.daniweb.com/programming/software-development/threads/435497/explain-this-program
IRC log of lld on 2010-09-09 Timestamps are in UTC. 13:56:25 [RRSAgent] RRSAgent has joined #lld 13:56:25 [RRSAgent] logging to 13:56:31 [bernard] will go through boston 13:56:33 [TomB] rrsagent, bookmark 13:56:33 [RRSAgent] See 13:56:45 [TomB] zakim, this will be lld 13:56:45 [Zakim] ok, TomB, I see INC_LLDXG()10:00AM already started 13:57:13 [Zakim] +??P14 13:57:23 [TomB] zakim, ??P14 is me 13:57:23 [Zakim] +TomB; got it 13:57:33 [fsasaki] fsasaki has joined #lld 13:57:44 [TomB] zakim, who is on the call? 13:57:44 [Zakim] On the phone I see [IPcaller], TomB 13:57:54 [Zakim] + +1.614.764.aaaa 13:58:11 [Zakim] + +33.1.53.79.aabb 13:58:18 [TomB] zakim, aaaa is Jeff_ 13:58:18 [Zakim] +Jeff_; got it 13:58:21 [marma] hi, I'm following this only via IRC (Martin Malmsten from KB, Stockholm) 13:58:22 [emma] zakim, aabb is me 13:58:22 [Zakim] +emma; got it 13:58:31 [Zakim] +??P24 13:58:42 [Zakim] +??P21 13:58:49 [TomB] zakim, ??P24 is Bernard 13:58:49 [Zakim] +Bernard; got it 13:59:07 [TomB] zakim, IPcaller is Gordon 13:59:07 [Zakim] +Gordon; got it 13:59:13 [TomB] zakim, who is on the call? 13:59:13 [Zakim] On the phone I see Gordon, TomB, Jeff_, emma, Bernard, Felix (muted) 13:59:14 [kcoyle] kcoyle has joined #lld 13:59:32 [TomB] Agenda: 13:59:42 [Andras] Andras has joined #lld 13:59:48 [TomB] Regrets: Jodi, Anette, Joachim, Monica, Kim 14:00:12 [Zakim] +??P25 14:00:35 [TomB] 14:00:51 [TomB] rrsagent, please make record public 14:00:59 [TomB] Meeting: LLD XG 14:01:02 [Zakim] +??P29 14:01:02 [TomB] Chair: Tom 14:01:02 [whalb] whalb has joined #lld 14:01:03 [Zakim] + +1.361.279.aacc 14:01:07 [TomB] Scribe: Bernard 14:01:15 [TomB] scribenick: bernard 14:01:20 [TomB] zakim, aacc is Alexander 14:01:20 [Zakim] +Alexander; got it 14:01:28 [Andras] zakim, P29 is Andras 14:01:30 [mzrcia] mzrcia has joined #LLD 14:01:31 [Zakim] sorry, Andras, I do not recognize a party named 'P29' 14:01:39 [ww] is the bristol dialin broken? 14:01:42 [Andras] zakim, ??P29 is Andras 14:01:42 [Zakim] +Andras; got it 14:01:50 [Zakim] + +43.316.876.aadd 14:01:51 [Zakim] +[LC] 14:01:52 [TomB] Regrets+: Antoine 14:01:55 [whalb] zakim, aadd is me 14:01:56 [Zakim] +whalb; got it 14:02:06 [kefo] kefo has joined #lld 14:02:16 [TomB] zakim, LC is kefo 14:02:16 [Zakim] +kefo; got it 14:02:31 [Zakim] + +1.330.655.aaee 14:02:36 [michaelp] michaelp has joined #lld 14:03:20 [ww] Does anyone know if +44 117 370 6152 still works? 14:03:28 [Zakim] +Jonathan_Rees 14:03:41 [ww] I get Allison Smith's voice telling me the number I have dialed is not in service 14:03:55 [jar] jar has joined #lld 14:04:04 [ww] Allison Smith is the "voice of Asterisk" so I'm guessing it comes from the W3C voice bridge not the carrier 14:04:08 [bernard] seems that only Boston bridge is working 14:04:25 [Andras] as usual :-) 14:04:47 [TomB] ww, are you dialing Boston: +1-617-761-6200, Nice: +33.4.26.46.79.03 , Bristol: +44.203.318.0479 - some of the numbers are new 14:05:08 [ww] on the 020 number, Allison Smith tells me "all circuits are busy now 14:05:12 [Zakim] + +1.423.463.aaff 14:05:17 [ww] can't easily dial internationally at the moment :( 14:05:30 [Zakim] +Jeff_.a 14:05:36 [ww] TomB: fwiw +4420 is London not Bristol 14:05:37 [Zakim] + +49.4.aagg 14:05:40 [rsinger] rsinger has joined #lld 14:05:53 [ww] I think... 14:06:04 [jneubert] zakim, aagg is jneubert 14:06:07 [rsinger] Zakim, mute me 14:06:11 [Zakim] +jneubert; got it 14:06:11 [ww] actually, usually 0207/0208 are london. 
0203 might well be elsewhere 14:06:12 [Zakim] sorry, rsinger, I do not know which phone connection belongs to you 14:06:35 [michaelp] zakim, Michael is really me 14:06:35 [Zakim] +michaelp; got it 14:07:00 [rsinger] Zakim, mute me 14:07:00 [Zakim] rsinger should now be muted 14:07:04 [bernard] Admin 14:07:08 [bernard] RESOLUTION: accept previous telecon minutes 14:07:31 [bernard] next meetings 14:08:35 [jeff_] true 14:08:42 [bernard] Tom : will circulate more informatin about F2F 14:09:22 [bernard] Tom : will need improvisation about phone and projector ... 14:09:25 [LarsG] LarsG has joined #lld 14:09:39 [bernard] ... agenda is on the wiki 14:09:55 [emma] draft agenda for F2F : 14:10:39 [bernard] ACTION; all add their attendance on the wiki 14:10:50 [bernard] ACTION: all add their attendance on the wiki 14:11:10 [TomB] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at [recorded in ] 14:11:15 [TomB] --continues 14:11:28 [Zakim] + +49.613.692.aahh 14:11:40 [bernard] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at 14:12:13 [bernard] Tom : informal meeting in Cologne 14:12:26 [bernard] Informal meeting of XG members at SWBIB 2010 on the morning of 1 December - contact Joachim Neubert 14:12:42 [bernard] Use cases and case studies - update 14:13:56 [bernard] Tom: next week pick two use cases for detailed discussion 14:14:38 [emma] ACTION: By Monday Sept. 6 members should add to list of email lists at [recorded in ] 14:14:43 [emma] --DONE 14:14:56 [emma] ACTION: Group to comment on email text at by Monday, Sept. 6 [recorded in ] 14:14:58 [emma] --DONE 14:15:09 [emma] ACTION: Everyone to elaborate on topics in the wiki [recorded in ] 14:15:10 [bernard] 4. RDA 14:15:12 [emma] --CONTINUE 14:15:32 [emma] ACTION: Gordon will prepare something on RDA, FRBR etc. for discussion in Sept. agenda [recorded in ] 14:15:36 [emma] --DONE 14:15:39 [bernard] -- To be discussed: 14:16:08 [RRSAgent] I have made the request to generate emma 14:16:11 [gneher] gneher has joined #lld 14:18:12 [bernard] GordonD commenting 14:19:34 [TomB] Within IFLA, new technical group for namespaces, coordination of standards in SW framwork, etc - will report to new core activity that resurrects something abandoned a few years ago - concept of universal bibliographic control. One or two months. 14:20:43 [Zakim] +??P20 14:21:05 [bernard] GordonD: no formal connection between FRBR and RDA activities 14:21:16 [TomB] ... special interest group for SW issues. Producing mappings beyond IFLA, e.g. for RDA, remains semi-formal - no formal orgn looking at mappings with RDA. Only joint membership in activiities. 14:22:02 [bernard] Publication of standards in RDF not synchronized 14:22:08 [emma] s/activiities/activities 14:22:34 [TomB] ... Trying to convey complexity of issues - if we are to get these standards to work together effectively. 14:23:14 [bernard] GordonD : Issues section gathers issues discussed in both groups 14:23:45 [bernard] GordonD: Constrained versus unconstrained properties and classes is the main issue ... 14:24:01 [matolat] matolat has joined #lld 14:25:03 [bernard] no specification of constraints in RDA ... 
14:25:13 [Zakim] +michaelp.a 14:25:33 [emma] pressure from non-lib community for unconstrained properties but RDA, JSC and IFLA groups are convinced that constrained versions ar necessary 14:25:47 [kcoyle] q+ 14:25:52 [emma] recent IFLA meeting consider publishing unconstrained versions, linked with constrained ones. 14:25:59 [michaelp] q+ to talk about constraints 14:26:05 [matolat] zakim michaelp.a is matolat 14:26:08 [TomB] Gordon: IFLA persuaded, JSC coming around - have indicated they will not block unconstrained properties. 14:26:13 [michaelp] zakim, unmute me 14:26:13 [Zakim] michaelp should no longer be muted 14:27:18 [bernard] kcoyle: another issue has to do with the way constraints are expressed and how they are enforced in applications 14:27:36 [bernard] kcoyle: not sure the RDA method actually works 14:28:08 [bernard] GordonD: constraints are not enforced on applications 14:28:40 [bernard] GordonD: constraints are useful to check legacy data 14:29:12 [bernard] kcoyle: there is a sort a contradiction there 14:29:31 [TomB] Gordon: They are constraints on _inferences_. Powerful if you consider there may be billions of triples eventually. Records can be "filled in" using inferences. 14:30:00 [bernard] GordonD: I won't call them "unformal" constraints 14:30:24 [AlexanderH] AlexanderH has joined #lld 14:30:47 [TomB] ack Michaelp 14:30:47 [Zakim] michaelp, you wanted to talk about constraints 14:31:03 [bernard] GordonD: We need more technical investigation 14:31:24 [kcoyle] hearing loud typing 14:31:31 [bernard] Michaelp : there is a bit of confusion about constraints and validation 14:32:03 [bernard] Michaelp: In OWL the constraints are enabling, not restricting 14:32:41 [bernard] Michaelp: There are different of constraints like in XML schemas 14:33:30 [bernard] GordonD: Indeed a big deal of confusion in this area 14:33:49 [bernard] zakim, mute me 14:33:49 [Zakim] Bernard should now be muted 14:34:08 [jeff_] q+ 14:34:15 [bernard] GordonD: RDA wants to be validated as a schema 14:35:18 [bernard] GordonD: but also enabling as Michaelp has pointed 14:35:25 [TomB] ack jeff_ 14:36:25 [kcoyle] q+ 14:36:34 [TomB] ack kcoyle 14:36:45 [bernard] Jeff_: The same model can be used either as a schema, either as enabling depending on the use case 14:37:11 [bernard] kcoyle: we should clarify which constraints we are speaking about 14:38:00 [bernard] GordonD: There are a lot of value constraints in RDA 14:38:04 [emma] Karen : RDA primary constraints are about constraining a property to a FRBR entity, not the values 14:39:55 [bernard] GordonD: RDA should keep its own versions of FRBR constraints ... 14:40:18 [bernard] Tom : move on to next topics 14:40:37 [bernard] Tom : port this discussion to F2F 14:40:52 [jeff_] +1 14:41:01 [mzrcia] +1 14:41:02 [AlexanderH] +1 14:41:31 [bernard] GordonD : next topic is "Application profiles or OWL ontologies" 14:42:22 [LarsG] * Zakim please mute me 14:42:39 [emma] GordonD : ISBD elements have a sequence + indicate if mandatory or not, so need for application profile 14:43:25 [bernard] GordonD: What LLD can bring is clarification of when using application profiles and when using OWL 14:43:53 [bernard] next topic : Use of published properties and classes 14:45:42 [bernard] GordonD: How can library comunity be made aware of and re-use widely used classes and properties? 
14:46:12 [emma] q+ to suggest the need to encourage the use of widespread standards 14:46:35 [bernard] Need to acknowledge overlap between libray and "open Web" classes and properties 14:46:45 [TomB] q+ to suggest additional attention to "alignment" - mappings and "identity links" between vocabularies 14:46:52 [TomB] ack emma 14:46:53 [Zakim] emma, you wanted to suggest the need to encourage the use of widespread standards 14:47:12 [bernard] emma: Agreed with GordonD ... 14:47:23 [bernard] it's very much an issue of trust ... 14:47:45 [mzrcia] FRSAD intent to use skos semantic relationships instead of repeating to define them 14:47:47 [bernard] LLD should encourage library community not to be shy about this use 14:48:15 [mzrcia] q+ 14:48:25 [TomB] ack TomB 14:48:25 [Zakim] TomB, you wanted to suggest additional attention to "alignment" - mappings and "identity links" between vocabularies 14:48:37 [bernard] emma: library community not used t o multiplicity of representations 14:48:53 [jeff_] Gordon, you should add OWL to the list of DC, FOAF, SKOS at 14:49:17 [GordonD] Jeff, will do 14:49:25 [TomB] ack mzrcia 14:50:23 [bernard] marcia : We have already actions on declaring relationships using "sameAs" or "equivalent classes" 14:50:44 [mzrcia] using SKOS secmantic relationships 14:51:04 [mzrcia] such as broader, narrower... between themas 14:51:22 [kcoyle] q+ 14:51:30 [bernard] GordonD: The library community should not be childish in saying "not invented here" 14:52:23 [bernard] GordonD: Elements coming from the library models will be useful elsewhere also 14:52:38 [GordonD] bernard: churlish, not childish 14:52:47 [bernard] sorry :)) 14:53:01 [mzrcia] I was also into the discussion of reusing/borwwring the assotiative relationships between concepts/topics defined by the Getty vocabulary 14:53:33 [Zakim] -Felix 14:54:37 [bernard] GordonD: How can LLD can have action on milestones in this area? 14:55:23 [bernard] GordonD : Library linked-data and legacy records ... 14:55:45 [bernard] potential to generate billions of instances ... 14:56:36 [bernard] 14:57:03 [emma] GordonD : Need from LLD community to encourage freeing the data 14:57:09 [bernard] It's a political issue (How can libraries be encouraged to "free" their records?) 14:57:34 [bernard] GordonD : Technical infrastructure ... 14:58:01 [jeff_] q+ 14:58:22 [emma] Library system vendors will only change if customers ask for it 14:58:25 [TomB] ack kcoyle 14:58:32 [bernard] How are vendors to see the ROI? 14:58:42 [bernard] Duplication and identifiers 14:58:58 [RRSAgent] I have made the request to generate emma 14:59:16 [bernard] GordonD : identifying different records for the same thing is also a big issue .. 15:00:19 [bernard] ... Libray culture ... 15:00:41 [bernard] ... sociological barriers to address ... 15:00:43 [emma] s/Libray/Library 15:01:09 [jar] q? 15:01:11 [TomB] ack jeff_ 15:01:17 [rsinger] GordonD++ #awesome 15:01:40 [emma] +1, Thanks GordonD ! 15:02:00 [mzrcia] +1, Gordon! 15:02:21 [LarsG] +1 GordonD, Great! 15:02:23 [TomB] +1 great summary of the issues! 15:02:36 [GordonD] All, thanks! 15:02:51 [bernard] TomB : a lot of congratulations to Gordon! 
15:03:00 [Zakim] -kcoyle 15:03:01 [Zakim] -whalb 15:03:03 [Zakim] -jeff_ 15:03:04 [Zakim] -Jonathan_Rees 15:03:06 [Zakim] -Andras 15:03:07 [Zakim] -michaelp.a 15:03:09 [Zakim] -Alexander 15:03:11 [Zakim] -GordonD 15:03:12 [Zakim] -jneubert 15:03:13 [gneher] gneher has left #lld 15:03:14 [Zakim] -michaelp 15:03:16 [Zakim] -LarsG 15:03:17 [bernard] zakim, list attendees 15:03:18 [Zakim] -kefo 15:03:20 [Zakim] -rsinger 15:03:22 [Zakim] -marcia 15:03:23 [matolat] bye 15:03:24 [Zakim] As of this point the attendees have been TomB, +1.614.764.aaaa, +33.1.53.79.aabb, emma, Felix, Bernard, GordonD, kcoyle, +1.361.279.aacc, Alexander, Andras, +43.316.876.aadd, 15:03:31 :03:32 [emma] zakim, unmute bernard 15:03:34 [michaelp] michaelp has left #lld 15:03:40 [Zakim] -gneher 15:03:41 [Zakim] Bernard should no longer be muted 15:04:12 [TomB] rrsagent, please draft minutes 15:04:12 [RRSAgent] I have made the request to generate TomB 15:04:22 [emma] [adjourned] 15:08:46 [TomB] zakim, bye 15:08:46 [Zakim] leaving. As of this point the attendees were TomB, +1.614.764.aaaa, +33.1.53.79.aabb, emma, Felix, Bernard, GordonD, kcoyle, +1.361.279.aacc, Alexander, Andras, +43.316.876.aadd, 15:08:46 [Zakim] Zakim has left #lld 15:08:49 :08:52 [TomB] rrsagent, bye 15:08:52 [RRSAgent] I see 7 open action items saved in : 15:08:52 [RRSAgent] ACTION: all add their attendance on the wiki [1] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at [recorded in ] [2] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Potential attendees in Pittsburgh to use wiki page to indicate whether they are attending or not at [3] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: By Monday Sept. 6 members should add to list of email lists at [recorded in ] [4] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Group to comment on email text at by Monday, Sept. 6 [recorded in ] [5] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Everyone to elaborate on topics in the wiki [recorded in ] [6] 15:08:52 [RRSAgent] recorded in 15:08:52 [RRSAgent] ACTION: Gordon will prepare something on RDA, FRBR etc. for discussion in Sept. agenda [recorded in ] [7] 15:08:52 [RRSAgent] recorded in
http://www.w3.org/2010/09/09-lld-irc
What is really weird is that if I use a flat tile...like grass, it shows up fine...although it does look a little squashed. But my game will be using isometric cube based/shaped textures. So I need it to work with the cubes, but as you can see from the screenshot...it's all staggered. I am not very good at math, and it seems like no matter what I change, I just cannot get it right. Also, since I am very bad with math, I might have my Height & Depth mixed up. For my code's reference...depth refers to stacking cubes on top of each other (terrain height), whereas height refers to how deep the cubes go (on the Y plane). Right now I am just testing with a depth of 1, so 1 map layer to create a nice flat diamond, with a height of 10 and a width of 10. Even still, I don't think that is what is wrong with it; I have no idea and would like an outside perspective, thanks.

Map.cs Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace IsometricMapping
{
    public class Map
    {
        // Width, height and depth of the map (X, Y, Z)
        private int width, height, depth;

        // Accessor for width
        public int GetWidth { get { return this.width; } }

        // Accessor for height
        public int GetHeight { get { return this.height; } }

        // Accessor for depth
        public int GetDepth { get { return this.depth; } }

        // Array of arrays of tiles used by this map [depth][width,height]
        private Tile[][,] tiles;

        // Accessor for the tile array
        public Tile[][,] GetTiles { get { return this.tiles; } }

        // The default texture assigned to tiles
        private Texture2D defaultTexture;

        // Accessor for the default texture
        public Texture2D GetDefaultTexture { get { return this.defaultTexture; } }

        // Default constructor for the Map class
        public Map(Texture2D defaultTexture, int width, int height, int depth)
        {
            // Assign values to fields
            this.defaultTexture = defaultTexture;
            this.width = width;
            this.height = height;
            this.depth = depth;

            // Initialize the Tile array
            this.tiles = new Tile[this.depth][,];

            // Set up tile positions
            for (int z = 0; z < this.depth; z++)
            {
                // Initialize tile array by Z-layer
                this.tiles[z] = new Tile[this.width, this.height];

                for (int y = 0; y < this.height; y++)
                {
                    for (int x = this.width - 1; x >= 0; x--)
                    {
                        // Initialize tile at the specified position
                        this.tiles[z][x, y] = new Tile(this.defaultTexture, new Vector2(
                            (x * this.defaultTexture.Width / 2) + (y * this.defaultTexture.Width / 2),
                            (y * this.defaultTexture.Height / 2) - (x * this.defaultTexture.Height / 2)));
                    }
                }
            }
        }

        // Draws the map
        public void Render(SpriteBatch spriteBatch)
        {
            for (int z = 0; z < this.depth; z++)
            {
                for (int y = 0; y < this.height; y++)
                {
                    for (int x = this.width - 1; x >= 0; x--)
                    {
                        spriteBatch.Draw(
                            this.tiles[z][x, y].GetTexture,
                            this.tiles[z][x, y].GetPosition,
                            Color.White);
                    }
                }
            }
        }
    }
}

Tile.cs Class

using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

namespace IsometricMapping
{
    public class Tile
    {
        private Texture2D texture;

        public Texture2D GetTexture { get { return this.texture; } }

        public Texture2D SetTexture
        {
            set { if (value != null) this.texture = value; }
        }

        public int GetWidth { get { return this.texture.Width; } }

        public int GetHeight { get { return this.texture.Height; } }

        private Vector2 position;

        public Vector2 GetPosition { get { return this.position; } }

        public Tile(Texture2D texture, Vector2 position)
        {
            this.texture = texture;
            this.position = position;
        }
    }
}

Default Tile Texture:

Map Result:

By the way, I have gotten isometric mapping to work from the guides I have read. However, they are really just a jumping-off point, and don't help me write the code the way I want it, similar to the way the code is written above. Also, they usually miss an important step such as terrain height, or are jagged as opposed to diamond, etc.
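For comparison, the placement formula usually quoted for a flat diamond layout looks like the sketch below. This is only an illustration for discussion, not part of the project above; the method and parameter names are made up, and tileWidth/tileHeight here mean the footprint of the diamond top of the cube, not the full texture size.

// Typical diamond (2:1) projection: +x runs down-right on screen, +y runs down-left.
// Compare this against the Vector2 computed in the Map constructor above.
Vector2 MapToScreen(int x, int y, int tileWidth, int tileHeight)
{
    float screenX = (x - y) * (tileWidth / 2f);
    float screenY = (x + y) * (tileHeight / 2f);
    return new Vector2(screenX, screenY);
}

An extra horizontal offset is normally added to screenX so that tiles with x < y don't end up at negative coordinates.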
http://www.gamedev.net/topic/633367-isometric-mapping-issues/
This article demonstrates how the Nokia MixRadio API can be used, taking a sample application, Music Explorer, as an example. The article begins with an introduction to Music Explorer itself and then describes the steps taken to integrate the Nokia MixRadio API into the application. You can download the Music Explorer application (with full source code for reference) from GitHub.

The Music Explorer example demonstrates the use of the Nokia MixRadio API together with standard Windows Phone 8 audio features to create an immersive music experience. It takes advantage of Nokia MixRadio API features such as searching for artists by name, requesting top artists and new releases, and launching the Nokia MixRadio application from within another application to play a radio mix or show artist and product information. Instead of being just a "Nokia MixRadio Lite" type of browser for viewing artists and products, Music Explorer integrates Nokia MixRadio API features with the local music on the device in a number of ways: favourite artists are ordered according to the most played artists on the device, the recommended list is ordered by how many times an artist is found to be similar to your favourites, and you can also play the local songs from your favourite artists. The main panorama of Music Explorer, shown below, contains six items: favourites, recommended, what's new, who's hot, genres, and mixes.

Figure 1. Main panorama

Favourites shows the 20 most played artists on the device, and the artist images are retrieved using the Nokia MixRadio API. Recommended shows artists which are determined by the Nokia MixRadio API to be similar to your favourite artists. The lists are country specific, and genre and mix names are even localized. For this reason location services are used to obtain the device's current location. Another main component of the user interface is the artist pivot shown below, reached by selecting an artist from favourites, recommended, who's hot, or from the genre sub page shown later. The artist pivot can be used to find out what kind of products are available in the Nokia MixRadio service for a certain artist. It is not possible to buy products using the Nokia MixRadio API, so interaction with a product takes the user to the Nokia MixRadio application where the product can be bought. While in the artist pivot, it is also possible to launch an artist mix in Nokia MixRadio or to listen to local tracks stored on the device.

Figure 2. Artist pivot

The Nokia MixRadio API can be used to launch the Nokia MixRadio app from the context of a different application. The Nokia MixRadio app can be launched to a specified artist or product view, or it can be launched directly to play a selected radio mix. A note symbol (as shown above in the artist pivot or below in radio mixes) is shown in Music Explorer whenever interaction with a specific item takes the user into the Nokia MixRadio app. Below are shown the genre and radio mix sub pages, opened when the user selects a genre or mix from genres or mixes on the main panorama.

Figure 3. Genre and radio mix sub pages

Music Explorer is a quite simple application that implements the Model-View-ViewModel (MVVM) design pattern. MVVM is a way to separate data from the user interface and is widely used in Windows Phone 8 application development. In the app, the Models (data) are C# classes, and the View (user interface) is a collection of PhoneApplicationPages filled with Windows Phone 8 controls. The MainViewModel, which is the link between the Models and the View, is also a C# class.
It is set as the DataContext for all the application pages to enable data binding. To learn more about MVVM, please turn to this MSDN article and the list of links in the See Also section at the bottom of that page. A more detailed architectural description of Music Explorer can be found in the code examples.

Here's a table of the most significant APIs used by Music Explorer as well as their capability requirements:

The following services provided by the Nokia MixRadio API are used in Music Explorer:

In order to use the Nokia MixRadio API, the application must have a unique app ID and app token (unless using launchers only). These can be obtained by visiting the API registration page and requesting credentials for the Nokia MixRadio API. The app ID for Music Explorer is specified in MusicApi.cs:

namespace MusicExplorer
{
    ...
    public class MusicApi
    {
        // Constants
        public const string MUSIC_EXPLORER_APP_ID = "music_explorer_private_app_id"; // real app id not shown here
        ...
    }
}

A Windows Phone 8 application must also add a reference to the actual Nokia MixRadio API client, which in turn requires a reference to the JSON.Net library to be added to the application. Instructions on how to add the necessary references to the solution can be found in README.md in the Music Explorer project.

Before making any requests to the Nokia MixRadio API, Music Explorer checks for Nokia MixRadio availability in the current location, which in turn is resolved using the Windows Phone Location API. By default, the Nokia MixRadio API uses (and checks) the phone's Region settings with each API call made. This allows users to see relevant, and partly localized, country specific information. This can be seen in the image of the main panorama at the top of the page. The main panorama's who's hot item uses data binding to display the contents of TopArtists in the user interface. As the process with other requests (new releases, genres, etc.) is almost identical to requesting the 10 most popular artists, they are not described in this article; see the project source of Music Explorer.

The following method from Music Explorer's MusicApi shows how simple it is to launch the Nokia MixRadio application to play an artist mix. The Nokia MixRadio app can be launched just as easily into a product or artist state using the Nokia MixRadio API. Some of the launcher methods in the Nokia MixRadio API require the unique ids of artists, mixes and products; these ids are received in responses from the Nokia MixRadio API's other services.

using Nokia.Music;
using Nokia.Music.Tasks;
...
namespace MusicExplorer
{
    ...
    public class MusicApi
    {
        ...
        public void LaunchArtistMix(string artistName)
        {
            ...
            PlayMixTask task = new PlayMixTask();
            task.ArtistName = artistName;
            task.Show();
        }
        ...
    }
}

The list of artists in the main panorama's favourites item is created in the LoadData method of MainViewModel; comments in the code describe the steps taken to create it. After creating the list of local favourite artists, an artist search is made using the Nokia MixRadio API to get a list of similar artists for each favourite artist. This list is then ordered based on how many times a specific artist is found to be similar to the favourites. In addition, artists with tracks stored locally on the device are removed from the list, as the idea is to present and promote interesting new artists to the user instead of artists which the user is already familiar with.

Last updated 19 March 2014
http://developer.nokia.com/resources/library/Lumia/nokia-mixradio-api/utilizing-nokia-mixradio-api-music-explorer-example-application.html
Classic ASP Include Files in ASP.NET

There are two primary reasons for include files in classic ASP. One is to reuse some kind of templated UI artifact (often containing dynamic functionality), such as a navigation menu, header, footer, news panel or other widget. The other is to make common subroutines or procedures available across selected pages within the application. There is a third reason, and that is to actually include the static content of a file as part of the output within a page. While this could be considered similar to the first reason, the ASP.NET way of dealing with it is different, so I'll look at that last.

The first thing to understand is the difference between the way that classic ASP and ASP.NET are processed. When an asp file is processed by the scripting engine, all code is executed from top to bottom. If the processor meets an include directive, the contents of the included file are dynamically "melded" with it at run time, and treated as part of the host file. An ASP.NET Web Form, on the other hand, is compiled: the .aspx file is just an easy way to create a new class in your application - the class being of type System.Web.UI.Page. These classes are not compiled on each use, so adding new code at that point is not possible. References to external code need to be added prior to compilation.

Site-wide Layout

The following snippet is typical among classic ASP files:

<body>
<div id="head"><!--#include file="includes/head.asp"--></div>
<div id="nav"><!--#include file="includes/nav.asp"--></div>

The site-wide layout is defined by minimal html with the head, navigation, footer and other components injected via include files. This basic template needs to be copied and pasted into every page that makes use of the template. If a change needs to be made to the structure, that change needs to either be copied across every page, or included in one of the existing includes, perhaps as an include itself. ASP.NET provides Master Pages to cater for this, which allow all the template structure to be defined in one file - a .master file. The default Master Page includes two ContentPlaceHolder controls:

<%@ Master Language="C#" %>
<html>
<head runat="server">
    <asp:ContentPlaceHolder ID="HeadContent" runat="server" />
</head>
<body>
    <form runat="server">
        <asp:ContentPlaceHolder ID="MainContent" runat="server" />
    </form>
</body>
</html>

ContentPlaceHolders are areas that are "filled" dynamically with content at runtime. The content comes from web forms that make use of the Master Page. In VS 2010, the option to create one of these is clearly marked. And the code that you get looks like this:

<%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master" %>
<asp:Content ID="Content1" ContentPlaceHolderID="HeadContent" runat="server">
</asp:Content>
<asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">
</asp:Content>

If you attempt to place any markup or controls outside of the Content controls (which are automatically married up to their respective ContentPlaceHolder controls in the Master Page), you will receive an error. You can add as many ContentPlaceHolder controls as you like on the Master Page, but some thought should go into the site structure at the outset. Once created, child pages will only acquire new Content controls manually. Within the Master itself, you will define the structure of your template:

<%@ Master Language="C#" %>
<html>
<body>
    <form runat="server">
        <div id="wrapper">
            <div id="header">Some head content</div>
            <div id="nav">Some navigation content</div>
            <div id="leftcol">Some items that go into the left column</div>
            <div id="main">
                <asp:ContentPlaceHolder ID="MainContent" runat="server" />
            </div>
            <div id="footer">Some foot content</div>
        </div>
    </form>
</body>
</html>

Now, every web form that is added with a Master Page will automatically inherit the structure defined in the Master Page. This means that changes to the site layout only need to be done in one place. For this reason, I always add a Master Page as the first step in creating a web site - even if it may not need a common look and feel to start with.
One day it might, and it is simple enough to put one in place from the start.

UI Widgets

OK. Now we need to look at filling in some of the content, which is where we started with the classic ASP snippet above. In that, the head content and the navigation are drawn from included files. There is nothing to stop you from adding these artifacts directly to the Master Page, just as I have added the text "Some head content" in the code above. From there they will be available wherever the Master Page is used, and this is how many people work with Master Pages. But let's imagine that you have an advertising spot on some, but not all, pages. In this, you may want to optionally display an image or a flash banner, and have some code that decides which advert to pull from the image folder or database depending on the Url of the page. This is where Web User Controls come into play.

A User Control is a self-contained "pagelet" (which is what they were initially referred to as) which consists of some html perhaps, server controls and a code-behind file in which associated logic can run. They have a .ascx file suffix. Once created, they can literally be dragged from the Solution Explorer on the right hand side of Visual Studio, and dropped onto the designer where they are to appear within a page. They can even be added to existing User Controls. Here's a simple control that contains an Image control and a Literal:

<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="MainControl.ascx.cs" Inherits="WebFormsTests.MainControl" %>
<asp:Image ID="Image1" runat="server" />
<br />
<asp:Literal ID="Literal1" runat="server"></asp:Literal>
<br />

I've added some code in the Page_Load event in the code behind to set the ImageUrl property of the Image control and the Text property of the Literal control to display the file name of the currently executing page:

using System;

namespace WebFormsTests
{
    public partial class MainControl : System.Web.UI.UserControl
    {
        protected void Page_Load(object sender, EventArgs e)
        {
            Image1.ImageUrl = "~/Content/images/comment.png";
            Literal1.Text = "Main control says the current page is " +
                System.IO.Path.GetFileName(Request.Url.ToString());
        }
    }
}

And here's another user control, which simply gives the date:

<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="SubControl.ascx.cs" Inherits="WebFormsTests.SubControl" %>
<%= "Sub control says the date is " + DateTime.Now.ToShortDateString() %>

I drop this onto the previous control, which I then drop onto the Content area within the ChildPage.aspx I created earlier. When I run the whole thing in the browser, you can see the Master Page, child page, and user controls all working together (please excuse the garish css):

Reusable routines and procedures

As a classic ASP developer, I collected several libraries of code over time which had all sorts of utility code within them - string functions that determined if a user-supplied email address was in a valid format, something that generated html select lists (DropDowns) from databases, another one that tidied up html from rich text boxes, etc. I also had some pieces of code that needed to run on every page, such as connection initialisation and closing. The .NET framework offers a number of ways to manage these kinds of things in a cleaner manner than classic ASP.

We'll take a look at the concept of a Base Page first. Since .NET is fully object oriented, it supports inheritance. I already mentioned that each page you create is a class that inherits from System.Web.UI.Page. As a result, it inherits all the properties, methods and events of System.Web.UI.Page. A Base Page is a kind of shim within this inheritance hierarchy. And it's simple to create.
Just add a class to the project, within the App_Code folder, called BasePage.cs (or .vb if that's your language). You don't have to call it BasePage - you can call it anything you like, but BasePage is the convention:

namespace WebFormsTests
{
    public class BasePage : System.Web.UI.Page
    {
    }
}

Notice one thing in particular - the bit after the colon. This new class inherits from System.Web.UI.Page, and as such, it inherits all the methods, properties, events etc of the Page class just as a web form does. If you make your web forms inherit from BasePage instead of Page, they will still acquire all these properties, members, events etc, but they now get them from BasePage, which in turn gets them from Page. However, you can now add code to BasePage, and ensure that any page that inherits from it will cause that code to run. We just need to modify ChildPage to inherit from BasePage:

namespace WebFormsTests
{
    public partial class ChildPage : BasePage
    {
        protected void Page_Load(object sender, EventArgs e)
        {
        }
    }
}

You could, for example, add a Page_Load event to BasePage, and use that to log page views:

public class BasePage : System.Web.UI.Page
{
    void Page_Load(object sender, EventArgs e)
    {
        string connect = "My Connection String";
        using (SqlConnection conn = new SqlConnection(connect))
        {
            string sql = "INSERT INTO PageViews (DateVisited, Page) VALUES (GetDate(), @Page)";
            SqlCommand cmd = new SqlCommand(sql, conn);
            cmd.Parameters.AddWithValue("@Page", this.Title);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}

Other uses for the BasePage include setting the page title, or some meta content such as keywords or a description. If you want to run some code that times the total execution of all your pages, this is also the kind of place to put that.

Utility Functions

So far I have looked at methods that can be included in every page, because they are required to run as part of every, or a selection of, pages. The other types of functions are utility functions that need to be available to any page in the site, but might only be used by a few of them. A typical example is a method that replaces newline characters with their HTML counterpart. This type of method is independent of any object, so the best way to create it is as a static (shared in VB) method, and to place it in a static class:

public static class Utils
{
    public static string LineBreaks(string s)
    {
        return s.Replace(Environment.NewLine, "<br />");
    }
}

You would place this in the App_Code folder if you have created a New Web Site, where it will be compiled, and the Utils class will be visible to your code behind files. If you have created a Web Application, you will not have an App_Code folder. But if you place the file in any folder and then click the file to select it, hit F4 to bring up the Properties panel and change the Build Action to Compile, you should be able to reference it in code behind. If you wanted the method to act on the user input provided via a TextBox control, it would look like this:

string story = Utils.LineBreaks(txtStory.Text);

However, a neater way to do this kind of thing if you are working with ASP.NET 3.5 or greater is to use Extension methods. These allow you to effectively "extend" existing types with their own behaviour. The type that the above method works on is a System.String.
The previous method needs a tiny addition to become an extension method - the this keyword before the type parameter (OK, I changed the name of the method too...):

public static string WithLineBreaks(this string s)
{
    return s.Replace(Environment.NewLine, "<br />");
}

Now, whenever you hit the dot after a string, you will see a new method added in Intellisense. When you use this extension method, the result looks like this:

string story = txtStory.Text.WithLineBreaks();

Now you can see why I changed the name of the method. It makes it much easier to understand what the code does.

Including a File

There may still be times when you want to actually include the contents of a file within the output of an ASP.NET page. The way to do this in classic ASP was to use the FileSystemObject and read a Stream from it. You can still use a StreamReader object in .NET to do essentially the same thing, or more simply, use a method of the Response object that has been added - Response.WriteFile(). This method should be used inline within the aspx file:

<pre>
<% Response.WriteFile("App_Code/Utils.cs"); %>
</pre>

The sample above grabs the Utils.cs file that we created for the extension method and writes its content to the page within <pre> tags:

There are three things to notice here. First, do you see the odd formatting of the code compared to the snippet earlier? There's a line break that's crept in. That goes to illustrate that html or client-side script contained in the file that is included is rendered as exactly that, and treated as such by the browser. The second thing to note is that when you place Response.WriteFile() in a code-behind file, the content is prepended to the http output, and will appear before any other page markup. Consequently, it should only really be used within the aspx file. Finally, the argument accepted by the Response.WriteFile() method is a string. This means that you can set the value dynamically, which is something you could not do with the include file directive within classic ASP.

Currently rated 4.45 by 22 people

Date Posted: Wednesday, June 23, 2010 7:43 AM
Last Updated: Tuesday, September 6, 2011 6:42 AM
Posted by: Mikesdotnetting
Total Views to date: 5096

August 3, 2010 4:32 AM from Tara
Excellent article to bridge the gap between versions.

Wednesday, April 18, 2012 4:29 PM from Mark
Thanks for the time and effort to put this article together. I am a long time ASP developer and have been looking for documents such as this one that bridges some of the more complex topics between ASP and ASP.Net. Great job and very helpful!
http://www.mikesdotnetting.com/Article/144/Classic-ASP-Include-Files-in-ASP.NET
Hi and welcome to Just Answer! For tax years beginning in 2010, eligible small business credits (ESBCs) offset both regular tax and the alternative minimum tax (AMT). Any unused ESBCs are carried back five years and are used to offset both regular tax and AMT in the carryback years. That is a major change in the tax law. A corporation can elect to carry an NOL forward instead of first carrying it back. Make this election by attaching a statement to a timely filed tax return (including extensions) for the tax year of the NOL, indicating that the corporation is electing to relinquish the entire carryback period under section 172(b)(3) for any NOLs incurred in that tax year. See more details in the instructions for Form 1139. The new law rule doesn't supersede the old law rule, but it temporarily provides additional tax benefits for qualified corporations. Any such loss not applied in the preceding years can be carried forward up to 20 years.

Actually, my question deals with a net operating loss that occurred in fiscal year 1994. The old law was a three (3) year carryback and a fifteen (15) year carryforward. It is my understanding that in 1997, the law changed to a three (3) year carryback and a twenty (20) year carryforward. The import of my question is: does the new law (any and all current rules) supersede the 1994 carryforward rule, thus making an NOL that occurred in fiscal year 1994 available for twenty (20) years?

Hi. The rule changed for 1997 - see here: For an NOL occurring in a tax year beginning after August 5, 1997, the carryback period is reduced to 2 years and the carryforward period is increased to 20 years. However, the carryback period remains 3 years for the part of an NOL that: 1) is from a casualty or theft, or 2) in the case of a farm business or other qualified small business, is attributable to a Presidentially declared disaster. These changes do not affect NOLs from previous years. You need to treat NOLs differently based on the year in which they were generated.

Can you provide me a link to the publication that discusses the rule for the 1994 net operating loss, and the rule for the 1997 change, along with the current rule?

The NOL year is the year in which the NOL is generated. For the 1997 tax year - see IRS publication 539, page 6, "When To Use an NOL". For the 1994 tax year - see the same publication, page 3: generally, you carry back an NOL to the 3 tax years before the NOL year (the carryback years), and then carry forward any NOL remaining for up to 15 years after the NOL year (the carryforward years).

OK...one last item. Can you recommend an accountant or tax attorney with this type of related experience for some private consultation? If so, please email their contact information to me. XXXXX@XXXXXX.XXX

Unfortunately, Just Answer policies prevent experts from contacting customers in any way other than via posts on this web site. As you see, your email was automatically removed, because all posts become available to the general public. As long as you are not in litigation, you do not need a tax attorney. Any local accountant will be able to accomplish tasks related to NOLs. I do not think you need any specific recommendations. Sorry if you expected a different answer.

That is ok...actually I will use this service again as you answered my questions directly. Good job...
http://www.justanswer.com/tax/5er7r-carry-forward-period-corporation-net-operating-loss-irs-rules.html
This article is Part 1 of the Building iOS Interfaces series, which tackles the how and why of implementing iOS designs without prior native programming experience–perfect for Web designers and developers. You can find the other articles here: Part 2 – Part 3.

Designing for the Web has taught us the value of getting designers to write HTML/CSS and its role in improving the overall quality of the work. Waterfall workflows involving static mock-ups and hundreds of megabytes of sliced assets are long gone in the modern Web design process. Unfortunately, the same cannot be said for mobile design. A large number of designers find Xcode daunting and prefer to use other tools to prototype their designs before handing them over to a developer to implement them. Platform makers and developers are not making this task any easier; the documentation and the tools are too developer-centric and the community at large considers UI programming part of the developer's job. It's time to change that.

Xcode? Swift? UIKit? Oh My!

The most common question I hear when I talk to designers learning iOS is "Where do I start?". It's a valid question given the relatively high number of concepts involved in the process. The inability to get a definitive answer to this question often leads to giving up before starting. Nothing is more frustrating in learning than dealing with unknown unknowns—things we don't know that we don't know. This introduction aims to address that problem before moving on to more specific topics later.

Frameworks

Let's start with the basics. The iOS operating system is a closed-source system built by Apple to power every single iPhone, iPad, and iPod Touch out there. It borrows heavily from its grandfather OS X, which has been running on Macs for more than a decade. In turn, the mobile OS heavily influences watchOS (Apple Watch) and tvOS (Apple TV), the newest additions to Apple's developer toolbelt.

One of the most basic building blocks of iOS is frameworks. A framework is a self-contained collection of APIs that are intended to solve a specific problem such as controls, audio, video, animation, and so forth. Frameworks can reside within other frameworks, resulting in a hierarchy that can easily get confusing if one is not paying attention. The top-level framework in iOS is called Cocoa Touch, in contrast to Cocoa for OS X. Cocoa Touch hosts all the frameworks you will be dealing with as a UI designer, including Foundation, UIKit, and Core Animation.

How will this affect my day-to-day work? You will need to import some of these frameworks in your files to have access to certain APIs. For iOS, UIKit is going to be the most common import.

Languages

Now that we have identified the frameworks we are going to use, we need a way to communicate with them. That's where the programming language comes in. Throughout the years, developers have been able to use languages such as C, Objective-C, and even C++ to call the built-in APIs. In 2014, Apple introduced Swift as a modern take on their official programming language, and most of their frameworks were updated to take advantage of it.

How will this affect my day-to-day work? You will need to get familiar with Swift and have some basic understanding of how object-oriented programming works. These may sound daunting at first, but they're not in actuality; not if you've dealt with CSS centering and JavaScript, anyway.
Here is an example of a Swift file:

import UIKit

class MyButton: UIButton {
    override func awakeFromNib() {
        super.awakeFromNib()
        backgroundColor = UIColor.yellowColor()
    }
}

The code snippet above defines a new button subclass and makes its default background color yellow. Concepts introduced here include classes and inheritance and are beyond the scope of this introductory article, but are nonetheless essential to take full advantage of the framework APIs.

Software Tools

So far we've got frameworks and a language to communicate with them. What we need now is a way to bring these ingredients together to build our UI. Enter Xcode. Xcode is, for better or worse, the only officially-supported editor for building iOS apps. It's feature-packed and includes all the first-party frameworks you need to get started. The name is in fact misleading, since the part where you'll be spending most of your time doesn't even require writing code: Interface Builder.

Interface Builder allows you to build interfaces in a visual way that involves dropping different controls into a canvas and styling them—to a certain degree—without having to write any code. Not everything works like your favorite graphic editor, but the similarities are there. A large part of what we do when designing interfaces is layout. For that purpose, Apple has built Auto Layout, a constraint-based system that describes the relationships between views rather than their absolute frames.

How will this affect my day-to-day work? You will be spending most of your time between your graphic editor and Xcode. Also, Auto Layout is going to be a major part of everything you are likely to touch.

Recap

We've gone over the tools and technologies that you need to get familiar with in order to feel more comfortable implementing your iOS designs. You might still be wondering about the order in which you should tackle these—a legitimate question that, if left unanswered, might hinder the learning process, or worse, stop it.

Let's get the bad news out of the way first:

- Getting familiar with all the concepts and tools introduced above is required to fully unlock the potential of the platform.
- With the exception of Swift, it's hard to disassociate these technologies from each other in the learning process.

Now for the good news:

- Basic Swift is a low-hanging fruit to go for.
- The best way to learn about the frameworks is to use them.
- The learning order matters less when you have a clear goal in mind.
- The frameworks that you will be using are very well documented.

All things considered, one of the most fundamental things in learning is understanding the why. Alas, most of the resources out there are cookie-cutter recipes that focus entirely on the how. Hopefully, this series will set you on the right path to address this very problem.
https://robots.thoughtbot.com/building-ios-interfaces-swift-primer
We have been looking at all the parts that make up a sample ASP.Net MVC application. Previously we have discussed the database schema of our application as well as the implementation of the Repository Pattern with filters on that schema. If you haven't been following this series of posts you might want to read parts 1 and 2 before continuing.

This time we are looking at Url Routing, which lets your application expose clean, friendly urls instead of query strings. An example would be yourdomain.com/Products/Show/3. The routing map would take yourdomain.com/Product/Show/3 and map it to its real url of yourdomain.com/default.aspx?controller=Product&action=Show&id=3. Like I said, this is a very common technique for websites and web applications so I won't spend any more time explaining what it is and will jump right into showing you how it works in ASP.Net MVC.

Url Routing is enabled by default when you create a new MVC application from the Visual Studio template. Visual Studio creates the necessary sections in your Web.config file and defines the default route table in the Global.asax file. In the Web.config file there are 4 sections related to Url Routing; these sections make Url Routing work, so make sure you leave them alone. In Global.asax the default route table code looks like this:

public class GlobalApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // MapRoute takes the following parameters, in order:
        // (1) Route name
        // (2) URL with parameters
        // (3) Parameter defaults
        routes.MapRoute("Default"
            , "{controller}/{action}/{id}"
            , new { controller = "Home", action = "Index", id = "" });
    }

    public void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}

When our application is first loaded the Application_Start method is called, which calls our RegisterRoutes method, which sets up the route table. The RegisterRoutes method does a few things that we are going to look at. First off, it defines a route that will be ignored. Any request that contains a .axd file will be ignored, which means the url will be left alone and will be processed as it is. The second part is the MapRoute method that takes three arguments. The first argument is the route name; in this case the route is named Default. The second argument is the route definition. This definition will split an incoming request into 3 parts: the controller, the action, and the id. The third argument is a set of defaults. In case there is not a route in the request (i.e. the request is the root of the domain, yourdomain.com/) then these defaults will be used.

What does this mean? If your application receives a request like yourdomain.com/Product/Show/45, the route table will tell your application to use the Product controller, the Show action within the Product controller, and use 45 as the id. Simple, right? That is pretty much all there is to Url Routing. The default map route will probably work for most application needs, although if you require a different map route you can define your own custom route. A trivial example of a custom route would be as follows.

routes.MapRoute("Default"
    , "{id}/{action}/{controller}"
    , new { controller = "Home", action = "Index", id = "" });

In this route all I did was change the order so the id comes first and the controller is last so, using the sample url from above, our request would look like yourdomain.com/45/Show/Product. In the next article I'll take a look at controllers and how they manage the flow of your application.
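As a slightly less trivial sketch (the route, controller and parameter names below are invented for illustration and are not part of the sample application), you can also mix literal segments and constraints into a custom route. Register it before the default route so it gets the first chance to match:

// Hypothetical example: map /Archive/2009/01 to ArchiveController.Month(year, month).
routes.MapRoute("BlogArchive"
    , "Archive/{year}/{month}"                                         // literal first segment
    , new { controller = "Archive", action = "Month", month = "01" }   // defaults
    , new { year = @"\d{4}", month = @"\d{2}" });                      // constraints: digits only

With this in place a request such as yourdomain.com/Archive/2009/01 is routed to the Month action on the Archive controller with year=2009 and month=01, while everything else still falls through to the default route.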
To make sure you don’t miss a beat grab the Dev102 RSS feed. You may also be interested to follow what I have to say on my web development blog or grab my RSS feed. We pay for user submitted tutorials and articles that we publish. Anyone can send in a contribution.

webtopus Said on Jan 7, 2009: Thanks for a good article. The custom map route is definitely useful. One problem I notice is that the nice-looking URL must be in a specific site.com/controller/action/id format, or some order of that sequence using custom map routes. What if your site has urls with varying parameters or paths? It seems like a rigid solution.

Goku Said on Jan 11, 2009: A fellow developer and I are having issues with the ASP.NET 3.5 MVC Preview 5 and the display of images (.jpg/.gif) inside a view (using image tags), plus the loading of .css or .js files. Can someone from this site provide us some direction regarding something we may have missed either in the Web.config, or in the VS2008 IDE that is blocking these items from loading, or a means of determining in a log where the failure is occurring? We have tried both relative and absolute path-ing to no avail. We are using Windows integrated security, and the files are set to be Read and Executed in IIS 5/6 by the group using the application. Thanks for any help you can give us. We really appreciate your time and efforts in helping us.

Harendra chauhan Said on Jan 27, 2009: ?
http://www.dev102.com/2009/01/07/working-with-aspnet-mvc-part-3-url-routing/
crawl-002
refinedweb
872
65.83
The this pointer holds the address of the current object; in simple words you can say that the this pointer points to the current object of the class. Let’s take an example to understand this concept. C++ Example: this pointer. Here you can see that we have two data members num and ch. In the member function setMyValues() we have two local variables with the same names as the data members. In such a case, if you want to assign the local variable values to the data members, you won’t be able to do so unless you use the this pointer, because the compiler won’t know that you are referring to the object’s data members unless you use the this pointer. This is one of the examples where you must use the this pointer.

#include <iostream>
using namespace std;

class Demo {
private:
    int num;
    char ch;
public:
    void setMyValues(int num, char ch){
        this->num = num;
        this->ch = ch;
    }
    void displayMyValues(){
        cout<<num<<endl;
        cout<<ch;
    }
};

int main(){
    Demo obj;
    obj.setMyValues(100, 'A');
    obj.displayMyValues();
    return 0;
}

Output:
100
A

Example 2: function chaining calls using this pointer. Another example of using the this pointer is to return a reference to the current object so that you can chain function calls; this way you can call all the functions for the current object in one go. Another important point to note in this program is that I have incremented the value of the object’s num in the second function, and you can see in the output that it actually incremented the value that we set in the first function call. This shows that the chaining is sequential and the changes made to the object’s data members are retained for further chaining calls.

#include <iostream>
using namespace std;

class Demo {
private:
    int num;
    char ch;
public:
    Demo &setNum(int num){
        this->num = num;
        return *this;
    }
    Demo &setCh(char ch){
        this->num++;
        this->ch = ch;
        return *this;
    }
    void displayMyValues(){
        cout<<num<<endl;
        cout<<ch;
    }
};

int main(){
    Demo obj;
    //Chaining calls
    obj.setNum(100).setCh('A');
    obj.displayMyValues();
    return 0;
}

Output:
101
A
https://beginnersbook.com/2017/08/cpp-this-pointer/
CC-MAIN-2018-30
refinedweb
346
55.17
How do I access fields of multidimensional array stored in 1-d array?

March 15, 2011 - 04:19 I need to access fields of a multidimensional array stored in a 1-d array. The array is created at runtime and may have up to 15 dimensions. So to access, say, a 2d field at (x, y) I would write offset = x + y * w; For 3d (x, y, z): offset = x + y * w + z * w * h; How do I compute it for n dimensions? BTW, can anyone explain to me what a dope vector is? And how to implement it in Java?

No replies after 9 days... What's wrong with this forum (and java)? In the Adobe forum there is at least one guy (employee) who gives answers (not always useful answers though, but it is definitely better than nothing). Well, after some thinking about this problem, I came to the following code:

public class MDArray {
    int [] dims;
    int [] multipliers;

    protected MDArray(int[] dims) {
        this.dims = dims;
        multipliers = new int[dims.length];
        for(int i = 0; i < dims.length; i++) {
            // The multiplier for dimension i is the product of the sizes of all
            // lower dimensions, matching offset = x + y*w + z*w*h from the question.
            int m = 1;
            for(int j = 0; j < i; j++) {
                m *= dims[j];
            }
            multipliers[i] = m;
        }
    }

    int getOffset(int [] coords) {
        int offset = 0;
        for(int i = 0; i < multipliers.length; i++) {
            offset += coords[i] * multipliers[i];
        }
        return offset;
    }
}
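A dope vector, incidentally, is essentially this kind of descriptor: a small record of the dimensions (and the strides derived from them) kept alongside the flat data, which is exactly what MDArray holds. Below is a hypothetical usage sketch of the class above; the dimension sizes and coordinates are made-up values for illustration, and it assumes the calling code lives in the same package as MDArray (the constructor is protected):

// A 4 x 3 x 2 array stored in a flat int[24]
int[] dims = new int[] {4, 3, 2};
MDArray layout = new MDArray(dims);
int[] data = new int[4 * 3 * 2];

// offset of (x=1, y=2, z=0) is 1 + 2*4 + 0*4*3 = 9
int offset = layout.getOffset(new int[] {1, 2, 0});
data[offset] = 42;
System.out.println(data[layout.getOffset(new int[] {1, 2, 0})]); // prints 42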
https://www.java.net/forum/topic/jdk/java-se/how-do-i-access-fields-multidimensional-array-stored-1-d-array
CC-MAIN-2015-40
refinedweb
240
67.65
sbt plugins roundup. sbt-dirty-money sbt-dirty-money is a plugin to clean Ivy cache somewhat selectively (anything that includes organization and name under ~/.ivy2/cache). It was such a simplistic 25-line implementation, but clean-cache and clean-local tasks continue to be useful for me. For example, if I am unsure if a plugin that I am developing if being cached by a test hello project or not, I run both clean-cache and clean-local from the plugin project and reload the hello project to see it not resolving the plugin. If it can't resolve it that's good because it's not grabbing it from some magical place. sbt-buildinfo sbt-buildinfo is a plugin I've been meaning to write for a while. It's a plugin to generate Scala source from your build definition. The purpose mainly is for a program to be self-aware of its own version number, especially when they are conscripted. In the past, I have whipped out ad hoc sourceGenerators to generate an object that contains version number, but making it a plugin made sense since others may needs it too. By extracting the values from state sbt-buildinfo is able to grab generate Scala source containing arbitrary keys. Add the following to the build.sbt: buildInfoSettings sourceGenerators in Compile <+= buildInfo buildInfoKeys := Seq[Scoped](name, version, scalaVersion, sbtVersion) buildInfoPackage := "hello" and it generates: package hello object BuildInfo { val name = "helloworld" val version = "0.1-SNAPSHOT" val scalaVersion = "2.9.1" val sbtVersion = "0.11.2" } sbt-scalashim sbt-scalashim is a plugin that generates shim for Scala 2.8.x to use 2.9.x's sys.error. Number of people in the Scala community has been raising the awareness of cross publishing the libraries to support 2.8.x and 2.9.x. I felt like one of the reasons people abandon 2.8.x is the source-level incompatibility due to sys.error, so I wrote a plugin to absorb the differences. Because a packaged class cannot import things from the empty package it requires you to add import scalashim._ in your code, which is not ideal. But you can use sys.error in 2.8.0. The latest version also support things like sys.props and sys.env. sbt-man sbt-man is also another plugin I've been wanting to write. By the way, most of these plugins are written over a weekend, often in a single stretch of a late night hacking. For a while, I've been reading Programming Clojure a few pages at a time. One thing that inspired me was the doc function, which prints documentation for a given function. user=> (doc doc) ------------------------- clojure.core/doc ([name]) Macro Prints documentation for a var or special form given its name nil Reading this made me realize that I've been feeling awkward about using a web browser to lookup function signatures for standard lib functions. So I wrote a plugin that adds man command last weekend: > man Traversable /: [man] scala.collection.Traversable [man] def /:[B](z: B)(op: (B ⇒ A ⇒ B)): B [man] Applies a binary operator to a start value and all elements of this collection, going left to right. Note: /: is alternate syntax for foldLeft; z /: xs is the same as xs foldLeft z. Note: will not terminate for infinite-sized collections. Note: might return different results for different runs, unless the underlying collection type is ordered. or the operator is associative and commutative. This is powerd by Scalex. I just ripped off the cli implementation with a few lift-json adjustments added to it. Other plugins There are also number of cool plugins written by others. 
Since the Sonatype migration, Josh's xsbt-gpg-plugin has been indispensable. Josh also maintains xsbt-ghpages-plugin and sbt-git-plugin. All my projects have Doug's ls-sbt so I can register it with ls.implicit.ly. Doug also maintains np and coffeescripted-sbt. A plugin I put in my global plugins.sbt recently is Stephen Wells's sbt-sh. This executes the command outside of sbt, so I can do something like: > sh git status from sbt shell. I also want to mention Mathias's sbt-revolver. It runs your application in the background of sbt shell as a forked JVM, and keeps track of it. Because it's tracked, you can run re-start and it'll take down the existing instance and start it again. It automatically takes advantage of JRebel, which is free for Scala instances. For the development server task for sbt-appengine I tried to base it on top of sbt-revolver to take advantage of hot reloading etc. There are probably other interesting uses for it. sbt 0.12 I'm looking forward to sbt 0.12 for many reasons, but one of them is the binary compatibility of the plugins across point releases. This would lessen the burden on plugin authors to keep publishing jars every time sbt comes out. I know you can use source dependencies, but because they aren't like the normal Ivy dependencies the setting is difficult and it doesn't get written in pom.xml. 0.12 also adds Scala-like string literal parser I contributed. This allows the tasks and commands to accept arguments including white spaces, which should make some of the commands more useful.
http://eed3si9n.com/node/54
CC-MAIN-2019-18
refinedweb
902
66.44
Thank you for submitting feedback on Visual Studio 11 and .NET Framework. Your issue has been routed to the appropriate VS development team for review. We will contact you if we require any additional information.

I'm using a lambda expression to specialize a class that I don't want to sub-class, but I'm having difficulty in compiling certain expressions, and the error messages are not useful. The code that reproduces the compilation problem is the following:

#include <iostream>
#include <functional>
using namespace std;

class M
{
public:
    function<int (int r, int c)> fun;
    M()
    {
        fun = [=] (int r, int c) -> int
        {
            return r * 2 + c;
        };
    }
};

void foo(M * A)
{
    int test1 = A->fun(1,1);
    cout << test1 << "\n";

    auto f1 =
        [=]()
        {
            int test = A->fun(2,2);
            cout << test << "\n";
        };
    f1();

    int * a = (int*)malloc(sizeof(int));
    auto f2 =
        [=] ()
        {
            *a = 2;
        };
    f2();
    cout << *a << "\n";

    //auto f3 =
    //    [=] ()
    //    {
    //        int test = A->fun(3,3);
    //        cout << test << "\n";
    //        *a = test;
    //    };
    //f3();
}

int main()
{
    M * A = new M();
    foo(A);
    return 0;
}

The code as it is shown compiles and works fine. The problem occurs when I try to uncomment the block of code defining f3 and making the call to it. This code illustrates the problem because the lambda expressions and calls in the definitions of f1 and f2 work fine on their own, but when combined, the expression does not compile. I'm using Visual Studio 11 C++ Beta on Windows 7 x64. I haven't tried this yet on the just-released Release Preview.

A fix for this issue has been checked into the compiler sources. The fix should show up in the next major release of Visual C++. To work around the issue in VS2012 or earlier versions, you can change the name of the captured variable. Xiang Fan Visual C++ Team
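The reply does not spell out which variable to rename, but a plausible reading (my assumption, not stated in the report) is that capturing both the pointer named A and the pointer named a in the same lambda is what trips the compiler, so renaming either one sidesteps it. A hypothetical sketch of the f3 block with the parameter renamed:

// Workaround sketch (assumption): rename the captured 'M * A' parameter to 'obj'
// so the lambda no longer captures both 'A' and 'a'.
void foo_workaround(M * obj)
{
    int * a = (int*)malloc(sizeof(int));
    auto f3 =
        [=] ()
        {
            int test = obj->fun(3,3);
            cout << test << "\n";
            *a = test;
        };
    f3();
    cout << *a << "\n";
    free(a);
}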
https://connect.microsoft.com/VisualStudio/feedback/details/746135
CC-MAIN-2015-48
refinedweb
301
66.57
In this tutorial we are going to check how to send a HTTP GET request using a micro:bit board and a UART OBLOQ. We will be using MicroPython to program the micro:bit board. Introduction In this tutorial we are going to check how to send a HTTP GET request using a micro:bit board and a UART OBLOQ. We will be using MicroPython to program the micro:bit board. The connection diagram between both devices can be seen on this tutorial. We will setup a very simple Python Flask server, which will be the destination of our HTTP GET request. You can check here a Flask “Hello World” tutorial. The Flask code The Flask code we are going to write on this section will be very simple and similar to this previous tutorial. We will start by importing the Flask class, so we can configure our server, and the request object, which allows to obtain both the request headers and body. In our case, since we will be receiving a GET request, it should not have a body. So, we will use the request object to access the headers of the request. from flask import Flask, request Then we will create an object of class Flask. We will need this object to configure the server. More precisely, we will use it to configure the route that will be listening to incoming GET requests. app = Flask(__name__) We will setup a single route which will listen, as already said, to GET requests. We will call our route “/get“. @app.route('/get', methods = ["GET"]) def get(): In the implementation of our route handling function, we will simply print the headers of the request and then return a “Received” string back to the client, for testing purposes. print(request.headers) return 'Received' Finally, we will call the run method on the Flask object, in order for the server to start listening to incoming requests. We will pass the value “0.0.0.0” in the host parameter, thus indicating the server should listen on all available IPs of the machine. As port we will use the value 8090. app.run(host='0.0.0.0', port= 8090) The final Flask code can be seen below. from flask import Flask, request app = Flask(__name__) @app.route('/get', methods = ["GET"]) def get(): print(request.headers) return 'Received' app.run(host='0.0.0.0', port= 8090) The micro:bit code We will start the code by importing the uart object and the sleep function from the microbit module. from microbit import uart, sleep Then, like we did on the previous tutorial, we will define a readUntil function that reads characters from the serial port until a specific character is received. As output, the function returns the content read until the character was found. This function will help us handling the response to commands sent to the UART OBLOQ. def readUntil(uartObject, termination): result = '' while True: if uartObject.any(): byte = uartObject.read(1) result = result + chr(byte[0]) if chr(byte[0]) == termination: break sleep(100) return result After this, we will initialize the serial interface to work with pins 0 and 1 from the micro:bit board. After this, the MicroPython prompt will become unavailable until we re-enable it again. uart.init(baudrate=9600, tx = pin0, rx = pin1) Then, we will take care of flushing the dummy byte that is sent to the serial port after we initialize the serial interface. Note however that this issue only happens in older versions of MicroPython, such as 1.7.0 (the one I’m using). In newer versions that no longer have this issue, we can skip this procedure. 
uart.write("\r")
readUntil(uart, '\r')

Next, we will send the command to connect the UART OBLOQ to the WiFi network and wait for the procedure to finish, leveraging again the readUntil function. You can read in more detail about the WiFi connection procedure here.

uart.write("|2|1|yourNetworkName,yourNetworkPassword|\r")
readUntil(uart, '3')
readUntil(uart, '\r')

Finally, after the UART OBLOQ is connected to the WiFi network, we will send the GET request. You can read more about the details of the command we need to send on this tutorial. To sum up, the command takes the format shown below.

|3|1|destinationURL|

The destination URL has the following format, where you should change #yourFlaskMachineIp# to the IP of the machine that is hosting the Flask server:

http://#yourFlaskMachineIp#:8090/get

You can check below the MicroPython code to send this command:

uart.write("|3|1|http://#yourFlaskMachineIp#:8090/get|\r")

The UART OBLOQ answers to the command in the following format:

|3|HTTPResponseCode|ResponseBody|

So, we will leverage again the readUntil function to wait for the ‘3’ character to be sent to the serial port. We don’t need to store the result of this function call in a variable since this part of the command response doesn’t contain any useful information. Then, we will call the function again, now to wait for the “\r” character, which signals the end of the UART OBLOQ command response. This time, we will store the readUntil function response, which contains the HTTP response code and body.

readUntil(uart, '3')
result = readUntil(uart, '\r')

To finalize, we will re-enable the Python prompt and then print the request response we have obtained and stored in a variable.

uart.init(baudrate=115200)
print(result)

Testing the code

To test the whole system, first run the Flask server using a Python tool of your choice. I'm using IDLE, a Python IDE. Then, after connecting the micro:bit to the UART OBLOQ and powering both devices, simply run the previous MicroPython script. You should get an output similar to figure 1 after the script finishes the execution. As can be seen, the code 200 was returned by the server, thus indicating success. Also, the body of the response corresponds to the "Received" string we have defined on Flask.

Figure 1 – Response of the GET request.

If you go back to the tool where you are running the Flask server, you should see the headers of the request getting printed.

Related Posts
- Micro:bit uPython: HTTP UART OBLOQ POST request to Flask server
- Micro:bit MicroPython: Connecting the UART OBLOQ to WiFi network
- Micro:bit uPython: Pinging the UART OBLOQ
- Micro:bit uPython: Getting firmware version of UART OBLOQ
- Micro:bit uPython: Receiving data from serial port
- Micro:bit uPython: using external pins for serial communication
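Before involving the micro:bit at all, it can be handy to confirm the Flask route responds as expected from a regular computer. Here is a minimal sketch using the requests library; the IP address is only a placeholder that you would replace with your Flask machine's address:

import requests

# Quick sanity check of the /get route defined in the Flask code above
response = requests.get("http://192.168.1.100:8090/get")

print(response.status_code)  # expected: 200
print(response.text)         # expected: Received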
https://techtutorialsx.com/2018/12/08/microbit-micropython-uart-obloq-http-get-request-to-flask-server/
CC-MAIN-2019-35
refinedweb
1,089
62.38
Python Tutorial It is a website that has a tutorial, a forum, and is updated EVERYDAY. Please go to this site, and tell all of your pygame.org buddies to go here and talk on the forums. Socool274 (socool274) Changes Links - Releases Pygame.org account Comments Vlad 2012-02-11 22:18:47 why dont you make a pygame forum? Vladwasright 2014-06-10 02:19:04 Gee, a forum. That'd be great. Peter 2014-07-04 08:55:10 Pygame needs a forum! Or can you point me to a place where people discussing pygame? josmiley 2014-07-04 11:12:58 irc: chat.freenode.net #pygame jmm0 2014-07-17 14:15:24 Justin Case 2014-07-29 16:41:34 Hi Got Window 8 computer can I ask why is so hard to find a pygame installer for this system? PLEASE HELP (in easy steps thanks) jmm0 2014-08-06 15:33:30 1. Install Python 3.4.1 (32-bit). 2. Install pygame‑1.9.2a0.win32‑py3.4.exe. Ben Smith 2014-08-08 14:53:55 I have installed Pygame in both python27 and python34 directories - it was not clear what pygame version was for what version of compiler. If I try to run the following program: #!/usr/bin/env python3 import sys, pygame I get an error: Traceback (most recent call last): File "pygameintro.py", line 3, in <module> import sys, pygame File "e:\python34\lib\site-packages\pygame\__init__.py", line 95, in <module> from pygame.base import * ImportError: DLL load failed: The specified module could not be found. Which is clearly telling me it cannot find a DLL, so do I need to set the PATH env variable in Windows? or what. chris 2014-08-12 10:22:41 how the heck do I get pygame on my raspberry pi. I do not know how to get the package and how to install. I need to know because I am so egad to do this. Jarosław Zięba 2014-11-27 22:39:27 Guys, really, we need pygame forum.... Matt 2015-01-15 02:08:24 yes please! making a pygame forum would be great!!! also would like access to the accelerated graphics for 60 fps pygame science projects! And a builtin pygame font, and a builtin "pygame-midi" (to sound the same on every computer) those would be excellent as standard pygame packages!! Would also be nice to also have a kind of "run_python_code.exe" without needing and actually installation, (simply drag-n drop 'my_program.py' onto the single exe file) so that code can be run without even installing python+pygame!-- Oh and one more thing, how about a builtin pygame package that fully compiles any code into a sellable exe file? (say perhaps $1 per game?) Not joking, I think these would be great additions for every pygame programmer. ^_^ Matt B. Mary 2015-03-17 17:37:23 I create a screen in pygame using: screen = pygame.display.set_mode((width,height)) I then draw 200 random gray scale circles on it and save the screen as an image: pygame.image.save(screen, "Circles.png") (I need to save this screen as an image so I can compare it to a target image later on). Now, I need to randomly change different parameters of different circles by putting the numpy array that contains the circle parameters for all the circles in a loop with a big number as its iteration value. And I need to redraw the mutated circles on a screen (at the end of each iteration) and save that screen as a png image again. Now the problem is, every time I use pygame.display.set_mode((width,height)) it opens up the display window and it significantly slows down my program. I would like to create a screen and save it in a variable but don't need to display that screen at each iteration. I haven't been able to figure out what command to use to avoid displaying the screen. 
I appreciate your help.

bcl 2015-08-21 01:09:29 The 'pygame subset for android' download link does not work. Is it still available?

kdamav7949 2016-01-29 17:44:21 hi. i have installed python on my computer but i can not get pygame to work. please help me. when i import pygame in the python shell, i get this error: Traceback (most recent call last): File "<pyshell#3>", line 1, in <module> import pygame ImportError: No module named 'pygame' I have python 3.5.1 and installed the 32-bit version on my 64-bit computer because there was no option for 64-bit.
https://www.pygame.org/project/849
CC-MAIN-2020-10
refinedweb
784
75.2
SMS client and server is an application software which is used for sending and receiving messages(SMS). It listens for incoming messages to arrive, processes the message if it's in a valid format. Note the processing of arrived messages depends on the application which will be discussed later. I am going to explain the following things: I have used the GSMComm Library for Sending and Receiving SMS. You require a GSM modem or phone for sending an SMS. CommSetting class is used for storing comm port settings: public class CommSetting { public static int Comm_Port=0; public static Int64 Comm_BaudRate=0; public static Int64 Comm_TimeOut=0; public static GsmCommMain comm; public CommSetting() { // // TODO: Add constructor logic here // } } Comm is an object of type GsmCommMain which is required for sending and receiving messages. We have to set the Comm port, Baud rate and time out for our comm object of type GsmCommMain. Then try to open with the above settings. We can test the Comm port settings by clicking on the Test button after selecting the Comm port, baud rate and Time out. Sometimes if the comm port is unable to open, you will get a message "No phone connected". This is mainly due to Baud rate settings. Change the baud rate and check again by clicking the Test button until you get a message "Successfully connected to the phone." Before creating a GSMComm object with settings, we need to validate the port number, baud rate and Timeout. The EnterNewSettings() does validation, returns true if valid, and will invoke SetData(port,baud,timeout) for comm setting. The following block of code will try to connect. If any problem occurs "Phone not connected" message appears and you can either retry by clicking on the Retry button or else Cancel. GsmCommMain comm = new GsmCommMain(port, baudRate, timeout); try { comm.Open(); while (!comm.IsConnected()) { Cursor.Current = Cursors.Default; if (MessageBox.Show(this, "No phone connected.", "Connection setup", MessageBoxButtons.RetryCancel, MessageBoxIcon.Exclamation) == DialogResult.Cancel) { comm.Close(); return; } Cursor.Current = Cursors.WaitCursor; } // Close Comm port connection (Since it's just for testing // connection) comm.Close(); } catch(Exception ex) { MessageBox.Show(this, "Connection error: " + ex.Message, "Connection setup", MessageBoxButtons.OK, MessageBoxIcon.Warning); return; } // display message if connection is a success. MessageBox.Show(this, "Successfully connected to the phone.", "Connection setup", MessageBoxButtons.OK, MessageBoxIcon.Information); We are going to register the following events for GSMComm object comm. PhoneConnected comm_PhoneConnectedwhich will invoke OnPhoneConnectionChange(bool connected)with the help of Delegate ConnectedHandler. MessageReceived MessageReceivedEventHandler. When the incoming message arrives, the comm_MessageReceivedmethod will be invoked which in turn calls the MessageReceived()method in order to process the unread message. GSMCommobject commhas a method ReadMessageswhich will be used for reading messages. It accepts the following parameters phone status ( All, ReceivedRead, ReceivedUnread, StoredSent, and StoredUnsent) and storage type: SIM memory or Phone memory. 
private void MessageReceived() { Cursor.Current = Cursors.WaitCursor; string storage = GetMessageStorage(); DecodedShortMessage[] messages = CommSetting.comm.ReadMessages (PhoneMessageStatus.ReceivedUnread, storage); foreach(DecodedShortMessage message in messages) { Output(string.Format("Message status = {0}, Location = {1}/{2}", StatusToString(message.Status), message.Storage, message.Index)); ShowMessage(message.Data); Output(""); } Output(string.Format("{0,9} messages read.", messages.Length.ToString())); Output(""); } The above code will read all unread messages from SIM memory. The method ShowMessage is used for displaying the read message. The message may be a status report, stored message sent/un sent, or a received message. You can send an SMS by keying in the destination phone number and text message. If you want to send a message in your native language (Unicode), you need to check in Send as Unicode(UCS2). GSMComm object comm has a SendMessage method which will be used for sending SMS to any phone. Create a PDU for sending messages. We can create a PDU in straight forward version as: SmsSubmitPdu pdu = new SmsSubmitPdu (txt_message.Text,txt_destination_numbers.Text,""); An extended version of PDU is used when you are sending a message in Unicode. try { // Send an SMS message SmsSubmitPdu pdu; bool alert = chkAlert.Checked; bool unicode = chkUnicode.Checked; if (!alert && !unicode) { // The straightforward version pdu = new SmsSubmitPdu (txt_message.Text, txt_destination_numbers.Text,""); } else { // The extended version with dcs byte dcs; if (!alert && unicode) dcs = DataCodingScheme.NoClass_16Bit; else if (alert && !unicode) dcs = DataCodingScheme.Class0_7Bit; else if (alert && unicode) dcs = DataCodingScheme.Class0_16Bit; else dcs = DataCodingScheme.NoClass_7Bit; pdu = new SmsSubmitPdu (txt_message.Text, txt_destination_numbers.Text, "", dcs); } // Send the same message multiple times if this is set int times = chkMultipleTimes.Checked ? int.Parse(txtSendTimes.Text) : 1; // Send the message the specified number of times for (int i=0;i<times;i++) { CommSetting.comm.SendMessage(pdu); Output("Message {0} of {1} sent.", i+1, times); Output(""); } } catch(Exception ex) { MessageBox.Show(ex.Message); } Cursor.Current = Cursors.Default; You can read all messages from the phone memory of SIM memory. Just click on "Read All Messages" button. The message details such as sender, date-time, text message will be displayed on the Data Grid. Create a new row for each read message, add to Data table and assign the Data table to datagrid's source private void BindGrid(SmsPdu pdu) { DataRow dr=dt.NewRow(); SmsDeliverPdu data = (SmsDeliverPdu)pdu; dr[0]=data.OriginatingAddress.ToString(); dr[1]=data.SCTimestamp.ToString(); dr[2]=data.UserDataText; dt.Rows.Add(dr); dataGrid1.DataSource=dt; } The above code will read all unread messages from SIM memory. The method ShowMessage is used for displaying the read message. The message may be a status report, stored message sent/un sent, or a received message. The only change in processing Received message and Read message is the first parameter. DecodedShortMessage[] messages = CommSetting.comm.ReadMessages(PhoneMessageStatus.ReceivedUnread, storage); DecodedShortMessage[] messages = CommSetting.comm.ReadMessages(PhoneMessageStatus.All, storage); All messages which are sent by the users will be stored in SIM memory and we are going to display them in the Data grid. We can delete a single message by specifying the message index number. 
We can delete all messages from SIM memory by clicking on the "Delete All" button. Messages are deleted based on the index. Every message will be stored in memory with a unique index. The following code will delete a message based on its index:

// Delete the message with the specified index from storage
CommSetting.comm.DeleteMessage(index, storage);

To delete all messages from memory (SIM/Phone):

// Delete all messages from phone memory
CommSetting.comm.DeleteMessages(DeleteScope.All, storage);

DeleteScope is an enum which contains: All, Read, ReadAndSent, ReadSentAndUnsent.

Here are some interesting applications where you can use and modify this software.

The customer has agreed to pre-paid electricity recharges with the help of recharge coupons. The coupons are made available at shops. The customer will first buy the coupons from shops; every coupon contains a coupon PIN which will be masked, and the customer needs to scratch it to view the PIN number. The customer will send an SMS to the SMS server in a specified message format for recharging. Message format for recharging: RECHARGE <Coupon No> <Customer ID>. On the server, the database consists of customer information along with the telephone number; there will be a field named Amount which will be updated when the customer recharges with some amount. This application becomes somewhat complex: automatic meter reading software along with hardware needs to be integrated with it. Automatic meter reading systems will read all meter readings and calculate the amount to be deducted for the customer.

You can implement astrology software. The user will send an SMS with his zodiac sign. The SMS server will maintain an astrology database with each zodiac sign and a text description which contains a message for the day. The database needs to be updated daily for all zodiac signs. Message format which will be used by the user to get the message of the day: Zodiac Sign.

We can implement a remote controlling system, for example when you need to perform an action such as shutting down a machine remotely. You can send an SMS. The SMS server will listen and then process the message. Based on the message format sent by the user we can take action. For example, the message format could be SHUTDOWN, sent to the SMS phone number.

This project wouldn't be complete unless I thank the GSMComm Lib developer "Stefan Mayr". I customized my application using this library. You can download the sample project and library from the web link which I provided under the Reference section.
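As a concrete illustration of the remote-control idea, here is a hypothetical sketch of how the message-processing code might react to the SHUTDOWN format. It assumes the decoded PDU is an SmsDeliverPdu, as in the BindGrid code above, and the shutdown command shown is Windows-specific:

// Hypothetical handler: act on a received "SHUTDOWN" command
private void ProcessCommand(SmsDeliverPdu data)
{
    string text = data.UserDataText.Trim();
    if (text.Equals("SHUTDOWN", StringComparison.OrdinalIgnoreCase))
    {
        // Shut down the local machine immediately (Windows)
        System.Diagnostics.Process.Start("shutdown", "/s /t 0");
    }
}

In a real deployment you would also want to check data.OriginatingAddress against a list of authorized phone numbers before acting on the command.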
http://www.codeproject.com/KB/cs/SMS.aspx
crawl-002
refinedweb
1,388
50.23
minimal BDD library Project description pea - The tiniest green vegetable. pea is a minimal BDD framework for python, in the style of ruby’s cucumber and python’s lettuce. It aims to help you write the same kind of tests - but in straight-up python code, without all the parsing and indirection and other hoops to jump through. It’s a lot like ruby’s coulda. Benefits of cucumber-style testing include: - You write your tests in clear, english language without inline code - Your tests are human-readable, and hopefully human-editable - You can re-use steps with confidence, because they all do exactly what they say on the tin Benefits of pea over lettuce, cucumber, etc: - It’s a really trivial library (thus the name). It doesn’t do very much, so it probably doesn’t have many bugs - Your features are just python code: - No “BDD language parser” needed - No regular expressions - Stack traces make sense - Syntax highlighting - You can use ctags to jump between test & implementation, as well as for method completion - Managing and renaming functions is much easier than managing regexes - You can use whatever abstractions you like - You can use rich python objects as arguments, instead of parsing strings - It doesn’t need its own test runner; so you can just use nose to run it alongside your unit tests So how do I use it? Here’s a minimal example: from pea import * @step def I_go_to_the_store(): world.location='store' world.cart = [] @step def I_buy_some(item): world.cart.append(item) @step def I_go_home(): world.location = 'home' @step def I_have_some_delicious(item): assert item in world.cart world.assertEquals(world.location, 'home') # -------------------- class TestShopping(TestCase): def test_buying_some_peas(self): Given.I_go_to_the_store() When.I_buy_some('peas') And.I_go_home() Then.I_have_some_delicious('peas') … and when you run it (with nosetests, in verbose mode): Typically you would put your steps in a separate python module (or many), but it’s your choice. Basics: - @step adds your function to pea’s registry of steps, which allows them to be called via Given, When, And, and Then. - To re-use a step from inside another step, just call the function! Project details Release history Release notifications | RSS feed Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
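Since pea steps are plain functions (as the description above puts it, to re-use a step "just call the function"), composing steps is simply a matter of calling them. A small hypothetical example building on the shopping steps shown earlier:

@step
def I_stock_up_on(item):
    # re-using existing steps is just calling the functions
    I_go_to_the_store()
    I_buy_some(item)
    I_go_home()

A test could then write When.I_stock_up_on('peas') and still get the readable Given/When/Then output.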
https://pypi.org/project/pea/
CC-MAIN-2021-17
refinedweb
384
58.01
30 June 2009 13:57 [Source: ICIS news] (Releads, adds detail and updates throughout) LONDON (ICIS news)--At least 15 people were killed and dozens injured when rail tankers carrying liquefied petroleum gas (LPG) derailed and exploded at Viareggio in northern Italy, officials and media reports said on Tuesday. The train’s locomotive was pulling 14 tank wagons, the lead car registered to Polish railway company PKP and 13 to the German rail and distribution group Deutsche Bahn, the Italian state railway company said. "In all probability, the accident in [...]" The wagons had been checked and were in compliance with EU standards, he added. The train was transporting LPG from [...] Two rail cars were said to have exploded, causing buildings to collapse amid widespread devastation. The killed and injured lived alongside the rail tracks or were driving nearby, according to media reports. The train had just passed [...]
http://www.icis.com/Articles/2009/06/30/9228849/at-least-15-killed-many-hurt-in-italy-train-lpg-explosion.html
CC-MAIN-2014-35
refinedweb
149
58.92
Many repositories are used (at least in part) to manage files and other artifacts, including service definitions, policy files, images, media, documents, presentations, application components, reusable libraries, configuration files, application installations, databases schemas, management scripts, and so on. Most JCR repository implementations will store those files and maybe index them for searching. But ModeShape does more. ModeShape sequencers can automatically unlock the structured information buried within all of those files, and this useful content derived from your files is then stored back in the repository where your client applications can search, access, and analyze it using the JCR API. Sequencing is performed in the background, so the client application does not have to wait for (or even know about) the sequencing operations. The following diagram shows conceptually how these automatic sequencers do this. As of ModeShape 3.6.0.Final, your applications can use a session to explicitly invoke a sequencer on a specified property. We call these manual sequencers. Any generated output is included in the session's transient state, so nothing is persisted until the application calls session.save(). Sequencers Sequencers are just POJOs that implement a specific interface, and when they are called they simply process the supplied input, extract meaningful information, and produce an output structure of nodes that somehow represents that meaningful information. This derived information can take almost any form, and it typically varies for each sequencer.. A third example is a sequencer that works on XML Schema Documents might parse the XSD content and generate nodes that mirror the various elements, and attributes, and types defined within the schema document. Sequencers allow a ModeShape repository to help you extract more meaning from the artifacts you already are managing, and makes it much easier for applications to find and use all that valuable information. All without your applications doing anything extra. Each repository can be configured with any number of sequencers. Each one includes a name, the POJO class name, an optional classpath (for environments with multiple named classloaders), and any number of POJO-specific fields. Upon startup, ModeShape creates each sequencer by instantiating the POJO and setting all of the fields, then initializing the sequencer so it can register any namespaces or node type definitions. There are two kinds of sequencers, automatic and manual. Automatic Sequencers An automatic sequencer has a path expression that dictates which content in the repository the sequencer is to operate upon. These path expressions are really patterns and look somewhat like simple regular expressions. When persisted content in the repository changes, ModeShape automatically looks to see which (if any) sequencers might be able to run on the changed content. If any of the sequencers do match, ModeShape automatically calls them by supplying the changed content. At that point, the sequencer then processes the supplied content and generates the output, and ModeShape then saves that generated output to the repository. To use an automatic sequencer, simply add or change content in the repository that matches the sequencers' path expression. For example, if an XSD sequencer is configured for nodes with paths like "/files//*.xsd", then just simply upload a file into that location and save it. ModeShape will detect that the XSD sequencer should be called, and will do the rest. 
The generated content will magically appear in the repository. Manual Sequencers A manual sequencer is simply a sequencer that is configured without path expressions. Because no path expressions are provided, ModeShape cannot determine when/where these sequencers should be applied. Instead, manual sequencers are intended to be called by client applications. For example, consider that a session just uploaded following code shows how an XSD sequencer configured with name "XSD Sequencer" is manually invoked to place the generated content directly under the "/files/schemas/Customers.xsd" node (and adjacent to the "jcr:content" node): The sequence(...) method returns true if the sequencer generated output, or "false" if the sequencer couldn't use the input and instead did nothing. Remember that when the sequence(...) does return, any generated output is only in the session's transient state and "session.save()" must be called to persist this state. Built-in sequencers ModeShape comes with sequencer implementations for a variety of file types: Please see the Built-in sequencers section of the documentation for more detail on all of these sequencers, including how to configure them and the structure of the output they generate. Custom sequencers As mentioned earlier, a sequencer is actually just a plain old Java object (POJO). Creating a sequencer is pretty straightforward: create a Java class that extends a single abstract class, package it up for use, and then configure your repository to use it. We walk you through all these steps in the Custom sequencers section of the documentation. Configuring a automatic sequencer. A path expression consist of two parts: a selection criteria (or an input path) and an output path: Input path, similar to regular expressions. Thus, the first input path in the previous table would match node "/a/b", and "b" would be captured and could be used within the output path using "$1", where the number used in the output path identifies the parentheses. Here are some examples of what's captured by the parenthesis and available for use in the output path: Square brackets can also be used to specify criteria on a node's properties or children. Whatever appears in between the square brackets does not appear in the selected node. This distinction between the selected path and the changed path becomes important when writing custom sequencers. Output paths The outputPath part of a path expression defines where the content derived by the sequencer should be stored. Typically, this points to a location in a different part of the repository, but it can actually be left off if the sequenced output is to be placed directly under the selected node. The output path can also use any of the capture groups used in the input path. Workspaces in input and output paths So far, we've talked about how input paths and output paths are independent of the workspace. However, there are times when it's desirable to configure sequencers to only work against content in a specific workspace. In these cases, it is possible to specify the workspace names before the path. For example: Again, the rules are pretty straightforward. You can leave off the workspace name, or you can prepend the path with "workspaceNamePattern:", where "workspaceNamePattern" is a regular-expression pattern used to match the applicable workspace names. A blank pattern implies any match, and is a shorthand notation for the ".*" regular expression. 
Note that the repository names may not include forward slashes (e.g., '/') or colons (e.g., ':'). Example path expression Let's look at an example sequencer path expression: This matches a changed "jcr:data" property on a node named "jcr:content[1]" that is a child of a node whose name ends with ".jpg", ".jpeg", ".gif", ".bmp", ".pcx", or ".png" ( that may have any same-name-sibling index) appearing at any level in the "default" workspace. Note how the selected path capture the filename (the segment containing the file extension), including any same-name-sibling index. This filename is then used in the output path, which is where the sequenced content is placed under the "/images" node in the "meta" workspace. So, consider a PNG image file is stored in the "default" workspace in a repository configured with an image sequencer and the aforementioned path expression, and the file is stored at "/jsmith/photos/2011/08/09/reunion.png" using the standard "nt:file" pattern. This means that an "nt:file" node named "reunion.png" is created at the designated path, and a child node named "jcr:content" will be created with primary type of "nt:resource" and a "jcr:data" binary property (at which the image file's content is store). When the session is saved with these changes, ModeShape discovers that the property satisfies the criteria of the sequencer, and calls the sequencer's execute(...) method with the selected node, input node, input property and output node of "/images" in the "meta" workspace. When the execute() method completes successfully, the session with the change in the "meta" workspace are saved and the content is immediately available to all other sessions using that workspace. Waiting for automatic sequencing When your application creates or uploads content that will kick off a sequencing operation, the sequencing is actually done asynchronously. If you want to be notified when the sequencing is complete, you can use ModeShape's observation feature to register a listener for the sequencing event. The first step is to create a class that implements "javax.jcr.observation.EventListener". Normally this is pretty easy, but in our case we want to block until the listener is notified via a separate thread. An easy way to do this is to use a java.util.concurrent.CountDownLatch, and to count down the latch as soon as we get our event. (If we carefully register the listener using criteria for only the sequencing output we're interested in, we'll know we'll only receive one event.) Here's our implementation that captures from the first event whether the sequencing was successful and the path of the output node, and then counts down the latch: We could then register this using the public API:
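The listener implementation and registration snippets referenced above did not survive extraction here, so the following is only a sketch of the described approach using the standard JCR observation API; the event type, the "/images" output path, and the class name are assumptions for illustration:

import java.util.concurrent.CountDownLatch;
import javax.jcr.observation.Event;
import javax.jcr.observation.EventIterator;
import javax.jcr.observation.EventListener;

public class SequencingCompleteListener implements EventListener {
    private final CountDownLatch latch = new CountDownLatch(1);
    private volatile String outputNodePath;
    private volatile boolean successful;

    @Override
    public void onEvent(EventIterator events) {
        try {
            // We register narrowly enough that the first event is the one we want
            Event event = events.nextEvent();
            outputNodePath = event.getPath();
            successful = true;
        } catch (Exception e) {
            successful = false;
        } finally {
            latch.countDown();
        }
    }

    public boolean await() throws InterruptedException {
        latch.await();
        return successful;
    }

    public String getOutputNodePath() {
        return outputNodePath;
    }
}

Registration might then look like this, listening for nodes added under the sequencer's output path:

SequencingCompleteListener listener = new SequencingCompleteListener();
session.getWorkspace().getObservationManager()
       .addEventListener(listener, Event.NODE_ADDED, "/images", true, null, null, false);

ModeShape's own API also adds sequencing-specific event information; consult the ModeShape observation documentation for the exact event types if you need to distinguish sequencing success from failure.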
https://docs.jboss.org/author/display/MODE/Sequencing
CC-MAIN-2017-09
refinedweb
1,565
51.89
We. Floyd's Cycle-Finding Algorithm One popular algorithm for detecting cycles in linked lists is Floyd's Cycle-Finding Algorithm, which is often called the tortoise and the hare algorithm. The algorithm uses 2 pointers, a fast pointer and a slow pointer. The fast pointer ( hare ) traverses the linked list 2 nodes at a time while the slow pointer ( tortoise ) traverses the linked list 1 node at a time. If these pointers ever point to the same node in the linked, there is a cycle in the linked list. Let's code Floyd's Cycle-Finding Algorithm in Python. We will use the same Node Class that represents a node in the linked list. It will hold an integer as its value and a pointer to None or the next node in the linked list. class Node(object): def __init__(self, value: int, next_node: "Node" = None): self.value = value self.next = next_node def __repr__(self): return "Node <{}>".format(self.value) Now let's write a Python function that will implement Floyd's Cycle-Finding Algorithm. We will create both a slow and fast pointer and both pointers will initially point to the head of the linked list. In a loop, we will increment the slow pointer by 1 and the fast pointer by 2. We then compare the pointers to see if they are pointing to the same node. If they are, there is a cycle in the linked list and we return True. If the pointers are never pointing to the same node and we have reached the end of the linked list, we return False. Here is Python Function implementing Floyd's Cycle-Finding Algorithm. def has_cycle(head: Node) -> bool: """Floyd's Cycle-Finding Algorithm""" slow, fast = head, head while fast is not None and fast.next is not None: slow, fast = slow.next, fast.next.next if slow == fast: return True return False We can run Python test code to verify the function behaves accordingly. First, let's create a linked list that does not have a cycle and verify the function returns False. linked_list = Node(1, Node(2, Node(3, Node(4)))) assert not has_cycle(linked_list) Now let's create a linked list where the last node in the list connects to the first node. Floyd's Cycle-Finding Algorithm should detect the cycle and the function should return True. node4 = Node(4) linked_list = Node(1, Node(2, Node(3, node4))) node4.next = linked_list assert has_cycle(linked_list) Conclusion Detecting a cycle in a linked list is a popular technical interview question and Floyd's Cycle-Finding Algorithm is a popular solution.
https://www.koderdojo.com/blog/detect-cycle-in-linked-list-using-floyd-s-cycle-finding-algorithm
CC-MAIN-2021-39
refinedweb
436
72.46
- Overview - Quickstart - Designing an XML Feed - Choosing a Name for the Feed Data Source - Choosing the Feed Type - Defining the XML Record for a Document - Grouping Records Together - Providing Content in the Feed - Adding Metadata Information to a Record - Using the UTF-8 Encoding - Including Protected Documents in Search Results - Per-URL ACLs and ACL Inheritance - Feeding Groups to the Search Appliance - Feeding Content from a Database - Saving your XML Feed - Feed Limitations - Pushing a Feed to the Google Search Appliance - Turning Feed Contents Into Search Results - Troubleshooting - Error Messages on the Feeds Status Page - Feed Push is Not Successful - Fed Documents Aren’t Appearing in Search Results - Document Feeds Successfully But Then Fails - Fed Documents Aren’t Updated or Removed as Specified in the Feed XML - Document Status is Stuck “In Progress” - Insufficient Disk Space Rejects Feeds - Feed Client TCP Error - Example Feeds - Google Search Appliance Feed DTD This document is for developers who use the Google Search Appliance Feeds Protocol to develop custom feed clients that push content and metadata to the search appliance for processing, indexing, and serving as search results. To push content to the search appliance, you need a feed and a feed client: - The feed is an XML document that tells the search appliance about the contents that you want to push. - The feed client is the application or web page that pushes the feed to a feeder process on the search appliance. This document explains how feeds work and shows you how to write a basic feed client. Overview You can use feeds to push data into the index on the search appliance. There are two types of feeds: - A web feed provides the search appliance with a list of URLs. A web feed: - Must be named “web”, or have its feed type set to “metadata-and-url”. - May include metadata, if the feed type is set to “metadata-and-url”. - Does not provide content. Instead, the crawler queues the URLs and fetches the contents from each document listed in the feed. - Is incremental. - Is recrawled periodically, based on the crawl settings for your search appliance. - A content feed provides the search appliance with both URLs and their content. A content feed: - Can have any name except “web”. - Provides content for each URL. - May include metadata. - Can be either full or incremental. - Is only indexed when the feed is received; the content and metadata are analyzed and added to the index. The URLs submitted in a content feed are not crawled by the search appliance. Any URLs extracted from the content, that have not been submitted in a content feed, will be extracted and scheduled for crawling if they match the crawling rules. The search appliance does not support indexing compressed files sent in content feeds. The search appliance follows links from a content-fed document, as long as the links match URL patterns added under Follow and Crawl Only URLs with the Following Patterns on the Content Sources > Web Crawl > Start and Block URLs page in the Admin Console. Web feeds and content feeds behave differently when deleting content. See Removing Feed Content From the Index for a description of how content is deleted from each type of feed. To see an example of a feed, follow the steps in the section Quickstart. Why Use Feeds? You should design a feed to ensure that your search appliance crawls any documents that require special handling. 
Consider whether your site includes content that cannot be found through links on crawled web pages, or content that is most useful when it is crawled at a specific time. For example, you might use a feed to add external metadata from an Enterprise Content Management (ECM) system. Examples of documents that are best pushed using feeds include: - Documents that cannot be fetched using the crawler. For example, records in a database or files on a system that is not web-enabled. - Documents that can be crawled but are best recrawled at different times than those set by the automatic crawl scheduler that runs on the search appliance. - Documents that can be crawled but there are no links on your web site that allow the crawler to discover them during a new crawl. - Documents that can be crawled but are much more quickly uploaded using feeds, due to web server or network problems. Impact of Feeds on Document Relevancy For documents sent with content feed, a flat fixed page rank value is assigned by default, which might have a negative impact on the relevancy determination of the documents. However, you can specify PageRank in a feed for either a single URL or group of URLs by using the pagerank element. For more details, see Defining the XML Record for a Document. Choosing a Feed Client You push the XML to the search appliance using a feed client. You can use one of the feed clients described in this document or write your own. For details, see Pushing a Feed to the Google Search Appliance. Quickstart Here are steps for pushing a content feed to the search appliance. - Download sample_feed.xml to your local computer. This is a content feed for a document entitled “Fed Document”. - In the Admin Console, go to Content Sources > Web Crawl > Start and Block URLs and add this pattern to “Follow and Crawl Only URLs with the Following Patterns”: This is the URL for the document defined in sample_feed.xml. - Download pushfeed_client.py to your local computer. This is a feed client script implemented in Python 2.x. You must install Python 2.x to run this script. Google also provides a Python 3.x version, pushfeed_client3.py. - Configure the search appliance to accept feeds from your computer. In the Admin Console, go to Content Sources > Feeds, and scroll down to List of Trusted IP Addresses. Verify that the IP address of your local computer is trusted. - Run the feed client script with the following arguments (you must change “APPLIANCE-HOSTNAME” to the hostname or IP address of your search appliance): % pushfeed_client.py --datasource="sample" --feedtype="full" --url="http://<APPLIANCE-HOSTNAME>:19900/xmlfeed" --xmlfilename="sample_feed.xml" - In the Admin Console, go to Content Sources > Feeds. A data source named “sample” should appear within 5 minutes. - The URL appear under Crawl Diagnostics within about 15 minutes. - Enter the following as your search query to see the URL in the results: info: If your system is not busy, the URL should appear in your search results within 30 minutes. Designing an XML Feed The feed is an XML file that contains the URLs. It may also contain their contents, metadata, and additional information such as the last-modified date. The XML must conform to the schema defined by gsafeed.dtd. This file is available on your search appliance at http://<APPLIANCE-HOSTNAME>/gsafeed.dtd. Although the Document Type Definition (DTD) defines elements for the data source name and the feed type, these elements are populated when you push the feed to the search appliance. 
Any datasource or feedtype values that you specify within the XML document are ignored. An XML feed must be less than 1 GB in size. If your feed is larger than 1 GB, consider breaking the feed into smaller feeds that can be pushed more efficiently. Choosing a Name for the Feed Data Source When you push a feed to the search appliance, the system associates the fed URLs with a data source name, specified by the datasource element in the feed DTD. - If the data source name is “web”, the system treats the feed as a web feed. A search appliance can only have one data source called “web”. - If the data source name is anything else, and the feed type is metadata-and-url, the system treats the feed as a web feed. - If the data source name is anything else, and the feed type is not metadata-and-url, the system treats the feed as a content feed. To view all of the feeds for your search appliance, log into the Admin Console and choose Content Sources > Feeds. The list shows the date of the most recent push for each data source name, along with whether the feed was successful and how many documents were pushed. Choosing the Feed Type The feed type determines how the search appliance handles URLs when a new content feed is pushed with an existing data source name. Content feeds can be full or incremental. a web feed is always incremental. To support feeds that provide only URLs and metadata, you can also set the feed type to metadata-and-url. This is a special feed type that is treated as a web feed. - When the. - When the. - When the. Unless the metadata-and-urlfeed has the crawl-immediately=truedirective the search appliance will schedule the re-crawling of the URL instead of re-crawling it without delay. It is not possible to modify a single field of a document’s metadata by submitting a feed that contains only the modified field. To modify a single field, you must submit a feed that includes all the metadata fields along with the modified field. Documents that have been fed by using content feeds are specially marked so that the crawler will not attempt to crawl them unless the URL is also one of the Start URLs defined on the Content Sources > Web Crawl > Start and Block URLs page. In this case, the URL is periodically accessed from the GSA as part of the regular connectivity tests. To ensure that the search appliance does not crawl a previously fed document, use googleoff/googleon tags (see "Excluding Unwanted Text from the Index" in Administering Crawl, or robots.txt (see "Using robots.txt to Control Access to a Content Server" in Administering Crawl. To update the document, you need to feed the updated document to the search appliance. Documents fed with web feeds, including metadata-and-urls, are recrawled periodically, based on the crawl settings for the search appliance. metadata-and-urlfeed type is one way to provide metadata to the search appliance. A connector can also provide metadata to the search appliance. See "Content Feed and Metadata-and-URL Feed" in the Connector Developer's Guide. See also the External Metadata Indexing Guide for information about external metadata. Full Feeds and Incremental Feeds Incremental feeds generally require fewer system resources than full feeds. A large feed can often be crawled more efficiently if it is divided into smaller incremental feeds. The following example illustrates the effect of a full feed: - Create a new data source by pushing a feed that contains documents D0, D1 and D2. The system serves D0, D1, and D2. 
- Using the same data source name, push a full feed that contains documents D0, an updated D1, and a new D3. When the feed processing is complete, the system serves D0, the updated D1, and the new D3. Because document D2 was not defined in the full feed, it is removed from the index. The following example mixes full and incremental feeds: - Create a new data source by pushing a feed that contains documents D0, D1 and D2. The system serves D0, D1 and D2. - Push an incremental feed that defines the following actions: “add” for D3, “add” for an updated D1, and “delete” for D2. The system serves D0, updated D1, and D3. D0 was pushed by the first feed; because it is not referenced in the incremental feed, D0’s contents remain in the search results. - Push a full feed that contains documents D0, D7, and D10. The system serves D0, D7, and D10 when the full feed processing is complete. D1 and D3 are not referenced in the full feed, so the system removes them from the index and does not add them back. Defining the XML Record for a Document url--The URL of the document. This is the URL used by the search appliance when crawling and indexing. displayurl--The URL that should be provided in search results for a document. This attribute is useful for web-enabled content systems where a user expects to obtain a URL with full navigation context and other application-specific data, but where a page does not give the search appliance easy access to the indexable content. action--Set action to add when you want the feed to overwrite and update the contents of a URL. If you don’t specify an action, the system performs an add. Set action to delete to remove a URL from the index. The action="delete" feature works for content, web, and metadata-and-URL feeds. lock--For more information, see License Limits. mimetype (required)--This attribute tells the system what kind of content to expect from the content element. All MIME types that can be indexed by the search appliance are supported. Even though the feeds DTD (see Google Search Appliance Feed DTD) marks mimetype as required, mimetype is required only for content feeds and is ignored for web and metadata-and-url feeds (even though you are required to specify a value). The search appliance ignores the MIME type in web and metadata-and-URL feeds because the search appliance determines the MIME type when it crawls and indexes a URL. last-modified--For content feeds only. authmethod--Indicates whether the document is protected by NTLM, HTTP Basic, or Single Sign-on. The authmethod attribute can be set to none, httpbasic, ntlm, or httpsso. If a value for authmethod is not specified and a protected URL is defined on the search appliance, the default value for authmethod is the previously specified value for that URL. If the URL has not been previously specified on the search appliance, then the default value for authmethod is set to none. If you want to enable crawling for protected documents, see Including Protected Documents in Search Results. pagerank--For content feeds only. This attribute specifies the PageRank of the URL or group of URLs. For metadata-and-url feeds the effective PageRank will be the sum of this value (converted to an internal representation) and the PageRank calculated from crawling. The default value is 96. To alter the PageRank of the URL or group of URLs, set the value to an integer value between 68 and 100. Note that this PageRank value does not determine absolute relevancy, and the scale is not linear. Setting PageRank values should be done with caution and with thorough testing. The PageRank set for a URL overrides the PageRank set for a group.
feedrank--This is a linear scale version of pagerank. Valid values are 1 to 100. For content feeds this will be the effective PageRank. For metadata-and-url feeds the effective PageRank will be the sum of this value and the PageRank calculated from crawling. Note that this value is ignored if the record or group also has a pagerank attribute set, and this PageRank value does not determine absolute relevancy. Setting PageRank values should be done with caution and with thorough testing. The value set for a record element overrides the value set for a group. crawl-immediately--For web and metadata-and-url feeds only. If this attribute is set to "true", then the search appliance crawls the URL immediately. If a large number of URLs with crawl-immediately="true" are fed, then other URLs to be crawled are deprioritized or halted until these URLs are crawled. This attribute has no effect on content feeds. crawl-once--For web and metadata-and-url feeds only. If this attribute is set to “true”, then the search appliance crawls the URL once, but does not recrawl it after the initial crawl. crawl-once URLs can get crawled again if explicitly instructed by a subsequent feed using crawl-immediately. Grouping Records Together Record elements must be contained inside the group element. The group element also allows you to apply an action to many records at once. For example, this: <group action="delete"> <record url="" mimetype="text/plain"/> <record url="" mimetype="text/plain"/> <record url="" mimetype="text/plain"/> </group> is equivalent to this: <record url="" mimetype="text/plain" action="delete"/> <record url="" mimetype="text/plain" action="delete"/> <record url="" mimetype="text/plain" action="delete"/> However, if you define an action for an individual record within a group, the record’s definition always overrides the group’s definition. For example: <group action="delete"> <record url="" mimetype="text/plain"/> <record url="" mimetype="text/plain" action="add"/> <record url="" mimetype="text/plain"/> </group> In this example, hello01 and hello03 would be deleted, and hello02 would be updated. Providing Content in the Feed You add document content by placing it inside the record definition for your content feed. You can compress content to improve performance; for more information, see Content Compression. For content that is not plain text or HTML, such as .doc files, you must encode the content by using base64 encoding and specify the appropriate mimetype. Using base64 encoding ensures that the feed can be parsed as valid XML. Here is the outline of a record definition that includes base64 encoded content: <record url="..." mimetype="..."> <content encoding="base64binary">...</content> </record> Content Compression Starting in Google Search Appliance version 6.2, content can be zlib compressed, which improves performance because less data is sent across the network. To send compressed content: - Zlib compress the content. - Base64 encode the content. - Add the content text to the content element in the feed’s record statement. - Specify the encoding="base64compressed" attribute on the content element, for example: <record url='' action='add' mimetype='text/html'> <content encoding="base64compressed">eJyzySjJzbGzScpPqbSzKbDLyS9KzbXRL7Cz0YcI6YPlAQSkDS8=</content> </record> Adding Metadata Information to a Record Metadata can be included in record definitions for different types of feeds. You can encode metadata using base64; for more information, see Metadata Base64 Encoding. The following table provides information about incremental web feeds and metadata-and-URL feeds.
The following table provides information about incremental and full content feeds. If the metadata is part of a feed, it must have the following format: <record url="..." ...> <metadata> <meta name="..." content="..." /> <meta name="..." content="..." /> </metadata> ... </record> The value of the content attribute cannot be an empty string (""). For more information, see Document Feeds Successfully But Then Fails. In version 6.2 and later, content feeds support the update of both content and metadata. Content feeds can be updated by just sending new metadata. Generally, robots META tags with a value of noindex, nofollow, or noarchive can be embedded in the head of an HTML document to prevent the search appliance from indexing links or following them in the document. However, robots META tags in a feed file are not honored, just the META tags in the HTML documents themselves. See the External Metadata Indexing Guide for more information about indexing external metadata and examples of metadata feeds. Metadata Base64 Encoding Starting in Google Search Appliance version 6.2, you can base64 encode metadata using the encoding="base64binary" attribute to the meta element. You can also base64 encode the metadata name attribute; however, both the name and content attributes must be base64 encoded if this option is used. For example: <record url="" action="add" mimetype="text/html"> <metadata> <meta encoding="base64binary" name="cHJvamVjdF9uYW1l" content="Y2lyY2xlZ19yb2Nrcw=="/> </metadata> </record> Using the UTF-8 Encoding Unless you have content in legacy systems that must use a national character set encoding, such as Shift_JIS, it is strongly recommended that all documents to be fed use the UTF-8 encoding. Do not escape the & of a numeric character reference; for example, the & in &#12521; (the character ラ) should not be XML encoded as &amp;#12521;. Including Protected Documents in Search Results Feeds can push protected contents to the search appliance. If your feed contains URLs that are protected by NTLM, Basic Authentication, or Forms Authentication (Single Sign-on), the URL record in the feed must specify the correct type of authentication. You must also configure settings in the Admin Console to give the search appliance access to the protected content. For example, to indicate that a URL on example.com is protected via Forms Authentication, you would define the record as: <record url="" authmethod="httpsso"> To grant the search appliance access to the protected pages in your feed, log into the Admin Console. For URLs that are protected by NTLM and Basic Authentication, follow these steps: - Open Content Sources > Web Crawl > Secure Crawl > Crawler Access. - Define a pattern that matches the protected URLs in the feed. - Enter a username and password that will allow the crawler access to the protected contents. For contents on a Microsoft IIS server, you may also need to specify a domain. For URLs that are protected by Single Sign-on, follow these steps: - Open Content Sources > Web Crawl > Secure Crawl > Forms Authentication. - Under Sample Forms Authentication protected URL, enter the URL of a page in the protected site that will redirect the user to a login form. The login form must not contain JavaScript or frames. If you have more than one login page, create a Forms Authentication rule for each login. - Under URL pattern for this rule, enter a pattern that matches the protected URLs in the feed. - Click Create. In the browser page that opens, use the login form to enter a valid username and password. These credentials allow the crawler access to the protected contents.
If the login information is accepted, you should see the protected page that you specified. If you can see the protected URL contents, click the Save and Close button. The Forms Authentication page now displays your rule. - Make any changes to the rule. For example,. - When you have finished making changes to the rule, click Save. authmethodattribute set, ensure that the fed URLs do not match any patterns on the Content Sources > Web Crawl > Secure Crawl > Crawler Access or Content Sources > Web Crawl > Secure Crawl > Forms Authentication pages that have the Make Public check box checked, unless you want those results to be public. This is one way of providing access to protected documents. For more information on authentication, refer to the online help that is available in the search appliance’s Admin Console, and in Managing Search for Controlled-Access Content. Per-URL ACLs and ACL Inheritance A per-URL ACL (access control list) has only a single URL associated with it. You can use feeds to add per-URL ACLs to the search appliance index. To specify a per-URL ACL, use the acl element, as described in Specifying Per-URL ACLs. ACL information can be applied to groups of documents through inheritance. To specify ACL inheritance, use the attributes described in See Specifying ACL Inheritance. After you feed a per-URL ACL to the search appliance, the ACL and its inheritance chain appear on the Index > Index Diagnostics page. For compatibility with feeds developed before software release 7.0, the search appliance supports the legacy format for specifying per-URL ACLs in feeds (deprecated). For more information, see Legacy Metadata Format (Deprecated). Take note that Google Search Appliance connectors, release 3.0, do not support feeding ACLs by using the legacy approach. They only support the approach documented in this section. If you update a search appliance to release 7.0 from an earlier release, re-crawling content is required. The search appliance also supports other methods of adding per-URL ACLs to the index. For more information, see "Methods for Adding ACLs to the Index" in Managing Search for Controlled-Access Content. You cannot use feeds to supply policy ACLs for prefix patterns or general URL patterns. To add policy ACLs for prefix patterns or general URL patterns, use either of the following methods: - The Search > Secure Search > Policy ACLs page in the Admin Console For more information see Admin Console Help for this page. - Policy ACL API For information about this API, see Policy ACL API Developer’s Guide. Specifying Per-URL ACLs You can include a per-URL ACL in a feed by specifying a document, the principal (group or user), its access to the document, and ACL inheritance information. acl Element To specify all ACL information, including principals and inheritance, use the acl element. An acl element can have the following attributes: url inheritance-type inherit-from The acl element can be the child of either a group or record element. For more information, see Approaches to Using the acl Element. The acl element is the parent of the principal element. For sample code, see Example Feed with an acl Element. url Attribute The url attribute directly associates the ACL with a URL. This attribute allows specifying ACLs for entities, such as folders and shares, without incrementing document count. For information about the inheritance-type and inherit-from attributes, see Specifying ACL Inheritance. 
principal Element To specify the principal, its name, and access to a document use the principal element. The principal element is a child of the acl element. The following code shows examples of the principal element: <principal namespace="Default" case-yourdomain\username</principal> <principal namespace="Default" case-yourdomain\groupname</principal> A principal element can have the following attributes: scope access namespace case-sensitivity-type principal-type scope Attribute The scope attribute specifies the type of the principal. Valid values are: user group The scope attribute is required. access Attribute The access attribute specifies the principal’s permission to access the document. Valid values are: permit deny The access attribute is required. namespace Attribute By keeping ACLs in separate namespaces, the search appliance is able to ensure that access to secure documents is maintained unambiguously. Namespaces are crucial to security when a search user has multiple identities and the permissions for documents are composed of ACLs from separate content sources.. Specifying ACL Inheritance While ACLs can be found attached to documents, content systems allow for ACL information to be applied to groups of documents through inheritance. The search appliance is able to model a wide variety of security mechanisms by using the concept of ACL inheritance. For example, in a Microsoft Windows File System, by default, a document inherits permissions from its folder. Permissions can be applied to documents without breaking inheritance. More specific permissions override less specific permissions. In a Microsoft Windows Share, permissions can be applied to the share as a whole. All documents in the tree rooted at the shared folder implicitly inherit share permissions. Share permissions always override more specific permissions. In Microsoft SharePoint, content is organized in hierarchies of sites, document collections, and documents. Each node in the hierarchy inherits permissions from its parent, but if a DENY occurs anywhere in the inheritance chain, the resulting decision is DENY. ACL inheritance is specified by the following attributes of the acl element: inheritance-type inherit-from inheritance-type Attribute The inheritance-type attribute specifies how the permissions (PERMIT, DENY, INDETERMINATE) will be interpreted when the search appliance authorizes against parent and child ACLs and decides which takes precedence. Valid values are: parent-overrides--The permission of the parent ACL dominates the child ACL, except when the parent permission is INDETERMINATE. In this case, the child permission dominates. If both parent and child are INDETERMINATE, then the permission is INDETERMINATE. child-overrides--The permission of the child ACL dominates the parent ACL, except when the child permission is INDETERMINATE. In this case, the parent permission dominates. If both parent and child are INDETERMINATE, then the permission is INDETERMINATE. and-both-permit--The permission is PERMIT only if both the parent ACL and child ACL permissions are PERMIT. Otherwise, the permission is DENY. leaf-node--ACL that terminates the chain. inherit-from Attribute The inherit-from attribute specifies the URL from which the ACL inherits permissions. If this attribute is absent, the ACL is a top-level node. Approaches to Using the acl Element There are two approaches to using the acl element: If the acl element is the child of a group element, the url attribute is required. 
An acl element as the child of a group element can only be used in the following scenarios: - Updating the ACL of a record (independently of content) for a URL that was previously fed with attached ACLs. - Modeling entities, such as folders, that would not otherwise be represented in the system. If you want to send acl and record elements for the same URL in one feed - please send <acl> element as the child of a record element, not a group element. If the acl element is the child of a record element, the url attribute is illegal. In this approach, the ACL is immediately associated with the document. Example Feed with an acl Element The following code shows an example of a feed XML file with an acl element that inherits permissions. <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "gsafeed.dtd"> <gsafeed> <header> <datasource>TestUrlAcl</datasource> <feedtype>incremental</feedtype> </header> <group ...> <acl url=""> <!-- Other acl elements or group elements with records can appear in any order in this file --> <!-- There should be no <record> entries with the url in the feed for ensuring that ACL is applied and saved--> <record url=''...> Example Feed with an ACL Element as a child of a <record> element <?xml version='1.0' encoding='UTF-8'?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "gsafeed.dtd"> <gsafeed> <header> <datasource>TestUrlAcl</datasource> <feedtype>incremental</feedtype> </header> <group action="add"> <record mimetype="text/plain" authmethod="httpbasic" url="" action="add" last- <acl> <metadata> <meta name="google:action" content="add"/> </metadata> <content><test content></content> </record> </group> </gsafeed> Legacy Metadata Format (Deprecated) For compatibility with feeds developed before software release 7.0, the search appliance supports the legacy metadata format for specifying per-URL ACLs in feeds. The legacy approach is limited: it does not support namespaces or case sensitivity. However, the following meta names enable you to specify ACL inheritance in metadata format: google:aclinheritfrom google:aclinheritancetype The valid value for google:aclinheritfrom is a URL string. Google recommends against using the legacy format unless you have legacy feeds to maintain. Instead, Google recommends developing feeds using the approach described in Specifying Per-URL ACLs. A per-URL ACL can be defined either in the metadata portion of the feed, or in the document itself, but not in both places. Specifying Group and User Access in Metadata You can include a per-URL ACL in a feed by specifying a document, and the names of the groups or users that have access. The list of groups and users appears inside the record element for the document that you are feeding. To specify groups or users that have access to the restricted URL, define meta elements with name and content attributes. To specify a group, use the following attribute values: - For the nameattribute, the value must be google:aclgroups. - For the contentattribute, the value must be a single group name. To specify more than one group, use more than one meta tag, one group for each tag. A group name that you specify in a content attribute value must match the group name as it appears in the authentication mechanism (LDAP or GDATA database). 
For example, to specify engineering (“eng”) as the group that has access to the URL, use the following code: <meta name="google:aclgroups" content="eng"/> To specify a user, use the following attribute values: - For the nameattribute, the value must be google:aclusers. - For the contentattribute, the value must be a single user name. To specify more than one user, use more than one meta tag, one user for each tag. A user name that you specify in a content attribute value must match the user name as it appears in the authentication mechanism (LDAP or GDATA database). For example, to specify Joe, Maria, and Salim as the users that have access to the URL, use the following code: <meta name="google:aclusers" content="joe"/> <meta name="google:aclusers" content="maria"/> <meta name="google:aclusers" content="salim"/> If a content string ends in =owner, =peeker, =reader, or =writer, that suffix is stripped from the user name. Furthermore, if a content string ends in =peeker, that ACL entry is ignored. Specifying Denial of Access to Users and Groups The search appliance supports DENY ACLs. When a user or group is denied permission to view the URL, it does not appear in the search results. You can specify users and groups that are not permitted to view a document by using meta tags, as shown in the following examples. To specify denial of access, the value of the name attribute must be google:acldenyusers or google:acldenygroups. For example, to specify Joe as the user who is denied access to a document, use the following code: <meta name="google:acldenyusers" content="joe"/> To specify administration as the group that is denied access to a document, use the following code: <meta name="google:acldenygroups" content="administration"/> Generally, DENY takes precedence over PERMIT. The following logic determines authorization decisions for per-URL ACLs: - start with decision=INDETERMINATE - if the user is denied, return DENY - if the user is permitted, set decision to PERMIT - if any of the groups are denied, return DENY - if any of the groups are permitted, set decision to PERMIT - if decision is INDETERMINATE, set to DENY - return decision Example of a Legacy Per-URL ACL for a Feed The following example shows the legacy code for adding a per-URL ACL for in a feed: <?xml version=’1.0’ encoding=’UTF-8’?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "gsafeed.dtd"> <gsafeed> <header> <datasource>IAIncrementalFeedContent</datasource> <feedtype>incremental</feedtype> </header> <group> <record url=""> <metadata> <meta name="google:aclgroups" content="eng"/> <meta name="google:aclusers" content="joe"/> <meta name="google:aclusers" content="maria"/> <meta name="google:aclusers" content="salim"/> <meta name="google:acldenyusers" content="joe"/> ... </metadata> </record> </group> </gsafeed> Feeding Groups to the Search Appliance The search appliance can experience increased latency when establishing a user’s identity and the groups that it belongs to. You can dramatically reduce the latency for group resolution by periodically feeding groups information to the search appliance. When the groups information is on the search appliance, it is available in the security manager for resolving groups at authentication time. Consequently, the information works for all authorization mechanisms. Take note that the cumulative number of group members on the search appliance cannot exceed three million. 
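For reference, the per-URL ACL authorization order listed above under Specifying Denial of Access to Users and Groups can be sketched in Python. This is only an illustration of the documented decision order: the function name and dictionary keys are hypothetical, and the search appliance evaluates these rules internally from the google:aclusers, google:acldenyusers, google:aclgroups, and google:acldenygroups entries. The group memberships such a check consumes are exactly what the groups feed described in this section supplies to the security manager.

def authorize(user, user_groups, acl):
    """Illustrative sketch of the legacy per-URL ACL decision logic.

    'acl' is a hypothetical dict with the keys 'permit_users', 'deny_users',
    'permit_groups', and 'deny_groups', mirroring the google:acl* meta entries.
    """
    decision = "INDETERMINATE"              # start with decision=INDETERMINATE
    if user in acl["deny_users"]:           # if the user is denied, return DENY
        return "DENY"
    if user in acl["permit_users"]:         # if the user is permitted, set decision to PERMIT
        decision = "PERMIT"
    if any(g in acl["deny_groups"] for g in user_groups):
        return "DENY"                       # if any of the groups are denied, return DENY
    if any(g in acl["permit_groups"] for g in user_groups):
        decision = "PERMIT"                 # if any of the groups are permitted, set decision to PERMIT
    if decision == "INDETERMINATE":         # if decision is INDETERMINATE, set to DENY
        decision = "DENY"
    return decision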
To feed groups to the search appliance, start by: Designing an XML Groups Feed The XML groups feed contains information about principals (groups) and its members (groups or users). The XML must conform to the schema defined in the Groups Feed Document Type Definition. xmlgroups Element To specify all groups information, including memberships, principals, and members, use the xmlgroups element. membership Element The membership element must contain one principal element. It contains zero to one members elements. members Element The members element contains zero to many principal elements. principal Element To specify the principal, its name, and access to a document, use the principal element. The principal element is a child of the membership or members element. For any principal element that is a child of the membership element, the scope must be GROUP (users cannot have members) and should not include the case-sensitivity-type element. If you choose to include the case-sensitivity-type, it must be EVERYTHING_CASE_SENSITIVE because matching only happens against group members. The following code shows examples of the principal element: <principal namespace="Default" scope="GROUP"> abc.com/group1 </principal> A principal element can have the following attributes: scope namespace case-sensitivity-type principal-type scope Attribute The scope attribute specifies the type of the principal. Valid values are: USER GROUP The scope attribute is required. namespace Attribute By keeping principals in separate namespaces, the search appliance is able to ensure that access to secure documents is maintained unambiguously. Namespaces are crucial to security when a search user has multiple identities.. Example Feed with Groups The following code shows an example of a feed XML file with groups. <?xml version="1.0" encoding="UTF-8" ?> <xmlgroups> <membership source="others"> <principal scope="GROUP" namespace="Default" case-abc/group1</principal> <members> <principal scope="GROUP" namespace="Default" case-subgroup1</principal> <principal scope="USER" namespace="Default" case-user1</principal> </members> </membership> <membership source="others"> <principal scope="GROUP" namespace="Default" case-subgroup1</principal> <members> <principal scope="USER" namespace="Default" case-example/user2</principal> </members> </membership> </xmlgroups> Example Feed with Empty Groups The following code shows an example of a feed XML file with empty groups. <?xml version="1.0" encoding="ISO-8859-1"?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" ""> <xmlgroups> </xmlgroups> Groups Feed Document Type Definition <?xml version="1.0" encoding="UTF-8"?> <!ELEMENT xmlgroups (membership+)> <!ELEMENT membership (principal, members)> <!ELEMENT members (principal+)> <!ELEMENT principal (#PCDATA)> <!ATTLIST principal scope (USER|GROUP) #REQUIRED namespace CDATA "Default" case-sensitivity-type (EVERYTHING_CASE_SENSITIVE|EVERYTHING_CASE_INSENSITIVE) "EVERYTHING_CASE_SENSITIVE" principal-type (unqualified) #IMPLIED> Creating a Groups Feed Client A feed client is required for pushing groups information. You upload an XML feed using an HTTP POST to the feedergate server located on port 19900 of your search appliance. An XML feed must be less than 1 GB in size. If your feed is larger than 1GB, consider breaking the feed into smaller feeds that can be pushed more efficiently. 
On the search appliance, a feedergate handler, feedergate_groupsfeed, accepts the POST request, parses the XML into serialized data and stores it in the recordio file. The recordio file cannot exceed 1GB. If the data size in the request plus the existing recordio file size is greater than 1GB, the request will be rejected with “error: group db at capacity.” The feedergate server requires two input parameters from the POST operation: - groupsource: The name of the data source. Must be one of the following values: sharepoint, ggg, ldap, and others. - groupsfilename: The groups XML file you want to push to the search appliance. - feedtype: Specifies whether a feed is full or incremental. A full feed overwrites onboard groups. An incremental feed appends groups. The URL that you should use for the groups feed is: http://<APPLIANCE-HOSTNAME>:19900/xmlgroups The following example shows the command to push a groups feed: pushgroups_client.py --groupsource=others --feedtype=full --url="http://<APPLIANCE_HOSTNAME>:19900/xmlgroups" --groupsfilename="groups_feed.xml" feedtype=full). The following example shows a groups feed client written in Python. #!/usr/bin/env python2 """A helper script that pushes a groups xml file to the feeder.""" import getopt import mimetypes import sys import urllib2 def PrintMessage(): """Print help message for command usage.""" print """Usage: %s ARGS --groupsource: sharepoint, ggg, ldap or others --feedtype: incremental or full --url: groupsfeed url of the feedergate, e.g. --groupsfilename: The groups xml file you want to feed --help: output this message""" % sys.argv[0] def main(argv): """Process command line arguments and send feed to the webserver.""" try: opts, _ = getopt.getopt(argv[1:], None, ["help", "groupsource=", "url=", "feedtype=", "groupsfilename="]) except getopt.GetoptError: # print help information and exit: PrintMessage() sys.exit(2) groupsource = None url = None groupsfilename = None for opt, arg in opts: if opt == "--help": PrintMessage() sys.exit() if opt == "--groupsource": groupsource = arg if opt == "--url": url = arg if opt == "--feedtype": feedtype = arg if opt == "--groupsfilename": groupsfilename = arg params = [] if (url and feedtype and groupsfilename and groupsource in ("sharepoint", "ggg", "ldap", "others")): params.append(("groupsource", groupsource)) params.append(("feedtype", feedtype)) data = ("data", groupsfilename, open(groupsfilename, "r").read()) request_url = PostMultipart(url, params, (data,)) print urllib2.urlopen(request_url).read() else: PrintMessage() sys.exit(1) def PostMultipart(theurl, fields, files): """Create the POST request by encoding data and adding headers.""" content_type, body = EncodeMultipartFormdata(fields, files) headers = {} headers["Content-type"] = content_type headers["Content-length"] = str(len(body)) return urllib2.Request(theurl, body, headers) def EncodeMultipartFormdata(fields, files): """Create data in multipart/form-data encoding.""" boundary = "----------boundary_of_feed_data$" crlf = "\r\n" l = [] for (key, value) in fields: l.append("--" + boundary) l.append('Content-Disposition: form-data; name="%s"' % key) l.append("") l.append(value) for (key, filename, value) in files: l.append("--" + boundary) l.append('Content-Disposition: form-data; name="%s"; filename="%s"' % (key, filename)) l.append("Content-Type: %s" % GetContentType(filename)) l.append("") l.append(value) l.append("--" + boundary + "--") l.append("") body = crlf.join(l) content_type = "multipart/form-data; boundary=%s" % boundary 
return content_type, body def GetContentType(filename): """Determine the content-type of data file.""" return mimetypes.guess_type(filename)[0] or "application/octet-stream" if __name__ == "__main__": main(sys.argv) For more information about developing a feed client, see Pushing a Feed to the Google Search Appliance. Java Code In Java, use the DocIdPusher.pushGroupDefinitions() method to send groups to the Google Search Appliance. Using the Admin Console Feeds page The Content Sources > Groups page in the Admin Console enables you to download the groups DB file to the search appliance, view information about groups on the search appliance, or delete all the contents in the groups DB file. For more information, click Help > Content Sources > Groups in the Admin Console. Feeding Content from a Database To push records from a database into the search appliance’s index, you use a special content feed that is generated by the search appliance based on parameters that you set in the Admin Console. To set up a feed for database content, log into the Admin Console and choose Content Sources > Databases . You can find more information on how to define a database-driven data source in the online help that is available in the Admin Console. Records from a database cannot be served as secure content. Saving your XML Feed You should save a backup copy of your XML Feed in case you need to push it again. For example, if you perform a version update that requires you to rebuild the index, you must push all your feeds again to restore them to the search appliance. The search appliance does not archive copies of your feeds. Feed Limitations For information about feed limitations, see Specifications and Usage Limits. Pushing a Feed to the Google Search Appliance This section describes how to design a feed client. To design your own feed client, you should be familiar with these technologies: - HTTP--Hypertext Transfer Protocol () - XML--Extensible Markup Language () - A scripting language, such as Python If you don’t want to design your own feed client script, you can use one of the following methods to push your feed: - Google provides an example of a Python 2.x feed client script, pushfeed_client.py, that you can use to push an XML feed. (Google also provides a Python 3 version, pushfeed_client3.py) You can also use this script in a cronjob to automate feeds. - Using a Web Form Feed Client explains how to write a simple HTML form that allows a user to push an XML feed from a web page. Adapt the HTML for your use and add this page to any web server that has HTTP access to the search appliance. The IP address of the computer that hosts the feed client must be in the List of Trusted IP Addresses. In the Admin Console, go to Content Sources > Feeds, and scroll down to List of Trusted IP Addresses. Verify that the IP address for your feed client appears in this list. Designing a Feed Client You upload an XML feed using an HTTP POST to the feedergate server located on port 19900 of your search appliance. The search appliance also supports HTTPS access to the feedergate server through port 19902, enabling you to upload an XML feed file by using a secure connection. An XML feed must be less than" (unless its feedtypeis set to “metadata-and-url”). search appliance. Only the data source and feed type provided as POST input parameters are used. The URL that you should use is: http://<APPLIANCE-HOSTNAME>:19900/xmlfeed You should post the feed using enctype="multipart/form-data". 
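As a rough illustration, the following Python 2.x sketch posts a content feed XML file to the feedergate using multipart/form-data, in the same style as the groups feed client shown earlier. The form field names datasource, feedtype, and data are assumed to match the conventions of the sample pushfeed_client.py script; verify them against that script before relying on this sketch.

#!/usr/bin/env python2
"""Illustrative sketch (not the official client): push one feed XML file to the feedergate."""
import urllib2

FEEDERGATE_URL = "http://<APPLIANCE-HOSTNAME>:19900/xmlfeed"
BOUNDARY = "----------boundary_of_feed_data$"

def build_request(datasource, feedtype, xml_filename):
    """Build a multipart/form-data POST request carrying the feed file."""
    parts = []
    for name, value in [("datasource", datasource), ("feedtype", feedtype)]:
        parts += ["--" + BOUNDARY,
                  'Content-Disposition: form-data; name="%s"' % name,
                  "", value]
    # The file part is assumed to be named "data", as in the sample feed clients.
    parts += ["--" + BOUNDARY,
              'Content-Disposition: form-data; name="data"; filename="%s"' % xml_filename,
              "Content-Type: text/xml",
              "", open(xml_filename, "rb").read()]
    parts += ["--" + BOUNDARY + "--", ""]
    body = "\r\n".join(parts)
    headers = {"Content-type": "multipart/form-data; boundary=%s" % BOUNDARY,
               "Content-length": str(len(body))}
    return urllib2.Request(FEEDERGATE_URL, body, headers)

if __name__ == "__main__":
    # Push the quickstart sample feed and print the appliance's response text.
    print urllib2.urlopen(build_request("sample", "full", "sample_feed.xml")).read()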
Although the search appliance supports uploads using enctype="application/x-www-form-urlencoded", this encoding type is not recommended for large amounts of data. The feed client should URL encode the XML data submitted to the search appliance. Using a Web Form Feed Client Here is an example of a simple HTML form for pushing a feed to the search appliance. Because the web form requires user input, this method cannot be automated. To adapt this form for your search appliance, replace APPLIANCE-HOSTNAME with the fully qualified domain name of your search> How a Feed Client Pushes a Feed When pushing a feed, the feed client sends the POST data to a search appliance. A typical POST from a scripted feed client appears as follows:UTF-8%22%3F%3E%0A%3C%21DOCTYPE+gsafeed+SYSTEM+.. The response from the search appliance is as follows: HTTP/1.0 200 OK Content-Type: text/plain Date: Thu, 30 Apr 2009 after the feeder process runs. The feeder does not provide automatic notification of a feed error. To check for errors, you must log into the Admin Console and check the status on the Content Sources >. Turning Feed Contents Into Search Results. URL Patterns URLs specified in the feed will only be crawled if they pass through the patterns specified on the Content Sources > Web Crawl > Start and Block URLs page in the Admin Console. Patterns affect URLs in your feed as follows: - Do Not Follow Patterns--If a URL in the feed matches a pattern specified under Do Not Crawl URLs with the Following Patterns, the URL is removed from the index. - Follow Patterns--When this pattern is used, all URLs in the feed must match a pattern in this list. Any other URLs are removed from the index.. Trusted IP Lists To prevent unauthorized additions to your index, feeds are only accepted from machines that are included in the List of Trusted IP Addresses. To view the list of trusted IP addresses, log into the Admin Console and open the Content Sources > Feeds page. If your search appliance is on a trusted network, you can disable IP address verification by selecting Trust all IP addresses. Adding Feed Content. Removing Feed Content From the Index There are several ways of removing content from your index using a feed. The method used to delete content depends on the kind of feed that has ownership. For content feeds, remove content by performing one of these actions: - Push the URL as part of an incremental feed, using the “delete” action to remove the content. This is the fastest way to remove content. URLs will be deleted within about 30 minutes. - Remove the URL from the feed and perform a full feed. Because a full feed overwrites the earlier feed contents, any URLs that are omitted from the new full feed will be removed from the index. The content is deleted within about 30 minutes. - Remove the data source and all of its contents. To remove a data source, log into the Admin Console and open the Content Sources > Feeds page. Choose the data source that you want to remove and click Delete. The contents will be deleted within about 30 minutes. The Delete option removes the fed documents from the search appliance index. The feed is then marked Delete in the Admin Console. - After deleting a feed, you can remove the feed from the Admin Console Feed Status page by clicking Destroy. For web and metadata-and-URL feeds, remove content by performing one of these actions: - In the XML record for the document, set actionto delete. The action="delete"feature works for content, web, and metadata-and-URL feeds. 
- Remove the URL from the web server. The next time that the URL is crawled, the system will encounter a 404 status code and remove the content from the index. - Specify a pattern that removes the URL from the index. For example, add the URL to the Do Not Follow Patterns list. The URL is removed the next time that the feeder delete process runs. Time Required to Process a Feed The following factors can cause the feeder to be slow to add URLs to the index: - The feed is large. - The search appliance is currently using a lot of resources to crawl other documents and serve results. - Other feeds are pending. In general, the search appliance can process documents that are pushed as content feeds more quickly than it can crawl and index the same set of documents as a web feed. Feed Files Awaiting Processing To view a count of how many feed files remain for the search appliance to process into its index, add /getbacklogcount to a search appliance URL at port 19900. The count that this feature provides can be used to regulate the feed submission rate. The count also includes connector feed files. The syntax for /getbacklogcount is as follows: Changing the Display URL in Search Results You can change the display URL on search results by pushing a feed with the displayurl attribute set. Use this feature when you want to use one URL in the index and another for display to the user. For example, you might change the display URL if URL content is not in a web enabled server (and you need to specify a proxy server that uses doc IDs in a back-end content management system) or if you split a large file into segments and each segment is indexed with a separate URL and the display URL for each result points to the original file. The following example shows use of the displayurl attribute. <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" ""> <gsafeed> <header> <datasource>replace</datasource> <feedtype>incremental</feedtype> </header> <group> <record url="" displayurl="" action="add" mimetype="text/html" lock="true"> <content>Hello World - document data goes here!</content> </record> </group> </gsafeed> License Limits If your index already contains the maximum number of URLs, or your license limit has been exceeded, then the index is full. When the index is full, the system reduces the number of indexed documents as follows: - Documents are removed to bring the total number of documents to the license limit. - Documents with the lockattribute set to trueare deleted last. Increasing the Maximum Number of URLs to Crawl To increase the maximum number of URLs in your index, log into the Admin Console and choose Content Sources > Web Crawl > Host Load Schedule. Check the Maximum Number of URLs to Crawl. This number must be smaller than the license limit for your search appliance. To increase the license limit, contact Sales. Troubleshooting Here are some things to check if a URL from your feed does not appear in the index. To see a list of known and fixed issues, see the latest release notes for each version. Error Messages on the Feeds Status Page If the feeds status page shows “Failed in error” you can click the link to view the log file. ProcessFeed: parsing error This message means that your XML file could not be understood. The following are some possible causes of this error: - There is an error in the DOCTYPE line in your XML file. 
This line should be: <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN"""> - You have not escaped the ampersand or other reserved characters in a URL in your XML file. - You have included content that requires base64 encoding. See Providing Content in the Feed. If none of the above are the cause of the error, run xmllint against your XML file to check for errors in the XML. The xmllint program is included in Linux distributions as part of the libxml2 package. The following is an example that shows how you would use xmllint to test a feed named full-feed.xml. $ xmllint -noout -valid full-feed.xml; echo $? 0 The return code of zero indicates that the document is both valid and well-formed. If the xmllint command fails and displays the parsing error message, ensure that you have the correct DTD file, or you can remove the -valid flag from the xmllint command line so that the xmllint command doesn’t try to validate the XML file’s elements. For more information on the DTD, see Google Search Appliance Feed DTD. Feed Push is Not Successful Before a search appliance can start processing a feed, you need to successfully push a feed to port 19900. If the feed push is not successful, check the following: - List the IP address of the computer that hosts the feed client in the List of Trusted IP Addresses. For more information, see See Pushing a Feed to the Google Search Appliance. - Verify that the feed client connects to port 19900 on the correct IP address. - Verify that port 19900 is reachable on the search appliance by running tracepath applianceIP/19900from the Linux command line. - Check with your network team for firewall or connectivity issues to ensure access to port 19900. Fed Documents Aren’t Appearing in Search Results Some common reasons why the URLs in your feed might not be found in your search results include: - The crawler is still running. Wait a few hours and search again. For large document feeds containing multiple non-text documents, the search appliance can take several minutes to process all of the documents. You can check the status of a document feed by going to the Content Sources > Feeds page. You can also verify that the documents have been indexed by going to Index > Diagnostics > Index Diagnostics and browsing to the URL, or entering the URL in “ URLs starting with.” Documents that are fed into the search appliance can show up in Crawl Diagnostics up to 15 minutes before they are searchable in the index. - The URLs were removed by an exclusion pattern specified under Content Sources > Web Crawl > Start and Block URLs. See See URL Patterns. - The URLs were removed by a full feed that did not include them. See Removing Feed Content From the Index. - The URLs don’t match the pattern for the collection that you were searching. Check the patterns for your collection under Index > Collections. Make sure that the collection specified in the upper right hand corner of the Crawl Diagnostics page contains the URL that you are looking for. - The URLs are listed in multiple feeds. Another feed that contains this URL requested a deleteaction. - A metadata-and-URL feed was submitted with the feedtypeelement set to incrementalor full. Incremental can only be used on a content feed. If this is the case, the feed is treated as a content feed and not crawled. Once a URL is part of a content feed, the feed is not recrawled even if you later send a web or metadata feed. 
If you run into this issue, remove the URL from the URL pattern (or click the Delete link on the feeds page) and after the feed URLs have been deleted, put the URL patterns back, and send a proper metadata-and-url feed. - The documents were removed because your index is full. See License Limits. - The feed that you pushed was not pointing to a valid host. Verify that the feed has an FQDN (fully qualified domain name) in the host part of the URL. - More relevant documents are pushing the fed URL down in the list. You can search for a specific URL with the query info:[url]where [url]is the full URL to a document fed into the search appliance. Or use inurl:[path]where [path]is part of the URL to documents fed into the search appliance. - The fed document has failed. In this scenario, none of the external metadata fed by using a content feed or metadata-and-URL feed would get indexed. In the case of metadata-and-URL feeds, just the URL gets indexed without any other information. For additional details about the failure, click Index > Diagnostics > Index Diagnostics. - The URLs are on a protected server and cannot be indexed. See Including Protected Documents in Search Results. - The URLs are on a protected server and have been indexed, but you do not have the authorization to view them. Make sure that &access=ais somewhere in the query URL that you are sending to the search appliance. See Including Protected Documents in Search Results. - You did not complete the upgrade from a previous version and are still running in “Test mode” with the old Index. Review the Update Instructions for the current version of the software, and make sure that you have accepted the upgrade and completed the update process. Document Feeds Successfully But Then Fails A content feed reports success at the feedergate, but thereafter, reports the following document feed error: Failed in error documents included: 0 documents in error: 1 error details: Skipping the record, Line number: nn, Error: Element record content does not follow the DTD, Misplaced metadata This error occurs when a metadata element contains a content attribute with an empty string, for example: <meta name="Tags" content=""/> If the content attribute value is an empty string: - Remove the metatag from the metadataelement, or: - Set the value of the contentattribute to show that no value is assigned. Choose a value that is not used in the metadataelement, for example, _noname_: <meta name="Tags" content="_noname_"/> You can then use the inmeta search keyword to find the attribute value in the fed content, for example: inmeta:tags~_noname_ Fed Documents Aren’t Updated or Removed as Specified in the Feed XML the URL is referenced by a web feed and a content feed, the URL’s content is associated with the data source that crawled the URL last. - If the URL is referenced by more than one content feed, the URL’s content is associated with the data source that was responsible for the URL’s last update. - If the URL is referenced in the Admin Console’s list of Crawl URLs and a content feed, the URL’s content is associated with the content feed. The search appliance will not recrawl the URL until the content feed requests a change. To return the URL to its original status, delete the URL from the feed that originally pushed the document to the index. 
- If the URL has already been crawled by the search appliance, and is then referenced in a web feed, the search appliance immediately injects the URL into the queue to be recrawled as if it were a new, uncrawled URL. The URL’s Enterprise PageRank is not affected. However, the change interval is reset to the default until the crawl scheduler process next runs. Document Status is Stuck “In Progress” If a document feed gives a status of “In Progress” for more than one hour, this could mean that an internal error has occurred. Please contact Google to resolve this problem, or you can reset your index by going to Administration > Reset Index. Insufficient Disk Space Rejects Feeds If there is insufficient free disk space, the search appliance rejects feeds, and displays the following message in the feed response: Feed not accepted due to insufficient disk space. Contact Google Cloud Support. The HTTP return code is 200 OK, so a program sending a feed should check the message text. For more information on response messages, see How a Feed Client Pushes a Feed. Feed Client TCP Error ‘--’ before the argument. MIME syntax discussed in more detail here: Example Feeds Here are some examples that demonstrate how feeds are structured: - Web Feed - Web Feed with Metadata - Web Feed with Base64 Encoded Metadata - Full Content Feed - Incremental Content Feed - Python Implementation of Creating a base64 Encoded Content Feed Web Feed <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" ""> <gsafeed> <header> <datasource>web</datasource> <feedtype>incremental</feedtype> </header> <group> <record url="" mimetype="text/plain"></record> </group> </gsafeed> Web Feed with" lock="true"> <metadata> <meta name="Name" content="Jenny Wong"/> <meta name="Title" content="Metadata Developer"/> <meta name="Phone" content="x12345"/> <meta name="Floor" content="3"/> <meta name="PhotoURL" content=""/> <meta name="URL" content=""/> </metadata> </record> </group> </gsafeed> Web Feed with Base64 Encoded"> <metadata> <meta encoding="base64binary" name="cHJvamVjdF9uYW1l" content="Y2lyY2xlZ19yb2Nrcw=="/> </metadata> </record> </group> </gsafeed> Full Content Feed <> <head><title>namaste</title></head> <body> This is hello03 </body> </html> ]]></content> </record> <record url="" mimetype="text/html"> <content encoding="base64binary">Zm9vIGJhcgo</content> </record> </group> </gsafeed> Incremental Content Feed <?xml version="1.0" encoding="utf-8"?> <!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" ""> <gsafeed> <header> <datasource>hello</datasource> <feedtype>incremental</feedtype> </header> <group action="delete"> <record url=" 01"mimetype="text/plain"/> </group> <group> <record url="" mimetype="text/plain"> <content>UPDATED - This is hello02</content> </record> <record url="" mimetype="text/plain" action="delete"/> <record url="" mimetype="text/plain"> <content>UPDATED - This is hello04</content> </record> </group> </gsafeed> Python Implementation of Creating a base64 Encoded Content Feed The following create_base64_content_feeds.py script goes through all PDF files under MY_DIR and creates a content feed for each of them that is added to the base64_pdfs.xml file. This file can then be used to add the documents that are under MY_DIR to the index. 
import base64 import os MY_DIR = '/var/www/files/' MY_FILE = 'base64_pdfs.xml' def main(): files = os.listdir(MY_DIR) if os.path.exists(MY_FILE): os.unlink(MY_FILE) fh = open(MY_FILE, 'wb') fh.write('<?xml version="1.0" encoding="utf-8"?>\n') fh.write('<!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "">\n') fh.write('<gsafeed>\n') fh.write('<header>\n') fh.write('\t<datasource>pdfs</datasource>\n') fh.write('\t<feedtype>incremental</feedtype>\n') fh.write('</header>\n') fh.write('<group>\n') for my_file in files: if '.pdf' in my_file: encoded_data = base64.b64encode(open(MY_DIR + my_file, 'rb').read()) fh.write('<record url="googleconnector://localhost.localdomain/' + my_file + '" mimetype="application/pdf">\n') fh.write('<content encoding="base64binary">' + encoded_data + '</content>\n') fh.write('</record>') fh.write('</group>\n') fh.write('</gsafeed>\n') fh.close() print 'Writing to file: %s' % MY_FILE if __name__ == '__main__': main() Google Search Appliance Feed DTD The gsafeed.dtd file follows. You can view the DTD on your search appliance by browsing to the http://<APPLIANCE-HOSTNAME>/gsafeed.dtd URL. <?xml version="1.0" encoding="UTF-8"?> <!ELEMENT gsafeed (header, group+)> <!ELEMENT header (datasource, feedtype)> <!-- datasource name should match the regex [a-zA-Z_][a-zA-Z0-9_-]*, the first character must be a letter or underscore, the rest of the characters can be alphanumeric, dash, or underscore. --> <!ELEMENT datasource (#PCDATA)> <!-- feedtype must be either 'full', 'incremental', or 'metadata-and-url' --> <!ELEMENT feedtype (#PCDATA)> <!-- group element lets you group records together and specify a common action for them --> <!ELEMENT group ((acl|record)*)> <!-- record element can have attribute that overrides group's element--> <!ELEMENT record (acl?,metadata*,content*)> <!ELEMENT metadata (meta*)> <!ELEMENT meta EMPTY> <!ELEMENT content (#PCDATA)> <!-- acl element allows directly associating acls with a url --> <!ELEMENT acl (principal*)> <!ELEMENT principal (#PCDATA)> <!-- default is 'add' --> <!-- last-modified date as per RFC822 --> <!-- 'scoring' attribute is ignored for content feeds --> <!ATTLIST group action (add|delete) "add" pagerank CDATA #IMPLIED> <!ATTLIST record url CDATA #REQUIRED displayurl CDATA #IMPLIED action (add|delete) #IMPLIED mimetype CDATA #REQUIRED last-modified CDATA #IMPLIED lock (true|false) "false" authmethod (none|httpbasic|ntlm|httpsso|negotiate) #IMPLIED pagerank CDATA #IMPLIED crawl-immediately (true|false) "false" crawl-once (true|false) "false" scoring (content|web) #IMPLIED > <!ATTLIST metadata overwrite-acls (true|false) "true"> <!ATTLIST acl url CDATA #IMPLIED inheritance-type (child-overrides|parent-overrides|and-both-permit|leaf-node) "leaf-node" inherit-from CDATA #IMPLIED> <!ATTLIST principal scope (user|group) #REQUIRED access (permit|deny) #REQUIRED namespace CDATA "Default" case-sensitivity-type (everything-case-sensitive|everything-case-insensitive) "everything-case-sensitive" principal-type (unqualified) #IMPLIED> <!ATTLIST meta>
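If you prefer to validate a feed programmatically rather than with xmllint (see Troubleshooting above), a sketch along the following lines can be used. It assumes the third-party lxml package is installed and that gsafeed.dtd has been downloaded locally from the search appliance; it is not part of the official feed tooling.

#!/usr/bin/env python2
"""Illustrative sketch: validate a feed XML file against gsafeed.dtd using lxml."""
from lxml import etree  # third-party package; install separately

def validate_feed(xml_filename, dtd_filename="gsafeed.dtd"):
    """Return True if the feed parses and conforms to the DTD; otherwise print the errors."""
    dtd = etree.DTD(open(dtd_filename, "rb"))
    doc = etree.parse(xml_filename)
    if dtd.validate(doc):
        return True
    for error in dtd.error_log.filter_from_errors():
        print error  # each entry names the offending line and element
    return False

if __name__ == "__main__":
    validate_feed("sample_feed.xml")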
Now that a basic understanding of a Windows Forms application and the Windows Forms Designer have been established, it is time to create your first Windows Forms application. In the following sections, you will learn to create, compile and run a simple Hello World application. The first step in creating this simple application is invoking the Windows Application Wizard from the Visual Studio .NET New Project dialog as shown in Figures 15.3 and 15.4. When the New Project dialog window appears, select the Windows Application Wizard shown in Figure 15.4 and type in the name and location of the project that you want to create. After you've entered all the required information, click the OK button and watch Visual Studio .NET create your application, as shown in Figure 15.5. You will notice that the Solution Explorer contains three files. One file (App.ico) is the application icon. This file contains the icon image that will be used in your application. Another file (AssemblyInfo.cs) contains all information about the assembly, such as signing and general assembly and version information. The final file (Form1.cs) contains the Web Form that is the basis of our application. By default, it is a blank form that has the name Form1. Now that the basic application has been created, examine the Solution Explorer. If you want to change any of the basic information of this project, including the name of the icon file to be associated with it, you can invoke the application property pages by right-clicking on the project file in the Solution Explorer and selecting the Properties menu item. From these pages, you can change properties in Tables 15.1 and 15.2. Property Page Description General This page contains information such as the type of application, assembly name, and default namespace. Designer Defaults This page is used in Web Forms development. References Path This page lists the directories used for the assemblies that were added with the Add Reference command. Note that this is not stored with the project, but is stored in the project name.user file. Build Events This page specifies events that should take place before and after a build. You can also specify whether or not a post-build event should always take place, or only upon a successful build or when the build updates the project output. Note that this is not available to web applications. Build This page contains properties for setting code generation, errors and warnings, and outputs. Debugging This page specifies debugging specific information such as debuggers, start actions, and start options. Advanced This page contains settings for general configuration properties, such as whether the compiler should perform an incremental build, base address, file alignment, and whether to use the mscorlib assembly. Now that all the files that will be used in the project have been identified, the form must be loaded in the Design View window. To do that, simply right-click on the file in the Solution Explorer, and select the View Designer menu option. TIP Instead of right-clicking on the file and selecting the View Designer menu option to view the form in Design mode, you can simply double-click on the file and the Design View window will appear. Next, drag and drop a Label control from the Windows Forms tab in the toolbox to the form. After the Label control has been placed on the form (as shown in Figure 15.7), change the control's Text property to Welcome to Windows Forms Development!. 
While you are in the Properties window, change the font size from its default value to a larger font size of 24. You will notice that the label now extends past the end of the form. That's okay; resize the form by clicking the mouse cursor on the lower-right side of the form and dragging to the right until the entire Label control can be seen (see Figure 15.8).

After you have completed the visual design of the Hello World application, you must compile the application before it can be run. The compilation of a program can be accomplished in a few different ways: You can compile the application with the keystroke Ctrl+Shift+B (assuming that you are using the default keystroke mapping); you can select the Build, Build <Project Name> menu option; or you can click the Build Solution toolbar button (by default, this button does not appear on the toolbar). Each of these techniques accomplishes the same thing; use whichever one seems fastest to you. Compile the program. Figure 15.9 shows the output of the compiler when it has successfully completed its task.

You have a few different options to run the application: You can press the F5 key; you can select the Debug, Start option from the main menu; or you can click the Start button on the toolbar. Using the F5 key seems faster than using the other options. Instead of performing two different steps (compiling and then running the application), you can just click the Start button. If the application must be rebuilt, Visual Studio .NET will automatically perform this process. If the application does not require a rebuild, Visual Studio .NET simply starts it. After the application has been started, you can see your welcome message, as shown in Figure 15.10.

The welcome message is a great start, but it is not very useful. In fact, displaying the message did not require any coding, so the example was perhaps too simple. In Chapter 10, "Events and Delegates," you learned what an event is and why it is used. By default, a Button control has several events ready for your use (see Figure 15.11). To add an event handler to any of the provided events, simply double-click the space next to the event name. The Forms Designer will automatically add the event handler, with a default name, in the Code View window and place your cursor inside the event handler method. All you have to do is add the code that you want to handle the event.

To better illustrate this, add a Button control to the Hello World application form, as shown in Figure 15.12. After the button has been added, add an event handler to the Button.Click event. When that has been accomplished, the Forms Designer should place your cursor in a method that resembles the following code snippet: private void button1_Click(object sender, System.EventArgs e) { } When that has been achieved, add the following line of code to this event handler: MessageBox.Show("Button Clicked!"); The event handler should now look something like this: private void button1_Click(object sender, System.EventArgs e) { MessageBox.Show("Button Clicked!"); } When the application is run, you will see the same welcome message with the addition of one Button control (see Figure 15.13). If you click the button, you will see a MessageBox appear with the message Button Clicked!. Listing 15.1 contains all the code necessary to create the Hello World application. 
using System;
using System.Drawing;
using System.Collections;
using System.ComponentModel;
using System.Windows.Forms;
using System.Data;

namespace HelloWorld
{
    /// <summary>
    /// Summary description for Form1.
    /// </summary>
    public class Form1 : System.Windows.Forms.Form
    {
        private System.Windows.Forms.Label label1;
        private System.Windows.Forms.Button button1;
        /// <summary>
        /// Required designer variable.
        /// </summary>
        private System.ComponentModel.Container components = null;

        public Form1()
        {
            //
            // Required for Windows Form Designer support
            //
            InitializeComponent();
        }

        /// <summary>
        /// Clean up any resources being used.
        /// </summary>
        protected override void Dispose(bool disposing)
        {
            if (disposing)
            {
                if (components != null)
                {
                    components.Dispose();
                }
            }
            base.Dispose(disposing);
        }

        #region Windows Form Designer generated code
        /// <summary>
        /// Required method for Designer support - do not modify
        /// the contents of this method with the code editor.
        /// </summary>
        private void InitializeComponent()
        {
            this.label1 = new System.Windows.Forms.Label();
            this.button1 = new System.Windows.Forms.Button();
            this.SuspendLayout();
            //
            // label1
            //
            this.label1.Font = new System.Drawing.Font("Microsoft Sans Serif", 24F,
                System.Drawing.FontStyle.Regular, System.Drawing.GraphicsUnit.Point,
                ((System.Byte)(0)));
            this.label1.Location = new System.Drawing.Point(16, 24);
            this.label1.Name = "label1";
            this.label1.Size = new System.Drawing.Size(656, 40);
            this.label1.TabIndex = 0;
            this.label1.Text = "Welcome to Windows Forms Development!";
            //
            // button1
            //
            this.button1.Location = new System.Drawing.Point(296, 80);
            this.button1.Name = "button1";
            this.button1.TabIndex = 1;
            this.button1.Text = "Click Me!";
            this.button1.Click += new System.EventHandler(this.button1_Click);
            //
            // Form1
            //
            this.AutoScaleBaseSize = new System.Drawing.Size(5, 13);
            this.ClientSize = new System.Drawing.Size(656, 126);
            this.Controls.Add(this.button1);
            this.Controls.Add(this.label1);
            this.Name = "Form1";
            this.Text = "Form1";
            this.ResumeLayout(false);
        }
        #endregion

        /// <summary>
        /// The main entry point for the application.
        /// </summary>
        [STAThread]
        static void Main()
        {
            Application.Run(new Form1());
        }

        private void button1_Click(object sender, System.EventArgs e)
        {
            MessageBox.Show("Button Clicked!");
        }
    }
}
https://flylib.com/books/en/1.238.1.112/1/
CC-MAIN-2022-05
refinedweb
1,406
59.8
I'm trying to do some graphics with the command "ScreenRes". I know how to do it with "SCREEN", but since "ScreenRes" provides better possibilities, I was trying to familiarize myself with it. In this program, I'm printing the "hello world" at the graphics screen:

Code:

ScreenRes 320, 200
Print "Hello world!!"
Sleep

So far, so good. Then I tried to do the same thing, going full-screen:

Code:

#include "fbgfx.bi"
#if __FB_LANG__ = "fb"
Using FB '' Screen mode flags are in the FB namespace in lang FB
#endif
ScreenRes 320, 200, (GFX_FULLSCREEN)
Print "Hello world!!"
Sleep

Apparently, it didn't work! Ehm... why??? What am I doing wrong? :-) TIA! A.

EDIT: I tried also with bigger resolutions (ones that SCREENLIST says are supported, e.g. 1024x768). Nothing changes.

Sidequestion: I suspect it's related, although I'm not sure. The manual says that GfxLib defaults to OpenGL, but can alternatively use X11 or XBDev (on Linux). Supposedly with ScreenControl, but I don't quite understand the procedure! :-) Does anyone know how this works?
https://www.freebasic.net/forum/viewtopic.php?p=240730
CC-MAIN-2019-13
refinedweb
175
67.25
/*- * .c 8.2 (Berkeley) 1/2/94 */ #include <sys/cdefs.h> __FBSDID("$FreeBSD: src/usr.bin/make/dir.c,v 1.52 2005/03/23 12:56:15 harti Exp $"); /*- * dir.c -- * Directory searching using wildcards and/or normal names... * Used both for source wildcarding in the Makefile and for finding * implicit sources. * * The interface for this module is: * Dir_Init Initialize the module. * * Dir_HasWildcards Returns TRUE if the name given it needs to * be wildcard-expanded. * * Path_Expand Given a pattern and a path, return a Lst of names * which match the pattern on the search path. * * Path. * * Path <sys/types.h> #include <sys/stat.h> #include <dirent.h> #include <err.h> #include <stdio.h> #include <stdlib.h> #include <string.h> #include <unistd.h> #include "arch.h" #include "dir.h" #include "globals.h" #include "GNode.h" #include "hash.h" #include "lst.h" #include "make.h" #include "str.h" #include "targ.h" #include "util.h" /* * A search path consists of a list of Dir structures. A Dir list. Dir, Path. */ typedef struct Dir { char *name; /* Name of directory */ int refCount; /* No. of paths with this directory */ int hits; /* No. of times a file has been found here */ Hash_Table files; /* Hash table of files in directory */ TAILQ_ENTRY(Dir) link; /* allDirs link */ } Dir; /* * A path is a list of pointers to directories. These directories are * reference counted so a directory can be on more than one path. */ struct PathElement { struct Dir *dir; /* pointer to the directory */ TAILQ_ENTRY(PathElement) link; /* path link */ }; /* main search path */ struct Path dirSearchPath = TAILQ_HEAD_INITIALIZER(dirSearchPath); /* the list of all open directories */ static TAILQ_HEAD(, Dir) openDirectories = TAILQ_HEAD_INITIALIZER(openDirectories); /* * Variables for gathering statistics on the efficiency of the hashing * mechanism. */ static int hits; /* Found in directory cache */ static int misses; /* Sad, but not evil misses */ static int nearmisses; /* Found under search path */ static int bigmisses; /* Sought by itself */ static Dir *dot; /* contents of current directory */ /* Results of doing a last-resort stat in Path Hash_Table mtimes; /*- *----------------------------------------------------------------------- * Dir_Init -- * initialize things for this module * * Results: * none * * Side Effects: * none *----------------------------------------------------------------------- */ void Dir_Init(void) { Hash_InitTable(&mtimes, 0); } /*- *----------------------------------------------------------------------- * Dir_InitDot -- * initialize the "." directory * * Results: * none * * Side Effects: * some directories may be opened. *----------------------------------------------------------------------- */ void Dir_InitDot(void) { dot = Path_AddDir(NULL, "."); if (dot == NULL) err(1, "cannot open current directory"); /* * We always need to have dot around, so we increment its * reference count to make sure it's not destroyed. */ dot->refCount += 1; } /*- *----------------------------------------------------------------------- * Dir_HasWildcards -- * See if the given name has any wildcard characters in it. 
* * Results: * returns TRUE if the word should be expanded, FALSE otherwise * * Side Effects: * none *----------------------------------------------------------------------- */ Boolean Dir_HasWildcards(const char *name) { const char *cp; int wild = 0, brace = 0, bracket = 0; for (cp = name; *cp; cp++) { switch (*cp) { case '{': brace++; wild = 1; break; case '}': brace--; break; case '[': bracket++; wild = 1; break; case ']': bracket--; break; case '?': case '*': wild = 1; break; default: break; } } return (wild && bracket == 0 && brace == 0); } /*- *----------------------------------------------------------------------- * DirMatchFiles -- * Given a pattern and a Dir(const char *pattern, const Dir *p, Lst *expansions) { Hash_Search search; /* Index into the directory's table */ Hash_Entry *entry; /* Current entry in the table */ Boolean isDot; /* TRUE if the directory being searched is . */ isDot = (*p->name == '.' && p->name[1] == '\0'); for (entry = Hash_EnumFirst(&p->files, &search);. The * given arguments are the entire word to expand, the first curly * brace in the word, the search path, and the list to store the * expansions in. * * Results: * None. * * Side Effects: * The given list is filled with the expansions... * *----------------------------------------------------------------------- */ static void DirExpandCurly(const char *word, const char *brace, struct Path *path, Lst *expansions) { const char *end; /* Character after the closing brace */ const char *cp; /* Current position in brace clause */ const '[': Path_Expand(file, path, expansions); goto next; default: break; } } if (*cp2 == '\0') { /* * Hit the end w/o finding any wildcards, so stick * the expansion on the end of the list. */ Lst_AtEnd(expansions, file); } else { next: free(file); } start = cp + 1; } } /*- *----------------------------------------------------------------------- * DirExpandInt -- * Internal expand routine. Passes through the directories in the * path one by one, calling DirMatchFiles for each. NOTE: This still * doesn't handle patterns in directories... Works given a word to * expand, a path to look in, and a list to store expansions in. * * Results: * None. * * Side Effects: * Things are added to the expansions list. * *----------------------------------------------------------------------- */ static void DirExpandInt(const char *word, const struct Path *path, Lst *expansions) { struct PathElement *pe; TAILQ_FOREACH(pe, path, link) DirMatchFiles(word, pe->dir, expansions); } /*- *----------------------------------------------------------------------- * Dir_Expand -- * Expand the given word into a list of words by globbing it looking * in the directories on the given search path. * * Results: * A list of words consisting of the files which exist along the search * path matching the given pattern is placed in expansions. * * Side Effects: * Directories may be opened. Who knows? *----------------------------------------------------------------------- */ void Path_Expand(char *word, struct Path *path, Lst *expansions) { LstNode *ln; char *cp; DEBUGF(DIR, ("expanding \"%s\"...", word)); cp = strchr(word, '{'); if (cp != NULL) DirExpandCurly(word, cp, path, expansions); else { cp = strchr(word, '/'); if (cp != NULL) { /* * The thing has a directory component -- find the * first wildcard in the string. */ for (cp = word; *cp != '\0'; = Path_FindFile(word, path); cp[1] = sc; /* * dirpath is null if can't find the * leading component * XXX: Path_FindFile won't find internal * components. i.e. 
if the path contains * ../Etc/Object and we're looking for * Etc, * it won't be found. Ah well. * Probably not important. */ if (dirpath != NULL) { char *dp = &dirpath[strlen(dirpath) - 1]; struct Path tp = TAILQ_HEAD_INITIALIZER(tp); if (*dp == '/') *dp = '\0'; Path_AddDir(&tp, dirpath); DirExpandInt(cp + 1, &tp, expansions); Path_Clear(&tp); } }(ln, expansions) DEBUGF(DIR, ("%s ", (const char *)Lst_Datum(ln))); DEBUGF(DIR, ("\n")); } } /** * Path * Path_FindFile(char *name, struct Path *path) { char *p1; /* pointer into p->name */ char *p2; /* pointer into name */ char *file; /* the current filename to check */ const struct PathElement *pe; /* current path member */ char *cp; /* final component of the name */ != NULL) { hasSlash = TRUE; cp += 1; } else { hasSlash = FALSE; cp = name; } DEBUGF(DIR, () != NULL)) { DEBUGF(DIR, ("in '.'\n")); hits += 1; dot->hits += 1; return (estrdup(name)); } /* *... */ TAILQ_FOREACH(pe, path, link) { DEBUGF(DIR, ("%s...", pe->dir->name)); if (Hash_FindEntry(&pe->dir->files, cp) != NULL) { DEBUGF(DIR, ( = pe->dir->name + strlen(pe->dir->name) - 1; p2 = cp - 2; while (p2 >= name && p1 >= pe->dir->name && *p1 == *p2) { p1 -= 1; p2 -= 1; } if (p2 >= name || (p1 >= pe->dir->name && *p1 != '/')) { DEBUGF(DIR, ("component mismatch -- " "continuing...")); continue; } } file = str_concat(pe->dir->name, cp, STR_ADDSLASH); DEBUGF(DIR, ("returning %s\n", file)); pe->dir->hits += 1; hits += 1; return (file); } else if (hasSlash) { /* * If the file has a leading path component and that * component exactly matches the entire name of the * current search directory, we assume the file * doesn't exist and return NULL. */ for (p1 = pe->dir->name, p2 = name; *p1 && *p1 == *p2; p1++, p2++) continue; if (*p1 == '\0' && p2 == cp - 1) { if (*cp == '\0' || ISDOT(cp) || ISDOTDOT(cp)) { DEBUGF(DIR, ("returning %s\n", name)); return (estrdup(name)); } else { DEBUGF(DIR, ("must be here but isn't --" " returning NULL\n")); return ) { DEBUGF(DIR, ("failed.\n")); misses += 1; return (NULL); } if (*name != '/') { Boolean checkedDot = FALSE; DEBUGF(DIR, ("failed. Trying subdirectories...")); TAILQ_FOREACH(pe, path, link) { if (pe->dir != dot) { file = str_concat(pe->dir->name, name, STR_ADDSLASH); } else { /* * Checking in dot -- DON'T put a leading ./ * on the thing. */ file = estrdup(name); checkedDot = TRUE; } DEBUGF(DIR, ("checking %s...", file)); if (stat(file, &stb) == 0) { DEBUGF(DIR, ("got it.\n")); /* * We've found another directory to search. We * know there's a slash in 'file' because we put * one there. We nuke it after finding it and * call Path'; Path_AddDir(path, file); *cp = '/'; /* * Save the modification time so if * it's needed, we don't have to fetch it again. */ DEBUGF(DIR, ("Caching %s for %s\n", Targ_FmtTime(stb.st_mtime), file)); entry = Hash_CreateEntry(&mtimes, file, (Boolean *)NULL); Hash_SetValue(entry, (void *)(long)stb.st_mtime); nearmisses += 1; return (file); } else { free(file); } } DEBUGF(DIR, ("failed. ")); if (checkedDot) { /* * Already checked by the given name, since . was in * the path, so no point in proceeding... 
*/ DEBUGF(DIR, ('; Path_AddDir(path, name); cp[-1] = '/'; bigmisses += 1; pe = TAILQ_LAST(path, Path); if (pe == NULL) return (NULL); if (Hash_FindEntry(&pe->dir->files, cp) != NULL) { return (estrdup(name)); return (NULL); #else /* !notdef */ DEBUGF(DIR, ("Looking for \"%s\"...", name)); bigmisses += 1; entry = Hash_FindEntry(&mtimes, name); if (entry != NULL) { DEBUGF(DIR, ("got it (in mtime cache)\n")); return (estrdup(name)); } else if (stat (name, &stb) == 0) { entry = Hash_CreateEntry(&mtimes, name, (Boolean *)NULL); DEBUGF(DIR, ("Caching %s for %s\n", Targ_FmtTime(stb.st_mtime), name)); Hash_SetValue(entry, (void *)(long)stb.st_mtime); return (estrdup(name)); } else { DEBUGF(DIR, ("failed. Returning NULL\n")); return (GNode *gn) { char *fullName; /* the full pathname of name */ struct stat stb; /* buffer for finding the mod time */ Hash_Entry *entry; if (gn->type & OP_ARCHV) return (Arch_MTime(gn)); else if (gn->path == NULL) fullName = Path_FindFile(gn->name, &dirSearchPath); else fullName = gn->path; if (fullName == NULL) fullName = estrdup(gn->name); entry = Hash_FindEntry(&mtimes, fullName); if (entry != NULL) { /* * Only do this once -- the second time folks are checking to * see if the file was actually updated, so we need to * actually go to the filesystem. */ DEBUGF(DIR, (); } /*- *----------------------------------------------------------------------- * Path_AddDir -- * Add the given name to the end of the given path. * * Results: * none * * Side Effects: * A structure is added to the list and the directory is * read and hashed. *----------------------------------------------------------------------- */ struct Dir * Path_AddDir(struct Path *path, const char *name) { Dir *d; /* pointer to new Path structure */ DIR *dir; /* for reading directory */ struct PathElement *pe; struct dirent *dp; /* entry in directory */ /* check whether we know this directory */ TAILQ_FOREACH(d, &openDirectories, link) { if (strcmp(d->name, name) == 0) { /* Found it. */ if (path == NULL) return (d); /* Check whether its already on the path. */ TAILQ_FOREACH(pe, path, link) { if (pe->dir == d) return (d); } /* Add it to the path */ d->refCount += 1; pe = emalloc(sizeof(*pe)); pe->dir = d; TAILQ_INSERT_TAIL(path, pe, link); return (d); } } DEBUGF(DIR, ("Caching %s...", name)); if ((dir = opendir(name)) == NULL) { DEBUGF(DIR, (" cannot open\n")); return (NULL); } d = emalloc(sizeof(*d)); d->name = estrdup(name); d->hits = 0; d->refCount = 1; Hash_InitTable(&d->files, -1); while ((dp = readdir(dir)) != */ /* Skip the '.' and '..' entries by checking * for them specifically instead of assuming * readdir() reuturns them in that order when * first going through a directory. This is * needed for XFS over NFS filesystems since * SGI does not guarantee that these are the * first two entries returned from readdir(). */ if (ISDOT(dp->d_name) || ISDOTDOT(dp->d_name)) continue; Hash_CreateEntry(&d->files, dp->d_name, (Boolean *)NULL); } closedir(dir); if (path != NULL) { /* Add it to the path */ d->refCount += 1; pe = emalloc(sizeof(*pe)); pe->dir = d; TAILQ_INSERT_TAIL(path, pe, link); } /* Add to list of all directories */ TAILQ_INSERT_TAIL(&openDirectories, d, link); DEBUGF(DIR, ("done\n")); return (d); } /** * Path_Duplicate * Duplicate a path. Ups the reference count for the directories. 
*/ void Path_Duplicate(struct Path *dst, const struct Path *src) { struct PathElement *ped, *pes; TAILQ_FOREACH(pes, src, link) { ped = emalloc(sizeof(*ped)); ped->dir = pes->dir; ped->dir->refCount++; TAILQ_INSERT_TAIL(dst, ped, link); } } /** * Path. */ char * Path_MakeFlags(const char *flag, const struct Path *path) { char *str; /* the string which will be returned */ char *tstr; /* the current directory preceded by 'flag' */ char *nstr; const struct PathElement *pe; str = estrdup(""); TAILQ_FOREACH(pe, path, link) { tstr = str_concat(flag, pe->dir->name, 0); nstr = str_concat(str, tstr, STR_ADDSPACE); free(str); free(tstr); str = nstr; } return (str); } /** * Path_Clear * * Destroy a path. This decrements the reference counts of all * directories of this path and, if a reference count goes 0, * destroys the directory object. */ void Path_Clear(struct Path *path) { struct PathElement *pe; while ((pe = TAILQ_FIRST(path)) != NULL) { pe->dir->refCount--; TAILQ_REMOVE(path, pe, link); if (pe->dir->refCount == 0) { TAILQ_REMOVE(&openDirectories, pe->dir, link); Hash_DeleteTable(&pe->dir->files); free(pe->dir->name); free(pe->dir); } free(pe); } } /** * Path_Concat * * Concatenate two paths, adding the second to the end of the first. * Make sure to avoid duplicates. * * Side Effects: * Reference counts for added dirs are upped. */ void Path_Concat(struct Path *path1, const struct Path *path2) { struct PathElement *p1, *p2; TAILQ_FOREACH(p2, path2, link) { TAILQ_FOREACH(p1, path1, link) { if (p1->dir == p2->dir) break; } if (p1 == NULL) { p1 = emalloc(sizeof(*p1)); p1->dir = p2->dir; p1->dir->refCount++; TAILQ_INSERT_TAIL(path1, p1, link); } } } /********** DEBUG INFO **********/ void Dir_PrintDirectories(void) { const Dir "); TAILQ_FOREACH(d, &openDirectories, link) printf("# %-20s %10d\t%4d\n", d->name, d->refCount, d->hits); } void Path_Print(const struct Path *path) { const struct PathElement *p; TAILQ_FOREACH(p, path, link) printf("%s ", p->dir->name); }
http://opensource.apple.com/source/bsdmake/bsdmake-23/dir.c
CC-MAIN-2015-27
refinedweb
2,022
64.41
Opened 6 years ago
Closed 6 years ago
Last modified 6 years ago

#20291 closed New feature (duplicate)

Add method to reload `AppCache`

Description

Some tests require a test application, e.g. proxy_model_inheritance. Adding a test application is simple, but removing it again is not so simple. I propose adding a new method which would clean the caches and repopulate them.

def reload(self):
    # Clean cache
    self._get_models_cache.clear()
    self.handled.clear()
    # Mark cache is empty
    self.loaded = False
    # Populate cache again
    self._populate()

Change History (4)

comment:1 Changed 6 years ago by
I'm deeply skeptical that this'll actually work. The real answer here is the app-refactor work, which is still ongoing; this is effectively a duplicate of that.

comment:2 Changed 6 years ago by
Is there a ticket for that? I can't find any relevant one.

comment:3 Changed 6 years ago by

comment:4 Changed 6 years ago by
Thanks
https://code.djangoproject.com/ticket/20291
CC-MAIN-2018-51
refinedweb
161
58.08
Chapter Overview

Geometry is actually a common core like all the others from the last chapter, too. However, the geometry core is of course the most important and most complicated one, and there are some things you should know about it. That is why a whole chapter is dedicated to this special core. Before we start generating geometry like crazy, we need to know something about the concepts. At the very beginning the implementation might look a bit uncomfortable, but please keep in mind that the geometry class was designed to provide a maximum of flexibility while still being highly performant.

One big advantage of the OpenSG geometry is its great flexibility. It is possible to store different primitive types (triangles, quads etc.) in one single core; there is no need to create separate cores for each primitive type. Even lines and points can be used in the same core along with triangles and others. All data describing the geometry is stored in separate arrays. Positions, colors, normals as well as texture coordinates are stored in their own osg::MField. OpenGL is capable of processing different formats of data, because some perform better under certain circumstances, or only a part of the data is needed. OpenSG features the same data formats by providing a lot of different classes which are luckily very similar to use and are all derived from osg::GeoProperty. Prominent examples for geometry properties are osg::GeoPositions3f or osg::GeoNormals3f. There are a lot of other data types, of course; just have a look at the osg::GeoProperty description page. All these geometry property classes are simple STL vectors slightly enhanced with some features we already know, like multi thread safety.

Often one vertex is used by more than just one primitive. On a uniform grid, for example, most vertices are used by four quads. OpenSG can take advantage of this by indexing the geometry. Instead of every primitive keeping a separate copy of its vertices, it stores integer indices which point to shared vertices. In this way it is possible to reuse data with a minimum of additional effort. It is even possible to use more than one index for different properties. Jump to Indexing for a detailed overview.

First of all we are going to build a geometry node with the most important features from the bottom up. A good example is the simulation of water, as this covers many problems you might encounter when creating your own geometry. This water tutorial will be developed throughout the whole chapter. Let us think about what we will actually need and what we are going to do in detail: We simulate water by using a uniform grid with N * N points, with N some integer constant. As these points are equidistant we only need to store the height value (the value that is going to be changed during the simulation) and one global width and height, as well as some global origin where the grid is going to be placed. There are a lot of algorithms which try to simulate the movement of water more or less adequately or quickly, but as we are more concerned with how to do things in OpenSG, I propose that we just take a very easy 'formula' to calculate the height values. Of course, if you are interested, you may replace the formula with any other. Now take our framework again as a starting point, then add some global variables and a new include file. 
#include <OpenSG/OSGGeometry.h> // this will specify the resolution of the mesh #define N 100 //the two dimensional array that will store all height values Real32 wMesh[N][N]; //the origin of the water mesh Pnt3f wOrigin = Pnt3f(0,0,0); //width and length of the mesh UInt16 width = 100; UInt16 length = 100; Insert the code right at the beginning of the createScenegraph() function which should still be empty at this point. Before we start creating the geometry we should first initialize the wMesh array to avoid corrupt data when building the scenegraph. For now, we simply set all height values to zero. for (int i = 0; i < N; i++) for (int j = 0; j < N; j++) wMesh[i][j] = 0; Now we can begin to build the geometry step by step. The first thing to do is to define the type of primitives we want to use. Quads would be sufficient for us. However, as mentioned before, it is possible to use more than one primitive. That will be discussed here : Primitive Types. // GeoPTypes will define the types of primitives to be used GeoPTypesPtr type = GeoPTypesUI8::create(); beginEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask); // we want to use quads ONLY type->addValue(GL_QUADS); endEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask); We just told OpenSG that this geometry core we are about to create will consists of only one single type of object: a quad. But of course this is not restricted to a single quad. Just watch the next step. Now we have to tell OpenSG how long (i.e. how many vertices) the primitives are going to be. The length of a single quad is reasonably four, but we want more than one quad, of course, so we multiply four by the number of quads. With N*N vertices we have (N-1)*(N-1) quads. // GeoPLength will define the number of vertices of // the used primitives GeoPLengthsPtr length = GeoPLengthsUI32::create(); beginEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask); // the length of a single quad is four ;-) length->addValue((N-1)*(N-1)*4); endEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask); We have to provide as many different length values as we have provided types in the previous step. As we only added one quad type we need to specify one single length. With N=100 the length will be 39204! Well, of course this does not mean we are creating a quad with so many vertices! OpenSG is smart enought to know that a quad needs four vertices and thus OpenSG was told to store 39204/4=9801 quads as it finishes creation of a quad after four vertices have been passed and begins with the next one. Now we will provide the positions of our vertices by using the data of the 'wMesh' array we initialized previously. // GeoPositions3f stores the positions of all vertices used in // this specific geometry core GeoPositions3fPtr pos = GeoPositions3f::create(); beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask); // here they all come for (int x = 0; x < N; x++) for (int z = 0; z < N; z++) pos->addValue(Pnt3f(x, wMesh[x][z], z)); endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask); You might question yourself if this is actually useful what I am doing here. It looks like that the width and length of the mesh we create corresponds to the resolution we choose, that is the higher the resolution the greater the mesh is. Well, that is correct. After creating the complete geometry core we are going to scale that whole thing to the correct size provided by the global variables [The Author: I actually haven't found the time to do that - so this may follow in the near future]. 
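Since the scaling of the mesh to the global width and length is left open above, here is one possible way to do it once the geometry node exists. This is only a sketch and not part of the original tutorial: it assumes the standard OpenSG 1.x Transform core, that Matrix::setScale(x, y, z) is available, and that geoNode is the node carrying the water geometry core.

#include <OpenSG/OSGTransform.h>

// Sketch (assumed names, see note above): put a Transform node above the
// geometry node and scale the N x N grid to the global width/length.
NodePtr scaleNode = Node::create();
TransformPtr scaleTrans = Transform::create();

Matrix m;
m.setIdentity();
// the grid spans N-1 quads in each direction, so divide by N-1;
// cast to Real32 to avoid integer division with the UInt16 globals
m.setScale(width / Real32(N - 1), 1.f, length / Real32(N - 1));

beginEditCP(scaleTrans, Transform::MatrixFieldMask);
    scaleTrans->setMatrix(m);
endEditCP(scaleTrans, Transform::MatrixFieldMask);

beginEditCP(scaleNode);
    scaleNode->setCore(scaleTrans);
    scaleNode->addChild(geoNode); // geoNode: the node holding the geometry core
endEditCP(scaleNode);

Note that inside createScenegraph() the global name length is shadowed by the local GeoPLengthsPtr, so this snippet would live after that block or refer to the globals under different names.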
Of course it would be reasonable to store not just height values but whole points like an two dimensional array of osg::Pnt3f. But storing whole points consumes more memory than one Real32 value. Anyway, it is up to you or whether memory is a concern or not. As we want to play a bit around with scenegraph manipulation I have chosen the first variant. Now we assign colors to the geometry, actually just one color this time, to be specific. However, every vertex needs a color, so the same color value is added as often as we have vertices stored. This is not very efficent in this special case, however it is easy to implement. Multi indexing will be an alternative I present to you later on. //GeoColors3f stores all color values that will be used GeoColors3fPtr colors = GeoColors3f::create(); beginEditCP(colors, GeoColors3f::GeoPropDataFieldMask); for (int x = 0; x < N; x++) for (int z = 0; z < N; z++) colors->addValue(Color3f(0,0,(x+1)/(z+1))); endEditCP(colors, GeoColors3f::GeoPropDataFieldMask); Normals are still missing. We add them in a similar way like the color. GeoNormals3fPtr norms = GeoNormals3f::create(); beginEditCP(norms, GeoNormals3f::GeoPropDataFieldMask); for (int x = 0; x < N; x++) for (int z = 0; z < N; z++) // As initially all heights are set to zero thus yielding a plane, // we set all normals to (0,1,0) parallel to the y-axis norms->addValue(Vec3f(0,1,0)); endEditCP(norms, GeoNormals3f::GeoPropDataFieldMask); And some material... SimpleMaterialPtr mat = SimpleMaterial::create(); Well, this material is not doing anything interesting except for it's existence. But if no material is assigned the renderer stops doing his job leaving you with a black screen. So we assign an "empty" material. Something still missing? Yes of course! If you think about what we have done so far you might notice that something is quite not correct. We have not considered yet that a quad uses four vertices and thus most quads, except for these at borders, uses vertices already used by four other quads. However we provided every vertex just a single time. Of course we did, because everything else would be a waste of memory. That is what indexes are used for. The next block of code tells OpenSG which vertices are used by a quad. The vertices are only referenced, but not copied, in this way. Vertex are used by multiple quads Quad A uses vertex 1,2,3,4 whereas vertex 4 is used by quads A,B,C and D. The index which defines quad A would point to the vertices 1,2,3 and 4. Quad B would reuse the vertices 2 and 4 as well as two others not considered here. // GeoIndicesUI32 points to all relevant data used by the // provided primitives GeoIndicesUI32Ptr indices = GeoIndicesUI32::create(); beginEditCP(indices, GeoIndicesUI32::GeoPropDataFieldMask); for (int x = 0; x < N-1; x++) for (int z = 0; z < N-1; z++){ // points to four vertices that will // define a single quad indices->addValue(z*N+x); indices->addValue((z+1)*N+x); indices->addValue((z+1)*N+x+1); indices->addValue(z*N+x+1); } endEditCP(indices, GeoIndicesUI32::GeoPropDataFieldMask); There are different possibilities on how to index the data. That will be discussed in this section: Indexing. Now that we have created all data we need, we can create the geometry object that will hold all the pieces together. 
GeometryPtr geo = Geometry::create(); beginEditCP(geo, Geometry::TypesFieldMask | Geometry::LengthsFieldMask | Geometry::IndicesFieldMask | Geometry::PositionsFieldMask | Geometry::NormalsFieldMask | Geometry::ColorsFieldMask ); geo->setTypes(type); geo->setLengths(length); geo->setIndices(indices); geo->setPositions(pos); geo->setNormals(norms); geo->setColors(colors); endEditCP(geo, Geometry::TypesFieldMask | Geometry::LengthsFieldMask | Geometry::IndicesFieldMask | Geometry::PositionsFieldMask | Geometry::NormalsFieldMask | Geometry::ColorsFieldMask ); Some pages ago I told you that the field masks need not to be specified as the library in this case assumes that all fields will be changed. I also told you that leaving them out will slow down your application. However, as the start up is not performance critical in most circumstances, I personally, would leave the field masks out. To be honest: who cares if startup takes 5 or 5.1 seconds ;-) Maybe you just give it a try and compare the time you wait for the system to start up Finally we put the newly created core into a node and return it. NodePtr root = Node::create(); beginEditCP(root); root->setCore(geo); endEditCP(root); return root; Your first version of the water simulation is done. Compile and execute and watch the beautiful result! Please notice that you need to rotate the view in order to see anything. This is because we are initially located at y=0, just the same as the plane, thus you can see the plane as a line only. We can fix this if we add some value to the camera position during setup. You can add the code directly, before the glutMainLoop is called in the main function: This will get the navigator helper object from the simple scene manager. The setFrom() method allows to specify a point (osg::Pnt3f) where the camera shall be located. In that case we get the current position via getFrom() and add 50 units to the y-axis. This ensures, that the camera is above our mesh and not at the same height. The code so far can be found in file progs/09geometry_water.cpp. What? A plane? That whole effort for a simple plane? Of course the result is a plane as we set all height values to zero previously. We need to modify the values during the display function. But first we have a deeper look at what we have done so far! If you remember what we have done at the beginning when we started to create the water mesh geometry, you know that we had told OpenSG to use just one single primitive type, a quad, with a length of 39204 vertices. Now here some words about the geometry's flexibility: If you want to use triangles, quads and some polygons you need not to create seperate geometry cores, but you can use them all in one single core even mixed with lines and points. This is done by first telling OpenSG what primitives you are going to use. Let us imagine this little example: We want to use 8 quads, 16 triangles two lines and another 8 quads. Sure, you could (and should) put the quads together to 16 quads, but we leave it that way for now. Data from modeling packages is not quite well structured most of the time, so better get used to it ;-) Now, we simply tell OpenSG what is going to come: // do not add this code to the tutorial source. 
// It is just an example GeoPTypesPtr type = GeoPTypesUI8::create(); beginEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask); type->addValue(GL_QUADS); type->addValue(GL_TRIANGLES); type->addValue(GL_LINES); type->addValue(GL_QUADS); endEditCP(type, GeoPTypesUI8::GeoPropDataFieldMask); As far as well, but OpenSG also need to know how many of each type will come. The length we have provided previously in our example specify the number of vertices, not the number of quads, triangles or whatever. So with some math we will find out that we need 32 vertices for 8 quads (8 quads * 4 vertices per quad = 32) and 48 for the 16 triangles and so on // do not add this code to the tutorial source. // It is just an example GeoPLengthsPtr length = GeoPLengthsUI32::create(); beginEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask); length->addValue(32); // 8 quads length->addValue(48); // 16 triangles length->addValue(4); // 2 lines length->addValue(32); // 8 quads endEditCP(length, GeoPLengthsUI32::GeoPropDataFieldMask); Here is a list of all supported primitives: If you are striping geometry, please make sure to provide a correct number of vertices! Please notice that concave polygons are not supported neither by OpenGL nor by OpenSG! So make sure your polygons with more than three vertices are convex. The following imagine shows an example of primitive types and corresponding lengths. Primtives and corresponding length OpenSG geometry is very flexible and powerful. It is easy to mix different primitive types in one core, assign some properties like normals or texture coordinates to them and you can even reuse data with indexing (see next section Indexing). So far everything seems to be fine, but from another point of view things might become difficult. If you want to walk over all triangles for example you can easily run into problems, as the data might be highly mixed up with different primitive types. So you would have to take care of a lot of special cases usualy solved by some kind of big ugly switch block. This is where geometry iterators may help you out. These will iterate primitive by primitive, face by face (which is either a quad or a triangle) or triangle by triangle. For example if you are using the build-in ray intersection functionality you might have encountered the problem of finding the triangle you actually hit. You can easly get the hit point, but the promising method "getHitTriangle" returns an Int32... so what to do? This integer defines the position in the index data array of the geometry. We will have a closer look at the ray intersection functions later in section Intersect Action, but for now I only want to show a little code fragment of how to use a geometry iterator. Let's image we have send some ray into the scene, hit a triangle and now we have an integer returned from that class. We now try to compute the coordinates of the three vertices. // the object 'ia' is of type osg::IntersectAction and // stores the result of the intersection traversal //retrieve the hit triangle as well as the node Int32 pIndex = ia->getHitTriangle(); NodePtr n = ia->getHitObject(); // we make sure that the core of this node is // actually a geometry core, just for safety std::string coretype = n->getCore()->getTypeName(); if (coretype != "Geometry"){ std::cerr << "No geometry core! Nothing to do!" 
<< std::endl; return; } // get the geometry GeometryPtr geo = GeometryPtr::dcast(n->getCore()); // Creating the iterator object TriangleIterator ti(geo); // jump to the index we got from the // IntersectAction class ti.seek(pIndex); // and now retrieve the coordinates Pnt3f p1 = ti.getPosition(0); Pnt3f p2 = ti.getPosition(1); Pnt3f p3 = ti.getPosition(2); The usage of these iterators is very easy, at least if you know how to do it. When I first started using OpenSG I had a similar problem to solve and I did it with five times the code length and much more effort. It took me some time to cut it down to these few lines... :) Indexing is a very important topic if you want to use geometry efficiently. In the example above we added each vertex only a single time and this vertex was reused by all other primitives. On the one hand this is smarter than providing such a vertex four times, on the other hand we added the same color object N*N times, although adding it once would have been sufficient. All these problems can be addressed by choosing the right kind of indexing. First of all there is the possibility to not use indexing at all. The following figure shows how the data would be organized in memory Geometry data which is not indexed I guess this figure could need some explanation. At the top you have three colored circles representing a vertex. The yellow vertex, for example is used by both quads and the triangle, whereas the blue vertex is used by the right quad and the triangle. Below you find a sample data set. The first row contains the data of a GeoPTypes object. In this case we have two quads, a triangle, a polygon, another quad followed by two triangles and finally another polygon. This row may continue with even more types. The next row defines the length of these types. The quads have a length of four and the triangles have three, that's easy, but polygons can have any number of vertices. The last three rows represent the data that defines the geometry. In this case we have position-, normal- and color information given. This could be extended by some more data (i.e. texture coordiantes etc.). A column is one dataset for one vertex. You know can see that the yellow vertex appears three times in our data. With no indexing the vertex data is copied for every time it is used! Of course this is not very efficient. You will learn more efficent methods next. This is the most often used type of geometry and this is also very close to OpenGL. Indexing is easy and efficient, but it does not handle all cases. This is the kind of geometry storage we used in the water mesh example above. The following figure shows how simple indexing work. Indexed geometry data As you can see, every vertex is stored exactly one time. The data of the yellow vertex is referenced three times. Indexed geometry in general is a lot better than non-indexed geometry, but still we have some issues that are not solved optimally. In our water mesh example every vertex has exactly the same color. With indexed geometry we need to have as many entries in positions as there are in the normals and colors array - so we need to store the same color a lot too often. This issue is adressed by multi indexed geometry. Another issue is, that some vertices need additional different data even though the position is the same. For example a textured cube has different normals at the corners whereas the position is the same. To handle such cases the vertex data need to be replicated. This too can be handeled with multi indexed geometry. 
In order to face the issues encountered with single indexing you need to use multiple indices. However using a seperate index field for every property would double the number of Fields for the Geometry NodeCore, which is already pretty complex, and working with several different index fields would not be fun at all. That is why OpenSG uses another approach: interleaving indeces. The idea is quite simple. You define a mask of which indices you are going to use, let's say we want to use indices for positions, normals and colors in this order. Now you have to provide every vertex with three indices, that is a triangle would have nine indices assigned to it. The first index is used for the position, the second for the second field provided in the mask field etc. Again the following figure shows how it works Multi indexed geometry If using multi indexing the property arrays need not to be equally long any more. In our case this means that our color array could be one entry long with every color index pointing to this one and only entry. So now that you have non-indexed, single- and multi-indexed Geometry at hand, which should you use? In general single-indexed Geometry is the most efficient way for rendering. It can make sense to use non-indexed geometry, if there are no shared vertices. In this case indices only cost memory brint no benefit. Multi-indexed data can be more compact in terms of memory (if the data is bigger than the additional index), but OpenGL doesn't natively support it. Therefore it has to be split up to be used with OpenGL, which can have a big impact on performance. Only use it if memory is really critical or you really need it. Conclusion: use single-indexed geometry, if you can. Often geometry itself stays untouched during a simulation except for rigid transformations applied to the whole geometry. However, if it is necessary to modify the geometry during a simulation (like in our water example) it is mostly very important to do it fast. In this section we want to enable animation of the water mesh and by doing so, I will demonstrate some tricks on how to speed up this important task. Before we start, we quickly implement a function which will simulate the behaviour of water with respect to the time passed. As said earlier I will only use a simple function but feel free to replace this with a more complex one. Add this block of code somewhere at the beginning of the file (at least before the display function). void updateMesh(Real32 time){ for (int x = 0; x < N; x++) for (int z = 0; z < N; z++) wMesh[x][z] = 10*cos(time/1000.f + (x+z)/10.f); } Please notice: It is important to divide by 1000.f. If you divide by 1000 an integer division will be calculated yielding discret values, but that is not correct in most cases! And replace the display function with this code Well, but of course we won't see anything different now on screen, because we have updated our datastructure, but not the scenegraph. So now comes the interesting part: We are going to modify the data stored in the graph. Of course we could generate a new geometry node and replace the old with the new one. Well, this is obviously not very efficient due to a big amount of memory deletion and allocation. What we are actually going to do is the following: First of all we need a pointer to the appropriate geometry node we want to modify. Luckily this is no big deal this time, as we know that the root node itself contains the geometry core. 
Add this block of code in the display() function right before mgr->redraw() is called:

// we extract the core out of the root node
// as we know this is a geometry node
GeometryPtr geo = GeometryPtr::dcast(scene->getCore());

//now modify its content
// first we need a pointer to the position data field
GeoPositions3fPtr pos = GeoPositions3fPtr::dcast(geo->getPositions());

//this loop is similar to the one we used when we generated the points -
//this time we overwrite the old values instead of adding new ones
beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
for (int x = 0; x < N; x++)
    for (int z = 0; z < N; z++)
        pos->setValue(Pnt3f(x, wMesh[x][z], z), N*x+z);
endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);

Previously we used addValue() to add osg::Pnt3f objects to the osg::GeoPositions3f array. Now we use setValue() to overwrite existing values. If you have a look at the code where we first added the points to the array, you can see that these were added column major, i.e. the inner loop added all points along the z-axis where x was zero, then all points with x=1 and so on. setValue() gets a point as first parameter and an integer as second, which defines the index of the data that will be overwritten. With N*x+z we overwrite the values just like we first generated them: column major.

Now you can again look forward to compilation and execution. The file 09geometry_water2.cpp contains the code so far. You will be rewarded with an animation of something that doesn't look like water at all, but is nice anyway. The problem is that the water is uniformly shaded and the "waves" can only be spotted at the borders.

Animated water without proper lighting

The next chapter will be about lighting; this is where we will improve the appearance of the water. Another issue is the performance. With a resolution of 100*100 vertices (= 19602 polygons) the animation is no longer smooth when moving the camera with the mouse on my AMD 1400 MHz machine with an ATI Radeon 9700! So we are definitely in need of some optimizations.

We used the interface methods provided by the GeoPositions class. These are relatively slow compared to directly working on the data lying beneath. By getting access to the multi field, where all data is finally stored, we can speed the update up quite a bit. In your display function, remove the setValue() loop we added in the last step and replace it with this:

//get the data field the pointer used to store the positions
GeoPositions3f::StoredFieldType *posfield = pos->getFieldPtr();
//get some iterators
GeoPositions3f::StoredFieldType::iterator last, it;
// set the iterator to the first data element
it = posfield->begin();

beginEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);
//now simply run over all entries in the array
for (int x = 0; x < N; x++)
    for (int z = 0; z < N; z++){
        (*it) = Pnt3f(x, wMesh[x][z], z);
        ++it;
    }
endEditCP(pos, GeoPositions3f::GeoPropDataFieldMask);

The result will be all the same, of course, but working directly on the field with an iterator is faster to some extent.

As you might know, OpenGL is capable of using "display lists". Such lists are usually defined by a programmer and OpenGL compiles each list and can thus render its content a bit faster. However there is an overhead in compiling display lists, which makes them useless for objects which change permanently - like our water mesh. In OpenSG display lists will be generated per default for every geometry, and they will be generated again if the geometry data changes. You can turn off this feature by telling your geometry core:

// geo is of type osg::Geometry
geo->setDlistCache(false);

Add this line where the geometry core is created during createScenegraph(). 
Don't forget to extend the edit mask field with the following mask Geometry::DlistCacheFieldMask This may increase rendering performance a lot if used wisely. When using static geometry you should not turn this feature off as this will slow down rendering. Only disable display lists on geometry which is modified often or even every frame. Please notice: transformations do not affect the geometry in that way! Only direct manipulation of the geometry data is a performance problem with display lists! All hints and tricks that can be used with OpenGL can be used with OpenSG the one way or another, too. Of course, it is not good to allocate new memory during rendering and similar things. If you want to tweak your application to the maximum possible it might be useful to read a book about this specific topic. I run a little self-made benchmark on my machine to show you the results of the optimizations I suggested above. Please keep in mind that this is only one example and thus claims not to be an objective benchmark! I simply let OpenSG render 500 frames and watched how long it took. Display Lists on Dynamic objects As you can easily see the usage of the multi field manipulation instead of the interface methods is not such a big win at all, but turning the display lists off, is rewarded with a performance increase by about 170! Notice: You might know think that display lists are stupid and should be turned off, to increase performance - of course that is not the case, as a display lists only purpose is to increase performance! They only will slow rendering down if the list themselves are constantly recreated as this will be the case with non-rigid transformations. With static geometry they perform very well. I ran some small tests on my machine with the beethoven model (progs/data/beethoven.wrl), which has 60k polygons. For this benchmark I let OpenSG render 5000 frames and took the time. The figure below shows the results Display Lists on Static Objects OpenSG comes along with some useful utility functions, which can help you with some basic, but important tasks. I remember when I first needed face normals where the model only had vertex normals. I had one or two long nights I spend with the geometry in general and the geometry iterators until I had succeeded in developing an algorithm that worked in that way I wanted it to. It was a few days later when I realized a function called "calcFaceNormals". Well, my variant of face normal calculation was as fast as the build-in function was, but the only annoying thing was, that I did it with some dozens lines of code where one single line would have done the job. Here is how it works Note: you need to include the following include file for the utility functions: #include <OpenSG/OSGGeoFunctions.h> If you have some geometry core, for which you want to calculate face normals you simply need to type //geo again is of type osg::Geometry calcFaceNormals(geo); Of course with vertex normals it is just all the same calcVertexNormals(geo); You probably already know, that face normals are unique for every triangle or quad. Objects rendered with face normals will look faceted, which might not be what you want. For a smooth rendering normals per vertex are required. Please keep in mind that vertex normals can only be computed correctly, if the geometry data is also correct. The resulting vertex normal is just the result of averaging all neighbouring vertex normals and if some vertices are stored multiple times the result will be incorrect. 
Anyway, identical vertices, which are defined multiple times, can be unified automatically (i.e. they are "merged" into one vertex) by calling createSharedIndex(geo); on the geometry beforehand. Different normals used for rendering The left image was rendered using face normals, resulting in a faceted look, as promised. The middle image shows what happens if you calculate vertex normals with multiple vertex definitions, where the right image shows correct vertex normal rendering with usage of createSharedIndex() before the vertex normals were calculated The faceted effect on a sphere is often not what you want and calculating vertex normals does a fine job here. However, some other objects might not perform so well. The problem is, that all normals at a vertex will be averaged and thus edges you may want to keep will be averaged out, too. Box with bad vertex normals See how bad the cube now looks. If you increase the mesh resolution the effect will reduce in size, but that is not a good solution anyway. There is another variant of calcVertexNormals which can be given an angle. All edges between faces that have an angle larger that the one specified, will be preserved. Replacing the old function call with calcVertexNormals(geo, deg2rad(30)) would help us out with the cube. deg2rad is a useful function that allows you to convert degree values into radians. As you might guess there is also a rad2deg function. Calculating vertex normals is more complex than it sounds and it requires a considerably amount of time to compute. Using one of these methods on a per frame basis is not really recommended! As OpenGL supports a lot of different geometry data formats, also new problems arise. Of course, not all of these variants are equally efficient, but luckily OpenSG also offers some methods to improve the data automatically. I mentioned createSharedIndex() before, which will look for identical elements and remove the copies, changing only the indexes. This step is necessary for osg::createOptimizedPrimitivs to work as this method needs to know which primitves are using the same vertex which means they are neighbours. It tries to reduce the needed number of vertices to a minimum by combining primitives to stripes or fans. No property values will be changed, only the indexes are modified. The actual algorithm used here is very fast, but will not necessarily provide an optimal solution. Due to its pseudo random nature you can run it several times and take the best result. If performace is critical you can of course do it only once which will yield a non-optimal but definetly better solution than before. osg::createSingleIndex will reduce a multi indexed geometry into a single indexed. Multi-indexing is very efficient in storage but when it comes to rendering performance single indexing is better. The reason is that OpenGL does not support multi indexing directly. OpenGL's more efficient specifiers like vertex arrays cannot be used with OpenSG's multi indexing geometry. Finally you have do decide for yourself what's suiting better for you, but it is good to know that you can convert from multi- to single indexing. Last but not least, it is possible to let OpenSG render normals for you. This may be useful for debuging purposes, so you can make sure the normals are actually pointing in the desired direction. There are two methods, one for vertex and the other for face normals. You should make sure which exist before calling one of these methods. 
Rendered normals of a sphere This nice picture shows the rendered normals of a sphere. However, it would be difficult to figure out if one is facing the wrong direction anyway ;-) This code fragment shows how to do it: root = calcVertexNormalsGeo(some_geometry_core, 1.0); SimpleMaterialPtr mat = SimpleMaterial::create(); GeometryPtr geo = GeometryPtr::dcast(root->getCore()); beginEditCP(geo); geo->setMaterial(mat); endEditCP(geo); Note that you have to add a material even if it is "empty" like this one, else you won't see anything but error messages! osg::calcVertexNormalsGeo needs a geometry core and a float value which defines the length of the rendered normals. Of course this does not change the real normals in any way.
http://www.opensg.org/htdocs/doc-1.8/Geometry.html
CC-MAIN-2016-18
refinedweb
6,028
60.55
Hey, for a current project I have to scale the byte/color array of an image, therefore I found the easiest way to do that is using System.Drawing (if somebody has a better solution I am open for it :D). System.Drawing is unfortunately not included in Unity by standard, therefore I set up a csc.rsp with the following line '-r:System.Drawing.dll'. That does work like a charm in the editor. Unfortunately when I am trying to build I still get: "error CS0234: The type or namespace name 'Image' does not exist in the namespace 'System.Drawing' (are you missing an assembly reference?)" - errors. Is there anything additional I need to set up to get this working? I am using Unity 2019.3.2 building for x64 and using Mono as scripting.
https://answers.unity.com/questions/1724720/cscrsp-works-in-editor-but-throws-an-error-when-bu.html
CC-MAIN-2021-31
refinedweb
196
59.09
Introduction: How to Make button with customized fonts. Start and quit button change colors when hovered over. Customized background. Step 1: Make a Plane & Position It in Front of the Camera Open Unity and save the scene as MainMenu. Click Create - Plane in the Hierarchy panel. Rename it "Background." Rotate the Main Camera 90 degrees downward and change its Projection from Perspective to Orthographic. Adjust the camera's transform so it is 8-10 units above the menu, looking straight down. Scale the plane Background to fit the camera preview exactly. When you are done, it should look similar to the image above. Step 2: Light It Up Add lights. Do this by selecting Create in the Hierarchy panel and clicking Directional Light. It does not matter where in 3D space a directional light is; it will provide the same lighting no matter what. Therefore, after adjusting the light's direction to shine down on the background, raise the light up 20 units or so in the Y direction to keep your field of vision clear in the scene view. I adjusted the light to have a transform of (0, 20, 0) position and (60,0,-20) rotation. I also changed its Intensity to 0.4. Once again, see the image for what this should look like when you have completed the step. Step 3: Add a Texture Our background is rather plain-looking right now, but we can spice it up easily enough. Just take a digital camera/smartphone and take a photo of something convenient and interesting, such as the floor, ceiling, walls, etc., and upload it to your computer. Create a new folder called Textures in your Assets folder for the Unity project you are working on. Right click the folder in the Project panel in Unity and select, "Show in Explorer." Copy and paste the picture you took into the Textures folder. With your Background selected in the Hierarchy, click and drag the picture from the Textures folder in the Project panel to Inspector panel, where it should be added as the new texture for the background. See the image. Step 4: Add Text Create an empty Game Object using the Game Object dropdown toolbar at the top of the Unity window. Rename it "Text." Reset its transform. Click Create - 3D Text in the Hierarchy panel. Reset its transform. Rename the 3D Text object and enter the text you want it to display in its Text Mesh component in the Inspector. Rotate it 90 degrees about the X or Z axis so that is displays properly in the camera's view. Drag the 3D text object into the empty Text game object in the Hierarchy. Be Duplicate the 3D text as desired. Step 5: Go Get Some Fonts (that You Already Have) You already have lots of fonts. You can access them (on Windows) by opening Explorer and going to the folder called Fonts, which is located under OS/Windows/Fonts. In the Unity project panel, create a new folder in Assets and call it Fonts, as well. Copy and paste the fonts you want for your Unity project from your computer's fonts folder to the new Fonts folder you created within the Assets folder for your project. Note: This will most likely copy over several different files for each font, one for regular, bold, italic, etc. Select the 3D text whose font you want to change in the Hierarchy panel, and drag the desired font from the fonts folder in the project panel to the box labeled "font" in the Text Mesh component in the Inspector. You can change the font color, size, and other other attributes in the Text Mesh component of the 3D text. 
This will appear in the Inspector panel, provided you have the 3D text you want to edit selected in the Hierarchy. The text will most likely look a little blurry. You can clean this up by making the font size significantly larger, though this will mess up the camera's view at this point, so you would have to readjust the camera and the background plane's size. Step 6: Make the Text Change Color When You Hover Over It Create a new folder called Scripts in the project panel. Create a new CSharp script and call it MouseHover. Open the script in MonoDevelop. There are three functions in this script. The first tells the text to be its original color. The second tells the text to change color when the mouse is touching it, and the third tells the text to go back to its original color after the mouse is no longer hovering over it. void Start(){ renderer.material.color = Color.black; } void OnMouseEnter(){ renderer.material.color = Color.red; } void OnMouseExit() { renderer.material.color = Color.black; } Add the script to each piece of text by dragging it from the project panel to the 3D text object's name in the Hierarchy. In order to make the script work, we need to add colliders to each of the pieces of 3D text so that the code knows whether or not the mouse is touching them. To add a collider, select a piece of 3D text in the Hierarchy, go to the Inspector panel, and select Add Component - Physics - Box Collider. Add the box collider to each piece of text and check off the box that says "Is Trigger." Test whether your buttons change color by clicking the play button at the top middle of the screen and hovering your mouse. Step 7: Write a Script to Control the Buttons Create a new script and call it MainMenu. File it in your Scripts folder and open it in MonoDevelop. Declare Boolean (true/false) variables, one for each button you would like to have on your menu. I have two buttons, so I wrote: public bool isStart; public bool isQuit; Then, write a function called OnMouseUp(). This activates when the mouse button is released, which is a better way to activate a button than OnMouseDown() because it prevents the function from being executed repeatedly while the mouse button is held down. void OnMouseUp(){ if(isStart) { Application.LoadLevel(1); } if (isQuit) { Application.Quit(); } } Application.LoadLevel(1) loads scene number 1 of the game. (The menu scene should be level 0. You can change which scene is which in Build Settings, under File.) Application.Quit() quits the game, though this will only do something if the game is open as a PC/Mac application. Nothing will happen if the game is running in Unity or online. Step 8: Make the Buttons Do Things! Add the MainMenu script to each of your 3D text objects serving as buttons. Because you declared public bools for each category of button, they should show up in the Inspector for each button. Go to the Inspector, and check the appropriate boolean variable for each button. See the image above for what this should look like. That's it, you are done! You can add an additional line of code to your MainMenu script to make sure it is working. Just tell it to change the color of the button when you click it (to a different color than when you hover over it). void OnMouseUp() { if (isQuit) { Application.Quit (); } if(isStart) { Application.LoadLevel (1); renderer.material.color=Color.cyan; } } Be the First to Share Recommendations 11 Comments 7 weeks ago Thanks. Helped. And Comments let all complete without errors. 
Question 8 months ago on Step 6 So it says test if hover works, but it doesn't say what to do if it does not. How do I fix it if hovering doesn't work? Question 10 months ago Can sombody explain me the tutorial more detailed in private please my discord name and id is Diogo De La Torre #4911 11 months ago Doesn't work. Visual studio keeps saying there are errors. 11 months ago Also don't forget to add a very little at least, Z axis value to the box collider. fixes the issue when on mouse enter, rapidly text color changes (enter, exit, .. ). 1 year ago Great ! Thank you for creating this tutorial ! Especially for making a really good readble step- by- step-instruction and not some video. Tested it on Unity 5.4 - Unity automatically did the change from render.material.color to GetComponent<Renderer>().material.color ! Question 2 years ago Does this tutorial also work for 2D in Unity? 5 years ago main menu script send plese 5 years ago You should change the " render.material.color = Color.whatever; " instead of it it would have to be: " GetComponent<Renderer>().material.color = Color.cyan; " also would add that for adding scenes on the build is file -> buildSettings -> on the box labeled Scenes in build, drag the scenes you want in the order u want them But thx for the instructable <3 ^^ 5 years ago You should change the " render.material.color = Color.whatever; " instead of it it would have to be: " GetComponent<Renderer>().material.color = Color.cyan; " 6 years ago on Step 6 Assets/Scripts/Mouse Hover.cs(9,6): error CS0116: A namespace can only contain types and namespace declarations
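For anyone following the tutorial on a newer Unity version, here is a sketch of the hover script with the change the commenters above point out (the implicit renderer property was removed in Unity 5); scene loading would similarly use SceneManager.LoadScene from UnityEngine.SceneManagement instead of Application.LoadLevel.

```csharp
// Updated sketch of the MouseHover script for Unity 5 and later; attach it to
// a 3D Text object that has a Box Collider with "Is Trigger" checked.
using UnityEngine;

public class MouseHover : MonoBehaviour
{
    void Start()        { GetComponent<Renderer>().material.color = Color.black; }
    void OnMouseEnter() { GetComponent<Renderer>().material.color = Color.red; }
    void OnMouseExit()  { GetComponent<Renderer>().material.color = Color.black; }
}
```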
https://www.instructables.com/How-to-make-a-main-menu-in-Unity/
CC-MAIN-2021-43
refinedweb
1,531
73.68
Originally posted by sonir shah: Question ID :988380923984 What will the following code print when run? public class Test { static String s = ""; public static void m0(int a, int b) { s +=a; m2(); m1(b); } public static void m1(int i) { s += i; } public static void m2() { throw new NullPointerException("aa"); } public static void m() { m0(1, 2); m1(3); } public static void main(String args[]) { try { m(); } catch(Exception e){ } System.out.println(s); } } Answer: 1 can any one explain me with reasons Originally posted by sonir shah: But how is m1() related to m2() method. both are completely different methods. my question still remains unanswered .i.e why only have we considered the m0() method instead of m1() for applying the value of 's'?? sonir
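One way to see why the program prints just "1" is to trace the calls by hand; the walk-through below assumes nothing beyond standard Java exception propagation.

```java
// main() -> m() -> m0(1, 2)
//   s += a;    // s is now "1"
//   m2();      // throws NullPointerException("aa") immediately
//   m1(b);     // never reached, so "2" is not appended
// The exception unwinds through m0() and m(), so m1(3) in m() never runs either.
// main() catches and ignores the exception, then System.out.println(s) prints "1".
```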
http://www.coderanch.com/t/235733/java-programmer-SCJP/certification/Jq-ID
CC-MAIN-2014-35
refinedweb
127
60.45
Technical Support On-Line Manuals RL-ARM User's Guide (MDK v4) #include <rtl.h> void *_calloc_box ( void* box_mem ); /* Start address of the memory pool */ The _calloc_box function allocates a block of memory from the memory pool that begins at the address box_mem and initializes the entire memory block to 0. The _calloc_box function is in the RL-RTX library. The prototype is defined in rtl.h. Note The _calloc_box function returns a pointer to the allocated block if a block was available. If there was no available block, it returns a NULL pointer. _alloc_box, _free_box, _init_box #include <rtl.h> /* Reserve a memory for 32 blocks of 20-bytes. */ U32 mpool[32*5 + 3]; void membox_test (void) { U8 *box; U8 *cbox; _init_box (mpool, sizeof (mpool), 20); box = _alloc_box (mpool); /* This block is initialized to 0. */ cbox = _calloc_box (mpool); .. .
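Because _calloc_box returns NULL when no block is free, callers usually need a check like the one in this small sketch (it reuses the pool from the manual's example and assumes _init_box has already been called):

```c
#include <rtl.h>

/* Pool sized for 32 blocks of 20 bytes, as in the example above. */
extern U32 mpool[32*5 + 3];

U8 *alloc_zeroed_block (void) {
  U8 *cbox = _calloc_box (mpool);   /* zero-initialized block, or NULL        */
  if (cbox == NULL) {
    return NULL;                    /* pool exhausted - let the caller decide */
  }
  /* ... use the block; later return it with _free_box (mpool, cbox); ...     */
  return cbox;
}
```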
https://www.keil.com/support/man/docs/rlarm/rlarm__calloc_box.htm
CC-MAIN-2020-34
refinedweb
138
75.4
One of the vital requirements for academics is to provide a single data set that all their students can utilise for undertaking experiments. By hosting data on a Blob Storage account you can allow students to connect and undertake experiments using Azure Jupyter Notebooks in a pretty straightforward manner. Data can be uploaded to an Azure blob using the Azure Storage Explorer tool. Creating a storage account on Azure - find and click Configuration under SETTINGS. Once you have set up your storage account you can use the Azure Storage Explorer to connect to it, create a new blob container and upload the data. Azure Storage Explorer showing upload blob which has SampleData for our experiment Within your Jupyter Notebook you now need to define the connection parameters. So in a code block create the following, taking the details from your Azure account. Example of Notebooks setup The first code block is where we define the connection blob_account_name = "" # fill in your blob account name blob_account_key = "" # fill in your blob account key mycontainer = "" # fill in the container name myblobname = "" # fill in the blob name mydatafile = "" # fill in the output file name In a new code block create your connection and read the data import os # import OS dependant functionality import pandas as pd #import data analysis library required from azure.storage.blob import BlobService dirname = os.getcwd() blob_service = BlobService(account_name=blob_account_name, account_key=blob_account_key) blob_service.get_blob_to_path(mycontainer, myblobname, mydatafile) mydata = pd.read_csv(mydatafile, header = 0) os.remove(os.path.join(dirname, mydatafile)) print(mydata.shape) Before you can create a storage account, you must have an Azure subscription, which is a plan that gives you access to a variety of Azure services. You can get started with Azure with a free account. If you have Imagine Access Premium or are a Visual Studio Dev Essentials member, you get free monthly credits that you can use with Azure services, including Azure Storage. See Azure Storage Pricing for information on volume pricing. To learn how to create a storage account, see Create a storage account for more details. You can create up to 200 uniquely named storage accounts with a single subscription. See Azure Storage Scalability and Performance Targets for details about storage account limits.
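The snippet above uses the legacy azure-storage SDK (BlobService). If that import no longer resolves in a current notebook, a rough equivalent with the newer azure-storage-blob (v12) package might look like this sketch, reusing the same variable names from the connection block:

```python
# Sketch only: same idea as above with the v12 SDK; blob_account_name,
# blob_account_key, mycontainer and myblobname are assumed to be filled in
# exactly as in the earlier code block.
from io import BytesIO
import pandas as pd
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url=f"https://{blob_account_name}.blob.core.windows.net",
    credential=blob_account_key)
blob = service.get_blob_client(container=mycontainer, blob=myblobname)
mydata = pd.read_csv(BytesIO(blob.download_blob().readall()), header=0)
print(mydata.shape)
```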
https://blogs.msdn.microsoft.com/uk_faculty_connection/2017/07/20/using-external-data-with-azure-jupyter-notebooks/
CC-MAIN-2018-22
refinedweb
371
52.6
Codeforces Global Round 6 Editorial Wow, fastest editorial out there! Gap between E and F is HUUUUUGGGGGEEEEE :C Fast Editorial and system testing :) epic F... wow fastest guide ever i have seen Thanks for the Fastest Editorial and it was a very nice contest :D For D, you have given solution for min dept, how to find number of depts? I did a greedy solution 67115957, can anyone please tell me why i am getting TLE. IN Russia, please!!!:))) Yeeee. This is the fastest guide I've ever seen For E , what ideally we should have done -" It is obvious that we cannot do better and this number is necessary. Let's prove that it is also sufficient. " What most people did — " It is obvious that we cannot do better and this number is necessary. Let's submit." How to solve D with rule 1 changed to d(a, b) and d(b, c) instead of d(a,b) and d(c, d)? If I understand that correctly, it means we cannot greedily match balances and we need to preserve (more or less) the original graph structure. Yes, I also thought that it was important in the contest, but if you carefully read and understand all the limitations in the first operation, it will become clear that you can get any situation that is described in the solution. Yeah, I know, I managed to reread the problem statement after 1 hour and solved it, but I am wondering about a harder problem now :P Sorry, confused in the letters :D Please someone can find mistake in my first question solution.. Your link is incorrect. This one I think you mentioned. Try this test: 100. Your output I think red, instead of cyan. 100 thanks for your reply.. i found my mistake Thanks for the quick editorial. Thanks for the quick editorial!!! fast!!! awesome!! H is very cool, one of my favorite problems now. Thanks! Then I have a bonus for you: It looks like this bonus is a previous code jam problem from a while ago: Damn! It's really difficult to come up with something original. Hello majk ! I want to ask something about problem C. Since it is a constructive algorithm task ; could you please describe the thought process involved while arriving at the construction. PS : I also solved the problem in the contest ; however it took me a lot more time to derive the construction. Thanks in advance ! Here is how I thought of it it, Since we have to produce minimum magnitude and all Gcds should be distinct, I thought the gcds should be of the form 1,2,3,4,5. I didn't know of this will work, but just a trail. I tried to produce gcd = 1. This can easily be done by writing 2,3. I also noticed that 1 cannot be written anywhere as it make gcds of column and row same. So in the first row I wrote 2,3,4,5,6,....,m + 1.after this I observed that I can make gcds of each column the topmost element if each column is a multiple of the topmost element. After this, I thought that if I multiply the first row by some number and write it in the second row, the gcds of the second row will be the number it was multiplied with as I can take the number common and the gcds of remaining elements is one. It's easy to see how this works after this. Thanks! I was just thinking like you, but missed to notice a silly mistake I was making in my logic, now I understand my mistake. Thanks. It's helpful. I think my solution for C plainly describes the thought process involved. here I created two arrays(one for row and one for column) that contains the targeted minimum disjoint gcd values that can be achieved ie { 1,2...r...(r+c) }then I filled the matrix by multiplying each row array element with column array element. 
By analyzing output you can infer {1...r} values can be achieved by taking gcd of each row and remaining {r+1...c} can be achieved by taking gcd of each column. It would be really helpful if someone could provide the thought process required for C,as it is a constructive algo? Since we want to minimise GCD we may want to start with putting 1 but we can't do so because it will make the GCD of that particular row and column 1 which will not satisfy condition that GCD values must be distinct. Firstly, try a simpler case like when r = 1 and c > 1, we can have values like 2,3,4,...c-1. Similarly, for case when c = 1 and r > 1. Now, Let us consider case when r = 3 and c = 3. Then in first row we can have 4,5,6 which have GCD = 1. Now observe that for second row we can multiply the first row by 2, which gives for second row 8,10,12 which will have GCD as 2. Similarly, third row will be 12,15,18 having GCD as 3. And for the first column GCD will be 4, second column GCD will be 5 and for third column GCD will be 6. So set of distinct values of GCD = {1,2,3,4,5,6} and also we can say that r+c will always be the optimal value. Very good idea good contest ! Thanks majk ! Why is it clear in problem D that we can't do better than sum of balances divided by 2? I have a linear-time solution (67124772) to F. The idea is to make extensive use of the trivial data structure that maintains a set of integers and supports the following operations: To do this, just store for every value how many times the value appears in the set, and maintain the maximum. We'll loop from $$$k = 1$$$ upwards, and maintain $$$dp[i] = $$$ number of subtrees of $$$i$$$ with depth at least $$$k$$$ (including parent). Then at step $$$k$$$ we have $$$ans[2k] = \max(\max_{i} dp[i], \max_{a, b \text{ adjacent}} dp[a] + dp[b] - 2)$$$ and $$$ans[2k-1] = \max_{i} dp[i] + \text{at least one of the subtrees of i has depth exactly } k-1$$$. To maintain $$$dp[i]$$$, note that when we increase $$$k$$$ and calculate the new value $$$dp'[i]$$$, we have $$$dp'[i] = \sum_{t \in conns[i]} \max(1, dp[t]) - 1$$$. Hence we can store a queue of nodes whose some neighbour had their $$$dp[i]$$$ decreased to one in the previous step. The nodes in the queue are exactly the nodes whose $$$dp[i]$$$ will decrease in this step (multiple times if they appear multiple times). Since initially $$$dp[i] = deg(i)$$$, and they can only decrease, and we do updates to them in $$$\mathcal{O}(1)$$$ time, this part takes linear time. We'll use one copy of the data structure to maintain values of $$$dp[i]$$$. To maintain the values we need to calculate $$$ans[2k-1]$$$, we'll maintain the values $$$dp[i] + \mathbb{I}[dp[i] \text{ got decremented in the previous step}]$$$ in another copy of the data structure. Lastly, to maintain the values to calculate $$$ans[2k]$$$, we'll maintain for every node a copy of the data structure storing the current values of $$$dp[i]$$$ for its children. Note that every $$$dp[i]$$$ appears in exactly one such data structure, so to initialise and maintain them we again do only linear work. We'll also have a data structure storing the "active" values $$$dp[i] + \max_{c \in childs[i]} dp[c] - 2$$$. Whenever $$$dp[i]$$$ decreases, we decrement its active value, its parents children data structure, and its parents active value if $$$dp[i]$$$ was the unique maximum in the data structure. All the operations we'll ever do again take in total time equal to the sum of degrees, which is linear. 
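The construction sketched in the two comments above fits in a few lines; the following is only an illustration of that idea (cell (i, j) holds i·(r+j)), with the 1×1 and single-column cases handled separately, and it is not taken from the editorial itself.

```python
# Sketch of the construction discussed above: row i gets gcd i, column j gets gcd r+j.
r, c = map(int, input().split())
if r == 1 and c == 1:
    print(0)                   # a 1x1 matrix has equal row and column gcd, so impossible
elif c == 1:
    for i in range(r):
        print(i + 2)           # column gcd is 1, row gcds are 2..r+1
else:
    for i in range(1, r + 1):  # works for r == 1 as well, since c >= 2 here
        print(' '.join(str(i * (r + j)) for j in range(1, c + 1)))
```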
I have a similar solution to you, and the complexity is also linear: 67162965. We notice that the number of subtrees of $$$i$$$ with depth at least $$$k$$$ is always smaller than that of $$$k - 1$$$. So, instead of looping $$$k$$$ from $$$1$$$ upwards, we loop $$$k$$$ from $$$n$$$ downwards, which will make every variable in the computation non-decreasing, so we don't have to use any data structures; straight up updating every variable using the $$$max$$$ function is enough. Edit: Courtesy to GreymaneSilverfang for helping me with this discovery :D In D editorial can someone explain "We have just reduced the total debt by z, which is a contradiction." We assume that the debt is already minimal, and there is a vertex triple such that ... We managed to reduce the debt, so it wasn't minimal. This means that for a minimal debt, there cannot be such triple. [DELETED] In question D for input: 6 4 1 2 6 2 3 4 4 5 3 5 6 5 is this a correct solution?: 4 1 6 5 4 3 3 5 2 2 1 3 1 Edit: It is .. nevermind In problem H: These two conditions are in fact necessary and sufficient. These two conditions are in fact necessary and sufficient. How to prove that these conditions are sufficient? Can someone tell why my solution is failing in D. Is there something to do with it out of bounds? a[u] = 0; a[v] = a[v]+a[u]; You should swap these two lines. Shit. How could I make such mistake. Thanks for help. Thanks for the contest! In problem D "It follows that any solution in which every vertex has either outgoing or incoming edges is constructible using finite number of applications of rules." I couldn't understand why... How to prove it? Can someone explain? "It follows that any solution in which every vertex has either outgoing or incoming edges is constructible using finite number of applications of rules." Because if there is combination of vertices like a->b->c then this combination surely results in a decreased total debt i.e. why using any number of operations we can achieve the state of graph where every vertex has either incoming or outgoing edge. Thank you,but why "any solution is constructible"...? I understood there exists the graph which we can construct where every vertex has either outgoing or incoming edges,but I don't think it is clear that we can construct "any" graphs that satisfies the condition. (I'm sorry if I misunderstood the editorial) For example, imagine that there is a answer a->b 10, c->d 10. In order to get another answer, we add two edges a->c 10 and c->a 10, then we could use the first rule to get another answer a->d 10, c->b 10 Thanks,I got it I'm really interested in yosupo's and eatmore's solutions to problem H. It looks they are quite different from the intended solution. For each vertex, we assign a level as the shortest distance for the vertex N. Core part is making function succ(l, State={position, color, time}) -> new_State: this function simulate the state while a token reach one of the vertex whose level is l(we assume the first position of token is a vertex whose level is higher than l). We can memoize this function by (l, color of vertexes whose level is higher than l). How many states this function memoized? ... actually I don't have any proof(so, my solution is unproved, sorry). My intuition say that it is exponential, but quite small (even in the worst case). I can make a test which makes it memoize $$$O(n2^{n/2})$$$ states. My solution uses simulation with memoization. Suppose that we start in some state, and the token is in vertex $$$i$$$. 
If we know only the state of vertex $$$i_1=i$$$, we can simulate one step (or two, if the first step moves the token to the same vertex). Then, the token lands at vertex $$$i_2\ne i_1$$$. If we know the state of vertices $$$i_1$$$ and $$$i_2$$$, we can simulate more steps until the token lands at vertex $$$i_3\ne i_1,i_2$$$. This way, it is possible to construct the following lazy DP: the state is $$$(i,k,\textrm{state of }i_1\ldots i_k)$$$, and the value is $$$(i_{k+1},\textrm{new state of }i_1\ldots i_k,\textrm{number of steps})$$$. The simulation function (run in my solution) takes the following arguments: initial vertex, current state and a set of vertices. It runs the simulation until the current vertex is not in the set. There is also an option to limit the number of steps. It starts with $$$k=1$$$ and $$$i_1=i$$$ to compute $$$i_2$$$, $$$i_3$$$ and so on. To compute DP for some $$$k$$$, it runs the same procedure recursively with starting vertex $$$i_k$$$ and the set $$${i_1,\ldots,i_k}$$$. For $$$k=1$$$, a straightforward simulation is used (this requires at most two steps). run Can Somebody please explain problem D As I think that is incorrect, as d(1,2) = 10 and d(2,3) = 15 (as in example) doesn't work according to the writeup as here v = 2 is valid and also a tripe will occur as 1,2,3, with v as both incoming and outgoing debts, so how this can be a contradicton? Please correct if I am thinking wrong.. In the editorial, he is trying to prove by contradiction that if you try to eliminate all such cases where the triples exist, you can combine the rest of the remaining vertices to get your final answer. As they are the only way you can reduce the total sum given the conditions in the question. As a result, finally, you will be left with only vertices with either outgoing or incoming edges and not both for a single vertex Could anyone help me why the following solution is incorrect for C? I hope my logic was correct. Your logic is half correct,your solution does produce disjoint gcd arry or diverse matrix,but it is failing to produce the diverse matrix that minimises the magnitude For eg. dry run your program for 3 3,you will get it. diverse matrix that minimises the magnitude Such a Terrible mistake! Thanks a lot Two doubts for problem D editorial how did you conclude this " This means that we can just find balances, and greedily match vertices with positive balance to vertices with negative balance.The total debt is then Σd=∑v|bal(v)|2 , and it is clear that we cannot do better than that" how do we find the final debts i.e. the second half of the answers , the remaining edges. Is it like to minimize the debt we have to pay all the loan back so that the overall debt decreases and that is why we are matching the positive ones with negative ones so as to make positive ones as small as possible. Do we have to think like this ?? i wonder how to prove the greedy algorithm can get the answer, i.e. second part of output In E if we consider a = {1,1} and queries are — (1, 1, 2) and (2, 1, 1) then p will be {0,0} how is it possible to not create anything? Hey, I was also stuck on this. However, the question states that ti < ai. In problem H,why are these two conditions sufficient? For a valid state, we take the subgraph of all active arcs. It's easy to see that there exists exactly one vertex such that: The active arc starting from it ends at v. The path from 1 to the said vertex does not include v. Then the only possible predecessor of v is the said vertex. The rest is induction. 
upd: I seem to have missed the case when v=1... Forget what I've said before. take the directed multigraph, where each arc appears exactly the number of times it is traversed. Given the constraints of the matrix, it's easy to see that there exists an Euler path(if it's connected, hence the condition we're checking). Consider constructing an Euler path the following way: Start from 1; If the vertex we're at has been traversed odd number of times, take the blue arc; else take the red arc. It could be checked that this Euler path is the legitimate path. P.S. Since all other vertices must be traversed before v, it could be checked that they should all be able to go to v using only active arcs. #include <bits/stdc++.h> #define ll long long #define ibs ios_base::sync_with_stdio(false) #define cti cin.tie(0) using namespace std;//coded by abhijay mitra #define watch(x) cout << (#x) << " is " << (x) << endl int main() { ibs;cti; int t;cin>>t; while(t--){ ll x; cin>>x; if (x>15) {x%=14;x+=14;} if(x>14 and x<21) cout<<"YES\n";else cout<<"NO\n"; } return 0; } B solution hi majk,i am quite new in coding,i still dont really understand problem D,i am lost after the part(we greedily match vertices with positive balance with vertices with negative balances).does it mean to perform the operation (u,v) (v,w) if v and w is the vertices chosen to be matched,could you please clear this up for me,thank you how to construct r X c matrix in question C. there is only r and c is taken as input.please answer bc people at codechef give better explanation than codeforces editorial.Even the submissions are not intituitive at codeforces.here is disscussion for problem d see royalfalcon156 ans on the given link Here is a Detail explanation for Div2 D Here. Hope could be helpful to someone This is the worst and the most unrigorous explanation I have ever seen. Well Sorry for that.I will try to improve In the editorial of problem G, "the answer is $$$|p|⋅(|p|+1)−∑LCP[i]$$$", shouldn't it be $$$|p|⋅(|p|+1)/2−∑LCP[i]$$$ ? Good
https://codeforces.com/blog/entry/72243
CC-MAIN-2022-40
refinedweb
3,059
70.33
Disclaimer: The following is my understanding (possibly misunderstanding!) of Ruby and Smalltalk metaclasses. I am a Java programmer. I've played a bit with Squeak and Strongtalk. As I said in my earlier posts, I am learning Ruby using the JRuby 0.9.0 implementation. If you are a Rubyist or Smalltalker (or both!) and find gross errors here, please let me know [how do we call a Java programmer?] With class based object-oriented programming languages: (1) everything is an object, and (2) every object is an instance of a class. Point number (1) implies every class is an object too. If so, per point (2) every class has to be an instance of ... well, another class [called "metaclass"]. So, what is the metaclass of a metaclass? And how do we end this seemingly "infinite" chain? Java's answer for this: class Person { static { System.out.println("I'm " + Person.class); } } In Ruby: class Person end # prints true puts(Person.instance_of?(Class)) Huh! That looks pretty similar to Java, isn't it? (except for the syntax changes and the metaclass name). No, that is not the complete story. Look at the following Ruby code: class Person # "static" method? def Person.greet sayHello end # another "static" method? def Person.sayHello puts "Person: hello, world" end end class Employee < Person # "Override" "static" method? def Employee.sayHello puts "Employee: hello, world" # calling "super" method? super end end Employee.greet If you run the above program, you get Employee: hello, world Person: hello, world In Java, static methods don't have a "this" object and can't access "super" either. The "static" methods of Java are not really connected to any object in the system -- static "methods" are just like good old global functions except for namespace and accessibility. Clearly, "static" methods in Ruby are not like static methods of Java. Class methods [as these are known in the Ruby and Smalltalk world] are very much like instance methods - they can be overridden, can call the super method and have "self". "self" inside a class method is - guess what - the class itself (recall that classes are objects!). In fact, "self" used anywhere inside a class body (except inside an instance method) refers to the class. For example: class Person puts self end So, you can write "class methods" with the "self." syntax as well: class Person # "self" here refers to "Person" def self.greet sayHello end def self.sayHello # "self" here refers to "Person" puts "hello, world #{self}" end # instance method def whoAreYou # "self" here refers to the Person # object on which "whoAreYou" was called puts "I'm #{self}" end end # call a class method Person.greet # call an instance method p = Person.new p.whoAreYou I mentioned that a class holds instance methods; if so, which class would hold the class methods of a class? For example, Person "holds" instance methods such as "whoAreYou". Which class would hold the class methods (such as "greet") of the Person class? Is it Person.class? - no, that can not be. Recall that Person.class is Class - which can only "hold" methods common to all classes (like general reflective queries such as "superclass", "instance_methods" etc.). The Class class can't hold the class methods of the Person class. If the Class class had the class methods of, say, the Person class, then you could call them on any class. For example, you could call the "greet" class method of the Person class on any other class in the system! Ruby's answer for this is as follows: in Ruby, every object has an optional singleton class [also known as exclusive class].
Whenever a method is called on an object, first the associated singleton class, if available, is looked up for the method. If not found, only then the "actual" class of the object is searched [and then the usual superclass chain search]. How would you add singleton methods to a specific object - in other words how would you define singleton class for an object? When we defined class methods like Person.greet, you are actually adding a singleton method to Person's singleton class. In fact, you can define singleton class with the following syntax - so that you can add multiple singleton methods in "one shot": class Person # "self" here refers to "Person" # add singleton methods. class <<self def greet sayHello end def sayHello puts "hello, world" end end end For every Ruby class, there is a singleton class associated with it. The singleton class holds the "class methods" of that Ruby class. For Person class, there is Person singleton class. For Employee (which is a subclass Person), there is Employee singleton class and so on. Also, Employee's singleton class is subclass of Person's singleton class [singleton class mirrors regular class hierarchy] Note that every Ruby object can have an optional singleton class. Yes, that is right -- every object, not just class objects can have a singleton class associated with it. class Person end x = Person.new y = Person.new # now, add singleton class to "y" object class <<y def wonder puts "I'm wondering!" end end # calls wonder method in "y"'s singleton class y.wonder # won't work - method_missing error! x.wonder The singleton classes alongwith clone method can be used to write prototype based object-oriented programs (as with Self). i.e., you don't need classes at all. You just create objects (say using Object.new) and you specialize some of your objects by defining singleton classes for those (note that singleton classes are unnamed). Whenever similar behaving objects are needed [you need a class of objects], then clone one or more prototypical objects! To understand Ruby class, singleton class relationship, the following session with JRuby could help: Update: - the output of the following script with JRuby 0.9.0 is different from Ruby 1.8.4. It seems that Ruby hides singleton classes as an implementation detail. I've filed a bug with JRuby ancestors() on a singleton class returns result different from Ruby 1.8.4 # add a method called "singleton" # to Object class class Object # return singleton class associated # with the current object def singleton # create a singleton class for "self" class <<self # note that class is expression # in Ruby, we just return "self" # Note: "self" here is the singleton # class itself self end end end class Person end class Employee < Person end class Manager < Employee end puts "Manager's singleton = #{Manager.singleton}" puts "Employee's singleton = #{Employee.singleton}" puts "Person's singleton = #{Person.singleton}" def print_hierarchy(klass) puts "#{klass}'s hierarchy" puts "\t" + klass.ancestors.join("\n\t") end # print class hierarchy of each class print_hierarchy(Manager) print_hierarchy(Employee) print_hierarchy(Person) # print class hierarchy of each singleton class print_hierarchy(Manager.singleton) print_hierarchy(Employee.singleton) print_hierarchy(Person.singleton) From the output of above program, we can see that singleton hierarchy parallels the class hierarchy.More info on Ruby metaclasses: What is the answer of Smalltalk to the class-of-class question? 
In Smalltalk, the class hierarchy Person / Employee / Manager is mirrored by a parallel metaclass hierarchy PersonMetaclass / EmployeeMetaclass / ManagerMetaclass. Ruby's treatment is similar yet different from that of Smalltalk. Ruby's singleton classes serve like Smalltalk's metaclasses. But, unlike Smalltalk metaclasses, which have single instances, we can't create instances of Ruby singleton classes: jruby>Object.singleton.new script error: org.jruby.exceptions.RaiseException: can't create instance of virtual class jruby>o = Object.new #<Object:0x16c9867> jruby> o.singleton.new script error: org.jruby.exceptions.RaiseException: can't create instance of virtual class Hence it has been suggested that singleton classes be referred to as "sterile metaclasses". It is important to note that Ruby's singleton classes can be associated with any (ordinary) object (unlike Smalltalk). Because of this, Ruby supports prototype based object orientation. If I understand correctly, this contrasts with Smalltalk metaclasses where, for example, "Person class" yields the single instance of the class PersonClass, a subclass of Class, and "Employee class" yields the single instance of the class EmployeeClass, a subclass of PersonClass. (I'm not speaking from a position of authority, I have recently been trying to get my head around the same concepts, but in Smalltalk.) Posted by 82.243.20.33 on September 26, 2006 at 11:45 PM IST # The super call in Employee.greet calls Person.greet -- which cannot be explained if the singleton hierarchy does not parallel the class hierarchy. I'll file a bug in the JRuby project on the difference with the return value of the ancestors() call. Posted by A. Sundararajan on September 27, 2006 at 11:09 AM IST #
http://blogs.sun.com/sundararajan/entry/metaclasses_in_ruby_and_smalltalk
crawl-002
refinedweb
1,417
57.37
Hi, I have a little issue that i have been trying to solve, i was wondering if someone might be able to help? What i'm trying to do is get some fields pre-filled from the page they have selected. So when they click #bookhere1 this will trigger the data from #price1, the button will direct them to the payment/details page and prefill the #pricetxt. I'm looking to fill out as many of the details as possible but thought if I start with one, that should help me at least get started. The code I have is below but is obviously not working. import wixData from 'wix-data'; export function bookhere1_click(event, $w) { $w.onReady(function () { $w("#dynamicDataset").onReady(()=>{ let itemObj = $w("#dynamicDataset").getCurrentItem(); $w("#price1").text = itemObj.dateone.toDateString(); }); } I know i need to have code on both pages, but thought someone might be able to point me in the right direction. Thanks in advance! Stephen Hi Stephen. I think wix-storage might solve this problem. You can store the data you want to pass to another page, and read it on that page in onReady section. I think the session storage is good enough for this case. Regards, Genry. Hi Genry, Thank you for your response, I have managed to get some time to try and get back to solving this issue. This is an area that is new to me, and I'm finding a little tough to get my head around... would you mind helping me a little? After the bookhere1_click I would like the value of #price1 #date1 #duration1 to be stored then applied to the new page in fields #pricetxt #datetxt #durationtxt So I have played around with the code and have come up with this on the category page import {local} from 'wix-storage'; export function bookhere1_change(event, $w) { local.setItem("#price1", "value"); } Then placed this in the payment page import {local} from 'wix-storage'; let value = session.getItem("#price1"); // "value" How do I apply it to the new fields? Thank you in advance. Stephen maybe this would work on the new page: $w("#pricetxt.value) = value; Two things: if the above does not work, try #pricetxt.text. Other thing: you know that users might have turned off local storage in their browser. So if you rely on that, all fails. If you have the time, may be it is worth your while to investigate this: - prepare the link for the button by code, using a parameter, like {parh}/orderpage?courseid=34KJ2H34897DFGSFDG (where that silly number is its ID) - on the recieving end, use wix-location to get that last part (courseid=) with "query" - retrieve this item (it´s the row id of the collection) again from the db and display the data This way, you don´t have to rely on local storage. I have not tried it out myself, just an idea. Hi Stephen. I see that when you store the value - you use 'local' storage, but when you read it - you use 'session' storage. Please try with using 'session' storage for both storing and reading the values. Also to clarify, for the 'key' you can use any string, it is not necessary to use the '#' sign. Also when setting a value, I am not sure what 'value' you mentioning in your code. If the element name for example is price1, then the value for the element to store should be $w('#price1').text for Text element or $w('#price1').value for TextInput element. And when reading the item: Regards, Genry. Hi Giri and Genry, Thank you for getting back to me. I have had a look through your thoughts and tried to implement the code from Genry, I think that the code is reading the value now, due to the code making sense and now no errors. 
The area that is confusing me, is when trying to display the item on the new page: import {session} from 'wix-storage'; ... const price1 = session.getItem('price1'); How does the code know where to display the value on the new page? I'm trying to get it to display in #pricetxt so I've tried this: import {session} from 'wix-storage'; const price1 = session.getItem('price1', $w('#pricetxt').value); PS how do you get the code to display in the grey box? Hi Stephen. The code snippet I wrote was only to show how to store and retrieve the data to and from the session storage. Now, once you read the data, you should assign it to the relevant component. So in your case this should be: More info about the Text and TextInput elements and how you can interact with them via code can be found here: Regarding how to show the code in grey box: you can select a portion of a text - and a popup with more selection options will be shown - you can then mark the section as code. Regards, Genry. Hi Genry, I must admit this is driving me a little of the wall! The reason I'm finding it hard is that I can't get anything to happen so there isn't any process of elimination.. One issue I've come across is that if I put an on_Click event on the button then the 'link' that directs them to the new page doesn't work? So I have read as much as my brain can take and have come up with the following code to try and get around this one also. The frustrating part is that the code actually makes sense, but no results. Anyway the below code is to store the data Then the code you provided to retrieve the data on the payment page I have played around with using input fields instead of text fields and changed the text to value but still nothing seems to change Hi Stephen. Your direction is correct. You should add an onClick event on the relevant button in first page where you will store the information for the second page to read from. From my understanding in your site it is the button bookhere1. On you code you are calling an onClick method of the button, rather than defining it. You can find a short demo of how to add and define events on UI elements in the following article: Regards, Genry. Hi Genry, Thanks for getting back to me. This is driving me up the wall... it makes sense but I can't get any results, I've been at it for days, reading through articles, videos but I can't get it to do anything. I have amended the code, see what you think. I was going to ask if you offer services for this? and how much it would be to write the code? Here is what I have currently PAGE 1 PAGE 2 I've noticed on the API documentation that the retrieve code is: But I can't get the above to work either... I appreciate this is becoming a little long winded, and you have already given me a lot of your time, if this is a service you offer, I don't mind paying to learn the code, or to have it written as you have been very helpful! Best wishes Stephen Can't believe I actually sorted this one out! Thanks for all your help Genry, Very much appreciated!! The code that worked is below Hi Stephen. Glad to hear that it works for you. I somehow didn't receive a notification about your previous post. Another alternative on how to initialize the values of the components on the second page, is to put the initialization code in the $w.onReady event. E.g.: Regarding my services - I offer my services here at the forum for free as part of me being an employee of WixCode R&D :) Good luck and best regards, Genry. 
Hi Genry, How did you set up the transition to the other page? The redirect I mean as with a on click event you cannot as well redirect Hi mschuettke. What functionality are you trying to achieve? On clicking a button - open another page? Should the onClick event also do some custom logic, e.g. pass data to the target page? Regards, Genry. Hi Genry, thanks for all the help above. I am also trying to achieve the same functionality as you mentioned above, but when the data is passed the first time from page1 to page2, it does not show, however when again a different data is passed, it is then that the previous data is shown. Also, all this happens when it is opened in a new window, and not in the current window. Please Help import {session} from 'wix-storage'; export function button1_click(event,$w) { //Add your code for this event here: session.setItem('firstname', $w('#input1').value); } import {session} from 'wix-storage'; $w.onReady(function () { //TODO: write your page related code here... }); export function text13_viewportEnter(event,$w) { //Add your code for this event here: let firstname = session.getItem('firstname'); $w('#text13').text = firstname; } You probably have to add a timeout so that the input value is properly set. See the article Give the TextInput onKeyPress Function some time for more information. Something like this: I’ve been trying to do the same thing for days, and finally I found this post and the code works great. Thank you for sharing the workable code. I am so happy to see that the text appeared in the text field on the next page. I am however facing another problem, when the form is submitted on the second page, the autofilled value isn’t submitted to the database. Can I know what am I doing wrong ? Thank you in advance Hello everyone, I'm trying to create a login page and provide the ability to transfer data from one page to another. In the login page, it will check the database in order to see if the input email and password are the same as email and password stored in the database. After checking that, the input email that is equal to email stored in the database should be transferred into another page (main page). Please find below the code in the login page: // For full API documentation, including code examples, visit import wixData from 'wix-data'; import wixUsers from 'wix-users'; import wixLocation from 'wix-location'; import {session} from 'wix-storage'; let userEmail; let userPassword; let user = wixUsers.currentUser; $w.onReady(function () { //TODO: write your page related code here... $w("#textErrors").hide(); }) export function button1_click(event, $w) { //TODO: write your page related code here.. let inEmail = $w('#input1'); let inPassword = $w('#input2'); let transEmail = inEmail; wixData.query("loginDB").eq("email", inEmail).find().then((results) => { let resultCount = results.totalCount; if (resultCount) { userEmail = results.items[0].email; userPassword = results.items[0].password; if (inPassword===userPassword && inEmail===userEmail) { session.setItem('transEmail', transEmail.value);//Keep in mind, 'item' must be a string. 
wixLocation.to(`/mainpage`); } else { $w("#textErrors").show(); $w("#textErrors").text = "Incorrect email or password."; } } else { $w("#textErrors").show(); $w("#textErrors").text = "Can't find the user."; }}).catch((err) => { let errorMsg = err; }); return inEmail; } export function button2_click(event) { //Add your code for this event here: } Here is the code in main page: // For full API documentation, including code examples, visit import {session} from 'wix-storage'; let filteredEmailtxt; let filteredEmailvle; let filteredEmail; $w.onReady(function () { //TODO: write your page related code here... console.log(filteredEmail, "here is the email being sent to filter"); console.log("Dataset is now filtered"); //$w("#transEmail").hide(); }); export function button3_click(event) { //Add your code for this event here: session.removeItem("inEmail"); session.clear(); } export function trans_email(event, $w) { const transEmail = session.getItem('transEmail'); //Where result will be a string. filteredEmail = transEmail; } However, the outcome is:undefined here is the email being sent to filter So what did I do wrong here? Hi Stephen, I was hoping you could help me with this same/similar issue. I have a drop down list on my homepage (dropdown1) that I would like my users to select a Service from a list of 127 Services I have being pulled from a data set. I would like them to push a button (button8) below the drop down and redirect them to another page. On the other page I have a miltipage form I created using a slide show. I would like the selection the user made from dropdown1 on the homepage to appear in the first user input text box on my form. I need this to happen because I don't want the form to be on the homepage but I need the original service selection from the dropdown menu to go to the same data collection in the same row as the rest of the form. I have tried everything. I wasn't sure if for .value I have to actually write a string and include every single value in the drop down list. or if you just leave it as .value I really hope you can help me I have spent endless hours trying to figure this one out. Thanks, Bethany
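For anyone skimming the thread, the pattern Genry describes reduces to a few lines on each page; the snippet below is only a sketch with placeholder element IDs and a placeholder target URL, not the exact code the posters ended up with.

```javascript
// Page 1: store the value in session storage and move to the second page.
// '#price1', '#pricetxt' and '/payment' are placeholders - use your own IDs/URL.
import wixLocation from 'wix-location';
import {session} from 'wix-storage';

export function bookhere1_click(event) {
    session.setItem('price', $w('#price1').text);
    wixLocation.to('/payment');
}

// Page 2: read the stored value once the page is ready.
// import {session} from 'wix-storage';
$w.onReady(function () {
    $w('#pricetxt').text = session.getItem('price');
});
```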
https://www.wix.com/corvid/forum/community-discussion/importing-data-from-dynamic-page-to-display-on-another-page
CC-MAIN-2019-47
refinedweb
2,192
71.85
Introduction: Animatronic Stargate Helmet I love the movie Stargate and when I first saw it I immediately knew I wanted to make one of the super cool Horus guard helmets. I had sketched multiple designs over the years and figured out several different methods for building it but rejected them all for one reason or another- usually due to cost or complexity of construction. Since I wanted this to be a costume helmet my requirements were that it be light weight, comfortable, have decent outward vision and be reasonably durable. I also wanted it to be buildable by anyone using simple hand tools. Most important of all I wanted it to move in a similar fashion to the movie helmets. All of this proved to be a pretty tall order but eventually it all came together and now you can make a moving Stargate helmet of your own! Here's a video of the helmet- Be sure to click on the photos to download high res images. Step 1: Tools and Materials Tools- Saw for cutting wood/metal- I use a Milwaukee hand saw that accepts reciprocating saw blades- super handy! Cordless drill w/ various drill bits X-Acto knife with #11 blades Scissors- small sharp scissors make cutting the patterns easier Glue gun Sandpaper- small piece of 100 grit to smooth wood edges and spackling Allen wrenches- Inch Screwdrivers- phillips and flat head Soldering iron- the Hakko FX-888D is probably the best soldering station available for under $100 Ballpoint pen Bench vise or some other way of securing work while cutting metals Trusty Instructables multitool- I never leave home without it! Materials- Cardstock (2pkgs)- Newspaper Craft foam sheet (10ea 12" x 18")- White glue Tacky glue- Gorilla glue- Spray foam- Paint- 1 can silver 1 can copper 1 can satin clear coat Pastels- dark blue, reddish brown, black Plywood- 3/32" thickness, 6" x 12" (3ea needed) Minwax Polycrylic sealer Velcro- Cotton swabs/ soft brush- for applying pastels Electronics/Hardware- Arduino- I used my own design Arduino servo board (you can use any variety Arduino you want- available at RadioShack, Sparkfun, Adafruit, etc.)- Small switch (2ea)- JST female connector- JST extension wire AA batteries (4ea) AA battery holder- Servos- Hitec HS-81 (3ea)- Hitec HS- 425BB (2ea)- Servo extension wire- Gears- 22T 32 pitch Hitec splined (2ea)- 24T 32 pitch 1/4" shaft mount (4ea)- 4-40 Swivel ball links (4ea)- 4-40 threaded rod- Super Duty short control horns (2ea)- Servo shaft adapter 1/4"- 10-32 Rod end- 10-32 tap & drill bit 10-32 bolt 10-32 nuts (3ea) 1" Aluminum angle- Nylon spacers- 1/4" ID x 1/2" OD- 1/4" OD brass tubing- 3/8" Aluminum rod Aluminum mounting hubs w/bolts- 1/4" and 3/8" bore 10mm super bright white LEDs- local supplier or 10mm LED holders (2ea) I purchased these locally but I found some online here- Resistors- 100 Ohm (2ea) -local Radio Shack Standoffs- I used standoffs I salvaged from electronics equipmentI found in dumpsters but lots of places sell them online in various sizes - Female breakaway headers- Miscellaneous wire/small wood screws Small piece of steel sheet- I used a scrap piece cut from old electronics chassis material Magnet- Step 2: Printing Patterns So here we go! The base of this helmet uses a pepakura folded paper model. If you're not familiar with pepakura it allows you to take a 3D model and essentially fold it out flat into a paper pattern. The pattern is printed on cardstock and cut out, folded and glued together. It's a pretty easy way to get a general shape for a physical model. 
The complexity of the model can vary greatly and it's always a trick to have the minimum number of folds to reduce complexity but still allows you to have good shape and model detail. I'm using using the pepakura model provided by nintendude and movieman on the RPF forum as a base (a HUGE thank you to you guys!) So the first thing you want to do is download the provided patterns and open the Horus pattern using the free Pepakura Viewer, which can be downloaded here (sorry Windows only)- There are additional files provided should you want to make the Anubis version or want the additional files for the neck collar, staff weapon or ZAT weapon to complete a costume. Note that I have not yet constructed these so I can't say if the animatronics will fit the Anubis head -but I'm sure there's a way to make it work. :) An opening staff weapon and animatronic ZAT are on my project list... After you have opened the Horus file you will see a wire model on one side of the screen and the patterns on the other side of the screen. The first thing you will notice is that if you click on a particular pattern page it will show you where that part goes in the finished model as well as what other pattern parts it mates to- this is great to back to and use as a reference when assembling the patterns. At this point what you want to do is turn off the "Set Materials To Faces" button. If you don't do that all of the pattern faces will print grey. Next turn on the "Show Edge ID" button. This is a big one- when the patterns print they will have numbered edges that show what edges mate together. Without this it will be very difficult to assemble the patterns. Now select Setting form the pull down menu and select Print Setting. I set the line thickness to 3, Print lines smoothly, transparency to 50% and Print page number. Now select the printer icon, select your printer, select All and then OK. A window then appears asking if you if you want to adjust the scale- select NO. The reason for this is that Pepakura Viewer is set up to print on A4 paper and if you scale the model to fit on letter sized paper the patterns will be the wrong size. The down side to this is that if you are printing on letter sized paper some of the patterns can run just barely outside the borders, but it's no big deal. So print your patterns onto cardstock and get your scissors ready... Step 3: Cut, Fold and Glue Now let's cut some patterns.! Once you get going the helmet surfaces build up pretty quick. A few notes: for the animatronic helmet not all of the patterns will be used, namely the back of the head (since all of the servo motors go there) and only the faces of one set of fans are used as templates to cut the fans from wood. I also left off the very top part of the fan cover as I think it looks better without it. If you want to build the helmet without the animatronics then just assemble all of the parts- obviously I didn't do this so I can't give too much advice as to exactly how all the parts go together regarding final assembly (attaching head, fans, etc.) but I'll help out as best as I can! One thing that will help immensely when building this is a stand. I found a $10 foam head and mounted it on a wood dowel and secured it to a base made from scrap 3/4" thick plywood. Step 4: Paper Mache Time to add paper mache. The cardstock shell is fairly flimsy so we need to reinforce it with a few layers of paper mache. I used white glue with just a little bit of water to make it easier to brush on. 
Just tear some newspaper into small pieces, brush the glue onto the cardstock using a small brush and then start sticking down the newspaper. The trick here is to not do too big an area at once or get it too wet- this will help avoid warping. The head also requires that you paper mache the inside near the back edges on the top and bottom. This will keep the edges from curling. I paper mached the entire helmet first and then cut the holes for the fan support tubes- it was a lot easier to do it this way as the helmet has a lot more rigidity. It's a bit tricky to get the holes lined up in the same spot on each side so take your time. Then I glued in the tubes and applied paper mache to those surfaces as well. After the helmet parts have fully dried it's time to move to foam and spackling... Step 5: Foam Is Your Friend- So Is Spackling Add spray foam reinforcing. I used spray foam on the inside of the helmet to fill in the area around the top where the head attaches and around the fan tubes and cavities in the sides and rear of the helmet. This adds a lot of rigidity to the helmet and gives you something solid to glue the wood head mounting plate onto. You only want to spray in a bit at a time and then let it cure, especially in the top of the helmet. If you try to do it all at once only the foam on the outer surfaces will cure and you'll have a gooey mess when you go to cut the access hole for the servo wires. When filling in the top of the helmet, cover the large hole with packing tape to keep the foam from oozing out. After the foam had cured, I cut the access hole for the servo wires using my small hand saw- just jab it through to make a good sized through hole. You can also use the saw to trim back any excess foam on the inside of the helmet and the front face of the large opening (after removing the packing tape.) Now cut two identical pieces of 3/32" thick plywood to fit the large opening at the front/top of the helmet. One of these will need a large hole cut in it to match the hole in the foam. This is then glued to the helmet using Gorilla glue. Just get the plywood slightly wet on one side, apply glue to the foam and press the wet wood side to the glued foam. Hold the plywood piece in place using packing tape until the glue sets. The second plywood piece is set aside- it's used to mount the animatronic assembly. Now to do some spackling. Light weight spackling paste is applied to the helmet to smooth out rough spots and seams in the paper mache. I use an old plastic gift card as a spatula to apply the paste but a scrap of cardboard works too. Once the spackling has dried it can be sanded smooth. At this time you should also cut out the panel in the front of the helmet using an X-Acto knife so you can see out. I covered this with a piece of window screen scrap and glued it in place with a glue gun. Speaking of glue guns- now is the time to start skinning the helmet with craft foam sheet. I chose gray sheet since I figured it would hide any paint mistakes best when using silver paint. What I did was wrap the foam sheet over the helmet and then rough trim it to shape using an X-Acto knife. Keep changing the blade in the knife as it makes a huge difference as to how clean a cut you can get. Then I would glue one edge down, wait for it to set and then stretch and wrap the foam around the helmet as best as I could, gluing sections as I went. It's definitely a tricky process to try and minimize the number of seams! 
The process is identical for the head- the beak section is especially tricky to do- just take your time and trim the foam sheet as you go, trying to minimize the gaps. You want the surface to be as smooth as possible when you go to put in all of the detail lines/grooves with a ballpoint pen. With every added layer (going from folded cardstock to paper mache/spackling to foam skin) you are trying to smooth the surface and reduce the appearance of the fold lines of the original pattern. The foam skin simply gives you a nice outer layer that has a uniform appearance and allows you to put the detail lines in. Any gaps in the foam can now be filled with spackling paste and lightly sanded. Now set the helmet aside- it's time to get to work on the mechanics... Step 6: Head Mechanism Now to make it move! The head mechanism consists of two servos that move a third servo around a spherical bearing (rod end.) It's very easy to build and is pretty compact. I primarily use Hitec servos and have been very happy with their performance and durability. I used two Hitec HS-485HB standard size servos for the main servos and one Hitec HS-80 micro servo at the front (the same HS-80 servo is used in the fan mechanism.) Unfortunately these servos are not available as of this writing so I suggest using the less expensive HS-425BB standard size servos and the Hitec HS-81 micro servo. If you want to use better quality/stronger servos then by all means do so- it certainly won't hurt. If that is the case I would recommend the HS-645MG standard servo and HS-85MG micro servo- these are both very strong/durable servos for the money. The HS-85MG micro has the added benefit of metal gears and ball bearings compared the the HS-81 micro so it will better support the head. Normally I don't like putting side loads on the servo output shaft (which is the case in the forward micro servo in this application) but the head is so light the servo can handle it- plus the head is attached with a magnet so a hard hit will simply move the head without damaging the servo. NOTE: Make sure all of your servos are in their center position before construction. This can save you some real headaches later! The first thing you want to make is the base plate. This uses the second piece of plywood that matches the piece at the top of the helmet. An Aluminum hub with a 3/8" hole is bolted to this and an access hole for the servo wires is drilled under the hub. Now cut a piece of 3/8" diameter Aluminum rod to 2 1/4" length. Drill and tap one end for the 10-32 rod end. Drill two holes spaced 5/8" apart on the top of the rod and mount two 1/2" long standoffs. For the third servo mount cut a piece of 1" Aluminum angle to 1 1/2" length. Drill a 1/4" hole in the front and mount the 1/4" Aluminum hub. Drill a hole for a 10-32 bolt on the other face. Drill holes for the control horns and mount them as shown in the photos. Make the control links by cutting two pieces of 4-40 rod to 1" length and then threading on the swivel ball links so there is a 1/4" gap between them. Mount the one end of each swivel link to the control horns. You'll have to drill out the hole in the control horn slightly to get the bolt through. Now attach the third servo mount to the rod end using a 1 1/4" long 10-32 bolt as shown in the photos and secure it with two nuts so it doesn't come loose. You can use a shorter bolt if you like (I just had this one on hand) and I made some spacers using scrap Aluminum tubing. 
You'll want to use some small washers or spacers on this bolt so you get full movement when tilting fore/aft. Now you need to mount the main servos. I used some 1 3/16" long standoffs (two per servo) and mounted the two servos to a thin plywood plate. The plywood plate then bolts to the two standoffs on the 3/8" Aluminum rod. Now connect the swivel links to the two servos. The third servo is mounted to the front using a 1/4" diameter servo output shaft extension. You'll probably have to cut down the mounting screw a bit like I did to get it to fit the small servo properly. Attach a servo extension wire to this forward servo so it will be able to reach all the way through the access hole in the base plate. I cut a small piece of scrap steel sheet and attached it to the front of the servo using double sided foam tape. I also removed the mounting tabs from the forward servo case in order to get the servo as close to the front of the inside of the head as possible. The 3/8" Aluminum rod is then secured to the base plate mounting hub and then it's on to the fan mechanism... Step 7: Fan Mechanism Let's make those fans rotate. This is a really simple mechanism to build and it works well- it also doesn't require much precision in it's construction. Basically there is a servo motor with a gear that turns a gear mounted to the top fan blade. The top fan blade then turns the gear on the bottom fan blade in the opposite direction- so when the servo rotates in one direction the fans spread apart and when it rotates in the opposite direction the fan blades close. To build the mechanism first cut four plywood discs to fit the fan mounting tubes in the sides of the helmet- they should fit pretty snug. Now cut out the fan blades from thin plywood sheet. The center fan blade is just like the template but the upper and lower blades need small extensions with 3/8" holes to allow for mounting the 24T gears. The 1/4" bore gears are press fit and glued into the holes in the upper and lower blades using Gorilla glue. Just get the wood damp first, apply some glue to the back of the gear and press it into place in the fan blade. Some glue may foam into the gear teeth during curing- just trim it away using an X-Acto knife. NOTE: The trick here is to get the gears properly aligned with the matching opposite blade. You want the gear on the right fan blade to be in the same position (rotation wise) as the gear on the identical left fan blade. This is important because if they are not aligned correctly it will be very difficult to get the right and left fan blades in sync when they open and close. Also note that the gears are mounted on the opposite sides of the top and bottom fan blades. Take one of the plywood discs and cut a hole in it to mount the servo. The right and left side servo mounting plates are mirrored. Mount the 22T gear to the servo and then glue the fixed center fan blade in place using Gorilla glue. Now place the top and bottom fan blades on the servo mounting plate, holding the gears in alignment so they mesh with each other- mark their position. Note that the top fan blade gear is turned by the servo gear and the bottom fan blade gear is turned by the top fan blade gear. The bottom fan blade gear does not come into contact with the servo gear. The fan blades are held in place by nylon spacers mounted on 1/4" brass tubing, which goes all the way through both plywood plates. 
The gears in the fan blades are designed to be press fit onto a 1/4" shaft so you have to drill them out using a 1/4" drill bit- make sure they rotate free on the 1/4" brass tube without binding. The top fan blade has a nylon spacer on both sides while the the bottom fan blade has a nylon spacer only on the inside (closest to the servo.) The exact length of these spacers depends on the length of the standoffs used to hold the plywood plates apart. They do not require a precise fit at all- just tight enough to keep the fan blades from wobbling around. The fan blades should rotate freely on the brass tubes. The spacers should fit snug onto the 1/4" brass tubing- they are what holds the tubing in place between the two plywood plates. If the nylon spacers fit loose on your 1/4" brass tube then glue them in place on the tube after you have determined their positioning. When you are finished building the fan assemblies test fit them into the helmet tubes- they will require a notch cut for the center fan blade. Do not glue them to the helmet at this time. The cap for the fan assembly is held on with a small piece of velcro. Step 8: Detailing Let's put it all together and see what it looks like! Since the fan blades are temporarily mounted, mount the head by cutting a small piece of plywood to fit up in the beak section. This gets a super strong small magnet glued to it using Gorilla glue and the plate is held into the inside of the head using hot glue. The magnet connects to a small metal plate on the front of the small servo and holds the head in place. This way the head is secure but is still easy to remove and it won't damage the servo if it gets knocked around. Now come the details. Cut the raised areas on the back and sides of the helmet using foam sheet and glue it into place with the glue gun. Now use a regular old ballpoint pen to draw all the engraved lines in the helmet and head. You have to press pretty hard but the lines will stay there! Also cut out foam sheet for the fan blades and secure them using a glue gun. The patterns are pretty time consuming to draw- don't be too concerned about making them perfectly match from side to side. Step 9: Electronics Let's add a brain. For this project I'm using an Arduino controller board I wrote a complete instructable about here- This works really well for this application but feel free to use any Arduino you want- there are diagrams that shows how to wire it either way. The mentioned instructable shows how to build, program and use the Arduino controller board I'm using. The servos are powered by four "AA" rechargeable batteries. I mounted small switches for both the "AA" batteries and the LiPo cell to make it easy to turn on and off. I used a small female JST connector for the LiPo cell and then wired a rocker switch with a JST extension cable- this wiring harness plugs directly into the controller board and makes it easy to turn the controller on and off without having to constantly unplug the LiPo cell since the JST LiPo connectors can be somewhat fragile. If you are using a standard Arduino (Uno, Deumilanove, etc.) it's not necessary to make this wiring harness- just wire a switch between a 9V transistor battery and your Arduino and you're good to go. To connect the "AA" batteries just use two pins from a female break away header to create a wiring harness with an inline switch- this will allow you to plug the battery pack directly into the controller board (or a proto board if you're wiring up a standard Arduino)- watch the polarity! 
The controller board is mounted to a small plywood plate and is secured to the inside of the helmet with Velcro, as are the batteries. Each 10mm LED gets a 100 Ohm resistor soldered to its positive lead and then they are wired in parallel. The LEDs are glued into cut down LED holders in the head eye sockets using a glue gun. These should be glued in after the helmet is painted.

Have a look at the wiring diagram- it's super simple. The servos connect as follows:

Head small servo- digital output pin 9
Right side head servo (looking at head)- digital output pin 8
Left side head servo (looking at head)- digital output pin 7
Right side fan servo (looking at head)- digital output pin 6
Left side fan servo (looking at head)- digital output pin 5
LED eyes- both connect to digital output pin 11

Here's the code to use- just copy and paste this into your Arduino window. This is simple code that just runs the servos and LEDs through a sequence over and over. That way when you're wearing the helmet you don't have to worry about what it's doing- just flip the switches and you're good to go. Feel free to play around with the servo positions but be careful not to make them move too far or they will bind and possibly strip a gear or stall and make a lot of noise.

#include <Servo.h> // include the servo library

Servo servo1; // creates an instance of the servo object to control a servo
Servo servo2;
Servo servo3;
Servo servo4;
Servo servo5;

int servoPin1 = 9; // control pin for servo
int servoPin2 = 8;
int servoPin3 = 7;
int servoPin4 = 6;
int servoPin5 = 5;

const int ledPin = 11;
int ledState = LOW; // variable used to store the last LED status, to toggle the light

void setup() {
  servo1.attach(servoPin1); // attaches the servo on pin to the servo object
  servo2.attach(servoPin2);
  servo3.attach(servoPin3);
  servo4.attach(servoPin4);
  servo5.attach(servoPin5);
}

void loop() {
  servo1.write(90); // the number in parentheses tells the servo what position to go to
  servo2.write(50);
  servo3.write(120);
  servo4.write(90);
  servo5.write(90);
  delay(1000); // wait a second
  servo1.write(60);
  servo4.write(100);
  servo5.write(80);
  delay(1000);
  servo1.write(70);
  servo2.write(90);
  servo3.write(110);
  delay(1000);
  servo4.write(70);
  servo5.write(110);
  delay(2000); // wait two seconds
  servo2.write(55);
  servo3.write(85);
  delay(2000);
  servo1.write(90);
  servo2.write(90);
  servo3.write(90);
  servo4.write(90);
  servo5.write(90);
  // fade out from max to min in increments of 5 points:
  for (int fadeValue = 255; fadeValue >= 0; fadeValue -= 5) {
    // sets the value (range from 0 to 255):
    analogWrite(ledPin, fadeValue);
    // wait for 40 milliseconds to see the dimming effect
    delay(40);
  }
  delay(3000); // wait three seconds
}

Once you are happy with how everything is working you can glue in the fan assemblies by using a glue gun to glue the back plate (where the fan servo is mounted) into the tube. Do not glue the front plate as you want it to be easy to remove should you ever need to replace a servo or gear.

Step 10: Painting and Finishing

Almost done- time for paint! When painting this I first gave it a coat of Minwax Polycrylic sealer using a large brush to seal the foam and prevent the spray paint from damaging it. Then I sprayed the helmet and fans silver. This was followed by a coat of copper on the beak section of the head and the lower part of the helmet. Painting was then followed with some pastel work.
I first went over the helmet with some dark blue pastel powder (I scraped the pastel stick using my multitool blade to make powder) using a small soft paintbrush and a cotton swab to blend it. Copper areas were then highlighted using a reddish brown pastel stick and that was blended in using a brush and cotton swab. Black pastel powder was applied around the detail lines to add some definition. Finally the entire helmet was given a satin clear coat and it was ready to go!

As a side note, instead of wearing this as a costume you could also use it as a Halloween prop- dress up a mannequin, add a motion sensor and an Adafruit Arduino Wave Shield for sound effects, set it outside your door and wait for trick or treaters!

This project was a very long time coming and was a real challenge for me- it was a great experience seeing this come together and solving challenges along the way. I learned an awful lot building it and will be applying that knowledge to future costume projects. It was very important to me when building this that other members of this community be able to replicate it, so I wanted to use low cost materials and focus on build techniques that would make it as accessible as possible. The side effect is that the construction method presented here could be used to create all kinds of wonderful costumes and Halloween props. So go forth and build your Stargate helmet! As always, if there are ever any questions feel free to ask away. I'm here to help and nothing would please me more than to have a whole bunch of these out there. :) I'd also like to tell everyone thanks for the nice comments- I've been floored by the response to this project!

First Prize in the Halloween Props Contest
First Prize in the 4th Epilog Challenge

Comments

I'm almost finished building this one (will post pictures/details when I'm done) and I'm just finishing up the animatronics. Just wanted to double check something: when you marked the servos as 'left/right head' or 'left/right fin', is that from the perspective of the person wearing the helmet, or of someone looking at it from the front?

Wow, I'm impressed. Cool project!

Thanks!

Hi! My name is David, the second member of the props-memorabilia group. We are among the biggest collectors of original Stargate props. I love your work on the helmet- it's fantastic! Would you be interested in a Serpent Guard helmet? I can offer you a good deal in exchange for a helmet like yours. Here's my contact: sudpeinture@hotmail.com

Would it be possible to make it fully out of EVA foam?

I think you probably could- I've seen some pretty impressive foam builds over the last couple of years so I say go for it.

And I'll make one with your instructions- thanks, and I love your work!

Hello, I just want to know how can you see with the helmet?

If you read through the instructable you will see there is a small mesh panel you can see through. The outward vision is quite good.

Hi, loving this build. I'm making my own and your instructions are helping me lots with some difficulties I encountered. I would love to see the new one- when is it coming up? Or can I find it on other sites? Ooh, little question: how do you get the lines in your new build to be so much more prominent than in your old one? Ty, and again love your work!

Hi there!
I just received the fiberglass castings for the new one and I'll have a video up soon- I'll be sure to post a link. The new one was cast by a friend using molds made from actual movie production pieces and it will be radio control just like the originals. :) This is a fantastic build! Thanks so much for sharing! Thanks! Wait until you see the new one... :) I love you. You are gonna help make a childhood dream come true for me. I plan on doing a 3d printed one soon, just finished a 18x18x24" printer for it. You are the man for sourcing everything. Would love to swap ideas sometime. Thanks! If you ever have any questions just let me know! I've begun work on a new Horus helmet with a friend. This one is much more screen accurate. :) Beautiful! If only I where as good crafting these things.. More time practicing needed! Very nice end result! probably the second coolest thing I've ever seen from a movie The first was mc in forward into dawn I am working on making a Bastet Guard cosplay. I fell in love with this design and as a girl I think the cosplay would work better for me. With a couple of modifications. Here is the design:-... I couldn't have even started it without your instructable. It has been a brilliant resource for getting me started. I will have to make some alterations to shape and such but you made the whole pepakura process a breeze! I have to make the face part from scratch though as there are no pepakura files for it. I am too scared to do servos and such. But I will stick some LED's in the eyes.
http://www.instructables.com/id/Animatronic-Stargate-helmet/
Identify the character you want to be at the front of the string after the rotation. Then divide the string into two halves such that this character is the first character in the second half. Reverse each half in place, then reverse the resulting string.

There is another O(n) solution. It has a better constant, but may or may not perform better in practice due to cache locality issues. I call it the "two handed" algorithm, because you need "two hands" to do it. :-) Basically, you pick up the characters in position 0 and position m, then you put down the character from position 0 into position m. Now holding only the character originally in position m, you pick up the character from position 2m and put down the new character for that position. When you reach a multiple of m that is past the end of the string, you wrap around to the beginning. Repeat until you are back at position 0. If m and n were co-prime, you're already done. If not, then you need to repeat with positions 1, m+1, 2m+1, .., and then 2, m+2, 2m+2, .., and so on. You can stop at GCD(n,m), since this position will have been reached by the first loop starting at position 0. Pseudocode:

num_cycles = GCD(n, m)
cycle_length = n / num_cycles
for cycle = 0 to num_cycles - 1
    index = cycle
    value = string[index]
    do
        index = (index + m) mod n
        -- Note: The following can be implemented as a swap.
        next_value = string[index]
        string[index] = value
        value = next_value
    loop while index <> cycle
end for

There's a simpler, more intuitive solution, which is also more efficient. The trick is to carry one character "in hand" and drop it where it belongs. Here's a Python snippet (it operates on a mutable list of characters, since Python strings are immutable):

def rotate_string(string, d):
    old_char = string[0]
    current_index = 0
    for i in range(len(string)):
        target_index = (current_index + d) % len(string)
        temp_char = string[target_index]
        string[target_index] = old_char
        current_index = target_index
        old_char = temp_char
    return string

# Note: as written this only handles the case where gcd(len(string), d) == 1;
# otherwise it keeps revisiting the same cycle (see the GCD-based pseudocode above).

Here is a simpler solution that is also one-pass with n swaps. Start with 0 and place the element there where it should be (i -> i+m) with a swap. Repeat until you'd wrap around. You've now moved all of the string into place but the last m pieces. These are almost in place, but have been shifted n % m (remainder) places. So if necessary, shift them back recursively using the same algorithm. I guess if m << n this version should have fairly nice cache-properties, be easy to unroll and work well enough with tail recursion optimization. In Python:

def swap(a, i, j):
    a[i], a[j] = a[j], a[i]

def rotate(a, m, start=0):
    n = len(a) - start
    m = m % n
    if m != 0:
        for i in range(start, start + n - m):
            swap(a, i, i + m)
        rotate(a, m - n % m, start + n - m)

a = list(range(10))
rotate(a, 3)
print(a)

This rotation problem has been part of algorithmic folklore since the 60s/70s. The solution provided by jliszka showed up as early as 1971 in a text editor written by Ken Thompson. It is interesting because it is similar to the problem of swapping adjacent regions of memory. jliszka's solution is probably the "best" solution. It is very simple and easy to implement. It is also both space and time efficient in practice. logiclrd's solution is probably the worst. Algorithmically it has the best constant, but in practice it suffers poorly due to cache locality. Running some benchmarks on a 1.83GHz Intel dual core, it takes twice as long to run compared to the simple reversal algorithm. Another algorithm that tends to be even faster than the reversal one is to use a recursive swap like this: Given s, we want to rotate by m.
This is equivalent to the problem s = ab, where we want to swap a and b, and a has length m. Split b into bl and br, where br has the same length as a, so the string reads a bl br. Swap a and br to get br bl a. Now a is in its final position, and we are left with the subproblem of swapping br and bl. This is the same form as the original problem and leads to a recursive solution. This algorithm can be converted to an iterative version, and is described in Gries's Science of Programming (1981). Benchmarking on a 1.83GHz Intel dual core shows it is about twice as fast as the reversal algorithm. All of these algorithms are also discussed in John Bentley's Programming Pearls.
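Since the reversal algorithm at the top of this node is described only in prose, here is a minimal Python sketch of it; the function name and the left-rotation convention are my own, not from the original write-up.

def rotate_left(s, m):
    # Rotate s left by m positions: reverse each half, then reverse the whole string.
    chars = list(s)
    n = len(chars)
    m %= n

    def reverse_in_place(a, lo, hi):
        # reverses a[lo:hi] in place
        hi -= 1
        while lo < hi:
            a[lo], a[hi] = a[hi], a[lo]
            lo += 1
            hi -= 1

    reverse_in_place(chars, 0, m)   # reverse the first half
    reverse_in_place(chars, m, n)   # reverse the second half
    reverse_in_place(chars, 0, n)   # reverse the resulting string
    return "".join(chars)

print(rotate_left("abcdefg", 3))    # prints "defgabc"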
https://everything2.com/title/Rotating+a+String+solution
Computer Science 211 Data Structures
Mount Holyoke College, Fall 2009
Topic Notes: Linked Structures

So far, all of our structures for holding collections of items have been very simple. We've used only arrays and Vectors. These have some pretty significant limitations. Vectors are resizeable, but it is an expensive operation. It's also expensive to add or remove objects from the start or the middle of the vector. We can do better. We will begin our study of more advanced data structures with lists. These are structures whose elements are in a linear order.

Singly Linked Lists

These came up just briefly last time as a motivation for iterators. Most of you have seen the idea of a linked list: [diagram: a head reference pointing to a chain of list objects]. This structure is made up of a pointer to the first list element and a collection of list elements. The structure that makes up a list element has two fields:

1. value: the Object which is stored at that list element's position in the list.
2. next: a pointer to the next list element, or null for the last element.

So the data for a very basic linked structure could look like this:

class SimpleListNode<E> {
    protected E value;
    protected SimpleListNode<E> next;
}

public class SimpleLinkedList<E> {
    protected SimpleListNode<E> head;
}

As we saw with the VectorIterator, public is not specified in the class definition, since we aren't allowing regular users to create one of these, only a SimpleLinkedList<E>. So if we want to create one of these, it's very easy. We just construct a SimpleLinkedList<E> and set its head to null.

public SimpleLinkedList() {
    head = null;
}

How about adding an element? This involves two steps:

1. construct a new list node for the element
2. insert the new list node into the list

Let's think about what this will mean. When we add our first element, say a 1, we want this list to go from just an empty head reference to a node pointed at by head which has the 1 as its value and null as its next. Now, we add another element, say 2. We have two choices. We can add at the beginning or at the end. Now, we add another element, 3. Now we have three choices: beginning, middle, or end. In general, we can add at position 0, 1, or 2. Construction of the new list node is easy, once we know what to set its next pointer to. Here's a constructor:

public SimpleListNode(E value, SimpleListNode<E> next) {
    this.value = value;
    this.next = next;
}

We'll see that we will need to be able to set and retrieve the value and the next pointer. We'll call the accessors value() and next(), and the mutators setValue() and setNext(). We would like to allow additions to any place in our list, so we will develop a general add method that deals with all three of the cases described above. We'll need to provide our
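The notes cut off at this point. As a rough sketch of where they seem to be headed — walk the next references to the insertion point and splice in a new node — here is the idea expressed in Python; the course's actual code is Java, and the names below are only placeholders.

class SimpleListNode:
    def __init__(self, value, next_node=None):
        self.value = value
        self.next_node = next_node

class SimpleLinkedList:
    def __init__(self):
        self.head = None

    def add(self, index, value):
        # case 1: adding at the front (also covers the empty list)
        if index == 0:
            self.head = SimpleListNode(value, self.head)
            return
        # cases 2 and 3: walk to the node just before the insertion point,
        # then splice the new node between it and its successor
        finger = self.head
        for _ in range(index - 1):
            finger = finger.next_node
        finger.next_node = SimpleListNode(value, finger.next_node)

lst = SimpleLinkedList()
lst.add(0, 1)   # list is now [1]
lst.add(1, 2)   # add at the end: [1, 2]
lst.add(1, 3)   # add in the middle: [1, 3, 2]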
https://www.coursehero.com/file/5788235/lists/
A new Twitter SSB June 12, 2010 at 9:13 PM by Dr. Drang With the coming requirement to use OAuth to sign on to Twitter, I’ve had to abandon my Dr. Twoot client program and have shifted to a new Fluid-based SSB of the Twitter home page. I’m using a userscript to reformat the page and make it narrower. Two reasons I couldn’t adapt Dr. Twoot to OAuth: - I’ve read in several places, including in Twitter’s developer pages, that JavaScript isn’t appropriate for OAuth because it exposes certain tokens that are supposed to be secret. Dr. Twoot is written in JavaScript. - As important, I just can’t find an OAuth tutorial that I understand. Everything I’ve seen is all about Consumers and Users and Providers and getting credentials from one site to use on another. I’m not interested in any of these two-cushion internet bank shots; I just want to log in to one site and use it. So, barring a last-minute change in Twitter policy, Dr. Twoot will stop working at the end of the month. I’ve looked into other Twitter desktop clients, but haven’t found any I like. I’m dead set against installing Adobe Air on my computer, which cuts down the number of available applications considerably. I tried Tweetie, but for some reason it isn’t nearly as compelling on the desktop as it is/was on the iPhone. I liked the customization options of Kiwi, but didn’t like any of the themes that came with it. I didn’t want to buy an application and then spend a bunch of time programming just to get an acceptable look. Which led me to Fluid. In a minute I had a new application—called, with a certain lack of imagination, Twitter—that was a site-specific browser for the Twitter home page. It didn’t have the look I wanted, but I eventually hammered out a userscript that gave me a nice narrow view of the timeline that I could keep over by the right edge of my screen. The sidebar is still available by scrolling to the right. As you can see, I made the tweet input area taller and added the IM Fell font for rendering @DrSamuelJohnson’s tweets. Less obvious, maybe, is that I’ve boosted the font size and the line spacing a bit to make all the tweets more readable. I knew from my earlier Twitter userscript that the Twitter home page loads jQuery, which should have made the reformatting easy. Unfortunately, the layout of the Twitter home page is a mess. You’d think it would be clean and simple: there’s a background image, a header, a footer, and a two-column content area—not much more complex than this site, really. But the HTML of the Twitter home page is a nightmare of <div>s within <div>s. Some contents are allowed to spill out of their containers, others aren’t. The two columns that make up the main portion of the page comprise a one-row <table>, which has a very 1990s vibe. My first experiments in narrowing the timeline were very frustrating. Everything I tried that should have worked didn’t. I started Googling to see if anyone had solved this problem before me. As it happens, there’s a userscript called Endless Tweets by Mislav Marohnić that, among many other things, very cleverly reformats the Twitter page to a narrow layout when the user resizes the window to a small width. I didn’t want to use Endless Tweets itself—it had some minor rendering problems in my tests—but it gave me hope that I could get the timeline narrowed if I just kept at it. 
Success eventually came when I opened the Twitter home page in CSSEdit, a seldom-used (by me, anyway) application that came with the MacHeist 2 bundle, and started playing around with the widths of various elements. I kept track of the tests that worked and incorporated them into my userscript. Here it is:

 1: // ==UserScript==
 2: // @name twitdrang
 3: // @namespace
 4: // @description Makes Twitter narrower.
 5: // @include *
 6: // @author Dr. Drang (
 7: // ==/UserScript==
 8:
 9: function getIMFell() {
10:   $("head link:last").after('<link href="" rel="stylesheet" type="text/css">');
11: }
12:
13: function twitdrang() {
14:   $("#container").css({'width':'600px'});
15:   $("#header").css({'width':'600px'});
16:   $("div#wrapper").css({'width':'400px'});
17:   $("#side_base").css({'width':'200px'});
18:   $("ol.statuses li").css({'width':'400px'});
19:   $("fieldset.common-form").css({'width':'400px'});
20:   $("#update_notifications").css({'width':'250px'});
21:   $("ol.statuses span.status-body").css({'width':'300px'});
22:   $("fieldset.common-form textarea").css({'width':'375px'});
23:   $(".actions-hover li").css({'width':'20px'});
24:   // $(".actions-hover .reply").css({'float':'right'});
25:   // $(".actions-hover .retweet-link").css({'float':'right'});
26:   $(".actions-hover .retweet-link a").css({'display':'none'});
27:   $(".actions-hover .reply a").css({'display':'none'});
28:   $(".actions-hover .retweet-link a").css({'display':'none'});
29:   $(".actions-hover .del a").css({'display':'none'});
30:   $("ol#timeline .status-content").css({"font-family": "Lucida Grande", "font-size": "15px", "line-height": "1.3"});
31:   $("textarea#status").css({"line-height": "1.4", "height": "4.2em"});
32:   $("li.u-DrSamuelJohnson span.entry-content").css({"font-family": "IM Fell English", "font-size": "120%"});
33:   $("li.u-DrSamuelJohnson.latest-status span.entry-content").css({"font-family": "IM Fell English", "font-size": "175%"});
34: }
35:
36: if (window.fluid) {
37:   getIMFell();
38:   twitdrang();
39: }
40:
41: $(window).scroll( function() {
42:   twitdrang();
43: });

If you're interested in a narrow Twitter SSB, follow the Fluid instructions to make an application that points to, and then add this to the Userscripts folder. I call it twitdrang.user.js, but you can name it whatever you want. If you're reading this well after the day it was posted, you should look at the twitdrang GitHub repository because it'll have the latest version.

You may find that any time you post a new tweet, or update the timeline by clicking the "xx new tweets" link, the new tweets will not be formatted to the narrower width. That's because those events don't trigger the userscript. I haven't yet figured out how to get that to work, so in the meantime there's a kluge. See Lines 41–43 in the code above? They cause the reformatting function to be run whenever the window is scrolled. So if you just wiggle the scrollbar after the new posts arrive, they'll narrow themselves down just like the others.
https://leancrew.com/all-this/2010/06/a-new-twitter-ssb/
I created a very simple C++ console application which compares a user-entered password with a hard-coded one and prints the corresponding output.

#include "stdafx.h"
#include <iostream>
#include <string>

using namespace std;

int _tmain(int argc, _TCHAR* argv[])
{
    string password;
    cout << "Enter the password" << endl;
    getline(cin, password);

    if (password == "123") {
        cout << "correct password" << endl;
    } else {
        cout << "incorrect password" << endl;
    }
    return 0;
}

I want to debug it and skip a specific line. For that I want to find the line that prints "correct password" / "incorrect password". So I tried the "Search for all referenced text strings" option, but the problem is that no such text shows up. I followed some tutorials where they search for referenced text and find hard-coded strings like these.
http://www.howtobuildsoftware.com/index.php/how-do/bUe2/c-reverse-engineering-ollydbg-olly-debugger-cannot-find-referenced-text
Calculator

Hey guys! I made this calculator, which I think works pretty well. It's a simple graphical calculator with the usual stuff like addition, subtraction, multiplication, division, percentage, square root and some other things. It should work on iPad and iPhone. Let me know what you think, and if there are any bugs!

Very nice! I like how this has less than 100 lines of code. :) Perhaps you could add some sort of feedback for invalid input, for example like this (makes the text red and plays an 'error' sound):

Thanks! That's a great idea :) I'm glad I found the Button class, but I haven't found it in the docs. I found it kind of by accident as I created a Button class myself, and then I removed the class but the script still worked XD I've fixed one bug though: if you pressed the "%" button and then typed a number, like for instance 25, it would suddenly change to 0.025. I fixed it by switching from "continue" to "pass".

Oh, I didn't even notice you used the Button class – sneaky! ;) It's not documented because it's somewhat incomplete, but if you're curious how it works, you could try this in the console:

import inspect
import scene
print inspect.getsource(scene.Button)

(this can also come in handy for other things) You'll notice that it has an action attribute. That's actually a function that gets called when the button is tapped, and you could use it to get rid of all your manual touch handling code. Here's another modified version that takes advantage of this:

Awesome! That shortened it a lot :) Thanks for the great info @omz! Also, the inspect module was really handy ;)

@omz and Sebastian, nice code. Thanks. Please, @omz: how do you change the text size of the TextLayer in your undocumented Button class? And how about changing the button pads' text with context?

@jose3f Neither TextLayer nor Button (which uses a TextLayer internally) support changing the text after initialization. When you look at the source code for TextLayer with the inspect module, you'll see that it's a very simple class (under 10 lines of code) that basically just renders the given text as an image (using render_text()) and sets the result as the Layer's image attribute, adjusting the size of the layer accordingly. If you wanted to create a TextLayer subclass that allows changing the text, a simple implementation could look like this:

class MutableTextLayer (TextLayer):
    def set_text(self, text, font, font_size):
        img, size = render_text(text, font, font_size)
        self.image = img
        self.frame = Rect(self.frame.x, self.frame.y, size.w, size.h)

@omz: The problem with subclassing TextLayer is that the Button class calls TextLayer at init, not MutableTextLayer. So IMHO subclassing Button is also required. Thanks.

- eliskan175 @Sebastian - Very awesome. I never knew about how to use eval. Did some research and I wish I had known this sooner. I had written genetic algorithms and probably should have used eval to calculate their strings. Ended up basically writing a function to do the same thing :( The more you know.

- eliskan175 Yeah, my function ended up being a 20 line function that created a NEW function at runtime.. it was a solution I found that had worked for my problem, but really a short eval would have done the trick. This program you wrote here was very elegant. I wish I could program that cleanly
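For anyone curious about the eval() approach mentioned in the last couple of comments, here is a minimal, hypothetical illustration of the idea (not Sebastian's actual calculator code): the calculator builds up an expression string from button presses and then lets Python evaluate it.

expression = "12*5+3"               # in the real calculator this is built up from button presses
try:
    result = eval(expression)       # parse and evaluate the string as a Python expression
    print(result)                   # 63
except (SyntaxError, ZeroDivisionError):
    # the kind of invalid input the red-text/error-sound feedback above is meant to catch
    print("invalid input")

In a real app you would also want to restrict what eval can see, but for a small calculator this is essentially the whole trick.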
https://forum.omz-software.com/topic/170/calculator
JavaScript/Print version Contents[edit] - Basics - Placing the Code - The scriptelement - Bookmarklets - Lexical Structure - Reserved Words - Variables and Types - Operators - Control Structures - Functions and Objects - Event Handling - Program Flow - Regular Expressions Introduction[edit]. First Program[edit] Here> alert("Hello World!"); </script> </head> <body> <p>The content of the web page.</p> </body> </html> This basic hello World program can then be used as a starting point for any new programs that you need to create. Exercises[edit] Exercise 1-1[edit][edit]. The SCRIPT Tag[edit] The script element[edit]. Inline JavaScript[edit] Using inline JavaScript allows you to easily work with HTML and JavaScript within the same page. This is commonly used for temporarily testing out some ideas, and in situations where the script code is specific to that one page. <script> // JavaScript code here </script> Inline HTML comment markers[edit]. Inline XHTML JavaScript[edit] In XHTML, the method is somewhat different: <script> // <![CDATA[ // [Todo] JavaScript code here! // ]]> </script> Note that the <![CDATA[ tag is commented out. The // prevents the browser from mistakenly interpreting the <![CDATA[ as a JavaScript statement. (That would be a syntax error). Linking to external scripts[edit] named "script.js" and is located in a directory called "js", your src would be "js/script.js". Location of script elements[edit]] suggested by the Yahoo! Developer Network that specify a different placement for script tags: to put scripts at the bottom, just before the </body> tag. This speeds up downloading, and also allows for direct manipulation of the DOM while the page is loading. It is also a good practice to separate HTML documents from CSS code for easier management. <!DOCTYPE html> <html> <head> <title>Web page title</title> </head> <body> <!-- HTML code here --> <script src="script.js"></script> </body> </html> Controlling external script evaluation and parser blocking[edit] By default, JavaScript execution is "parser blocking". When the browser encounters a script in the document, it must pause Document Object Model (DOM) construction, hand over control to the JavaScript runtime, and let the script execute before proceeding with DOM construction.[3] As an alternative to placing scripts at the bottom of the document body, loading and execution of external scripts may be controlled using async or defer attributes. Asynchronous external scripts are loaded and executed in parallel with document parsing. The script will be executed as soon as it is available.[4] <!DOCTYPE html> <html> <head> <title>Web page title</title> <script async</script> </head> <body> <!-- HTML code here --> </body> </html> Deferred external scripts are loaded in parallel with document parsing, but script execution is deferred until after the document is fully parsed.[5] <!DOCTYPE html> <html> <head> <title>Web page title</title> <script defer</script> </head> <body> <!-- HTML code here --> </body> </html> Reference[edit] - ↑ w:JavaScript#History and naming - ↑ Yahoo: best practices for speeding up your web site - ↑ Google: Adding Interactivity with JavaScript - ↑ Mozilla: The Script element - ↑ Mozilla: The Script element Bookmarklets[edit] Bookmarklets are one line scripts stored in the URL field of a bookmark. Bookmarklets have been around for a long time so they will work in older browsers., as. 
Lexical Structure[edit] typical amount of called, to the right of the equals sign: 1A2B3C is an invalid identifier, as it starts with a number. Naming variables[edit] References[edit] - ↑ Standard ECMA-262 ECMAScript Language Specification, Chapter 7.9 - Automatic Semicolon Insertion Reserved Words[edit] This page contains a list of reserved words in JavaScript, which cannot be used as names of variables, functions or other objects. Variables and Types[edit] JavaScript is a loosely typed language. This means that you can use the same variable for different types of information, but you may also have to check what type a variable is yourself, if the differences matter. For example, if you wanted to add two numbers, but one variable turned out to be a string, the result wouldn't necessarily be what you expected. Variable declaration[edit][edit] Primitive types are types provided by the system, in this case by JavaScript. Primitive type for JavaScript are Booleans, numbers and text. In addition to the primitive types, users may define their own classes. The primitive types are treated by JavaScript as value types and when passed to a function, they are passed as values. Some types, such as string, allow method calls. Boolean type[edit] Boolean variables can only have two possible values, true or false. var mayday = false; var birthday = true; Numeric types[edit] You can use an integer and double types on your variables, but they are treated as a numeric type. var sal = 20; var pal = 12.1; In the ECMA JavaScript specification, your number literals can go from 0 to -+1.79769e+308. And because 5e-324 is the smallest infinitesimal you can get, anything smaller is rounded to 0. String types[edit] The String and char types are all strings, so you can build any string literal that you wished for. var myName = "Some Name"; var myChar = 'f'; Complex types[edit] A complex type is an object, be it either standard or custom made. Its home is the heap and is always passed by reference. Array type[edit]]; There is no limit to the number of items that can be stored in an array. Object types[edit] An object within JavaScript is created using the new operator: var myObject = new Object(); Objects can also be created with the object notation, which uses curly braces: var myObject = {}; JavaScript objects can implement inheritance and support overriding, and you can use polymorphism. There are no scope modifiers, with all properties and methods having public access. More information on creating objects can be found in Object Oriented Programming. You can access browser built-in objects and objects provided through browser JavaScript extensions. Scope[edit] scope[edit] scope[edit][edit] Further reading[edit] - "Values, variables, and literals". MDN. 2013-05-28.. Retrieved 2013-06-20. Numbers[edit] JavaScript implements numbers as floating point values, that is, they're attaining decimal values as well as whole number values.. Further reading[edit] Strings[edit] A string is a type of variable that stores a string (chain of characters). Basic use[edit] To make a new string, you can make a variable and give it a value of new String(). var foo = new String(); But, most developers skip that part and use a string literal: var foo = "my string"; After you have made your string, you can edit it as you like: foo = "bar"; // foo = "bar" foo = "barblah"; // foo = "barblah" foo += "bar"; // foo = "barblahbar" A string literal is normally delimited by the ' or " character, and can normally contain almost any character. 
Common convention differs on whether to use single quotes or double quotes for strings. Some developers are for single quotes (Crockford, Amaram, Sakalos, Michaux), while others are for double quotes (NextApp, Murray, Dojo). Whichever method you choose, try to be consistent in how you apply it. Due to the delimiters, it's not possible to directly place either the single or double quote within the string when it's used to start or end the string. In order to work around that limitation, you can either switch to the other type of delimiter for that case, or place a backslash before the quote to ensure that it appears within the string: foo = 'The cat says, "Meow!"'; foo = "The cat says, \"Meow!\""; foo = "It's \"cold\" today."; foo = 'It\'s "cold" today.'; Properties and methods of the String() object[edit] As with all objects, Strings have some methods and properties. concat(text)[edit] The concat() function joins two strings. var foo = "Hello"; var bar = foo.concat(" World!") alert(bar); // Hello World! length[edit] Returns the length as an integer. var foo = "Hello!"; alert(foo.length); // 6 indexOf[edit] Returns the first occurrence of a string inside of itself, starting with 0. If the search string cannot be found, -1 is returned. The indexOf() method is case sensitive. var foo = "Hello, World! How do you do?"; alert(foo.indexOf(' ')); // 6 var hello = "Hello world, welcome to the universe."; alert(hello.indexOf("welcome")); // 13 lastIndexOf[edit] Returns the last occurrence of a string inside of itself, starting with index 0.. If the search string cannot be found, -1 is returned. var foo = "Hello, World! How do you do?"; alert(foo.lastIndexOf(' ')); // 24 replace(text, newtext)[edit] The replace() function returns a string with content replaced. Only the first occurrence is replaced. var foo = "foo bar foo bar foo"; var newString = foo.replace("bar", "NEW!") alert(foo); // foo bar foo bar foo alert(newString); // foo NEW! foo bar foo As you can see, the replace() function only returns the new content and does not modify the 'foo' object. slice(start[, end])[edit] Slice extracts characters from the start position. "hello".slice(1); // "ello" When the end is provided, they are extracted up to, but not including the end position. "hello".slice(1, 3); // "el" Slice allows you to extract text referenced from the end of the string by using negative indexing. "hello".slice(-4, -2); // "el" Unlike substring, the slice method never swaps the start and end positions. If the start is after the end, slice will attempt to extract the content as presented, but will most likely provide unexpected results. "hello".slice(3, 1); // "" substr(start[, number of characters])[edit] substr extracts characters from the start position, essentially the same as slice. "hello".substr(1); // "ello" When the number of characters is provided, they are extracted by count. "hello".substr(1, 3); // "ell" substring(start[, end])[edit] substring extracts characters from the start position. "hello".substring(1); // "ello" When the end is provided, they are extracted up to, but not including the end position. "hello".substring(1, 3); // "el" substring always works from left to right. If the start position is larger than the end position, substring will swap the values; although sometimes useful, this is not always what you want; different behavior is provided by slice. "hello".substring(3, 1); // "el" toLowerCase()[edit] This function returns the current string in lower case. 
var foo = "Hello!"; alert(foo.toLowerCase()); // hello! toUpperCase()[edit] This function returns the current string in upper case. var foo = "Hello!"; alert(foo.toUpperCase()); // HELLO! Escape Sequences[edit] Escape sequences are very useful tools in editing your code in order to style your output of string objects, this improves user experience greatly.[1] \bbackspace (U+0008 BACKSPACE) \f: form feed (U+000C FORM FEED) \n: line feed (U+000A LINE FEED) \r: carriage return (U+000D CARRIAGE RETURN) \t: horizontal tab (U+0009 CHARACTER TABULATION) \v: vertical tab (U+000B LINE TABULATION) \0: null character (U+0000 NULL) (only if the next character is not a decimal digit; else it’s an octal escape sequence) \': single quote (U+0027 APOSTROPHE) \": double quote (U+0022 QUOTATION MARK) \\: backslash (U+005C REVERSE SOLIDUS) Further reading[edit] Dates[edit]][2] - getHours(): Returns hours based on a 24 hour clock.[3] - getMinutes():Returns minutes based on [0 - 59][4] - getSeconds():Returns seconds based on [0 - 59][5] - getTime(): Gets the time in milliseconds since January 1, 1970. -[edit] - JavaScript Date Object, developer.mozilla.org Arrays[edit] An array is a type of variable that stores a collection of variables. Arrays in JavaScript are zero-based - they start from zero. (instead of foo[1], foo[2], foo[3], JavaScript uses foo[0], foo[1], foo[2].) Overview[edit][edit][edit] Make an array with "zzz" as one of the elements, and then make an alert box using that element. Nested arrays[edit] You can also put an array within[edit]"] Note that in this example the new arr3 array contains the contents of both the arr1 array and the arr2 array. join() and split()[edit] The Array Object's join() method returns a single string which contains all of the elements of an array — separated by a specified delimiter. If the delimiter is not specified, it is set to a comma. The String object's split() method returns an array in which the contents of the supplied string become the array elements — each element separated from the others based on a specified string Array pop() method removes and returns the last element of an array. The Array shift() method removes and returns the first element of an array. The length property of the array is changed by both the pop and shift" Further reading[edit] Operators[edit] Arithmetic operators[edit] JavaScript has the arithmetic operators +, -, *, /, and %. These operators function as the addition, subtraction, multiplication, division, and modulus operators, and operate very similarly to other languages. Multiplication and division operators will be calculated before addition and subtraction. Operations in parenthesis will be calculated first. var a = 12 + 5; // 17 var b = 12 - 5; // 7 var c = 12*5; // 60 var d = 12/5; // 2.4 - division results in floating point numbers. var e = 12%5; // 2 - the remainder of 12/5 in integer math is 2. var f = 5 -2 * 4 // -3 - multiplication is calculated first. var g = (2+2) / 2 // 2 - Parenthesis are calculated first..[6]:[7] - ↑ - ↑ - ↑ - ↑ - ↑ - ↑ W3Schools: JavaScript Object Properties - ↑ "typeof" (in English) (HTML). Mozilla Corporation. 2014-11-18. Archived from the original on 2014-11-18.. Retrieved 2015-03-05. Control Structures[edit] The control structures within JavaScript allow the program flow to change within a unit of code or function. These statements can determine whether or not given statements are executed - and provide the basis for the repeated execution of a block of code. 
Most of the statements listed below are so-called conditional statements that can operate either on a statement or on a block of code enclosed with braces ({ and }). The structure provided by the use of conditional statements utilizes Booleans to determine whether or not a block gets executed. In this use of Booleans, any defined variable that is neither zero nor an empty string will be evaluated as true. Conditional statements[edit] if[edit][edit] while block. The continue keyword finishes the current iteration of the while block or statement, and checks the condition to see, if it is true. If it is true, the loop commences again. do … while[edit]. In other words, break exits the loop, and continue checks the condition before attempting to restart the loop. for[edit] object elements accessed by this version is arbitrary. For instance, this structure can be used to loop through all the properties of an object instance. It should not be used when the object is of Array type switch[edit]. A slightly different usage of the switch statement can be found at the following link: Omitting the break can be used to test for more than one value at a time: switch(i) { case 1: case 2: case 3: // … break; case 4: // … break; default: // … break; } In this case the program will run the same code in case i equals 1, 2 or 3. with[edit] The with statement is used to extend the scope chain for a block[1] and has the following syntax: with (expression) { // statement } Pros[edit] The with statement can help to - reduce file size by reducing the need to repeat a lengthy object reference, and - relieve the interpreter of parsing repeated object references. However, in many cases, this can be achieved by using a temporary variable to store a reference to the desired object. Cons[edit][edit] var area; var r = 10; with (Math) { a = PI*r*r; // == a = Math.PI*r*r x = r*cos(PI); // == a = r*Math.cos(Math.PI); y = r*sin(PI/2); // == a = r*Math.sin(Math.PI/2); } Functions and Objects[edit] Functions[edit] A function is an action to take to complete a goal, objective, or task. Functions allow you to split a complex goal into simpler tasks, which make managing and maintaining scripts easier. Parameters or arguments can be used to provide data, which is passed to a function to effect the action to be taken. The Parameters or arguments are placed inside the parentheses, then the function is closed with a pair of curly braces. The block of code to be executed is placed inside the curly braces. parameter calling a function within an html is called with an argument of 6. See Also[edit] Event Handling[edit] Event Handlers[edit] An event that can be handled is something happening in a browser window, including a document loading, the user clicking a mouse button, the user pressing a key, and the browser screen changing size. When a function is assigned to handle an event type, that function is run when an event of the event type occurs. An event handler can be assigned in the following ways: - Via an element attribute directly in HTML: <body onload="alert('Hello World!');"> - Via JavaScript, by assigning the event type to an element attribute: document.onclick = clickHandler; - Via JavaScript by a direct call to the addEventListener() method of an element. 
A handler that is assigned from a script uses the syntax '[element].[event] = [function];', where [element] is a page element, [event] is the name of the selected event and [function] is the name of the function that is called Regular Expressions[edit] Overview[edit] JavaScript implements regular expressions (regex for short) when searching for matches within a string. As with other scripting languages, this allows searching beyond a simple letter-by-letter match, and can even be used to parse strings in a certain format. Unlike strings, regular expressions are delimited by the slash (/) character, and may have some options appended. Regular expressions most commonly appear in conjunction with the string.match() and string.replace() methods. At a glance, by example: strArray = "Hello world!".match(/world/); // Singleton array; note the slashes strArray = "Hello!".match(/l/g); // Matched strings are returned in a string array "abc".match(/a(b)c/)[1] === "b" // Matched subgroup is the 2nd item (index 1) str1 = "Hey there".replace(/Hey/g, "Hello"); str2 = "N/A".replace(/\//g, ","); // Slash is escaped with \ str3 = "Hello".replace(/l/g, "m").replace(/H/g, "L").replace(/o/g, "a"); // Pile if (str3.match(/emma/)) { console.log("Yes"); } if (str3.match("emma")) { console.log("Yes"); } // Quotes work as well "abbc".replace(/(.)\1/g, "$1") === "abc" // Backreference (?=...), (?!...), (?<=...), and (?<!...) are not available. Examples[edit] - Matching - string = "Hello world!".match(/world/); - stringArray = "Hello world!".match(/l/g); // Matched strings are returned in a string array - "abc".match(/a(b)c/)[1] => "b" // Matched subgroup is the second member (having the index "1") of the resulting array - Replacement - string = string.replace(/expression without quotation marks/g, "replacement"); - string = string.replace(/escape the slash in this\/way/g, "replacement"); - string = string.replace( ... ).replace ( ... ). replace( ... ); - Test - if (string.match(/regexp without quotation marks/)) { Modifiers[edit] Single-letter modifiers: classicText = "To be or not to be?"; var changedClassicText = classicText.replace(/\W[a-zA-Z]+/g, capitalize); console.log(changedClassicText==="To Be Or Not To Be?"); Reference at W3schools.com - JavaScript RexExp Tester at regular-expressions.info - Regular Expressions in Javascript at mozilla.org - JavaScript RegExp Object at mozilla.org Optimization[edit] JavaScript optimization[edit] Optimization Techniques[edit] - High Level Optimization - Algorithmic Optimization (Mathematical Analysis) - Simplification - Low Level Optimization - Loop Unrolling - Strength Reduction - Duff's Device - Clean Loops - External Tools & Libraries for speeding/optimizing/compressing JavaScript code Common Mistakes and Misconceptions[edit] String concatenation[edit] Strings in JavaScript are immutable objects. This means that once you create a string object, to modify it, another string object must theoretically be created. Now, suppose you want to perform a ROT-13 on all the characters in a long string. Supposing you have a rot13() function, the obvious way to do this might be: var s1 = "the original string"; var s2 = ""; for (i = 0; i < s1.length; i++) { s2 += rot13(s1.charAt(i)); } Especially in older browsers like Internet Explorer 6, this will be very slow. This is because, at each iteration, the entire string must be copied before the new letter is appended. 
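For reference, the rot13() helper assumed in these snippets is never shown in the text; it might look something like this (a sketch, not the original author's version):

function rot13(c) {
    var code = c.charCodeAt(0);
    if (code >= 65 && code <= 90)   // A-Z
        return String.fromCharCode((code - 65 + 13) % 26 + 65);
    if (code >= 97 && code <= 122)  // a-z
        return String.fromCharCode((code - 97 + 13) % 26 + 97);
    return c;                       // leave everything else unchanged
}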
One way to make this script faster might be to create an array of characters, then join it: var s1 = "the original string"; var a2 = new Array(s1.length); var s2 = ""; for (i = 0; i < s1.length; i++) { a2[i] = rot13(s1.charAt(i)); } s2 = a2.join(''); Internet Explorer 6 will run this code faster. However, since the original code is so obvious and easy to write, most modern browsers have improved the handling of such concatenations. On some browsers the original code may be faster than this code. A second way to improve the speed of this code is to break up the string being written to. For instance, if this is normal text, a space might make a good separator: var s1 = "the original string"; var c; var st = ""; var s2 = ""; for (i = 0; i < s1.length; i++) { c = rot13(s1.charAt(i)); st += c; if (c == " ") { s2 += st; st = ""; } } s2 += st; This way the bulk of the new string is copied much less often, because individual characters are added to a smaller temporary string. A third way to really improve the speed in a for loop, is to move the [array].length statement outside the condition statement. In face, every occurrence, the [array].length will be re-calculate For a two occurrences loop, the result will not be visible, but (for example) in a five thousand occurrence loop, you'll see the difference. It can be explained with a simple calculation : // we assume that myArray.length is 5000 for (x = 0;x < myArray.length;x++){ // doing some stuff } "x = 0" is evaluated only one time, so it's only one operation. "x < myArray.length" is evaluated 5000 times, so it is 10,000 operations (myArray.length is an operation and compare myArray.length with x, is another operation). "x++" is evaluated 5000 times, so it's 5000 operations. There is a total of 15 001 operation. // we assume that myArray.length is 5000 for (x = 0, l = myArray.length; x < l; x++){ // doing some stuff } "x = 0" is evaluated only one time, so it's only one operation. "l = myArray.length" is evaluated only one time, so it's only one operation. "x < l" is evaluated 5000 times, so it is 5000 operations (l with x, is one operation). "x++" is evaluated 5000 times, so it's 5000 operations. There is a total of 10002 operation. So, in order to optimize your for loop, you need to make code like this : var s1 = "the original string"; var c; var st = ""; var s2 = ""; for (i = 0, l = s1.length; i < l; i++) { c = rot13(s1.charAt(i)); st += c; if (c == " ") { s2 += st; st = ""; } } s2 += st; Debugging[edit] JavaScript Debuggers[edit] Firebug[edit] - Firebug is a powerful extension for Firefox that has many development and debugging tools including JavaScript debugger and profiler. Venkman JavaScript Debugger[edit] - Venkman JavaScript Debugger (for Mozilla based browsers such as Netscape 7.x, Firefox/Phoenix/Firebird and Mozilla Suite 1.x) - Introduction to Venkman - Using Breakpoints in Venkman Internet Explorer debugging[edit] -[edit].[2] JTF: JavaScript Unit Testing Farm[edit] - JTF is a collaborative website that enables you to create test cases that will be tested by all browsers. It's the best way to do TDD and to be sure that your code will work well on all browsers. jsUnit[edit] built-in debugging tools[edit] Some people prefer to send debugging messages to a "debugging console" rather than use the alert() function[2][3][4]. Following is a brief list of popular browsers and how to access their respective consoles/debugging tools. - Firefox: Ctrl+Shift+K opens an error console. 
- Opera (9.5+): Tools >> Advanced >> Developer Tools opens Dragonfly. - Chrome: Ctrl+Shift+J opens chrome's "Developer Tools" window, focused on the "console" tab. - Internet Explorer: F12 opens a firebug-like Web development tool that has various features including the ability to switch between the IE8 and IE7 rendering engines. - Safari: Cmd+Alt+C opens the WebKit inspector for Safari. Common Mistakes[edit] -: alert('He's eating food');should be -[edit] Debugging in JavaScript doesn't differ very much from debugging in most other programming languages. See the article at Computer Programming Principles/Maintaining/Debugging. Following Variables as a Script is Running[edit] The most basic way to inspect variables while running is a simple alert() call. However some development environments allow you to step through your code, inspecting variables as you go. These kind of environments may allow you to change variables while the program is paused. Browser Bugs[edit] Sometimes the browser is buggy, not your script. This means you must find a workaround. browser-dependent code[edit]]; } References[edit] - ↑ Sheppy, Shaver et al. (2014-11-18). "with" (in English) (HTML). Mozilla. Archived from the original on 2014-11-18.. Retrieved 2015-03-18. - ↑ "Safari - The best way to see the sites." (in English) (HTML). Apple.. Retrieved 2015-03-09. Further reading[edit] - "JavaScript Debugging" by Ben Bucksch DHTML[edit] DHTML (Dynamic HTML) is a combination of JavaScript, CSS and HTML. alert messages[edit] <script type="text/javascript"> alert('Hello World!'); </script> This will give a simple alert message. <script type="text/javascript"> prompt('What is your name?'); </script> This will give a simple prompt message. <script type="text/javascript"> confirm('Are you sure?'); </script> This will give a simple confirmation message. Javascript Button and Alert Message Example:[edit] Sometimes it is best to dig straight in with the coding. Here is an example of a small piece of code: <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>"THE BUTTON" - Javascript</title> <script type="text/javascript"> x = 'You have not pressed "THE BUTTON"' function bomb() { alert('O-GOD NOOOOO, WE ARE ALL DOOMED!!'); alert('10'); alert('9'); alert('8'); alert('7'); alert('6'); alert('5'); alert('4'); alert('3'); alert('2'); alert('1'); alert('!BOOM!'); alert('Have a nice day. :-)'); x = 'You pressed "THE BUTTON" and I told you not to!'; } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } </style> </head> <body> <div> <input type="button" value="THE BUTTON - Don't Click It" onclick="bomb()">> What does this code do? When it loads it tells what value the variable 'x' should have. The next code snippet is a function that has been named "bomb". The body of this function fires some alert messages and changes the value of 'x'. The next part is mainly HTML with a little javascript attached to the INPUT tags. The "onclick" property tells its parent what has to be done when clicked. The bomb function is assigned to the first button, the second button just shows an alert message with the value of x. 
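The second button referred to in the last sentence is not visible in the excerpt above; based on the description, it presumably looked something like this (the label text is an assumption):

<input type="button" value="Current status of THE BUTTON" onclick="alert(x)">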
Javascript if() - else Example[edit] <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>The Welcome Message - Javascript</title> <script type="text/javascript"> function wlcmmsg() { name = prompt('What is your name?', ''); correct = confirm('Are you sure your name is ' + name + ' ?'); if (correct == true) { alert('Welcome ' + name); } else { wlcmmsg(); } } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } </style> </head> <body onload="wlcmmsg()" onunload="alert('Goodbye ' + name)"> <p> This script is dual-licensed under both, <a href="">GFDL</a> and <a href="GNU General Public License">GPL</a>. See <a href="">Wikibooks</a> </p> </body> </html> Two Scripts[edit] Now, back to the first example. We have modified the script adding a different welcome message. This version requests the user to enter a name. They are also asked if they want to visit the site. Some CSS has also been added to the button. <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>"THE BUTTON" - Javascript</title> <script type="text/javascript"> // global variable x x = 'You have not pressed "THE BUTTON"'; function bomb() { alert('O-GOD NOOOOO, WE ARE ALL DOOMED!!'); alert('3'); alert('2'); alert('1'); alert('!BOOM!'); alert('Have a nice day. :-)'); x = 'You pressed "THE BUTTON" and I told you not too!'; } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } </style> </head> <body onload="welcome()"> <script type="text/javascript"> function welcome() { var name = prompt('What is your name?', ''); if (name == "" || name == "null") { alert('You have not entered a name'); welcome(); return false; } var visit = confirm('Do you want to visit this website?') if (visit == true) { alert('Welcome ' + name); } else { window.location=history.go(-1); } } </script> <div> <input type="button" value="THE BUTTON - Don't Click It" onclick="bomb()" STYLE="color: #ffdd00; background-color: #ff0000">> Simple Calculator[edit] <!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN" ""> <html lang="en"> <head> <title>Calculator</title> <script type="text/javascript"> function multi() { var a = document.Calculator.no1.value; var b = document.Calculator.no2.value; var p = (a*b); document.Calculator.product.value = p; } function divi() { var d = document.Calculator.dividend.value; var e = document.Calculator.divisor.value; var q = (d/e); document.Calculator.quotient.value = q; } function circarea() { var r = document.Calculator.radius.value;756 48233786783165; var a = pi*(r*r); document.Calculator.area.value = a; var c = 2*pi*r; document.Calculator.circumference.value = c; } </script> <style type="text/css"> body { background-color:#00aac5; color:#000 } label { float:left; width:7em } </style> </head> <body> <h1>Calculator</h1> <form name="Calculator" action=""> <fieldset> <legend>Multiply</legend> <input type="text" name="no1"> × <input type="text" name="no2"> <input type="button" value="=" onclick="multi()"> <input type="text" name="product"> </fieldset> <fieldset> <legend>Divide</legend> <input type="text" name="dividend"> ÷ <input type="text" name="divisor"> <input type="button" value="=" onclick="divi()"> <input type="text" name="quotient"> </fieldset> <fieldset> <legend>Area and Circumfrence of Circle</legend> <p>(Uses pi to 240 d.p)</p> <div> <label for="radius">Type in radius</label> <input type="text" name="radius" id="radius" value=""> </div> <div> <input type="button" value="=" onclick="circarea()"> </div> <div> <label 
for="area">Area</label> <input type="text" name="area" id="area" value=""> </div> <div> <label for="circumference">Circumference</label> <input type="text" name="circumference" id="circumference" value=""> </div> </fieldset> </form> <p>Licensed under the <a href="">GNU GPL</a>.</p> </body> </html> Finding Elements[edit] The most common method of detecting page elements in the DOM is by the document.getElementById(id) method. Simple Use[edit] Let's say, on a page, we have: <div id="myDiv">content</div> A simple way of finding this element in JavaScript would be: var myDiv = document.getElementById("myDiv"); // Would find the DIV element by its ID, which in this case is 'myDiv'. Use of getElementsByTagName[edit] Another way to find elements on a web page is by the getElementsByTagName(name) method. It returns an array of all name elements in the node. Let's say, on a page, we have: <div id="myDiv"> <p>Paragraph 1</p> <p>Paragraph 2</p> <h1>An HTML header</h1> <p>Paragraph 3</p> </div> Using the getElementsByTagName method we can get an array of all <p> elements inside the div: var myDiv = document.getElementById("myDiv"); // get the div var myParagraphs = myDiv.getElementsByTagName('P'); //get all paragraphs inside the div // for example you can get the second paragraph (array indexing starts from 0) var mySecondPar = myParagraphs[1] Adding Elements[edit] Basic Usage[edit] Using the Document Object Module we can create basic HTML elements. Let's create a div. var myDiv = document.createElement("div"); What if we want the div to have an ID, or a class? var myDiv = document.createElement("div"); myDiv.id = "myDiv"; myDiv.class = "main"; And we want it added into the page? Let's use the DOM again… var myDiv = document.createElement("div"); myDiv.id = "myDiv"; myDiv.class = "main"; document.documentElement.appendChild(myDiv); Further Use[edit] So let's have a simple HTML page… <html> <head> </head> <body bgcolor="white" text="blue"> <h1> A simple Javascript created button </h1> <div id="button"></div> </body> </html> Where the div which has the id of button, let's add a button. var myButton = document.createElement("input"); myButton.type = "button"; myButton.value = "my button"; placeHolder = document.getElementById("button"); placeHolder.appendChild(myButton); All together the HTML code looks like: <html> <head> </head> <body bgcolor="white" text="blue"> <h1> A simple Javascript created button </h1> <div id="button"></div> </body> <script> myButton = document.createElement("input"); myButton.type = "button"; myButton.value = "my button"; placeHolder = document.getElementById("button"); placeHolder.appendChild(myButton); </script> </html> The page will now have a button on it which has been created via JavaScript. Changing Elements[edit] In JavaScript you can change elements by using the following syntax: element.attribute="new value" Here, the srcattribute), use: myButton.type = "text"; //changes the input type from 'button' to 'text'. Another way to change or create an attribute is to use a method like element.setAttribute("attribute", "value") or element.createAttribute("attribute", "value"). Use setAttribute to change" Removing Elements[edit]); References[edit] Code Structuring[edit] Links[edit]). Useful Software Tools[edit] A list of useful tools for JavaScript programmers. 
Editors / IDEs[edit] - Adobe Brackets: Another browser-based editor by Adobe - Eclipse: The Eclipse IDE includes an editor and debugger for JavaScript - Notepad++: A Great tool for editing any kind of code, includes syntax highlighting for many programming languages. - Programmers' Notepad: A general tool for programming many languages. - Scripted: An open source browser-based editor by Spring Source - Sublime Text: One of the most used editors for HTML/CSS/JavaScript editing - Web Storm or IntelliJ IDEA: both IDEs include and editor and debugger for JavaScript, IDEA also includes a Java development platform Engines and other tools[edit] - JSLint: static code analysis for JavaScript - jq - " 'jq' is like sed for JSON data " - List of ECMAScript engines - List of Really Useful Free Tools For JavaScript Developers
https://en.wikibooks.org/wiki/JavaScript/Print_version
Shopping Cart Index Page Shopping Cart Application What is Shopping Cart ? A shopping cart is an application which runs on the server and allows users to do online shopping... is added to his shopping cart. Introduction to Application Shopping Cart PROBLEM IN EXECUTION PROBLEM IN EXECUTION class R { private int x; private int y; void getdata(int x1,int x2) { x=x1; y=y1; } void displaydata() { system.out.println(x+"\t"+y getting a problem in execution - Development process getting a problem in execution hi friends i have a problem in imcms content managment system it is a java content mangment system it is according... the whole file i am getting a problem like in server.properties file problem at the time of execution - JSP-Servlet problem at the time of execution when i was running web applications the exception i.e 404 resource is not available what it means and where it occures what is the solution Hi Friend, This error occurs when Error Index Out of Bound Exception ; System.out.println("Execution does not reach here if there is a invalid index... Index Out of Bound Exception Index Out of Bound Exception are the Unchecked Exception Body Mass Index (BMI) Java: Body Mass Index (BMI) The Body Mass Index program is divided into two files, the main program... // File: bmi/BMI.java // Description: Compute Body Mass Index Java Batch Execution is the result of the execution? It returns the int array.The array contains the affected row count in the corresponding index of the SQL. batch execution actually in JDBC Present a series of independent statements to be executed problem - JSP-Servlet jsp problem hi, i am working on a project of developing a shopping cart for online book store.can it be done using jsp?if yes, can u please help...:// Hope that the above link will be helpful | Shopping Cart Web Designing/Development Section servlets execution - JSP-Servlet , To visit this link for solving the problem: Beep in Execution - Java Beginners for my "file reading" problem index execution How to prevent adding duplicate items to the shopping cart ;?php session_start(); if (!isset($_SESSION['SHOPPING_CART'])){ $_SESSION['SHOPPING_CART'] = array(); } if (isset($_GET['itemID']) && isset($_GET...'] ); $_SESSION['SHOPPING_CART'][] = $ITEM; header('Location different execution time - Java Beginners different execution time hello, when i run the bellow code more than one time i am getting different execution time("Total time taken"), Ex...)); Hi Friend, This problem is not due to any coding error or mistake How to create dynamic buttons for adding products on a cart. How to create dynamic buttons for adding products on a cart. Hi. I have some problems creating a page to add items into a cart. The page loads...; <h:commandLink Java Programming: Chapter 12 Index and have been programmed thousands of times before. The problem is how.... In this chapter, we'll look at Java's attempt to address this problem. Contents... | Main Index Java Programming: Chapter 9 Index be impossible to guarantee that programs are problem-free, but careful programming... Chapter | Previous Chapter | Main Index Java Programming: Chapter 3 Index design. Given a problem, how can you come up with a program to solve that problem? We'll look at a partial answer to this question in Section 2... Chapter | Main Index add to cart add to cart sir, i want to do add to cart to my shopping application each user using sessions. 
Plz help thnaks in advance jsp fie execution in tomcat and using mysql - JDBC jsp fie execution in tomcat and using mysql I created 2 jsp files... userDataviewErrorpage.jsp handles any internal exception generated(eg;SOLException). PROBLEM:THE LAST JSP PAGE IS COMING DIRECTLY ON EXECUTION IN TOMCAT Maximize Sales By Setting up Your Shopping Cart Maximize Sales By Setting up Your Shopping Cart Setting up a shopping cart... of the tools available in a wise manner. The shopping cart solution comes in varied... programming analyst to handle your problem. It is also important to know What is Index? What is Index? What is Index Shopping Cart,Shopping Cart software Shopping Cart Overview A web based shopping cart is something like the original grocery shop shopping cart that is used by the customer in selecting certain Shopping Cart Shopping Cart Shopping cart is also know as trolley, carriage, buggy or wagon is a cart mostly used while shopping in shopping mall. These cart is provided... it to the shopping cart. Shopping cart software allows you to view the items in your cart plz tell me the code & execution process - Java Beginners plz tell me the code & execution process Write a program to download a website from a given URL. It must download all the pages from that website... the problem : import java.net.*; import java.io.*; public class servlet execution servlet execution Java Execution Web hosting with shopping cart Web hosting with shopping cart If you are looking for hosting your shopping Drop Index Drop Index Drop Index is used to remove one or more indexes from the current database. Understand with Example The Tutorial illustrate an example from Drop Index Shopping Cart Features Ideal Requirements Of Shopping Cart Most of the people think to use a shopping cart for their online store will run their business. But it is far from truth. Using an shopping cart for your e-commerce site (Online Store) is just Struts-problem With DispatchAction Class Struts-problem With DispatchAction Class hi this is Mahesh...i'm...: Servlet execution threw an exception root cause java.lang.NoClassDefFoundError: org/apache/struts/actions/DispatchAction Can any one help me with this problem How to execute my query fast..When A date filter is there in a query it takes more time for execution.? How to execute my query fast..When A date filter is there in a query it takes more time for execution.? When A date filter is there in a query... in 30 seconds with 5000 records... what is the problem ... how to execute my query Features of shopping cart Features of shopping cart In this section we will learn the features of good shopping cart applications. There are shopping cart application on the Internet... on requirement and you can select best shopping cart application for your shopping clustered and a non-clustered index? clustered and a non-clustered index? What is the difference between clustered and a non-clustered index DOM paring problem : 65: <%=getStat_list(nodeList).get(index)%> 66...: 68: <% java.lang.IndexOutOfBoundsException: Index: 1, Size: 1 for(int index=0; index < nodeList.getLength(); index++) { %> Mysql Btree Index Mysql Btree Index Mysql BTree Index Which tree is implemented to btree index? (Binary tree or Bplus tree or Bminus tree online shopping cart complete coding in pure jsp online shopping cart complete coding in pure jsp online shopping cart complete coding in pure jsp Please visit the following link: JSP Online shopping cart Insertion Sort Problem . 
It is supposed to be an insertion sorter: int min, index=0, temp; for(int i=0;i<...;sorted.length;j++){ if(sorted[j]<min){ index=j; min=sorted[j]; } } temp=sorted[index struts---problem with DispatchAction struts---problem with DispatchAction hi this is Mahesh...i'm working... error: exception javax.servlet.ServletException: Servlet execution threw.../DispatchAction Can any one help me with this problem.... Thanks in Advance java implementation problem java implementation problem I want to implement following in java... a new main data (MD) Resume all three threads execution Wait for end of each... is paused ( is waiting). How to resume execution of all three threads. I will exp Expression Language execution in notepad Expression Language execution in notepad how to execute expression language in notepad for java i am using apache tomcat server4.0. Java implementation problem bulid a new main data (MD) resume all three threads execution wait for end of each... is paused ( is waiting). How to resume execution of all three threads. I... problem in your post previews. please consider 1. 2. points just after main to know execution time to know execution time Hai this is sravanthi Is there any possibility to know the execution time of an SQL in its order to execute... to know the execution time for where,having and group by separately.If $ Tools required to build Simple Cart Tools required to build Simple Cart Shopping cart application is written in Java and so you can compile with a standard JDK on any platform computer. The following online shopping - Java Beginners to handle online shooping and shopping cart by click on image by only using jsp? Hi friend, For solving the problem visit to : Thanks palindrome array problem and show the execution for: a.) Count the number of palindromes in this set. b JDBC related Problem - JDBC connectivity code ...... my problem is... that even after the successful compilation and execution .... the update process of the database is not performed... the other the access from the database is successful the problem is only arising checking index in prepared statement checking index in prepared statement If we write as follows: String query = "insert into st_details values(?,?,?)"; PreparedStatement ps = con.prepareStatement(query); then after query has been prepared, can we check the index Shopping Cart design Shopping Cart design We provide Shopping Cart designing, development and maintenance services. We design develops world class shopping cart system that can... source shopping cart for our client. We customize the shopping cart for meet JavaScript array index of JavaScript array index of In this Tutorial we want to describe that makes you to easy to understand JavaScript array index of. We are using JavaScript... line. 1)index of( ) - This return the position of the value that is hold Shopping Cart Application installation manual missing Shopping Cart Application installation manual missing Hi, I downloaded this Shopping cart zip folder from but there's no installation manual inside Online Shopping Cart Solutions Professional Shopping Cart Developers Can Bring Yield Great Profits There are different shopping cart solutions available to suite different online stores. What is important is that one must look for custom shopping cart solutions java problem - Java Beginners . After each Room has been instantiated, we will assume that the array index... 
will be in array cell with index 2, room numbered 5 will be in array cell with index 5, etc Drop Index Drop Index Drop Index is used to remove one or more indexes from the current database... Index. In this example, we create a table Stu_Table. The create table is used Execution of Multiple Threads in Java Execution of Multiple Threads in Java Can anyone tell me how multiple threads get executed in java??I mean to say that after having called the start method,the run is also invoked, right??Now in my main method if I want Form Processing Problem the index of file pos = file.indexOf("filename...; <% } %> </table> </html> The problem while index - Java Beginners servlets execution - JSP-Servlet
http://www.roseindia.net/tutorialhelp/comment/94418
Bug Description Backport of 3.4 common clock code and the DT clock and highbank support and EDAC support. In merge window for v3.5, several export symbol changed cause build error after rebase onto 3.5-rc2 /home/ikepanhc/ /home/ikepanhc/ /home/ikepanhc/ /home/ikepanhc/ /home/ikepanhc/ /home/ikepanhc/ /home/ikepanhc/ For clk_register, shall modify patch according to this commit commit 0197b3ea0f66cd2 Author: Saravana Kannan <email address hidden> Date: Wed Apr 25 22:58:56 2012 -0700 clk: Use a separate struct for holding init data. Create a struct clk_init_data to hold all data that needs to be passed from the platfrom specific driver to the common clock framework during clock registration. Add a pointer to this struct inside clk_hw. This has several advantages: * Completely hides struct clk from many clock platform drivers and static clock initialization code that don't care for static initialization of the struct clks. * For platforms that want to do complete static initialization, it removed the need to directly mess with the struct clk's fields while still allowing to statically allocate struct clk. This keeps the code more future proof even if they include clk-private.h. * Simplifies the generic clk_register() function and allows adding optional fields in the future without modifying the function signature. * Simplifies the static initialization of clocks on all platforms by removing the need for forward delcarations or convoluted macros. for struct csrow_info, this commit is the reference commit 084a4fccef39ac7 Author: Mauro Carvalho Chehab <email address hidden> Date: Fri Jan 27 18:38:08 2012 -0300 edac: move dimm properties to struct dimm_info On systems based on chip select rows, all channels need to use memories with the same properties, otherwise the memories on channels A and B won't be recognized. However, such assumption is not true for all types of memory controllers. Controllers for FB-DIMM's don't have such requirements. Also, modern Intel controllers seem to be capable of handling such differences. So, we need to get rid of storing the DIMM information into a per-csrow data, storing it, instead at the right place. The first step is to move grain, mtype, dtype and edac_mode to the per-dimm struct. One done, one to go also need to look into this commit commit 1c0035d710dd3bf Author: Shawn Guo <email address hidden> Date: Thu Apr 12 20:50:18 2012 +0800 clk: pass parent_rate into .set_rate For most of .set_rate implementation, parent_rate will be used, so just like passing parent_rate into .recalc_rate, let's pass parent_rate into .set_rate too. It also updates the kernel doc for .set_rate ops. New precise-proposed kernel has the EDAC patches and it works fine with edac-utils ubuntu@c02:~$ cat /proc/version Linux version 3.2.0-27-highbank (buildd@ain) (gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5) ) #43-Ubuntu SMP PREEMPT Fri Jul 6 16:48:52 UTC 2012 ubuntu@c02:~$ edac-util -s -v edac-util: EDAC drivers are loaded. 1 MC detected: mc0:fff00000. ... This bug was fixed in the package linux - 3.5.0-9.9 --------------- linux (3.5.0-9.9) quantal-proposed; urgency=low [ Daniel P. 
Berrange ] * SAUCE: (drop after 3.6) Forbid invocation of kexec_load() outside initial PID namespace - LP: #1034125 [ Douglas Bagnall ] * SAUCE: Unlock the rc_dev lock when the raw device is missing - LP: #1015836 [ Ike Panhc ] * [Config] Enable EDAC/CLK for highbank - LP: #1008345 [ Leann Ogasawara ] * Revert "ubuntu: AUFS -- reenable" [ Rob Herring ] * SAUCE: net: calxedaxgmac: add write barriers around setting owner bit - LP: #1008345 * SAUCE: ARM smp_twd: add back "arm,smp-twd" compatible property - LP: #1008345 * SAUCE: ARM: highbank: add soft power and reset key event handling - LP: #1008345 * SAUCE: ARM: highbank: use writel_relaxed variant for pwr requests - LP: #1008345 * SAUCE: ahci: un-staticize ahci_dev_classify - LP: #1008345 * SAUCE: ahci_platform: add custom hard reset for Calxeda ahci ctrlr - LP: #1008345 [ Upstream Kernel Changes ] * rt2x00: Add support for BUFFALO WLI-UC-GNM2 to rt2800usb. - LP: #871904 * Avoid sysfs oops when an rc_dev's raw device is absent - LP: #1015836 * eCryptfs: Copy up POSIX ACL and read-only flags from lower mount * clk: add DT clock binding support - LP: #1008345 * clk: add DT fixed-clock binding support - LP: #1008345 * clk: add highbank clock support * edac: add support for Calxeda highbank memory controller - LP: #1008345 * edac: add support for Calxeda highbank L2 cache ecc - LP: #1008345 * net: calxedaxgmac: enable rx cut-thru mode - LP: #1008345 * net: calxedaxgmac: fix hang on rx refill - LP: #1008345 * eCryptfs: Revert to a writethrough cache model - LP: #1034012 * eCryptfs: Initialize empty lower files when opening them - LP: #911507 * eCryptfs: Unlink lower inode when ecryptfs_create() fails - LP: #872905 -- Leann Ogasawara <email address hidden> Wed, 08 Aug 2012 08:39:42 . update patchset
https://bugs.launchpad.net/eilt/+bug/1008345
I followed the Acrobotic tutorial by Cayenne and copied the exact code from the tutorial. My board is a “Lolin NodeMCU ESP8266 CP2102 NodeMCU WIFI Serial Wireless Module”; I have attached a picture of the board from the website where I bought it. I am getting the following error in my IDE:

Arduino: 1.8.5 (Windows 10), Board: “NodeMCU 1.0 (ESP-12E Module), 80 MHz, 4M (1M SPIFFS), v2 Lower Memory, Disabled, None, Only Sketch, 115200”

C:\Users\vishal\Desktop\sketch_apr30b\sketch_apr30b.ino:1:32: fatal error: CayenneMQTTESP8266.h: No such file or directory
#include <CayenneMQTTESP8266.h>
^
compilation terminated.
exit status 1
Error compiling for board NodeMCU 1.0 (ESP-12E Module).
This report would have more information with “Show verbose output during compilation” option enabled in File -> Preferences.

Please help me @adam @bestes
http://community.mydevices.com/t/unable-to-add-esp8266/9524
MITgcmdata - A Diagnostic Package for MITgcm

This represents my first attempt to make a python package for analyzing the output of MITgcm simulations. It is potentially useful, but also contains lots of quirks. I am writing a completely new version from scratch that will adhere more closely to the model numerics and conventions. In the meantime, I decided to publish this anyway, in case someone finds it useful. It is also not really properly structured as a python package. It is really just a single module.

Basic Usage

The package is based around a class called MITgcmmodel.ModelInstance. This class contains many useful methods. A basic example is

from MITgcmdata import MITgcmmodel

# initialize the model instance
m = MITgcmmodel.ModelInstance(
    output_dir='/path/to/output/files',
    grid_dir='/path/to/grid/files/',
    default_iter=112233)

# load a variable
# (this looks for THETA.0000112233.data in the output_dir)
theta = m.rdmds('THETA')

There are lots of operators for derivatives, integrals etc. I will try to upload more examples at some point.
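For instance, building on the instance created above, a further hypothetical use; the variable name 'UVEL' and the averaging axis are assumptions for illustration, not documented behaviour:

import numpy as np

# load zonal velocity at the default iteration and take a simple mean over the first axis
u = m.rdmds('UVEL')
u_mean = np.nanmean(u, axis=0)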
https://bitbucket.org/ryanaberanthey/mitgcmdata
. Now: begin …_500_000, or 1.5 seconds, which will be the timeout level of any requests made with that client instance, such as the set(). The Couchbase Ruby. key has been successfully updated in the datastore, a new CAS value will be returned raises Error::KeyExists. CAS stands for “compare and swap”, and avoids the need for manual key mutexing. The following illustrates use of the cas function: #appends to a JSON-encoded value # first sets value and formatting for stored value c.default_format = :document c.set("foo", {"bar" => 1}) #perform cas and provide value to block c.cas("foo") do |val| val["baz"] = 2 val end # returns {"bar" => 1, "baz" => 2} c.get("foo") The decrement methods reduce the value of a given key if the corresponding value can be parsed to an integer value. These operations are provided at a protocol level to eliminate the need to get, update, and reset a simple integer value in the database. All the Ruby:: couchbase.set("counter", 10) couchbase.incr("counter", 5) couchbase.get("counter") #=> 15", ... }, ... } foo per loop. The following sections provide release notes for individual release versions of Couchbase Client Library Ruby. To browse or submit new issues, see Couchbase Client Library Ruby Issues Tracker. New Features and Behavior Changes in 1.0.0 The library supports three different formats for representing values: :document (default) format supports most of Ruby types which could be mapped to JSON data (hashes, arrays, string, numbers). :marshal This format avoids any conversions to be applied to your data, but your data should be passed as String. It could be useful for building custom algorithms or formats. For example to implement set please see :plain Use this format if you’d like to transparently serialize your Ruby object with standard Marshal.dump and Marshal.load methods The Namespace Memcached is no longer available and has been replaced. For Ruby code that used the former namespace, use code that looks something like below.``` rescue Couchbase::Error::NotFound => e * Removed Views support The client library still supports the Memcached protocol and that syntax is still supported as in below.``` val = c.get("foo", :ttl => 10) The client will automatically adjust to the changed configuration of the cluster (such as on node addition, deletion, rebalance and so on).
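A decrement presumably mirrors the increment example shown earlier; a sketch, assuming the 1.x client exposes decr alongside incr:

couchbase.set("counter", 10)
couchbase.decr("counter", 5)
couchbase.get("counter") #=> 5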
http://docs.couchbase.com/couchbase-sdk-ruby-1.0/index.html
Say I have a dictionary and then I have a list that contains the dictionary's keys. Is there a way to sort the list based off of the dictionary's values? I have been trying this:

trial_dict = {'*':4, '-':2, '+':3, '/':5}
trial_list = ['-','-','+','/','+','-','*']
sorted(trial_list, key=trial_dict.values())
TypeError: 'list' object is not callable

I also tried trial_dict.get() together with a helper function:

def sort_help(x):
    if isinstance(x, dict):
        for i in x:
            return x[i]

sorted(trial_list, key=trial_dict.get(sort_help(trial_dict)))

but neither sort_help nor trial_dict.get() called this way gives the ordering I want.

Yes, dict.get is the correct (or at least, the simplest) way:

sorted(trial_list, key=trial_dict.get)

As Mark Amery commented, the equivalent explicit lambda:

sorted(trial_list, key=lambda x: trial_dict[x])

might be better, for at least two reasons: it is arguably more readable, and a key missing from the dictionary raises a KeyError instead of being silently mapped to None.
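With the sample data above, either form orders the keys by their mapped values (ties keep their original relative order):

sorted(trial_list, key=trial_dict.get)
# ['-', '-', '-', '+', '+', '*', '/']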
https://codedump.io/share/axAL6jSHTJf8/1/sort-a-list-based-on-dictionary-values-in-python
Use your Raspberry Pi (or a Linux PC) to talk to a Lego NXT rover and then use a Wii remote to drive the rover. Python Libraries There are 2 sets of libraries that we used: - nxt-python : to talk to the Lego Mindstorm NXT - cwiid : to talk to Wii remotes To install these libraries: wget tar -zxvf nxt-python-2.2.2.tar.gz cd nxt* sudo python setup.py install sudo apt-get install python-cwiid The Lego NXT brick needs to have Bluetooth turned., (for example: 00:16:53:04:23:3D). Using the NXT’s Bluetooth address you can pair the Pi and the Lego NXT by: sudo bluez-simple-agent hci0 00:16:53:04:23:3D For the pin code use the default of: 1234 The NXT will beep and prompt you for the pairing passkey. After the Raspberry Pi is paired with the Lego NXT brick you are able to use Python to read the NXT sensors and control the motors. No NXT coding is required. In the Python NXT directory there are some examples, mary.py is a good test example because it does not require any sensors or motors. Our full Python code to control the NXT Rover with a Wii remote is below: import cwiid import time import nxt.locator from nxt.motor import * print 'Looking for a Lego NXT ... may take up to 15 seconds...' b = nxt.locator.find_one_brick() print 'Lego NXT Connected' left = Motor(b, PORT_B) right = Motor(b, PORT_C) print 'Press 1+2 on your Wiimote now...' wii = cwiid.Wiimote() time.sleep(1) wii.rpt_mode = cwiid.RPT_BTN print 'WII Remote Connected' while True: buttons = wii.state['buttons'] if (buttons & cwiid.BTN_LEFT): right.brake() left.run(power = 100,regulated=False) time.sleep(0.1) if (buttons & cwiid.BTN_RIGHT): left.brake() right.run(power = 100,regulated=False) time.sleep(0.1) if (buttons & cwiid.BTN_UP): right.run(power = 100,regulated=False) left.run(power = 100,regulated=False) time.sleep(0.1) if (buttons & cwiid.BTN_DOWN): left.brake() right.brake() time.sleep(0.1)
https://funprojects.blog/2016/12/10/wii-controlled-lego-rover/
Applets Section Index | Page easily provide default values for when applet parameters are not specified? Here's a helper method that you can use to do that: import java.applet.*; public class AppletUtils { private AppletUtils() {} public static String getParameter( Applet applet, String...more I get a load: class XXX not found message when I try to load my applet, what's wrong? Be sure to check the case and location of the .class file on the Web server. Either the CODE attribute specify the wrong class name, the class name is saved with the wrong case, or the class file ...more How can I prevent the "Applet started" message from appearing in the status line? You can't really prevent this. The best you can do it hide it by repeatedly showing a status message in a thread: getAppletContext().showStatus(""); This does hinder the use of status messages fo...more Are there any restrictions placed on applets loaded from HTML pages generated from servlets / JSP pages? It is good practice to include the CODEBASE attribute in the APPLET tag and specify a directory other then the servlet directory for storing the applet-related classes. For example: out.println(&l...more What's the deal with Java support in Windows XP? Microsoft isn't shipping the Java runtime with Windows XP (IE6). Users can still download it separately from Microsoft's Web site. You'll be better off ignoring their version altogether and using ...more What MIME type do I have to add to my Web server to be able to deliver a Java Web Start applet/application? The jnlp file extension needs to deliver a type of application/x-java-jnlp-file. What is a CAP file? The CAP file format is defined by the Java Card 2.1.1 Virtual Machine Specification. A CAP file is the loading unit in Java Card. Using a host tool called converter, a CAP file must be built for ...more How do I choose the applet Im sending commands to? In the normal case, commands are sent to the current applet, i.e. the last one that has been selected using the SELECT command. However, if the platform supports logical channels, multiple applets...more Which applet methods must I write? Since an applet extends javacard.framework.Applet, it does inherit some methods, like select() and deselect(). The only methods which must be defined in your applets are install() and process(). ...more Can several applets be present on the same card? Yes. Java Card is a multi-application environment. Some applets may have been masked; some others may have been loaded post-issuance. However, only one applet is running at any given time.more What is a Java Card applet? A Java Card application is called a Java Card applet. Although they have the same name, the only thing a Java Card applet and a Java applet have in common is that both are objects. An arbitrary n...more
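A plausible completion of the truncated AppletUtils helper from the first answer above (an assumption; only the signature fragment survives in the original):

import java.applet.*;

public class AppletUtils {
    private AppletUtils() {}

    // Return the named applet parameter, or the supplied default if it is absent.
    public static String getParameter(Applet applet, String name, String defaultValue) {
        String value = applet.getParameter(name);
        return (value == null) ? defaultValue : value;
    }
}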
http://www.jguru.com/faq/client-side-development/applets?page=6
HosticHostic Yet another static web site builder There are plenty static web site generators around, but many of them think for you or try to make use of a specific framework on all costs. Hostic in contrary was built from the ground to use the optimal tools for the task while keeping the process pleasant. Some features: - Supports JSX but is not related to any framework like React, Vue or Svelte - Implements a light way DOM abstraction (hostic-dom) to easily allow post process tasks like calculating image sizes, optimize for SEO, etc. - Real time preview with self reloading pages. Experiment with different designs or contents. - Any file type can be created like XML sitemaps of RSS feeds, robots.txt or whatever you need - Great Markdown support: Write your articles using Markdown and refer to assets used in it. Hostic puts it all together. Works great with OnePile.app. - Plugins and Middleware help doing common repetitive tasks with ease - SEO: Static pages load fast. All assets are tagged with a SHA checksum, therefore the caching on the server can be set to infinite. Perfect sitemap generation done by a plugin. Post processing can reduce file sizes or optimize per page like stripping unused CSS. - Fast page load due to the static nature of the output. Nothing is in between and you can optimize for best loading performance on your server. Getting StartedGetting Started Start a project like this: npm init hostic <site-name> cd <site-name> npm i npm start To build the static version into the dist folder: npm run build To preview and develop your site: npm start Open your browser at. The browser will reload after any change you saved to your project. SiteSite Creating a new site starts with an entry file usually site/index.js. This one only export one default function that takes a path argument: export default function (path) { let site = new Site(path) // Define your site's pages here return site } The path in out example is site from site/index.js. This is required because we cannot use __dirname due to implementation details. site.html('/sample', ctx => { ctx.body = <div>Hello World</div> }) This will create an HTML file with content <div>Hello World</div>. It uses JSX to describe the content. HTML files will automatically reload when the content or a referenced asset changes. <html>, <head>, <body> and everything es needed will be added automatically. As you noted the Context ctx contains data and can receive new data as well for other Middleware (see below). It is important to put the result into ctx.body. MiddlewareMiddleware Hostic makes use of the middleware programming pattern as known from Koa.js or Express.js. Complexity and extensibility can quite elegantly be managed this way. A middleware is written as easy such a simple function: async (ctx, next) => { // Do something before nested middlewares are executed await next() // Do something after bested middlewares were executed } You can register the middleware like this: site.use(myMiddlewareFunction) It can also be added per page, like this: site.html('/', template, async ctx => { ctx.body = <div>My content for the template</div> }) ContextContext The context that is passed to Middleware is important. The most important property of it is body. It holds the content of the page. For HTML and XML usually in form of a virtual DOM. But it can also be used to pass properties to other Middlewares, like lang for language or title for the page title. PluginsPlugins A special kind of Middleware is the Plugin. 
It is basically the same, but it can have some attributes to better define its place in the process chain. You add them like this: import { plugin } from 'hostic' // ... app.use(plugin.tidy()) This is the most simple variant of a plugin: export function example(opt = {}) { return { name: 'example', priority: 0.80, middleware: async (ctx, next) => { // ... await next() // .. }) }) The priority tells when the plugin should be executed. The higher the value the earlier it executes. Imagine it like they are nested like this: jsx { links { html { // your middleware } } } These are the priorities of plugins bundled with Hostic. The ones with stars are activated by default. More details: matomomatomo Adds tracking code for Matomo to pages to allow better insights about your visitors. The code is respecting "don't track me" settings and also has an entry point for users to opt out. youtubeyoutube Use the original code from YouTube to embed a video and this plugin will replace it by a lazy loading alternative. This helps speeding up page load while at the same time improving privacy for the visitor. cookieConsentcookieConsent Displays an information about the use of cookies, required by European law. disqusdisqus Privacy conforming integration of Disqus service. localelocale Translate: - Text content that starts with underscore like <div>_Translate this</div> - Remove elements with not matching languages in data-langattributes, like <div data-Translate this</div> - Remove non matching elements like <en>Translate this</en> Translations can be provided as simple objects like: { "Translate this": "Übersetze das" } hostic-dom)Virtual DOM ( This DOM abstraction for HTML and XML content is not designed for speed like in UI frameworks. Its goal is to help doing post process tasks on the content with familiar API. You can e.g. use CSS selectors to retrieve elements like root.querySelectorAll('img[src]') and then manipulate like element.setAttribute('src', src + '?ref=example'). Some special additions help to work on nodes like document.handle('h1,h2,h3', e => e.classList.add('header')). Learn more at github.com/holtwick/hostic-dom. Static FilesStatic Files Serving static files: site.static('favicon.ico'): Serve file at /favicon.ico site.static('assets): Serve folder /assets site.static('css/style.css', 'assets/style.css'): Serve file assets/style.cssunder /css/style.css MarkdownMarkdown As for most static site generators Markdown is a welcome format for content. The first part describes properties in YAML and the second part the textual content: --- title: Example lang: en --- # Example of a Page Lorem ipsum (more details to be added) Lazy Loading and Multiple PassesLazy Loading and Multiple Passes Site creation and serving contents are two separate steps. In the first step paths and their contents descriptions are registered to a site manager. In the second step the content is dynamically generated on demand. A benefit from this separation is, that the content registration can have multipe passes, for example you can first register all pages and then in a second pass modify them. As an example in a multi language website it is possible to first register all pages and then connect the alternate pages. An example: site.routes.values().forEach(page => { if (page.path.startsWith('/en/')) { page.meta.alt = { 'de': '/de/' + page.path.substr(4), '*': '/' + page.path.substr(4), // Redirection based on } } }) Another step is done for assets. If a HTML page has references to images, CSS, JS etc. 
it can add these references on the fly. That increases speed and offers more flexibility. The build process for the static pages is therefore run twice, because in the first step new references to assets might have been added. ApacheApache By convention the .html suffix is dropped i.e. the url /a/b.html will become /a/b. To support this on Apache add a .htaccess file with the following lines: RewriteEngine on RewriteRule ^([^.]+[^/])$ $1.html [PT] ConfigurationConfiguration The top level of configuration are environment variables. You can set them in your build environment or note them down into .env or .env.local files. The later one is intended to be excluded from Git repositories in case you need to set sensitive information. Available settings are: BASE_URL=- The base URL that is required to calculate absolute URLs e.g. for canonical URL or alternative languages meta data, but also for sitemaps and the like. For the preview server this will be automatically set to the appropriate localhostaddress. This is especially useful if you are building for different targets like stageand production. PORT=8080- The preview servers port number PerformancePerformance Fast previews and build processes are a great thing to have if you are working with a tool like this. Hostic tries to achieve this by doing the following: - For transpilation esbuild is used, the fastest tool around right now - Pages are only loaded on demand. When code changes first the routes are build and only if a preview is requested this one page is generated in that moment. - It does not use frameworks like Webpack or React that again bring their own complexity. File changes are tracked directly and all other implementations are quite basic without relying on to much other frameworks. Be aware of ...Be aware of ... - All resulting URL paths are absolute. HTML files have no .htmlextension. - All paths in JS and JSX have to be absolute from the site's origin, those from Markdown files can also be releative. - You cannot use __dirnamebecause code is transpiled to a monolithic JS file that lives in .hosticfolder. - LESS or SCSS transform is not supported nor other optimizations that an IDE like IntelliJ can provide out of the box as well. If you want to have it please contribute or support. Why another web site builder?Why another web site builder? Read about it in my blog. I don't know... there are plenty of good tools around. But I stumbled into creating this one and then got fascinated by esbuild, vite, the own virtual DOM and other details that where interesting to implement. For non geeky people it might be easier to start with something from the shelf like 11ty. LicenseLicense Hostic is free and can be modified and forked. But the conditions of the EUPL (European Union Public License 1.2) need to be respected, which are similar to ones of the GPL. In particular modifications need to be free as well and made available to the public. For different licensing options please contact license@holtwick.de. Get a quick overview of the license at Choose an open source license. This license is available in the languages of the European Community. AuthorAuthor My name is Dirk Holtwick. I'm an independent software developer located in Germany. Learn more at hotlwick.de.
https://www.npmjs.com/package/hostic
A function may call itself again and again, typically until some condition becomes false. When a function definition includes a call to itself, it is referred to as a recursive function and the process is known as recursion, or circular definition. In recursion we only make the call once; the function then calls itself repeatedly until its terminating condition is met.

When a recursive function is called for the first time, a space is set aside in memory to execute this call and the function body is executed. Then a second call to the function is made; again a space is set aside for this call, and so on. In other words, the memory spaces for the function calls are arranged in a stack. Each time the function is called, its memory area is placed on the top of the stack and is removed when the execution of that call is completed.

To understand the concept of a recursive function, consider this example.

Example: A program to demonstrate the concept of a recursive function

#include<iostream>
using namespace std;

void reverse();

int main()
{
    reverse();
    cout << " is the reverse of the entered characters";
    return 0;
}

void reverse()
{
    char ch;
    cout << "Enter a character ('/' to end program): ";
    cin >> ch;
    if (ch != '/')
    {
        reverse();
        cout << ch;
    }
}

The output of the program is

Enter a character ('/' to end program): h
Enter a character ('/' to end program): i
Enter a character ('/' to end program): /
ih is the reverse of the entered characters

In this example, the function reverse() is called to accept a character from the user. The function reverse() calls itself again and again until the user enters '/', and then prints the reverse of the characters entered. Note that recursive functions can also be defined iteratively using "for", "while" and "do...while" loops; this is often done because recursion makes program execution slower due to its extra stack manipulation and higher memory utilization. In addition, recursion sometimes results in stack overflow, as for each function call new memory space is allocated for local variables and function parameters on the stack. However, in some cases recursive functions are preferred over their iterative counterparts as they make code simpler and easier to understand. For example, it is easier to implement the Quick sort algorithm using recursion.
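As an illustration of that last point, here is a minimal recursive Quick sort sketch in C++ (not part of the original tutorial; the array contents and pivot choice are illustrative assumptions):

#include <iostream>
using namespace std;

// Sort arr[low..high] in place using recursion.
void quicksort(int arr[], int low, int high)
{
    if (low >= high)          // zero or one element: nothing to do
        return;

    int pivot = arr[high];    // use the last element as the pivot
    int i = low;
    for (int j = low; j < high; j++)
    {
        if (arr[j] < pivot)   // move smaller elements to the front
        {
            int tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
            i++;
        }
    }
    int tmp = arr[i]; arr[i] = arr[high]; arr[high] = tmp;  // place the pivot

    quicksort(arr, low, i - 1);   // recurse on the left part
    quicksort(arr, i + 1, high);  // recurse on the right part
}

int main()
{
    int data[] = { 5, 2, 8, 1, 9, 3 };
    quicksort(data, 0, 5);
    for (int k = 0; k < 6; k++)
        cout << data[k] << " ";   // prints: 1 2 3 5 8 9
    return 0;
}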
http://ecomputernotes.com/cpp/functions/what-is-recursion
CC-MAIN-2020-16
refinedweb
390
56.89
I've been trying to get tensorflow 0.10 up and running on my El Capitan Macbook Pro (Late 2013, GeForce GT 750M), so far without success. I've tried the official tensorflow documentation's instructions and a number of other folks' approaches, including this one and this one. For reference, I'm trying to use Python3, CUDA 7.5, and tensorflow 0.10 on OSX 10.11.5. I've gotten CUDA installed and it recognizes my GPU. I can successfully compile the deviceQuery /Developer/NVIDIA/CUDA-7.5/samples/1_Utilities/deviceQuery ./deviceQuery Starting... CUDA Device Query (Runtime API) version (CUDART static linking) Detected 1 CUDA Capable device(s) Device 0: "GeForce GT 750M" CUDA Driver Version / Runtime Version 7.5 / 7.5 CUDA Capability Major/Minor version number: 3.0 Total amount of global memory: 2048 MBytes (2147024896 bytes) ( 2) Multiprocessors, (192) CUDA Cores/MP: 384 CUDA Cores GPU Max Clock rate: 926 MHz (0.93 GHz) Memory Clock rate: 2508T 750M Result = PASS /usr/local/cuda/lib include import tensorflow Python 3.5.2 (v3.5.2:4def2a2901a5, Jun 26 2016, 10:47:25) [GCC 4.2.1 (Apple Inc. build 5666) (dot 3)] on darwin Type "help", "copyright", "credits" or "license" for more information. >>> import tensorflow I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.7.5.dylib locally I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.5.dylib locally I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.7.5.dylib locally Segmentation fault: 11 dtruss The problem is described in this comment: "There is a bug with loading libcuda.dylib - the default cuda install creates libcuda.dylib, but tensorflow tries to load libcuda.1.dylib . This fails, resorting to using the LD_LIBRARY_PATH which if NULL crashes. If you copy libcuda.dylib to libcuda.1.dylib it loads fine." It would be pretty easy to fix the crash for everyone else with a pull request -- ie compile with -c dbg to see exactly which line is trying to use the null value, and adding something like this to the code if (mystring == NULL) { return; }
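As a rough sketch of the workaround quoted above - making a libcuda.1.dylib copy of libcuda.dylib so the tensorflow loader finds the name it expects - the following assumes the default CUDA library directory /usr/local/cuda/lib, may need elevated privileges, and is only an illustration of the quoted comment, not an official fix:

import os
import shutil

cuda_lib_dir = "/usr/local/cuda/lib"   # assumed default CUDA install location
src = os.path.join(cuda_lib_dir, "libcuda.dylib")
dst = os.path.join(cuda_lib_dir, "libcuda.1.dylib")

# Only create the copy if the expected name is missing, so an existing
# libcuda.1.dylib is never overwritten.
if os.path.exists(src) and not os.path.exists(dst):
    shutil.copy2(src, dst)
    print("copied", src, "->", dst)
else:
    print("nothing to do")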
https://codedump.io/share/zoVzuikwfCjb/1/tensorflow-010-cuda-on-osx-segfaults-on-python-import
CC-MAIN-2017-09
refinedweb
356
52.76
shuffling/permutating a DataFrame in pandas What's a simple and efficient way to shuffle a dataframe in pandas, by rows or by columns? I.e. how to write a function shuffle(df, n, axis=0) that takes a dataframe, a number of shuffles n, and an axis ( axis=0 is rows, axis=1 is columns) and returns a copy of the dataframe that has been shuffled n times. Edit : key is to do this without destroying the row/column labels of the dataframe. If you just shuffle df.index that loses all that information. I want the resulting df to be the same as the original except with the order of rows or order of columns different. Edit2 : My question was unclear. When I say shuffle the rows, I mean shuffle each row independently. So if you have two columns a and b, I want each row shuffled on its own, so that you don't have the same associations between a and b as you do if you just re-order each row as a whole. Something like: for 1...n: for each col in df: shuffle column return new_df But hopefully more efficient than naive looping. This does not work for me: def shuffle(df, n, axis=0): shuffled_df = df.copy() for k in range(n): shuffled_df.apply(np.random.shuffle(shuffled_df.values),axis=axis) return shuffled_df df = pandas.DataFrame({'A':range(10), 'B':range(10)}) shuffle(df, 5) In [16]: def shuffle(df, n=1, axis=0): ...: df = df.copy() ...: for _ in range(n): ...: df.apply(np.random.shuffle, axis=axis) ...: return df ...: In [17]: df = pd.DataFrame({'A':range(10), 'B':range(10)}) In [18]: shuffle(df) In [19]: df Out[19]: A B 0 8 5 1 1 7 2 7 3 3 6 2 4 3 4 5 0 1 6 9 0 7 4 6 8 2 8 9 5 9 From: stackoverflow.com/q/15772009
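A possible non-destructive variant, sketched here for reference (it is not from the original thread): np.random.permutation returns a shuffled copy rather than shuffling in place, so the row/column labels of the original frame are never touched and each column (or row) is shuffled independently, as required by Edit2.

import numpy as np
import pandas as pd

def shuffle_copy(df, n=1, axis=0):
    # Returns a shuffled copy; the original DataFrame and its labels are untouched.
    df = df.copy()
    for _ in range(n):
        if axis == 0:
            # shuffle each column independently
            for col in df.columns:
                df[col] = np.random.permutation(df[col].values)
        else:
            # shuffle each row independently
            df.loc[:, :] = np.apply_along_axis(np.random.permutation, 1, df.values)
    return df

df = pd.DataFrame({'A': range(10), 'B': range(10)})
print(shuffle_copy(df, 5))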
https://python-decompiler.com/article/2013-04/shuffling-permutating-a-dataframe-in-pandas
CC-MAIN-2020-10
refinedweb
325
72.97
Introduction

List concatenation is the act of creating a single list from multiple smaller lists by daisy chaining them together. There are many ways of concatenating lists in Python. Specifically, in this article, we'll be going over how to concatenate two lists in Python using the plus operator, unpack operator, multiply operator, manual for loop concatenation, the itertools.chain() function and the inbuilt list method extend(). In all the code snippets below, we'll make use of the following lists:

list_a = [1, 2, 3, 4]
list_b = [5, 6, 7, 8]

Plus Operator List Concatenation

The simplest and most straightforward way to concatenate two lists in Python is the plus (+) operator:

list_c = list_a + list_b
print (list_c)
[1, 2, 3, 4, 5, 6, 7, 8]

Unpack Operator List Concatenation

This method allows you to join multiple lists. It is a fairly new feature and only available from Python 3.6+. The unpacking operator, as the name implies, unpacks an iterable object into its elements. Unpacking is useful when we want to generate a plethora of arguments from a single list. For example:

def foo(a, b, c, d):
    return a + b + c + d

# We want to use the arguments of the following list with the foo function.
# However, foo doesn't take a list, it takes 4 numbers, which is why we need to
# unpack the list.
foo(*list_a) # This is the same as if we were to call foo(1,2,3,4)
10

In a nutshell, we use the list constructor ([a,b..]) and generate the elements of the new list in order by unpacking multiple lists one after another:

list_c = [*list_a, *list_b, *list_a]
# This is the same as:
# list_c = [1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4]
print (list_c)
[1, 2, 3, 4, 5, 6, 7, 8, 1, 2, 3, 4]

Multiply Operator List Concatenation

The multiply (*) operator is a special case of list concatenation in Python. It is used to repeat a whole list multiple times (which is why it's denoted with a multiplication operator):

print(list_a * 2)
[1, 2, 3, 4, 1, 2, 3, 4]

for loop List Concatenation

In this method we will go through one list while appending each of its elements to another list one by one. When the loop is over you will have a single list with all the desired elements:

for i in list_b:
    list_a.append(i)
print (list_a)
[1, 2, 3, 4, 5, 6, 7, 8]

itertools.chain() List Concatenation

This method works with iterables. It constructs and returns an iterator that can later be used to construct the chained list (think of it as an arrow which just memorizes the order of elements in the resulting list):

# If we were to call itertools.chain() like so
iterator = itertools.chain([1, 2], [3, 4])
# Basically the iterator is an arrow which can give us the next() element in a
# sequence, so if we call a list() constructor with said iterable, it works like this:
list(iterator)

Under the hood, something along these lines is what happens:

# Iterator: The next element in this list is 1
[1, 2], [3, 4]
 ^
# Iterator: The next element in this list is 2
[1, 2], [3, 4]
    ^
# Iterator: The next element in this list is 3
[1, 2], [3, 4]
         ^
# Iterator: The next element in this list is 4
[1, 2], [3, 4]
            ^
# So the list() call looks something like:
list([1,2,3,4])
# Keep in mind this is all pseudocode, Python doesn't give the developer direct
# control over the iterator

For this method, you will need to import itertools:

import itertools
list_c = list(itertools.chain(list_a, list_b))
print (list_c)
[1, 2, 3, 4, 5, 6, 7, 8]

extend() List Concatenation

This is a built-in list method that can be used to expand a list.
Here we are expanding the first list by adding the elements of the second list to it:

list_a.extend(list_b)
print (list_a)
[1, 2, 3, 4, 5, 6, 7, 8]

Conclusion

In this article, we've gone over six ways to concatenate two lists in Python - using the plus operator, the unpack operator, the multiply operator, a for loop, itertools.chain() and extend().
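One practical difference worth keeping in mind (a small illustration added here, not from the original article): the plus operator builds a new list and leaves its operands alone, while extend() mutates the list in place and returns None, so its result should not be assigned.

list_a = [1, 2, 3, 4]
list_b = [5, 6, 7, 8]

combined = list_a + list_b   # a brand-new list; list_a is unchanged
print(list_a)                # [1, 2, 3, 4]

list_a.extend(list_b)        # modifies list_a in place and returns None
print(list_a)                # [1, 2, 3, 4, 5, 6, 7, 8]

# Common pitfall: this sets result to None, not to the combined list.
result = [1, 2].extend([3, 4])
print(result)                # None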
https://stackabuse.com/how-to-concatenate-two-lists-in-python/
CC-MAIN-2021-17
refinedweb
698
59.06
Details - Type: Bug - Status: Closed - Priority: Major - Resolution: Fixed - Affects Version/s: 1.4 - - Component/s: contrib - DataImportHandler - Labels. Activity - All - Work Log - History - Activity - Transitions there is a huge problem w/ the current implementation. The whole delta-import process is built like an after thought. I wish to revamp the whole design. so that all the rows returned, deletedPkQuery or deltaQuery etc should also go through the transformations FYI - some prior discussion on solr-user too:, about trying "select concat('board-',board_id) from boards where deleted = 'Y'", to no avail. Erik plz let me know if this helps. Someone with magic powers, please mark this as "to be fixed in Solr 1.4". Noble - that looks like the right thing to add for delta queries, but that didn't help with the deleted one. I've attached a patch that fixes things in my limited test. We really need to add some unit tests for this - tricky business though. Lance - do you have some unit tests to add that shows it broken and then fixed with this patch? Erik, the fix is not right see this key = map.get(dataImporter.getSchema().getUniqueKeyField().getName()); The name of the uniqueKey field in the scema and the one you have in the map does not have to be same. DIH really gives you an option for it to be different. The attribute 'pk' is only used for this purpose. Noble - they do have to be the same, and that is why applying the transformations as you patched is part of the right answer. But in order to call writer.deleteDoc(key) in DocBuilder#deleteAll it must use the value of the uniqueKey field, not of the pk one. That is the crux of the issue here. In my case, in the example at the top of this issue, the pk field is board_id with a value of "1", and the id field after applying transformations is "board-1". The delete-by-id to Solr must be done using value "board-1" and that is only obtained by looking up the uniqueKey field (id in this example) from the schema and pulling it from the map. The latest patch I supplied worked in my test case and I analyzed it through a debugger. But in order to call writer.deleteDoc(key) in DocBuilder#deleteAll it must use the value of the uniqueKey field, not of the pk one Why can't you keep the uniqueKey same as pk as shown below(it is not used anywhere else (yes there is one other place which is gonna be deprecated) ) . That should solve the problem. DIH is agnostic of the solr primary key . The point is that , the key name does not matter only the value matters . As long as the value is correct, delete should work automatically . the following should be just fine <entity name="test" pk="id" ... If the pk attribute is not used, then let's just remove it altogether. DIH should NOT be agnostic to Solr's uniqueKey field though - and it would be silly to mandate users duplicate the uniqueKey setting as DIH pk values when it can just be gotten from the schema. So -1 to misusing a pk-named attribute this way, when the most intuitive interpretation would be that pk is the Primary Key of the table being indexed. Here's another patch that removes references to pk from DocBuilder#deleteAll. I've just looked for uses of pk myself, and indeed it seems it should be removed. DIH must know and use the uniqueKey field from the schema for updates/deletes - there is no reason for DIH configs to have to specify it again. In my latest patch, I added a comment with a question about the use of the loop in deletelAll. I don't see why it is needed. Noble? 
There are two bugs in this issue: 1) The deletedPkQuery must say "select field from table". It may not say "select expression" or "select field AS FIELD from table". The former does not work and the latter may work with some databases and not others. In other words, the first example works and the others fail: deletedPkQuery="select id from item where EXPRESSION" deletedPkQuery="select concat('',id) from item where EXPRESSION" deletedPkQuery="select id from item AS ID where EXPRESSION" deletedPkQuery="select id from item AS ITEM.ID where EXPRESSION" 2) If a template is used to format the contents of the Solr primary key, that template is declared at the <field> level and is not visible in the parent <entity> level. Since the deletedPkQuery is declared in the <entity> level, it cannot see the field's template. Thus the template cannot be applied to the results of the deletedPkQuery. Any formatting in the <field> template must be duplicated (in SQL syntax) in the deletedPkQuery. This is workaround that achieves the goal of #2: if the Solr primary key is declared as <field name="id" template="prefix-${item.id} " then the deletedPkQuery must be declared as deletedPkQuery="select concat('prefix',id) from item where EXPRESSION" Because of bug #1, it is not possible to use this workaround. The attached patch fixes bug #1, making it possible to use the workaround. It is Erik's second patch but only with the fix for bug #1. It does not create an automatic way for results of deletedPkQuery to be formatted with the primary key's template. Lance - your patch misses the all important EntityProcessorWrapper.java changes that both Noble and I contributed in our patches. That's needed to make this work! I've retested my last patch and all works fine in my single record simple test case using the exact configuration (and MySQL DB) as posted here. I do a full-import, one record goes in. I do change deleted to 'Y' and do a delta-import and the document is removed from Solr. Lance - please fall back to my last patch - the applyTransformer call in EntityProcessorWrapper#nextDeletedRow that I added is crucial. It gives an id=board-1 value which is then used for deletion in DocBuilder#deleteAll. If the pk attribute is not used, then let's just remove it altogether..... hi Erik. you got me wrong. Let me try to explain. There are two potential names for a field one is the 'column' and other is the 'name' 'column' is the name of the field in the source or in DIH. The 'name' is the name of that field in Solr. DIH uses the 'name' attribute only when it writes the field to Solr. The relation ship between 'pk' attribute and the '<uniqueKey>' in Solr is same . The distinction is important . Otherwise the user will be forced to use the same name in both the db and solr (assuming no transformations are done). Yup, I've deleted my patch. After a lot more examination of the code, and combinatoric testing, I see why it works and is the right fix for this incarnation of the DIH. And, yes, unit tests for more of these options are on the way! Noble - I'm not following your explanation here. The pk attribute seems like it should simply be removed (and the usages of it refactored to use Solr's uniqueKey setting when doing updated/deleted activities - that's the key to the Solr docs after all). Regarding 'column'/'name' - there are cases where it is quite confusing - where column is used to specify the Solr field name - and that feels wrong to me. 
'name' should always be the Solr name, and if it is a templated field, it should be 'name'/'template' not 'column'/'template', for example. I plan on committing this patch later this week. It fixes the problems we've had. Please review and comment further if there are still holes... but we really should be discussing this with unit tests. I hope to see some of those this week from Lance. he pk attribute seems like it should simply be removed The pk attribute IS REQUIRED for this use case. I guess the solution to your problem is that you set the pk correct as the name you find in the "source row" map. if we remove it every user will be forced to keep the uniquekey same as the name it has in the source DB. Take the following usecase <entity name="test" pk="db_id" deletedPkQuery="select db_id from boards where deleted = 'Y'"> <field column="db_id" name="solr_id"/> </entity> elsewhere in solrconfig.xml <uniqueKey>solr_id</uniqueKey> if you omit that pk attributethis will fail The pk attribute currently is used to track only modified/deleted records during a delta-import. But only the uniqueKey setting makes sense, and that will get mapped from the transformations applied. I fail to see how removing pk (and of course refactoring its current uses to the uniqueKey value) will cause problems. The only thing that matters here is that transformations are applied such that the uniqueKey field value is accurate. There is nothing in what I'm suggesting here that makes the DB primary key field match the Solr uniqueKey field name. Again, tests are really required so we can be sure we have the bases covered, but it's been tested pretty thoroughly that my patch fixes a serious issue. hi Erik it works in your case , But the usecase I have given above (which is a more normal usecase) let me explain with the above example . // this code is taken from your patch SchemaField uniqueKeyField = dataImporter.getSchema().getUniqueKeyField(); // now the value of uniqueKeyField is 'solr_id' if (uniqueKeyField == null) return; Object key = map.get(uniqueKeyField.getName()); //the map contains {"db_id"-> "12345"} //so key == null; and it will do nothing and return // on the contrary if you set the pk="db_id" , then it works committed r788587 Noble - I don't really appreciate you committing a partial patch of this. What you committed requires us to set pk="id", which is just silly and nonsensical. I'm opening this back up to continue to fix this awkward area of DIH. Erik, the most common use-case as far as I have seen is that the primary key in tables is different from the uniqueKey in Solr (think about multiple tables with each having a root-entity). Yes, the pk can be transformed (or one can alias it in sql) but this being the most common use-case, I feel pk should be kept as-is. Let me give a few possible cases - The name of table's primary key is different from solr's unique key name and the deletedPkQuery returns only one column (most common use-case) - The name of table's primary key is different from solr's unique key name and the deletedPkQuery returns multiple columns - The name of table's primary key is same as solr's unique key name and the deletedPkQuery returns only one column - The name of table's primary key is same as solr's unique key name and the deletedPkQuery returns multiple columns For #1 'pk' does not matter because we can use the single columns coming back from deletedPkQuery For #2, 'pk' is required otherwise the user is forced to use a transformer (or alias). 
For non-database use-cases (there is none right now), there is no aliasing so the user must write a transformer For #3, neither 'pk' nor 'uniqueKey' matters For #4, we can use solr's uniqueKey name (I guess this is your use-case?). I think that this is a rare use-case. If at all, we decide to go with uniqueKey only, the right way to do that would be to use the corresponding column-mapping for looking up the unique key. For the example below, we should use "db-id" to lookup in the map returned by deletedPkQuery if solr-id is the uniqueKey in solr: <field column="db-id" name="solr-id" /> However, even though the above approach is the 'right' one, it is very tricky and hard to explain to users. Also, there could be multiple columns mapped to same solr key (think about template for unique key for 'types' of documents based on a flag column). This may be very error-prone. What do you think? In all of those cases, as long as what is returned from the query and run through transformations ends up with a key in the map that is the same as the uniqueKey setting in schema.xml, then all is fine. I still don't see a need to set pk="solr-id". Won't there always be a uniqueKey-named key in that map after transformations are applied? uniqueKey definitely matters... that's the field that must be used for deletions, and what I'm consistently seeing mentioned here is that you want to duplicate that by saying pk="<uniqueKeyFieldName>", which is unnecessary duplication. When would you set pk to anything else? It'll be later next week at the earliest, but I hope to get some unit tests contributed so we can discuss this topic through tests rather than prose. My use case is exactly the config at the top of this issue, where the uniqueKey value is a templated transformation (because of multiple DIH configurations bringing in various data sources, so the unique key value must be fabricated to be guaranteed to be unique across different datasources that may have the same primary keys in the databases) - this corresponds to your #1. don't really appreciate you committing a partial patch of this I am sorry about this. But, my fix was good enough for the bug's description. I was planing to open another issue for this. Either we should change the descrption to , deprecate 'pk' or we can open another issue for the same. DIH works as intented to with this fix and the deprecation can be take up . Won't there always be a uniqueKey-named key in that map after transformations.. The assumption is that transformation is always done. My experience with ''DIH support says that users don't use it always.Transformer is just a addon . Anyway , the right fix would be to find out what is the right mapping in DIH for the given solr <uniqueKey> and use it as a pk. That can definitely be another jira issue your thoughts? This patch contains two unit test files addressing the topics of this bug: 1) database and Solr have different primary key names 2) solr primary key value is processed with a template TestSqlEntityProcessorDelta.java does general tests of delta-import TestSqlEntityProcessorDelta2.java does the same tests with a different solr id name and a template for the solr id value. Also, there was a mistake in the test fixture which caused delta imports to always empty the index first. I believe that these tests should work. Please make all changes necessary so that we have working unit tests for these features, and please change the wiki to match all syntax changes. 
Also, I recommend that a feature be removed: creating a deltaImportQuery if the user does not declare one. From the wiki: "deltaImportQuery : (Only used in delta-import) . If this is not present , DIH tries to construct the import query by(after identifying the delta) modifying the 'query' (this is error prone). There is a namespace ${dataimporter.delta.<column-name>} which can be used in this query. e.g: select * from tbl where id=${dataimporter.delta.id} Solr1.4." The user should just be required to supply the deltaImportQuery. Now that the deltaImportQuery declaration exists, this "helper feature" is just a trap for the unwary. The phrase "this is error prone" is always a bad sign. Shalin, how can a deletedPkQuery return multiple columns? It is supposed to return a list of Solr uniqueKeys. These are then sent in with the Solr delete-by-id operation. Lance - the SQL query could return more than one column. A bit non-sensical, but it could be specified that way. SELECT * FROM BOARDS WHERE deleted='Y' for example. And no, it isn't supposed to return Solr uniqueKey id's!!! That's the main crux of the dilemma here... how to jive what is returned from a SQL resultset with Solr unique keys. Also, I recommend that a feature be removed: creating a deltaImportQuery if the user does not declare one. From the wiki: It will be removed. But, we cannot remove it from the wiki until we release 1.4. I believe that these tests should work. hi Lance, do the testcases run.? or we require some fix to make it work? I applied the patch and the testcases are failing Lance - I too get failures with your tests. Here's one example: junit.framework.AssertionFailedError: query failed XPath: //*[@numFound='0'] xml response was: <?xml version="1.0" encoding="UTF-8"?> <response> <lst name="responseHeader"><int name="status">0</int><int name="QTime">1</int><lst name="params"><str name="rows">20</str><str name="start">0</str><str name="q">desc:hello OR XtestCompositePk_DeltaImport_replace_nodelete</str><str name="qt">standard</str><str name="version">2.2</str></lst></lst><result name="response" numFound="1" start="0"><doc><arr name="desc"><str>hello</str></arr><str name="id">1</str><date name="timestamp">2009-06-30T13:55:58.2Z</date></doc></result> </response> at org.apache.solr.util.AbstractSolrTestCase.assertQ(AbstractSolrTestCase.java:182) at org.apache.solr.util.AbstractSolrTestCase.assertQ(AbstractSolrTestCase.java:172) at org.apache.solr.handler.dataimport.TestSqlEntityProcessorDelta.testCompositePk_DeltaImport_replace_nodelete(TestSqlEntityProcessorDelta.java:194) I apologize, I did not put it clearly. Yes, these tests do not work against the current code base. I believe they are correct tests for the features we have talked about in this issue, SOLR-1229. The code in these tests should work, when this bug is resolved. They will not work with the partial patch currently committed. These tests are variations of the original test set. They handle two cases: 1) the solr uniqueKey has a different name than the DB primary key, and 2) the solr uniqueKey's value is a transformation of the DB primary key's value. In other words, if your code actually works, and passes your by-hand tests, it should then pass all of these tests. If there is something wrong in these tests, please fix it. Should you wish to write further tests, the pre/post remove query features at present have no coverage. The testcases are wrong. 
take the case of testCompositePk_DeltaImport_empty() the uniqueKey is 'solr_id' and there is no such field which maps to 'solr_id ' see the mapping <field column="id" name="id"/> Erik, the new fix This eliminates the need for 'pk' most of the cases. It tries to identify the corresponding column mapping for the solr uniqueKey. But the user can override this by specifying the pk attribute if the guess is not going to be correct Erik, did you have a chance to look at the fix? Latest patch (adapted by Lance). All tests pass and this will be committed shortly. Committed to r792963. Thanks Lance and Noble for iterating on this. We've at least got it working well enough for our current needs. this fix does not allow the user to override the pk key name from the data-config.xml . The deduced key takes precedence. In most cases the pk attribute is not required, but when it is mentioned it should take precedence Noble- With this change to DocBuilder.deleteAll(), the Delta2 test runs. With the code in the other order, the two delete operations do not work because the template is not applied. Please alter this config file (from unit test Delta2) to embody the use case you mention. The database pk is 'id' and the schema pk is 'solr_id'. And please describe the primary key design of the database and solr schemas. <dataConfig> <document> <entity name="x" pk="id" transformer="TemplateTransformer" query="select * from x" deletedPkQuery="select id from x where last_modified > NOW AND deleted='true'" deltaImportQuery="select * from x where id='$ '" deltaQuery="select id from x where last_modified > NOW"> <field column="solr_id" template="prefix-$ <entity name="y" query="select * from y where y.A='${x.id} '"> <field column="desc" /> </entity> </entity> </document> </dataConfig> ideally , for your usecases , the pk attribute is not required. So i have removed it. Now it uses the user provided pk if it is not present it falls back to the solr schema uniqueKey Ok - these run. Thanks. Just to make sure I understand. The 'pk' attribute declares 2 things: 1) that this column must exist for a document to be generated, and 2) that this entity is the level where documents are created. Is this true? tmpid appears as an unused name merely so that ${x.id} is sent into solr_id. Maybe name="" would be more clear for this purpose? Lance Something is documented on the wiki but not used: multiple PKs in one entity. On the wiki page, see the config file after "Writing a huge deltaQuery" - there is a attribute:{pk="ITEM_ID, CATEGORY_ID"} There is code to parse this in DataImporter.InitEntity() and store the list in Entity.primaryKeys. But this list of PKs is never used. I think the use case for this is that the user requires more fields besides the uniqueKey for a document. Is this right? This is definitely on my list of must-have features. The second field may or may not be declared "required" in the schema, so looking at the schema is not good enough. The field has to be declared "required" in the dataconfig. Lance (I split this comment out from the previous since they are not related. However, this is another feature inside the same bit of code upon which we are ceaselessly chewing.) 1) that this column must exist for a document to be generated, and no, 2) that this entity is the level where documents are created. Is this true? no: rootEntity decides that pk is currently used for delete. 
XPathEntityProcessor also uses that to decode if this row is a candidate for a doc multiple pk values were supported for deltas (to construct the deltaImportQuery when it is not present ). That is going to be deprecated from 1.4. I guess the latest patch can be committed and this issue can be closed I guess the fix was not done Do you have a failing test case? Nope. The last current trunk has changed an existing feature. The pk attribute is ignored. that is why I have reopened it in the first place There are a couple of other features to remove in this code: 1) multiple primary keys 2) deltaImportQuery is created automatically if it is not given in the dataconfig.xml file Do we want to attack all of those in this issue? Do we want to attack all of those in this issue? we must remove them . Let us have a separate issue for them The Delta2 test handles both of the problems in this issue. Should this issue be closed? The issue is reopened because it breaks an existing functionality. The latest patch fixes that Noble, what is this broken functionality? I just did a full update and all unit tests work. If you describe the problem I will do the unit test and fix it. what is this broken functionality? Till this fix the user provided 'pk' was always honoured. Now, the derived pk can never be overridden. My latest patch has the fix and corresponding changes to the tests. The latest patch works against the latest SVN checkout. Please commit it and close this issue. Thanks! committed :r807537 Bulk close for Solr 1.4 Maybe the pk field returned from deletedPkQuery should be run through the transformation process on the uniqueKey (id in this case) field to get the actual value??
https://issues.apache.org/jira/browse/SOLR-1229?focusedCommentId=12725948&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2016-07
refinedweb
3,989
72.56
src/ehd/implicit.h

Ohmic conduction

This function approximates implicitly the EHD equation set given by the electric Poisson equation
∇·(ε∇φ) = −ρe
and the ohmic conduction term of the charge density conservation (the advection term is computed elsewhere using a tracer advection scheme)
∂ρe/∂t = ∇·(K∇φ)
where ρe is the charge density and φ is the electric potential. K and ε are the conductivity and permittivity respectively. Substituting the Poisson equation into the conservation equation gives the following time-implicit scheme,
∇·[(ε + K dt)∇φ^{n+1}] = −ρe^n

We need the Poisson solver and the timestep dt.

#include "poisson.h"
extern double dt;

The electric potential φ and the volume charge density ρe are scalars while the permittivity ε and conductivity K are face vectors. In mgphi we will store the statistics for the multigrid resolution of the electric Poisson equation.

scalar φ[], rhoe[];
face vector ε[], K[];
mgstats mgphi;

event defaults (i = 0) {

The restriction/refine attributes of the charge density are those of a tracer, otherwise conservation is not guaranteed.

#if TREE
  rhoe.restriction = restriction_volume_average;
  rhoe.refine = refine_linear;
#endif

By default the permittivity is unity and the other quantities are zero.

  foreach_face()
    ε.x[] = K.x[] = fm.x[];
  boundary ((scalar *){ε, K});
}

event tracer_diffusion (i++) {
  scalar rhs[];

The r.h.s of the Poisson equation is set to −ρe,

  foreach()
    rhs[] = - rhoe[]*cm[];
  if (K.x.i) {

We compute the coefficients of the Laplacian: ε + K dt.

    face vector f[];
    foreach_face()
      f.x[] = K.x[]*dt + ε.x[];
    boundary ((scalar *){f});

The Poisson equation is solved to obtain φ^{n+1}.

    mgphi = poisson (φ, rhs, f);

Finally ρe^{n+1} is computed as ρe^{n+1} = −∇·(ε∇φ^{n+1}).

#if TREE
    foreach_face()
      f.x[] = ε.x[]*(φ[] - φ[-1])/Δ;
    boundary_flux ({f});
    foreach() {
      rhoe[] = 0.;
      foreach_dimension()
        rhoe[] -= f.x[1] - f.x[];
      rhoe[] /= cm[]*Δ;
    }
#else // Cartesian
    foreach() {
      rhoe[] = 0.;
      foreach_dimension()
        rhoe[] -= (ε.x[1]*(φ[1] - φ[]) - ε.x[]*(φ[] - φ[-1]));
      rhoe[] /= cm[]*sq(Δ);
    }
#endif
    boundary ({rhoe});
  }

In the absence of conductivity, the electric potential only varies if the electrical permittivity varies,

  else
    mgphi = poisson (φ, rhs, ε);
}
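To make the scheme concrete, here is a rough one-dimensional sketch in Python/NumPy of the same time-implicit update (an illustration written for this article, not Basilisk code): the variable-coefficient Poisson problem is assembled as a tridiagonal system, solved for φ, and the new charge density is recovered from the electric Poisson equation.

import numpy as np

N, dt = 100, 1e-3
h = 1.0 / N
eps = np.ones(N + 1)            # permittivity on the N+1 cell faces
K = np.ones(N + 1)              # conductivity on the N+1 cell faces
rho_e = np.zeros(N)
rho_e[N // 2] = 1.0             # a blob of charge in the middle of the domain

coef = eps + K * dt             # Laplacian coefficients: eps + K*dt

# Assemble div(coef * grad(phi)) = -rho_e with phi = 0 at both boundaries.
A = np.zeros((N, N))
for i in range(N):
    A[i, i] = -(coef[i] + coef[i + 1]) / h**2
    if i > 0:
        A[i, i - 1] = coef[i] / h**2
    if i < N - 1:
        A[i, i + 1] = coef[i + 1] / h**2
phi = np.linalg.solve(A, -rho_e)

# New charge density: rho_e = -div(eps * grad(phi)).
grad_phi = np.diff(np.concatenate(([0.0], phi, [0.0]))) / h   # face gradients
rho_new = -np.diff(eps * grad_phi) / h

print("max potential:", phi.max())
print("total charge after the implicit ohmic step:", rho_new.sum() * h)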
http://basilisk.fr/src/ehd/implicit.h
CC-MAIN-2018-34
refinedweb
317
50.12
The QTextLayout class is used to lay out and paint a single paragraph of text. More... #include <QTextLayout> The QTextLayout class is used to lay out and paint a single paragraph of text. It offers most currently deal with plain text and rich text paragraphs that are part of a QTextDocument. QTextLayout can be used to create a sequence of QTextLine's with given widths and can position them independently on the screen. Once the layout is done, these lines can be drawn on a paint device. Here's some pseudo code that presents the layout phase: int leading = fontMetrics.leading(); int height = 0; int widthUsed = 0; textLayout.beginLayout(); while (1) { QTextLine line = textLayout.createLine(); if (!line.isValid()) break; line.layout(lineWidth); height += leading; line.setPosition(QPoint(0, height)); height += line.height(); widthUsed = qMax(widthUsed, line.naturalTextWidth()); } textLayout.endLayout(); The text can be drawn by calling the layout's draw() function: QPainter painter(this); textLayout.draw(&painter, QPoint(0, 0)); The text layout's text is set in the constructor or with setText(). The layout can be seen as a sequence of QTextLine objects; use lineAt() or lineForTextPosition() to get a QTextLine, createLine() to create one. For a given position in the text you can find a valid cursor position with isValidCursorPosition(), nextCursorPosition(), and previousCursorPosition(). The layout itself can be positioned with setPosition(); it has a boundingRect(), and a minimumWidth() and a maximumWidth(). A text layout can be drawn on a painter device using draw(). Constructs an empty text layout. See also setText().. Constructs a text layout to lay out the given block.). Draws the whole layout on the painter p at the position specified by pos. The rendered layout includes the given selections and is clipped within the rectangle specified by clip. Draws a text cursor with the current pen at the given position using the painter specified. The corresponding position within the text is specified by cursorPosition. Ends the layout process. Returns the current font that is used for the layout, or a default font if none is set. See also setFont(). i-th line of text in this text layout. See also lineCount() and lineForTextPosition(). Returns the number of lines in this text layout. See also lineAt().. See also isValidCursorPosition() and previousCursorPosition(). The global position of the layout. This is independent of the bounding rectangle and of the layout process.(). Enables caching of the complete layout information if enable is true; otherwise disables layout caching. Usually QTextLayout throws most of the layouting information away after a call to endLayout() to reduce memory consumption. If you however want to draw the layouted text directly afterwards enabling caching might speed up drawing significantly. See also cacheEnabled(). Sets the layout's font to the given font. The layout is invalidated and must be laid out again. See also font() and text(). Moves the text layout to point p. See also position(). Sets the position and text of the area in the layout that is processed before editing occurs. Sets the layout's text to the given string. The layout is invalidated and must be laid out again. See also text(). Sets the text option structure that controls the layout process to the given option. See also textOption() and QTextOption. Returns the layout's text. See also setText(). Returns the current text option used to control the layout process. See also setTextOption() and QTextOption.
http://doc.trolltech.com/4.0/qtextlayout.html
crawl-001
refinedweb
563
61.33
CVS log, src/lib/libc/stdio (default branch: MAIN):

- Ansify function definitions.
- Fix some warnings.
- In function definitions, move the type to a line of its own.
- Remove (void) casts for discarded return values.
- Use va_copy() where appropriate.
In-collaboration-with: Alexey Slynko <slynko@tronet.ru>

First step to cleaning up stdio. This breaks the libc ABI; all programs have to be recompiled. Make FILE an opaque type for normal operation (anything outside libc). This means programs have to use the exported interface; they can neither make static instances on the heap nor access fields of their own. Introduce a new type __FILE_public, which contains the fields accessed by the various macros. It is placed first in the real FILE and the macros cast the given FILE * to __FILE_public for access. To allow better argument checks, all macros have been converted to inline functions instead. Merge the various stdio helper headers into a single priv_stdio.h. The license from the original files has been kept; the third clause is gone as part of the UCB copyright addendum. They haven't been changed in FreeBSD at all. Add two new helper functions, fcookie and __fpending, to read parts of the hidden state. The former is handy for funopen users, the latter exists on other systems as well. Clean up some minor warnings on the way and hide some local functions with static. Adapt libftpio and CVS to the changed API.

Use ANSI C prototypes and remove the !__STDC__ varargs compatibility junk.

Add a missing '$DragonFly$' tag in stdio/setbuf.c file.

Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections.

import from FreeBSD RELENG_4 1.5.2.1
http://www.dragonflybsd.org/cvsweb/src/lib/libc/stdio/freopen.c?r1=1.7
CC-MAIN-2015-11
refinedweb
303
69.07
EnoschMembers Content count39 Joined Last visited Community Reputation133 Neutral About Enosch - RankMember Believable Futuristic Technology Enosch replied to T1Oracle's topic in Game Design and TheoryIn english, technically lightning IS silent (we call the resulting sound thunder), lol. But yeah, ionization of air is a little noisy. For computing in the future, I think we will soon move past actual devices for computing (I mean electronic devices). With the advent of cheap nanotech and nanofactories, we will be able to create nanocomputers that will have an incredible computing power/size ratio. Also, we will be able to store information at the atomic (and eventually subatomic) level, so data storage (on our level of size magnitude) will be inconsequential. Also, as we move into nantcomputing, we will see all technology become more integrated. Thus, I don't see us using human interface devices for much longer. By integrated, I mean that we will be able to keep nanobots inside our bodies that will enhance our biological functions. We will be able to communicate "wirelessly" via the nanobots directly from the electro-checmical signals in our brain to someone else's brain. Many obstacles stand in the way of this, not the least of which is that each human's brain stores and uses information differently (like each person has a different and unknown computer architecture and OS), but we will overcome those. Eventually, I believe that we will shed our biological bodies over time via tech enhancements and run our processes ("consciousness") on more stable and efficient "bodies". On teleportation, we will definitely achieve this, though with nanofactories it will most likely be more plausible to make a copy by sending over the "blueprints" of an object than to decompose an object, ship the particles, then reassemble them. It is a weird idea that you could (theoretically) make a copy of yourself at a given time and rebuild it next to you. Kind of makes you wonder if that "person" is you, or if you are? On weapons, I think that in the future, nanoweapons will be the most alarming. Think about it. Nanotech already has produced a "pirhana" type of fluid that breaks down only biological substances (like living things). Imagine a gas bomb of this substance dropped in a group of people. Or swarms of remote or AI controlled nano "bees" that could be targetted onto a group of enemies. I would imagine it kind of difficult to fight back against nano weapons with modern technology, though we will have such amenities by the time these are developed. Just my two cents. Oh yeah, and I think that with the exponential growth of technology, these advances are within the next century, at least. Check out Ray Kurtzweil on google. He has a few books out about future technologies. - Enosch questions about gravity Enosch replied to devgamedev's topic in Math and PhysicsJust orient your player so that his "up" vector is the negation of the gravity vector. For instance, if your gravity vector is [0, -9.8, 0], then your player's up vector is [0, 1, 0] (the negation of [0, -1, 0]). To get your effect very simply, draw your character (right side up), then set the camera's up vector to be the opposite of gravity (as above) and draw everything else. That should make everything right side up when gravity points down, and updside down when gravity points up. But your character will always be right side up. This will work for a 3rd person game where your camera orientation is relative to your player's orientation. 
I am sure there are better solutions, but maybe this will help you get started. Good luck dood! - Enosch - I think you misunderstood what I meant. Using a game engine to make games is GREAT. Nearly all professional games have engines under the hood that run things (Torque, Unreal, etc). So using one that is already done will take that burden off of you. With that said, you will still need to design gameplay code that uses the engine of your choice. I've heard a lot of great things about Torque. It provides tools and scripting along with its engine that streamlines gameplay and content generation. However, if you are planning to use scripting for your gameplay, I would imagine that you will run into a point where things are running too slow. This is because interpreted (scripting) languages are not fast. So using C++ code for the bulk of your gameplay module will speed things up, allowing you to add more gameplay. Scripts are most useful for things that require heavy tweakage and for things that may change a lot. Think of scripts like content. They are easily generated, but aren't the meat and potatoes of your professional game. I would bet that most of those games you saw use another language like C++ with Torque to make their games. Also, even with the content generation tools available with Torque (and others), you will still need to devote a serious amount of time to content generation. I think it is correct to say that most of a game is content, not code, so even with the engine and gameplay out of the way, you are still looking at a very large undertaking to generate content for your game. Definitly check out Torque if you are looking for a game creation tool. If you are wanting to get something together very quickly and learn a lot about using scripting and generating content, you might check out the Unreal or Quake engines and start making a mod. Not only will this be fun and very informative, having a completed MOD under your belt will make you a more enticing candidate for game dev jobs. From what I understand, making a MOD is easier, but still similar to making games using a game system like Torque. I hope I've helped a bit. - Enosch - Heya dood. First off, congrats on your huge endeavor. I hope you find it as rewarding as everyone else. Second, I would (in my soon to be professional opinion) recommend learning C++ and DirectX. C++ because it is (currently) the language used by most professional game dev companies. If you are wanting to get a job working for one of those, you pretty much need C++. The reason is that most/all of their existing code bases are in C++ and because it is fast. Whether C++ will remain the defacto language or be replaced by C# or others is still up for debate (rather hot debate at that). It is NOT the only language you can use and not the only one you can get a job with. But if getting a job is your priority, C++ opens up more doors at the moment. DirectX is (again, currently) the most used API for games, because most games run on Windows because most people have Windows. While that may or may not change in the future, it is the way it is right now. And while DX is not often a requirement for general programmer jobs, it is a definite bonus and a declaration that you are well rounded and experienced enough to use it. Knowledge of other APIs will do this too, but DX is one of the most desirable. Having said that, you want to use a language that feels natural to you. 
If you will be writing your own games, this will increase your productivity and your enjoyment of the process. However, keep in mind that there are limits to what some languages can and can't do. Visual Basic isn't made for real time 3D games. C++ may be overkill for a simple tic-tac-toe game. You need to find the kinds of games you want to make and learn accordingly. Also, what do you mean by professional quality? If you are thinking of writing the next HALO, you will never finish working on it by yourself. By the time you learn all you need to make HALO and then make it (maybe 7-8 years working full time, lol) games will have moved so far past that level of quality that you will be lucky to get it onto the shelf at Bud's Movie Rental. Realistically, the odds are against you completing a project that large, simply because of all the content required. If you have never worked on a "real" game before, you may not realize just how much work goes into it. But only considering the gameplay programming (assuming you have a fully featured engine), you will need to develop many different complex systems just to get a game that is playable, forget professional quality. After you get gameplay that works, you still need music, sound fx, models, animations, textures, physics data, ai data, levels, visual effects, etc. Most/all of these are usually generated by tools that you may can find online or at a software store. But for the rest, you will need to design and build your own, which is another task. Finally, you actually need use the tools (3D graphics editors for instance) enough to get good at it. Most programmers really suck at art and music generation (hasty generalization) because we don't use it enough and/or lack the talent that artists have. THAT is why so many dev companies have large teams. Professional level games made with large teams or experienced professinoals usually still take 1-2 years to complete, because not only do you need to make the game, you have to make it fun and idiot-proof. That in itself is a daunting task, because by the time you think you've solved it for the most idiotic idiot out there, they go and make a better one. (Jeez this is getting long...) My overall point (wow, that thing I started to get to at the beginning...) is that while your goal is admirable, it is unlikely and may just lead to frustration when you actually had the stuff to get good at this after all. My advice is to not quit your day job. Unless you are incredibly gifted and lucky (more so than most of the really smart folks here) you won't be able to start cranking out good games after only a month of C++. Software development is an infinite learning process, so if you are up for that (and you sound like you are), this industry is where you want to be. Just start small. As Kazgoroth said, you won't "master" C++ or DX in a year, probably not even in two if you have an ounce of a life. So keep working where you are and just learn as fast as you can. I would definitly advise going to school. A four year degree will open up so many doors for you and you will not only learn a lot there, but also have a chance to learn on your own. Alternatively, there are a few 2 year degree schools in the US where you can study ONLY game development. When you have made a few small games and feel confident in your language, try to join one of the groups here on gamedev that is working on a project that interests you. I think that you'll find plenty of challenges and new stuff with just that. 
Finally, when you are experienced enough, try to get a job working for a game dev company. I know that by reading some of the posts around here it may sound nearly impossible to get hired, but if you know your stuff you can do it (I did, and I've only been programming for 2.5 years; I'm only 21). After you learn all you can there, then move out to your own shop. You will have the experience and professional contacts needed to make it happen. I know this doesn't look very exciting, but not very much in life worth having happens easily. I would hate for you to jump out there and start making games only to flop a couple of months down the road when you run out of money. There is SO much out there to learn that you haven't seen yet. Keep asking questions. Best of luck to you. And have fun. - Enosch 3D Compass Enosch replied to SAE Superman's topic in General and Gameplay ProgrammingTo get the compass in 3D coords, try this: You have the Camera position [x,y,z] and the Camera's direction vectors. If you don't keep the direction vectors (it helps a LOT), you can compute them from your Camera's rotation (I won't describe it here) without much difficulty. I always recompute them each time the Camera's rotation changes. Once you know those vectors, positioning the compass in 3D space is easy. For instance, if you want it in the upper-right corner of the screen d distance in front of the camera, you can simply say: Compass.position = Camera.position + Camera.rightVec + Camera.upVec + Camera.frontVec*d; The frontVec is scaled by d to move the compass d distance directly in front of you, then you simply add the up and right vectors to get it to the upper-right corner, and add the camera's 3D position and viola, it is positioned in world space and is always in the upper right corner of your screen. Having said all that, it is probably more intuitive to just use screen space coords, since it is a HUD object, not a game object. That way, you can position it on the screen, instead of in the world. Making it always point north shouldn't be a problem, though. Just apply a rotation in the opposite direction as your camera. For instance, if your camera is facing north, the compass points straight ahead. As the camera rotates left, the compass should rotate right with the same angle, so that when the Camera is facing south, the compass is facing straight back. Sounds like a cool HUD object to have. Good luck making it work! - Enosch Linear motion problems Enosch replied to Enosch's topic in General and Gameplay ProgrammingI finally figured out the problem. I was updating all particles in the physics sim every frame from within the physics engine, and also updating all scene graph nodes from within the scene graph. This updated the linear motion of the "particles" (objects) twice each frame. Easily fixed. Thanks for the help though. - Enosch Linear motion problems Enosch posted a topic in General and Gameplay ProgrammingHey guys, I've recently implemented a "scene graph" in my 3d engine. Mainly, it supports heirarchical relationships using a matrix stack. Right now, it is demoed VERY simply with the terrain as the root, a dwarf model as its child and the camera as the dwarf's child. This way, you move the camera relative to the dwarf model. This gives a 3rd person camera effect, which works nicely. The problem happens when you move the dwarf model. I use linear motion model with a position and velocity. 
There are forces that act on each particle, like gravity and drag, and particles may collide with each other and with the terrain. Currently, gravity and drag are applied to the dwarf and only consider it colliding with the terrain. When it moves, its image (onscreen) appears to "skip". It moves correctly, but the image seems to shake from time to time around its accurate position and orientation (quaternion rotation). Any ideas? I'll try to get a demo posted asap. Thanks, - Enosch OpenGL Need help on a very simple prob Enosch replied to UtkarshGaur's topic in Graphics and GPU ProgrammingIf I understand your problem correctly, you want both sides of each face to be lit. As Thought suggested, this is a setting that can be modified. Try: glDisable( GL_CULL_FACE ); This should make sure both sides of every face are drawn. Then, don't specify normals when you are drawing. This "should" draw them both equal. I'm not sure how this is defined if you specify lights, though. If you disable lighting, though, I would think that this would work for you. If there are different / better solutions (there most likely are) please feel free to correct me. - Enosch Question about cloud layers/animating skyboxs. Enosch replied to grill8's topic in General and Gameplay ProgrammingI use a sky sphere instead of a skybox for my 3d engine. It looks the same and is more intuitive (no adjusting to avoid corners, for instance). You still leave off things like lighting and can simply create your sky sphere with minimal faces. To implement your effects, simply have the sky sphere at radius x and the sky effects sphere at radius x - y. Then you can just rotate the actual mesh instead of the uv coords. Heck you could even make multiple spheres and do multiple effects or have some clouds moving fast and some slow. Btw, if you are using particle effects for the clouds instead of textures, it would be simpler in my opinion to just make them face the camera and swing them around the camera at radius x - y (the same as above, just place them at that distance, don't have a seperate sphere with that radius). You could set up cloud layers by simply have clouds at different radii. For instance, if you wanted 5 layers, set the minimum "altitude" (read radius) and simply place the other four layers evenly between that and your sky sphere radius. Using this method would make your effects "stick" to your sky sphere. If your sky goes all the way down to the horizon, the clouds do too, lol. Good luck dood! - Enosch BIG spells in mmorpg's Enosch replied to Riviera Kid's topic in Game Design and TheoryI've always wondered about using keypresses as spells in MMOs. Sounds like a good idea to me. I think it would be an interesting game where spellcasting is a seperate "language" that must be learned in order to be decent wizards. You would still need some sort of "level" system in place behind it. Perhaps casting takes mana over time so only more powerful wizards may cast longer spells. But this would allow spellcasting skill to be placed in the players' hands, removing "push-button" playing. This would make wizards more vulnerable, but you could offset this by making spells more powerful or flexible. I also think it would be cool to redo the stealth system of most MMOs. Instead of a simple "invis-mode"+"backstab" approach, do something more like metal gear solid or manhunt or tenchu. You could still have the "invis mode", but make it temporary and a spell or something. 
The problem with this is that it makes stealthers less viable in open field combat, where now they roxor. But in an urban (time period open) type environment, stealth MMOs would be cool. And archery. If I were making an MMO, I would have the ranged weaponry behave like FPS games where you actually have to hit what you are aiming at, not just select it and let the computer decide whether or not you hit from 2 feet away. That would be sweet. The main idea, I think, is to have an MMO where more skill is required to play a class than figuring out the best attacks and pressing that button as fast as possible. My two copper. - Enosch Question about LOD's Enosch replied to grill8's topic in General and Gameplay ProgrammingROAM isn't THAT slow if you do it right. There is a paper on Gamasutra that I can't think of the name for (if anyone knows this, feel free to add). It implements ROAM with a Variance Tree instead of a distance metric to speed things up. I get around 40-50 FPS on my 2.8GHz, Radeon 9800 machine rendering 1024x1024 terrains with 4 textures blended by height. I haven't optimized it yet, but it works well enough for now. It runs between 4000-5000 tris at a time (less of you aren't looking at the terrain) and dynamically sets the LOD rather realistically. The only problem I have atm is a slight popping while moving around, but I plan to quell this with geomorphing when I have time. Just my two cents. I've heard that simply using lots of little patches at a single LOD works as well, but for 1024x1024 terrain you would have 1024 / 20 = 51 patches per side, which is 51*51 = 2,601 center points to cull against your camera. Also, if you only test the center of the patch against the camera, you have severely noticeable popping when a patch enters the view frustum. And if you test all four corner vertices, you get 2,601*4 = 10,401 points to test. And yes, as you said quadtrees can make this a lot less. Finally, if you still want to use ROAM, but don't want to deal with dynamically changing terrain, there is another paper on gamasutra (again, no name) that deals with using ROAM to generate the level of detail based on flat vs. lumpy areas on a map. This means you generate few triangles in flat areas with not much detail and much more triangles for hills where you need lots of detail. This is done only once when you load the terrain, then you can divide it into sections and use quadtrees like ade-the-heat mentioned. You get static level of detail and can even (with the algorithm in this paper) set the max number of tris to use for a map (maybe scaleable by system speed: fewer/less detial for slow systems, more/higher detail for faster systems). I plan to use this one for my MMO engine later. Good luck dood, - Daniel Help on my DX engine Enosch replied to Enosch's topic in Graphics and GPU ProgrammingAny ideas? I am thinking that the problem is my near / far clipping planes. It looks like it takes a 2D "slice" and draws that. However, I have had that problem before and it still applied textures and materials. And I double checked my near / far distances and they are set to 0.18f and 100.0f respectively. The projection matrix is updated using D3DXMatrixPerspectiveLH() with these values each frame. ANY suggestions would be greatly appreciated... Help on my DX engine Enosch posted a topic in Graphics and GPU ProgrammingHey guys. I'm writing my own DX graphics engine and getting some odd results. Wondering if anyone might have some suggestions. 
My design works like this: the gfx namespace contains all graphics classes.

Resources (wrappers for the D3D versions): gfx::cShader, gfx::cTexture, gfx::cMesh, gfx::cMaterial. The gfx::cEngine class manages engine components (camera, window, etc). The gfx::cCamera class manages view and projection transforms like this: in the Update function (called each frame) it computes the projection and view matrices and sets them to all loaded shaders. The gfx::cWindow class manages Win32 and D3D initialization, alt+tabbing, etc.

My meshes currently only support loading from .X files, which I've used before. I'm currently loading the dwarf.x mesh to test. It loads the mesh into the D3DXMESH and loads the materials into a material buffer, then loops through the material buffer and creates each material with the values loaded. The materials in my engine contain the textures that material uses, so the engine then takes the filename in the D3DXMATERIAL it is loading from (the material buffer) and loads that texture (LPDIRECT3DTEXTURE9) into the material. These are stored in an array that the attribute buffer indexes (like normal).

My mesh rendering looks like this:

gfx::cMesh::Render (const cMatrix44& matWorld)
{
    iUInt numPasses = 0;
    shader.SetWorldMatrix(matWorld);
    shader.Begin(numPasses);
    for (iUInt i = 0; i < numPasses; ++i)
    {
        shader.BeginPass(i);
        for (iUInt j = 0; j < nMaterials; ++j)
        {
            shader.SetMaterial(pMaterials[j]);
            shader.CommitChanges();
            pD3DMesh->DrawSubset(j);
        }
        shader.EndPass();
    }
    shader.End();
}

shader.SetMaterial() looks like this: each shader keeps a reference to the last loaded texture. If the next material uses the same texture as the last one, it doesn't send the texture data to the shader.

void SetMaterial(const cMaterial& mat)
{
    pD3DEffect->SetFloatArray("mAmb", (iFloat*)&mat.ambient, 4);
    pD3DEffect->SetFloatArray("mDif", (iFloat*)&mat.diffuse, 4);
    pD3DEffect->SetFloatArray("mSpe", (iFloat*)&mat.specular, 4);
    pD3DEffect->SetFloatArray("mEmi", (iFloat*)&mat.emissive, 4);
    pD3DEffect->SetFloat     ("mShi", mat.shine);
    if (texturemap != mat.texturemap)
    {
        texturemap = mat.texturemap;
        pD3DEffect->SetTexture ("texMap", mat.texturemap);
    }
}

My resource classes contain cast operators to the D3D types they wrap, so passing them directly into a function expecting their D3D type should call the cast operator, right? Anyways, here is my result from this design. Screenie It obviously loads the mesh, and the other resources load, because any load errors send an error message to the debug window on my compiler, and there are none. At this point I'm at a loss as to what is wrong. Any suggestions?

Engine material systems
Enosch replied to Endar's topic in General and Gameplay Programming

I used something similar with combining graphics and physics material properties, but kept them separate (more modular). My engine is divided into several components, including graphics, physics, and simulation. Each of these contains its own namespace to keep things sane. Graphics=gfx, Physics=psx, Simulation=sim. I then have a gfx::cMaterial class, a psx::cMaterial class, and a sim::cMaterial class. The simulation engine basically provides a way for the game using the engine to handle game objects on a higher level, without worrying about graphics and physics and AI, etc. So the gfx::cMaterial class contains data that the graphics engine will use (textures, dif/spec/amb/emi/shine, etc.) and the psx::cMaterial class contains data that the physics engine will use (drag, friction, mass, density, etc.).
Finally, the sim::cMaterial class provides higher level interaction, for example setting what happens if the material is shot or collided with (a shimmer effect for dragon scales would be an example). This keeps things modular. If I change out my graphics engine, it doesn't affect the other components. Likewise the physics engine. (A rough sketch of this split appears after these posts.) - Enosch

a software development people grow stage.
Enosch replied to derek7's topic in For Beginners

I would say that an "experienced programmer" is one who knows how to recognize, classify, and break down problems. He also knows how and where to gain knowledge about his problem in order to solve it on his own. Also, I would say that a "master programmer" is one who has already learned how others accomplish things, but has graduated to a point where he strives to develop systems that work better than those. He also has developed his own software development process to efficiently translate design to implementation and knows how to implement a design to be as fast and small (CPU and memory) as possible. Just my two cents. - Enosch
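The following is a minimal, illustrative sketch of the gfx/psx/sim material split described in the engine-material post above. Only the namespace names and the listed properties come from the post; the exact types, member names, and the cross-references inside sim::cMaterial are assumptions, not the author's actual engine code.

#include <string>

namespace gfx {
    // Rendering-only data: texture plus dif/spec/amb/emi/shine values.
    struct cMaterial {
        std::string textureMap;
        float diffuse[4];
        float specular[4];
        float ambient[4];
        float emissive[4];
        float shine;
    };
}

namespace psx {
    // Physics-only data: drag, friction, mass, density.
    struct cMaterial {
        float drag;
        float friction;
        float mass;
        float density;
    };
}

namespace sim {
    // Higher-level gameplay reactions built on top of the other two.
    struct cMaterial {
        const gfx::cMaterial* graphics; // owned by the graphics engine
        const psx::cMaterial* physics;  // owned by the physics engine
        void OnShot() { /* e.g. trigger a shimmer effect for dragon scales */ }
    };
}

Because each component only sees its own cMaterial, swapping out the graphics or physics engine leaves the other namespaces untouched, which is the modularity the post describes.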
https://www.gamedev.net/profile/77612-enosch/
CC-MAIN-2017-51
refinedweb
4,532
71.04
TSMimeHdrFieldValueStringSet

Synopsis

#include <ts/ts.h>

TSReturnCode TSMimeHdrFieldValueStringSet(TSMBuffer bufp, TSMLoc hdr, TSMLoc field, int idx, const char * value, int length)

Description

TSMimeHdrFieldValueStringSet() sets the value of a MIME field. The field is identified by the combination of bufp, hdr, and field, which should match those passed to the function that returned field, such as TSMimeHdrFieldFind(). The value is copied to the header represented by bufp. value does not have to be null terminated (and in general should not be).

If idx is non-negative, the existing value in the field is treated as a multi-value and idx as the 0-based index of which element to set. For example, if the field had the value dave, grigor, tamara and TSMimeHdrFieldValueStringSet() was called with a value of syeda and an idx of 1, the value would be set to dave, syeda, tamara. If idx is non-negative it must be the index of an existing element or exactly one past the last element or the call will fail. In the example case idx must be between 0 and 3 inclusive. TSMimeHdrFieldValuesCount() can be used to get the current number of elements.

This function returns TS_SUCCESS if the value was set, TS_ERROR if not.
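As a usage sketch (not part of the reference page above): the fragment below assumes a plugin that already holds an HTTP transaction txnp and rewrites the Host header of the client request. Passing -1 for idx to replace the whole value, rather than a single element, is an assumption based on common plugin practice; the rest uses the companion calls named in the description.

#include <string.h>
#include <ts/ts.h>

static void
rewrite_host(TSHttpTxn txnp, const char *host)
{
  TSMBuffer bufp;
  TSMLoc hdr_loc;

  if (TSHttpTxnClientReqGet(txnp, &bufp, &hdr_loc) != TS_SUCCESS)
    return;

  TSMLoc field_loc = TSMimeHdrFieldFind(bufp, hdr_loc, TS_MIME_FIELD_HOST, TS_MIME_LEN_HOST);
  if (field_loc != TS_NULL_MLOC) {
    /* idx = -1 replaces the whole value; a non-negative idx would target
       one element of a multi-value field as described above. */
    if (TSMimeHdrFieldValueStringSet(bufp, hdr_loc, field_loc, -1,
                                     host, (int)strlen(host)) != TS_SUCCESS) {
      /* handle the TS_ERROR case as appropriate for the plugin */
    }
    TSHandleMLocRelease(bufp, hdr_loc, field_loc);
  }

  TSHandleMLocRelease(bufp, TS_NULL_MLOC, hdr_loc);
}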
https://docs.trafficserver.apache.org/en/latest/developer-guide/api/functions/TSMimeHdrFieldValueStringSet.en.html
CC-MAIN-2020-40
refinedweb
202
61.87
Find the nth term of the given series 0, 0, 2, 1, 4, 2, 6, 3, 8, 4… in C++

In this problem, we are given an integer value N. Our task is to find the nth term of the given series:

0, 0, 2, 1, 4, 2, 6, 3, 8, 4, 10, 5, 12, 6, 14, 7, 16, 8, 18, 9, 20, 10…

Let’s take an example to understand the problem,

Input − N = 6
Output − 2

Solution Approach

To find the Nth term of the series, we need to closely observe the series. It is a mixture of two series, formed by the even-position and odd-position terms. Let’s see each of them.

At even positions −

- T(2) = 0
- T(4) = 1
- T(6) = 2
- T(8) = 3
- T(10) = 4

The value of T(n) if n is even is {(n/2) - 1}.

At odd positions −

- T(1) = 0
- T(3) = 2
- T(5) = 4
- T(7) = 6
- T(9) = 8

The value of T(n) if n is odd is {n - 1}.

Example

Program to illustrate the working of our solution

#include <iostream>
using namespace std;

bool isEven(int n){
   if(n % 2 == 0)
      return true;
   return false;
}

int findNthTerm(int n){
   if (isEven(n))
      return ((n / 2) - 1);
   else
      return (n - 1);
}

int main(){
   int N = 45;
   cout<<N<<"th term of the series is "<<findNthTerm(N);
   return 0;
}

Output

45th term of the series is 44
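As a side note (not part of the original article), the even/odd split above can also be written as a single expression. This is purely a stylistic variant of findNthTerm with the same 1-based indexing:

int findNthTerm(int n){
   // even position: n/2 - 1, odd position: n - 1
   return (n % 2 == 0) ? (n / 2 - 1) : (n - 1);
}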
https://www.tutorialspoint.com/find-the-nth-term-of-the-given-series-0-0-2-1-4-2-6-3-8-4-in-cplusplus
CC-MAIN-2022-27
refinedweb
531
71.72
Ste fine tune something. To do this a rotary encoder is the perfect option. This tutorial will show you how to connect a rotary encoder to your ARDUINO to control an H-Bridge that will run the stepper motor. This video shows the circuit in action:

PARTS YOU WILL NEED:

- 1 Rotary Encoder (one with a push button built in)
- 1 Push Button (only if your rotary encoder does not have a push button)
- 1 H-Bridge chip (L293D or SN754410NE, you can also use a pre-assembled stepper control board)
- 1 Bipolar stepper motor
- ARDUINO board that supports interrupts
- bread board
- jumper wires

Step 1: Build the Circuit

The picture above shows the circuit you will need to build; make sure the rotary encoder is connected to pins that support interrupts (on the Uno it's pins 2 & 3). If you use a different H-Bridge chip or a pre-built board you will need to modify the circuit accordingly. For more info on H-Bridges and how they work check out this link: H-Bridge INFO

Step 2: The Code

Now that you have the circuit built, copy the code below and paste it into your ARDUINO IDE:

#include <Stepper.h>

const int encoderPinA = 2;  // right
const int encoderPinB = 3;  // left
const int encButton = 4;    // encoder push button

int encoderPos = 0;                // counter
unsigned int lastReportedPos = 1;  // change
static boolean rotating = false;   // debounce

boolean A_set = false;
boolean B_set = false;

const int stepsPerRevolution = 200;
Stepper myStepper(stepsPerRevolution, 8, 9, 10, 11); // h-bridge pins

void setup() {
  myStepper.setSpeed(60);
  pinMode(encoderPinA, INPUT_PULLUP); // enabling pullups
  pinMode(encoderPinB, INPUT_PULLUP);
  pinMode(encButton, INPUT_PULLUP);
  attachInterrupt(0, doEncoderA, CHANGE); // pin 2
  attachInterrupt(1, doEncoderB, CHANGE); // pin 3
  Serial.begin(9600);
}

void loop() {
  rotating = true; // reset the debouncer
  if (lastReportedPos != encoderPos) {
    Serial.println(encoderPos);
    lastReportedPos = encoderPos;
    if (digitalRead(encButton) == LOW) {
      encoderPos = (encoderPos * 100); // change the 100 to how many steps
    }                                  // you want the stepper to move when
    myStepper.step(encoderPos);        // the button is pushed down
    encoderPos = 0;
  }
}

void doEncoderA() {
  // debounce
  if (rotating) delay(1); // wait a little until the bouncing is done
  // Test transition
  if (digitalRead(encoderPinA) != A_set) { // debounce once more
    A_set = !A_set;
    // adjust counter + if A leads B
    if (A_set && !B_set) encoderPos += 1; // change the 1 to steps to take when encoder turned
    rotating = false; // no more debouncing until loop() hits again
  }
}

// Interrupt on B changing state
void doEncoderB() {
  if (rotating) delay(1);
  if (digitalRead(encoderPinB) != B_set) {
    B_set = !B_set;
    // adjust counter - 1 if B leads A
    if (B_set && !A_set) encoderPos -= 1; // change the 1 to steps to take when encoder turned
    rotating = false;
  }
}

Now that you have everything running, turn the rotary encoder. If the stepper turns the opposite way to what you want, just switch 2 wires on one side of the stepper motor (only one side, not both). Next push the button down and turn the encoder. It should turn 100 steps in the direction you turned the encoder. If you want to change how many steps the stepper moves when you turn the encoder just look at the code; it's commented at the 3 places you need to change it. When you change the push button value remember it will be multiplied by the normal steps value.
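A small addition that is not in the original instructable: if you would rather reverse the motor in software than swap wires, the Arduino Stepper library treats a negative step count as the opposite direction. A tiny helper along these lines (it assumes the myStepper object declared in the sketch above) could be used in place of the direct call in loop():

// Same wiring as above; direction is flipped in software instead of by
// swapping two motor wires.
void stepReversed(int steps) {
  myStepper.step(-steps); // negative counts turn the motor the other way
}

// in loop(): stepReversed(encoderPos); instead of myStepper.step(encoderPos);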
I hope this tutorial was helpful. If you have any questions or comments please leave them and I'll get back to you ASAP.....

3 Discussions

2 years ago: Nice project, thanks for sharing. I am working on something similar and would appreciate any help you can give. I want the encoder to control the motion of the stepper motor but I am new to arduino so not sure how to build such a code. e.g. 1) I want the stepper motor to start rotating clockwise at a speed X when the encoder reads a value Y0. 2) Then when the encoder reads a value Y1 I want it to give a signal for the PC to start recording the value of the encoder. 3) When the encoder reaches a value Y2 I want the stepper motor to stop and rotate in the opposite direction till it reaches Y0 again and also give a signal to the PC to stop recording. 4) When the encoder reaches Y0 again I want it to give a signal for a 2nd stepper motor to move just 1 step clockwise. This process is to be repeated until the encoder of the 2nd stepper motor reaches a certain value where it stops the whole process. It is like a scanner controlled by the encoder location. Any hints would be much appreciated.

3 years ago: Great Arduino project.

Reply, 3 years ago: thank you......
https://www.instructables.com/id/ARDUINO-stepper-motor-controlled-with-rotary-encod/
CC-MAIN-2019-35
refinedweb
791
66.17
By now, you probably are aware that you can dynamically load XAP files using the Managed Extensibility Framework (MEF) within your Silverlight applications. Have you been scratching your head, however, and wondering how on earth you would actually test something like that? It is possible, and here's a quick post to show one way you can.

First, we need a decent deployment service. You're not really going to hard-code the download and management, are you? I didn't think so. If you need an example, look no further than the sample code I posted to Advanced Silverlight Applications using MEF. Here's what the interface looks like:

public interface IDeploymentService
{
    void RequestXap(string xapName, Action<Exception> xapLoaded);
    AggregateCatalog Catalog { get; }
}

This keeps it simple. Request the xap file, then specify a delegate for a callback. You'll either get a null exception object (it was successful) or a non-null (uh… oh.)

Now, let's focus on testing it using the Silverlight Unit Testing Framework. The first caveat is that you cannot use it on the file system. This means that your project will not work if you run it with a test page rather than hooking it to a web server (local or not). Doing this is simple. In your ASP.NET project, go to the Silverlight tab and add your test project. When you are adding it, there is an option to generate a test page. I typically have one "test" web project with all of my test Silverlight applications, so I will have multiple test pages. To run a particular test, you simply set your ASP.NET web project as the start up project, then the corresponding test page (we're talking the aspx, not the html) as the start page. I usually delete the automatically generated HTML pages.

Now we need to give MEF a test container. The caveat here is that, without a lot of work, it's not straightforward to reconfigure the host container, so you'll want to make sure you test a given dynamic XAP file only once, because once it's loaded, it's loaded. This is what my application object ends up looking like:

public partial class App
{
    public AggregateCatalog TestCatalog { get; private set; }

    public App()
    {
        Startup += Application_Startup;
        InitializeComponent();
    }

    private void Application_Startup(object sender, StartupEventArgs e)
    {
        // set up a catalog for tests
        TestCatalog = new AggregateCatalog();
        TestCatalog.Catalogs.Add(new DeploymentCatalog());
        var container = new CompositionContainer(TestCatalog);
        CompositionHost.Initialize(container);

        // now set up the unit testing framework
        var settings = UnitTestSystem.CreateDefaultSettings();
        RootVisual = UnitTestSystem.CreateTestPage(settings);
    }
}

Here, I haven't composed anything, just set up the container. Now I'm going to add a simple dynamic XAP for testing. I add a new Silverlight application and wire it to the test web site but do NOT generate a test page. I blow away the App.xaml and MainPage.xaml resources, and add a simple class called Exports. Here is my class:

public class Exports
{
    private const string TESTTEXT = "TestText";

    [Export(TESTTEXT, typeof(string))]
    public string TestText
    {
        get { return TESTTEXT; }
    }
}

Yes, you got it – just a simple export of a string value. Now let's write our test. I create a new test class and decorate it with the TestClass attribute. I am also running asynchronous tests, so it's best to inherit the test from SilverlightTest which has some base methods for asynchronous testing.
Let's take a look at the set up for my test:

[TestClass]
public class DeploymentServiceTest : SilverlightTest
{
    private const string DYNAMIC_XAP = "DynamicXap.xap";
    private const string TESTTEXT = "TestText";

    private DeploymentService _target;

    [Import(TESTTEXT, AllowDefault = true, AllowRecomposition = true)]
    public string TestString { get; set; }

    public DeploymentServiceTest()
    {
        CompositionInitializer.SatisfyImports(this);
    }

    [TestInitialize]
    public void TestInit()
    {
        if (Application.Current.Host.Source.Scheme.Contains("file"))
        {
            _target = null;
        }
        else
        {
            _target = new DeploymentService();
            ((App)Application.Current).TestCatalog.Catalogs.Add(_target.Catalog);
        }
    }
}

So right now I'm simply setting up my targets. The property is key – by composing imports on construction, I register my test class with the MEF system. Right now, however, I haven't loaded anything, so it won't be able to satisfy the import. By using AllowDefault true, however, I tell it I'm expecting something later and setting it to null is fine. The recomposition is what will trigger an update once the catalogs change. I also reach out to the test catalog I set up in the main application and add the catalog from my deployment service to it. Note that if I am running on the file system, I don't bother setting up my service.

Next, I can add a stub to determine if I can even test this. If I am running from the file system, the deployment service is never set up. I created a helpful method that asserts an "inconclusive" when this is the case:

private bool _CheckWeb()
{
    if (_target == null)
    {
        Assert.Inconclusive("Cannot test deployment service from a test page. Must be hosted in web.");
        return false;
    }
    return true;
}

Now we can write our main test. First, we check to make sure we are in a web context. Then, we load the xap, and once it is loaded, confirm there were no errors and that our property was successfully set:

[Asynchronous]
[TestMethod]
public void TestValidXap()
{
    if (!_CheckWeb())
    {
        return;
    }

    Assert.IsTrue(string.IsNullOrEmpty(TestString), "Test string should be null or empty at start of test.");

    _target.RequestXap(DYNAMIC_XAP, exception =>
    {
        Assert.IsNull(exception, "Test failed: exception returned.");
        Assert.IsFalse(string.IsNullOrEmpty(TestString), "Test failed: string was not populated.");
        Assert.AreEqual(TESTTEXT, TestString, "Test failed: property does not match.");
    });
}

And that's pretty much all there is to it – of course, I am also adding checks for things like contract validation (are you passing me a valid xap name?) and managing duplicates, but you get the picture.

Tags: MEF, Silverlight, unit testing
http://www.wintellect.com/devcenter/jlikness/unit-testing-dynamic-xap-files
CC-MAIN-2015-32
refinedweb
965
57.57
Someone was asking on Flask's IRC channel #pocoo about sharing templates across more than one app but allowing each app to override the templates (pretty much what Django's TEMPLATE_DIRS setting is for). One way of doing this would be to customise the Jinja2 template loader. Here's a trivial Flask app that searches for templates first in the default folder ('templates' in the same folder as the app) and then in an extra folder.

import flask
import jinja2

app = flask.Flask(__name__)

my_loader = jinja2.ChoiceLoader([
    app.jinja_loader,
    jinja2.FileSystemLoader('/path/to/extra/templates'),
])
app.jinja_loader = my_loader

@app.route('/')
def home():
    return flask.render_template('home.html')

if __name__ == "__main__":
    app.run()

The only thing special here is creating a new template loader and then assigning it to the jinja_loader attribute on the Flask application. ChoiceLoader will search for a named template in the order of the loaders, stopping on the first match. In this example I re-used the loader that is created by default for an app, which is roughly like FileSystemLoader('/path/to/app/templates'). There are all kinds of other exciting template loaders available.

I really like the fact that Flask and Bottle's APIs are so similar. Next I want Flask to include Bottle's template wrapping decorator by default (there's a recipe in the Flask docs) and for both of them to re-name it @template.

Comments:

I just wanted to comment to say that I rolled my own terrible version of this functionality that included way too many calls to os.path.exists(), then got frustrated and found this post, which simplified my app dramatically. Thanks for shedding some light on this.

In case anyone is having problems: this doesn't work in Heroku. You have to pass the path without the first '/'. Like 'flaskapp/userdata'

Thank you very much!
https://buxty.com/b/2012/05/custom-template-folders-with-flask/
CC-MAIN-2019-22
refinedweb
307
65.93
Introduction

For a COM interface for this class library. I did some research and found some very useful articles that dealt with the same problem. However, I found them somewhat difficult to follow.

The Problem

Obviously, I cannot present the actual class here. Instead, I will use a simple demo class. If you follow this example, you can apply the same procedure for very complex classes too.

Public Class demo
    Private csError As String
    ' ...

The Solution

As I found out, there are two ways to solve this problem. One is very easy while the other one is a bit hard. Unfortunately, I came to know the hard one first and discovered the second one after I had implemented my component using the hard method.

Solution 1: To create a COM object using the COM class template of Visual Studio.NET:

- Open a New Windows Application project by clicking New on the File menu, and then clicking Project. The New Project dialog box will appear.
- With Visual Basic Projects highlighted in the Project Types list, select Class Library from the Templates list, and then click OK. The new project will be displayed.
- Select Add New Item from the Project menu. The Add New Item dialog box will be displayed.
- Select COM Class from the Templates list, and then click Open. Visual Basic.NET will add a new class and configure the new project for COM interop.
- Add code, such as properties, methods, and events to the COM class. In our case we will just copy the code for the property ErrorMsg and the method Concat into the class and rename the class to demo.
- Select Build Solution from the Build menu. Visual Basic.NET will build the assembly and register the new COM object with the operating system automatically.

What's next? Nothing, folks. Your COM component is ready to be accessed from any VB or classic ASP page etc. using CreateObject(...).

Solution 2: Do-it-yourself approach

Steps involved:

- Add the interop namespace to your class: Imports System.Runtime.InteropServices
- Then develop an interface with the same name as your original class, just add _ before it to make it different. For our example, it will be like this: Public Interface _demo End Interface
- Then, create a GUID for this interface. To create a GUID for your COM object, click Create GUID on the Tools menu, or launch guidgen.exe to start the guidgen utility. Search for the file guidgen.exe on your machine. It is located under the C:\Program Files\Microsoft Visual Studio .NET\Common7\Tools directory on my computer. Run it. Then, select Registry Format from the list of formats provided by the guidgen application. Click the New GUID button to generate the GUID and click the Copy button to copy the GUID to the clipboard. Paste this GUID into the Visual Studio.NET code editor. Remove the leading and trailing braces from the GUID provided. For example, if the GUID provided by guidgen is {2C8B0AEE-02C9-486e-B809-C780A11530FE} then the GUID should appear as: 2C8B0AEE-02C9-486e-B809-C780A11530FE. Once a GUID is created, paste it into the class as: Guid("1F249C84-A090-4a5b-B592-FD64C07DAB75"). Then use the InterfaceTypeAttribute to make it a dispatch interface required by automation as: InterfaceType(ComInterfaceType.InterfaceIsIDispatch).
- In a similar fashion, you need to create a GUID for the actual class too.
This is the same ProgID that we will be using to call our component as follows: set myobject = createobject("comDemo.demo")

<ClassInterface(ClassInterfaceType.None)> Public Class LoanApp
    Implements IExplicit
    Sub M() Implements IExplicit.M
    ...
End Class

"The ClassInterfaceType.None value prevents the class interface from being generated when the class metadata is exported to a type library. In the preceding example, COM clients can access the LoanApp class only through the IExplicit interface."

- The next step is to modify the property and method signatures which we want to expose. So, Public ReadOnly Property ErrorMsg() As String becomes Public ReadOnly Property ErrorMsg() _ As String Implements _demo.errormsg and Public Function Concat(ByVal str1 As String, ByVal str2 As String) As String becomes Public Function Concat(ByVal str1 As String, ByVal str2 As String) _ As String Implements _demo.Concat
- Once you make these changes to your class, you are all set. Just one more change is required to make it complete. Right-click the project to bring up the project properties; under the Build properties, check the interop check box. When building the project, this will ensure that a type library is created for the COM interface and is registered too. If you do not want to use this option, you can use the Regasm utility to do the same. For Regasm, the syntax would be: regasm demo.dll /tlb:demo.tlb. It will create the type library and will register it for you.

That Was It

You can build the project and can access it:

- Through late binding: using createobject("comDemo.demo"), or
- Through early binding: by setting a reference to this component in your project.

Strong Name Requirement

It is recommended that your assembly should have a strong name. For that, you can use the strong name tool sn.exe. If you do not create the strong name, you will have to keep the component and the calling program in the same directory.
https://www.codeguru.com/visual-basic/exposing-com-interfaces-of-a-net-class-library-for-late-binding/
CC-MAIN-2021-49
refinedweb
883
65.93
Wikiversity:Colloquium/archives/March 2008

tabs?

I wonder if it's possible for a page to split up into different parts using what appears to be a link to the "next" page by hiding the text until the learner continues to the next part of the chapter, by hiding the next part of the course behind the text and showing it on request. The reason is I don't want to have excessive number of pages created just to accommodate the different parts of the course, but worse would be to have pages that continue too long. I know it would be similar to tabbing using css, but this is wiki and I wonder if there is an easier method.--danthemango 12:24, 1 March 2008 (UTC) - You know what, I think that'll work --danthemango 10:59, 2 March 2008 (UTC) - There are a number of options for navigating courses - one is a simple 'back, forward' link somewhere on the page (eg. at the top of Historical Introduction to Philosophy/General Introduction, and another is a more complex navigation box (ie template) that lists all pages in a course allowing you to hop from/to multiple pages of the course (eg. Course:WikiU Film School Course 01 - Learning the Basics of Filmmaking). There are other methods of page design which could allow for course navigation - see Portal:Social entrepreneurship (which has a sort of 'tabs'). Cormaggio talk 14:52, 1 March 2008 (UTC) - Thanks for that, and that's probably what I'll end up using. I won't use the complex list all of the pages list that some other pages have, because I remember reading some of these pages and that REALLY confused me. --danthemango 18:19, 3 March 2008 (UTC) - :-) <whisper>I still don't know how to write, for example, a wiki table from scratch - so I usually just copy code and modify it for my own purposes.</whisper> If I were to use something as complex as the social entrepeneurship portal, then I would say somewhere (perhaps on the talk page, or in the edit summary) where I got the code from. Cormaggio talk 08:53, 4 March 2008 (UTC) - There is no limit on the number of pages on a wiki.
However, you may be interested in meta:User:Pathoschild/Scripts/AJAX transclusion table, which has been implemented on meta. Hillgentleman|Talk 01:42, 2 March 2008 (UTC) - Thanks for that script, it doesn't help me now, but I know it might come in handy later --danthemango 10:59, 2 March 2008 (UTC) User:Mirwin For all those who are not subscribed to the Wikiversity mailing list, I wanted to let you know that we have been notified of Mirwin's passing away. He's been with Wikiversity since the beginning and, while we didn't always agree, I still feel that his absence will be missed a lot here. sebmol ? 09:48, 1 March 2008 (UTC) - That's very sad news. Out of respect for his passion, we should try to imagine what the citizens of the Lunar Boom Town would do with their fallen comrades, and perhaps hold a memorial on IRC and/or a make a letter of remembrance for his twin. --SB_Johnny | talk 10:38, 1 March 2008 (UTC) - I don't know if there's any convention for deceased users (especially very active ones), but I would suggest a convention: hold the user's talk page open for condolences for a period of X days, and then protect the page thereafter. --McCormack 10:49, 1 March 2008 (UTC) - pro: I think have seen this "tradition" also at WP - @sebmol: what was the reason to remove the existing content of the user/talk page - can't just the template be put at begin of talk/user page ? We don't know, if mirwin would want this. Also, others still have the possibility to read easily info about him and info about him is not killed also :-( ----Erkan Yilmaz Wikiversity:Chat 14:38, 1 March 2008 (UTC) - I've been checking around. The normal thing to do is to leave the user page in its last state - with a note at the top. --92.228.91.196 18:54, 1 March 2008 (UTC) - Very sad news indeed ... i've finally put 2 and 2 together and figured out that it was User:Mirwin who passed. I hope that Lunar Boom Town will continue in some fashion and that one day someone will discover its true genius. I've always found it to be a simultaneously confusing and fascinating learning project and one that used the wiki space in a truly unique way. We could, I think, learn a lot by the example Mirwin has left us. Countrymike 09:41, 8 March 2008 (UTC) Username Change Could my username be changed to Terra, I know theirs a changing username page but I'm known as Terra on all wikimedia sites, and have stated that I'm known as Imperial on this one, Terra does exist on this site but I'm not sure whether I was the one who created it, but wondered if it would be possible to use the name. Terra What do you want? 18:36, 10 March 2008 (UTC) - Hello, please put your request at: Wikiversity:Changing username (I saw you did this once). If this doesn't yield fast feedback, try contacting one of the bureaucrats please. ----Erkan Yilmaz Wikiversity:Chat 07:24, 11 March 2008 (UTC) Done I've moved it to the Changing Username Page. Terra What do you want? 08:21, 11 March 2008 (UTC) Hi I am Tanvir. and i am the new user of wikiversity. so i have no know idea in this site. so any body help me of friendly mind. Thanks Tanvir Ahmed, --Opu 05:44, 11 March 2008 (UTC) - A warm welcome to you Tanvir ! Did you have some time to look on your talk page already ? There is some basic info. If you want, you can also give some info on you and your interests on your user page, then other people always can read more and join with you. 
----Erkan Yilmaz Wikiversity:Chat 07:29, 11 March 2008 (UTC) Changes in Wikiversity staffing Although the proposals have been live for a while now, I have a feeling that the only people to have seen them are recent-changes patrollers and those who have the pages in question on their watchlist! I'd like to encourage the community to become more actively involved in some of the most significant changes in Wikiversity staffing since foundation in August 2006. --McCormack 07:17, 15 March 2008 (UTC) What's being proposed - On 3rd March, SB_Johnny nominated himself as a bureaucrat (click here to discuss and vote). - Shortly after this, Mikeu was also nominated for bureaucrat and has accepted the nomination (click here to discuss and vote). These candidacies are not competitive; the two candidates have supported each other and cooperate a lot on IRC. WV can theoretically have many more bureaucrats, but consensus seems to be that "a few" bureaucrats is enough. --McCormack 07:17, 15 March 2008 (UTC) - also self nomination: Wikiversity:Candidates for Custodianship/Erkan Yilmaz (Bureaucrat) ----Erkan Yilmaz Wikiversity:Chat 17:59, 15 March 2008 (UTC) A short history of bureaucrats at Wikiversity Cormaggio was WV's 1st bureaucrat, probably installed as such when the software/server was set up. Cormaggio is a PhD student specialising on collaborative learning (i.e. he researches into precisely what Wikiversity does); he is widely known in the community, particularly for encouraging others. Unfortunately the real world has necessitated his frequent absence. Shortly after Wikiversity began, there was a spate of quickly appointing custodians and one further bureaucrat: sebmol (see discussion - worth reading to see how a bureaucrat gets elected on WV and what is expected of them). sebmol is only active on German Wikipedia; he visits WV rarely to perform required actions with bureaucrat tools. There was also the issue of the JWSchmidt nomination, which is rather incomprehensible to me and went on for 542 days without the candidate accepting; JWSchmidt now seems to be on an indefinite wiki-break. One of the side-effects of the long Schmidt-candidacy was that nobody else put themselves forward, despite the frequent absence of the two elected bureaucrats. Generally a feeling evolved that the resulting vacuum was not good and WV needed some bureaucrats who are present and active. Both SB_Johnny and Mikeu are very present, active, long-standing members of the community. --McCormack 07:17, 15 March 2008 (UTC) What is a bureaucrat? Wikiversity:Bureaucratship is a fairly useless page, but worth quoting is: "must be excellent judges of consensus... must also have the ability and willingness to thoroughly explain decisions or he or she makes...". Wikiversity talk:Bureaucratship is much more helpful, multi-facetted and exhaustive, illustrating what people here really thought about it. HappyCamper also make some thoughtful comments here. My own 2 cents is this: while there may be a lot of apparently humble talk about serving, tools and no big deals, actually bureaucratship is a very significant thing we all really need to think about, because, as it says everywhere, bureaucrats are the ultimate arbiters of consensus and therefore those with the ability to resolve conflict (if successful) or ruin things (if their interpersonal skills fail). In a healthy community, bureaucrat action will be infrequent, but when they are needed, they must exercise an excellence of judgment which many of us simply do not possess. 
--McCormack 07:17, 15 March 2008 (UTC) - Is it possible to change the title here to perhaps: "Changes in WV participation" or "changes involving WV custodians and bureaucrats" ? Reason: I assume that a title like governance, government could influence people in their thinking already before they continue to read. - We should not build structures as in real life - we are a wiki - we are in a virtual world. We can eliminate the errors which happen in other wikis. Custodians/bureaucrats just have the tools, they should be seen as normal users as everyone else. If there is something like a government structure then there should be also reelections in certain intervals. I don't see such (see also [1]). - "a few" bureaucrats - why this ? Are they special somehow ? They just have some more options, which are less frequently used than the normal custod tools. Why shouldn't it be possible to have many more bureaucrats or making bureaucrat rotation (an option I am playing to introduce in the future at de.WV - I am b.cat at de.WV): giving people more tools helps them educate, getting more responsible, seeing behind the Wikiversity structure - something like: be a custod for x days - aren't there e.g. kiddo days at universities where they see how it looks like, so they get the appetite to chose this path (in the terms of that being custod/b.cat is a job that involves also tiring activities)? see also: Is mentorship the only path to custodianship? - "bureaucrats are the ultimate arbiters of consensus and therefore those with the ability to resolve conflict": this is again when thinking of b.cats as more than described above. It is definitely something which everyone should try to have. ----Erkan Yilmaz Wikiversity:Chat 09:03, 15 March 2008 (UTC) Do you know if there is any page at Wikiversity where people can offer their teaching skills in a certain field or area? After looking for a while at the Community portal and trying with the search tool, I couldn't find this kind of page/list, where I can list myself for helping people with doubts about Spanish language and culture. --Esenabre 06:14, 16 March 2008 (UTC) - Hello Enric, at the Topic:Spanish page there is a box titled "Active Participants" where you could edit info. Also there is this page: Topic:Spanish/Active Participants. If you want you can also contact the other participants (e.g. User:Juan is quite active) to ask them - who knows perhaps they know more ? ----Erkan Yilmaz Wikiversity:Chat 08:20, 16 March 2008 (UTC) Military science and intelligence (the latter broader than military) Is there an existing place for such materials? I saw peace studies, and a reference to international relations. Hcberkowitz 16:51, 17 March 2008 (UTC) - Welcome, with materials you mean you have existing material which you want to add ? - We have e.g. a page where you could start: Military science and also have a look at the Category:Strategic Studies. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 16:59, 17 March 2008 (UTC) Wikiversity google search toolbar add-on Which google search toolbar add-on do you recommend? -- Jtneill - Talk 04:22, 18 March 2008 (UTC) - I tried all three - they don't do much. I really just want to add Wikiversity to my firefox search toolbar which already has wikipedia, google scholar, etc. listed. -- Jtneill - Talk 04:33, 18 March 2008 (UTC) w:Wikiversity Here we go again. The wikiversity article on wp has been tagged as being in need of primary sources. 
Mirwin (aka w:User:Lazyquasar) and I had been working on this at w:User:Mu301/Sandbox because we were getting to much grief w:Talk:Wikiversity#Applicability_Section trying to edit the live page. (I also copied the refs to Wikiversity:In the media. See also Wikiversity in the media.) The article does need some work, though I'm not sure how to improve it. The refs are not the best; some only mention wv in passing. Anyone want to help out on this? --mikeu talk 01:08, 19 March 2008 (UTC) - I did some copyediting, but don't know any sources to add at the moment. Honestly I think we're dealing with trolling in this case, so there's little to be done about it. --SB_Johnny | talk 09:21, 19 March 2008 (UTC) Yikes! Leigh Blackall sent a message to the wikiversity-l list a few days ago asking about differences/similarities between Wikiversity and Wikieducator. But I want to highlight the following mail (which was only sent to the Wikieducator googlegroup). I've responded on wikiversity-l, but I'll repeat here, that it highlights just how badly Wikiversity explains and presents itself in its introductory pages. I'd encourage anyone with time and/or ideas to help out - perhaps revive or replace the decaying Wikiversity:Introduction Overhaul Taskforce. (Btw, I've made a start on Wikiversity:Getting involved, which was one of the pages which I think was cited in that mail.) Cormaggio talk 16:23, 19 March 2008 (UTC) - Please also see Main Page 0.5. --McCormack 08:58, 20 March 2008 (UTC) Development stages I popped over to wikibooks and set up a book. One concept which appealed to me was the notion of indicating the stage of development for each of a book's modules using b:Help:Development_stages in the table of contents. Has something similar been discussed/considered for wv? -- Jtneill - Talk 17:27, 19 March 2008 (UTC) - There is available: Wikiversity:Activity bars - but it seems not used ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:41, 19 March 2008 (UTC) - Also Wikiversity:Percent complete... --Remi 17:50, 19 March 2008 (UTC) Interwiki links I'm looking for a handy list of the prefixes for inter-wiki links. -- Jtneill - Talk 04:29, 20 March 2008 (UTC) - I think this should apply here: w:Help:Interwiki linking#Project titles and shortcuts --Remi 05:05, 20 March 2008 (UTC) Wikiversity Vision 2009 This colloquium has been buzzing recently with new ideas about Wikiversity development - note especially the contributions of Cormaggio (now partly archived) and Jtneill. Some of this has been spilling over increasingly into the mailing list as well. I know there's a dozen threads I'd like to start here as well, but it could reach overload point. So I have created Wikiversity Vision 2009 as a hub for discussion of Wikiversity development. Please visit and contribute! --McCormack 07:23, 22 March 2008 (UTC) Firefox extension for Wikiversity? Is there a firefox extension for wikiversity? e.g., it might add wikiversity toolbar search, bookmarks, etc. -- Jtneill - Talk 04:07, 18 March 2008 (UTC) - We do have a screen saver floating about, but I do not believe there is an extension. FTLD involves a potential extension written for a project. Those may be the closest things we have. --Remi 09:49, 20 March 2008 (UTC) - wonder if it could be a kind of thread betweensites that travelswith you? lucychili 17:19, 22 March 2008 (UTC) What colour would you like Wikiversity to be? The new main page draft layout is pretty well ready now and waiting for feedback. 
As part of this process, I have made it possible for everyone to experiment with colours very easily. Instructions: (1) Copy and paste the following into a new subpage attached to your user page. (2) Modify the numbers and preview the resulting colour schemes until you find one you like. Each of the 8 numbers colours a box. If you find a combination you really like, add a hyperlink to your test page below so that others can view it. Oh - and here's an example. --McCormack 11:44, 20 March 2008 (UTC) Here's another example which colours things by column. --McCormack 15:31, 20 March 2008 (UTC) - Cool. And a link to the colour numbering scheme? -- Jtneill - Talk 14:30, 20 March 2008 (UTC) - User:Jtneill/Main_Page_Layout_0.5a (open checker) - User:Jtneill/Main_Page_Layout_0.5b (open blue) - -- Jtneill - Talk 14:47, 20 March 2008 (UTC) - I prefer User:Jtneill/Main_Page_Layout_0.5b (open blue), plus User:McCormack has done well with the mainpage Layout. Terra Welcome to my talkpage 14:51, 20 March 2008 (UTC) - If that's the case then, it should be your version. And I had another look and it's been blue for a while so it should change. Terra Welcome to my talkpage 15:04, 20 March 2008 (UTC) - Well, my version is rather party-like. With 14 colour schemes available for each box, the choice of possible combinations is vast. Most Wikimedia projects stick with 3 to 4 main colour maximum, so as not to overburden the visual senses of the visitor. I'd like to know what people's favourite colours are, and which combinations they prefer - then perhaps we could tone down the party feel a little, and change them around every few months or so. --McCormack 15:11, 20 March 2008 (UTC) - Red, Black and Green but mainly Red those are what I sometimes prefer, even my signatures on wikimedia's websites. Terra Welcome to my talkpage 15:18, 20 March 2008 (UTC) - Incidentally, the available colour schemes are: - tan - mid green - slate blue - red - mauve - yellow - mid blue - orange - grey-green - sky blue - better mauve - light red - blue - grey - Very Well, Red or grey. Terra Welcome to my talkpage 15:21, 20 March 2008 (UTC) - I think I prefer the plain standard blue. Then what varies is the content - e.g. featured images, etc. It also helps the 'brand' colour to be permanently associated. If the colours change a lot, visitors may be inclined to not be sure which WM project space they're on. However, if party-ish style wins, then I'd suggest a closer comparison with proposed schemes and Wikieducator. -- Jtneill - Talk 15:35, 20 March 2008 (UTC) - I've fiddled around and left it at: User:Cormaggio/Main Page Layout 0.5. Thanks for this McCormack - it's nice to be given a clear model for fiddling. :-) I'll leave other comments on Talk:Main Page 0.5. Cormaggio talk 21:35, 20 March 2008 (UTC) So far I like this color: User:Jtneill/Main Page Layout 0.5b. Perhaps to minimize confusion and same effort for all: could we split the discussion into new pages like color, layout, other things (actually McCormack's idea)? And link them here ? There could be created also a gallery of let's say 10 proposals at start and we can work then from there ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 15:55, 22 March 2008 (UTC) - Please feel free to continue: Main Page 0.5/Color proposals, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 18:37, 22 March 2008 (UTC) Colours of some other Wikimedia projects -: light blue; boring? 
-: interesting use of colour and shape -: slate blue and gold -: subtle; restrained; blue/green -: orange and blue; interesting use of overlapping elements -: colourful and lots of icons - : remarkably similar to the English wikipedia -: green, blue, purple -: these guys are the artwork specialists; tan and blue, with boxes a little like we have at WV. Just wondering where WV is at with regard to functionality for embedding youtube, slideshare, etc.? -- Jtneill - Talk 06:54, 21 March 2008 (UTC) - See MediaWiki Video Policy. --McCormack 07:33, 21 March 2008 (UTC) - You can see enabled extensions at Special:Version. For both above there are extensions available - mw:Extension:SlideShare (beta status) - mw:Extension:EmbedVideo (beta status) - etc. means ? - Both are not enabled at the WV. If you want you could pose a request to get it enabled on the Sandbox Server and to see how it works there, see e.g. next request. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 07:35, 21 March 2008 (UTC) - About Mediawiki extensions: there are loads of extensions at Mediawiki, most of which are experimental/beta, and most of which don't have a hope in hell of ever being activated. Basically, anyone can add an extension to the Mediawiki site (there are no restrictions at all, short of vandalism and the like) and many people do, with bright eyes and high hopes, start programming media extensions, only to find that the Wikimedia Foundation won't activate them. Occasionally extensions are activated, but they get examined very thoroughly first, especially with regard to their likely effects on the database, bandwidth, CPU time, etc. --McCormack 07:39, 21 March 2008 (UTC) - Thanks for the info - I've added the requests to the next request sandbox server]. - For me personally, I mostly use slideshare (for lecture slides and eventually syncced audio) and less often youtube (I've seen an install using VideoFlash with no known problems). Longer than 10 min video I tend to use GoogleVideo, which VideoFlash also handles. - The Google Map extension, similarly, I've seen working over a sustained period with no obvious problems, and it offers obvious educational potential. - Likewise an RSS rendering extension would allow bringing in much rich content and mashing up, especially say feeds of students' blogs for a course. - Google Calendar could for more dynamic management and coordination of learning events, allow students to receive reminders, etc. - As as far as I know, these extensions are all in sufficient beta status to warrant serious testing and probable implementation. IMHO, let's go for it!!! :) -- Jtneill - Talk 10:30, 21 March 2008 (UTC) Wikiversity:Approved_Wikiversity_project_proposal#Mission In this section, Wikiversity/Scope should be changed to Wikiversity:Scope. -- Jtneill - Talk 15:10, 22 March 2008 (UTC) - done + thanks for your great eyes, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 15:12, 22 March 2008 (UTC) Society as ecology project (waiting for development in the incubator) Thoughts welcome - Wikiversity:Project_incubator - Society as ecology - Society as ecology/Brainstorming lucychili 11:49, 23 March 2008 (UTC) Using wikipedia images in wikiversity pages How can this be done? 
I tried here: Linear correlation -- Jtneill - Talk 05:23, 18 March 2008 (UTC) - I tried to two ways to embed - neither work - If the image is on Wikipedia and not at the commons, one way to rememdy the issue is to d/l the image off Wikipedia, upload it to commons, and at least at a minimum just make sure it has the same relavent copyright information and probably the url at Wikipedia at which you got the image. If the image that you see at Wikipedia is on Commons () then just type "[[Image:the name of the image.jpg|thumb|right|300px|a description]]" with whatever parameters work best in your situation. Looks like it doesn't exist on Commons, so you may need to upload it there. --Remi 05:32, 18 March 2008 (UTC) - To make it a bit easier, you can also use commonshelper, which automatically uploads the license tags, links back to the original uploader, etc. --SB_Johnny | talk 10:17, 18 March 2008 (UTC) - Don't forget to tag the old image with {{subst:ncd}} to insert w:Template:NowCommons after the image has been copied. --mikeu talk 17:19, 23 March 2008 (UTC) America While Wikiversity is international, America needs its help more than any other developed nation. It has been more than twenty years now since a US Government report stated that in grades K through 12 America has committed "unilateral intellectual disarmament." Hundreds of books have been published and billions of dollars spent trying to reverse that trend, and today things are even worse than they were twenty years ago. As an American, naturally I would like to see American children given a chance to really learn (especially in my own field, mathematics). But I think everyone in the world would sleep better at night if the most heavily armed nation in the world were not also one of the most ignorant of developed nations. The reason that all efforts to change American education for the better have failed is the local nature of the American school system. A person can work all their life to bring standards to one community. Many people have done this successfully. But it has no effect on the community thirty miles down the road, and after the reformer retires, a new school board usually comes in and changes everything back the way it was. If, after more than twenty years of effort, the public school system has not changed, there is reason to believe that public education in America cannot be fixed. I won't even go into the fact that Blacks and Hispanics are taught even worse than Whites, or that schools are major sources of crime and drugs. The good news is that, in America, no child is required to go to school. Home schooling is allowed, is increasingly popular, and on standardized tests home-schooled students do much better than public school students. Wikiversity obviously serves many home schooled students already, but Wikiversity has not yet caught on. Maybe it never will. There are countless "educational" sites on the web, most of them very bad: boring, stupid, and pointless. A few of them very good, but it is hard for a parent or child to find the good amid the bad. America needs a web school as good as, and as well known as, amazon, ebay, paypal, and Wikipedia. Maybe Wikiversity can become that school. Maybe some other web school will rise to the top. But somebody needs to do something soon. If Wikiversity is to be the web school of choice, it needs: 1) a complete set of K-8 courses that are fun, accurate, and intellectually challenging. 2) a committee to monitor the courses to make sure they stay on track. 
3) interactive courses, where the students are taught in small groups, given tasks to carry out in groups, and receive feedback from the instructor. A parent needs to be confident that if they sit their child down at a computer connected to Wikiversity they can be sure their child is safe and getting good instruction. I also think that team taught courses are a good idea. Wikiversity needs a "school day" that parents can plan around. And, finally, Wikiversity needs publicity, lots of publicity. Thoughts? Comments? Suggestions? Rick Norwood 13:14, 21 March 2008 (UTC) - Well, I don't think we're used by home schoolers that much just yet, but I do think it will probably end up being an early use. In general, we need content to build the community and community to build content (IOW, it's a chicken-and-egg question). Progress has been aggravatingly slow, but promisingly steady... it's going to take a few years though. - K-8 materials pose a different problem: Personally I wouldn't want my children to be using wikis by themselves at that age, due to the (occaisionally very inappropriate) vandalism that's somewhat endemic to wikis. Over time, stable versions of materials could be hosted separately, though this would to some degree mean losing the participatory element of Wikiversity, which is certainly important. On the other hand, there are some interesting textbooks being developed on Wikibooks (as part of the Wikijunior project), and as textbooks these don't really lose anything by being printed out or otherwise moved to a "non-wiki" format. Developing pre-K through 8 materials as a cross-wiki collaboration may be our best approach. --SB_Johnny | talk 13:36, 21 March 2008 (UTC) - Has a relationship between WV and w:OLPC been explored? This could help drive developments in this area. -- Jtneill - Talk 00:45, 22 March 2008 (UTC) - There is: One Laptop Per Teacher: Content and Curriculum for (in-service) Teacher Training - This paper proposes structure and content for in-service training of teachers in the use of "One Laptop Per Teacher", an idea related to One Laptop Per Child. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 08:25, 22 March 2008 (UTC) There have been countless plans, like the OTPC plan, and, as I mentioned, billions of dollars spent. But the schools are still broken and need a replacement. Maybe Wikiversity is it, maybe not. As for providing material kids can trust, as I mentioned above, monitors will be required. A kid is not going to be hurt by an occasional "fuck". Kids today use language that would make a sailor blush. But kids do need protection from sexual predators, and so somebody has to monitor every course, to make sure none of the teachers are getting inappropriately personal. If Wikiversity does not want to get involved in that (and who would) then some other website will need to become the school of choice. Rick Norwood 16:25, 22 March 2008 (UTC) - Well, it's not really "bad language" I was worried about, but point taken. As for monitoring for other sorts of inappropriate behavior: we're probably not so ill-prepared for that, since the Wikipedians have developed a pretty good system over the years to deal with such issues. We do have most of the tools (various kinds of blocks are available, and we have two Checkusers on call, and if we need oversight (a kind of "super deletion" that permanently and completely removed edits that might contain inappropriately personal information), we can easily reach one of the stewards. 
The Wikimedia Foundation also has a legal department available for really bad situations. - I'm not at all sure, however, that Wikiversity (or any website) is ever going to be a "replacement" for schools. Face-to-face human reactions are core elements of an education, and even home schooling involves interaction with the parents. Helping teachers to teach better (whether professionals or parents) will probably be among our core services here, rather than acting as substitutes for them. --SB_Johnny | talk 17:05, 22 March 2008 (UTC) You are probably right, the replacement for schools will probably be a commercial venture, like google or paypal, rather than wikiversity. For the personal interaction part, students should gather in classes and there should be two-way webcams. As for fixing the public schools in America (and, to an ever greater extent, in the UK, though UK schools are still an order of magnitude better than American schools) as I've outlined above, I believe the current system (local control) cannot be changed or fixed, only replaced. Rick Norwood 12:36, 23 March 2008 (UTC) Image caption not working Can anyone work out why my image caption isn't working here: Exploratory_factor_analysis/Lecture_notes#Conceptual_models? -- Jtneill - Talk 07:18, 24 March 2008 (UTC) Special characters I'm looking for how to do special characters. e.g., instead of <---> I'd like to use a single-character symbol. -- Jtneill - Talk 03:27, 25 March 2008 (UTC) - wow, thanks, so does that mean i can do this? ☺ -- Jtneill - Talk 05:36, 25 March 2008 (UTC) Spam Blocker preventing real work This has happened a few times now. Hello geniuses, your spam blocker is stopping me from making legitimate edits. Of COURSE I refer to other pages you jerks! I know you want to stop spam, but you are preventing me from making needed updates and edits by your indiscriminate implimentation. I'm really angry now that I CANNOT do my work because you've automated this into stupidity. Please, use some common sense and stop computer geeking your way into uselessness. Just because there is an outside link does NOT make it spam! TWFred 20:24, 25 March 2008 (UTC) - Hello, could you tell us the IP with which that happens ? Please keep in mind, that bugs may appear at any time - e.g. side effects by other external things (software changes, ...). - Your ton above (e.g. jerks) shows you are angry, please give us detailed info, so we can help you feel comfortable. Help us, helping you. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:30, 25 March 2008 (UTC) - Thx to this info by you this could be fixed preliminary. By removing the link which contains this IP: 209<dot>85<dot>135<dot>104, see [2] - test edit with IP worked afterwards, see [3], ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:39, 25 March 2008 (UTC) Erkan, you are awesome. Thank you very much. Sorry about the anger, but...well, you guys know how it is when work is lost inexplicably. Looks like it's fixed now. TWFred 21:04, 25 March 2008 (UTC) - I hope you did not lose too much data. :-( ----Erkan Yilmaz uses the Wikiversity:Chat (try) 21:53, 25 March 2008 (UTC) a teacher / researcher needs help I'm Jeroen Clemens [4] a teacher in a secondary school in the Netherlands and for one day a week I do research on CSCL ( Computer Supported Collaborative Learning). See my user page for more details. My students have made a WIKI, concerning Enlightment in Philosophy and English and Dutch literature. I'm afraid its, especially the qualitative analysis? 
There must be research projects where they have figured out interesting ways of doing this. Or is there another project in wikiversity were they collaborate on this research issue?? Jeroencl 16:42, 16 March 2008 (UTC) I started a discussion already on the The Wiki_in_education page I continued on a new project Macedonia-Netherlands project in which my students collaborate with students from a school in Macedonia ( in English).Jeroencl 22:21, 16 March 2008 (UTC) - Hi Jeroen - I missed all the previous discussion, because I've been writing a paper for the last week on "Collaborative research in Wikiversity" ;-). All my research is based on qualitative analysis, so I very much want to help here. Basically, the process of qualitative data analysis is, firstly - surprise, surprise - read the data. :-) What is going on here? What are the dynamics of the group? What are people saying/doing, and what are people not saying/doing? Check yourself while doing this - don't jump to conclusions, but hold thoughts as working hypotheses - continually referring to the data to enrich, and self-critique, your developing understanding of the data. Probably the best way to start this would be with a form of "grounded theory" (eg), which is simply a method of building categories, and then meta-categories, while always 'staying close to the data'. In any case, I propose we set this up as a learning-and-research project - how about Enlightenment course research for a start? (Please suggest a better title - or we can always rename it after.) Cormaggio talk 10:58, 18 March 2008 (UTC) - Hi Cormac, I've been busy also with other things, but now I will work on my research again. I'm happy you're interested in this. I Enlightment Course research page would be great. I'm working on e Macedonia-Netherlands WIKI CSCL project also and started a project with a school on Aruba. How shall I start the research page? The Enlightment wiki is in Dutch, so that will be a problem for most of you. But this project is my research task, the other two are sort off spinn-offs. As you can see on my User page, I'm a teacher-researcher very much interested in ICT and education.Jeroencl 12:51, 22 March 2008 (UTC) - Hello Jeroencl, by clicking on the (no more red) link above. I have created it for you and put in a welcome template. Feel free to continue: Enlightenment course research , ----Erkan Yilmaz uses the Wikiversity:Chat (try) 12:54, 22 March 2008 (UTC) - I've actually just completed a short in class presentation on Wikis, and am quite interested in the future of wikis and their impact on education. I'm a strong advocate of the Wiki way and will take a peek at the resources you mention. Welcome aboard! Historybuff 17:09, 26 March 2008 (UTC) RSS Reader testing on Sandbox server Thanks to installation by Darklama, the RSS Reader extension has been installed on the sandbox server. Please come and try it out and discuss: RSS Reader -- Jtneill - Talk 15:30, 26 March 2008 (UTC) Unified Log-in What's all this hyper about the Unified Log-in, there's a lot of discussion going on about the Unified Log-in on Wikipedia, but wondered if any of you knew about it. Terra Welcome to my talkpage 20:06, 26 March 2008 (UTC) - Have a look here and here, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:07, 26 March 2008 (UTC) Image licensing issues: the deadline approaches! 
I was chatting with Mike.lifeguard about this the other night, and he left the following on my Wikibooks talk page: <snip> == Licensing == Here's the Foundation resolution: foundation:Resolution:Licensing policy. It sets out the deadline (o noes!) as well as a relatively clear outline of what must be included in the EDP (= fair use policy). Points of interest: - All media must be free - If they're not, then you must have a fair use policy with certain characteristics: - Media identified in a machine-readable format (that means templates) - EDPs must be minimal, and then sets out what that means - Free alternatives must be used where possible - Anything not in compliance must be deleted - For projects without an EDP currently (that's both here, and WV too) - New media that are not free start getting deleted beginning March 23 2007 <- Wikibooks does this already; WV doesn't, so you're past the deadline by nearly a year. - Put together an EDP if you want one (which presumably WB and WV both do) - Old media must be brought into compliance by March 23 2008 <- that's the deadline I'm talking about for Wikibooks - "This policy may not be circumvented, eroded, or ignored on local Wikimedia projects." (emphasis theirs) So now we have a rule other than common sense that tell us we need to get on this :) – Mike.lifeguard | talk 04:11, 16 March 2008 (UTC) </snip> Just a note: we have a new bot operator on Wikibooks that is dealing with a lot of this. Mike has that script now, and I think it might be a good idea to have him run it so we can see what we're dealing with. --SB_Johnny | talk 10:40, 17 March 2008 (UTC) - This is something that I've done a bit of work on. Wikiversity:Notices_for_custodians#images_and_copyright One of the big problems that I had was that many students from one particular learning project were uploading images faster than I could resolve the issues. Education is key to prevention. There were many images that I found that were copied from wp or commons. Sounds fine from a license point of view, except that the uploaders sometimes did not include the license info, or a link to where they got the image from (no mention of original author), or sometimes even changed the license. I left repeated notes for some of the users and got no response. There were a couple that I was tempted to block for a few days after 3 warnings for copyvio uploads. I support running a bot to help us identify which images we need to cleanup or remove. I also see no problem with requiring license templates on all images. Not only is this machine readable, but the template contains links to the full license terms. Someone downloading from wv needs to know what the terms are, and they need to be correct. There are also problems with our templages User:Mu301/No_thanks. I have corrected some of the links in templates to point to the correct page, but there are probably more. Our exemption policy and use of screenshots needs to be clarified. For a while I was watching the image uploads closely, but it was tedious work and I started to fall behind. Our instructions for uploaders is not very clear. I have seen many instances where people just choose something like GFDL-self for every image they find on the net and upload here. --mikeu talk 13:20, 17 March 2008 (UTC) - We ended up blocking some students on Wikibooks over similar issues, and had immediate and positive results... so certainly you shouldn't feel too shy about doing that if they're just making more work for others. 
Clarifying the instructions would be good, or perhaps even better try to figure out how commons changed the Mediawiki on the sidebar (hitting "upload file" there actually brings you into a tutorial page, which would be much better for us). I personally don't know much about the fair use stuff (seems nonsensical to me), but if we're going to allow it, we should perhaps copy the policies and templates in use on Wikipedia for now, then tweak them if and when it becomes necessary down the line. --SB_Johnny | talk 00:13, 18 March 2008 (UTC) I have imported the policy page to Wikiversity:Licensing policy. Please use the talk page to discuss implementation. --mikeu talk 00:25, 28 March 2008 (UTC) Saturday, 29.3.2008: online meeting of reading group We would like to invite interested persons to our next reading group meeting (Thucydides: The Peloponnesian War) this Saturday at 14:30 UTC. We will begin a new book (see status). The discussion normally takes about 1-2 hours. Even if you can not read the 25 aphorisms before (takes about 20 mins), we can give you a quick intro. My history teacher always told: history is better than e.g. mathematics because you can join at any time. So, why don't you lurk ? Still interested ? Click here. P.S. if the time doesn't fit for you, I am sure we can find another time frame. ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:07, 27 March 2008 (UTC) Sister Projects Interview See w:User:OhanaUnited/Sister Projects Interview for future w:Wikipedia:Wikipedia_Signpost feature. --mikeu talk 21:41, 26 March 2008 (UTC) - Interesting OhanaUnited wants to speak with only one person at a time - Question: Why not do this with more people e.g. in a chat ? We could tell all who have the custodian flag to join at that time. This will ensure that the information seeker will meet all interested people (isn't the goal of the interviewing person to get the best info?). We could even post the chat afterwards. - So, and now another thing: the selection of people who possess the custodian flag is a little - how should I say - you tell me :-) ----Erkan Yilmaz uses the Wikiversity:Chat (try) 22:25, 26 March 2008 (UTC) - So everybody, there is now the chance to present WV in good light :-) User:OhanaUnited/Sister Projects Interview, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 18:53, 27 March 2008 (UTC) - I do have my reason for wanting to chat with one person at a time (and it's a good reason). If 2 interviewees provide contradicting comments, it won't look good on a project when they're actively looking for more participants. And doing it via email provides secrecy so that the interviewee can answer the questions without worrying about screwing up (because they're offered a chance to change their answers without anyone knowing about it) OhanaUnitedTalk page 19:04, 27 March 2008 (UTC) - For me personally that is too much secrecy :-) - All people here can together as unit add the different views (e.g. on that page or in a chat session) and later merge to "one WV voice". Earliest date that page will be published was April 14 ? Are these the only questions or will come more ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 19:27, 27 March 2008 (UTC) - Yes, the earliest date is that, because wikisource took March 31 and meta took April 7. There might be more questions, but I'm not sure. If you guys want to say something outside of these questions, feel free to add them and then I'll add the question that will correspond to your answer. 
OhanaUnitedTalk page 20:48, 27 March 2008 (UTC) your voice counts So, let's start this then: anybody interested in giving his thoughts to the outside world ? How about starting on the talk page ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 00:16, 28 March 2008 (UTC) chat session ? If somebody is interested we could make a chat session ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 00:16, 28 March 2008 (UTC) - possible dates: - Saturday 5th April - ----Erkan Yilmaz uses the Wikiversity:Chat (try) 00:16, 28 March 2008 (UTC) - --mikeu talk 17:16, 29 March 2008 (UTC) - other date ? copied to here, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 18:25, 31 March 2008 (UTC) Request for archiving/moving of Wikiversity:Mission and Wikiversity:Scope Could someone archive much of the material currently at Wikiversity:Mission, so that we can use that page for more clearly presenting the current mission. That way, we can have Wikiversity:Vision, Wikiversity:Mission, Wikiversity:Slogan, Wikiversity:Motto, etc. pages with current info -- Jtneill - Talk 02:44, 23 March 2008 (UTC) - Also, could this be archived, so we can replace it with just the current scope? Wikiversity:Scope. (I would move these pages, but they're protected I think and are notable pages, so would prefer someone more experienced to take a look). -- Jtneill - Talk 12:41, 23 March 2008 (UTC) - My vague thoughts here are: we have so many (very old) introductions and broad statements about Wikiversity on so many disconnected pages that perhaps what we should be doing is reducing and consolidating. If we updated every page of this kind which had ever been created, it would be an unending task. Mind you, I'm not standing in the way of anyone adopting an unending task! Let me hold the door open with flowers to accompany. Suggested procedure: create your new pages as subpages of the original page (e.g. NameOfPage/2.0 ) and then when your new version is complete, a custodian can move them around. Mind you, if you continue contributing at this rate (771 edits in 3 weeks), you'll probably have those tools in your possession before long anyway ;-) --McCormack 08:15, 24 March 2008 (UTC) - I agree with consolidating - I think we have too many introductions to Wikiversity, and would benefit from a smaller number of well-written and well-coordinated pages interlinked from prominent pages. However, I strongly oppose deletion, as some of these pages are important to the history and context of Wikiversity - what I would advise is to archive anything that is redundant, and use them in developing introduction pages, and informing Wikiversity:History of Wikiversity. Cormaggio talk 12:53, 1 April 2008 (UTC) Wiki Research could be Sold for Fundraising) (re-posted from Talk:Research) - Thanks for re-posting here, Gustable (I had missed this comment). But I'm not sure I understand what you're proposing - research done on Wikiversity will be available under the terms of Wikiversity's licence (GFDL). This will be permitted for commercial use (eg selling), and some of this selling could even be done by the WMF (just as German Wikipedia DVDs have been available for sale for a few years now). Is this the kind of thing you're proposing, or something else? Cormaggio talk 13:04, 1 April 2008 (UTC) Summer of Code From Brion Vibber via Planet Wikimedia, I see that Google's annual Summer of Code is open again. It's a genuine opportunity for people interested in MediaWiki to develop a tool that might be of use in Wikimedia. 
There's a page on Wikiversity:Technical needs if anyone needs any ideas, and any work done might be away to revive MediaWiki Project. Even if you couldn't contemplate hacking/developing mediawiki, you could add ideas for nifty or necessary tools here, or on the above pages. Cormaggio talk 10:17, 6 March 2008 (UTC) - Thanks for drawing attention to these pages. I was previously completely unaware of them! One thing I missed: your idea for completely rewriting MediaWiki from scratch. Otherwise the requests look very modest indeed. --McCormack 11:51, 6 March 2008 (UTC) - I was hoping you or someone else would shake them up! I can't really think what else WV needs at the moment (other than things which would totally violate Wikimedia policy; and I'm not really in favour of doing that). My own imagination, within the constraints set by Wikimedia policy, is a little limited. On the other hand, I do have the technical know-how to comment on feasibility of ideas and perhaps help implement them. --McCormack 18:10, 6 March 2008 (UTC) - Hmm, well, I think there are plenty of things on Wikiversity:Technical needs that would be really cool to have, but apart from the wiki<->pdf converter, I don't know of anything else being done. Actually, looking again makes me realise they aren't that modest - if we had a really good search mechanism (better than Mayflower on Commons), a malleable tagging/metadata system (interfaceable with other (OER) sites' systems), a way for people to synch files on their desktop with pages in Wikiversity, a way for allowing more rich, 'Flash-like' interfaces - I think we would have made more than significant improvements to Wikiversity. :-) Those are practical things that, if someone would start, I hope they would find support from within the Wikimedia/Mediawiki developer community (a community I would like to reach out to more). My own imagination is limited by the technical end of things - I literally haven't a clue what is possible (or, at least, realistic) - and I hope someone more technically clueful can contribute to my/our understanding. (Please consider this conversation an attempt to figure out what is indeed possible, and how we might be helped or limited.) So, what in Wikimedia policy is limiting you? (I'm guessing it might be Flash-related - but there may be other potential solutions if this conversation was widened slightly - involving developers and the wider Wikimedia community.) Cormaggio talk 11:41, 7 March 2008 (UTC) I had a go at Wikiversity:Technical needs. People may wish to inspect the damage. --McCormack 07:24, 9 March 2008 (UTC) - On the other side, how do students get SoC projects? Does one apply directly to Google, or to the mentor org? Seems I might be eligible, and I couldn't think of a better place to help, if it's possible and allowed by the rules. :) Historybuff 06:22, 10 March 2008 (UTC) - Anyone's eligible, as far as I know. Though Brion said in his blog post that Wikimedia has had "limited success" because of a shortage of mentors. If you are interested, I would contact Brion about it - and I would imagine it might be an idea to first have some sense of what you'd be interested in doing. But it'd be really cool if you were to do something for this! Cormaggio talk 23:06, 14 March 2008 (UTC) - Thanks for the response Cormaggio. Is there anything that WV needs done, specifically? The suggestions one the MW page have a video project that might be interesting. There are a few Mentoring organizations that look interesting. 
I'm hoping to get accepted by one of these projects, and if I am, I'll be documenting the experience here at WV. Historybuff 19:40, 18 March 2008 (UTC) Ok, I've taken a look at Wikiversity:Technical needs and there is some good meaty stuff there. I'm glad to help refine this down. Now, Brion has indicated that the core team will be limited in the number of mentors that they can make available -- is anyone here up to the task? We should also be working on promoting SoC in our daily work -- as it's a tremendous opportunity to bring new people on board into Open Source while giving them some cash for the deal. Historybuff 20:05, 22 March 2008 (UTC) - Msg now also at beta + de.WV about it, application period for students is from 24.-31.3. - besides Wikiversity:Technical needs is also listed at the project page. We should do a prio of the features also (see Wikiversity:Vision_2009#Technology_goals). - If there is a shortage in mentors there, why don't we use this as opportunity ? e.g. motivate some of us to help in a 24h-service by rotating shifts and listing the progress/communication on a page here at WV, so the others have all info which is talked (in email or IRC) ? Who is interested ? ----Erkan Yilmaz uses the Wikiversity:Chat (try) 20:37, 22 March 2008 (UTC) I am willing to take on an advisory role, but as I'm going to apply, I can't also be a mentor in this go (Google's rules). I'm also going to take a peek at the todo lists above, and see if I can put together one application for Mediawiki work. Historybuff 17:04, 26 March 2008 (UTC) - Ok, as the deadline has been extended, I'm going to try and pursue a proposal for GSoC for the WV technical needs stuff. Maybe we can clean it up a bit, and isolate what still needs doing? Historybuff 06:15, 2 April 2008 (UTC) meetups There's a page at w:Meetup for scheduling in person meetings of wiki editors. A post here prompted me to see if there is any interest in holding a meetup in my local area. So, I created a page to plan this. Any interest in creating some kind of meetup coordination page here? --mikeu talk 16:30, 16 March 2008 (UTC) - pro: French + German Wikiversity have also such pages. ----Erkan Yilmaz Wikiversity:Chat 16:34, 16 March 2008 (UTC) - Me as a dutchman will have a hard time to arrange meetups. I seem to be the only active one around. Perhaps i can start a Dutch Wikiversity and see who will come by to join. The good thing about a Dutch Wikiversity would be that it is easy to meet in a small area. Only Dutch and Belgians will come.--Daanschr 17:38, 16 March 2008 (UTC) - Another option: if there is not much interest in the Dutch WV, joining Dutch WP meetings and see if someone is willing to participate in Dutch WV ? ----Erkan Yilmaz Wikiversity:Chat 17:48, 16 March 2008 (UTC) - (cut and paste, after edit conflict) That is why I started my meetup planing page at wikipedia. I don't even know if there are any wikiversity editors within a reasonable distance of my area. I may also post a note at commons or meta. Perhaps you could ask around at beta or Dutch wikipedia? --mikeu talk 17:51, 16 March 2008 (UTC) - See Wikiversity:Meetup. --mikeu talk 02:54, 17 March 2008 (UTC) - I guess it is better to first wait untill there are Dutch Wikiversitans. I just looked at the meetups of Wikipedia and i don't think that i can get participants for Wikiversity from there. There only a few people coming at such meetups. There is a Dutch Wikimedia Union which seems to be very dull. 
What i like is that people have a certain creativity, to imagine and cange something.Daanschr 08:13, 17 March 2008 (UTC) - Daan, I know plenty of Dutch Wikimedians who are interested in Wikiversity, and possibly interested in setting up a Dutch Wikiversity. And they're all great people. ;-) I'll add links on your talk page... - Re:Meetups in general - yeah, Wikipedia is almost certainly the most active place to organise meetups - but it might be best to organise these on meta (where email notification is turned on). Cormaggio talk 10:04, 18 March 2008 (UTC) - I think meetups are a fantastic idea, but our user community is still quite geographically dispersed and it's unlikely we could muster a fair sized group except by lots of travel. However, we could always do a virtual meetup, or, as a part of Wikiversity outreach, a real meeting with either Wikipedia or academic people to make them more aware of us. Someone even printed buttons and flyers, I think we should revive them! :) Historybuff 06:18, 2 April 2008 (UTC) RSS for this page? The RSS feed listed for this page at the top seems to have postings from 2007, i.e., not current? -- Jtneill - Talk 13:57, 18 March 2008 (UTC) - And so it does. I'm wondering if we might have a borked rss extension, or something to that effect? I will do a little research into this. Historybuff 19:16, 18 March 2008 (UTC) It seems that RSS is still busted for this page. Does anyone know how this works? If not, I'll put it on the WV tech needs page. Historybuff 06:21, 2 April 2008 (UTC) One of the big questions at WV is how we should structure our content, and whether or not university-like faculties are used. The site redesign of 3 months ago highlighted the existing portals structure (right-hand side of main page), with positive comments that there should be main page navigation at all, but also comments that this looked too much like a university. Oddly, only two out of the ten subject-related portals call themselves "faculties", and even here, the university metaphor is otherwise almost absent. In fact, it's striking how little like university faculties these portal pages are. Anyway, I think we have to start structuring content better (and with multiple alternative paths). The best organisation of content which I have seen on any Wikimedia project is Wikimedia Commons. This would require quite some rethinking to apply to Wikiversity, but it is perhaps a model worth taking as a basis. --McCormack 17:43, 20 March 2008 (UTC) - What about User:Terra/Portal as a Quick Portal, it includes links to different area's I could add more. Terra Welcome to my talkpage 18:45, 20 March 2008 (UTC) - Hi Terra. For the kind of thing you are currently building, it might be useful to look at Template:About Wikiversity, Template:Using Wikiversity, Template:Administering Wikiversity and Template:Learning innovation. In this thread I was looking more specifically at the issue of locating content on WV (i.e. actual learning objects, in principle via such means as school subject categories, university faculty categories, topics, etc.). Currently I'm experimenting with <categorytree>. --McCormack 18:50, 20 March 2008 (UTC) - Ok, I'll look through the links, you've provided at least this is keeping occupied which I like. 
Terra Welcome to my talkpage 18:53, 20 March 2008 (UTC) - I've had a look at it, and seems interesting, however would this User:Terra/Help Contents be a possible replacement someday on the official one Help:Contents the old one could be transfered as a site map and the new one could be put into use, or I could improve it a bit more depending what user's say, the original template is on Wikipedia's Help Content but theirs is easier to Navigate around the site and decided to do one on this one however I created it in my subpage just as a Pre-caution, but am wondering if this could be put into some use or leave it in my subpage for the time being. Terra Welcome to my talkpage 19:06, 20 March 2008 (UTC) - Hi Terra. Yes - I saw you working on that earlier today. Looks like a good page. The templates I made are for putting onto other pages, but we also need a "real" page that guides through all this stuff, such as the one you are making. It would be good to coordinate the content of the templates with your help contents page. --McCormack 19:44, 20 March 2008 (UTC) - What are suggesting, shall I create more templates to improve Wikiversity or possibly transfer the User:Terra/Help Contents and replace the current one and move the original page and rename it as a sitemap. Terra Welcome to my talkpage 19:50, 20 March 2008 (UTC) - I've included some of the template links you've provided and placed the links in User:Terra/Help Contents which the templates themselves have links to various pages throughout Wikiversity I thought it maybe useful to include it. Terra Welcome to my talkpage 20:23, 20 March 2008 (UTC) - I'd like us to take another look at this as well: Wikiversity_learning_model/Discussion_group#A_4-pronged_approach. Countrymike 19:26, 20 March 2008 (UTC) - I've read the topic which Countrymike provided but it doesn't make sense to me, up until recently I've been active on Wikiversity a lot more, but aren't aware what the topic is about. Terra Welcome to my talkpage 19:42, 20 March 2008 (UTC) - Now that I've been here a year, worked with, added, and forgotten (!) about content, my first impression remains: It's still difficult to find things at Wikiversity. - We, for good or bad, have a wiki mindset, and we can't expect visitors to have one, or to become immersed in one. Wikipedia doesn't force a wiki mindset on people, and neither should we, realistically, if we want to be successful. We have to remember our purpose is to connect learners to learning materials. If we are losing learners by being difficult to navigate, well, frankly, we're failing. - Portals are good if you know what the heck a portal is. I've been on the Internet since before the WWW was invented and I still only have a fuzzy idea of what a portal is. So, while portals might be good for Wikipedians or returning Wikiversitians that have a clue, there are a vast army of learners that don't even want to know what a portal is. - I'm not attacking the portals, btw. I'm trying to draw attention to the fact that we really need to figure out how novice users can make use of WV. Once novices become interested in what we have, then they might start to look around, and bingo -- portals come into play. But we should never lose sight of someone's first visit here, but it is easy enough to do. Historybuff 06:56, 2 April 2008 (UTC) Colloquium - evolution? This "Colloquium" works obviously - people use it well. However, it is messy, long, and I think confronting for new users. 
I just noticed that wikimania2008 has a rather nice clean looking/feeling forum - Wikimania Community Portal. Perhaps something like that could be next evolution of the Colloquium? -- Jtneill - Talk 00:41, 22 March 2008 (UTC) - Well, the "community portal" is nice (I'm rather fond of a more spartan decor for main pages), but I'm not quite sure about breaking up the colloquium yet. The high level of things going on here is actually a fairly recent phenomenon, possibly related to spring energies among us northern hemispherites :). Wikibooks broke up the "Staff Lounge" (since renamed "Reading Rooms") some time ago, and at least for me it hasn't been as good as the single forum (too many pages to keep track of, not enough time in the day). JWSchmidt at one point was trying to get something called "liquid thread" enabled, but I'm not sure what that is (or what the devs think about it). --SB_Johnny | talk 15:38, 22 March 2008 (UTC) - I can see your point - it could easily kill discussion off to break it up prematurely. Perhaps it works to have the general colloquium, from which specific topics can spin off if they get particularly active (e.g., Wikiversity:Vision 2009). Multiple discussion places could also detract from encouraging people into talking on learning project pages, which in general I think should be encouraged. Nevertheless, worth looking more closely at how other projects are going with multi-room discussion fora. Here is another one: Wikibooks:Reading room. -- Jtneill - Talk 11:13, 23 March 2008 (UTC) - Just noticed that commons:Commons:Village_pump uses a general forum, but it looks neater than here because its headings are structured by day (based on initial posting date). What do others think about this as a way to keep a single forum, but make it a little tidier? -- Jtneill - Talk 06:17, 24 March 2008 (UTC) - Have you tried the enhanced talk gadget? See this screenshot for an example. Go to My Preferences and select Gadgets tab to enable. --mikeu talk 11:49, 24 March 2008 (UTC) - Colloquium actually has a dedicated bot that sweeps up postings that are older then a certain date (thanks Sebmol :)). The trick is to get the balance right, you don't want stuff hanging around too long, but when people are away, you don't want the cupboard looking too bare. If it's getting too cluttered, I can pick up the archiving pace a bit and see if that helps short term. - Liquid threads was originally something to allow cross page discussion, if I read the docs right. We were looking for something like liquid threads, that would create collapse/expandable topics. My suggestion is to add it to WV tech needs, and maybe it'll get considered there. Historybuff 07:04, 2 April 2008 (UTC) transwiki import I just imported Speech Fundamentals from wb. It was mentioned in irc that Transwiki should be a real namespace instead of pseudo. That way when importing you can select to put pages in that namespace, among other reasons. I would also like to look at adding WP as a place to import from. Or better yet, just add "upload import" as described at meta:Help:Import. Comments, ideas, suggestions? --mikeu talk 00:11, 14 March 2008 (UTC) - pro - would certainly help: Wikiversity:Wikimedia Garbage Detail, ----Erkan Yilmaz Wikiversity:Chat 16:42, 14 March 2008 (UTC) - At least add en.Wikisource please. I just spent a lot of time importing from xml - but that seems not to work good with many revisions (e.g. 
server errors during upload or the files don't have all revisions) - If possible enable it for all English projects (English Wikisource, English Wikipedia, English Wiktionary, English Wikiquotes, etc) ----Erkan Yilmaz uses the Wikiversity:Chat (try) 03:07, 25 March 2008 (UTC) - Yup, to both - Transwiki namespace is a very useful thing to have (we've had it on wikibooks for quite some time, and it makes importing from encyclopedias to subpages a lot easier), and there are certainly plenty of good things we could import from wikipedia given the tools --SB_Johnny | talk 16:50, 14 March 2008 (UTC) - Any wiki with transwiki import enabled should have a Transwiki: namespace; I don't know why that's not the case here. I believe that upload import will not be enabled on WMF wikis because it is insecure (the xml can be edited), but transwiki import would be enabled upon request (with consensus shown). I'd suggest enabling import from Wikipedia, and Meta at a minimum. – Mike.lifeguard | @en.wb 03:00, 21 March 2008 (UTC) - Support. --McCormack 20:45, 4 April 2008 (UTC) As it seems we can now import from: Wikipedia, Wikibooks, betawikiversity, Wikiquote and Wikisource but lost import possibility from meta and incubator in the update :-) see Special:Import, ----Erkan Yilmaz uses the Wikiversity:Chat (try) 17:17, 15 April 2008 (UTC)
https://en.wikiversity.org/wiki/Wikiversity:Colloquium/archives/March_2008
CC-MAIN-2019-43
refinedweb
11,731
59.74
I am attempting to write the entire row of a .csv file that has been found modified via a For, In loop. I have managed to get the program to write the keys to a csv file but I can't get it to write the associated values along with those keys. I'm unsure if I need another loop nested in or if I am just making a mistake with the syntax.

import csv

def make_billing_dict(csv_dict_reader):
    bdict = {}
    for entry in csv_dict_reader:
        key = entry['BillingNumber']
        bdict[key] = entry
    return bdict

with open('old/MTBT.txt') as csv_file:
    old = csv.DictReader(csv_file)
    old_bills = make_billing_dict(old)

with open('new/MTBT.txt') as csv_file:
    new = csv.DictReader(csv_file)
    new_bills = make_billing_dict(new)

diff = file("diff/diff.csv", "wb" )
writer = csv.writer(diff)

for keys in old_bills:
    if old_bills[keys]['CustomerName'] != new_bills[keys]['CustomerName'] or old_bills[keys]['IsActive'] != new_bills[keys]['IsActive'] or old_bills[keys]['IsPayScan'] != new_bills[keys]['IsPayScan']:
        writer.writerow([keys])
        #Used to send records to the console
        # print (new_bills[keys]['BillingNumber'],new_bills[keys]['CustomerName'],new_bills[keys]['IsActive'],new_bills[keys]['IsPayScan'],new_bills[keys]['IsCreditHold'],new_bills[keys]['City'],new_bills[keys]['State'])

#print set(new_bills.keys()) - set(old_bills.keys())

Also, I have noticed that if I add a new key and entry into the new file it causes no problems but if I remove a record it causes a key error. I understand why, because it's looking for a key that's in the original dict but is there any way around this issue?
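One possible way to address both points (writing the complete row rather than just the key, and tolerating billing numbers that no longer exist in the new file) is sketched below. This is not from the original post: it reuses old_bills, new_bills and the new reader from the question, assumes Python 2.7 to match the question's file() and 'wb' usage, and the watched column names are simply the ones the question already compares.

import csv

WATCHED_FIELDS = ['CustomerName', 'IsActive', 'IsPayScan']

with open('diff/diff.csv', 'wb') as diff_file:
    writer = None
    for key, old_row in old_bills.items():
        new_row = new_bills.get(key)
        if new_row is None:
            # the billing number was removed from the new file; skip it
            # (or write old_row somewhere else) instead of raising KeyError
            continue
        if any(old_row[f] != new_row[f] for f in WATCHED_FIELDS):
            if writer is None:
                # DictWriter writes whole rows; reuse the column order of the new file
                writer = csv.DictWriter(diff_file, fieldnames=new.fieldnames)
                writer.writeheader()
            writer.writerow(new_row)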
http://www.dlxedu.com/askdetail/3/775b61fb08b5bc7d63a75f5b6d860d7a.html
CC-MAIN-2018-39
refinedweb
249
51.14
Hopefully this tutorial will get one or two additional developers interested in unit testing – which would make the effort worthwhile. Last time I introduced the very basics of a test – how it is written, executed and evaluated. While doing so I outlined that a test is more than a simple verification machine and can also serve as a kind of low level specification. Therefore it should be developed with the highest possible coding standards one could think of. This post will continue with the tutorial's example and work out the common structure that characterizes well written unit tests, using the nomenclature defined by Meszaros in xUnit Test Patterns [MES].

The Four Phases of a Test

A tidy house, a tidy mind
Old Adage

The tutorial's example is about writing a simple number range counter, which delivers a certain amount of consecutive integers, starting from a given value. Beginning with the happy path, the last post's outcome was a test which verified that the NumberRangeCounter returns consecutive numbers on subsequent invocations of the method:

@Test
public void subsequentNumber() {
  NumberRangeCounter counter = new NumberRangeCounter();

  int first = counter.next();
  int second = counter.next();

  assertEquals( first + 1, second );
}

Note that I stick with the JUnit built-in functionality for verification in this chapter. I will cover the pros and cons of particular matcher libraries (Hamcrest, AssertJ) in a separate post. The attentive reader may have noticed that I use empty lines to separate the test into distinct segments and probably wonders why. To answer this question let us look at each of the three sections more closely:

- The first one creates an instance of the object to be tested, referred to as SUT (System Under Test). In general this section establishes the SUT's state prior to any test-related activities. As this state constitutes a well-defined test input, it is also denoted as the fixture of a test.
- After the fixture has been established it is about time to invoke those methods of the SUT which represent a certain behavior the test intends to verify. Often this is just a single method and the outcome is stored in local variables.
- The last section of the test is responsible for verifying whether the expected outcome of a given behavior has been obtained. Although there is a school of thought propagating a one-assert-per-test policy, I prefer the single-concept-per-test idea, which means that this section is not limited to just one assertion, as happens to be the case in the example [MAR1].

This test structure is very common and has been described by various authors. It has been labeled as the arrange, act, assert [KAC] – or build, operate, check [MAR2] – pattern. But for this tutorial I like to be precise and stick with Meszaros' [MES] four phases called setup (1), exercise (2), verify (3) and teardown (4).

- The teardown phase is about cleaning up the fixture in case it is persistent. Persistent means the fixture or part of it would survive the end of a test and might have a bad influence on the results of its successor. Plain unit tests seldom use persistent fixtures, so the teardown phase is – as in our example – often omitted. And as it is completely irrelevant from the specification angle, we like to keep it out of the test method anyway. How this can be achieved is covered in a minute.

Due to the scope of this post I avoid a precise definition of a unit test.
But I hold on to the three types of developers' tests Tomek Kaczanowski describes in Practical Unit Testing with JUnit and Mockito, which can be summarized as:

- Unit tests make sure that your code works and have to run often and therefore incredibly quickly. Which is basically what this tutorial is all about.
- Integration tests focus on the proper integration of different modules, including code over which developers have no control. This usually requires some resources (e.g. database, filesystem) and because of this the tests run more slowly.
- End-to-End tests verify that your code works from the client's point of view and put the system as a whole to the test, mimicking the way the user would use it. They usually require a significant amount of time to execute themselves.
- And for an in-depth example of how to combine these testing types effectively you might have a look at Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce.

But before we go ahead with the example there is one question left to be discussed:

Why is this Important?

The ratio of time spent reading (code) versus writing is well over 10 to 1…
Robert C. Martin, Clean Code

The purpose of the four phases pattern is to make it easy to understand what behavior a test is verifying. Setup always defines the test's precondition, exercise actually invokes the behavior under test, verify specifies the expected outcome and teardown is all about housekeeping, as Meszaros puts it. This clean phase separation signals the intention of a single test clearly and increases readability. The approach implies that a test verifies only one behavior for a given input state at a time and therefore usually does without conditional blocks or the like (Single-Condition Test). While it is tempting to avoid tedious fixture setup and test as much functionality as possible within a single method, this usually leads to some kind of obfuscation by nature. So always remember: A test, if not written with care, can be a pain in the ass regarding maintenance and progression. But now it is time to proceed with the example and see what this new knowledge can do for us!

Corner Case Tests

Once we are done with the happy path test(s) we continue by specifying the corner case behavior. The description of the number range counter states that the sequence of numbers should start from a given value. Which is important as it defines the lower bound (one corner…) of a counter's range. It seems reasonable that this value is passed as a configuration parameter to the NumberRangeCounter's constructor. An appropriate test could verify that the first number returned by next is equal to this initialization:

@Test
public void lowerBound() {
  NumberRangeCounter counter = new NumberRangeCounter( 1000 );

  int actual = counter.next();

  assertEquals( 1000, actual );
}

Once again our test class does not compile. Fixing this by introducing a lowerBound parameter to the counter's constructor leads to a compile error in the subsequentNumber test. Luckily the latter test has been written to be independent of the lower bound definition, so the parameter can be used by the fixture of this test, too. However the literal number in the test is redundant and does not indicate its purpose clearly. The latter is usually denoted as a magic number. To improve the situation we could introduce a constant LOWER_BOUND and replace all literal values.
Here is what the test class would look like afterwards:

public class NumberRangeCounterTest {

  private static final int LOWER_BOUND = 1000;

  @Test
  public void subsequentNumber() {
    NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND );

    int first = counter.next();
    int second = counter.next();

    assertEquals( first + 1, second );
  }

  @Test
  public void lowerBound() {
    NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND );

    int actual = counter.next();

    assertEquals( LOWER_BOUND, actual );
  }
}

Looking at the code one may notice that the fixture's in-line setup is the same for both tests. Usually an in-line setup is composed of more than a single statement, but there are often commonalities between the tests. To avoid redundancy the things in common can be delegated to a setup method:

public class NumberRangeCounterTest {

  private static final int LOWER_BOUND = 1000;

  @Test
  public void subsequentNumber() {
    NumberRangeCounter counter = setUp();

    int first = counter.next();
    int second = counter.next();

    assertEquals( first + 1, second );
  }

  @Test
  public void lowerBound() {
    NumberRangeCounter counter = setUp();

    int actual = counter.next();

    assertEquals( LOWER_BOUND, actual );
  }

  private NumberRangeCounter setUp() {
    return new NumberRangeCounter( LOWER_BOUND );
  }
}

While it is debatable if the delegate setup approach improves readability for the given case, it leads to an interesting feature of JUnit: the possibility to execute a common test setup implicitly. This can be achieved with the annotation @Before applied to a public, non-static method that has no return value and no parameters. Which means this feature comes at a price. If we want to eliminate the redundant setUp calls within the tests we have to introduce a field that takes the instance of our NumberRangeCounter:

public class NumberRangeCounterTest {

  private static final int LOWER_BOUND = 1000;

  private NumberRangeCounter counter;

  @Before
  public void setUp() {
    counter = new NumberRangeCounter( LOWER_BOUND );
  }

  @Test
  public void subsequentNumber() {
    int first = counter.next();
    int second = counter.next();

    assertEquals( first + 1, second );
  }

  @Test
  public void lowerBound() {
    int actual = counter.next();

    assertEquals( LOWER_BOUND, actual );
  }
}

It is easy to see that implicit setup can remove a lot of code duplication. But it also introduces a kind of magic from the viewpoint of a test, which can make it difficult to read. So the clear answer to the question 'Which kind of setup type should I use?' is: it depends… As I usually pay attention to keep units/tests small, the trade-off seems acceptable. So I often use the implicit setup to define the common/happy path input and supplement it accordingly by small in-line/delegate setup for each of the corner case tests. Otherwise, as in particular beginners tend to let tests grow too large, it might be better to stick with in-line and delegate setup first. The JUnit runtime ensures that each test gets invoked on a new instance of the test's class. This means the constructor-only fixture in our example could omit the setUp method completely. Assignment of the counter field with a fresh fixture could be done implicitly:

private NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND );

While some people use this a lot, other people argue that a @Before annotated method makes the intention more explicit.
Well, I would not go to war over this and leave the decision to your personal taste…

Implicit Teardown

Imagine for a moment that NumberRangeCounter needs to be disposed of for whatever reason. Which means we have to append a teardown phase to our tests. Based on our latest snippet this would be easy with JUnit, as it supports implicit teardown using the @After annotation. We would only have to add the following method:

@After
public void tearDown() {
  counter.dispose();
}

As mentioned above teardown is all about housekeeping and adds no information at all to a particular test. Because of this it is very often convenient to perform this implicitly. Alternatively one would have to handle this with a try-finally construct to ensure that teardown is executed, even if a test fails. But the latter usually does not improve readability.

Expected Exceptions

A particular corner case is testing expected exceptions. Consider for the sake of the example that NumberRangeCounter should throw an IllegalStateException if a call of next exceeds the amount of values for a given range. Again it might be reasonable to configure the range via a constructor parameter. Using a try-catch construct we could write:

@Test
public void exeedsRange() {
  NumberRangeCounter counter = new NumberRangeCounter( LOWER_BOUND, 0 );

  try {
    counter.next();
    fail();
  } catch( IllegalStateException expected ) {
  }
}

Well, this looks somewhat ugly as it blurs the separation of the test phases and is not very readable. But since Assert.fail() throws an AssertionError it ensures that the test fails if no exception is thrown. And the catch block ensures that the test completes successfully in case the expected exception is thrown.

With Java 8 it is possible to write cleanly structured exception tests using lambda expressions. For more information please refer to Clean JUnit Throwable-Tests with Java 8 Lambdas.

If it is enough to verify that a certain type of exception has been thrown, JUnit offers implicit verification via the expected method of the @Test annotation. The test above could then be written as:

@Test( expected = IllegalStateException.class )
public void exeedsRange() {
  new NumberRangeCounter( LOWER_BOUND, ZERO_RANGE ).next();
}

While this approach is very compact it can also be dangerous. This is because it does not distinguish whether the given exception was thrown during the setup or the exercise phase of a test. So the test would be green – and hence worthless – if accidentally an IllegalStateException were thrown by the constructor. JUnit offers a third possibility for testing expected exceptions more cleanly, the ExpectedException rule. As we have not covered Rules yet and the approach twists the four phase structure a bit, I postpone the explicit discussion of this topic to a follow-up post about rules and runners and provide only a snippet as a teaser:

public class NumberRangeCounterTest {

  private static final int LOWER_BOUND = 1000;

  @Rule
  public ExpectedException thrown = ExpectedException.none();

  @Test
  public void exeedsRange() {
    thrown.expect( IllegalStateException.class );

    new NumberRangeCounter( LOWER_BOUND, 0 ).next();
  }

  [...]
}

However if you do not want to wait you might have a look at Rafał Borowiec's thorough explanations in his post JUNIT EXPECTEDEXCEPTION RULE: BEYOND BASICS.

Conclusion

This chapter of JUnit in a Nutshell explained the four phase structure commonly used to write unit tests – setup, exercise, verify and teardown.
It described the purpose of each phase and emphasized how it improves the readability of test cases when used consistently. The example deepened this learning material in the context of corner case tests. It was hopefully well-balanced enough to provide a comprehensible introduction without being trivial. Suggestions for improvements are of course highly appreciated. The next chapter of the tutorial will continue the example and cover how to deal with unit dependencies and test isolation, so stay tuned.

References

- [MES] xUnit Test Patterns, Chapter 19, Four-Phase Test, Gerard Meszaros, 2007
- [MAR1] Clean Code, Chapter 9: Unit Tests, page 130 et seqq, Robert C. Martin, 2009
- [KAC] Practical Unit Testing with JUnit and Mockito, 3.9. Phases of a Unit Test, Tomek Kaczanowski, 2013
- [MAR2] Clean Code, Chapter 9: Unit Tests, page 127, Robert C. Martin, 2009
https://www.javacodegeeks.com/2014/08/junit-in-a-nutshell-test-structure.html
CC-MAIN-2017-22
refinedweb
2,315
51.58
Description of problem:
When rhevm-log-collector is run for multiple hypervisor hosts and invokes ssh with the "-t" flag, it results in a garbled TTY requiring "stty sane" to be run to get it back.

Version-Release number of selected component (if applicable):
rhevm-log-collector-3.2.2-4.el6ev.noarch
rhevm-log-collector-3.2.2-6.el6ev.noarch
rhevm-log-collector-3.1.0-10.el6ev.noarch

How reproducible:
I was able to reproduce this 100% of the time if gathering more than one host. A single host does not exhibit this problem.

Steps to Reproduce:
1. This problem depends on bug 1010472, which adds the "-t" flag to ssh. The patch looks like this:

--- rhevm-log-collector.orig 2013-10-07 10:40:57.000000000 -0400
+++ rhevm-log-collector 2013-12-12 12:59:03.552000942 -0500
@@ -503,6 +503,10 @@
     def format_ssh_command(self, cmd="ssh"):
         cmd = "/usr/bin/%s " % cmd

+        # add a controlling tty
+        if cmd.startswith("/usr/bin/ssh"):
+            cmd += "-t "
+
         if "ssh_port" in self.configuration:
             port_flag = "-p" if cmd.startswith("/usr/bin/ssh") else "-P"
             cmd += port_flag + " %(ssh_port)s " % self.configuration

2. With that patch in place, run 'rhevm-log-collector collect' in an environment with more than one host

Actual results:
The command will succeed, but the TTY will become garbled during or just after the collection of data from the hosts. The only recovery is to run 'stty sane' or 'reset' on the TTY where the 'rhevm-log-collector' was invoked.

Expected results:
The command succeeds and the TTY is sane.

Hi James, thanks for your analysis of the issue.
(In reply to James W. Mills from comment #0)
Well, the logging module is supposed to be thread safe: I don't think it's due to the logging module.

There are other ssh calls after that one, so better move the call to stty sane after them.

merged on upstream master for 3.5.0, pushed to 3.4.0 branch.
merged on 3.4.0 branch

ok, beta3 with 2 hosts...

# engine-log-collector
INFO: Gathering oVirt Engine information...
INFO: Gathering PostgreSQL the oVirt Engine database and log files from localhost...
Please provide the REST API password for the admin@internal oVirt Engine user (CTRL+D to skip):
About to collect information from 2 hypervisors. Continue? (Y/n): Y
INFO: Gathering information from selected hypervisors...
INFO: collecting information from 10.34.63.222
INFO: collecting information from 10.34.66.81
INFO: finished collecting information from 10.34.63.222
INFO: finished collecting information from 10.34.66.81
Creating compressed archive...
INFO: Log files have been collected and placed in /tmp/sosreport-LogCollector-20140225092540.tar.xz.
The MD5 for this file is 24c9bcc187d96d7ef818005f4d1ed055 and its size is 36.2.
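To illustrate the direction suggested in the comments above (keep the "-t" flag but reset the local terminal once all remote calls have finished), here is a minimal sketch. It is not the actual rhevm-log-collector code: the helper, the host list and the remote command are hypothetical stand-ins, and the only point being shown is the placement of the "stty sane" call after the ssh invocations.

import subprocess
import sys

def collect_from_hosts(hosts):
    for host in hosts:
        # "-t" forces a pseudo-tty for the remote command, which is what can
        # leave the local terminal in a raw, garbled state afterwards
        subprocess.call(["ssh", "-t", host, "sosreport", "--batch"])

    # reset the terminal once, after the last ssh call, as suggested in the bug
    if sys.stdin.isatty():
        subprocess.call(["stty", "sane"])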
https://partner-bugzilla.redhat.com/show_bug.cgi?id=1041749
CC-MAIN-2020-05
refinedweb
462
60.82
Thread.Join Method (TimeSpan)

Blocks the calling thread until a thread terminates or the specified time elapses, while continuing to perform standard COM and SendMessage pumping.

Assembly: mscorlib (in mscorlib.dll)

Parameters
- timeout - Type: System.TimeSpan. A TimeSpan set to the amount of time to wait for the thread to terminate.

Return Value
- Type: System.Boolean. true if the thread terminated; false if the thread has not terminated after the amount of time specified by the timeout parameter has elapsed.

If Timeout.Infinite is specified for timeout, this method behaves identically to the Join() method overload, except for the return value. If the thread has already terminated when Join is called, the method returns immediately. This method changes the state of the current thread to include WaitSleepJoin. You cannot invoke Join on a thread that is in the ThreadState.Unstarted state.

The following code example demonstrates how to use a TimeSpan value with the Join method.

using System;
using System.Threading;

class Test
{
    static TimeSpan waitTime = new TimeSpan(0, 0, 1);

    public static void Main()
    {
        Thread newThread = new Thread(new ThreadStart(Work));
        newThread.Start();

        if(newThread.Join(waitTime + waitTime))
        {
            Console.WriteLine("New thread terminated.");
        }
        else
        {
            Console.WriteLine("Join timed out.");
        }
    }

    static void Work()
    {
        Thread.Sleep(waitTime);
    }
}
https://msdn.microsoft.com/en-us/library/23f7b1ct(v=vs.90).aspx
CC-MAIN-2015-27
refinedweb
205
51.95
Gyro substitute for turtlebot

Hi, The suggested gyro for turtlebot is no longer available at Sparkfun.com. Does someone know which other gyro can be used? Thanks in advance, Lucas

answered 2011-09-10 13:46:33 -0600, updated 2011-09-10 14:29:01 -0600
If you're looking for a breakout board that will also work you can use , it's not a drop-in replacement but if you're looking for a breakout board this will work.

We haven't been able to find a drop-in replacement for that breakout. Our latest run of boards use an ADXRS652BBGZ. Unfortunately, this is a BGA package, requiring you to use a hot-plate to solder this chip to the board before adding other parts. It also has a different scale (250 deg/s instead of 150 deg/s). There are also a few other support components (caps and a resistor) that are required. We're getting those fixes into the latest parts package soon. At the moment, we haven't found anything which is pin-compatible and easy to hand solder.

Way late to the party, but according to this sub ADXRS613BBGZ the ADXRS623BBGZ is the true replacement.

Asked: 2011-09-09 05:12:40 -0600. Seen: 1,606 times. Last updated: Sep 10 '11.
https://answers.ros.org/question/11161/gyro-substitute-for-turtlebot/
CC-MAIN-2021-10
refinedweb
322
63.49
Xalan-j XPathAPI and namespaces
Discussion in 'XML' started by Dino
http://www.thecodingforums.com/threads/xalan-j-xpathapi-and-namespaces.165979/
CC-MAIN-2014-23
refinedweb
111
84.61
Auto Complete Box is one of the controls that is a part of the Silverlight Toolkit for Windows Phone. One of the features of the AutoCompleteBox is that when a user types a character or keyword, the control shows the related words in a drop-down list.

To add the Auto Complete Box, make sure you have installed the Silverlight Toolkit for Windows Phone and added the Silverlight Toolkit controls to the toolbox. Just drag and drop the control "AutoCompleteBox" from the toolbox to the XAML page.

The XAML code for the Auto Complete Box will look like this: <toolkit:AutoCompleteBox Grid.

Alternatively, you can also create the Auto Complete Box in the code-behind:

1. Add the Microsoft.Phone.Controls namespace. Make sure that the Microsoft.Phone.Controls.Toolkit assembly is added via Add Reference.
2. Define the data (i.e., a List of String), assign the data to the ItemsSource property of the AutoCompleteBox object, and add the AutoCompleteBox to the container (StackPanel).

public void AddAutocomplete()
{
    AutoCompleteBox txtbox = new AutoCompleteBox();
    stack.Children.Add(txtbox);
    // Data
    txtbox.ItemsSource = GetSports();
}

public List<string> GetSports()
{
    List<string> Sports = new List<string>();
    Sports.Add("Cricket");
    Sports.Add("Tennis");
    Sports.Add("Table Tennis");
    Sports.Add("Hockey");
    Sports.Add("Football");
    return Sports;
}

Now, run the Windows Phone app and start typing the first few characters in the AutoCompleteBox. You should see the list of suggestions based on the set of data provided as ItemsSource.
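The XAML snippet in the post is cut off after "Grid."; a possible completion is shown below. The attribute values (Grid.Row, the control name, the binding to a Sports collection) and the toolkit namespace declaration are illustrative assumptions, not taken from the original post.

<!-- assumes the page declares:
     xmlns:toolkit="clr-namespace:Microsoft.Phone.Controls;assembly=Microsoft.Phone.Controls.Toolkit" -->
<toolkit:AutoCompleteBox Grid.Row="1"
                         x:Name="autoCompleteBox"
                         ItemsSource="{Binding Sports}" />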
http://developerpublish.com/databinding-an-auto-complete-box-in-windows-phone/
CC-MAIN-2017-22
refinedweb
243
57.57
Defining kinds without an associated datatype

This page tracks feature requests for declaring closed data kinds without associated data types (#6024) and declaring open data kinds that can be freely extended after they are declared (#11080). What comes below is a design proposal that is not yet implemented (as of Jan 2015). The main person responsible for working on the implementation is Jan Stolarek (JS).

Motivation for closed data kinds (#6024)

With -XDataKinds, the only way to get a kind such as Universe is to declare a datatype and promote it. Since * cannot be mentioned in a data declaration, the datatype has to be parameterized over a kind variable star:

data Universe star = Sum (Universe star) (Universe star)
                   | Prod (Universe star) (Universe star)
                   | K star

This has two disadvantages:

1. We have to parameterize over star instead of using * directly. Note: this is no longer the case - see below.
2. We lose constructor name space, because the datatype constructor names will be taken, even though we will never use them to construct terms. So Prod and K cannot be used as constructors of Interpretation as above, because those are also constructors of Universe.

Motivation for open data kinds (#11080)

Users might want to create type-level symbols for the purpose of indexing types. In the past one way of doing this was by using -XEmptyDataDecls. But symbols created in this way were always placed in * and that does not allow us to use kinds to limit what types are admitted as indices. -XDataKinds allows us to create symbols that are assigned a kind other than *, but these kinds are closed and adding new symbols is not possible. Thus:

- We want a way of defining open kinds that can be later extended with new inhabitants.

Solution

I (JS) propose that the mechanism for declaring closed and open data kinds becomes part of -XDataKinds. The proposal is backwards compatible.

Closed kinds

Starting with GHC 8.0 users can use the -XTypeInType extension to write:

data Universe = Sum Universe Universe | Prod Universe Universe | K (*)

This addresses disadvantage (1) but still leaves us with disadvantage (2). So the idea behind #6024 is to let users define things like:

-- closed kind using H98 syntax
data kind Universe = Sum Universe Universe | Prod Universe Universe | K (*)

-- closed kind using GADTs syntax
data kind Universe where
  Sum  :: Universe -> Universe -> Universe
  Prod :: Universe -> Universe -> Universe
  K    :: * -> Universe

By using data kind, we tell GHC that we are only interested in the Universe kind, and not the datatype. Consequently, Sum, Prod, and K will be valid only in types, and not in terms.

Open kinds

Open data kinds would be declared using the following syntax:

-- open kind
data kind open Universe
data kind member Sum  :: Universe -> Universe -> Universe
data kind member Prod :: Universe -> Universe -> Universe
data kind member K    :: * -> Universe

Note that open kinds can be parametrized just like closed kinds:

data kind open Dimension :: *
data kind member Length :: Dimension

data kind open Unit :: Dimension -> *
data kind member Meter :: Unit 'Length
data kind member Foot  :: Unit 'Length

Caveats

Kind and Type Namespaces

Currently GHC has separate namespaces for types and data constructors. We have a simple rule: all data constructors go into the data namespace. With -XDataKinds promoted data constructors still live in the data constructor namespace and there is a hack in the renamer: when renaming types it first looks for a symbol in the type namespace and if that fails then it searches for the symbol in the data namespace. Assume we have:

data kind Foo = MkFoo

In order to resolve disadvantage (2), ie. not pollute the data constructor namespace with MkFoo, we would have to put MkFoo in the type namespace.
This means that our simple rule "data constructors go into data namespace" would have to be broken. Richard Eisenberg argues that this is bad and that, in the case of the above declaration, MkFoo should go into the data namespace. But that does not solve disadvantage (2) and thus misses the point of #6024 (given that disadvantage (1) is already solved by -XTypeInType). Richard also argues that members of an open data kind should also be placed in the data namespace. Putting MkFoo into the data namespace will also allow us to have quite good error messages from the typechecker, rather than cryptic error messages from the renamer about things being out of scope.

Non-promotable data types?

Let's assume for a moment that we decide to place kind-only constructors in the type namespace (ie. not follow Richard's proposal). Consider again the example of the Universe kind and the Interpretation data type. Enabling -XTypeInType makes GADTs promotable. This means that data constructors K and Prod of the Interpretation data type could be validly used in types. But this would lead to a name collision with the K and Prod constructors of the Universe kind. There would be no way of disambiguating whether K refers to the constructor of Universe or the promoted constructor of Interpretation. We don't want to end up in a situation where some of the data constructors can be promoted (L, R) and some can't (K, Prod). So we would need to make the Interpretation data type unpromotable. But detecting that seems Real Hard.

Recursive Groups

We need to be careful about recursive groups. For example, this is valid:

data S = S T
data T = T S

but this is not:

data kind S = S T
data T = T S

Future-proofing the design

GHC is growing more and more type-level symbols. These symbols vary in their properties like generativity, injectivity, matchability or being open/closed - see 9840#comment:6 for an overview. Here we propose adding yet another way of defining symbols. Can we introduce more order into the world of type-level symbols? Can we have some unifying syntax? Can we anticipate what kind of symbols we might want to have in the future?

Alternative Notations

- Use data only instead of data type.
- Use 'data instead of data kind.
- Use type data instead of data kind.
- Use data constructor instead of data kind member.
- Use data extension Unit where { Meter :: Unit; Foot :: Unit } instead of data kind member.
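The Interpretation data type referred to in the motivation and caveats above is not shown on this page. A plausible sketch of such an interpretation type (an assumption for illustration, not the proposal's own code) is given below; note how the constructor names are forced to carry an I prefix precisely because K, Prod and Sum are already taken by Universe's constructors, which is disadvantage (2).

{-# LANGUAGE DataKinds, GADTs, TypeInType #-}

import Data.Kind (Type)

data Universe = Sum Universe Universe | Prod Universe Universe | K Type

-- Interpretations, indexed by Universe codes.
data Interpretation :: Universe -> Type where
  IK    :: a -> Interpretation ('K a)
  IProd :: Interpretation u -> Interpretation v -> Interpretation ('Prod u v)
  IL    :: Interpretation u -> Interpretation ('Sum u v)
  IR    :: Interpretation v -> Interpretation ('Sum u v)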
https://gitlab.haskell.org/ghc/ghc/-/wikis/ghc-kinds/kinds-without-data
CC-MAIN-2021-17
refinedweb
960
60.45
Intro: Send Sensor Data (DHT11 & BMP180) to ThingSpeak With an Arduino, Using ENC28J60 Ethercard

Note: This instructable is for the old ENC28J60 Ethershield and Ethercard. If you have the modern WIZ5100-based Ethernet shield or an ESP8266, go visit my other instructable that I mention below.

About a year and a half ago I published an instructable showing how to upload data to ThingSpeak with an Arduino W5100-based Ethernet card or an ESP8266. There is, however, another ethernet card for the Arduino: the ENC28J60-based Ethercard. It is available as a shield, but also as a module. Though I wouldn't advise anybody to buy the Ethercard, as the W5100 Ethernet card is more versatile, many people may still have one, and rather than letting it gather dust one might as well put it to use.

Things you need:
- Arduino
- ENC28J60-based EtherShield or Ethercard (or module)
- ThingSpeak account
- Sensors (DHT11 and BMP180)
- Internet connection

Step 1: Send Sensor Data (DHT11 & BMP180) to ThingSpeak With an Arduino, Using ENC28J60 Ethercard: Issues

The libraries

There are basically 4 libraries for the ENC28J60:
- EtherShield (development stopped), uses pin 10 as chip select
- Ethercard, developed to allow use of an SD card, uses pin 8 as chip select
- Ether_2860 from Simon Monk. If you do not already have that one, you probably will never get it.
- UIPEthernet from Norbert Truchsess. This library is a drop-in replacement for the W5100 Ethernet library; it makes the ENC28J60 behave like a WIZ5100. That means that programs developed for the latter can be used for the former, simply by replacing #include <Ethernet.h> by #include <UIPEthernet.h>. However, that does require some memory.

When googling for the ethercard library, one may come across forks of the various libraries as well. If for whatever reason you want to use the Ethercard library with pin 10 (e.g. if you use it with the Ethershield), change the pin assignment in the library files ENC28J60.h (line 25 and 41 I believe) and EtherCard.h (line 134: uint8_t csPin = 8). (Depending on the version it can also be in line 154.) But it is easier to add the declaration for pin 10 in the program itself like this: ether.begin(sizeof Ethernet::buffer, mymac, 10). In this instructable I will be using the Ethercard library.

Power supply

The Ethershield and Ethercard shield, as well as most of the modules, expect 3.3 Volt.

The ThingSpeak data format

In my earlier instructable on ThingSpeak, I discussed the data format and particularly that it expects strings, whereas the DHT11 and BMP180 deliver floats.

The program

Fortunately the EtherCard library had a good example to start from. Although initially I added a routine to convert the float data to strings, I realized that the Ethercard library sends the data to ThingSpeak through the print class. Generally this turns floats into strings. I tested it and yes, I do not have to do a string conversion and still keep precision in the data. The ENC28J60 is quite hungry regarding memory, so the program has reached a critical mass with only 412 bytes to spare for local variables. I have had it running constantly for 2 days without any problem. I could probably win some memory by stripping the Adafruit BMP library a bit.
As instructables is not great in publishing code, I suggest to use the file that i have added // The full development history of this code is in the attached file #include <EtherCard.h> // if this library disappeared, it is EtherCard.h #include <Wire.h> // it is Wire.h #include <Adafruit_BMP085.h> // it is Adafruit_BMP085.h #include <dht11.h> // it is dht11.h #define DHT11PIN 2 Adafruit_BMP085 bmp; dht11 DHT11; #define APIKEY "QTRR4654FRE3" //; } } ---------------- 3 Discussions 1 year ago Hi, Anyone have tried out HLK RM04 wifi module with ThingSpeak. If you do, Please upload the tutorial. Thanks ahead. 2 years ago Check out the Arduino ThingSpeak library that's pretty memory efficient, and solves the float / string problem: Reply 2 years ago Thanks. I know that library but as far as I know that does not take the ethershield/ethercard into account so it won' t work. Also, there is not really a float/string problem. As the ethercard library uses a print statement, the float is turned into a string. easypeasy :-)
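The sketch embedded above is only a fragment, so here is a minimal illustrative version of the idea the article describes: reading the DHT11 and sending one field to ThingSpeak with the EtherCard library. Treat the MAC address, API key, chip-select pin and the exact call signatures (ether.browseUrl, DHT11.read) as assumptions to be checked against the library versions you actually have installed.

#include <EtherCard.h>
#include <dht11.h>

#define DHT11PIN 2
#define APIKEY "YOUR_THINGSPEAK_WRITE_KEY"   // placeholder, not a real key

static byte mymac[] = { 0x74, 0x69, 0x69, 0x2D, 0x30, 0x31 };
byte Ethernet::buffer[700];
const char website[] PROGMEM = "api.thingspeak.com";
dht11 DHT11;

static void replyCallback(byte status, word off, word len) {
  // response from ThingSpeak ignored in this sketch
}

void setup() {
  Serial.begin(9600);
  // ENC28J60 boards typically use pin 8 or 10 as chip select (see above)
  ether.begin(sizeof Ethernet::buffer, mymac, 8);
  ether.dhcpSetup();
  ether.dnsLookup(website);
}

void loop() {
  ether.packetLoop(ether.packetReceive());
  if (DHT11.read(DHT11PIN) == 0) {            // 0 means a good reading
    char query[48];
    // values are formatted as text before being sent in the GET request
    snprintf(query, sizeof query, "?api_key=" APIKEY "&field1=%d",
             (int) DHT11.temperature);
    ether.browseUrl(PSTR("/update"), query, website, replyCallback);
  }
  delay(20000);                                // respect ThingSpeak rate limit
}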
https://www.instructables.com/id/Send-Sensor-Data-DHT11-BMP180-to-ThingSpeak-With-a-1/
CC-MAIN-2018-43
refinedweb
718
71.55
GraphQL Scalars 1.0 is out! Explore our services and get in touch. The GraphQL Specification has the Int, Float, String, Booleanand ID Scalar types by default. Those scalar types help you identify the data and validate it before transferring it between client and server. But you might need more specific scalars for your GraphQL application, to help you better describe and validate your app’s data. Validation using Scalars For example, you have a String field but you need to validate upcoming or ongoing string data using regular expressions. So you should have this validation on each end; one in the client, the other one in the server and maybe there is another on a source. Instead of duplicating the same logic in different parts of the project, you can use EmailAddress scalar type that does the validation inside GraphQL for you. Serialization and Parsing The other benefit of using GraphQL scalar types is parsing and serializing while transferring data. For example, you have DateTime data but it is transferred as String due to restrictions of JSON, and each time you receive and pass the data, you have to parse the string and create a JavaScript Date instance while also serializing it to string before passing it to the client. Instead of having that logic in your implementation, you can just use DateTime scalar and you would work with native JavaScript Date instances directly like it is one of primitive types such as string, number and boolean. What’s New? We’ve recently taken over the maintenance of GraphQL-Scalars library from the amazing team of OK-Grow! Since then we completely rewrote the library using TypeScript, upgraded all dependencies, closed all the issues and PRs and increased the number of scalars in the package with new scalars like: BigInt(Long) , GUID , HexColorCode , Hexadecimal , IPv4 , IPv6 , ISBN , MAC , JSON and more. You can see all scalars in the README. Mocking Apollo Server provides mocks built-in scalars such as Int , String , Float , ID and Boolean . What if you need same thing for our scalars? So, we provide you mocking functions for each scalar in this package. You can add those easily in your server for mocking the schema. import { ApolloServer } from 'apollo-server'; import { makeExecutableSchema } from 'graphql-tools'; // import all scalars and resolvers import { typeDefs, resolvers, mocks } from 'graphql-scalars'; // Alternatively, import individual scalars and resolvers // import { DateTimeResolver, DateTimeTypeDefinition, DateTimeMock, ... } from "graphql-scalars" const server = new ApolloServer({ typeDefs: [ // use spread syntax to add scalar definitions to your schema ...typeDefs, // DateTimeDefinition, // ... // ... other type definitions ... ], resolvers: { // use spread syntax to add scalar resolvers to your resolver map ...resolvers, // DateTimeResolver, // ... // ... remainder of resolver map ... }, mocks: { // use spread syntax to add scalar resolvers to your resolver map ...mocks, // DateTimeMock, // ... // ... other mocks ... }, }); Special Thanks Thanks to OK-Grow for creating this package, adriano-di-giovanni for being generous and giving us the graphql-scalars name on npm, to Saeris for letting us to take other scalar implementations from his fork, stems for their graphql-bigint package, abhiaiyer91 for his graphql-currency-scalars package and taion for his graphql-type-json.
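As a small illustration of the validation point above, a field can be declared with the EmailAddress scalar so the server rejects non-conforming input before your resolver ever runs. This sketch assumes the EmailAddress scalar follows the same export pattern the post shows for DateTime (EmailAddressResolver / EmailAddressTypeDefinition); the schema and resolver themselves are hypothetical.

const { ApolloServer, gql } = require('apollo-server');
const {
  EmailAddressResolver,
  EmailAddressTypeDefinition,
} = require('graphql-scalars');

const typeDefs = [
  EmailAddressTypeDefinition, // adds: scalar EmailAddress
  gql`
    type Query {
      ok: Boolean
    }
    type Mutation {
      subscribe(email: EmailAddress!): Boolean
    }
  `,
];

const resolvers = {
  EmailAddress: EmailAddressResolver,
  Query: { ok: () => true },
  Mutation: {
    // email has already been validated by the scalar at this point
    subscribe: (_, { email }) => true,
  },
};

const server = new ApolloServer({ typeDefs, resolvers });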
https://the-guild.dev/blog/graphql-scalars
CC-MAIN-2021-31
refinedweb
516
60.65
In a production application you frequently can find yourself working with objects that have a large accessor chain like student.School.District.Street.Name But when you want to program defensively you need to always do null checks on any reference type. So your accessing chain looks more like this instead if (student.School != null) { if (student.School.District != null) { if (student.School.District.Street != null) { s += student.School.District.Street.Name; } } } Which sucks. Especially since its easy to forget to add a null check, and not to mention it clutters the code up. Even if you used an option type, you still have to check if it’s something or if its nothing, and dealing with huge option chains is just as annoying. One solution is to use the maybe monad, which can be implemented using extension methods and lambdas. While this is certainly better, it can still can get unwieldy. What I really want is a way to just access the chain, and if any part of it is null for it to return null. The magic of Castle Dynamic Proxy This is where the magic of castle dynamic proxy comes into play. Castle creates runtime byte code that can subclass your class and intercept method calls to it. This means you can now control what happens each time a method is invoked on your function, both by manipulating the return value and by choosing whether or not to even invoke the function. Lots of libraries use castle to do neat things, like the moq library from google and NHibernate. For my purposes, I wanted to create a null safe proxy that lets me safely iterate through the function call chain. Before I dive into it, lets see what the final result is: var user = new User(); var name = user.NeverNull().School.District.Street.Name.Final(); At this point name can be either null, or the street name. But since this user never set any of its public properties everything is null, so name here will be null. At this point I can do one null check and move on. The start NeverNull is an extension method that wraps the invocation target (the thing calling the method) with a new dynamic proxy. public static T NeverNull<T>(this T source) where T : class { return (T) _generator.CreateClassProxyWithTarget(typeof(T), new[] { typeof(IUnBoxProxy) }, source, new NeverNullInterceptor(source)); } I’m doing a few things here. First I’m making a proxy that wraps the source object. The proxy will be of the same type as the source. Second, I’m telling castle to also add the IUnBoxProxy interface to the proxy implementation. We’ll see why that’s used later. All it means is that the proxy that is returned implements not only all the methods of the source, but is also going to be of the IUnBoxProxy interface. Third, I am telling castle to use a NeverNullInterceptor that holds a reference to the source item. This interceptor is responsible for manipulating any function calls on the source object. The method interceptor The interceptor isn’t that complicated. Here is the whole class: public class NeverNullInterceptor : IInterceptor { private object Source { get; set; } public NeverNullInterceptor(object source) { Source = source; } public void Intercept(IInvocation invocation) { try { if (invocation.Method.DeclaringType == typeof(IUnBoxProxy)) { invocation.ReturnValue = Source; return; } invocation.Proceed(); var returnItem = Convert.ChangeType(invocation.ReturnValue, invocation.Method.ReturnType); if (!PrimitiveTypes.Test(invocation.Method.ReturnType)) { invocation.ReturnValue = invocation.ReturnValue == null ? 
ProxyExtensions.NeverNullProxy(invocation.Method.ReturnType) : ProxyExtensions.NeverNull(returnItem, invocation.Method.ReturnType); } } catch (Exception ex) { invocation.ReturnValue = null; } } } The main gist of this class is that whenever a function gets called on a proxy object, the interceptor can capture the function call. We created the specific proxy to be tied to this interceptor as part of the proxy generation. When a function is captured by the interceptor, the interceptor can choose to invoke the actual underlying function if it wants to (via the proceed method). After that, the interceptor tests to see if the function return value was null or not. If the value wasn’t null, the interceptor then proxies the return value (creating a chain of proxy objects). This means that the next function call in the accessor chain is now also on a proxy! But, if the return value was null we still need to continue the accessor chain. Unlike the maybe monad, we can’t bail in the middle of the call. So, what we do instead is to create an empty proxy of the same type. This just gives us a way to capture invocations onto what would otherwise be a null object. Castle can give you a proxy that doesn’t wrap any target. This is what moq does as well. If anyone calls a function on this proxy, the interceptor’s intercept method gets called and we can choose to not proceed with the actual invocation! There’s no underlying wrapped target, it’s just the interceptor catching calls. In the scenario where the return result is null, here is the function to proxy the type public static object NeverNullProxy(Type t) { return _generator.CreateClassProxy(t, new[] { typeof(IUnBoxProxy) }, new NeverNullInterceptor(null)); } Now, you may notice that I’m passing null to the constructor of the interceptor, but previously I passed a source object to the constructor. This is because I want the interceptor to know what is the underlying proxied target. This is how I’m going to be able to unbox the final value out of the proxy chain when it’s requested. This is also the reason for the IUnBoxProxy interface we added. Getting the value out! At this point there is an entire proxy chain set up. Once you enter the proxy chain, all other functions on that object are also proxies. But at some point you want to get the actual value out, whether its null or not. This is where that special interface comes in. Using an extension method on all object types we can cast the object to the special interface (remembering that the object we’re working on is actually a proxy and that it should have implemented the special interface we told it to) and execute a function on it. It really doesn’t matter which function, just a function public static T Final<T>(this T source) { var proxy = (source as IUnBoxProxy); if (proxy == null) { return source; } return (T)proxy.Value; } Since the proxy is actually a dynamic proxy that was created we get caught back in the interceptor. This is why this block exists if (invocation.Method.DeclaringType == typeof(IUnBoxProxy)) { invocation.ReturnValue = Source; return; } If the declaring type (i.e. the thing calling the function) is of that type (which it is since we explicitly cast it to it) then return the internal stored unboxed proxy. If the proxy contained null then a null gets returned, otherwise the last thing in the chain gets returned. I specificailly excluded primitives during the proxy boxing phase since a primitive implies the final ending of the chain. 
That and castle kept throwing me an error saying that it Could not load type 'Castle.Proxies.StringProxy' from assembly 'DynamicProxyGenAssembly2, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null' because the parent type is sealed. But thats OK since we don’t need to proxy primitives in this scenario. Performance tests Now this is great and all, but if it incurs an enormous performance penalty then we can’t really use it. This is where I ran some unscientific tests. In a unit test run in release I checked the relative execution time of the following 3 functions: Create an empty user and use the never null proxy to check a string some amount of times. The console writeline exists only to make sure the compiler doesn’t optimize out unused variables. private void NullWithProxy(int amount) { var user = new User(); var s = "na"; for (int i = 0; i < amount; i++) { s += user.NeverNull().School.District.Street.Name.Final() ?? "na"; } Console.WriteLine(s.FirstOrDefault()); } Test a non null object chain with the proxy private void TestNonNullWithProxy(int amount) { var student = new User { School = new School { District = new District { Street = new Street { Name = "Elm" } } } }; var s = "na"; for (int i = 0; i < amount; i++) { s += student.NeverNull().School.District.Street.Name.Final(); } Console.WriteLine(s.FirstOrDefault()); } And finally test a bunch of if statements on a non null object private void NonNullNoProxy(int amount) { var student = new User { School = new School { District = new District { Street = new Street { Name = "Elm" } } } }; var s = "na"; for (int i = 0; i < amount; i++) { if (student.School != null) { if (student.School.District != null) { if (student.School.District.Street != null) { s += student.School.District.Street.Name; } } } } Console.WriteLine(s.FirstOrDefault()); } And the results are You can see on iteration 1 that there is a big spike in using the proxy. That’s because castle has to initially create and then cache dynamic proxies. After that things level out and grow linearly. While you do incur a penalty hit, its not that far off from regular if checks. Doing 4 chained proxy checks 5000 times runs about 200 milliseconds, compared to 25 milliseconds with direct if checks. While its 8 times longer, you get the security of knowing you won’t accidentally have a null reference exception. For lower amounts of accessing the time is pretty comparable. Conclusion Unfortunately a downside to all of this is that castle can only proxy methods and properties that are marked as virtual. Also I had a lot of difficulty getting proxying of enumerables to work. I was only able to get it to work with things that are declared as IEnumerable or List but not Dictionary or HashSet or anything else. If you know how to do this please let me know! Because of those limitations I wouldn’t suggest using this in a production application. But, maybe, one of these days a language will come out with this built in and I’ll be pretty stoked about that. For full source check out my github. Also I’d like to thank my coworker Faisal for really helping out on this idea. It was his experience with dynamic proxies that led to this post. 1 thought on “Minimizing the null ref with dynamic proxies”
https://onoffswitch.net/2013/05/20/minimizing-null-ref/
CC-MAIN-2020-16
refinedweb
1,721
55.54
Date: Nov 30, 2012 9:18 AM Author: Michael Stemper Subject: Showing group is Abelian I'm currently on a problem in Pinter's _A Book of Abstract Algebra_, in which the student is supposed to prove that the (sub)group generated by two elements a and b, such that ab=ba, is Abelian. I have an outline of such a proof in my head: 1. Show that if xy = yx then (x^-1)y = y(x^-1). This is pretty simple. 2. Use induction to show that if p and q commute, then any product of m p's and n q's is equal to any other, regardless of order. 3. Combine these two facts to show the desired result. However, this seems quite messy. I'm also wary that what I do for the third part might end up too hand-wavy. Is there a simpler approach that I'm overlooking, or do I need to just dive in and go through all of the details of what I've outlined? -- Michael F. Stemper #include <Standard_Disclaimer> If this is our corporate opinion, you will be billed for it.
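For what it's worth, one way to write out step 1 of the outline (in the notation of the post):

% Step 1: if xy = yx, then x^{-1}y = y x^{-1}.
\begin{align*}
xy = yx
  &\;\Longrightarrow\; x^{-1}(xy)x^{-1} = x^{-1}(yx)x^{-1} \\
  &\;\Longrightarrow\; (x^{-1}x)\,y\,x^{-1} = x^{-1}\,y\,(xx^{-1}) \\
  &\;\Longrightarrow\; y\,x^{-1} = x^{-1}\,y .
\end{align*}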
http://mathforum.org/kb/plaintext.jspa?messageID=7930290
CC-MAIN-2015-35
refinedweb
191
78.99
So just a forewarning... I'm going to disappoint you. Sorry about not providing the content you likely crave from a #100daysofcode blog. However, even tiny projects can become real timesucks when you add the time to configure and deploy the project and then write a blog post on it.

Since I've been on my grind recently and have been thinking about getting certain things done before the end of 2021, I wanted to publish a year progress bar built in React. Progress bars are one of those things that everyone builds eventually (or at least implements via a library), and it's actually a nice exercise in native JavaScript Date functions. I modified the tutorial here to calculate the days remaining until December 31 2021 and provide that info for the rest of the application to read within the App component. Then I followed this tutorial from an amazing Dev.to blogger I am following (and you should too!) to create the progress bar component.

const today = new Date();
const newYear = new Date(today.getFullYear(), 11, 31);
if (today.getMonth() == 11 && today.getDate() > 31) {
  newYear.setFullYear(newYear.getFullYear() + 1)
}
const one_day = 1000 * 60 * 60 * 24;
const remainingDays = Math.ceil((newYear.getTime() - today.getTime()) / one_day);
// percentage of the year already elapsed
const yearCompleted = Math.round(((365 - remainingDays) / 365) * 100);
const readout = (`${remainingDays} days left until 2022!`);
...
<div className='readout-container'>{readout}</div>
<ProgressBar bgcolor={'green'} completed={yearCompleted} />

Instead of using inline styles I mostly switched everything over to CSS, and simply fed the requested props into the ProgressBar component.

import React from 'react'

function ProgressBar(props) {
  const { bgcolor, completed } = props;
  const fillerStyles = {
    width: `${completed}%`,
    backgroundColor: bgcolor,
    transition: 'width 1s ease-in-out'
  }
  return (
    <div className='progress-bar-container'>
      <div className='progress-filler' style={fillerStyles}>
        <span className='progress-label'>{`${completed}%`}</span>
      </div>
    </div>
  )
}

export default ProgressBar

That's it! Hope you enjoy, and remember to give Kate from dev.to a follow! Like the site says, if you enjoy these kinds of projects you can find me on the twitters.

Discussion (4)

Hi James! I just want to wish you good luck on your mission to complete 100 React projects and to encourage you not to stop! You are doing great! And I'm happy that my tutorial on how to make a custom progress bar component was helpful!

Great efforts! I just checked your Github repo. I loved the concept of implementing 100 small projects in React!!

Thanks Yogini! I code at work so sometimes it's hard to come home and do a project, so this was a simple one. Getting through it though. How is React Native?

I can totally relate. At times you don't feel motivated to work on your own stuff after a hectic code workday! React Native is amazing and interesting, based on concepts of ReactJS only. It's fun, you should give it a try.
https://practicaldev-herokuapp-com.global.ssl.fastly.net/jwhubert91/project-47-of-100-year-progress-bar-with-react-1ocd
CC-MAIN-2021-17
refinedweb
473
58.69
04 October 2012 11:48 [Source: ICIS news]

SINGAPORE (ICIS)--Iranol Oil, a major base oil producer in Iran, has offered Group I base oils for loading later this month, a source said.

The Iranian refiner is offering 4,200 tonnes of SN150, 4,200 tonnes of SN500 and 1,400 tonnes of brightstock 150 for loading from Bandar Imam Khomeini (BIK) on 15-30 October, the source said.

The cargo will be awarded on a FOB BIK basis with the option for partial shipments, the source added.

The tender was valid until 3 October, but there was no settlement heard as at 17:30 hours. The market sources added that the tender will likely be awarded on 5-7 October.
http://www.icis.com/Articles/2012/10/04/9600919/iranol-oil-offers-group-i-base-oils-for-loading-15-30.html
CC-MAIN-2014-49
refinedweb
107
63.53
Good work with these changes - they should make Windows 9x users a bit more happy! Comments about this patch follow. First up, there were a couple of changes in this patch that had nothing to do with service/startup/shutdown/restart/console stuff at all. The change to add a function prototype to http_config.h was good, but can it be a separate patch next time? One more serious change is the one below: > Index: os.h .. > -#define HAVE_SENDFILE > +#define HAVE_SENDFILE_UNKNOWN Why? I might be missing something, but it sure looks like this will cause NT never to use TransmitFile! ap_send_fd in http_protocol.c relies on HAVE_SENDFILE being defined before it attempts calling iol_sendfile. > 2) Isolation can't be complete, we need to know when the mpm is > fully initialized. A new pointer to a no-arg function returning > void is provided for this purpose, ap_mpm_init_complete. It is > only called if overridden with a non-NULL value prior to invoking > apache_main. Why can't this call happen from within ap_mpm_run (just before master_main is called) in winnt.c? This would guarantee that it was called once per process and only for the parent process. If it must be done from the post config hook, can the NULL assignments happen there too (ie. just after the function is called)? This way, all the "ap_mpm_init_complete = NULL;" stuff that goes on at the end of each function that's a candidate for being called through ap_mpm_init_complete can disappear. I don't like the way a function must know that it is going to be called through a pointer, and that it is also responsible for setting said pointer to NULL before it returns. > Index: winnt.c .. > +typedef void (CALLBACK *ap_completion_t)(); Can this go into a header file? The same code has been put into winnt.c, main_win32.c and service.c. > -#include "service.h" > +//#include "service.h" Why not just delete this? Same question for calls to service_set_status, etc. that were commented out. > Index: service.c .. > + /* Free the kernel library */ > + // Worthless, methinks, since it won't be reclaimed > + // FreeLibrary(hkernel); Why not just call it? It would be one line instead of three and correct. The > + /* We have to quit right here to avoid an invalid page fault */ > + // But, this is worth experimenting with! > + return (globdat.exit_status); Why is it necessary to quit (ie. why do you get an invalid page fault)? What is "quitting" in this context? What exactly is worth experimenting with? Tim This message was sent through MyMail
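To make the HAVE_SENDFILE point concrete, the pattern being described is a compile-time guard like the one sketched below (illustrative only, not the actual httpd source); renaming the macro to HAVE_SENDFILE_UNKNOWN means the sendfile/TransmitFile branch is never compiled in.

/* Illustrative sketch of why renaming HAVE_SENDFILE disables the fast path. */
#include <stdio.h>

static void send_with_sendfile(void)  { puts("kernel sendfile/TransmitFile path"); }
static void send_with_copy_loop(void) { puts("user-space read/write copy loop"); }

void send_file_sketch(void)
{
#ifdef HAVE_SENDFILE
    send_with_sendfile();
#else
    send_with_copy_loop();
#endif
}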
http://mail-archives.apache.org/mod_mbox/httpd-dev/200005.mbox/%3C200005170425.OAA12749@oznet15.ozemail.com.au%3E
CC-MAIN-2014-42
refinedweb
418
76.62
I think I need a small split string program with a getLine function to do "exactly" this: (Please notice the user types his full name on one line, but the program outputs First and Last name on separate lines.)

Please type your first and last name: Sam Harris
Your First Name is: Sam
Your Last Name is: Harris

Below is some code I have been playing around with, with no success. It is "TOTALLY" useless because it needs user input from the keyboard as described above, not embedded names like I have in the code below. Can someone fix this, or would you have a working example to achieve what I'm after, which is color-coded above?

#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    // useless. We need user keyboard input. Not embedded
    // names and it must be in Standard C++.
    string s("Sam Harris");
    istringstream iss(s);
    string sub;
    iss >> sub;
    cout << "First Name: " << sub << endl;
    iss >> sub;
    cout << "Last Name: " << sub << endl;
    return 0;
}

Hope I made my question clear.

Thanks in advance
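A sketch of what the poster seems to be after - reading the whole line from the keyboard with getline, then splitting it, here by reusing getline with a space delimiter on a stringstream (standard C++ only):

#include <iostream>
#include <sstream>
#include <string>
using namespace std;

int main()
{
    cout << "Please type your first and last name: ";

    string line;
    getline(cin, line);              // e.g. the user types "Sam Harris"

    istringstream iss(line);
    string first, last;
    getline(iss, first, ' ');        // everything up to the first space
    getline(iss, last);              // the rest of the line

    cout << "Your First Name is: " << first << endl;
    cout << "Your Last Name is: " << last << endl;
    return 0;
}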
http://cboard.cprogramming.com/cplusplus-programming/130293-split-string-using-getline-function.html
CC-MAIN-2016-07
refinedweb
186
80.31
Doing for User Space What We Did for Kernel Space I believe the best and worst thing about Linux is its hard distinction between kernel space and user space. Without that distinction, Linux never would have become the most leveraged operating system in the world. Today, Linux has the largest range of uses for the largest number of users—most of whom have no idea they are using Linux when they search for something on Google or poke at their Android phones. Even Apple stuff wouldn't be what it is (for example, using BSD in its computers) were it not for Linux's success. Not caring about user space is a feature of Linux kernel development, not a bug. As Linus put it on our 2003 Geek Cruise, "I only do kernel stuff...I don't know what happens outside the kernel, and I don't much care. What happens inside the kernel I care about." After Andrew Morton gave me additional schooling on the topic a couple years later on another Geek Cruise, I wrote:. A natural outcome of this distinction, however, is for Linux folks to stay relatively small as a community while the world outside depends more on Linux every second. So, in hope that we can enlarge our number a bit, I want to point us toward two new things. One is already hot, and the other could be. The first is blockchain, made famous as the distributed ledger used by Bitcoin, but useful for countless other purposes as well. At the time of this writing, interest in blockchain is trending toward the vertical. Figure 1. Google Trends for Blockchain The second is self-sovereign identity. To explain that, let me ask who and what you are. If your answers come from your employer, your doctor, the Department of Motor Vehicles, Facebook, Twitter or Google, they are each administrative identifiers: entries in namespaces each of those organizations control, entirely for their own convenience. As Timothy Ruff of Evernym explains, "You don't exist for them. Only your identifier does." It's the dependent variable. The independent variable—the one controlling the identifier—is the organization. If your answer comes from your self, we have a wide-open area for a new development category—one where, finally, we can be set fully free in the connected world. The first person to explain this, as far as I know, was Devon Loffreto He wrote "What is 'Sovereign Source Authority'?" in February 2012, on his blog, The Moxy Tongue. In "Self-Sovereign Identity", published in February 2016, he writes: Self-Sovereign Identity must emit directly from an individual human life, and not from within an administrative mechanism...self-Sovereign Identity references every individual human identity as the origin of source authority. A self-Sovereign identity produces an administrative trail of data relations that begin and resolve to individual humans. Every individual human may possess a self-Sovereign identity, and no person or abstraction of any type created may alter this innate human Right. A self-Sovereign identity is the root of all participation as a valued social being within human societies of any type. To put this in Linux terms, only the individual has root for his or her own source identity. In the physical world, this is a casual thing. For example, my own portfolio of identifiers includes: - David Allen Searls, which my parents named me. - David Searls, the name I tend to use when I suspect official records are involved. - Dave, which is what most of my relatives and old friends call me. - Doc, which is what most people call me. 
As the sovereign source authority over the use of those, I can jump from one to another in different contexts and get along pretty well. But, that's in the physical world. In the virtual one, it gets much more complicated. In addition to all the above, I am @dsearls (my Twitter handle) and dsearls (my handle in many other net-based services). I am also burdened by having my ability to relate contained within hundreds of different silos, each with their own logins and passwords. You can get a sense of how bad this is by checking the list of logins and passwords on your browser. On Firefox alone, I have hundreds of them. Many are defunct (since my collection dates back to Netscape days), but I would guess that I still have working logins to hundreds of companies I need to deal with from time to time. For all of them, I'm the dependent variable. It's not the other way around. Even the term "user" testifies to the subordinate dependency that has become a primary fact of life in the connected world. Today, the only easy way to bridge namespaces is via the compromised convenience of "Log in with Facebook" or "Log in with Twitter". In both of those cases, each of us is even less ourselves or in any kind of personal control over how we are known (if we wish to be knowable at all) to other entities in the connected world. What we have needed from the start are personal systems for instantiating our sovereign selves and choosing how to reveal and protect ourselves when dealing with others in the connected world. For lack of that ability, we are deep in a metastasized mess that Shoshana Zuboff calls "surveillance capitalism", which she says. Then she asks, "How can we protect ourselves from its invasive power?" I suggest self-sovereign identity. I believe it is only there that we have both safety from unwelcome surveillance and an Archimedean place to stand in the world. From that place, we can assert full agency in our dealings with others in society, politics and business. I came to this provisional conclusion during ID2020, a gathering at the UN on May. It was gratifying to see Devon Loffreto there, since he's the guy who got the sovereign ball rolling in 2013. Here's what I wrote about it at the time, with pointers to Devon's earlier posts (such as one sourced above). Here are three for the field's canon: - "Self-Sovereign Identity" by Devon Loffreto. - "System or Human First" by Devon Loffreto. - "The Path to Self-Sovereign Identity" by Christopher Allen. A one-pager from Evernym, digi.me, iRespond and Respect Network also was circulated there, contrasting administrative identity (which it calls the "current model") with the self-sovereign one. In it is the graphic shown in Figure 2. Figure 2. Current Model of Identity vs. Self-Sovereign Identity The platform for this is Sovrin, explained as a "Fully open-source, attribute-based, sovereign identity graph platform on an advanced, dedicated, permissioned, distributed ledger" There's a white paper too. The code is called plenum, and it's at GitHub. Here—and places like it—we can do for user space what we've done for the last quarter century for kernel space.
http://www.linuxjournal.com/content/doing-user-space-what-we-did-kernel-space
CC-MAIN-2017-26
refinedweb
1,163
62.98
Ship It! – Episode #57 What do oranges & flame graphs have in common? with Frederic Branczyk, CEO and Founder of Polar Signals Today we are talking with Frederic Branczyk, founder of Polar Signals & Prometheus maintainer. You may remember Frederic from episode 33 when we introduced Parca.dev. This time, we talk about a database built for observability: FrostDB, formerly known as ArcticDB. eBPF generates a lot of high cardinality data, which requires a new approach to writing, persisting & then reading back this state. TL;DR FrostDB is sub zero cool & well worthy of its name. Featuring Sponsors MongoDB – An integrated suite of cloud database and services — They have a FREE forever tier, so you can prove to yourself and to your team that they have everything you need. Check it out today at mongodb.com/changelog. Notes & Links - ⚠️ Naming is hard: ArcticDB is now FrostDB (+updates!) - Introducing ArcticDB: A database for Observability - Profiling Next.js apps with Parca - Michal Kuratczyk helped us figure out what we were doing wrong with Erlang perf maps - David Ansari: Improving RabbitMQ Performance with Flame Graphs - Kemal Akkoyun: Fantastic Symbols and Where to Find Them Part 1 & Part 2 - Matthias Loibl: pyrra - Making SLOs with Prometheus manageable, accessible, and easy to use for everyone! - Tyler Neely: 🎬 Modern database engineering with io_uring - FOSDEM 2020 - Achille Roussel: Go library to read/write Parquet files - segmentio/parquet-go - Julia Evans: How to spy on a Ruby program Transcript Click here to listen along while you enjoy the transcript. 🎧 Hi, Frederic! Welcome back to Ship It. Just in time for summer! [laughs] Thanks for having me back. So we last met in episode 33, Merry Shipmas. It was Christmas. Can you believe it? And it’s almost summer. This was six months ago, and we had a great time talking about trying out Parca. And I enjoyed trying it out. Yeah, time flies. And Michal Kuratczyk, thank you for figuring out what we were doing wrong with Erlang perf maps. That was Parca Agent issue 145. So thank you, Michal, for helping us figure it out. This happened very recently… Thank you David Ansari for writing that amazing blog post, how to do use pprof and how to do Flame Graphs with RabbitMQ, and mentioning Parca. I’m really excited to see what happens next. That was very nice to see; we’ll drop a link in the show notes… And Kemal - he wrote two blog posts on various topics, which… There’s lots of things happening in this space. So do you wanna tell us more about it, Frederic? Because this is like the tip of the iceberg, literally; the tip of the Polar Signals iceberg. There’s been so many things happening in the background. Yeah, I mean - where to start, right? I think one of the most exciting things, that have nothing to do with software for us at Polar Signals, is that we grew the team a ton since we’ve last talked. I think we doubled the team since you and I talked last… So we’re now 11 people, which is extremely exciting to see organizationally. But then, of course, the software that we’re building is becoming ever better, ever more features, and more stable, and everything. Yeah, I think it’s cool that you started with the Erlang bit, because that’s where we left off last, and it’s entirely random that just yesterday that RabbitMQ blog post was, to no control of you or me, was published, showing what we were trying to do last time is properly supported by Erlang. 
[04:10] You know, when things are meant to happen, they just happen… So sit back and just let them happen. Just going with the flow. Big fan of that. And seeing things come together this way - we’re definitely on the right track with this. So I know that Kemal Akkoyun was with you back in December… He wrote two blog posts, amazing blog posts, on this topic. Fantastic Symbols and Where to Find Them, Part 1 and 2. We’ll drop them in the show notes. They explain a lot more of the issues that we’re seeing, and issues we’re specifically symbolizing stack traces - Kemal did an amazing job explaining it in great detail. There’s some screenshots there… David covers a lot of this in his blog post, the recent blog post… So it’s a really deep dive into this topic, and I really enjoy these fantastic people spending a lot of time just to explain in very detailed terms what the problem is, why it’s important, how it works… Big fan of that, too. So in these sick six months, what changed with Parca.dev? So I think almost everything has changed, at least a little bit. Since you’ve mentioned the work that Kemal has been doing in all of the blog posts that he’s been writing – the blog posts are kind of the result of all of this work. Basically, they are the blog posts that he wished he had had when he was working on this… Because there’s so much archaic information out there. Basically, Linux has grown over the last 30 years (my gosh), and even before that, ELF binaries - they’ve been around for a long time. There’s just a lot of intricate things that can happen, and then there are random things that compilers do to binaries to optimize them… And that kind of just all makes our life really miserable, but also kind of interesting in the profiling world. So Kemal has kind of – like, one of the really important things that came out of all of this work that Kemal was doing, and what ultimately resulted in those blog posts as well, is something call position-independent executables; support for these. And the reason why this is really important is basically all binaries, or all shared objects, shared libraries - think of libc, it’s kind of the one that basically everything dynamically links to, right? But anything you can think of that is like a shared object, shared library in Linux - those are position-independent executables. And the term comes from that they can essentially be mapped into memory, into random places in the process basically… And even if they are mapped in those random places, we can still kind of translate those memory addresses back to something that is understandable uniquely for that shared library. So even if there are two different binaries that do completely different things with these libraries, the shared object is the same one, and we can treat it as the same one. So that was really important, so that we can do analysis of an entire infrastructure, where as I said, lots of binaries link to the same libraries, and we can then link all of this information and say “Hey, there are hundreds of binaries using this function in libc that is super-unoptimized”, or something like that. Not that that’s really the case… Libc is a very well optimized library, but you get the idea. It’s basically a super-power in order to get whole system visibility. So that’s exciting… And kind of as a bonus, every Rust binary out there is a position-independent executable. So that means that just by doing all of this work we now support Rust, even better than we did before. [08:04] That’s amazing. That’s amazing. 
The one thing – so thank you for slowing me down, because you’re right, this is important; to talk about those two blog posts, the Fantastic Symbols and Where To Find Them… The first one, the ELF Linux executable walkthrough - that picture I think is worth a thousand words in this case. It explains so well how this breaks down, how the ELF binary breaks down, what it is… Sorry, the ELF format. And there’s so much to that. And then in part two, where we talk about JIT and Node is given as an example, how does it actually work in practice. It’s really nice to see that and to basically connect those dots, because as you mentioned, the problem space is huge, and if you’re missing those fundamentals it’s very difficult to understand how the pieces fit together, and what are you even looking at. Why is this important? David, he wrote it in the Improving RabbitMQ Performance blog post. He showed the importance of understanding what is happening at a very low level when it comes to reasoning about performance, when it comes to improving performance in whatever you’re running. So where is the time spent, what is least efficient. And because these things are so complicated, can we have a universal language, please, to understand what is happening? And I think, to a great extent, eBPF allows us to do things that were not possible before, or were very hard before, and only a handful of people were able to pull this one off. And even then, spend a lot of time. Brendan Gregg comes to mind. He did so much for the Flame Graph, understanding CPU sampling, CPU profiling, all that. Yeah, absolutely. And I think our mission with the Parca Project is to take all of this information from all of these communities and kind of bundle it into one. Like you said, Brendan Gregg has done phenomenal work showing us how to profile Java applications, but also native binaries… And then completely on the other side of the spectrum there are really amazing Python and Ruby profilers. One that I’m really excited about is Rbspy, that was originally created by Julia Evans. It basically outlines how we’re going to have support for CPU profiling for Ruby processes as well. That’s what I’m kind of trying to say - we learned also about Erlang; that’s kind of something that actually came out of this podcast, which I think is really exciting… Just kind of getting all of these pieces together, so that we can have actual, whole system profiling, so that we can look at our entire infrastructure as one, regardless of what language we’re talking about. And as we can see based on this podcast, that’s a long road to go, but it’s one worth going. I really like how simple you make this. I think that’s one of my favorite aspects of Parca, how something that’s fairly complex - and if you have to do this by hand, just go and look through all the instructions. And if you haven’t done this, you’ll realize by step number five or six you go “You know what - do I really wanna do this?” You’re questioning whether you really wanna do that. That’s just how involved it is. And having an open source project that makes this really, really easy - that’s what just got me excited the first time I heard about Parca… Because I knew how difficult it is to get it right. And I think everyone that’s spent a bit of time with pprof and - which is the other one? DBG? No, GDB… Oh, my goodness me. Oh, wow. That’s like another tool which is so difficult to use. And I had to spend a bit of time there, and I almost always forgot my steps. There’s so many. 
So unless you do this all day, every day, it’s really hard stuff. And Parca makes it simple, and I love that story. [12:00] It’s funny that you phrase it in that way, because a couple of weeks ago I was talking to a high-frequency trading company, and as I think everybody can imagine, shaving off a single CPU cycle is a competitive advantage to them. And even in those kinds of environments, they were telling us that – like, they love how we’re going the extra mile, and doing continuous profiling… But they would already be happy with profiling products that just made it easier to do profiling. So we’re kind of doing multiple things there. We’re doing exactly that, like you already said, and then we’re also going that extra step of actually giving them performance data of all of time, not just a single point in time. Yeah. And just as we’ve shown in episode 33, there’s even like a pull request that goes with it. Anyone can take this; if you have Kubernetes, it’s super-simple. One command and you have it. That’s all it takes. And it’s open source; you’re free to look at it, contribute to it, make it your own… Whatever you wanna do with it, because it’s such an important piece of technology, I think. Speaking of which, I’ve noticed straight off the bat your website, and I think – wow, Parca.dev? I really like the new website. Tell us a little bit about that, because I haven’t seen such a big change, such a positive change happen just like within a couple of months. What’s the story behind it? Honestly, that has very little to do with our team, and has all to do with the really incredible team at Pixel Point. They’re a web consultancy, but I got to know them through some other open source projects. They did the website for the K6 project, they did the website for [unintelligible 00:13:56.07] Cilium… Yes, Cilium… Maybe even the eBPF.io website, I’m not 100% sure. But basically, they’ve become THE consultancy for open source projects and deep tech projects… So I was really excited to kind of just reach out to them and see if they’re interested in a project like this, and working with us, because we felt like we needed a makeover for the Parca.dev website… And they’re just absolutely mind-blowingly amazing. They really try to understand what Parca does, and they themselves got really excited about it. That of course is a bonus, but because they tried so hard to actually understand what Parca does, they were able to tell the story really amazingly, and then they’re also just brilliant designers. Yeah. I wanna give a huge shout-out to Pixel Point, because I rarely see a website that captures something as well as Parca.dev does. I really like this story. I mean, I knew Parca, but it just basically opened it up in ways which were surprising to me. Even the screenshots - they’ve got them spot on. How it works, why it’s important… All that good stuff. Good job, Pixel Point. Good job. Yeah. Actually - funny thing… One of the things that actually kind of went the other way was we did the screenshots and they were like “Can we edit the screenshots to look prettier?” And we were like, “I don’t think that’s being genuine to our users, or potential users.” So what happened was they made the edits to the screenshots, and then we actually implemented those changes in Parca, so that it would actually look that way… And then we did real screenshots again. Oh, wow. 
So that was a cool collaboration that I think unless you ask about it, you don’t really find out… But aside from the website, they actually also influenced the way that Parca looks today. [16:05] So I’m really glad that you mentioned that, because when I looked at the new website and I’ve seen the Flame Graphs, my first thought was “Hang on… They didn’t look like this. Is this real? Has this actually happened?” I ran the update, checked the new Flame Graphs, and they’re exactly the same. And I remember that we talked about this around episode 33, and I was thinking, “Hm… That’s one thing which could do with some improving, because it’s a bit difficult to understand certain things…” Still, huge improvement over what we had before, but not as easy as it could be. And it was great to see… That’s one of the first things which I’ve noticed. The other thing which I noticed is your favorite Easter Egg. Can you tell us a bit about it? Yeah, this is awesome… I mean, it’s kind of a design gimmick, but I think it’s really cool… We talked already about Parca and the relationship to eBPF… eBPF has this B as a logo, and as you scroll through the website, the B kind of flies through the pictures and out of the website, which I think is – I love that detail. I’m disappointed if a website doesn’t have an Easter Egg. I think Chainguard spoiled it for us, with the hair on various people… I mean, now I’m looking for Easter Eggs. And I think Changelog.com needs an Easter Egg, too. If Jerod is listening to this, that’s okay. And if not, I’ll mention it in our next Kaizen. But Easter Eggs are so important. They just – you know, like, play, and having a bit of fun is so important… Because our day-to-day - it’s hard enough as it is, let’s be honest about it. So every little opportunity to have a bit of fun - I think we should seize it. Agree. That’s how I think of Easter Eggs. So I think that you can almost anticipate this question, because I think I asked it last time… Do you use profile Parca.dev? All the time. Nice. All the time, yes. Specifically, our demo cluster - so if you go to demo.parca.dev, that’s Parca profiling itself, but also the Parca Agent profiling itself; so it’s all super-meta… And actually, we have like a Prometheus setup that is monitoring it as well. So all of this we’re kind of using to do improvements all the time, and to figure out whether the improvements that we’re doing actually make sense and have the desired effect. Parca is the project, and then Polar Signals Cloud is the product. Right. So Parca, the open source project - using it, and seeing the improvements, and… Even for itself, it’s so important… But I have noticed this blog post about profiling Next.js apps with Parca, and that made me thing “Oh, hang on… There must be something more to it.” And I know that Parca.dev runs on Vercel, which is the Next.js company… And in that case, I was thinking you must be doing something with the website as well. I haven’t seen that in the demo; maybe I wasn’t paying enough attention… But the fact that it’s the live… Is it for the website itself as well? Okay, so parca.dev itself is 100% a static website that’s hosted on Vercel. So that we’re not profiling, though maybe we can partner with Vercel one day and profile all of the applications there. That’s not something that we’re doing today. But, actually, Polar Signals Cloud is Next.js, and that we’re profiling with Parca. And is that what the demo does? No, that’s our internal Polar Signals Cloud project. I’ve noticed that it runs on K3s. 
Is it Civo, by any chance? It’s Civo, yeah. Nice. Okay. I can see it. I can see it. It was really nice… Like, click on the Demo, and I was wanting to know more about it - where it runs, how it’s set up, what is being profiled… And I’m glad that you mentioned all those things, because now it just makes a lot more sense in my head. So the other thing which I - you know, just reading around and doing a bit of research, I’ve seen you mention that Matthias recently fixed some things in the Polar Signals IO pipeline, the continuous delivery pipeline. So six minutes from PR to dry run, diff in the cluster Yeah, this is pretty exciting. How does this relate to Parca – I don’t think this is Parca.dev, right? This is just for the agent, for the server… This is the Polar Signals Cloud product. Basically, we have a monorepo that contains all of Polar Signals Cloud, and within that repo we now have - from opening the PR, to doing a dry run apply to our Kubernetes cluster within six minutes. That includes building all of the container images, running previews of the UI… All of these things, everything in six minutes. So in six minutes you can basically try out your change in a staging-like environment, and it will tell you when you merged this pull request, this is the changes that we’re gonna be applying to the production Kubernetes cluster. Okay, okay. So I’m just trying in my head to imagine… How do you view if the changes are positive or negative? Do you look at the profiles? Do you have some – how does that work? …to see if the change that you’re rolling out is a good one. So in this case, it was just that we added much more aggressive caching in our builds… So here it was really just seeing whether the total runtime was less than what we had before. But that was very noticeable, because before it was like 26 minutes, and after doing some very aggressive caching, we got down to six minutes. Okay. So what runs the CI/CD? Is it the GitHub Actions, and what does the caching – It’s GitHub Actions, yeah. [23:45] We just do – so previously, we did most of our caching through Docker layers, but we ran into a couple of issues with that where I wasn’t… I don’t remember exactly anymore what the problem was, but there were some permission issues and we couldn’t figure out why that was happening, and the saving and loading of Docker caches was actually taking longer than running the builds… So we decided we’re not gonna do the actual build within the Docker files anymore. Because we have 100% statically-linked Go binaries - that’s all that Polar Signals Cloud is made up of - so we’re building the statically-linked binaries before, and then we just put those into containers. So basically, all we’re doing is we’re using the Go caching from GitHub Actions now. I see, I see. Okay. So I think you’re thinking about the BuildKit caching; so the BuildKit caching integration with the GitHub Actions cache is slower than actually running the commands… And I have seen this before, and there’s like a great story for another time behind that. Eric is someone that I work with, and he’s one of the BuildKit core maintainers, and he’s well aware of this, and he’s working towards a solution… But I know what you mean. I know that sometimes using the layer caching, the BuildKit layer caching with GitHub Actions can be slower, for sure. Okay, that makes sense. So where do you build those binaries, the statically-linked Go binaries? Those we build just through normal GitHub Actions. 
Beforehand we load the go mod cache from previous runs, and then we save the cache if it changes. Okay. Yeah, that makes sense. Okay, I can see that. So in six minutes you get your change out in a staging cluster, and then what happens afterwards? Then people review it. The cool thing is because we also run previews on Vercel, basically you can try out the entire pull request after six minutes. We’ve got the UIs that can either be pointed at different versions of the API, or even the production API… Because you know, most of the time it’s either or - a pull request that only does changes to the frontend, and in that case it’s actually nicer if you can just use production data immediately. And then if it’s approved and merged, then within the next six minutes it’s gonna be deployed. Nice. How many deploys do you do per day? Because this sounds very efficient. You must be doing a lot. Yeah, I mean - it depends on what people are working on… But we can easily do tens of deploys if we want to. That’s very nice. That makes a huge difference. Being able to make small changes, try them out in the final place where they will run, gaining that confidence and then just saying “Yup, this looks good to me”, and then a few minutes later - in this case several minutes later - you have it. Nice. Have you ever found yourself in a situation where you have to roll back? A change had unexpected consequences in production, that were not visible in staging? Absolutely. That’s where another really cool piece comes into play… One of my colleagues - I think you mentioned Matthias already… He built a really cool tool called Pyrra, which is for planning, but also maintaining and tracking SLOs. And all of our APIs have SLOs through Pyrra. So when we have a genuine user impact through a merge and we get notified within a couple of minutes, then we can easily roll back the change, and at worst, we have the time that it took to alert us, which is usually somewhere between five to ten minutes, if there’s a really drastic problem, and then we roll back. So turnaround, 16 to 20 minutes until we would have rolled back a severe change. [27:50] That sounds like a very nice setup. Very, very nice. I bet it must be so nice working with all this tooling, that you mostly built, and you understand how everything fits together, and you have like a very nice and efficient system of getting changes out… And if something – I don’t wanna say breaks; if something behaves unexpectedly, you can go back and you can see when it happens. So I know that you mentioned Pyrra last time that we talked. I don’t remember how much of it made it in the final conversation in the episode… But can you tell us a bit more about it, and how is it coming along since we last spoke? Because I remember you mentioning it, I was excited about it, but I didn’t have time to follow up on that. So I highly recommend actually that you do an episode with Matthias, because he’s much more qualified to talk about it than I am… Because I’m just a user, Matthias is the creator, and he just does everything around that project. And really, it’s not anything that we do at Polar Signals, it’s just something he’s also passionate about… So it made its way into the Polar Signals infrastructure, and it’s an amazing tool. I find myself not going to Prometheus Alert Manager, or even Prometheus. When I get a page, the first thing I do is I hop into Pyrra and see what my error budget burn rate is, and how severe those changes are actually affecting my users.
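To put rough, assumed numbers on the error budget idea: a 99.9% objective over 30 days leaves a budget of 0.1% failed requests. If a bad deploy suddenly fails 1% of requests, the budget is being burned at ten times the sustainable rate and would be exhausted in about three days, so a fast-burn alert should page immediately; a brief spike that only consumes a tiny fraction of the budget never should.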
So Pyrra itself is, like I said, a tool to manage SLOs, essentially, specifically for Prometheus setups. It doesn’t integrate into anything else. And that’s just because that’s the only tool that we use. But with Pyrra, you can kind of say “I have this gRPC API that I have metrics for in Prometheus, and I have this goal of three nines. 99.9, or 99.95.” And then Pyrra will automatically generate multi-window error burn rates. This is a very long term, and there’s a lot of theory behind this, why these alerts are better than a normal threshold of 0.1% error rate is happening right now… Because we don’t really care if that error rate happens once and just spikes for a very brief second. We actually care about “Are we going to fulfill our promise to our users over the next 30 days?” or within the last 30 days. So we really only want to get paged if we are in danger of violating that kind of contract that we have with our users. So multi-window error burn rates essentially calculate how quickly are we burning our error budget, and if we continue at this rate, are we going to run out of error budget? Essentially, when are we going to get to that point where we’re violating that contract we have with our users. That’s essentially – Pyrra allows you to efficiently manage those, but also it’s just much smarter than I am, for example, to generate those Prometheus alerts, because there’s a lot of math behind those that you really need to understand pretty deeply to do useful alerts. And Matthias has spent countless hours studying this, and really implementing something really unique with Pyrra. Alright, that’s a conversation that I’m really looking forward to. Thank you for mentioning it. I remember last time when, again, we just briefly talked about it, but the focus was something else - now that you mention it again, this comes up, and there’s a demo.pyrra.dev, that’s really interesting. It’s pyrra.dev on GitHub. This is something – you know, you have those projects that people get like ideas they are very excited about for a few months, and then they stop being as excited, and then it becomes abandonware… This doesn’t seem to be that. And I really like that a lot of interest is on this; you’re using it, you’re seeing the benefits of it longer-term, more than a few months… And I’m very curious to see where this goes. I think this has some great potential, and I like how Matthias is thinking about it, for sure, so that one’s coming up. Thank you, Frederic, for mentioning that. Amazing. So I’d like us to take this like a half-point, so I’d like us to do like a conversation cleanser… But I would like to talk about the orange farm. I’d like you to tell us more about that orange farm, Frederic… [laughter] What is this orange farm? Yeah, so just before KubeCon EU, we at Polar Signals did our very first in-person off-site. So for those who don’t know, Polar Signals was founded end of 2020, so the Covid pandemic was in full… Swing. Oh yes, full swing. Full swing, yeah. And so we’re a fully remote company, and up until that point, I even as the founder hadn’t seen a lot of the people who we ended up hiring at Polar Signals in-person. So we spent kind of the entire week before KubeCon together, kind of partly working, doing hackathons, and doing some strategic planning, but also just spending some quality time together.
And yeah, one of the team events that we did was we went to an orange farm in Valencia, because KubeCon EU was in Valencia, and Valencia is famous for their orange farms… And I love orange juice… Okay, I can see where this is going… I can see where it’s going… And we went to this really lovely orange farm just outside of Valencia. We booked kind of like a private tour on the farm, where they kind of taught us through the history of how modern-day oranges were even developed, and the personal history of their family on the orange farm, and so on. And yeah, we got to pick oranges right out of the tree, and they told us how to actually eat oranges, which apparently I’ve been doing wrong all my life… So how do you do it? Hang on, this is important… How should you eat oranges? Yeah, I didn’t know this, but essentially you take the orange with the stem upwards, the green part upwards, and you just kind of bite into it, and you kind of bite out the top part of the orange, you throw that part away, and then you can kind of squeeze the orange juice into your mouth and drink it. And then once you’ve squeezed most of it, you kind of just break it open and then you eat the flesh. Wow. You can actually do that without making a mess. It’s mind-blowing. Okay… Wow, that sounds like great tips. Thank you very much for that. And that sounds like a great team activity. I know it’s really hard to adjust to the new reality, because we always thought that’s short-term, things will come back to normal, we’ll be back in offices… But that hasn’t happened. I’m not seeing it. I think the world has moved on to a new model, where most of us are remote. There’s no office… I mean, who would have thought that this will become the norm, especially among the startups… And that has so many benefits. One of the drawbacks is that you don’t get to spend in-person time, quality time, with the people that you work with… Because it makes a huge difference. And activities like this just create those bonds which are so important to a good, healthy team, and I’m glad that you are taking every opportunity you can to do that. It’s so important to build a healthy team and a healthy company. Yeah, I couldn’t agree more. So there’s another huge thing that happened just before KubeCon EU… You introduced ArcticDB, and that’s what I would like us to talk about next. So what is ArcticDB and why does the world need something like ArcticDB? Yeah, this is something that I’ve been excited about building for a really long time, and I’ve kind of been thinking about this problem space for a really long time… So kind of in the name, it’s a new database, it’s an embedded database written in Go. Maybe listeners are familiar to BadgerDB, or LevelDB, or even kind of like RocksDB, where you’re using it as a library in your application to build something around. I guess SQLite is the most classic example of this. ArcticDB is a columnar database. As opposed to many other databases, where let’s say in SQLite, for example, typically the data is stored in rows, if you insert a new row into your SQLite database, physically, on-disk, all of the data that belongs to the same row are physically collocated. That’s a row-based database. And then a columnar database - we store all of the values of an entire column collocated. And that’s really useful when you wanna do analytics of the data. So if you wanna scan an entire column, and then say you wanna aggregate it, you wanna sum all of the values in there. 
Or you want to do comparisons of strings, or something like that; it just turns out that the way that computers work, that’s much more efficient to do than doing kind of random access on disk, and loading individual pieces off of this to do those things. [39:55] So that’s why we for Parca needed a columnar database. We kind of realized that pretty early on. And I have some kind of prior experience with the Prometheus TSDB, which if you squint a lot, is also a columnar database, but highly, highly optimized for the Prometheus use case. The one thing that is additionally kind of different in ArcticDB, that really there’s no other database out there that allows you to do something like this, which is - we have semi-flexible schemas. So you can define a schema, and you can say “These columns must always be there” if you insert a new row, but then we also have something that we call dynamic columns. And this is specifically useful for label-style data, similar to what Prometheus has. We wanna be able to attach labels to specific data points, so that we can then slice and dice data by random infrastructure labels. It can be the region of our data center, it can be the name of our data center, it can be our namespace in our Kubernetes cluster, it can be our pod, it can be our container, it can be our process ID. We as Polar Signals don’t want to dictate how you organize your infrastructure, and so we want to give you that flexibility to choose the labeling however you like it. That philosophy came from Prometheus, and we felt like that was one of the things that made Prometheus really successful… So it’s something that we felt like we had to replicate. But the nature of profiling data means that we have unique sets of labels much more often than Prometheus. And this is kind of the classic cardinality problem that people run into with Prometheus. And there’s nothing wrong with Prometheus. It’s designed with that. Prometheus is not meant for the undefined, unbound cardinality use cases. It can actually handle them surprisingly well, but it wasn’t designed in that way. Again, nothing wrong with that, but continuous profiling needed something different, because we don’t know what stack traces will occur, how often they will occur… That’s completely random. It depends on what the process is actually executing. So we needed this storage that actually internalizes that, and where we don’t pay a penalty for cardinality. Essentially, the way it’s done in ArcticDB is that every time we see a new label key, we dynamically create a new column that is then inserted into, and everything else is just treated as this column is just null, basically, for all other rows. So I’m really glad that you mentioned this, because this cardinality used to come up - I’m sure it still does - in the context of Prometheus… And I know that that had memory implications, as well as disk implications. It would basically use up more memory, more disk space to store the data. Does it affect ArcticDB in the same way when it comes to memory size and disk size? Does ArcticDB use more memory and more disk, if there are more labels? So there’s at least one fundamental point here that I think I need to point out, which is if you have more data, then you need to pay for it in some way. There’s no such thing as storing data for free. If we’re able to do that, then I think the fundamentals of computing change. Of course. Okay. [43:41] But the characteristics of paying for cardinality are dramatically different. 
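As a rough illustration of the dynamic-column idea, with made-up labels: if the first profile written carries namespace="payments" and pod="api-0", the table gets two label columns; when a later profile arrives with an additional region="eu-west-1" label, a region column is created on the fly and all earlier rows simply read as null in it. The cost is tied to the rows being written, not to how many distinct label combinations have ever been seen.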
In Prometheus we want to keep series of data alive for as long as possible, because that improves compression, and that’s ultimately one of the pieces that make Prometheus as efficient as it is. Again, that’s why I keep going back to - this is a good design for Prometheus, because it allows Prometheus to exploit several pieces of that equation to be able to serve things like the super-low latency queries like Prometheus does. In ArcticDB we’re not paying per series, we’re basically paying per row that we’re inserting. And the point is we’re kind of bringing the cost of inserting a row down so much that we don’t care anymore how many columns we have in that row. Basically, our cost is at the row level, as opposed to the cardinality level. I see, I see. Okay, that makes sense, because when we used to have lots and lots of labels on metrics in Prometheus, what used to happen when you would query them - you’d use a lot of memory, so things would take a lot longer. And if you wanted to have them optimized, you’d use more disk space, if I remember correctly, and memory… So I’m wondering, those ad-hoc queries which you don’t know what labels you’ll be querying for, so then you just add up – I mean, you don’t have to declare what the labels are, because I think it will also create different time series, if I remember correctly. This is all coming back; I haven’t used this in maybe a year now, give or take, six months, something like that. And the more labels you would have, the more time series you would get… Is that right? That’s right. Every unique combination of labels identifies a time series in Prometheus. That’s it. And then that is what was resulting in that excessive storage and excessive memory usage, like disk space and memory… And if Parca doesn’t do that, that’s amazing, because that means the cost of a label is much, much lower than it is in Prometheus. As you say, two different systems designed for specific use cases… But ArcticDB seems to have tackled head-on the problem of cardinality, which makes a huge difference. So does that mean that you can store the samples or the profiles that you get with arbitrary labels, like customer names, or service names, or things like that? Because that opens up the world to a host of new possibilities if you do that. Yeah, that’s absolutely right. And one of the first things that we started implementing once we had ArcticDB - we haven’t released this yet, but it’s something that I’ve talked about a couple of times already… It’s that we attach a trace ID to a stack trace. That way, what we can do is we can pull up all of the CPU time that was created by a single request, across services. Because we have a single trace ID that is piped through all of our services. Now, this only does work if you actually have application-level instrumentation for profiling as well, because the profiler needs to know about that trace ID somehow. But if you put in that work - and it’s not a lot of work; as a matter of fact, this can actually be done as kind of an OpenTelemetry wrapper, so you only need to install a library and then you have all of that information automatically. And then you can jump from a distributed trace to all of the profiling data associated with that request, or whatever your trace ID represents. So because you mentioned how Prometheus is being used not as it was designed and people abuse it.
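To make the trace-ID idea a bit more concrete: in Go, one common way for a profiler to learn about the current request is the standard runtime/pprof label mechanism. This is a generic sketch of that mechanism, not necessarily the wrapper described above, and the header name and handler are made up for illustration:

package main

import (
	"context"
	"net/http"
	"runtime/pprof"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Assume the trace ID was already extracted from an incoming header.
	traceID := r.Header.Get("X-Trace-Id") // header name is an assumption

	// Everything executed inside pprof.Do carries the label, so CPU samples
	// taken during this request can later be sliced by trace_id.
	pprof.Do(r.Context(), pprof.Labels("trace_id", traceID), func(ctx context.Context) {
		expensiveWork(ctx)
	})
}

func expensiveWork(ctx context.Context) {
	_ = ctx // stand-in for the real request handling
}

func main() {
	http.HandleFunc("/", handler)
	http.ListenAndServe(":8080", nil)
}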
Here’s a crazy idea, and you tell me if ArcticDB would be abused if it was used for this purpose - what would happen if ArcticDB would be used to store events with arbitrary labels? Would it work? That’s exactly the use case that it’s built for. Okay. Nice. Of course. But I think the possibilities are exciting. One of the first comments that we got when we open sourced ArcticDB was “Can we use this instead of Prometheus TSDB, to solve some of the cardinality issues?” and definitely this is a possibility, but also we need to take it with a grain of salt. ArcticDB - we open sourced it the moment it started working, and Prometheus TSDB has had seven years of performance optimizations. I think there is a possibility in the future to explore that path further, but it’s definitely gonna take a while to get any sort of similar performance characteristics. And like I said, Prometheus was specifically designed for those super-low-latency queries, so the fundamental setup does mean that Prometheus should always outperform ArcticDB… But ArcticDB I think can get pretty close because of the couple of tricks that we’re doing with the data. Hm. So let me see if I got this right. Prometheus was optimized for metrics, ArcticDB is optimized in build, for events. I don’t know if I would even call it events. It’s really just tagged data, whatever that means to you. I work with a couple of people who want to store super-high cardinality data that they’re grabbing from eBPF, and this is totally possible. There’s no existing type of data that could be used to describe this; it’s just super-high cardinality data that you want to search by a label-based system. One last question before we move from the ArcticDB topic - well, kind of… Is there a single process of ArcticDB – I mean, so first of all, it is embedded. That’s something that you mentioned, and that is important. Does it have any primitives when it comes to clustering? Does it understand the cluster of processes that have ArcticDB embedded? So that’s something that we’re building for Polar Signals Cloud right now, and it’s possible that we’ll open source this in the future. The reality is we’re a business, we need to at some point start making some money, right? Of course, of course. So it’s just something that we haven’t spent too much time on, but it’s definitely a path that we wanna keep open. And I think it’s inevitable that we’ll probably do this eventually. Like I said, it’s just something that we purely need in order to run Polar Signals Cloud today, so that’s why we’re building it, and then we’ll see what we’ll do potentially in the open source community. Before we talk about the Polar Signals Cloud, I would like to cover some of the shout-outs for ArcticDB, because I’ve seen that you’ve collaborated with a lot of people on this… So it’s not just you coming up with a crazy idea and seeing how it works… No, absolutely not. So you’ve mentioned some amazing people… The one which I would like to start with is Tyler Neely. I didn’t even know about him until you mentioned him. He’s been building Rust databases since 2014 - Sled, and Rio - so he has a lot of experience. I was watching one of his FOSDEM talks from 2020… He’s smart. My God… Tyler is… Like genius smart, sort of… Yes. So tell us more about the people that you collaborated on ArcticDB, at least the ideas. Yeah. Let’s start with Tyler, actually. 
So I’ve known Tyler for six years, seven years almost… He actually rented a desk from us at CoreOS times, in the CoreOS Berlin office… And he was already – he had some history at Mesosphere, working on Zookeeper as well… And yeah, just any crazy distributed system or high-performance databases that you can think of, he’s had his hands in somehow. [52:12] I’m also friends with Tyler, I like to go for a coffee with him or something, and we just have common interests. I was talking to him that we’re thinking about building this new database, with these kinds of characteristics, and I’m not sure about our model for transactions. So we just spent kind of several hours together, discussing various isolation and consistency mechanisms… And ultimately, what we ended up implementing is 100% his idea. Wow… So like I said - sure, we might have written the code, but Tyler was the person who came up with the mechanism. So yeah, huge shout-out to him for that. I guess the next ones we definitely need to mention are Paul Dix and Andrew Lamb from InfluxDB. Basically, they’re building something very similar in Rust. Actually, they’ve been building it for much longer than we have… [laughs] So they were kind of vital, and they were very generous in sharing their experience of what they’re building, which is InfluxDB IOx. It’s kind of their next-generation columnar database that’s going to, I think, back all of the Influx cloud product. And they essentially have something super-similar with the dynamic columns, and they’re also building on top of Apache Arrow and Apache Parquet… So a lot of the foundational pieces are extremely similar, and like I said, they were super-generous in sharing their experience, because we definitely would not be here this soon, this quickly, in this kind of quality if they hadn’t shared all of that experience. Yeah. This is it, right? This is the secret to great teams and great products, and great open source projects - great people coming together over coffee, or a meal, sharing ideas, and then the best ones win, always. And the bad ones eventually go away. There’s lots, lots of bad ideas, and there’s a lot of fun to be had… So they are important, but it’s always amazing people coming together and creating something amazing, and then putting it out there, and see what happens. I love that. You also mention Achille Roussel from Segment… Oh, yeah. That was another shout-out. And Julien Pivotto from the Prometheus team. Yeah, so I’ve never actually spoken to him in-person, but I’ve spoken to other people at Segment. I think it’s pronounced Achille. So Achille is an incredible engineer. He’s put together most of the Parquet Go library that we’re using under the hood. And it was kind of a collaboration… In January I was doing research of which Parquet libraries are out there, and I wanna say I might have tweeted it, or something like that, and then Achille was like “I’ve got something for you.” At that point the library was actually still closed source. Just Segment was working on it by themselves. Then they kind of open sourced it, and we’ve had a super-tight collaboration. I wanna say I’ve done 20 pull requests myself against this library by now, and they’re just – it’s a very, very fine piece of engineering. Huge shout-out. The APIs are just super-thought-through. The performance is just incredible. ArcticDB would be nowhere if it wasn’t for that work. Listeners, don’t take away anything else from this conversation; just check out that library. I’m a huge fan. Right.
We’ll put it in the show notes, because that sounds like a very important one. Okay, okay. So for those that stuck with us to this point, we need to talk about the Polar Signals Cloud, because I’m sure that you want to hear about it. So what is the Polar Signals Cloud? Tell us about it. In essence, Polar Signals Cloud is hosted Parca. [laughs] [56:09] Basically, it’s kind of the classic SaaS model. You wanna reap all of the benefits of continuous profiling, you understand that it’s useful, but you don’t wanna have to maintain the backend system, the APIs, uptimes, storage efficiency and all of that… Running a distributed database… All of those things. So basically, the entire experience of Polar Signals Cloud is you just deploy the Parca Agent on your Kubernetes cluster, you point it at Polar Signals Cloud, and you’re automatically profiling your entire infrastructure, just like that. There’s nothing else that you need to do. So yeah, that’s the product that we’re currently working on. It’s not generally available yet; we’re trialing it with a couple of early beta customers… But yeah, if there are any listeners that think that they’d be a particularly good case study for us, please reach out. You can sign up on our website; we’ll get an email that you’ve signed up, and we can chat and figure out if it makes sense. Yeah. I really like that simplicity of just setting up the Agent, and you have it all. I remember from when I used to set up Prometheus and Grafana on Kubernetes, and managing them, the upgrades and all that… It’s not difficult, but it’s an extra thing that you have to do. And sometimes there’s higher-value things that you may want to do instead. Different use cases, different setups… I remember when we made the switch, and what a big difference that made. I remember when we set up the Honeycomb agent, because you can’t install Honeycomb, the UI and the server - you just use it as a service… And I really enjoyed that experience, I have to say. Parca - I remember when I set everything up, and I was thinking, “Huh, I wish there was just the agent.” Episode 33, remember? And we had the server, and the UI, we talked about memory, we talked about a bunch of things… And now you have it, six months later, you’re trialing it… It’s amazing to see that. My most important take-away from our conversations, Frederic - I usually ask the guests, but this time I’ll go first, because I think it’s so important to mention this… It’s how much I enjoy our interactions at like a very basic level, person-to-person. I really enjoy seeing the journey that you’re on, yourself, with the company, with the people that work with you, and get excited about your ideas, and they see things the way you see things. And it’s been amazing to watch that as a bystander. Every six months, or every few months, actually - it hasn’t been that long - when I check in, there’s always something new and exciting that you have out there. Shipping ArcticDB was such a huge achievement. Seeing you at KubeCon EU, the excitement that was generated - it was great to see, and you’re still such a small team. So that story, from a human, one-to-one, to a team, to a product, to a company - it’s been great to watch. And great people do great things, I don’t know. It may sound a bit cliché, but it is what it is. There’s no secrets. If you truly believe, if you’re aligned, and everything, like what you say, and what you do, and what you think, they’re all the same, the sky is the limit. It’s been great seeing that come together.
And Polar Signals Cloud - I’m really looking forward to trying it out, because I’ve seen what the world looked like before, and I wanna see what it looks like after, and I have a good feeling about this… So let’s see how well it works in practice. I have no doubts, but I still wanna see it. [59:54] So what about your key take-away for the audience? You mentioned about the people a little bit, but ArcticDB, and, we can say the key take-away, but maybe first, what are you thinking in the next six months? Where are you going with the Polar Signals Cloud, what do you expect to happen next…? Just a few things that you can share. Yeah, we want to GA the product. We wanna make it as accessible to anyone who wants to, as much as we can. Like you said, it’ll really only take deploying the agent and you’re automatically profiling your entire infrastructure. That said, we wanna make sure – because profiling is one of those… It’s kind of like with any other data problem. If people don’t trust the data, that’s a huge problem. People lose confidence in a product very quickly when that happens. So we wanna be careful that when we do make the product generally available, that it is very solid and that people can rely, depend on it, and trust it, most importantly. So yeah, that’s kind of our mission for all of this year, let’s say. And then, after that we’ll see; there’s definitely a lot of – there’s so much opportunity to build things on top of continuous profiling. There are very exciting things that you can do with this data, that isn’t just as a human analyzing this data. But yeah, just kind of going back to what you were saying - I don’t think I realized it as much before going into this call, but because you and I have been kind of checking in every six months or so, it’s just mind-blowing to kind of check in on the growth of the company, of the people, of the project… Because I’m very close to all of it, so I don’t necessarily see – I see small changes, but if I then look back six months and think about all of the things that we achieved, I’m just blown away. Yeah, I couldn’t decide whether that is my top thing, or something that we kept on bringing up about ArcticDB is kind of how important community is, and how important is leveraging your network, but also… I think whenever I talk about that, I also have to talk about sharing your network. That’s the most powerful thing you can possibly do to someone else - give people access to your network. It’ll put their careers or their projects or whatever it is on hyperspeed. I think that’s something I learned early on in my career, and in both directions it’s helped me tremendously, but I also try to give it onwards as much as I can. That’s a good one, that’s a good one. Well, Frederic, I will definitely check in again in six months’ time, but what I would like to do is keep in closer contact, because I’m seeing some of the amazing things that you’re building, and six months - it’s almost like we’re not doing justice to all the amazing things that come out of Polar Signals, the connections that you make, the ideas that you generate… And I think I would like to share a bit more of that, because there’s a lot of amazing stuff going on. And I think time - that’s the only limit. There’s only so many hours in the day, and there’s only your attention and your mindshare is limited, but definitely worth it. So thank you for joining me here today, and thank you for sharing all the wonderful things, and I’m looking forward to what you do next. 
It’ll be great, I’m sure, but… Thank you, Frederic. Thank you for having me… Again. Our transcripts are open source on GitHub. Improvements are welcome. 💚
https://changelog.com/shipit/57
CC-MAIN-2022-40
refinedweb
9,900
68.1
Can you use the hardware serial (on pins 0 and 1) for the GPS?

Quote from: mem on Dec 07, 2012, 07:30 am: "Can you use the hardware serial (on pins 0 and 1) for the GPS?"

How does one do this?

void setup() {
  Serial.begin(57600); // gps TX,RX on 0,1
}

void loop() {
  if (Serial.available()) {
    Serial.print(Serial.read());
  }
}

#include <SoftwareSerial.h>

SoftwareSerial gps(3, 2); // gps TX,RX on 2,3

void setup() {
  Serial.begin(57600); // serial monitor
  gps.begin(57600);    // serial to GPS
}

void loop() {
  gps.listen(); // doesn't seem to matter if this is here or not
  if (gps.available()) Serial.print((char)gps.read());
  if (Serial.available()) gps.print((char)Serial.read());
}

Then this should work... except that it doesn't.
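One detail in the first sketch, independent of the wiring question: Serial.read() returns an int, so Serial.print(Serial.read()) prints the character codes as numbers. Echoing readable GPS text needs Serial.write() or a cast, for example:

void loop() {
  if (Serial.available()) {
    Serial.write(Serial.read()); // or: Serial.print((char)Serial.read());
  }
}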
http://forum.arduino.cc/index.php?topic=137373.0;prev_next=prev
CC-MAIN-2015-22
refinedweb
163
63.25
OK, so I'm experiencing a highly frustrating set of circumstances. I am making an app that will allow you to type text and have it spoken by SAPI. That part is already done and is a windows forms app. You have the text box for typing, a control to adjust the rate, a speak button, and a menu bar that's not fully functional yet, but I have got the Exit option working. The problem comes into play when I create a function to get all the voices you have installed, grab their names and add those to the combobox. The code is valid, that's not the issue, the issue is I can't call the damn function. Now, I'm very new to C# and visual studio, which sort of presents its own unique little problems, in that getting an app up and running is stupidly easy, it's more of wiring stuff up than coding. I think so far I put like 3 lines of my own code into it. Anyway, I'm not exactly sure what the issue here is. Perhaps it's me putting the function in the wrong place, but it won't let me put it some places. Anyway, fuck sakes, if I put it where it is happy to reside, which is like outside the main class but inside the namespace, I can't call it. If I type its name, Intellisense will not pick it up, which is usually a good indicator that you're doing something wrong. If I complete it manually, it doesn't work, saying something about thinking I'm trying to define a new function rather than call one, even though I add the semicolon after the parens. Also, there are only some places in the code where intellisense picks up void, so I put it where it will, because as I said, if Intellisense doesn't try to complete something you typed, it's a good indication something is wrong. So no matter where I try to call the function from, or no matter where I place the function, no matter whether I make it static or public, it wants to throw a fit. I don't want to create a static class just to do something I'll do once per launch of the app. I also tried removing the function from the form.cs and putting it in main.cs and making it public, but I couldn't even call it from somewhere else in main.cs, so if I can't do that, there's no way I'm gonna be able to call it in form.cs
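For what it's worth, the usual shape of the fix - assuming a standard Windows Forms project and the System.Speech wrapper around SAPI; the PopulateVoices name and voicesComboBox control are made up for illustration - is to declare the helper as a method inside the Form class and call it from the constructor. C# doesn't allow free-standing functions directly in a namespace, which is why IntelliSense refuses them there.

using System.Speech.Synthesis; // add a reference to System.Speech.dll
using System.Windows.Forms;

public partial class MainForm : Form
{
    public MainForm()
    {
        InitializeComponent();
        PopulateVoices(); // call it here, after the controls exist
    }

    // An instance method inside the Form class, so it can reach the combobox.
    private void PopulateVoices()
    {
        using (var synth = new SpeechSynthesizer())
        {
            foreach (InstalledVoice voice in synth.GetInstalledVoices())
            {
                voicesComboBox.Items.Add(voice.VoiceInfo.Name);
            }
        }
    }
}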
https://forum.audiogames.net/post/359173/
CC-MAIN-2019-35
refinedweb
444
75.44
public class Test {
    public void doSomething(String neverUsed) { // Does not show being unused
        String neverUsedEither; // Shows as being unused, ok
    }
}
(*I think this is about parameters and not private method hints*) Even though there are good reasons to minimize the number of parameters in a method's design, there are a lot of frameworks that include extra params by default. When implementing a method for an API (either via Interface or dynamic calls), you're forced to add those parameters to keep compatibility. For some languages that's mandatory (meaning the linker won't link your method if you don't include all required parameters), for some others it's optional, but still you want to include all parameters as a reference. I don't think the hint should be removed, just made more fine-grained. Most of the time, local unused variables are always trash, while unused method parameters are intended to be there. So it would be nice to have the option to select whether or not we want those warnings. Just my 2 cents :) First, let me give you a few examples where it is often inconvenient to mark the non-private method parameters unused: 1. a method that overrides/implements another method: there may be other overriders/implementors that use the specific parameter, and in such a case the highlighting is simply noise that cannot be resolved in the vast majority of cases. This relates to the usecase described in #c2, but it seems to me that for that usecase it would be much more beneficial to have a warning that would say "no known overrider/implementor is using this parameter" (but that is a feature request, not a defect). 2. a method is overridden and the overrider is using the parameter. E.g.: public void commandPerformed(Command cmd) { //the subclass may override this method to be informed about an event } Again, the highlighting in this case is only noise. Given this, I am not in favor of changing the default behavior. I am not against introducing an option, although that does bring several problems to solve: -where to place such an option? Might be possible to (ab-)use the Tools/Options/Editors/Hint tab for that, even though that brings a bit of trouble itself (e.g. the "Show As" combo normally does not contain "Unused" item, but it probably would have to for these semi-hints). -what are all the required options? Should it be possible to disable the highlighting for overriding methods, overridable methods, public/protected/package private methods? -it should probably be possible to suppress the highlighting using @SuppressWarnings - not unsolvable, but I don't think there is a precedent for that in the highlighting area (as opposed to the warning area). Updated milestone since 7.0.1 has already been released. I just found some bugs in my code after I loaded it into Eclipse, and it told me there were unused parameters (yes, not just forgot to use a parameter but used the wrong one somewhere). Then I got back to NB 8.0 and tried to enable the same warning, only to find out that it's a long-standing feature request. Is there any progress on that? There are some of us out there who consider unused parameters in methods (public or private) a code smell. This feature is a must have, it is implemented for JS but not for Java. It is not about the caller of the function with the unused param, it is about the unused param, that is never used inside the function, that's why it is unused.
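A small Java illustration of case 1 above (type names invented for the example): the implementing class has to declare every parameter the interface requires, even if its particular implementation never touches one of them, so an unconditional unused-parameter hint would flag code the author cannot reasonably change.

class Command {}
class Context {}

interface CommandListener {
    void commandPerformed(Command cmd, Context ctx);
}

class LoggingListener implements CommandListener {
    @Override
    public void commandPerformed(Command cmd, Context ctx) {
        // ctx is never used here, but removing it would break the interface contract.
        System.out.println("Executed: " + cmd);
    }
}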
https://netbeans.org/bugzilla/show_bug.cgi?id=199451
CC-MAIN-2018-05
refinedweb
1,175
61.06
Outjection ignored - very basic example not runningNicole Schweighardt Sep 11, 2008 8:34 PM Hello, I am new to Seam and tried to run a very simple example but it does not work. After searching in google and in forums too many hours I hope anybody here can help me. I use JBoss AS 4.2.2GA, Seam 2.0.2. I work with Eclipse with JBoss Tools. I created a New Seam Webproject, an example project from Jboss Tools. That works fine and the example is running. Then I only wanted to test outjecting an variable from a session bean. But that does not work. I tried 3 versions: 1. Version This is my basicBean import javax.ejb.Remove; import javax.ejb.Stateful; import org.jboss.seam.annotations.Destroy; import org.jboss.seam.annotations.In; import org.jboss.seam.annotations.Logger; import org.jboss.seam.annotations.Name; import org.jboss.seam.annotations.Out; import org.jboss.seam.faces.FacesMessages; import org.jboss.seam.log.Log; @Stateful @Name("basicBean") public class BasicBean implements BasicBeanLocal { @Logger private Log log; @In FacesMessages facesMessages; public void basicBean() { //implement your business logic here log.info("basicBean.basicBean() action called"); facesMessages.add("basicBean"); } @Out public String foo = "That is a test"; @Remove @Destroy public void destroy() {} } and this the xhtml-page: ... <h:messages <rich:panel> <f:facetWelcome!</f:facet> <p> <h:outputText </p> </rich:panel> ... But foo (the Text) isn´t displayed. 2. Version I added a getter-method like someone in a forum told me: @Out public String foo = "That is a test"; public String getFoo() { return foo; } The xhtml-page is still like in Version 1: <h:outputText But foo (the Text) isn´t displayed. 3. Version The bean is the same as in version 2, with a getter-method. In the xhtml-page I call the bean directly: <h:outputText The result is javax.faces.FacesException: javax.el.PropertyNotFoundException: /home.xhtml @18,51 value="Test: #{basicBean.foo}": Property 'foo' not found on type org.javassist.tmp.java.lang.Object_$$_javassist_1 Can anybody tell me what I am doing wrong? Thank you very much. NSchweig 1. Re: Outjection ignored - very basic example not runningMichael Courcy Sep 11, 2008 9:01 PM (in response to Nicole Schweighardt) Outjection occurs after you have invoked one of the method of the bean. As you invoke none, no outjection happen. 2. Re: Outjection ignored - very basic example not runningNicole Schweighardt Sep 11, 2008 10:27 PM (in response to Nicole Schweighardt) Hi, and thanks for your answer. I now invoked the getFoo-method. <h:outputText But nothing happens. Then I wrote another method. ... @Out public String foo = "That is a test"; public String getFoo() { return foo; } public void testAction(){ ... } ... and invoked it: <h:outputText I started JBoss in Debug-mode but the method isn´t invoked by seam. Any ideas? Thank you NSchweig 3. Re: Outjection ignored - very basic example not runningTony Herstell Sep 12, 2008 12:51 AM (in response to Nicole Schweighardt) basicBean.getFoo() or basicBean.foo perhaps... Check your server startup and see if basicBean is being deployed. 4. Re: Outjection ignored - very basic example not runningHans Schlegel Sep 12, 2008 10:30 AM (in response to Nicole Schweighardt) Hi Nicole You have to check the spec - but you try to outject a non seam-component... Please try the following... @Out(required = false, scope = ScopeType.SESSION) public String foo = "That is a test"; Best regards hans 5. 
Re: Outjection ignored - very basic example not running Adrien Orsier Sep 12, 2008 11:25 AM (in response to Nicole Schweighardt) Don't forget to declare your foo's getter and setter in your BasicBeanLocal interface, it's needed since you've made your component an EJB. Since it's not needed, you can simply remove your @Stateful annotation and not implement any interface. Still, you'll need a public getter and setter for your foo property, and you'll need to reference it like that: #{basicBean.foo} Though, this is not an outjection. Thing is outjection isn't used like that. You often use it to outject a variable you need to use after you've done some business. Something like: @Name("basicBean") public class BasicBean { @Out private String foo = "That is a test"; @In FacesMessages facesMessages; public void testMethod() { foo = "test"; } } And in a simple page: <s:button This way it will work and demonstrate how it's used. 6. Re: Outjection ignored - very basic example not running Nicole Schweighardt Sep 15, 2008 12:41 PM (in response to Nicole Schweighardt) Hello everyone, thanks for your help! I think I'll read something about Seam concepts before I start any other try with outjection.. For the moment I only need to display some data with radiobuttons. I know how it works with jsf and managed backing beans; now I want to try the same with Seam. If I understood you right, I don't need any out- or injection. I got an example where displaying works: Entity: @Entity @Name("product") public class Product implements Serializable{ private Integer id; private String name; private String price; private String category; private String shortDescription; private Integer numberOf; ... The Entity has getters and setters for the attributes. Bean: @Name("shopWebSiteBean") public @Stateful class ShopWebSiteBean implements ShopWebSite { @PersistenceContext(unitName="ShopSeam") EntityManager em; private List<Product> games; private Product chosenGame; public List<Product> getGames(){ Query q = em.createQuery("select p from Product p where p.category = :n"); q.setParameter("n","Spiele-Software"); games = (List<Product>)q.getResultList(); return games; } public void setGames(List<Product> games) { this.games = games; } public Product getChosenGame() { return chosenGame; } public void setChosenGame(Product chosenGame) { this.chosenGame = chosenGame; } @Destroy @Remove public void destroy(){ } } JSF: <h:form> <h:selectOneRadio <s:selectItems <s:convertEntity/> </h:selectOneRadio> <h:commandButton... ... </h:form> I know it from jsf that if you click the h:commandButton the selected radio button value is written in chosenGame (the setter-method is invoked). But that does not work here. setChosenGame is not invoked if I click the button. It is the same if I try <h:selectOneRadio or <h:selectOneRadio Could anyone help me again? Thank you NSchweig
https://developer.jboss.org/thread/183964
CC-MAIN-2018-17
refinedweb
1,015
59.4
csQuaternion Class Reference [Geometry utilities]

Class for a quaternion. More...

#include <csgeom/quaternion.h>

Detailed Description

Class for a quaternion. Definition at line 186 of file quaternion.h.

Member Function Documentation

- Return euclidian inner-product (dot).
- Get quaternion exp.
- Get a quaternion as axis-angle representation.
- Get the conjugate quaternion.
- Multiply this quaternion by another.
- Multiply by scalar.
- Add quaternion to this one.
- Subtract quaternion from this one.
- Divide by scalar.
- Rotate vector by quaternion.
- Set the components.
- Set a quaternion using axis-angle representation.

Friends And Related Function Documentation

- Multiply two quaternions, Grassmann product.
- Multiply by scalar.
- Add two quaternions.
- Subtract two quaternions.
- Get the negative quaternion (unary minus).
- Divide by scalar.

The documentation for this class was generated from the following file: csgeom/quaternion.h
http://www.crystalspace3d.org/docs/online/api-1.4/classcsQuaternion.html
CC-MAIN-2016-22
refinedweb
293
56.93
Setting up NUnitThere are two parts to NUnit setup for use in Visual Studio. The first part is to install the framework; we can do this in two ways. 1. Download the 2.6.2 (latest at the time of writing) msi installer from. Ensure Visual Studio is not running while the installer is running. This installs NUnit globally along with the NUnit test runner. However the test runner needs .NET 3.5 to run. 2. Another way to include NUnit in your project is to download it using the following Nuget Package Manager command PM> install-package NUnit The second part is to set up the NUnit Test Adapter so that Visual Studio recognizes the Test Cases in our project and allows us to Run them from the Test Explorer. Setting up the NUnit Test Adapter1. To set up the adapter, go to Visual Studio Tools->Extensions and Updates. 2. Select Online in the left hand pane, search for NUnit and install the NUnit Test Adapter. 3. It needs a Visual Studio restart With that we are ready to write Test Cases using NUnit. So let’s write a sample Test Case. Writing Test Cases for ASP.NET MVC AppsLet’s create a new MVC4 project. The NUnit Test Adapter doesn’t register itself to be used with the ‘New Project Template’ dialog, hence while creating the project, we can’t select NUnit. So we’ll not create a Test Project here. As shown below, if we select ‘Create a unit Test project’, NUnit is not registered as a Test Framework, so we’ll uncheck the ‘Create a unit Test Project’ and continue. Once the MVC Project has been created, we add a simple Class Library project to the Solution and append the word ‘Test’ at the end to indicate it’s a Test project. Next we add references to the Project using the Nuget Package Manager. PM> install-package NUnit Make sure you select the Class library in the Package Manager console. Adding your first Test case1. Rename the default Class1.cs to HomeControllerTest. 2. Next add a reference to System.Web.Mvc in the Test Project 3. Add a new Test case to test whether the view returned by the Home controller’s Index action has the name “Index”. [TestFixture] public class HomeControllerTest { [Test] public void IndexActionReturnsIndexView() { string expected = "Index"; HomeController controller = new HomeController(); var result = controller.Index() as ViewResult; Assert.AreEqual(expected, result.ViewName); } } 4. Build the project 5. Open Test Explorer from the ‘Test’ Menu. 6. The Test Explorer will scan all the projects for Test cases, you’ll see a green progress bar spinning on top. Once it completes, it will show the Test Case you just created. 7. Run the Test case. It will fail. This is because the View name is not set and thus empty. 8. Go to the HomeController’s Index method and change the return statement to explicitly specify the ViewName. public ActionResult Index() { ViewBag.Message = "Modify this template to jump-start your ASP.NET MVC application."; return View("Index"); } 9. Build and Run the Test again. This time the test will succeed. ConclusionWe saw how to setup NUnit and use the integration points in Visual Studio to integrate the Test case execution with Visual Studio’s tooling. Tweet
https://www.devcurry.com/2013/02/getting-started-with-nunit-in-aspnet-mvc.html
CC-MAIN-2020-45
refinedweb
537
76.93
How to recording voice and stream process Can I using pythonista to recording voice from mic,and stream it to a network server? (or process real time local). You can using objcbindings. Probably outdated example was made by @omz some time ago. You can try this gist Record Audio Pythonista example. Thanks. But this code just record to a file. I need to stream to a server or using voice data to draw some graphics real time. According to Apple documentation: NSUrl object represents a URL that can potentially contain the location of a resource on a remote server, the path of a local file on disk, or even an arbitrary piece of encoded data. Probably there is some way to bend output to server, or just read file and route stream by yourself. You would need to have a streaming format in mind. What is it you want to achieve? Do you want the end result to be a file you can play back? Or do you want to somehow livestream to other people? NSUrl simply contains a url, a network address. it is basically just a fancy string. If you just want a file in the end, then you would basicslly use omz's example, then upload the resulting file to a server. For some sort of livestrsm, you will need to research formats/protocols and figure how to transform audio units into the right format. I wrote some code a while back showing how to draw a waveform based on real time audio input. I have not updated for py3, so run with the py2 interpreter. If I were to do this again, I would experiment with Audio Units, which calls a block on small chunks of waveform. As is, this basically writes to a file every second, reads it back in and displays it. I think I have overlapping recorders to minimize latency. Thanks. I try it in pythonista 3. (need add one line on code) def __init__(self,dofft=False): scene.Scene.__init__(self) # <<<<< add this line on pythonista 3 But the speed it's not real time. This post is deleted!last edited by - JerrySmith1021 I'm not sure whether you can record voice via Pythonista. As I know, to record streams playing on the computer, you can use a third party program. AudFree Audio capture for windows is the best recorder can help you to record any sound playing on your PC and save it as MP3, then you will able to play them on other players freely. This post is deleted!last edited by - Miles_Gonzalez This post is deleted!last edited by - Bella Gorden This post is deleted!last edited by
https://forum.omz-software.com/topic/3253/how-to-recording-voice-and-stream-process
Introduction

The Task class is a wrapper class for starting Bukkit scheduler tasks (synchronized only). Tasks can be used to execute work after a certain tick delay, or repeatedly at a certain tick interval, and they can easily be started and stopped.

Usage

Tasks are abstract implementations: you need to make a new class that extends Task and implement the run() method, in which you execute your logic. In almost all cases, you will want to do something like this:

    Task myTask = new Task(MyPlugin.plugin) {
        @Override
        public void run() {
            Bukkit.broadcastMessage("Test after 10 ticks!");
        }
    }.start(10);

As you can see, all start() and stop() methods return the 'this' Task instance, allowing calls to be chained and the Task to be stored in a single line. For a task in your own plugin, this is the general pattern:

    public class MyPlugin extends JavaPlugin {
        private Task myTask;

        @Override
        public void onEnable() {
            myTask = new Task(this) {
                @Override
                public void run() {
                    Bukkit.broadcastMessage("This message is displayed about every second");
                }
            }.start(20, 20);
        }

        @Override
        public void onDisable() {
            Task.stop(myTask);
            myTask = null;
        }
    }

It is possible to use myTask.stop() instead, but in the event that myTask is not initialized (and is null), Task.stop is safer, as it includes a null check. Who knows, perhaps there was an error while enabling, or for some other reason your task ended up null sooner.

Start types

You can start a task in multiple ways:

- Task.start() - runs the task a single time at the end of the current tick
- Task.start(10) - runs the task a single time in 10 ticks
- Task.start(10, 20) - runs the task every 20 ticks, with a delay of 10 ticks before it starts repeating

Next-tick tasks

Since you have to start these tasks so often, there is an alternative to using the Task class for executing next-tick tasks: CommonUtil.nextTick. It lets you schedule a Runnable to execute at the end of the current tick, similar to how Task.start() works. Example:

    CommonUtil.nextTick(new Runnable() {
        @Override
        public void run() {
            Bukkit.broadcastMessage("Next tick message!");
        }
    });

You do not have to pass in a Plugin instance, and it is also impossible to cancel the task once scheduled. When BKCommonLib disables, all remaining tasks are discarded; only use this to schedule tasks while the server is enabling or running, not when it is disabling.

Tips for passing in parameters

You may want to use data obtained in an event or function inside the scheduled task, so how do you pass it in easily? Making your own Task wrapper class with all these variables stored is possible, but also very inefficient. Java has a nice trick for that: final variables:

    @EventHandler
    public void onPlayerJoin(PlayerJoinEvent event) {
        final Player player = event.getPlayer();
        new Task(plugin) {
            @Override
            public void run() {
                player.sendMessage("Welcome to our server!");
            }
        }.start(40);
    }

Be very careful with this, as it IS prone to memory leaks. As a rule of thumb, don't use final variables to pass objects into repeating tasks, as those variables then never get garbage collected. Even better is to capture primitive types (or a plain String) only - a player name instead of a Player, for example (a sketch of this approach follows at the end of this page).

Facts

- Date created: Aug 24, 2013
- Last updated: Aug 24, 2013
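As a follow-up to the tip above about capturing primitives instead of live objects, here is a minimal sketch (an added illustration, not part of the original page): it stores only the player's name and resolves the Player again on each run, so the repeating task does not pin the Player object in memory. Imports are omitted, as in the page's other examples, and the interval is just an example value.

    @EventHandler
    public void onPlayerJoin(PlayerJoinEvent event) {
        // Capture only the name (a String), not the Player object itself.
        final String playerName = event.getPlayer().getName();

        new Task(plugin) {
            @Override
            public void run() {
                // Look the player up again on each run; getPlayerExact
                // returns null once the player has logged off.
                Player player = Bukkit.getPlayerExact(playerName);
                if (player != null) {
                    player.sendMessage("Thanks for staying with us!");
                }
            }
        }.start(20 * 60, 20 * 60); // roughly once a minute, as an example
    }

Stopping the task once the player leaves (for example from a PlayerQuitEvent handler) is still your responsibility; otherwise the empty lookups keep running.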
http://dev.bukkit.org/bukkit-plugins/bkcommonlib/pages/services/task/
Can anyone point me to code where users can change their own passwords in Django?

Django comes with a user authentication system. It handles user accounts, groups, permissions and cookie-based user sessions; see the "Changing passwords" section of the auth documentation for how this works.

How to change Django passwords:

Navigate to your project, where the manage.py file lies, open a Django shell, and set the password from there:

    $ python manage.py shell

    from django.contrib.auth.models import User
    u = User.objects.get(username__exact='john')
    u.set_password('new password')
    u.save()

You can also use the simple manage.py command (just enter the new password twice):

    manage.py changepassword *username*

This is from the "Changing passwords" section in the docs.

If you have django.contrib.admin in your INSTALLED_APPS, you can visit example.com/path-to-admin/password_change/, which shows a form to confirm your old password and enter the new password twice.
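Since the question is about letting users change their own password from within the site (rather than via the shell or the admin), here is a minimal sketch of a self-service view built on Django's built-in PasswordChangeForm. The URL name and template path are assumptions used only for illustration.

    # views.py - sketch of a self-service "change my password" view.
    from django.contrib.auth import update_session_auth_hash
    from django.contrib.auth.decorators import login_required
    from django.contrib.auth.forms import PasswordChangeForm
    from django.shortcuts import redirect, render

    @login_required
    def change_password(request):
        if request.method == 'POST':
            form = PasswordChangeForm(user=request.user, data=request.POST)
            if form.is_valid():
                form.save()  # hashes and stores the new password
                # keep the user logged in afterwards (newer Django versions)
                update_session_auth_hash(request, form.user)
                return redirect('password_change_done')  # assumed URL name
        else:
            form = PasswordChangeForm(user=request.user)
        # 'registration/change_password.html' is a placeholder template path
        return render(request, 'registration/change_password.html', {'form': form})

Recent Django versions also ship ready-made password-change views in django.contrib.auth, so wiring those URLs up is often enough and a custom view like this is optional.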
https://codedump.io/share/rQ6PxTDHS5PN/1/how-to-allow-users-to-change-their-own-passwords-in-django
Here are some samples to help get a better idea of Python's syntax:

Hello World (the traditional first program)

    print 'Hello world!'     # Python 2 syntax
    # or
    print('Hello world!')    # Python 3 syntax

String formatting

    name = 'Monty'
    print('Hello, %s' % name)          # string interpolation
    print('Hello, {}'.format(name))    # string formatting

Defining a function

    def add_one(x):
        return x + 1

Testing variable equality

    x = 1
    y = 2
    print 'x is equal to y: %s' % (x == y)
    z = 1
    print 'x is equal to z: %s' % (x == z)

    names = ['Donald', 'Jake', 'Phil']
    words = ['Random', 'Words', 'Dogs']
    if names == words:
        print 'Names list is equal to words'
    else:
        print "Names list isn't equal to words"

    new_names = ['Donald', 'Jake', 'Phil']
    print 'New names list is equal to names: %s' % (new_names == names)

Defining a class with two methods

    class Talker(object):
        def greet(self, name):
            print 'Hello, %s!' % name

        def farewell(self, name):
            print 'Farewell, %s!' % name

Defining a list

    dynamic_languages = ['Python', 'Ruby', 'Groovy']
    dynamic_languages.append('Lisp')

Defining a dictionary

    numbered_words = dict()
    numbered_words[2] = 'world'
    numbered_words[1] = 'Hello'
    numbered_words[3] = '!'

Defining a while loop

    while True:
        if value == wanted_value:
            break
        else:
            pass

Defining multiline strings

    string = '''This is a string with embedded newlines.
    Also known as a triple-quoted string.
        Whitespace at the beginning of lines is included,
    so the above line is indented but the others are not.
    '''

Splitting a long string over several lines of source code

    string = ('This is a single long, long string'
              ' written over many lines for convenience'
              ' using implicit concatenation to join each'
              ' piece into a single string without extra'
              ' newlines (unless you add them yourself).')

Defining a for loop

    for x in xrange(1, 4):
        print ('Hello, new Python user!'
               ' This is time number %d') % x
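As a small follow-on to the dictionary sample above (an added illustration using the same numbered_words dictionary), iterating over the sorted keys shows why the words were numbered:

    numbered_words = {2: 'world', 1: 'Hello', 3: '!'}

    # Visit the entries in key order to rebuild the sentence.
    for key in sorted(numbered_words):
        print(numbered_words[key])   # prints Hello, world, ! on separate lines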
https://wiki.python.org/moin/BeginnersGuide/Programmers/SimpleExamples?action=diff
SYNOPSIS

    #include <paradox.h>

    int PX_get_record2(pxdoc_t *pxdoc, int recno, char *data,
                       int *deleted, pxdatablockinfo_t *pxdbinfo)

DESCRIPTION

Reads the record with number recno into the memory pointed to by data. If pxdbinfo is not NULL, it is filled with information about the data block that contains the record; the structure has the following fields:

- blockpos (long) - File position where the block starts. The first six bytes of the block contain the header, followed by the record data.
- recordpos (long) - File position where the requested record starts.
- size (int) - Size of the data block without the six bytes for the header.
- recno (int) - Record number within the data block. The first record in the block has number 0.
- numrecords (int) - The number of records in this block.
- number (int) - The number of the data block.

Note: This function is deprecated. Use PX_retrieve_record(3) instead.

RETURN VALUE

Returns 0 on success and -1 on failure.

AUTHOR

This manual page was written by Uwe Steinmann [email protected]
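A minimal usage sketch follows. It assumes the document handle is created and opened with pxlib's PX_new() and PX_open_file() (treat those calls, and the fixed buffer size, as assumptions; in real code the buffer must match the record size stored in the file header), and only illustrates the PX_get_record2() call documented above.

    /* Sketch only: error handling and the real record size are simplified. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <paradox.h>

    int main(void)
    {
        pxdoc_t *pxdoc = PX_new();                      /* assumed pxlib call */
        if (pxdoc == NULL || PX_open_file(pxdoc, "table.db") < 0) {
            fprintf(stderr, "could not open table.db\n");
            return 1;
        }

        char *data = malloc(4096);   /* placeholder size, see note above */
        int deleted = 0;
        pxdatablockinfo_t info;

        /* Read record 0 and report where its data block lives. */
        if (PX_get_record2(pxdoc, 0, data, &deleted, &info) == 0) {
            printf("record 0 is in block %d at file position %ld\n",
                   info.number, info.recordpos);
        }

        free(data);
        PX_close(pxdoc);
        PX_delete(pxdoc);
        return 0;
    }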
http://manpages.org/px_get_record2/3
On 04/18/2011 06:12 PM, Richard W.M. Jones wrote:
>.
>

Yeah, I hear you on that. However, for the libguestfs OS value to really be useful for us in virt-manager, we have to map it to the virtinst osdict, which informs us of all the preferred device defaults (like virtio, usb tablet, virtio console, etc.). Which means more energy that would be better spent on getting libosinfo integrated.

>> -).
>

I think <description> is fine the way it is; there is always going to be a use case for an end user freeform field like that. But there is certainly a case for a similar field (or multiple fields) for recording more metadata, possibly for use by apps. Maybe something with XML namespaces.

Thanks,
Cole
https://www.redhat.com/archives/virt-tools-list/2011-April/msg00112.html
During our Mini Mix session in Toulouse today, one of the participants I spoke with during...

Is there any way you could make the ellipses default to the URL associated with that particular blog entry? I can use the String object to remove the ellipses and add them back if necessary. Thanks for the great control.

Christopher, please explain the scenario you refer to. Thanks, Dmitry

Hi Dmitry, your work looks great, although I am having trouble 'installing' (and therefore using) it. I am using: Workstation: XP SP2, VS2005 Pro, .NET version 2.0.50727; Dev Server: 2003 Std. First up, I've been coding for years, but still getting my head around dotNet (ie forgive me if I sound confused :P). I attempted to install the RSSToolkit.dll using GACUtil (found in "C:\Program Files\Microsoft Visual Studio 8\SDK\v2.0\Bin"). Result: "Assembly Imported Successfully!" However, when I create a datalist and attempt to create a new DataSource for it, the RSSDataSource is not available from the list as shown on: I have installed the RSSToolKit in BOTH the workstation's GAC and the DevServer's GAC, and also rebooted both (just for sh!#$ and giggles) to no avail. I even created a bin folder in my site and placed another copy of RSSToolKit.dll in there, but no joy. Did I miss a step in the install process? Is this toolkit compatible with .NET 2.0.50727? Do I need to include an "Imports RSSToolkit" type statement? What am I doing wrong :( Thanks in advance, Douglas

Douglas, .NET 2.0.50727 is the right version. You probably need to add the control to the toolbox - this can be done by right-clicking in the toolbox and selecting "Choose Items...". If the assembly is in the GAC, just select the control from the list. If things don't work, please try the included samples - do they work at runtime? In Visual Studio?

Thanks for the quick response :) I set up a new website on the devserver, calling it RSSTest. The web.config file was renamed to web.config.original, then I copied the samples folder (including YOUR web.config) to the root of the RSSTest site. The samples worked fine... Here's one of the headlines: BUT... the assembly does not appear in the list of .NET components found in Tools > Choose Toolbox Items..., even though I installed it using GACutil. I sorted by namespace, name, assembly name etc, and entered 'rss' into the filter box, yielding zero results. Here are a couple of screenshots of the IDE: Thanks again ;)

The following steps worked for me:
* Close VS if it is open
* Remove RssToolkit from the GAC using "gacutil -u RssToolkit"
* Open VS, right-click on the Toolbox and select "Reset Toolbox"
* Close VS
* Install RssToolkit into the GAC using "gacutil -i RssToolkit.dll"
* Open VS, right-click on the Toolbox and select "Choose Item..."
* Click on "Browse..." and select RssToolkit.dll from where you installed it (not in the GAC)
* I see RssDataSource and RssHyperLink in the Toolbox under "General"

Fantastic! All smiles here... The only thing I hadn't done was: Thanks so much for your assistance ;)

I have an autopostbacking drop-down list: Names = Australian states, Values = I'm having a little trouble getting RssDataSource to work as expected... I can change the URL manually and it works, but if I use Page_Load... RSSDataSource1.URL = DropDownList1.SelectedItem.Value it doesn't. I thought there might be some kind of RSSDataSource1.Refresh / Reload / ReFetch method? I did a Response.Write(DropDownList1.SelectedItem.Value) and the URL came up as expected, so the DropDownList / postback part seems to be working OK... Can this be done?

It looks like the data is cached on the postback in the viewstate by the data control. Please disable viewstate: <asp:datalist

Thanks for the pointer. I made the change you suggested, but it didn't work... I tried setting viewstate=false in a number of places (page, other controls etc) before stumbling across the AutoEventWireup="False" attribute in the page header. I changed this to AutoEventWireup="True" and now it's all working just the way I'd hoped :) I'm now studying up on 'AutoEventWireup' :) Thanks again for your assistance.

I use RssToolkit-1-0-0-1 and try to add a querystring to my ChannelRss.ashx, but I get the following error: ~/ChannelRss.ashx?channelID=1 is not a valid virtual path. The code I use: Rsshyperlink.NavigateUrl = "~/ChannelRss.ashx?channelID=1"; Thanks in advance.

Loreena, please use the ChannelName property of RssHyperLink to pass custom data to the channel handler. Thanks!

Regards, loreena

Dmitry Robsman ( ) now makes the releases of his RSS...

Dmitry, I have created a feed with an ashx handler and a query string. The feed can be read with IE7, but I cannot consume the feed with my own reader. Using an RssDataSource, I get an "invalid characters in path" error. I tried the late binding and I also receive errors. Do you have any idea of where to start looking? I have to add that I modified the RssHttpHandlerHelper to change the encryption used on the query string to use the site's authentication and encryption methods. If I use the RssHyperLink control it links properly to the handler which produces the feed, and again it works in IE7. Carl

A great tool, it's very convenient to read and write RSS now. Thanks.

Do you have a way to trap errors such as 404s and timeouts when using the RssDataSource bound to a datalist? Craig

Craig, the project is now on CodePlex -- please post your feature requests there.

PingBack from "for VS.NET 2005"

Great tool. Just have a question. I am using your toolkit to consume an RDF source. Everything works great; it manages to select everything from the source except items that have a node within the node. Example: <dc:items> <fileid>12453</fileid> <mime:type>pdf</mime> </dc:items> etc. It just skips that node entirely. Is there a way, or should I post a request at CodePlex? Thanks in advance!

Please post feature requests on CodePlex:

Hello, I'm using this in a VS 2005 web project. From what I've read, the /Bin directory is still supported for backwards compatibility with ASP.NET 1.1 applications. Can I put the RssToolkit.dll in one of the more appropriate directories within my web project rather than the GAC? If so, which of the application directories [within my web project] can I use? Regards

Ah, oops, I might have been incorrect about the bin directory. Anyway, I've put the dll into the bin directory. I link the control to a datalist and point the RSS control to the RSS feed. I get a {"DataBinding: 'RssToolkit.RssElementCustomTypeDescriptor' does not allow indexed access."}. This is without replacing the datalist control with a hyperlink. Is this by design?

Pointing a datalist to RssDataSource should work. The toolkit is now on CodePlex () - please report problems there.

I've implemented your toolkit into our social website and it works great! However I do encounter one problem: we use ASP.NET Membership to do the authentication, and we put a LoginStatus control on every page so users can log in / log out. But I notice that if users log out on pages with an RSS control, they get an error that says "something.ashx?c=channel' is not a valid virtual path". But if we remove the channelName property from the RSS control, everything's fine. Perhaps you could point out something I missed?

I believe this should be fixed in the release on CodePlex (). Please give it a try.

I downloaded the file from CodePlex, which I think is the same as the file you posted here (1.0.0.1). The problem still persists. I noticed someone also reported here that a postback will cause this error, so I tried selecting a day on a Calendar and the error occurred. Did I download an older version from CodePlex (but I couldn't find a newer one), or is this problem not fixed yet? Jim

The new version on CodePlex will be out very soon. If you need the fix now, please email me (use the link at the top of this page).

Our internet access requires domain authentication, so running the RSS toolkit on our intranet server generates an error stating it could not find the RSS feed supplied. Is there a way I can pass credentials with the RSS toolkit control? If so, how? Thanks

Hi, this is a great tool that you developed. Thanks for the tool. Rahul

You have to use the ASP.NET RSS Toolkit. Please read this link and review... [tool] ASP.NET 2.0 RSS Tool-Kit

Good RSS tool, it's becoming popular in RSS Feed

Added RSS Feed to the site

PingBack from "RSS Toolkit for ASP.net"
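To make the "disable viewstate" advice from the thread above concrete (the original snippet is truncated after <asp:datalist), here is a hedged sketch of what the markup could look like. The control IDs, the bound field names ("link", "title") and the item template contents are assumptions for illustration only.

    <asp:DataList ID="DataList1" runat="server"
                  DataSourceID="RssDataSource1"
                  EnableViewState="false">
        <ItemTemplate>
            <!-- hypothetical template: render each feed item's title as a link -->
            <a href='<%# Eval("link") %>'><%# Eval("title") %></a>
        </ItemTemplate>
    </asp:DataList>

With viewstate off, the DataList rebinds from the RssDataSource on every postback, so a URL assigned in Page_Load actually takes effect.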
http://blogs.msdn.com/dmitryr/archive/2006/03/26/561200.aspx
I'm new to IntelliJ (and La Clojure). I'm running IntelliJ 11.0.1 Community Edition and have installed the La Clojure plugin, version 0.4.30. I created a Clojure project and am able to run the code by creating a run configuration, setting the namespace and calling functions from the script REPL. So far so good. But I would like to use the various options under Tools -> Clojure REPL (load file to REPL, run selected text, etc.); unfortunately all of those options are greyed out. I must have something misconfigured, but I don't have any idea as to what. Thanks for any help you can give me.

I have the same issue. I don't know how to enable those. Also, I'm using the Leiningen plugin, which I've figured out, but it would be nice if running a clj file would pull in at least that clj file. I had to create "run" and "repl" Leiningen tasks to get my code to run.

Please create an issue in our issue tracker:

Best regards,
Alexander Podkhalyuzin.
https://intellij-support.jetbrains.com/hc/en-us/community/posts/205993309-Unable-to-run-REPL?page=1
Hitesh Shah commented on YARN-2102:
-----------------------------------

Comments:
- AccessControlList is limited private and not available to users outside of HDFS and MapReduce.
- Is there a reason for not supporting separate lists of users and groups?
- How is a user of this API expected to append a user to a list? Does the user need to do a get and then a set? Obviously, if two users try to do this in parallel, it will not work correctly due to the inherent non-atomic nature of the web service.
- putNamespace or createNamespace? Or is "put" meant to denote upsert behavior? How is a user meant to update the namespace with additional readers/writers?

> More generalized timeline ACLs
> ------------------------------
>
>          Key: YARN-2102
>          URL:
>      Project: Hadoop YARN
>   Issue Type: Sub-task
>     Reporter: Zhijie Shen
>     Assignee: Zhijie Shen
>  Attachments: GeneralizedTimelineACLs.pdf, YARN-2102.1.patch, YARN-2102.2.patch, YARN-2102.3.patch, YARN-2102.5.patch
>
> We.

--
This message was sent by Atlassian JIRA
(v6.2#6252)
https://www.mail-archive.com/yarn-issues@hadoop.apache.org/msg34671.html
Agenda See also: IRC log TBL: I have an action to get onto the TAG's agenda discussions of what ISP's let users do, e.g. controlling mime-types. Not quite sure how best to do it. SW: Send email? (scribe missed Tim's response on the email suggestion.) TVR: We should also think about people kludging around with text/xml as a mime type. Has to do with what browsers content sniff and what they don't. SW: Draft minutes of the 10th are at ... Any objections to approving them? Silence. RESOLUTION: Minutes of 10 January 2008 at are approved. <timbl> Raman, One could argue that XML is, with namespaces, self-describing and so it should be delivered as application/xml or text/xml so that the XML architecture alone defines what sort of a document it is. Especially with namespace mixing. SW: We have a meeting on the 24th, with Dave Orchard as designated scribe, but we're short of agenda topics. If I don't get more by Tues., we may cancel the call on the 24th. SW: Noah had an action <Stuart> tracker... that's ISSUE-7 NM: Anyone have problems with marking the action done? SW: Agreed, action is done. (David Orchard joins the call) NM: I think a key point in the email responses is that this stuff is happening with scripting and redirects today, so having declarative markup in the form of the ping attribute is a good thing. DC: I think using GET is OK for this. Don't have a strong opinion. We should leave the action on the someday. <DanC> (i.e. I'm not ready to close the issue again just yet.) HT: It's an error in logic to say that because scripts are used to do this, making an explicit attribute has no architectural downsides. There are all sorts of things you can do in script that aren't good Web arch. When you make an attribute, you imply "do this". I don't necessarily think ping is a bad thing, but the script precedent is the wrong argument. <Zakim> noah, you wanted to say we need to keep issue a bit on the front burner, since we've just invited discussion. NM: I think we need to keep some active attention to this. If nothing else, we just invited discussion. NW: Not much to add. I agree with Henry that the fact that script is being used doesn't make the case one way or the other. Also, as I said last week, I have some skepticism that this will be sued. <DanC> (I note our discussion seems to be more about requirements for PING rather than whenToUseGet, i.e. whether PING should use GET or POST) TVR: I think this is not just about UI. It's more than that, and more than HTML-specific. So, I don't really agree with David Baron's response at TBL: I asked on the list about using UDP. <DanC> (mnot pointed out that UDP doesn't go thru firewalls straightforwardly... i.e. we're back to NAT vs IPV6) NM: Responses you got were 1) UDP doesn't have specified semantics for this and 2) firewalls don't know about this <Zakim> noah, you wanted to ask a bit more about GET vs POST TBL: On point 1, you grab a port and define the semantics. Seems an abuse of the Internet to do something that's both high traffic and statistical (occasional losses don't matter) with a high-overhead connection-oriented approach. TVR: I agree. Tim raises a core question: issues like this shouldn't be treated primarily in the context of a markup-specific specification. Knowing how Firefox works is good. Writing down specs is good. There's a risk of baking in too early. HT: How can they use "options". That's supposed to be idempotent. 
<Zakim> noah, you wanted to ask a bit more about GET vs POST <scribe> scribenick: DanC NM: where do you draw the line on GET vs POST? clearly GET is not for debiting my credit card balance, but I'm a little surprised that people say GET is OK for counting. <scribe> scribenick: noah <DanC> (careful with the quanitifiers; nobody is saying anything about "empty post for all resources") <DanC> (nobody is talking about reliable accounting anyway; it's statistical) <noah_> (added during editing of minutes -- actually, I think Roy's point is that with the attribute there, malicious sites can trick you into sending an empty post to URIs that don't have the intended semantic. So, in that sense, I think the quantifier is "all" resources.) <Zakim> dorchard, you wanted to say the firewall problem won't be an issue because few sites will implement DO: I don't think the firewall problems with UDP are that bad. Opening firewalls to UDP seems not that hard. <DanC> (but the few sites that implement are going to care a *lot* about the whole AOL userbase that's behind one firewall, IIRC.) <noah_> I wonder about home routers and the like. <Stuart> what about the firewalls closer to the client or mid-net <Zakim> DanC, you wanted to note the UDP bit falls into the tragedy-of-the-commons pile... i.e. invidual actors can get their job done better in the short-term using http/TCP; while it's DO: I agree with Raman that it's not about UI at all. I think we should keep this on the front burner and be active. <timbl> Stevens? <Zakim> timbl, you wanted to talk about effects under/over th hood <DanC> (the Stevens book is the classical text on TCP/IP and UDP, IIRC.) DC: The "use UDP" falls out of p.47 of Stevens' book ("TCP/IP Illustrated" by Richard Stevens and Gary Wright), but saying that won't make all the users do it. Then again, if people start doing pings with connection oriented protocols, the overhead could be big. Not sure what to do. <DanC> (Steven's books seem to be ) <DanC> by "p. 47" I just meant "straight out of TCP/IP textbooks" TBL: Clearly counting GETs is a risky thing to do. When you are instrumenting things inside of HTTP, you're sort of at two layers. <noah_> Yes, of course, but these folks want to do advertising billing based on this stuff, and probably aren't OK with a big fraction of their hits being swallowed by caches. <DanC> ( TCP/IP Illustrated, Volume 1: The Protocols, Addison-Wesley, 1994. ) <Zakim> ht, you wanted to raise the http bis issue <ht> TBL: I don't want to argue for GET for two reasons 1) I prefer UDP 2) clearly there are problems with cacheing and proxying. <DanC> (there's a whole bunch of work on counting GETs... there used to be internet drafts on HTTP extensions for HTTP proxies to report aggregate counts and such. I think that stuff is the subject of huge piles of academic research, these days.) HT: The section on in the draft HTTP 1.1 RFC update on safe and idemptotent operations seems at odds with with what we're talking about. See: ... Let's say I'm going to put a CGI script at the URI of the w3c home page, and only changes member of the day every 1000 gets. That seems fine to me. It sure seems to violate 9.1.2 in the draft RFC, because the requests certainly aren't idempotent. TBL: But the user hasn't bought into that contract. HT: Yes, I agree, that's what I think too, but section 9.1.2 doesn't account for that being OK. When I read 9.1.2, incrementing a counter seems a side affect in my book. DC: At the protocol level, you can't tell. 
<DanC> (indeed, the visible counters run counter to the HTTP spec. they're an abuse.) <ht> OK DanC, we're in agreement -- _should_ they be ruled out by the HTTP spec. <ht> ? <DanC> yes <ht> Why? Counters are benign. . . <ht> I understand that they are misleading, but not seriously so <DanC> no, counters are not benign. they work against caching. they're expensive for the community. <ht> OK, so _invisible_ counters are benign? <DanC> I think so; might have to hear about a specific example to be sure <Stuart> DanC, so... is W3C member of the day... benign? <DanC> now we're twisting, so I'm struggling. There are 400+ representations of . that's costly, but the community seems OK with the cost. one could say likewise about visible counters... "there are 99,999 representations of this resource". hmm. TBL: Accounting GET accesses is fine. NM: Yes, but we have to admit that the accounting semantics you get with GET are very different from what you POST gives you. TVR: Yes, and people use rather sophisticated heuristics, like counting multiple requests in short period differently from ones that are widely spaced. <ht> Stuart is right, OPTIONS is the now-proposed vehicle for the Access-control exchange, my confusion SW: I don't see a particular action item at this point. Is there a tag blog article? TVR: I think we should help focus the www-tag discussion on the non-UI aspects. I'm a bit worried that the discussion would get sidetracked if people felt "this is a UI issue, and that's not the TAG's main focus" SW: OK, I'll do something, but we won't record it as a formal action. SW: This is back on the table due to a post from Dan DC: The Cool URIs document has been revised. The question is, have Norm's comments been addressed? NW: I'm a bit swamped with other things, so can't do a review immediately. DC: Maybe we should request an extension. When can we get comments back? NW: I'm aiming for before next Thursday. SW: Tim, you gave them some comments too. <jar> I plan to review cool uris before the 21st, FWIW DC: I think Tim sent mail on behalf of "the bunch of us" ... I feel I should doublecheck Tim's comments. SW: By next week. DC: No. ... Seems like a one week extension. Longer might help more. I guess I'll ask for one week extension. NM: I thought they accounted for our comments. Ah yes, at the bottom they say "all TAG comments addressed" SW: But, I think they only mentioned Norm's. <scribe> ACTION: Dan to ask SWEO working group for one week extension for review of their document [recorded in] <trackbot-ng> Created ACTION-95 - Ask SWEO working group for one week extension for review of their document [on Dan Connolly - due 2008-01-24]. (Scribe notes that Jonathan Rees is in fact on the call.) <Norm> ACTION: Walsh to review latest draft with respect in particular to his earlier comments [recorded in] <trackbot-ng> Created ACTION-96 - Review latest draft with respect in particular to his earlier comments [on Norman Walsh - due 2008-01-24]. <timbl> I have it on my agenda -- just things happen. JR: We've had some calls. Alan Ruttenberg, David Booth and I seem to be on the hook to try and write down some RDF. The core question is: what triples can or should you infer from HTTP interactions. We want both directions: what can you infer from the interaction, but also from the application side, what do you want to get? ... I'll point people to our wiki pages and post here in IRC. DC: Cool. SW: Thank you, Jonathan. <jar> AWWSW home page: <DanC> action-85? 
<trackbot-ng> ACTION-85 -- David Orchard to produce another draft of Passwords in the Clear finding, based on comments from 15 November telcon, publish it and invite comment -- due 2007-12-06 -- OPEN <trackbot-ng> DO: I think I have an overdue action on passwords in the clear. Intending to, just haven't managed to get to it. Will do eventually I'm sure. SW: Please update the due date. DO: I'll reset it for 2 weeks from now. <DanC> ACTION-85 is now due 2008-01-31 SW: I'll try and get to my overdue action on CURIE's this week. You can see I'm active on my action on F2F time. <DanC> action-92? <trackbot-ng> ACTION-92 -- Tim Berners-Lee to consider whether or not he wants to post an issue re: POWDER/rules -- due 2007-12-20 -- OPEN <trackbot-ng> SW: Tim, have you been looking at your action #92 on Powder TBL: I'm afraid not. I don't think I've seen this before. <DanC> comes from <jar> powder + rules is actually quite interesting. DC: This action came from the 13 Dec. meeting, and the records say you were there ( ) HT: I think this has to do with how you specificy collections of URIs in POWDER and in access controls. <Zakim> DanC, you wanted to note POWDER design progress in ways that I'm not entirely comfortable with <DanC> [1] <DanC> [2] <DanC> [3] <DanC> "POWDER : my rabbit" DC: (see links from Dan above) The POWDER use case is that some "Good Housekeeping" group says a certain group of pages is OK for 13 year olds. You need to say who said that and why. There was discussion of reification from RDF, which is known to be problematic. <ht> zakim HT.a is me TBL: We said you can do it in either XML or RDF and convert. Then you can explain the relationships in OWL. (Scribe isn't following this quite well enough to record this accurately.) <DanC> (what timbl was talking about involves standardizing the log:uri level-breaker.) TBL: We encouraged them to put the ratings into a document. Then you can get provenance by saying things like "this document comes from so and so". By talking about the document, you avoid the need for reification. DC: I don't think he's using documents for the provenance. I also think they're using GRDDL on RDF/XML, which makes my head spin a bit. SW: I think he's trying to give a story to those who need a hard core RDF model. ... OK, this is a side discussion from the overdue actions topic. Is there still an action? DC: I think it could be withdrawn. SW: I will withdraw it. ... Tim can decide whether to come back to this. ... In December we had the whole access control thing, which also talks about collections of resources, and we were concerned about the differences. Can't quite decide whether I have the time/energy to push for getting this fixed. <Zakim> dorchard, you wanted to mention XACML DO: While research the access control stuff, I took a look at XACML. They've got a very interesting policy language, with implementations, etc. They have a rule combining algorithm that looks very nice. I took part of the English and pseudo code and proposed it for the access control stuff, but so far I'm getting pushback. ... I still think the XACML work is an interesting and useful base. <timbl> Pointer to the XACL spec you read? <dorchard> <dorchard> <DanC> (yes, XACML has come up in our semweb policy research stuff now and again; I still haven't studied it as much as I perhaps should... but... yeah... I don't really feel welcomed by PDF specs.) DO: I was particularly interested in their Appendix C on combining algorithms for rule sets. 
This is exactly what the access control spec does. TBL: Does the design guarantee things about the finished system that are nice, or is it just that the rules look appealing? DO: Don't know. <DanC> (our research group has something really cool called AIR... ) <DanC> (nifty justification browser in tabulator.) <dorchard> Hal's responses: <dorchard> hal_lockhart: not sure I understand question - XACML has properties that no other ac system can match <dorchard> hal_lockhart: primarily about scaling <dorchard> hal_lockhart: xacml is a calculator - takes info as given <dorchard> hal_lockhart: solves ac problem, depends on env for security guarantees <Stuart> SW: I've posted a new WBS form about scheduling for Bristol in the May/June timeframe. ... There are quite a few constraints. ... This focusses on Bristol in the spring. <ht> 19--21 works for me <Norm> There you go, over constrained! <dorchard> 19th is Victoria day holiday in Canada. SW: I'll look for more survey results, then suggest something. DC: I'm really interested in the proposal for KC in Sept. SW: You had some commitment from Tim. ... Norm suggested 2 weeks earlier. NW: Just to spread meetings more evenly. <DanC> "Proposed 23-25th September 2008, Kansas City (Tue-Thu)" from the agenda DC: But I want Tim there, and he can make the 23rd. <raman> I'd find it more useful to use the remaining time talking about tagsoup. <raman> calendar discussions by voice are highly unproductive <Stuart> <DanC> <DanC> DC: I saw more consensus on 24-26 <DanC> ... but I misread it. DC: Anyone have preferences on Tues-Thurs vs. Wed.-Fri.? Several: Tues-Thurs TBL: Did I agree to this? DC: Yes, you said prefer. SW: We of course haven't heard from those who will be elected soon. <jar> i need to go, sorry. DC: Planets are lined up to publish HTML spec 22 January. It has a lot of stuff all wrapped together. ALL: (...there was some more informal discussion, but too rambling to minute...)
http://www.w3.org/2001/tag/2008/01/17-minutes