question_id: int64 (min 4, max 6.31M)
answer_id: int64 (min 7, max 6.31M)
title: string (length 9 to 150)
question_body: string (length 0 to 28.8k)
answer_body: string (length 60 to 27.2k)
question_text: string (length 40 to 28.9k)
combined_text: string (length 124 to 39.6k)
tags: list (length 1 to 6)
question_score: int64 (min 0, max 26.3k)
answer_score: int64 (min 0, max 28.8k)
view_count: int64 (min 15, max 14M)
answer_count: int64 (min 0, max 182)
favorite_count: int64 (min 0, max 32)
question_creation_date: string date (2008-07-31 21:42:52 to 2011-06-10 18:12:18)
answer_creation_date: string date (2008-07-31 22:17:57 to 2011-06-10 18:14:17)
13,599
13,613
Convert enums to human readable values
Does anyone know how to transform an enum value to a human readable value? For example: ThisIsValueA should be "This is Value A".
Converting this from a VB code snippet that a certain Ian Horwill left at a blog post long ago... I've since used this in production successfully.

/// <summary>
/// Add spaces to separate the capitalized words in the string,
/// i.e. insert a space before each uppercase letter that is
/// either preceded by a lowercase letter or followed by a
/// lowercase letter (but not for the first char in string).
/// This keeps groups of uppercase letters - e.g. acronyms - together.
/// </summary>
/// <param name="pascalCaseString">A string in PascalCase</param>
public static string Wordify(string pascalCaseString)
{
    Regex r = new Regex("(?<=[a-z])(?<x>[A-Z])|(?<=.)(?<x>[A-Z])(?=[a-z])");
    return r.Replace(pascalCaseString, " ${x}");
}

(Requires 'using System.Text.RegularExpressions;'.) Thus:

Console.WriteLine(Wordify(ThisIsValueA.ToString()));

would return "This Is Value A". It's much simpler, and less redundant, than providing Description attributes. Attributes are useful here only if you need to provide a layer of indirection (which the question didn't ask for).
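For readers outside C#, the same insert-a-space idea can be sketched with lookaround assertions in Python (the function name and sample values here are illustrative, not from the original post):

```python
import re

def wordify(pascal_case_string: str) -> str:
    """Insert a space before each capital that starts a new word,
    keeping runs of capitals (acronyms) together."""
    # Boundary 1: a lowercase letter followed by an uppercase letter.
    # Boundary 2: an uppercase letter followed by Upper-then-lower (acronym end).
    pattern = r"(?<=[a-z])(?=[A-Z])|(?<=[A-Z])(?=[A-Z][a-z])"
    return re.sub(pattern, " ", pascal_case_string)

print(wordify("ThisIsValueA"))  # This Is Value A
print(wordify("HTTPResponse"))  # HTTP Response
```

The zero-width lookarounds match between characters, so `re.sub` inserts a space without consuming any letters.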
[ "c#", "enums", "human-readable" ]
27
19
8,381
8
0
2008-08-17T12:24:21.840000
2008-08-17T13:10:20.513000
13,607
13,618
GUI system development resources?
Could someone recommend any good resources for creating Graphical User Interfaces, preferably in C/C++? Currently my biggest influence is 3DBuzz.com's C++/OpenGL VTMs (Video Training Modules). While they are very good, they cover a large area of graphics programming, so they only skim the surface of GUI programming. This question does relate to "How do I make a GUI?", where there is also a rough outline of my current structure. Any response would be appreciated. Edit: I've just read some of the answers and spotted that I've missed some points. I had to type the question twice as I caught the wrong button and lost it. I missed two important points. First: this will be used cross-platform, including homebrew on a Sony PSP. Second: I want to create a GUI system, not use an existing one. Edit 2: I think some of you are missing the point. I don't want to use an existing GUI system, I want to build one. Qt in its current form is not portable to the PSP, never mind the overkill of such a task. That said, I've decided to create an IM-GUI, and have started to prototype the code.
I wouldn't use OpenGL for the GUI unless you are planning for hardware-accelerated and/or 3D effects that you don't think you would get with a more traditional GUI toolkit (e.g. Qt, wxWidgets, GTK, etc.). If you just need a quick, simple GUI for hosting your OpenGL graphics, then FLTK is a nice choice. Otherwise, for rendering the GUI directly in OpenGL there are libraries like Crazy Eddie's GUI that do just that and provide lots of skinnable widgets that you won't have to reinvent. The window and OpenGL context could then be provided by a portable library like SDL. EDIT: Now that I've gone back and taken a look at your other post, I think I have a better understanding of what you are asking. For a GUI on an embedded system like the Nintendo DS, I would consider using an "immediate mode" GUI. Jari Komppa has a good tutorial about them, but you could use a more object-oriented approach with C++ than the C code he presents.
[ "c++", "user-interface", "playstation-portable" ]
1
2
1,829
9
0
2008-08-17T12:50:15
2008-08-17T13:18:18.673000
13,615
4,807,469
Validate Enum Values
I need to validate an integer to know if it is a valid enum value. What is the best way to do this in C#?
You got to love these folk who assume that data not only always comes from a UI, but a UI within your control! IsDefined is fine for most scenarios. You could start with:

public static bool TryParseEnum<TEnum>(this int enumValue, out TEnum retVal)
{
    retVal = default(TEnum);
    bool success = Enum.IsDefined(typeof(TEnum), enumValue);
    if (success)
    {
        retVal = (TEnum)Enum.ToObject(typeof(TEnum), enumValue);
    }
    return success;
}

(Obviously just drop the 'this' if you don't think it's a suitable int extension.)
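The same guard-then-convert pattern translates to other languages. Here is a Python sketch (the Color enum is a made-up example), where the try/except plays the role that Enum.IsDefined plays in the C# version:

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2
    BLUE = 3

def try_parse_enum(enum_cls, value):
    """Return (True, member) if value maps to a defined member,
    else (False, None) -- mirroring the TryParse shape."""
    try:
        return True, enum_cls(value)  # Enum call raises ValueError if undefined
    except ValueError:
        return False, None

print(try_parse_enum(Color, 2))   # (True, <Color.GREEN: 2>)
print(try_parse_enum(Color, 99))  # (False, None)
```

Returning a success flag alongside the parsed value keeps the caller's code branch-friendly, just like the C# out-parameter idiom.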
[ "c#", "validation", "enums" ]
101
108
90,158
13
0
2008-08-17T13:14:42.720000
2011-01-26T17:02:44.867000
13,620
13,665
Speed difference in using inline strings vs concatenation in php5?
(assume php5) consider Is there much of a difference between 1 and 2? If not, what about between 1/2 and 3?
Well, as with all "What might be faster in real life" questions, you can't beat a real-life test.

function timeFunc($function, $runs) {
    $times = array();
    for ($i = 0; $i < $runs; $i++) {
        $time = microtime(true);
        call_user_func($function);
        $times[$i] = microtime(true) - $time;
    }
    return array_sum($times) / $runs;
}

function Method1() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are $foo"; }
function Method2() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are {$foo}"; }
function Method3() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = "these are " . $foo; }

print timeFunc('Method1', 10) . "\n";
print timeFunc('Method2', 10) . "\n";
print timeFunc('Method3', 10) . "\n";

Give it a few runs to page everything in, then...

0.0035568
0.0035388
0.0025394

So, as expected, the two interpolation forms are virtually identical (noise-level differences, probably due to the extra characters the interpolation engine needs to handle). Straight-up concatenation runs in about 66% of the time, which is no great shock: the interpolation parser will look, find nothing to do, then finish with a simple internal string concat. Even if the concat were expensive, the interpolator would still have to do it, after all the work to parse out the variable and trim/copy up the original string.

Update by Somnath: I added Method4() to the test above.

function Method4() { $foo = 'some words'; for ($i = 0; $i < 10000; $i++) $t = 'these are ' . $foo; }
print timeFunc('Method4', 10) . "\n";

Results were:

0.0014739
0.0015574
0.0011955
0.001169

When you are just declaring a string and there is nothing in it to parse, why make the PHP parser do the extra work? I hope you get my point.
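The same measure-it-yourself approach carries over to any language. A minimal Python sketch of the benchmark harness (timings vary by machine, so no absolute numbers are claimed here):

```python
import timeit

foo = "some words"

# Time 10,000 builds of the same string via interpolation vs. concatenation.
t_interp = timeit.timeit(lambda: f"these are {foo}", number=10_000)
t_concat = timeit.timeit(lambda: "these are " + foo, number=10_000)

print(f"interpolation: {t_interp:.6f}s  concatenation: {t_concat:.6f}s")
```

As the answer stresses, run it several times and average: a single cold run is dominated by paging and warm-up noise.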
[ "php", "optimization", "performance" ]
57
45
25,197
15
0
2008-08-17T13:19:32.587000
2008-08-17T15:10:24.843000
13,647
13,663
Is there an easy way to convert C# classes to PHP?
I am used to writing C# Windows applications. However, I have some free hosted PHP webspace that I would like to make use of. I have a basic understanding of PHP but have never used its object-oriented capabilities. Is there an easy way to convert C# classes to PHP classes, or is it just not possible to write a fully object-oriented application in PHP? Update: There is no reliance on the .NET framework beyond the basics. The main aim would be to restructure the class properties, variable enums, etc. The PHP will be hosted on a Linux server.
It is entirely possible to write a PHP application almost entirely in an object-oriented methodology. You will have to write some procedural code to create and launch your first object but beyond that there are plenty of MVC frameworks for PHP that are all object-oriented. One that I would look at as an example is Code Igniter because it is a little lighter weight in my opinion.
[ "c#", "php" ]
7
2
3,758
5
0
2008-08-17T14:16:50.833000
2008-08-17T15:05:51.170000
13,655
13,657
C# .NET listing contents of remote files
Is it possible in .NET to list files on a remote location like a URL? Much in the same way the System.IO classes work. All I need is the URLs to images that are on a remote server.
Short answer: no, unless you have more control over that web server. Long answer: here are possible solutions... You will need a server-side script that will list the files locally and output the list in your preferred format. Most web servers implement default file-browsing pages, so you could theoretically parse those, but this solution will be very fragile and not very portable even between different versions of the same web server. If you have FTP access...
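To illustrate the "parse the server's file-browsing page" fallback the answer mentions, here is a Python sketch. The HTML snippet is invented, and as the answer warns, a real listing page's markup differs by server, which is exactly why this approach is fragile:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href targets from anchor tags in a directory-listing page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def image_links(listing_html):
    """Return only the hrefs that look like image files."""
    parser = LinkExtractor()
    parser.feed(listing_html)
    return [h for h in parser.links
            if h.lower().endswith((".png", ".jpg", ".jpeg", ".gif"))]

# Hypothetical fragment of a server-generated listing page.
sample = '<a href="a.png">a</a> <a href="notes.txt">n</a> <a href="b.JPG">b</a>'
print(image_links(sample))  # ['a.png', 'b.JPG']
```

In practice you would fetch the listing page over HTTP first; the parsing step shown here is the fragile part the answer cautions about.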
[ ".net" ]
4
3
2,220
2
0
2008-08-17T14:48:00.350000
2008-08-17T14:52:09.513000
13,678
13,746
Textual versus Graphical Programming Languages
I am part of a high school robotics team, and there is some debate about which language to use to program our robot. We are choosing between C (or maybe C++) and LabVIEW. There are pros for each language.

C(++):
- Widely used
- Good preparation for the future (most programming positions require text-based programmers)
- We can expand upon our C codebase from last year
- Allows us to better understand what our robot is doing

LabVIEW:
- Easier to visualize program flow (blocks and wires, instead of lines of code)
- Easier to teach (supposedly...)
- "The future of programming is graphical." (Think so?)
- Closer to the Robolab background that some new members may have
- Don't need to intimately know what's going on. Simply tell the module to find the red ball, don't need to know how.

This is a very difficult decision for us, and we've been debating for a while. Based on those pros for each language, and on the experience you've got, what do you think the better option is? Keep in mind that we aren't necessarily going for pure efficiency. We also hope to prepare our programmers for a future in programming. Also: Do you think that graphical languages such as LabVIEW are the future of programming? Is a graphical language easier to learn than a textual language? I think that they should be about equally challenging to learn. Seeing as we are partially rooted in helping people learn, how much should we rely on prewritten modules, and how much should we try to write on our own? ("Good programmers write good code, great programmers copy great code." But isn't it worth being a good programmer first?) Thanks for the advice! Edit: I'd like to emphasize this question more: The team captain thinks that LabVIEW is better for its ease of learning and teaching. Is that true? I think that C could be taught just as easily, and beginner-level tasks would still be around with C. I'd really like to hear your opinions.
Is there any reason that typing while{} should be any more difficult than creating a "while box?" Isn't it just as intuitive that program flows line by line, only modified by ifs and loops, as it is intuitive that the program flows through the wire, only modified by ifs and loops!? Thanks again! Edit: I just realized that this falls under the topic of "language debate." I hope it's okay, because it's about what's best for a specific branch of programming, with certain goals. If it's not... I'm sorry...
Before I arrived, our group (PhD scientists with little programming background) had been trying to implement a LabVIEW application on-and-off for nearly a year. The code was untidy, too complex (front and back-end) and, most importantly, did not work. I am a keen programmer but had never used LabVIEW. With a little help from a LabVIEW guru who could help translate the textual programming paradigms I knew into LabVIEW concepts, it was possible to code the app in a week. The point here is that the basic coding concepts still have to be learnt; the language, even one like LabVIEW, is just a different way of expressing them. LabVIEW is great to use for what it was originally designed for, i.e. to take data from DAQ cards and display it on-screen, perhaps with some minor manipulations in between. However, programming algorithms is no easier, and I would even suggest that it is more difficult. For example, in most procedural languages execution order generally follows line by line, using pseudo-mathematical notation (i.e. y = x*x + x + 1), whereas LabVIEW would implement this using a series of VIs which don't necessarily follow from each other (i.e. left-to-right) on the canvas. Moreover, programming as a career is more than knowing the technicalities of coding. Being able to effectively ask for help/search for answers, write readable code, and work with legacy code are all key skills which are undeniably more difficult in a graphical language such as LabVIEW. I believe some aspects of graphical programming may become mainstream - the use of sub-VIs perfectly embodies the 'black-box' principle of programming and is also used in other language abstractions such as Yahoo Pipes and the Apple Automator - and perhaps some future graphical language will revolutionise the way we program, but LabVIEW itself is not a massive paradigm shift in language design; we still have while, for, if flow control, typecasting, event-driven programming, even objects.
If the future really will be written in LabVIEW, C++ programmers won't have much trouble crossing over. As a postscript I'd say that C/C++ is more suited to robotics, since the students will no doubt have to deal with embedded systems and FPGAs at some point. Low-level programming knowledge (bits, registers, etc.) would be invaluable for this kind of thing. @mendicant: Actually, LabVIEW is used a lot in industry, especially for control systems. Granted, NASA is unlikely to use it for on-board satellite systems, but then software development for space systems is a whole different ball game...
[ "robotics", "labview", "graphical-language" ]
35
35
9,859
25
0
2008-08-17T15:39:57.560000
2008-08-17T17:23:27.023000
13,698
13,777
php Zend / MVC without mod_rewrite
I've seen it mentioned in many blogs around the net, but I believe it should be discussed here. What can we do when we have an MVC framework (I am interested in Zend) in PHP but our host does not provide mod_rewrite? Are there any "short-cuts"? Can we transfer control in any way (so that a mapping may occur between pages)? Any ideas? Thank you :-)
The Zend Framework should work without mod_rewrite, if you can live with your URLs looking more like "/path/to/app/index.php/controller/action". With mod_rewrite you could do away with the "index.php" bit, but it should work without it too. It's all a matter of setting up the routes to accept the index.php part.
[ "php", "model-view-controller", "zend-framework" ]
2
4
4,177
5
0
2008-08-17T16:06:03.827000
2008-08-17T18:00:47.387000
13,704
13,726
Suggestions for Migrating from ASP.NET WebForms to ASP.NET MVC?
ASP.NET MVC has been discussed on this forum a few times. I'm about to do a large migration of several websites from classic ASP/ASP.NET WebForms to ASP.NET MVC and was wondering what kind of advice those of you with experience in both technologies have. What I have: a typical ASP.NET app with heavily coupled presentation/business logic, all sorts of messy ASP.NET-generated Javascript cruft, and so forth. What I want: clean ASP.NET MVC-generated agnostic markup. 'Nuff said. Any pointers, tips, tricks, or gotchas to be aware of? Thanks!
Any pointers, tips, tricks, or gotchas to be aware of? Well, I think you're probably a little ways away from thinking about tricks & gotchas:) As I'm sure you're aware, ASP.NET MVC is not some new version of ASP.NET, but a totally different paradigm from ASP.NET; you won't be migrating, you'll be initiating a brand new development effort to replace an existing system. So maybe you can get a leg up on determining requirements for the app, but the rest will probably be re-built from scratch. Based on the (very common) problems you described in your existing code base, you should consider taking this opportunity to learn some of the current best practices in designing loosely coupled systems. This is easy to do because modern "best practices" are easy to understand and easy to practice, and there is enormous community support, and high-quality, open source tooling to help in the process. We are moving an ASP/ASP.NET application to ASP.NET MVC at this time as well, and this is the conclusion my preparatory research has led me to, anyway. Here is a post with links on using ASP.NET MVC, but I would start by reading this post. The post is about NHibernate (an ORM tool) on its surface, but the discussion and the links are about getting the foundations right and is the result of preparing to port an ASP.NET site to MVC. Some of the reference architectures linked to in that post are based on ASP.NET MVC. Here is another post about NHibernate, but in the "Best Practices & Reference Applications" section most if not all of the reference applications listed are ASP.NET MVC applications also. Reference architectures can be extremely useful for quickly getting a feeling for how an optimal, maintainable ASP.NET MVC site might be designed.
Suggestions for Migrating from ASP.NET WebForms to ASP.NET MVC? ASP.NET MVC has been discussed on this forum a few times. I'm about to do a large migration of several websites from classic ASP/ASP.NET WebForms to ASP.NET MVC and was wondering what kind of advice those of you with experience in both technologies have. What I have: a typical ASP.NET app with heavily coupled presentation/business logic, all sorts of messy ASP.NET-generated Javascript cruft, and so forth. What I want: clean ASP.NET MVC-generated agnostic markup. 'Nuff said. Any pointers, tips, tricks, or gotchas to be aware of? Thanks!
TITLE: Suggestions for Migrating from ASP.NET WebForms to ASP.NET MVC? QUESTION: ASP.NET MVC has been discussed on this forum a few times. I'm about to do a large migration of several websites from classic ASP/ASP.NET WebForms to ASP.NET MVC and was wondering what kind of advice those of you with experience in both technologies have. What I have: a typical ASP.NET app with heavily coupled presentation/business logic, all sorts of messy ASP.NET-generated Javascript cruft, and so forth. What I want: clean ASP.NET MVC-generated agnostic markup. 'Nuff said. Any pointers, tips, tricks, or gotchas to be aware of? Thanks! ANSWER: Any pointers, tips, tricks, or gotchas to be aware of? Well, I think you're probably a little ways away from thinking about tricks & gotchas:) As I'm sure you're aware, ASP.NET MVC is not some new version of ASP.NET, but a totally different paradigm from ASP.NET, you won't be migrating, you'll be initiating a brand new development effort to replace an existing system. So maybe you can get a leg up on determining requirements for the app, but the rest will probably re-built from scratch. Based on the (very common) problems you described in your existing code base you should consider taking this opportunity to learn some of the current best practices in designing loosely coupled systems. This is easy to do because modern "best practices" are easy to understand and easy to practice, and there is enormous community support, and high quality, open source tooling to help in the process. We are moving an ASP/ASP.NET application to ASP.NET MVC at this time as well, and this is the conclusion my preparatory research has led me to, anyway. Here is a post to links on using ASP.NET MVC, but I would start by reading this post. The post is about NHibernate (an ORM tool) on its surface but the discussion and the links are about getting the foundations right and is the result of preparing to port an ASP.NET site to MVC. 
Some of the reference architectures linked to in that post are based on ASP.NET MVC. Here is another post about NHibernate, but in the "Best Practices & Reference Applications" section most if not all of the reference applications listed are ASP.NET MVC applications also. Reference architectures can be extremely useful for quickly getting a feeling for how an optimal, maintainable ASP.NET MVC site might be designed.
[ ".net", "asp.net", "asp.net-mvc", "webforms" ]
5
2
744
4
0
2008-08-17T16:18:54.097000
2008-08-17T17:01:50.593000
13,708
13,719
Resources on wordpress theme-development
What are the best resources for Wordpress theme-development? I am currently in the phase of starting my own blog, and don't want to use one of the many free themes. I already have a theme for my website, so I want to read about best practices. Any advice on how to get started would be very welcome :) I have now created my theme (woohoo!), and thought I should summarize the best resources I found. Let's see..

Resources:
- ThemeTation's three-part guide to create a wordpress-theme from scratch
- Nettuts.com's guide: How to Create a Wordpress Theme from Scratch - didn't actually use this, it's a quite new article, but anyway - it's great. It will get a follow-up in the next few days too..
- Wordpress.org's own guide on templates - definitely a must-read for everyone new to wordpress-designing..
- "The loop" - essential knowledge, also a must-read
- Directory of all the template tags - used by wordpress to actually output blog-content..

Inspiration:
- Smashing Magazine's lists: first, one more, yet another one
- Wordpress.org's theme-directory
I think that the best way to learn is to look at how other people construct their themes. The first one to start one is the Default Kubrick theme that is included in the standard WordPress install. It has all of the basics and will show you some advanced techniques like including sidebar widgets. Next, in conjunction with the docs on theme development (previously mentioned by Mark), Blog Design and Layout and Using Themes, go to the Theme Directory on the Wordpress.org site, download a couple of popular themes, and go through them, looking up any template tags or techniques that you don't understand. After you do this, you should be more than well-equipped to write your own theme from scratch, or modify an existing theme to your needs.
Resources on wordpress theme-development What are the best resources for Wordpress theme-development? I am currently in the phase of starting my own blog, and don't want to use one of the many free themes. I already have a theme for my website, so I want to read about best-practices. Any advice on how to get started would be very welcome:) I have now created my theme (wohoo!), and thought I should summarize the best resources I found. Lets see.. Resources: ThemeTation's three-part guide to create a wordpress-theme from scratch Nettuts.com's guide: How to Create a Wordpress Theme from Scratch Didn't actually use this, it's a quite new article, but anyway - it's great. It will get a follow-up in the next few days too.. Wordpress.org's own guide on templates Definatly a must-read for everyone new to wordpress-designing.. "The loop" Essential knowledge, also a must-read Directory of all the template tags Used by wordpress to actually output blog-content.. Inspiration: Smashing Magazine's lists: first, one more, yet another one Wordpress.org's theme-directory
TITLE: Resources on wordpress theme-development QUESTION: What are the best resources for Wordpress theme-development? I am currently in the phase of starting my own blog, and don't want to use one of the many free themes. I already have a theme for my website, so I want to read about best-practices. Any advice on how to get started would be very welcome:) I have now created my theme (wohoo!), and thought I should summarize the best resources I found. Lets see.. Resources: ThemeTation's three-part guide to create a wordpress-theme from scratch Nettuts.com's guide: How to Create a Wordpress Theme from Scratch Didn't actually use this, it's a quite new article, but anyway - it's great. It will get a follow-up in the next few days too.. Wordpress.org's own guide on templates Definatly a must-read for everyone new to wordpress-designing.. "The loop" Essential knowledge, also a must-read Directory of all the template tags Used by wordpress to actually output blog-content.. Inspiration: Smashing Magazine's lists: first, one more, yet another one Wordpress.org's theme-directory ANSWER: I think that the best way to learn is to look at how other people construct their themes. The first one to start one is the Default Kubrick theme that is included in the standard WordPress install. It has all of the basics and will show you some advanced techniques like including sidebar widgets. Next, in conjunction with the docs on theme development (previously mentioned by Mark), Blog Design and Layout and Using Themes, go to the Theme Directory on the Wordpress.org site, download a couple of popular themes, and go through them, looking up any template tags or techniques that you don't understand. After you do this, you should be more than well-equipped to write your own theme from scratch, or modify an existing theme to your needs.
[ "wordpress", "themes" ]
13
8
2,738
11
0
2008-08-17T16:35:15.920000
2008-08-17T16:55:33.820000
13,718
13,735
How to schedule a batch process in asp.net
I want to run a weekly batch process in an asp.net page. How can that be done? Thanks
Are there any known drawbacks with the solution? Here is the blog post in which Jeff Atwood discusses this approach. As with most of Jeff's posts, the meat is in the comments, where the pros and cons have been discussed in extreme detail by a large number of opinionated folks, so that is an ideal place to have that particular part of your question answered.
How to schedule a batch process in asp.net I want to run a weekly batch process in an asp.net page. How can that be done? Thanks
TITLE: How to schedule a batch process in asp.net QUESTION: I want to run a weekly batch process in an asp.net page. How can that be done? Thanks ANSWER: Is there any known drawbacks with the solution? Here is the blog post in which Jeff Atwood discusses this approach. As with most of Jeff's post, the meat is in the comments where the pros and cons have been discussed in extreme detail by a large number of opinionated folks, so that is an ideal place to have that particular part of your question answered.
[ "asp.net" ]
2
3
4,851
3
0
2008-08-17T16:54:57.190000
2008-08-17T17:15:30.537000
13,725
13,742
In Cocoa do you prefer NSInteger or int, and why?
NSInteger / NSUInteger are Cocoa-defined replacements for the regular built-in types. Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms?
The way I understand it is that NSInteger et al. are architecture-safe versions of the corresponding C types. Basically their sizes vary depending on the architecture, but NSInteger, for example, is guaranteed to hold any valid pointer for the current architecture. Apple recommends that you use these to work with OS X 10.5 and onwards, and Apple's APIs will use them, so it's definitely a good idea to get into the habit of using them. They require a little more typing, but apart from that there doesn't seem to be any reason not to use them.
In Cocoa do you prefer NSInteger or int, and why? NSInteger / NSUInteger are Cocoa-defined replacements for the regular built-in types. Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms?
TITLE: In Cocoa do you prefer NSInteger or int, and why? QUESTION: NSInteger / NSUInteger are Cocoa-defined replacements for the regular built-in types. Is there any benefit to using the NS* types over the built-ins? Which do you prefer and why? Are NSInteger and int the same width on 32-bit / 64-bit platforms? ANSWER: The way I understand it is that NSInteger et al. are architecture safe versions of the corresponding C types. Basically their size vary depending on the architecture, but NSInteger, for example, is guaranteed to hold any valid pointer for the current architecture. Apple recommends that you use these to work with OS X 10.5 and onwards, and Apple's API:s will use them, so it's definitely a good idea to get into the habit of using them. They require a little more typing, but apart from that it doesn't seem to be any reason not to use them.
[ "objective-c", "cocoa", "types" ]
48
58
24,040
5
0
2008-08-17T17:01:45.150000
2008-08-17T17:20:31.803000
13,745
13,756
Resharper and TortoiseSVN
Is there any good way to deal with the class renaming refactor from Resharper when the file is under source control and TortoiseSVN is the client? I am trying VisualSVN right now but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out. Also not sure if this feature alone is worth the cost of VisualSVN. Update: I have uninstalled the trial of VisualSVN and tried AnkhSVN. It seems to provide the same functionality so far. I know this may sound trivial, but the indicators seem to be lacking some functionality; it seems like they don't trickle up. (If a file in the project is different I would think the project indicator would indicate this as well.) I tend to keep my projects rolled up as much as possible, so it is hard to tell what files have changed unless the project is expanded.
TortoiseSVN 1.5 has a neat hidden feature on the check in window: Select a missing file and a new file and right-click. One of the options will be "fix move". I tend to refactor away, and then use this to fix any files where the name has changed.
Resharper and TortoiseSVN Is there any good way to deal with the class renaming refactor from Resharper when the file is under source control and TortoiseSVN is the client. I have am trying VisualSVN right now but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out. Also not sure if this feature alone is worth the cost of VisualSVN. Update: I have uninstalled the trial of VisualSVN and tried AhknSVN. I seems to provided the same functionality so far. I know this my sound trivial but the indicators seem to be lacking some functionality, it seems like they don't trickle up. (If a file in the project is different I would think the project indicator would indicate this as well.) I tend to keep my projects rolled as much as possible, so it is hard to tell what files have changed unless the project is expanded.
TITLE: Resharper and TortoiseSVN QUESTION: Is there any good way to deal with the class renaming refactor from Resharper when the file is under source control and TortoiseSVN is the client. I have am trying VisualSVN right now but I haven't had the need to rename anything recently. I don't want to change our repository just to try this out. Also not sure if this feature alone is worth the cost of VisualSVN. Update: I have uninstalled the trial of VisualSVN and tried AhknSVN. I seems to provided the same functionality so far. I know this my sound trivial but the indicators seem to be lacking some functionality, it seems like they don't trickle up. (If a file in the project is different I would think the project indicator would indicate this as well.) I tend to keep my projects rolled as much as possible, so it is hard to tell what files have changed unless the project is expanded. ANSWER: TortoiseSVN 1.5 has a neat hidden feature on the check in window: Select a missing file and a new file and right-click. One of the options will be "fix move". I tend to refactor away, and then use this to fix any files where the name has changed.
[ "svn", "tortoisesvn", "resharper", "visualsvn", "ankhsvn" ]
13
12
1,872
4
0
2008-08-17T17:22:26.590000
2008-08-17T17:35:59.167000
13,753
13,972
quoting System.DirectoryServices.ResultPropertyCollection
I'm missing something here:

$objSearcher = New-Object System.DirectoryServices.DirectorySearcher
$objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry
$objSearcher.Filter = ("(objectclass=computer)")
$computers = $objSearcher.findall()

So the question is why do the two following outputs differ?

$computers | %{
    "Server name in quotes $_.properties.name"
    "Server name not in quotes " + $_.properties.name
}

PS> $computers[0] | %{"$_.properties.name"; $_.properties.name}
System.DirectoryServices.SearchResult.properties.name
GORILLA
When you included $_.properties.name in the string, it was returning the type name of the property. When a variable is included in a string and the string is evaluated, PowerShell calls the ToString method on the object referenced by the variable (not including the members specified after it). In this case, the ToString method is returning the type name. You can force the evaluation of the variable and its members, similar to what EBGreen suggested, by using "Server name in quotes $($_.properties.name)". In the other scenario PowerShell is evaluating the variable and members specified first and then adding the result to the previous string. You are right that you are getting back a collection of properties. If you pipe $computers[0].properties to Get-Member, you can explore the object model right from the command line. The important part is below.

   TypeName: System.DirectoryServices.ResultPropertyCollection

Name   MemberType Definition
----   ---------- ----------
Values Property   System.Collections.ICollection Values {get;}
quoting System.DirectoryServices.ResultPropertyCollection I'm missing something here: $objSearcher = New-Object System.DirectoryServices.DirectorySearcher $objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry $objSearcher.Filter = ("(objectclass=computer)") $computers = $objSearcher.findall() So the question is why do the two following outputs differ? $computers | %{ "Server name in quotes $_.properties.name" "Server name not in quotes " + $_.properties.name } PS> $computers[0] | %{"$_.properties.name"; $_.properties.name} System.DirectoryServices.SearchResult.properties.name GORILLA
TITLE: quoting System.DirectoryServices.ResultPropertyCollection QUESTION: I'm missing something here: $objSearcher = New-Object System.DirectoryServices.DirectorySearcher $objSearcher.SearchRoot = New-Object System.DirectoryServices.DirectoryEntry $objSearcher.Filter = ("(objectclass=computer)") $computers = $objSearcher.findall() So the question is why do the two following outputs differ? $computers | %{ "Server name in quotes $_.properties.name" "Server name not in quotes " + $_.properties.name } PS> $computers[0] | %{"$_.properties.name"; $_.properties.name} System.DirectoryServices.SearchResult.properties.name GORILLA ANSWER: When you included $_.properties.name in the string, it was returning the type name of the property. When a variable is included in a string and the string is evaluated, it calls the ToString method on that object referenced by the variable (not including the members specified after). In this case, the ToString method is returning the type name. You can force the evaluation of the variable and members similar to what EBGreen suggested, but by using "Server name in quotes $($_.properties.name)" In the other scenario PowerShell is evaluating the variable and members specified first and then adding it to the previous string. You are right that you are getting back a collection of properties. If you pipe $computer[0].properties to get-member, you can explore the object model right from the command line. The important part is below. TypeName: System.DirectoryServices.ResultPropertyCollection Name MemberType Definition Values Property System.Collections.ICollection Values {get;}
[ ".net", "powershell", "scripting", "active-directory" ]
1
1
2,365
3
0
2008-08-17T17:33:54.727000
2008-08-17T22:10:02.907000
13,763
14,782
How can I remove a child node in HTML using JavaScript?
Is there a function like document.getElementById("FirstDiv").clear()?
To answer the original question - there are various ways to do this, but the following would be the simplest. If you already have a handle to the child node that you want to remove, i.e. you have a JavaScript variable that holds a reference to it: myChildNode.parentNode.removeChild(myChildNode); Obviously, if you are not using one of the numerous libraries that already do this, you would want to create a function to abstract this out: function removeElement(node) { node.parentNode.removeChild(node); } EDIT: As has been mentioned by others: if you have any event handlers wired up to the node you are removing, you will want to make sure you disconnect those before the last reference to the node being removed goes out of scope, lest poor implementations of the JavaScript interpreter leak memory.
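The question's hypothetical `clear()` - emptying an element rather than removing it - can be built from the same `removeChild` primitive. A minimal sketch (the function name `clearChildren` is mine, not a DOM API):

```javascript
// Remove every child of a node, one at a time.
// In a browser you would call it as:
//   clearChildren(document.getElementById("FirstDiv"));
function clearChildren(node) {
  while (node.firstChild) {
    node.removeChild(node.firstChild);
  }
}
```

Modern browsers also support `element.remove()` for deleting an element itself and `element.replaceChildren()` for clearing one, but the loop above works everywhere the original answer's code does.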
How can I remove a child node in HTML using JavaScript? Is there a function like document.getElementById("FirstDiv").clear()?
TITLE: How can I remove a child node in HTML using JavaScript? QUESTION: Is there a function like document.getElementById("FirstDiv").clear()? ANSWER: To answer the original question - there are various ways to do this, but the following would be the simplest. If you already have a handle to the child node that you want to remove, i.e. you have a JavaScript variable that holds a reference to it: myChildNode.parentNode.removeChild(myChildNode); Obviously, if you are not using one of the numerous libraries that already do this, you would want to create a function to abstract this out: function removeElement(node) { node.parentNode.removeChild(node); } EDIT: As has been mentioned by others: if you have any event handlers wired up to the node you are removing, you will want to make sure you disconnect those before the last reference to the node being removed goes out of scope, lest poor implementations of the JavaScript interpreter leak memory.
[ "javascript", "html", "dom" ]
84
109
136,123
9
0
2008-08-17T17:48:04.057000
2008-08-18T15:16:16.437000
13,768
13,923
Open ID - What happens when you decide you don't like your existing provider?
So I'm not quite convinced about OpenID yet, and here is why: I already have an OpenID because I have a Blogger account. But I discovered that Blogger seems to be a poor provider when I tried to identify myself on the altdotnet page and received the following message: You must use an OpenID persona that specifies a valid email address. Let's forget the details of this little error and assume that I want to change to a different provider. So I sign up with a different provider and get a new, different OpenID - how would I switch my existing StackOverflow account to be associated with my new OpenID? I understand this would be easy if I had my own domain set up to delegate to a provider, because I could just change the delegation. Assume I do not have my own domain.
Ideally Stack Overflow would allow you to change your OpenID. OTOH, ideally you would have set up OpenID delegation on your own site, and used that to identify yourself. With delegation, you would need only change which service you delegate to. You'd still be identified by your own URL that you control. But that doesn't help now unless Stack Overflow lets you change it. Most sites tie OpenIDs to real accounts, and would let you switch or at least add additional OpenIDs. Doesn't seem like you could map OpenIDs to accounts 1:1 unless the result of access is totally trivial; otherwise you're in a situation like this where you lose your existing questions, answers, and reputation for switching IDs.
Open ID - What happens when you decide you don't like your existing provider? So I'm not quite convinced about OpenID yet, and here is why: I already have an OpenID because I have a Blogger account. But I discovered that Blogger seems to be a poor provider when I tried to identify myself on the altdotnet page and recieved the following message: You must use an OpenID persona that specifies a valid email address. Lets forget the details of this little error and assume that I want to change to a different provider. So I sign up with a different provider and get a new, different OpenID - how would I switch my existing StackOverflow account to be associated with my new OpenID? I understand this would be easy if I had my own domain set up to delegate to a provider, because I could just change the delegation. Assume I do not have my own domain.
TITLE: Open ID - What happens when you decide you don't like your existing provider? QUESTION: So I'm not quite convinced about OpenID yet, and here is why: I already have an OpenID because I have a Blogger account. But I discovered that Blogger seems to be a poor provider when I tried to identify myself on the altdotnet page and recieved the following message: You must use an OpenID persona that specifies a valid email address. Lets forget the details of this little error and assume that I want to change to a different provider. So I sign up with a different provider and get a new, different OpenID - how would I switch my existing StackOverflow account to be associated with my new OpenID? I understand this would be easy if I had my own domain set up to delegate to a provider, because I could just change the delegation. Assume I do not have my own domain. ANSWER: Ideally Stack Overflow would allow you to change your OpenID. OTOH, ideally you would have set up OpenID delegation on your own site, and used that to identify yourself. With delegation, you would need only change which service you delegate to. You'd still be identified by your own URL that you control. But that doesn't help now unless Stack Overflow lets you change it. Most sites tie OpenIDs to real accounts, and would let you switch or at least add additional OpenIDs. Doesn't seem like you could map OpenIDs to accounts 1:1 unless the result of access is totally trivial; otherwise you're in a situation like this where you lose your existing questions, answers, and reputation for switching IDs.
[ "openid" ]
17
16
1,045
3
0
2008-08-17T17:55:57.723000
2008-08-17T20:59:10.850000
13,786
13,930
Should we support IE6 anymore?
Are we supposed to find workarounds in our web applications so that they will work in every situation? Is it time to do away with IE6 programming?
This depends so much on the context of the application, and of its users. There are two key aspects: what browsers are your users using; and how important is it that they can access/interact with your site. The first part is generally easily established, if you have an existing version with stats (Google Analytics or similar is simple and great) or you have access to such data from a similar app / product. The latter is a little harder to decide. If you're developing a publicly available, ad-sponsored site for example, it's just a numbers game - work out how much of your audience you lose and factor what that's worth against the additional development time. If, however, you're doing something specifically at the request of a group of users - like an enterprise web app for example - you may be stuck with what those users are browsing with. In my experience those two things can change significantly for different apps. We've got web apps still (stats from last week) with close to 70% IE6 usage (20% IE7, the rest split between IE5.5 and FF2) and others with close to 0% IE6. For relatively obvious reasons, the latter are the kind of apps where losing a few users isn't so important. Having said all that, we generally find it easy to support IE6 (and IE5.5 as others point out) simply because we've been doing so for a while. Yes, it's a pain and yes, it takes more time, but often not too much. There are very few situations where having to support IE6 drastically changes what kind of development you do - it just means a little more work. The other nice benefit of supporting it (and testing for it) is that you generally end up doing better all-round browser and quirks testing as a result of the polarity of IE6's behaviours. 
You need to decide whether or not you're supposed to find workarounds based on the requirements of your app/product. That it's IE6 isn't really that relevant - this kind of problem happens all the time in other situations; it just so happens that IE6 is a great example of the costs and implications of mixed standards, versioning and legacy support.
Should we support IE6 anymore? Are we supposed to find workarounds in our web applications so that they will work in every situation? Is it time to do away with IE6 programming?
TITLE: Should we support IE6 anymore? QUESTION: Are we supposed to find workarounds in our web applications so that they will work in every situation? Is it time to do away with IE6 programming? ANSWER: This depends so much on the context of the application, and of its users. There are two key aspects: what browsers are your users using; and how important is it that they can access/interact with your site. The first part is generally easily establish, if you have an existing version with stats (Google Analytics or similar is simple and great) or you have access to such data from a similar app / product. The later is a little harder to decide. If you're developing a publically availalbe, ad-sponsored site for exmple, it's just a numbers game - work out how much of your audience you lose and factor what that's worth against the additional development time. If, however you're doing something specifically at the request of a group of users - like an enterprise web app for example - you may be stuck with what those users are browsing with. In my experience those two things can change significantly for different apps. We've got web apps still (stats from last week) with close to 70% IE6 usage (20% IE7, the rest split between IE5.5 and FF2) and others with close to 0% IE6. For relatively ovbivous reasons, the latter are the kind of apps where losing a few users isn't so important. Having said all that, we generally find it easy to support IE6 (and IE5.5 as others point out) simply because we've been doing so for a while. Yes, it's a pain and yes, it takes more time, but often not too much. There are very few situations where having to support IE6 drastically changes what kind development you do - it just means a little more work. The other nice benefit of supporting it (and testing for it) is that you generally end up doing better all-round browser and quirks testing as a result of the polarity of IE6's behaviours. 
You need to decide whether or not you're supposed to find workarounds, based on the requirements of your app/product. That it's IE6 isn't really that relevant - this kind of problem happens all the time in other situations; it just so happens that IE6 is a great example of the costs and implications of mixed standards, versioning and legacy support.
[ "internet-explorer", "internet-explorer-6" ]
15
15
5,783
35
0
2008-08-17T18:15:14.097000
2008-08-17T21:08:32.343000
13,791
13,943
Python Regular Expressions to implement string unescaping
I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what... >>> import re >>> mystring = r"This is \n a test \r" >>> p = re.compile( "\\\\(\\S)" ) >>> p.sub( "\\1", mystring ) 'This is n a test r' >>> p.sub( "\\\\\\1", mystring ) 'This is \\n a test \\r' >>> p.sub( "\\\\1", mystring ) 'This is \\1 a test \\1' I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
Isn't that what Anders' second example does? In 2.5 there's also a string-escape encoding you can apply: >>> mystring = r"This is \n a test \r" >>> mystring.decode('string-escape') 'This is \n a test \r' >>> print mystring.decode('string-escape') This is a test >>>
Python Regular Expressions to implement string unescaping I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what... >>> import re >>> mystring = r"This is \n a test \r" >>> p = re.compile( "\\\\(\\S)" ) >>> p.sub( "\\1", mystring ) 'This is n a test r' >>> p.sub( "\\\\\\1", mystring ) 'This is \\n a test \\r' >>> p.sub( "\\\\1", mystring ) 'This is \\1 a test \\1' I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light?
TITLE: Python Regular Expressions to implement string unescaping QUESTION: I am trying to implement string unescaping with Python regex and backreferences, and it doesn't seem to want to work very well. I'm sure it's something I'm doing wrong but I can't figure out what... >>> import re >>> mystring = r"This is \n a test \r" >>> p = re.compile( "\\\\(\\S)" ) >>> p.sub( "\\1", mystring ) 'This is n a test r' >>> p.sub( "\\\\\\1", mystring ) 'This is \\n a test \\r' >>> p.sub( "\\\\1", mystring ) 'This is \\1 a test \\1' I'd like to replace \\[char] with \[char], but backreferences in Python don't appear to follow the same rules they do in every other implementation I've ever used. Could someone shed some light? ANSWER: Isn't that what Anders' second example does? In 2.5 there's also a string-escape encoding you can apply: >>> mystring = r"This is \n a test \r" >>> mystring.decode('string-escape') 'This is \n a test \r' >>> print mystring.decode('string-escape') This is a test >>>
[ "python", "regex", "backreference" ]
17
10
2,325
5
0
2008-08-17T18:20:21.060000
2008-08-17T21:36:32.693000
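A footnote on the record above: the `string-escape` codec in the answer is Python 2 only. A rough Python 3 equivalent, sketched here as an assumption rather than a drop-in replacement (the `unicode_escape` codec treats non-ASCII bytes as Latin-1, so this is only safe for ASCII input), is:

```python
# Python 3 sketch of the Python 2 trick: mystring.decode('string-escape').
# Caveat (assumption to verify for your data): unicode_escape interprets
# non-ASCII bytes as Latin-1, so restrict this to ASCII input.
mystring = r"This is \n a test \r"

# Round-trip through bytes so the escape sequences get decoded.
unescaped = mystring.encode("latin-1").decode("unicode_escape")

print(repr(unescaped))  # the \n and \r are now real control characters
```

For text that may contain arbitrary Unicode, a regex replacement with a callback (as in the question) avoids the Latin-1 caveat entirely.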
13,806
13,870
Any Windows APIs to get file handles besides createfile and openfile?
I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with the file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the APIs, or that there is some other API for creating files/obtaining handles for them.
You can use Sysinternals' FileMon. It is an excellent monitor that can tell you exactly which file-related system calls are being made and what their parameters are. I think that this approach is much easier than hooking API calls and much less intrusive.
Any Windows APIs to get file handles besides createfile and openfile? I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with the file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the APIs, or that there is some other API for creating files/obtaining handles for them.
TITLE: Any Windows APIs to get file handles besides createfile and openfile? QUESTION: I am trying to snoop on a log file that an application is writing to. I have successfully hooked createfile with the detours library from MSR, but createfile never seems to be called with the file I am interested in snooping on. I have also tried hooking openfile with the same results. I am not an experienced Windows/C++ programmer, so my initial two thoughts were either that the application calls createfile before I hook the APIs, or that there is some other API for creating files/obtaining handles for them. ANSWER: You can use Sysinternals' FileMon. It is an excellent monitor that can tell you exactly which file-related system calls are being made and what their parameters are. I think that this approach is much easier than hooking API calls and much less intrusive.
[ "c++", "windows", "api", "logfile" ]
5
8
3,766
3
0
2008-08-17T18:27:08.473000
2008-08-17T19:26:56.637000
13,848
13,861
VisualSVN undelete with TortoiseSVN
Using TortoiseSVN against VisualSVN I deleted a source file that I should not have deleted. Now this isn't a train smash because I can get the file back from the daily backup. However I would like to undelete it from SVN (VisualSVN) so that I can get the history back. However I can't work out how to do that. Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN?
What you have to do is the following: Right click on the folder where you think it is. Choose Show Log under TortoiseSVN. Find the checkin that the file was deleted in. Go down the list and find the file. Select Revert changes for this version to undelete.
VisualSVN undelete with TortoiseSVN Using TortoiseSVN against VisualSVN I deleted a source file that I should not have deleted. Now this isn't a train smash because I can get the file back from the daily backup. However I would like to undelete it from SVN (VisualSVN) so that I can get the history back. However I can't work out how to do that. Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN?
TITLE: VisualSVN undelete with TortoiseSVN QUESTION: Using TortoiseSVN against VisualSVN I deleted a source file that I should not have deleted. Now this isn't a train smash because I can get the file back from the daily backup. However I would like to undelete it from SVN (VisualSVN) so that I can get the history back. However I can't work out how to do that. Anybody know how to undelete a file from VisualSVN either using the VisualSVN interface or the latest version of TortoiseSVN? ANSWER: What you have to do is the following: Right click on the folder where you think it is. Choose Show Log under TortoiseSVN. Find the checkin that the file was deleted in. Go down the list and find the file. Select Revert changes for this version to undelete.
[ "svn", "tortoisesvn", "visualsvn-server" ]
6
12
2,819
2
0
2008-08-17T19:03:33.500000
2008-08-17T19:17:43.783000
13,851
13,854
IE7: header above menu
I am having trouble with IE7. I have a header, which is an IMG. Under it I have a div that represents a menu; they have to be attached to each other without space in between. Both are 1000px wide. In Opera and FireFox the header and the menu are neatly attached to each other. However, in IE7, there is a small space between the menu DIV and the IMG. I have tried explicitly defining padding and margin on the IMG, however it does not work. I have had this problem before, so it seems to be an IE7 quirk. My HTML Code: Home | Leden | Agenda | Foto's | Geschiedenis | Gastenboek
Try the IE Developer Toolbar, which will let you inspect what is going on with the elements and give you outlines of the areas covered. It might give you a better understanding of the problem.
IE7: header above menu I am having trouble with IE7. I have a header, which is an IMG. Under it I have a div that represents a menu; they have to be attached to each other without space in between. Both are 1000px wide. In Opera and FireFox the header and the menu are neatly attached to each other. However, in IE7, there is a small space between the menu DIV and the IMG. I have tried explicitly defining padding and margin on the IMG, however it does not work. I have had this problem before, so it seems to be an IE7 quirk. My HTML Code: Home | Leden | Agenda | Foto's | Geschiedenis | Gastenboek
TITLE: IE7: header above menu QUESTION: I am having trouble with IE7. I have a header, which is an IMG. Under it I have a div that represents a menu; they have to be attached to each other without space in between. Both are 1000px wide. In Opera and FireFox the header and the menu are neatly attached to each other. However, in IE7, there is a small space between the menu DIV and the IMG. I have tried explicitly defining padding and margin on the IMG, however it does not work. I have had this problem before, so it seems to be an IE7 quirk. My HTML Code: Home | Leden | Agenda | Foto's | Geschiedenis | Gastenboek ANSWER: Try the IE Developer Toolbar, which will let you inspect what is going on with the elements and give you outlines of the areas covered. It might give you a better understanding of the problem.
[ "html", "css", "internet-explorer-7" ]
1
3
524
7
0
2008-08-17T19:10:28.510000
2008-08-17T19:12:09.193000
13,857
141,426
Can you explain closures (as they relate to Python)?
I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.
Closure on closures Objects are data with methods attached, closures are functions with data attached. def make_counter(): i = 0 def counter(): # counter() is a closure nonlocal i i += 1 return i return counter c1 = make_counter() c2 = make_counter() print (c1(), c1(), c2(), c2()) # -> 1 2 1 2
Can you explain closures (as they relate to Python)? I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them.
TITLE: Can you explain closures (as they relate to Python)? QUESTION: I've been reading a lot about closures and I think I understand them, but without clouding the picture for myself and others, I am hoping someone can explain closures as succinctly and clearly as possible. I'm looking for a simple explanation that might help me understand where and why I would want to use them. ANSWER: Closure on closures Objects are data with methods attached, closures are functions with data attached. def make_counter(): i = 0 def counter(): # counter() is a closure nonlocal i i += 1 return i return counter c1 = make_counter() c2 = make_counter() print (c1(), c1(), c2(), c2()) # -> 1 2 1 2
[ "python", "functional-programming", "closures" ]
102
114
19,133
13
0
2008-08-17T19:14:30.747000
2008-09-26T19:28:32.573000
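The "functions with data attached" phrasing in the answer above can be made literal: CPython stores a closure's free variables in cells on the function's `__closure__` attribute. A small sketch extending the counter example (the `cell_contents` inspection is illustrative, not part of the original answer):

```python
def make_counter():
    i = 0
    def counter():
        nonlocal i  # rebind the enclosing i (Python 3 syntax)
        i += 1
        return i
    return counter

c1 = make_counter()
c2 = make_counter()

# Each call to make_counter() closes over its own independent i.
print(c1(), c1(), c2(), c2())  # -> 1 2 1 2

# The "attached data" is visible on the function object itself:
print(c1.__closure__[0].cell_contents)  # -> 2 (c1's private i)
```

So two counters built from the same factory never share state, which is exactly the behavior the answer's `# -> 1 2 1 2` comment demonstrates.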
13,893
526,939
How do I make a custom .net client profile installer?
For .net 3.5 SP1, Microsoft have the new client profile which installs only a subset of .net 3.5 SP1 on to Windows XP users' machines. I'm aware of how to make my assemblies client-profile ready. And I've read the articles on how to implement an installer for ClickOnce or MSI. But I've been using Inno Setup for my project so far and I'd like to continue to use it (as an Express user, I can't easily make MSIs), so I need to know how to use the client-profile installer in a custom environment. There is an article on a Deployment.xml schema, but no indication of how to write one, package it or anything else. Can someone explain this process? Finding the articles I linked to alone was a painful search experience.
Microsoft has now shipped the Client Profile Configuration Designer (Beta). This designer lets you edit the XML files with some limitations; this isn't a 'Google beta' by any means. Information and download
How do I make a custom .net client profile installer? For .net 3.5 SP1, Microsoft have the new client profile which installs only a subset of .net 3.5 SP1 on to Windows XP users' machines. I'm aware of how to make my assemblies client-profile ready. And I've read the articles on how to implement an installer for ClickOnce or MSI. But I've been using Inno Setup for my project so far and I'd like to continue to use it (as an Express user, I can't easily make MSIs), so I need to know how to use the client-profile installer in a custom environment. There is an article on a Deployment.xml schema, but no indication of how to write one, package it or anything else. Can someone explain this process? Finding the articles I linked to alone was a painful search experience.
TITLE: How do I make a custom .net client profile installer? QUESTION: For .net 3.5 SP1, Microsoft have the new client profile which installs only a subset of .net 3.5 SP1 on to Windows XP users' machines. I'm aware of how to make my assemblies client-profile ready. And I've read the articles on how to implement an installer for ClickOnce or MSI. But I've been using Inno Setup for my project so far and I'd like to continue to use it (as an Express user, I can't easily make MSIs), so I need to know how to use the client-profile installer in a custom environment. There is an article on a Deployment.xml schema, but no indication of how to write one, package it or anything else. Can someone explain this process? Finding the articles I linked to alone was a painful search experience. ANSWER: Microsoft has now shipped the Client Profile Configuration Designer (Beta). This designer lets you edit the XML files with some limitations; this isn't a 'Google beta' by any means. Information and download
[ "deployment", ".net-3.5", "installation", ".net-client-profile" ]
4
1
1,078
3
0
2008-08-17T20:03:22.600000
2009-02-09T02:02:19.423000
13,927
14,054
In Cocoa do I need to remove an Object from receiving KVO notifications when deallocating it?
When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc?
You need to use -removeObserver:forKeyPath: to remove the observer before -[NSObject dealloc] runs, so yes, doing it in the -dealloc method of your class would work. Better than that though would be to have a deterministic point where whatever owns the object that's doing the observing could tell it it's done and will (eventually) be deallocated. That way, you can stop observing immediately when the thing doing the observing is no longer needed, regardless of when it's actually deallocated. This is important to keep in mind because the lifetime of objects in Cocoa isn't as deterministic as some people seem to think it is. The various Mac OS X frameworks themselves will send your objects -retain and -autorelease, extending their lifetime beyond what you might otherwise think it would be. Furthermore, when you make the transition to Objective-C garbage collection, you'll find that -finalize will run at very different times — and in very different contexts — than -dealloc did. For one thing, finalization takes place on a different thread, so you really can't safely send -removeObserver:forKeyPath: to another object in a -finalize method. Stick to memory (and other scarce resource) management in -dealloc and -finalize, and use a separate -invalidate method to have an owner tell an object you're done with it at a deterministic point; do things like removing KVO observations there. The intent of your code will be clearer and you will have fewer subtle bugs to take care of.
In Cocoa do I need to remove an Object from receiving KVO notifications when deallocating it? When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc?
TITLE: In Cocoa do I need to remove an Object from receiving KVO notifications when deallocating it? QUESTION: When I've registered an object foo to receive KVO notifications from another object bar (using addObserver:...), if I then deallocate foo do I need to send a removeObserver:forKeyPath: message to bar in -dealloc? ANSWER: You need to use -removeObserver:forKeyPath: to remove the observer before -[NSObject dealloc] runs, so yes, doing it in the -dealloc method of your class would work. Better than that though would be to have a deterministic point where whatever owns the object that's doing the observing could tell it it's done and will (eventually) be deallocated. That way, you can stop observing immediately when the thing doing the observing is no longer needed, regardless of when it's actually deallocated. This is important to keep in mind because the lifetime of objects in Cocoa isn't as deterministic as some people seem to think it is. The various Mac OS X frameworks themselves will send your objects -retain and -autorelease, extending their lifetime beyond what you might otherwise think it would be. Furthermore, when you make the transition to Objective-C garbage collection, you'll find that -finalize will run at very different times — and in very different contexts — than -dealloc did. For one thing, finalization takes place on a different thread, so you really can't safely send -removeObserver:forKeyPath: to another object in a -finalize method. Stick to memory (and other scarce resource) management in -dealloc and -finalize, and use a separate -invalidate method to have an owner tell an object you're done with it at a deterministic point; do things like removing KVO observations there. The intent of your code will be clearer and you will have fewer subtle bugs to take care of.
[ "cocoa", "macos" ]
23
39
7,318
3
0
2008-08-17T21:05:13.083000
2008-08-18T00:33:13.943000
13,938
13,969
How do I run (unit) tests in different folders/projects separately in Visual Studio?
I need some advice as to how I can easily separate test runs for unit tests and integration tests in Visual Studio. Often, or always, I structure the solution as presented in the above picture: separate projects for unit tests and integration tests. The unit tests are run very frequently while the integration tests are naturally run when the context is correctly aligned. My goal is to somehow be able to configure which tests (or test folders) to run when I use a keyboard shortcut. The tests should preferably be run by a graphical test runner (ReSharper's). So for example Alt+1 runs the tests in project BLL.Test, Alt+2 runs the tests in project DAL.Tests, Alt+3 runs them both (i.e. all the tests in the [Tests] folder), and Alt+4 runs the tests in folder [Tests.Integration]. TestDriven.net has an option of running just the tests in the selected folder or project by right-clicking it and selecting Run Test(s). Being able to do this, but via a keyboard command and with a graphical test runner, would be awesome. Currently I use VS2008, ReSharper 4 and nUnit. But advice for a setup in general is of course also appreciated.
I actually found kind of a solution for this on my own by using a keyboard command bound to a macro. The macro was recorded from the menu Tools>Macros>Record TemporaryMacro. While recording I selected my [Tests] folder and ran ReSharper's UnitTest.ContextRun. This resulted in the following macro, Sub TemporaryMacro() DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate DTE.ActiveWindow.Object.GetItem("TestUnitTest\Tests").Select(vsUISelectionType.vsUISelectionTypeSelect) DTE.ExecuteCommand("ReSharper.UnitTest_ContextRun") End Sub which was then bound to its own keyboard command in Tools>Options>Environment>Keyboard. However, what would be even more awesome is a more general solution where I can configure exactly which projects/folders/classes to run and when. For example by means of an XML file. This could then easily be checked in to version control and distributed to everyone who works with the project.
How do I run (unit) tests in different folders/projects separately in Visual Studio? I need some advice as to how I can easily separate test runs for unit tests and integration tests in Visual Studio. Often, or always, I structure the solution as presented in the above picture: separate projects for unit tests and integration tests. The unit tests are run very frequently while the integration tests are naturally run when the context is correctly aligned. My goal is to somehow be able to configure which tests (or test folders) to run when I use a keyboard shortcut. The tests should preferably be run by a graphical test runner (ReSharper's). So for example Alt+1 runs the tests in project BLL.Test, Alt+2 runs the tests in project DAL.Tests, Alt+3 runs them both (i.e. all the tests in the [Tests] folder), and Alt+4 runs the tests in folder [Tests.Integration]. TestDriven.net has an option of running just the tests in the selected folder or project by right-clicking it and selecting Run Test(s). Being able to do this, but via a keyboard command and with a graphical test runner, would be awesome. Currently I use VS2008, ReSharper 4 and nUnit. But advice for a setup in general is of course also appreciated.
TITLE: How do I run (unit) tests in different folders/projects separately in Visual Studio? QUESTION: I need some advice as to how I can easily separate test runs for unit tests and integration tests in Visual Studio. Often, or always, I structure the solution as presented in the above picture: separate projects for unit tests and integration tests. The unit tests are run very frequently while the integration tests are naturally run when the context is correctly aligned. My goal is to somehow be able to configure which tests (or test folders) to run when I use a keyboard shortcut. The tests should preferably be run by a graphical test runner (ReSharper's). So for example Alt+1 runs the tests in project BLL.Test, Alt+2 runs the tests in project DAL.Tests, Alt+3 runs them both (i.e. all the tests in the [Tests] folder), and Alt+4 runs the tests in folder [Tests.Integration]. TestDriven.net has an option of running just the tests in the selected folder or project by right-clicking it and selecting Run Test(s). Being able to do this, but via a keyboard command and with a graphical test runner, would be awesome. Currently I use VS2008, ReSharper 4 and nUnit. But advice for a setup in general is of course also appreciated. ANSWER: I actually found kind of a solution for this on my own by using a keyboard command bound to a macro. The macro was recorded from the menu Tools>Macros>Record TemporaryMacro. While recording I selected my [Tests] folder and ran ReSharper's UnitTest.ContextRun. This resulted in the following macro, Sub TemporaryMacro() DTE.Windows.Item(Constants.vsWindowKindSolutionExplorer).Activate DTE.ActiveWindow.Object.GetItem("TestUnitTest\Tests").Select(vsUISelectionType.vsUISelectionTypeSelect) DTE.ExecuteCommand("ReSharper.UnitTest_ContextRun") End Sub which was then bound to its own keyboard command in Tools>Options>Environment>Keyboard. 
However, what would be even more awesome is a more general solution where I can configure exactly which projects/folders/classes to run and when. For example by means of an XML file. This could then easily be checked in to version control and distributed to everyone who works with the project.
[ "visual-studio", "unit-testing", "configuration", "vsx", "extensibility" ]
4
2
1,700
4
0
2008-08-17T21:21:07.670000
2008-08-17T22:08:13.627000
13,941
13,949
Python Sound ("Bell")
I'd like to have a python program alert me when it has completed its task by making a beep noise. Currently, I use import os and then use a command line speech program to say "Process complete". I'd much rather it be a simple "bell." I know that there's a function that can be used in Cocoa apps, NSBeep, but I don't think that has much anything to do with this. I've also tried print(\a) but that didn't work. I'm using a Mac, if you couldn't tell by my Cocoa comment, so that may help.
Have you tried: import sys sys.stdout.write('\a') sys.stdout.flush() That works for me here on Mac OS 10.5. Actually, I think your original attempt works also with a little modification: print('\a') (You just need the single quotes around the character sequence).
Python Sound ("Bell") I'd like to have a python program alert me when it has completed its task by making a beep noise. Currently, I use import os and then use a command line speech program to say "Process complete". I'd much rather it be a simple "bell." I know that there's a function that can be used in Cocoa apps, NSBeep, but I don't think that has much anything to do with this. I've also tried print(\a) but that didn't work. I'm using a Mac, if you couldn't tell by my Cocoa comment, so that may help.
TITLE: Python Sound ("Bell") QUESTION: I'd like to have a python program alert me when it has completed its task by making a beep noise. Currently, I use import os and then use a command line speech program to say "Process complete". I'd much rather it be a simple "bell." I know that there's a function that can be used in Cocoa apps, NSBeep, but I don't think that has much anything to do with this. I've also tried print(\a) but that didn't work. I'm using a Mac, if you couldn't tell by my Cocoa comment, so that may help. ANSWER: Have you tried: import sys sys.stdout.write('\a') sys.stdout.flush() That works for me here on Mac OS 10.5. Actually, I think your original attempt works also with a little modification: print('\a') (You just need the single quotes around the character sequence).
[ "python", "macos", "audio", "terminal" ]
75
94
86,453
7
0
2008-08-17T21:33:39.780000
2008-08-17T21:46:02.533000
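One detail behind the answer above: '\a' is simply the ASCII BEL control character (code point 7), and whether it audibly beeps is up to the terminal emulator, not Python. A minimal sketch:

```python
import sys

# '\a' is ASCII BEL, code point 7 -- the terminal decides how to render it.
print(ord("\a"))  # -> 7

def bell():
    """Write BEL and flush, so buffering doesn't delay the beep."""
    sys.stdout.write("\a")
    sys.stdout.flush()

bell()
```

The explicit flush is the practical difference from a bare print: without it the bell can be held in the output buffer until the program exits.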
13,963
13,990
Best method of Textfile Parsing in C#?
I want to parse a config file sorta thing, like so: [KEY:Value] [SUBKEY:SubValue] Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me. One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0.
I considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :') Have you looked at YAML? You get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc. Here's an example: customer: name: Orion age: 26 addresses: - type: Work number: 12 street: Bob Street - type: Home number: 15 street: Secret Road There appears to be a C# library here, which I haven't used personally, but YAML is pretty simple, so "how hard can it be?" :-) I'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs).
Best method of Textfile Parsing in C#? I want to parse a config file sorta thing, like so: [KEY:Value] [SUBKEY:SubValue] Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me. One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0.
TITLE: Best method of Textfile Parsing in C#? QUESTION: I want to parse a config file sorta thing, like so: [KEY:Value] [SUBKEY:SubValue] Now I started with a StreamReader, converting lines into character arrays, when I figured there's gotta be a better way. So I ask you, humble reader, to help me. One restriction is that it has to work in a Linux/Mono environment (1.2.6 to be exact). I don't have the latest 2.0 release (of Mono), so try to restrict language features to C# 2.0 or C# 1.0. ANSWER: I considered it, but I'm not going to use XML. I am going to be writing this stuff by hand, and hand editing XML makes my brain hurt. :') Have you looked at YAML? You get the benefits of XML without all the pain and suffering. It's used extensively in the ruby community for things like config files, pre-prepared database data, etc. Here's an example: customer: name: Orion age: 26 addresses: - type: Work number: 12 street: Bob Street - type: Home number: 15 street: Secret Road There appears to be a C# library here, which I haven't used personally, but YAML is pretty simple, so "how hard can it be?" :-) I'd say it's preferable to inventing your own ad-hoc format (and dealing with parser bugs).
[ "c#", "fileparse" ]
10
13
3,100
8
0
2008-08-17T22:02:31.097000
2008-08-17T22:39:50.833000
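The question above targets C# 1.0/2.0 on Mono, but the [KEY:Value] format itself is simple enough that a tiny hand-rolled indentation parser shows the idea. Here is a Python sketch (the name parse_config and the nested-dict output shape are made up for illustration; the same stack-of-indents structure ports directly to C# 2.0 with a Stack and Dictionary):

```python
import re

# One line of the format: optional indent, then [KEY:Value].
_LINE = re.compile(r"^(\s*)\[([^:\]]+):([^\]]*)\]\s*$")

def parse_config(text):
    """Parse [KEY:Value] lines into nested dicts; deeper indents nest."""
    root = {}
    stack = [(-1, root)]  # (indent level, container to add keys to)
    for line in text.splitlines():
        if not line.strip():
            continue  # skip blank lines
        m = _LINE.match(line)
        if m is None:
            raise ValueError("unparseable line: %r" % line)
        indent, key, value = len(m.group(1)), m.group(2), m.group(3)
        while indent <= stack[-1][0]:
            stack.pop()  # dedent: close deeper scopes
        node = {"value": value, "children": {}}
        stack[-1][1][key] = node
        stack.append((indent, node["children"]))
    return root

cfg = parse_config("[KEY:Value]\n  [SUBKEY:SubValue]")
print(cfg["KEY"]["value"])                        # -> Value
print(cfg["KEY"]["children"]["SUBKEY"]["value"])  # -> SubValue
```

The stack keeps one entry per open indentation level, which is also how YAML-style indentation parsers work under the hood; that's the main bookkeeping a StreamReader-plus-character-array approach ends up reinventing.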
14,008
1,030,053
Genetic Programming in C#
I've been looking for some good genetic programming examples for C#. Anyone know of good online/book resources? Wonder if there is a C# library out there for Evolutionary/Genetic programming?
After developing my own Genetic Programming didactic application, I found a complete Genetic Programming framework called AForge.NET Genetics. It's part of the AForge.NET library. It's licensed under the LGPL.
Genetic Programming in C# I've been looking for some good genetic programming examples for C#. Anyone know of good online/book resources? Wonder if there is a C# library out there for Evolutionary/Genetic programming?
TITLE: Genetic Programming in C# QUESTION: I've been looking for some good genetic programming examples for C#. Anyone know of good online/book resources? Wonder if there is a C# library out there for Evolutionary/Genetic programming? ANSWER: After developing my own Genetic Programming didactic application, I found a complete Genetic Programming framework called AForge.NET Genetics. It's part of the AForge.NET library. It's licensed under the LGPL.
[ "c#", "genetic-algorithm", "genetic-programming", "evolutionary-algorithm" ]
60
29
33,770
12
0
2008-08-17T23:25:45.733000
2009-06-23T00:22:44.647000
14,029
40,687
Disabling a ListView in C#, but still showing the current selection
I have a ListView control, and I'm trying to figure out the easiest/best way to disallow changing the selected row(s), without hiding the selected row(s). I know there's a HideSelection property, but that only works when the ListView is still enabled (but not focused). I need the selection to be viewable even when the ListView is disabled. How can I implement this?
You could also make the ListView ownerdraw. You then have complete control over how the items look whether they are selected or not or whether the ListView itself is enabled or not. The DrawListViewItemEventArgs provides a way to ask the ListView to draw individual parts of the item so you only have to draw the bits you're interested in. For example, you can draw the background of the item but leave it up to the ListView to draw the text.
Disabling a ListView in C#, but still showing the current selection I have a ListView control, and I'm trying to figure out the easiest/best way to disallow changing the selected row(s), without hiding the selected row(s). I know there's a HideSelection property, but that only works when the ListView is still enabled (but not focused). I need the selection to be viewable even when the ListView is disabled. How can I implement this?
TITLE: Disabling a ListView in C#, but still showing the current selection QUESTION: I have a ListView control, and I'm trying to figure out the easiest/best way to disallow changing the selected row(s), without hiding the selected row(s). I know there's a HideSelection property, but that only works when the ListView is still enabled (but not focused). I need the selection to be viewable even when the ListView is disabled. How can I implement this? ANSWER: You could also make the ListView ownerdraw. You then have complete control over how the items look whether they are selected or not or whether the ListView itself is enabled or not. The DrawListViewItemEventArgs provides a way to ask the ListView to draw individual parts of the item so you only have to draw the bits you're interested in. For example, you can draw the background of the item but leave it up to the ListView to draw the text.
[ "c#", ".net", "winforms", "listview" ]
7
2
5,423
3
0
2008-08-17T23:59:38.940000
2008-09-02T21:12:48.600000
14,031
14,129
Database triggers
In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios. That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by multiple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck. What's the general consensus out there about triggers? Love em? Hate em? Think they serve a purpose in some scenarios? Do you think that having a need to bypass a trigger means that you're "doing it wrong"?
Triggers are generally used incorrectly, introduce bugs and therefore should be avoided. Never design a trigger to do integrity constraint checking that crosses rows in a table (e.g. "the average salary by dept cannot exceed X"). Tom Kyte, VP of Oracle, has indicated that he would prefer to remove triggers as a feature of the Oracle database because of their frequent role in bugs. He knows it is just a dream, and triggers are here to stay, but if he could, he would remove triggers from Oracle (along with the WHEN OTHERS clause and autonomous transactions). Can triggers be used correctly? Absolutely. The problem is - they are not used correctly in so many cases that I'd be willing to give up any perceived benefit just to get rid of the abuses (and bugs) caused by them. - Tom Kyte
Database triggers In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios. That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by multiple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck. What's the general consensus out there about triggers? Love em? Hate em? Think they serve a purpose in some scenarios? Do you think that having a need to bypass a trigger means that you're "doing it wrong"?
TITLE: Database triggers QUESTION: In the past I've never been a fan of using triggers on database tables. To me they always represented some "magic" that was going to happen on the database side, far far away from the control of my application code. I also wanted to limit the amount of work the DB had to do, as it's generally a shared resource and I always assumed triggers could get to be expensive in high load scenarios. That said, I have found a couple of instances where triggers have made sense to use (at least in my opinion they made sense). Recently though, I found myself in a situation where I sometimes might need to "bypass" the trigger. I felt really guilty about having to look for ways to do this, and I still think that a better database design would alleviate the need for this bypassing. Unfortunately this DB is used by multiple applications, some of which are maintained by a very uncooperative development team who would scream about schema changes, so I was stuck. What's the general consensus out there about triggers? Love em? Hate em? Think they serve a purpose in some scenarios? Do you think that having a need to bypass a trigger means that you're "doing it wrong"? ANSWER: Triggers are generally used incorrectly, introduce bugs and therefore should be avoided. Never design a trigger to do integrity constraint checking that crosses rows in a table (e.g. "the average salary by dept cannot exceed X"). Tom Kyte, VP of Oracle, has indicated that he would prefer to remove triggers as a feature of the Oracle database because of their frequent role in bugs. He knows it is just a dream, and triggers are here to stay, but if he could, he would remove triggers from Oracle (along with the WHEN OTHERS clause and autonomous transactions). Can triggers be used correctly? Absolutely. The problem is - they are not used correctly in so many cases that I'd be willing to give up any perceived benefit just to get rid of the abuses (and bugs) caused by them. - Tom Kyte
[ "sql-server", "database-design", "triggers" ]
18
12
5,965
12
0
2008-08-18T00:01:19.203000
2008-08-18T03:23:39.497000
14,032
14,598
What's a Good Database ERD Tool for Linux?
I've recently switched to Linux on my work machine and, new to the Linux desktop environment, I'd like to find a decent ERD tool for database design. Booting back into my Windows partition every time I need to create a diagram is going to get unpleasant quickly. I looked at Dia, but didn't see any DB tools - only UML, networking, etc. Anyone have any recommendations? For what it's worth, I'm using Ubuntu (Hardy Heron). Thanks.
As a stop gap, I've installed DBDesigner via Wine (I should have just done that first) since that's what my Windows developers are using, but will look at both of these as well. The Eclipse plugin would be ideal if it's decent. Thanks.
What's a Good Database ERD Tool for Linux? I've recently switched to Linux on my work machine and, new to the Linux desktop environment, I'd like to find a decent ERD tool for database design. Booting back into my Windows partition every time I need to create a diagram is going to get unpleasant quickly. I looked at Dia, but didn't see any DB tools - only UML, networking, etc. Anyone have any recommendations? For what it's worth, I'm using Ubuntu (Hardy Heron). Thanks.
TITLE: What's a Good Database ERD Tool for Linux? QUESTION: I've recently switched to Linux on my work machine and, new to the Linux desktop environment, I'd like to find a decent ERD tool for database design. Booting back into my Windows partition every time I need to create a diagram is going to get unpleasant quickly. I looked at Dia, but didn't see any DB tools - only UML, networking, etc. Anyone have any recommendations? For what it's worth, I'm using Ubuntu (Hardy Heron). Thanks. ANSWER: As a stop gap, I've installed DBDesigner via Wine (I should have just done that first) since that's what my Windows developers are using, but will look at both of these as well. The Eclipse plugin would be ideal if it's decent. Thanks.
[ "linux", "ubuntu", "erd" ]
28
0
76,572
12
0
2008-08-18T00:01:42.230000
2008-08-18T13:33:56.973000
14,061
14,337
Preventing Memory Leaks with Attached Behaviours
I've created an "attached behaviour" in my WPF application which lets me handle the Enter keypress and move to the next control. I call it EnterKeyTraversal.IsEnabled, and you can see the code on my blog here. My main concern now is that I may have a memory leak, since I'm handling the PreviewKeyDown event on UIElements and never explicitly "unhook" the event. What's the best approach to prevent this leak (if indeed there is one)? Should I keep a list of the elements I'm managing, and unhook the PreviewKeyDown event in the Application.Exit event? Has anyone had success with attached behaviours in their own WPF applications and come up with an elegant memory-management solution?
I do not agree, DannySmurf. Some WPF layout objects can clog up your memory and make your application really slow when they are not garbage collected. So I find the choice of words to be correct: you are leaking memory to objects you no longer use. You expect the items to be garbage collected, but they aren't, because there is a reference somewhere (in this case from an event handler). Now for a real answer :) I advise you to read this WPF Performance article on MSDN: Not Removing Event Handlers on Objects may Keep Objects Alive. The delegate that an object passes to its event is effectively a reference to that object. Therefore, event handlers can keep objects alive longer than expected. When performing cleanup of an object that has registered to listen to an object's event, it is essential to remove that delegate before releasing the object. Keeping unneeded objects alive increases the application's memory usage. This is especially true when the object is the root of a logical tree or a visual tree. They advise you to look into the Weak Event pattern. Another solution would be to remove the event handlers when you are done with an object. But I know that with Attached Properties that point might not always be clear. Hope this helps!
Preventing Memory Leaks with Attached Behaviours I've created an "attached behaviour" in my WPF application which lets me handle the Enter keypress and move to the next control. I call it EnterKeyTraversal.IsEnabled, and you can see the code on my blog here. My main concern now is that I may have a memory leak, since I'm handling the PreviewKeyDown event on UIElements and never explicitly "unhook" the event. What's the best approach to prevent this leak (if indeed there is one)? Should I keep a list of the elements I'm managing, and unhook the PreviewKeyDown event in the Application.Exit event? Has anyone had success with attached behaviours in their own WPF applications and come up with an elegant memory-management solution?
TITLE: Preventing Memory Leaks with Attached Behaviours QUESTION: I've created an "attached behaviour" in my WPF application which lets me handle the Enter keypress and move to the next control. I call it EnterKeyTraversal.IsEnabled, and you can see the code on my blog here. My main concern now is that I may have a memory leak, since I'm handling the PreviewKeyDown event on UIElements and never explicitly "unhook" the event. What's the best approach to prevent this leak (if indeed there is one)? Should I keep a list of the elements I'm managing, and unhook the PreviewKeyDown event in the Application.Exit event? Has anyone had success with attached behaviours in their own WPF applications and come up with an elegant memory-management solution? ANSWER: I do not agree, DannySmurf. Some WPF layout objects can clog up your memory and make your application really slow when they are not garbage collected. So I find the choice of words to be correct: you are leaking memory to objects you no longer use. You expect the items to be garbage collected, but they aren't, because there is a reference somewhere (in this case from an event handler). Now for a real answer :) I advise you to read this WPF Performance article on MSDN: Not Removing Event Handlers on Objects may Keep Objects Alive. The delegate that an object passes to its event is effectively a reference to that object. Therefore, event handlers can keep objects alive longer than expected. When performing cleanup of an object that has registered to listen to an object's event, it is essential to remove that delegate before releasing the object. Keeping unneeded objects alive increases the application's memory usage. This is especially true when the object is the root of a logical tree or a visual tree. They advise you to look into the Weak Event pattern. Another solution would be to remove the event handlers when you are done with an object. But I know that with Attached Properties that point might not always be clear. Hope this helps!
[ ".net", "wpf", "memory" ]
12
5
6,871
11
0
2008-08-18T00:49:33.957000
2008-08-18T08:39:35.507000
14,106
14,137
How would you go about evaluating a programmer?
A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and, as luck would have it, I'm very proficient with it). On the evaluation, I was very biased on their performance (perfect scores). I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level. My questions are as follows: As a programmer, what evaluation questions would you like to see? As a manager, what evaluation questions would you like to see? As the evaluator, how can you prevent bias in your evaluation? I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages?
Gets things done is really all you need to evaluate a developer. After that you look at the quality that the developer generates. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take initiative to fix bugs without being assigned them? Are they passionate about coding? Are they always constantly learning, trying to find better ways to accomplish a task or make a process better? These questions are pretty much how I judge developers directly under me. If they are not directly under you and you are not a direct report for them, then you really shouldn't be evaluating them. If you are assigned in evaluating those programmers that aren't under you, then you need to be proactive to answer the above questions about them, which can be hard. You can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager that cares about how your developers do. You are a direct reflection on them and as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two lane road. I have to also evaluate off a cookie cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the time I have the developer one on one, it is all about the developer you are reviewing.
How would you go about evaluating a programmer? A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and, as luck would have it, I'm very proficient with it). On the evaluation, I was very biased on their performance (perfect scores). I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level. My questions are as follows: As a programmer, what evaluation questions would you like to see? As a manager, what evaluation questions would you like to see? As the evaluator, how can you prevent bias in your evaluation? I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages?
TITLE: How would you go about evaluating a programmer? QUESTION: A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all got out of college not knowing the language and, as luck would have it, I'm very proficient with it). On the evaluation, I was very biased on their performance (perfect scores). I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level. My questions are as follows: As a programmer, what evaluation questions would you like to see? As a manager, what evaluation questions would you like to see? As the evaluator, how can you prevent bias in your evaluation? I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages? ANSWER: Gets things done is really all you need to evaluate a developer. After that you look at the quality that the developer generates. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take initiative to fix bugs without being assigned them? Are they passionate about coding? Are they always constantly learning, trying to find better ways to accomplish a task or make a process better? These questions are pretty much how I judge developers directly under me. If they are not directly under you and you are not a direct report for them, then you really shouldn't be evaluating them. If you are assigned in evaluating those programmers that aren't under you, then you need to be proactive to answer the above questions about them, which can be hard. You can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager that cares about how your developers do. You are a direct reflection on them and as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two lane road. I have to also evaluate off a cookie cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the time I have the developer one on one; it is all about the developer you are reviewing.
[ "evaluation" ]
10
11
3,738
4
0
2008-08-18T02:29:34.177000
2008-08-18T03:34:29.817000
14,118
14,124
How can I test STDIN without blocking in Perl?
I'm writing my first Perl app -- an AOL Instant Messenger bot that talks to an Arduino microcontroller, which in turn controls a servo that will push the power button on our sysadmin's server, which freezes randomly every 28 hours or so. I've gotten all the hard stuff done, I'm just trying to add one last bit of code to break the main loop and log out of AIM when the user types 'quit'. The problem is, if I try to read from STDIN in the main program loop, it blocks the process until input is entered, essentially rendering the bot inactive. I've tried testing for EOF before reading, but no dice... EOF just always returns false. Below is some sample code I'm working with: while(1) { $oscar->do_one_loop(); # Poll to see if any arduino data is coming in over serial port my $char = $port->lookfor(); # If we get data from arduino, then print it if ($char) { print "" . $char; } # reading STDIN blocks until input is received... AAARG! my $a = <STDIN>; print $a; if($a eq "exit" || $a eq "quit" || $a eq 'c' || $a eq 'q') {last;} } print "Signing off... "; $oscar->signoff(); print "Done\n"; print "Closing serial port... "; $port->close() || warn "close failed"; print "Done\n";
The Perl built-in is select(), which is a pass-through to the select() system call, but for sane people I recommend IO::Select. Code sample: #!/usr/bin/perl use IO::Select; $s = IO::Select->new(); $s->add(\*STDIN); while (++$i) { print "Hiya $i!\n"; sleep(5); if ($s->can_read(.5)) { chomp($foo = <STDIN>); print "Got '$foo' from STDIN\n"; } }
How can I test STDIN without blocking in Perl? I'm writing my first Perl app -- an AOL Instant Messenger bot that talks to an Arduino microcontroller, which in turn controls a servo that will push the power button on our sysadmin's server, which freezes randomly every 28 hours or so. I've gotten all the hard stuff done, I'm just trying to add one last bit of code to break the main loop and log out of AIM when the user types 'quit'. The problem is, if I try to read from STDIN in the main program loop, it blocks the process until input is entered, essentially rendering the bot inactive. I've tried testing for EOF before reading, but no dice... EOF just always returns false. Below is some sample code I'm working with: while(1) { $oscar->do_one_loop(); # Poll to see if any arduino data is coming in over serial port my $char = $port->lookfor(); # If we get data from arduino, then print it if ($char) { print "" . $char; } # reading STDIN blocks until input is received... AAARG! my $a = <STDIN>; print $a; if($a eq "exit" || $a eq "quit" || $a eq 'c' || $a eq 'q') {last;} } print "Signing off... "; $oscar->signoff(); print "Done\n"; print "Closing serial port... "; $port->close() || warn "close failed"; print "Done\n";
TITLE: How can I test STDIN without blocking in Perl? QUESTION: I'm writing my first Perl app -- an AOL Instant Messenger bot that talks to an Arduino microcontroller, which in turn controls a servo that will push the power button on our sysadmin's server, which freezes randomly every 28 hours or so. I've gotten all the hard stuff done, I'm just trying to add one last bit of code to break the main loop and log out of AIM when the user types 'quit'. The problem is, if I try to read from STDIN in the main program loop, it blocks the process until input is entered, essentially rendering the bot inactive. I've tried testing for EOF before reading, but no dice... EOF just always returns false. Below is some sample code I'm working with: while(1) { $oscar->do_one_loop(); # Poll to see if any arduino data is coming in over serial port my $char = $port->lookfor(); # If we get data from arduino, then print it if ($char) { print "" . $char; } # reading STDIN blocks until input is received... AAARG! my $a = <STDIN>; print $a; if($a eq "exit" || $a eq "quit" || $a eq 'c' || $a eq 'q') {last;} } print "Signing off... "; $oscar->signoff(); print "Done\n"; print "Closing serial port... "; $port->close() || warn "close failed"; print "Done\n"; ANSWER: The Perl built-in is select(), which is a pass-through to the select() system call, but for sane people I recommend IO::Select. Code sample: #!/usr/bin/perl use IO::Select; $s = IO::Select->new(); $s->add(\*STDIN); while (++$i) { print "Hiya $i!\n"; sleep(5); if ($s->can_read(.5)) { chomp($foo = <STDIN>); print "Got '$foo' from STDIN\n"; } }
[ "perl", "stdin" ]
14
21
13,735
2
0
2008-08-18T03:02:10.663000
2008-08-18T03:12:06.150000
14,126
14,127
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
// Boost DFS example on an undirected graph. // Create a sample graph, traverse its nodes // in DFS order and print out their values. #include <boost/graph/adjacency_list.hpp> #include <boost/graph/depth_first_search.hpp> #include <iostream> using namespace std; typedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> MyGraph; typedef boost::graph_traits<MyGraph>::vertex_descriptor MyVertex; class MyVisitor: public boost::default_dfs_visitor { public: void discover_vertex(MyVertex v, const MyGraph& g) const { cerr << v << endl; return; } }; int main() { MyGraph g; boost::add_edge(0, 1, g); boost::add_edge(0, 2, g); boost::add_edge(1, 2, g); boost::add_edge(1, 3, g); MyVisitor vis; boost::depth_first_search(g, boost::visitor(vis)); return 0; }
How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order? How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order?
TITLE: How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order? QUESTION: How to create a C++ Boost undirected graph and traverse it in depth first search (DFS) order? ANSWER: // Boost DFS example on an undirected graph. // Create a sample graph, traverse its nodes // in DFS order and print out their values. #include <boost/graph/adjacency_list.hpp> #include <boost/graph/depth_first_search.hpp> #include <iostream> using namespace std; typedef boost::adjacency_list<boost::listS, boost::vecS, boost::undirectedS> MyGraph; typedef boost::graph_traits<MyGraph>::vertex_descriptor MyVertex; class MyVisitor: public boost::default_dfs_visitor { public: void discover_vertex(MyVertex v, const MyGraph& g) const { cerr << v << endl; return; } }; int main() { MyGraph g; boost::add_edge(0, 1, g); boost::add_edge(0, 2, g); boost::add_edge(1, 2, g); boost::add_edge(1, 3, g); MyVisitor vis; boost::depth_first_search(g, boost::visitor(vis)); return 0; }
[ "c++", "boost-graph" ]
32
36
13,790
2
0
2008-08-18T03:15:35.987000
2008-08-18T03:17:39.603000
14,135
14,154
Subversion and web development
I'm introducing Subversion into our web shop. I want the checked-in files to be uploaded to the server via FTP (and, as they get used to Subversion, via SFTP). The files are sent to a release candidate page for testing purposes. A script can be called to move the files into production. My question is this: How do you transfer the files to the server via Subversion? Is there a script I can add that will do the transfer when the files are checked in?
If you have shell access to your server, and SVN installed on it (or the ability to install SVN), then your best bet may be just to bypass FTP entirely. How we deploy our apps is (simplified): Developers write code and check it into trunk. Periodically, when trunk is stable, we will take a snapshot of it as a tag. On the server, svn checkout the tag. If any changes need to be made to the server (or directly on the live server itself) it is trivial to use Subversion to sync the code.
Subversion and web development I'm introducing Subversion into our web shop. I want the checked-in files to be uploaded to the server via FTP (and, as they get used to Subversion, via SFTP). The files are sent to a release candidate page for testing purposes. A script can be called to move the files into production. My question is this: How do you transfer the files to the server via Subversion? Is there a script I can add that will do the transfer when the files are checked in?
TITLE: Subversion and web development QUESTION: I'm introducing Subversion into our web shop. I want the checked-in files to be uploaded to the server via FTP (and, as they get used to Subversion, via SFTP). The files are sent to a release candidate page for testing purposes. A script can be called to move the files into production. My question is this: How do you transfer the files to the server via Subversion? Is there a script I can add that will do the transfer when the files are checked in? ANSWER: If you have shell access to your server, and SVN installed on it (or the ability to install SVN), then your best bet may be just to bypass FTP entirely. How we deploy our apps is (simplified): Developers write code and check it into trunk. Periodically, when trunk is stable, we will take a snapshot of it as a tag. On the server, svn checkout the tag. If any changes need to be made to the server (or directly on the live server itself) it is trivial to use Subversion to sync the code.
[ "svn", "ftp" ]
9
3
1,034
8
0
2008-08-18T03:33:13.987000
2008-08-18T03:57:18.930000
14,138
72,787
Enabling OpenGL in wxWidgets
I installed the wxWidgets source code, compiled it and am linking the libraries thus obtained with my application code. Now I need to use OpenGL in my wxWidgets application. How do I enable this?
For building on Windows with project files: Assume $(WXWIDGETSROOT) is the root directory of your wxWidgets installation. Open the file $(WXWIDGETSROOT)\include\wx\msw\setup.h. Search for the #define for wxUSE_GLCANVAS. Change its value from 0 to 1. Recompile the library. For building on Linux and other ./configure-based platforms: Just use ./configure --with-opengl (A mashup answer from two partial answers given by others)
Enabling OpenGL in wxWidgets I installed the wxWidgets source code, compiled it and am linking the libraries thus obtained with my application code. Now I need to use OpenGL in my wxWidgets application. How do I enable this?
TITLE: Enabling OpenGL in wxWidgets QUESTION: I installed the wxWidgets source code, compiled it and am linking the libraries thus obtained with my application code. Now I need to use OpenGL in my wxWidgets application. How do I enable this? ANSWER: For building on Windows with project files: Assume $(WXWIDGETSROOT) is the root directory of your wxWidgets installation. Open the file $(WXWIDGETSROOT)\include\wx\msw\setup.h. Search for the #define for wxUSE_GLCANVAS. Change its value from 0 to 1. Recompile the library. For building on Linux and other ./configure-based platforms: Just use ./configure --with-opengl (A mashup answer from two partial answers given by others)
[ "opengl", "wxwidgets" ]
2
7
9,759
4
0
2008-08-18T03:34:30.723000
2008-09-16T14:17:45.560000
14,165
14,169
Strange C++ errors with code that has min()/max() calls
I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers.
Check if your code is including the windows.h header file and whether your code or other third-party headers have their own min() / max() definitions. If yes, then prepend your windows.h inclusion with a definition of NOMINMAX like this: #define NOMINMAX #include <windows.h>
Strange C++ errors with code that has min()/max() calls I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers.
TITLE: Strange C++ errors with code that has min()/max() calls QUESTION: I'm seeing strange errors when my C++ code has min() or max() calls. I'm using Visual C++ compilers. ANSWER: Check if your code is including the windows.h header file and whether your code or other third-party headers have their own min() / max() definitions. If yes, then prepend your windows.h inclusion with a definition of NOMINMAX like this: #define NOMINMAX #include <windows.h>
[ "c++", "c" ]
8
18
2,581
6
0
2008-08-18T04:13:34.920000
2008-08-18T04:15:34.510000
14,241
14,369
What's the deal with |Pipe-delimited| variables in connection strings?
I know that |DataDirectory| will resolve to App_Data in an ASP.NET application but is that hard-coded or is there a generalized mechanism at work along the lines of %environment variables%?
From the MSDN Smart Client Data Blog: In this version, the .NET runtime added support for what we call the DataDirectory macro. This allows Visual Studio to put a special variable in the connection string that will be expanded at run-time... By default, the |DataDirectory| variable will be expanded as follows: For applications placed in a directory on the user machine, this will be the app's (.exe) folder. For apps running under ClickOnce, this will be a special data folder created by ClickOnce. For Web apps, this will be the App_Data folder. Under the hood, the value for |DataDirectory| simply comes from a property on the app domain. It is possible to change that value and override the default behavior by doing this: AppDomain.CurrentDomain.SetData("DataDirectory", newpath)
What's the deal with |Pipe-delimited| variables in connection strings? I know that |DataDirectory| will resolve to App_Data in an ASP.NET application but is that hard-coded or is there a generalized mechanism at work along the lines of %environment variables%?
TITLE: What's the deal with |Pipe-delimited| variables in connection strings? QUESTION: I know that |DataDirectory| will resolve to App_Data in an ASP.NET application but is that hard-coded or is there a generalized mechanism at work along the lines of %environment variables%? ANSWER: From the MSDN Smart Client Data Blog: In this version, the .NET runtime added support for what we call the DataDirectory macro. This allows Visual Studio to put a special variable in the connection string that will be expanded at run-time... By default, the |DataDirectory| variable will be expanded as follows: For applications placed in a directory on the user machine, this will be the app's (.exe) folder. For apps running under ClickOnce, this will be a special data folder created by ClickOnce. For Web apps, this will be the App_Data folder. Under the hood, the value for |DataDirectory| simply comes from a property on the app domain. It is possible to change that value and override the default behavior by doing this: AppDomain.CurrentDomain.SetData("DataDirectory", newpath)
[ ".net", "ado.net", "syntax", "macros", "connection-string" ]
3
5
819
1
0
2008-08-18T06:06:38.420000
2008-08-18T09:22:47.383000
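The |DataDirectory| mechanism described in the answer above is just a placeholder that the runtime swaps for a per-application value when the connection string is consumed. A rough conceptual sketch of that substitution (in Python, not ADO.NET itself; the paths are made-up examples):

```python
# Conceptual sketch of ADO.NET's |DataDirectory| macro expansion.
# This is NOT the real runtime code - just an illustration of the idea
# that the macro is replaced with a per-app-domain value at connect time.
def expand_data_directory(conn_str, data_directory):
    """Replace the |DataDirectory| placeholder with the app's data folder."""
    return conn_str.replace("|DataDirectory|", data_directory)

# Made-up example connection string and data folder:
conn = r"Data Source=|DataDirectory|\orders.sdf"
print(expand_data_directory(conn, r"C:\MyApp\App_Data"))
```

In the real runtime the second argument comes from the app domain property (`AppDomain.CurrentDomain.GetData("DataDirectory")`), which is why `SetData` overrides the default expansion.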
14,247
14,261
Flex: does painless programmatic data binding exist?
I've only done a bit of Flex development thus far, but I've preferred the approach of creating controls programmatically over mxml files, because (and please, correct me if I'm wrong!) I've gathered that you can't have it both ways -- that is to say, have the class functionality in a separate ActionScript class file but have the contained elements declared in mxml. There doesn't seem to be much of a difference productivity-wise, but doing data binding programmatically seems somewhat less than trivial. I took a look at how the mxml compiler transforms the data binding expressions. The result is a bunch of generated callbacks and a lot more lines than in the mxml representation. So here's the question: is there a way to do data binding programmatically that doesn't involve a world of hurt?
Don't be afraid of MXML. It's great for laying out views. If you write your own reusable components then writing them in ActionScript may sometimes give you a little more control, but for non-reusable views MXML is much better. It's more terse, bindings are extremely easy to set up, etc. However, bindings in pure ActionScript need not be that much of a pain. It will never be as simple as in MXML where a lot of things are done for you, but it can be done with not too much effort. What you have is BindingUtils and its methods bindSetter and bindProperty. I almost always use the former, since I usually want to do some work, or call invalidateProperties when values change; I almost never just want to set a property. What you need to know is that these two return an object of the type ChangeWatcher; if you want to remove the binding for some reason, you have to hold on to this object. This is what makes manual bindings in ActionScript a little less convenient than those in MXML. Let's start with a simple example: BindingUtils.bindSetter(nameChanged, selectedEmployee, "name"); This sets up a binding that will call the method nameChanged when the name property on the object in the variable selectedEmployee changes. The nameChanged method will receive the new value of the name property as an argument, so it should look like this: private function nameChanged( newName: String ): void The problem with this simple example is that once you have set up this binding it will fire each time the property of the specified object changes. The value of the variable selectedEmployee may change, but the binding is still set up for the object that the variable pointed to before. There are two ways to solve this: either to keep the ChangeWatcher returned by BindingUtils.bindSetter around and call unwatch on it when you want to remove the binding (and then setting up a new binding instead), or bind to yourself.
I'll show you the first option first, and then explain what I mean by binding to yourself. The currentEmployee could be made into a getter/setter pair and implemented like this (only showing the setter): public function set currentEmployee( employee: Employee ): void { if ( _currentEmployee != employee ) { if ( _currentEmployee != null ) { currentEmployeeNameCW.unwatch(); } _currentEmployee = employee; if ( _currentEmployee != null ) { currentEmployeeNameCW = BindingUtils.bindSetter(currentEmployeeNameChanged, _currentEmployee, "name"); } } } What happens is that when the currentEmployee property is set it looks to see if there was a previous value, and if so removes the binding for that object ( currentEmployeeNameCW.unwatch() ), then it sets the private variable, and unless the new value was null sets up a new binding for the name property. Most importantly it saves the ChangeWatcher returned by the binding call. This is a basic binding pattern and I think it works fine. There is, however, a trick that can be used to make it a bit simpler. You can bind to yourself instead. Instead of setting up and removing bindings each time the currentEmployee property changes you can have the binding system do it for you. In your creationComplete handler (or constructor or at least some time early) you can set up a binding like so: BindingUtils.bindSetter(currentEmployeeNameChanged, this, ["currentEmployee", "name"]); This sets up a binding not only to the currentEmployee property on this, but also to the name property on this object. So anytime either changes the method currentEmployeeNameChanged will be called. There's no need to save the ChangeWatcher because the binding will never have to be removed. The second solution works in many cases, but I've found that the first one is sometimes necessary, especially when working with bindings in non-view classes (since this has to be an event dispatcher and the currentEmployee has to be bindable for it to work).
Flex: does painless programmatic data binding exist? I've only done a bit of Flex development thus far, but I've preferred the approach of creating controls programmatically over mxml files, because (and please, correct me if I'm wrong!) I've gathered that you can't have it both ways -- that is to say, have the class functionality in a separate ActionScript class file but have the contained elements declared in mxml. There doesn't seem to be much of a difference productivity-wise, but doing data binding programmatically seems somewhat less than trivial. I took a look at how the mxml compiler transforms the data binding expressions. The result is a bunch of generated callbacks and a lot more lines than in the mxml representation. So here's the question: is there a way to do data binding programmatically that doesn't involve a world of hurt?
TITLE: Flex: does painless programmatic data binding exist? QUESTION: I've only done a bit of Flex development thus far, but I've preferred the approach of creating controls programmatically over mxml files, because (and please, correct me if I'm wrong!) I've gathered that you can't have it both ways -- that is to say, have the class functionality in a separate ActionScript class file but have the contained elements declared in mxml. There doesn't seem to be much of a difference productivity-wise, but doing data binding programmatically seems somewhat less than trivial. I took a look at how the mxml compiler transforms the data binding expressions. The result is a bunch of generated callbacks and a lot more lines than in the mxml representation. So here's the question: is there a way to do data binding programmatically that doesn't involve a world of hurt? ANSWER: Don't be afraid of MXML. It's great for laying out views. If you write your own reusable components then writing them in ActionScript may sometimes give you a little more control, but for non-reusable views MXML is much better. It's more terse, bindings are extremely easy to set up, etc. However, bindings in pure ActionScript need not be that much of a pain. It will never be as simple as in MXML where a lot of things are done for you, but it can be done with not too much effort. What you have is BindingUtils and its methods bindSetter and bindProperty. I almost always use the former, since I usually want to do some work, or call invalidateProperties when values change; I almost never just want to set a property. What you need to know is that these two return an object of the type ChangeWatcher; if you want to remove the binding for some reason, you have to hold on to this object. This is what makes manual bindings in ActionScript a little less convenient than those in MXML.
Let's start with a simple example: BindingUtils.bindSetter(nameChanged, selectedEmployee, "name"); This sets up a binding that will call the method nameChanged when the name property on the object in the variable selectedEmployee changes. The nameChanged method will receive the new value of the name property as an argument, so it should look like this: private function nameChanged( newName: String ): void The problem with this simple example is that once you have set up this binding it will fire each time the property of the specified object changes. The value of the variable selectedEmployee may change, but the binding is still set up for the object that the variable pointed to before. There are two ways to solve this: either to keep the ChangeWatcher returned by BindingUtils.bindSetter around and call unwatch on it when you want to remove the binding (and then setting up a new binding instead), or bind to yourself. I'll show you the first option first, and then explain what I mean by binding to yourself. The currentEmployee could be made into a getter/setter pair and implemented like this (only showing the setter): public function set currentEmployee( employee: Employee ): void { if ( _currentEmployee != employee ) { if ( _currentEmployee != null ) { currentEmployeeNameCW.unwatch(); } _currentEmployee = employee; if ( _currentEmployee != null ) { currentEmployeeNameCW = BindingUtils.bindSetter(currentEmployeeNameChanged, _currentEmployee, "name"); } } } What happens is that when the currentEmployee property is set it looks to see if there was a previous value, and if so removes the binding for that object ( currentEmployeeNameCW.unwatch() ), then it sets the private variable, and unless the new value was null sets up a new binding for the name property. Most importantly it saves the ChangeWatcher returned by the binding call. This is a basic binding pattern and I think it works fine. There is, however, a trick that can be used to make it a bit simpler.
You can bind to yourself instead. Instead of setting up and removing bindings each time the currentEmployee property changes you can have the binding system do it for you. In your creationComplete handler (or constructor or at least some time early) you can set up a binding like so: BindingUtils.bindSetter(currentEmployeeNameChanged, this, ["currentEmployee", "name"]); This sets up a binding not only to the currentEmployee property on this, but also to the name property on this object. So anytime either changes the method currentEmployeeNameChanged will be called. There's no need to save the ChangeWatcher because the binding will never have to be removed. The second solution works in many cases, but I've found that the first one is sometimes necessary, especially when working with bindings in non-view classes (since this has to be an event dispatcher and the currentEmployee has to be bindable for it to work).
[ "apache-flex", "actionscript-3", "data-binding", "mxml" ]
13
29
8,786
4
0
2008-08-18T06:15:08.840000
2008-08-18T06:56:22.250000
14,263
14,795
Best way to let users download a file from my website: http or ftp
We have some files on our website that users of our software can download. Some of the files are in virtual folders on the website while others are on our ftp. The files on the ftp are generally accessed by clicking on an ftp:// link in a browser - most of our customers do not have an ftp client. The other files are accessed by clicking an http:// link in a browser. Should I move all the files to the ftp? Or does it not matter? What's the difference?
HTTP has many advantages over FTP: it is available in more places (think workplaces which block anything other than HTTP/S) it works nicely with proxies (FTP requires extra settings for the proxy - like making sure that it allows the CONNECT method) it provides built-in compression (with GZIP) which almost all browsers can handle (as opposed to FTP which has a non-official "MODE Z" extension) NAT gateways must be configured in a special mode to support active FTP connections, while passive FTP connections require them to allow access to all ports (if it doesn't have connection tracking) some FTP clients insist on opening a new data connection for each data transfer, which can leave you with a lot of "TIME_WAIT" sockets
Best way to let users download a file from my website: http or ftp We have some files on our website that users of our software can download. Some of the files are in virtual folders on the website while others are on our ftp. The files on the ftp are generally accessed by clicking on an ftp:// link in a browser - most of our customers do not have an ftp client. The other files are accessed by clicking an http:// link in a browser. Should I move all the files to the ftp? Or does it not matter? What's the difference?
TITLE: Best way to let users download a file from my website: http or ftp QUESTION: We have some files on our website that users of our software can download. Some of the files are in virtual folders on the website while others are on our ftp. The files on the ftp are generally accessed by clicking on an ftp:// link in a browser - most of our customers do not have an ftp client. The other files are accessed by clicking an http:// link in a browser. Should I move all the files to the ftp? Or does it not matter? What's the difference? ANSWER: HTTP has many advantages over FTP: it is available in more places (think workplaces which block anything other than HTTP/S) it works nicely with proxies (FTP requires extra settings for the proxy - like making sure that it allows the CONNECT method) it provides built-in compression (with GZIP) which almost all browsers can handle (as opposed to FTP which has a non-official "MODE Z" extension) NAT gateways must be configured in a special mode to support active FTP connections, while passive FTP connections require them to allow access to all ports (if it doesn't have connection tracking) some FTP clients insist on opening a new data connection for each data transfer, which can leave you with a lot of "TIME_WAIT" sockets
[ "http", "ftp", "download" ]
5
6
4,581
5
0
2008-08-18T07:00:15.067000
2008-08-18T15:26:21.287000
14,264
14,265
Using GLUT with Visual C++ Express Edition
What are the basic steps to compile an OpenGL application using GLUT (OpenGL Utility Toolkit) under Visual C++ Express Edition?
If you don't have Visual C++ Express Edition (VCEE), download and install VCEE. The default install of Visual C++ Express Edition builds for the .NET platform. We'll need to build for the Windows platform since OpenGL and GLUT are not yet fully supported under .NET. For this we need the Microsoft Platform SDK. (If you're using an older version of VCEE, download and install the Microsoft Platform SDK. Visual C++ Express Edition will need to be configured to build for the Windows platform. All these instructions are available here.) If you don't have GLUT, download and unzip Nate Robins' Windows port of GLUT. Add glut.h to your Platform SDK/include/GL/ directory. Link the project with glut.lib. (Go to VCEE Project Properties -> Linker -> Additional Library Directories and add the directory which has glut.lib.) Add glut.dll to the Windows/System32 directory, so that all programs using GLUT can find it at runtime. Your program which uses GLUT or OpenGL should compile under Visual C++ Express Edition now.
Using GLUT with Visual C++ Express Edition What are the basic steps to compile an OpenGL application using GLUT (OpenGL Utility Toolkit) under Visual C++ Express Edition?
TITLE: Using GLUT with Visual C++ Express Edition QUESTION: What are the basic steps to compile an OpenGL application using GLUT (OpenGL Utility Toolkit) under Visual C++ Express Edition? ANSWER: If you don't have Visual C++ Express Edition (VCEE), download and install VCEE. The default install of Visual C++ Express Edition builds for the .NET platform. We'll need to build for the Windows platform since OpenGL and GLUT are not yet fully supported under .NET. For this we need the Microsoft Platform SDK. (If you're using an older version of VCEE, download and install the Microsoft Platform SDK. Visual C++ Express Edition will need to be configured to build for the Windows platform. All these instructions are available here.) If you don't have GLUT, download and unzip Nate Robins' Windows port of GLUT. Add glut.h to your Platform SDK/include/GL/ directory. Link the project with glut.lib. (Go to VCEE Project Properties -> Linker -> Additional Library Directories and add the directory which has glut.lib.) Add glut.dll to the Windows/System32 directory, so that all programs using GLUT can find it at runtime. Your program which uses GLUT or OpenGL should compile under Visual C++ Express Edition now.
[ "visual-studio", "visual-c++", "opengl", "glut" ]
11
9
16,710
2
0
2008-08-18T07:00:21.057000
2008-08-18T07:00:32.600000
14,271
14,272
Boost warnings with VC++ 9
When the Boost library/headers are used with VC++ 9 compilers (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds: Warning about the Wp64 setting. Warning about the compiler version. How can I turn off these warnings?
Warning about the Wp64 setting. Turn off the /Wp64 setting which is set by default. You can find it in Project Properties -> C/C++ -> General. Warning about the compiler version. Go to the Boost trunk (online) and get the latest boost\boost\config\compiler\visualc.hpp header file. Diff it with the current file and merge the sections where _MSC_VER is equal to 1500. (1500 is the VC9 version number used in Boost configuration.)
Boost warnings with VC++ 9 When the Boost library/headers are used with VC++ 9 compilers (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds: Warning about the Wp64 setting. Warning about the compiler version. How can I turn off these warnings?
TITLE: Boost warnings with VC++ 9 QUESTION: When the Boost library/headers are used with VC++ 9 compilers (Visual C++ 2008 Express Edition or Visual Studio 2008), a lot of benign warnings are generated. They are of 2 kinds: Warning about the Wp64 setting. Warning about the compiler version. How can I turn off these warnings? ANSWER: Warning about the Wp64 setting. Turn off the /Wp64 setting which is set by default. You can find it in Project Properties -> C/C++ -> General. Warning about the compiler version. Go to the Boost trunk (online) and get the latest boost\boost\config\compiler\visualc.hpp header file. Diff it with the current file and merge the sections where _MSC_VER is equal to 1500. (1500 is the VC9 version number used in Boost configuration.)
[ "c++", "visual-studio", "boost", "warnings" ]
1
1
1,059
1
0
2008-08-18T07:20:19.940000
2008-08-18T07:20:33.373000
14,278
14,282
How to load plugins in .NET?
I'd like to provide some way of creating dynamically loadable plugins in my software. The typical way to do this is using the LoadLibrary WinAPI function to load a dll and calling GetProcAddress to get a pointer to a function inside that dll. My question is how do I dynamically load a plugin in a C#/.NET application?
The following code snippet (C#) constructs an instance of any concrete class derived from Base found in class libraries (*.dll) in the application path and stores them in a list. using System.IO; using System.Reflection; using System.Collections.Generic; List<Base> objects = new List<Base>(); DirectoryInfo dir = new DirectoryInfo(Application.StartupPath); foreach (FileInfo file in dir.GetFiles("*.dll")) { Assembly assembly = Assembly.LoadFrom(file.FullName); foreach (Type type in assembly.GetTypes()) { if (type.IsSubclassOf(typeof(Base)) && type.IsAbstract == false) { Base b = type.InvokeMember(null, BindingFlags.CreateInstance, null, null, null) as Base; objects.Add(b); } } } Edit: The classes referred to by Matt are probably a better option in .NET 3.5.
How to load plugins in .NET? I'd like to provide some way of creating dynamically loadable plugins in my software. The typical way to do this is using the LoadLibrary WinAPI function to load a dll and calling GetProcAddress to get a pointer to a function inside that dll. My question is how do I dynamically load a plugin in a C#/.NET application?
TITLE: How to load plugins in .NET? QUESTION: I'd like to provide some way of creating dynamically loadable plugins in my software. The typical way to do this is using the LoadLibrary WinAPI function to load a dll and calling GetProcAddress to get a pointer to a function inside that dll. My question is how do I dynamically load a plugin in a C#/.NET application? ANSWER: The following code snippet (C#) constructs an instance of any concrete class derived from Base found in class libraries (*.dll) in the application path and stores them in a list. using System.IO; using System.Reflection; using System.Collections.Generic; List<Base> objects = new List<Base>(); DirectoryInfo dir = new DirectoryInfo(Application.StartupPath); foreach (FileInfo file in dir.GetFiles("*.dll")) { Assembly assembly = Assembly.LoadFrom(file.FullName); foreach (Type type in assembly.GetTypes()) { if (type.IsSubclassOf(typeof(Base)) && type.IsAbstract == false) { Base b = type.InvokeMember(null, BindingFlags.CreateInstance, null, null, null) as Base; objects.Add(b); } } } Edit: The classes referred to by Matt are probably a better option in .NET 3.5.
[ ".net", "windows", "plugins", "add-in", "extensibility" ]
27
20
15,732
8
0
2008-08-18T07:34:56.917000
2008-08-18T07:41:24.390000
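The C# answer's discover-and-instantiate pattern translates to other runtimes as well. As a hypothetical comparison (the names `load_plugins` and the injected `Base` contract are invented for illustration, not part of any standard API), a Python host can scan a folder, load each module, and instantiate every concrete subclass of its plugin base class:

```python
# Hypothetical Python analogue of the C# reflection-based plugin loader:
# scan a directory for modules, execute them, and instantiate every
# concrete subclass of the plugin contract class.
import importlib.util
import inspect
import pathlib

class Base:
    """Plugin contract: discovered concrete subclasses are instantiated."""

def load_plugins(directory, base=Base):
    plugins = []
    for path in sorted(pathlib.Path(directory).glob("*.py")):
        spec = importlib.util.spec_from_file_location(path.stem, path)
        module = importlib.util.module_from_spec(spec)
        module.Base = base  # expose the contract class to the plugin module
        spec.loader.exec_module(module)
        for _, obj in inspect.getmembers(module, inspect.isclass):
            # Mirror the C# checks: subclass of Base, and not abstract.
            if issubclass(obj, base) and obj is not base and not inspect.isabstract(obj):
                plugins.append(obj())
    return plugins
```

As in the C# snippet, abstract types are skipped and one instance per concrete plugin class is returned; a real host would also want error handling around modules that fail to import.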
14,281
14,320
Is there a python module for regex matching in zip files
I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files. Is there any python module which can do a regex match on the files without unzipping them? Is there a simple way to solve this problem without unzipping?
There's nothing that will automatically do what you want. However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file. #!/usr/bin/python import zipfile f = zipfile.ZipFile('myfile.zip') for subfile in f.namelist(): print(subfile) data = f.read(subfile).decode('utf-8', errors='ignore') for line in data.split('\n'): print(line)
Is there a python module for regex matching in zip files I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files. Is there any python module which can do a regex match on the files without unzipping it. Is there a simple way to solve this problem without unzipping?
TITLE: Is there a python module for regex matching in zip files QUESTION: I have over a million text files compressed into 40 zip files. I also have a list of about 500 model names of phones. I want to find out the number of times a particular model was mentioned in the text files. Is there any python module which can do a regex match on the files without unzipping it. Is there a simple way to solve this problem without unzipping? ANSWER: There's nothing that will automatically do what you want. However, there is a python zipfile module that will make this easy to do. Here's how to iterate over the lines in the file. #!/usr/bin/python import zipfile f = zipfile.ZipFile('myfile.zip') for subfile in f.namelist(): print subfile data = f.read(subfile) for line in data.split('\n'): print line
[ "python", "regex", "zip", "text-processing" ]
7
10
3,426
4
0
2008-08-18T07:41:09.010000
2008-08-18T08:19:06.390000
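Building on the zipfile answer, the counting step the question actually asks about can be sketched like this. The archive contents and model list below are made-up examples, and real model names may additionally need word-boundary handling:

```python
# Count mentions of each phone model across all members of a zip archive,
# reading members in memory rather than extracting them to disk.
import io
import re
import zipfile
from collections import Counter

def count_model_mentions(zip_file, model_names):
    # One alternation matching any model name; longer names first so that
    # a long name is not shadowed by a shorter prefix.
    pattern = re.compile("|".join(
        re.escape(m) for m in sorted(model_names, key=len, reverse=True)))
    counts = Counter()
    with zipfile.ZipFile(zip_file) as zf:
        for member in zf.namelist():
            text = zf.read(member).decode("utf-8", errors="ignore")
            counts.update(pattern.findall(text))
    return counts

# Tiny in-memory demo archive standing in for the real 40 zip files.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("log1.txt", "Nokia N95 beats Nokia N95; one iPhone too")
print(count_model_mentions(buf, ["Nokia N95", "iPhone"]))
```

With 500 names the single compiled alternation keeps the per-file work to one regex pass; for a million files you would run this once per archive and sum the resulting Counters.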
14,287
14,290
Increasing camera capture resolution in OpenCV
In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera ( Logitech QuickCam IM ) can capture at resolutions 320x240, 640x480 and 1280x960. But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam?
There doesn't seem to be a solution. The resolution can be increased to 640x480 using this hack shared by lifebelt77. Here are the details reproduced: Add to highgui.h: #define CV_CAP_PROP_DIALOG_DISPLAY 8 #define CV_CAP_PROP_DIALOG_FORMAT 9 #define CV_CAP_PROP_DIALOG_SOURCE 10 #define CV_CAP_PROP_DIALOG_COMPRESSION 11 #define CV_CAP_PROP_FRAME_WIDTH_HEIGHT 12 Add the function icvSetPropertyCAM_VFW to cvcap.cpp: static int icvSetPropertyCAM_VFW( CvCaptureCAM_VFW* capture, int property_id, double value ) { int result = -1; CAPSTATUS capstat; CAPTUREPARMS capparam; BITMAPINFO btmp; switch( property_id ) { case CV_CAP_PROP_DIALOG_DISPLAY: result = capDlgVideoDisplay(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEODISPLAY,0,0); break; case CV_CAP_PROP_DIALOG_FORMAT: result = capDlgVideoFormat(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOFORMAT,0,0); break; case CV_CAP_PROP_DIALOG_SOURCE: result = capDlgVideoSource(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOSOURCE,0,0); break; case CV_CAP_PROP_DIALOG_COMPRESSION: result = capDlgVideoCompression(capture->capWnd); break; case CV_CAP_PROP_FRAME_WIDTH_HEIGHT: capGetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO)); btmp.bmiHeader.biWidth = floor(value/1000); btmp.bmiHeader.biHeight = value-floor(value/1000)*1000; btmp.bmiHeader.biSizeImage = btmp.bmiHeader.biHeight * btmp.bmiHeader.biWidth * btmp.bmiHeader.biPlanes * btmp.bmiHeader.biBitCount / 8; capSetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO)); break; default: break; } return result; } and edit captureCAM_VFW_vtable as follows: static CvCaptureVTable captureCAM_VFW_vtable = { 6, (CvCaptureCloseFunc)icvCloseCAM_VFW, (CvCaptureGrabFrameFunc)icvGrabFrameCAM_VFW, (CvCaptureRetrieveFrameFunc)icvRetrieveFrameCAM_VFW, (CvCaptureGetPropertyFunc)icvGetPropertyCAM_VFW, (CvCaptureSetPropertyFunc)icvSetPropertyCAM_VFW, // was NULL (CvCaptureGetDescriptionFunc)0 }; Now rebuild highgui.dll.
Increasing camera capture resolution in OpenCV In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera ( Logitech QuickCam IM ) can capture at resolutions 320x240, 640x480 and 1280x960. But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam?
TITLE: Increasing camera capture resolution in OpenCV QUESTION: In my C/C++ program, I'm using OpenCV to capture images from my webcam. The camera ( Logitech QuickCam IM ) can capture at resolutions 320x240, 640x480 and 1280x960. But, for some strange reason, OpenCV gives me images of resolution 320x240 only. Calls to change the resolution using cvSetCaptureProperty() with other resolution values just don't work. How do I capture images with the other resolutions possible with my webcam? ANSWER: There doesn't seem to be a solution. The resolution can be increased to 640x480 using this hack shared by lifebelt77. Here are the details reproduced: Add to highgui.h: #define CV_CAP_PROP_DIALOG_DISPLAY 8 #define CV_CAP_PROP_DIALOG_FORMAT 9 #define CV_CAP_PROP_DIALOG_SOURCE 10 #define CV_CAP_PROP_DIALOG_COMPRESSION 11 #define CV_CAP_PROP_FRAME_WIDTH_HEIGHT 12 Add the function icvSetPropertyCAM_VFW to cvcap.cpp: static int icvSetPropertyCAM_VFW( CvCaptureCAM_VFW* capture, int property_id, double value ) { int result = -1; CAPSTATUS capstat; CAPTUREPARMS capparam; BITMAPINFO btmp; switch( property_id ) { case CV_CAP_PROP_DIALOG_DISPLAY: result = capDlgVideoDisplay(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEODISPLAY,0,0); break; case CV_CAP_PROP_DIALOG_FORMAT: result = capDlgVideoFormat(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOFORMAT,0,0); break; case CV_CAP_PROP_DIALOG_SOURCE: result = capDlgVideoSource(capture->capWnd); //SendMessage(capture->capWnd,WM_CAP_DLG_VIDEOSOURCE,0,0); break; case CV_CAP_PROP_DIALOG_COMPRESSION: result = capDlgVideoCompression(capture->capWnd); break; case CV_CAP_PROP_FRAME_WIDTH_HEIGHT: capGetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO)); btmp.bmiHeader.biWidth = floor(value/1000); btmp.bmiHeader.biHeight = value-floor(value/1000)*1000; btmp.bmiHeader.biSizeImage = btmp.bmiHeader.biHeight * btmp.bmiHeader.biWidth * btmp.bmiHeader.biPlanes * btmp.bmiHeader.biBitCount / 8; 
capSetVideoFormat(capture->capWnd, &btmp, sizeof(BITMAPINFO)); break; default: break; } return result; } and edit captureCAM_VFW_vtable as follows: static CvCaptureVTable captureCAM_VFW_vtable = { 6, (CvCaptureCloseFunc)icvCloseCAM_VFW, (CvCaptureGrabFrameFunc)icvGrabFrameCAM_VFW, (CvCaptureRetrieveFrameFunc)icvRetrieveFrameCAM_VFW, (CvCaptureGetPropertyFunc)icvGetPropertyCAM_VFW, (CvCaptureSetPropertyFunc)icvSetPropertyCAM_VFW, // was NULL (CvCaptureGetDescriptionFunc)0 }; Now rebuild highgui.dll.
[ "c", "image", "opencv", "webcam", "resolutions" ]
52
17
79,494
15
0
2008-08-18T07:45:33.057000
2008-08-18T07:46:15.300000
14,297
14,298
GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors
I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it?
The cause of this error is an older version of NVIDIA's glext.h, which still has this definition, whereas the most recent versions of GLEW don't. This leads to compilation errors in code that you had written previously or got from the web. The GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT definition for FBO used to be present in the specification (and hence in header files). But it was later removed. The reason for this can be found in the FBO extension specification (look for Issue 87): (87) What happens if a single image is attached more than once to a framebuffer object? RESOLVED: The value written to the pixel is undefined. There used to be a rule in section 4.4.4.2 that resulted in FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT if a single image was attached more than once to a framebuffer object. FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8 * A single image is not attached more than once to the framebuffer object. { FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT } This rule was removed in version #117 of the EXT_framebuffer_object specification after discussion at the September 2005 ARB meeting. The rule essentially required an O(n*lg(n)) search. Some implementations would not need to do that search if the completeness rules did not require it. Instead, language was added to section 4.10 which says the values written to the framebuffer are undefined when this rule is violated. To fix this error, remove all usage of GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT from your code. If this isn't possible in your setup, then add a dummy definition to your glext.h or glew.h file like this: #define GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it?
TITLE: GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT errors QUESTION: I'm using FBOs in my OpenGL code and I'm seeing compilation errors on GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT. What's the cause of this and how do I fix it? ANSWER: The cause of this error is an older version of NVIDIA's glext.h, which still has this definition, whereas the most recent versions of GLEW don't. This leads to compilation errors in code that you had written previously or got from the web. The GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT definition for FBO used to be present in the specification (and hence in header files). But it was later removed. The reason for this can be found in the FBO extension specification (look for Issue 87): (87) What happens if a single image is attached more than once to a framebuffer object? RESOLVED: The value written to the pixel is undefined. There used to be a rule in section 4.4.4.2 that resulted in FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT if a single image was attached more than once to a framebuffer object. FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8 * A single image is not attached more than once to the framebuffer object. { FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT } This rule was removed in version #117 of the EXT_framebuffer_object specification after discussion at the September 2005 ARB meeting. The rule essentially required an O(n*lg(n)) search. Some implementations would not need to do that search if the completeness rules did not require it. Instead, language was added to section 4.10 which says the values written to the framebuffer are undefined when this rule is violated. To fix this error, remove all usage of GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT from your code. If this isn't possible in your setup, then add a dummy definition to your glext.h or glew.h file like this: #define GL_FRAMEBUFFER_INCOMPLETE_DUPLICATE_ATTACHMENT_EXT 0x8CD8
[ "opengl", "fbo" ]
4
4
577
1
0
2008-08-18T07:59:07.617000
2008-08-18T07:59:23.763000
14,300
14,313
How do I make PowerShell run a batch file and then stay open?
For example, with the old command prompt it would be: cmd.exe /k mybatchfile.bat
Drop into a cmd instance (or indeed PowerShell itself) and type this: powershell -? You'll see that powershell.exe has a "-noexit" parameter which tells it not to exit after executing a "startup command".
How do I make PowerShell run a batch file and then stay open? For example, with the old command prompt it would be: cmd.exe /k mybatchfile.bat
TITLE: How do I make PowerShell run a batch file and then stay open? QUESTION: For example, with the old command prompt it would be: cmd.exe /k mybatchfile.bat ANSWER: Drop into a cmd instance (or indeed PowerShell itself) and type this: powershell -? You'll see that powershell.exe has a "-noexit" parameter which tells it not to exit after executing a "startup command".
[ "powershell", "batch-file" ]
24
35
56,597
3
0
2008-08-18T08:00:15.820000
2008-08-18T08:10:30.633000
14,308
14,327
What is the best way to handle files for a small office?
I'm currently working at a small web development company; we mostly do campaign sites and other promotional stuff. For our first year we've been using a "server" for sharing project files, a plain Windows machine with a network share. But this isn't exactly future proof. SVN is great for code (it's what we use now), but I want to have the comfort of versioning (or at least some form of syncing) for all or most of our files. What I essentially want is something that does what Subversion does for code, but for our documents/psd/pdf files. I realize Subversion handles binary files too, but I feel it might be a bit overkill for our purposes. It doesn't necessarily need all the bells and whistles of a full version control system, but something that removes the need for incremental naming (Notes_1.23.doc) and lessens the chance of overwriting something by mistake. It also needs to be multiplatform, handle large files (100 MB+) and be usable by somewhat non-technical people.
SVN is great for binaries, too. If you're afraid you can't compare revisions, I can tell you that it is possible for Word docs, using Tortoise. But I do not know what you mean by "expanding the versioning". SVN is not a document management system. Edit: "but I feel it might be a bit overkill for our purposes" If you are already using SVN and it fulfils your purposes, why bother with a second system?
What is the best way to handle files for a small office? I'm currently working at a small web development company; we mostly do campaign sites and other promotional stuff. For our first year we've been using a "server" for sharing project files, a plain Windows machine with a network share. But this isn't exactly future proof. SVN is great for code (it's what we use now), but I want to have the comfort of versioning (or at least some form of syncing) for all or most of our files. What I essentially want is something that does what Subversion does for code, but for our documents/psd/pdf files. I realize Subversion handles binary files too, but I feel it might be a bit overkill for our purposes. It doesn't necessarily need all the bells and whistles of a full version control system, but something that removes the need for incremental naming (Notes_1.23.doc) and lessens the chance of overwriting something by mistake. It also needs to be multiplatform, handle large files (100 MB+) and be usable by somewhat non-technical people.
TITLE: What is the best way to handle files for a small office? QUESTION: I'm currently working at a small web development company; we mostly do campaign sites and other promotional stuff. For our first year we've been using a "server" for sharing project files, a plain Windows machine with a network share. But this isn't exactly future proof. SVN is great for code (it's what we use now), but I want to have the comfort of versioning (or at least some form of syncing) for all or most of our files. What I essentially want is something that does what Subversion does for code, but for our documents/psd/pdf files. I realize Subversion handles binary files too, but I feel it might be a bit overkill for our purposes. It doesn't necessarily need all the bells and whistles of a full version control system, but something that removes the need for incremental naming (Notes_1.23.doc) and lessens the chance of overwriting something by mistake. It also needs to be multiplatform, handle large files (100 MB+) and be usable by somewhat non-technical people. ANSWER: SVN is great for binaries, too. If you're afraid you can't compare revisions, I can tell you that it is possible for Word docs, using Tortoise. But I do not know what you mean by "expanding the versioning". SVN is not a document management system. Edit: "but I feel it might be a bit overkill for our purposes" If you are already using SVN and it fulfils your purposes, why bother with a second system?
[ "version-control", "versioning", "file-management" ]
4
7
717
8
0
2008-08-18T08:08:39.983000
2008-08-18T08:24:48.157000
14,310
14,311
Using OpenGL textures larger than window/display size
I'm having problems using textures that are larger than the OpenGL window or the display size as non-display render targets. What's the solution for this problem?
There's a simple solution. Assuming your (non-display) textures are 1024x1024 and you are restricted to a 256x256 window/display. unsigned int WIN_WIDTH = 256; unsigned int WIN_HEIGHT = WIN_WIDTH; unsigned int TEX_WIDTH = 1024; unsigned int TEX_HEIGHT = TEX_WIDTH; Use the window size to create your OpenGL window: glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT); But, use the texture size for everything else: glViewport(0, 0, TEX_WIDTH, TEX_HEIGHT); gluOrtho2D(0.0, TEX_WIDTH, 0.0, TEX_HEIGHT); glTexCoord2i(TEX_WIDTH, TEX_HEIGHT);
Using OpenGL textures larger than window/display size I'm having problems using textures that are larger than the OpenGL window or the display size as non-display render targets. What's the solution for this problem?
TITLE: Using OpenGL textures larger than window/display size QUESTION: I'm having problems using textures that are larger than the OpenGL window or the display size as non-display render targets. What's the solution for this problem? ANSWER: There's a simple solution. Assuming your (non-display) textures are 1024x1024 and you are restricted to a 256x256 window/display. unsigned int WIN_WIDTH = 256; unsigned int WIN_HEIGHT = WIN_WIDTH; unsigned int TEX_WIDTH = 1024; unsigned int TEX_HEIGHT = TEX_WIDTH; Use the window size to create your OpenGL window: glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT); But, use the texture size for everything else: glViewport(0, 0, TEX_WIDTH, TEX_HEIGHT); gluOrtho2D(0.0, TEX_WIDTH, 0.0, TEX_HEIGHT); glTexCoord2i(TEX_WIDTH, TEX_HEIGHT);
[ "opengl", "textures" ]
10
4
1,573
1
0
2008-08-18T08:09:10.840000
2008-08-18T08:09:23.990000
14,318
14,319
Using GLUT bitmap fonts
I'm writing a simple OpenGL application that uses GLUT. I don't want to roll my own font rendering code; instead, I want to use the simple bitmap fonts that ship with GLUT. What are the steps to get them working?
Simple text display is easy to do in OpenGL using GLUT bitmap fonts. These are simple 2D fonts and are not suitable for display inside your 3D environment. However, they're perfect for text that needs to be overlaid on the display window. Here are the sample steps to display Eric Cartman's favorite quote colored in green on a GLUT window: We'll be setting the raster position in screen coordinates. So, set up the projection and modelview matrices for 2D rendering: glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); gluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); Set the font color. (Set this now, not later.) glColor3f(0.0, 1.0, 0.0); // Green Set the window location where the text should be displayed. This is done by setting the raster position in screen coordinates. The lower left corner of the window is (0, 0). glRasterPos2i(10, 10); Set the font and display the string characters using glutBitmapCharacter. string s = "Respect mah authoritah!"; void * font = GLUT_BITMAP_9_BY_15; for (string::iterator i = s.begin(); i != s.end(); ++i) { char c = *i; glutBitmapCharacter(font, c); } Restore the matrices. glMatrixMode(GL_MODELVIEW); glPopMatrix(); glMatrixMode(GL_PROJECTION); glPopMatrix();
Using GLUT bitmap fonts I'm writing a simple OpenGL application that uses GLUT. I don't want to roll my own font rendering code; instead, I want to use the simple bitmap fonts that ship with GLUT. What are the steps to get them working?
TITLE: Using GLUT bitmap fonts QUESTION: I'm writing a simple OpenGL application that uses GLUT. I don't want to roll my own font rendering code; instead, I want to use the simple bitmap fonts that ship with GLUT. What are the steps to get them working? ANSWER: Simple text display is easy to do in OpenGL using GLUT bitmap fonts. These are simple 2D fonts and are not suitable for display inside your 3D environment. However, they're perfect for text that needs to be overlaid on the display window. Here are the sample steps to display Eric Cartman's favorite quote colored in green on a GLUT window: We'll be setting the raster position in screen coordinates. So, set up the projection and modelview matrices for 2D rendering: glMatrixMode(GL_PROJECTION); glPushMatrix(); glLoadIdentity(); gluOrtho2D(0.0, WIN_WIDTH, 0.0, WIN_HEIGHT); glMatrixMode(GL_MODELVIEW); glPushMatrix(); glLoadIdentity(); Set the font color. (Set this now, not later.) glColor3f(0.0, 1.0, 0.0); // Green Set the window location where the text should be displayed. This is done by setting the raster position in screen coordinates. The lower left corner of the window is (0, 0). glRasterPos2i(10, 10); Set the font and display the string characters using glutBitmapCharacter. string s = "Respect mah authoritah!"; void * font = GLUT_BITMAP_9_BY_15; for (string::iterator i = s.begin(); i != s.end(); ++i) { char c = *i; glutBitmapCharacter(font, c); } Restore the matrices. glMatrixMode(GL_MODELVIEW); glPopMatrix(); glMatrixMode(GL_PROJECTION); glPopMatrix();
[ "opengl", "fonts", "bitmap", "glut" ]
19
29
30,034
1
0
2008-08-18T08:18:27.827000
2008-08-18T08:18:38.433000
14,330
14,331
RGB to monochrome conversion
How do I convert the RGB values of a pixel to a single monochrome value?
I found one possible solution in the Color FAQ. The luminance component Y (from the CIE XYZ system) captures, in a single channel, the part of a color that humans perceive most strongly: its brightness. So, use these coefficients: mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
RGB to monochrome conversion How do I convert the RGB values of a pixel to a single monochrome value?
TITLE: RGB to monochrome conversion QUESTION: How do I convert the RGB values of a pixel to a single monochrome value? ANSWER: I found one possible solution in the Color FAQ. The luminance component Y (from the CIE XYZ system) captures, in a single channel, the part of a color that humans perceive most strongly: its brightness. So, use these coefficients: mono = (0.2125 * color.r) + (0.7154 * color.g) + (0.0721 * color.b);
[ "colors", "rgb", "monochrome" ]
22
27
28,291
6
0
2008-08-18T08:27:32.910000
2008-08-18T08:27:43.830000
14,344
14,346
Starting off with OpenGL under Cygwin
Is it possible to compile and run OpenGL programs from under Cygwin? If yes, how?
It is possible to compile and run OpenGL programs under Cygwin. I illustrate the basic steps here: I assume you know OpenGL programming. If not, get the Red Book ( The OpenGL Programming Guide ). It is mandatory reading for OpenGL anyway. I assume you have Cygwin installed. If not, visit cygwin.com and install it. To compile and run OpenGL programs, you need the Cygwin package named opengl. In the Cygwin installer, it can be found under the Graphics section. Please install this package. Write a simple OpenGL program, say ogl.c. Compile the program using the flags -lglut32 -lglu32 -lopengl32. (This links your program with the GLUT, GLU and OpenGL libraries. An OpenGL program might typically use functions from all the 3 of them.) For example: $ gcc ogl.c -lglut32 -lglu32 -lopengl32 Run the program. It's as simple as that!
Starting off with OpenGL under Cygwin Is it possible to compile and run OpenGL programs from under Cygwin? If yes, how?
TITLE: Starting off with OpenGL under Cygwin QUESTION: Is it possible to compile and run OpenGL programs from under Cygwin? If yes, how? ANSWER: It is possible to compile and run OpenGL programs under Cygwin. I illustrate the basic steps here: I assume you know OpenGL programming. If not, get the Red Book ( The OpenGL Programming Guide ). It is mandatory reading for OpenGL anyway. I assume you have Cygwin installed. If not, visit cygwin.com and install it. To compile and run OpenGL programs, you need the Cygwin package named opengl. In the Cygwin installer, it can be found under the Graphics section. Please install this package. Write a simple OpenGL program, say ogl.c. Compile the program using the flags -lglut32 -lglu32 -lopengl32. (This links your program with the GLUT, GLU and OpenGL libraries. An OpenGL program might typically use functions from all the 3 of them.) For example: $ gcc ogl.c -lglut32 -lglu32 -lopengl32 Run the program. It's as simple as that!
[ "opengl", "cygwin" ]
15
13
21,903
4
0
2008-08-18T08:48:34.037000
2008-08-18T08:50:35.327000
14,350
14,404
How do I call a Flex SWF from a remote domain using Flash (AS3)?
I have a Flex swf hosted at http://www.a.com/a.swf. I have Flash code on another domain that tries loading the SWF: _loader = new Loader(); var req:URLRequest = new URLRequest("http://services.nuconomy.com/n.swf"); _loader.contentLoaderInfo.addEventListener(Event.COMPLETE,onLoaderFinish); _loader.load(req); On the onLoaderFinish event I try to load classes from the remote SWF and create them: _loader.contentLoaderInfo.applicationDomain.getDefinition("someClassName") as Class When this code runs I get the following exception: SecurityError: Error #2119: Security sandbox violation: caller http://localhost.service:1234/flashTest/Main.swf cannot access LoaderInfo.applicationDomain owned by http://www.b.com/b.swf. at flash.display::LoaderInfo/get applicationDomain() at NuconomyLoader/onLoaderFinish() Is there any way to get this code working?
This is all described in The Adobe Flex 3 Programming ActionScript 3 PDF on page 550 (Chapter 27: Flash Player Security / Cross-scripting): If two SWF files written with ActionScript 3.0 are served from different domains—for example, http://siteA.com/swfA.swf and http://siteB.com/swfB.swf —then, by default, Flash Player does not allow swfA.swf to script swfB.swf, nor swfB.swf to script swfA.swf. A SWF file gives permission to SWF files from other domains by calling Security.allowDomain(). By calling Security.allowDomain("siteA.com"), swfB.swf gives SWF files from siteA.com permission to script it. It goes on in some more detail, with diagrams and all.
How do I call a Flex SWF from a remote domain using Flash (AS3)? I have a Flex swf hosted at http://www.a.com/a.swf. I have Flash code on another domain that tries loading the SWF: _loader = new Loader(); var req:URLRequest = new URLRequest("http://services.nuconomy.com/n.swf"); _loader.contentLoaderInfo.addEventListener(Event.COMPLETE,onLoaderFinish); _loader.load(req); On the onLoaderFinish event I try to load classes from the remote SWF and create them: _loader.contentLoaderInfo.applicationDomain.getDefinition("someClassName") as Class When this code runs I get the following exception: SecurityError: Error #2119: Security sandbox violation: caller http://localhost.service:1234/flashTest/Main.swf cannot access LoaderInfo.applicationDomain owned by http://www.b.com/b.swf. at flash.display::LoaderInfo/get applicationDomain() at NuconomyLoader/onLoaderFinish() Is there any way to get this code working?
TITLE: How do I call a Flex SWF from a remote domain using Flash (AS3)? QUESTION: I have a Flex swf hosted at http://www.a.com/a.swf. I have Flash code on another domain that tries loading the SWF: _loader = new Loader(); var req:URLRequest = new URLRequest("http://services.nuconomy.com/n.swf"); _loader.contentLoaderInfo.addEventListener(Event.COMPLETE,onLoaderFinish); _loader.load(req); On the onLoaderFinish event I try to load classes from the remote SWF and create them: _loader.contentLoaderInfo.applicationDomain.getDefinition("someClassName") as Class When this code runs I get the following exception: SecurityError: Error #2119: Security sandbox violation: caller http://localhost.service:1234/flashTest/Main.swf cannot access LoaderInfo.applicationDomain owned by http://www.b.com/b.swf. at flash.display::LoaderInfo/get applicationDomain() at NuconomyLoader/onLoaderFinish() Is there any way to get this code working? ANSWER: This is all described in The Adobe Flex 3 Programming ActionScript 3 PDF on page 550 (Chapter 27: Flash Player Security / Cross-scripting): If two SWF files written with ActionScript 3.0 are served from different domains—for example, http://siteA.com/swfA.swf and http://siteB.com/swfB.swf —then, by default, Flash Player does not allow swfA.swf to script swfB.swf, nor swfB.swf to script swfA.swf. A SWF file gives permission to SWF files from other domains by calling Security.allowDomain(). By calling Security.allowDomain("siteA.com"), swfB.swf gives SWF files from siteA.com permission to script it. It goes on in some more detail, with diagrams and all.
[ "apache-flex", "flash", "actionscript-3", "security" ]
2
6
4,299
3
0
2008-08-18T08:53:39.423000
2008-08-18T09:59:50.697000
14,358
14,360
Accessing OpenGL state variables in Cg
I need to access the OpenGL state variables (such as the MVP matrices) in my Cg shader program. I'm passing these values to my Cg shader program manually using calls such as cgGLSetStateMatrixParameter() in my C/C++ code. Is there an easier way to do this?
If you are on any fairly recent Cg profile ( arbvp1 and later), your Cg shader programs can in fact access the OpenGL state (MVP matrices, material and light settings) directly. This makes writing those programs less painful. Here are some of the state variables which can be accessed: MVP matrices of all types: state.matrix.mvp state.matrix.inverse.mvp state.matrix.modelview state.matrix.inverse.modelview state.matrix.modelview.invtrans state.matrix.projection state.matrix.inverse.projection Light and material properties: state.material.ambient state.material.diffuse state.material.specular state.light[0].ambient For the full list of state variables, refer to the section Accessing OpenGL State, OpenGL ARB Vertex Program Profile (arbvp1) in the Cg Users Manual. Note: All the OpenGL state variables are of uniform type when accessed in Cg. For light variables, the index is mandatory. (Eg: 1 in state.light[1].ambient ) Lighting or light(s) need not be enabled to use those corresponding light values inside Cg. But, they need to be set using glLight() functions.
Accessing OpenGL state variables in Cg I need to access the OpenGL state variables (such as the MVP matrices) in my Cg shader program. I'm passing these values to my Cg shader program manually using calls such as cgGLSetStateMatrixParameter() in my C/C++ code. Is there an easier way to do this?
TITLE: Accessing OpenGL state variables in Cg QUESTION: I need to access the OpenGL state variables (such as the MVP matrices) in my Cg shader program. I'm passing these values to my Cg shader program manually using calls such as cgGLSetStateMatrixParameter() in my C/C++ code. Is there an easier way to do this? ANSWER: If you are on any fairly recent Cg profile ( arbvp1 and later), your Cg shader programs can in fact access the OpenGL state (MVP matrices, material and light settings) directly. This makes writing those programs less painful. Here are some of the state variables which can be accessed: MVP matrices of all types: state.matrix.mvp state.matrix.inverse.mvp state.matrix.modelview state.matrix.inverse.modelview state.matrix.modelview.invtrans state.matrix.projection state.matrix.inverse.projection Light and material properties: state.material.ambient state.material.diffuse state.material.specular state.light[0].ambient For the full list of state variables, refer to the section Accessing OpenGL State, OpenGL ARB Vertex Program Profile (arbvp1) in the Cg Users Manual. Note: All the OpenGL state variables are of uniform type when accessed in Cg. For light variables, the index is mandatory. (Eg: 1 in state.light[1].ambient ) Lighting or light(s) need not be enabled to use those corresponding light values inside Cg. But, they need to be set using glLight() functions.
[ "opengl", "variables", "state", "cg" ]
3
4
2,205
1
0
2008-08-18T09:10:54.290000
2008-08-18T09:11:13.117000
14,359
14,362
.Net Dynamic Plugin Loading with Authority
What recommendations can you give for a system which must do the following: Load Plugins (and eventually execute them) but have 2 methods of loading these plugins: Load only authorized plugins (developed by the owner of the software) Load all plugins And we need to be reasonably secure that the authorized plugins are the real deal (unmodified). However, all plugins must be in separate assemblies. I've been looking at using strong-named assemblies for the plugins, with the public key stored in the loader application, but to me it seems too easy to modify the public key within the loader application (if the user was so inclined) regardless of any obfuscation of the loader application. Any more secure ideas?
Basically, if you're putting your code on someone else's machine, there's no absolute guarantee of security. You can look at all kinds of security tricks, but in the end, the code is on their machine so it's out of your control. How much do you stand to lose if the end user loads an unauthorised plugin?
.Net Dynamic Plugin Loading with Authority What recommendations can you give for a system which must do the following: Load Plugins (and eventually execute them) but have 2 methods of loading these plugins: Load only authorized plugins (developed by the owner of the software) Load all plugins And we need to be reasonably secure that the authorized plugins are the real deal (unmodified). However, all plugins must be in separate assemblies. I've been looking at using strong-named assemblies for the plugins, with the public key stored in the loader application, but to me it seems too easy to modify the public key within the loader application (if the user was so inclined) regardless of any obfuscation of the loader application. Any more secure ideas?
TITLE: .Net Dynamic Plugin Loading with Authority QUESTION: What recommendations can you give for a system which must do the following: Load Plugins (and eventually execute them) but have 2 methods of loading these plugins: Load only authorized plugins (developed by the owner of the software) Load all plugins And we need to be reasonably secure that the authorized plugins are the real deal (unmodified). However, all plugins must be in separate assemblies. I've been looking at using strong-named assemblies for the plugins, with the public key stored in the loader application, but to me it seems too easy to modify the public key within the loader application (if the user was so inclined) regardless of any obfuscation of the loader application. Any more secure ideas? ANSWER: Basically, if you're putting your code on someone else's machine, there's no absolute guarantee of security. You can look at all kinds of security tricks, but in the end, the code is on their machine so it's out of your control. How much do you stand to lose if the end user loads an unauthorised plugin?
[ "c#", ".net", "plugins", "assemblies", "dynamics-crm" ]
1
0
657
4
0
2008-08-18T09:10:56.427000
2008-08-18T09:14:30.973000
14,364
14,365
Valid OpenGL context
How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code.
From the posts on comp.graphics.api.opengl, it seems like most newbies burn their hands on their first OpenGL program. In most cases, the error is caused by OpenGL functions being called even before a valid OpenGL context is created. OpenGL is a state machine. Only after the machine has been started and is humming in the ready state can it be put to work. Here is some simple code to create a valid OpenGL context: #include <GL/gl.h> #include <GL/glut.h> // Window attributes static const unsigned int WIN_POS_X = 30; static const unsigned int WIN_POS_Y = WIN_POS_X; static const unsigned int WIN_WIDTH = 512; static const unsigned int WIN_HEIGHT = WIN_WIDTH; void glInit(int, char **); int main(int argc, char * argv[]) { // Initialize OpenGL glInit(argc, argv); // A valid OpenGL context has been created. // You can call OpenGL functions from here on. glutMainLoop(); return 0; } void glInit(int argc, char ** argv) { // Initialize GLUT glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE); glutInitWindowPosition(WIN_POS_X, WIN_POS_Y); glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT); glutCreateWindow("Hello OpenGL!"); return; } Note: The call of interest here is glutCreateWindow(). It not only creates a window, but also creates an OpenGL context. The window created with glutCreateWindow() is not visible until glutMainLoop() is called.
Valid OpenGL context How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code.
TITLE: Valid OpenGL context QUESTION: How and at what stage is a valid OpenGL context created in my code? I'm getting errors on even simple OpenGL code. ANSWER: From the posts on comp.graphics.api.opengl, it seems like most newbies burn their hands on their first OpenGL program. In most cases, the error is caused by OpenGL functions being called even before a valid OpenGL context is created. OpenGL is a state machine. Only after the machine has been started and is humming in the ready state can it be put to work. Here is some simple code to create a valid OpenGL context: #include <GL/gl.h> #include <GL/glut.h> // Window attributes static const unsigned int WIN_POS_X = 30; static const unsigned int WIN_POS_Y = WIN_POS_X; static const unsigned int WIN_WIDTH = 512; static const unsigned int WIN_HEIGHT = WIN_WIDTH; void glInit(int, char **); int main(int argc, char * argv[]) { // Initialize OpenGL glInit(argc, argv); // A valid OpenGL context has been created. // You can call OpenGL functions from here on. glutMainLoop(); return 0; } void glInit(int argc, char ** argv) { // Initialize GLUT glutInit(&argc, argv); glutInitDisplayMode(GLUT_DOUBLE); glutInitWindowPosition(WIN_POS_X, WIN_POS_Y); glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT); glutCreateWindow("Hello OpenGL!"); return; } Note: The call of interest here is glutCreateWindow(). It not only creates a window, but also creates an OpenGL context. The window created with glutCreateWindow() is not visible until glutMainLoop() is called.
[ "opengl", "glut" ]
4
4
5,862
1
0
2008-08-18T09:17:17.693000
2008-08-18T09:17:46.193000
14,370
14,371
GLUT pop-up menus
Is it easy to create GLUT pop-up menus for my OpenGL application? If yes, how?
Creating and using pop-up menus with GLUT is very simple. Here is a code sample that creates a pop-up menu with 4 options: // Menu items enum MENU_TYPE { MENU_FRONT, MENU_SPOT, MENU_BACK, MENU_BACK_FRONT, }; // Assign a default value MENU_TYPE show = MENU_BACK_FRONT; // Menu handling function declaration void menu(int); int main() { //... // Create a menu glutCreateMenu(menu); // Add menu items glutAddMenuEntry("Show Front", MENU_FRONT); glutAddMenuEntry("Show Back", MENU_BACK); glutAddMenuEntry("Spotlight", MENU_SPOT); glutAddMenuEntry("Blend 'em all", MENU_BACK_FRONT); // Associate a mouse button with menu glutAttachMenu(GLUT_RIGHT_BUTTON); //... return 0; } // Menu handling function definition void menu(int item) { switch (item) { case MENU_FRONT: case MENU_SPOT: case MENU_BACK: case MENU_BACK_FRONT: { show = (MENU_TYPE) item; } break; default: { /* Nothing */ } break; } glutPostRedisplay(); return; }
GLUT pop-up menus Is it easy to create GLUT pop-up menus for my OpenGL application? If yes, how?
TITLE: GLUT pop-up menus QUESTION: Is it easy to create GLUT pop-up menus for my OpenGL application? If yes, how? ANSWER: Creating and using pop-up menus with GLUT is very simple. Here is a code sample that creates a pop-up menu with 4 options: // Menu items enum MENU_TYPE { MENU_FRONT, MENU_SPOT, MENU_BACK, MENU_BACK_FRONT, }; // Assign a default value MENU_TYPE show = MENU_BACK_FRONT; // Menu handling function declaration void menu(int); int main() { //... // Create a menu glutCreateMenu(menu); // Add menu items glutAddMenuEntry("Show Front", MENU_FRONT); glutAddMenuEntry("Show Back", MENU_BACK); glutAddMenuEntry("Spotlight", MENU_SPOT); glutAddMenuEntry("Blend 'em all", MENU_BACK_FRONT); // Associate a mouse button with menu glutAttachMenu(GLUT_RIGHT_BUTTON); //... return 0; } // Menu handling function definition void menu(int item) { switch (item) { case MENU_FRONT: case MENU_SPOT: case MENU_BACK: case MENU_BACK_FRONT: { show = (MENU_TYPE) item; } break; default: { /* Nothing */ } break; } glutPostRedisplay(); return; }
[ "opengl", "glut", "menu" ]
7
13
18,739
1
0
2008-08-18T09:22:49.833000
2008-08-18T09:23:11.627000
14,373
14,411
CVS to SVN conversion and reorganizing branches
I am converting from an existing CVS repository to an SVN repository. The CVS repository has a few branches and I'd like to rename the branches while converting. The wanted conversion is like this: CVS branch SVN branch HEAD -> branches/branchX branchA -> trunk branchB -> branches/branchB branchC -> branches/branchC That is, CVS HEAD becomes a normal branch and CVS branchA becomes the SVN trunk. Both the CVS and SVN repositories will be on the same Linux machine. How could this be done? Also, a conversion where CVS branchA becomes the SVN trunk and all other CVS branches are ignored might be enough.
I am especially interested in preserving commit history. If I rename and move branches around in SVN after the conversion, will the history be preserved? Yes. Subversion also keeps track of changes to the directory structure, and all version history is preserved even if a file is moved in the tree. I recommend converting the repository with cvs2svn, including branches and tags. Once the repository is in Subversion you can move the branches and tags around as you wish. This also keeps the history of the actual tags and branches being renamed, which may be interesting in a historical context later.
CVS to SVN conversion and reorganizing branches I am converting from an existing CVS repository to an SVN repository. The CVS repository has a few branches and I'd like to rename the branches while converting. The wanted conversion is like this: CVS branch SVN branch HEAD -> branches/branchX branchA -> trunk branchB -> branches/branchB branchC -> branches/branchC That is, CVS HEAD becomes a normal branch and CVS branchA becomes the SVN trunk. Both the CVS and SVN repositories will be on the same Linux machine. How could this be done? Also, a conversion where CVS branchA becomes the SVN trunk and all other CVS branches are ignored might be enough.
TITLE: CVS to SVN conversion and reorganizing branches QUESTION: I am converting from an existing CVS repository to an SVN repository. The CVS repository has a few branches and I'd like to rename the branches while converting. The wanted conversion is like this: CVS branch SVN branch HEAD -> branches/branchX branchA -> trunk branchB -> branches/branchB branchC -> branches/branchC That is, CVS HEAD becomes a normal branch and CVS branchA becomes the SVN trunk. Both the CVS and SVN repositories will be on the same Linux machine. How could this be done? Also, a conversion where CVS branchA becomes the SVN trunk and all other CVS branches are ignored might be enough. ANSWER: I am especially interested in preserving commit history. If I rename and move branches around in SVN after the conversion, will the history be preserved? Yes. Subversion also keeps track of changes to the directory structure, and all version history is preserved even if a file is moved in the tree. I recommend converting the repository with cvs2svn, including branches and tags. Once the repository is in Subversion you can move the branches and tags around as you wish. This also keeps the history of the actual tags and branches being renamed, which may be interesting in a historical context later.
[ "svn", "cvs" ]
6
9
2,366
7
0
2008-08-18T09:26:17.487000
2008-08-18T10:08:26.090000
14,378
14,379
Using the mouse scrollwheel in GLUT
I want to use the mouse scrollwheel in my OpenGL GLUT program to zoom in and out of a scene? How do I do that?
Note that the venerable Nate Robins' GLUT library doesn't support the scrollwheel. But later implementations of GLUT, like FreeGLUT, do. Using the scroll wheel in FreeGLUT is dead simple. Here is how: Declare a callback function that shall be called whenever the scroll wheel is scrolled. This is the prototype: void mouseWheel(int, int, int, int); Register the callback with the (Free)GLUT function glutMouseWheelFunc(). glutMouseWheelFunc(mouseWheel); Define the callback function. The second parameter gives the direction of the scroll: a value of +1 is forward, -1 is backward. void mouseWheel(int button, int dir, int x, int y) { if (dir > 0) { // Zoom in } else { // Zoom out } return; } That's it!
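The zoom decision inside that callback is a one-liner; here is a minimal, GLUT-free sketch of it (the helper's name and the 10% step size are my own choices, not part of the FreeGLUT API):

```c
/* Helper a glutMouseWheelFunc callback could call on each wheel event:
   dir > 0 (wheel forward) zooms in by shrinking the camera distance,
   dir < 0 zooms out by growing it, 10% per notch. */
double apply_zoom(double camera_distance, int dir)
{
    const double step = 1.1;  /* 10% change per wheel notch */
    return (dir > 0) ? camera_distance / step : camera_distance * step;
}
```

Inside mouseWheel() you would then just do distance = apply_zoom(distance, dir); and trigger a redisplay.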
Using the mouse scrollwheel in GLUT I want to use the mouse scrollwheel in my OpenGL GLUT program to zoom in and out of a scene? How do I do that?
TITLE: Using the mouse scrollwheel in GLUT QUESTION: I want to use the mouse scrollwheel in my OpenGL GLUT program to zoom in and out of a scene? How do I do that? ANSWER: Note that the venerable Nate Robins' GLUT library doesn't support the scrollwheel. But later implementations of GLUT, like FreeGLUT, do. Using the scroll wheel in FreeGLUT is dead simple. Here is how: Declare a callback function that shall be called whenever the scroll wheel is scrolled. This is the prototype: void mouseWheel(int, int, int, int); Register the callback with the (Free)GLUT function glutMouseWheelFunc(). glutMouseWheelFunc(mouseWheel); Define the callback function. The second parameter gives the direction of the scroll: a value of +1 is forward, -1 is backward. void mouseWheel(int button, int dir, int x, int y) { if (dir > 0) { // Zoom in } else { // Zoom out } return; } That's it!
[ "opengl", "glut", "scrollwheel" ]
34
26
57,458
3
0
2008-08-18T09:29:34.593000
2008-08-18T09:29:49.010000
14,386
14,387
fopen deprecated warning
With the Visual Studio 2005 C++ compiler, I get the following warning when my code uses the fopen() and such calls: 1>foo.cpp(5): warning C4996: 'fopen' was declared deprecated 1> c:\program files\microsoft visual studio 8\vc\include\stdio.h(234): see declaration of 'fopen' 1> Message: 'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.' How do I prevent this?
It looks like Microsoft has deprecated lots of calls which use buffers to improve code security. However, the solutions they're providing aren't portable. Anyway, if you aren't interested in using the secure version of their calls (like fopen_s), you need to place a definition of _CRT_SECURE_NO_DEPRECATE before your included header files. For example: #define _CRT_SECURE_NO_DEPRECATE #include <stdio.h> The preprocessor directive can also be added to your project settings to apply it to all the files in the project. To do this add _CRT_SECURE_NO_DEPRECATE to Project Properties -> Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions.
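To make the first option concrete, here is a minimal sketch (the function and file name are made up for illustration); the macro must precede the first standard header, and non-Microsoft compilers simply ignore it:

```c
/* Must appear before the first standard header, or it has no effect. */
#define _CRT_SECURE_NO_DEPRECATE
#include <stdio.h>

/* Plain fopen(): on VS2005 this would emit warning C4996 without the
   macro above. Returns 0 on success, -1 on failure. */
int write_greeting(const char *path)
{
    FILE *f = fopen(path, "w");
    if (f == NULL)
        return -1;
    fputs("hello\n", f);
    return (fclose(f) == 0) ? 0 : -1;
}
```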
fopen deprecated warning With the Visual Studio 2005 C++ compiler, I get the following warning when my code uses the fopen() and such calls: 1>foo.cpp(5): warning C4996: 'fopen' was declared deprecated 1> c:\program files\microsoft visual studio 8\vc\include\stdio.h(234): see declaration of 'fopen' 1> Message: 'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.' How do I prevent this?
TITLE: fopen deprecated warning QUESTION: With the Visual Studio 2005 C++ compiler, I get the following warning when my code uses the fopen() and such calls: 1>foo.cpp(5): warning C4996: 'fopen' was declared deprecated 1> c:\program files\microsoft visual studio 8\vc\include\stdio.h(234): see declaration of 'fopen' 1> Message: 'This function or variable may be unsafe. Consider using fopen_s instead. To disable deprecation, use _CRT_SECURE_NO_DEPRECATE. See online help for details.' How do I prevent this? ANSWER: It looks like Microsoft has deprecated lots of calls which use buffers to improve code security. However, the solutions they're providing aren't portable. Anyway, if you aren't interested in using the secure version of their calls (like fopen_s), you need to place a definition of _CRT_SECURE_NO_DEPRECATE before your included header files. For example: #define _CRT_SECURE_NO_DEPRECATE #include <stdio.h> The preprocessor directive can also be added to your project settings to apply it to all the files in the project. To do this add _CRT_SECURE_NO_DEPRECATE to Project Properties -> Configuration Properties -> C/C++ -> Preprocessor -> Preprocessor Definitions.
[ "visual-c++", "fopen", "deprecated" ]
75
139
200,908
11
0
2008-08-18T09:38:58.913000
2008-08-18T09:39:16.510000
14,389
14,391
Regex and unicode
I have a script that parses the filenames of TV episodes (show.name.s01e02.avi for example), grabs the episode name (from the www.thetvdb.com API) and automatically renames them into something nicer (Show Name - [01x02].avi) The script works fine, that is until you try and use it on files that have Unicode show-names (something I never really thought about, since all the files I have are English, so mostly pretty-much all fall within [a-zA-Z0-9'\-] ) How can I allow the regular expressions to match accented characters and the likes? Currently the regex's config section looks like.. config['valid_filename_chars'] = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@£$%^&*()_+=-[]{}"'.,<>`~? """ config['valid_filename_chars_regex'] = re.escape(config['valid_filename_chars']) config['name_parse'] = [ # foo_[s01]_[e01] re.compile('''^([%s]+?)[ \._\-]\[[Ss]([0-9]+?)\]_\[[Ee]([0-9]+?)\]?[^\\/]*$'''% (config['valid_filename_chars_regex'])), # foo.1x09* re.compile('''^([%s]+?)[ \._\-]\[?([0-9]+)x([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.s01.e01, foo.s01_e01 re.compile('''^([%s]+?)[ \._\-][Ss]([0-9]+)[\.\- ]?[Ee]([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.103* re.compile('''^([%s]+)[ \._\-]([0-9]{1})([0-9]{2})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.0103* re.compile('''^([%s]+)[ \._\-]([0-9]{2})([0-9]{2,3})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), ]
Use a subrange of [\u0000-\uFFFF] for what you want. You can also use the re.UNICODE compile flag. The docs say that if UNICODE is set, \w will match the characters [0-9_] plus whatever is classified as alphanumeric in the Unicode character properties database. See also http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-05/2560.html.
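As a runnable sketch of the re.UNICODE suggestion (the pattern and filenames here are illustrative, not the script's real config), \w widens the name class to accented letters while the rest of the episode pattern stays unchanged:

```python
import re

# \w under re.UNICODE matches accented letters as well, so the
# hand-built valid_filename_chars list is no longer needed for names.
pattern = re.compile(r"^([\w'. \-]+?)[ ._\-][Ss](\d+)[ ._\-]?[Ee](\d+)",
                     re.UNICODE)

def parse(filename):
    """Return (show, season, episode) or None if the name doesn't match."""
    m = pattern.match(filename)
    if m is None:
        return None
    return m.group(1), int(m.group(2)), int(m.group(3))
```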
Regex and unicode I have a script that parses the filenames of TV episodes (show.name.s01e02.avi for example), grabs the episode name (from the www.thetvdb.com API) and automatically renames them into something nicer (Show Name - [01x02].avi) The script works fine, that is until you try and use it on files that have Unicode show-names (something I never really thought about, since all the files I have are English, so mostly pretty-much all fall within [a-zA-Z0-9'\-] ) How can I allow the regular expressions to match accented characters and the likes? Currently the regex's config section looks like.. config['valid_filename_chars'] = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@£$%^&*()_+=-[]{}"'.,<>`~? """ config['valid_filename_chars_regex'] = re.escape(config['valid_filename_chars']) config['name_parse'] = [ # foo_[s01]_[e01] re.compile('''^([%s]+?)[ \._\-]\[[Ss]([0-9]+?)\]_\[[Ee]([0-9]+?)\]?[^\\/]*$'''% (config['valid_filename_chars_regex'])), # foo.1x09* re.compile('''^([%s]+?)[ \._\-]\[?([0-9]+)x([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.s01.e01, foo.s01_e01 re.compile('''^([%s]+?)[ \._\-][Ss]([0-9]+)[\.\- ]?[Ee]([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.103* re.compile('''^([%s]+)[ \._\-]([0-9]{1})([0-9]{2})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.0103* re.compile('''^([%s]+)[ \._\-]([0-9]{2})([0-9]{2,3})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), ]
TITLE: Regex and unicode QUESTION: I have a script that parses the filenames of TV episodes (show.name.s01e02.avi for example), grabs the episode name (from the www.thetvdb.com API) and automatically renames them into something nicer (Show Name - [01x02].avi) The script works fine, that is until you try and use it on files that have Unicode show-names (something I never really thought about, since all the files I have are English, so mostly pretty-much all fall within [a-zA-Z0-9'\-] ) How can I allow the regular expressions to match accented characters and the likes? Currently the regex's config section looks like.. config['valid_filename_chars'] = """0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ!@£$%^&*()_+=-[]{}"'.,<>`~? """ config['valid_filename_chars_regex'] = re.escape(config['valid_filename_chars']) config['name_parse'] = [ # foo_[s01]_[e01] re.compile('''^([%s]+?)[ \._\-]\[[Ss]([0-9]+?)\]_\[[Ee]([0-9]+?)\]?[^\\/]*$'''% (config['valid_filename_chars_regex'])), # foo.1x09* re.compile('''^([%s]+?)[ \._\-]\[?([0-9]+)x([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.s01.e01, foo.s01_e01 re.compile('''^([%s]+?)[ \._\-][Ss]([0-9]+)[\.\- ]?[Ee]([0-9]+)[^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.103* re.compile('''^([%s]+)[ \._\-]([0-9]{1})([0-9]{2})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), # foo.0103* re.compile('''^([%s]+)[ \._\-]([0-9]{2})([0-9]{2,3})[\._ -][^\\/]*$''' % (config['valid_filename_chars_regex'])), ] ANSWER: Use a subrange of [\u0000-\uFFFF] for what you want. You can also use the re.UNICODE compile flag. The docs say that if UNICODE is set, \w will match the characters [0-9_] plus whatever is classified as alphanumeric in the Unicode character properties database. See also http://coding.derkeiler.com/Archive/Python/comp.lang.python/2004-05/2560.html.
[ "python", "regex", "unicode", "character-properties" ]
31
21
17,881
4
0
2008-08-18T09:41:14.273000
2008-08-18T09:43:10.493000
14,397
14,441
When should a multi-module project be split into separate repository trees?
Currently we have a project with a standard subversion repository layout of: ./trunk, ./branches, ./tags. However, as we're moving down the road of OSGi and a modular project, we've ended up with: ./trunk/bundle/main, ./trunk/bundle/modulea, ./trunk/bundle/moduleb, ./tags/bundle/main-1.0.0, ./tags/bundle/main-1.0.1, ./tags/bundle/modulea-1.0.0. The 'build' is still quite monolithic in that it builds all modules in sequence, though I'm starting to wonder if we should refactor the build/repository to something more like: ./bundle/main/trunk, ./bundle/main/tags/main-1.0.0, ./bundle/main/tags/main-1.0.1, ./bundle/modulea/trunk, ./bundle/modulea/tags/modulea-1.0.0. In this pattern I would imagine each module building itself, and storing its binary in a repository (maven, ivy, or another path of the subversion repository itself). Are there guidelines or 'best-practices' over project layouts once one goes modular?
The Subversion book contains two sections on this: "Repository Layout" and "Planning Your Repository Organization". A blog entry on the subject: "Subversion Repository Layout". The short answer, though: while your mileage will vary (every situation is individual), your /bundle/<module>/(trunk|tags|branches) scheme is rather common and will likely work well for you.
When should a multi-module project be split into separate repository trees? Currently we have a project with a standard subversion repository layout of: ./trunk, ./branches, ./tags. However, as we're moving down the road of OSGi and a modular project, we've ended up with: ./trunk/bundle/main, ./trunk/bundle/modulea, ./trunk/bundle/moduleb, ./tags/bundle/main-1.0.0, ./tags/bundle/main-1.0.1, ./tags/bundle/modulea-1.0.0. The 'build' is still quite monolithic in that it builds all modules in sequence, though I'm starting to wonder if we should refactor the build/repository to something more like: ./bundle/main/trunk, ./bundle/main/tags/main-1.0.0, ./bundle/main/tags/main-1.0.1, ./bundle/modulea/trunk, ./bundle/modulea/tags/modulea-1.0.0. In this pattern I would imagine each module building itself, and storing its binary in a repository (maven, ivy, or another path of the subversion repository itself). Are there guidelines or 'best-practices' over project layouts once one goes modular?
TITLE: When should a multi-module project be split into separate repository trees? QUESTION: Currently we have a project with a standard subversion repository layout of: ./trunk, ./branches, ./tags. However, as we're moving down the road of OSGi and a modular project, we've ended up with: ./trunk/bundle/main, ./trunk/bundle/modulea, ./trunk/bundle/moduleb, ./tags/bundle/main-1.0.0, ./tags/bundle/main-1.0.1, ./tags/bundle/modulea-1.0.0. The 'build' is still quite monolithic in that it builds all modules in sequence, though I'm starting to wonder if we should refactor the build/repository to something more like: ./bundle/main/trunk, ./bundle/main/tags/main-1.0.0, ./bundle/main/tags/main-1.0.1, ./bundle/modulea/trunk, ./bundle/modulea/tags/modulea-1.0.0. In this pattern I would imagine each module building itself, and storing its binary in a repository (maven, ivy, or another path of the subversion repository itself). Are there guidelines or 'best-practices' over project layouts once one goes modular? ANSWER: The Subversion book contains two sections on this: "Repository Layout" and "Planning Your Repository Organization". A blog entry on the subject: "Subversion Repository Layout". The short answer, though: while your mileage will vary (every situation is individual), your /bundle/<module>/(trunk|tags|branches) scheme is rather common and will likely work well for you.
[ "java", "svn", "osgi" ]
7
7
2,821
4
0
2008-08-18T09:51:18.153000
2008-08-18T10:45:46.640000
14,398
15,428
Class Designer in Visual Studio - is it worth it?
Does anybody use the Class Designer much in Visual Studio? I have downloaded the Modeling Power Toys for 2005 and have been impressed with what I've seen so far. The MSDN Class Designer Blog doesn't seem to have been updated for a while but it still looks quite useful. Is the Class Designer a quick way to build the basic application or should I just work out on paper and then start coding? Thanks
As a visualization tool, or for exploratory purposes (drawing up multiple options to see what they look like) it's not bad, but generally I find the object browser does fine for most stuff I care about. As a code generation tool, it's a terrible idea. The whole idea that we will design all our code structure first, then fill in the blanks with small bits of implementation is fundamentally broken. The only time you actually know what the code structure should look like, is if you've done the exact same thing before - however then you can just use your previous code, and you don't need to draw up any new code in any kind of designer. If you decide ahead of time to use a particular class structure before you've actually tried to solve the problem, there is a 100% chance that you will pick the wrong design, and shoot yourself in the foot.
Class Designer in Visual Studio - is it worth it? Does anybody use the Class Designer much in Visual Studio? I have downloaded the Modeling Power Toys for 2005 and have been impressed with what I've seen so far. The MSDN Class Designer Blog doesn't seem to have been updated for a while but it still looks quite useful. Is the Class Designer a quick way to build the basic application or should I just work out on paper and then start coding? Thanks
TITLE: Class Designer in Visual Studio - is it worth it? QUESTION: Does anybody use the Class Designer much in Visual Studio? I have downloaded the Modeling Power Toys for 2005 and have been impressed with what I've seen so far. The MSDN Class Designer Blog doesn't seem to have been updated for a while but it still looks quite useful. Is the Class Designer a quick way to build the basic application or should I just work out on paper and then start coding? Thanks ANSWER: As a visualization tool, or for exploratory purposes (drawing up multiple options to see what they look like) it's not bad, but generally I find the object browser does fine for most stuff I care about. As a code generation tool, it's a terrible idea. The whole idea that we will design all our code structure first, then fill in the blanks with small bits of implementation is fundamentally broken. The only time you actually know what the code structure should look like, is if you've done the exact same thing before - however then you can just use your previous code, and you don't need to draw up any new code in any kind of designer. If you decide ahead of time to use a particular class structure before you've actually tried to solve the problem, there is a 100% chance that you will pick the wrong design, and shoot yourself in the foot.
[ "visual-studio", "class-design", "class-designer" ]
15
18
10,997
8
0
2008-08-18T09:51:36.763000
2008-08-19T00:24:39.280000
14,402
14,403
GLUT exit redefinition error
In my simple OpenGL program I get the following error about exit redefinition: 1>c:\program files\microsoft visual studio 8\vc\include\stdlib.h(406): error C2381: 'exit': redefinition; __declspec(noreturn) differs 1> c:\program files\microsoft visual studio 8\vc\platformsdk\include\gl\glut.h(146): see declaration of 'exit' I'm using Nate Robins' GLUT for Win32 and get this error with Visual Studio 2005 or Visual C++ 2005 (Express Edition). What is the cause of this error and how do I fix it?
Cause: The stdlib.h which ships with the recent versions of Visual Studio has a different (and conflicting) definition of the exit() function. It clashes with the definition in glut.h. Solution: Override the definition in glut.h with that in stdlib.h. Place the stdlib.h line above the glut.h line in your code. #include <stdlib.h> #include <GL/glut.h>
GLUT exit redefinition error In my simple OpenGL program I get the following error about exit redefinition: 1>c:\program files\microsoft visual studio 8\vc\include\stdlib.h(406): error C2381: 'exit': redefinition; __declspec(noreturn) differs 1> c:\program files\microsoft visual studio 8\vc\platformsdk\include\gl\glut.h(146): see declaration of 'exit' I'm using Nate Robins' GLUT for Win32 and get this error with Visual Studio 2005 or Visual C++ 2005 (Express Edition). What is the cause of this error and how do I fix it?
TITLE: GLUT exit redefinition error QUESTION: In my simple OpenGL program I get the following error about exit redefinition: 1>c:\program files\microsoft visual studio 8\vc\include\stdlib.h(406): error C2381: 'exit': redefinition; __declspec(noreturn) differs 1> c:\program files\microsoft visual studio 8\vc\platformsdk\include\gl\glut.h(146): see declaration of 'exit' I'm using Nate Robins' GLUT for Win32 and get this error with Visual Studio 2005 or Visual C++ 2005 (Express Edition). What is the cause of this error and how do I fix it? ANSWER: Cause: The stdlib.h which ships with the recent versions of Visual Studio has a different (and conflicting) definition of the exit() function. It clashes with the definition in glut.h. Solution: Override the definition in glut.h with that in stdlib.h. Place the stdlib.h line above the glut.h line in your code. #include <stdlib.h> #include <GL/glut.h>
[ "opengl", "glut" ]
34
75
41,203
2
0
2008-08-18T09:55:29.457000
2008-08-18T09:55:45.883000
14,410
14,489
Why are there so few modal-editors that aren't vi*?
Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the emacs shortcuts ( ctrl + w to delete back a word and so on)
Early software was often modal, but usability took a turn at some point, away from this style. VI-based editors are total enigmas -- they're the only real surviving members of that order of software. Modes are a no-no in usability and interaction design because we humans are fickle mammals who cannot be trusted to remember what mode the application is in. If you think you are in one "mode" when you are actually in another, then all sorts of badness can ensue. What you believe to be a series of harmless keystrokes can (in the wrong mode) cause unlimited catastrophe. This is known as a "mode error". To learn more, search for the term "modeless" (and "usability") As mentioned in the comments below, a Modal interface in the hands of an experienced and non-fickle person can be extremely efficient.
Why are there so few modal-editors that aren't vi*? Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the emacs shortcuts ( ctrl + w to delete back a word and so on)
TITLE: Why are there so few modal-editors that aren't vi*? QUESTION: Pretty much every other editor that isn't a vi descendant (vim, cream, vi-emu) seems to use the emacs shortcuts ( ctrl + w to delete back a word and so on) ANSWER: Early software was often modal, but usability took a turn at some point, away from this style. VI-based editors are total enigmas -- they're the only real surviving members of that order of software. Modes are a no-no in usability and interaction design because we humans are fickle mammals who cannot be trusted to remember what mode the application is in. If you think you are in one "mode" when you are actually in another, then all sorts of badness can ensue. What you believe to be a series of harmless keystrokes can (in the wrong mode) cause unlimited catastrophe. This is known as a "mode error". To learn more, search for the term "modeless" (and "usability") As mentioned in the comments below, a Modal interface in the hands of an experienced and non-fickle person can be extremely efficient.
[ "vim", "editor", "vi" ]
32
50
12,658
14
0
2008-08-18T10:08:09.447000
2008-08-18T12:03:03.380000
14,413
14,414
Using OpenGL extensions On Windows
I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this?
Easy solution: Use GLEW. See how here. Hard solution: If you have a really strong reason not to use GLEW, here's how to achieve the same without it: Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry. Example: I wish to use the capabilities of the EXT_framebuffer_object extension. The APIs I wish to use from this extension are: glGenFramebuffersEXT() glBindFramebufferEXT() glFramebufferTexture2DEXT() glCheckFramebufferStatusEXT() glDeleteFramebuffersEXT() Check if your graphics card supports the extension you wish to use. If it does, then your work is almost done! Download and install the latest drivers and SDKs for your graphics card. Example: The graphics card in my PC is an NVIDIA 6600 GT. So, I visit the NVIDIA OpenGL Extension Specifications webpage and find that the EXT_framebuffer_object extension is supported. I then download the latest NVIDIA OpenGL SDK and install it. Your graphics card manufacturer provides a glext.h header file (or a similarly named header file) with all the declarations needed to use the supported OpenGL extensions. (Note that not all extensions might be supported.) Either place this header file somewhere your compiler can pick it up or include its directory in your compiler's include directories list. Add a #include <glext.h> line in your code to pull the header in. Open glext.h, find the API you wish to use and grab its corresponding ugly-looking declaration. Example: I search for the above framebuffer APIs and find their corresponding ugly-looking declarations: typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSEXTPROC) (GLsizei n, GLuint *framebuffers); and GLAPI void APIENTRY glGenFramebuffersEXT (GLsizei, GLuint *); All this means is that your header file has the API declaration in two forms. One is a wgl-like ugly function pointer declaration. The other is a sane-looking function declaration.
For each extension API you wish to use, add in your code declarations of the function name as a type of the ugly-looking string. Example: PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT; PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT; PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT; PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT; PFNGLDELETEFRAMEBUFFERSEXTPROC glDeleteFramebuffersEXT; Though it looks ugly, all we're doing is to declare function pointers of the type corresponding to the extension API. Initialize these function pointers with their rightful functions. These functions are exposed by the library or driver. We need to use wglGetProcAddress() function to do this. Example: glGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC) wglGetProcAddress("glGenFramebuffersEXT"); glBindFramebufferEXT = (PFNGLBINDFRAMEBUFFEREXTPROC) wglGetProcAddress("glBindFramebufferEXT"); glFramebufferTexture2DEXT = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC) wglGetProcAddress("glFramebufferTexture2DEXT"); glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC) wglGetProcAddress("glCheckFramebufferStatusEXT"); glDeleteFramebuffersEXT = (PFNGLDELETEFRAMEBUFFERSEXTPROC) wglGetProcAddress("glDeleteFramebuffersEXT"); Don't forget to check the function pointers for NULL. If by chance wglGetProcAddress() couldn't find the extension function, it would've initialized the pointer with NULL. Example: if (NULL == glGenFramebuffersEXT || NULL == glBindFramebufferEXT || NULL == glFramebufferTexture2DEXT || NULL == glCheckFramebufferStatusEXT || NULL == glDeleteFramebuffersEXT) { // Extension functions not loaded! exit(1); } That's it, we're done! You can now use these function pointers just as if the function calls existed. 
Example: glGenFramebuffersEXT(1, &fbo); glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo); glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex[0], 0); Reference: Moving Beyond OpenGL 1.1 for Windows by Dave Astle — The article is a bit dated, but has all the information you need to understand why this pathetic situation exists on Windows and how to get around it.
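The resolve-then-NULL-check pattern above is mechanical enough to factor into a loop. Below is a runnable, Windows-free sketch: load_extensions() plays the role of the per-pointer wglGetProcAddress() calls, and lookup_stub() stands in for wglGetProcAddress() itself (all names here are illustrative, not a real API):

```c
#include <stddef.h>
#include <string.h>

typedef void (*proc_t)(void);

/* Resolve each named entry point via the supplied lookup function
   (wglGetProcAddress on Windows) and report whether every pointer
   came back non-NULL -- the same check recommended above. */
int load_extensions(proc_t (*lookup)(const char *),
                    const char *const *names, proc_t *out, int count)
{
    int all_found = 1;
    for (int i = 0; i < count; ++i) {
        out[i] = lookup(names[i]);
        if (out[i] == NULL)
            all_found = 0;   /* extension function not loaded */
    }
    return all_found;
}

/* Stand-in for wglGetProcAddress: pretends the driver exports only
   glGenFramebuffersEXT. */
static void dummy_impl(void) {}
proc_t lookup_stub(const char *name)
{
    return (strcmp(name, "glGenFramebuffersEXT") == 0) ? dummy_impl : NULL;
}
```

With the real wglGetProcAddress() plugged in, a single failed lookup means the driver doesn't expose that entry point and the program should bail out, exactly as in the NULL check above.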
Using OpenGL extensions On Windows I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this?
TITLE: Using OpenGL extensions On Windows QUESTION: I want to use the functions exposed under the OpenGL extensions. I'm on Windows, how do I do this? ANSWER: Easy solution: Use GLEW. See how here. Hard solution: If you have a really strong reason not to use GLEW, here's how to achieve the same without it: Identify the OpenGL extension and the extension APIs you wish to use. OpenGL extensions are listed in the OpenGL Extension Registry. Example: I wish to use the capabilities of the EXT_framebuffer_object extension. The APIs I wish to use from this extension are: glGenFramebuffersEXT() glBindFramebufferEXT() glFramebufferTexture2DEXT() glCheckFramebufferStatusEXT() glDeleteFramebuffersEXT() Check if your graphics card supports the extension you wish to use. If it does, then your work is almost done! Download and install the latest drivers and SDKs for your graphics card. Example: The graphics card in my PC is an NVIDIA 6600 GT. So, I visit the NVIDIA OpenGL Extension Specifications webpage and find that the EXT_framebuffer_object extension is supported. I then download the latest NVIDIA OpenGL SDK and install it. Your graphics card manufacturer provides a glext.h header file (or a similarly named header file) with all the declarations needed to use the supported OpenGL extensions. (Note that not all extensions might be supported.) Either place this header file somewhere your compiler can pick it up or include its directory in your compiler's include directories list. Add a #include <glext.h> line in your code to pull the header in. Open glext.h, find the API you wish to use and grab its corresponding ugly-looking declaration. Example: I search for the above framebuffer APIs and find their corresponding ugly-looking declarations: typedef void (APIENTRYP PFNGLGENFRAMEBUFFERSEXTPROC) (GLsizei n, GLuint *framebuffers); and GLAPI void APIENTRY glGenFramebuffersEXT (GLsizei, GLuint *); All this means is that your header file has the API declaration in two forms.
One is a wgl-like ugly function pointer declaration. The other is a sane looking function declaration. For each extension API you wish to use, add in your code declarations of the function name as a type of the ugly-looking string. Example: PFNGLGENFRAMEBUFFERSEXTPROC glGenFramebuffersEXT; PFNGLBINDFRAMEBUFFEREXTPROC glBindFramebufferEXT; PFNGLFRAMEBUFFERTEXTURE2DEXTPROC glFramebufferTexture2DEXT; PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC glCheckFramebufferStatusEXT; PFNGLDELETEFRAMEBUFFERSEXTPROC glDeleteFramebuffersEXT; Though it looks ugly, all we're doing is to declare function pointers of the type corresponding to the extension API. Initialize these function pointers with their rightful functions. These functions are exposed by the library or driver. We need to use wglGetProcAddress() function to do this. Example: glGenFramebuffersEXT = (PFNGLGENFRAMEBUFFERSEXTPROC) wglGetProcAddress("glGenFramebuffersEXT"); glBindFramebufferEXT = (PFNGLBINDFRAMEBUFFEREXTPROC) wglGetProcAddress("glBindFramebufferEXT"); glFramebufferTexture2DEXT = (PFNGLFRAMEBUFFERTEXTURE2DEXTPROC) wglGetProcAddress("glFramebufferTexture2DEXT"); glCheckFramebufferStatusEXT = (PFNGLCHECKFRAMEBUFFERSTATUSEXTPROC) wglGetProcAddress("glCheckFramebufferStatusEXT"); glDeleteFramebuffersEXT = (PFNGLDELETEFRAMEBUFFERSEXTPROC) wglGetProcAddress("glDeleteFramebuffersEXT"); Don't forget to check the function pointers for NULL. If by chance wglGetProcAddress() couldn't find the extension function, it would've initialized the pointer with NULL. Example: if (NULL == glGenFramebuffersEXT || NULL == glBindFramebufferEXT || NULL == glFramebufferTexture2DEXT || NULL == glCheckFramebufferStatusEXT || NULL == glDeleteFramebuffersEXT) { // Extension functions not loaded! exit(1); } That's it, we're done! You can now use these function pointers just as if the function calls existed. 
Example: glGenFramebuffersEXT(1, &fbo); glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo); glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT, GL_TEXTURE_2D, colorTex[0], 0); Reference: Moving Beyond OpenGL 1.1 for Windows by Dave Astle — The article is a bit dated, but has all the information you need to understand why this pathetic situation exists on Windows and how to get around it.
[ "opengl", "glew" ]
16
21
17,375
4
0
2008-08-18T10:11:45.600000
2008-08-18T10:12:00.577000
14,422
14,507
Why is the PyObjC documentation so bad?
For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it.. The official PyObjC page is equally bad, http://pyobjc.sourceforge.net/ It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials ( http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit.. Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find.. All I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them ("Autorelease pool for memory management" is hardly an explanation). That said… basically all I want to do is write Cocoa applications without having to learn ObjC. I'm afraid that for the time being, you will need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC and Cocoa. For now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this has been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.
Why is the PyObjC documentation so bad? For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it.. The official PyObjC page is equally bad, http://pyobjc.sourceforge.net/ It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials ( http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit.. Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find.. All I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..?
TITLE: Why is the PyObjC documentation so bad? QUESTION: For example, http://developer.apple.com/cocoa/pyobjc.html is still for OS X 10.4 Tiger, not 10.5 Leopard.. And that's the official Apple documentation for it.. The official PyObjC page is equally bad, http://pyobjc.sourceforge.net/ It's so bad it's baffling.. I'm considering learning Ruby primarily because the RubyCocoa stuff is so much better documented, and there's lots of decent tutorials ( http://www.rubycocoa.com/ for example), and because of the Shoes GUI toolkit.. Even this badly-auto-translated Japanese tutorial is more useful than the rest of the documentation I could find.. All I want to do is create fairly simple Python applications with Cocoa GUI's.. Can anyone shed light on the horrible documentation, or point me at some tutorials that don't just give you huge blocks of code and assume you know what NSThread.detachNewThreadSelector_toTarget_withObject_("queryController", self, None) does..? ANSWER: I agree that that tutorial is flawed, throwing random, unexplained code right in front of your eyes. It introduces concepts such as the autorelease pool and user defaults without explaining why you would want them ("Autorelease pool for memory management" is hardly an explanation). That said… basically all I want to do is write Cocoa applications without having to learn ObjC. I'm afraid that for the time being, you will need a basic grasp of ObjC in order to benefit from any language that uses Cocoa. PyObjC, RubyCocoa, Nu and others are niches at best, and all of them were developed by people intimately familiar with the ins and outs of ObjC and Cocoa. For now, you will benefit the most if you realistically see those bridges as useful where scripting languages truly shine, rather than trying to build a whole application with them. While this has been done (with LimeChat, I'm using a RubyCocoa-written app right now), it is rare and likely will be for a while.
[ "python", "macos", "cocoa", "pyobjc" ]
17
21
4,377
9
0
2008-08-18T10:23:22.113000
2008-08-18T12:18:52.600000
14,432
14,462
64bit .NET Performance tuning
I know that .NET is JIT compiled to the architecture you are running on just before the app runs, but does the JIT compiler optimize for 64bit architecture at all? Is there anything that needs to be done or considered when programming an app that will run on a 64bit system? (i.e. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems?)
The 64bit JIT is different from the one for 32bit, so I would expect some differences in the output - but I wouldn't switch to 64bit just for that, and I wouldn't expect to gain much speed (if any) in CPU time by switching to 64bit. You will notice a big performance improvement if your app uses a lot of memory and the PC has enough RAM to keep up with it. I've found that 32bit .NET apps tend to start throwing out of memory exceptions when you get to around 1.6gb in use, but they start to thrash the disk due to paging long before that - so you end up being I/O bound. Basically, if your bottleneck is CPU then 64bit is unlikely to help. If your bottleneck is memory then you should see a big improvement. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems Int64 already works on both 32bit and 64bit systems, but it'll be faster running on 64bit. So if you're mostly number crunching with Int64, running on a 64bit system should help. The most important thing is to measure your performance.
64bit .NET Performance tuning I know that .NET is JIT compiled to the architecture you are running on just before the app runs, but does the JIT compiler optimize for 64bit architecture at all? Is there anything that needs to be done or considered when programming an app that will run on a 64bit system? (i.e. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems?)
TITLE: 64bit .NET Performance tuning QUESTION: I know that .NET is JIT compiled to the architecture you are running on just before the app runs, but does the JIT compiler optimize for 64bit architecture at all? Is there anything that needs to be done or considered when programming an app that will run on a 64bit system? (i.e. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems?) ANSWER: The 64bit JIT is different from the one for 32bit, so I would expect some differences in the output - but I wouldn't switch to 64bit just for that, and I wouldn't expect to gain much speed (if any) in CPU time by switching to 64bit. You will notice a big performance improvement if your app uses a lot of memory and the PC has enough RAM to keep up with it. I've found that 32bit .NET apps tend to start throwing out of memory exceptions when you get to around 1.6gb in use, but they start to thrash the disk due to paging long before that - so you end up being I/O bound. Basically, if your bottleneck is CPU then 64bit is unlikely to help. If your bottleneck is memory then you should see a big improvement. Will using Int64 improve performance and will the JIT compiler automatically make Int64 work on 32bit systems Int64 already works on both 32bit and 64bit systems, but it'll be faster running on 64bit. So if you're mostly number crunching with Int64, running on a 64bit system should help. The most important thing is to measure your performance.
[ ".net", "performance", "optimization", "64-bit", "jit" ]
10
14
3,895
5
0
2008-08-18T10:35:14.913000
2008-08-18T11:11:31.767000
14,451
14,472
What is the best way to make a Delphi Application completely full screen?
What is the best way to make a Delphi application (Delphi 2007 for Win32) go completely full screen, removing the application border and covering the Windows Taskbar? I am looking for something similar to what Internet Explorer (IE) does when you hit F11. I wish this to be a run time option for the user not a design time decision by my good self. As mentioned in the accepted answer BorderStyle:= bsNone; was part of the way to do it. Strangely I kept getting an E2010 Incompatible types: 'TFormBorderStyle' and 'TBackGroundSymbol' error when using that line (another type had bsNone defined). To overcome this I had to use: BorderStyle:= Forms.bsNone;
Well, this has always worked for me. Seems a bit simpler... procedure TForm52.Button1Click(Sender: TObject); begin BorderStyle:= bsNone; WindowState:= wsMaximized; end;
What is the best way to make a Delphi Application completely full screen? What is the best way to make a Delphi application (Delphi 2007 for Win32) go completely full screen, removing the application border and covering the Windows Taskbar? I am looking for something similar to what Internet Explorer (IE) does when you hit F11. I wish this to be a run time option for the user not a design time decision by my good self. As mentioned in the accepted answer BorderStyle:= bsNone; was part of the way to do it. Strangely I kept getting an E2010 Incompatible types: 'TFormBorderStyle' and 'TBackGroundSymbol' error when using that line (another type had bsNone defined). To overcome this I had to use: BorderStyle:= Forms.bsNone;
TITLE: What is the best way to make a Delphi Application completely full screen? QUESTION: What is the best way to make a Delphi application (Delphi 2007 for Win32) go completely full screen, removing the application border and covering the Windows Taskbar? I am looking for something similar to what Internet Explorer (IE) does when you hit F11. I wish this to be a run time option for the user not a design time decision by my good self. As mentioned in the accepted answer BorderStyle:= bsNone; was part of the way to do it. Strangely I kept getting an E2010 Incompatible types: 'TFormBorderStyle' and 'TBackGroundSymbol' error when using that line (another type had bsNone defined). To overcome this I had to use: BorderStyle:= Forms.bsNone; ANSWER: Well, this has always worked for me. Seems a bit simpler... procedure TForm52.Button1Click(Sender: TObject); begin BorderStyle:= bsNone; WindowState:= wsMaximized; end;
[ "delphi", "fullscreen", "vcl", "delphi-2007" ]
21
31
35,284
10
0
2008-08-18T10:56:42.623000
2008-08-18T11:30:00.310000
14,453
19,672
Does anyone have .Net Excel IO component benchmarks?
I'm needing to access Excel workbooks from .Net. I know all about the different ways of doing it (I've written them up in a blog post), and I know that using a native .Net component is going to be the fastest. But the question is, which of the components wins? Has anybody benchmarked them? I've been using Syncfusion XlsIO, but that's very slow for some key operations (like deleting rows in a workbook containing thousands of Named ranges).
I haven't done any proper benchmarks, but I tried out several other components, and found that SpreadsheetGear was considerably faster than XlsIO which I was using before. I've written up some of my findings in this post
Does anyone have .Net Excel IO component benchmarks? I'm needing to access Excel workbooks from .Net. I know all about the different ways of doing it (I've written them up in a blog post), and I know that using a native .Net component is going to be the fastest. But the question is, which of the components wins? Has anybody benchmarked them? I've been using Syncfusion XlsIO, but that's very slow for some key operations (like deleting rows in a workbook containing thousands of Named ranges).
TITLE: Does anyone have .Net Excel IO component benchmarks? QUESTION: I'm needing to access Excel workbooks from .Net. I know all about the different ways of doing it (I've written them up in a blog post), and I know that using a native .Net component is going to be the fastest. But the question is, which of the components wins? Has anybody benchmarked them? I've been using Syncfusion XlsIO, but that's very slow for some key operations (like deleting rows in a workbook containing thousands of Named ranges). ANSWER: I haven't done any proper benchmarks, but I tried out several other components, and found that SpreadsheetGear was considerably faster than XlsIO which I was using before. I've written up some of my findings in this post
[ "c#", "excel", "components" ]
5
5
631
3
0
2008-08-18T10:58:45.007000
2008-08-21T11:57:50.200000
14,464
14,591
Bit fields in C#
I have a structure which I need to populate and write to disk (several actually). An example is: byte-6 bit0 - original_or_copy bit1 - copyright bit2 - data_alignment_indicator bit3 - PES_priority bit4-bit5 - PES_scrambling control. bit6-bit7 - reserved In C I might do something like the following: struct PESHeader { unsigned reserved:2; unsigned scrambling_control:2; unsigned priority:1; unsigned data_alignment_indicator:1; unsigned copyright:1; unsigned original_or_copy:1; }; Is there any way to do this in C# that would enable me to access the bits using the struct dereferencing dot operator? For a couple of structures, I can just do bit shifting wrapped in an accessor function. I have loads of structures to handle in this way, so I'm looking for something that's easier to read and quicker to write.
I'd probably knock together something using attributes, then a conversion class to convert suitably attributed structures to the bitfield primitives. Something like... using System; namespace BitfieldTest { [global::System.AttributeUsage(AttributeTargets.Field, AllowMultiple = false)] sealed class BitfieldLengthAttribute: Attribute { uint length; public BitfieldLengthAttribute(uint length) { this.length = length; } public uint Length { get { return length; } } } static class PrimitiveConversion { public static long ToLong<T>(T t) where T: struct { long r = 0; int offset = 0; // For every field suitably attributed with a BitfieldLength foreach (System.Reflection.FieldInfo f in t.GetType().GetFields()) { object[] attrs = f.GetCustomAttributes(typeof(BitfieldLengthAttribute), false); if (attrs.Length == 1) { uint fieldLength = ((BitfieldLengthAttribute)attrs[0]).Length; // Calculate a bitmask of the desired length long mask = 0; for (int i = 0; i < fieldLength; i++) mask |= 1 << i; r |= ((UInt32)f.GetValue(t) & mask) << offset; offset += (int)fieldLength; } } return r; } } struct PESHeader { [BitfieldLength(2)] public uint reserved; [BitfieldLength(2)] public uint scrambling_control; [BitfieldLength(1)] public uint priority; [BitfieldLength(1)] public uint data_alignment_indicator; [BitfieldLength(1)] public uint copyright; [BitfieldLength(1)] public uint original_or_copy; }; public class MainClass { public static void Main(string[] args) { PESHeader p = new PESHeader(); p.reserved = 3; p.scrambling_control = 2; p.data_alignment_indicator = 1; long l = PrimitiveConversion.ToLong(p); for (int i = 63; i >= 0; i--) { Console.Write( ((l & (1l << i)) > 0)? "1": "0"); } Console.WriteLine(); return; } } } Which produces the expected...000101011. Of course, it needs more error checking and a slightly saner typing, but the concept is (I think) sound, reusable, and lets you knock out easily maintained structures by the dozen. adamw
Bit fields in C# I have a structure which I need to populate and write to disk (several actually). An example is: byte-6 bit0 - original_or_copy bit1 - copyright bit2 - data_alignment_indicator bit3 - PES_priority bit4-bit5 - PES_scrambling control. bit6-bit7 - reserved In C I might do something like the following: struct PESHeader { unsigned reserved:2; unsigned scrambling_control:2; unsigned priority:1; unsigned data_alignment_indicator:1; unsigned copyright:1; unsigned original_or_copy:1; }; Is there any way to do this in C# that would enable me to access the bits using the struct dereferencing dot operator? For a couple of structures, I can just do bit shifting wrapped in an accessor function. I have loads of structures to handle in this way, so I'm looking for something that's easier to read and quicker to write.
TITLE: Bit fields in C# QUESTION: I have a structure which I need to populate and write to disk (several actually). An example is: byte-6 bit0 - original_or_copy bit1 - copyright bit2 - data_alignment_indicator bit3 - PES_priority bit4-bit5 - PES_scrambling control. bit6-bit7 - reserved In C I might do something like the following: struct PESHeader { unsigned reserved:2; unsigned scrambling_control:2; unsigned priority:1; unsigned data_alignment_indicator:1; unsigned copyright:1; unsigned original_or_copy:1; }; Is there any way to do this in C# that would enable me to access the bits using the struct dereferencing dot operator? For a couple of structures, I can just do bit shifting wrapped in an accessor function. I have loads of structures to handle in this way, so I'm looking for something that's easier to read and quicker to write. ANSWER: I'd probably knock together something using attributes, then a conversion class to convert suitably attributed structures to the bitfield primitives. Something like... 
using System; namespace BitfieldTest { [global::System.AttributeUsage(AttributeTargets.Field, AllowMultiple = false)] sealed class BitfieldLengthAttribute: Attribute { uint length; public BitfieldLengthAttribute(uint length) { this.length = length; } public uint Length { get { return length; } } } static class PrimitiveConversion { public static long ToLong<T>(T t) where T: struct { long r = 0; int offset = 0; // For every field suitably attributed with a BitfieldLength foreach (System.Reflection.FieldInfo f in t.GetType().GetFields()) { object[] attrs = f.GetCustomAttributes(typeof(BitfieldLengthAttribute), false); if (attrs.Length == 1) { uint fieldLength = ((BitfieldLengthAttribute)attrs[0]).Length; // Calculate a bitmask of the desired length long mask = 0; for (int i = 0; i < fieldLength; i++) mask |= 1 << i; r |= ((UInt32)f.GetValue(t) & mask) << offset; offset += (int)fieldLength; } } return r; } } struct PESHeader { [BitfieldLength(2)] public uint reserved; [BitfieldLength(2)] public uint scrambling_control; [BitfieldLength(1)] public uint priority; [BitfieldLength(1)] public uint data_alignment_indicator; [BitfieldLength(1)] public uint copyright; [BitfieldLength(1)] public uint original_or_copy; }; public class MainClass { public static void Main(string[] args) { PESHeader p = new PESHeader(); p.reserved = 3; p.scrambling_control = 2; p.data_alignment_indicator = 1; long l = PrimitiveConversion.ToLong(p); for (int i = 63; i >= 0; i--) { Console.Write( ((l & (1l << i)) > 0)? "1": "0"); } Console.WriteLine(); return; } } } Which produces the expected...000101011. Of course, it needs more error checking and a slightly saner typing, but the concept is (I think) sound, reusable, and lets you knock out easily maintained structures by the dozen. adamw
[ "c#", "bit-fields" ]
86
59
122,737
12
0
2008-08-18T11:19:28.367000
2008-08-18T13:31:30.533000
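The packing scheme in the C# answer above (walk the fields in declaration order, mask each value to its declared width, OR it in at a running bit offset) can be checked in isolation. Below is an illustrative Python sketch of the same logic, not part of the original answer; the field names and widths are taken from the question's C struct:

```python
# Illustrative Python equivalent of the answer's bit-packing loop:
# fields are packed least-significant-first in declaration order,
# each value masked to its declared width.
PES_FIELDS = [
    ("reserved", 2),
    ("scrambling_control", 2),
    ("priority", 1),
    ("data_alignment_indicator", 1),
    ("copyright", 1),
    ("original_or_copy", 1),
]

def pack_bitfields(values, fields=PES_FIELDS):
    result, offset = 0, 0
    for name, width in fields:
        mask = (1 << width) - 1              # e.g. width 2 -> 0b11
        result |= (values.get(name, 0) & mask) << offset
        offset += width
    return result

packed = pack_bitfields(
    {"reserved": 3, "scrambling_control": 2, "data_alignment_indicator": 1}
)
```

With the same inputs as the answer's Main method, `packed` is `0b101011`, matching the tail of the 64-bit string the C# program prints.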
14,495
14,773
How can I stop MATLAB from returning until after a command-line script completes?
I see in the MATLAB help ( matlab -h ) that I can use the -r flag to specify an m-file to run. I notice when I do this, MATLAB seems to start the script, but immediately returns. The script processes fine, but the main app has already returned. Is there any way to get MATLAB to only return once the command is finished? If you're calling it from a separate program it seems like it's easier to wait on the process than to use a file or sockets to confirm completion. To illustrate, here's a sample function waitHello.m: function waitHello disp('Waiting...'); pause(3); %pauses 3 seconds disp('Hello World'); quit; And I try to run this using: matlab -nosplash -nodesktop -r waitHello
Quick answer: matlab -wait -nosplash -nodesktop -r waitHello In Matlab 7.1 (the version I have) there is an undocumented command line option -wait in matlab.bat. If it doesn't work for your version, you could probably add it in. Here's what I found. The command at the bottom that finally launches matlab is (line 153): start "MATLAB" %START_WAIT% "%MATLAB_BIN_DIR%\%MATLAB_ARCH%\matlab" %MATLAB_ARGS% The relevant syntax of the start command (see "help start" in cmd.exe) in this case is: start ["window title"] [/wait] myprogram.exe args... A bit higher, among all of the documented command line options, I found (line 60): ) else if (%opt%) == (-wait) ( set START_WAIT=/wait ) else ( So specifying -wait should do what you want, as long as you're also exiting matlab from your script (otherwise it will wait for you to terminate it interactively).
How can I stop MATLAB from returning until after a command-line script completes? I see in the MATLAB help ( matlab -h ) that I can use the -r flag to specify an m-file to run. I notice when I do this, MATLAB seems to start the script, but immediately returns. The script processes fine, but the main app has already returned. Is there any way to get MATLAB to only return once the command is finished? If you're calling it from a separate program it seems like it's easier to wait on the process than to use a file or sockets to confirm completion. To illustrate, here's a sample function waitHello.m: function waitHello disp('Waiting...'); pause(3); %pauses 3 seconds disp('Hello World'); quit; And I try to run this using: matlab -nosplash -nodesktop -r waitHello
TITLE: How can I stop MATLAB from returning until after a command-line script completes? QUESTION: I see in the MATLAB help ( matlab -h ) that I can use the -r flag to specify an m-file to run. I notice when I do this, MATLAB seems to start the script, but immediately returns. The script processes fine, but the main app has already returned. Is there any way to get MATLAB to only return once the command is finished? If you're calling it from a separate program it seems like it's easier to wait on the process than to use a file or sockets to confirm completion. To illustrate, here's a sample function waitHello.m: function waitHello disp('Waiting...'); pause(3); %pauses 3 seconds disp('Hello World'); quit; And I try to run this using: matlab -nosplash -nodesktop -r waitHello ANSWER: Quick answer: matlab -wait -nosplash -nodesktop -r waitHello In Matlab 7.1 (the version I have) there is an undocumented command line option -wait in matlab.bat. If it doesn't work for your version, you could probably add it in. Here's what I found. The command at the bottom that finally launches matlab is (line 153): start "MATLAB" %START_WAIT% "%MATLAB_BIN_DIR%\%MATLAB_ARCH%\matlab" %MATLAB_ARGS% The relevant syntax of the start command (see "help start" in cmd.exe) in this case is: start ["window title"] [/wait] myprogram.exe args... A bit higher, among all of the documented command line options, I found (line 60): ) else if (%opt%) == (-wait) ( set START_WAIT=/wait ) else ( So specifying -wait should do what you want, as long as you're also exiting matlab from your script (otherwise it will wait for you to terminate it interactively).
[ "command-line", "matlab" ]
23
25
13,300
1
0
2008-08-18T12:10:38.450000
2008-08-18T15:10:43.033000
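The caller-side behaviour the question is after — block until the launched process exits — is the default in most process-spawning APIs once the launcher stops detaching. As a generic illustration (Python here, with a stand-in child process rather than MATLAB), `subprocess.run` blocks until the child finishes, which is exactly what `-wait` restores when matlab.bat is invoked from another program:

```python
import subprocess
import sys
import time

# Generic sketch (not MATLAB-specific): subprocess.run blocks until the
# child process exits. The child below is a stand-in that sleeps half a
# second, so the measured elapsed time proves the call really waited.
start = time.monotonic()
subprocess.run(
    [sys.executable, "-c", "import time; time.sleep(0.5)"],
    check=True,  # raise if the child exits with a non-zero status
)
elapsed = time.monotonic() - start
```

Swapping the stand-in command for `["matlab", "-wait", "-nosplash", "-nodesktop", "-r", "waitHello"]` gives the blocking launch the question asks for.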
14,497
49,653
Has anyone used NUnitLite with any success?
I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow. I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it?
What we've done that really improves our efficiency and quality is to multi-target our mobile application. That is to say, with a little bit of creativity and a few conditional compile tags and custom project configurations it is possible to build a version of your mobile application that also runs on the desktop. If you put all the business logic you need tested in a separate project/assembly then this layer can be very effectively tested using any of the desktop tools you are already familiar with.
Has anyone used NUnitLite with any success? I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow. I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it?
TITLE: Has anyone used NUnitLite with any success? QUESTION: I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow. I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it? ANSWER: What we've done that really improves our efficiency and quality is to multi-target our mobile application. That is to say, with a little bit of creativity and a few conditional compile tags and custom project configurations it is possible to build a version of your mobile application that also runs on the desktop. If you put all the business logic you need tested in a separate project/assembly then this layer can be very effectively tested using any of the desktop tools you are already familiar with.
[ "unit-testing", "windows-mobile", "compact-framework", "nunit" ]
2
3
957
2
0
2008-08-18T12:10:52.897000
2008-09-08T13:02:35.573000
14,504
14,513
Proportional font IDE
I would really like to see a proportional font IDE, even if I have to build it myself (perhaps as an extension to Visual Studio). What I basically mean is MS Word style editing of code that sort of looks like the typographical style in The C++ Programming Language book. I want to set tab stops for my indents and lining up function signatures and rows of assignment statements, which could be specified in points instead of fixed character positions. I would also like bold and italics. Various font sizes and even style sheets would be cool. Has anyone seen anything like this out there or know the best way to start building one?
I'd still like to see a popular editor or IDE implement elastic tabstops.
Proportional font IDE I would really like to see a proportional font IDE, even if I have to build it myself (perhaps as an extension to Visual Studio). What I basically mean is MS Word style editing of code that sort of looks like the typographical style in The C++ Programming Language book. I want to set tab stops for my indents and lining up function signatures and rows of assignment statements, which could be specified in points instead of fixed character positions. I would also like bold and italics. Various font sizes and even style sheets would be cool. Has anyone seen anything like this out there or know the best way to start building one?
TITLE: Proportional font IDE QUESTION: I would really like to see a proportional font IDE, even if I have to build it myself (perhaps as an extension to Visual Studio). What I basically mean is MS Word style editing of code that sort of looks like the typographical style in The C++ Programming Language book. I want to set tab stops for my indents and lining up function signatures and rows of assignment statements, which could be specified in points instead of fixed character positions. I would also like bold and italics. Various font sizes and even style sheets would be cool. Has anyone seen anything like this out there or know the best way to start building one? ANSWER: I'd still like to see a popular editor or IDE implement elastic tabstops.
[ "ide", "fonts", "tabs", "text-editor" ]
12
17
2,498
13
0
2008-08-18T12:17:12.337000
2008-08-18T12:24:55.987000
14,505
14,525
Alpha blending colors in .NET Compact Framework 2.0
In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this: Color blended = Color.FromArgb(alpha, color); or Color blended = Color.FromArgb(alpha, red, green, blue); However in the Compact Framework (2.0 specifically), neither of those methods is available, you only get: Color.FromArgb(int red, int green, int blue); and Color.FromArgb(int val); The first one, obviously, doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32bit ARGB value (as 0xAARRGGBB as opposed to the standard 24bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following: private Color FromARGB(byte alpha, byte red, byte green, byte blue) { int val = (alpha << 24) | (red << 16) | (green << 8) | blue; return Color.FromArgb(val); } But no matter what I do, the alpha blending never works, the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework?
Apparently, it's not quite that simple, but still possible, if you have Windows Mobile 5.0 or newer.
Alpha blending colors in .NET Compact Framework 2.0 In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this: Color blended = Color.FromArgb(alpha, color); or Color blended = Color.FromArgb(alpha, red, green, blue); However in the Compact Framework (2.0 specifically), neither of those methods is available, you only get: Color.FromArgb(int red, int green, int blue); and Color.FromArgb(int val); The first one, obviously, doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32bit ARGB value (as 0xAARRGGBB as opposed to the standard 24bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following: private Color FromARGB(byte alpha, byte red, byte green, byte blue) { int val = (alpha << 24) | (red << 16) | (green << 8) | blue; return Color.FromArgb(val); } But no matter what I do, the alpha blending never works, the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework?
TITLE: Alpha blending colors in .NET Compact Framework 2.0 QUESTION: In the full .NET Framework you can use the Color.FromArgb() method to create a new color with alpha blending, like this: Color blended = Color.FromArgb(alpha, color); or Color blended = Color.FromArgb(alpha, red, green, blue); However in the Compact Framework (2.0 specifically), neither of those methods is available, you only get: Color.FromArgb(int red, int green, int blue); and Color.FromArgb(int val); The first one, obviously, doesn't even let you enter an alpha value, but the documentation for the latter shows that "val" is a 32bit ARGB value (as 0xAARRGGBB as opposed to the standard 24bit 0xRRGGBB), so it would make sense that you could just build the ARGB value and pass it to the function. I tried this with the following: private Color FromARGB(byte alpha, byte red, byte green, byte blue) { int val = (alpha << 24) | (red << 16) | (green << 8) | blue; return Color.FromArgb(val); } But no matter what I do, the alpha blending never works, the resulting color always has full opacity, even when setting the alpha value to 0. Has anyone gotten this to work on the Compact Framework? ANSWER: Apparently, it's not quite that simple, but still possible, if you have Windows Mobile 5.0 or newer.
[ "c#", ".net", "graphics", "compact-framework" ]
2
1
4,722
5
0
2008-08-18T12:17:16.113000
2008-08-18T12:33:02.273000
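The 0xAARRGGBB packing the question builds by hand is easy to verify in isolation; the bit arithmetic itself is fine, which supports the answer's point that the limitation lies in the platform, not the packing. An illustrative Python sketch of the same shifts (not CF-specific code):

```python
def from_argb(alpha, red, green, blue):
    # Pack four byte values into one 32-bit 0xAARRGGBB integer,
    # mirroring the question's C# helper. Masking to 0xFF keeps each
    # component in byte range.
    return (
        ((alpha & 0xFF) << 24)
        | ((red & 0xFF) << 16)
        | ((green & 0xFF) << 8)
        | (blue & 0xFF)
    )

val = from_argb(0x80, 0x12, 0x34, 0x56)  # 0x80123456
```

Note that in C# the same expression with alpha >= 0x80 yields a negative int (the sign bit is set), which is expected for the `Color.FromArgb(int val)` overload.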
14,527
16,414
Reverse Find in a string
I need to be able to find the last occurrence of a character within an element. For example: http://www.blah.com/path/to/file/media.jpg If I try to locate it through using substring-before(mediaurl, '.') and substring-after(mediaurl, '.') then it will, of course, match on the first dot. How would I get the file extension? Essentially, I need to get the file name and the extension from a path like this, but I am quite stumped as to how to do it using XSLT.
The following is an example of a template that would produce the required output in XSLT 1.0:
Reverse Find in a string I need to be able to find the last occurrence of a character within an element. For example: http://www.blah.com/path/to/file/media.jpg If I try to locate it through using substring-before(mediaurl, '.') and substring-after(mediaurl, '.') then it will, of course, match on the first dot. How would I get the file extension? Essentially, I need to get the file name and the extension from a path like this, but I am quite stumped as to how to do it using XSLT.
TITLE: Reverse Find in a string QUESTION: I need to be able to find the last occurrence of a character within an element. For example: http://www.blah.com/path/to/file/media.jpg If I try to locate it through using substring-before(mediaurl, '.') and substring-after(mediaurl, '.') then it will, of course, match on the first dot. How would I get the file extension? Essentially, I need to get the file name and the extension from a path like this, but I am quite stumped as to how to do it using XSLT. ANSWER: The following is an example of a template that would produce the required output in XSLT 1.0:
[ "xml", "xslt" ]
7
14
11,422
4
0
2008-08-18T12:35:42.050000
2008-08-19T15:34:44.903000
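XSLT 1.0 has no "last occurrence" function, so templates like the one the answer above describes work by recursion: repeatedly take substring-after until the delimiter no longer appears. The recursion can be sketched in Python (illustrative names, not the original template):

```python
def substring_after_last(s, ch):
    """Mirror of the recursive XSLT 1.0 idiom: keep applying
    substring-after until the delimiter no longer appears."""
    if ch not in s:
        return s
    # s.split(ch, 1)[1] plays the role of substring-after(s, ch)
    return substring_after_last(s.split(ch, 1)[1], ch)
```

Applying it with "/" and then "." to the example URL yields the file name and the extension respectively.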
14,530
28,992
LINQ-to-SQL vs stored procedures?
I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow ( Beginners Guide to LINQ ), but had a follow-up question: We're about to ramp up a new project where nearly all of our database op's will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense. So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pro's or con's? Thanks.
Some advantages of LINQ over sprocs: Type safety: I think we all understand this. Abstraction: This is especially true with LINQ-to-Entities. This abstraction also allows the framework to add additional improvements that you can easily take advantage of. PLINQ is an example of adding multi-threading support to LINQ. Code changes are minimal to add this support. It would be MUCH harder to do this with data access code that simply calls sprocs. Debugging support: I can use any .NET debugger to debug the queries. With sprocs, you cannot easily debug the SQL, and that experience is largely tied to your database vendor (MS SQL Server provides a query analyzer, but often that isn't enough). Vendor agnostic: LINQ works with lots of databases, and the number of supported databases will only increase. Sprocs are not always portable between databases, either because of varying syntax or feature support (if the database supports sprocs at all). Deployment: Others have mentioned this already, but it's easier to deploy a single assembly than to deploy a set of sprocs. This also ties in with #4. Easier: You don't have to learn T-SQL to do data access, nor do you have to learn the data access API (e.g. ADO.NET) necessary for calling the sprocs. This is related to #3 and #4. Some disadvantages of LINQ vs sprocs: Network traffic: sprocs need only serialize sproc-name and argument data over the wire while LINQ sends the entire query. This can get really bad if the queries are very complex. However, LINQ's abstraction allows Microsoft to improve this over time. Less flexible: Sprocs can take full advantage of a database's featureset. LINQ tends to be more generic in its support. This is common in any kind of language abstraction (e.g. C# vs assembler). Recompiling: If you need to make changes to the way you do data access, you need to recompile, version, and redeploy your assembly. Sprocs can sometimes allow a DBA to tune the data access routine without a need to redeploy anything.
Security and manageability are something that people argue about too. Security: For example, you can protect your sensitive data by restricting access to the tables directly, and put ACLs on the sprocs. With LINQ, however, you can still restrict direct access to tables and instead put ACLs on updatable table views to achieve a similar end (assuming your database supports updatable views). Manageability: Using views also gives you the advantage of shielding your application from breaking schema changes (like table normalization). You can update the view without requiring your data access code to change. I used to be a big sproc guy, but I'm starting to lean towards LINQ as a better alternative in general. If there are some areas where sprocs are clearly better, then I'll probably still write a sproc but access it using LINQ. :)
LINQ-to-SQL vs stored procedures? I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow ( Beginners Guide to LINQ ), but had a follow-up question: We're about to ramp up a new project where nearly all of our database op's will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense. So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pro's or con's? Thanks.
TITLE: LINQ-to-SQL vs stored procedures? QUESTION: I took a look at the "Beginner's Guide to LINQ" post here on StackOverflow ( Beginners Guide to LINQ ), but had a follow-up question: We're about to ramp up a new project where nearly all of our database op's will be fairly simple data retrievals (there's another segment of the project which already writes the data). Most of our other projects up to this point make use of stored procedures for such things. However, I'd like to leverage LINQ-to-SQL if it makes more sense. So, the question is this: For simple data retrievals, which approach is better, LINQ-to-SQL or stored procs? Any specific pro's or con's? Thanks. ANSWER: Some advantages of LINQ over sprocs: Type safety: I think we all understand this. Abstraction: This is especially true with LINQ-to-Entities. This abstraction also allows the framework to add additional improvements that you can easily take advantage of. PLINQ is an example of adding multi-threading support to LINQ. Code changes are minimal to add this support. It would be MUCH harder to do this with data access code that simply calls sprocs. Debugging support: I can use any .NET debugger to debug the queries. With sprocs, you cannot easily debug the SQL, and that experience is largely tied to your database vendor (MS SQL Server provides a query analyzer, but often that isn't enough). Vendor agnostic: LINQ works with lots of databases, and the number of supported databases will only increase. Sprocs are not always portable between databases, either because of varying syntax or feature support (if the database supports sprocs at all). Deployment: Others have mentioned this already, but it's easier to deploy a single assembly than to deploy a set of sprocs. This also ties in with #4. Easier: You don't have to learn T-SQL to do data access, nor do you have to learn the data access API (e.g. ADO.NET) necessary for calling the sprocs. This is related to #3 and #4.
Some disadvantages of LINQ vs sprocs: Network traffic: sprocs need only serialize sproc-name and argument data over the wire while LINQ sends the entire query. This can get really bad if the queries are very complex. However, LINQ's abstraction allows Microsoft to improve this over time. Less flexible: Sprocs can take full advantage of a database's featureset. LINQ tends to be more generic in its support. This is common in any kind of language abstraction (e.g. C# vs assembler). Recompiling: If you need to make changes to the way you do data access, you need to recompile, version, and redeploy your assembly. Sprocs can sometimes allow a DBA to tune the data access routine without a need to redeploy anything. Security and manageability are something that people argue about too. Security: For example, you can protect your sensitive data by restricting access to the tables directly, and put ACLs on the sprocs. With LINQ, however, you can still restrict direct access to tables and instead put ACLs on updatable table views to achieve a similar end (assuming your database supports updatable views). Manageability: Using views also gives you the advantage of shielding your application from breaking schema changes (like table normalization). You can update the view without requiring your data access code to change. I used to be a big sproc guy, but I'm starting to lean towards LINQ as a better alternative in general. If there are some areas where sprocs are clearly better, then I'll probably still write a sproc but access it using LINQ. :)
[ "linq", "linq-to-sql", "stored-procedures" ]
190
194
114,110
22
0
2008-08-18T12:37:57.553000
2008-08-26T20:04:51.347000
14,552
14,703
SSRS scheduled reports not working
My scheduled reports in SQL server won't run. I checked the logs and found the job that was failing. The error message in the log was: 'EXECUTE AS LOGIN' failed for the requested login 'NT AUTHORITY\NETWORK SERVICE'. The step failed. I'm using SQL authentication for the report, so it shouldn't be an issue with the permissions to the data. No scheduled reports on the server will run.
I found the answer here: http://www.themssforum.com/SVCS/Unable-execute/ Apparently there was something wrong with the login for 'NT AUTHORITY\NETWORK SERVICE' and it wouldn't run the jobs it owned properly. Anyone understand why this might have happened?
SSRS scheduled reports not working My scheduled reports in SQL server won't run. I checked the logs and found the job that was failing. The error message in the log was: 'EXECUTE AS LOGIN' failed for the requested login 'NT AUTHORITY\NETWORK SERVICE'. The step failed. I'm using SQL authentication for the report, so it shouldn't be an issue with the permissions to the data. No scheduled reports on the server will run.
TITLE: SSRS scheduled reports not working QUESTION: My scheduled reports in SQL server won't run. I checked the logs and found the job that was failing. The error message in the log was: 'EXECUTE AS LOGIN' failed for the requested login 'NT AUTHORITY\NETWORK SERVICE'. The step failed. I'm using SQL authentication for the report, so it shouldn't be an issue with the permissions to the data. No scheduled reports on the server will run. ANSWER: I found the answer here: http://www.themssforum.com/SVCS/Unable-execute/ Apparently there was something wrong with the login for 'NT AUTHORITY\NETWORK SERVICE' and it wouldn't run the jobs it owned properly. Anyone understand why this might have happened?
[ "sql-server", "reporting-services", "ssrs-2008" ]
0
2
6,191
6
0
2008-08-18T13:02:27.037000
2008-08-18T14:34:31.373000
14,570
14,595
How to organize dataset queries to improve performance
I don't know when to add a TableAdapter or a query from the toolbox to a dataset. Does it make any difference? I also don't know where to create instances of the adapters. Should I do it in the Page_Load? Should I just do it when I'm going to use it? Am I opening a new connection when I create a new instance? This doesn't seem very important, but every time I create a query a little voice in my brain asks me these questions.
Should I just do it when I'm going to use it? I would recommend that you only retrieve the data when you are going to use it. If you are not going to need it, there is no reason to waste resources by retrieving it in Page_Load. If you are going to need it multiple times throughout the page load, consider saving the query results to a private variable or collection so that the same data can be reused multiple times throughout the page load. Am I opening a new connection when I create a new instance? Asp.net handles connection pooling, and opens and closes connections in an efficient way. You shouldn't have to worry about this. One other thing to consider from a performance perspective is to avoid using Datasets and TableAdapters. In many cases, they add extra overhead into data retrieval that does not exist when using Linq to Sql, Stored Procedures or DataReaders.
How to organize dataset queries to improve performance I don't know when to add a TableAdapter or a query from the toolbox to a dataset. Does it make any difference? I also don't know where to create instances of the adapters. Should I do it in the Page_Load? Should I just do it when I'm going to use it? Am I opening a new connection when I create a new instance? This doesn't seem very important, but every time I create a query a little voice in my brain asks me these questions.
TITLE: How to organize dataset queries to improve performance QUESTION: I don't know when to add a TableAdapter or a query from the toolbox to a dataset. Does it make any difference? I also don't know where to create instances of the adapters. Should I do it in the Page_Load? Should I just do it when I'm going to use it? Am I opening a new connection when I create a new instance? This doesn't seem very important, but every time I create a query a little voice in my brain asks me these questions. ANSWER: Should I just do it when I'm going to use it? I would recommend that you only retrieve the data when you are going to use it. If you are not going to need it, there is no reason to waste resources by retrieving it in Page_Load. If you are going to need it multiple times throughout the page load, consider saving the query results to a private variable or collection so that the same data can be reused multiple times throughout the page load. Am I opening a new connection when I create a new instance? Asp.net handles connection pooling, and opens and closes connections in an efficient way. You shouldn't have to worry about this. One other thing to consider from a performance perspective is to avoid using Datasets and TableAdapters. In many cases, they add extra overhead into data retrieval that does not exist when using Linq to Sql, Stored Procedures or DataReaders.
[ "asp.net", "database", "performance" ]
0
2
485
1
0
2008-08-18T13:21:45.670000
2008-08-18T13:32:51.937000
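The "retrieve once, reuse for the rest of the page load" advice above is plain lazy caching. A minimal, language-neutral sketch (Python here; the class and fetch function are hypothetical stand-ins for a page and its query):

```python
class ProductGrid:
    """Hypothetical page object: run the query once, on first use."""

    def __init__(self, fetch):
        self._fetch = fetch   # e.g. a function that executes the query
        self._rows = None

    @property
    def rows(self):
        if self._rows is None:        # only hit the database when needed
            self._rows = self._fetch()
        return self._rows             # later accesses reuse the result
```

Every part of the page that needs the data reads `grid.rows`; the query runs at most once per page load.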
14,577
14,699
How do I avoid using cursors in Sybase (T-SQL)?
Imagine the scene: you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned, except that one of the columns isn't terribly human readable: it's an alphanumeric code. What we need to do is figure out the possible distinct values of this code, call another stored procedure to cross-reference these discrete values and then update the result set with the newly deciphered values: declare c_lookup_codes for select distinct lookup_code from #workinprogress while(1=1) begin fetch c_lookup_codes into @lookup_code if @@sqlstatus<>0 begin break end exec proc_code_xref @lookup_code, @xref_code OUTPUT update #workinprogress set xref = @xref_code where lookup_code = @lookup_code end Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing? _NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows and that there are 100 distinct values of lookup_code and finally, that it is not possible to have a table with the xref values in as the logic in proc_code_xref is too arcane._
You have to have an XRef table if you want to take out the cursor. Assuming you know the 100 distinct lookup values (and that they're static), it's simple to generate one by calling proc_code_xref 100 times and inserting the results into a table.
How do I avoid using cursors in Sybase (T-SQL)? Imagine the scene: you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned, except that one of the columns isn't terribly human readable: it's an alphanumeric code. What we need to do is figure out the possible distinct values of this code, call another stored procedure to cross-reference these discrete values and then update the result set with the newly deciphered values: declare c_lookup_codes for select distinct lookup_code from #workinprogress while(1=1) begin fetch c_lookup_codes into @lookup_code if @@sqlstatus<>0 begin break end exec proc_code_xref @lookup_code, @xref_code OUTPUT update #workinprogress set xref = @xref_code where lookup_code = @lookup_code end Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing? _NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows and that there are 100 distinct values of lookup_code and finally, that it is not possible to have a table with the xref values in as the logic in proc_code_xref is too arcane._
TITLE: How do I avoid using cursors in Sybase (T-SQL)? QUESTION: Imagine the scene: you're updating some legacy Sybase code and come across a cursor. The stored procedure builds up a result set in a #temporary table which is all ready to be returned, except that one of the columns isn't terribly human readable: it's an alphanumeric code. What we need to do is figure out the possible distinct values of this code, call another stored procedure to cross-reference these discrete values and then update the result set with the newly deciphered values: declare c_lookup_codes for select distinct lookup_code from #workinprogress while(1=1) begin fetch c_lookup_codes into @lookup_code if @@sqlstatus<>0 begin break end exec proc_code_xref @lookup_code, @xref_code OUTPUT update #workinprogress set xref = @xref_code where lookup_code = @lookup_code end Now then, whilst this may give some folks palpitations, it does work. My question is, how best would one avoid this kind of thing? _NB: for the purposes of this example you can also imagine that the result set is in the region of 500k rows and that there are 100 distinct values of lookup_code and finally, that it is not possible to have a table with the xref values in as the logic in proc_code_xref is too arcane._ ANSWER: You have to have an XRef table if you want to take out the cursor. Assuming you know the 100 distinct lookup values (and that they're static), it's simple to generate one by calling proc_code_xref 100 times and inserting the results into a table.
[ "t-sql", "sybase", "cursors" ]
2
1
5,856
3
0
2008-08-18T13:24:55.710000
2008-08-18T14:33:03.343000
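The set-based rewrite the answer above hints at — call the expensive cross-reference once per distinct code, then join the mapping back in a single pass — can be sketched like this (Python stand-ins for the temp table and proc_code_xref; the names are illustrative, not the original T-SQL):

```python
def decode_rows(rows, xref):
    """rows: the #workinprogress result set as dicts; xref: a stand-in
    for proc_code_xref. One xref call per distinct code (100 calls),
    then one pass over all 500k rows instead of a cursor loop."""
    mapping = {code: xref(code)
               for code in {r["lookup_code"] for r in rows}}
    return [dict(r, xref=mapping[r["lookup_code"]]) for r in rows]
```

In SQL the same shape is: populate a small XRef table from the distinct codes, then a single UPDATE joining #workinprogress to it.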
14,582
14,630
Can I make Subversion + TortoiseSVN case-insensitive for Windows?
I've been using Subversion for code control with TortoiseSVN to interface with the server for the past few months, and in general it's been going great! However, occasionally my FoxPro IDE will change the case of a file extension without warning (where "program.prg" becomes "program.PRG"). TortoiseSVN apparently takes this to mean the first file was removed, becoming flagged as "missing", and the second name comes up as "non-versioned", wreaking havoc on my ability to track changes to the file. I understand that Subversion has its origins in the case-sensitive world of *nix, but is there any way to control this behavior in either Subversion or TortoiseSVN to be file name case-insensitive when used with Windows?
Unfortunately, Subversion is case-sensitive. This is due to the fact that files from Subversion can be checked out on both case-sensitive file systems (e.g., *nix) and case-insensitive file systems (e.g., Windows, Mac). This pre-commit hook script may help you avoid problems when you check in files. If it doesn't solve your problem, my best suggestion is to write a little script to make sure that all extensions are lowercase and run it every time before you check in/check out. It'll be a PITA, but maybe your best bet.
Can I make Subversion + TortoiseSVN case-insensitive for Windows? I've been using Subversion for code control with TortoiseSVN to interface with the server for the past few months, and in general it's been going great! However, occasionally my FoxPro IDE will change the case of a file extension without warning (where "program.prg" becomes "program.PRG"). TortoiseSVN apparently takes this to mean the first file was removed, becoming flagged as "missing", and the second name comes up as "non-versioned", wreaking havoc on my ability to track changes to the file. I understand that Subversion has its origins in the case-sensitive world of *nix, but is there any way to control this behavior in either Subversion or TortoiseSVN to be file name case-insensitive when used with Windows?
TITLE: Can I make Subversion + TortoiseSVN case-insensitive for Windows? QUESTION: I've been using Subversion for code control with TortoiseSVN to interface with the server for the past few months, and in general it's been going great! However, occasionally my FoxPro IDE will change the case of a file extension without warning (where "program.prg" becomes "program.PRG"). TortoiseSVN apparently takes this to mean the first file was removed, becoming flagged as "missing", and the second name comes up as "non-versioned", wreaking havoc on my ability to track changes to the file. I understand that Subversion has its origins in the case-sensitive world of *nix, but is there any way to control this behavior in either Subversion or TortoiseSVN to be file name case-insensitive when used with Windows? ANSWER: Unfortunately, Subversion is case-sensitive. This is due to the fact that files from Subversion can be checked out on both case-sensitive file systems (e.g., *nix) and case-insensitive file systems (e.g., Windows, Mac). This pre-commit hook script may help you avoid problems when you check in files. If it doesn't solve your problem, my best suggestion is to write a little script to make sure that all extensions are lowercase and run it every time before you check in/check out. It'll be a PITA, but maybe your best bet.
[ "windows", "svn", "tortoisesvn" ]
25
20
13,794
8
0
2008-08-18T13:27:00.653000
2008-08-18T13:49:43.507000
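The "little script to make sure that all extensions are lowercase" suggested in the answer above could start from something like this sketch (a pure helper with illustrative names; feed the pairs it returns to os.rename, or to `svn move` so history is preserved, before committing):

```python
import os

def extension_renames(filenames):
    """Return (old, new) pairs for files whose extension is not
    already lowercase, e.g. program.PRG -> program.prg."""
    renames = []
    for name in filenames:
        base, ext = os.path.splitext(name)
        if ext != ext.lower():
            renames.append((name, base + ext.lower()))
    return renames
```

Keeping the rename logic separate from the filesystem walk makes it easy to dry-run against `os.walk` output first.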
14,588
14,621
Do you use version control other than for source code?
I've found SVN to be extremely useful for documentation, personal files, among other non-source code uses. What other practical uses have you found to version control systems in general?
I've seen version control being used for other non-source-code purposes, like: Schema files - a set of XML schema files that represent a real-world schema; Content files - content represented in a specific format, tied to a designer in VStudio - using source control allows history, rollbacks, everything, with no database interaction. In both cases we notice that these are basically verbose files. The primary reasons to have these files in source control, as opposed to "text records in a database", are: the ability to compare version history (because multiple users work on them); the ability to roll back to an earlier version; labeling and releases, by getting a specific label; if you use Team Foundation (TFS), the whole scrum templates with work items etc.; and no database involved, no extra development, for all the above.
Do you use version control other than for source code? I've found SVN to be extremely useful for documentation, personal files, among other non-source code uses. What other practical uses have you found to version control systems in general?
TITLE: Do you use version control other than for source code? QUESTION: I've found SVN to be extremely useful for documentation, personal files, among other non-source code uses. What other practical uses have you found to version control systems in general? ANSWER: I've seen version control being used for other non-source-code purposes, like: Schema files - a set of XML schema files that represent a real-world schema; Content files - content represented in a specific format, tied to a designer in VStudio - using source control allows history, rollbacks, everything, with no database interaction. In both cases we notice that these are basically verbose files. The primary reasons to have these files in source control, as opposed to "text records in a database", are: the ability to compare version history (because multiple users work on them); the ability to roll back to an earlier version; labeling and releases, by getting a specific label; if you use Team Foundation (TFS), the whole scrum templates with work items etc.; and no database involved, no extra development, for all the above.
[ "svn", "version-control", "cvs" ]
14
6
1,054
14
0
2008-08-18T13:30:21.283000
2008-08-18T13:46:29.600000
14,611
14,676
Editing User Profile w/ Forms Authentication
We're using Forms Authentication in SharePoint. When the account is created, the administrator can add some information, like name and address. But the required fields are username and email address. When a user goes to their profile page, all the fields are blank and they are unable to edit them. I have read a number of articles discussing how to import profiles from another data store, or to sync profiles. This doesn't work for us, because we don't have another data store where these profiles are stored. Will I just have to recreate the edit profile page and build a custom profile editor? Is this information exposed via SharePoint API? I don't think directly editing the database is a good solution.
If you log in to the "Shared Services administration" through the "Central Admin Tool" there is an option "Profile services policies". You can define in here what fields are user-overridable.
Editing User Profile w/ Forms Authentication We're using Forms Authentication in SharePoint. When the account is created, the administrator can add some information, like name and address. But the required fields are username and email address. When a user goes to their profile page, all the fields are blank and they are unable to edit them. I have read a number of articles discussing how to import profiles from another data store, or to sync profiles. This doesn't work for us, because we don't have another data store where these profiles are stored. Will I just have to recreate the edit profile page and build a custom profile editor? Is this information exposed via SharePoint API? I don't think directly editing the database is a good solution.
TITLE: Editing User Profile w/ Forms Authentication QUESTION: We're using Forms Authentication in SharePoint. When the account is created, the administrator can add some information, like name and address. But the required fields are username and email address. When a user goes to their profile page, all the fields are blank and they are unable to edit them. I have read a number of articles discussing how to import profiles from another data store, or to sync profiles. This doesn't work for us, because we don't have another data store where these profiles are stored. Will I just have to recreate the edit profile page and build a custom profile editor? Is this information exposed via SharePoint API? I don't think directly editing the database is a good solution. ANSWER: If you log in to the "Shared Services administration" through the "Central Admin Tool" there is an option "Profile services policies". You can define in here what fields are user-overridable.
[ "sharepoint", "moss", "wss" ]
6
4
938
1
0
2008-08-18T13:40:33.387000
2008-08-18T14:21:43.460000
14,614
14,683
Static Methods in an Interface/Abstract Class
First off, I understand the reasons why an interface or abstract class (in the .NET/C# terminology) cannot have abstract static methods. My question is then more focused on the best design solution. What I want is a set of "helper" classes that all have their own static methods such that if I get objects A, B, and C from a third party vendor, I can have helper classes with methods such as AHelper.RetrieveByID(string id); AHelper.RetrieveByName(string name); AHelper.DumpToDatabase(); Since my AHelper, BHelper, and CHelper classes will all basically have the same methods, it seems to make sense to move these methods to an interface that these classes then derive from. However, wanting these methods to be static precludes me from having a generic interface or abstract class for all of them to derive from. I could always make these methods non-static and then instantiate the objects first such as AHelper a = new AHelper(); a.DumpToDatabase(); However, this code doesn't seem as intuitive to me. What are your suggestions? Should I abandon using an interface or abstract class altogether (the situation I'm in now) or can this possibly be refactored to accomplish the design I'm looking for?
Looking at your response I am thinking along the following lines: You could just have a static method that takes a type parameter and performs the expected logic based on the type. You could create a virtual method in your abstract base, where you specify the SQL in the concrete class. That way the base contains all the common code that is required by both (e.g. executing the command and returning the object) while encapsulating the "specialist" bits (e.g. the SQL) in the sub classes. I prefer the second option, although it's of course down to you. If you need me to go into further detail, please let me know and I will be happy to edit/update. :)
Static Methods in an Interface/Abstract Class First off, I understand the reasons why an interface or abstract class (in the .NET/C# terminology) cannot have abstract static methods. My question is then more focused on the best design solution. What I want is a set of "helper" classes that all have their own static methods such that if I get objects A, B, and C from a third party vendor, I can have helper classes with methods such as AHelper.RetrieveByID(string id); AHelper.RetrieveByName(string name); AHelper.DumpToDatabase(); Since my AHelper, BHelper, and CHelper classes will all basically have the same methods, it seems to make sense to move these methods to an interface that these classes then derive from. However, wanting these methods to be static precludes me from having a generic interface or abstract class for all of them to derive from. I could always make these methods non-static and then instantiate the objects first such as AHelper a = new AHelper(); a.DumpToDatabase(); However, this code doesn't seem as intuitive to me. What are your suggestions? Should I abandon using an interface or abstract class altogether (the situation I'm in now) or can this possibly be refactored to accomplish the design I'm looking for?
TITLE: Static Methods in an Interface/Abstract Class QUESTION: First off, I understand the reasons why an interface or abstract class (in the .NET/C# terminology) cannot have abstract static methods. My question is then more focused on the best design solution. What I want is a set of "helper" classes that all have their own static methods such that if I get objects A, B, and C from a third party vendor, I can have helper classes with methods such as AHelper.RetrieveByID(string id); AHelper.RetrieveByName(string name); AHelper.DumpToDatabase(); Since my AHelper, BHelper, and CHelper classes will all basically have the same methods, it seems to make sense to move these methods to an interface that these classes then derive from. However, wanting these methods to be static precludes me from having a generic interface or abstract class for all of them to derive from. I could always make these methods non-static and then instantiate the objects first such as AHelper a = new AHelper(); a.DumpToDatabase(); However, this code doesn't seem as intuitive to me. What are your suggestions? Should I abandon using an interface or abstract class altogether (the situation I'm in now) or can this possibly be refactored to accomplish the design I'm looking for? ANSWER: Looking at your response I am thinking along the following lines: You could just have a static method that takes a type parameter and performs the expected logic based on the type. You could create a virtual method in your abstract base, where you specify the SQL in the concrete class. That way the base contains all the common code that is required by both (e.g. executing the command and returning the object) while encapsulating the "specialist" bits (e.g. the SQL) in the sub classes. I prefer the second option, although it's of course down to you. If you need me to go into further detail, please let me know and I will be happy to edit/update. :)
[ ".net", "abstract-class", "static-methods", "interface-design" ]
10
3
9,108
10
0
2008-08-18T13:42:08.530000
2008-08-18T14:25:33.733000
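The answer's second option above is the template-method pattern: common retrieval code lives in the abstract base, and each subclass supplies only the "specialist" part (the SQL). A hedged Python sketch — the run_query callable is a hypothetical stand-in for real command execution, not part of the original answer:

```python
from abc import ABC, abstractmethod

class Helper(ABC):
    """Shared retrieval logic; subclasses supply only the SQL text."""

    @abstractmethod
    def select_by_id_sql(self):
        ...

    def retrieve_by_id(self, run_query, id):
        # run_query stands in for executing the command and
        # materializing the returned object
        return run_query(self.select_by_id_sql(), id)

class AHelper(Helper):
    def select_by_id_sql(self):
        return "SELECT * FROM A WHERE id = ?"
```

Instantiating `AHelper()` is the small price paid for sharing the execution code; the per-type part is reduced to one short overridden method.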
14,617
15,937
How to retrieve a file from a server via SFTP?
I'm trying to retrieve a file from a server using SFTP (as opposed to FTPS) using Java. How can I do this?
Another option is to consider looking at the JSch library. JSch seems to be the preferred library for a few large open source projects, including Eclipse, Ant and Apache Commons HttpClient, amongst others. It supports both user/pass and certificate-based logins nicely, as well as a whole host of other yummy SSH2 features. Here's a simple remote file retrieve over SFTP. Error handling is left as an exercise for the reader :-) JSch jsch = new JSch(); String knownHostsFilename = "/home/username/.ssh/known_hosts"; jsch.setKnownHosts( knownHostsFilename ); Session session = jsch.getSession( "remote-username", "remote-host" ); { // "interactive" version // can selectively update specified known_hosts file // need to implement UserInfo interface // MyUserInfo is a swing implementation provided in // examples/Sftp.java in the JSch dist UserInfo ui = new MyUserInfo(); session.setUserInfo(ui); // OR non-interactive version. Relies on host key being in known_hosts file session.setPassword( "remote-password" ); } session.connect(); Channel channel = session.openChannel( "sftp" ); channel.connect(); ChannelSftp sftpChannel = (ChannelSftp) channel; sftpChannel.get("remote-file", "local-file" ); // OR InputStream in = sftpChannel.get( "remote-file" ); // process inputstream as needed sftpChannel.exit(); session.disconnect();
How to retrieve a file from a server via SFTP? I'm trying to retrieve a file from a server using SFTP (as opposed to FTPS) using Java. How can I do this?
TITLE: How to retrieve a file from a server via SFTP? QUESTION: I'm trying to retrieve a file from a server using SFTP (as opposed to FTPS) using Java. How can I do this? ANSWER: Another option is to consider looking at the JSch library. JSch seems to be the preferred library for a few large open source projects, including Eclipse, Ant and Apache Commons HttpClient, amongst others. It supports both user/pass and certificate-based logins nicely, as well as a whole host of other yummy SSH2 features. Here's a simple remote file retrieve over SFTP. Error handling is left as an exercise for the reader :-) JSch jsch = new JSch(); String knownHostsFilename = "/home/username/.ssh/known_hosts"; jsch.setKnownHosts( knownHostsFilename ); Session session = jsch.getSession( "remote-username", "remote-host" ); { // "interactive" version // can selectively update specified known_hosts file // need to implement UserInfo interface // MyUserInfo is a swing implementation provided in // examples/Sftp.java in the JSch dist UserInfo ui = new MyUserInfo(); session.setUserInfo(ui); // OR non-interactive version. Relies on host key being in known-hosts file session.setPassword( "remote-password" ); } session.connect(); Channel channel = session.openChannel( "sftp" ); channel.connect(); ChannelSftp sftpChannel = (ChannelSftp) channel; sftpChannel.get("remote-file", "local-file" ); // OR InputStream in = sftpChannel.get( "remote-file" ); // process inputstream as needed sftpChannel.exit(); session.disconnect();
[ "java", "ftp", "sftp", "security" ]
242
210
440,934
16
0
2008-08-18T13:43:48.730000
2008-08-19T09:44:41.723000
14,618
14,640
Multiple choice on WinForms
What's the best way of implementing a multiple choice option in Windows Forms? I want to enforce a single selection from a list, starting with a default value. It seems like a ComboBox would be a good choice, but is there a way to specify a non-blank default value? I could just set it in the code at some appropriate initialisation point, but I feel like I'm missing something.
If you only want one answer from the group, then a RadioButton control would be your best fit or you could use the ComboBox if you will have a lot of options. To set a default value, just add the item to the ComboBox's collection and set the SelectedIndex or SelectedItem to that item. Depending on how many options you are looking at, you can use a ListBox with the SelectionMode property set to MultiSimple, if it will be multiple choice or you could use the CheckBox control.
Multiple choice on WinForms What's the best way of implementing a multiple choice option in Windows Forms? I want to enforce a single selection from a list, starting with a default value. It seems like a ComboBox would be a good choice, but is there a way to specify a non-blank default value? I could just set it in the code at some appropriate initialisation point, but I feel like I'm missing something.
TITLE: Multiple choice on WinForms QUESTION: What's the best way of implementing a multiple choice option in Windows Forms? I want to enforce a single selection from a list, starting with a default value. It seems like a ComboBox would be a good choice, but is there a way to specify a non-blank default value? I could just set it in the code at some appropriate initialisation point, but I feel like I'm missing something. ANSWER: If you only want one answer from the group, then a RadioButton control would be your best fit or you could use the ComboBox if you will have a lot of options. To set a default value, just add the item to the ComboBox's collection and set the SelectedIndex or SelectedItem to that item. Depending on how many options you are looking at, you can use a ListBox with the SelectionMode property set to MultiSimple, if it will be multiple choice or you could use the CheckBox control.
[ "winforms", "combobox" ]
6
8
4,908
5
0
2008-08-18T13:44:23.300000
2008-08-18T13:56:40.797000
14,634
14,648
Is it possible to automatically make check-outs from any VCS?
Let's take a web development environment, where developers checkout a project onto their local machines, work on it, and check in changes to development. These changes are further tested on development and moved live on a regular schedule (eg weekly, monthly, etc.). Is it possible to have an auto-moveup of the latest tagged version (and not the latest checkin, as that might not be 100% stable), for example 8AM on Monday mornings, either using a script or a built-in feature of the VCS?
Certainly, but the exact product may be dependent upon the VCS you are using. What you might want to do is have a few different branches, and migrate up as you progress. E.g., Development -> Stable-Dev -> Beta -> Production. You can then simply auto-update to the latest version of Stable-Dev and Beta for your testers, and always be able to deploy a new Production version at the drop of a hat.
Is it possible to automatically make check-outs from any VCS? Let's take a web development environment, where developers checkout a project onto their local machines, work on it, and check in changes to development. These changes are further tested on development and moved live on a regular schedule (eg weekly, monthly, etc.). Is it possible to have an auto-moveup of the latest tagged version (and not the latest checkin, as that might not be 100% stable), for example 8AM on Monday mornings, either using a script or a built-in feature of the VCS?
TITLE: Is it possible to automatically make check-outs from any VCS? QUESTION: Let's take a web development environment, where developers checkout a project onto their local machines, work on it, and check in changes to development. These changes are further tested on development and moved live on a regular schedule (eg weekly, monthly, etc.). Is it possible to have an auto-moveup of the latest tagged version (and not the latest checkin, as that might not be 100% stable), for example 8AM on Monday mornings, either using a script or a built-in feature of the VCS? ANSWER: Certainly, but the exact product may be dependent upon the VCS you are using. What you might want to do is have a few different branches, and migrate up as you progress. E.g., Development -> Stable-Dev -> Beta -> Production. You can then simply auto-update to the latest version of Stable-Dev and Beta for your testers, and always be able to deploy a new Production version at the drop of a hat.
[ "svn", "version-control", "cvs", "versions" ]
2
1
170
6
0
2008-08-18T13:53:54.093000
2008-08-18T14:00:29.603000
14,646
24,826
How to add "Project Description" in FogBugz?
When I create a new project (or even when I edit the Sample Project) there is no way to add Description to the project. Or am I blind to the obvious?
There's no such thing as a project description, really. There's a column in the Projects page which is used so you can see which project is the default, built-in inbox, and we couldn't think of anything better to put as the column header for that column.
How to add "Project Description" in FogBugz? When I create a new project (or even when I edit the Sample Project) there is no way to add Description to the project. Or am I blind to the obvious?
TITLE: How to add "Project Description" in FogBugz? QUESTION: When I create a new project (or even when I edit the Sample Project) there is no way to add Description to the project. Or am I blind to the obvious? ANSWER: There's no such thing as a project description, really. There's a column in the Projects page which is used so you can see which project is the default, built-in inbox, and we couldn't think of anything better to put as the column header for that column.
[ "fogbugz" ]
6
20
1,385
3
0
2008-08-18T14:00:11.303000
2008-08-24T03:22:54.313000
14,656
14,681
Can a proxy server cache SSL GETs? If not, would response body encryption suffice?
Can a (||any) proxy server cache content that is requested by a client over https? As the proxy server can't see the querystring, or the http headers, I reckon they can't. I'm considering a desktop application, run by a number of people behind their companies proxy. This application may access services across the internet and I'd like to take advantage of the in-built internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL delivered content, would simply encrypting the content of a response be a viable option? I am considering all GET requests that we wish to be cachable be requested over http with the body encrypted using asymmetric encryption, where each client has the decryption key. Anytime we wish to perform a GET that is not cachable, or a POST operation, it will be performed over SSL.
No, it's not possible to cache https directly. The whole communication between the client and the server is encrypted. A proxy sits between the server and the client; in order to cache it, you need to be able to read it, i.e. decrypt the encryption. You can do something to cache it. You basically do the SSL on your proxy, intercepting the SSL sent to the client. Basically the data is encrypted between the client and your proxy, it's decrypted, read and cached, and the data is encrypted and sent on to the server. The reply from the server is likewise decrypted, read and encrypted. I'm not sure how you do this on major proxy software (like squid), but it is possible. The only problem with this approach is that the proxy will have to use a self signed cert to encrypt it to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
Can a proxy server cache SSL GETs? If not, would response body encryption suffice? Can a (||any) proxy server cache content that is requested by a client over https? As the proxy server can't see the querystring, or the http headers, I reckon they can't. I'm considering a desktop application, run by a number of people behind their companies proxy. This application may access services across the internet and I'd like to take advantage of the in-built internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL delivered content, would simply encrypting the content of a response be a viable option? I am considering all GET requests that we wish to be cachable be requested over http with the body encrypted using asymmetric encryption, where each client has the decryption key. Anytime we wish to perform a GET that is not cachable, or a POST operation, it will be performed over SSL.
TITLE: Can a proxy server cache SSL GETs? If not, would response body encryption suffice? QUESTION: Can a (||any) proxy server cache content that is requested by a client over https? As the proxy server can't see the querystring, or the http headers, I reckon they can't. I'm considering a desktop application, run by a number of people behind their companies proxy. This application may access services across the internet and I'd like to take advantage of the in-built internet caching infrastructure for 'reads'. If the caching proxy servers can't cache SSL delivered content, would simply encrypting the content of a response be a viable option? I am considering all GET requests that we wish to be cachable be requested over http with the body encrypted using asymmetric encryption, where each client has the decryption key. Anytime we wish to perform a GET that is not cachable, or a POST operation, it will be performed over SSL. ANSWER: No, it's not possible to cache https directly. The whole communication between the client and the server is encrypted. A proxy sits between the server and the client; in order to cache it, you need to be able to read it, i.e. decrypt the encryption. You can do something to cache it. You basically do the SSL on your proxy, intercepting the SSL sent to the client. Basically the data is encrypted between the client and your proxy, it's decrypted, read and cached, and the data is encrypted and sent on to the server. The reply from the server is likewise decrypted, read and encrypted. I'm not sure how you do this on major proxy software (like squid), but it is possible. The only problem with this approach is that the proxy will have to use a self signed cert to encrypt it to the client. The client will be able to tell that a proxy in the middle has read the data, since the certificate will not be from the original site.
[ "security", "encryption", "caching", "ssl", "proxy" ]
32
24
24,939
4
0
2008-08-18T14:06:13.973000
2008-08-18T14:24:39.477000
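The question's fallback idea (serve cacheable GETs over plain HTTP with an asymmetrically encrypted body, so shared proxies can cache the ciphertext without being able to read it) can be sketched roughly as below. This is a hypothetical illustration, not the asker's implementation; the class name and payload are invented:

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

// Sketch of encrypting a cacheable HTTP response body with RSA.
// In the proposed scheme every client holds the decryption (private)
// key, distributed with the desktop application out of band.
class EncryptedBodySketch {
    public static void main(String[] args) throws Exception {
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        KeyPair pair = gen.generateKeyPair();

        byte[] body = "cacheable response body".getBytes(StandardCharsets.UTF_8);

        // Server side: encrypt the body before sending it over plain HTTP,
        // so intermediate proxies cache opaque bytes.
        Cipher enc = Cipher.getInstance("RSA");
        enc.init(Cipher.ENCRYPT_MODE, pair.getPublic());
        byte[] onTheWire = enc.doFinal(body);

        // Client side: decrypt after a (possibly proxy-cached) fetch.
        Cipher dec = Cipher.getInstance("RSA");
        dec.init(Cipher.DECRYPT_MODE, pair.getPrivate());
        byte[] plain = dec.doFinal(onTheWire);

        System.out.println(new String(plain, StandardCharsets.UTF_8));
    }
}
```

Note the caveats: raw RSA can only seal a couple of hundred bytes per operation (real systems would RSA-wrap a symmetric key and encrypt the body with that), and this gives confidentiality only; unlike SSL it provides no server authentication or tamper protection for the transport.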
14,697
17,637
What Url rewriter do you use for ASP.Net?
I've looked at several URL rewriters for ASP.Net and IIS and was wondering what everyone else uses, and why. Here are the ones that I have used or looked at: ThunderMain URLRewriter: used in a previous project, didn't quite have the flexibility/performance we were looking for Ewal UrlMapper: used in a current project, but source seems to be abandoned UrlRewritingNet.UrlRewrite: seems like a decent library but documentation's poor grammar leaves me feeling uneasy UrlRewriter.NET: this is my current fav, has great flexibility, although the extra functions pumped into the replacement regexes change the standard .Net regex syntax a bit Managed Fusion URL Rewriter: I found this one in a previous question on stack overflow, but haven't tried it out yet, from the example syntax, it doesn't seem to be editable via web.config
+1 UrlRewritingNET.URLRewrite -- used in several hundred services/portals/sites on a single box without issue for years! (@Jason -- that is the one you're talking about, right?) and I've also used the URLRewriter.NET on a personal site, and found it, ah, interesting. @travis, you're right about the changed syntax, but once you get used to it, it's good.
What Url rewriter do you use for ASP.Net? I've looked at several URL rewriters for ASP.Net and IIS and was wondering what everyone else uses, and why. Here are the ones that I have used or looked at: ThunderMain URLRewriter: used in a previous project, didn't quite have the flexibility/performance we were looking for Ewal UrlMapper: used in a current project, but source seems to be abandoned UrlRewritingNet.UrlRewrite: seems like a decent library but documentation's poor grammar leaves me feeling uneasy UrlRewriter.NET: this is my current fav, has great flexibility, although the extra functions pumped into the replacement regexes change the standard .Net regex syntax a bit Managed Fusion URL Rewriter: I found this one in a previous question on stack overflow, but haven't tried it out yet, from the example syntax, it doesn't seem to be editable via web.config
TITLE: What Url rewriter do you use for ASP.Net? QUESTION: I've looked at several URL rewriters for ASP.Net and IIS and was wondering what everyone else uses, and why. Here are the ones that I have used or looked at: ThunderMain URLRewriter: used in a previous project, didn't quite have the flexibility/performance we were looking for Ewal UrlMapper: used in a current project, but source seems to be abandoned UrlRewritingNet.UrlRewrite: seems like a decent library but documentation's poor grammar leaves me feeling uneasy UrlRewriter.NET: this is my current fav, has great flexibility, although the extra functions pumped into the replacement regexes change the standard .Net regex syntax a bit Managed Fusion URL Rewriter: I found this one in a previous question on stack overflow, but haven't tried it out yet, from the example syntax, it doesn't seem to be editable via web.config ANSWER: +1 UrlRewritingNET.URLRewrite -- used in several hundred services/portals/sites on a single box without issue for years! (@Jason -- that is the one you're talking about, right?) and I've also used the URLRewriter.NET on a personal site, and found it, ah, interesting. @travis, you're right about the changed syntax, but once you get used to it, it's good.
[ "asp.net", "iis", "url-rewriting", "urlrewriter" ]
14
3
1,629
11
0
2008-08-18T14:32:12.237000
2008-08-20T09:09:50.953000
14,698
23,585
How to make Pro*C cope with #warning directives?
When I try to precompile a *.pc file that contains a #warning directive I receive the following error: PCC-S-02014, Encountered the symbol "warning" when expecting one of the following: (bla bla bla). Can I somehow convince Pro*C to ignore the thing if it doesn't know what to do with it? I can't remove the #warning directive as it's used in a header file that I can't change and must include.
According to the Pro*C/C++ Programmer's Guide (chapter 5 "Advanced Topics"), Pro*C silently ignores a number of preprocessor directives including #error and #pragma, but sadly not #warning. Since your warning directives are included in a header file, you might be able to use the ORA_PROC macro: #ifndef ORA_PROC #include #endif For some reason, Pro*C errors out if you try to hide a straight #warning that way, however.
How to make Pro*C cope with #warning directives? When I try to precompile a *.pc file that contains a #warning directive I receive the following error: PCC-S-02014, Encountered the symbol "warning" when expecting one of the following: (bla bla bla). Can I somehow convince Pro*C to ignore the thing if it doesn't know what to do with it? I can't remove the #warning directive as it's used in a header file that I can't change and must include.
TITLE: How to make Pro*C cope with #warning directives? QUESTION: When I try to precompile a *.pc file that contains a #warning directive I receive the following error: PCC-S-02014, Encountered the symbol "warning" when expecting one of the following: (bla bla bla). Can I somehow convince Pro*C to ignore the thing if it doesn't know what to do with it? I can't remove the #warning directive as it's used in a header file that I can't change and must include. ANSWER: According to the Pro*C/C++ Programmer's Guide (chapter 5 "Advanced Topics"), Pro*C silently ignores a number of preprocessor directives including #error and #pragma, but sadly not #warning. Since your warning directives are included in a header file, you might be able to use the ORA_PROC macro: #ifndef ORA_PROC #include #endif For some reason, Pro*C errors out if you try to hide a straight #warning that way, however.
[ "c", "oracle", "oracle-pro-c" ]
3
5
6,978
7
0
2008-08-18T14:32:41.830000
2008-08-22T21:44:00.983000
14,717
14,766
Identifying SQL Server Performance Problems
We're having sporadic, random query timeouts on our SQL Server 2005 cluster. I own a few apps that use it, so I'm helping out in the investigation. When watching the % CPU time in regular ol' Perfmon, you can certainly see it pegging out. However, SQL activity monitor only gives cumulative CPU and IO time used by a process, not what it's using right then, or over a specific timeframe. Perhaps I could use the profiler and run a trace, but this cluster is very heavily used and I'm afraid I'd be looking for a needle in a haystack. Am I barking up the wrong tree? Does anyone have some good methods for tracking down expensive queries/processes in this environment?
This will give you the top 50 statements by average CPU time, check here for other scripts: http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true SELECT TOP 50 qs.total_worker_time/qs.execution_count as [Avg CPU Time], SUBSTRING(qt.text,qs.statement_start_offset/2, (case when qs.statement_end_offset = -1 then len(convert(nvarchar(max), qt.text)) * 2 else qs.statement_end_offset end -qs.statement_start_offset)/2) as query_text, qt.dbid, dbname=db_name(qt.dbid), qt.objectid FROM sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt ORDER BY [Avg CPU Time] DESC
Identifying SQL Server Performance Problems We're having sporadic, random query timeouts on our SQL Server 2005 cluster. I own a few apps that use it, so I'm helping out in the investigation. When watching the % CPU time in regular ol' Perfmon, you can certainly see it pegging out. However, SQL activity monitor only gives cumulative CPU and IO time used by a process, not what it's using right then, or over a specific timeframe. Perhaps I could use the profiler and run a trace, but this cluster is very heavily used and I'm afraid I'd be looking for a needle in a haystack. Am I barking up the wrong tree? Does anyone have some good methods for tracking down expensive queries/processes in this environment?
TITLE: Identifying SQL Server Performance Problems QUESTION: We're having sporadic, random query timeouts on our SQL Server 2005 cluster. I own a few apps that use it, so I'm helping out in the investigation. When watching the % CPU time in regular ol' Perfmon, you can certainly see it pegging out. However, SQL activity monitor only gives cumulative CPU and IO time used by a process, not what it's using right then, or over a specific timeframe. Perhaps I could use the profiler and run a trace, but this cluster is very heavily used and I'm afraid I'd be looking for a needle in a haystack. Am I barking up the wrong tree? Does anyone have some good methods for tracking down expensive queries/processes in this environment? ANSWER: This will give you the top 50 statements by average CPU time, check here for other scripts: http://www.microsoft.com/technet/scriptcenter/scripts/sql/sql2005/default.mspx?mfr=true SELECT TOP 50 qs.total_worker_time/qs.execution_count as [Avg CPU Time], SUBSTRING(qt.text,qs.statement_start_offset/2, (case when qs.statement_end_offset = -1 then len(convert(nvarchar(max), qt.text)) * 2 else qs.statement_end_offset end -qs.statement_start_offset)/2) as query_text, qt.dbid, dbname=db_name(qt.dbid), qt.objectid FROM sys.dm_exec_query_stats qs cross apply sys.dm_exec_sql_text(qs.sql_handle) as qt ORDER BY [Avg CPU Time] DESC
[ "sql-server", "performance", "sql-server-2005" ]
9
12
3,908
7
0
2008-08-18T14:39:51.333000
2008-08-18T15:09:27.683000
14,760
15,335
Is it possible to disable command input in the toolbar search box?
In the Visual Studio toolbar, you can enter commands into the search box by prefixing them with a > symbol. Is there any way to disable this? I've never used the feature, and it's slightly annoying when trying to actually search for something that you know is prefixed by greater-than in the code. It's particularly annoying when you accidentally search for "> exit" and the IDE quits (I knew there was a line in the code that was something like if(counter > exitCount) so entered that search without thinking). At the very least, can you escape the > symbol so that you can search for it? Prefixing with ^ doesn't seem to work.
This is a really cool feature. I've poked through the feature documentation, and the accompanying command list, and not a heck of a lot is showing up in terms of turning it off. If you want to search for >exit, you could always type >Edit.Find >exit in the search box; that seems to do the trick. A bit verbose, though, but it really is an edge case.
Is it possible to disable command input in the toolbar search box? In the Visual Studio toolbar, you can enter commands into the search box by prefixing them with a > symbol. Is there any way to disable this? I've never used the feature, and it's slightly annoying when trying to actually search for something that you know is prefixed by greater-than in the code. It's particularly annoying when you accidentally search for "> exit" and the IDE quits (I knew there was a line in the code that was something like if(counter > exitCount) so entered that search without thinking). At the very least, can you escape the > symbol so that you can search for it? Prefixing with ^ doesn't seem to work.
TITLE: Is it possible to disable command input in the toolbar search box? QUESTION: In the Visual Studio toolbar, you can enter commands into the search box by prefixing them with a > symbol. Is there any way to disable this? I've never used the feature, and it's slightly annoying when trying to actually search for something that you know is prefixed by greater-than in the code. It's particularly annoying when you accidentally search for "> exit" and the IDE quits (I knew there was a line in the code that was something like if(counter > exitCount) so entered that search without thinking). At the very least, can you escape the > symbol so that you can search for it? Prefixing with ^ doesn't seem to work. ANSWER: This is a really cool feature. I've poked through the feature documentation, and the accompanying command list, and not a heck of a lot is showing up in terms of turning it off. If you want to search for >exit, you could always type >Edit.Find >exit in the search box; that seems to do the trick. A bit verbose, though, but it really is an edge case.
[ "visual-studio" ]
4
2
238
3
0
2008-08-18T15:04:25.603000
2008-08-18T22:42:12.513000
14,770
14,787
How best to use File Version and Assembly Version?
In .NET there are two version numbers available when building a project, File Version and Assembly Version. How are you using these numbers? Keeping them the same? Auto-incrementing one, but manually changing the other? Also what about the AssemblyInformationalVersion attribute? I'd found this Microsoft Knowledge Base (KB) support article that provided some help: How to use Assembly Version and Assembly File Version.
In a scenario where I have multiple file assemblies (i.e. 1 exe and 5 dlls) I will use a different file version for each, but the same assembly version for all of them, allowing you to know which exe each of the dlls goes with.
How best to use File Version and Assembly Version? In .NET there are two version numbers available when building a project, File Version and Assembly Version. How are you using these numbers? Keeping them the same? Auto-incrementing one, but manually changing the other? Also what about the AssemblyInformationalVersion attribute? I'd found this Microsoft Knowledge Base (KB) support article that provided some help: How to use Assembly Version and Assembly File Version.
TITLE: How best to use File Version and Assembly Version? QUESTION: In .NET there are two version numbers available when building a project, File Version and Assembly Version. How are you using these numbers? Keeping them the same? Auto-incrementing one, but manually changing the other? Also what about the AssemblyInformationalVersion attribute? I'd found this Microsoft Knowledge Base (KB) support article that provided some help: How to use Assembly Version and Assembly File Version. ANSWER: In a scenario where I have multiple file assemblies (i.e. 1 exe and 5 dlls) I will use a different file version for each, but the same assembly version for all of them, allowing you to know which exe each of the dlls goes with.
[ ".net", "attributes", "versions" ]
55
15
28,799
7
0
2008-08-18T15:10:12.660000
2008-08-18T15:20:59.377000
14,775
14,856
Interview question on C# and VB.net similarities/differences
I have been a VB.net developer for a few years now but I am currently applying to a few companies that use C#. I have even been told that at least one of the companies doesn't want VB.net developers. I have been looking online trying to find real differences between the two and have asked on crackoverflow. The only major differences are a few syntax differences which are trivial to me because I am also a Java developer. What would be a good response to an interviewer when they tell me they are looking for a C# developer - or similar questions?
I've had to interview people for a few C# positions and this is my general advice for VB.Net developers interviewing for a C# position: Make sure you are clear that you have been working in VB.Net. This seems obvious but is something that apparently isn't (in my experience). Try to give a code sample, if possible. I've seen some horrible VB.Net (and C#) written by VB programmers who didn't seem to learn much in the transition to .Net. Be able to write in C# during the interview, if asked. I know there aren't many real differences between the two, but I don't want to pay you to learn the new syntax. For your specific question: I've asked that type of question before and what I wanted to hear about was how the underlying system and framework were the same. If possible, talk about garbage collection, IDisposable, finalizers, the dangers of unsafe code blocks, stack vs heap, etc. All the kind of stuff to show that you really understand the intricacies of the .Net framework. Right or wrong, the heritage of VB brings with it an expectation of a lack of understanding of lower-level programming and Windows in general (which, ironically enough, a c++ developer would have of a c# developer... and so on). Lastly, how you frame your experience can make a world of difference. If you position yourself as a .Net developer, rather than VB.Net or C#, the stupid, pseudo-religious banter may not enter the conversation. This of course requires that you actually know both VB.Net and C# at the time of the interview, but that's a good policy regardless. The truth of the matter is that if you find that the person interviewing you writes you off simply because you've previously been developing in VB.Net, it's likely not going to be a place you want to work at anyway.
Interview question on C# and VB.net similarities/differences I have been a VB.net developer for a few years now but I am currently applying to a few companies that use C#. I have even been told that at least one of the companies doesn't want VB.net developers. I have been looking online trying to find real differences between the two and have asked on crackoverflow. The only major differences are a few syntax differences which are trivial to me because I am also a Java developer. What would be a good response to an interviewer when they tell me they are looking for a C# developer - or similar questions?
TITLE: Interview question on C# and VB.net similarities/differences QUESTION: I have been a VB.net developer for a few years now but I am currently applying to a few companies that use C#. I have even been told that at least one of the companies doesn't want VB.net developers. I have been looking online trying to find real differences between the two and have asked on crackoverflow. The only major differences are a few syntax differences which are trivial to me because I am also a Java developer. What would be a good response to an interviewer when they tell me they are looking for a C# developer - or similar questions? ANSWER: I've had to interview people for a few C# positions and this is my general advice for VB.Net developers interviewing for a C# position: Make sure you are clear that you have been working in VB.Net. This seems obvious but is something that apparently isn't (in my experience). Try to give a code sample, if possible. I've seen some horrible VB.Net (and C#) written by VB programmers who didn't seem to learn much in the transition to .Net. Be able to write in C# during the interview, if asked. I know there aren't many real differences between the two, but I don't want to pay you to learn the new syntax. For your specific question: I've asked that type of question before and what I wanted to hear about was how the underlying system and framework were the same. If possible, talk about garbage collection, IDisposable, finalizers, the dangers of unsafe code blocks, stack vs heap, etc. All the kind of stuff to show that you really understand the intricacies of the .Net framework. Right or wrong, the heritage of VB brings with it an expectation of a lack of understanding of lower-level programming and Windows in general (which, ironically enough, a c++ developer would have of a c# developer... and so on). Lastly, how you frame your experience can make a world of difference. If you position yourself as a .Net developer, rather than VB.Net or C#, the stupid, pseudo-religious banter may not enter the conversation. This of course requires that you actually know both VB.Net and C# at the time of the interview, but that's a good policy regardless. The truth of the matter is that if you find that the person interviewing you writes you off simply because you've previously been developing in VB.Net, it's likely not going to be a place you want to work at anyway.
[ "c#", "vb.net" ]
7
9
4,945
8
0
2008-08-18T15:11:40.783000
2008-08-18T16:18:29.823000
14,791
14,815
Is "include file" in shtml the best method to keep non-database changing data
We have a website that uses the #include file command to roll info into some web pages. The authors can access the text files to update things like the occasional class or contact information for the department. My question is this: I don't see anyone using this method and wonder if it is a good idea to keep using it. If not, what method should I transition to instead?
I don't think there is anything wrong with it, and I've done some similar things with PHP. If the people that make the changes are comfortable with how they do it and they can do what they need to, I think you should stick with it. There could be some retraining and/or a fair amount of work involved in changing the system.
Is "include file" in shtml the best method to keep non-database changing data We have a website that uses the #include file command to roll info into some web pages. The authors can access the text files to update things like the occasional class or contact information for the department. My question is this: I don't see anyone using this method and wonder if it is a good idea to keep using it. If not, what method should I transition to instead?
TITLE: Is "include file" in shtml the best method to keep non-database changing data QUESTION: We have a website that uses the #include file command to roll info into some web pages. The authors can access the text files to update things like the occasional class or contact information for the department. My question is this: I don't see anyone using this method and wonder if it is a good idea to keep using it. If not, what method should I transition to instead? ANSWER: I don't think there is anything wrong with it, and I've done some similar things with PHP. If the people that make the changes are comfortable with how they do it and they can do what they need to, I think you should stick with it. There could be some retraining and/or a fair amount of work involved in changing the system.
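For anyone unfamiliar with the mechanism under discussion: a server-side include is just a comment-style directive that the server expands before sending the page. The filenames below are invented for illustration; the server must have SSI enabled and be configured to process the page, which is usually what the .shtml extension signals:

```html
<!-- The server splices the named file into the page before it is sent.
     "file" takes a path relative to the current document's directory;
     "virtual" takes a path relative to the site root. -->
<!--#include file="contact-info.txt" -->
<!--#include virtual="/includes/class-schedule.html" -->
```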
[ "html", "include", "shtml" ]
2
1
398
2
0
2008-08-18T15:23:33.693000
2008-08-18T15:50:00.800000
14,801
15,279
How can I override an EJB 3 session bean method with a generic argument - if possible at all?
Suppose you have the following EJB 3 interfaces/classes: public interface Repository&lt;E&gt; { public void delete(E entity); } public abstract class AbstractRepository&lt;E&gt; implements Repository&lt;E&gt; { public void delete(E entity){ //... } } public interface FooRepository { //other methods } @Local(FooRepository.class) @Stateless public class FooRepositoryImpl extends AbstractRepository&lt;Foo&gt; implements FooRepository { @Override public void delete(Foo entity){ //do something before deleting the entity super.delete(entity); } //other methods } And then another bean that accesses the FooRepository bean: //... @EJB private FooRepository fooRepository; public void someMethod(Foo foo) { fooRepository.delete(foo); } //... However, the overriding method is never executed when the delete method of the FooRepository bean is called. Instead, only the implementation of the delete method that is defined in AbstractRepository is executed. What am I doing wrong or is it simply a limitation of Java/EJB 3 that generics and inheritance don't play well together yet?
I tried it with a POJO and it seems to work. I had to modify your code a bit. I think your interfaces were a bit off, but I'm not sure. I assumed "Foo" was a concrete type, but if not I can do some more testing for you. I just wrote a main method to test this. I hope this helps! public static void main(String[] args){ FooRepository fooRepository = new FooRepositoryImpl(); fooRepository.delete(new Foo("Bar")); } public class Foo { private String value; public Foo(String inValue){ super(); value = inValue; } public String toString(){ return value; } } public interface Repository&lt;E&gt; { public void delete(E entity); } public interface FooRepository extends Repository&lt;Foo&gt; { //other methods } public class AbstractRepository&lt;E&gt; implements Repository&lt;E&gt; { public void delete(E entity){ System.out.println("Delete-" + entity.toString()); } } public class FooRepositoryImpl extends AbstractRepository&lt;Foo&gt; implements FooRepository { @Override public void delete(Foo entity){ //do something before deleting the entity System.out.println("something before"); super.delete(entity); } }
How can I override an EJB 3 session bean method with a generic argument - if possible at all? Suppose you have the following EJB 3 interfaces/classes: public interface Repository&lt;E&gt; { public void delete(E entity); } public abstract class AbstractRepository&lt;E&gt; implements Repository&lt;E&gt; { public void delete(E entity){ //... } } public interface FooRepository { //other methods } @Local(FooRepository.class) @Stateless public class FooRepositoryImpl extends AbstractRepository&lt;Foo&gt; implements FooRepository { @Override public void delete(Foo entity){ //do something before deleting the entity super.delete(entity); } //other methods } And then another bean that accesses the FooRepository bean: //... @EJB private FooRepository fooRepository; public void someMethod(Foo foo) { fooRepository.delete(foo); } //... However, the overriding method is never executed when the delete method of the FooRepository bean is called. Instead, only the implementation of the delete method that is defined in AbstractRepository is executed. What am I doing wrong or is it simply a limitation of Java/EJB 3 that generics and inheritance don't play well together yet?
TITLE: How can I override an EJB 3 session bean method with a generic argument - if possible at all? QUESTION: Suppose you have the following EJB 3 interfaces/classes: public interface Repository&lt;E&gt; { public void delete(E entity); } public abstract class AbstractRepository&lt;E&gt; implements Repository&lt;E&gt; { public void delete(E entity){ //... } } public interface FooRepository { //other methods } @Local(FooRepository.class) @Stateless public class FooRepositoryImpl extends AbstractRepository&lt;Foo&gt; implements FooRepository { @Override public void delete(Foo entity){ //do something before deleting the entity super.delete(entity); } //other methods } And then another bean that accesses the FooRepository bean: //... @EJB private FooRepository fooRepository; public void someMethod(Foo foo) { fooRepository.delete(foo); } //... However, the overriding method is never executed when the delete method of the FooRepository bean is called. Instead, only the implementation of the delete method that is defined in AbstractRepository is executed. What am I doing wrong or is it simply a limitation of Java/EJB 3 that generics and inheritance don't play well together yet? ANSWER: I tried it with a POJO and it seems to work. I had to modify your code a bit. I think your interfaces were a bit off, but I'm not sure. I assumed "Foo" was a concrete type, but if not I can do some more testing for you. I just wrote a main method to test this. I hope this helps!
public static void main(String[] args){ FooRepository fooRepository = new FooRepositoryImpl(); fooRepository.delete(new Foo("Bar")); } public class Foo { private String value; public Foo(String inValue){ super(); value = inValue; } public String toString(){ return value; } } public interface Repository&lt;E&gt; { public void delete(E entity); } public interface FooRepository extends Repository&lt;Foo&gt; { //other methods } public class AbstractRepository&lt;E&gt; implements Repository&lt;E&gt; { public void delete(E entity){ System.out.println("Delete-" + entity.toString()); } } public class FooRepositoryImpl extends AbstractRepository&lt;Foo&gt; implements FooRepository { @Override public void delete(Foo entity){ //do something before deleting the entity System.out.println("something before"); super.delete(entity); } }
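A side note on why the bound type parameter matters here (this is a plain-Java sketch, not EJB, and the class and field names are mine): when the subclass binds the parameter, as in extends AbstractRepository&lt;Foo&gt;, the compiler emits a synthetic bridge method, so the override is reached even when the call goes through the generic Repository interface - which is essentially what a container-generated proxy does:

```java
import java.util.ArrayList;
import java.util.List;

interface Repository<E> {
    void delete(E entity);
}

class AbstractRepository<E> implements Repository<E> {
    // Records calls so the dispatch order is observable.
    final List<String> log = new ArrayList<>();

    public void delete(E entity) {
        log.add("base:" + entity);
    }
}

class Foo {
    @Override
    public String toString() { return "foo"; }
}

// Binding <Foo> here is what makes the compiler generate a bridge
// delete(Object) in this class that forwards to delete(Foo).
class FooRepositoryImpl extends AbstractRepository<Foo> {
    @Override
    public void delete(Foo entity) {
        log.add("before");      // runs before the base implementation
        super.delete(entity);
    }
}

public class BridgeDemo {
    public static void main(String[] args) {
        FooRepositoryImpl impl = new FooRepositoryImpl();
        Repository<Foo> repo = impl;  // call through the generic interface
        repo.delete(new Foo());
        System.out.println(impl.log); // [before, base:foo]
    }
}
```

If the subclass extends the raw AbstractRepository instead, no bridge is generated for delete(Foo) and only the base implementation runs, which matches the symptom described in the question.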
[ "java", "generics", "inheritance", "jakarta-ee", "ejb-3.0" ]
6
2
4,191
2
0
2008-08-18T15:34:34.920000
2008-08-18T21:51:55.140000
14,837
14,908
How to get started with speech-to-text?
I'm really interested in speech-to-text algorithms, but I'm not sure where to start studying up on them. A bunch of searching around led me to this, but it's from 1996 and I'm fairly certain that there have been improvements since then. Does anyone who has any experience with this sort of stuff have any recommendations for reading / source code to examine? Or just general advice on what I should be trying to learn about if I want to get into the world of writing speech recognition programs (sometimes it's hard to know what to search for if you don't have much knowledge about the domain). Edit: I'd like to do something cross-platform, but for the moment I'd be targeting linux. Edit 2: Thanks csmba for the well-thought out reply. At this point in time, I'm mainly interested in being able to create applications that allow automation, or execution of different commands through voice. So, a limited amount of recognizable commands being able to be strung together. An example would be a music player that took commands like "Play the album Hello Everything by Squarepusher", or an application launcher that allowed the user to create voice-shortcuts to launch specific apps. I realize that it's a pretty giant problem, and that I have nowhere near the level of knowledge required right now to tackle implementing an entire recognition engine, although the techniques involved with doing so fascinate me, and it is something I'd like to work myself up to doing. In all likelihood, I'll probably end up picking up a book or two on the subject and studying up / playing with "simple" implementations in my free time.
This is a HUGE question, I wouldn't know how to begin... So let me just try giving you the right "terms" so you can refine your quest: First, understand that Speech Recognition is a diverse and complicated subject, and it has many different applications. People tend to map this domain to the first thing that comes to their head (usually, that would be computers understanding what you are saying, as in IVR systems). So first let's distinguish the concept into the main categories: Human-to-Machine: applications that deal with understanding what a human is saying, but the human knows he is talking to a machine and the grammar is very limited. Examples are Computer automation; Specialized: pilots automating some controls, for example (noise is a huge problem); IVR (Interactive Voice Response) systems like Google-411, or when you call the bank and the computer on the other side says "say 'service' to get customer service". Human-to-Human (spontaneous speech): this is a bigger, more complex problem. Here we can also break it down into different applications: Call Center: conversation between agent and customer, phone quality, compressed; Intelligence: radio/phone/live conversations between 2 or more individuals. Now, Speech-To-Text is not what you should be saying you care about. What you care about is solving a problem. Different technologies are used to solve different problems. See an overview here of some of them. To summarize, other approaches are phonetic transcription, LVCSR and direct-based. Also, are you interested in being the PhD behind the technology? You would need a Master's equivalent involving signal processing and probably a PhD to be cutting edge. In which case, you will work for a company that develops the actual speech engine. Companies like Nuance and IBM are the big ones, but Philips and other startups also exist. 
On the other hand, if you want to be the one implementing applications, you will not be working on the engine, but on building applications that USE the engine. A good analogy, I think, is from the gaming industry: are you developing the graphics engine (like the Cry engine), or working on one of several hundred games that all use the same graphics engine? Don't get me wrong, there is plenty of work to do on recognition quality outside the IBM/Nuance players of the world. The engine is usually very open, and there is a lot of algorithmic tweaking to be done that can dramatically affect performance. Each business application has different constraints and a different cost/benefit function, so you can spend many years experimenting and building better voice-recognition-based applications. One more thing: in general, you would also want a good statistics background the lower in the stack you want to be. "At this point in time, I'm mainly interested in being able to create applications that allow automation" Good, we are converging here... Then you have no interest in "Speech-to-Text". That buzzword takes you to the world of full transcription, a place you do not need to go to. You should be focusing on some of the more Human-to-Machine technologies like VoiceXML and the ones used in IVR systems (Nuance is the biggest player there)
How to get started with speech-to-text? I'm really interested in speech-to-text algorithms, but I'm not sure where to start studying up on them. A bunch of searching around led me to this, but it's from 1996 and I'm fairly certain that there have been improvements since then. Does anyone who has any experience with this sort of stuff have any recommendations for reading / source code to examine? Or just general advice on what I should be trying to learn about if I want to get into the world of writing speech recognition programs (sometimes it's hard to know what to search for if you don't have much knowledge about the domain). Edit: I'd like to do something cross-platform, but for the moment I'd be targeting linux. Edit 2: Thanks csmba for the well-thought out reply. At this point in time, I'm mainly interested in being able to create applications that allow automation, or execution of different commands through voice. So, a limited amount of recognizable commands being able to be strung together. An example would be a music player that took commands like "Play the album Hello Everything by Squarepusher", or an application launcher that allowed the user to create voice-shortcuts to launch specific apps. I realize that it's a pretty giant problem, and that I have nowhere near the level of knowledge required right now to tackle implementing an entire recognition engine, although the techniques involved with doing so fascinate me, and it is something I'd like to work myself up to doing. In all likelihood, I'll probably end up picking up a book or two on the subject and studying up / playing with "simple" implementations in my free time.
TITLE: How to get started with speech-to-text? QUESTION: I'm really interested in speech-to-text algorithms, but I'm not sure where to start studying up on them. A bunch of searching around led me to this, but it's from 1996 and I'm fairly certain that there have been improvements since then. Does anyone who has any experience with this sort of stuff have any recommendations for reading / source code to examine? Or just general advice on what I should be trying to learn about if I want to get into the world of writing speech recognition programs (sometimes it's hard to know what to search for if you don't have much knowledge about the domain). Edit: I'd like to do something cross-platform, but for the moment I'd be targeting Linux. Edit 2: Thanks csmba for the well-thought-out reply. At this point in time, I'm mainly interested in being able to create applications that allow automation, or execution of different commands through voice. So, a limited set of recognizable commands being able to be strung together. An example would be a music player that took commands like "Play the album Hello Everything by Squarepusher", or an application launcher that allowed the user to create voice-shortcuts to launch specific apps. I realize that it's a pretty giant problem, and that I have nowhere near the level of knowledge required right now to tackle implementing an entire recognition engine, although the techniques involved with doing so fascinate me, and it is something I'd like to work myself up to doing. In all likelihood, I'll probably end up picking up a book or two on the subject and studying up / playing with "simple" implementations in my free time. ANSWER: This is a HUGE question, I wouldn't know how to begin... So let me just try giving you the right "terms" so you can refine your quest: First, understand that Speech Recognition is a diverse and complicated subject, and it has many different applications. 
People tend to map this domain to the first thing that comes to their head (usually, that would be computers understanding what you are saying, as in IVR systems). So first let's distinguish the concept into the main categories: Human-to-Machine: applications that deal with understanding what a human is saying, but the human knows he is talking to a machine and the grammar is very limited. Examples are Computer automation; Specialized: pilots automating some controls, for example (noise is a huge problem); IVR (Interactive Voice Response) systems like Google-411, or when you call the bank and the computer on the other side says "say 'service' to get customer service". Human-to-Human (spontaneous speech): this is a bigger, more complex problem. Here we can also break it down into different applications: Call Center: conversation between agent and customer, phone quality, compressed; Intelligence: radio/phone/live conversations between 2 or more individuals. Now, Speech-To-Text is not what you should be saying you care about. What you care about is solving a problem. Different technologies are used to solve different problems. See an overview here of some of them. To summarize, other approaches are phonetic transcription, LVCSR and direct-based. Also, are you interested in being the PhD behind the technology? You would need a Master's equivalent involving signal processing and probably a PhD to be cutting edge. In which case, you will work for a company that develops the actual speech engine. Companies like Nuance and IBM are the big ones, but Philips and other startups also exist. On the other hand, if you want to be the one implementing applications, you will not be working on the engine, but on building applications that USE the engine. A good analogy, I think, is from the gaming industry: are you developing the graphics engine (like the Cry engine), or working on one of several hundred games that all use the same graphics engine? 
Don't get me wrong, there is plenty of work to do on recognition quality outside the IBM/Nuance players of the world. The engine is usually very open, and there is a lot of algorithmic tweaking to be done that can dramatically affect performance. Each business application has different constraints and a different cost/benefit function, so you can spend many years experimenting and building better voice-recognition-based applications. One more thing: in general, you would also want a good statistics background the lower in the stack you want to be. "At this point in time, I'm mainly interested in being able to create applications that allow automation" Good, we are converging here... Then you have no interest in "Speech-to-Text". That buzzword takes you to the world of full transcription, a place you do not need to go to. You should be focusing on some of the more Human-to-Machine technologies like VoiceXML and the ones used in IVR systems (Nuance is the biggest player there)
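To make that last suggestion concrete: a command-and-control vocabulary of the kind the question describes is, in the IVR world, typically written as a VoiceXML document with an SRGS grammar. A rough, untested sketch (element names from the W3C specs; the surrounding platform setup varies by vendor):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="player">
    <field name="command">
      <prompt>Say play, pause, or stop.</prompt>
      <!-- Inline SRGS grammar: only these three words are recognizable,
           which is what keeps the recognition problem tractable. -->
      <grammar version="1.0" root="cmd"
               xmlns="http://www.w3.org/2001/06/grammar">
        <rule id="cmd">
          <one-of>
            <item>play</item>
            <item>pause</item>
            <item>stop</item>
          </one-of>
        </rule>
      </grammar>
      <filled>
        <prompt>You said <value expr="command"/>.</prompt>
      </filled>
    </field>
  </form>
</vxml>
```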
[ "language-agnostic", "speech-recognition" ]
10
8
3,972
6
0
2008-08-18T16:05:43.280000
2008-08-18T16:56:43.850000
14,843
14,859
Mixed C++/CLI TypeLoadException Internal limitation: too many fields
On a quest to migrate some new UI into Managed/C# land, I have recently turned on Common Language Runtime Support (/clr) on a large legacy project, which uses MFC in a Shared DLL and relies on about a dozen other projects within our overall solution. This project is the core of our application, and would drive any managed UI code that is produced (hence the need to turn on clr support for interop). After fixing a ton of little niggly errors and warnings, I finally managed to get the application to compile.. However, running the application causes an EETypeLoadException and leaves me unable to debug... Doing some digging, I found the cause to be "System.TypeLoadException: Internal limitation: too many fields." which occurs right at the end of compilation. I then found this link which suggests to break the assembly down into two or more dlls. However, this is not possible in my case, as a limitation I have is that the legacy code basically remains untouched. Can anyone suggest any other possible solutions? I'm really at a dead end here.
Make sure the Enable String Pooling option under C/C++ Code Generation is turned on. That usually fixes this issue, which is one of those "huh?" MS limitations like the 64k limit on Excel spreadsheets. Only this one affects the number of symbols that may appear in an assembly.
Mixed C++/CLI TypeLoadException Internal limitation: too many fields On a quest to migrate some new UI into Managed/C# land, I have recently turned on Common Language Runtime Support (/clr) on a large legacy project, which uses MFC in a Shared DLL and relies on about a dozen other projects within our overall solution. This project is the core of our application, and would drive any managed UI code that is produced (hence the need to turn on clr support for interop). After fixing a ton of little niggly errors and warnings, I finally managed to get the application to compile.. However, running the application causes an EETypeLoadException and leaves me unable to debug... Doing some digging, I found the cause to be "System.TypeLoadException: Internal limitation: too many fields." which occurs right at the end of compilation. I then found this link which suggests to break the assembly down into two or more dlls. However, this is not possible in my case, as a limitation I have is that the legacy code basically remains untouched. Can anyone suggest any other possible solutions? I'm really at a dead end here.
TITLE: Mixed C++/CLI TypeLoadException Internal limitation: too many fields QUESTION: On a quest to migrate some new UI into Managed/C# land, I have recently turned on Common Language Runtime Support (/clr) on a large legacy project, which uses MFC in a Shared DLL and relies on about a dozen other projects within our overall solution. This project is the core of our application, and would drive any managed UI code that is produced (hence the need to turn on clr support for interop). After fixing a ton of little niggly errors and warnings, I finally managed to get the application to compile.. However, running the application causes an EETypeLoadException and leaves me unable to debug... Doing some digging, I found the cause to be "System.TypeLoadException: Internal limitation: too many fields." which occurs right at the end of compilation. I then found this link which suggests to break the assembly down into two or more dlls. However, this is not possible in my case, as a limitation I have is that the legacy code basically remains untouched. Can anyone suggest any other possible solutions? I'm really at a dead end here. ANSWER: Make sure the Enable String Pooling option under C/C++ Code Generation is turned on. That usually fixes this issue, which is one of those "huh?" MS limitations like the 64k limit on Excel spreadsheets. Only this one affects the number of symbols that may appear in an assembly.
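For anyone scripting builds rather than clicking through the IDE: as far as I know, the Enable String Pooling setting corresponds to the /GF compiler switch (and /clr builds are where this symbol pressure shows up), so the equivalent command-line fix would look roughly like this (the file name is invented):

```
cl /clr /GF /c LegacyCore.cpp
```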
[ "compiler-construction", "c++-cli", "clr" ]
10
16
3,798
3
0
2008-08-18T16:11:14.860000
2008-08-18T16:21:39.380000