question_id int64 4 6.31M | answer_id int64 7 6.31M | title stringlengths 9 150 | question_body stringlengths 0 28.8k | answer_body stringlengths 60 27.2k | question_text stringlengths 40 28.9k | combined_text stringlengths 124 39.6k | tags listlengths 1 6 | question_score int64 0 26.3k | answer_score int64 0 28.8k | view_count int64 15 14M | answer_count int64 0 182 | favorite_count int64 0 32 | question_creation_date stringdate 2008-07-31 21:42:52 2011-06-10 18:12:18 | answer_creation_date stringdate 2008-07-31 22:17:57 2011-06-10 18:14:17 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
36,197 | 36,253 | ASP.NET MVC: Structuring Controllers | So I'm embarking on an ASP.NET MVC project and while the experience has been a good one overall, I'm not quite as pleased with the spaghetti mess that my controllers have become. I've looked around online (CodeCampServer, etc...) and they all seem to suffer the same issue wherein controller methods violate SRP (single responsibility principle) pretty consistently - such as a controller method that simply renders the view if the request is a GET but updates the model if it's a POST. Now I've got controller methods responsible for multiple logical routes throughout the application - say it checks for which button was clicked on the form and acts accordingly. I could redirect each button click to a different form action using JavaScript, but something doesn't feel right there either... The other big issue is the proliferation of magic strings - ViewData["foo"] = blah; Long story short, how do you guys structure your controller logic? One giant model object per view? Lots of little controller methods and JavaScript is the router? My goal is maintainable code - as features get piled on I'm starting to slide down that slippery slope... | ASP.NET Preview 5 (available on CodePlex) has an answer for this: the [AcceptVerbs] attribute. Phil Haack has a blog post discussing how it's used. As for the view data magic key question, it's an interesting problem. If you think of a view as being a bunch of semi-independent components (especially in light of the new partial view support), then making a strongly-typed model becomes less ideal, as the several pieces of the view should be relatively independent of one another. | ASP.NET MVC: Structuring Controllers So I'm embarking on an ASP.NET MVC project and while the experience has been a good one overall, I'm not quite as pleased with the spaghetti mess that my controllers have become. I've looked around online (CodeCampServer, etc...) 
and they all seem to suffer the same issue wherein controller methods violate SRP (single responsibility principle) pretty consistently - such as a controller method that simply renders the view if the request is a GET but updates the model if it's a POST. Now I've got controller methods responsible for multiple logical routes throughout the application - say it checks for which button was clicked on the form and acts accordingly. I could redirect each button click to a different form action using JavaScript, but something doesn't feel right there either... The other big issue is the proliferation of magic strings - ViewData["foo"] = blah; Long story short, how do you guys structure your controller logic? One giant model object per view? Lots of little controller methods and JavaScript is the router? My goal is maintainable code - as features get piled on I'm starting to slide down that slippery slope... | TITLE:
ASP.NET MVC: Structuring Controllers
QUESTION:
So I'm embarking on an ASP.NET MVC project and while the experience has been a good one overall, I'm not quite as pleased with the spaghetti mess that my controllers have become. I've looked around online (CodeCampServer, etc...) and they all seem to suffer the same issue wherein controller methods violate SRP (single responsibility principle) pretty consistently - such as a controller method that simply renders the view if the request is a GET but updates the model if it's a POST. Now I've got controller methods responsible for multiple logical routes throughout the application - say it checks for which button was clicked on the form and acts accordingly. I could redirect each button click to a different form action using JavaScript, but something doesn't feel right there either... The other big issue is the proliferation of magic strings - ViewData["foo"] = blah; Long story short, how do you guys structure your controller logic? One giant model object per view? Lots of little controller methods and JavaScript is the router? My goal is maintainable code - as features get piled on I'm starting to slide down that slippery slope...
ANSWER:
ASP.NET Preview 5 (available on CodePlex) has an answer for this: the [AcceptVerbs] attribute. Phil Haack has a blog post discussing how it's used. As for the view data magic key question, it's an interesting problem. If you think of a view as being a bunch of semi-independent components (especially in light of the new partial view support), then making a strongly-typed model becomes less ideal, as the several pieces of the view should be relatively independent of one another. | [
"c#",
"asp.net-mvc"
] | 10 | 9 | 1,307 | 2 | 0 | 2008-08-30T17:44:52.807000 | 2008-08-30T18:52:47.230000 |
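The point of [AcceptVerbs] in the answer above is that the framework routes each HTTP verb to its own action, rather than one method branching on GET versus POST. A hypothetical sketch of that idea in Python (not ASP.NET; the decorator, path, and handlers are invented for illustration):

```python
# Route table mapping (path, verb) pairs to separate handler functions,
# so rendering (GET) and updating (POST) each keep a single responsibility.
ROUTES = {}

def accept_verbs(path, *verbs):
    """Register the decorated handler for specific HTTP verbs on a path."""
    def register(handler):
        for verb in verbs:
            ROUTES[(path, verb)] = handler
        return handler
    return register

@accept_verbs("/product/edit", "GET")
def edit_form(request):
    return "render edit form"          # GET only renders the view

@accept_verbs("/product/edit", "POST")
def edit_save(request):
    return f"saved {request['name']}"  # POST only updates the model

def dispatch(path, verb, request=None):
    return ROUTES[(path, verb)](request)
```

With this split, `dispatch("/product/edit", "GET")` and `dispatch("/product/edit", "POST", ...)` land in different functions, which is what the attribute buys you without any JavaScript routing.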
36,239 | 217,348 | What should we do to prepare for 2038? | I would like to think that some of the software I'm writing today will be used in 30 years. But I am also aware that a lot of it is based upon the UNIX tradition of exposing time as the number of seconds since 1970. #include <stdio.h> #include <time.h> #include <limits.h> void print(time_t rt) { struct tm * t = gmtime(&rt); puts(asctime(t)); }
int main() { print(0); print(time(0)); print(LONG_MAX); print(LONG_MAX+1); } Execution results in: Thu Jan 1 00:00:00 1970 Sat Aug 30 18:37:08 2008 Tue Jan 19 03:14:07 2038 Fri Dec 13 20:45:52 1901 The functions ctime(), gmtime(), and localtime() all take as an argument a time value representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970; see time(3)). I wonder if there is anything proactive to do in this area as a programmer, or are we to trust that all software systems (aka Operating Systems) will somehow be magically upgraded in the future? Update It would seem that indeed 64-bit systems are safe from this: import java.util.*;
class TimeTest { public static void main(String[] args) { print(0); print(System.currentTimeMillis()); print(Long.MAX_VALUE); print(Long.MAX_VALUE + 1); }
static void print(long l) { System.out.println(new Date(l)); } } Wed Dec 31 16:00:00 PST 1969 Sat Aug 30 12:02:40 PDT 2008 Sat Aug 16 23:12:55 PST 292278994 Sun Dec 02 08:47:04 PST 292269055 But what about the year 292278994? | I have written a portable replacement for time.h (currently just localtime(), gmtime(), mktime() and timegm()) which uses 64-bit time even on 32-bit machines. It is intended to be dropped into C projects as a replacement for time.h. It is being used in Perl and I intend to fix Ruby and Python's 2038 problems with it as well. This gives you a safe range of +/- 292 million years. You can find the code at the y2038 project. Please feel free to post any questions to the issue tracker. As to the "this isn't going to be a problem for another 29 years", peruse this list of standard answers to that. In short, stuff happens in the future and sometimes you need to know when. I also have a presentation on the problem, what is not a solution, and what is. Oh, and don't forget that many time systems don't handle dates before 1970. Stuff happened before 1970, and sometimes you need to know when. | What should we do to prepare for 2038? I would like to think that some of the software I'm writing today will be used in 30 years. But I am also aware that a lot of it is based upon the UNIX tradition of exposing time as the number of seconds since 1970. #include <stdio.h> #include <time.h> #include <limits.h> void print(time_t rt) { struct tm * t = gmtime(&rt); puts(asctime(t)); }
int main() { print(0); print(time(0)); print(LONG_MAX); print(LONG_MAX+1); } Execution results in: Thu Jan 1 00:00:00 1970 Sat Aug 30 18:37:08 2008 Tue Jan 19 03:14:07 2038 Fri Dec 13 20:45:52 1901 The functions ctime(), gmtime(), and localtime() all take as an argument a time value representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970; see time(3)). I wonder if there is anything proactive to do in this area as a programmer, or are we to trust that all software systems (aka Operating Systems) will somehow be magically upgraded in the future? Update It would seem that indeed 64-bit systems are safe from this: import java.util.*;
class TimeTest { public static void main(String[] args) { print(0); print(System.currentTimeMillis()); print(Long.MAX_VALUE); print(Long.MAX_VALUE + 1); }
static void print(long l) { System.out.println(new Date(l)); } } Wed Dec 31 16:00:00 PST 1969 Sat Aug 30 12:02:40 PDT 2008 Sat Aug 16 23:12:55 PST 292278994 Sun Dec 02 08:47:04 PST 292269055 But what about the year 292278994? | TITLE:
What should we do to prepare for 2038?
QUESTION:
I would like to think that some of the software I'm writing today will be used in 30 years. But I am also aware that a lot of it is based upon the UNIX tradition of exposing time as the number of seconds since 1970. #include <stdio.h> #include <time.h> #include <limits.h> void print(time_t rt) { struct tm * t = gmtime(&rt); puts(asctime(t)); }
int main() { print(0); print(time(0)); print(LONG_MAX); print(LONG_MAX+1); } Execution results in: Thu Jan 1 00:00:00 1970 Sat Aug 30 18:37:08 2008 Tue Jan 19 03:14:07 2038 Fri Dec 13 20:45:52 1901 The functions ctime(), gmtime(), and localtime() all take as an argument a time value representing the time in seconds since the Epoch (00:00:00 UTC, January 1, 1970; see time(3)). I wonder if there is anything proactive to do in this area as a programmer, or are we to trust that all software systems (aka Operating Systems) will somehow be magically upgraded in the future? Update It would seem that indeed 64-bit systems are safe from this: import java.util.*;
class TimeTest { public static void main(String[] args) { print(0); print(System.currentTimeMillis()); print(Long.MAX_VALUE); print(Long.MAX_VALUE + 1); }
static void print(long l) { System.out.println(new Date(l)); } } Wed Dec 31 16:00:00 PST 1969 Sat Aug 30 12:02:40 PDT 2008 Sat Aug 16 23:12:55 PST 292278994 Sun Dec 02 08:47:04 PST 292269055 But what about the year 292278994?
ANSWER:
I have written a portable replacement for time.h (currently just localtime(), gmtime(), mktime() and timegm()) which uses 64-bit time even on 32-bit machines. It is intended to be dropped into C projects as a replacement for time.h. It is being used in Perl and I intend to fix Ruby and Python's 2038 problems with it as well. This gives you a safe range of +/- 292 million years. You can find the code at the y2038 project. Please feel free to post any questions to the issue tracker. As to the "this isn't going to be a problem for another 29 years", peruse this list of standard answers to that. In short, stuff happens in the future and sometimes you need to know when. I also have a presentation on the problem, what is not a solution, and what is. Oh, and don't forget that many time systems don't handle dates before 1970. Stuff happened before 1970, and sometimes you need to know when. | [
"unix",
"time",
"maintainability",
"year2038"
] | 67 | 49 | 9,771 | 11 | 0 | 2008-08-30T18:42:57.067000 | 2008-10-20T02:05:35.667000 |
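To make the cutoff in the question concrete: a signed 32-bit time_t tops out at 2**31 - 1 seconds after the epoch. A small Python sketch (not from the original post; Python's integers and datetime have no trouble with the 64-bit case):

```python
# Compute the last second representable in a signed 32-bit time_t,
# and the first second past it, which only a 64-bit counter can hold.
from datetime import datetime, timezone

T32_MAX = 2**31 - 1  # 2147483647 seconds after 1970-01-01 00:00:00 UTC

last_ok = datetime.fromtimestamp(T32_MAX, tz=timezone.utc)
rollover = datetime.fromtimestamp(T32_MAX + 1, tz=timezone.utc)

print(last_ok)   # 2038-01-19 03:14:07+00:00, matching the C output above
print(rollover)  # 2038-01-19 03:14:08+00:00, the wrap point for 32 bits
```

The first value matches "Tue Jan 19 03:14:07 2038" from the C program; one second later, a 32-bit time_t wraps to 1901 while a 64-bit one carries on.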
36,260 | 36,527 | Dealing with circular dependencies in OCaml | I'm writing an interpreter for an experimental language. Three of the main constructs of the language are definitions, statements, and expressions. Definitions can contain statements and expressions, statements can contain definitions and expressions, and one kind of expression can contain statements. I represent all of these using union types so I can easily use pattern matching on them. Ideally, I would like to put the code for these in different files, but OMake complains about circular dependency issues. As far as I know, circular type definitions across modules are not allowed. The only way I know of to solve this is to define all three types at once: type defn =... and stmt =... and expr =... It seems like this requires all the code for types to be in the same file. Is there any way around this? How do you deal with circular definitions in your code? | Recursive definitions need to appear in the same file. If you want to separate definitions, statements, and expressions into separate modules, you can do so using recursive modules, but they will still need to appear in the same file. DAG-ifying inter-file dependencies is one of the annoyances of OCaml. | Dealing with circular dependencies in OCaml I'm writing an interpreter for an experimental language. Three of the main constructs of the language are definitions, statements, and expressions. Definitions can contain statements and expressions, statements can contain definitions and expressions, and one kind of expression can contain statements. I represent all of these using union types so I can easily use pattern matching on them. Ideally, I would like to put the code for these in different files, but OMake complains about circular dependency issues. As far as I know, circular type definitions across modules are not allowed. The only way I know of to solve this is to define all three types at once: type defn =... and stmt =... and expr =... 
It seems like this requires all the code for types to be in the same file. Is there any way around this? How do you deal with circular definitions in your code? | TITLE:
Dealing with circular dependencies in OCaml
QUESTION:
I'm writing an interpreter for an experimental language. Three of the main constructs of the language are definitions, statements, and expressions. Definitions can contain statements and expressions, statements can contain definitions and expressions, and one kind of expression can contain statements. I represent all of these using union types so I can easily use pattern matching on them. Ideally, I would like to put the code for these in different files, but OMake complains about circular dependency issues. As far as I know, circular type definitions across modules are not allowed. The only way I know of to solve this is to define all three types at once: type defn =... and stmt =... and expr =... It seems like this requires all the code for types to be in the same file. Is there any way around this? How do you deal with circular definitions in your code?
ANSWER:
Recursive definitions need to appear in the same file. If you want to separate definitions, statements, and expressions into separate modules, you can do so using recursive modules, but they will still need to appear in the same file. DAG-ifying inter-file dependencies is one of the annoyances of OCaml. | [
"ocaml"
] | 17 | 16 | 4,066 | 3 | 0 | 2008-08-30T19:06:44.077000 | 2008-08-31T01:00:05.537000 |
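The constraint in the answer (mutually recursive types must be declared together) is easy to see outside OCaml too. A rough Python analogue (not OCaml; the class shapes are invented from the question's defn/stmt/expr description) showing the three AST types referencing each other, which is why they naturally live in one module:

```python
# Mutually recursive AST node types, analogous to OCaml's
# `type defn = ... and stmt = ... and expr = ...` in one file.
from __future__ import annotations  # allow forward references in annotations
from dataclasses import dataclass

@dataclass
class Defn:
    name: str
    body: list[Stmt]    # definitions contain statements

@dataclass
class Stmt:
    defns: list[Defn]   # statements contain definitions
    exprs: list[Expr]   # ...and expressions

@dataclass
class Expr:
    stmts: list[Stmt]   # one kind of expression contains statements

# Build a tiny program to show the cycle is usable:
prog = Defn("main", [Stmt([], [Expr([])])])
```

Splitting these three classes across modules would recreate the import cycle the question describes; keeping the cycle inside one module (or one `type ... and ...` block in OCaml) is the standard resolution.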
36,262 | 36,284 | How much should one DataSet represent? | How much should one DataSet represent? Using the example of an ordering system: While showing your order I also show a list of items similar to one of yours as well as a list of our most popular items. While your items are tangled in a web of relationships involving you and your past orders, preferred suppliers, and the various other kinds of information related to you as a client, the other items do not have these same relationships. The set of queries I use to navigate the set of stuff representing you is different than the queries I use for one of these other lists of items. My inclination is to create different DataSets for different kinds of relationships but then I create ten separate item DataTables and that seems wrong. When I instantiate the larger DataSet even though I am only interested in a small subset that seems wrong, and when I try to pack all of these into one DataSet I have a big messy looking thing with several items tables next to each other and I am pretty sure that IS wrong. Maybe I am over-valuing the relationship feature of DataSets or maybe I just need to get over myself, either way I could use some guidance. | This is why I don't use datasets. If you use strongly-typed datasets you benefit from the strong typing but you pay for it in terms of the time it takes to create one even if you're just using part of it and its extensibility in terms of the code base. If you want to modify an existing one and you modify a row definition then this will create "shotgun" breaks in the code base, as each definition for adding a new row will have to be modified as it won't compile anymore. To avoid the above scenario the most sensible approach is to generally give up on sensible re-use. Define a dataset per purpose and per use. 
However the main issue with this is API use: you end up with a dataset that is similar to another dataset, but because it is a different dataset type you have to transform it to use the common API, which is both painful and inelegant. This, plus the fact that strongly typed datasets make your code look horrid (the length of the type declarations), are pretty much the reasons I've given up on datasets and switched to business objects instead. | How much should one DataSet represent? How much should one DataSet represent? Using the example of an ordering system: While showing your order I also show a list of items similar to one of yours as well as a list of our most popular items. While your items are tangled in a web of relationships involving you and your past orders, preferred suppliers, and the various other kinds of information related to you as a client, the other items do not have these same relationships. The set of queries I use to navigate the set of stuff representing you is different than the queries I use for one of these other lists of items. My inclination is to create different DataSets for different kinds of relationships but then I create ten separate item DataTables and that seems wrong. When I instantiate the larger DataSet even though I am only interested in a small subset that seems wrong, and when I try to pack all of these into one DataSet I have a big messy looking thing with several items tables next to each other and I am pretty sure that IS wrong. Maybe I am over-valuing the relationship feature of DataSets or maybe I just need to get over myself, either way I could use some guidance. | TITLE:
How much should one DataSet represent?
QUESTION:
How much should one DataSet represent? Using the example of an ordering system: While showing your order I also show a list of items similar to one of yours as well as a list of our most popular items. While your items are tangled in a web of relationships involving you and your past orders, preferred suppliers, and the various other kinds of information related to you as a client, the other items do not have these same relationships. The set of queries I use to navigate the set of stuff representing you is different than the queries I use for one of these other lists of items. My inclination is to create different DataSets for different kinds of relationships but then I create ten separate item DataTables and that seems wrong. When I instantiate the larger DataSet even though I am only interested in a small subset that seems wrong, and when I try to pack all of these into one DataSet I have a big messy looking thing with several items tables next to each other and I am pretty sure that IS wrong. Maybe I am over-valuing the relationship feature of DataSets or maybe I just need to get over myself, either way I could use some guidance.
ANSWER:
This is why I don't use datasets. If you use strongly-typed datasets you benefit from the strong typing but you pay for it in terms of the time it takes to create one even if you're just using part of it and its extensibility in terms of the code base. If you want to modify an existing one and you modify a row definition then this will create "shotgun" breaks in the code base, as each definition for adding a new row will have to be modified as it won't compile anymore. To avoid the above scenario the most sensible approach is to generally give up on sensible re-use. Define a dataset per purpose and per use. However the main issue with this is API use: you end up with a dataset that is similar to another dataset, but because it is a different dataset type you have to transform it to use the common API, which is both painful and inelegant. This, plus the fact that strongly typed datasets make your code look horrid (the length of the type declarations), are pretty much the reasons I've given up on datasets and switched to business objects instead. | [
".net",
"dataset"
] | 1 | 1 | 268 | 2 | 0 | 2008-08-30T19:08:50.047000 | 2008-08-30T19:25:42.657000 |
36,274 | 36,297 | What is Lazy Loading? | What is Lazy Loading? [Edit after reading a few answers] Why do people use this term so often? Say you just use an ASP/ADO recordset and load it with data or an ADO.NET Datasource for a gridview. I guess I should have asked why people use the term Lazy Loading, what "other" types are there? | It's called lazy loading because, like a lazy person, you are putting off doing something you don't want to. The opposite is Eager Loading, where you load something right away, long before you need it. If you are curious why people might use lazy loading, consider an application that takes a LOOOOONG time to start. This application is probably doing a lot of eager loading... loading things from disk, and doing calculations and whatnot long before it is ever needed. Compare this to lazy loading, the application would start much faster, but then the first time you need to do something that requires some long running load, there may be a slight pause while it is loaded for the first time. Thus, with lazy loading, you are amortizing the load time throughout the course of running your application... and you may actually save from loading things that the user may never intend to use. | What is Lazy Loading? What is Lazy Loading? [Edit after reading a few answers] Why do people use this term so often? Say you just use an ASP/ADO recordset and load it with data or an ADO.NET Datasource for a gridview. I guess I should have asked why people use the term Lazy Loading, what "other" types are there? | TITLE:
What is Lazy Loading?
QUESTION:
What is Lazy Loading? [Edit after reading a few answers] Why do people use this term so often? Say you just use an ASP/ADO recordset and load it with data or an ADO.NET Datasource for a gridview. I guess I should have asked why people use the term Lazy Loading, what "other" types are there?
ANSWER:
It's called lazy loading because, like a lazy person, you are putting off doing something you don't want to. The opposite is Eager Loading, where you load something right away, long before you need it. If you are curious why people might use lazy loading, consider an application that takes a LOOOOONG time to start. This application is probably doing a lot of eager loading... loading things from disk, and doing calculations and whatnot long before it is ever needed. Compare this to lazy loading, the application would start much faster, but then the first time you need to do something that requires some long running load, there may be a slight pause while it is loaded for the first time. Thus, with lazy loading, you are amortizing the load time throughout the course of running your application... and you may actually save from loading things that the user may never intend to use. | [
"data-binding",
"lazy-loading"
] | 95 | 231 | 77,623 | 13 | 0 | 2008-08-30T19:20:04.933000 | 2008-08-30T19:44:05.653000 |
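The trade-off the answer describes (pay a small pause on first use instead of a long wait at startup) is easy to see in code. A minimal Python sketch, with a counter standing in for the expensive load:

```python
# Lazy loading via a cached property: the "slow" work runs on first access
# only, and later accesses are served from the cache.
class Report:
    def __init__(self):
        self.loads = 0           # instrumentation: how many real loads happened
        self._data = None

    @property
    def data(self):
        if self._data is None:   # first access triggers the load...
            self.loads += 1
            self._data = [n * n for n in range(5)]  # stand-in for slow I/O
        return self._data        # ...subsequent accesses reuse the cache

r = Report()
# Construction was instant; nothing has been loaded yet (r.loads == 0).
first = r.data    # the one slight pause happens here
second = r.data   # served from cache, no second load
```

Eager loading would move the list comprehension into `__init__`: startup gets slower, and the work is wasted entirely if `data` is never read.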
36,294 | 36,393 | F# language - hints for newbie | Looks like here in StackOverflow there is a group of F# enthusiasts. I'd like to get to know this language better, so, apart from the functional programming theory, can you point me to good starting points for using the F# language? I mean, tutorials, how-tos, but first of all working samples to have the chance to start doing something and enjoy the language. Thanks a lot Andrea | Not to whore myself horribly but I wrote a couple F# overview posts on my blog here and here. Chris Smith (guy on the F# team at MS) has an article called 'F# in 20 minutes' - part 1 and part 2. Note you have to be careful as the latest CTP of F# (version 1.9.6.0) has some seriously breaking changes compared to previous versions, so some examples/tutorials out there might not work without modification. Here's a quick run-down of some cool stuff, maybe I can give you a few hints here myself which are clearly very brief and probably not great but hopefully gives you something to play with!:- First note - most examples on the internet will assume 'lightweight syntax' is turned on. To achieve this use the following line of code:- #light This prevents you from having to insert certain keywords that are present for OCaml compatibility and also having to terminate each line with semicolons. Note that using this syntax means indentation defines scope. This will become clear in later examples, all of which rely on lightweight syntax being switched on. If you're using the interactive mode you have to terminate all statements with double semi-colons, for example:- > #light;; > let f x y = x + y;;
val f: int -> int -> int
> f 1 2;; val it: int = 3 Note that interactive mode returns a 'val' result after each line. This gives important information about the definitions we are making, for example 'val f: int -> int -> int' indicates that a function which takes two ints returns an int. Note that only in interactive do we need to terminate lines with semi-colons, when actually defining F# code we are free of that:-) You define functions using the 'let' keyword. This is probably the most important keyword in all of F# and you'll be using it a lot. For example:- let sumStuff x y = x + y let sumStuffTuple (x, y) = x + y We can call these functions thus:- sumStuff 1 2 3 sumStuffTuple (1, 2) 3 Note there are two different ways of defining functions here - you can either separate parameters by whitespace or specify parameters in 'tuples' (i.e. values in parentheses separated by commas). The difference is that we can use 'partial function application' to obtain functions which take less than the required parameters using the first approach, and not with the second. E.g.:- let sumStuff1 = sumStuff 1 sumStuff 2 3 Note we are obtaining a function from the expression 'sumStuff 1'. When we can pass around functions just as easily as data that is referred to as the language having 'first class functions', this is a fundamental part of any functional language such as F#. Pattern matching is pretty darn cool, it's basically like a switch statement on steroids (yeah I nicked that phrase from another F#-ist:-). You can do stuff like:- let someThing x = match x with | 0 -> "zero" | 1 -> "one" | 2 -> "two" | x when x < 0 -> "negative = " + x.ToString() | _ when x%2 = 0 -> "greater than two but even" | _ -> "greater than two but odd" Note we use the '_' symbol when we want to match on something but the expression we are returning does not depend on the input. 
We can abbreviate pattern matching using if, elif, and else statements as required:- let negEvenOdd x = if x < 0 then "neg" elif x % 2 = 0 then "even" else "odd" F# lists (which are implemented as linked lists underneath) can be manipulated thus:- let l1 = [1;2;3] l1.[0] 1
let l2 = [1.. 10] List.length l2 10
let squares = [for i in 1..10 -> i * i] squares [1; 4; 9; 16; 25; 36; 49; 64; 81; 100]
let square x = x * x;; let squares2 = List.map square [1..10] squares2 [1; 4; 9; 16; 25; 36; 49; 64; 81; 100]
let evenSquares = List.filter (fun x -> x % 2 = 0) squares evenSqares [4; 16; 36; 64; 100] Note the List.map function 'maps' the square function on to the list from 1 to 10, i.e. applies the function to each element. List.filter 'filters' a list by only returning values in the list that pass the predicate function provided. Also note the 'fun x -> f' syntax - this is the F# lambda. Note that throughout we have not defined any types - the F# compiler/interpreter 'infers' types, i.e. works out what you want from usage. For example:- let f x = "hi " + x Here the compiler/interpreter will determine x is a string since you're performing an operation which requires x to be a string. It also determines the return type will be string as well. When there is ambiguity the compiler makes assumptions, for example:- let f x y = x + y Here x and y could be a number of types, but the compiler defaults to int. If you want to define types you can using type annotation:- let f (x:string) y = x + y Also note that we have had to enclose x:string in parentheses, we often have to do this to separate parts of a function definition. Two really useful and heavily used operators in F# are the pipe forward and function composition operators |> and >> respectively. We define |> thus:- let (|>) x f = f x Note that you can define operators in F#, this is pretty cool:-). This allows you to write things in a clearer way, e.g.:- [1..10] |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 = 0) Will allow you to obtain the first 10 even squares. That is clearer than:- List.filter (fun x -> x % 2 = 0) (List.map (fun x -> x * x) [1..10]) Well, at least I think so:-) Function composition defined by the >> operator is defined as follows:- let (>>) f g x = g(f(x)) I.e. you forward-pipe an operation only the parameter of the first function remains unspecified. 
This is useful as you can do the following:- let mapFilter = List.map (fun x -> x * x) >> List.filter (fun x -> x % 2 = 0) Here mapFilter will accept a list an input and return the list filtered as before. It's an abbreviated version of:- let mapFilter = l |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 = 0) If we want to write recursive functions we have to define the function as recursive by placing 'rec' after the let. Examples below. Some cool stuff:- Factorial let rec fact x = if x <= 1 then 1 else x * fact (x-1) nth Fibonacci Number let rec fib n = if n <= 1 then n else fib (n-1) + fib (n-2) FizzBuzz let (/%) x y = x % y = 0 let fb = function | x when x /% 15 -> "FizzBuzz" | x when x /% 3 -> "Fizz" | x when x /% 5 -> "Buzz" | x -> x.ToString()
[1..100] |> List.map (fb >> printfn "%s") Anyway that's a very brief overview, hopefully it helps a little!! | F# language - hints for newbie Looks like here in StackOveflow there is a group of F# enthusiasts. I'd like to know better this language, so, apart from the functional programming theory, can you point me to the better starting points to start using the F# language? I mean, tutorials, how-tos, but first of all working samples to have the chance to start doing something and enjoy the language. Thanks a lot Andrea | TITLE:
F# language - hints for newbie
QUESTION:
Looks like here in StackOverflow there is a group of F# enthusiasts. I'd like to get to know this language better, so, apart from the functional programming theory, can you point me to good starting points for using the F# language? I mean, tutorials, how-tos, but first of all working samples to have the chance to start doing something and enjoy the language. Thanks a lot Andrea
ANSWER:
Not to whore myself horribly but I wrote a couple F# overview posts on my blog here and here. Chris Smith (guy on the F# team at MS) has an article called 'F# in 20 minutes' - part 1 and part 2. Note you have to be careful as the latest CTP of F# (version 1.9.6.0) has some seriously breaking changes compared to previous versions, so some examples/tutorials out there might not work without modification. Here's a quick run-down of some cool stuff, maybe I can give you a few hints here myself which are clearly very brief and probably not great but hopefully gives you something to play with!:- First note - most examples on the internet will assume 'lightweight syntax' is turned on. To achieve this use the following line of code:- #light This prevents you from having to insert certain keywords that are present for OCaml compatibility and also having to terminate each line with semicolons. Note that using this syntax means indentation defines scope. This will become clear in later examples, all of which rely on lightweight syntax being switched on. If you're using the interactive mode you have to terminate all statements with double semi-colons, for example:- > #light;; > let f x y = x + y;;
val f: int -> int -> int
> f 1 2;; val it: int = 3 Note that interactive mode returns a 'val' result after each line. This gives important information about the definitions we are making, for example 'val f: int -> int -> int' indicates that a function which takes two ints returns an int. Note that only in interactive do we need to terminate lines with semi-colons, when actually defining F# code we are free of that:-) You define functions using the 'let' keyword. This is probably the most important keyword in all of F# and you'll be using it a lot. For example:- let sumStuff x y = x + y let sumStuffTuple (x, y) = x + y We can call these functions thus:- sumStuff 1 2 3 sumStuffTuple (1, 2) 3 Note there are two different ways of defining functions here - you can either separate parameters by whitespace or specify parameters in 'tuples' (i.e. values in parentheses separated by commas). The difference is that we can use 'partial function application' to obtain functions which take less than the required parameters using the first approach, and not with the second. E.g.:- let sumStuff1 = sumStuff 1 sumStuff 2 3 Note we are obtaining a function from the expression 'sumStuff 1'. When we can pass around functions just as easily as data that is referred to as the language having 'first class functions', this is a fundamental part of any functional language such as F#. Pattern matching is pretty darn cool, it's basically like a switch statement on steroids (yeah I nicked that phrase from another F#-ist:-). You can do stuff like:- let someThing x = match x with | 0 -> "zero" | 1 -> "one" | 2 -> "two" | x when x < 0 -> "negative = " + x.ToString() | _ when x%2 = 0 -> "greater than two but even" | _ -> "greater than two but odd" Note we use the '_' symbol when we want to match on something but the expression we are returning does not depend on the input. 
We can abbreviate pattern matching using if, elif, and else statements as required:- let negEvenOdd x = if x < 0 then "neg" elif x % 2 = 0 then "even" else "odd" F# lists (which are implemented as linked lists underneath) can be manipulated thus:- let l1 = [1;2;3] l1.[0] 1
let l2 = [1.. 10] List.length l2 10
let squares = [for i in 1..10 -> i * i] squares [1; 4; 9; 16; 25; 36; 49; 64; 81; 100]
let square x = x * x;; let squares2 = List.map square [1..10] squares2 [1; 4; 9; 16; 25; 36; 49; 64; 81; 100]
let evenSquares = List.filter (fun x -> x % 2 = 0) squares evenSquares [4; 16; 36; 64; 100] Note the List.map function 'maps' the square function on to the list from 1 to 10, i.e. applies the function to each element. List.filter 'filters' a list by only returning values in the list that pass the predicate function provided. Also note the 'fun x -> f' syntax - this is the F# lambda. Note that throughout we have not defined any types - the F# compiler/interpreter 'infers' types, i.e. works out what you want from usage. For example:- let f x = "hi " + x Here the compiler/interpreter will determine x is a string since you're performing an operation which requires x to be a string. It also determines the return type will be string as well. When there is ambiguity the compiler makes assumptions, for example:- let f x y = x + y Here x and y could be a number of types, but the compiler defaults to int. If you want to define types you can do so using a type annotation:- let f (x:string) y = x + y Also note that we have had to enclose x:string in parentheses, we often have to do this to separate parts of a function definition. Two really useful and heavily used operators in F# are the pipe forward and function composition operators |> and >> respectively. We define |> thus:- let (|>) x f = f x Note that you can define operators in F#, this is pretty cool:-). This allows you to write things in a clearer way, e.g.:- [1..10] |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 = 0) Will allow you to obtain the even squares from the first 10 squares. That is clearer than:- List.filter (fun x -> x % 2 = 0) (List.map (fun x -> x * x) [1..10]) Well, at least I think so:-) Function composition, via the >> operator, is defined as follows:- let (>>) f g x = g(f(x)) I.e. you forward-pipe an operation where only the parameter of the first function remains unspecified.
This is useful as you can do the following:- let mapFilter = List.map (fun x -> x * x) >> List.filter (fun x -> x % 2 = 0) Here mapFilter will accept a list an input and return the list filtered as before. It's an abbreviated version of:- let mapFilter = l |> List.map (fun x -> x * x) |> List.filter (fun x -> x % 2 = 0) If we want to write recursive functions we have to define the function as recursive by placing 'rec' after the let. Examples below. Some cool stuff:- Factorial let rec fact x = if x <= 1 then 1 else x * fact (x-1) nth Fibonacci Number let rec fib n = if n <= 1 then n else fib (n-1) + fib (n-2) FizzBuzz let (/%) x y = x % y = 0 let fb = function | x when x /% 15 -> "FizzBuzz" | x when x /% 3 -> "Fizz" | x when x /% 5 -> "Buzz" | x -> x.ToString()
[1..100] |> List.map (fb >> printfn "%s") Anyway that's a very brief overview, hopefully it helps a little!! | [
"f#"
] | 19 | 31 | 2,258 | 7 | 0 | 2008-08-30T19:42:24.927000 | 2008-08-30T22:09:05.697000 |
36,314 | 36,321 | What is 'Currying'? | I've seen references to curried functions in several articles and blogs but I can't find a good explanation (or at least one that makes sense!) | Currying is when you break down a function that takes multiple arguments into a series of functions that each take only one argument. Here's an example in JavaScript: function add (a, b) { return a + b; }
add(3, 4); // returns 7 This is a function that takes two arguments, a and b, and returns their sum. We will now curry this function: function add (a) { return function (b) { return a + b; } } This is a function that takes one argument, a, and returns a function that takes another argument, b, and that function returns their sum. add(3)(4); // returns 7
var add3 = add(3); // returns a function
add3(4); // returns 7 The first statement returns 7, like the add(3, 4) statement. The second statement defines a new function called add3 that will add 3 to its argument. (This is what some may call a closure.) The third statement uses the add3 operation to add 3 to 4, again producing 7 as a result. | What is 'Currying'? I've seen references to curried functions in several articles and blogs but I can't find a good explanation (or at least one that makes sense!) | TITLE:
What is 'Currying'?
QUESTION:
I've seen references to curried functions in several articles and blogs but I can't find a good explanation (or at least one that makes sense!)
ANSWER:
Currying is when you break down a function that takes multiple arguments into a series of functions that each take only one argument. Here's an example in JavaScript: function add (a, b) { return a + b; }
add(3, 4); // returns 7 This is a function that takes two arguments, a and b, and returns their sum. We will now curry this function: function add (a) { return function (b) { return a + b; } } This is a function that takes one argument, a, and returns a function that takes another argument, b, and that function returns their sum. add(3)(4); // returns 7
var add3 = add(3); // returns a function
add3(4); // returns 7 The first statement returns 7, like the add(3, 4) statement. The second statement defines a new function called add3 that will add 3 to its argument. (This is what some may call a closure.) The third statement uses the add3 operation to add 3 to 4, again producing 7 as a result. | [
"javascript",
"functional-programming",
"terminology",
"definition",
"currying"
] | 821 | 1,095 | 239,298 | 25 | 0 | 2008-08-30T20:12:55.867000 | 2008-08-30T20:19:51.187000 |
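The hand-written currying in the answer above generalizes to any fixed-arity function. Here is a hedged sketch of a generic helper in modern JavaScript — the `curry` helper and its name are illustrative, not part of the original answer:

```javascript
// Illustrative helper: turns an n-argument function into a chain of
// single-argument (or partially applied) functions, as the answer describes.
function curry(fn) {
  return function curried(...args) {
    // Once we have as many arguments as fn declares, call it.
    if (args.length >= fn.length) {
      return fn(...args);
    }
    // Otherwise return a function that collects further arguments.
    return (...rest) => curried(...args, ...rest);
  };
}

function add(a, b) {
  return a + b;
}

const curriedAdd = curry(add);
console.log(curriedAdd(3)(4)); // 7, same as add(3, 4)
const add3 = curriedAdd(3);    // a partially applied function, like the answer's add3
console.log(add3(4));          // 7
```

The helper relies on `Function.prototype.length` (the declared parameter count), so it works for functions with a fixed arity but not for functions that use rest parameters or defaults.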
36,315 | 36,552 | Alternative to HttpUtility for .NET 3.5 SP1 client framework? | It'd be really nice to target my Windows Forms app to the.NET 3.5 SP1 client framework. But, right now I'm using the HttpUtility.HtmlDecode and HttpUtility.UrlDecode functions, and the MSDN documentation doesn't point to any alternatives inside of, say, System.Net or something. So, short from reflectoring the source code and copying it into my assembly---which I don't think would be worth it---are there alternatives inside of the.NET 3.5 SP1 client framework that you know of, to replace this functionality? It seems a bit strange that they'd restrict these useful functions to server-only code. | I’d strongly not recommend rolling your own encoding. I’d use the Microsoft Anti-Cross Site Scripting Library which is very small (v1.5 is ~30kb) if HttpUtility.HtmlEncode isn’t available. As for decoding, maybe you could use the decoding routine from Mono? | Alternative to HttpUtility for .NET 3.5 SP1 client framework? It'd be really nice to target my Windows Forms app to the.NET 3.5 SP1 client framework. But, right now I'm using the HttpUtility.HtmlDecode and HttpUtility.UrlDecode functions, and the MSDN documentation doesn't point to any alternatives inside of, say, System.Net or something. So, short from reflectoring the source code and copying it into my assembly---which I don't think would be worth it---are there alternatives inside of the.NET 3.5 SP1 client framework that you know of, to replace this functionality? It seems a bit strange that they'd restrict these useful functions to server-only code. | TITLE:
Alternative to HttpUtility for .NET 3.5 SP1 client framework?
QUESTION:
It'd be really nice to target my Windows Forms app to the .NET 3.5 SP1 client framework. But, right now I'm using the HttpUtility.HtmlDecode and HttpUtility.UrlDecode functions, and the MSDN documentation doesn't point to any alternatives inside of, say, System.Net or something. So, short of reflectoring the source code and copying it into my assembly---which I don't think would be worth it---are there alternatives inside of the .NET 3.5 SP1 client framework that you know of, to replace this functionality? It seems a bit strange that they'd restrict these useful functions to server-only code.
ANSWER:
I’d strongly recommend not rolling your own encoding. I’d use the Microsoft Anti-Cross Site Scripting Library, which is very small (v1.5 is ~30kb), if HttpUtility.HtmlEncode isn’t available. As for decoding, maybe you could use the decoding routine from Mono?
".net",
"deployment",
".net-3.5",
".net-client-profile"
] | 13 | 3 | 17,195 | 7 | 0 | 2008-08-30T20:13:32.667000 | 2008-08-31T01:41:17.013000 |
36,324 | 36,327 | "The system cannot find the file specified" when invoking subprocess.Popen in python | I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue. I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path): P:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> i,k = os.popen4("svn --version") >>> i.close() >>> k.readline() 'svn, version 1.4.2 (r22196)\n' Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking os.popen4() it uses subprocess.Popen(). Trying that reproduces the error: C:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE, >>> close_fds=False, stderr=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in File "C:\Python25\lib\subprocess.py", line 594, in __init__ errread, errwrite) File "C:\Python25\lib\subprocess.py", line 816, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution. If it's not obvious, I'm a complete python newbie, but Google hasn't helped. | It's a bug, see the documentation of subprocess.Popen. There either needs to be a "shell=True " option, or the first argument needs to be a sequence ['svn', '--version']. 
As it is now, Popen is looking for an executable named, literally, "svn --version" which it doesn't find. I don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it. | "The system cannot find the file specified" when invoking subprocess.Popen in python I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue. I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path): P:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> i,k = os.popen4("svn --version") >>> i.close() >>> k.readline() 'svn, version 1.4.2 (r22196)\n' Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking os.popen4() it uses subprocess.Popen(). Trying that reproduces the error: C:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. 
>>> import subprocess >>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE, >>> close_fds=False, stderr=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in File "C:\Python25\lib\subprocess.py", line 594, in __init__ errread, errwrite) File "C:\Python25\lib\subprocess.py", line 816, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution. If it's not obvious, I'm a complete python newbie, but Google hasn't helped. | TITLE:
"The system cannot find the file specified" when invoking subprocess.Popen in python
QUESTION:
I'm trying to use svnmerge.py to merge some files. Under the hood it uses python, and when I use it I get an error - "The system cannot find the file specified". Colleagues at work are running the same version of svnmerge.py, and of python (2.5.2, specifically r252:60911) without an issue. I found this link, which describes my problem. Trying what was outlined there, I confirmed Python could find SVN (it's in my path): P:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import os >>> i,k = os.popen4("svn --version") >>> i.close() >>> k.readline() 'svn, version 1.4.2 (r22196)\n' Looking at the svnmerge.py code, though, I noticed for python versions 2.4 and higher it was following a different execution path. Rather than invoking os.popen4() it uses subprocess.Popen(). Trying that reproduces the error: C:\>python Python 2.5.2 (r252:60911, Feb 21 2008, 13:11:45) [MSC v.1310 32 bit (Intel)] on win32 Type "help", "copyright", "credits" or "license" for more information. >>> import subprocess >>> p = subprocess.Popen("svn --version", stdout=subprocess.PIPE, >>> close_fds=False, stderr=subprocess.PIPE) Traceback (most recent call last): File "", line 1, in File "C:\Python25\lib\subprocess.py", line 594, in __init__ errread, errwrite) File "C:\Python25\lib\subprocess.py", line 816, in _execute_child startupinfo) WindowsError: [Error 2] The system cannot find the file specified >>> For now, I've commented out the 2.4-and-higher specific code, but I'd like to find a proper solution. If it's not obvious, I'm a complete python newbie, but Google hasn't helped.
ANSWER:
It's a bug, see the documentation of subprocess.Popen. There either needs to be a "shell=True" option, or the first argument needs to be a sequence ['svn', '--version']. As it is now, Popen is looking for an executable named, literally, "svn --version" which it doesn't find. I don't know why it would work for your colleagues though, if they are running the same OS and version of Python... FWIW it gives me the same error message on a mac, and either of the two ways I gave fixes it.
"python",
"svn-merge"
] | 13 | 23 | 16,398 | 1 | 0 | 2008-08-30T20:24:38.037000 | 2008-08-30T20:34:35.877000 |
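The fix described in the answer — pass the command as a sequence, or set shell=True — can be shown with a portable sketch. This runs the Python interpreter itself instead of svn so it works without Subversion installed; an svn call would follow the same pattern:

```python
import subprocess
import sys

# Correct on every platform: the command and its arguments are passed
# as a sequence, so Popen does not look for an executable literally
# named "python -c print('hello')".
p = subprocess.Popen(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
out, err = p.communicate()
print(out.decode().strip())  # hello

# The failing call in the question passed a single string:
#   subprocess.Popen("svn --version", stdout=subprocess.PIPE, ...)
# On Windows (without shell=True) that searches for a file named
# "svn --version". The equivalent working form would be:
#   subprocess.Popen(["svn", "--version"], stdout=subprocess.PIPE, ...)
```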
36,326 | 36,332 | How can I store user-tweakable configuration in app.config? | I know it is a good idea to store configuration data in app.config (e.g. database connection strings) instead of hardcoing it, even if I am writing an application just for myself. But is there a way to update the configuration data stored in app.config from the program that is using it? | If you use the Settings for the project, you can mark each setting as either application or user. If they're set as user, they will be stored per-user and when you call the Save method it will be updated in the config for that user. Code project has a really detailed article on saving all types of settings. | How can I store user-tweakable configuration in app.config? I know it is a good idea to store configuration data in app.config (e.g. database connection strings) instead of hardcoing it, even if I am writing an application just for myself. But is there a way to update the configuration data stored in app.config from the program that is using it? | TITLE:
How can I store user-tweakable configuration in app.config?
QUESTION:
I know it is a good idea to store configuration data in app.config (e.g. database connection strings) instead of hardcoding it, even if I am writing an application just for myself. But is there a way to update the configuration data stored in app.config from the program that is using it?
ANSWER:
If you use the Settings for the project, you can mark each setting as either application or user. If they're set as user, they will be stored per-user and when you call the Save method it will be updated in the config for that user. Code project has a really detailed article on saving all types of settings. | [
"c#",
".net",
"app-config"
] | 4 | 6 | 1,651 | 2 | 0 | 2008-08-30T20:34:18.060000 | 2008-08-30T20:40:42.307000 |
36,333 | 36,339 | Preview theme in WordPress | In the latest version of WordPress, it gives you the opportunity to view a preview of what your site would look like using a different theme. You basically just click on the theme, it takes over the screen and you have a chance to activate or close it (and return to the previous screen, which is grayed out in the background). I have seen a similar technique used on a number of websites recently for display images as well. I'm wondering what technology/code they use to do this? | It's open source - use the source, Luke. Look in wp-admin/js/theme-preview.js | Preview theme in WordPress In the latest version of WordPress, it gives you the opportunity to view a preview of what your site would look like using a different theme. You basically just click on the theme, it takes over the screen and you have a chance to activate or close it (and return to the previous screen, which is grayed out in the background). I have seen a similar technique used on a number of websites recently for display images as well. I'm wondering what technology/code they use to do this? | TITLE:
Preview theme in WordPress
QUESTION:
In the latest version of WordPress, it gives you the opportunity to view a preview of what your site would look like using a different theme. You basically just click on the theme, it takes over the screen and you have a chance to activate or close it (and return to the previous screen, which is grayed out in the background). I have seen a similar technique used on a number of websites recently for displaying images as well. I'm wondering what technology/code they use to do this?
ANSWER:
It's open source - use the source, Luke. Look in wp-admin/js/theme-preview.js | [
"jquery",
"html",
"wordpress"
] | 5 | 5 | 304 | 1 | 0 | 2008-08-30T20:41:02.337000 | 2008-08-30T21:02:11.037000 |
36,347 | 36,364 | What are the differences between "generic" types in C++ and Java? | Java has generics and C++ provides a very strong programming model with template s. So then, what is the difference between C++ and Java generics? | There is a big difference between them. In C++ you don't have to specify a class or an interface for the generic type. That's why you can create truly generic functions and classes, with the caveat of a looser typing. template T sum(T a, T b) { return a + b; } The method above adds two objects of the same type, and can be used for any type T that has the "+" operator available. In Java you have to specify a type if you want to call methods on the objects passed, something like: T sum(T a, T b) { return a.add ( b ); } In C++ generic functions/classes can only be defined in headers, since the compiler generates different functions for different types (that it's invoked with). So the compilation is slower. In Java the compilation doesn't have a major penalty, but Java uses a technique called "erasure" where the generic type is erased at runtime, so at runtime Java is actually calling... Something sum(Something a, Something b) { return a.add ( b ); } Nevertheless, Java's generics help with type-safety. | What are the differences between "generic" types in C++ and Java? Java has generics and C++ provides a very strong programming model with template s. So then, what is the difference between C++ and Java generics? | TITLE:
What are the differences between "generic" types in C++ and Java?
QUESTION:
Java has generics and C++ provides a very strong programming model with templates. So then, what is the difference between C++ and Java generics?
ANSWER:
There is a big difference between them. In C++ you don't have to specify a class or an interface for the generic type. That's why you can create truly generic functions and classes, with the caveat of a looser typing. template <typename T> T sum(T a, T b) { return a + b; } The method above adds two objects of the same type, and can be used for any type T that has the "+" operator available. In Java you have to specify a type if you want to call methods on the objects passed, something like: <T extends Something> T sum(T a, T b) { return a.add ( b ); } In C++ generic functions/classes can only be defined in headers, since the compiler generates different functions for different types (that it's invoked with). So the compilation is slower. In Java the compilation doesn't have a major penalty, but Java uses a technique called "erasure" where the generic type is erased at runtime, so at runtime Java is actually calling... Something sum(Something a, Something b) { return a.add ( b ); } Nevertheless, Java's generics help with type-safety.
"java",
"c++",
"generics",
"templates",
"language-features"
] | 175 | 170 | 116,540 | 13 | 0 | 2008-08-30T21:14:27.623000 | 2008-08-30T21:34:05.967000 |
36,350 | 36,367 | How to pass a single object[] to a params object[] | I have a method which takes params object[] such as: void Foo(params object[] items) { Console.WriteLine(items[0]); } When I pass two object arrays to this method, it works fine: Foo(new object[]{ (object)"1", (object)"2" }, new object[]{ (object)"3", (object)"4" } ); // Output: System.Object[] But when I pass a single object[], it does not take my object[] as the first param, instead it takes its all elements like I wanted to pass them one by one: Foo(new object[]{ (object)"1", (object)"2" }); // Output: 1, expected: System.Object[] How do I pass a single object[] as a first argument to a params array? | A simple typecast will ensure the compiler knows what you mean in this case. Foo((object)new object[]{ (object)"1", (object)"2" })); As an array is a subtype of object, this all works out. Bit of an odd solution though, I'll agree. | How to pass a single object[] to a params object[] I have a method which takes params object[] such as: void Foo(params object[] items) { Console.WriteLine(items[0]); } When I pass two object arrays to this method, it works fine: Foo(new object[]{ (object)"1", (object)"2" }, new object[]{ (object)"3", (object)"4" } ); // Output: System.Object[] But when I pass a single object[], it does not take my object[] as the first param, instead it takes its all elements like I wanted to pass them one by one: Foo(new object[]{ (object)"1", (object)"2" }); // Output: 1, expected: System.Object[] How do I pass a single object[] as a first argument to a params array? | TITLE:
How to pass a single object[] to a params object[]
QUESTION:
I have a method which takes params object[] such as: void Foo(params object[] items) { Console.WriteLine(items[0]); } When I pass two object arrays to this method, it works fine: Foo(new object[]{ (object)"1", (object)"2" }, new object[]{ (object)"3", (object)"4" } ); // Output: System.Object[] But when I pass a single object[], it does not take my object[] as the first param, instead it takes all its elements as if I wanted to pass them one by one: Foo(new object[]{ (object)"1", (object)"2" }); // Output: 1, expected: System.Object[] How do I pass a single object[] as a first argument to a params array?
ANSWER:
A simple typecast will ensure the compiler knows what you mean in this case. Foo((object)new object[]{ (object)"1", (object)"2" }); As an array is a subtype of object, this all works out. Bit of an odd solution though, I'll agree.
"c#",
"arrays"
] | 128 | 103 | 139,784 | 7 | 0 | 2008-08-30T21:22:06.433000 | 2008-08-30T21:36:59.377000 |
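The same ambiguity exists in Java's varargs, so the cast trick in the answer translates directly. A hedged, runnable sketch (illustrative only — the original answer is about C#, and the class and method names here are my own):

```java
public class VarargsDemo {
    // Java's equivalent of the C# "params object[]" method.
    static Object first(Object... items) {
        return items[0];
    }

    public static void main(String[] args) {
        Object[] arr = {"1", "2"};

        // Without a cast, the array itself becomes the varargs array,
        // so the first element is the string "1" -- the surprise
        // described in the question.
        System.out.println(first(arr));

        // Casting to Object forces the compiler to wrap the whole array
        // as a single element, mirroring the C# cast in the answer.
        System.out.println(first((Object) arr) == arr);
    }
}
```

Printing `first(arr)` yields `1`, while `first((Object) arr) == arr` is `true`, matching the C# behavior before and after the cast.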
36,406 | 36,423 | Relative Root with Visual Studio ASP.NET debugger | I am working on an ASP.NET project which is physically located at C:\Projects\MyStuff\WebSite2. When I run the app with the Visual Studio debugger it seems that the built in web server considers "C:\Projects\MyStuff\" to be the relative root, not "C:\Projects\MyStuff\WebSite2". Is there a web.config setting or something that will allow tags like to render correctly without having to resort to the ASP.NET specific tags like? If I code for the debugger's peculiarities then when I upload to the production IIS server everthing is off. How do you resolve this? | you can try this trick that Scott Guthrie posted on his blog http://weblogs.asp.net/scottgu/archive/2006/12/19/tip-trick-how-to-run-a-root-site-with-the-local-web-server-using-vs-2005-sp1.aspx to cut to the fix: select your project/solution in solution explorer and then open the Properties tab like you would if you were editing a textbox. If you right click and go to "Property Pages" that is the wrong place. | Relative Root with Visual Studio ASP.NET debugger I am working on an ASP.NET project which is physically located at C:\Projects\MyStuff\WebSite2. When I run the app with the Visual Studio debugger it seems that the built in web server considers "C:\Projects\MyStuff\" to be the relative root, not "C:\Projects\MyStuff\WebSite2". Is there a web.config setting or something that will allow tags like to render correctly without having to resort to the ASP.NET specific tags like? If I code for the debugger's peculiarities then when I upload to the production IIS server everthing is off. How do you resolve this? | TITLE:
Relative Root with Visual Studio ASP.NET debugger
QUESTION:
I am working on an ASP.NET project which is physically located at C:\Projects\MyStuff\WebSite2. When I run the app with the Visual Studio debugger it seems that the built-in web server considers "C:\Projects\MyStuff\" to be the relative root, not "C:\Projects\MyStuff\WebSite2". Is there a web.config setting or something that will allow tags like to render correctly without having to resort to the ASP.NET specific tags like? If I code for the debugger's peculiarities then when I upload to the production IIS server everything is off. How do you resolve this?
ANSWER:
you can try this trick that Scott Guthrie posted on his blog http://weblogs.asp.net/scottgu/archive/2006/12/19/tip-trick-how-to-run-a-root-site-with-the-local-web-server-using-vs-2005-sp1.aspx to cut to the fix: select your project/solution in solution explorer and then open the Properties tab like you would if you were editing a textbox. If you right click and go to "Property Pages" that is the wrong place. | [
"asp.net"
] | 3 | 2 | 973 | 1 | 0 | 2008-08-30T22:22:50.360000 | 2008-08-30T22:31:01.790000 |
36,407 | 36,419 | Firefox add-ons | What Firefox add-ons do you use that are useful for programmers? | I guess it's silly to mention Firebug -- doubt any of us could live without it. Other than that I use the following (only listing dev-related): Console 2: next-generation error console DOM inspector: as the title might indicate, allows you to browse the DOM Edit Cookies: change cookies on the fly Execute JS: ad-hoc Javascript execution IE Tab: render a page in IE Inspect This: brings the selected object into the DOM inspector JSView: display linked javascript and CSS LORI (Life of Request Info): shows how long it takes to render a page Measure IT: a popup ruler. URL Params: shows GET and POST variables Web Developer: a myriad of tools for the web developer | Firefox add-ons What Firefox add-ons do you use that are useful for programmers? | TITLE:
Firefox add-ons
QUESTION:
What Firefox add-ons do you use that are useful for programmers?
ANSWER:
I guess it's silly to mention Firebug -- doubt any of us could live without it. Other than that I use the following (only listing dev-related): Console 2: next-generation error console DOM inspector: as the title might indicate, allows you to browse the DOM Edit Cookies: change cookies on the fly Execute JS: ad-hoc Javascript execution IE Tab: render a page in IE Inspect This: brings the selected object into the DOM inspector JSView: display linked javascript and CSS LORI (Life of Request Info): shows how long it takes to render a page Measure IT: a popup ruler. URL Params: shows GET and POST variables Web Developer: a myriad of tools for the web developer | [
"firefox",
"add-on"
] | 20 | 18 | 1,997 | 19 | 0 | 2008-08-30T22:23:42.010000 | 2008-08-30T22:28:50.580000 |
36,417 | 36,444 | PHP best practices? | What is a good way to remove the code from display pages when developing with PHP. Often the pages I work on need to be editted by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code. I've tried moving blocks of code out into functions, so now there are functions spread out all throughout the HTML now. As some pages become more complex it becomes a program again, and processing POSTs are questionable. What can I be doing better in my PHP development? | You don't need a "system" to do templating. You can do it on your own by keeping presentation & logic separate. This way the designer can screw up the display, but not the logic behind it. Here's a simple example: Now here's the people.php file (which you give your designer): Person: | PHP best practices? What is a good way to remove the code from display pages when developing with PHP. Often the pages I work on need to be editted by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code. I've tried moving blocks of code out into functions, so now there are functions spread out all throughout the HTML now. As some pages become more complex it becomes a program again, and processing POSTs are questionable. What can I be doing better in my PHP development? | TITLE:
PHP best practices?
QUESTION:
What is a good way to remove the code from display pages when developing with PHP. Often the pages I work on need to be edited by an outside person. This person is often confused by lots of blocks of PHP, and also likes to break my code. I've tried moving blocks of code out into functions, so now there are functions spread out all throughout the HTML now. As some pages become more complex it becomes a program again, and processing POSTs is questionable. What can I be doing better in my PHP development?
ANSWER:
You don't need a "system" to do templating. You can do it on your own by keeping presentation & logic separate. This way the designer can screw up the display, but not the logic behind it. Here's a simple example: Now here's the people.php file (which you give your designer): Person: | [
"php"
] | 14 | 19 | 1,980 | 9 | 0 | 2008-08-30T22:27:15.653000 | 2008-08-30T22:53:41.207000 |
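The example code in the answer above was lost in extraction, but the principle it describes — keep the data-gathering logic in one file and hand the designer only a dumb template — is language-agnostic. Here is a minimal sketch of that separation in Python rather than the answer's PHP; the `get_people` function and `ROW` template are hypothetical stand-ins for the original's people.php example.

```python
from string import Template

# "Logic" layer: gather the data (hard-coded here; normally a database query
# or POST processing would live in this file, away from the designer).
def get_people():
    return [{"name": "Alice"}, {"name": "Bob"}]

# "Presentation" layer: the only thing the designer touches. In the PHP
# version this would be a separate people.php file of mostly plain HTML.
ROW = Template("<li>Person: $name</li>")

def render(people):
    items = "\n".join(ROW.substitute(p) for p in people)
    return f"<ul>\n{items}\n</ul>"

print(render(get_people()))
```

The designer can mangle the `ROW` markup however they like; the logic that produced the list is untouched.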
36,430 | 39,103 | What are the important Ruby commands? | I'm not sure of all of them, but what are the commands to do things like update Ruby, download a new gem, or update an existing gem? What other important things are there? Since it might matter, I'm running Windows. | Useful command: Rake In addition to the commands listed by Joseph Pecoraro, the 'rake' command is also pretty standard when working with Ruby. Rake makes it easy to automate (simple) tasks; like building a RubyGem or running your unit tests. With rake, the only important command to remember is 'rake -T', which shows a list of rake tasks available in the current directory. Updating a Ruby gem To get back to your specific question: To update a specific gem, you can do two things: simply update the gem: gem update This will update the gem to the latest version. Install a Ruby gem If you want to update to a specific version, you must install it: gem install -v You can leave out the -v options. Rubygems then installs the latest version. How to help yourself Two useful gem commands to remember are: gem help This shows how to get help with rubygems. gem help commands This shows all commands available to rubygems. From here you can get more specific help on a command by using gem help: gem help update | What are the important Ruby commands? I'm not sure of all of them, but what are the commands to do things like update Ruby, download a new gem, or update an existing gem? What other important things are there? Since it might matter, I'm running Windows. | TITLE:
What are the important Ruby commands?
QUESTION:
I'm not sure of all of them, but what are the commands to do things like update Ruby, download a new gem, or update an existing gem? What other important things are there? Since it might matter, I'm running Windows.
ANSWER:
Useful command: Rake In addition to the commands listed by Joseph Pecoraro, the 'rake' command is also pretty standard when working with Ruby. Rake makes it easy to automate (simple) tasks; like building a RubyGem or running your unit tests. With rake, the only important command to remember is 'rake -T', which shows a list of rake tasks available in the current directory. Updating a Ruby gem To get back to your specific question: To update a specific gem, you can do two things: simply update the gem: gem update This will update the gem to the latest version. Install a Ruby gem If you want to update to a specific version, you must install it: gem install -v You can leave out the -v options. Rubygems then installs the latest version. How to help yourself Two useful gem commands to remember are: gem help This shows how to get help with rubygems. gem help commands This shows all commands available to rubygems. From here you can get more specific help on a command by using gem help: gem help update | [
"ruby"
] | 8 | 11 | 11,010 | 6 | 0 | 2008-08-30T22:36:52.697000 | 2008-09-02T09:37:49.753000 |
36,498 | 36,500 | How do I Send Email from the Command Line? | I would like to quickly send email from the command line. I realize there are probably a number of different ways to do this. I'm looking for a simple way to do this from a linux terminal (likely a bash shell but anything should do) and an alternative way to do this on Windows. I want to be able to whip up an email right on the command line or have the flexibility to pipe the message into the command line program. How would you go about doing this? If you have small scripts that would be fine as well. | You can use mail: $mail -s You then type your message and end it with a line that has only a period. This signals you are done and sends the message. You can also pipe your email in from STDIN and it will be sent as the text of an email: $ | mail -s One small note with this approach - unless your computer is connected to the internet and your DNS settings are set properly, you won't be able to receive replies to your message. For a more robust command-line program you can link to your POP or IMAP email account, check out either pine or mutt. | How do I Send Email from the Command Line? I would like to quickly send email from the command line. I realize there are probably a number of different ways to do this. I'm looking for a simple way to do this from a linux terminal (likely a bash shell but anything should do) and an alternative way to do this on Windows. I want to be able to whip up an email right on the command line or have the flexibility to pipe the message into the command line program. How would you go about doing this? If you have small scripts that would be fine as well. | TITLE:
How do I Send Email from the Command Line?
QUESTION:
I would like to quickly send email from the command line. I realize there are probably a number of different ways to do this. I'm looking for a simple way to do this from a linux terminal (likely a bash shell but anything should do) and an alternative way to do this on Windows. I want to be able to whip up an email right on the command line or have the flexibility to pipe the message into the command line program. How would you go about doing this? If you have small scripts that would be fine as well.
ANSWER:
You can use mail: $mail -s You then type your message and end it with a line that has only a period. This signals you are done and sends the message. You can also pipe your email in from STDIN and it will be sent as the text of an email: $ | mail -s One small note with this approach - unless your computer is connected to the internet and your DNS settings are set properly, you won't be able to receive replies to your message. For a more robust command-line program you can link to your POP or IMAP email account, check out either pine or mutt. | [
"linux",
"email",
"command-line"
] | 21 | 11 | 21,827 | 8 | 0 | 2008-08-31T00:06:37.257000 | 2008-08-31T00:12:15.323000 |
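The answer covers the Unix `mail` command; when the "small script" the question mentions grows beyond a one-liner, Python's standard-library `email` and `smtplib` modules are a common alternative. This sketch only builds the message — the addresses are placeholders, and actually sending (the commented-out step) requires a reachable SMTP server.

```python
from email.message import EmailMessage
import smtplib  # only needed for the commented-out send step

def build_mail(sender, recipient, subject, body):
    # Assemble an RFC 5322 message; headers are set like dict entries.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

msg = build_mail("me@example.com", "you@example.com", "hello", "sent from a script")
print(msg["Subject"])

# To actually send (needs an SMTP server, e.g. a local sendmail/postfix):
# with smtplib.SMTP("localhost") as s:
#     s.send_message(msg)
```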
36,502 | 36,529 | How can I disable DLL Caching in Windows Vista via CMD? | I know Windows Vista (and XP) cache recently loaded DLL's in memory... How can this be disabled via the command prompt? | The only thing you can do is disable SuperFetch, which can be done from the command prompt with this command (there has to be a space between the = sign and disabled). sc config Superfetch start= disabled There is a myth out there that you can disable DLL caching, but that only worked for systems prior to Windows 2000. [ source ] | How can I disable DLL Caching in Windows Vista via CMD? I know Windows Vista (and XP) cache recently loaded DLL's in memory... How can this be disabled via the command prompt? | TITLE:
How can I disable DLL Caching in Windows Vista via CMD?
QUESTION:
I know Windows Vista (and XP) caches recently loaded DLLs in memory... How can this be disabled via the command prompt?
ANSWER:
The only thing you can do is disable SuperFetch, which can be done from the command prompt with this command (there has to be a space between the = sign and disabled). sc config Superfetch start= disabled There is a myth out there that you can disable DLL caching, but that only worked for systems prior to Windows 2000. [ source ] | [
"windows-vista",
"command-prompt"
] | 0 | 6 | 2,250 | 3 | 0 | 2008-08-31T00:18:55.993000 | 2008-08-31T01:04:14.653000 |
36,504 | 36,513 | Why functional languages? | I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? What do they do better? What are they worse at? What's the ideal functional programming application? | Functional languages use a different paradigm than imperative and object-oriented languages. They use side-effect-free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to). One of the biggest advantages with functional programming is that the order of execution of side-effect-free functions is not important. For example, in Erlang this is used to enable concurrency in a very transparent way. And because functions in functional languages behave very similar to mathematical functions it's easy to translate those into functional languages. In some cases, this can make code more readable. Traditionally, one of the big disadvantages of functional programming was also the lack of side effects. It's very difficult to write useful software without I/O, but I/O is hard to implement without side effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed-paradigm languages like F# or Scala this is easier. Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot functional programming features and you can do functional programming in Python too. I think the reasons for the popularity of functional programming is mostly because of two reasons: Concurrency is getting to be a real problem in normal programming, because we're getting more and more multiprocessor computers; and the languages are getting more accessible. | Why functional languages? I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? 
What do they do better? What are they worse at? What's the ideal functional programming application? | TITLE:
Why functional languages?
QUESTION:
I see a lot of talk on here about functional languages and stuff. Why would you use one over a "traditional" language? What do they do better? What are they worse at? What's the ideal functional programming application?
ANSWER:
Functional languages use a different paradigm than imperative and object-oriented languages. They use side-effect-free functions as a basic building block in the language. This enables lots of things and makes a lot of things more difficult (or in most cases different from what people are used to). One of the biggest advantages with functional programming is that the order of execution of side-effect-free functions is not important. For example, in Erlang this is used to enable concurrency in a very transparent way. And because functions in functional languages behave very similarly to mathematical functions it's easy to translate those into functional languages. In some cases, this can make code more readable. Traditionally, one of the big disadvantages of functional programming was also the lack of side effects. It's very difficult to write useful software without I/O, but I/O is hard to implement without side effects in functions. So most people never got more out of functional programming than calculating a single output from a single input. In modern mixed-paradigm languages like F# or Scala this is easier. Lots of modern languages have elements from functional programming languages. C# 3.0 has a lot of functional programming features and you can do functional programming in Python too. I think functional programming's current popularity comes down to two reasons: Concurrency is getting to be a real problem in normal programming, because we're getting more and more multiprocessor computers; and the languages are getting more accessible.
"programming-languages",
"functional-programming"
] | 346 | 223 | 191,584 | 47 | 0 | 2008-08-31T00:21:51.900000 | 2008-08-31T00:38:05.340000 |
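The answer's two key claims — that Python supports a functional style, and that side-effect-free functions can be evaluated in any order without changing the result — can be illustrated in a few lines. `square` and `is_even` are illustrative pure functions, not anything from the original post.

```python
from functools import reduce

# Pure functions: the output depends only on the input, no side effects.
def square(x):
    return x * x

def is_even(x):
    return x % 2 == 0

nums = range(1, 6)  # 1..5

# Because square and is_even are side-effect-free, map/filter may apply
# them in any order (or concurrently) and the result is identical —
# the property the answer says Erlang exploits for transparent concurrency.
evens_squared = list(map(square, filter(is_even, nums)))
total = reduce(lambda a, b: a + b, evens_squared, 0)
print(evens_squared, total)  # → [4, 16] 20
```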
36,515 | 42,792 | Fixed Legend in Google Maps Mashup | I have a page with a Google Maps mashup that has pushpins that are color-coded by day (Monday, Tuesday, etc.) The IFrame containing the map is dynamically sized, so it gets resized when the browser window is resized. I'd like to put a legend in the corner of the map window that tells the user what each color means. The Google Maps API includes a GScreenOverlay class that has the behavior that I want, but it only lets you specify an image to use as an overlay, and I'd prefer to use a DIV with text in it. What's the easiest way to position a DIV over the map window in (for example) the lower left corner that'll automatically stay in the same place relative to the corner when the browser window is resized? | You can add your own Custom Control and use it as a legend. This code will add a box 150w x 100h (Gray Border/ with White Background) and the words "Hello World" inside of it. You swap out the text for any HTML you would like in the legend. This will stay Anchored to the Top Right (G_ANCHOR_TOP_RIGHT) 10px down and 50px over of the map. function MyPane() {} MyPane.prototype = new GControl; MyPane.prototype.initialize = function(map) { var me = this; me.panel = document.createElement("div"); me.panel.style.width = "150px"; me.panel.style.height = "100px"; me.panel.style.border = "1px solid gray"; me.panel.style.background = "white"; me.panel.innerHTML = "Hello World!"; map.getContainer().appendChild(me.panel); return me.panel; };
MyPane.prototype.getDefaultPosition = function() { return new GControlPosition( G_ANCHOR_TOP_RIGHT, new GSize(10, 50)); //Should be _ and not _ };
MyPane.prototype.getPanel = function() { return me.panel; } map.addControl(new MyPane()); | Fixed Legend in Google Maps Mashup I have a page with a Google Maps mashup that has pushpins that are color-coded by day (Monday, Tuesday, etc.) The IFrame containing the map is dynamically sized, so it gets resized when the browser window is resized. I'd like to put a legend in the corner of the map window that tells the user what each color means. The Google Maps API includes a GScreenOverlay class that has the behavior that I want, but it only lets you specify an image to use as an overlay, and I'd prefer to use a DIV with text in it. What's the easiest way to position a DIV over the map window in (for example) the lower left corner that'll automatically stay in the same place relative to the corner when the browser window is resized? | TITLE:
Fixed Legend in Google Maps Mashup
QUESTION:
I have a page with a Google Maps mashup that has pushpins that are color-coded by day (Monday, Tuesday, etc.) The IFrame containing the map is dynamically sized, so it gets resized when the browser window is resized. I'd like to put a legend in the corner of the map window that tells the user what each color means. The Google Maps API includes a GScreenOverlay class that has the behavior that I want, but it only lets you specify an image to use as an overlay, and I'd prefer to use a DIV with text in it. What's the easiest way to position a DIV over the map window in (for example) the lower left corner that'll automatically stay in the same place relative to the corner when the browser window is resized?
ANSWER:
You can add your own Custom Control and use it as a legend. This code will add a box 150w x 100h (Gray Border/ with White Background) and the words "Hello World" inside of it. You swap out the text for any HTML you would like in the legend. This will stay Anchored to the Top Right (G_ANCHOR_TOP_RIGHT) 10px down and 50px over of the map. function MyPane() {} MyPane.prototype = new GControl; MyPane.prototype.initialize = function(map) { var me = this; me.panel = document.createElement("div"); me.panel.style.width = "150px"; me.panel.style.height = "100px"; me.panel.style.border = "1px solid gray"; me.panel.style.background = "white"; me.panel.innerHTML = "Hello World!"; map.getContainer().appendChild(me.panel); return me.panel; };
MyPane.prototype.getDefaultPosition = function() { return new GControlPosition( G_ANCHOR_TOP_RIGHT, new GSize(10, 50)); //Should be _ and not _ };
MyPane.prototype.getPanel = function() { return me.panel; } map.addControl(new MyPane()); | [
"javascript",
"html",
"google-maps",
"google-maps-api-2"
] | 9 | 10 | 15,031 | 2 | 0 | 2008-08-31T00:41:03.070000 | 2008-09-03T23:06:29.817000 |
36,533 | 36,684 | Vista speech recognition in multiple languages | my primary language is spanish, but I use all my software in english, including windows; however I'd like to use speech recognition in spanish. Do you know if there's a way to use vista's speech recognition in other language than the primary os language? | Citation from Vista speech recognition blog: In Windows Vista, Windows Speech Recognition works in the current language of the OS. That means that in order to use another language for speech recognition, you have to have the appropriate language pack installed. Language packs are available as free downloads through Windows Update for the Ultimate and Enterprise versions of Vista. Once you have the language installed, you’ll need to change the display language of the OS to the language you want to use. Both of these are options on the “Regional and Language Options” control panel. You can look in help for “Install a display language” or “Change the display language”. | Vista speech recognition in multiple languages my primary language is spanish, but I use all my software in english, including windows; however I'd like to use speech recognition in spanish. Do you know if there's a way to use vista's speech recognition in other language than the primary os language? | TITLE:
Vista speech recognition in multiple languages
QUESTION:
my primary language is spanish, but I use all my software in english, including windows; however I'd like to use speech recognition in spanish. Do you know if there's a way to use vista's speech recognition in other language than the primary os language?
ANSWER:
Citation from Vista speech recognition blog: In Windows Vista, Windows Speech Recognition works in the current language of the OS. That means that in order to use another language for speech recognition, you have to have the appropriate language pack installed. Language packs are available as free downloads through Windows Update for the Ultimate and Enterprise versions of Vista. Once you have the language installed, you’ll need to change the display language of the OS to the language you want to use. Both of these are options on the “Regional and Language Options” control panel. You can look in help for “Install a display language” or “Change the display language”. | [
"windows-vista",
"nlp",
"speech-recognition",
"multilingual"
] | 3 | 8 | 5,647 | 6 | 0 | 2008-08-31T01:08:48.493000 | 2008-08-31T08:11:57.620000 |
36,534 | 36,545 | Website Hardware Scaling | So I was listening to the latest Stackoverflow podcast ( episode 19 ), and Jeff and Joel talked a bit about scaling server hardware as a website grows. From what Joel was saying, the first few steps are pretty standard: One server running both the webserver and the database (the current Stackoverflow setup) One webserver and one database server Two load-balanced webservers and one database server They didn't talk much about what comes next though. Do you add more webservers? Another database server? Replicate this three-machine cluster in a different datacenter for redundancy? Where does a web startup go from here in the hardware department? | A reasonable setup supporting an "average" web application might evolve as follows: Single combined application/database server Separate database on a different machine Second application server with DNS round-robin (poor man's load balancing) or, e.g. Perlbal Second, replicated database server (for read loads, requires some application logic changes so eligible database reads go to a slave) At this point, evaluating the current state of affairs would help to determine a better scaling path. For example, if read load is high and content doesn't change too often, it might be better to emphasise caching and introduce dedicated front-end caches, e.g. Squid to avoid un-needed database reads, although you will need to consider how to maintain cache coherency, typically in the application. On the other hand, if content changes reasonably often, then you will probably prefer a more spread-out solution; introduce a few more application servers and database slaves to help mitigate the effects, and use object caching, such as memcached to avoid hitting the database for the less volatile content. 
For most sites, this is probably enough, although if you do become a global phenomenon, then you'll probably want to start considering having hardware in regional data centres, and using tricks such as geographic load balancing to direct visitors to the closest "cluster". By that point, you'll probably be in a position to hire engineers who can really fine-tune things. Probably the most valuable scaling advice I can think of would be to avoid worrying about it all far too soon; concentrate on developing a service people are going to want to use, and making the application reasonably robust. Some easy early optimisations are to make sure your database design is fairly solid, and that indexes are set up so you're not doing anything painfully crazy; also, make sure the application emits cache-control headers that direct browsers on how to cache the data. Doing this sort of work early on in the design can yield benefits later, especially when you don't have to rework the entire thing to deal with cache coherency issues. The second most valuable piece of advice I want to put across is that you shouldn't assume what works for some other web site will work for you; check your logs, run some analysis on your traffic and profile your application - see where your bottlenecks are and resolve them. | Website Hardware Scaling So I was listening to the latest Stackoverflow podcast ( episode 19 ), and Jeff and Joel talked a bit about scaling server hardware as a website grows. From what Joel was saying, the first few steps are pretty standard: One server running both the webserver and the database (the current Stackoverflow setup) One webserver and one database server Two load-balanced webservers and one database server They didn't talk much about what comes next though. Do you add more webservers? Another database server? Replicate this three-machine cluster in a different datacenter for redundancy? Where does a web startup go from here in the hardware department? | TITLE:
Website Hardware Scaling
QUESTION:
So I was listening to the latest Stackoverflow podcast ( episode 19 ), and Jeff and Joel talked a bit about scaling server hardware as a website grows. From what Joel was saying, the first few steps are pretty standard: One server running both the webserver and the database (the current Stackoverflow setup) One webserver and one database server Two load-balanced webservers and one database server They didn't talk much about what comes next though. Do you add more webservers? Another database server? Replicate this three-machine cluster in a different datacenter for redundancy? Where does a web startup go from here in the hardware department?
ANSWER:
A reasonable setup supporting an "average" web application might evolve as follows: Single combined application/database server Separate database on a different machine Second application server with DNS round-robin (poor man's load balancing) or, e.g. Perlbal Second, replicated database server (for read loads, requires some application logic changes so eligible database reads go to a slave) At this point, evaluating the current state of affairs would help to determine a better scaling path. For example, if read load is high and content doesn't change too often, it might be better to emphasise caching and introduce dedicated front-end caches, e.g. Squid to avoid un-needed database reads, although you will need to consider how to maintain cache coherency, typically in the application. On the other hand, if content changes reasonably often, then you will probably prefer a more spread-out solution; introduce a few more application servers and database slaves to help mitigate the effects, and use object caching, such as memcached to avoid hitting the database for the less volatile content. For most sites, this is probably enough, although if you do become a global phenomenon, then you'll probably want to start considering having hardware in regional data centres, and using tricks such as geographic load balancing to direct visitors to the closest "cluster". By that point, you'll probably be in a position to hire engineers who can really fine-tune things. Probably the most valuable scaling advice I can think of would be to avoid worrying about it all far too soon; concentrate on developing a service people are going to want to use, and making the application reasonably robust. Some easy early optimisations are to make sure your database design is fairly solid, and that indexes are set up so you're not doing anything painfully crazy; also, make sure the application emits cache-control headers that direct browsers on how to cache the data. 
Doing this sort of work early on in the design can yield benefits later, especially when you don't have to rework the entire thing to deal with cache coherency issues. The second most valuable piece of advice I want to put across is that you shouldn't assume what works for some other web site will work for you; check your logs, run some analysis on your traffic and profile your application - see where your bottlenecks are and resolve them. | [
"hardware",
"scaling"
] | 3 | 10 | 843 | 6 | 0 | 2008-08-31T01:11:47.740000 | 2008-08-31T01:36:08.150000 |
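The closing advice — "make sure the application emits cache-control headers that direct browsers on how to cache the data" — is cheap to act on early. A hedged sketch of a helper that produces such a header; the function name and defaults are illustrative, not from the answer, and any web framework would let you attach the resulting header to a response.

```python
def cache_headers(max_age=3600, public=True):
    # Tell browsers and intermediary proxies how long (in seconds) they may
    # reuse a response, sparing the application and database tiers repeat hits.
    scope = "public" if public else "private"
    return {"Cache-Control": f"{scope}, max-age={max_age}"}

print(cache_headers())  # → {'Cache-Control': 'public, max-age=3600'}
```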
36,563 | 36,577 | Is there an easy way to do transparent forms in a VB .NET app? | I'm writing a simple app that's going to have a tiny form sitting in one corner of the screen, updating itself. I'd really love for that form to be transparent and to have the transparency be user-configurable. Is there any easy way to achieve this? | You could try using the Opacity property of the Form. Here's the relevant snippet from the MSDN page: private Sub CreateMyOpaqueForm() ' Create a new form. Dim form2 As New Form() ' Set the text displayed in the caption. form2.Text = "My Form" ' Set the opacity to 75%. form2.Opacity = 0.75 ' Size the form to be 300 pixels in height and width. form2.Size = New Size(300, 300) ' Display the form in the center of the screen. form2.StartPosition = FormStartPosition.CenterScreen
' Display the form as a modal dialog box. form2.ShowDialog() End Sub | Is there an easy way to do transparent forms in a VB .NET app? I'm writing a simple app that's going to have a tiny form sitting in one corner of the screen, updating itself. I'd really love for that form to be transparent and to have the transparency be user-configurable. Is there any easy way to achieve this? | TITLE:
Is there an easy way to do transparent forms in a VB .NET app?
QUESTION:
I'm writing a simple app that's going to have a tiny form sitting in one corner of the screen, updating itself. I'd really love for that form to be transparent and to have the transparency be user-configurable. Is there any easy way to achieve this?
ANSWER:
You could try using the Opacity property of the Form. Here's the relevant snippet from the MSDN page: Private Sub CreateMyOpaqueForm() ' Create a new form. Dim form2 As New Form() ' Set the text displayed in the caption. form2.Text = "My Form" ' Set the opacity to 75%. form2.Opacity = 0.75 ' Size the form to be 300 pixels in height and width. form2.Size = New Size(300, 300) ' Display the form in the center of the screen. form2.StartPosition = FormStartPosition.CenterScreen
' Display the form as a modal dialog box. form2.ShowDialog() End Sub | [
"vb.net",
"transparency"
] | 2 | 4 | 1,966 | 4 | 0 | 2008-08-31T02:05:03.587000 | 2008-08-31T02:29:41.843000 |
36,567 | 38,203 | Beginner Digital Synth | I'm looking into writing a audio syntesizer in Java, and was wondering if anybody has any advice or good resources for writing such a program. I'm looking for info on generating raw sound waves, how to output them into a usable form (playing over speakers), as well as general theory on the topic. Thanks guys. | This problem is basically about mapping functions to arrays of numbers. A language that supports first-class functions would come in really handy here. Check out http://www.harmony-central.com/Computer/Programming and http://www.developer.com/java/other/article.php/3071021 for some Java-related info. If you don't know the basic concepts of encoding sound data, then read http://en.wikipedia.org/wiki/Sampling_rate The canonical WAVE format is very simple, see http://www.lightlink.com/tjweber/StripWav/Canon.html. A header (first 44 bytes) + the wave-data. You don't need any library to implement that. In C/C++, the corresponding data structure would look something like this: typedef struct _WAVstruct { char headertag[4]; unsigned int remnantlength; char fileid[4];
char fmtchunktag[4]; unsigned int fmtlength; unsigned short fmttag; unsigned short channels; unsigned int samplerate; unsigned int bypse; unsigned short ba; unsigned short bipsa;
char datatag[4]; unsigned int datalength;
void* data; //<--- that's where the raw sound-data goes }* WAVstruct; I'm not sure about Java. I guess you'll have to substitute "struct" with "class" and "void* data" with "char[] data" or "short[] data" or "int[] data", corresponding to the number of bits per sample, as defined in the field bipsa. To fill it with data, you would use something like that in C/C++: int data2WAVstruct(unsigned short channels, unsigned short bipsa, unsigned int samplerate, unsigned int datalength, void* data, WAVstruct result) { result->headertag[0] = 'R'; result->headertag[1] = 'I'; result->headertag[2] = 'F'; result->headertag[3] = 'F'; result->remnantlength = 44 + datalength - 8; result->fileid[0] = 'W'; result->fileid[1] = 'A'; result->fileid[2] = 'V'; result->fileid[3] = 'E';
result->fmtchunktag[0] = 'f'; result->fmtchunktag[1] = 'm'; result->fmtchunktag[2] = 't'; result->fmtchunktag[3] = ' '; result->fmtlength = 0x00000010; result->fmttag = 1; result->channels = channels; result->samplerate = samplerate; result->bipsa = bipsa; result->ba = channels*bipsa / 8; result->bypse = samplerate*result->ba;
result->datatag[0] = 'd'; result->datatag[1] = 'a'; result->datatag[2] = 't'; result->datatag[3] = 'a'; result->datalength = datalength;
result->data = data; // <--- that's where the data comes in
return 0; // an error code, not implemented, yet...; in Java: return result } Again, I'm not sure about Java but the conversion should be straightforward if you convert the void-pointer to an array corresponding to the bitrate. Then simply write the entire structure to a file to get a playable wave file. | Beginner Digital Synth I'm looking into writing a audio syntesizer in Java, and was wondering if anybody has any advice or good resources for writing such a program. I'm looking for info on generating raw sound waves, how to output them into a usable form (playing over speakers), as well as general theory on the topic. Thanks guys. | TITLE:
Beginner Digital Synth
QUESTION:
I'm looking into writing a audio syntesizer in Java, and was wondering if anybody has any advice or good resources for writing such a program. I'm looking for info on generating raw sound waves, how to output them into a usable form (playing over speakers), as well as general theory on the topic. Thanks guys.
ANSWER:
This problem is basically about mapping functions to arrays of numbers. A language that supports first-class functions would come in really handy here. Check out http://www.harmony-central.com/Computer/Programming and http://www.developer.com/java/other/article.php/3071021 for some Java-related info. If you don't know the basic concepts of encoding sound data, then read http://en.wikipedia.org/wiki/Sampling_rate The canonical WAVE format is very simple, see http://www.lightlink.com/tjweber/StripWav/Canon.html. A header (first 44 bytes) + the wave-data. You don't need any library to implement that. In C/C++, the corresponding data structure would look something like this: typedef struct _WAVstruct { char headertag[4]; unsigned int remnantlength; char fileid[4];
char fmtchunktag[4]; unsigned int fmtlength; unsigned short fmttag; unsigned short channels; unsigned int samplerate; unsigned int bypse; unsigned short ba; unsigned short bipsa;
char datatag[4]; unsigned int datalength;
void* data; //<--- that's where the raw sound-data goes }* WAVstruct; I'm not sure about Java. I guess you'll have to substitute "struct" with "class" and "void* data" with "char[] data" or "short[] data" or "int[] data", corresponding to the number of bits per sample, as defined in the field bipsa. To fill it with data, you would use something like that in C/C++: int data2WAVstruct(unsigned short channels, unsigned short bipsa, unsigned int samplerate, unsigned int datalength, void* data, WAVstruct result) { result->headertag[0] = 'R'; result->headertag[1] = 'I'; result->headertag[2] = 'F'; result->headertag[3] = 'F'; result->remnantlength = 44 + datalength - 8; result->fileid[0] = 'W'; result->fileid[1] = 'A'; result->fileid[2] = 'V'; result->fileid[3] = 'E';
result->fmtchunktag[0] = 'f'; result->fmtchunktag[1] = 'm'; result->fmtchunktag[2] = 't'; result->fmtchunktag[3] = ' '; result->fmtlength = 0x00000010; result->fmttag = 1; result->channels = channels; result->samplerate = samplerate; result->bipsa = bipsa; result->ba = channels*bipsa / 8; result->bypse = samplerate*result->ba;
result->datatag[0] = 'd'; result->datatag[1] = 'a'; result->datatag[2] = 't'; result->datatag[3] = 'a'; result->datalength = datalength;
result->data = data; // <--- that's where the data comes in
return 0; // an error code, not implemented, yet...; in Java: return result } Again, I'm not sure about Java but the conversion should be straightforward if you convert the void-pointer to an array corresponding to the bitrate. Then simply write the entire structure to a file to get a playable wave file. | [
"java",
"synthesizer"
] | 5 | 6 | 3,293 | 4 | 0 | 2008-08-31T02:08:47.867000 | 2008-09-01T17:48:33.683000 |
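As a cross-check of the canonical header described in the answer above, here is a sketch that packs the same 44-byte layout field for field (shown in Python rather than the Java the asker wants; the parameter values are made up) and then lets the stdlib wave module confirm the result parses as a WAV file:

```python
import struct

def wav_header(channels, bipsa, samplerate, datalength):
    """Pack the 44-byte canonical WAV header, mirroring WAVstruct field order."""
    ba = channels * bipsa // 8      # block align ("ba" in the struct)
    bypse = samplerate * ba         # bytes per second ("bypse")
    return struct.pack(
        "<4sI4s4sIHHIIHH4sI",
        b"RIFF", 44 + datalength - 8, b"WAVE",   # headertag, remnantlength, fileid
        b"fmt ", 0x10, 1,                        # fmtchunktag, fmtlength, fmttag (PCM)
        channels, samplerate, bypse, ba, bipsa,
        b"data", datalength,                     # datatag, datalength
    )

data = bytes(1000)  # 1000 samples of 8-bit mono silence
header = wav_header(channels=1, bipsa=8, samplerate=8000, datalength=len(data))
assert len(header) == 44
```

Writing header + data to a file gives a playable (if silent) wave file, which matches the answer's point that no library is needed for the format itself.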
36,568 | 36,572 | Automated Builds | I currently use subversion for my version control via AnkhSVN and Visual Studio. I recently started using Tree Surgeon to set up my projects. It creates a build script automatically using NAnt. I would like to be able to regularly automate builds for projects within SVN. I like the idea of doing a build on every check-in, but nightly builds would work as well. I would give more information or more of my thoughts but figured I would leave it open and see what the SO community has to say. | You could use CruiseControl.Net, which can do a build on every check-in, nightly builds, or however you want to do it. A quick Google search suggests CC.Net has some integration with NAnt already. | Automated Builds I currently use subversion for my version control via AnkhSVN and Visual Studio. I recently started using Tree Surgeon to set up my projects. It creates a build script automatically using NAnt. I would like to be able to regularly automate builds for projects within SVN. I like the idea of doing a build on every check-in, but nightly builds would work as well. I would give more information or more of my thoughts but figured I would leave it open and see what the SO community has to say. | TITLE:
Automated Builds
QUESTION:
I currently use subversion for my version control via AnkhSVN and Visual Studio. I recently started using Tree Surgeon to set up my projects. It creates a build script automatically using NAnt. I would like to be able to regularly automate builds for projects within SVN. I like the idea of doing a build on every check-in, but nightly builds would work as well. I would give more information or more of my thoughts but figured I would leave it open and see what the SO community has to say.
ANSWER:
You could use CruiseControl.Net, which can do a build on every check-in, nightly builds, or however you want to do it. A quick Google search suggests CC.Net has some integration with NAnt already. | [
"svn",
"build-automation",
"nant"
] | 8 | 5 | 2,588 | 10 | 0 | 2008-08-31T02:09:49.800000 | 2008-08-31T02:14:33.080000 |
36,575 | 36,591 | Add service reference to Amazon service fails | Add service reference to Amazon service fails, saying "Could not load file or assembly 'System.Core, Version=3.5.0.0,...' or one or more of its dependencies. The module was expected to contain an assembly manifest." This is in VS 2008; I haven't installed SP1 on this machine yet. Any ideas? | This can happen if ASP.NET isn't installed. Go to Add/Remove Windows Components and look under IIS; make sure that ASP.NET is checked (meaning that it's installed). That should clear up your problem! | Add service reference to Amazon service fails Add service reference to Amazon service fails, saying "Could not load file or assembly 'System.Core, Version=3.5.0.0,...' or one or more of its dependencies. The module was expected to contain an assembly manifest." This is in VS 2008; I haven't installed SP1 on this machine yet. Any ideas? | TITLE:
Add service reference to Amazon service fails
QUESTION:
Add service reference to Amazon service fails, saying "Could not load file or assembly 'System.Core, Version=3.5.0.0,...' or one or more of its dependencies. The module was expected to contain an assembly manifest." This is in VS 2008; I haven't installed SP1 on this machine yet. Any ideas?
ANSWER:
This can happen if ASP.NET isn't installed. Go to Add/Remove Windows Components and look under IIS; make sure that ASP.NET is checked (meaning that it's installed). That should clear up your problem! | [
"asp.net",
"amazon-web-services"
] | 1 | 1 | 194 | 1 | 0 | 2008-08-31T02:23:57.130000 | 2008-08-31T02:50:00.030000 |
36,600 | 36,615 | Viewing DirectX application remotely | We're working on an application that displays information through a Direct3D visualisation. A late client request is the ability to view this application via some Remote Desktop solution. Has anyone done anything similar? What options are available / unavailable? I'm thinking RDC, VNC, Citrix... Any advice? | I think you can still use all of the normal D3D tools, but you won't be able to render to a surface associated with the screen. You'll have to render to a DIB (or some such) and Blt it with GDI to a normal window HDC. RDC/VNC/Citrix should all work with this technique. Performance will definitely suffer - but that's going to be the case over remote desktop anyway. In fact, if I were you, I would mock up a VERY simple prototype and demonstrate the performance before committing to it. Good luck! | Viewing DirectX application remotely We're working on an application that displays information through a Direct3D visualisation. A late client request is the ability to view this application via some Remote Desktop solution. Has anyone done anything similar? What options are available / unavailable? I'm thinking RDC, VNC, Citrix... Any advice? | TITLE:
Viewing DirectX application remotely
QUESTION:
We're working on an application that displays information through a Direct3D visualisation. A late client request is the ability to view this application via some Remote Desktop solution. Has anyone done anything similar? What options are available / unavailable? I'm thinking RDC, VNC, Citrix... Any advice?
ANSWER:
I think you can still use all of the normal D3D tools, but you won't be able to render to a surface associated with the screen. You'll have to render to a DIB (or some such) and Blt it with GDI to a normal window HDC. RDC/VNC/Citrix should all work with this technique. Performance will definitely suffer - but that's going to be the case over remote desktop anyway. In fact, if I were you, I would mock up a VERY simple prototype and demonstrate the performance before committing to it. Good luck! | [
"directx",
"remote-desktop"
] | 6 | 2 | 3,431 | 3 | 0 | 2008-08-31T03:05:52.457000 | 2008-08-31T03:39:53.280000 |
36,605 | 36,612 | IE 6 CSS Hover non Anchor Tag | What is the simplest and most elegant way to simulate the hover pseudo-class for non-Anchor tags in IE6? I am specifically trying to change the cursor in this instance to that of a pointer. | I would say that the simplest method would be to add onmouseover/out Javascript functions. | IE 6 CSS Hover non Anchor Tag What is the simplest and most elegant way to simulate the hover pseudo-class for non-Anchor tags in IE6? I am specifically trying to change the cursor in this instance to that of a pointer. | TITLE:
IE 6 CSS Hover non Anchor Tag
QUESTION:
What is the simplest and most elegant way to simulate the hover pseudo-class for non-Anchor tags in IE6? I am specifically trying to change the cursor in this instance to that of a pointer.
ANSWER:
I would say that the simplest method would be to add onmouseover/out JavaScript functions. | [
"css",
"internet-explorer-6"
] | 8 | 4 | 6,764 | 8 | 0 | 2008-08-31T03:14:56.727000 | 2008-08-31T03:30:09.810000 |
36,608 | 36,618 | How can I count the number of records that have a unique value in a particular field in ROR? | I have a record set that includes a date field, and want to determine how many unique dates are represented in the record set. Something like: Record.find(:all).date.unique.count but of course, that doesn't seem to work. | What you're going for is the following SQL: SELECT COUNT(DISTINCT date) FROM records ActiveRecord has this built in: Record.count('date',:distinct => true) | How can I count the number of records that have a unique value in a particular field in ROR? I have a record set that includes a date field, and want to determine how many unique dates are represented in the record set. Something like: Record.find(:all).date.unique.count but of course, that doesn't seem to work. | TITLE:
How can I count the number of records that have a unique value in a particular field in ROR?
QUESTION:
I have a record set that includes a date field, and want to determine how many unique dates are represented in the record set. Something like: Record.find(:all).date.unique.count but of course, that doesn't seem to work.
ANSWER:
What you're going for is the following SQL: SELECT COUNT(DISTINCT date) FROM records ActiveRecord has this built in: Record.count('date', :distinct => true) | [
"ruby-on-rails",
"ruby",
"activerecord"
] | 69 | 85 | 87,269 | 7 | 0 | 2008-08-31T03:19:33.157000 | 2008-08-31T03:52:26.200000 |
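The ActiveRecord call in the answer above simply generates the quoted SQL; as a standalone illustration of what COUNT(DISTINCT date) returns (the table name and dates here are invented), the same query can be run through Python's sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, date TEXT)")
conn.executemany(
    "INSERT INTO records (date) VALUES (?)",
    [("2008-08-29",), ("2008-08-30",), ("2008-08-30",), ("2008-08-31",)],
)

# The SQL that Record.count('date', :distinct => true) boils down to
(unique_dates,) = conn.execute(
    "SELECT COUNT(DISTINCT date) FROM records"
).fetchone()
print(unique_dates)  # 3 distinct dates among the 4 rows
```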
36,621 | 36,705 | How to catch undefined functions with set_error_handler in PHP | I'm taking the leap: my PHP scripts will ALL fail gracefully! At least, that's what I'm hoping for... I don't want to wrap (practically) every single line in try...catch statements, so I think my best bet is to make a custom error handler for the beginning of my files. I'm testing it out on a practice page: function customError($level,$message,$file,$line,$context) { echo "Sorry, an error has occurred on line $line. "; echo "The function that caused the error says $message. "; die(); }
set_error_handler("customError");
echo($imAFakeVariable); This works fine, returning: Sorry, an error has occurred on line 17. The function that caused the error says Undefined variable: imAFakeVariable. However, this setup doesn't work for undefined functions. function customError($level,$message,$file,$line,$context) { echo "Sorry, an error has occurred on line $line. "; echo "The function that caused the error says $message. "; die(); }
set_error_handler("customError");
imAFakeFunction(); This returns: Fatal error: Call to undefined function: imafakefunction() in /Library/WebServer/Documents/experimental/errorhandle.php on line 17 Why isn't my custom error handler catching undefined functions? Are there other problems that this will cause? | set_error_handler is designed to handle errors with codes of: E_USER_ERROR | E_USER_WARNING | E_USER_NOTICE. This is because set_error_handler is meant to be a method of reporting errors thrown by the user error function trigger_error. However, I did find this comment in the manual that may help you: "The following error types cannot be handled with a user defined function: E_ERROR, E_PARSE, E_CORE_ERROR, E_CORE_WARNING, E_COMPILE_ERROR, E_COMPILE_WARNING, and most of E_STRICT raised in the file where set_error_handler() is called." This is not exactly true. set_error_handler() can't handle them, but ob_start() can handle at least E_ERROR. <?php function error_handler($output) { $error = error_get_last(); $output = ""; foreach ($error as $info => $string) $output .= "{$info}: {$string}\n"; return $output; }
ob_start('error_handler');
will_this_undefined_function_raise_an_error();?> Really though these errors should be silently reported in a file, for example. Hopefully you won't have many E_PARSE errors in your project! :-) As for general error reporting, stick with Exceptions (I find it helpful to make them tie in with my MVC system). You can build a pretty versatile Exception to provide options via buttons and add plenty of description to let the user know what's wrong. | How to catch undefined functions with set_error_handler in PHP I'm taking the leap: my PHP scripts will ALL fail gracefully! At least, that's what I'm hoping for... I don't want to wrap (practically) every single line in try...catch statements, so I think my best bet is to make a custom error handler for the beginning of my files. I'm testing it out on a practice page: function customError($level,$message,$file,$line,$context) { echo "Sorry, an error has occurred on line $line. "; echo "The function that caused the error says $message. "; die(); }
set_error_handler("customError");
echo($imAFakeVariable); This works fine, returning: Sorry, an error has occurred on line 17. The function that caused the error says Undefined variable: imAFakeVariable. However, this setup doesn't work for undefined functions. function customError($level,$message,$file,$line,$context) { echo "Sorry, an error has occurred on line $line. "; echo "The function that caused the error says $message. "; die(); }
set_error_handler("customError");
imAFakeFunction(); This returns: Fatal error: Call to undefined function: imafakefunction() in /Library/WebServer/Documents/experimental/errorhandle.php on line 17 Why isn't my custom error handler catching undefined functions? Are there other problems that this will cause? | TITLE:
How to catch undefined functions with set_error_handler in PHP
QUESTION:
I'm taking the leap: my PHP scripts will ALL fail gracefully! At least, that's what I'm hoping for... I don't want to wrap (practically) every single line in try...catch statements, so I think my best bet is to make a custom error handler for the beginning of my files. I'm testing it out on a practice page: function customError($level,$message,$file,$line,$context) { echo "Sorry, an error has occurred on line $line. "; echo "The function that caused the error says $message. "; die(); }
set_error_handler("customError");
echo($imAFakeVariable); This works fine, returning: Sorry, an error has occurred on line 17. The function that caused the error says Undefined variable: imAFakeVariable. However, this setup doesn't work for undefined functions. function customError($level,$message,$file,$line,$context) { echo "Sorry, an error has occurred on line $line. "; echo "The function that caused the error says $message. "; die(); }
set_error_handler("customError");
imAFakeFunction(); This returns: Fatal error: Call to undefined function: imafakefunction() in /Library/WebServer/Documents/experimental/errorhandle.php on line 17 Why isn't my custom error handler catching undefined functions? Are there other problems that this will cause?
ANSWER:
set_error_handler is designed to handle errors with codes of: E_USER_ERROR | E_USER_WARNING | E_USER_NOTICE. This is because set_error_handler is meant to be a method of reporting errors thrown by the user error function trigger_error. However, I did find this comment in the manual that may help you: "The following error types cannot be handled with a user defined function: E_ERROR, E_PARSE, E_CORE_ERROR, E_CORE_WARNING, E_COMPILE_ERROR, E_COMPILE_WARNING, and most of E_STRICT raised in the file where set_error_handler() is called." This is not exactly true. set_error_handler() can't handle them, but ob_start() can handle at least E_ERROR. <?php function error_handler($output) { $error = error_get_last(); $output = ""; foreach ($error as $info => $string) $output .= "{$info}: {$string}\n"; return $output; }
ob_start('error_handler');
will_this_undefined_function_raise_an_error();?> Really though these errors should be silently reported in a file, for example. Hopefully you won't have many E_PARSE errors in your project!:-) As for general error reporting, stick with Exceptions (I find it helpful to make them tie in with my MVC system). You can build a pretty versatile Exception to provide options via buttons and add plenty of description to let the user know what's wrong. | [
"php"
] | 13 | 15 | 12,204 | 7 | 0 | 2008-08-31T03:57:29.947000 | 2008-08-31T09:23:12.473000 |
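The contrast with PHP is worth spelling out: in Python (used here purely for illustration; the function names are invented) a call to an undefined function raises an ordinary, catchable NameError, which is exactly the behaviour the asker wishes PHP's fatal error had:

```python
def custom_error(exc):
    """Format an exception much like the asker's customError handler does."""
    tb = exc.__traceback__
    line = tb.tb_lineno if tb else "?"
    return (f"Sorry, an error has occurred on line {line}. "
            f"The interpreter says: {exc}")

try:
    im_a_fake_function()      # undefined, like imAFakeFunction()
except NameError as e:        # catchable here, unlike PHP's E_ERROR
    message = custom_error(e)

print(message)
```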
36,646 | 36,687 | Do you use Phing? | Does anyone use Phing to deploy PHP applications, and if so how do you use it? We currently have a hand-written "setup" script that we run whenever we deploy a new instance of our project. We just check out from SVN and run it. It sets some basic configuration variables, installs or reloads the database, and generates a v-host for the site instance. I have often thought that maybe we should be using Phing. I haven't used ant much, so I don't have a real sense of what Phing is supposed to do other than script the copying of files from one place to another much as our setup script does. What are some more advanced uses that you can give examples of to help me understand why we would or would not want to integrate Phing into our process? | From Federico Cargnelutti's blog post: Features include file transformations (e.g. token replacement, XSLT transformation, Smarty template transformations), file system operations, interactive build support, SQL execution, CVS operations, tools for creating PEAR packages, and much more. Of course you could write custom scripts for all of the above. However, using a specialized build tool like Phing gives you a number of benefits. You'll be using a proven framework so instead of having to worry about setting up "infrastructure" you can focus on the code you need to write. Using Phing will also make it easier for when new members join your team, they'll be able to understand what is going on if they've used Phing (or Ant, which is what Phing is based on) before. | Do you use Phing? Does anyone use Phing to deploy PHP applications, and if so how do you use it? We currently have a hand-written "setup" script that we run whenever we deploy a new instance of our project. We just check out from SVN and run it. It sets some basic configuration variables, installs or reloads the database, and generates a v-host for the site instance. I have often thought that maybe we should be using Phing. 
I haven't used Ant much, so I don't have a real sense of what Phing is supposed to do other than script the copying of files from one place to another much as our setup script does. What are some more advanced uses that you can give examples of to help me understand why we would or would not want to integrate Phing into our process? | TITLE:
Do you use Phing?
QUESTION:
Does anyone use Phing to deploy PHP applications, and if so how do you use it? We currently have a hand-written "setup" script that we run whenever we deploy a new instance of our project. We just check out from SVN and run it. It sets some basic configuration variables, installs or reloads the database, and generates a v-host for the site instance. I have often thought that maybe we should be using Phing. I haven't used Ant much, so I don't have a real sense of what Phing is supposed to do other than script the copying of files from one place to another much as our setup script does. What are some more advanced uses that you can give examples of to help me understand why we would or would not want to integrate Phing into our process?
ANSWER:
From Federico Cargnelutti's blog post: Features include file transformations (e.g. token replacement, XSLT transformation, Smarty template transformations), file system operations, interactive build support, SQL execution, CVS operations, tools for creating PEAR packages, and much more. Of course you could write custom scripts for all of the above. However, using a specialized build tool like Phing gives you a number of benefits. You'll be using a proven framework, so instead of having to worry about setting up "infrastructure" you can focus on the code you need to write. Using Phing will also make things easier when new members join your team: they'll be able to understand what is going on if they've used Phing (or Ant, which Phing is based on) before. | [
"php",
"deployment",
"build-process",
"build-automation",
"phing"
] | 25 | 16 | 7,535 | 5 | 0 | 2008-08-31T05:04:34.950000 | 2008-08-31T08:22:20.370000 |
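To make the feature list above concrete, a build file for the setup described in the question might look roughly like this (a hypothetical sketch: the target names, paths, and token values are invented; the element names follow Phing's Ant-style task vocabulary):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- build.xml: hypothetical minimal Phing deploy script -->
<project name="myapp" default="deploy">
    <!-- token replacement: fill per-environment values into a config template -->
    <target name="configure">
        <copy file="config.tpl.php" tofile="config.php">
            <filterchain>
                <replacetokens begintoken="##" endtoken="##">
                    <token key="DBHOST" value="localhost"/>
                </replacetokens>
            </filterchain>
        </copy>
    </target>

    <!-- copy the checked-out source into the vhost directory -->
    <target name="deploy" depends="configure">
        <copy todir="/var/www/myapp">
            <fileset dir="./src"/>
        </copy>
    </target>
</project>
```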
36,647 | 36,648 | Unit tests in Python | Does Python have a unit testing framework compatible with the standard xUnit style of test framework? If so, what is it, where is it, and is it any good? | Python has several testing frameworks, including unittest, doctest, and nose. The most xUnit-like is unittest, which is documented on Python.org. unittest documentation doctest documentation | Unit tests in Python Does Python have a unit testing framework compatible with the standard xUnit style of test framework? If so, what is it, where is it, and is it any good? | TITLE:
Unit tests in Python
QUESTION:
Does Python have a unit testing framework compatible with the standard xUnit style of test framework? If so, what is it, where is it, and is it any good?
ANSWER:
Python has several testing frameworks, including unittest, doctest, and nose. The most xUnit-like is unittest, which is documented on Python.org. unittest documentation doctest documentation | [
"python",
"unit-testing"
] | 22 | 25 | 9,362 | 9 | 0 | 2008-08-31T05:07:41.603000 | 2008-08-31T05:09:33.813000 |
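To make the unittest recommendation concrete, here is a minimal xUnit-style case (the class name and values are invented) that loads and runs itself programmatically:

```python
import unittest

class TestArithmetic(unittest.TestCase):
    """Classic xUnit shape: fixture in setUp, assertions in test_* methods."""

    def setUp(self):
        self.values = [1, 2, 3]

    def test_sum(self):
        self.assertEqual(sum(self.values), 6)

    def test_membership(self):
        self.assertIn(2, self.values)

# Load and run the case programmatically (python -m unittest works too)
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestArithmetic)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```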
36,656 | 36,658 | How do I keep whitespace formatting using PHP/HTML? | I'm parsing text from a file and storing it in a string. The problem is that some of the text in the original files contains ASCII art and whatnot that I would like to preserve. When I print out the string on the HTML page, even if it does have the same formatting and everything since it is in HTML, the spacing and line breaks are not preserved. What is the best way to print out the text in HTML exactly as it was in the original text file? I would like to give an example, but unfortunately, I was not able to get it to display correctly in this markdown editor:P Basically, I would like suggestions on how to display ASCII art in HTML. | use the tag (pre formatted), that will use a mono spaced font (for your art) and keep all the white space text goes here and here and here and here Some out here ▄ ▄█▄ █▄ ▄ ▄█▀█▓ ▄▓▀▀█▀ ▀▀▀█▓▀▀ ▀▀ ▄█▀█▓▀▀▀▀▀▓▄▀██▀▀ ██ ██ ▀██▄▄ ▄█ ▀ ░▒ ░▒ ██ ██ ▄█▄ █▀ ██ █▓▄▀██ ▄ ▀█▌▓█ ▒▓ ▒▓ █▓▄▀██ ▓█ ▀▄ █▓ █▒ █▓ ██▄▓▀ ▀█▄▄█▄▓█ ▓█ █▒ █▓ ▒█ ▓█▄ ▒ ▀▒ ▀ ▀ █▀ ▀▒ ▀ █▀ ░ You might have to convert any <'s to < 's | How do I keep whitespace formatting using PHP/HTML? I'm parsing text from a file and storing it in a string. The problem is that some of the text in the original files contains ASCII art and whatnot that I would like to preserve. When I print out the string on the HTML page, even if it does have the same formatting and everything since it is in HTML, the spacing and line breaks are not preserved. What is the best way to print out the text in HTML exactly as it was in the original text file? I would like to give an example, but unfortunately, I was not able to get it to display correctly in this markdown editor:P Basically, I would like suggestions on how to display ASCII art in HTML. | TITLE:
How do I keep whitespace formatting using PHP/HTML?
QUESTION:
I'm parsing text from a file and storing it in a string. The problem is that some of the text in the original files contains ASCII art and whatnot that I would like to preserve. When I print out the string on the HTML page, even if it does have the same formatting and everything since it is in HTML, the spacing and line breaks are not preserved. What is the best way to print out the text in HTML exactly as it was in the original text file? I would like to give an example, but unfortunately, I was not able to get it to display correctly in this markdown editor:P Basically, I would like suggestions on how to display ASCII art in HTML.
ANSWER:
use the <pre> tag (pre-formatted); that will use a monospaced font (for your art) and keep all the white space text goes here and here and here and here Some out here ▄ ▄█▄ █▄ ▄ ▄█▀█▓ ▄▓▀▀█▀ ▀▀▀█▓▀▀ ▀▀ ▄█▀█▓▀▀▀▀▀▓▄▀██▀▀ ██ ██ ▀██▄▄ ▄█ ▀ ░▒ ░▒ ██ ██ ▄█▄ █▀ ██ █▓▄▀██ ▄ ▀█▌▓█ ▒▓ ▒▓ █▓▄▀██ ▓█ ▀▄ █▓ █▒ █▓ ██▄▓▀ ▀█▄▄█▄▓█ ▓█ █▒ █▓ ▒█ ▓█▄ ▒ ▀▒ ▀ ▀ █▀ ▀▒ ▀ █▀ ░ You might have to convert any <'s to &lt;'s | [
"php",
"html",
"ascii"
] | 33 | 60 | 51,465 | 5 | 0 | 2008-08-31T05:55:12.330000 | 2008-08-31T05:58:01.747000 |
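The answer's two steps, escape the angle brackets and wrap the art in a pre tag, can be sketched in a few lines (Python is used here for brevity; the PHP counterpart of html.escape is htmlspecialchars()):

```python
import html

# Hypothetical ASCII art containing characters HTML would otherwise swallow
ascii_art = " /\\_/\\ \n( o.o )\n > ^ < "

# Escape <, > and & first, then wrap in <pre> so whitespace survives
safe = "<pre>" + html.escape(ascii_art) + "</pre>"
print(safe)
```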
36,682 | 36,689 | Why do .Net WPF DependencyProperties have to be static members of the class | Learning WPF nowadays. Found something new today with .Net dependency properties. What they bring to the table is Support for Callbacks (Validation, Change, etc) Property inheritance Attached properties among others. But my question here is why do they need to be declared as static in the containing class? The recommended way is to then add an instance 'wrapper' property for them. Why? edit: @Matt, but doesn't that also mandate that the property value is also shared across instances - unless of course it is a derived value? | I see 2 reasons behind that requirement: You can't register the same DP twice. To comply with this constraint you should use a static variable; it will be initialized only one time, thus you will register the DP one time only. The DP should be registered before any instance of the class (which uses that DP) is created | Why do .Net WPF DependencyProperties have to be static members of the class Learning WPF nowadays. Found something new today with .Net dependency properties. What they bring to the table is Support for Callbacks (Validation, Change, etc) Property inheritance Attached properties among others. But my question here is why do they need to be declared as static in the containing class? The recommended way is to then add an instance 'wrapper' property for them. Why? edit: @Matt, but doesn't that also mandate that the property value is also shared across instances - unless of course it is a derived value? | TITLE:
Why do .Net WPF DependencyProperties have to be static members of the class
QUESTION:
Learning WPF nowadays. Found something new today with .Net dependency properties. What they bring to the table is Support for Callbacks (Validation, Change, etc) Property inheritance Attached properties among others. But my question here is why do they need to be declared as static in the containing class? The recommended way is to then add an instance 'wrapper' property for them. Why? edit: @Matt, but doesn't that also mandate that the property value is also shared across instances - unless of course it is a derived value?
ANSWER:
I see 2 reasons behind that requirement: You can't register the same DP twice. To comply with this constraint you should use a static variable; it will be initialized only one time, thus you will register the DP one time only. The DP should be registered before any instance of the class (which uses that DP) is created | [
".net",
"wpf"
] | 4 | 2 | 425 | 3 | 0 | 2008-08-31T08:08:35.570000 | 2008-08-31T08:26:55.913000 |
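The two reasons map onto a pattern you can sketch in any language. This hypothetical Python version (names like Dp and Widget are invented) shows the registration handle living at class level while values stay per-instance, which also addresses the question's edit: the value is not shared, only the registration metadata is:

```python
class Dp:
    """Minimal stand-in for a dependency-property registry entry."""
    _registry = {}

    def __init__(self, owner, name, default):
        key = (owner, name)
        if key in Dp._registry:              # reason 1: the same DP can't register twice
            raise ValueError(f"{key} already registered")
        Dp._registry[key] = self
        self.name, self.default = name, default

class Widget:
    # Static member: registered exactly once, while the class is being
    # defined, i.e. before any Widget instance exists (reason 2).
    WidthProperty = Dp("Widget", "Width", default=0)

    @property
    def width(self):                         # the instance 'wrapper' property
        return self.__dict__.get("Width", Widget.WidthProperty.default)

    @width.setter
    def width(self, value):
        self.__dict__["Width"] = value

a, b = Widget(), Widget()
a.width = 10                                 # stored per instance, not per class
```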
36,693 | 155,600 | How can I render a PNG image (as a memory stream) onto a .NET ReportViewer report surface | I have a dynamically created image that I am saving to a stream so that I can display it on a ReportViewer surface. Setup: Windows Client application (not WebForms) Report datasource is an object datasource, with a dynamically generated stream as a property (CustomImage) Report.EnableExternalImages = true Image.Source = Database Image.MIMEType = image/png Image.Value = =Fields!CustomImage.Value This is not working, but is not reporting any errors, just showing an empty image icon on the report surface. All other fields are displaying correctly. Does anyone have a working code sample of this scenario? | I am doing something similar in order to have a changing logo on reports however I utilise report parameters to pass the value. I don't see any reason why this general method wouldn't work if the images were part of the data. Essentially the images are passed over two fields. The first field is the MIME Type value and the second field is a Base64 encoded string containing the image content. Step 1: Convert your image to Base64 encoding. (Our code always passes ImageFormat.Png to this method to make the MIME Type easy) private static string ConvertImageToBase64(Image image, ImageFormat format) { byte[] imageArray;
using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream()) { image.Save(imageStream, format); imageArray = new byte[imageStream.Length]; imageStream.Seek(0, System.IO.SeekOrigin.Begin); imageStream.Read(imageArray, 0, (int)imageStream.Length); }
return Convert.ToBase64String(imageArray); } Step 2: Pass the image and MIME Type to the report. reportParams[0] = new ReportParameter("ReportLogo", base64Logo); reportParams[1] = new ReportParameter("ReportLogoMimeType", "image/png");
_reportViewer.LocalReport.SetParameters(reportParams); Step 3: In the report set the following properties on the image (without the quotes): MIMEType: " =Parameters!ReportLogoMimeType.Value " Value: " =System.Convert.FromBase64String(Parameters!ReportLogo.Value) " UPDATE: As Gerardo says below, the Image Source must be set to 'Database'. Trap for young players: Often the images will look horrible and like they've been scaled even though you're passing in an image which seems to be the "right size". This is because the reports are rendered for print (300 dpi) and not the screen (usually 72 or 92 dpi). The fix is to send in an image about 3 times too big, set its correct size in the report and change the " Sizing " property on the image to " FitProportional ". | How can I render a PNG image (as a memory stream) onto a .NET ReportViewer report surface I have a dynamically created image that I am saving to a stream so that I can display it on a ReportViewer surface. Setup: Windows Client application (not WebForms) Report datasource is an object datasource, with a dynamically generated stream as a property (CustomImage) Report.EnableExternalImages = true Image.Source = Database Image.MIMEType = image/png Image.Value = =Fields!CustomImage.Value This is not working, but is not reporting any errors, just showing an empty image icon on the report surface. All other fields are displaying correctly. Does anyone have a working code sample of this scenario? | TITLE:
How can I render a PNG image (as a memory stream) onto a .NET ReportViewer report surface
QUESTION:
I have a dynamically created image that I am saving to a stream so that I can display it on a ReportViewer surface. Setup: Windows Client application (not WebForms) Report datasource is an object datasource, with a dynamically generated stream as a property (CustomImage) Report.EnableExternalImages = true Image.Source = Database Image.MIMEType = image/png Image.Value = =Fields!CustomImage.Value This is not working, but is not reporting any errors, just showing an empty image icon on the report surface. All other fields are displaying correctly. Does anyone have a working code sample of this scenario?
ANSWER:
I am doing something similar in order to have a changing logo on reports however I utilise report parameters to pass the value. I don't see any reason why this general method wouldn't work if the images were part of the data. Essentially the images are passed over two fields. The first field is the MIME Type value and the second field is a Base64 encoded string containing the image content. Step 1: Convert your image to Base64 encoding. (Our code always passes ImageFormat.Png to this method to make the MIME Type easy) private static string ConvertImageToBase64(Image image, ImageFormat format) { byte[] imageArray;
using (System.IO.MemoryStream imageStream = new System.IO.MemoryStream()) { image.Save(imageStream, format); imageArray = new byte[imageStream.Length]; imageStream.Seek(0, System.IO.SeekOrigin.Begin); imageStream.Read(imageArray, 0, (int)imageStream.Length); }
return Convert.ToBase64String(imageArray); } Step 2: Pass the image and MIME Type to the report. reportParams[0] = new ReportParameter("ReportLogo", base64Logo); reportParams[1] = new ReportParameter("ReportLogoMimeType", "image/png");
_reportViewer.LocalReport.SetParameters(reportParams); Step 3: In the report set the following properties on the image (without the quotes): MIMEType: " =Parameters!ReportLogoMimeType.Value " Value: " =System.Convert.FromBase64String(Parameters!ReportLogo.Value) " UPDATE: As Gerardo says below, the Image Source must be set to 'Database' Trap for young players: Often the images will look horrible and like they've been scaled even though you're passing in an image which seems to be the "right size". This is because the reports are rendered for print (300 dpi) and not the screen (usually 72 or 92 dpi). The fix is to send in an image about 3 times too big, set its correct size in the report and change the " Sizing " property on the image to " FitProportional ". | [
".net",
"image",
"reportviewer"
] | 13 | 34 | 16,446 | 1 | 0 | 2008-08-31T08:42:17.797000 | 2008-09-30T23:37:10.057000 |
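The Base64 round trip at the heart of the answer above is language-neutral. A minimal Python sketch of the same idea (hypothetical helper names; this stands in for the C# ConvertImageToBase64 method and the report's FromBase64String expression, not for any ReportViewer API):

```python
import base64

def image_to_base64(image_bytes):
    """Encode raw image bytes (e.g. PNG content) as a Base64 string for transport."""
    return base64.b64encode(image_bytes).decode("ascii")

def base64_to_image(encoded):
    """Decode the Base64 string back to the original bytes, as the report expression does."""
    return base64.b64decode(encoded)

# Stand-in for real image content: the 8-byte PNG file signature.
png_signature = b"\x89PNG\r\n\x1a\n"
encoded = image_to_base64(png_signature)
print(encoded)                                    # iVBORw0KGgo=
assert base64_to_image(encoded) == png_signature  # lossless round trip
```

Because the encoding is lossless, the report receives byte-identical image data; the string form just makes it safe to pass as a report parameter.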
36,701 | 36,734 | Struct like objects in Java | Is it completely against the Java way to create struct like objects? class SomeData1 { public int x; public int y; } I can see a class with accessors and mutators being more Java like. class SomeData2 { int getX(); void setX(int x);
int getY(); void setY(int y);
private int x; private int y; } The class from the first example is notationally convenient. // a function in a class public int f(SomeData1 d) { return (3 * d.x) / d.y; } This is not as convenient. // a function in a class public int f(SomeData2 d) { return (3 * d.getX()) / d.getY(); } | This is a commonly discussed topic. The drawback of creating public fields in objects is that you have no control over the values that are set to it. In group projects where there are many programmers using the same code, it's important to avoid side effects. Besides, sometimes it's better to return a copy of field's object or transform it somehow etc. You can mock such methods in your tests. If you create a new class you might not see all possible actions. It's like defensive programming - someday getters and setters may be helpful, and it doesn't cost a lot to create/use them. So they are sometimes useful. In practice, most fields have simple getters and setters. A possible solution would look like this: public property String foo; a->Foo = b->Foo; Update: It's highly unlikely that property support will be added in Java 7 or perhaps ever. Other JVM languages like Groovy, Scala, etc do support this feature now. - Alex Miller | Struct like objects in Java Is it completely against the Java way to create struct like objects? class SomeData1 { public int x; public int y; } I can see a class with accessors and mutators being more Java like. class SomeData2 { int getX(); void setX(int x);
int getY(); void setY(int y);
private int x; private int y; } The class from the first example is notationally convenient. // a function in a class public int f(SomeData1 d) { return (3 * d.x) / d.y; } This is not as convenient. // a function in a class public int f(SomeData2 d) { return (3 * d.getX()) / d.getY(); } | TITLE:
Struct like objects in Java
QUESTION:
Is it completely against the Java way to create struct like objects? class SomeData1 { public int x; public int y; } I can see a class with accessors and mutators being more Java like. class SomeData2 { int getX(); void setX(int x);
int getY(); void setY(int y);
private int x; private int y; } The class from the first example is notationally convenient. // a function in a class public int f(SomeData1 d) { return (3 * d.x) / d.y; } This is not as convenient. // a function in a class public int f(SomeData2 d) { return (3 * d.getX()) / d.getY(); }
ANSWER:
This is a commonly discussed topic. The drawback of creating public fields in objects is that you have no control over the values that are set to them. In group projects where there are many programmers using the same code, it's important to avoid side effects. Besides, sometimes it's better to return a copy of a field's object or transform it somehow, etc. You can mock such methods in your tests. If you create a new class you might not see all possible actions. It's like defensive programming - someday getters and setters may be helpful, and it doesn't cost a lot to create/use them. So they are sometimes useful. In practice, most fields have simple getters and setters. A possible solution would look like this: public property String foo; a->Foo = b->Foo; Update: It's highly unlikely that property support will be added in Java 7 or perhaps ever. Other JVM languages like Groovy, Scala, etc. do support this feature now. - Alex Miller | [
"java",
"oop",
"struct"
] | 199 | 62 | 359,225 | 20 | 0 | 2008-08-31T09:17:01.737000 | 2008-08-31T09:50:48.210000 |
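For contrast with the hypothetical "public property" syntax the answer proposes: some languages support properties natively, giving the notational convenience of public fields while keeping the option to add control later. An illustrative Python sketch (the class name echoes the question; the validation rule is invented for the example):

```python
class SomeData:
    """Struct-like holder; y can gain validation later without changing call sites."""
    def __init__(self, x, y):
        self.x = x   # plain public field
        self.y = y   # routed through the property setter below

    @property
    def y(self):
        return self._y

    @y.setter
    def y(self, value):
        # Added later without breaking any caller that writes d.y = ...
        if value == 0:
            raise ValueError("y must be non-zero (it is used as a divisor)")
        self._y = value

def f(d):
    # Reads like field access, as in the question's SomeData1 example.
    return (3 * d.x) // d.y

print(f(SomeData(4, 2)))  # 6
```

This is the trade-off the answer describes: callers keep the terse `d.x` / `d.y` notation, yet the class retains a place to enforce invariants when they become necessary.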
36,706 | 36,856 | How can I improve my programming experience on my Linux Desktop? | How can I improve the look and feel of my Linux desktop to suit my programming needs? I found Compiz and it makes switching between my workspaces (which is something I do all the time to make the most of my 13.3" screen laptop) easy and look great - so what else don't I know about that make my programming environment more productive/pleasing? @Rob Cooper - thanks for the heads-up, hope this reword addresses the issues | I've used by Ubuntu desktop for some coding sessions. I haven't settled on an IDE, but if I'm not using gedit, I'll use emacs as my editor. Sometimes I need to ssh to a remote server and edit from there, in which case emacs is preferred. I'm just not the vi(m) type. Maybe I'll try out Eclipse one day... I love Compiz, but it does nothing for my coding experience. It's just eye candy. You can do desktop switching and Alt-Tab just fine without it. Aside from that, Jeff Atwood's recommendations for good chair, multi-monitors, and simplistic background still apply for me. | How can I improve my programming experience on my Linux Desktop? How can I improve the look and feel of my Linux desktop to suit my programming needs? I found Compiz and it makes switching between my workspaces (which is something I do all the time to make the most of my 13.3" screen laptop) easy and look great - so what else don't I know about that make my programming environment more productive/pleasing? @Rob Cooper - thanks for the heads-up, hope this reword addresses the issues | TITLE:
How can I improve my programming experience on my Linux Desktop?
QUESTION:
How can I improve the look and feel of my Linux desktop to suit my programming needs? I found Compiz and it makes switching between my workspaces (which is something I do all the time to make the most of my 13.3" screen laptop) easy and look great - so what else don't I know about that make my programming environment more productive/pleasing? @Rob Cooper - thanks for the heads-up, hope this reword addresses the issues
ANSWER:
I've used my Ubuntu desktop for some coding sessions. I haven't settled on an IDE, but if I'm not using gedit, I'll use emacs as my editor. Sometimes I need to ssh to a remote server and edit from there, in which case emacs is preferred. I'm just not the vi(m) type. Maybe I'll try out Eclipse one day... I love Compiz, but it does nothing for my coding experience. It's just eye candy. You can do desktop switching and Alt-Tab just fine without it. Aside from that, Jeff Atwood's recommendations for a good chair, multi-monitors, and a simplistic background still apply for me. | [
"linux",
"desktop",
"compiz"
] | 0 | 0 | 779 | 3 | 0 | 2008-08-31T09:26:51.450000 | 2008-08-31T13:33:27.863000 |
36,707 | 36,714 | Should a function have only one return statement? | Are there good reasons why it's a better practice to have only one return statement in a function? Or is it okay to return from a function as soon as it is logically correct to do so, meaning there may be many return statements in the function? | I often have several statements at the start of a method to return for "easy" situations. For example, this: public void DoStuff(Foo foo) { if (foo!= null) {... } }... can be made more readable (IMHO) like this: public void DoStuff(Foo foo) { if (foo == null) return;... } So yes, I think it's fine to have multiple "exit points" from a function/method. | Should a function have only one return statement? Are there good reasons why it's a better practice to have only one return statement in a function? Or is it okay to return from a function as soon as it is logically correct to do so, meaning there may be many return statements in the function? | TITLE:
Should a function have only one return statement?
QUESTION:
Are there good reasons why it's a better practice to have only one return statement in a function? Or is it okay to return from a function as soon as it is logically correct to do so, meaning there may be many return statements in the function?
ANSWER:
I often have several statements at the start of a method to return for "easy" situations. For example, this: public void DoStuff(Foo foo) { if (foo != null) { ... } } ... can be made more readable (IMHO) like this: public void DoStuff(Foo foo) { if (foo == null) return; ... } So yes, I think it's fine to have multiple "exit points" from a function/method. | [
"language-agnostic",
"coding-style"
] | 780 | 741 | 371,394 | 50 | 0 | 2008-08-31T09:26:55.660000 | 2008-08-31T09:31:40.143000 |
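The guard-clause pattern in the answer above translates directly to most languages. A small Python sketch (hypothetical function, for illustration only):

```python
def normalize_name(name):
    """Clean up a display name, returning early for the 'easy' cases."""
    if name is None:
        return ""           # exit point 1: nothing to work with
    name = name.strip()
    if not name:
        return ""           # exit point 2: blank after trimming
    return name.title()     # the real work, reached only by meaningful input

print(normalize_name(" ada lovelace "))  # Ada Lovelace
```

Each early return removes one level of nesting from the "happy path", which is the readability argument for multiple exit points.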
36,709 | 36,836 | Is there a good yacc/bison type LALR parser generator for .NET? | Is there a good yacc/bison type LALR parser generator for .NET? | Antlr supports C# code generation, though it is LL(k), not technically LALR. Its tree rewriting rules are an interesting feature, though. | Is there a good yacc/bison type LALR parser generator for .NET? Is there a good yacc/bison type LALR parser generator for .NET? | TITLE:
Is there a good yacc/bison type LALR parser generator for .NET?
QUESTION:
Is there a good yacc/bison type LALR parser generator for .NET?
ANSWER:
Antlr supports C# code generation, though it is LL(k), not technically LALR. Its tree rewriting rules are an interesting feature, though. | [
".net",
"yacc",
"lalr"
] | 7 | 5 | 2,729 | 5 | 0 | 2008-08-31T09:28:51.397000 | 2008-08-31T12:44:52.363000 |
36,733 | 36,777 | Redirecting users from edit page back to calling page | I am working on a project management web application. The user has a variety of ways to display a list of tasks. When viewing a list page, they click on task and are redirected to the task edit page. Since they are coming from a variety of ways, I am just curious as to the best way to redirect the user back to the calling page. I have some ideas, but would like to get other developers input. Would you store the calling url in session? as a cookie? I like the concept of using an object handle the redirection. | I would store the referring URL using the ViewState. Storing this outside the scope of the page (i.e. in the Session state or cookie) may cause problems if more than one browser window is open. The example below validates that the page was called internally (i.e. not requested directly) and bounces back to the referring page after the user submits their response. public partial class _Default: System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (Request.UrlReferrer == null) { //Handle the case where the page is requested directly throw new Exception("This page has been called without a referring page"); }
if (!IsPostBack) { ReturnUrl = Request.UrlReferrer.PathAndQuery; } }
public string ReturnUrl { get { return ViewState["returnUrl"].ToString(); } set { ViewState["returnUrl"] = value; } }
protected void btn_Click(object sender, EventArgs e) { //Do what you need to do to save the page //...
//Go back to calling page Response.Redirect(ReturnUrl, true); } } | Redirecting users from edit page back to calling page I am working on a project management web application. The user has a variety of ways to display a list of tasks. When viewing a list page, they click on task and are redirected to the task edit page. Since they are coming from a variety of ways, I am just curious as to the best way to redirect the user back to the calling page. I have some ideas, but would like to get other developers input. Would you store the calling url in session? as a cookie? I like the concept of using an object handle the redirection. | TITLE:
Redirecting users from edit page back to calling page
QUESTION:
I am working on a project management web application. The user has a variety of ways to display a list of tasks. When viewing a list page, they click on a task and are redirected to the task edit page. Since they are coming from a variety of ways, I am just curious as to the best way to redirect the user back to the calling page. I have some ideas, but would like to get other developers' input. Would you store the calling URL in session? As a cookie? I like the concept of using an object to handle the redirection.
ANSWER:
I would store the referring URL using the ViewState. Storing this outside the scope of the page (i.e. in the Session state or cookie) may cause problems if more than one browser window is open. The example below validates that the page was called internally (i.e. not requested directly) and bounces back to the referring page after the user submits their response. public partial class _Default: System.Web.UI.Page { protected void Page_Load(object sender, EventArgs e) { if (Request.UrlReferrer == null) { //Handle the case where the page is requested directly throw new Exception("This page has been called without a referring page"); }
if (!IsPostBack) { ReturnUrl = Request.UrlReferrer.PathAndQuery; } }
public string ReturnUrl { get { return ViewState["returnUrl"].ToString(); } set { ViewState["returnUrl"] = value; } }
protected void btn_Click(object sender, EventArgs e) { //Do what you need to do to save the page //...
//Go back to calling page Response.Redirect(ReturnUrl, true); } } | [
"asp.net",
"redirect"
] | 7 | 5 | 4,069 | 4 | 0 | 2008-08-31T09:49:18.010000 | 2008-08-31T11:21:17.603000 |
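Whatever mechanism carries the return URL (ViewState, a query parameter, a cookie), the value comes back from the client, so it is worth validating before redirecting to it. A hedged Python sketch of that check (the function name and policy are invented for illustration; they are not part of ASP.NET):

```python
from urllib.parse import urlparse

def safe_return_url(candidate, fallback="/"):
    """Accept only same-site relative paths as a post-save redirect target."""
    if not candidate:
        return fallback
    parts = urlparse(candidate)
    # Reject absolute URLs (scheme or host present) to avoid open redirects,
    # and anything that is not a single-slash-rooted path.
    if parts.scheme or parts.netloc:
        return fallback
    if not candidate.startswith("/") or candidate.startswith("//"):
        return fallback
    return candidate

print(safe_return_url("/tasks?page=2"))       # /tasks?page=2
print(safe_return_url("http://evil.example"))  # /
```

The same policy applies to the ViewState approach in the answer: `Request.UrlReferrer.PathAndQuery` already strips the host, which is one reason it is a reasonable value to round-trip.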
36,760 | 36,762 | SQL Query, Count with 0 count | I have three tables: page, attachment, page-attachment I have data like this: page ID NAME 1 first page 2 second page 3 third page 4 fourth page
attachment ID NAME 1 foo.word 2 test.xsl 3 mm.ppt
page-attachment ID PAGE-ID ATTACHMENT-ID 1 2 1 2 2 2 3 3 3 I would like to get the number of attachments per page also when that number is 0. I have tried with: select page.name, count(page-attachment.id) as attachmentsnumber from page inner join page-attachment on page.id=page-id group by page.id I am getting this output: NAME ATTACHMENTSNUMBER second page 2 third page 1 I would like to get this output: NAME ATTACHMENTSNUMBER first page 0 second page 2 third page 1 fourth page 0 How do I get the 0 part? | Change your "inner join" to a "left outer join", which means "get me all the rows on the left of the join, even if there isn't a matching row on the right." select page.name, count(page-attachment.id) as attachmentsnumber from page left outer join page-attachment on page.id=page-id group by page.name | SQL Query, Count with 0 count I have three tables: page, attachment, page-attachment I have data like this: page ID NAME 1 first page 2 second page 3 third page 4 fourth page
attachment ID NAME 1 foo.word 2 test.xsl 3 mm.ppt
page-attachment ID PAGE-ID ATTACHMENT-ID 1 2 1 2 2 2 3 3 3 I would like to get the number of attachments per page also when that number is 0. I have tried with: select page.name, count(page-attachment.id) as attachmentsnumber from page inner join page-attachment on page.id=page-id group by page.id I am getting this output: NAME ATTACHMENTSNUMBER second page 2 third page 1 I would like to get this output: NAME ATTACHMENTSNUMBER first page 0 second page 2 third page 1 fourth page 0 How do I get the 0 part? | TITLE:
SQL Query, Count with 0 count
QUESTION:
I have three tables: page, attachment, page-attachment I have data like this: page ID NAME 1 first page 2 second page 3 third page 4 fourth page
attachment ID NAME 1 foo.word 2 test.xsl 3 mm.ppt
page-attachment ID PAGE-ID ATTACHMENT-ID 1 2 1 2 2 2 3 3 3 I would like to get the number of attachments per page also when that number is 0. I have tried with: select page.name, count(page-attachment.id) as attachmentsnumber from page inner join page-attachment on page.id=page-id group by page.id I am getting this output: NAME ATTACHMENTSNUMBER second page 2 third page 1 I would like to get this output: NAME ATTACHMENTSNUMBER first page 0 second page 2 third page 1 fourth page 0 How do I get the 0 part?
ANSWER:
Change your "inner join" to a "left outer join", which means "get me all the rows on the left of the join, even if there isn't a matching row on the right." select page.name, count(page-attachment.id) as attachmentsnumber from page left outer join page-attachment on page.id = page-attachment.page-id group by page.name | [
"sql",
"count"
] | 14 | 33 | 34,480 | 6 | 0 | 2008-08-31T10:39:58.230000 | 2008-08-31T10:41:16.793000 |
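The behaviour of the accepted answer is easy to verify with an in-memory SQLite database. A Python sketch (the hyphenated table names from the question are renamed with underscores, since unquoted hyphens are not valid SQL identifiers):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE page (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE page_attachment (id INTEGER PRIMARY KEY, page_id INTEGER, attachment_id INTEGER);
    INSERT INTO page VALUES (1, 'first page'), (2, 'second page'), (3, 'third page'), (4, 'fourth page');
    INSERT INTO page_attachment VALUES (1, 2, 1), (2, 2, 2), (3, 3, 3);
""")

# COUNT(page_attachment.id) counts only non-NULL ids, so pages with no
# matching attachment row (all columns NULL from the outer join) count as 0.
rows = conn.execute("""
    SELECT page.name, COUNT(page_attachment.id) AS attachmentsnumber
    FROM page
    LEFT OUTER JOIN page_attachment ON page.id = page_attachment.page_id
    GROUP BY page.id, page.name
    ORDER BY page.id
""").fetchall()

for name, n in rows:
    print(name, n)
```

This prints `first page 0`, `second page 2`, `third page 1`, `fourth page 0`, matching the output the asker wanted; note that counting a column from the right-hand table (rather than `COUNT(*)`) is what makes the unmatched rows come out as 0.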
36,778 | 36,914 | Firefox vs. IE: innerHTML handling | After hours of debugging, it appears to me that in FireFox, the innerHTML of a DOM reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via Javascript). Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way? | I agree with Pat. At this point in the game, writing your own code to deal with cross-browser compatibility given the available Javascript frameworks doesn't make a lot of sense. There's a framework for nearly any taste (some really quite tiny) and they've focused on really abstracting out all of the differences between the browsers. They're doing WAY more testing of it than you're likely to. Something like jQuery or Yahoo's YUI (think how many people hit the Yahoo Javascript in a day and the variety of browsers) is just way more road-tested than any snippet you or I come up with. | Firefox vs. IE: innerHTML handling After hours of debugging, it appears to me that in FireFox, the innerHTML of a DOM reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via Javascript). Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way? | TITLE:
Firefox vs. IE: innerHTML handling
QUESTION:
After hours of debugging, it appears to me that in FireFox, the innerHTML of a DOM reflects what is actually in the markup, but in IE, the innerHTML reflects what's in the markup PLUS any changes made by the user or dynamically (i.e. via Javascript). Has anyone else found this to be true? Any interesting work-arounds to ensure both behave the same way?
ANSWER:
I agree with Pat. At this point in the game, writing your own code to deal with cross-browser compatibility given the available Javascript frameworks doesn't make a lot of sense. There's a framework for nearly any taste (some really quite tiny) and they've focused on really abstracting out all of the differences between the browsers. They're doing WAY more testing of it than you're likely to. Something like jQuery or Yahoo's YUI (think how many people hit the Yahoo Javascript in a day and the variety of browsers) is just way more road-tested than any snippet you or I come up with. | [
"javascript",
"internet-explorer",
"firefox",
"dom"
] | 6 | 9 | 3,790 | 4 | 0 | 2008-08-31T11:23:46.147000 | 2008-08-31T15:28:15.027000 |
36,812 | 36,935 | How do I add data to an existing model in Django? | Currently, I am writing up a bit of a product-based CMS as my first project. Here is my question. How can I add additional data (products) to my Product model? I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would i build both my view and my template? Please keep in mind that I don't really know all that much Python, and i am very new to Django How can I do this all without using this existing django admin interface. | You will want to wire your URL to the Django create_object generic view, and pass it either "model" (the model you want to create) or "form_class" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors. Sample URLconf for the simplest case: from django.conf.urls.defaults import * from django.views.generic.create_update import create_object
from my_products_app.models import Product
urlpatterns = patterns('', url(r'^admin/products/add/$', create_object, {'model': Product})) Your template will get the context variable "form", which you just need to wrap in a tag and add a submit button. The simplest working template (by default should go in "my_products_app/product_form.html"): {{ form }} Note that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save. | How do I add data to an existing model in Django? Currently, I am writing up a bit of a product-based CMS as my first project. Here is my question. How can I add additional data (products) to my Product model? I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would i build both my view and my template? Please keep in mind that I don't really know all that much Python, and i am very new to Django How can I do this all without using this existing django admin interface. | TITLE:
How do I add data to an existing model in Django?
QUESTION:
Currently, I am writing up a bit of a product-based CMS as my first project. Here is my question. How can I add additional data (products) to my Product model? I have added '/admin/products/add' to my urls.py, but I don't really know where to go from there. How would I build both my view and my template? Please keep in mind that I don't really know all that much Python, and I am very new to Django. How can I do this all without using the existing Django admin interface?
ANSWER:
You will want to wire your URL to the Django create_object generic view, and pass it either "model" (the model you want to create) or "form_class" (a customized ModelForm class). There are a number of other arguments you can also pass to override default behaviors. Sample URLconf for the simplest case: from django.conf.urls.defaults import * from django.views.generic.create_update import create_object
from my_products_app.models import Product
urlpatterns = patterns('', url(r'^admin/products/add/$', create_object, {'model': Product})) Your template will get the context variable "form", which you just need to wrap in a form tag and add a submit button. The simplest working template (by default should go in "my_products_app/product_form.html"): {{ form }} Note that your Product model must have a get_absolute_url method, or else you must pass in the post_save_redirect parameter to the view. Otherwise it won't know where to redirect to after save. | [
"python",
"django"
] | 7 | 7 | 4,715 | 3 | 0 | 2008-08-31T12:11:10.320000 | 2008-08-31T15:59:11.473000 |
36,813 | 36,927 | Table Stats gathering for Oracle | When and how should table stats gathering be performed for Oracle, version 9 and up? How would you go about gathering stats for a large database, where stats gathering would collide with "business hours". | Gathering stats should be done whenever there has been large changes to the data content, for example a large number of deletes or inserts. If the table structure has changed you should gather stats also. It is advisable to use the 'ESTIMATE' option. Do this as an automated process out of business hours if possible, or if you have to do it during business hours then choose a time when there is minimum access to the tables you wish to gather stats for. | Table Stats gathering for Oracle When and how should table stats gathering be performed for Oracle, version 9 and up? How would you go about gathering stats for a large database, where stats gathering would collide with "business hours". | TITLE:
Table Stats gathering for Oracle
QUESTION:
When and how should table stats gathering be performed for Oracle, version 9 and up? How would you go about gathering stats for a large database, where stats gathering would collide with "business hours".
ANSWER:
Gathering stats should be done whenever there has been large changes to the data content, for example a large number of deletes or inserts. If the table structure has changed you should gather stats also. It is advisable to use the 'ESTIMATE' option. Do this as an automated process out of business hours if possible, or if you have to do it during business hours then choose a time when there is minimum access to the tables you wish to gather stats for. | [
"oracle",
"table-statistics"
] | 3 | 1 | 9,299 | 4 | 0 | 2008-08-31T12:11:42.690000 | 2008-08-31T15:48:19.910000 |
36,825 | 37,071 | Integrating Perl and Oracle Advanced Queuing | Is there any way to listen to an Oracle AQ using a Perl process as the listener. | This Introduction to Oracle Advanced Queuing states that you can interface to it through "Internet access using HTTP, HTTPS, and SMTP" so it should be straightforward to do that using a Perl script. | Integrating Perl and Oracle Advanced Queuing Is there any way to listen to an Oracle AQ using a Perl process as the listener. | TITLE:
Integrating Perl and Oracle Advanced Queuing
QUESTION:
Is there any way to listen to an Oracle AQ using a Perl process as the listener?
ANSWER:
This Introduction to Oracle Advanced Queuing states that you can interface to it through "Internet access using HTTP, HTTPS, and SMTP" so it should be straightforward to do that using a Perl script. | [
"perl",
"oracle",
"messaging",
"advanced-queuing"
] | 0 | 1 | 858 | 1 | 0 | 2008-08-31T12:29:38.700000 | 2008-08-31T20:18:43.973000 |
36,831 | 36,841 | How do you parse an IP address string to a uint value in C#? | I'm writing C# code that uses the windows IP Helper API. One of the functions I'm trying to call is " GetBestInterface " that takes a 'uint' representation of an IP. What I need is to parse a textual representation of the IP to create the 'uint' representation. I've found some examples via Google, like this one or this one, but I'm pretty sure there should be a standard way to achieve this with.NET. Only problem is, I can't find this standard way. IPAddress.Parse seems to be in the right direction, but it doesn't supply any way of getting a 'uint' representation... There is also a way of doing this using IP Helper, using the ParseNetworkString, but again, I'd rather use.NET - I believe the less I rely on pInvoke the better. So, anyone knows of a standard way to do this in.NET? | MSDN says that IPAddress.Address property (which returns numeric representation of IP address) is obsolete and you should use GetAddressBytes method. You can convert IP address to numeric value using following code: var ipAddress = IPAddress.Parse("some.ip.address"); var ipBytes = ipAddress.GetAddressBytes(); var ip = (uint)ipBytes [3] << 24; ip += (uint)ipBytes [2] << 16; ip += (uint)ipBytes [1] <<8; ip += (uint)ipBytes [0]; EDIT: As other commenters noticed above-mentioned code is for IPv4 addresses only. IPv6 address is 128 bits long so it's impossible to convert it to 'uint' as question's author wanted. | How do you parse an IP address string to a uint value in C#? I'm writing C# code that uses the windows IP Helper API. One of the functions I'm trying to call is " GetBestInterface " that takes a 'uint' representation of an IP. What I need is to parse a textual representation of the IP to create the 'uint' representation. I've found some examples via Google, like this one or this one, but I'm pretty sure there should be a standard way to achieve this with.NET. 
Only problem is, I can't find this standard way. IPAddress.Parse seems to be in the right direction, but it doesn't supply any way of getting a 'uint' representation... There is also a way of doing this using IP Helper, using the ParseNetworkString, but again, I'd rather use.NET - I believe the less I rely on pInvoke the better. So, anyone knows of a standard way to do this in.NET? | TITLE:
How do you parse an IP address string to a uint value in C#?
QUESTION:
I'm writing C# code that uses the Windows IP Helper API. One of the functions I'm trying to call is " GetBestInterface " that takes a 'uint' representation of an IP. What I need is to parse a textual representation of the IP to create the 'uint' representation. I've found some examples via Google, like this one or this one, but I'm pretty sure there should be a standard way to achieve this with .NET. Only problem is, I can't find this standard way. IPAddress.Parse seems to be in the right direction, but it doesn't supply any way of getting a 'uint' representation... There is also a way of doing this using IP Helper, using the ParseNetworkString, but again, I'd rather use .NET - I believe the less I rely on pInvoke the better. So, does anyone know of a standard way to do this in .NET?
ANSWER:
MSDN says that the IPAddress.Address property (which returns a numeric representation of the IP address) is obsolete and you should use the GetAddressBytes method. You can convert an IP address to a numeric value using the following code: var ipAddress = IPAddress.Parse("some.ip.address"); var ipBytes = ipAddress.GetAddressBytes(); var ip = (uint)ipBytes[3] << 24; ip += (uint)ipBytes[2] << 16; ip += (uint)ipBytes[1] << 8; ip += (uint)ipBytes[0]; EDIT: As other commenters noticed, the above-mentioned code is for IPv4 addresses only. An IPv6 address is 128 bits long, so it's impossible to convert it to 'uint' as the question's author wanted. | [
"c#",
".net",
"winapi",
"networking",
"iphelper"
] | 8 | 13 | 25,216 | 9 | 0 | 2008-08-31T12:35:30.310000 | 2008-08-31T12:55:45.380000 |
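In Python the same conversion is a two-liner. Note that the C# answer assembles the bytes low-to-high, which keeps the address in network order in memory on little-endian machines; which byte order you want therefore depends on the consuming API. A sketch for illustration:

```python
import socket

def ipv4_to_uint(addr, byteorder="big"):
    """Parse dotted-quad IPv4 text into a 32-bit unsigned integer."""
    packed = socket.inet_aton(addr)   # validates; 4 bytes in network (big-endian) order
    return int.from_bytes(packed, byteorder)

print(ipv4_to_uint("1.2.3.4"))            # 16909060  (big-endian interpretation)
print(ipv4_to_uint("1.2.3.4", "little"))  # 67305985  (matches the C# answer's byte layout)
```

As with the C# answer, this handles IPv4 only; `socket.inet_aton` rejects IPv6 text, and a 128-bit IPv6 address cannot fit a 32-bit value anyway.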
36,832 | 75,654 | Virtual functions in constructors, why do languages differ? | In C++ when a virtual function is called from within a constructor it doesn't behave like a virtual function. I think everyone who encountered this behavior for the first time was surprised but on second thought it made sense: As long as the derived constructor has not been executed the object is not yet a derived instance. So how can a derived function be called? The preconditions haven't had the chance to be set up. Example: class base { public: base() { std::cout << "foo is " << foo() << std::endl; } virtual int foo() { return 42; } };
class derived: public base { int* ptr_; public: derived(int i): ptr_(new int(i*i)) { } // The following cannot be called before derived::derived due to how C++ behaves, // if it was possible... Kaboom! virtual int foo() { return *ptr_; } }; It's exactly the same for Java and.NET yet they chose to go the other way, and is possibly the only reason for the principle of least surprise? Which do you think is the correct choice? | There's a fundamental difference in how the languages define an object's life time. In Java and.Net the object members are zero/null initialized before any constructor is run and is at this point that the object life time begins. So when you enter the constructor you've already got an initialized object. In C++ the object life time only begins when the constructor finishes (although member variables and base classes are fully constructed before it starts). This explains the behaviour when virtual functions are called and also why the destructor isn't run if there's an exception in the constructor's body. The problem with the Java/.Net definition of object lifetime is that it's harder to make sure the object always meets its invariant without having to put in special cases for when the object is initialized but the constructor hasn't run. The problem with the C++ definition is that you have this odd period where the object is in limbo and not fully constructed. | Virtual functions in constructors, why do languages differ? In C++ when a virtual function is called from within a constructor it doesn't behave like a virtual function. I think everyone who encountered this behavior for the first time was surprised but on second thought it made sense: As long as the derived constructor has not been executed the object is not yet a derived instance. So how can a derived function be called? The preconditions haven't had the chance to be set up. 
Example: class base { public: base() { std::cout << "foo is " << foo() << std::endl; } virtual int foo() { return 42; } };
class derived: public base { int* ptr_; public: derived(int i): ptr_(new int(i*i)) { } // The following cannot be called before derived::derived due to how C++ behaves, // if it was possible... Kaboom! virtual int foo() { return *ptr_; } }; It's exactly the same for Java and .NET yet they chose to go the other way, and possibly the only reason is the principle of least surprise? Which do you think is the correct choice? | TITLE:
Virtual functions in constructors, why do languages differ?
QUESTION:
In C++ when a virtual function is called from within a constructor it doesn't behave like a virtual function. I think everyone who encountered this behavior for the first time was surprised but on second thought it made sense: As long as the derived constructor has not been executed the object is not yet a derived instance. So how can a derived function be called? The preconditions haven't had the chance to be set up. Example: class base { public: base() { std::cout << "foo is " << foo() << std::endl; } virtual int foo() { return 42; } };
class derived: public base { int* ptr_; public: derived(int i): ptr_(new int(i*i)) { } // The following cannot be called before derived::derived due to how C++ behaves, // if it was possible... Kaboom! virtual int foo() { return *ptr_; } }; It's exactly the same for Java and .NET yet they chose to go the other way, and possibly the only reason is the principle of least surprise? Which do you think is the correct choice?
ANSWER:
There's a fundamental difference in how the languages define an object's lifetime. In Java and .NET the object members are zero/null initialized before any constructor is run, and it is at this point that the object lifetime begins. So when you enter the constructor you've already got an initialized object. In C++ the object lifetime only begins when the constructor finishes (although member variables and base classes are fully constructed before it starts). This explains the behaviour when virtual functions are called and also why the destructor isn't run if there's an exception in the constructor's body. The problem with the Java/.NET definition of object lifetime is that it's harder to make sure the object always meets its invariant without having to put in special cases for when the object is initialized but the constructor hasn't run. The problem with the C++ definition is that you have this odd period where the object is in limbo and not fully constructed. | [
"java",
".net",
"c++",
"language-agnostic"
] | 12 | 11 | 3,909 | 6 | 0 | 2008-08-31T12:37:24.067000 | 2008-09-16T18:49:55.313000 |
36,861 | 36,869 | Strange boo language syntax | I've run into a strange syntax in Boo Language Guide: setter = { value | a = value } What does the | operator mean? | The documentation of Boo seems to be lacking in this area -- it seems that setter = { value | a = value } is shorthand for setter = def(value): a = value | Strange boo language syntax I've run into a strange syntax in Boo Language Guide: setter = { value | a = value } What does the | operator mean? | TITLE:
Strange boo language syntax
QUESTION:
I've run into a strange syntax in Boo Language Guide: setter = { value | a = value } What does the | operator mean?
ANSWER:
The documentation of Boo seems to be lacking in this area -- it seems that setter = { value | a = value } is shorthand for setter = def(value): a = value | [
"closures",
"boo"
] | 3 | 5 | 1,127 | 4 | 0 | 2008-08-31T13:52:56.613000 | 2008-08-31T14:10:07.383000 |
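For readers unfamiliar with Boo, a rough Python analogue of the closure shorthand discussed above (the Holder class and names here are illustrative, not from the Boo docs): `{ value | a = value }` is a one-argument block whose body assigns to a captured variable.

```python
class Holder:
    """Stand-in for the enclosing scope that owns `a`."""
    def __init__(self):
        self.a = None

obj = Holder()

# Boo:    setter = { value | a = value }
# Python: a one-argument closure that assigns to the captured variable.
def setter(value):
    obj.a = value

setter(42)
print(obj.a)  # 42
```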
36,862 | 37,006 | How do you organise multiple git repositories, so that all of them are backed up together? | With SVN, I had a single big repository I kept on a server, and checked-out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could checkout a specific project, commit and it updated the 'master' project, or I could checkout the entire thing. Now, I have a bunch of git repositories, for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup. The problem is, since it's a private repository, and git doesn't allow checking out of a specific folder (that I could push to github as a separate project, but have the changes appear in both the master-repo, and the sub-repos) I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so it's useless for backup) Currently I have a folder of git-repos (for example, ~/code_projects/proj1/.git/ ~/code_projects/proj2/.git/), and after doing changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then do git push backupdrive1, git push mymemorystick etc So, the question: How do you organise your personal code and projects with git repositories, and keep them synced and backed-up? | I would strongly advise against putting unrelated data in a given Git repository. The overhead of creating new repositories is quite low, and that is a feature that makes it possible to keep different lineages completely separate.
Fighting that idea means ending up with unnecessarily tangled history, which renders administration more difficult and--more importantly--"archeology" tools less useful because of the resulting dilution. Also, as you mentioned, Git assumes that the "unit of cloning" is the repository, and practically has to do so because of its distributed nature. One solution is to keep every project/package/etc. as its own bare repository (i.e., without working tree) under a blessed hierarchy, like: /repos/a.git /repos/b.git /repos/c.git Once a few conventions have been established, it becomes trivial to apply administrative operations (backup, packing, web publishing) to the complete hierarchy, which serves a role not entirely dissimilar to "monolithic" SVN repositories. Working with these repositories also becomes somewhat similar to SVN workflows, with the addition that one can use local commits and branches: svn checkout --> git clone svn update --> git pull svn commit --> git push You can have multiple remotes in each working clone, for the ease of synchronizing between the multiple parties: $ cd ~/dev $ git clone /repos/foo.git # or the one from github,... $ cd foo $ git remote add github... $ git remote add memorystick... You can then fetch/pull from each of the "sources", work and commit locally, and then push ("backup") to each of these remotes when you are ready with something like (note how that pushes the same commits and history to each of the remotes!): $ for remote in origin github memorystick; do git push $remote; done The easiest way to turn an existing working repository ~/dev/foo into such a bare repository is probably: $ cd ~/dev $ git clone --bare foo /repos/foo.git $ mv foo foo.old $ git clone /repos/foo.git which is mostly equivalent to a svn import --but does not throw the existing, "local" history away. 
Note: submodules are a mechanism to include shared related lineages, so I indeed wouldn't consider them an appropriate tool for the problem you are trying to solve. | How do you organise multiple git repositories, so that all of them are backed up together? With SVN, I had a single big repository I kept on a server, and checked-out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could checkout a specific project, commit and it updated the 'master' project, or I could checkout the entire thing. Now, I have a bunch of git repositories, for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup. The problem is, since it's a private repository, and git doesn't allow checking out of a specific folder (that I could push to github as a separate project, but have the changes appear in both the master-repo, and the sub-repos) I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so it's useless for backup) Currently I have a folder of git-repos (for example, ~/code_projects/proj1/.git/ ~/code_projects/proj2/.git/), and after doing changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then do git push backupdrive1, git push mymemorystick etc So, the question: How do you organise your personal code and projects with git repositories, and keep them synced and backed-up? | TITLE:
How do you organise multiple git repositories, so that all of them are backed up together?
QUESTION:
With SVN, I had a single big repository I kept on a server, and checked-out on a few machines. This was a pretty good backup system, and allowed me to easily work on any of the machines. I could checkout a specific project, commit and it updated the 'master' project, or I could checkout the entire thing. Now, I have a bunch of git repositories, for various projects, several of which are on github. I also have the SVN repository I mentioned, imported via the git-svn command. Basically, I like having all my code (not just projects, but random snippets and scripts, some things like my CV, articles I've written, websites I've made and so on) in one big repository I can easily clone onto remote machines, or memory-sticks/harddrives as backup. The problem is, since it's a private repository, and git doesn't allow checking out of a specific folder (that I could push to github as a separate project, but have the changes appear in both the master-repo, and the sub-repos) I could use the git submodule system, but it doesn't act how I want it to (submodules are pointers to other repositories, and don't really contain the actual code, so it's useless for backup) Currently I have a folder of git-repos (for example, ~/code_projects/proj1/.git/ ~/code_projects/proj2/.git/), and after doing changes to proj1 I do git push github, then I copy the files into ~/Documents/code/python/projects/proj1/ and do a single commit (instead of the numerous ones in the individual repos). Then do git push backupdrive1, git push mymemorystick etc So, the question: How do you organise your personal code and projects with git repositories, and keep them synced and backed-up?
ANSWER:
I would strongly advise against putting unrelated data in a given Git repository. The overhead of creating new repositories is quite low, and that is a feature that makes it possible to keep different lineages completely separate. Fighting that idea means ending up with unnecessarily tangled history, which renders administration more difficult and--more importantly--"archeology" tools less useful because of the resulting dilution. Also, as you mentioned, Git assumes that the "unit of cloning" is the repository, and practically has to do so because of its distributed nature. One solution is to keep every project/package/etc. as its own bare repository (i.e., without working tree) under a blessed hierarchy, like: /repos/a.git /repos/b.git /repos/c.git Once a few conventions have been established, it becomes trivial to apply administrative operations (backup, packing, web publishing) to the complete hierarchy, which serves a role not entirely dissimilar to "monolithic" SVN repositories. Working with these repositories also becomes somewhat similar to SVN workflows, with the addition that one can use local commits and branches: svn checkout --> git clone svn update --> git pull svn commit --> git push You can have multiple remotes in each working clone, for the ease of synchronizing between the multiple parties: $ cd ~/dev $ git clone /repos/foo.git # or the one from github,... $ cd foo $ git remote add github... $ git remote add memorystick... 
You can then fetch/pull from each of the "sources", work and commit locally, and then push ("backup") to each of these remotes when you are ready with something like (note how that pushes the same commits and history to each of the remotes!): $ for remote in origin github memorystick; do git push $remote; done The easiest way to turn an existing working repository ~/dev/foo into such a bare repository is probably: $ cd ~/dev $ git clone --bare foo /repos/foo.git $ mv foo foo.old $ git clone /repos/foo.git which is mostly equivalent to a svn import --but does not throw the existing, "local" history away. Note: submodules are a mechanism to include shared related lineages, so I indeed wouldn't consider them an appropriate tool for the problem you are trying to solve. | [
"git",
"backup"
] | 101 | 75 | 52,541 | 6 | 0 | 2008-08-31T13:54:20.590000 | 2008-08-31T18:17:07.027000 |
36,876 | 36,879 | Conditional Redirect on Login | I am using forms authentication. My users are redirected to a page (written in web.config) when they login, but some of them may not have the privileges to access this default page. In this case, I want them to redirect to another page but the RedirectFromLoginPage method always redirects to the default page in web.config. How do I make the users login, and then redirect to a page which depends on some criteria? | The SetAuthCookie allows you to issue the auth cookie but retain control over the navigation. After that method is called you can run your logic to do a typical ASP.NET redirect to wherever you want. | Conditional Redirect on Login I am using forms authentication. My users are redirected to a page (written in web.config) when they login, but some of them may not have the privileges to access this default page. In this case, I want them to redirect to another page but the RedirectFromLoginPage method always redirects to the default page in web.config. How do I make the users login, and then redirect to a page which depends on some criteria? | TITLE:
Conditional Redirect on Login
QUESTION:
I am using forms authentication. My users are redirected to a page (written in web.config) when they login, but some of them may not have the privileges to access this default page. In this case, I want them to redirect to another page but the RedirectFromLoginPage method always redirects to the default page in web.config. How do I make the users login, and then redirect to a page which depends on some criteria?
ANSWER:
The SetAuthCookie allows you to issue the auth cookie but retain control over the navigation. After that method is called you can run your logic to do a typical ASP.NET redirect to wherever you want. | [
"asp.net",
"forms-authentication"
] | 1 | 5 | 1,369 | 2 | 0 | 2008-08-31T14:27:32.023000 | 2008-08-31T14:32:39.187000 |
36,877 | 36,885 | How do you set up HttpOnly cookies in PHP | How can I set the cookies in my PHP apps as HttpOnly cookies? | For your cookies, see this answer. For PHP's own session cookie ( PHPSESSID, by default), see @richie's answer. The setcookie() and setrawcookie() functions introduced the boolean httponly parameter back in the dark ages of PHP 5.2.0, making this nice and easy. Simply set the 7th parameter to true, as per the syntax below (function syntax simplified for brevity): setcookie( $name, $value, $expire, $path, $domain, $secure, $httponly ) setrawcookie( $name, $value, $expire, $path, $domain, $secure, $httponly ) In PHP < 8, specify NULL for parameters you wish to remain as default. In PHP >= 8 you can benefit from using named parameters. See this question about named params. setcookie( $name, $value, httponly:true ) It is also possible using the older, lower-level header() function: header( "Set-Cookie: name=value; HttpOnly" ); You may also want to consider if you should be setting the Secure parameter. | How do you set up HttpOnly cookies in PHP How can I set the cookies in my PHP apps as HttpOnly cookies? | TITLE:
How do you set up HttpOnly cookies in PHP
QUESTION:
How can I set the cookies in my PHP apps as HttpOnly cookies?
ANSWER:
For your cookies, see this answer. For PHP's own session cookie ( PHPSESSID, by default), see @richie's answer. The setcookie() and setrawcookie() functions introduced the boolean httponly parameter back in the dark ages of PHP 5.2.0, making this nice and easy. Simply set the 7th parameter to true, as per the syntax below (function syntax simplified for brevity): setcookie( $name, $value, $expire, $path, $domain, $secure, $httponly ) setrawcookie( $name, $value, $expire, $path, $domain, $secure, $httponly ) In PHP < 8, specify NULL for parameters you wish to remain as default. In PHP >= 8 you can benefit from using named parameters. See this question about named params. setcookie( $name, $value, httponly:true ) It is also possible using the older, lower-level header() function: header( "Set-Cookie: name=value; HttpOnly" ); You may also want to consider if you should be setting the Secure parameter. | [
"php",
"security",
"cookies",
"xss",
"httponly"
] | 107 | 105 | 152,288 | 11 | 0 | 2008-08-31T14:27:50.337000 | 2008-08-31T14:38:41.730000 |
36,881 | 68,078 | Updating Android Tab Icons | I have an activity that has a TabHost containing a set of TabSpecs each with a listview containing the items to be displayed by the tab. When each TabSpec is created, I set an icon to be displayed in the tab header. The TabSpecs are created in this way within a setupTabs() method which loops to create the appropriate number of tabs: TabSpec ts = mTabs.newTabSpec("tab"); ts.setIndicator("TabTitle", iconResource);
ts.setContent(new TabHost.TabContentFactory() { public View createTabContent(String tag) {... } }); mTabs.addTab(ts); There are a couple of instances where I want to be able to change the icon which is displayed in each tab during the execution of my program. Currently, I am deleting all the tabs, and calling the above code again to re-create them. mTabs.getTabWidget().removeAllViews(); mTabs.clearAllTabs(true); setupTabs(); Is there a way to replace the icon that is being displayed without deleting and re-creating all of the tabs? | The short answer is, you're not missing anything. The Android SDK doesn't provide a direct method to change the indicator of a TabHost after it's been created. The TabSpec is only used to build the tab, so changing the TabSpec after the fact will have no effect. I think there's a workaround, though. Call mTabs.getTabWidget() to get a TabWidget object. This is just a subclass of ViewGroup, so you can call getChildCount() and getChildAt() to access individual tabs within the TabWidget. Each of these tabs is also a View, and in the case of a tab with a graphical indicator and a text label, it's almost certainly some other ViewGroup (maybe a LinearLayout, but it doesn't matter) that contains an ImageView and a TextView. So with a little fiddling with the debugger or Log.i, you should be able to figure out a recipe to get the ImageView and change it directly. The downside is that if you're not careful, the exact layout of the controls within a tab could change and your app could break. Your initial solution is perhaps more robust, but then again it might lead to other unwanted side effects like flicker or focus problems. | Updating Android Tab Icons I have an activity that has a TabHost containing a set of TabSpecs each with a listview containing the items to be displayed by the tab. When each TabSpec is created, I set an icon to be displayed in the tab header.
The TabSpecs are created in this way within a setupTabs() method which loops to create the appropriate number of tabs: TabSpec ts = mTabs.newTabSpec("tab"); ts.setIndicator("TabTitle", iconResource);
ts.setContent(new TabHost.TabContentFactory() { public View createTabContent(String tag) {... } }); mTabs.addTab(ts); There are a couple of instances where I want to be able to change the icon which is displayed in each tab during the execution of my program. Currently, I am deleting all the tabs, and calling the above code again to re-create them. mTabs.getTabWidget().removeAllViews(); mTabs.clearAllTabs(true); setupTabs(); Is there a way to replace the icon that is being displayed without deleting and re-creating all of the tabs? | TITLE:
Updating Android Tab Icons
QUESTION:
I have an activity that has a TabHost containing a set of TabSpecs each with a listview containing the items to be displayed by the tab. When each TabSpec is created, I set an icon to be displayed in the tab header. The TabSpecs are created in this way within a setupTabs() method which loops to create the appropriate number of tabs: TabSpec ts = mTabs.newTabSpec("tab"); ts.setIndicator("TabTitle", iconResource);
ts.setContent(new TabHost.TabContentFactory() { public View createTabContent(String tag) {... } }); mTabs.addTab(ts); There are a couple of instances where I want to be able to change the icon which is displayed in each tab during the execution of my program. Currently, I am deleting all the tabs, and calling the above code again to re-create them. mTabs.getTabWidget().removeAllViews(); mTabs.clearAllTabs(true); setupTabs(); Is there a way to replace the icon that is being displayed without deleting and re-creating all of the tabs?
ANSWER:
The short answer is, you're not missing anything. The Android SDK doesn't provide a direct method to change the indicator of a TabHost after it's been created. The TabSpec is only used to build the tab, so changing the TabSpec after the fact will have no effect. I think there's a workaround, though. Call mTabs.getTabWidget() to get a TabWidget object. This is just a subclass of ViewGroup, so you can call getChildCount() and getChildAt() to access individual tabs within the TabWidget. Each of these tabs is also a View, and in the case of a tab with a graphical indicator and a text label, it's almost certainly some other ViewGroup (maybe a LinearLayout, but it doesn't matter) that contains an ImageView and a TextView. So with a little fiddling with the debugger or Log.i, you should be able to figure out a recipe to get the ImageView and change it directly. The downside is that if you're not careful, the exact layout of the controls within a tab could change and your app could break. Your initial solution is perhaps more robust, but then again it might lead to other unwanted side effects like flicker or focus problems. | [
"java",
"android",
"android-tabhost"
] | 44 | 37 | 40,621 | 5 | 0 | 2008-08-31T14:36:11.610000 | 2008-09-15T23:59:28.807000 |
36,889 | 37,066 | Memcache control panel? | We've been running eAccelerator on each of 3 webservers and are looking to move to a memcache pool across all 3, hopefully reducing by about 2/3 our db lookups. One of the handy things about eAccelerator is the web-based control interface ( control.php ), which has proved very useful when we've had to flush the cache unexpectedly, quickly monitor which scripts are in cache, etc. We've been looking but haven't found anything that offers the same type of functionality for memcache - does anyone know if such a thing exists? Obviously flushing cache etc is easy enough with memcache on the console, but our particular set-up means we may have guys monitoring our front-end and needing to flush the cache who will not necessarily have shell access on the servers. | memcache.php may be what you're looking for. With memcache.php you can get stats and dumps from multiple memcache servers, delete keys, and flush servers. | Memcache control panel? We've been running eAccelerator on each of 3 webservers and are looking to move to a memcache pool across all 3, hopefully reducing by about 2/3 our db lookups. One of the handy things about eAccelerator is the web-based control interface ( control.php ), which has proved very useful when we've had to flush the cache unexpectedly, quickly monitor which scripts are in cache, etc. We've been looking but haven't found anything that offers the same type of functionality for memcache - does anyone know if such a thing exists? Obviously flushing cache etc is easy enough with memcache on the console, but our particular set-up means we may have guys monitoring our front-end and needing to flush the cache who will not necessarily have shell access on the servers. | TITLE:
Memcache control panel?
QUESTION:
We've been running eAccelerator on each of 3 webservers and are looking to move to a memcache pool across all 3, hopefully reducing by about 2/3 our db lookups. One of the handy things about eAccelerator is the web-based control interface ( control.php ), which has proved very useful when we've had to flush the cache unexpectedly, quickly monitor which scripts are in cache, etc. We've been looking but haven't found anything that offers the same type of functionality for memcache - does anyone know if such a thing exists? Obviously flushing cache etc is easy enough with memcache on the console, but our particular set-up means we may have guys monitoring our front-end and needing to flush the cache who will not necessarily have shell access on the servers.
ANSWER:
memcache.php may be what you're looking for. With memcache.php you can get stats and dumps from multiple memcache servers, delete keys, and flush servers. | [
"caching",
"memcached",
"controlpanel",
"eaccelerator"
] | 6 | 7 | 13,800 | 4 | 0 | 2008-08-31T14:41:46.857000 | 2008-08-31T20:13:51.083000 |
36,890 | 42,467 | Changing a CORBA interface without recompiling | I'd like to add a method to my existing server's CORBA interface. Will that require recompiling all clients? I'm using TAO. | Recompilation of clients is not required (and should not be, regardless of the ORB that you use). As Adam indicated, lookups are done by operation name (a straight text comparison). I've done what you're describing with our ACE/TAO-based system, and encountered no issues (servers were in ACE/TAO C++, clients were ACE/TAO C++, C# using Borland's Janeva, and OmniORBPy). | Changing a CORBA interface without recompiling I'd like to add a method to my existing server's CORBA interface. Will that require recompiling all clients? I'm using TAO. | TITLE:
Changing a CORBA interface without recompiling
QUESTION:
I'd like to add a method to my existing server's CORBA interface. Will that require recompiling all clients? I'm using TAO.
ANSWER:
Recompilation of clients is not required (and should not be, regardless of the ORB that you use). As Adam indicated, lookups are done by operation name (a straight text comparison). I've done what you're describing with our ACE/TAO-based system, and encountered no issues (servers were in ACE/TAO C++, clients were ACE/TAO C++, C# using Borland's Janeva, and OmniORBPy). | [
"c++",
"corba",
"idl",
"tao"
] | 4 | 4 | 519 | 5 | 0 | 2008-08-31T14:44:18.457000 | 2008-09-03T20:11:36.880000 |
36,901 | 36,908 | What does ** (double star/asterisk) and * (star/asterisk) do for parameters? | What do *args and **kwargs mean in these function definitions? def foo(x, y, *args): pass
def bar(x, y, **kwargs): pass See What do ** (double star/asterisk) and * (star/asterisk) mean in a function call? for the complementary question about arguments. | The *args and **kwargs are common idioms to allow an arbitrary number of arguments to functions, as described in the section more on defining functions in the Python tutorial. The *args will give you all positional arguments as a tuple: def foo(*args): for a in args: print(a)
foo(1) # 1
foo(1, 2, 3) # 1 # 2 # 3 The **kwargs will give you all keyword arguments as a dictionary: def bar(**kwargs): for a in kwargs: print(a, kwargs[a])
bar(name='one', age=27) # name one # age 27 Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments: def foo(kind, *args, bar=None, **kwargs): print(kind, args, bar, kwargs)
foo(123, 'a', 'b', apple='red') # 123 ('a', 'b') None {'apple': 'red'} It is also possible to use this the other way around: def foo(a, b, c): print(a, b, c)
obj = {'b':10, 'c':'lee'}
foo(100, **obj) # 100 10 lee Another usage of the *l idiom is to unpack argument lists when calling a function. def foo(bar, lee): print(bar, lee)
baz = [1, 2]
foo(*baz) # 1 2 In Python 3 it is possible to use *l on the left side of an assignment ( Extended Iterable Unpacking ), though it gives a list instead of a tuple in this context: first, *rest = [1, 2, 3, 4] # first = 1 # rest = [2, 3, 4] Also Python 3 adds a new semantic (refer PEP 3102 ): def func(arg1, arg2, arg3, *, kwarg1, kwarg2): pass Such function accepts only 3 positional arguments, and everything after * can only be passed as keyword arguments. Note: A Python dict, semantically used for keyword argument passing, is arbitrarily ordered. However, in Python 3.6+, keyword arguments are guaranteed to remember insertion order. "The order of elements in **kwargs now corresponds to the order in which keyword arguments were passed to the function." - What’s New In Python 3.6. In fact, all dicts in CPython 3.6 will remember insertion order as an implementation detail, and this becomes standard in Python 3.7. | What does ** (double star/asterisk) and * (star/asterisk) do for parameters? What do *args and **kwargs mean in these function definitions? def foo(x, y, *args): pass
def bar(x, y, **kwargs): pass See What do ** (double star/asterisk) and * (star/asterisk) mean in a function call? for the complementary question about arguments. | TITLE:
What does ** (double star/asterisk) and * (star/asterisk) do for parameters?
QUESTION:
What do *args and **kwargs mean in these function definitions? def foo(x, y, *args): pass
def bar(x, y, **kwargs): pass See What do ** (double star/asterisk) and * (star/asterisk) mean in a function call? for the complementary question about arguments.
ANSWER:
The *args and **kwargs are common idioms to allow an arbitrary number of arguments to functions, as described in the section more on defining functions in the Python tutorial. The *args will give you all positional arguments as a tuple: def foo(*args): for a in args: print(a)
foo(1) # 1
foo(1, 2, 3) # 1 # 2 # 3 The **kwargs will give you all keyword arguments as a dictionary: def bar(**kwargs): for a in kwargs: print(a, kwargs[a])
bar(name='one', age=27) # name one # age 27 Both idioms can be mixed with normal arguments to allow a set of fixed and some variable arguments: def foo(kind, *args, bar=None, **kwargs): print(kind, args, bar, kwargs)
foo(123, 'a', 'b', apple='red') # 123 ('a', 'b') None {'apple': 'red'} It is also possible to use this the other way around: def foo(a, b, c): print(a, b, c)
obj = {'b':10, 'c':'lee'}
foo(100, **obj) # 100 10 lee Another usage of the *l idiom is to unpack argument lists when calling a function. def foo(bar, lee): print(bar, lee)
baz = [1, 2]
foo(*baz) # 1 2 In Python 3 it is possible to use *l on the left side of an assignment ( Extended Iterable Unpacking ), though it gives a list instead of a tuple in this context: first, *rest = [1, 2, 3, 4] # first = 1 # rest = [2, 3, 4] Also Python 3 adds a new semantic (refer PEP 3102 ): def func(arg1, arg2, arg3, *, kwarg1, kwarg2): pass Such function accepts only 3 positional arguments, and everything after * can only be passed as keyword arguments. Note: A Python dict, semantically used for keyword argument passing, is arbitrarily ordered. However, in Python 3.6+, keyword arguments are guaranteed to remember insertion order. "The order of elements in **kwargs now corresponds to the order in which keyword arguments were passed to the function." - What’s New In Python 3.6. In fact, all dicts in CPython 3.6 will remember insertion order as an implementation detail, and this becomes standard in Python 3.7. | [
"python",
"syntax",
"parameter-passing",
"variadic-functions",
"argument-unpacking"
] | 3,381 | 3,226 | 1,388,483 | 28 | 0 | 2008-08-31T15:04:35.350000 | 2008-08-31T15:17:31.137000 |
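The **kwargs insertion-order guarantee mentioned at the end of the answer above is easy to check on Python 3.7+; a minimal sketch (the helper name is invented):

```python
def show_order(**kwargs):
    # Python 3.7+ guarantees kwargs keeps the order in which the
    # keyword arguments were passed at the call site.
    return list(kwargs)

print(show_order(zebra=1, apple=2, mango=3))  # ['zebra', 'apple', 'mango']
```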
36,906 | 36,942 | What is the fastest way to swap values in C? | I want to swap two integers, and I want to know which of these two implementations will be faster: The obvious way with a temp variable: void swap(int* a, int* b) { int temp = *a; *a = *b; *b = temp; } Or the xor version that I'm sure most people have seen: void swap(int* a, int* b) { *a ^= *b; *b ^= *a; *a ^= *b; } It seems like the first uses an extra register, but the second one is doing three loads and stores while the first only does two of each. Can someone tell me which is faster and why? The why being more important. | The XOR method fails if a and b point to the same address. The first XOR will clear all of the bits at the memory address pointed to by both variables, so once the function returns (*a == *b == 0), regardless of the initial value. More info on the Wiki page: XOR swap algorithm Although it's not likely that this issue would come up, I'd always prefer to use the method that's guaranteed to work, not the clever method that fails at unexpected moments. | What is the fastest way to swap values in C? I want to swap two integers, and I want to know which of these two implementations will be faster: The obvious way with a temp variable: void swap(int* a, int* b) { int temp = *a; *a = *b; *b = temp; } Or the xor version that I'm sure most people have seen: void swap(int* a, int* b) { *a ^= *b; *b ^= *a; *a ^= *b; } It seems like the first uses an extra register, but the second one is doing three loads and stores while the first only does two of each. Can someone tell me which is faster and why? The why being more important. | TITLE:
What is the fastest way to swap values in C?
QUESTION:
I want to swap two integers, and I want to know which of these two implementations will be faster: The obvious way with a temp variable: void swap(int* a, int* b) { int temp = *a; *a = *b; *b = temp; } Or the xor version that I'm sure most people have seen: void swap(int* a, int* b) { *a ^= *b; *b ^= *a; *a ^= *b; } It seems like the first uses an extra register, but the second one is doing three loads and stores while the first only does two of each. Can someone tell me which is faster and why? The why being more important.
ANSWER:
The XOR method fails if a and b point to the same address. The first XOR will clear all of the bits at the memory address pointed to by both variables, so once the function returns (*a == *b == 0), regardless of the initial value. More info on the Wiki page: XOR swap algorithm Although it's not likely that this issue would come up, I'd always prefer to use the method that's guaranteed to work, not the clever method that fails at unexpected moments. | [
"c",
"performance"
] | 76 | 93 | 67,520 | 21 | 0 | 2008-08-31T15:12:35.503000 | 2008-08-31T16:17:17.577000 |
36,915 | 36,917 | Is there any way to "sticky" a file in subversion? | We have been working with CVS for years, and frequently find it useful to "sticky" a single file here and there. Is there any way to do this in subversion, specifically from TortoiseSVN? | Short answer: no. Long answer: Working copies are sticky to a branch by definition, as changing to a different branch means changing the base-Subversion-URL used to access the repository. However sticky-revision files... that's not a concept that Subversion has. See: Subversion update command reference Appendix A of the subversion book: Subversion for CVS Users One workaround might be to manually return the file to a specific revision after doing an update. Perhaps putting something like the following into a script. svn update svn update -r1234 that/particular/file.txt Another workaround, as tweakt suggests, is to have a partial branch with just one file in it. This needs very careful management though and things can get a bit.. er... sticky:-), if you're not vigilant. | Is there any way to "sticky" a file in subversion? We have been working with CVS for years, and frequently find it useful to "sticky" a single file here and there. Is there any way to do this in subversion, specifically from TortoiseSVN? | TITLE:
Is there any way to "sticky" a file in subversion?
QUESTION:
We have been working with CVS for years, and frequently find it useful to "sticky" a single file here and there. Is there any way to do this in subversion, specifically from TortoiseSVN?
ANSWER:
Short answer: no. Long answer: Working copies are sticky to a branch by definition, as changing to a different branch means changing the base-Subversion-URL used to access the repository. However sticky-revision files... that's not a concept that Subversion has. See: Subversion update command reference Appendix A of the subversion book: Subversion for CVS Users One workaround might be to manually return the file to a specific revision after doing an update. Perhaps putting something like the following into a script. svn update svn update -r1234 that/particular/file.txt Another workaround, as tweakt suggests, is to have a partial branch with just one file in it. This needs very careful management though and things can get a bit.. er... sticky:-), if you're not vigilant. | [
"svn"
] | 1 | 2 | 1,794 | 2 | 0 | 2008-08-31T15:29:34.137000 | 2008-08-31T15:33:49.430000 |
36,916 | 36,931 | Communication between pages | I want to enable a user to be able to communicate with other users through a site. I know that ASP.net is stateless, but what can I use for this synced communication? Java servlets? | I don't think you need to set up Java just to use a servlet for this. I would use AJAX and the database. I don't know ASP.NET, but PHP is similar in this case, being also basically "stateless". If you want to display some kind of asynchronous communication between two different users, say, from two different sessions, without a lot of refreshing (like chat), you can have the AJAX page constantly poll the database for new messages, and display them when they come in. You can also use AJAX to insert the new messages, giving the user read/write access to this messages data structure. Since the "other" user is doing the same thing, user A should see new messages pop up when user B types them in. Is that what you mean? | Communication between pages I want to enable a user to be able to communicate with other users through a site. I know that ASP.net is stateless, but what can I use for this synced communication? Java servlets? | TITLE:
Communication between pages
QUESTION:
I want to enable a user to be able to communicate with other users through a site. I know that ASP.net is stateless, but what can I use for this synced communication? Java servlets?
ANSWER:
I don't think you need to set up Java just to use a servlet for this. I would use AJAX and the database. I don't know ASP.NET, but PHP is similar in this case, being also basically "stateless". If you want to display some kind of asynchronous communication between two different users, say, from two different sessions, without a lot of refreshing (like chat), you can have the AJAX page constantly poll the database for new messages, and display them when they come in. You can also use AJAX to insert the new messages, giving the user read/write access to this messages data structure. Since the "other" user is doing the same thing, user A should see new messages pop up when user B types them in. Is that what you mean?
"asp.net"
] | 1 | 2 | 792 | 5 | 0 | 2008-08-31T15:31:39.857000 | 2008-08-31T15:52:32.863000 |
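The poll-the-database idea in the answer above can be sketched independent of any framework; here the "database" is an in-memory array and all names are invented (in a real page, each client would hit a server endpoint via AJAX on a timer instead of calling these functions directly):

```javascript
// Stand-in for the messages table; rows: { id, from, text }.
const messages = [];
let nextId = 1;

// What the "send message" endpoint would do: insert a row.
function postMessage(from, text) {
  messages.push({ id: nextId++, from: from, text: text });
}

// What each client's poll would do: fetch rows newer than the last seen id.
function pollSince(lastSeenId) {
  return messages.filter(function (m) { return m.id > lastSeenId; });
}

// User B sends a message; user A's next poll picks it up.
postMessage('userB', 'hello');
const fresh = pollSince(0);
console.log(fresh.length, fresh[0].text);  // 1 hello
```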
36,932 | 1,695,250 | How can I represent an 'Enum' in Python? | I'm mainly a C# developer, but I'm currently working on a project in Python. How can I represent the equivalent of an Enum in Python? | Enums have been added to Python 3.4 as described in PEP 435. It has also been backported to 3.3, 3.2, 3.1, 2.7, 2.6, 2.5, and 2.4 on pypi. For more advanced Enum techniques try the aenum library (2.7, 3.3+, same author as enum34. Code is not perfectly compatible between py2 and py3, e.g. you'll need __order__ in python 2 ). To use enum34, do $ pip install enum34 To use aenum, do $ pip install aenum Installing enum (no numbers) will install a completely different and incompatible version. from enum import Enum # for enum34, or the stdlib version # from aenum import Enum # for the aenum version Animal = Enum('Animal', 'ant bee cat dog')
Animal.ant # returns Animal['ant'] # returns (string lookup) Animal.ant.name # returns 'ant' (inverse lookup) or equivalently: class Animal(Enum): ant = 1 bee = 2 cat = 3 dog = 4 In earlier versions, one way of accomplishing enums is: def enum(**enums): return type('Enum', (), enums) which is used like so: >>> Numbers = enum(ONE=1, TWO=2, THREE='three') >>> Numbers.ONE 1 >>> Numbers.TWO 2 >>> Numbers.THREE 'three' You can also easily support automatic enumeration with something like this: def enum(*sequential, **named): enums = dict(zip(sequential, range(len(sequential))), **named) return type('Enum', (), enums) and used like so: >>> Numbers = enum('ZERO', 'ONE', 'TWO') >>> Numbers.ZERO 0 >>> Numbers.ONE 1 Support for converting the values back to names can be added this way: def enum(*sequential, **named): enums = dict(zip(sequential, range(len(sequential))), **named) reverse = dict((value, key) for key, value in enums.iteritems()) enums['reverse_mapping'] = reverse return type('Enum', (), enums) This overwrites anything with that name, but it is useful for rendering your enums in output. It will throw a KeyError if the reverse mapping doesn't exist. With the first example: >>> Numbers.reverse_mapping['three'] 'THREE' If you are using MyPy another way to express "enums" is with typing.Literal. For example: from typing import Literal #python >=3.8 from typing_extensions import Literal #python 2.7, 3.4-3.7
Animal = Literal['ant', 'bee', 'cat', 'dog']
def hello_animal(animal: Animal): print(f"hello {animal}")
hello_animal('rock') # error hello_animal('bee') # passes | How can I represent an 'Enum' in Python? I'm mainly a C# developer, but I'm currently working on a project in Python. How can I represent the equivalent of an Enum in Python? | TITLE:
How can I represent an 'Enum' in Python?
QUESTION:
I'm mainly a C# developer, but I'm currently working on a project in Python. How can I represent the equivalent of an Enum in Python?
ANSWER:
Enums have been added to Python 3.4 as described in PEP 435. It has also been backported to 3.3, 3.2, 3.1, 2.7, 2.6, 2.5, and 2.4 on pypi. For more advanced Enum techniques try the aenum library (2.7, 3.3+, same author as enum34. Code is not perfectly compatible between py2 and py3, e.g. you'll need __order__ in python 2 ).

To use enum34, do $ pip install enum34
To use aenum, do $ pip install aenum
Installing enum (no numbers) will install a completely different and incompatible version.

from enum import Enum    # for enum34, or the stdlib version
# from aenum import Enum # for the aenum version

Animal = Enum('Animal', 'ant bee cat dog')

Animal.ant       # returns <Animal.ant: 1>
Animal['ant']    # returns <Animal.ant: 1> (string lookup)
Animal.ant.name  # returns 'ant' (inverse lookup)

or equivalently:

class Animal(Enum):
    ant = 1
    bee = 2
    cat = 3
    dog = 4

In earlier versions, one way of accomplishing enums is:

def enum(**enums):
    return type('Enum', (), enums)

which is used like so:

>>> Numbers = enum(ONE=1, TWO=2, THREE='three')
>>> Numbers.ONE
1
>>> Numbers.TWO
2
>>> Numbers.THREE
'three'

You can also easily support automatic enumeration with something like this:

def enum(*sequential, **named):
    enums = dict(zip(sequential, range(len(sequential))), **named)
    return type('Enum', (), enums)

and used like so:

>>> Numbers = enum('ZERO', 'ONE', 'TWO')
>>> Numbers.ZERO
0
>>> Numbers.ONE
1

Support for converting the values back to names can be added this way:

def enum(*sequential, **named):
    enums = dict(zip(sequential, range(len(sequential))), **named)
    reverse = dict((value, key) for key, value in enums.iteritems())
    enums['reverse_mapping'] = reverse
    return type('Enum', (), enums)

This overwrites anything with that name, but it is useful for rendering your enums in output. It will throw a KeyError if the reverse mapping doesn't exist. With the first example:

>>> Numbers.reverse_mapping['three']
'THREE'

If you are using MyPy another way to express "enums" is with typing.Literal. For example:

from typing import Literal            # python >= 3.8
from typing_extensions import Literal # python 2.7, 3.4-3.7

Animal = Literal['ant', 'bee', 'cat', 'dog']

def hello_animal(animal: Animal):
    print(f"hello {animal}")

hello_animal('rock')  # error
hello_animal('bee')   # passes | [
"python",
"python-3.x",
"enums"
] | 1,141 | 2,990 | 1,285,129 | 43 | 0 | 2008-08-31T15:55:47.910000 | 2009-11-08T03:15:28.320000 |
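The pre-3.4 enum() recipe from the answer above can be exercised end to end; this sketch uses items() in place of the Python 2 iteritems() so it runs unchanged on Python 3:

```python
def enum(*sequential, **named):
    # Same recipe as in the answer, ported to Python 3: positional names
    # get auto-numbered values, keyword names keep their given values,
    # and reverse_mapping maps each value back to its name.
    enums = dict(zip(sequential, range(len(sequential))), **named)
    reverse = dict((value, key) for key, value in enums.items())
    enums['reverse_mapping'] = reverse
    return type('Enum', (), enums)

Numbers = enum('ZERO', 'ONE', TWO=2, THREE='three')
print(Numbers.ONE)                       # 1
print(Numbers.reverse_mapping['three'])  # THREE
```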
36,949 | 36,990 | How do I use ADAM to run unit tests? | I'm writing a web site that uses Active Directory to validate users. I don't have access to an Active Directory instance that I can edit in any way. I've heard that some people are using Active Directory Application Mode (ADAM) to create AD data to be used in Unit and Integration Testing. Has anyone else done this? Are there any good sites/blogs that tell how to do this? What are the issues? Is this even a good idea? | I don't think this is a good idea, just like reading files or accessing the database in unit tests isn't a good idea. Your tests will become dependent on the state of an external piece of software. Or you will have a lot of setup and teardown code. If you write tests this way you can expect you'll spend a lot of extra time maintaining your test-code. Setting up and maintaining a build server will become harder too and setting up the development environment for new programmers will take more time. The way to go in cases like this is to set up an adapter class around the infrastructure for calling into AD and to use something like rhino-mocks or another mocking framework to set up a mock-active-directory in your tests. If you're not familiar with mocking it sounds like a lot of work. But in practice it's usually only a couple of lines of code per test. | How do I use ADAM to run unit tests? I'm writing a web site that uses Active Directory to validate users. I don't have access to an Active Directory instance that I can edit in any way. I've heard that some people are using Active Directory Application Mode (ADAM) to create AD data to be used in Unit and Integration Testing. Has anyone else done this? Are there any good sites/blogs that tell how to do this? What are the issues? Is this even a good idea? | TITLE:
How do I use ADAM to run unit tests?
QUESTION:
I'm writing a web site that uses Active Directory to validate users. I don't have access to an Active Directory instance that I can edit in any way. I've heard that some people are using Active Directory Application Mode (ADAM) to create AD data to be used in Unit and Integration Testing. Has anyone else done this? Are there any good sites/blogs that tell how to do this? What are the issues? Is this even a good idea?
ANSWER:
I don't think this is a good idea, just like reading files or accessing the database in unit tests isn't a good idea. Your tests will become dependent on the state of an external piece of software. Or you will have a lot of setup and teardown code. If you write tests this way you can expect you'll spend a lot of extra time maintaining your test-code. Setting up and maintaining a build server will become harder too and setting up the development environment for new programmers will take more time. The way to go in cases like this is to set up an adapter class around the infrastructure for calling into AD and to use something like rhino-mocks or another mocking framework to set up a mock-active-directory in your tests. If you're not familiar with mocking it sounds like a lot of work. But in practice it's usually only a couple of lines of code per test.
"testing",
"active-directory",
"adam"
] | 3 | 4 | 252 | 1 | 0 | 2008-08-31T16:41:54.367000 | 2008-08-31T17:38:43.357000 |
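The adapter-plus-mock approach in the answer above targets .NET with Rhino Mocks; the same shape, sketched in Python with unittest.mock for brevity (all class and method names here are invented):

```python
from unittest import mock

# Hypothetical adapter that hides all directory access behind one small
# interface; the production implementation would talk to AD/LDAP.
class ActiveDirectoryAdapter:
    def validate_user(self, username, password):
        raise NotImplementedError("talks to the real directory")

def login(adapter, username, password):
    # The code under test depends only on the adapter's interface.
    return "welcome" if adapter.validate_user(username, password) else "denied"

# In a test, the adapter is replaced by a mock -- no directory needed,
# and usually only a couple of lines of setup per test.
fake_ad = mock.Mock(spec=ActiveDirectoryAdapter)
fake_ad.validate_user.return_value = True
print(login(fake_ad, "alice", "secret"))  # welcome
```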
36,959 | 3,588,796 | How do you use script variables in psql? | In MS SQL Server, I create my scripts to use customizable variables: DECLARE @somevariable int SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable ) I'll then change the value of @somevariable at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember. How do I do the same with the PostgreSQL client psql? | Postgres variables are created through the \set command, for example... \set myvariable value... and can then be substituted, for example, as... SELECT * FROM :myvariable.table1;... or... SELECT * FROM table1 WHERE :myvariable IS NULL; edit: As of psql 9.1, variables can be expanded in quotes as in: \set myvariable value
SELECT * FROM table1 WHERE column1 = :'myvariable'; In older versions of the psql client:... If you want to use the variable as the value in a conditional string query, such as... SELECT * FROM table1 WHERE column1 = ':myvariable';... then you need to include the quotes in the variable itself as the above will not work. Instead define your variable as such... \set myvariable 'value' However, if, like me, you ran into a situation in which you wanted to make a string from an existing variable, I found the trick to be this... \set quoted_myvariable '\'':myvariable '\'' Now you have both a quoted and unquoted variable of the same string! And you can do something like this.... INSERT INTO :myvariable.table1 SELECT * FROM table2 WHERE column1 = :quoted_myvariable; | How do you use script variables in psql? In MS SQL Server, I create my scripts to use customizable variables: DECLARE @somevariable int SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable ) I'll then change the value of @somevariable at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember. How do I do the same with the PostgreSQL client psql? | TITLE:
How do you use script variables in psql?
QUESTION:
In MS SQL Server, I create my scripts to use customizable variables: DECLARE @somevariable int SELECT @somevariable = -1
INSERT INTO foo VALUES ( @somevariable ) I'll then change the value of @somevariable at runtime, depending on the value that I want in the particular situation. Since it's at the top of the script it's easy to see and remember. How do I do the same with the PostgreSQL client psql?
ANSWER:
Postgres variables are created through the \set command, for example... \set myvariable value... and can then be substituted, for example, as... SELECT * FROM :myvariable.table1;... or... SELECT * FROM table1 WHERE :myvariable IS NULL; edit: As of psql 9.1, variables can be expanded in quotes as in: \set myvariable value
SELECT * FROM table1 WHERE column1 = :'myvariable'; In older versions of the psql client:... If you want to use the variable as the value in a conditional string query, such as... SELECT * FROM table1 WHERE column1 = ':myvariable';... then you need to include the quotes in the variable itself as the above will not work. Instead define your variable as such... \set myvariable 'value' However, if, like me, you ran into a situation in which you wanted to make a string from an existing variable, I found the trick to be this... \set quoted_myvariable '\'':myvariable '\'' Now you have both a quoted and unquoted variable of the same string! And you can do something like this.... INSERT INTO :myvariable.table1 SELECT * FROM table2 WHERE column1 = :quoted_myvariable; | [
"sql",
"postgresql",
"variables",
"psql"
] | 197 | 242 | 340,195 | 13 | 0 | 2008-08-31T16:54:33.183000 | 2010-08-27T23:40:58.287000 |
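Besides \set inside a script, psql can take variable assignments on the command line with -v; a sketch of assembling such an invocation (database, file, and variable names are invented, and actually running the command needs a reachable PostgreSQL server):

```shell
# build_cmd assembles a psql invocation that passes two variables with -v;
# inside report.sql they would be referenced as :tablename and :'label'.
build_cmd() {
  printf '%s' "psql -d $1 -v tablename=$2 -v label='$3' -f report.sql"
}

build_cmd mydb table1 monthly
```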
36,968 | 36,976 | Designing Panels without a parent Form in VS? | Are there any tools or plugins to design a Panel independently of a Form (Windows, not Web Form) within Visual Studio? I've been using the designer and manually extracting the bits I want from the source, but surely there is a nicer way. | You could do all the design work inside of a UserControl. If you go that route, instead of just copying the bits out of the user control, simply use the user control itself. | Designing Panels without a parent Form in VS? Are there any tools or plugins to design a Panel independently of a Form (Windows, not Web Form) within Visual Studio? I've been using the designer and manually extracting the bits I want from the source, but surely there is a nicer way. | TITLE:
Designing Panels without a parent Form in VS?
QUESTION:
Are there any tools or plugins to design a Panel independently of a Form (Windows, not Web Form) within Visual Studio? I've been using the designer and manually extracting the bits I want from the source, but surely there is a nicer way.
ANSWER:
You could do all the design work inside of a UserControl. If you go that route, instead of just copying the bits out of the user control, simply use the user control itself. | [
"visual-studio"
] | 1 | 1 | 143 | 3 | 0 | 2008-08-31T17:07:39.533000 | 2008-08-31T17:13:33.620000 |
36,984 | 36,993 | Is there a standard way to return values from custom dialogs in Windows Forms? | So right now my project has a few custom dialogs that do things like prompt the user for his birthday, or whatever. Right now they're just doing things like setting a this.Birthday property once they get an answer (which is of type DateTime?, with the null indicating a "Cancel"). Then the caller inspects the Birthday property of the dialog it created to figure out what the user answered. My question is, is there a more standard pattern for doing stuff like this? I know we can set this.DialogResult for basic OK/Cancel stuff, but is there a more general way in Windows Forms for a form to indicate "here's the data I collected"? | I would say exposing properties on your custom dialog is the idiomatic way to go because that is how standard dialogs (like the Select/OpenFileDialog) do it. Someone could argue it is more explicit and intention revealing to have a ShowBirthdayDialog() method that returns the result you're looking for, but following the framework's pattern is probably the wise way to go. | Is there a standard way to return values from custom dialogs in Windows Forms? So right now my project has a few custom dialogs that do things like prompt the user for his birthday, or whatever. Right now they're just doing things like setting a this.Birthday property once they get an answer (which is of type DateTime?, with the null indicating a "Cancel"). Then the caller inspects the Birthday property of the dialog it created to figure out what the user answered. My question is, is there a more standard pattern for doing stuff like this? I know we can set this.DialogResult for basic OK/Cancel stuff, but is there a more general way in Windows Forms for a form to indicate "here's the data I collected"? | TITLE:
Is there a standard way to return values from custom dialogs in Windows Forms?
QUESTION:
So right now my project has a few custom dialogs that do things like prompt the user for his birthday, or whatever. Right now they're just doing things like setting a this.Birthday property once they get an answer (which is of type DateTime?, with the null indicating a "Cancel"). Then the caller inspects the Birthday property of the dialog it created to figure out what the user answered. My question is, is there a more standard pattern for doing stuff like this? I know we can set this.DialogResult for basic OK/Cancel stuff, but is there a more general way in Windows Forms for a form to indicate "here's the data I collected"?
ANSWER:
I would say exposing properties on your custom dialog is the idiomatic way to go because that is how standard dialogs (like the Select/OpenFileDialog) do it. Someone could argue it is more explicit and intention revealing to have a ShowBirthdayDialog() method that returns the result you're looking for, but following the framework's pattern is probably the wise way to go. | [
".net",
"winforms",
"user-interface"
] | 6 | 9 | 386 | 5 | 0 | 2008-08-31T17:28:11.497000 | 2008-08-31T17:50:56.620000 |
36,991 | 37,001 | Do you have to register a Dialog Box? | So, I am a total beginner in any kind of Windows related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize create windows and such. One example creates a regular window (I abbreviated some of the code): int WINAPI WinMain( [...] ) {
[...]
// Windows Class setup wndClass.cbSize = sizeof( wndClass ); wndClass.style = CS_HREDRAW | CS_VREDRAW; [...]
// Register class RegisterClassEx( &wndClass );
// Create window hWnd = CreateWindow( szAppName, "Win32 App", WS_OVERLAPPEDWINDOW, 0, 0, 512, 384, NULL, NULL, hInstance, NULL ); [...] } The second example creates a dialog box (no abbreviations except the WinMain arguments): int WINAPI WinMain( [...] ) { // Create dialog box DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG), NULL, (DLGPROC)DialogProc); } The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc process attached. This works fine, but I am wondering if there is a benefit of registering the window class and then creating the dialog box (if this is at all possible). | You do not have to register a dialog box. Dialog boxes are predefined so (as you noted) there is no reference to a window class when you create a dialog. If you want more control of a dialog (like you get when you create your own window class) you would subclass the dialog which is a method by which you replace the dialogs window procedure with your own. When your procedure is called you modify the behavior of the dialog window; you then might or might not call the original window procedure depending upon what you're trying to do. | Do you have to register a Dialog Box? So, I am a total beginner in any kind of Windows related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize create windows and such. One example creates a regular window (I abbreviated some of the code): int WINAPI WinMain( [...] ) {
[...]
// Windows Class setup wndClass.cbSize = sizeof( wndClass ); wndClass.style = CS_HREDRAW | CS_VREDRAW; [...]
// Register class RegisterClassEx( &wndClass );
// Create window hWnd = CreateWindow( szAppName, "Win32 App", WS_OVERLAPPEDWINDOW, 0, 0, 512, 384, NULL, NULL, hInstance, NULL ); [...] } The second example creates a dialog box (no abbreviations except the WinMain arguments): int WINAPI WinMain( [...] ) { // Create dialog box DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG), NULL, (DLGPROC)DialogProc); } The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc process attached. This works fine, but I am wondering if there is a benefit of registering the window class and then creating the dialog box (if this is at all possible). | TITLE:
Do you have to register a Dialog Box?
QUESTION:
So, I am a total beginner in any kind of Windows related programming. I have been playing around with the Windows API and came across a couple of examples on how to initialize create windows and such. One example creates a regular window (I abbreviated some of the code): int WINAPI WinMain( [...] ) {
[...]
// Windows Class setup wndClass.cbSize = sizeof( wndClass ); wndClass.style = CS_HREDRAW | CS_VREDRAW; [...]
// Register class RegisterClassEx( &wndClass );
// Create window hWnd = CreateWindow( szAppName, "Win32 App", WS_OVERLAPPEDWINDOW, 0, 0, 512, 384, NULL, NULL, hInstance, NULL ); [...] } The second example creates a dialog box (no abbreviations except the WinMain arguments): int WINAPI WinMain( [...] ) { // Create dialog box DialogBox(hInstance, MAKEINTRESOURCE(IDD_MAIN_DLG), NULL, (DLGPROC)DialogProc); } The second example does not contain any call to the register function. It just creates the DialogBox with its DialogProc process attached. This works fine, but I am wondering if there is a benefit of registering the window class and then creating the dialog box (if this is at all possible).
ANSWER:
You do not have to register a dialog box. Dialog boxes are predefined so (as you noted) there is no reference to a window class when you create a dialog. If you want more control of a dialog (like you get when you create your own window class) you would subclass the dialog which is a method by which you replace the dialogs window procedure with your own. When your procedure is called you modify the behavior of the dialog window; you then might or might not call the original window procedure depending upon what you're trying to do. | [
"c++",
"winapi"
] | 1 | 2 | 1,695 | 2 | 0 | 2008-08-31T17:45:46.227000 | 2008-08-31T18:12:07.443000 |
36,999 | 37,012 | Best practices for versioning your services with WCF? | I'm starting to work with my model almost exclusively in WCF and wanted to get some practical approaches to versioning these services over time. Can anyone point me in the right direction? | There is a good writeup on Craig McMurtry's WebLog. It's from 2006, but most of it is still relevant. As well as a decision tree to walk through the choices, he shows how to implement those changes using Windows Communication Foundation. | Best practices for versioning your services with WCF? I'm starting to work with my model almost exclusively in WCF and wanted to get some practical approaches to versioning these services over time. Can anyone point me in the right direction? | TITLE:
Best practices for versioning your services with WCF?
QUESTION:
I'm starting to work with my model almost exclusively in WCF and wanted to get some practical approaches to versioning these services over time. Can anyone point me in the right direction?
ANSWER:
There is a good writeup on Craig McMurtry's WebLog. It's from 2006, but most of it is still relevant. As well as a decision tree to walk through the choices, he shows how to implement those changes using Windows Communication Foundation.
"wcf",
"versioning",
"backwards-compatibility"
] | 27 | 25 | 11,464 | 3 | 0 | 2008-08-31T18:05:49.207000 | 2008-08-31T18:23:22.237000 |
37,011 | 37,021 | Should you ever use protected member variables? | Should you ever use protected member variables? What are the advantages and what issues can this cause? | Should you ever use protected member variables? Depends on how picky you are about hiding state. If you don't want any leaking of internal state, then declaring all your member variables private is the way to go. If you don't really care that subclasses can access internal state, then protected is good enough. If a developer comes along and subclasses your class, they may mess it up because they don't understand it fully. With private members, other than the public interface, they can't see the implementation-specific details of how things are being done, which gives you the flexibility of changing it later. | Should you ever use protected member variables? Should you ever use protected member variables? What are the advantages and what issues can this cause? | TITLE:
Should you ever use protected member variables?
QUESTION:
Should you ever use protected member variables? What are the advantages and what issues can this cause?
ANSWER:
Should you ever use protected member variables? Depends on how picky you are about hiding state. If you don't want any leaking of internal state, then declaring all your member variables private is the way to go. If you don't really care that subclasses can access internal state, then protected is good enough. If a developer comes along and subclasses your class, they may mess it up because they don't understand it fully. With private members, other than the public interface, they can't see the implementation-specific details of how things are being done, which gives you the flexibility of changing it later.
"oop",
"protected"
] | 109 | 86 | 75,870 | 10 | 0 | 2008-08-31T18:22:46.230000 | 2008-08-31T18:34:54.003000 |
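The trade-off described in the answer can be sketched in Java (the class and method names here are invented for illustration): keeping the field private means even subclasses must go through a controlled hook, so the base class keeps its invariants and stays free to change its internal representation later.

```java
// Counter hides its state even from subclasses; StepCounter can only
// interact with it through the public/protected API.
class Counter {
    private int count = 0;               // hidden even from subclasses

    public int value() {
        return count;
    }

    protected void increment() {         // controlled hook for subclasses
        count++;
    }
}

class StepCounter extends Counter {
    private final int step;

    StepCounter(int step) {
        this.step = step;
    }

    public void tick() {
        // Cannot touch 'count' directly; must use the protected hook,
        // so any invariants enforced by Counter cannot be bypassed.
        for (int i = 0; i < step; i++) {
            increment();
        }
    }
}

public class ProtectedDemo {
    public static void main(String[] args) {
        StepCounter c = new StepCounter(3);
        c.tick();
        System.out.println(c.value());   // prints 3
    }
}
```

Had `count` been declared protected instead, `StepCounter` could assign to it directly, and `Counter` could never change how the value is stored without breaking subclasses.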
37,026 | 37,046 | Java: notify() vs. notifyAll() all over again | If one Googles for "difference between notify() and notifyAll() " then a lot of explanations will pop up (leaving apart the javadoc paragraphs). It all boils down to the number of waiting threads being woken up: one in notify() and all in notifyAll(). However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition; in the first case the one selected by the VM, in the second case the one selected by the system thread scheduler. The exact selection procedures for both of them (in the general case) are not known to the programmer. What's the useful difference between notify() and notifyAll() then? Am I missing something? | However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition. That is not correct. o.notifyAll() wakes all of the threads that are blocked in o.wait() calls. The threads are only allowed to return from o.wait() one-by-one, but they each will get their turn. Simply put, it depends on why your threads are waiting to be notified. Do you want to tell one of the waiting threads that something happened, or do you want to tell all of them at the same time? In some cases, all waiting threads can take useful action once the wait finishes. An example would be a set of threads waiting for a certain task to finish; once the task has finished, all waiting threads can continue with their business. In such a case you would use notifyAll() to wake up all waiting threads at the same time. Another case, for example mutually exclusive locking, only one of the waiting threads can do something useful after being notified (in this case acquire the lock). In such a case, you would rather use notify(). Properly implemented, you could use notifyAll() in this situation as well, but you would unnecessarily wake threads that can't do anything anyway.
In many cases, the code to await a condition will be written as a loop: synchronized(o) { while (! IsConditionTrue()) { o.wait(); } DoSomethingThatOnlyMakesSenseWhenConditionIsTrue_and_MaybeMakeConditionFalseAgain(); } That way, if an o.notifyAll() call wakes more than one waiting thread, and the first one to return from the o.wait() leaves the condition in the false state, then the other threads that were awakened will go back to waiting. | Java: notify() vs. notifyAll() all over again If one Googles for "difference between notify() and notifyAll() " then a lot of explanations will pop up (leaving apart the javadoc paragraphs). It all boils down to the number of waiting threads being woken up: one in notify() and all in notifyAll(). However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition; in the first case the one selected by the VM, in the second case the one selected by the system thread scheduler. The exact selection procedures for both of them (in the general case) are not known to the programmer. What's the useful difference between notify() and notifyAll() then? Am I missing something? | TITLE:
Java: notify() vs. notifyAll() all over again
QUESTION:
If one Googles for "difference between notify() and notifyAll() " then a lot of explanations will pop up (leaving apart the javadoc paragraphs). It all boils down to the number of waiting threads being woken up: one in notify() and all in notifyAll(). However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition; in the first case the one selected by the VM, in the second case the one selected by the system thread scheduler. The exact selection procedures for both of them (in the general case) are not known to the programmer. What's the useful difference between notify() and notifyAll() then? Am I missing something?
ANSWER:
However (if I do understand the difference between these methods right), only one thread is always selected for further monitor acquisition. That is not correct. o.notifyAll() wakes all of the threads that are blocked in o.wait() calls. The threads are only allowed to return from o.wait() one-by-one, but they each will get their turn. Simply put, it depends on why your threads are waiting to be notified. Do you want to tell one of the waiting threads that something happened, or do you want to tell all of them at the same time? In some cases, all waiting threads can take useful action once the wait finishes. An example would be a set of threads waiting for a certain task to finish; once the task has finished, all waiting threads can continue with their business. In such a case you would use notifyAll() to wake up all waiting threads at the same time. Another case, for example mutually exclusive locking, only one of the waiting threads can do something useful after being notified (in this case acquire the lock). In such a case, you would rather use notify(). Properly implemented, you could use notifyAll() in this situation as well, but you would unnecessarily wake threads that can't do anything anyway. In many cases, the code to await a condition will be written as a loop: synchronized(o) { while (! IsConditionTrue()) { o.wait(); } DoSomethingThatOnlyMakesSenseWhenConditionIsTrue_and_MaybeMakeConditionFalseAgain(); } That way, if an o.notifyAll() call wakes more than one waiting thread, and the first one to return from the o.wait() leaves the condition in the false state, then the other threads that were awakened will go back to waiting. | [
"java",
"multithreading"
] | 424 | 269 | 235,058 | 26 | 0 | 2008-08-31T18:47:12.850000 | 2008-08-31T19:25:22.790000 |
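The wait-in-a-loop pattern shown in the answer can be expanded into a small, self-contained Java example (the names are invented). Several threads wait on the same monitor; notifyAll() wakes all of them, and each one re-checks the condition before proceeding, which also guards against spurious wakeups:

```java
public class NotifyAllDemo {
    private static final Object lock = new Object();
    private static boolean taskFinished = false;
    private static int proceeded = 0;

    static int runDemo() throws InterruptedException {
        Thread[] waiters = new Thread[3];
        for (int i = 0; i < waiters.length; i++) {
            waiters[i] = new Thread(() -> {
                synchronized (lock) {
                    // A loop, not an 'if': re-check the condition on every
                    // wakeup, spurious or otherwise.
                    while (!taskFinished) {
                        try {
                            lock.wait();
                        } catch (InterruptedException e) {
                            return;
                        }
                    }
                    proceeded++;   // every waiter can act once the task is done
                }
            });
            waiters[i].start();
        }

        synchronized (lock) {
            taskFinished = true;
            lock.notifyAll();      // wake all waiters, not just one
        }
        for (Thread t : waiters) {
            t.join();
        }
        return proceeded;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runDemo());   // prints 3
    }
}
```

Replacing notifyAll() with notify() here would wake only one waiter, and the program would hang on join() because the other two threads never leave wait() — which is exactly the distinction the answer draws.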
37,030 | 37,035 | How to best implement software updates on windows? | I want to implement an "automatic update" system for a windows application. Right now I'm semi-manually creating an "appcast" which my program checks, and notifies the user that a new version is available. (I'm using NSIS for my installers). Is there software that I can use that will handle the "automatic" part of the updates, perhaps similar to Sparkle on the mac? Any issues/pitfalls that I should be aware of? | There is no solution quite as smooth as Sparkle (that I know of). If you need an easy means of deployment and updating applications, ClickOnce is an option. Unfortunately, it's inflexible (e.g., no per-machine installation instead of per-user), opaque (you have very little influence and clarity and control over how its deployment actually works) and non-standard (the paths it stores the installed app in are unlike anything else on Windows). Much closer to what you're asking would be ClickThrough, a side project of WiX, but I'm not sure it's still in development (if it is, they should be clearer about that…) — and it would use MSI in any case, not NSIS. You're likely best off rolling something on your own. I'd love to see a Sparkle-like project for Windows, but nobody seems to have given it a shot thus far. | How to best implement software updates on windows? I want to implement an "automatic update" system for a windows application. Right now I'm semi-manually creating an "appcast" which my program checks, and notifies the user that a new version is available. (I'm using NSIS for my installers). Is there software that I can use that will handle the "automatic" part of the updates, perhaps similar to Sparkle on the mac? Any issues/pitfalls that I should be aware of? | TITLE:
How to best implement software updates on windows?
QUESTION:
I want to implement an "automatic update" system for a windows application. Right now I'm semi-manually creating an "appcast" which my program checks, and notifies the user that a new version is available. (I'm using NSIS for my installers). Is there software that I can use that will handle the "automatic" part of the updates, perhaps similar to Sparkle on the mac? Any issues/pitfalls that I should be aware of?
ANSWER:
There is no solution quite as smooth as Sparkle (that I know of). If you need an easy means of deployment and updating applications, ClickOnce is an option. Unfortunately, it's inflexible (e.g., no per-machine installation instead of per-user), opaque (you have very little influence and clarity and control over how its deployment actually works) and non-standard (the paths it stores the installed app in are unlike anything else on Windows). Much closer to what you're asking would be ClickThrough, a side project of WiX, but I'm not sure it's still in development (if it is, they should be clearer about that…) — and it would use MSI in any case, not NSIS. You're likely best off rolling something on your own. I'd love to see a Sparkle-like project for Windows, but nobody seems to have given it a shot thus far. | [
"windows",
"installation"
] | 23 | 7 | 7,998 | 9 | 0 | 2008-08-31T18:57:29.050000 | 2008-08-31T19:10:46.227000 |
37,041 | 37,093 | Exposing a remote interface or object model | I have a question on the best way of exposing an asynchronous remote interface. The conditions are as follows: The protocol is asynchronous A third party can modify the data at any time The command round-trip can be significant The model should be well suited for UI interaction The protocol supports queries over certain objects, and so must the model As a means of improving my lacking skills in this area (and brush up my Java in general), I have started a project to create an Eclipse-based front-end for xmms2 (described below). So, the question is; how should I expose the remote interface as a neat data model (In this case, track management and event handling)? I welcome anything from generic discussions to pattern name-dropping or concrete examples and patches:) My primary goal here is learning about this class of problems in general. If my project can gain from it, fine, but I present it strictly to have something to start a discussion around. I've implemented a protocol abstraction which I call 'client' (for legacy reasons) which allows me to access most exposed features using method calls which I am happy with even if it's far from perfect. The features provided by the xmms2 daemon are things like track searching, meta-data retrieval and manipulation, change playback state, load playlists and so on and so forth. I'm in the middle of updating to the latest stable release of xmms2, and I figured I might as well fix some of the glaring weaknesses of my current implementation. My plan is to build a better abstraction on top of the protocol interface, one that allows a more natural interaction with the daemon. The current 'model' implementation is hard to use and is frankly quite ugly (not to mention the UI-code which is truly horrible atm). Today I have the Tracks interface which I can use to get instances of Track classes based on their id. 
Searching is performed through the Collections interface (unfortunate namespace clash) which I'd rather move to Tracks, I think. Any data can be modified by a third party at any time, and this should be properly reflected in the model and change-notifications distributed These interfaces are exposed when connecting, by returning an object hierarchy that looks like this: Connection Playback getPlayback() Play, pause, jump, current track etc Expose playback state changes Tracks getTracks() Track getTrack(id) etc Expose track updates Collections getCollection() Load and manipulate playlists or named collections Query media library Expose collection updates | For the asynchronous bit, I would suggest checking into java.util.concurrent, and especially the Future interface. The future interface is used to represent objects which are not ready yet, but are being created in a separate thread. You say that objects can be modified at any time by a third party, but I would still suggest you use immutable return objects here, and instead have a separate thread/event log you can subscribe to to get notified when objects expire. I have done little programming with UIs, but I believe using Futures for asynchronous calls would let you have a responsive GUI, rather than one that was waiting for a server reply. For the queries I would suggest using method chaining to build the query object, and each object returned by method chaining should be Iterable. Similar to how Django's model is. Say you have QuerySet which implements Iterable. You can then call allSongs() which would return a result iterating over all Songs. Or allSongs().artist("Beatles"), and you would have an iterable over all Beatles songs. Or even allSongs().artist("Beatles").years(1965,1967) and so on. Hope this helps as a starting place. | Exposing a remote interface or object model I have a question on the best way of exposing an asynchronous remote interface.
The conditions are as follows: The protocol is asynchronous A third party can modify the data at any time The command round-trip can be significant The model should be well suited for UI interaction The protocol supports queries over certain objects, and so must the model As a means of improving my lacking skills in this area (and brush up my Java in general), I have started a project to create an Eclipse-based front-end for xmms2 (described below). So, the question is; how should I expose the remote interface as a neat data model (In this case, track management and event handling)? I welcome anything from generic discussions to pattern name-dropping or concrete examples and patches:) My primary goal here is learning about this class of problems in general. If my project can gain from it, fine, but I present it strictly to have something to start a discussion around. I've implemented a protocol abstraction which I call 'client' (for legacy reasons) which allows me to access most exposed features using method calls which I am happy with even if it's far from perfect. The features provided by the xmms2 daemon are things like track searching, meta-data retrieval and manipulation, change playback state, load playlists and so on and so forth. I'm in the middle of updating to the latest stable release of xmms2, and I figured I might as well fix some of the glaring weaknesses of my current implementation. My plan is to build a better abstraction on top of the protocol interface, one that allows a more natural interaction with the daemon. The current 'model' implementation is hard to use and is frankly quite ugly (not to mention the UI-code which is truly horrible atm). Today I have the Tracks interface which I can use to get instances of Track classes based on their id. Searching is performed through the Collections interface (unfortunate namespace clash) which I'd rather move to Tracks, I think. 
Any data can be modified by a third party at any time, and this should be properly reflected in the model and change-notifications distributed These interfaces are exposed when connecting, by returning an object hierarchy that looks like this: Connection Playback getPlayback() Play, pause, jump, current track etc Expose playback state changes Tracks getTracks() Track getTrack(id) etc Expose track updates Collections getCollection() Load and manipulate playlists or named collections Query media library Expose collection updates | TITLE:
Exposing a remote interface or object model
QUESTION:
I have a question on the best way of exposing an asynchronous remote interface. The conditions are as follows: The protocol is asynchronous A third party can modify the data at any time The command round-trip can be significant The model should be well suited for UI interaction The protocol supports queries over certain objects, and so must the model As a means of improving my lacking skills in this area (and brush up my Java in general), I have started a project to create an Eclipse-based front-end for xmms2 (described below). So, the question is; how should I expose the remote interface as a neat data model (In this case, track management and event handling)? I welcome anything from generic discussions to pattern name-dropping or concrete examples and patches:) My primary goal here is learning about this class of problems in general. If my project can gain from it, fine, but I present it strictly to have something to start a discussion around. I've implemented a protocol abstraction which I call 'client' (for legacy reasons) which allows me to access most exposed features using method calls which I am happy with even if it's far from perfect. The features provided by the xmms2 daemon are things like track searching, meta-data retrieval and manipulation, change playback state, load playlists and so on and so forth. I'm in the middle of updating to the latest stable release of xmms2, and I figured I might as well fix some of the glaring weaknesses of my current implementation. My plan is to build a better abstraction on top of the protocol interface, one that allows a more natural interaction with the daemon. The current 'model' implementation is hard to use and is frankly quite ugly (not to mention the UI-code which is truly horrible atm). Today I have the Tracks interface which I can use to get instances of Track classes based on their id. 
Searching is performed through the Collections interface (unfortunate namespace clash) which I'd rather move to Tracks, I think. Any data can be modified by a third party at any time, and this should be properly reflected in the model and change-notifications distributed These interfaces are exposed when connecting, by returning an object hierarchy that looks like this: Connection Playback getPlayback() Play, pause, jump, current track etc Expose playback state changes Tracks getTracks() Track getTrack(id) etc Expose track updates Collections getCollection() Load and manipulate playlists or named collections Query media library Expose collection updates
ANSWER:
For the asynchronous bit, I would suggest checking into java.util.concurrent, and especially the Future interface. The future interface is used to represent objects which are not ready yet, but are being created in a separate thread. You say that objects can be modified at any time by a third party, but I would still suggest you use immutable return objects here, and instead have a separate thread/event log you can subscribe to to get notified when objects expire. I have done little programming with UIs, but I believe using Futures for asynchronous calls would let you have a responsive GUI, rather than one that was waiting for a server reply. For the queries I would suggest using method chaining to build the query object, and each object returned by method chaining should be Iterable. Similar to how Django's model is. Say you have QuerySet which implements Iterable. You can then call allSongs() which would return a result iterating over all Songs. Or allSongs().artist("Beatles"), and you would have an iterable over all Beatles songs. Or even allSongs().artist("Beatles").years(1965,1967) and so on. Hope this helps as a starting place. | [
"java",
"eclipse",
"osgi",
"oop"
] | 2 | 2 | 571 | 5 | 0 | 2008-08-31T19:21:27.200000 | 2008-08-31T20:52:01.193000 |
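The chained, iterable query style the answer suggests might look like this in Java — a hypothetical sketch: Song, SongQuery, and the filter methods are invented names, and a real implementation would likely filter lazily rather than copying lists on every call.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

class Song {
    final String title;
    final String artist;
    final int year;

    Song(String title, String artist, int year) {
        this.title = title;
        this.artist = artist;
        this.year = year;
    }
}

class SongQuery implements Iterable<Song> {
    private final List<Song> songs;

    SongQuery(List<Song> songs) {
        this.songs = songs;
    }

    // Each filter returns a new query, so calls chain freely.
    SongQuery artist(String name) {
        List<Song> out = new ArrayList<>();
        for (Song s : songs) if (s.artist.equals(name)) out.add(s);
        return new SongQuery(out);
    }

    SongQuery years(int from, int to) {
        List<Song> out = new ArrayList<>();
        for (Song s : songs) if (s.year >= from && s.year <= to) out.add(s);
        return new SongQuery(out);
    }

    public Iterator<Song> iterator() {
        return songs.iterator();
    }

    int count() {
        return songs.size();
    }
}

public class QueryDemo {
    static SongQuery allSongs() {
        List<Song> lib = new ArrayList<>();
        lib.add(new Song("Help!", "Beatles", 1965));
        lib.add(new Song("Hey Jude", "Beatles", 1968));
        lib.add(new Song("Paint It Black", "Rolling Stones", 1966));
        return new SongQuery(lib);
    }

    public static void main(String[] args) {
        // Reads like the Django-style queries described in the answer.
        for (Song s : allSongs().artist("Beatles").years(1965, 1967)) {
            System.out.println(s.title);   // prints Help!
        }
    }
}
```

Because each filter returns a fresh SongQuery, partial queries can be stored and refined, and any query can be dropped straight into a for-each loop.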
37,043 | 37,566 | Flex MVC Frameworks | I'm currently using and enjoying using the Flex MVC framework PureMVC. I have heard some good things about Cairngorm, which is supported by Adobe and has first-to-market momentum. And there is a new player called Mate, which has a good deal of buzz. Has anyone tried two or three of these frameworks and formed an opinion? Thanks! | Mate is my pick. The first and foremost reason is that it is completely unobtrusive. My application code has no dependencies on the framework, it is highly decoupled, reusable and testable. One of the nicest features of Mate is the declarative configuration, essentially you wire up your application in using tags in what is called an event map -- basically a list of events that your application generates, and what actions to take when they occur. The event map gives a good overview of what your application does. Mate uses Flex' own event mechanism, it does not invent its own like most other frameworks. You can dispatch an event from anywhere in the view hierarchy and have it bubble up to the framework automatically, instead of having to have a direct line, like Cairngorms CairngormEventDispatcher or PureMVC's notification system. Mate also uses a form of dependency injection (leveraging bindings) that makes it possible to connect your models to your views without either one knowing about the other. This is probably the most powerful feature of the framework. In my view none of the other Flex application frameworks come anywhere near Mate. However, these are the contenders and why I consider them to be less useful: PureMVC actively denies you many of the benefits of Flex (for example bindings and event bubbling) in order for the framework to be portable -- a doubious goal in my view. It is also over-engineered, and as invasive as they come. Every single part of your application depends on the framework. However, PureMVC isn't terrible, just not a very good fit for Flex. 
An alternative is FlexMVCS, an effort to make PureMVC more suitable for Flex (unfortunately there's no documentation yet, just source). Cairngorm is a bundle of anti-patterns that lead to applications that are tightly coupled to global variables. Nuff said (but if you're interested, here are some more of my thoughts, and here too ). Swiz is a framework inspired by the Spring framework for Java and Cairngorm (trying to make up for the worst parts of the latter). It provides a dependency injection container and uses metadata to enable auto-wiring of dependencies. It is interesting, but a little bizarre in that it goes to such lengths to avoid the global variables of Cairngorm by using dependency injection but then uses a global variable for central event dispatching. Those are the ones I've tried or researched. There are a few others that I've heard about, but none that I think are widely used. Mate and Swiz were both presented at the recent 360|Flex conference, and there are videos available ( the Mate folks have instructions on how to watch them ) | Flex MVC Frameworks I'm currently using and enjoying using the Flex MVC framework PureMVC. I have heard some good things about Cairngorm, which is supported by Adobe and has first-to-market momentum. And there is a new player called Mate, which has a good deal of buzz. Has anyone tried two or three of these frameworks and formed an opinion? Thanks! | TITLE:
Flex MVC Frameworks
QUESTION:
I'm currently using and enjoying using the Flex MVC framework PureMVC. I have heard some good things about Cairngorm, which is supported by Adobe and has first-to-market momentum. And there is a new player called Mate, which has a good deal of buzz. Has anyone tried two or three of these frameworks and formed an opinion? Thanks!
ANSWER:
Mate is my pick. The first and foremost reason is that it is completely unobtrusive. My application code has no dependencies on the framework, it is highly decoupled, reusable and testable. One of the nicest features of Mate is the declarative configuration, essentially you wire up your application using tags in what is called an event map -- basically a list of events that your application generates, and what actions to take when they occur. The event map gives a good overview of what your application does. Mate uses Flex' own event mechanism, it does not invent its own like most other frameworks. You can dispatch an event from anywhere in the view hierarchy and have it bubble up to the framework automatically, instead of having to have a direct line, like Cairngorm's CairngormEventDispatcher or PureMVC's notification system. Mate also uses a form of dependency injection (leveraging bindings) that makes it possible to connect your models to your views without either one knowing about the other. This is probably the most powerful feature of the framework. In my view none of the other Flex application frameworks come anywhere near Mate. However, these are the contenders and why I consider them to be less useful: PureMVC actively denies you many of the benefits of Flex (for example bindings and event bubbling) in order for the framework to be portable -- a dubious goal in my view. It is also over-engineered, and as invasive as they come. Every single part of your application depends on the framework. However, PureMVC isn't terrible, just not a very good fit for Flex. An alternative is FlexMVCS, an effort to make PureMVC more suitable for Flex (unfortunately there's no documentation yet, just source). Cairngorm is a bundle of anti-patterns that lead to applications that are tightly coupled to global variables. Nuff said (but if you're interested, here are some more of my thoughts, and here too ).
Swiz is a framework inspired by the Spring framework for Java and Cairngorm (trying to make up for the worst parts of the latter). It provides a dependency injection container and uses metadata to enable auto-wiring of dependencies. It is interesting, but a little bizarre in that it goes to such lengths to avoid the global variables of Cairngorm by using dependency injection but then uses a global variable for central event dispatching. Those are the ones I've tried or researched. There are a few others that I've heard about, but none that I think are widely used. Mate and Swiz were both presented at the recent 360|Flex conference, and there are videos available ( the Mate folks have instructions on how to watch them ) | [
"apache-flex",
"model-view-controller",
"frameworks"
] | 33 | 48 | 17,990 | 14 | 0 | 2008-08-31T19:24:55.270000 | 2008-09-01T07:27:05.347000 |
37,053 | 37,055 | How can I get the localized name of a 'special' windows folder (Recycle bin etc.)? | I'm trying to find out the 'correct' windows API for finding out the localized name of 'special' folders, specifically the Recycle Bin. I want to be able to prompt the user with a suitably localized dialog box asking them if they want to send files to the recycle bin or delete them directly. I've found lots on the internet (and on Stackoverflow) about how to do the actual deletion, and it seems simple enough, I just really want to be able to have the text localized. | Read this article for code samples and usage: http://www.codeproject.com/KB/winsdk/SpecialFolders.aspx Also there is an article on MSDN that helps you Identify the Location of Special Folders with API Calls | How can I get the localized name of a 'special' windows folder (Recycle bin etc.)? I'm trying to find out the 'correct' windows API for finding out the localized name of 'special' folders, specifically the Recycle Bin. I want to be able to prompt the user with a suitably localized dialog box asking them if they want to send files to the recycle bin or delete them directly. I've found lots on the internet (and on Stackoverflow) about how to do the actual deletion, and it seems simple enough, I just really want to be able to have the text localized. | TITLE:
How can I get the localized name of a 'special' windows folder (Recycle bin etc.)?
QUESTION:
I'm trying to find out the 'correct' windows API for finding out the localized name of 'special' folders, specifically the Recycle Bin. I want to be able to prompt the user with a suitably localized dialog box asking them if they want to send files to the recycle bin or delete them directly. I've found lots on the internet (and on Stackoverflow) about how to do the actual deletion, and it seems simple enough, I just really want to be able to have the text localized.
ANSWER:
Read this article for code samples and usage: http://www.codeproject.com/KB/winsdk/SpecialFolders.aspx Also there is an article on MSDN that helps you Identify the Location of Special Folders with API Calls | [
"winapi",
"localization",
"recycle-bin"
] | 3 | 2 | 1,963 | 2 | 0 | 2008-08-31T19:44:47.907000 | 2008-08-31T19:47:14.267000 |
37,056 | 37,083 | PostgreSQL performance monitoring tool | I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for some database performance optimization tool/technique. | pgfouine works fairly well for me. And it looks like there's a FreeBSD port for it. | PostgreSQL performance monitoring tool I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for some database performance optimization tool/technique. | TITLE:
PostgreSQL performance monitoring tool
QUESTION:
I'm setting up a web application with a FreeBSD PostgreSQL back-end. I'm looking for some database performance optimization tool/technique.
ANSWER:
pgfouine works fairly well for me. And it looks like there's a FreeBSD port for it. | [
"sql",
"database",
"optimization",
"postgresql",
"freebsd"
] | 11 | 5 | 9,764 | 7 | 0 | 2008-08-31T19:47:35.340000 | 2008-08-31T20:34:02.623000 |
37,059 | 37,173 | Configure Lucene.Net with SQL Server | Has anyone used Lucene.NET rather than using the full text search that comes with sql server? If so I would be interested on how you implemented it. Did you for example write a windows service that queried the database every hour then saved the results to the lucene.net index? | Yes, I've used it for exactly what you are describing. We had two services - one for read, and one for write, but only because we had multiple readers. I'm sure we could have done it with just one service (the writer) and embedded the reader in the web app and services. I've used lucene.net as a general database indexer, so what I got back was basically DB id's (to indexed email messages), and I've also used it to get back enough info to populate search results or such without touching the database. It's worked great in both cases, tho the SQL can get a little slow, as you pretty much have to get an ID, select an ID etc. We got around this by making a temp table (with just the ID row in it) and bulk-inserting from a file (which was the output from lucene) then joining to the message table. Was a lot quicker. Lucene isn't perfect, and you do have to think a little outside the relational database box, because it TOTALLY isn't one, but it's very very good at what it does. Worth a look, and, I'm told, doesn't have the "oops, sorry, you need to rebuild your index again" problems that MS SQL's FTI does. BTW, we were dealing with 20-50million emails (and around 1 million unique attachments), totaling about 20GB of lucene index I think, and 250+GB of SQL database + attachments. Performance was fantastic, to say the least - just make sure you think about, and tweak, your merge factors (when it merges index segments). There is no issue in having more than one segment, but there can be a BIG problem if you try to merge two segments which have 1mil items in each, and you have a watcher thread which kills the process if it takes too long.....
(yes, that kicked our arse for a while). So keep the max number of documents per thinggie LOW (ie, don't set it to maxint like we did!) EDIT Corey Trager documented how to use Lucene.NET in BugTracker.NET here. | Configure Lucene.Net with SQL Server Has anyone used Lucene.NET rather than using the full text search that comes with sql server? If so I would be interested on how you implemented it. Did you for example write a windows service that queried the database every hour then saved the results to the lucene.net index? | TITLE:
Configure Lucene.Net with SQL Server
QUESTION:
Has anyone used Lucene.NET rather than using the full text search that comes with sql server? If so I would be interested on how you implemented it. Did you for example write a windows service that queried the database every hour then saved the results to the lucene.net index?
ANSWER:
Yes, I've used it for exactly what you are describing. We had two services - one for read, and one for write, but only because we had multiple readers. I'm sure we could have done it with just one service (the writer) and embedded the reader in the web app and services. I've used lucene.net as a general database indexer, so what I got back was basically DB id's (to indexed email messages), and I've also used it to get back enough info to populate search results or such without touching the database. It's worked great in both cases, tho the SQL can get a little slow, as you pretty much have to get an ID, select an ID etc. We got around this by making a temp table (with just the ID row in it) and bulk-inserting from a file (which was the output from lucene) then joining to the message table. Was a lot quicker. Lucene isn't perfect, and you do have to think a little outside the relational database box, because it TOTALLY isn't one, but it's very very good at what it does. Worth a look, and, I'm told, doesn't have the "oops, sorry, you need to rebuild your index again" problems that MS SQL's FTI does. BTW, we were dealing with 20-50million emails (and around 1 million unique attachments), totaling about 20GB of lucene index I think, and 250+GB of SQL database + attachments. Performance was fantastic, to say the least - just make sure you think about, and tweak, your merge factors (when it merges index segments). There is no issue in having more than one segment, but there can be a BIG problem if you try to merge two segments which have 1mil items in each, and you have a watcher thread which kills the process if it takes too long..... (yes, that kicked our arse for a while). So keep the max number of documents per thinggie LOW (ie, don't set it to maxint like we did!) EDIT Corey Trager documented how to use Lucene.NET in BugTracker.NET here. | [
"sql-server",
"lucene.net"
] | 60 | 59 | 26,555 | 4 | 0 | 2008-08-31T19:53:14.797000 | 2008-08-31T22:11:33.950000 |
37,067 | 37,314 | Task oriented thread pooling | I've created a model for executing worker tasks in a server application using a thread pool associated with an IO completion port such as shown in the posts below: http://weblogs.asp.net/kennykerr/archive/2008/01/03/parallel-programming-with-c-part-4-i-o-completion-ports.aspx http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx Are there any classes in boost that can help with this programming model? | Not really, at least, not last time I looked. I mean, boost::thread_group might make things marginally tidier in places, but not so as would make much of a difference, I don't think. Boost's thread support seems marginally useful when writing something that's cross-platform, but given that what you're writing is going to be Win32-specific anyway (due to the use of IOCPs) there doesn't really seem to be much benefit from that. | Task oriented thread pooling I've created a model for executing worker tasks in a server application using a thread pool associated with an IO completion port such as shown in the posts below: http://weblogs.asp.net/kennykerr/archive/2008/01/03/parallel-programming-with-c-part-4-i-o-completion-ports.aspx http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx Are there any classes in boost that can help with this programming model? | TITLE:
Task oriented thread pooling
QUESTION:
I've created a model for executing worker tasks in a server application using a thread pool associated with an IO completion port such as shown in the posts below: http://weblogs.asp.net/kennykerr/archive/2008/01/03/parallel-programming-with-c-part-4-i-o-completion-ports.aspx http://blogs.msdn.com/larryosterman/archive/2004/03/29/101329.aspx Are there any classes in boost that can help with this programming model?
ANSWER:
Not really, at least, not last time I looked. I mean, boost::thread_group might make things marginally tidier in places, but not so as would make much of a difference, I don't think. Boost's thread support seems marginally useful when writing something that's cross-platform, but given that what you're writing is going to be Win32-specific anyway (due to the use of IOCPs) there doesn't really seem to be much benefit from that. | [
"c++",
"multithreading",
"boost"
] | 4 | 1 | 756 | 4 | 0 | 2008-08-31T20:13:52.673000 | 2008-09-01T00:46:10.180000 |
37,069 | 37,111 | Apache - how do I build individual and/or all modules as shared modules | On Mac OS X 10.5 I downloaded the latest version of Apache 2.2.9. After the usual configure, make, make install dance I had a build of apache without mod_rewrite. This wasn't statically linked and the module was not built in the /modules folder either. I had to do the following to build Apache and mod_rewrite:./configure --prefix=/usr/local/apache2 --enable-rewrite=shared Is there a way to tell Apache to build all modules as Shared Modules (DSOs) so I can control loading from the Apache config? Now that I have built Apache and the mod_rewrite DSO, how can I build another shared module without building all of Apache? (The last time I built Apache (2.2.8) on Solaris, by default it built everything as a shared module.) | Try the./configure option --enable-mods-shared="all", or --enable-mods-shared=" " to compile modules as shared objects. See further details in Apache 2.2 docs To just compile Apache with the ability to load shared objects (and add modules later), use --enable-so, then consult the documentation on compiling modules seperately in the Apache 2.2. DSO docs. | Apache - how do I build individual and/or all modules as shared modules On Mac OS X 10.5 I downloaded the latest version of Apache 2.2.9. After the usual configure, make, make install dance I had a build of apache without mod_rewrite. This wasn't statically linked and the module was not built in the /modules folder either. I had to do the following to build Apache and mod_rewrite:./configure --prefix=/usr/local/apache2 --enable-rewrite=shared Is there a way to tell Apache to build all modules as Shared Modules (DSOs) so I can control loading from the Apache config? Now that I have built Apache and the mod_rewrite DSO, how can I build another shared module without building all of Apache? (The last time I built Apache (2.2.8) on Solaris, by default it built everything as a shared module.) | TITLE:
Apache - how do I build individual and/or all modules as shared modules
QUESTION:
On Mac OS X 10.5 I downloaded the latest version of Apache 2.2.9. After the usual configure, make, make install dance I had a build of apache without mod_rewrite. This wasn't statically linked and the module was not built in the /modules folder either. I had to do the following to build Apache and mod_rewrite: ./configure --prefix=/usr/local/apache2 --enable-rewrite=shared Is there a way to tell Apache to build all modules as Shared Modules (DSOs) so I can control loading from the Apache config? Now that I have built Apache and the mod_rewrite DSO, how can I build another shared module without building all of Apache? (The last time I built Apache (2.2.8) on Solaris, by default it built everything as a shared module.)
ANSWER:
Try the ./configure option --enable-mods-shared="all", or --enable-mods-shared="<list of modules>" to compile modules as shared objects. See further details in the Apache 2.2 docs. To just compile Apache with the ability to load shared objects (and add modules later), use --enable-so, then consult the documentation on compiling modules separately in the Apache 2.2 DSO docs. | [
"apache",
"unix",
"configuration",
"mod-rewrite",
"build"
] | 13 | 15 | 23,423 | 2 | 0 | 2008-08-31T20:18:13.833000 | 2008-08-31T21:14:05.197000 |
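Putting the answer's options together, a typical session might look like the following build fragment. The prefix and the module name mod_example are illustrative; apxs is the tool shipped with Apache for compiling individual DSO modules against an already-installed server:

```shell
# Build everything Apache can build as shared modules (DSOs):
./configure --prefix=/usr/local/apache2 --enable-mods-shared=all
make && make install

# Later, compile and install a single extra module without
# rebuilding Apache itself:
/usr/local/apache2/bin/apxs -c -i mod_example.c

# then enable it in httpd.conf:
#   LoadModule example_module modules/mod_example.so
```

With --enable-so compiled in, which modules actually load is then controlled entirely by LoadModule lines in the config, which is what the question is after.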
37,070 | 37,092 | What is the meaning of "non temporal" memory accesses in x86 | This is a somewhat low-level question. In x86 assembly there are two SSE instructions: MOVDQA xmmi, m128 and MOVNTDQA xmmi, m128 The IA-32 Software Developer's Manual says that the NT in MOVNTDQA stands for Non-Temporal, and that otherwise it's the same as MOVDQA. My question is, what does Non-Temporal mean? | Non-Temporal SSE instructions (MOVNTI, MOVNTQ, etc.), don't follow the normal cache-coherency rules. Therefore non-temporal stores must be followed by an SFENCE instruction in order for their results to be seen by other processors in a timely fashion. When data is produced and not (immediately) consumed again, the fact that memory store operations read a full cache line first and then modify the cached data is detrimental to performance. This operation pushes data out of the caches which might be needed again in favor of data which will not be used soon. This is especially true for large data structures, like matrices, which are filled and then used later. Before the last element of the matrix is filled the sheer size evicts the first elements, making caching of the writes ineffective. For this and similar situations, processors provide support for non-temporal write operations. Non-temporal in this context means the data will not be reused soon, so there is no reason to cache it. These non-temporal write operations do not read a cache line and then modify it; instead, the new content is directly written to memory. Source: http://lwn.net/Articles/255364/ | What is the meaning of "non temporal" memory accesses in x86 This is a somewhat low-level question. In x86 assembly there are two SSE instructions: MOVDQA xmmi, m128 and MOVNTDQA xmmi, m128 The IA-32 Software Developer's Manual says that the NT in MOVNTDQA stands for Non-Temporal, and that otherwise it's the same as MOVDQA. My question is, what does Non-Temporal mean? | TITLE:
What is the meaning of "non temporal" memory accesses in x86
QUESTION:
This is a somewhat low-level question. In x86 assembly there are two SSE instructions: MOVDQA xmmi, m128 and MOVNTDQA xmmi, m128 The IA-32 Software Developer's Manual says that the NT in MOVNTDQA stands for Non-Temporal, and that otherwise it's the same as MOVDQA. My question is, what does Non-Temporal mean?
ANSWER:
Non-Temporal SSE instructions (MOVNTI, MOVNTQ, etc.), don't follow the normal cache-coherency rules. Therefore non-temporal stores must be followed by an SFENCE instruction in order for their results to be seen by other processors in a timely fashion. When data is produced and not (immediately) consumed again, the fact that memory store operations read a full cache line first and then modify the cached data is detrimental to performance. This operation pushes data out of the caches which might be needed again in favor of data which will not be used soon. This is especially true for large data structures, like matrices, which are filled and then used later. Before the last element of the matrix is filled the sheer size evicts the first elements, making caching of the writes ineffective. For this and similar situations, processors provide support for non-temporal write operations. Non-temporal in this context means the data will not be reused soon, so there is no reason to cache it. These non-temporal write operations do not read a cache line and then modify it; instead, the new content is directly written to memory. Source: http://lwn.net/Articles/255364/ | [
"x86",
"sse",
"assembly"
] | 161 | 191 | 54,273 | 4 | 0 | 2008-08-31T20:18:34.113000 | 2008-08-31T20:50:00.200000 |
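To make the answer concrete, here is a small C sketch using the SSE2 compiler intrinsics that emit non-temporal stores. The function name and buffer layout are invented for the example, and it assumes an SSE2-capable x86 target:

```c
#include <emmintrin.h>  /* SSE2 intrinsics: _mm_stream_si128, _mm_sfence */
#include <stdint.h>
#include <stddef.h>

/* Fill dst with value v using non-temporal (streaming) stores,
   bypassing the cache. dst must be 16-byte aligned and n a
   multiple of 4 int32s. */
void stream_fill(int32_t *dst, size_t n, int32_t v)
{
    __m128i val = _mm_set1_epi32(v);
    for (size_t i = 0; i < n; i += 4)
        _mm_stream_si128((__m128i *)(dst + i), val);
    /* Make the streamed stores visible to other processors before
       any subsequent ordinary stores (e.g. a "data ready" flag). */
    _mm_sfence();
}
```

Without the _mm_sfence, another core polling a flag written with ordinary stores could observe the flag before the streamed data, which is exactly the ordering hazard the answer warns about.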
37,073 | 37,321 | What is currently the best way to get a favicon to display in all browsers that support Favicons? | What is currently the best way to get a favicon to display in all browsers that currently support it? Please include: Which image formats are supported by which browsers. Which lines are needed in what places for the various browsers. | I go for a belt and braces approach here. I create a 32x32 icon in both the.ico and.png formats called favicon.ico and favicon.png. The icon name doesn't really matter unless you are dealing with older browsers. Place favicon.ico at your site root to support the older browsers (optional and only relevant for older browsers. Place favicon.png in my images sub-directory (just to keep things tidy). Add the following HTML inside the element. Please note that: The MIME type for.ico files was registered as image/vnd.microsoft.icon by the IANA. Internet Explorer will ignore the type attribute for the shortcut icon relationship and this is the only browser to support this relationship, it doesn't need to be supplied. Reference | What is currently the best way to get a favicon to display in all browsers that support Favicons? What is currently the best way to get a favicon to display in all browsers that currently support it? Please include: Which image formats are supported by which browsers. Which lines are needed in what places for the various browsers. | TITLE:
What is currently the best way to get a favicon to display in all browsers that support Favicons?
QUESTION:
What is currently the best way to get a favicon to display in all browsers that currently support it? Please include: Which image formats are supported by which browsers. Which lines are needed in what places for the various browsers.
ANSWER:
I go for a belt-and-braces approach here. I create a 32x32 icon in both the .ico and .png formats, called favicon.ico and favicon.png. The icon name doesn't really matter unless you are dealing with older browsers. Place favicon.ico at your site root to support the older browsers (optional and only relevant for older browsers). Place favicon.png in your images sub-directory (just to keep things tidy). Add the corresponding link elements inside the head element. Please note that: The MIME type for .ico files was registered as image/vnd.microsoft.icon by the IANA. Internet Explorer will ignore the type attribute for the shortcut icon relationship; since it is the only browser to support that relationship, the type doesn't need to be supplied. Reference | [
"html",
"standards",
"favicon"
] | 83 | 110 | 43,826 | 9 | 0 | 2008-08-31T20:20:46.527000 | 2008-09-01T00:55:43.010000 |
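The HTML snippet the answer refers to was stripped in transit; a typical reconstruction consistent with the setup it describes would be (href paths are illustrative):

```html
<!-- inside <head>; favicon.ico also sits at the site root for older browsers -->
<link rel="icon" type="image/png" href="/images/favicon.png">
<link rel="shortcut icon" href="/favicon.ico">
```

Per the answer's note, the shortcut icon line omits the type attribute because Internet Explorer - the only browser that uses that relationship - ignores it anyway.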
37,089 | 37,090 | How can an application use multiple cores or CPUs in .NET or Java? | When launching a thread or a process in.NET or Java, is there a way to choose which processor or core it is launched on? How does the shared memory model work in such cases? | If you're using multiple threads, the operating system will automatically take care of using multiple cores. | How can an application use multiple cores or CPUs in .NET or Java? When launching a thread or a process in.NET or Java, is there a way to choose which processor or core it is launched on? How does the shared memory model work in such cases? | TITLE:
How can an application use multiple cores or CPUs in .NET or Java?
QUESTION:
When launching a thread or a process in .NET or Java, is there a way to choose which processor or core it is launched on? How does the shared memory model work in such cases?
ANSWER:
If you're using multiple threads, the operating system will automatically take care of using multiple cores. | [
"c#",
"java",
"multithreading"
] | 10 | 7 | 15,649 | 5 | 0 | 2008-08-31T20:42:43.650000 | 2008-08-31T20:45:34.873000 |
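To illustrate the answer's point, here is a minimal Java sketch that lets the OS spread work across all available cores via a fixed-size thread pool. The class and method names are invented for the example; note that standard Java has no portable API for pinning a thread to a specific core - that requires OS-specific facilities:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CoreDemo {
    /** Sum 0..n-1, split into one task per available core.
     *  The JVM/OS decides which core each worker thread runs on. */
    public static long parallelSum(int n) {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);
        try {
            int chunk = (n + cores - 1) / cores;
            List<Future<Long>> parts = new ArrayList<>();
            for (int c = 0; c < cores; c++) {
                final int lo = c * chunk;
                final int hi = Math.min(n, lo + chunk);
                parts.add(pool.submit((Callable<Long>) () -> {
                    long s = 0;
                    for (int i = lo; i < hi; i++) s += i;
                    return s;
                }));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();
            return total;
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

The threads all see the same heap, so shared data still needs the usual synchronization (locks, volatile, or the java.util.concurrent classes); here each task works on a disjoint range and results are combined through Futures.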
37,095 | 37,118 | How do I avoid read locks in my database? | How do I avoid read locks in my database? Answers for multiple databases welcome! | In Oracle the default mode of operation is the Read committed isolation level where a select statement is not blocked by another transaction modifying the data it's reading. From Data Concurrency and Consistency: Each query executed by a transaction sees only data that was committed before the query (not the transaction) began. An Oracle query never reads dirty (uncommitted) data. | How do I avoid read locks in my database? How do I avoid read locks in my database? Answers for multiple databases welcome! | TITLE:
How do I avoid read locks in my database?
QUESTION:
How do I avoid read locks in my database? Answers for multiple databases welcome!
ANSWER:
In Oracle the default mode of operation is the Read committed isolation level where a select statement is not blocked by another transaction modifying the data it's reading. From Data Concurrency and Consistency: Each query executed by a transaction sees only data that was committed before the query (not the transaction) began. An Oracle query never reads dirty (uncommitted) data. | [
"sql",
"database",
"performance",
"locking"
] | 6 | 3 | 5,794 | 4 | 0 | 2008-08-31T20:54:19.003000 | 2008-08-31T21:21:44.507000 |
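As a concrete illustration of the Oracle behaviour the answer describes (table and column names here are hypothetical):

```sql
-- Session A: modify a row but do not commit yet.
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session B: under read committed isolation (Oracle's default) this
-- query is NOT blocked by Session A's uncommitted change; it simply
-- returns the last committed value of balance.
SELECT balance FROM accounts WHERE id = 1;
```

Readers never block writers and writers never block readers; only a second writer touching the same row would wait on Session A.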
37,101 | 37,167 | How to Clear OutputCache for Website without Restarting App | Is there a way clear or reset the outputcache for an entire website without a restart? I'm just starting to use outputcache on a site and when I make a mistake in setting it up I need a page I can browse to that will reset it. | This should do the trick: Private Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs)
Dim path As String
path = "/AbsoluteVirtualPath/OutputCached.aspx"
HttpResponse.RemoveOutputCacheItem(path)
End Sub | How to Clear OutputCache for Website without Restarting App Is there a way clear or reset the outputcache for an entire website without a restart? I'm just starting to use outputcache on a site and when I make a mistake in setting it up I need a page I can browse to that will reset it. | TITLE:
How to Clear OutputCache for Website without Restarting App
QUESTION:
Is there a way clear or reset the outputcache for an entire website without a restart? I'm just starting to use outputcache on a site and when I make a mistake in setting it up I need a page I can browse to that will reset it.
ANSWER:
This should do the trick: Private Sub Button1_Click(ByVal sender As Object, ByVal e As System.EventArgs)
Dim path As String
path = "/AbsoluteVirtualPath/OutputCached.aspx"
HttpResponse.RemoveOutputCacheItem(path)
End Sub | [
"asp.net",
"outputcache"
] | 11 | 9 | 9,057 | 2 | 0 | 2008-08-31T21:03:14.693000 | 2008-08-31T22:02:53.800000 |
37,103 | 37,131 | CSS - Make divs align horizontally | I have a container div with a fixed width and height, with overflow: hidden. I want a horizontal row of float: left divs within this container. Divs which are floated left will naturally push onto the 'line' below after they read the right bound of their parent. This will happen even if the height of the parent should not allow this. This is how this looks: How I would like it to look:![Right][2] - removed image shack image that had been replaced by an advert Note: the effect I want can be achieved by using inline elements & white-space: no-wrap (that is how I did it in the image shown). This, however, is no good to me (for reasons too lengthy to explain here), as the child divs need to be floated block level elements. | You may put an inner div in the container that is enough wide to hold all the floated divs. #container {
background-color: red;
overflow: hidden;
width: 200px;
}
#inner {
overflow: hidden;
width: 2000px;
}

.child {
float: left;
background-color: blue;
width: 50px;
height: 50px;
} | CSS - Make divs align horizontally I have a container div with a fixed width and height, with overflow: hidden. I want a horizontal row of float: left divs within this container. Divs which are floated left will naturally push onto the 'line' below after they read the right bound of their parent. This will happen even if the height of the parent should not allow this. This is how this looks: How I would like it to look:![Right][2] - removed image shack image that had been replaced by an advert Note: the effect I want can be achieved by using inline elements & white-space: no-wrap (that is how I did it in the image shown). This, however, is no good to me (for reasons too lengthy to explain here), as the child divs need to be floated block level elements. | TITLE:
CSS - Make divs align horizontally
QUESTION:
I have a container div with a fixed width and height, with overflow: hidden. I want a horizontal row of float: left divs within this container. Divs which are floated left will naturally push onto the 'line' below after they reach the right bound of their parent. This will happen even if the height of the parent should not allow this. This is how this looks: How I would like it to look:![Right][2] - removed image shack image that had been replaced by an advert Note: the effect I want can be achieved by using inline elements & white-space: nowrap (that is how I did it in the image shown). This, however, is no good to me (for reasons too lengthy to explain here), as the child divs need to be floated block-level elements.
ANSWER:
You may put an inner div in the container that is wide enough to hold all the floated divs. #container {
background-color: red;
overflow: hidden;
width: 200px;
}
#inner {
overflow: hidden;
width: 2000px;
}

.child {
float: left;
background-color: blue;
width: 50px;
height: 50px;
} | [
"html",
"css",
"alignment"
] | 93 | 107 | 259,039 | 7 | 0 | 2008-08-31T21:05:41.830000 | 2008-08-31T21:29:59.510000 |
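The markup that pairs with those rules was not shown; a minimal structure matching the selectors would be:

```html
<div id="container">
  <div id="inner">
    <div class="child"></div>
    <div class="child"></div>
    <div class="child"></div>
    <!-- more floated children; #inner is wide enough that they never wrap -->
  </div>
</div>
```

The outer #container clips via overflow: hidden, while the oversized #inner gives the floats a line long enough that they never push down.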
37,104 | 37,132 | Best Practices for versioning web site? | What are the best practices for versioning web sites? Which revision control systems are well suited for such a job? What special-purpose tools exist? What other questions should I be asking? | Firstly you can - and should - use a revision control system, most will handle binary files although unlike text files you can't merge two different sets of changes so you may want to set the system up to lock these files whilst they are being changed (assuming that that's not the default mode of operation for your rcs in the first place). Where things get a bit more interesting for Websites is managing those files that are required for the site but don't actually form part of the site - the most obvious example being something like .psd files from which web graphics are produced but which don't get deployed. We therefore have a tree for each site which has two folders: assets and site. Assets are things that aren't in the site, and site is - well the site. What you have to watch with this is that designers tend to have their own "systems" for "versioning" graphic files (count the layers in the PSD). You don't necessarily need to stop them doing this but you do need to ensure that they commit each change too. Other questions? Deployment. We're still working on this one (-: But we're getting better (I'm happier now with what we do!) Murph | Best Practices for versioning web site? What are the best practices for versioning web sites? Which revision control systems are well suited for such a job? What special-purpose tools exist? What other questions should I be asking? | TITLE:
Best Practices for versioning web site?
QUESTION:
What are the best practices for versioning web sites? Which revision control systems are well suited for such a job? What special-purpose tools exist? What other questions should I be asking?
ANSWER:
Firstly you can - and should - use a revision control system, most will handle binary files although unlike text files you can't merge two different sets of changes so you may want to set the system up to lock these files whilst they are being changed (assuming that that's not the default mode of operation for your rcs in the first place). Where things get a bit more interesting for Websites is managing those files that are required for the site but don't actually form part of the site - the most obvious example being something like .psd files from which web graphics are produced but which don't get deployed. We therefore have a tree for each site which has two folders: assets and site. Assets are things that aren't in the site, and site is - well the site. What you have to watch with this is that designers tend to have their own "systems" for "versioning" graphic files (count the layers in the PSD). You don't necessarily need to stop them doing this but you do need to ensure that they commit each change too. Other questions? Deployment. We're still working on this one (-: But we're getting better (I'm happier now with what we do!) Murph | [
"version-control"
] | 12 | 5 | 10,048 | 3 | 0 | 2008-08-31T21:05:48.943000 | 2008-08-31T21:30:06.623000 |
37,122 | 156,274 | Make browser window blink in task Bar | How do I make a user's browser blink/flash/highlight in the task bar using JavaScript? For example, if I make an AJAX request every 10 seconds to see if the user has any new messages on the server, I want the user to know it right away, even if he is using another application at the time. Edit: These users do want to be distracted when a new message arrives. | this won't make the taskbar button flash in changing colours, but the title will blink on and off until they move the mouse. This should work cross platform, and even if they just have it in a different tab. newExcitingAlerts = (function () { var oldTitle = document.title; var msg = "New!"; var timeoutId; var blink = function() { document.title = document.title == msg? ' ': msg; }; var clear = function() { clearInterval(timeoutId); document.title = oldTitle; window.onmousemove = null; timeoutId = null; }; return function () { if (!timeoutId) { timeoutId = setInterval(blink, 1000); window.onmousemove = clear; } }; }()); Update: You may want to look at using HTML5 notifications. | Make browser window blink in task Bar How do I make a user's browser blink/flash/highlight in the task bar using JavaScript? For example, if I make an AJAX request every 10 seconds to see if the user has any new messages on the server, I want the user to know it right away, even if he is using another application at the time. Edit: These users do want to be distracted when a new message arrives. | TITLE:
Make browser window blink in task Bar
QUESTION:
How do I make a user's browser blink/flash/highlight in the task bar using JavaScript? For example, if I make an AJAX request every 10 seconds to see if the user has any new messages on the server, I want the user to know it right away, even if he is using another application at the time. Edit: These users do want to be distracted when a new message arrives.
ANSWER:
This won't make the taskbar button flash in changing colours, but the title will blink on and off until they move the mouse. This should work cross-platform, and even if they just have it in a different tab.

newExcitingAlerts = (function () {
    var oldTitle = document.title;
    var msg = "New!";
    var timeoutId;
    var blink = function() { document.title = document.title == msg ? ' ' : msg; };
    var clear = function() {
        clearInterval(timeoutId);
        document.title = oldTitle;
        window.onmousemove = null;
        timeoutId = null;
    };
    return function () {
        if (!timeoutId) {
            timeoutId = setInterval(blink, 1000);
            window.onmousemove = clear;
        }
    };
}());

Update: You may want to look at using HTML5 notifications. | [
"javascript",
"browser"
] | 108 | 89 | 118,309 | 12 | 0 | 2008-08-31T21:22:51.930000 | 2008-10-01T04:48:18.450000 |
37,141 | 37,309 | Event handling in Dojo | Taking Jeff Atwood's advice, I decided to use a JavaScript library for the very basic to-do list application I'm writing. I picked the Dojo toolkit, version 1.1.1. At first, all was fine: the drag-and-drop code I wrote worked first time, you can drag tasks on-screen to change their order of precedence, and each drag-and-drop operation calls an event handler that sends an AJAX call to the server to let it know that order has been changed. Then I went to add in the email tracking functionality. Standard stuff: new incoming emails have a unique ID number attached to their subject line, all subsequent emails about that problem can be tracked by simply leaving that ID number in the subject when you reply. So, we have a list of open tasks, each with their own ID number, and each of those tasks has a time-ordered list of associated emails. I wanted the text of those emails to be available to the user as they were looking at their list of tasks, so I made each task box a Dijit "Tree" control - top level contains the task description, branches contain email dates, and a single "leaf" off of each of those branches contains the email text. First problem: I wanted the tree view to be fully-collapsed by default. After searching Google quite extensively, I found a number of solutions, all of which seemed to be valid for previous versions of Dojo but not the one I was using. I eventually figured out that the best solution would seem to be to have a event handler called when the Tree control had loaded that simply collapsed each branch/leaf. Unfortunately, even though the Tree control had been instantiated and its "startup" event handler called, the branches and leaves still hadn't loaded (the data was still being loaded via an AJAX call). So, I modified the system so that all email text and Tree structure is added server-side. This means the whole fully-populated Tree control is available when its startup event handler is called. 
So, the startup event handler fully collapses the tree. Next, I couldn't find a "proper" way to have nicely formatted text for the email leaves. I can put the email text in the leaf just fine, but any HTML gets escaped out and shows up in the web page. Cue more rummaging around Dojo's documentation (tends to be out of date, with code and examples for pre-1.0 versions) and Google. I eventually came up with the solution of getting JavaScript to go and read the SPAN element that's inside each leaf node and un-escape the escaped HTML code in its innerHTML. I figured I'd put code to do this in with the fully-collapse-the-tree code, in the Tree control's startup event handler. However... it turns out that the SPAN element isn't actually created until the user clicks on the expando (the little "+" symbol in a tree view you click to expand a node). Okay, fair enough - I'll add the re-formatting code to the onExpand() event handler, or whatever it's called. Which doesn't seem to exist. I've searched the documentation, I've searched Google... I'm quite possibly mis-understanding Dojo's "publish/subscribe" event handling system, but I think that's mainly because there doesn't seem to be any comprehensive documentation for it anywhere (like, where do I find out what events I can subscribe to?). So, in the end, the best solution I can come up with is to add an onClick event handler (not a "Dojo" event, but a plain JavaScript event that Dojo knows nothing about) to the expando node of each Tree branch that re-formats the HTML inside the SPAN element of each leaf. Except... when that is called, the SPAN element still doesn't exist (sometimes - other times it's been cached, just to further confuse you). Therefore, I have the event handler set up a timer that periodically calls a function that checks to see if the relevant SPAN element has turned up yet before then re-formatting it. // An event handler called whenever an "email title" tree node is expanded. 
function formatTreeNode(nodeID) {
    if (dijit.byId(nodeID).getChildren().length != 0) {
        clearInterval(nodeUpdateIntervalID);
        messageBody = dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML;
        if (messageBody.indexOf(" Message text: ") == -1) {
            messageBody = messageBody.replace(/&gt;/g, ">");
            messageBody = messageBody.replace(/&lt;/g, "<");
            messageBody = messageBody.replace(/&amp;/g, "&");
            dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML = " Message text: " + messageBody + " ";
        }
    }
}
// An event handler called when a tree node has been set up - we changed the default fully-expanded to fully-collapsed.
function setupTree(theTree) {
    dijit.byId("tree-"+theTree).rootNode.collapse();
messageNode = dijit.byId("tree-"+theTree).rootNode.getChildren(); for (pl = 0; pl < messageNode.length; pl++) { messageNode[pl].collapse(); messageNode[pl].expandoNode.onclick = eval("nodeUpdateIntervalID = setInterval(\"formatTreeNode('"+messageNode[pl].id+"')\",200); formatTreeNode('"+messageNode[pl].id+"');"); } } The above has the feel of a truly horrible hack, and I feel sure I must have taken a wrong turn somewhere early on in my thought process. Can someone please tell me: The correct way to go about putting nicely-formatted text inside a Dojo/Dijit Tree control. The correct way to handle Dojo events, like where I can figure out what events are available for me to subscribe to. A better JavaScript library to use (can I do what I want to with JQuery and avoid the all-around-the-houses approach seen above?). PS: If you're naming a software project, give thought to its name's uniqueness in Google - I'm sure searching for "Dojo" documentation in Google would be easier without all the martial arts results getting in the way. PPS: Firefox spellchecker knows how to spell "Atwood", correcting me when I put two 'T's instead of one. Is Jeff just that famous now? | I assume that you followed the dijit.Tree and dojo.data in Dojo 1.1 tutorial which directed you to pass the data to the tree control using a data store. That had me banging my head of a brick wall for a while. Its not really a great approach and the alternative is not really well documented. You need to create a use model instead. I have included an example below of a tree model that I created for displaying the structure of an LDAP directory. You will find the default implementation of the model in your dojo distribution at./dijit/_tree/model.js. The comments should help you understand the functions supported by the model. The IDirectoryService class the code below are stubs for server-side Java POJOs generated by Direct Web Remoting (DWR). 
I highly recommend DWR if you're going to be doing a lot of client-server interaction.
dojo.declare("LDAPDirectoryTreeModel", [ dijit.tree.model ], {
    getRoot: function(onItem) {
        IDirectoryService.getRoots(function(roots) { onItem(roots[0]) });
    },
mayHaveChildren: function(item) { return true; },
getChildren: function(parentItem, onComplete) { IDirectoryService.getChildrenImpl(parentItem, onComplete); },
getIdentity: function(item) { return item.dn; },
getLabel: function(item) { return item.rdn; } }); And here is an extract from the my JSP page where I created the model and used it to populate the tree control. | Event handling in Dojo Taking Jeff Atwood's advice, I decided to use a JavaScript library for the very basic to-do list application I'm writing. I picked the Dojo toolkit, version 1.1.1. At first, all was fine: the drag-and-drop code I wrote worked first time, you can drag tasks on-screen to change their order of precedence, and each drag-and-drop operation calls an event handler that sends an AJAX call to the server to let it know that order has been changed. Then I went to add in the email tracking functionality. Standard stuff: new incoming emails have a unique ID number attached to their subject line, all subsequent emails about that problem can be tracked by simply leaving that ID number in the subject when you reply. So, we have a list of open tasks, each with their own ID number, and each of those tasks has a time-ordered list of associated emails. I wanted the text of those emails to be available to the user as they were looking at their list of tasks, so I made each task box a Dijit "Tree" control - top level contains the task description, branches contain email dates, and a single "leaf" off of each of those branches contains the email text. First problem: I wanted the tree view to be fully-collapsed by default. After searching Google quite extensively, I found a number of solutions, all of which seemed to be valid for previous versions of Dojo but not the one I was using. I eventually figured out that the best solution would seem to be to have a event handler called when the Tree control had loaded that simply collapsed each branch/leaf. Unfortunately, even though the Tree control had been instantiated and its "startup" event handler called, the branches and leaves still hadn't loaded (the data was still being loaded via an AJAX call). 
So, I modified the system so that all email text and Tree structure is added server-side. This means the whole fully-populated Tree control is available when its startup event handler is called. So, the startup event handler fully collapses the tree. Next, I couldn't find a "proper" way to have nicely formatted text for the email leaves. I can put the email text in the leaf just fine, but any HTML gets escaped out and shows up in the web page. Cue more rummaging around Dojo's documentation (tends to be out of date, with code and examples for pre-1.0 versions) and Google. I eventually came up with the solution of getting JavaScript to go and read the SPAN element that's inside each leaf node and un-escape the escaped HTML code in its innerHTML. I figured I'd put code to do this in with the fully-collapse-the-tree code, in the Tree control's startup event handler. However... it turns out that the SPAN element isn't actually created until the user clicks on the expando (the little "+" symbol in a tree view you click to expand a node). Okay, fair enough - I'll add the re-formatting code to the onExpand() event handler, or whatever it's called. Which doesn't seem to exist. I've searched the documentation, I've searched Google... I'm quite possibly mis-understanding Dojo's "publish/subscribe" event handling system, but I think that's mainly because there doesn't seem to be any comprehensive documentation for it anywhere (like, where do I find out what events I can subscribe to?). So, in the end, the best solution I can come up with is to add an onClick event handler (not a "Dojo" event, but a plain JavaScript event that Dojo knows nothing about) to the expando node of each Tree branch that re-formats the HTML inside the SPAN element of each leaf. Except... when that is called, the SPAN element still doesn't exist (sometimes - other times it's been cached, just to further confuse you).
Therefore, I have the event handler set up a timer that periodically calls a function that checks to see if the relevant SPAN element has turned up yet before then re-formatting it. // An event handler called whenever an "email title" tree node is expanded. function formatTreeNode(nodeID) { if (dijit.byId(nodeID).getChildren().length != 0) { clearInterval(nodeUpdateIntervalID); messageBody = dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML; if (messageBody.indexOf(" Message text: ") == -1) { messageBody = messageBody.replace(/&gt;/g, ">"); messageBody = messageBody.replace(/&lt;/g, "<"); messageBody = messageBody.replace(/&amp;/g, "&"); dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML = " Message text: "+messageBody+" "; } } }
// An event handler called when a tree node has been set up - we changed the default fully-expanded to fully-collapsed. function setupTree(theTree) { dijit.byId("tree-"+theTree).rootNode.collapse();
messageNode = dijit.byId("tree-"+theTree).rootNode.getChildren(); for (pl = 0; pl < messageNode.length; pl++) { messageNode[pl].collapse(); messageNode[pl].expandoNode.onclick = eval("nodeUpdateIntervalID = setInterval(\"formatTreeNode('"+messageNode[pl].id+"')\",200); formatTreeNode('"+messageNode[pl].id+"');"); } } The above has the feel of a truly horrible hack, and I feel sure I must have taken a wrong turn somewhere early on in my thought process. Can someone please tell me: The correct way to go about putting nicely-formatted text inside a Dojo/Dijit Tree control. The correct way to handle Dojo events, like where I can figure out what events are available for me to subscribe to. A better JavaScript library to use (can I do what I want to with JQuery and avoid the all-around-the-houses approach seen above?). PS: If you're naming a software project, give thought to its name's uniqueness in Google - I'm sure searching for "Dojo" documentation in Google would be easier without all the martial arts results getting in the way. PPS: Firefox spellchecker knows how to spell "Atwood", correcting me when I put two 'T's instead of one. Is Jeff just that famous now? | TITLE:
Event handling in Dojo
QUESTION:
Taking Jeff Atwood's advice, I decided to use a JavaScript library for the very basic to-do list application I'm writing. I picked the Dojo toolkit, version 1.1.1. At first, all was fine: the drag-and-drop code I wrote worked first time, you can drag tasks on-screen to change their order of precedence, and each drag-and-drop operation calls an event handler that sends an AJAX call to the server to let it know that order has been changed. Then I went to add in the email tracking functionality. Standard stuff: new incoming emails have a unique ID number attached to their subject line, all subsequent emails about that problem can be tracked by simply leaving that ID number in the subject when you reply. So, we have a list of open tasks, each with their own ID number, and each of those tasks has a time-ordered list of associated emails. I wanted the text of those emails to be available to the user as they were looking at their list of tasks, so I made each task box a Dijit "Tree" control - top level contains the task description, branches contain email dates, and a single "leaf" off of each of those branches contains the email text. First problem: I wanted the tree view to be fully-collapsed by default. After searching Google quite extensively, I found a number of solutions, all of which seemed to be valid for previous versions of Dojo but not the one I was using. I eventually figured out that the best solution would seem to be to have an event handler called when the Tree control had loaded that simply collapsed each branch/leaf. Unfortunately, even though the Tree control had been instantiated and its "startup" event handler called, the branches and leaves still hadn't loaded (the data was still being loaded via an AJAX call). So, I modified the system so that all email text and Tree structure is added server-side. This means the whole fully-populated Tree control is available when its startup event handler is called.
So, the startup event handler fully collapses the tree. Next, I couldn't find a "proper" way to have nicely formatted text for the email leaves. I can put the email text in the leaf just fine, but any HTML gets escaped out and shows up in the web page. Cue more rummaging around Dojo's documentation (tends to be out of date, with code and examples for pre-1.0 versions) and Google. I eventually came up with the solution of getting JavaScript to go and read the SPAN element that's inside each leaf node and un-escape the escaped HTML code in its innerHTML. I figured I'd put code to do this in with the fully-collapse-the-tree code, in the Tree control's startup event handler. However... it turns out that the SPAN element isn't actually created until the user clicks on the expando (the little "+" symbol in a tree view you click to expand a node). Okay, fair enough - I'll add the re-formatting code to the onExpand() event handler, or whatever it's called. Which doesn't seem to exist. I've searched the documentation, I've searched Google... I'm quite possibly mis-understanding Dojo's "publish/subscribe" event handling system, but I think that's mainly because there doesn't seem to be any comprehensive documentation for it anywhere (like, where do I find out what events I can subscribe to?). So, in the end, the best solution I can come up with is to add an onClick event handler (not a "Dojo" event, but a plain JavaScript event that Dojo knows nothing about) to the expando node of each Tree branch that re-formats the HTML inside the SPAN element of each leaf. Except... when that is called, the SPAN element still doesn't exist (sometimes - other times it's been cached, just to further confuse you). Therefore, I have the event handler set up a timer that periodically calls a function that checks to see if the relevant SPAN element has turned up yet before then re-formatting it. // An event handler called whenever an "email title" tree node is expanded.
function formatTreeNode(nodeID) { if (dijit.byId(nodeID).getChildren().length != 0) { clearInterval(nodeUpdateIntervalID); messageBody = dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML; if (messageBody.indexOf(" Message text: ") == -1) { messageBody = messageBody.replace(/&gt;/g, ">"); messageBody = messageBody.replace(/&lt;/g, "<"); messageBody = messageBody.replace(/&amp;/g, "&"); dijit.byId(nodeID).getChildren()[0].labelNode.innerHTML = " Message text: "+messageBody+" "; } } }
// An event handler called when a tree node has been set up - we changed the default fully-expanded to fully-collapsed. function setupTree(theTree) { dijit.byId("tree-"+theTree).rootNode.collapse();
messageNode = dijit.byId("tree-"+theTree).rootNode.getChildren(); for (pl = 0; pl < messageNode.length; pl++) { messageNode[pl].collapse(); messageNode[pl].expandoNode.onclick = eval("nodeUpdateIntervalID = setInterval(\"formatTreeNode('"+messageNode[pl].id+"')\",200); formatTreeNode('"+messageNode[pl].id+"');"); } } The above has the feel of a truly horrible hack, and I feel sure I must have taken a wrong turn somewhere early on in my thought process. Can someone please tell me: The correct way to go about putting nicely-formatted text inside a Dojo/Dijit Tree control. The correct way to handle Dojo events, like where I can figure out what events are available for me to subscribe to. A better JavaScript library to use (can I do what I want to with JQuery and avoid the all-around-the-houses approach seen above?). PS: If you're naming a software project, give thought to its name's uniqueness in Google - I'm sure searching for "Dojo" documentation in Google would be easier without all the martial arts results getting in the way. PPS: Firefox spellchecker knows how to spell "Atwood", correcting me when I put two 'T's instead of one. Is Jeff just that famous now?
ANSWER:
I assume that you followed the dijit.Tree and dojo.data in Dojo 1.1 tutorial which directed you to pass the data to the tree control using a data store. That had me banging my head off a brick wall for a while. It's not really a great approach and the alternative is not really well documented. You need to create a custom model instead. I have included an example below of a tree model that I created for displaying the structure of an LDAP directory. You will find the default implementation of the model in your dojo distribution at ./dijit/_tree/model.js. The comments should help you understand the functions supported by the model. The IDirectoryService calls in the code below are stubs for server-side Java POJOs generated by Direct Web Remoting (DWR). I highly recommend DWR if you're going to be doing a lot of client-server interaction. dojo.declare("LDAPDirectoryTreeModel", [ dijit.tree.model ], { getRoot: function(onItem) { IDirectoryService.getRoots( function(roots) { onItem(roots[0]); }); },
mayHaveChildren: function(item) { return true; },
getChildren: function(parentItem, onComplete) { IDirectoryService.getChildrenImpl(parentItem, onComplete); },
getIdentity: function(item) { return item.dn; },
getLabel: function(item) { return item.rdn; } }); And here is an extract from my JSP page where I created the model and used it to populate the tree control. | [
"javascript",
"dojo"
] | 1 | 3 | 3,230 | 1 | 0 | 2008-08-31T21:40:39.720000 | 2008-09-01T00:38:52.973000 |
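The formatTreeNode hack in the record above hinges on undoing basic HTML escaping by hand. A minimal, framework-free sketch of just that unescaping step (the function name and entity set here are illustrative, not part of Dojo's API):

```javascript
// Undo basic HTML escaping. Decode &lt;/&gt; first and &amp; last,
// so an escaped ampersand sequence like "&amp;lt;" is not decoded twice.
function unescapeHtml(escaped) {
  return escaped
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&amp;/g, "&");
}

console.log(unescapeHtml("&lt;b&gt;Message&lt;/b&gt; &amp; more"));
// → <b>Message</b> & more
```

Decoding in this order is the reverse of the escaping order (escape `&` first, then `<` and `>`), which is what makes the round trip safe.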
37,157 | 37,160 | Caching MySQL queries | Is there a simple way to cache MySQL queries in PHP or failing that, is there a small class set that someone has written and made available that will do it? I can cache a whole page but that won't work as some data changes but some do not, I want to cache the part that does not. | This is a great overview of how to cache queries in MySQL: The MySQL Query Cache | Caching MySQL queries Is there a simple way to cache MySQL queries in PHP or failing that, is there a small class set that someone has written and made available that will do it? I can cache a whole page but that won't work as some data changes but some do not, I want to cache the part that does not. | TITLE:
Caching MySQL queries
QUESTION:
Is there a simple way to cache MySQL queries in PHP or failing that, is there a small class set that someone has written and made available that will do it? I can cache a whole page but that won't work as some data changes but some do not, I want to cache the part that does not.
ANSWER:
This is a great overview of how to cache queries in MySQL: The MySQL Query Cache | [
"php",
"mysql",
"caching"
] | 6 | 10 | 10,840 | 5 | 0 | 2008-08-31T21:55:35.580000 | 2008-08-31T21:59:03.043000 |
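The answer above points at MySQL's server-side query cache. For reference, these are the statements typically used to inspect and size it (values are illustrative; note that the query cache only exists in MySQL versions before 8.0, which removed it):

```sql
-- Is the query cache compiled in / enabled?
SHOW VARIABLES LIKE 'query_cache%';

-- Hit/miss statistics for the running server
SHOW STATUS LIKE 'Qcache%';

-- Enable a 64 MB cache at runtime (or set query_cache_size in my.cnf)
SET GLOBAL query_cache_size = 64 * 1024 * 1024;
```

For caching only part of a page, as the question asks, an application-level cache keyed on the query (e.g. memcached or a file cache in PHP) is the usual complement to the server-side cache.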
37,162 | 37,183 | How do I make an HTML page print in landscape when the user selects 'print'? | We generate web pages that should always be printed in landscape mode. Web browser print dialogs default to portrait, so for every print job the user has to manually select landscape. It's minor, but would be nice for the user if we can remove this unnecessary step. Thanks in advance to all respondents. | A quick Google indicates that it's not really supported. There's more than a few folks out there trying to hack their way to it - but I'd strongly suggest just rendering a server side PDF instead. | How do I make an HTML page print in landscape when the user selects 'print'? We generate web pages that should always be printed in landscape mode. Web browser print dialogs default to portrait, so for every print job the user has to manually select landscape. It's minor, but would be nice for the user if we can remove this unnecessary step. Thanks in advance to all respondents. | TITLE:
How do I make an HTML page print in landscape when the user selects 'print'?
QUESTION:
We generate web pages that should always be printed in landscape mode. Web browser print dialogs default to portrait, so for every print job the user has to manually select landscape. It's minor, but would be nice for the user if we can remove this unnecessary step. Thanks in advance to all respondents.
ANSWER:
A quick Google indicates that it's not really supported. There are more than a few folks out there trying to hack their way to it - but I'd strongly suggest just rendering a server-side PDF instead. | [
"html",
"printing",
"landscape",
"portrait"
] | 8 | 4 | 5,128 | 4 | 0 | 2008-08-31T22:00:25.180000 | 2008-08-31T22:21:15.113000 |
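For context on the "hacks" the answer above mentions: the CSS construct people reach for here is the `@page` rule. At the time of the question browser support was effectively nil, which is why a server-side PDF was the safer bet; support has since improved but still varies by browser, so treat this as a sketch:

```css
/* Request landscape paper in the print dialog */
@page {
  size: landscape;
}

/* Optional print-specific layout tweaks */
@media print {
  body { width: 100%; }
}
```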
37,185 | 53,369 | What's the idiomatic way to do async socket programming in Delphi? | What is the normal way people writing network code in Delphi use Windows-style overlapped asynchronous socket I/O? Here's my prior research into this question: The Indy components seem entirely synchronous. On the other hand, while ScktComp unit does use WSAAsyncSelect, it basically only asynchronizes a BSD-style multiplexed socket app. You get dumped in a single event callback, as if you had just returned from select() in a loop, and have to do all the state machine navigation yourself. The.NET situation is considerably nicer, with Socket.BeginRead / Socket.EndRead, where the continuation is passed directly to Socket.BeginRead, and that's where you pick back up. A continuation coded as a closure obviously has all the context you need, and more. | For async stuff try ICS http://www.overbyte.be/frame_index.html?redirTo=/products/ics.html | What's the idiomatic way to do async socket programming in Delphi? What is the normal way people writing network code in Delphi use Windows-style overlapped asynchronous socket I/O? Here's my prior research into this question: The Indy components seem entirely synchronous. On the other hand, while ScktComp unit does use WSAAsyncSelect, it basically only asynchronizes a BSD-style multiplexed socket app. You get dumped in a single event callback, as if you had just returned from select() in a loop, and have to do all the state machine navigation yourself. The.NET situation is considerably nicer, with Socket.BeginRead / Socket.EndRead, where the continuation is passed directly to Socket.BeginRead, and that's where you pick back up. A continuation coded as a closure obviously has all the context you need, and more. | TITLE:
What's the idiomatic way to do async socket programming in Delphi?
QUESTION:
What is the normal way people writing network code in Delphi use Windows-style overlapped asynchronous socket I/O? Here's my prior research into this question: The Indy components seem entirely synchronous. On the other hand, while the ScktComp unit does use WSAAsyncSelect, it basically only asynchronizes a BSD-style multiplexed socket app. You get dumped in a single event callback, as if you had just returned from select() in a loop, and have to do all the state machine navigation yourself. The .NET situation is considerably nicer, with Socket.BeginRead / Socket.EndRead, where the continuation is passed directly to Socket.BeginRead, and that's where you pick back up. A continuation coded as a closure obviously has all the context you need, and more.
ANSWER:
For async stuff try ICS http://www.overbyte.be/frame_index.html?redirTo=/products/ics.html | [
"delphi",
"winapi",
"sockets",
"asynchronous",
"networking"
] | 10 | 0 | 6,546 | 10 | 0 | 2008-08-31T22:22:58.823000 | 2008-09-10T03:27:22.943000 |
37,189 | 37,202 | C# console program can't send fax when run as a scheduled task | I have a console program written in C# that I am using to send faxes. When I step through the program in Visual Studio it works fine. When I double click on the program in Windows Explorer it works fine. When I setup a Windows scheduled task to run the program it fails with this in the event log. EventType clr20r3, P1 consolefaxtest.exe, P2 1.0.0.0, P3 48bb146b, P4 consolefaxtest, P5 1.0.0.0, P6 48bb146b, P7 1, P8 80, P9 system.io.filenotfoundexception, P10 NIL. I wrote a batch file to run the fax program and it fails with this message. Unhandled Exception: System.IO.FileNotFoundException: Operation failed. at FAXCOMEXLib.FaxDocumentClass.ConnectedSubmit(FaxServer pFaxServer) Can anyone explain this behavior to me? | I can't explain it - but I have a few ideas. Most of the times, when a program works fine testing it, and doesn't when scheduling it - security is the case. In the context of which user is your program scheduled? Maybe that user isn't granted enough access. Is the resource your programm is trying to access a network drive, that the user running the scheduled task simply haven't got? | C# console program can't send fax when run as a scheduled task I have a console program written in C# that I am using to send faxes. When I step through the program in Visual Studio it works fine. When I double click on the program in Windows Explorer it works fine. When I setup a Windows scheduled task to run the program it fails with this in the event log. EventType clr20r3, P1 consolefaxtest.exe, P2 1.0.0.0, P3 48bb146b, P4 consolefaxtest, P5 1.0.0.0, P6 48bb146b, P7 1, P8 80, P9 system.io.filenotfoundexception, P10 NIL. I wrote a batch file to run the fax program and it fails with this message. Unhandled Exception: System.IO.FileNotFoundException: Operation failed. at FAXCOMEXLib.FaxDocumentClass.ConnectedSubmit(FaxServer pFaxServer) Can anyone explain this behavior to me? | TITLE:
C# console program can't send fax when run as a scheduled task
QUESTION:
I have a console program written in C# that I am using to send faxes. When I step through the program in Visual Studio it works fine. When I double-click on the program in Windows Explorer it works fine. When I set up a Windows scheduled task to run the program it fails with this in the event log. EventType clr20r3, P1 consolefaxtest.exe, P2 1.0.0.0, P3 48bb146b, P4 consolefaxtest, P5 1.0.0.0, P6 48bb146b, P7 1, P8 80, P9 system.io.filenotfoundexception, P10 NIL. I wrote a batch file to run the fax program and it fails with this message. Unhandled Exception: System.IO.FileNotFoundException: Operation failed. at FAXCOMEXLib.FaxDocumentClass.ConnectedSubmit(FaxServer pFaxServer) Can anyone explain this behavior to me?
ANSWER:
I can't explain it - but I have a few ideas. Most of the time, when a program works fine when you test it but fails when you schedule it, security is the cause. Under which user account is your program scheduled? Maybe that user isn't granted enough access. Is the resource your program is trying to access a network drive that the user running the scheduled task simply doesn't have? | [
"c#",
"console",
"fax"
] | 0 | 5 | 1,884 | 6 | 0 | 2008-08-31T22:28:22.927000 | 2008-08-31T22:40:42.273000 |
37,219 | 37,267 | How do you remove a specific revision in the git history? | Suppose your git history looks like this: 1 2 3 4 5 1–5 are separate revisions. You need to remove 3 while still keeping 1, 2, 4 and 5. How can this be done? Is there an efficient method when there are hundreds of revisions after the one to be deleted? | To combine revision 3 and 4 into a single revision, you can use git rebase. If you want to remove the changes in revision 3, you need to use the edit command in the interactive rebase mode. If you want to combine the changes into a single revision, use squash. I have successfully used this squash technique, but have never needed to remove a revision before. The git-rebase documentation under "Splitting commits" should hopefully give you enough of an idea to figure it out. (Or someone else might know). From the git documentation: Start it with the oldest commit you want to retain as-is: git rebase -i An editor will be fired up with all the commits in your current branch (ignoring merge commits), which come after the given commit. You can reorder the commits in this list to your heart's content, and you can remove them. The list looks more or less like this: pick deadbee The oneline of this commit pick fa1afe1 The oneline of the next commit... The oneline descriptions are purely for your pleasure; git-rebase will not look at them but at the commit names ("deadbee" and "fa1afe1" in this example), so do not delete or edit the names. By replacing the command "pick" with the command "edit", you can tell git-rebase to stop after applying that commit, so that you can edit the files and/or the commit message, amend the commit, and continue rebasing. If you want to fold two or more commits into one, replace the command "pick" with "squash" for the second and subsequent commit. If the commits had different authors, it will attribute the squashed commit to the author of the first commit. | How do you remove a specific revision in the git history? 
Suppose your git history looks like this: 1 2 3 4 5 1–5 are separate revisions. You need to remove 3 while still keeping 1, 2, 4 and 5. How can this be done? Is there an efficient method when there are hundreds of revisions after the one to be deleted? | TITLE:
How do you remove a specific revision in the git history?
QUESTION:
Suppose your git history looks like this: 1 2 3 4 5 1–5 are separate revisions. You need to remove 3 while still keeping 1, 2, 4 and 5. How can this be done? Is there an efficient method when there are hundreds of revisions after the one to be deleted?
ANSWER:
To combine revisions 3 and 4 into a single revision, you can use git rebase. If you want to remove the changes in revision 3, you need to use the edit command in the interactive rebase mode. If you want to combine the changes into a single revision, use squash. I have successfully used this squash technique, but have never needed to remove a revision before. The git-rebase documentation under "Splitting commits" should hopefully give you enough of an idea to figure it out. (Or someone else might know). From the git documentation: Start it with the oldest commit you want to retain as-is: git rebase -i <after-this-commit> An editor will be fired up with all the commits in your current branch (ignoring merge commits), which come after the given commit. You can reorder the commits in this list to your heart's content, and you can remove them. The list looks more or less like this: pick deadbee The oneline of this commit pick fa1afe1 The oneline of the next commit... The oneline descriptions are purely for your pleasure; git-rebase will not look at them but at the commit names ("deadbee" and "fa1afe1" in this example), so do not delete or edit the names. By replacing the command "pick" with the command "edit", you can tell git-rebase to stop after applying that commit, so that you can edit the files and/or the commit message, amend the commit, and continue rebasing. If you want to fold two or more commits into one, replace the command "pick" with "squash" for the second and subsequent commit. If the commits had different authors, it will attribute the squashed commit to the author of the first commit. | [
"git"
] | 251 | 81 | 139,074 | 9 | 0 | 2008-08-31T22:51:21.327000 | 2008-08-31T23:46:33.880000 |
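As a non-interactive alternative to deleting the "pick" line in the rebase -i editor, `git rebase --onto` can splice a commit out directly. A disposable-repo sketch (file names and commit messages are made up for illustration; this rewrites history, so only do it on branches nobody else has pulled):

```shell
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you

# Build five commits: "rev 1" .. "rev 5", each touching its own file
for i in 1 2 3 4 5; do
  echo "$i" > "file$i"
  git add "file$i"
  git commit -qm "rev $i"
done

# Drop "rev 3": replay everything after it (rev 4, rev 5)
# onto its parent (rev 2). Here HEAD~3 = rev 2 and HEAD~2 = rev 3.
git rebase --onto HEAD~3 HEAD~2

git log --format=%s   # rev 5, rev 4, rev 2, rev 1
```

Because each commit touches a distinct file, the replay applies cleanly; in real history the dropped commit's changes may conflict with later commits, and rebase will stop and ask you to resolve them.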
37,248 | 37,260 | C# Preprocessor | While the C# spec does include a pre-processor and basic directives (#define, #if, etc), the language does not have the same flexible pre-processor found in languages such as C/C++. I believe the lack of such a flexible pre-processor was a design decision made by Anders Hejlsberg (although, unfortunately, I can't find reference to this now). From experience, this is certainly a good decision, as there were some really terrible un-maintainable macros created back when I was doing a lot of C/C++. That said, there are a number of scenarios where I could find a slightly more flexible pre-processor to be useful. Code such as the following could be improved by some simple pre-processor directives: public string MyProperty { get { return _myProperty; } set { if (value!= _myProperty) { _myProperty = value; NotifyPropertyChanged("MyProperty"); // This line above could be improved by replacing the literal string with // a pre-processor directive like "#Property", which could be translated // to the string value "MyProperty" This new notify call would be as follows: // NotifyPropertyChanged(#Property); } } } Would it be a good idea to write a pre-processor to handle extremely simple cases like this? Steve McConnell wrote in Code Complete (p208): Write your own preprocessor If a language doesn't include a preprocessor, it's fairly easy to write one... I am torn. It was a design decision to leave such a flexible pre-processor out of C#. However, an author I highly respect mentions it may be ok in some circumstances. Should I build a C# pre-processor? Is there one available that does the simple things I want to do? | Consider taking a look at an aspect-oriented solution like PostSharp, which injects code after the fact based on custom attributes. It's the opposite of a precompiler but can give you the sort of functionality you're looking for (PropertyChanged notifications etc). 
| C# Preprocessor While the C# spec does include a pre-processor and basic directives (#define, #if, etc), the language does not have the same flexible pre-processor found in languages such as C/C++. I believe the lack of such a flexible pre-processor was a design decision made by Anders Hejlsberg (although, unfortunately, I can't find reference to this now). From experience, this is certainly a good decision, as there were some really terrible un-maintainable macros created back when I was doing a lot of C/C++. That said, there are a number of scenarios where I could find a slightly more flexible pre-processor to be useful. Code such as the following could be improved by some simple pre-processor directives: public string MyProperty { get { return _myProperty; } set { if (value!= _myProperty) { _myProperty = value; NotifyPropertyChanged("MyProperty"); // This line above could be improved by replacing the literal string with // a pre-processor directive like "#Property", which could be translated // to the string value "MyProperty" This new notify call would be as follows: // NotifyPropertyChanged(#Property); } } } Would it be a good idea to write a pre-processor to handle extremely simple cases like this? Steve McConnell wrote in Code Complete (p208): Write your own preprocessor If a language doesn't include a preprocessor, it's fairly easy to write one... I am torn. It was a design decision to leave such a flexible pre-processor out of C#. However, an author I highly respect mentions it may be ok in some circumstances. Should I build a C# pre-processor? Is there one available that does the simple things I want to do? | TITLE:
C# Preprocessor
QUESTION:
While the C# spec does include a pre-processor and basic directives (#define, #if, etc), the language does not have the same flexible pre-processor found in languages such as C/C++. I believe the lack of such a flexible pre-processor was a design decision made by Anders Hejlsberg (although, unfortunately, I can't find reference to this now). From experience, this is certainly a good decision, as there were some really terrible un-maintainable macros created back when I was doing a lot of C/C++. That said, there are a number of scenarios where I could find a slightly more flexible pre-processor to be useful. Code such as the following could be improved by some simple pre-processor directives: public string MyProperty { get { return _myProperty; } set { if (value != _myProperty) { _myProperty = value; NotifyPropertyChanged("MyProperty"); // This line above could be improved by replacing the literal string with // a pre-processor directive like "#Property", which could be translated // to the string value "MyProperty" This new notify call would be as follows: // NotifyPropertyChanged(#Property); } } } Would it be a good idea to write a pre-processor to handle extremely simple cases like this? Steve McConnell wrote in Code Complete (p208): Write your own preprocessor If a language doesn't include a preprocessor, it's fairly easy to write one... I am torn. It was a design decision to leave such a flexible pre-processor out of C#. However, an author I highly respect mentions it may be ok in some circumstances. Should I build a C# pre-processor? Is there one available that does the simple things I want to do?
ANSWER:
Consider taking a look at an aspect-oriented solution like PostSharp, which injects code after the fact based on custom attributes. It's the opposite of a precompiler but can give you the sort of functionality you're looking for (PropertyChanged notifications etc). | [
"c#",
"c-preprocessor"
] | 22 | 11 | 10,311 | 13 | 0 | 2008-08-31T23:18:41.917000 | 2008-08-31T23:37:27.910000 |
37,263 | 37,291 | Where does "Change Management" end and "Project Failure" begin? | I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data. Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot. So I ask you this: What is change management, and how does it apply to a project? Where does "change management" end, and "project failure" begin? @shog9: I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented. 
I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?" | I think, most of the time, we developers forget that all of this is, after all, about business. From that point of view a project is not a failure as long as the client is willing to pay for it. It all depends on the client: some clients have more patience and understand the risks of software development better; others just won't pay if there's a substantial delay. Anyway, about your question. Whenever you run a project there are risks involved; maybe you schedule the end of the project for a certain date but it takes six months longer than you expected. In that case you have to balance what you have already spent and what you have to gain against the risks you're taking. There's actually an entire science called "decision making" that studies this at the software level, so your boss is not wrong at all. Let's look at some questions. Is the client willing to wait for the project? Is he willing to absorb certain cost overruns? Even if he isn't, is it worth completing the project and absorbing the extra costs instead of throwing away all the work already done? Can the company absorb what's already lost? The real answer to your problem lies behind those questions. You can't establish a point and say: here, if the project isn't done by this time then it's a failure. As for your specific situation, who knows? Your boss probably has more information than you have, so your job is to tell him how the project is going, how long it will take, and how much it will cost (in man-hours if you wish). | Where does "Change Management" end and "Project Failure" begin? I got into a mini-argument with my boss recently regarding "project failure."
After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data. Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot. So I ask you this: What is change management, and how does it apply to a project? Where does "change management" end, and "project failure" begin? @shog9: I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented. I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. Is there even a difference? 
Does this minor level of schedule slippage constitute statistical "project failure?" | TITLE:
Where does "Change Management" end and "Project Failure" begin?
QUESTION:
I got into a mini-argument with my boss recently regarding "project failure." After three years, our project to migrate a codebase to a new platform (a project I was on for 1.5 years, but my team lead was on for only a few months) went live. He, along with senior management of both my company and the client (I'm one of those god-awful consultants you hear so much about. My engagement is an "Application Outsourcing") declared the project to be a success. I disagreed, stating that old presentations I had found showed that compared to the original schedule, the delay in deployment was best measured in months and could potentially be measured in years. I explained what I know of project failure, and the studies and statistics behind failure rates. He responded that that was all academia, and that no project he led had failed, thanks to the wonders of change/risk management - what seems to come down to explaining delays and re-evaluating the schedule based on new data. Maybe consulting like this differs from other projects, but it seems like this is just failure wrapped up in a prettier name to avoid the stigma of having failed to deliver on time, on budget, or with full functionality. The fact that he explained that my company gave away hours of work for free in order to finish the project within the maxed out budget says a lot. So I ask you this: What is change management, and how does it apply to a project? Where does "change management" end, and "project failure" begin? @shog9: I wasn't asking about a blame game with the consultants, especially since in this case I represent the consultants. I was looking for views on when a project should be considered "failed" regardless of if the needed functionality was finally implemented. I'm looking for the difference between "this is actually a little more complex than we thought, and it's going to be another week" which I'd expect is somewhat typical, and "project failure" - however you want to define failure. 
Is there even a difference? Does this minor level of schedule slippage constitute statistical "project failure?"
ANSWER:
I think, most of the time, we developers forget that all of this is, after all, about business. From that point of view a project is not a failure as long as the client is willing to pay for it. It all depends on the client: some clients have more patience and understand the risks of software development better; others just won't pay if there's a substantial delay. Anyway, about your question. Whenever you run a project there are risks involved; maybe you schedule the end of the project for a certain date but it takes six months longer than you expected. In that case you have to balance what you have already spent and what you have to gain against the risks you're taking. There's actually an entire science called "decision making" that studies this at the software level, so your boss is not wrong at all. Let's look at some questions. Is the client willing to wait for the project? Is he willing to absorb certain cost overruns? Even if he isn't, is it worth completing the project and absorbing the extra costs instead of throwing away all the work already done? Can the company absorb what's already lost? The real answer to your problem lies behind those questions. You can't establish a point and say: here, if the project isn't done by this time then it's a failure. As for your specific situation, who knows? Your boss probably has more information than you have, so your job is to tell him how the project is going, how long it will take, and how much it will cost (in man-hours if you wish). | [
"project-management",
"change-management"
] | 1 | 5 | 1,608 | 5 | 0 | 2008-08-31T23:39:48.590000 | 2008-09-01T00:07:26.760000 |
37,299 | 37,304 | Xcode equivalent of ' __asm int 3 / DebugBreak() / Halt? | What's the instruction to cause a hard-break in Xcode? For example under Visual Studio I could do '_asm int 3' or 'DebugBreak()'. Under some GCC implementations it's asm("break 0") or asm("trap"). I've tried various combos under Xcode without any luck. (inline assembler works fine so it's not a syntax issue). For reference this is for an assert macro. I don't want to use the definitions in assert.h both for portability, and because they appear to do an abort() in the version XCode provides. John - Super, cheers. For reference the int 3 syntax is the one required for Intel Macs and iPhone. Chris - Thanks for your comment but there are many reasons to avoid the standard assert() function for codebases ported to different platforms. If you've gone to the trouble of rolling your own assert it's usually because you have additional functionality (logging, stack unwinding, user-interaction) that you wish to retain. Your suggestion of attempting to replace the handler via an implementation of '__assert' or similar is not going to be portable. The standard 'assert' is usually a macro and while it may map to __assert on the Mac it doesn't on other platforms. | http://developer.apple.com/documentation/DeveloperTools/Conceptual/XcodeProjectManagement/090_Running_Programs/chapter_11_section_3.html asm {trap}; Halts a program running on PPC32 or PPC64.
__asm {int 3}; Halts a program running on IA-32. | Xcode equivalent of ' __asm int 3 / DebugBreak() / Halt? What's the instruction to cause a hard-break in Xcode? For example under Visual Studio I could do '_asm int 3' or 'DebugBreak()'. Under some GCC implementations it's asm("break 0") or asm("trap"). I've tried various combos under Xcode without any luck. (inline assembler works fine so it's not a syntax issue). For reference this is for an assert macro. I don't want to use the definitions in assert.h both for portability, and because they appear to do an abort() in the version XCode provides. John - Super, cheers. For reference the int 3 syntax is the one required for Intel Macs and iPhone. Chris - Thanks for your comment but there are many reasons to avoid the standard assert() function for codebases ported to different platforms. If you've gone to the trouble of rolling your own assert it's usually because you have additional functionality (logging, stack unwinding, user-interaction) that you wish to retain. Your suggestion of attempting to replace the handler via an implementation of '__assert' or similar is not going to be portable. The standard 'assert' is usually a macro and while it may map to __assert on the Mac it doesn't on other platforms. | TITLE:
Xcode equivalent of ' __asm int 3 / DebugBreak() / Halt?
QUESTION:
What's the instruction to cause a hard-break in Xcode? For example under Visual Studio I could do '_asm int 3' or 'DebugBreak()'. Under some GCC implementations it's asm("break 0") or asm("trap"). I've tried various combos under Xcode without any luck. (inline assembler works fine so it's not a syntax issue). For reference this is for an assert macro. I don't want to use the definitions in assert.h both for portability, and because they appear to do an abort() in the version XCode provides. John - Super, cheers. For reference the int 3 syntax is the one required for Intel Macs and iPhone. Chris - Thanks for your comment but there are many reasons to avoid the standard assert() function for codebases ported to different platforms. If you've gone to the trouble of rolling your own assert it's usually because you have additional functionality (logging, stack unwinding, user-interaction) that you wish to retain. Your suggestion of attempting to replace the handler via an implementation of '__assert' or similar is not going to be portable. The standard 'assert' is usually a macro and while it may map to __assert on the Mac it doesn't on other platforms.
ANSWER:
http://developer.apple.com/documentation/DeveloperTools/Conceptual/XcodeProjectManagement/090_Running_Programs/chapter_11_section_3.html asm {trap}; Halts a program running on PPC32 or PPC64.
__asm {int 3}; Halts a program running on IA-32. | [
"xcode",
"macos",
"debugbreak"
] | 23 | 24 | 18,330 | 7 | 0 | 2008-09-01T00:18:18.263000 | 2008-09-01T00:22:27.223000 |
37,306 | 37,371 | Font-dependent control positioning | I'd like to use Segoe UI 9 pt on Vista, and Tahoma 8 pt on Windows XP/etc. (Actually, I'd settle for Segoe UI on both, but my users probably don't have it installed.) But, these being quite different, they really screw up the layout of my forms. So... is there a good way to deal with this? An example: I have a Label, with some blank space in the middle, into which I place a NumericUpDown control. If I use Segoe UI, the NumericUpDown is about 5 pixels or so to the left of the blank space, compared to when I use Tahoma. This is a pain; I'm not sure what to do here. So most specifically, my question would be: how can I place controls in the middle of a blank space in my Label s (or CheckBox es, etc.)? Most generally: is there a good way to handle varying fonts in Windows Forms? Edit: I don't think people understood the question. I know how to vary my fonts based on OS. I just don't know how to deal with the layout problems that arise from doing so. Reply to ajryan, quick_dry: OK, you guys understand the question. I guess MeasureString might work, although I'd be interested in further exploration of better ways to solve this problem. The problem with splitting the control is most apparent with, say, a CheckBox. There, if the user clicks on the "second half" of the CheckBox (which would be a separate Label control, I guess), the CheckBox doesn't change state. | It's strange to need to lay out one control within another. You might be solving an upstream problem wrong. Are you able to split the label into two labels with the updown between and maybe rely on a Windows Forms TableLayout panel? If it's essential to try to position based on font sizes, you could use Graphics.MeasureString ("String before updown", myLabel.Font) If what you're after is font-dependent control positioning, you should probably retitle the question.
[edit] You can handle the click event of the "second half" part of the label and change the checkbox state on that event. The whole thing seems like a hack though. What is the problem being solved by this weird control layout? Why do you need an up-down in the middle of a label? | Font-dependent control positioning I'd like to use Segoe UI 9 pt on Vista, and Tahoma 8 pt on Windows XP/etc. (Actually, I'd settle for Segoe UI on both, but my users probably don't have it installed.) But, these being quite different, they really screw up the layout of my forms. So... is there a good way to deal with this? An example: I have a Label, with some blank space in the middle, into which I place a NumericUpDown control. If I use Segoe UI, the NumericUpDown is about 5 pixels or so to the left of the blank space, compared to when I use Tahoma. This is a pain; I'm not sure what to do here. So most specifically, my question would be: how can I place controls in the middle of a blank space in my Label s (or CheckBox es, etc.)? Most generally: is there a good way to handle varying fonts in Windows Forms? Edit: I don't think people understood the question. I know how to vary my fonts based on OS. I just don't know how to deal with the layout problems that arise from doing so. Reply to ajryan, quick_dry: OK, you guys understand the question. I guess MeasureString might work, although I'd be interested in further exploration of better ways to solve this problem. The problem with splitting the control is most apparent with, say, a CheckBox. There, if the user clicks on the "second half" of the CheckBox (which would be a separate Label control, I guess), the CheckBox doesn't change state. | TITLE:
Font-dependent control positioning
QUESTION:
I'd like to use Segoe UI 9 pt on Vista, and Tahoma 8 pt on Windows XP/etc. (Actually, I'd settle for Segoe UI on both, but my users probably don't have it installed.) But, these being quite different, they really screw up the layout of my forms. So... is there a good way to deal with this? An example: I have a Label, with some blank space in the middle, into which I place a NumericUpDown control. If I use Segoe UI, the NumericUpDown is about 5 pixels or so to the left of the blank space, compared to when I use Tahoma. This is a pain; I'm not sure what to do here. So most specifically, my question would be: how can I place controls in the middle of a blank space in my Label s (or CheckBox es, etc.)? Most generally: is there a good way to handle varying fonts in Windows Forms? Edit: I don't think people understood the question. I know how to vary my fonts based on OS. I just don't know how to deal with the layout problems that arise from doing so. Reply to ajryan, quick_dry: OK, you guys understand the question. I guess MeasureString might work, although I'd be interested in further exploration of better ways to solve this problem. The problem with splitting the control is most apparent with, say, a CheckBox. There, if the user clicks on the "second half" of the CheckBox (which would be a separate Label control, I guess), the CheckBox doesn't change state.
ANSWER:
It's strange to need to lay out one control within another. You might be solving an upstream problem wrong. Are you able to split the label into two labels with the updown between and maybe rely on a Windows Forms TableLayout panel? If it's essential to try to position based on font sizes, you could use Graphics.MeasureString ("String before updown", myLabel.Font) If what you're after is font-dependent control positioning, you should probably retitle the question. [edit] You can handle the click event of the "second half" part of the label and change the checkbox state on that event. The whole thing seems like a hack though. What is the problem being solved by this weird control layout? Why do you need an up-down in the middle of a label? | [
".net",
"winforms",
"user-interface",
"layout",
"fonts"
] | 2 | 1 | 385 | 4 | 0 | 2008-09-01T00:32:32.397000 | 2008-09-01T02:08:47.173000 |
37,310 | 37,316 | Checking the results of a Factory in a unit test | I have developed some classes with similar behavior, they all implement the same interface. I implemented a factory that creates the appropriate object and returns the interface. I am writing a unit test for the factory. All you get back is an interface to the object. What is the best way to test that the factory has worked correctly? I would like to know the answer in Java, but if there is a solution that crosses languages I would like to know it. Number 2. in the answer, would be done like the other answer? If so I will mark the other answer accepted as well and reword my question to address both a factory where an interface is returned and you have no clue what type of concrete class implemented the interface, and the case where you do know what concrete class was used. | Since I don't know what your factory method looks like, all I can advise right now is to check that the object is the correct concrete implementation you were looking for: IMyInterface fromFactory = factory.create(...); Assert.assertTrue(fromFactory instanceof MyInterfaceImpl1); You can also check whether the factory set up the concrete instances with valid instance variables. | Checking the results of a Factory in a unit test I have developed some classes with similar behavior, they all implement the same interface. I implemented a factory that creates the appropriate object and returns the interface. I am writing a unit test for the factory. All you get back is an interface to the object. What is the best way to test that the factory has worked correctly? I would like to know the answer in Java, but if there is a solution that crosses languages I would like to know it. Number 2. in the answer, would be done like the other answer?
If so I will mark the other answer accepted as well and reword my question to address both a factory where an interface is returned and you have no clue what type of concrete class implemented the interface, and the case where you do know what concrete class was used. | TITLE:
Checking the results of a Factory in a unit test
QUESTION:
I have developed some classes with similar behavior, they all implement the same interface. I implemented a factory that creates the appropriate object and returns the interface. I am writing a unit test for the factory. All you get back is an interface to the object. What is the best way to test that the factory has worked correctly? I would like to know the answer in Java, but if there is a solution that crosses languages I would like to know it. Number 2. in the answer, would be done like the other answer? If so I will mark the other answer accepted as well and reword my question to address both a factory where an interface is returned and you have no clue what type of concrete class implemented the interface, and the case where you do know what concrete class was used.
ANSWER:
Since I don't know what your factory method looks like, all I can advise right now is to check that the object is the correct concrete implementation you were looking for: IMyInterface fromFactory = factory.create(...); Assert.assertTrue(fromFactory instanceof MyInterfaceImpl1); You can also check whether the factory set up the concrete instances with valid instance variables. | [
"java",
"unit-testing",
"tdd"
] | 25 | 23 | 30,750 | 5 | 0 | 2008-09-01T00:40:03.830000 | 2008-09-01T00:47:22.907000 |
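The assertion in the answer above can be expanded into a self-contained sketch. The interface and implementation names below (Shape, Circle, Square) are invented for illustration; only the instanceof technique and the "check the instance variables were set up" idea come from the answer:

```java
public class Main {
    interface Shape {
        double area();
    }

    static class Circle implements Shape {
        final double radius;
        Circle(double radius) { this.radius = radius; }
        public double area() { return Math.PI * radius * radius; }
    }

    static class Square implements Shape {
        final double side;
        Square(double side) { this.side = side; }
        public double area() { return side * side; }
    }

    // The factory returns only the interface, as in the question.
    static Shape create(String kind, double size) {
        if ("circle".equals(kind)) {
            return new Circle(size);
        } else if ("square".equals(kind)) {
            return new Square(size);
        }
        throw new IllegalArgumentException("unknown kind: " + kind);
    }

    public static void main(String[] args) {
        Shape fromFactory = create("circle", 2.0);
        // 1. The answer's check: did we get the expected concrete type?
        if (!(fromFactory instanceof Circle)) {
            throw new AssertionError("expected a Circle");
        }
        // 2. Was the instance set up with valid instance variables?
        //    Verified here through behaviour visible on the interface.
        if (Math.abs(fromFactory.area() - Math.PI * 4.0) > 1e-9) {
            throw new AssertionError("Circle not initialised with radius 2");
        }
        System.out.println("factory checks passed");
    }
}
```

In a real JUnit test the two if/throw checks would be Assert.assertTrue calls, exactly as the answer shows.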
37,317 | 37,336 | How do you return the focus to the last used control after clicking a button in a winform app? | I'm working on a windows forms application (C#) where a user is entering data in a form. At any point while editing the data in the form the user can click one of the buttons on the form to perform certain actions. By default the focus goes to the clicked button so the user has to click back on to the control they want to edit in order to continue modifying the data on the form. What I need to be able to do is return the focus to the last edited control after the button click event has been processed. Here's a sample screenshot that illustrates what I'm talking about: The user can be entering data in textbox1, textbox2, textbox3, etc and click the button. I need the button to return the focus back to the control that most recently had the focus before the button was clicked. I'm wondering if anyone has a better way of implementing this functionality than what I've come up with. Here's what I'm doing right now: public partial class Form1: Form { Control _lastEnteredControl;
private void textBox_Enter(object sender, EventArgs e) { _lastEnteredControl = (Control)sender; }
private void button1_Click(object sender, EventArgs e) { MessageBox.Show("Do something here"); _lastEnteredControl.Focus(); }
} So basically what we have here is a class variable that points to the last entered control. Each textbox on the form is setup so the textBox_Enter method is fired when the control receives the focus. Then, when the button is clicked focus is returned to the control that had the focus before the button was clicked. Anybody have any more elegant solutions for this? | For a bit of 'simplicity' maybe try. public Form1() { InitializeComponent();
foreach (Control ctrl in Controls) { if (ctrl is TextBox) { ctrl.Enter += delegate(object sender, EventArgs e) { _lastEnteredControl = (Control)sender; }; } } } then you don't have to worry about decorating each textbox manually (or forgetting about one too). | How do you return the focus to the last used control after clicking a button in a winform app? I'm working on a windows forms application (C#) where a user is entering data in a form. At any point while editing the data in the form the user can click one of the buttons on the form to perform certain actions. By default the focus goes to the clicked button so the user has to click back on to the control they want to edit in order to continue modifying the data on the form. What I need to be able to do is return the focus to the last edited control after the button click event has been processed. Here's a sample screenshot that illustrates what I'm talking about: The user can be entering data in textbox1, textbox2, textbox3, etc and click the button. I need the button to return the focus back to the control that most recently had the focus before the button was clicked. I'm wondering if anyone has a better way of implementing this functionality than what I've come up with. Here's what I'm doing right now: public partial class Form1: Form { Control _lastEnteredControl;
private void textBox_Enter(object sender, EventArgs e) { _lastEnteredControl = (Control)sender; }
private void button1_Click(object sender, EventArgs e) { MessageBox.Show("Do something here"); _lastEnteredControl.Focus(); }
} So basically what we have here is a class variable that points to the last entered control. Each textbox on the form is setup so the textBox_Enter method is fired when the control receives the focus. Then, when the button is clicked focus is returned to the control that had the focus before the button was clicked. Anybody have any more elegant solutions for this? | TITLE:
How do you return the focus to the last used control after clicking a button in a winform app?
QUESTION:
I'm working on a windows forms application (C#) where a user is entering data in a form. At any point while editing the data in the form the user can click one of the buttons on the form to perform certain actions. By default the focus goes to the clicked button so the user has to click back on to the control they want to edit in order to continue modifying the data on the form. What I need to be able to do is return the focus to the last edited control after the button click event has been processed. Here's a sample screenshot that illustrates what I'm talking about: The user can be entering data in textbox1, textbox2, textbox3, etc and click the button. I need the button to return the focus back to the control that most recently had the focus before the button was clicked. I'm wondering if anyone has a better way of implementing this functionality than what I've come up with. Here's what I'm doing right now: public partial class Form1: Form { Control _lastEnteredControl;
private void textBox_Enter(object sender, EventArgs e) { _lastEnteredControl = (Control)sender; }
private void button1_Click(object sender, EventArgs e) { MessageBox.Show("Do something here"); _lastEnteredControl.Focus(); }
} So basically what we have here is a class variable that points to the last entered control. Each textbox on the form is setup so the textBox_Enter method is fired when the control receives the focus. Then, when the button is clicked focus is returned to the control that had the focus before the button was clicked. Anybody have any more elegant solutions for this?
ANSWER:
For a bit of 'simplicity' maybe try. public Form1() { InitializeComponent();
foreach (Control ctrl in Controls) { if (ctrl is TextBox) { ctrl.Enter += delegate(object sender, EventArgs e) { _lastEnteredControl = (Control)sender; }; } } } then you don't have to worry about decorating each textbox manually (or forgetting about one too). | [
"c#",
".net",
"winforms"
] | 7 | 16 | 12,579 | 7 | 0 | 2008-09-01T00:48:03.637000 | 2008-09-01T01:11:00.587000 |
37,324 | 37,332 | What is the syntax for an inner join in LINQ to SQL? | I'm writing a LINQ to SQL statement, and I'm after the standard syntax for a normal inner join with an ON clause in C#. How do you represent the following in LINQ to SQL: select DealerContact.* from Dealer inner join DealerContact on Dealer.DealerID = DealerContact.DealerID | It goes something like: from t1 in db.Table1 join t2 in db.Table2 on t1.field equals t2.field select new { t1.field2, t2.field3} It would be nice to have sensible names and fields for your tables for a better example.:) Update I think for your query this might be more appropriate: var dealercontacts = from contact in DealerContact join dealer in Dealer on contact.DealerId equals dealer.ID select contact; Since you are looking for the contacts, not the dealers. | What is the syntax for an inner join in LINQ to SQL? I'm writing a LINQ to SQL statement, and I'm after the standard syntax for a normal inner join with an ON clause in C#. How do you represent the following in LINQ to SQL: select DealerContact.* from Dealer inner join DealerContact on Dealer.DealerID = DealerContact.DealerID | TITLE:
What is the syntax for an inner join in LINQ to SQL?
QUESTION:
I'm writing a LINQ to SQL statement, and I'm after the standard syntax for a normal inner join with an ON clause in C#. How do you represent the following in LINQ to SQL: select DealerContact.* from Dealer inner join DealerContact on Dealer.DealerID = DealerContact.DealerID
ANSWER:
It goes something like: from t1 in db.Table1 join t2 in db.Table2 on t1.field equals t2.field select new { t1.field2, t2.field3} It would be nice to have sensible names and fields for your tables for a better example.:) Update I think for your query this might be more appropriate: var dealercontacts = from contact in DealerContact join dealer in Dealer on contact.DealerId equals dealer.ID select contact; Since you are looking for the contacts, not the dealers. | [
"c#",
".net",
"sql",
"linq-to-sql",
"join"
] | 482 | 616 | 687,894 | 18 | 0 | 2008-09-01T01:00:24.467000 | 2008-09-01T01:08:58.110000 |
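The accepted answer maps the SQL inner join onto LINQ query syntax. For readers following the Java examples elsewhere in this set, the same dealer-contact matching logic can be sketched over in-memory collections (class and field names below are invented to mirror the question's tables; this is plain Java, not LINQ):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    static class Dealer {
        final int dealerId;
        Dealer(int dealerId) { this.dealerId = dealerId; }
    }

    static class DealerContact {
        final int contactId;
        final int dealerId; // foreign key to Dealer
        DealerContact(int contactId, int dealerId) {
            this.contactId = contactId;
            this.dealerId = dealerId;
        }
    }

    // Inner join: keep only contacts whose dealerId matches some dealer,
    // mirroring "join dealer in Dealer on contact.DealerId equals dealer.ID
    // select contact" from the answer.
    static List<DealerContact> joinContacts(List<Dealer> dealers,
                                            List<DealerContact> contacts) {
        List<DealerContact> result = new ArrayList<DealerContact>();
        for (DealerContact c : contacts) {
            for (Dealer d : dealers) {
                if (c.dealerId == d.dealerId) {
                    result.add(c); // select the contact, not the dealer
                    break;
                }
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Dealer> dealers = new ArrayList<Dealer>();
        dealers.add(new Dealer(1));
        List<DealerContact> contacts = new ArrayList<DealerContact>();
        contacts.add(new DealerContact(10, 1)); // matches dealer 1
        contacts.add(new DealerContact(11, 2)); // no matching dealer
        System.out.println("joined contacts: " + joinContacts(dealers, contacts).size());
    }
}
```

The point the answer makes — that the select clause picks the contact side of the join — corresponds here to adding `c`, not `d`, to the result list.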
37,335 | 37,349 | How to deal with "java.lang.OutOfMemoryError: Java heap space" error? | I am writing a client-side Swing application (graphical font designer) on Java 5. Recently, I have been running into java.lang.OutOfMemoryError: Java heap space errors because I am not being conservative on memory usage. The user can open an unlimited number of files, and the program keeps the opened objects in memory. After some quick research I found Ergonomics in the 5.0 Java Virtual Machine and others saying that on Windows machines the JVM defaults the max heap size to 64MB. Given this situation, how should I deal with this constraint? I could increase the max heap size using a command line option to java, but that would require figuring out the available RAM and writing some launching program or script. Besides, increasing to some finite max does not ultimately get rid of the issue. I could rewrite some of my code to persist objects to the file system frequently (using a database is the same thing) to free up the memory. It could work, but it's probably a lot of work too. If you could point me to details of the above ideas or some alternatives like automatic virtual memory or extending heap size dynamically, that would be great. | Ultimately you always have a finite max of heap to use no matter what platform you are running on. In Windows 32 bit this is around 2GB (not specifically heap but total amount of memory per process). It just happens that Java chooses to make the default smaller (presumably so that the programmer can't create programs that have runaway memory allocation without running into this problem and having to examine exactly what they are doing). Given this, there are several approaches you could take to either determine what amount of memory you need or to reduce the amount of memory you are using.
One common mistake with garbage collected languages such as Java or C# is to keep around references to objects that you no longer are using, or allocating many objects when you could reuse them instead. As long as objects have a reference to them they will continue to use heap space as the garbage collector will not delete them. In this case you can use a Java memory profiler to determine what methods in your program are allocating large number of objects and then determine if there is a way to make sure they are no longer referenced, or to not allocate them in the first place. One option which I have used in the past is "JMP" http://www.khelekore.org/jmp/. If you determine that you are allocating these objects for a reason and you need to keep around references (depending on what you are doing this might be the case), you will just need to increase the max heap size when you start the program. However, once you do the memory profiling and understand how your objects are getting allocated you should have a better idea about how much memory you need. In general if you can't guarantee that your program will run in some finite amount of memory (perhaps depending on input size) you will always run into this problem. Only after exhausting all of this will you need to look into caching objects out to disk etc. At this point you should have a very good reason to say "I need Xgb of memory" for something and you can't work around it by improving your algorithms or memory allocation patterns. Generally this will only usually be the case for algorithms operating on large datasets (like a database or some scientific analysis program) and then techniques like caching and memory mapped IO become useful. | How to deal with "java.lang.OutOfMemoryError: Java heap space" error? I am writing a client-side Swing application (graphical font designer) on Java 5. 
Recently, I have been running into the java.lang.OutOfMemoryError: Java heap space error because I am not being conservative on memory usage. The user can open an unlimited number of files, and the program keeps the opened objects in memory. After some quick research I found Ergonomics in the 5.0 Java Virtual Machine and others saying that on Windows machines the JVM defaults the max heap size to 64MB. Given this situation, how should I deal with this constraint? I could increase the max heap size using a command line option to java, but that would require figuring out the available RAM and writing some launching program or script. Besides, increasing to some finite max does not ultimately get rid of the issue. I could rewrite some of my code to persist objects to the file system frequently (using a database is the same thing) to free up memory. It could work, but it's probably a lot of work too. If you could point me to details of the above ideas or some alternatives like automatic virtual memory or extending the heap size dynamically, that would be great. | TITLE:
How to deal with "java.lang.OutOfMemoryError: Java heap space" error?
QUESTION:
I am writing a client-side Swing application (a graphical font designer) on Java 5. Recently, I have been running into the java.lang.OutOfMemoryError: Java heap space error because I am not being conservative on memory usage. The user can open an unlimited number of files, and the program keeps the opened objects in memory. After some quick research I found Ergonomics in the 5.0 Java Virtual Machine and others saying that on Windows machines the JVM defaults the max heap size to 64MB. Given this situation, how should I deal with this constraint? I could increase the max heap size using a command line option to java, but that would require figuring out the available RAM and writing some launching program or script. Besides, increasing to some finite max does not ultimately get rid of the issue. I could rewrite some of my code to persist objects to the file system frequently (using a database is the same thing) to free up memory. It could work, but it's probably a lot of work too. If you could point me to details of the above ideas or some alternatives like automatic virtual memory or extending the heap size dynamically, that would be great.
ANSWER:
Ultimately you always have a finite maximum of heap to use no matter what platform you are running on. In 32-bit Windows this is around 2GB (not specifically heap but the total amount of memory per process). It just happens that Java chooses to make the default smaller (presumably so that the programmer can't create programs that have runaway memory allocation without running into this problem and having to examine exactly what they are doing). So given this, there are several approaches you could take to either determine what amount of memory you need or to reduce the amount of memory you are using. One common mistake with garbage collected languages such as Java or C# is to keep around references to objects that you are no longer using, or allocating many objects when you could reuse them instead. As long as objects have a reference to them they will continue to use heap space as the garbage collector will not delete them. In this case you can use a Java memory profiler to determine what methods in your program are allocating large numbers of objects and then determine if there is a way to make sure they are no longer referenced, or to not allocate them in the first place. One option which I have used in the past is "JMP" http://www.khelekore.org/jmp/. If you determine that you are allocating these objects for a reason and you need to keep around references (depending on what you are doing this might be the case), you will just need to increase the max heap size when you start the program. However, once you do the memory profiling and understand how your objects are getting allocated you should have a better idea about how much memory you need. In general if you can't guarantee that your program will run in some finite amount of memory (perhaps depending on input size) you will always run into this problem. Only after exhausting all of this will you need to look into caching objects out to disk etc.
At this point you should have a very good reason to say "I need Xgb of memory" for something and you can't work around it by improving your algorithms or memory allocation patterns. Generally this will only be the case for algorithms operating on large datasets (like a database or some scientific analysis program) and then techniques like caching and memory-mapped IO become useful. | [
"java",
"jvm",
"out-of-memory",
"heap-memory"
] | 574 | 310 | 2,250,811 | 32 | 0 | 2008-09-01T01:10:03.727000 | 2008-09-01T01:29:54.457000 |
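The leak pattern and the two remedies the answer describes (drop stale references, or raise the cap with a larger max heap) can be sketched in a few lines of Java. `GlyphCache` and its method names are illustrative inventions, not code from the original post; `-Xmx` in the comment is the real JVM flag for raising the max heap.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the leak pattern the answer describes: objects
// stay on the heap as long as something still references them.
// (To raise the cap instead, launch with e.g. `java -Xmx512m FontDesigner`.)
class GlyphCache {
    private final List<byte[]> glyphs = new ArrayList<>();

    void open(byte[] glyphData) {
        // Strong reference: the GC cannot reclaim glyphData while it is cached.
        glyphs.add(glyphData);
    }

    int size() {
        return glyphs.size();
    }

    // The cheap fix: drop references you no longer need so the garbage
    // collector is free to reclaim the heap space they occupy.
    void closeAll() {
        glyphs.clear();
    }
}
```

A memory profiler (the answer mentions JMP) is what tells you which allocation sites matter; clearing references as `closeAll()` does only helps if nothing else still points at the cached objects.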
37,343 | 2,189,015 | Wordpress MediaWiki Cookie Integration | I have my Wordpress install and MediaWiki sharing the same login information. Unfortunately, users need to log into both separately, but at least they use the same credentials. What I would like to do is cause a successful login on the Wordpress blog to also cause a login for MediaWiki (ideally both directions). There are two ways this could happen: (1) logging in causes a login on the other application (writes the cookie and creates the session in the DB); (2) accessing an application checks for the cookie of the other and then automatically logs in. Since the usernames are shared it seems like the latter should be pretty easy to do. Any tips would be great. | They both support OpenID now: MediaWiki's OpenID extension, and WordPress's OpenID plugin. There are probably other options for using OpenID, but I think that is the best solution available. | Wordpress MediaWiki Cookie Integration I have my Wordpress install and MediaWiki sharing the same login information. Unfortunately, users need to log into both separately, but at least they use the same credentials. What I would like to do is cause a successful login on the Wordpress blog to also cause a login for MediaWiki (ideally both directions). There are two ways this could happen: (1) logging in causes a login on the other application (writes the cookie and creates the session in the DB); (2) accessing an application checks for the cookie of the other and then automatically logs in. Since the usernames are shared it seems like the latter should be pretty easy to do. Any tips would be great. | TITLE:
Wordpress MediaWiki Cookie Integration
QUESTION:
I have my Wordpress install and MediaWiki sharing the same login information. Unfortunately, users need to log into both separately, but at least they use the same credentials. What I would like to do is cause a successful login on the Wordpress blog to also cause a login for MediaWiki (ideally both directions). There are two ways this could happen: (1) logging in causes a login on the other application (writes the cookie and creates the session in the DB); (2) accessing an application checks for the cookie of the other and then automatically logs in. Since the usernames are shared it seems like the latter should be pretty easy to do. Any tips would be great.
ANSWER:
They both support OpenID now: MediaWiki's OpenID extension, and WordPress's OpenID plugin. There are probably other options for using OpenID, but I think that is the best solution available. | [
"php",
"wordpress",
"lamp",
"mediawiki"
] | 3 | 1 | 1,837 | 4 | 0 | 2008-09-01T01:23:51.400000 | 2010-02-03T01:07:39.590000 |
37,346 | 37,348 | Why can't a forward declaration be used for a std::vector? | If I create a class like so: // B.h #ifndef _B_H_ #define _B_H_
class B { private: int x; int y; };
#endif // _B_H_ and use it like this: // main.cpp #include <iostream> #include <vector> class B; // Forward declaration.
class A { public: A() { std::cout << v.size() << std::endl; }
private: std::vector<B> v; };
int main() { A a; } The compiler fails when compiling main.cpp. Now the solution I know is to #include "B.h", but I'm curious as to why it fails. Neither g++'s nor cl's error messages were very enlightening in this matter. | The compiler needs to know how big "B" is before it can generate the appropriate layout information. If instead, you said std::vector<B*>, then the compiler wouldn't need to know how big B is because it knows how big a pointer is.
class B { private: int x; int y; };
#endif // _B_H_ and use it like this: // main.cpp #include <iostream> #include <vector> class B; // Forward declaration.
class A { public: A() { std::cout << v.size() << std::endl; }
private: std::vector<B> v; };
int main() { A a; } The compiler fails when compiling main.cpp. Now the solution I know is to #include "B.h", but I'm curious as to why it fails. Neither g++'s nor cl's error messages were very enlightening in this matter. | TITLE:
Why can't a forward declaration be used for a std::vector?
QUESTION:
If I create a class like so: // B.h #ifndef _B_H_ #define _B_H_
class B { private: int x; int y; };
#endif // _B_H_ and use it like this: // main.cpp #include <iostream> #include <vector> class B; // Forward declaration.
class A { public: A() { std::cout << v.size() << std::endl; }
private: std::vector<B> v; };
int main() { A a; } The compiler fails when compiling main.cpp. Now the solution I know is to #include "B.h", but I'm curious as to why it fails. Neither g++'s nor cl's error messages were very enlightening in this matter.
ANSWER:
The compiler needs to know how big "B" is before it can generate the appropriate layout information. If instead, you said std::vector<B*>, then the compiler wouldn't need to know how big B is because it knows how big a pointer is. | [
"c++",
"stl"
] | 34 | 31 | 23,429 | 8 | 0 | 2008-09-01T01:27:25.400000 | 2008-09-01T01:29:43.617000 |
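The pointer workaround in the answer can be made concrete. This is a sketch rather than the asker's exact program: holding `std::vector<B*>` means the member only needs `sizeof(B*)`, which is known even while `B` is an incomplete type.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

class B; // forward declaration: B is an incomplete type at this point

// A stores pointers, so the compiler only needs sizeof(B*), a size it
// always knows, rather than the full layout of B.
class A {
public:
    std::size_t count() const { return v.size(); }
private:
    std::vector<B*> v; // std::vector<B> here would demand B's full definition
};

// The full definition can arrive later (in the real project, from B.h),
// before any B objects are actually created.
class B {
private:
    int x;
    int y;
};
```

Any code that actually creates or dereferences a `B` still needs the full definition (i.e. `#include "B.h"`); the forward declaration only postpones that requirement.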